Protecting customers against cryptomining threats with VM Threat Detection in Security Command Center

As organizations move to the cloud, VM-based architectures continue to make up a significant portion of compute-centric workloads. To help ensure strong protection for these deployments, we are thrilled to announce a public preview of our newest layer of threat detection in Security Command Center (SCC): Virtual Machine Threat Detection (VMTD). VMTD is a first-to-market detection capability from a major cloud provider that provides agentless memory scanning to help detect threats like cryptomining malware inside your virtual machines running in Google Cloud.

The economy of scale enabled by the cloud can help fundamentally change the way security is executed for any business operating in today’s threat landscape. As more companies adopt cloud technologies, security solutions built into cloud platforms help address emerging threats for more and more organizations. For example, in the latest Google Cybersecurity Action Team Threat Horizons Report, we saw that 86% of compromised cloud instances were used to perform cryptocurrency mining. VMTD is one of the ways we protect our Google Cloud Platform customers against growing attacks like coin mining, data exfiltration, and ransomware.

Our unique approach with agentless VM threat detection

Traditional endpoint security relies on deploying software agents inside a guest virtual machine to gather signals and telemetry to inform runtime threat detection. But as is the case in many other areas of infrastructure security, cloud technology offers the ability to rethink existing models. For Compute Engine, we wanted to see if we could collect signals to aid in threat detection without requiring our customers to run additional software. Not running an agent inside their instances means less performance impact, a lower operational burden for agent deployment and management, and less attack surface exposed to potential adversaries. What we learned is that we could instrument the hypervisor, the software that runs underneath and orchestrates our customers’ virtual machines, to include nearly universal and hard-to-tamper-with threat detection.

[Figure: Illustrative data path for Virtual Machine Threat Detection]

Getting Started with Virtual Machine Threat Detection (VMTD)

We’re excited about the kinds of detection that are possible with VMTD. During our public preview, VMTD detects cryptomining attacks. Over the next months, as we move VMTD towards general availability, you can expect to see a steady release of new detective capabilities and integrations with other parts of Google Cloud.

To get started with VMTD, open the Settings page in Security Command Center and click “MANAGE SETTINGS” under Virtual Machine Threat Detection. You can then select a scope for VMTD. To confirm that VMTD is working for your environment, you can download and execute this test binary that simulates cryptomining activity.

Safeguarding customer trust

We know safeguarding users’ trust in Google Cloud is as important as securing their workloads. We are taking several steps to ensure that the ways in which VMTD inspects workloads for potential threats preserve trust. First, we are introducing VMTD’s public preview as an opt-in service for our Security Command Center Premium customers. Additionally, not only does Confidential Computing provide encryption for memory as it moves out of a CPU to RAM, but we also never process memory from Confidential nodes in VMTD.

Comprehensive threat detection with SCC Premium

Virtual Machine Threat Detection is fully integrated and available through Security Command Center Premium.
VMTD complements the existing threat detection capabilities enabled by the Event Threat Detection and Container Threat Detection built-in services in SCC Premium. Together, these three layers of advanced defense provide holistic protection for workloads running in Google Cloud.

[Figure: Multiple layers of threat detection in Security Command Center]

In addition to threat detection, the premium version of Security Command Center is a comprehensive security and risk management platform for Google Cloud. It provides built-in services that enable you to gain visibility into your cloud assets, discover misconfigurations and vulnerabilities in your resources, and help maintain compliance based on industry standards and benchmarks.

To enable a Security Command Center Premium subscription, contact your Google Cloud Platform sales team. You can learn more about all these new capabilities in SCC in our product documentation.
Quelle: Google Cloud Platform

Measure and maximize the value of Data Science and AI teams

Investing in Artificial Intelligence (AI) can bring a competitive advantage to your organization. If you’re in charge of an AI or Data Science team, you’ll want to measure and maximize the value that you’re providing. Here is some advice from our years of experience in the field.

A checklist to embark on a project

As you embark on projects, we’ve found it’s good to have the following areas covered:

Have a customer. It’s important to have a customer for your work, and that they agree with what you’re trying to achieve. Be sure to know what value you’re delivering to them.
Have a business case. This will rely on estimates and assumptions, and may take no more than a few minutes’ work. You should revise this, but always know what justifies your team’s effort, and what you (and your customer) expect to get in return.
Know what process you will change or create. You’ll want to put your work in production, so you have to be clear about what business operations are changing or being created around your work, and who needs to be involved to make it happen.
Have a measurement plan. You’ll want to show that ongoing work is impacting some relevant business indicator.
Measure and show incremental value. The goal of these measurements is to establish what has changed because of your project that would otherwise not have changed. Be sure to account for other factors, like seasonality or other business changes, that may affect your measurements (see the sketch after this list).
Use all the above to get your organization’s support for your team and your work.
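To make the “measure and show incremental value” item concrete, here is a minimal, hypothetical sketch of one common approach: compare the change in a business metric where your model is used against a comparable control where it is not, so that seasonality and other shared trends cancel out. All numbers below are invented for illustration.

```python
# Difference-in-differences sketch with invented weekly revenue figures:
# a "treated" region uses the new ML system, a comparable "control"
# region does not, and both experience the same seasonal trend.
treated_before, treated_after = 100.0, 130.0  # hypothetical averages
control_before, control_after = 100.0, 110.0  # hypothetical averages

# The control's change approximates what would have happened anyway;
# whatever change remains in the treated region is the incremental value.
incremental = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated incremental lift: {incremental:.1f} per week")  # -> 20.0
```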
What measures to use?

As you start the work, what measures and indicators can you use to show that your team’s work is useful for your organization?

How many decisions you make. A major function of ML is to automate and optimize decisions: which product to recommend, which route to follow, etc. Use logs to track how many decisions your systems are making.

Changes to revenue or costs. Better and quicker decisions often lead to increased revenue or savings. If possible, measure it directly; otherwise, estimate it (for example, fuel costs saved from less distance traveled, or increased purchases from personalized offers). As an example, the Illinois Department of Employment Security is using Contact Center AI to rapidly deploy virtual agents to help more than 1 million citizens file unemployment claims. To measure success, the team tracked two outcomes: (1) the number of web inquiries and voice calls they were able to handle, and (2) the overall cost of the call center after the implementation. Post implementation, they were able to observe more than 140,000 phone and web inquiries per day and over 40,000 after-hours calls per night. They also anticipate an estimated annual cost savings of $100M based on an initial analysis of IDES’s virtual agent data (see more in the link to the case study).

Implementation costs. The other side of increased revenue or savings is to put your achievements in the context of how much they cost. Show the technology costs that your team incurs and, ideally, how you can deliver more value, more efficiently.

How much time was saved. If the team built a routing system, it saved travel time; if it built an email classifier, it saved reading time; and so on. Quantify how many hours were given back to the organization thanks to the efficiency of your system. In the medical field, quicker diagnostics matter. Johns Hopkins University’s Brain Injury Outcomes (BIOS) Division has focused on studying brain hemorrhage, aiming to improve medical outcomes. The team identified time to insights as a key metric in measuring business success. They experimented with a range of cloud computing solutions like Dataflow, Cloud Healthcare API, Compute Engine, and AI Platform for distributed training to accelerate iterations. As a result, in their recent work they were able to accelerate insights from scans from approximately 500 patients from 2,500 hours to 90 minutes.

How many applications your team supports. Some of your organization’s operations don’t use ML (say, reconciling financial ledgers), but others do. Know how many parts of your organization benefit from the optimization and automation your team builds.

User experience. You may be able to measure your customers’ experience: fewer complaints, better reviews, reduced latency, more interactions, etc. This is valid both for internal and external stakeholders. At Google, we measure usage and regularly ask for feedback on any internal system or process. One of our customers, the City of Memphis, is using Vision AI and ML to tackle a common but very challenging issue: identifying and addressing potholes. The implementation team identified the percentage increase of potholes identified as one of the key metrics, along with accuracy and cost savings. The solution captures video footage from its public vehicles and leverages Google Cloud capabilities like Compute Engine, AI Platform, and BigQuery to automate the review of videos. The project increased pothole detection by 75% with over 90% accuracy. By measuring and demonstrating these outcomes, the team proved the viability of a cost-effective, cloud-based machine learning model and is looking into new applications of AI and ML that will further improve city services and help it build a better future for its 652,000 residents.

Acknowledgements

Filipe and Payam would like to thank our colleague and co-author Mona Mona (AI/ML Customer Engineer, Healthcare and Life Sciences), who contributed equally to the writing.
Quelle: Google Cloud Platform

Getting Started with Google Cloud Logging Python v3.0.0

We’re excited to announce the release of a major update to the Google Cloud Python logging library. v3.0.0 makes it even easier for Python developers to send and read logs from Google Cloud, providing real-time insights into what is happening in your application. If you’re a Python developer working with Google Cloud, now is a great time to try out Cloud Logging!

If you’re unfamiliar with the `google-cloud-logging` library, getting started is simple. First, install the library with pip (`pip install google-cloud-logging`). Then set up the client library to work with Python’s built-in `logging` library; once configured, all your standard Python log statements will start sending data to Google Cloud. We recommend using the standard Python `logging` interface for log creation. However, if you need access to other Google Cloud Logging features (reading logs, managing log sinks, etc.), you can use `google.cloud.logging` directly. Both patterns appear in the sketch following the feature overview below.

Here are some of the main features of the new release:

Support More Cloud Environments

Previous versions of `google-cloud-logging` supported only App Engine and Kubernetes Engine. Users reported that the library would occasionally drop logs on serverless environments like Cloud Run and Cloud Functions. This was because the library would send logs in batches over the network; when a serverless environment spun down, unsent batches could be lost.

v3.0.0 fixes this issue by making use of GCP’s built-in structured JSON logging functionality on supported environments (GKE, Cloud Run, or Cloud Functions). If the library detects it is running on an environment that supports structured logging, it will automatically make use of the new StructuredLogHandler, which writes logs as JSON strings printed to standard out. Google Cloud’s built-in agents will then parse the logs and deliver them to Cloud Logging, even if the code that produced the logs has spun down. Structured logging is more reliable on serverless environments, and it allows us to support all major GCP compute environments in v3.0.0. Still, if you would prefer to send logs over the network as before, you can manually set up the library with a CloudLoggingHandler instance, as shown in the sketch below.

Metadata Autodetection

When you troubleshoot your application, it can be useful to have as much information about the environment as possible captured in your application logs. `google-cloud-logging` helps with this by detecting and attaching metadata about your environment to each log message. The following fields are currently supported:

`resource`: the Google Cloud resource the log originated from, for example Cloud Functions, GKE, or Cloud Run
`httpRequest`: information about an HTTP request in the log’s context (Flask and Django are currently supported)
`sourceLocation`: file, line, and function name
`trace`, `spanId`, and `traceSampled`: Cloud Trace metadata (supports the X-Cloud-Trace-Context and W3C traceparent formats)

The library will make an attempt to populate this data whenever possible, but any of these fields can also be explicitly set by developers using the library.

JSON Support in Standard Library Integration

Google Cloud Logging supports both string and JSON payloads for LogEntries, but up until now, the Python standard library integration could only send logs with string payloads. In `google-cloud-logging` v3, you can log JSON data in two ways:

1. Log a JSON-parsable string.
2. Pass a `json_fields` dictionary using Python logging’s `extra` argument.
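A minimal sketch tying these pieces together, assuming Application Default Credentials are configured for a Google Cloud project; the log payloads are illustrative:

```python
import logging

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler

# Create a client and hook Cloud Logging into Python's standard `logging`
# module; the library picks the best handler for the current environment
# (e.g. StructuredLogHandler on GKE, Cloud Run, or Cloud Functions).
client = google.cloud.logging.Client()
client.setup_logging()

# Standard library log statements now flow to Cloud Logging.
logging.warning("Hello from the standard logging module!")

# JSON payloads, option 1: log a JSON-parsable string.
logging.info('{"event": "checkout", "items": 3}')

# JSON payloads, option 2: pass a `json_fields` dictionary via `extra`.
logging.info("checkout complete", extra={"json_fields": {"items": 3}})

# To send logs over the network as in previous versions, attach a
# CloudLoggingHandler explicitly instead of relying on autodetection.
logging.getLogger().addHandler(CloudLoggingHandler(client))

# The client can also be used directly, e.g. to read back recent entries.
for entry in client.list_entries(max_results=5):
    print(entry.payload)
```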
Next Steps

With version v3.0.0, the Google Cloud Logging Python library now supports more compute environments, detects more helpful metadata, and provides more thorough support for JSON logs. Along with these major features, there are also user-experience improvements like a new log method and more permissive argument parsing. If you want to learn more about the latest release, these changes and others are described in more detail in the v3.0.0 Migration Guide. If you’re new to the library, check out the google-cloud-logging user guide. If you want to learn more about observability on GCP in general, you can spin up test environments using Cloud Ops Sandbox.

Finally, if you have any feedback about the latest release, have new feature requests, or would like to make any contributions, feel free to open issues on our GitHub repo. The Google Cloud Logging libraries are open source software, and we welcome new contributors!
Quelle: Google Cloud Platform

Genomic analysis on Galaxy using Azure CycleCloud

Cloud computing and digital transformation have been powerful enablers for genomics. Genomics is expected to be an exabase-scale big data domain by 2025, posing data acquisition and storage challenges on par with other major generators of big data. Embracing digital transformation offers a practically limitless ability to meet the demands of genomic science in both research and medical institutions. The emergence of cloud-based computing platforms such as Microsoft Azure has paved the path for online, scalable, cost-effective, secure, and shareable big data persistence and analysis, with a growing number of researchers and laboratories hosting their genomic big data (publicly and privately) on cloud-based services.

At Microsoft, we recognize the challenges faced by the genomics community and are striving to build an ecosystem (backed by OSS and Microsoft products and services) that can facilitate genomics work for all. We’ve focused our efforts on three core areas: research and discovery in genomic data, building out a platform to enable rapid automation and analysis at scale, and optimized and secure pipelines at a clinical level. One of the core Azure services that enables us to leverage a high-performance compute environment for genomic analysis is Azure CycleCloud.

Galaxy and Azure CycleCloud

Galaxy is a scientific workflow, data integration, and data analysis persistence and publishing platform that aims to make computational biology accessible to research scientists who do not have computer programming or systems administration experience. Although it was initially developed for genomic research, it is largely domain agnostic and is now used as a general bioinformatics workflow management system. The Galaxy system enables accessible, reproducible, and transparent computational research.

Accessible: Programming experience is not required to easily upload data, run complex tools and workflows, and visualize results.
Reproducible: Galaxy captures information so that you don't have to; any user can repeat and understand a complete computational analysis, from tool parameters to the dependency tree.
Transparent: Users share and publish their histories, workflows, and visualizations via the web.
Community-centered: Inclusive and diverse users (developers, educators, researchers, clinicians, and more) are empowered to share their findings.

Azure CycleCloud is an enterprise-friendly tool for orchestrating and managing high-performance computing (HPC) environments on Azure. With Azure CycleCloud, users can provision infrastructure for HPC systems, deploy familiar HPC schedulers, and automatically scale the infrastructure to run jobs efficiently at any scale. Through Azure CycleCloud, users can create different types of file systems and mount them to the compute cluster nodes to support HPC workloads. With dynamic scaling of clusters, the business can get the resources it needs at the right time and the right price. Azure CycleCloud’s automated configuration enables IT to focus on providing service to business users.

Deploying Galaxy on Azure using Azure CycleCloud

Galaxy is used by most academic institutions that conduct genomic research. Most institutions that already use Galaxy want to stick to it because it provides multiple tools for genomic analysis as a SaaS platform. Users can also deploy custom tools onto Galaxy.

Galaxy users generally use the SaaS version of Galaxy as part of UseGalaxy resources. UseGalaxy servers implement a common core set of tools and reference genomes and are open to anyone to use. All information on its usage is available on the Galaxy Platform Directory.

However, some research institutions intend to deploy Galaxy in-house as an on-premises or cloud-based solution. The remainder of this article describes how to deploy and run Galaxy on Microsoft Azure using Azure CycleCloud and a grid engine cluster. The solution was built during the Microsoft hackathon (October 12 to 14, 2021) with code implementation assistance from Azure HPC Specialist Jerry Morey. The architectural pattern described below can help organizations deploy Galaxy in an Azure environment using CycleCloud and a scheduler of their choice.

As a prerequisite, genomic data should be available in a storage location, either in the cloud or on-premises. Azure CycleCloud should be deployed using the steps described in the “Install CycleCloud using the Marketplace image” documentation.

The cluster deployment method that is fully supported by Galaxy on the cloud is called the unified method: the copy of Galaxy on the application server is the same copy as the one on the cluster nodes. The most common way to achieve this, and the most common deployment method for Galaxy overall, is to put Galaxy on a network file system (NFS) that is accessible by both the application server and the cluster nodes.

An admin user can SSH into the Azure CycleCloud virtual machine or the Galaxy server virtual machine to perform admin-related activities. It is recommended to close the SSH port in production. Once the Galaxy server is running on a node, end users (researchers) can load the portal on their devices to perform analysis tasks, which include loading data, installing and uploading tools, and more.

Access to functionalities (such as installing and deleting tools versus using tools for analysis) is controlled by parameters defined in galaxy.yml, which resides on the Galaxy server. When a user invokes a functionality, the resulting tasks are converted to jobs that are submitted to the grid engine cluster for execution.

Deployment scripts are available to ease deployment. These scripts can be used to deploy the latest version of Galaxy on Azure CycleCloud.
Following are the steps to use the deployment scripts:

Clone this project (the project is in active development, so cloning the latest release is recommended):

git clone -b release_21.09 https://github.com/themorey/galaxy-gridengine.git

Upload the project to a CycleCloud (CC) locker:

cd galaxy-gridengine

Modify files if needed, then list the available lockers to find the upload target:

cyclecloud locker list

Example output: Azure cycle Locker (az://mystorageaccount/cyclecloud)

Upload the project to the locker by name:

cyclecloud project upload "Azure cycle Locker"

Import cluster template to CC.

cyclecloud import_cluster <cluster-name> -c <galaxy-folder-name> -f templates/gridengine-galaxy2.txt

NOTE: Substitute <cluster-name> with a name for your cluster—all lower case, no spaces.

Navigate to CC Portal to configure and start the cluster.

Wait for 30 to 45 minutes for the Galaxy server to be installed.

To check whether the server installed correctly, SSH into the Galaxy server node and inspect galaxy.log in the /shared/home/<galaxy-folder-name> directory.

This deployment was adopted by a leading United States-based academic medical center. The Microsoft Industry Solutions team helped deploy this solution on the customer’s Azure tenant. Researchers at the center ran tests to assess the parity of this solution with their existing Galaxy deployment in their on-premises HPC environment. They were able to successfully test the deployed Galaxy server, which used Azure CycleCloud for job orchestration. Several common bioinformatics tools, such as bedtools, fastqc, bcftools, picard, and snpeff, were installed and tested. Galaxy supports local users by default; as part of this engagement, a solution to integrate the center’s corporate Active Directory was also tested and deployed. The solution was found to be on par with their on-premises deployment, and with more and larger execute nodes, jobs completed in less time.

For more information, support, or guidance related to the content in this blog, we recommend you reach out to your Microsoft sales representative.

Learn more

Learn more about Microsoft Genomics solutions.

Microsoft Genomics service on Azure.
Azure CycleCloud—HPC Cluster and Workload Management.
Galaxy on Azure deployment scripts.

Quelle: Azure

Learn how open source plays a key role in Microsoft’s cloud strategy with Inside Azure for IT

With more than 1 million views of our fireside chats, we’re inspired by the tremendous opportunity to connect those within the community—customers, partners, and technology enthusiasts everywhere. Whether you engage in the live ask-the-experts sessions, watch the deep-dive skilling videos, or join us for fireside chats—the Azure team and I are delighted and humbled by your participation and enthusiasm for Inside Azure for IT. 

In our third episode, we talk about some of our Linux and open source-related partnerships, product innovation, and initiatives, plus how that helps customers and communities. To those who think of Azure as a “mostly Windows” cloud, it may be surprising to learn that more than 60 percent of Azure customer compute cores are Linux-based, and that Linux virtual machine (VM) cores are growing faster than those based on Windows.

My own career has mirrored Microsoft’s evolution of how we think about, contribute to, and consume Linux and open source. For example, I’ve gone from being solely focused on Windows and Windows Server, to learning how to contribute upstream to make Linux run great on Hyper-V, to now, where open source and Linux are core to the development of Azure.

In this episode, you’ll get a behind-the-scenes peek at Microsoft’s approach, and how we've brought together customers, partners, and communities to innovate and collaborate across open-source technologies.

Innovate with Open Source and Linux on Azure

The episode is divided into three separate segments so you can watch them individually on-demand at your convenience.

Part one: Microsoft and Red Hat on simplifying cloud adoption with joint innovation on Azure with Linux

In this segment, you’ll hear from Red Hat about partnering with Microsoft and how it helps customers with their cloud modernization and migration journey. Mike Evans, VP, Technical Business Development, and Xavier Lecauchois, Sr. Director Ansible Cloud Services, from Red Hat join me to chat about the strategy and the latest innovation, the Red Hat Ansible Automation Platform on Azure. Watch: Simplifying cloud adoption with joint innovation on Azure with Linux.

Part two: Brendan Burns and Krishna Ganugapati on safeguarding workloads with Mariner—Microsoft’s internal Linux distro

Delivering reliable Azure services to customers faster is the driving force behind the creation of Mariner, Microsoft’s own Linux distro. Join me, as I chat with Krishna Ganugapati, VP of Software Engineering, Edge OS, and Brendan Burns, CVP, Azure Cloud Native on why the Azure team created Mariner and how it’s benefiting customers and Microsoft engineers. Watch: Safeguarding workloads with Mariner—Microsoft Azure’s own internal Linux distro.

Part three: Microsoft's Sarah Novotny on working together with open source communities to drive innovation

Open source connects developers around the world, providing ways to collaborate and innovate collectively. Join Sarah Novotny, Director of Open-source Strategy for Azure as we chat about running open-source technologies in the cloud, how the relationship between IT and developers enables open-source innovation, Microsoft’s leadership and contributions to help secure open-source software, and her unique background in the open-source community. Watch: Developing in the open and working together to drive innovation.

Stay current with Inside Azure for IT

Beyond this latest episode, there are many more technical and cloud-skilling resources available through Inside Azure for IT. Learn more about empowering an adaptive IT environment with best practices and resources designed to enable productivity, digital transformation, and innovation. Take advantage of technical training videos and learn about implementing these scenarios.

Register for Azure Open Source Day to watch live on February 15, 2022, 9:00 AM to 10:30 AM Pacific Time or on-demand later.
Get started by learning about Linux on Azure.
See our schedule for Ask the product experts live.
Watch part one: Microsoft and Red Hat on simplifying cloud adoption with joint innovation on Azure with Linux.
Watch part two: Brendan Burns and Krishna Ganugapati on safeguarding workloads with Mariner—Microsoft Azure’s own internal Linux distro.
Watch part three: Microsoft’s Sarah Novotny on developing in the open and working together to drive innovation.

Quelle: Azure

AWS Secrets Manager now supports rotation windows

AWS Secrets Manager now supports the ability to schedule secret rotations within specific time windows. With this feature, you can restrict secret rotations to certain hours on certain days. Previously, Secrets Manager supported automatic rotation of secrets within the last 24 hours of the specified rotation interval. With today’s launch, you no longer have to choose between the convenience of managed rotation and the operational safety of maintenance windows.
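As a rough illustration of the new capability, here is a hedged boto3 sketch that configures rotation to run only during a weekly window; the secret name, Lambda ARN, and schedule below are hypothetical placeholders:

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# Hypothetical secret and rotation function; substitute your own values.
secretsmanager.rotate_secret(
    SecretId="prod/db-credentials",
    RotationLambdaARN="arn:aws:lambda:eu-central-1:123456789012:function:rotate-db",
    RotationRules={
        # Start rotation every Saturday at 02:00 UTC ...
        "ScheduleExpression": "cron(0 2 ? * SAT *)",
        # ... and allow a three-hour window for it to complete.
        "Duration": "3h",
    },
)
```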
Quelle: aws.amazon.com