Azure Files share snapshot management by Azure Backup is now generally available

Microsoft Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) protocol. For users of Azure Files, share snapshots offer a read-only version of a file share from a previous point in time. Share snapshots are also incremental in nature, making their storage usage efficient. Although customers can use these share snapshots to go back in time, managing snapshots with scripts or automation is a labor-intensive process. Microsoft Azure Backup offers a simple and reliable way to back up and protect Azure Files using share snapshots.

Today, we are announcing the general availability of snapshot management for Azure Files by Azure Backup. Apart from being available natively in the cloud, Azure Backup offers significant benefits when protecting file shares with a Recovery Services vault.

Key benefits

Simple configuration: You can use the +Backup option in a Recovery Services vault to discover all unprotected file shares in storage accounts, select multiple file shares if necessary, choose a policy, and configure backup for all file shares at once. Once configured, you can manage your backups directly from the Azure Files portal.
Zero-infrastructure solution: Because Azure Backup is an Azure-native solution, you don’t need to run any additional compute. This saves you from setting up infrastructure to schedule snapshots, or from maintaining or modifying them periodically.
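As a sketch of the configuration flow described above, the same steps are exposed through the Azure CLI’s `az backup` command group. All resource names below (vault, resource group, policy, storage account, and share) are placeholders:

```shell
# Create a Recovery Services vault (names are placeholders).
az backup vault create \
  --resource-group MyResourceGroup \
  --name MyVault \
  --location westus2

# Enable snapshot-based protection for a file share
# using an existing backup policy.
az backup protection enable-for-azurefileshare \
  --vault-name MyVault \
  --resource-group MyResourceGroup \
  --policy-name DailyPolicy \
  --storage-account mystorageacct \
  --azure-file-share myshare
```

The portal’s +Backup experience performs the equivalent discovery and configuration for many shares at once.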

Azure File Sync users do not need to back up their data from on-premises servers, because all of the data is available in the cloud. You can enable cloud tiering on your on-premises servers or machines and continue to use Azure Backup to protect the cloud data.

Flexible backup policy: Azure Backup provides you with the ability to create and modify policies of choice to define the schedule for snapshots.

You may already be used to creating daily snapshots as part of the Azure Backup policy. As part of the general availability release, we have also introduced the ability to create weekly, monthly, and yearly snapshots. You can also retain these snapshots for up to 10 years. The backup policy automatically prunes expired snapshots, helping you stay within the 200-snapshot limit per file share.

Comprehensive restore capabilities: Azure Backup offers a variety of options to restore your file share data. You can choose to restore the entire file share or individual files and folders. Restores can also be done to the original location or to alternate file shares in the same or different storage accounts. Azure Backup also preserves and restores all access control lists (ACLs) of files and folders.
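A minimal sketch of both restore modes with the Azure CLI follows; all resource names are placeholders, and the recovery point name comes from the `az backup recoverypoint list` output:

```shell
# List recovery points for the protected file share.
az backup recoverypoint list \
  --vault-name MyVault \
  --resource-group MyResourceGroup \
  --container-name mystorageacct \
  --item-name myshare \
  --backup-management-type AzureStorage \
  --output table

# Restore the full share to its original location.
az backup restore restore-azurefileshare \
  --vault-name MyVault \
  --resource-group MyResourceGroup \
  --container-name mystorageacct \
  --item-name myshare \
  --rp-name <recovery-point-name> \
  --restore-mode OriginalLocation \
  --resolve-conflict Overwrite

# Restore only selected files or folders from the same recovery point.
az backup restore restore-azurefiles \
  --vault-name MyVault \
  --resource-group MyResourceGroup \
  --container-name mystorageacct \
  --item-name myshare \
  --rp-name <recovery-point-name> \
  --restore-mode OriginalLocation \
  --resolve-conflict Overwrite \
  --source-file-type File \
  --source-file-path "dir1/file.txt"
```

Alternate-location restores swap `--restore-mode` to `AlternateLocation` with target storage account and share parameters.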

Apart from the options above, Azure Backup ensures that the restores are sync-aware. We coordinate with the Azure File Sync service to trigger a resync back to on-premises servers once we complete restores to the associated file shares in the cloud.

Protection against accidental deletion: Accidental deletion can happen at multiple levels.

Individual files and folders: The lowest level is a file or folder. This is also the most common scenario. Using scheduled snapshots and being able to restore individual files and folders addresses this issue.
Snapshot: Azure Backup is the initiator of the snapshots it takes through the backup policy. However, administrators can still delete specific snapshots in their file shares. These deletions are not recommended, as they invalidate the corresponding restore points. We’re actively working on a mechanism that will allow Azure Backup to prevent accidental snapshot deletions.
File share: You could delete your file share and end up wiping out all snapshots taken for the file share. Azure Backup is currently working on protecting against accidental deletion of your file shares and the solution should be available in the first few regions soon.
Storage account: Deleting a storage account wipes out all file shares inside it, along with their snapshots. Customer conversations indicate that, although this is a less common scenario, there needs to be protection against it. Hence, Azure Backup takes a delete lock on a storage account as soon as the first file share in the storage account is configured for backup.

On-demand snapshots: Apart from the backup policy option to schedule snapshots, you can also choose to create up to four on-demand backups every day. Taking multiple on-demand backups in a day reduces the recovery point objective (RPO) for customers. Although Azure Backup purges these snapshots based on the retention set during backup, you need to ensure that you do not exceed the 200 snapshots per file share limit while using this capability.
Alerts and reports: Integration with Azure Backup alerts enables you to configure email notifications for critical failures. Once the general availability release is available across all regions, you will start seeing backup related data in Azure Backup reports.
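An on-demand backup like the one described above can also be triggered from the CLI. The resource names and retention date below are placeholders:

```shell
# Take an on-demand snapshot backup of a protected file share,
# retained until the given date (dd-mm-yyyy).
az backup protection backup-now \
  --vault-name MyVault \
  --resource-group MyResourceGroup \
  --container-name mystorageacct \
  --item-name myshare \
  --backup-management-type AzureStorage \
  --retain-until 01-01-2021
```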

What’s next?

Based on our conversations with customers, we’re working to deliver functionality above and beyond snapshot management using Azure Backup, including the ability to copy file share data to the Recovery Services vault. We welcome all customer feedback that helps us align our work with the features you value. You can help us by filling out this survey.

Getting started

Start protecting your file shares by using the Recovery Services vaults in your region. For the list of supported regions, please refer to the support matrix. The backup goal option in the vault overview will let you choose Azure File shares to back up from storage accounts in your region. You can refer to our documentation for more details.

Pricing

For pricing details, please follow the Azure Backup pricing page for updates, as we are currently rolling out regional prices. Snapshot management using Azure Backup will be free of charge until July 1, 2020, so all users can trial the feature without added cost through June 2020. You can write to AskAzureBackupTeam@microsoft.com with any feedback and queries.

Related links and additional content

Support matrix for Azure Files snapshot management using Azure Backup.
If you are new to Azure Backup, start configuring the backup on the Azure portal.
Want more details? Check out Azure Backup documentation.
Need help? Reach out to the Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

Announcing the general availability of Windows Server containers and private clusters for Azure Kubernetes Service

Today’s application environments are often heterogeneous, composed of both Windows and Linux applications. Using Kubernetes to host containers on both Windows Server and Linux not only saves costs but also reduces operational complexity. Many Azure customers have demonstrated this with their usage of Windows Server containers. Their success in our preview makes me thrilled to announce the general availability of Windows Server container support on Azure Kubernetes Service (AKS).

AKS simplifies the deployment and management of Kubernetes clusters and provides a highly reliable and available environment for your applications. It integrates seamlessly with world-class development tools such as GitHub and Visual Studio Code and is built on years of Microsoft security expertise focusing on data protection and compliance. With the general availability of Windows Server containers, you can now lift and shift your Windows applications to run on a managed Kubernetes service in Azure and get the full benefits of AKS for your production workloads using consistent tools and processes. For example, you can create, upgrade, and scale Windows node pools in AKS through the standard tools (portal/CLI), and Azure will help manage the health of the cluster automatically. By running both Windows and Linux applications side by side in a single AKS cluster, you can modernize your operations processes for a broader set of applications while increasing the density (and thus lowering the costs) of your application environment.
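As a hedged sketch, creating a cluster and adding a Windows Server node pool through the standard CLI tools might look like the following. The names and credentials are placeholders, and Windows node pools require the Azure CNI network plugin:

```shell
# Create an AKS cluster with Azure CNI networking and
# Windows administrator credentials (placeholder values).
az aks create \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --network-plugin azure \
  --windows-admin-username azureuser \
  --windows-admin-password 'Replace-With-A-Strong-Passw0rd!' \
  --generate-ssh-keys

# Add a Windows Server node pool alongside the default Linux pool.
az aks nodepool add \
  --resource-group MyResourceGroup \
  --cluster-name MyAKSCluster \
  --name npwin \
  --os-type Windows \
  --node-count 2
```

Windows and Linux workloads are then scheduled to the right pool using standard Kubernetes node selectors.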

Today, we're also announcing the general availability of both private clusters and managed identities support in AKS. This further empowers our customers to achieve hardened security and meet compliance requirements with reduced effort. Private clusters ensure that customers can create and use managed Kubernetes clusters that exist only inside their private network and are never exposed to the internet. This network isolation provides security assurances that are especially important for regulated industries like finance and health care. In addition, Azure managed identities for AKS allow you to interact securely with other Azure services, including Azure Monitor for Containers, Azure Policy, and more. With the introduction of managed identity, you don’t have to manage your own service principals or rotate credentials often.
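A minimal sketch of creating a cluster with both features enabled (resource names are placeholders):

```shell
# Create an AKS cluster whose API server is reachable only from
# the cluster's virtual network, using a managed identity instead
# of a service principal.
az aks create \
  --resource-group MyResourceGroup \
  --name MyPrivateCluster \
  --enable-private-cluster \
  --enable-managed-identity \
  --generate-ssh-keys
```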

Applying best practices makes it easier to optimize your enterprise Kubernetes environment and applications. We continue to develop more integrations between AKS and Azure Advisor, bringing industry best practices right into the AKS experience. Whether you are a new or seasoned Kubernetes user, you receive proactive and actionable recommendations to secure resources, maintain cluster hygiene, and increase operational efficiency. These recommendations are based on our learnings from thousands of customer engagements. Likewise, we have integrated developer advice into the VS Code extension for Kubernetes and security advice into Azure Security Center. We are also focused on providing learning, frameworks, and tools to ensure developers, operators, and architects in every enterprise can successfully use Kubernetes on Azure. Putting all this together gives you more confidence in your use of Kubernetes even as you are learning the system.

We’re going through unprecedented challenges in the world today. I hope that these updates make it easier for you to secure and optimize your Kubernetes environment today, allowing you to focus your energy on your business critical projects. You can learn more about Kubernetes on Azure here.
Source: Azure

Azure + Red Hat: Expanding hybrid management and data services for easier innovation anywhere

For the past few years, Microsoft and Red Hat have co-developed hybrid solutions enabling customers to innovate both on-premises and in the cloud. In May 2019, we announced the general availability of Azure Red Hat OpenShift, allowing enterprises to run critical container-based production workloads via an OpenShift managed service on Azure, jointly operated by Microsoft and Red Hat.

Microsoft and Red Hat are now working together to further extend Azure services to hybrid environments across on-premises and multi-cloud with upcoming support of Azure Arc for OpenShift and Red Hat Enterprise Linux (RHEL), so our customers will be able to more effectively develop, deploy, and manage cloud-native applications anywhere. With Azure Arc, customers will have a more consistent management and operational experience across their Microsoft hybrid cloud including Red Hat OpenShift and RHEL.

What’s new for Red Hat Customers with Azure Arc

As part of the Azure Arc preview, we’re expanding Azure Arc’s Linux and Kubernetes management capabilities to add support specifically for Red Hat customers, enabling you to:

Organize, secure, and govern your Red Hat ecosystem across environments

Many of our customers have workloads sprawling across clouds, datacenters, and edge locations. Azure Arc enables customers to centrally manage, secure, and control RHEL servers and OpenShift clusters from Azure at scale. Wherever the workloads are running, customers can view inventory and search from the Azure Portal. They can apply policies and manage compliance for connected servers and clusters from Azure Policy; either one or many clusters at a time. Customers can enhance their security posture through built-in Azure security policies and RBAC for the managed infrastructure that works the same way wherever they run. As Azure Arc progresses towards general availability, more policies will be enabled, such as reporting on expiring certificates, password complexity, managing SSH keys, and enforcing disk encryption.

In addition, SQL Server 2019 on RHEL 8 is now quicker to deploy via new images available in the Azure Marketplace, and we’re expanding Azure Arc to manage SQL Server on RHEL, providing integrated database and server governance via unified Azure policies.

Finally, Azure Arc makes it easy to use Azure Management services such as Azure Monitor and Azure Security Center when dealing with workloads and infrastructure running outside of Azure.

Manage OpenShift clusters and applications at scale

Manage container-based applications running in the Azure Red Hat OpenShift service on Azure, as well as OpenShift clusters running on IaaS, virtual machines (VMs), or on-premises bare metal. Applications defined in GitHub repositories can be automatically deployed via Azure Policy and Azure Arc to any repo-linked OpenShift cluster, and policies can be used to keep them up to date. New application versions can be distributed globally to all Azure Arc-managed OpenShift clusters using GitHub pull requests, with full DevOps CI/CD pipeline integrations for logging and quality testing. Additionally, if an application is modified in an unauthorized way, the change is reverted, so your OpenShift environment remains stable and compliant.
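As a sketch of this flow, a cluster is first connected to Azure Arc and a GitOps source-control configuration is then attached to it. The extension names reflect the Azure CLI at the time of writing, and the cluster name, resource group, and repository URL are placeholders:

```shell
# Install the Azure Arc CLI extensions.
az extension add --name connectedk8s
az extension add --name k8s-configuration

# Connect an existing Kubernetes/OpenShift cluster to Azure Arc
# (uses the current kubeconfig context).
az connectedk8s connect \
  --name MyOpenShiftCluster \
  --resource-group MyResourceGroup

# Attach a GitOps configuration so manifests in the repo are
# continuously applied to the connected cluster.
az k8s-configuration create \
  --name my-app-config \
  --cluster-name MyOpenShiftCluster \
  --resource-group MyResourceGroup \
  --cluster-type connectedClusters \
  --repository-url https://github.com/example/my-app-manifests \
  --operator-instance-name my-operator \
  --operator-namespace my-app \
  --scope namespace
```

Azure Policy can then require that every connected cluster carries such a configuration, which is what enables the at-scale, revert-on-drift behavior described above.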

Run Azure Data Services on OpenShift and anywhere else

Azure Arc enables you to run Azure data services on OpenShift on-premises, at the edge, and in multi-cloud environments, whether on a self-deployed cluster or a managed container service like Azure Red Hat OpenShift. With Azure Arc support for Azure SQL Managed Instance on OpenShift, you’ll know your container-based data infrastructure is always current and up to date. Microsoft SQL Big Data Cluster (BDC) support for OpenShift provides a new container-based deployment pattern for big data storage and analytics, allowing you to elastically scale your data with your dynamic OpenShift-based application anywhere it runs.

Managing multiple configurations for an on-premises OpenShift deployment from Azure Arc.

Azure SQL Managed Instances within Azure Arc.

If you’d like to learn more about how Azure is working with Red Hat to make innovation easier for customers in hybrid cloud environments, join us for a fireside chat between Scott Guthrie, EVP of Cloud and AI at Microsoft, and Paul Cormier, president and CEO of Red Hat, including a demo of Azure Arc for Red Hat today at the Red Hat Summit 2020 Virtual Experience.

Private hybrid clusters and OpenShift 4 added to Azure Red Hat OpenShift

Rounding out our hybrid offerings for Red Hat customers, today we’re announcing the general availability of Azure Red Hat OpenShift on OpenShift 4.

This release brings key innovations from Red Hat OpenShift 4 to Azure Red Hat OpenShift. Additionally, we're enabling features to support hybrid and enterprise customer scenarios, such as:

Private API and ingress endpoints: Customers can now choose between public and private cluster management (API) and ingress endpoints. With private endpoints and Azure ExpressRoute support, we’re enabling private hybrid clusters, allowing our mutual customers to extend their on-premises solutions to Azure.
 
Industry compliance certifications: To help customers meet their compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is now PCI DSS, HITRUST, and FedRAMP certified. Azure maintains the largest compliance portfolio in the industry both in terms of total number of offerings, as well as number of customer-facing services in assessment scope.
 
Multi-Availability Zone clusters: Cluster components are now deployed across three Azure Availability Zones in supported Azure regions, maintaining high availability for the most demanding mission-critical applications and data. Azure Red Hat OpenShift has a Service Level Agreement (SLA) of 99.9 percent.

Cluster-admin support: We’ve enabled the cluster-admin role on Azure Red Hat OpenShift clusters, enabling full cluster customization capabilities, such as running privileged containers and installing Custom Resource Definitions (CRDs).
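The private-endpoint scenario above can be sketched with the `az aro` command group. The virtual network and subnet names are placeholders, and a VNet with separate master and worker subnets must already exist:

```shell
# Create an Azure Red Hat OpenShift 4 cluster with both the API
# server and the ingress controller on private endpoints.
az aro create \
  --resource-group MyResourceGroup \
  --name MyAROCluster \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --apiserver-visibility Private \
  --ingress-visibility Private
```

Setting both visibility flags to `Public` instead yields a conventional internet-reachable cluster.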

Getting started with Azure Arc

To learn more about Azure Arc for RHEL environments, get started with the preview today. For anyone interested in Azure Arc enabled OpenShift, we will be going into public preview soon. Contact us here for more info.
Source: Azure

Accelerating Cybersecurity Maturity Model Certification (CMMC) compliance on Azure

As we deliver on our ongoing commitment to serving as the most secure and compliant cloud, we’re constantly adapting to the evolving landscape of cybersecurity to help our customers achieve compliance more rapidly. Our aim is to continue to provide our customers and partners with world-class cybersecurity technology, controls, and best practices, making compliance faster and easier with native capabilities in Azure and Azure Government, as well as Microsoft 365 and Dynamics 365.

In architecting solutions with customers, a foundational component of increasing importance is building more secure and reliable supply chains. For many customers, this is an area where new tools, automation, and process maturity can improve an organization’s security posture while reducing manual compliance work.

In preparing for the new Cybersecurity Maturity Model Certification (CMMC) from the Department of Defense (DoD), many of our customers and partners have asked for more information on how to prepare for audits slated to start as early as the summer of 2020. 

Designed to improve the security posture of the Defense Industrial Base (DIB), CMMC requires an evaluation of the contractor’s technical security controls, process maturity, documentation, policies, and the processes that are in place and continuously monitored. Importantly, CMMC also requires validation by an independent, certified third-party assessment organization (C3PAO) audit, in contrast to the historical precedent of self-attestation.

Expanding compliance coverage to meet CMMC requirements

Common questions we’ve heard from customers include: “when will Azure achieve CMMC accreditation?” and “what Microsoft cloud environments will be certified?”

While the details are still being finalized by the DoD and CMMC Accreditation Body (CMMC AB), we expect some degree of reciprocity with FedRAMP, NIST 800-53, and NIST CSF, as many of the CMMC security controls map directly to controls under these existing cybersecurity frameworks. Ultimately, Microsoft is confident in its cybersecurity posture and is closely following guidance from DoD and the CMMC AB to demonstrate compliance to the C3PAOs. We will move quickly to be evaluated once C3PAOs are accredited and approved to begin conducting assessments. 

Microsoft’s goal is to continue to strengthen cybersecurity across the DIB through world-class cybersecurity technology, controls, and best practices, and to put its cloud customers in a position to inherit Microsoft’s security controls and eventual CMMC certifications. Our intent is to achieve certification for Microsoft cloud services utilized by DIB customers.

Note: While commercial environments are intended to be certified as they are for FedRAMP High, CMMC by itself should not be the deciding factor on choosing which environment is most appropriate. Most DIB companies are best aligned with Azure Government and Microsoft 365 GCC High for data handling of Controlled Unclassified Information (CUI).

New CMMC acceleration program for a faster path to certification

The Microsoft CMMC acceleration program is an end-to-end program designed to help customers and partners that serve as suppliers to the DoD improve their cybersecurity maturity, develop the cyber critical thinking skills essential to CMMC, and benefit from the compliance capabilities native to Azure and Azure Government. 

The program will help you close compliance gaps and mitigate risks, evolve your cybersecurity toward a more agile and resilient defense posture, and help facilitate CMMC certification. Within this program, you’ll have access to a portfolio of learning resources, architectural references, and automated implementation tools custom-tailored to the certification journey.

For more information on participating in the program, email cmmc@microsoft.com. 

Learn more about the CMMC framework

Read our in-depth article on CMMC on the Microsoft Tech Communities blog, and stay tuned to the Azure Government Dev Blog for ongoing guidance on implementing Azure to achieve compliance with CMMC requirements.

Disclaimer: Customers are wholly responsible for ensuring their own compliance with all applicable laws and regulations. Information provided in this post does not constitute legal advice, and customers should consult their legal advisors for any questions regarding legal or regulatory compliance.
Source: Azure

Microsoft Services is now a Kubernetes Certified Service Provider

Modern applications are increasingly built using containers, which are microservices packaged with their dependencies and configurations. For this reason, many companies are either containerizing their existing applications or creating new complex applications that are composed of multiple containers.

As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complex. To manage this complexity, Kubernetes, open-source software for deploying and managing containers at scale, provides an API that controls how and where those containers will run.

Kubernetes Certified Service Provider

Microsoft Services is now a Kubernetes Certified Service Provider (KCSP). The KCSP program is a pre-qualified tier of vetted service providers who have deep experience helping enterprises successfully adopt Kubernetes. The KCSP partners offer Kubernetes support, consulting, professional services, and training for organizations embarking on their Kubernetes journey.

We have trained hundreds of consultants on Kubernetes, developed a comprehensive service offering around Kubernetes, and successfully delivered Kubernetes engagements to many customers in all industries, all over the world.

Using our global reach and ecosystem, we empower organizations to put innovation into practice to deliver strategic business outcomes, maximize the value of cloud technology, and drive success through continual support.

Microsoft Services is your partner in leveraging container capabilities and frameworks such as Kubernetes, helping your organization adopt modern technologies to increase speed and agility while maintaining control and good governance.

The Azure Workloads for Containers offering

We recognize a need to help you address your secure infrastructure challenges and requirements. We envision container infrastructure as more than just the container orchestration layer; it also includes networking, storage, secrets, and Infrastructure as Code (IaC).

Microsoft Services has a full Kubernetes offering, called Azure Workloads for Containers. This offering is composed of several workstreams that focus on the activities and outcomes that are most relevant to our customers. These workstreams provide full flexibility to our customers as each one of them can be selected independently and customized to meet the specific needs of a given project.

Below are the details of these workstreams.

Kubernetes foundation

Design and plan Azure Kubernetes Service (AKS) cluster and shared services.
Implement AKS cluster and shared services.
Deploy application on AKS.
Test application.
Roll out to production.
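The deployment and test steps in this workstream can be sketched with standard tooling; the cluster, resource group, and manifest path below are placeholders:

```shell
# Fetch credentials for the AKS cluster into the local kubeconfig.
az aks get-credentials \
  --resource-group MyResourceGroup \
  --name MyAKSCluster

# Deploy the application manifests and verify the rollout.
kubectl apply -f ./manifests/app.yaml
kubectl rollout status deployment/my-app
```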

Containers migration

Assess, design, and plan migration.
Migrate the containers-based application(s).
Test the migrated application(s).
Roll out to production.

Kubernetes security hardening

Refactor your security controls for AKS.
Secure your CI/CD pipeline (DevSecOps).
Harden your AKS environment to meet your compliance obligations.
Assist with third-party security product integration.

Kubernetes threat modeling

Build a threat model based on the AKS cluster and the apps running on it.
Identify threats and mitigations.
Produce clear actions to mitigate the threats.

Application containerization

Create container image(s) for one or multiple applications.
Test the application(s) running as containers.
Deploy the application to an AKS cluster in production.

The offering is aligned to Microsoft’s Cloud Adoption Framework for Azure and focuses primarily on the Adopt: Innovate principle of your cloud journey for Kubernetes.

Learn more

To learn more, have a look at the Azure Workloads for Containers datasheet.
Source: Azure

DCsv2-series VM now generally available from Azure confidential computing

Security and privacy are critically important when storing and processing sensitive information in the cloud, from payment transactions, to financial records, personal health data, and more. With the general availability of DCsv2-series VMs, we are ushering in the start of a new level of data protection in Azure.

With more workloads moving to the cloud and more customers putting their trust in Microsoft, the Azure confidential computing team continues to innovate to provide offerings that keep and build upon that trust. Starting with our world-class security researchers, and working closely with industry partners, we are developing new ways to protect data while it’s in use with Azure confidential computing. DCsv2-series VMs can protect the confidentiality and integrity of your data even while it’s processed.

What is confidential computing?

There are ways to encrypt your data at rest and while in transit, but confidential computing protects the confidentiality and integrity of your data while it is in use. Azure is the first public cloud to offer virtualization infrastructure for confidential computing that uses hardware-based trusted execution environments (TEEs). Even cloud administrators and datacenter operators with physical access to the servers cannot access TEE-protected data.

By combining the scalability of the cloud and ability to encrypt data while in use, new scenarios are possible now in Azure, like confidential multi-party computation where different organizations combine their datasets for compute-intensive analysis without being able to access each other’s data. Examples include banks combining transaction data to detect fraud and money laundering, and hospitals combining patient records for analysis to improve disease diagnosis and prescription allocation.

Data protection powered by Intel hardware

Our DCsv2 confidential computing virtual machines run on servers that implement Intel Software Guard Extensions (Intel SGX). Because Intel SGX hardware protects your data and keeps it encrypted while the CPU is processing it, even the operating system and hypervisor cannot access it, nor can anyone with physical access to the server.

Microsoft and Intel are committed to providing best-in-class cloud data protection through our deep ongoing partnership:

“Customers are demanding the capability to reduce the attack surface and help protect sensitive data in the cloud by encrypting data in use. Our collaboration with Microsoft brings enterprise-ready confidential computing solutions to market and enables customers to take greater advantage of the benefits of cloud and multi-party compute paradigms using Intel® SGX technology.” —Anil Rao, VP Data Center Security and Systems Architecture, Intel

Partners in the Azure Marketplace

Microsoft works directly with platform partners to provide seamless solutions, development, and deployment experiences running on top of our Azure confidential computing infrastructure. Software offerings can be discovered through the Azure Marketplace, including:

Fortanix—Offers a cloud-native data security solution including key management, HSM, tokenization, and secrets management built on Azure confidential computing.
Anjuna—Delivers secure Azure instances using end-to-end CPU hardware-level encryption without changing your application or operations.
Anqlave—A valued partner in Singapore, offers enterprise ready confidential computing solutions.

“Anqlave’s proprietary, institutional-grade modern key management and data encryption solution addresses the most critical security issues we face today. With Anqlave Data Vault (ADV) secret management allows users to securely create, store, transport and use its secrets. Leveraging Azure confidential computing, allows us to make this technology more accessible to our enterprise customers and easily support their scale. Providing a secure enclave that is portable in the cloud is one the key reasons why our enterprises will prefer to host their ADV on Azure confidential computing regardless of their other cloud infrastructure.” —Assaf Cohen, CEO, Anqlave

How customers are succeeding with Azure confidential computing

Customers are already using Azure confidential computing for production workloads. One customer is Signal:

“Signal develops open source technology for end-to-end encrypted communications, like messaging and calling. To meet the security and privacy expectations of millions of people every day, we utilize Azure confidential computing to provide scalable, secure environments for our services. Signal puts users first, and Azure helps us stay at the forefront of data protection with confidential computing.” —Jim O’Leary, VP of Engineering, Signal

While many applications and services can take advantage of data protection with confidential computing, we have seen particular benefits with regulated industries, such as financial, government, and healthcare. Companies can now take advantage of the cloud for processing sensitive customer data with reduced risk and higher confidence that their data can be protected, including when processing.

For example, MobileCoin, a new international cryptocurrency, trusts Azure confidential computing to support digital currency transfers. Their network code is now available in open source, and a TestNet is available to try out:

“MobileCoin partners with Azure because Microsoft has decided to invest in trustworthy systems. Confidential computing rides the edge between what we can imagine and what we can protect. The praxis we’ve experienced with Azure allows us to commit to systems that are integral, high trust, and performant.” —Joshua Goldbard, CEO, MobileCoin

Confidential computing has proven useful for enterprise-grade blockchain, enabling fast and secure transaction verification across a decentralized network. Fireblocks is yet another customer taking advantage of Azure confidential computing infrastructure:

“At Fireblocks, our mission is to secure blockchain-based assets and transactions for the financial industry. Once we realized the traditional tech stack was not suitable for this challenge, we turned to Azure confidential computing and Intel SGX to implement our patent-pending technology. Our customers trust Fireblocks to securely store and move their digital assets—over $6.5 billion of them each month—and Azure provides a backbone for us to deliver on that promise.” —Michael Shaulov, CEO and co-founder, Fireblocks

Industry leadership bringing confidential computing to the forefront

Microsoft is not alone in bringing confidential computing to the forefront of the cloud computing industry. In September 2019, we were a founding member of the Confidential Computing Consortium (CCC), which now consists of dozens of companies working to develop and open-source technologies and best practices for protecting data while it’s in use. These companies include hardware, cloud, platform, and software providers.

Microsoft is also committed to the developer experience to ensure platform partners and application developers can build solutions that take advantage of confidential computing. We donated our Open Enclave SDK to the consortium, an open source SDK for developing platforms and applications on top of confidential computing infrastructure.

Get started today

Get started deploying your own DCsv2 virtual machine from the Azure Marketplace and install necessary tools. Then, run the Hello World sample using the Open Enclave SDK to begin building confidential workloads in the cloud.
Source: Azure

Update #3: Business continuity with Azure

Thank you for your response to our cloud continuity blogs; many of you have told us that this information is helpful. We’re committed to providing further posts when we have additional information.

Here at Microsoft, as most of our company starts the seventh week in changed professional and personal arrangements, we are learning new ways to live, work, learn and communicate. We are also learning from you—our customers and partners. We are all adjusting in this moment together and are appreciative of the feedback we receive and the confidence our customers have in our wide range of cloud services.

As a technology first responder serving first responders battling the global health crisis, as a trusted cloud provider to ensure your technology investment continues to deliver the value you expect, and as a company committed to assisting as organizations adapt to changing needs—we are relentlessly focused on providing the support needed to help the workforce operate as smoothly as possible during these changing times.

To ensure optimum focus, our efforts continue to be anchored in two key areas of action:

Help our customers address their most urgent needs.
Ensure Microsoft Azure continues to scale to meet new demand.

The rest of this post shares insights into the work we have done to support those two areas of continuity for organizations, businesses, and the people within them, around the world.

Helping our customers address their most urgent needs

Across our portfolio of cloud services, we work with a diverse group of global customers and organizations. Although their fields of work and customer needs are unique, there is consistency in what they’re looking for from cloud providers. Remote work, distance learning, real-time insights, and analytics have all been common themes when it comes to the most pressing needs during this time.

Some examples of this work in action:

As businesses and schools around the world prioritize the safety and well-being of their employees and students, Microsoft Teams, which runs on Azure, is playing a critical role in helping them stay connected through video meetings, calls, and chats. We’ve seen a new daily record of 2.7 billion meeting minutes in one day. One of the organizations using Teams is St. Luke’s University Health Network, which serves approximately 1 million people across 10 counties in Pennsylvania and New Jersey. In a matter of weeks, they transformed the way they work and deliver patient care through Teams, and since mid-March have completed over 75,000 virtual patient visits. This allowed them to continue critical outpatient visits while protecting both patients and physicians from COVID-19 exposure and preserving valuable resources like masks and gloves. Tablets have also been installed in patient rooms so providers can engage with infected patients via Teams, minimizing exposure while still allowing for face-to-face connections between patients and caregivers.

HoloLens 2 and Dynamics 365 Remote Assist are being used on the front lines by nurses and doctors (like Dr. Thomas Gregory) to maintain social distancing and minimize interactions all while ensuring expert support of patients via remote participation of support staff and access to valuable patient data and health records. And for the first time ever, instead of working together on campus, all 185 first-year students from Case Western Reserve University’s School of Medicine are using HoloLens and the university’s signature HoloAnatomy mixed-reality software, in light of the need for physical separation during the pandemic.

Hundreds of healthcare providers have installed the Power Platform Emergency Response Solution for hospitals, which was developed with Swedish Health Services in the Seattle area to analyze and improve resource tracking and decision support tools for hospital administrators.

Our Nonprofit Data Warehouse Quickstart efforts are helping nonprofits easily deploy Azure analytics services such as Azure Synapse Analytics, along with prebuilt Power BI templates, by integrating sample datasets such as the World Health Organization Water and Sanitation data repository, data aligned to the International Aid Transparency Initiative (IATI) data standard, and the Common Data Model for Nonprofits.

We recently announced the Dynamics 365 Healthcare Accelerator Patient Scheduling and Screening Template—a tool designed to help healthcare organizations address large volumes of patient requests with higher efficiency. The template provides access to a portal with information about COVID-19, an easy-to-use self-assessment tool for patients to determine risk, and an automated process for booking and performing COVID-19 screening.

Emergency Medical Services Copenhagen provides emergency care for about one-third of Denmark’s population. Shortly after the COVID-19 outbreak, calls to its emergency lines almost doubled, with around 2,000 calls daily by early March from worried people showing symptoms of COVID-19 or having questions about the disease. Emergency Medical Services Copenhagen is now one of many healthcare organizations in Europe and beyond using Microsoft’s Healthcare Bot service to help screen people for potential coronavirus infection and treatment.

Ensuring Azure continues to scale to meet new demand

The impact of the current pandemic is a great example of how cloud computing can rapidly meet new challenges. All of Microsoft’s cloud services, including Teams and other Microsoft 365 products, Dynamics 365, and Azure, were put to the test during these unprecedented and uncertain times. We are incredibly proud to be serving our customers, like those mentioned above, through this time, and we also acknowledge that it hasn’t all been without issue. We look to continuously improve our design and operations to account for all circumstances. Before we share the improvements we’re making, here’s some background on how we build and operate Azure.

Azure has been designed to quickly scale to meet surges in demand when they occur. Over the past few years, we have seen phenomenal demand for Azure services. To keep up with this demand, we have continued to expand our datacenter footprint—with 58 datacenter regions around the world. To manage the normal high growth we have come to expect, we design and source our own infrastructure components, (and share our designs back to the community through the Open Compute Project), and closely manage our strategic demand and supply chain forecasting models. In general, in any particular Azure region we ensure a near-instant capacity buffer within the datacenters, and hold additional infrastructure buffer warehoused, ready to ship to regions with high demand.

Last month, the surging use of Teams for remote work and education due to the pandemic crossed into unprecedented territory. Although we had seen surges in specific datacenter regions or wider geographies before, such as in response to natural disasters, the substantial increase in Teams demand from Asia, quickly followed by Europe, indicated that we were seeing something very different and increasingly global. Without knowing the true scale of the new demand, we took a cautious approach and put in place temporary resource limits on new Azure subscriptions. (Existing customer subscriptions did not experience these restrictions as each Azure customer account has a defined quota of services they can access.) This allowed us to continue to meet the promised quota for all existing Azure customers, prioritize new needs for life and safety organizations on the front lines of the pandemic response and support the dramatic shift to remote work and education on Teams.

As this surge in Teams demand occurred, we quickly took steps towards managing increased cloud infrastructure and network demand including:

Optimizing and load-balancing the Teams architecture and quickly rolling out these improvements worldwide (using Azure DevOps), without interrupting the customer experience. This work is durable, such that we can manage Teams’ rapid growth moving forward without creating pressure on Azure customers’ capacity needs.
Expediting additional server capacity to the specific regions that faced constraints, while ensuring the safety and health of our datacenter staff and supply chain partners.
Approving the backlog of customer quota requests, which we are rapidly doing every day and are on track to complete over the next few weeks in almost all regions.
Removing restrictions for new free and benefit subscriptions in several regions, so that anyone can learn more about Azure’s capabilities and develop new skills.
Refining our Azure demand models. Our data science models are using what we’ve learned from this pandemic to better forecast future demands, including adding more support to handle future global events like a pandemic that drives simultaneous demand usage everywhere in the world.

We remain committed to operational excellence and we will continue to share what we are learning and doing to support everyone during this time.
Source: Azure

Azure Migrate now available in Azure Government

Microsoft’s service for datacenter migration, Azure Migrate, is now available in Azure Government—unlocking the whole range of functionality for government customers. Previously, Azure Migrate V1 was available to US Azure Government customers, which performed limited-scale assessment for VMware workloads. Azure Migrate V2 for Azure Government, now available, is a one-stop shop for discovery, assessment, and migration of large-scale datacenters.

Why migrate to Azure Government

We know how important security is for government customers. Fortunately, Azure Government, Microsoft’s government cloud offering, provides industry-leading security with more compliance certifications than any other cloud provider. By using a government cloud solution, your organization can meet high compliance certifications that aren’t available on-premises. Azure Government has six government-exclusive datacenter regions across the US, with an Impact Level 5 Provisional Authorization. This means Azure Government can host workloads for the most sensitive organizations, like the US Department of Defense. Azure Government also offers hybrid flexibility, which allows you to customize your digital transformation by keeping select data and functionality on-premises. Leading-edge innovations in Azure ensure your government organization is modernized and effective, with advanced data analytics, artificial intelligence (AI), IoT, and high-performance computing. Transform how your organization learns from and interacts with citizens. Analyze smart devices in real time to improve weather sensors and optimize emergency services. Take preemptive action against evolving security threats with predictive models. Learn more about Azure Government.

Azure Migrate supports your migration to Azure Government

Azure Migrate provides a central hub of Microsoft and ISV migration tools. The hub helps identify the right tools for your migration scenario and features end-to-end progress tracking to help with large-scale datacenter migrations and cloud transformation projects. Azure Migrate provides comprehensive coverage for a variety of migration scenarios, now all available for government customers, including:

Windows and Linux servers—Large-scale discovery, assessment, and migration for VMware, Hyper-V, and bare-metal servers. Features include agentless discovery, application inventory mapping, dependency mapping, and cost analysis. You can also migrate VMware VMs (now generally available) to Azure with zero data loss and minimal downtime using an agentless migration, in addition to the agent-based migration capability.
SQL and other databases—Assessment and migration of a variety of on-premises databases to Azure SQL Database and Azure SQL Database managed instance.
Web apps—Assessment and migration of .NET and PHP web apps to Azure App Service.
Virtual Desktop Infrastructure (VDI) migration—Migration of virtual desktop infrastructure to Windows Virtual Desktop in Azure.
Data migration—Migration of block data to Azure using Data Box.

Learn more about Azure Migrate.

Geographic and regional availability for Azure Migrate

Azure Migrate is currently available in Asia Pacific, Australia, Canada, Europe, India, Japan, the United Kingdom, and the United States for the public cloud. Now, Azure Migrate capabilities are extended to US Gov Arizona and US Gov Virginia for government customers. Note that the individual SKUs supported in the assessment and migration tools will depend on availability in these regions. See a comparison of Gov SKUs with respect to public cloud SKUs.

Get started with Azure Migrate for Government

As always, Azure Migrate is included in your Azure subscription without any additional licensing costs. To get started with Azure Government, request an Azure Government trial. If you already have an Azure Government subscription, you can get started using Azure Migrate to discover, assess, and migrate your mission-critical workloads to Azure. You can learn how to get started with Azure Migrate and access tutorials in the Azure Migrate documentation. We are thrilled to empower our customers to be future-ready and leverage the continuous innovation of Azure. You can see the latest and greatest Azure Migrate capabilities in action in the videos below.

Get started with Azure Migrate
Migrate VMware VMs to Azure
How to discover, assess, and migrate Hyper-V VMs to Azure
Source: Azure

Optimize cost and performance with Query Acceleration for Azure Data Lake Storage

The explosion of data-driven decision making is motivating businesses to have a data strategy to provide better customer experiences, improve operational efficiencies, and make real-time decisions based on data. As businesses become data driven, we see more customers build data lakes on Azure. We also hear that cost optimization and performance are two of the most important qualities of a data lake architecture on Azure. Normally, these two qualities are traded off against each other—if you want more performance, you will need to pay more; if you want to save money, expect your performance curve to go down.

That’s why today, we’re announcing the preview of Query Acceleration for Azure Data Lake Storage—a new capability of Azure Data Lake Storage, which improves both performance and cost. The feature is now available for customers to start realizing these benefits and improving their data lake deployment on Azure.

How Query Acceleration for Azure Data Lake improves performance and cost

Big data analytics frameworks, such as Spark, Hive, and large-scale data processing applications, work by reading all of the data using a horizontally-scalable distributed computing platform with techniques such as MapReduce. However, a given query or transformation generally does not require all of the data to achieve its goal. Therefore, applications typically incur the costs of reading, transferring over the network, parsing into memory and finally filtering out the majority of the data that is not required. Given the scale of such data lake deployments, these costs become a major factor that impacts the design and how ambitious you can be. Improving cost and performance at the same time enhances how much valuable insight you can extract from your data.

Query Acceleration for Azure Data Lake Storage allows applications and frameworks to push-down predicates and column projections, so they may be applied at the time data is first read, meaning that all downstream data handling is saved from the cost of filtering and processing unrequired data.

The following diagram illustrates how a typical application uses Query Acceleration to process data:

The client application requests file data by specifying predicates and column projections.
Query Acceleration parses the specified query and distributes work to parse and filter data.
Processors read the data from the disk, parse it using the appropriate format, and then filter it by applying the specified predicates and column projections.
Query Acceleration combines the response shards to stream back to client application.
The client application receives and parses the streamed response. The application doesn't need to filter any additional data and can apply the desired calculation or transformation directly.
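The saving in the steps above comes from applying the predicate and projection at the storage layer rather than at the client. The following minimal local sketch (plain Python, no Azure SDK; the dataset, predicate, and projection are made up for illustration) contrasts the bytes a client would receive with and without server-side pushdown:

```python
import csv
import io

# Hypothetical dataset standing in for a CSV blob in the data lake.
rows = [("id", "region", "sales")] + [
    (str(i), "west" if i % 10 == 0 else "east", str(i * 3)) for i in range(1, 10_001)
]
blob = "\n".join(",".join(r) for r in rows).encode()

def client_side_filter(data: bytes):
    """Without pushdown: download the whole blob, then filter locally."""
    reader = csv.reader(io.StringIO(data.decode()))
    next(reader)  # skip header
    return [r for r in reader if r[1] == "west"]  # predicate applied after transfer

def server_side_filter(data: bytes) -> bytes:
    """With pushdown: the 'storage layer' applies predicate and projection,
    so only matching rows and projected columns cross the wire."""
    reader = csv.reader(io.StringIO(data.decode()))
    next(reader)
    out = io.StringIO()
    writer = csv.writer(out)
    for r in reader:
        if r[1] == "west":             # predicate: region = 'west'
            writer.writerow([r[0], r[2]])  # projection: id and sales only
    return out.getvalue().encode()

full = len(blob)
pushed = len(server_side_filter(blob))
print(f"client receives {full} bytes without pushdown, {pushed} bytes with pushdown")
```

The real service exposes this capability through an ANSI SQL-like query sent to the storage endpoint, and the saving scales with how selective the predicate and projection are.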

Azure offers powerful analytic services

Query Acceleration for Azure Data Lake Storage is yet another example of how we’re committed to making Azure the best place for organizations to unlock transformational insights from all data. Customers can benefit from tight integration with other Azure Services for building powerful cloud scale end-to-end analytics solutions. These solutions support modern data warehousing, advanced analytics, and real-time analytics easily and more economically.

We’re also committed to remaining an open platform where the best-in-breed open source solutions benefit equally from the innovations occurring at all points within the platform. With Azure Data Lake Storage underpinning an entire ecosystem of powerful analytics services, customers can extract transformational insights from all data assets.

Learn more

To find out more about Query Acceleration for Azure Data Lake Storage you can:

Sign up for the Azure Data Lake Storage preview program.
Read the Azure Data Lake Storage documentation.
Learn how to use Query Acceleration for Java and .NET.
Understand the pricing model for Query Acceleration.
Learn more about Azure Data Lake Storage.

Source: Azure

Azure GPUs with Riskfuel’s technology offer 20 million times faster valuation of derivatives

Exchange-traded financial products—like stocks, treasuries, and currencies—have had the benefit of a tremendous wave of technological innovation in the past 20 years, resulting in more efficient markets, lower transaction costs, and greater transparency to investors.

However, large parts of the capital markets have been left behind. Valuation of instruments composing the massive $500 trillion market in over-the-counter (OTC) derivatives—such as interest rate swaps, credit default swaps, and structured products—lack the same degree of immediate clarity that is enjoyed by their more straightforward siblings.

In times of increased volatility, traders and their managers need to know the impacts of market conditions on a given instrument as the day unfolds to be able to take appropriate action. Reports reflecting the conditions at the previous close of business are only valuable in calm markets and even then, firms with access to fast valuation and risk sensitivity calculations have a substantial edge in the marketplace.

Unlike exchange-traded instruments, where values can be observed each time the instrument trades, values for OTC derivatives need to be computed using complex financial models. The conventional means of accomplishing this is through traditional Monte Carlo—a simple but computationally expensive probabilistic sweep through a range of scenarios and resultant outcomes—or through finite-difference analysis.

Banks spend tens of millions of dollars annually to calculate the values of their OTC derivatives portfolios in large, nightly batches. These embarrassingly parallel workloads have evolved directly from the mainframe days to run on on-premises clusters of conventional, CPU-bound workers—delivering a set of results good for a given day.

Using conventional algorithms, real-time pricing and risk management are out of reach. But as the influence of machine learning extends into production workloads, a compelling pattern is emerging across scenarios and industries reliant on traditional simulation. Once computed, the output of traditional simulation can be used to train DNN models that can then be evaluated in near real-time with the introduction of GPU acceleration.
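The pattern just described can be sketched end to end. In this toy version, the "traditional model" is a made-up smooth pricing-like function (a real barrier-option valuation would be a Monte Carlo sweep), and the surrogate is a deliberately tiny random-feature network fitted by least squares rather than the deep networks Riskfuel trains—just enough to show the generate-samples-then-fit workflow:

```python
import numpy as np

def traditional_model(spot, vol):
    # Stand-in for a slow valuation model (illustrative only):
    # an option-like payoff plus a volatility-dependent premium.
    return np.maximum(spot - 1.0, 0.0) + 0.4 * vol * np.sqrt(spot)

rng = np.random.default_rng(0)

# Stage 1: sweep the input domain with the "slow" model to build training data.
X = rng.uniform([0.5, 0.1], [1.5, 0.5], size=(10_000, 2))  # (spot, vol) samples
y = traditional_model(X[:, 0], X[:, 1])

# Stage 2: fit a cheap surrogate -- a random hidden layer with a
# least-squares readout, standing in for a trained DNN.
W = rng.normal(size=(2, 256))
b = rng.normal(size=256)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

def surrogate(spot, vol):
    h = np.tanh(np.column_stack([spot, vol]) @ W + b)
    return h @ beta

# Stage 3: validate on held-out points; the surrogate should track the model.
Xt = rng.uniform([0.5, 0.1], [1.5, 0.5], size=(1_000, 2))
err = np.max(np.abs(surrogate(Xt[:, 0], Xt[:, 1]) - traditional_model(Xt[:, 0], Xt[:, 1])))
print(f"max abs error on held-out points: {err:.4f}")
```

Once fitted, the surrogate is a handful of matrix operations, which is why batch evaluation parallelizes so well on GPUs.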

We recently collaborated with Riskfuel, a startup developing fast derivatives models based on artificial intelligence (AI), to measure the performance gained by running a Riskfuel-accelerated model on the now generally available Azure ND40rs_v2 (NDv2-Series) Virtual Machine instance powered by NVIDIA GPUs against traditional CPU-driven methods.

Riskfuel is pioneering the use of deep neural networks to learn the complex pricing functions used to value OTC derivatives. The financial instrument chosen for our study was the foreign exchange barrier option.

The first stage of this trial consisted of generating a large pool of samples to be used for training data. In this instance, we used conventional CPU-based workers to generate 100,000,000 training samples by repeatedly running the traditional model with inputs covering the entire domain to be approximated by the Riskfuel model. The traditional model took an average of 2250 milliseconds (ms) to generate each valuation. With the traditional model, the valuation time is dependent on the maturity of the trade.

The histogram in Figure 1 shows the distribution of valuation times for a traditional model:


Figure 1: Distribution of valuation times for traditional models.

Once the Riskfuel model is trained, valuing individual trades is much faster with a mean under 3 ms, and is no longer dependent on maturity of the trade:

Figure 2: Riskfuel model demonstrating valuation times with a mean under 3 ms.

These results are for individual valuations and don’t use the massive parallelism that the Azure ND40rs_v2 Virtual Machine can deliver when saturated in a batch inferencing scenario. When called upon to value portfolios of trades, like those found in a typical trading book, the benefits are even greater. In our study, the combination of a Riskfuel-accelerated version of the foreign exchange barrier option model and an Azure ND40rs_v2 Virtual Machine showed a 20M+ times performance improvement over the traditional model.

Figure 3 shows the throughput, as measured in valuations per second, of the traditional model running on a non-accelerated Azure Virtual Machine versus the Riskfuel model running on an Azure ND40rs_v2 Virtual Machine (in blue):


Figure 3: Model comparison of traditional model running versus the Riskfuel model.

For portfolios with 32,768 trades, the throughput on an Azure ND40rs_v2 Virtual Machine is 915,000,000 valuations per second, whereas the traditional model running on CPU-based VMs has a throughput of just 32 valuations per second. This is a demonstrated improvement of more than 28,000,000x.
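The headline multiplier follows directly from those two throughput figures:

```python
riskfuel_vals_per_sec = 915_000_000  # ND40rs_v2, Riskfuel model, batch of 32,768 trades
traditional_vals_per_sec = 32        # CPU-based VMs, traditional model

speedup = riskfuel_vals_per_sec / traditional_vals_per_sec
print(f"{speedup:,.0f}x")  # → 28,593,750x
```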

It is critical to point out here that the speedup resulting from the Riskfuel model does not sacrifice accuracy. In addition to being extremely fast, the Riskfuel model effectively matches the results generated by the traditional model, as shown in Figure 4:


Figure 4: Accuracy of Riskfuel model.

These results clearly demonstrate the potential of supplanting traditional on-premises high-performance computing (HPC) simulation workloads with a hybrid approach: using traditional methods in the cloud as a methodology to produce datasets used to train DNNs that can then evaluate the same set of functions in near real-time.

The Azure ND40rs_v2 Virtual Machine is a new addition to the NVIDIA GPU-based family of Azure Virtual Machines. These instances are designed to meet the needs of the most demanding GPU-accelerated AI, machine learning, simulation, and HPC workloads. We chose the Azure ND40rs_v2 Virtual Machine to take full advantage of the massive floating-point performance it offers—achieving the highest batch-oriented performance for inference steps, as well as the greatest possible throughput for model training.

The Azure ND40rs_v2 Virtual Machine is powered by eight NVIDIA V100 Tensor Core GPUs, each with 32 GB of GPU memory, and with NVLink high-speed interconnects. When combined, these GPUs deliver one petaFLOPS of FP16 compute.
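That one-petaFLOPS figure is consistent with NVIDIA's published spec for the SXM2 variant of the V100, which quotes a peak of 125 teraFLOPS of FP16 Tensor Core throughput per GPU:

```python
gpus = 8
fp16_tflops_per_v100 = 125   # NVIDIA V100 (SXM2) peak FP16 Tensor Core spec

total_tflops = gpus * fp16_tflops_per_v100
print(f"{total_tflops} TFLOPS = {total_tflops / 1000} petaFLOPS")  # → 1000 TFLOPS = 1.0 petaFLOPS
```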

Riskfuel’s Founder and CEO, Ryan Ferguson, predicts the combination of Riskfuel accelerated valuation models and NVIDIA GPU-powered VM instances on Azure will transform the OTC market:

“The current market volatility demonstrates the need for real-time valuation and risk management for OTC derivatives. The era of the nightly batch is ending. And it’s not just the blazing fast inferencing of the Azure ND40rs_v2 Virtual Machine that we value so much, but also the model training tasks as well. On this fast GPU instance, we have reduced our training time from 48 hours to under four! The reduced time to train the model coupled with on-demand availability maximizes the productivity of our AI engineering team.”

Scotiabank recently implemented Riskfuel models into their leading-edge derivatives platform already live on the Azure GPU platform with NVIDIA GPU-powered Azure Virtual Machine instances. Karin Bergeron, Managing Director and Head of XVA Trading at Scotiabank, sees the benefits of Scotia’s new platform:

“By migrating to the cloud, we are able to spin up extra VMs if something requires some additional scenario analysis. Previously we didn’t have access to this sort of compute on demand. And obviously the performance improvements are very welcome. This access to compute on demand helps my team deliver better pricing to our customers.”

Additional resources

Learn more about Azure NDv2-Series Virtual Machines.
Explore Azure HPC.
Learn more about Riskfuel solutions.

Source: Azure