Microsoft launches Azure Health Data Services to unify health data and power AI in the cloud

Today, we take a giant step toward making the dream of interoperability in healthcare real. Microsoft is announcing the general availability of Azure Health Data Services, a platform as a service (PaaS) offering designed exclusively to support Protected Health Information (PHI) in the cloud. Azure Health Data Services is a new way of working with unified data—providing your team with a platform to support both transactional and analytical workloads from the same data store and enabling cloud computing to transform how we develop and deliver AI across the healthcare ecosystem.

Imagine this scenario:

"Give me all the medications prescribed and connected home health device data with all the CT Scan documents and their associated radiology reports for any patient older than 45 with a diagnosis of osteosarcoma over the last 2 years."

The above statement is a common request to health data managers. It may come from physicians, clinical researchers, or data scientists, but in today’s systems this type of request can often take days or even months, because health systems have to query multiple data stores that don’t speak the same language, then extract the data files, and finally work to unify them in batches for the user. Different types of health data are stored in siloed formats and databases—structured data like medications and patient attributes in pharmacy or EHR databases, CT scans in DICOM format, radiology reports as unstructured text, and medical device data in a separate data estate. By the time all the data is finally organized for use, the information is stale. With Azure Health Data Services, queries like these can be fulfilled in minutes, and the data can be connected to the places you need it.

Azure Health Data Services is the first of its kind to unify diverse data types in the same data store at the patient level as you bring them into the cloud, which means you can view structured, unstructured, and imaging data together for a holistic, real-time view—in just minutes. With the service, you can search and query across your data using a unified Fast Healthcare Interoperability Resources (FHIR®) structure and deploy a suite of services to connect it rapidly to the technology you need. Whether you’re blending patient data with population health data sets for AI development and analytics, visualizing data for operational efficiencies, deploying patient engagement tools for personalized care, or querying imaging metadata alongside clinical data using our new DICOMcast feature, Azure Health Data Services works with your existing systems to enhance what you’re doing today. It’s also built on open standards to ensure you can support new solutions and innovations yet to come.
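To make that concrete, here is a minimal sketch of what such a query can look like against the FHIR service’s standard search API. The endpoint, bearer token, and diagnosis code are illustrative placeholders, not values from this announcement.

```python
# A minimal sketch of a FHIR search against the service's REST API.
# Endpoint, token, and the diagnosis code are illustrative placeholders.
import requests

FHIR_BASE = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
HEADERS = {"Authorization": "Bearer <token-from-azure-ad>"}

# Conditions recorded over the last 2 years for a given diagnosis code,
# returning the referenced Patient resources in the same bundle.
resp = requests.get(
    f"{FHIR_BASE}/Condition",
    headers=HEADERS,
    params={
        "code": "http://snomed.info/sct|<osteosarcoma-code>",
        "recorded-date": "ge2020-03-15",
        "_include": "Condition:subject",
    },
)
bundle = resp.json()

# Patients older than 45 can then be filtered on Patient.birthDate, and the
# same patient IDs reused to fetch MedicationRequest resources, device
# Observations, and imaging metadata surfaced through DICOMcast.
for entry in bundle.get("entry", []):
    resource = entry["resource"]
    print(resource["resourceType"], resource["id"])
```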


Accelerate interoperability and innovation with your data in Azure Health Data Services

Provides a suite of health data APIs enabling you to securely ingest and unify PHI in the cloud and persist it within a compliance boundary in Azure.

FHIR service for management of clinical data.
Tools to enable the transformation of data in HL7v2, CDA, JSON, or CSV formats to FHIR (see the conversion sketch after this list).
Digital Imaging and Communications in Medicine (DICOM) service for management of imaging data.
DICOMcast technology extracts metadata from DICOM instances and integrates it with other clinical data in the FHIR store, enabling a single queryable dataset.
Streaming data from MedTech devices can be ingested and transformed to FHIR, with templates to capture data from Apple HealthKit, Google Fit, and Fitbit.
Unstructured data from clinical notes and health documents can be structured and mapped to FHIR through Text Analytics for health, so it can be viewed in context with clinical data records at the patient level.

Enables tools for data management, de-identification, event notification, and transformation of data for downstream use. A logical workspace in Azure Health Data Services enables you to manage your FHIR, DICOM, and MedTech services with common configuration across services, and integration with other Azure services. 
Connects your PHI data to powerful technology in the Microsoft Cloud for Healthcare ecosystem. Deep analytics and AI development begin with one click to push PHI data to Azure Synapse Analytics using the Synapse Link for FHIR, where you can send it to Microsoft Power BI for data visualization, or to Microsoft Teams and Microsoft Dynamics 365 for operational and engagement tools.
Earns trust with layered, in-depth defense and advanced threat protection aligned with strict industry compliance standards and regulatory requirements, including ISO, GDPR, HITRUST CSF, and HIPAA through BAA coverage. Azure Health Data Services helps providers and payors meet the requirements of the 21st Century Cures Act and the CMS Interoperability and Patient Access final rules.
Lowers costs in the cloud with a consumption-based pricing model that gives you full transparency with a pay-only-for-what-you-use structure. Azure Health Data Services removes infrastructure costs associated with multiple accounts, only charging for storage, API calls, transformation, and conversion as used, which means you can try the service for smaller workloads and control costs as you add more complex workloads. If you are just getting started, we have also made it developer-friendly by providing monthly entitlements for usage and storage at lower limits, so it is easy to innovate and explore.
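As a concrete example of the transformation tooling in the list above, the FHIR service exposes a $convert-data operation that converts HL7v2 messages to FHIR using Liquid templates. The sketch below is illustrative; the endpoint and message are placeholders, and Microsoft’s default template collection is assumed.

```python
# A sketch of the FHIR service's $convert-data operation, which turns an
# HL7v2 message into FHIR. Endpoint and message content are placeholders.
import requests

FHIR_BASE = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
hl7v2_message = "MSH|^~\\&|<sending-app>|..."  # a raw ADT^A01 message

payload = {
    "resourceType": "Parameters",
    "parameter": [
        {"name": "inputData", "valueString": hl7v2_message},
        {"name": "inputDataType", "valueString": "Hl7v2"},
        # Microsoft's default Liquid template collection for the converter
        {"name": "templateCollectionReference",
         "valueString": "microsofthealth/fhirconverter:default"},
        {"name": "rootTemplate", "valueString": "ADT_A01"},
    ],
}
resp = requests.post(
    f"{FHIR_BASE}/$convert-data",
    headers={"Authorization": "Bearer <token-from-azure-ad>"},
    json=payload,
)
fhir_bundle = resp.json()  # the converted FHIR resources
```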

Azure Health Data Services sits at the heart of the Microsoft Cloud for Healthcare as your foundation for PHI data in the cloud.

 

How did we build Azure Health Data Services?

We started with the patient. Whether you are working with data in healthcare settings, life sciences, or clinical research, bringing data together in a longitudinal record is powerful, but complex. Too much time is spent trying to unify data, and we wanted to simplify that process so our customers could focus their development dollars on what matters most: improving the delivery of clinical and operational care, fueling precision medicine workflows, and enabling rapid collaboration and research innovation across the health ecosystem.

We listened to our partners and customers in the health industry. The foundation of Azure Health Data Services is about removing barriers to working with health data. It starts with trusted security that lets you apply the security, access, and control you need for PHI so everyone in your organization can move faster. The Azure Health Data Services platform allows developers to scale in the cloud quickly, provides tools for data scientists to de-identify and export data for research, and allows end users to connect to solutions and apps with SMART on FHIR and Microsoft Power Platform to design and power front-end solutions.

We built it for the future. There are a lot of ways to bring data to the cloud. Rather than just create a platform to ‘lift and shift’ data silos into a lake, we wanted a place where our customers could harmonize data around open-source standards like HL7 FHIR and DICOM as it came into the cloud. Organization around these open standards accelerates downstream innovation and interoperability across the entire health ecosystem. And with your PHI data in a compliant boundary in the cloud, you can support both transactional and analytical processing from the same data store. That is important because health data is projected to grow at a compounded annual growth rate (CAGR) of 36 percent through 2025¹ and building for the future of healthcare means setting up your architecture for efficiency and scale.

We built it with trust. Thousands of developers and customers are already using Azure Health Data Services in production around the world. It is powerful to see our cloud technology bringing together collaboration in clinical research, pharma, healthcare, MedTech, and life sciences. We are honored that they chose Azure for their health data, and we are even more inspired by what they are already doing with our platform to transform the future of health.

“There are several areas where cloud technology like Azure Health Data Services can help enhance healthcare. I believe it will play a critical role between various systems, allowing us to take data from health records and other data sources and combine it together in a centralized place where it can be used to inform and deliver patient-centric care. It also can enable real-time complex deep learning – by normalizing data from different systems in a way that allows complex algorithmic analyses to occur via AI or ML—and integrate research-based insights back into a clinical workflow.”—Matthew Kull, Chief Information Officer, Cleveland Clinic

“Together, SAS and Microsoft Azure are building deep technology integrations that unlock value by making disparate data and advanced analytics more accessible to health and life science organizations. With new capabilities such as the FHIR API within Azure Health Data Services, the embedded AI capabilities of SAS Health are more efficient and secure, expanding the possibilities of patient-centric innovation and trusted collaboration across the health ecosystem.”—Gail Stephens, Vice President, Health Care and Life Sciences, SAS

“When a healthcare company is deciding what technology stack they can rely on, the important topics are data protocols, security and compliance, and ease-of-integration. We chose Azure because it covers all of these topics and then some.”—Sunny Webb, Chief Technology Officer, Veris Health

“Interoperability and secure data sharing are essential in building an open ecosystem to address the fragmentation in the patient journey. The Digital Health Platform is using Azure as the trusted cloud and its Healthcare related service such as FHIR to achieve the goal of a sustainable health system. Through the partnership with Microsoft we will continue to innovate and deliver new services to patients and our DHP partners by embracing Azure Health Data Services.”—Roland Scharrer, Group Chief Data and Emerging Technology Officer, AXA

“As existing providers of clinical decision support software, Capita Healthcare Decisions utilizes the Microsoft FHIR service in the Azure Health Data Services, this enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed platform as a service (PaaS) offering in the cloud. This allows our Head Home product to deliver a Hospital at Home experience supporting clinicians to keep patients at home and free up hospital beds.”—Stuart Bailey, Product Director, Capita Healthcare Decisions

“The combination of Sensoria’s remote patient monitoring wearable technology, behavioral feedback enabled mobile apps, and artificial intelligence Microsoft Azure cloud software solutions along with DARCO’s manufacturing, medical footwear design, and diabetic wound offloading expertise will help reduce risk of amputations and improve people’s lives. We are excited about this partnership and look forward to bringing the next generation of diabetic footwear to market.”—Davide Vigano, CEO, Sensoria Health

“ZEISS is able to connect our medical technology to Microsoft’s cloud enabling improved clinical workflows in a secure environment.”—Euan S. Thomson, Ph.D., President, Ophthalmic Devices and Head of Digital Business, ZEISS Medical Technology

“Ksana Health is partnering with Microsoft to digitally transform mental healthcare delivery through continuous, objective behavioral health monitoring and digital interventions. Microsoft’s Cloud for Healthcare and Azure Healthcare APIs accelerate our ability to deliver our full solution with industry-leading end-to-end security, integration, AI services and scalability.”—Tony Scripa, CFO, Ksana Health

Do more with your data with Microsoft Cloud for Healthcare

With Azure Health Data Services, health organizations can transform their patient experience, discover new insights with the power of machine learning and AI, and manage PHI data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.

We look forward to being your partner as you build the future of health.

Learn more about Azure Health Data Services.
Learn more about Microsoft Cloud for Healthcare.
Learn more about how health companies are using Azure to drive better health outcomes.

¹ 15 Ways Big Data is Changing the Healthcare Industry, Fingent Blog.

FHIR® is the registered trademark of HL7 and is used with the permission of HL7.
Source: Azure

Learn how Microsoft Circular Centers are scaling cloud supply chain sustainability

Aiming to deliver the most sustainable, scalable, and reliable cloud for Azure customers, Microsoft treats continued innovation in cloud hardware as a constant priority. This extends beyond server architecture and rack design to include intelligent provisioning, deployment, and ultimately, decommissioning of cloud computing hardware in datacenters.

As we look to deliver upon Microsoft’s commitments towards a net-zero carbon future, our cloud supply chain has integrated a zero-waste philosophy into every stage of the datacenter hardware lifecycle.

For this edition of our Hardware Innovation blog series, I’ve invited Paul Clark, GM, Cloud Engineering and Supply Chain Sustainability, and Anand Narasimhan, GM, Cloud Supply Chain Sustainability to share more about how Microsoft Circular Centers are extending the lifespan of our servers with the goal of increasing component reuse by up to 90 percent.

 

Microsoft team at the opening of the Circular Center in Boydton, Virginia. Pictured from left to right: David Beyer, Anand Narasimhan, Alex Bitiukov, Rani Borkar, Jeff Bertocci, Paul Clark, Kesava Viswanathan, Mo Cruz, Pedro Ramos.

 

In January of 2022, the Microsoft Cloud supply chain achieved significant milestones toward its goal of reusing 90 percent of its cloud computing hardware assets by 2025. We launched two additional Circular Centers, which process decommissioned cloud servers and hardware, sorting and intelligently channeling the components and equipment to optimize reuse and repurposing.

Our pilot Circular Center opened in Amsterdam in 2020, and the new centers that went live this year are located at our datacenter campuses in Dublin, Ireland, and Boydton, Virginia. We plan to expand the program at Microsoft datacenters in Quincy, Washington; Chicago, Illinois; and Singapore, with additional sites planned over the next few years in Des Moines, Iowa; San Antonio, Texas; Cheyenne, Wyoming; Sydney, Australia; Sweden; and more.

Addressing e-waste is crucial for Microsoft. We have set industry-leading sustainability goals of being carbon negative and water positive by 2030 while ensuring zero waste across our direct operations, products, and packaging. Our cloud supply chain plays a critical role in achieving that target.

The Microsoft Cloud is powered by millions of servers and a range of networking and storage hardware spread across more than 60 datacenter regions across 140 countries. We expect continued expansion of datacenters over the next few years because of the rapid growth in demand for digital services, and are striving to decouple business growth from the impact on natural resources.

So far, the Amsterdam Circular Center has achieved 83 percent reuse and 17 percent recycle of critical parts while contributing to the goal of reducing carbon emissions by 145,000 metric tons CO2 equivalent. This innovative approach to addressing e-waste and the success of the pilot recently resulted in our being named a finalist in the 2022 Gartner Power of the Profession Supply Chain Awards in the Social Impact of the Year category.

Deep history, wide-ranging benefits

Since 2012, Microsoft business units have been charged an internal fee based on the emissions associated with their operations. In 2020, that internal carbon fee was extended to include all Scope 1, Scope 2, and Scope 3 emissions. Scope 1 includes direct emissions from operations that are under a company’s control. Scope 2 is indirect emissions, such as those produced by the generation of electricity that a company uses. Scope 3 is all emissions that a company is indirectly responsible for, up and down its value chain—which is the majority of an organization’s total greenhouse gas emissions.

Those internal fees are deposited into a fund that is then used to drive several of our sustainability initiatives, which incubated technology innovations such as the Circular Center program among many others.

The first Circular Center was launched in March 2020, and in its first year we saw that the center enabled us to react faster to supply chain shortages caused by COVID-19 by reusing harvested components from end-of-life assets.

Decommissioned servers processed by Circular Centers are also finding a second life in schools as a resource for skills training programs. We work closely with partners to find new opportunities for end-of-life parts and equipment—like a company in Asia that is repurposing used memory cards in electronic toys and gaming systems. Similarly, through collaboration with suppliers, customers, industry groups, regulators, and other organizations, we’re finding other opportunities to further reduce carbon emissions and waste across the supply chain. For example, we are working with our network device suppliers to evaluate returning network devices to them to maximize reuse.

Decommissioned servers to be processed by Microsoft Circular Center in Boydton, Virginia.

Plan for every part

Microsoft designs a growing portion of its own hardware portfolio, and we make sustainability considerations a key part of the entire Azure hardware design process—including energy efficiency, repairability, upgradability, durability, and an optimized disposition plan for every part and component.

To enact these principles at scale, we developed the Intelligent Disposition and Routing System (IDARS), which establishes and executes a zero-waste plan for every piece of our hardware assets. IDARS is an end-to-end planning system aiming to define the most sustainable path for every part at any point in its lifecycle across the entire supply chain from upstream suppliers to downstream options for circularity.

Paired with Microsoft Dynamics 365 Supply Chain Management and Microsoft Power Platform, IDARS uses AI and machine learning to process and sort a wide range of end-of-life assets, optimize routes for those assets, and provide Circular Center operators precise instructions on how to dispose of the asset. IDARS also ensures compliance and security of the Microsoft Cloud and customer data.
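IDARS itself is internal to Microsoft, so the sketch below is purely hypothetical: it only illustrates the kind of rule-driven disposition routing described above, with invented names and thresholds.

```python
# A purely hypothetical sketch of disposition routing in the spirit of
# IDARS; Microsoft's actual rules and system design are not public.
from dataclasses import dataclass

MIN_REDEPLOY_GEN = 7  # invented threshold: oldest generation worth redeploying

@dataclass
class Asset:
    component: str      # e.g., "DIMM", "SSD", "NIC"
    healthy: bool       # outcome of automated diagnostics
    generation: int     # hardware generation
    data_bearing: bool  # could the part hold customer data?

def route(asset: Asset) -> str:
    """Pick the most sustainable compliant path for a decommissioned part."""
    if asset.data_bearing and not asset.healthy:
        return "secure-destruction"  # data security always wins
    if asset.healthy and asset.generation >= MIN_REDEPLOY_GEN:
        return "reuse-in-fleet"      # redeploy inside the datacenter fleet
    if asset.healthy:
        # data-bearing parts would be securely wiped before leaving the fleet
        return "repurpose"           # second life: schools, partners, resale
    return "recycle"                 # recover raw materials

print(route(Asset("DIMM", healthy=True, generation=6, data_bearing=False)))
# -> repurpose
```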

This technology, along with our close collaboration with both upstream and downstream partners, makes the Circular Center program one that we think could revolutionize circular models in the technology industry. We believe it could catalyze sustainable business models everywhere. Taking our learnings and best practices from working with suppliers and partners, we recently made a contribution on “Life Cycle Assessment (LCA) Guidelines for Cloud Providers” to the Open Compute Project (OCP), aiming to encourage the broader cloud hardware community to conduct LCAs to better understand and reduce their environmental impact.

Focusing on suppliers and Scope 3 emissions

As we scale our Circular Center efforts, we will be able to accelerate Microsoft’s progress towards its carbon reduction and broader sustainability goals. In our first year of operations, 7 percent of our servers that were decommissioned globally were routed to the pilot Circular Center in Amsterdam. Over the next 18 months, we expect to increase the decommissioned assets processed and repurposed through Circular Centers to more than 80 percent globally, putting us solidly on the path to our 90 percent goal.

One of the key factors to our success in reducing our Scope 3 emissions through this program—and in turn, our customers’ Scope 3 emissions—is our close collaboration with suppliers. Since 2020, Microsoft’s Supplier Code of Conduct has required that suppliers disclose greenhouse gas emissions as well as plans to reduce those emissions. That affects thousands of suppliers around the world; in fact, the cloud supply chain alone works with hundreds of suppliers, from hardware manufacturers to packaging suppliers to logistics providers.

Learn more

You can learn more about Microsoft’s progress towards our commitments on sustainability in the newly released 2021 Environmental Sustainability Report as well as take a deeper look at the global hardware supply chain that powers Microsoft Cloud.

Learn more about Microsoft’s global infrastructure.
Take a virtual tour of Microsoft’s datacenters.
Discover career opportunities in Azure hardware with Microsoft.

Source: Azure

Scaling cloud solutions to new heights with Microsoft’s partner ecosystem

Companies building cloud solutions, such as independent software vendors (ISVs), SaaS providers, and app builders, have never been more important to the world than they are today.

With the continued acceleration of digital transformation, every organization, small or large, in every industry across the globe, will require cloud infrastructure and services to power their business. As customers’ needs for cloud solutions exponentially increase, so do the opportunities for ISVs to connect with partners and customers across the Microsoft Cloud and the commercial marketplace. To help our ecosystem harness these opportunities, we are announcing:

Private offers with margin sharing to motivate 90,000-plus cloud partners: Now generally available, ISVs can use the private offer capability in the commercial marketplace to create and share margin with partners in the Cloud Solution Provider program—creating new sales channels instantly.
Increased agility with private offers for customers: With enhancements to private offers in the commercial marketplace, ISVs can now create a unique private offer per customer in less than 15 minutes. This helps ISVs unlock enterprise customers for seven-digit deals and sell directly to customers with a cloud consumption commitment (if the ISV solution is eligible for Azure IP co-sell).

For Microsoft, the commercial marketplace is the connector between ISVs and customers—it’s an engine dedicated to accelerating growth. By selling through the commercial marketplace, ISVs get instant access to global reach: 1 billion people who use Microsoft technology, the 95 percent of Fortune 500 companies who use Microsoft Azure, and 270 million monthly active users on Microsoft Teams.

Shifts in business-to-business (B2B) buying

Before COVID-19, customers in both B2C and B2B environments had already expressed a preference for digital commerce experiences; COVID-19 only accelerated digital adoption. Digital-first selling is here to stay.

Harvard Business Review¹ recently surveyed 1,000 B2B buyers; 43 percent of those surveyed would prefer a purely digital experience for all sales. When the data was cut by generation, 29 percent of Baby Boomers preferred digital experiences in B2B buying, while 54 percent of millennials had the same sentiment. Ten years from now, the channels we use for B2B buying today will be obsolete or at least forever transformed. Commercial marketplaces deliver on digital-first. Through B2B marketplaces, customers get a trusted buying experience that simplifies purchase and deployment while helping customers optimize costs with pre-committed cloud spend.

Private offers to scale and motivate 90,000-plus cloud partners

Margin sharing with partners in the Cloud Solution Provider (CSP) program became generally available on February 14, 2022. With margin sharing, ISVs can directly incentivize CSPs to sell their solutions, delivering on the promise of partner-to-partner marketing.

Collaborating with CSPs, ISVs can lower customer acquisition costs and scale their business to new customers globally. We are seeing pairings of ISV and CSP partners having tremendous success: just two months into the partnership between Pax8 (the CSP) and LawToolBox (the ISV), LawToolBox has seen a 105 percent increase in licenses transacted through the marketplace.

Another partner pairing, Sherweb (the CSP) and Nimble (the ISV), was able to work together and scale without adding any overhead.

“The outcome of becoming a P2P co-seller with Microsoft has enabled Nimble to scale our simple CRM for Microsoft 365 to over 22 countries around the world without hiring one person. That's amazing.”

Jon Ferrara, CEO, Nimble

ISVs can offer margin to 400 eligible partners at once to open new sales channels, mobilizing a global ecosystem of partners. This also helps ISVs lower acquisition costs and simplify the sales process while increasing customer retention. And finally, when CSPs sell an ISV solution, they can bundle it with Microsoft Cloud solutions and their own value-add services to drive scale and recurring revenue.
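For a rough sense of the mechanics, the margin is a percentage the ISV extends off the customer price, which the CSP keeps; all figures below are invented for illustration.

```python
# Illustrative margin-sharing arithmetic; every figure here is invented.
list_price = 100_000  # what the customer pays for the ISV solution (USD)
margin_pct = 0.20     # margin the ISV extends to CSP partners

csp_cost = list_price * (1 - margin_pct)  # CSP wholesale cost: 80,000
csp_margin = list_price - csp_cost        # what the CSP keeps:  20,000
print(f"CSP pays ${csp_cost:,.0f} and keeps ${csp_margin:,.0f} on the sale")
```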

Guidance on how to create a private offer and extend a margin to partners in the Cloud Solution Provider program.

Increased agility with private offers—accelerating seven-digit sales

To meet the needs of customers with agility, ISVs often use private offers. Private offers are the key to enterprise deal-making in the marketplace, delivering flexibility such as negotiated pricing, private terms and conditions, and specialized configurations. Microsoft has recently made substantial improvements to this functionality—ISVs can now create unique private offers per customer in less than 15 minutes.

Additional improvements include:

Create an unlimited number of private offers.
Ability to time-bound the private offer.
Offer custom terms and conditions.
Bundle multiple products in the same private offer.

One of the main motivators for customers to buy through B2B marketplaces is to decrement pre-committed cloud spend. Microsoft counts 100 percent of sales of Azure IP co-sell eligible solutions through the Azure Marketplace towards a customer’s Microsoft Azure Consumption Commitment (MACC). These deals are often in the millions and commonly transacted via private offers—the large deal sizes often need customized terms and conditions, special pricing considerations, and so on.

The recent improvements in private offers help ISVs connect with MACC-eligible customers. According to tackle.io’s annual State of Cloud Marketplaces report², 82 percent of ISVs listed unlocking pre-committed cloud spend as their number one reason to sell through commercial marketplaces, and 43 percent of customers listed spending pre-committed cloud spend as their number one reason to buy through commercial marketplaces. Microsoft has a rich set of enterprise customers that require private offers, and we are seeing the acceleration: year-over-year, we have seen a 300 percent increase in customers buying Azure IP co-sell solutions through the commercial marketplace, and we expect those numbers to continue to grow.

For agility and speed, ISVs can leverage APIs to create private offers and can view all private offers in a centralized dashboard, with the flexibility to copy, withdraw, and upgrade offers as appropriate. As customers accept private offers, or when private offers are set to expire, the ISV is notified in Partner Center. Customers see all the private offers associated with their account, and when they purchase, they simply accept the offer with a click. There is no need to re-deploy virtual machines—the solution deploys right from the Azure portal and is configured to work in the customer’s tenant.

Embracing the marketplace as a sales channel

With the proliferation of cloud solutions, commercial marketplaces simplify selling and offer customers convenience and a trusted environment to buy and deploy solutions to run their business. ISVs can accelerate their growth by embracing a third-party marketplace as a major sales channel. The improvements to private offers give ISVs the agility they need whether selling to customers with cloud consumption commitments or scaling through our 90,000-plus partners in the CSP program.

On the most trusted and comprehensive cloud, the commercial marketplace is how we are helping deliver tech intensity at scale, connecting over 30,000 partner solutions to the 1 billion customers who use Microsoft products. Activate this channel by becoming a Microsoft partner and by publishing a transactable offer to the commercial marketplace.

Resources

Join ISV Success Program (private preview)
Learn how to sell through commercial marketplace
Create a channel strategy to activate partners

¹ Harvard Business Review
² tackle.io, State of Cloud Marketplaces report
Source: Azure

Join Microsoft Azure at NVIDIA GTC developer conference 2022

The convergence of HPC+AI has opened new pathways for companies and developers worldwide to develop innovative, transformative applications. While this presents a plethora of new business opportunities in fields like academic research, climate modeling, and energy sustainability, these applications also push the boundaries of the compute, data, and processing capabilities of the underlying infrastructure they need to perform as intended.

Microsoft Azure is committed to providing those capabilities through a continual improvement cycle that incorporates the newest and fastest processors into the cloud. This spring, NVIDIA GTC will illustrate that commitment in detail, showcasing the NVIDIA accelerated computing capabilities that power resources on Azure, highlighting our ongoing investment in HPC+AI computing across the spectrum of edge, on-premises, and cloud, while extending data security and privacy capabilities to meet customer and business data needs.

Register for NVIDIA GTC, running March 21 through 24, 2022.

Get a chance to win an NVIDIA Jetson Nano

Two of our sessions give you the chance to win a SWAG box, complete with an HPC t-shirt and Jetson Nano developer kit. Attend these sessions and don’t forget to look for the special link to enter!

Supercomputer Performance, Meet Cloud Versatility.
Nidhi Chappell, Head of Product, Azure HPC/AI, Microsoft; John Montgomery, Corporate Vice President, Program Management, Azure AI, Microsoft.
Tuesday, March 22, 2022, 11:00 AM PDT.

The Azure HPC+AI platform enables a new era of innovative applications and services that leverage the versatility of the cloud with the power of supercomputing performance. The convergence of HPC and AI is a revolution, bringing dramatic acceleration to every kind of simulation and advancing fields across science and industry. Whether you need to scale to over 80,000 cores for your message passing interface (MPI)-based workloads, or you are looking for AI supercomputing capabilities, Azure can support your needs with all of the versatility of the cloud. In this session, we will provide an overview of the Azure HPC+AI platform, review recent accomplishments, and cover in detail how the Azure HPC+AI portfolio can support your accelerator workload needs, ranging from AI inferencing to deep learning and more.

Unlocking New Possibilities for Privacy-Preserving Data Analytics with Azure Confidential Computing.
Mark Russinovich, Azure CTO and Technical Fellow, Microsoft; Ian Buck, Vice President and General Manager of Accelerated Computing, NVIDIA.

In this session, Microsoft Azure CTO and Technical Fellow Mark Russinovich and NVIDIA Data Center VP Ian Buck discuss how Microsoft and NVIDIA are partnering together to integrate the latest GPU technology with Azure confidential computing to help customers process large data workloads such as AI and machine learning, multi-party analytics, and 3D rendering while keeping data private and secure. Currently, there is no comparable offering in the marketplace, and Azure is driving first to market with this game-changing technology in our quest to be the most secure cloud.

Preserving privacy with confidential computing

Organizations across industries are going through a major AI-led disruption. In healthcare, for example, hospitals, pharmaceutical companies, and researchers are leveraging AI to accelerate research, refine diagnostics, and improve drug discovery and development. Yet the democratization of AI is limited by concerns regarding the sharing and use of personal data. For example, banks are often unable to collaborate on critical tasks such as fraud and money laundering detection.

Microsoft has pioneered several privacy-preserving technologies such as homomorphic encryption, confidential computing, and differential privacy to address these challenges.

Join us at NVIDIA GTC to learn more about how to unlock new possibilities for privacy-preserving data analytics with Azure confidential computing to help process large data workloads such as AI and machine learning, multi-party analytics, and 3D rendering, while keeping data private and secure. Learn about how confidential GPUs offer high efficiency and confidentiality and how customers and organizations across the world benefit from it.

For more information about the latest on Azure confidential computing, check out our website, technical documentation, and blogs.

Transforming AI and machine learning at the edge

According to IoT Signals, IoT and AI adoption isn’t slowing down: 90 percent of adopters state that IoT is critical to their success, and 79 percent of businesses indicate they are successfully adopting AI within their IoT solutions, with top use cases including predictive maintenance (67 percent) and prescriptive maintenance (65 percent). Additionally, 56 percent of organizations are combining AI and IoT to create a better user experience.

However, at the same time, 46 percent of businesses with AI strategies are struggling to get their projects past the proof-of-concept stage due to technical challenges and complexity. By shifting AI, analytics, and logic to edge devices, edge computing can help solve speed, latency, security, and reliability challenges within AI and IoT applications.

At NVIDIA GTC, learn more about how NVIDIA and Microsoft are working together to transform AI and machine learning at the edge, leveraging the power of GPUs at the edge combined with Azure AI and IoT services. Learn about accelerating AI and IoT solutions with Azure Percept, Azure Stack HCI, and other Azure IoT services.

For more information, check out the NVIDIA + Microsoft Accelerated Edge AI webpage.

NVIDIA DLI Training Powered by Azure

We’re proud to host NVIDIA’s Deep Learning Institute (DLI) training at NVIDIA GTC again this year, with instructor-led workshops on accelerated computing, accelerated data science, and deep learning. Hosted on Microsoft Azure, these sessions empower you to leverage NVIDIA GPUs on the Microsoft Azure platform to solve the world’s most interesting and relevant problems. Register for a DLI workshop today.

Microsoft customers solve complex problems with Azure and NVIDIA GPUs

Sensyne Health aids National Health Service in the COVID-19 struggle with Microsoft HPC and AI technologies.

Amid COVID-19 and the need for faster test results, Sensyne Health developed its MagnifEye solution, a mobile app that uses a device’s camera to capture an image of a lateral flow test (LFT) stick and read it in tenths of a second with a stunning 99.6 percent accuracy rate.

Learn More

Learn more about Microsoft at NVIDIA GTC.
Register for NVIDIA GTC DLI workshops and training sponsored by Microsoft Azure.
Learn more about our joint Edge to Cloud story with NVIDIA.
Learn more about our recently launched Digital Certification program with Capgemini focusing on NVIDIA GPU-powered Azure virtual machines.
Altair ultraFluidX™ on Azure.
Barracuda Virtual Reactor on Azure.
Autodesk VRED on Azure.

Source: Azure

5 reasons to attend the Azure VMware Solution digital event

Imagine getting more from your VMware workloads by easily extending them to the cloud—and optimizing your costs in the process. Azure VMware Solution, a jointly engineered Microsoft and VMware service, makes this possible.

Azure VMware Solution lets you run your VMware infrastructure natively on Azure. Migrating your on-premises VMware investments and skills to Azure allows you to:

Quickly scale, automate, and modernize your VMware workloads.
Continue using your favorite VMware tools in the cloud, including VMware vSphere, vSAN, and vCenter.
Improve your disaster recovery, business resiliency, and VMware application performance.
Take advantage of Azure management, security, and compliance services.
Respond more quickly to the needs of your customers.
Reduce your IT infrastructure spending.

To show you how simple transitioning to Azure VMware Solution is and highlight the product’s capabilities, Microsoft and VMware are hosting an Azure VMware Solution digital event on Wednesday, March 23, 2022, from 9:00 AM to 11:30 AM Pacific Time.

Register now for this free event to:

1. Get product insights directly from customers

Find out how organizations significantly enhanced their VMware infrastructure with Azure VMware Solution in these customer interviews during the event keynote:

Learn how the University of Miami—which houses one of the nation’s leading oceanographic institutes and operates in a region known for hurricanes—strengthened its disaster recovery strategy and increased its IT flexibility while reducing costs.
Hear how Carhartt, the global premium workwear brand, increased its workload reliability and delivered easy-to-manage, secure digital workspaces to its employees.

2. See Azure VMware Solution in action

Watch demos of how to migrate your on-premises VMware workloads to the cloud and how to optimize, scale, and automate them there.

3. Take a technical deep dive into the product

Join breakout sessions and instructor-led workshops on topics including:

Learning your way around the Azure VMware Solution architecture.
Understanding the enterprise-scale landing zone considerations.
Deploying and connecting to a private cloud with Azure VMware Solution.
Protecting your data and managing disaster recovery using Azure VMware Solution with VMware Site Recovery Manager.
Simplifying your workload migration to Azure with VMware HCX.

4. Learn about product updates, best practices, and cost-saving programs

Hear from Microsoft and VMware leaders including:

Kathleen Mitford, CVP, Azure Marketing, Microsoft.
Eric Lockard, CVP, Azure Dedicated, Microsoft.
Sumit Dhawan, President, VMware.
Mark Lohmeyer, SVP and GM, Cloud Infrastructure Business Group, VMware.

5. Get answers to your technical questions

Ask the product experts your questions about migrating and operating your VMware infrastructure natively on Azure in the live chat Q&A.

Join us on March 23, 2022, to get to know Azure VMware Solution better, hear advice and insights from customers, engage with product insiders, and explore how to quickly and cost-effectively move your VMware investments to the cloud.

Check out the keynotes and event sessions

Beginning at 9:00 AM Pacific Time:

Keynote featuring Microsoft and VMware leadership and customers.

Join Kathleen Mitford (CVP, Microsoft Azure) and Sumit Dhawan (President, VMware) in conversation with Azure VMware Solution customers.
Special guest speakers from:

University of Miami
Carhartt

Product Update and Real-World Demo

Get insider info from Sue Hartford (Director of Product Marketing, Microsoft Azure), Eric Lockard (CVP, Microsoft Azure), and Mark Lohmeyer (SVP and GM, VMware), and see Azure VMware Solution in action.

Beginning at 10:25 AM Pacific Time:

Azure VMware Solution
Session by Ram Gowrishankar (Partner Group Program Manager, Microsoft)

Simple deployment
Session by Trevor Davis (Senior Technical Specialist, Microsoft) and Emad Younis (Director, Multi-Cloud Center of Excellence, VMware)

Disaster recovery with Azure VMware Solution
Session by Saravanan Manickam (Program Manager, Microsoft)

AVS: Offers to accelerate Azure migration and modernization
Session by Vinoo Srinivas Murali (Director of Business Strategy, Microsoft)

Automating onboarding of Azure VMware Solution
Session by Prasad Gandham (Principal Program Manager, Microsoft) and Sapna Jeswani (Principal Program Manager, Microsoft)

Hands-On Workshop: Azure VMware Solution private cloud deployment and connectivity

Hands-On Workshop: Disaster protection with Azure VMware Solution and VMware Site Recovery Manager

Hands-On Workshop: Azure VMware Solution workload migration with VMware HCX

Microsoft Learn Live: Prepare to migrate VMware workloads to Azure by deploying Azure VMware Solution

Microsoft Learn Live: Deploy disaster recovery using VMware Site Recovery Manager and Azure VMware Solution
Tutorials with Microsoft Senior Cloud Advocates Amy Colyer and Pierre Roman

We hope to see you there!

Extend to the Cloud with Azure VMware Solution
Wednesday, March 23, 2022
9:00 AM to 11:30 AM Pacific Time

Delivered in partnership with VMware.

Source: Azure

The anatomy of a datacenter—how Microsoft's datacenter hardware powers the Microsoft Cloud

Leading hardware engineering at a company known for its vast portfolio of software applications and systems is not as strange as it sounds: the Microsoft Cloud depends on hardware as the foundation of trust, reliability, capacity, and performance that makes it possible for Microsoft and our customers to achieve more. The underlying infrastructure that powers our 60-plus datacenter regions across 140 countries consists of hardware and systems that sit within the physical buildings of datacenters—enabling millions of customers to execute critical and advanced workloads, such as AI and quantum computing, as well as unleashing future innovations.

Datacenter hardware development is imperative to the evolution of the Microsoft Cloud

As the Microsoft Cloud offers services and products to meet the world’s ever-growing computing demands, it is critical that we continuously design and advance hardware systems and infrastructure to deliver greater performance, higher efficiency, and more resiliency to customers—all with security and sustainability in mind. Today, our hardware engineering efforts and investments focus heavily on roadmap and lifecycle planning, sourcing and provisioning of servers, and innovating to deliver next-generation infrastructure for datacenters. In our new Hardware Innovation blog series, I’ll be sharing some of the hardware development and investments that are driving the most impact for the Microsoft Cloud and making Azure the trusted cloud that delivers innovative, reliable, and sustainable hybrid cloud solutions. But first, let’s look “under the hood” of a Microsoft datacenter:

From server to cloud: the end-to-end cloud hardware lifecycle

Our hardware planning starts with what customers want: capacity, differentiated services, cost savings, and ultimately the ability to solve harder problems with the help of the Microsoft Cloud. We integrate key considerations such as customer feedback, operational analysis, technology vetting, and evaluation of disruptive innovations into our strategy and roadmap planning; improve existing compute, network, and storage hardware in our datacenters; and future-proof for innovative workloads at scale. Our engineers then design, build, test, and integrate software and firmware into hardware fleets that meet a stringent set of quality, security, and compliance requirements before deploying them into Microsoft’s datacenters across the globe.

Sourcing and provisioning cloud hardware, sustainably and securely

With Microsoft’s scale, the ways in which we provision, deploy, and decommission hardware parts have the potential to drive massive planetary impact. While we work with suppliers to reimagine a more resilient and efficient supply chain using technologies such as blockchain and digital twins, we also aim to have sustainability built into every step of the way. An example of our sustainability leadership is the execution of Microsoft Circular Centers, where servers and hardware that are being decommissioned are repurposed—efforts that are expected to increase the reuse of servers and components by up to 90 percent by 2025. I will be sharing more on our Circular Centers progress this year. We also have in place the Azure Security and Resiliency Architecture (ASRA) as an approach to drive security and resiliency consistently and comprehensively across the Microsoft Cloud infrastructure supply chain.

Innovating to deliver next-generation datacenter infrastructure

We are investigating and developing technology that will allow datacenters to be more agile, efficient, and sustainable to operate while meeting the computing demands of the future. We have showcased developments in datacenter energy efficiency, such as two-phase liquid immersion cooling, which allows more densely packed servers to fit in smaller spaces and supports processor overclocking for higher computing efficiency with a lower carbon footprint. We also continue to invest in and develop workload-optimized infrastructure—from servers, racks, and systems to datacenter designs—for more custom general-purpose offerings as well as specialized compute such as AI, high-performance computing, quantum, and beyond.

Building the most advanced and innovative hardware for the intelligent cloud and the intelligent edge

The journey of building Microsoft Cloud’s hardware infrastructure is an exciting and humbling one as we see continual advancement in technology to meet the needs of the moment. I have been in the hardware industry for more than thirty years—yet, I’m more excited each day as I work alongside leaders and experts on our team, with our partners across the industry, and with the open source community. Like many of the cloud services that sit on top of it, Microsoft’s hardware engine runs on consistency in quality, reliability, and scalability. Stay tuned as we continue to share more deep dives and updates of our cloud hardware development, progress, and results—and work to drive forward technology advancement, enable new capabilities, and push the limits of what we can achieve in the intelligent cloud and the intelligent edge.

Learn more

Learn more about Microsoft’s global infrastructure.
Take a virtual tour of Microsoft’s datacenters.
Discover career opportunities in Azure hardware with Microsoft.

Source: Azure

Technical leaders agree: AI is now a necessity to compete

AI is enabling new experiences everywhere. When people watch a captioned video on their phone, search for information online, or receive customer assistance from a virtual agent, AI is at the heart of those experiences. As users increasingly expect the conveniences that AI can unlock, these capabilities are seen less as incremental improvements and more as core to any app experience. A recent Forrester study shows that 84 percent of technical leaders feel they need to implement AI into apps to maintain a competitive advantage. Over 70 percent agree that the technology has graduated out of its experimental phase and now provides meaningful business value.

To make AI a core component of their business, organizations need faster, responsible ways to implement AI into their systems, ideally using their teams’ existing skills. In fact, 81 percent of technical leaders surveyed in the Forrester study say they would use more AI if it were easier to develop and deploy.

So, how can leaders accelerate the execution of their AI ambitions? Here are three important considerations for any organization to streamline AI deployments into their apps:

1. Take advantage of cloud AI services

There are cloud AI services that provide prebuilt AI models for key use cases, like translation and speech-to-text transcription. This makes it possible to implement these capabilities into apps without requiring data science teams to build models from scratch. Two-thirds of technical leaders say the breadth of use cases supported by cloud AI services is a key benefit. Using the APIs and SDKs provided, developers can add and customize these services to meet their organization’s unique needs. And prebuilt AI models benefit from regular updates for greater accuracy and regulatory compliance.

Azure has two categories of these services:

Azure Applied AI Services that are scenario-specific to accelerate time to value.
Cognitive Services that make high-quality AI models available through APIs for a more customized approach.
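To illustrate how little code a prebuilt model requires, here is a hedged sketch of calling the Translator REST API, one of the Cognitive Services; the resource key and region are placeholders.

```python
# A sketch of calling a prebuilt Cognitive Services model: the Translator
# v3 REST API. The resource key and region below are placeholders.
import requests

resp = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": ["de", "ja"]},  # target languages
    headers={
        "Ocp-Apim-Subscription-Key": "<translator-resource-key>",
        "Ocp-Apim-Subscription-Region": "<resource-region>",
    },
    json=[{"text": "Check coolant level before starting the engine."}],
)
for item in resp.json():
    for translation in item["translations"]:
        print(translation["to"], "->", translation["text"])
```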

2. Empower your developers

Your developers can use APIs and SDKs within your cloud AI services to build intelligent capabilities into apps within their current development process. Developers of any skill level can get started quickly using the programming languages they already know. And should developers need added support, cloud vendors readily offer learning resources for quicker onboarding and troubleshooting.

Azure offers a 30-day developer learning journey for understanding key AI concepts, as well as step-by-step guidance on Microsoft Learn for those who want to build AI-powered applications.

3. Prioritize your most relevant use cases first

With AI, time to value is a matter of selecting use cases that will provide the most utility in the shortest time. Identify the needs within your organization to determine where AI capabilities can deliver the greatest impact.

For example, customers like Ecolab harness knowledge mining with Azure Cognitive Search to help their agents retrieve key information instantly, instead of spending over 30 minutes sifting through thousands of documents each time. KPMG applies speech transcription and language understanding with Azure Cognitive Services to reduce the amount of time to identify compliance risks in contact center calls from 14 weeks to two hours. And Volkswagen uses machine translation with Azure Translator to rapidly localize content including user manuals and management documents into 40 different languages.

These are just a few of the practical ways organizations have found efficiency and utility in out-of-the-box AI services that didn’t demand an unreasonable investment of time, effort, or customization to deploy.

Create business value with AI starting today

Implementing AI is simpler and more accessible than ever. Organizations of every size are deploying AI solutions that increase efficiencies, drive down overhead, or delight employees and customers in ways that are establishing them as brands of choice. It’s a great time to join them.

Learn more

Read the commissioned study by Forrester Consulting, “Fuel Application Innovation With Cloud AI Services”.
Watch the webinar on the Forrester study.
Visit the Azure AI page for more on key AI use cases.

Source: Azure

Introducing dynamic lineage extraction from Azure SQL Databases in Azure Purview

Data citizens, including both technical and business users, rely on data lineage for root cause analysis, impact analysis, data quality tracing, and other data governance applications. In the current data landscape, where data is fluidly moving across locations (on-premises to and across clouds) and across data platforms and applications, it is increasingly important to map the lineage of data. That’s why we’re introducing dynamic lineage extraction, currently in preview.

Conventional systems map lineage by parsing data transformation scripts, otherwise called static code analysis. This works well in simple scenarios. For example, when a SQL script produces a target table Customer_Sales by joining two tables called Customer and Sales, static code analysis can map data lineage. However, in many real use cases, the data processing workloads are quite complicated. The scripts could be wrapped in a stored procedure that is parametrized and uses dynamic SQL. There could be a decision tree with an if-then-else statement executing different scripts at runtime. Or simply, data transactions could have failed to commit at runtime.
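To see why static parsing falls short, consider a sketch along these lines, with all object and procedure names invented: the table actually written is only known when the procedure runs.

```python
# Why static code analysis cannot always map lineage: the written table is
# chosen at runtime. All object and procedure names here are invented.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=<server>;DATABASE=<db>;..."
)
region = "emea"  # imagine this decided at runtime by config or upstream data

# Suppose dbo.LoadRegionalSales internally builds dynamic SQL such as:
#   SET @sql = N'INSERT INTO Sales_' + @Region +
#              N' SELECT ... FROM Customer c JOIN Sales s ON ...';
#   EXEC sp_executesql @sql;
# A static parser sees only the string concatenation; it cannot tell which
# Sales_<region> table was written, by whom, from where, or whether the
# transaction committed. Runtime logs can.
cursor = conn.cursor()
cursor.execute("EXEC dbo.LoadRegionalSales @Region = ?", region)
conn.commit()
```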

In all these examples, dynamic analysis is required to track lineage effectively. Even more importantly, static lineage analysis does not associate data and processes with runtime metadata, limiting customer applications significantly. For instance, dynamic lineage that encodes who ran a stored procedure and when, from what application and on which server, will enable customers to govern privacy, comply with regulations, increase time-to-insight, and better understand their overall data and processes.

Dynamic data lineage—Azure SQL Databases

Today, we are announcing the preview release of dynamic lineage extraction from Azure SQL Databases in Azure Purview. Azure SQL Database is one of the most widely used relational database systems in enterprises. Stored procedures are commonly used to perform data transformations and aggregations on SQL tables for downstream applications. With this release, the Azure Purview Data Map can be further enriched with dynamic lineage metadata such as run status, impacted number of rows, the client from which the stored procedure is run, user info, and other operational details from actual runs of SQL stored procedures in Azure SQL Databases.

Figure: Limited lineage metadata from static code analysis.

The actual implementation involves the Azure Purview Data Map tapping into the instrumentation framework of the SQL engine and extracting runtime logs to aggregate dynamic lineage. The runtime logs also capture the actual queries executed in the SQL engine for data manipulation, which Azure Purview uses to map data lineage and gather additional detailed provenance information. Azure Purview scanners run several times a day to keep up the freshness of dynamic lineage and provenance from Azure SQL Databases.


Get started with Azure Purview today

The native integration with Azure SQL Databases for dynamic lineage and provenance extraction is the first of its kind, and Azure Purview is leading the way. Follow the steps below to get started.

Quickly and easily create an Azure Purview account to try the generally available features.
Read quick start documentation on how to connect an Azure SQL Database to an Azure Purview account for dynamic data lineage.

Source: Azure

Meet PCI compliance with credit card tokenization

In building and running a business, the safety and security of your and your customers' sensitive information and data is a top priority, especially where storing financial information and processing payments are concerned. The Payment Card Industry Data Security Standard (PCI DSS)¹ defines a set of regulations put forth by the largest credit card companies to help reduce costly consumer and bank data breaches.

In this context, PCI compliance refers to meeting the PCI DSS’ requirements for organizations and sellers to help safely and securely accept, store, process, and transmit cardholder data during credit card transactions, to prevent fraud and theft.

Towards confidential computing

In June 2021, the Monetary Authority of Singapore (MAS)² issued an advisory circular on addressing the technology and cyber security risks associated with public cloud adoption. The paper describes a set of risk management principles and best practice standards to guide financial institutions in implementing appropriate data security measures to help protect the confidentiality and integrity of sensitive data in the public cloud, taking into consideration data-at-rest, data-in-motion, and data-in-use where applicable³. Specifically, section 21 (reproduced below) advises that for data being used or processed in the public cloud, financial institutions (FIs) may implement confidential computing solutions if available from the cloud service provider. Confidential computing solutions protect data by isolating sensitive data in a protected, hardware-based computing enclave.

Data security and cryptographic key management

FIs should implement appropriate data security measures to protect the confidentiality and integrity of sensitive data in the public cloud, taking into consideration data-at-rest, data-in-motion and data-in-use where applicable.

For data-at-rest, that is, data in cloud storage, FIs may implement additional measures e.g. data object encryption, file encryption or tokenization in addition to the encryption provided at the platform level.
For data-in-motion, that is, data that traverses to and from, and within the public cloud, FIs may implement session encryption or data object encryption in addition to the encryption provided at the platform level.
For data-in-use, that is, data that is being used or processed in the public cloud, FIs may implement confidential computing solutions if available from the CSPs. Confidential computing solutions protect data by isolating sensitive data in a protected, hardware-based computing enclave during processing.

Confidential virtual machines

On this basis, FIs can leverage Azure confidential computing to build an end-to-end data and code protection solution on the latest technology for hardware-based memory encryption. The solution presented in this article for processing credit card payments makes use of confidential virtual machines (CVMs) running on AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology.

AMD introduced SEV to isolate virtual machines from the hypervisor. Hypervisors are typically considered trusted components in the virtualization security model, and many customers have requested a VM trust model that reduces exposure to vulnerabilities in the infrastructure. With SEV, each VM is assigned a unique encryption key, managed in hardware by the CPU, that is used to automatically encrypt the memory the hypervisor allocates to run that VM.

The latest generation of SEV technology includes the SNP capability. SNP adds hardware-based security by providing strong memory integrity protection against potential hypervisor-based attacks, including data replay and memory re-mapping.

Azure confidential computing offers confidential VMs based on AMD processors with SEV-SNP technology. Confidential VMs are for tenants with high security and confidentiality requirements. You can use confidential VMs for migrations without making changes to your code, with the platform helping to protect your VM’s state from being read or modified. Benefits of confidential VMs include:

Robust hardware-based isolation between virtual machines, hypervisor, and host management code.
Attestation policies to ensure the host’s compliance before deployment.
Cloud-based full-disk encryption before the first boot.
VM encryption keys that the platform or the customer (optionally) owns and manages.
Secure key release with cryptographic binding between the platform’s successful attestation and the VM’s encryption keys.
Dedicated virtual Trusted Platform Module (TPM) instance for attestation and protection of keys and secrets in the virtual machine.

Provisioning a confidential VM in Azure is as simple as provisioning any other virtual machine, using your preferred tool, either manually via the Azure Portal or by scripting with the Azure command-line interface (CLI). Figure 1 shows the process of creating a virtual machine in the Azure Portal, with specific attention to the “Security type” attribute. To provision a confidential VM based on AMD SEV-SNP technology, select that specific entry in the dropdown list. At the time of writing (March 2022), confidential VMs are in preview in Azure and thus limited in availability across regions. As the service enters general availability, more regions will become available for deployment.

Figure 1: Confidential Virtual Machine in Azure Portal.

Credit card tokenization

In the scenario depicted in Figure 2 below, the tokenization process acts as a random oracle: a process that, given an input, generates an unpredictable output. The output always varies, even if the same input is provided; for example, when a customer makes a second payment using the same credit card used in a previous transaction, the token generated will be different. Lastly, when that random output is provided back to the service, the tokenization interface returns the original input.
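To make the described behavior concrete, here is a minimal, illustrative sketch of a random-oracle-style tokenizer. The class and member names are assumptions, not the article's implementation, and a production tokenizer would keep the encrypted payload rather than the raw input, as the architecture below does.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Illustrative sketch: a fresh, unpredictable token on every call, with a
// mapping kept so the token can later be resolved back to the original input.
public sealed class Tokenizer
{
    private readonly Dictionary<string, string> _tokenToInput = new();

    public string Tokenize(string input)
    {
        // 128 bits of cryptographically secure randomness, independent of the
        // input, so the same card data yields a different token on every call
        string token = Convert.ToHexString(RandomNumberGenerator.GetBytes(16));
        _tokenToInput[token] = input;
        return token;
    }

    public string Resolve(string token) => _tokenToInput[token];
}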

It is not by coincidence that I used the term “interface” to describe this tokenization service. Indeed, the technical implementation of this random generator is a Web API running on the .NET 6 runtime. Figure 2 describes the reference architecture for the solution.

Figure 2: Credit card tokenization architecture reference.

A payment transaction is initiated by the customer, and the payment data is transferred to the .NET Web API, which runs on a confidential VM.
The random token is generated by the API based on the input data. Tokenization also includes encryption of this data with a symmetric cryptographic algorithm (specifically, AES).
The encryption key is stored in Azure Key Vault backed by a managed Hardware Security Module (HSM). This is a critical component of the confidential solution, as the encryption key is preserved inside the HSM, which helps protect it from the cloud provider or any other rogue administrator. Only the Web API app is authorized to access the secret key.
The following code snippet shows the implementation of the key retrieval from AKV inside the Get method of the Web API.

[HttpGet(Name = "GetToken")]
public async Task<TokenTuple> Get(CreditCard card)
{
        // Retrieve the AES encryption key from AKV
        string akvName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
        var akvUri = $"https://{akvName}.vault.azure.net";
        var akvClient = new SecretClient(new Uri(akvUri), new Azure.Identity.DefaultAzureCredential());
        var secret = await akvClient.GetSecretAsync("AesEncryptionKey");
        EncryptionKey key = JsonSerializer.Deserialize<EncryptionKey>(secret.Value.Value);
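The EncryptionKey type deserialized above is not shown in the article; a minimal assumed shape, which the cipher sketch later in this section also uses, could be:

// Hypothetical shape of the EncryptionKey type; KeyBytes carries the raw
// 256-bit AES key material stored as a secret in the vault.
public sealed record EncryptionKey(byte[] KeyBytes);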

Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs.

The service is highly available and zone resilient (where availability zones are supported): Each HSM cluster consists of multiple HSM partitions that span across at least two availability zones. If the hardware fails, member partitions for your HSM cluster will be automatically migrated to healthy nodes.

Each Managed HSM instance is dedicated to a single customer and consists of a cluster of multiple HSM partitions. Each HSM cluster uses a separate customer-specific security domain that cryptographically isolates each customer's HSM cluster.

The HSM is FIPS 140-2 Level 3 validated, which means that it meets the compliance requirements of the Federal Information Processing Standard (FIPS) 140-2 Level 3.

AKV Managed Hardware Security Module (MHSM) also assists with data residency, as it doesn't store or process customer data outside the region in which the customer deploys the HSM instance.

Lastly, with AKV MHSM, customers can generate HSM-protected keys in their own on-premises HSM and import them securely into Azure.
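For illustration only, an HSM-protected symmetric key could be provisioned with the Azure Key Vault Keys SDK as sketched below. The instance name contosomhsm and the key name are placeholders, and note that the Web API above retrieves its key material as a vault secret rather than as a managed key.

using Azure.Identity;
using Azure.Security.KeyVault.Keys;

// Illustrative sketch: create a 256-bit, HSM-protected symmetric key
// in a Managed HSM instance ("contosomhsm" is a placeholder name).
var keyClient = new KeyClient(
    new Uri("https://contosomhsm.managedhsm.azure.net/"),
    new DefaultAzureCredential());

var keyOptions = new CreateOctKeyOptions("AesEncryptionKey", hardwareProtected: true)
{
    KeySize = 256
};
KeyVaultKey hsmKey = await keyClient.CreateOctKeyAsync(keyOptions);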

The obtained encryption key is then used to encrypt the payment data with a symmetric cipher. The encrypted value is associated with a newly generated token and added as a message to the queue. In the code snippet below, the token and the encrypted data are stored in a tuple object, which is then enqueued.

    // Encrypt the credit card information with the key retrieved from AKV
    string json = JsonSerializer.Serialize(card);
    string encrypted = SymmetricCipher.EncryptToString(json, key);

    // Generate a fresh random token
    Token token = Token.CreateNew();

    // Pair the token with the encrypted payload and enqueue it
    TokenTuple tuple = new (token, encrypted);
    QueueManager.Instance.Enqueue(tuple);

    // Return the tuple, completing the Get method started above
    return tuple;
}
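The SymmetricCipher helper is not shown in the article either; a minimal sketch, assuming the hypothetical EncryptionKey shape introduced above and AES in its default CBC mode with a random IV prepended to the ciphertext, might look like this:

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Hypothetical implementation of the SymmetricCipher helper used above.
public static class SymmetricCipher
{
    public static string EncryptToString(string plaintext, EncryptionKey key)
    {
        using Aes aes = Aes.Create();
        aes.Key = key.KeyBytes;   // 256-bit key material retrieved from the vault
        aes.GenerateIV();         // fresh IV for every encryption

        using ICryptoTransform encryptor = aes.CreateEncryptor();
        byte[] plainBytes = Encoding.UTF8.GetBytes(plaintext);
        byte[] cipherBytes = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);

        // Prepend the IV so decryption can recover it
        return Convert.ToBase64String(aes.IV.Concat(cipherBytes).ToArray());
    }
}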

The generated token is added to an in-memory queue; there is no persistence of data in the solution. The token expires after a configurable amount of time, typically a few seconds, which allows the payment gateway to process the payment information from the queue. The combination of running this solution on confidential infrastructure and the volatility of data in the queue helps customers make their systems PCI compliant: no sensitive payment data is stored or processed in clear text.
The queue mechanism can be implemented with any highly reliable queue engine, such as RabbitMQ. Because the solution runs in a confidential VM, the confidentiality of data in the queue is retained even during in-memory processing by a third-party application such as RabbitMQ, with no code changes.
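The QueueManager singleton referenced in the snippets is likewise not shown; a minimal in-process sketch, assuming the article's TokenTuple type and a timestamp-based expiry enforced on dequeue, could look like this:

using System;
using System.Collections.Concurrent;

// Hypothetical in-process implementation of the QueueManager singleton.
// Expired tuples are dropped on dequeue, so stale payment data never
// leaves the queue.
public sealed class QueueManager
{
    public static QueueManager Instance { get; } = new QueueManager();

    private readonly ConcurrentQueue<(TokenTuple Tuple, DateTime EnqueuedAt)> _queue = new();
    private readonly TimeSpan _ttl = TimeSpan.FromSeconds(30); // configurable token lifetime

    private QueueManager() { }

    public void Enqueue(TokenTuple tuple) => _queue.Enqueue((tuple, DateTime.UtcNow));

    public TokenTuple Dequeue()
    {
        while (_queue.TryDequeue(out var entry))
        {
            if (DateTime.UtcNow - entry.EnqueuedAt <= _ttl)
                return entry.Tuple;
            // Expired entries are silently discarded
        }
        throw new InvalidOperationException("No unexpired token is available.");
    }
}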
The payment gateway implements the Publish-Subscribe (Pub-Sub) pattern for retrieving messages from the queue, using a webhook to register the endpoint to invoke when de-queuing a message.
[HttpGet(Name = "ResolveToken")]
        public async Task Post(string subscriberUri)
        {
            TokenTuple tuple = QueueManager.Instance.Dequeue();
            await HttpClientFactory.PostAsync(subscriberUri, tuple);
        }
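On the receiving side, the payment gateway's registered webhook endpoint could be as simple as the following sketch; the route, the action name, and the decryption step are illustrative assumptions:

// Hypothetical subscriber endpoint on the payment gateway side.
[HttpPost("payments/token-tuples")]
public IActionResult ReceiveTokenTuple([FromBody] TokenTuple tuple)
{
    // Decrypt the tuple's payload with the shared AES key from the vault,
    // then submit the payment for processing (omitted here)
    return Ok();
}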

Get started

To get started with Azure confidential computing and implement a similar solution, I recommend having a look at our official Azure confidential computing documentation.

More specifically, you may want to start by creating a confidential VM as your test environment for publishing your code. You can follow the instructions described in this article to configure a CVM manually in the Azure Portal, or you may want to leverage an ARM template for automation.

All virtual machines in Azure are protected with policies and access constraints. Confidential VMs add defense in depth at the hardware root: any data and code running in a confidential VM are isolated from the hypervisor and thus protected from the cloud service provider. As with any IaaS service, you are still responsible for provisioning and maintenance, including OS patching and runtime installation. And as with any other VM, you have the freedom to install and run any software that is compatible with the installed operating system. This enables you to “lift and shift” existing applications and code to Azure confidential computing and get the immediate benefit of the in-memory data protection that Azure confidential computing delivers.

References

1The Payment Card Industry Data Security Standard (PCI DSS).

2The Monetary Authority of Singapore (MAS).

3Advisory on Addressing the Technology and Cyber Security Risks Associated with Public Cloud Adoption, MAS, June 1, 2021.
Source: Azure

Microsoft DDoS protection response guide

Receiving Distributed Denial of Service (DDoS) attack threats?

DDoS threats have seen a significant rise in frequency lately, and Microsoft stopped numerous large-scale DDoS attacks last year. This guide provides an overview of what Microsoft provides at the platform level, information on recent mitigations, and best practices.

Microsoft DDoS platform

Microsoft provides robust protection against layer three (L3) and layer four (L4) DDoS attacks, which include TCP SYN, new connections, and UDP/ICMP/TCP floods.
Microsoft DDoS Protection utilizes Azure’s global deployment scale, is distributed in nature, and offers 60 Tbps of global attack mitigation capacity.
All Microsoft services (including Microsoft 365, Azure, and Xbox) are protected by platform-level DDoS protection. Microsoft's cloud services are intentionally built to support high loads, which helps protect against application-level DDoS attacks.
All Azure public endpoint VIPs (virtual IP addresses) are guarded at platform-safe thresholds. The protection extends to traffic flows inbound from the internet, outbound to the internet, and from region to region.
Microsoft uses standard detection and mitigation techniques such as SYN cookies, rate limiting, and connection limits to protect against DDoS attacks. To support automated protections, a cross-workload DDoS incident response team identifies the roles and responsibilities across teams, the criteria for escalations, and the protocols for incident handling across affected teams.
Microsoft also takes a proactive approach to DDoS defense. Botnets are a common source of command and control for conducting DDoS attacks to amplify attacks and maintain anonymity. The Microsoft Digital Crimes Unit (DCU) focuses on identifying, investigating, and disrupting malware distribution and communications infrastructure to reduce the scale and impact of botnets.

Recent incidents1

At Microsoft, despite the evolving challenges in the cyber landscape, the Azure DDoS Protection team has successfully mitigated some of the largest DDoS attacks ever reported, both in Azure and in the industry at large.

In October 2021, Microsoft reported on a 2.4 terabit per second (Tbps) DDoS attack in Azure that we successfully mitigated. Since then, we have mitigated three larger attacks.
In November 2021, Microsoft mitigated a DDoS attack with a throughput of 3.47 Tbps and a packet rate of 340 million packets per second (pps), targeting an Azure customer in Asia. As of February 2022, this is believed to be the largest attack ever reported in history. It was a distributed attack originating from approximately 10,000 sources and from multiple countries across the globe, including the United States, China, South Korea, Russia, Thailand, India, Vietnam, Iran, Indonesia, and Taiwan.

Protect your applications in Azure against DDoS attacks in three steps:

Customers can protect their Azure workloads by onboarding to Azure DDoS Protection Standard. For web workloads, we recommend using a web application firewall in conjunction with DDoS Protection Standard for extensive L3-L7 protection.

1. Evaluate risks for your Azure applications. This is the time to understand the scope of your risk from a DDoS attack if you haven’t done so already.

a. If there are virtual networks with applications exposed over the public internet, we strongly recommend enabling DDoS Protection on those virtual networks. Resources in a virtual network that require protection against DDoS attacks include Azure Application Gateway and Azure Web Application Firewall (WAF), Azure Load Balancer, virtual machines, Azure Bastion, Kubernetes, and Azure Firewall. Review “DDoS Protection reference architectures” for more details on reference architectures to protect resources in virtual networks against DDoS attacks.

2. Validate your assumptions. Planning and preparation are crucial to understanding how a system will perform during a DDoS attack. Defend against DDoS attacks proactively; don’t wait for an attack to happen and then act.

a. It is essential that you understand the normal behavior of an application and are prepared to act if it does not behave as expected during a DDoS attack. Configure monitors for your business-critical applications that mimic client behavior and notify you when relevant anomalies are detected. Refer to monitoring and diagnostics best practices to gain insight into the health of your application.

b. Azure Application Insights is an extensible application performance management (APM) service for web developers on multiple platforms. Use Application Insights to monitor your live web application; it automatically detects performance anomalies and includes analytics tools to help you diagnose issues and understand what users do with your app. It's designed to help you continuously improve performance and usability (a minimal setup sketch follows this list).

c. Finally, test your assumptions about how your services will respond to an attack by generating traffic against your applications to simulate a DDoS attack. Don’t wait for an actual attack to happen! We have partnered with Ixia, a Keysight company, to provide a self-service traffic generator (BreakingPoint Cloud) that allows Azure DDoS Protection customers to simulate DDoS test traffic against their Azure public endpoints.
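As referenced in step 2b, enabling Application Insights in a .NET 6 web app is a one-line service registration. This is a minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore package is installed and the connection string is supplied via configuration:

// Minimal sketch: enable Application Insights telemetry in a .NET 6 web app.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.MapGet("/", () => "Hello from a monitored app!");
app.Run();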

3. Configure alerts and attack analytics. Azure DDoS Protection identifies and mitigates DDoS attacks without any user intervention.

a. To get notified when there’s an active mitigation for a protected public IP, we recommend configuring an alert on the metric “Under DDoS attack or not.” DDoS attack mitigation alerts are automatically sent to Microsoft Defender for Cloud.

b. You should also configure attack analytics to understand the scale of the attack, traffic being dropped, and other details.

Best practices to follow

Provision enough service capacity and enable auto-scaling to absorb the initial burst of a DDoS attack.
Reduce attack surfaces; reevaluate the public endpoints and decide whether they need to be publicly accessible.
If applicable, configure network security groups to further lock down your attack surface.
If IIS (Internet Information Services) is used, leverage IIS Dynamic IP Address Restrictions to control traffic from malicious IPs.
Set up monitoring and alerting if you have not done so already.
Some of the counters to monitor:

TCP connection established
Web current connections
Web connection attempts

Optionally, use third-party security offerings, such as web application firewalls or inline virtual appliances, from the Azure Marketplace for additional L7 protection that is not covered via Azure DDoS Protection and Azure WAF (Azure Web Application Firewall).

When to contact Microsoft support

During a DDoS attack, you find that the performance of the protected resource is severely degraded or the resource is unavailable. Review step two above on configuring monitors to detect resource availability and performance issues.
You think your resource is under DDoS attack, but the DDoS Protection service is not mitigating the attack effectively.
You're planning a viral event that will significantly increase your network traffic.

For attacks that have a critical business impact, create a severity-A support ticket to engage the DDoS Rapid Response team.

References

1Azure DDoS Protection—2021 Q3 and Q4 DDoS attack trends
Source: Azure