IoT Signals retail report: IoT’s promise for retail will be unlocked by addressing security, privacy, and compliance

Few industries have been disrupted by emerging technology quite like retail. From exploding online sales to the growth of mobile shopping, the industry has made a permanent shift to accommodate digital consumers.

The rise of IoT has forced the retail industry to take notice; IDC expects that by 2025 there will be 41.6 billion connected IoT devices or ‘things,’ generating more than 79 zettabytes (ZB) of data. These billions of devices are creating unprecedented visibility into a business, leading to transformation of operations, from the supply chain to automated checkout, personalized discounts, smart shelves, and other advances powered by IoT. In fact, IoT can help brick-and-mortar stores create customer experiences that rival those of online stores; for instance, customers can be sent alerts about discounts relevant to them when they get close to a store, and those stores can use IoT to keep track of inventory and increase efficiency.

Today we're sharing a new IoT Signals report focused on the retail industry that provides an industry pulse on the state of IoT adoption to help inform how we better serve our partners and customers, as well as help retail leaders develop their own IoT strategies. We surveyed 168 decision makers in enterprise retail organizations to deliver an industry-level view of the IoT ecosystem, including adoption rates, related technology trends, challenges, and benefits of IoT.

The study found that while IoT is almost universally adopted in retail and considered critical to success, companies are challenged by compliance, privacy concerns, and skills shortages. To summarize the findings:

Retail IoT is strong and improving customer experience is a growth opportunity. Retailers’ future planning focuses on IoT projects that help customers get in and out quickly, which increases revenue. Areas like automated checkout and optimizing inventory and layout are key, and survey respondents rank store analytics (57 percent) and supply chain optimization and inventory tracking (48 percent) as the top two IoT use cases.
AI is integral to IoT and retailers who incorporate it achieve greater IoT success. For many retail IoT decision makers (44 percent), AI is a core component of their IoT solutions. Furthermore, retailers who leverage AI say they are able to use their IoT solutions more quickly and more fully. They also plan to use IoT even more in the future than those not integrating AI. Those surveyed who use AI as a core part of their solutions are more likely to use it for layout optimization, digital signage, smart shelving, and in-store contextualized marketing (including beacons).
Across regions, unique retail benefits and challenges emerge around IoT, but all are committed. Globally, IoT is being widely adopted in retail, with the survey respondents in the US, UK, and France all reporting 92 percent IoT adoption. In the US, IoT is often utilized for security and store analytics (65 percent each), while store analytics (49 percent) and supply chain and store optimization (43 percent) are more popular uses in Europe. Despite a variety of adoption barriers across regions, retailers are dedicated to overcoming challenges and leveraging IoT even more in the future.
IoT is seen as critical to retail business success. Nearly 9 in 10 (87 percent) surveyed consider IoT as critical to their business success. Looking forward, respondents believe the biggest benefits they will see from IoT adoption include increased efficiency (69 percent), cost savings (64 percent), increased competitive advantage (62 percent), and new revenue streams (56 percent).
The biggest barriers to success for retailers include budget, privacy concerns, compliance challenges, and talent. In the US, the top three concerns of retailers surveyed are a lack of budget, consumer privacy concerns, and lack of technical knowledge. In Europe, compliance and regulatory challenges top the list, followed by human resources and timing and deployment issues. Despite these challenges, the future of IoT looks bright, with 82 percent of US and 73 percent of European respondents anticipating greater IoT implementation in the future.

Microsoft is leading the charge to address these IoT challenges

We're committed to helping retail customers bring their vision to life with IoT, and this starts with simplifying and securing IoT. Our customers are embracing IoT as a core strategy to drive better business outcomes, and we are investing heavily in this space, committing $5 billion to IoT and intelligent edge innovation by 2022 and growing our IoT and intelligent edge partner ecosystem to over 10,000 partners.

We're dramatically simplifying IoT to enable every business on the planet to benefit. We have the most comprehensive and complete IoT platform, and we are going beyond that to simplify IoT. Key examples include Azure IoT Central, which enables customers and partners to provision an IoT app in seconds, customize it in hours, and go to production the same day. To help ensure that retailers have a robust talent pool of IoT developers, we've developed both an IoT School and an AI School, which provide free training for common application patterns and deployments.

Security is crucial for trust and integrity in IoT cloud- and edge-connected devices because they may not always be in trusted custody. Azure Sphere takes a holistic security approach from silicon to cloud, providing a highly secure solution for connected microcontroller units (MCUs), which go into devices ranging from connected home devices to medical and industrial equipment. Azure Security Center provides unified security management and advanced threat protection for systems running in the cloud and on the edge.

Finally, we’re helping our retail customers leverage their IoT investments with AI at the intelligent edge. Azure IoT Edge enables customers to distribute cloud intelligence to run in isolation on IoT devices directly and Azure Databox Edge builds on Azure IoT Edge and adds virtual machine and mass storage support. Going forward, Azure Digital Twins (currently in preview) will enable retailers to create complete virtual models of physical environments, making it easy to unlock insights into their retail environments.

When IoT is foundational to a retailer’s transformation strategy, it can have a significantly positive impact on the bottom line, customer experiences, and products. We are invested in helping our partners, customers, and the broader industry to take the necessary steps to address barriers to success. Read the full IoT Signals Retail Report and learn how we are helping retailers embrace the future and unlock new opportunities with IoT.
Source: Azure

Azure is now certified for the ISO/IEC 27701 privacy standard

We are pleased to share that Azure is the first major US cloud provider to achieve certification as a data processor for the new international standard ISO/IEC 27701 Privacy Information Management System (PIMS). The PIMS certification demonstrates that Azure provides a comprehensive set of management and operational controls that can help your organization demonstrate compliance with privacy laws and regulations. Microsoft’s successful audit can also help enable Azure customers to build upon our certification and seek their own certification to more easily comply with an ever-increasing number of global privacy requirements.

Being the first major US cloud provider to achieve a PIMS certification is the latest in a series of privacy firsts for Azure, including being the first to achieve compliance with EU Model clauses. Microsoft was also the first major cloud provider to voluntarily extend the core data privacy rights included in the GDPR (General Data Protection Regulation) to customers around the world.

PIMS is built as an extension of the widely used ISO/IEC 27001 standard for information security management, making the implementation of PIMS’s privacy information management system a helpful compliance extension for the many organizations that rely on ISO/IEC 27001, as well as creating a strong integration point for aligning security and privacy controls. PIMS accomplishes this through a framework for managing personal data that can be used by both data controllers and data processors, a key distinction for GDPR compliance. In addition, any PIMS audit requires the organization to declare applicable laws and regulations in its audit criteria, meaning that the standard can be mapped to many of the requirements under the GDPR, the CCPA (California Consumer Privacy Act), or other laws. This universal framework allows organizations to efficiently operationalize compliance with new regulatory requirements.

PIMS also helps customers by providing a template for implementing compliance with new privacy regulations, helping reduce the need for multiple certifications and audits against new requirements and thereby saving both time and money. This will be critical for supply chain business relationships as well as cross-border data movement. 

This short video demonstrates how Microsoft complies with ISO/IEC 27701 and how our compliance benefits customers.

Schellman & Company LLC issued a certificate of registration for ISO/IEC 27701:2019 covering the requirements, controls, and guidelines for implementing a privacy information management system as an extension to ISO/IEC 27001:2013. The certification applies to Microsoft as a personally identifiable information (PII) processor for the information security management system supporting Microsoft Azure, Dynamics, and other online services deployed in the Azure Public, Government, and Germany clouds, including their development, operations, and infrastructure and their associated security, privacy, and compliance, per the statement of applicability version 2019-02. A copy of the certification is available on the Service Trust Portal.

Modern business is driven by digital transformation, including the ability to deeply understand data and unlock the power of big data analytics and AI. But before customers – and regulators – will allow you to leverage this data, you must first win their trust. Microsoft simplifies this privacy burden with tools that can help you automate privacy, including built-in controls like PIMS. 

Microsoft has longstanding commitments to privacy, and we continue to take steps to give customers more control over their data. Our Trusted Cloud is built on our commitments to privacy, security, transparency, and compliance, and our Trust Center provides access to validated audit reports, data management capabilities, and information about the number of legal demands we received for customer data from law enforcement.
Source: Azure

Retailers embrace Azure IoT Central

For many retailers around the world, the busiest quarter of the year just finished, with holiday shopping running from Black Friday and Cyber Monday through Boxing Day. From supply chain optimization to digital distribution and in-store analytics, the retail industry has wholeheartedly embraced IoT technology to support those spikes in demand, particularly in scenarios where brands need to build flexibility, hire strong talent, and optimize the customer experience in order to build brand loyalty. In our latest IoT Signals for Retail research, commissioned by Microsoft and released in January 2020, we explore the top insights from leaders who are using IoT today. We discuss growth areas such as improving the customer experience, the use of artificial intelligence to achieve breakthrough success, and nuances between global markets around security concerns and compliance.

Building retail IoT solutions with Azure IoT Central

As Microsoft and its global partners continue to turn retail insights into solutions that empower retailers around the world, a key question continues to face decision makers about IoT investments: whether to build a solution from scratch or buy one that fits their needs. For many solution builders, Azure IoT Central is the perfect fit: a fully managed IoT platform with predictable pricing and unique features like retail-specific application templates that can accelerate solution development, thanks to the inclusion of over 30 underlying Azure services. Let us manage the services so you can focus on what’s most important: applying your deep industry knowledge to help your customers.

New tools to accelerate building a retail IoT Solution

Today we are excited to announce the addition of our sixth IoT Central retail application template for solution builders. The micro-fulfillment center template showcases how connectivity and automation can reduce cost by eliminating downtime, increasing security, and improving efficiency. App templates help solution builders get started quickly and include sample operator dashboards, sample device templates, simulated devices producing real-time data, access to Plug and Play devices, and security features that give you peace of mind. Fulfillment optimization is a cornerstone of operations for many retailers, and optimizing early may offer significant returns in the future. Application templates are helping solution builders overcome challenges like getting past the proof-of-concept phase or building rapid business cases for new IoT scenarios.

IoT Central Retail Application Templates for solution builders.

Innovative Retailers share their IoT stories

In addition to rich industry insights like those found in IoT Signals for Retail, we are proudly releasing three case stories detailing decisions, trade-offs, processes, and results from top global brands investing in IoT solutions, and the retail solution builders supporting them. Read more about how these companies are implementing and winning with their IoT investments and uncover details that might offer you an edge as you navigate your own investments and opportunities.

South Africa Breweries and CIRT team up to solve a cooler tracking conundrum

South Africa Breweries, a subsidiary of AB InBev, the world’s largest brewing company, is committed to keeping its product fresh and cold for customers, a challenge that most consumers take for granted. From tracking missing coolers to reducing costs and achieving sustainability goals, Sameer Jooma, Director of Innovation and Analytics for AB InBev, turned to IoT innovation led by Consumption Information Real Time (CIRT), a South African solution builder. CIRT was tasked with piloting Fridgeloc Connected Cooler, a cooler monitoring system providing real-time insight into the temperature (both internal cooler and condenser), connected state, and location of hundreds of coolers across urban and rural South Africa. Revamping an existing cooler audit process that involved auditors visiting dealer locations to verify that a cooler was in the right place, and tracking the time between delivery and installation at an outlet, are just two of the process optimization benefits found by Jooma.

“The management team wanted to have a view of the coolers, and to be able to manage them centrally at a national level. IoT Central enabled us to gain that live view.” – Sameer Jooma, Director: Innovation and Analytics, AB InBev.

Learn more about the universal cooler challenges that face merchants and consumer packaged goods companies worldwide in the case story.

On the “road” to a connected cooler in rural South Africa, a field technician gets stuck in the sand on his way to the tavern.

Fridgeloc Connected Cooler at a tavern in Soweto, South Africa.

Mars Incorporated Halloween display campaign unveils new insights thanks to Footmarks Inc.

For most consumer packaged goods companies, sales spike during holiday times thanks to investments across the marketing and sales mix, from online display advertising to in-store physical displays. This past Halloween, Jason Wood, Global Display Development Head at Mars Inc., a global manufacturer of confectionery and other food products, decided it was time to gain deeper insights into an age-old problem: tracking where product displays go after they leave the warehouse. Previously, Mars was only able to track the number of displays it produced and how many left its warehouses for retailer destinations. They found the right partner in Footmarks Inc., which designed its beacon- and gateway-based display tracking solution with Azure IoT Central to deliver secure, simple, and scalable insights into what happens once displays begin transit. Several interesting insights emerged throughout the campaign and afterward.

"Information on when displays came off the floor were surprising—major insights that we wouldn't have been able to get to without the solution." – Jason Wood, Global Display Development Head, Mars Inc.

Learn more about challenges Mars and Footmarks faced scaling, pricing, and managing devices for display tracking in the case story.

Footmarks Inc. Smart Connect Cloud dashboard for Mars Wrigley showing the display tracking solution using IoT sensors for the 2019 Halloween campaign.

Microsoft turns to C.H. Robinson and Intel for Xbox and Surface supply chain visibility

In advance of the busy 2019 holiday season and the introduction of many new Surface SKUs, the Microsoft supply chain team was interested in testing the benefits of a single platform connecting IoT devices on shipments globally, streamlining analytics and device management. This Microsoft team was also thinking ahead, preparing for the launch of the latest Xbox console, Xbox Series X, and for a series of new Surface product launches. With Surface and Xbox demand projected to grow around the world, the need for insights and appropriate actions along the supply chain was only going to increase. The Microsoft team partnered with TMC (a division of C.H. Robinson), a global technology and logistics management provider that partnered with Intel, to design a transformative solution based on its existing Navisphere Vision software that could be deployed globally using Azure IoT Central. The goal was to track and monitor shipments’ ambient conditions for shock, light, and temperature to identify any damage in real time, anywhere in the world, at a scale covering millions of products.

“The real power comes in the combination of C.H. Robinson’s Navisphere Vision, technology that is built by and for supply chain experts, and the speed, security, and connectivity of Azure IoT Central.” – Chris Cutshaw, Director of Commercial and Product Strategy at TMC

Learn more about the results from the recent holiday season and what Navisphere Vision can do for global supply chain visibility in the case story.

Navisphere Vision dashboard showing IoT Sensors activity, managed through Azure IoT Central.

Getting started

NRF 2020: Retail's Big Show is happening in Manhattan from January 12 to 14. Azure IoT and other experts including retail solution builders Attabotics, C.H. Robinson, and CIRT will be in attendance.

Read the full IoT Signals for Retail report.

Get started with Azure IoT Central today.

Learn more about the solutions being used by these customers today.

Footmarks Inc. Smart Tracking asset tracking for consumer packaged goods companies.
CIRT Fridgeloc solution.
C.H. Robinson Navisphere Vision solution.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.

Source: Azure

Azure Cost Management 2019 year in review

When we talk about cost management, we focus on three core tenets:

Ensuring cost visibility so everyone is aware of the financial impact their solutions have.
Driving accountability throughout the organization to stop bad spending patterns.
Continuous cost optimization as your usage changes over time to do more with less.

These were the driving forces in 2019 as we set out to build a strong foundation that pulls together all costs across all account types and ensures everyone in the organization has a means to report on, control, and optimize costs. Our ultimate goal is to empower you to lead a healthier, more financially responsible organization.

All costs behind a single pane of glass

On the heels of the Azure Cost Management preview, 2019 started off strong with the general availability of Enterprise Agreement (EA) accounts in February and pay-as-you-go (PAYG) in April. At the same time, Microsoft as a whole embarked on a journey to modernize the entire commerce platform with the new Microsoft Customer Agreement (MCA), which started rolling out for enterprises in March, pay-as-you-go subscriptions in July, and Cloud Solution Providers (CSP) using Azure plan in November. Whether you get Azure through the Microsoft field, directly from Azure.com, or through a Microsoft partner, you have the power of Azure Cost Management at your fingertips. But getting basic coverage of your Azure usage is only part of the story.

To effectively manage costs, you need all costs together, in a single repository. This is exactly what Azure Cost Management brings you. From the unprecedented ability to monitor Amazon Web Services (AWS) costs within the Azure portal in May (a first for any cloud provider), to the inclusion of reservation and Marketplace purchases in June, Azure Cost Management enables you to manage all your costs from a single pane of glass, whether you're using Azure or AWS.

What's next?

Support for Sponsorship and CSP subscriptions not on an Azure plan is at the top of the list, ensuring every Azure subscription can use Azure Cost Management. AWS support will become generally available, and then Google Cloud Platform (GCP) support will be added.

Making it easier to report on and analyze costs

Getting all costs in one place is only the beginning. 2019 also saw many improvements that help you report on and analyze costs. You were able to dig in and explore costs with the 2018 preview, but the only way to truly control and optimize costs is to raise awareness of current spending patterns. To that end, reporting in 2019 was focused on making it easier to customize and share.

The year kicked off with the ability to pin customized views to the Azure portal dashboard in January. You could share links in May, save views directly from cost analysis in August, and download charts as an image in September. You also saw a major Power BI refresh in October that no longer required classic API keys and added reservation details and recommendations. Each option helps you not only save time, but also starts that journey of driving accountability by ensuring everyone is aware of the costs they're responsible for.

Looking beyond sharing, you also saw new capabilities like forecasting costs in June and switching between currencies in July, simpler out-of-the-box options like the new date picker in May and invoice details view in September, and changes that simply help you get your job done the way you want to like support for the Azure portal dark theme and continuous accessibility improvements throughout the year.

From an API automation and integration perspective, 2019 was also a critical milestone as EA cost and usage APIs moved to Azure Resource Manager. The Resource Manager APIs are forward-looking and designed to minimize your effort when it comes time to transition to Microsoft Customer Agreement by standardizing terminology across account types. If you haven't started the migration to the Resource Manager APIs, make that your number one resolution for the new year!

What's next?

2020 will continue down this path, from more flexible reporting and scheduling email notifications to general improvements around ease of use and increased visibility throughout the Azure portal. Power BI will get Azure reservation and Hybrid Benefit reports as well as support for subscription and resource group users who don't have access to the whole billing account. You can also expect to see continued API improvements to help make it easier than ever to integrate cost data into your business systems and processes.

Flexible cost control that puts the power in your hands

Once you understand what you're spending and where, your next step is to figure out how to stop the bad spending patterns and keep costs under control. You already know you can define budgets to get notified about and take action on overages. You decide what actions you want to take, whether that be as simple as an email notification or as drastic as deleting all your resources to ensure you won't be charged. Cost control in 2019 was centered on helping you stay on top of your costs and giving you the tools to control spending as you see fit.

This started with a new, consolidated alerts experience in February where you can see all your invoice, credit, and budget overage alerts in a single place. Budgets were expanded to support new account types we talked about above, and to support management groups in June giving you a view of all your costs across subscriptions. Then in August, you were able to create targeted budgets with filters for fine-grained tracking, whether that be for an entire service, a single resource, or an application that spans multiple subscriptions (via tags). This also came with an improved experience when creating budgets to help you better estimate what your budget should be based on historical and forecasted trends.
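The targeted budgets described above are created as `Microsoft.Consumption/budgets` resources through the Azure REST API. The sketch below builds an illustrative request body for a budget with a tag filter and an overage notification. Treat the details as assumptions to verify against the current Azure REST reference: the `api-version`, the exact filter schema, and all names, amounts, and emails here are hypothetical examples.

```python
import json

# Illustrative budget definition with a tag filter. Field names follow the
# general Microsoft.Consumption/budgets REST shape, but the api-version and
# filter schema are assumptions to check against the Azure documentation.
scope = "/subscriptions/00000000-0000-0000-0000-000000000000"  # placeholder
budget_name = "app-team-monthly"  # hypothetical name

body = {
    "properties": {
        "category": "Cost",
        "amount": 5000,
        "timeGrain": "Monthly",
        "timePeriod": {
            "startDate": "2020-01-01T00:00:00Z",
            "endDate": "2020-12-31T00:00:00Z",
        },
        # Scope the budget to one application via a tag filter, so it can
        # span multiple subscriptions that share the same tag.
        "filter": {
            "tags": {"name": "app", "operator": "In", "values": ["storefront"]}
        },
        # Notify at 80 percent of actual spend against the budget.
        "notifications": {
            "actual_gt_80_percent": {
                "enabled": True,
                "operator": "GreaterThan",
                "threshold": 80,
                "contactEmails": ["finance@example.com"],
            }
        },
    }
}

# PUT this payload to the budgets endpoint under the chosen scope.
url = (f"https://management.azure.com{scope}"
       f"/providers/Microsoft.Consumption/budgets/{budget_name}"
       "?api-version=2019-10-01")  # api-version is an assumption

payload = json.dumps(body)
```

The same body works at subscription, resource group, or management group scope by changing only the `scope` prefix, which is what allows a single budget to track an entire service or a tagged application.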

What's next?

2020 will take cost control to the next level by allowing you to split shared costs with cost allocation rules and define an additional markup for central teams who typically run on overhead or don't want to expose discounts to the organization. We're also looking at improvements around management groups and tags to give you more flexibility to manage costs the way you need to for your organization.

New ways to save and do more with less

Cloud computing comes with a lot of promises, from flexibility and speed to scalability and security. The promise of cost savings is often the driving force behind cloud migrations, yet is also one of the more elusive to achieve. Luckily, Azure delivers new cost optimization opportunities nearly every month! This is on top of the recommendations offered by Azure Advisor, which are specifically tuned to save money on the resources you already have deployed. Here are a few of the over two dozen new cost saving opportunities you saw in 2019:

New pricing options for virtual machines, SQL databases, Azure Monitor, Azure DevOps, and Azure Search.
Reduced prices for services you're already using, like Azure Archive Storage, Azure App Service, Azure Container Instances, Content Delivery Network, and Azure AD B2C.
Promotional pricing for new virtual machine, App Service, and Azure Front Door Service offers.
New features with lower prices for running multiple workloads, like Azure Dedicated Host and Azure SQL Database instance pools.
Expanded set of Azure reservation offers – now available for 16 services.
More flexible ways to pay for reservations with monthly payment options.
New and updated recommendations in Azure Advisor, like improved right-sizing recommendations.

What's next?

Expect to see continued updates in these areas through 2020. We're also partnering with individual service teams to deliver even more built-in recommendations for database, storage, and PaaS services, just to name a few.

Streamlined account and subscription management

Throughout 2019, you may have noticed a lot of changes to Cost Management + Billing in the Azure portal. What was purely focused on PAYG subscriptions in early 2018 became a central hub for billing administrators in 2019 with full administration for MCA accounts in March, new EA account management capabilities in July, and subscription provisioning and transfer updates in August. All of these are helping you get one step closer to having a single portal to manage every aspect of your account.

What's next?

2020 will be the year of converged and consolidated experiences for Cost Management + Billing. This will start with the Billing and Cost Management experiences within the Azure portal and will expand to include capabilities you're currently using the EA, Account, or Cloudyn portals for today. Whichever portal you use, expect to see all these come together into a single, consolidated experience that has more consistency across account types. This will be especially evident as your account moves from the classic EA, PAYG, and CSP programs to Microsoft Customer Agreement (and Azure plan), which is fully managed within the Azure portal and offers critical new billing capabilities, like finer-grained access control and grouping subscriptions into separate invoices.

Looking forward to another year

The past 12 months have been packed with one improvement after another, and we're just getting started! We couldn't list them all here, but if you only take one thing away, please do check out and subscribe to the Azure Cost Management monthly updates for the latest news on what's changed and what's coming. We've already talked about what you can expect to see in 2020 for each area, but the key takeaway is:

2020 will bring one experience to manage all your Azure, AWS, and GCP costs from the Azure portal, with simpler, yet more powerful cost reporting, control, and optimization tools that help you stay more focused on your mission.

We look forward to hearing your feedback as these new and updated capabilities become available. And if you're interested in the latest features, before they're available to everyone, check out Azure Cost Management Labs (introduced in July) and don’t hesitate to reach out with any feedback. Cost Management Labs gives you a direct line to the Azure Cost Management engineering team and is the best way to influence and make an immediate impact on features being actively developed and tuned for you.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks! And, as always, share your ideas and vote up others in the Cost Management feedback forum. See you in 2020!
Source: Azure

Advancing no-impact and low-impact maintenance technologies

“This post continues our reliability series kicked off by my July blog post highlighting several initiatives underway to keep improving platform availability, as part of our commitment to provide a trusted set of cloud services. Today I wanted to double-click on the investments we’ve made in no-impact and low-impact update technologies including hot patching, memory-preserving maintenance, and live migration. We’ve deployed dozens of security and reliability patches to host infrastructure in the past year, many of which were implemented with no customer impact or downtime. The post that follows was written by John Slack from our core operating systems team, who is the Program Manager for several of the update technologies discussed below.” – Mark Russinovich, CTO, Azure

This post was co-authored by Apurva Thanky, Cristina del Amo Casado, and Shantanu Srivastava from the engineering teams responsible for these technologies.

 

We regularly update Azure host infrastructure to improve the reliability, performance, and security of the platform. While the purposes of these ‘maintenance’ updates vary, they typically involve updating software components in the hosting environment or decommissioning hardware. If we go back five years, the only way to apply some of these updates was by fully rebooting the entire host. This approach took customer virtual machines (VMs) down for minutes at a time. Since then, we have invested in a variety of technologies to minimize customer impact when updating the fleet. Today, the vast majority of updates to the host operating system are deployed in place with absolute transparency and zero customer impact using hot patching. In infrequent cases in which the update cannot be hot patched, we typically utilize low-impact memory preserving update technologies to roll out the update.

Even with these technologies, there are still rare cases in which we need to perform more impactful maintenance (such as evacuating faulty hardware or decommissioning old hardware). In such cases, we use a combination of live migration, in-VM notifications, and planned maintenance windows that give customers control.

Thanks to continued investments in this space, we are at a point where the vast majority of host maintenance activities do not impact the VMs hosted on the affected infrastructure. We’re writing this post to be transparent about the different techniques that we use to ensure that Azure updates are minimally impactful.

Plan A: Hot patching

Function-level “hot” patching provides the ability to make targeted changes to running code without incurring any downtime for customer VMs. It does this by redirecting all new invocations of a function on the host to an updated version of that function, so it is considered a ‘no impact’ update technology. Wherever possible, we use hot patching to apply host updates, completely avoiding any impact to the VMs running on that host. We have been using hot patching in Azure since 2017 and have since worked to broaden the scope of what we can hot patch; for example, in 2018 we updated the host operating system to allow the hypervisor itself to be hot patched. Looking forward, we are exploring firmware hot patches. This is an area where the industry typically hasn't focused: firmware has always been viewed as ‘if you need to update it, reboot the server,’ but we know that makes for a terrible customer experience. We've been working with hardware manufacturers, and reconsidering our own firmware, to make firmware hot patchable and incrementally updatable.
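
The redirection idea can be sketched in ordinary Python. This is an illustration of the concept only: Azure's actual hot patching operates on compiled host code, and every name below (`hot_patchable`, `apply_hot_patch`, `checksum`) is invented for the sketch.

```python
# Sketch of function-level hot patching: new invocations of a function are
# redirected to an updated implementation, with no restart of the process.

_patch_table = {}  # maps function name -> replacement implementation

def hot_patchable(fn):
    """Route every call through the patch table so that a patch takes
    effect on the very next invocation."""
    def dispatcher(*args, **kwargs):
        target = _patch_table.get(fn.__name__, fn)
        return target(*args, **kwargs)
    dispatcher.__name__ = fn.__name__
    return dispatcher

def apply_hot_patch(name, replacement):
    """Install a new implementation for `name`; in-flight calls finish
    on the old code, new calls get the patched code."""
    _patch_table[name] = replacement

@hot_patchable
def checksum(data):
    return sum(data) % 255          # v1: off-by-one bug (should be % 256)

before = checksum([200, 100])       # runs v1 -> 45
apply_hot_patch("checksum", lambda data: sum(data) % 256)   # v2 fix, applied live
after = checksum([200, 100])        # runs v2 -> 44
```

The key property, mirrored here, is that the fix lands between two invocations without ever stopping the caller.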

Some large host updates contain changes that cannot be applied using function-level hot patching. For those updates, we endeavor to use memory-preserving maintenance.

Plan B: Memory-preserving maintenance

Memory-preserving maintenance involves ‘pausing’ the guest VMs (while preserving their memory in RAM), updating the host server, then resuming the VMs and automatically synchronizing their clocks. We first used memory-preserving maintenance in Azure in 2018 and have since improved the technology in three important ways. First, we have developed less impactful variants of memory-preserving maintenance targeted at host components that can be serviced without a host reboot. Second, we have reduced the duration of the customer-experienced pause. Third, we have expanded the number of VM types that can be updated with memory-preserving maintenance. While we continue to work in this space, some variants of memory-preserving maintenance are still incompatible with certain specialized VM offerings, such as M, N, or H series VMs, for a variety of technical reasons.
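
The pause/update/resume/resync sequence can be sketched as follows. All structures here are invented for illustration; the real mechanism operates at the hypervisor level and never serializes guest memory.

```python
# Sketch of memory-preserving maintenance: guests are paused with memory
# kept in RAM, the host component is updated, then guests resume and their
# clocks are resynchronized with the host.
import time

def memory_preserving_update(vms, update_host):
    for vm in vms:
        vm["state"] = "paused"       # memory stays in RAM; nothing is copied out
    update_host()                    # host-side update while guests are frozen
    host_now = time.time()
    for vm in vms:
        vm["state"] = "running"      # resume...
        vm["clock"] = host_now       # ...and resync the guest clock
    return vms

vms = [{"name": "vm1", "state": "running", "clock": 0.0}]
result = memory_preserving_update(vms, update_host=lambda: None)
```

The clock resynchronization step matters because the guest's notion of time stops during the pause.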

In the rare cases in which we need to perform more impactful maintenance (such as host reboots or VM redeployment), customers are notified in advance and given the opportunity to perform the maintenance at a time suitable for their workloads.

Plan C: Self-service maintenance

Self-service maintenance involves providing customers and partners a window of time within which they can choose when to initiate impactful maintenance on their VM(s). This initial self-service phase typically lasts around a month and empowers organizations to perform the maintenance on their own schedules, with no or minimal disruption to users. At the end of this self-service window, a scheduled maintenance phase begins, during which Azure performs the maintenance automatically. Throughout both phases, customers get full visibility into which VMs have or have not been updated, either in Azure Service Health or by querying in PowerShell/CLI. Azure first offered self-service maintenance in 2018. We generally see that administrators take advantage of the self-service phase rather than wait for Azure to perform maintenance on their VMs automatically.

In addition, when the customer owns the full host machine, either through Azure Dedicated Hosts or Isolated virtual machines, we recently started to offer maintenance control over all non-zero-impact platform updates. This includes rebootless updates that cause only a few seconds of pause, which is useful for VMs running ultra-sensitive workloads that can't sustain any interruption, even one lasting just a few seconds. Customers can choose when to apply these non-zero-impact updates within a 35-day rolling window. This feature is in public preview, and more information can be found in this dedicated blog post.

Sometimes in-place update technologies aren’t viable, such as when a host shows signs of hardware degradation. In such cases, the best option is to move the VM to another host, either under customer control via planned maintenance or through live migration.

Plan D: Live migration

Live migration involves moving a running customer VM from one “source” host to another “destination” host. It starts by moving the VM’s local state (including RAM and local storage) from the source to the destination while the virtual machine is still running. Once most of the local state has moved, the guest VM experiences a short pause, usually lasting five seconds or less, after which it resumes running on the destination host. Azure first started using live migration for maintenance in 2018. Today, when Azure Machine Learning algorithms predict an impending hardware failure, live migration can be used to move guest VMs onto different hosts preemptively.
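
The classic way to keep that pause short is a pre-copy loop: pages are copied while the VM keeps running (and keeps dirtying some of them), and only the final set of dirty pages is copied during the brief pause. A toy model, with all names, page counts, and the dirtying behavior invented for illustration:

```python
# Toy model of pre-copy live migration: iterate while the "VM" runs,
# then pause briefly to copy only the remaining dirty pages.

def live_migrate(source_pages, dirty_per_round, max_rounds=5):
    """Return (destination memory, number of pages copied during the pause)."""
    destination = {}
    dirty = set(source_pages)                 # initially, every page is dirty
    rounds = 0
    while len(dirty) > dirty_per_round and rounds < max_rounds:
        for page in list(dirty):              # copy while the VM still runs
            destination[page] = source_pages[page]
        # Meanwhile the running VM re-dirties a few pages (simulated here):
        dirty = set(list(source_pages)[:dirty_per_round])
        rounds += 1
    # Short pause: the VM is suspended, the last dirty pages are copied,
    # and the VM resumes on the destination host.
    for page in dirty:
        destination[page] = source_pages[page]
    return destination, len(dirty)

memory = {f"page{i}": i for i in range(100)}
dest, paused_copy = live_migrate(memory, dirty_per_round=3)
```

The pause duration is proportional to the final dirty set, which is why the observed interruption can stay at seconds even for large VMs.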

Amongst other topics, planned maintenance and AI Operations were covered in Igal Figlin’s recent Ignite 2019 session “Building resilient applications in Azure.” Watch the recording here for additional context on these, and to learn more about how to take advantage of the various resilient services Azure provides to help you build applications that are inherently resilient.

The future of Azure maintenance 

In summary, the way in which Azure performs maintenance varies significantly depending on the type of updates being applied. Regardless of the specifics, Azure always approaches maintenance with a view towards ensuring the smallest possible impact to customer workloads. This post has outlined several of the technologies that we use to achieve this, and we are working diligently to continue improving the customer experience. As we look toward the future, we are investing heavily in machine learning-based insights and automation to maintain availability and reliability. Eventually, this “AI Operations” model will carry out preventative maintenance, initiate automated mitigations, and identify contributing factors and dependencies during incidents more effectively than our human engineers can. We look forward to sharing more on these topics as we continue to learn and evolve.

Azure Lighthouse: The managed service provider perspective

This blog post was co-authored by Nikhil Jethava, Senior Program Manager, Azure Lighthouse.

Azure Lighthouse became generally available in July this year, and we have seen a tremendous response from Azure managed service provider communities, who are excited about the scale and precision of management that the Azure platform now enables with cross-tenant management. Similarly, customers are empowered to architect precise, just-enough access levels for service providers to their Azure environments. Both customers and partners can decide on the precise scope of the projection.

Azure Lighthouse enables partners to manage multiple customer tenants from a single control plane: their own environment. This enables consistent application of management and automation across hundreds of customers, along with monitoring and analytics to a degree that was unavailable before. The capability works across Azure services (those that are Azure Resource Manager enabled) and across licensing models. Context switching is a thing of the past.

In this article, we will answer some of the most commonly asked questions:

How can MSPs perform daily administration tasks across different customers in their Azure tenant from a single control plane?
How can MSPs secure their intellectual property in the form of code?

Let us deep dive into a few scenarios from the perspective of a managed service provider.

Azure Automation

Your intellectual property is only yours. Using Azure delegated resource management, service providers are no longer required to create Microsoft Azure Automation runbooks under customers’ subscriptions, keeping their IP in the form of runbooks in someone else’s subscription. Automation runbooks can now be stored in a service provider's subscription while their effects are reflected in the customer's subscription. All you need to do is ensure the Automation account's service principal has the required delegated built-in role-based access control (RBAC) role to perform the Automation tasks. Service providers can create Azure Monitor action groups in customers' subscriptions that trigger Azure Automation runbooks residing in the service provider's subscription.

Azure Monitor alerts

Azure Lighthouse allows you to monitor alerts across different tenants under the same roof, without the hassle of storing the logs ingested by different customers' resources in a centralized Log Analytics workspace. This helps your customers stay compliant by allowing them to keep their application logs under their own subscriptions, while empowering you to have a helicopter view of all customers.

Azure Resource Graph Explorer

With Azure delegated resource management, you can query Azure resources from Azure Resource Graph Explorer across tenants. Imagine a scenario where your boss has asked you for a CSV file listing the existing Azure Virtual Machines across all the customers’ tenants. The results of an Azure Resource Graph Explorer query now include the tenant ID, which makes it easy to identify which Virtual Machine belongs to which customer.
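
That boss-wants-a-CSV scenario can be sketched in a few lines. The rows below mimic the shape of a Resource Graph response (in practice you would run a query along the lines of `Resources | where type == 'microsoft.compute/virtualmachines' | project name, tenantId` and feed its rows in); all names here are made up.

```python
# Turn cross-tenant Resource Graph-style results into a CSV report,
# grouped by tenant so each customer's VMs appear together.
import csv
import io

rows = [  # hypothetical query results; tenantId is what makes attribution possible
    {"name": "vm-web-01", "tenantId": "tenant-contoso"},
    {"name": "vm-sql-01", "tenantId": "tenant-fabrikam"},
    {"name": "vm-web-02", "tenantId": "tenant-contoso"},
]

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "tenantId"])
    writer.writeheader()
    for row in sorted(rows, key=lambda r: (r["tenantId"], r["name"])):
        writer.writerow(row)
    return buf.getvalue()

report = to_csv(rows)
```

Sorting on `tenantId` first keeps each customer's machines contiguous in the output.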

Azure Security Center

Azure Lighthouse provides you with cross-tenant visibility of your current security state. You can now monitor compliance with security policies, act on security recommendations, monitor the secure score, detect threats, execute file integrity monitoring (FIM), and more, across tenants.


Azure Virtual Machines

Service providers can perform post-deployment tasks on Azure Virtual Machines across different customers' tenants using Azure Virtual Machine extensions and the Azure Virtual Machine Serial Console, run PowerShell commands using the Run Command option, and more, all in the Azure portal. Most administrative tasks on Azure Virtual Machines across tenants can now be performed quickly, since there is less need to take remote desktop protocol (RDP) access to the Virtual Machines. This also solves a big challenge: admins no longer need to log on to different Azure subscriptions in multiple browser tabs just to get to a Virtual Machine’s resource menu.

Managing user access

Using Azure delegated resource management, MSPs no longer need to create administrator accounts (including contributor, security administrator, backup administrator, and more) in their customers' tenants. This allows them to manage the lifecycle of delegated administrators right within their own Microsoft Azure Active Directory (AD) tenant. Moreover, MSPs can add user accounts to groups in their Azure AD tenant, while customers make sure those groups have the required access to manage their resources. To revoke access when an employee leaves the MSP’s organization, the account can simply be removed from the specific group to which the access has been delegated.

Added advantages for Cloud Solution Providers

Cloud Solution Providers (CSPs) can now save on administration time. Once you’ve set up Azure delegated resource management for your users, there is absolutely no need for them to log in to the Partner Center (navigating through Customers, then a specific customer such as Contoso, then All Resources) to administer customers’ Azure resources.

Also, Azure delegated resource management happens outside the boundaries of the Partner Center portal; delegated user access is managed directly in Azure Active Directory. This means subscription and resource administrators at Cloud Solution Providers are no longer required to have the 'admin agent' role in the Partner Center, so Cloud Solution Providers can now decide which users in their Azure Active Directory tenant have access to which customer, and to what extent.

More information

This is not all: a full list of supported services and scenarios is available in the Azure Lighthouse documentation. Check out Azure Chief Technology Officer Mark Russinovich’s blog for a deep under-the-hood view.

So, what are you waiting for? Get started with Azure Lighthouse today.

Connecting Microsoft Azure and Oracle Cloud in the UK and Canada

In June 2019, Microsoft announced a cloud interoperability collaboration with Oracle that will enable our customers to migrate and run enterprise workloads across Microsoft Azure and Oracle Cloud.

At Oracle OpenWorld in September, the cross-cloud collaboration was a big part of the conversation. Since then, we have fielded interest from mutual customers who want to accelerate their cloud adoption across both Microsoft Azure and Oracle Cloud. Customers are interested in running their Oracle database and enterprise applications on Azure and in the scenarios enabled by the industry’s first cross-cloud interconnect implementation between Azure and Oracle Cloud Infrastructure. Many are also excited about our announcement to integrate Microsoft Teams with Oracle Cloud Applications. We have already enabled the integration of Azure Active Directory with Oracle Cloud Applications and continue to break new ground while engaging with customers and partners.

Interest from the partner community

Partners like Accenture are very supportive of the collaboration between Microsoft Azure and Oracle Cloud. Accenture recently published a white paper, articulating their own perspective and hands-on experiences while configuring the connectivity between Microsoft Azure and Oracle Cloud Infrastructure.

Another Microsoft and Oracle partner that expressed interest early on is SYSCO, a European IT company specializing in solutions for the utilities sector. They offer unique industry expertise combined with highly skilled technology experts in AI and analytics, cloud, infrastructure, and applications. SYSCO is a Microsoft Gold Cloud Platform partner and a Platinum Oracle partner.

In August 2019, we introduced the ability to interconnect Microsoft Azure (UK South) and Oracle Cloud Infrastructure in London, UK, providing our joint customers with a direct, low-latency, and highly reliable network connection between Azure and Oracle Cloud Infrastructure. Before that, for partners like SYSCO, this new collaboration between Microsoft Azure and Oracle Cloud was out of reach.

“The Microsoft Azure and Oracle Cloud Interconnect announcement is one of the best announcements in years for our customers! A direct link provides the Microsoft / Oracle cloud interconnect with a new option for all customers using proprietary business applications. With our expertise across both Microsoft and Oracle, we are thrilled to be one of the first partners to pilot this together with our customers in the utilities industry in Norway.”–Frank Vikingstad VP International – SYSCO

Azure and Oracle Cloud Infrastructure interconnect in Toronto, Canada

Today we are announcing that we have extended the Microsoft Azure and Oracle Cloud Infrastructure interconnect to include the Azure Canada Central region and Oracle Cloud Infrastructure region in Toronto, Canada.

“This unique Azure and Oracle Cloud Infrastructure solution delivers the performance, easy integration, rigorous service level agreements, and collaborative enterprise support that enterprise IT departments need to simplify their operations. We’ve been pleased by the demand for the interconnected cloud solution by our mutual customers around the world and are thrilled to extend these capabilities to our Canadian customers.” –Clive D’Souza, Sr. Director & Head of Product Management, Oracle Cloud Infrastructure

What this means for you

In addition to being able to run certified Oracle databases and applications on Azure, you now have access to new migration and deployment scenarios enabled by the interconnect. For example, you can rely on tested, validated, and supported deployments of Oracle applications on Azure with Oracle databases, Real Application Clusters (RAC) and Exadata, deployed in Oracle Cloud Infrastructure. You can also run custom applications on Azure backed by Oracle’s Autonomous Database on Oracle Cloud Infrastructure.

To learn more about the collaboration between Oracle and Microsoft and how you can run Oracle applications on Azure please refer to our website.

Tips for learning Azure in the new year

As 2020 is upon us, it's natural to reflect on the current year’s achievements (and challenges) and begin planning for the next. One of our New Year’s resolutions was to continue live streaming software development topics to folks all over the world. In our broadcasts in late November and December, the Azure community got a look at some of our 2020 plans. As we shared them, many viewers from across the world typed in the chat that they’d set a New Year’s resolution to learn Azure and would love any pointers.

When we shared our experiences learning Azure in the “early days,” we talked about the many great resources (available at no cost) users can take advantage of right now and carry into the new year and beyond.

Here are a few tips for our developer community to help them keep their resolutions to learn Azure:

Create a free account: The first thing you’ll need to do is create a free account. You can sign up with a Microsoft or GitHub account and get access to 12 months of popular free services, a 30-day Azure free trial with $200 to spend during that period, and over 25 services that are free forever. Once your 30-day trial is over, we’ll notify you so you can decide whether to upgrade to pay-as-you-go pricing and remove the spending limit. In other words, no surprises here, folks!
Stay current with the Azure Application Developer and languages page: This home page is a single, unified destination for developers and architects that covers Azure application development along with all of our language pages such as .NET, Node.js, Python, and more. It is refreshed monthly and your go-to-source for our SDKs, hands-on tutorials, docs, blogs, events, and other Azure resources. Check out our recent Python for Beginners series to jump right in.
Free Developer’s Guide to Azure eBook: This free eBook includes all the updates from Microsoft’s first-party conferences, along with new services and features announced since then. In addition to covering these important services, we drill into practical examples that you can use in the real world, including a table and reference architecture that show you “what to use when” for databases, containers, serverless scenarios, and more. There is also a key focus on security, to help you stop potential threats to your business before they happen, plus brand new sections on IoT, DevOps, and AI/ML that you can take advantage of today. In the more than 20 pages of demos, you’ll dive into topics that include creating and deploying .NET Core web apps and SQL Server to Azure from scratch, then building on the application to perform analysis of the data with Cognitive Services. After the app is created, we make it more robust and easier to update by incorporating CI/CD, using API Management to control our APIs and generate documentation automatically.
Azure Tips and Tricks (weekly tips and videos): Azure Tips and Tricks helps developers learn something new within a couple of minutes. Since its inception in 2017, the collection has grown to over 230 tips, more than 80 videos, conference talks, and several eBooks. Featuring a new tip and video each week, it is designed to help you boost your productivity with Azure, and all tips are based on practical, real-world scenarios. The series spans the entire universe of the Azure platform, from Azure App Service to containers and more. Swing by weekly for a tip, or stay for hours watching our Azure YouTube playlist.
Rock, Paper, Scissors, Lizard, Spock sample application: Rock, Paper, Scissors, Lizard, Spock is the geek version of the classic Rock, Paper, Scissors game, created by Sam Kass and Karen Bryla.
The sample application running in Azure was presented at Microsoft Ignite 2019 by Scott Hanselman and friends. It’s a multilanguage application built with Visual Studio and Visual Studio Code, deployed with GitHub Actions, and running on Azure Kubernetes Service (AKS). The sample application also uses Azure Machine Learning and Azure Cognitive Services (custom vision API). Languages used in this application include .NET, Node.js, Python, Java, and PHP.
Microsoft.Source Newsletter: Get the latest articles, documentation, and events from our curated monthly developer community newsletter. Learn about new technologies and find opportunities to connect with other developers online and locally. Each edition, you’ll have the opportunity to share your feedback and shape the newsletter as it grows and evolves.

Additional resources

Here are some bonus tips to help you keep up with Azure as it changes:

Azure documentation is the most comprehensive and current resource you’ll find for all of our Azure services.
See how Microsoft does DevOps: Customers are looking for guidance and insights about companies that have undergone a transformation through DevOps. To that end, we are sharing the stories of four Microsoft teams that have experienced DevOps transformation, with guidance on lessons learned and ways to drive organizational change through Azure technologies and internal culture. The stories are aimed at providing practical information about DevOps adoption to developers, IT professionals, and decision-makers.
Azure Friday is a video series, with hosts such as Scott Hanselman, that releases up to three new episodes per week to keep you up to date with the latest in Azure.


New features in Azure Monitor Metrics Explorer based on your feedback

A few months ago, we posted a survey to gather feedback on your experience with metrics in the Azure portal. Thank you for participating and for providing valuable suggestions!

We want to share some of the insights we gained from the survey and highlight some of the features that we delivered based on your feedback. These features include:

A resource picker that supports multi-resource scoping.
Dimension splitting that can limit the number of time series and specify sort order.
Charts that can show a large number of datapoints.
Improved chart legends.

Resource picker with multi-resource scoping

One of the key pieces of feedback we heard was about the resource picker panel. You said that being able to select only one resource at a time when choosing a scope is too limiting. Now you can select multiple resources across resource groups in a subscription.

Ability to limit the number of timeseries and change sort order when splitting by dimension

Many of you asked for the ability to configure the sort order based on dimension values, and for control over the maximum number of timeseries shown on the chart. Those who asked explained that for some metrics, such as available memory and remaining disk space, they want to see the timeseries with the smallest values, while for other metrics, such as CPU utilization or count of failures, showing the timeseries with the highest values makes more sense. To address your feedback, we expanded the dimension splitter selector with Sort order and Limit count inputs.
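
The behavior behind the Sort order and Limit count inputs amounts to a top-N (or bottom-N) selection over the split series. A minimal sketch, with all sample values invented:

```python
# Sketch of splitting a metric by dimension with a sort order and a limit:
# keep only the N series that matter for the question being asked.

series = {  # dimension value (e.g., VM name) -> metric value (e.g., avg CPU %)
    "vm-a": 92.0, "vm-b": 15.5, "vm-c": 71.2, "vm-d": 3.4, "vm-e": 88.9,
}

def split_by_dimension(series, limit, descending=True):
    """Return the top (descending=True) or bottom (descending=False)
    `limit` series by value."""
    ranked = sorted(series.items(), key=lambda kv: kv[1], reverse=descending)
    return ranked[:limit]

# For CPU utilization, the highest values are interesting:
top_cpu = split_by_dimension(series, limit=2)
# For a metric like remaining disk space, the smallest values are:
low_values = split_by_dimension(series, limit=2, descending=False)
```

The same data supports both questions; only the sort direction and limit change.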

Charts that show a large number of datapoints

Charts with multiple timeseries over a long period, especially with a short time grain, are based on queries that return many datapoints, and processing too many datapoints can slow down chart interactions. To ensure the best performance, we used to apply a hard limit on the number of datapoints per chart, prompting users to lower the time range or increase the time grain when the query returned too much data.

Some of you found the old experience frustrating. You said that occasionally you might want to plot charts with lots of datapoints, regardless of performance. Based on your suggestions, we changed the way we handle the limit. Instead of blocking chart rendering, we now display a message noting that the metrics query will return a lot of data, but we let you proceed anyway (with a friendly reminder that you might need to wait longer for the chart to display).
   
High-density charts built from lots of datapoints can be useful for visualizing outliers.

Improved chart legend

A small but useful improvement was made based on your feedback that chart legends often wouldn’t fit on the chart, making it hard to interpret the data. This happened most often with charts pinned to dashboards and rendered in the tight space of dashboard tiles, or on screens with smaller resolutions. To solve the problem, we now let you scroll the legend until you find the data you need.

Feedback

Let us know how we're doing and what more you'd like to see. Please stay tuned for more information on these and other new features in the coming months. We are continuously addressing pain points and making improvements based on your input.

If you have any questions or comments before our next survey, please use the feedback button on the Metrics blade. Don’t feel shy about giving us a shout out if you like a new feature or are excited about the direction we’re headed. Smiles are just as important in influencing our plans as frowns.


Advancing Azure Active Directory availability

“Continuing our Azure reliability series to be as transparent as possible about key initiatives underway to keep improving availability, today we turn our attention to Azure Active Directory. Microsoft Azure Active Directory (Azure AD) is a cloud identity service that provides secure access to over 250 million monthly active users, connecting over 1.4 million unique applications and processing over 30 billion daily authentication requests. This makes Azure AD not only the largest enterprise Identity and Access Management solution, but easily one of the world’s largest services. The post that follows was written by Nadim Abdo, Partner Director of Engineering, who is leading these efforts.” – Mark Russinovich, CTO, Azure

 

Our customers trust Azure AD to manage secure access to all their applications and services. For us, this means that every authentication request is a mission critical operation. Given the critical nature and the scale of the service, our identity team’s top priority is the reliability and security of the service. Azure AD is engineered for availability and security using a truly cloud-native, hyper-scale, multi-tenant architecture and our team has a continual program of raising the bar on reliability and security.

Azure AD: Core availability principles

Engineering a service of this scale, complexity, and mission criticality to be highly available in a world where everything we build on can and does fail is a complex task.

Our resilience investments are organized around the set of reliability principles below:

Our availability work adopts a layered defense approach: reduce the possibility of customer-visible failure as much as possible; if a failure does occur, scope down its impact as much as possible; and finally, reduce the time it takes to recover and mitigate as much as possible.

Over the coming weeks and months, we will dive deeper into how each of these principles is designed and verified in practice, and provide examples of how they work for our customers.

Highly redundant

Azure AD is a global service with multiple levels of internal redundancy and automatic recoverability. Azure AD is deployed in over 30 datacenters around the world leveraging Azure Availability Zones where present. This number is growing rapidly as additional Azure Regions are deployed.

For durability, any piece of data written to Azure AD is replicated to at least 4 and up to 13 datacenters, depending on your tenant configuration. Within each datacenter, data is replicated at least 9 times, both for durability and to scale out capacity to serve authentication load. To illustrate, this means that at any point in time there are at least 36 copies of your directory data available within our service in our smallest region. Also for durability, writes to Azure AD are not completed until a successful commit to an out-of-region datacenter.
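
The "at least 36 copies" figure follows directly from the two replication factors just stated:

```python
# Minimum copies of directory data in the smallest-region configuration:
# at least 4 datacenters, each holding at least 9 replicas.
min_datacenters = 4
min_replicas_per_datacenter = 9
min_copies = min_datacenters * min_replicas_per_datacenter  # 36
```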

This approach gives us both durability of the data and massive redundancy—multiple network paths and datacenters can serve any given authorization request, and the system automatically and intelligently retries and routes around failures both inside a datacenter and across datacenters.

To validate this, we regularly exercise fault injection and validate the system’s resiliency to failure of the system components Azure AD is built on. This extends all the way to taking out entire datacenters on a regular basis to confirm the system can tolerate the loss of a datacenter with zero customer impact.

No single points of failure (SPOF)

As mentioned, Azure AD itself is architected with multiple levels of internal resilience, but our principle extends even further to have resilience in all our external dependencies. This is expressed in our no single point of failure (SPOF) principle.

Given the criticality of our services, we don’t accept SPOFs in critical external systems like the Domain Name System (DNS), content delivery networks (CDNs), or the telco providers that carry our multi-factor authentication (MFA) traffic, including SMS and voice. For each of these systems, we use multiple redundant systems configured in a full active-active configuration.

Much of the work on this principle came to completion over the last calendar year. To illustrate, when a large DNS provider recently had an outage, Azure AD was entirely unaffected because we had an active-active path to an alternate provider.

Elastically scales

Azure AD is already a massive system, running on over 300,000 CPU cores, and is able to rely on the enormous scalability of the Azure cloud to dynamically and rapidly scale up to meet any demand. This includes both natural increases in traffic, such as a 9 AM peak in authentications in a given region, and huge surges in new traffic served by Azure AD B2C, which powers some of the world’s largest events and frequently sees rushes of millions of new users.

As an added level of resilience, Azure AD over-provisions its capacity and a design point is that the failover of an entire datacenter does not require any additional provisioning of capacity to handle the redistributed load. This gives us the flexibility to know that in an emergency we already have all the capacity we need on hand.

Safe deployment

Safe deployment ensures that changes (code or configuration) progress gradually, from internal automation, to Microsoft-internal self-hosting rings, to production. Within production we adopt a very gradual, slow ramp-up of the percentage of users exposed to a change, with automated health checks gating progression from one deployment ring to the next. This entire process takes over a week to fully roll out a change across production, and can at any time rapidly roll back to the last known healthy state.

This system regularly catches potential failures in what we call our 'early rings,' which are entirely internal to Microsoft, and prevents their rollout to rings that would impact customer/production traffic.
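The ring-gating logic above can be sketched as follows. The ring names and callback shapes are invented for illustration; the real pipeline is far more elaborate, but the control flow is the same: advance only while health checks pass, and unwind on any failure.

```python
# Illustrative sketch of ring-gated rollout: a change advances ring by
# ring only while automated health checks pass, and any failure
# triggers rollback to the last known healthy state.

RINGS = ["automation", "self-host", "prod-1pct", "prod-10pct", "prod-100pct"]

def roll_out(change, apply, healthy, roll_back):
    deployed = []
    for ring in RINGS:
        apply(change, ring)
        deployed.append(ring)
        if not healthy(ring):
            # Unwind every touched ring, most recent first.
            for r in reversed(deployed):
                roll_back(change, r)
            return f"rolled back at {ring}"
    return "fully deployed"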

Modern verification

To support the health checks that gate safe deployment, and to give our engineering teams insight into the health of our systems, Azure AD emits a massive amount of internal telemetry, metrics, and signals. At our scale, this amounts to over 11 petabytes a week of signals feeding our automated health-monitoring systems. Those systems in turn trigger alerts, both to automation and to our 24/7/365 engineering team, which responds to any potential degradation in availability or quality of service (QoS).

Our journey here is to expand that telemetry to provide visibility into not just the health of the services, but metrics that truly represent the end-to-end health of a given scenario for a given tenant. Our team is already alerting on these metrics internally, and we're evaluating how to expose this per-tenant health data directly to customers in the Azure portal.
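A per-tenant health metric of this kind boils down to aggregating raw events by tenant. The sketch below is hypothetical (the event shape, function name, and threshold are all invented, not Azure AD's telemetry schema): it rolls authentication events up into a success rate per tenant and flags any tenant below an alerting threshold.

```python
from collections import defaultdict

# Hypothetical sketch of per-tenant health aggregation: compute a
# success rate per tenant from raw (tenant, succeeded) events and
# return only the tenants that fall below the alert threshold.

def unhealthy_tenants(events, threshold=0.99):
    totals = defaultdict(lambda: [0, 0])  # tenant -> [successes, attempts]
    for tenant, succeeded in events:
        totals[tenant][1] += 1
        if succeeded:
            totals[tenant][0] += 1
    return {t: ok / n for t, (ok, n) in totals.items() if ok / n < threshold}
```

The point of scoping the metric per tenant is that a regression affecting one tenant's scenario surfaces even when the fleet-wide average still looks healthy.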

Partitioning and fine-grained fault domains

A good analogy for understanding Azure AD is the set of compartments in a submarine, each designed to be able to flood without affecting other compartments or the integrity of the entire vessel.

The equivalent for Azure AD is a fault domain: the scale units that serve a set of tenants in one fault domain are architected to be completely isolated from the scale units of other fault domains. Fault domains provide hard isolation for many classes of failure, so that the 'blast radius' of a fault is contained within a single fault domain.

Azure AD has, up to now, consisted of five separate fault domains. Through work carried out over the last year, and to be completed by next summer, this number will increase to 50 fault domains, and many services, including Azure Multi-Factor Authentication (MFA), are moving to become fully isolated within those same fault domains.

This hard-partitioning work is designed to be a final catch-all that scopes any outage or failure to no more than 1/50, or about 2 percent, of our users. Our objective is to increase this even further, to hundreds of fault domains, over the following year.
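The blast-radius arithmetic can be made concrete with a toy assignment scheme. This sketch is an assumption throughout — Azure AD does not document how tenants map to fault domains — but it shows the property that matters: a stable assignment keeps each tenant in one domain, so a fault in that domain touches at most 1/N of users.

```python
import hashlib

NUM_FAULT_DOMAINS = 50  # the target count described above

def fault_domain(tenant_id: str) -> int:
    # Stable hash so a given tenant always lands in the same fault
    # domain. The scheme is illustrative, not Azure AD's actual one.
    digest = hashlib.sha256(tenant_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_FAULT_DOMAINS

# With 50 isolated fault domains, a fault confined to one domain can
# reach at most 1/50, i.e. 2 percent, of users.
BLAST_RADIUS = 1 / NUM_FAULT_DOMAINS
```

Growing to hundreds of fault domains shrinks that worst-case fraction proportionally, which is the stated objective for the following year.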

A preview of what’s to come

The principles above aim to harden the core Azure AD service. Given the critical nature of Azure AD, we're not stopping there. Future posts will cover new investments we're making, including rolling out in production a second, completely fault-decorrelated identity service that can provide seamless fallback authentication in the event of a failure in the primary Azure AD service.

Think of this as the equivalent to a backup generator or uninterruptible power supply (UPS) system that can provide coverage and protection in the event the primary power grid is impacted. This system is completely transparent and seamless to end users and is now in production protecting a portion of our critical authentication flows for a set of M365 workloads. We’ll be rapidly expanding its applicability to cover more scenarios and workloads.
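The caller-facing shape of that fallback is a simple pattern, sketched below under stated assumptions: the function names and request/response shapes are invented, and the real system's fallback decision is far more nuanced than a bare exception handler.

```python
# Sketch of the backup-service pattern: try the primary identity
# service, and on failure fall back to a fault-decorrelated backup,
# transparently to the caller.

def authenticate(request, primary, backup):
    try:
        return primary(request)
    except Exception:
        # The backup shares no infrastructure or code path with the
        # primary, so a failure hitting both at once is very unlikely.
        return backup(request)
```

The transparency the post describes comes from keeping this branch inside the service boundary: the caller sees one `authenticate` call and one token, regardless of which system issued it.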

We look forward to sharing more on our Azure Active Directory Identity Blog, hearing your questions and topics of interest for future posts.
Source: Azure