Introducing incremental enrichment in Azure Cognitive Search

Incremental enrichment is a new feature of Azure Cognitive Search that brings a declarative approach to indexing your data. When incremental enrichment is turned on, document enrichment is performed at the least possible cost, even as your skills continue to evolve. Indexers in Azure Cognitive Search add documents to your search index from a data source. Indexers track updates to the documents in your data sources and update the index with the new or updated documents.

Incremental enrichment is a new feature that extends change tracking from document changes in the data source to all aspects of the enrichment pipeline. With incremental enrichment, the indexer will drive your documents to eventual consistency with your data source, the current version of your skillset, and the indexer.

Indexers have a few key characteristics:

Data source specific.
State aware.
Can be configured to drive eventual consistency between your data source and index.

In the past, editing your skillset by adding, deleting, or updating skills left you with a sub-optimal choice. Either rerun all the skills on the entire corpus, essentially a reset on your indexer, or tolerate version drift where documents in your index are enriched with different versions of your skillset.

With the latest update to the preview release of the API, indexer state management is being expanded from only the data source and indexer field mappings to also include the skillset, output field mappings, the knowledge store, and projections.

Incremental enrichment vastly improves the efficiency of your enrichment pipeline. It eliminates the forced choice between accepting the potentially large cost of re-enriching the entire corpus of documents when a skill is added or updated, and tolerating version drift, where documents created or updated with different versions of the skillset differ widely in the shape and/or quality of their enrichments.

Indexers now track and respond to changes across your enrichment pipeline by determining which skills have changed and, when invoked, selectively executing only the updated skills and any downstream or dependent skills. By configuring incremental enrichment, you can ensure that all documents in your index are always processed with the most current version of your enrichment pipeline, all while performing the least amount of work required. Incremental enrichment also gives you granular controls for scenarios where you want full control over how a change is handled.

Indexer cache

Incremental indexing is made possible by the addition of an indexer cache to the enrichment pipeline. The indexer caches the results from each skill for every document. When a data source needs to be re-indexed due to a skillset update (a new or updated skill), each previously enriched document is read from the cache, and only the affected skills (those that changed, plus any downstream of the changes) are re-run. The updated results are written to the cache, and the document is updated in the index and, optionally, the knowledge store. Physically, the cache is a storage account. All indexers within a search service may share the same storage account for the indexer cache. Each indexer is assigned a unique, immutable cache ID.
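The selective re-execution described above can be thought of as a reachability computation over the skill dependency graph. The following is a simplified sketch with hypothetical skill names, not the service's actual implementation:

```python
# Hypothetical skill graph: edges point from a skill to the skills that
# consume its outputs (its downstream dependents).
downstream = {
    "ocr": ["merge"],
    "merge": ["entities", "keyphrases"],
    "entities": [],
    "keyphrases": [],
}

def skills_to_rerun(changed: set[str]) -> set[str]:
    """Changed skills plus everything downstream of them must re-run;
    results for all other skills can be served from the cache."""
    to_run, stack = set(), list(changed)
    while stack:
        skill = stack.pop()
        if skill not in to_run:
            to_run.add(skill)
            stack.extend(downstream[skill])
    return to_run
```

For example, editing the "merge" skill forces "entities" and "keyphrases" to re-run as well, while the cached "ocr" output is reused untouched.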

Granular controls over indexing

Incremental enrichment provides a host of granular controls, from ensuring the indexer performs the highest-priority task first to overriding change detection.

Change detection override: Incremental enrichment gives you granular control over all aspects of the enrichment pipeline. This allows you to deal with situations where a change might have unintended consequences. For example, editing a skillset and updating the URL for a custom skill will result in the indexer invalidating the cached results for that skill. If you are only moving the endpoint to a different virtual machine (VM) or redeploying your skill with a new access key, you really don’t want any existing documents reprocessed.

To ensure that the indexer performs only the enrichments you explicitly require, updates to the skillset can optionally set the disableCacheReprocessingChangeDetection query string parameter to true. When set, this parameter ensures that only the updates to the skillset are committed and that the change is not evaluated for effects on the existing corpus.
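As a sketch of how such an update request might be composed, the snippet below builds the skillset update URL with the parameter set. The service name, skillset name, and preview API version are hypothetical placeholders:

```python
from urllib.parse import urlencode

# Hypothetical service endpoint, skillset name, and preview API version.
service = "https://myservice.search.windows.net"
skillset = "my-skillset"
params = urlencode({
    "api-version": "2019-05-06-Preview",
    # Commit the skillset update without invalidating cached enrichments:
    "disableCacheReprocessingChangeDetection": "true",
})
url = f"{service}/skillsets/{skillset}?{params}"
```

The resulting URL would then be used in a PUT request that carries the updated skillset definition.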

Cache invalidation: The converse scenario is one where you deploy a new version of a custom skill: nothing within the enrichment pipeline changes, but you need a specific skill invalidated and all affected documents reprocessed to reflect the benefits of an updated model. In these instances, you can call the reset skills operation on the skillset. The reset skills API accepts a POST request with the list of skill outputs in the cache that should be invalidated. For more information on the reset skills API, see the documentation.
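A reset skills call might be composed as in the sketch below. The resetskills path segment and the "skillNames" payload shape are assumptions based on the preview API, and the service and skillset names are placeholders, so check the current REST reference before relying on them:

```python
import json

# Hypothetical names; endpoint path and payload shape are assumptions
# based on the preview reset skills API.
service = "https://myservice.search.windows.net"
skillset = "my-skillset"
url = f"{service}/skillsets/{skillset}/resetskills?api-version=2019-05-06-Preview"

# List the cached skill outputs to invalidate; affected documents are
# reprocessed on the next indexer run.
body = json.dumps({"skillNames": ["#1", "#3"]})
```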

Updates to existing APIs

Introducing incremental enrichment will result in an update to some existing APIs.

Indexers

Indexers will now expose a new property:

Cache

StorageAccountConnectionString: The connection string to the storage account that will be used to cache the intermediate results.
CacheId: The identifier of the container within the annotationCache storage account that is used as the cache for this indexer. This cache is unique to this indexer; if the indexer is deleted and recreated with the same name, the cacheId is regenerated. The cacheId cannot be set; it is always generated by the service.
EnableReprocessing: Set to true by default. When set to false, documents continue to be written to the cache, but no existing documents are reprocessed based on the cache data.
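Put together, an indexer definition with the cache enabled might look like the sketch below, expressed as a Python dict mirroring the REST API's JSON. The names and connection string are placeholders, and the exact property casing should be verified against the preview API reference:

```python
# Placeholder names and connection string; property names mirror the
# preview REST API's JSON shape and should be checked against the docs.
indexer = {
    "name": "my-indexer",
    "dataSourceName": "my-datasource",
    "targetIndexName": "my-index",
    "cache": {
        "storageAccountConnectionString": "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>",
        "enableReprocessing": True,
        # cacheId is deliberately omitted: it is always generated by the service.
    },
}
```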

Indexers will also support a new query string parameter:

ignoreResetRequirement set to true allows the commit to go through without triggering a reset condition.

Skillsets

Skillsets will not support any new operations, but will support a new query string parameter:

disableCacheReprocessingChangeDetection set to true when you do not want the current update to trigger reprocessing of existing documents.

Datasources

Datasources will not support any new operations, but will support a new query string parameter:

ignoreResetRequirement set to true allows the commit to go through without triggering a reset condition.

Best practices

The recommended approach to using incremental enrichment is to set the cache property when defining a new indexer, or to reset an existing indexer and then set the cache property. Use ignoreResetRequirement sparingly, as it could lead to unintended inconsistency in your data that will not be detected easily.

Takeaways

Incremental enrichment is a powerful feature that allows you to declaratively ensure that the data from your data source is always consistent with the data in your search index or knowledge store. As your skills, skillsets, or enrichments evolve, the enrichment pipeline will ensure the least possible work is performed to drive your documents to eventual consistency.

Next steps

Get started with incremental enrichment by adding a cache to an existing indexer or adding the cache when defining a new indexer.
Source: Azure

Accelerating innovation: Start with Azure Sphere to secure IoT solutions

From agriculture to healthcare, IoT unlocks opportunity across every industry, delivering profound returns, such as increased productivity and efficiency, reduced costs, and even new business models. And with a projected 41.6 billion IoT connected devices by 2025, momentum continues to build.

While IoT creates new opportunities, it also brings new cybersecurity challenges that could potentially result in stolen IP, loss of brand trust, downtime, and privacy breaches. In fact, 97 percent of enterprises rightfully call out security as a key concern when adopting IoT. But when organizations have a reliable foundation of security on which they can build from the start, they can realize durable innovation for their business rather than having to figure out what IoT device security requires and how to achieve it.

Read on to learn how you can use Azure Sphere—now generally available—to create and accelerate secure IoT solutions for both new devices and existing equipment. As you look to transform your business, discover why IoT security is so important to build in from the start and see how the integration of Azure Sphere has enabled other companies to focus on innovation. For a more in-depth discussion, be sure to watch the Azure Sphere general availability webinar.

Defense in depth, silicon-to-cloud security

It’s important to understand at a high level how Azure Sphere delivers quick and cost-effective device security. Azure Sphere is designed around the seven properties of highly secure devices and builds on decades of Microsoft experience in delivering secure solutions. End-to-end security is baked into the core, spanning the hardware, operating system, and cloud, with ongoing service updates to keep everything current.

While other IoT device platforms must rely on costly manual practices to mitigate missing security properties and protect devices from evolving cybersecurity threats, Azure Sphere delivers defense-in-depth to guard against and respond to threats. Add in ongoing security and OS updates to help ensure security over time, and you have the tools you need to stay on top of the shifting digital landscape.

Propel innovation on a secure foundation

Azure Sphere removes the complexity of securing IoT devices and provides a secure foundation to build on. This means that IoT adopters spend less time and money focused on security and more time innovating solutions that solve key business problems, delivering a greater return on investment as well as faster time to market.

Connected coffee with Azure Sphere 

A great example is Starbucks, who partnered with Microsoft to connect its fleet of coffee machines using the guardian module with Azure Sphere. The guardian module helps businesses quickly and securely connect existing equipment without any redesign, saving both time and money.

With IoT-enabled coffee machines, Starbucks collects more than a dozen data points such as type of beans, temperature, and water quality for every shot of espresso. They are also able to perform proactive maintenance on the machines to avoid costly breakdowns and service calls. Finally, they are using the solution to transmit new recipes directly to the machines, eliminating manual processes and reducing costs.

Azure Sphere innovation within Microsoft

Here at Microsoft, Azure Sphere is also being used by the cloud operations team in their own datacenters. With the aim of providing safe, fast and reliable cloud infrastructure to everyone, everywhere, it was an engineer’s discovery of Azure Sphere that started to make their goal of connecting the critical environment systems—the walls, the roof, the electrical system, and mechanical systems that house the datacenters—a reality.

Using the guardian module with Azure Sphere, they were able to move to a predictive maintenance model and better prevent issues from impacting servers and customers. Ultimately it is allowing them to deliver better outcomes for customers and utilize the datacenter more efficiently. And even better, Azure Sphere is giving them the freedom to innovate, create and explore—all on a secure, cost-effective platform.

Partner collaborations broaden opportunities

Enabling this innovation throughout is our global ecosystem of Microsoft partners, who help us advance capabilities and bring Azure Sphere to a broad range of customers and applications.

Together, we can provide a more extensive range of options for businesses, from the single-chip Wi-Fi solution from MediaTek that meets more traditional needs to other upcoming solutions from NXP and Qualcomm. NXP will provide an Azure Sphere certified chip that is optimized for performance and power, and Qualcomm will offer the first cellular-native Azure Sphere chip.

Register today

Register for the Azure Sphere general availability webinar to explore how Azure Sphere works, how businesses are benefiting from it, and how you can use Azure Sphere to create secure, trustworthy IoT devices that enable true business transformation.
Source: Azure

New Azure RTOS collaborations with leaders in the semiconductor industry

IoT is reaching mainstream adoption across businesses in all market segments. Our vision is to enable Azure to be the world’s computer, giving businesses real-time visibility into every aspect of their operations, assets, and products. Businesses are harnessing signals from IoT devices of all shapes and sizes, from the very smallest microcontroller units (MCUs) to very capable microprocessor units (MPUs). This presents a great opportunity for collaboration between semiconductor manufacturers with extensive expertise in MCUs/MPUs and Azure IoT, an industry leader in IoT.

It has been nearly one year since we acquired Express Logic and their popular ThreadX RTOS, and last year we announced Azure RTOS, bringing customers those capabilities with the industry's leading real-time operating system (RTOS).

Today, we’re announcing additional collaborations with industry leaders, which together represent the vast majority of the market for 32-bit MCUs. Their MCUs are embedded into billions of devices from sensors, streetlights, and shipping containers to smart home appliances, medical devices, and more.

STMicroelectronics, Renesas, NXP, Microchip, and Qualcomm will all offer embedded development kits featuring Azure RTOS ThreadX, one of the components of the Azure RTOS embedded application development suite. This allows embedded developers to access reliable, real-time performance for resource-constrained devices, and seamless integration with the power of Azure IoT to connect, monitor, and control a global fleet of IoT assets.

We will also be releasing the full source code for all Azure RTOS components on GitHub, allowing developers to freely explore, develop, test, and adapt Azure RTOS to suit their needs. When developers are ready to take their code into production, the production license will be included automatically if they deploy to any of the supported MCU devices from STMicroelectronics, Renesas, NXP, Microchip, or Qualcomm. If they prefer to use a different device in production, they may contact Microsoft for direct licensing details.

As we work with our semiconductor partners to implement best practices for connected devices, Azure RTOS will include easy-to-use reference projects and templates for connectivity to Azure IoT Hub, Azure IoT Central, and Azure IoT Edge gateways, as well as first-class integration with Azure Security Center. Azure RTOS will soon ship with an Azure Security Center module for monitoring threats and vulnerabilities on IoT devices.

When combined with Azure Sphere, Azure RTOS enables embedded developers to quickly build real-time, highly-secured IoT devices for even the most demanding environments—robust devices that offer real-time performance and protection from evolving cybersecurity threats. For MCUs and system on chips (SoCs) that are smaller than what Azure Sphere supports, Azure RTOS and Azure IoT Hub Device Management enable secure communications for embedded developers and device operators who have the ability to implement best practices to protect devices from cybersecurity attacks.

For partners wishing to deliver reliable, real-time performance on highly-secured connected devices that stay secured against evolving cybersecurity threats over time, we recommend Azure RTOS and Azure Sphere together for the most demanding environments.

Here are more details on our collaboration with industry leaders.

STMicroelectronics (ST)

STMicroelectronics (ST) is a renowned world leader in ARM® Cortex®-M MCUs with its STM32 family, providing their OEM and mass-market customers with a wide portfolio of simple-to-use MCUs, coming with a complete development environment and best-in-class ecosystem.

“We are delighted to be collaborating with Microsoft to address even better our customers’ needs,” said Ricardo de Sa Earp, Group Vice-President, Microcontrollers Division General Manager, STMicroelectronics. “Leveraging our installed base of more than five billion STM32 MCUs shipped to date to the global embedded market, we see Azure RTOS ThreadX and middleware as a perfect match to both our mass-market and OEM IoT strategies, complementing our development environment with industry-proven, reliable, high-quality source code.” 

Renesas Electronics Corporation

Renesas Electronics Corporation is a premier supplier of advanced semiconductor solutions. Last October, we announced that Azure RTOS will be broadly available across Renesas' products, including the Synergy and RA MCU families. Renesas is also working to build Azure RTOS into their broader set of MCUs and MPUs.

“Our Synergy and RX cloud kits combined with Azure RTOS and other Azure IoT building blocks offer MCU customers a quick and secure end-to-end solution for cloud connectivity,” said Sailesh Chittipeddi, Executive Vice President, General Manager of Renesas’ IoT and Infrastructure business unit. “We are excited to expand our collaboration with Microsoft and look forward to bringing Microsoft Azure to our MCU and MPU customers, including solutions that will support Azure IoT Edge Runtime for Linux on our RZ MPUs.”

NXP Semiconductors 

NXP Semiconductors is a world leader in secure connectivity solutions for embedded applications, serving customers in the automotive, industrial and IoT, mobile, and communication infrastructure sectors. Microsoft has been collaborating with NXP to extend intelligent cloud computing to the intelligent edge, from adding voice control directly to devices to offering machine learning solutions for edge devices, to device security with Azure Sphere. They plan to integrate Azure RTOS into their evaluation kits and some of the most popular IoT processor families in the industry.

“Edge computing reduces the latency, bandwidth and privacy concerns of a cloud-only Internet of Things," said Jerome Schang, Head of Cloud Partnership programs at NXP. “Enabling Azure RTOS on NXP’s MCUs is yet another step to provide edge computing solutions that unlock the benefits of edge to Azure IoT cloud interaction.”

Microchip Technology, Inc.

Microchip Technology Inc. is a leading provider of smart, connected, and secure embedded control solutions. Their solutions serve customers across the industrial, automotive, consumer, aerospace and defense, communications, and computing markets. Microchip plans to incorporate support for Azure RTOS and Azure IoT Edge across their product families.

“Microchip is building on its already comprehensive portfolio of tools and solutions to enable quick, easy development of secure IoT applications across the full spectrum of embedded control devices and architectures,” said Greg Robinson, associate vice president of Microchip’s 8-bit microcontroller business unit. “Our partnership with Microsoft Azure extends our dedication to developing innovative solutions.”

Qualcomm Technologies, Inc.

Qualcomm is a pioneer of wireless technology and powers the cellular connection of smartphones and tablets all over the planet. Qualcomm will be offering a cellular-enabled Azure Sphere certified chip and will be bringing Azure RTOS to cellular-connected device solutions found inside asset trackers, health monitors, security systems, smart city sensors, and smart meters, as well as a range of wearables.

”Qualcomm is a leader in wireless compute and connectivity technologies – not just in mobile, but in emerging markets like the Internet of Things as well,” said Jeff Torrance, Vice President, IoT, Qualcomm. “We’re proud to continue to work closely with Microsoft on solutions like Azure RTOS and Azure Sphere to jointly advance the IoT industry around the world.”

Learn more

We continue to work diligently with industry-leaders to create a rich, robust ecosystem that serves the world’s unique and diverse needs. Our collective aim is to enable customers to easily bring their ideas to life and truly unlock the opportunities available on the intelligent edge and the intelligent cloud. Find out more about why so many IoT industry leaders are excited about the benefits that Azure RTOS brings to their device solutions.
Source: Azure

Announcing server-side encryption with customer-managed keys for Azure Managed Disks

Today, we're announcing the general availability of server-side encryption (SSE) with customer-managed keys (CMK) for Azure Managed Disks. Azure customers already benefit from SSE with platform-managed keys for Managed Disks, enabled by default. SSE with CMK improves on platform-managed keys by giving you control of the encryption keys to meet your compliance needs.

Today, customers can also use Azure Disk Encryption, which leverages the Windows BitLocker feature and the Linux dm-crypt feature to encrypt Managed Disks with CMK within the guest virtual machine (VM). SSE with CMK improves on Azure Disk Encryption by enabling you to use any OS type and image, including custom images, for your VMs, because the data is encrypted in the Azure Storage service.

SSE with CMK is integrated with Azure Key Vault, which provides highly available and scalable secure storage for your keys backed by Hardware Security Modules. You can either bring your own keys (BYOK) to your Key Vault or generate new keys in the Key Vault.

About the key management

Managed Disks are encrypted and decrypted transparently using 256-bit Advanced Encryption Standard (AES) encryption, one of the strongest block ciphers available. The Storage service handles the encryption and decryption in a fully transparent fashion using envelope encryption. It encrypts data using 256-bit AES-based data encryption keys, which are, in turn, protected using your keys stored in a Key Vault.

The Storage service generates data encryption keys and encrypts them with CMK using RSA encryption. The envelope encryption allows you to rotate (change) your keys periodically as per your compliance policies without impacting your VMs. When you rotate your keys, the Storage service re-encrypts the data encryption keys with the new CMK.
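The envelope scheme can be illustrated with a toy sketch. Note the stand-ins: a keyed XOR stream replaces AES, and symmetric wrapping replaces the RSA wrapping the service actually uses. The point is only the structure: rotating the CMK re-wraps the small data encryption key while the bulk ciphertext is untouched.

```python
import hashlib
from itertools import cycle

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR stream keyed by SHA-256 of `key` (a toy stand-in for
    the real AES/RSA primitives, used only to show the envelope structure)."""
    stream = cycle(hashlib.sha256(key).digest())
    return bytes(b ^ k for b, k in zip(data, stream))

# Bulk disk data is encrypted with a data encryption key (DEK)...
dek = b"data-encryption-key-01"
ciphertext = toy_cipher(dek, b"disk sector contents")

# ...and the DEK is wrapped with the customer-managed key (CMK).
cmk_v1, cmk_v2 = b"customer-key-v1", b"customer-key-v2"
wrapped_dek = toy_cipher(cmk_v1, dek)

# Key rotation: unwrap with the old CMK, re-wrap with the new one.
# The disk ciphertext never changes, so rotation does not impact VMs.
rotated_dek = toy_cipher(cmk_v2, toy_cipher(cmk_v1, wrapped_dek))
```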

Full control of your keys

You are in full control of your keys in your Key Vault. Managed Disks uses a system-assigned managed identity in your Azure Active Directory (Azure AD) for accessing keys in Key Vault. An administrator with the required permissions in the Key Vault must first grant Managed Disks access in Key Vault to use the keys for encrypting and decrypting the data encryption key. You can prevent Managed Disks from accessing your keys by either disabling your keys or by revoking access controls for your keys; doing so for disks attached to running VMs will cause the VMs to fail. Moreover, you can track key usage through Key Vault monitoring to ensure that only Managed Disks or other trusted Azure services are accessing your keys.

Availability of SSE with CMK

SSE with CMK is available for Standard HDD, Standard SSD, and Premium SSD Managed Disks that can be attached to Azure Virtual Machines and VM scale sets. Ultra Disk Storage support will be announced separately. SSE with CMK is now enabled in all the public and Azure Government regions and will be available in the regions in Germany (Sovereign) and China in a few weeks.

You can use Azure Backup to back up your VMs using Managed Disks encrypted with SSE with CMK. Also, you can choose to encrypt the backup data in your Recovery Services vaults using your keys stored in your Key Vault instead of platform-managed keys available by default. Refer to documentation for more details on the encryption of backups using CMK.

You can use Azure Site Recovery to replicate your Azure virtual machines that have Managed Disks encrypted with SSE with CMK to other Azure regions for disaster recovery. You can also replicate your on-premises virtual machines to Managed Disks encrypted with SSE with CMK in Azure. Learn more about replicating your virtual machines using Managed Disks encrypted with SSE with CMK.

Get started

To enable encryption with CMK for Managed Disks, you must first create an instance of a new resource type called DiskEncryptionSet and then grant the instance access to the Key Vault. A DiskEncryptionSet represents a key in your Key Vault and allows you to reuse that key for encrypting many disks, snapshots, and images.

Let’s look at an example of creating an instance of DiskEncryptionSet:

1. Create an instance of DiskEncryptionSet by specifying a key in your Key Vault.

keyVaultId=$(az keyvault show --name yourKeyVaultName --query [id] -o tsv)

keyVaultKeyUrl=$(az keyvault key show --vault-name yourKeyVaultName --name yourKeyName --query [key.kid] -o tsv)

az disk-encryption-set create -n yourDiskEncryptionSetName -l WestCentralUS -g yourResourceGroupName --source-vault $keyVaultId --key-url $keyVaultKeyUrl

2. Grant the instance access to the Key Vault. When you created the instance, the system automatically created a system-assigned managed identity in your Azure AD and associated the identity with the instance. The identity must have access to the Key Vault to perform required operations such as wrapkey, unwrapkey, and get.

desIdentity=$(az disk-encryption-set show -n yourDiskEncryptionSetName -g yourResourceGroupName --query [identity.principalId] -o tsv)

az keyvault set-policy -n yourKeyVaultName -g yourResourceGroupName --object-id $desIdentity --key-permissions wrapkey unwrapkey get

az role assignment create --assignee $desIdentity --role Reader --scope $keyVaultId

You are ready to enable the encryption for disks, snapshots, and images by associating them with the instance of DiskEncryptionSet. There is no restriction on the number of resources that can be associated with the same DiskEncryptionSet.

Let’s look at an example of enabling for an existing disk:

1. To enable the encryption for disks attached to a VM, you must stop (deallocate) the virtual machine.

az vm deallocate --resource-group MyResourceGroup --name MyVm

2. Enable the encryption for an attached disk by associating it with the instance of DiskEncryptionSet.

diskEncryptionSetId=$(az disk-encryption-set show -n yourDiskEncryptionSetName -g yourResourceGroupName --query [id] -o tsv)

az disk update -n yourManagedDiskName -g yourResourceGroupName --encryption-type EncryptionAtRestWithCustomerKey --disk-encryption-set $diskEncryptionSetId

3. Start the VM.

az vm start -g MyResourceGroup -n MyVm

Refer to the Managed Disks documentation for detailed instructions on enabling server-side encryption with CMK for Managed Disks.

Send us your feedback

We look forward to hearing your feedback on SSE with CMK. Please email us here.
Source: Azure

General availability of new Azure disk sizes and bursting

Today marks the general availability of new Azure disk sizes, including 4, 8, and 16 GiB on both Premium and Standard SSDs, as well as bursting support on Azure Premium SSD Disks.

To provide the best performance and cost balance for your production workloads, we are making significant improvements to our portfolio of Azure Premium SSD disks. With bursting, even the smallest Premium SSD disks (4 GiB) can now achieve up to 3,500 input/output operations per second (IOPS) and 170 MiB/second. If you have experienced jitters in disk IOs due to unpredictable load and spiky traffic patterns, migrate to Azure and improve your overall performance by taking advantage of bursting support.

We offer disk bursting on a credit-based system. You accumulate credits when traffic is below the provisioned target and consume credits when traffic exceeds it. Bursting is best leveraged on OS disks to accelerate virtual machine (VM) boot, or on data disks to accommodate spiky traffic. For example, if you conduct a SQL checkpoint or your application issues IO flushes to persist the data, there will be a sudden increase of writes against the attached disk. Disk bursting gives you the headroom to accommodate both expected and unexpected changes in load.
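The credit mechanics can be sketched as a token bucket. The numbers below are illustrative for a small burst-eligible disk (120 provisioned IOPS, 3,500 burst IOPS, a 30-minute burst bucket), and the per-second accounting is a simplification of the actual service behavior:

```python
PROVISIONED_IOPS = 120      # baseline provisioned target
BURST_IOPS = 3_500          # burst ceiling
MAX_CREDITS = (BURST_IOPS - PROVISIONED_IOPS) * 30 * 60  # full 30-minute bucket

def serve_one_second(credits: float, demand_iops: int) -> tuple[int, float]:
    """Serve one second of IO demand; return (IOPS served, new credit balance)."""
    if demand_iops <= PROVISIONED_IOPS:
        # Headroom below the provisioned target accrues credits, up to the cap.
        return demand_iops, min(MAX_CREDITS, credits + PROVISIONED_IOPS - demand_iops)
    served = min(demand_iops, BURST_IOPS)
    spend = served - PROVISIONED_IOPS
    if spend > credits:
        # Not enough credits: throttle back toward the provisioned baseline.
        served = PROVISIONED_IOPS + int(credits)
        spend = int(credits)
    return served, credits - spend
```

With a full bucket, a sustained 3,500-IOPS spike spends 3,380 credits per second and can be served for 1,800 seconds, which is where the 30-minute peak burst duration comes from.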

Disk bursting will be enabled by default for all new deployments of burst-eligible disks, with no user action required. For any existing Premium SSD Managed Disks (512 GiB or smaller, that is, P20 and below), disk bursting takes effect whenever the disk is reattached or the VM is restarted, and your workloads can then experience a boost in disk performance. To read more about how disk bursting works, refer to this Premium SSD bursting article.

Further, the new disk sizes introduced on Standard SSD disks provide the most cost-efficient SSD offering in the cloud, with consistent disk performance at the lowest cost per GiB. We've also increased the performance target for all Standard SSD disks smaller than 64 GiB (E6) to 500 IOPS. They are an ideal replacement for HDD-based disk storage, whether on-premises or in the cloud, and are best suited for hosting web servers and business applications that are not IO-intensive but require stable, predictable performance for your business operations.

In this post, we’ll be sharing how you can start leveraging these new disk capabilities to build your most high performance, robust, and cost-efficient solution on Azure today.

Getting started

You can create new managed disks using the Azure portal, PowerShell, or the command-line interface (CLI) now. You can find the specifications of burst-eligible and new disk sizes in the table below. Both the new disk sizes and bursting support on Premium SSD Disks are available in all regions in the Azure Public Cloud, with support for sovereign clouds coming soon.

Azure Premium SSD Managed Disks

Here are the burst-eligible disks, including the newly introduced sizes. Disk bursting doesn't apply to disk sizes greater than 512 GiB (above P20), as the provisioned targets of those sizes are sufficient for most workloads. To learn more about the disk sizes and performance targets, please refer to this "What disk types are available in Azure?" article.

| Burst capable disks | Disk size | Provisioned IOPS per disk | Provisioned bandwidth per disk | Max burst IOPS per disk | Max burst bandwidth per disk | Max burst duration at peak burst rate |
| --- | --- | --- | --- | --- | --- | --- |
| P1 (new) | 4 GiB | 120 | 25 MiB/second | 3,500 | 170 MiB/second | 30 minutes |
| P2 (new) | 8 GiB | 120 | 25 MiB/second | 3,500 | 170 MiB/second | 30 minutes |
| P3 (new) | 16 GiB | 120 | 25 MiB/second | 3,500 | 170 MiB/second | 30 minutes |
| P4 | 32 GiB | 120 | 25 MiB/second | 3,500 | 170 MiB/second | 30 minutes |
| P6 | 64 GiB | 240 | 50 MiB/second | 3,500 | 170 MiB/second | 30 minutes |
| P10 | 128 GiB | 500 | 100 MiB/second | 3,500 | 170 MiB/second | 30 minutes |
| P15 | 256 GiB | 1,100 | 125 MiB/second | 3,500 | 170 MiB/second | 30 minutes |
| P20 | 512 GiB | 2,300 | 150 MiB/second | 3,500 | 170 MiB/second | 30 minutes |

Standard SSD Managed Disks

Here are the new disk sizes introduced on Standard SSD Disks. The performance targets define the maximum IOPS and bandwidth you can achieve on these sizes. Unlike Premium Disks, Standard SSDs do not offer provisioned IOPS and bandwidth. For performance-sensitive workloads or single-instance deployments, we recommend leveraging Premium SSDs.

 
| Disk | Disk size | Max IOPS per disk | Max bandwidth per disk |
| --- | --- | --- | --- |
| E1 (new) | 4 GiB | 500 | 25 MiB/second |
| E2 (new) | 8 GiB | 500 | 25 MiB/second |
| E3 (new) | 16 GiB | 500 | 25 MiB/second |

Visit our service website to explore the Azure Disk Storage portfolio. To learn about pricing, you can visit the Azure Managed Disks pricing page. 

Your feedback

We look forward to hearing your feedback; please reach out to us here with your comments.
Source: Azure

Microsoft partners with the industry to unlock new 5G scenarios with Azure Edge Zones

Cloud, edge computing, and IoT are making strides to transform whole industries and create opportunities that weren't possible just a few years ago. With the rise of 5G mobile connectivity, there are even more possibilities to deliver immersive, real-time experiences that have demanding, ultra-low latency, and connectivity requirements. 5G opens new frontiers with enhanced mobile broadband up to 10x faster, reliable low-latency communication, and very high device density up to 1 million devices per square kilometer.

Today we’re announcing transformative advances to combine the power of Azure, 5G, carriers, and technology partners around the world to enable new scenarios for developers, customers, and partners, with the preview of Azure Edge Zones.

New 5G customer scenarios with Azure Edge Zones

Azure Edge Zones and Azure Private Edge Zones deliver consistent Azure services, app platform, and management to the edge with 5G unlocking new scenarios by enabling:

Development of distributed applications across cloud, on-premises, and edge using the same Azure Portal, APIs, development, and security tools.
Local data processing for latency critical industrial IoT and media services workloads.
Acceleration of IoT, artificial intelligence (AI), and real-time analytics by optimizing, building, and innovating for robotics, automation, and mixed reality.
New frontiers for developers working with high-density graphics and real-time operations in industries such as gaming.
An evolving platform built with customers, carriers, and industry partners to allow seamless integration and operation of a wide selection of Virtual Network Functions, including 5G software and SD-WAN and firewalls from technology partners such as Affirmed, Mavenir, Nuage Networks from Nokia, Metaswitch, Palo Alto, and VeloCloud By VMware.

Building on our previous work with AT&T, we’re announcing the preview of Azure Edge Zones with carriers, connecting Azure services directly to 5G networks in the carrier’s datacenter. This will enable developers to build optimized, scalable applications that use Azure and connect directly to 5G networks, taking advantage of consistent Azure APIs and tooling available in the public cloud. We were the first public cloud to announce 5G integration with AT&T in Dallas in 2019, and now we're announcing a close collaboration with AT&T on a new Edge Zone targeted to become available in Los Angeles in late spring. Customers and partners interested in Edge Zones with AT&T can register for our early adopter program.

“This is a uniquely challenging time across the globe as we rethink how to help organizations serve their customers and stakeholders,” said Anne Chow, chief executive officer, AT&T Business. “Fast and intelligent mobile networks will be increasingly central to all of our lives. Combining our network knowledge and experience with Microsoft’s cloud expertise will give businesses a critical head start.”

These new zones will boost application performance, providing an optimal user experience when running ultra-low latency, sensitive mobile applications, and SIM-enabled architectures including:

Online gaming: Every press of the button, every click is important for a gamer. Responsiveness is critical, especially in multi-player scenarios. Game developers can now develop cloud-based applications optimized for mobile, directly accessing the 5G network at different carrier sites. They can achieve millisecond latency and scale to as many users as they want.
Remote meetings and events: As the prevalence of digital-forward experiences continue to rise in response to global health challenges, we can help bring together thousands of people to enjoy a real-time shared experience. Enabling scenarios like social engagement, mobile digital experiences, live interaction, and payment and processing require ultra-low latency to provide an immersive, responsive experience.
Smart Infrastructure: With the rise of IoT, organizations are looking to create efficiency, savings, and immersive experiences across residential and commercial buildings, or even citywide. With 5G and cloud computing, organizations can reliably connect millions of endpoints, analyze data, and deliver immersive experiences.

With Azure Edge Zones we’re expanding our collaboration with several of our carrier partners to bring the Azure Edge Zones family to our mutual customers later this year.

In addition to partnering with carriers, we'll also deliver standalone Azure Edge Zones in select cities over the next 12 months, bringing Azure closer to customers and developers in highly dense areas.

Azure Private Edge Zones

We’re also announcing the preview of Azure Private Edge Zones: a private 5G/LTE network combined with Azure Stack Edge on-premises, delivering an ultra-low-latency, secure, and high-bandwidth solution for organizations. One example is Attabotics, which accelerates e-commerce delivery times by using 3D robotic goods-to-person storage, retrieval, and real-time order fulfillment solutions. This solution leverages Azure Edge Zones and IoT technologies such as Azure IoT Central and Azure Sphere.

 

“In collaboration with Microsoft, Rogers is delivering new and innovative solutions with our Private LTE capabilities combined with Azure Edge Zones,” said Dean Prevost, President, Rogers for Business. “Working with Attabotics, we’re enabling Canadian businesses to transform the traditional supply model with a retail e-fulfillment solution that showcases the exciting possibilities of today and opens the door to our 5G future.”

Partnering with the broad industry of carriers, systems integrators, and technology partners, we're launching a platform to support orchestration and management of customers' private cellular networks to enable scenarios such as:

Smart Factory/IoT: Off-shore operations or security isolated facilities can now take advantage of the power of edge computing. Connecting everything, from silicon to sensors, leveraging security to AI at the edge, deploying Digital Twins or using mixed reality, with a secure and private connection.
Logistics and operations: Retail customers have high expectations today in online and retail shopping, creating a need for appealing advertising before a potential customer looks away from a product on-line or in an aisle at the store. Wide selection, tailored offers, convenience, and availability are musts for success. The combination of cloud and distributed edge computing, efficiently working together is a game changer for the industry.
Medicine: From remote surgeries to complicated diagnostics that rely on cross-institutional collaboration, efficient compute and storage at the edge, with AI and minimal latency, enables these and multiple other scenarios that will save lives. Private mobile connections will work as smart grids for hospitals, patient data, and diagnostics that will never have to be exposed to the internet to take advantage of Azure technologies.

A consistent Edge Zone solution

Together, Azure, Azure Edge Zones, and Azure Private Edge Zones unlock a whole new range of distributed applications with a common, consistent architecture companies can use. For example, enterprises running a headquarters’ infrastructure on Azure may leverage Azure Edge Zones for latency-sensitive, interactive customer experiences, and Azure Private Edge Zones for their remote locations. Enterprise solution providers can take advantage of the consistent developer, management, and security experience, allowing developers to continue using GitHub, Azure DevOps, and Kubernetes Services to create applications in Azure and simply move the application to either Azure Edge Zones or Private Edge Zones depending on the customer's requirements.

“By combining Vodafone 5G and mobile private networks with Azure Private Edge Zones, our customers will be able to run cloud applications on mobile devices with single-digit millisecond responsiveness. This is essential for autonomous vehicles and virtual reality services, for example, as these applications need to react in real-time to deliver business impact. It will allow organizations to innovate and transform their operations, such as the way their employees work with virtual reality services, high speed and precise robotics, and accurate computer vision for defect detection. Together, we expect Vodafone and Microsoft to provide our customers with the capabilities they need to create high performing, innovative and safe work environments.” – Vinod Kumar, CEO of Vodafone Business

New possibilities for the telecommunication industry with Azure

For the last few decades, carriers and operators have pioneered how we connect with each other, laying the foundation for telephony and cellular. With cloud and 5G, there are new possibilities by combining cloud services, including compute and AI, with mobile high bandwidth and ultra-low latency connections. Microsoft is partnering with carriers and operators to bring 5G to life in immersive applications built by organizations and developers.

Carriers, operators, and networking providers can build 5G-optimized services and applications for their partners and customers with Azure Edge Zones, taking advantage of Azure compute, storage, networking, and AI capabilities. For organizations that want an on-premises, private mobile solution, partners and carriers can deploy, manage, and build offers with Azure Private Edge Zones. Customers need help understanding the complexities of the cellular spectrum, access points, and overall management. Carrier partners can help such enterprises manage these scenarios including manufacturing, robotics, and retail.

In addition to new business application opportunities, we're looking to transform 5G infrastructure with cloud technology. Today, most 5G infrastructure is built on specialized hardware with high capital expenditures and little flexibility. Microsoft will be working to help operators reduce costs and build capacity for their network workloads in new and innovative ways. Last week, we announced the signing of a definitive agreement to acquire Affirmed Networks, a leader in fully virtualized cloud-native mobile network solutions. We look forward to building on their great work and technology expertise to do even more to create new opportunities for customers, technology partners, and operators in virtual mobile networks.

As we continue to innovate and discover new, interesting ways to provide unique scenarios built with 5G and our Edge Zone platforms, we will be sure to keep you updated. Please visit our page to learn more and keep track of the latest news.
Source: Azure

Extending the power of Azure AI to Microsoft 365 users

Today, Yusuf Mehdi, Corporate Vice President of Modern Life and Devices, announced the availability of new Microsoft 365 Personal and Family subscriptions. In his blog, he shared a few examples of how Microsoft 365 is innovating to deliver experiences powered by artificial intelligence (AI) to billions of users every day. Whether through familiar products like Outlook and PowerPoint, or through new offerings such as Presenter Coach and Microsoft Editor across Word, Outlook, and the web, Microsoft 365 relies on Azure AI to offer new capabilities that make their users even more productive.

What is Azure AI?

Azure AI is a set of AI services built on Microsoft’s breakthrough innovation from decades of world-class research in vision, speech, language processing, and custom machine learning. What is particularly exciting is that Azure AI provides our customers with access to the same proven AI capabilities that power Microsoft 365, Xbox, HoloLens, and Bing. In fact, there are more than 20,000 active paying customers, and more than 85 percent of the Fortune 100 companies have used Azure AI in the last 12 months.

Azure AI helps organizations:

Develop machine learning models that can help with scenarios such as demand forecasting, recommendations, or fraud detection using Azure Machine Learning.
Incorporate vision, speech, and language understanding capabilities into AI applications and bots, with Azure Cognitive Services and Azure Bot Service.
Build knowledge-mining solutions to make better use of untapped information in their content and documents using Azure Search.

Microsoft 365 provides innovative product experiences with Azure AI

The announcement of Microsoft Editor is one example of innovation. Editor, your personal intelligent writing assistant, is available across Word, Outlook.com, and browser extensions for Edge and Chrome. Editor is an AI-powered service available in more than 20 languages that has traditionally helped writers with spell check and grammar recommendations. Powered by AI models built with Azure Machine Learning, Editor can now recommend clear and concise phrasing, suggest more formal language, and provide citation recommendations.

Additionally, Microsoft PowerPoint utilizes Azure AI in multiple ways. PowerPoint Designer uses Azure Machine Learning to recommend design layouts to users based on the content on the slide. In the example image below, Designer made the design recommendation based on the context in the slide. It can also intelligently crop objects and people in images and place them in an optimal layout on a slide. Since its launch, PowerPoint Designer users have kept nearly two billion Designer slides in their presentations.

You can take a closer look at how the PowerPoint team built this feature with Azure Machine Learning in this blog.

PowerPoint also uses Azure Cognitive Services, such as the Speech service, to power live captions and subtitles for presentations in real time, making it easier for all audience members to follow along. It also uses Translator Text to provide live translations into over 60 languages to reach an even wider audience. These AI-powered capabilities in PowerPoint are providing new experiences for users, allowing them to connect with diverse audiences they were unable to reach before.

These same innovations can also be found in Microsoft Teams. As we look to stay connected with co-workers, Teams has some helpful capabilities intended to make it easier to collaborate and communicate while working remotely. For example, Teams offers live captioning of meetings, which leverages the Speech API for speech transcription. But it doesn’t stop there. As you saw with PowerPoint, Teams also uses Azure AI for live translations when you set up Live Events. This functionality is particularly useful for company town hall meetings or for any virtual event with up to ten thousand attendees, allowing presenters to reach audiences worldwide.

These are just a few of the ways Microsoft 365 applications utilize Azure AI to deliver industry-leading experiences to billions of users. When you consider the fact that other Microsoft products such as Microsoft 365, Xbox, HoloLens 2, Dynamics 365, and Power Platform all rely on Azure AI, you begin to see the massive scale and the breadth of scenarios that only Azure can offer. Best of all, these same capabilities are available to anyone in Azure AI. 
Source: Azure

Update #2 on Microsoft cloud services continuity

Since last week’s update, the global health pandemic continues to impact every organization—large or small—their employees, and the customers they serve. Everyone is working tirelessly to support all our customers, especially critical health and safety organizations across the globe, with the cloud services needed to sustain their operations during this unprecedented time. Equally, we are hard at work providing services to support hundreds of millions of people who rely on Microsoft to stay connected and to work and play remotely.

As Satya Nadella shared, “It’s times like this that remind us that each of us has something to contribute and the importance of coming together as a community”. In these times of great societal disruption, we are steadfast in our commitment to help everyone get through this.

For this week’s update, we want to share common questions we’re hearing from customers and partners along with insights to address these important inquiries. If you have any immediate needs, please refer to the following resources.

Azure Service Health – for tracking any issues impacting customer workloads and understanding Azure Service Health
Microsoft 365 Service health and continuity – for tracking and understanding M365 Service health
Xbox Live – for tracking game and service status

What have you observed over the last week?
In response to health authorities emphasizing the importance of social distancing, we’ve seen usage increases in services that support these scenarios—including Microsoft Teams, Windows Virtual Desktop, and Power BI.

We have seen a 775 percent increase in use of our cloud services in regions that have enforced social distancing or shelter-in-place orders.
We have seen a very significant spike in Teams usage, and now have more than 44 million daily users. Those users generated over 900 million meeting and calling minutes on Teams daily in a single week. You can read more about Teams data here.
Windows Virtual Desktop usage has grown more than 3x.
Government use of public Power BI to share COVID-19 dashboards with citizens has surged by 42 percent in a week.

Have you made any changes to the prioritization criteria you outlined last week?
No. Our top priority remains support for critical health and safety organizations and ensuring remote workers stay up and running with the core functionality of Teams.

Specifically, we are providing the highest level of monitoring during this time for the following:

First Responders (fire, EMS, and police dispatch systems)
Emergency routing and reporting applications
Medical supply management and delivery systems
Applications to alert emergency response teams for accidents, fires, and other issues
Healthbots, health screening applications, and websites
Health management applications and record systems

Given your prioritization criteria, how will this impact other Azure customers?
We’re implementing a few temporary restrictions designed to balance the best possible experience for all of our customers. We have placed limits on free offers to prioritize capacity for existing customers. We also have limits on certain resources for new subscriptions. These are ‘soft’ quota limits, and customers can raise support requests to increase these limits. If requests cannot be met immediately, we recommend customers use alternative regions (of our 54 live regions) that may have less demand surge. To manage surges in demand, we will expedite the creation of new capacity in the appropriate region.

Have there been any service disruptions?
Despite the significant increase in demand, we have not had any major service disruptions. As a result of the surge in use over the last week, we have experienced significant demand in some regions (Europe North, Europe West, UK South, France Central, Asia East, India South, Brazil South) and are observing deployments for some compute resource types in these regions drop below our typical 99.99 percent success rates.

Although the majority of deployments still succeed, (so we encourage any customers experiencing allocation failures to retry deployments), we have a process in place to ensure that customers that encounter repeated issues receive relevant mitigation options. We treat these short-term allocation shortfalls as a service incident and we send targeted updates and mitigation guidance to impacted customers via Azure Service Health—as per our standard process for any known platform issues.
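The retry guidance above can be captured in a generic retry-with-backoff helper. This is an illustrative sketch, not an Azure SDK API; the `flaky_deploy` function below is a hypothetical stand-in for a real deployment call:

```python
# Generic retry with exponential backoff, as suggested for transient
# allocation failures (illustrative sketch; not an Azure SDK API).
import time

def retry(op, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Run op(); on failure wait exponentially longer and try again."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** i))  # 1s, 2s, 4s, ...

# Hypothetical deployment call that fails twice before succeeding.
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("AllocationFailure")
    return "Succeeded"

print(retry(flaky_deploy, sleep=lambda s: None))  # → Succeeded
```

In practice the delay values and the set of retryable errors would be tuned to the specific resource type and region.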

When these service incidents happen, how do you communicate to customers and partners?
We have standard operating procedures for how we manage both mitigation and communication. Impacted customers and partners are notified through the Service Health experience in the Azure portal and/or in the Microsoft 365 admin center.

What actions are you taking to prevent capacity constraints?
We are expediting the addition of significant new capacity that will be available in the weeks ahead. Concurrently, we monitor support requests and, if needed, encourage customers to consider alternative regions or alternative resource types, depending on their timeline and requirements. If the implementation of these efforts to alleviate demand is not sufficient, customers may experience intermittent deployment related issues. When this does happen, impacted customers will be informed via Azure Service Health.

Have you needed to make any changes to the Teams experience?
To best support our Teams customers worldwide and accommodate new growth and demand, we made a few temporary adjustments to select non-essential capabilities such as how often we check for user presence, the interval in which we show when the other party is typing, and video resolution. These adjustments do not have significant impact on our end users’ daily experiences.

Is Xbox Live putting a strain on overall Azure capacity?
We’re actively monitoring performance and usage trends to ensure we’re optimizing services for gamers worldwide. At the same time, we’re taking proactive steps to plan for high-usage periods, which includes taking prudent measures with our publishing partners to deliver higher-bandwidth activities like game updates during off-peak hours.

How does in-home broadband use impact service continuity and capacity? Any specific work being done with ISPs?
We’ve been in regular communication with ISPs across the globe and are actively working with them to augment capacity as needed. In particular, we’ve been in discussions with several ISPs that are taking measures to reduce bandwidth from video sources in order to enable their networks to be performant during the workday.

We’ll continue to provide regular updates on the Microsoft Azure blog.
Source: Azure

How Azure Machine Learning enables PowerPoint Designer

If you use Office 365, you have likely seen the Microsoft PowerPoint Designer appear to offer helpful ideas when you insert a picture into a PowerPoint slide. You may also have found it under the Home tab in the ribbon. In either case, Designer provides users with redesigned slides to maximize their engagement and visual appeal. These designs include different ways to represent your text as diagrams, layouts to make your images pop, and now it can even surface relevant icons and images to bring your slides to the next level. Ultimately, it saves users time while enhancing their slides to create stunning, memorable, and effective presentations.

Designer uses artificial intelligence (AI) capabilities in Office 365 to enable users to be more productive and unlock greater value from PowerPoint. It applies AI technologies and machine learning based techniques to suggest high-quality professional slide designs. Content on slides such as images, text, and tables are analyzed by Designer and formatted based on professionally designed templates for enhanced effectiveness and visual appeal.

The data science team, working to grow and improve Designer, is comprised of five data scientists with diverse backgrounds in applied machine learning and software engineering. They strive to continue pushing barriers in the AI space, delivering tools that make everyone’s presentation designs more impactful and effortless. They’ve shared some of the efforts behind PowerPoint Designer, just so we can get a peek under the hood of this powerful capability.

PowerPoint Designer capabilities

Designer has been processing user requests in the production environment for several years and uses machine learning models for problems such as image categorization, content recommendation, text analysis, slide structure analysis, suggestion ranking, and more. Since its launch, Designer users have kept 1.7 billion Designer slides in their presentations. This means the team needs a platform to run their models at a large scale. Plus, the Designer team is regularly retraining models in production and driving model experimentation to provide optimized content recommendations.

Recently, the data analysis and machine learning team within PowerPoint started leveraging Azure Machine Learning and its robust MLOps capabilities to build models faster and at scale, replacing local development. Moving toward content suggestions, like background images, videos, and more, requires a highly performant platform, further necessitating the shift towards Azure Machine Learning.

The team uses Azure Machine Learning and its MLOps capabilities to create automated pipelines that can be iterated on without disrupting the user experience. The pipeline starts at Azure Data Lake, where the data is stored. From there, the team gathers data and preprocesses it, merging data from different sources and transforming raw data into a format that models can understand. Using Azure Machine Learning distributed training, they retrain their current models weekly or monthly. Distributed training allows the team to train models in parallel across multiple virtual machines (VMs) and GPUs (graphics processing units). This saves considerable time and ensures that model training doesn't block the data science team, so they can focus on other objectives like experimentation.
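The staged pipeline described above (gather from the data lake, preprocess, then train in parallel across workers) can be sketched with plain-Python stand-ins. The stage names and data below are illustrative assumptions, not the team's actual code or the Azure Machine Learning SDK:

```python
# Schematic sketch of a gather → preprocess → distributed-train pipeline.
# All functions are illustrative stand-ins for real pipeline steps.

def gather(sources):
    """Merge records from several raw sources (Azure Data Lake in the article)."""
    merged = []
    for s in sources:
        merged.extend(s)
    return merged

def preprocess(records):
    """Transform raw records into the format models can understand."""
    return [r.strip().lower() for r in records]

def train_shard(shard):
    """Stand-in for one worker in distributed training: 'fits' on its shard."""
    return {"examples": len(shard)}

def distributed_train(features, workers=4):
    """Split work across workers, as distributed training splits work across
    multiple VMs/GPUs, then combine the partial results."""
    shards = [features[i::workers] for i in range(workers)]
    partials = [train_shard(s) for s in shards]
    return {"examples": sum(p["examples"] for p in partials)}

model = distributed_train(preprocess(gather([["Slide A"], ["Slide B", "Slide C"]])))
print(model)  # → {'examples': 3}
```

In the real system each stage would be a pipeline step scheduled by Azure Machine Learning rather than a local function call.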

The team does experimentation in parallel as well—trying variants, or hyperparameters, and comparing results. The final model is then put back into Azure Data Lake and downloaded to Azure Machine Learning.

The following diagram shows the conceptualized, high-level architecture of data being used from local caches in Azure Data Lake to develop machine learning models in Azure Machine Learning. These models are then integrated into the micro-service architecture of the Designer backend service that presents PowerPoint users with intelligent slide suggestions.

Benefits of Azure Machine Learning for the PowerPoint team

The PowerPoint team decided to move its workloads over to Azure Machine Learning based on the following capabilities:

Supports Python notebooks which can be accessed on any machine through the browser.
Natively supports running the latest TensorFlow and PyTorch-based algorithms and pre-trained models.
Experimentation is very easy to set up with minimal ramp-up time. It allows seamless execution locally or in the cloud, presenting developers with a hybrid environment.
Azure Machine Learning is one of Microsoft’s key AI investments.

Follow the Azure blog to be the first to know when features leveraging new models that recommend more types of content, such as image classification and content recommendations, are released.

Azure Machine Learning | Azure Data Lake | Azure Machine Learning pipelines

Learn more

Learn more about Azure Machine Learning.

Get started with a free trial of Azure Machine Learning.
Source: Azure

Announcing general availability of incremental snapshots of Managed Disks

We're announcing the general availability of incremental snapshots of Azure Managed Disks. Incremental snapshots are a cost-effective, point-in-time backup of managed disks. Unlike current snapshots, which are billed for the full size, incremental snapshots are billed only for the delta changes to disks since the last snapshot, and they are always stored on the most cost-effective storage, Standard HDD storage, irrespective of the storage type of the parent disks. For additional reliability, incremental snapshots are stored on Zone Redundant Storage (ZRS) by default in regions that support ZRS.

Incremental snapshots provide differential capability, enabling customers and independent software vendors (ISVs) to build backup and disaster recovery solutions for Managed Disks. They allow you to get the changes between two snapshots of the same disk, copying only the changed data between two snapshots across regions and reducing the time and cost of backup and disaster recovery. Incremental snapshots are accessible instantaneously; you can read the underlying data of incremental snapshots or restore disks from them as soon as they are created. They inherit all the compelling capabilities of current snapshots and have a lifetime independent of their parent managed disks and of each other.

Examples of incremental snapshots

Let’s look at a few examples to understand how the incremental snapshots help you reduce cost.

Suppose you're using a disk with 100 GiB already occupied, and you take the first incremental snapshot before adding any new data; the first snapshot therefore occupies 100 GiB. Then 20 GiB of data is added to the disk before you create the second incremental snapshot. With incremental snapshots, the second snapshot occupies only 20 GiB and you're billed for only 20 GiB, compared to current full snapshots, which would have occupied, and been billed for, 120 GiB of data, reducing your cost.

The second incremental snapshot references 100 GiB of data from the first snapshot. When you restore the disk from the second incremental snapshot, the system can restore 120 GiB of data by copying 100 GiB of data from the first snapshot and 20 GiB of data from the second snapshot.

Let's now understand what happens when 5 GiB of data was modified on the disk before you took the third incremental snapshot. The third snapshot then occupies only 5 GiB of data, references 95 GiB of data from the first snapshot, and references 20 GiB of data from the second snapshot.

Now, if you deleted the first incremental snapshot, the second and third snapshots continue to function normally, because incremental snapshots are independent of each other. Under the hood, the system merges the data occupied by the first snapshot into the second snapshot to ensure that the second and third snapshots are not impacted by the deletion of the first. The second snapshot now occupies 120 GiB of data.

Since we launched the preview of incremental snapshots in September 2019, our ISVs have used this capability on a wide range of workloads to reduce the cost and time of backup and disaster recovery.
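The accounting walked through in these examples can be modeled with a small sketch. This is illustrative only, not how Azure implements snapshot storage: each snapshot stores just its delta, and deleting a snapshot merges its data into the next one so later snapshots keep working.

```python
# Toy model of incremental-snapshot occupancy and merge-on-delete
# (illustrative sketch; not Azure's implementation).

class SnapshotChain:
    def __init__(self):
        self.occupied = []  # GiB actually stored (and billed) per snapshot

    def take(self, delta_gib):
        """A new incremental snapshot stores only the changed data."""
        self.occupied.append(delta_gib)

    def delete_oldest(self):
        """Merge the oldest snapshot's data into its successor."""
        oldest = self.occupied.pop(0)
        if self.occupied:
            self.occupied[0] += oldest

chain = SnapshotChain()
chain.take(100)  # first snapshot: 100 GiB already on the disk
chain.take(20)   # 20 GiB added since the first snapshot
chain.take(5)    # 5 GiB modified since the second snapshot
chain.delete_oldest()
print(chain.occupied)  # → [120, 5]
```

Total storage after the deletion is 125 GiB, versus 360 GiB if each of the three snapshots had been a full 100-120 GiB copy.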

Below are some quotes from partners in our preview program:

“Zerto has been helping enterprise customers who leverage Microsoft Azure become IT Resilient for years. Extending Azure Managed Disks with the incremental snapshots API has enabled Zerto to improve upon industry-best RTOs and RPOs in Azure. The powerful capabilities of Azure Managed Disks enable Zerto to meet the scale and performance requirements of a modern enterprise. With Zerto and Microsoft’s continued collaboration and integration, we’ll continue to pave the way for IT Resilience in the public cloud.” – Michael Khusid, Director of Product Management, Zerto, Inc.

“Combining Rubrik Azure data protection with the latest Microsoft API delivering incremental snapshots, we reduce the time and cost for backup and recovery, and help our joint customers achieve 18x lower costs, high storage efficiency, reduced network traffic, and hourly RPOs. Together, Rubrik and Microsoft enable our enterprise customers to accelerate their cloud journey while unlocking productivity and better cloud economics.” – Shay Mowlem, Senior Vice President of Product & Strategy, Rubrik

“With incremental snapshots of Azure managed disks, Dell EMC PowerProtect Cloud Snapshot Manager (CSM) customers will be able to reduce their backup times and storage costs significantly. Also, they’ll be able to achieve much shorter recovery time objectives with instant access to their data from snapshots. Designed for any-size cloud infrastructure, CSM provides global visibility and control to gain insights into data protection activities across Azure subscriptions, making CSM a great solution for protecting customer workloads in public cloud environments.” – Laura Dubois, vice president, product management, Dell Technologies Data Protection

 

Availability and pricing

You can now create incremental snapshots in all regions, including sovereign regions.

Incremental snapshots are charged per GiB of the storage occupied by the delta changes since the last snapshot. For example, if you're using a managed disk with a provisioned size of 128 GiB, with 100 GiB used, the first incremental snapshot is billed only for the used size of 100 GiB. If 20 GiB of data is then added to the disk before you create the second snapshot, the second incremental snapshot is billed for only 20 GiB.

Incremental snapshots are always stored on standard storage irrespective of the storage type of parent managed disks and charged as per the pricing of standard storage. For example, incremental snapshots of a Premium SSD Managed Disk are stored on standard storage. They are stored on ZRS by default in regions that support ZRS. Otherwise, they are stored on locally redundant storage (LRS). The per GB pricing of both the LRS and ZRS options is the same.

Incremental snapshots cannot be stored on premium storage. If you are using current snapshots on premium storage to scale up virtual machine deployments, we recommend you use custom images on standard storage in Shared Image Gallery. This will help you to achieve higher scale with lower cost.

You can visit the Managed Disks pricing page for more details about snapshot pricing.

Getting started

Create an incremental snapshot using CLI.
Create an incremental snapshot using PowerShell.
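As a minimal sketch, creating an incremental snapshot with the Azure CLI uses the `--incremental` flag on `az snapshot create`; the resource group, snapshot, and disk names below are placeholders, and the linked docs remain the authoritative steps:

```shell
# Create an incremental snapshot of an existing managed disk.
# Resource names are placeholders.
az snapshot create \
    --resource-group myResourceGroup \
    --name myDiskSnapshot \
    --source myManagedDisk \
    --incremental true
```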

Source: Azure