Azure Blob Storage enhancing data protection and recovery capabilities

Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing best-in-class data protection and recovery capabilities to keep your applications running. Today, we are announcing the general availability of two features: Geo-Zone-Redundant Storage (GZRS), which provides protection against regional disasters, and account failover, which allows you to determine when to initiate a failover rather than waiting for Microsoft to do so.

Additionally, we are releasing two new preview features: Versioning and Point in time restore. These new functionalities expand upon Azure Blob Storage’s existing capabilities such as data redundancy, soft delete, account delete locking, and immutable blobs, making our data protection and restore capabilities even better.

Geo-Zone-Redundant Storage (GZRS)

Geo-Zone-Redundant Storage (GZRS) and Read-Access Geo-Zone-Redundant Storage (RA-GZRS) are now generally available, offering intra-regional and inter-regional high availability and disaster protection for your applications.

GZRS writes three copies of your data synchronously across multiple Azure availability zones, similar to zone-redundant storage (ZRS), giving you continued read and write access even if a datacenter or availability zone is unavailable. In addition, GZRS asynchronously replicates your data to the paired secondary region to protect against regional unavailability. RA-GZRS exposes a read endpoint on this secondary replica, allowing you to read data in the event of primary-region unavailability.
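
For applications that read from the secondary replica, the convention is simple: RA-GZRS exposes the secondary read endpoint by appending -secondary to the account name. A minimal sketch (the account name "mydata" is illustrative):

```python
def secondary_blob_endpoint(account_name: str) -> str:
    """Derive the read-only secondary endpoint that RA-GZRS exposes.

    The secondary endpoint appends '-secondary' to the account name:
    'mydata' -> 'https://mydata-secondary.blob.core.windows.net'.
    """
    return f"https://{account_name}-secondary.blob.core.windows.net"

# A client can read from the primary endpoint normally and fall back to
# the secondary replica when the primary region is unavailable.
primary = "https://mydata.blob.core.windows.net"
fallback = secondary_blob_endpoint("mydata")
```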

To learn more, see Azure Storage redundancy.

Account failover

Customer-initiated storage account failover is now generally available, allowing you to determine when to initiate a failover instead of waiting for Microsoft to do so. When you perform a failover, the secondary replica of the storage account becomes the new primary. The DNS records for all storage service endpoints—blob, file, queue, and table—are updated to point to this new primary. Once the failover is complete, clients automatically begin reading data from and writing data to the storage account in the new primary region, with no code changes.

Customer-initiated failover is available for GRS, RA-GRS, GZRS, and RA-GZRS accounts. To learn more, see our Disaster recovery and account failover documentation.

Versioning preview

Applications create, update, and delete data continuously. A common requirement is the ability to access and manage both current and previous versions of the data. Versioning automatically maintains prior versions of an object and identifies them with version IDs. You can restore a prior version of a blob to recover your data if it is erroneously modified or deleted.

A version captures a committed blob state at a given point in time. When versioning is enabled for a storage account, Azure Storage automatically creates a new version of a blob each time that blob is modified or deleted.
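
The behavior can be sketched with an in-memory stand-in (this is not the Blob Storage API; the class, method names, and version-ID format here are invented for illustration):

```python
import itertools

class VersionedBlobStore:
    """In-memory stand-in for a versioning-enabled blob container.

    Every write to a blob name creates a new version with its own ID;
    older versions stay readable and can be promoted back to current.
    """

    def __init__(self):
        self._versions = {}   # name -> list of (version_id, data)
        self._ids = itertools.count(1)

    def upload(self, name, data):
        version_id = f"v{next(self._ids)}"
        self._versions.setdefault(name, []).append((version_id, data))
        return version_id

    def read(self, name, version_id=None):
        versions = self._versions[name]
        if version_id is None:            # current version
            return versions[-1][1]
        return dict(versions)[version_id]

    def restore(self, name, version_id):
        """Promote a prior version to current by re-uploading its data."""
        return self.upload(name, self.read(name, version_id))

store = VersionedBlobStore()
v1 = store.upload("report.csv", "good data")
store.upload("report.csv", "corrupted!")
store.restore("report.csv", v1)
assert store.read("report.csv") == "good data"
```

Restoring copies a prior version's content forward as the new current version, mirroring how a prior blob version can be promoted without losing the intervening history.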

Versioning and soft delete work together to provide you with optimal data protection. To learn more, see our documentation on Blob versioning.

Point in time restore preview

Point in time restore for Azure Blob Storage gives storage account administrators the ability to restore a subset of containers or blobs within a storage account to a previous state. An administrator can restore data to a specific past date and time in the event of an application corrupting data, a user inadvertently deleting contents, or a test run of a machine learning model overwriting it.

Point in time restore makes use of Blob Change feed, currently in preview. Change feed enables recording of all blob creation, modification, and deletion operations that occur in your storage account. Today we are expanding our Change feed preview by enabling four new regions and adding support for two new blob event types: BlobPropertiesUpdated and BlobSnapshotCreated.

This improvement now captures change records caused by the SetBlobMetadata, SetBlobProperties, and SnapshotBlob operations. To learn more, read Change feed support in Azure Blob Storage (Preview).
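
Conceptually, a restore replays the recorded changes up to the chosen timestamp. The sketch below uses an invented record shape and assumes the feed is ordered by time; the real change feed format differs:

```python
from datetime import datetime

# Each change feed record notes what happened to which blob and when
# (illustrative shape, sorted by time).
change_feed = [
    {"time": datetime(2020, 5, 1, 9, 0),  "event": "BlobCreated", "blob": "a.txt", "data": "v1"},
    {"time": datetime(2020, 5, 1, 10, 0), "event": "BlobCreated", "blob": "a.txt", "data": "v2"},
    {"time": datetime(2020, 5, 1, 11, 0), "event": "BlobDeleted", "blob": "a.txt", "data": None},
]

def restore_to(feed, point_in_time):
    """Rebuild container state by replaying changes up to a timestamp."""
    state = {}
    for record in feed:
        if record["time"] > point_in_time:
            break
        if record["event"] == "BlobDeleted":
            state.pop(record["blob"], None)
        else:
            state[record["blob"]] = record["data"]
    return state

# Restoring to 10:30 recovers the blob as it was before the deletion.
assert restore_to(change_feed, datetime(2020, 5, 1, 10, 30)) == {"a.txt": "v2"}
```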

Point in time restore is intended for ISV partners and customers who want to implement their own restore workflow on top of Azure Storage. To learn more, see Point in time restore.

Build it, use it, and tell us about it

These new capabilities provide greater data protection control for all users of Azure Storage. The general availability release of GZRS adds zone-redundant storage options that are also replicated to a secondary region. Account failover allows customers to control geo-replicated failover for their storage accounts. In addition, the previews of Versioning and Point in time restore allow greater control over data protection and restoration to a previous date and time.

We look forward to hearing your feedback on these features and suggestions for future improvements through email at AzureStorageFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at the Azure Storage feedback forum.
Source: Azure

Minecraft Earth and Azure Cosmos DB part 2: Delivering turnkey geographic distribution

This post is part 2 of a two-part series about how organizations are using Azure Cosmos DB to meet real-world needs and the difference it’s making to them. In part 1, we explored the challenges that led service developers for Minecraft Earth to choose Azure Cosmos DB and how they’re using it to capture almost every action taken by every player around the globe—with ultra-low latency. In part 2, we examine the solution’s workload and how Minecraft Earth service developers have benefited from building it on Azure Cosmos DB.

Geographic distribution and multi-region writes

Minecraft Earth service developers used the turnkey geographic distribution feature in Azure Cosmos DB to achieve three goals: fault tolerance, disaster recovery, and minimal latency—the latter achieved by also using the multi-master capabilities of Azure Cosmos DB to enable multi-region writes. Each supported geography has at least two service instances. For example, in North America, the Minecraft Earth service runs in the West US and East US Azure regions, with other components of Azure used to determine which is closer to the user and route traffic accordingly.

Nathan Sosnovske, a Senior Software Engineer on the Minecraft Earth services development team, explains:

“With Azure available in so many global regions, we were able to easily establish a worldwide footprint that ensures a low-latency gaming experience on a global scale. That said, people mostly travel within one geography, which is why we have multi-master writes set up between all of the service instances in each geography. That’s not to say that a player who lives in San Francisco can’t travel to Europe and still play Minecraft Earth—it’s just that we’re using a different mechanism to minimize round-trip latency in such cases.”

Request units per second (RU/s) consumption

In Azure Cosmos DB, request units per second (RU/s) is the “currency” used to reserve guaranteed database throughput. For Minecraft Earth, a typical write request consumes about 10 RUs, with an additional 2-3 RUs used for background processing of the append-only event log, which is driven by Azure Service Bus.

“We’ve found that our RU/s usage scales quite linearly; we only need to increase capacity when we have a commensurate increase in write requests per second. At first, we thought we would need more throughput, but it turned out there was a lot of optimization to be done,” says Sosnovske. “Our original design handled request volumes and complexity relatively well, but it didn’t handle the case where the system would shard—that is, physically repartition itself internally—because of overall data volumes.”

This was because allocated RU/s are distributed equally across physical partitions, and the physical partition holding the most recent data was running much hotter than the rest.
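
The arithmetic behind that hot-partition effect is straightforward (the numbers below are hypothetical):

```python
def per_partition_budget(total_rus, physical_partitions):
    # Azure Cosmos DB spreads provisioned throughput evenly across
    # physical partitions.
    return total_rus / physical_partitions

# Hypothetical numbers: 100,000 RU/s spread over 10 partitions gives
# each partition a 10,000 RU/s budget ...
budget = per_partition_budget(100_000, 10)

# ... so a partition holding the most recent data and absorbing
# 15,000 RU/s of writes is throttled even though the account as a
# whole is using well under its provisioned 100,000 RU/s.
hot_partition_load = 15_000
assert hot_partition_load > budget
```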

“Fortunately, because our system is modeled as an append-only log that gets materialized into views for the client, we very rarely read old data directly from Azure Cosmos DB,” explains Sosnovske. “Our data model was flexible enough to allow us to archive events to cold storage after they were processed into views, and then delete them from Azure Cosmos DB using its Time to Live feature.”

Today, with the service’s current architecture, Sosnovske isn’t worried about scalability at all.

“During development, we tested the scalability of Azure Cosmos DB up to one million RU/s, and it delivered that throughput without a problem,” Sosnovske says.

Initial launch of Minecraft Earth

Minecraft Earth was formally released in one geography in October 2019, with its global rollout across all other geographies completed over the following weeks. For Minecraft fans, Minecraft Earth provides a means of experiencing the game they know and love at an entirely new level, in the world of augmented reality.

And for Sosnovske and all the other developers who helped bring Minecraft Earth to life, the opportunity to extend one of the most popular games of all time into the realm of augmented reality has been equally rewarding.

“A lot of us are gamers ourselves and jumped on the opportunity to be a part of it all,” Sosnovske recalls. “Looking back, everything went pretty well—and we’re all quite satisfied with the results.”

Benefits of using Azure Cosmos DB

Although Azure Cosmos DB is just one of several Azure services that support Minecraft Earth, it plays a pivotal role.

“I can’t think of another way we could have delivered what we did without building something incredibly complex completely from scratch,” says Sosnovske. “Azure Cosmos DB provided all the functionality we needed, including low latency, global distribution, multi-master writes, and more. All we had to do was properly put it to use.”

Specific benefits of using Azure Cosmos DB to build the Minecraft Earth service included the following:

Easy adoption and implementation. According to Sosnovske, Azure Cosmos DB was easy to adopt.

“Getting started with Azure Cosmos DB was incredibly easy, especially within the context of the .NET ecosystem,” Sosnovske says. “We simply had to install the NuGet package and point it at the proper endpoint. Documentation for the service is very thorough; we haven’t had any major issues due to misunderstanding how the SDK works.”

Zero maintenance. As part of Microsoft Azure, Azure Cosmos DB is a fully managed service, which means that nobody on the Minecraft Earth services team needs to worry about patching servers, maintaining backups, data center failures, and so on.

“Not having to deal with day-to-day operations is a huge bonus,” says Sosnovske. “However, this is really a benefit of building on Azure in general.”

Guaranteed low latency. A big reason developers chose Azure Cosmos DB is that it provides a guaranteed single-digit millisecond (<10 ms) latency SLA for reads and writes at the 99th percentile, at any scale, anywhere in the world. In comparison, Azure Table storage latency would have been higher, with no guaranteed upper bound.

“Azure Cosmos DB is delivering as promised, in that we’re seeing an average latency of 7 milliseconds for reads,” says Sosnovske.

Elastic scalability. Thanks to the elastic scalability provided by Azure Cosmos DB, the game enjoyed a frictionless launch.

“At no point was Azure Cosmos DB the bottleneck in scaling our service,” says Sosnovske. “We’ve done a lot of work to optimize performance since initial release, and knowing that we wouldn’t hit any scalability limits as we did that work was a huge benefit. We may have paid a bit more for throughput than we had to at first, but that’s a lot better than having a service that can’t keep up with growth in user demand.”

Turnkey geographic distribution. With Azure Cosmos DB, geographic distribution was a trivial task for Minecraft Earth service developers. Adjustments to provisioned throughput (in RU/s) are just as easy because Azure Cosmos DB transparently performs the necessary internal operations across all the regions, continuing to provide a single system image.

“Turnkey geo-distribution was a huge benefit,” says Sosnovske. “We did have to think a bit more carefully about how to model our system when turning on multi-master support, but it was orders of magnitude less work than solving the problem ourselves.”

Compliance. Through their use of Time-to-Live within Azure Cosmos DB, developers can safely store location-based gameplay data for short periods of time without having to worry about violating compliance mandates like Europe’s General Data Protection Regulation (GDPR).

“It lets us drive workflows like ‘This player should only be able to redeem this location once in a given period of time,’ after which Azure Cosmos DB automatically cleans up the data within our set TTL,” explains Sosnovske.

In summarizing his experience with Azure Cosmos DB, Sosnovske says it was quite positive.

“Azure Cosmos DB is highly reliable, easy to use after you take the time to understand the basic concepts, and, best of all, it stays out of the way when you’re writing code. When junior developers on my team are working on features, they don’t need to think about the database or how data is stored; they can simply write code for a domain and have it just work.”

Get started with Azure Cosmos DB

Visit Azure Cosmos DB.
Learn more about Azure for Gaming.

Source: Azure

Minecraft Earth and Azure Cosmos DB part 1: Extending Minecraft into our real world

This post is part 1 of a two-part series about how organizations use Azure Cosmos DB to meet real world needs and the difference it’s making to them. In part 1, we explore the challenges that led service developers for Minecraft Earth to choose Azure Cosmos DB and how they’re using it to capture almost every action taken by every player around the globe—with ultra-low latency. In part 2, we examine the solution’s workload and how Minecraft Earth service developers have benefited from building it on Azure Cosmos DB.

Extending the world of Minecraft into our real world

You’ve probably heard of the game Minecraft, even if you haven’t played it yourself. It’s the best-selling video game of all time, having sold more than 176 million copies since 2011. Today, Minecraft has more than 112 million monthly players, who can discover and collect raw materials, craft tools, and build structures or earthworks in the game’s immersive, procedurally generated 3D world. Depending on game mode, players can also fight computer-controlled foes and cooperate with—or compete against—other players.

In May 2019, Microsoft announced the upcoming release of Minecraft Earth, which began its worldwide rollout in December 2019. Unlike preceding games in the Minecraft franchise, Minecraft Earth takes things to an entirely new level by enabling players to experience the world of Minecraft within our real world through the power of augmented reality (AR).

For Minecraft Earth players, the experience is immediately familiar—albeit deeply integrated with the world around them. For developers on the Minecraft team at Microsoft, however, the delivery of Minecraft Earth—especially the authoritative backend services required to support the game—would require building something entirely new.

Nathan Sosnovske, a Senior Software Engineer on the Minecraft Earth services development team, explains:

“With vanilla Minecraft, while you could host your own server, there was no centralized service authority. Minecraft Earth is based on a centralized, authoritative service—the first ‘heavy’ service we’ve ever had to build for the Minecraft franchise.”

In this case study, we’ll look at some of the challenges that Minecraft Earth service developers faced in delivering what was required of them—and how they used Azure Cosmos DB to meet those needs.

The technical challenge: Avoiding in-game lag

Within the Minecraft Earth client, which runs on iOS-based and Android-based AR-capable devices, almost every action a player takes results in a write to the core Minecraft Earth service. Each write is a REST POST that must be immediately accepted and acknowledged to avoid any noticeable in-game lag.

“From a services perspective, Minecraft Earth requires low-latency writes and medium-latency reads,” explains Sosnovske. “Writes need to be fast because the client requires confirmation on each one, such as might be needed for the client to render—for example, when a player taps on a resource to see what’s in it, we don’t want the visuals to hang while the corresponding REST request is processed. Medium-latency reads are acceptable because we can use client-side simulation until the backing model behind the service can be updated for reading.”

To complicate the challenge, Minecraft Earth service developers needed to ensure low-latency writes regardless of a player’s location. This required running copies of the service in multiple locations within each geography where Minecraft Earth would be offered, along with built-in intelligence to route the Minecraft Earth client to the nearest location where the service is deployed.

“Typical network latency between the east and west coasts of the US is 70 to 80 milliseconds,” says Sosnovske. “If a player in New York had to rely on a service running in San Francisco, or vice versa, the in-game lag would be unacceptable. At the same time, the game is called Minecraft Earth—meaning we need to enable players in San Francisco and New York to share the same in-game experience. To deliver all this, we need to replicate the service—and its data—in multiple, geographically distributed datacenters within each geography.”

The solution: An event sourcing pattern based on Azure Cosmos DB

To satisfy their technical requirements, Minecraft Earth service developers implemented an event sourcing pattern based on Azure Cosmos DB.

“We originally considered using Azure Table storage to store our append-only event log, but its lack of any SLAs for read and write latencies made that unfeasible,” says Sosnovske. “Ultimately, we chose Azure Cosmos DB because it provides 10 millisecond SLAs for both reads and writes, along with the global distribution and multi-master capabilities needed to replicate the service in multiple locations within each geography.”

With an event sourcing pattern, instead of just storing the current state of the data, the Minecraft Earth service uses an append-only data store that’s based on Azure Cosmos DB to record the full series of actions taken on the data—in this case, mapping to each in-game action taken by the player. After immediate acknowledgement of a successful write is returned to the client, queues that subscribe to the append-only event store handle postprocessing and asynchronously apply the collected events to a domain state maintained in Azure Blob storage. To optimize things further, Minecraft Earth developers combined the event sourcing pattern with domain-driven design, in which each app domain—such as inventory items, character profiles, or achievements—has its own event stream.

“We modeled our data as streams of events that are stored in an append-only log and mutate an in-memory model state, which is used to drive various client views,” says Sosnovske. “That cached state is maintained in Azure Blob storage, which is fast enough for reads and helps to keep our request unit costs for Azure Cosmos DB to a minimum. In many ways, what we’ve done with Azure Cosmos DB is like building a write cache that’s really, really resilient.”
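
As a toy model of that write path (in the real service the log lives in Azure Cosmos DB, postprocessing is driven by Azure Service Bus, and views are kept in Azure Blob storage; the class and event shape here are invented):

```python
class EventSourcedDomain:
    """Minimal sketch of the pattern described above: writes append
    events to an append-only log, and a separately materialized view
    serves client reads."""

    def __init__(self):
        self.event_log = []   # append-only: the source of truth
        self.view = {}        # materialized state for client reads

    def record(self, event):
        # Fast path: acknowledge the write as soon as the event lands
        # in the log.
        self.event_log.append(event)

    def materialize(self):
        # Async path: fold the logged events into the view (a real
        # implementation would track which events are already applied).
        self.view = {}
        for item, delta in self.event_log:
            self.view[item] = self.view.get(item, 0) + delta
        return self.view

inventory = EventSourcedDomain()
inventory.record(("cobblestone", +3))   # player collects resources
inventory.record(("cobblestone", -1))   # player spends one
assert inventory.materialize() == {"cobblestone": 2}
```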

The following diagram shows how the event sourcing pattern based on Azure Cosmos DB works:

Putting Azure Cosmos DB in place

In putting Azure Cosmos DB to use, developers had to make a few design decisions:

Azure Cosmos DB API. Developers chose to use the Azure Cosmos DB Core (SQL) API because it offered the best performance and the greatest ease of use, along with other needed capabilities.

“We were building a system from scratch, so there was no need for a compatibility layer to help us migrate existing code,” Sosnovske explains. “In addition, some Azure Cosmos DB features that we depend on—such as TransactionalBatch—are only supported with the Core (SQL) API. As an added advantage, the Core (SQL) API was really intuitive, as our team was already familiar with SQL in general.”

Read Introducing TransactionalBatch in the .NET SDK to learn more.

Partition key. Developers ultimately decided to logically partition the data within Azure Cosmos DB based on users.

“We originally partitioned data on users and domains—again, examples being inventory items or achievements—but found that this breakdown was too granular and prevented us from using database transactions within Azure Cosmos DB to their full potential,” says Sosnovske.

Consistency level. Of the five consistency levels supported by Azure Cosmos DB, developers chose session consistency, which they combined with heavy etag checking to ensure that data is properly written.

“This works for us because of how we store data, which is modeled as an append-only log with a head document that serves as a pointer to the tail of the log,” explains Sosnovske. “Writing to the database involves reading the head document and its etag, deriving the N+1 log ID, and then constructing a transactional batch operation that overwrites the head pointer using the previously read etag and creates a new document for the log entry. In the unlikely case that the log has already been written, the etag check and the attempt to create a document that already exists will result in a failed transaction. This happens regardless of whether another request ‘beats’ us to writing or our request read slightly out-of-date data.”
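
That scheme can be modeled in a few lines (a toy stand-in, not the Azure Cosmos DB SDK; the real service expresses this as a TransactionalBatch with etag preconditions):

```python
import uuid

class OptimisticLog:
    """Sketch of the head-pointer scheme described above, with etags
    standing in for Azure Cosmos DB's etag checks."""

    def __init__(self):
        self.docs = {"head": {"tail_id": 0}}
        self.etags = {"head": uuid.uuid4().hex}

    def read_head(self):
        return self.docs["head"]["tail_id"], self.etags["head"]

    def append(self, tail_id, etag, entry):
        new_id = tail_id + 1
        # Transactional batch: both checks must pass or nothing is written.
        if self.etags["head"] != etag:
            raise RuntimeError("etag mismatch: head was overwritten")
        if new_id in self.docs:
            raise RuntimeError("log entry already exists")
        self.docs["head"] = {"tail_id": new_id}
        self.etags["head"] = uuid.uuid4().hex
        self.docs[new_id] = entry

log = OptimisticLog()
tail, etag = log.read_head()
log.append(tail, etag, "event-1")          # succeeds
try:
    log.append(tail, etag, "duplicate")    # stale etag: transaction fails
except RuntimeError as exc:
    print(exc)
```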

In part 2 of this series, we examine the solution’s current workload and how Minecraft Earth service developers have benefited from building it on Azure Cosmos DB.

Get started with Azure Cosmos DB

Visit Azure Cosmos DB.
Learn more about Azure for Gaming.

Source: Azure

How Azure VPN helps organizations scale remote work

In the weeks and months that we have all spent grappling with the global pandemic, there is no doubt about the impact it has had on the lives of people everywhere. A shift to remote work is one of the pandemic’s most widespread effects, and we have heard from organizations around the world that are looking for ways to enable more of their employees to work remotely, for their safety and that of the community. With this shift, we’re working to address common infrastructure challenges businesses face when helping employees stay connected at scale.

Common challenges for businesses expanding secure, remote access

One of the major challenges in setting up remote access is providing employees access to key internal resources, which may reside on-premises or in Azure. For example, healthcare or government organizations may keep sensitive patient or tax information in on-premises datacenters and other sensitive information in Azure.

Another challenge that businesses around the world now face is how to quickly scale an existing VPN setup, which is typically sized for a small portion of an organization’s workforce, to accommodate all or most workers. Even within Microsoft, we’ve seen our typical remote access of 50,000+ employees spike to as high as 128,000 employees while we’re working to protect staff and our communities during the global pandemic.

How Azure VPN can help with secure, remote work at scale

The Azure network is designed to withstand sudden changes in the utilization of resources and can greatly help during periods of peak utilization. The Azure Point-to-Site (P2S) VPN Gateway solution is cloud-based and can be provisioned quickly to cater for the increased demand of users to work from home. It can scale up easily and be turned off just as easily.

Tips to help you get started with Azure VPN Gateway

Based on the customers we’ve been working with and best practices we’ve established over our years of work with enterprises, here are tips to help your own company get started with Azure VPN Gateway:

For scenarios where you need to access resources on-premises or in Azure, you can build a VPN Gateway in Azure and connect your existing VPN solution to Azure. This eliminates a single point of failure on-premises and provides nearly limitless scale. See Remote work using Azure VPN Gateway Point-to-Site to help you understand how to set up Azure VPN Gateway and integrate it with your existing setup.
Use Azure Active Directory (Azure AD), certificate-based authentication, or RADIUS authentication to authenticate users and to validate the status of their device before allowing them on VPN. You can review Create an Azure AD tenant for P2S OpenVPN protocol connections for more details.
We recommend split tunneling VPN traffic. This allows network traffic to go directly to public resources—such as Office 365 and Windows Virtual Desktop—and prevents internet traffic from having to go back to the corporate office, reducing overall load and bandwidth on your corporate internet links and on-premises VPN infrastructure.
To improve on-premises to Azure connectivity to support scale, you can work with your local telecommunications provider to temporarily increase connectivity to the internet. This can help scale your connectivity from your office or data center to Microsoft up to 10 Gbps.
Apply all available security updates to your VPN and firewall devices. The patching and updates for the Azure VPN gateway are managed by Microsoft. For your on-premises devices, please follow the guidance from the device vendor. We’ve brought together tips in this blog post.

How to get started

If you’re not currently using P2S tunnels, please review the following document, evaluate your scenario, and follow the instructions to start using Azure VPN services.
Source: Azure

Manage and find data with Blob Index for Azure Storage—now in preview

Blob Index—a managed secondary index, allowing you to store multi-dimensional object attributes to describe your data objects for Azure Blob storage—is now available in preview. Built on top of blob storage, Blob Index offers consistent reliability, availability, and performance for all your workloads. Blob Index provides native object management and filtering capabilities, which allows you to categorize and find data based on attribute tags set on the data.

Manage and find data with Blob Index

As datasets get larger, finding specific related objects in a sea of data can be difficult and frustrating. Previously, clients used the ListBlobs API to retrieve 5,000 records at a time in lexicographical order, parse through the list, and repeat until they found the blobs they wanted. Some users also resorted to managing a separate lookup table to find specific objects. These separate tables can get out of sync, increasing cost, complexity, and frustration. Customers should not have to worry about data organization or index table management, and should instead focus on building powerful applications to grow their business.

Blob Index alleviates the data management and querying problem with support for all blob types (Block Blob, Append Blob, and Page Blob). Blob Index is exposed through a familiar blob storage endpoint and APIs, allowing you to easily store and access both your data and classification indices on the same service to reduce application complexity.

To populate the blob index, you define key-value tag attributes on your data, either on new data during upload or on existing data already in your storage account. These blob index tags are stored alongside your underlying blob data. The blob indexing engine then automatically reads the new tags, indexes them, and exposes them to a user-queryable blob index. Using the Azure portal, REST APIs, or SDKs, you can then issue a FindBlobsByTags API call specifying a set of criteria. Blob storage returns a filtered result set consisting only of the blobs that meet the match criteria.

The below scenario is an example of how Blob Index works:

In a storage account container with a million blobs, a user uploads a new blob “B2” with the following blob index tags: < Status = Unprocessed, Quality = 8K, Source = RAW >.
The blob and its blob index tags are persisted to the storage account and the account indexing engine exposes the new blob index shortly after.
Later on, an encoding application wants to find all unprocessed media files that are at least 4K resolution. It issues a FindBlobsByTags API call to find all blobs that match the following criteria: < Status = Unprocessed AND Quality >= 4K AND Source = RAW >.
The blob index quickly returns just blob “B2,” the sole blob out of one million blobs that matches the specified criteria. The encoding application can quickly start its processing job, saving idle compute time and money.
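
The filtering in step 3 amounts to evaluating the tag criteria over the index rather than listing and parsing every blob. A stand-in sketch (the real FindBlobsByTags call takes a query expression string; here a Python predicate and an invented resolution table play that role):

```python
# Toy index: blob name -> blob index tags (illustrative data).
blobs = {
    "B1": {"Status": "Processed",   "Quality": "8K", "Source": "RAW"},
    "B2": {"Status": "Unprocessed", "Quality": "8K", "Source": "RAW"},
    "B3": {"Status": "Unprocessed", "Quality": "HD", "Source": "RAW"},
}

def find_blobs_by_tags(index, predicate):
    """Stand-in for the FindBlobsByTags call: return every blob whose
    tags satisfy the predicate."""
    return [name for name, tags in index.items() if predicate(tags)]

# Hypothetical mapping so "at least 4K" is comparable.
resolutions = {"HD": 1080, "4K": 2160, "8K": 4320}
matches = find_blobs_by_tags(
    blobs,
    lambda t: t["Status"] == "Unprocessed"
    and resolutions[t["Quality"]] >= resolutions["4K"]
    and t["Source"] == "RAW",
)
assert matches == ["B2"]   # the sole match among the indexed blobs
```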

Platform feature integrations with Blob Index

Blob Index not only helps you categorize, manage, and find your blob data but also provides integrations with other Blob service features, such as Lifecycle management.

Using the new blobIndexMatch as a filter, you can move data to cooler tiers or delete data based on the tags applied to your blobs. This allows you to be more granular in your rules and only move or delete blobs if they match your specified criteria.

The following sample lifecycle management policy applies to block blobs in the “videofiles” container and tiers objects to archive storage after one day only if the blobs match the blob index tag of Status = ‘Processed’ and Source = ‘RAW’.
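
Such a policy might look like the following sketch (the rule name is invented; the structure follows the lifecycle management policy schema, with blobIndexMatch alongside the existing blobTypes and prefixMatch filters):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-processed-raw-video",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 1 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "videofiles/" ],
          "blobIndexMatch": [
            { "name": "Status", "op": "==", "value": "Processed" },
            { "name": "Source", "op": "==", "value": "RAW" }
          ]
        }
      }
    }
  ]
}
```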

Lifecycle management integration with Blob Index is just the beginning. We will be adding more integrations with other blob platform features soon!

Conditional blob operations with Blob Index tags

In REST versions 2019-10-10 and higher, most blob service APIs now support a new conditional header, x-ms-if-tags, so that the operation succeeds only if the specified blob index tag condition is met. If the condition is not met, the operation fails without modifying the blob. This functionality can help ensure data operations only occur on explicitly tagged blobs and can protect against inadvertent deletion or modification by multi-threaded applications.
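
The guard can be pictured like this (a toy model; the real service evaluates the x-ms-if-tags expression server-side and answers 412 Precondition Failed when it does not match):

```python
def delete_blob_if_tags(blob_tags, condition, delete):
    """Sketch of a conditional operation guarded the way x-ms-if-tags
    guards real blob APIs: mutate only when the tag condition holds."""
    if not condition(blob_tags):
        raise PermissionError("412 Precondition Failed: tag condition not met")
    delete()

deleted = []
delete_blob_if_tags({"Status": "Processed"},
                    lambda t: t.get("Status") == "Processed",
                    lambda: deleted.append("B2"))
assert deleted == ["B2"]

try:
    delete_blob_if_tags({"Status": "Unprocessed"},
                        lambda t: t.get("Status") == "Processed",
                        lambda: deleted.append("B3"))
except PermissionError:
    pass
assert deleted == ["B2"]   # the second delete was blocked by the tag check
```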

How to get started

To enroll in the Blob Index preview, submit a request to register this feature for your subscription by running the following PowerShell or CLI commands:

Register by using PowerShell

Register-AzProviderFeature -FeatureName BlobIndex -ProviderNamespace Microsoft.Storage

Register-AzResourceProvider -ProviderNamespace Microsoft.Storage

Register by using Azure CLI

az feature register --namespace Microsoft.Storage --name BlobIndex

az provider register --namespace 'Microsoft.Storage'

After your request is approved, any existing or new General-purpose v2 (GPv2) storage accounts in France Central and France South can leverage Blob Index’s capabilities. As with most previews, we recommend that this feature should not be used for production workloads until it reaches general availability.

Build it, use it, and tell us about it!

Once you’re registered and approved for the preview, you can start leveraging all that Blob Index has to offer by setting tags on new or existing data, finding data based on tags, and setting rich lifecycle management policies with tag filters. For more information, please see Manage and find data on Azure Blob Storage with Blob Index.

Note: customers are charged for the total number of blob index tags within a storage account, averaged over the month. Requests to SetBlobTags, GetBlobTags, and FindBlobsByTags are charged in accordance with their respective operation types. There is no cost for the indexing engine. See Block Blob pricing to learn more.

We will continue to improve our feature capabilities and are looking forward to hearing your feedback regarding Blob Index or other features through email at BlobIndexPreview@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at Azure Storage feedback forum.
Source: Azure

Microsoft announces next evolution of Azure VMware Solution

Today, I’m excited to announce the preview of the next generation of Azure VMware Solution, designed, built, and supported by Microsoft and endorsed by VMware.

With the current economic environment, many organizations face new challenges to find rapid and cost-effective solutions that enable business stability, continuity, and resiliency. The new Azure VMware Solution empowers customers to seamlessly extend or completely migrate their existing on-premises VMware applications to Azure without the cost, effort, or risk of re-architecting applications or retooling operations. This helps our customers gain cloud efficiency and enables them to innovate at their own pace with Azure services across security, data, and artificial intelligence, as well as unified management capabilities. Customers can also save money with Windows Server and SQL Server workloads running on Azure VMware Solution by taking advantage of Azure Hybrid Benefit.

Microsoft first party service

The new Azure VMware Solution is a first party service from Microsoft. By launching a new service that is directly owned, operated, and supported by Microsoft, we can ensure greater quality, reliability, and direct access to Azure innovation for our customers while providing you with a single point of contact for all your needs. With today’s announcement and our continued collaboration with VMware, the new Azure VMware Solution lays the foundation for our customers’ success in the future.

Sanjay Poonen, Chief Operating Officer at VMware commented, “VMware and Microsoft have a long-standing partnership and a shared heritage in supporting our customers. Now more than ever it is important we come together and help them create stability and efficiency for their businesses. The new Azure VMware Solution gives customers the ability to use the same VMware foundation in Azure as they use in their private data centers. It provides a consistent operating model that can increase business agility and resiliency, reduce costs, and enable a native developer experience for all types of applications.”

These comments were echoed by Jason Zander, Executive Vice President at Microsoft, who said, “This is an amazing milestone for Microsoft and VMware to meet our customers where they are today on their cloud journey. Azure VMware Solution is a great example of how we design Azure services to support a broad range of customer workloads. Through close collaboration with the VMware team, I’m excited that customers running VMware on-premises will be able to benefit from Azure’s highly reliable infrastructure sooner.”

The new solution is built on Azure, delivering the speed, scale, and high availability of our global infrastructure. You can provision a full VMware Cloud Foundation environment on Azure and gain compute and storage elasticity as your business needs change. Azure VMware Solution is VMware Cloud Verified, giving customers confidence they're using the complete set of VMware capabilities, with consistency, performance, and interoperability for their VMware workloads.

Access to VMware technology and experiences

Azure VMware Solution allows you to leverage your existing investments in VMware skills and tools. Customers can maintain operational consistency as they accelerate a move to the cloud using familiar VMware technology, including VMware vSphere, HCX, NSX-T, and vSAN. Additionally, the new Azure VMware Solution has an option to add VMware HCX Enterprise, which will enable customers to further simplify their migration efforts to Azure, including support for bulk live migrations. HCX also enables customers running older versions of vSphere on-premises to move seamlessly to newer versions of vSphere running on Azure VMware Solution.

Seamless Azure integration

Through integration with Azure management, security, and services, Azure VMware Solution provides the opportunity for customers to continue to build cloud competencies and modernize over time. Customers maintain the choice to use the native VMware tools and management experiences they are familiar with, and to incrementally leverage Azure capabilities as required.

As we look to meet customers where they are today, we are deeply investing in support for hybrid management scenarios, and automation that can streamline the journey. We are excited to announce more about future hybrid capabilities as they relate to Azure VMware Solution, soon.

Leverage Azure Hybrid Benefit pricing for Microsoft workloads

Take advantage of Azure as the best cloud for your Microsoft workloads running in Azure VMware Solution with unmatched pricing benefits for Windows Server and SQL Server. Azure Hybrid Benefit extends to Azure VMware Solution, allowing customers with Software Assurance to maximize the value of existing on-premises Windows Server and SQL Server license investments when migrating or extending to Azure. In addition, Azure VMware Solution customers are eligible for three years of free Extended Security Updates on 2008 versions of Windows Server and SQL Server. The combination of these unmatched pricing benefits on Azure ensures customers can simplify cloud adoption with cost efficiencies across their VMware environments.

In addition, at general availability Reserved Instances will also be available for Azure VMware Solution customers, with one-year and three-year options on dedicated hosts.

Global availability and expansion

The Azure VMware Solution preview is initially available in US East and West Europe Azure regions. We expect the new Azure VMware Solution to be generally available in the second half of 2020 and at that time, availability will be extended across more regions. Plans on regional availability for Azure VMware Solution will be made available here as they are disclosed.

To register your interest in taking part in the Azure VMware Solution preview, please contact your Microsoft Account Representative or contact our sales team.

Learn more about Azure VMware Solution on the Azure website.
Source: Azure

Enable remote work faster with new Windows Virtual Desktop capabilities

In the past few months, there has been a dramatic and rapid shift in the speed at which organizations of all sizes have enabled remote work amidst the global health crisis. Companies examining priorities and shifting resources with agility can help their employees stay connected from new locations and devices, allowing for business continuity essential to productivity.

We have seen thousands of organizations turn to Windows Virtual Desktop to help them quickly deploy remote desktops and apps on Azure for users all over the globe. The service and its new updates, available today in preview, simplify getting started and enable secure access to what users need each day.

Get started quickly with the Azure Portal with a new admin experience to accelerate end-to-end deployment, management, and optimization.

Gain enhanced security and compliance using reverse connect technology, Azure Firewall to limit internet egress traffic from your virtual machines to specific IP addresses in Azure, and several other new additions.

Enjoy an upgraded Microsoft Teams experience coming in the next month with audio/video redirection (AV redirect) to reduce latency in conversations running in a virtualized environment.

Learn more about today’s announcement in the Microsoft 365 blog from Julia White and Brad Anderson.

Get started with Windows Virtual Desktop and connect with technical experts in the Windows Virtual Desktop Tech Community.
Source: Azure

Cross Region Restore (CRR) for Azure Virtual Machines using Azure Backup

Today we're introducing the preview of Cross Region Restore (CRR) for Microsoft Azure Virtual Machines (VMs) support using Microsoft Azure Backup.

Azure Backup uses a Recovery Services vault to hold customers' backup data, offering both local and geographic redundancy. To ensure high availability of backed-up data, Azure Backup defaults storage settings to geo-redundancy, so backed-up data in the primary region is geo-replicated to the paired secondary Azure region. Previously, if Azure declared a disaster in the primary region, the data replicated to the secondary region was available for restore in the secondary region only. With this new feature, customers can initiate restores in the secondary region at will, mitigating real downtime in the primary region for their environment. This makes secondary region restores completely customer-controlled; Azure Backup uses the backed-up data replicated to the secondary region for such restores.

With this feature, customers can leverage the secondary region data mentioned above in the following scenarios:

Full outage: Previously, if there was an Azure primary region disaster for the customer, the customer had to wait for Azure to declare disaster to access their secondary region data. With the cross region restore feature, there is no wait time for the customer to recover data in the secondary region. The customer can initiate restores in the secondary region even before Azure declares an outage.
Partial outage: Downtime can occur in specific storage clusters where Azure Backup stores a customer’s backed-up data, or in the network connecting Azure Backup to the storage clusters associated with that data. Previously, customers could not perform restores to the primary or secondary region in these cases. With Cross Region Restore, customers can perform a restore in the secondary region using the replica of backed-up data there.
No outage: Previously there was no provision for customers to conduct business continuity and disaster recovery (BCDR) drills for audit or compliance purposes with the secondary region data. This new capability enables customers to perform a restore of backed up data in the secondary region even if there is not a full or partial outage in the primary region for business continuity and disaster recovery drills.

Azure Backup leverages storage accounts’ read-access geo-redundant storage (RA-GRS) capability to support restores from a secondary region. Note that due to delays in storage replication from primary to secondary, there will be latency in the backed up data being available for a restore in the secondary region.

Key features available with the preview include:

Self-service recoveries of secondary backed up data in a secondary region
Enables the ability to conduct disaster recovery (DR) drills for audit and compliance anytime
High availability of backup data during partial or full outages of an Azure region

With this preview, Azure Backup will support restoring Azure Virtual Machines as well as disks from a secondary region.

How to onboard to this feature

Cross Region Restore can be enabled on a Recovery Services vault by turning on the Cross Region Restore setting for a vault that uses the geo-redundant storage redundancy setting. Note that this feature supports neither the restore of classic virtual machines nor vaults with the locally redundant storage (LRS) redundancy setting; only Recovery Services vaults enabled with geo-redundant storage settings have the option to onboard to this feature. Cross Region Restore is now available in all Azure public regions, and the regions where this feature is supported are listed in the Cross Region Restore documentation.

The road ahead

Azure Backup will extend its support to all other workloads apart from Azure Virtual Machines in the coming months. Learn more about Cross Region Restore and sign up for the preview.

Pricing

Currently, pricing for enabling Cross Region Restore on a Recovery Services vault remains the same as pricing for a geo-redundant storage based Recovery Services vault. Please refer to Azure Backup pricing to learn more about the details of Cross Region Restore pricing. For further queries related to pricing, please contact AskAzureBackupTeam.

Get started with Cross Region Restore

Learn more about Cross Region Restore.
Getting started with Recovery Services vault.
Need help? Reach out to Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

Azure Container Registry: Mitigating data exfiltration with dedicated data endpoints

Azure Container Registry announces dedicated data endpoints, enabling tightly scoped client firewall rules to specific registries, minimizing data exfiltration concerns.

Pulling content from a registry involves two endpoints:

Registry endpoint, often referred to as the login URL, used for authentication and content discovery.
A command like docker pull contoso.azurecr.io/hello-world makes a REST request that authenticates and negotiates the layers representing the requested artifact.
Data endpoints serve blobs representing content layers.

Registry managed storage accounts

Azure Container Registry is a multi-tenant service, where the data endpoint storage accounts are managed by the registry service. Managed storage has many benefits, such as load balancing, contentious content splitting, multiple copies for higher concurrent content delivery, and multi-region support with geo-replication.

Azure Private Link virtual network support

Azure Container Registry recently announced Private Link support, enabling private endpoints from Azure Virtual Networks to be placed on the managed registry service. In this case, both the registry and data endpoints are accessible from within the virtual network, using private IPs.

The public endpoint can then be removed, securing the managed registry and storage accounts to access from within the virtual network.
 

Unfortunately, virtual network connectivity isn’t always an option.

Client firewall rules and data exfiltration risks

When connecting to a registry from on-prem hosts, IoT devices, custom build agents, or when Private Link may not be an option, client firewall rules may be applied, limiting access to specific resources.

 
As customers locked down their client firewall configurations, they realized they would have to create a rule with a wildcard covering all storage accounts, raising data-exfiltration concerns: a bad actor could deploy code capable of writing to a storage account they control.

To mitigate data-exfiltration concerns, Azure Container Registry is making dedicated data endpoints available.

Dedicated data endpoints

When dedicated data endpoints are enabled, layers are retrieved from the Azure Container Registry service, with fully qualified domain names representing the registry domain. As any registry may become geo-replicated, a regional pattern is used:

[registry].[region].data.azurecr.io.

For the Contoso example, multiple regional data endpoints are added supporting the local region with a nearby replica.
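The naming pattern above can be illustrated with a small helper (a sketch only; the registry name and regions are hypothetical):

```python
# Sketch: deriving dedicated data endpoint FQDNs for a geo-replicated registry.
# Pattern from the text: [registry].[region].data.azurecr.io

def data_endpoints(registry, regions):
    """Return the dedicated data endpoint FQDN for each replicated region."""
    return [f"{registry}.{region}.data.azurecr.io" for region in regions]

# Hypothetical registry replicated to two regions:
for fqdn in data_endpoints("contoso", ["eastus", "westus"]):
    print(fqdn)
# → contoso.eastus.data.azurecr.io
#   contoso.westus.data.azurecr.io
```

These FQDNs are what tightly scoped client firewall rules would allow, in place of a wildcard over *.blob.core.windows.net.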

With dedicated data endpoints, the bad actor is blocked from writing to other storage accounts.

Enabling dedicated data endpoints

Note: Switching to dedicated data endpoints will impact clients that have configured firewall access to the existing *.blob.core.windows.net endpoints, causing pull failures. To ensure clients have consistent access, add the new data endpoints to the client firewall rules. Once that is complete, existing registries can enable dedicated data endpoints through the Azure CLI.

Using Azure CLI version 2.4.0 or greater, run the az acr update command:

az acr update --name contoso --data-endpoint-enabled

To view the data endpoints, including regional endpoints for geo-replicated registries, use the az acr show-endpoints command:

az acr show-endpoints --name contoso

outputs:

{
  "loginServer": "contoso.azurecr.io",
  "dataEndpoints": [
    {
      "region": "eastus",
      "endpoint": "contoso.eastus.data.azurecr.io"
    },
    {
      "region": "westus",
      "endpoint": "contoso.westus.data.azurecr.io"
    }
  ]
}

Security with Azure Private Link

Azure Private Link is the most secure way to control network access between clients and the registry, as network traffic is limited to the Azure Virtual Network using private IPs. When Private Link isn’t an option, dedicated data endpoints provide certainty about which resources are accessible from each client.

Pricing information

Dedicated data endpoints are a feature of premium registries.

For more information on dedicated data endpoints, see the pricing information here.
Source: Azure

Azure Cost Management + Billing updates – April 2020

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Azure Spot Virtual Machines now generally available.
Monitoring your reservation and Marketplace purchases with budgets.
Automate cost savings with Azure Resource Graph.
Azure Cost Management covered by FedRAMP High.
Tell us about your reporting goals.
New ways to save money with Azure.
New videos and learning opportunities.
Documentation updates.

Let's dig into the details.

 

Azure Spot Virtual Machines now generally available

We all want to save money. We often look at our largest workloads for savings opportunities, but make sure you don't stop there. You may be able to save up to 90 percent on interruptible virtual machine workloads with Azure Spot Virtual Machines (Spot VMs), now generally available.

Spot VMs allow you to utilize unused compute capacity at very low rates compared to pay-as-you-go prices. Spot VMs are best suited to batch jobs, supplemental workloads that can be interrupted, dev/test environments, stateless applications, and other fault-tolerant applications. Spot VMs can significantly reduce the cost of running applications, or help you stay within budget while scaling out your applications.

Learn more about how to identify and track Spot VM costs in Azure Cost Management.

 

Monitoring your reservation and Marketplace purchases with budgets

Azure Cost Management budgets help you plan for and drive organizational accountability by ensuring everyone is aware as costs increase. You already know you can monitor usage of your Azure and AWS services. Now you can also track and get notified when a reservation or Marketplace purchase causes you to exceed your budget.

With the inclusion of purchases, your budgets become even more powerful. You have a more complete picture of your costs, enabling you to proactively manage and optimize costs to stay within your financial constraints. You can even target these costs more specifically, for finer-grained monitoring.

Let's say you don't expect your Marketplace purchases to exceed $1,000 per month. Create a monthly budget where PublisherType is set to Marketplace and ChargeType is set to Purchase. Set up notifications for 50 percent, 75 percent, or another portion of your budget, and you'll get an email if those thresholds are hit. Pretty simple.

How about reservation purchases? You may not want to limit reservation purchases since they do help save money, but maybe you just want to be notified when they're used throughout the organization. Create a yearly budget where PublisherType is set to Azure and ChargeType is set to Purchase. You'll get notified as purchases cause the threshold to be exceeded and at that point, you can even increase the budget amount to continue to get notified as new reservations are purchased.

Alternatively, if you only want to monitor usage, simply filter ChargeType to Usage. That's it!
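Conceptually, a budget notification fires when cumulative cost crosses a configured percentage of the budget amount. Here is a simplified sketch of that evaluation (not Azure Cost Management's actual implementation; the amounts and thresholds are hypothetical):

```python
# Sketch: determining which budget notification thresholds a cost total
# has crossed. Illustrative only — not the service's real logic.

def crossed_thresholds(budget_amount, current_cost, thresholds):
    """Return the notification thresholds (in percent) that current_cost has reached."""
    used_pct = current_cost / budget_amount * 100
    return [t for t in sorted(thresholds) if used_pct >= t]

# Hypothetical $1,000 monthly budget with 50% and 75% alerts:
print(crossed_thresholds(1000, 800, [50, 75]))  # → [50, 75]
print(crossed_thresholds(1000, 600, [50, 75]))  # → [50]
```

Each threshold that is reached corresponds to one email notification, which is why choosing a ladder of percentages gives you early warning before the budget is exhausted.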

Of course, this is just the tip of the iceberg. Learn more about how to monitor and control spending with Azure Cost Management budgets.

 

Automate cost savings with Azure Resource Graph

You already know Azure Advisor helps you reduce and optimize costs without sacrificing quality. And you may already be familiar with the Azure Advisor APIs that enable you to integrate recommendations into your own reporting or automation. Now you can also get recommendations via Azure Resource Graph.

Azure Resource Graph enables you to explore your Azure resources across subscriptions. You can use advanced filtering, grouping, and sorting based on resource properties and relationships to target specific workloads and even take that further to automate resource management and governance at scale. Now, with the addition of Azure Advisor recommendations, you can also query your cost saving recommendations.

Querying for recommendations is easy. Just open Azure Resource Graph in the Azure portal and explore the advisorresources table. Let's say you want a summary of your potential cost savings opportunities:

advisorresources
// First, we trim down the list to only cost recommendations
| where type == 'microsoft.advisor/recommendations'
| where properties.category == 'Cost'
//
// Then we group rows…
| summarize
// …count the resources and add up the total savings
     resources = dcount(tostring(properties.resourceMetadata.resourceId)),
     savings = sum(todouble(properties.extendedProperties.savingsAmount))
     by
// …for each recommendation type (solution)
     solution = tostring(properties.shortDescription.solution),
     currency = tostring(properties.extendedProperties.savingsCurrency)
//
// And lastly, format and sort the list
| project solution, resources, savings = bin(savings, 0.01), currency
| order by savings desc

Take this one step further using Logic Apps or Azure Functions and send out weekly emails to subscription and resource group owners. Or pivot this on resource ID and set up an approval workflow to automatically delete unused resources or downsize underutilized virtual machines. The sky's the limit!

 

Azure Cost Management covered by FedRAMP High

Azure Cost Management is now one of the 101 services covered by the Federal Risk and Authorization Management Program (FedRAMP) High Provisional Authorization to Operate (P-ATO) for Azure Government—more services than any other cloud provider.

Learn more about the expanded FedRAMP High coverage.

 

Tell us about your reporting goals

As you know, we're always looking for ways to learn more about your needs and expectations. If you already responded last month, thank you! If not, we'd like to learn about the most important reporting tasks and goals you have when managing and optimizing costs. We'll use your inputs from this survey to help prioritize reporting improvements within Cost Management + Billing experiences over the coming months. The 9-question survey should take about 10 minutes. Please share this with anyone working with Azure Cost Management + Billing. The more diverse perspectives we get, the better we can serve you, your team, and your organization.

Take the survey.

 

New ways to save money with Azure

Lots of cost optimization improvements over the past month. Here are a few you might be interested in:

Azure Spot VMs are now generally available, enabling you to save up to 90 percent on interruptible workloads.
Save up to 49 percent with new 3-year reservations for Azure Database for MariaDB.
Save up to 65 percent with new 3-year reservations for Azure Database for MySQL.
Save up to 65 percent with Azure Dedicated Host reservations.
Simplify Windows virtual machine management and save money with Azure DevTest discounts.
Reduce user license costs with Azure DevOps multi-org billing.

 

New videos and learning opportunities

For those visual learners out there, there are five new videos and a new MS Learn learning path you should take a look at:

Setting up for success (8 minutes).
Setting up entity hierarchies (8 minutes).
Controlling access (12 minutes).
Reporting by dimensions and tags (8 minutes).
How to set up "Connectors for AWS" in Azure Cost Management (9 minutes).
Control Azure spending and manage bills with Azure Cost Management + Billing (2 hours 36 minutes).

Follow the Azure Cost Management + Billing YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Azure Cost Management + Billing.

 

Documentation updates

Here are a few documentation updates you might be interested in:

Prevent unexpected charges with Azure Cost Management + Billing.
How to enable access to costs for new/renewed EA enrollments.
How to determine what reservations you should purchase.
Added reservation and spot usage analysis to common cost analysis uses.
Create management groups as part of a Resource Manager deployment template.

Want to keep an eye on all of the documentation updates? Check out the Cost Management + Billing doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

 

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Azure Cost Management + Billing updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

We know these are trying times for everyone. Best wishes from the Azure Cost Management team. Stay safe and stay healthy!
Source: Azure