Monitor your Azure workload compliance with Azure Security Benchmark

The Azure Security Benchmark v1 was released in January 2020 and is being used by organizations to manage their security and compliance policies for their Azure workloads. We are pleased to share that you can now track and monitor your compliance with the benchmark across your Azure environment in Azure Security Center.

The Azure Security Benchmark is a collection of over 90 security best practice recommendations you can employ to increase the overall security and compliance of all your workloads in Azure. The Azure Security Benchmark is based on common compliance frameworks and standards but is tailored to cloud deployments and specifically to Azure workloads. The benchmark provides specific guidance on how these common controls apply to Azure, and what you need to implement in Azure to meet those requirements.

Now, not only can you understand the fundamental compliance framework requirements in Azure terms, but you can also measure and track how your own deployed Azure workloads are meeting those requirements at any given time.

Azure Security Center provides built-in automation for monitoring your compliance with the benchmark controls across different Azure resource types and workloads. It not only measures your compliance with the controls but also provides actionable recommendations for remediating non-compliant resources and meeting the requirements. The benchmark guidance and recommendations are contextualized for each Azure service, making it easier for you to implement the controls for the Azure services you are actively using.

The benchmark can be monitored using the Azure Security Center Regulatory Compliance Dashboard. The Azure Security Center compliance dashboard enables you to track and monitor industry-driven common compliance frameworks like NIST 800-53, Azure CIS, PCI-DSS, and ISO 27001, among others. To monitor the benchmark in this dashboard, you need to onboard the Azure Security Benchmark as a tracked standard. Once you onboard, you get a clear view of how your currently deployed Azure environment is meeting the benchmark controls. You can use the dashboard to track the status of your Azure resources with respect to benchmark requirements, download a summary report, and improve your compliance posture using Azure Security Center remediation guidance and automation.

To onboard the benchmark to your Azure Security Center compliance dashboard, you need to add the Azure Security Benchmark initiative package to your compliance view. You can then view the dashboard and start tracking your compliance status with benchmark controls.


Increasing coverage of the Azure Security Benchmark

All major Azure services already meet the Azure Security Benchmark core requirements, and those controls can be monitored and tracked in this dashboard today. Coverage will continue to grow as Azure services add features that support the full set of the benchmark's security and compliance requirements, along with monitoring for them.
Here are a couple of recent examples of Azure services providing added capabilities to help you implement the security benchmark:

Encrypt sensitive information at rest: In some cases, you may want to use your own encryption key to protect your data. Fifty new services including Azure Cosmos DB and Azure Data Lake now support customer-managed keys for encryption at rest.
Protect Azure resources within virtual networks: Private Link allows you to securely access an Azure Service over a private endpoint in your virtual network. Thirteen new services including Azure Kubernetes Service and Azure Data Explorer now support Private Link.

Over time, a larger portion of controls will be supported and monitorable in the dashboard.

The Azure Security Benchmark and Secure Score

Secure Score in Azure Security Center is a measure that helps you track your security posture, and effectively and efficiently improve your security by prioritizing the actions most likely to reduce risk to your organization. Secure Score comprises a set of controls, where each control reflects a certain attack surface. Each control has an associated score (number of points) that represents your vulnerability for that attack surface, along with a set of security recommendations for reducing your vulnerability and improving your security. The cumulative scores for all controls are then used to calculate your overall Secure Score, which is a single KPI measurement representing your security posture.
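As a rough illustration of how per-control scores can roll up into an overall score, consider the sketch below. The control names, point values, and proportional formula are assumptions for illustration only, not Security Center's exact algorithm:

```python
# Illustrative Secure Score-style aggregation: each control earns points
# proportional to the share of its resources that are healthy, and the
# overall score is the sum across controls.

def control_score(max_points: int, healthy: int, total: int) -> float:
    """Points earned for one control, scaled by the share of healthy resources."""
    if total == 0:
        return float(max_points)  # no applicable resources: full points
    return max_points * healthy / total

controls = [
    # (control name, max points, healthy resources, total resources)
    ("Enable MFA", 10, 4, 5),
    ("Encrypt data at rest", 4, 9, 10),
    ("Restrict network access", 6, 3, 6),
]

earned = sum(control_score(p, h, t) for _, p, h, t in controls)
maximum = sum(p for _, p, _, _ in controls)
percent = 100 * earned / maximum
print(f"Secure Score: {earned:.1f}/{maximum} ({percent:.0f}%)")
```

Remediating the recommendations under a control raises its healthy-resource count, which is why fixing the highest-point controls first moves the overall score fastest.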

The underlying security recommendations stipulated by Secure Score are the same as those associated with the Azure Security Benchmark controls. They comprise the same set of actions that ultimately serve the common purpose of maximizing your Azure security posture. Secure Score adds the additional dimension of threat analysis, risk, and vulnerability to each of those recommendations, and thus helps you prioritize action according to the most significant factors in reducing risk in your environment. The benchmark then illustrates how these security settings and factors apply to compliance framework requirements. It also adds some additional requirements that are compliance-focused but don’t have a direct impact on security risk.

Our recommendation is to use the Azure Secure Score view to address misconfigurations, starting with the highest priority recommendations. The Azure Security Benchmark view is helpful for understanding your compliance and is sorted by controls rather than score impact.

Summary and next steps

The Azure Security Benchmark compliance dashboard in Azure Security Center can help you continuously track your compliance posture in Azure and improve your Azure workloads’ adherence to compliance requirements.

Get started now by learning about the Azure Security Benchmark and onboarding the benchmark to the Security Center compliance dashboard.

You can look forward to seeing upcoming releases of the dashboard with additional automation and improved coverage for benchmark controls, as well as extended capabilities to manage compliance controls and additional report types.

We would love to hear your feedback; you can use this link to send us an email.
Source: Azure

Microsoft and Redis Labs collaborate to give developers new Azure Cache for Redis capabilities

Now more than ever, enterprises must deliver their applications with speed and robustness that matches the high expectations of their customers. The ability to provide sub-millisecond response times, reliably support the demands of enterprises from small to large, and scale seamlessly to handle millions of requests per second are critical to modern application development. At the same time, technology solutions need to be more open and flexible to handle cloud native architectures while maintaining mission-critical uptime and reliability.

Microsoft and Redis Labs partnering to bring new features to Azure Cache for Redis

With this in mind, I am announcing a new partnership between Microsoft and Redis Labs to bring their industry-leading technology and expertise to Azure Cache for Redis. This partnership represents the first native integration between Redis Labs technology and a major cloud platform, underscoring our commitment to customer choice and flexibility.

For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications. We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.

Redis integration designed for enterprise customers

As part of this partnership, we worked with Redis Labs to build a new expanded offering powered by Redis Labs technologies, aimed specifically at the needs of enterprise customers. With the integration of Redis Labs technology with Azure Cache for Redis, customers can now access Redis Labs-developed modules including RediSearch, RedisBloom, and RedisTimeSeries, which provide new data structures that will further enable use cases like data analytics and machine learning.

These modules will be paired with existing features of Azure Cache for Redis including data persistence, clustering, geo-replication, and network isolation. Customers will also now have the option to deploy on SSD flash storage, offering up to ten times larger cache sizes and similar performance levels at a lower price per GB. Reliability will be an even greater priority with an enhanced SLA and the capability to utilize active geo-replication to configure a globally available cache that can fail over to another region without any data loss. In the future, this capability will make it possible to connect on-premises caches with caches in Azure for availability and failover.

Native Azure management creates streamlined experience for developers

While the new service is managed natively in Azure as two new Enterprise tiers, customers will subscribe to the Redis Labs software through Azure Marketplace as an integral part of the configuration process. This unique integration provides all the benefits of using a service embedded in Azure, including management through the Azure portal and command line, security and standards compliance, and a unified billing experience.

Microsoft will handle first-line support and collaborate with Redis Labs on specific support issues to utilize their deep knowledge of the technology. Because the service is a native offering, developer teams will find it significantly easier to integrate Redis Enterprise functionality into their Azure development efforts by taking advantage of the security, configuration, and support tools they are already familiar with. Plus, developers can enable these new features with no downtime or change in billing management.

We are thrilled to be expanding our relationship with Redis Labs and continuing our collaboration with the Redis open source community. Together, we will unlock the potential of Redis and enable enterprises to build applications that are more responsive and scalable than ever before with tools that developers love.

Learn more

For more information on the Redis Labs partnership, you can read the blog post from Redis Labs CEO, Ofer Bengal. Additional product information is also available on the Redis Labs blog. The initial announcement was made at RedisConf 2020 Takeaway. Preview for this new offering will be available later this year. Sign up to be notified when the preview is available.
Source: Azure

Announcing the general availability of Azure Spot Virtual Machines

Today we’re announcing the general availability of Azure Spot Virtual Machines (VMs). Azure Spot VMs provide access to unused Azure compute capacity at deep discounts. Spot pricing is available on single VMs in addition to VM scale sets (VMSS). This enables you to deploy a broader variety of workloads on Azure while enjoying access to discounted pricing compared to pay-as-you-go rates. Spot VMs offer the same characteristics as a pay-as-you-go virtual machine, the differences being pricing and evictions. Spot VMs can be evicted at any time if Azure needs capacity.

The workloads that are ideally suited to run on Spot VMs include, but are not necessarily limited to, the following:

Batch jobs.
Workloads that can sustain or recover from interruptions.
Development and test.
Stateless applications that can use Spot VMs to scale out, opportunistically saving cost.
Short-lived jobs that can easily be run again if the VM is evicted.

Spot VMs have replaced the preview of Azure low-priority VMs on scale sets. Eligible low-priority VMs have been automatically transitioned over to Spot VMs.

Spot Virtual Machine pricing

Unlike low-priority VMs, prices for Spot VMs vary based on capacity for a given size or SKU in an Azure region. Spot pricing can give you insights into the availability of, and demand for, a given Azure VM series and specific size in a region. Prices change slowly to provide stability, allowing you to better manage budgets. In the Azure portal, you have access to the current Azure VM Spot prices to easily determine which region or VM size best fits your needs. Spot prices are capped at pay-as-you-go rates.


Deployment of Spot Virtual Machines

Spot VMs are easy to deploy and manage. Deploying a Spot VM is similar to configuring and deploying a regular VM. For example, in the Azure portal, you can simply select Azure Spot instance to deploy a Spot VM. You can also define your maximum price for your Spot VMs. You get a couple of options:

You can choose to deploy your Spot VM without capping the price. Azure will charge you the current Spot VM price at any given time, giving you peace of mind that your VMs will not be evicted for price reasons.
Alternatively, you can decide to provide a specific maximum price to stay within your budget. Azure will not charge you above the maximum price you set and will evict the VM if the Spot price rises above your defined maximum price.

There are a few other options available to lower costs:

If your workload does not require a specific VM series and size, then you can find other VMs in the same region that may be cheaper.
If your workload is not dependent on a specific region and you do not have data residency requirements, then you can find a different Azure region to reduce your cost.
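The two cost levers above amount to a price lookup across sizes and regions. The sketch below illustrates the idea; the SKU names are real VM size names, but the prices are fictional, and real Spot prices come from the Azure portal or pricing pages:

```python
# Hypothetical sketch: picking the cheapest eligible Spot option from a
# price table, optionally restricted to regions that satisfy
# data-residency requirements. Prices below are made up for illustration.

spot_prices = {
    # (region, vm_size): fictional price per hour in USD
    ("eastus",  "Standard_D2s_v3"): 0.020,
    ("eastus",  "Standard_D4s_v3"): 0.041,
    ("westus2", "Standard_D2s_v3"): 0.016,
    ("westus2", "Standard_D4s_v3"): 0.038,
}

def cheapest(options, allowed_regions=None):
    """Return the lowest-priced (region, size) pair among allowed regions."""
    candidates = {
        k: v for k, v in options.items()
        if allowed_regions is None or k[0] in allowed_regions
    }
    return min(candidates.items(), key=lambda kv: kv[1])

# No residency constraint: any region is fair game.
print(cheapest(spot_prices))  # (('westus2', 'Standard_D2s_v3'), 0.016)
# Data must stay in eastus: only eastus SKUs are considered.
print(cheapest(spot_prices, {"eastus"}))
```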

Quota for Spot VMs

As part of this announcement, to give better flexibility, Azure is also rolling out a quota for Spot VMs that is separate from your pay-as-you-go VM quota. The quota for Spot VMs and Spot VMSS instances is a single quota for all VM sizes in a specific Azure region. This approach will give you easy access to a broader set of VMs.

Handling evictions

Azure will try to keep your Spot VM running and minimize evictions, but your workload should be prepared to handle evictions, as runtime for Azure Spot VMs and VMSS instances is not guaranteed. You can optionally get a 30-second eviction notice by subscribing to scheduled events. Your VMs can be evicted for the following reasons:

Spot prices have gone above the max price you defined for the VM. Azure Spot VMs are evicted when the Spot price for the VM you have chosen rises above the maximum price you defined at deployment time. You can try to redeploy your VM with a higher maximum price.
Azure needs to reclaim capacity.

In both scenarios, you can try to redeploy the VM in the same region or availability zone.
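The two eviction reasons can be modeled in a few lines. This is an illustrative sketch, not Azure's actual eviction logic; the -1 max-price sentinel mirrors the API convention described in the best practices section of this post:

```python
# Sketch of the two eviction conditions: capacity reclaim always evicts,
# and a price eviction occurs only when a max price is set (-1 disables
# price-based eviction; Azure then charges up to the pay-as-you-go rate).

def should_evict(spot_price: float, max_price: float,
                 capacity_reclaimed: bool) -> bool:
    if capacity_reclaimed:
        return True                   # Azure needs the capacity back
    if max_price == -1:
        return False                  # price-based eviction disabled
    return spot_price > max_price     # Spot price rose above your cap

assert should_evict(0.05, 0.04, capacity_reclaimed=False)    # price eviction
assert not should_evict(0.05, -1, capacity_reclaimed=False)  # no price cap
assert should_evict(0.01, -1, capacity_reclaimed=True)       # capacity eviction
```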

Best practices

Here are some effective ways to best utilize Azure Spot VMs:

For long-running operations, try to create checkpoints so that you can restart your workload from a previously known checkpoint to handle evictions and save time.
In scale-out scenarios, to save costs, you can have two VMSS, where one has regular VMs and the other has Spot VMs. You can put both in the same load balancer to opportunistically scale out.
Listen to eviction notifications in the VM to get notified when your VM is about to be evicted.
If you are willing to pay up to pay-as-you-go prices, set the eviction type to Capacity Eviction only; in the API, provide -1 as the max price. Azure never charges you more than the current Spot VM price.
To handle evictions, build a retry logic to redeploy VMs. If you do not require a specific VM series and size, then try to deploy a different size that matches your workload needs.
While deploying VMSS, select max spread in the portal's management tab, or set FD==1 in the API, to find capacity in a zone or region.
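The checkpoint-and-retry practices above can be sketched as follows. The checkpoint file name and work items are illustrative; a real job would persist checkpoints to durable storage outside the VM:

```python
# Minimal checkpoint/restart sketch: persist progress so a job evicted
# mid-run resumes from the last checkpoint instead of starting over.

import json
import os

CHECKPOINT = "job.checkpoint.json"

def load_checkpoint() -> int:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_item"]
    return 0

def save_checkpoint(next_item: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_item": next_item}, f)

def run_job(items, evict_after=None):
    """Process items from the last checkpoint; optionally simulate eviction."""
    start = load_checkpoint()
    for i in range(start, len(items)):
        if evict_after is not None and i >= evict_after:
            return False              # evicted mid-run; checkpoint survives
        # ... do real work on items[i] here ...
        save_checkpoint(i + 1)
    return True

items = list(range(10))
assert run_job(items, evict_after=4) is False  # first run: evicted at item 4
assert load_checkpoint() == 4                  # progress survived the eviction
assert run_job(items) is True                  # redeployed VM finishes the rest
os.remove(CHECKPOINT)
```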

Customer success stories

We are pleased with the feedback customers and partners are providing, and we plan to extend the capabilities of this offering to meet the needs of our stakeholders.

“We constantly hear from our customers that they want flexibility in their HPC environment. Flexibility in VM types, available capacity, and even up-front commitment. Azure’s Spot offering is exciting because it provides that flexibility, which combined with Rescale provides cost efficiencies and reduced preemption risk.” Gerhard Esterhuizen, VP of Engineering at Rescale and Brian Tecklenburg, VP of HPC Marketing at Rescale

“We benchmark performance across cloud providers, and Azure has consistently been among the top performers. Azure Spot VMs now allow our customers to use the best infrastructure available in an ad-hoc fashion. Azure Spot VMs, combined with Rescale’s HPC job orchestration and automated checkpoint restarts, help mitigate preemption risks. As a result, our customers can finally use the best cloud infrastructure, whenever they want.” Mulyanto Poort, VP of HPC Engineering at Rescale


“InMobi runs one of our largest platforms, the InMobi Exchange, entirely on Azure. Having a cost-effective, cloud-native solution supporting high degrees of concurrency and scale was critical for our business, as the InMobi Exchange frequently finds itself catering to fluctuating traffic curves given the seasonal nature of the digital advertising industry. Leveraging the Azure Spot VM offerings, we’ve been able to rewire our application stack to be fully stateless and it’s been a real game changer with respect to making it cost efficient. Since InMobi was one of the early adopters of the Spot VM offering, we’ve found Microsoft to be excellent partners in ensuring the product evolves to meet our required levels of scale and functionality. As of now, we’ve moved the majority of our serving and data processing compute needs to Azure Spot VMs. And by doing so, we have been able to realize nearly 50-60 percent cost efficiencies on our compute needs, and that’s been a massive help in making our business more economically efficient.” Prasanna Prasad, Senior Vice President, Engineering, InMobi

Learn more about Azure Spot Virtual Machines

Spot VM webpage.
Spot VM pricing: Windows and Linux.
Create Spot VMs in Azure portal.
Create Spot VMs in Azure CLI.
Create Spot VMs in Azure PowerShell.
Create Spot VMs in Azure Resource Manager templates.
Create Spot VMSS in Azure Resource Manager templates.

Source: Azure

Announcing Azure Front Door Rules Engine in preview

Starting today, customers of Azure Front Door (AFD) can take advantage of new rules to further customize their AFD behavior to best meet the needs of their customers. These rules bring the specific routing needs of your customers to the forefront of application delivery on Azure Front Door, giving you more control in how you define and enforce what content gets served from where.

Azure Front Door provides Azure customers the ability to deliver content fast and securely using Azure’s best-in-class network. We’ve heard from customers how important it is to have the ability to customize the behavior of your web application service, and we’re excited to announce Rules Engine, a new functionality on Azure Front Door, in preview today. Rules Engine is for all current and new Azure Front Door customers but is particularly important for customers looking to streamline security and content delivery at the edge.

New scenarios in Azure Front Door

Rules Engine allows you to specify how HTTP requests are handled at the edge.

The malleable nature of Rules Engine makes it the ideal solution to address legacy application migrations, where you don’t want to worry about users accessing old applications or not knowing how to find content in your new apps. Similarly, geo match and device identification capabilities ensure that your users are always seeing the best content for where they are and what device they are accessing it on. Implementing security headers and cookies with Rules Engine can also ensure that no matter how your users come to interact with the site, they’re doing so over a secure connection, preventing browser-based vulnerabilities from impacting your site.

Different combinations of match conditions and actions give you fine-grained control over which users get which content and make the possible scenarios that you can accomplish with Rules Engine endless. Some of the technical capabilities that empower these new scenarios on AFD include the following:

Enforce HTTPS, ensure all your end users interact with your content over a secure connection.
Implement security headers to prevent browser-based vulnerabilities, like HTTP Strict-Transport-Security (HSTS), X-XSS-Protection, Content-Security-Policy, X-Frame-Options, as well as Access-Control-Allow-Origin headers for CORS scenarios. Security-based attributes can also be defined with cookies.
Route requests to mobile or desktop versions of your application based on the patterns in the contents of request headers, cookies, or query strings.
Use redirect capabilities to return 301/302/307/308 redirects to the client to redirect to new hostnames, paths, or protocols.
Dynamically modify the caching configuration of your route based on the incoming requests.
Rewrite the request URL path and forward the request to the appropriate backend in your configured backend pool.

Rules Engine is designed to handle a full breadth of scenarios. A full list of match conditions and AFD Rules Engine actions can be found in our documentation.

How Rules Engine works

Rules Engine handles requests at the edge. Once Rules Engine is configured, when a request hits your Front Door endpoint, the Web Application Firewall (WAF) is executed first, followed by the Rules Engine configuration associated with your frontend or domain. When a Rules Engine configuration is executed, the parent routing rule is already a match. Whether all actions in each of the rules within the Rules Engine configuration are executed depends on all of the match conditions within that rule being satisfied. If a request matches none of the conditions in your Rules Engine configuration, then the default routing rule is executed.

For example, a Rules Engine configuration can append a response header that changes the max-age of the cache control when its match condition is met.

In another example, Rules Engine can be configured to send a user to the mobile version of the site when the device-type match condition is true.

In both examples, when none of the match conditions in Rules Engine are met, the default behavior specified in the Route Rule is what gets executed.
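The evaluation flow in these examples can be modeled roughly as follows. The condition and action names are invented for illustration and do not reflect AFD's actual rule schema:

```python
# Illustrative model of the flow above: a rule fires only when ALL of its
# match conditions hold; if no rule matches, the default route applies.

def evaluate(request, rules, default_action):
    for rule in rules:
        if all(cond(request) for cond in rule["conditions"]):
            return rule["action"]
    return default_action

rules = [
    {   # send mobile devices to a mobile version of the site
        "conditions": [lambda r: r.get("device") == "mobile"],
        "action": {"redirect": "https://m.contoso.example", "status": 302},
    },
    {   # add HSTS on any HTTPS request under /secure
        "conditions": [lambda r: r.get("scheme") == "https",
                       lambda r: r.get("path", "").startswith("/secure")],
        "action": {"response_header": ("Strict-Transport-Security",
                                       "max-age=31536000")},
    },
]
default = {"forward": "default-backend-pool"}

assert evaluate({"device": "mobile"}, rules, default)["status"] == 302
assert "response_header" in evaluate(
    {"scheme": "https", "path": "/secure/login"}, rules, default)
assert evaluate({"scheme": "http", "path": "/"}, rules, default) == default
```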

Next steps

We look forward to seeing how Rules Engine helps you unlock further capabilities in Azure Front Door. To learn more about what’s available today, check out the documentation for Azure Front Door Rules Engine.
Source: Azure

Learn how to deliver insights faster with Azure Synapse Analytics

Today, it’s even more critical to have a data-driven culture. Analytics and AI play a pivotal role in helping businesses make insights-driven decisions—decisions to transform supply chains, develop new ways to interact with customers, and evaluate new offerings.

Many organizations are turning to cloud analytics solutions to quickly create a data-driven culture, accelerate time to insight, reduce costs, and maximize ROI. Join us on Wednesday, June 17, 2020, from 10:00 AM–11:00 AM Pacific Time for Azure Synapse Analytics: How It Works, a virtual event where you’ll hear directly from Microsoft Azure customers. They’ll explain how they’re using the newest Azure Synapse capabilities to deliver insights faster, bring together an entire analytics ecosystem in a central location, reduce costs, and transform decision-making.

In technical demos, customers will show how they combine data ingestion, data warehousing, and big data analytics in a single cloud-native service with Azure Synapse. If you’re a data engineer trying to wrangle multiple data types from multiple sources to create pipelines, or a database administrator with responsibilities over your data lake and data warehouse, you’ll see how all this can be simplified in a code-free environment.

Customers will also demonstrate how Power BI provides a graphical complement to Azure Synapse with built-in Power BI authoring, giving their employees access to unprecedented insights from enterprise data—in seconds, through beautiful visualizations.

Companies have demonstrated significant cost reductions with cloud analytics solutions. Compared to on-premises solutions, these solutions:

Require lower implementation and maintenance costs.
Reduce analytics project development time.
Provide access to more frequent innovation.
Deliver higher levels of security and business continuity.
Help ensure better competitive advantage and higher customer satisfaction.

With cloud analytics, organizations pay for data and analytics tools only when needed, pausing consumption when not in use. Businesses can reallocate budget previously spent on hardware and infrastructure management to optimizing processes and launching new projects. In fact, customers average a 271 percent ROI with Azure Synapse—savings that come from lower operating costs, increased productivity, reallocating staff to higher-value activities, and increasing operating income due to improved analytics. Analytics in Azure is up to 14 times faster and costs 94 percent less than other cloud providers.

BI specialists, data engineers, and other IT and data professionals all use Azure Synapse to build, manage, and optimize analytics pipelines, using a variety of skillsets and in multiple industries. The Azure Synapse studio provides a unified workspace for data prep, data management, data warehousing, big data, and AI tasks.

Data engineers can use a code-free visual environment for managing data pipelines.
Database administrators can automate query optimization and easily explore data lakes.
Data scientists can build proofs of concept in minutes.
Business analysts can securely access datasets and use Power BI to build dashboards in minutes—all while using the same analytics service.

At the Azure Synapse Analytics: How It Works event, you’ll learn how to access and analyze all your data, from your enterprise data lake to multiple data warehouses and big data analytics systems, with blazing speed. With Azure Synapse, data professionals can query both relational and non-relational data using the familiar SQL language, using either serverless or provisioned resources.

Of course, trust is critical for any cloud solution. Customers will share how they take advantage of advanced Azure Synapse security and privacy features such as automated threat detection and always-on data encryption. They help ensure that data stays safe and private by using column-level security and native row-level security, as well as dynamic data masking to automatically protect sensitive data in real time.

Attend the Azure Synapse Analytics: How It Works virtual event on June 17, 2020, to learn how to deliver:

Powerful insights.
Unprecedented ROI.
Unified experience.
Limitless scale.
Unmatched security.

Register early for a chance to win a Microsoft Surface Go tablet (three winners total). Winners will be selected at random. NO PURCHASE NECESSARY. Open to any registered event attendee 18 years of age or older. Void in Cuba, Iran, North Korea, Sudan, Syria, Region of Crimea, and where prohibited. Sweepstakes ends June 17, 2020. See the Official Rules.  
Source: Azure

Office Licensing Service and Azure Cosmos DB part 1: Migrating the production workload

This post is part 1 of a two-part series about how organizations use Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part 1, we explore the challenges that led the Microsoft Office Licensing Service team to move from Azure Table storage to Azure Cosmos DB, and how it migrated its production workload to the new service. In part 2, we examine the outcomes resulting from the team’s efforts.

The challenge: Limited throughput and other capabilities

At Microsoft, the Office Licensing Service (OLS) supports activation of the Microsoft Office client on millions of devices around the world—including Windows, Mac, tablets, and mobile. It stores information such as machine ID, product ID, activation count, expiration date, and more. OLS is accessed by the Office client more than 240 million times per day by users around the world, with the first call coming from the client upon license activation and then every 2-3 days thereafter as the client checks to make sure the license is still valid.

Until recently, OLS relied on Azure Table storage for its backend data store, which contained about 5 TB of data spread across 18 tables—with separate tables used for different license categories such as consumer, enterprise, and OEM pre-installation.

In early 2018, after years of continued workload growth, the OLS service began approaching the point where it would require more throughput than Table storage could deliver. If the issue wasn’t addressed, the inherent throughput limit of Table storage would begin to threaten overall service quality to the detriment of millions of users worldwide.

Danny Cheng, a software engineer at Microsoft who leads the OLS development team, explains:

“Each Table storage account has a fixed maximum throughput and doesn’t scale past that. By 2018, OLS was running low on available storage throughput, and, given that we were already maintaining each table in its own Table storage account, there was no way for us to get more throughput to serve more requests from our customers. We were being throttled during peak usage hours for the OLS service, so we had to find a more scalable storage backend soon.”

In looking for a long-term solution to its storage needs, the OLS team wanted more than just additional throughput. “We wanted the ability to deploy OLS in different regions around the world, as a means of minimizing latency by putting copies of the service closer to where our users are. But with Table storage, geo-replication capabilities are fairly limited.”

The OLS team also wanted better disaster recovery. With Table storage, they were storing all data in multiple regions within the United States. All reads and writes went to the primary region, and there were no SLAs in place for replication to the two backup regions, which could take up to 60 minutes. If the primary region became unavailable, human intervention would be required and data loss would be likely.

“If a region were to go down, it would be a real panic situation—with 30 to 60 minutes of downtime and a similar window for data loss,” says Cheng.

The solution: A lift-and-shift migration to Azure Cosmos DB

The OLS team chose to move to Azure Cosmos DB, which offered a lift-and-shift migration path from Table storage—making it easy to swap in a premium backend service with turnkey global distribution, low latency, virtually unlimited scalability, guaranteed high availability, and more.

“At first, when we realized we needed a new storage backend, it was intimidating in that we didn’t know how much new code would be needed,” says Cheng. “We looked at several storage options on Azure, and Azure Cosmos DB was the only one that met all our needs. And with its Table API, we wouldn’t even need to write much new code. In many ways, it was an ideal lift-and-shift—delivering the scalability we needed and lots of other benefits with little work.”

Design decisions

In preparing to deploy Azure Cosmos DB, the OLS team had to make a few basic design decisions:

Consistency level, which gave the team options for addressing the fundamental tradeoffs between read consistency and latency, availability, and throughput.

“We picked strong consistency because some of our business logic requires reading from storage immediately after writing to it,” explains Cheng.

Partition key, which dictates how items within an Azure Cosmos DB container are divided into logical partitions—and determines the ultimate scalability of the data store.

“With the Azure Cosmos DB Table API, partition keys naturally map to what we had in Table storage—so we were able to reuse the same partition key,” says Cheng.
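
The reuse Cheng describes can be pictured with a small sketch. Both Azure Table storage and the Azure Cosmos DB Table API address an entity by the same (PartitionKey, RowKey) pair and group entities into logical partitions by PartitionKey, which is why the team's existing key worked unchanged. The entity names and values below are illustrative, not the team's actual schema:

```python
# Sketch: both backends group entities into logical partitions by
# PartitionKey, so a key designed for Table storage carries over as-is.
# Entity names and values here are illustrative.
from collections import defaultdict

def assign_to_logical_partition(entities):
    """Group entities by PartitionKey, as both backends do."""
    partitions = defaultdict(list)
    for e in entities:
        partitions[e["PartitionKey"]].append(e["RowKey"])
    return dict(partitions)

licenses = [
    {"PartitionKey": "tenant-42", "RowKey": "office-pro", "seats": 100},
    {"PartitionKey": "tenant-42", "RowKey": "visio", "seats": 10},
    {"PartitionKey": "tenant-77", "RowKey": "office-pro", "seats": 5},
]
print(assign_to_logical_partition(licenses))
# {'tenant-42': ['office-pro', 'visio'], 'tenant-77': ['office-pro']}
```

Because the partition key also determines how data spreads across physical partitions, reusing a key that already distributed load well in Table storage preserved the data store's scalability after the move.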

Migration process

Although Azure Cosmos DB offered a data migration tool, its use at that time would have entailed some downtime for the OLS service, which wasn’t an option. (Note: Today you can do live migrations without downtime.) To address this, the OLS team built a data migration solution that consisted of three components:

A Data Migrator that moves current data from Table storage to Azure Cosmos DB.
A Dual Writer that writes new database changes to both Table storage and Azure Cosmos DB.
A Consistency Checker that catches any mismatches between Table storage and Azure Cosmos DB.

The Data Migrator component is based on the same one provided to Microsoft customers by the Azure Cosmos DB team.

“To solve the downtime problem, we added Dual Writer and Consistency Checker components, which run on the same production servers as the OLS service itself,” explains Cheng.
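
The three-component pattern described above can be sketched in a few lines, with in-memory dicts standing in for Table storage and Azure Cosmos DB. The real components ran against the live stores; everything here is a simplified illustration:

```python
# Minimal sketch of the migration pattern: a Dual Writer sends new changes
# to both stores, a Data Migrator backfills pre-existing rows, and a
# Consistency Checker reports any divergence. Dicts stand in for the
# actual storage services; all data is illustrative.

class DualWriter:
    """Write every new change to both the old and the new store."""
    def __init__(self, old_store, new_store):
        self.old, self.new = old_store, new_store

    def write(self, key, value):
        self.old[key] = value
        self.new[key] = value

def migrate_existing(old_store, new_store):
    """Data Migrator: copy rows that predate the dual-write cutover."""
    for key, value in old_store.items():
        new_store.setdefault(key, value)

def find_mismatches(old_store, new_store):
    """Consistency Checker: report keys whose values differ across stores."""
    keys = set(old_store) | set(new_store)
    return sorted(k for k in keys if old_store.get(k) != new_store.get(k))

table_storage = {"lic-1": "v1", "lic-2": "v1"}   # pre-existing data
cosmos_db = {}

writer = DualWriter(table_storage, cosmos_db)
writer.write("lic-3", "v1")                  # new traffic hits both stores
migrate_existing(table_storage, cosmos_db)   # backfill historical rows
print(find_mismatches(table_storage, cosmos_db))  # [] once in sync
```

The key property is that dual writes start before the backfill, so any row the migrator copies is either untouched or already identical in both stores, letting the old backend be retired only after the checker reports no mismatches.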

The OLS team completed the migration process in late 2019. Today, Azure Cosmos DB is deployed to the same three regions as Table storage, a choice the team made to mimic the Table storage topology as closely as possible during the migration. As before, North Central US is the primary (read/write) region while the other two regions are currently read-only. The Azure Cosmos DB environment has 18 tables containing 5 TB of data and consumes about 1 million request units per second (RU/s), the unit used to reserve guaranteed database throughput in Azure Cosmos DB.

Now that migration is complete, the team plans to turn on multi-master capabilities, which will write-enable all regions instead of just the primary one. Tying into this, the team also plans to scale out globally by replicating its backend store to additional regions around the world—as a means of improving latency from the perspective of the Office client by putting copies of the OLS data closer to where its users are.

In part 2 of this series, we examine the outcomes resulting from the team’s efforts to build its new Office Licensing Service on Azure Cosmos DB.

Get started with Azure Cosmos DB today

Visit Azure Cosmos DB.

See Introduction to Azure Cosmos DB Table API.

Source: Azure

Office Licensing Service and Azure Cosmos DB part 2: Improved performance and availability

This post is part 2 of a two-part series about how organizations use Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part 1, we explored the challenges that led the Microsoft Office Licensing Service team to move from Azure Table storage to Azure Cosmos DB, and how it migrated its production workload to the new service. In part 2, we examine the outcomes resulting from the team’s efforts.

Strong benefits with minimal effort

The Microsoft Office Licensing Service (OLS) team’s migration from Azure Table storage to Azure Cosmos DB was simple and straightforward, enabling the team to meet all its needs with minimal effort.

An easy migration

Thanks to the Azure Cosmos DB Table API, the OLS team was able to reuse most of its data access code in the move, and the migration engine it wrote to avoid any downtime was fast and easy to build.

Danny Cheng, a software engineer at Microsoft who leads the OLS development team, explains:

“The migration engine was the only real ‘new code’ we had to write. And the code samples for all three parts are publicly available, so it’s not like we had to start from scratch. All in all, the migration tooling we developed took three developers about four weeks each.”

Virtually unlimited throughput

Today, database throughput is no longer an issue for the OLS team. With Table storage, the team faced a throughput limit of 20,000 operations per second per storage account, which forced them to maintain each of their 18 tables in a different storage account to achieve maximum throughput. The team now maintains one Azure Cosmos DB account, which has no upper limit on throughput and can support more than 10 million operations per second per table—all dedicated and backed by SLAs.
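
The size of that jump can be sketched with quick arithmetic over the figures above:

```python
# Back-of-the-envelope comparison of the throughput ceilings cited above.
TABLE_STORAGE_LIMIT = 20_000   # ops/sec per storage account
NUM_TABLES = 18                # one table per account to maximize throughput

old_ceiling = TABLE_STORAGE_LIMIT * NUM_TABLES
print(old_ceiling)             # 360000 ops/sec across all 18 accounts

COSMOS_PER_TABLE = 10_000_000  # ops/sec per table, per the post
print(COSMOS_PER_TABLE // old_ceiling)  # ~27x the old fleet-wide ceiling
```

In other words, a single Azure Cosmos DB table can sustain roughly 27 times the combined ceiling of the 18 separate storage accounts the team previously had to manage.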

Guaranteed high availability

Azure Cosmos DB gives the OLS team a 99.999 percent read availability SLA for all multi-region accounts. This has led to a significant increase in storage quality of service (QoS), as shown by metrics the team captured using internally developed tooling.

“During peak traffic hours, Azure Cosmos DB delivers much better storage QoS than we were seeing with Table storage,” says Cheng. “Today we’re seeing five nines, when in the past we were at about three nines.”

Automatic failover

The OLS team can now configure automatic or manual failovers to help protect against the unlikely event of a regional outage, with all SLAs maintained. The team can also prioritize failover order for its multi-region accounts and can manually trigger failover to test the end-to-end availability of OLS.

“We’ve configured automatic failover, but the service is so reliable that we haven’t needed it yet,” says Cheng.

Lower latency

Table storage provided the OLS team with no upper bounds on latency. In contrast, Azure Cosmos DB provides single-digit millisecond latency for reads and writes, backed by a guarantee of <10 millisecond latency for reads and writes at the 99th percentile, at any scale, anywhere in the world. The following metrics illustrate the differences in latency that the OLS service is seeing between Table storage and Azure Cosmos DB. (DbTable is Azure Table storage and CosmosDbTable is the Azure Cosmos DB Table API.)

Turnkey data distribution

With Table storage, options for global distribution were limited. What’s more, the OLS team couldn’t implement failover on its own. With Azure Cosmos DB, the team now enjoys distribution to any number of regions—including multi-master capabilities, which, when enabled, will let any region accept write operations.

“Just by clicking on the map, data can be automatically replicated to any Azure region in the world,” says Cheng. “This feature is very convenient, and we plan to put it to use soon.”

Other technical benefits

In addition to the above, Azure Cosmos DB provides the OLS team with some additional advantages over Table storage:

Automatic indexing. With Table storage, primary indexes are limited to PartitionKey and RowKey, and there are no secondary indexes. Azure Cosmos DB provides automatic and complete indexing on all properties by default, with no index management.

Faster query times. With Table storage, query execution uses the index for the primary key and scans otherwise. With Azure Cosmos DB, queries can take advantage of automatic indexing on all properties for faster query times.

Consistency. With Table storage, the OLS team was limited to strong consistency within the primary region and eventual consistency within the secondary region. With Azure Cosmos DB, they can choose from well-defined consistency levels, enabling them to optimize tradeoffs between read consistency and latency, availability, and throughput while designing the solution.

Get started with Azure Cosmos DB today

Visit Azure Cosmos DB.
See Introduction to Azure Cosmos DB Table API.

Source: Azure

Automating cybersecurity guardrails with new Zero Trust blueprint and Azure integrations

In our day-to-day work, we focus on helping customers advance the security of their digital estate using the native capabilities of Azure. In the process, we frequently find that using Azure to improve an organization’s cybersecurity posture can also help these customers achieve compliance more rapidly.

Today, many of our customers in regulated industries are adopting a Zero Trust architecture, moving to a security model that more effectively adapts to the complexity of the modern environment, embraces the mobile workforce, and protects people, devices, applications, and data wherever they’re located.

Regardless of where the request originates or what resource it accesses, Zero Trust teaches us to “never trust, always verify.” In a Zero Trust model, every access request is strongly authenticated, authorized within policy constraints, and inspected for anomalies before granting access. This approach can aid the process of achieving compliance for industries that use NIST-based controls including financial services, defense industrial base, and government.

A Zero Trust approach should extend throughout the entire digital estate and serve as an integrated security philosophy and end-to-end strategy, across three primary principles: (1) verify explicitly, (2) enforce least privilege access, and (3) assume breach.

Use the Azure blueprint for faster configuration of Zero Trust

The Azure blueprint for Zero Trust enables application developers and security administrators to more easily create hardened environments for their application workloads. Essentially, the blueprint will help you implement Zero Trust controls across six foundational elements: identities, devices, applications, data, infrastructure, and networks.

Using the Azure Blueprints service, the Zero Trust blueprint will first configure your VNET to deny all network traffic by default, enabling you to extend it and/or set rules for selective traffic based on your business needs. In addition, the blueprint will enforce and maintain Azure resource behaviors and configuration in compliance with specific NIST SP 800-53 security control requirements using Azure Policy.
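
The deny-by-default posture the blueprint configures can be sketched as priority-ordered rule evaluation with a final implicit deny, which is how network security rules behave in Azure. The rules and addresses below are made up for illustration, not the blueprint's actual policy set:

```python
# Sketch of deny-by-default evaluation: rules are checked in priority order
# and traffic is denied unless an earlier rule explicitly allows it. Rules,
# ports, and prefixes here are illustrative.

def evaluate(rules, packet):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (packet["dest_port"] in rule["ports"]
                and packet["source"] in rule["sources"]):
            return rule["action"]
    return "Deny"  # default: deny all traffic not explicitly allowed

rules = [
    {"priority": 100, "action": "Allow",
     "ports": {443}, "sources": {"10.0.1.0/24"}},  # selective HTTPS allow
]
print(evaluate(rules, {"dest_port": 443, "source": "10.0.1.0/24"}))  # Allow
print(evaluate(rules, {"dest_port": 22,  "source": "10.0.1.0/24"}))  # Deny
```

Starting from deny-all and adding narrow allow rules, as the blueprint does, means any traffic you forget to classify fails closed rather than open.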

The blueprint includes Azure Resource Manager templates to deploy and configure Azure resources such as Virtual Network, Network Security Groups, Azure Key Vault, Azure Monitor, Azure Security Center, and more. If you’re working with applications that need to comply with FedRAMP High or DoD Impact Level 4 requirements or just want to improve the security posture of your cloud deployment, the blueprint for Zero Trust is designed to help you get there faster.

The Azure blueprint for Zero Trust is currently in preview with limited support. To learn more and find instructions to deploy into Azure, see Azure blueprint for Zero Trust. For more information, questions, and feedback, please contact us at Zero Trust blueprint feedback.

In addition to this new blueprint, we’re announcing two new integrations with Azure to bring faster authorization and increased flexibility to the public sector and regulated industries:

Accelerate risk management for Azure deployments with Xacta

Increasing the speed with which cloud-based initiatives achieve authorization is a critical part of modernization. Often this process is highly manual and lacks the ability to provide a clear picture for continuous monitoring.

Xacta now integrates with Azure Policy and Azure Blueprints, enabling customers to centrally manage compliance policies, track their compliance status, and more easily enforce policies to ensure ongoing compliance. For example, Xacta streamlines and automates many labor-intensive tasks associated with key security frameworks such as the NIST Risk Management Framework (RMF), NIST Cybersecurity Framework (CSF), FedRAMP, and ISO 27001.

Through this new integration, Azure Policy automatically generates a significant portion of the required accreditation package directly into Xacta, instantiating a risk management framework and reducing the manual effort required of risk professionals, freeing up their time to focus on critical risk decisions.

Enable continuous monitoring of containers using Anchore

Customers using containers to achieve greater flexibility within regulated environments commonly encounter security and governance challenges. To address those challenges, Anchore recently announced their support for Windows containers, delivering more choice for public sector agencies and enterprises developing container-based applications and implementing broad DevSecOps initiatives. Anchore Enterprise 2.3 performs deep image inspection of Windows container images, helping teams establish policy-based approaches to container compliance without compromising velocity.

Whether you’re using containers today or evaluating services, such as Azure Kubernetes Service, you can count on us to continue to provide world-class cybersecurity technology, controls, and best practices to help you accelerate both security and compliance.

Learn more

To learn more about how to implement Zero Trust architecture on Azure, read the six-part blog series on the Azure Government Dev blog. You may also want to bookmark the Security blog to keep up with our coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
Source: Azure

Use Azure Firewall for secure and cost-effective Windows Virtual Desktop protection

This post was co-authored by Pavithra Thiruvengadam, Program Manager, Windows Virtual Desktop

Work from home policies require many IT organizations to address fundamental changes in capacity, network, security, and governance. Many employees aren’t protected by the layered security policies associated with on-premises services while working from home. Virtual desktop infrastructure (VDI) deployments on Azure can help organizations rapidly respond to this changing environment. However, you need a way to protect inbound or outbound internet access to and from these VDI deployments.

Windows Virtual Desktop is a comprehensive desktop and application virtualization service running in Azure. It’s the only VDI that delivers simplified management, multi-session Windows 10, and optimizations for Office 365. You can deploy and scale your Windows desktops and apps on Azure in minutes and get built-in security and compliance features. In this post, we explore how to use Azure Firewall for secure and cost-effective Windows Virtual Desktop protection.

Windows Virtual Desktop components

The Windows Virtual Desktop service is delivered in a shared responsibility model:

Customer-managed RD clients connect to Windows desktops and applications from their favorite client device from anywhere on the internet.
Microsoft-managed Azure service handles connections between RD clients and Windows Virtual Machines in Azure (including Windows 10 multi-session).
Customer-managed virtual network in Azure hosts Windows 10 multi-session virtual machines in host pools.

Windows Virtual Desktop doesn’t require you to open any inbound access to your virtual network. However, to ensure platform connectivity between customer-managed virtual machines and the service, a set of outbound network connections must be enabled for the host pool virtual network. While these dependencies can be configured using Network Security Groups, this configuration is limited to network-level traffic filtering only. For application-level protection, you can use Azure Firewall or a third party network virtual appliance (NVA). For best practices to consider before deploying an NVA, see Best practices to consider before deploying a network virtual appliance.

Host pool outbound access to Windows Virtual Desktop

Azure Firewall is a cloud-native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

Azure Firewall provides a Windows Virtual Desktop FQDN Tag to simplify host pool outbound access to Windows Virtual Desktop. Use the following steps to allow outbound platform traffic:

Deploy Azure Firewall and configure your Windows Virtual Desktop host pool subnet User Defined Route (UDR) to route all traffic via the Azure Firewall.
Create an application rule collection and add a rule to enable the WindowsVirtualDesktop FQDN tag. The source IP address range is the host pool virtual network, the protocol is https, and the destination is WindowsVirtualDesktop.
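
The effect of the application rule in step 2 can be sketched as a simple allow check: HTTPS traffic from the host pool subnet to any destination covered by the FQDN tag is permitted, and everything else falls through to the firewall's default deny. The FQDNs and subnet below are placeholders; Azure maintains the real contents of the WindowsVirtualDesktop tag:

```python
# Sketch of an application rule using the WindowsVirtualDesktop FQDN tag.
# The tag contents and host pool prefix are placeholders, not the real
# values, which Azure maintains and can change.

WVD_TAG = {"rdbroker.wvd.microsoft.com",
           "rdweb.wvd.microsoft.com"}  # illustrative expansion of the tag
HOST_POOL_PREFIX = "10.0.2."           # illustrative host pool subnet

def allows(source_ip, protocol, dest_fqdn):
    """Allow only HTTPS from the host pool to tag-covered destinations."""
    return (source_ip.startswith(HOST_POOL_PREFIX)
            and protocol == "https"
            and dest_fqdn in WVD_TAG)

print(allows("10.0.2.5", "https", "rdbroker.wvd.microsoft.com"))  # True
print(allows("10.0.2.5", "https", "example.com"))                 # False
```

Because the tag is resolved by the platform, the rule keeps working as the Windows Virtual Desktop service endpoints change, without you maintaining an FQDN list yourself.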


The set of required storage and service bus accounts for your Windows Virtual Desktop host pool is deployment specific and isn’t yet captured in the WindowsVirtualDesktop FQDN tag. Additionally, a network rule collection is needed to allow DNS access from your Active Directory Domain Services (ADDS) deployment and KMS access from your virtual machines to Windows Activation Service. To configure access for these additional dependencies, see Use Azure Firewall to protect Windows Virtual Desktop deployments.

Host pool outbound access to the internet

Depending on your organization needs, you may want to enable secure outbound internet access for your end users. As Windows Virtual Desktop sessions are running on customer-managed virtual machines, they are also subject to your virtual network security controls. In cases where the list of allowed destinations is well-defined (for example, Office 365 access), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance.

If you want to filter outbound user internet traffic using an existing on-premises secure web gateway, you can configure web browsers or other applications running on the Windows Virtual Desktop host pool with an explicit proxy configuration. For example, see How to use Microsoft Edge command-line options to configure proxy settings. These proxy settings only influence your end-user internet access, allowing outbound traffic directly via Azure Firewall.

Next steps

For more information on everything we covered above, please see the following blogs, documentation, and videos.

What is Windows Virtual Desktop?
Azure Firewall documentation.
Use Azure Firewall to protect Windows Virtual Desktop deployments.
Azure Firewall February 2020 blog: New Azure Firewall certification and features in Q1 CY2020.

Source: Azure

Azure Virtual Machine Scale Sets now provide simpler management during scale-in

We recently announced the general availability of three features for Azure Virtual Machine Scale Sets. Instance protection, custom scale-in policy, and terminate notification provide new capabilities to simplify management of virtual machine instances during scale-in.

Azure Virtual Machine Scale Sets are a way to collectively deploy and easily manage a number of virtual machine (VM) instances in a group. You can also configure autoscaling rules for your scale set that enable you to dynamically increase or decrease the number of instances based on what the workload requires.

With these new features, you now have more control over gracefully handling the removal of instances during scale-in, enabling you to achieve better user experience for your applications and services. These new features are available across all Azure regions for public cloud as well as sovereign clouds. There is no extra charge for using these features with Azure Virtual Machine Scale Sets.

Let’s take a look at how these features provide you better control during scale-in.

Instance protection—protect one or more instances from scale-in

You can apply the policy Protect from scale-in to one or more instances in your scale set if you do not want those instances to be deleted when a scale-in occurs. This is useful when you have a few special instances, perhaps performing specialized tasks different from the rest of the scale set, that you want to preserve while dynamically scaling other instances in or out.

Protect one or more instances from scale-set actions

Instance protection also allows you to protect one or more of your instances from being modified during other scale-set operations, such as reimage or upgrade, by applying the policy Protect from scale-set actions to specific instances. Applying this policy to an instance also automatically protects it from scale-in.

Custom scale-in policy—configure the order of instance removal during scale-in

When one or more instances need to be removed from a scale set during scale-in, instances are selected for deletion in such a way that the scale set remains balanced across availability zones and fault domains, if applicable. Custom scale-in policies allow you to further specify and control the order in which instances are selected for deletion during scale-in. You can use the OldestVM scale-in policy to remove the oldest-created instance first, or the NewestVM scale-in policy to remove the newest-created instance first. In both scenarios, balancing across availability zones is given preference. If you have applied either of the protection policies to an instance, it will not be picked for deletion during scale-in.
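
The selection logic just described can be sketched as follows: skip protected instances, prefer the zone with the most remaining candidates to keep zones balanced, then apply the age-based policy within that zone. This is a simplified model for illustration, not the platform's exact algorithm:

```python
# Sketch of scale-in selection: honor instance protection, keep zones
# balanced, then pick the oldest (or newest) instance by creation order.
# Lower id means created earlier. A simplified model, not the exact
# platform algorithm.

def pick_for_deletion(instances, policy="OldestVM"):
    candidates = [i for i in instances if not i["protected"]]
    zone_counts = {}
    for inst in candidates:
        zone_counts[inst["zone"]] = zone_counts.get(inst["zone"], 0) + 1
    busiest = max(zone_counts, key=zone_counts.get)  # rebalance first
    in_zone = [i for i in candidates if i["zone"] == busiest]
    select = min if policy == "OldestVM" else max
    return select(in_zone, key=lambda i: i["id"])

fleet = [
    {"id": 0, "zone": 1, "protected": True},   # protected: never selected
    {"id": 1, "zone": 1, "protected": False},
    {"id": 2, "zone": 1, "protected": False},
    {"id": 3, "zone": 2, "protected": False},
]
print(pick_for_deletion(fleet)["id"])              # 1 (oldest in fullest zone)
print(pick_for_deletion(fleet, "NewestVM")["id"])  # 2
```

Note how the protected instance with id 0 is never chosen even though it is the oldest, matching the interaction between protection policies and scale-in described above.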

Below are a couple of examples of the scale-in order for a scale set with three availability zones and an initial instance count of 9. These examples assume that the VM with the smallest instance ID was created first and that the VM with the highest instance ID was created last. The VM instance enclosed in a dotted square has been protected using one of the instance protection policies. The cross indicates that the VM instance will be selected for deletion during scale-in.


Terminate notification—receive in-VM notification of instance deletion

When an instance is about to be deleted from a scale set, you may want to perform certain custom actions on it, such as deregistering it from the load balancer or copying its logs. When instance deletions are triggered by the platform, for example due to a scale-in, these actions need to be performed programmatically to ensure that the application is not interrupted and that useful logs are retained. With the terminate notification feature, you can configure your instances to receive in-VM notifications about upcoming instance deletion and pause the delete operation for 5 to 15 minutes to perform such custom actions.

The terminate notifications are sent through the Azure metadata service—Scheduled events—and can be received using a REST endpoint accessible from within the VM instance. Specific actions or scripts can be configured to run when an instance receives the terminate notification at the configured endpoint. Once these actions are complete, if you do not want to wait for the pre-configured pause timeout to finish, you can approve the deletion by issuing a POST call to the metadata service, which allows deletion of the instance to continue.
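
The notify-then-approve flow can be sketched by working with the Scheduled Events document itself: parse the events, run cleanup for each Terminate event, then build the approval payload to POST back. The document shape follows the Scheduled Events API, but the values and the cleanup step are illustrative, and the sketch leaves out the actual HTTP calls to the in-VM metadata endpoint:

```python
# Sketch of handling a terminate notification: for each Terminate event in
# the Scheduled Events document, run cleanup (e.g. drain the load balancer,
# copy logs) and collect an approval, then POST the payload back to end the
# pause early. Values are illustrative; real code GETs/POSTs the in-VM
# metadata endpoint.
import json

def build_approval(events_doc, cleanup):
    approvals = []
    for event in events_doc["Events"]:
        if event["EventType"] == "Terminate":
            cleanup(event["Resources"])  # custom pre-deletion actions
            approvals.append({"EventId": event["EventId"]})
    return {"StartRequests": approvals}

sample = {
    "DocumentIncarnation": 1,
    "Events": [{
        "EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5",
        "EventType": "Terminate",
        "ResourceType": "VirtualMachine",
        "Resources": ["myvmss_5"],
        "EventStatus": "Scheduled",
    }],
}
payload = build_approval(sample, cleanup=lambda vms: None)
print(json.dumps(payload))
```

Once this payload is POSTed to the metadata service, the platform stops waiting out the pause window and proceeds with deleting the instance.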

Get started

You can enable these features for your scale set using the REST API, Azure CLI, Azure PowerShell, or the Azure portal. Below are links to the documentation pages for detailed instructions.

Instance protection
Custom scale-in policy
Terminate notification

Source: Azure