Hot patching SQL Server Engine in Azure SQL Database

In the world of cloud database services, few things are more important to customers than having uninterrupted access to their data. In industries like online gaming and financial services that experience high transaction rates, even the smallest interruptions can potentially impact the end-user’s experience. Azure SQL Database is evergreen, meaning that it always has the latest version of the SQL Engine, but maintaining this evergreen state requires periodic updates to the service that can take the database offline for a second. For this reason, our engineering team is continuously working on innovative technology improvements that reduce workload interruption.

Today’s post, in collaboration with the Visual C++ Compiler team, covers how we patch SQL Server Engine without impacting workload at all.

Figure 1 – This is what hot patching looks like under the covers. If you’re interested in the low-level details, see our technical blog post.

The challenge

The SQL Engine we are running in Azure SQL Database is the very latest version of the same engine customers run on their own servers, except we manage and update it. To update SQL Server or the underlying infrastructure (i.e., Azure Service Fabric or the operating system), we must stop the SQL Server process. If that process hosts the primary database replica, we move the replica to another machine, requiring a failover.

During a failover, the database may be offline for a second and still meet our 99.995 percent SLA. However, failover of the primary replica impacts workload because it aborts in-flight queries and transactions. We built features such as resumable index (re)build and accelerated database recovery to address these situations, but not all running operations are automatically resumable. It may be expensive to restart complex queries or transactions that were aborted due to an upgrade. So even though failovers are quick, we want to avoid them.

SQL Server and the overall Azure platform invest significant engineering effort in platform availability and reliability. In SQL Database, we keep multiple replicas of every database. During an upgrade, we ensure that hot standbys are available to take over immediately.

We’ve worked closely with the broader Azure and Service Fabric teams to minimize the number of failovers. When we first decide to fail over a database for upgrade, we apply updates to all components in the stack at the same time: OS, Service Fabric, and SQL Server. We have automatic scheduling that avoids deploying during an Azure region’s core business hours. Just before failover, we attempt to drain active transactions to avoid aborting them. We even utilize database workload patterns to perform failover at the best time for the workload.

Even with all that, we can't get around the fact that to update the SQL Engine to a new version, we must restart the process and fail over the database's primary replica at least once. Or do we?

Hot patching and results

Hot patching is modifying in-memory code in a running process without restarting the process. In our case, it gives us the capability to modify C++ code in the SQL Engine without restarting sqlservr.exe. Since we don't restart, we don't fail over the primary replica or interrupt the workload. We don't even need to pause SQL Server activity while we patch. Hot patching goes unnoticed by the user workload, other than the patch payload taking effect, of course!
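As a loose analogy, the idea can be illustrated in Python (the engine itself patches compiled C++ machine code in place, which is far more involved): swap a function's implementation while the process keeps running, so every existing caller immediately sees the fixed behavior without a restart. The function names and numbers below are hypothetical.

```python
# Analogy only: real hot patching rewrites machine code in the running
# sqlservr.exe process. Here we swap a Python function's code object
# in place, so all existing references pick up the fix immediately.

def compute_discount(price):
    return price * 0.9   # "buggy" behavior: should be 15% off

def compute_discount_fixed(price):
    return price * 0.85  # the patched implementation

# Apply the "patch" without restarting: replace the code in place.
compute_discount.__code__ = compute_discount_fixed.__code__

print(compute_discount(100))  # callers now get the fixed behavior: 85.0
```

Swapping `__code__` (rather than rebinding the name) mirrors in-place patching: code that already holds a reference to the function is fixed too.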

Hot patching does not replace traditional, restarting upgrades – it complements them. Hot patching currently has limitations that make it unsuitable when there are a large number of changes, such as when a major new feature is introduced. But it is perfect for smaller, targeted changes. More than 80 percent of typical SQL bug fixes are hot patchable. Benefits of hot patching include:

Reduced workload disruption – No restart means no database failover and no workload impact.
Faster bug fixes – Previously, we weighed the urgency of a bug fix vs. impact on customer workloads from deploying it. Sometimes we would deem a bug fix not important enough for worldwide rollout because of the workload impact. With hot patching, we can now deploy bug fixes worldwide right away.
Features available sooner – Even with the 500,000+ functional tests that we run several times per day and thorough testing of every new feature, sometimes we discover problems after a new feature has been made available to customers. In such cases, we may have to disable the feature or delay go-live until the next scheduled full upgrade. With hot patching, we can fix the problem and make the feature available sooner.

We did the first hot patch in production in 2018. Since then, we have hot patched millions of SQL Servers every month. Hot patching increases SQL Database ship velocity by 50 percent, while at the same time improving availability.

How hot patching works

For the technically interested, see our technical blog post for a detailed explanation of how hot patching works under the covers. Start reading at section three.

Closing words and next steps

With the capability in place, we are now working to improve the tooling and remove limitations to make more changes hot patchable with quick turnaround. For now, hot patching is only available in Azure SQL Database, but some day it may also come to SQL Server. Let us know via SQLDBArchitects@microsoft.com if you would be interested in that.

Please leave comments and questions below or contact us on the email above if you would like to see more in-depth coverage of cool technology we work on.
Source: Azure

Navigating the intelligent edge: answers to top questions

Over the past ten years, Microsoft has seen embedded IoT devices get progressively smarter and more connected, running software intelligence near the point where the data is being generated within a network. And having memory and compute capabilities at the intelligent edge solves multiple conundrums related to connectivity, bandwidth, latencies, and privacy/security.

Of course, each device that connects to a network brings the challenge of how to secure, provision, and manage it. It raises issues of privacy requirements, data regulations, bandwidth, and transfer protocols. And when you have thousands of devices connecting to each other and to broader systems like the cloud, all this can get very complex, very quickly.

Here are some of the most frequent questions around the intelligent edge and examples of how Azure solutions can help simplify securing, provisioning, and managing it. To hear more in-depth thoughts on this topic, join Olivier Bloch on October 10 as he speaks at the IoT in Action event in Santa Clara.
 

Securing the intelligent edge

“How do I ensure the devices that are connected are the ones they say they are, and that they are authenticating to the back end and securing data in an encrypted way?”

Each device that gets installed on a network provides one more potential network doorway for bad actors. No one wants their car radio, scale, or vending machine hacked. No one wants customer data stolen. We’ve already seen too much of that in the news. Securing the intelligent edge is rightfully a key concern for customers interested in IoT technology.

The key is to start simple by building on top of solutions that have addressed these important concerns. Microsoft intelligent edge and intelligent cloud solutions have been designed to complement each other, which makes it much easier to create secure IoT solutions that you can trust.

Azure Sphere is a great place to start. It provides a turnkey IoT solution that builds on decades of Microsoft experience, ensuring comprehensive, multi-layer security from the microcontroller unit (MCU) to the operating system to the cloud.

It begins with Azure Sphere-certified MCUs from our hardware partners, with a Microsoft hardware root of trust embedded into the silicon. The operating system (OS) provides in-depth defense that guards against hackers and enables automated OS and security updates. The Azure Sphere Security Service safeguards every device with the seven properties of highly secured, internet-connected devices. Azure Sphere only runs signed, authentic software, reducing the risk of malware or application tampering. Even devices that are already installed can be secured with Azure Sphere guardian modules, with little or no redesign required.

Provisioning and managing the intelligent edge

“Connecting one device manually to the cloud is part of the story. But what if I need to provision and then manage a whole bunch of devices at scale?”

You want to ensure devices are easy to provision, update, and manage. You want to be able to roll out new devices, and when the time comes, retire devices. You want to provision and manage devices like you would a fleet of PCs without having to manually update software and firmware.

Again, Microsoft has solutions that simplify all of this.

Azure IoT Hub enables you to connect, manage, and scale devices to the edge with per-device authentication and scaled provisioning. Azure IoT Edge, which is an intelligent edge runtime managed and configured from Azure IoT Hub, enables you to deploy cloud workloads to run on edge devices using standard containers. IoT Edge secures the communications between IoT applications and your edge devices, enabling you to power and remotely configure the devices. Built-in device management and provisioning capabilities enable you to connect and manage devices at scale.

To implement scaled provisioning, Azure IoT Hub is paired with the Device Provisioning Service (DPS), which streamlines the enrollment process by allowing you to register and provision all your devices to IoT Hub without any human intervention. DPS takes advantage of hardware-secured modules, where secure seeds are planted by silicon manufacturers and confidential compute is possible, to establish a trusted connection and authentication with a global endpoint (DPS). This, in turn, can be configured not only to provide IoT Hub device identity and credentials back to devices, but also to deliver a first configuration at provisioning time. It’s a powerful and scalable way to manage IoT devices during their whole life cycle, from the first connection to retirement, including transfers of ownership.
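Very roughly, the enrollment flow DPS automates can be sketched as a lookup from a hardware-rooted identity to a hub assignment plus a first configuration. Every name and value below is a hypothetical illustration; the real service performs cryptographic attestation against the device's hardware security module rather than a dictionary lookup.

```python
# Hypothetical sketch of the DPS enrollment flow described above.
# The enrollment table and field names are illustrative, not the real API.

ENROLLMENTS = {
    "device-001-hw-id": {
        "hub": "contoso-hub.azure-devices.net",        # assigned IoT Hub
        "initial_config": {"telemetry_interval_s": 60},  # first configuration
    },
}

def provision(hardware_id):
    """Return the hub assignment and first configuration for an enrolled device."""
    record = ENROLLMENTS.get(hardware_id)
    if record is None:
        # Attestation failed: the device is not enrolled.
        raise PermissionError("device not enrolled; attestation failed")
    return record

assignment = provision("device-001-hw-id")
print(assignment["hub"])  # the device now knows which IoT Hub to connect to
```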

Learn more about the intelligent edge at an IoT in Action event

Microsoft continues to innovate with solutions that help streamline and simplify securing, provisioning, and managing the intelligent edge. To learn more about how you can best leverage this technology, be sure to register for the upcoming Santa Clara IoT in Action event on October 10. As part of the event, I will be leading a panel discussion focused on how customers and partners are simplifying IoT and solving industry problems. 

If you can’t make it to the Santa Clara event, there will also be one-day events held in cities around the world, including Warsaw, Frankfurt, Toronto, Auckland, Taipei, Shenzhen, and more. These events are a valuable opportunity to get all your questions answered and build connections with potential IoT partners. Through interactive sessions, Microsoft will share how various solutions and accelerators can help simplify IoT so you can get secure solutions out the door faster and more cost effectively.

Prefer a virtual event? Browse the IoT in Action webinar series which features IoT industry experts discussing real-life solution use cases. You can also get started on further advancing your technical IoT skills by watching the IoT Show, joining the IoT Tech Community, and learning at IoT School.

How to develop your service health alerting strategy

Service issues are anything that could affect your availability, from outages and planned maintenance to service transitions and retirements. While rare—and getting rarer all the time, thanks to innovations in impactless maintenance and disciplines like site reliability engineering—service issues do occur, which is why service health alerting is such a critical part of successfully managing cloud operations. It’s all about helping your team understand the status and health of your environment so you can act quickly in the event of an issue. That can mean taking corrective measures like failing over to another region to keep your app running or simply communicating with your stakeholders so they know what’s going on.

In this blog, we’ll cover how you can develop an effective service health alerting strategy and then make it real with Azure Service Health alerts.

How Azure Service Health alerts work

Azure Service Health is a free Azure service that provides alerts and guidance when Azure service issues like outages and planned maintenance affect you. Azure Service Health is available in the portal as a dashboard where you can check active, upcoming, and past issues.

Of course, you may not want to check the Azure Service Health dashboard regularly. That’s why Azure Service Health also offers alerts, which automatically notify you via your preferred channel (email, SMS, mobile push notification, a webhook into an internal ticketing system like ServiceNow or PagerDuty, and more) if there’s an issue affecting you.

If you’re new to Azure Service Health alerts, you’ll notice that there are many choices to make during the configuration process. Who should I alert about which services and regions? Who should I alert for which types of health events: outages, planned maintenance, health advisories? And what type of notification (email, SMS, push notification, webhook, or something else) should I use?

To answer these questions the right way, you’ll need to have a conversation with your team and develop your service health alerting strategy.

How to develop your service health alerting strategy with your team

There are three key considerations for your team to address when you set up your Azure Service Health alerts.

First, think about criticality. How important is a given subscription, service, or region? If it’s production, you’ll want to set up an alert for it, but a dev/test environment might not need one. Azure Service Health is personalized, so we won’t trigger your alert if a service issue affects a service or region you aren’t using.

Next, decide who to inform in the event of an issue. Who is the right person or team to tell about a service issue so they can act? For example, send Azure SQL or Azure Cosmos DB issues to your database team.

Finally, agree on how to inform that individual or team. What is the right communication channel for the message? Email is noisy, so it might take longer for your teams to respond. That’s fine for planned maintenance that’s weeks away, but not for an outage affecting you right now, in which case you’ll want to alert your on-call team using a channel that’s immediately seen, like a push notification or SMS. Or if you’re a larger or more mature organization, plug the alerts into your existing problem management system using a webhook/ITSM connection so you can follow your normal workflow.
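The three decisions above (criticality, who, how) can be sketched as a simple routing table. This is a hypothetical illustration in Python of the strategy conversation, not how Azure Service Health alerts are actually configured; the team names and channels are invented examples.

```python
# Illustrative routing table for the three decisions above.
# Event types loosely mirror Service Health categories; channels are examples.

ROUTES = {
    # (event_type, criticality) -> notification channel
    ("ServiceIssue", "production"): "sms-oncall",        # outage: page immediately
    ("PlannedMaintenance", "production"): "email-dba",   # weeks of lead time: email is fine
    ("HealthAdvisory", "production"): "email-dba",
}

def route_alert(event_type, criticality):
    """Return the channel for an event, or None (e.g. dev/test) to stay quiet."""
    return ROUTES.get((event_type, criticality))

print(route_alert("ServiceIssue", "production"))   # pages the on-call team
print(route_alert("ServiceIssue", "dev"))          # no alert for dev/test
```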

For more information on Azure Service Health, how to set up alerts, and other critical guidance for handling service issues including, in some cases, avoiding their impact altogether, check out the video below:

Set up your Azure Service Health alerts today

Once you’ve had your Azure Service Health alerting conversation with your team and developed your strategy, configure your Azure Service Health alerts in the Azure Portal.

For more in-depth guidance, visit the Azure Service Health documentation. Let us know if you have a suggestion by submitting an idea via our feedback forum.

Extending the power of Azure AI to business users

Today, Alysa Taylor, Corporate Vice President of Business Applications and Industry, announced several new AI-driven insights applications for Microsoft Dynamics 365.

Powered by Azure AI, these tightly integrated AI capabilities will empower every employee in an organization to make AI real for their business today. Millions of developers and data scientists around the world are already using Azure AI to build innovative applications and machine learning models for their organizations. Now business users will also be able to directly harness the power of Azure AI in their line of business applications.

What is Azure AI?

Azure AI is a set of AI services built on Microsoft’s breakthrough innovation from decades of world-class research in vision, speech, language processing, and custom machine learning. What I find particularly exciting is that Azure AI provides our customers with access to the same proven AI capabilities that power Xbox, HoloLens, Bing, and Office 365.

Azure AI helps organizations:

Develop machine learning models that can help with scenarios such as demand forecasting, recommendations, or fraud detection using Azure Machine Learning.
Incorporate vision, speech, and language understanding capabilities into AI applications and bots, with Azure Cognitive Services and Azure Bot Service.
Build knowledge-mining solutions to make better use of untapped information in their content and documents using Azure Search.

Bringing the power of AI to Dynamics 365 and the Power Platform

The release of the new Dynamics 365 insights apps, powered by Azure AI, will enable Dynamics 365 users to apply AI in their line of business workflows. Specifically, they benefit from the following built-in Azure AI services:

Azure Machine Learning which powers personalized customer recommendations in Dynamics 365 Customer Insights, analyzes product telemetry in Dynamics 365 Product Insights, and predicts potential failures in business-critical equipment in Dynamics 365 Supply Chain Management.
Azure Cognitive Services and Azure Bot Service that enable natural interactions with customers across multiple touchpoints with Dynamics 365 Virtual Agent for Customer Service.
Azure Search which allows users to quickly find critical information in records such as accounts, contacts, and even in documents and attachments such as invoices and faxes in all Dynamics 365 insights apps.

Furthermore, since Dynamics 365 insights apps are built on top of Azure AI, business users can now work with their development teams using Azure AI to add custom AI capabilities to their Dynamics 365 apps.

The Power Platform, comprised of three services (Power BI, PowerApps, and Microsoft Flow), also benefits from Azure AI innovations. While each of these services is best-of-breed individually, their combination as the Power Platform is a game-changer for our customers.

Azure AI enables Power Platform users to uncover insights, develop AI applications, and automate workflows through low-code, point-and-click experiences. Azure Cognitive Services and Azure Machine Learning empower Power Platform users to:

Extract key phrases in documents, detect sentiment in content such as customer reviews, and build custom machine learning models in Power BI.
Build custom AI applications that can predict customer churn, automatically route customer requests, and simplify inventory management through advanced image processing with PowerApps.
Automate tedious tasks such as invoice processing with Microsoft Flow.

The tight integration between Azure AI, Dynamics 365, and the Power Platform will enable business users to collaborate effortlessly with data scientists and developers on a common AI platform that not only has industry leading AI capabilities but is also built on a strong foundation of trust. Microsoft is the only company that is truly democratizing AI for businesses today.

And we’re just getting started. You can expect even deeper integration and more great apps and experiences that are built on Azure AI as we continue this journey.

We’re excited to bring those to market and eager to tell you all about them!

The Marco Polo Network uses Azure and Corda blockchain to modernize trade finance

The Marco Polo Network is now generally available on Azure to help both trade banks and corporations take advantage of the R3 Corda distributed ledger to better facilitate global trade in this ever-changing world. Regardless of what headlines would lead you to believe, international trade is the lifeblood of the modern global economy. Each year, hundreds of trillions of dollars in goods, assets, credit, and money change hands to keep the engine of global trade running. When a multinational corporation (acting as a seller or exporter) sends goods to its customers (acting as buyers or importers), the corporation often doesn’t receive payment for 30-90 days. This problem can be exacerbated by variables such as tariffs or new customs duties. To manage cash flow while waiting for payment, sellers often resort to taking out short-term loans from trade banks. But trade banks find it difficult to keep pace, having to rely on aging systems and siloed data that increase cost and process friction for all involved.

The disadvantages of disconnected trade

If global trade is an engine, financing is the fuel. But many trade banks rely on decades-old, paper-based processes that slow trade flow and add complexity, with antiquated financing tools that make onboarding expensive, reconciliation cumbersome, and the customer experience poor.

Furthermore, as global providers of trade and supply chain finance, trade banks must manage transactions between sellers and buyers while navigating increasingly complex regulatory processes made more pronounced by national boundaries. Due to these global regulations, banks can be forced to use a different financing platform for each geolocation, leading to an overabundance of disconnected management tools.

Without a network to exchange data and a platform for viewing and managing transactions, banks have tremendous difficulty processing and executing their clients’ trade and supply chain financing transactions. At the same time, buyers and sellers can lack awareness of their own financial health due to paper-based trade contracts, which aren’t immediately understood across the organization. Furthermore, many small and medium-sized import and export businesses are unable to scale due to staggering overhead costs.

A cloud-based network to streamline global trade

To improve efficiency in global trade finance, technology firms TradeIX and R3 partnered with leading banks to create the Marco Polo Network. Launched in 2017, Marco Polo provides a digital, distributed technology platform that allows trading parties to automate and streamline their trade and supply chain finance activities. Applications built and deployed on top of the platform allow banks and corporations to perform specific product and trade orchestrations. Trading parties – buyers, sellers, logistics providers, insurers, banks, and other key stakeholders – are able to exchange trade data and assets securely, in real time, and peer to peer using an open and distributed network powered by Corda. Importantly, the network and platform are open, meaning third parties can build, develop, and deploy their own solutions on them.

The Marco Polo Network, a platform built by TradeIX using 18 distinct Azure services and R3’s Corda distributed ledger technology, is revolutionizing trade finance. TradeIX packaged Corda and the Marco Polo Network application stack, or node, for deployment using Azure Container Instances and the Azure Container Registry. This gave participating banks and corporations the flexibility to pursue one of two different hosting options: run a Marco Polo node inside the TradeIX Azure tenant, or pull down the application binaries as Docker images from an Azure Container Registry and deploy them within the bank’s own Azure tenant. The result is a transformational technology and distributed platform that enables the world’s leading trade banks and their corporate clients to exchange data in real time, resulting in streamlined, automated business activities that increase efficiency and transparency for receivables financing and cash flow management. TradeIX built these new collaboration capabilities into the Marco Polo Network using an innovative, integrated application stack comprised of Corda, Azure SQL Database, Azure Cosmos DB, and Microsoft Dynamics 365 technologies.

One of the more novel features of the Marco Polo Network is the use of the R3 Corda distributed ledger to ensure that all of the counterparties involved in a financing request have a medium by which they can securely and seamlessly exchange the trade data, contracts, and financial assets that are critical to completing a supply chain finance transaction. By hosting this platform in the cloud, TradeIX delivers an improved customer experience by providing a single infrastructure for banks and clients to manage their transactions—regardless of geolocation, currency, type of transaction, and industry. Because it’s an open, cloud-native network, Marco Polo Network members can share best practices, run pilot programs, and adjust the platform to meet their specific needs. However, this openness should not come at the expense of the security and compliance fundamentals required by the world’s leading banks and corporations. Microsoft and TradeIX implemented a host of Azure security controls such as Log Analytics, Security Center, Application Gateway, and DDoS Protection to ensure that the Marco Polo Network would be well positioned to maintain the highest levels of trust, transparency, and standards conformance for all members across the network.

In the near future, the Marco Polo Network will also provide corporate treasurers with an ERP-embedded Marco Polo App supported by Dynamics 365, that allows companies to manage their trade finance directly within their own ERP system. The TradeIX – Dynamics 365 interface enables corporations to submit requests for finance directly to their trade bank of choice where it will be automatically acknowledged, received, and processed by the bank’s Corda instance resulting in a free exchange of data without the need for manual reconciliation.

Reducing expenses, improving revenue

An important objective of the Marco Polo Network is to obtain all trade data necessary for a transaction as directly as possible, from the original data source. This also includes external third parties such as logistics providers. Imagine a scenario where two companies (a buyer and a seller) and their corresponding banks exchange order and delivery data via the Marco Polo Network. Payment terms would then be secured by an irrevocable payment commitment, triggered through automated matching of trade data. This would be followed by an automatic matching of trade data achieved with the involvement of the executing logistics provider, which enters the relevant transport details directly into the network. The ability for the third-party logistics provider to automatically trigger a payment from buyer to supplier following goods delivery, with data reconciliation flowing across multiple banks simultaneously, demonstrates the real-world value of the Marco Polo Network.
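The automated matching described above can be sketched as a simple three-way reconciliation: a payment commitment fires only when the buyer's order, the seller's invoice, and the logistics provider's delivery record agree. This is a hypothetical simulation; the field names and figures are invented, and the real network performs this matching on Corda.

```python
# Hypothetical sketch of three-way trade-data matching. A payment
# commitment is triggered only when all parties' records agree.

def match_trade(order, invoice, delivery):
    """Return True (trigger the irrevocable payment commitment) when
    the order, invoice, and delivery data all reconcile."""
    return (order["po_number"] == invoice["po_number"] == delivery["po_number"]
            and order["quantity"] == delivery["quantity_delivered"]
            and order["amount"] == invoice["amount"])

order    = {"po_number": "PO-7781", "quantity": 500, "amount": 12_500}  # buyer
invoice  = {"po_number": "PO-7781", "amount": 12_500}                   # seller
delivery = {"po_number": "PO-7781", "quantity_delivered": 500}          # logistics

print(match_trade(order, invoice, delivery))  # all records agree: payment fires
```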

A growing network, built with business in mind

Because the Marco Polo Network is governed by member banks, the model promotes an atmosphere of collaboration across the global trade industry. This formalized governance framework has helped the Marco Polo Network onboard trade banks and corporations across Africa, Asia, Europe, the Middle East, and North and South America. Companies of all sizes will benefit from better visibility into trading relationships and easier access to financing options, beyond point-to-point relationships, to a global network of trading parties.

“I’m very pleased to see Microsoft’s Azure team is pushing the boundaries of banking and technology innovation with their partnership with the Marco Polo Network built by TradeIX. These 2 solutions coupled with Corda creates a very compelling and modern proposition for any smart business looking to take advantage of the benefits that distributed architecture offers.” – Andrew Speers, Director, Product and Innovation at NatWest and Board Director at the Corda Network Foundation.

“International trade is indeed the lifeblood of the economy, which is why R3 is so proud to be a part of the Marco Polo Network. Together, Corda and Microsoft Azure are enabling TradeIX’s mission to transform trade finance, by bringing much needed efficiencies to this market, which holds hidden treasure in the hunt for high yields. We are honored to be part of the ecosystem that will build trade finance solutions on blockchain, and are excited to see what’s next” – Ricardo Correia, Head of Partners at R3.

“It is exciting to be part of the growing ecosystem building trade finance solutions on blockchain. Microsoft is honoured to be providing our global scale cloud as a foundation to R3 and TradeIX to speed this solution to market,”  – Michael Glaros, Azure Blockchain Engineering, Microsoft.

“One of the founding technology decisions that were made for the Marco Polo Network was to use the infrastructure provided by Microsoft Azure. We firmly believe that our partnership with Microsoft provides Marco Polo members with the best infrastructure and highest security and transparency standards combined with improved customer experience.” – Oliver Belin, CMO, TradeIX.

Three ways to leverage composite indexes in Azure Cosmos DB

Composite indexes were introduced in Azure Cosmos DB at Microsoft Build 2019. With our latest service update, additional query types can now leverage composite indexes. In this post, we’ll explore composite indexes and highlight common use cases.

Index types in Azure Cosmos DB

Azure Cosmos DB currently has the following index types that are used for the following types of queries:

Range indexes:

Equality queries
Range queries
ORDER BY queries on a single property
JOIN queries

Spatial indexes:

Geospatial functions

Composite indexes:

ORDER BY queries on multiple properties
Queries with a filter as well as an ORDER BY clause
Queries with a filter on two or more properties

Composite index use cases

By default, Azure Cosmos DB will create a range index on every property. For many workloads, these indexes are enough, and no further optimizations are necessary. Composite indexes can be added in addition to the default range indexes. Composite indexes have both a path and order (ASC or DESC) defined for each property within the composite index.

ORDER BY queries on multiple properties

If a query has an ORDER BY clause with two or more properties, a composite index is required. For example, the following query requires a composite index defined on age and name (age ASC, name ASC):

SELECT * FROM c ORDER BY c.age ASC, c.name ASC

This query will sort all results in ascending order by the value of the age property. If two documents have the same age value, the query will sort the documents by name.
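Conceptually, a composite index on (age ASC, name ASC) keeps documents pre-sorted the way a tuple sort key would order them. A rough illustration in Python (this is an analogy for the resulting order, not how the engine implements the index):

```python
# Illustrative documents; the composite index maintains them pre-sorted
# by (age, name), the same order a tuple sort key produces.
docs = [
    {"name": "Tim", "age": 30},
    {"name": "Ann", "age": 25},
    {"name": "Bob", "age": 25},
]

# Sort by age first, then break ties by name.
result = sorted(docs, key=lambda d: (d["age"], d["name"]))
print([(d["age"], d["name"]) for d in result])
# [(25, 'Ann'), (25, 'Bob'), (30, 'Tim')]
```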

Queries with a filter as well as an ORDER BY clause

If a query has a filter as well as an ORDER BY clause on different properties, a composite index will improve performance. For example, the following query will require fewer request units (RUs) if a composite index on name and age is defined and the query is updated to include name in the ORDER BY clause:

Original query utilizing range index:

SELECT * FROM c WHERE c.name = "Tim" ORDER BY c.age ASC

Revised query utilizing a composite index on name and age:

SELECT * FROM c WHERE c.name = "Tim" ORDER BY c.name ASC, c.age ASC

While a composite index will significantly improve query performance, you can still run the original query successfully without one. With the composite index, the revised query returns documents sorted by name and then age; since all documents matching the filter have the same name value, they come back in ascending order by age, matching the intent of the original query.

Queries with a filter on multiple properties

If a query has a filter with two or more properties, adding a composite index will improve performance.

Consider the following query:

SELECT * FROM c WHERE c.name = "Tim" AND c.age > 18

In the absence of a composite index on (name ASC, age ASC), we will utilize a range index for this query. We can improve the efficiency of this query by creating a composite index on name and age.

Queries with multiple equality filters and a maximum of one range filter (such as >, <, <=, >=, !=) will utilize the composite index. In some cases, if a query can't fully utilize a composite index, it will use a combination of the defined composite indexes and range indexes. For more information, reference our indexing policy documentation.

Composite index performance benefits

We can run some sample queries to highlight the performance benefits of composite indexes. We will use a nutrition dataset that is used in Azure Cosmos DB labs.

In this example, we will optimize a query that has a filter as well as an ORDER BY clause. We will start with the default indexing policy, which indexes all properties with a range index. Executing the query shown in the image below in the Azure Portal, we observe the following query metrics:

Query metrics:

This query, with the default indexing policy, required 21.8 RUs.

Adding a composite index on foodGroup and _ts and updating the query text to include foodGroup in the ORDER BY clause significantly reduced the query’s RU charge.

Query metrics:

After adding a composite index, the query's RU charge decreased from 21.8 RUs to only 4.07 RUs. This query optimization will be particularly impactful as the total data size increases. The benefits of a composite index are significant when the properties in the ORDER BY clause have a high cardinality.

Creating composite indexes

You can learn more about creating composite indexes in this documentation. It's simple to update the indexing policy directly through the Azure Portal. When creating a composite index for data that's already in Azure Cosmos DB, the index update will utilize the RUs left over from normal operations. After the new indexing policy is defined, Azure Cosmos DB will automatically index properties with a composite index as they're written.
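Beyond the Portal, the indexing policy can also be updated from the Azure CLI. A minimal sketch, assuming an existing account, database, and container (all names here are hypothetical) and a policy.json file containing the desired indexing policy with a compositeIndexes section:

```shell
# Sketch: apply an updated indexing policy (including composite indexes) to an
# existing container. Requires a logged-in Azure CLI session and a policy.json file.
az cosmosdb sql container update \
  --account-name my-cosmos-account \
  --database-name NutritionDatabase \
  --name FoodCollection \
  --resource-group cosmos-rg \
  --idx @policy.json
```

The index transformation then proceeds in the background using leftover RUs, as described above.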

Explore whether composite indexes will improve RU utilization for your existing workloads on Azure Cosmos DB.
Quelle: Azure

Azure Files premium tier gets zone redundant storage

Azure Files premium tier is now zone redundant!

We're excited to announce the general availability of zone redundant storage (ZRS) for Azure Files premium tier. Azure Files premium tier with ZRS replication enables highly performant, highly available file services built on solid-state drives (SSDs).

Azure Files ZRS premium tier should be considered for managed file services where performance and regional availability are critical for the business. ZRS provides high availability by synchronously writing three replicas of your data across three different Azure Availability Zones, thereby protecting your data from cluster, datacenter, or entire-zone outages. Zonal redundancy enables you to read and write data even if one of the availability zones is unavailable.

With the release of ZRS for Azure Files premium tier, the premium tier now offers two durability options to meet your storage needs: zone redundant storage (ZRS) for intra-region high availability, and locally redundant storage (LRS) for lower-cost, single-region durable storage.

Getting started

You can create a ZRS Azure premium files account through the Azure Portal, Azure CLI, or Azure PowerShell.

Azure Files premium tier requires FileStorage as the account kind. To create a ZRS account in the Azure Portal, set the account kind to FileStorage and the replication option to zone-redundant storage (ZRS).
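Equivalently, a minimal sketch using the Azure CLI (account and resource group names are hypothetical; the region matches the current availability noted below):

```shell
# Sketch: create a premium files storage account with ZRS replication.
# Requires a logged-in Azure CLI session and an existing resource group.
az storage account create \
  --name premzrsfiles \
  --resource-group storage-rg \
  --location westeurope \
  --kind FileStorage \
  --sku Premium_ZRS
```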

Currently, the ZRS option for Azure Files premium tier is available in West Europe, and we will be gradually expanding the regional coverage. Stay up to date on premium tier ZRS region availability through the Azure documentation.

Migrating from an LRS premium files account to a ZRS premium files account requires manually copying or moving data from the existing LRS account to a new ZRS account. Live account migration on request is not yet supported. Please check the migration documentation for the latest information.

Refer to the pricing page for the latest pricing information.

To learn more about premium tier, visit Azure Files premium tier documentation. Give it a try and share your feedback on the Azure Storage forum or email us at azurefiles@microsoft.com.

Happy sharing!
Quelle: Azure

HDInsight support in Azure CLI now out of preview

We are pleased to share that support for HDInsight in Azure CLI is now generally available. The addition of the az hdinsight command group allows you to easily manage your HDInsight clusters using simple commands while taking advantage of all that Azure CLI has to offer, such as cross-platform support and tab completion.

Key Features

Cluster CRUD: Create, delete, list, resize, show properties, and update tags for your HDInsight clusters.
Script actions: Execute script actions, list and delete persistent script actions, promote ad-hoc script executions to persistent script actions, and show the execution history of script action runs.
Manage Azure Monitor integration: Enable, disable, and show the status of Azure Monitor integration on HDInsight clusters.
Applications: Create, delete, list, and show properties for applications on your HDInsight clusters.
Core usage: View available core counts by region before deploying large clusters.

Create an HDInsight cluster using a single, simple Azure CLI command
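As a sketch of that single-command experience (all names and the password placeholder are hypothetical; a storage account and resource group are assumed to exist):

```shell
# Sketch: create a Spark cluster with one Azure CLI command.
# Requires a logged-in Azure CLI session; replace placeholders before running.
az hdinsight create \
  --name my-spark-cluster \
  --resource-group hdi-rg \
  --type spark \
  --http-user admin \
  --http-password '<cluster-login-password>' \
  --storage-account mystorageaccount
```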

Azure CLI benefits

Cross platform: Use Azure CLI on Windows, macOS, Linux, or the Azure Cloud Shell in a browser to manage your HDInsight clusters with the same commands and syntax across platforms.
Tab completion and interactive mode: Autocomplete command and parameter names as well as subscription-specific details like resource group names, cluster names, and storage account names. Don't remember your 88-character storage account key off the top of your head? Azure CLI can autocomplete that as well!
Customize output: Make use of Azure CLI's globally available arguments to show verbose or debug output, filter output using the JMESPath query language, and change the output format between json, tab-separated values, or ASCII tables, and more.

Getting started

You can get up and running managing your HDInsight clusters with Azure CLI in three easy steps.

Install Azure CLI for Windows, macOS, or Linux. Alternatively, you can use Azure Cloud Shell to use Azure CLI in a browser.
Log in using the az login command.
Take a look at our reference documentation ("az hdinsight") or run az hdinsight -h to see a full list of supported HDInsight commands and descriptions, and start using Azure CLI to manage your HDInsight clusters.
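In practice, the steps above might look like this (the resource group name is hypothetical):

```shell
# Sketch: log in, explore the command group, and list clusters in a resource group.
az login                                # authenticate interactively
az hdinsight -h                         # show supported HDInsight commands
az hdinsight list -g hdi-rg -o table    # list clusters in a resource group
```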

About HDInsight

Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks, such as Apache Hadoop, Spark, Kafka, and more. The service is available in 28 public regions and Azure Government Clouds in the US, Germany, and China. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.
Quelle: Azure

SAP on Azure Architecture – Designing for security

This blog post was contributed to by Chin Lai The, Technical Specialist, SAP on Azure.

This is the first in a four-part blog series on designing a great SAP on Azure Architecture, and will focus on designing for security.

Great SAP on Azure Architectures are built on the pillars of security, performance and scalability, availability and recoverability, and efficiency and operations.

Microsoft investments in Azure Security

Microsoft invests $1 billion annually in security research and development and employs 3,500 security professionals across the company. Advanced AI is leveraged to analyze 6.5 trillion global signals from the Microsoft cloud platforms and to detect and respond to threats. Enterprise-grade security and privacy are built into the Azure platform, including enduring, rigorous validation by real-world tests such as the Red Team exercises. These tests enable Microsoft to exercise breach detection and response and to accurately measure readiness for, and the impact of, real-world attacks; they are just one of the many operational processes that provide best-in-class security for Azure.

Azure is the platform of trust, with 90 compliance certifications spanning nations, regions, and specific industries such as health, finance, government, and manufacturing. Moreover, Azure Security and Compliance Blueprints can be used to easily create, deploy, and update your compliant environments.

Security – a shared responsibility

It’s important to understand the shared responsibility model between you as a customer and Microsoft. The division of responsibility is dependent on the cloud model used – SaaS, PaaS, or IaaS. As a customer, you are always responsible for your data, endpoints, account/access management, irrespective of the chosen cloud deployment.

SAP on Azure is delivered using the IaaS cloud model, which means security protections are built into the service by Microsoft at the physical datacenter, physical network, and physical hosts. However, for all areas beyond the Azure hypervisor i.e. the operating systems and applications, customers need to ensure their enterprise security controls are implemented.

Key security considerations for deploying SAP on Azure

Role-based access control and resource locking

Role-based access control (RBAC) is an authorization system that provides fine-grained access management of Azure resources. RBAC can be used to limit access and control permissions on Azure resources for the various teams within your IT operations.

For example, SAP basis team members can be granted permission to deploy virtual machines (VMs) into Azure virtual networks (VNets) while being restricted from creating or configuring VNets. Conversely, members of the networking team can create and configure VNets but are prohibited from deploying or configuring VMs in VNets where SAP applications are running.

We recommend validating and testing the RBAC design early during the lifecycle of your SAP on Azure project.
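A minimal sketch of such an RBAC assignment with the Azure CLI (the principal, subscription ID, and resource group are hypothetical):

```shell
# Sketch: let the SAP basis team manage VMs in the SAP resource group,
# without granting rights to create or configure VNets.
az role assignment create \
  --assignee sap-basis-team@contoso.com \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/sap-prod-rg"
```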

Another important consideration is Azure resource locking, which can be used to prevent accidental deletion or modification of Azure resources such as VMs and disks. We recommend creating the required Azure resources at the start of your SAP project. When all additions, moves, and changes are finished and the SAP on Azure deployment is operational, all resources can be locked. Thereafter, only a super administrator can unlock a resource and permit it (such as a VM) to be modified.
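A sketch of locking a single VM against deletion (resource and group names are hypothetical):

```shell
# Sketch: prevent accidental deletion of the SAP database VM.
# CanNotDelete still allows reads and modifications; ReadOnly would block both.
az lock create \
  --name lock-sap-db-vm \
  --lock-type CanNotDelete \
  --resource-group sap-prod-rg \
  --resource-name sap-db-vm \
  --resource-type Microsoft.Compute/virtualMachines
```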

Secure authentication

Single-sign-on (SSO) provides the foundation for integrating SAP and Microsoft products, and for years Kerberos tokens from Microsoft Active Directory have been enabling this capability for both SAP GUI and web-browser based applications when combined with third party security products.

When a user logs onto their workstation and successfully authenticates against Microsoft Active Directory, they are issued a Kerberos token. The Kerberos token can then be used by a third-party security product to handle authentication to the SAP application without the user having to re-authenticate. Additionally, data in transit from the user's front end to the SAP application can also be encrypted by integrating the security product with secure network communications (SNC) for DIAG (SAP GUI) and RFC, and with SPNEGO for HTTPS.

Azure Active Directory (Azure AD) with SAML 2.0 can also be used to provide SSO to a range of SAP applications and platforms such as SAP NetWeaver, SAP HANA and the SAP Cloud Platform.

This video demonstrates the end-to-end enablement of SSO between Azure AD and SAP NetWeaver.

Protecting your application and data from network vulnerabilities

Network security groups (NSG) contain a list of security rules that allow or deny network traffic to resources within your Azure VNet. NSGs can be associated to subnets or individual network interfaces attached to VMs. Security rules can be configured based on source/destination, port, and protocol.
NSGs influence network traffic for the SAP system. In the diagram below, three subnets are implemented, each with an NSG assigned: FE (front end), App, and DB.

A public internet user can reach the SAP Web-Dispatcher over port 443
The SAP Web-Dispatcher can reach the SAP Application server over port 443
The App Subnet accepts traffic on port 443 from 10.0.0.0/24
The SAP Application server sends traffic on port 30015 to the SAP DB server
The DB subnet accepts traffic on port 30015 from 10.0.1.0/24.
Public Internet Access is blocked on both App Subnet and DB Subnet.
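One of the rules above, sketched with the Azure CLI (NSG and resource group names are hypothetical; traffic not explicitly allowed is denied by the NSG's default rules):

```shell
# Sketch: allow HTTPS from the FE subnet (10.0.0.0/24) into the App subnet,
# matching the "App Subnet accepts traffic on port 443 from 10.0.0.0/24" rule.
az network nsg rule create \
  --resource-group sap-prod-rg \
  --nsg-name app-subnet-nsg \
  --name allow-https-from-fe \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.0.0.0/24 \
  --destination-port-ranges 443
```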

SAP deployments using the Azure Virtual Datacenter architecture are implemented using a hub-and-spoke model. The hub VNet is the central point for connectivity, where an Azure Firewall or another type of network virtual appliance (NVA) is implemented to inspect and control the routing of traffic to the spoke VNets where your SAP applications reside.

Within your SAP on Azure project, we recommend validating that inspection devices and NSG security rules are working as intended; this will ensure that your SAP resources are shielded appropriately against network vulnerabilities.

Maintaining data integrity through encryption methods

Azure Storage service encryption is enabled by default on your Azure Storage account and cannot be disabled. Customer data at rest on Azure Storage is therefore secured by default, with data encrypted and decrypted transparently using 256-bit AES. The encrypt/decrypt process has no impact on Azure Storage performance and incurs no cost. You have the option of letting Microsoft manage the encryption keys, or you can manage your own keys with Azure Key Vault. Azure Key Vault can also be used to manage the SSL/TLS certificates that secure interfaces and internal communications within the SAP system.

Azure also offers virtual machine disk encryption using BitLocker for Windows and DM-Crypt for Linux to provide volume encryption for virtual machine operating system and data disks. Disk encryption is not enabled by default.

Our recommended approach to encrypting your SAP data at rest is as follows:

Azure Disk Encryption for SAP Application servers – operating system disk and data disks.
Azure Disk Encryption for SAP database servers – operating system disks and those data disks not used by the DBMS.
SAP Database servers – leverage Transparent Data Encryption offered by the DBMS provider to secure your data and log files and to ensure the backups are also encrypted.
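For the first recommendation, a sketch of enabling Azure Disk Encryption on an SAP application server (VM, resource group, and Key Vault names are hypothetical; the Key Vault must already exist and be enabled for disk encryption):

```shell
# Sketch: encrypt both the OS and data disks of an SAP application server VM,
# with encryption keys managed in Azure Key Vault.
az vm encryption enable \
  --resource-group sap-prod-rg \
  --name sap-app-vm \
  --disk-encryption-keyvault sap-keyvault \
  --volume-type ALL
```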

Hardening the operating system

Security is a shared responsibility between Microsoft and you as a customer: customer-specific security controls need to be applied to the operating system, the database, and the SAP application layer. For example, you need to ensure the operating system is hardened to eradicate vulnerabilities that could lead to attacks on the SAP database.

Windows, SUSE Linux, Red Hat Linux, and others are supported for running SAP applications on Azure, and various images of these operating systems are available in the Azure Marketplace. You can further harden these images to comply with the security policies of your enterprise and with the guidance of the Center for Internet Security (CIS) Microsoft Azure Foundations Benchmark.

Enterprises generally have operational processes in place for updating and patching of their IT software including the operating system. Once an operating system vulnerability has been exposed, it is published in security advisories and usually remediated quickly. The operating system vendor regularly provides security updates and patches. You can use the Update Management solution in Azure Automation to manage operating system updates for your Windows and Linux VMs in Azure. A best practice approach is a selective installation of security updates for the operating system on a regular cadence and installation of other updates such as new features during maintenance windows.

Learn more

Within this blog we have touched upon a selection of security topics as they relate to deploying SAP on Azure. Incorporating solid security practices will lead to a secure SAP deployment on Azure.

Azure Security Center is the place to learn about the best practices for securing and monitoring your Azure deployments. Also, please read the Azure Security Center technical documentation, along with the Azure Sentinel documentation, to understand how to detect vulnerabilities, generate alerts when exploits occur, and get guidance on remediation.

In the second blog in our series, we will cover designing for performance and scalability.

Quelle: Azure

Announcing Azure Private Link

Customers love the scale of Azure, which gives them the ability to expand across the globe while staying highly available. With the rapidly growing adoption of Azure, the need for customers to access their data and services privately and securely from their own networks has grown exponentially. To help with this, we're announcing the preview of Azure Private Link.

Azure Private Link is a secure and scalable way for Azure customers to consume Azure Services like Azure Storage or SQL, Microsoft Partner Services or their own services privately from their Azure Virtual Network (VNet). The technology is based on a provider and consumer model where the provider and the consumer are both hosted in Azure. A connection is established using a consent-based call flow and once established, all data that flows between the service provider and service consumer is isolated from the internet and stays on the Microsoft network. There is no need for gateways, network address translation (NAT) devices, or public IP addresses to communicate with the service.

Azure Private Link brings Azure services inside the customer’s private VNet. The service resources can be accessed using the private IP address just like any other resource in the VNet. This significantly simplifies the network configuration by keeping access rules private.

Today we would like to highlight a few unique key use cases that are made possible by the Azure Private Link announcement:

Private connectivity to Azure PaaS services

Multi-tenant shared services such as Azure Storage and Azure SQL Database are outside your VNet and have been reachable only via the public interface. Today, you can secure this connection using VNet service endpoints, which keep the traffic within the Microsoft backbone network and allow the PaaS resource to be locked down to just your VNet. However, the PaaS endpoint is still served over a public IP address and therefore is not reachable from on-premises through Azure ExpressRoute private peering or VPN gateway. With today's announcement of Azure Private Link, you can simply create a private endpoint in your VNet and map it to your PaaS resource (your Azure Storage account blob or SQL Database server). These resources are then accessible over a private IP address in your VNet, enabling connectivity from on-premises through Azure ExpressRoute private peering and/or VPN gateway while keeping the network configuration simple by not opening it up to public IP addresses.
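A minimal sketch of mapping a private endpoint to a storage account's blob service (all names are hypothetical; an existing VNet/subnet and storage account are assumed, and CLI parameter names may differ slightly while the service is in preview):

```shell
# Sketch: create a private endpoint in a spoke VNet and connect it to the
# blob sub-resource of an existing storage account.
storage_id=$(az storage account show -g storage-rg -n mystorageacct --query id -o tsv)

az network private-endpoint create \
  --resource-group net-rg \
  --name storage-pe \
  --vnet-name spoke-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id "$storage_id" \
  --group-ids blob \
  --connection-name storage-pe-conn
```

The storage account then resolves to a private IP address inside the VNet, reachable from on-premises over ExpressRoute private peering or VPN gateway.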

Private connectivity to your own service

This new offering is not limited to Azure PaaS services; you can leverage it for your own service as well. Today, as a service provider in Azure, you have to make your service accessible over a public interface (IP address) for it to be reachable by other consumers running in Azure. You could use VNet peering and connect to the consumer's VNet to make it private, but that does not scale and will soon run into IP address conflicts. With today's announcement, you can run your service completely private in your own VNet behind an Azure Standard Load Balancer, enable it for Azure Private Link, and allow it to be accessed by consumers running in different VNets, subscriptions, or Azure Active Directory (AD) tenants, all using simple clicks and an approval call flow. As a service consumer, all you have to do is create a private endpoint in your own VNet and consume the Azure Private Link service completely privately, without opening your access control lists (ACLs) to any public IP address space.
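On the provider side, a sketch of exposing your own service as a Private Link service (all names are hypothetical; a Standard Load Balancer with the named frontend IP configuration is assumed to exist):

```shell
# Sketch: publish a service fronted by an Azure Standard Load Balancer as a
# Private Link service, so consumers can attach private endpoints to it.
az network private-link-service create \
  --resource-group svc-rg \
  --name my-private-link-service \
  --vnet-name svc-vnet \
  --subnet pls-subnet \
  --lb-name my-standard-lb \
  --lb-frontend-ip-configs my-frontend-ip
```

Consumers in other VNets, subscriptions, or AD tenants can then request a private endpoint connection to this service, which the provider approves via the consent call flow.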

Private connectivity to SaaS service

Many Microsoft partners already offer software-as-a-service (SaaS) solutions to Azure customers today. These solutions are offered over public endpoints, and to consume them, Azure customers must open their private networks to the public internet. Customers want to consume these SaaS solutions within their private networks as if they were deployed right inside them; this has been a common request. With Azure Private Link, we're extending the private connectivity experience to Microsoft partners. This is a very powerful mechanism for Microsoft partners to reach Azure customers, and we're confident that many future Azure Marketplace offerings will be made available through Azure Private Link.

Key highlights of Azure Private Link

Private on-premises access: Since PaaS resources are mapped to private IP addresses in the customer’s VNet, they can be accessed via Azure ExpressRoute private peering. This effectively means that the data will traverse a fully private path from on-premises to Azure. The configuration in the corporate firewalls and route tables can be simplified to allow access only to the private IP addresses.
Data exfiltration protection: Azure Private Link is unique with respect to mapping a specific PaaS resource to private IP address as opposed to mapping an entire service as other cloud providers do. This essentially means that any malicious intent to exfiltrate the data to a different account using the same private endpoint will fail, thus providing built-in data exfiltration protection.
Simple to set up: Azure Private Link is simple to set up, with minimal networking configuration needed. Connectivity works on an approval call flow, and once a PaaS resource is mapped to a private endpoint, the connectivity works out of the box without any additional configuration of route tables or Azure Network Security Groups (NSGs).

Overlapping address space: Traditionally, customers use VNet peering as the mechanism to connect multiple VNets. VNet peering requires the VNets to have non-overlapping address space, but in enterprise use cases it's common to find networks with overlapping IP address space. Azure Private Link provides an alternative way to privately connect applications in different VNets that have overlapping IP address space.

Roadmap

Today, we're announcing the Azure Private Link preview in a limited set of regions. We will be expanding to more regions in the near future. In addition, we will be adding more Azure PaaS services to Azure Private Link, including Azure Cosmos DB, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, Azure App Service, and Azure Key Vault, as well as partner services, in the coming months.

We encourage you to try out the Azure Private Link preview and look forward to hearing and incorporating your feedback. Please refer to the documentation for additional details.
Quelle: Azure