Disaster recovery for SAP HANA Systems on Azure

This blog will cover the design, technology, and recommendations for setting up disaster recovery (DR) for an enterprise customer, to achieve best in class recovery point objective (RPO) and recovery time objective (RTO) with an SAP S/4HANA landscape. This post was co-authored by Sivakumar Varadananjayan, Global Head of Cognizant’s SAP Cloud Practice.

Microsoft Azure provides a trusted path to enterprise-ready innovation with SAP solutions in the cloud. Mission critical applications such as SAP run reliably on Azure, which is an enterprise proven platform offering hyperscale, agility, and cost savings for running a customer’s SAP landscape.

System availability and disaster recovery are crucial for customers who run mission-critical SAP applications on Azure.

RTO and RPO are two key metrics that organizations consider in order to develop an appropriate disaster recovery plan that can maintain business continuity in the event of an unexpected disruption. Recovery point objective (RPO) refers to the amount of data at risk, measured in time, whereas recovery time objective (RTO) refers to the maximum tolerable time that a system can be down after a disaster occurs.

The diagram below shows RPO and RTO on a timeline in a business as usual (BAU) scenario.
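In concrete terms, RPO is measured backward from the moment of disaster to the last recoverable data point, while RTO is measured forward from the disaster to the moment service is restored. The relationship can be sketched in a few lines of Python (the incident timestamps below are purely hypothetical):

```python
from datetime import datetime

def rpo_rto(last_recoverable_write: datetime,
            disaster_time: datetime,
            service_restored: datetime):
    """Return (RPO, RTO) as timedeltas for a single incident.

    RPO: data written after the last recoverable point is lost.
    RTO: elapsed time until the service is available again.
    """
    rpo = disaster_time - last_recoverable_write
    rto = service_restored - disaster_time
    return rpo, rto

# Hypothetical incident: last log shipped 02:57, disaster at 03:00,
# service restored at 06:30.
rpo, rto = rpo_rto(datetime(2019, 11, 1, 2, 57),
                   datetime(2019, 11, 1, 3, 0),
                   datetime(2019, 11, 1, 6, 30))
print(rpo)  # 0:03:00 -> 3 minutes of data at risk
print(rto)  # 3:30:00 -> 3.5 hours of downtime
```

Lowering RPO means shipping data to the DR site more frequently; lowering RTO means shortening every step between failure and restored service.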

Orica is the world's largest provider of commercial explosives and innovative blasting systems to the mining, quarrying, oil and gas, and construction markets. They are also a leading supplier of sodium cyanide for gold extraction and a specialist provider of ground support services in mining and tunneling.

As part of Orica’s digital transformation journey, Cognizant has been chosen as a trusted technology advisor and managed cloud platform provider to build highly available, scalable, disaster proof IT platforms for SAP S/4HANA and other SAP applications in Microsoft Azure.

This blog describes how Cognizant took up the challenge of building a disaster recovery solution for Orica as part of a digital transformation program with SAP S/4HANA as the digital core. It covers the SAP on Azure architectural design decisions made by Cognizant and Orica over the last two years, which reduced RTO to 4 hours by deploying the latest technology features available on Azure, coupled with automation. Alongside the reduction in RTO, RPO was reduced to less than 5 minutes with the use of database-specific technologies such as SAP HANA system replication and Azure Site Recovery.

Design principles for disaster recovery systems

Selection of DR region based on SAP-certified VMs for SAP HANA – It is important to verify the availability of SAP-certified VM types in the DR region.
RPO and RTO values – Businesses need to lay out clear expectations for RPO and RTO values, which greatly affect the disaster recovery architecture and the tools and automation required to implement it.
Cost of implementing DR, maintenance, and DR drills

Criticality of systems – It is possible to establish a trade-off between the cost of DR implementation and business requirements. While the most critical systems can utilize state-of-the-art DR architecture, medium and less critical systems may afford higher RPO/RTO values.
On-demand resizing of DR instances – It is preferable to use small VMs for DR instances and upsize them during an active DR scenario. It is also possible to reserve the required VM capacity in the DR region so that there is no “waiting” time to upscale the VMs. Microsoft offers Reserved Instances, with which one can reserve virtual machines in advance and save up to 80 percent. Depending on the required RTO value, a trade-off needs to be worked out between running smaller VMs and using Azure Reserved Instances.
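This trade-off can be framed as a simple monthly cost comparison between keeping a small VM running (and resizing on failover) versus reserving the full-size VM up front. A sketch, with all hourly rates and the discount figure hypothetical:

```python
HOURS_PER_MONTH = 730  # common billing convention for a full month

def dr_standby_cost(small_rate: float, large_rate: float,
                    ri_discount: float) -> dict:
    """Compare two DR standby strategies per month (rates hypothetical).

    - 'resize_on_failover': run a small VM continuously and upsize only
      during a disaster (risk: capacity may not be available on demand).
    - 'reserved_full_size': reserve the full-size VM at a discounted
      rate (capacity guaranteed, no wait to upscale).
    """
    return {
        "resize_on_failover": small_rate * HOURS_PER_MONTH,
        "reserved_full_size": large_rate * (1 - ri_discount) * HOURS_PER_MONTH,
    }

costs = dr_standby_cost(small_rate=0.10, large_rate=1.00, ri_discount=0.60)
# If the discounted reservation lands close to the small-VM cost, the
# guaranteed capacity usually justifies the difference for tight RTOs.
```

The tighter the RTO requirement, the more weight the guaranteed-capacity option carries, since resizing depends on capacity being available in the DR region at failover time.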
Additional cloud infrastructure costs and effort are involved in setting up an environment for non-disruptive DR tests. A non-disruptive DR test executes the DR procedure without failing over the actual productive systems to the DR systems, thereby avoiding any business downtime. This involves additional costs for setting up temporary infrastructure in a completely isolated virtual network (VNet) during the DR tests.
Certain components in the SAP systems architecture, such as a clustered network file system (NFS), are not recommended to be replicated using Azure Site Recovery; hence additional tools with license costs, such as SUSE geo cluster or SIOS DataKeeper, are needed for DR at the NFS layer.

Selection of specific technology and tools – While Azure offers Azure Site Recovery (ASR), which replicates virtual machines across regions, this technology is used for the non-database components or layers of the system, while database-specific methods such as SAP HANA system replication (HSR) are used at the database layer to ensure database consistency.

Disaster recovery architecture for SAP systems running on SAP HANA Database

At a very high level, the diagram below depicts the architecture of SAP systems based on SAP HANA and shows which systems remain available in case of local or regional failures.

The diagram below gives next level details of SAP HANA systems components and corresponding technology used for achieving disaster recovery.

Database layer

At the database layer, a database-specific replication method, SAP HANA system replication (HSR), is used. Using a database-specific replication method allows better control over RPO values by configuring various replication-specific parameters, and ensures database consistency at the DR site. Alternative methods of achieving disaster recovery at the database (DB) layer, such as backup and restore/recovery or storage-based replication, are available; however, they result in higher RTO values.

RPO Values for SAP HANA database depend on factors including replication methodology (Synchronous in case of high availability or Asynchronous in case of DR replication), backup frequency, backup data retention policies, savepoint, and replication configuration parameters.

SAP Solution Manager can be used to monitor the replication status, such that an e-mail alert is triggered if the replication is impacted.
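A monitoring hook of this kind boils down to polling the replication status and alerting on anything that is not healthy. A minimal sketch (the status strings mirror those reported by SAP HANA's replication monitoring views, but the polling source and alerting logic here are our own illustration):

```python
HEALTHY = {"ACTIVE"}  # statuses that require no action

def replication_alerts(status_rows):
    """Given (host, service, status) tuples polled from the database,
    return the rows that should trigger an e-mail alert."""
    return [row for row in status_rows if row[2].upper() not in HEALTHY]

# Hypothetical poll result: one service has fallen out of sync.
rows = [
    ("hana-prod-1", "indexserver", "ACTIVE"),
    ("hana-prod-1", "xsengine", "ERROR"),
]
for host, service, status in replication_alerts(rows):
    print(f"ALERT: replication {status} for {service} on {host}")
```

In practice the rows would come from a scheduled query against the HANA system, and the alert action would be the e-mail notification configured in SAP Solution Manager.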

Even though multi-target replication is available as of SAP HANA 2.0 SPS 03, revision 33, at the time of writing this article that scenario has not been tested in conjunction with a high availability cluster. With a successful implementation of multi-target replication, the DR maintenance process will become simpler and will not need manual intervention after fail-over scenarios at the primary site.

Application layer – (A)SCS, APP, iSCSI

Azure Site Recovery is used for replication of the non-database components of the SAP systems architecture, including (A)SCS, application servers, and Linux cluster fencing agents such as iSCSI (with the exception of the NFS layer, which is discussed below). Azure Site Recovery replicates workloads running on virtual machines (VMs) from a primary site to a secondary location at the storage layer. It does not require the VMs to be in a running state; the VMs can be started during actual disaster scenarios or DR drills.

There are two options to set up a Pacemaker cluster in Azure. You can either use a fencing agent, which takes care of restarting a failed node via the Azure APIs, or you can use a STONITH block device (SBD). An SBD device requires at least one additional virtual machine that acts as an iSCSI target server and provides the SBD device. These iSCSI target servers can, however, be shared with other Pacemaker clusters. The advantage of using an SBD device is a faster failover time.

The diagram below describes disaster recovery at the application layer; the (A)SCS, application servers, and iSCSI servers use the same architecture to replicate data to the DR region using Azure Site Recovery.

NFS layer – The NFS layer at the primary site uses a cluster with a distributed replicated block device (DRBD) for high availability replication. We evaluated multiple technologies for implementing DR at the NFS layer. Since DRBD and Site Recovery configurations are not compatible, solutions such as SUSE geo cluster, SIOS DataKeeper, or simple VM snapshot backup and restore are available for achieving NFS layer DR. Where DRBD is enabled, the cost-effective solution for NFS layer DR is simple backup/restore using VM snapshot backups.

Steps for invoking DR or a DR drill

Microsoft Azure Site Recovery technology enables faster replication of data to the DR region. In a DR implementation where Site Recovery is not used or configured, it would take more than 24 hours to recover about five systems, resulting in an RTO of 24 hours or more. However, when Site Recovery is used at the application layer, with a database-specific replication method at the DB layer, it is possible to reduce the RTO to well below four hours for the same number of systems. The diagram below describes a timeline view of the steps to activate disaster recovery with a four-hour RTO.

Steps for Invoking DR or a DR drill:

DNS changes for VMs to use new IP addresses
Bring up iSCSI – single VM from ASR-replicated data
Recover databases and resize the VMs to required capacity
Manually provision NFS – single VM using snapshot backups
Build application-layer VMs from ASR-replicated data
Perform cluster changes
Bring up applications
Validate applications
Release systems
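Because each step depends on the ones before it, the sequence is best treated as an ordered runbook that halts on the first failure. A sketch of that orchestration (the step runner is a hypothetical automation hook):

```python
# Ordered DR runbook mirroring the steps above.
RUNBOOK = [
    "Update DNS records to the DR IP addresses",
    "Start the iSCSI VM from ASR-replicated data",
    "Recover databases and resize VMs to required capacity",
    "Provision the NFS VM from snapshot backups",
    "Build application-layer VMs from ASR-replicated data",
    "Apply cluster changes",
    "Start applications",
    "Validate applications",
    "Release systems to business users",
]

def execute_runbook(runbook, run_step):
    """Run steps strictly in order, stopping at the first failure so the
    DR coordinator can intervene before continuing."""
    for i, step in enumerate(runbook, start=1):
        if not run_step(step):
            return f"FAILED at step {i}: {step}"
    return "DR failover complete"

# run_step would normally call the real automation; stubbed here.
print(execute_runbook(RUNBOOK, lambda step: True))
```

Automating the runbook this way is what keeps the end-to-end time within the four-hour RTO target: no step waits on a human to remember what comes next.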

Recommendations on non-disruptive DR drills

Some businesses cannot afford downtime during DR drills. Non-disruptive DR drills are suggested in cases where it is not possible to arrange downtime to perform a DR test. A non-disruptive DR procedure can be achieved by creating an additional DR VNet, isolating it from the network, and carrying out the DR drill with the steps below.

As a prerequisite, build SAP HANA database servers in the isolated VNet and configure SAP HANA system replication.

Disconnect the ExpressRoute circuit to the DR region; as ExpressRoute gets disconnected, it simulates the abrupt unavailability of systems in the primary region
As a prerequisite, a backup domain controller is required to be active and in replication mode with the primary domain controller until the time of ExpressRoute disconnection
A DNS server needs to be configured in the isolated DR VNet (the additional DR VNet created for the non-disruptive DR drill) and kept in standby mode until the time of ExpressRoute disconnection
Establish a point-to-site VPN tunnel for administrators and key users for the DR test
Manually update the NSGs so that the DR VNet is isolated from the entire network
Bring up applications using the DR enablement procedure in the DR region
Once the test is concluded, reconfigure the NSGs, ExpressRoute, and DR replication

Involvement of relevant infrastructure and SAP subject matter experts is highly recommended during DR tests.

Note that the non-disruptive DR procedure needs to be executed with extreme caution, with prior validation and testing on non-production systems. Database VM capacity in the DR region should be decided as a trade-off between reserving full capacity and Microsoft's timeline to allocate the capacity required to resize the database VMs.

Next steps

To learn more about architecting an optimal Azure infrastructure for SAP, see the following resources:

SAP on Azure – Designing for security

SAP on Azure – Designing for performance and scalability

SAP on Azure – Designing for availability and recoverability

SAP on Azure- Designing for Efficiency and Operations

Source: Azure

Azure Cost Management updates – October 2019

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in!

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Cost Management at Microsoft Ignite 2019
Cost Management update for partners
Major refresh for the Power BI connector
BP implements cloud governance and effective cost management
What's new in Cost Management Labs
Scope selection and navigation optimized for active billing accounts
Improved right-sizing recommendations for virtual machines
New ways to save money with Azure!
New videos
Documentation updates

Let's dig into the details.

 

Cost Management at Microsoft Ignite 2019

Microsoft Ignite 2019 is right around the corner! Come join us in these Azure Cost Management sessions and don't forget to stop by the Azure Cost Management booth on the expo floor to say hi and get some cool swag.

Analyze, manage, and optimize your cloud cost with Azure Cost Management (Session BRK3190, November 5, 3:30-4:15 PM)
Learn how Azure Cost Management can help you gain visibility, drive accountability, and optimize your cloud costs. Special guest, Mars Inc, will show how they use Azure Cost Management to get the most value out of Azure.
Manage and optimize your cloud cost with Azure Cost Management (Session THR2184, November 7, 9:00-9:20 AM)
Can't make the full hour? Join us for a quick overview of Azure Cost Management in this short, theater session.

And if you're still hungry for more, here are a few other sessions you might be interested in:

Get the most out of Microsoft Azure with Azure Advisor (Session THR2181, 20m)
Keeping costs down in Azure (Session AFUN70, 45m)
Make the most of Azure to reduce your cloud spend (Session BRK2140, 45m)
Optimizing cost for Azure solutions (Session THR2364, 20m)
Optimize Azure spend while maximizing cloud potential (Session THR2288, 20m)
Lessons learned in gaining visibility and lowering cost in our Azure environments (Session THR2220, 20m)

 

Cost Management update for partners

November will bring a lot of exciting announcements across Azure and Microsoft as a whole. Perhaps the one we’re most eager to see is the one we mentioned in our July update: the launch of Microsoft Customer Agreement support for partners, where Azure Cost Management will become available to Microsoft Cloud Solution Provider (CSP) partners and customers. CSP partners who have onboarded their customers to Microsoft Customer Agreement will be able to take advantage of all the native cost management tools Microsoft Enterprise Agreement and pay-as-you-go customers have today, but optimized for CSP.

Partners will be able to:

Understand and analyze costs directly in the portal and break them down by customer, subscription, meter, and more
Set up budgets to be notified or trigger automated actions when costs exceed predefined thresholds
Review invoiced costs and partner-earned credits associated with customers, subscriptions, and services
Enable Cost Management for customers using pay-as-you-go rates

And once Cost Management has been enabled for CSP customers, they’ll also be able to take advantage of these native tools when managing their subscriptions and resource groups.

All of this and more will be available to CSP partners and customers within the Azure portal and the underlying Resource Manager APIs to enable rich automation and integration to meet your specific needs. And this is just the first of a series of updates to enable Azure Cost Management for partners and their customers. We hope you find these tools valuable as an addition to all the new functionality Microsoft Customer Agreement offers and look forward to delivering even more cost management capabilities next year, including support for existing CSP customers. Stay tuned for the full Microsoft Customer Agreement announcement coming in November!

 

Major refresh for the Power BI connector

Azure Cost Management offers several ways to report on your cost and usage data. You can start with cost analysis in the portal, then download data for offline analysis. If you need more automation, you can use Cost Management APIs or schedule an export to push data to a storage account on a daily basis. But maybe you just need detailed reporting alongside other business reports. This is where the Azure Cost Management connector for Power BI comes in. This month you'll see a few major updates to the Power BI connector.

First and foremost, this is a new connector that replaces both the Azure Consumption Insights connector for Enterprise Agreement accounts and the Azure Cost Management (Beta) connector for Microsoft Customer Agreement accounts. The new connector supports both by accepting either an Enterprise Agreement billing account ID (enrollment number) or Microsoft Customer Agreement billing profile ID.

The next change Enterprise Agreement admins will notice is that you no longer need an API key. Instead, the new connector uses Azure Active Directory. The connector still requires access to the entire billing account, but now a read-only user can set it up without requiring a full admin to create an API key in the Enterprise Agreement portal.

Lastly, you'll also notice a few new tables for reservation details and recommendations. Reservation and Marketplace purchases have been added to the Usage details table as well as a new Usage details amortized table, which includes the same amortized data available in cost analysis. For more details, refer to the Reservation and Marketplace purchases update we announced in June 2019. Those same great changes are now available in Power BI.

Please check out the new connector and let us know what you'd like to see next!

 

BP implements cloud governance and effective cost management

BP has moved a significant portion of its IT resources to the Microsoft Azure cloud platform over the past five years as part of a company-wide digital transformation. To manage and deliver all its Azure resources in the most efficient possible way, BP uses Azure Policy for governance to control access to Azure services. At the same time, the company uses Azure Cost Management to track usage of Azure services. BP has been able to reduce its cloud spend by 40 percent with the insights it has gained.

"We’ve used Azure Cost Management to help cut our cloud costs by 40 percent. Even though our total usage has close to doubled, our total spending is still well below what it used to be."
– John Maio, Microsoft Platform Chief Architect

Learn more about BP's customer story.

 

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Get started quicker with the cost analysis Home view
Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quicker access to those views so you get to what you need faster.
New: Scope selection and navigation optimized for active billing accounts – Now available in the portal
Cost Management now prioritizes active billing accounts when selecting a default scope and displaying available scopes in the scope picker.
New: Performance optimizations in cost analysis and dashboard tiles
Whether you're using tiles pinned to the dashboard or the full experience, you'll find cost analysis loads faster than ever.

Of course, that's not all. Every change in Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

 

Scope selection and navigation optimized for active billing accounts

Cost Management is available at every scope above your resources – from a billing account or management group down to the individual resource groups where you manage your apps. You can manage costs in the context of the scope you're interested in or start in Cost Management and switch between scopes without navigating around the portal. Whatever works best for you. This month, we're introducing a few small tweaks to make it even easier to manage costs for your active billing accounts and subscriptions.

For those who start in Cost Management, you may notice the default scope has changed for you. Cost Management now prioritizes active billing accounts and subscriptions over renewed, cancelled, or disabled ones. This will help you get started even quicker without needing to change scope.

When you do change scope, the list of billing accounts may be a little shorter than you last remember. This is because those older billing accounts are now hidden by default, keeping you focused on your active billing accounts. To see your inactive billing accounts, uncheck the "Only show active billing accounts" checkbox at the bottom of the scope picker. This option also allows you to see all subscriptions, regardless of what's been pre-selected in the global subscription filter.

Lastly, when you're looking at all billing accounts and subscriptions, you'll see the inactive ones at the bottom of the list, with their status clearly called out for ultimate transparency and clarity.

We hope these changes will make it easier for you to manage costs across scopes. Let us know what you'd like to see next.

 

Improved right-sizing recommendations for virtual machines

One of the most critical learnings when moving to the cloud is how important it is to size virtual machines for the workload and use auto-scaling capabilities to grow (or shrink) to meet usage demands. In an effort to ensure your virtual machines are using the optimal size, Azure Advisor now factors CPU usage, memory, and network usage into right-sizing recommendations for more accurate recommendations you can trust. Learn more about the change in the latest Advisor update.

 

New ways to save money with Azure

There have been several new cost optimization improvements over the past month. Here are a few you might be interested in:

Save up to 25 percent with the new capacity-based pricing options for Azure Monitor Log Analytics
Only pay for the licenses you use with the new Azure DevOps assignment-based billing option
Take advantage of the free, promotional pricing for data transfer to Azure Front Door through the end of November 2019

 

New videos

For those visual learners out there, here are a couple new videos you should check out:

How to apply budgets to subscriptions (5m)
How to use cost analysis (2.5m)

Subscribe to the Azure Cost Management YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

 

Documentation updates

There were a lot of documentation updates. Here are a few you might be interested in:

Lots of updates around Microsoft Partner Agreement for partners – start with the Getting started with your Microsoft Partner Agreement billing account
Added Microsoft Partner Agreement scopes to Understand and work with scopes
Summarized a few of the common uses of cost analysis
Added Microsoft Customer Agreement details for virtual machine reservations

Want to keep an eye on all of the documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

 

What's next?

These are just a few of the big updates from last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

New in Stream Analytics: Machine Learning, online scaling, custom code, and more

Azure Stream Analytics is a fully managed Platform as a Service (PaaS) that supports thousands of mission-critical customer applications powered by real-time insights. Out-of-the-box integration with numerous other Azure services enables developers and data engineers to build high-performance, hot-path data pipelines within minutes. The key tenets of Stream Analytics are ease of use, developer productivity, and enterprise readiness. Today, we're announcing several new features that further enhance these key tenets. Let's take a closer look at these features:

Preview Features

Rollout of these preview features begins November 4th, 2019, with worldwide availability to follow in the weeks after.

Online scaling

In the past, changing Streaming Units (SUs) allocated for a Stream Analytics job required users to stop and restart. This resulted in extra overhead and latency, even though it was done without any data loss.

With online scaling capability, users will no longer be required to stop their job if they need to change the SU allocation. Users can increase or decrease the SU capacity of a running job without having to stop it. This builds on the customer promise of long-running mission-critical pipelines that Stream Analytics offers today.

Change SUs on a Stream Analytics job while it is running.

C# custom de-serializers

Azure Stream Analytics has always supported input events in JSON, CSV, or AVRO data formats out of the box. However, millions of IoT devices are often programmed to generate data in other formats to encode structured data in a more efficient yet extensible format.

With our current innovations, developers can now leverage the power of Azure Stream Analytics to process data in Protobuf, XML, or any custom format. You can now implement custom de-serializers in C#, which can then be used to de-serialize events received by Azure Stream Analytics.
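Conceptually, a custom de-serializer turns a raw byte payload into structured events. The actual Stream Analytics extension point is a C# class, but the idea can be sketched in a few lines; the length-prefixed wire format below is purely hypothetical:

```python
import struct

def deserialize(payload: bytes):
    """Decode a hypothetical custom format: each record is a 4-byte
    big-endian length followed by that many bytes of UTF-8 text."""
    events, offset = [], 0
    while offset < len(payload):
        # Read the length prefix, then slice out the record body.
        (length,) = struct.unpack_from(">I", payload, offset)
        offset += 4
        events.append(payload[offset:offset + length].decode("utf-8"))
        offset += length
    return events

# Two encoded records round-trip back to their original strings.
raw = b"".join(struct.pack(">I", len(s)) + s
               for s in [b"temp=21.5", b"temp=22.0"])
print(deserialize(raw))
```

A Stream Analytics custom de-serializer does the equivalent work per incoming event, handing the decoded records to the query engine.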

Extensibility with C# custom code

Azure Stream Analytics traditionally offered SQL language for performing transformations and computations over streams of events. Though there are many powerful built-in functions in the currently supported SQL language, there are instances where a SQL-like language doesn't provide enough flexibility or tooling to tackle complex scenarios.

Developers creating Stream Analytics modules in the cloud or on IoT Edge can now write or reuse custom C# functions and invoke them right in the query through User Defined Functions. This enables scenarios such as complex math calculations, importing custom ML models using ML.NET, and programming custom data imputation logic. Full-fidelity authoring experience is made available in Visual Studio for these functions.

Managed Identity authentication with Power BI

Dynamic dashboarding experience with Power BI is one of the key scenarios that Stream Analytics helps operationalize for thousands of customers worldwide.

Azure Stream Analytics now offers full support for Managed Identity based authentication with Power BI for dynamic dashboarding experience. This helps customers align better with their organizational security goals, deploy their hot-path pipelines using Visual Studio CI/CD tooling, and enables long-running jobs as users will no longer be required to change passwords every 90 days.

While this new feature is going to be immediately available, customers will continue to have the option of using the Azure Active Directory User-based authentication model.

Stream Analytics on Azure Stack

Azure Stream Analytics is supported on Azure Stack via IoT Edge runtime. This enables scenarios where customers are constrained by compliance or other reasons from moving data to the cloud, but at the same time wish to leverage Azure technologies to deliver a hybrid data analytics solution at the Edge.

Rolling out as a preview option beginning January 2020, this will offer customers the ability to analyze ingress data from Event Hubs or IoT Hub on Azure Stack, and egress the results to a blob storage or SQL database on the same. You can continue to sign up for preview of this feature until then.

Debug query steps in Visual Studio

We've heard a lot of user feedback about the challenge of debugging the intermediate row sets defined in a WITH statement in an Azure Stream Analytics query. Users can now easily preview an intermediate row set on a data diagram when doing local testing in Azure Stream Analytics tools for Visual Studio. This feature can greatly help users break down their query and see the results step by step when fixing the code.

Local testing with live data in Visual Studio Code

When developing an Azure Stream Analytics job, developers have expressed a need to connect to live input to visualize the results. This is now available in Azure Stream Analytics tools for Visual Studio Code, a lightweight, free, and cross-platform editor. Developers can test their query against live data on their local machine before submitting the job to Azure. Each testing iteration takes less than two to three seconds on average, resulting in a very efficient development process.

Live Data Testing feature in Visual Studio Code

Private preview for Azure Machine Learning

Real-time scoring with custom Machine Learning models

Azure Stream Analytics now supports high-performance, real-time scoring by leveraging custom pre-trained Machine Learning models managed by the Azure Machine Learning service, and hosted in Azure Kubernetes Service (AKS) or Azure Container Instances (ACI), using a workflow that requires users to write absolutely no code.

Users can build custom models by using any popular python libraries such as Scikit-learn, PyTorch, TensorFlow, and more to train their models anywhere, including Azure Databricks, Azure Machine Learning Compute, and HD Insight. Once deployed in Azure Kubernetes Service or Azure Container Instances clusters, users can use Azure Stream Analytics to surface all endpoints within the job itself. Users simply navigate to the functions blade within an Azure Stream Analytics job, pick the Azure Machine Learning function option, and tie it to one of the deployments in the Azure Machine Learning workspace.

Advanced configurations, such as the number of parallel requests sent to Azure Machine Learning endpoint, will be offered to maximize the performance.

You can sign up for preview of this feature now.

Feedback and engagement

Engage with us and get early glimpses of new features by following us on Twitter at @AzureStreaming.

The Azure Stream Analytics team is highly committed to listening to your feedback and letting the user's voice influence our future investments. We welcome you to join the conversation and make your voice heard via our UserVoice page.

Enabling Diagnostic Logging in Azure API for FHIR®

Access to Diagnostic Logs is essential for any healthcare service where being compliant with regulatory requirements (like HIPAA) is a must. The feature in Azure API for FHIR that makes this happen is Diagnostic settings in the Azure Portal UI. For details on how Azure Diagnostic Logs work, please refer to the Azure Diagnostic Log documentation.

At this time, the service emits the following fields in the audit log:

Field Name 

Type  

Notes

TimeGenerated
DateTime
Date and Time of the event.

OperationName   

String
 

CorrelationId  
String
 

RequestUri  
String
The request URI.

FhirResourceType  
String
The resource type the operation was executed for.

StatusCode  
Int  
The HTTP status code (e.g., 200).

ResultType  
String  
The available value currently are ‘Started’, ‘Succeeded’, or ‘Failed.’

OperationDurationMs
Int  
The milliseconds it took to complete the request.

LogCategory  
String
The log category. We are currently emitting 'AuditLogs' for the value.

CallerIPAddress  
String
The caller's IP address.

CallerIdentityIssuer  
String  
Issuer

CallerIdentityObjectId  
String  
Object_Id

CallerIdentity  
Dynamic  
A generic property bag containing identity information.

Location  
String
The location of the server that processed the request (e.g., South Central US).
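As an illustration of what these fields enable once the logs are exported, here is a small Python sketch that summarizes audit records into a per-resource-type failure count and average latency. The sample records are fabricated for illustration only.

```python
# Illustrative sketch: summarizing exported audit log records (using the
# field names listed above) into per-FHIR-resource-type statistics.
from collections import defaultdict

records = [
    {"OperationName": "read", "FhirResourceType": "Patient",
     "StatusCode": 200, "ResultType": "Succeeded", "OperationDurationMs": 40},
    {"OperationName": "read", "FhirResourceType": "Patient",
     "StatusCode": 404, "ResultType": "Failed", "OperationDurationMs": 25},
    {"OperationName": "create", "FhirResourceType": "Observation",
     "StatusCode": 201, "ResultType": "Succeeded", "OperationDurationMs": 60},
]

def summarize(records):
    totals = defaultdict(lambda: {"total": 0, "failed": 0, "duration_ms": 0})
    for r in records:
        s = totals[r["FhirResourceType"]]
        s["total"] += 1
        s["failed"] += r["ResultType"] == "Failed"
        s["duration_ms"] += r["OperationDurationMs"]
    # Compute the average latency per resource type.
    return {k: {"total": v["total"], "failed": v["failed"],
                "avg_ms": v["duration_ms"] / v["total"]}
            for k, v in totals.items()}

stats = summarize(records)
```

In practice you would run this kind of aggregation in Log Analytics rather than in client code, but the sketch shows how the emitted fields support compliance reporting.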

How do I get to my Audit Logs?

To enable diagnostic logging in Azure API for FHIR, navigate to Diagnostic settings in the Azure Portal. Here you will see standard UI that all services use for emitting diagnostic logging.

There are three ways to consume the diagnostic logs:

Archive to the Storage Account for auditing or manual inspection.
Stream to Event Hub for ingestion by a third-party service or custom analytics solutions, such as Power BI.
Stream to Log Analytics workspace in Azure Monitor.

Please note, it may take up to 15 minutes for the first logs to show up in Log Analytics.

For more information on how to work with Diagnostic Logs, please refer to Diagnostic Logs documentation.

Conclusion

Having access to Diagnostic Logs is essential for monitoring service and providing compliance reports. Azure API for FHIR allows you to do this through Diagnostic Logs.

FHIR® is the registered trademark of HL7 and is used with the permission of HL7.
Source: Azure

TensorFlow 2.0 on Azure: Fine-tuning BERT for question tagging

This post is co-authored by Abe Omorogbe, Program Manager, Azure Machine Learning, and John Wu, Program Manager, Azure Machine Learning

Congratulations to the TensorFlow community on the release of TensorFlow 2.0! In this blog, we aim to highlight some of the ways that Azure can streamline the building, training, and deployment of your TensorFlow model. In addition to reading this blog, check out the demo discussed in more detail below, showing how you can use TensorFlow 2.0 in Azure to fine-tune a BERT (Bidirectional Encoder Representations from Transformers) model for automatically tagging questions.

TensorFlow 1.x is a powerful framework that enables practitioners to build and run deep learning models at massive scale. TensorFlow 2.0 builds on the capabilities of TensorFlow 1.x by integrating more tightly with Keras (a library for building neural networks), enabling eager mode by default, and implementing a streamlined API surface.

TensorFlow 2.0 on Azure

We've integrated TensorFlow 2.0 with the Azure Machine Learning service to make bringing your TensorFlow workloads into Azure as seamless as possible. Azure Machine Learning service provides an SDK that lets you write machine learning models in your preferred framework and run them on the compute target of your choice, including a single virtual machine (VM) in Azure, a GPU (graphics processing unit) cluster in Azure, or your local machine. The Azure Machine Learning SDK for Python has a dedicated TensorFlow estimator that makes it easy to run TensorFlow training scripts on any compute target you choose.

In addition, the Azure Machine Learning service Notebook VM comes with TensorFlow 2.0 pre-installed, making it easy to run Jupyter notebooks that use TensorFlow 2.0.

TensorFlow 2.0 on Azure demo: Automated labeling of questions with TF 2.0, Azure, and BERT

As we’ve mentioned, TensorFlow 2.0 makes it easy to get started building deep learning models. Using TensorFlow 2.0 on Azure makes it easy to get the performance benefits of Microsoft’s global, enterprise-grade cloud for whatever your application may be.

To highlight the end-to-end use of TensorFlow 2.0 in Azure, we prepared a workshop that will be delivered at TensorFlow World, on using TensorFlow 2.0 to train a BERT model to suggest tags for questions that are asked online. Check out the full GitHub repository, or go through the higher-level overview below.

Demo Goal

In keeping with Microsoft’s emphasis on customer obsession, Azure engineering teams try to help answer user questions on online forums. Azure teams can only answer questions if we know that they exist, and one of the ways we are alerted to new questions is by watching for user-applied tags. Users might not always know the best tag to apply to a given question, so it would be helpful to have an AI agent to automatically suggest good tags for new questions.

We aim to train an AI agent to automatically tag new Azure-related questions.
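To make the tagging task concrete, here is a deliberately naive Python baseline: a keyword lookup rather than the BERT model the workshop actually trains. It only shows the input/output shape of a tag suggester; the keyword-to-tag mapping is invented for illustration.

```python
# Toy baseline only -- NOT the BERT model from the workshop. A keyword lookup
# that suggests tags for a question, to make the tagging task concrete.
KEYWORD_TAGS = {  # hypothetical keyword-to-tag mapping
    "blob": "azure-storage",
    "aks": "azure-kubernetes-service",
    "function": "azure-functions",
}

def suggest_tags(question, max_tags=2):
    # Lowercase and split the question, then return tags whose keyword appears.
    words = question.lower().split()
    tags = [tag for kw, tag in KEYWORD_TAGS.items() if kw in words]
    return tags[:max_tags]

print(suggest_tags("How do I scale my AKS cluster?"))
```

A fine-tuned BERT model replaces the brittle keyword lookup with learned representations of the full question text, but the interface, question in, ranked tags out, stays the same.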

Training

First, check out the training notebook. After preparing our data in Azure Databricks, we train a Keras model on an Azure GPU cluster using the Azure Machine Learning service TensorFlow Estimator class. Notice how easy it is to integrate Keras, TensorFlow, and Azure’s compute infrastructure. We can easily monitor the progress of training with the run object.

Inferencing

Next, open up the inferencing notebook. Azure makes it simple to deploy your trained TensorFlow 2.0 model as a REST endpoint in order to get tags associated with new questions.
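A sketch of what calling such an endpoint might look like from Python follows. The scoring URI and the payload schema are assumptions for illustration; the real contract is defined by the deployed service, and the HTTP call itself is shown in a comment rather than executed.

```python
# Illustrative sketch of calling a deployed model's REST scoring endpoint.
import json

SCORING_URI = "https://example.azurewebsites.net/score"  # hypothetical URI

def build_request(questions):
    # JSON body; the "data" key is an assumed schema for the tagging model.
    body = json.dumps({"data": questions})
    headers = {"Content-Type": "application/json"}
    return body, headers

body, headers = build_request(["How do I resize a VM?"])
# To actually call the endpoint (requires the `requests` package and a
# live service):
#   requests.post(SCORING_URI, data=body, headers=headers).json()
```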

Machine Learning Operations

Next, open up the Machine Learning Operations instructions. If we intend to use the model in a production setting, we can bring additional robustness to the pipeline with ML Ops, an offering by Microsoft that brings a DevOps mindset to machine learning, enabling multiple data scientists to work on the same model while ensuring that only models that meet certain criteria will be put into production.

Next steps

TensorFlow 2.0 opens up exciting new horizons for practitioners of deep learning, both old and new. If you would like to get started, check out the following resources:

TensorFlow 2.0 announcement
TensorFlow estimator on Azure

Source: Azure

How Hanu helps bring Windows Server workloads to Azure

For decades our Microsoft services partners have fostered digital transformation at customer organizations around the world. With deep expertise in both on-premises and cloud operating models, our partners are trusted advisers to their customers, helping shape migration decisions. Partners give customers hands-on support with everything from initial strategy to implementation, giving them a unique perspective on why migration matters.

Hanu is one of our premier Microsoft partners and the winner of the 2019 Microsoft Azure Influencer Partner of the Year award. Hanu's experts rely on deep expertise with Windows Server and SQL Server, as well as Azure, to plan and manage cloud migrations. This ensures that customers get proactive step-by-step guidance and best-in-class support as they transform with the cloud.

Recently, I sat down with Dave Sasson, Chief Strategy Officer at Hanu, to learn more about why Windows Server customers migrate to the cloud, and why they choose Azure. Below I am sharing a few key excerpts.

How often are Windows Server customers considering cloud as a part of their digital strategy today? How are they thinking about migrating business applications?

Very frequently we talk to customers that have Windows Servers running their business-critical apps. For a significant number of custom apps, .NET is the code base.  For the CIOs at these companies, cloud initiatives are their top priorities. In this competitive age, end users are demanding great experiences and our customers are looking at ways to innovate quicker and fail faster. Cloud is the natural choice to deliver these new experiences.

Aging infrastructure that is prone to failure and vulnerable to security threats is also driving cloud considerations. The recent end of support for SQL Server 2008 and 2008 R2, and the upcoming end of support for Windows Server 2008 and 2008 R2, are decision points for customers on whether to invest in on-premises infrastructure or move their workloads to the cloud.

What are some of the considerations you see Windows Server customers reviewing when choosing the cloud?

Security, performance and uptime, management, and cost optimization are the top technical considerations mentioned. IT skill is another significant consideration.

Customers want to invest in cloud partners that have technology leadership. This enables customers to modernize their applications and data estates, leverage chatbots and machine learning, and infuse AI services into their internal processes and their customer-facing applications.

What are the challenges you see customers facing when they are transitioning from on-premises to the cloud?

Operating in the cloud is a new paradigm for most customers.  Security, compliance, performance, and uptime are immediate concerns to ensure that companies have business continuity while they digitally transform across the company. Due to recent security threats and compliance requirements, we see this as a concern in not only industry verticals that are traditionally considered highly regulated, but across the board.

Another top challenge for CIOs is how they leverage their organization’s expertise in this new age of IT. Most customers have a wealth of in-house expertise, but the worry is whether their existing skills will still apply, and whether they can maintain high uptime, once the cloud becomes part of their IT environment.

In your experience, why do customers choose Azure for their Windows Server Workloads?

Windows Server and SQL Server users trust Microsoft as their chosen technology partner. Azure offers even better built-in security features and controls to protect cloud environments than what is available on-premises. Azure’s 90+ compliance offerings across the breadth of industry verticals help customers quickly move to a compliant state while running in the cloud. The Azure Governance application also helps automate compliance tracking.

"We worked with Hanu to move our business-critical workloads running on Windows Server to VMs in Azure. We are saving approximately 30% in cost and best of all, we can now focus entirely on innovation." Paul Athaide, Senior Manager, Multiple Sclerosis Society of Canada

Azure offers first-party support for Windows Server and SQL Server. This means the support team is backed by the experts who built Windows Server and SQL Server. Azure's first-party support promise, combined with Hanu's world-class ISO 27001-certified NOC and SOC standards, gives customers the confidence that they can run business-critical apps in Azure.

Every customer operates their on-premises environment while they build out their operating environment in the cloud. Azure offers tools for Windows Server admins, such as Windows Admin Center, to manage their on-premises workloads and their Azure VMs. Many Azure services such as Azure Security Center, Update, Monitoring, Site Recovery, and Backup work on-premises and are available through Windows Admin Center. In addition, Azure services like Azure SQL Database, App Service, and Azure Kubernetes Service natively run Windows applications.

Lastly, we tell all our customers to take advantage of Azure Hybrid Benefit. If they have Software Assurance, they can save significantly on cloud cost by moving their Windows and SQL Server workloads to Azure. 

How does Hanu see the value in building a practice in migrating Windows Server on-premises workloads to the cloud?

Customers who are running Windows Server and SQL Server on-premises today have a greater understanding of, and confidence in, the cloud. We are frequently pulled into discussions to assist in building customers' environments in Azure. Consequently, we have invested a lot of time and resources in our Windows Server migration practice. As a Microsoft partner, we are excited to see the innovations that Azure is bringing and the ways we can help our customers digitally transform their business.

Dave, thanks so much for sitting down with me. It sounds like our customers are in good hands! 

It’s always great to hear from our premier partners on what challenges customers face and how Microsoft Azure meets those requirements. 

Please check out the Partner Portal to find partners that meet your requirements. We realize every customer has challenges that are unique to their business, and our Microsoft Partner Network has thousands of partners to meet those requirements. To learn more about Hanu, try Hanu's solution available on Azure Marketplace.
Source: Azure

Automated machine learning and MLOps with Azure Machine Learning

Azure Machine Learning is the center for all things machine learning on Azure, be it creating new models, deploying models, managing a model repository, or automating the entire CI/CD pipeline for machine learning. We recently made some amazing announcements on Azure Machine Learning, and in this post, I’m taking a closer look at two of the most compelling capabilities that your business should consider while choosing the machine learning platform.

Before we get to the capabilities, let’s get to know the basics of Azure Machine Learning.

What is Azure Machine Learning?

Azure Machine Learning is a managed collection of cloud services, relevant to machine learning, offered in the form of a workspace and a software development kit (SDK). It is designed to improve the productivity of:

Data scientists who build, train and deploy machine learning models at scale
ML engineers who manage, track and automate the machine learning pipelines

Azure Machine Learning comprises the following components:

An SDK that plugs into any Python-based IDE, notebook or CLI
A compute environment that offers both scale up and scale out capabilities with the flexibility of auto-scaling and the agility of CPU or GPU based infrastructure for training
A centralized model registry to help keep track of models and experiments, irrespective of where and how they are created
Managed container service integrations with Azure Container Instance, Azure Kubernetes Service and Azure IoT Hub for containerized deployment of models to the cloud and the IoT edge
A monitoring service that helps track metrics from models that are registered and deployed via Machine Learning

Let us introduce you to Machine Learning with the help of this video where Chris Lauren from the Azure Machine Learning team showcases and demonstrates it.

As you can see in the video, Azure Machine Learning can cater to workloads of any scale and complexity. Please see below a flow for the connected car application demonstrated in the video. This is also a canonical pattern for machine learning solutions built on Azure Machine Learning:

Visual: Connected Car demo architecture leveraging Azure Machine Learning

Now that you understand Azure Machine Learning, let’s look at the two capabilities that stand out:

Automated machine learning

Data scientists spend an inordinate amount of time iterating over models during the experimentation phase. The whole process of trying out different algorithms and hyperparameter combinations until an acceptable model is built is extremely taxing for data scientists, due to the monotonous and non-challenging nature of work. While this is an exercise that yields massive gains in terms of the model efficacy, it sometimes costs too much in terms of time and resources and thus may have a negative return on investment (ROI).

This is where automated machine learning (ML) comes in. It leverages concepts from the research paper on Probabilistic Matrix Factorization and implements an automated pipeline that tries out intelligently selected algorithms and hyperparameter settings, based on the heuristics of the data presented and the given problem or scenario. The result of this pipeline is a set of models that are best suited for the given problem and dataset.

Visual: Automated machine learning

 

Automated ML supports classification, regression, and forecasting and it includes features such as handling missing values, early termination by a stopping metric, blacklisting algorithms you don’t want to explore, and many more to optimize the time and resources.

Automated ML is designed to help professional data scientists be more productive and spend their precious time concentrating on specialized tasks such as tuning and optimizing the models, alongside mapping real-world cases to ML problems, rather than spending time in monotonous tasks like trial and error with a bunch of algorithms. Automated ML with its newly introduced UI mode (akin to a wizard) also helps open the doors of machine learning to novice or non-professional data scientists as they can now become valuable contributors in data science teams by leveraging these augmented capabilities and churning out accurate models to accelerate time to market. This ability to expand data science teams beyond the handful of highly specialized data scientists enables enterprises to invest and reap the benefits of machine learning at scale without having to compromise high-value use cases due to the lack of data science talent.

To learn more about automated ML in Azure Machine Learning, explore this automated machine learning article.

Machine learning operations (MLOps)

Creating a model is just one part of an ML pipeline, arguably the easier part. Taking that model to production and reaping the benefits of it is a completely different ball game. One has to be able to package the models, deploy them, track and monitor them across various deployment targets, collect metrics, use those metrics to determine the models' efficacy, and then enable retraining on the basis of these insights and/or new data. On top of that, all of this needs a mechanism that can be automated, with the right knobs and dials to let data science teams keep tabs on the pipeline and not allow it to go rogue, which could result in considerable business losses, as these data science models are often linked directly to customer actions.

This problem is very similar to what application development teams face with respect to managing apps and releasing new versions at regular intervals with improved features and capabilities. App dev teams address this with DevOps, the industry standard for managing operations for an app development cycle. Replicating the same for machine learning cycles is not an easy task.

  Visual: DevOps Process

 

This is where Azure Machine Learning shines the most. It presents the most complete and intuitive model lifecycle management experience, integrating with Azure DevOps and GitHub.

The first task in the ML lifecycle management, after a data scientist has created and validated a model or an ML pipeline, is that it needs to be packaged, so that it can execute where it needs to be deployed. This means that the ML platform needs to enable containerizing the model with all its dependencies, as containers are the default execution unit across scalable cloud services and the IoT edge. Azure Machine Learning provides an easy way for data scientists to be able to package their models with simple commands that can track all dependencies like conda environments, python versioned libraries, and other libraries that the model references so that the model can execute seamlessly within the deployed environment.

The next step is to version control these models. The code, such as Python notebooks or scripts, can easily be version-controlled in GitHub, and this is the recommended approach; but in addition to the notebooks and scripts you also need a way to version control the models themselves, which are different entities from the Python files. This is important because data scientists may create multiple versions of a model and very easily lose track of them in the search for better accuracy or performance. Azure Machine Learning provides a central model registry, which forms the foundation of the lifecycle management process. This repository enables version control of models, stores model metrics, allows for one-click deployment, and even tracks all deployments of the models so that you can restrict usage in case a model becomes stale or its efficacy is no longer acceptable. Having this model registry is key, as it also helps trigger other activities in the lifecycle when new changes appear or metrics cross a threshold.
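The bookkeeping that the registry automates can be sketched in a few lines of Python. This in-memory toy (all names invented) only illustrates the versioning and metric-tracking idea, not the Azure Machine Learning API.

```python
# Minimal in-memory sketch of the model-registry idea: versioned models with
# metrics attached. Azure Machine Learning's registry is a managed service;
# this only illustrates the bookkeeping it automates.
class ModelRegistry:
    def __init__(self):
        self._models = {}  # model name -> list of version entries

    def register(self, name, metrics):
        # Each registration creates a new, monotonically increasing version.
        versions = self._models.setdefault(name, [])
        entry = {"version": len(versions) + 1, "metrics": metrics}
        versions.append(entry)
        return entry["version"]

    def best_version(self, name, metric="accuracy"):
        # Pick the version with the highest value of the chosen metric.
        return max(self._models[name], key=lambda e: e["metrics"][metric])["version"]

registry = ModelRegistry()
registry.register("question-tagger", {"accuracy": 0.81})
registry.register("question-tagger", {"accuracy": 0.86})
```

Because versions and metrics live in one place, downstream steps (deployment, rollback, retraining triggers) can all key off the registry rather than off ad hoc file names.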

Visual: Model Registry in Azure Machine Learning

Once a model is packaged and registered, it's time to test the packaged model. Since the package is a container, it is ideal to test it in Azure Container Instances, which provides an easy, cost-effective mechanism to deploy containers. The important thing here is that you don't have to go outside Azure Machine Learning, as it has built strong links to Azure Container Instances within its workspace. You can easily set up an Azure Container Instance from within the workspace, or from the IDE where you're already using Azure Machine Learning, via the SDK. Once you deploy this container to Azure Container Instances, you can easily run inference against the model for testing purposes.

Following a thorough round of testing of the model, it is time to deploy it into production. Production environments are synonymous with scale, flexibility, and tight monitoring capabilities. This is where Azure Kubernetes Service (AKS) can be very useful for container deployments: it provides scale-out capabilities, as it is a cluster that can be sized to cater to the business's needs. Again, very much like with Azure Container Instances, Azure Machine Learning provides the capability to set up an AKS cluster from within its workspace or the user's IDE of choice.

If your models are sufficiently small and don't have scale-out requirements, you can also take them to production on Azure Container Instances. Usually, that's not the case, as models are accessed by end-user applications or many different systems, so planning for scale always helps. Both Azure Container Instances and AKS provide extensive monitoring and logging capabilities.

Once your model is deployed, you want to be able to collect metrics on it. You want to ascertain whether the model is drifting from its objective and whether its inferences remain useful for the business. This means you capture a lot of metrics and analyze them. Azure Machine Learning enables this tracking of metrics for the model in a very efficient manner, and the central model registry becomes the one place where all of this is hosted.
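One of the simplest drift signals you could compute from such metrics is a shift in the mean predicted score between a baseline window and a recent window. The following Python sketch shows only the core idea; the threshold and sample values are illustrative, and real drift detection uses richer statistics.

```python
# Sketch of one simple drift signal: flag when the mean predicted score in a
# recent window shifts from the baseline by more than a threshold.
def mean_shift_drift(baseline_scores, recent_scores, threshold=0.1):
    baseline_mean = sum(baseline_scores) / len(baseline_scores)
    recent_mean = sum(recent_scores) / len(recent_scores)
    return abs(recent_mean - baseline_mean) > threshold

baseline = [0.70, 0.72, 0.68, 0.71]  # scores collected at deployment time
stable   = [0.69, 0.73, 0.70]        # recent window, no drift
drifted  = [0.45, 0.50, 0.48]        # recent window, clear shift
```

A signal like this is exactly the kind of metric that, once crossed, can trigger the retraining step described next.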

As you collect more metrics and additional data becomes available for training, there may be a need to retrain the model in the hope of improving its accuracy and/or performance. Also, since this is a continuous integration and deployment (CI/CD) process, it needs to be automated. This process of retraining and effective CI/CD of ML models is the biggest strength of Azure Machine Learning.

Azure Machine Learning integrates with Azure DevOps so that you can create MLOps pipelines inside the DevOps environment. Azure DevOps has an extension for Azure Machine Learning, which enables it to listen to Azure Machine Learning's model registry in addition to the code repository maintained in GitHub for the Python notebooks and scripts. This makes it possible to trigger Azure Pipelines based on new code commits to the code repository or new models published to the model registry. This is extremely powerful, as data science teams can configure stages for build and release pipelines within Azure DevOps for the machine learning models and completely automate the process.
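A typical gate inside such an automated release pipeline is "promote only if the candidate beats production." Here is a minimal Python sketch of that decision; the metric name and threshold are illustrative assumptions, not a prescribed Azure DevOps configuration.

```python
# Sketch of an automated promotion gate in an MLOps release pipeline: a new
# model version is promoted only if it beats the currently deployed model on
# a chosen metric by at least a minimum margin.
def should_promote(candidate_metrics, production_metrics,
                   metric="accuracy", min_gain=0.01):
    gain = candidate_metrics[metric] - production_metrics[metric]
    return gain >= min_gain

# A candidate that improves accuracy by 0.05 clears the default 0.01 bar.
should_promote({"accuracy": 0.86}, {"accuracy": 0.81})
```

Encoding the gate as a check in the pipeline, rather than a human judgment call, is what keeps an automated retraining loop from shipping a regressed model.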

What’s more, since Azure DevOps is also the environment for managing app lifecycles, it enables data science teams and app dev teams to collaborate seamlessly and trigger new versions of the apps whenever certain conditions in the MLOps cycle are met, as the apps are often the consumers of the new versions of the ML models, infusing them into the apps or updating inference call URLs when desired.

This may sound simple and like the most logical way of doing it, but nobody else has brought MLOps to life with such close-knit integration into the whole process. Azure Machine Learning does an amazing job of it, enabling data science teams to become immensely productive.

Please see the diagrammatic representation below for MLOps with Azure Machine Learning.

Visual: MLOps with Azure Machine Learning

To learn more about MLOps please visit the Azure Machine Learning documentation on MLOps.

Get started now!

This has been a long post, so thank you for your patience, but this is just the beginning. As we've seen, Azure Machine Learning presents capabilities that make the entire ML lifecycle a seamless process. With these two features, we're just scratching the surface, as there are many more capabilities to help data scientists and machine learning engineers create, manage, and deploy their models in a much more robust and thoughtful manner.

Model interpretability – Understand the model and its behavior.

ONNX runtime support – Deploy models created in the open ONNX format.

Model telemetry collection – Collect telemetry from live running models.

Field programmable gated-array (FPGA) inferencing – Score or featurize image data using pre-trained deep neural networks with blazing fast speed and low cost.

IoT Edge deployment – Deploy model to IoT devices.

And many more to come. Please visit the Getting started guide to start the exciting journey with us!
Source: Azure

Customize networking for DR drills: Azure Site Recovery

One of the most important features of a disaster recovery tool is failover readiness. Administrators ensure this by watching out for health signals from the product; some also choose to set up their own monitoring solutions to track readiness. End-to-end testing is conducted using disaster recovery (DR) drills every three to six months. Azure Site Recovery offers this capability for replicated items, and customers rely heavily on test failovers or planned failovers to ensure that their applications work as expected. With Azure Site Recovery, customers are encouraged to use a non-production network for test failover so that IP addresses and networking components remain available in the target production network in case of an actual disaster. Even with a non-production network, the drill should be an exact replica of the actual failover.

Until now, it has only been close to being that replica. The networking configurations for test failover did not entirely match the failover settings: the choice of subnet, network security group, internal load balancer, and public IP address per network interface (NIC) could not be made. This meant that customers had to follow a particular alphabetical naming convention for subnets in the test failover network to ensure the replicated items failed over as intended, a requirement that conflicted with organizations that enforce naming conventions for Azure resources. Also, if you wished to attach networking components, it was only possible to do so manually after the test failover operation. Further, if a customer tested the failover of an entire application via a recovery plan, the Azure virtual network selection was applied to all the virtual machines irrespective of the application tier.

Test failover settings for networking resources

DR administrators of Azure Site Recovery now have a highly configurable setup for such operational activities. The network settings required for test failover are available for every replicated item. These settings are optional; if you skip them, the old behavior applies and you select the Azure virtual network at the time of triggering the test failover.

You can go to the Compute and Network blade and choose a test failover network. You can further update all the networking settings for each NIC. Only settings that were configured on the source at the time of enabling replication can be updated, and they only allow you to choose a networking resource that is already created in the target location. Azure Site Recovery does not replicate changes to networking resources at the source. Read the full guidance on networking customization in the Azure Site Recovery documentation.

At the time of initiating test failover via the replicated item blade, you will no longer see the dropdown to choose an Azure virtual network if the settings are pre-configured. If you initiate test failover via a recovery plan, you will still see the dropdown to choose a virtual network; however, it will be applied only to those machines that do not have settings pre-configured.

These settings are only available for Azure machines that are protected by Azure Site Recovery. Test failover settings for VMware and physical machines will be available in a couple of milestones.

Azure natively provides high availability and reliability for your mission-critical workloads, and you can choose to improve your protection and meet compliance requirements using the disaster recovery provided by Azure Site Recovery. Getting started with Azure Site Recovery is easy: check out pricing information and sign up for a free Microsoft Azure trial. You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers.

Related links and additional content

Set up disaster recovery for Azure virtual machines
Customize networking for test failovers
Learn more about disaster recovery drills

Source: Azure

Preview: Server-side encryption with customer-managed keys for Azure Managed Disks

Today we’re introducing the preview for server-side encryption (SSE) with customer-managed keys (CMK) for Azure Managed Disks. Azure customers already benefit from server-side encryption with platform managed keys (PMK) for Azure Managed Disks enabled by default. Customers also benefit from Azure disk encryption (ADE) that leverages the BitLocker feature of Windows and the DM-Crypt feature of Linux to encrypt Managed Disks with customer managed keys within the guest virtual machine.

Server-side encryption with customer-managed keys improves on platform-managed keys by giving you control of the encryption keys to meet your compliance needs. It improves on Azure disk encryption by enabling you to use any OS type and image for your virtual machines, because the data is encrypted in the storage service. Server-side encryption with customer-managed keys is integrated with Azure Key Vault (AKV), which provides highly available, scalable, and secure storage for RSA cryptographic keys backed by hardware security modules (HSMs). You can either import your RSA keys to Azure Key Vault or generate new RSA keys in Azure Key Vault.

Azure Storage handles the encryption and decryption in a fully transparent fashion using envelope encryption. It encrypts data using an Advanced Encryption Standard (AES) 256 based data encryption key which is in turn protected using your keys stored in Azure Key Vault. You have full control of your keys, and Azure Managed Disks uses system-assigned managed identity in your Azure Active Directory for accessing keys in Azure Key Vault. A user with required permissions in Azure Key Vault must first grant permissions before Azure Managed Disks can access the keys. You can prevent Azure Managed Disks from accessing your keys by either disabling your keys or by revoking access controls for your keys. Moreover, you can track the key usage through Azure Key Vault monitoring to ensure that only Azure Managed Disks or other trusted Azure services are accessing your keys.
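The envelope-encryption pattern itself can be illustrated in a few lines of Python. Note that XOR stands in for AES-256 and a local random key stands in for your key in Azure Key Vault, purely for illustration; never use XOR as a real cipher.

```python
# Toy illustration of envelope encryption: data is encrypted with a data
# encryption key (DEK), and the DEK is itself wrapped by a key held elsewhere
# (in Azure, your key in Key Vault). XOR stands in for AES-256 purely for
# illustration -- it is NOT a secure cipher.
import secrets

def xor_bytes(data, key):
    # Repeating-key XOR; applying it twice with the same key is the identity.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key_vault_key = secrets.token_bytes(32)        # stand-in for the CMK in Key Vault
dek = secrets.token_bytes(32)                  # data encryption key
ciphertext = xor_bytes(b"disk contents", dek)  # data encrypted with the DEK
wrapped_dek = xor_bytes(dek, key_vault_key)    # DEK wrapped by the customer key

# Decryption: unwrap the DEK with the Key Vault key, then decrypt the data.
plaintext = xor_bytes(ciphertext, xor_bytes(wrapped_dek, key_vault_key))
```

The point of the pattern is visible in the last line: revoking or disabling the Key Vault key makes the wrapped DEK, and therefore the data, unrecoverable, which is exactly the control described above.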

To enable customer-managed keys for Azure Managed Disks, you must first create an instance of a new resource type called DiskEncryptionSet, which represents a customer-managed key. You must associate your disks, snapshots, and images with a DiskEncryptionSet to encrypt them with customer-managed keys. There is no restriction on the number of resources that can be associated with the same DiskEncryptionSet.

Availability

Server-side encryption with customer-managed keys is available for Standard HDD, Standard SSD, and Premium SSD Managed Disks. You can now perform the following operations in the West Central US region via Azure Compute REST API version 2019-07-01:

Create a virtual machine from an Azure Marketplace image with OS disk encrypted with server-side encryption with customer-managed keys.
Create a custom image encrypted with server-side encryption with customer-managed keys.
Create a virtual machine from a custom image with OS disk encrypted with server-side encryption with customer-managed keys.
Create data disks encrypted with server-side encryption with customer-managed keys.
Create snapshots encrypted with server-side encryption with customer-managed keys.
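For example, creating a data disk encrypted with a customer-managed key amounts to referencing the DiskEncryptionSet from the disk's `encryption` property. The sketch below shows an illustrative 2019-07-01-style request body; the DiskEncryptionSet resource ID is a placeholder, and this is an assumption-laden sketch rather than official documentation.

```python
def encrypted_disk_body(region, size_gb, disk_encryption_set_id):
    """Build an ARM request body for an empty managed disk encrypted
    with a customer-managed key (sketch)."""
    return {
        "location": region,
        "properties": {
            "creationData": {"createOption": "Empty"},
            "diskSizeGB": size_gb,
            "encryption": {
                # Selects customer-managed rather than platform-managed keys.
                "type": "EncryptionAtRestWithCustomerKey",
                "diskEncryptionSetId": disk_encryption_set_id,
            },
        },
    }

disk = encrypted_disk_body(
    "westcentralus",
    128,
    "/subscriptions/<subscription-id>/resourceGroups/myRG"
    "/providers/Microsoft.Compute/diskEncryptionSets/myDES",
)
```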

Support for Azure SDKs and additional regions is coming soon.

Getting Started

Please email AzureDisks@microsoft.com to get access to the preview.

Review the server-side encryption with customer-managed keys for Managed Disks preview documentation to learn how to do the following:

Create a virtual machine from an Azure marketplace image with disks encrypted with server-side encryption with customer-managed keys
Create a virtual machine from a custom image with disks encrypted with server-side encryption with customer-managed keys
Create an empty managed disk encrypted with server-side encryption with customer-managed keys and attach it to a virtual machine
Create a new custom image encrypted with server-side encryption with customer-managed keys from a virtual machine with disks encrypted with server-side encryption with customer-managed keys.

Source: Azure

Rain or shine: Azure Maps Weather Services will bring insights to your enterprise

Weather: the bane of many motorists, transporters, agriculturalists, retailers, or just about anyone who has to deal with it—which is all of us. That said, we can embrace weather and use weather data to our advantage by integrating it into our daily lives.

Azure Maps is proud to share the preview of a new set of Weather Services for Azure customers to integrate into their applications. Azure Maps is also proud to announce a partnership with AccuWeather—the leading weather service provider, recognized and documented as the most accurate source of weather forecasts and warnings in the world. Azure Maps Weather Services adds a new layer of real-time, location-aware information to our portfolio of native Azure geospatial services powering Microsoft enterprise customer applications.

“AccuWeather’s partnership with Microsoft gives all Azure Maps customers the ability to easily use and integrate authentic and highly accurate weather-based location intelligence and routing into their applications. This is a game-changer,” says Dr. Joel N. Myers, AccuWeather Founder and CEO. “We are delighted with this collaboration with Microsoft as it will open up new opportunities for organizations–large and small–to benefit from our superior weather data based on their unique needs.”

The power of Azure Maps Weather Services

Bringing Weather Services to Azure Maps means customers now have a simple means of integrating highly dynamic, real-time weather data and visualizations into their applications. There are a multitude of scenarios that require global weather information for enterprise applications. As motorists, we can pick up our phones or ask a smart speaker about the weather. Our cars can determine the best path for us based on traffic, weather, and personal timing considerations.

Transportation companies can now feed weather information into dynamic routing algorithms to determine the best route conditions for their respective loads. Agriculturalists can have their smart sprinkler systems, running on connected edge computing, informed of incoming rain, saving crops from overwatering and conserving the delicate resource that is water. Retailers can use predicted weather information to anticipate demand for high-volume goods, optimizing their supply chains.

Did you know that most electric vehicle batteries lose a percentage of their charge when the temperature dips below freezing? With Azure Maps Weather Services, you can use current or forecasted temperatures to estimate your vehicle’s range. That range can determine how far the car can drive along a route, set better expectations for estimated arrival times, determine whether charging stations are close by, or find hotels that are reachable despite the reduced battery life. Freezing temperatures also increase the time a battery takes to charge, meaning more time spent at the charging station.

Knowing about temperature drops at charging stations makes it possible to calculate how long a driver will spend at a station, which in turn allows charging station owners to recalculate productivity metrics for their respective stations based on weather conditions.
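The range adjustment above can be sketched as a simple calculation. The loss fraction used here is an illustrative assumption for the sketch, not a published figure; a real application would derive it per vehicle model from forecasted temperatures along the route.

```python
def adjusted_range_km(rated_range_km, temp_c, cold_loss_fraction=0.20):
    """Reduce the rated range when the forecasted temperature is below
    freezing (illustrative loss fraction)."""
    if temp_c < 0:
        return rated_range_km * (1 - cold_loss_fraction)
    return rated_range_km

# A vehicle rated for 400 km keeps its full range above freezing,
# but loses the assumed 20 percent when the temperature dips below 0 C.
mild = adjusted_range_km(400, 10)   # 400
freezing = adjusted_range_km(400, -5)  # 320.0
```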

Azure Maps Weather Services in preview

Azure Maps Weather Services are available as a preview with the following capabilities:

Weather Tile API: Fetches radar and infrared raster weather tiles formatted for integration into the Azure Maps SDKs. By default, Azure Maps uses vector map tiles for its Web SDK (see Zoom Levels and Tile Grid). Use of the Azure Maps SDK is not required; developers are free to integrate Azure Maps Weather Services into their own applications as needed.
Current Conditions: Returns detailed current weather conditions such as precipitation, temperature, and wind for a given coordinate location. By default, the most current weather conditions will be returned. Observations from the past 6 or 24 hours for a particular location can be retrieved.
Minute Forecast: Request minute-by-minute forecasts for a given location for the next 120 minutes, in intervals of 1, 5, or 15 minutes. The response includes details such as the type of precipitation (rain, snow, or a mixture of both), start time, and precipitation intensity.
Hourly Forecast: Request detailed weather forecast by hour for the next 1, 12, 24 (1 day), 72 (3 days), 120 (5 days), or 240 hours (10 days) for a given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and ultraviolet (UV) index.
Quarter-Day Forecast: Request detailed weather forecast by quarter-day for the next 1, 5, 10, or 15 days for a given location. Response data is presented by quarters of the day—morning, afternoon, evening, and overnight. Details such as temperature, humidity, wind, precipitation, and UV index are returned.
Daily Forecast: Returns detailed weather forecast by day for the next 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and UV index.
Weather Along Route: Returns hyperlocal (one kilometer or less), up-to-the-minute weather nowcasts, weather hazard assessments, and notifications along a route described as a sequence of waypoints. The response includes the list of weather hazards affecting each waypoint, and the aggregated hazard index per waypoint can be used to paint each portion of a route according to how safe it is for the driver. Data is updated every five minutes. The service complements the Azure Maps Route Service: you can first request a route between an origin and a destination, then use that route as input to the Weather Along Route endpoint.
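To give a feel for calling these endpoints, the sketch below assembles a Current Conditions request URL. The endpoint path and parameter names reflect the preview REST surface as we understand it and may change; the subscription key is a placeholder.

```python
from urllib.parse import urlencode

def current_conditions_url(lat, lon, subscription_key):
    """Build a Current Conditions request URL (preview sketch)."""
    params = {
        "api-version": "1.0",
        "query": f"{lat},{lon}",        # coordinate as "lat,lon"
        "subscription-key": subscription_key,
    }
    return ("https://atlas.microsoft.com/weather/currentConditions/json?"
            + urlencode(params))

url = current_conditions_url(47.6062, -122.3321, "<your-azure-maps-key>")
```

The same pattern applies to the forecast endpoints, which add parameters such as the forecast duration and interval.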

Using Azure Maps Weather Services along a calculated route (from the Azure Maps Route Service), customers can generate weather notifications for waypoints that experience an increase in the intensity of a weather hazard. If the vehicle is expected to encounter heavy rain as it reaches a waypoint, a weather notification will be generated, allowing the end product to display a heavy rain notification before the driver reaches that waypoint. The trigger for when to display the notification for a waypoint is left up to the product developer and could be based, for example, on a fixed geometry (geofence) or a selectable distance to the waypoint.
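A product developer's notification trigger might look like the sketch below, which scans per-waypoint hazard indices for an increase past a threshold. The waypoint structure and the `hazardIndex` field name are assumptions for the sketch, not the documented response schema.

```python
def waypoints_to_notify(waypoints, threshold=3):
    """Return indices of waypoints where the hazard index rises to or
    above the threshold after being below it (an increase in intensity)."""
    notify = []
    prev = 0
    for i, wp in enumerate(waypoints):
        idx = wp.get("hazardIndex", 0)
        if idx >= threshold and prev < threshold:
            notify.append(i)  # hazard intensified here: raise a notification
        prev = idx
    return notify

# Example: hazard builds at the third waypoint, so only that one notifies.
route = [{"hazardIndex": 0}, {"hazardIndex": 1}, {"hazardIndex": 4},
         {"hazardIndex": 4}, {"hazardIndex": 1}]
alerts = waypoints_to_notify(route)  # [2]
```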

Azure Maps services are designed to be used in combination with one another to build rich geospatial applications and insights as part of your Azure Maps account. Azure Maps Weather Services are a new pillar of intelligence added to Azure Maps Location Based Services, Azure Maps Mobility Services, and Azure Maps Spatial Operations, all accessible via the Azure Maps Web and Android SDKs and REST endpoints.

These new weather services are available to all Azure customers, including both pay-as-you-go and enterprise agreements. Simply navigate to the Azure portal, create your Azure Maps account, and start using Azure Maps Weather Services.

We want to hear from you

We are always working to grow and improve the Azure Maps platform and want to hear from you! We’re here to help and want to make sure you get the most out of the Azure Maps platform.

Have a feature request? Add it or vote up the request on our feedback site.
Having an issue getting your code to work? Have a topic you would like us to cover on the Azure blog? Ask us on the Azure Maps forums.
Looking for code samples or wrote a great one you want to share? Join us on GitHub.
To learn more, read the Azure Maps documentation.

Source: Azure