Announcing the general availability of larger, more powerful standard file shares for Azure Files

Better scale and more power for IT professionals and developers!

We're excited to announce the general availability of larger, more powerful standard file shares for Azure Files. Azure Files is a secure, fully managed cloud file storage service with a full range of data redundancy options and hybrid capabilities through Azure File Sync.

Here is a quick look at some of the improvements in capacity and performance for Azure Files standard file shares.

With the release of large file shares, a single standard file share in a general purpose account can now support up to 100 TiB of capacity, 10K IOPS, and 300 MiB/s of throughput. All premium file shares in Azure FileStorage accounts already support large file shares by default. If your workload is latency sensitive and requires a higher level of performance, consider the Azure Files premium tier. Visit the Azure Files scale limits documentation for more details.

What’s new?

Since the preview of large file shares, we have been working on making the Azure Files experience even better. Large file shares now offer:

Ability to upgrade existing general purpose storage accounts and existing file shares.
Ability to opt in to large file shares at the storage account level instead of the subscription level.
Expanded regional coverage.
Support for both locally redundant and zone-redundant storage.
Improvements in the performance and scale of sync to work better with large file shares. Visit the Azure File Sync scalability targets to stay informed of the latest scale limits.

Pricing and availability

The increased capacity and scale of standard file shares on your general purpose accounts come at zero additional cost. Refer to the pricing page for further details.

Currently, standard large file shares support is available for locally redundant and zone-redundant storage in 13 regions worldwide. We are quickly expanding coverage to all Azure regions. Stay up to date on region availability by visiting the Azure Files documentation.

Getting started

You no longer need to register your subscription for the large file shares feature.

New storage account

Create a new general purpose storage account in one of the supported regions with a supported redundancy option. While creating the storage account, go to the Advanced tab and enable the Large file shares feature. See the detailed steps on how to enable large file shares support on a new storage account. All new shares created under this account will, by default, have 100 TiB capacity with increased scale.


Existing storage account

On an existing general purpose storage account that resides in one of the supported regions, go to Configuration, enable the Large file shares feature, and select Save. You can now update the quota for existing shares under this upgraded account to more than 5 TiB. All new shares created under this upgraded account will, by default, have 100 TiB capacity with increased scale.


See detailed steps on how to enable large file shares support on an existing storage account.

Opting your storage accounts into the large file shares feature does not cause any disruption to your existing workloads, including Azure File Sync. Once opted in, you cannot disable the large file shares feature on your account.


Please share your feedback on the Azure Storage forum or send us an email. You can also post your ideas and suggestions about Azure Storage on our feedback forum.

Happy sharing!
Source: Azure

SAP on Azure–Designing for Efficiency and Operations

This is the final blog in our four-part series on Designing A Great SAP on Azure Architecture.

Robust SAP on Azure Architectures are built on the pillars of Security, Performance and Scalability, Availability and Recoverability, and Efficiency and Operations.

Within this blog, we will cover a range of Azure services and a new GitHub repository that can support operational efficiencies for your SAP applications running on Azure.

Let’s get started.

Simplifying SAP Shared Storage architecture with Azure NetApp Files

Azure NetApp Files (ANF) can be used to simplify your SAP on Azure deployment architecture, providing an excellent use case for high availability (HA) of your SAP shared files based on Enterprise NFS.

SAP Shared Files are critical for SAP systems with high availability requirements and more than one application server. Additionally, SAP HANA scale-out systems require a common set of shared files:

 /sapmnt, which stores SAP kernel files, profiles, and job logs.
 /hana/shared, which houses binaries, configuration files, and traces for SAP HANA scale-out.

Prior to Azure NetApp Files, SAP on Azure customers running Linux with high availability requirements had to protect the SAP Shared Files using Pacemaker clusters and block replication devices. These setups were complex to manage and required a high degree of technical skill to administer. With the introduction of Azure NetApp Files, a Pacemaker cluster can be removed from the architecture, which reduces landscape sprawl and maintenance effort. Moreover, there is no longer a need to stripe disks or configure block replication technologies for the SAP Shared Files. Instead, Azure NetApp Files volumes can be configured using the Azure portal, Azure CLI, or PowerShell and mounted to the SAP Central Services clusters. Azure NetApp Files volumes can also be resized on the fly and protected by way of storage snapshots.

To simplify your SAP on Azure deployment architecture, we have published two scenarios for high availability of your SAP System Central Services and SAP shared files based on Azure NetApp Files with NFS.

• High Availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications

• High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications

Optimizing Dev, Test and Sandbox deployments with Azure Connector for SAP LaMa

Within a typical SAP estate, several application landscapes are often deployed (e.g., ERP, SCM, and BW), and there is an ongoing need to perform SAP system copies and SAP system refreshes, such as creating new SAP project systems for technical or application releases, or periodically refreshing QA systems from production copies. The end-to-end process for SAP system copies and refreshes can be both time consuming and labor intensive.

SAP LaMa Enterprise Edition can support operational efficiencies in this area, as several of the steps involved in an SAP system copy or refresh can be automated. Our Azure Connector for LaMa enables copying, deletion, and relocation of Azure Managed Disks to help your SAP operations team perform SAP system copies and system refreshes rapidly, reducing manual effort.

In terms of virtual machine (VM) operations, the Azure Connector for LaMa can be used to reduce the running costs of your SAP estate on Azure. You can stop (deallocate) and start your SAP virtual machines, which enables you to run certain workloads with a reduced utilization profile, for example, scheduling through the LaMa interface so your SAP S/4HANA sandbox virtual machine is online from 08:00 to 18:00, 10 hours per day, instead of running 24 hours a day. Furthermore, the Azure Connector for LaMa also allows you to resize a virtual machine directly from within LaMa when performance demands arise.
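The savings from a reduced utilization profile are easy to estimate. The sketch below is illustrative arithmetic only; the hourly rate is a hypothetical placeholder, not an actual Azure price.

```python
# Illustrative estimate of compute savings from scheduling a sandbox VM
# to run 10 hours per business day instead of 24x7.
# HOURLY_RATE is a hypothetical placeholder, not an actual Azure price.

HOURLY_RATE = 1.50          # hypothetical $/hour for the VM size
HOURS_ALWAYS_ON = 24 * 30   # hours per month when running 24x7
HOURS_SCHEDULED = 10 * 22   # online 08:00-18:00 on ~22 business days

always_on_cost = HOURS_ALWAYS_ON * HOURLY_RATE
scheduled_cost = HOURS_SCHEDULED * HOURLY_RATE
savings_pct = 100 * (1 - scheduled_cost / always_on_cost)

print(f"Always on: ${always_on_cost:.2f}/month")
print(f"Scheduled: ${scheduled_cost:.2f}/month")
print(f"Savings:   {savings_pct:.0f}%")
```

Even at this modest rate, deallocating the sandbox outside business hours removes roughly two thirds of its monthly compute cost.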

Save Time and Reduce Errors by Automating SAP Deployments

The manual deployment of your SAP infrastructure and software installation can be time consuming, tedious and error prone. One of the major benefits of Azure is the ability to automate your SAP infrastructure deployment i.e. virtual machines, storage and the installation of your SAP software. Automation reduces errors and deviation and facilitates programmatic and accelerated SAP deployments. As a customer, you have a wide range of automation tools available natively on Azure such as Azure Resource Manager templates and you can also create deployment scripts via both PowerShell and Azure CLI. Moreover, you also have the option to leverage your favorite configuration management tools.

We have included some links below as a kick-starter around Azure automation for your SAP deployment.

 Sample ARM Templates:
 Sample Terraform and Ansible
 Sample SUSE Solution Templates

Get a Holistic View with Azure Monitor for SAP Solutions

SAP on Azure customers can now benefit from having a central location to monitor infrastructure telemetry as well as database metrics. We have enhanced our Azure Monitor functionality to include SAP Solutions monitoring. This enhancement on Azure Monitor covers both SAP on Azure virtual machines (VMs) and our bare-metal HANA Large Instances (HLI) offering. Azure Monitor for SAP Solution capabilities include:

 Monitoring the health & utilization of infrastructure
 Correlation of data between infrastructure and the SAP database for troubleshooting
 Trending data to identify patterns enabling proactive remediation

Azure Monitor for SAP Solutions does not run an agent on the SAP HANA VM or HLI. Instead, it deploys a managed resource group within your subscription, which contains resources that collect telemetry from the SAP HANA server and, in turn, ingest the data into Azure Log Analytics.

Some of the components deployed in the managed resource group are:

Azure Key Vault – used to store customer secrets such as database credentials
User-Assigned Identity – assigned to Key Vault as access policy
Log Analytics – workspace to collect and analyze monitoring telemetry
Collector Virtual Machine – runs the logic to collect telemetry from the SAP HANA database server

Our vision here is to enable a single point of monitoring and analysis where your infrastructure and SAP telemetry coincide, easing issue identification so that remediations can be implemented before any fatal outage occurs. A simple example is when the memory utilization trajectory is heading toward critical and SAP HANA starts unloading columns. When this happens, an alert is triggered to inform the administrators before the issue exacerbates.

As of October 2019, Azure Monitor for SAP can collect statistics from SAP HANA and is in private preview, so please reach out to your Microsoft account team if you are interested in this service.

Additional resources for optimizing your SAP deployments

The AzureCAT SAP Deployment Engineering team provides deep engagement on customer projects, helping our customers successfully deploy their SAP applications on Azure. Throughout the project lifecycle, there can be times when remediation or optimization of a customer’s SAP deployment architecture is required. For example:

 Lifting the Resilience of the SAP Deployment Architecture:

A scenario can arise where a customer has deployed their SAP system on single-instance virtual machines (99.9 percent SLA) rather than in a high availability configuration via Azure Availability Sets (99.95 percent SLA). Now the customer needs to move to an Availability Set configuration while retaining their existing network (IP, vNIC) and data disks.

 Performance Optimization:

An SAP on Azure customer already running in production would now like to benefit from Proximity Placement Groups to optimize network performance between their SAP application and database virtual machines.

 Availability Zones Selection:

A customer requires guidance on selecting the optimal Azure Availability Zones to minimize network round-trip time and facilitate a recovery point objective of zero (synchronous replication) for their SAP database.

To address the above topics (and more), we have created a new GitHub repository. This repository will be enduring, and our customers and partners can expect new scripts to land on an ongoing basis to support operational efficiencies of SAP deployments on Azure.


This blog closes out our series on Designing a Great SAP on Azure Architecture. We hope you’ve enjoyed our latest offerings for efficiently operating your SAP assets on Azure. As always, change is the only constant in the world of cloud, and we are here to accommodate that change and make it simpler.

As a next step, we recommend you check out our SAP on Azure Getting Started page.

For the previous blogs in the series you can refer to the links below:

Designing for Security
Designing for Performance and Scalability
Designing for Availability and Recoverability

Source: Azure

Microsoft Azure AI hackathon’s winning projects

We are excited to share the winners of the first Microsoft Azure AI Hackathon, hosted on Devpost. Developers of all backgrounds and skill levels were welcome to join and submit any form of AI project, whether using Azure AI to enhance existing apps with pre-trained machine learning (ML) models, or by building ML models from scratch.
Source: Azure

Azure Monitor adds Worker Service SDK, new ASP.NET core metrics

Application Insights from Azure Monitor empowers developers and IT professionals to observe, debug, diagnose, and improve their distributed services hosted on the cloud, on-premises, and through hybrid solutions.

The release of the Application Insights for ASP.NET Core 2.8.0 for web applications and the Application Insights for .NET Core Worker Service 2.8.0 for non-web applications delivers new value to developers including:

Support for more applications types.
New alertable metrics.
Support for ASP.NET Core 3.0.
Cross-vendor distributed tracing.

Support for more application types

The Application Insights Worker Service SDK supports the new ASP.NET Core 3.0 Worker Service template, and customer engagement on GitHub helped us prioritize this work. Beyond .NET Core Worker Service Applications, this SDK brings the full power of Application Insights to other non-web applications including Console Applications, Queue Processing, and Background Jobs. Get started with our step-by-step onboarding guide.

New alertable metrics

Event Counters allow you to observe and alert on new metrics including Time in Garbage Collection, Allocation Rate, and Thread Pool Queue Length. Event Counters expand the historical Windows performance counters to be cross-platform, covering Linux, macOS, and Windows. Application Insights now collects these metrics out of the box, making them easily observable and alertable.

Additionally, you can now observe CPU usage on Linux, macOS, and Windows with one-second latency using our popular Live Metrics Stream. This milestone brings our live metrics feature on Linux and macOS to parity with Windows, reinforcing our commitment to cross-platform feature parity.

Support for ASP.NET Core 3.0

Application Insights now supports ASP.NET Core 3.0 Applications when using Application Insights ASP.NET Core 2.8.0 SDK or higher.

Cross-vendor distributed tracing

Microsoft joins a growing list of vendors adopting W3C Trace Context. This means your traces will propagate across services instrumented with other application performance monitoring vendors who recognize the W3C Trace Context standard. As more vendors adopt the W3C Trace Context standard, the reach of your distributed tracing will expand.
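The W3C Trace Context standard defines a `traceparent` HTTP header with four hyphen-separated fields: version, trace-id, parent-id, and trace-flags. A minimal sketch of parsing one (the function name and return shape are our own, not part of any SDK):

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent header into its four fields.

    Format: version "-" trace-id "-" parent-id "-" trace-flags,
    e.g. 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
    """
    parts = header.split("-")
    if len(parts) != 4:
        raise ValueError("malformed traceparent header")
    version, trace_id, parent_id, flags = parts
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("invalid traceparent field lengths")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        # bit 0 of trace-flags marks the request as sampled
        "sampled": bool(int(flags, 16) & 0x01),
    }

ctx = parse_traceparent(
    "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
)
print(ctx["trace_id"], ctx["sampled"])
```

Because every vendor propagates the same header, the trace-id survives as a request crosses services instrumented with different monitoring products.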

Future plans

Application Insights ASP.NET Core 3.0 support in Azure App Service is scheduled to release in November.
Source: Azure

Azure SQL Database: Continuous innovation and limitless scale at an unbeatable price

More companies are choosing Azure for their SQL workloads, and it is easy to see why. Azure SQL Database is evergreen, meaning it does not need to be patched or upgraded, and it has a strong track record of innovation and reliability for mission-critical workloads. But, in addition to delivering unparalleled innovation, it is also important to provide customers with the best price-performance. Here, once again, SQL Database comes out on top.

SQL Database leads in price-performance for mission-critical workloads

GigaOm, an independent research firm, recently published a study where they tested throughput performance between Azure SQL Database and SQL Server on AWS RDS. SQL Database emerged as the price-performance leader for mission-critical workloads while costing up to 86 percent less than AWS RDS.1

The image above is a price-performance comparison from the GigaOm report. The price-performance metric is price divided by throughput (transactions per second, tps). Lower is better.
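The metric itself is simple division. The sketch below uses invented numbers purely to illustrate the calculation; they are not figures from the GigaOm study.

```python
def price_performance(three_year_cost: float, tps: float) -> float:
    """GigaOm-style price-performance: cost of running the platform
    continuously for three years divided by throughput in
    transactions per second. Lower is better."""
    return three_year_cost / tps

# Hypothetical platforms with made-up costs and throughput:
platforms = {
    "Platform A": (500_000, 700.0),
    "Platform B": (900_000, 650.0),
}
for name, (cost, tps) in platforms.items():
    print(f"{name}: ${price_performance(cost, tps):,.2f} per tps")
```

A platform can post higher raw throughput and still lose on this metric if its three-year cost is disproportionately larger.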

Customers like H&R Block found it easy to extend their on-premises experience to Azure, where they tapped into new levels of performance, scalability, and flexibility.

“SQL Database managed instance gives us a smooth migration path for moving existing workloads to Azure with minimal technical reengineering. All the applications have a target architecture in Azure SQL Database so they can take advantage of zone awareness and scale up or down to meet changing demands in a cost-optimized way for our seasonal business.” – Sameer Agarwal, Manager of Enterprise Data Analytics, H&R Block

As you adopt the cloud and migrate your data workloads, Azure SQL Database is a cost-effective solution, yielding up to a 212 percent return on investment with a payback period of as little as six months. According to The Total Economic Impact™ of Microsoft Azure SQL Database Managed Instance, when you add it up, customers pay less when they bring their SQL workloads to Azure.

Innovation powers limitless scale and performance for your mission-critical workloads

Our proven track record of innovation is built on the SQL Server engine, which has evolved with market trends and been perfected over 25 years. This has resulted in the most comprehensive and consistent surface area across on-premises, cloud, and edge environments. Our most recent investments provide the highest compatibility with on-premises SQL Server applications, remove the limits to application growth, and unleash meaningful productivity gains with built-in intelligence.

SQL Database Hyperscale enables limitless scale that goes far beyond other cloud providers, breaking through the resource constraints of modern application development. Hyperscale eliminates the challenges often seen with very large workloads, offering virtually instantaneous backups and the ability to restore databases of hundreds of terabytes within minutes. Customers can now significantly expand the potential for application growth without being limited by storage size.

Built-in AI lets customers put their databases on auto-pilot, with features that are trained on millions of databases to optimize performance and security on their behalf. As the apps run, the database continuously learns their unique patterns, adaptively tuning performance and automatically improving reliability and data protection. Features like automatic tuning and advanced data security are on the job 24×7, so customers can focus more on driving their business than managing their databases.

“Azure SQL Database requires minimum management effort and it is scalable, a must for our type of applications. The ‘Intelligent Performance’ monitoring with its Recommendation engine and its Query Performance Insight is like having a DBA on staff, 24×7, looking at optimizing our database. We could not have it done better!” – Cezar Nasui, Director of Operations and Special Projects, Centris

SQL Database provides enterprise-grade reliability with industry-leading availability guarantees, up to 99.995 percent. It also provides the only 100 percent business continuity SLA in the industry for a relational database service. With built-in high availability using Always On technology, these guarantees represent our commitment to ensuring customers’ data is safe and the applications and processes their businesses rely upon continue running in the face of a disruptive event.

Get started with SQL in Azure

SQL databases are simply best on Azure, making it the natural destination for customers to help secure and modernize their SQL Server databases. Learn more about why SQL Server is best on Azure or get started for free.

1Price-performance claim based on data from a study commissioned by Microsoft and conducted by GigaOm in August 2019. The study compared price performance between a single, 80 vCore, Gen 5 Azure SQL Database on the business-critical service tier and the db.r4.16xlarge offering for SQL Server on AWS RDS. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E), and is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in East US for Azure SQL Database and US East (Ohio) for AWS RDS as of August 2019. Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test. Actual results and prices may vary based on configuration and region.
Source: Azure

CIS Azure Security Foundations Benchmark open for comment

One of the best ways to speed up securing your cloud deployments is to focus on the most impactful security best practices. Best practices for securing any service begin with a fundamental understanding of cybersecurity risk and how to manage it. As an Azure customer, you can leverage this understanding by using security recommendations from Microsoft to help guide your risk-based decisions as they’re applied to specific security configuration settings in your environment.

We partnered with the Center for Internet Security (CIS) to create the CIS Microsoft Azure Foundations Benchmark v1. Since that submission, we’ve received good feedback and want to share an updated set of recommendations with the community for comment, in a document we call the Azure Security Foundations Benchmark. This benchmark contains recommendations that help improve the security of your applications and data on Azure. The recommendations in this document will feed into updates to the CIS Microsoft Azure Foundations Benchmark v1 and are anchored in the security best practices defined by the CIS Controls, Version 7.

In addition, these recommendations are or will be integrated into Azure Security Center, and their impact will be surfaced in the Azure Security Center Secure Score and the Azure Security Center Compliance Dashboard.

We want your feedback on this document. There are two ways you can let us know what you think:

Send us an email.
Fill in the feedback form.

The Azure Security Foundations Benchmark is now in the draft stage, and we’d like to get your input on this effort. Specifically, we’d like to know:

Does this document provide you with the information needed to understand how to define your own security baseline for Azure based resources?
Does this format work for you? Are there other formats that would make it easier for you to use the information and act on it?
Do you currently use the CIS Controls as a framework and the current edition of the CIS Microsoft Azure Foundations Benchmark?
What additional information do you need on how to implement the recommendations using Azure security related capabilities?
Once we have the final version of the benchmark ready, we will be integrating it with the Azure Security Center Compliance Dashboard. Does this meet your requirements for monitoring Azure resources based on CIS Benchmarks?

The Azure Security Foundations Benchmark team wants to hear from you! You can connect with us via email or the feedback form.

What’s in the Azure Security Foundations Benchmark document

The benchmark document is divided into three main sections:

Overview information.
Security recommendations.
Security implementation in Azure services.

The Overview information provides background on why we put this document together, how you can use it to improve your security posture in Azure, and some key definitions of benchmark terminology.

The security recommendations are the cornerstone of the document. In this phase, we cover security recommendations in the following areas:

Identity and access management
Data protection

The recommendations are surfaced in tables like those seen in the image below.

The last section shows how the Azure security recommendations are implemented in a selection of core Azure services. The implementations include links to documents that will help you understand how to apply each component of the benchmark to improve your security.

Implementation information is contained in tables as seen below.

We hope you find this information useful and thank you in advance for your input on how we can make this document more useful for you and your organization! Remember to send us your feedback via email on the Azure Security Foundations Benchmark.
Source: Azure

Leveraging Cognitive Services to simplify inventory tracking

Who spends their summer at the Microsoft Garage New England Research & Development Center (or “NERD”)? The Microsoft Garage internship seeks out students who are hungry to learn, not afraid to try new things, and able to step out of their comfort zones when faced with ambiguous situations. The program brought together Grace Hsu from Massachusetts Institute of Technology, Christopher Bunn from Northeastern University, Joseph Lai from Boston University, and Ashley Hong from Carnegie Mellon University. They chose the Garage internship because of the product focus—getting to see the whole development cycle from ideation to shipping—and learning how to be customer obsessed.

Microsoft Garage interns take on experimental projects in order to build their creativity and product development skills through hacking new technology. Typically, these projects are proposals that come from our internal product groups at Microsoft, but when Stanley Black & Decker asked if Microsoft could apply image recognition for asset management on construction sites, this team of four interns accepted the challenge of creating a working prototype in twelve weeks.

Starting with a simple request for leveraging image recognition, the team conducted market analysis and user research to ensure the product would stand out and prove useful. They spent the summer gaining experience in mobile app development and AI to create an app that recognizes tools at least as accurately as humans can.

The problem

In the construction industry, it’s not unusual for contractors to spend over 50 hours every month tracking inventory, which can lead to unnecessary delays, overstocking, and missing tools. Altogether, a large construction site can lose more than $200,000 worth of equipment over the course of a long project. Addressing this problem today is an unstandardized mix of approaches that typically involves barcodes, Bluetooth, RFID tags, and QR codes. The team at Stanley Black & Decker asked, “wouldn’t it be easier to just take a photo and have the tool automatically recognized?”

Because there are many tool models with minute differences, recognizing a specific drill, for example, requires reading a model number like DCD996. Tools can also be assembled in multiple configurations, such as with or without a bit or battery pack attached, and can be viewed from different angles. You also need to consider the range of lighting conditions and possible backgrounds you’d encounter on a typical construction site. It quickly becomes a very interesting computer vision problem.


How they hacked it

Classification algorithms can easily be trained to strong accuracy when identifying distinct objects, like differentiating between a drill, a saw, and a tape measure. The harder question was whether a classifier could accurately distinguish between very similar tools like the four drills shown above. In the first iteration of the project, the team explored PyTorch and Microsoft’s Custom Vision service. Custom Vision appeals to users by not requiring a high level of data science knowledge to get a working model off the ground, and with enough images (roughly 400 for each tool), it proved to be an adequate solution. However, it immediately became apparent that manually gathering this many images would be challenging to scale across a product line with thousands of tools. The focus quickly shifted to finding ways of synthetically generating the training images.

For their initial approach, the team did both three-dimensional scans and green-screen renderings of the tools. These images were then overlaid on random backgrounds to mimic real photographs. While this approach seemed promising, the quality of the resulting images proved challenging.

In the next iteration, in collaboration with Stanley Black & Decker’s engineering team, the team explored a new approach using photo-realistic renders from computer-aided design (CAD) models. They were able to use relatively simple Python scripts to resize, rotate, and randomly overlay these images on a large set of backgrounds. With this technique, the team could generate thousands of training images within minutes.
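The compositing step the team describes can be sketched in a few lines. The example below is a dependency-free toy stand-in: images are 2D lists of pixel values, and a rendered "tool" patch is pasted at a random offset onto each background to mass-produce labeled training samples. A real pipeline would use an imaging library such as Pillow for resizing and rotation, and the model name is only an example.

```python
import random

def overlay(background, patch, top, left):
    """Paste patch onto a copy of background at (top, left)."""
    out = [row[:] for row in background]
    for r, prow in enumerate(patch):
        for c, px in enumerate(prow):
            out[top + r][left + c] = px
    return out

def generate_samples(backgrounds, patch, label, n, rng):
    """Produce n (image, label) pairs with the patch at random offsets."""
    samples = []
    ph, pw = len(patch), len(patch[0])
    for _ in range(n):
        bg = rng.choice(backgrounds)
        top = rng.randrange(len(bg) - ph + 1)
        left = rng.randrange(len(bg[0]) - pw + 1)
        samples.append((overlay(bg, patch, top, left), label))
    return samples

rng = random.Random(42)
backgrounds = [[[0] * 8 for _ in range(8)], [[1] * 8 for _ in range(8)]]
drill_patch = [[9, 9], [9, 9]]  # stand-in for a rendered CAD image
data = generate_samples(backgrounds, drill_patch, "DCD996", 1000, rng)
print(len(data))
```

Swapping the toy lists for real renders and backgrounds is what lets thousands of labeled training images be generated within minutes.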


On the left is an image generated in front of a green screen versus an extract from CAD on the right.

Benchmarking the iterations

The Custom Vision service offers reports on the accuracy of the model as shown below.

For a classification model that targets visually similar products, a confusion matrix like the one below is very helpful. A confusion matrix visualizes the performance of a prediction model by comparing the true label of each class (rows) with the label output by the model (columns). The higher the scores on the diagonal, the more accurate the model. High values off the diagonal help data scientists understand which two classes the trained model is confusing with each other.

Existing Python libraries can be used to quickly generate a confusion matrix with a set of test images.
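In practice, libraries such as scikit-learn's `confusion_matrix` do this in one call; a minimal stdlib version makes the mechanics explicit (the tool labels below are illustrative):

```python
def confusion_matrix(true_labels, predicted_labels, classes):
    """Rows = true class, columns = predicted class."""
    index = {c: i for i, c in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in classes]
    for t, p in zip(true_labels, predicted_labels):
        matrix[index[t]][index[p]] += 1
    return matrix

classes = ["DCD996", "DCD991", "DCD796"]  # illustrative model numbers
true_y = ["DCD996", "DCD996", "DCD991", "DCD796", "DCD991"]
pred_y = ["DCD996", "DCD991", "DCD991", "DCD796", "DCD991"]
m = confusion_matrix(true_y, pred_y, classes)
for cls, row in zip(classes, m):
    print(cls, row)

# Accuracy is the diagonal mass; off-diagonal cells reveal which
# two visually similar models the classifier mixes up.
accuracy = sum(m[i][i] for i in range(len(classes))) / len(true_y)
print(f"accuracy = {accuracy:.2f}")
```

Here the single off-diagonal count shows a DCD996 being mistaken for a DCD991, exactly the kind of near-neighbor confusion this analysis is meant to expose.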

The result

The team developed a React Native application that runs on both iOS and Android and serves as a lightweight asset management tool with a clean, intuitive UI. The app adapts to varying degrees of Wi-Fi availability: when a reliable connection is present, the images taken are sent to the APIs of the trained Custom Vision model in the Azure cloud. In the absence of an internet connection, the images are sent to a local computer vision model.

These local models can be obtained using Custom Vision, which exports models to Core ML for iOS, TensorFlow for Android, or as a Docker container that can run on a Linux App Service in Azure. New products can be added to the machine learning model easily by exporting rendered images from CAD and generating synthetic images.
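The app's connection-aware routing can be sketched as a simple fallback. The function and stub predictors below are hypothetical stand-ins for the Custom Vision endpoint and the exported on-device model, not the team's actual code:

```python
def classify(image, online, cloud_model, local_model):
    """Prefer the cloud-hosted Custom Vision model when a reliable
    connection exists; otherwise fall back to the exported local model."""
    if online:
        try:
            return cloud_model(image), "cloud"
        except ConnectionError:
            pass  # degrade gracefully if the call fails mid-request
    return local_model(image), "local"

# Stub predictors standing in for the real models:
cloud = lambda img: ("DCD996", 0.97)
local = lambda img: ("DCD996", 0.91)

print(classify(b"<jpeg bytes>", online=True, cloud_model=cloud, local_model=local))
print(classify(b"<jpeg bytes>", online=False, cloud_model=cloud, local_model=local))
```

Routing on connectivity this way keeps the user experience identical on a construction site with no signal; only the confidence of the prediction may differ.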

Images in order from left to right: inventory checklist screen, camera functionality to send a picture to Custom Vision service, display of machine learning model results, and a manual form to add a tool to the checklist.

What’s next

Looking for an opportunity for your team to hack on a computer vision project? Search for an OpenHack near you.

Microsoft OpenHack is a developer-focused event where a wide variety of participants (Open) learn through hands-on experimentation (Hack), using challenges based on real-world customer engagements designed to mimic the developer journey. OpenHack is a premium Microsoft event that provides a unique upskilling experience for customers and partners. Rather than traditional presentation-based conferences, OpenHack offers a unique hands-on coding experience for developers.

The learning paths can also help you get hands-on with Cognitive Services.
Source: Azure

Introducing Azure Spring Cloud: fully managed service for Spring Boot microservices

As customers have moved their workloads to the cloud, we’ve seen a growth in the use of cloud-native architectures, particularly microservices. Microservice-based architectures help improve scalability and velocity, but implementing them can pose challenges. For many Java developers, Spring Boot and Spring Cloud have helped address these challenges, providing a robust platform with well-established patterns for developing and operating microservice applications. But creating and maintaining a Spring Cloud environment requires work, such as setting up the infrastructure for dynamic scaling, installing and managing multiple components, and wiring up the application to your logging infrastructure.

To help make it simpler to deploy and operate Spring Cloud applications, Microsoft has partnered with Pivotal to create Azure Spring Cloud.

Azure Spring Cloud is jointly built, operated, and supported by both Pivotal and Microsoft. This means that you can use Azure Spring Cloud for your most demanding applications and know that both Pivotal and Microsoft are standing behind the service to ensure your success.

High productivity development

Azure Spring Cloud abstracts away the complexity of infrastructure management and Spring Cloud middleware management, so you can focus on building your business logic and let Azure take care of dynamic scaling, security patches, compliance standards, and high availability.

With a few clicks, you can provision an Azure Spring Cloud instance. After configuring a couple of dependencies in your pom file, your Spring Cloud app is automatically wired up with Spring Cloud Config Server and Service Registry. Furthermore, you can deploy and scale Spring Boot applications in seconds.
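As a sketch of what "a couple of dependencies in your pom file" can look like, the following fragment uses the standard Spring Cloud client starters for Config Server and service registry. This is illustrative only; the exact artifacts and versions required by Azure Spring Cloud should be taken from the service documentation.

```xml
<!-- Illustrative only: standard Spring Cloud client starters for an app
     that consumes a Config Server and registers with a service registry. -->
<dependencies>
  <!-- Pull externalized configuration from Spring Cloud Config Server -->
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
  </dependency>
  <!-- Register with, and discover services from, the service registry -->
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
  </dependency>
</dependencies>
```

With these starters on the classpath, the managed service can inject the Config Server and registry endpoints so the application picks them up at startup without hand-written wiring.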

To accelerate your development experience, we provide support for the Azure Spring Cloud Maven plugin and VS Code extensions that optimize Spring development. In other words, you can use the tools that you already know and love.

Ease of monitoring

With out-of-the-box support for aggregating logs, metrics, and distributed app traces into Azure Monitor, you can easily visualize how your applications are performing, detect and diagnose issues across microservice applications and their dependencies, drill into monitoring data for troubleshooting, and gain a better understanding of what end users do with your apps.

Open source innovation with Spring integrations

Azure Spring Cloud sets up the compute foundation for cloud-native Spring applications. From there, Azure Spring Cloud makes it simple to connect to data services such as Azure SQL Database, MySQL, PostgreSQL, or Cosmos DB; to enable enterprise-grade end-user authentication and authorization using Azure Active Directory; to bind cloud streams with Service Bus or Event Hubs; and to load and manage secrets with Azure Key Vault. To save you the effort of manually figuring out dependencies and to eliminate boilerplate code, we’ve created a rich library of Spring integrations and starters for your Spring applications.
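As a hedged illustration of one such integration, the Key Vault secrets starter for Spring Boot surfaces vault secrets as ordinary Spring properties. The property keys below are those of the 2019-era `azure-keyvault-secrets-spring-boot-starter` and may differ in later versions; check the starter's documentation.

```
# application.properties (illustrative; property names may vary by starter version)
azure.keyvault.enabled=true
azure.keyvault.uri=https://<your-key-vault-name>.vault.azure.net/
```

A secret stored in the vault can then be injected like any other property, for example with `@Value("${my-secret-name}")`, so no Key Vault API calls appear in application code.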

Sign up for Azure Spring Cloud

Both Pivotal and Microsoft are looking forward to hearing feedback on the new Azure Spring Cloud from our joint customers. If you’re interested in joining the private preview, please submit your contact details here. To hear more from Pivotal on today’s announcement, head over to their blog and let us know what you think.

The service will be available in public preview, for all customers, before end of the calendar year.
Source: Azure

SAP on Azure–Designing for availability and recoverability

This is the third in a four-part blog series on Designing a great SAP on Azure Architecture.

Robust SAP on Azure architectures are built on the pillars of security, performance and scalability, availability and recoverability, and efficiency and operations.

We covered designing for performance and scalability previously and within this blog we will focus on availability and recoverability.

Designing for availability

Designing for availability ensures that your mission-critical SAP applications, such as SAP ERP or S/4HANA, have high-availability (HA) provisions applied. These HA provisions ensure the application is resilient to both hardware and software failures and that SAP application uptime meets your service-level agreements (SLAs).

Within the links below, you will find a comprehensive overview of Azure virtual machine maintenance versus downtime, where unplanned hardware maintenance events, unexpected downtime, and planned maintenance events are covered in detail.

Manage the availability of Linux Virtual Machines documentation

Manage the availability of Windows virtual machines in Azure

From an availability perspective the options you have for deploying SAP on Azure are as follows:

99.9 percent SLA for single-instance VMs with Azure premium storage. In this case, the SAP database (DB), central services ((A)SCS), and application servers are either running on separate VMs or consolidated on one or more VMs. A 99.9 percent SLA is also offered on our single-node, bare metal HANA Large Instances.
99.95 percent SLA for VMs within the same Azure availability set. The availability set enforces that the VMs within the set are deployed in separate fault and update domains, which in turn ensures the VMs are safeguarded against unplanned hardware maintenance events, unexpected downtime, and planned maintenance events. To ensure HA of the SAP application, availability sets are used in conjunction with Azure Load Balancer, guest operating system clustering technologies such as Windows Server Failover Clustering or Linux Pacemaker to facilitate short failover times, and synchronous database replication technologies (SQL Server Always On, HANA System Replication, etc.) to guarantee no loss of data. Additionally, configuring the SAP Enqueue Replication Server can mitigate against loss of the SAP lock table during a failover of the (A)SCS.
99.99 percent SLA for VMs within Azure availability zones. An availability zone in an Azure region is a combination of a fault domain and an update domain; the Azure platform recognizes this distribution across update domains to ensure that VMs in different zones are not updated at the same time during Azure planned maintenance events. Additionally, availability zones are physically separate zones within an Azure region, where each zone has its own power source, network, and cooling and is logically separated from the other zones within the region. This construct hedges against unexpected downtime due to a hardware or infrastructure failure within a given zone. By architecting the SAP deployment to leverage replication across zones, that is, DBMS replication (HANA System Replication, SQL Server Always On), SAP Enqueue Replication Server, and distributing the SAP application servers (for redundancy) across zones, you can protect the SAP system from the loss of a complete datacenter. If one zone is compromised, the SAP system will be available in another zone. For an overview of Azure availability zones and our latest Mv2 VM offering, you can check out this video.
HANA Large Instances are offered at an SLA of 99.99 percent when they are configured as an HA pair; this applies to both single-datacenter and availability zone deployments.
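To put these SLA tiers in perspective, each percentage translates into a maximum expected downtime. The short calculation below is illustrative only (note that Azure VM SLAs are actually defined per month; annual figures are shown here simply for comparison):

```python
# Translate an SLA percentage into the maximum downtime it permits.
# Illustrative only: Azure VM SLAs are defined monthly, not annually.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def max_downtime_hours_per_year(sla_percent: float) -> float:
    """Maximum downtime per year permitted by an SLA, in hours."""
    return HOURS_PER_YEAR * (1 - sla_percent / 100)

for sla in (99.9, 99.95, 99.99):
    hours = max_downtime_hours_per_year(sla)
    print(f"{sla}% SLA -> up to {hours:.2f} hours/year ({hours * 60:.0f} minutes)")
```

For example, moving from a 99.9 percent single-instance SLA to a 99.99 percent availability zone SLA reduces the permitted downtime by roughly a factor of ten.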

In the case of availability sets and availability zones, guest OS clustering is necessary for HA. We would like to use this opportunity to clarify the Linux Pacemaker fencing options on Azure to avoid split-brain of your SAP application; these are:

Azure Fencing Agent

Storage Based Death (SBD)

The Azure Fencing Agent is available on both Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), while SBD is supported on SLES but not on RHEL. For the shortest cluster failover times for SAP on Azure with Pacemaker, we recommend:

Azure Fencing Agent for SAP clusters built on RHEL.

SBD for SAP clusters built on SLES.

In the case of productive SAP applications, we strongly recommend availability sets or availability zones. Availability zones are an alternative to availability sets that provide HA with added resiliency to datacenter failures within an Azure region. Be mindful, however, that there is no guarantee of a certain distance between the building structures hosting different availability zones, and the physical distance between zones varies from one Azure region to another. Therefore, for deterministic application performance and the lowest network round-trip time (RTT), availability sets could be the better option.

Single-instance VMs can be a good fit for non-production SAP systems (project, sandbox, and test) which don’t have availability SLAs on the same level as production; this option also helps to minimize run costs.

Designing for recoverability

Designing for recoverability means being able to recover from data loss, such as a logical error on the SAP database, and from large-scale disasters such as the loss of a complete Azure region. When designing for recoverability, it is necessary to understand the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of your SAP application. Azure regional pairs are recommended for disaster recovery; they offer isolation and availability to hedge against the risks of natural or human disasters impacting a single region.

On the DBMS layer, asynchronous replication can be used to replicate your production data from your primary region to your disaster recovery (DR) region. On the SAP application layer, Azure-to-Azure Site Recovery can be used as part of an efficient, cost-conscious DR solution. You could also choose to architect a dual-purpose scenario on your DR side such as running a combined QA/DR system for a better return on your investments as shown below.

In addition to HA and DR provisions, an enterprise data protection solution for backup and recovery of your SAP data is essential.

Our first-party Azure Backup offering is certified for SAP HANA. The solution is currently in public preview (as of September 2019) and supports SAP HANA scale-up (data and log backup), with further scenarios such as data snapshots and SAP HANA scale-out to be supported in the future.

Additionally, the Azure platform supports a broad range of ISVs which offer enterprise data protection and management for your SAP applications. One such ISV is Commvault, with whom Microsoft has recently partnered to produce this whitepaper. A key advantage of Commvault is the IntelliSnap (data snapshot) capability, which offers instantaneous, application-consistent data snapshots of your SAP database; this is hugely beneficial for large databases with low RTO requirements. Commvault facilitates highly performant multi-streaming (backint) data backup directly to Azure Blob storage for SAP HANA scale-up, SAP HANA scale-out, and anyDB workloads. Your enterprise data protection strategy can include a combination of data snapshots and data backups, e.g., running daily snapshots and a data backup (backint) on the weekend. Below is a data snapshot executed via IntelliSnap against an SAP HANA database on an M128s (2 TB) VM; the snapshot duration is 20 seconds.

Within this blog we have summarized the options for designing SAP on Azure for Availability and Recoverability. When architecting and deploying your production SAP applications on Azure, it is essential to include availability sets or availability zones to support your mission critical SAP SLAs. Furthermore, you should apply DR provisions and enterprise data protection to secure your SAP application against the loss of a complete Azure region or data corruption.

Be sure to execute HA and DR testing throughout the lifecycle of your SAP on Azure project, and re-test these capabilities during maintenance windows once your SAP applications are in productive operation, e.g., annual DR drill tests.
Availability and Recoverability should be reviewed on an ongoing basis to incorporate the latest technologies and guidance on best practices from Microsoft.

In blog #4 in our series we will cover designing for efficiency and operations.
Source: Azure