Unlock cloud savings on the fly with autoscale on Azure

Unused cloud resources can be an unnecessary drain on your computing budget. Unlike with legacy on-premises architectures, in the cloud there is no need to over-provision compute resources for times of heavy usage.

Autoscaling is one of the value levers that can help unlock cost savings for your Azure workloads by automatically scaling up and down the resources in use to better align capacity to demand. This practice can greatly reduce wasted spend for those dynamic workloads with inherently “peaky” demand.

In some cases, workloads with occasionally high peak demand have extremely low average utilization, making them ill-suited for other cost optimization practices, such as rightsizing and reservations.

For periods when an app puts a heavier demand on cloud resources, autoscaling adds resources to handle the load and satisfy service-level agreements for performance and availability. And for those times when the load demand decreases (nights, weekends, holidays), autoscaling can remove idle resources to reduce costs. Autoscaling automatically scales between the minimum and maximum number of instances and will run, add, or remove VMs automatically based on a set of rules.
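
The rule-driven behavior described above can be sketched as a simple threshold check clamped to the configured instance range. This is a conceptual illustration only; the function name and thresholds are ours, not Azure Monitor's actual rule syntax:

```python
def evaluate_autoscale(current_instances: int, avg_cpu_percent: float,
                       min_instances: int = 2, max_instances: int = 10,
                       scale_out_above: float = 70.0,
                       scale_in_below: float = 30.0) -> int:
    """Apply simple threshold rules to a metric sample and return the new
    instance count, clamped to the configured [min, max] range."""
    if avg_cpu_percent > scale_out_above:
        current_instances += 1   # heavy load: add an instance
    elif avg_cpu_percent < scale_in_below:
        current_instances -= 1   # idle capacity: remove an instance
    return max(min_instances, min(max_instances, current_instances))
```

Real autoscale settings add refinements such as cooldown periods and separate metric windows for scale-out and scale-in, but the core decision is this clamp-to-range rule evaluation.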

Autoscaling is near-real-time cost optimization. Think of it this way: rather than build an addition to your house with extra bedrooms that will go unused most of the year, you have an agreement with a nearby hotel. Your guests can check in at any time, even at the last minute, and the hotel will automatically charge you only for the days they visit.

Not only does autoscaling take advantage of cloud elasticity so that you pay for capacity only when you need it, it also reduces the need for an operator to continually monitor the performance of a system and make decisions about adding or removing resources.

What services can you autoscale?

Azure provides built-in autoscaling using Azure Monitor autoscale for most compute options, including:

Azure Virtual Machine Scale Sets—see How to use automatic scaling and virtual machine scale sets.
Service Fabric—see Scale a Service Fabric cluster in or out using autoscale rules.
Azure App Service—see Scale instance count manually or automatically.
Azure Cloud Services (built-in autoscaling at the role level)—see How to configure autoscaling for a cloud service in the portal.

Azure Functions differs from the previous compute options because you don't need to configure any autoscale rules. The hosting plan you choose dictates how your function app is scaled:

With a consumption plan, your function app will scale automatically, and you will only pay for compute resources while your functions are running.
With a premium plan, your app will automatically scale based on demand using pre-warmed workers that run applications with no delay after being idle.
With a dedicated plan, you will run your functions within an App Service plan at regular App Service plan rates.

Azure Monitor autoscale provides a common set of autoscaling functionality for virtual machine scale sets, Azure App Service, and Azure Cloud Services. Scaling can be performed on a schedule or based on a runtime metric, such as CPU or memory usage.

Use the built-in autoscaling features of the platform if they meet your requirements. If not, carefully consider whether you really need more complex scaling features. Examples of additional requirements may include more granularity of control, different ways to detect trigger events for scaling, scaling across subscriptions, and scaling other types of resources.

Note that application design can impact how that app handles scale as a load increases. To review design considerations for scalable applications, including choosing the right data storage and VM size, and more, check out Design scalable Azure applications—Microsoft Azure Well-Architected Framework.

Also know that, in general, it is better to scale out than to scale up. Scaling up or down usually involves deprovisioning and downtime, while scaling out simply adds or removes instances. So, choose smaller instances when a workload is highly variable, and scale out to get the required level of performance.

You can set up autoscale through the Azure portal, PowerShell, the Azure CLI, or the Azure Monitor REST API.

Get started with autoscaling

With autoscaling, you can dynamically scale your apps to meet changing demand or anticipate loads with different schedules and set rules that trigger scaling actions. Regardless of how you set it up, the goal is to maximize the performance of your application and save money by not wasting server resources.
Source: Azure

Azure delivers strong MLPerf inferencing v2.0 results from 1 to 8 GPUs

Microsoft Azure is committed to providing its customers with industry-leading real-world AI capabilities. In December 2021, Microsoft Azure debuted its leadership performance with the MLPerf training v1.1 results, placing number one among cloud providers and number two overall at scale among all submitters. The building blocks of Azure's supercomputers were used to generate the results in our v2.0 submission for the MLPerf inferencing results published on April 6, 2022.

These industry-leading results are driven by Microsoft’s publicly available supercomputing capabilities designed for real-world AI inferencing workloads. Microsoft enables customers of all scales to deploy powerful AI solutions, whether at a focused local scale or at the scale of the largest supercomputers in the world.

Microsoft Azure’s publicly available AI inferencing capabilities are led by the NDm A100 v4, ND A100 v4, and NC A100 v4 virtual machines (VMs) that are powered by NVIDIA A100 SXM and PCIe Tensor Core graphics processing units (GPUs). These results showcase Azure’s commitment to making AI inferencing available to all in the most accessible way—while raising the bar for AI inferencing in Azure.

In our quest to continually provide the best technology for our customers, Azure has recently announced the preview for the NC A100 v4. With the introduction of the NC A100 v4 series, we have provided our customers with three different VM sizes ranging from one to four GPUs. From our benchmarking, we have seen more than twice the performance of the previous generation. Azure's customers can get access to these new systems today by signing up for the preview program.

Some highlights for this round of MLPerf inferencing submissions can be seen in the following tables.

Highlights from the results

ND96amsr A100 v4 powered by NVIDIA A100 80G SXM Tensor Core GPU

Benchmark | Samples/second | Queries/second | Scenarios
bert-99 | 27,500+ | ~22,500 | Offline and server
resnet | 300,000+ | ~200,000 | Offline and server
3d-unet | 24.87 | n/a | Offline

NC96ads A100 v4 powered by NVIDIA A100 80G PCIe Tensor Core GPU

Benchmark | Samples/second | Queries/second | Scenarios
bert-99 | ~6,300 | ~5,300 | Offline and server
resnet | 144,000 | ~119,600 | Offline and server
3d-unet | 11.7 | n/a | Offline

The tables above showcase three of the six benchmarks the team ran using NVIDIA A100 SXM and PCIe Tensor Core GPUs in the offline and server scenarios. Take a look at the full list of results for the various divisions.

Azure works closely with NVIDIA

The results were generated by deploying the environment using the VM offerings and Azure’s Ubuntu 18.04-HPC marketplace image. We worked closely with NVIDIA to quickly deploy the environment and perform benchmarks with industry-leading results in performance and scalability.

These results are a testament to Azure’s focus on offering scalable supercomputing for any workload while enabling our customers to utilize “on-demand” supercomputing capabilities in the cloud to solve their most complex problems. Visit the Azure Tech Community blog to read the steps to reproduce the results.

More about MLPerf

MLPerf is a consortium of AI leaders from academia, research labs, and industry whose mission is to “build fair and useful benchmarks” that provide unbiased evaluations of training and inference performance for hardware, software, and services—all conducted under prescribed conditions. To stay on the cutting edge of industry trends, MLPerf continues to evolve, holding new tests at regular intervals and adding new workloads that represent the state of the art in AI. MLPerf’s tests are transparent and objective, so users can rely on the results to make informed buying decisions. The industry benchmarking group, formed in May 2018, is backed by dozens of industry leaders. Its inferencing benchmarks are increasingly becoming the key tests that hardware and software vendors use to demonstrate performance. Take a look at the full list of results for MLPerf Inference v2.0.
Source: Azure

The future is on FHIR for SAS and Microsoft Azure

This blog has been co-authored by Steve Kearney, PharmD, Global Medical Director, SAS.

This blog is part of a series in collaboration with our partners and customers leveraging the newly announced Azure Health Data Services. Azure Health Data Services, a platform as a service (PaaS) offering designed to support Protected Health Information (PHI) in the cloud, is a new way of working with unified data: it provides care teams with a platform to support both transactional and analytical workloads from the same data store, and it enables cloud computing to transform how we develop and deliver AI across the healthcare ecosystem.

There is a dichotomy in health care technology. Despite new developments in imaging, diagnostics, treatment, and surgical techniques, the lack of data standardization in the industry has trapped health insights in functional silos. Providers and payers alike struggle to manually reconcile incompatible file formats, which slows the transfer of information and negatively impacts quality care and patient experience.

Microsoft, along with partners such as the global analytics software company SAS, is driving towards increased interoperability by enabling the use of standards such as Fast Healthcare Interoperability Resources (FHIR®). Together, SAS and Microsoft Azure are building deep technology integrations that unlock value by making disparate data and advanced analytics more accessible to health and life science organizations. With new capabilities such as the integration from Azure Health Data Services to SAS on Azure, the embedded AI capabilities of SAS Health are more efficient and secure, expanding the possibilities of patient-centric innovation and trusted collaboration across the health landscape.

FHIR puts the patient at the center of the health care ecosystem. When querying information in the previous HL7 format, the query is answered with the entire patient dataset, which must be parsed to find the information needed for predictive modeling. Additionally, the data would require harmonization within and across the organization, limiting the data available. In contrast, harmonized FHIR datasets persisting on Azure Health Data Services enable FHIR-based requests directed at the specific data points required, speeding up queries to near real time and protecting patient data.
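
A targeted FHIR search is just a parameterized REST call against a specific resource type rather than a full-record dump. As a minimal sketch, the helper below builds such a query URL; the endpoint is a placeholder and the function is illustrative, not part of any Azure or SAS SDK:

```python
from urllib.parse import urlencode

def fhir_search_url(base_url: str, resource_type: str, **params: str) -> str:
    """Build a FHIR search URL of the form GET [base]/[type]?param=value&..."""
    query = urlencode(params)
    return f"{base_url.rstrip('/')}/{resource_type}?{query}"

# Ask only for a patient's blood-pressure Observations instead of the whole record.
url = fhir_search_url(
    "https://example.azurehealthcareapis.com",  # placeholder FHIR endpoint
    "Observation",
    patient="Patient/123",
    code="http://loinc.org|85354-9",  # LOINC code for the blood pressure panel
)
```

The resulting request returns a bundle containing only the matching Observation resources, which is what makes near-real-time, point-of-care queries practical.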

While FHIR’s footprint in the industry is small compared to HL7’s, the global adoption of the FHIR standard is growing. Major electronic health records (EHR) companies like Cerner and Epic are moving quickly to support FHIR.1 Notably in the United States, the Centers for Medicare and Medicaid Services (CMS) has mandated its use for health insurance payers and providers.

Transform your analytical experience in the health cloud

The integration between Azure Health Data Services and SAS Health can be transformational for organizations that have struggled to operationalize analytics. Not only does this integration offer a technology that is secure, fast, and scalable, it also democratizes analytics by allowing a business or clinical user to query a patient dataset using a preset parameter or algorithm and return results within a clinical workflow.

The traditional view of health analytics is that it occurs outside the process of care and is in some way removed from the patient. That is changing thanks to secure health cloud environments like Azure Health Data Services, which present the opportunity for more real-time integration of patient and claims data. With the evolution of the citizen data scientist and growing interoperability, we now see a clearer path from analytics to improved health care outcomes.

The graphic below illustrates the role of health data analytic interoperability in health and life sciences. Ultimately, the use of diverse health data throughout the process of care in a shared cloud environment will enable better outcomes for us all.

SAS Health and Azure Health Data Services

The embedded-AI capabilities of SAS Health running on FHIR data ingested through Azure Health Data Services provide game-changing advantages across health care delivery and research.

Providers

SAS Health on FHIR gives speedy access to analytic insights within EHRs, parsing out only the information needed and allowing near-real-time results from, for example, pharmacy claims, laboratory results, or imaging. Predictive insights, such as medication adherence or emerging health risks, become more readily available through a secure FHIR-based exchange. Quality care and patient satisfaction increase when providers can integrate data across multiple systems and record types, including patient records and claims data, into a single view.

Payers

Payers governed by CMS are already mandated to transition to FHIR-based communication standards and are experiencing early wins. For example, adjudication of claims is one of the most time-consuming parts of the payer process. With FHIR, payers can securely query patient records to determine medical necessity of a service or procedure and whether appropriate authorization was obtained, cutting time dramatically in the process. With FHIR’s extensibility beyond the payer-provider core, pharmacy data can be queried to inform proactive disease management programs with specialty drugs and more real-time formulary approvals to meet patient needs.

Academic researchers

For clinical research, data sharing can be a common, time-consuming obstacle. FHIR-ready datasets can accelerate the generation of new health insights and expand the universe of data types for research, including social determinants of health, real-world data, genetics, device data from the internet of medical things, and more.

Ultimately, these innovations in health data analytic interoperability can make insights faster across the vast ecosystem of professionals who are committed to a healthier world. While technology is only one part of the solution, improving health begins with predicting future health risks and taking proactive steps to mitigate disease and promote physical and mental wellness.

Do more with your data with Microsoft Cloud for Healthcare

With Azure Health Data Services, health organizations can transform their patient experience, discover new insights with the power of machine learning and AI, and manage PHI data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.

We look forward to being your partner as you build the future of health.

Learn more about Azure Health Data Services.
Learn more about SAS Health on Azure.
Read our recent blog, “Microsoft launches Azure Health Data Services to unify health data and power AI in the cloud.”
Learn more about Microsoft Cloud for Healthcare.

FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office, and is used with its permission.

1Journal of the American Medical Informatics Association, Volume 28, Issue 11, November 2021, pages 2379–2384.
Source: Azure

Increase remote storage performance with Azure Ebsv5 VMs—now generally available

At Microsoft Ignite in November 2021, we announced the memory-optimized Ev5 Azure Virtual Machine (VM) series based on the 3rd Gen Intel Xeon Platinum 8370C processor. The Ev5 VMs are designed for memory-intensive business-critical applications, relational database servers, and in-memory data analytics workloads.

Today, we are announcing the general availability of the Ebsv5 VM series, a new addition to the Ev5 Azure VM family. The Ebsv5 and Ebdsv5 VMs offer up to 120,000 IOPS and 4,000 MBps of remote disk storage throughput. They also include up to 512 GiB of RAM and local SSD storage (up to 2,400 GiB). This new VM series provides up to a three-times increase in remote storage performance over previous VM generations and can help you consolidate existing workloads onto fewer VMs or smaller VM sizes for potential cost savings. Additionally, the Ebdsv5 series features a local disk while the Ebsv5 series does not, so you can best match your workload requirements. To check regional availability, please visit Microsoft Azure Products by Region.

The Ebsv5 and Ebdsv5 VM series

As customers transition their business-critical applications to the cloud, questions arise on how to strike a balance among the various requirements such as availability, business continuity, resilience, performance, cost, and complexity, to name a few. To offer the best-in-class service to customers, Microsoft partners with technology vendors such as Intel to embed their latest innovations within Azure IaaS. With this strong collaboration, Azure delivers continuous infrastructure efficiency and performance improvements that customers expect from the cloud.

For instance, customers usually deploy large database workloads such as online transaction processing systems, data warehousing applications, and analytical applications on the memory-optimized Ev5 VM series. While the Ev5 VMs meet the performance requirements of many business-critical applications, some workloads demand even higher VM-to-disk throughput and input/output operations per second (IOPS), now offered by the Ebsv5 VM series. Workloads requiring higher throughput and IOPS can migrate from the previous-generation Ev4 VM series, or from constrained-core vCPU Azure VMs, to the Ebsv5 VMs, reducing costs for both infrastructure and the licensed commercial software running on those instances.

Ebsv5 series VM specifications:

Size | vCPU | Memory (GiB) | Max uncached disk throughput, IOPS/MBps (Premium SSD) | Max uncached disk throughput, IOPS/MBps (Ultra disk) | Max burst uncached disk throughput, IOPS/MBps (Premium SSD) | Max burst uncached disk throughput, IOPS/MBps (Ultra disk)
Standard_E2bs_v5 | 2 | 16 | 5500/156 | 7370/156 | 10000/1200 | 13400/1200
Standard_E4bs_v5 | 4 | 32 | 11000/350 | 14740/350 | 20000/1200 | 26800/1200
Standard_E8bs_v5 | 8 | 64 | 22000/625 | 29480/625 | 40000/1200 | 53600/1200
Standard_E16bs_v5 | 16 | 128 | 44000/1250 | 58960/1250 | 64000/2000 | 85760/2000
Standard_E32bs_v5 | 32 | 256 | 88000/2500 | 117920/2500 | 120000/4000 | 160000/4000
Standard_E48bs_v5 | 48 | 384 | 120000/4000 | 160000/4000 | 120000/4000 | 160000/4000
Standard_E64bs_v5 | 64 | 512 | 120000/4000 | 160000/4000 | 120000/4000 | 160000/4000
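
To make the specifications concrete, here is a minimal sizing sketch in Python. The helper name is ours (not an Azure SDK call), and the limits are the uncached Premium SSD figures from the Ebsv5 table above; it picks the smallest size whose limits cover a workload's IOPS and throughput needs:

```python
from typing import Optional

# Uncached Premium SSD limits (IOPS, MBps) per Ebsv5 size, from the table above,
# listed smallest to largest so iteration order finds the smallest match first.
EBSV5_LIMITS = {
    "Standard_E2bs_v5": (5500, 156),
    "Standard_E4bs_v5": (11000, 350),
    "Standard_E8bs_v5": (22000, 625),
    "Standard_E16bs_v5": (44000, 1250),
    "Standard_E32bs_v5": (88000, 2500),
    "Standard_E48bs_v5": (120000, 4000),
    "Standard_E64bs_v5": (120000, 4000),
}

def smallest_ebsv5(iops_needed: int, mbps_needed: int) -> Optional[str]:
    """Return the smallest Ebsv5 size whose uncached Premium SSD limits
    cover the workload, or None if the series cannot satisfy it."""
    for size, (iops, mbps) in EBSV5_LIMITS.items():
        if iops >= iops_needed and mbps >= mbps_needed:
            return size
    return None
```

A real sizing exercise would also weigh memory, vCPU, and cost, but this shows how the published disk limits bound the consolidation decision.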

Ebdsv5 series VM specifications:

Note: the uncached IOPS and throughput specs are the same as for the Ebsv5 VMs.

Size | vCPU | Memory (GiB) | Temp storage (GiB) | Max cached disk throughput, IOPS/MBps
Standard_E2bds_v5 | 2 | 16 | 75 | 9000/125
Standard_E4bds_v5 | 4 | 32 | 150 | 19000/250
Standard_E8bds_v5 | 8 | 64 | 300 | 38000/500
Standard_E16bds_v5 | 16 | 128 | 600 | 75000/1000
Standard_E32bds_v5 | 32 | 256 | 1200 | 150000/1250
Standard_E48bds_v5 | 48 | 384 | 1800 | 225000/2000
Standard_E64bds_v5 | 64 | 512 | 2400 | 300000/4000

Customer testimonials

We had the opportunity to collaborate with various Azure customers during the preview period. Companies like SAS, Blue Yonder, and Silk tested the performance of the new VMs:

SAS is a leader in analytics. Through innovative software and services, SAS empowers and inspires customers around the world to transform data into intelligence.

"Microsoft has introduced the Ebdsv5 Azure Virtual Machines series for applications that require high IO throughput to process large volumes of data. We run several computationally and IO intensive tests to measure concurrent, mixed analytics workload performance. The Ebdsv5 VMs can offer increased IO throughput to external storage, and excellent overall performance to meet our SAS applications requirements. We are excited to start using the Ebdsv5 VMs to run SAS data and analytics solutions on Azure."—Bryan Harris, Executive VP and CTO, SAS

Blue Yonder provides supply chain management, manufacturing planning, retail planning, store operations, and category management offerings.

“At Blue Yonder we have successfully transitioned our applications to a Software as a Service deployment model on Azure. We are continuously improving the scalability and cost of our offerings. That is also why we were thrilled to participate in the preview of the new Ebdsv5 Azure VMs. These new VM-series provide us with optimal sizes for our workloads and are able to meet high IO-throughput requirements with strong CPU performance and large memory footprints at a cost-effective price. The new Ebdsv5 Azure VMs will allow us to run solutions up to 20 percent faster, run larger workloads, while also reducing the overall total cost of ownership.”—Jan Karstens, CVP SaaS, Evangelist, Blue Yonder

The Silk Platform allows customers to scale their larger database storage solution to deliver increased performance.

“Silk has validated the Ebsv5 Azure VM series for use in the Silk Cloud Platform to run Mission Critical database workloads. The fantastic performance of these VMs makes them ideal for applications that need to process large volumes of data. We have seen over 10GBytes/sec sustained throughput to a single Ebsv5 series VM, from the Silk Cloud Platform, with SQL Server database workloads. The Silk Cloud Platform aggregates the egress performance from multiple VMs to enable maximum ingress performance to a database VM. We are excited to onboard the new Ebsv5 VMs when they become generally available.”—Tom O’Neill, CTO, Silk

Getting started

You can learn more about the Ebsv5 and Ebdsv5 VMs by registering for the upcoming webinar and reading the documentation. You can also check out pricing for Windows and Linux.

If you need help selecting the best VM for your workload, try using the virtual machine selector.
Source: Azure

Now in preview: Azure Virtual Machines with Ampere Altra Arm-based processors

Up to 50 percent better price-performance than comparable x86-based virtual machines (VMs) for scale-out workloads.

The demand for compute capacity to sustain business modernization and digital transformation initiatives continues to grow. Organizations are facing a complex set of challenges as they deploy a broad range of workloads globally, from the edge to the cloud. There is also a need for a new breed of operationally efficient cloud-native computing solutions that can meet this demand without a massive growth in infrastructure footprint and energy consumption.

To address some of these challenges, Microsoft is announcing the preview of an Azure Virtual Machines series featuring the Ampere Altra Arm-based processor. The new VMs are engineered to efficiently run scale-out workloads, web servers, application servers, open-source databases, cloud-native as well as rich .NET applications, Java applications, gaming servers, media servers, and more. The new VM series includes the general-purpose Dpsv5 and memory-optimized Epsv5 VMs, which can deliver up to 50 percent better price-performance than comparable x86-based VMs. You can request access to the preview by filling out this form.

The new Azure Virtual Machines, featuring the Ampere Altra Arm-based processor, further extend our portfolio of compute solutions to help customers manage complexity and seamlessly run modern, dynamic, and scalable applications. Azure customers will benefit from the improvements the new VMs provide in terms of scalability, performance, and operational efficiency.

One customer is Amadeus, the leading IT provider for the global travel industry. Their research and development team gained early access to the preview and is excited about the potential of the offering.

“We power better journeys through travel technology. To achieve that, we design and deliver the most complex, trusted, and critical systems that our customers need”, said Denis Lacroix, SVP Cloud Transformation Program at Amadeus. “Travelers demand that their needs are met efficiently and quickly, and that they receive a consistent, personalized experience through every step of their journeys, from inspiration to search and booking, to ticketing, check-in, and arriving home. With Azure Arm64 VMs, we will be able to deliver higher throughput and even better experiences than the x86 VM that we’ve used in the past. Azure Arm64 VM series have proven to be a reliable platform for our applications, and we’ve accelerated our plans to deploy Arm64-based Azure solutions.”

A growing solution ecosystem

The Dpsv5 and Epsv5 Azure VM-series feature the Ampere Altra Arm-based processor operating at up to 3.0 GHz. The new VMs provide up to 64 vCPUs and include VM sizes with 2 GiB, 4 GiB, and 8 GiB of memory per vCPU, up to 40 Gbps networking, and optional high-performance local SSD storage.

The VMs currently in preview support Canonical Ubuntu Linux, CentOS, and Windows 11 Professional and Enterprise Edition on Arm. Support for additional operating systems including Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Debian, AlmaLinux, and Flatcar is on the way.

"We see companies using Arm based architectures as a way of reducing both cost and energy consumption. It's a huge step forward for those looking to develop with Linux on Azure and we are pleased to partner with Microsoft to offer Ubuntu images."—Alexander Gallagher, Vice President of Public Cloud, Canonical

"Red Hat was one of the early leaders in creating common standards around Arm-based platforms, helping to ultimately bring Arm processors to the datacenter and beyond. This aligns with Red Hat’s long-standing commitment to giving our customers a broad set of choices to meet their unique enterprise computing needs, which extends to choice of architecture on-premises and in public clouds. We look forward to supporting Ampere Arm instances on Microsoft Azure as well as continuing our collaboration around the evolution of these platforms with key partners like Microsoft.”—Maryam Zand, Vice President, Cloud Partners, Red Hat

“SUSE has played a significant and active role in the Arm ecosystem, supporting the Arm 64-bit architecture and the Ampere Altra server instances.  SUSE is excited to partner with Microsoft Azure in supporting the Dpsv5 and Epsv5 Azure VM-series based on the Ampere Altra Arm-based server instances in our upcoming SUSE Linux Enterprise Server 15 SP4 release.  Arm-optimized solutions in the cloud offer significant market potential as enterprises improve time to value and scale-out cloud environments with Azure Virtual Machines.  We look forward to continued collaboration with Microsoft Azure.”—Dr. Thomas Di Giacomo, Chief Technology and Product Officer, SUSE

We are also excited about the collaboration with Ampere and Arm. We have been working together to help Azure customers build and manage modern applications at cloud scale.

“Microsoft’s preview of their new Ampere Altra Azure Virtual Machines will provide customers with a first-hand look at its leadership performance across cloud workloads of all types. We have seen rapid growth in the adoption of our Ampere Cloud Native Processors, and this further expands their global scale and availability. Not only do Ampere Altra processors deliver new levels of performance to the cloud, but they are also the efficient and sustainable choice.”—Jeff Wittich, Chief Product Officer Ampere

“Organizations are shifting to a cloud-first approach as modern scale-out workloads diversify, emphasizing the importance of price-performance and power efficiency. The new Microsoft Azure VMs, powered by the Arm Neoverse™-based Ampere Altra platform, highlight our deep collaboration with industry change-makers, and deliver on the power of choice to the cloud computing market.”—Chris Bergey, SVP and GM, Infrastructure Line of Business, Arm.

The next generation of computing technology needs to be designed from the ground up for cloud-native software technologies like microservices, containers, and serverless. To that end, customers will be able to deploy and manage containerized applications with Azure Kubernetes Service (AKS) running on Ampere Altra Arm-based processors.

“As we continue to see customers adopting AKS as their Cloud Native compute platform, providing the price performance of the Ampere Arm-based processor through a consistent managed Kubernetes API gives them the ability to migrate their workloads to drive further efficiencies as they scale up their cloud footprint.”—Sean McKenna, Group Product Manager AKS, Microsoft
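
To target an Arm-based node pool from a containerized workload on AKS, a pod spec can select nodes by the standard Kubernetes architecture label. This is an illustrative config fragment; the pod name and image are our examples, and the image must be published for arm64:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: arm64-sample
spec:
  nodeSelector:
    kubernetes.io/arch: arm64   # schedule only onto Arm-based nodes
  containers:
  - name: web
    image: mcr.microsoft.com/dotnet/samples:aspnetapp   # illustrative multi-arch image
    ports:
    - containerPort: 8080
```

Because the Kubernetes API surface is identical across architectures, existing manifests need only this label (or a multi-arch image) to take advantage of the Arm-based node pools.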

Developer platforms and tools

Most major developer platforms and languages already provide, or are gearing up to provide, Arm support and the inherent benefits that this processor architecture brings.

The modern .NET platform introduced native support for the Arm architecture on Linux starting with .NET 5 and has built upon that with the recent .NET 6 release. With C# 10 and F# 6, .NET 6 delivers language improvements that simplify your code. .NET 6 also adds a dynamic profile-guided optimization (PGO) system that delivers deep optimizations only possible at runtime, driving significant performance gains that can reduce the cost of running cloud services in Azure, along with improved cloud diagnostics and access to many new APIs. With the introduction of native support for Arm in the .NET Framework 4.8.1 (currently in preview and available as part of the latest Windows 11 Insider Preview builds), investments in the vast ecosystem of .NET Framework apps can also now leverage the benefits of running these workloads on Arm.

The latest Microsoft Visual C++ tools (currently in preview and available as part of Visual Studio 17.2 previews) allow you to not just run your apps, but also build natively for Arm, on Arm.

Java has played a critical role in democratizing cross-platform development. With Microsoft's recent JEP 388 contribution to OpenJDK, Java applications can now run on a wider range of Arm systems with no additional changes.

Java developers can enjoy the development experience they are familiar with while building and running their applications with the Microsoft Build of OpenJDK. Microsoft provides binaries for Windows, Linux, and macOS on compatible Arm hardware, for Java 11 and Java 17.

Last, but not least, the totally free Visual Studio Code editor running natively on Arm enables you to harness the power of the cloud for not just your production environment, but now also for your development environment.

General purpose and memory intensive workloads

The new Dpsv5 VM-series are engineered to run several Linux enterprise workloads such as web servers, application servers, open-source databases, .NET applications, Java applications, gaming servers, media servers, and more.

We are also introducing the Dpldsv5 VM-series, which provides 2 GiB of memory per vCPU and offers a combination of vCPUs, memory, and local storage that can cost-effectively run workloads not requiring larger amounts of RAM per vCPU.

Finally, the new Epsv5 VM sizes can meet the requirements associated with memory-intensive Linux-based workloads including open-source databases, in-memory caching applications, gaming, and data analytics engines.

Series | vCPUs | Memory (GiB) | Local disk (GiB) | Max data disks | Max NICs
Dpsv5-series | 2–64 | 8–208 | n/a | 4–32 | 2–8
Dpdsv5-series | 2–64 | 8–208 | 75–2,400 | 4–32 | 2–8
Dplsv5-series | 2–64 | 4–128 | n/a | 4–32 | 2–8
Dpldsv5-series | 2–64 | 4–128 | 75–2,400 | 4–32 | 2–8
Epsv5-series | 2–32 | 16–208 | n/a | 4–32 | 2–8
Epdsv5-series | 2–32 | 16–208 | 75–2,400 | 4–32 | 2–8

The Dpsv5, Dplsv5, and Epsv5 VM-series also offer options with no temporary storage at lower price points. You can attach Standard SSDs, Standard HDDs, and Premium SSDs to any of the VMs currently in preview, with Ultra Disk storage support coming soon. Virtual Machine Scale Sets are also supported.

Spot Virtual Machines are available; however, Azure Reserved Virtual Machine Instances pricing will be offered only after the VMs become generally available. Prices vary by region.

Learn more about the new Azure Virtual Machines and request access to the preview

The preview is initially available in the West US 2, West Central US, and West Europe Azure regions.

To learn more, register for the upcoming webinar; to request access to the preview, fill out this form.

Additional resources

•    Ampere blog.
•    Arm blog.
•    Microsoft’s binary distribution of the OpenJDK and related support.
•    Azure Virtual Machines pricing.
Source: Azure

Empowering space development off the planet with Azure

Any developer can be a space developer with Azure. Microsoft has a long history of empowering the software development community. We have the world’s most comprehensive developer tools and platforms, from GitHub to Visual Studio, and we support a wide range of industries and use cases across healthcare, financial services, critical industries, and now space.

As Microsoft expands its focus toward space, we are bringing the power, approachability, and security of our developer story to the next frontier. Microsoft is empowering developers with a platform for on-orbit compute at the ultimate edge, so that spacecraft running AI workloads are connected to the hyperscale Azure cloud.

We are reducing the barriers to entry for space application development and increasing the flexibility and modularity of software solutions, enabling those building space workloads to easily leverage the productivity of our developer tools and their integration with Azure services to develop, analyze, deploy, and operate space applications in orbit and on the ground.

Today we are bringing new partnerships and capabilities to the development community, including:

NASA and Hewlett Packard Enterprise (HPE) are testing AI at the ultimate edge for astronaut safety.
New partnerships are bringing development capabilities to on-orbit compute:
Unlocking new on-orbit climate data applications with Thales Alenia Space (TAS).
Developing new technologies with Loft Orbital to demonstrate re-taskable satellite functions and seamless connectivity to the terrestrial cloud.
Demonstrating reconfigurable on-orbit compute and AI processing with Ball Aerospace.
Rapidly analyzing spaceborne data with the new reference architecture for Azure Orbital with Azure Synapse.
Empowering analysts with newly integrated Blackshark.ai geospatial models, now available with Azure Orbital.

Testing AI for Astronaut Safety at the ultimate edge

Microsoft, NASA, and HPE developed an AI workload test to run on the International Space Station (ISS) that could detect damage to astronaut equipment.

Using Microsoft’s cloud computing platform, NASA and Microsoft created a computer vision application that identifies the condition of astronauts’ space gloves. Once trained in the cloud, the app was deployed to the HPE Spaceborne Computer-2, an AI-enabled software and hardware platform aboard the ISS, and then operated at the ultimate edge, enabling both local and remote analysis of glove condition.

Learn more about this project today.

On-orbit partnerships

Thales Alenia Space unlocks new on-orbit climate data applications with Microsoft to gather unmatched Earth observation insights.

Microsoft is partnering with Thales Alenia Space to demonstrate and validate on-orbit compute technologies with a demonstration onboard the International Space Station (ISS). Thales Alenia Space, a joint venture between Thales (67 percent) and Leonardo (33 percent), is the leader in orbital infrastructures and is developing high-power, edge-computing solutions for space. Microsoft and Thales Alenia Space will deploy a powerful on-orbit computer, an on-orbit application framework, and high-performance Earth observation sensors to unlock new on-orbit climate data processing applications for the benefit of our planet’s sustainability.

In collaboration with Microsoft Research (MSR), Microsoft and Thales Alenia Space will work with research teams in remote sensing, computer vision, and climate science to demonstrate the potential of next-generation on-orbit compute for Earth observation. This space edge computing capacity will allow faster, to-the-point Earth observation insights to be gathered, immediately applicable to our planet’s surveillance, understanding, and protection. This joint collaboration comes a year after the integration of Deeper Vision, an Earth observation data analytics software by Thales Alenia Space, into Azure Space, and marks a strong milestone toward the joint strategic ambitions of Microsoft and Thales Alenia Space, which have just signed a Memorandum of Understanding on geospatial solutions, digital ground segment, and space edge computing.

New partnership with Loft Orbital to advance space edge computing and software deployment to orbit.

The Microsoft and Loft Orbital partnership will enable a new way to develop, test, and validate software applications for space systems in Microsoft Azure, and then seamlessly deploy them to satellites in orbit using Loft's space infrastructure tools and platforms. This solution also offers more efficient paths to flight for modern ‘massless’ payloads, where parties needing space capabilities can leverage shared on-orbit hardware rather than having to build and launch their own.

Working together with Loft, we are bringing core satellite capabilities like tasking, which have typically been executed on the ground, into a more agile commanding and tasking paradigm executed on orbit. To do so, we are integrating the Microsoft Azure suite of products, including terrestrial cloud and ground station services, with Loft software capabilities that provide access to spacecraft, including an on-orbit edge computing environment and sensors.

This strategic partnership will provide government and commercial users with a scalable and simplified capability to deploy software in space, enabling new paradigms in remote sensing, edge compute, on-orbit autonomy, and other areas. This groundbreaking capability will be brought to market first on a jointly used satellite launching in 2023 that will provide a host environment for third-party software applications, enabling users to deploy and operate their applications in orbit.

Demonstrating reconfigurable on-orbit compute processing with Ball Aerospace.

Ball Aerospace, a systems integrator with a heritage of designing and building government satellite programs and mission applications, is planning a series of on-orbit testbed satellites that target the agile implementation of new software and hardware for the US Government. Together, Ball Aerospace and Microsoft are collaborating on the execution of these spacecraft missions to demonstrate reconfigurable on-orbit processing technologies, leveraging the Azure Cloud. This includes the use of containerization and cloud on the edge to enable a software-defined mission approach that embraces standards such as Sensor Open Systems Architecture (SOSA), Universal Command and Control Interface (UCI), and Open Mission Systems (OMS). Modular and reconfigurable on-orbit compute will support multiple complex missions for the United States Government and grant the ability to support future concepts for smaller, agile, multi-mission capabilities across all federal space programs.

Analytics for spaceborne data using Azure Orbital

Satellite imagery is a valuable asset; using AI with satellite imagery is a value multiplier. By applying geospatial AI to regularly refreshed satellite imagery of the same area of interest, analysts can detect and monitor change in that area over time.
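As an illustration of the change-detection idea, the sketch below compares two co-registered images of the same area and flags pixels whose values differ by more than a threshold. This is a toy example with made-up pixel values, not the production geospatial AI described here, which uses trained models over real imagery:

```python
# Toy change detection over two co-registered grayscale "images"
# (nested lists of pixel intensities). Real pipelines use georeferenced
# rasters and trained models; this only illustrates the
# thresholded-difference idea behind change detection.

def change_mask(before, after, threshold=30):
    """Return a boolean mask marking pixels that changed materially."""
    return [
        [abs(a - b) > threshold for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(after, before)
    ]

before = [
    [10, 12, 11],
    [10, 11, 10],
]
after = [
    [10, 90, 11],   # a bright new object appears at position (0, 1)
    [10, 11, 10],
]

mask = change_mask(before, after)
changed = sum(cell for row in mask for cell in row)
print(f"{changed} changed pixel(s)")  # -> 1 changed pixel(s)
```

With regularly refreshed imagery, running a pass like this (or a learned equivalent) over each new acquisition is what turns a static image archive into a monitoring capability.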

The use of AI with satellite imagery is a powerful, cost-effective tool spanning all industries that monitor, measure, and/or monetize large areas of the Earth. Extracting this value is hard work: satellite imagery is unstructured big data that requires significant resources to transform and analyze before its information can be accessed, stored, and used as structured data.

The Azure Space team released a reference architecture articulating how to apply AI to satellite imagery at scale using Azure resources. This reference architecture makes use of Azure Synapse Analytics, Azure Data Lake Storage Gen2, Apache Spark pools, Azure Data Share, Azure Batch, and Azure Container Registry. This Azure workflow reduces the complexity of extracting insights from remote sensing data by articulating how to group Azure resources to ingest, store, transform, and apply AI over satellite imagery, and then use the results for various applications. Azure resources allow for flexibility in the workflow, management of storage options, parallelization of workloads, and (re)use of containerized models.

Given Azure’s orchestration flexibility, customers can bring their own imagery. Alternatively, if a customer needs imagery, they can call an imagery provider API, specifying the area of interest, resolution, and vintage of their choosing, through Microsoft’s partner Airbus Defense and Space or through Microsoft’s Planetary Computer. Customers can also bring their own trained models into the orchestration. If a customer needs geospatial intelligence and remote sensing AI, Microsoft has partnerships with Blackshark.ai, Orbital Insight, and Esri. For those customers looking to build AI, Microsoft offers tools like Azure Machine Learning and Azure Custom Vision.

Blackshark.ai geospatial models are available for analytics on Azure

Blackshark.ai offers an end-to-end geospatial platform. Part of this platform is the geospatial analytics service Orca, which detects objects and extracts attributes for buildings, vegetation, and a growing number of other detection classes, such as roads and infrastructure in the future. This service is now available through Azure Synapse Analytics.

The containerized Orca service, fully integrated into Azure Synapse Analytics, provides fast, global-scale, and accurate insights based on satellite or aerial imagery data sets that are available via Azure or provided by customers. Whenever fresh input data is available, the Orca service can provide precise insights for object and change detection, enabling applications such as efficient 3D mapping services, logistics planning, risk analysis, telecom signal propagation planning, or disaster relief planning. More detailed information about the Orca service is available on the Orca support page.

Learn more

Through our announcements today we are continuing our mission to reduce the barriers to entry to space. We are working closely with our partners to empower developers who are building space workloads to easily leverage the best of Azure services and capabilities and transform their approach to development for space. We’re also working with an expanding partner ecosystem to help drive innovation on and off the planet.

Through the combination of cloud and on-orbit space capabilities, new applications are being created and iterated upon even faster, which in turn provides original approaches to challenging problems. We look forward to meeting our industry peers to continue this discussion at this week’s Space Symposium.

Learn more about Azure Space today.
Source: Azure

Bring your own IP addresses (BYOIP) to Azure with Custom IP Prefix

When planning a potential migration of on-premises infrastructure to Azure, you may want to retain your existing public IP addresses due to your customers' dependencies (for example, firewalls or other IP hardcoding) or to preserve an established IP reputation. Today, we are excited to announce the general availability of the ability to bring your own IP addresses (BYOIP) to Azure in all public regions. Using the Custom IP Prefix resource, you can now bring your own public IPv4 ranges to Azure and use them like any other Azure-owned public IP ranges. Once onboarded, these IPs can be associated with Azure resources, interact with private IPs and VNETs within Azure’s network, and reach external destinations by egressing from Microsoft’s Wide Area Network. Read more about how bringing your IP addresses to Azure can help to speed up your cloud migration.

Provisioning a custom IP range

Onboarding your ranges to Azure can be done through the Azure portal, Azure PowerShell, Azure CLI, or by using Azure Resource Manager (ARM) templates. In order to bring a public IP range to use on Azure, you must own the range and have registered it with a Regional Internet Registry such as ARIN or RIPE. When bringing an IP range to use on Azure, it remains under your ownership, but Microsoft is permitted to advertise it from our Wide Area Network (WAN). The ranges used for onboarding must be no smaller than a /24 (256 IP addresses) so that they will be accepted by Internet service providers. When you create a Custom IP Prefix resource for your IP range, Microsoft performs validation steps to verify your ownership of the range and its association with your Azure subscription. Each onboarded range is associated with an Azure region.
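The /24 size requirement is easy to check locally before you begin onboarding. A minimal sketch using Python's standard ipaddress module (the range shown is the reserved TEST-NET-3 documentation range, not a real customer prefix):

```python
import ipaddress

MIN_ADDRESSES = 256  # Azure requires at least a /24 (256 IPv4 addresses)

def is_onboardable(cidr: str) -> bool:
    """Check that a range is IPv4 and no smaller than /24."""
    net = ipaddress.ip_network(cidr, strict=True)
    return net.version == 4 and net.num_addresses >= MIN_ADDRESSES

print(is_onboardable("203.0.113.0/24"))  # True: exactly /24
print(is_onboardable("203.0.113.0/25"))  # False: only 128 addresses
```

This only validates the size and address family; the ownership and registry checks described above are performed by Microsoft during provisioning.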

Using a custom IP range

Once your range has been provisioned on Azure, you have the option to assign public IP addresses from the range to resources immediately or to begin advertising the range before assigning, depending on what fits your specific use case. After the command is issued to commission a range, Microsoft will advertise it both regionally (within Azure) and globally (to the Internet). The specific region where the range was onboarded will also be posted publicly for geolocation providers. To assign the BYOIPs, you would create public IP prefixes (contiguous blocks of Standard SKU public IP addresses), from which you can allocate specific individual public IP addresses. Note that while an IP range is onboarded under the context of an Azure subscription, prefixes from this range can be derived from other subscriptions with appropriate permissions. Onboarded IPs can be associated with any resource that supports Standard SKU public IPs, such as virtual machines, Standard Public Load Balancers, Azure Firewalls, and more. You are not charged for maintenance and hosting of your onboarded public IP prefixes; you are charged only for egress bandwidth from the IPs and any attached resources.
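Deriving public IP prefixes from an onboarded range works like ordinary subnetting. A rough local illustration with Python's standard ipaddress module, assuming a hypothetical /24 custom range split into /28 blocks (in practice Azure performs the actual allocation):

```python
import ipaddress

# Hypothetical onboarded custom IP prefix (TEST-NET-3 documentation range).
custom_prefix = ipaddress.ip_network("203.0.113.0/24")

# Derive smaller public IP prefixes from it, e.g. /28 blocks of 16 addresses
# each, mirroring how public IP prefixes are carved from the onboarded range.
public_prefixes = list(custom_prefix.subnets(new_prefix=28))
print(len(public_prefixes))   # 16 blocks
print(public_prefixes[0])     # 203.0.113.0/28

# Individual public IPs are then allocated from within a block.
first_block_ips = list(public_prefixes[0])
print(first_block_ips[0])     # 203.0.113.0
```

The numbers here are illustrative only; the point is that one onboarded /24 yields multiple contiguous blocks, which is why prefixes can be handed out across subscriptions with appropriate permissions.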

Key takeaways

The ability to bring your own IP addresses (BYOIP) to Azure is currently available in all regions.
The minimum size of an onboarded range is /24 (256 IP addresses).
Onboarded IPs are put in a Custom IP Prefix resource for management, from which Public IP Prefixes can be derived and utilized across subscriptions.
You are not charged for the hosting or management of onboarded ranges brought to Azure.

Additional resources

Custom IP Prefixes.
Create a Custom IP Prefix using Azure PowerShell.

Source: Azure

Join us at the Innovate for Impact digital event

How does a retail giant administer COVID-19 vaccinations in days, not months? Can a series-A startup’s innovation help identify massive fraud in seconds instead of hours? For leaders driving business growth, the cloud has forever opened our minds to endless possibilities. In a time of tectonic shifts in all markets and ways of life, what will distinguish the brands we choose tomorrow is an ability to inspire developer ingenuity, create immersive customer experiences, and not just adapt to changing needs, behaviors, and trends, but to anticipate them.

It sounds good, doesn’t it? But the environment and endurance needed to keep innovating can be hard to sustain for companies of all sizes, which is why we’re kicking off Innovate for Impact on Tuesday, April 5 from 9:00 AM to 10:30 AM Pacific Time. When you register for the event, you’ll hear from thought leaders, technologists, and executives about the business impact of app innovation from every angle. Let’s talk about entrepreneurial ethos, dev culture, and how to read and leverage trends with cloud-native and AI technology—there’s so much we can learn from each other on this journey.

The event also brings together both born-in-cloud and enterprise perspectives on how to solve big challenges. You'll learn how Aqua Security, Confluent, Elastic, Trimble, and KPMG defied conventional thinking and built something new by empowering their teams, forging business models, and scaling through better processes. Don’t miss the opportunity to learn from tangible, motivating cloud stories—not just what each company achieved, but what they overcame to differentiate.

Microsoft is one of the most innovative companies I know—from championing innovation through a ‘growth mindset’, to transforming our own business model to be cloud-first, to continuing to be at the forefront of delivering immersive experiences in gaming and the metaverse. Microsoft’s ability to continuously reimagine, this clairvoyance at predicting and getting ahead of trends, and the humility to “always be learning” is what makes it successful.

We’ve learned a lot from the thought leaders who joined us for this event, and I can’t wait to share those learnings with all of you. I also hope you’ll participate in the Q&A during the event. We want to hear from you and have an ongoing dialogue about making innovation real.

Thank you and enjoy!
Ashmi

Register now

Innovate for Impact

Drive business growth with app innovation
Tuesday, April 5, 2022
9:00 to 10:30 AM Pacific Time

Source: Azure

Diversifying the telecommunications supply chain with Open RAN

Over the past few years, there has been an increasingly steady drumbeat for the need to diversify and open the telecommunications supply chain. This has been driven both by security concerns and by the need to improve the negotiating power of operators by introducing new entrants into the market. A key part of this supply chain that can be diversified is the radio access network (RAN), where operators typically spend the majority of their network infrastructure investment.

To address the need for diversification, groups such as the O-RAN Alliance have formed to open up RAN capabilities. In addition, select operator communities from all over Europe, the Middle East, Asia, and Africa have begun experimenting in this space. Governments have also been weighing in, designating telecommunications networks as a national priority and a critical part of infrastructure that needs to be secured and nurtured to drive innovation. An example of this was the UK Government’s 5G diversification strategy—a plan to grow the telecommunications supply chain while simultaneously making it more resilient to future trends and threats.

Microsoft has successfully transformed into an edge and cloud company, so we understand the magnitude of such an evolution. At Microsoft, our guiding principle is to support, develop, and foster a partner-rich ecosystem. We believe that the role that we play best as a cloud provider is to provide a secure, scalable, well-managed carrier-grade platform serving as the enabler for third parties to build upon.

Future Radio Access Network Challenge (FRANC)

As it turns out, the UK government’s Department for Digital, Culture, Media and Sport (DCMS) was thinking along the same lines. The Future Radio Access Network Challenge (FRANC) was designed as a follow-on to their diversification strategy. It identified the need to accelerate Open RAN innovation to meet its target of 35 percent of all network traffic over Open RAN by 2030, as well as spark UK-based innovation in this space.

This initiative aligns well with our ambitions—to grow and diversify the supply chain as well as support a healthy and vibrant Open RAN ecosystem. We reached out to Intel and Capgemini, industry leaders in Open RAN, and the University of Edinburgh, a leading academic institution, to join us in demonstrating how beautifully our ideas could fit together to achieve our mutual objectives.

DCMS has endorsed this approach, with the Microsoft-led consortium being one of the award recipients of their challenge. At the Mobile World Congress (MWC) 2022, we pulled back the curtain a bit more to explain what we will be doing jointly as a group, and how our combined efforts will help accelerate the Open RAN ecosystem.

Technology showcased at the Mobile World Congress 2022

At MWC 2022, in close collaboration with our partners, we showed how disaggregated software and hardware are the future of telecommunications networks. This new software-driven programmable network architecture leads to faster rollouts with lower total cost of ownership. Cloud technologies—AI and machine learning analytics, edge computing, large-scale management, self-diagnostics, network programmability, network verification, and global connectivity—can be leveraged to improve the operational efficiency and security of the virtualized RAN. This infrastructure also supports the creation of new revenue streams by enabling a developer ecosystem.

Additionally, we announced the next wave of Azure for Operators solutions and services, which includes Azure Operator Distributed Services (AODS). AODS combines the enhanced version of AT&T’s Network Cloud software we acquired with the best of Azure, including our industry-leading security, monitoring, analytics, AI, machine learning, and so much more. Capable of handling network-intensive workloads and mission-critical applications, AODS is a carrier-grade platform that provides flexibility and scalability to support deployments at the edge of the cloud, the edge of the network, or the enterprise edge. We’ve been focused on ensuring that this edge infrastructure (both near and far edge) is capable of supporting RAN workloads.

We demonstrated a system that used our AODS solution, which provides a hybrid cloud platform for telecommunications network functions. The architectural components included commercial off-the-shelf hardware equipment with Intel’s silicon and Capgemini’s Open RAN network functions: specifically, Intel® Xeon® Scalable processors, PTP-enabled network interface cards (NICs), and the Intel vRAN Accelerator ACC100 Adapter, all leveraged by the FlexRAN™ layer 1 software. Capgemini provided the vCU and vDU Open RAN network functions, and Microsoft provided the cloud-managed platform through AODS.

Figure 1: Hardware setup of the live demonstration of carrier-grade cloud-managed Open RAN platform at MWC 2022. 

The setup included four commercial off-the-shelf servers connected to a top-of-rack (ToR) and management switch. A radio unit (RU), capable of 4×4 multiple input, multiple output (MIMO) over a 100 MHz channel, was connected to the ToR using the 7.2x front-haul interface. A grandmaster clock was also connected to the ToR switch, providing PTP synchronization to the RU and the servers. A 5G smartphone was used to connect to the network. We showed how, with AODS, we can connect all these together, deploy a virtualized RAN from the cloud with a few mouse clicks, and manage it remotely. Such a seamless deployment process reduces the integration effort required of vendors, allowing them to focus on innovation instead.

Here you can view a video of the demo. In particular, notice the live analytics this system provides. We will have more to say about that in an upcoming article.

Looking ahead

Along with our partners, Microsoft is bringing to life carrier-grade edge-cloud solutions that empower operators globally to deploy Open RAN network functions easily and securely. Our tools and services can manage RAN deployments at scale. With Azure machine learning and AI, a core component of our technologies, operators can perform analytics that optimize performance, improve management, and proactively detect and solve problems.

Security principles designed for the cloud are being adopted to make the platform resilient, to prevent, detect, and respond to threats in the network and across the firmware and telecommunications supply chains. Edge and network monitoring and programmability via open APIs will enable a new generation of 5G applications while simultaneously improving operational efficiency. Operators can increase revenues and reduce infrastructure costs while building future-proof solutions.

As we begin to build on our promise, we encourage operators and ecosystem partners to contact us to learn more.
Source: Azure

Accelerate silicon design innovation on Azure with Synopsys Cloud

Semiconductor and silicon technology are the basis of digital transformation happening everywhere, across industries and our daily lives, impacting the way we work, learn, and play. The continuous improvement in the performance and power of silicon has been key to enabling this innovation. Here at Microsoft, we’ve empowered our long-standing partners in the semiconductor industry to embrace Azure’s cloud infrastructure and scale out electronic design automation (EDA). With a new EDA-optimized cloud environment running on Azure, the launch of Synopsys Cloud marks a significant milestone for the industry by offering silicon design teams the ability to scale and accelerate their development cycles—transforming chip design the way that the cloud transformed computing.

Increasing flexibility and efficiency in silicon development on Azure

The collective rise in time-to-market pressure from the global chip shortage and increasing computational demands has led chipmakers to seek more flexibility and efficiency in the silicon design process. Migrating chip design to Azure’s optimized infrastructure helps address part of this equation by enabling critical design and verification workloads on the cloud—resulting in faster time-to-results and better quality at a lower cost. With Synopsys Cloud built on Azure, chip designers will now also have access to a new pay-per-use model offering automated provisioning of infrastructure and EDA tools to address the growing demands of silicon design.

This “pay-as-you-go” model is a software as a service (SaaS)-based approach that will reduce barriers for companies of all sizes while enabling greater innovation and value for customers and EDA vendors alike. Using the power of Azure’s workload scaling and virtual machine (VM) selection capabilities, Synopsys Cloud customers will be able to optimize critical EDA workloads—from reducing processing time on verification tasks to saving runtime and enabling faster design convergence on library characterization.

Expanding access to chip design on the cloud

Microsoft has long been committed to helping companies of all sizes unlock more potential on the cloud. With its powerful chip design and verification tools running on Azure’s trusted and comprehensive cloud platform, Synopsys is Microsoft’s preferred partner for EDA on the cloud. Using Synopsys’ solution, customers ranging from startups to large design enterprises benefit from simplified access to custom infrastructure for all their chip design needs—helping them build silicon and tackle designs they previously could not.

Innovation for wide-ranging impact

From intelligent scaling of EDA resources to using AI and machine learning models to transform design and resource management, silicon manufacturing has already seen vast improvements with the introduction of the cloud. The shift towards cloud-centric silicon design has enabled newfound access to compute, storage, and tooling resources. Ultimately, improved time-to-results, quality-of-results, and cost-of-results are just the beginning of what cloud-enabled EDA enhancements can offer. As design on the cloud becomes increasingly widespread, I look forward to seeing the silicon industry continuing to innovate towards new levels of ingenuity—powered by the Microsoft Cloud.

Tune in to Synopsys’ SNUG 2022 conference to watch a keynote by Dr. Aart de Geus, Chairman and co-CEO of Synopsys, including a fireside chat with Satya Nadella, Chairman and CEO of Microsoft.

Learn more

Learn more about Azure for the semiconductor industry.
Learn more about Azure high-performance computing (HPC) for Silicon.
Learn more about Azure Virtual Machines (VMs).
Learn more about Microsoft’s global infrastructure.
Take a virtual tour of Microsoft’s datacenters.

Source: Azure