Advancing Azure Virtual Machine availability monitoring with Project Flash

“As we head into the fourth calendar year of the Advancing Reliability blog series, empowering organizations to run their workloads reliably on Azure remains one of our top priorities. We continually invest in evolving the Azure platform to help achieve this on a daily basis. Your ability to monitor virtual machine (VM) availability in a robust and comprehensive way is paramount to ensuring that your applications are available and resilient. For today’s post in the series, I have asked Program Manager, Pujitha Desiraju, from our Azure Core Platform Fundamentals Engineering team to talk about the latest observability enhancements for VM availability monitoring, as well as planned investments to deliver the best monitoring experience.”—Mark Russinovich, CTO, Azure

 

This post was co-authored by Principal Software Engineering Manager, Gaurav Jagtiani.

Flash, as the project is internally known, is a collection of efforts across Azure engineering that aims to evolve Azure’s virtual machine (VM) availability monitoring ecosystem into a centralized, holistic, and intelligible solution customers can rely on to meet their specific observability needs. Today, we’re excited to announce the completion of the project’s first two milestones—the preview of VM availability data in Azure Resource Graph, and the private preview of a VM availability metric in Azure Monitor.

What is Project Flash?

Project Flash derives its name from our commitment to building robust and rapid ways to monitor virtual machine (VM) availability as comprehensively as possible—a key prerequisite for efficient application performance. It’s our mission to ensure you can:

Consume accurate and actionable data on VM availability disruptions (for example, VM reboots and restarts, application freezes due to network driver updates, and 30-second host OS updates), along with precise failure details (for example, platform versus user-initiated, reboot versus freeze, planned versus unplanned).
Analyze and alert on trends in VM availability for quick debugging and month-over-month reporting.
Periodically monitor data at scale and build custom dashboards to stay updated on the latest availability states of all resources.
Receive automated root cause analyses (RCAs) detailing impacted VMs, downtime cause and duration, consequent fixes, and similar—all to enable targeted investigations and post-mortem analyses.
Receive instantaneous notifications on critical changes in VM availability to quickly trigger remediation actions and prevent end-user impact.
Dynamically tailor and automate platform recovery policies, based on ever-changing workload sensitivities and failover needs.

With these goals in mind, we’ve divided our execution strategy into two phases—a near-term phase to meet critical current needs, and a long-term phase to deliver the best VM availability monitoring experience. This two-phased approach helps us continually bridge gaps, iterate on service quality, and learn from your feedback at every step along the way.

Announcing new monitoring options

For the first phase, we are providing several options for convenient access to VM availability data, addressing a range of observability needs. We aim to maintain the same rigorous data-quality standards across these options and existing features such as Resource Health and the Activity Log, so you get a consistent view regardless of the solution you choose.

Introducing at-scale analysis for VM availability

Today, we’re excited to reach our first Project Flash milestone—with the preview release of VM availability states in Azure Resource Graph for at-scale programmatic consumption.

Azure Resource Graph is an Azure service widely adopted for its ability to query efficiently across many subscriptions at once, at low latency. We’re currently emitting VM availability states (Available, Unavailable, and Unknown) to the HealthResources table in Azure Resource Graph, so you can write Kusto Query Language (KQL) queries that sift through large datasets in a single pass. This functionality is handy for tracking historical changes in VM availability, building custom dashboards, and performing detailed investigations across resource properties spread over multiple tables.

Figure 1: Azure Resource Graph Explorer Window with query and results, to demonstrate fetching data from the HealthResources table.
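As an illustration, a KQL query along the following lines, run in Azure Resource Graph Explorer, counts VMs by their latest reported availability state. This is a sketch; verify the resource type and property names against the queries published for the HealthResources table in your environment.

```kusto
HealthResources
| where type =~ 'microsoft.resourcehealth/availabilitystatuses'
| extend availabilityState = tostring(properties.availabilityState)
| summarize vmCount = count() by availabilityState
```

Because Resource Graph queries span every subscription you can read, a single query like this can replace per-subscription polling when building an at-scale availability dashboard.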

Later this year, we plan to add failure details and degraded-VM scenarios to the HealthResources table in Azure Resource Graph. These details will ensure you are properly informed about the cause and impact of any failure, so you can fail over, reboot in place, or take other appropriate mitigations to prevent end-user impact.

Navigate to Azure Resource Graph Explorer in the Azure portal to get started with any of the KQL queries published for the HealthResources table.

Introducing VM availability metric in Azure Monitor

We’re also pleased to announce the private preview of an out-of-box VM availability metric in Azure Monitor, for a curated metric alerting and monitoring experience.

Metrics in Azure Monitor are great for monitoring and analyzing time series representations of VM availability for quick and easy debugging, receiving scoped alerts on concerning trends, catching early indicators of degraded availability, correlating with other platform metrics, and more.

The metric lets you track the pulse of your VMs: during expected behavior, the metric reports a value of 1; during any VM availability disruption, it dips to 0 for the duration of the impact. In the event of an Azure infrastructure outage, we emit nulls, represented as a dotted line in the portal.

Figure 2: Screenshot of VM availability metric as seen on Metrics Explorer in the Azure portal, with occasional dips to reflect VM availability disruptions.
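The 1/0/null behavior described above can be consumed programmatically. The helper below is a hypothetical sketch (not an Azure SDK API) showing one reasonable way downstream tooling might summarize such a per-minute series, treating null samples as "no data" rather than downtime:

```python
def summarize_availability(samples):
    """Summarize a per-minute VM availability series.

    1 = available, 0 = unavailable, None = no data emitted
    (for example, during an Azure infrastructure outage).
    """
    # Ignore null samples: they indicate missing data, not downtime.
    observed = [s for s in samples if s is not None]
    if not observed:
        return {"availability_pct": None, "downtime_minutes": 0}
    downtime = observed.count(0)
    return {
        "availability_pct": 100.0 * (len(observed) - downtime) / len(observed),
        "downtime_minutes": downtime,
    }

# Eight 1-minute samples: one 2-minute dip and two null (no-data) samples.
summary = summarize_availability([1, 1, 0, 0, 1, None, None, 1])
print(summary["downtime_minutes"])  # 2
```

Note the design choice mirrored from the metric itself: nulls are excluded from the denominator, so an infrastructure outage does not silently count against the VM's availability percentage.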

We released the private preview of the metric as phase one of our rollout plan and are currently collecting customer feedback to further improve the offering. Next year, we plan to surface failure details as metric dimensions and platform logs, so you can alert precisely on the failure scenarios that matter most to you.

Coming soon

The two monitoring options introduced above are just the beginning for Project Flash! We will continue to build upon our existing solutions by improving data quality and failure attribution. In parallel, we are designing two new monitoring offerings to meet your latency and mitigation needs, while also investing heavily in the underlying platform to make our fault detection more resilient and comprehensive.

Azure Event Grid for instantaneous notifications

Successfully running business-critical applications requires hyper-awareness of any event that impacts VM availability, so remediation actions can be triggered instantly to prevent end-user impact. To support your daily operations, we are designing a notification mechanism that leverages the low-latency technology of Azure Event Grid. You will be able to subscribe to an Event Grid system topic and route scoped events, via event handlers, to any downstream tooling instantaneously.
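To make the routing idea concrete, here is a minimal sketch of an event handler. The event shape and event type shown are assumptions for illustration only; the actual schema for these VM availability notifications had not been published at the time of writing.

```python
import json

def handle_availability_event(raw_event):
    """Route a (hypothetical) VM availability event from Event Grid.

    The envelope fields (subject, data) follow the general Event Grid
    pattern; the availabilityState payload is an assumed shape.
    """
    event = json.loads(raw_event)
    state = event.get("data", {}).get("availabilityState", "Unknown")
    vm_id = event.get("subject", "")
    if state == "Unavailable":
        # Hand off to downstream remediation tooling (failover, reboot, ...).
        return "remediate:" + vm_id
    return "log:" + vm_id

sample = json.dumps({
    "subject": "/subscriptions/.../virtualMachines/vm01",
    "eventType": "VirtualMachineAvailabilityChanged",  # hypothetical name
    "data": {"availabilityState": "Unavailable"},
})
print(handle_availability_event(sample))  # remediate:/subscriptions/.../virtualMachines/vm01
```

In practice such a handler would sit behind an Event Grid event subscription (for example, a webhook or function endpoint) rather than be called directly.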

Automate and tailor platform recovery policies

Beyond the numerous ongoing investments to improve your VM availability monitoring experience, Project Flash intends to empower you further by providing knobs to customize the recovery policies the platform triggers in response to VM availability disruptions.

One such knob we are designing is the ability to opt out of Service Healing for single-instance VMs in response to a specific set of unanticipated availability disruptions. This knob will be available in the portal or at VM deployment time and can be updated dynamically. Note that using this feature will render the usual Azure Virtual Machine availability SLAs ineffective.

In the future, we will explore knobs to opt out of other applicable recovery policies (for example, Live Migration or Tardigrade), so you can easily adapt to your ever-changing mitigation needs.

Ongoing platform quality investments

While the first phase is designed to meet your current observability needs, we remain focused on our long-term goal of delivering a world-class observability experience surrounding VM availability. We are extremely excited for all the data enrichments and technology advancements that will contribute to this experience, so here’s an early look at our roadmap of planned investments:

Fault detection and attribution: We are continuously evolving our underlying infrastructure to detect and attribute failures both precisely and instantaneously—so that we can reduce unknown or missing health status reports, emit actionable failure details, and handle platform recovery customizations. This remains our top investment area on which we continue to iterate every cycle.
Root cause analysis (RCA) automation: We are planning to implement easy tracking mechanisms for every unique VM downtime, along with automatic construction and emission of detailed downtime RCA statements to reduce manual tracking and churn on your end.
AIOps integration: We are looking to leverage the tremendous advancements being made in AIOps across Microsoft to enable smart insights, anomaly detection, and diagnosis across the multitude of data points on VM availability.
Centralized and cohesive user experience: We acknowledge that a consequence of our near-term approach is that, across our different services, we have multiple monitoring, alerting, and recovery tools, which can lead to a confusing and disparate experience for you. This is a problem we intend to solve with our final phase. Our north-star goal is to provide end users access to distinct and necessary representations of VM availability, consolidated within Azure Monitor and categorized according to common usage patterns for discoverability, ease of use, and intuitive onboarding.

Learn more

This list is certainly not exhaustive as we have multiple enrichments planned as part of our long-term strategy. To reiterate, our intention with Project Flash is to make VM availability monitoring extremely intuitive, comprehensive, and seamless—so you are always prepared for and informed about any changes in the health of your workloads, ultimately to maintain your own SLAs and business promises.

We will continue to share updates on Project Flash through blogs like this, to ensure you stay up to date on the latest. Stay tuned!
Source: Azure

Migrating your files to Azure has never been easier

We pride ourselves on listening to our customers and then building products and partnerships that meet customer needs and enable every application to migrate to Azure. We recognize that migrating Virtual Desktop, Virtual Server, High Performance Compute, Analytics, and many other critical applications requires copying tens of terabytes to several petabytes of file data stored on file servers, NAS appliances, and Object Storage to Azure. Automated, intuitive, and scalable solutions are required to migrate file data between heterogeneous platforms and eliminate the inherent complexity and risk of these projects. Our customers have told us that copying unstructured and semi-structured file data to Azure Blob Storage, Azure Files, and Azure NetApp Files needs to be fast and easy so you can focus on innovating with Azure services.

Today we are announcing the Azure File Migration Program, which gives customers and partners in our Solution Integrator and Service Provider ecosystem access to industry-leading file migration solutions from Komprise and Data Dynamics, at no cost. These solutions help you easily, safely, and securely migrate file and object data to Azure Storage.

Azure Migrate offers a very powerful set of no-cost (or low-cost) tools to help you migrate virtual machines, websites, databases, and virtual desktops for critical applications. You can modernize legacy applications by migrating them from servers to containers and build a cloud native environment. Our new program complements Azure Migrate and provides the means to migrate applications and workloads that include large volumes of unstructured file data.

This program offers free software licensing, an onboarding session, and access to the migration solution provider’s support organization. You can review a detailed comparison of the solutions, review the Getting Started Guides for Data Dynamics and Komprise, and watch videos showcasing their functionality. After choosing the solution that best fits your needs, simply select the appropriate Azure-sponsored offer from the Azure Marketplace.

We plan to expand this program going forward to include additional migration ISVs and target storage platforms to support any and every storage migration scenario—subscribe to this blog for updates as we expand the program.

Learn more about the Azure File Migration Program

To learn more about this program, please visit our Tech Community Blog where Principal Program Manager Karl Rautenstrauch has written a post to help you move forward and take advantage of this great offer! You can also learn more about migrating application workloads to Azure by visiting the Azure Migration and Modernization Center.
Source: Azure

Genomic analysis on Galaxy using Azure CycleCloud

Cloud computing and digital transformation have been powerful enablers for genomics. Genomics is expected to be an exabase-scale big data domain by 2025, posing data acquisition and storage challenges on par with other major generators of big data. Embracing digital transformation offers a practically limitless ability to meet the genomic science demands in both research and medical institutions. The emergence of cloud-based computing platforms such as Microsoft Azure has paved the path for online, scalable, cost-effective, secure, and shareable big data persistence and analysis with a growing number of researchers and laboratories hosting (publicly and privately) their genomic big data on cloud-based services.

At Microsoft, we recognize the challenges faced by the genomics community and are striving to build an ecosystem (backed by OSS and Microsoft products and services) that can facilitate genomics work for all. We’ve focused our efforts on three core areas—research and discovery in genomic data, building out a platform to enable rapid automation and analysis at scale, and optimized and secure pipelines at a clinical level. One of the core Azure services that has enabled us to leverage a high-performance computing environment for genomic analysis is Azure CycleCloud.

Galaxy and Azure CycleCloud

Galaxy is a scientific workflow, data integration, and data analysis persistence and publishing platform that aims to make computational biology accessible to research scientists who do not have computer programming or systems administration experience. Although it was initially developed for genomic research, it is largely domain agnostic and is now used as a general bioinformatics workflow management system. The Galaxy system supports accessible, reproducible, and transparent computational research:

Accessible: Programming experience is not required to easily upload data, run complex tools and workflows, and visualize results.
Reproducible: Galaxy captures information so that you don't have to; any user can repeat and understand a complete computational analysis, from tool parameters to the dependency tree.
Transparent: Users share and publish their histories, workflows, and visualizations via the web.
Community-centered: Inclusive and diverse users (developers, educators, researchers, clinicians, and more) are empowered to share their findings.

Azure CycleCloud is an enterprise-friendly tool for orchestrating and managing high-performance computing (HPC) environments on Azure. With Azure CycleCloud, users can provision infrastructure for HPC systems, deploy familiar HPC schedulers, and automatically scale the infrastructure to run jobs efficiently at any scale. Through Azure CycleCloud, users can create different types of file systems and mount them to the compute cluster nodes to support HPC workloads. With dynamic scaling of clusters, the business can get the resources it needs at the right time and at the right price. Azure CycleCloud’s automated configuration enables IT to focus on providing service to business users.

Deploying Galaxy on Azure using Azure CycleCloud

Galaxy is widely used by academic institutions that conduct genomic research. Institutions that already use Galaxy often prefer to continue with it because it provides multiple tools for genomic analysis as a SaaS platform. Users can also deploy custom tools onto Galaxy.

Galaxy users generally use the SaaS version of Galaxy as part of UseGalaxy resources. UseGalaxy servers implement a common core set of tools and reference genomes and are open to anyone to use. All information on its usage is available on the Galaxy Platform Directory.

However, there are some research institutions that intend to deploy Galaxy in-house as an on-premises solution or a cloud-based solution. The remainder of this article describes how to deploy and run Galaxy on Microsoft Azure using Azure CycleCloud and grid engine cluster. The solution was built during the Microsoft hackathon (October 12 to 14, 2021) with code implementation assistance from Azure HPC Specialist, Jerry Morey. The architectural pattern described below can help organizations to deploy Galaxy in an Azure environment using CycleCloud and a scheduler of choice.

As a prerequisite, genomic data should be available in a storage location, either in the cloud or on-premises. Azure CycleCloud should be deployed using the steps described in the “Install CycleCloud using the Marketplace image” documentation.

The cluster deployment model fully supported by Galaxy in the cloud is called the unified method: the copy of Galaxy on the application server is the same copy used on the cluster nodes. The most common way to achieve this, and the most common Galaxy deployment method overall, is to place Galaxy on a network file system (NFS) accessible by both the application server and the cluster nodes.

An admin user can SSH into the Azure CycleCloud virtual machine or the Galaxy server virtual machine to perform administrative activities. It is recommended to close the SSH port in production. Once the Galaxy server is running on a node, end users (researchers) can load the portal on their own devices to perform analysis tasks, which include loading data, installing and uploading tools, and more.

Access to functionality (such as installing and deleting tools versus using tools for analysis) is controlled by parameters defined in the galaxy.yml file that resides on the Galaxy server. When a user invokes a tool, the action is converted into a job that is submitted to the grid engine cluster for execution.
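As an illustration, access control of this kind is typically expressed in galaxy.yml roughly as follows. This is a hedged sketch: option names should be checked against the Galaxy release you deploy, and the email address is a placeholder.

```yaml
galaxy:
  # Users listed here gain admin rights (for example, installing
  # and deleting tools); all other users can only run tools.
  admin_users: galaxy-admin@example.org
  # Require login so that every submitted job is attributable to a user.
  require_login: true
```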

Deployment scripts are available to ease deployment. These scripts can be used to deploy the latest version of Galaxy on Azure CycleCloud.
Following are the steps to use the deployment scripts:

1. Clone the project (the project is in active development, so cloning the latest release is recommended):

git clone -b release_21.09 https://github.com/themorey/galaxy-gridengine.git

2. Upload the project to a CycleCloud (CC) locker. Modify files first if needed, then list the lockers and upload to the one you want (in this example, "Azure cycle Locker"):

cd galaxy-gridengine
cyclecloud locker list
# example output: Azure cycle Locker (az://mystorageaccount/cyclecloud)
cyclecloud project upload "Azure cycle Locker"

3. Import the cluster template into CC:

cyclecloud import_cluster <cluster-name> -c <galaxy-folder-name> -f templates/gridengine-galaxy2.txt

NOTE: Substitute <cluster-name> with a name for your cluster (all lowercase, no spaces).

4. Navigate to the CC portal to configure and start the cluster.

5. Wait 30 to 45 minutes for the Galaxy server to be installed.

6. To check that the server installed correctly, SSH into the Galaxy server node and inspect galaxy.log in the /shared/home/<galaxy-folder-name> directory.

This deployment was adopted by a leading United States-based academic medical center. The Microsoft Industry Solutions team helped deploy the solution on the customer’s Azure tenant. Researchers at the center tested it to assess parity with the existing Galaxy deployment in their on-premises HPC environment, and successfully validated the deployed Galaxy server, which used Azure CycleCloud for job orchestration. Several common bioinformatics tools, such as bedtools, fastqc, bcftools, picard, and snpeff, were installed and tested. Galaxy supports local users by default; as part of this engagement, a solution to integrate the customer’s corporate Active Directory was tested and deployed. The solution was found to be on par with their on-premises deployment, and with more and larger execute nodes, jobs completed in less time.

For more information, support, or guidance related to the content in this blog, we recommend you reach out to your Microsoft sales representative.

Learn more

Learn more about Microsoft Genomics solutions.

Microsoft Genomics service on Azure.
Azure CycleCloud—HPC Cluster and Workload Management.
Galaxy on Azure deployment scripts.

Source: Azure

Learn how open source plays a key role in Microsoft’s cloud strategy with Inside Azure for IT

With more than 1 million views of our fireside chats, we’re inspired by the tremendous opportunity to connect those within the community—customers, partners, and technology enthusiasts everywhere. Whether you engage in the live ask-the-experts sessions, watch the deep-dive skilling videos, or join us for fireside chats—the Azure team and I are delighted and humbled by your participation and enthusiasm for Inside Azure for IT. 

In our third episode, we talk about some of our Linux and open source-related partnerships, product innovation, and initiatives, plus how they help customers and communities. To those who think of Azure as a “mostly Windows” cloud, it may be surprising to learn that more than 60 percent of Azure customer compute cores are Linux-based, and that Linux virtual machine (VM) cores are growing faster than those based on Windows.

My own career has mirrored Microsoft’s evolution of how we think about, contribute to, and consume Linux and open source. For example, I’ve gone from being solely focused on Windows and Windows Server, to learning how to contribute upstream to make Linux run great on Hyper-V, to now, where open source and Linux are core to the development of Azure.

In this episode, you’ll get a behind-the-scenes peek at Microsoft’s approach, and how we've brought together customers, partners, and communities to innovate and collaborate across open-source technologies.

Innovate with Open Source and Linux on Azure

The episode is divided into three segments, so you can watch each one on demand at your convenience.

Part one: Microsoft and Red Hat on simplifying cloud adoption with joint innovation on Azure with Linux

In this segment, you’ll hear from Red Hat about partnering with Microsoft and how it helps customers with their cloud modernization and migration journey. Mike Evans, VP, Technical Business Development, and Xavier Lecauchois, Sr. Director Ansible Cloud Services, from Red Hat join me to chat about the strategy and the latest innovation, the Red Hat Ansible Automation Platform on Azure. Watch: Simplifying cloud adoption with joint innovation on Azure with Linux.

Part two: Brendan Burns and Krishna Ganugapati on safeguarding workloads with Mariner—Microsoft’s internal Linux distro

Delivering reliable Azure services to customers faster is the driving force behind the creation of Mariner, Microsoft’s own Linux distro. Join me, as I chat with Krishna Ganugapati, VP of Software Engineering, Edge OS, and Brendan Burns, CVP, Azure Cloud Native on why the Azure team created Mariner and how it’s benefiting customers and Microsoft engineers. Watch: Safeguarding workloads with Mariner—Microsoft Azure’s own internal Linux distro.

Part three: Microsoft's Sarah Novotny on working together with open source communities to drive innovation

Open source connects developers around the world, providing ways to collaborate and innovate collectively. Join Sarah Novotny, Director of Open-Source Strategy for Azure, as we chat about running open-source technologies in the cloud, how the relationship between IT and developers enables open-source innovation, Microsoft’s leadership and contributions to help secure open-source software, and her unique background in the open-source community. Watch: Developing in the open and working together to drive innovation.

Stay current with Inside Azure for IT

Beyond this latest episode, there are many more technical and cloud-skilling resources available through Inside Azure for IT. Learn more about empowering an adaptive IT environment with best practices and resources designed to enable productivity, digital transformation, and innovation. Take advantage of technical training videos and learn about implementing these scenarios.

Register for Azure Open Source Day to watch live on February 15, 2022, 9:00 AM to 10:30 AM Pacific Time or on-demand later.
Get started by learning about Linux on Azure.
See our schedule for Ask the product experts live.
Watch part one: Microsoft and Red Hat on simplifying cloud adoption with joint innovation on Azure with Linux.
Watch part two: Brendan Burns and Krishna Ganugapati on safeguarding workloads with Mariner—Microsoft Azure’s own internal Linux distro.
Watch part three: Microsoft’s Sarah Novotny on developing in the open and working together to drive innovation.

Source: Azure

IoT adoption remains strong in the Asia-Pacific region as organizations broaden usage

The Asia-Pacific region has long been a strong manufacturing base and the sector continues to be a strong adopter of the Internet of Things (IoT). But as the latest Microsoft IoT Signals report shows, IoT is now much more widely adopted across verticals, and across the globe, with smart spaces—a key focus for many markets in the Asia-Pacific region—becoming one of the leading application areas.

The newest edition of this report provides encouraging reading for organizations in the Asia-Pacific region. The global study of over 3,000 business decision-makers (BDMs), developers, and internet technology decision-makers (ITDMs) across ten countries—including Australia, China, and Japan—shows that IoT continues to be widely adopted for a range of uses and is seen as critical to business success by a large majority. Further, rather than slowing growth, as some might have feared, the COVID-19 pandemic is driving even greater investment across different industries as IoT becomes more tightly integrated with other technologies.

Across the Asia-Pacific region, the research shows that organizations in Australia report the highest rate of IoT adoption at 96 percent—beating both Italy (95 percent) and the United States (94 percent)—and that organizations in China are adopting IoT for more innovative use cases and have the highest rates of implementation against emerging technology strategies. In Japan, it found that companies are using IoT more often to improve productivity and optimize operations. Below we dive into three key trends that emerge for organizations in this region.

1. A greater focus on planning IoT projects pays off

Whilst IoT projects in the region take slightly longer to reach fruition, it seems that this reflects a more thoughtful and diligent approach which appears to be paying off. By thinking through and taking time upfront to determine the primary business objectives for success, organizations in the Asia-Pacific region report high levels of IoT adoption (96 percent in Australia), importance (99 percent of companies in China say IoT is critical to business success), and overall satisfaction (99 percent and 97 percent in China and Australia respectively). These objectives are broadly in line with global findings, with quality assurance and cloud security consistently mentioned across all three countries in this region. Organizations in Australia and Japan adopt IoT to help with optimization and operational efficiencies: in Australia, the focus is on energy optimization (generation, distribution, and usage); and in Japan, it is on manufacturing optimization (agile factory, production optimization, and front-line worker). Those in Australia and China also tend to do more device monitoring as part of IoT-enabled condition-based maintenance practices.

Companies in the region report that these varied use cases are delivering significant benefits in terms of more operational efficiency and staff productivity, improved quality by reducing the chance of human error, and greater yield by increasing production capacity.

2. Emerging technologies accelerate IoT adoption

Of the organizations surveyed, the 88 percent that are set to either increase or maintain their IoT investment in the next year are more likely to incorporate emerging technologies such as AI, edge computing, and digital twins into their IoT solutions. And in the Asia-Pacific region, awareness of these technologies tends to be higher than in other markets.

Organizations in China are far more likely than their counterparts elsewhere to have strategies that address these three areas. They lead all other countries when it comes to implementing against AI and edge computing strategies, and a staggering 98 percent of companies in Australia that are aware of digital twins say they have a specific strategy for that technology. More significantly, their experience with these technologies is driving greater adoption of IoT across the region, with around eight in ten organizations working to incorporate them into their IoT solutions.

3. Industry-specific IoT solutions drive a broader range of benefits

The IoT Signals report analyzed several industries in-depth, all well represented in the Asia-Pacific region. Organizations in Australia, for instance, should note that energy, power, and utility companies use IoT to help with grid automation (44 percent) and maintenance (43 percent), while oil and gas companies tend to apply it more to workplace and employee safety (45 percent and 43 percent respectively). Energy companies are also much more likely to use AI in their IoT solutions than other industries (89 percent of organizations versus 79 percent for all verticals). The benefits of IoT being seen by organizations in these sectors include increases in operational efficiency, increases in production capacity, and increases in customer satisfaction.

In Japan, where manufacturing makes up an important part of the market, we find that there are more IoT projects in the usage stage (26 percent) than in other sectors, mainly focused on bolstering automation. Manufacturing organizations are using these IoT solutions to ensure quality, facilitate industrial automation, and monitor production flow. In doing so, they benefit from improved operational efficiency and greater production capacity, driving competitive advantage. In this industry, it’s not technology that poses a challenge but the huge business transformation that takes extra time and thought, often due to legacy systems and processes.

China, of course, has always been an innovator when it comes to devices, so its manufacturing sector will see the same impacts. But smart spaces, as in other markets in the Asia-Pacific region, are getting a lot of attention, and this is where we see the highest levels of IoT adoption (94 percent) and overall satisfaction (98 percent). The sector also shows the strongest indications of future growth, with 69 percent of organizations planning to use IoT more in the next two years, and it is the industry sector where the highest proportion of organizations are implementing IoT against AI strategies. The top applications of IoT in smart spaces are around productivity and building safety, where organizations can benefit from improved operational efficiency and personal safety.

Learn more

It’s clear from the report that IoT is here to stay, and the diligent approach taken by organizations across the Asia Pacific region is paying off. For a more detailed exploration of how businesses in this region and across the globe are leveraging IoT, as well as drilldowns into topics such as security, implementation strategy, and sustainability, make sure you download the full Microsoft IoT Signals report.
Source: Azure

Announcing the public preview of Microsoft Azure Payment HSM service

This blog post has been co-authored by May Chen, Product Manager, Azure Security.

The growing trend for running payment workloads in the cloud

Momentum is building as financial institutions move some or all of their payment applications to the cloud. This entails migrating from legacy on-premises applications and hardware security modules (HSMs) to a cloud-based infrastructure that is not generally under their direct control, and it often means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers. For cloud-native organizations, by contrast, a cloud-first model with no on-premises presence is fundamental to their business. End-users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and the flexibility to scale their solution seamlessly as their business grows.

Potential challenges

Cloud offers significant benefits. Yet, there are challenges when migrating a legacy on-premises payment application (involving payment HSM) to the cloud that must be addressed. Some of these are:

Shared responsibility and trust—what potential loss of control in some areas is acceptable?
Latency—how can an efficient, high-performance link between the application and HSM be achieved?
Performing everything remotely—what existing processes and procedures may need to be adapted?
Security certifications and audit compliance—how will current stringent requirements be fulfilled?

The Azure Payment HSM service addresses these challenges and delivers a compelling value proposition to the users of the service.

Introducing the Microsoft Azure Payment HSM

Today, we are excited to announce that Azure Payment HSM is in preview in East US and North Europe.

The Azure Payment HSM is a “BareMetal” service delivered using Thales payShield 10K payment HSMs to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help service providers and individual financial institutions accelerate their payment system’s digital transformation strategy and adopt the public cloud. It meets the stringent security, audit compliance, low-latency, and high-performance requirements set by the Payment Card Industry (PCI).

HSMs are provisioned and connected directly to users’ virtual networks, and are under users’ sole administrative control. HSMs can easily be provisioned as a pair of devices configured for high availability. Users of the service utilize Thales payShield Manager for secure remote access to the HSMs as part of their Azure subscription. Multiple subscription options are available to satisfy a broad range of performance and application requirements, and can be upgraded quickly in line with end-user business growth. Azure Payment HSM offers a top performance level of 2,500 CPS.

Enhanced security and compliance

End-users of the service can leverage Microsoft security and compliance investments to increase their security posture. Microsoft maintains PCI DSS and PCI 3DS compliant Azure data centers, including those which house Azure Payment HSM solutions. The Azure Payment HSM can be deployed as part of a validated PCI P2PE and PCI PIN component or solution, helping to simplify ongoing security audit compliance. Thales payShield 10K HSMs deployed in the security infrastructure are certified to FIPS 140-2 Level 3 and PCI HSM v3.

*The Azure Payment HSM service is currently undergoing PCI DSS and PCI 3DS audit assessment.

Manage your Payment HSM in Azure

The Azure Payment HSM service offers complete administrative control of the HSMs to the customer. This includes exclusive access to the HSMs. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to Microsoft to maintain complete privacy and security. The customer is responsible for deploying and configuring HSMs for high availability, backup and disaster recovery requirements, and to achieve the same performance available on their on-premises HSMs.

Accelerate digital transformation and innovation in cloud

The Azure Payment HSM solution offers native access to a payment HSM in Azure for ‘lift and shift’ with low latency. The solution offers high-performance transactions for mission-critical payment applications. Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their Payment HSM.

Typical use cases

With benefits including low latency and the ability to quickly add more HSM capacity as required, the cloud service is a perfect fit for a broad range of use cases which include:

Payment processing:

Card and mobile payment authorization
PIN and EMV cryptogram validation
3D-Secure authentication

Payment credential issuing:

Cards
Mobile secure elements
Wearables
Connected devices
Host card emulation (HCE) applications

Securing keys and authentication data:

POS, mPOS, and SPOC key management
Remote key loading (for ATM, POS, and mPOS devices)
PIN generation and printing
PIN routing

Sensitive data protection:

Point-to-point encryption (P2PE)
Security tokenization (for PCI DSS compliance)
EMV payment tokenization

Suitable for both existing and new payment HSM users

The solution provides clear benefits both for payment HSM users with a legacy on-premises HSM footprint and for new payment ecosystem entrants with no legacy infrastructure to support, who may choose a cloud-native approach from the outset.

Benefits for existing on-premises HSM users:

Requires no modifications to payment applications or HSM software to migrate existing applications to the Azure solution.
Enables more flexibility and efficiency in HSM utilization.
Simplifies HSM sharing among geographically dispersed teams.
Reduces physical HSM footprint in their legacy data centers.
Improves cash flow for new projects.

Benefits for new payment participants:

Avoids introduction of on-premises HSM infrastructure.
Lowers upfront investment via the Azure subscription model.
Offers access to the latest certified hardware and software on-demand.

Learn more about the service:

Azure Payment HSM
Azure Payment HSM documentation
Thales payShield 10K
Thales payShield Manager
Thales payShield Trusted Management Device

Source: Azure

Azure Cost Management and Billing updates – January 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where you're spending it, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management and Billing comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management and Billing can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Cost Management is now available in the Microsoft 365 Admin Center.
Multitasking in the cost analysis preview.
Help shape the future of cost reporting.
What's new in Cost Management Labs.
New ways to save money with Azure.
New videos and learning opportunities.
Documentation updates.
Join the Azure Cost Management and Billing team.

Let's dig into the details.

 

Cost Management available in the Microsoft 365 Admin Center

In October, we added support for Microsoft 365, Dynamics 365, and other seat-based offers in Cost Management. While we started with a small set of offers available via the Cloud Solution Provider program, you can expect to see new offers and support for the broader Microsoft Customer Agreement audience added over time. The next phase began its rollout in January with new offers, as well as the addition of Cost Management to the Microsoft 365 Admin Center.

If you have any of the new seat-based offers, you’ll find a new Billing > Cost Management menu item in the Microsoft 365 Admin Center. This will give you a lightweight cost analysis experience with the ability to create budgets. This is only the beginning and there’s a lot more coming, but we’re excited to be able to give you a peek at things to come.

For those paying close attention, you may notice the similarities here with the cost analysis preview in the Azure portal. Expect to see full alignment across portals with an even broader scope of capabilities coming throughout the year. Please let us know what you’d like to see next.

 

Multitasking in the cost analysis preview

Starting a new year is always exciting. Starting a new year with an exciting usability update is even better! We’ve been testing a new tabbed experience in the cost analysis preview for a couple months. You may have seen it already. You start with a list of the built-in views and can open multiple tabs to explore different aspects of your costs simultaneously.

Here are the views available in the cost analysis preview:

Subscriptions is available for billing accounts and management groups to break costs down by subscription and resource group.
Resource groups gives you a breakdown of each resource group within your subscription, management group, or billing account, with nested resources.
Resources shows a list of all resources you have (or used to have, in the case of deleted resources). Some of you may be familiar with the Cost by resource view in classic cost analysis. Resources improves on that basic design with better performance and better grouping of related costs (for example, Azure and Marketplace costs are grouped together in the preview).
Services shows a list of the services and products you use. This view is similar to the Invoice details view in classic cost analysis. The main difference is that rows are grouped by service, making it simpler to see your total cost at a service level and also break it down by the individual products you're using within each service. This view is only available in preview but will be released to everyone soon.
Reservations provides a breakdown of your amortized reservation costs, allowing you to see which resources are consuming each reservation. This is something that isn't possible without a lot of adding and removing filters in classic cost analysis.

From the new tab, select the view you need, and you're back to the traditional preview experience you're used to. If you have a question and need to drill into some other data, simply open a new tab and go! It's that simple.

That's about it! If you're new to the cost analysis preview, here are some of the other things you'll see:

Simpler and more flexible custom date range selection with support for relative periods.
Customize the download to exclude nested details (such as resources without meters in the Resources view).
Smart insights to help you better understand your data, like subscription cost anomalies.
Quick access to help set up Power BI for your EA or MCA billing account or billing profile.
Additional troubleshooting details are available to help streamline your support experience.

There's still a lot in the backlog. Stay tuned for summarized totals, charts, filtering, and improved drill down. Let us know what you'd like to see next.

 

Help shape the future of cost reporting

Do you use Azure Cost Management and Billing to manage your cloud spending? Are you familiar with Azure cost analysis? We're exploring new and updated designs for the cost analysis tool and will be running a usability study to gather feedback on these changes to understand how they can better meet your needs and expectations.

If you or someone you know has experience with cost analysis, we would love to get your feedback. If you are interested in participating, please contact our research team.

 

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences.

Here are a few features you can see in Cost Management Labs:

Update: Multitasking in the cost analysis preview—now available in the Azure portal

Introducing a new tabbed experience in the cost analysis preview. Start with a list of the built-in views and open multiple tabs to explore different aspects of your costs simultaneously. Let us know what you think. We're looking for explicit feedback here.

Subscription cost anomalies

Identify subscription cost anomalies with insights in the cost analysis preview. You can enable the cost anomaly preview using Try preview. If you don't see anomaly details in insights after enabling the preview, check back after 24 hours. Note that anomaly detection is only available when viewing cost for a subscription scope.
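Azure doesn't document the detection internals of this feature. As a purely illustrative toy, a rolling-statistics check over daily costs shows the general shape of anomaly flagging (the window size and threshold below are arbitrary):

```python
from statistics import mean, stdev

def flag_cost_anomalies(daily_costs, window=7, z_threshold=3.0):
    """Flag days whose cost deviates sharply from the trailing window.

    Toy illustration only -- not how Azure Cost Management actually
    detects anomalies.
    """
    anomalies = []
    for i in range(window, len(daily_costs)):
        baseline = daily_costs[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: no meaningful z-score
        z = (daily_costs[i] - mu) / sigma
        if abs(z) > z_threshold:
            anomalies.append((i, daily_costs[i], round(z, 1)))
    return anomalies

costs = [100, 102, 98, 101, 99, 103, 100, 480, 101]  # day 7 spikes
print(flag_cost_anomalies(costs))
```

Real-world detectors must also handle seasonality (weekday/weekend patterns) and gradual drift, which a fixed z-score check does not.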

View cost for your resources

The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that particular resource.

Change scope from the menu

Change scope from the menu for quicker navigation. You can opt in using Try preview.

Of course, that's not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

 

New ways to save money with Azure

There have been lots of cost optimization improvements over the past couple of months! Here are seven new and updated offers you might be interested in:

Reduced price: DCsv2 and DCsv3 virtual machine pricing reduced by up to 33 percent, effective January 1, 2022.
General availability: Azure Database for PostgreSQL Flexible Server.
General availability: Microsoft Azure is available from the new cloud region in Sweden.
Preview: Stop your Azure Spring Cloud applications to reduce charges.
Preview: Windows Server guest licensing offer for Azure Stack HCI—free while in preview.
Preview: DCasv5 and ECasv5 confidential, hardware-encrypted virtual machines.
Preview: DCv3 virtual machines are now available in West Europe and North Europe.

 

New videos and learning opportunities

Here are a few videos you might be interested in:

Improve the price-performance of your apps with the latest Azure virtual machines (25 minutes).
10 things you can implement to save costs in your Azure environment (14 minutes).
Managing EA enrollments in the Azure portal (3 minutes).
Managing EA departments in the Azure portal (3 minutes).
Managing EA enrollment accounts in the Azure portal (3 minutes).
Managing EA enrollment account subscriptions in the Azure portal (2 minutes).

Follow the Azure Cost Management and Billing YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Azure Cost Management and Billing.

 

Documentation updates

Here are a few documentation updates you might be interested in:

Get started with reporting.
Save and share customized views.
Simplified the cost analysis quickstart tutorial.
Added videos to the Azure portal administration for direct enterprise agreements.

Want to keep an eye on all of the documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

 

Join the Azure Cost Management and Billing team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Azure Cost Management and Billing updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Azure Cost Management and Billing.

We know these are trying times for everyone. Best wishes from the Azure Cost Management and Billing team. Stay safe and stay healthy.
Source: Azure

The intersection of edge computing, telecommunications networks, and the cloud

It’s been 12-plus years since we embarked on the paradigm-shifting edge computing story, which brings the cloud closer to the source of data generation and consumption. Nowadays, the cloud provides resource-rich compute and storage capabilities, remote management, and new applications and services as latency continues to be reduced. Edge computing has gone mainstream, as evidenced by numerous conferences and workshops; thousands of research papers, mainstream media articles, and Ph.D. theses; and many products, including those from Microsoft.

Years ago, an article we wrote stated that the killer application for edge computing was video analytics. The article, as published by IEEE, envisioned cameras and video located everywhere, increased ability to understand these video streams, and improved ability to react appropriately, stemming from real-time video analysis at the edge. Microsoft continues to believe that edge video analytics will be the dominant service for edge computing, just as we noted many years ago. Since then, we have evolved to an edge fabric, enabling ubiquitous computing. Here, the computing fabric is all around us in many different settings—working for us, improving efficiency, protecting us from problems, and entertaining us.

In this article, we focus on what’s next, including the topic of edge computing for telecommunications, which has been evolving into the next wave of innovation, and one we must embrace. Microsoft believes the telecom edge is the catalyst creating a new world where the telecom and cloud industries join forces to eliminate duplication while creating a new era of latency-sensitive applications and services.

Enabling private 5G Networks with Azure private multi-access edge compute (MEC)

A private 5G network is a local-area mobile network; technically, it is the same as a public wide-area 5G network. This next-generation network enables advanced use cases not supported by current mainstream Wi-Fi technologies. For example, private 5G networks can unify connectivity and support a variety of enterprise-specific secure IoT services and applications.

In June 2021, Microsoft unveiled a new product category for the telecom industry when we announced our Azure private multi-access edge compute (Azure private MEC) managed solution. Azure private MEC is a solution for modernizing enterprise networks, comprised of Azure Stack Edge, Azure Network Function Manager, first- and third-party network functions, and manageability via Azure Arc. With it, carriers and ecosystem partners can easily and rapidly deploy and manage network functions like 5G mobile cores, radio access network (RAN) solutions, and Software-Defined Wide Area Network (SD-WAN) products directly from Azure Marketplace. Our open platform solution empowers operators and system integrators (SIs) to unlock the private 5G opportunity by delivering managed, curated solutions to enterprises with the flexibility of first- and third-party offerings, including their choice of RAN and latency-sensitive applications.

Many of us in the IT and telecom industries accept edge computing as a game-changing architectural innovation, reducing the time needed to process the packet after it is generated at the source. All edge computing products that exist today provide this, but Azure private MEC enables even more. With the emergence of novel software-only 5G implementations, edge computing is evolving to become an exciting part of the packet creation infrastructure.

Conflation of Virtual Radio Access Networks and edge computing

The figure below illustrates the shift away from specialized, monolithic network infrastructure to programmable, virtualized Radio Access Network (vRAN) elements. Virtualized RAN offers a cost-efficient solution for running the 5G RAN as a virtualized network function (VNF) on commodity hardware. To implement vRAN, telcos need a low-latency connection between their signal acquisition and computing hardware, necessitating edge computing to make vRAN possible.

It is possible to implement vRAN over a hierarchy of edge installations. In 3GPP RAN parlance, the distributed units (DU) that implement the near-real-time functionality of the RAN, which include physical layer processing (often referred to as L1) and medium access control (often referred to as L2/L3), are implemented at the “Far Edge.” The rest of the RAN stack, along with the network core, is implemented at the “Near Edge.” We have been working on providing this edge infrastructure to operators as part of Microsoft’s core offering.

Figure 1: RAN architectural evolution and innovations in 5G networks.

Despite this evolution in 5G networks, there is still more to do. When implementing RAN functionality at the Far and Near Edges, one has to decide how many server cores are needed to support a given number of cell sites. This type of problem is easy to solve. Microsoft computer scientists are able to determine the number of cores needed to serve the client device, and have further invented and developed algorithms and techniques to allow scaling, energy management, fault tolerance, and feature deployment in running systems. Note that server cores can be provisioned to both assist with packet generation and running applications and services.
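The sizing logic described above can be sketched as a back-of-the-envelope calculation. The per-site core cost and headroom figures below are invented for illustration and are not Microsoft's actual provisioning model:

```python
import math

def cores_needed(cell_sites, cores_per_site=2.5, headroom=0.2):
    """Estimate server cores required to serve a group of cell sites.

    Illustrative only: cores_per_site and headroom are made-up
    parameters, not Azure's actual provisioning model.
    """
    raw = cell_sites * cores_per_site
    # Round up after adding headroom for bursts and feature deployment.
    return math.ceil(raw * (1 + headroom))

print(cores_needed(10))  # 10 sites -> ceil(25 * 1.2) = 30 cores
```

In practice, the scaling, energy-management, and fault-tolerance techniques mentioned above adjust this allocation dynamically rather than fixing it at provisioning time.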

In ACM SIGCOMM 2021, we published a paper entitled “Concordia: Teaching the 5G vRAN to Share Compute.” As noted in this publication, one reason vRAN is more efficient than traditional RANs is that it multiplexes several base station workloads on the same computer hardware. Although this multiplexing provides efficiency gains, more than 50 percent of CPU cycles in typical vRAN settings still remain unused.

Here, co-locating the vRAN functionality with general-purpose workloads not only improves CPU utilization, but it also allows us to service low-latency applications on the same hardware. This is important since vRAN tasks have sub-millisecond latency requirements that have to be met 99.999 percent of the time—difficult to accomplish with existing systems.

Microsoft has also built a user space deadline scheduling framework for the vRAN. Our system includes prediction models using quantile decision trees to outline worst-case execution times of vRAN signal processing tasks. Running every 20 microseconds, the ultra-fast scheduler delivers accurate prediction models, enabling the system to reserve a minimum number of cores required for vRAN tasks while leaving the rest for general-purpose workloads. Evaluated on a commercial-grade reference vRAN platform, our design meets the 99.999 percent reliability requirements and reclaims more than 70 percent of idle CPU cycles without affecting RAN performance.
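A heavily simplified sketch of this idea follows, substituting a plain empirical quantile for Concordia's quantile decision trees; the sample times and deadline are illustrative, not measurements from the real system:

```python
def worst_case_estimate(samples_us, quantile=0.99999):
    """Empirical high-quantile estimate of a task's execution time (us).

    Concordia conditions quantile decision trees on task features;
    this plain empirical quantile is a stand-in for illustration.
    """
    ordered = sorted(samples_us)
    idx = min(len(ordered) - 1, int(quantile * len(ordered)))
    return ordered[idx]

def cores_to_reserve(per_task_estimates_us, deadline_us=500):
    """Reserve the minimum cores so predicted worst cases fit the deadline.

    Assumes each core can absorb deadline_us of work per scheduling
    period; remaining cores stay free for general-purpose workloads.
    """
    total = sum(per_task_estimates_us)
    return -(-total // deadline_us)  # ceiling division

times = [100, 110, 105, 300, 98, 102]
est = worst_case_estimate(times)
print(est, cores_to_reserve([est, est]))
```

The real scheduler re-runs this kind of decision every 20 microseconds, which is what lets it reclaim idle cycles without missing vRAN deadlines.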

Looking ahead

Edge computing was created jointly by Microsoft and our academic colleagues. Edge computing products have evolved, as we fine-tune solutions to new sets of problems we are solving. Beyond implementing 5G infrastructure on commodity hardware, our software takes advantage of the latest discoveries we’ve made in applying machine learning techniques to improve the performance of our edge nodes. We continue to work closely with our academic colleagues, and serve on the advisory board of two National Science Foundation (NSF)-funded Edge AI research centers (The Institute for Future Edge Networks and Distributed Intelligence and The Institute for Edge Computing Leveraging Next Generation Networks). Both research institutes focus on developing AI technologies as part of edge computing that leverages next-generation communications networks to provide previously impossible services.

The future is bright because we are on the right track with Azure private MEC. The architecture we are developing and the products we are delivering will make edge computing indispensable, as every packet in the mobile network will be processed by an edge node, leading to a large ubiquitous processing fabric, the likes of which we have never enjoyed before.

Learn more

To learn more about our Azure for Operators strategy, refer to the Azure for Operators e-book.
Source: Azure

Improve your security defenses for ransomware attacks with Azure Firewall

To ensure customers running on Azure are protected against ransomware attacks, Microsoft has invested heavily in Azure security and has provided customers with the security controls needed to protect their Azure cloud workloads.

A comprehensive overview of best practices and recommendations can be found in the "Azure Defenses for Ransomware Attack" e-book.

Here, we would like to zoom into network security and understand how Azure Firewall can assist you with protecting against ransomware.

Ransomware is a type of malicious software designed to block access to your computer system until a sum of money is paid. The attacker typically exploits an existing vulnerability in your system to penetrate your network and execute the malicious software on the target host.

Ransomware is often spread through phishing emails that contain malicious attachments or through drive-by downloading. Drive-by downloading occurs when a user unknowingly visits an infected website and then malware is downloaded and installed without the user’s knowledge.

This is where Azure Firewall Premium can help. With its intrusion detection and prevention system (IDPS) capability, every packet is inspected thoroughly, including all headers and the payload, to identify malicious activity and prevent it from penetrating your network. IDPS lets you monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.

The IDPS signatures apply to both application-level and network-level traffic (Layers 4-7). They are fully managed and comprise more than 65,000 signatures in over 50 categories, kept up to date with the dynamic, ever-changing attack landscape:

Azure Firewall gets early access to vulnerability information from the Microsoft Active Protections Program (MAPP) and the Microsoft Security Response Center (MSRC).
Azure Firewall releases 30 to 50 new signatures each day.
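Conceptually, signature-based inspection boils down to matching packet payloads against a curated pattern set. The toy sketch below uses made-up signature IDs and patterns purely to illustrate the idea; real IDPS engines additionally decode protocols, track flow state, and apply anomaly heuristics:

```python
# Hypothetical signature set: IDs and byte patterns are invented.
SIGNATURES = {
    "sig-2001": b"\x4d\x5a\x90\x00",        # example: PE executable header bytes
    "sig-2002": b"cmd.exe /c powershell",    # example: suspicious command string
}

def inspect_packet(payload: bytes):
    """Return the IDs of all signatures whose pattern occurs in the payload.

    Toy byte-matching only -- not how a production IDPS engine works.
    """
    return [sid for sid, pattern in SIGNATURES.items() if pattern in payload]

print(inspect_packet(b"GET / HTTP/1.1\r\n"))                    # []
print(inspect_packet(b"... cmd.exe /c powershell -enc AAA ..."))  # ['sig-2002']
```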

Nowadays, modern encryption, such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), is used globally to secure internet traffic. Attackers are using encryption to carry their malicious software into the victim network. Therefore, customers must inspect their encrypted traffic just like any other traffic.

Azure Firewall Premium IDPS allows you to detect attacks in all ports and protocols for non-encrypted traffic. However, when HTTPS traffic needs to be inspected, Azure Firewall can use its TLS inspection capability to decrypt the traffic and accurately detect malicious activities.

After the ransomware is installed on the target machine, it may try to encrypt the machine’s data. To do so, it needs an encryption key, which it may fetch over a command and control (C&C) channel from a C&C server hosted by the attacker. CryptoLocker, WannaCry, TeslaCrypt, Cerber, and Locky are among the ransomware families that use C&C to fetch the required encryption keys.

Azure Firewall Premium has hundreds of signatures that are designed to detect C&C connectivity and block it to prevent the attacker from encrypting customers’ data.

Figure 1: Firewall protection against ransomware attack using command and control channel

Taking a comprehensive approach to fend off ransomware attacks

Taking a holistic approach to fending off ransomware attacks is recommended. Azure Firewall operates in a default-deny mode and blocks access unless explicitly allowed by the administrator. Enabling the Threat Intelligence (TI) feature in alert/deny mode blocks access to known malicious IPs and domains. The Microsoft threat intelligence feed is updated continuously based on new and emerging threats.

Firewall policy can be used for the centralized configuration of firewalls. This helps with responding to threats rapidly. Customers can enable Threat Intel and IDPS across multiple firewalls with just a few clicks. Web categories let administrators allow or deny user access to web categories such as gambling websites, social media websites, and others. URL filtering provides scoped access to external sites and can cut down risk even further. In other words, Azure Firewall has everything necessary for companies to defend comprehensively against malware and ransomware.

Detection is equally important as prevention. Azure Firewall solution for Microsoft Sentinel gets you both detection and prevention in the form of an easy-to-deploy solution. Combining prevention and detection allows you to ensure that you both prevent sophisticated threats when you can, while also maintaining an “assume breach mentality” to detect and quickly respond to cyberattacks.

Learn more about Azure Firewall Premium and ransomware protection

Learn more about Azure Firewall Premium features from Microsoft documentation.
Download our e-book, "Azure Defenses for Ransomware Attack."
Read more about how to optimize security with Azure Firewall solution for Microsoft Sentinel.

Source: Azure

New performance and logging capabilities in Azure Firewall

Organizations are speeding up workload migration to Azure to take advantage of the growing set of innovative cloud services, the scale, and the economic benefits of the public cloud. Migrating applications to the cloud consequently increases network traffic throughput demand. This puts pressure on network elements, and more specifically on Azure Firewall, which is in the critical path of most network traffic. Currently, Azure Firewall supports 30 Gbps, which is sufficient to meet current throughput demands for many of our customers. However, some organizations require even more throughput, so we are announcing new Azure Firewall capabilities as well as updates for January 2022:

Azure Firewall network rule name logging.
Azure Firewall Premium performance boost.
Azure Firewall performance whitepaper.

Azure Firewall network rule name logging

We have heard your feedback and are happy to announce that rule names are now available in network rule logs, just as they are for application rules.

Previously, a network rule hit event would show only the source and destination IP addresses and ports, along with the action taken (allow or deny). With the new functionality, the event logs for network rules also contain the policy name, rule collection group, rule collection, and the name of the rule that was hit.

After enabling the feature, the following information will be provided for a network rule hit event in the logs:

Figure 1: Network rule event in the logs after enabling the “network rule name logging” feature.

Note: For Classic Firewalls (those not managed by an Azure Firewall policy), only the rule name will be visible.

To enable the network rule name logging feature, follow the instructions.
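Once the feature is enabled and firewall diagnostic logs are flowing to a Log Analytics workspace, the enriched network rule events can be inspected with a query such as the following sketch. The workspace ID is a placeholder, and the query assumes logs are collected into the legacy AzureDiagnostics table:

```shell
# Pull recent network rule hit events from Log Analytics.
# The rule name, rule collection, and policy details appear in the
# logged message once the feature is enabled.
# (Workspace GUID is a placeholder; run `az login` first.)
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query '
AzureDiagnostics
| where Category == "AzureFirewallNetworkRule"
| where TimeGenerated > ago(1h)
| project TimeGenerated, msg_s
| take 20'
```

The same KQL can be run interactively in the Log Analytics blade of the Azure portal if you prefer to explore the log schema there first.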

Azure Firewall Premium performance boost

As more applications move to the cloud, the performance of network elements can become a bottleneck. As the central piece of any network design, the firewall needs to be able to support all of those workloads. We are therefore happy to announce that the Azure Firewall Premium performance boost capability is now in preview, enabling greater scale for these deployments.

This feature increases the maximum throughput of Azure Firewall Premium by more than 300 percent, to 100 Gbps. See the performance whitepaper section below for more details.

To enable the Azure Firewall Premium performance boost feature, follow the instructions.
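Preview capabilities like this one are typically enabled by registering a feature flag on the subscription. The sketch below shows the general Azure CLI pattern; the feature name used here is an assumption based on the preview documentation, so verify it against the current instructions before running:

```shell
# Register the preview feature on your subscription.
# (Feature name is an assumption; confirm it in the official
# enablement instructions for the performance boost preview.)
az feature register \
  --namespace Microsoft.Network \
  --name AFWEnableAccelnet

# Registration can take a while; poll until the state is "Registered".
az feature show \
  --namespace Microsoft.Network \
  --name AFWEnableAccelnet \
  --query properties.state

# Re-register the resource provider so the change propagates.
az provider register --namespace Microsoft.Network
```

After the provider registration completes, new and existing Premium firewall deployments in the subscription can pick up the boosted throughput per the preview instructions.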

Also check out the comprehensive testing done by Andrew Myers for a detailed analysis and as a reference for building your own test environment.

Azure Firewall performance whitepaper

Reliable firewall performance is essential to operating and protecting your virtual networks in Azure. Azure Firewall should not only handle the current traffic on a network, but also be ready for potential traffic growth. To give customers better visibility into the expected performance of Azure Firewall, we are releasing the Azure Firewall performance documentation.

As we continually work to improve the Azure Firewall service, the metrics highlighted in the document will be updated to reflect the latest performance results you can expect from Azure Firewall. Make sure to bookmark the page to stay up to date with the latest information.

Learn more about Azure Firewall

For more information on everything we covered above, see the following documentation:

Azure Firewall documentation
Azure Firewall performance documentation
Azure Firewall preview features
Azure Firewall logs and metrics
Azure Firewall FAQ

Source: Azure