Advancing anomaly detection with AIOps—introducing AiDice

This blog post has been co-authored by Jeffrey He, Product Manager, AIOps Platform and Experiences Team.

In Microsoft Azure, we invest tremendous effort in ensuring our services are reliable by predicting and mitigating failures as quickly as we can. In large-scale cloud systems, however, unexpected issues can still arise simply because of the massive scale of the system. Given this, using AIOps to continuously monitor health metrics is fundamental to running a cloud system successfully, as we have shared in our earlier posts: first in Advancing Azure service quality with artificial intelligence: AIOps, and then in a deep dive into the safe deployment space in Advancing safe deployment with AIOps. Today, we share another example, this time from the field of anomaly detection. Specifically, we introduce AiDice, a novel anomaly detection algorithm developed jointly by Microsoft Research and Microsoft Azure that identifies anomalies in large-scale, multi-dimensional time series data. AiDice not only captures incidents quickly but also provides engineers with important context that helps them diagnose issues more effectively, providing the best experience possible for end customers.

Why are AIOps needed for anomaly detection?

We need AIOps for anomaly detection because the data volume is simply too large to analyze without AI. In large-scale cloud environments, we monitor a vast number of cloud components, and each component logs countless rows of data. In addition, each row of data for any given cloud component might contain dozens of columns, such as the timestamp, the hardware type of the virtual machine, the generation number, the OS version, the datacenter housing the nodes that host the virtual machine, or the country. The data is essentially multi-dimensional time series data, which contains an exponential number of individual time series due to the various combinations of dimensions. This means that iterating through and monitoring every single time series is simply not practical, so applying AIOps is necessary.
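To see why the number of time series explodes, consider a back-of-the-envelope count of the distinct pivots implied by a handful of dimensions. The dimension names and cardinalities below are invented for illustration, not actual Azure figures:

```python
from math import prod

# Hypothetical dimension cardinalities for one telemetry table
# (illustrative numbers only, not actual Azure figures).
dimensions = {
    "hardware_type": 20,
    "generation": 5,
    "os_version": 40,
    "datacenter": 60,
    "country": 30,
}

# A pivot either fixes one of the N values of a dimension or leaves it
# unconstrained (a wildcard), so the number of distinct pivots is
# prod(cardinality + 1), and each pivot defines its own time series.
pivot_count = prod(n + 1 for n in dimensions.values())
print(pivot_count)  # 9768906
```

Even with only five modest dimensions, nearly ten million individual time series would need monitoring; each added dimension multiplies the count again.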

How did we approach this, before AiDice?

Before AiDice, the way we handled anomaly detection in large-scale, high-dimensional time series data was to conduct anomaly detection on a selected set of dimensions that were the most important. By focusing on a scoped subset, we would be able to detect anomalies within those combinations quickly. Once these anomalies were detected, engineers would then dive deeper into the issues, using pivot tables to drill down into the other dimensions not included to better diagnose the issue. Although this approach worked, we saw two key opportunities to improve the process. First, the old approach required a lot of manual effort by engineers to determine the exact pivot of anomalies. Second, the approach also limited the scope of direct monitoring by only allowing us to input a limited number of dimensions into our anomaly detection algorithms. Given these reasons, Microsoft Research and Azure worked together to develop AiDice, which improves both of these areas.

How do we approach this now with AiDice, and how does it work?

Now with AiDice, we can automatically localize pivots in time series data even when looking at dozens of dimensions at the same time. This allows us to add many more attributes, whether that is the hardware generation, the hardware microcode, the OS version, or the networking agent version. Though this makes the search space much larger, AiDice encodes the problem as a combinatorial optimization problem, allowing it to search through the space more efficiently than traditional approaches. Brief details of AiDice are described below; for a full explanation of the algorithm, please see the paper published at ESEC/FSE 2020, the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

Part 1: AiDice algorithm—formulation as a search problem

The AiDice algorithm works by first framing the data as a search problem. Search nodes are formed by starting at a given pivot and expanding out to its neighbors. For example, if we take a node, "Country=USA, Datacenter=DC1, DiskType=SSD", we can form the neighboring nodes by swapping, adding, or removing a dimension-value pair, as shown in the diagram below.
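The neighbor expansion above can be sketched in a few lines. This is a minimal illustration, not the actual AiDice implementation: a pivot is a mapping of dimension to value, and its neighbors are formed by removing, swapping, or adding one dimension-value pair. The dimension domains are invented for demonstration:

```python
# Invented dimension domains for illustration; the real search is
# described in the ESEC/FSE 2020 paper.
DOMAINS = {
    "Country": ["USA", "UK"],
    "Datacenter": ["DC1", "DC2"],
    "DiskType": ["SSD", "HDD"],
    "OSVersion": ["v1", "v2"],
}

def neighbors(pivot):
    result = []
    # Remove one dimension-value pair.
    for dim in pivot:
        result.append({d: v for d, v in pivot.items() if d != dim})
    # Swap the value of one constrained dimension.
    for dim, value in pivot.items():
        for other in DOMAINS[dim]:
            if other != value:
                result.append({**pivot, dim: other})
    # Add a dimension that is not yet constrained.
    for dim, values in DOMAINS.items():
        if dim not in pivot:
            for v in values:
                result.append({**pivot, dim: v})
    return result

pivot = {"Country": "USA", "Datacenter": "DC1", "DiskType": "SSD"}
for n in neighbors(pivot):
    print(n)
```

For this pivot, the three move types yield eight neighbors: three removals, three swaps, and two additions of the unconstrained OSVersion dimension.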

Part 2: AiDice algorithm—objective function

Next, the AiDice algorithm searches through the space in a smart manner by maximizing an objective function that emphasizes two key components. First, the bigger the sudden burst or change in errors, the higher AiDice scores the objective function. Second, the higher the proportion of errors that occur in this pivot relative to the total number of errors, the higher AiDice scores the objective function. For example, if there are 5,000 total errors, it is more important to alert the user about the pivot that went from 3,000 errors to 4,000 errors than the pivot that went from 10 to 20 errors.
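The two properties can be illustrated with a toy scoring function. The actual AiDice objective is defined in the ESEC/FSE paper; the simple product below is an assumption for demonstration only:

```python
# Toy objective: a larger burst raises the score, and a larger share of
# the total errors raises the score. This is NOT the real AiDice
# objective, just a sketch with the same two properties.

def score(before, after, total_errors):
    burst = max(after - before, 0)       # size of the sudden change
    proportion = after / total_errors    # share of all errors in this pivot
    return burst * proportion            # both factors push the score up

total = 5000
print(score(3000, 4000, total))  # 800.0 (dominant pivot: big burst, big share)
print(score(10, 20, total))      # 0.04 (noisy pivot: small burst, tiny share)
```

Under any such objective, the search strongly prefers the pivot responsible for the bulk of the errors, matching the 3,000-to-4,000 example above.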

Part 3: Customization of alerts to reduce noise

Next, the alerts that AiDice produces need to be filtered and customized to be less noisy and more actionable since the results so far are optimized from a mathematical perspective but have not yet incorporated domain knowledge around the meaning of the input data. This step can vary widely depending on the nature of the input data, but an example could be that consecutive alerts that share the same error code may be grouped together to reduce the number of total alerts.
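The error-code grouping mentioned above can be sketched as a simple pass over consecutive alerts. The alert fields here are invented for illustration; real filtering logic depends on the input data:

```python
from itertools import groupby

# Hypothetical alert stream (field names invented for illustration).
alerts = [
    {"time": "10:00", "error_code": "E42"},
    {"time": "10:05", "error_code": "E42"},
    {"time": "10:10", "error_code": "E42"},
    {"time": "10:15", "error_code": "E7"},
    {"time": "10:20", "error_code": "E42"},
]

# Collapse runs of consecutive alerts that share an error code into one
# grouped notification, reducing the number of alerts an engineer sees.
grouped = [
    {"error_code": code, "count": len(list(items))}
    for code, items in groupby(alerts, key=lambda a: a["error_code"])
]
print(grouped)  # five raw alerts collapse into three grouped notifications
```

Note that `itertools.groupby` only merges adjacent runs, which is exactly the "consecutive alerts" behavior described; a non-adjacent duplicate still produces a separate group.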

AiDice in action—an example

The following is a real example in which AiDice helped detect an issue early on. The details have been altered for confidentiality reasons.

We applied AiDice to monitor low memory error events in a certain type of virtual machine with more than a dozen dimensions of attribute information alongside the fault count, including the region, the datacenter location, the cluster, the build, the RAM, and the event type.
AiDice identified an increase in the number of low memory events on distinct nodes in a particular pivot, which indicated a memory leak.

Build=11.11111, Ram=00.0, ProviderName=Xxxxx-x-Xxxxxx, EventType=8888 (details have been altered for privacy).

When looking at the aggregate trend, the issue is hidden; without AiDice, it would take significant manual effort to detect its exact location (see graphs below; data normalized for privacy).
The engineer responsible for the ticket looked at the alert and the example cases shown in it and was quickly able to figure out what was going on.

In this real-world example, AiDice automatically detected an issue in a dimension combination that was causing a particular error type, quickly and efficiently. Soon after, the memory leak was discovered and Azure engineers were able to mitigate the issue.

Looking forward

Looking ahead, we hope to improve AiDice to make Azure even more resilient and reliable. Specifically, we plan to:

Support additional scenarios in Azure: AiDice is being applied to many scenarios in Azure already, but the algorithm has room to improve with respect to the types of metrics it can operate on. Microsoft Azure and the Microsoft Research team are working together to support more metric scenarios.
Prepare additional data feeds in Azure for AiDice: In addition to upgrading the AiDice algorithm itself to support more scenarios, we are also working to add supporting attributes to certain data sources to fully leverage the power of AiDice.

Learn more

Sign up for Microsoft Azure today.
Visit the Advancing Reliability Series.

Source: Azure

Ensure zone resilient outbound connectivity with NAT gateway

Our customers—across all industries—have a critical need for highly available and resilient cloud frameworks to ensure business continuity and adaptability of ever-growing workloads. One way that customers can achieve resilient and reliable infrastructures in Microsoft Azure (for outbound connectivity) is by setting up their deployments across availability zones in a region.

When customers need to connect outbound to the internet from their Azure infrastructures, Network Address Translation (NAT) gateway is the best way. NAT gateway is a zonal resource that is configured to subnets within a single virtual network, which means it can be deployed to an individual zone to allow outbound connectivity. Subnets and virtual networks, on the other hand, are regional constructs that are not restricted to individual zones; subnets can contain virtual machine instances or scale sets spanning multiple availability zones.

Even without being able to traverse multiple availability zones, NAT gateway still provides a highly resilient and reliable way to connect outbound to the internet. This is because it does not rely on any single compute instance like a virtual machine. Instead, NAT gateway leverages software-defined networking to operate as a fully managed and distributed service with built-in redundancy. This built-in redundancy means that customers are unlikely to experience individual NAT gateway resource outages or downtime in their Azure infrastructures.

To ensure that you have the optimal outbound configuration to meet your availability and security needs while also safeguarding against zonal outages, let’s look at how to create zone resilient setups in Azure with NAT gateway.

Zone resilient outbound connectivity scenarios with NAT gateway

Customer setup

Let's say you are a retailer who is preparing for an upcoming Black Friday event. You anticipate that traffic to your retail website will increase significantly on the day of the sale. You decide to deploy a virtual machine scale set (VMSS) so that your compute resources can automatically scale out to meet the increased traffic demands. Scalability is not your only requirement for this event; you also need resiliency and security. To safeguard against potential zonal outages that could impact traffic flow, you decide to deploy these VMSS across multiple availability zones. In addition to using VMSS in multiple availability zones, you plan to use NAT gateway to handle all outbound traffic flow in a scalable, secure, and reliable manner.

How should you set up your NAT gateway with your VMSS across multiple availability zones? Let’s take a look at a few different configurations along with which setups will and won’t work.

Scenario 1: Set up a single zonal NAT gateway with your zone-spanning VMSS

First, you decide to deploy a single NAT gateway resource to availability zone 1 and your VMSS across all three availability zones within the same subnet. You then configure your NAT gateway to this single subnet and to a /28 public IP prefix, which provides you a contiguous set of 16 public IP addresses for connecting outbound. Does this setup safeguard you against potential zone outages? No.

Figure 1: A single zonal NAT gateway configured to a zone-spanning set of virtual machines does not provide optimal zone resiliency. NAT gateway is deployed out of zone 1 and configured to a subnet that contains a VMSS that spans across all three availability zones of the Azure region. If availability zone 1 goes down, outbound connectivity across all three zones will also go down.

Here’s why:

If the zone that goes down is also the zone in which NAT gateway has been deployed then all outgoing traffic from virtual machines across all zones will be blocked.
If the zone that goes down is different than the zone that NAT gateway has been deployed in, then outgoing traffic from the other zones will still occur and only virtual machines from the zone that has gone down will be impacted.

Scenario 2: Attach multiple NAT gateways to a single subnet

Since the previous configuration will not provide the highest degree of resiliency, you decide you will instead deploy three NAT gateway resources, one in each availability zone, and attach them to the subnet that contains the VMSS. Will this setup work? Unfortunately, no.

Figure 2: Multiple NAT gateways cannot be attached to a single subnet by design.

Here’s why:

A subnet cannot have more than one NAT gateway attached to it and it is not possible to set up multiple NAT gateways on a single subnet. When NAT gateway is configured to a subnet, NAT gateway becomes the default next hop type for network traffic before reaching the internet. Consequently, virtual machines in a subnet will source NAT to the public IP address(es) of NAT gateway before egressing to the internet. If more than one NAT gateway were to be attached to the same subnet, the subnet would not know which NAT gateway to use to send outbound traffic.

Scenario 3: Deploy zonal NAT gateways with zonally configured VMSS for optimal zone resiliency

What, then, is the optimal solution for creating a secure, resilient, and scalable outbound setup? The solution is to deploy a VMSS in each availability zone, configure each to its own subnet, and then attach each subnet to a zonal NAT gateway resource.

Figure 3: Zonal NAT gateways configured to individual subnets for zonal VMSS provide optimal zone resiliency for outbound connectivity.

Deploying zonal NAT gateways to match the zones of the VMSS provides the greatest protection against zonal outages. Should one of the availability zones go down, the other two zones will still be able to egress outbound traffic from the other two zonal NAT gateway resources.
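The difference between Scenario 1 and Scenario 3 can be sketched with a toy model of which VMs keep outbound connectivity when a zone fails (Scenario 2 is omitted because Azure rejects that configuration outright). This is a simplified illustration of the failure logic described above, not an Azure API:

```python
def outbound_ok(vm_zone, nat_zone, failed_zone):
    """A VM keeps outbound connectivity only if neither its own zone nor
    the zone of the NAT gateway serving its subnet has failed."""
    return vm_zone != failed_zone and nat_zone != failed_zone

failed = 1            # availability zone 1 goes down
vm_zones = [1, 2, 3]  # one VM per zone, for simplicity

# Scenario 1: a single NAT gateway in zone 1 serves every VM.
scenario1 = [outbound_ok(z, nat_zone=1, failed_zone=failed) for z in vm_zones]

# Scenario 3: each zone has its own subnet and its own zonal NAT gateway.
scenario3 = [outbound_ok(z, nat_zone=z, failed_zone=failed) for z in vm_zones]

print(scenario1)  # [False, False, False]: all outbound traffic is blocked
print(scenario3)  # [False, True, True]: only the failed zone is affected
```

The model makes the trade-off concrete: aligning each NAT gateway's zone with the VMs it serves confines a zonal outage to that zone alone.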

Summary of zone resilient scenarios with NAT gateway

Scenario 1: Set up a single zonal NAT gateway with a VMSS that spans multiple availability zones but is confined to a single subnet.
Rating: Not recommended. If the zone that the NAT gateway is located in goes down, outbound connectivity for all VMs in the scale set goes down.

Scenario 2: Attach multiple zonal NAT gateways to a subnet that contains zone-spanning virtual machines.
Rating: Not possible. Multiple NAT gateways cannot be associated to a single subnet by design.

Scenario 3: Deploy zonal NAT gateways to separate subnets with zonally configured VMSS.
Rating: Optimal configuration to provide zone resiliency and protect against outages.

FAQ on NAT gateway and availability zones

What does it mean to have a "no zone" NAT gateway?

"No zone" is the default availability zone selected when you deploy a NAT gateway resource. No zone means that Azure places the NAT gateway resource into a zone for you, but you do not have visibility into which specific zone it is placed in. It is recommended that you deploy your NAT gateway to a specific zone so that you know where your NAT gateway resource resides. Once the NAT gateway is deployed, the availability zone designation cannot be changed.

If I have Load Balancer or instance-level public IPs (IL PIPs) on virtual machines and NAT gateway deployed in the same virtual network and NAT gateway or an availability zone goes down, will Azure fall back to using Load Balancer or IL PIPs for all outbound traffic?

Azure will not failover to using Load Balancer or IL PIPs for handling outbound traffic when NAT gateway is configured to a subnet. After NAT gateway has been attached to a subnet, the user-defined route (UDR) at the source virtual machine will always direct virtual machine–initiated packets to the NAT gateway even if the NAT gateway goes down.

Learn more

NAT gateway and availability zones.
Design virtual networks with NAT gateway.
Create a NAT gateway with the portal.

Source: Azure

Strengthen your security with Policy Analytics for Azure Firewall

This blog was co-authored by Gopikrishna Kannan, Principal Program Manager, Azure Networking.

Network security policies are constantly evolving to keep pace with the demands of workloads. With the acceleration of workloads to the cloud, network security policies, Azure Firewall policies in particular, are frequently changing and often updated multiple times a week (in many cases several times a day). Over time, the Azure Firewall network and application rules grow and can become suboptimal, impacting firewall performance and security. For example, high-volume and frequently hit rules can be unintentionally prioritized lower. In some cases, applications hosted in one network have been migrated to a different network, yet the firewall rules referencing the older network have not been deleted.

Optimizing Firewall rules is a challenging task for any IT team. Especially for large, geographically dispersed organizations, optimizing Azure Firewall policy can be manual, complex, and involve multiple teams across the world. Updates are risky and can potentially impact a critical production workload causing serious downtime. Well, not anymore!

Policy Analytics has been developed to help IT teams manage Azure Firewall rules over time. It provides critical insights and recommendations for optimizing Azure Firewall rules with a goal of strengthening your security posture. We are excited to share that Policy Analytics for Azure Firewall is now in preview.

Optimize Azure Firewall rules with Policy Analytics

Policy Analytics helps IT teams address these challenges by providing visibility into traffic flowing through the Azure Firewall. Key capabilities available in the Azure Portal include:

Firewall flow logs: Displays all traffic flowing through the Azure Firewall alongside hit rate and network and application rule match. This view helps identify top flows across all rules. You can filter flows matching specific sources, destinations, ports, and protocols.
Rule analytics: Displays traffic flows mapped to destination network address translation (DNAT), network, and application rules. This provides enhanced visibility of all the flows matching a rule over time. You can analyze rules across both parent and child policies.
Policy insight panel: Aggregates policy insights and highlights policy recommendations to optimize your Azure Firewall policies.
Single-rule analysis: The single-rule analysis experience analyzes traffic flows matching the selected rule and recommends optimizations based on those observed traffic flows.

Deep dive into single-rule analysis

Let’s investigate single-rule analysis. Here we select a rule of interest, analyze its matching flows, and optimize accordingly.

Users can analyze Firewall rules with a few easy clicks.

Figure 1: Start by selecting Single-rule analysis.

With Policy Analytics, you can perform rule analysis by picking the rule of interest to optimize. For instance, you may want to analyze rules with a wide range of open ports or a large number of sources and destinations.

Figure 2: Select a rule and Run analysis.

Policy Analytics surfaces the recommendations based on the actual traffic flows. You can review and apply the recommendations, including deleting rules which don’t match any traffic or prioritizing them lower. Alternatively, you can lock down the rules to specific ports matching traffic.

Figure 3: Review the results and Apply selected changes.

Pricing

While in preview, enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the Azure Firewall Manager pricing page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no additional cost.

Next steps

Policy Analytics for Azure Firewall simplifies firewall policy management by providing insights and a centralized view to help IT teams have better and consistent control of Azure Firewall. To learn more about Policy Analytics, see the following resources:

Get started with Azure Firewall and Policy Analytics.
Watch this video for a detailed walkthrough of the Policy Analytics capabilities.
Firewall Manager documentation.
Azure Firewall Standard features, Microsoft Learn.
Azure Firewall Premium features, Microsoft Learn.

Source: Azure

RoQC and Microsoft simplify cloud migration with Microsoft Energy Data Services

This post was co-authored by Ian Barron, Chief Technology Officer, RoQC.

The vast amount of data in energy companies slows down their digital transformation. Together with RoQC solutions, Microsoft Energy Data Services will accelerate your journey in democratizing access to data by providing an easy-to-deploy managed service fully supported by Microsoft.

Managing large data sets is complicated, and few industries have larger and more complex data sets than the energy industry. Data complexity and large investments in on-premises storage solutions and multitudes of computer systems prevent the transition to cloud-based sub-surface data management. A single company can have tens of petabytes of structured and unstructured data, and if that data is not quality-assured, costs can quickly escalate.

Solutions from RoQC, a Norwegian software company, clean up structured data for energy companies. This makes data management more efficient from a time and cost perspective, and also makes decision-making more reliable.

With Microsoft Energy Data Services, energy companies can leverage new cloud-based data management capabilities provided by RoQC and Microsoft Energy Data Services.

Microsoft Energy Data Services is a data platform fully supported by Microsoft that enables efficient data management, standardization, liberation, and consumption in energy exploration. The solution is a hyperscale data ecosystem that combines the capabilities of the OSDU™ Data Platform, Microsoft's secure and trustworthy cloud services, and our partners’ extensive domain expertise.

"Through machine learning, our software gives energy companies complete control of their data and assets. When the amounts of data are reduced, we eliminate uncertainty and duplication, and optimize the quality of the data sets. Traditionally a petrophysicist might spend a day or two cleaning up the logs for one well before they can be used for detailed analysis—with RoQC LogQA the same petrophysicist can clean hundreds of thousands of logs in the same timeframe. By cooperating with one of the largest platform providers in the world, we gain access to technology, competency, and markets it would be hard for us to get otherwise."—Bjørn Thorsen, CEO of RoQC.

New possibilities through cooperation

RoQC, a certified independent software vendor with Microsoft, has been able to expand its technology globally through the partnership.

Partner development manager for Microsoft Norway, Ole Christian Smerud, assures that the cooperation is mutually beneficial. "As a platform provider, we depend on strong partners to give our customers the best solutions. While we provide a platform, cloud competency, and access to an ecosystem for RoQC, they bring domain knowledge and relevance to their industry," he says.

Save millions with better data

RoQC believes that the energy industry struggles to take the step into the cloud, simply because of the data complexity and because most companies lack control over their data. By qualifying and quantifying data sets and by identifying and deleting duplicates, RoQC Tools can reduce the data set size, with commensurate dramatic savings in storage costs.

By reducing the amount of data by 10 to 30 percent, we’re talking millions of dollars in savings. The bigger the organization, the bigger the effect.

RoQC Tools are primarily designed so that data managers can perform usually time-consuming tasks as efficiently as possible. Very often they can complete, in a minute or two, a task that would otherwise take months. Sometimes, the tasks would not be possible at all without the tools.

There is an obvious and well-documented correlation between increasing the quality of your data and reducing the risk of decisions based on that data. Geoscientists and project leaders in this field make decisions worth millions, maybe billions. You don’t want to make a decision of that magnitude based on insufficient or weak data.

RoQC believes the energy companies’ data is the key to shifting away from fossil resources. In the data sets, subsea energy companies have knowledge of "everything" about the ocean floor and sub-sea.

"Minerals from the ocean floor and sub-surface might be the next big thing for subsea oil-dependent nations like Norway. It is an already overused statement, but data is literally the new oil for this industry," says Bjørn Thorsen.

Preparing efficient data migration

RoQC provides both tools and consultants to enable a client to prepare their data prior to migrating the data to Azure. This preparation can include everything from simply identifying and removing duplicates to developing and implementing standards and then cleaning the data to comply with the standards. These preparations can be done directly in the clients’ normal (e.g., Halliburton/Schlumberger) interpretation platforms.

Furthermore, RoQC’s LogQA provides extremely powerful native, machine learning–based QA and cleanup tools for log data once the data has been migrated to Microsoft Energy Data Services, an enterprise-grade OSDU Data Platform on the Microsoft Cloud.

LogQA monitors the quality of the well log data that a client has stored on OSDU Data Platform. LogQA was partially developed in collaboration with Microsoft as part of Microsoft Energy Data Services, and LogQA is maintained on the latest OSDU Data Platform APIs and version/schema.

As LogQA is native to the Microsoft Cloud infrastructure, there is no customer deployment required before a customer can use LogQA to monitor, identify, and rapidly rectify data quality issues. LogQA is designed to work with typical energy industry client datasets, which can contain millions of well logs.

How to work with RoQC Solutions on Microsoft Energy Data Services

For access to RoQC solutions, reach out to Bjørn Thorsen, CEO, RoQC Data Management AS, Norway at Bjorn@roqc.no.

Microsoft Energy Data Services is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management: ingesting, aggregating, storing, searching, and retrieving data. The platform provides the scale, security, privacy, and compliance that our enterprise customers expect. It offers out-of-the-box compatibility with RoQC applications, which accelerates time-to-market and enables customers to run their domain workflows with ease and minimal effort on data contained in Microsoft Energy Data Services.

Get started with Microsoft Energy Data Services today.
Source: Azure

Cost Management updates—September 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Monitor your budgets from the Azure mobile app.
Cost savings insights in the cost analysis preview.
We want to learn about your cloud commerce experience.
What's new in Cost Management Labs.
New licensing benefits make bringing workloads and licenses to partners’ clouds easier.
New ways to save money with Microsoft Cloud.
Documentation updates.
Join the Microsoft Cost Management team.

Let's dig into the details.

Monitor your budgets from the Azure mobile app

In June, we shared the addition of your current cost to the Azure mobile app. This was great because it gave you direct access to your costs whenever you need it, wherever you are. The experience has now been expanded and you can now track your budgets on the go.

To view your budgets, simply open any subscription or resource group and select the Cost Management tile. You’ll see a new screen with your current cost, forecast, last month’s cost, and any budgets you’ve created.

What would you like to see next?

Cost savings insights in the cost analysis preview

Cost optimization is arguably the number one goal for organizations using Cost Management. When I talk to people about how to drive cost efficiency, I always start with Azure Advisor. Advisor recommendations are the quickest way to address some of the most common inefficiencies. Unfortunately, many still don’t know about Azure Advisor or forget to check in to see if there are any new recommendations. In an effort to help you drive the most efficient solutions, the cost analysis preview now includes a summary of your subscription cost recommendations.

We hope this helps you drive cost efficiency across your resources. You can find this today on subscriptions and we’re working on expanding that to resource groups as well.

Learn more about Cost savings insights or check it out yourself in the cost analysis preview. Let us know what you’d like to see next!

We want to learn about your cloud commerce experience

Are you responsible for managing Microsoft 365 and Azure cloud costs? We’d love to learn more about your experience and business process when it comes to your cloud costs. If interested in connecting with us for a 60-minute interview, please reach out to ce-uxr@microsoft.com.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Updated: Cost savings insights in the cost analysis preview—Now available in the public portal. 
Identify potential savings available from Azure Advisor cost recommendations for your Azure subscription. You can opt in using Try preview.
Updated: Product column experiment in the cost analysis preview—Now available in the public portal.
We’re testing new columns in the Resources and Services views in the cost analysis preview for Microsoft Customer Agreement. You may see a single Product column instead of the Service, Tier, and Meter columns. Please leave feedback to let us know which you prefer. 
Forecast in the cost analysis preview.
Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.
Group related resources in the cost analysis preview.
Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.
Charts in the cost analysis preview.
View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try preview.
View cost for your resources.
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.
Change scope from the menu.
Change scope from the menu for quicker navigation. You can opt in using Try preview.
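The "cm-resource-parent" grouping above can be sketched as a simple roll-up: child resources carry the parent resource's ID in that tag, so aggregating cost by parent is a group-by. The data shape below is invented for illustration and is not the Cost Management API:

```python
from collections import defaultdict

# Hypothetical cost rows; in the real feature, the tag value is the full
# Azure resource ID of the parent (shortened here for readability).
rows = [
    {"resource": "vm1",   "cost": 50.0, "tags": {}},
    {"resource": "disk1", "cost": 5.0,  "tags": {"cm-resource-parent": "vm1"}},
    {"resource": "disk2", "cost": 3.0,  "tags": {"cm-resource-parent": "vm1"}},
    {"resource": "app1",  "cost": 20.0, "tags": {}},
]

# Roll each child's cost up into its tagged parent; untagged resources
# stand on their own.
rollup = defaultdict(float)
for row in rows:
    parent = row["tags"].get("cm-resource-parent", row["resource"])
    rollup[parent] += row["cost"]

print(dict(rollup))  # {'vm1': 58.0, 'app1': 20.0}
```

This mirrors what the cost analysis preview does visually: the VM's row absorbs the cost of its tagged disks, so related resources appear as one line item.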

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal or Microsoft 365 admin center. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

New licensing benefits make bringing workloads and licenses to partners’ clouds easier

On October 1, 2022, Microsoft will implement significant upgrades to our outsourcing and hosting terms that will benefit customers worldwide. Added benefits will enable new scenarios for how customers can license and run workloads with cloud providers, including partners in the Cloud Solution Provider program. These changes will support customers’ ability to move their licenses to a partner’s cloud, leverage shared hardware, and have more flexibility in deployment options for their software licenses. With these changes, customers have additional choices to deploy their solutions with more flexibility. See Microsoft Licensing News for further details on what we’re doing to give customers more flexibility and choice. You can also see how we’re working with partners to make hosted cloud easier for customers, whether they bring their licenses or get them from a partner. Partners can learn more at the Microsoft Partner blog.

New ways to save money in the Microsoft Cloud

Check out these seven generally available updates that can help you save money:

Reserved capacity for Azure Backup Storage.
Azure Virtual Machines with Ampere Altra Arm–based processors.
Up to 45 percent performance gains in stream processing.
Azure Dedicated Host support for Azure Kubernetes Service.
Microsoft Azure available from new cloud region in Qatar.
Azure Arc–enabled servers in South Africa North and China East 2 and China North 2.
Azure VMware Solution now in Sweden Central.

Documentation updates

Here are a few documentation updates you might be interested in:

New: Customize views in cost analysis.
New: Optimize costs for Azure Backup Storage with reserved capacity.
Updated: Added new option for How to buy Marketplace software reservations.
Plus 12 updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
Source: Azure

New Azure for Operators solution accelerator offers a fast path to network insights

5G marks an inflection point for operators. The disaggregation of software and hardware in 5G enables operators to move telecommunication workloads to public or hybrid public/private cloud infrastructures, giving them unprecedented agility and flexibility to deliver exceptional customer experiences and realize cost efficiencies. However, the full benefit of running large-scale telecommunication services in the cloud can only be achieved if cloud adoption is accompanied by a comprehensive approach to network analysis and automation supported by cloud-based big data and AI.

Today, Azure for Operators is introducing a network analytics solution accelerator program, providing a standardized approach to data acquisition and visualization that aids operators on their journey toward complete end-to-end AI Operations (AIOps). The solution employs the same operational techniques and capabilities that Microsoft uses to manage Azure, packaged specifically for operator analytics. Our network analytics solution comprises existing Azure services combined with unique capabilities developed specifically for communications service providers, which allows network planners and engineers to visualize performance and troubleshoot service anomalies.

Disaggregated cloud native 5G networks add many new individual elements that must interwork effortlessly. These increasing interdependencies mean management and analytics tools can no longer run in relative isolation. Successfully deploying and managing end-to-end services in such environments requires the ability to analyze network and host platform data simultaneously from numerous sources. Only then can operators reactively and proactively diagnose issues, while ensuring operational costs are kept in check and that customers are always presented with the best user experiences.

With the scale and complexity of such services, network management needs to operate autonomously in a closed-loop manner—taking operational insights on the health of network elements and the underlying distributed cloud infrastructure and ensuring a service is configured optimally.

At Microsoft, we understand this journey because Azure went through a similar evolution. In the early days, we recognized the challenges of troubleshooting across disparate services. To solve this, we established a common data analytics infrastructure that gave us a comprehensive view of how our services performed, which resulted in lower engineering overheads and better service quality.

Control starts with network insights

Large operators generate petabytes of data every day—complicating the challenges associated with quickly ingesting, cost-effectively storing, and concisely analyzing the information to gain meaningful insights. Public clouds are ideal for solving these problems because they simplify the ability to aggregate and analyze data, thereby allowing operators to rapidly identify and act on any irregularities or opportunities. Azure excels in this area with a portfolio of trusted storage, machine learning, business intelligence, and automation tools.

Azure Data Lake, for example, can capture and store a wealth of disparate log data generated by communications services. Data lakes are more adept than classic data warehouses at handling the sheer velocity, volume, and variety of information operators will need to store. Lakehouses, such as those enabled using Azure Databricks, provide a mediation layer to enforce data quality and consistency.
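The mediation layer a lakehouse provides can be illustrated with a minimal quality gate: records must pass schema and range checks before landing in the curated zone. This is a sketch of the concept only; the field names below are invented for illustration, not an actual operator log schema or any Databricks API:

```python
# Sketch of a lakehouse-style quality gate: raw records are validated
# before they reach the curated zone. Field names are hypothetical.
REQUIRED = {"timestamp", "node_id", "metric", "value"}

def quality_gate(record):
    """Accept a record only if it is complete and its measurement is numeric."""
    if not REQUIRED.issubset(record):
        return False  # reject incomplete records
    if not isinstance(record["value"], (int, float)):
        return False  # reject non-numeric measurements
    return True

raw = [
    {"timestamp": "2022-09-01T00:00Z", "node_id": "n1", "metric": "cpu", "value": 0.71},
    {"timestamp": "2022-09-01T00:00Z", "node_id": "n2", "metric": "cpu"},  # missing value
]
curated = [r for r in raw if quality_gate(r)]
print(len(curated))  # 1
```

Enforcing checks like these at ingestion time is what keeps downstream analytics consistent, rather than leaving each consumer to clean the data independently.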

Once ingested, Azure has several standardized services for aggregating and analyzing otherwise distinct data streams such as logs, traces, telemetry information, and alerts, from inherently different platforms, network functions, and devices. Azure Data Explorer (ADX) rapidly ingests and analyzes petabytes of unstructured, structured, and semi-structured data formats. Similarly, Power BI provides real-time analytical intelligence through a combination of dynamic visualizations and AI-driven insights.
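The core of aggregating otherwise distinct streams is merging them into one time-ordered view. As a rough sketch of that idea (the events below are hypothetical, and this stands in for what a service like Azure Data Explorer does at petabyte scale):

```python
# Sketch: combining separate sorted streams (logs, traces, alerts) into a
# single time-ordered view for analysis. Events are hypothetical.
import heapq

logs   = [(1, "log",   "boot complete"), (5, "log", "config reload")]
traces = [(2, "trace", "rpc latency 40ms")]
alerts = [(4, "alert", "packet loss on eth0")]

# Each stream is already sorted by timestamp, so a k-way merge keeps the
# combined view ordered without re-sorting everything.
merged = list(heapq.merge(logs, traces, alerts))
print([kind for _, kind, _ in merged])  # ['log', 'trace', 'alert', 'log']
```

Correlating an alert with the log and trace entries immediately preceding it in this unified timeline is exactly the kind of cross-stream analysis that is impossible when each platform is inspected in isolation.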

Azure network analytics empowers operations teams to accelerate root cause analysis, enables capacity planners to understand where to deploy new resources, and allows engineers to improve customer experiences by enhancing network performance and quality of service. Our analytics offerings can also assist business teams in tuning marketing strategies toward reducing customer churn and increasing monetization opportunities.

Naturally, with large companies and many users handling enormous amounts of potentially sensitive information, we must guarantee the governance, integrity, and security of this data, providing role-based access while ensuring relevant compliance standards and policies are followed. Microsoft Purview provides a fully managed, centralized, and unified data governance service that delivers the tools such organizations demand. Purview can even prevent the duplication of analytics dashboards by providing a quick and easy way to search for existing interfaces that already meet a team's needs.

Intent-based management and closing the loop

A critical step towards a fully automated network is the ability to identify anomalies and predict issues before they become catastrophic failures. Existing rules-based systems rely on heuristic approaches that will struggle to scale to the quantity and complexity of data they must ingest to pinpoint potential problems within modern network infrastructures. Instead, big data and machine learning–driven inferencing approaches are needed to predict problems hidden within terabytes of disparate logs, error messages, and security alerts with lower severity levels.
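To make the contrast with fixed rules concrete, here is a deliberately minimal statistical detector: it flags points that deviate from the series' own distribution rather than crossing a hand-tuned threshold. This is a toy sketch of the idea with invented latency values, not the detection approach any Azure service actually uses:

```python
# Minimal sketch of data-driven anomaly detection: flag points far from the
# series' own mean, instead of a hand-tuned fixed threshold. Values are
# hypothetical latency samples.
from statistics import mean, stdev

def zscore_anomalies(series, threshold=2.5):
    """Return indices whose deviation exceeds `threshold` standard deviations.

    A modest threshold is used because a single extreme outlier also
    inflates the standard deviation it is measured against.
    """
    mu, sigma = mean(series), stdev(series)
    return [i for i, v in enumerate(series) if abs(v - mu) > threshold * sigma]

latencies = [12, 13, 11, 12, 14, 13, 12, 95, 13, 12]  # one spike at index 7
print(zscore_anomalies(latencies))  # [7]
```

Production systems replace this per-series statistic with learned models that generalize across millions of series and dimensions, which is precisely why machine learning is needed at network scale.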

Closing the loop from detection to resolution requires a comprehensive vendor- and platform-agnostic approach to provisioning standalone network functions and end-to-end services. This evolves to solutions working at the application layer that make choices about how and where to instantiate elements that enable a complete end-to-end service. Such solutions operate across multiple access, edge, core compute, and cloud platforms and are responsible for assigning appropriate resources and tuning configurations within each component to meet the requirements of the service. Underpinning this are multi-cloud and edge lifecycle management systems such as Azure Arc, which provides ongoing governance and management of virtual machines, Kubernetes clusters, and databases.

Ultimately, the goal is that the network operates autonomously based on a loose set of expected outcomes rather than explicit rules defining how to react to specific requests or conditions. Such intent-based management systems will require the application of artificial neural networks which employ deep learning on the vast amounts of real-time data streams that will enable them to train themselves to carry out tasks and perform actions.

There are many scenarios where our network analytics capabilities are needed today. Operators can use the solution to proactively analyze the quality of service in mobile and fixed voice networks, detect issues, prevent outages, and gain insight into infrastructure utilization for capacity planning. The network analytics solution also monitors mobile core performance, looking for underlying platform issues and reporting poor quality of service to accelerate root cause analysis. Furthermore, the solution performs deep packet analysis of end-to-end services, which accelerates deployments and reduces the mean time to repair.

Partner with Microsoft on the AIOps journey

The network management and automation journey can look daunting, but with our network analytics solution accelerator program, we offer operators an easier path. With the right technology and the flexibility to handle data from many systems, operators can adopt automation incrementally and at their own pace, meeting business objectives along the way. Azure network analytics allows operations teams to build trust in big data and AI and provides the foundation for closed-loop automation.

As part of the Azure for Operators program, Microsoft is making it easy to start discovering the power of Azure’s network analytics offerings. Our solution accelerator enables service providers and systems integrators to take advantage of the Azure tools and services available today as they evolve their longer-term AIOps analytics strategies. Our experts are on hand to guide you through the process of importing, analyzing, and visualizing the massive amounts of data produced by the networks you maintain. Plus, we have resources available to help solve any network issues you are experiencing today or simply understand how your infrastructure is performing. To learn more about participating in our solution accelerator program, contact us here.

EPAM and Microsoft partner on data governance solutions with Microsoft Energy Data Services

This blog was co-authored by Emile Veliyev, Director, OSDU Delivery, EPAM.

The energy industry creates and consumes large amounts of highly complex data for key business decisions, like where to drill the next series of wells, how to optimize production, and where to lease the next big field. Despite good intentions, the industry is still plagued by large quantities of data that are inconsistent in location, quality, and format—much of which cannot reliably be found or used when needed. Even when the data is reliable, it can be locked into application-specific data stores that limit its use. The solution to this dilemma is multi-faceted and increasingly includes cloud technology, the OSDU™ Data Platform, modern applications, and data governance focused on people and their business processes.

Microsoft Energy Data Services is a data platform fully supported by Microsoft that enables efficient data management, standardization, liberation, and consumption in energy exploration. The solution is a hyperscale data ecosystem that leverages the capabilities of the OSDU Data Platform and Microsoft's secure and trustworthy cloud services, together with our partners' extensive domain expertise.

Cloud and the OSDU Data Platform

Cloud-based computing is the future—scalable, reliable, secure storage and compute capabilities, all managed for you with many powerful add-on capabilities at your fingertips. For the energy industry, the Open Group® OSDU Data Platform is rapidly emerging as the standard—an open source, cloud-based data platform that unlocks data from applications and provides standard data schemas and access protocols, enabling both data governance and rapid innovation.

One of the things that EPAM discovered when delivering app developer boot camps and deploying the platform for ourselves and for clients is its high level of complexity. In those earlier days, platform deployment was a multi-step process, with each service being deployed and validated separately, taking up to a week. Before we could move on to solving business problems, a part of our work was to guide our clients through various technical deployment obstacles. In addition, it took another several days to ingest pre-formatted sample data in order to test the platform with real data. Not anymore.

Microsoft Energy Data Services

Microsoft has made the OSDU Data Platform enterprise-ready, pre-bundled with the capabilities needed to optimize energy company data value using the Microsoft Cloud. EPAM has seen its benefits. As an enterprise-grade platform, Microsoft Energy Data Services offers nearly single-click deployment. Deployment time has been reduced significantly—what previously took multiple days now takes about 45 minutes! Similarly, the time to ingest the sample data has dropped drastically, from one week to around one hour! In addition, the management layer surrounding the platform provides the reliability, stability, security, tools, performance, and SLAs needed by large enterprises such as major energy companies.

Data governance and modern applications

As noted before, excellent infrastructure alone does not magically solve all data and business problems. With Microsoft Energy Data Services providing a solid foundation with which to store data, process data, and build and host cloud-native apps aligned with the OSDU Technical Standard, what remains to empower a data-driven organization is modern applications and data governance.

It is a daunting task to manually track the manifold ways that data enters the company, the many places it is stored, and the many ways it is consumed, enriched, and duplicated. Improving this requires a team who can map out the detailed way in which all of this happens today. It also takes modern digital tools to automate the aggregation, parsing, quality assessment, and lineage-tracking of the data. It takes people with a broad and deep view to accomplish this for large organizations—people who understand the business, the data types, the technology, and how to provide the right data, in the right formats, in the right place, at the right time, to the right people. That includes application connectors and analytical applications themselves designed for the modern cloud environment so that liberated data can move back and forth to users seamlessly.
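Lineage tracking, one of the automated capabilities mentioned above, boils down to maintaining a graph of which datasets derive from which. A minimal sketch, with invented dataset names rather than any real governance tool's data model:

```python
# Sketch: lineage as a graph, so any consumer dataset can be traced back to
# every raw source it was derived from. Dataset names are hypothetical.
lineage = {                        # dataset -> its direct upstream inputs
    "well_report": ["merged_logs"],
    "merged_logs": ["vendor_logs", "field_logs"],
}

def upstream_sources(dataset, graph):
    """Walk the lineage graph to the raw sources feeding `dataset`."""
    parents = graph.get(dataset, [])
    if not parents:                # no recorded inputs: this is a raw source
        return {dataset}
    found = set()
    for parent in parents:
        found |= upstream_sources(parent, graph)
    return found

print(sorted(upstream_sources("well_report", lineage)))
# ['field_logs', 'vendor_logs']
```

Real governance tooling adds transformation metadata, timestamps, and quality scores to each edge, but the traversal that answers "where did this number come from?" is essentially the one above.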

How to work with EPAM on Microsoft Energy Data Services

EPAM brings industry knowledge, technical expertise, tools, frameworks, relationships with software vendors, and world-class delivery built on the Microsoft Energy Data Services platform. EPAM has developed a document extraction and processing system (DEPS) accelerator, which facilitates the development of customizable workflows for extracting and processing unstructured data in the form of scanned or digitized documents. DEPS is powered by Azure AI, machine learning, and deep learning algorithms. It includes pluggable subsystems for customization; machine learning pre- and post-processors; validation and UI review extensions; automated machine learning model training; manual labeling; and analytics capabilities that improve classic optical character recognition (OCR) and text extraction accuracy. DEPS can be adapted to process numerous data types covering both image and text, including PDF, XLS, ASCII, and other file formats. To learn more, contact OSDU@epam.com.
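The pluggable-subsystem design described above can be sketched as a pipeline of swappable stages. To be clear, this is a generic illustration of the pattern, not the DEPS API; the stage names and sample text are invented:

```python
# Sketch of a pluggable processing pipeline: each stage is an independent
# callable, mirroring how pre/post-processors customize an extraction
# workflow. Stage names and sample content are illustrative only.
def normalize(doc):
    """Pre-processor: clean up raw extracted text."""
    doc["text"] = doc["text"].strip().lower()
    return doc

def classify(doc):
    """Post-processor: tag the document with a coarse type."""
    doc["kind"] = "invoice" if "invoice" in doc["text"] else "other"
    return doc

def run_pipeline(doc, stages):
    for stage in stages:           # stages can be added, removed, or reordered
        doc = stage(doc)
    return doc

result = run_pipeline({"text": "  INVOICE #42  "}, [normalize, classify])
print(result["kind"])  # invoice
```

The value of the pattern is that a customer-specific validator or a different OCR post-processor slots in without touching the rest of the workflow.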

Microsoft Energy Data Services is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management—for ingesting, aggregating, storing, searching, and retrieving data. The platform will provide the scale, security, privacy, and compliance expected by our enterprise customers. EPAM offers services that provide the right data, in the right formats, in the right place, at the right time, to the right people, including application connectors and analytical applications, with data contained in Microsoft Energy Data Services.

Get started with Microsoft Energy Data Services today.

Cegal and Microsoft break down data silos and offer open collaboration with Microsoft Energy Data Services

This blog post was co-authored by Espen Knudsen, Principal Digitalization and Innovation Advisor, Cegal.

The vast number of applications and data sources in energy companies, spread across isolated environments, exposes inefficiencies in collaboration. Together with Cegal Cetegra, Microsoft Energy Data Services will accelerate your journey toward seamless access to the data and applications you need for your day-to-day work by providing an easy-to-deploy managed service fully supported by Microsoft.

Cegal has been successfully collaborating with Microsoft and partners to help evaluate the new Microsoft Energy Data Services preview program, an enterprise-grade OSDU Data Platform powered by the cloud.

With Microsoft Energy Data Services, energy companies can leverage new cloud-based data management and collaboration capabilities provided by Cegal and Microsoft. 

Microsoft Energy Data Services is a data platform fully supported by Microsoft that enables efficient data management, standardization, liberation, and consumption in energy exploration. The solution is a hyperscale data ecosystem that leverages the capabilities of the OSDU Data Platform and Microsoft's secure and trustworthy cloud services, together with our partners' extensive domain expertise.

Cegal and Microsoft create collaborative cloud-based applications on Microsoft Energy Data Services

As an ISV and a specialist systems integrator for the energy industry, Cegal has always seen great value in removing data silos to free organizations from dated constraints that can lead to lower productivity. Opening universal access to one of the most critical assets in any organization, namely its data, is an obvious path to innovation. Integrating proprietary IP in existing workflows, contextualizing data through new AI-based routines, and integrating best-of-breed applications to create new and innovative solutions are critical steps toward more efficient and productive operations.

To achieve this goal, Cegal and Microsoft closely collaborated over several months, during which multiple relevant use cases have been thoroughly assessed and tested on the Microsoft Energy Data Services platform. From operating the platform to developing new solutions on top of it, Cegal had the opportunity to put in context a wide range of scenarios, making sure the experience was as extensive as possible yet realistic for the energy industry.

Cegal recently released Cetegra, a cloud-based platform offering its users a modern, collaborative environment, uniquely designed to cater to energy industry–specific needs in digitalization and data management. Deployed through a fully scalable, pay-as-you-go model, Cetegra leverages the strengths of the Microsoft Cloud and will provide full support for the Microsoft Energy Data Services platform. Cetegra with Microsoft Energy Data Services delivers a one-stop shop for all types of data and applications tightly linked to OSDU, offering energy players a comprehensive integration of their application portfolios and allowing them to develop and test new apps within the Cetegra Innovation Space without impacting existing business operations.

With Microsoft Energy Data Services entering preview, Cegal looks forward to delivering operational support for the platform. As a global specialist in digitalization, capitalizing on years of experience within the energy sector, Cegal represents the partner of choice to support and guide energy players as they navigate their digital transformation journey. 

How to work with Cegal Solutions on Microsoft Energy Data Services

Microsoft Energy Data Services is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management—for ingesting, aggregating, storing, searching, and retrieving data. The platform will provide the scale, security, privacy, and compliance expected by our enterprise customers. The platform offers out-of-the-box compatibility with Cegal Cetegra, a cloud-based platform offering a modern, collaborative environment, with data contained in Microsoft Energy Data Services.

Learn more

For detailed information on Cegal Solutions for Microsoft Energy Data Services please visit Cetegra's website.
Get started with Microsoft Energy Data Services today.


Future-ready IoT implementations on Microsoft Azure

IoT technologies continue to evolve in power and sophistication. Enterprises are combining cloud-to-edge solutions to connect complex environments and deliver results never before imagined. In the past eight years, Azure IoT has seen significant growth across many industry sectors, including manufacturing, energy, consumer goods, transportation, healthcare, and retail. It has played a leading role in helping customers achieve better efficiency, agility, and sustainability outcomes. In 2021, Gartner positioned Microsoft as a Leader in the Gartner Magic Quadrant for Industrial IoT Platforms for the second year in a row, and Frost & Sullivan named the platform the Global IoT Platform of the Year. As computing becomes more distributed and embedded, this is a huge opportunity to unite IoT, edge, and hybrid, continuing our forward momentum while doubling down on our investments to date.

Companies commit to IoT to be future-ready

With the pandemic, economic changes, and rise of remote work, C-suite and IT leaders have had to rethink what it means for their organization to be “future-ready.” Here are a few Azure IoT customer stories and the solutions they have deployed to solve critical business challenges.

Manufacturing companies like Inventec, a Taiwan-based electronics company, combined 5G, AI, and IoT to create scalable smart factories that were so successful that the company began selling the solution to other manufacturers. Norway-based TOMRA developed a sensor-based system that can process up to five billion data points per day, enabling faster and more accurate materials recycling. Monarc merged AI, IoT, and programmability to create the “world’s first robotic quarterback,” which allows any player on a football team to build specific skills without having to involve an entire squad.
Energy providers are equally diverse in what “future-ready” looks like. XTO Energy, a subsidiary of Exxon Mobil, used Azure IoT to monitor and optimize oil fields. Allego based their fast-growing EV charging infrastructure on Azure database as a service (DBaaS) knowing that their tools and technologies will need to scale exponentially. E.ON built a platform based on machine learning and IoT to monitor and redistribute energy across an entire district or city grid to drastically reduce energy usage. Metroscope developed large-scale digital twin solutions for energy production plants that monitor and analyze industrial assets to gain greater operational insight into reducing emissions.
Consumer goods enterprises like Grupo Bimbo increased manufacturing speed and efficiency by deploying sophisticated data analytics to manage all bakery equipment on a factory line through a network of data sensors. Keurig Dr Pepper used Azure IoT Central to perfect the at-home customer experience with highly personalized coffee brewing preferences which feeds data to corporate R&D for more focused and faster product development.
Transportation companies like Iberia Express, a major player in the low-cost airline market, deployed AI to create a loyal customer base through personalized and immediate passenger experiences. Italian rail infrastructure manager Ferrovie dello Stato Italiane melded AI, AR and drone technology to optimize the way it monitors its construction sites.
Healthcare providers like CAE Healthcare were able to pivot during COVID-19 from in-person intelligent patient mannequins, which mimic a medical patient’s conditions, to virtual mannequins using Microsoft Azure IoT Hub and Azure Functions, which expanded the company’s reach and training capabilities.
Retail companies GetGo and Cooler Screens collaborated to reshape customer experiences by combining GetGo’s traditional beverage cooler doors with the IoT-connected, 4K smart screens developed by Cooler Screens. The Azure-based solution modernized the high-traffic beverage aisles cooler doors to meet the buying patterns of on-the-go and impulse-driven convenience store consumers.

Visit the Microsoft Industry Blogs for a deeper dive into industry and technology deployments. Microsoft Cloud solutions for industries looks more closely at the available cloud platforms for specific industries and the recently published 2022 IoT Signals report explores the key trends in IoT adoption in the manufacturing industry.

Microsoft continues to grow its IoT services and support

Microsoft believes in simplifying cloud to edge for our customers. Our platform provides solutions to the challenges of preserving existing investments, addressing security issues, and managing complex technology environments.

Diverse edge and device offerings: Currently, over 7,000 OEMs build Windows IoT devices such as human-machine interfaces (HMIs) for industrial PCs on factory floors, point-of-sale systems in retail, kiosks in transportation, medical equipment in healthcare, and devices for the growing smart building automation sector.
Comprehensive cloud-to-edge security: Controlling and governing increasingly complex environments that extend across data centers, multiple clouds, and the edge can present a variety of security challenges. Azure Sphere can securely connect and protect IoT edge devices.
Hybrid environment maximization: To take advantage of cloud innovations and maximize existing on-premises investments, organizations need an effective hybrid and multicloud strategy. Azure provides a holistic approach to manage, govern, and help to secure servers and Kubernetes clusters, as well as databases and apps across on-premises, multicloud, and edge environments with Azure Arc, Azure private multi-access edge compute (MEC), and Azure Stack HCI.
End-to-end product portfolio: Microsoft has a broad range of services for data intake, storage, reporting, and insights. Services like Azure Synapse Analytics, Azure Data Explorer, Azure Digital Twins, and Power BI pull information from disparate data streams into powerful dashboards and comprehensive digital models of real-world environments.
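The digital-model idea behind services like Azure Digital Twins can be illustrated with a toy twin: a software object that mirrors a physical asset's reported state and derives a health summary from it. This is a conceptual sketch only; the asset, fields, and threshold are invented, and this is not the Azure Digital Twins API:

```python
# Sketch of the digital-twin concept: a software model mirrors a physical
# asset's telemetry and derives a health status. All names and the
# temperature threshold are hypothetical.
class PumpTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.telemetry = {}

    def ingest(self, reading):
        """Update the twin's state from a device telemetry message."""
        self.telemetry.update(reading)

    def health(self):
        """Derive a status from the mirrored state (illustrative rule)."""
        return "alert" if self.telemetry.get("temp_c", 0) > 80 else "ok"

twin = PumpTwin("pump-17")
twin.ingest({"temp_c": 85, "rpm": 1450})
print(twin.health())  # alert
```

At platform scale, thousands of such twins are linked into a graph of an entire factory or building, so queries and dashboards can reason over the modeled environment rather than raw device messages.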

Partner ecosystem brings faster innovation

With more than 15,000 industry-leading solutions, apps, and services from Microsoft and partners, Azure Marketplace makes it easy to find pre-built, well-architected, Azure-optimized IoT solutions for many industries and use cases. Below are a few examples of ready-to-deploy Industrial IoT solutions that help organizations improve manufacturing efficiency, energy efficiency, and sustainability, and can provide a faster path to value than a custom-built solution.

Sight Machine delivers a solution that allows manufacturers to see the impacts of problems from the machine to the enterprise level, as well as across the supply chain. Its streaming data platform converts unstructured plant data into a standardized data foundation and continuously analyzes all assets, data sources, and processes in near real-time to improve productivity, profitability, and sustainability.
PTC and their ThingWorx Digital Performance Management (DPM) solution enables manufacturers to boost plant throughput by identifying issues that lower productivity, cause downtime, and/or reduce Overall Equipment Effectiveness (OEE) with pinpoint accuracy at individual unit, line, facility, or production network scale.
Uptake developed an AI-driven asset performance management solution, called Fusion, that gives all departments in an enterprise a single, shared view of every asset in an operation. Additionally, their unified industrial data management solution enables manufacturers to connect machines, people, and data to unlock and accelerate AI-enabled industrial intelligence.
e-Magic Inc. is a specialist in large-scale Industrial IoT and Factory Digital Twin solutions. Their TwinWorX digital twin solution normalizes data from equipment, assets, systems, and other IoT devices into a unified view to provide situational awareness and command and control of facilities, equipment, and processes.
ICONICS provides smart building automation solutions that integrate traditional building management systems, modern sensors, and end-user productivity tools to gather and analyze real-time information from any application on any device, from single buildings to global enterprises.

Begin your migration to Azure IoT

Certified Microsoft partners with experience in IoT solutions, analytics, and applications are ready to help you with your migration projects. Programs like FastTrack, with expert Azure assistance, can also help accelerate your cloud deployments while minimizing risk.

For enterprises in the Americas, Xoriant, Insight, Hitachi, NTT, Mesh Systems, and Kyndryl are certified Azure migration partners. Companies based in Europe and Asia can contact Cognizant, HCL, Capgemini, Infosys, or Codit. Ingram Micro and TD Synnex can help SMBs and ISVs plan migrations.

The future of IoT and the cloud

The future evolution of IoT is an integral part of a bigger technology investment—the industrial metaverse. Azure is already bringing the physical and digital worlds together with digital twins. The 2022 Microsoft Build featured "From the Edge to the Metaverse, how IoT powers it all" to provide an in-depth look at how companies can use intelligent technologies from Azure to create value.

Learn more

At Microsoft, we look forward to hearing from you and becoming your strategic partner. Reach out to our migration partners listed above, search the Azure Marketplace for the right solution for your use cases, or do a more in-depth technical study in the Azure Internet of Things (IoT) collection.

Azure Payment HSM achieves PCI PIN certification offering customers secure digital payments solutions in the cloud

This blog post has been co-authored by Darius Ryals, General Manager of Partner Promises and Azure Chief Information Security Officer.

Today we’re announcing that Azure Payment HSM has achieved Payment Card Industry Personal Identification Number (PCI PIN) certification, making Azure the first hyperscale cloud service provider to obtain this certification.

Financial technology has rapidly disrupted the payments industry and securing payment transactions is of the utmost importance. Azure helps customers secure their critical payment infrastructure in the cloud and streamlines global payments security compliance. Azure remains committed to helping customers achieve compliance with the Payment Card Industry’s leading compliance certifications.

Enhanced security and compliance through Azure Payment HSM

Azure Payment HSM is a bare metal infrastructure as a service (IaaS) that provides cryptographic key operations for real-time payment transactions in Azure. The service empowers financial institutions and service providers to accelerate their digital payment strategy through the cloud. Azure Payment HSM is certified across stringent security and compliance requirements established by the PCI Security Standards Council (PCI SSC) including PCI DSS, PCI 3DS, and PCI PIN and offers HSMs certified to FIPS 140-2 Level 3 and PCI HSM v3.

Azure Payment HSM enables a wide range of use cases. These include payment processing for card and mobile payment authorization and 3D-Secure authentication; payment credential issuing for cards, wearables, and connected devices; securing keys and authentication data for POS, mPOS, remote key loading, PIN generation, and PIN routing; and sensitive data protection for point-to-point encryption, security tokenization, and EMV payment tokenization.

Azure Payment HSM is designed to meet the low-latency and high-performance requirements of mission-critical payment applications. The service comprises single-tenant HSMs that give customers complete remote administrative control and exclusive access. HSMs are provisioned and connected directly to users’ virtual networks, and remain under users’ sole administrative control. HSMs can be easily provisioned as a pair of devices and configured for high availability.
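As a rough illustration of the provisioning model described above, a Payment HSM can be deployed with an ARM template along the following lines. This is a hedged sketch, not the definitive schema: the resource type, API version, SKU name, and property names (such as networkProfile, subnet, and stampId) are assumptions based on the general Azure resource pattern, and the resource group, virtual network, and subnet names are hypothetical placeholders. Consult the Azure Payment HSM documentation for the exact schema supported in your region.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.HardwareSecurityModules/dedicatedHSMs",
      "apiVersion": "2021-11-30",
      "name": "myPaymentHsm01",
      "location": "eastus",
      "sku": { "name": "payShield10K_LMK1_CPS60" },
      "properties": {
        "networkProfile": {
          "subnet": {
            "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'myVnet', 'hsm-subnet')]"
          }
        },
        "stampId": "stamp1"
      }
    }
  ]
}
```

Under this model, deploying a second HSM resource into a different stamp (for example, stampId "stamp2") would form the high-availability pair described above, with each device attached directly to a delegated subnet in the customer's virtual network.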

Azure Payment HSM benefits both payment HSM users with a legacy on-premises HSM footprint and new payment ecosystem entrants who choose a cloud-native approach from the outset. The customer could be a payment service provider acting on behalf of multiple financial institutions, or a financial institution that wishes to access Azure Payment HSM directly.

Leverage Azure Payment HSM PCI PIN certification

PINs are used to verify cardholder identity during online and offline payment card transactions.

The PCI PIN Security Standard contains requirements for the secure management, processing, and transmission of PIN data and applies to merchants and service providers that store, process, transmit, or can impact the security of PIN data.

Azure Payment HSM customers can reduce their compliance burden by leveraging Azure’s PCI PIN Attestation of Compliance (AOC) which addresses Azure’s portion of responsibility for each PCI PIN requirement and contains the list of certified Azure regions. The Azure Payment HSM Shared Responsibility Matrix is also available to help customers significantly reduce time, effort, and cost during their own PCI PIN assessments by simplifying the compliance process.

Learn more

When moving payment systems to the cloud, payment security must comply with the Payment Industry's mandated requirements without exception. Financial institutions and service providers in the payment ecosystem, including issuers, service providers, acquirers, processors, and payment networks, would benefit from Azure Payment HSM. To learn how Microsoft Azure capabilities can help, see the resources below:

Azure Payment HSM
Azure Payment HSM documentation
Azure PCI PIN AOC
Azure PCI DSS AOC
Azure PCI 3DS AOC

Source: Azure