High availability solutions on Microsoft Azure with SLES for SAP Applications

This post was co-authored with Sherry Yu, Director of SAP Success Architecture, SUSE.

In today’s business world, service availability and reliability are key to a successful digital transformation. Extensive downtime not only costs a business revenue and productivity, but may also cause reputational damage. SUSE and Microsoft have been working closely to provide a trusted path to SAP Solutions in the cloud, including solutions to reduce unplanned and planned downtime.

SUSE and Microsoft work together

SUSE is a leader in SAP solutions and, in particular, a developer of high availability (HA) solutions. HA solutions are first tested and supported on-premises, and documented in the official configuration guides published on SUSE’s site. Microsoft then tests the solutions on Azure infrastructure, tunes the settings and configurations, and releases Azure-specific HA configuration guides on Microsoft’s documentation site. Microsoft also actively provides feedback and requests support for new scenarios from SUSE. The working process can be summarized in the chart below. It has been a smooth collaboration between SUSE and Microsoft in support of customers’ digital transformation journeys.

Solutions to reduce unplanned downtime

High availability solutions, which protect 24/7 SAP systems from disruptions caused by hardware, network, and application issues, are commonly based on cluster technologies. Pacemaker is an open-source cluster framework used by many of these HA solutions.
For SAP HANA, the HA solutions are based on HANA System Replication (HSR). SUSE has developed resource agents to automate the failover of HANA System Replication in scale-up and scale-out scenarios.

For SAP S/4HANA and NetWeaver, the HA solutions are based on ASCS/ERS enqueue replication. SUSE’s HA solutions for ENSA1 and ENSA2 have both passed the SAP HA-Interface certification. Recently, SUSE released a new architecture called Simple Mount File System, which reduces the complexity of the Pacemaker configuration for the SAP ASCS/ERS architecture; it is also SAP HA-Interface certified. Microsoft was the first cloud provider to release a configuration guide for the SAP ASCS/ERS simple mount architecture.

HA for SAP ASCS/ERS on Azure with SLES for SAP Applications

The following configurations, based on ASCS/ERS enqueue replication with ENSA1 or ENSA2, are supported on Azure:

SAP ASCS/ERS with NFS on Azure Files on SLES for SAP Applications
SAP ASCS/ERS with NFS on ANF on SLES for SAP Applications
SAP ASCS/ERS with NFS cluster (DRBD)
SAP ASCS/ERS Multi-SID guide on SLES for SAP Applications

The section below outlines the major differences among these scenarios:

New simple mount architecture for SAP ASCS/ERS on Azure VMs with NFS

This is a new architecture that simplifies the management of shared file systems on NFS. Instead of the cluster managing the shared file systems through a Filesystem resource agent, the OS manages them and mounts them at boot time. A new resource agent, SAPStartSrv, was created to control the start and stop of the SAP start framework of each SAP instance. The benefit is a more robust cluster architecture.

This solution has been tested and released on Microsoft Azure with the official configuration guide published.

HA for SAP HANA on Azure with SLES for SAP Applications

HA Solutions for SAP HANA are based on HANA System Replication (HSR) in scale-up and scale-out. The following scenarios are supported on Azure:

Scale-up HSR + Pacemaker (configuration guide: High availability of SAP HANA on Azure VMs on SLES):

• Basic HANA scale-up and HSR
• Can be used with NFS-mounted file systems
• Doesn’t include the more resilient Pacemaker configuration that handles loss of NFS mounts

Scale-up HSR with NFS-mounted file systems (configuration guide: High availability of SAP HANA scale-up with ANF on SLES):

• Additional Pacemaker configuration monitors the NFS file systems
• Loss of access to NFS-mounted file systems (including /hana/shared) triggers a failover

Scale-out n+m, that is, scale-out with a standby node (configuration guide: SAP HANA scale-out with standby with Azure NetApp Files on SLES):

• Requires shared storage (ANF on Azure)
• For /hana/data and /hana/log, only NFSv4.1 is supported
• For /hana/shared, NFSv3 or NFSv4.1 is supported

Scale-out HSR + Pacemaker (configuration guide: SAP HANA scale-out with HSR and Pacemaker on SLES):

• Includes the additional Pacemaker configuration for loss of NFS access

There are some considerations in picking the right scenario based on your business needs:

How critical is it to minimize downtime in the case of a failover?
How willing are you to increase spend in order to lower downtime in the event of an incident?

Azure virtual machine availability overview

Azure offers several compute deployment options, and it’s important to understand their differences, especially their SLAs.

Solutions to reduce planned downtime

Planned downtime is normally associated with maintenance of the environment. SUSE and Microsoft offer the following solutions to minimize planned downtime.

For instance, when performing maintenance on an SAP HANA system running in a cluster, whether to upgrade the OS or to apply HANA support packages, it’s recommended to do a rolling update: upgrade the secondary HANA node first, perform a takeover, then upgrade the former primary HANA node. This is an effective way to reduce planned downtime to the time necessary to perform a takeover. The same approach can be applied to SAP Central Services in an HA configuration.
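A simplified sketch of this flow, wrapping the standard SUSE and SAP command-line tools in Python, might look like the following. The resource, node, SID, and site names are placeholders, and a real procedure should follow the official SUSE and Microsoft configuration guides.

import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

RESOURCE = "msl_SAPHana_HN1_HDB03"             # placeholder SAPHana resource
PRIMARY, SECONDARY = "hana-vm-1", "hana-vm-2"  # placeholder node names

# 1. Tell Pacemaker to leave the HANA resource alone during the update.
run(f"crm resource maintenance {RESOURCE} on")

# 2. Update the secondary node first (OS patches or HANA support packages).
run(f"ssh {SECONDARY} 'zypper --non-interactive patch'")

# 3. Take over to the updated secondary (run as the <sid>adm user).
run(f"ssh {SECONDARY} \"su - hn1adm -c 'hdbnsutil -sr_takeover'\"")

# 4. Update the former primary and re-register it as the new secondary.
run(f"ssh {PRIMARY} 'zypper --non-interactive patch'")
run(f"ssh {PRIMARY} \"su - hn1adm -c 'hdbnsutil -sr_register "
    f"--remoteHost={SECONDARY} --remoteInstance=03 "
    f"--replicationMode=sync --operationMode=logreplay --name=SITE1'\"")

# 5. Hand control back to the cluster once replication is in sync again.
run(f"crm resource maintenance {RESOURCE} off")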

To keep SAP systems secure, system admins must apply security patches in a timely manner. SUSE provides Kernel Live Patching, which can help avoid reboots for up to one year. It is highly practical and recommended for mission-critical HANA systems.

When performing maintenance on the SAP ASCS/ERS instances running in the cluster, it’s essential to leverage the sap_vendor_cluster_connector that SUSE has developed for the SAP HA-Interface certification, to avoid split-brain situations. During maintenance, a system admin can stop an SAP ASCS or ERS instance via SAP tools such as sapcontrol or the MMC. If the instance is managed by the cluster, the cluster connector notifies the cluster that the stop is intentional, so instead of trying to remediate the “failure,” the cluster does not interfere. The HA-Interface thus helps avoid accidents during planned maintenance windows. You can find the details and an example in this blog.
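For illustration, sapcontrol exposes functions that verify the HA interface is wired up before maintenance begins. A minimal sketch, with instance number 00 as a placeholder:

import subprocess

def sapcontrol(instance_nr, function):
    cmd = ["sapcontrol", "-nr", instance_nr, "-function", function]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# HAGetFailoverConfig reports whether an HA solution (such as the SUSE
# cluster connector) is attached to this ASCS instance.
print(sapcontrol("00", "HAGetFailoverConfig"))

# HACheckConfig runs the HA-interface consistency checks.
print(sapcontrol("00", "HACheckConfig"))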

Accelerate your SAP S/4HANA migration to Azure

SUSE and Microsoft provide solutions to automate, validate, and monitor the SAP landscape:
•    Automation: The Microsoft Automation Framework for SAP provides built-in best practices to speed up provisioning and reduce errors, cutting deployment time from months or weeks down to days. SUSE, as a contributing partner, provided best practices, especially for HA deployment.
•    Validation: SUSE’s Project Trento, part of SLES for SAP Applications, provides rule-based autodetection of SAP configuration issues in Azure infrastructure. It can be used as a powerful pre-go-live validation tool to ensure quality, and in Day 2 operations it continuously checks the production system to detect deviations and prevent outages.
•    Monitoring: Microsoft Azure Monitor for SAP helps customers gain insights into the SAP landscape, especially HA clusters. Proactive monitoring helps fix issues before outages happen. Monitoring for clusters on SLES for SAP Applications was co-developed with SUSE.

SLES for SAP Applications

SLES for SAP Applications is the leading Linux platform for SAP HANA, SAP NetWeaver, and SAP S/4HANA solutions and is an SAP Endorsed App. Two of the many key components of SLES for SAP Applications are the High Availability Extension and Resource Agents. The High Availability Extension provides Pacemaker, an open-source cluster framework. The Resource Agents manage automated failover of SAP HANA System Replication, S/4HANA ASCS/ERS ENSA2, and NetWeaver ASCS/ERS ENSA1. On Microsoft Azure’s marketplace, the PAYG image of SLES for SAP Applications includes Live Patching.

Learn More

Microsoft Azure is an enterprise-class cloud platform optimized for SAP that provides significant cost savings, new insights from advanced analytics, and unmatched security and compliance.

SUSE has partnered with SAP for more than 20 years, with more than 130 worldwide benchmarks. Some of the world’s largest SAP workloads run on SUSE on Azure, and the first reference architectures for SAP on Azure were SUSE-based. As a result of the close collaboration between Microsoft and SUSE, a comprehensive portfolio of HA solutions for SAP on Azure is available to customers, leveraging the strengths of SUSE on Microsoft Azure.

Microsoft named a Leader in 2022 Gartner® Magic Quadrant™ for Data Integration Tools

In the modern business landscape, the intake of information and data is growing at an incredibly rapid pace. Organizations, regardless of size, need to quickly gain insights from all data to inform customer experiences and empower their employees. Current solutions are bespoke and siloed, leading to users spending considerable time and resources stitching together disparate products across a variety of vendors. This creates costly operational overhead and diverts resources away from value creation. In response to this high-pressure environment, many organizations are looking for cutting-edge data integration platforms and resources, and Microsoft is fully invested in empowering these companies to succeed.

We are excited to share that Gartner has positioned Microsoft as a Leader once again in the 2022 Gartner Magic Quadrant for Data Integration Tools. We believe this recognition shows our continued growth and ongoing commitment to delivering comprehensive and cost-effective data integration solutions.

The Gartner Magic Quadrant for Data Integration Tools evaluated companies on a range of categories including data engineering, cloud migration, and operational data integration tasks.

Translating data into a competitive advantage

It’s easy to be overwhelmed by the amount of data businesses are generating every day. Not only do organizations need to deal with the technical requirements of processing their data, they are also operating in a high-risk environment, where the regulatory challenges are significant and noncompliance can mean an expensive penalty.

Against this backdrop, Microsoft brings an end-to-end data integration strategy to drive competitive advantage and deliver better business outcomes. Regardless of where source data is coming from—from operational databases to software as a service (SaaS) to multicloud—Microsoft data integration serves as the foundation that brings this data together and prepares it for cloud-scale analytics.

To lay the groundwork for reliable data pipelines, organizations can choose from more than 100 connectors to seamlessly move data. New capabilities also enable connections without time-consuming extract, transform, and load (ETL) processes, so users can achieve insights faster. Microsoft data integration works seamlessly to combine data and prepare it for analysis in a central, secure environment. Simplified data migration, low- or no-code ETL, enterprise business workflows, metadata management, and data governance help boost productivity and empower organizations to achieve more with data. The company’s entire data team, from data engineers to business analysts, can discover and use the data they need, whether users want to write their own queries or leverage a low-code environment to ingest and transform data.

Microsoft services for data integration

With tooling that delivers a comprehensive set of capabilities, organizations can build a solid data integration foundation.

Azure Data Factory is a managed cloud service that's built for petabyte-scale data ingestion, data transformation, and orchestration at scale. Use Azure Data Factory for data engineering (build, manage, and operationalize data ingestion and transformation pipelines), data and cloud migration (customers migrating data from on-premises or another cloud), and operational data integration (ongoing data integration and synchronization to support ongoing and critical business processes).
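As a minimal sketch of what this looks like in code, the snippet below creates a factory and triggers a pipeline run with the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, and pipeline names are placeholders, and the pipeline is assumed to already exist.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data"        # placeholder resource group
FACTORY = "adf-demo"              # placeholder factory name

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create (or update) the data factory itself.
adf.factories.create_or_update(RESOURCE_GROUP, FACTORY, Factory(location="westeurope"))

# Kick off an existing pipeline and check the status of the run.
run = adf.pipelines.create_run(RESOURCE_GROUP, FACTORY, "CopySalesData")
print(adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY, run.run_id).status)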

Azure Data Factory Studio is purpose-built to provide data engineers with a familiar and productive environment for authoring their data integration pipelines and data flows for code-free transformations at scale. The experience provides users with sophisticated control flow and orchestration capability to author robust data integration tasks that operate over large amounts of data. Hundreds of connectors enable data-source-specific connectivity from Azure Data Factory and Power Query.

Power Query is a data transformation and data preparation engine that delivers an approachable user experience with self-service and enterprise-ready connectors to hundreds of data sources, from cloud to on-premises. Power Query enables business analysts to handle data preparation tasks on their own for workloads across Power Platform, Dynamics 365, and Microsoft Excel.

Azure Synapse Link is a service that eliminates barriers between Microsoft data stores and Azure Synapse Analytics. Automatically move data from both operational databases and business applications without time-consuming ETL processes. Get an end-to-end view of the business by easily connecting separate systems—and democratize data access with a solution that brings the power of analytics to every data-connected team.

Azure Synapse Link already connects to a variety of Microsoft data stores, such as Azure Cosmos DB and Azure SQL Database, and will connect to more in the future. Here are the connections available now:

Azure Synapse Link for Dataverse—now generally available.
Azure Synapse Link for Cosmos DB—now generally available.
Azure Synapse Link for SQL (both SQL Server 2022 and Azure SQL Database)—now in preview.

The future of data is integration

In this complex environment where data holds such immense value, our north star is to enable our customers to drive a data culture and power a new class of data-first applications. We want our customers to take intelligent action based on insights unlocked from their data, and turn it into competitive advantage, all while respecting and maintaining compliance. We do this by empowering every individual and organization, delivering data integration and analytic tools and resources to inform every decision, at any scale.

Learn more

Read the full complimentary report from Gartner.
Learn more about Azure Synapse Analytics.
Get a free copy of the Limitless Analytics with Azure Synapse e-book.
Learn more about the Microsoft Intelligent Data Platform.
Get started with a free Azure account.
Join the free Azure Synapse Influencers program.

Gartner, Magic Quadrant for Data Integration Tools, August 17, 2022, Ehtisham Zaidi, Robert Thanaraj, Sharat Menon, and Nina Showell.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. GARTNER and Magic Quadrant are registered trademarks and service mark of Gartner, Inc. and its affiliates in the United States and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Dive deep into NAT gateway’s SNAT port behavior

In our last blog, we examined a scenario on how network address translation (NAT) gateway mitigates connection failures happening at the same destination endpoint with its randomized source network address translation (SNAT) port selection and reuse timers. In addition to handling these scenarios, NAT gateway’s unique SNAT port allocation is beneficial to dynamic, scaling workloads connecting to several different destination endpoints over the internet. In this blog, let’s deep dive into the key aspects of NAT gateway’s SNAT port behavior that makes it the preferred solution for different outbound scenarios in Azure.

Why SNAT ports are important to outbound connectivity

For anyone working in a virtual cloud space, it is likely that you will encounter internet connection failures at some point. One of the most common reasons for connection failures is SNAT port exhaustion, which happens when the source endpoint of a connection runs out of SNAT ports to make new connections over the internet.

Source endpoints use ports through a process called SNAT, which allows destination endpoints to identify where traffic was sent and where to send return traffic. NAT gateway SNATs the private IPs and ports of virtual machines (VMs) within a subnet to NAT gateway’s public IP address and ports before connecting outbound, and in turn provides a scalable and secure means to connect outbound.

Figure 1: Source network address translation by NAT gateway: connections going to the same destination endpoint over the internet are differentiated by the use of different source ports.

With each new connection to the same destination IP and port, a new source port is used. A new source port is necessary so that each connection can be distinguished from one another. SNAT port exhaustion is an all too easy issue to encounter with recurring connections going to the same destination endpoint since a different source port must be used for each new connection.

How NAT gateway allocates SNAT ports

NAT gateway solves the problem of SNAT port exhaustion by providing a dynamic pool of SNAT ports, consumable by all virtual machines in its associated subnets. This means that customers don’t need to know the traffic patterns of their individual virtual machines, since ports are not pre-allocated in fixed amounts to each virtual machine. By providing SNAT ports on demand to virtual machines, the risk of SNAT exhaustion is significantly reduced, which in turn helps prevent connection failures.

Figure 2: SNAT ports are allocated on-demand by NAT gateway, which alleviates the risk of SNAT port exhaustion. 

Customers can ensure that they have enough SNAT ports for connecting outbound by scaling their NAT gateway with public IP addresses. Each NAT gateway public IP address provides 64,512 SNAT ports, and NAT gateway can scale to use up to 16 public IP addresses. This means that NAT gateway can provide over one million SNAT ports for connecting outbound.
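The arithmetic behind that figure is straightforward (64,512 usable ports per IP reflects the 65,536-port range minus a reserved block of 1,024):

ports_per_ip = 65_536 - 1_024      # = 64,512 usable SNAT ports per public IP
max_public_ips = 16                # maximum public IPs per NAT gateway
print(f"{ports_per_ip * max_public_ips:,}")   # 1,032,192 -> "over one million"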

How NAT gateway selects and reuses SNAT ports

Another key component of NAT gateway’s SNAT port behavior that helps prevent outbound connectivity failures is how it selects SNAT ports. Whether connecting to the same or different destination endpoints over the internet, NAT gateway selects a SNAT port at random from its available inventory.

Figure 3: NAT gateway randomly selects SNAT ports from its available inventory to make new outbound connections.

A SNAT port can be reused to connect to the same destination endpoint. However, before doing so, NAT gateway places a reuse cooldown timer on that port after the initial connection closes.

NAT gateway’s SNAT port reuse cooldown timer helps prevent ports from being selected too quickly for connecting to the same destination endpoint. This is advantageous when destination endpoints have their own source port reuse cooldown timers in place.

Figure 4: SNAT port 111 is released and placed in a cooldown period before it can connect to the same destination endpoint again. In the meantime, port 106 (dotted outline) is selected at random from the available inventory of ports to connect to the destination endpoint. The destination endpoint has a firewall with its own source port cooldown timer. There is no issue getting past the on-premises destination’s firewall since the connection from source port 106 is new.

What happens then when all SNAT ports are in use? When NAT gateway cannot find any available SNAT ports to make new outbound connections, it can reuse a SNAT port that is currently in use so long as that SNAT port connects to a different destination endpoint. This specific behavior is beneficial to any customer who is making outbound connections to multiple destination endpoints with NAT gateway.

Figure 5: When all SNAT ports are in use, NAT gateway can reuse a SNAT port to connect outbound so long as the port actively in use goes to a different destination endpoint. Ports in use by destination 1 are shown in blue. Port connecting to destination 2 is shown in yellow. Port 111 is yellow with a blue outline to show it is connected to destinations 1 and 2 simultaneously.
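To make these selection and reuse rules concrete, here is a toy Python model of the behavior described above: random selection from the free inventory, a per-port, per-destination reuse cooldown, and reuse of in-use ports only toward a different destination once the inventory runs out. This is a conceptual sketch, not Azure’s implementation.

import random
import time

class ToySnatAllocator:
    def __init__(self, ports, cooldown_s=30):
        self.free = set(ports)
        self.in_use = {}        # port -> set of destinations using it
        self.cooldown = {}      # (port, destination) -> release timestamp
        self.cooldown_s = cooldown_s

    def allocate(self, dest):
        now = time.monotonic()
        # Prefer a random free port that is not cooling down for this destination.
        ready = [p for p in self.free
                 if now - self.cooldown.get((p, dest), float("-inf")) > self.cooldown_s]
        if ready:
            port = random.choice(ready)
            self.free.discard(port)
            self.in_use.setdefault(port, set()).add(dest)
            return port
        # Inventory exhausted: reuse an in-use port, but only toward a new destination.
        for port, dests in self.in_use.items():
            if dest not in dests:
                dests.add(dest)
                return port
        raise RuntimeError("SNAT port exhaustion for this destination")

    def release(self, port, dest):
        self.in_use[port].discard(dest)
        self.cooldown[(port, dest)] = time.monotonic()  # start the reuse cooldown
        if not self.in_use[port]:
            del self.in_use[port]
            self.free.add(port)

# Example: exhaust a tiny pool against one destination, then reuse toward another.
snat = ToySnatAllocator(ports=range(1024, 1028))
for _ in range(4):
    snat.allocate("203.0.113.10:443")        # uses up all four free ports
print(snat.allocate("198.51.100.7:443"))     # reused port, different destination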

What have we learned about NAT gateway’s SNAT port behavior?

In this blog, we explored how NAT gateway allocates, selects, and reuses SNAT ports for connecting outbound. To summarize:

• SNAT port capacity: up to 16 public IP addresses, with 64,512 SNAT ports per NAT gateway public IP address. Benefit: easy to scale for large and variable workloads.

• SNAT port allocation: dynamic and on-demand. Benefit: great for flexible, unknown, and large-scale workloads.

• SNAT port selection: randomized. Benefit: reduces the risk of connection failures to the same destination endpoint.

• SNAT port reuse: a port reused toward a different destination can connect outbound immediately, while reuse toward the same destination is set on a cooldown timer. Benefit: reduces the risk of connection failures to destination endpoints that enforce source port reuse cooldown timers.

Deploy NAT gateway today

Whether your outbound scenario requires you to make many connections to the same or to several different destination endpoints, NAT gateway provides a highly scalable and reliable way to make these connections over the internet. See the NAT gateway SNAT behavior article to learn more.

NAT gateway is easy to use and can be deployed to your virtual network with just a few clicks. Deploy NAT gateway today by following along with Create a NAT gateway using the Azure portal.

Gain Deeper Insights with Microsoft Intelligent Data Platform

Data is foundational to any digital transformation strategy, yet many organizations struggle to understand what data they have, how to extract insights from it, and how to govern it—according to a 2022 Evanta survey1, over half of Chief Data Officers (CDOs) struggle with siloed operating models when it comes to data sharing and democratization. According to Harvard Business Review2, organizations that have embraced their data as a strategic asset have been better positioned to drive strategic differentiation and grow their revenue, but the fragmentation that exists today between databases, analytics, and governance is a common barrier to success.

The Microsoft Intelligent Data Platform empowers organizations to invest more time creating value rather than integrating and managing their data estate. It integrates best-in-class solutions across Microsoft’s technology stack, breaking down data silos and enabling organizations to extract real-time insights with the data governance needed to run the business safely.

“Shifting from a legacy on-premises data warehouse to Azure Synapse, supported by Datometry, has allowed us to virtualize the vast majority of our code without needing to repoint it. We have gained speed, performance, and agility while reducing costs and taken a big step forward in modernizing our enterprise data storage and management.”—Charlotte Lock, Director of Data, Digital & Loyalty at Co-op.

Added security and analytics features for the Azure data portfolio

The Microsoft Intelligent Data Platform features everything already available in the Azure data portfolio (Azure Data Factory, Azure Data Explorer, Azure SQL, Azure Cosmos DB, and more) as well as new products and features, including SQL Server 2022, Azure Synapse Link for SQL, Microsoft Purview Data Estate Insights, and Datamart in Power BI:

SQL Server 2022, currently in preview, is the most secure database of the last decade, and is now integrated with Microsoft Purview and Azure Synapse Link, allowing for richer insights and governance of data at scale. SQL Server 2022 also comes with new features including AWS S3 support, Azure Active Directory authentication, and Query Store hints, as well as security improvements over SQL Server 2019.
Azure Synapse Link for SQL, now in preview, offers near real-time analytics over data stored in SQL Server 2022 and Azure SQL Database. It is an automated system that replicates data from these transactional databases to a dedicated SQL pool in Azure Synapse Analytics. Azure Synapse Link features near real-time analytics, low-code/no-code solutions for replicating data, and minimal operational impact on source systems.
Purview Data Estate Insights is an application that provides Chief Data Officers and other strategic leaders with a summary of their data estate and the risk associated with that data. Purview provides insights on data stewardship, inventory, curation, and governance through automatically generated reports which can be easily shared with stakeholders.
Lastly, Datamart in Power BI allows analysts to access richer insights from their data sets through datamarts. Datamarts are self-service analytics solutions that give business users a simple and optionally no-code experience. With datamarts, you can easily ingest and prepare data, add business semantics to data, manage and govern data, and build and share reports.

Real-world applications for businesses through real-time data

Let’s explore one example of how the Microsoft Intelligent Data Platform helped navigate supply chain issues:

Many operations companies conduct daily batch runs, where they must manually track their inventory levels and input the data at least once a day. With this method, these organizations cannot accurately predict how much product to sell and must err on the side of selling less to avoid running out of inventory. In times when supply chains are uncertain, this means companies miss out on even more sales.

With the Microsoft Intelligent Data Platform, companies can get real-time information on current inventory levels, rather than a daily report. They can also extract AI-driven insights based on demand spikes, shipping delays, and factory status that predict how many units will be available in a week’s time. This information is supported by the upgraded SQL Server 2022 as well as Azure Synapse Link for SQL, which allows more on-premises data to be extended to the cloud, analyzed, and used for decision making.

But what about using data for customer-facing solutions? The Microsoft Intelligent Data Platform leverages Azure Cosmos DB, providing consumers with recommendations for the best product based on real-time availability of units, delivery time, and compatibility with their needs. Consumers also have access to a support number powered by Power Virtual Agents; through conversational AI, consumers can get intelligent updates on their order status so they can get the information they need quickly.

Learn more

These applications are only the tip of the iceberg when it comes to using the Microsoft Intelligent Data Platform. Learn more about the platform and how to get started, and make sure to watch the entire Microsoft Intelligent Data Platform Mechanics video, where we cover the technology and a sample scenario.

Sources:

1Top 3 Goals & Challenges for CDOs in 2022, evanta.com.

2How to Lead a Data-Driven Digital Transformation, hbr.org.

Azure Data Explorer: Log and telemetry analytics benchmark

Azure Data Explorer (ADX), a component of Azure Synapse Analytics, is a highly scalable analytics service optimized for structured, semi-structured, and unstructured data. It provides users with an interactive query experience that unlocks insights from the ocean of ever-growing log and telemetry data. It is the perfect service to analyze high volumes of fresh and historical data in the cloud by using SQL or the Kusto Query Language (KQL), a powerful and user-friendly query language.
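As a small taste of that query experience, the sketch below runs a typical log-analytics KQL query using the azure-kusto-data Python package. The cluster URI, database, and table names are placeholders.

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<cluster>.<region>.kusto.windows.net")
client = KustoClient(kcsb)

# Count errors per hour over the last day, a typical diagnostic query.
query = """
Logs
| where Timestamp > ago(1d) and Level == 'Error'
| summarize Errors = count() by bin(Timestamp, 1h)
| order by Timestamp asc
"""
response = client.execute("TelemetryDB", query)
for row in response.primary_results[0]:
    print(row["Timestamp"], row["Errors"])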

Azure Data Explorer is a key enabler for Microsoft’s own digital transformation. Virtually all Microsoft products and services use ADX in one way or another, including for troubleshooting, diagnosis, monitoring, and machine learning, and as a data platform for Azure services such as Azure Monitor, PlayFab, Sentinel, Microsoft 365 Defender, and many others. Microsoft’s customers and partners use ADX for a large variety of scenarios: fleet management, manufacturing, security analytics solutions, package tracking and logistics, IoT device monitoring, financial transaction monitoring, and many more. Over the last few years, the service has seen phenomenal growth and is now running on millions of Azure virtual machine cores.

Last year, the third generation of the Kusto engine (EngineV3) was released and is currently offered as a transparent, in-place upgrade to all users not already using the latest version. The new engine features a completely new implementation of the storage, cache, and query execution layers. As a result, performance has doubled or more in many mission-critical workloads.

Superior performance and cost-efficiency with Azure Data Explorer

To better help our users assess the performance of the new engine and cost advantages of ADX, we looked for an existing telemetry and logs benchmark that has the workload characteristics common to what we see with our users:

Telemetry tables that contain structured, semi-structured, and unstructured data types.
Records in the hundreds of billions to test massive scale.
Queries that represent common diagnostic and monitoring scenarios.

As we did not find an existing benchmark to meet these needs, we collaborated with and sponsored GigaOm to create and run one. The new logs and telemetry benchmark is publicly available in this GitHub repo. This repository includes a data generator to generate datasets of 1GB, 1TB, and 100TB, as well as a set of 19 queries and a test driver to execute the benchmark.

The results, now available in the GigaOm report, show that Azure Data Explorer provides superior performance at a significantly lower cost in both single-user and high-concurrency scenarios. For example, one chart from the report displays the results of executing the benchmark while simulating 50 concurrent users.

Learn more

For further insights, we highly recommend reading the full report. And don’t just take our word for it. Use the Azure Data Explorer free offering to load your data and analyze it at extreme speed and unmatched productivity.

Check out our documentation to find out more about Azure Data Explorer and learn more about Azure Synapse Analytics. For deeper technical information, check out the new book Scalable Data Analytics with Azure Data Explorer by Jason Myerscough.

Announcing Microsoft Dev Box Preview

Many IT organizations must choose between giving developers the flexibility they need to be productive and keeping developer workstations managed and secure. Supply chain challenges have led to developers waiting weeks or months to get the hardware they need, forcing them to use aging hardware or unsecured personal devices. At the same time, hybrid work has forced IT to open access to corporate and on-premises resources to developers around the world. With access to sensitive source code and customer data, developers are increasingly becoming the target of more sophisticated cyberattacks.

Today, we’re excited to announce that Microsoft Dev Box is now available in public preview. Microsoft Dev Box is a managed service that enables developers to create on-demand, high-performance, secure, ready-to-code, project-specific workstations in the cloud. Sign in to the Azure portal and search for “dev box” to begin creating dev boxes for your organization.

Focus on code—not infrastructure

With Microsoft Dev Box, developers can focus on writing the code only they can write instead of trying to get a working environment that can build and run the code. Dev boxes are ready-to-code and preconfigured by the team with all the tools and settings developers need for their projects and tasks. Developers can create their own dev boxes whenever they need to quickly switch between projects, experiment on a proof-of-concept, or kick off a full build in the background while they move on to the next task.

Microsoft Dev Box supports any developer IDE, SDK, or tool that runs on Windows. Developers can target any development workload that can be built from Windows, including desktop, mobile, IoT, and web applications. Microsoft Dev Box even supports building cross-platform apps thanks to Windows Subsystem for Linux and Windows Subsystem for Android. Remote access gives developers the flexibility to securely access dev boxes from any device, whether it’s Windows, macOS, Android, iOS, or a web browser.

Tailor dev boxes to the needs of the team

With Microsoft Dev Box, developer teams create and maintain dev box images with all the tools and dependencies their developers need to build and run their applications. Developer leads can instantly deploy the right size dev box for specific roles in a team anywhere in the world, selecting from 4 vCPU / 16GB to 32 vCPU / 128GB SKUs to scale to any size application. By deploying dev boxes in the closest Azure region and connecting via the Azure Global Network, dev teams ensure a smooth and responsive experience with gigabit connection speeds for developers around the world.

Using Azure Active Directory groups, IT admins can grant access to sensitive source code and customer data for each project. With role-based permissions and custom network configurations, developer leads can give vendors limited access to the resources they need to contribute to the project—eliminating the need to ship hardware to short-term contractors and helping keep development more secure.

Centralize governance and management

Developer flexibility and productivity can’t come at the expense of security or compliance. Microsoft Dev Box builds on Windows 365, making it easy for IT administrators to manage dev boxes together with physical devices and Cloud PCs through Microsoft Intune and Microsoft Endpoint Manager. IT admins can set conditional access policies to ensure users only access dev boxes from compliant devices while keeping dev boxes up to date using expedited quality updates to deploy zero-day patches across the organization and quickly isolate compromised devices. Endpoint Manager’s deep device analytics make it easy to audit application health, device utilization, and other critical metrics, giving developers the confidence to focus on their code knowing they’re not exposing the organization to any unnecessary risk.

Microsoft Dev Box uses a consumption-based compute and storage pricing model, meaning organizations only pay for what they use. Automated schedules can warm up dev boxes at the start of the day and stop them at the end of the day so they don’t run while sitting idle. With hibernation, available in a few weeks, developers can resume a stopped dev box and pick up right where they left off.

Get started now

Microsoft Dev Box is available today as a preview from the Azure portal. During this period, organizations get the first 15 hours of the dev box 8 vCPU and 32 GB Memory SKU for free every month, along with the first 365 hours of the dev box Storage SSD 512 GB SKU. Beyond that, organizations pay only for what they use with a consumption-based pricing model, charged on a per-hour basis depending on the amount of compute and storage consumed.

To learn more about Microsoft Dev Box and get started with the service, visit the Microsoft Dev Box page or find out how to deploy your own Dev Box from a pool.

Security for next generation telecommunication networks

Almost two years ago, the National Defense Science Board invited me to participate in the Summer Study 2020 Panel, “Protecting the Global Information Infrastructure.” They requested that I brief them on the evolution of the global communications infrastructure connecting all nations. The U.S., like other nations, both cooperates and competes in the commercial telecom market, while prioritizing national security.

This study group was interested in the implementation of 5G and its evolution to 6G. They understood that the softwarization of core communication technologies and the inclusion of edge and cloud computing as core infrastructure components of telecommunications services are inevitable. Because of my expertise in these areas, they invited me to share my thoughts on how we might secure and protect the emerging networks and systems of the future. I prepared for the meeting by looking at how Microsoft, as a major cloud vendor, has worked to secure our global networks.

My conclusion was simple. It is clear that attacks on the national communications infrastructure will occur with much greater sophistication than ever before. Because of this, we continue to develop our networks and systems with security as our first principle and we stay constantly vigilant. To these ends, Microsoft has adopted a zero-trust security architecture in all our platforms, services, and network functions.

Specialized hardware replaced by disaggregated software

One challenge for the panel was to understand precisely what the emerging connectivity infrastructure will be, and what security attributes must be assured with respect to that infrastructure.

Classical networks (the ones before the recent 5G networks) were deployed in a hub-and-spoke architecture. Packets came to a specialized hardware-software package developed by a single vendor and from there were sent to the Internet. But 5G (and beyond) networks are different. In many ways, the specialized hardware has been “busted open.”

Functionality is now disaggregated into multi-vendor software components that run on different interconnected servers. As a result, the attack surface area has increased dramatically. Network architects have to protect each of these components along their interconnects—both independently and together. Furthermore, packets are now processed by multiple servers, any of which could be compromised. 5G brings the promise of a significant number of connected Internet-of-Things (IoT) devices that, once compromised, could also be turned into an army of attackers.

The power of cloud lies in its scale

In a word, Microsoft Azure is big: 62 regions in 140 countries worldwide host millions of networked servers, with regions connected by over 180,000 miles of fiber. Some of our brightest and most experienced engineers have used their knowledge to make this infrastructure safe and secure for customers, which includes companies and people working in healthcare, government services, finance, energy, manufacturing, retail, and more.

As of today, Microsoft tracks more than 250 unique nation-state actors, cybercriminals, and other threat actors. Our cloud processes and analyzes more than 43 trillion security signals every single day. Nearly 600,000 organizations worldwide use our security offerings. With all this, Microsoft’s infrastructure is secure, and we have earned the trust of our customers. Many of the world’s largest companies with vital and complex security needs have offloaded much of their network and compute workloads to Azure. Microsoft Azure has become part of their critical infrastructure.

Securing Open RAN architecture

The cloud’s massive and unprecedented scale is unique, and precisely what makes the large investments in sophisticated defense and security economically possible. Microsoft Azure’s ground-up design includes strict security measures to withstand any type of attack imaginable. Conversely, the scale required to defend against sophisticated threats is neither practical nor economical for smaller-scale, on-premises systems.

The report “Why 5G requires new approaches to cybersecurity”1 articulates several good reasons why we need to think about how to protect our infrastructure. Many of us in research and engineering have also been thinking about these issues, as evidenced by Microsoft’s recently published white paper, Bringing Cloud Security to the Open RAN, which describes how we can defend against and mitigate malicious attacks on O-RANs, beginning with security as the first principle.

With respect to O-RAN and Azure for Operators Distributed Services (AODS), we explain how they inherit and benefit from the cloud’s robust security principles applied in the development of the far-edge and the near-edge. The inherently modular nature of Open RAN, alongside recent advancements in Software Defined Networking (SDN) and network functions virtualization (NFV), enables Microsoft to deploy security capabilities and features at scale across the O-RAN ecosystem.

We encapsulate code into secure containers and enable more granular control of sensitive data and workloads than prior generations of networking technologies. Additionally, our computing framework makes it easy to add sophisticated security features in real-time, including AI/ML and advanced cloud security capabilities to promptly detect and actively mitigate malicious activities.

Microsoft is actively working on delivering the most resilient platform in the industry, backed by our proven security capabilities, trustworthy guarantees, and a well-established secure development lifecycle. This platform is being integrated with Microsoft security defense services to prevent, detect, and respond to attacks. It includes AI/ML technologies to allow creation of logic to automate and create actionable intelligence to improve security, fault analyses, and operational efficiency.

We are also leveraging Azure services such as Active Directory, Azure Container Registry, Azure Arc, and Azure Network Function Manager to provide a foundation for secure and verifiable deployment of RAN components. Additional technologies include secure RAN deployment and management processes on top of these, which will eliminate significant upfront cost otherwise incurred by RAN vendors when building these technologies themselves.

It is noteworthy that across the entire project lifecycle—from planning to sunsetting—we integrate security practices. All software deliverables are developed in a “secure by default” manner, going through a pipeline that leverages Microsoft Azure’s security analysis tools that perform static analysis, credential scanning, regression, and functionality testing.

We are taking steps to integrate our RAN analytics engine with Microsoft Sentinel. This enables telecom operators to manage vulnerability and security issues, and to deploy secure capabilities for their data and assets. We expect Microsoft Sentinel, Azure Monitor, and other Azure services to incorporate our RAN analytics to support telecommunications customers. With this, we will deliver intelligent security analytics and threat intelligence for alert detection, threat visibility, proactive hunting, and threat response. We also expect that Azure AI Gallery will host sophisticated third-party ML models for RAN optimization and threat detection, running on the data streams we collect.

Mitigating the impact of compromised systems

We have built many great tools to keep the “bad guys” out, but building secure telecommunication platforms requires dealing with the unfortunate reality that sometimes systems can still be compromised. As a result, we are aggressively conducting research and building technologies, including fast detection and recovery from compromised systems.

Take the case of ransomware. Traditional ransomware attacks encrypt a victim’s data and ask for a ransom in exchange for decrypting it. However, modern ransomware attacks do not limit themselves to encrypting data. Instead, they remove the enterprise’s ability to control its platforms and critical infrastructure. The RAN constitutes critical infrastructure and can suffer from ransomware attacks.

Specifically, we have developed technology that prepares us for the unfortunate time when systems may be compromised. Our latest technology makes it easier to recover as quickly as possible, and with minimal manual effort. This is especially important in telco far-edge scenarios, where the large number of sites makes it prohibitively expensive to send technicians into the field for recovery. Our solution, which leverages a concept called trusted beacons, automatically recovers a far-edge node from a compromise or failure. When trusted beacons are absent, the platform automatically reboots and re-installs an original, unmodified, and uncompromised software image.
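A toy sketch of that idea, using hypothetical helper functions, appears below; it illustrates the beacon-watchdog pattern conceptually and is not Microsoft’s implementation.

import time

BEACON_TIMEOUT_S = 300   # how long a node tolerates missing beacons

def beacon_received() -> bool:
    """Hypothetical check for a signed, verifiable beacon from the platform."""
    return False   # simulate a compromised or disconnected node

def reimage_from_golden_image() -> None:
    """Hypothetical recovery: reboot and reinstall the unmodified image."""
    print("rebooting and reinstalling the original software image")

last_seen = time.monotonic()
while True:
    if beacon_received():
        last_seen = time.monotonic()
    elif time.monotonic() - last_seen > BEACON_TIMEOUT_S:
        # No trusted beacon within the deadline: recover automatically,
        # with no field technician required at the far-edge site.
        reimage_from_golden_image()
        break
    time.sleep(10)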

Looking into the future

We have developed mechanisms for monitoring and analyzing data as we look for threats. Our best-in-class verification technology checks every configuration before lighting it up. Our researchers are constantly adding new AI techniques that use the compute power of the cloud to protect our infrastructure better than ever before. Our end-to-end zero-trust solutions spanning identity, security, compliance, and device management, across cloud, edge, and all connected platforms will protect the telecommunications infrastructure. We continue to invest billions to improve cybersecurity outcomes.

Microsoft will continue to update you on developments that impact the security of our network, including many of the technologies noted within this article. Microsoft knows that while we need to continue to be vigilant, the telecommunications industry ultimately benefits by making Microsoft Azure part of their critical infrastructure.

1 Tom Wheeler and David Simpson, “Why 5G requires new approaches to cybersecurity.” The Brookings Institution.

3 ways Azure Speech transforms game development with AI

With Azure Cognitive Services for Speech, customers can build voice-enabled apps confidently and quickly with the Speech SDK. We make it easy for customers to transcribe speech to text (STT) with high accuracy, produce natural-sounding text-to-speech (TTS) voices, and translate spoken audio. In the past few years, we have been inspired by the AI innovations coming out of the gaming industry.
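To give a sense of how little code this takes, here is a minimal Python sketch that synthesizes one line of dialogue and transcribes one utterance with the Speech SDK; the key, region, and sample line are placeholders.

import azure.cognitiveservices.speech as speechsdk

config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")

# Text to speech: render one line of dialogue with a prebuilt neural voice.
config.speech_synthesis_voice_name = "en-US-JennyNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=config)
synthesizer.speak_text_async("Welcome back, pilot. Runway two-seven is clear.").get()

# Speech to text: transcribe a single utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=config)
print(recognizer.recognize_once().text)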

Why AI for gaming? AI in gaming allows for flexible and reactive video game experiences. As technology continues to change and evolve, AI innovation has led to pioneering and tremendous advances in the gaming industry. Here are three popular use cases:

Use Cases for AI Gaming

Game dialogue prototyping with text to speech: Reduce the time and money spent on dialogue production to get the game to market sooner. Designers and producers can rapidly swap lines of dialogue using different emotional voices and listen to variations in real time to ensure accuracy.

Greater accessibility with transcription, translation, and text to speech: Make gaming more accessible and add functionality through a single interface. Spoken gameplay instructions make games more accessible to individuals unable to read the text or language, and narrated storylines serve visually impaired gamers or younger users who have not yet learned to read.

Scalable non-playable character voices and interaction with text to speech: Easily produce voice characters that stay on-brand with consistent quality and speaking styles. Game developers can add emotions, accents, nuances, laughter, and other paralinguistic sounds and expressions to game avatars and NPCs (non-playable characters) that can initiate or participate in a conversation in-game.
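For a flavor of how such emotional styles are applied, the sketch below uses SSML’s mstts:express-as element; the voice, style, and line are illustrative, and available styles vary by voice.

import azure.cognitiveservices.speech as speechsdk

config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=config)

# An NPC line rendered in a "cheerful" speaking style via SSML.
ssml = """
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    <mstts:express-as style='cheerful'>
      You found the hidden chest! Well done, adventurer.
    </mstts:express-as>
  </voice>
</speak>
"""
synthesizer.speak_ssml_async(ssml).get()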

Featured Customers for AI Gaming

Flight Simulator: Our first-party game developers are using AI for speech to improve end-user experiences. Flight Simulator is the longest-running franchise in Microsoft history, and the latest critically acclaimed release not only builds on that legacy but also pushes the boundaries as the most technologically advanced simulator ever made. By adding authentic air traffic controller voices, Flight Simulator added a small-but-powerful way to elevate the experience. Recording audio to replicate air traffic controllers from every airport on Earth would have been an enormous task; TTS is a great solution that can handle the dynamic content and serve the air traffic controller voices in a low-latency, highly available, secure, and scalable way. Check out the video of the newly released Flight Simulator experience with custom neural voice implemented for real-time air traffic controller voices.

Undead Labs: Undead Labs studio is on a mission to take gaming in bold new directions. They are the makers of the State of Decay franchise and use Azure Neural TTS during game development.

Double Fine: Double Fine is the producer of many popular games, including Psychonauts. They are utilizing our neural TTS to prototype future game projects.

You can check out our use case presentation at Microsoft’s Game Developers Conference 2022 for more details.

Speech Services and Responsible AI

We are excited about the future of Azure Speech, with human-like, diverse, and delightful quality under the high-level architecture of the XYZ-code AI framework. Our technology advancements are also guided by Microsoft’s Responsible AI process and our principles of fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. We put these ethical standards into practice through the Office of Responsible AI (ORA), which sets our rules and governance processes; the AI Ethics and Effects in Engineering and Research (Aether) Committee, which advises our leadership on the challenges and opportunities presented by AI innovations; and Responsible AI Strategy in Engineering (RAISE), a team that enables the implementation of Microsoft Responsible AI rules across engineering groups.

Get started

Start building new customer experiences with Azure Neural TTS and STT. In addition, the Custom Neural Voice capability enables organizations to create a unique brand voice in multiple languages and styles.

Resources

Get started with text to speech
Get started with speech to text
Get started with Custom Neural Voice
Get started with speech translation


Microsoft is a Leader in 2022 Gartner Magic Quadrant for Cloud AI Developer Services

Gartner has recognized Microsoft as a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud AI Developer Services, with Microsoft placed furthest in “Completeness of Vision”.

Gartner defines the market as “cloud-hosted or containerized services that enable development teams and business users who are not data science experts to use AI models via APIs, software development kits (SDKs), or applications.”

We are proud to be recognized for our Azure AI Platform. In this post, we’ll dig into the Gartner evaluation, what it means for developers, and provide access to the full reprint of the Gartner Magic Quadrant to learn more.

Scale intelligent apps with production-ready AI

“Although ModelOps practices are maturing, most software engineering teams still need AI capabilities that do not demand advanced machine learning skills. For this reason, cloud AI developer services (CAIDS) are essential tools for software engineering teams.”—Gartner

A staggering 87 percent of AI projects never make it into production.¹ Beyond the complexity of data preprocessing and building AI models, organizations wrestle with scalability, security, governance, and more to make their models production-ready. That’s why over 85 percent of Fortune 100 companies use Azure AI today, spanning industries and use cases.

More and more, we see developers accelerate time to value by using pre-built and customizable AI models as building blocks for intelligent solutions. Microsoft Research has made significant breakthroughs in AI over the years, being the first to achieve human parity across speech, vision, and language capabilities. Today, we’re pushing the boundaries of language model capabilities with large models like Turing, GPT-3, and Codex (the model powering GitHub Copilot) to help developers be more productive. Azure AI packages these innovations into production-ready, general-purpose models (Azure Cognitive Services) and use-case-specific models (Azure Applied AI Services) that developers can integrate via an API or SDK and then fine-tune for greater accuracy.

For developers and data scientists looking to build production-ready machine learning models at scale, we support automated machine learning, also known as AutoML. AutoML in Azure Machine Learning is based on breakthrough Microsoft research focused on automating the time-consuming, iterative tasks of machine learning model development. This frees up data scientists, analysts, and developers to focus on value-add tasks beyond operations and accelerates their time to production.
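A rough sketch of submitting an AutoML classification job with the Azure Machine Learning Python SDK v2 (azure-ai-ml) follows; the workspace details, compute target, data asset, and column name are placeholders.

from azure.ai.ml import MLClient, Input, automl
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(),
                     subscription_id="<subscription-id>",
                     resource_group_name="<resource-group>",
                     workspace_name="<workspace>")

# Configure an AutoML classification job over a registered training data asset.
job = automl.classification(
    compute="cpu-cluster",
    experiment_name="churn-automl",
    training_data=Input(type="mltable", path="azureml:churn-train:1"),
    target_column_name="churned",
    primary_metric="accuracy",
)
job.set_limits(timeout_minutes=60, max_trials=20)

submitted = ml_client.jobs.create_or_update(job)
print(submitted.studio_url)   # follow the trials in Azure ML studio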

Enable productivity for AI teams across the organization

“As more developers use CAIDS to build machine learning models, the collaboration between developers and data scientists will become increasingly important.”—Gartner

As AI becomes more mainstream across organizations, it’s essential that employees have the tools they need to collaborate, build, manage, and deploy AI solutions effectively and responsibly. As Microsoft Chairman and CEO Satya Nadella shared at Microsoft Build, Microsoft is "building models as platforms in Azure" so that developers with different skills can take advantage of breakthrough AI research and embed them into their own applications. This ranges from professional developers building intelligent apps with APIs and SDKs to citizen developers using pre-built models via Microsoft Power Platform.

Azure AI empowers developers to build apps in their preferred language and deploy in the cloud, on-premises, or at the edge using containers. Recently we also announced the capability to use any Kubernetes cluster and extend machine learning to run close to where your data lives. These resources can be run through a single pane with the management, consistency, and reliability provided by Azure Arc.

Operationalize Responsible AI practices

“Vendors and customers alike are seeking more than just performance and accuracy from machine learning model. When selecting AutoML services, they should prioritize vendors that excel at providing explainable, transparent models with built-in bias detection and compensatory mechanisms.”—Gartner

At Microsoft, we apply our Responsible AI Standard to our product strategy and development lifecycle, and we’ve made it a priority to help customers do the same. We also provide tools and resources to help customers understand, protect, and control their AI solutions, including a Responsible AI Dashboard, bot development guidelines, and built-in tools to help them explain model behavior, test for fairness, and more. Providing a consistent toolset to your data science team not only supports responsible AI implementation but also helps provide greater transparency and enables more consistent, efficient model deployments.

Microsoft is proud to be recognized as a Leader in Cloud AI Developer Services, and we are excited by innovations happening at Microsoft and across the industry that empower developers to tackle real-world challenges with AI. You can read and learn from the complete Gartner Magic Quadrant now.

Learn more

Explore other analyst reports for Azure AI.
Read the latest announcements from Azure AI on the Azure blog.

References

¹Why do 87 percent of data science projects never make it into production? Venture Beat.

Gartner Inc.: “Magic Quadrant for Cloud AI Developer Services,” Van Baker, Svetlana Sicular, Erick Brethenoux, Arun Batchu, Mike Fang, May 23, 2022.

Gartner and Magic Quadrant are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Bluware and Microsoft Azure develop OSDU-enabled interactive AI seismic interpretation solution for energy super major

This blog post has been co-authored by Kapil Raval, Principal Program Manager, Microsoft.​

Bluware develops cloud-native solutions that help oil and gas operators increase exploration and production workflow productivity through deep learning, enabling geoscientists to deliver faster and smarter decisions about the subsurface. Today, Bluware announced its collaboration with Microsoft on its next-generation automated interpretation solution, InteractivAI™, which is built on the Azure implementation of the OSDU™ Data Platform.

The two companies are working together to provide comprehensive solutions combining Microsoft Cloud implementation of OSDU™ Data Platform with Bluware’s subsurface knowledge. As the world’s energy companies retool for the future, they are juggling priorities between new forms of energy, carbon emissions, and maintaining the growing demand for fossil fuels. Innovative solutions such as cloud computing and machine learning are playing an important role in this transition.

To address an energy super major’s seismic interpretation challenges, Bluware is providing an interactive deep learning solution that runs natively on Azure, called InteractivAI™.

InteractivAI™ is utilized by the organization’s exploration and reservoir development teams to accelerate seismic interpretations and improve results by assisting geoscientists in identifying geological and geophysical features that may have been previously missed, incorrectly interpreted, or simply too time-consuming to interpret.

Using a data-centric approach, the application is unique in its ability to let users train and infer simultaneously. Imagine running deep learning in real time, where the interpreter provides feedback that the operator can actually see as the network suggests on-the-fly interpretations. This even includes results on data that is either not readily visible to the human eye or very difficult to see. This interactive workflow delivers more precise and comprehensive results in hours rather than months, resulting in higher-quality exploration and reservoir development.

The interactive deep learning approach

Bluware is pioneering the concept of “interactive deep learning,” wherein the scientist remains in the figurative driver’s seat and steers the network as it learns and adapts based on the interpreter’s teachings. Adjusting and optimizing the training data set provides immediate feedback to the network, which in turn adjusts its weights and biases accordingly in real time.

Bluware differs from other deep learning approaches, which use a neural network that has been pre-trained on multiple data sets. With those approaches, users must rely on a network that was trained on data they have not seen and created with a set of unknown biases, and therefore something they have no control over.

The basic parameterization exposed to scientists in these traditional approaches gives the illusion of network control without really ceding any significant control to the user. Processing times can be days or weeks, and scientists can only supply feedback to the network once the training is complete, at which point training will need to run again from scratch.

The interactive deep learning approach is data-specific, focusing on creating the best learning and training model for the geology the user is working with. Unlike traditional deep learning approaches, the idea is to start with a blank, untrained network and train it while labeling to identify any feature of interest. This approach is not limited to salt or faults; it can also be used to capture shallow hazards, injectites, channels, bright spots, and more. This flexibility allows the expert to explore the myriad of possibilities and alternative interpretations within the area of interest.
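As a conceptual illustration only (not Bluware’s implementation), the toy PyTorch loop below captures the core idea: an untrained network is updated after each labeling interaction and immediately returns a fresh on-the-fly suggestion for the interpreter to correct.

import torch
import torch.nn as nn

# A blank, untrained segmentation net: no pre-training, no baked-in biases.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def interactive_step(patch, label_mask):
    # Learn from the interpreter's new labels...
    optimizer.zero_grad()
    loss = loss_fn(model(patch), label_mask)
    loss.backward()
    optimizer.step()
    # ...and immediately return an updated on-the-fly suggestion.
    with torch.no_grad():
        return torch.sigmoid(model(patch))

# Simulated interaction: one 64x64 seismic patch and a sparse painted fault mask.
patch = torch.randn(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.95).float()
suggestion = interactive_step(patch, mask)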

The energy company initially conducted a two-month evaluation with multiple experts across their global asset teams. The results were remarkable, and the organization is continually adding users. Additionally, Bluware has provided a blueprint for the company’s IT team for an Azure Kubernetes Service (AKS) implementation which will accelerate and expand this Azure-based solution.

A seismic data format designed for the cloud

As companies continue to wrestle with enormous, complex data streams such as petabytes of seismic data, the pressure to invest in digital technology intensifies. Bluware has adapted to this imperative, delivering a cloud-based format for storing seismic data called Volume Data Store™ (VDS). Microsoft and Bluware have worked together to natively enable VDS as part of the Microsoft Cloud implementation of the OSDU™ Data Platform, where developers and customers can connect to the stored seismic data and run interactive AI-driven seismic interpretation workflows using the InteractivAI™ SaaS from Microsoft AppSource.

Bluware and Microsoft are collaborating in parallel to support major energy customers through their seismic shift initiatives including moving petabytes of data to Azure Blob storage in a cloud-native VDS environment.

Revolutionizing the way energy companies store and use seismic data

Bluware designed InteractivAI™ not only with seismic workflows in mind but also with an eye on the trends shaping the future of the energy sector. Creating a cloud-native data format makes it scalable for energy companies to do more with their data while lowering costs and speeding up workflows, allowing them to arrive at more accurate decisions faster by leveraging the power of Azure.

About Bluware

In 2018, a group of energy-focused software companies, namely Bluware, Headwave, Hue, and Kalkulo AS, merged to become Bluware Corp. to empower change, growth, and a sustainable future for the energy sector.

As companies pivot from fossil fuels to cleaner energy sources, the combination of new industry standards, cloud computing, and AI will be critical for companies to adapt quickly, work smarter, and continue to be profitable. Companies that adapt faster will have a significant advantage over their competition. For more information, visit Bluware’s website.