Power mission-critical applications with Azure Database for PostgreSQL

In our current environment, organizations are increasingly looking towards digital solutions to engage their customers and remain competitive. They’re discovering that their customers’ needs can be best met through differentiated, digital experiences delivered by cloud-native applications.

When building a new application, one of the most important decisions to make is where to store the application data. We see tremendous interest in Azure Database for PostgreSQL when it comes to storing relational data in the cloud for mission-critical applications. Here’s why:

Why Azure Database for PostgreSQL?

100 percent open source.

Azure Database for PostgreSQL is built on community edition Postgres, with open extension support so you can leverage valuable PostgreSQL features, including JSONB, geospatial support, and rich indexing.
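As a small illustration of those features, here is a minimal sketch (assuming the psycopg2 driver and a reachable Azure Database for PostgreSQL server; the table, column, and connection values are hypothetical placeholders) of storing JSONB documents behind a GIN index:

```python
# Minimal sketch: JSONB storage with a GIN index on Azure Database for PostgreSQL.
# Assumes psycopg2 is installed; server, database, and credentials below are
# hypothetical placeholders to replace with your own values.
import json
import psycopg2

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="appdb",
    user="<admin-user>",
    password="<password>",
    sslmode="require",  # Azure Database for PostgreSQL requires TLS connections
)

with conn, conn.cursor() as cur:
    # JSONB column for semi-structured payloads, plus a GIN index so containment
    # queries (@>) can use the index instead of scanning the whole table.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id bigserial PRIMARY KEY,
            payload jsonb NOT NULL
        );
        CREATE INDEX IF NOT EXISTS events_payload_gin ON events USING GIN (payload);
    """)
    cur.execute(
        "INSERT INTO events (payload) VALUES (%s)",
        (json.dumps({"device": "sensor-42", "status": "ok"}),),
    )
    cur.execute(
        "SELECT id, payload FROM events WHERE payload @> %s::jsonb",
        (json.dumps({"status": "ok"}),),
    )
    print(cur.fetchall())

conn.close()
```

Because the service runs community Postgres, the same SQL works unchanged against a local Postgres instance, which makes development and testing straightforward.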
Our Postgres team at Microsoft is committed to nurturing a culture of contributing to and collaborating with the Postgres community, and we’re excited to welcome Postgres committers to the team. These committers review submitted code for Postgres, “commit” it into the source code repository, and work with other contributors to test, refine, and eventually incorporate it into the next Postgres build. In future blogs, they’ll share what they’re working on when it comes to new versions of Postgres and updates to the Citus open source extension.

Fully managed.

Using a managed Postgres database service on Azure makes your job simpler and allows you to focus on your application, by automating time- and cost-intensive tasks like configuring and managing high-availability, disaster recovery, backups, and data replication across regions.
Azure Database for PostgreSQL has enterprise-grade security and compliance capabilities, such as Azure Advanced Threat Protection, and provides customized performance recommendations, making it suitable for your most mission-critical applications.

High-performance horizontal scaling.

Azure Database for PostgreSQL Hyperscale (Citus) scales out horizontally to hundreds of nodes, with no application rewrites, so you can easily build incredibly scalable applications. This is done using the Citus open source Postgres extension that intelligently and transparently distributes your data and queries across multiple nodes to achieve massive parallelism, along with a much bigger compute, memory, and disk footprint.
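To make the distribution step concrete, here is a hedged sketch of sharding a table with the open source Citus extension's create_distributed_table() function; the connection details, table, and distribution column below are hypothetical:

```python
# Sketch: distribute a table across a Hyperscale (Citus) server group by calling
# the Citus UDF create_distributed_table(). Placeholders below are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="<coordinator-name>.postgres.database.azure.com",
    dbname="citus",
    user="citus",
    password="<password>",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS page_views (
            tenant_id bigint NOT NULL,
            viewed_at timestamptz NOT NULL DEFAULT now(),
            url text
        );
    """)
    # Citus hash-partitions rows (and routes future queries) across worker nodes
    # by tenant_id, a typical choice for multi-tenant SaaS workloads.
    cur.execute("SELECT create_distributed_table('page_views', 'tenant_id');")

conn.close()
```

After that single call, inserts and queries against page_views are transparently parallelized across the worker nodes, which is what allows scale-out without application rewrites.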
In future blogs, we’ll dive deeper into key use cases for Hyperscale (Citus) and share how organizations are building powerful applications using these capabilities. Hyperscale (Citus) enables organizations to scale multi-tenant SaaS applications, build high throughput transactional applications, and more.

Key considerations for running modern enterprise applications on Postgres

If you’re new to the world of Postgres and want to understand whether it’s a good fit for building your enterprise applications, check out our new whitepaper, Running Enterprise Applications on PostgreSQL, to learn about:

Benefits of using Postgres for modern, mission-critical applications, and considerations around performance, scalability, security, and more.
Postgres’ extensible design that enables you to future-proof your applications.
Postgres capabilities that empower developers to work more productively.

Announcing our Postgres blog series

We are here to help you at every stage of your journey with Azure Database for PostgreSQL, and will be keeping you up to date with regular posts both here and on our Azure Database for PostgreSQL blog. We’ll be sharing:

Insights and updates directly from Postgres committers Thomas Munro, Jeff Davis, Andres Freund, and David Rowley—you can read the first post in the series, How to securely authenticate with SCRAM in Postgres 13, now.
Key customer use cases, stories, and architectures.
Resources and best practices to help you develop innovative applications to solve business challenges.

Whether you’re new to Postgres, new to Azure, or already using Azure Database for PostgreSQL to power your enterprise applications, we look forward to supporting your application development journey.

Stay tuned for future blogs here and subscribe to our Azure Database for PostgreSQL blog.

Learn more about Azure Database for PostgreSQL.
Source: Azure

How Azure Synapse Analytics can help you respond, adapt, and save

Business disruptions, tactical pivots, and remote work have all emphasized the critical role that analytics plays in all organizations. Uncharted situations demand proven performance insights so that businesses can quickly determine what is and is not working. In recent months, the urgency for business-guiding insights has only been heightened, leading to a need for real-time analytics solutions. Equally important is the need to discover and share these insights in the most cost-effective manner.

COVID-19 has not only been a challenge to world health but has also created new economic challenges for businesses worldwide. These challenges have resulted in an increased need for tools that quickly deliver insights to business leaders—empowering informed decisions. This is where Microsoft Azure Synapse Analytics can help.

New circumstances demand new solutions

Azure Synapse Analytics is a new type of analytics platform that enables you to accelerate your time-to-insight with a unified experience and—just as important—save on costs while doing so. It is up to 14 times faster and costs 94 percent less than other cloud providers. Let’s dive into how Azure Synapse can help you respond, adapt, and save.

Respond to disruption and adapt to a new normal

History shows that proven analytics technologies, such as Azure Synapse, have a strong track record of enabling more dynamic and exploratory responses that can guide businesses through difficult times. Traditional data warehouses and reports can’t scale to provide the intelligence and insight that business executives demand in today’s world.

To make good strategic decisions, businesses need to quickly and effectively find new insights in their data. This can only come through more advanced tools and an improved understanding of how to get the most from them.

Each recent global economic crisis can be correlated with a follow-up increase in data analytics projects as companies worldwide lean on data analytics to boost their recovery.

To enable teams to collaborate and innovate, they need tools and services that help them discover, explore, and quickly and efficiently find new insights.

Azure Synapse has an intelligent architecture that makes it industry-leading in unifying big data workloads with traditional data warehousing while at the same time encouraging collaboration and reducing costs.

Using Azure Synapse, businesses can empower their teams to collaborate, adapt, and create new strategies that are driven by data. Azure Synapse not only makes it easy to start and scale in the cloud, but it has key security, governance, and monitoring tools that are critical for successful data analytics solutions.

Save on costs with Azure Synapse

The current economic challenges have certainly made us all—individuals and businesses—more conscious of our spending. Businesses are looking for new ways to improve productivity and efficiency on limited budgets. Cloud analytics in general, and Azure Synapse in particular, are a great fit for this requirement because they help businesses start small and scale as needed.

Azure Synapse offers a cost-effective service due to its intelligent architecture, which separates storage, compute power, and resources—but makes them seamlessly available when needed. This means that you do not have to keep paying for cloud services if you experience unexpected events that cause business disruptions and tactical pivots. Services can simply be paused to release resources and save costs. You can also scale compute separately from storage, which brings even more cost savings.
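As a hedged illustration of that pause capability, one way to stop paying for dedicated compute outside working hours is to pause a dedicated SQL pool from a script. This sketch shells out to the Azure CLI's synapse sql pool pause/resume commands (assuming the CLI is installed and signed in; the resource names are placeholders):

```python
# Sketch: pause a Synapse dedicated SQL pool to release compute (and its cost),
# then resume it later. Assumes the Azure CLI is installed and authenticated
# ("az login"); resource names below are hypothetical placeholders.
import subprocess

RG, WORKSPACE, POOL = "analytics-rg", "contoso-synapse", "salesdw"

def run(*args: str) -> None:
    subprocess.run(
        ["az", "synapse", "sql", "pool", *args,
         "--resource-group", RG,
         "--workspace-name", WORKSPACE,
         "--name", POOL],
        check=True,
    )

run("pause")   # release compute; the data stays in storage
# ... later, before the morning reporting window ...
run("resume")  # bring dedicated compute back online
```

A scheduler (for example, an Azure Function on a timer) could run the same commands automatically on evenings and weekends.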

Azure Synapse has been found to offer a significantly better price-to-performance ratio when compared with similar services from other cloud providers. This chart from an independent study shows the price-performance comparison (lower is better).

In a recent study, GigaOm, an independent emerging technology research firm, found that Azure Synapse has the best price-to-performance ratio on the market. The study surveyed many services from all of the major cloud providers and took both performance and cost into account. Besides being powerful and cost-effective, Azure Synapse offers industry-leading features when it comes to governance, monitoring, and collaboration that address key challenges for data analytics projects. These features provide businesses with the right tools to control not only costs but also the entire analytics lifecycle, including security, performance, and accuracy.

Learn more

Great leadership, a clear vision, and intelligent data analytics are key components that can help during significant economic and health challenges. Business leaders must lean on the data and insights available and the will, knowledge, and skills of their teams. Empowering your team with the right tools is critical to ensuring they have what they need to effectively collaborate, discover, and work towards recovery.

To learn more about Azure Synapse:

Read the e-book Three Ways Analytics Can Help: Respond, Adapt, and Save.
Get started on Azure Synapse Analytics with an Azure account.
Visit the Azure Synapse documentation webpage for tutorials.
Request a call from an Azure Synapse sales specialist when you’re ready.

Source: Azure

Bringing AI supercomputing to customers

The trend toward the use of massive AI models to power a large number of tasks is changing how AI is built. At Microsoft Build 2020, we shared our vision for AI at Scale utilizing state-of-the-art AI supercomputing in Azure and a new class of large-scale AI models enabling next-generation AI. The advantage of large-scale models is that they only need to be trained once with massive amounts of data using AI supercomputing, enabling them to then be “fine-tuned” for different tasks and domains with much smaller datasets and resources. The more parameters that a model has, the better it can capture the difficult nuances of the data, as demonstrated by our 17-billion-parameter Turing Natural Language Generation (T-NLG) model and its ability to understand language to answer questions from or summarize documents seen for the first time. Natural language models like this, significantly larger than the state-of-the-art models a year ago and many orders of magnitude larger than earlier image-centric models, are now powering a variety of tasks throughout Bing, Word, Outlook, and Dynamics.
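To make the "pretrain once, fine-tune cheaply" idea concrete, here is a minimal, hedged sketch using the open source Hugging Face transformers library (not the Turing models themselves, which are not distributed this way); the base model and dataset are illustrative stand-ins:

```python
# Sketch: fine-tune a publicly available pretrained language model on a small,
# task-specific dataset. Illustrates the pattern described above; the model,
# dataset, and hyperparameters are illustrative, not Microsoft's own setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"           # small stand-in for a large LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")                   # small labeled dataset for the new task

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(1000)),
)
trainer.train()   # adapts the pretrained weights with a tiny fraction of the original compute
```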

Training models at this scale requires large clusters of hundreds of machines with specialized AI accelerators interconnected by high-bandwidth networks inside and across the machines. We have been building such clusters in Azure to enable new natural language generation and understanding capabilities across Microsoft products, and to power OpenAI on their mission to build safe artificial general intelligence. Our latest clusters provide so much aggregated compute power that they are referred to as AI supercomputers, with the one built for OpenAI ranking among the top five publicly disclosed supercomputers in the world. Using this supercomputer, OpenAI unveiled in May their 175-billion-parameter GPT-3 model and its ability to support a wide range of tasks it wasn’t specifically trained for, including writing poetry or translation.

The work we have done on large-scale compute clusters, leading network design, and the software stack that manages them (including Azure Machine Learning, ONNX Runtime, and other Azure AI services) is directly aligned with our AI at Scale strategy. The innovation generated through this process is ultimately making Azure better at supporting the AI needs of all our customers, irrespective of their scale. For example, with the NDv2 VM series, Azure was the first and only public cloud offering clusters of VMs with NVIDIA’s V100 Tensor Core GPUs, connected by high-bandwidth, low-latency NVIDIA Mellanox InfiniBand networking. A good analogy is how automotive technology is pioneered in the high-end racing industry and then makes its way into the cars that we drive every day.

New frontiers with unprecedented scale

“Advancing AI toward general intelligence requires, in part, powerful systems that can train increasingly more capable models. The computing capability required was just not possible until recently. Azure AI and its supercomputing capabilities provide us with leading systems that help accelerate our progress.” – Sam Altman, OpenAI CEO

In our continuum of Azure innovation, we’re excited to announce the new ND A100 v4 VM series, our most powerful and massively scalable AI VM, available on-demand from eight to thousands of interconnected NVIDIA GPUs across hundreds of VMs.

The ND A100 v4 VM series starts with a single virtual machine (VM) and eight NVIDIA Ampere A100 Tensor Core GPUs, but just like the human brain is composed of interconnected neurons, our ND A100 v4-based clusters can scale up to thousands of GPUs with an unprecedented 1.6 Tb/s of interconnect bandwidth per VM. Each GPU is provided with its own dedicated topology-agnostic 200 Gb/s NVIDIA Mellanox HDR InfiniBand connection. Tens, hundreds, or thousands of GPUs can then work together as part of a Mellanox InfiniBand HDR cluster to achieve any level of AI ambition. Any AI goal (training a model from scratch, continuing its training with your own data, or fine-tuning it for your desired tasks) will be achieved much faster with dedicated GPU-to-GPU bandwidth 16x higher than any other public cloud offering.
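For a sense of how those interconnected GPUs are used in practice, here is a minimal, hedged sketch of data-parallel training with PyTorch's NCCL backend (which rides on the NVLink and InfiniBand interconnects); the launcher (for example, torchrun), model, and data are placeholders:

```python
# Sketch: data-parallel training across the GPUs of one or more ND A100 v4 VMs.
# Assumes PyTorch with CUDA, launched by a tool such as torchrun so that the
# RANK/LOCAL_RANK/WORLD_SIZE environment variables are set; model and data are toys.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # NCCL uses NVLink / InfiniBand
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = torch.nn.Linear(1024, 10).to(device)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):
        x = torch.randn(64, 1024, device=device)                 # placeholder batch
        y = torch.randint(0, 10, (64,), device=device)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()    # gradients are all-reduced over the GPU interconnect
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same script scales from the eight GPUs in a single VM to a multi-VM cluster simply by changing how the launcher distributes the processes.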

The ND A100 v4 VM series is backed by an all-new Azure-engineered, AMD Rome-powered platform with the latest hardware standards like PCIe Gen4 built into all major system components. PCIe Gen4 and NVIDIA’s third-generation NVLINK architecture, which provides the fastest GPU-to-GPU interconnection within each VM, keep data moving through the system more than 2x faster than before.

Most customers will see an immediate boost of 2x to 3x compute performance over the previous generation of systems based on NVIDIA V100 GPUs with no engineering work. Customers leveraging new A100 features like multi-precision Tensor Cores with sparsity acceleration and Multi-Instance GPU (MIG) can achieve a boost of up to 20x.

“Leveraging NVIDIA’s most advanced compute and networking capabilities, Azure has architected an incredible platform for AI at scale in the cloud. Through an elastic architecture that can scale from a single partition of an NVIDIA A100 GPU to thousands of A100 GPUs with NVIDIA Mellanox Infiniband interconnects, Azure customers will be able to run the world’s most demanding AI workloads.” – Ian Buck, General Manager and Vice President of Accelerated Computing at NVIDIA

The ND A100 v4 VM series leverages Azure core scalability blocks like VM Scale Sets to transparently configure clusters of any size automatically and dynamically. This will allow anyone, anywhere, to achieve AI at any scale, instantiating even an AI supercomputer on demand in minutes. You can then access VMs independently or launch and manage training jobs across the cluster using the Azure Machine Learning service.

The ND A100 v4 VM series and clusters are now in preview and will become a standard offering in the Azure portfolio, allowing anyone to unlock the potential of AI at Scale in the cloud. Please reach out to your local Microsoft account team for more information.
Source: Azure

Share big data at scale with Azure Data Share in-place sharing for Azure Data Explorer

This post was co-authored by Jie Feng, Principal Program Manager, and Sumi Venkitaraman, Senior Product Manager, Microsoft Azure.

Microsoft Azure Data Share is an open, easy, and secure way to share data at scale by enabling organizations to share data in-place or as a data snapshot. Microsoft Azure Data Explorer is a fast and highly scalable data analytics service for telemetry, time-series, and log data.

Fueled by digital transformation, modern organizations want to increasingly enable fluid data sharing to drive business decisions. Seamlessly sharing data for inter-departmental and inter-organizational collaboration can unlock tremendous competitive advantage. Maintaining control and visibility, however, remains an elusive goal. Even today, data is shared using File Transfer Protocols (FTPs), application programming interfaces (APIs), USB devices, and email attachments. These methods are simply not secure, cannot be governed, and are inefficient at best.

Azure Data Share in-place Sharing for Azure Data Explorer, now generally available, enables you to share big data easily and securely between internal departments and with external partners, vendors, or customers for near real-time collaboration.

Once data providers share data, recipients (data consumers) always have the latest data without needing any additional intervention. Additionally, data providers maintain control over the sharing and can revoke access at will. By being able to centrally manage all shared relationships, data providers gain full control of what data is shared and with whom. Operating within a fully managed environment that can scale on-demand, data providers can focus on the logic while Data Share manages the infrastructure.

Here is what our customers are saying:

“Our clients love that ability to easily, seamlessly, and securely connect to their data and then build their own custom reports and analytics. And near real-time sharing with Azure Data Explorer and Azure Data Share permits cross-organizational data collaboration without compromising data security.” —Paul Stirpe, CTO, Financial Fabric

“We’re excited by the prospect of leveraging in-place sharing with Azure Data Explorer and Azure Data Share. The ability to give stakeholders near real-time access will allow them to prioritize product development and improve customer uptime. With a focus on data privacy, we have also been able to ensure secure and easy analysis of telemetry data, with no performance impact to our core infrastructure.” —Saajan Patel, IT Product Manager, Daimler Trucks North America

How in-place data sharing works

Data providers can initiate sharing by specifying the Azure Data Explorer cluster or database they want to share, who to share it with, and the terms of use. Next, the Data Share service sends an email invitation to the data consumer, who can accept the invitation.

After the sharing relationship is established, Data Share creates a symbolic link between the provider and consumer's Azure Data Explorer cluster. This enables the data consumer to read and query the data in near real-time. Access to the data uses compute resources from the consumer's Azure Data Explorer cluster.

With Azure Data Explorer, data is cached, indexed, and distributed on the compute nodes within the cluster and persisted on Azure storage. Since the compute and storage are decoupled, multiple consuming clusters can be attached to the same source storage with different sets of caching policies without impacting the performance and security of the source cluster.
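As a hedged sketch of what the consumer side looks like once a share has been accepted, a data consumer can query the attached database from their own cluster with the azure-kusto-data client; the cluster URI, database, and table names below are hypothetical:

```python
# Sketch: query a database that was shared in-place into the consumer's
# Azure Data Explorer cluster. Assumes the azure-kusto-data package and an
# Azure CLI login; cluster URI, database, and table are hypothetical.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://consumercluster.westeurope.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

query = """
Telemetry
| where Timestamp > ago(1h)
| summarize Events = count() by DeviceId
| top 10 by Events
"""
response = client.execute("SharedTelemetryDb", query)
for row in response.primary_results[0]:
    print(row["DeviceId"], row["Events"])
```

The query runs on the consumer's own compute, which is why it cannot affect the performance of the provider's cluster.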

The in-place sharing capability is a game changer for organizations looking for near real-time big data collaboration between internal departments or with external partners and customers.

Get started

To learn more and get started today using Azure Data Share in-place sharing for Azure Data Explorer see these resources:

Watch the Azure Friday video, How to share data in place from Azure Data Explorer.
Read the QuickStart guide, Create Azure Data Explorer cluster and database.
Read Use Azure Data Share to share data with Azure Data Explorer.
See the Financial Fabric case study.
Read the Share IoT and Log data in real-time using Azure Data Share and Azure Data Explorer blog.

Source: Azure

Advancing the outage experience—automation, communication, and transparency

“Service incidents like outages are an unfortunate inevitability of the technology industry. Of course, we are constantly improving the reliability of the Microsoft Azure cloud platform. We meet and exceed our Service Level Agreements (SLAs) for the vast majority of customers and continue to invest in evolving tools and training that make it easy for you to design and operate mission-critical systems with confidence.

In spite of these efforts, we acknowledge the unfortunate reality that—given the scale of our operations and the pace of change—we will never be able to avoid outages entirely. During these times we endeavor to be as open and transparent as possible to ensure that all impacted customers and partners understand what’s happening. As part of our Advancing Reliability blog series, I asked Sami Kubba, Principal Program Manager overseeing our outage communications process, to outline the investments we’re making to continue improving this experience.”—Mark Russinovich, CTO, Azure

 

In the cloud industry, we have a commitment to bringing our customers the latest technology at scale, keeping customers and our platform secure, and ensuring that our customer experience is always optimal. For this to happen, Azure is subject to a significant amount of change—and in rare circumstances, it is this change that can bring about unintended impact for our customers. As previously mentioned in this series of blog posts, we take change very seriously and ensure that we have a systematic and phased approach to implementing changes as carefully as possible.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors can align to cause service incidents—also known as outages. The reality of our industry is that impact caused by change is an intrinsic problem. When we think about outage communications we tend not to think of our competition as being other cloud providers, but rather the on-premises environment. On-premises change windows are controlled by administrators. They choose the best time to invoke any change, manage and monitor the risks, and roll it back if failures are observed.

Similarly, when an outage occurs in an on-premises environment, customers and users feel that they are more ‘in the know.’ Leadership is promptly made fully aware of the outage, they get access to support for troubleshooting, and expect that their team or partner company would be in a position to provide a full Post Incident Report (PIR)—previously called Root Cause Analysis (RCA)—once the issue is understood. Although our data analysis supports the hypothesis that time to mitigate an incident is faster in the cloud than on-premises, cloud outages can feel more stressful for customers when it comes to understanding the issue and what they can do about it.

Introducing our communications principles

During cloud outages, some customers have historically reported feeling as though they’re not promptly informed, or that they miss necessary updates and therefore lack a full understanding of what happened and what is being done to prevent future issues occurring. Based on these perceptions, we now operate by five pillars that guide our communications strategy—all of which have influenced our Azure Service Health experience in the Azure portal and include:

Speed
Granularity
Discoverability
Parity
Transparency

Speed

We must notify impacted customers as quickly as possible. This is our key objective around outage communications. Our goal is to notify all impacted Azure subscriptions within 15 minutes of an outage. We know that we can’t achieve this with human beings alone. By the time an engineer is engaged to investigate a monitoring alert to confirm impact (let alone engaging the right engineers to mitigate it, in what can be a complicated array of interconnectivities including third-party dependencies) too much time has passed. Any delay in communications leaves customers asking, “Is it me or is it Azure?” Customers can then spend needless time troubleshooting their own environments. Conversely, if we decide to err on the side of caution and communicate every time we suspect any potential customer impact, our customers could receive too many false positives. More importantly, if they are having an issue with their own environment, they could easily attribute these unrelated issues to a false alarm being sent by the platform. It is critical that we make investments that enable our communications to be both fast and accurate.

Last month, we outlined our continued investment in advancing Azure service quality with artificial intelligence: AIOps. This includes working towards improving automatic detection, engagement, and mitigation of cloud outages. Elements of this broader AIOps program are already being used in production to notify customers of outages that may be impacting their resources. These automatic notifications represented more than half of our outage communications in the last quarter. For many Azure services, automatic notifications are being sent in less than 10 minutes to impacted customers via Service Health—to be accessed in the Azure portal, or to trigger Service Health alerts that have been configured, more on this below.

With our investment in this area already improving the customer experience, we will continue to expand the scenarios in which we can notify customers in less than 15 minutes from the impact start time, all without the need for humans to confirm customer impact. We are also in the early stages of expanding our use of AI-based operations to identify related impacted services automatically and, upon mitigation, send resolution communications (for supported scenarios) as quickly as possible.

Granularity

We understand that when an outage causes impact, customers need to understand exactly which of their resources are impacted. One of the key building blocks for getting the health of specific resources is the Resource Health signal. The Resource Health signal checks whether a resource, such as a virtual machine (VM), SQL database, or storage account, is in a healthy state. Customers can also create Resource Health alerts, which leverage Azure Monitor, to let the right people know if a particular resource is having issues, regardless of whether it is a platform-wide issue or not. This is important to note: a Resource Health alert can be triggered by a resource becoming unhealthy (for example, if the VM is rebooted from within the guest), which is not necessarily related to a platform event, like an outage. Customers can see the associated Resource Health checks, arranged by resource type.
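For readers who want to pull that signal programmatically, here is a hedged sketch that calls the Resource Health availabilityStatuses endpoint on Azure Resource Manager; the resource ID and the exact api-version are assumptions to adjust for your environment:

```python
# Sketch: read the current Resource Health availability status of one resource
# via the ARM REST API. Assumes the azure-identity and requests packages; the
# resource ID and api-version below are assumptions, not verified values.
import requests
from azure.identity import DefaultAzureCredential

resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

url = (f"https://management.azure.com{resource_id}"
       "/providers/Microsoft.ResourceHealth/availabilityStatuses/current")
resp = requests.get(url,
                    params={"api-version": "2020-05-01"},
                    headers={"Authorization": f"Bearer {token.token}"})
resp.raise_for_status()
status = resp.json()["properties"]
print(status.get("availabilityState"), "-", status.get("summary"))
```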

We are building on this technology to augment and correlate each customer resource(s) that has moved into an unhealthy state with platform outages, all within Service Health. We are also investigating how we can include the impacted resources in our communication payloads, so that customers won’t necessarily need to sign in to Service Health to understand the impacted resources—of course, everyone should be able to consume this programmatically.

All of this will allow customers with large numbers of resources to know more precisely which of their services are impacted due to an outage, without having to conduct an investigation on their side. More importantly, customers can build alerts and trigger responses to these resource health alerts using native integrations to Logic Apps and Azure Functions.

Discoverability

Although we support both ‘push’ and ‘pull’ approaches for outage communications, we encourage customers to configure relevant alerts, so the right information is automatically pushed out to the right people and systems. Our customers and partners should not have to go searching to see if the resources they care about are impacted by an outage—they should be able to consume the notifications we send (in the medium of their choice) and react to them as appropriate. Despite this, we constantly find that customers visit the Azure Status page to determine the health of services on Azure.

Before the introduction of the authenticated in-portal Service Health experience, the Status page was the only way to discover known platform issues. These days, this public Status page is only used to communicate widespread outages (for example, impacting multiple regions and/or multiple services), so customers looking for potential issues impacting them don’t see the full story here. Since we roll out platform changes as safely as possible, the vast majority of issues like outages only impact a very small ‘blast radius’ of customer subscriptions. For these incidents, which make up more than 95 percent of our incidents, we communicate directly to impacted customers in-portal via Service Health.

We also recently integrated the ‘Emerging Issues’ feature into Service Health. This means that if we have an incident on the public Status page, and we have yet to identify and communicate to impacted customers, users can see this same information in-portal through Service Health, thereby receiving all relevant information without having to visit the Status page. We are encouraging all Azure users to make Service Health their ‘one stop shop’ for information related to service incidents, so they can see issues impacting them, understand which of their subscriptions and resources are impacted, and avoid the risk of making a false correlation, such as when an incident is posted on the Status page, but is not impacting them.

Most importantly, since we’re talking about the discoverability principle, from within Service Health customers can create Service Health alerts, which are push notifications leveraging the integration with Azure Monitor. This way, customers and partners can configure relevant notifications based on who needs to receive them and how they would best be notified—including by email, SMS, LogicApp, and/or through a webhook that can be integrated into service management tools like ServiceNow, PagerDuty, or Ops Genie.

To get started with simple alerts, consider routing all notifications to a single email distribution list. To take it to the next level, consider configuring different Service Health alerts for different use cases—maybe all production issues notify ServiceNow, dev, test, or pre-production issues just email the relevant developer team, and any issue with a certain subscription also sends a text message to key people. All of this is completely customizable, to ensure that the right people are notified in the right way.
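To sketch what the webhook path can look like (hedged: the field names assume the Azure Monitor common alert schema, and the routing rules and notification targets are invented for illustration), a small receiver might be:

```python
# Sketch: a webhook receiver that routes Service Health alert notifications.
# Assumes alerts are delivered using the Azure Monitor common alert schema;
# the ServiceNow/email targets are stubs and the routing rule is illustrative.
from flask import Flask, request

app = Flask(__name__)

@app.route("/servicehealth", methods=["POST"])
def service_health_alert():
    payload = request.get_json(force=True)
    essentials = payload.get("data", {}).get("essentials", {})

    # Only act on Service Health events; other Azure Monitor alerts are ignored here.
    if essentials.get("monitoringService") != "ServiceHealth":
        return ("ignored", 200)

    rule = essentials.get("alertRule", "unknown rule")
    severity = essentials.get("severity", "unknown")
    targets = essentials.get("alertTargetIDs", [])

    if any("/subscriptions/<prod-sub-id>" in t for t in targets):
        notify_servicenow(rule, severity)                      # production: open an incident
    else:
        notify_email("dev-team@example.com", rule, severity)   # dev/test: email only
    return ("ok", 200)

def notify_servicenow(rule, severity):   # stub for a service management integration
    print(f"ServiceNow incident: {rule} (severity {severity})")

def notify_email(to, rule, severity):    # stub for an email notification
    print(f"Email {to}: {rule} (severity {severity})")

if __name__ == "__main__":
    app.run(port=8080)
```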

Parity

All Azure users should know that Service Health is the one place to go for all service-impacting events. First, we ensure that this experience is consistent across all our different Azure services, each using Service Health to communicate any issues. As simple as this sounds, we are still navigating through some unique scenarios that make this complex. For example, most people using Azure DevOps don’t interact with the Azure portal. Since DevOps does not have its own authenticated Service Health experience, we can’t communicate updates directly to impacted customers for small DevOps outages that don’t justify going to the public Status page. To support scenarios like this, we have stood up the Azure DevOps status page where smaller-scale DevOps outages can be communicated directly to the DevOps community.

Second, the Service Health experience is designed to communicate all impacting events across Azure—this includes maintenance events as well as service or feature retirements, and includes both widespread outages and isolated hiccups that only impact a single subscription. It is imperative that for any impact (whether it is potential, actual or upcoming) customers can expect the same experience and put in place a predictable action plan across all of their services on Azure.

Lastly, we are working towards expanding the philosophy of this pillar to other Microsoft cloud products. We acknowledge that, at times, navigating through our different cloud products such as Azure, Microsoft 365, and Power Platform can feel like navigating technologies from three different companies. As we look to the future, we are invested in harmonizing across these products to bring about a more consistent, best-in-class experience.

Transparency

As we have mentioned many times in the Advancing Reliability blog series, we know that trust is earned and needs to be maintained. When it comes to outages, we know that being transparent about what is happening, what we know, and what we don’t know is critically important. The cloud shouldn’t feel like a black box. During service issues, we provide regular communications to all impacted customers and partners. Often, in the early stages of investigating an issue, these updates might not seem detailed until we learn more about what’s happening. Even though we are committed to sharing tangible updates, we generally try to avoid sharing speculation, since we know customers make business decisions based on these updates during outages.

In addition, an outage is not over once customer impact is mitigated. We could still be learning about the complexities of what led to the issue, so sometimes the message sent at or after mitigation is a fairly rudimentary summation of what happened. For major incidents, we follow this up with a PIR generally within three days, once the contributing factors are better understood.

For incidents that may have impacted fewer subscriptions, our customers and partners can request more information from within Service Health by requesting a PIR for the incident. We have heard feedback in the past that PIRs should be even more transparent, so we continue to encourage our incident managers and communications managers to provide as much detail as possible—including information about the issue impact and our next steps to mitigate future risk, ideally ensuring that this class of issue is less likely and/or less impactful moving forward.

While our industry will never be completely immune to service outages, we do take every opportunity to look at what happened from a holistic perspective and share our learnings. One future area of investment we are looking at closely is how best to keep customers updated on the progress we are making against the commitments outlined in our PIR next steps. By linking our internal repair items to our external commitments in our next steps, customers and partners will be able to track the progress that our engineering teams are making to ensure that corrective actions are completed.

Our communications across all of these scenarios (outages, maintenance, service retirements, and health advisories) will continue to evolve, as we learn more and continue investing in programs that support these five pillars.

Reliability is a shared responsibility

While Microsoft is responsible for the reliability of the Azure platform itself, our customers and partners are responsible for the reliability of their cloud applications—including using architectural best practices based on the requirements of each workload. Building a reliable application in the cloud is different from traditional application development. Historically, customers may have purchased redundant, higher-end hardware to minimize the chance of an entire application platform failing. In the cloud, we acknowledge up front that failures will happen. As outlined several times above, we will never be able to prevent all outages. So, in addition to Microsoft working to prevent failures, your goal when building reliable applications in the cloud should be to minimize the effects of any single failing component.

To that end, we recently launched the Microsoft Azure Well-Architected Framework—a set of guiding tenets that can be used to improve the quality of a workload. Reliability is one of the five pillars of architectural excellence alongside Cost Optimization, Operational Excellence, Performance Efficiency, and Security. If you already have a workload running in Azure and would like to assess your alignment to best practices in one or more of these areas, try the Microsoft Azure Well-Architected Review.

Specifically, the Reliability pillar describes six steps for building a reliable Azure application. Define availability and recovery requirements based on decomposed workloads and business needs. Use architectural best practices to identify possible failure points in your proposed/existing architecture and determine how the application will respond to failure. Test with simulations and forced failovers to validate both detection of and recovery from various failures. Deploy the application consistently using reliable and repeatable processes. Monitor application health to detect failures, monitor indicators of potential failures, and gauge the health of your applications. Finally, respond to failures and disasters by determining how best to address them based on established strategies.

Returning to our core topic of outage communications, we are working to incorporate relevant Well-Architected guidance into our PIRs in the aftermath of each service incident. Customers running critical workloads will be able to learn about specific steps to improve reliability that would have helped to avoid and lessen impact from that particular outage. For example, if an outage only impacted resources within a single Availability Zone, we will call this out as part of the PIRs and encourage impacted customers to consider zonal redundancies for their critical workloads.

Going forward

We outlined how Azure approaches communications during and after service incidents like outages. We want to be transparent about our five communication pillars, to explain both our progress to date and the areas in which we’re continuing to invest. Just as our engineering teams endeavor to learn from each incident to improve the reliability of the platform, our communications teams endeavor to learn from each incident to be more transparent, to get customers and partners the right details to make informed decisions, and to support customers and partners as best as possible during each of these difficult situations.

We are confident that we are making the right investments to continue improving in this space, but we are increasingly looking for feedback on whether our communications are hitting the mark. We include an Azure post-incident survey at the end of each PIR we publish. We review every response to learn from our customers and partners, to validate whether we are focusing on the right areas, and to keep improving the experience.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors align to cause outages. Since trust is earned and needs to be maintained, we are committed to being as transparent as possible—especially during these infrequent but inevitable service issues.
Source: Azure

Build resilient applications with Kubernetes on Azure

Welcome to KubeCon EU 2020, the virtual edition. While we won’t be able to see each other in person at KubeCon EU this year, we're excited that this new virtual format of KubeCon will make the conference more accessible than ever, with more people from the amazing Kubernetes community able to join and participate from around the world without leaving their homes.

With everything that has been happening, the last year has been an up-and-down experience, but through it all I’m incredibly proud of the focus and dedication of the Azure Kubernetes team. They have continued to iterate on and improve Kubernetes on Azure, which provides an enterprise-grade experience for our customers.

Kubernetes on Azure (and indeed anywhere) delivers an open and portable ecosystem for cloud-native development. In addition to this core promise, we also deliver a unique enterprise-grade experience that ensures the reliability and security your workloads demand, while also enabling the agility and efficiency that businesses today desire. You can securely deploy any workload to Azure Kubernetes Service (AKS) to drive cost savings at scale across your business. Today, we're going to tell you about even more capabilities that can help you along on your cloud-native journey to Kubernetes on Azure.

Improving latency and operational efficiency

One of the key drivers of cloud adoption is reducing latency. It used to be that it took days to get physical computers and set them up in a cluster. Today, you can deploy a Kubernetes cluster on Azure in less than five minutes. These improvements benefit the agility of our customers. For customers who want to scale and provision faster, we are announcing a preview of ephemeral OS disk support which makes responding to new compute demands on your cluster even faster.

Latency isn’t just about the length of time to create a cluster. It’s also about how fast you can detect and respond to operational problems. To help enterprises improve their operational efficiency, we’re announcing preview integration with Azure Resource Health which can alert you if your cluster is unhealthy for any reason. We’re also announcing the general availability of node image updates which allow you to upgrade the underlying operating system to respond to bugs or vulnerabilities in your cluster while staying on the same Kubernetes version for stability.

Finally, though Kubernetes has always enabled enterprises to drive cost savings through containerization, the new economic realities of the world during a pandemic mean that achieving cost efficiency for your business is more important than ever. We’ve got a great exercise that can help you learn how to optimize your costs using containers and the Azure Kubernetes Service.

Secure by design with Kubernetes on Azure

One of the key pillars of any enterprise computing platform is security. With market-leading features like policy integration and Azure Active Directory identity for pods, cloud-native security has always been an important part of the Azure Kubernetes Service. I’m excited about some new features we’ve added recently to further enhance the security of your workloads running on Kubernetes.

Though Kubernetes has built-in support for secrets, most enterprise environments require a more secure and more compliant implementation. In the Azure Kubernetes Service, being enterprise-grade means providing integration between Azure Key Vault and the Azure Kubernetes Service. Using Key Vault with Kubernetes enables you to securely store your credentials, certificates, and other secrets in a state-of-the-art, compliant secret store, and easily use them with your applications in an Azure Kubernetes cluster.
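As a small, hedged sketch of the application side (independent of which integration mechanism mounts or brokers the secret), a workload can also read a secret directly with the Azure SDK, assuming it runs with an identity that has been granted access to the vault; the vault URL and secret name are placeholders:

```python
# Sketch: read a secret from Azure Key Vault inside a containerized workload.
# Assumes azure-identity and azure-keyvault-secrets are installed and that the
# pod runs with an Azure AD identity granted "get" permission on secrets.
# The vault URL and secret name are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://contoso-app-kv.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

db_password = client.get_secret("db-password").value
print("Fetched secret of length", len(db_password))  # never log the secret value itself
```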

It’s even more exciting that this integration is built on the back of an open Container Storage Interface (CSI) driver that the Azure team built and open sourced for the entire Kubernetes community. Giving back to open source is an important part of what it means to be a community steward, and it was exciting to see our approach get validated as it was picked up and used by the HashiCorp Vault team for their secrets integration. Our open source team has been hard at work on improving many other parts of the security ecosystem. We’ve enhanced the CSI driver for Windows, and worked on cgroups v2 and containerd. If you want to learn more about how to secure your cloud-native workloads and make sure that your enterprise is following Microsoft’s best practices, check out our guide to Kubernetes best practices. They will teach you how to integrate firewalls, policy, and more to ensure you have both security and agility in your cloud-native development.

Next steps and KubeCon EU

I hope that you have an awesome KubeCon EU. As you go through the conference and learn more about Kubernetes, you can also learn more about Kubernetes on Azure with all of the great information online and in our virtual booth. If you’re new to KubeCon and Kubernetes and wondering how you can adopt Kubernetes for workloads from hobbyist to enterprise, we’ve got a great Kubernetes adoption guide for you.
Source: Azure

How to optimize your Azure workload costs

The economic challenges posed by the global health pandemic continue to affect every organization around the world. During this difficult time, cost optimization has become an especially critical topic. Recently, we provided an overview of how to approach cost optimization on Microsoft Azure, which laid out three focus areas to help you get the most value out of your Azure investment: understanding and forecasting your costs, optimizing your workload costs, and controlling your costs.

Today, we’ll dive more deeply into the second focus area—how you can optimize your Azure workload costs—and show you how guidance in the Microsoft Azure Well-Architected Framework, tools like Azure Advisor, and offers like the Azure Hybrid Benefit and Azure Reservations can help you operate more efficiently on Azure and save.

Design workloads for cost optimization using best practices from the Azure Well-Architected Framework

The Azure Well-Architected Framework is designed to help you build and deploy cloud workloads with confidence, using actionable, easy-to-use deep technical content, assessments, and reference architectures based on proven industry best practices. You can assess workloads against the framework’s five pillars of cloud design—cost optimization, reliability, security, performance efficiency, and operational excellence—to help you focus on the right activities, optimize workloads, and proactively meet business needs.

The cost optimization section of the Azure Well-Architected Framework is all about managing costs to get the most value out of your Azure workloads and covers:

Cost management principles, a series of important considerations that can help you achieve both business objectives and cost justification.
Cost best practices for design, provisioning, monitoring, and optimization.
Trade-offs between cost and other pillars like reliability and performance.

A great way to get started with the Azure Well-Architected Framework is by taking the Azure Well-Architected Review. This review examines your workload against the best practices defined by the pillars of reliability, cost optimization, operational excellence, security, and performance efficiency. You can choose to take the review for any or all of the pillars, so you can start by focusing on cost optimization, if you prefer.

Optimize your Azure resources with best practice recommendations from Azure Advisor

Your workloads are composed of resources, so configuring your resources according to the latest Azure best practices is critical to ensuring your workloads are cost optimized. Azure Advisor is a free service that helps you optimize your already-deployed Azure resources for cost, security, performance, reliability, and operational excellence. Advisor is aligned with the Azure Well-Architected Framework, but is targeted at the resource level instead of the workload level. Advisor’s recommendations are personalized to your Azure environment based on your resource telemetry and configurations.

Examples of Advisor cost recommendations include rightsizing underutilized resources or shutting down unused ones, buying reserved instances to save over pay-as-you-go costs, and using storage lifecycle management. Our full list of Advisor cost recommendations is available.

Advisor offers several features to make it faster and easier to optimize your resources. Quick Fix enables one-click bulk remediation of recommendations, so you can multi-select resources you’d like to remediate. Click Quick Fix, and Advisor takes care of the rest. You can configure Advisor to display only the recommendations that mean the most to you, such as those for your production subscriptions and resource groups. Advisor alerts notify you when you have new recommendations, and Advisor recommendation digests remind you about available recommendations you haven’t remediated yet.
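For teams that prefer to review recommendations programmatically, here is a hedged sketch using the Advisor management SDK (assuming the azure-mgmt-advisor and azure-identity packages; the OData filter expression and property names are assumptions to verify against the current SDK documentation):

```python
# Sketch: list Azure Advisor cost recommendations for a subscription.
# Assumes the azure-mgmt-advisor and azure-identity packages; the filter
# expression and field names below are assumptions, not verified values.
from azure.identity import DefaultAzureCredential
from azure.mgmt.advisor import AdvisorManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = AdvisorManagementClient(DefaultAzureCredential(), subscription_id)

for rec in client.recommendations.list(filter="Category eq 'Cost'"):
    # Print the impact level, the affected resource, and the problem statement.
    print(f"{rec.impact:>8}  {rec.impacted_value}: {rec.short_description.problem}")
```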

Visit the Advisor documentation to learn more and get started remediating your cost recommendations.

Save big on Azure by leveraging your existing on-premises licensing investment with the Azure Hybrid Benefit

The Azure Hybrid Benefit is a licensing benefit that lets you bring your Windows Server and SQL Server on-premises licenses with Software Assurance or subscriptions to Azure and save up to 85 percent compared to standard pay-as-you-go rates,1 so you only pay for the compute costs on Azure. You can apply these savings across Azure SQL and Azure Dedicated Host.

License mobility benefits offered by Azure include the ability to bring your Windows Server and SQL Server licenses to the cloud, leverage SQL Server licensing in Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) environments, and gain additional licensing benefits, including 180-day dual use rights of your licenses, both on-premises and in Azure. For your heavily-virtualized workloads, you get four vCPUs of Azure SQL Database or Azure SQL Managed Instance for each core of SQL Server Enterprise.

These unique advantages, plus additional benefits such as free failover server licensing for SQL Server disaster recovery and free extended security updates, make Azure the best-in-class cloud for Windows Server and SQL Server.

Check out the Azure Hybrid Benefit Documentation for more technical tutorials and resources.

Reserve upfront and pay less with Azure Reservations

Receive a discount on your Azure services by purchasing Azure Reservations, a one-year or three-year commitment to specific Azure services. Giving us visibility into your one-year or three-year resource needs in advance allows us to be more efficient, and in return we pass these savings on to you as discounts of up to 72 percent.2 When you buy a reservation, you immediately receive a discount and are no longer charged at pay-as-you-go rates. This offer is ideal for Azure services that use significant capacity or run for long periods of time in a consistent way.

Reservation discounts apply to the following eligible subscriptions and offer types:

Enterprise agreements (offer numbers: MS-AZR-0017P or MS-AZR-0148P).
Microsoft Customer Agreement subscriptions.
Individual plans with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P).
Cloud Solution Provider (CSP) subscriptions.

Learn more

Learn more about how to lower your upfront cash outflow with various monthly payment options at no additional cost in the reservations documentation.

You could achieve the lowest cost of ownership when you combine the Azure Hybrid Benefit, reservation pricing, and extended security updates. Cost optimize your Azure workloads today with these Azure cost saving options.

Check back soon for more cloud cost optimization guidance or visit our Azure cost optimization page to learn more.

1The 85 percent savings is based on 8-Core D13 v2 VM in East US 2 region. Savings are calculated from full price (license included) for SQL Server Enterprise edition VM against reduced rate (applying Azure Hybrid Benefit for SQL Server and Windows Server), which excludes Software Assurance costs for SQL Server and Windows Server, which may vary based on EA agreement or any applicable volume licensing agreement. Actual savings may vary based on region, instance size and compute family. Prices as of June 5, 2018 are subject to change.

2The 72 percent saving is based on one M32ts Azure VM for Windows OS in US Gov Virginia region running for 36 months at a pay-as-you-go rate of ~$3,660.81/month; reduced rate for a 3-year Reserved Instance of ~$663.45/month. Azure pricing as of October 30, 2018 (prices subject to change). Actual savings may vary based on location, instance type, or usage.
Source: Azure

Prioritize datacenter discovery and readiness assessments to accelerate cloud migration

Cloud migrations are an effective way to drive operational efficiencies and to shift capital expenses to operational ones. Successful cloud migrations are rooted in a bias towards action and are executed with urgency against triggers that need immediate attention. In our experience, migration projects that start with a deep understanding of the IT landscape are best positioned to mitigate any complexities, and leaders who set actionable project goals and timelines, bring together teams and encourage solution thinking, and lean in to track progress towards well-defined objectives are the most effective in helping their organizations realize cloud migration targets.

In the kick-off blog of this series, we listed prioritizing assessments as one of our top three recommendations to accelerate your cloud migration journey. Comprehensive cloud migration assessments should cover the entire fleet and help you arrive at key decisions related to candidate apps, optimum resource allocation, and cost projections. You’ll want to understand your applications, their on-premises performance, uncover dependencies and interrelated systems, and estimate cloud readiness and run-cost. This analysis is critical to fully recognize what you are working with and proactively understand how to best manage these resources in the cloud. Further, in our experience with customers, inadequately planned migrations—especially those that don’t focus on optimizing infrastructure resources and cost levers such as compute, storage, licensing, and benefits including Azure Hybrid Benefit and Software Assurance—often result in long-term sticker shock.

Prioritizing assessments is also important to keep your IT and financial organizations aligned around how to transform your business with Azure while keeping the cost structure lean to weather changing market conditions. We shared our guidance in this cloud migration blog to help you understand the financial considerations for cloud migrations and best practice guidance for managing cloud costs.

Comprehensive discovery with Azure Migrate and Movere

The discovery process can be slow and daunting, especially for enterprises that host hundreds of applications and resources across multiple datacenters. Arriving at an accurate baseline of your IT infrastructure is tedious and often requires you to connect disparate sets of information across various tools, sub-systems, and business teams. Leverage Azure Migrate or Movere to automate this process and quickly perform discovery and assessments of your on-premises infrastructure, databases, and applications. Movere is available via the United States and Worldwide Solutions Assessment program. Azure Migrate is available with your Azure subscription at no additional cost.

Azure Migrate discovery and assessment capabilities are agentless and offer the following key features:

Comprehensive, at-scale discovery features for Linux and Windows servers, running on hypervisor platforms such as VMware vSphere or Microsoft Hyper-V, public clouds such as AWS or GCP, or bare metal servers.
Discovery of infrastructure configuration and actual resource utilization in terms of cores, memory, disks, IOPS, and more so that you can right-size and optimize your infrastructure based on what you actually need to meet the desired application performance in Azure. Discovery of IOPS characteristics over a period of time results in an accurate prediction of resources that your applications need in the cloud.
Azure assessment reports that help you understand the various offers, SKUs, and associated costs of running your applications in Azure. You can customize for different scenarios and compare results to make decisions related to target regions, EA pricing, reserved instances, SKU consolidation, and more.
Features that help you inventory your applications and software components installed on your servers – this capability is crucial in understanding your application vendor estate and evaluating compatibility, end-of-support, and more.
Agentless dependency mapping so that you can visualize dependencies across different tiers of an application or across applications – this feature helps you design high-confidence migration waves and to mitigate any complexities upfront.

CMDBs, ITAM, and management tools enrich discovery data

Discovery and assessment results are important, but intersecting them with your existing on-premises data sources unlocks powerful insights, driving better decision-making. These are data sources that are great to get started with – your Configuration Management Database (CMDB), IT asset management systems (ITAM), Active Directory, management tools, and monitoring systems. Merging your rich IT data repositories with discovery and assessment reports broadens understanding across different dimensions and renders a more complete and accurate view of your business units, IT assets, and business applications.

Use Azure cost estimations from the assessment output and allocate the projections to the various business teams so they can better anticipate their future budgetary requirements. Compare the Azure cost against current spend to estimate the potential cloud savings your teams can accrue by moving to Azure.
Identify machines that have reached their OS end-of-support and reference your CMDB to identify associated application owners and teams to prioritize migrations to Azure.
Filter for machines with high CPU and memory utilization, and correlate with performance events in your monitoring systems to identify applications with capacity constraints. These applications are ideal candidates to benefit from Azure’s autoscaling and virtual machine scale set capabilities.
Identify related systems using the Azure Migrate dependency mapping feature, and map associated owners from your CMDBs and Azure AD to identify move group owners.
Identify servers with zero to low usage and work with owning business units on decommissioning options.
Understand the recommended migration window by mapping RTO/RPO information from your private data sources.
Understand your storage IOPS and projected application growth to select the appropriate Azure storage and disk SKUs.

These are just a few samples of the many insights that can be surfaced by unifying discovery and assessment results with IT data sources.

Data-driven progress tracking

CIOs and leaders who are on point for driving cloud migration initiatives should periodically track progress, identify and communicate migration priorities, and bring together stakeholders to ensure that teams on the ground are making progress. Dashboards that track project progress and the quality of the insights and actions being generated are effective tools for staying focused.

Some important dimensions that dashboards should include are datacenter cost trends, fleet size in terms of physical hosts, count of virtual servers, provisioned storage, OS distribution, VM density by host, and resource utilization in terms of cores, memory, and storage. Additionally, views should help quickly identify important cloud migration triggers such as hardware that is coming up for refresh, OS versions that are reaching end-of-support, and business units that are constrained by capacity.

Here is a sample Power BI dashboard that an Azure customer is using to track the progress of their cloud assessment and migration project:


Next steps

Investigate the Microsoft Cloud Adoption Framework for Azure to align your cloud migration priorities and objectives before you start planning and ensure a more successful migration.
Make sure you start your journey right by understanding how to build your migration plan with Azure Migrate and reviewing the best practices for creating assessments.
For expert assistance from Microsoft or our qualified partners, check out our Cloud Solution Assessment offerings or join the Azure Migration Program (AMP).
To learn more and to get started, visit the Azure Migration Center.

Coming up next, we’ll explore a big topic that’s key to succeeding in your migrations: anticipating and mitigating complexities. We’ll talk about the organizational challenges and decisions you’ll need to make as you start planning and executing your cloud migrations.

Share your feedback

Please share your experiences or thoughts as this series comes together in the comments below—we appreciate your feedback.
Source: Azure

New Azure SQL Learning Tools help reduce the global technology skills gap

Microsoft’s learning solutions pave the way toward data-centric jobs of the future

"It’s been forecasted 800 million people need to learn new skills for their jobs by 2030. In this time of change, people are hungry to learn, gain new skills, and grow their economic opportunity.”—Satya Nadella, CEO, Microsoft

Across Microsoft, we are helping a new generation of technology workers develop the right level of skills. Recently, Microsoft announced the availability of new virtual learning programs. These programs, focused on technical topics, are already helping people enhance their digital expertise and, for some, are providing a foundation for success in a new career path.

Building upon this goal, we're excited to announce the Azure Data team’s latest additions to these educational programs.

Our all-new content will help both beginners who are new to Azure and seasoned SQL experts understand the benefits of Azure SQL. Since SQL Server and Azure SQL share the same engine, this new set of tools builds upon familiar content. This means SQL Server professionals can become Azure SQL professionals with just a little bit of help, such as:

Microsoft Learn learning path: This six-course Azure SQL fundamentals learning path provides a built-in lab environment for you to learn at your own pace without a subscription.

YouTube/Channel9 series: We offer more than 60 videos to help beginners learn more about Azure SQL. Viewers can experience on-demand training through Microsoft Developer and Azure SQL playlists on YouTube and Channel9.

GitHub content: Learners and educators can dig into open-source code in a scenario-driven GitHub workshop, where forking and redelivering is encouraged. You can access this content by visiting the SQL Server workshops page and selecting “Workshop: Azure SQL”.

Learn Live in the Azure SQL Bootcamp: In this four-day series of live sessions, Microsoft SQL experts Anna Hoffman and Bob Ward will help you get ramped up and support you as you learn. You can sign up for Azure SQL Bootcamp here to join us.

Azure SQL’s rapid adoption creates new opportunities

Azure SQL adoption is growing at a dramatic rate and will continue on this trajectory for the foreseeable future. Azure SQL unlocks new opportunities for our customers to optimize costs, build resiliency, and promote agility with AI-based features, rapid scaling capability, and much more.

A few weeks ago, a Morgan Stanley report noted, “The key insight of [the] 2nd edition of our New Stack monthly is that the relational database, commonly viewed as outdated for the digital era, is not only not dead but is seeing a resurgence reflecting strong growth in cloud. MSFT is a key beneficiary with the top share in cloud and overall".

It’s a terrific time to join the Azure SQL community and fine-tune your technical skills. If you have questions about the benefits, opportunities, or process of making a move from SQL on-premises to SQL in the cloud, we can lend a hand to guide you. Our new learning materials answer these questions and go into greater technical depth. On Twitter, you can follow us @AzureSQL and get more involved in the community with the #AzureSQL hashtag.

Kudos due to our SQL community 

I’d like to take a moment to acknowledge a few members of our team who gathered feedback from customers, took that information to heart, and developed our new curriculum. First, a heartfelt thank you to Anna Hoffman, Data Scientist, for your dedicated efforts to provide customers with the latest content and for enabling more scalable platforms to deliver it. I’d also like to thank Bob Ward, one of our SQL visionaries, who has invested over 26 years driving SQL development. Last but not least, I’m grateful to Buck Woody, who has written hundreds of articles about databases to help educate future data experts.

Jumpstart your journey

In these uniquely challenging times, it is more important than ever for Microsoft to equip our SQL community with new tools and resources to help you succeed. Whether you are a SQL expert, or someone just starting, I encourage you to visit our latest resources and find out how you can jumpstart your journey to learn about Azure SQL.
Source: Azure

Announcing preview of Java Message Service 2.0 over AMQP on Azure Service Bus

Azure Service Bus simplifies enterprise messaging scenarios by leveraging familiar queue and topic subscription semantics over the industry-driven AMQP protocol. It offers customers a fully managed platform as a service (PaaS) offering with deep integrations with Azure services, providing a messaging broker with high throughput and predictable latency while ensuring high availability, secure design, and scalability as a first-class experience. We aim to offer Azure Service Bus for customer workloads on most application stacks and ecosystems.

In keeping with that vision, we’re excited to announce preview support for Java Message Service (JMS) 2.0 over AMQP in the Azure Service Bus Premium tier. With this, we empower customers to seamlessly lift and shift their Java and Spring workloads to Azure while also helping them modernize their application stack with best-in-class enterprise messaging in the cloud.

As enterprise customers look to lift and shift their workloads to Azure, they may take the opportunity to modernize their application stack by leveraging cloud-native Azure offerings. This is especially true for data plane components that store or move data, which benefit from moving away from an infrastructure as a service (IaaS) hosted setup to a more cloud-native PaaS setup.

With databases and data stores, the establishment of standardized APIs and protocols has paved the way for seamless migration: the application is agnostic of the actual provider or implementation of the standardized API, so with negligible or configuration-only code changes, applications can move from their current on-premises provider to Azure’s fully managed PaaS offering and behave as expected.

Compared to the data ecosystem, the enterprise messaging ecosystem was largely fragmented until the AMQP 1.0 protocol was standardized in 2011, which drove consistent behavior across enterprise message brokers, guaranteed by the protocol implementation. However, this still did not lead to a standardized API contract, perpetuating the fragmentation in the enterprise messaging space.

The Java Enterprise community (and by extension, Spring) has made forward strides with the Java Message Service (JMS 1.1 and 2.0) specification to standardize the API used by producer and consumer applications when interacting with an enterprise messaging broker. The Apache Qpid community furthered this with its implementation of the JMS API specification over AMQP. Qpid JMS, whether standalone or as part of the Spring JMS package, is the de facto JMS implementation for most enterprise customers working with a variety of message brokers.

Connect existing applications with Azure Service Bus over AMQP

With the feature set supported in this preview (and full parity planned by general availability), Azure Service Bus supports the Java Message Service API contracts, enabling customers to bring their existing applications to Azure without rewriting the application. Here is a list of JMS features that are supported today:

Queues.
Topics.
Temporary queues.
Temporary topics.
Subscriptions, including:

Shared durable subscriptions.
Shared non-durable subscriptions.
Unshared durable subscriptions.
Unshared non-durable subscriptions.

QueueBrowser.
TopicBrowser.
Auto-creation of all the above entities (if they don’t already exist).
Message selectors.
Sending messages with delivery delay (scheduled messages) – see the sketch after this list.
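For instance, two of the features above, message selectors and delivery delay, map directly onto the standard JMS 2.0 simplified API. The following is a minimal sketch assuming a ConnectionFactory already configured for your Service Bus namespace (see the connection sketch in the next section); the queue name, property name, and delay value are illustrative placeholders.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import javax.jms.Message;
import javax.jms.Queue;

public class JmsFeatureSketch {

    // Illustrative only: demonstrates message selectors and delivery delay
    // (scheduled messages) using the JMS 2.0 simplified API.
    public static void sendAndFilter(ConnectionFactory factory) {
        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("orders"); // placeholder queue name
            JMSProducer producer = context.createProducer();

            // Message selectors: tag the message with a custom property...
            producer.setProperty("region", "emea")
                    .send(queue, "order payload");

            // ...and create a consumer that only sees messages where region = 'emea'.
            JMSConsumer consumer = context.createConsumer(queue, "region = 'emea'");
            Message received = consumer.receive(5_000); // wait up to 5 seconds
            System.out.println("Received: " + received);

            // Delivery delay (scheduled messages): this message is held by the
            // broker for roughly 60 seconds before becoming visible to consumers.
            context.createProducer()
                   .setDeliveryDelay(60_000)
                   .send(queue, "delayed order payload");
        }
    }
}
```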

Seamless migration from an on-premises or IaaS-hosted JMS provider to Azure Service Bus

To connect an existing JMS-based application with Azure Service Bus, simply add the Azure Service Bus JMS Maven package or the Azure Service Bus starter for Spring Boot to the application’s pom.xml and add the Azure Service Bus connection string to the configuration parameters.
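The exact Maven coordinates and configuration keys are described in the package documentation. To illustrate how little application code is involved, here is a minimal sketch that connects to Service Bus over AMQP using the Apache Qpid JMS client directly; the namespace, SAS key name, and key are placeholders taken from your Service Bus connection string, and the queue name is illustrative.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.qpid.jms.JmsConnectionFactory;

public class ServiceBusJmsQuickstart {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute values from your Service Bus connection string.
        String remoteUri = "amqps://YOUR-NAMESPACE.servicebus.windows.net";
        ConnectionFactory factory =
                new JmsConnectionFactory("SAS-KEY-NAME", "SAS-KEY", remoteUri);

        // Everything below is plain JMS; swapping the connection factory is the
        // only broker-specific change when moving from an on-premises provider.
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders"); // auto-created if it doesn't exist
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("Hello, Azure Service Bus over AMQP");
            producer.send(message);
            session.close();
        }
    }
}
```

With the Azure Service Bus JMS package or the Spring Boot starter, even the connection factory wiring shown here is typically handled through configuration rather than code.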

With configuration-only code changes, as shown above, customers can keep their business logic agnostic of the message broker and avoid any vendor lock-in.

Simple pricing, painless deployments, and scalable resourcing

By leveraging Azure Service Bus JMS support, customers can now avoid the overhead of procuring licenses and managing an enterprise messaging broker on their own IaaS compute, simplify cost management with a fixed price per messaging unit, and rely on automatic scale-up and scale-down provisioning to address variability in workloads.

Integrate with other Azure offerings to further modernize your application stack

You can also leverage Azure Service Bus’s integration with other Azure offerings to modernize and simplify the application stack. Here are some ways you can do that.

Azure Logic Apps: Utilize Azure Logic Apps connectors for Azure Service Bus to replace various critical business workflows with a simple low-code pay-as-you-go Serverless offering.
Azure Functions: Utilize Azure Functions triggers for Azure Service Bus to replace custom applications with a simple pay-as-you-go serverless PaaS offering (a minimal trigger sketch follows this list).
Azure Monitor and Alerts: Utilize Azure Monitor and alerts to keep an eye on namespace-, queue-, topic-, and subscription-level metrics for Azure Service Bus.
Azure Key Vault: Utilize integration with Azure Key Vault to encrypt the data in the namespace with a customer-managed key.
Virtual networks and private endpoints: Secure access to Azure Service Bus using virtual network service endpoints. Connect to the cloud-hosted service via an address on your private network using private endpoints.
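As an example of the Azure Functions integration mentioned above, here is a minimal sketch of a Java function triggered by messages on a Service Bus queue. The function name, queue name, and the "ServiceBusConnection" application setting are placeholder names for this illustration.

```java
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.ServiceBusQueueTrigger;

public class OrderProcessor {

    // Invoked whenever a message arrives on the 'orders' queue. The connection
    // string is read from the application setting named 'ServiceBusConnection'.
    @FunctionName("ProcessOrder")
    public void run(
            @ServiceBusQueueTrigger(
                    name = "message",
                    queueName = "orders",
                    connection = "ServiceBusConnection") String message,
            final ExecutionContext context) {
        context.getLogger().info("Processing Service Bus message: " + message);
    }
}
```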

Get started today

Get started today by provisioning a Service Bus namespace with JMS features and migrating your existing Java and Spring applications from ActiveMQ to Service Bus.
Source: Azure