Amazon Connect Contact Lens now offers real-time conversational analytics for chats

Amazon Connect Contact Lens now offers real-time conversational analytics for Amazon Connect Chat, extending its machine-learning-based post-contact analytics (such as sentiment analysis, automatic contact categorization, and more) to in-progress contact scenarios. These capabilities enable contact center managers to spot customer issues on ongoing chat contacts and help resolve them faster. For example, managers can now receive an email notification in real time when customer sentiment on a chat contact turns negative, so they can join the ongoing contact and help resolve the customer's issue.
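As a rough illustration of how such real-time signals could be consumed programmatically (this is not part of the announcement), the sketch below assumes the boto3 "connect-contact-lens" client and already-known instance and contact IDs; in practice, email alerts like the one described are typically configured through Contact Lens rules in Amazon Connect rather than custom code.

```python
# Illustrative sketch only: poll Contact Lens real-time analysis segments for an
# in-progress chat and flag negative sentiment. Assumes the boto3
# "connect-contact-lens" client; instance and contact IDs are placeholders.
import boto3

contact_lens = boto3.client("connect-contact-lens")

def has_negative_sentiment(instance_id: str, contact_id: str) -> bool:
    """Return True if any real-time transcript segment is tagged NEGATIVE."""
    next_token = None
    while True:
        kwargs = {"InstanceId": instance_id, "ContactId": contact_id}
        if next_token:
            kwargs["NextToken"] = next_token
        response = contact_lens.list_realtime_contact_analysis_segments(**kwargs)
        for segment in response.get("Segments", []):
            transcript = segment.get("Transcript", {})
            if transcript.get("Sentiment") == "NEGATIVE":
                return True
        next_token = response.get("NextToken")
        if not next_token:
            return False
```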
Source: aws.amazon.com

State of the Word 2023: Watch Live on December 11

It’s almost time for State of the Word 2023! Join us for this live stream event on December 11th at 10am ET.

State of the Word is the annual keynote address delivered by the WordPress project’s co-founder and Automattic CEO, Matt Mullenweg. Every year, the event shares reflections on the project’s progress and the future of open source. Expect all that and more in this year’s edition.

For the first time ever, this event is venturing beyond North America, bringing the WordPress community to a new and vibrant city: Madrid, Spain! The event will be live-streamed to WordPress enthusiasts and newcomers alike via the WordPress YouTube channel.

Join Matt as he provides a retrospective of 2023, demos the latest in WordPress tech, and comments on the future of the WordPress open source project.

Watch State of the Word 2023 live!

What: State of the Word 2023

When: Monday, December 11, 2023 @ 10:00 am ET (15:00 UTC)

How: The live stream is embedded in this post, just above, and will go live at the time of the event. It will also be available through the WordPress YouTube channel. Additionally, there are a number of locally organized watch parties happening around the world if you’d like to watch it in the company of other WordPressers.

Don’t worry, we’ll post the recorded event early next week if you aren’t able to catch it live.
Source: RedHat Stack

Democratizing FinOps: Transform your practice with FOCUS and Microsoft Fabric

Cloud computing has revolutionized the way you build, deploy, and scale applications and services. While you have unprecedented flexibility, agility, and scalability, you also face greater challenges in managing cost, security, and compliance. While IT security and compliance are often managed by central teams, cost is a shared responsibility across executive, finance, product, and engineering teams, which is what makes managing cloud cost such a challenge. Having the right tools to enable cross-group collaboration and make data-driven decisions is critical.

Fortunately, you have everything you need in the Microsoft Cloud to implement a streamlined FinOps practice that brings people together and connects them to the data they need to make business decisions. And with new developments like Copilot in Microsoft Cost Management and Microsoft Fabric, there couldn’t be a better time to take a fresh look at how you manage cost within your organization and how you can leverage the FinOps Framework and the FinOps Open Cost and Usage Specification (FOCUS) to accelerate your FinOps efforts.

There’s a lot to cover in this space, so I’ll split this across a series of blog posts. In this first blog post, I’ll introduce the core elements of Cost Management and Fabric that you’ll need to lay the foundation for the rest of the series, including how to export data, how FOCUS can help, and a few quick options that anyone can use to set up reports and alerts in Fabric with just a few clicks.

No-code extensibility with Cost Management exports

As your FinOps team grows to cover new services, endpoints, and datasets, you may find they spend more time integrating disparate APIs and schemas than driving business goals. This complexity also keeps simple reports and alerts just out of reach from executive, finance, and product teams. And when your stakeholders can’t get the answers they need, they push more work on to engineering teams to fill those gaps, which again, takes away from driving business goals.

We envision a future where FinOps teams can empower all stakeholders to stay informed and get the answers they need through turn-key integration and AI-assisted tooling on top of structured guidance and open specifications. And this all starts with Cost Management exports—a no-code extensibility feature that brings data to you.

As of today, you can sign up for a limited preview of Cost Management exports, where you can export five new datasets directly into your storage account without a single line of code. In addition to the actual and amortized cost and usage details you get today, you’ll also see:

Cost and usage details aligned to FOCUS

Price sheets

Reservation details

Reservation recommendations

Reservation transactions

Of note, the FOCUS dataset includes both actual and amortized costs in a single dataset, which can drive additional efficiencies in your data ingestion process. You’ll benefit from reduced data processing times and more timely reporting on top of reduced storage and compute costs due to fewer rows and less duplication of data.
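As an illustration of that efficiency, here is a minimal pandas sketch, assuming a FOCUS-aligned export has landed as a Parquet file in your storage account; the file path is a placeholder, and BilledCost and EffectiveCost are the FOCUS column names for actual and amortized cost respectively.

```python
# Minimal sketch: one FOCUS dataset serving both the actual and amortized views.
# The path is a placeholder for your export location.
import pandas as pd

focus = pd.read_parquet("exports/focuscost/part-0.parquet")

actual_by_service = focus.groupby("ServiceCategory")["BilledCost"].sum()
amortized_by_service = focus.groupby("ServiceCategory")["EffectiveCost"].sum()

print(pd.DataFrame({"Actual": actual_by_service, "Amortized": amortized_by_service}))
```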

Beyond the new datasets, you’ll also discover optimizations that deliver large datasets more efficiently, reduced storage costs by updating rather than creating new files each day, and more. All exports are scheduled at the same time, to ensure scheduled refreshes of your reports will stay in sync with the latest data. Coupled with file partitioning, which is already available and recommended today, and data compression, which you’ll see in the coming months, the exports preview removes the need to write complex code to extract, transfer, and load large datasets reliably via APIs. This better enables all FinOps stakeholders to build custom reports to get the answers they need without having to learn a single API or write a single line of code.

To learn about all the benefits of the exports preview—yes, there’s more—read the full synopsis in Cost Management updates. And to start exporting your FOCUS cost and usage, price sheet, and reservation data, sign up for the exports preview today.

FOCUS democratizes cloud cost analytics

In case you’re not familiar, FOCUS is a groundbreaking initiative to establish a common provider and service-agnostic format for billing data that empowers organizations to better understand cost and usage patterns and optimize spending and performance across multiple cloud, software as a service (SaaS), and even on-premises service offerings. FOCUS provides a consistent, clear, and accessible view of cost data, explicitly designed for FinOps needs. As the new “language” of FinOps, FOCUS enables practitioners to collaborate more efficiently and effectively with peers throughout the organization and even maximize transferability and onboarding for new team members, getting people up and running quicker.

FOCUS 0.5 was originally announced in June 2023, and we’re excited to be leading the industry with our announcement of native support for the FOCUS 1.0 preview as part of Cost Management exports on November 13, 2023. We believe FOCUS is an important step forward for our industry, and we look forward to our industry partners joining us and collaboratively evolving the specification alongside FinOps practitioners from our collective customers and partners.

FOCUS 1.0 preview adds new columns for pricing, discounts, resources, and usage along with prescribed behaviors around how discounts are applied. Soon, you’ll also have a powerful new use case library, which offers a rich set of problems and prebuilt queries to help you get the answers you need without the guesswork. Armed with FOCUS and the FinOps Framework, you have a literal playbook on how to understand and extract answers out of your data effortlessly, enabling you to empower FinOps stakeholders regardless of how much knowledge or experience they have, to get the answers they need to maximize business value with the Microsoft Cloud.

For more details about FOCUS or why we believe it’s important, see FOCUS: A new specification for cloud cost transparency. And stay tuned for more updates as we dig into different scenarios where FOCUS can help you.

Microsoft Fabric and Copilot enable self-service analytics

So far, I’ve talked about how you can leverage Cost Management exports as a turn-key solution to extract critical details about your costs, prices, and reservations using FOCUS as a consistent, open billing data format with its use case library that is a veritable treasure map for finding answers to your FinOps questions. While these are all amazing tools that will accelerate your FinOps efforts, the true power of democratizing FinOps lies at the intersection of Cost Management and FOCUS with a platform that enables you to provide your stakeholders with self-serve analytics and alerts. And this is exactly what Microsoft Fabric brings to the picture.

Microsoft Fabric is an all-in-one analytics solution that encompasses data ingestion, normalization, cleansing, analysis, reporting, alerting, and more. I could write a separate blog post about how to implement each FinOps capability in Microsoft Fabric, but to get you acclimated, let me introduce the basics.

Your first step to leveraging Microsoft Fabric starts in Cost Management, which has done much of the work for you by exporting details about your prices, reservations, and cost and usage data aligned to FOCUS.

Once exported, you’ll ingest your data into a Fabric lakehouse, SQL, or KQL database table and create a semantic model to bring data together for any reports and alerts you’ll want to create. The database option you use will depend on how much data you have and your reporting needs. Below is an example using a KQL database, which uses Azure Data Explorer under the covers, to take advantage of the performance and scale benefits as well as the powerful query language.
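As a rough sketch of that ingestion step, the following Fabric notebook cell (PySpark) loads exported FOCUS files from a lakehouse Files folder into a Delta table that a semantic model can be built on; the folder path and table name are placeholders for your own export location.

```python
# Illustrative Fabric notebook cell: load exported FOCUS files into a Delta table.
# In a Microsoft Fabric notebook, `spark` is provided; elsewhere create a session:
# from pyspark.sql import SparkSession; spark = SparkSession.builder.getOrCreate()
focus_df = spark.read.parquet("Files/costexports/focuscost/*.parquet")

(focus_df.write
    .mode("overwrite")          # or "append" for incremental loads
    .format("delta")
    .saveAsTable("focus_cost"))
```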

Fabric offers several ways to quickly explore data from a semantic model. You can explore data by simply selecting the columns you want to see, but I recommend trying the auto-create a report option which takes that one step further by generating a quick summary based on the columns you select. As an example, here’s an auto-generated summary of the FOCUS EffectiveCost broken down by ChargePeriodStart, ServiceCategory, SubAccountName, Region, PricingCategory, and CommitmentDiscountType. You can apply quick tweaks to any visual or switch to the full edit experience to take it even further.
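A similar breakdown can also be reproduced directly in a notebook. Here is a hedged Spark SQL sketch over the hypothetical focus_cost table created in the previous snippet, aggregating EffectiveCost by month, ServiceCategory, and SubAccountName.

```python
# Sketch reproducing a report-style breakdown over the focus_cost table:
# monthly EffectiveCost by ServiceCategory and SubAccountName.
summary = spark.sql("""
    SELECT date_trunc('month', ChargePeriodStart) AS ChargeMonth,
           ServiceCategory,
           SubAccountName,
           SUM(EffectiveCost) AS EffectiveCost
    FROM focus_cost
    GROUP BY date_trunc('month', ChargePeriodStart), ServiceCategory, SubAccountName
    ORDER BY ChargeMonth, EffectiveCost DESC
""")
summary.show()
```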

Those with a keen eye may notice the Copilot button at the top right. If we switch to edit mode, we can take full advantage of Copilot and even ask it to create the same summary:

Copilot starts to get a little fancier with the visuals and offers summarized numbers and a helpful filter. I can also go further with more specific questions about commitment-based discounts:

Of course, this is barely scratching the surface. With a richer semantic model including relationships and additional details, Copilot can go even further and save you time by giving you the answers you need and building reports with less time and hassle.

In addition to having unparalleled flexibility in reporting on the data in the way you want, you can also create fine-grained alerts in a more flexible way than ever before with very little effort. Simply select the visual you want to measure and specify when and how you want to be alerted:

This gets even more powerful when you add custom visuals, measures, and materialized views that offer deeper insights.

This is just a glimpse of what you can do with Cost Management and Microsoft Fabric together. I haven’t even touched on the data flows, machine learning capabilities, and the potential of ingesting data from multiple cloud providers or SaaS vendors also using FOCUS to give you a full, single pane of glass for your FinOps efforts. You can imagine the possibilities of how Copilot and Fabric can impact every FinOps capability, especially when paired with rich collaboration and automation tools like Microsoft Teams, Power Automate, and Power Apps that can help every stakeholder accomplish more together. I’ll share more about these in a future blog post or tutorial.

Next steps to accomplish your FinOps goals

I hope you’re as excited as I am about the potential of low- or even no-code solutions that empower every FinOps stakeholder with self-serve analytics. Whether you’re in finance seeking answers to complex questions that require transforming, cleansing, and joining multiple datasets, in engineering looking for a solution for near-real-time alerts and analytics that can react quickly to unexpected changes, or a FinOps team that now has more time to pursue something like unit cost economics to measure the true value of the cloud, the possibilities are endless. As someone who uses Copilot often, I can say that the potential of AI is real. Copilot saves me time in small ways throughout the day, enabling me to accomplish more with less effort. And perhaps the most exciting part is knowing that the more we leverage Copilot, the better it will get at automating tasks that free us up to solve bigger problems. I look forward to Copilot familiarizing itself with FOCUS and the use case library to see how far we’re able to go with a natural language description of FinOps questions and tasks.

And of course, this is just the beginning. We’re on the cusp of a revolutionary change to how organizations manage and optimize costs in the cloud. Stay tuned for more updates in the coming months as we share tutorials and samples that will help you streamline and accomplish FinOps tasks in less time. In the meantime, familiarize yourself with Microsoft Fabric and Copilot and learn more about how you can accomplish your FinOps goals with an end-to-end analytics platform.
Source: Azure

How Azure is ensuring the future of GPUs is confidential

In Microsoft Azure, we are continually innovating to enhance security. One such pioneering effort is our collaboration with our hardware partners to create a new foundation based in silicon that enables new levels of data protection, using confidential computing to protect data in memory.

Data exists in three stages in its lifecycle: in use (when it is created and computed upon), at rest (when stored), and in transit (when moved). Customers today already take measures to protect their data at rest and in transit with existing encryption technologies. However, they have not had the means to protect their data in use at scale. Confidential computing is the missing third stage in protecting data when in use via hardware-based trusted execution environments (TEEs) that can now provide assurance that the data is protected during its entire lifecycle.

The Confidential Computing Consortium (CCC), which Microsoft co-founded in September 2019, defines confidential computing as the protection of data in use via hardware-based TEEs. These TEEs prevent unauthorized access or modification of applications and data during computation, thereby always protecting data. The TEEs are a trusted environment providing assurance of data integrity, data confidentiality, and code integrity. Attestation and a hardware-based root of trust are key components of this technology, providing evidence of the system’s integrity and protecting against unauthorized access, including from administrators, operators, and hackers.

Confidential computing can be seen as a foundational defense-in-depth capability for workloads that warrant an extra level of assurance in the cloud. It can also enable new scenarios such as verifiable cloud computing, secure multi-party computation, or running data analytics on sensitive data sets.

While confidential computing has recently been available for central processing units (CPUs), it has also been needed for graphics processing unit (GPU)-based scenarios that require high-performance computing and parallel processing, such as 3D graphics and visualization, scientific simulation and modeling, and AI and machine learning. Confidential computing can be applied to the GPU scenarios above for use cases that involve processing sensitive data and code on the cloud, such as healthcare, finance, government, and education. Azure has been working closely with NVIDIA® for several years to bring confidential computing to GPUs. And this is why, at Microsoft Ignite 2023, we announced Azure confidential VMs with NVIDIA H100-PCIe Tensor Core GPUs in preview. These Virtual Machines, along with the increasing number of Azure confidential computing (ACC) services, will allow more innovations that use sensitive and restricted data in the public cloud.

Potential use cases

Confidential computing on GPUs can unlock use cases that deal with highly restricted datasets and where there is a need to protect the model. An example use case can be seen with scientific simulation and modeling where confidential computing can enable researchers to run simulations and models on sensitive data, such as genomic data, climate data, or nuclear data, without exposing the data or the code (including model weights) to unauthorized parties. This can facilitate scientific collaboration and innovation while preserving data privacy and security.

Another possible use case for confidential computing applied to imaging is medical image analysis. Confidential computing can enable healthcare professionals to use advanced image processing techniques, such as deep learning, to analyze medical images, such as X-rays, CT scans, or MRI scans, without exposing the sensitive patient data or the proprietary algorithms to unauthorized parties. This can improve the accuracy and efficiency of diagnosis and treatment, while preserving data privacy and security. For example, confidential computing can help detect tumors, fractures, or anomalies in medical images.

Given the massive potential of AI, confidential AI is the term we use to represent a set of hardware-based technologies that provide cryptographically verifiable protection of data and models throughout their lifecycle, including when data and models are in use. Confidential AI addresses several scenarios spanning the AI lifecycle.

Confidential inferencing. Enables verifiable protection of model IP while simultaneously protecting inferencing requests and responses from the model developer, service operations and the cloud provider.

Confidential multi-party computation. Organizations can collaborate to train and run inferences on models without ever exposing their models or data to each other, while enforcing policies on how the outcomes are shared between the participants.

Confidential training. With confidential training, model builders can ensure that model weights and intermediate data such as checkpoints and gradient updates exchanged between nodes during training aren’t visible outside of TEEs. Confidential AI can enhance the security and privacy of AI inferencing by allowing data and models to be processed in an encrypted state, preventing unauthorized access or leakage of sensitive information.

Confidential computing building blocks

In response to growing global demands for data security and privacy, a robust platform with confidential computing capabilities is essential. It begins with innovative hardware as its core foundation and builds up through core infrastructure service layers such as Virtual Machines and containers. This is a crucial step towards allowing services to transition to confidential AI. Over the next few years, these building blocks will enable a confidential GPU ecosystem of applications and AI models.

Confidential Virtual Machines

Confidential Virtual Machines are a type of virtual machine that provides robust security by encrypting data in use, ensuring that your sensitive data remains private and secure even while being processed. Azure was the first major cloud to offer confidential Virtual Machines powered by AMD SEV-SNP based CPUs with memory encryption that protects data while processing and meets the Confidential Computing Consortium (CCC) standard for data protection at the Virtual Machine level.

Confidential Virtual Machines powered by Intel® TDX offer foundational virtual machine-level protection of data in use and are now broadly available through the DCe and ECe virtual machines. These virtual machines enable seamless onboarding of applications with no code changes required and come with the added benefit of increased performance due to the 4th Gen Intel® Xeon® Scalable processors they run on.

Confidential GPUs are an extension of confidential virtual machines, which are already available in Azure. Azure is the first and only cloud provider offering confidential virtual machines with 4th Gen AMD EPYC™ processors with SEV-SNP technology and NVIDIA H100 Tensor Core GPUs in our NCC H100 v5 series virtual machines. Data is protected throughout its processing thanks to the encrypted and verifiable connection between the CPU and the GPU, coupled with memory protection mechanisms for both. This ensures that data is protected throughout processing and is only visible as ciphertext from outside the CPU and GPU memory.

Confidential containers

Container support for confidential AI scenarios is crucial as containers provide modularity, accelerate the development/deployment cycle, and offer a lightweight and portable solution that minimizes virtualization overhead, making it easier to deploy and manage AI/machine learning workloads.

Azure has made innovations to bring confidential containers for CPU-based workloads:

To reduce the infrastructure management burden on organizations, Azure offers serverless confidential containers in Azure Container Instances (ACI). By managing the infrastructure on behalf of organizations, serverless containers provide a low barrier to entry for burstable CPU-based AI workloads combined with strong data privacy assurances, including container group-level isolation and the same encrypted memory powered by AMD SEV-SNP technology.

To meet various customer needs, Azure now also has confidential containers in Azure Kubernetes Service (AKS), where organizations can leverage pod-level isolation and security policies to protect their container workloads, while also benefiting from the cloud-native standards built within the Kubernetes community. Specifically, this solution leverages investment in the open source Kata Confidential Containers project, a growing community with investments from all of our hardware partners including AMD, Intel, and now NVIDIA, too.

These innovations will need to be extended to confidential AI scenarios on GPUs over time.

The road ahead

Innovation in hardware takes time to mature and replace existing infrastructure. We’re dedicated to integrating confidential computing capabilities across Azure, including all virtual machine stock keeping units (SKUs) and container services, aiming for a seamless experience. This includes data-in-use protection for confidential GPU workloads extending to more of our data and AI services.

Eventually confidential computing will become the norm, with pervasive memory encryption across Azure’s infrastructure, enabling organizations to verify data protection in the cloud throughout the entire data lifecycle.

Learn about all of the Azure confidential computing updates from Microsoft Ignite 2023.
Source: Azure

Building resilience to your business requirements with Azure

At Microsoft, we understand the trust customers put in us by running their most critical workloads on Microsoft Azure. Whether they are retailers with their online stores, healthcare providers running vital services, financial institutions processing essential transactions, or technology partners offering their solutions to other enterprise customers—any downtime or impact could lead to business loss, social services interruptions, and events that could damage their reputation and affect the end-user confidence. In this blog post, we will discuss some of the design principles and characteristics that we see among the customer leaders we work with closely to enhance their critical workload availability according to their specific business needs.

A commitment to reliability with Azure

As we continue making investments that drive platform reliability and quality, there remains a need for customers to evaluate their technical and business requirements against the options Azure provides to meet availability goals through architecture and configuration. These processes, along with support from Microsoft technical teams, ensure you are prepared and ready in the event of an incident. As part of the shared responsibility model, Azure offers customers various options to enhance reliability. These options involve choices and tradeoffs, such as possible higher operational and consumption costs. You can use the flexibility of cloud services to enable or disable some of these features if your needs change. In addition to technical configuration, it is essential to regularly check your team’s technical and process readiness.

“We serve customers of all sizes in an effort to maximize their return on investment, while offering support on their migration and innovation journey. After a major incident, we participated in executive discussions with customers to provide clear contextual explanations as to the cause and reassurances on actions to prevent similar issues. As product quality, stability, and support experience are important focus areas, a common outcome of these conversations is an enhancement of cooperation between customer and cloud provider for the possibility of future incidents. I’ve asked Director of Executive Customer Engagement, Bryan Tang, from the Customer Support and Service team to share more about the types of support you should seek from your technical Microsoft team & partners.”—Mark Russinovich, CTO, Azure.

Design principles

Key elements of building a reliable workload begin with establishing an agreed availability target with your business stakeholders, as that will influence your design and configuration choices. As you continue to measure uptime against that baseline, it is critical to be ready to adopt any new services or features that can benefit your workload availability, given the pace of cloud innovation. Finally, adopt a continuous validation approach to identify weak points early and confirm your system behaves as designed when incidents do occur, and ensure your team is ready to partner with Microsoft during major incidents to minimize business disruption. We will go into more detail on these design principles:

Know and measure against your targets

Continuously assess and optimize

Test, simulate, and be ready

Know and measure against your targets

Azure customers may have outdated availability targets, or workloads that don’t have targets defined with business stakeholders. To cover these targets in more detail, you can refer to the business metrics to design resilient Azure applications guide. Application owners should revisit their availability targets with the respective business stakeholders to confirm those targets, then assess whether their current Azure architecture is designed to support such metrics, including SLA, Recovery Time Objective (RTO), and Recovery Point Objective (RPO). Different Azure services, along with different configurations or SKU levels, carry different SLAs. You need to ensure that your design does, at a minimum, reflect:

Defined SLA versus composite SLA: Your workload architecture is a collection of Azure services. You can run your entire workload on infrastructure as a service (IaaS) virtual machines (VMs) with storage and networking across all tiers and microservices, or you can mix in PaaS services such as Azure App Service and Azure Database for PostgreSQL; each provides a different SLA depending on the SKUs and configurations you select. When we asked customers about their SLA while assessing their workload architecture, we found that some had no SLA, some had an outdated SLA, and some had unrealistic SLAs. The key is to get a confirmed SLA from your business owners and calculate the composite SLA based on your workload resources, as sketched below. This shows you how well you meet your business availability objectives.
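To make the composite SLA concrete, here is a minimal worked sketch. The SLA figures are illustrative placeholders rather than quoted Azure SLAs: serial dependencies multiply, and redundant deployments combine their failure probabilities.

```python
# Worked sketch of a composite SLA calculation. The figures below are
# illustrative placeholders; use the published SLAs for your actual SKUs.
app_tier = 0.9995      # e.g., a web/app tier
database = 0.9999      # e.g., a managed database
queue    = 0.999       # e.g., a messaging service

# Serial dependencies: the workload is up only if every component is up.
composite = app_tier * database * queue
print(f"Composite SLA: {composite:.4%}")        # ~99.84%

# Redundant deployment across two regions: down only if both are down.
multi_region = 1 - (1 - composite) ** 2
print(f"Two-region composite: {multi_region:.5%}")
```

Note that in a real multi-region design, the traffic-routing or failover component adds its own SLA to the serial chain, so the two-region figure above is an upper bound.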

Continuously assess options and be ready to optimize

One of the most significant drivers for cloud migration is the financial benefit, such as shifting from capital expenditure to operating expenditure and taking advantage of the economies of scale at which cloud providers operate. However, one often-overlooked benefit is our continued investment and innovation in the newest hardware, services, and features.

Many customers have moved their workloads from on-premises to Azure in a quick and simple way, replicating their on-premises architecture in Azure without using the extra options and features Azure offers to improve availability and performance. We also see customers treating their cloud architecture as pets rather than cattle, instead of seeing resources as components that work together and can be replaced with better options when they become available. We fully understand customer preference, habit, and perhaps the worry of a black box as opposed to managing your own VMs where you do the maintenance and security scans. However, our ongoing innovation and commitment to providing platform as a service (PaaS) and software as a service (SaaS) gives you opportunities to focus your limited resources and effort on the functions that make your business stand out.

Architecture reliability recommendations and adoption:

We make every effort to ensure you have the most specific and up-to-date recommendations through various channels. Our flagship channel is Azure Advisor, which now also supports the Reliability Workbook, and we partner closely with engineering so that any additional recommendations that take time to land in the workbook and Azure Advisor are available for your consideration through the Azure Proactive Resiliency Library (APRL). Together, these provide a comprehensive list of documented recommendations for the Azure services you use.

Security and data resilience:

While the previous point focuses on configurations and options for the Azure components that make up your application architecture, it is just as critical to ensure your most critical asset, your data, is protected and replicated. Architecture gives you a solid foundation to withstand cloud service-level failures; it is equally critical to protect your data and resources from accidental or malicious deletion. Azure offers options such as Resource Locks and enabling soft delete on your storage accounts. Your architecture is only as solid as the security and identity and access management applied to it as overall protection.

Assess your options and adopt:

While there are many recommendations that can be made, implementation ultimately remains your decision. Understandably, changing your architecture might not be just a matter of modifying your deployment template: you want to ensure your test cases are comprehensive, and it may involve time, effort, and cost to run your workloads. Our field is prepared to help you explore options and tradeoffs, but the decision to enhance availability to meet your stakeholders' business requirements is ultimately yours. This mentality toward change is not limited to reliability; it also applies to other aspects of the Well-Architected Framework, such as Cost Optimization.

Test, simulate, and be ready

Testing is a continuous process at both the technical and process level, with automation as a key part. Beyond the paper-based exercise of selecting the right SKUs and configurations of cloud resources to reach the right composite SLA, applying chaos engineering to your testing helps you find weaknesses and verify readiness. Monitoring your application so you can detect disruptions and react quickly to recover is critical, and finally, knowing how to engage Microsoft support effectively when needed helps set the proper expectations with your stakeholders and end users in the event of an incident.

Continuous validation with chaos engineering: When operating a distributed application with microservices and different dependencies between centralized services and workloads, a chaos mindset helps build confidence in your resilient architecture design by proactively finding weak points and validating your mitigation strategy. For customers striving for DevOps success through automation, continuous validation (CV) has become a critical component of reliability, alongside continuous integration (CI) and continuous delivery (CD). Simulating failure also helps you understand how your application behaves under partial failure, how your design responds to infrastructure issues, and the overall level of impact on end users, as illustrated in the sketch below. Azure Chaos Studio is now generally available to assist you further with this ongoing validation.
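The following is a minimal, self-contained sketch of the chaos mindset, not of Chaos Studio itself: the dependency and fallback functions are hypothetical stand-ins, while Chaos Studio performs this kind of fault injection at the infrastructure level (for example VM shutdowns or added network latency).

```python
# Minimal sketch: inject a dependency fault and verify graceful degradation.
import random

def fetch_recommendations(fail_rate: float = 0.0) -> list:
    """Simulated downstream dependency that can be made to fail on demand."""
    if random.random() < fail_rate:
        raise TimeoutError("dependency timed out")
    return ["item-1", "item-2"]

def render_home_page(fail_rate: float = 0.0) -> dict:
    """Caller with a fallback path: degrade to a static list, never crash."""
    try:
        items = fetch_recommendations(fail_rate)
        return {"recommendations": items, "degraded": False}
    except TimeoutError:
        return {"recommendations": ["bestseller-1"], "degraded": True}

# Chaos-style validation: even with the dependency failing 100% of the time,
# the page still renders in degraded mode instead of raising an error.
assert render_home_page(fail_rate=1.0)["degraded"] is True
print("graceful degradation verified")
```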

Detect and react: Ensure your workload is monitored at both the application and component level for a comprehensive view of health. For instance, Azure Monitor helps you collect, analyze, and respond to monitoring data from your cloud and on-premises environments; a small query sketch follows below. Azure also offers a suite of experiences to keep you informed about the health of your cloud resources: Azure Status informs you of Azure service outages, Service Health provides service-impacting communications such as planned maintenance, and Resource Health reports on individual resources such as a VM.
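As a small example of the detection side, the following sketch assumes the azure-monitor-query and azure-identity Python packages and polls a Log Analytics workspace for recent application exceptions; the workspace ID, table name, and query are placeholders for your own environment.

```python
# Illustrative sketch, assuming azure-monitor-query and azure-identity:
# query a Log Analytics workspace for recent exceptions as a simple health signal.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<your-workspace-id>",   # placeholder
    query="AppExceptions | summarize errors = count() by bin(TimeGenerated, 5m)",
    timespan=timedelta(hours=1),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
```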

Incident response plan: Partner closely with our technical support teams to jointly develop an incident response plan. This action plan is essential to developing shared accountability between you and Microsoft as we work towards resolving your incident: the basics of who, what, and when, so that you and we can partner on a quick resolution. Our teams are ready to run test drills with you as well to validate this response plan for our joint success.

Ultimately, your desired reliability is an outcome you can only achieve by taking all of these approaches into account, along with a mindset of continuous optimization. Building application resilience is not a single feature or phase, but a muscle that your teams will build, learn, and strengthen over time. For more details, please check out our Well-Architected Framework guidance and consult with your Microsoft team, whose only objective is for you to realize full business value on Azure.
Source: Azure