Amazon CloudFront announces a new edge location in Israel

Details: Amazon CloudFront announces its presence in Israel with its first edge location in Tel Aviv. With this new edge location, CloudFront reduces the latency of content delivery for users in Israel by 75%. CloudFront now has a total of 188 points of presence (PoPs) in 70 cities across 31 countries.
You can learn more about CloudFront pricing, including pricing for the new edge location in Israel, in our pricing overview.
Quelle: aws.amazon.com

Better together, synergistic results from digital transformation

Intelligent manufacturing transformation can bring great changes, such as connecting the sales organization with field services. Moving to the cloud also provides benefits such as an intelligent supply chain and innovations enabled by connected products. As such, digital transformation is the goal of many, as it can mean finding a competitive advantage.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Leverage through Azure services

One company, PTC, is well-known for ThingWorx, a market-leading, end-to-end Industrial Internet of Things (IIoT) solution platform, built for industrial environments. PTC has moved its platform to Azure, and in doing so, leverages the resources and technical advantages of Microsoft. Together, the two create a synergy that can help any manufacturer make a successful move to the digital world.

Why things matter

The ThingWorx by PTC platform includes a number of components that can kickstart any effort to digitally transform a manufacturing floor. The platform consists of two notable components:

ThingWorx Analytics
ThingWorx Industrial Connectivity

By implementing the platform, developers can create comprehensive, feature-rich IoT solutions and deliver faster time-to-insights, critical to the success of industrial implementations. Because the platform is customized for industrial environments and all aspects of manufacturing, as outlined below, it streamlines the digital transformation with capabilities unique to manufacturing. Add to that PTC’s partnership with Microsoft, and you get capabilities such as integrating HoloLens devices into mixed reality experiences.

Azure IoT Hub integration

Azure IoT Hub plays a central role in the platform. The service is accessed through the ThingWorx Azure IoT Connector. Features include:

Ingress processing: Devices that are running Azure IoT Hub SDK applications send messages to the Azure IoT Hub (see the sketch after this list). These messages arrive through an Azure Event Hub endpoint that is provided by the IoT Hub. Communication with the ThingWorx platform is asynchronous to allow for optimal message throughput.
Egress processing: Egress messages arrive from the ThingWorx platform and are pushed to the Azure IoT Hub through its service client.
Device methods as remote services: The Azure IoT Hub enables you to invoke device (direct) methods on edge devices from the cloud.
Azure IoT Blob Storage: Allows integration with Azure Blob Storage accounts.
File transfers: The Azure IoT Hub Connector supports transferring files between edge devices and an Azure Storage container.
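
The ThingWorx connector itself is not shown here, but to make the ingress path concrete, here is a minimal sketch of a device sending telemetry to Azure IoT Hub with the Python device SDK (azure-iot-device); the connection string and payload fields are placeholder assumptions.

```python
# Minimal sketch: a device sends telemetry to Azure IoT Hub using the
# Python device SDK (azure-iot-device). The connection string and the
# payload fields below are hypothetical placeholders.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

def send_temperature_reading(temperature_c: float) -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    try:
        client.connect()
        payload = json.dumps({"temperature": temperature_c, "unit": "C"})
        message = Message(payload)
        message.content_type = "application/json"
        message.content_encoding = "utf-8"
        # The message lands on the IoT Hub's built-in Event Hub-compatible
        # endpoint, which is what a connector such as the ThingWorx Azure
        # IoT Connector reads from during ingress processing.
        client.send_message(message)
    finally:
        client.shutdown()

if __name__ == "__main__":
    send_temperature_reading(21.5)
```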

Next steps

To learn more, go to the Azure marketplace for ThingWorx for Azure and click Contact me.
To see more about Azure in manufacturing, go to Azure for Manufacturing.

Quelle: Azure

Developing Docker-Powered Applications on Windows with WSL 2

This is a guest post from Docker Captain Antonis Kalipetis, a Senior Software Engineer at e-food — the leading online food delivery service in Greece. He is a Python lover and developer and helps teams embrace containers and improve their development workflow. He loves automating stuff and sharing knowledge around all things containers, DevOps and developer workflows. You can follow him on Twitter @akalipetis.

WSL 2 (or Windows Subsystem for Linux version 2) is Microsoft’s second take on shipping a Linux kernel with Windows. The first version was awesome, as it translated Linux system calls to equivalent Windows NT calls in real time. The second version includes a full-fledged virtual machine.

It was only natural that Docker would embrace this change and ship a Docker Desktop for Windows version that runs on WSL 2 (WSL 1 had issues running the Docker daemon). This is still a Technical Preview, but after using it for a couple of days, I’ve completely switched my local development to take advantage of it and I’m pretty happy with it.

In this blog, I’ll show you an example of how to develop Docker-powered applications using the Docker Desktop WSL 2 Tech Preview.

Why use Docker Desktop WSL 2 Tech Preview over the “stable” Docker Desktop for Windows?

The main advantage of using the technical preview is that you don’t have to manage your Docker VM anymore.  More specifically:

The VM grows and shrinks with your needs in terms of RAM/CPU, so you don’t have to decide its size and preallocate resources. It can shrink to almost zero CPU/RAM if you don’t use it. It works so well that most of the time you forget there’s a VM involved.
Filesystem performance is great, with support for inotify, and the VM’s disk size can match the size of your machine’s disk.

Apart from the above, if you love Visual Studio Code like I do, you can use the VS Code Remote WSL plugin to develop Docker-powered applications locally (more on that in a bit). You also get the always awesome Docker developer experience while using the VM.

How does Docker Desktop for WSL 2 Tech Preview work?

When you install it, it automatically installs Docker in a managed directory in your default WSL 2 distribution. This installation includes the Docker daemon, the Docker CLI and the Docker Compose CLI. It is kept up to date with Docker Desktop, and you can access it either from within WSL or from PowerShell by switching contexts — the Docker developer experience in action!

Developing applications with Docker Desktop for WSL 2 Tech Preview

For this example, we’ll develop a simple Python Flask application, with Redis as its data store. Every time you visit the page, the page counter will increase — say hello to Millennium!

Setting up VS Code Remote – WSL

Visual Studio Code recently announced a new set of tools for developing applications remotely — using SSH, Docker or WSL. This splits Visual Studio Code into a “client-server” architecture, with the client (that is the UI) running on your Windows machine and the server (that is your code, Git, plugins, etc) running remotely. In this example, we’re going to use the WSL version.

To start, open VS Code and select “Remote-WSL: New Window”. This will install the VS Code Remote server in your default WSL distribution (the one running Docker) and open a new VS Code workspace in your HOME directory.

Getting and Exploring the Code

Clone this GitHub repository by running git clone https://github.com/akalipetis/python-docker-example. Next, run code -r python-docker-example to open this directory in VS Code and let’s go on a quick tour!

Dockerfile and docker-compose.yml

These should look familiar. The Dockerfile is used for building your application container, while docker-compose.yml is the one you could use for deploying it. docker-compose.override.yml contains all the things that are needed for local development.

Pipfile and Pipfile.lock

These include the application dependencies. Pipenv is the tool used to manage them.

The app.py file contains the Flask application that we’re using in this example. Nothing special here!

Running the application and making changes

In order to run the application, open a WSL terminal (this is done using the integrated terminal feature of VS Code) and run docker-compose up. This will start all the containers (in this case, a Redis container and the one running the application). After doing so, visit http://localhost:5000 in your browser and voila — you’ve visited your new application. That’s not development though, so let’s make a change and see it in action. Open app.py in VS Code and change the line that defines the message shown on the page (a sketch of the file follows below).
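
The repository’s exact code isn’t reproduced here, but a minimal sketch of what such an app.py might look like is shown below; the greeting string, the Redis host name, and the counter key are assumptions for illustration.

```python
# Hypothetical sketch of app.py for a Flask + Redis visit counter.
# The greeting text, Redis host name ("redis", i.e. the Compose service
# name), and the counter key are illustrative assumptions.
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host="redis", port=6379)

@app.route("/")
def index():
    visits = redis.incr("visits")
    # Changing the string below and saving the file is enough: the Flask
    # auto-reloader picks up the change, while the counter keeps its value
    # because it lives in Redis, not in the Python process.
    return f"Hello from WSL 2! You are visitor number {visits}.\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
```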

Refresh the web page and observe that:

The message was immediately changed.
The visit counter continued counting from the latest value.

Under the hood

Let’s see what actually happened.

We changed a file in VS Code, which is running on Windows.
Since VS Code is running in a client-server mode with the server running in WSL 2, the change was actually made to the file living inside WSL.
Since you’re using the Technical Preview of Docker Desktop for WSL 2, and docker-compose.override.yml uses Linux workspaces to mount the code from WSL 2 directly into the running container, the change was propagated inside the container. While this is possible with the “stable” Docker Desktop for Windows, it isn’t as easy. By using Linux workspaces, we don’t need to worry about file system permissions. It’s also super fast, as it’s a local Linux filesystem mount.
Flask uses an auto-reloading server by default which, using inotify, reloads the server on every file change, so within milliseconds of saving your file, your server was reloaded.
Data is stored in Redis using a Docker volume, so the visits counter was not affected by the restart of the server.

Other tips to help you with Docker Desktop for WSL 2

Here are a few additional tips on developing inside containers using the Technical Preview of Docker Desktop for WSL 2:

For maximum file system performance, use Docker volumes for your application’s data and Linux Workspaces for your code.
To avoid running an extra VM, switch to Windows containers for your “stable” Docker Desktop for Windows environment.
Use docker context and default|wsl to switch contexts and develop both Windows and Linux Docker-powered applications easily.

Final Thoughts

I’ve switched to Windows and WSL 2 development for the past two months and I can’t describe how happy I am with my development workflow. Using Docker Desktop for WSL 2 for the past couple of days seems really promising, and most of the current issues of using Docker in WSL 2 seem to be resolved. I can’t wait for what comes next!

The only thing currently missing, in my opinion, is integration with VS Code Remote Containers (instead of Remote WSL, which was used for this blog post), which would allow you to run all your tooling within your Docker container.

Until VS Code Remote Containers support is ready, you can run pipenv install --dev to install the application dependencies on WSL 2, allowing VS Code to provide auto-complete and use all the nice tools included to help in development.

Get the Technical Preview and Learn More

If you’d like to get on board, read the instructions and install the technical preview from the Docker docs.

For more on WSL 2, check out these blog posts:

5 Things to Try with Docker Desktop WSL 2 Tech Preview
Get Ready for the Tech Preview of Docker Desktop for WSL 2

Quelle: https://blog.docker.com/feed/

Geo Zone Redundant Storage in Azure now in preview

Announcing the preview of Geo Zone Redundant Storage in Azure. Geo Zone Redundant Storage provides a great balance of high performance, high availability, and disaster recovery and is beneficial when building highly available applications or services in Azure. Geo Zone Redundant Storage helps achieve higher data resiliency by doing the following:

Synchronously writing three replicas of your data across multiple Azure Availability Zones, as with zone-redundant storage today, protecting against cluster, datacenter, or entire-zone failures.

Asynchronously replicating the data to a single zone in another region within the same geo, as with locally redundant storage, protecting against a regional outage.

When using Geo Zone Redundant Storage, you can continue to read and write the data even if one of the availability zones in the primary region is unavailable. In the event of a regional failure, you can also use Read Access Geo Zone Redundant Storage to continue having read access.
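
As an illustration that is not part of the original announcement, here is a minimal sketch of falling back to the read-only secondary endpoint of a Read Access Geo Zone Redundant Storage account with the Python azure-storage-blob SDK; the account name, key, container, and blob names are placeholders.

```python
# Minimal sketch: read a blob from the secondary (read-access) endpoint of a
# RA-GZRS account when the primary region is unavailable. The account, key,
# container, and blob names are hypothetical placeholders.
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobServiceClient

ACCOUNT = "mystorageaccount"
KEY = "<account-key>"

primary = BlobServiceClient(f"https://{ACCOUNT}.blob.core.windows.net", credential=KEY)
# Read-access geo-redundant accounts expose a read-only secondary endpoint
# with the "-secondary" suffix on the account name.
secondary = BlobServiceClient(f"https://{ACCOUNT}-secondary.blob.core.windows.net", credential=KEY)

def read_blob(container: str, name: str) -> bytes:
    try:
        return primary.get_blob_client(container, name).download_blob().readall()
    except HttpResponseError:
        # Fall back to the geo-replicated secondary for reads.
        return secondary.get_blob_client(container, name).download_blob().readall()

data = read_blob("reports", "daily.csv")
```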

Please note that Read Access Geo Zone Redundant Storage requires a general purpose v2 account and is available for block blobs, non-disk page blobs, files, tables, queues, and Azure Data Lake Storage Gen2.

With the release of the Geo Zone Redundant Storage preview, Azure offers a compelling set of durability options for your storage needs:

| Scenario | Locally redundant storage (LRS) | Geo-redundant storage (GRS) | Read Access geo-redundant storage (RA-GRS) | Zone-redundant storage (ZRS) | Geo Zone Redundant Storage (GZRS) | Read Access Geo Zone Redundant Storage (RA-GZRS) |
| --- | --- | --- | --- | --- | --- | --- |
| Node unavailability within a data center | Yes | Yes | Yes | Yes | Yes | Yes |
| An entire data center (zonal or non-zonal) becomes unavailable | No | Yes (failover is required) | Yes (failover is required) | Yes | Yes | Yes |
| A region-wide outage | No | Yes (failover is required) | Yes (failover is required) | No | Yes (failover is required) | Yes (failover is required) |
| Read access to your data (in a remote, geo-replicated region) in the event of region-wide unavailability | No | No | Yes | No | No | Yes |
| Designed to provide X% durability of objects over a given year | at least 11 9's | at least 16 9's | at least 16 9's | at least 12 9's | at least 16 9's | at least 16 9's |
| Supported storage account types | GPv2, GPv1, Blob | GPv2, GPv1, Blob | GPv2, GPv1, Blob | GPv2 | GPv2 | GPv2 |
| Availability SLA for read requests | At least 99.9% (99% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) | At least 99.99% (99.9% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) | At least 99.99% (99.9% for Cool Access Tier) |
| Availability SLA for write requests | At least 99.9% (99% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) | At least 99.9% (99% for Cool Access Tier) |

Current Geo Zone Redundant Storage prices are discounted preview prices and will change at the time of general availability. For details on the various redundancy options, please refer to the Azure Storage redundancy documentation. In regions where Geo Zone Redundant Storage is not yet available, you can still use the existing redundancy options, such as zone-redundant storage, to build highly available applications.

The preview of Geo Zone Redundant Storage and Read Access Geo Zone Redundant Storage is initially available in US East with more regions to follow in 2019. Please check our documentation for the latest list of regions where the preview is enabled.

You can create a Geo Zone Redundant Storage account using various methods including the Azure portal, Azure CLI, Azure PowerShell, Azure Resource Manager, and the Azure Storage Management SDK. Refer to Read Access Geo Zone Redundant Storage documentation for more details.
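
As one hedged example (not from the original post), creating a Geo Zone Redundant Storage account with the Python management SDK might look like the following; the subscription, resource group, and account names are placeholders, and the method names assume a recent azure-mgmt-storage release.

```python
# Minimal sketch: create a storage account with the Standard_GZRS SKU using
# the Python management SDK (azure-mgmt-storage). Resource group, account
# name, and location are hypothetical placeholders; begin_create assumes a
# recent SDK version (older releases expose it as create).
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.storage_accounts.begin_create(
    resource_group_name="my-resource-group",
    account_name="mygzrsaccount",
    parameters={
        "location": "eastus",              # preview initially available in US East
        "kind": "StorageV2",               # general-purpose v2 is required
        "sku": {"name": "Standard_GZRS"},  # or "Standard_RAGZRS" for read access
    },
)
account = poller.result()
print(account.name, account.sku.name)
```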

Converting from locally redundant storage, geo-redundant storage, read-access geo-redundant storage, or zone-redundant storage to Read Access Geo Zone Redundant Storage is supported. To convert from zone-redundant storage to Read Access Geo Zone Redundant Storage you can use Azure CLI, Azure PowerShell, Azure portal, Azure Resource Manager, and the Azure Storage Management SDK.

There are two options for migrating to Read Access Geo Zone Redundant Storage from non-zone-redundant storage accounts:

Manually copy or move data to a new Read Access Geo Zone Redundant Storage account from an existing account.
Request a live migration.

Please let us know if you have any questions or need our assistance. We are looking forward to your participation in the preview and hearing your feedback.

Resources

For more details on the conversion process please refer to the Read Access Geo Zone Redundant Storage documentation.
Learn how to leverage Azure Storage in your applications with our quickstarts and tutorials.
Refer to the pricing page to learn more about the pricing.

Quelle: Azure

GlideFinder: How we built a platform on Google Cloud that can monitor wildfires

Editor’s note: Today’s post comes from Dmitry Kryuk, Founder & CTO of GlideFinder, which offers a platform that can locate wildfires, alert subscribers, and provide analytics that can help contain wildfires in a timely manner.

The last decade has demonstrated that climate change is not an abstract concept: it is happening now, and is already translating into multibillion-dollar damages, business bankruptcies, and lost property. We founded GlideFinder with the long-term goal of helping prevent major uncontrolled wildfires, and we firmly believe that applying knowledge and insights from a variety of data sources can have great impact in controlling and managing the effects of these natural disasters.

Understanding risk exposure is an important way businesses and communities can prepare for the possibility of wildfire, and a key component of estimating that risk is the availability of a reliable real-time fire detection and notification framework. Creating such a framework has historically been challenging because it requires bringing together a wide variety of raw datasets from various agencies, along with the training, domain knowledge, time, and resources it takes to make sense of that data. As we developed GlideFinder, our aim was to create a framework that could easily help users understand which preventative measures would be most efficient and cost effective for their specific location. Because this problem is much more severe in developing countries, it needed to be affordable and scalable as well.

To address some of these challenges, we built a public platform for detection, notifications, and analytics of wildfire risks. It correlates streaming satellite data with supplemental weather, geologic, terrain, demographic, economic, and agency response data, visualizes fire development on an interactive map, gives a sense of how fast the fire is spreading and in which direction, and can help define proactive long-term planning and short-term emergency response procedures. In the future, the platform can be extended to other natural phenomena observable from space, UAVs, cameras, and sensor networks.

Building GlideFinder on Google Cloud

Since GlideFinder is an analytical platform, it needs data, and acquiring high-quality data was the biggest challenge. There are multiple agencies that collect data in different formats, sometimes on paper, at varying levels of detail. That’s why the focus so far has been the integration of satellite data, which is factual, reliable, and consistent in resolution over time.

Scalability

GlideFinder currently ingests data from the VIIRS instrument on the NASA Suomi NPP satellite, plus historical MODIS data. The satellite generates on average 50,000 records per day, which are updated every hour. Each record represents a hotspot of 375m x 375m with characteristics like temperature, data quality, and more. There are approximately 200 million records, or 22 GB of data, for the VIIRS/MODIS dataset. Right now, we’re in the process of integrating the geostationary satellites GOES-16 and GOES-17, each running a new scan every 5-15 minutes. We’ll have roughly 210 GB of data for a single year.

All this data is stored in Cloud Storage and BigQuery. When a user runs a search for a particular location, they define a few search criteria:

Level of aggregation: It could be annual, monthly, daily, or not aggregated at all.
Time range
Zoom level: The defined area around the searched location to present on the map.

The default zoom area is 40km x 40km, but you can increase or decrease it so that, in conjunction with the other criteria, you control the amount of data you pull. The results include metrics like total burned area for the selected time range, zoom level, aggregation level by year, county, and the number of days the fire was within a radius of 10km, by city or zip code.

We can’t run this query against BigQuery directly, for the following reasons:

In general, BigQuery has a default of 100 concurrent interactive queries per project, and we can expect hundreds of thousands of concurrent users during fire season from all over the world.
Even if this solution were built for mid-size enterprise customers with a few hundred users, querying BigQuery directly would take a significant amount of time. We use multiple aggregate functions, joined with city/county/international administrative unit geographies, so a single query can run for a few minutes.

To solve this problem, we materialize the views with the required business logic and use a Dataflow job to process that data and write it into JSON files in Cloud Storage and Cloud Firestore. The filenames contain location coordinates and a period name. For example, one such filename means that we are pulling historical data for an entire year (2012), with monthly aggregation (there will be 12 nodes in a single JSON file). Coordinates 12.5405_-13.9194 define a corner of a 20km x 20km square.

We also separate the processes for dealing with historical data and incremental real-time data. Once the business logic is defined, we can run Dataflow processing once for the entire history, and then append current year/month files. We do something similar with real-time data: we create JSON files that span a window of the current fire season (for example, the last 5 months). Those files are stored in a different location and demonstrate the propagation of the current fire. The data is not aggregated by time, and shows hotspots from every satellite scan in the timeline. We use streaming Dataflow pipelines to process new incremental fire data and new alert locations that subscribers add to monitoring. On the map, red dots correspond to the current position on the timeline, and the darker spots are from previous scans.

Since subscribers can add new locations for monitoring all the time, and we are still bounded by the default of 100 concurrent queries, new alert configurations are exported every 3 minutes from Firestore to BigQuery. This way, a batch of locations is processed by a single query. When a match between an alert location and fire data is found, a Cloud Function materializes the view and triggers Dataflow, which writes report files and JSON files that define the customer and which notification method to use. Once the file lands in the bucket, it triggers a Cloud Function that uses a third-party API to send an email or SMS. Alerts and the interactive map are also available on mobile.

Geographic Aggregation

One question that arises from processing and visualizing satellite data is how to do it efficiently without jeopardizing accuracy. VIIRS resolution is 375m x 375m, and large fires have multiple adjacent pixels. Different scenarios may require different accuracy: if we are presenting a fire that is happening right now, we may want to present every hotspot on the map from each scan. When we present aggregated annual data, in most cases it would not make sense to present every hotspot.

We can’t use DISTINCT in the query, since the coordinates are float numbers and the 375m x 375m pixels may overlap between scans. To work around this, multiple pixels can be snapped to the nearest point on the intersection of a grid. The original query does this with latitude. Why only latitude and not longitude? Because every line of latitude is about the same length in km, but the further you move from the equator, the shorter the lines of longitude become. Hence, for long_increment we use a trigonometric formula (a sketch of the snapping logic, including the longitude step, follows below).
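
As a rough illustration of the idea (the production version is a BigQuery query; the grid step and sample coordinates below are assumptions), the snapping can be sketched in Python like this:

```python
# Hypothetical sketch of grid snapping: the latitude step is a fixed number
# of degrees, while the longitude step is stretched by 1/cos(latitude) so
# that grid cells stay roughly square in kilometers away from the equator.
import math

LAT_INCREMENT = 0.005  # illustrative grid step, roughly 550 m of latitude

def snap(lat, lon):
    snapped_lat = round(lat / LAT_INCREMENT) * LAT_INCREMENT
    # long_increment grows as lines of longitude shorten toward the poles
    lon_increment = LAT_INCREMENT / math.cos(math.radians(snapped_lat))
    snapped_lon = round(lon / lon_increment) * lon_increment
    return round(snapped_lat, 6), round(snapped_lon, 6)

# Two nearby hotspots from different scans collapse onto the same grid point,
# so they can be counted once when aggregating.
print(snap(38.5721, -122.8034))
print(snap(38.5722, -122.8036))
```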

GIS Formats

In addition to the satellite data, our platform also uses several GIS datasets, such as vegetation types, vegetation disturbance (e.g., insect infestation), and potential fire intensity, as well as information about who is responsible for administering each area. We join those datasets with fire data, so that we can tell in which zip code/province/region the fire has occurred, compare metrics from different geographies, and estimate the risk of fire based on the availability of fuel. We also use the public dataset bigquery-public-data:geo_us_boundaries for US counties, cities, and zip codes, and join it with US Census data.

The GIS source is usually in shapefile format, which can be converted to GeoJSON with the open source ogr2ogr utility. The problem is that when those files are created in ESRI or other specialized GIS tools, they have fewer restrictions on the types of geometries than BigQuery currently has. So unless we use the SAFE prefix, a query that uses GIS functions will fail. Usually there’s a small percentage of erroneous polygons, but sometimes those are the largest polygons. In the SRE dataset, one of those polygons covered most of California. To overcome this issue, we used the following workaround:

Export erroneous polygons from BigQuery.
Upload the exported polygons into PostGIS and fix them there (a sketch of this repair step follows below).
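
As a rough sketch of the repair step, using the shapely library as a stand-in for PostGIS and a made-up self-intersecting polygon, the fix amounts to something like this:

```python
# Rough illustration of repairing invalid polygons, using shapely instead of
# PostGIS (an assumption for this sketch; the sample WKT is hypothetical).
from shapely import wkt
from shapely.validation import explain_validity, make_valid

# A self-intersecting "bowtie" polygon, the kind of geometry BigQuery's GIS
# functions reject unless the SAFE prefix is used.
invalid = wkt.loads("POLYGON((0 0, 2 2, 2 0, 0 2, 0 0))")

print(invalid.is_valid)            # False
print(explain_validity(invalid))   # e.g. "Self-intersection[1 1]"

fixed = make_valid(invalid)        # similar in spirit to PostGIS ST_MakeValid
print(fixed.is_valid)              # True
print(fixed.wkt)
```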

However, that operation is computationally intensive. For example, a single table of vegetation types contains 782M polygons (281 GB), of which 10M are invalid polygons. Fixing them on a single machine would take days. In this case, Dataflow came to the rescue again: we created an autoscaling data pipeline that employed GIS utilities to fix the polygons.

Summary

We chose Google Cloud infrastructure because of its scalability (which allows millions of concurrent users from anywhere in the world), live data streaming and processing with Dataflow, and the ability to work with petabytes of data in BigQuery. We are also planning to use the Vision API and machine learning to process imagery data from cameras and additional satellites. Below are the main GCP components we use:

BigQuery data warehouse: For historical and real-time satellite data, census datasets, USGS fuel and vegetation data, USFA fire response data, and user location data. User data is correlated with detection data, and based on that information alerts are sent.
App Engine with Cloud Tasks and background Cloud Functions: Used to pull data from NASA sources, detect events such as new data files, materialize BigQuery views on events, send alerts, publish Pub/Sub messages, and write user-specific data into Firestore.
Dataflow: Streams new user and satellite data into BigQuery, picks up the results of BigQuery views, and writes them into Google Cloud Storage for user consumption.
Pub/Sub messaging: Triggers Cloud Functions.

If you have questions or recommendations about our platform, or would like to help us in development, please reach out to dmitry@glidefinder.com.
Quelle: Google Cloud Platform

Improving Azure Virtual Machines resiliency with Project Tardigrade

"Our goal is to empower organizations to run their workloads reliably on Azure. With this as our guiding principle, we are continuously investing in evolving the Azure platform to become fault resilient, not only to boost business productivity but also to provide a seamless customer experience. Last month I published a blog post highlighting several initiatives underway to keep improving in this space, as part of our commitment to provide a trusted set of cloud services. Today I wanted to expand on the mention of Project Tardigrade – a platform resiliency initiative that improves high availability of our services even during the rare cases of spontaneous platform failures. The post that follows was written by Pujitha Desiraju and Anupama Vedapuri from our compute platform fundamentals team, who are leading these efforts.” Mark Russinovich, CTO, Azure

This post was co-authored by Jim Cavalaris, Principal Software Engineer, Azure Compute. 

 

Codenamed Project Tardigrade, this effort draws its inspiration from the eight-legged microscopic creature, the tardigrade, also known as the water bear. Virtually impossible to kill, tardigrades can be exposed to extreme conditions, but somehow still manage to wiggle their way to survival. This is exactly what we envision our servers emulating when we consider resiliency, hence the name Project Tardigrade. Similar to a tardigrade’s survival across a wide range of extreme conditions, this project involves building resiliency and self-healing mechanisms across multiple layers of the platform, ranging from hardware to software, all with a view towards safeguarding your virtual machines (VMs) as much as possible.

How does it work?

Project Tardigrade is a broad platform resiliency initiative which employs numerous mitigation strategies with the purpose of ensuring your VMs are not impacted due to any unanticipated host behavior. This includes enabling components to self-heal and quickly recover from potential failures to prevent impact to your workloads. Even in the rare cases of critical host faults, our priority is to preserve and protect your VMs from these spontaneous events to allow your workloads to run seamlessly.

One example recovery workflow is highlighted below, for the uncommon event in which a customer-initiated VM operation fails due to an underlying fault on the host server. To carry out the failed VM operation successfully, as well as to proactively prevent the issue from potentially affecting other VMs on the server, the Tardigrade recovery service will be notified and will begin executing failover operations.

The following phases briefly describe the Tardigrade recovery workflow:

Phase 1:

This step has no impact on running customer VMs. It simply recycles all services running on the host. In the rare case that the faulted service does not successfully restart, we proceed to Phase 2.

Phase 2:

Our diagnostics service runs on the host to collect all relevant logs and dumps systematically, to ensure that we can thoroughly diagnose the reason for the failure in Phase 1. This comprehensive analysis allows us to ‘root cause’ the issue and thereby prevent recurrences in the future.

Phase 3:

At a high level, we reset the host OS into a healthy state with minimal customer impact to mitigate the host issue. During this phase we preserve the state of each VM to RAM before the reset begins. While the OS swiftly resets underneath, running applications on all VMs hosted on the server briefly ‘freeze’ as the CPU is temporarily suspended; the experience is similar to a network connection that is temporarily lost but quickly resumed thanks to retry logic. After the OS is successfully reset, VMs restore their stored state and resume normal activity, thereby circumventing any potential VM reboots.

With the above principles we ensure that the failure of any single component in the host does not impact the entire system, making customer VMs more immune to unanticipated host faults. This also allows us to recover quickly from some of the most extreme forms of critical failures (like kernel level failures and firmware issues) while still retaining the virtual machine state that you care about.

Going forward

Currently we use the aforementioned Tardigrade recovery workflow to catch and quickly recover from potential software host failures in the Azure fleet. In parallel we are continuously innovating our technical capabilities and expanding to different host failure scenarios we can combat with this resiliency initiative.

We are also looking to explore the latest innovations in machine learning to harness the proactive capabilities of Project Tardigrade. For example, we plan to leverage machine learning to predict more types of host failures as early as possible, such as by detecting abnormal resource utilization patterns on the host that may potentially impact its workloads. We will also leverage machine learning to help recommend appropriate repair actions (like Tardigrade recovery steps, potentially live migration, etc.), thereby optimizing our fleetwide recovery options.

As customers continue to shift business-critical workloads onto the Microsoft Azure cloud platform, we are constantly learning and improving so that we can continue to meet customer expectations around interruptions from unplanned failures. Reliability is and continues to be a core tenet of our trusted cloud commitments, alongside compliance, security, privacy, and transparency. Across all of these areas, we know that customer trust is earned and must be maintained, not just by saying the right thing but by doing the right thing. Platform resiliency as practiced by Project Tardigrade is already strengthening VM availability by ensuring that underlying host issues do not affect your VMs.

We will continue to share further improvements on this project and others like it, to be as transparent as possible about how we’re constantly improving platform reliability to empower your organization.
Quelle: Azure