How Loblaw turns to technology to support Canadians during COVID-19

The impact of COVID-19—and the shift in how we work, live, and shop—has tested every industry, and retailers in the grocery space are no exception. While hospitals and local governments have crisis plans for scenarios like the one we are experiencing, few grocery store operators had fully contemplated the implications of a global pandemic. Although traditional grocers saw new, digital-native competitors enter the market before COVID-19, a majority of consumers still relied on physical grocery stores to purchase their household essentials. For these traditional grocers, online grocery sales were minimal compared to brick-and-mortar sales, leading many to focus their IT attention on in-store experiences. Due to the pandemic, however, consumers’ online grocery spend increased from 1.5% to 9% on average in Canada alone. The drastic and immediate shift to online has put immense strain on grocers—from employee and community safety issues, to handling surges in online traffic that can crash websites, to struggling with inventory and order fulfilment. Some have weathered this new reality better than others, thanks to a variety of technologies that improve efficiency and help keep employees and customers safe.

Meeting unprecedented online demand

Loblaw is a great example of a grocer using technology to support its community and protect its employees, while also growing its business. A 100-year-old company, Loblaw is the largest grocery and retail pharmacy chain in Canada, with approximately 200,000 employees and more than 18 million shoppers active in its loyalty program. When COVID-19 first hit North America, Loblaw was one of a few grocery chains in Canada offering the ability to order online and pick up groceries the same day. As more shoppers moved online, the company shifted quickly to meet the rising demand. As online traffic and order volumes reached unprecedented levels, the performance of Loblaw’s online grocery websites began to strain under the load. Google Cloud then activated its BFCM (Black Friday Cyber Monday) protocols, including a dedicated war room with Loblaw Digital’s Technology team, where engineers from both companies worked side by side to quickly adjust the Loblaw platform and ensure an uninterrupted experience for shoppers. Together, Loblaw Digital and Google Cloud quickly stabilized the platform and settled into a level of online traffic that now seems to be the new normal. According to Hesham Fahmy, VP of Technology at Loblaw Digital, “Loblaw’s systems are operating at a scale comparable to other large global e-commerce retailers.”

Automating fulfilment with Takeoff

While Google was working to scale Loblaw’s e-commerce system, the grocer was also searching for new ways to improve the fulfilment process and keep up with order volume. The Loblaw Digital PC Express team took several steps to reduce the bottleneck, such as hiring thousands of new personal shoppers, adding thousands of pickup slots every week, and introducing new technology to increase capacity across the country. Fortunately, Loblaw was in the process of rolling out its first Micro Fulfillment Center (MFC) with Google Cloud partner Takeoff Technologies. Takeoff’s MFC is essentially a small-scale automation and fulfillment solution placed within an existing storefront, in a space that can be as small as two or three grocery aisles. The MFC uses a robotic racking system and cloud and AI technology, powered by Google Cloud, to store, pack, and fulfill orders.
The efficiency of automation helps to drastically reduce what’s known as “last mile” costs by keeping products as close to the customer as possible. While the MFC implementation had been in progress for nearly a year, its completion couldn’t have come at a better time. The new technology opens up additional availability for orders and has the capacity to support order volume for multiple PC Express locations in close proximity. To get the MFC up and running ahead of schedule, Waltham, Mass.-based Takeoff dispatched employees to Canada armed with webcams and Google Meet, Google’s premium video conferencing solution, to handle the last steps of go-live. This process would normally take 12 or more employees, but Takeoff only needed to send two. After two weeks of self-quarantine in Canada, the employees collaborated with their team back home via Google Meet to ensure an effective rollout. With the MFC in place, colleagues are able to pick and pack items faster than they could manually. As José V. Aguerrevere and Max Pedró, co-founders of Takeoff, put it: “It’s a great example of how automation can help support employee workloads to alleviate a time-consuming and costly process. This type of hyperlocal automation will help local firms not only survive, but thrive. In the long run, it also has the potential to lower food prices, decrease the footprint of stores, and feed data back to suppliers to reduce food and packaging waste, which could eventually help our planet.” Takeoff’s Chief Technology Officer, Zach LaValley, elaborated: “Google has allowed us to shine, particularly in our recent launch with Loblaw; from their solution architecture partnership during the implementation phase, to the stability and reliability of their cloud platform, to the ease of using Google Meet to remotely launch a new site. We have an ambitious mission to transform the grocery industry, and our services have never been more vital. Google provides the reliability, scalability, and global perspective we need in order to provide the top-tier service our retail partners deserve and need at this time.” Loblaw is Takeoff’s first Canadian facility to go live; its technology, built on Google Cloud’s scalable platform, is expected to be live in 53 retail chains across the United States, Australia, New Zealand, and the Middle East by the end of 2020.

Building on a Google Cloud foundation

The technology groundwork laid by Loblaw Digital has enabled the company to respond quickly to sudden shifts in shopping dynamics. For the last two years, Google Cloud and Loblaw Digital have been working hand in hand on a broader digital roadmap that started with PC Express and expanded to include a new marketplace for pet supplies, toys, baby essentials, home decor, and other items that aren’t available in brick-and-mortar stores. Along the way, Loblaw Digital has gotten more efficient at building new online platforms using Google Cloud as the foundation. PC Express was completed in less than six months, while Loblaw’s new marketplace was up and running in just weeks. The teams are now consolidating multiple data sources in Google Cloud, which will give them the ability to look across their data and find new ways to serve customers. “In a matter of days, our online traffic spiked four-fold,” said Sharon Lansing, Vice President, Online Grocery.
“During this time, it was critical that our teams were able to find ways to better serve our customers and ensure that we were able to deliver that service quickly.” Loblaw’s foresight and investments in technology enabled the company to react and adapt quickly to COVID-19. To learn more, read the Loblaw case study.
Source: Google Cloud Platform

Mitigating Web Scraping with reCAPTCHA Enterprise

As more and more businesses post content, pricing, and other information on their websites, that information has become more valuable than ever in today’s digital age. Web scraping—also commonly referred to as web harvesting or web extracting—is the act of extracting information from websites across the internet, and it’s becoming so common that some companies have separate terms and conditions for automated data collection. In this blog post, however, we’ll examine the rising trend of malicious web scraping, how and why it happens, and how it can be mitigated with reCAPTCHA Enterprise.

Web scraping 101

Gathering information from across the Internet manually would be time consuming and tedious. Bots let companies and individuals automate web scraping in real time, retrieving and storing information far faster than a human ever could.

Two of the most common types of web scraping are price scraping and content scraping. Price scraping is used to gather the pricing details of products and services posted on a website. Competitors can gain tremendous value by knowing each other’s products, offerings, and prices. Bots can be used to scrape that information and find out when competitors place an item on sale or when they make updates to their products. This information can then be used to undercut prices or make better competitive decisions. Content scraping is the theft of huge amounts of data from a specific site or sites. Content can be stolen and then reposted on other sites or distributed through other means, which can lead to a huge loss of advertising revenue or traffic to digital content. This information can also be resold to competitors or used in other bot campaigns, like spamming.

Web scraping can also negatively impact how your site utilizes resources. Bots often consume more website resources than humans do because they can make requests much faster and more frequently. In addition, they search for information everywhere, often ignoring a site’s robots.txt file, which sets guidelines for what should be scraped. This can cause performance degradation for real users and increased compute costs from serving content to scraping bots.

How reCAPTCHA Enterprise can help

Scrapers who are abusing your site and retrieving data will often try to avoid detection in a manner similar to malicious actors performing credential stuffing attacks. For example, these bots may be hiding in plain sight, attempting to appear as a legitimate service in their user agent string and request patterns. reCAPTCHA Enterprise can identify these bots and continue to identify them as their methods evolve, without interfering with human consumers. Sophisticated and motivated attackers can easily bypass static rules. With its advanced artificial intelligence and machine learning, reCAPTCHA Enterprise can identify bots that are working silently in the background. It then gives you the tools and visibility to prevent those bots from accessing your valuable web content and to reduce the computational power spent on serving content to them. This has the added benefit of letting security administrators spend less time writing manual firewall and detection rules to mitigate dynamic botnets. In today’s threat landscape, fighting automated threats requires behavioral analysis. reCAPTCHA Enterprise can also give you visibility into just how many bots are accessing your web pages and how often.
Most importantly, reCAPTCHA Enterprise’s detection won’t slow down or interfere with your end users and customers, providing protections with zero friction for your most important users—real humans.
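To make this concrete, here is a minimal sketch of how a backend might score an incoming request with the reCAPTCHA Enterprise assessments REST API; the project ID, API key, site key, token, and action name below are all placeholders:

    # Score a client-supplied token; a low score suggests an automated client.
    curl -s -X POST \
      "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments?key=API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "event": {
              "token": "CLIENT_TOKEN",
              "siteKey": "SITE_KEY",
              "expectedAction": "view_catalog"
            }
          }'

The response includes a riskAnalysis.score between 0.0 (likely a bot) and 1.0 (likely human), which your application can use to decide whether to serve content, throttle the request, or block it outright.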
Source: Google Cloud Platform

Introducing Azure Load Balancer insights using Azure Monitor for Networks

We are excited to announce that Azure Load Balancer customers now have instant access to a packaged solution for health monitoring and configuration analysis. Built as part of Azure Monitor for Networks, it provides topological maps for all your Load Balancer configurations and health dashboards for Standard Load Balancers, preconfigured with relevant metrics.

Through this, you have a window into the health and configuration of your networks, enabling rapid fault localization and informed design decisions. You can access this through the Insights blade of each Load Balancer resource and Azure Monitor for Networks, a central hub that provides access to health and connectivity monitoring for all your network resources.

Visualize functional dependencies

The functional dependency view will enable you to picture even the most complex load balancer setups. With visual feedback on Load Balancing rules, Inbound NAT rules, and backend pool resources, you can make updates while keeping a complete picture of your configuration in mind.

For Standard Load Balancers, your backend pool resources are color-coded with Health Probe status, empowering you to visualize the current availability of your network to serve traffic. Alongside this topology, you are presented with a time-series graph of health status, giving a snapshot view of the health of your application.

Monitor a rich metric dashboard with no setup needed

After reviewing your topology, you may want to dig even further into the data to understand how your Load Balancer is performing through the detailed metrics page. The detailed metrics page is a dashboard preconfigured with separate tabs providing useful insights into Availability, Data Throughput, Flow Distribution, and Connection Latency.

The Overview tab provides a high-level view and from here you can visit the Frontend & Backend Availability or Data Throughput tabs for more in-depth information.

Through the Frontend and Backend Availability tabs, you are provided with a breakdown of your Load Balancer and backend pool health status over time. You can consult the Data Throughput tab to learn how much data passes through your services by frontend IP, frontend port, and direction.
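These dashboard views are built on standard Azure Monitor metrics, so the same data can be pulled programmatically. A rough sketch with the Azure CLI, using placeholder resource names (VipAvailability is the data path availability metric and DipAvailability the health probe status metric for Standard Load Balancer):

    # Data path availability and health probe status, sampled every minute.
    az monitor metrics list \
      --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/loadBalancers/<lb-name>" \
      --metric VipAvailability DipAvailability \
      --interval PT1M

    # Bytes processed, split by frontend IP address and direction.
    az monitor metrics list \
      --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/loadBalancers/<lb-name>" \
      --metric ByteCount \
      --filter "FrontendIPAddress eq '*' and Direction eq '*'" \
      --interval PT5M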

The Flow Distribution tab provides visualization of load distribution amongst backend resources. This enables you to see the number of flows being created by each backend instance and to keep track of whether you are approaching the limit.

The Connection Monitors tab plots round-trip latencies from Connection Monitors across the globe on a map. With this, you can evaluate the performance impact that distance from regions around the world has on your service.

The new monitoring experience is seamless and straightforward to use, with integrated guides and instructions provided as part of each tab.

One place for all your network monitoring needs

Azure Monitor for Networks fully supports the new monitoring and insights experience for Azure Load Balancer. With all your network resource metrics in a single place, you can quickly filter by type, subscription, and keyword to view the health, connectivity, and alert status of all your Azure network resources such as Azure Firewalls, ExpressRoute, and Application Gateways.

As we rapidly transition to the cloud and applications become more complex, customers need tools to easily maintain, monitor, and update their network configurations. With the integration of Azure Load Balancer with Azure Monitor for Networks, we deliver a piece of this and look forward to continuing to provide our valued customers with the best-in-class experience they deserve.

Next steps

Learn more about the Azure Load Balancer, Azure Monitor for Networks, and Network Watcher.
Deploy your first Load Balancer, customize your metrics, and create a Connection Monitor.
Give us feedback on this and new features you want to see.

Source: Azure

Azure Support API: Create and manage Azure support tickets programmatically

Large enterprise customers running business-critical workloads on Azure manage thousands of subscriptions and use automation for deployment and management of their Azure resources. Expert support for these customers is critical in achieving success and operational health of their business. Today, customers can keep running their Azure solutions smoothly with self-help resources, such as diagnosing and solving problems in the Azure portal, and by creating support tickets to work directly with technical support engineers.

We have heard feedback from our customers and partners that automating support procedures is key to helping them move faster in the cloud and focus on their core business. Integrating internal monitoring applications and websites with Azure support tickets has been one of their top asks. Customers expect to create, view, and manage support tickets without having to sign in to the Azure portal. This gives them the flexibility to associate the issues they are tracking with the support tickets they raise with Microsoft. The ability to programmatically raise and manage support tickets when an issue occurs is a critical step in making Azure easier to operate.

We’re happy to share that the Azure Support API is now generally available. With this API, customers can integrate the creation and management of support tickets directly into their IT service management (ITSM) system, and automate common procedures.

Using the Azure Support API, you can:

Create a support ticket for technical, billing, subscription management, and subscription and service limits (quota) issues.
Get a list of support tickets with detailed information, and filter by status or created date.
Update severity, status, and contact information.
Manage all communications for a support ticket.
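
As a rough sketch, creating a technical support ticket is a single call to the underlying REST API; the bearer token, ticket name, service and problem classification IDs, and contact details below are all placeholders:

    curl -X PUT \
      "https://management.azure.com/subscriptions/<sub-id>/providers/Microsoft.Support/supportTickets/<ticket-name>?api-version=2020-04-01" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
            "properties": {
              "severity": "moderate",
              "title": "Example: VM fails to start after resize",
              "description": "Details of the issue go here.",
              "serviceId": "<service-arm-id>",
              "problemClassificationId": "<problem-classification-arm-id>",
              "contactDetails": {
                "firstName": "Jane",
                "lastName": "Doe",
                "primaryEmailAddress": "jane@contoso.com",
                "preferredContactMethod": "email",
                "preferredTimeZone": "Pacific Standard Time",
                "preferredSupportLanguage": "en-US",
                "country": "USA"
              }
            }
          }'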

Benefits of Azure Support API

Reduce the time between finding an issue and getting support from Microsoft

A typical troubleshooting process when a customer encounters an Azure issue starts in the customer’s own monitoring and incident-management tooling.

If the issue is unresolved and identified to be on the Azure side, customers have had to navigate to the Azure portal to contact support. With programmatic case management access, customers can automate their support process with their internal tooling to create and manage their support tickets, thus reducing the time between finding an issue and contacting support.

Customers now have one end-to-end process that flows smoothly from internal to external systems, without the person filing the issue having to deal with the complexity of bridging separate case management systems.

Create support tickets via ARM templates

Deploying an ARM template that creates resources can sometimes result in a ResourceQuotaExceeded deployment error, indicating that you have exceeded your Azure subscription and service limits (quotas). This happens because quotas are applied at the resource group, subscription, account, and other scopes. For example, your subscription may be configured to limit the number of cores for a region. If you attempt to deploy a virtual machine with more cores than the permitted amount, you receive an error stating the quota has been exceeded. The way to resolve it is to request a quota increase by filing a support ticket. With the Support API in place, you can avoid signing in to the Azure portal to create a ticket and instead request quota increases directly via ARM templates.
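
A minimal sketch of what such a template might look like; the classification IDs, quota payload, and contact details are placeholders, and the exact property shape should be checked against the Microsoft.Support/supportTickets template reference:

    # quota-ticket.json: a subscription-scope template that files a quota ticket.
    cat > quota-ticket.json <<'EOF'
    {
      "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": [
        {
          "type": "Microsoft.Support/supportTickets",
          "apiVersion": "2020-04-01",
          "name": "CoresQuotaIncrease",
          "properties": {
            "severity": "minimal",
            "title": "Increase regional vCPU quota",
            "description": "Request to raise the vCPU limit in westus.",
            "serviceId": "<quota-service-arm-id>",
            "problemClassificationId": "<quota-problem-classification-arm-id>",
            "quotaTicketDetails": { "quotaChangeRequestVersion": "1.0" },
            "contactDetails": {
              "firstName": "Jane", "lastName": "Doe",
              "primaryEmailAddress": "jane@contoso.com",
              "preferredContactMethod": "email",
              "preferredTimeZone": "Pacific Standard Time",
              "preferredSupportLanguage": "en-US", "country": "USA"
            }
          }
        }
      ]
    }
    EOF
    az deployment sub create --location westus --template-file quota-ticket.json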

Getting started

The Azure Support API is available with a Professional Direct, Premier, or Unified technical support plan.

For detailed examples using .NET and C#, refer to our code samples.

View the list of all languages and interfaces we support for ticket creation and management. As always, you can also directly use the Support REST API.

Use the API and tell us about it

We are looking forward to hearing your feedback about the Azure Support API. In the Azure support feedback forum, you can post ideas and suggestions for the API and other aspects of the support experience.

To report an API issue, go to the issues section of the GitHub repository for the language or interface you're using. For example, go to the repository for issues with the PowerShell cmdlets. Select New issue and tag it with the labels Support and Service Attention.
Source: Azure

Deploy to Azure Container Instances with Docker Desktop

This blog was co-authored by MacKenzie Olson, Program Manager, Azure Container Instances. 

Today we’re excited about the first release of the new Docker Desktop integration with Microsoft Azure. Last month Microsoft and Docker announced this collaboration, and today you can experience it for yourself.

The new edge release of Docker Desktop provides an integration between Docker and Microsoft Azure that enables you to use native Docker commands to run your applications as serverless containers with Azure Container Instances.

You can use the Docker CLI to quickly and easily sign into Azure, create a Container Instances context using an Azure subscription and resource group, then run your single-container applications on Container Instances using docker run. You can also deploy multi-container applications to Container Instances that are defined in a Docker Compose file using docker compose up.

Code-to-cloud with serverless containers

Azure Container Instances is a great solution for running a single Docker container or an application comprised of multiple containers defined with a Docker Compose file. With Container Instances, you can run your containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. Because there is no infrastructure or platform management overhead, Container Instances caters to those who need to quickly run containers in the cloud.

Container Instances is also a good target for running the same workloads in production. In production cases, we recommend leveraging Docker commands inside an automated CI/CD flow. This saves time otherwise spent rewriting configuration files, because the same Dockerfile and Docker Compose files can be deployed to production with tools such as GitHub Actions. Container Instances also has a pay-as-you-go pricing model, which means you are only billed for CPU and memory consumption per second, and only while the container is running.

Let’s look at the new Docker Azure integration using an example. We have a worker container that continually pulls orders off a queue and performs necessary order processing. Here are the steps to run this in Container Instances with native Docker commands:

Run a single container

The new Docker CLI integration with Azure makes it easy to get a container running in Azure Container Instances. Using only the Docker CLI, you can log in to Azure with multi-factor authentication and create a Docker context that uses Container Instances as the backend. Detailed information on Container Instances contexts can be found in the documentation.
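
A minimal sketch of that flow, with a placeholder context name:

    # Opens a browser for Azure sign-in, including multi-factor authentication.
    docker login azure

    # Create a context backed by Container Instances; without flags, the CLI
    # prompts interactively for a subscription and resource group.
    docker context create aci myacicontext

    # Point subsequent Docker commands at the new context.
    docker context use myacicontext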

Once the new Container Instances context is created it can be used to target Container Instances with many of the standard Docker commands you likely already use; like docker run, docker ps, and docker rm. Running a simple docker run <image> command will start a container in Container Instances using the image that is stored in a registry like Docker Hub or Azure Container Registry. You can run other common Docker commands to inspect, attach-to, and view logs from the running container.
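
For example, assuming the context above and a public image (the image name and container ID are placeholders):

    # Start a container in Container Instances straight from a registry image.
    docker run -d nginx

    # List, inspect logs, and clean up, all against the ACI backend.
    docker ps
    docker logs <container-id>
    docker rm <container-id>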

Use Docker Compose to deploy a multi-container app

We see many containerized applications that consist of a few related containers. Sidecar containers often perform logging or signing services for the main container. With the new Docker Azure integration, you can use Docker Compose to describe these multi-container applications.

You can use a Container Instances context and a Docker Compose file as part of your edit-build-debug inner loop, as well as your CI/CD flows. This enables you to use docker compose up and down commands to spin up or shut down multiple containers at once in Container Instances.
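
In practice the loop can be as small as this, assuming a Compose file in the working directory:

    # Deploy the multi-container app described by docker-compose.yml to ACI.
    docker context use myacicontext
    docker compose up

    # Tear everything down when finished, so no resources keep billing.
    docker compose down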

Visual Studio Code for an even better experience

The Visual Studio Code Docker extension provides you with an integrated experience to start, stop, and manage your containers, images, contexts, and more. Use the extension to scaffold Dockerfiles and Docker Compose files for any language. For Node.js, Python, and .NET, you get integrated, one-click debugging of your app inside the container. And then of course there is the Explorer, which has multiple panels that make the management of your Docker objects easy from right inside Visual Studio Code.

Use the Containers panel to list, start, stop, inspect, view logs, and more.


From the Images panel you can list, pull, tag, and push your images.

Connect to Azure Container Registry and Docker Hub in the Registries panel to view and manage your images in the cloud. You can even deploy straight to Azure.


The Contexts panel lets you list all your contexts and quickly switch between them. When you switch context, the other panels will refresh to show the Docker objects from the selected context. Container Instances contexts will be fully supported in the next release of the Docker extension.

Try it out

To start using the Docker Azure integration, install the Docker Desktop Edge release. You can leverage the current Visual Studio Code Docker extension today; Container Instances context support will be added very soon.

To learn more about the Docker Desktop release, you can read this blog post from Docker. You can find more information in the documentation for using Docker Container Instances contexts.
Source: Azure

The next frontier in machine learning: driving responsible practices

Organizations around the world are gearing up for a future powered by artificial intelligence (AI). From supply chain systems to genomics, and from predictive maintenance to autonomous systems, every aspect of the transformation is making use of AI. This raises a very important question: How are we making sure that the AI systems and models show the right ethical behavior and deliver results that can be explained and backed with data?

This week at Spark + AI Summit, we talked about Microsoft’s commitment to the advancement of AI and machine learning driven by principles that put people first.

Understand, protect, and control your machine learning solution

Over the past several years, machine learning has moved out of research labs and into the mainstream and has grown from a niche discipline for data scientists with PhDs to one where all developers are empowered to participate. With power comes responsibility. As the audience for machine learning expands, practitioners are increasingly asked to build AI systems that are easy to explain and that comply with privacy regulations.

To navigate these hurdles, we at Microsoft, in collaboration with the Aether Committee and its working groups, have made available our responsible machine learning (responsible ML) innovations that help developers understand, protect, and control their models throughout the machine learning lifecycle. These capabilities can be accessed in any Python-based environment and have been open sourced on GitHub to invite community contributions.

Understanding model behavior includes being able to explain models and remove any unfairness within them. The interpretability and fairness assessment capabilities, powered by the InterpretML and Fairlearn toolkits respectively, enable this understanding. These toolkits help determine model behavior, mitigate unfairness, and improve transparency within models.

Protecting the data used to create models, by ensuring data privacy and confidentiality, is another important aspect of responsible ML. We’ve released a differential privacy toolkit, developed in collaboration with researchers at the Harvard Institute for Quantitative Social Science and School of Engineering. The toolkit applies statistical noise to the data while tracking an overall privacy budget, protecting an individual’s privacy while still allowing the machine learning process to produce useful results.
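
For intuition, one classic mechanism behind this idea (a general illustration, not necessarily the toolkit’s exact algorithm) is the Laplace mechanism, which releases a query f of sensitivity Δf under privacy budget ε as

    \tilde{f}(x) = f(x) + \mathrm{Lap}\!\left(\frac{\Delta f}{\varepsilon}\right)

Each released answer spends part of the overall budget; once the budget is exhausted, no further queries are answered.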

Controlling models and their metadata, with features like audit trails and datasheets, brings the responsible ML capabilities full circle. In Azure Machine Learning, auditing capabilities track all actions throughout the lifecycle of a machine learning model. For compliance reasons, organizations can leverage this audit trail to trace how and why a model’s predictions showed certain behavior.

Many customers, such as EY and Scandinavian Airlines, use these capabilities today to build ethical, compliant, transparent, and trustworthy solutions while improving their customer experiences.

Our continued commitment to open source

In addition to open sourcing our responsible ML toolkits, there are two more projects we are sharing with the community. The first is Hyperspace, a new extensible indexing subsystem for Apache Spark. It is designed to work as a simple add-on and comes with Scala, Python, and .NET support. Hyperspace is the same technology that powers the indexing engine inside Azure Synapse Analytics. In benchmarking against common workloads like TPC-H and TPC-DS, Hyperspace has provided gains of 2x and 1.8x, respectively. Hyperspace is now on GitHub, and we look forward to seeing new ideas and contributions that make Apache Spark’s performance even better.

The second is a preview of ONNX Runtime’s support for accelerated training. The latest release of training acceleration incorporates innovations from the AI at Scale initiative, such as ZeRO optimization and Project Parasail, which improve memory utilization and parallelism on GPUs.

We deeply value our partnership with the open source community and look forward to collaborating to establish responsible ML practices in the industry.

Additional resources

Learn more about responsible ML.
Walk through an interactive demo for responsible ML.
Read the IDC white paper on responsible AI.
Use the Azure architecture center for proven architectures on analytics and AI.


Source: Azure

Running a container in Microsoft Azure Container Instances (ACI) with Docker Desktop Edge

Earlier this month Docker announced our partnership with Microsoft to shorten the developer commute between the desktop and running containers in the cloud. We are excited to announce the first release of the new Docker Azure Container Instances (ACI) experience today and wanted to give you an overview of how you can get started using it.

The new Docker and Microsoft ACI experience allows developers to easily move between working locally and in the cloud with ACI, using the same Docker CLI experience they use today! We have done this by expanding the existing docker context command to support ACI as a new backend. We worked with Microsoft to target ACI because we felt its performance and ‘zero cost when nothing is running’ made it a great place to jump into running containers in the cloud.

ACI is a Microsoft serverless container solution for running a single Docker container or a service composed of a group of multiple containers defined with a Docker Compose file. Developers can run their containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. For production cases, you can leverage Docker commands inside of an automated CI/CD flow.

Thanks to this new ACI context, you can now easily run a single container in Microsoft ACI using the docker run command but also multi-container applications using the docker compose up command.

This new experience is now available as part of Docker Desktop Edge 2.3.2. To get started, simply download the latest Edge release, or update if you are already on Desktop Edge.

Create an ACI context

Once you have the latest version, you will need to start by logging into an Azure account. If you don’t have one, you can sign up for one with $200 of credit for 30 days to try out the experience here. Once you have an account, you can get started in the Docker CLI by logging into Azure:
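
    # Opens a browser window to sign in to Azure, including MFA.
    docker login azure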

This will load the Azure authentication page, allowing you to log in using your credentials and multi-factor authentication (MFA). Once you have authenticated, you will see a login succeeded message in the CLI; you are now ready to create your first ACI context. To do this, you will need to use the docker context create aci command. You can either pass an Azure subscription and resource group to the command, use the interactive CLI to choose them, or even create a new resource group. For this example, I will deploy to my default resource group.
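
    # Create the ACI context; without flags, the CLI walks you through
    # choosing a subscription and resource group interactively.
    docker context create aci myacicontext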

My context is then created, and I can check this using docker context ls:
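
    # The new ACI context now appears alongside the default local context.
    docker context ls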

Single Container Application Example

Before I use this context, I am going to test my application locally to check that everything is working as expected. I am just going to use a very simple web server serving a static HTML page.

I start by building my image and then running it locally:
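
    # Build the image and run it locally in the default context; the host
    # port mapping here is illustrative.
    docker build -t bengotch/simplewhale .
    docker run -d -p 8080:80 bengotch/simplewhale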

Getting ready to run my container on ACI, I now push my image to Docker Hub using docker push bengotch/simplewhale and then change my context using docker context use myacicontext. From that moment on, all subsequent commands run against this ACI context.

I can check that no containers are running in my new context using docker ps. To run my container on ACI, I only need to repeat the very same docker run command as earlier. I can see my container is running and use the IP address to access it running in ACI!
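
Putting those steps together, roughly (the port mapping is illustrative):

    docker push bengotch/simplewhale        # push the image to Docker Hub
    docker context use myacicontext         # target ACI from now on
    docker ps                               # nothing running yet
    docker run -d -p 80:80 bengotch/simplewhale
    docker ps                               # shows the container and its public IP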

I can now remove my container using docker rm. Note that once the command has been executed, nothing is running on ACI and all resources are removed from ACI – resulting in no ongoing cost.

Multi-Container Application Example

With the new Docker ACI experience, we can also deploy multi-container applications using Docker Compose. Let’s take a look at a simple three-part application with a Java backend, Go frontend, and Postgres database:

To start, I swap to my default (local) context and run docker-compose up to run my app locally.
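
    # Run the app locally first, using the classic compose CLI.
    docker context use default
    docker-compose up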

I then check to see that I can access it and see it running locally:

Now I swap over to my ACI context using docker context use myacicontext and run my app again. This time I use the new syntax docker compose up (note the lack of a ‘-’ between docker and compose).
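
    # The same Compose file, deployed to Azure via the ACI context.
    docker context use myacicontext
    docker compose up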

And I can then go and see if this is working using its public IP address:

I have now run both my single container locally and in the cloud, along with running my multi-container app locally and in the cloud – all using the same artifacts and using the same Docker experience that I know and love!

Try out the new experience!

To try out the new experience, download the latest version of Docker Desktop Edge today. You can raise bugs on our beta repo and let us know what other features you would like to see integrated by adding an issue to the Docker Roadmap!
Source: https://blog.docker.com/feed/