Deploy to Azure Container Instances with Docker Desktop

This blog was co-authored by MacKenzie Olson, Program Manager, Azure Container Instances. 

Today we’re excited about the first release of the new Docker Desktop integration with Microsoft Azure. Last month Microsoft and Docker announced this collaboration, and today you can experience it for yourself.

The new edge release of Docker Desktop provides an integration between Docker and Microsoft Azure that enables you to use native Docker commands to run your applications as serverless containers with Azure Container Instances.

You can use the Docker CLI to quickly and easily sign in to Azure, create a Container Instances context using an Azure subscription and resource group, and then run your single-container applications on Container Instances using docker run. You can also use docker compose up to deploy multi-container applications, defined in a Docker Compose file, to Container Instances.
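As a sketch, the first two steps look like this (the context and resource group names below are placeholders):

```shell
# Sign in to Azure (opens a browser for multi-factor authentication)
docker login azure

# Create a Docker context backed by Azure Container Instances; the
# subscription and resource group determine where containers will run
docker context create aci myacicontext --resource-group myResourceGroup

# Make the new context the default for subsequent Docker commands
docker context use myacicontext
```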

Code-to-Cloud with serverless containers

Azure Container Instances is a great solution for running a single Docker container or an application composed of multiple containers defined with a Docker Compose file. With Container Instances, you can run your containers in the cloud without needing to set up any infrastructure, and you can take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. Because there is no infrastructure or platform management overhead, Container Instances caters to those who need to quickly run containers in the cloud.

Container Instances is also a good target for running the same workloads in production. For production scenarios, we recommend using Docker commands inside an automated CI/CD flow. Because the same Dockerfile and Docker Compose files can be deployed to production with tools such as GitHub Actions, you avoid rewriting configuration files. Container Instances also has a pay-as-you-go pricing model, which means you are billed for CPU and memory consumption per second, only while the container is running.

Let’s look at the new Docker Azure integration using an example. We have a worker container that continually pulls orders off a queue and performs necessary order processing. Here are the steps to run this in Container Instances with native Docker commands:

Run a single container

The new Docker CLI integration with Azure makes it easy to get a container running in Azure Container Instances. Using only the Docker CLI, you can log in to Azure with multi-factor authentication and create a Docker context that uses Container Instances as the backend. Detailed information on Container Instances contexts can be found in the documentation.

Once the new Container Instances context is created, it can be used to target Container Instances with many of the standard Docker commands you likely already use, like docker run, docker ps, and docker rm. Running a simple docker run <image> command will start a container in Container Instances using the image that is stored in a registry like Docker Hub or Azure Container Registry. You can run other common Docker commands to inspect, attach to, and view logs from the running container.
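For example, with a Container Instances context active, running and inspecting the order-processing worker might look like this (the image name and container name are placeholders):

```shell
# Start a container in Container Instances from an image in a registry
docker run -d --name ordersworker myregistry.azurecr.io/orders-worker:latest

# List containers running in the Container Instances context
docker ps

# Stream logs from the running container
docker logs ordersworker

# Remove the container when finished, which also stops billing
docker rm ordersworker
```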

Use Docker Compose to deploy a multi-container app

We see many containerized applications that consist of a few related containers. Sidecar containers often perform logging or signing services for the main container. With the new Docker Azure integration, you can use Docker Compose to describe these multi-container applications.

You can use a Container Instances context and a Docker Compose file as part of your edit-build-debug inner loop, as well as your CI/CD flows. This enables you to use docker compose up and down commands to spin up or shut down multiple containers at once in Container Instances.
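As an illustrative sketch, a two-service application (the service and image names are placeholders) can be described in a Compose file and deployed with a single command:

```shell
# Describe a web front end and a worker in docker-compose.yml
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: myregistry.azurecr.io/store-front:latest
    ports:
      - "80:80"
  worker:
    image: myregistry.azurecr.io/orders-worker:latest
EOF

# With a Container Instances context active, start both containers...
docker compose up

# ...and tear them down together when finished
docker compose down
```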

Visual Studio Code for an even better experience

The Visual Studio Code Docker extension provides you with an integrated experience to start, stop, and manage your containers, images, contexts, and more. Use the extension to scaffold Dockerfiles and Docker Compose files for any language. For Node.js, Python, and .NET, you get integrated, one-click debugging of your app inside the container. And then of course there is the Explorer, which has multiple panels that make the management of your Docker objects easy from right inside Visual Studio Code.

Use the Containers panel to list, start, stop, inspect, view logs, and more.

 

From the Images panel you can list, pull, tag, and push your images.

 
Connect to Azure Container Registry and Docker Hub in the Registries panel to view and manage your images in the cloud. You can even deploy straight to Azure.

 

The Contexts panel lets you list all your contexts and quickly switch between them. When you switch contexts, the other panels refresh to show the Docker objects from the selected context. Container Instances contexts will be fully supported in the next release of the Docker extension.

Try it out

To start using the Docker Azure integration, install the Docker Desktop edge release. You can use the current Visual Studio Code Docker extension today; Container Instances context support will be added soon.

To learn more about the Docker Desktop release, you can read this blog post from Docker. You can find more information in the documentation for using Docker Container Instances contexts.
Source: Azure

The next frontier in machine learning: driving responsible practices

Organizations around the world are gearing up for a future powered by artificial intelligence (AI). From supply chain systems to genomics, and from predictive maintenance to autonomous systems, every aspect of the transformation is making use of AI. This raises a very important question: How are we making sure that the AI systems and models show the right ethical behavior and deliver results that can be explained and backed with data?

This week at Spark + AI Summit, we talked about Microsoft’s commitment to the advancement of AI and machine learning driven by principles that put people first.

Understand, protect, and control your machine learning solution

Over the past several years, machine learning has moved out of research labs and into the mainstream and has grown from a niche discipline for data scientists with PhDs to one where all developers are empowered to participate. With power comes responsibility. As the audience for machine learning expands, practitioners are increasingly asked to build AI systems that are easy to explain and that comply with privacy regulations.

To navigate these hurdles, we at Microsoft, in collaboration with the Aether Committee and its working groups, have made available our responsible machine learning (responsible ML) innovations that help developers understand, protect and control their models throughout the machine learning lifecycle. These capabilities can be accessed in any Python-based environment and have been open sourced on GitHub to invite community contributions.

 
Understanding model behavior includes being able to explain models and to identify and mitigate any unfairness within them. The interpretability and fairness assessment capabilities, powered by the InterpretML and Fairlearn toolkits respectively, enable this understanding. These toolkits help determine model behavior, mitigate unfairness, and improve transparency within the models.
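To make the fairness idea concrete, here is a minimal standard-library sketch of the kind of disparity metric a fairness assessment computes: accuracy broken down by a sensitive feature, plus the largest gap between groups. (Fairlearn's MetricFrame provides this pattern, with many more metrics and mitigation algorithms on top; the data below is a toy example.)

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, sensitive):
    """Per-group accuracy for a sensitive feature, plus the max gap.

    A large accuracy gap between groups is one signal of model
    unfairness, the kind of disparity a fairness dashboard surfaces.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, sensitive):
        total[g] += 1
        correct[g] += int(yt == yp)
    by_group = {g: correct[g] / total[g] for g in total}
    gap = max(by_group.values()) - min(by_group.values())
    return by_group, gap

# Toy labels and predictions with a binary sensitive feature
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

by_group, gap = accuracy_by_group(y_true, y_pred, groups)
print(by_group, round(gap, 2))  # → {'A': 0.75, 'B': 0.5} 0.25
```

Here the model is right 75 percent of the time for group A but only 50 percent for group B; a mitigation step would aim to shrink that 0.25 gap.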

Protecting the data used to create models by ensuring data privacy and confidentiality is another important aspect of responsible ML. We’ve released a differential privacy toolkit, developed in collaboration with researchers at the Harvard Institute for Quantitative Social Science and School of Engineering. The toolkit applies statistical noise to the data while tracking an information budget, protecting an individual’s privacy while preserving the data’s utility for machine learning.
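As an illustration of the core mechanism (not the toolkit's actual API), here is how the Laplace mechanism adds calibrated noise to a count query: the noise scale grows with the query's sensitivity and shrinks as the privacy budget epsilon grows.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon, rng):
    """Differentially private count query.

    A count query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy. Each query spends
    epsilon from the overall information (privacy) budget.
    """
    sensitivity = 1.0
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 45, 52, 61, 29, 41, 38]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
print(noisy)  # close to the true count of 4, but randomized
```

The aggregate answer stays useful while no single individual's presence can be confidently inferred from it.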

Controlling models and their metadata with features like audit trails and datasheets brings the responsible ML capabilities full circle. In Azure Machine Learning, auditing capabilities track all actions throughout the lifecycle of a machine learning model. For compliance purposes, organizations can use this audit trail to trace how and why a model’s predictions showed certain behavior.

Many customers, such as EY and Scandinavian Airlines, use these capabilities today to build ethical, compliant, transparent, and trustworthy solutions while improving their customer experiences.

Our continued commitment to open source

In addition to open sourcing our responsible ML toolkits, we are sharing two more projects with the community. The first is Hyperspace, a new extensible indexing subsystem for Apache Spark. It is designed to work as a simple add-on and comes with Scala, Python, and .NET support. Hyperspace is the same technology that powers the indexing engine inside Azure Synapse Analytics. In benchmarks against common workloads like TPC-H and TPC-DS, Hyperspace has provided gains of 2x and 1.8x, respectively. Hyperspace is now on GitHub. We look forward to seeing new ideas and contributions that make Apache Spark’s performance even better.

The second is a preview of ONNX Runtime's support for accelerated training. The latest release of training acceleration incorporates innovations from the AI at Scale initiative, such as ZeRO optimization and Project Parasail, which improve memory utilization and parallelism on GPUs.

We deeply value our partnership with the open source community and look forward to collaborating to establish responsible ML practices in the industry.

Additional resources

Learn more about responsible ML.
Walk through an interactive demo for responsible ML.
Read the IDC white paper on responsible AI.
Use the Azure architecture center for proven architectures on analytics and AI.

 


Azure.com operates on Azure part 1: Design principles and best practices

Azure puts powerful cloud computing tools into the hands of creative people around the world. So, when your website is the face of that brand, you better use what you build, and it better be good. As in, 99.99-percent composite SLA good.

That’s our job at Azure.com, the platform where Microsoft hopes to inspire people to invent the next great thing. Azure.com serves up content to millions of people every day. It reaches people in nearly every country and is localized in 27 languages. It does all this while running on the very tools it promotes.

In developing Azure.com, we practice what we preach. We follow the guiding principles that we advise our customers to adopt and the principles of sustainable software engineering (SSE). Even this blog post is hosted on the very infrastructure that it describes.

In part one of our two-part series, we will peek behind the Azure.com web page to show you how we think about running a major brand website on a global scale. We will share our design approach and best practices for security, resiliency, scalability, availability, environmental sustainability, and cost-effective operations—on a global scale.

Products, features, and demos supported on Azure.com

As a content platform, Azure.com serves an audience of business and technical people—from S&P 500 enterprises to independent software vendors, and from government agencies to small businesses. To make sure our content reaches everyone, we follow Web Content Accessibility Guidelines (WCAG). We also adopted sustainable software engineering principles to help us responsibly achieve global scale and reduce our carbon footprint.

Azure.com supports static content, such as product and feature descriptions. But the fun is in the interactive components that let readers customize the details, like the products available by region page where we show service availability across 61 regions (and growing), the Azure updates page that keeps people informed about Azure changes, and the search box.

The Azure pricing page provides up-to-date pricing information for more than 200 services across multiple markets, and it factors in any discounts for which a signed-in user is eligible. We also built a comprehensive pricing calculator for all services. Prospective customers can calculate and share complex cost estimates in 24 currencies.

As a marketing channel, Azure.com also hosts demos. For example, we created in-browser interactive demos to display the benefits of Azure Cognitive Services, and we support streaming media for storytelling. We also provided a total cost of ownership (TCO) calculator for estimating cloud migration savings in 27 languages and 12 regions.

And did we mention the 99.99-percent composite SLA that Azure.com meets?

Pricing calculator: Interactive cost estimation tool for all Azure products and services.

History of Azure.com

As the number of Azure services has grown, so has our website, and it has always run on Azure. Azure.com is always a work in progress, but here are a few milestones in our development history:

2013: Azure.com begins life on the popular open-source Umbraco CMS. It markets seven Azure services divided into four categories: compute, data services, app services, and network.
2015: Azure.com moves to a custom ASP.NET Model View Controller (MVC) application hosted on Azure. It now supports 16 Azure services across four categories.
2020: Azure.com continues to expand its support of more categories of content. Today, the website describes more than 200 Azure offerings, including Azure services, capabilities, and features.

 

Azure.com timeline: Every year we support more great Azure products and services.

Design principles behind Azure.com

To create a solid architectural foundation for Azure.com, we follow the core pillars of great Azure architecture. These pillars are the design principles behind the security, performance, availability, and efficiency that make Azure.com run smoothly and meet our business goals.

Design principles: Azure.com follows the tenets of Azure architectural best practices.

You can take a class on how to Build great solutions with the Microsoft Azure Well-Architected Framework.

A pillar of security and resiliency

Like any cloud application, Azure.com requires security at all layers. That means everything covered by the Open Systems Interconnection (OSI) model, from the network to the application, web page, and backend dependencies. This is our defense-in-depth approach to security.

Part of that defense is guarding against malicious actors and bots that can saturate your compute resources, causing unnecessary scale-out and cost overruns. Resiliency, by contrast, isn’t about avoiding failure, but rather responding to failure in a way that avoids downtime and data loss.

One metric for resiliency is the recovery time objective (RTO), which specifies how long an application can remain offline after suffering an outage. For us, it’s less than 30 minutes. Failure mode analysis (FMA) is another way to assess resiliency; it involves planning for failures and running live fire drills. We use both methods to assess the resiliency of Azure.com.

Super scalable and highly available

Any cloud application needs enough scalability to handle peak loads. For Azure.com, peaks occur during major events and marketing campaigns. Regardless of the load, Azure.com requires high availability to support around-the-clock operations. We trust the platform to support business continuity and guard against unexpected outages, overloaded resources, or failures caused by upstream dependencies.

As a case in point, we rely on Azure scalability to handle the big spikes in demand during Microsoft Build and Microsoft Ignite, the largest annual events handled by Azure.com. The number of requests per second (RPS) jumps 20 to 30 percent as tens of thousands of event attendees flock to Azure.com to learn about newly announced Azure products and services.

Whatever the scale, the Azure platform provides reliable, sustainable operations that enable Microsoft and other companies to deliver premium content to our customers.

Cost-effective high performance is a core design principle

Our customers often tell us that they want to move to a cloud-based system to save money. It’s no different at Azure.com, where cost-efficient provisioning is a core design principle. Azure.com has a handy cost calculator to compare the cost of running on-premises to running on Azure.

Efficiency means having a way to track and optimize underutilized resources and use dynamic scaling to support seasonal traffic demands. This principle applies to all layers of the software development life cycle (SDLC), starting with managing all the work items, using a source code repository, and implementing continuous integration (CI) and continuous deployment (CD). Cost-efficiency extends to the way we provision and host resources in multiple environments, and maintain an inventory of our digital estate.

But being cost-conscious doesn’t mean giving up on speed. Top-notch performance requires minimal network latency, fast server response times, and consistent page load and render times. Azure.com performance always focuses on the user experience, so we optimize network routing and minimize round-trip time (RTT).

Operating with zero downtime

Uptime is important for any large web application. We aim for zero downtime. That means no service downtime—ever. It’s a lofty goal, but it’s possible when you use CI/CD practices that spare users from the effects of the build and deployment cycles.

For example, if we push a code update, we aim for no site downtime, no failed requests, and no adverse impact on Azure.com users. Our CI/CD pipeline is based on Azure DevOps and pumps out hundreds of builds and multiple deployments to the live production servers every day without a hitch.
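A zero-downtime deployment of this kind can be sketched as an Azure Pipelines definition that builds the app, deploys to a staging slot, and then swaps slots so live traffic never sees a cold or half-deployed build. (This is an illustrative config fragment, not our actual pipeline; the app, slot, and service connection names are placeholders.)

```yaml
trigger:
  branches:
    include: [main]

stages:
- stage: Build
  jobs:
  - job: BuildApp
    pool: { vmImage: 'windows-latest' }
    steps:
    - script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: webapp

- stage: Deploy
  jobs:
  - job: DeployToStagingSlot
    pool: { vmImage: 'windows-latest' }
    steps:
    - download: current
      artifact: webapp
    # Deploy to the staging slot, not directly to production
    - task: AzureWebApp@1
      inputs:
        azureSubscription: 'my-service-connection'
        appName: 'my-web-app'
        resourceGroupName: 'my-resource-group'
        deployToSlotOrASE: true
        slotName: 'staging'
        package: $(Pipeline.Workspace)/webapp
    # Swap staging into production only after the slot has warmed up,
    # so users never hit the build and deployment cycle
    - task: AzureAppServiceManage@0
      inputs:
        azureSubscription: 'my-service-connection'
        Action: 'Swap Slots'
        WebAppName: 'my-web-app'
        ResourceGroupName: 'my-resource-group'
        SourceSlot: 'staging'
```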

Another service level indicator (SLI) that we use is mean time to repair (MTTR); with this metric, lower is better. Minimizing MTTR requires DevOps tools for identifying and repairing bottlenecks or crashing processes.

Next steps

From our experience working on Azure.com, we can say that following these design principles and best practices improves application resiliency, lowers costs, boosts security, and ensures scalability.

To review the workings of your Azure architecture, consider taking the architecture assessment.

For more information about the Azure services that make up Azure.com, see the next article in this blog series, How Azure.com operates on Azure part 2: Technology and architecture.

How Azure.com operates on Azure part 2: Technology and architecture

When you’re the company that builds the cloud platforms used by millions of people, your own cloud content needs to be served up fast. Azure.com—a complex, cloud-based application that serves millions of people every day—is built entirely from Azure components and runs on Azure.

Microsoft culture has always been about using our own tools to run our business. Azure.com serves as an example of the convenient platform-as-a-service (PaaS) option that Azure provides for agile web development. We trust Azure to run Azure.com with 99.99-percent availability across a global network capable of a round-trip time (RTT) of less than 100 milliseconds per request.

In part two of our two-part series, we share our blueprint so you can learn from our experience building a website at planetary scale and move forward with your own website transformation.

This post will help you get a technical perspective on the infrastructure and resources that make up Azure.com. For details about our design principles, read Azure.com operates on Azure part 1: Design principles and best practices.

The architecture of a global footprint

With Azure.com, our goal is to run a world-class website in a cost-effective manner at planetary scale. To do this, we currently run more than 25 Azure services. (See Services in Azure.com below.)

This blog examines the role of the main services, such as Azure Front Door, which routes HTTP requests to the web front end, and Azure App Service, a fully managed platform for creating and deploying cloud applications.

The following diagram shows you a high-level view of the global Azure.com architecture.

On the left, networking services provide the secure endpoints and connectivity that give users instant access, no matter where they are in the world.
On the right, developers use Azure DevOps services to run a continuous integration (CI) and continuous deployment (CD) pipeline that delivers updates and features with zero downtime.
In between, a variety of PaaS options provide compute, storage, security, monitoring, and more.

Azure.com global architecture: A high-level look at the Azure services and dataflow.

Host globally, deliver regionally

The Azure.com architecture is hosted globally but runs locally in multiple regions for high availability. Azure App Service hosts Azure.com from the nearest global datacenter infrastructure, and its automatic scaling features ensure that Azure.com meets changing demands.

The diagram below shows a close-up of the regional architecture hosted in App Service. We use deployment slots to deploy to development, staging, and production environments. Deployment slots are live apps with their own host names. We can swap content and configurations between the slots while maintaining application availability.

Azure.com regional architecture: App Service hosts regional instances in slots.

A look at the key PaaS components behind Azure.com

Azure.com is a complex, multi-tier web application. We use PaaS options as much as possible because managed services save us time. Less time spent on infrastructure and operations means more time to create a world-class customer experience. The platform performs OS patching, capacity provisioning, and load balancing, so we’re free to focus elsewhere.

Azure DNS

Azure DNS enables self-service quick edits to DNS records, global nameservers with 100-percent availability, and blazing fast DNS response times via Anycast addressing. We use Azure DNS aliases for both CNAME and ANAME record types.
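For example, with the Azure CLI, creating a zone and an apex alias record is a couple of commands. (The zone name, resource IDs, and `<sub-id>` below are placeholders; the `--target-resource` option is what makes the record an alias that tracks the Azure resource.)

```shell
# Create a DNS zone for the domain
az network dns zone create \
  --resource-group myResourceGroup \
  --name contoso.com

# Create an alias A record at the zone apex that follows an Azure
# resource (for example, an Azure Front Door profile) by resource ID
az network dns record-set a create \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --name "@" \
  --target-resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/frontdoors/my-front-door"
```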

Azure Front Door Service

Azure Front Door Service enables low-latency TCP splitting, HTTP/2 multiplexing and concurrency, and performance-based global routing. We saw RTT drop to less than 100 milliseconds per request, because clients only need to connect to edge nodes, not directly to the origin.

For business continuity, Azure Front Door Service supports backend health probes, a resiliency pattern that in effect removes unhealthy regions from rotation when they misbehave. In addition, Azure.com uses priority-based traffic routing to enable a backup site. If our primary service backend goes offline, this method enables Azure Front Door Service to support ringed failovers.

Azure Front Door Service also acts as a reverse proxy, enabling pattern-based URL rewriting or request forwarding to handle dynamic traffic changes.

Web Application Firewall

Web Application Firewall (WAF) helps improve the platform’s security posture by shedding traffic from bad bots and protecting against OWASP Top 10 attacks at the application layer. WAF also encourages developers to pay closer attention to their data payloads, such as cookies, request URLs, form post parameters, and request headers.

We use WAF custom rules to block traffic based on geography, IP address, URL, and other request properties. These rules stop unwanted traffic at the network edge before it reaches your origin.
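With the Azure CLI’s front-door extension, a geo-blocking custom rule of this kind can be sketched as follows. (The policy and rule names are placeholders, and "XX"/"YY" stand in for real country codes; check the extension’s current syntax before relying on it.)

```shell
# Create a custom rule on an existing WAF policy that blocks requests
# from a set of countries; --defer batches the two calls locally
az network front-door waf-policy rule create \
  --resource-group myResourceGroup \
  --policy-name myWafPolicy \
  --name BlockGeo \
  --priority 10 \
  --rule-type MatchRule \
  --action Block \
  --defer

# Attach the match condition: remote address geo-matches the listed codes
az network front-door waf-policy rule match-condition add \
  --resource-group myResourceGroup \
  --policy-name myWafPolicy \
  --name BlockGeo \
  --match-variable RemoteAddr \
  --operator GeoMatch \
  --values "XX" "YY"
```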

Content Delivery Network

To reduce load times, Azure.com uses Content Delivery Network (CDN) to shed load from the origin. CDN helps us lower consumed bandwidth and keep costs down. It also improves performance by caching static assets at Point of Presence (POP) edge nodes, reducing RTT latency. Without CDN, our origin nodes would have to handle every request for static assets.

CDN also supports DDoS protection, improving app security. We enable CDN compression and HTTP/2 to optimize delivery for static payloads. Using CDN is also a sustainable approach to optimizing network traffic because it reduces the data movement across a network.

Azure App Service

We use App Service horizontal autoscaling to handle burst traffic. The Autoscale feature is simple to use and is based on Azure Monitor metrics for requests per second (RPS) per node. We also reduced our Azure expenses by 50 percent by using elastic compute—a benefit that directly reduces our carbon footprint.
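As a sketch with the Azure CLI (resource names are placeholders), an autoscale profile on an App Service plan with scale-out and scale-in rules looks like this. The condition is shown with the plan’s CPU metric for simplicity; a request-count metric can drive it the same way.

```shell
# Create an autoscale profile on the App Service plan
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myAppServicePlan \
  --resource-type Microsoft.Web/serverfarms \
  --name myAutoscale \
  --min-count 2 --max-count 10 --count 2

# Scale out by 1 instance when the condition holds for 5 minutes
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscale \
  --condition "CpuPercentage > 70 avg 5m" \
  --scale out 1

# Scale back in when load drops, to avoid paying for idle instances
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscale \
  --condition "CpuPercentage < 30 avg 10m" \
  --scale in 1
```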

Azure.com uses several other handy App Service features:

Always On means there’s no idle timeout.
Application initialization provides custom warmup and validation.
VIP swap blue-green deployment pattern supports zero-downtime deployments.
To reduce network latency to the edge, we run our app in 12 geographically separate datacenters. This practice supports geo-redundancy should one or more datacenters go dark.
To improve app performance, we use the App Service DaaS – .NET profiler. This feature identifies node bottlenecks and hotspots caused by poorly performing code blocks or slow dependencies.
For disaster recovery and improved mean time to recovery (MTTR), we use slot swap. If an app deployment exception is not caught by our PPE testing, we can quickly roll back to the last stable version.
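The slot swap in the last bullet is a single CLI call; because it is a VIP swap rather than a redeploy, rolling back is as fast as rolling forward (names below are placeholders):

```shell
# Swap the staging slot into production. This is also the rollback
# path: swapping again restores the previously deployed version.
az webapp deployment slot swap \
  --resource-group myResourceGroup \
  --name my-web-app \
  --slot staging \
  --target-slot production
```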

App Service is a PaaS offering, which means we don't have to worry about the virtual machine (VM) infrastructure, OS updates, app frameworks, or the downtime associated with managing them. We follow the paired-region concept when choosing our datacenters to mitigate rolling infrastructure updates and ensure improved isolation and resiliency.

As a final note, it’s important to choose the right App Service plan tier so that you can right-size your vertical scaling. The plan you choose also affects sustainable energy proportionality, which means running instances at a higher utilization rate to maximize carbon efficiency.

DaaS – .NET Profiler: identifying code bottlenecks and measuring improvements. In this case, we found our HTML whitespace “minifier” was saturating our compute nodes. After disabling it, we verified that response times and CPU usage improved significantly.

Azure Monitor

Azure Monitor enables passive health monitoring over Application Insights, Log Analytics, and Azure Data Explorer data sources. We rely on query-based monitor alerts to build configuration-based health models from our telemetry logs, so we know when our app is misbehaving before our customers tell us.

For example, we monitor CPU consumption by datacenter as the following screenshot shows. If we see sustained, high CPU usage for our app metrics, Monitor can trigger a notification to our response team, who can quickly respond, triage the problem, and help improve MTTR. We also receive proactive notifications if a client-browser is misbehaving or throwing console errors, such as when Safari changes a specific push and replace state pattern.
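A sustained-CPU alert like the one described can be sketched with the Azure CLI. (The scope resource ID, action group, and `<sub-id>` below are placeholders.)

```shell
# Alert when average CPU stays above 80 percent over a 5-minute window,
# evaluated every minute, and notify the response team's action group
az monitor metrics alert create \
  --resource-group myResourceGroup \
  --name high-cpu \
  --scopes "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan" \
  --condition "avg CpuPercentage > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Insights/actionGroups/oncall"
```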

Performance counters: We are alerted if CPU spikes are sustained for more than five minutes.

Application Insights

Application Insights, a feature of Monitor, is used for client- and server-side Application Performance Management (APM) telemetry logging. It monitors page performance, exceptions, and slow dependencies, and it offers cross-platform profiling. Customers typically use Application Insights in break-fix scenarios to improve MTTR and to quickly triage failed requests and application exceptions.

We recommend enabling telemetry sampling so you don’t exhaust your data volume storage quota. We set up daily storage quota alerts to capture any telemetry saturation before it shuts off our logging pipeline.

Application Insights also provides OpenTelemetry support for distributed tracing across app domain boundaries and dependencies. This feature enables traceability from the client side all the way to the backend data or service tier.

Data volume capacity alert: Example showing that the data storage threshold is exceeded, which is useful for tracking runaway telemetry logs.

Developing with Azure DevOps

A big team works on Azure.com, and we use Azure DevOps Services to coordinate our efforts. We create internal technical docs with Azure Wikis, track work items using Azure Boards, build CI/CD workflows using Azure Pipelines, and manage application packages using Azure Artifacts. For software configuration management and quality gates, we use GitHub, which works well with Azure Boards.

We submit hundreds of daily pull requests as part of our build process, and the CI/CD pipeline deploys multiple updates every day to the production site. Having a single tool to manage the entire software development life cycle (SDLC) simplifies the learning curve for the engineering team and our internal customers.

To stay on top of what’s coming, we do a lot of planning in Delivery Plans. It’s a great tool for viewing incremental tasks and creating forecasts for the major events that affect Azure.com traffic, such as Microsoft Build, Microsoft Ignite, and Microsoft Ready.

What’s next

As the Azure platform evolves, so does Azure.com. But some things stay the same—the need for a reliable, scalable, sustainable, and cost-effective platform. That’s why we trust Azure.

Microsoft offers many resources and best practices for cloud developers; please see our additional resources below. To get started, create your Azure free account today.

Services in Azure.com

For more information about the services that make up Azure.com, check out the following resources.

Compute

Azure App Service
Azure Functions
Azure Cognitive Services

Networking

Azure Front Door
Azure DNS
Web Application Firewall
Azure Traffic Manager
Azure Content Delivery Network

Storage

Azure Cognitive Search
Azure Cache for Redis
Azure Blob storage and Azure queues
Application Insights
Azure Cosmos DB
Azure Data Explorer
Azure Media Services

Access provisioning

Azure Active Directory
Microsoft Graph
Azure Key Vault

Application life cycle

Azure DevOps
Azure Log Analytics
Azure Monitor
Azure Security Center
Azure Resource Manager
Azure Cost Management
Azure Service Health
Azure Advisor


Stay ahead of attacks with Azure Security Center

With massive workforces now remote, the stress on IT admins and security professionals is compounded by the increased pressure to keep everyone productive and connected while combatting evolving threats. Now more than ever, organizations need to reduce costs and keep up with compliance requirements, all while managing risk in a constantly evolving landscape.

Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud, whether they're in Azure or not, as well as on-premises.

Last week Ann Johnson, Corporate Vice President, Cybersecurity Solutions Group, shared news of an upcoming Azure Security Center virtual event—Stay Ahead of Attacks with Azure Security Center on June 30, 2020, from 10:00 AM to 11:00 AM Pacific Time. It’s a great opportunity to learn threat protection strategies from the Microsoft security community and to hear how your peers are tackling tough and evolving security challenges.

At the event, you’ll learn how to strengthen your cloud security posture and achieve deep and broad threat protection across cloud workloads—in Azure, on-premises, and in hybrid cloud. We will also talk about how to combine Security Center with Azure Sentinel for advanced threat hunting.

The one-hour event will open with Microsoft Corporate Vice President of Cybersecurity Ann Johnson and General Manager of Microsoft Security Response Center Eric Doerr stepping through three strategies to help you lock down your environment:

Protect all cloud resources across cloud-native workloads, virtual machines, data services, containers, and IoT edge devices.
Strengthen your overall security posture with enhanced Azure Secure Score.
Connect Azure Security Center with Azure Sentinel for proactive hunting and threat mitigation with advanced querying and the power of AI.

You’ll then see demos of Secure Score and other Security Center features. Stuart Gregg, Security Operations Manager of ASOS, a world leader in online fashion retail and a Microsoft customer, will join Ann and Eric to share how they’ve gained stronger threat protection by pairing these technologies with smarter security management practices. Our security experts will be online to answer your questions.

Following the virtual event, you’ll have the opportunity to watch deep dive sessions where I will be hosting Yuri Diogenes, from the Customer Experience Engineering team at Microsoft. Azure Security Center today provides threat protection across cloud-native workloads, data services and servers, and virtual machines. Yuri and I will take you through a demo tour of these capabilities and chat about how you can use Security Center to achieve hybrid and multicloud threat protection. Here are the details:

Cloud-native workloads. Kubernetes is the new standard for deploying and managing software in the cloud. Learn how Security Center supports containers and provides vulnerability assessment for virtual machines and containers.
Data services. Breakthroughs in big data and machine learning make it possible for Security Center to detect anomalous database access and query patterns, SQL injection attacks, and other threats targeting your SQL databases in Azure and Azure virtual machines. Learn how you can protect your sensitive data, protect your Azure Storage against malware, and protect your Azure Key Vault from threats.
Servers and virtual machines. Learn how to protect your Linux and Windows virtual machines (VMs) using the new Security Center features Just-In-Time VM Access, adaptive network hardening, and adaptive application controls. Yuri and I will also talk about how Security Center works with Microsoft Defender Advanced Threat Protection to provide threat detection for endpoint servers.

When it comes to threat protection, the key is to cover all resources. Azure Security Center provides threat protection for servers, cloud-native workloads, data, and IoT services. Threat protection capabilities are part of Standard Tier and you can start a free trial today.

I hope you’ll join us and learn how to implement broad threat protection across all your cloud resources and improve your cloud security posture management. If you can’t catch the event online, the content will be available for you to watch at the Azure Security Expert Series web page after the event.

Source: Azure

Rules Engine for Azure Front Door and Azure CDN is now generally available

Today we are announcing the general availability of the Rules Engine feature on both Azure Front Door and Azure Content Delivery Network (CDN). Rules Engine places the specific routing needs of your customers at the forefront of Azure’s global application delivery services, giving you more control in how you define and enforce what content gets served from where. Both services offer customers the ability to deliver content fast and securely using Azure’s best-in-class network. We have learned a lot from our customers during the preview and look forward to sharing the latest updates going into general availability.

How Rules Engine works

We recently talked about how we are building and evolving the architecture and design of Azure Front Door Rules Engine. The Rules Engine implementation for Content Delivery Network follows a similar design. However, rather than creating groups of rules in Rules Engine configurations, all rules are created and applied to each Content Delivery Network endpoint. Content Delivery Network Rules Engine also introduces a global rule, which acts as a default rule for each endpoint and always triggers its action.

General availability capabilities

Azure Front Door

The most important feedback we heard during the Azure Front Door Rules Engine preview was the need for higher rule limits. Effective today, you can create up to 25 rules per configuration across 10 configurations, for a total of 250 rules in your Azure Front Door. There remains no additional charge for Azure Front Door Rules Engine.

Azure Content Delivery Network 

Similarly, Azure Content Delivery Network limits have been updated. Through preview, users had access to five total rules including the global rule for each CDN endpoint. We are announcing that as part of general availability, the first five rules will continue to be free of charge, and users can now purchase additional rules to customize CDN behavior further. We’re also increasing the number of match conditions and actions within each rule to ten match conditions and five actions.

Rules Engine scenarios

Rules Engine streamlines security and content delivery logic at the edge, a benefit to both current and new customers of either service. Different combinations of match conditions and actions give you fine-grained control over which users get what content, making the range of scenarios you can accomplish with Rules Engine nearly endless.

For instance, it’s an ideal solution for legacy application migrations, where you don’t want to worry about users accessing old applications or being unable to find content in your new apps. Similarly, geo-match and device-identification capabilities ensure that your users always see the content best suited to their location and device. Implementing security headers and cookies with Rules Engine can also ensure that no matter how your users come to interact with the site, they do so over a secure connection, preventing browser-based vulnerabilities from impacting your site.

Here are some additional scenarios that Rules Engine empowers:

Enforce HTTPS to ensure all your end users interact with your content over a secure connection.
Implement security headers such as HTTP Strict-Transport-Security (HSTS), X-XSS-Protection, Content-Security-Policy, and X-Frame-Options to prevent browser-based vulnerabilities, as well as Access-Control-Allow-Origin headers for Cross-Origin Resource Sharing (CORS) scenarios. Security-based attributes can also be defined with cookies.
Route requests to mobile or desktop versions of your application based on the patterns in the contents of request headers, cookies, or query strings.
Use redirect capabilities to return 301, 302, 307, and 308 redirects to the client to redirect to new hostnames, paths, or protocols.
Dynamically modify the caching configuration of your route based on the incoming requests.
Rewrite the request URL path and forward the request to the appropriate backend in your configured backend pool.
Optimize media delivery to tune the caching configuration based on file type or content path (Azure Content Delivery Network only).
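
To make the first scenario concrete, here is a hedged sketch of how an HTTP-to-HTTPS redirect rule might be expressed in a Front Door Rules Engine configuration; the property names follow the ARM schema at the time of writing and may differ in later API versions:

```json
{
  "name": "RedirectToHttps",
  "priority": 1,
  "matchConditions": [
    {
      "rulesEngineMatchVariable": "RequestScheme",
      "rulesEngineOperator": "Equal",
      "rulesEngineMatchValue": [ "HTTP" ]
    }
  ],
  "action": {
    "routeConfigurationOverride": {
      "@odata.type": "#Microsoft.Azure.FrontDoor.Models.FrontdoorRedirectConfiguration",
      "redirectType": "Moved",
      "redirectProtocol": "HttpsOnly"
    }
  }
}
```

Any request that matches the HTTP scheme is answered with a 301 redirect to the HTTPS equivalent, so the origin never sees insecure traffic.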

Next steps

We look forward to working with more customers using both Azure Front Door and Content Delivery Network Rules Engine. For more information, please see the documentation for Azure Front Door Rules Engine and Azure Content Delivery Network Rules Engine.
Source: Azure

Azure Container Registry: Securing container workflows

Securing any environment requires multiple lines of defense. Azure Container Registry recently announced the general availability of features like Azure Private Link, customer-managed keys, dedicated data-endpoints, and Azure Policy definitions. These features provide tools to secure Azure Container Registry as part of the container end-to-end workflow.

Customer-managed keys

By default, when you store images and other artifacts in an Azure Container Registry, content is automatically encrypted at rest with Microsoft-managed keys.

Choosing Microsoft-managed keys means that Microsoft manages the key’s lifecycle. Many organizations have stricter compliance needs that require ownership and management of the key’s lifecycle and access policies. In such cases, customers can choose customer-managed keys that are created and maintained in their own Azure Key Vault instance. Because the keys are stored in Key Vault, customers can also closely monitor access to them using the built-in diagnostics and audit logging capabilities in Key Vault. Customer-managed keys supplement the default encryption with an additional encryption layer using keys provided by customers. See details on how you can create a registry enabled for customer-managed keys.
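
As an illustration, a registry encrypted with a customer-managed key might be created along these lines with the Azure CLI; all resource names below are placeholders, and the exact flags may differ by CLI version:

```azurecli
# Placeholder names throughout; customer-managed keys require the Premium SKU.
az identity create --name acr-cmk-identity --resource-group my-rg
az keyvault create --name my-acr-kv --resource-group my-rg \
  --enable-purge-protection true
az keyvault key create --name acr-cmk --vault-name my-acr-kv

# After granting the identity get/wrapKey/unwrapKey permissions on the vault:
az acr create --name myregistry --resource-group my-rg --sku Premium \
  --identity <identity-resource-id> \
  --key-encryption-key https://my-acr-kv.vault.azure.net/keys/acr-cmk
```

Purge protection on the vault prevents the encryption key from being permanently deleted while the registry still depends on it.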

Private links

Container Registry previously had the ability to restrict access using firewall rules. With the introduction of Private Link, the registry endpoints are assigned private IP addresses, and traffic between your virtual network and the registry travels over the Microsoft backbone network.

Private Link support has been one of the top asks, allowing customers to benefit from the Azure management of their registry while benefiting from tightly controlled network ingress and egress.

Private links are available across a wide range of Azure resources with more coming soon, allowing a wide range of container workloads with the security of a private virtual network. See documentation on how to configure Azure Private Link for Container Registry.
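
A private endpoint for a registry can be created roughly as follows; the names are placeholders, and the group id for Container Registry is assumed to be "registry" per the private-link resource listing:

```azurecli
# Placeholder names; the VNet and subnet are assumed to already exist.
REGISTRY_ID=$(az acr show --name myregistry --query id --output tsv)

az network private-endpoint create \
  --name myregistry-pe \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id $REGISTRY_ID \
  --group-ids registry \
  --connection-name myregistry-pe-conn
```

You would typically pair this with a private DNS zone so that the registry's hostname resolves to the private IP from within the virtual network.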

Dedicated data-endpoints

Private Link is the most secure way to control network access between clients and the registry, as network traffic is limited to the Azure Virtual Network. When Private Link can't be used, dedicated data-endpoints can minimize data exfiltration concerns. Enabling dedicated data-endpoints means you can configure firewall rules with fully qualified domain names ([registry].[region].data.azurecr.io) rather than a wildcard rule (*.blob.core.windows.net) covering all storage accounts.

You can enable dedicated data-endpoints using the Azure portal or the Azure CLI. The data endpoints follow a regional pattern, <registry-name>.<region>.data.azurecr.io. In a geo-replicated registry, enabling data endpoints enables them in all replica regions. Review the documentation on how to enable dedicated data endpoints to learn more.
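
With the CLI, enabling the feature is a one-line update; the registry name below is a placeholder:

```azurecli
# Requires the Premium SKU; "myregistry" is a placeholder name.
az acr update --name myregistry --data-endpoint-enabled true

# Your firewall can then allow the regional FQDNs, e.g.
#   myregistry.eastus.data.azurecr.io
# instead of the wildcard *.blob.core.windows.net.
```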

Azure built-in policies

Security capabilities only protect your workflows if they’re actually enabled. To ensure your Azure resources follow security best practices, Azure Container Registry has added built-in Azure Policy definitions that you can use to enforce security rules. Here are some of the built-in policies that you can enable for your container registry:

Container Registries should be encrypted with a customer-managed key. Audit Container Registries that do not have encryption enabled with customer-managed keys.
Container Registries should not allow unrestricted network access. Audit Container Registries that do not have any network (IP or VNET) rules configured and allow all network access by default. Container Registries with at least one IP or firewall rule, or configured virtual network will be deemed compliant.
Container Registries should use private links. Audit Container Registries that do not have at least one approved private endpoint connection. Clients in a virtual network can securely access resources that have private endpoint connections through private links.

Using Azure Policy, you can ensure that your registries stay compliant with your organization's compliance needs.
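
For example, one of these built-in definitions could be assigned at a scope with the Azure CLI along these lines; the scope and definition identifier are placeholders you would look up first (for instance with az policy definition list):

```azurecli
# Placeholder scope and definition id; built-in definitions are identified
# by a GUID name in the Azure Policy definition catalog.
az policy assignment create \
  --name audit-acr-private-link \
  --scope /subscriptions/<subscription-id>/resourceGroups/my-rg \
  --policy <built-in-definition-name-or-id>
```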

Additional links

Learn more about Azure Container Registry.
UserVoice: To vote for existing requests or create a new request.
Issues: To view existing bugs and issues or log new ones.
Azure Container Registry documentation: For Azure Container Registry tutorials and documentation.

Source: Azure

Streamline connectivity and improve efficiency for remote work using Azure Virtual WAN

Today, we see a huge shift to remote work due to the global pandemic. Organizations around the world need to enable more of their employees to work remotely. We are working to address common infrastructure challenges businesses face when helping remote employees stay connected at scale.

A common operational challenge is to seamlessly connect remote users to on-premises resources. Even within Microsoft, we’ve seen our typical remote access of roughly 55,000 employees spike to as high as 128,000 employees while we’re working to protect our staff and communities during the global pandemic. Traditionally, you planned for increased user capacity, deployed additional on-premises connectivity resources, and had time to re-arrange routing infrastructure to meet organization transit connectivity and security requirements. Today’s dynamic environment demands rapid enablement of remote connectivity. Azure Virtual WAN supports multiple scenarios providing large scale connectivity and security in a few clicks.

Azure Virtual WAN provides network and security in a unified framework. Typically deployed with a hub and spoke topology, the Azure Virtual WAN architecture enables scenarios such as:

Branch connectivity via connectivity automation provided by Virtual WAN VPN/SD-WAN partners.
IPsec VPN connectivity.
Remote User VPN (Point-to-Site) connectivity.
Private (ExpressRoute) connectivity.
Intra cloud connectivity (transitive connectivity for Virtual Networks).
Transit connectivity for VPN and ExpressRoute.
Routing.
Security with Azure Firewall and Firewall Manager.

Organizations can quickly use Virtual WAN to deploy remote user connectivity in minutes and provide access to on-premises resources. A standard virtual WAN allows fully meshed hubs and routing infrastructure.

 
Here is how to support remote users:

Set up remote user connectivity: Connect to your Azure resources with an IPsec/IKE (IKEv2) or OpenVPN connection. This requires a virtual private network (VPN) client to be configured for the remote user. The Azure VPN Client, OpenVPN Client, or any client that supports IKEv2 can be used. For more information, see Create a point-to-site connection.
Enable connectivity from the remote user to on-premises: There are two options:

Set up Site-to-Site connectivity with an existing VPN device. When you connect the IPsec VPN device to the Azure Virtual WAN hub, interconnectivity between the Point-to-Site User VPN (remote user) and Site-to-Site VPN is automatic. For more information on how to set up Site-to-Site VPN from your on-premises VPN device to Azure Virtual WAN, see Create a Site-to-Site connection using Virtual WAN.
Connect your ExpressRoute circuit to the Virtual WAN hub. Connecting an ExpressRoute circuit requires deploying an ExpressRoute gateway in Virtual WAN. As soon as you have deployed one, interconnectivity between the Point-to-Site User VPN and ExpressRoute is automatic. To create the ExpressRoute connection, see Create an ExpressRoute connection using Virtual WAN. You can use an existing ExpressRoute circuit to connect to Azure Virtual WAN.

Connect your Azure resources to the Virtual Hub: Select a Virtual Network and attach it to your hub of choice.
Set up firewall policies in Virtual Hub: A secured virtual hub is an Azure Virtual WAN hub with associated security and routing policies configured by Azure Firewall Manager. Use secured virtual hubs to easily create native security services for traffic governance and protection. You can choose the services to protect and govern your network traffic with Azure Firewall. Azure Firewall Manager also allows you to use your familiar, best-in-breed, third-party security as a service (SECaaS) offerings to protect Internet access for your users. To create a firewall policy and secure your hub, see Secure your cloud network with Azure Firewall Manager using the Azure portal.
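
The first part of this setup can be sketched with the Azure CLI as follows; the names, address prefix, and scale unit are illustrative, the VPN server configuration is assumed to already exist, and the flags may differ from the current CLI:

```azurecli
# Placeholder names throughout.
az network vwan create --name my-vwan --resource-group my-rg --type Standard

az network vhub create --name my-hub --resource-group my-rg \
  --vwan my-vwan --address-prefix 10.0.0.0/24

# Attach a Point-to-Site gateway for remote users to the hub.
az network p2s-vpn-gateway create --name my-p2s-gw --resource-group my-rg \
  --vhub my-hub --scale-unit 2 \
  --vpn-server-config my-vpn-server-config
```

Once the gateway is up, remote users who connect with the Azure VPN Client (or any IKEv2/OpenVPN client) automatically gain transit connectivity to anything else attached to the hub.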

Learn more

For additional information, please explore these resources.

Virtual WAN Global Transit Architecture.
SD-WAN Connectivity Architecture with Virtual WAN.
Virtual WAN Monitoring (metrics and logs).
Install Azure Firewall in Virtual Hub.
Virtual WAN FAQ.
Virtual WAN pricing.
Using Azure Virtual WAN to support remote work documentation.
Source: Azure

Simplifying declarative deployments in Azure

Azure provides customers a simple and intuitive way to declaratively provision and manage infrastructure through Azure Resource Manager (ARM) templates. You can describe your entire Azure environment using template language, and then use your favorite CI/CD or scripting tool to stand up this environment in minutes. The ARM template language takes the form of JSON and is a direct representation of the resource schema, which means you can create any Azure resource using an ARM template from day one and configure any setting on those resources. Using ARM templates, you can describe the resources that make up the environment in a declarative, parameterized fashion. Because ARM templates are declarative, you need only specify what you want, and Azure Resource Manager figures out the rest.
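
For illustration, a minimal ARM template that declaratively provisions a single storage account looks like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

The template states only the desired end state; Resource Manager works out whether the account needs to be created or already matches the description.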

Over the last couple of months, we have renewed our focus on ARM template deployments, concentrating on some of the key challenges shared by our customers. Today, we’re sharing some of the investments we’ve made to address these challenges.

Simplified authoring experience with Visual Studio Code

Our newest users have shared that authoring and editing an ARM template from scratch for the first time can be intimidating. We have simplified the getting-started experience by enabling you to create the resources you need in the Azure portal and export an ARM template that you can reuse. We also have a template quickstart gallery of over 800 sample templates for provisioning resources. But now we have taken things a step further for you.

With the new Azure Resource Manager (ARM) Tools in Visual Studio Code, we've added support for snippets (pre-created resource definitions), IntelliSense, colorization, ARM template outline, and comments. With comments support in ARM templates, you can deploy any template with comments using CLI, PowerShell, and Azure portal, and it will just work. Here is a short video on the new ARM template authoring experience in VS Code.

What-if: Pre-deployment impact analysis

Our customers often need to assess the impact of a deployment before submitting any changes to the deployed resources. With the new what-if feature in Azure, customers can run a pre-deployment assessment to determine what resources will be created, updated, or deleted, including any resource property changes. The what-if command does a real-time check of the current state of the environment and eliminates the need to manage any state. Get started with what-if here. While what-if is in preview, please let us know about issues and feature requests in our GitHub repo.
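
For a resource group deployment, the check might look like this with the Azure CLI; the resource group, template, and parameter names are placeholders:

```azurecli
# Preview the changes a template would make without actually deploying it.
az deployment group what-if \
  --resource-group my-rg \
  --template-file azuredeploy.json \
  --parameters storageAccountName=mystorage123
```

The output lists each resource as Create, Modify, Delete, or NoChange, with a property-level diff for modifications.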

Deployment scripts: completing the ‘last mile’ scenarios

There are often scenarios where customers need to run custom script code in an ARM template deployment to complete their environment setup. These scripts that previously required a step outside of a template deployment can now be executed inside of a template deployment using the deploymentScript resource. The new deploymentScript resource will execute any PowerShell or bash script as part of your template deployment. This script can be included as part of your ARM template or referenced from an external source. Deployment scripts now give you the ability to complete your end-to-end environment setup in a single ARM template. Learn more about deployment scripts with this documentation. If there are certain Azure resource actions not exposed in our APIs that you would like to see surfaced natively in our control plane, please file your request here.
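
As a hedged sketch, a deploymentScript resource inside a template might look roughly like this; the API version and property names reflect the preview schema at the time of writing and may change, and in practice a user-assigned identity is typically also attached to the resource:

```json
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2019-10-01-preview",
  "name": "runSetupScript",
  "location": "[resourceGroup().location]",
  "kind": "AzureCLI",
  "properties": {
    "azCliVersion": "2.0.80",
    "scriptContent": "echo \"finishing environment setup\"",
    "timeout": "PT30M",
    "retentionInterval": "P1D",
    "cleanupPreference": "OnSuccess"
  }
}
```

The script runs as a step of the deployment itself, so any last-mile setup it performs succeeds or fails together with the rest of the template.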

Management group and subscription provisioning at scale

As an organization expands its use of Azure, there are often conversations about the need to create a management group (MG) hierarchy (a grouping construct) and Azure subscriptions to ensure separation of environments, applications, billing, or security. Customers need a consistent, declarative way to provision management groups and subscriptions to save time and resources. With the new tenant and MG deployment APIs, we now support the provisioning of MGs and subscriptions using ARM templates. This enables you to automate the setup of your entire estate and the associated infrastructure resources in a single ARM template. Read more about this and get sample templates here. Additionally, we now support tagging of subscriptions, have removed the 800-deployments-per-resource-group limit, increased the number of resource groups per deployment to 800, and increased the number of subscriptions per Enterprise Agreement (EA) account to 2,000, enabling you to provision and manage at scale.
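
A management-group-scope deployment can then be kicked off with the Azure CLI along these lines; the management group id and template file are placeholders:

```azurecli
# Deploy a template at management-group scope; --location is where the
# deployment metadata is stored, not where resources land.
az deployment mg create \
  --management-group-id my-mg \
  --location eastus \
  --template-file mg-deploy.json
```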

Continued focus on quality and reliability

Quality and reliability are at the forefront of everything we do at Microsoft. This is an area where we have continued our focus, starting with improving the quality of our schemas and having schema coverage for all resources. The benefits of this are seen in the improved authoring experience and template export capabilities. We are diligently working to improve our error messages and enhance the quality of our pre-flight validation to catch issues before you deploy. We have also invested heavily in improving our documentation by publishing all the API versions to template references and added template snippets to resource documentation.

To help with testing your ARM template code, we open-sourced the ARM Template Toolkit, which we use internally at Microsoft to ensure our ARM templates follow best practices. Lastly, we recognize that speed matters, and we have made significant improvements that reduce deployment times for large-scale deployments by roughly 75 percent.

The future of Infrastructure as Code with Azure Resource Manager templates

We have just begun our journey of enhancing ARM template deployments, and the teams are working hard to address current gaps and innovate for the future. You can hear about some of our future investments, which we shared at the recent Microsoft Build 2020 conference.

We would love your continued feedback on ARM deployments. If you are interested in deeper conversations with the engineering team, please join our Deployments and Governance Yammer group.
Source: Azure

Rapid recovery planning for IT service providers

Azure Lighthouse is launching the “Azure Lighthouse Vision Series,” a new initiative to help partners with the business challenges of today and provide them the resources and knowledge needed to create a thriving Azure practice.

We are starting the series with a webinar aimed at helping our IT service partners prepare for and manage a new global economic climate. The webinar will be hosted by industry experts from Service Leadership Inc., advisors to service provider owners and executives worldwide. It will cover offerings and execution strategies for solutions and services that optimize profit, growth, and stock value. Service Leadership publishes the Service Leadership Index® of solution provider performance, the industry's broadest and deepest operational and financial benchmark service.

The impact of a recession on service providers

As we continue through uncharted economic territory, service providers must prepare for possible recovery scenarios. Service Leadership has developed an exclusive (and no-cost) guide for service provider owners and executives called the Rapid Recovery™ Planning Guide, based on historical financial benchmarks of solution providers in recessions and likely recovery scenarios.

The guide unlocks the best practices used by those service providers who did best in past recessions, as evidenced by their financial performance from the 2008 recession to the present day. As noted in the guide, through their Service Leadership Index® Annual Solution Provider Industry Profitability Report™, Service Leadership determined that:

In the 2001 and 2008 recessions, value-added reseller (VAR) and reseller revenue declined an average 45 percent within two quarters.
In the 2008 recession, mid-size and enterprise managed services providers (MSPs) experienced a 30 percent drop in revenue within the first three quarters.
Private cloud providers saw the smallest average dip, only 10 percent, in past recessions.
Project services firms experienced the most significant decline, having dropped into negative adjusted EBITDA back in 2008.

The upcoming webinar will explore methods used by the top performing service providers to plan and execute successfully in the current economy.

Tackling the challenges of today and tomorrow

Service providers have an essential role to play in our economic recovery. As we shift to a remote working culture, companies across the globe are ramping up efforts to reduce cost, ensure continuity in all lines of business, and manage new security challenges with a borderless office.

The chart below shows how three Service Provider Predominant Business Models™ have performed since the end of the last recession.
 
During the webinar, Service Leadership will provide estimated financial projections using multiple economic scenarios through 2028. These predictions, coupled with service provider best practices for managing an economic downturn, will be at the heart of our presentation.

Navigating success with Azure

Our Principal PM Manager for Azure Lighthouse, Archana Balakrishnan, will join Service Leadership to illustrate how Microsoft Azure Management tools can give service providers the tools needed to scale, automate, and optimize managed services on Azure.

Join us to learn how you can build and scale your Azure practice utilizing one native Azure solution to centrally manage your customer environments, monitor cost and performance, ensure compliance and proper governance, and optimize using the latest capabilities of Azure Lighthouse and Azure Arc.

Event details

Azure Lighthouse Vision Series: Rapid Recovery Planning for IT Service Providers

In this session, industry expert Paul Dippell, CEO of Service Leadership Inc., and Archana Balakrishnan, Principal PM Manager for Azure Lighthouse, will cover these topics:

Likely upcoming macro-economic scenarios.
Likely service provider revenue and profit paths through recovery.
Suggested actions for service providers to maximize revenue, profit, and safety.
Azure Management tools for building and scaling services on Azure.
Closing advice for partners.

Paul and Archana will be available for a live Q&A with attendees during this session.

The webinar will be held on Monday, June 29, 2020, from 11:00 AM to 12:00 PM PT. To register for this free event, please visit Azure Lighthouse Vision Series: Rapid Recovery Planning for IT Service Providers.
Source: Azure