Enabling Zero Trust with Azure network security services

This blog has been co-authored by Eliran Azulai, Principal Program Manager.

With the accelerated pace of digital transformation since the outbreak of the COVID-19 pandemic, organizations continuously look to migrate their workloads to the cloud and to ensure those workloads are secure. Moreover, organizations need a new security model that adapts more effectively to the complexity of the modern environment, embraces the hybrid workplace, and protects applications and data regardless of where they are.

Microsoft’s Zero Trust Framework protects assets anywhere by adhering to three principles:

Verify explicitly: Always authenticate and authorize based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies.
Use least privileged access: Limit user access with just-in-time and just-enough-access (JIT and JEA), risk-based adaptive policies, and data protection to help secure both data and productivity.
Assume breach: Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.

In this blog, we will describe some Azure network security services that help organizations to address Zero Trust, focusing on the third principle—assume breach.

Network firewalling

Network firewalls are typically deployed at the network edge, filtering traffic between trusted and untrusted zones. The Zero Trust approach extends this model and recommends filtering traffic between internal networks, hosts, and applications.

The Zero Trust approach assumes breach and accepts the reality that bad actors are everywhere. Rather than building a wall between trusted and untrusted zones, it recommends we verify all access attempts, limit user access to JIT and JEA, and harden the resources themselves. However, this doesn’t preclude us from maintaining security zones. In fact, network firewalling provides a type of checks and balances for network communications, by segmenting the network into smaller zones and controlling what traffic is allowed to flow between them. This defense-in-depth practice forces us to consider whether a particular connection should cross a sensitive boundary.

Where should firewalling take place in Zero Trust networks? Since your network is vulnerable by nature, you should implement firewalling both at the host level and outside of it. In Azure, we provide filtering and firewalling services deployed at different network locations: at the host and between virtual networks or subnets. Let’s discuss how Azure’s firewalling services support Zero Trust.

Azure network security group (NSG)

You can use an Azure network security group to filter network traffic to and from Azure resources in an Azure virtual network. NSG is implemented at the host level, outside of the virtual machines (VMs). In terms of user configuration, an NSG can be associated with a subnet or with a VM NIC. Associating an NSG with a subnet is a form of perimeter filtering that we’ll discuss later. The more relevant application of NSG in the context of Zero Trust networks is associating it with a specific VM (that is, assigning the NSG to the VM’s NIC). This supports a filtering policy per VM, making the VM a participant in its own security, and it serves the goal of ensuring that every VM filters its own network traffic rather than delegating all firewalling to a centralized firewall.

While host firewalling can be implemented at the guest OS level, Azure NSG also safeguards against a VM that becomes compromised. An attacker who gains access to the VM and elevates privileges could disable an on-host firewall. Because NSG is implemented outside of the VM, host-level filtering is isolated from the guest, which provides strong guarantees against attacks on the filtering system itself.

Inbound and outbound filtering

NSG provides both inbound filtering (regulating traffic entering a VM) and outbound filtering (regulating traffic leaving a VM). Outbound filtering, especially between resources in the virtual network, plays an important role in Zero Trust networks by further hardening workloads. For example, a misconfiguration in inbound NSG rules can silently remove an important inbound layer of defense and be very difficult to discover. Pervasive outbound NSG filtering continues to protect subnets even when such a critical misconfiguration takes place.

Simplify NSG configuration with Azure application security groups

Azure application security groups (ASGs) simplify the configuration and management of NSGs by expressing network security as an extension of an application’s structure. ASGs allow you to group VMs and define network security policies based on these groups. Using ASGs, you can reuse security policy at scale without manually maintaining explicit IP addresses. In the simplified example below, we apply NSG1 at the subnet level and associate two VMs with WebASG (a web application tier ASG) and another VM with LogicASG (a business logic application tier ASG).

We can apply security rules to ASGs instead of to each VM individually. For example, the rule below allows HTTP traffic from the Internet (TCP port 80) to VM1 and VM2 in the web application tier by specifying WebASG as the destination, instead of creating a separate rule for each VM.

| Priority | Source   | Source ports | Destination | Destination ports | Protocol | Access |
| -------- | -------- | ------------ | ----------- | ----------------- | -------- | ------ |
| 100      | Internet | *            | WebASG      | 80                | TCP      | Allow  |
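As a minimal, illustrative sketch (not Azure’s actual implementation), NSG evaluation can be modeled as priority-ordered, first-match-wins rules with an implicit deny when nothing matches. The rule below mirrors the table above; all names are taken from that example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    priority: int             # lower number is evaluated first
    source: str               # service tag, ASG name, or "*"
    dest: str
    dest_port: Optional[int]  # None matches any port
    protocol: str
    access: str               # "Allow" or "Deny"

def evaluate(rules, source, dest, dest_port, protocol):
    """First matching rule in priority order wins; an implicit deny
    applies when nothing matches (as with NSG's default DenyAll rules)."""
    for r in sorted(rules, key=lambda r: r.priority):
        if (r.source in (source, "*") and r.dest in (dest, "*")
                and r.dest_port in (dest_port, None)
                and r.protocol in (protocol, "*")):
            return r.access
    return "Deny"

# The single rule from the table above:
rules = [Rule(100, "Internet", "WebASG", 80, "TCP", "Allow")]
print(evaluate(rules, "Internet", "WebASG", 80, "TCP"))   # Allow
print(evaluate(rules, "Internet", "WebASG", 443, "TCP"))  # Deny
```

Note how the implicit deny at the end is what makes pervasive outbound filtering valuable: traffic not explicitly allowed is dropped even if another rule was misconfigured.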

Azure Firewall

While host-level filtering is ideal for creating micro-perimeters, firewalling at the virtual network or subnet level adds another important layer of protection. It protects as much infrastructure as possible against unauthorized traffic and potential attacks from the internet. It also serves to protect east-west traffic to minimize the blast radius in case of attacks.

Azure Firewall is a native network security service for firewalling. Implemented alongside NSG, these two services provide important checks and balances in Zero Trust networks. Azure Firewall implements global rules and coarse-grained host policy while NSG sets fine-grained policy. This separation of perimeter versus host filtering can simplify the administration of the firewalling policy.

The Zero Trust best practice is to always encrypt data in transit to achieve end-to-end encryption. However, from an operational perspective, customers often wish to have visibility into their data and to apply additional security services to the unencrypted traffic.

Azure Firewall Premium, with its transport layer security (TLS) inspection, can perform full decryption and encryption of the traffic, enabling intrusion detection and prevention systems (IDPS) and providing customers with visibility into the data itself.

DDoS Protection

Zero Trust strives to authenticate and authorize just about everything on the network, but it does not provide good mitigation against DDoS attacks, particularly against volumetric attacks. Any system that can receive packets is vulnerable to DDoS attacks, even those employing a Zero Trust architecture. Consequently, it’s imperative that any Zero Trust implementation is fully protected against DDoS attacks.

Azure DDoS Protection Standard provides DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect any internet-facing resource in a virtual network. Protection is simple to enable on any new or existing virtual network and requires no application or resource changes.

Optimize SecOps with Azure Firewall Manager

Azure Firewall Manager is a security management service that provides central security policy and route management for cloud-based security perimeters.

In addition to Azure Firewall policy management, Azure Firewall Manager now allows you to associate your virtual networks with a DDoS protection plan. Within a single tenant, DDoS protection plans can be applied to virtual networks across multiple subscriptions. You can use the Virtual Networks dashboard to list all virtual networks that don’t have a DDoS protection plan and assign new or available protection plans to them.

Moreover, Azure Firewall Manager allows you to use your familiar, best-of-breed, third-party security as a service (SECaaS) offerings to protect Internet access for your users.

By seamlessly integrating with Azure core security services, such as Microsoft Defender for Cloud, Microsoft Sentinel, and Azure Log Analytics, you can further optimize your SecOps with a one-stop-shop providing you best-of-breed networking security services, posture management, and workload protection—as well as SIEM and data analytics.

What’s next

Zero Trust is imperative for organizations working to protect their assets wherever they are. It’s an ongoing journey for security professionals, but getting started begins with a few first steps and continuous, iterative improvement. In this blog, we described several Azure security services and how they enable the Zero Trust journey for all organizations.

To learn more about these services, see the following resources:

Zero Trust Framework.
Azure network security groups.
Azure application security groups.
Azure Firewall.
Azure Firewall Manager.
Azure DDoS Protection Standard.

Source: Azure

7 reasons to attend Azure Open Source Day

To show you the latest capabilities of using Linux and Azure—and share some exciting announcements—we will be hosting Azure Open Source Day on Tuesday, February 15, 2022, from 9:00 AM to 10:30 AM Pacific Time.

Push your apps and data to the next level by using Azure, open source, and Linux together. Join this free digital event to learn how to natively run your open-source workloads on Azure, expand their capabilities, and innovate in new ways using Azure services.

At this event, you’ll learn how Microsoft is committed to open source and works with the open source community to develop new technologies. Hear about the latest trends and capabilities of using Linux and Azure together—direct from Microsoft insiders. Whether you’re new to Azure or are already using it, you’ll discover how to turbocharge your apps and data with open source and hybrid cloud technologies.

Here are seven reasons to attend the event

Get the inside scoop on CBL-Mariner, the Linux distribution built by Microsoft to host Azure open source services.
Find out how to better automate and manage Linux investments on Azure using Azure Hybrid Benefit and Project Bicep.
Discover tools for every developer, including Visual Studio Code, GitHub Codespaces, and Azure managed database and AI services.
Learn about application modernization best practices using containers and Azure Kubernetes Service (AKS).
Hear from Microsoft insiders and Linux industry leaders like Red Hat and SUSE.
Ask the experts your questions during the live chat Q and A.
Plus, be among the first to hear Microsoft CEO Satya Nadella share a special announcement on the 30th anniversary of Linux.

We look forward to seeing you there!

Register today for the Azure Open Source Day

Azure Open Source Day
Tuesday, February 15, 2022,
9:00 AM to 10:30 AM Pacific Time.

Delivered in partnership with AMD.

Azure DDoS Protection—2021 Q3 and Q4 DDoS attack trends

This blog post was co-authored by Anupam Vij, Principal PM Manager, and Syed Pasha, Principal Network Engineer, Azure Networking

In the second half of 2021, the world experienced an unprecedented level of Distributed Denial-of-Service (DDoS) activity in both complexity and frequency. The gaming industry was perhaps the hardest hit, with DDoS attacks disrupting gameplay of Blizzard games [1], Titanfall [2], Escape from Tarkov [3], Dead by Daylight [4], and Final Fantasy 14 [5], among many others. Voice over IP (VoIP) service providers such as Bandwidth.com [6], VoIP Unlimited [7], and VoIP.ms [8] suffered outages following ransom DDoS attacks. In India, we saw a 30-fold increase in DDoS attacks during the nation’s festive season in October [9], with multiple broadband providers targeted, which shows that the holidays are indeed an attractive time for cybercriminals. As we highlighted in the 2021 Microsoft Digital Defense Report, the availability of DDoS-for-hire services, and their low cost (approximately $300 USD per month), makes it extremely easy for anyone to conduct targeted DDoS attacks.

At Microsoft, despite the evolving challenges in the cyber landscape, the Azure DDoS Protection team successfully mitigated some of the largest DDoS attacks ever recorded, both in Azure and in history. In this review, we share trends and insights into the DDoS attacks we observed and mitigated throughout the second half of 2021.

August recorded the highest number of attacks

Microsoft mitigated an average of 1,955 attacks per day, a 40 percent increase from the first half of 2021. The maximum number of attacks in a day recorded was 4,296 attacks on August 10, 2021. In total, we mitigated upwards of 359,713 unique attacks against our global infrastructure during the second half of 2021, a 43 percent increase from the first half of 2021.

Interestingly, there was not as much of a concentration of attacks during the end-of-year holiday season compared to previous years. We saw more attacks in Q3 than in Q4, with the most occurring in August, which may indicate a shift towards attackers acting all year round—no longer is holiday season the proverbial DDoS season! This highlights the importance of DDoS protection all year round, and not just during peak traffic seasons.

Microsoft mitigated a 3.47 Tbps attack, and two more attacks above 2.5 Tbps

Last October, Microsoft reported on a 2.4 terabit per second (Tbps) DDoS attack in Azure that we successfully mitigated. Since then, we have mitigated three larger attacks.

In November, Microsoft mitigated a DDoS attack with a throughput of 3.47 Tbps and a packet rate of 340 million packets per second (pps), targeting an Azure customer in Asia. We believe this to be the largest attack ever reported in history.

This was a distributed attack originating from approximately 10,000 sources in multiple countries across the globe, including the United States, China, South Korea, Russia, Thailand, India, Vietnam, Iran, Indonesia, and Taiwan. Attack vectors were UDP reflection on port 80 using Simple Service Discovery Protocol (SSDP), Connection-less Lightweight Directory Access Protocol (CLDAP), Domain Name System (DNS), and Network Time Protocol (NTP), comprising a single peak; the overall attack lasted approximately 15 minutes.

In December, we mitigated two more attacks that surpassed 2.5 Tbps, both of which were again in Asia. One was a 3.25 Tbps UDP attack in Asia on ports 80 and 443, spanning more than 15 minutes with four main peaks, the first at 3.25 Tbps, the second at 2.54 Tbps, the third at 0.59 Tbps, and the fourth at 1.25 Tbps. The other attack was a 2.55 Tbps UDP flood on port 443 with one single peak, and the overall attack lasted just a bit over five minutes.

In these cases, our customers do not have to worry about how to protect their workloads in Azure, as they would when running them on-premises. Azure’s DDoS protection platform, built on distributed DDoS detection and mitigation pipelines, can scale enormously to absorb the highest-volume DDoS attacks, providing our customers the level of protection they need. The service employs fast detection and mitigation of large attacks by continuously monitoring our infrastructure at many points across the Microsoft global network. Traffic is scrubbed at the Azure network edge before it can impact the availability of services. If we identify that the attack volume is significant, we leverage the global scale of Azure to defend against the attack close to where it originates.

Short burst and multi-vector attacks remain prevalent, although more attacks are lasting longer

As with the first half of 2021, most attacks were short-lived, although, in the second half of 2021, the proportion of attacks that were 30 minutes or less dropped from 74 percent to 57 percent. We saw a rise in attacks that lasted longer than an hour, with the composition more than doubling from 13 percent to 27 percent. Multi-vector attacks continue to remain prevalent.

It’s important to note that longer attacks are typically experienced by customers as a sequence of multiple short, repeated bursts. One example is the 3.25 Tbps attack described above, which was the aggregation of four consecutive short-lived bursts, each ramping up to terabit volumes in seconds.

UDP spoof floods dominated, targeting the gaming industry

UDP attacks rose to become the top vector in the second half of 2021, comprising 55 percent of all attacks, a 16-percentage-point increase from the first half of 2021. Meanwhile, TCP attacks decreased from 54 percent to just 19 percent. UDP spoof floods were the most common attack type (55 percent), followed by TCP ACK floods (14 percent) and DNS amplification (6 percent).

Gaming continues to be the hardest hit industry. The gaming industry has always been rife with DDoS attacks because players often go to great lengths to win. Nevertheless, we see that a wider range of industries are just as susceptible, as we have observed an increase in attacks in other industries such as financial institutions, media, internet service providers (ISPs), retail, and supply chain. Particularly during the holidays, ISPs provide critical services that power internet phone services, online gaming, and media streaming, which make them an attractive target for attackers.

UDP is commonly used in gaming and streaming applications. The majority of attacks on the gaming industry have been mutations of the Mirai botnet and low-volume UDP protocol attacks. An overwhelming majority were UDP spoof floods, while a small portion were UDP reflection and amplification attacks, mostly SSDP, Memcached, and NTP.

Workloads that are highly sensitive to latency, such as multiplayer game servers, cannot tolerate such short burst UDP attacks. Outages of just a couple seconds can impact competitive matches, and outages lasting more than 10 seconds typically will end a match. For this scenario, Azure recently released the preview of inline DDoS protection, offered through partner network virtual appliances (NVAs) that are deployed with Azure Gateway Load Balancer. This solution can be tuned to the specific shape of the traffic and can mitigate attacks instantaneously without impacting the availability or performance of highly latency-sensitive applications.

Huge increase in DDoS attacks in India, East Asia remains popular with attackers

The United States remains the top attacked destination (54 percent). We saw a sharp uptick in attacks in India, from just 2 percent of all attacks in the first half of 2021 to the second position at 23 percent of all attacks in the second half of 2021. East Asia (Hong Kong) remains a popular hotspot for attackers (8 percent). Interestingly, relative to other regions, we saw a decrease in DDoS activity in Europe, dropping from 19 percent in the first half of 2021 to 6 percent in the second half.

The concentration of attacks in Asia can be largely explained by the huge gaming footprint [10], especially in China, Japan, South Korea, Hong Kong, and India, which will continue to grow as increasing smartphone penetration drives the popularity of mobile gaming in Asia. In India, another driving factor may be that the acceleration of digital transformation, for example through the “Digital India” initiative [11], has increased the region’s overall exposure to cyber risks.

Defended against new attack vectors

During the October-to-December holiday season, we defended against new TCP PUSH-ACK flood attacks that were dominant in the East Asia region, namely in Hong Kong, South Korea, and Japan. We observed a new TCP option manipulation technique used by attackers to dump large payloads: in this attack variation, the TCP option length field is larger than the option header itself.

This attack was automatically mitigated by our platform’s advanced packet anomaly detection and mitigation logic, with no intervention required and no customer impact at all.
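The option-length anomaly described above can be sketched as a simple validation pass over a TCP options buffer. The option byte layout (kind, length, data) follows the TCP specification, but the check itself is an illustrative simplification of what a packet anomaly detector might do, not Azure’s actual mitigation logic:

```python
def malformed_tcp_options(options: bytes) -> bool:
    """Return True if any option's declared length overruns the buffer.

    TCP options are encoded as (kind, length, data), except for the
    one-byte End-of-Option-List (0) and No-Operation (1) options.
    """
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:              # End of Option List
            return False
        if kind == 1:              # No-Operation is a single byte
            i += 1
            continue
        if i + 1 >= len(options):
            return True            # the length byte itself is missing
        length = options[i + 1]
        if length < 2 or i + length > len(options):
            return True            # declared length overruns the data
        i += length
    return False

# An MSS option (kind 2) claiming 40 bytes inside a 4-byte field:
print(malformed_tcp_options(bytes([2, 40, 0x05, 0xB4])))  # True
# The same option with its correct length of 4:
print(malformed_tcp_options(bytes([2, 4, 0x05, 0xB4])))   # False
```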

Protect your workloads from DDoS attacks with Microsoft

As the world moves towards a new era of digitalization with the expansion of 5G and IoT, and with more industries embracing online strategies, the increased online global footprint means that the threat of cyberattacks will continue to grow. As we have witnessed that DDoS attacks are now rampant even during non-festive periods, it is crucial for businesses to develop a robust DDoS response strategy all year round, and not just during the holiday season.

At Microsoft, the Azure DDoS Protection team protects every property in Microsoft and the entire Azure infrastructure. Our vision is to protect all internet-facing workloads in Azure, against all known DDoS attacks across all levels of the network stack.

Combine DDoS Protection Standard with Application Gateway Web Application Firewall for comprehensive protection

Combined with DDoS Protection Standard, an Application Gateway web application firewall (WAF), or a third-party web application firewall deployed in a virtual network with a public IP, provides comprehensive protection against L3-L7 attacks on web and API assets. This also works if you are using Azure Front Door alongside Application Gateway WAF, or if your backend resources are in your on-premises environment.

If you have PaaS web application services running on Azure App Service or Azure SQL Database, you can host your application behind an Application Gateway and WAF and enable DDoS Protection Standard on the virtual network which contains the Application Gateway and WAF. In this scenario, the web application itself is not directly exposed to the public Internet and is protected by Application Gateway WAF and DDoS Protection Standard. To minimize any potential attack surface area, you should also configure the web application to accept only traffic from the Application Gateway public IP address and block unwanted ports.

Use inline DDoS protection for latency-sensitive workloads

If you have workloads that are highly sensitive to latency and cannot tolerate short burst DDoS attacks, we recently released the preview of inline DDoS protection, offered through partner network virtual appliances (NVAs) that are deployed with Azure Gateway Load Balancer. Inline DDoS protection mitigates even short-burst low-volume DDoS attacks instantaneously without impacting the availability or performance of highly latency-sensitive applications.

Optimize SecOps with Azure Firewall Manager

DDoS Protection Standard is automatically tuned to protect all public IP addresses in virtual networks, such as those attached to an IaaS virtual machine, Load Balancer (Classic and Standard Load Balancers), Application Gateway, and Azure Firewall Manager. In addition to Azure Firewall policy management, Azure Firewall Manager, a network security management service, now supports managing DDoS Protection Standard for your virtual networks. Enabling DDoS Protection Standard on a virtual network will protect the Azure Firewall and any publicly exposed endpoints that reside within the virtual network.

Learn more about Azure DDoS Protection Standard

• Azure DDoS Protection Standard product page.
• Azure DDoS Protection Standard documentation.
• Azure DDoS Protection Standard reference architectures.
• DDoS Protection best practices.
• Azure DDoS Rapid Response.
• DDoS Protection Standard pricing and SLA.

[1] Overwatch, World of Warcraft Go Down After DDoS | Digital Trends
[2] After years of struggling against DDoS attacks, Titanfall is being removed from sale | PC Gamer
[3] 'Escape From Tarkov' suffers sustained server issues in possible DDoS attacks (nme.com)
[4] Dead by Daylight streamers are being DDoS attacked
[5] 'Final Fantasy 14' EU servers affected by DDoS attack (nme.com)
[6] Bandwidth CEO confirms outages caused by DDoS attack | ZDNet
[7] DDoS Attack Hits VoIP and Internet Provider VoIP Unlimited Again UPDATE2 – ISPreview UK
[8] VoIP company battles massive ransom DDoS attack | ZDNet
[9] 30-fold increase in DDoS cyber attacks in India in festive season (ahmedabadmirror.com)
[10] Gaming industry in Asia Pacific – statistics and facts | Statista
[11] Di-Initiatives | Digital India Programme | Ministry of Electronics and Information Technology (MeitY) Government of India

Rightsize to maximize your cloud investment with Microsoft Azure

If you are running on-premises servers, chances are that you utilize a fraction of your overall server cores most of the time but are forced to over-provision to handle peak loads. Moving those workloads to the cloud can greatly reduce cost by “rightsizing” server capacity as needed.

Rightsizing is one of the key levers you have for controlling costs and optimizing resources. By understanding cloud economics, and using what Azure provides, you can identify the smallest virtual server instances that support your requirements and realize immediate savings by eliminating unused capacity.

Many industries experience spikes in server usage. When you rightsize with Azure, you are no longer compelled to buy and provision capacity based on peak demand, which results in excess capacity and excess spending.

For instance, H&R Block found that its servers were used most at specific times of the year, namely tax season, and that maintaining expensive on-premises infrastructure throughout the year was driving up costs. Once the tax preparer migrated the first 20 percent of its apps and platforms to Azure, it became very clear how the variable cost model of the cloud contrasted with the fixed model of on-premises datacenters, and the company reevaluated its architecture.

Rightsizing in the cloud will mean different things to different organizations. One of the first questions to ask is how much of your environment is elastic versus static to get an idea of savings based on the reduction in footprint. In the example below, static usage never went above 30 percent of capacity, indicating a huge opportunity for savings.

What does rightsizing look like for you?

Turning off workloads can obviously have an immediate impact on your budget. But how aggressively should you look to trim? Do you always know what is driving the consumption? Are there situations where you cannot immediately rightsize? For workloads that are still needed, what can be done to optimize those resources?

That optimization can take several forms:

Resizing virtual machines: Business and applications requirements evolve, so committing to a specific virtual machine size ahead of time can be limiting.
Shutting down underutilized instances: With workloads in the cloud, use Azure Advisor to find underutilized resources and get recommendations for resource optimization. This tool also can help determine the cost savings from rightsizing or shutting down central processing units (CPUs).
Interrupting workloads with Azure Spot Virtual Machines: You can get deep discounts for interruptible workloads that do not need to be completed within a specific timeframe.
Identifying workloads that need extra capacity: With Azure, it is easier to meet consumption demands. In fact, the process can be largely automated.
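To make the resizing decision from the list above concrete, here is a hedged sketch: pick the smallest instance that covers observed peak usage plus some headroom. The size names, prices, and headroom factor are hypothetical illustrations, not real Azure SKUs or rates:

```python
# Hypothetical sizes and hourly prices, ordered smallest first.
# Real Azure VM SKUs and rates differ; this is illustrative only.
SIZES = [  # (name, vcpus, memory_gib, usd_per_hour)
    ("D2s", 2, 8, 0.096),
    ("D4s", 4, 16, 0.192),
    ("D8s", 8, 32, 0.384),
]

def rightsize(peak_vcpus_used, peak_mem_gib, headroom=1.2):
    """Return the smallest size covering observed peak plus headroom."""
    need_cpu = peak_vcpus_used * headroom
    need_mem = peak_mem_gib * headroom
    for name, vcpus, mem, price in SIZES:
        if vcpus >= need_cpu and mem >= need_mem:
            return name, price
    # Nothing fits: fall back to the largest size in the catalog.
    name, _, _, price = SIZES[-1]
    return name, price

# A VM peaking at 1.4 vCPUs and 5 GiB fits comfortably in the
# smallest size, rather than the larger one it may have been
# provisioned with on-premises:
print(rightsize(peak_vcpus_used=1.4, peak_mem_gib=5)[0])  # D2s
```

In practice, tools like Azure Advisor supply the utilization data that would feed a decision like this.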

When migrating workloads to Azure, do not consider it a one-to-one migration of server cores. The cloud is infinitely more flexible, allowing for unpredictable workloads and paying only for the resources needed. Plan for the peak but know that you do not have to hold on to that capacity. Under consistently high usage, consumption-based pricing can be less efficient for estimating baseline costs when compared to the equivalent provisioned pricing.

Be sure to consider tradeoffs between cost optimization and other aspects of the design, such as security, scalability, resilience, and operability. When using tools like Azure Advisor, understand that they can only give a snapshot of usage during their discovery period. If your organization experiences large seasonal fluctuations, you can save on provisioning your base workloads, typically your line-of-business applications, by reserving virtual machine instances and capacity with a discount. And when those seasonal patterns and occasional bursts drive up usage, pay-as-you-go pricing kicks in.

For consistent workloads, like a batch process that runs every day using the same resources, you can get reduced pricing by taking advantage of Azure Reservations, which offer discounts of up to 72 percent for reserving your resources in advance.

And speaking of cost optimization tools, use the Azure Well-Architected Framework to optimize the quality of your Azure workloads. Read the overview of cost optimization to dive deeper into the tools and processes for creating cost-effective workloads. These tools really can help. According to an IDC assessment, Azure customer enablement tools can lower the three-year cost of operations by 24 percent.

Planning for growth no longer means overprovisioning for fear of hitting capacity. When you understand cloud economics and follow the key financial and technical guidance from Azure, your workloads will be much more cost-effective in Azure.

Learn more

Read the cost optimization documentation.
Review the cost optimization checklist.
Understand Azure Cost Management and Billing.


Save big by using your on-premises licenses on Azure

Are you still hesitating to move some or all your workloads to the cloud due to the added cost? One of the easiest ways to significantly lower your cost of ownership is by using a special licensing offer called Azure Hybrid Benefit.

When migrating Windows Server or SQL Server on-premises workloads to Microsoft Azure, Azure Hybrid Benefit allows you to use your existing licenses covered by Software Assurance (SA) or other subscriptions in Azure. By bringing both Windows and SQL Server licenses with SA to Azure, you can save up to 85 percent compared to pay-as-you-go pricing.

Don’t pay double

Server migration cost concerns take several shapes, including paying double for the cloud and on-premises licenses while migrating, and the added infrastructure and security costs. During migrations, Azure Hybrid Benefit helps reduce risk by allowing 180 days to run Azure and on-premises workloads simultaneously at no additional cost. Or you can keep both licenses permanently to continue running a hybrid infrastructure.

When using cloud services from other providers, organizations are required to pay for both the infrastructure and the licenses, since Windows Server licenses must be repurchased on other providers’ clouds. With Azure Hybrid Benefit, you pay only for the additional infrastructure. And only Azure offers free extended security updates: when you move a Windows or SQL Server workload to Azure, extended security updates provide three years of free security updates after the end of support, reducing risk and cost.

Moreover, Azure Hybrid Benefit applies to active and unused on-premises Red Hat or SUSE Linux subscriptions, allowing you to use your existing Linux workloads on Azure and pay only for your virtual machine infrastructure costs.

Windows Server savings

Only Azure Hybrid Benefit enables Windows Server license assignment in the cloud. The benefit is applicable to customers with an active SA or subscription license, such as EAS, SCE, or Open Value subscription on Windows Server (both Standard and Datacenter editions of Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019). It is supported in all Azure regions and on virtual machines that are running SQL or third-party marketplace software.

Only Azure Hybrid Benefit offers unlimited virtualization for dedicated hosts. For a breakdown of the number of virtual cores allocated for those licenses, their requirements, and how to apply for benefits, check out Azure Hybrid Benefit for Windows Server.

Below is a snapshot of how much Azure Hybrid Benefit can save when moving Windows Server workloads.

SQL Server savings

The Hybrid Benefit for SQL Server on Azure Virtual Machines allows customers with Software Assurance to use their on-premises licenses when they run SQL Server on Azure Virtual Machines. With Software Assurance, you can use the benefit when deploying a new SQL virtual machine or activate SQL Server Azure Hybrid Benefit for an existing SQL virtual machine with a pay-as-you-go license.

Only with Azure Hybrid Benefit can you use your existing SQL Server licenses in both IaaS and PaaS environments. It is also the only benefit that applies to SQL Server DBaaS, giving you four virtual CPUs for each core of SQL Server Enterprise in the exchange.
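As an illustration of that exchange ratio, the arithmetic can be sketched in a few lines (illustrative only; actual entitlements depend on your license terms):

```python
# Illustrative sketch of the 1 Enterprise core -> 4 vCPU exchange described above.
def vcpus_from_enterprise_cores(cores, ratio=4):
    """Return the number of SQL Database vCPUs granted for a given number
    of SQL Server Enterprise edition cores under Azure Hybrid Benefit."""
    if cores < 0:
        raise ValueError("core count must be non-negative")
    return cores * ratio

print(vcpus_from_enterprise_cores(8))  # 8 Enterprise cores -> 32 vCPUs
```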

In the Azure portal, you can now centrally manage Azure Hybrid Benefit for SQL Server by assigning licenses at the scope of an entire Azure subscription or an overall billing account.

Here is an example of the benefit applied to SQL Server:

Azure Hybrid Benefit helps you to significantly reduce the costs of running your workloads in the cloud. See the benefit description and rules for more on the licensing structure and use cases.

More ways to save

Even more savings can be found by purchasing Azure reserved instances, which provide discounts on Azure services when you purchase predicted capacity in advance. Reservations give us visibility into your one-year or three-year resource needs, allowing us to operate more efficiently; those savings are passed on to you as discounts of up to 72 percent. Together with Azure Hybrid Benefit, reservations can provide more than 80 percent savings over the standard pay-as-you-go rate. Your actual savings may vary, so use the Azure Hybrid Benefit Savings Calculator to estimate your savings range.
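As a rough, hypothetical illustration of how the two discounts stack (the 40 percent license share of the pay-as-you-go rate is an assumed figure, not Azure pricing; use the calculator for real estimates):

```python
# Hypothetical worked example: Azure Hybrid Benefit removes the license portion
# of the rate, then the reserved-instance discount applies to the compute rest.
def effective_rate(payg_rate, ri_discount, license_share):
    """Effective hourly rate after Azure Hybrid Benefit (drops the license
    share) and a reserved-instance discount on the remaining compute cost."""
    compute = payg_rate * (1 - license_share)  # infrastructure-only cost
    return compute * (1 - ri_discount)         # then apply the RI discount

payg = 1.00                                    # normalize the PAYG rate to 1.0
combined = effective_rate(payg, ri_discount=0.72, license_share=0.40)
savings = 1 - combined / payg
print(f"combined savings: {savings:.0%}")      # > 80% with the assumed license share
```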

Learn more

Get more financial and technical guidance from Azure by visiting cloud economics.
Find out more special offers at Azure benefits and incentives.

Source: Azure

Microsoft launches landing zone accelerator for Azure Arc-enabled servers

We continue to innovate and add new capabilities to Azure Arc to enable new scenarios in hybrid and multicloud. We also want to provide our customers with the right guidance and best practices to adopt hybrid and multicloud technologies to meet their business needs. Today we’re launching the Azure Arc-enabled servers landing zone accelerator within the Azure Cloud Adoption Framework. The landing zone accelerator provides best practices, guidance, and automated reference implementations so that customers can get started with their deployments quickly and easily.

Azure Arc-enabled servers landing zone accelerator makes it easier for customers to increase security, governance, and compliance posture on servers that are deployed outside of Azure. Along with Azure Arc, services such as Microsoft Defender for Cloud, Azure Sentinel, Azure Monitor, Azure Log Analytics, Azure Policy, and many others are included in the reference implementations that can then be extended to production environments.

Design areas within the landing zone accelerator

The Azure Arc-enabled servers landing zone accelerator supports customers’ cloud adoption journey with the considerations, recommendations, and architecture patterns that matter most to customers. To guide deployments of Azure Arc-enabled servers along recommended practices, we created a set of seven critical design areas. Each of these areas walks customers through a set of design considerations, recommendations, architectures, and next steps:

Identity and access management
Network topology and connectivity
Resource organization
Governance and security disciplines
Management disciplines
Cost governance
Automation disciplines

Automation for landing zone accelerator

Azure Arc landing zone accelerator uses the sandbox automation powered by Azure Arc Jumpstart for its reference implementations. Since launching 18 months ago, Azure Arc Jumpstart has grown tremendously, with more than 90 automated scenarios, thousands of visitors a month, and a vibrant open-source community sharing their learnings on Azure Arc. As part of Jumpstart, we developed ArcBox, an automated sandbox environment for all things Azure Arc, deployed in customers’ Azure subscriptions.

Here’s what Kevin Booth, Principal Cloud Architect at Insight, a technology provider, had to say about Jumpstart—“The Azure Arc Jumpstarts have proven invaluable to us at Insight in familiarizing our people and our clients with Azure Arc’s use cases, feature set, and capabilities. We at Insight have taken the Jumpstart scenarios and integrated them into our own IP to help accelerate implementation to more rapidly onboard customers, in a best practice manner.”

For the Azure Arc-enabled servers landing zone accelerator, we developed the new ArcBox for IT Pros, which will act as the sandbox automation solution for Azure Arc-enabled servers with services like Azure Policy, Azure Monitor, Microsoft Defender for Cloud, Microsoft Sentinel, and more.

This gives customers a comprehensive experience that can simply be deployed, resulting in a fully operational Azure Arc-enabled servers environment.

The sandbox automation supports Bicep, Terraform, and ARM templates, so customers can choose what makes sense to them and their organizations’ automation practices. This is also part of our new ArcBox 2.0 release.

Getting started

Hop over to the Hybrid and multicloud Cloud Adoption Framework page and explore the Azure Arc-enabled servers landing zone accelerator, the critical design areas, and sandbox automation.
Source: Azure

Azure introduces new capabilities for live video analytics

In June 2020, we announced the preview of the Live Video Analytics platform—a groundbreaking new set of capabilities in Azure Media Services that allows you to build workflows that capture and process video with real-time analytics from the intelligent edge to intelligent cloud. We continue to see customers across industries enthusiastically using Live Video Analytics on IoT Edge in preview, to drive positive outcomes for their organizations. Last week at Microsoft Ignite, we announced new features, partner integrations, and reference apps that unlock additional scenarios that include social distancing, factory floor safety, security perimeter monitoring, and more. The new product capabilities that enable these scenarios include:

Spatial Analysis in Azure Computer Vision for Cognitive Services: Enhanced video analytics that factor in the spatial relationships between people and movement in the physical domain.
Intel OpenVINO Model Server integration: Build complex, highly performant live video analytics solutions powered by OpenVINO toolkit, with optimized pre-trained models running on Intel CPUs (Atom, Core, Xeon), FPGAs, and VPUs.
NVIDIA DeepStream integration: Support for hardware accelerated hybrid video analytics apps that combine the power of NVIDIA GPUs with Azure services.
Arm64 support: Develop and deploy live video analytics solutions on low power, low footprint Linux Arm64 devices.
Azure IoT Central Custom Vision Template: Build rich custom vision applications in as little as a few minutes to a few hours with no coding required.
High frame rate inferencing with Cognitive Services Custom Vision integration: Demonstrated in a manufacturing industry reference app that supports six useful out-of-the-box scenarios for factory environments.

Making video AI easier to use

Given the wide array of available CPU architectures (x86-64, Arm, and more) and hardware acceleration options (Intel Movidius VPU, iGPU, FPGA, NVIDIA GPU), plus the dearth of data science professionals to build customized AI, putting together a traditional video analytics solution entails significant time, effort, and complexity.

The announcements we’re making today further our mission of making video analytics more accessible and useful for everyone: support for widely used chip architectures, including Intel, NVIDIA, and Arm; integration with hardware-optimized AI frameworks like NVIDIA DeepStream and Intel OpenVINO; closer integration with complementary technologies across Microsoft’s AI ecosystem, namely Computer Vision for Spatial Analysis and Cognitive Services Custom Vision; and an improved development experience via the Azure IoT Central Custom Vision template and a manufacturing floor reference application.

Live video analytics with Computer Vision for Spatial Analysis

The Spatial Analysis capability of Computer Vision, part of Azure Cognitive Services, can be used in conjunction with Live Video Analytics on IoT Edge to better understand the spatial relationships between people and movement in physical environments. We’ve added new operations that enable you to count people in a designated zone within the camera’s field of view, and to detect when a person crosses a designated line or area or when people violate a distance rule.

The Live Video Analytics module will capture live video from real-time streaming protocol (RTSP) cameras and invoke the spatial analysis module for AI processing. These modules can be configured to enable video analysis and the recording of clips locally or to Azure Blob storage.

Deploying the Live Video Analytics and the Spatial Analysis modules on edge devices is made easier by Azure IoT Hub. Our recommended edge device is Azure Stack Edge with the NVIDIA T4 Tensor Core GPU. You can learn more about how to analyze live video with Computer Vision for Spatial Analysis in our documentation.

Live Video Analytics with Intel’s OpenVINO Model Server

You can pair the Live Video Analytics on IoT Edge module with the OpenVINO Model Server (OVMS) AI Extension from Intel to build complex, highly performant live video analytics solutions. OVMS is an inference server powered by the OpenVINO toolkit that’s highly optimized for computer vision workloads running on Intel hardware. As an extension, HTTP support and samples have been added to OVMS to facilitate the easy exchange of video frames and inference results between the inference server and the Live Video Analytics module, empowering you to run any object detection, classification, or segmentation model supported by the OpenVINO toolkit.

You can customize the inference server module to use any optimized pre-trained model in the Open Model Zoo repository, and select from a wide variety of acceleration mechanisms supported by Intel hardware without having to change your application, including CPUs (Atom, Core, Xeon), field programmable gate arrays (FPGAs), and vision processing units (VPUs), whichever best suits your use case. In addition, you can select from a wide variety of use case-specific Intel-based solutions, such as Developer Kits or Market Ready Solutions, and incorporate the easily pluggable Live Video Analytics platform for scale.

“We are delighted to unleash the power of AI at the edge by extending OpenVINO Model Server for Azure Live Video Analytics. This extension will simplify the process of developing complex video solutions through a modular analytics platform. Developers are empowered to quickly build their edge to cloud applications once and deploy to Intel’s broad range of compute and AI accelerator platforms through our rich ecosystems.”—Adam Burns, VP, Edge AI Developer Tools, Internet of Things Group, Intel


Live Video Analytics with NVIDIA’s DeepStream SDK

Live Video Analytics and NVIDIA DeepStream SDK can be used to build hardware-accelerated AI video analytics apps that combine the power of NVIDIA graphic processing units (GPUs) with Azure cloud services, such as Azure Media Services, Azure Storage, Azure IoT, and more. You can build sophisticated real-time apps that can scale across thousands of locations and can manage the video workflows on the edge devices at those locations via the cloud. You can explore some related samples on GitHub.

You can use Live Video Analytics to build video workflows that span the edge and cloud, and then combine DeepStream SDK to build pipelines to extract insights from video using the AI of your choice.

The diagram above illustrates how you can record video clips triggered by AI events to Azure Media Services in the cloud. The samples are a testament to the robust design and openness of both platforms.

“The powerful combination of NVIDIA DeepStream SDK and Live Video Analytics powered by the NVIDIA computing stack helps accelerate the development and deployment of world-class video analytics. Our partnership with Microsoft will advance adoption of AI-enabled video analytics from edge to cloud across all industries and use cases.”—Deepu Talla, Vice President and General Manager of Edge Computing, NVIDIA


Live Video Analytics now runs on Arm

You can now run Live Video Analytics on IoT Edge on Linux Arm64v8 devices, enabling you to use low power-consumption, low-footprint devices such as the NVIDIA® Jetson™ series.

Develop solutions rapidly using the IoT Central video analytics template

The new IoT Central video analytics template simplifies the setup of an Azure IoT Edge device to act as a gateway between cameras and Azure cloud services. It integrates the Azure Live Video Analytics video inferencing pipeline and OpenVINO Model Server, an AI inference server by Intel, enabling customers to build a fully working end-to-end solution in a couple of hours with no code. It’s fully integrated with the Azure Media Services pipeline to capture, record, and play analyzed videos from the cloud.

The template installs IoT Edge modules such as an IoT Central Gateway, Live Video Analytics on IoT Edge, Intel OpenVINO Model server, and an ONVIF module on your edge devices. These modules help the IoT Central application configure and manage the devices, ingest live video streams from the cameras, and easily apply AI models such as vehicle or person detection. Simultaneously in the cloud, Azure Media Services and Azure Storage record and stream relevant portions of the live video feed. Refer to our IoT Show episode and related blog post for a full overview and guidance on how to get started.

Integration of Cognitive Services Custom Vision models in Live Video Analytics

Many organizations already have a large number of cameras deployed to capture video data but are not conducting any meaningful analysis on the streams. With the advent of Live Video Analytics, applying even basic image classification and object detection algorithms to live video feeds can help unlock truly useful insights and make businesses safer, more secure, more efficient, and ultimately more profitable. Potential scenarios include:

Detecting if employees in an industrial/manufacturing plant are wearing hard hats to ensure their safety and compliance with local regulations.
Counting products or detecting defective products on a conveyor belt.
Detecting the presence of unwanted objects (people, vehicles, and more) on-premises and notifying security.
Detecting low-stock and out-of-stock products on retail store shelves or on factory parts shelves.

Developing AI models from scratch to perform tasks like these and deploying them at scale to work on live video streams on the edge entails a non-trivial amount of work. Doing it in a scalable and reliable way is even harder and more expensive. The integration of Live Video Analytics on IoT Edge with Cognitive Services Custom Vision makes it possible to implement working solutions for all of these scenarios in a matter of minutes to a few hours.

You begin by first building and training a computer vision model by uploading pre-labeled images to the Custom Vision service. This doesn’t require you to have any prior knowledge of data science, machine learning, or AI. Then, you can use Live Video Analytics to deploy the trained custom model as a container on the edge and analyze multiple camera streams in a cost-effective manner.

Live Video Analytics powered manufacturing floor reference app

We have partnered with the Azure Stack team to evolve the Factory.AI solution, a turn-key application that makes it easy to train and deploy vision models without the need for data science knowledge. The solution includes capabilities for object counting, employee safety, defect detection, machine misalignment, tool detection, and part confirmation. All these scenarios are powered by the integration of Live Video Analytics running on Azure Stack Edge devices.

In addition, the Factory.AI solution also allows customers to train and deploy their own custom ONNX models using Custom Vision SDK. Once a custom model is deployed on the edge, the reference app leverages gRPC from Live Video Analytics for high frame rate accurate inferencing. You can learn more about the manufacturing reference app at Microsoft Ignite or by visiting the Azure intelligent edge patterns page.

Get started today

In closing, we’d like to thank everyone who is already participating in the Live Video Analytics on IoT Edge preview. We appreciate your ongoing feedback to our engineering team as we work together to fuel your success with video analytics both in the cloud and on the edge. For those of you who are new to our technology, we’d encourage you to get started today with these helpful resources:

Watch the Live Video Analytics introduction video.
Find more information on the product details page.
Watch the Live Video Analytics demo.
Try the new Live Video Analytics features today with an Azure free trial account.
Register on the Media Services Tech Community and hear directly from the Engineering team on upcoming new features, to provide feedback and discuss roadmap requests.
Download Live Video Analytics on IoT Edge from the Azure Marketplace.
Get started quickly with our C# and Python code samples.
Review our product documentation.
Search the GitHub repo for Live Video Analytics open source projects.
Contact amshelp@microsoft.com for questions.

Intel, the Intel logo, Atom, Core, Xeon, and OpenVINO are registered trademarks of Intel Corporation or its subsidiaries.

NVIDIA and the NVIDIA logo are registered trademarks or trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
Source: Azure

Microsoft named a leader in Gartner’s Magic Quadrant for Enterprise Integration Platform as a Service

We are excited to share that Gartner has positioned Microsoft as a leader in the 2020 Enterprise Integration Platform as a Service (EiPaaS) Magic Quadrant, based on our ability to execute and completeness of vision. Microsoft has now remained an EiPaaS MQ leader for the past three years.

According to Gartner, “Enterprise iPaaS providers continue to broaden their go-to-market strategies to cover an increasing range of enterprise integration scenarios.” Our vision is to help customers enable integration in all areas of their operations, from traditional central IT to business-led activities. Azure Integration Services (AIS), comprising API Management, Logic Apps, Service Bus, and Event Grid, helps customers connect applications and data seamlessly to the cloud for services such as machine learning, cognitive services, and analytics, enabling increased enterprise-wide visibility and agility.

Best-in-class integration capabilities and platform

As applications and data become more connected than ever, integration has become a key part of building applications. Azure Integration Services provides a comprehensive and cohesive set of integration capabilities spanning applications, data, and AI, with over 370 connectors and UI automation with Robotic Process Automation (RPA), so customers can connect everything quickly and easily. We provide these capabilities through high-productivity, low-code serverless integration and automation across Azure, edge, on-premises, and multi-cloud environments.

Global availability and customer momentum

Microsoft Azure has a global presence with more than 60 regions and a large, active partner community around the globe. Azure Integration Services has over 40,000 customers across the globe, with more than 60 percent of Fortune 500 companies using AIS for their business integration needs. Learn how customers such as ABB, RXR Realty, Rockefeller Capital Management, and Flexigroup use Azure Integration Services, including Logic Apps, API Management, and Service Bus, to connect their business applications, data, and processes seamlessly. We look forward to continuing to partner with customers on their integration journey.

Next steps

Read the full Gartner report here. Visit our website to learn more about Azure Integration Services.
Catch up on the latest Logic Apps product announcements at Microsoft Ignite. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request.

Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Source: Azure

General availability of Azure Maps support for Azure Active Directory, Power Apps integration, and more

This blog post was co-authored by Chad Raynor, Principal Program Manager, Azure Maps.

New and recent updates for Microsoft Azure Maps include support for Azure Maps integration with Azure Active Directory (generally available), integration with the Microsoft Power Apps platform, Search and Routing services enhancements, new Weather services REST APIs (in preview), and expanded coverage for Mobility services.

Read on to learn more about the latest features and integrations for Azure Maps:

Azure Maps integration with Azure AD support now generally available

Azure Maps integration with Azure Active Directory (Azure AD) is now generally available, allowing customers to rely on Azure Maps and Azure AD for enterprise-level security. This update includes support for additional Azure role-based access control (RBAC) roles for fine-grained access control. Azure Maps supports all principal types for Azure RBAC, including individual Azure AD users, groups, applications, Azure resources, and Azure managed identities. Additionally, Azure Maps documentation now includes expanded implementation examples and estimates of implementation effort, to help you choose the right implementation method for your scenario.

Geospatial features in Power Apps powered by Azure Maps now in preview

Microsoft Power Apps announced the preview of geospatial capabilities powered by Azure Maps. This includes both an interactive map component and an address suggestion component. Power Apps is a suite of apps, services, connectors, and a data platform that provides a rapid app development environment to build custom apps for your business needs.

You don’t need to be a professional developer to take advantage of the new geospatial components. You can add them with drag-and-drop ease and low-code development. As an example, you can build an app for your field service workers to report repair needs, including the location of the work and detailed pictures.

The preview includes the following:

Interactive vector tile maps supporting multiple base map styles including satellite imagery.
Address search with dynamic address suggestions as you type and geocoding. The component suggests multiple potential address matches that the user can select and returns the address as structured data. This allows your app to extract information like city, street, municipality—and even latitude and longitude—in a format friendly to many locales and international address formats.

Figure 1. Azure Maps interactive vector tiles in Power Apps.

Search and Route services enhancements

Search business locations by brand name

Through Azure Maps Search services, customers have access to almost 100 million point of interest (POI) locations globally. To help customers restrict POI search results to specific brands, we introduced the brandSet parameter, which is supported by all Search APIs with POI search capabilities.

As an example, suppose the users of your application search for restaurants by typing ‘restaurant’ or selecting the ‘restaurant’ category, and you want to show them only the restaurants under your brand umbrella. In your API request, you can pass one or more brand names, and the results will be filtered accordingly.
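To sketch what such a request might look like, the snippet below builds a POI search URL with the brandSet parameter. The brand names and the placeholder key are hypothetical; consult the Azure Maps Search API reference for the exact parameter semantics.

```python
# Sketch: construct an Azure Maps POI search URL filtered by brand.
from urllib.parse import urlencode

def poi_search_url(query, brands, key="<your-key>"):
    """Build a Search POI request URL with a comma-separated brandSet filter."""
    params = {
        "api-version": "1.0",
        "query": query,
        "brandSet": ",".join(brands),  # one or more brand names
        "subscription-key": key,
    }
    return "https://atlas.microsoft.com/search/poi/json?" + urlencode(params)

url = poi_search_url("restaurant", ["Contoso Grill", "Fabrikam Diner"])
print(url)
```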

POI Category Tree API in preview

Azure Maps POI Search APIs support almost 600 categories and subcategories. To make it easier for customers to understand and leverage this information to optimize their queries, we have added the POI Category Tree API to our Search services portfolio and introduced the categorySet parameter, which is supported by all Search APIs with POI search capabilities. The POI Category Tree API provides a full list of supported POI categories and subcategories, along with an ID for each category.

Customers can use the POI Category Tree API along with the Search APIs to query for POIs mapped to a specific category ID. Using category IDs instead of description strings increases the accuracy of POI search results.

Request reachable range by travel distance

Azure Maps offers a full suite of vehicle routing capabilities, for example, to support fleet and logistics scenarios. The Azure Maps Get Route Range API, also known as the Isochrone API, allows customers to determine the range a user can reach from a single point based on a time, fuel, or energy budget. The API can also return the reachable range based on travel distance, expressed as a polygon boundary (isochrone). The returned isochrone is good for visualization and can also be used as a filter for spatial queries, which opens up a wide variety of spatial filtering applications.

For example, when there is a traffic incident at a given point, you need to know which electronic traffic signs should be updated to alert drivers of the incident. To figure out which signs to update, you want to know which signs drivers would pass within 5 kilometers of driving to the incident. You can use the Route Range API to calculate the reachable range and then check which signs reside within the returned polygon. See our code sample to test route range features in action.
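The final filtering step can be sketched with a plain point-in-polygon test. The rectangular isochrone and the sign coordinates below are hypothetical; production code would use a geospatial library and the actual polygon returned by the API.

```python
# Sketch: filter road signs against an isochrone polygon via ray casting.
def point_in_polygon(point, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

isochrone = [(0, 0), (10, 0), (10, 10), (0, 10)]   # simplified boundary
signs = {"sign-a": (5, 5), "sign-b": (15, 5)}      # hypothetical sign positions
to_update = [s for s, p in signs.items() if point_in_polygon(p, isochrone)]
print(to_update)  # only sign-a lies inside the reachable range
```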

Figure 2. Map visualizing reachable range based on travel distance.

Search Electric Vehicle (EV) charging stations by connector type

Azure Maps Route services support multiple routing scenarios, covering private, electric, and commercial vehicles like trucks. To make it possible for customers to build advanced features into their electric vehicle (EV) applications, customers can now restrict results to EV charging stations supporting specific connector types, making it easy to find the most suitable stations. Consider an application that lets users determine whether there are suitable charging stations along their planned routes, or a business fleet owner who wants to know whether there are enough charging stations of a specific type for employees to charge their vehicles while delivering goods to customers. You can now accomplish this through all Azure Maps Search service APIs with POI search capability, such as the Search Fuzzy and Search Nearby APIs.

Mobility services updates

Expanded coverage

We have expanded our Mobility services coverage globally to almost 3,200 cities, offering best-in-class public transit data. To highlight some improvements, we now support countries and regions such as Liechtenstein and metro areas like Hyderabad (India), Côte d’Azur (France), Dakar (Senegal), Rio Verde (Brazil), and San Francisco-San Jose (CA, US). You can find all the supported metro areas on our Mobility services coverage page. One metro area can be a country/region, a city, or a group of multiple cities.

MetroID is no longer a required parameter for multiple Mobility APIs. To make it easier for customers to request transit data, we have made the metroID request parameter optional for the following Mobility services APIs:

Get Nearby Transit
Get Transit Stop Info
Get Transit Route
Get Real-time Arrivals
Get Transit Line Info

As a result, Azure Maps customers no longer need to obtain a metroID by calling the Get Metro ID API before calling the public transit route directions API.

Weather services updates

Severe Weather Alerts

Severe weather phenomena can significantly impact our everyday lives and business operations. For example, severe weather conditions such as tropical storms, high winds, or flooding can close roads and force logistics companies to reroute their fleets, causing delays in reaching destinations and breaking the cold chain of refrigerated food products. The Azure Maps Severe Weather Alerts API returns severe weather alerts available worldwide from both official government meteorological agencies and leading global and regional weather alert providers.

The service can return details such as alert type, category, level, and detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves or forest fires. As an example, logistics managers can visualize severe weather conditions on a map along with business locations and planned routes, and further coordinate with drivers and local workers.

Figure 3. Active Severe Weather Alerts on a map.

Azure Maps Indices API

There may be times when you want to know if weather conditions are optimal for a specific activity, such as outdoor construction, indoor activities, running, or farming (including soil moisture information). The Azure Maps Indices API returns index values that help users plan future activities. For example, a health mobile application can notify users that today’s weather is good for running or for other outdoor activities like golf, and retail stores can optimize their digital marketing campaigns based on predicted index values. The service returns daily index values for the current day and the next 5, 10, and 15 days.

Request past and future radar and satellite tiles

In addition to real-time radar and satellite tiles, Azure Maps customers can now request past and future tiles to enhance data visualizations with map overlays, either by calling the Get Map Tile v2 API directly or by requesting tiles via the Azure Maps web and mobile SDKs. Radar tiles are provided up to 1.5 hours in the past and up to 2 hours in the future, in 5-minute intervals. Infrared tiles are provided up to 3 hours in the past, in 10-minute intervals.
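The radar frame times described above can be enumerated with a short sketch (snapping the reference time down to a 5-minute boundary is an assumption about how the service aligns timestamps):

```python
# Sketch: enumerate radar tile frame times at 5-minute steps,
# from 1.5 hours in the past to 2 hours in the future.
from datetime import datetime, timedelta

def radar_tile_timestamps(now):
    """Return available radar frame times (-90 min .. +120 min, 5-min steps)."""
    # Snap "now" down to the nearest 5-minute boundary.
    snapped = now.replace(minute=now.minute - now.minute % 5,
                          second=0, microsecond=0)
    start, end = snapped - timedelta(minutes=90), snapped + timedelta(minutes=120)
    frames, t = [], start
    while t <= end:
        frames.append(t)
        t += timedelta(minutes=5)
    return frames

frames = radar_tile_timestamps(datetime(2020, 10, 1, 12, 3))
print(len(frames), frames[0], frames[-1])  # 43 frames, 10:30 through 14:00
```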

Figure 4. Historical radar tiles visualized on a map.

We want to hear from you

We are always working to grow and improve the Azure Maps platform, and we want to hear from you. We’re here to help you get the most out of Azure Maps.

Have a feature request? Add it or vote up the request on our feedback site.
Having an issue getting your code to work? Have a topic you would like us to cover on the Azure blog? Ask us on the Azure Maps forums.
Looking for code samples? There’s a plethora of them on our Azure Maps Code Sample site. Wrote a great one you want to share? Join us on GitHub.
To learn more, read the Azure Maps documentation.

Source: Azure

Scaling Microsoft Kaizala on Azure

This post was co-authored by Anubhav Mehendru, Group Engineering Manager, Kaizala.

Mobile-only workers depend on Microsoft Kaizala—a simple and secure work management and mobile messaging app—to get their work done. Since COVID-19 forced many of us across the world to work from home, Kaizala usage has surged to nearly 3x pre-COVID-19 levels. While this is a good growth opportunity for the product, it has increased pressure on the engineering team to ensure that the service scales with the increased usage while maintaining the customer-promised SLA of 99.99 percent.

Today, we’re sharing some of our learnings about managing and scaling an enterprise-grade, secure productivity app and the backend service behind it.

Foundation of Kaizala

Kaizala is a productivity tool primarily targeted at mobile-only users and is built on a microservices architecture with Microsoft Azure as the core cloud platform. Our workload runs on Azure Cloud Services, with Azure SQL Database and Azure Blob Storage used for primary storage. We use Azure Cache for Redis for caching, while Azure Service Bus and Azure Notification Hubs manage asynchronous processing of events. Azure Active Directory (Azure AD) is used for user authentication. We use Azure Data Explorer and Azure Monitor for data analytics. Azure Pipelines enables automated, safe deployments, letting us ship updates multiple times a week with high confidence.

We follow a safe deployment process, ensuring minimal customer impact and stage-wise release of new features and optimizations, with full control over exposure and rollback.

In addition, we use a centralized configuration management system where all our configuration can be controlled, such as exposing a new feature to a set of users, groups, or tenants. We have fine-grained control over the message processing rate, receipt processing, user classification, priority processing, slowing down non-core functionalities, and more. This allows us to rapidly prototype new features and optimizations over a user set.
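As a rough illustration of flag-based exposure control, here is a minimal sketch of a centralized feature-flag lookup. The class and tenant names are hypothetical, not Kaizala's actual configuration system:

```python
class FeatureConfig:
    """Minimal sketch: map each feature flag to the set of tenants it is
    exposed to, so rollout can be widened or rolled back by changing config."""

    def __init__(self):
        # flag name -> set of tenant IDs ("*" means exposed to everyone)
        self._flags = {}

    def set_exposure(self, flag, tenants):
        self._flags[flag] = set(tenants)

    def is_enabled(self, flag, tenant):
        exposed = self._flags.get(flag, set())
        return "*" in exposed or tenant in exposed

cfg = FeatureConfig()
cfg.set_exposure("priority_processing", {"contoso", "fabrikam"})
cfg.is_enabled("priority_processing", "contoso")   # True: tenant is in the set
cfg.is_enabled("priority_processing", "tailspin")  # False: not yet exposed
```

In a real system the flag table would live in a shared store and be refreshed by each node, but the lookup shape stays the same.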

Key resiliency strategies

We employ the following key resilience strategies:

API rate limit

To protect the service from misuse, we need to keep incoming calls from multiple clients within a safe limit. We built a rate limiter entirely based on in-memory caching, which does the job with negligible latency impact on customer operations.
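One common way to implement such an in-memory limiter is a token bucket; the sketch below illustrates the idea (it is not Kaizala's actual implementation) and shows why the hot path stays cheap: admitting a call is just a couple of arithmetic operations, with no external calls.

```python
import time

class TokenBucket:
    """In-memory token-bucket rate limiter: refills at a steady rate,
    allows short bursts up to the bucket capacity."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # throttled: caller should retry later

bucket = TokenBucket(rate_per_sec=5, burst=10)
allowed = [bucket.allow() for _ in range(12)]
# The first 10 calls pass (the burst); the rest are throttled until tokens refill.
```

In a multi-node service, each node would typically keep its own bucket per client, trading a little global precision for zero cross-node latency.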

Optimized caching

To provide an optimal user experience, we created a generic in-memory caching infrastructure where multiple compute nodes can quickly sync state changes using Azure Cache for Redis Pub/Sub. This avoided a significant number of external API calls, which effectively reduced our SQL load.
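The pattern is per-node in-memory caches kept coherent by an invalidation channel. Below is a self-contained sketch where a tiny in-process bus stands in for the Redis Pub/Sub channel (the class names are illustrative, not Kaizala's code):

```python
class Bus:
    """Stand-in for a pub/sub channel such as Azure Cache for Redis Pub/Sub."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, key):
        for callback in self.subscribers:
            callback(key)

class NodeCache:
    """Per-node in-memory cache that drops entries when any node publishes
    a change for that key, so stale copies never survive an update."""
    def __init__(self, bus):
        self.local = {}
        self.bus = bus
        bus.subscribe(self._invalidate)
    def _invalidate(self, key):
        self.local.pop(key, None)
    def put(self, key, value):
        self.bus.publish(key)     # tell all nodes their cached copy is stale
        self.local[key] = value   # then store the fresh value locally

bus = Bus()
node_a, node_b = NodeCache(bus), NodeCache(bus)
node_b.local["group:42"] = "stale state"
node_a.put("group:42", "new state")   # node_b's stale copy is invalidated
```

With the real Redis channel, each node subscribes once at startup and the invalidation messages fan out across the fleet the same way.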

Prioritize critical operations

When the service is overloaded due to heavy customer traffic, we prioritize critical customer operations, such as messaging, over non-core operations, such as receipts.
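A simple way to picture this prioritization is a two-level priority queue where core work always drains before non-core work, while arrival order is preserved within each level. This is an illustrative sketch, not the actual Kaizala scheduler:

```python
import heapq

CORE, NON_CORE = 0, 1   # lower number = higher priority

queue, seq = [], 0

def enqueue(priority, item):
    global seq
    # The sequence number breaks ties so items at the same
    # priority level drain in FIFO (arrival) order.
    heapq.heappush(queue, (priority, seq, item))
    seq += 1

enqueue(NON_CORE, "receipt:1")
enqueue(CORE, "message:hello")
enqueue(NON_CORE, "receipt:2")
enqueue(CORE, "message:world")

drained = [heapq.heappop(queue)[2] for _ in range(len(queue))]
# Messages drain first, then receipts, each group in arrival order.
```

Under sustained overload, the non-core level can additionally be rate-limited or shed entirely without touching the messaging path.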

Isolation of core components

Core system components that support messaging are now fully isolated from non-core parts, so that any overload does not impact core messaging operations. The isolation is applied at every resource level: separate compute nodes, a separate service bus for event processing, and entirely separate storage for non-core operations.

Reduction in intra-node communication

We made multiple enhancements to our message processing system, significantly reducing the intra-node communication that had created heavy dependencies and slowed down the entire message processing pipeline.

Controlled service rollout

We made several changes to our rollout process to ensure controlled rollout of new features and optimizations and to minimize any negative customer impact. Deployments moved to early-morning slots, when customer load is minimal, to prevent any downtime.

Monitoring and telemetry

We set up dedicated monitoring dashboards that give a quick overview of service health by tracking important parameters, such as CPU consumption, thread count, garbage collection (GC) rate, rate of incoming messages, unprocessed messages, lock contention rate, and connected clients.

GC rate

We have fine-tuned the options that control the rate of Gen2 GC in a cloud service to match the needs of the web and worker instances, ensuring minimal latency impact from GC during customer operations.

Node partitioning

Users are partitioned across multiple nodes using a consistent hashing mechanism to distribute the ownership responsibility. This master ownership ensures that only the required users' information is stored in the in-memory cache on a particular node.
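To make the mechanism concrete, here is a minimal consistent-hash ring sketch (node names and virtual-node count are illustrative assumptions, not Kaizala's values). The key property is that adding or removing a node remaps only a fraction of users, so most in-memory caches stay warm:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Sketch of consistent-hash partitioning: each node owns spans of the
    hash ring via virtual nodes, and a user maps to the next node clockwise."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []   # sorted list of (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, user_id):
        h = self._hash(user_id)
        # First ring entry at or after the user's hash, wrapping around.
        idx = bisect.bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
ring.owner("user-123")   # deterministic owner node for this user
```

Every node can compute the same mapping locally, so routing a request to a user's owner node needs no central lookup.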

Active and passive users

In large group messaging operations, some users are actively using the app while many others are not. Our idea is to prioritize message delivery for active users so that even the last active user in a delivery bucket receives the message fast.

Serialization optimization

Default JSON serialization is costly when input/output operations are very frequent, burning precious CPU cycles. ProtoBuf offers a fast binary serialization protocol, which we leveraged to optimize operations on large data structures.

Scaling compute

We re-evaluated our compute usage across multiple internal test and scale environments and judiciously reduced the compute node SKU to match actual needs and optimize cost of goods sold (COGS). While most of our traffic in an Azure region occurs during the daytime, load is minimal at night, which we leverage to run heavy tasks such as redundant data cleanup, cache cleanups, GC, database re-indexing, and compliance jobs.

Scaling storage

With increasing scale, receipts placed a huge load on the backend service and consumed a lot of storage. While critical operations require highly consistent data, the requirement is weaker for non-critical operations. We moved receipts to highly available NoSQL storage, which costs a tenth of the SQL storage.

Queries for background operations were spread out lazily to reduce the overall peak load on SQL storage. Certain non-critical operations were moved from a strongly consistent to an eventually consistent model to flatten the peak storage load, thus creating more capacity for additional users.

Our future plans

As the COVID-19 situation remains grave, we expect an accelerated pace of Kaizala adoption across multiple customers. To keep up with the increase in messaging load and high customer usage, we are working on new enhancements and optimizations to ensure that we remain ahead of the curve, including:

Developing alternative messaging flows where users actively using the app can directly pull group messages even if the backend system is overloaded. Message delivery is prioritized for active users over passive users.
Aggressively working on distributed in-memory caching of data entities to enable fast user response and alternative designs to keep cache in sync while minimizing stale data.
Moving to container-based deployment model from the current virtual machine (VM)-based model to bring more agility and reduce operational cost.
Exploring alternative storage mechanisms that scale well with massive write operations for large consumer groups, supporting batched data flushes over a single connection.
Actively exploring ideas around active-active service configuration to minimize downtime due to data center outages and minimize Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
Exploring ideas around moving some of the non-core functionalities to passive scale units to utilize the standby compute/storage resources there.
Evaluating the dynamic scaling abilities of Azure Cloud Services, where we can automatically reduce the number of compute nodes during nighttime hours when our user load is less than a fifth of the peak load.

Source: Azure