Plan your migration to Azure VMware Solution using Azure Migrate

Azure Migrate now supports assessments for Azure VMware Solution (AVS), providing even more options for you to plan your migration to Azure. AVS enables you to run VMware natively on Azure. AVS provides a dedicated Software Defined Data Center (SDDC) for your VMware environment on Azure, ensuring you can leverage familiar VMware tools and investments, while modernizing applications over time through integration with Azure native services. Delivered and operated as a service, your private cloud environment provides all the compute, networking, storage, and software required to extend and migrate your on-premises VMware environments to Azure.

As organizations now more than ever look for cost efficiencies, business stability, and consistency, choosing the most efficient migration path is imperative. This means considering a number of different workload scenarios and destinations, such as migrating your servers to Azure Virtual Machines or running your existing VMware workloads natively on Azure with AVS.

Previously, Azure Migrate tooling provided support for migrating Windows and Linux servers to Azure Virtual Machines, as well as support for database, web application, and virtual desktop scenarios. Now, you can use the migration hub to assess machines for migrating to AVS as well.

With the Azure Migrate: Server Assessment tool, you can analyze readiness, Azure suitability, cost planning, performance-based rightsizing, and application dependencies for migrating to AVS. The AVS assessment feature is currently available in preview.

This expanded support allows you to get an even more comprehensive assessment of your datacenter. Compare cloud costs between Azure native virtual machines (VMs) and AVS to make the best migration decisions for your business. Azure Migrate acts as an intelligent hub, gathering insights throughout the assessment to make suggestions, including tooling recommendations for migrating VM or VMware workloads.

How to perform an AVS assessment

You can use all the existing assessment features that Azure Migrate offers for Azure Virtual Machines to perform an AVS assessment. Plan your migration to AVS with up to 35,000 VMware servers in one Azure Migrate project.

Discovery: Use the Azure Migrate: Server Assessment tool to perform a datacenter discovery, either by downloading the Azure Migrate appliance or by importing inventory data through a CSV upload. Read Assess your servers with a CSV import into Azure Migrate to learn more about the import feature.
Group servers: Create groups of servers from the list of machines discovered. Here, you can select whether you’re creating a group for an Azure Virtual Machine assessment or AVS assessment. Application dependency analysis features allow you to refine groups based on connections between applications.
Assessment properties: You can customize AVS assessments by changing the properties and recomputing the assessment. Select a target location, node type, and Redundant Array of Independent Disks (RAID) level. Three locations are currently available (East US, West Europe, and West US), and more will be added as additional nodes are released.
Suitability analysis: The assessment gives you two options for sizing nodes in Azure: performance-based or as on-premises. It checks AVS support for each of the discovered servers and determines whether the server can be migrated “as is” to AVS. If any issues are found, the assessment automatically provides remediation guidance.
Assessment and cost planning report: Run the assessment to get a look into how many machines are in use and what estimated monthly and per-machine costs will be in AVS. The assessment also recommends a tool for migrating the machines to AVS. With this, you have all the information you need to plan and execute your AVS migration as efficiently as possible.
 

AVS Assessment and cost planning report.

 
AVS Readiness report with suggested migration tool.

Learn more

For detailed instructions on how to perform an AVS assessment, go to the documentation page.
Read more about Azure VMware Solution on the website or documentation page.
Learn more about Azure Migrate on the Azure Migrate website.
Watch the latest Azure Migrate video for a demo of performing a server migration.
Check out the new Azure Migrate e-book.

Source: Azure

Azure Firewall Manager is now generally available

Azure Firewall Manager is now generally available and includes Azure Firewall Policy, Azure Firewall in a Virtual WAN Hub (Secure Virtual Hub), and Hub Virtual Network. In addition, we are introducing several new capabilities to Firewall Manager and Firewall Policy to align with the standalone Azure Firewall configuration capabilities.

Key features in this release include:

Threat intelligence-based filtering allow list in Firewall Policy is now generally available.
Support for multiple public IP addresses for Azure Firewall in Secure Virtual Hub is now generally available.
Forced tunneling support for Hub Virtual Network is now generally available.
Configuring secure virtual hubs with Azure Firewall for east-west traffic (private) and a third-party security as a service (SECaaS) partner of your choice for north-south traffic (internet bound). Integration of third-party SECaaS partners is now generally available in all Azure public cloud regions.
Zscaler integration will be generally available on July 3, 2020. Check Point is a supported SECaaS partner and will be in preview on July 3, 2020. iboss integration will be generally available on July 31, 2020.
Support for domain name system (DNS) proxy, custom DNS, and fully qualified domain name (FQDN) filtering in network rules using Firewall Policy is now in preview.

Firewall Policy is now generally available

Firewall Policy is an Azure resource that contains network address translation (NAT), network, and application rule collections, as well as threat intelligence and DNS settings. It’s a global resource that can be used across multiple Azure Firewall instances in Secured Virtual Hubs and Hub Virtual Networks. Firewall policies work across regions and subscriptions.

You do not need Firewall Manager to create a firewall policy. There are many ways to create and manage a firewall policy, including using REST API, PowerShell, or command-line interface (CLI).

After you create a firewall policy, you can associate it with one or more firewalls using Firewall Manager, or using the REST API, PowerShell, or CLI. Refer to the policy overview document for a more detailed comparison of rules and policies.
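To make the programmatic path concrete, here is a minimal, hedged sketch of creating an empty firewall policy through the ARM REST API from Python. It is not an official sample: the subscription, resource group, policy name, and api-version are placeholders you would replace, and you should confirm the current api-version in the Microsoft.Network reference.

```python
# Minimal sketch: create a Firewall Policy via ARM REST. Placeholder names and
# api-version; verify against the Microsoft.Network/firewallPolicies reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "firewall-rg"          # placeholder
policy_name = "base-policy"             # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Network"
    f"/firewallPolicies/{policy_name}?api-version=2020-05-01"
)
body = {
    "location": "westeurope",
    "properties": {"threatIntelMode": "Alert"},  # threat intelligence in alert mode
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.json())
```

Once the policy exists, rule collection groups and DNS settings can be added to it and the policy can be associated with firewalls, as described above.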

Migrating standalone firewall rules to Firewall Policy

You can also create a firewall policy by migrating rules from an existing Azure Firewall. You can use a script to migrate firewall rules to Firewall Policy, or you can use Firewall Manager in the Azure portal.

Importing rules from an existing Azure Firewall.

Firewall Policy pricing

If you just create a Firewall Policy resource, it does not incur any charges. Additionally, a firewall policy is not billed if it is associated with just a single Azure firewall. There are no restrictions on the number of policies you can create.

Firewall Policy pricing is fixed per Firewall Policy per region. Within a region, the price for Firewall Policy managing five firewalls or 50 firewalls is the same. The following example uses four firewall policies to manage 10 distinct Azure firewalls:

Policy 1: cac2020region1policy—Associated with six firewalls across four regions. Billing is done per region, not per firewall.
Policy 2: cac2020region2policy—Associated with three firewalls across three regions and is billed for three regions regardless of the number of firewalls per region.
Policy 3: cac2020region3policy—Not billed because the policy is not associated with more than one firewall.
Policy 4: cacbasepolicy—A central policy that is inherited by all three policies. This policy is billed for five regions. Once again, this pricing is lower than a per-firewall billing approach.

Firewall Policy billing example.

Configure a threat intelligence allow list, DNS proxy, and custom DNS

With this update, Firewall Policy supports additional configurations including custom DNS and DNS proxy settings (preview) and a threat intelligence allow list. SNAT Private IP address range configuration is not yet supported but is in our roadmap.

While Firewall Policy can typically be shared across multiple firewalls, NAT rules are firewall specific and cannot be shared. You can still create a parent policy without NAT rules to be shared across multiple firewalls and a local derived policy on specific firewalls to add the required NAT rules. Learn more about Firewall Policy.

Firewall Policy now supports IP Groups

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. Support for IP Groups is covered in more detail in our recent Azure Firewall blog.

Configure secured virtual hubs with Azure Firewall and a third-party SECaaS partner

You can now configure virtual hubs with Azure Firewall for private traffic (virtual network to virtual network/branch to virtual network) filtering and a security partner of your choice for internet (virtual network to internet/branch to internet) traffic filtering.

A security partner provider in Firewall Manager allows you to use your familiar, best-in-breed, third-party SECaaS offering to protect internet access for your users. With a quick configuration, you can secure a hub with a supported security partner, and route and filter internet traffic from your virtual networks (VNets) or branch locations within a region. This is done using automated route management, without setting up and managing User Defined Routes (UDRs).

You can create a secure virtual hub using Firewall Manager’s Create new secured virtual hub workflow. The following screenshot shows a new secure virtual hub configured with two security providers.

Creating a new secure virtual hub configured with two security providers.

Securing connectivity

After you create a secure hub, you need to update the hub security configuration and explicitly configure how you want internet and private traffic in the hub to be routed. For private traffic, you don’t need to specify prefixes if they fall within the RFC 1918 range. If your organization uses public IP addresses in virtual networks and branches, you need to add those IP prefixes explicitly.

To simplify this experience, you can now specify aggregate prefixes instead of specifying individual subnets. Additionally, for internet security via a third-party security provider, you need to complete your configuration using the partner portal. Please see the security partner provider page for more details.

Selecting a third-party SECaaS for internet traffic filtering.

Secured virtual hub pricing

A secured virtual hub is an Azure Virtual WAN Hub with associated security and routing policies configured by Firewall Manager. Pricing for secured virtual hubs depends on the security providers configured.

See the Firewall Manager pricing page for additional details.

Next steps

For more information on these announcements, see the following resources:

Firewall Manager documentation.
Azure Firewall Manager now supports virtual networks blog.
New Azure Firewall features in Q2 CY2020 blog.

Source: Azure

Build, distribute, and deploy application updates to Azure virtual machine scale sets

As the needs of your business grow, and you deploy business-critical applications at cloud scale, the complexity and administrative overhead of managing those applications can increase substantially. To help reduce this management overhead, Azure continues to invest in new capabilities that make it easier to build and distribute application updates across distributed cloud environments.

We recently announced the general availability of automatic image-based upgrades for custom images, providing you the ability to automatically deploy new versions of virtual machine (VM) images to your virtual machine scale sets. Automatic image upgrade natively integrates with Shared Image Gallery, combining the scalable distribution of VM images with the ease and safety of orchestrated infrastructure updates, to offer an end-to-end solution from image publishing to workload deployment.

This blog describes how you can use integrated Azure services to build custom images with your application updates, distribute those images across your organization and automatically deploy the new images to your virtual machine scale sets.

Build images with application updates

Deploying application and security updates across an organization can often be a complex process, involving multiple stages of deployments across disjointed systems. Standardized VM images allow organizations to ensure consistency across deployments, and these images typically include predefined security and configuration settings, and software workloads.

You can build standardized images through your own imaging pipeline or use the Azure VM Image Builder service. Using Azure VM Image Builder (currently in preview), you can quickly start building standardized images without needing to set up your own imaging pipeline. Just provide a simple configuration describing your image, submit it to the Image Builder service, and the image is built and distributed.

The Azure VM Image Builder lets you start with a Windows or Linux-based Azure Marketplace image, as well as existing custom images, and add your own customizations.

Distribute your images

Shared Image Gallery enables image distribution across multiple subscriptions and regions through a centralized image management platform. Shared Image Gallery helps you organize images in logical groups by specifying different image definitions and image versions, allowing you to iterate new image builds for different applications.

As you build new image versions with Image Builder, you can also distribute these images globally by replicating the images across multiple Azure regions based on your organization’s needs. You only need to specify the target regions and Shared Image Gallery will replicate your image versions to the regions you selected.
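As an illustration of how target regions are expressed, the request body for a new gallery image version might look roughly like the following Python dictionary. This is an assumption-based sketch of the Microsoft.Compute/galleries image version shape, with made-up resource IDs and regions; check the Shared Image Gallery REST reference for the exact property names in your api-version.

```python
# Hypothetical body for PUT .../Microsoft.Compute/galleries/{gallery}/images/
# {imageDefinition}/versions/{version}. IDs, regions, and counts are placeholders.
image_version_body = {
    "location": "westus2",  # region where the image version resource is created
    "properties": {
        "publishingProfile": {
            # Shared Image Gallery replicates this version to each target region.
            "targetRegions": [
                {"name": "westus2", "regionalReplicaCount": 1},
                {"name": "eastus", "regionalReplicaCount": 2},
                {"name": "westeurope", "regionalReplicaCount": 1},
            ],
        },
        "storageProfile": {
            # Source can be a managed image or another gallery image version.
            "source": {
                "id": "/subscriptions/<sub>/resourceGroups/<rg>/providers"
                      "/Microsoft.Compute/images/<managed-image>"
            }
        },
    },
}
```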

Shared Image Gallery also allows you to share your images across subscriptions and Azure Active Directory (Azure AD) tenants, so you can centralize image management across your entire organization.

Deploy your images

The final step in the process is the deployment of your newly created images to your virtual machine scale sets. With automatic OS image upgrade enabled for your scale sets, you do not need to take any additional action to deploy your images. Automatic OS image upgrade monitors your image gallery and automatically begins scale set upgrades when a new image version is deployed, facilitating faster image deployment without manual overhead.

An upgrade works by replacing the OS disk of a VM with a new disk created using the latest image version. Any configured extensions and custom scripts are run on the OS disk, while data disks are retained. To minimize the application downtime, upgrades take place in batches, with no more than 20 percent of the scale set upgrading at any time. The update orchestrator monitors the health of the VMs being upgraded as well as the health of the scale set during the upgrade process. If more than 20 percent of the scale set virtual machines become unhealthy, then the scale set upgrade stops at the end of the current batch. The upgrade process also supports automatic rollback for upgrade failures. This ensures that rollouts are gradual and orchestrated in a safe manner, preventing any scale set-wide disruption caused by a customization in the image.

An upgrade on a scale set only starts when the new image version is replicated to the region of the scale set. You can stagger global deployments by staging imaging replication to different regions at different times, further increasing global application uptime.

Get started

You can start from your image definition under Shared Image Gallery through the Azure portal and use the + Create VMSS option to create a new scale set from your image.

In the create experience for virtual machine scale set, under the Management tab, simply select the On option for Automatic OS upgrades.
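If you deploy scale sets from templates or scripts rather than the portal, the equivalent setting lives in the scale set's upgrade policy. The fragment below is a hypothetical sketch of the relevant properties; the property names follow the Microsoft.Compute/virtualMachineScaleSets schema as we understand it, but verify them against the documentation before use.

```python
# Hypothetical scale set template fragment enabling automatic OS image upgrades.
# Verify property names and the required upgradePolicy.mode in the docs.
scale_set_upgrade_settings = {
    "upgradePolicy": {
        "mode": "Automatic",  # mode requirements may vary; check the documentation
        "automaticOSUpgradePolicy": {
            "enableAutomaticOSUpgrade": True,   # watch the gallery and roll out new versions
            "disableAutomaticRollback": False,  # keep rollback on upgrade failures enabled
        },
    },
    # Automatic OS upgrades also need a health signal, for example a load balancer
    # health probe or the Application Health extension on the instances.
}
```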

You can also further customize the process and integrate your existing image building pipeline with Shared Image Gallery to benefit from automatic OS image upgrade.

Read the Azure documentation to learn more about the powerful capabilities described above.

Automatic OS image upgrade
Shared Image Gallery
Azure VM Image Builder

Source: Azure

New Azure Firewall features in Q2 CY2020

We are pleased to announce several new Azure Firewall features that allow your organization to improve security, have more customization, and manage rules more easily. These new capabilities were added based on your top feedback:

Custom DNS support now in preview.
DNS Proxy support now in preview.
FQDN filtering in network rules now in preview.
IP Groups now generally available.
AKS FQDN tag now generally available.
Azure Firewall is now HIPAA compliant. 

In addition, in early June 2020, we announced Azure Firewall forced tunneling and SQL FQDN filtering are now generally available.

Azure Firewall is a cloud-native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

Custom DNS support now in preview

Since its launch in September 2018, Azure Firewall has been hardcoded to use Azure DNS to ensure the service can reliably resolve its outbound dependencies. Custom DNS provides separation between customer and service name resolution. This allows you to configure Azure Firewall to use your own DNS server and ensures the firewall outbound dependencies are still resolved with Azure DNS. You may configure a single DNS server or multiple servers in Azure Firewall and Firewall Policy DNS settings.

Azure Firewall is also capable of name resolution using Azure Private DNS, as long as your private DNS zone is linked to the firewall virtual network.

DNS Proxy now in preview

With DNS proxy enabled, outbound DNS queries are processed by Azure Firewall, which initiates a new DNS resolution query to your custom DNS server or Azure DNS. This is crucial for reliable FQDN filtering in network rules. You may configure DNS proxy in Azure Firewall and Firewall Policy DNS settings. 

DNS proxy configuration requires three steps:

Enable DNS proxy in Azure Firewall DNS settings.
Optionally configure your custom DNS server or use the provided default.
Finally, you must configure the Azure Firewall’s private IP address as a Custom DNS server in your virtual network DNS server settings. This ensures DNS traffic is directed to Azure Firewall.

 
Figure 1. Custom DNS and DNS Proxy settings on Azure Firewall.
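For teams configuring this through templates, the Firewall Policy fragment below is a hedged illustration of what the custom DNS and DNS proxy settings might look like; the property names and values are assumptions based on the Microsoft.Network/firewallPolicies schema for the preview, so confirm them in the current reference.

```python
# Hypothetical Firewall Policy fragment enabling custom DNS and DNS proxy (preview).
# Property names are assumptions; check the firewallPolicies schema.
firewall_policy_dns_settings = {
    "properties": {
        "dnsSettings": {
            "servers": ["10.0.0.4", "10.0.0.5"],  # your custom DNS servers
            "enableProxy": True,                  # firewall proxies outbound DNS queries
        }
    }
}
# Remember the final step above: point the virtual network's DNS server setting at
# the firewall's private IP address so client DNS traffic actually reaches the proxy.
```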

FQDN filtering in network rules now in preview

You can now use fully qualified domain names (FQDN) in network rules based on DNS resolution in Azure Firewall and Firewall Policy. The specified FQDNs in your rule collections are translated to IP addresses based on your firewall DNS settings. This capability allows you to filter outbound traffic using FQDNs with any TCP/UDP protocol (including NTP, SSH, RDP, and more). As this capability is based on DNS resolution, it is highly recommended you enable the DNS proxy to ensure your protected virtual machines and firewall name resolution are consistent.

FQDN filtering in application rules for HTTP/S and MSSQL is based on application level transparent proxy. As such, it can discern between two FQDNs that are resolved to the same IP address. This is not the case with FQDN filtering in network rules, so it is always recommended you use application rules when possible.

 
Figure 2. FQDN filtering in network rules.
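To show how an FQDN appears in a network rule, here is a hypothetical rule collection group for a Firewall Policy. The collection and rule names, priorities, and field spellings are assumptions used for illustration; the actual schema is defined by the Microsoft.Network rule collection group resource.

```python
# Hypothetical Firewall Policy rule collection group with FQDN filtering in a
# network rule (preview). Names and priorities are placeholders.
network_rule_collection_group = {
    "properties": {
        "priority": 200,
        "ruleCollections": [{
            "ruleCollectionType": "FirewallPolicyFilterRuleCollection",
            "name": "allow-outbound-ssh",
            "priority": 100,
            "action": {"type": "Allow"},
            "rules": [{
                "ruleType": "NetworkRule",
                "name": "ssh-to-github",
                "ipProtocols": ["TCP"],
                "sourceAddresses": ["10.0.0.0/24"],
                "destinationFqdns": ["github.com"],  # resolved via the firewall DNS settings
                "destinationPorts": ["22"],
            }],
        }],
    }
}
```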

IP Groups now generally available

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can give your IP group a name and create one by entering IP addresses or uploading a file. IP Groups eases your management experience and reduces the time spent managing IP addresses by letting you use them in a single firewall or across multiple firewalls. IP Groups is now generally available and supported within a standalone Azure Firewall configuration or as part of Azure Firewall Policy. For more information, see the IP Groups in Azure Firewall documentation.

Figure 3. Creating a new IP Group.
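You can also create an IP Group programmatically. The sketch below is a minimal, unofficial example of the ARM call from Python; subscription, resource group, group name, addresses, and api-version are placeholders.

```python
# Minimal sketch: create an IP Group via ARM REST. Placeholders throughout.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/resourceGroups/firewall-rg/providers/Microsoft.Network"
    "/ipGroups/on-prem-ranges?api-version=2020-05-01"
)
body = {
    "location": "westeurope",
    "properties": {"ipAddresses": ["10.10.0.0/24", "10.20.0.0/24", "192.168.1.10"]},
}
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)
# The resulting IP Group resource ID can then be referenced from sourceIpGroups or
# destinationIpGroups in firewall rules.
```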

AKS FQDN tag now generally available

An Azure Kubernetes Service (AKS) FQDN tag can now be used in Azure Firewall application rules to simplify your firewall configuration for AKS protection. AKS offers a managed Kubernetes cluster on Azure that reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure.

For management and operational purposes, nodes in an AKS cluster need to access certain ports and FQDNs. For more guidance on how to add protection for Azure Kubernetes cluster using Azure Firewall, see Use Azure Firewall to protect Azure Kubernetes Service (AKS) Deployments. 

  Figure 4. Configuring application rule with AKS FQDN tag.
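For reference, an application rule using the AKS FQDN tag might be shaped like the hypothetical Firewall Policy fragment below; the subnet, names, and priorities are assumptions for illustration only.

```python
# Hypothetical application rule using the AKS FQDN tag in a Firewall Policy rule
# collection group. Names and addresses are placeholders.
aks_rule_collection_group = {
    "properties": {
        "priority": 300,
        "ruleCollections": [{
            "ruleCollectionType": "FirewallPolicyFilterRuleCollection",
            "name": "aks-egress",
            "priority": 100,
            "action": {"type": "Allow"},
            "rules": [{
                "ruleType": "ApplicationRule",
                "name": "allow-aks-required-fqdns",
                "sourceAddresses": ["10.240.0.0/16"],  # AKS node subnet (example)
                "protocols": [{"protocolType": "Https", "port": 443},
                              {"protocolType": "Http", "port": 80}],
                "fqdnTags": ["AzureKubernetesService"],  # the AKS FQDN tag
            }],
        }],
    }
}
```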

Next steps

For more information on everything we covered here, see these additional resources:

Azure Firewall documentation.
Azure Firewall Forced Tunneling and SQL FQDN filtering now generally available.
Azure Firewall IP Groups.
Azure Firewall Custom DNS, DNS Proxy (preview).
Azure Firewall FQDN filtering in network rules (preview).
Use Azure Firewall to protect Azure Kubernetes Service (AKS) Deployments. 

Source: Azure

Azure Cost Management + Billing updates – June 2020

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

More flexibility for budget notifications.
Subscribe to active cost recommendations with Advisor digests.
Automate subscription creation in Azure Government.
Subscription ownership transfer improvements.
New ways to save money with Azure.
New videos and learning opportunities.
Documentation updates.

Let's dig into the details.

 

More flexibility for budget notifications

You already know Azure Cost Management budgets keep you informed as your costs increase over time. We're introducing two changes to make it easier than ever to tune your budgets to suit your specific needs.

You can now specify a custom start month for your budget, allowing you to create a budget that starts in the future. This allows you to plan ahead and pre-configure budgets to account for seasonal changes in usage patterns or simply to prepare for the upcoming fiscal year, to name a couple of examples.

You can also add alert thresholds above 100 percent for even greater awareness about how far over budget you are. Not only can you send a separate email to a broader audience when you've hit, let's say, 110 percent of your budget, you can also trigger more critical actions to be performed if costs continue to rise above 100 percent. This can be especially useful for organizations tracking internal margins.
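Budgets can also be created programmatically. The following is a hedged sketch of the Microsoft.Consumption budgets call from Python, showing a future start month and a 110 percent notification threshold; the scope, dates, amounts, email address, and api-version are placeholders, so check the Cost Management REST reference before relying on it.

```python
# Minimal sketch: create a budget with a future start month and a 110% alert
# threshold via the Microsoft.Consumption budgets API. Placeholders throughout.
import requests
from azure.identity import DefaultAzureCredential

scope = "subscriptions/<subscription-id>"  # budgets can also target other scopes
url = (
    f"https://management.azure.com/{scope}/providers/Microsoft.Consumption"
    "/budgets/fy21-team-budget?api-version=2019-10-01"
)
body = {
    "properties": {
        "category": "Cost",
        "amount": 5000,            # budget amount per time grain
        "timeGrain": "Monthly",
        "timePeriod": {            # custom start month in the future
            "startDate": "2020-08-01T00:00:00Z",
            "endDate": "2021-07-31T00:00:00Z",
        },
        "notifications": {
            "over-110-percent": {
                "enabled": True,
                "operator": "GreaterThan",
                "threshold": 110,  # alert above 100 percent of the budget
                "contactEmails": ["finance@contoso.com"],
            }
        },
    }
}
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
print(requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).status_code)
```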

We hope these changes help you plan ahead and take action on overages better. How will you use start dates and alert thresholds to better monitor and optimize costs?

 

Subscribe to active cost recommendations with Advisor digests

Running a truly optimized environment requires diligence. As your environment grows and usage patterns change, it's critical to stay on top of new opportunities to optimize costs. This is where Azure Advisor recommendation digests come in.

Recommendation digests provide an easy and proactive way to stay on top of your active recommendations. You can receive periodic notifications via email, SMS, or other channel by using action groups. Each digest notification includes a summary of your active recommendations and complements Advisor alerts to give you a more complete picture of your cost optimization opportunities.

Advisor alerts notify you about new recommendations as they become available, while recommendation digests summarize all available recommendations that you haven’t yet acted on. Together, Advisor recommendation digests and alerts help you stay current with Azure best practices.

Learn more about Advisor recommendation digests.

 

Automate subscription creation in Azure Government

Managing subscriptions efficiently at scale requires automation. Now, organizations with Azure Government accounts can automate the creation of subscriptions with the Microsoft.Subscription/createSubscription API. This expands on previous subscription management capabilities and brings API parity between Azure Global and Azure Government. What would you like to see next?
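As a rough illustration of what that automation can look like, the sketch below calls the createSubscription API on an enrollment account from Python. Treat every detail as an assumption: the api-version, offer code, and management endpoint shown are placeholders, and Azure Government uses its own ARM endpoint (management.usgovcloudapi.net) and offer types, so consult the subscription creation documentation for the exact values.

```python
# Hedged sketch of Microsoft.Subscription/createSubscription on an enrollment
# account. Endpoint, api-version, and offer code are placeholders; Azure Government
# uses https://management.usgovcloudapi.net and Gov-specific offer codes.
import requests
from azure.identity import DefaultAzureCredential

enrollment_account = "<enrollment-account-name>"
url = (
    "https://management.azure.com/providers/Microsoft.Billing"
    f"/enrollmentAccounts/{enrollment_account}/providers/Microsoft.Subscription"
    "/createSubscription?api-version=2018-03-01-preview"
)
body = {
    "displayName": "team-x-dev",
    "offerType": "<offer-code>",  # look up the correct offer code for your agreement/cloud
    "owners": [{"objectId": "<aad-object-id-of-owner>"}],
}
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.headers.get("Location"))  # creation completes asynchronously
```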

 

Subscription ownership transfer improvements

Whether you're restructuring your environment or simply expanding your scope, you may run into situations where you need to transfer ownership of your Azure subscriptions to another person or organization. And now, you can do that directly from within the Azure portal for even more subscription types. In addition to existing support for Pay-As-You-Go (PAYG) subscriptions, you can now transfer any of the following subscription types to a new owner from the Azure portal:

Microsoft Customer Agreement.
Visual Studio Enterprise.
Microsoft Partner Network (MPN).
Microsoft Azure Sponsorship.

The portal will also clarify and explain why certain subscriptions cannot be transferred and surface any potential issues and reservation warnings, helping you transfer with ease, avoiding any unintended consequences.

Learn more about subscription ownership transfers and let us know how we can improve your ownership transfer experience.

 

New ways to save money with Azure

We're always looking for ways to help you optimize costs. Here's what's new this month:

Save up to 70 percent on spiky and unpredictable workloads with Cosmos DB autoscale.
Save up to 61 percent on Azure Spring Cloud with the new Basic tier.
Azure Dedicated Hosts supports additional virtual machine sizes offering more opportunities to save.
Azure SQL database serverless auto-scaling limits increased from 16 to 40 vCores.
Azure DevTest Labs environments are now available in Azure Government.
Azure DevTest Labs is now available in Switzerland regions.

 

New videos and learning opportunities

For those visual learners out there, here's one new video you might be interested in:

Evaluate and optimize your costs using the Microsoft Azure Well-Architected Framework (29 minutes).

Follow the Azure Cost Management + Billing YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Azure Cost Management + Billing.

 

Documentation updates

Here are a couple documentation updates you might be interested in:

Moved group and filter options into their own document (with a video).
Updated the analyze and manage costs section of the Cost Management best practices.
Added note about using the Monitoring Reader role to analyze resource usage for RBAC scopes.
Clarified what subscriptions are supported by Cost Management within management groups.
Documented Invoices API support for Enterprise Agreement billing accounts.
Documented more Azure Advisor cost optimization recommendations.

Want to keep an eye on all of the documentation updates? Check out the Cost Management + Billing doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

 

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Azure Cost Management + Billing updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

We know these are trying times for everyone. Best wishes from the Azure Cost Management team. Stay safe and stay healthy!
Source: Azure

Azure Digital Twins: Powering the next generation of IoT connected solutions

Last month at Microsoft Build 2020, we announced the new features for Azure Digital Twins, the IoT platform that enables the creation of next-generation IoT connected solutions that model the real world. Today, we are announcing that these updated capabilities are now available in preview. Using the power of IoT, businesses have gained unprecedented insights into their assets. But as connected solutions continue to evolve, companies are looking for ways to create richer models of entire business environments, which is quite challenging even for sophisticated businesses.

Our goal with Azure Digital Twins is to make the creation of sophisticated digital twin solutions easy. With today’s announcement, you can apply your domain expertise on top of Azure Digital Twins to design and build comprehensive digital models of entire environments.

Using Azure Digital Twins, you can gain insights that drive better products, optimization of operations, cost reduction, and breakthrough customer experiences. And you can now do so across environments of all types, including buildings, factories, farms, energy networks, railways, stadiums—even entire cities.

What’s new?

We received a lot of valuable feedback from the Azure Digital Twins preview and we are excited to share the expanded capabilities of Azure Digital Twins that will simplify and accelerate your creation of IoT connected solutions.

Open modeling language

The new preview of Azure Digital Twins lets you create custom models of any connected environment. Using the rich and flexible Digital Twins Definition Language (DTDL), based on the JSON-LD standard, you can tailor your Azure Digital Twins service to the specific needs of your use case.

Real-world environments are created from connected twins. Each twin is modeled using properties, telemetry events, components, and relationships that define how twins can be connected into rich knowledge graphs. DTDL is also used for models throughout other Azure IoT services, including IoT Plug and Play and Azure Time Series Insights.

DTDL is the glue that helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
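To make the modeling concepts concrete, here is a small illustrative DTDL v2 interface expressed as a Python dictionary. The model ID, names, and the related thermostat interface are invented for this example; a real solution would define models that match its own domain and upload their JSON form through the Azure Digital Twins model management APIs.

```python
# Illustrative DTDL v2 interface for a hypothetical "Room" twin; IDs and names are
# made up. The JSON form of this dictionary is what gets uploaded as a model.
room_model = {
    "@id": "dtmi:com:contoso:Room;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Room",
    "contents": [
        {"@type": "Property", "name": "roomName", "schema": "string"},
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
        {"@type": "Relationship", "name": "contains",
         "target": "dtmi:com:contoso:Thermostat;1"},  # connects twins into a graph
    ],
}
```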

As part of our commitment to openness and interoperability, we will continue to promote best practices and shared digital twin models for a wide range of businesses and industry domains through the Digital Twins Consortium and other channels to accelerate your time building valuable IoT connected solutions that span many industry verticals and use cases.

Live execution environment

Azure Digital Twins lets you bring your digital twins to life using data from IoT and other data sources, creating an always-up-to-date digital representation of your environment that is scalable and secure.

Using a robust event system, you can build dynamic business logic and data processing as data flows through the execution environment, and now, you can harness the power of external compute resources, such as Azure Functions. This makes it easy to use pre-existing code with Azure Digital Twins, which provides freedom of choice in terms of programming languages and compute models.

To extract insights from the live execution environment, Azure Digital Twins provides a powerful query system that allows you to search for twins based on a wide range of conditions and relationships.
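As a sketch of what such queries can look like, the strings below use the Azure Digital Twins query language against the hypothetical Room model from the earlier example; the model ID, property names, and twin ID are assumptions, and the strings would be submitted through the query API or an SDK client.

```python
# Illustrative Azure Digital Twins queries against the hypothetical Room model.
find_hot_rooms = (
    "SELECT * FROM DIGITALTWINS T "
    "WHERE IS_OF_MODEL(T, 'dtmi:com:contoso:Room;1') AND T.temperature > 75"
)

# Relationship traversal walks the twin graph, for example a room and the devices
# it contains via the 'contains' relationship.
find_contained_devices = (
    "SELECT device FROM DIGITALTWINS room "
    "JOIN device RELATED room.contains "
    "WHERE room.$dtId = 'room-101'"
)
```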

Input from IoT and business systems

You can easily connect assets such as IoT and IoT Edge devices, as well as existing business systems such as ERP and CRM, to Azure Digital Twins to drive the live execution environment.

You can now use a new or existing Azure IoT Hub to connect, monitor, and manage all of your assets at scale, taking advantage of the full device management capabilities that IoT Hub provides. The ability to use any existing Hub makes it easier to add Azure Digital Twins to existing IoT solutions incrementally.

Using the Azure Digital Twins REST APIs, you can also use data sources other than IoT, unlocking even more actionable insights with Azure Digital Twins.

Output to Azure Time Series Insights, storage, and analytics

You can integrate Azure Digital Twins with other Azure services to build complete end-to-end solutions. You can define event routes that send selected data to downstream services through endpoints that support Event Hubs, Event Grid, or Service Bus. Event routes can send data to Azure Data Lake for long-term storage; to data analytics services such as Azure Synapse Analytics to apply machine learning; to Logic Apps for workflow integration; or to Power BI to extract insights. Another important use case is time series data integration and historian analytics with Azure Time Series Insights.

Combined, these capabilities greatly simplify today’s difficult tasks of modeling and creating a digital representation of an environment, helping you focus on what differentiates your business rather than building and operating complex, distributed systems architecture securely and at scale.

Innovating with customers and partners

Azure Digital Twins is already being used by a broad set of customers and partners. Below are a few examples showcasing the applicability of Azure Digital Twins across a wide range of industries:

Ansys Twin Builder: physics-based digital twins

Physics-based simulation has long been an essential part of the product design process, helping engineers to optimize and validate design choices. With the broad deployment of IoT sensors in products and their environment, it is now possible to apply the same simulation technology after a product has been built, shipped and deployed in the field.  Simulation technology can be used to optimize performance and energy usage or predict failures in a highly accurate and immediate way, without the complexities associated with alternative techniques.

“Ansys Twin Builder lets engineers quickly deliver real-time simulation models for operational use. With Microsoft’s Azure Digital Twins platform, it is now possible to efficiently integrate the simulation-based twins into a broader IoT solution.” —Sameer Kher, Senior Director, Twin Builder Product Line for Ansys

Bentley iTwin: infrastructure digital twins

In the world of infrastructure development, complex CAD data is the backbone of planning, execution and operation of major infrastructures, such as road and rail networks, public works and utilities, industrial plants, and commercial and institutional facilities. Bentley’s iTwin platform captures geometry and metadata of the project and its environment as the source of truth that drives daily decisions throughout the entire lifecycle of the project. As a developer, you can think of it as GitHub for CAD.

“Using Azure Digital Twins, we can bring this backbone to life using raw and processed information from IoT sensors distributed throughout the infrastructure. By bringing a wide range of information sources together into a comprehensive Digital Twin, including CAD data, real-world scans and photometry, IoT sensor data, weather feeds and many more, we can revolutionize the way infrastructure projects are planned, built and operated.” —Pavan Emani, Vice President, iTwin software development for Bentley 

To learn more about the customers and partners using Azure Digital Twins in exciting ways, we encourage you to visit the customer stories covering a spectrum of industry use cases.
  

Get started

We look forward to continuing to deliver on our commitment of simplifying and accelerating your time to value building next-generation IoT connected solutions. We are excited about the role Azure Digital Twins will play helping you gain valuable insights across your environments.

Watch this video to learn more:

 

Get started with Azure Digital Twins today.

Visit the Azure Digital Twins product page.

See the Azure Digital Twins documentation and quick start guides.

Watch the Deep Dive: Azure Digital Twins webinar for a technical walkthrough. Join the event at 9 AM PT on June 29, 2020, for a live Q&A.

Watch the Deep Dive: Bentley and Azure Digital Twins webinar for an architectural overview. Join the event at 9 AM PT on August 3, 2020, for a live Q&A.

Watch how Bentley uses Azure Digital Twins to build Bentley iTwin solution.

Watch the Azure Digital Twins Microsoft Build event session.

Read our customer stories from Ansys and Bentley.

Read Announcing Azure Digital Twins: Create digital replicas of spaces and infrastructure using cloud, AI and IoT to get familiar with the previous release.

Source: Azure

Advancing Azure service quality with artificial intelligence: AIOps

“In the era of big data, insights collected from cloud services running at the scale of Azure quickly exceed the attention span of humans. It’s critical to identify the right steps to maintain the highest possible quality of service based on the large volume of data collected. In applying this to Azure, we envision infusing AI into our cloud platform and DevOps process, becoming AIOps, to enable the Azure platform to become more self-adaptive, resilient, and efficient. AIOps will also support our engineers to take the right actions more effectively and in a timely manner to continue improving service quality and delighting our customers and partners. This post continues our Advancing Reliability series highlighting initiatives underway to keep improving the reliability of the Azure platform. The post that follows was written by Jian Zhang, our Program Manager overseeing these efforts, as she shares our vision for AIOps, and highlights areas of this AI infusion that are already a reality as part of our end-to-end cloud service management.”—Mark Russinovich, CTO, Azure

This post includes contributions from Principal Data Scientist Manager Yingnong Dang and Partner Group Software Engineering Manager Murali Chintalapati.

 

As Mark mentioned when he launched this Advancing Reliability blog series, building and operating a global cloud infrastructure at the scale of Azure is a complex task with hundreds of ever-evolving service components, spanning more than 160 datacenters and across more than 60 regions. To rise to this challenge, we have created an AIOps team to collaborate broadly across Azure engineering teams and partnered with Microsoft Research to develop AI solutions to make cloud service management more efficient and more reliable than ever before. We are going to share our vision on the importance of infusing AI into our cloud platform and DevOps process. Gartner referred to something similar as AIOps (pronounced “AI Ops”) and this has become the common term that we use internally, albeit with a larger scope. Today’s post is just the start, as we intend to provide regular updates to share our adoption stories of using AI technologies to support how we build and operate Azure at scale.

Why AIOps?

There are two unique characteristics of cloud services:

The ever-increasing scale and complexity of the cloud platform and systems
The ever-changing needs of customers, partners, and their workloads

To build and operate reliable cloud services during this constant state of flux, and to do so as efficiently and effectively as possible, our cloud engineers (including thousands of Azure developers, operations engineers, customer support engineers, and program managers) heavily rely on data to make decisions and take actions. Furthermore, many of these decisions and actions need to be executed automatically as an integral part of our cloud services or our DevOps processes. Streamlining the path from data to decisions to actions involves identifying patterns in the data, reasoning, and making predictions based on historical data, then recommending or even taking actions based on the insights derived from all that underlying data.

 
Figure 1. Infusing AI into cloud platform and DevOps.

The AIOps vision

AIOps has started to transform the cloud business by improving service quality and customer experience at scale while boosting engineers’ productivity with intelligent tools, driving continuous cost optimization, and ultimately improving the reliability, performance, and efficiency of the platform itself. When we invest in advancing AIOps and related technologies, we see this ultimately provides value in several ways:

Higher service quality and efficiency: Cloud services will have built-in capabilities for self-monitoring, self-adapting, and self-healing, all with minimal human intervention. Platform-level automation powered by such intelligence will improve service quality (including reliability, availability, and performance) and service efficiency to deliver the best possible customer experience.
Higher DevOps productivity: With the automation power of AI and ML, engineers are freed from the toil of investigating repeated issues and manually operating and supporting their services, and can instead focus on solving new problems, building new functionality, and work that more directly impacts the customer and partner experience. In practice, AIOps empowers developers and engineers with insights so they can avoid sifting through raw data, thereby improving engineer productivity.
Higher customer satisfaction: AIOps solutions play a critical role in enabling customers to use, maintain, and troubleshoot their workloads on top of our cloud services as easily as possible. We endeavor to use AIOps to understand customer needs better, in some cases to identify potential pain points and proactively reach out as needed. Data-driven insights into customer workload behavior could flag when Microsoft or the customer needs to take action to prevent issues or apply workarounds. Ultimately, the goal is to improve satisfaction by quickly identifying, mitigating, and fixing issues.

My colleagues Marcus Fontoura, Murali Chintalapati, and Yingnong Dang shared Microsoft’s vision, investments, and sample achievements in this space during the keynote AI for Cloud–Toward Intelligent Cloud Platforms and AIOps at the AAAI-20 Workshop on Cloud Intelligence in conjunction with the 34th AAAI Conference on Artificial Intelligence. The vision was created by a Microsoft AIOps committee across cloud service product groups including Azure, Microsoft 365, Bing, and LinkedIn, as well as Microsoft Research (MSR). In the keynote, we shared a few key areas in which AIOps can be transformative for building and operating cloud systems, as shown in the chart below.
 

Figure 2. AI for Cloud: AIOps and AI-Serving Platform.

AIOps

Moving beyond our vision, we wanted to start by briefly summarizing our general methodology for building AIOps solutions. A solution in this space always starts with data—measurements of systems, customers, and processes—as the key of any AIOps solution is distilling insights about system behavior, customer behaviors, and DevOps artifacts and processes. The insights could include identifying a problem that is happening now (detect), why it’s happening (diagnose), what will happen in the future (predict), and how to improve (optimize, adjust, and mitigate). Such insights should always be associated with business metrics—customer satisfaction, system quality, and DevOps productivity—and drive actions in line with prioritization determined by the business impact. The actions will also be fed back into the system and process. This feedback could be fully automated (infused into the system) or with humans in the loop (infused into the DevOps process). This overall methodology guided us to build AIOps solutions in three pillars.

Figure 3. AIOps methodologies: Data, insights, and actions.

AI for systems

Today, we're introducing several AIOps solutions that are already in use and supporting Azure behind the scenes. The goal is to automate system management to reduce human intervention. As a result, this helps to reduce operational costs, improve system efficiency, and increase customer satisfaction. These solutions have already contributed significantly to Azure platform availability improvements, especially for Azure IaaS virtual machines (VMs). AIOps solutions have contributed in several ways, including protecting customers’ workloads from host failures through hardware failure prediction and proactive actions such as live migration and Project Tardigrade, and pre-provisioning VMs to shorten VM creation time.

Of course, engineering improvements and ongoing system innovation also play important roles in the continuous improvement of platform reliability.

Hardware Failure Prediction protects cloud customers from interruptions caused by hardware failures. We shared our story of Improving Azure Virtual Machine resiliency with predictive ML and live migration back in 2018. Microsoft Research and Azure have built a disk failure prediction solution for Azure Compute, triggering the live migration of customer VMs from predicted-to-fail nodes to healthy nodes. We also expanded the prediction to other types of hardware issues, including memory and networking router failures. This enables us to perform predictive maintenance for better availability.
Pre-Provisioning Service in Azure brings VM deployment reliability and latency benefits by creating pre-provisioned VMs. Pre-provisioned VMs are pre-created and partially configured VMs ahead of customer requests for VMs. As we described in the IJCAI 2020 publication and the AAAI-20 keynote mentioned above, the Pre-Provisioning Service leverages a prediction engine to predict VM configurations and the number of VMs per configuration to pre-create. This prediction engine applies dynamic models that are trained based on historical and current deployment behaviors and predicts future deployments. Pre-Provisioning Service uses this prediction to create and manage VM pools per VM configuration. Pre-Provisioning Service resizes the pool of VMs by destroying or adding VMs as prescribed by the latest predictions. Once a VM matching the customer's request is identified, the VM is assigned from the pre-created pool to the customer’s subscription.

AI for DevOps

AI can boost engineering productivity and help in shipping high-quality services with speed. Below are a few examples of AI for DevOps solutions.

Incident management is an important aspect of cloud service management—identifying and mitigating rare but inevitable platform outages. A typical incident management procedure consists of multiple stages including detection, engagement, and mitigation stages. Time spent in each stage is used as a Key Performance Indicator (KPI) to measure and drive rapid issue resolution. KPIs include time to detect (TTD), time to engage (TTE), and time to mitigate (TTM).

 
Figure 4. Incident management procedures.

As shared in AIOps Innovations in Incident Management for Cloud Services at the AAAI-20 conference, we have developed AI-based solutions that enable engineers not only to detect issues early but also to identify the right team(s) to engage and therefore mitigate as quickly as possible. Tight integration into the platform enables end-to-end touchless mitigation for some scenarios, which considerably reduces customer impact and therefore improves the overall customer experience.

Anomaly Detection provides an end-to-end monitoring and anomaly detection solution for Azure IaaS. The detection solution targets a broad spectrum of anomaly patterns that includes not only generic patterns defined by thresholds, but also patterns which are typically more difficult to detect such as leaking patterns (for example, memory leaks) and emerging patterns (not a spike, but increasing with fluctuations over a longer term). Insights generated by the anomaly detection solutions are injected into the existing Azure DevOps platform and processes, for example, alerting through the telemetry platform, incident management platform, and, in some cases, triggering automated communications to impacted customers. This helps us detect issues as early as possible.

For an example that has already made its way into a customer-facing feature, Dynamic Threshold is an ML-based anomaly detection model. It is a feature of Azure Monitor used through the Azure portal or through the ARM API. Dynamic Threshold allows users to tune their detection sensitivity, including specifying how many violation points will trigger a monitoring alert.
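For readers who configure alerts from templates, the fragment below sketches what the Dynamic Threshold criteria of a Microsoft.Insights metric alert can look like; the metric name, sensitivity, and failing-period counts are placeholders chosen to show how the violation-point tuning mentioned above is expressed, so verify the schema against the Azure Monitor documentation.

```python
# Hypothetical "criteria" section of a Microsoft.Insights/metricAlerts resource
# using Dynamic Thresholds. Values are placeholders for illustration.
dynamic_threshold_criteria = {
    "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
    "allOf": [{
        "criterionType": "DynamicThresholdCriterion",
        "name": "cpu-anomaly",
        "metricName": "Percentage CPU",
        "timeAggregation": "Average",
        "operator": "GreaterThan",
        "alertSensitivity": "Medium",       # Low / Medium / High detection sensitivity
        "failingPeriods": {
            "numberOfEvaluationPeriods": 4,  # look-back window of evaluations
            "minFailingPeriodsToAlert": 3,   # violation points needed to fire the alert
        },
    }],
}
```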

Safe Deployment serves as an intelligent global “watchdog” for the safe rollout of Azure infrastructure components. We built a system, code name Gandalf, that analyzes temporal and spatial correlation to capture latent issues that happened hours or even days after the rollout. This helps to identify suspicious rollouts (during a sea of ongoing rollouts), which is common for Azure scenarios, and helps prevent the issue propagating and therefore prevents impact to additional customers. We provided details on our safe deployment practices in this earlier blog post and went into more detail about how Gandalf works in our USENIX NSDI 2020 paper and slide deck.

AI for customers

To improve the Azure customer experience, we have been developing AI solutions to power the full lifecycle of customer management. For example, a decision support system has been developed to guide customers towards the best selection of support resources by leveraging the customer’s service selection and verbatim summary of the problem experienced. This helps shorten the time it takes to get customers and partners the right guidance and support that they need.

AI-serving platform

To achieve greater efficiencies in managing a global-scale cloud, we have been investing in building systems that support using AI to optimize cloud resource usage and therefore the customer experience. One example is Resource Central (RC), an AI-serving platform for Azure that we described in Communications of the ACM. It collects telemetry from Azure containers and servers, learns from their prior behaviors, and, when requested, produces predictions of their future behaviors. We are already using RC to predict many characteristics of Azure Compute workloads accurately, including resource procurement and allocation, all of which helps to improve system performance and efficiency.

Looking towards the future

We have shared our vision of AI infusion into the Azure platform and our DevOps processes and highlighted several solutions that are already in use to improve service quality across a range of areas. Look to us to share more details of our internal AI and ML solutions for even more intelligent cloud management in the future. We’re confident that these are the right investment solutions to improve our effectiveness and efficiency as a cloud provider, including improving the reliability and performance of the Azure platform itself.
Source: Azure

Five reasons to view this Azure Synapse Analytics virtual event

The virtual event Azure Synapse Analytics: How It Works is now available on demand. In demos and technical discussions, Microsoft customers explain how they’re using the newest Azure Synapse Analytics capabilities to deliver insights faster, bring together an entire analytics ecosystem in a central location, reduce costs, and transform decision-making.

This post outlines five key reasons to view the one-hour event.

Learn how to deliver powerful insights with speed and ease

Today, it’s critical to have a data-driven culture in your organization. Analytics play a pivotal role in helping many organizations make insights-driven decisions—decisions to transform supply chains, develop new ways to interact with customers, and evaluate new offerings.

At Azure Synapse Analytics: How It Works, customers showed how they combine data ingestion, data warehousing, and big data analytics in a single cloud-native service using Azure Synapse. If you’re a data engineer trying to wrangle multiple data types from multiple sources to create pipelines or a database administrator with responsibilities over your data lake and data warehouse, you’ll see how all this can be simplified in a code-free environment.

Customers also demonstrated how they give their employees access to unprecedented, real-time insights from enterprise data using Azure Synapse with built-in Power BI authoring.

Achieve unprecedented ROI

Companies featured at the event have demonstrated significant cost reductions with cloud analytics solutions. Compared to on-premises solutions, these solutions:

Require lower implementation and maintenance costs.
Reduce analytics project development time.
Provide access to more frequent innovation.
Deliver higher levels of security and business continuity.
Help ensure a better competitive advantage and higher customer satisfaction.

With cloud analytics, organizations pay for data and analytics tools only when needed, pausing consumption when not in use. They can reallocate budget previously spent on hardware and infrastructure management to optimizing processes and launching new projects. In fact, customers average a 271 percent ROI with Azure Synapse—savings that come from lower operating costs, increased productivity, reallocating staff to higher-value activities, and increasing operating income due to improved analytics. Analytics in Azure is up to 14 times faster and costs 94 percent less than other cloud providers.

Deliver a unified analytics experience to everyone in your organization

BI specialists, data engineers, and other IT and data professionals are using Azure Synapse to build, manage, and optimize analytics pipelines, using a variety of skillsets.

Data engineers can use a code-free visual environment for managing data pipelines.
Database administrators can automate query optimization and easily explore data lakes.
Data scientists can build proofs of concept in minutes.
Business analysts can securely access datasets and use Power BI to build dashboards in minutes—all while using the same analytics service.

Analyze data at limitless scale

By viewing the event, you’ll learn how to access and analyze all your data, from your enterprise data lake to multiple data warehouses and big data analytics systems, with blazing speed. Join us to see how data professionals can query both relational and non-relational data using the familiar SQL language, using either serverless or provisioned resources—with Azure Synapse.

Attain unmatched security

Of course, trust is critical for any cloud solution. Customers will share how they take advantage of advanced Azure Synapse security and privacy features such as automated threat detection and always-on data encryption to help ensure that data stays safe and private by using column-level security and native row-level security. You’ll also learn about dynamic data masking, which automatically protects sensitive data in real time.

In summary, by viewing the Azure Synapse Analytics: How It Works virtual event, you’ll learn how to deliver:

Powerful insights.
Unprecedented ROI.
Unified experience.
Limitless scale.
Unmatched security.

Source: Azure

Introducing Azure Load Balancer insights using Azure Monitor for Networks

We are excited to announce that Azure Load Balancer customers now have instant access to a packaged solution for health monitoring and configuration analysis. Built as part of Azure Monitor for Networks, it gives customers topological maps for all their Load Balancer configurations and health dashboards for their Standard Load Balancers, preconfigured with relevant metrics.

Through this, you have a window into the health and configuration of your networks, enabling rapid fault localization and informed design decisions. You can access this through the Insights blade of each Load Balancer resource and Azure Monitor for Networks, a central hub that provides access to health and connectivity monitoring for all your network resources.

Visualize functional dependencies

The functional dependency view will enable you to picture even the most complex load balancer setups. With visual feedback on Load Balancing rules, Inbound NAT rules, and backend pool resources, you can make updates while keeping a complete picture of your configuration in mind.

For Standard Load Balancers, your backend pool resources are color-coded with health probe status, empowering you to visualize your network’s current availability to serve traffic. Alongside this topology, you are presented with a graph of health status over time, giving a snapshot view of your application’s health.

Monitor a rich metric dashboard with no setup needed

After reviewing your topology, you may want to dig further into the data through the detailed metrics page to understand how your Load Balancer is performing. The detailed metrics page is a dashboard preconfigured with separate tabs providing useful insights into Availability, Data Throughput, Flow Distribution, and Connection Latency.

The Overview tab provides a high-level view, and from there you can visit the Frontend & Backend Availability or Data Throughput tabs for more in-depth information.

Through the Frontend & Backend Availability tabs, you are provided with a breakdown of your Load Balancer and backend pool health status over time. You can consult the Data Throughput tab to learn how much data is passed through your services by frontend IP, frontend port, and direction.

The Flow Distribution tab provides visualization of load distribution amongst backend resources. This enables you to see the number of flows being created by each backend instance and to keep track of whether you are approaching the limit.

The Connection Monitors tab plots round-trip latencies from Connection Monitors across the globe on a map. With this, you can evaluate the performance impact that distance from regions around the world has on your service.
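
For automation or alerting scenarios outside the portal, the same Standard Load Balancer metrics that power these tabs can also be queried programmatically. The snippet below is a minimal sketch using the azure-monitor-query and azure-identity Python packages; the metric names and the placeholder resource ID are assumptions to verify against the Load Balancer metrics reference.

```python
# Minimal sketch: querying Standard Load Balancer metrics with azure-monitor-query.
# VipAvailability (data path availability) and DipAvailability (health probe
# status) are assumed metric names; check the Load Balancer metrics reference.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for the Load Balancer to inspect.
lb_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/loadBalancers/<load-balancer-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    lb_resource_id,
    metric_names=["VipAvailability", "DipAvailability"],
    timespan=timedelta(hours=1),                  # last hour of data
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

# Print each metric's averaged data points over the requested window.
for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average)
```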

The new monitoring experience is seamless and straightforward to use, with integrated guides and instructions provided as part of each tab.

One place for all your network monitoring needs

Azure Monitor for Networks fully supports the new monitoring and insights experience for Azure Load Balancer. With all your network resource metrics in a single place, you can quickly filter by type, subscription, and keyword to view the health, connectivity, and alert status of all your Azure network resources such as Azure Firewalls, ExpressRoute, and Application Gateways.

As customers rapidly transition to the cloud and their applications become more complex, they need tools to easily maintain, monitor, and update their network configurations. With the integration of Azure Load Balancer with Azure Monitor for Networks, we deliver a piece of this tooling and look forward to continuing to provide our valued customers with the best-in-class experience they deserve.

Next steps

Learn more about the Azure Load Balancer, Azure Monitor for Networks, and Network Watcher.
Deploy your first Load Balancer, customize your metrics, and create a Connection Monitor.
Give us feedback on this and new features you want to see.

Source: Azure

Azure Support API: Create and manage Azure support tickets programmatically

Large enterprise customers running business-critical workloads on Azure manage thousands of subscriptions and use automation for deployment and management of their Azure resources. Expert support for these customers is critical in achieving success and operational health of their business. Today, customers can keep running their Azure solutions smoothly with self-help resources, such as diagnosing and solving problems in the Azure portal, and by creating support tickets to work directly with technical support engineers.

We have heard feedback from our customers and partners that automating support procedures is key to helping them move faster in the cloud and focus on their core business. Integrating internal monitoring applications and websites with Azure support tickets has been one of their top asks. Customers expect to create, view, and manage support tickets without having to sign in to the Azure portal. This gives them the flexibility to associate the issues they are tracking with the support tickets they raise with Microsoft. The ability to programmatically raise and manage support tickets when an issue occurs is a critical step forward in Azure usability for them.

We’re happy to share that the Azure Support API is now generally available. With this API, customers can integrate the creation and management of support tickets directly into their IT service management (ITSM) system, and automate common procedures.

Using the Azure Support API, you can do the following (a brief example follows this list):

Create a support ticket for technical, billing, subscription management, and subscription and service limits (quota) issues.
Get a list of support tickets with detailed information, and filter by status or created date.
Update severity, status, and contact information.
Manage all communications for a support ticket.
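
As a minimal Python sketch (not an official sample), the snippet below covers the ticket-listing scenario by calling the Support REST API directly with a token from azure-identity. The api-version value, the $filter expression, and the placeholder subscription ID are assumptions to check against the current REST reference.

```python
# Minimal sketch: list open Azure support tickets via the Support REST API.
# The api-version and $filter syntax below are assumptions based on the GA API.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<your-subscription-id>"  # placeholder

# Acquire an ARM token; DefaultAzureCredential works for local dev and managed identities.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Support/supportTickets"
)
params = {
    "api-version": "2020-04-01",    # assumed GA api-version
    "$filter": "status eq 'Open'",  # filter tickets by status
}
response = requests.get(url, params=params,
                        headers={"Authorization": f"Bearer {token.token}"})
response.raise_for_status()

# Print a brief summary of each returned ticket.
for ticket in response.json().get("value", []):
    props = ticket.get("properties", {})
    print(props.get("supportTicketId"), props.get("severity"),
          props.get("status"), props.get("title"))
```

Creating a ticket follows the same pattern with a PUT request to the same endpoint plus a ticket name, and updates use PATCH; the SDKs and interfaces mentioned later in this post wrap these calls for you.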

Benefits of Azure Support API

Reduce the time between finding an issue and getting support from Microsoft

A typical troubleshooting process starts in the customer's own monitoring and incident-management tooling. If the issue is unresolved and identified to be on the Azure side, customers navigate to the Azure portal to contact support. With programmatic case management access, customers can instead automate their support process with their internal tooling to create and manage support tickets, reducing the time between finding an issue and contacting support.

Customers now have one end-to-end process that flows smoothly from internal to external systems, without the person filing the issue having to deal with the complexity and challenges of working across separate case management systems.

Create support tickets via ARM templates

Deploying an ARM template that creates resources can sometimes result in a ResourceQuotaExceeded deployment error, indicating that you have exceeded your Azure subscription and service limits (quotas). This happens because quotas are applied at the resource group, subscription, account, and other scopes. For example, your subscription may be configured to limit the number of cores for a region. If you attempt to deploy a virtual machine with more cores than the permitted amount, you receive an error stating the quota has been exceeded. The way to resolve it is to request a quota increase by filing a support ticket. With the Support API in place, you can avoid signing in to the Azure portal to create a ticket and instead request quota increases directly via ARM templates.
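
As a rough illustration of that pattern, the sketch below authors such a template from Python and writes it to disk for a subscription-scoped deployment. The Microsoft.Support/supportTickets resource type is real, but the apiVersion, the property names (contactDetails, quotaTicketDetails, and their fields), and the placeholder IDs are assumptions to verify against the Microsoft.Support template reference.

```python
# Minimal sketch: an ARM template that requests a quota increase by creating a
# Microsoft.Support/supportTickets resource. Field names and values marked as
# placeholders or assumptions must be checked against the current schema.
import json

template = {
    # Support tickets are subscription-scoped, so this is a subscription-level template.
    "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Support/supportTickets",
            "apiVersion": "2020-04-01",  # assumed API version
            "name": "vcpu-quota-increase-eastus",
            "properties": {
                "title": "Increase regional vCPU quota in East US",
                "description": "Deployment failed with ResourceQuotaExceeded; requesting a higher core limit.",
                "severity": "minimal",
                "serviceId": "<quota-service-resource-id>",  # placeholder
                "problemClassificationId": "<problem-classification-resource-id>",  # placeholder
                "contactDetails": {  # abridged; assumed field names
                    "firstName": "Jane",
                    "lastName": "Doe",
                    "primaryEmailAddress": "jane@contoso.com",
                    "preferredContactMethod": "email",
                    "preferredTimeZone": "Pacific Standard Time",
                    "preferredSupportLanguage": "en-US",
                    "country": "USA",
                },
                "quotaTicketDetails": {  # assumed shape of the quota payload
                    "quotaChangeRequestVersion": "1.0",
                    "quotaChangeRequests": [
                        {"region": "EastUS", "payload": "{\"VMFamily\": \"Dv3\", \"NewLimit\": 100}"}
                    ],
                },
            },
        }
    ],
}

# Write the template to disk; it can then be deployed at subscription scope,
# for example with: az deployment sub create --location eastus --template-file quota-ticket.json
with open("quota-ticket.json", "w") as f:
    json.dump(template, f, indent=2)
```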

Getting started

The Azure Support API is available with a Professional Direct, Premier, or Unified technical support plan.

For detailed examples using .NET and C#, refer to our code samples.

View the list of all languages and interfaces we support for ticket creation and management. As always, you can also directly use the Support REST API.

Use the API and tell us about it

We are looking forward to hearing your feedback about the Azure Support API. In the Azure support feedback forum, you can post ideas and suggestions for the API and other aspects of the support experience.

To report an API issue, go to the issues section of the GitHub repository for the language or interface you're using. For example, go to the repository for issues with the PowerShell cmdlets. Select New issue and tag it with the labels Support and Service Attention.

Source: Azure