Introducing Azure Dedicated Host

We are excited to announce the preview of Azure Dedicated Host, a new Azure service that enables you to run your organization’s Linux and Windows virtual machines on single-tenant physical servers. Azure Dedicated Hosts provide you with visibility and control to help address corporate compliance and regulatory requirements. We are extending Azure Hybrid Benefit to Azure Dedicated Hosts, so you can save money by using on-premises Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses. Azure Dedicated Host is in preview in most Azure regions starting today.

You can use the Azure portal to create Azure Dedicated Hosts and host groups (collections of hosts), and to assign Azure Virtual Machines to hosts during the virtual machine (VM) creation process.

Visibility and control

Azure Dedicated Hosts can help address compliance requirements organizations may have in terms of physical security, data integrity, and monitoring. This is accomplished by giving you the ability to place Azure VMs on a specific and dedicated physical server. This offering also meets the needs of IT organizations seeking host-level isolation.

Azure Dedicated Hosts provide visibility over the server infrastructure running your Azure Virtual Machines. They allow you to gain further control over:

The underlying hardware infrastructure (host type)
Processor brand, capabilities, and more 
Number of cores
Type and size of the Azure Virtual Machines you want to deploy

You can mix and match different Azure Virtual Machine sizes within the same virtual machine series on a given host.

With an Azure Dedicated Host, you can control all host-level platform maintenance initiated by Azure (e.g., host OS updates). An Azure Dedicated Host gives you the option to defer host maintenance operations and apply them within a defined 35-day maintenance window. During this self-maintenance window, you can apply maintenance to your hosts at your convenience, gaining full control over the sequence and velocity of the maintenance process.

Licensing cost savings

We now offer Azure Hybrid Benefit for Windows Server and SQL Server on Azure Dedicated Hosts, making it the most cost-effective dedicated cloud service for Microsoft workloads.

Azure Hybrid Benefit allows you to use existing Windows Server and SQL Server licenses with Software Assurance, or qualifying subscription licenses, to pay a reduced rate on Azure services. Learn more by referring to the Azure Hybrid Benefit FAQ.
We are also expanding Azure Hybrid Benefit so you can take advantage of unlimited virtualization for Windows Server and SQL Server with Azure Dedicated Hosts. Customers with Windows Server Datacenter licenses and Software Assurance can use unlimited virtualization rights in Azure Dedicated Hosts. In other words, you can deploy as many Windows Server virtual machines as you like on the host, subject only to the physical capacity of the underlying server. Similarly, customers with SQL Server Enterprise Edition licenses and Software Assurance can use unlimited virtualization rights for SQL Server on their Azure Dedicated Hosts.
Consistent with other Azure services, customers will get free Extended Security Updates for Windows Server 2008/R2 and SQL Server 2008/R2 on Azure Dedicated Host. Learn more about how to prepare for SQL Server and Windows Server 2008 end of support.

Azure Dedicated Hosts also allow you to use other existing software licenses, such as those for SUSE Linux Enterprise Server or Red Hat Enterprise Linux. Check with your vendors for detailed license terms.

With the introduction of Azure Dedicated Hosts, we’re updating the outsourcing terms for Microsoft on-premises licenses to clarify the distinction between on-premises/traditional outsourcing and cloud services. For more details about these changes, read the blog “Updated Microsoft licensing terms for dedicated hosted cloud services.” If you have any additional questions, please reach out to your Microsoft account team or partner.

Getting started

The preview is available now. Get started with your first Azure Dedicated Host.

You can deploy Azure Dedicated Hosts with an Azure Resource Manager (ARM) template, the Azure CLI, PowerShell, or the Azure portal. For a more detailed overview, please refer to our website and the documentation for both Windows and Linux.

Frequently asked questions

Q: Which Azure Virtual Machines can I run on Azure Dedicated Host?

A: During the preview period, you can deploy the Dsv3 and Esv3 Azure Virtual Machine series. Support for Fsv2 virtual machines is coming soon. Any virtual machine size from a given series can be deployed on an Azure Dedicated Host instance, subject to the physical capacity of the host. For additional information, please refer to the documentation.

Q: Which Azure Disk Storage solutions are available to Azure Virtual Machines running on an Azure Dedicated Host?

A: Azure Standard HDDs, Standard SSDs, and Premium SSDs are all supported during the preview program. Learn more about Azure Disk Storage.

Q: Where can I find pricing and more details about the new Azure Dedicated Host service?

A: You can find more details about the new Azure Dedicated Host service on our pricing page.

Q: Can I use Azure Hybrid Benefit for Windows Server/SQL Server licenses with my Azure Dedicated Host?

A: Yes, you can lower your costs by taking advantage of Azure Hybrid Benefit for your existing Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses. With Windows Server Datacenter and SQL Server Enterprise Editions, you get unlimited virtualization when you license the entire host and use Azure Hybrid Benefit. As a result, you can deploy as many Windows Server virtual machines as you like on the host, subject to the physical capacity of the underlying server. All Windows Server and SQL Server workloads in Azure Dedicated Hosts are also eligible for free Extended Security Updates for Windows Server and SQL Server 2008/R2.

Q: Can I use my Windows Server/SQL Server licenses with dedicated cloud services?

A: In order to make software licenses consistent across multitenant and dedicated cloud services, we are updating licensing terms for Windows Server, SQL Server, and other Microsoft software products for dedicated cloud services. Beginning October 1, 2019, new licenses purchased without Software Assurance and mobility rights cannot be used in dedicated hosting environments in Azure and certain other cloud service providers. This is consistent with our policy for multitenant hosting environments. However, SQL Server licenses with Software Assurance can continue to use their licenses on dedicated hosts with any cloud service provider via License Mobility, even if licenses were purchased after October 1, 2019. Customers may use on-premises licenses purchased before October 1, 2019 on dedicated cloud services. For more details regarding licensing, please read the blog “Updated Microsoft licensing terms for dedicated hosted cloud services.”

For additional information, please refer to the Azure Dedicated Host website and the Azure Hybrid Benefit page.
Source: Azure

Moving your VMware resources to Azure is easier than ever

Back in April we announced the Azure VMware Solution to deliver a comprehensive VMware environment allowing you to run native VMware-based workloads on Azure. It’s a fully managed platform as a service (PaaS) that includes vSphere, vCenter, vSAN, NSX-T, and corresponding tools.

The VMware environment runs natively on Azure’s bare metal infrastructure, so there’s no nested virtualization and you can continue using your existing VMware tools. There’s no need to worry about operating, scaling, or patching the VMware physical infrastructure or re-platforming your virtual machines. The other benefit of this solution is that you can stretch your on-premises subnets into Azure. It’s like connecting another location to your VMware environment, only that location happens to be in Azure.

We’ve recently published a new episode of Microsoft Mechanics featuring Markus Hain, Senior Program Manager from the Azure engineering team. In this episode, Markus walks through the experience of coming from an on-premises VMware vSphere environment, provisioning an Azure VMware Solution private cloud, getting both environments to communicate, and what you can do once the service is up and running.

Beyond building out and configuring the environment, Markus explains how the hybrid networking works to connect VMware sites and how the service translates bidirectional traffic between virtual networks used in Azure with virtual LANs (VLANs) used in VMware.

Once the services are running, it’s easy to use vMotion as you normally would between VMware sites. We show a simple vMotion migration to move virtual machine workloads into Azure. As your VMware workloads start to run in Azure, you can seamlessly integrate Azure services with your existing VMware workloads. For example, your developers can create new VMware virtual machines inside the Azure portal using the same VMware templates from the on-premises environment, ultimately running those virtual machines in your VMware private cloud in Azure.

Virtual machines created in the Azure portal will be visible, accessible, and run in the VMware vSphere environment. You have the flexibility to manage those resources as you normally would in vSphere, Azure, or both. The environments are deeply integrated at the API level to ensure that what you see in either experience is synchronized. This enables hybrid management, as well as allowing your developers to manage both Azure and VMware resources using a single Azure Resource Manager template.

What’s more, you can monitor those virtual machines like you would Azure infrastructure as a service (IaaS) virtual machines and connect them to the broad set of resources across data, compute, networking, storage, and more. In fact, Markus shows how you can configure an application gateway running in Azure to load balance inbound traffic to your virtual machines running in the Azure VMware Solution. Since this is a truly hybrid and deeply integrated set of services, there’s really no limit to how you architect your apps and solutions, and, like a native cloud service, you can elastically scale the number of VMware nodes you need to match seasonal or otherwise variable demand.

Right now, the Azure VMware Solution by CloudSimple is available in East US and West US regions. Western Europe is coming next, and we’ll add more regions over the coming months. To get started, just search for “vmware” while signed into the Azure portal and provision the service, nodes, and virtual machines. You’ll then be on your way to running your own private cloud in Azure!

For more information, check out our Azure VMware Solution site.

Source: Azure

Azure Cost Management updates – July 2019

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Azure Cost Management for partners
Marketplace usage for pay-as-you-go (PAYG) subscriptions
Cost Management Labs
Save and share customized views directly in cost analysis
Viewing costs in different currencies
Manage EA departments and policies from the Azure portal
Expanded availability of resource tags in cost reporting
Tag your resources with up to 50 tags
Documentation updates

Let's dig into the details.

 

Azure Cost Management for partners

Partners play a critical role in successful planning, implementation, and long-term cloud operations for organizations, big and small. Whether you're a partner who sells to or manages Azure on behalf of another organization or you're working with a partner to help keep you focused on your core mission instead of managing infrastructure, you need a way to understand, control, and optimize your cloud costs. This is where Azure Cost Management comes in!

In June, we announced new capabilities in the Cloud Solution Provider (CSP) program coming in October 2019. With this update, CSP partners can onboard customers using the same Microsoft Customer Agreement (MCA) platform used across Azure. CSP partners and customers will see product alignment, which includes common Azure Cost Management tools, available at the same time they're available for pay-as-you-go (PAYG) and enterprise customers.

Azure Cost Management capabilities optimized for partners and their customers will be released over time, starting with the ability to enable Azure Cost Management for MCA customers. You'll see periodic updates throughout Q4 2019 and 2020, including support for customers who do not transition to MCA. Once enabled, partners and customers will have the full benefits of Azure Cost Management.

If you're a managed service provider, be sure to check out Azure Lighthouse, which enables partners to more efficiently manage resources at scale across customers and directories. Help your customers manage their Azure and AWS costs in a single place with Azure Cost Management!

Stay tuned for more updates in October 2019. We're eager to bring much-anticipated Azure Cost Management capabilities to partners and their customers!

 

Marketplace usage for pay-as-you-go (PAYG) subscriptions

Last month, we talked about how effective cost management starts by getting all your costs into a single place with a single taxonomy. Now, with the addition of Azure Marketplace usage for pay-as-you-go (PAYG) subscriptions, you have a more complete picture of your costs.

Azure and Marketplace charges have different billing cycles. To investigate and reconcile billed charges, select the appropriate Azure or Marketplace invoice period in the date picker. To view all charges together, select calendar months and group by publisher type to see a breakdown of your Azure and Marketplace costs.

 

Cost Management Labs

Cost Management Labs are the way to get the latest cost management features and enhancements! It's the same great service you're used to, but with a few extra features we're testing and looking for feedback on before releasing to the world. This is your chance to drive the direction and impact the future of Azure Cost Management.

Participating in Cost Management Labs is as easy as opening the Azure preview portal and selecting Cost Management from Azure Home. On the Cost Management overview, you'll see the preview features available for testing and have links to share new ideas or report any bugs that may pop up. Reporting a bug is a direct line back to the Azure Cost Management engineering team, where we'll work with you to understand and resolve the issue.

Here's what you'll see in Cost Management Labs today:

Save and share customized views directly within cost analysis
Download your customized view in cost analysis as an image
Several small bug fixes and improvements, like minor design changes within cost analysis

Of course, that's not all! There's more coming and we're very eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today!

 

Save and share customized views in cost analysis

Customizing a view in cost analysis is easy. Just pick the date range you need, group the data to see a breakdown, choose the right visualization, and you're good to go! Pin your view to a dashboard for one-click access, then share the dashboard with your team so everyone can track cost from a single place.

You can also share a direct link to your customized view so others can copy and personalize it for themselves:

Both sharing options offer flexibility, but sometimes you need something more convenient: the ability to save customized views and share them with others directly from within cost analysis. Now you can!

People with Cost Management Contributor (or greater) access can create shared views. You can create up to 50 shared views per scope.

Anyone can save up to 50 private views, even if they only have read access. These views cannot be shared with others directly in cost analysis, but they can be pinned to a dashboard or shared via URL so others can save a copy.

All views are accessible from the view menu. You'll see your private views first, then those shared across the scope, and lastly the built-in views which are always available.

Need to share your view outside of the portal? Simply download the charts as an image and copy it into an email or presentation, for example, to share it with your team. You'll see a slightly redesigned Export menu, which now offers a PNG option when viewing charts. The table view cannot be downloaded as an image.

You'll also see a few small design changes to the filter bar in the preview:

The scope pill shows more of the scope name for added clarity
The view menu has been restyled based on its growing importance with saved views
The granularity and group by pickers are closer to the main chart to address confusion about what they apply to

This is just the first step. There's more to come. Try the preview today and let us know what you'd like to see next! We're excited to hear your ideas!

 

Viewing costs in different currencies

Every organization has its own unique setup and challenges. You may get a single Azure invoice or perhaps you need separate invoices per department. You may even be in a multi-national organization with multiple billing accounts in different currencies. Or perhaps you simply moved subscriptions between billing accounts in different currencies. Regardless of how you ended up with multiple currencies, you haven't had a way to view costs in the portal. Now you can!

When cost analysis detects multiple currencies, you'll have an option to switch between them, viewing costs in each currency individually. Today, this only shows charges for the selected currency – cost analysis is not converting currencies. For example, if you have two charges, one for $1 and another for £1, you can see either USD only ($1) or GBP only (£1). You cannot see $1+£1 in USD or GBP today. In the future, Azure Cost Management will convert costs into a single currency to show everything in USD (e.g. $2.27 in this case) and eventually in a currency you select (e.g. ¥243.43).
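Conceptually, cost analysis is grouping charges by their billing currency rather than converting and summing them. The following sketch is purely illustrative (the charge amounts and record shape are made up for this example):

```python
from collections import defaultdict

# Hypothetical charges as (currency, amount) pairs across billing accounts
charges = [("USD", 1.00), ("GBP", 1.00), ("USD", 3.50)]

totals = defaultdict(float)
for currency, amount in charges:
    totals[currency] += amount  # amounts are never converted across currencies

print(dict(totals))  # {'USD': 4.5, 'GBP': 1.0}
```

Each currency gets its own total, which is exactly why you switch between currencies in the portal rather than seeing a single combined figure.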

 

Manage EA departments and policies from the Azure portal

If you manage an Enterprise Agreement (EA), you're all too familiar with the Enterprise portal, which lets you keep an eye on your usage, monetary commitment credits, and additional charges each month. Did you know you can also do this in the Azure portal? With richer reporting in cost analysis and finer-grained control with budgets, the Azure portal delivers even more capabilities to understand and control your costs.

Now, you can also create and manage your departments and policy settings from the Azure portal. Departments allow you to organize subscriptions and delegate access to account owners, while policy settings allow you to enable or disable reservations, Azure Marketplace purchases, and Azure Cost Management for your organization. To ensure everyone in the organization can see and manage costs, make sure you enable account owners to view charges.

Enabling account owners to view charges also ensures subscription users with RBAC access have visibility into their costs throughout the lifetime of their resources, can control spending with budgets, and can optimize their spending with cost-saving recommendations. Enabling cost visibility is critical to driving accountability throughout your organization. Once enabled, you can manage finer-grained access with the Cost Management Reader and Cost Management Contributor roles on any resource group, subscription, or management group. We recommend Cost Management Contributor to ensure everyone can create and share Azure Cost Management views and budgets across the resources and costs they have visibility to.

If you're still using the enterprise portal on a regular basis, we encourage you to give the Azure portal a shot. Simply go to the portal and click Cost Management + Billing in the list of favorites on the left.

And don't forget to plan your move from the key-based EA APIs (such as consumption.azure.com) to the latest UsageDetails API (version 2019-04-01-preview or newer). The key-based APIs will not be supported after your next EA renewal into Microsoft Customer Agreement (MCA) and switching to the UsageDetails API now will streamline this transition and minimize future migration work.

 

Expanded availability of resource tags in cost reporting

Tagging is the best way to organize and categorize your resources outside of the built-in management group, subscription, and resource group hierarchy. Add your own metadata and build custom reports using cost analysis. While most Azure resources support tags, some resource types do not. Here are the latest resource types which now support tags:

VPN gateways

Remember tags are a part of every usage record and are only available in Azure Cost Management reporting after the tag is applied. Historical costs are not tagged. Update your resources today for the best cost reporting.
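In other words, tag-based cost reporting behaves like a group-by over the tag values stamped on each usage record, and records emitted before a tag was applied simply have no value for it. A minimal sketch (the record shape and tag names here are assumptions for illustration only):

```python
from collections import defaultdict

# Hypothetical usage records: (cost, tags), where tags were captured
# at the time the usage was emitted.
records = [
    (10.0, {"env": "prod"}),
    (4.0,  {"env": "dev"}),
    (2.5,  {}),  # usage recorded before the "env" tag was applied
]

cost_by_env = defaultdict(float)
for cost, tags in records:
    cost_by_env[tags.get("env", "untagged")] += cost

print(dict(cost_by_env))  # {'prod': 10.0, 'dev': 4.0, 'untagged': 2.5}
```

The "untagged" bucket is why tagging resources early matters: historical usage never moves out of it.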

 

Tag your resources with up to 50 tags

To effectively manage costs in a large organization, you need to map costs to reporting entities. Whether you're breaking down cost by organization, application, environment, or some other construct, resource tags are a great way to add that metadata and reuse it for cost, health, security, and compliance tracking and enforcement. But as your reporting needs change over time, you may have hit the 15 tag limit on resources. No more! You can now apply up to 50 tags to each resource!

To learn more about tag management and the benefits of tags, see "Use tags to organize your Azure resources".

 

Documentation updates

Lots of documentation updates! Here are a few you might be interested in:

Updated Marketplace usage status for PAYG in "Understand Cost Management data"
Updated PAYG usage terminology in "Understand the terms in your Azure usage and charges file"
Added forecast to "Explore and analyze costs with cost analysis"
Expanded details about viewing reservations in cost analysis in "Get Enterprise Agreement reservation costs and usage"
Added resource group scoping to multiple docs for reservations
Created new "How to buy" and "How the discount is applied" docs for Azure Databricks reservations
Added instance size flexibility to the "How to buy" and "How the discount is applied" virtual machine reservation docs
Added steps on how to rename your Azure subscriptions to "Change the profile information for your Azure account"
Lots of updates across multiple docs for Microsoft Customer Agreements

Want to keep an eye on all documentation updates? Check out the Azure Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select "Edit" at the top of the doc and submit a quick pull request.

 

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming!

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks! And, as always, share your ideas and vote up others in the Azure Cost Management feedback forum.
Source: Azure

Understanding and leveraging Azure SQL Database’s SLA

When data is the lifeblood of your business, you want to ensure your databases are reliable, secure, and available when called upon to perform. Service level agreements (SLAs) set an expectation for uptime and performance, and are a key input for designing systems to meet business needs. We recently published a new version of the SQL Database SLA, guaranteeing the highest availability among relational database services as well as introducing the industry’s first business continuity SLA. These updates further cement our commitment to ensuring your data is safe and that the apps and processes your business relies upon continue running in the face of a disruptive event.

As we indicated in the recent service update, we made two major changes in the SLA. First, Azure SQL Database now offers a 99.995% availability SLA for zone redundant databases in its business critical tier. This is the highest SLA in the industry among all relational database services. It is also backed by up to a 100% monthly cost credit for when the SLA is not maintained. Second, we offer a business continuity SLA for databases in the business critical tier that are geo-replicated between two different Azure regions. That SLA comes with very strong guarantees of a five second recovery point objective (RPO) and a 30 second recovery time objective (RTO), including a 100% monthly cost credit when the SLA is not maintained. Azure SQL Database is the only relational database service in the industry offering a business continuity SLA.

The following table provides a quick side by side comparison of different cloud vendors’ SLAs.

                      Availability             Business continuity
Platform              Uptime      Max Credit   RTO          Max Credit   RPO         Max Credit
Azure SQL Database    99.995%     100%         30 seconds   100%         5 seconds   100%
AWS RDS               99.95%      100%         n/a          n/a          n/a         n/a
GCP Cloud SQL         99.95%      50%          n/a          n/a          n/a         n/a
Alibaba ApsaraDB      99.9%       25%          n/a          n/a          n/a         n/a
Oracle Cloud          99.99%      25%          n/a          n/a          n/a         n/a

Data current as of July 18, 2019 and subject to change without notice.

Understanding availability SLA

The availability SLA reflects SQL Database’s ability to automatically handle disruptive events that periodically occur in every region. It relies on the in-region redundancy of the compute and storage resources, constant health monitoring, and self-healing operations using automatic failover within the region. These operations rely on synchronously replicated data and incur zero data loss. Therefore, uptime is the most important metric for availability.

Azure SQL Database will continue to offer a baseline 99.99% availability SLA across all of its service tiers, but now provides a higher 99.995% SLA for the business critical or premium tiers in the regions that support availability zones. The business critical tier, as the name suggests, is designed for the most demanding applications, both in terms of performance and reliability. By integrating this service tier with Azure availability zones (AZs), we leverage the additional fault tolerance and isolation that AZs provide, which in turn allows us to offer a higher availability guarantee using the compute and storage redundancy across AZs and the same self-healing operations.

Because the compute and storage redundancy is built in for business critical databases and elastic pools, using availability zones comes at no additional cost to you. Our documentation, “High-availability and Azure SQL Database,” provides more details of how the business critical service tier leverages availability zones. You can also find the list of regions that support AZs in our documentation, “What are Availability Zones in Azure.”

99.99% availability means that for any database, including those in the business critical tier, the downtime should not exceed 52.56 minutes per year. Zone redundancy increases availability to 99.995%, which means a maximum downtime of only 26.28 minutes per year, a 50% reduction. A minute of downtime is defined as a period during which all attempts to establish a connection failed. To achieve this level of availability, all you need to do is select the zone redundant configuration when creating a business critical database or elastic pool. You can do so programmatically using a create or update database API, or in the Azure portal as illustrated in the following diagram.
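These downtime budgets follow directly from the SLA percentages. As a quick illustrative sketch (assuming a 365-day year, as the figures above do):

```python
def max_downtime_minutes_per_year(sla: float) -> float:
    """Maximum downtime per year allowed by an uptime SLA (365-day year)."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - sla) * minutes_per_year

print(round(max_downtime_minutes_per_year(0.9999), 2))   # 52.56 (baseline SLA)
print(round(max_downtime_minutes_per_year(0.99995), 2))  # 26.28 (zone redundant)
```

Halving the allowed unavailability (0.01% to 0.005%) is what yields the 50% reduction in the yearly downtime budget.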

We recommend using the Gen5 compute generation because the zone redundant capacity is based on Gen5 in most regions. The conversion to a zone redundant configuration is an asynchronous online process, similar to what happens when you change the service tier or compute size of the database. It does not require quiescing or taking your application offline. As long as your connectivity logic is properly implemented, your application will not be interrupted during this transition.

Understanding business continuity SLA

Business continuity is the ability of a service to quickly recover and continue to function during catastrophic events with an impact that cannot be mitigated by the in-region self-healing operations. While these types of unplanned events are rare, their impact can be dramatic. Business continuity is implemented by provisioning stand-by replicas of your databases in two or more geographically separated locations. Because of the long distances between those locations, asynchronous data replication is used to avoid performance impact from network latency. The main trade-off of using asynchronous replication is the potential for data loss. The active geo-replication feature in SQL Database is designed to enable business continuity by creating and managing geographically redundant databases. It’s been in production for several years and we have plenty of telemetry to support very aggressive guarantees.

There are two common metrics used to measure the impact of business continuity events. Recovery time objective (RTO) measures how quickly the availability of the application can be restored. Recovery point objective (RPO) measures the maximum expected data loss after availability is restored. Not only do we provide SLAs of five seconds for RPO and 30 seconds for RTO, but we also offer an industry-first 100% service credit if these SLAs are not met. That means if any of your database failover requests do not complete within 30 seconds, or any time the replication lag exceeds five seconds at the 99th percentile within an hour, you are eligible for a service credit for 100% of the monthly cost of the secondary database in question. To qualify for the service credit, the secondary database must have the same compute size as the primary. Note, however, that these metrics should not be interpreted as a guarantee of automatic recovery from a catastrophic outage. They reflect Azure SQL Database’s reliability and performance when synchronizing your data and the speed of the failover when your application requests it. If you prefer a fully automated recovery process, you should consider auto-failover groups with an automatic failover policy, which has a one-hour RTO.

To measure the duration of the failover request, i.e., the RTO compliance, you can use the following query against sys.dm_operation_status in the master database on the secondary server. Please be aware that the operation status information is only kept for 24 hours.

SELECT datediff(s, start_time, last_modify_time) AS [Failover time in seconds]
FROM sys.dm_operation_status
WHERE major_resource_id = '<my_secondary_db_name>'
  AND operation = 'ALTER DATABASE FORCE FAILOVER ALLOW DATA LOSS'
  AND state = 2
ORDER BY start_time DESC;

The following query against sys.dm_replication_link_status in the primary database will show replication lag in seconds, i.e. the RPO compliance, for the secondary database created on partner_server. You should run the same query every 30 seconds or less to have a statistically significant set of measurements per hour.

SELECT link_guid, partner_server, replication_lag_sec FROM sys.dm_replication_link_status
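Since RPO compliance is defined in terms of the 99th percentile of replication lag within an hour, you could post-process the samples collected by the query above. The following Python sketch is illustrative only; the nearest-rank percentile method and the 30-second sampling cadence are assumptions for the example, not part of the SLA definition:

```python
import math

def rpo_compliant(lag_samples_sec, threshold_sec=5.0):
    """Check whether the 99th-percentile replication lag is within the RPO.

    lag_samples_sec: replication_lag_sec readings collected over an hour,
    e.g. one sys.dm_replication_link_status query every 30 seconds.
    """
    ordered = sorted(lag_samples_sec)
    rank = max(1, math.ceil(0.99 * len(ordered)))  # nearest-rank percentile
    return ordered[rank - 1] <= threshold_sec

# 120 samples per hour: a single 12-second spike still leaves the
# 99th percentile at 1 second, so the hour remains compliant.
print(rpo_compliant([1.0] * 119 + [12.0]))  # True
```

A percentile-based check like this tolerates brief lag spikes while still flagging sustained replication delays.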

Combining availability and business continuity to build mission critical applications

What does the updated SLA mean to you in practical terms? Our goal is enabling you to build highly resilient and reliable services on Azure, backed by SQL Database. But for some mission critical applications, even 26 minutes of downtime per year may not be acceptable. Combining a zone redundant database configuration with a business continuity design creates an opportunity to further increase availability for the application. This SLA release is the first step toward realizing that opportunity.
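As a sanity check on the downtime figure above: assuming a 99.995% availability SLA (the exact percentage comes from the SLA document itself, not this post), the yearly downtime budget works out as follows:

```python
availability = 0.99995              # assumed 99.995% availability SLA
minutes_per_year = 365 * 24 * 60    # 525,600 minutes
max_downtime = (1 - availability) * minutes_per_year
print(round(max_downtime, 1))       # roughly 26 minutes per year
```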
Source: Azure

Run Windows Server and SQL Server workloads seamlessly across your hybrid environments

In recent weeks, we’ve been talking about the many reasons why Windows Server and SQL Server customers choose Azure. Security is a major concern when moving to the cloud, and Azure gives you the tools and resources you need to address those concerns. Innovation in data can open new doors as you move to the cloud, and Azure offers the easiest cloud transition, especially for customers running SQL Server 2008 or 2008 R2 with concerns about end of support. Today we’re going to look at another critical decision point for customers as they move to the cloud: how easy is it to combine new cloud resources with what you already have on-premises? Many Windows Server and SQL Server customers choose Azure for its industry-leading hybrid capabilities.

Microsoft is committed to enabling a hybrid approach to cloud adoption. Our commitment and passion stem from a deep understanding of our customers and their businesses over the past several decades. We understand that customers have business imperatives to keep certain workloads and data on-premises, and our goal is to meet them where they are and prepare them for the future by providing the right technologies for every step along the way. That’s why we designed and built Azure to be hybrid from the beginning, and have been delivering continuous innovation to help customers operate their hybrid environments seamlessly across on-premises, cloud, and edge. Enterprise customers are choosing Azure for their Windows Server and SQL Server workloads. In fact, in a 2019 Microsoft survey of 500 enterprise customers, when those customers were asked about their migration plans for Windows Server, they were 30 percent more likely to choose Azure.

Customers trust Azure to power their hybrid environments

Take Komatsu as an example. Komatsu achieved a 49 percent cost reduction and a nearly 30 percent performance gain by moving on-premises applications to Azure SQL Database Managed Instance and building a holistic data management and analytics solution across its hybrid infrastructure.

Operating a $15 billion enterprise, Smithfield Foods slashed datacenter costs by 60 percent and accelerated application delivery from two months to one day using a hybrid cloud model built on Azure. Smithfield's factories and warehouses are often in rural areas with less-than-ideal internet bandwidth, so it relies on Azure ExpressRoute to connect its major office locations globally to Azure and gain the flexibility and speed it needs.

The government of Malta built a complete hybrid cloud ecosystem powered by Azure and Azure Stack to modernize its infrastructure. This hybrid architecture, combined with a robust billing platform and integrated self-service backup, brings a new level of flexibility and agility to the Maltese government's operations, while also providing citizens and businesses more efficient services that they can access whenever they want.

Let’s look at some of Azure’s unique built-in hybrid capabilities.

Bringing the cloud to local datacenters with Azure Stack

Azure Stack, our unparalleled hybrid offering, lets customers build and run cloud-native applications with Azure services in their local datacenters or in disconnected locations. Today, it’s available in 92 countries and customers like Airbus Defense & Space, iMOKO, and KPMG Norway are using Azure Stack to bring cloud benefits on-premises.

We recently introduced Azure Stack HCI solutions so customers can run virtualized applications on-premises in a familiar way and enjoy easy access to off-the-shelf Azure management services such as backup and disaster recovery.

With Azure, Azure Stack, and Azure Stack HCI, Microsoft is the only cloud provider in the market that offers a comprehensive set of hybrid solutions.

Modernizing server management with Windows Admin Center

Windows Admin Center, a free, modern browser-based application, allows customers to manage Windows Servers on-premises, in Azure, or in other clouds. With Windows Admin Center, customers can easily access Azure management services to perform tasks such as disaster recovery, backup, patching, and monitoring. Since its launch just over a year ago, Windows Admin Center has seen tremendous momentum, managing more than 2.5 million server nodes each month.

Easily migrating on-premises SQL Server to Azure

Azure SQL Database is a fully managed and intelligent database service. SQL Database is evergreen, so it’s always up to date: no more worrying about patching, upgrades, or end of support. Azure SQL Database Managed Instance has the full surface area of the SQL Server database engine in Azure. Customers use Managed Instance to migrate SQL Server to Azure without changing their application code. Because the service is consistent with on-premises SQL Server, customers can continue using familiar features, tools, and resources in Azure.

With SQL Database Managed Instance, customers like Komatsu, Carlsberg Group, and AllScripts were able to quickly migrate SQL databases to Azure with minimal downtime and benefit from built-in PaaS capabilities such as automatic patching, backup, and high availability.

Connecting hybrid environments with fast and secure networking services

Customers build extremely fast private connections between Azure and local infrastructure using Azure ExpressRoute, at bandwidths up to 100 Gbps, with access both to and through Azure. Azure Virtual WAN makes it possible to quickly add and connect thousands of branch sites by automating configuration and connectivity to Azure and for global transit across customer sites, using the Microsoft global network.

Customers are also taking full advantage of services like Azure Firewall, Azure DDoS Protection, and Azure Front Door Service to secure virtual networks and deliver the best application performance experience to users.

Managing anywhere access with a single identity platform

Over 90 percent of enterprise customers use Active Directory on-premises. With Azure, customers can easily connect on-premises Active Directory with Azure Active Directory to provide seamless directory services for all Office 365 and Azure services. Azure Active Directory gives users a single sign-on experience across cloud, mobile and on-premises applications, and secures data from unauthorized access without compromising productivity.

Innovating continuously at the edge

Customers are extending their hybrid environments to the edge so they can take on new business opportunities. Microsoft has been leading the innovation in this space. The following are some examples.

Azure Data Box Edge provides a cloud-managed compute platform for containers at the edge, enabling customers to process data at the edge and accelerate machine learning workloads. Data Box Edge also enables customers to transfer data over the internet to Azure in real time for deeper analytics, model re-training at cloud scale, or long-term storage.

At Microsoft Build 2019, we announced the preview of Azure SQL Database Edge, bringing the SQL engine to the edge. Developers can now adopt a consistent programming surface area to develop on a SQL database and run the same code on-premises, in the cloud, or at the edge.

Get started – Integrate your hybrid environments with Azure

Check out the resources on Azure hybrid such as overviews, videos, and demos so you can learn more about how to use Azure to run Windows Server and SQL Server workloads successfully across your hybrid environments.
Source: Azure

Cloud providers unite on frictionless health data exchange

This post was co-authored by Heather Jordan Cartwright, General Manager, Microsoft Healthcare

Cloud computing is rapidly becoming a bigger and more central part of the infrastructure of healthcare. We see this as a historic shift that motivates us to think hard about how to ensure that, in this cloud-based future, interoperable health data is available as needed and without friction.

Microsoft continues to build health data interoperability into the core of the Azure cloud, empowering developers and partners to easily build data-rich health apps with the Azure API for FHIR®. We are also actively contributing to the healthcare community with open source software like the FHIR Server for Azure, bringing together developers on collaborative solutions that move the industry forward.

We take interoperability seriously. At last summer’s CMS Blue Button Developer Conference, we made a public commitment to promote the frictionless exchange of health data with our counterparts at AWS, Google, IBM, Salesforce and Oracle. That commitment remains strong.

Today, at the same conference of health IT community leaders, we are sharing a joint announcement that showcases how we have moved from principles and commitment to actions. Our activities over the past year include open-source software releases, development of new standards and implementation guides, and deployment of services that support U.S. federal interoperability mandates.

Here’s the full text of our joint announcement:

As healthcare evolves across the globe, so does our ability to improve the health and wellness of communities. Patients, providers, and health plans are striving for more value-based care, more engaging user experiences, and broader application of machine learning to assist clinicians in diagnosis and patient care.

Too often, however, patient data are inconsistently formatted, incomplete, unavailable, or missing – which can limit access to the best possible care. Equipping patients and caregivers with information and insights derived from raw data has the potential to yield significantly better outcomes. But without a robust network of clinical information, even the best people and technology may not reach their potential.

Interoperability requires the ability to share clinical information across systems, networks, and care providers. Barriers to data interoperability sit at the core of many process problems. We believe that better interoperability will unlock improvements in individual and population-level care coordination, delivery, and management. As such, we support efforts from ONC and CMS to champion greater interoperability and patient access.

This year's proposed rules focus on the use of HL7® FHIR® (Fast Healthcare Interoperability Resources) as an open standard for electronically exchanging healthcare information. FHIR builds on concepts and best-practices from other standards to define a comprehensive, secure, and semantically-extensible specification for interoperability. The FHIR community features multidisciplinary collaboration and public channels where developers interact and contribute.

We’ve been excited to use and contribute to many FHIR-focused, multi-language tools that work to solve real-world implementation challenges. We are especially proud to highlight a set of open-source tools including: Google’s FHIR protocol buffers and Apigee Health APIx, Microsoft’s FHIR Server for Azure, Cerner's FHIR integration for Apache Spark, a serverless reference architecture for FHIR APIs on AWS, Salesforce/Mulesoft's Catalyst Accelerator for Healthcare templates, and IBM’s Apache Spark service.

Beyond the production of new tools, we have also proudly participated in developing new specifications including the Bulk Data $export operation (and recent work on an $import operation), Subscriptions, and analytical SQL projections. All of these capabilities demonstrate the strength and adaptability of the FHIR specification. Moreover, through connectathons, community events, and developer conferences, our engineering teams are committed to the continued improvement of the FHIR ecosystem. Our engineering organizations have previously supported the maturation of standards in other fields and we believe FHIR version R4 — a normative release — provides an essential and appropriate target for ongoing investments in interoperability.

We have seen the early promise of standards-based APIs from market leading Health IT systems, and are excited about a future where such capabilities are universal. Together, we operate some of the largest technical infrastructure across the globe serving many healthcare and non-healthcare systems alike. Through that experience, we recognize the scale and complexity of the task at hand. We believe that the techniques required to meet the objectives of ONC and CMS are available today and can be delivered cost-effectively with well-engineered systems.

As a technology community, we believe that a forward-thinking API strategy as outlined in the proposed rules will advance the ability for all organizations to build and deploy novel applications to the benefit of patients, care providers, and administrators alike. ONC and CMS’s continued leadership, thoughtful rules, and embrace of open standards help move us decisively in that direction.

Signed,
Amazon, Google, IBM, Microsoft, Oracle, and Salesforce

The positive collaboration on open FHIR standards and the urgency for data interoperability have strengthened our commitment to an open-source-first approach in healthcare technology. We continue to incorporate feedback from the community to develop new features, and are actively identifying new places where open source software can help accelerate interoperability.

Support from the ONC and CMS in 2019 to adopt FHIR APIs as a foundation for clinical data interoperability will have a profound and positive effect on the industry. Looking forward, the application of FHIR to healthcare financial data including claims, explanation of benefit, insurance coverage, and network participation will continue to accelerate interoperability at scale and open new pathways for machine learning.

While it’s still early, we’ve seen our partners leveraging FHIR to better coordinate care, to develop innovative global health tracking systems for super-bacteria, and to proactively prevent the need for patients undergoing chemotherapy to be admitted to the emergency room. FHIR is providing a foundational platform on which our partners can drive rapid innovation, and it inspires us to work even harder to deliver technology that makes interoperable data a reality.

We’re just beginning to see what is possible in this new world of frictionless health data exchange, and we’d love for you to join us. If you want to participate, comment or learn more about FHIR, you can reach our FHIR Community chat here.
Source: Azure

Microsoft Azure welcomes customers, partners, and industry leaders to SIGGRAPH 2019!

SIGGRAPH is back in Los Angeles and so is Microsoft Azure! I hope you can join us at Booth #1351 to hear from leading customers and innovative partners.

Teradici, BeBop, Support Partners, Blender, and more will be there to showcase the latest in cloud-based rendering and media workflows:

See a real-time demonstration of Teradici’s PCoIP Workstation Access Software, showcasing how it enables a world-class end-user experience for graphics-accelerated applications on Azure’s NVIDIA GPUs.
Experience a live demonstration of industry-standard visual effects, animation, and other post-production tools on the BeBop platform. It is the leading solution for cloud-based media and entertainment workflows, creativity, and collaboration.
Learn more about how cloud-integrator Support Partners enables companies to run complex and exciting hybrid workflows in Azure.
Be the first to hear about Azure’s integration with Blender’s render manager Flamenco and how users can easily deploy a completely virtual render farm and file server. The Azure Flamenco Manager will be freely available on GitHub, and we can’t wait to hear how it is being used and get your feedback.

We’re also demonstrating how you can simplify the creation and management of hybrid cloud rendering environments, get the most out of your on-premises investments while bursting to the cloud for on-demand scale, and increase your output with high-performance GPUs. The Microsoft Avere, HPC, and Batch teams will be onsite to answer your questions about these new technologies, which are all generally available at SIGGRAPH 2019.

Azure Render Hub simplifies the creation and management of hybrid cloud rendering environments in Azure, providing integration with your existing AWS Thinkbox Deadline or PipelineFX Qube! render farm; support for Tractor and OpenCue is coming soon. It also orchestrates infrastructure setup and provides pay-per-use licensing and governance controls, including detailed cost tracking. The Azure Render Hub web app is available from GitHub, where we welcome feedback and feature requests.
Maximize your resource pools by integrating your existing network attached storage (NAS) and Azure Blob Storage using Azure FXT Edge Filer. This on-premises caching appliance optimizes access to data in your datacenter, in Azure, and across a wide-area network (WAN). A combination of software and hardware, Microsoft Azure FXT Edge Filer delivers high throughput and low latency for hybrid storage infrastructure supporting large rendering workloads. You can learn more by visiting the Azure FXT Edge Filer product page.
Support powerful remote visualization workloads and other graphics-intensive applications using Azure NV-series VMs, backed by NVIDIA GPUs. Large memory, support for premium disks, and hyper-threading mean these VMs offer double the number of vCPUs compared to the previous generation. Learn more about the NVIDIA and Azure partnership.

The Microsoft team and our partners will also be in room #512 for our Azure Customer Showcase and Training Program.

Tuesday and Wednesday morning, Azure engineers, software partners, and top production companies will share unique insights on cloud-enabled workflows that can help you improve efficiency and lower production costs.
Then, in the afternoon, we have a three-hour deep dive into studio workflows on Azure. This will cover everything from Azure infrastructure, networking, and storage capabilities to enabling Avere caching technology and setting up burst render environments with popular render farm managers. At the end of every training session, industry leaders will join us for a fireside chat about the cloud. Seating is first come, first served, so get there early! Full schedule below.

Tuesday, July 30: 2pm – 5pm
Wednesday, July 31: 2pm – 5pm
Thursday, August 1: 10am – 1pm

If you’re curious about our Xbox Adaptive Controllers, come and check them out at the Adaptive Tech area of the Experience Hall and dive deep into new technologies by adding the following Tech Talks to your agenda:

Monday, July 29, 2019 from 12:30pm – 2pm room 504: Living in a Virtual World – With the VFX and animation industry moving into a new frontier of studio infrastructure and pipeline, join us as we delve into the best practices of moving your studio into a virtual environment securely, efficiently and economically.
Tuesday, July 30, 2019 from 2pm – 3:30pm room 503: Going Cloud Native – Join a continued discussion with key representatives from the graphics community who will compare experiences and explore techniques related to pushing the production pipeline and correlated resources toward the cloud.
Wednesday, July 31, 2019 from 12pm – 1pm room 309: Volumetric Video Studios – Volumetric Video providers gather to discuss their experiences, challenges, and opportunities in the early days of this new medium. Where is the market now, and where will it go? Topics include successes and lessons learned so far, most/least active scenarios, creator and consumer perceptions, technology evolution, trends in the market, and predictions for the years ahead.
Wednesday, July 31, 2019 from 2pm – 4pm room 406A: Volumetric Video Creators – Content creators discuss the advantages of using volumetric video captures as a way to tell stories, entertain, and educate, as well as lessons learned along the way. Topics covered include the funding landscape, best methods of reaching audiences, most effective storytelling methods, and future creative directions.

If you haven’t registered yet or are looking for a pass, you can register now for a free guest pass using code MICROSOFT19.

We hope to see you at the show and will look forward to learning more about your projects and requirements!
Source: Azure

Choosing between Azure VNet Peering and VNet Gateways

As customers adopt Azure and the cloud, they need fast, private, and secure connectivity across regions and Azure Virtual Networks (VNets). Based on the type of workload, customer needs vary. For example, if you want to ensure data replication across geographies you need a high bandwidth, low latency connection. Azure offers connectivity options for VNet that cater to varying customer needs, and you can connect VNets via VNet peering or VPN gateways.

It is not surprising that VNet is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or as I call it your own network bubble. VNets are crucial to your cloud network as they offer isolation, segmentation, and other key benefits. Read more about VNet’s key benefits in our documentation “What is Azure Virtual Network?”

VNet peering

VNet peering enables you to seamlessly connect Azure virtual networks. Once peered, the VNets appear as one for connectivity purposes. Traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic between virtual machines in the same VNet, through private IP addresses only. No public internet is involved. You can peer VNets across Azure regions, too – all with a single click in the Azure portal.

VNet peering – connecting VNets within the same Azure region
Global VNet peering – connecting VNets across Azure regions

To learn more, look at our documentation overview "Virtual network peering" and "Create, change, or delete a virtual network peering."

VPN gateways

A VPN gateway is a specific type of VNet gateway that is used to send traffic between an Azure virtual network and an on-premises location over the public internet. You can also use a VPN gateway to send traffic between VNets. Each VNet can have only one VPN gateway.

To learn more, look at our documentation overview "What is VPN Gateway?" and "Configure a VNet-to-VNet VPN gateway connection by using the Azure portal."

Which is best for you?

While we offer two ways to connect VNets, based on your specific scenario and needs, you might want to pick one over the other.

VNet Peering provides a low latency, high bandwidth connection useful in scenarios such as cross-region data replication and database failover scenarios. Since traffic is completely private and remains on the Microsoft backbone, customers with strict data policies prefer to use VNet Peering as public internet is not involved. Since there is no gateway in the path, there are no extra hops, ensuring low latency connections.

VPN gateways provide a limited-bandwidth connection and are useful in scenarios where encryption is needed but bandwidth restrictions are tolerable. In these scenarios, customers are also not as latency-sensitive.

VNet Peering and VPN Gateways can also co-exist via gateway transit

Gateway transit enables you to use a peered VNet’s gateway for connecting to on-premises instead of creating a new gateway for connectivity. As you increase your workloads in Azure, you need to scale your networks across regions and VNets to keep up with the growth. Gateway transit allows you to share an ExpressRoute or VPN gateway with all peered VNets and lets you manage the connectivity in one place. Sharing enables cost-savings and reduction in management overhead.

With gateway transit enabled on VNet peering, you can create a transit VNet that contains your VPN gateway, Network Virtual Appliance, and other shared services. As your organization grows with new applications or business units and as you spin up new VNets, you can connect to your transit VNet with VNet peering. This prevents adding complexity to your network and reduces management overhead of managing multiple gateways and other appliances.

To learn more about the powerful and unique functionality of gateway transit, refer to our blog post "Create a transit VNet using VNet peering."

Differences between VNet Peering and VPN Gateways

Cross-region support?
VNet Peering: Yes, via Global VNet Peering.
VPN Gateways: Yes.

Cross-Azure Active Directory tenant support?
VNet Peering: Yes, learn how to set it up in our documentation "Create a virtual network peering."
VPN Gateways: Yes, see our documentation on VNet-to-VNet connections.

Cross-subscription support?
VNet Peering: Yes, see our documentation "Resource Manager, different subscriptions."
VPN Gateways: Yes, see our documentation "Configure a VNet-to-VNet VPN gateway connection by using the Azure portal."

Cross-deployment model support?
VNet Peering: Yes, see our documentation "different deployment models, same subscription."
VPN Gateways: Yes, see our documentation "Connect virtual networks from different deployment models using the portal."

Limits
VNet Peering: You can peer one VNet with up to 500 VNets, as seen in the documentation on networking limits.
VPN Gateways: Each VNet can have only one VPN gateway. Depending on the SKU, a VPN gateway supports a different number of tunnels.

Pricing
VNet Peering: Ingress and egress traffic are charged.
VPN Gateways: The gateway and egress traffic are charged.

Encrypted?
VNet Peering: Software-level encryption is recommended.
VPN Gateways: Yes, a custom IPsec/IKE policy can be created and applied to new or existing connections.

Bandwidth limitations?
VNet Peering: No bandwidth limitations.
VPN Gateways: Varies based on the type of gateway, from 100 Mbps to 1.25 Gbps.

Private?
VNet Peering: Yes. No public IP endpoints; traffic is routed through the Microsoft backbone and is completely private. No public internet involved.
VPN Gateways: A public IP is involved.

Transitive relationship
VNet Peering: If VNet A is peered to VNet B, and VNet B is peered to VNet C, VNet A and VNet C cannot currently communicate. Spoke-to-spoke communication can be achieved via NVAs or gateways in the hub VNet. See an example in our documentation.
VPN Gateways: If VNet A, VNet B, and VNet C are connected via VPN gateways and BGP is enabled on the VNet connections, transitivity works.

Typical customer scenarios
VNet Peering: Data replication, database failover, and other scenarios needing frequent backups of large data.
VPN Gateways: Encryption-specific scenarios that are not latency-sensitive and do not need high throughput.

Initial setup time
VNet Peering: It took me 24.38 seconds, but you should give it a shot!
VPN Gateways: About 30 minutes.

FAQ link
VNet Peering: VNet peering FAQ
VPN Gateways: VPN gateway FAQ

Conclusion

Azure offers VNet peering and VNet gateways to connect VNets. Based on your unique scenario, you might want to pick one over the other. We generally recommend VNet peering for both in-region and cross-region scenarios.

We always love to hear from you, so please feel free to provide any feedback via our forums.
Source: Azure

Announcing general availability for the Azure Security Center for IoT

As organizations pursue digital transformation by connecting vital equipment or creating new connected products, IoT deployments will get bigger and more common. In fact, IDC forecasts that IoT will continue to grow at double-digit rates until IoT spending surpasses $1 trillion in 2022. As these IoT deployments come online, newly connected devices will expand the attack surface available to attackers, creating opportunities to target the valuable data generated by IoT.

Organizations understand the risks and are rightly worried about IoT. Bain’s research shows that security concerns are the top reason organizations have slowed or paused IoT rollouts*. Because IoT requires integrating many different technologies (heterogenous devices must be linked to IoT cloud services that connect to analytics services and business applications), organizations face the challenge of securing both the pieces of their IoT solution and the connections between those pieces. Attackers target weak spots; even one weak device configuration, cloud service, or admin account can provide a way into your solution. Your organization must monitor for threats and misconfigurations across all parts of your IoT solution: devices, cloud services, the supporting infrastructure, and the admin accounts who access them.

To give your organization IoT threat protection and security posture management across your entire IoT solution, we’re announcing the general availability of Azure Security Center for IoT. Azure Security Center allows you to protect your end-to-end IoT deployment by identifying and responding to emerging threats, as well as finding issues in your configurations before attackers can use them to compromise your deployment. As organizations use Azure Security Center for IoT to manage their security roadblocks, they remove the barriers keeping them from business transformation:

“With Azure Security Center for IoT, we can both address very real IoT threat models with the velocity of Azure and gain management control over the fastest scaling part of our business, which allows me to focus on delivering outcomes rather than hot fixing devices.” – Alex Kreilein, CISO RapidDeploy

Building secure IoT solutions with Azure Security Center

Securing IoT is challenging for many reasons: IoT deployments are complicated, creating opportunity for integration errors that attackers can exploit; IoT devices are heterogenous and often lack proper security measures; organizations may not have the skillsets or SecOps headcount to take on a new IoT security workload; and IoT deployments are difficult to monitor using traditional IT security tools. When organizations choose Microsoft for their IoT deployments, however, they get secure-by-design devices and services such as Azure Sphere and IoT Hub, end-to-end integration and monitoring from device to cloud, and the expertise from Microsoft and our partners to build a secure solution that meets their exact use case.

Azure Security Center for IoT builds on Microsoft’s secure-by-design IoT services with threat protection and security posture management designed for securing entire IoT deployments, including Microsoft and third-party devices. Azure Security Center is the first IoT security service from a major cloud provider that enables organizations to prevent, detect, and help remediate potential attacks on all the different components that make up an IoT deployment: from small sensors, to edge computing devices and gateways, to Azure IoT Hub, and on to the compute, storage, databases, and AI/ML workloads that organizations connect to their IoT deployments. This end-to-end protection is vital to secure IoT deployments. Although devices may be a common target for attackers, the services that store your data and the admins who manage your IoT solution are also valuable targets.

As IoT threats evolve due to creative attackers analyzing the new devices, use cases, and applications the industry creates, Microsoft’s unique threat intelligence, sourced from the more than 6 trillion signals that Microsoft collects every day, keeps your organization ahead of attackers. Azure Security Center creates a list of potential threats, ranked by importance, so security pros and IoT admins can remediate problems across devices, IoT services, connected Azure services, and the admins who use them.

Azure Security Center also creates ranked lists of possible misconfigurations and insecure settings, allowing IoT admins and security pros to fix the most important issues in their IoT security posture first. To create these security posture suggestions, Azure Security Center draws from Microsoft’s unique threat intelligence, as well as industry standards. Customers can also port their data into SIEMs such as Azure Sentinel, allowing security pros to combine IoT security data with data from across the organization for artificial intelligence or advanced analysis.

Organizations can monitor their entire IoT solution, stay ahead of evolving threats, and fix configuration issues before they become threats. When combined with Microsoft’s secure-by-design devices, services, and the expertise we share with you and your partners, Azure Security Center for IoT provides an important way to reduce the risk of IoT while achieving your business goals. 

Next steps

Watch Securing your IoT Application with Azure Security Center.
Get started with IoT Hub to start using Azure Security Center for IoT.
Learn more about Azure Security Center.
Learn more about IoT Security.

 

*Used with permission from Bain & Company
Source: Azure

Accessing virtual machines behind Azure Firewall with Azure Bastion

Azure Virtual Network enables a flexible foundation for building advanced networking architectures. Managing heterogeneous environments with various types of filtering components, such as Azure Firewall or your favorite network virtual appliance (NVA), requires a little bit of planning.

Azure Bastion, which is currently in preview, is a fully managed platform as a service (PaaS) that provides secure and seamless remote desktop protocol (RDP) and secure shell (SSH) access to your virtual machines (VMs) directly through the Azure portal. Azure Bastion is provisioned directly in your virtual network, supporting all VMs attached without any exposure through public IP addresses.

When you deploy Azure Firewall, or any NVA, you typically force-tunnel all traffic from your subnets by applying a 0.0.0.0/0 user-defined route. This can lead to asymmetric routing for ingress and egress traffic to the workloads in your virtual network.

To resolve this, you often find yourself creating and managing a growing set of network rules (DNAT, forwarding, and so on) for all your applications, which is not trivial. Although asymmetric routing can affect any application, RDP and SSH are the most common examples. In this scenario, ingress traffic from the Internet may reach the virtual machine in your virtual network directly, but the egress (return) traffic is routed to the NVA. Since most NVAs are stateful, the NVA drops this traffic because it never saw the connection being initiated.
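The drop behavior described above can be sketched in a few lines. This is a minimal, hypothetical model (the `StatefulNVA` class and the IP addresses are illustrative, not an Azure API): the appliance forwards packets only for connections it saw being initiated.

```python
# Minimal, hypothetical model of a stateful network virtual appliance.
# It only forwards packets that belong to a connection it saw initiated.
class StatefulNVA:
    def __init__(self):
        self.flows = set()  # (src, dst) pairs of connections seen initiated

    def forward(self, src, dst, syn=False):
        """Forward a packet; drop mid-connection packets with no known flow."""
        if syn:                        # new connection initiated through the NVA
            self.flows.add((src, dst))
            return True
        # Mid-connection packet: allow only if either direction is known.
        return (src, dst) in self.flows or (dst, src) in self.flows

# Symmetric routing: the inbound connection traversed the NVA, so the
# VM's return traffic matches a known flow and is forwarded.
nva = StatefulNVA()
nva.forward("203.0.113.9", "10.0.1.4", syn=True)   # client -> VM via NVA
print(nva.forward("10.0.1.4", "203.0.113.9"))      # True: reply forwarded

# Asymmetric routing: the inbound RDP/SSH connection reached the VM
# directly, so the NVA has no flow state and drops the return traffic.
print(StatefulNVA().forward("10.0.1.4", "198.51.100.7"))  # False: dropped
```

The model captures only the connection-tracking aspect; real appliances inspect protocol state in far more detail, but the outcome for asymmetric flows is the same.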

Azure Bastion allows for a simplified setup of RDP/SSH to your workloads within virtual networks containing stateful NVAs or Azure Firewall with force tunneling enabled. In this blog, we will look at how to make that work seamlessly.

For a reference on how to deploy Azure Bastion (preview) in your virtual network, please see the documentation “Create an Azure Bastion host (Preview).”
To learn how to implement Azure Firewall in your virtual network, refer to the documentation “Deploy and configure Azure Firewall using the Azure portal.”

Having deployed both Azure Bastion and Azure Firewall in your virtual network, let us look at how you can configure Azure Bastion to work in this scenario.

Configuring Azure Bastion

When deploying Azure Firewall, or a virtual appliance, you may end up associating the RouteTable created during that deployment with all subnets in your virtual network, possibly including the AzureBastionSubnet subnet.

This applies a user-defined route to the AzureBastionSubnet subnet that directs all Azure Bastion traffic to Azure Firewall, thereby blocking traffic required by Azure Bastion. Avoiding this is easy: simply do not associate the RouteTable with the AzureBastionSubnet subnet.

As you may have noticed above, myRouteTable is not associated with the AzureBastionSubnet subnet, but with the other subnets, such as Workload-SN.

The AzureBastionSubnet subnet is a secure, platform-managed subnet; no Azure resource other than Azure Bastion can be deployed into it. All connections to Azure Bastion are enforced through Azure Active Directory token-based authentication with two-factor authentication (2FA), and all traffic is encrypted over HTTPS.

Azure Bastion is internally hardened and allows traffic only through port 443, saving you the task of applying additional network security groups (NSGs) or user-defined routes to the subnet.

With this configuration, RDP/SSH requests land on Azure Bastion. Because myRouteTable is not associated with the AzureBastionSubnet subnet, the default route (0.0.0.0/0) does not apply to it. Based on the incoming RDP/SSH requests, Azure Bastion connects to your virtual machines in other subnets, such as Workload-SN, which do have the default route associated. The return traffic from your virtual machine goes directly to Azure Bastion rather than to the NVA, because it is addressed to a specific private IP in your virtual network. That more specific route takes precedence over the force-tunnel route to the NVA, so your RDP/SSH traffic works seamlessly with Azure Bastion even when an NVA or Azure Firewall is deployed in your virtual network.
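The reason the return traffic bypasses the NVA is longest-prefix-match route selection: the route with the most specific matching prefix wins. A minimal sketch of that selection rule, assuming a hypothetical 10.0.0.0/16 virtual network address space, using Python's ipaddress module:

```python
import ipaddress

# Hypothetical effective routes on the workload subnet: a force-tunnel
# user-defined route plus the system route for the VNet address space.
routes = {
    "0.0.0.0/0": "force-tunnel to NVA/Azure Firewall",
    "10.0.0.0/16": "direct within virtual network",
}

def next_hop(dest_ip, routes):
    """Return the next hop for the longest matching prefix (most specific wins)."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [p for p in routes if ip in ipaddress.ip_network(p)]
    return routes[max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)]

# Return traffic addressed to Azure Bastion's private IP stays in the VNet,
# because the /16 system route is more specific than the /0 force tunnel.
print(next_hop("10.0.2.5", routes))     # direct within virtual network

# Internet-bound traffic matches only the force-tunnel route.
print(next_hop("203.0.113.9", routes))  # force-tunnel to NVA/Azure Firewall
```

The same selection logic explains why removing the RouteTable association from AzureBastionSubnet is sufficient: with no 0.0.0.0/0 user-defined route on that subnet, Azure Bastion's own traffic is never pulled toward the firewall.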

We appreciate the engagement and excitement of our customers and community, and we look forward to your feedback as we further improve the service and make it generally available soon.
Source: Azure