Individually great, collectively unmatched: Announcing updates to 3 great Azure Data Services

As Julia White mentioned in her blog today, we’re pleased to announce the general availability of Azure Data Lake Storage Gen2 and Azure Data Explorer. We also announced the preview of Azure Data Factory Mapping Data Flow. With these updates, Azure continues to be the best cloud for analytics with unmatched price-performance and security. In this blog post we’ll take a closer look at the technical capabilities of these new features.

Azure Data Lake Storage – The no-compromise data lake

Azure Data Lake Storage (ADLS) combines the scalability, cost effectiveness, security model, and rich capabilities of Azure Blob Storage with a high-performance file system that is built for analytics and is compatible with the Hadoop Distributed File System. Customers no longer have to trade off cost effectiveness against performance when choosing a cloud data lake.

One of our key priorities was to ensure that ADLS is compatible with the Apache ecosystem. We accomplished this by developing the Azure Blob File System (ABFS) driver. The ABFS driver is officially part of Apache Hadoop and Spark and is incorporated in many commercial distributions. The ABFS driver defines a URI scheme that allows files and folders to be distinctly addressed in the following manner:

abfs[s]://file_system@account_name.dfs.core.windows.net/<path>/<path>/<filename>

It is important to note that the file system semantics are implemented server-side. This approach eliminates the need for a complex client-side driver and ensures high fidelity file system transactions.
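As a rough illustration of how the scheme decomposes, here is a small Python sketch (the helper names are my own, not part of the ABFS driver) that builds and parses a URI of this shape:

```python
from urllib.parse import urlparse

def build_abfs_uri(file_system: str, account: str, path: str, secure: bool = True) -> str:
    """Compose an ABFS URI from its parts (illustrative sketch only)."""
    scheme = "abfss" if secure else "abfs"
    return f"{scheme}://{file_system}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

def parse_abfs_uri(uri: str) -> dict:
    """Split an ABFS URI back into file system, storage account, and path."""
    parsed = urlparse(uri)
    file_system, host = parsed.netloc.split("@", 1)
    account = host.split(".", 1)[0]
    return {
        "scheme": parsed.scheme,
        "file_system": file_system,
        "account": account,
        "path": parsed.path.lstrip("/"),
    }

uri = build_abfs_uri("mydata", "contosoadls", "raw/2019/events.csv")
```

Note that `abfss` (with the trailing "s") selects TLS-protected transport, mirroring the `abfs[s]` option in the scheme above.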

To further boost analytics performance, we implemented a hierarchical namespace (HNS) which supports atomic file and folder operations. This is important because it reduces the overhead associated with processing big data on blob storage. This speeds up job execution and lowers cost because fewer compute operations are required.
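To see why atomic folder operations matter, consider renaming a folder that holds many files. A back-of-the-envelope sketch (illustrative only, not real storage code) counts the operations each model requires:

```python
def flat_namespace_rename_ops(num_files: int) -> int:
    # In a flat blob namespace a "folder" is just a shared key prefix, so
    # renaming it means one copy plus one delete per blob underneath it.
    return 2 * num_files

def hns_rename_ops(num_files: int) -> int:
    # With a hierarchical namespace the folder is a first-class directory,
    # so the rename is a single atomic metadata operation, regardless of
    # how many files the folder contains.
    return 1
```

For a folder of 10,000 files, that is 20,000 copy/delete operations in a flat namespace versus one metadata operation with HNS, which is where the job-execution and cost savings come from.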

The ABFS driver and HNS significantly improve ADLS performance by removing scale and performance bottlenecks, and this enhancement comes at the same low cost as Azure Blob Storage.

ADLS offers the same powerful data security capabilities built into Azure Blob Storage, such as:

Encryption of data in transit (via TLS 1.2) and at rest
Storage account firewalls
Virtual network integration
Role-based access security

In addition, ADLS’ file system provides support for POSIX compliant access control lists (ACLs). With this approach, you can provide granular security protection that restricts access to only authorized users, groups, or service principals and provides file and object data protection.
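The following is a deliberately simplified sketch of how ACL-style checks behave. It ignores the owner precedence and ACL mask that a real POSIX evaluation involves, and the types and helper are hypothetical, not an ADLS API:

```python
from typing import NamedTuple, List

class AclEntry(NamedTuple):
    kind: str        # "user", "group", or "other"
    principal: str   # user or group name; "" for the generic "other" entry
    perms: str       # some subset of "rwx", e.g. "r-x"

def is_allowed(acl: List[AclEntry], kind: str, principal: str, wanted: str) -> bool:
    # Greatly simplified evaluation: grant access only if an entry matching
    # the principal carries every requested permission bit.
    for entry in acl:
        if entry.kind == kind and entry.principal == principal:
            return all(bit in entry.perms for bit in wanted)
    return False

acl = [
    AclEntry("user", "alice", "r-x"),       # alice may read and traverse
    AclEntry("group", "analysts", "r--"),   # analysts may only read
    AclEntry("other", "", "---"),           # everyone else is denied
]
```

The point of the sketch is the granularity: access is decided per file or folder, per user, group, or service principal, rather than at the whole-account level.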

ADLS is tightly integrated with Azure Databricks, Azure HDInsight, Azure Data Factory, Azure SQL Data Warehouse, and Power BI, enabling an end-to-end analytics workflow that delivers powerful business insights throughout all levels of your organization. Furthermore, ADLS is supported by a global network of big data analytics ISVs and system integrators, including Cloudera and Hortonworks.

Next steps

Visit the Azure Data Lake Storage product page to learn more.
Access documentation, quick starts, and tutorials.
Find pricing information for Azure Data Lake Storage.
Get started with Azure Data Lake Storage now.

Azure Data Explorer – The fast and highly scalable data analytics service

Azure Data Explorer (ADX) is a fast, fully managed data analytics service for real-time analysis on large volumes of streaming data. ADX is capable of querying 1 billion records in under a second with no modification of the data or metadata required. ADX also includes native connectors to Azure Data Lake Storage, Azure SQL Data Warehouse, and Power BI and comes with an intuitive query language so that customers can get insights in minutes.

Designed for speed and simplicity, ADX is architected with two distinct services that work in tandem: the Engine service and the Data Management (DM) service. Both services are deployed as clusters of compute nodes (virtual machines) in Azure.

The Data Management (DM) service ingests various types of raw data and manages failure, backpressure, and data grooming tasks when necessary. The DM service also enables fast data ingestion through a unique method of automatic indexing and compression.

The Engine service is responsible for processing the incoming raw data and serving user queries. It uses a combination of auto scaling and data sharding to achieve speed and scale. The read-only query language is designed to make the syntax easy to read, author, and automate. The language provides a natural progression from one-line queries to complex data processing scripts for efficient query execution.
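Data sharding of this kind can be pictured with a small hash-based placement sketch. This is a generic illustration of horizontal sharding, not ADX's actual placement logic:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    # Deterministically map a record key to one of num_shards horizontal
    # shards; hashing keeps the assignment stable across runs and spreads
    # keys roughly uniformly, so queries can scan shards in parallel.
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Example: route incoming telemetry records to 8 shards.
placement = {k: shard_for(k, 8) for k in ("device-1", "device-2", "device-3")}
```

Because each shard holds a disjoint slice of the data, a query engine can fan a scan out across all shards at once, which is one ingredient of sub-second queries over very large datasets.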

ADX is available in 41 Azure regions and is supported by a growing ecosystem of partners, including ISVs and system integrators.

Next steps

Visit the Azure Data Explorer product page to learn more.
Access documentation, quick starts, and tutorials.
Find pricing information for Azure Data Explorer.
Get started with Azure Data Explorer now.

Azure Data Factory Mapping Data Flow – Visual, zero-code experience for data transformation

Azure Data Factory (ADF) is a hybrid cloud-based data integration service for orchestrating and automating data movement and transformation. ADF provides over 80 built-in connectors to structured, semi-structured, and unstructured data sources.

With Mapping Data Flow in ADF, customers can visually design, build, and manage data transformation processes without learning Spark or having a deep understanding of the underlying distributed infrastructure.

Mapping Data Flow combines a rich expression language with an interactive debugger to easily execute, trigger, and monitor ETL jobs and data integration processes.

Azure Data Factory is available in 21 regions and expanding, and is supported by a broad ecosystem of partners, including ISVs and system integrators.

Next steps

Visit the Azure Data Factory product page to learn more.
Access documentation, quick starts, and tutorials.
Find pricing information on Azure Data Factory.
Learn more about Mapping Data Flow.
Get started and sign-up for the preview of Azure Data Factory – Mapping Data Flow.

Azure is the best place for data analytics

With these technical innovations announced today, Azure continues to be the best cloud for analytics. Learn more about why analytics in Azure is simply unmatched.
Source: Azure

Microsoft Healthcare Bot brings conversational AI to healthcare

Today we announced the general availability of the Microsoft Healthcare Bot in the Azure Marketplace. The Microsoft Healthcare Bot is a cloud service that powers conversational AI for healthcare. It’s designed to empower healthcare organizations to build and deploy compliant, AI-powered virtual health assistants and chatbots that help them put more information in the hands of their users, enable self-service, drive better outcomes, and reduce costs.

The Healthcare Bot service has several unique aspects:

Out-of-the-box healthcare intelligence including language models to understand healthcare intents and medical terminology, as well as content from credible providers with information about conditions, symptoms, doctors, medications, and even a symptom checker.
Customization and extensibility, which allows partners to introduce their own business flows, and securely connect to their own backend systems over HL7 FHIR or REST APIs. The service model allows our partners to focus on the important things like their key business needs and their own flows.
Security and compliance with industry standards, such as ISO 27001, ISO 27018, HIPAA, Cloud Security Alliance (CSA) Gold, and GDPR which we consider as table stakes in this industry. We also provide tools and out-of-the-box functionality that help our partners create secure and compliant solutions.

The close collaboration with our preview partners, including Premera Blue Cross, Quest Diagnostics, and Advocate Aurora Health, helped identify diverse use cases that address the needs and expectations of healthcare organizations. We now have a better understanding of what’s important to our partners, and how to evolve the product by focusing on key differentiating features. For example, we realized the importance of enabling a visual design environment that allows review of the flows by clinical personnel and domain experts who are non-developers. We also evolved our scenario templates catalog and provided a gallery of example use cases to start from, which allows our partners to develop their bots quickly and inexpensively.
 
It has been exciting for us to see our partners go live with their chatbots, enhance their chatbots over time, and meet their business goals. And in the upcoming months, we will develop the service further.

It’s my opinion that virtual health assistants and chatbot technology will never replace medical personnel. But technology can help make better use of medical personnel's time and relieve some of the burden from the healthcare system.

Technology is here to enable that. It’s the responsibility of our generation to leverage technology to help solve important problems for humankind.

For more information:

Microsoft Healthcare Bot service on Azure Marketplace

Microsoft Healthcare Bot project page
Source: Azure

Lighting up healthcare data with FHIR®: Announcing the Azure API for FHIR

In the last several years we’ve seen fundamental transformation in healthcare data management, but the biggest, and perhaps most important, shift has been in how healthcare organizations think about cloud technology and their most sensitive health data. Healthcare leaders have transitioned from asking “Why should I manage healthcare data in the cloud?” to asking “How?”

The change in the question may seem subtle, but the rigor required to ensure the highest level of privacy, security, and management of Protected Health Information (PHI) in the cloud has been a barrier to entry for much of the healthcare ecosystem. Compounding the difficulty is the state of data: multiple datasets, fragmented sources of truth, inconsistent formats, and exponential growth of data types.

We are now seeing, almost daily, new breakthroughs with applied machine learning on health data. But to truly apply machine learning at scale in the healthcare industry, we must ensure a secure and trusted pathway to manage that data in the cloud. Moving data into the cloud in its current state can reduce cost, but cost isn’t the only measure. Healthcare leaders are thinking about how they bring their data into the cloud while increasing opportunities to use and learn from that data: How do we ensure the privacy of patient data? How do we retain control and access management for our data at scale? How do we bring data into the cloud in a way that will accelerate machine learning for the future?  

And today I am thrilled to announce Azure technology that begins to answer the question of “how”: Azure API for FHIR®.

Azure API for FHIR®: Your health data. Unlocked with FHIR.

Data management in the open source FHIR (Fast Healthcare Interoperability Resources) standard is becoming turnkey for interoperability and machine learning on healthcare data. There is a growing need for healthcare partners to build and maintain FHIR services that exchange and manage data in the FHIR format.

Azure API for FHIR offers exchange of data via a FHIR API and a managed Platform as a Service (PaaS) offering in Azure, designed for management and persistence of PHI data in the native FHIR format. The FHIR API and data store enable you to securely connect and interact with any system that utilizes FHIR APIs, and Microsoft takes on the operations, maintenance, updates, and compliance requirements of the PaaS offering, so you can free up your own operational and development resources.
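To give a feel for what FHIR-native data looks like, here is a minimal sketch that assembles a FHIR Patient resource as JSON. The field names follow the FHIR Patient definition, but a managed FHIR service validates far more than this, and the endpoint shown in the comment is the generic FHIR REST pattern rather than a specific service URL:

```python
import json

def make_patient(patient_id: str, family: str, given: list, birth_date: str) -> dict:
    # Minimal FHIR Patient resource: resourceType is mandatory, and name
    # entries split the family name from the list of given names.
    return {
        "resourceType": "Patient",
        "id": patient_id,
        "name": [{"family": family, "given": given}],
        "birthDate": birth_date,
    }

patient = make_patient("pat-001", "Garcia", ["Ana"], "1980-07-15")
# A FHIR server exchanges this as JSON over REST, e.g. PUT [base]/Patient/pat-001
payload = json.dumps(patient)
```

Because every system of record emits the same resource shapes, downstream consumers (analytics, machine learning pipelines, other FHIR servers) can share data without per-source translation.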

Key features of the Azure API for FHIR will include:

Provision and start running in just a few minutes
High performance, low latency
Enterprise grade, managed FHIR service
Role Based Access Control (RBAC) – allowing you to manage access to your data at scale
Audit log tracking for access, creation, modification, and reads within each data store
SMART on FHIR functionality
Secure compliance in the cloud: ISO 27001:2013 certified, supports HIPAA and GDPR, and built on the HITRUST certified Azure platform
Data is isolated to a unique database per API instance
Protection of your data with multi-region failover

The cost-effective way to start in the cloud

Because we believe it's important to invest in the FHIR standard, you pay only for underlying database usage and data transfer when using the Azure API for FHIR.

The cloud environment you choose for healthcare applications is critical. You want elastic scale so you pay only for the throughput and storage you need. The Azure services that power Azure API for FHIR are designed for rapid performance no matter what size datasets you’re managing. The data persistence layer in the Azure API for FHIR leverages Azure Cosmos DB, which guarantees latencies at the 99th percentile and guarantees high availability with multi-homing capabilities.

Those with experience in healthcare data management may wonder:  we have HL7 standards in the industry already, why do we need FHIR to bring data into the cloud? HL7 has served the industry well since its first implementations in the 1980s. But as it’s evolved, customizations of HL7 can translate to a heavy lift for the future of healthcare learning: data science. FHIR is gaining traction because it provides a consistent, open source, extensible data standard that can scale as we learn. In order to accelerate machine learning on healthcare data, organizations are shifting data to the FHIR format as they transition into the cloud:  saving both time and money.

Where can I apply the Azure API for FHIR?

Azure API for FHIR is intended for customers developing solutions that integrate healthcare data from one or more systems of record. The API supports ingesting, managing, and persisting that data as native FHIR resources. Leveraging an open source standard (FHIR) enables interoperability for data sharing both within and outside of your ecosystem and helps accelerate machine learning on data that is normalized in FHIR.

Our customers are already seeing powerful scenarios for FHIR applications:

Startup/IoMT:
Fred Hutchinson Cancer Research in Seattle, WA is developing innovative IoMT and patient applications to remotely monitor patients undergoing chemotherapy. While in development, they needed a secure, fully managed backend service to handle patient data across multiple participating hospitals. To ensure they could design once and integrate quickly into a broad number of hospital EHR systems, they are using Azure API for FHIR and a SMART on FHIR implementation.

Provider Ecosystems:
University of Pittsburgh Medical Center has been working with Microsoft FHIR offerings in their hospital systems: “The ability to one-click deploy a FHIR server as a managed service allows us to think more about our applications and customer needs, and less about the plumbing required to store and represent clinical data.” – Brian Kolowitz, director of product management, UPMC Enterprises.

Research:
Associate Dean of Research Information Technology at University of Michigan, Dr. Sachin Kheterpal, is leading efforts to streamline data ingestion and management for Michigan Medicine’s research teams. To drive faster research innovation and ML development, University of Michigan will be piloting the management of data through the Azure API instead of their on-premises systems. “We’re expecting to reduce operational workloads, increase data control, improve data de-identification, and enable our data scientists to move faster with data normalized in the FHIR standard that benefits from a community of developers based upon FHIR resources.”

If you want additional support as you integrate FHIR, we’ve also been working with over 25 partners in our Early Access Program. ISV and SI partners in the Early Access Program understand the technical details and applications for Azure API for FHIR and can help get your data into FHIR and the cloud even more easily.

Investing in FHIR to accelerate AI in healthcare

The Azure ecosystem already has robust components for Microsoft partners to build secure and compliant health solutions in the cloud on their own, but we’re going to continue making it easier. We’re focused on delivering turnkey cloud solutions so our healthcare partners can focus their attention on innovation. Check out Azure API for FHIR and do more with your health data.

FHIR® is the registered trademark of HL7 and is used with the permission of HL7 
Source: Azure

Reserved instances now applicable to classic VMs, cloud services, and Dev/Test subscriptions

Expanding reserved instances discounts to classic virtual machines, Azure Cloud Services, and Dev/Test subscriptions

Today, we are excited to announce two new Azure Reserved VM Instances (RI) features to provide our customers with additional savings and purchase controls.

Since launch, we have continued to add multiple features such as instance size flexibility, RIs for US Government regions, purchase recommendations, and RIs in the Cloud Solution Provider (CSP) channel. We have also extended the capability to provide reservation discounts on SQL Databases and Cosmos DB.

Features that we are launching today:

1. Classic VMs and Cloud Services users can now benefit from the RI discounts

RIs with the instance size flexibility option enabled will now apply the discount to both classic VMs and cloud services. For cloud services, the reservation discount applies only to the compute cost. When the reservation discount is applied to cloud services, the usage charges are split into a compute charge (Linux meter) and a cloud services charge (cloud services management meter). Learn how the reservation discount applies to Cloud Services.
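A quick arithmetic sketch of the split (all rates hypothetical) shows the effect on a cloud services bill:

```python
def cloud_service_usage_charge(hours: float, compute_rate: float,
                               mgmt_rate: float, ri_covered: bool) -> float:
    # Hypothetical hourly rates. The reservation discount zeroes out only
    # the compute (Linux) meter; the cloud services management meter is
    # still billed at its normal rate.
    compute = 0.0 if ri_covered else hours * compute_rate
    management = hours * mgmt_rate
    return compute + management

# 100 hours at $0.10/hr compute + $0.02/hr management:
without_ri = cloud_service_usage_charge(100, 0.10, 0.02, ri_covered=False)
with_ri = cloud_service_usage_charge(100, 0.10, 0.02, ri_covered=True)
```

With the reservation applied, only the management meter remains on the usage bill; the compute portion is covered by the reservation purchase itself.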

2. Enterprise Dev/Test and Pay-As-You-Go Dev/Test subscriptions can now benefit from the RI discounts

Newly purchased RIs or existing RIs can now be applied to your Dev/Test subscriptions. VM usage on Dev/Test subscriptions will be automatically eligible for the RI discount and all existing reservations with shared scope will be updated to apply discounts to Dev/Test subscriptions.

Next steps

Visit FAQs on reservation page.
Read the documentation, “What are Azure Reservations?” to learn more.

Source: Azure

Configure resource group control for your Azure DevTest Lab

As a lab owner, you now have the option to configure all your lab virtual machines (VMs) to be created in a single resource group. This helps prevent you from reaching resource group limits on your Microsoft Azure subscription. It also lets you consolidate all your lab resources within a single resource group, which simplifies tracking those resources and applying policies to manage them at the resource group level. This article discusses improving governance of your development and test environments by using Azure policies applied at the resource group level.

This feature allows you to use a script to specify either a new or an existing resource group within your Azure subscription in which all your lab VMs will be created. It is important to note that we currently support this feature through an API; however, we will soon add an in-product experience for configuring this setting for your lab.

Now let’s walk through the options you have as a lab owner while using this API:

You can choose the lab’s resource group for all VMs to be created in going forward.
You can choose an existing resource group other than the lab's resource group for all VMs to be created in going forward.
You can enter a new resource group name for all VMs to be created in going forward.
You can also continue with the existing behavior.

This setting applies to new VMs created in the lab, so older VMs that were created in their own resource groups remain unaffected. However, you can migrate these VMs from their individual resource groups to the common resource group you selected, allowing all your lab VMs to sit in one common resource group going forward. You can learn more about migrating resources across resource groups by visiting our documentation, “Move resources to new resource group or subscription.” ARM environments created in your lab will continue to remain in their own resource groups and are not affected by any option you select while working with this API.
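For a sense of what the API call involves, here is a sketch that builds the management request. Treat the api-version value and the vmCreationResourceGroupId property name as assumptions and confirm them against the DevTest Labs REST documentation before relying on them:

```python
def build_lab_update(subscription_id: str, lab_rg: str, lab_name: str,
                     target_rg_id: str):
    # Illustrative request construction only. The api-version and the
    # vmCreationResourceGroupId property below are assumptions; check the
    # DevTest Labs REST reference for the exact request shape.
    url = ("https://management.azure.com"
           f"/subscriptions/{subscription_id}/resourceGroups/{lab_rg}"
           f"/providers/Microsoft.DevTestLab/labs/{lab_name}"
           "?api-version=2018-09-15")
    body = {"properties": {"vmCreationResourceGroupId": target_rg_id}}
    return url, body

url, body = build_lab_update(
    "00000000-0000-0000-0000-000000000000", "my-lab-rg", "mylab",
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/shared-vms")
```

Sending this as an authenticated PATCH/PUT against the lab resource is what the example script in the documentation wraps for you.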

You can also learn more about how to use this API along with an example script by visiting our documentation, “About Azure DevTest Labs.” We hope you find this feature useful!

Got an idea to make it work better for you? Submit your feedback and ideas, or vote for others at Azure DevTest Labs UserVoice forum. Have a question? Check out the answers or ask a new one at our MSDN forum.
Source: Azure

Azure Cost Management now generally available for enterprise agreements and more!

As enterprises accelerate cloud adoption, it is becoming increasingly important to manage cloud costs across the organization. Last September, we announced the public preview of a comprehensive native cost management solution for enterprise customers. We are now excited to announce the general availability (GA) of the Azure Cost Management experience, which helps organizations visualize, manage, and optimize costs across Azure.

In addition, we are excited to announce the public preview for web direct Pay-As-You-Go customers and the Azure Government cloud.

With the addition of Azure Cost Management, customers now have an always-on, low-latency solution to understand and visualize costs, with the following features available in Cost Management:

Cost analysis

This feature allows you to track costs over the course of the month and offers you a variety of ways to analyze your data. To learn more about how to use cost analysis, please visit our documentation, “Quickstart: Explore and analyze costs with Cost analysis.”

Budgets

Use budgets to proactively manage costs and drive accountability within your organization. To learn more about using Azure budgets please visit our documentation, “Tutorial: Create and manage Azure budgets.”

Exports

Export all your cost data to an Azure storage account using our new exports feature. You can use this data in external systems and combine it with your own data to maximize your cost management capabilities. To learn more about using Azure exports please visit our documentation, “Tutorial: Create and manage exported data.”

New Azure APIs

As a part of this release we are also making the APIs mentioned below available for you to build your own cost management solutions. To learn more about developing on top of our new cost management functionality, please visit the Azure REST API documentation links below.

Usage Query – Develop advanced API query calls to learn the most about your organization’s usage and cost patterns.
Budgets – Create and view your budgets in an automated fashion.
Exports – Automate data export configuration.
Usage details by Management Group – Use this API to analyze your organization’s usage across multiple subscriptions.
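As an example of building on these APIs, the sketch below assembles a budget definition payload in the style of the budgets API. The field names are illustrative and should be checked against the REST reference before use:

```python
def make_budget(amount: float, start_date: str, end_date: str,
                threshold_pct: int, contact_emails: list) -> dict:
    # Budget payload sketch: a monthly cost budget with one notification
    # that fires when actual spend crosses threshold_pct of the amount.
    # Treat the property names as assumptions, not the documented schema.
    return {
        "properties": {
            "category": "Cost",
            "amount": amount,
            "timeGrain": "Monthly",
            "timePeriod": {"startDate": start_date, "endDate": end_date},
            "notifications": {
                "actualSpend": {
                    "enabled": True,
                    "operator": "GreaterThan",
                    "threshold": threshold_pct,
                    "contactEmails": contact_emails,
                },
            },
        }
    }

budget = make_budget(500.0, "2019-02-01", "2019-12-31", 80, ["ops@contoso.com"])
```

A payload like this would be PUT to the budgets endpoint in an automated pipeline, which is exactly the "create and view your budgets in an automated fashion" scenario above.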

Alerts (in preview)

View and manage all your alerts in one single place with the new alerts preview feature. In this release you can view budget alerts, monetary commitment alerts, and department spending quota alerts. You can also view active and dismissed alerts.

Getting started

Get started now on this end-to-end cost management and optimization solution that enables you to get the most value for every cloud dollar spent. Please visit the Azure Cost Management documentation page for tutorials and details on getting started.

What’s coming next?

We will continue to iterate on additional Cost Management features in the coming months, so you can enjoy a more unified user experience with capabilities such as the ability to save and schedule reports, additional functionality in cost analysis, budgets, alerts, and exports, as well as showback.

Partners will also soon be able to leverage the benefits of cost management with our support for the Cloud Solution Provider (CSP) program. With Azure Cost Management, Microsoft is committed to continuing the investment in supporting a multi-cloud environment including Azure and AWS. Public preview for AWS is currently targeted for Q2 of the current calendar year. We plan to continue enhancing this with support for other clouds in the near future.

Are you ready for the best part? Azure Cost Management is available for free to all customers and partners to manage Azure costs.

The Cloudyn portal will continue to be available to customers while we integrate all relevant functionality into native Azure Cost Management.

Follow us on Twitter @AzureCostMgmt for exciting cost management updates.
Source: Azure

Build your own deep learning models on Azure Data Science Virtual Machines

As a modern developer, you may be eager to build your own deep learning models but aren’t quite sure where to start. If this is you, I recommend you take a look at the deep learning course from fast.ai. This new fast.ai course helps software developers start building their own state-of-the-art deep learning models. Developers who complete this fast.ai course will become proficient in deep learning techniques in multiple domains including computer vision, natural language processing, recommender algorithms, and tabular data.

You’ll also want to learn about Microsoft’s Azure Data Science Virtual Machine (DSVM). Azure DSVM empowers developers like you with the tools you need to be productive with this fast.ai course today on Azure, with virtually no setup required. Using fast cloud-based GPU virtual machines (VMs), at the most competitive rates, Azure DSVM saves you time that would otherwise be spent in installation, configuration, and waiting for deep learning models to train.

Here is how you can effectively run the fast.ai course examples on Azure.

Running the fast.ai deep learning course on Azure DSVM

While there are several ways in which you can use Azure for your deep learning course, one of the easiest ways is to leverage Azure Data Science Virtual Machine (DSVM). Azure DSVM is a family of virtual machine (VM) images that are pre-configured with a rich curated set of tools and frameworks for data science, deep learning, and machine learning.

Using Azure DSVM, you can use tools like Jupyter notebooks and the necessary drivers to run on powerful GPUs, saving time that would otherwise be spent installing, configuring, and troubleshooting compatibility issues on your system. Azure DSVM is offered in both Linux and Windows editions. Azure VMs provide a neat extension mechanism that the DSVM can leverage, allowing you to automatically configure your VM to your needs.

Microsoft provides an extension to the DSVM specifically for the fast.ai course, making the process so simple that you can answer a couple of questions and get your own instance of DSVM provisioned in a few minutes. The fast.ai extension installs all the necessary libraries you need to run the course Jupyter notebooks and also pulls down the latest course notebooks from the fast.ai GitHub repository. So in a very short time, you’ll be ready to start running the course samples.

Getting started with Azure DSVM and fast.ai

Here’s how simple it is to get started:

1. Sign in or sign up for an Azure subscription

If you don’t have an Azure subscription, you can start off with a free trial subscription, which lets you explore any Azure service for 30 days and gives you access to a set of popular services free for 12 months. Please note that free trial subscriptions do not give access to GPU resources. For GPU access, you need to sign up for an Azure pay-as-you-go subscription or use the Azure credits from a Visual Studio subscription if you have one. Once you have created your subscription, you can log in to the Azure portal.

2. Create a DSVM instance with fast.ai extension

You can now create a DSVM with the fast.ai extension by selecting one of the links below. Choose one depending on whether you prefer a Windows or a Linux environment for your course.

Linux (Ubuntu) edition of DSVM with fast.ai
Windows Server 2016 edition of DSVM with fast.ai

After answering a few simple questions in the deployment form, your VM is created in about five to ten minutes, pre-configured with everything you need to run the fast.ai course. While creating the DSVM, you can choose between a GPU-based or a CPU-only instance. A GPU instance drastically cuts down execution times when training deep learning models, which is largely what the course notebooks cover, so I recommend a GPU instance. Azure also offers low-priority instances, including GPU instances, at a significant discount of as much as 80 percent on compute usage charges compared to standard instances. Keep in mind, though, that they can be preempted and deallocated from your subscription at any time depending on factors like the demand for these resources. If you want to take advantage of the deep discount, you can create a preemptible Linux DSVM instance with the fast.ai extension.
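The low-priority discount is easy to reason about with a quick sketch (the rates here are hypothetical, and the actual discount varies with demand):

```python
def low_priority_compute_cost(standard_rate: float, hours: float,
                              discount: float = 0.80) -> float:
    # Hypothetical hourly rate; low-priority VMs can be discounted by as
    # much as ~80 percent relative to standard compute usage charges.
    return standard_rate * hours * (1.0 - discount)

# 100 hours of GPU training at a hypothetical $1.00/hr standard rate:
standard = 1.00 * 100
low_priority = low_priority_compute_cost(1.00, 100)
```

For long model-training runs that can tolerate preemption and restart from checkpoints, that difference adds up quickly.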

3. Run your course notebooks

Once you have created your DSVM instance, you can immediately start using it to run all the code in the course examples by accessing Jupyter and the course notebooks that are preloaded in the DSVM.

You can find more information on how to get started with fast.ai for Azure on the course documentation page.

Next steps

You can continue your journey in machine learning and data science by taking a look at the Azure Machine Learning service which enables you to track your experiments. You can also use automated machine learning, build custom models, and deploy machine learning, deep learning models, or pipelines in production at scale with several sample notebooks that are pre-built in the DSVM. You can also find additional learning resources on Microsoft’s AI School and LearnAnalytics.

I look forward to your feedback and questions on the fast.ai forums or on Stack Overflow.
Source: Azure

Best practices to consider before deploying a network virtual appliance

A network virtual appliance (NVA) is a virtual appliance primarily focused on network functions virtualization. A typical network virtual appliance provides various layer 4 to layer 7 functions such as firewalls, WAN optimizers, application delivery controllers, routers, load balancers, IDS/IPS, proxies, SD-WAN edge, and more. While the public cloud may provide some of these functionalities natively, it is quite common to see customers deploying network virtual appliances from independent software vendors (ISVs). These capabilities in the public cloud enable hybrid solutions and are generally available through the Azure Marketplace.

What exactly is the network virtual appliance in the cloud?

A network virtual appliance is often a full Linux virtual machine (VM) image consisting of a Linux kernel plus user-level applications and services. When a VM is created, it first boots the Linux kernel to initialize the system and then starts up any application or management services needed to make the network virtual appliance functional. The cloud provider is responsible for the compute resources, while the ISV provides the image that represents the software stack of the virtual appliance.

Similar to a standard Linux distribution, the Linux kernel is integral to the NVA’s image and is provided by the ISV, often customized. The kernel itself includes the drivers needed for all network and disk devices available to the virtual machine. The version of, and customizations made to, the NVA’s kernel will often impact the performance and functionality of the virtual machine; for more information about Linux and accelerated networking, see our documentation, “Create a Linux virtual machine with Accelerated Networking.” As new networking enhancements are made to the Azure platform, such as performance improvements or even entirely new networking features, the ISV may need to update the software image to provide support for those enhancements. Often, this entails updating their version of the Linux kernel from the upstream Linux project. For the latest updates, see the Linux Kernel Archives website.

All NVA images published in the Azure Marketplace go through rigorous testing and onboarding workflows. As part of Azure’s continuous integration and deployment life cycle, NVA images are deployed and tested in a pre-production environment for regressions or issues. ISVs are responsible for publishing deployment guidelines and GitHub-hosted Azure Resource Manager (ARM) templates for their specific products. Technical and performance specifications of the appliance are owned by the ISVs, while Microsoft owns the technical and performance specifications of the host environment. Technical support for the customer’s virtual appliance, its features, recommended OS version, kernel version, and security updates is provided by the ISV.

Pricing for NVA solutions may vary based on product types and publisher specifications. Software license fees and Microsoft Azure usage costs are charged separately through the Azure subscription. Learn more by visiting our list of Marketplace FAQs related to virtual appliances and the Azure Marketplace.

Below is an example of a hybrid network that extends an on-premises network to Azure. A demilitarized zone (DMZ) represents a perimeter network between on-premises and Azure, which includes NVAs.

Another example below shows an NVA with Azure Virtual WAN. For more details on how to steer traffic from a Virtual WAN hub to a network virtual appliance, please visit our documentation, “Create a Virtual Hub route table to steer traffic to a Network Virtual Appliance.”

Common best practices

Microsoft continues to collaborate with multiple ISVs to improve cloud experience for Microsoft customers.

Azure accelerated networking support: Consider a virtual appliance that is available on one of the VM types supporting Azure’s accelerated networking capability. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host in the datapath, reducing latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. Accelerated networking is supported on most general-purpose and compute-optimized instance sizes with two or more vCPUs. For a list of supported operating systems and additional information, visit our documentation, “Create a Windows virtual machine with Accelerated Networking.”
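An illustrative Azure CLI sketch of creating a VM with accelerated networking enabled follows; the resource group, VM name, image, and size below are placeholder assumptions, and the size must be one that supports accelerated networking:

```shell
# Illustrative sketch: create a VM with accelerated networking enabled.
# Resource group, VM name, image, and size are placeholder assumptions;
# pick a size that supports accelerated networking (2+ vCPUs).
az vm create \
  --resource-group myResourceGroup \
  --name myNva \
  --image UbuntuLTS \
  --size Standard_DS3_v2 \
  --accelerated-networking true
```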
Multi-NIC support: A network interface (NIC) is the interconnection between a VM and a virtual network (VNet). A VM must have at least one NIC but can have more, depending on the size of the VM you create. Learn how many NICs each VM size supports for Windows and Linux in our documentation, “Sizes for Windows virtual machines in Azure” or “Sizes for Linux virtual machines in Azure.” Many network virtual appliances require multiple NICs. With multiple NICs you can better manage your network traffic by isolating various types of traffic across different NICs. A good example is separating data-plane traffic from management-plane traffic, which requires the VM to support at least two NICs. A VM can only have as many network interfaces attached to it as its size supports. If you are considering adding a NIC after deploying the NVA, be sure to enable IP forwarding on the NIC. This setting disables Azure's check of the source and destination for a network interface. Learn more about how to enable IP forwarding for a network interface.
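The IP forwarding step above can be sketched with the Azure CLI; the NIC and resource group names below are placeholder assumptions:

```shell
# Illustrative sketch: enable IP forwarding on an existing NIC so Azure
# stops checking that traffic is addressed to or from this interface,
# allowing the NVA to forward packets on behalf of other hosts.
# NIC and resource group names are placeholder assumptions.
az network nic update \
  --resource-group myResourceGroup \
  --name myNvaDataNic \
  --ip-forwarding true
```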
HA ports with Azure Load Balancer: Azure Standard Load Balancer helps you load-balance TCP and UDP flows on all ports simultaneously when you're using an internal load balancer. A high availability (HA) ports load-balancing rule is a variant of a load-balancing rule, configured on an internal Standard Load Balancer. To make your NVA reliable and highly available, add NVA instances to the back-end pool of your internal load balancer and configure an HA ports load-balancing rule. For more information, please visit our documentation, “High availability ports overview.”
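An HA ports rule can be sketched with the Azure CLI as follows; protocol All with front-end and back-end port 0 is what designates the rule as HA ports, and all resource names below are placeholder assumptions:

```shell
# Illustrative sketch: HA ports rule on an internal Standard Load Balancer.
# Protocol "All" with front-end/back-end port 0 load-balances every TCP and
# UDP flow on all ports. Resource names are placeholder assumptions.
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myInternalLb \
  --name haPortsRule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myNvaPool
```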

Support for Virtual Machine Scale Sets (VMSS): Azure Virtual Machine Scale Sets let you create and manage a group of identical, load balanced VMs. The number of VM instances can automatically increase or decrease in response to a demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update a large number of VMs. Scale sets are built from virtual machines. With scale sets, the management and automation layers are provided to run and scale your applications. For more information visit our documentation, “What are virtual machine scale sets.”
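A minimal Azure CLI sketch of creating a scale set of NVA instances behind a load balancer follows; the names, image, and instance count are placeholder assumptions:

```shell
# Illustrative sketch: create a scale set of identical NVA instances
# placed behind a load balancer. Names, image, and instance count are
# placeholder assumptions; autoscale rules would be configured separately.
az vmss create \
  --resource-group myResourceGroup \
  --name myNvaScaleSet \
  --image UbuntuLTS \
  --instance-count 2 \
  --lb myInternalLb \
  --upgrade-policy-mode automatic
```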

As enterprises move increasingly demanding, mission-critical workloads to the cloud, it is important to consider comprehensive networking services that are easy to deploy, manage, scale, and monitor. We are fully committed to providing you the best network virtual appliance experience, one that delivers all the benefits of the cloud alongside your network needs. Picking a virtual appliance is an important decision when you are designing your network, and we want to help you make that choice with ease of use and scale in mind.

Additional links

Support for Linux and open source technology in Azure
Deploy highly available network virtual appliances
Azure Reference Architectures

Source: Azure

Investing in our partners’ success

Today Gavriella Schuster, CVP of Microsoft’s Partner organization, spoke about our longstanding commitment to partners, and new investments to enable partners to accelerate customer success.

As we shared in our recent earnings, Azure is growing at 76 percent, driven by a combination of continued innovation, strong customer adoption across industries and a global ecosystem of talented partners. I’m inspired by partners such as Finastra, Cognata, ABB, and Egress who are working with Azure to enable digital transformation within their respective industries.

While Microsoft has long been a partner-oriented organization, some things are different with the cloud. Specifically, partners need Microsoft to be more than just a great technology provider; they need us to be a trusted business partner. This requires long-term commitment and the ability to continually adapt and innovate as the market shifts. This has been, and continues to be, our commitment. Our partnership philosophy is grounded in the belief that we can only deliver on our mission if there is a strong and successful ecosystem around us.

In the spirit of being a trusted business partner, I wanted to highlight our key partner-oriented investments and some of the resources to help our partners successfully grow their businesses.  

Committed to growing our partners’ cloud businesses

Unlock new growth opportunities. Microsoft has sales organizations in 120 countries around the world. Our comprehensive partner co-selling program allows partners to tap into our global network to expose their solutions and services to new markets and new opportunities. Microsoft sales people are paid to bring the best solutions to our customers, spanning both Microsoft and partner solutions.

The Azure Marketplace and AppSource digital storefronts enable customers to easily find, try, and buy the right solutions from our partners. In March, we will add new capabilities to our marketplaces that enable partners to publish to a single location and then merchandise to over 75 million Microsoft customers, thousands of Microsoft sales people, and tens of thousands of Microsoft partners with the click of a button. This new capability further enables partners in our Cloud Solution Provider (CSP) program to create comprehensive, tailored solutions for their end customers. And this is just the beginning. More innovations are on the way, and you can view what’s coming through our Marketplaces roadmap.

“Azure Marketplace has transformed Chef’s business because it has opened up brand new channels and a new lead generation.” – Michele Todd, Chef Software

Technical resources and support whenever and wherever you need it. Whether you’re getting acquainted with Azure, or are further along in developing your solution – there are resources to help you find the answers:

Find Azure training whether online, in a classroom or at an event near you
We are committed to providing you with up-to-date documentation and transparency on the product roadmap
Technical support programs in various levels based on your need
Community forums supported by dedicated Microsoft technical experts

Cloud migration. I previously wrote about how we’re making it easy for customers to migrate their existing workloads to Azure. For our SI and managed services partners, the approaching SQL Server 2008 and Windows Server 2008 end of support also brings new opportunities to provide cloud migration, app modernization, and ongoing app management services to customers. This migration alone represents more than $50 billion in opportunity for our partners.

We’ve created the Cloud Migration and Modernization partner playbook and offer the Azure FastTrack program to help you connect with Microsoft engineers as you accelerate this practice. And available this week, new migration content will be launched on Digital Marketing Content OnDemand, a free benefit in MPN Go-to-Market Services.

An open, hybrid, and trusted platform to turn ideas into solutions faster

Build on a secure and trusted foundation. With GDPR and cybersecurity top of mind for customers, partners need a cloud partner that allows them to focus on building their solution, and not on performing security and privacy audits. Microsoft leads the industry in establishing clear security and privacy requirements and in consistently meeting these requirements. And to protect our partners’ cloud-based innovations and investments, we’ve created unique programs like the Microsoft Azure IP Advantage program which lets you leverage a portfolio of Microsoft’s patents to protect against IP infringement risks.

Flexibility to deliver hybrid cloud solutions. Azure has been developed for hybrid deployment from the ground up, providing partners the flexibility to build hybrid solutions for customers, using Windows and Linux.

Develop on any platform, with tools that you know and love. With Azure, partners can migrate existing apps to the cloud, implement Kubernetes-based architectures, or develop cloud-native apps using microservices and serverless technologies from Microsoft, our partners, and the open-source community.

New innovations to light up customer opportunities

Analytics and insights. Our customers’ hunger for better insights is creating great opportunities for partners. Azure enables customers to efficiently manage the end-to-end data analytics lifecycle. TimeXtender is helping customers speed up digital transformation by building platforms for operational data exchange (ODX) using Azure. Neal Analytics created an algorithm for retailers and consumer goods companies that makes inventory data actionable.

AI. Azure provides a comprehensive set of flexible AI services, and a thoughtful and trusted approach to AI, so partners can create AI solutions quickly and with confidence. Talview is a pioneer in using artificial intelligence (AI) and cognitive technologies to analyze video interviews in multiple formats. 

“The Talview platform was previously hosted on Amazon Web Services (AWS), but we shifted to Azure because its AI capabilities were deeper and richer for our needs.” – Sanjoe Jose, CEO, Talview

Internet of Things. Partners’ use of Azure IoT has become a key differentiator. Willow is enabling its customer thyssenkrupp Elevators to drive building insights and improvements using Azure Digital Twins, which creates virtual representations of the physical world, allowing partners to develop contextually aware solutions specific to their industries.

“Partnering with Microsoft gives us access to both the best technology platform for designing and developing innovative solutions for our clients, along with the best partner enablement organization in the industry.” – Matt Jackson, VP Services for Americas, Insight

We are thrilled to be on this journey together with you. And, if you’re new to Azure, I invite you to become an Azure partner today.
Source: Azure

Microsoft Azure portal February 2019 update

This month we’re bringing you updates to several compute (IaaS) resources, the ability to export contents of lists of resources and resource groups as CSV files, an improvement to the layout of essential properties on overview pages, enhancements to the experience on recovery services pages, and expansions of setting options in Microsoft Intune.

Sign in to the Azure portal now and see for yourself everything that’s new. You can also download the Azure mobile app.

Here is a list of February updates to the Azure portal:

Compute (IaaS)

Add a new virtual machine (VM) directly to an application gateway or load balancer
Migrate classic virtual machines (VMs) to Azure Resource Manager
Virtual machine scale sets (VMSS) password reset

Shell

Export as CSV in All resources and Resource groups
Layout change for essential properties on overview pages

Site Recovery

Azure Site Recovery UI updates

Other

Updates to Microsoft Intune

Let’s look at each of these updates in detail.

Compute (IaaS)

Add a new VM directly to an application gateway or load balancer

We learned from you that a common scenario involves adding a new VM to a load-balanced set, such as setting up a SharePoint farm or putting together a three-tier web application. You can now add a new VM to an existing load-balancing solution during the VM creation process. When you specify networking parameters for your virtual machine, you can now choose to add it to the backend pool of an application gateway for HTTP and HTTPS traffic, or of a load balancer Standard SKU for all TCP and UDP traffic.
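The same outcome can be sketched with the Azure CLI by attaching an existing VM's NIC IP configuration to a load balancer's back-end pool; all resource names below are placeholder assumptions:

```shell
# Illustrative sketch: add an existing VM's NIC to a load balancer's
# back-end pool. All resource names are placeholder assumptions.
az network nic ip-config address-pool add \
  --resource-group myResourceGroup \
  --nic-name myVmNic \
  --ip-config-name ipconfig1 \
  --lb-name myLoadBalancer \
  --address-pool myBackendPool
```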

Migrate classic VMs to Azure Resource Manager

The Azure Resource Manager (ARM) deployment model was released nearly three years ago, and many features have been added since then that are exclusive to ARM. The Azure platform supports migrating classic Azure Service Manager (ASM) resources to ARM, and you can now use the Azure portal to migrate existing infrastructure virtual machines, virtual networks, and storage accounts to the modern ARM deployment model.

Navigate to a classic virtual machine, and select Migrate to ARM from the Resource menu under Settings.

VMSS password reset

You can now use the portal to reset the password of virtual machine scale set instances.

Navigate to a virtual machine scale set in the Azure portal, and select Reset password.
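Outside the portal, the password reset for scale set instances is typically performed through the VMAccess extension. The following is a hedged CLI sketch for a Linux scale set; the extension name and publisher differ for Windows, and all resource names and credentials below are placeholder assumptions:

```shell
# Illustrative sketch: reset a local user's password on a Linux scale set
# via the VMAccess extension. Names and credentials are placeholders.
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name VMAccessForLinux \
  --publisher Microsoft.OSTCExtensions \
  --protected-settings '{"username": "azureuser", "password": "<new-password>"}'

# Apply the updated model to all existing instances.
az vmss update-instances \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-ids "*"
```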

Shell

Export as CSV in All resources and Resource groups

We have recently added the ability to export the contents of lists of resources and resource groups to a CSV (comma separated values) file.

This capability is available in the All resources screen:

It is also available in the Resource groups screen:

We have added this capability to an instance of the Resource group screen, so you can download all the resources within a single resource group to a CSV file:
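A roughly equivalent export can be sketched with the Azure CLI; the resource group name and the choice of columns below are assumptions:

```shell
# Illustrative sketch: export name/type/location of every resource in a
# resource group to a CSV file from the CLI instead of the portal.
# Resource group name and chosen columns are placeholder assumptions.
az resource list \
  --resource-group myResourceGroup \
  --query "[].[name, type, location]" \
  --output tsv | tr '\t' ',' > resources.csv
```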

Layout change for essential properties on overview pages

We’ve changed the way that essential properties are laid out on overview pages so there’s less vertical scrolling required now. On standard wide screen resolutions, the essential properties (key/value) will be laid out horizontally rather than vertically to save vertical space. However, you will still get the vertical layout if the essential properties do not have enough horizontal space to avoid truncation and/or ellipsis of the important information.

Select Virtual Machines within the menu on the left.
Select any virtual machine.

Site Recovery

Azure Site Recovery UI updates

The new enhanced IaaS VM disaster recovery experience, with multiple tabs, lets you configure replication with a single click. It’s as simple as selecting the Target region.

Select any virtual machine.
Select Disaster recovery within the menu located on the left.
Select Target region.
Select Review + Start replication.

We also now have a new immersive experience for Site Recovery infrastructure with the addition of an overview tab.

Select any Recovery Services vault.
Select Site Recovery infrastructure under the subheading Manage.

Other

Updates to Microsoft Intune

The Microsoft Intune team has been hard at work on updates. You can find a complete list on the What’s new in Microsoft Intune page, including changes that affect your experience using Intune.

Did you know?

You can always test features by visiting the preview version of Azure portal.

Next steps

Thank you for all your terrific feedback. The Azure portal is built by a large team of engineers who are always interested in hearing from you.

We recently launched the Azure portal “how to” series where you can learn about a specific feature of the portal in order to become more productive using it. To learn more please watch the videos “How to manage multiple accounts, directories, and subscriptions in Azure” and “How to create a virtual machine in Azure.” Keep checking in on the Azure YouTube channel for new videos each week.

If you’re interested in learning how we streamlined resource creation in Microsoft Azure to improve usability, consistency, and accessibility, read the new Medium article, “Creation at Cloud Scale.” If you’re curious to learn more about how the Azure portal is built, be sure to watch the Microsoft Ignite 2018 session, “Building a scalable solution to millions of users.”

Don’t forget to sign in on the Azure portal and download the Azure mobile app today to see everything that’s new. Let us know your feedback in the comments section or on Twitter. See you next month.
Source: Azure