FedRAMP Moderate Blueprints helps automate US federal agency compliance

We’ve just released our newest Azure Blueprints sample for the US Federal Risk and Authorization Management Program (FedRAMP) at the moderate impact level. FedRAMP is a key certification because cloud providers seeking to sell services to US federal government agencies must first demonstrate FedRAMP compliance. Azure and Azure Government are both approved for FedRAMP at the high impact level, and we plan for a future blueprint to provide control mappings for the high impact level as well.

Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implements and adheres to standards, patterns, and requirements. Azure Blueprints allows customers to set up compliant environments matched to common internal scenarios and external standards such as ISO 27001, the Payment Card Industry Data Security Standard (PCI DSS), and Center for Internet Security (CIS) Benchmarks.

Compliance with standards such as FedRAMP is increasingly important for all types of organizations, making control mappings to compliance standards a natural application for Azure Blueprints. Azure customers, particularly those in regulated industries, have expressed a strong interest in compliance blueprints to help ease the burden of their compliance obligations.

FedRAMP was established to provide a standardized approach for assessing, monitoring, and authorizing cloud computing services under the Federal Information Security Management Act (FISMA), and to help accelerate the adoption of secure cloud solutions by federal agencies.

The Office of Management and Budget now requires all executive federal agencies to use FedRAMP to validate the security of cloud services. The National Institute of Standards and Technology (NIST) SP 800-53 sets the standard, and FedRAMP is the program that certifies that a cloud service provider (CSP) meets it. Azure is also compliant with NIST SP 800-53, and we already offer an Azure Blueprints sample for NIST SP 800-53 Rev. 4.

The new blueprint provides partial control mappings to important portions of the FedRAMP moderate Security Controls Baseline, including:

Access control (AC)

 Account management (AC-2). Assigns Azure Policy definitions that audit external accounts with read, write, and owner permissions on a subscription as well as deprecated accounts; implement role-based access control (RBAC) to help you manage who has access to resources in Azure; and monitor virtual machines that can support just-in-time access but haven't yet been configured.
 Information flow enforcement (AC-4). Assigns an Azure Policy definition to help you monitor Cross-Origin Resource Sharing (CORS) resource access restrictions.
 Separation of duties (AC-5). Assigns Azure Policy definitions that help you control membership of the administrators group on Windows virtual machines.
 Remote access (AC-17). Assigns an Azure Policy definition that helps you with monitoring and control of remote access.

Audit and accountability (AU)

 Response to audit processing failures (AU-5). Assigns Azure Policy definitions that monitor audit and event logging configurations.
 Audit generation (AU-12). Assigns Azure Policy definitions that audit log settings on Azure resources.

Configuration management (CM)

 Least functionality (CM-7). Assigns an Azure Policy definition that helps you monitor virtual machines where an application whitelist is recommended but has not yet been configured.
 User-installed software (CM-11). Assigns an Azure Policy definition that helps you monitor virtual machines where an application whitelist is recommended but has not yet been configured.

Contingency planning (CP)

 Alternate processing site (CP-7). Assigns an Azure Policy definition that audits virtual machines without disaster recovery configured.

Identification and authentication (IA)

 Network access to privileged accounts (IA-2). Assigns Azure Policy definitions to audit accounts with owner or write permissions that don't have multi-factor authentication enabled.
 Authenticator management (IA-5). Assigns policy definitions that audit the configuration of the password encryption type for Windows virtual machines.

Risk assessment (RA)

 Vulnerability scanning (RA-5). Assigns Azure Policy definitions that audit and enforce Advanced Data Security on SQL servers and help with the management of other information system vulnerabilities.

Systems and communications protection (SC)

 Denial of service protection (SC-5). Assigns an Azure Policy definition that audits if the distributed denial-of-service (DDoS) standard tier is enabled.
 Boundary protection (SC-7). Assigns Azure Policy definitions that monitor for network security group hardening recommendations as well as monitor virtual machines that can support just-in-time access but haven't yet been configured.
 Transmission confidentiality and integrity (SC-8). Assigns Azure Policy definitions that help you monitor cryptographic mechanisms implemented for communications protocols.
 Protection of information at rest (SC-28). Assigns Azure Policy definitions that enforce specific cryptographic controls and audit the use of weak cryptographic settings.

System and information integrity (SI)

 Flaw remediation (SI-2). Assigns Azure Policy definitions that monitor missing system updates, operating system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities.
 Malicious code protection (SI-3). Assigns Azure Policy definitions that monitor for missing endpoint protection on virtual machines and enforce the Microsoft antimalware solution on Windows virtual machines.
 Information system monitoring (SI-4). Assigns policies that audit and enforce deployment of the Log Analytics agent, and enhanced security settings for SQL databases, storage accounts, and network resources.

Azure tenants seeking to comply with FedRAMP should note that although the FedRAMP blueprint's controls may help customers assess compliance with particular controls, they do not ensure full compliance with all requirements of a control. In addition, controls are associated with one or more Azure Policy definitions, and the compliance standard includes controls that aren't addressed by any Azure Policy definitions in the blueprint at this time. Therefore, compliance in Azure Policy represents only a partial view of your overall compliance status.
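
If you want to check programmatically how the assigned policy definitions are evaluating your resources, you can query the Azure Policy compliance summary for a subscription. Here is a minimal Python sketch of that call; the REST path and api-version are our assumptions, so verify them against the Azure Policy Insights REST reference before relying on it:

import requests
from azure.identity import DefaultAzureCredential  # pip install azure-identity

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

# Acquire an Azure Resource Manager token and ask Policy Insights for the
# latest compliance summary across the subscription.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize"
)
resp = requests.post(
    url,
    params={"api-version": "2019-10-01"},  # assumed version; check the docs
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

results = resp.json()["value"][0]["results"]
print("Non-compliant resources:", results.get("nonCompliantResources"))
print("Non-compliant policies:", results.get("nonCompliantPolicies"))

Remember that, as noted above, this summary reflects only the controls that are mapped to Azure Policy definitions, not your full FedRAMP posture.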

Customers are ultimately responsible for meeting the compliance requirements applicable to their environments and must determine for themselves whether particular information helps meet their compliance needs.

Learn more about the Azure FedRAMP moderate Blueprints in our documentation.
Source: Azure

Announcing the general availability of the new Azure HPC Cache service

If data-access challenges have been keeping you from running high-performance computing (HPC) jobs in Azure, we’ve got great news to report! The now-available Microsoft Azure HPC Cache service lets you run your most demanding workloads in Azure without the time and cost of rewriting applications and while storing data where you want to—in Azure or on your on-premises storage. By minimizing latency between compute and storage, the HPC Cache service seamlessly delivers the high-speed data access required to run your HPC applications in Azure.

Use Azure to expand analytic capacity—without worrying about data access

Most HPC teams recognize the potential for cloud bursting to expand analytic capacity. While many organizations would benefit from the capacity and scale advantages of running compute jobs in the cloud, users have been held back by the size of their datasets and the complexity of providing access to those datasets, typically stored on long-deployed network-attached storage (NAS) assets. These NAS environments often hold petabytes of data collected over a long period of time and represent significant infrastructure investment.

Here’s where the HPC Cache service can help. Think of the service as an edge cache that provides low-latency access to POSIX file data sourced from one or more locations, including on-premises NAS and data archived to Azure Blob storage. The HPC Cache makes it easy to use Azure to increase analytic throughput, even as the size and scope of your actionable data expands.

Keep up with the expanding size and scope of actionable data

The rate of new data acquisition in certain industries such as life sciences continues to drive up the size and scope of actionable data. Actionable data, in this case, could be datasets that require post-collection analysis and interpretation that in turn drive upstream activity. A sequenced genome can approach hundreds of gigabytes, for example. As the rate of sequencing activity increases and becomes more parallel, the amount of data to store and interpret also increases—and your infrastructure has to keep up. Your power to collect, process, and interpret actionable data—your analytic capacity—directly impacts your organization’s ability to meet the needs of customers and to take advantage of new business opportunities.

Some organizations address expanding analytic throughput requirements by continuing to deploy more robust on-premises HPC environments with high-speed networking and performant storage. But for many companies, expanding on-premises environments presents increasingly daunting and costly challenges. For example, how can you accurately forecast and more economically address new capacity requirements? How do you best juggle equipment lifecycles with bursts in demand? How can you ensure that storage keeps up (in terms of latency and throughput) with compute demands? And how can you manage all of it with limited budget and staffing resources?

Azure services can help you more easily and cost-effectively expand your analytic throughput beyond the capacity of existing HPC infrastructure. You can use tools like Azure CycleCloud and Azure Batch to orchestrate and schedule compute jobs on Azure virtual machines (VMs). More effectively manage cost and scale by using low-priority VMs, as well as Azure Virtual Machine Scale Sets. Use Azure’s latest H- and N-series Virtual Machines to meet performance requirements for your most complex workloads.

So how do you start? It’s straightforward. Connect your network to Azure via ExpressRoute, determine which VMs you will use, and coordinate processes using CycleCloud or Batch—voila, your burstable HPC environment is ready to go. All you need to do is feed it data. And that’s the sticking point. This is where you need the HPC Cache service.
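
To make the Batch piece concrete, here is a minimal sketch using the azure-batch Python SDK that provisions a pool of HPC-class nodes; the account details, VM size, and image are placeholders you would swap for your own (and constructor argument names can vary across SDK versions):

from azure.batch import BatchServiceClient
from azure.batch import models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

creds = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
client = BatchServiceClient(creds, batch_url="https://<account>.<region>.batch.azure.com")

# Define a small pool of RDMA-capable H-series nodes for an MPI-style workload.
pool = batchmodels.PoolAddParameter(
    id="hpc-burst-pool",
    vm_size="Standard_H16r",  # example HPC SKU; pick one that fits your job
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="OpenLogic", offer="CentOS-HPC", sku="7.4", version="latest"
        ),
        node_agent_sku_id="batch.node.centos 7",
    ),
    target_dedicated_nodes=4,
)
client.pool.add(pool)

From there, the Batch jobs and tasks you schedule onto the pool can mount the HPC Cache namespace for their data.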

Use HPC Cache to ensure fast, consistent data access

Most organizations recognize the benefits of using cloud: a burstable HPC environment can give you more analytic capacity without forcing new capital investments. And Azure offers additional pluses, letting you take advantage of your current schedulers and other toolsets to ensure deployment consistency with your on-premises environment.

But here’s the catch when it comes to data. Your libraries, applications, and location of data may require the same consistency. In some circumstances, a local analytic pipeline may rely on POSIX paths that must be the same whether running in Azure or locally. Data may be linked between directories, and those links may need to be deployed in the same way in the cloud. The data itself may reside in multiple locations and must be aggregated. Above all else, the latency of access must be consistent with what can be realized in the local HPC environment.

To understand how the HPC Cache works to address these requirements, consider it an edge cache that provides low-latency access to POSIX file data sourced from one or more locations. For example, a local environment may contain a large HPC cluster connected to a commercial NAS solution. HPC Cache enables Azure Virtual Machines, containers, or machine learning routines operating across a WAN link to access that NAS solution. The service accomplishes this by caching client requests (including those from the virtual machines) and ensuring that subsequent accesses of that data are serviced by the cache rather than by re-accessing the on-premises NAS environment. This lets you run your HPC jobs at a performance level similar to what you could achieve in your own data center. HPC Cache also lets you build a namespace consisting of data located in multiple exports across multiple sources while displaying a single directory structure to client machines.

HPC Cache provides a Blob-backed cache (we call it Blob-as-POSIX) in Azure as well, facilitating migration of file-based pipelines without requiring that you rewrite applications. For example, a genetic research team can load reference genome data into the Blob environment to further optimize the performance of secondary-analysis workflows. This helps mitigate any latency concerns when you launch new jobs that rely on a static set of reference libraries or tools.

Azure HPC Cache Architecture

HPC Cache Benefits

Caching throughput to match workload requirements

HPC Cache offers three SKUs: up to 2 gigabytes per second (GB/s), up to 4 GB/s, and up to 8 GB/s throughput. Each of these SKUs can service requests from tens to thousands of VMs, containers, and more. Furthermore, you choose the size of your cache disks to control your costs while ensuring the right capacity is available for caching.

Data bursting from your datacenter

HPC Cache fetches data from your NAS, wherever it is. Run your HPC workload today and figure out your data storage policies over the longer term.

High-availability connectivity

HPC Cache provides high-availability (HA) connectivity to clients, a key requirement for running compute jobs at larger scales.

Aggregated namespace

The HPC Cache aggregated namespace functionality lets you build a namespace out of various sources of data. This abstraction of sources makes it possible to run multiple HPC Cache environments with a consistent view of data.

Lower-cost storage, full POSIX compliance with Blob-as-POSIX

HPC Cache supports Blob-based, fully POSIX-compliant storage. Using the Blob-as-POSIX format, HPC Cache maintains full POSIX support, including hard links. If you need this level of compliance, you can get full POSIX at Blob price points.

Start here

The Azure HPC Cache service is available today. For best results, contact your Microsoft team or related partners—they’ll help you build a comprehensive architecture that optimally meets your specific business objectives and desired outcomes.

Our experts will be attending SC19, the conference on high-performance computing, in Denver, Colorado, ready and eager to help you accelerate your file-based workloads in Azure!
Source: Azure

Democratizing agriculture intelligence: introducing Azure FarmBeats

For an industry that started 12,000 years ago, there is a lot of unpredictability and imprecision in agriculture. To be predictable and precise, we need to align our actions with insights gathered from data. Last week at Microsoft Ignite, we launched the preview of Azure FarmBeats, a purpose-built, industry-specific solution accelerator built on top of Azure to enable actionable insights from data.

With AgriTechnica 2019 starting today, more than 450,000 attendees from 130 countries are gathering to experience innovations in the global agriculture industry. We wanted to take this opportunity to share more details about Azure FarmBeats.

Azure FarmBeats is a business-to-business offering available in Azure Marketplace. It enables aggregation of agriculture datasets across providers and generation of actionable insights by building artificial intelligence (AI) or machine learning (ML) models based on fused datasets. So, agribusinesses can focus on their core value-add rather than the undifferentiated heavy lifting of data engineering.

Figure 1: Overview of Azure FarmBeats

With the preview of Azure FarmBeats you can:

Assess farm health using vegetation index and water index based on satellite imagery.
Get recommendations on how many sensors to use and where to place them.
Track farm conditions by visualizing ground data collected by sensors from various vendors.
Scout farms using drone imagery from various vendors.
Get soil moisture maps based on the fusion of satellite and sensor data.
Gain actionable insights by building AI or ML models on top of fused datasets.
Build or augment your digital agriculture solution by providing farm health advisories.

As an example, here is how a farm populated with data appears in Azure FarmBeats:

Figure 2: Boundary, sensor locations, and sensor readings for a farm

Figure 3: Drone imagery and model-generated precision maps (soil moisture, sensor placement)

For a real-world example of how it works, take a look at our partnership with the United States Department of Agriculture (USDA). In a pilot, USDA is using Azure FarmBeats to collect data from multiple sources, such as sensors, drones, and satellites, and feeding it into cloud-based AI models to get a detailed picture of conditions on the farm.

Azure FarmBeats includes the following components:

 Datahub: An API layer that enables aggregation, normalization, and contextualization of various agriculture datasets across providers. You can leverage the following data providers:

Available now:

Sensor: Davis Instruments, Teralytic
Drone imagery: DJI, EarthSense, senseFly, SlantRange

Coming soon: DTN, Pessl

Datahub is designed as an API platform and we are working with many more providers – sensor, satellite, drone, weather, farm equipment – to integrate with FarmBeats, so you have more choice while building your solution.

Accelerator: A sample solution, built on top of Datahub, that jumpstarts your user interface (UI) and model development. This web application leverages APIs to demonstrate visualization of ingested sensor data as charts and visualization of model output as maps. For example, you can use this to quickly create a farm and easily get a vegetation index map or a sensor placement map for that farm.
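
Because Datahub is an API layer, your application code talks to it over REST. The sketch below is purely illustrative: the host name, the /Farms route, and the response shape are hypothetical stand-ins, so consult the Azure FarmBeats documentation for the actual routes and the Azure Active Directory authentication flow:

import requests

DATAHUB_URL = "https://<your-farmbeats-datahub>"  # hypothetical host
ACCESS_TOKEN = "<azure-ad-access-token>"  # obtained via your AAD app registration

# List farms registered in Datahub (illustrative route and response shape).
resp = requests.get(
    f"{DATAHUB_URL}/Farms",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
for farm in resp.json().get("items", []):
    print(farm.get("id"), farm.get("name"))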

While this preview is the culmination of years of research and close collaboration with more than a dozen major agriculture companies, it is just the beginning. It would not have been possible without the early feedback and validation from these organizations, and we take this opportunity to extend our sincere gratitude.

Azure FarmBeats is offered at no additional charge; you pay only for the Azure resources you use. You can get started by installing it from Azure Marketplace in the Azure portal. In addition, you can:

Get and stay informed with our documentation.
Seek help by posting a question on our support forum.
Provide feedback by posting or voting for an idea on our feedback forum.

With Azure FarmBeats preview, we are pioneering a cloud platform to empower every person and every organization in agriculture to achieve more, by harnessing the power of IoT, cloud, and AI. We are delighted to have you with us on this global transformational journey and look forward to your feedback on the preview.
Source: Azure

Sharing the DevOps journey at Microsoft

Today, more and more organizations are focused on delivering new digital solutions to customers and finding that the need for increased agility, improved processes, and collaboration between development and operation teams is becoming business-critical. For over a decade, DevOps has been the answer to these challenges. Understanding the need for DevOps is one thing, but the actual adoption of DevOps in the real world is a whole other challenge. How can an organization with multiple teams and projects, with deeply rooted existing processes, and with considerable legacy software change its ways and embrace DevOps?

At Microsoft, we know something about these challenges. As a company that has been building software for decades, Microsoft consists of thousands of engineers around the world who deliver many different products. From Office to Azure to Xbox, we also found we needed to adapt to a new way of delivering software. The new era of the cloud unlocks tremendous potential for innovation to meet our customers’ growing demand for richer and better experiences—while our competition is not slowing down. The need to accelerate innovation and to transform how we work is real and urgent.

The road to transformation is not easy, and we believe the best way to navigate this challenging path is by following in the footsteps of those who have already walked it. That is why we are excited to share our own DevOps journey at Microsoft, with learnings from teams across the company that have transformed through the adoption of DevOps.


More than just tools

An organization’s success depends on providing engineers with the best tools and latest practices. At Microsoft, the One Engineering System (1ES) team drives various efforts to help teams across the company become high performing. The team initially focused on tool standardization and saw some good results—source control issues decreased, and build times and build reliability improved. But over time it became clear that a focus on tooling is not enough; to help teams, 1ES had to focus on culture change as well. Approaching culture change can be tricky: do you start with quick wins, or try to make a fundamental change at scale? What is the right engagement model for teams of different sizes and maturity levels? Learn more about the experimental journey of the One Engineering System team.

Redefining IT roles and responsibilities

The move to the cloud can challenge the definitions of responsibilities in an organization. As development teams embrace cloud innovation, IT operations teams find that the traditional models of ownership over infrastructure no longer apply. The Manageability Platforms team in the Microsoft Core Service group (previously Microsoft IT), found that the move to Azure required rethinking the way IT and development teams work together. How can the centralized IT model be decentralized so the team can move away from mundane, day-to-day work while improving the relationship with development teams? Explore the transformation of the Manageability Platforms team.

Streamlining developer collaboration

Developer collaboration is a key component of innovation. With that in mind, Microsoft open-sourced the .NET Framework to invite the community to collaborate and innovate on .NET. As the project grew over time, its scale and complexity became apparent. The project spanned many repositories, each with its own structure and its own continuous integration (CI) system, making it hard for developers to move between repositories. The .NET infrastructure team at Microsoft decided to invest in streamlining developer processes. The team approached that challenge by standardizing repo structure, sharing tooling, and converging on a single CI system, so that both internal and external contributors to the project would benefit. Learn more about the investments made by the .NET infrastructure team.

A journey of continuous learning

DevOps at Microsoft is a journey, not a destination. Teams adapt, try new things, and continue to learn how to change and improve. As there is always more to learn, we will continue to share the transformation stories of additional teams at Microsoft in the coming months. As an extension of this continuous internal learning journey, we invite you to join us and learn how to embrace DevOps, empowering your teams to build better solutions faster and deliver them to happier customers.

Resources

The DevOps journey at Microsoft
What is DevOps?
DevOps Solutions on Azure

Azure. Invent with purpose.
Source: Azure

10 user experience updates to the Azure portal

We’re constantly working to improve your user experience in the Azure portal. Our goal is to offer you a productive and easy-to-use single pane of glass where you can build, manage, and monitor your Azure services, applications, and infrastructure. In this post, I’d like to share the highlights of our latest experience improvements, including:

Improved portal home experience: increased focus and clarity to bring services and instances that are relevant to you front and center.
New service cards: hover cards that present contextual information relevant to each service.
Enhanced service browsing experience: simplified offering navigation by progressively disclosing services.
Extended Microsoft Learn integration: contextual integration of free training in key parts of the experience.
Improved instance browsing experience: updated experience for more than 70 services with improved performance, better filtering and sorting options, grouping, and the ability to export your resource lists to a CSV file.
Improved Azure Resource Graph experience: re-use and share your queries via Resource Graph saved queries.
Automatic refresh in Azure dashboards: set automatic refresh intervals for your dashboards.
Improved service icons: new icons redesigned for better visual consistency and fewer distractions.
Simplified settings panel: better separation between general settings and localization.
New landing page for Azure Mobile application: a new landing page that brings important information together in one view.

Improved portal home experience

We have improved the Azure portal home page to increase focus and clarity and to make things that are important to you easily accessible.

Figure 1 – simplified Azure portal home.

We’ve organized these into differentiated sections for ease of use:

Services and resources (dynamic): the top section has dynamic content that adjusts based on your usage without requiring any additional customization. The more you use the portal, the more it adapts to you!
Common entry points and useful info (static): the lower section contains static content with common entry points that provide quick access to main navigation flows. These are always there, enabling users to develop muscle memory through repeated usage.

Figure 2 – sections of the home page.

The Azure services section provides quick access to the Azure Marketplace, a list of eight of the most-used Azure services, and a link to browse the entire Azure offering. The list of services is populated by default with some of our most popular services and gets automatically updated with your most recently used services. The Recent resources section shows a list of your recently used resources. Both lists get updated as you use the product. Our goal is to bring relevant services and instances front and center without requiring customization. The more you use the product, the more useful it gets for you! The rest of the sections are static, providing important points of reference for navigation and access to key Azure products, services, content, and training.

The overall home experience has been streamlined by hiding the left navigation bar under an always present menu button in the top navigation bar:

Figure 3 – The menu button

The main motivation for this change is to improve focus, reduce distractions and redundancy, and enable more immersive experiences. Before this change, when you were immersed in a workload in the portal, you always had two vertical menus side by side: the left navigation bar and the menu for the experience. The left navigation bar, with all its functionality including favorites, is still available through the menu button in the top bar, always only one click away.

Figure 4 – The new experience allows for more focus.

If you prefer the old visual, having the left navigation always present, you can always bring it back using the Portal Settings panel.

New service cards

We have added hover cards associated with each service that show contextual information and provide direct access to some of the most common workflows. These hover cards are displayed after the cursor rests for about a second on a service tile. We used the same interaction pattern and design that Outlook uses for identities (users and groups), which is well established with our customer base.

Figure 5 – hover card for virtual machines.

The cards expose relevant contextual information and actions for a service, including:

Create an instance: this provides quick access to a very common flow, short-circuiting the intermediate screens to launch the creation.
Browse instances: browse the full list of instances of that service.
Recently used: the last three recently used instances of that service, providing direct contextual access.
Microsoft Learn content: specialized free training curated for that service. The curation has been done by the Microsoft Learn team based on usage data and customer feedback.
Links to documents: key documents to learn or use the product (quick starts, technical docs, pricing).
Free offerings available: if the service has free options available, surface them.

Figure 6 – Anatomy of the card

The cards help improve multiple aspects of the experience, including more efficient customer journeys, better discoverability, and contextualized information, all presented in the context of one service. The card also helps customers of all levels of expertise: while new customers can benefit from Microsoft Learn content and free offerings, advanced customers have a faster path to create instances or access their recently used instances of that service.

The card does not show only on the home page; it is available in every place we display a service, such as the left navigation bar and the All services list.

Extended Microsoft Learn integration

Microsoft Learn provides official high-quality free learning material for Microsoft technologies. In this portal update we have introduced several contextual integration points:

Service browsing: contextual integration at the service category level (compute, storage, web, etc.)
Service cards: contextual integration at the service level (virtual machine, Cosmos DB, etc.) available in Azure home page, left navigation, and service browsing experience.
Azure QuickStart center: integration of the most popular training modules on the landing page
Azure home: direct access to the main Microsoft Learn entry point

Moving forward, the Azure portal and Microsoft Learn integration will continue to grow, to help you improve your Azure journey!

Enhanced service browsing experience

Azure is big and gets bigger every day. Navigating through Azure’s offering in the portal can be intimidating and challenging due to the vast set of available services. To make this easier, we’ve made the following updates:

Improved global search: improved performance and functionality when searching for services in the global search box in the top bar of the portal. This improved search is also always present and available in your portal session.
Improved service browsing experience: improved the All services experience adding an overview category supporting progressive disclosure of services, reducing visual clutter, and adding contextual Microsoft Learn content.

For service browsing, we introduced an overview category with the goal of progressively disclosing information.

Figure 7 – progressive disclosure of information and better discoverability

The new Overview category presents a list of 15 of Azure’s most popular services, curated Microsoft Learn training content, and access to key functionality like Azure QuickStart center and free offerings.

If the service that you are looking for is not available on this screen, you can use the service search functionality at the top left, or you can browse through the different categories available on the left of the screen. When displaying a category, we now surface contextual, free Microsoft Learn content to assist you in your Azure learning journey.

Figure 8 – service category with contextual and free Microsoft Learn integration. The training offered in this category is contextual and related to databases in this case.

Improved instance browsing experience

The instance browsing experience, going through the list of instances of a service, is one of the most common entry points for customers using the portal. We are introducing an updated experience that leverages the power of Azure Resource Graph to provide improved performance, better filtering and sorting options, better grouping, and the ability to export your resource lists to a CSV file.

Figure 9 – improved resource browsing experience

As of this month, this experience will be available for more than 70 services and over the next few months it will be rolled out across the entire platform.

Improved Azure Resource Graph experience

The Azure Resource Graph Explorer available in the portal enables you to write queries and create dashboards using the full power of Azure Resource Graph. Here is a video that shows how to use Resource Graph to write queries and create an inventory dashboard for your Azure subscriptions.

We have now introduced Azure Resource Graph queries in the Azure portal as a new top-level resource. You can save any Kusto Query Language (KQL) query as a resource in your Azure subscription. Like any other resource, you can share it with colleagues, set permissions, check activity logs, and tag it.

Figure 10 – Azure Graph Queries
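
Saved or not, any Resource Graph query can also be run programmatically. Here is a minimal sketch using the azure-mgmt-resourcegraph Python package (the result shape can vary slightly across SDK versions):

from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Count resources by type across a subscription -- the kind of KQL query you
# might save and share as a Resource Graph query resource.
request = QueryRequest(
    subscriptions=["<your-subscription-id>"],  # placeholder
    query="Resources | summarize count() by type | order by count_ desc",
)
result = client.resources(request)
for row in result.data:
    print(row)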

Automatic refresh in Azure Dashboards

We have added automatic refresh to Azure dashboards, allowing you to refresh your dashboards automatically at a chosen time interval.

Figure 11 – Configuring automatic refresh

Improved service icons

We’ve updated all of the service icons in the Azure portal with a more consistent and modern look. All these icons have been designed together as a family to provide better visual consistency and reduce distractions.

Figure 12 – Improved icons

Simplified settings panel

The settings panel has been simplified. Many customers could not find the “Language & region” settings in the previous design and were asking us for capabilities that were already available in the portal; since the portal supports 18 languages and dozens of regional formats, this was a common source of confusion. The new design separates the general settings from the Language & region settings.

Figure 13 – separation of general and localization settings

New landing page for Azure Mobile application

The Azure mobile app enables you to stay connected, informed, and in control of your Azure assets while on the go. The app is available for iOS and Android devices.

We have added a brand-new landing screen to the Azure Mobile App that brings all important information together as soon as you open the application. The new Home experience is composed of multiple cards with support for:

Azure services
Recent resources
Latest alerts
Service Health
Resource groups
Favorites

The home view is fully customizable: you can decide which sections to show and in which order to show them.

Figure 14 – new home in the Azure Mobile App

If you have not tried the Azure mobile app yet, be sure to give it a try.

Let us know what you think

We’ve gone through a lot of new capabilities and still did not cover everything that is coming in this release! The team is hard at work improving the experience and is always eager to get your feedback and learn how we can make your experience better.

Azure. Invent with purpose.
Source: Azure

Azure SQL Data Warehouse is now Azure Synapse Analytics

On November 4, we announced Azure Synapse Analytics, the next evolution of Azure SQL Data Warehouse. Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs.

With Azure Synapse, data professionals can query both relational and non-relational data using the familiar SQL language. This can be done using either serverless on-demand queries for data exploration and ad hoc analysis or provisioned resources for your most demanding data warehousing needs. A single service for any workload.

In fact, it’s the first and only analytics system to have run all the TPC-H queries at petabyte scale. Current SQL Data Warehouse customers can continue running their existing data warehouse workloads in production today with Azure Synapse and will automatically benefit from the new preview capabilities when they become generally available. You can sign up to preview new features such as serverless on-demand query, Azure Synapse studio, and Apache Spark™ integration.


Taking SQL beyond data warehousing

A cloud-native, distributed SQL processing engine is at the foundation of Azure Synapse and is what enables the service to support the most demanding enterprise data warehousing workloads. This week at Ignite we introduced a number of exciting features that make data warehousing with Azure Synapse easier and allow organizations to use SQL for a broader set of analytics use cases.

Unlock powerful insights faster from all data

Azure Synapse deeply integrates with Power BI and Azure Machine Learning to drive insights for all users, from data scientists coding with statistics to business users with Power BI. And to make all types of analytics possible, we’re announcing native and built-in prediction support, as well as runtime-level improvements to how Azure Synapse handles streaming data, Parquet files, and PolyBase. Let’s dive into more detail:

With the native PREDICT statement, you can score machine learning models within your data warehouse—avoiding the need for large and complex data movement. The PREDICT function (available in preview) relies on an open model framework and takes user data as input to generate predictions. Users can convert existing models trained in Azure Machine Learning, Apache Spark™, or other frameworks into an internal format representation without having to start from scratch, accelerating time to insight.
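
As a rough illustration, scoring with PREDICT is just T-SQL, so you can invoke it from any SQL client. In this Python sketch via pyodbc, the connection string, table, model storage, and output column are illustrative placeholders; check the Azure Synapse PREDICT documentation for the exact supported syntax:

import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<your-server>.database.windows.net;"  # placeholder
    "Database=<your-database>;UID=<user>;PWD=<password>;"
)

# Score rows in-database with a previously registered model (illustrative).
sql = """
DECLARE @model VARBINARY(MAX) =
    (SELECT model FROM dbo.Models WHERE model_name = 'churn');
SELECT d.CustomerId, p.Score
FROM PREDICT(MODEL = @model, DATA = dbo.Customers AS d)
WITH (Score FLOAT) AS p;
"""
for customer_id, score in conn.cursor().execute(sql):
    print(customer_id, score)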

We’ve enabled direct streaming ingestion support and the ability to execute analytical queries over streaming data. Capabilities such as joins across multiple streaming inputs, aggregations within one or more streaming inputs, transformation of semi-structured data, and multiple temporal windows are all supported directly in your data warehousing environment (available in preview). For streaming ingestion, customers can integrate with Event Hubs (including Event Hubs for Kafka) and IoT Hub.

We’re also removing the barrier that inhibits securely and easily sharing data inside or outside your organization with Azure Data Share integration for sharing both data lake and data warehouse data.

By using new ParquetDirect technology, we are making interactive queries over the data lake a reality (in preview). It’s designed to access Parquet files with native support directly built into the engine. Through improved data scan rates, intelligent data caching, and columnstore batch processing, we’ve improved PolyBase execution by over 13x.

Workload isolation

To support customers as they democratize their data warehouses, we are announcing new features for intelligent workload management. The new Workload Isolation functionality allows you to manage the execution of heterogeneous workloads while providing flexibility and control over data warehouse resources. This leads to improved execution predictability and enhances the ability to satisfy predefined SLAs.

COPY statement

Analyzing petabyte-scale data requires ingesting petabyte-scale data. To streamline the data ingestion process, we are introducing a simple and flexible COPY statement. With only one command, Azure Synapse now enables data to be seamlessly ingested into a data warehouse in a fast and secure manner.

This new COPY statement enables using a single T-SQL statement to load data, parse standard CSV files, and more.

COPY statement sample code:

COPY INTO dbo.[FactOnlineSales] FROM 'https://contoso.blob.core.windows.net/Sales/'

Safekeeping for data with unmatched security

Azure has the most advanced security and privacy features in the market. These features are built into the fabric of Azure Synapse, such as automated threat detection and always-on data encryption. And for fine-grained access control businesses can ensure data stays safe and private using column-level security, native row-level security, and dynamic data masking (now generally available) to automatically protect sensitive data in real time.

To further enhance security and privacy, we are introducing Azure Private Link. It provides a secure and scalable way to consume deployed resources from your own Azure Virtual Network (VNet). A secure connection is established using a consent-based call flow. Once established, all data that flows between Azure Synapse and service consumers is isolated from the internet and stays on the Microsoft network. There is no longer a need for gateways, network address translation (NAT) devices, or public IP addresses to communicate with the service.

Get started today

Businesses can continue running their existing data warehouse workloads in production today with generally available features on Azure Synapse.

Email the team to nominate yourself to try the preview features announced in this blog.
Visit the Azure Synapse Analytics page to learn more.
Get started with a free Azure Synapse Analytics account.
Register for the live virtual event with the Azure Synapse Analytics team.

Azure. Invent with purpose.

Source: Azure

What’s new with Azure Monitor

At Microsoft Ignite 2018, we shared our vision to bring together infrastructure, application, and network monitoring into one unified offering and provide full-stack monitoring for your applications. We have since made rapid strides toward delivering that reality to our customers: consolidating our logs, metrics, and alerts platforms; integrating existing capabilities such as Application Insights and Log Analytics; adding new monitoring capabilities for containers and virtual machines; and contributing back to the community through open-source projects such as OpenTelemetry. In this blog, I'll share the newest enhancements from Azure Monitor at Microsoft Ignite, including four examples of how we continue to build a seamless, integrated monitoring solution that works well for cloud-native and legacy workloads and is cost-effective. Be sure to read the full blog post to get a list of all the exciting enhancements.

Monitor containers anywhere

Customers love the convenience of the out-of-the-box monitoring that Azure Monitor for containers provides for all their Azure Kubernetes Service (AKS) clusters. But you may also have Kubernetes clusters running outside AKS. For customers who have hybrid environments, we are now launching the ability to monitor Kubernetes clusters on-premises and on Azure Stack (with AKS Engine) in preview. Just install the container agent, and you can create alerts and get insights into the performance of your on-premises workloads in the Azure portal, along with your AKS workloads. Learn more about hybrid Kubernetes monitoring.

We are also making the popular Prometheus integration generally available. Azure Monitor can now scrape your Prometheus metrics and store them on your behalf, without you having to operate your own Prometheus collection and storage infrastructure. We also have new Grafana templates for you to visualize all the performance data that is collected from your Kubernetes clusters. Learn more about the Prometheus integration and Grafana templates.
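
Once scraped, Prometheus metrics land in Azure Monitor Logs and can be queried with KQL. Here is a minimal sketch with the azure-monitor-query Python package, assuming container insights writes your Prometheus metrics to the InsightsMetrics table (the default for this integration):

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Average each scraped Prometheus metric over the last hour.
kql = """
InsightsMetrics
| where Namespace == "prometheus"
| summarize avg(Val) by Name
| top 10 by avg_Val desc
"""
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=kql,
    timespan=timedelta(hours=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)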

Troubleshooting network issues faster

Monitoring a typical cloud network containing application gateways, VPN connections, virtual networks, etc., is a time-consuming activity. To troubleshoot an issue, you need to know the specific networking resources that support your application and scan for the health of these resources across multiple subscriptions and resource groups.

The Network Insights preview in Azure Monitor provides a single dashboard that gives you visibility into network topology, dependencies, health, and other key metrics for related network resources. The insights are derived from data that’s available in Azure Monitor today, so no additional setup or configuration is required.

With Network Insights, you have visibility into the health of your network across all of your subscriptions. Intuitive search and detailed topology maps enable faster drill-downs, help localize networking issues, and suggest remediation in a matter of minutes. Learn more about Network Insights.

Work better and collaborate with workbooks

We've gotten great feedback from customers on Azure Monitor workbooks because they give you a single tool that can combine text, analytic queries, metrics, and parameters into a rich interactive report that you can share and collaborate on with your team members.

We have seen customers use workbooks in several ways, including exploring the usage of an app, going through a root cause analysis, putting together an operational playbook, and more. We are now making workbooks generally available. Since the launch in preview, we have added support for a number of new data sources, including Azure Data Explorer, Azure Resource Graph, Azure Monitor Logs, metrics, and alerts, and we have added visualization options such as charts, grids, tiles, honeycombs, and maps. The Azure Monitor workbook platform now forms the basis of new monitoring experiences in Azure services such as Azure Sentinel, storage accounts, Azure Cosmos DB, Azure Active Directory, and SAP HANA. Learn more about Azure Monitor workbooks.

In addition to the highlights of the innovation that we are driving above, here are even more detailed new capabilities we're delivering today:

 New agent and additions to profiling and tracing capabilities in Application Insights: For customers who have ASP.NET applications hosted on Azure Virtual Machines (VMs) running IIS, we are adding a new “codeless” onboarding method that uses an agent and does not require access to the code. Learn more.

We've added the ability to specify CPU and memory thresholds for the Application Insights Profiler, so you have better control over when to collect traces. Learn more.
We've also added a source code view (via decompilation) in Application Insights Snapshot Debugger to allow you to quickly diagnose the failing code.

 Application change analysis enhancements: We have added a lot of features for application change analysis to help you scale. We have introduced the ability to turn on application change analysis at an App Service plan level, you can now see Azure Resource Manager changes for any resource, and there are richer diagnostics for common scenarios (such as VMs + VNet, SQL Server, and storage). We also added an impact analysis feature to see downstream dependencies for a change, and we revamped the user experience. Learn more.
 Traffic Analytics accelerated processing: The new accelerated processing option in Traffic Analytics allows you to process NSG Flow logs at 10-minute intervals. Learn more.
 Live container metrics and live deployments (preview): We are adding the ability to see live performance metrics and live deployments in your AKS cluster. Together with the live events and live logs features, this gives you a near real-time performance and health view of your AKS cluster so you can troubleshoot issues faster.
 Log integrations: Using the new subscription diagnostic settings, you can now stream every type of activity log for your subscription to Azure Monitor Logs, Event Hubs, and storage, and you no longer need subscription log profiles or the Log Analytics Activity Log connector. In addition, you can now export log data from services such as Azure App Service and Azure Storage accounts directly to Azure Monitor. These features are available for free while in preview.
 Azure Monitor for Azure Cosmos DB: You can now view usage, failures, capacity, throughput, and operations for your Azure Cosmos DB accounts across your subscriptions. You can see rollups at the subscription level, the Azure Cosmos DB account level, or the individual container level, and then drill through to the resource for further troubleshooting.

Our customer feedback has been instrumental in shaping these features, and we hope you'll keep the feedback coming. If you have any questions or suggestions, reach out to our Tech Community forum.

Azure. Invent with purpose.
Source: Azure

Accelerating customer success with Azure migration

This blog post was co-authored by Jeremy Winter, Partner Director and Tanuj Bansal, Senior Director for Microsoft Azure.

At Microsoft Ignite 2018, we shared best practices on how to move to the cloud and why Azure is the best destination for all your apps, data, and infrastructure. Since then, we’re happy to share that a number of customers have joined us on Azure—H&R Block, Albertsons, Devon Energy, and Carlsberg Group, just to name a few. Azure has helped these customers drive innovation, enhance their security posture, and reduce costs with unique offers such as Azure Hybrid Benefit.

At this week’s Microsoft Ignite event in Orlando, we shared the approach these customers took, along with more news, in Azure migration sessions and one-on-one architecture review sessions with Azure engineers.

In this blog, we want to share some of the exciting news we shared at Microsoft Ignite.

Accelerating customer success: Azure Migration Program (AMP)

Since its launch in July, AMP has seen an enthusiastic reception, with more than a thousand customers entering the program for migration projects ranging across Windows Server, SQL Server, and Linux workloads. To recap, AMP offers customers:

Technical skill building with foundational, workload, migration, and role-specific courses to build Azure skills for long-term success to enable organizational readiness.
Curated, step-by-step guidance from Microsoft experts and specialized migration partners based on our Cloud Adoption Framework methodology.
Free Azure migration tools including Azure Migrate to assess and migrate workloads, and Azure Cost Management to optimize cloud costs.
Unique offers to reduce migration costs, including Azure Hybrid Benefit and free Extended Security Updates for Windows Server 2008 and SQL Server 2008.

“We are on a multi-year transformation journey, and cloud migration is an important first step. Azure Migration Program offered the right mix of training, best practice guidance, tooling, and specialized partners to best meet our needs. Importantly, Microsoft was prepared to work hand in hand with us and showed deep commitment to our success.”

– Marc Gunter, Vice President of Infrastructure, Planning and Engineering, Canadian Imperial Bank of Commerce, CIBC

AMP engagements begin by asking and addressing questions on organizational leadership rather than around technology or product. For example:

Have you identified an executive sponsor?
Have you identified your business, application, and IT team participants?
Have you developed a business case with an initial assessment of your on-premises estate and a total cost of ownership (TCO) analysis?
Have you identified a partner to help you with migration?

Ultimately, the answers to these questions form the basis of a robust migration plan. To help accelerate this step, customers can now use the new self-serve tool, Strategic Migration Assessment & Readiness Tool (SMART). More details are available in this whitepaper.

Check out this video to learn more about Azure Migration Program and apply today. Get prescriptive self-serve guidance at Azure migration center.

New Azure Migrate capabilities–your hub for all things migration

In parallel with our Azure Migration Program efforts, we’ve continued investing in product innovation to improve the migration experience for customers. Azure Migrate is a one-stop hub for all your migration needs across applications, infrastructure, and data; delivering a simplified, end-to-end migration experience, with a choice of Microsoft and partner tools.

Building on our July release, we're excited to announce support for new migration scenarios and several new capabilities described below.

Application migration

Many of you run .NET web applications on-premises that address internal line-of-business and customer-facing scenarios. Based on your feedback, we have streamlined and automated the Azure migration journey for these applications. Azure Migrate now integrates with App Service Migration Assistant to provide a comprehensive experience for migrating .NET applications to Azure App Service. 

New Infrastructure Migration for virtual desktop infrastructure (VDI)

Your organization may require a virtualized desktop experience for reasons such as meeting compliance regulations, securing access to sensitive data, and managing access to corporate data and apps for a mobile workforce. Windows Virtual Desktop provides the best virtualized Office and Windows experience on Azure. We have integrated with Lakeside, a Microsoft partner, to enable assessment of on-premises virtual desktops for migration to Windows Virtual Desktop (WVD) on Azure.

New Server Assessment and Migration Capabilities

Since our acquisition of Movere, we have been hard at work integrating its capabilities into our toolsets. We're pleased to announce that this work is now complete—customers can now consume Movere’s innovative discovery and assessment capabilities from Azure Migrate.

We're also announcing discovery of on-premises physical servers, in addition to the existing VMware and Hyper-V support.

Server Assessment now also provides application discovery, giving you visibility into the applications installed on your on-premises virtual machines, along with their roles, features, and versions, which will help you identify the right migration path for each underlying workload. Application discovery is currently available for VMware virtual machines.

Many of you have been using the dependency visualization capability to identify all the components that make up your application along with their interdependencies. We have now enabled agentless dependency visualization for VMware virtual machines, currently in preview.

Agentless server migration for VMware virtual machines has also graduated from preview to general availability. 

We have significantly streamlined the process of uploading configuration and performance data for your on-premises servers into Azure Migrate. Now you can simply use CSV import-based discovery to upload virtual machine configuration and performance details in CSV format. Once the server inventory is uploaded, you can create assessments on the imported data without having to do appliance-based discovery.
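
As a rough illustration, building that CSV is simple scripting. The column headers below are examples only; download the official import template from the Azure Migrate portal for the exact headers it expects:

import csv

# Inventory gathered from your CMDB or hypervisor (illustrative values).
servers = [
    {"name": "app-server-01", "cores": 8, "memory_mb": 32768, "os": "Windows Server 2012 R2"},
    {"name": "db-server-01", "cores": 16, "memory_mb": 65536, "os": "CentOS 7.5"},
]

with open("server_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Example headers; the real template defines the required column names.
    writer.writerow(["*Server name", "Cores", "Memory (In MB)", "OS name"])
    for s in servers:
        writer.writerow([s["name"], s["cores"], s["memory_mb"], s["os"]])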

Get started with Azure Migrate, learn more from our documentation, and try our preview features. Visit our UserVoice forum if you would like to provide feedback or learn more about our roadmap.

Azure. Invent with purpose.
Source: Azure

Azure Cognitive Services for building enterprise ready scalable AI solutions

This post is co-authored by Tina Coll, Senior Product Marketing Manager, Azure Cognitive Services and Anny Dow, Product Marketing Manager, Azure Cognitive Services.

Azure Cognitive Services brings artificial intelligence (AI) within reach of every developer without requiring machine learning expertise. All it takes is an API call to embed the ability to see, hear, speak, understand, and accelerate decision-making into your apps. Enterprises have taken these pre-built and custom AI capabilities to deliver more engaging and personalized intelligent experiences. We’re continuing the momentum from Microsoft Build 2019 by making Personalizer generally available, and introducing additional advanced capabilities in Vision, Speech, and Language categories. With many advancements to share, let’s dive right in.

Personalizer: Powering rich user experiences

Winner of this year’s ‘Most Innovative Product’ award at O’Reilly’s Strata Conference, Personalizer is the only AI service on the market that makes reinforcement learning available at scale through easy-to-use APIs. Powered by reinforcement learning, Personalizer gives developers a way to create rich, personalized experiences for their users, even if they do not have deep machine learning expertise.

Giving customers what they want at any given moment is one of the biggest challenges faced by retail, media, and e-commerce businesses today. Whether applying randomized A/B tests or supervised machine learning, businesses struggle to keep up with delivering unique and relevant experiences to each user. This is where Personalizer comes in, exploring new options to stay on top of previously unseen influences on user behavior through a cutting-edge machine learning technique known as reinforcement learning. This technique allows Personalizer to learn from what’s happening in the world in real time and update the underlying algorithm as frequently as every few minutes. The result is a significant improvement in your app's usability and user satisfaction. When Xbox implemented Personalizer on their home page, they saw a 40 percent lift in user engagement.
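
Under the hood, this is a simple Rank/Reward loop over REST. Here is a minimal Python sketch; the endpoint, key, and feature payloads are placeholders, and the reward score you send should reflect whatever user behavior you define as success:

import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# 1) Rank: send the current context plus candidate actions; Personalizer
#    returns the action it predicts will earn the highest reward.
rank_request = {
    "contextFeatures": [{"timeOfDay": "evening", "device": "mobile"}],
    "actions": [
        {"id": "article-a", "features": [{"topic": "sports"}]},
        {"id": "article-b", "features": [{"topic": "politics"}]},
    ],
}
rank = requests.post(
    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_request
).json()
chosen_action, event_id = rank["rewardActionId"], rank["eventId"]

# 2) Reward: after observing what the user did, report a score in [0, 1]
#    so the underlying model keeps learning in near real time.
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 1.0},  # e.g., 1.0 if the user engaged with chosen_action
)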

Form Recognizer: Increase efficiency with automated text extraction and feedback loop

Businesses often rely on a variety of documents that can be hard to read; these documents are not always cleanly printed, and many include handwritten text. Businesses including Chevron use Form Recognizer to accelerate document processing through automatic information extraction from printed forms. This frees their employees to focus on more challenging and higher-value tasks.

Form Recognizer extracts key-value pairs, tables, and text from documents including W-2 tax statements, oil and gas drilling well reports, completion reports, invoices, and purchase orders. Today we are announcing feedback loop support to enable even more accurate data extraction. Users can provide labeled examples of the specific values they want extracted, which allows Form Recognizer to support any type of form, including values without keys, keys under values, tilted forms, photos of forms, and more. Starting with just 10 forms, users can train a model tailored to their use case with high-quality results. A new user experience gets you started quickly: select the values of interest, label a few examples, and train your custom model.

In addition, Form Recognizer can now train a single model without labels across all the different types of forms, and supports training on large datasets and analyzing large documents with the new async API. Customers can thus train a single model for different types of invoices, purchase orders, and more without needing to classify the documents in advance.
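
As a sketch of what training looks like with the Form Recognizer client library for Python (azure-ai-formrecognizer; class and method names follow that SDK and may differ by version), the same call covers both modes: use_training_labels=True trains on your labeled examples, while False trains a single unlabeled model across mixed form types. The endpoint, key, and storage URL are placeholders.

```python
from azure.ai.formrecognizer import FormTrainingClient
from azure.core.credentials import AzureKeyCredential

client = FormTrainingClient("https://<your-resource>.cognitiveservices.azure.com",
                            AzureKeyCredential("<your-key>"))

# SAS URL of the blob container holding your training forms (placeholder).
training_files_url = "https://<storage>.blob.core.windows.net/forms?<sas-token>"

# True = train on your labeled examples (the new feedback loop);
# False = train one model across mixed form types without labels.
poller = client.begin_training(training_files_url, use_training_labels=True)
model = poller.result()
print("Trained model:", model.model_id)
```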

We have also enhanced our pre-built receipt capabilities with accuracy improvements, new fields for tips and receipt types (itemized, credit card slip, gas, parking, other), and line-item extraction that details each item on the receipt. Finally, we have improved the accuracy of both our text recognition, enabling extraction of high-quality text from forms, and our table extraction.
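
For the pre-built receipt model, extraction is a single call. A minimal sketch using the same Python SDK (the receipt URL is a placeholder; field names such as ReceiptType, Tip, and Items follow the documented receipt schema):

```python
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient("https://<your-resource>.cognitiveservices.azure.com",
                              AzureKeyCredential("<your-key>"))

poller = client.begin_recognize_receipts_from_url("https://example.com/receipt.jpg")
for receipt in poller.result():
    for name, field in receipt.fields.items():
        # Includes the new fields, e.g. ReceiptType, Tip, and the Items array.
        print(f"{name}: {field.value} (confidence {field.confidence:.2f})")
```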

Sogeti, part of Capgemini, is harnessing these new Form Recognizer capabilities. As Arun Kumar Sahu, Manager of AI ML at Sogeti, notes:

“We are working on a document classification and predictive solution for one of the largest automobile auction companies in the US, and needed an efficient way to extract information from various automobile related documents (PDF or image). Form Recognizer was quick and easy to train and host, was cost effective, handled different document formats, and the output was amazing. The new labelling features made it very effective to customize key value pair extraction.”

Speech: Enable more natural interactions and accelerate productivity with advanced speech capabilities

Businesses want to be able to modernize and enable more seamless, natural interactions with their customers. Our latest advancements in speech allow customers to do just that.

At Microsoft Ignite 2018, we introduced our neural text-to-speech capability, which uses deep neural networks to enable natural-sounding speech and reduces listening fatigue for users interacting with AI systems. Neural text-to-speech can be used to make interactions with chatbots and virtual assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. We’re excited to build upon these advancements with the Custom Neural Voice capability, which enables customers to build a unique brand voice, starting from just a few minutes of training audio. The Custom Neural Voice capability can enable scenarios such as customer support provided by a company’s branded character, interactive lesson plans or guided museum tours, and voice assistive technologies. The capability also supports generating long-form content, including audiobooks.
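
To illustrate the developer experience, here is a minimal neural text-to-speech sketch with the Speech SDK for Python (azure-cognitiveservices-speech). The voice shown is a standard pre-built neural voice; a Custom Neural Voice deployment would substitute the name of your own deployed voice. The key and region are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>",
                                       region="<your-region>")
# A pre-built neural voice; a Custom Neural Voice uses your deployed voice name.
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async(
    "Welcome back. Chapter one of your audiobook begins now.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized to the default speaker.")
```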

The Beijing Hongdandan Education and Culture Exchange Center is dedicated to creating accessible audio products and improving the lives of people with visual impairments through aids such as audiobooks. Hongdandan is using the Custom Neural Voice capability to produce audiobooks based on the voice of Lina, who lost her sight at the age of 10. Lina is now a trainer at the Hongdandan Service Center, using her voice to teach others who are visually impaired to communicate well.

With the rapid pace at which business is moving today, remembering all the details from your last important meeting and tracking next steps and key deadlines can be a real challenge. Quickly and accurately transcribing calls can help various stakeholders stay on the same page by capturing critical details and making it easy to search and review topics you discussed. In customer support scenarios, being able to hear and understand your customers and keep an accurate record of information is critical for tracking customer requirements and enabling broader analysis.

However, accurately transcribing organization-specific terms such as product names, technical terms, and people’s names poses another barrier. With Custom Speech, you can tailor speech recognition models to your own data so that your unique terms are accurately captured; simply upload your audio to train a custom model. Now, you can also optimize speech recognition for organization-specific terms by automatically generating custom models from your Office 365 data in a secure and compliant fashion. With this opt-in feature, organizations using Office 365 can more accurately transcribe company terminology, whether in internal meetings or on customer calls. The organization-wide language model is built only from conversations and documents in public groups that everyone in the organization can access.
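
Once a Custom Speech model is trained and deployed, pointing the Speech SDK at it is essentially a one-line change: set the endpoint ID of your deployment on the speech configuration. A sketch (the key, region, endpoint ID, and file name are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>",
                                       region="<your-region>")
# Route recognition through your deployed Custom Speech model.
speech_config.endpoint_id = "<your-custom-model-endpoint-id>"

audio_config = speechsdk.audio.AudioConfig(filename="meeting-recording.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

result = recognizer.recognize_once()  # transcribes the first utterance
print(result.text)
```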

Additional new features such as Custom Commands, Custom Speech and Voice containers, Speech Translation with automatic language identification, and Direct Line Speech channel integration with Bot Framework are making it easier to quickly embed advanced speech capabilities into your apps. For more information, visit the Azure Speech Services page.

Language: Extract deeper insights from customer feedback and text documents

A multitude of valuable customer insights is captured today, whether in social media, customer reviews, or discussion forums. The challenge is extracting insights from that data so businesses can act fast to improve customer service and meet the needs of the market. With the Text Analytics Sentiment Analysis capability, businesses can easily detect positive, neutral, negative, and mixed sentiment in content, enabling them to keep an ongoing pulse on customer satisfaction, better engage their customers, and build customer loyalty. The latest release of the Sentiment Analysis capability offers greater accuracy in sentiment scoring, as well as the ability to detect sentiment for both an entire document and its individual sentences.
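
A minimal sketch of document- and sentence-level scoring with the Text Analytics client library for Python (azure-ai-textanalytics; attribute names follow that SDK, and the endpoint and key are placeholders):

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"))

docs = ["The checkout flow was smooth. Delivery, however, took two weeks."]
result = client.analyze_sentiment(docs)[0]

print("Document sentiment:", result.sentiment)  # e.g. 'mixed'
for sentence in result.sentences:
    print(f"  '{sentence.text}' -> {sentence.sentiment}")
```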

Another challenge of extracting information from your data is taking unstructured natural language text and identifying occurrences of entities such as people, locations, organizations, and more. Text Analytics is expanding entity type support to more than 100 named entity types, making it easier than ever to extract meaningful information from raw text and analyze relationships between terms. Additionally, customers can now detect and extract more than 80 kinds of personally identifiable information in English-language text documents.
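
Entity extraction, including the new personally identifiable information (PII) detection, uses the same client. A sketch (method names per the current azure-ai-textanalytics SDK; older versions may expose PII detection differently):

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"))

docs = ["Contact Jane Doe at jane@contoso.com about the Seattle office lease."]

# General named entities: people, locations, organizations, and more.
for entity in client.recognize_entities(docs)[0].entities:
    print(entity.text, "->", entity.category)

# PII-specific detection, e.g. person names and email addresses.
for entity in client.recognize_pii_entities(docs)[0].entities:
    print("PII:", entity.text, "->", entity.category)
```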

We are also adding several new capabilities to Language Understanding Intelligent Service (LUIS) that enable developers to build sophisticated conversational models. The new capabilities handle more complex requests from users; for example, a customer speaking naturally might order ‘two burgers with no onions and replace the buns with lettuce wraps.’ Hierarchical entities and model decomposition give you the advanced ability to build language models that reflect the way humans actually speak. In addition, we are adding more regions and expanding the human languages LUIS supports with the addition of Hindi and Arabic.
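
At runtime, a LUIS app is queried over REST, and the prediction response carries the top intent plus any decomposed entities. A sketch against the v3 prediction endpoint (the app ID, key, and region are placeholders, and the exact response shape can vary by API version):

```python
import requests

APP_ID = "<your-luis-app-id>"  # placeholder
KEY = "<your-key>"             # placeholder
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"

url = f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
params = {"subscription-key": KEY,
          "query": "Two burgers with no onions and replace buns with lettuce wraps"}

prediction = requests.get(url, params=params).json()["prediction"]
print("Top intent:", prediction["topIntent"])
print("Entities:", prediction["entities"])  # hierarchical/decomposed entities
```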

Enterprise Ready: Azure Virtual Network for enhanced data security

One of the most important considerations when choosing an AI service is security and regulatory compliance. Can you trust that your data is processed with the high standards and safeguards you have come to expect from hardened, durable software systems? Azure Cognitive Services offers over 70 certifications. Today we are adding Virtual Network support to Cognitive Services to help ensure maximum security for sensitive data. The service is also being made available in a container that can run in a customer’s Azure subscription or on-premises.
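
Because the client libraries take an endpoint URL, pointing them at a Cognitive Services container running inside your own network is the same code path; only the endpoint changes. An illustrative sketch, assuming a Text Analytics container listening on localhost:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# The endpoint is a container you host (here, local), so document text
# never has to leave your network; the key still ties billing to your
# Azure resource.
client = TextAnalyticsClient(endpoint="http://localhost:5000",
                             credential=AzureKeyCredential("<your-key>"))

result = client.analyze_sentiment(["This stays inside our virtual network."])[0]
print(result.sentiment)
```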

Get started today

We are continuing to enable powerful new intelligent scenarios for our customers that improve their productivity and user experiences. The incredible breadth of services available through Azure Cognitive Services enables you to extract insights from all your data. With these new capabilities, you can accurately extract text from forms using Form Recognizer, analyze and understand that text using Text Analytics and LUIS, and, finally, deliver these insights to your users through a spoken, conversational interface with our speech services.

These milestones illustrate our commitment to make the Azure AI platform suitable for every business scenario, with enterprise-grade tools that simplify application development and industry-leading security and compliance for protecting customers’ data.

Get started today by building your first intelligent application using an Azure free account and learn more about Cognitive Services.

Azure. Invent with purpose.
Source: Azure