Scale action groups and suppress notifications for Azure alerts

In Azure Monitor, defining what to monitor while configuring alerts can be challenging. Customers need to be able to define when actions and notifications should trigger for their alerts and, more importantly, when they shouldn't. The action rules feature for Azure Monitor, available in preview, lets you define actions for your alerts at scale and suppress alerts for scenarios such as maintenance windows.

Let’s take a closer look at how action rules (preview) can help you in your monitoring setup!

Defining actions at scale

Previously, you could define which action groups trigger for your alerts while defining an alert rule. However, the actions that get triggered, whether it's an email that's sent or a ticket created in a ticketing tool, are usually associated with the resource on which the alert is generated rather than the individual alert rule.

For example, for all alerts generated on the virtual machine contosoVM, I would typically want the following.

The same email address to be notified (e.g. contosoITteam@contoso.com)
Tickets to be created in the same ITSM tool

While you could define a single action group such as contosoAG and associate it with each and every alert rule authored on contosoVM, wouldn't it be easier if contosoAG were associated with every alert generated on contosoVM automatically, without any additional configuration?

That's precisely what action rules (preview) allow you to do. They let you define an action group to trigger for all alerts generated on a defined scope, which could be a subscription, resource group, or resource, so that you no longer have to define it for individual alert rules!

Suppressing notifications for your alerts

There are many scenarios where it's useful to suppress the notifications generated by your alerts, such as a planned maintenance window or non-business hours. You could do this by disabling each and every alert rule individually, with complicated logic that accounts for time windows and recurrence patterns, or you can get all of this out of the box with action rules (preview).

Working on the same principle as before, action rules (preview) also allow you to suppress actions and notifications for all alerts generated on a defined scope, which could be a subscription, resource group, or resource, while the underlying alert rules continue to monitor. Furthermore, you can configure both the period and the recurrence of the suppression, all out of the box. With this you can easily set up notification suppression based on your business requirements, anything from suppressing every weekend for a maintenance window to suppressing notifications between 5 PM and 9 AM every day, outside normal business hours.
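To make the recurrence idea concrete, here is a minimal sketch of the kind of schedule logic an action rule encapsulates for you. The function name, defaults, and weekend handling below are purely illustrative assumptions, not Azure's actual implementation:

```python
from datetime import datetime

# Hypothetical sketch of a suppression schedule: recurring weekend windows
# plus a weekday overnight window from 5 PM to 9 AM.
def is_suppressed(ts: datetime, start_hour: int = 17, end_hour: int = 9,
                  suppress_weekends: bool = True) -> bool:
    """Return True if notifications should be suppressed at time `ts`."""
    if suppress_weekends and ts.weekday() >= 5:  # Saturday=5, Sunday=6
        return True
    # The overnight window wraps past midnight, so use OR rather than AND.
    return ts.hour >= start_hour or ts.hour < end_hour

# Wednesday 2 AM falls inside the 5 PM - 9 AM window: suppressed.
print(is_suppressed(datetime(2019, 7, 10, 2, 0)))   # True
# Wednesday noon is inside business hours: not suppressed.
print(is_suppressed(datetime(2019, 7, 10, 12, 0)))  # False
```

With action rules, you configure this period and recurrence declaratively instead of writing and maintaining such logic yourself.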

Filters for more flexibility

While you can easily define action rules (preview) to either author actions at scale or suppress them, action rules come with additional knobs and levers in the form of filters that allow you to fine-tune the specific subset of alerts the action rule acts on.

For example, going back to suppressing notifications during non-business hours: you might still want to receive notifications for alerts with severity zero or one, while the rest are suppressed. In that scenario, you can define a severity filter as part of the action rule so that it does not apply to alerts with severity zero or one, and thus only applies to alerts with severity two, three, or four.
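Conceptually, a severity filter is just a predicate applied before the rule acts. The field names and severity labels in this sketch are hypothetical, not the Azure alert schema:

```python
# Illustrative sketch of a severity filter narrowing which alerts a
# suppression rule acts on; Sev0/Sev1 alerts are excluded so they notify.
def rule_applies(alert: dict, excluded_severities=("Sev0", "Sev1")) -> bool:
    return alert["severity"] not in excluded_severities

alerts = [
    {"name": "cpu-high", "severity": "Sev3"},
    {"name": "vm-down", "severity": "Sev0"},
]
suppressed = [a["name"] for a in alerts if rule_applies(a)]
print(suppressed)  # ['cpu-high']; the Sev0 alert still notifies
```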

Similarly, there are additional filters that provide even more granular definitions from the description of the alert to string matching within the alert’s payload. You can learn more about the supported filters by visiting our documentation, “Action rules (preview).”

Next steps

To get the most out of action rules, we recommend reading the documentation, which goes into more detail about how to configure action rules (preview), example scenarios, best practices, and FAQs. We recommend using action rules (preview) together with action groups that have the common alert schema enabled, to define consistent alert consumption experiences across different alert types. You can also read our documentation, "How to integrate the common alert schema with Logic Apps," which goes into more detail on how to set up an action group with a logic app that uses the common schema and integrates with all your alerts.

We are just getting started with action rules (preview), and we look forward to hearing more from you as we evolve the feature towards general availability and beyond. Keep the feedback coming to azurealertsfeedback@microsoft.com.
Source: Azure

Thanks for 10 years and welcome to a new chapter in SQL innovation

Tomorrow, July 9, 2019, marks the end of extended support for SQL Server 2008 and 2008 R2. These releases transformed the database industry, with all the core components of a database platform built-in at a fraction of the cost of other databases. We saw broad adoption across applications, data marts, data warehousing, and business intelligence. Thank you for the ten amazing years we’ve had together.

But now support for the SQL Server 2008 and R2 versions is ending. Whether you prefer the evergreen SQL of Azure SQL Database managed instance, which never needs to be patched or upgraded, or you need the flexibility and configurability of SQL Server hosted on an Azure Virtual Machine with three free years of Extended Security Updates, Azure provides the best choice of destinations to secure and modernize your database.

Customers are moving critical SQL Server workloads to Azure

Customers like Allscripts, Komatsu, Paychex, and Willis Towers Watson are taking advantage of these innovative destinations and migrating their SQL Server databases to Azure. Danish IT solutions provider KMD needed a home in the cloud for its legacy SQL Server, and had to migrate an 8-terabyte production database quickly and without interruption to its service. Azure SQL Database managed instance allowed KMD to transfer its production data with minimal downtime and no code changes.

“We moved our SQL Server 2008 to Azure SQL Database managed instance, and it has been a great move for us. Not only do we spend less time on maintenance, but we now run a version of SQL that is always current with no need for upgrade and patching.”

– Charlotte Lindahl, Project Manager, KMD

Azure SQL Database offers differentiated value to customers including:

Choose the only cloud with evergreen SQL. Azure SQL Database compatibility levels mean that you can move your on-premises workloads to managed SQL without worrying about application compatibility or performance changes. Customers who move to SQL Database never have to worry about patching, upgrades, or end of support again.
Host larger SQL databases than any other cloud with Azure SQL Database Hyperscale. Hyperscale is a highly scalable service tier for SQL databases that adapts on-demand to your workload's needs. With Hyperscale, databases can achieve the best performance for workloads of unlimited size and scale.
Harness the power of artificial intelligence to monitor and secure your workloads. Trained on millions of databases, the intelligent security and performance features in Azure SQL Database mean consistent and predictable workload performance. In addition to intelligent performance, SQL database customers get peace of mind with automatic threat detection, which identifies unusual log-in attempts or potential SQL injection attacks.
Move to the most economical cloud database for SQL Server, Azure SQL Database managed instance. With the full surface area of your on-premises SQL Server database engine, an anticipated ROI of 212 percent, and a payback period of as little as 6 months1, SQL Database managed instance cements its status as the most cost-effective service for running SQL in the cloud. SEB is a technology company providing software, solutions, and services specializing in managing group benefit solutions and healthcare claims processing. They chose Azure not only for its cost reduction compared to on-premises, but for its more than 90 compliance offerings as well.

"With SQL Server 2008 approaching end of support, SEB needed to migrate two critical business applications that contained sensitive health and PII information. In Azure, we were able to get three years of Extended Security Updates for application VMs, and move the data to Azure SQL Database which significantly decreased both management and infrastructure spend. Azure's compliance certifications for HIPAA, PCI and ISO-27k, as well as data residency in Canada, were critical in meeting our regulatory requirements.”

– Mario Correia, Chief Technology Officer, SEB Inc.

See how Hyperscale in Azure SQL Database is enabling customer innovation.

SQL innovation remains our focus now and in the future

Microsoft continues to invest in innovation with SQL Server 2019 and Azure SQL Database. Our priority is to future-proof your database workloads. Today, I am excited to announce new innovation both on-premises and in the cloud:

Preview of Azure SQL, a simplified portal experience for SQL databases in Azure: Coming soon, Azure SQL will provide a single pane of glass through which you can manage Azure SQL Databases and SQL Server on Azure Virtual Machines. In Azure SQL, customers will be able to register their self-installed (custom image) SQL VMs using the Resource Provider to access benefits like auto-patching, auto-backup, and new license management options.
Preview of SQL Server 2019 big data clusters: Available later this month, the SQL Server 2019 big data clusters preview combines SQL Server with Apache Spark and Hadoop Distributed File System for a unified data platform that enables analytics and artificial intelligence (AI) over all data, relational and non-relational. Early Adoption Program participants like Systems Imagination Inc. are already using big data clusters to solve challenging AI and machine learning problems.

“With SQL Server 2019 big data clusters, we can solve for on-demand big data experiments. We can analyze cancer research data coming from dozens of different data sources, mine interesting graph features, and carry out analysis at scale.”

– Pieter Derdeyn, Knowledge Engineer, Systems Imagination Inc.

Get started with SQL in Azure

As we reach end of support for SQL Server 2008 and 2008 R2, and with just six more months until the end of support for Windows Server 2008 and 2008 R2, there’s never been a better time to secure and modernize these older workloads by moving them to Azure. Secure, manage, and transform your SQL Server workloads with the latest data and AI capabilities:

Find the best destination for your SQL Server 2008 and 2008 R2.
Get started on your Azure migration with the Data Migration Guide. 

 

1The Total Economic Impact™ of Microsoft Azure SQL Database Managed Instance, a Forrester Consulting Study, 10/25/2018. https://azure.microsoft.com/en-us/resources/forrester-tei-sql-database-managed-instance/en-us/
Source: Azure

Azure Data Box Heavy is now generally available

Our customers continue to use the Azure Data Box family to move massive amounts of data into Azure. One of the regular requests that we receive is for a larger capacity option that retains the simplicity, security, and speed of the original Data Box. Last year at Ignite, we announced a new addition to the Data Box family that did just that: a preview of the petabyte-scale Data Box Heavy.

With thanks to those customers who provided feedback during the preview phase, I’m excited to announce that Azure Data Box Heavy has reached general availability in the US and EU!

How Data Box Heavy works

In many ways, Data Box Heavy is just like the original Data Box. You can order Data Box Heavy directly from the Azure portal, and copy data to Data Box Heavy using standard files or object protocols. Data is automatically secured on the appliance using AES 256-bit encryption. After your data is transferred to Azure, the appliance is wiped clean according to National Institute of Standards and Technology (NIST) standards.

But Data Box Heavy is also designed for a much larger scale than the original Data Box. Data Box Heavy’s one petabyte of raw capacity and multiple 40 Gbps connectors mean that a datacenter’s worth of data can be moved into Azure in just a few weeks.
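A back-of-envelope calculation puts that claim in perspective. The numbers below are illustrative assumptions (ideal line rate on four 40 Gbps ports, a guessed 50 percent effective throughput); real-world copy rates vary, and round-trip shipping and Azure-side upload add time on top:

```python
# Rough estimate of raw copy time for a full Data Box Heavy.
def copy_days(terabytes: float, gbps: float, efficiency: float = 0.5) -> float:
    bits = terabytes * 1e12 * 8          # decimal TB to bits
    seconds = bits / (gbps * 1e9 * efficiency)
    return seconds / 86400

# 770 TB usable capacity over 4 x 40 Gbps at 50% efficiency:
print(round(copy_days(770, 4 * 40), 1))  # about 0.9 days of copying
```

The on-site copy itself is quick at these link speeds; the "few weeks" end-to-end figure covers the whole cycle, including shipping and ingestion into Azure.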

Data Box Heavy

1 PB per order
770 TB usable capacity per order
Supports Block Blobs, Page Blobs, Azure Files, and Managed Disk
Copy to 10 storage accounts
4 x RJ45 10/1 Gbps, 4 x QSFP 10/40 Gbps Ethernet
Copy data using standard NAS protocols (SMB/CIFS, NFS, Azure Blob Storage)

Data Box

100 TB per order
80 TB usable capacity per order
Supports Block Blobs, Page Blobs, Azure Files, and Managed Disk
Copy to 10 storage accounts
2 x RJ45 10/1 Gbps, 2 x SFP+ 10 Gbps Ethernet
Copy data using standard NAS protocols (SMB/CIFS, NFS, Azure Blob Storage)

 

Data Box Disk

40 TB per order/8 TB per disk
35 TB usable capacity per order
Supports Block Blobs, Page Blobs, Azure Files, and Managed Disk
Copy to 10 storage accounts
USB 3.1, SATA II or III
Copy data using standard NAS protocols (SMB/CIFS, NFS, Azure Blob Storage)

 

Expanded regional availability

We’re also expanding regional availability for Data Box and Data Box Disk.

Data Box Heavy
US, EU

Data Box
US, EU, Japan, Canada, and Australia

Data Box Disk
US, EU, Japan, Canada, Australia, Korea, Southeast Asia, and US Government

Sign up today

Here’s how you can get started with Data Box Heavy:

Learn more about our family of Azure Data Box products.
Order any Data Box today via the Azure portal.
Review the Data Box documentation for more details.
Interested in finding a partner? See our list of Data Box Partners.

We’ll be at Microsoft Inspire again this year, so stop by our booth to say hello to the team!
Source: Azure

Automate MLOps workflows with Azure Machine Learning service CLI

This blog was co-authored by Jordan Edwards, Senior Program Manager, Azure Machine Learning

This year at Microsoft Build 2019, we announced a slew of new releases in the Azure Machine Learning service focused on MLOps. These capabilities help you automate and manage the end-to-end machine learning lifecycle.

Historically, Azure Machine Learning service’s management plane has been via its Python SDK. To make our service more accessible to IT and app development customers unfamiliar with Python, we have delivered an extension to the Azure CLI focused on interacting with Azure Machine Learning.

While it's not a replacement for the Azure Machine Learning service Python SDK, it is a complementary tool optimized to handle highly parameterized tasks which lend themselves well to automation. With this new CLI, you can easily perform a variety of automated tasks against the machine learning workspace, including:

Datastore management
Compute target management
Experiment submission and job management
Model registration and deployment

Combining these commands enables you to train, register, package, and deploy your model as an API. To help you quickly get started with MLOps, we have also released a predefined template in Azure Pipelines. This template allows you to easily train, register, and deploy your machine learning models. Data scientists and developers can work together to build a custom application for their scenario, built from their own data set.

The Azure Machine Learning CLI is an extension to the Azure CLI, the command-line interface for the Azure platform. This extension provides commands for working with the Azure Machine Learning service from the command line and allows you to automate your machine learning workflows. Some key scenarios include:

Running experiments to create machine learning models
Registering machine learning models for customer usage
Packaging, deploying, and tracking the lifecycle of machine learning models

To use the Azure Machine Learning CLI, you must have an Azure subscription. If you don’t have an Azure subscription, you can create a free account before you begin. Try the free or paid version of Azure Machine Learning service to get started today.

Next steps

Learn more about the Azure Machine Learning service.

Get started with a free trial of the Azure Machine Learning service.
Source: Azure

Highlights from SIGMOD 2019: New advances in database innovation

The emergence of the cloud and the edge as the new frontiers for computing is an exciting direction—data is now dispersed within and beyond the enterprise, on-premises, in the cloud, and at the edge. We must enable intelligent analysis, transactions, and responsible governance for data everywhere, from creation through to deletion (through the entire lifecycle of ingestion, updates, exploration, data prep, analysis, serving, and archival).

Our commitment to innovation is reflected in our unique collaborative approach to product development. Product teams work in synergy with research and advanced development groups, including Cloud Information Services Lab, Gray Systems Lab, and Microsoft Research, to push boundaries, explore novel concepts and challenge hypotheses.

The Azure Data team continues to lead the way in on-premises and cloud-based database management. SQL Server has been identified as the top DBMS by Gartner for four consecutive years.  Our aim is to re-think and redefine data management by developing optimal ways to capture, store and analyze data.

I’m especially excited that this year we have three teams presenting their work: “Socrates: The New SQL Server in the Cloud,” “Automatically Indexing Millions of Databases in Microsoft Azure SQL Database,” and the Gray Systems Lab research team’s “Event Trend Aggregation Under Rich Event Matching Semantics.” 

The Socrates paper describes the foundations of Azure SQL Database Hyperscale, a revolutionary new cloud-native solution purpose-built to address common cloud scalability limits. It enables existing applications to scale elastically, without fixed limits and without the need to rearchitect applications, with storage up to 100 TB.

Its highly scalable storage architecture enables a database to expand on demand, eliminating the need to pre-provision storage resources and providing the flexibility to optimize performance for workloads. The downtime to restore a database or to scale up or down is no longer tied to the volume of data in the database, and database point-in-time restores are very fast, typically minutes rather than hours or even days. For read-intensive workloads, Hyperscale provides rapid scale-out by provisioning additional read replicas instantaneously, without any data copy needed.

Learn more about Azure SQL Database Hyperscale.

Azure SQL Database also introduced a new serverless compute option: Azure SQL Database serverless. Serverless allows compute and memory to scale independently and on-demand based on the workload requirements. Compute is automatically paused and resumed, eliminating the requirements of managing capacity and reducing cost, and is an efficient option for applications with unpredictable or intermittent compute requirements.

Learn more about Azure SQL Database serverless.

Index management is a challenging task even for expert human administrators. The ability to create efficiencies and fully automate the process is of critical significance to business, as discussed in the Data team’s presentation on the auto-indexing feature in Azure SQL Database.

This, coupled with the identification of how to achieve optimal query performance for complex real-world applications, underpins the feature.

The auto-indexing feature is generally available and generates index recommendations for every database in Azure SQL Database. If the customer chooses, it can automatically implement index changes on their behalf and validate these index changes to ensure that performance improves. This feature has already significantly improved the performance of hundreds of thousands of databases.

Discover the benefits of the auto-tuning feature in Azure SQL Database.

In the world of streaming systems, the key challenges are supporting rich event matching semantics (e.g. Kleene patterns to capture event sequences of arbitrary lengths), and scalability (i.e. controlling memory pressure and latency at very high event throughputs). 

The advanced research team focused on supporting this class of queries at very high scale and compiled their findings in "Event Trend Aggregation Under Rich Event Matching Semantics." The key intuition is to incrementally maintain the coarsest-grained aggregates that can support a given query's semantics, enabling control of memory pressure and very good latency at scale. By carefully implementing this insight, the team built a research prototype that achieves a six-orders-of-magnitude speed-up and up to seven orders of magnitude memory reduction compared to state-of-the-art approaches.

Microsoft has the unique advantage of a world-class data management system in SQL Server and a leading public cloud in Azure. This is especially exciting at a time when cloud-native architectures are revolutionizing database management.

There has never been a better time to be part of database systems innovation at Microsoft, and we invite you to explore the opportunities to be part of our team.

Enjoy SIGMOD 2019; it’s a fantastic conference! 
Source: Azure

Azure FXT Edge Filer now generally available

Scaling and optimizing hybrid network-attached storage (NAS) performance gets a boost today with the general availability of the Microsoft Azure FXT Edge Filer, a caching appliance that integrates on-premises network-attached storage and Azure Blob Storage. The Azure FXT Edge Filer creates a performance tier between compute and file storage and provides high-throughput and low-latency network file system (NFS) to high-performance computing (HPC) applications running on Linux compute farms, as well as the ability to tier storage data to Azure Blob Storage.

Fast performance tier for hybrid storage architectures

The availability of the Azure FXT Edge Filer today further integrates the highly performant and efficient technology that Avere Systems pioneered into the Azure ecosystem. The Azure FXT Edge Filer is a purpose-built evolution of the popular Avere FXT Edge Filer, in use globally to optimize storage performance in read-heavy workloads.

The new hardware model goes beyond top-line integration with substantial updates. It is now being manufactured by Dell and has been upgraded with twice as much memory and 33 percent more SSD. Two models with varying specifications are available today. With the new 6600 model, customers will see about a 40 percent improvement in read performance over the Avere FXT 5850. The appliance now supports hybrid storage architectures that include Azure Blob storage.

Edge filer hardware is recognized as a proven solution for storage performance improvements. With many clusters deployed around the globe, the Azure FXT Edge Filer can scale performance separately from capacity to optimize storage efficiency. Companies large and small use the appliance to accelerate challenging workloads for processes like media rendering, financial simulations, genomic analysis, seismic processing, and wide area network (WAN) optimization. Now, with the new Microsoft Azure-supported appliances, these workloads can run with even better performance and easily leverage Azure Blob storage for active archive capacity.

Faster rendering

Visual effects studios have been long-time users of this type of edge appliance, as their rendering workloads frequently push storage infrastructures to their limits. When one of these companies, Digital Domain, heard about the new Azure FXT Edge Filer hardware, they quickly agreed to preview a 3-node cluster.

“I’ve been running my production renders on Avere FXT clusters for years and wanted to see how the new Azure FXT 6600 stacks up. Setup was easy as usual, and I was impressed with the new Dell hardware. After a week of lightweight testing, I decided to aim the entire render farm at the FXT 6600 cluster and it delivered the performance required without a hiccup and room to spare.”

– Mike Thompson, Principal Engineer, Digital Domain

Digital Domain has nine locations in the United States, China, and India.

Manage heterogeneous storage resources easily

Azure FXT Edge Filers help keep analysts, artists, and engineers productive, ensuring that applications aren't affected by storage latency. Storage administrators can easily manage these heterogeneous pools of storage in a single file system namespace, and users access their files from a single mount point, whether they are stored in on-premises NAS or in Azure Blob storage.

Expanding a cluster to meet growing demands is as easy as adding additional nodes. The Azure FXT Edge Filer scales from three to 24 nodes, allowing even more productivity in peak periods. This scale helps companies avoid overprovisioning expensive storage arrays and enables moving to the cloud at the user’s own pace.

Gain low latency hybrid storage access

Azure FXT Edge Filers deliver high throughput and low latency for hybrid storage infrastructure supporting read-heavy HPC workloads. Azure FXT Edge Filers support storage architectures with NFS and server message block (SMB) protocol support for NetApp and Dell EMC Isilon NAS systems, as well as cloud APIs for Azure Blob storage and Amazon S3.

Customers are using the flexibility of the Azure FXT Edge Filer to move less frequently used data to cloud storage resources, while keeping files accessible with minimal latency. These active archives enable organizations to quickly leverage media assets, research, and other digital information as needed.

Enable powerful caching of data

Software on the Azure FXT Edge Filers identifies the most in-demand or hottest data and caches it closest to compute resources, whether that data is stored down the hall, across town, or across the world. With a cluster connected, the appliances take over, moving data as it warms and cools to optimize access and use of the storage.
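The appliance's actual caching algorithm is proprietary, but the principle it describes (track how "hot" each file is and keep the hottest data in the fast tier) can be illustrated with a toy sketch. The class and method names below are invented for illustration only:

```python
from collections import Counter

# Toy heat-based cache: count accesses per file and treat the most
# frequently accessed files as the ones belonging in the cache tier.
class HeatCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heat = Counter()  # access count ("heat") per file path

    def access(self, path: str) -> None:
        self.heat[path] += 1

    def cached(self) -> set:
        """The `capacity` hottest files belong in the fast cache tier."""
        return {path for path, _ in self.heat.most_common(self.capacity)}

cache = HeatCache(capacity=2)
for path in ["/a", "/b", "/a", "/c", "/a", "/b"]:
    cache.access(path)
print(sorted(cache.cached()))  # ['/a', '/b'] are the hottest files
```

As files warm and cool over time, the cached set shifts accordingly, which is the behavior the appliance automates across on-premises NAS and Azure Blob storage.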

Get started with Azure FXT Edge Filers

Whether you are currently running Avere FXT Edge Filers and looking to upgrade to the latest hardware or expand your clusters, or you are new to the technology, the process to get started is the same. You can request information by completing this online form or by reaching out to your Microsoft representative.

Microsoft will work with you to configure the optimal combination of software and hardware for your workload and facilitate its purchase and installation.

Resources

Azure FXT Edge Filer preview blog

Azure FXT Edge Filer product information

Azure FXT Edge Filer documentation

Azure FXT Edge Filer data sheet
Source: Azure

Azure Cost Management updates – June 2019

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less.

Here are the improvements that we'll be looking at today, all based on your feedback:

Reservation and marketplace purchases for Enterprise Agreements and AWS
Forecasting your Azure and AWS costs
Standardizing cost and usage terminology for Enterprise Agreements and Microsoft Customer Agreements
Keeping an eye on costs across subscriptions with management group budgets
Updating your dashboard tiles
Expanded availability of resource tags in cost reporting
The new Cost Management YouTube channel

Let's dig into the details.

 

Reservation and marketplace purchases for Enterprise Agreements and AWS

Effective cost management starts by getting all your costs into a single place with a single taxonomy. Now, with the addition of reservation and marketplace purchases, you have a more complete picture of your Azure Enterprise Agreement (EA) and AWS costs, and can track large reservation costs back to the teams using the reservation benefit. Breaking reservation purchases down will simplify cost allocation efforts, making it easier than ever to manage internal chargeback.

Start by opening cost analysis and changing scope to your EA billing account, AWS consolidated account, or a management group which spans both. You'll notice four new grouping and filtering options to break down and drill into costs:

Charge type indicates which costs are from usage, purchases, and refunds.
Publisher type indicates which costs are from Azure, AWS, and marketplace. Marketplace costs include all clouds. Use Provider to distinguish between total Azure and AWS costs, and between first- and third-party costs.
Reservation specifies what the reservation costs are associated with, if applicable.
Frequency indicates which costs are usage-based, one-time fees, or recurring charges.

By default, cost analysis shows your actual cost as it is on your bill. This is ideal for reconciling your invoice, but results in visible spikes from large purchases. This also means usage against a reservation will show no cost, since it was prepaid, and subscription and resource group readers won't have any visibility into their effective costs. This is where amortization comes in.

Switch to the amortized cost view to break reservation purchases into daily chunks and spread them over the duration of the reservation term. As an example, instead of seeing a $365 purchase on January 1, you will see a $1 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated with the specific resources that used the reservation. For example, if that $1 daily charge is split between two virtual machines, you'll see two $0.50 charges for the day. If part of the reservation is not utilized for the day, you'll see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a new charge type titled UnusedReservation.
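The amortization arithmetic just described is simple to verify. This sketch (function names are ours, not an Azure API) spreads a one-year purchase into daily chunks and splits each day's chunk across the resources that used the reservation:

```python
# Spread a reservation purchase into equal daily chunks over its term.
def daily_amortized(purchase: float, term_days: int) -> float:
    return purchase / term_days

# Split one day's amortized cost by each resource's share of usage.
# Any unused share surfaces as an 'UnusedReservation' charge.
def allocate(daily: float, usage_shares: dict) -> dict:
    alloc = {res: daily * share for res, share in usage_shares.items()}
    used = sum(usage_shares.values())
    if used < 1.0:
        alloc["UnusedReservation"] = daily * (1.0 - used)
    return alloc

daily = daily_amortized(365.0, 365)           # $1.00 per day
print(allocate(daily, {"vm1": 0.5, "vm2": 0.5}))  # {'vm1': 0.5, 'vm2': 0.5}
print(allocate(daily, {"vm1": 0.5}))  # half the day shows as UnusedReservation
```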

As an added bonus, subscription, resource group, and AWS linked account readers can also see their effective costs by viewing amortized costs. They won't be able to see the purchases, which are only visible on the billing account, but they can see their discounted cost based on the reservation.

To build a simple chargeback report, switch to amortized cost, select no granularity to view the total costs for the period, group by resource group, and change to table view. Then, download the data to Excel or CSV for offline analysis or to merge with your own data.
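Once downloaded, the offline aggregation step is a few lines in any tool. This sketch uses Python's standard library; the column names are illustrative, so match them to the headers in your actual export:

```python
import csv
import io
from collections import defaultdict

# Aggregate a downloaded amortized-cost CSV by resource group.
# Sample data stands in for the real export; column names are assumptions.
sample = """ResourceGroup,PreTaxCost
rg-web,10.00
rg-data,4.25
rg-web,2.50
"""

totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["ResourceGroup"]] += float(row["PreTaxCost"])

print(dict(totals))  # {'rg-web': 12.5, 'rg-data': 4.25}
```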

If you need to automate getting cost data, you have two options: use the Query API for rich analysis with dynamic filtering, grouping, and aggregation, or use the UsageDetails API for the full, unaggregated cost and usage data. Note that UsageDetails is only available for Azure scopes. The general availability (GA) version of these APIs is 2019-01-01, but you'll want to use 2019-04-01-preview to include reservation and marketplace purchases.

As an example, let's get an aggregated view of amortized costs broken down by charge type, publisher type, resource group—left empty for purchases, and reservation—left empty if not applicable.

POST https://management.azure.com/{scope}/providers/Microsoft.CostManagement/query?api-version=2019-04-01-preview
Content-Type: application/json

{
  "type": "AmortizedCost",
  "timeframe": "Custom",
  "timePeriod": { "from": "2019-06-01", "to": "2019-06-30" },
  "dataset": {
    "granularity": "None",
    "aggregation": {
      "totalCost": { "name": "PreTaxCost", "function": "Sum" }
    },
    "grouping": [
      { "type": "dimension", "name": "ChargeType" },
      { "type": "dimension", "name": "PublisherType" },
      { "type": "dimension", "name": "Frequency" },
      { "type": "dimension", "name": "ResourceGroup" },
      { "type": "dimension", "name": "SubscriptionName" },
      { "type": "dimension", "name": "SubscriptionId" },
      { "type": "dimension", "name": "ReservationName" },
      { "type": "dimension", "name": "ReservationId" }
    ]
  }
}

And if you don't need the aggregation and prefer the full, raw dataset for Azure scopes:

GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?metric=AmortizedCost&$filter=properties/usageStart+ge+'2019-06-01'+AND+properties/usageEnd+le+'2019-06-30'&api-version=2019-04-01-preview

If you need actual costs to show purchases as they appear on your bill, simply change the type or metric to ActualCost. For more information about these APIs, refer to the Query and UsageDetails API documentation. The published docs show the GA version, but both APIs behave the same for the 2019-04-01-preview version, aside from the new type/metric attribute.
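If you are scripting this, the request body can be generated programmatically. The sketch below builds the same Query API payload shown earlier; the scope string, token acquisition, and the helper names are assumptions for illustration, not part of any SDK.

```python
# Sketch: build the Cost Management Query API request for amortized costs.
# The scope value and any auth details are placeholders (assumptions).

API_VERSION = "2019-04-01-preview"  # preview version that includes purchases

def build_amortized_query(from_date, to_date, groupings):
    """Build the body for a POST to
    {scope}/providers/Microsoft.CostManagement/query."""
    return {
        "type": "AmortizedCost",
        "timeframe": "Custom",
        "timePeriod": {"from": from_date, "to": to_date},
        "dataset": {
            "granularity": "None",
            "aggregation": {
                "totalCost": {"name": "PreTaxCost", "function": "Sum"}
            },
            "grouping": [
                {"type": "dimension", "name": g} for g in groupings
            ],
        },
    }

def query_url(scope):
    """Hypothetical helper assembling the request URL for a given scope."""
    return (f"https://management.azure.com/{scope}"
            f"/providers/Microsoft.CostManagement/query"
            f"?api-version={API_VERSION}")

body = build_amortized_query("2019-06-01", "2019-06-30",
                             ["ChargeType", "PublisherType",
                              "ResourceGroup", "ReservationName"])
```

You would then POST `body` as JSON with an `Authorization: Bearer …` header to `query_url(scope)` using your HTTP client of choice.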

Note that Cost Management APIs work across all scopes above resources: resource group, subscription, and management group via Azure role-based access control (RBAC); EA billing accounts (enrollments), departments, and enrollment accounts via EA portal access; and AWS consolidated and linked accounts via Azure RBAC. To learn more about scopes, including how to determine your scope ID or manage access, see our documentation "Understand and work with scopes."

Support for reservation and marketplace purchases is currently available in preview in the Azure portal, but will roll out globally in the coming weeks. In the meantime, please check it out and let us know if you have any feedback.

 

Forecasting your Azure and AWS costs

History teaches us a lot, and knowing where you've been is critical to understanding where you're going. This is no less true when it comes to managing costs. You may start with historical costs to understand application and organization trends, but to really get into a healthy, optimized state, you need to plan for the future. Now you can with Cost Management forecasts.

Check your forecasted costs in cost analysis to anticipate and visualize cost trends, and proactively take action to avoid budget or credit overages on any scope: from a single application in a resource group, to the entire subscription or billing account, to higher-level management groups spanning both Azure and AWS resources. Learn about connecting your AWS account in last month's wrap up.

Cost Management forecasts are in preview in the Azure portal, and will roll out globally in the coming weeks. Check it out and let us know what you'd like to see next.

 

Standardizing cost and usage terminology for Enterprise Agreement and Microsoft Customer Agreement

Depending on whether you use a pay-as-you-go (PAYG), Enterprise Agreement (EA), Cloud Solution Provider (CSP), or Microsoft Customer Agreement (MCA) account, you may be used to different terminology. These differences are minor and won't impact your ability to understand and break down your bills, but they do introduce a challenge as your organization grows and needs a more holistic cost management solution spanning multiple account types. With the addition of AWS and the eventual migration of PAYG, EA, and CSP accounts into MCA, this becomes even more important. In an effort to streamline the transition to MCA at your next EA renewal, Cost Management now uses new column or property names that align to MCA terminology. Here are the primary differences you can expect to see for EA accounts:

EnrollmentNumber → BillingAccountId/BillingProfileId

EA enrollments are represented as "billing accounts" within the Azure portal today, and they will continue to be mapped to a BillingAccountId within the cost and usage data. No change there. MCA also introduces the ability to create multiple invoices within a billing account. The configuration of these invoices is called a "billing profile". Since EA can only have a single invoice, the enrollment effectively maps to a billing profile. In line with that conceptual model, the enrollment number will be available as both a BillingAccountId and BillingProfileId.

DepartmentName → InvoiceSectionName

MCA has a concept similar to EA departments, which allows you to group subscriptions within the invoice. These are called "invoice sections" and are nested under a billing profile. While the EA invoice isn't changing as part of this effort, EA departments will be shown as InvoiceSectionName within the cost data for consistency.

ProductOrderName (new)

New property to identify the larger product the charge applies to, like the Azure subscription offer.

PublisherName (new)

New property to indicate the publisher of the offering.

ServiceFamily (new)

New property to group related meter categories.

Organizations looking to renew their EA enrollment into a new MCA should strongly consider moving from the key-based EA APIs (such as consumption.azure.com) to the latest UsageDetails API (version 2019-04-01-preview) based on these new properties to minimize future migration work. The key-based APIs are not supported for MCA billing accounts.
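To make the remapping concrete, here is a small sketch that renames EA-style columns in a usage record to the MCA-aligned names described above. It covers only the renames called out in this post; the sample record is hypothetical.

```python
# EA -> MCA column renames described in this post. EnrollmentNumber maps to
# both BillingAccountId and BillingProfileId, so it fans out to two columns.
EA_TO_MCA = {
    "EnrollmentNumber": ["BillingAccountId", "BillingProfileId"],
    "DepartmentName": ["InvoiceSectionName"],
}

def remap_record(record):
    """Return a copy of a usage record with EA columns renamed to MCA terms.
    Columns without a mapping are passed through unchanged."""
    out = {}
    for key, value in record.items():
        for new_key in EA_TO_MCA.get(key, [key]):
            out[new_key] = value
    return out

# Hypothetical EA usage record, for illustration only.
ea_record = {"EnrollmentNumber": "12345678", "DepartmentName": "Finance"}
mca_record = remap_record(ea_record)
```

Running the same remap over historical EA exports is one way to keep downstream reports working unchanged after a renewal into MCA.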

To learn more about the new terminology, see our documentation "Understand the terms in your Azure usage and charges file."

 

Keeping an eye on costs across subscriptions with management group budgets

Every organization has a bottom line. Cost Management budgets help you make sure you don't hit yours. And now, you can create budgets that span both Azure and AWS resources using management groups.

Organize subscriptions into management groups, and use filters to perfectly tune the budget that's right for your teams.

To learn more, see our tutorial "Create and manage budgets."

 

Updating your dashboard tiles

You already know you can pin customized views of cost analysis to the dashboard.

You may have noticed these tiles were locked to the specific date range you selected when pinning it. For instance, if you chose to view this month's costs in January, the tile would always show January, even in February, March, and so on. This is no longer the case.

Cost analysis tiles now maintain the built-in range you selected in the date picker. If you pin "this month," you'll always get the current calendar month. If you pin "last 7 days," you'll get a rolling view of the last 7 days. If you select a custom date range, however, the tile will always show that specific date range.
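The distinction between built-in and custom ranges can be sketched as a small resolver. The range names below are illustrative, not the portal's internal identifiers; the point is that built-in ranges are re-evaluated each day while custom ranges stay pinned.

```python
from datetime import date, timedelta

def resolve_range(kind, today, custom=None):
    """Resolve a pinned tile's date range the way updated tiles behave:
    built-in ranges move with 'today', a custom range never moves."""
    if kind == "this_month":
        return (today.replace(day=1), today)
    if kind == "last_7_days":
        return (today - timedelta(days=6), today)
    if kind == "custom":
        return custom  # always the exact dates originally selected
    raise ValueError(f"unknown range kind: {kind}")

# A tile pinned as "this month" shows June when viewed on June 15...
june_view = resolve_range("this_month", date(2019, 6, 15))
# ...while a custom range pinned in January still shows January in June.
jan_view = resolve_range("custom", date(2019, 6, 15),
                         custom=(date(2019, 1, 1), date(2019, 1, 31)))
```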

To get the updated behavior, please update your pinned tiles. Simply click the chart on the tile to open cost analysis, select the desired date range, and pin it back to the dashboard. Your new tile will always keep the exact view you selected.

What else would help you build out your cost dashboard? Do you need other date ranges? Let us know.

 

Expanded availability of resource tags in cost reporting

Tagging is the best way to organize and categorize your resources outside of the built-in management group, subscription, and resource group hierarchy, allowing you to add your own metadata and build custom reports using cost analysis. While most Azure resources support tags, some resource types do not. Here are the latest resource types that now support tags:

App Service environments
Data Factory services
Event Hub namespaces
Load balancers
Service Bus namespaces

Remember, tags are part of every usage record and are only available in Cost Management reporting after the tag is applied. Historical costs are not tagged, so update your resources today for the best cost reporting.

 

The new Cost Management YouTube channel

Last month, we talked about eight new quickstart videos to get you up and running with Cost Management quickly. Subscribe to the new Azure Cost Management YouTube channel to stay in the loop with new videos as they're released. Here's the newest video in our cost optimization collection:

Five tips to help you save money and manage costs with Azure

Let us know what other topics you'd like to see covered.

 

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming! 

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.
Source: Azure

Azure.Source – Volume 89

Dear Azure fans, Azure.Source is going on hiatus. Thank you for reading each week and be sure to follow @Azure for updates and new ways to learn more.

Now available

Announcing the general availability of Azure premium files

We are excited to announce the general availability of Azure premium files for customers optimizing their cloud-based file shares on Azure. Premium files offers a higher level of performance built on solid-state drives (SSD) for fully managed file services in Azure.

Premium tier is optimized to deliver consistent performance for IO-intensive workloads that require high-throughput and low latency. Premium file shares store data on the latest SSDs, making them suitable for a wide variety of workloads like databases, persistent volumes for containers, home directories, content and collaboration repositories, media and analytics, high variable and batch workloads, and enterprise applications that are performance sensitive. Our existing standard tier continues to provide reliable performance at a low cost for workloads less sensitive to performance variability, and is well-suited for general purpose file storage, development/test, backups, and applications that do not require low latency.

Leveraging complex data to build advanced search applications with Azure Search

Data is rarely simple. Not every piece of data we have can fit nicely into a single Excel worksheet of rows and columns. Data has many diverse relationships, such as the multiple locations and phone numbers for a single customer or the multiple authors and genres of a single book. Of course, relationships are typically even more complex than this, and as we start to leverage AI to understand our data, the additional learnings we get only add to the complexity of relationships. For that reason, expecting customers to flatten their data so it can be searched and explored is often unrealistic. We heard this often, and it quickly became our number one most requested Azure Search feature. Because of this, we were excited to announce the general availability of complex types support in Azure Search. In this post, we explain what complex types support adds to Azure Search and the kinds of things you can build using this capability.

Azure Blockchain Workbench 1.7.0 integration with Azure Blockchain Service

Microsoft Azure Blockchain Workbench 1.7.0 has been released, and along with our new Azure Blockchain Service it can further enhance your blockchain development and projects. You can deploy a new instance of Blockchain Workbench through the Azure portal or upgrade your existing deployments to 1.7.0 using the upgrade script. This update includes improvements such as integration with Azure Blockchain Service and enhanced compatibility with Quorum.

New PCI DSS Azure Blueprint makes compliance simpler

Announcing our second Azure Blueprint for an important compliance standard with the release of the PCI-DSS v3.2.1 blueprint. The new blueprint maps a core set of policies for Payment Card Industry (PCI) Data Security Standards (DSS) compliance to any Azure deployed architecture, allowing businesses such as retailers to quickly create new environments with compliance built in to the Azure infrastructure. Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. Azure Blueprints allow customers to set up governed Azure environments that can scale to support production implementations for large-scale migrations.

Now in preview

Event-driven analytics with Azure Data Lake Storage Gen2

Announcing that Azure Data Lake Storage Gen2 integration with Azure Event Grid is in preview. This means that Azure Data Lake Storage Gen2 can now generate events that can be consumed by Event Grid and routed to subscribers with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints. With this capability, individual changes to files and directories in Azure Data Lake Storage Gen2 can automatically be captured and made available to data engineers for creating rich big data analytics platforms that use event-driven architectures.
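The event-driven pattern can be illustrated with a short sketch of a subscriber handling such events. The sample event below is hand-written for illustration and trimmed to a few fields; consult the Event Grid event schema documentation for the full shape.

```python
# Minimal sketch of handling a "blob created" event delivered by Event Grid
# to a webhook, Function, or Logic App. The sample event is illustrative.
sample_events = [
    {
        "eventType": "Microsoft.Storage.BlobCreated",
        "subject": "/blobServices/default/containers/raw/blobs/2019/06/sales.csv",
        "data": {"url": "https://contosolake.blob.core.windows.net/raw/2019/06/sales.csv"},
    }
]

def handle_events(events):
    """Collect URLs of newly created files so a downstream analytics job
    can pick them up."""
    created = []
    for event in events:
        if event.get("eventType") == "Microsoft.Storage.BlobCreated":
            created.append(event["data"]["url"])
    return created

new_files = handle_events(sample_events)
```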

Technical content

How to deploy your machine learning models with Azure Machine Learning

Azure Machine Learning service is a cloud service that you use to train, deploy, automate, and manage machine learning models, all at the broad scale that the cloud provides. The service fully supports open-source technologies such as PyTorch, TensorFlow, and scikit-learn and can be used for any kind of machine learning, from classical ML to deep learning, supervised and unsupervised. In this article, you will learn how to deploy your machine learning models with Azure Machine Learning.

Azure Cloud Shell Tips for SysAdmins Part II – Using the Cloud Shell tools to Migrate

In the last blog post, Azure Cloud Shell Tips for SysAdmins (bash), the author discussed some of the tools that the Azure Cloud Shell for bash already has built in. This time he goes deeper and shows you how to utilize a combination of the tools to create an UbuntuLTS Linux server. Once the server is provisioned, he demonstrates how to use Ansible to deploy Node.js from the nodesource binary repository.

Step-By-Step: Migrating The Active Directory Certificate Service From Windows Server 2008 R2 to 2019

End of support for Windows Server 2008 R2 is slated by Microsoft for January 14, 2020. That announcement increased interest in a previous post detailing steps for Active Directory Certificate Service migration from server versions older than 2008 R2. Many subscribers of ITOpsTalk.com have reached out asking for an update of the steps to reflect Active Directory Certificate Service migration from 2008 R2 to 2016/2019, and of course our team is happy to oblige.

Home Grown IoT – Local Dev

Now that we're starting to build our IoT application, it's time to start talking about the local development experience. At the end of the day, I use IoT Edge to do the deployment onto the device and manage the communication with IoT Hub, and there is a very comprehensive development guide for Visual Studio Code and Visual Studio 2019. The workflow is to create a new IoT Edge project, set up IoT Edge on your machine, and do deployments to it that way. This is the approach I'd recommend, as it gives you the best replication of production in local development.

Delivering static content via Azure CDN | Azure Friday

In one of the prior episodes, we learned how to serve a static website from Azure Blob storage. This is great for a low-volume website. As your site starts getting more hits, you want to deliver the content closer to the end user. In this episode, we learn how to deliver static content via Azure Content Delivery Network (CDN). Azure CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world.

Azure shows

Deploy your web app in Windows containers on Azure App Service | Azure Friday

Windows Container support is available in preview in Azure App Service. By deploying applications via Windows Containers in Azure App Service, you can install your dependencies inside the container, call APIs currently blocked by the Azure App Service sandbox, and use the power of containers to migrate applications for which you no longer have the source code. All of this, and you still get to use the awesome feature set enabled by Azure App Service, such as auto-scale, deployment slots, and increased developer productivity.

Using open data to build family trees | The Open Source Show

Erica Joy joins Ashley McNamara to share her not-so-secret personal mission: making genealogy information open, queryable, and easily parsable. She shares a bit about why this is so critical, common challenges, and tips for rebuilding your own family tree, or using open data to uncover whatever information you need for your personal mission.

Supporting Windows forms and WPF in .NET Core 3 | On .NET

There is significant effort happening to add support for running desktop applications on .NET Core 3.0. In this episode, Jeremy interviews Mike Harsh about some of the work being done and decisions being made to enable Windows Forms and WPF applications to run well on .NET Core 3.0 and beyond.

Five things about RxJS and reactive programming | Five Things

Where do RxJS, reactive programming, and the Redux pattern fit into your developer workflow? Where can you learn from the community leaders? Does wearing a hoodie make you a better developer? Oh, and remember: go to RxJS Live and drinks are on Aaron!

How to use the Global Search in the Azure portal | Azure Portal Series

In this video of the Azure Portal “How To” Series, you will learn how to find Azure services, resources, documentation, and more using the Global Search in the Azure portal.

Episode 285 – The Azure Journey | The Azure Podcast

Sujit, Kendall, and Cynthia talk with the one and only Richard Campbell on how to tell the cloud story, the conversations to have with customers as they enter the cloud and the implications of globally distributed cloud that needs to be considered. Probably one of our favorite shows.

Industries and partners

Solving the problem of duplicate records in healthcare

As the U.S. healthcare system continues to transition away from paper to a more digitized ecosystem, the ability to link an individual's medical data together correctly becomes increasingly challenging. Patients move, marry, divorce, change names, and visit multiple providers throughout their lifetime, with each visit creating new records, and the potential for inconsistent or duplicate information grows. Duplicate medical records often occur as a result of multiple name variations, data entry errors, and lack of interoperability (or communication) between systems. Poor patient identification and duplicate records in turn lead to diagnosis errors, redundant medical tests, skewed reporting and analytics, and billing inaccuracies. The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we will describe how one Microsoft partner, Nextgate, uses Azure to solve a unique problem.

A solution to manage policy administration from end to end

Legacy systems can be a nightmare for any business to maintain. In the insurance industry, carriers struggle not only to maintain these systems but to modify and extend them to support new business initiatives. The insurance business is complex; every state and nation has its own unique set of rules, regulations, and demographics. Creating a new product such as an automobile policy has traditionally required the coordination of many different processes, systems, and people. The monolithic systems traditionally used to create new products are inflexible, and creating a new product can be an expensive proposition. The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner, Sunlight Solutions, uses Azure to solve a unique problem.

Using natural language processing to manage healthcare records

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how SyTrue, a Microsoft partner focusing on healthcare uses Azure to empower healthcare organizations to improve efficiency, reduce costs, and improve patient outcomes.

Azure Cosmos DB: A competitive advantage for healthcare ISVs

CitiusTech is a specialist provider of healthcare technology services that helps its customers accelerate innovation in healthcare. CitiusTech used Azure Cosmos DB to simplify the real-time collection and movement of healthcare data from a variety of sources in a secure manner. With the proliferation of patient information from established and current sources, accompanied by stringent regulations, healthcare systems today are gradually shifting toward near real-time data integration.
Source: Azure

Helping move healthcare organizations to Azure

Today’s healthcare organizations are expected to be agile, reduce costs, and direct capital toward revenue generating activities that improve patient outcomes. The cloud is a key part of the answer, but implementing a new solution on the cloud also requires new skills especially around governance, compliance with HIPAA, and security practices. Many healthcare organizations look to an experienced partner to help them migrate solutions from on-premises to the cloud, while building in the right set of structures to seamlessly handle known and future challenges.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Wanted: Governance and compliance expertise

For organizations that have moved to the cloud, a lack of governance and understanding about the way cloud services work can lead to wasted spending, unpredictable cloud service bills, and cloud vendor lock-in. The rapid growth of cloud infrastructures also creates a dizzying array of possibilities that can keep a team uncertain of the correct path and second guessing their choices, which can lead to delay and add risk of failure.

Now, healthcare CIOs increasingly rely on cloud platforms, but they run into new problems. Preventing the inevitable difficulties requires a staff that is fully enabled with the right skills for compliance, privacy, and security. Health IT professionals need guidance on how to move an on-premises healthcare infrastructure to a cloud platform and ensure HIPAA compliance, policies, safeguards, and resources are in place.

Here are the major areas that require thought and planning:

Privacy and compliance concerns: Protecting patient data is a persistent concern, along with implementation uncertainty and risk. Concerns about HIPAA compliance, cloud, and legacy system integration are among the major obstacles that have kept healthcare IT on-premises.
Budget constraints, cost optimization: Cloud service bills are often highly detailed and complicated, making it difficult to determine which application, department, or resource is the source of a cost overrun.
Technical hurdles: Healthcare IT professionals may not have the skills or resources to leverage cloud services to do things like extend an on-premises datacenter to a hybrid cloud.
Training: Retaining and enabling IT staff is a key challenge, and education on any new solution is critical to success. Everyone should have easy-to-understand resources regardless of role, whether IT leaders, administrators, developers, or database administrators.
Gaps in capabilities: Even with an on-premises solution, many use special services from a vendor. Planning should include those partners as well as specialized areas that the vendors don’t currently address.

Solution

Burwood Group is a Microsoft partner that specializes in moving healthcare organizations to Azure. If a client has a secure, on-premises network, Burwood will build a secure cloud network and leverage the same regulatory controls used for an on-premises installation. They will also educate technology teams on endpoint security and serverless security, with emphasis on HIPAA compliance in the cloud.

The consulting firm offers extensive training. For example, through a one-day class, they provide the basic education to have a successful implementation in Azure, with an emphasis on healthcare requirements in the cloud. This workshop includes hands-on lab exercises and is 100 percent focused on pertinent, practical, and actionable knowledge.

Benefits

Standardization: Nothing is left to guesswork; consistency is instilled across the cloud team. Through education, Burwood introduces the healthcare datacenter in Azure.
Flexibility: IT teams may need to work with multiple cloud architectures for healthcare. This occurs as care is increasingly managed across settings with more interoperability across applications and business entities. Understanding best practices for the cloud allows expertise that is independent of any application or vendor.
Control: When it comes to cloud governance for healthcare, organizations need to control cloud sprawl. As personnel enter or leave an organization, permissions must be carefully allowed or revoked to prevent security breaches. Burwood provides education on these subjects: What is going into and out of Azure? Who has rights to resources in Azure? These types of questions are answered.
Service catalog: Burwood seeks to keep users informed of new services through a service catalog. Users are instructed about the following.

Handling cloud service requests and change management.
Expanding the current service catalog through an Azure for healthcare IT emphasis.
Potential items that users can request through the service catalog in Azure.

Indexing: All resources in the cloud must be tagged with cost center, creation date, and more.
IP awareness: Users are instructed to be very careful of public IP address assignments, and the potential of creating vulnerabilities.

Services

The company has a proficiency in both healthcare and Azure technology. These are a few of the Azure services used to create custom solutions:

Azure portal
Azure Resource Manager
Azure role-based access control
Azure Active Directory
Azure Load Balancer

Next steps

To learn more about other industry solutions, go to the Azure for healthcare page. To find more details about consulting and a one-day Azure University for healthcare workshop, go to the Azure Marketplace listing for the Burwood Group and select Contact me.
Source: Azure

Leveraging complex data to build advanced search applications with Azure Search

Data is rarely simple. Not every piece of data we have can fit nicely into a single Excel worksheet of rows and columns. Data has many diverse relationships, such as the multiple locations and phone numbers for a single customer or the multiple authors and genres of a single book. Of course, relationships are typically even more complex than this, and as we start to leverage AI to understand our data, the additional learnings we get only add to the complexity of relationships. For that reason, expecting customers to flatten their data so it can be searched and explored is often unrealistic. We heard this often, and it quickly became our number one most requested Azure Search feature. Because of this, we were excited to announce the general availability of complex types support in Azure Search. In this post, I want to take some time to explain what complex types support adds to Azure Search and the kinds of things you can build using this capability.

Azure Search is a platform as a service that helps developers create their own cloud search solutions.

What is complex data?

Complex data consists of data that includes hierarchical or nested substructures that do not break down neatly into a tabular rowset. For example, a book with multiple authors, where each author can have multiple attributes, can't be represented as a single row of data unless there is a way to model the authors as a collection of objects. Complex types provide this capability, and they can be used when the data cannot be modeled in simple field structures such as strings or integers.
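As a rough sketch of what this looks like in an index definition, a complex field declares nested sub-fields of its own. The dict below mirrors the shape of the REST index schema; the field names are illustrative, so check the Azure Search documentation for the exact schema before using it.

```python
# Sketch of an index field using a complex type: a collection of author
# objects, each with its own sub-fields. Field names are illustrative.
authors_field = {
    "name": "Authors",
    "type": "Collection(Edm.ComplexType)",
    "fields": [
        {"name": "Name", "type": "Edm.String", "searchable": True},
        {"name": "Country", "type": "Edm.String", "filterable": True},
    ],
}
```

A singular nested object (rather than a collection) would use `Edm.ComplexType` without the `Collection(...)` wrapper.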

Complex types applicability

At Microsoft Build 2019, we demonstrated how complex types can be leveraged to build out an effective search application. In the session, we looked at the Travel Stack Exchange site, one of the many online communities supported by StackExchange.

The StackExchange data was modeled in a JSON structure to allow easy ingestion into Azure Search. If we look at the first post made to this site and focus on the first few fields, we see that all of them can be modeled using simple datatypes, including tags, which can be modeled as a collection, or array, of strings.

{
  "id": "1",
  "CreationDate": "2011-06-21T20:19:34.73",
  "Score": 8,
  "ViewCount": 462,
  "BodyHTML": "<p>My fiancée and I are looking for a good Caribbean cruise in October and were wondering which
  "Body": "my fiancée and i are looking for a good caribbean cruise in october and were wondering which islands
  "OwnerUserId": 9,
  "LastEditorUserId": 101,
  "LastEditDate": "2011-12-28T21:36:43.91",
  "LastActivityDate": "2012-05-24T14:52:14.76",
  "Title": "What are some Caribbean cruises for October?",
  "Tags": [
    "caribbean",
    "cruising",
    "vacations"
  ],
  "AnswerCount": 4,
  "CommentCount": 4,
  "CloseDate": "0001-01-01T00:00:00",

However, as we look further down this dataset, we see that the data quickly gets more complex and cannot be mapped into a flat structure. For example, there can be numerous comments and answers associated with a single document. Even votes is defined here as a complex type (although technically it could have been flattened, doing so would add work to transform the data).

"CloseDate": "0001-01-01T00:00:00",
"Comments": [
  {
    "Score": 0,
    "Text": "To help with the cruise line question: Where are you located? My wife and I live in New Orlea
    "CreationDate": "2011-06-21T20:25:14.257",
    "UserId": 12
  },
  {
    "Score": 0,
    "Text": "Toronto, Ontario. We can fly out of anywhere though.",
    "CreationDate": "2011-06-21T20:27:35.3",
    "UserId": 9
  },
  {
    "Score": 3,
    "Text": ""Best" for what? Please read [this page](http://travel.stackexchange.com/questions/how-to
    "UserId": 20
  },
  {
    "Score": 2,
    "Text": "What do you want out of a cruise? To relax on a boat? To visit islands? Culture? Adventure?
    "CreationDate": "2011-06-24T05:07:16.643",
    "UserId": 65
  }
],
"Votes": {
  "UpVotes": 10,
  "DownVotes": 2
},
"Answers": [
  {
    "IsAcceptedAnswer": "True",
    "Body": "This is less than an answer, but more than a comment…nnA large percentage of your travel b
    "Score": 7,
    "CreationDate": "2011-06-24T05:12:01.133",
    "OwnerUserId": 74

All of this data is important to the search experience. For example, you might want to:

Search for and highlight phrases not only in the original question, but also in any of the comments.
Limit documents to those where an answer was provided by a specific user.
Boost certain documents higher in the search results when they have a higher number of up votes.
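The first two scenarios above map to query capabilities over the complex structure. As a hedged sketch using Azure Search's OData query syntax (field names follow the sample document; verify the exact expressions against the documentation):

```python
# Sketch of query parameters over complex fields. The filter uses OData
# "any" over a collection of complex objects (the Answers array).
params = {
    "search": "caribbean cruise",
    # Highlight matches in nested comment text as well as the title/body.
    "highlight": "Title,Body,Comments/Text",
    # Limit to documents where a specific user provided an answer.
    "$filter": "Answers/any(a: a/OwnerUserId eq 74)",
}
```

Boosting documents by up votes would typically be handled with a scoring profile referencing a field like Votes/UpVotes rather than a per-query parameter.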

In fact, we could even improve on the existing StackExchange search interface by leveraging Cognitive Search to extract key phrases from the answers to supply potential phrases for autocomplete as the user types in the search box.

All of this is now possible because not only can you map this data to a complex structure, but search queries can also support this enhanced structure to help build out a better search experience.

Next steps

If you would like to learn more about Azure Search complex types, please visit the documentation, or check out the video and associated code I made which digs into this Travel StackExchange data in more detail.
Source: Azure