Using modern data sources in Azure Analysis Services

Just weeks after reaching general availability (GA) with Azure Analysis Services, we are super excited to make Tabular 1400 models available in public preview. Although Tabular 1400 is still a preview feature, it is nevertheless exciting because cloud solutions can now begin to take advantage of all the great features that Analysis Services supports at the 1400 compatibility level, including Detail Rows, Object Level Security, and the modern Get Data experience. For a comprehensive summary, see the blog post 1400 Compatibility Level in Azure Analysis Services.

As far as the modern Get Data experience is concerned, note that there are still some limitations because SQL Server Data Tools for Analysis Services Tabular (SSDT Tabular) as well as a cloud infrastructure component, specifically the on-premises gateway, are not quite ready yet. With every monthly release, SSDT Tabular closes more feature gaps and supports more data sources, but the work is far from complete. The cloud infrastructure components needed to access modern on-premises data sources are also still in the final stages of testing. So, at this point, only the following cloud-based data sources can be used at the 1400 compatibility level in Azure Analysis Services:

Azure SQL Database: If your business applications rely on Azure SQL DB, you can use Azure Analysis Services to connect to the data and add BI capabilities to your solutions.
Azure SQL Data Warehouse: This massively parallel processing (MPP), cloud-based, scale-out relational database can provide a foundation for large-scale BI solutions based on Azure Analysis Services. Just connect your Tabular 1400 models to Azure SQL DW in import or DirectQuery mode for interactive analysis.
Azure Blob storage: If you want to build large-scale Tabular 1400 models on top of unstructured data, you need a scalable storage solution. With exabytes of capacity, massive scalability, and low cost, Azure Blob storage is a good choice. Note, however, that SSDT Tabular does not yet support advanced mashup capabilities to import file-based data efficiently. For example, combining files in a single table requires support for mashup functions, which is coming soon as part of named expressions.

The next big delivery in SSDT Tabular is support for named expressions. This includes parameters, functions, and shared queries, so that you can build advanced mashups, taking full advantage of Azure Blob storage as mentioned above. Then the tooling focus shifts to improving quality, robustness, and performance, all while continuing to add further connectors until parity with Power BI Desktop is achieved. Among other connectors, HDInsight and Azure Data Lake Store are coming up next to increase the number of supported cloud-based data sources.

For on-premises data sources, the plan is to provide connectivity at the 1400 compatibility level in Azure Analysis Services very soon. This requires a new version of the on-premises gateway, which is planned to ship in parallel with the next monthly release of SSDT Tabular. If you want to create Tabular 1400 models that use on-premises data sources and deploy them to Azure Analysis Services, make sure you use that upcoming SSDT Tabular version and deploy that upcoming on-premises gateway.

In the meantime, you can build and test your Tabular 1400 models by using the SSDT Tabular 17.0 (April 2017) release in integrated workspace mode. Give Tabular 1400 a test drive, and as always, please send us your feedback and suggestions by using ProBIToolsFeedback or SSASPrev at Microsoft.com. You can also use any other available communication channels such as UserVoice or MSDN forums. Stay tuned for further announcements when the next monthly release of SSDT Tabular is published together with the on-premises gateway for Azure Analysis Services.
Source: Azure

More storage for Premium elastic pools in Azure SQL Database

Until now, the amount of storage available to Premium elastic pools in Azure SQL Database was limited to 750 GB. We are pleased to announce that this limit has increased to 4 TB for the largest Premium pools.

Premium pool eDTUs    Maximum pool storage
125                   250 GB (no change)
250                   500 GB (no change)
500                   750 GB (no change)
1000                  750 GB (no change)
1500                  1.5 TB
2000                  2.0 TB
2500                  2.5 TB
3000                  3.0 TB
3500                  3.5 TB
4000                  4.0 TB

Availability

Currently, this extra storage is available in the following regions: East US 2, West US, West Europe, Southeast Asia, Japan East, Australia East, Canada Central, and Canada East.  More widespread availability is planned.

Learn more

To learn more about SQL Database elastic pools and more storage for Premium pools, please visit the SQL Database elastic pool webpage.  For pricing information, please visit the SQL Database pricing webpage.
Source: Azure

Now Generally Available: On-premises data gateway in Azure

With today's update, we're excited to announce that the on-premises data gateway is now generally available in Azure. This gateway helps you securely connect your business apps in the cloud to your data sources on premises. You can use the gateway to move data to and from the cloud while keeping your data sources on premises. The gateway currently supports Azure Logic Apps and, in the next few months, will also expand to support Azure Analysis Services.

This release includes these new features:

Support for multiple regions
Delete your gateway connection resource in Azure
New on-premises connectors for Azure Logic Apps

To get the latest gateway installation, download the gateway installer.

Support for multiple regions

This update gives gateway admins even more control over data gateway settings. When you install the gateway on a local computer, you can now select the region for the gateway cloud service and Azure Service Bus communication channel that you want to use with your gateway installation. Previously, this region defaulted to your Azure Active Directory tenant’s location.

We will move existing gateways to match the original location of your tenant. So if you currently use the gateway, you might notice this change. However, this update won't affect currently running logic apps, which will continue to work as usual.

Note: You can't change this region after installation unless you uninstall the gateway and reinstall. This region also determines and restricts the location where you can create the Azure resource for your gateway connection. So when you create the gateway connection resource in Azure, make sure to choose the location that you selected during installation so you can select your gateway from the installed gateways list.

Delete gateway resource in Azure

You can now delete your gateway connection resource in Azure and associate your gateway to a different Azure resource. The ability to delete the gateway installation is coming soon.

New on-premises connectors for Azure Logic Apps

Also, we’re adding new connectors that support on-premises data sources for Azure Logic Apps: Oracle EBS and PostgreSQL. These connectors follow two others that we introduced a month ago: MySQL and Teradata.

Learn more about how to access data sources on premises through the data gateway for Azure Logic Apps.
Source: Azure

Announcing preview of Consumption and Charge APIs for Enterprise Azure customers

We are excited to announce the preview release of the new Azure Consumption and Charge APIs for Enterprise customers. This follows the release of our new Power BI content pack that addressed issues related to performance and data size limitations. Users can now query Azure usage details and Marketplace charges by any desired date range or billing period. These APIs enable organizations to gain deep insights into their usage and spend for all workloads running on Azure. This is an important first step in our journey over the next few months to light up additional features that enable our customers to accurately monitor, predict, and optimize costs on Azure. Learn more by reading the detailed documentation on getting started with the APIs.

Details of the APIs:

Balance and Summary: The Balance and Summary API offers a monthly summary of information on balances, new purchases, Azure Marketplace service charges, adjustments, and overage charges.

Usage Details: The Usage Details API offers a daily breakdown of consumed quantities and estimated charges for an enrollment. The result also includes information on instances, meters, and departments. The API can be queried by billing period or by a specified start and end date.

Marketplace Store Charge: The Marketplace Store Charge API returns the usage-based marketplace charges breakdown by day for the specified billing period or start and end dates.

Price Sheet: The Price Sheet API provides the applicable rate for each meter for the given enrollment and billing period.

Billing Periods: The Billing Periods API returns a list of billing periods that have consumption data for the specified enrollment in reverse chronological order. Each period contains a property pointing to the API route for the four sets of data, BalanceSummary, UsageDetails, Marketplace Charges, and PriceSheet.
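As an illustration of how these endpoints are consumed, here is a minimal Python sketch that pulls usage details for one billing period. It uses the requests library; the enrollment number, API key, and billing period are placeholders, and the exact route shown is an assumption to verify against the getting-started documentation linked above.

# Minimal sketch: query the Usage Details API for one billing period.
# The route follows the documented pattern but should be verified against
# the getting-started documentation; enrollment number, API key, and
# billing period below are placeholders.
import requests

ENROLLMENT = "123456"      # hypothetical enrollment number
API_KEY = "<api-key>"      # API access key generated on the Enterprise Portal
BILLING_PERIOD = "201704"  # yyyyMM

url = ("https://consumption.azure.com/v1/enrollments/"
       + ENROLLMENT + "/billingperiods/" + BILLING_PERIOD + "/usagedetails")

resp = requests.get(url, headers={"Authorization": "Bearer " + API_KEY})
resp.raise_for_status()

payload = resp.json()
# Each entry represents one daily usage record for a meter.
for record in payload.get("data", []):
    print(record.get("meterName"), record.get("consumedQuantity"))

The same pattern applies to the other endpoints listed above; only the route segment after the enrollment changes.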

What’s next?

We are working on providing this data in ARM as part of a consistent, channel-agnostic API set. As always, please reach out to us on the Azure Feedback forum and through the Azure MSDN forum.
Source: Azure

Empowering digital transformation together at Red Hat Summit

Today we’re wrapping up an amazing week at Red Hat Summit. We’re proud to, once again, sponsor and participate in a forum that brings together customers, partners and communities who are passionate about open source in the enterprise.

With over 40% of enterprise decision makers saying that increasing open source usage is a high or critical priority to their departments, there’s little doubt that open source plays an important role in the enterprise digital transformation. And we’re seeing this momentum in the cloud, with 1 in 3 VMs in Microsoft Azure running Linux, growing at 1.4x the rate of Windows VMs.

Yet a successful cloud strategy is not just about agility and speed, something IT knows well. In Red Hat’s Global Customer Tech Outlook survey for 2017, security, compliance, management and hybrid cloud strategy closely follow infrastructure as top funding priorities for the year, with roughly half of the CIOs naming them a top priority.

Our partnership with Red Hat, announced a year and a half ago, brings more choice to hybrid cloud deployments in a secure, manageable and well-supported way, and drives agility across capabilities such as unified development and DevOps, integrated management, common identity and a consistent data platform.

Helping our customers transform in the cloud

To me, what’s most exciting about our partnership are the thousands of customers around the globe that are transforming their businesses with Red Hat solutions in Microsoft Azure.

Whether a customer is running a scale-out Red Hat Enterprise Linux cluster, using JBoss Middleware for an IoT solution or leveraging integrated support for OpenShift Container Platform, our partnership meets customers wherever they are in their cloud journey, and Microsoft is the only cloud provider that delivers consistency across on-premises and the public cloud while providing access to the rich Azure ecosystem.

This flexibility delivers unprecedented capacity and agility to organizations of all kinds and sizes. Joining me on stage today in my closing keynote, Terrance Snyder, Director of Media Platform Solutions at Catalina Marketing, will share how they use open source and where Microsoft and Red Hat fit. And there are many more great customer scenarios where Red Hat solutions in Azure are fueling digital transformation.

For example, Nielsen uses Red Hat Ansible with Azure to create and configure application resources from scratch in minutes, saving days of effort per environment. And TMB Bank uses Azure to fuel their digital transformation, choosing Microsoft for their Red Hat Enterprise Linux workloads thanks to its world-class security and compliance standards.

At Red Hat Summit, Volvo shared how they use OpenShift Container Platform in Azure for Java production applications, using Ansible to automate deployments across locations.

Pacifico Seguros, a Peruvian insurance company, uses Red Hat solutions in Azure such as JBoss Middleware and Red Hat Enterprise Linux to train agents, deploying a full core environment to Azure, including their P&C back-end.

In Japan, partner Visionarts helped Sony Corporation move a customer-facing function of the VAIO business to Azure. Visionarts is using Azure Log Analytics and Azure Security Center extensively for their Red Hat solutions, aggregating management tasks with no need to log in and providing proactive response to vulnerabilities.

The road ahead

One of the most exciting aspects of the Red Hat/Microsoft partnership is our integrated support, including co-located resources. Richard Hum, studio head at Throwback Entertainment, told us that his team was “surprised that Red Hat open source and Azure support resided in the same office.” In my keynote, I’ll share more about the individuals that make this partnership successful, and our joint learnings.

Over the last 18 months, we’ve brought innovation to both portfolios, from .NET Core on Red Hat Enterprise Linux to Red Hat solutions in Azure including OpenShift Container Platform and JBoss Middleware. Later today, Jim Zimmerman from Microsoft and Steve Pousty from Red Hat will demo some of the new features for developers that we’ve been working on over the last 18 months, including SQL Server, .NET Core and Windows Server plus OpenShift in Azure.

And today, as part of our enduring partnership that spans joint engineering, security and support efforts, Microsoft and Red Hat are announcing our plans to offer SQL Server 2017 on Red Hat Enterprise Linux, featuring high availability, immediately upon the SQL Server GA later this year. We’ll continue working with Red Hat on high value scenarios for our joint customers in the enterprise such as hybrid and container based solutions.

Finally, I’d like to invite you to learn more about Red Hat in Azure at our Microsoft Technology Centers where we’re rolling out the latest technical demos and learnings from customers and partners. If you want to get started with OpenShift in Azure, check the newly released Test Drive experience. And if you are not attending the Red Hat Summit, you can watch the keynote online and follow our event highlights on Twitter!
Source: Azure

1400 compatibility level in Azure Analysis Services

We are excited to announce the public preview of the 1400 compatibility level for tabular models in Azure Analysis Services! This brings a host of new connectivity and modeling features for comprehensive, enterprise-scale analytic solutions delivering actionable insights. The 1400 compatibility level will also be available in SQL Server 2017 Analysis Services, ensuring a symmetric modeling capability across on-premises and the cloud.

Here are just some highlights of the new features available to 1400 models.

New infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This enables:

Support for additional data sources, such as Azure Blob storage.
Data transformation and data mashup capabilities.

Support for BI tools such as Microsoft Excel to drill down to detailed data from an aggregated report. For example, when end users view total sales for a region and month, they can view the associated order details.
Object-level security to secure table and column names in addition to the data within them.
Enhanced support for ragged hierarchies such as organizational charts and chart of accounts.
Various other improvements for performance, monitoring and consistency with the Power BI modeling experience.

To benefit from the new features for models at the 1400 compatibility level, you’ll need to download and install SQL Server Data Tools (SSDT) 17.0.

In SSDT, you can select the new 1400 compatibility level when creating new tabular model projects. Alternatively, you can upgrade an existing tabular model by selecting the Model.bim file in Solution Explorer and setting the Compatibility Level to 1400 in the Properties window. Models at the 1400 compatibility level cannot be downgraded to lower compatibility levels.

New Infrastructure for Data Connectivity

1400 models introduce a new infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This is based on similar functionality in Power BI Desktop and Microsoft Excel 2016. At this point, only the following cloud-based data sources are supported with the 1400 compatibility level in Azure Analysis Services. We intend to add support for more data sources soon. For more information, please refer to the Analysis Services Team blog, and watch out for future posts to the Azure blog.

Azure SQL Data Warehouse
Azure SQL Database
Azure Blob storage

Detail Rows

A much-requested feature for tabular models is the ability to define a custom row set contributing to a measure value. Multidimensional models achieve this by using drillthrough and rowset actions. This allows end-users to view information in more detail than the aggregated level.

For example, the following PivotTable shows Internet Total Sales by year from the Adventure Works sample tabular model. Users can right-click the cell for 2010 and then select the Show Details menu option to view the detail rows.

By default, all the columns in the Internet Sales table are displayed. This behavior is often not meaningful for the user because too many columns may be shown, and the table may not have the necessary columns to show useful information such as customer name and order information.

Detail Rows Expression Property for Measures

1400 models introduce the Detail Rows Expression property for measures. It allows the modeler to customize the columns and rows returned to the end user. The following example uses the DAX Editor in SSDT to define the columns to be returned for the Internet Total Sales measure.

With the property defined and the model deployed, the custom row set is returned when the user selects Show Details. It automatically honors the filter context of the cell that was selected. In this example, only the rows for 2010 are displayed.

Further information on Detail Rows is available in this blog post.

Object-Level Security

Roles in tabular models already support a granular list of permissions and row-level filters to help protect sensitive data.

1400 models introduce table- and column-level security, allowing sensitive table and column names to be protected in addition to the data within them. Collectively, these features are referred to as object-level security (OLS).

The current version requires that OLS is set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM). We plan to deliver SSDT support soon. The following snippet of JSON-based metadata from the Model.bim file secures the Base Rate column in the Employee table of the Adventure Works sample tabular model by setting the MetadataPermission property of the ColumnPermission class to None.

"roles": [

  {

    "name": "General Users",

    "description": "All allowed users to query the model",

    "modelPermission": "read",

    "tablePermissions": [

      {

        "name": "Employee",

        "columnPermissions": [

           {

              "name": "Base Rate",

              "metadataPermission": "none"

           }

        ]

      }

    ]

  }

Unauthorized users cannot access the Base Rate column using client tools like Power BI and Excel Pivot Tables. Additionally, such users cannot query the Base Rate column using DAX or MDX, or measures that refer to it.

Further information on OLS is available in this blog post.

Ragged Hierarchies

Tabular models with previous compatibility levels can be used to model parent-child hierarchies. Hierarchies with a differing number of levels are referred to as ragged hierarchies. An example of a ragged hierarchy is an organizational chart. By default, ragged hierarchies are displayed with blanks for levels below the lowest child, which can look untidy to users, as in the Adventure Works organizational chart.

1400 models introduce the Hide Members property to correct this. Simply set the Hide Members property to Hide blank members.

With the property set and the model deployed, the more presentable version of the hierarchy is displayed.

Other Features

Various other features such as the following are also introduced with the 1400 compatibility level. For more information, please refer to the Analysis Services Team blog for what's new in SQL Server 2017 CTP 2.0 and SQL Server vNext on Windows CTP 1.1 for Analysis Services.

Transaction-performance improvements for a more responsive developer experience.
Dynamic Management View improvements enabling dependency analysis and reporting.
Hierarchy and column reuse to be surfaced in more helpful locations in the Power BI field list.
Date relationships to easily create relationships to date dimensions based on date columns.
DAX enhancements to make DAX more accessible and powerful. These include the IN operator and table/row constructors.

Try it Now!

To get started, simply create a 1400 model in SSDT and deploy it to Azure Analysis Services! See this post on how to create your first model. Be sure to keep an eye on this blog to stay up to date on Azure Analysis Services.
Source: Azure

Azure IoT Hub Server TLS Leaf certificate renewal – May 2017

This post contains important information about TLS certificate renewal for Azure IoT Hub endpoints, which may impact client connectivity.

As part of the periodic renewal cycle, the Azure IoT Hub leaf certificates used for TLS connections will be renewed starting mid-May 2017. This could potentially impact some clients connecting to the Azure IoT Hub service. This change only impacts Azure IoT Hubs created in the public Azure cloud, not Azure China or Azure Germany.

Certificate renewal summary

The table below provides information about the certificate being renewed. Depending on which cert your device or gateway clients use for TLS connection, action may be needed to prevent loss of connectivity.

Expected behavior

Not impacted: Devices connecting to Azure IoT Hub using the Azure IoT Device or Gateway SDK as provided. Your own connection code that relies on the root certificate, or SDKs that use the operating system's built-in certificate store for the TLS connection, will also not be impacted.
Potentially impacted: Devices using a connection stack other than the connection stack provided in an Azure IoT SDK. Specifically, connection logic that pins the leaf certificate will experience TLS connection failures after the rollover if not updated. Our recommendation is to pin the root certificates as they renew less frequently.

Validation

We recommend validation to mitigate any untoward impact to your IoT infrastructure connecting to Azure IoT Hub. We have set up a test environment for your convenience to try out before we renew the leaf certificate in Azure IoT Hub. The connection string for this test environment is: HostName=playground01.df.azure-devices-int.net;SharedAccessKeyName=owner;SharedAccessKey=0DvHNevPwsDjpMor6eT6aZefKp77Tdo7z2eaFX9kF5I=

A successful TLS connection to the test environment signifies a positive test outcome, and that your infrastructure will work with this change. This connection test string contains an invalid key so once the TLS connection is established, any runtime operations performed against this test IoT Hub will fail. This is by design as the hub exists solely for customers to validate their TLS connection functions. This test environment will be available until all public cloud regions have been updated.
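If you would rather script the check than wire up a full device client, the following Python sketch (standard library only) performs a TLS handshake against the test environment and prints details of the presented leaf certificate. The hostname comes from the connection string above; port 443 is an assumption here, as MQTT clients typically connect on 8883 instead.

# Minimal sketch: verify that a TLS handshake to the test environment
# succeeds using the operating system's built-in trust store.
# Port 443 is an assumption; MQTT clients typically use 8883.
import socket
import ssl

TEST_HOST = "playground01.df.azure-devices-int.net"  # from the connection string above
PORT = 443

context = ssl.create_default_context()  # validates the chain against the OS trust store

with socket.create_connection((TEST_HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=TEST_HOST) as tls:
        cert = tls.getpeercert()
        print("Handshake succeeded:", tls.version())
        print("Leaf subject:", dict(pair[0] for pair in cert["subject"]))
        print("Leaf issuer:", dict(pair[0] for pair in cert["issuer"]))

Because this handshake validates against the operating system's trust store, a success mirrors the "not impacted" scenario above; connection logic that pins the old leaf certificate would instead fail inside the wrap_socket call after the rollover.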

If you have any technical questions on implementing these changes, open a support request with the options below and an engineer will get back to you shortly.

Issue Type: Technical
Azure Service: Internet of Things/IoT SDKs
Problem Type: Security/Authentication
Glossary of terms: Root, Intermediate, and Leaf certificates – Types of digital certificates also known as public key or Identity certificates used to manage identity, access, and trust over a network.

Source: Azure

Reacting to maintenance events… before they happen

Introducing Scheduled Events (Preview)

What if you could learn about upcoming events which may impact the availability of your VM and plan accordingly?  Well, with Azure Scheduled Events you can.

Scheduled Events is one of the subservices under the Azure Metadata Service that surfaces information regarding upcoming events (for example, a reboot). Scheduled events give your application sufficient time to perform preventive tasks to minimize the effect of such events. Being part of the Azure Metadata Service, scheduled events are surfaced using a REST endpoint from within the VM. The information is available via a non-routable IP so that it is not exposed outside the VM.

What is covered with scheduled events

While we continue to invest in increasing the scope of scheduled events, the following are already covered during the preview:

VM-preserving maintenance (also known as in-place VM migration). This class of maintenance operations is used to patch and update the hosting environment (hypervisor and agents) without rebooting the VM. With VM-preserving maintenance, your VM freezes for up to 30 seconds without losing open files and network connections. While most modern applications are not impacted by such a short pause, some workloads (like gaming) are too sensitive and consider this an outage. With scheduled events, your application will be able to learn of such maintenance with an event type of Freeze.
VM-restarting maintenance. While the majority of updates have little to no impact on virtual machines, there are cases where we do need to reboot your virtual machine. With scheduled events, your application can detect such scenarios with the event type set to Reboot or Redeploy.
User operations. You may not reboot your production servers manually, but you can definitely reboot or redeploy your test VMs to test your failover logic. In both cases, a scheduled event is surfaced with the event type set to Reboot or Redeploy.

Use cases for scheduled events

We have observed several use cases for scheduled events:

Proactive failover. Instead of waiting for your application, SLB, or traffic manager to sense that something went wrong, you can proactively fail over to another node. In some cases, knowing that a VM will be back soon can help the application logic start accumulating and logging changes rather than failing over a partition/replica.
Drain a node. Instead of failing running jobs, you can block the VM from accepting new jobs and let it drain those already started.
Log and audit. Knowing that the VM was interrupted by Azure can simplify root cause analysis of availability issues.
Notify and correlate. Send a notification to your admin (human) or monitoring software and correlate the scheduled event with other signals.

Getting Started with scheduled events

You can query for scheduled events simply by making the following call from within a VNet-enabled VM:

curl -H Metadata:true "http://169.254.169.254/metadata/scheduledevents?api-version=2017-03-01"

A response contains an array of scheduled events. An empty array means that there are currently no events scheduled. In the case where there are scheduled events, the response contains an array of events:

{
  "DocumentIncarnation": {IncarnationID},
  "Events": [
    {
      "EventId": {eventID},
      "EventType": "Reboot" | "Redeploy" | "Freeze",
      "ResourceType": "VirtualMachine",
      "Resources": [{resourceName}],
      "EventStatus": "Scheduled" | "Started",
      "NotBefore": {timeInUTC}
    }
  ]
}
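To sketch how an agent inside the VM might consume this, the following Python example (standard library only) polls the endpoint and, once its preparation logic has run, acknowledges an event so that the platform may start it before the NotBefore time. The GET call mirrors the curl example above; the shape of the acknowledgement POST body is our reading of the preview documentation and should be verified, and the 60-second polling interval is just an illustrative choice.

# Minimal sketch: poll for scheduled events and acknowledge them once
# preparation (draining, failover, logging) is done. The POST body shape
# should be verified against the Scheduled Events documentation.
import json
import time
import urllib.request

ENDPOINT = ("http://169.254.169.254/metadata/scheduledevents"
            "?api-version=2017-03-01")

def get_scheduled_events():
    req = urllib.request.Request(ENDPOINT, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def acknowledge_event(event_id):
    # Signals that the VM is ready, so the platform may start the event
    # before the NotBefore time (assumed body shape; verify in the docs).
    body = json.dumps({"StartRequests": [{"EventId": event_id}]}).encode()
    req = urllib.request.Request(ENDPOINT, data=body,
                                 headers={"Metadata": "true"})
    urllib.request.urlopen(req, timeout=10)

while True:
    for event in get_scheduled_events().get("Events", []):
        print("Event:", event["EventType"], "status:", event["EventStatus"],
              "not before:", event.get("NotBefore"))
        # Placeholder for your preparation logic (drain jobs, fail over, log).
        if event["EventStatus"] == "Scheduled":
            acknowledge_event(event["EventId"])
    time.sleep(60)  # illustrative polling interval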

In order to trigger and test your logic dealing with scheduled events on your VM, simply go to the Azure portal and either Restart or Redeploy your VM.

Next Steps

Check out the Azure Scheduled Events documentation to learn more.
Take a look at the following sample, which uses Azure Event Hubs to collect scheduled events from multiple VMs.

Source: Azure

New Power BI content pack for Azure Enterprise users

We are excited to announce the release of the new Power BI content pack for Azure Enterprise users, which enables exploration and analysis of consumption data for enterprise enrollments. The data will be refreshed automatically once per day.

What's new?

The ability to specify a time range up to 36 months.
The content pack now includes Marketplace, Balance, and Summary data for the time range specified.
Price sheet data for the current billing period has also been added.
Addressed bugs from the previous version, and improved stability and performance.

Here are the steps to get started:

Navigate to Microsoft Azure Consumption Insights and click Get It Now.
Provide your enrollment number and the number of months, and click Next.
Provide your API access key to connect; see details below.
The import process will begin automatically. When complete, a new dashboard, report, and model will appear in the navigation pane. Click the dashboard to view your imported data.

For more information on how to generate the API key for your enrollment, please visit the API Reports help file on the Enterprise Portal. For more information on the new content pack, please download the Microsoft Azure Consumption Insights document.

As always, please reach out to us on the Azure Feedback forum (add #pbicp2) and through the Azure MSDN forum.
Source: Azure

Understanding performance and usage impact of releases using annotations in Application Insights

A little over a year ago, the Application Insights team re-introduced release annotations so that users can correlate occurrences of app and service releases with their APM data. VSTS users can simply add a step to their release scripts to create annotations, and users leveraging third-party release engines can push release annotations to Application Insights by simply calling a PowerShell script.

In the meantime, the VSTS team has been hard at work maturing continuous integration/deployment functionality in the VSTS release tools, as well as extending that functionality to many other platforms. As the concept of CI/CD continues to gain momentum and popularity in development shops around the world, the value of generating annotations for the associated tighter release cycle increases dramatically.

A simple example

In the above illustration, we have an app called MyEnterpriseApp being regularly deployed through continuous deployment. Two statistics have been chosen here: Users and browser page load time. Since annotations have been added to the release script, we can clearly see when our releases are occurring, and how they may be affecting performance or usage. In the earlier part of the chart, we see a very normal cadence of users: daily spikes during prime weekday hours, with valleys at night, followed by a lower number of users on the weekend. Our page load time is consistently around two seconds or less, and so our user base remains equally consistent.

If we look at the release that happens the following week (around April 17th on this chart), however, a problem is clearly introduced. Page load times shoot up to around eight seconds, and our number of active users goes into a nose-dive, dropping off dramatically until the number of users tolerating this level of performance is actually lower than what we would normally see on weekends. Following these events, we see another release around April 19th, which we can safely assume includes some kind of fix for the situation, because following the release our page load times drop back to the normal range, and our active user count rebounds as well now that performance is back to an acceptable state.

So what happened? Was the release not tested properly? Did an environmental factor exist in production that wasn’t present in test? The natural next step is to investigate the problem to ensure it doesn’t happen again. Again, we can leverage our release annotations to begin drilling into greater detail to solve the issue. Remember that we can hover over an annotation, and it will give us information about the release.

If we click on the information balloon, a detail blade opens up describing the release, complete with a link to the release script in VSTS (or a third-party system, so long as that was provided when the annotation was created through the PowerShell script).

In this way, we can quickly see who we should contact about the release, and investigate individual steps or deployed components (if we suspect a code error was introduced).

Since we can save Metrics Explorer results as favorites, we can retain the view of these significant statistics for future use. This allows us to come back in the following days and very quickly get a view of our releases to ensure that our errors are not being repeated, or allow us to investigate them immediately if they are. Simply being able to correlate our releases with our performance and usage data by use of release annotations dramatically reduces the time required for confirmation or investigation.

As always, please share your ideas for new or improved features on the Application Insights UserVoice page. For any questions visit the Application Insights Forum.
Source: Azure