Analytics integrated help through IntelliSense

When you edit a query in Analytics, IntelliSense makes suggestions as you type, offering auto-completion and a short description of each operation or function. Switching back and forth between the query editor and the language reference page can be tedious and time-consuming, and this is where IntelliSense helps most: it makes the query language easy to learn and use, and lets you be confident about the queries you’re writing.

Now, IntelliSense has been boosted to include a lot more helpful information.

If you’re familiar with Application Analytics’ query editor, then you’ve already experienced IntelliSense helping you with command-specific information. This is generated on-the-fly by analyzing the query and the current context in it. Here’s the familiar quick view, now suggesting auto-completion including command syntax and a short description:

But auto-completion is just the beginning! At the right end of this view, a new icon appears: “i”. Click the icon (or press Ctrl-Space) to switch to the extended view, which is the big news here. It provides integrated help, offering immediate access to command examples, detailed syntax, argument details, and even tips on how best to use each command:

Instead of wasting time searching for help, IntelliSense brings it right to you!
Source: Azure


Total Cost of (Non) Ownership of a NoSQL database service in 2016

Earlier today we published a paper, Total Cost of (Non) Ownership (TCO) of a NoSQL Database Cloud Service. TCO is an important consideration when choosing your NoSQL database, and customers often overlook many factors that affect it. In the paper we compare the TCO of running NoSQL databases in the following scenarios:

An OSS NoSQL database such as Cassandra or MongoDB hosted on-premises
An OSS NoSQL database hosted on virtual machines
A managed NoSQL database service such as Azure DocumentDB

To minimize our bias, we leveraged scenarios from other publications whenever possible.

In part 1 of our TCO paper, we explore an end-to-end gaming scenario from a similar paper, NoSQL TCO analysis, published by Amazon. We kept the scenario parameters and assumptions unchanged and used the same methodology for computing the TCO for OSS NoSQL databases on-premises and on virtual machines; of course, in our paper we used Azure Virtual Machines. The scenario explores an online game based on a movie and involves three levels of game popularity: the time before the movie is released (low usage), the first month after the movie releases (high usage), and subsequent usage (medium usage), with a different volume of transactions and data stored during each stage, as listed in the chart below.

The results of our analysis are fairly consistent with the AWS paper. Once all the relevant TCO considerations are taken into account, managed cloud services like DocumentDB and DynamoDB can be five to ten times more cost-effective than their OSS counterparts running on-premises or on virtual machines.

The following factors make managed NoSQL cloud services like DocumentDB more cost-effective than their OSS counterparts running on-premises or on virtual machines:

No NoSQL administration dev/ops required. Because DocumentDB is a managed cloud service, you do not need to employ a dev/ops team to handle deployments, maintenance, scaling, patching, and the other day-to-day tasks required with an OSS NoSQL cluster hosted on-premises or on cloud infrastructure.
Superior elasticity. DocumentDB throughput can be scaled up and down within seconds, allowing you to reduce the cost of ownership during non-peak times. OSS NoSQL clusters deployed on cloud infrastructure offer limited elasticity, and on-premises deployments are not elastic.
Economy of scale. Managed services like DocumentDB operate a very large number of nodes and are able to pass the savings on to the customer.
Cloud optimized. Managed services like DocumentDB take full advantage of the cloud. OSS NoSQL databases are not currently optimized for specific cloud providers. For example, OSS NoSQL software is unaware of the difference between a node going down and a routine image upgrade, or of the fact that a premium disk is already three-way replicated.
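The interplay of these factors can be sketched as a toy cost model. All inputs below (node prices, salaries, throughput charges) are hypothetical placeholders for illustration, not the figures from the whitepaper:

```python
# Toy TCO model contrasting a self-managed OSS NoSQL cluster with a
# managed service. All inputs are hypothetical placeholders.

def oss_cluster_tco(vm_cost_per_node_month, nodes, devops_salary_month,
                    devops_fraction, months):
    """TCO of OSS NoSQL on VMs: infrastructure plus the dev/ops time
    spent on deployment, patching, scaling, and maintenance."""
    infra = vm_cost_per_node_month * nodes * months
    ops = devops_salary_month * devops_fraction * months
    return infra + ops

def managed_service_tco(throughput_cost_month, storage_cost_month, months):
    """TCO of a managed service: you pay for provisioned throughput and
    storage; administration is part of the service."""
    return (throughput_cost_month + storage_cost_month) * months

oss = oss_cluster_tco(vm_cost_per_node_month=1200, nodes=8,
                      devops_salary_month=12000, devops_fraction=0.5,
                      months=12)
managed = managed_service_tco(throughput_cost_month=2500,
                              storage_cost_month=300, months=12)
print(f"OSS cluster: ${oss:,.0f}  managed: ${managed:,.0f}  "
      f"ratio: {oss / managed:.1f}x")
```

Under these made-up inputs the self-managed cluster comes out roughly 5–6x more expensive, in line with the five-to-ten-times range reported above once dev/ops labor is counted.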

In this moderate scenario, the TCO for Azure DocumentDB and AWS DynamoDB was comparable, with Azure DocumentDB slightly (~10%) cheaper due to lower costs for write requests.

Quantitative comparison

One challenge with the approach taken in Amazon’s whitepaper is the number of assumptions (often not explicitly articulated) made about the cost of running an OSS NoSQL database. To start with, the paper does not mention which OSS NoSQL database is being used for comparison. It is difficult to imagine that the TCO of running two very different NoSQL database engines, such as Cassandra and MongoDB, for the same scenario would be exactly the same. However, we think Amazon’s methodology retains important qualitative merit, this concern notwithstanding.

In the second section of our whitepaper we attempt to address this concern and provide a more precise quantitative comparison for more specific scenarios. We examine three scenarios:

Ingesting one million records/second
A balanced 50/50 read/write workload
Ingesting one million records/second in regular bursts
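The third scenario is where elasticity matters most: a self-managed cluster must be provisioned for peak throughput around the clock, while an elastic service can follow demand hour by hour. A minimal sketch, with capacity expressed in abstract throughput units and all numbers hypothetical:

```python
# Sketch of why elasticity matters for bursty ingestion: a fixed-size
# cluster is sized for peak and paid for all day, while an elastic
# service scales to each hour's demand. Numbers are hypothetical.

def fixed_cost(peak_units, unit_cost_per_hour, hours):
    # Self-managed cluster: provisioned for peak, billed around the clock.
    return peak_units * unit_cost_per_hour * hours

def elastic_cost(demand_by_hour, unit_cost_per_hour):
    # Elastic service: capacity follows each hour's demand.
    return sum(demand_by_hour) * unit_cost_per_hour

# A burst of 100 units for 2 hours a day, 5 units otherwise.
day = [100 if h in (9, 10) else 5 for h in range(24)]
print(fixed_cost(100, 1.0, 24))  # cost when provisioned for peak all day
print(elastic_cost(day, 1.0))    # cost when scaled with the bursts
```

The wider the gap between peak and baseline demand, the larger the advantage of the elastic service.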

We compare the TCO for these micro-scenarios when using the following NoSQL databases: Azure DocumentDB, Amazon DynamoDB, and OSS Cassandra on Azure D14v2 Linux Virtual Machines, a popular NoSQL choice for high-data-volume scenarios. To run tests with Cassandra, we utilize the open-source cassandra-stress command included in the open-source PerfKit Benchmarker.

Hourly TCO results depicted in the chart above are consistent with the observations in Part 1, with a few additional quantitative findings:

DocumentDB TCO is comparable to that of OSS Cassandra running on Azure D14v2 VMs for scenarios involving sustained, predominantly write-heavy workloads with low storage needs (i.e., the local SSD on the Cassandra nodes is sufficient), for example 1M writes/second with a time to live (TTL) of less than three hours, or workloads where most writes are updates. Cassandra is famous for its good performance in such scenarios and in the early stages of product development often looks very attractive for this reason. However, the non-trivial dev/ops cost component brings the total cost of ownership of a Cassandra deployment higher.
If more storage is needed, or the workload involves a balanced read/write mix, or the workload is bursty, DocumentDB TCO can be up to four times lower than OSS Cassandra running on Azure VMs. Cassandra's TCO is higher in these scenarios due to the non-trivial dev/ops cost of administering Cassandra clusters and Cassandra's lack of awareness of the underlying cloud platform. DocumentDB TCO is lower thanks to superior elasticity and a lower cost for reads and queries enabled by low-overhead auto-indexing.
DocumentDB is up to two to three times cheaper than DynamoDB for the high-volume workloads we examined. Thanks to the predictable performance guaranteed by both offerings, these numbers can be verified by simply comparing the public retail price pages. DocumentDB offers write-optimized, low-overhead indexing by default, making queries more efficient without the need to manage secondary indexes, and DocumentDB writes are significantly less expensive for high-throughput workloads.

In conclusion, we’d like to add that TCO is only one consideration (albeit an important one) when choosing a NoSQL database. Each of the products compared shines in its own way. Product capabilities, ease of development, support, community, and other factors need to be taken into account when making a decision. The paper includes a brief overview of DocumentDB functionality.

On the community front, we applaud the MongoDB and Cassandra projects for building significant communities around their offerings. To make Azure a better place for these communities, we recently offered protocol-level support for the MongoDB API as part of the DocumentDB offering, and we are encouraged by the feedback received to date from MongoDB developers. DocumentDB customers can now take advantage of the MongoDB API community's expertise without worrying about lock-in to proprietary APIs, a common concern with PaaS services.

As always, let us know how we are doing and what improvements you’d like to see going forward for DocumentDB through UserVoice, StackOverflow azure-documentdb, or Twitter @DocumentDB.
Source: Azure

Public Preview: Azure Data Lake Tools for Visual Studio Code (VSCode)

We are pleased to announce the Public Preview of the Azure Data Lake (ADL) Tools for VSCode. The tools provide users with a best-in-class, lightweight, keyboard-focused authoring experience for U-SQL as an alternative to the Data Lake Tools for Visual Studio.

By extending VSCode, leveraging the Azure Data Lake Java SDK for U-SQL job submission, and integrating with the Azure portal for job monitoring, the tools provide a cross-platform IDE. Users can run it smoothly on Windows, Linux, and Mac.

The ADL Tools for VSCode fully embrace the U-SQL language. Users can enjoy the power of IntelliSense, syntax highlighting, and error markers. The tools cover the core user scenarios of U-SQL scripting and U-SQL extensibility through custom code, and they integrate seamlessly with ADL, allowing users to compile and submit jobs to ADLA.

What features are supported in ADL Tools for VSCode?

U-SQL Language Authoring

The ADL Tools for VSCode allow users to fully utilize the power of U-SQL, a language you’ll be comfortable with from day one. They let users enjoy the advantages of U-SQL: processing any type of data, integrating with your custom code, and efficiently scaling to any size of data.

U-SQL Scripting

U-SQL combines the declarative advantages of T-SQL with the extensibility of C#. Users can create a U-SQL script in VSCode as a file with the .usql extension, and leverage the full feature set of the U-SQL language and its built-in C# expressions for job authoring and submission.

U-SQL Language Extensibility

ADL Tools for VSCode enable users to fully leverage U-SQL extensibility (e.g., UDOs, UDFs, UDAGGs) through custom code. Users can do so either by registering assemblies or by using the code-behind feature.

Manage Assembly

The Register Assembly command allows users to register custom code assemblies with the ADLA Metadata Service so that they can refer to UDFs, UDOs, and UDAGGs in their U-SQL scripts. This functionality allows users to package custom code and share it with others.

Code Behind

The easiest way to make use of custom code is to use the code-behind capabilities. Users can fill in the custom code for the script (e.g., Script.usql) into its code-behind file (e.g., Script.usql.cs). The advantage of code-behind is that the tooling takes care of the following steps for you when you submit your script:

It creates a C# code-behind file (.cs) and links it with the original U-SQL file.
It compiles the code-behind file into an assembly under the code-behind folder.
It registers and unregisters the code-behind assembly as part of the script through an automatic prologue and epilogue.

Azure Data Lake Integration

The ADL Tools for VSCode integrate seamlessly with Azure Data Lake Analytics (ADLA). Azure Data Lake includes all the capabilities required to make it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, and to do all types of processing and analytics across platforms and languages. U-SQL on ADLA offers Job as a Service with the Microsoft-invented U-SQL language. Customers do not have to manage the deployment of clusters; they simply submit their jobs to ADLA, an analytics platform managed by Microsoft.

ADLA – Metadata Navigation

Upon signing into Azure, users can view their ADLA metadata entities through a list of customized VSCode command items. The workflow and steps to navigate through ADLA metadata based on its hierarchy are managed through a set of command items.

ADLA – Job Submission

The ADL Tools for VSCode allow users to submit a U-SQL job to ADLA either through the Submit Job command in the command palette or through the right-click menu in a U-SQL file.

Users can output job results to either ADLS or Azure Blob storage based on their needs. U-SQL compilation and execution are performed remotely in ADLA.

How do I get started?

You need to first install Visual Studio Code and download the prerequisite files, including JRE 1.8.x, Mono 4.2.x (for Linux and Mac), and .NET Core (for Linux and Mac). Then get the latest ADL Tools by going to the VSCode Extension repository or the VSCode Marketplace and searching for Azure Data Lake Tools for VSCode.

For more information, check out the following links:

User Manual: Azure Data Lake Tools for VSCode
Tutorial: get started with Azure Data Lake Analytics

Learn more about today’s announcements on the Azure Data Lake Blog.

Discover more Azure service updates.

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.
Source: Azure

Azure Data Lake Analytics now generally available

Today, we are pleased to announce that Azure Data Lake Analytics is generally available. Since we announced the public preview, Azure Data Lake has become one of the fastest-growing Azure services, now with thousands of customers. With the GA announcement, we are revealing improvements we’ve made to the service, including productivity improvements for end users as well as security and availability improvements that make it ready for production deployments.

What is Azure Data Lake?

Today’s big data solutions have been driving some organizations from “rear-view mirror” thinking to forward-looking and predictive analytics. However, there have been adoption challenges, and widespread usage of big data has not yet occurred. Azure Data Lake was introduced to drive big data adoption by making it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, and to do all types of processing and analytics across platforms and languages. It removes the complexities of ingesting and storing all your data while making it faster to get up and running with big data. Azure Data Lake includes three services:

Azure Data Lake Store, a no limits data lake that powers big data analytics
Azure Data Lake Analytics, a massively parallel on-demand job service
Azure HDInsight, a fully managed cloud Hadoop and Spark offering

What is Azure Data Lake Analytics?

The Azure Data Lake Analytics service is a new distributed analytics job service that dynamically scales so you can focus on your business goals, not on distributed infrastructure. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights. The service can handle jobs of any scale instantly; you simply set the dial for how much power you need. You pay for your job only while it is running, making it cost-effective.

Azure Data Lake Analytics also provides a unified big data developer platform that integrates language, runtime, tooling, development environments, resource management, extensibility, and security, making developers and ISVs far more productive. It supports the entire end-to-end big data development lifecycle, from authoring to debugging, monitoring, and optimization.

Start in seconds, Scale instantly, Pay per job:

Our on-demand service will have you processing big data jobs within 30 seconds. There is no infrastructure to worry about because there are no servers, VMs, or clusters to wait for, manage, or tune. You can instantly scale the analytics units (processing power) from one to thousands for each job with literally a single slider. You pay only for the processing used per job. This model dramatically simplifies the lives of developers who want to start working with big data.

Tutorial: get started with Azure Data Lake Analytics using Azure portal
Demo: Getting Started with Azure Data Lake

Develop massively parallel programs with simplicity:

U-SQL is a simple, expressive, and extensible language that allows you to write code once and automatically have it parallelized for the scale you need. U-SQL blends the declarative nature of SQL with the expressive power of C#. In other declarative SQL-based languages used for big data, the extensibility model is “bolted on” and much harder to use. U-SQL allows developers to easily define and utilize user-defined types and user-defined functions written in any .NET language.

Big data developers need to accommodate any type of data: images, audio, video, documents. However, to handle those kinds of data, there are many existing libraries that are not all readily accessible to big data languages. U-SQL can seamlessly reuse any .NET library either one that is locally developed or published in repositories such as NuGet to handle any type of data. Developers can also use code written in R or in Python in their U-SQL scripts. After the code is written, you can deploy it as a massively parallel program letting you easily scale out diverse workload categories such as ETL, machine learning, cognitive science, machine translation, imaging processing, and sentiment analysis by using U-SQL and leveraging existing libraries.

Tutorial: Get started with Azure Data Lake Analytics U-SQL language
Develop U-SQL User defined operators for Azure Data Lake Analytics jobs
U-SQL Language Reference
Video: Introducing U-SQL – A new language for Massive Data Processing
Video: U-SQL Query Execution
Video: U-SQL Extensibility

Debug and Optimize your Big Data programs with ease:

With the tools that exist today, developers face serious challenges as their data workloads increase. Understanding bottlenecks in performance and scale is challenging and requires expertise in distributed computing and infrastructure. For example, developers must carefully account for the time and cost of data movement across a cluster and rewrite their queries or repartition their data to improve performance. Optimizing code and debugging failures in distributed cloud programs is now as easy as debugging a program in your personal environment. Our execution environment actively analyzes your programs as they run and offers recommendations to improve performance and reduce cost. For example, if you requested 1000 AUs for your program and only 50 AUs were needed, the system would recommend that you use only 50 AUs, resulting in a 20x cost savings.
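The 20x figure in that example is simply the ratio of allocated analytics units, since job cost scales linearly with AUs and duration. A quick sketch (the per-AU-hour price is a hypothetical placeholder, not Azure's actual rate):

```python
# Job cost scales linearly with allocated analytics units (AUs) and
# runtime, so right-sizing a job is a direct cost saving. The price per
# AU-hour below is a hypothetical placeholder.

def job_cost(analytics_units, hours, price_per_au_hour=2.0):
    return analytics_units * hours * price_per_au_hour

requested = job_cost(analytics_units=1000, hours=1)
right_sized = job_cost(analytics_units=50, hours=1)
print(requested / right_sized)  # 20.0 -> the 20x savings in the text
```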

Today, we are also announcing the availability of this big data productivity environment in Visual Studio Code, allowing users to have this type of productivity in a free, cross-platform code editor available on Windows, Mac OS X, and Linux.

Tutorial: develop U-SQL scripts using Data Lake Tools for Visual Studio
Video: Data Lake Developer Tools
Video: Getting Started with Debugging U-SQL

Virtualize your analytics:

Act on all your data with optimized data virtualization of your relational sources, such as Azure SQL Database and Azure SQL Data Warehouse. Queries are automatically optimized by moving processing close to the source data, without data movement, thereby maximizing performance and minimizing latency.

Video: U-SQL Federated Query

Enterprise-grade Security, Auditing and Support:

Extend your on-premises security and governance controls to the cloud to meet your security and regulatory compliance needs. Capabilities such as single sign-on (SSO), multi-factor authentication, and seamless management of millions of identities are built in through Azure Active Directory. Role-based access control and the ability to audit all processing and management operations are on by default. We guarantee a 99.9% enterprise-grade SLA and 24/7 support for your big data solution.

Overview of Security in Azure Data Lake

How do I get started?

To get started, you will need an Azure subscription or a free trial of Azure. With this in hand, you should be able to get Azure Data Lake Analytics up and running in seconds by going through this getting started guide. Also, visit our free Microsoft Virtual Academy course on Data Lake.

Free course: Microsoft Virtual Academy on Azure Data Lake
Overview of Azure Data Lake Analytics
Get started using Microsoft Azure Portal
Get started using Azure PowerShell
Get started using .NET SDK
Develop U-SQL Scripts using Data Lake Tools for Visual Studio
Use Data Lake Analytics interactive tutorial
Analyze weblogs using Data Lake Analytics
Get started with U-SQL
U-SQL reference
.NET SDK reference

Source: Azure

Azure Data Lake Store now generally available

Today, we are pleased to announce that Azure Data Lake Store is generally available. Since we announced the public preview, Azure Data Lake has become one of the fastest-growing Azure services, now with thousands of customers. With the GA announcement, we are revealing improvements we’ve made to the service, including making it more secure and highly available to make it ready for production deployments.

What is Azure Data Lake?

Today’s big data solutions have been driving some organizations from “rear-view mirror” thinking to forward-looking and predictive analytics. However, there have been adoption challenges, and widespread usage of big data has not yet occurred. Azure Data Lake was introduced to drive big data adoption by making it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, and to do all types of processing and analytics across platforms and languages. It removes the complexities of ingesting and storing all your data while making it faster to get up and running with big data. Azure Data Lake includes three services:

Azure Data Lake Store, a no limits data lake that powers big data analytics
Azure Data Lake Analytics, a massively parallel on-demand job service
Azure HDInsight, a fully managed cloud Hadoop and Spark offering

What is Azure Data Lake Store?

The value of a data lake resides in the ability to develop solutions across data of all types: unstructured, semi-structured, and structured. This begins with Azure Data Lake Store, the first cloud data lake for enterprises that is secure, massively scalable, and built to the open HDFS standard. With no limits on the size of data and the ability to run massively parallel analytics, you can now unlock value from all your analytics data. For example, data can be ingested into the store in real time from sensors and devices for IoT solutions, or from online shopping websites.

Petabyte-size files and trillions of objects:

Prior to Azure Data Lake Store, storing large datasets in the cloud was a major challenge. Artificial limits imposed by object stores make them unsuitable for large files that can be hundreds of terabytes in size, such as high-resolution video, genomic and seismic datasets, medical data, and data from a wide variety of industries. Azure Data Lake Store has revolutionary technology for analyzing and storing massive datasets. A single Azure Data Lake Store account can store trillions of files, and a single file can be greater than a petabyte in size, 200x larger than in other cloud stores.

This makes Data Lake Store ideal for storing any type of data including massive datasets like high-resolution video, genomic and seismic datasets, medical data, and data from a wide variety of industries.

Scalable throughput for massively parallel analytics:

Data Lake Store is built for running large analytic systems that require massive throughput to process and analyze petabytes of data. Without redesigning your application or repartitioning your data at higher scale, Data Lake Store scales throughput to support any size of analytic workload. It provides massive throughput to run analytic jobs with thousands of concurrent executors that read and write hundreds of terabytes of data efficiently. You need only focus on the application logic; we automatically optimize the store for any throughput level.

HDFS for the Cloud:

Microsoft Azure Data Lake Store supports any application that uses the open Apache Hadoop Distributed File System (HDFS) standard. By supporting HDFS, you can easily migrate your existing Hadoop and Spark data to the cloud without recreating your existing applications.

Use with Hadoop clusters
Use with Data Lake Analytics
Use with Stream Analytics
Use with Data Catalog
Use with Power BI

Always encrypted, Role-based security & Auditing:

Data Lake Store protects your data assets and easily extends your on-premises security and governance controls to the cloud. Data is always encrypted: in motion using SSL, and at rest using service-managed or user-managed HSM-backed keys in Azure Key Vault. Capabilities such as single sign-on (SSO), multi-factor authentication, and seamless management of millions of identities are built in through Azure Active Directory. You can authorize users and groups with fine-grained POSIX-based ACLs for all data in the store, enabling role-based access controls. Finally, you can meet security and regulatory compliance needs by auditing every access or configuration change to the system. We also guarantee a 99.9% enterprise-grade SLA and 24/7 support for your big data solution.
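The fine-grained, POSIX-based ACL model described above can be sketched as an rwx-style permission check. This is purely illustrative, with hypothetical entries; it is not the service's actual implementation:

```python
# Illustrative sketch of POSIX-style ACL evaluation, as used for
# fine-grained authorization on Data Lake Store entries. The entries
# and names are hypothetical; this mirrors the rwx bit model only.

READ, WRITE, EXECUTE = 4, 2, 1

def is_allowed(acl, user, user_groups, wanted):
    """acl maps ('user'|'group', name) -> permission bits.
    Access is granted if the union of the user's matching entries
    covers all the wanted bits."""
    bits = acl.get(("user", user), 0)
    for g in user_groups:
        bits |= acl.get(("group", g), 0)
    return bits & wanted == wanted

acl = {("user", "alice"): READ | WRITE,
       ("group", "analysts"): READ}
print(is_allowed(acl, "alice", [], WRITE))         # True
print(is_allowed(acl, "bob", ["analysts"], READ))  # True
print(is_allowed(acl, "bob", ["analysts"], WRITE)) # False
```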

Security overview
Access control lists
Secure massive datasets
Active Directory authentication
Video: Overview of Security in Azure Data Lake
Video: Developing with OAuth in Azure Data Lake
Video: Authorization in Azure Data Lake

How do I get started?

To get started, you will need an Azure subscription or a free trial of Azure. With this in hand, you should be able to get Azure Data Lake Store up and running in seconds by going through this getting started guide.

Also, visit our free Microsoft Virtual Academy course on Data Lake.

Free course: Microsoft Virtual Academy on Azure Data Lake
Video: Introduction to Azure Data Lake Store
What is Data Lake Store
Create account and upload data
Self-guided learning
Copy to and from Azure Blob Storage
Start with the REST API
Start with the .NET SDK
Start with the Java SDK
Security overview
Active Directory authentication

Source: Azure

General availability of Azure Application Insights

Today at the Connect() 2016 event in New York, we announced the general availability of Azure Application Insights (previously Visual Studio Application Insights) and launched our new pricing structure. With this announcement, Application Insights now provides a financially backed SLA offering 99.9% availability.

Application Insights is an integrated application performance management (APM) and application analytics solution. It enables development teams to understand how application performance relates to user experience and how these impact business outcomes.

If you are new to Application Insights, here is a quick overview and demo:


The main areas of Application Insights are:

Intelligent APM: Proactively monitor and improve the performance of the application you’re developing with advanced tools. Visual application maps pinpoint performance issues. Smart detection based on machine learning sends you alerts with embedded diagnostics. With Live Metrics Stream you can monitor your application health metrics in real time, while you’re deploying a change.
Analytics: With its rich query language, Analytics gives you answers to complex questions about your application’s performance and usage almost instantly. Ask creative questions about the performance and behavior of your apps with flexible, interactive queries, and refine them until you pinpoint the problem that impedes a desired business outcome. Once you derive an insight from the ad-hoc queries, you can share it as visuals across your organization in customizable dashboards or through integration with Power BI.
DevOps Integration & Extensibility: Tightly integrated with the Visual Studio product family, so you can read performance data right in your app’s code in the Visual Studio IDE. Integrations with Visual Studio Team Services, Team Foundation Server, and GitHub enable you to find and fix quality issues early in your DevOps workflows. Integrations with System Center and Operations Management Suite enable you to share application performance and system performance metrics across team boundaries and shorten the time to find the root causes of issues.

As part of being generally available, we have introduced a new pricing structure. You can still start for free with Application Insights, and there is no limitation on the APM and Analytics tools – you get the full feature set without cost. You only pay as your app grows and as you transmit more application telemetry to Application Insights, but you control how much you pay!

In addition, we are announcing these new features and enhancements to Application Insights, which we are proud to share with you today.

Increased raw data retention to 90 days for Analytics Queries
European data center option for storing Application Insights data
Improvements in Application Performance Management:

Smart detection of degradation in request performance
Correlation of availability-monitoring results with server-side telemetry, which lets you diagnose failures of your synthetic tests
Failure samples in Live Metrics Stream, which let you get insight into the details of failed requests, dependency calls, and exceptions in real time
Grid control in Azure Dashboards and additional charting options, such as percentage charts, in metric options

Enhancements to the CodeLens and Application Search capabilities inside Visual Studio provide more information in context to identify and fix issues sooner
Preview of the Application Insights REST APIs to access all your queries, events, and metrics data

We are continuously adding new capabilities to Application Insights and learning from our customers to better address your needs. We are fully committed to Azure Privacy standards and Security & Compliance policies, so that your data remains safe.

I would like to conclude with some inspiring words from one of our customers, AkzoNobel, a global paint firm, with whom we recently published a case study:

“With Application Insights, our attention has been directed multiple times to issues that would otherwise have taken much longer to detect. As a result, we’ve been able to maintain the service levels required for our application.”

–Rob Reijers, Manager ColorApps Development, AkzoNobel


Please share your ideas for new or improved features at the Application Insights User Voice and for any questions visit the Application Insights Forum.
Source: Azure

Application Insights: Three of the latest features in Visual Studio

Application Insights telemetry is a powerful tool for detecting, triaging, and diagnosing issues in your web services. The Developer Analytics Tools features in Visual Studio integrate Application Insights data into your editing and debugging workflows. Check out three of the latest features we’ve added.

Operation timelines in Application Insights Search

Web services are driven by requests. So, when you’re using telemetry to diagnose an issue in your web service, it’s helpful to see the telemetry in the context of the request that triggered the exception, dependency call, or custom event. These sequences of related events are called “operations” in Application Insights.

The new Track Operation tab on each event in the Application Insights Search tool shows other events that occurred during the same operation.

The Track Operation tab makes it easy to piece together what happened in your service before a problem occurred by listing events chronologically. Timelines and event duration data for each event can help you spot slow dependencies and improve the performance of your service. The slowest event in each operation is marked with a flame icon to make it easy to find.
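The grouping behind the Track Operation view can be sketched as: collect the events sharing an operation id, order them chronologically, and flag the slowest one. The event fields and names below are hypothetical illustrations, not the Application Insights telemetry schema:

```python
# Sketch of the grouping behind the Track Operation view: telemetry
# events sharing an operation id are listed chronologically, and the
# slowest is flagged. Event fields here are hypothetical illustrations.

from operator import itemgetter

events = [
    {"operation_id": "op-1", "name": "GET /cart", "start_ms": 0, "duration_ms": 310},
    {"operation_id": "op-1", "name": "SQL: SELECT cart", "start_ms": 20, "duration_ms": 220},
    {"operation_id": "op-1", "name": "HTTP: pricing svc", "start_ms": 250, "duration_ms": 40},
    {"operation_id": "op-2", "name": "GET /home", "start_ms": 5, "duration_ms": 15},
]

def operation_timeline(events, operation_id):
    ops = sorted((e for e in events if e["operation_id"] == operation_id),
                 key=itemgetter("start_ms"))
    slowest = max(ops, key=itemgetter("duration_ms"))
    # Mark the slowest event, as the flame icon does in the tool.
    return [(e["name"], e["duration_ms"], e is slowest) for e in ops]

for name, dur, flame in operation_timeline(events, "op-1"):
    print(f"{'[flame] ' if flame else '        '}{name}: {dur} ms")
```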

Request telemetry in CodeLens

Request telemetry for each of your ASP.NET controller methods is now shown in CodeLens. From the CodeLens indicator, you can see the number of requests in the last 24 hours along with the percentage of failed requests. By clicking the CodeLens indicator, you can also see the average response time for the request, plus comparisons between the last 24 hours and the prior 24 hours for each request metric.

By placing request telemetry right in your editor, it’s easy to spot production reliability and performance issues while you’re working in your codebase. Seeing how often a method has been requested in the last 24 hours can also provide useful context while making changes: “Does this method that I’m about to edit see a lot of usage in production?”

Learn more about Application Insights and CodeLens in the Azure documentation.

CodeLens for debug session telemetry

CodeLens can now also show telemetry from local debug sessions, even if you haven’t connected your application to the Application Insights service in Azure. In ASP.NET projects with the Application Insights SDK, you’ll see CodeLens indicators for exception and request telemetry from the most recent debug session. When you stop debugging and edit your code, you’ll have request response times, exception data, and more at your fingertips.

Get started

The Developer Analytics Tools features are included with Visual Studio 2015 and Visual Studio “15.” They’re also available as an extension on the Visual Studio Gallery.

To connect your application to the Application Insights service and enable production telemetry:

Right-click your project in the Solution Explorer and choose Add Application Insights Telemetry…
Follow the directions in the Application Insights Configuration window.

If you prefer, you can just add the Application Insights SDK to your application for debug session telemetry without connecting to the Application Insights service:

Right-click your project in the Solution Explorer and choose Add Application Insights Telemetry…
Follow the directions in the Application Insights Configuration window.
Look for a link to “Just add the SDK to try local-only mode.”

Let us know in the comments below how you use the Developer Analytics Tools, and how else you’d like to use Application Insights telemetry from within Visual Studio.
Source: Azure