Azure Site Recovery powers new Managed DR offerings by Microsoft Partners

Today we are excited to announce that two of our Microsoft partners, Rackspace and CDW, now offer Azure Site Recovery as a managed DR solution for customers.

Rackspace, with over 15 years of Microsoft experience and expertise, and a five-time Microsoft Hosting Partner of the Year, now offers Cloud Replication for Hyper-V. This new service provides a disaster recovery strategy for customers running Microsoft Hyper-V workloads. Rackspace assists with capacity planning and Azure architecture to enable successful replication of virtual machines, and provides regular failover testing to ensure application availability.

“Azure Site Recovery provides a powerful solution to allow our customers to replicate critical resources to Microsoft Azure as part of their overall disaster recovery requirements,” said Duan van der Westhuizen, Director of Product for Microsoft Cloud at Rackspace. “We combine our expertise and services on Microsoft Azure and Hyper-V with the tools and automation from ASR, to provide our customers a fully managed replication solution, at a fraction of the cost of running a secondary environment.”

CDW, with nearly two decades of Microsoft product sales and service experience, and a certified Microsoft Cloud Service Provider with deep experience providing managed services on Azure, now offers CDW Disaster Recovery as a Service (DRaaS). CDW’s DRaaS offer combines the expertise of CDW’s professional services team with the capabilities of its managed services to plan, implement and manage tailored DRaaS plans and procedures.

In the words of Aaron Melius, Solution Architect, CDW, “We built CDW DRaaS around Microsoft Azure Site Recovery because of how well integrated ASR is with other Azure services and VMware-enabled infrastructure. ASR is structured for predictability of performance and pricing alike, so our customers that implement it can count on it to make disaster recovery smooth, easy and economical – especially when it’s bundled with our professional and managed services.”

To get started with Azure Site Recovery today, visit our website. To work with a partner, check out Azure Site Recovery partners.
Source: Azure

Get the most out of your Azure portal experience

Hello, Azure friends! If you manage Azure resources, you’re probably already familiar with the Azure portal. In this blog post, I’d like to highlight some of the experiences and capabilities that you can take advantage of to get the most out of your Azure portal experience.

Quickly find what you need

Getting to your resources in a fast and convenient way is key for productivity. The Azure portal provides a search functionality, always present in the top navigation bar, that allows you to search across your resources, resource groups, available services, and public Azure documentation.

When the search box receives focus, it immediately provides access to your recently used resources.

As you type a search string, it searches for matches in your list of resources (e.g. virtual machines, databases, app services, etc.), resource groups, services available in Azure, and documentation.


Notice that the search above is performed across multiple subscriptions (in this case across all my 34 subscriptions). This is a configurable setting, so you are in full control of the scope of your search.

The search box is always present in the top navigation bar. You can also get to it using the key combination “G” + “/” and jump to any point of interest using just the keyboard!

Browse through your resources

Very often you will want or need to browse through your resources. In fact, this is the most common entry point for most customers using the portal. You can browse through all of your resources, or scope resources by type (e.g. all your virtual machines, app services, etc.). In both cases, you have many additional filters available to continue scoping down the list and focus on the resources that are important to you. These experiences display resources across multiple subscriptions and locations, so you can get to everything that you care about in one screen. The image below shows a single list displaying resources from my 34 subscriptions across all locations: 

Act on multiple resources

Want to start multiple virtual machines at once? The list of resources provides the ability to either act on one or multiple resources.

The animation below shows the following scenario: I have created a few testing VMs and need to start them to run some tests. I can start all of them with a single, quick interaction.

Organize resources using tags

Tags allow you to annotate your resources with information that you can later use to organize those resources logically. Tags also show up on your billing data, so you can use them for both resource and cost management. We are improving the tagging experience, and you can now tag or untag multiple resources with a single interaction, as shown in the image below.

Consistent experience

Azure offers a wide range of services, and learning how to perform basic management across all of them can be daunting. The Azure portal provides a consistent experience that spans from finding a resource instance, covered at the beginning of this post, to performing basic management operations. The image below highlights some of the common patterns using the Virtual Machine overview screen as an example.

Management landing pages for most resources contain the same structure and basic elements, so once you learn a few patterns, you can apply them across most of the portal. The image below shows the overview pages for App Service, Virtual Machine, and SQL. Notice that all screens follow the structure introduced in the previous image.

Make it your own

We know that everyone works differently, and to support that we offer multiple ways of customizing the portal to match the way you work and bring what is important to you front and center. Here are some things that you can customize:

Items in the left navigation bar
Theme (we support 4 different themes)
Language and locale (we support 18 languages)
Columns in the browse lists (All Resources, Virtual Machines, App Services, etc.)
Dashboards (more on this in the next section)

Customizable dashboards

The Azure portal dashboard is a canvas that you can make your own. It is fully customizable, and you can change it to bring what is important to you front and center.

You can create multiple dashboards and share them with your colleagues, as explained in this article. Dashboards can be updated by dragging and dropping tiles, or programmatically. The image below shows an example of a dashboard that you can build in the Azure portal:

The Azure portal provides a vast variety of tiles that you can use to build your dashboard. Two tiles that I’d like to call out are ARM Data and ARM Actions:

The ARM Data tile allows you to display data from any of your resources in Azure Resource Manager. Just add an instance of the tile to your dashboard, point it to a resource URI, and navigate to the property that you want to display. The ARM Actions tile is similar, but for actions on resources.
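For reference, a resource URI follows the standard Azure Resource Manager path format. The sketch below uses placeholder names, not values from this post:

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>
```

Pointing an ARM Data tile at a URI like this and navigating to one of the resource's properties surfaces that value directly on the dashboard.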

In the image below, the ARM Data and ARM Actions tiles are used to display information about a VM and two commands for starting and stopping it:

If it is in Azure Resource Manager, it can be in your dashboard!

Integrated console experience in the portal

You can manage your account using a point-and-click GUI or via a command-line experience using Cloud Shell (Bash or PowerShell). Cloud Shell provides an authenticated, browser-based shell experience hosted in the cloud and accessible from virtually anywhere. It is embedded in the portal and always one click away.

Leverage the power of Azure Resource Manager templates

As you progress in your Azure journey, it is very likely that you will use Azure Resource Manager templates. You can create your template from scratch, or use or extend one of the more than 600 templates available in the Azure Quickstart Templates gallery.

The Azure portal provides a great experience to author and execute templates. You can quickly get to the template authoring experience by searching for “custom template” in the global search box:


This experience is fully integrated with the Azure Quickstart Templates gallery, so you can load any template from the gallery. Once you find a template that fits your needs, you can edit it using the Template Editor screen, which provides a full outline of the template, syntax coloring, and helpers to easily add new resource instances, and then execute the template. If you decide to execute the template, the portal provides a data entry screen with specialized fields and validations to reduce data input errors.
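As a reminder of what such a template looks like, here is a minimal, illustrative sketch that deploys a single storage account (the parameter and resource names are placeholders, not taken from the gallery):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Name of the storage account to create" }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-01-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

The Template Editor's outline view maps directly onto this structure: parameters, variables, and each entry under resources.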

In addition to what we have already shown above, the portal provides a template library that you can use to store your own templates.

Also, every resource exposes an “Automation script” option in its left menu that provides the Azure Resource Manager template, along with the code for running the template in CLI, PowerShell, .NET, and Ruby, for the entire resource group in which that resource is included:

Make the most out of Azure

Building cloud applications is a hard task, and making the most out of the platform is even harder. Azure Advisor helps you make the most out of Azure by providing recommendations for improving cost, availability, security, and performance. Azure Advisor is available by default in the left navigation bar. Once you get to Azure Advisor, you can select the subscriptions for which the platform will provide recommendations to help you optimize your applications and infrastructure.

Start using Azure Advisor today and make the most out of Azure!

Monitor your Azure resources

Azure Monitor provides base-level infrastructure metrics and logs for most services in Microsoft Azure. It is available by default in the left navigation bar, so staying on top of your infrastructure and applications is just one click away!

Notice that the Azure Monitor screen above follows the same UX patterns that we introduced previously in this blog post under “Consistent experience”. Learn more about the latest on Azure Monitor.

Optimize your cloud spend

With the Azure portal, you are always one click away from seeing and understanding what you are being charged for. The default left navigation bar has an entry that takes you to the “Cost Management + Billing” screen.

Try Azure Cost Management. We’d love to hear your feedback.

Take Azure with you, everywhere!

The Azure mobile app enables you to stay informed, connected, and in control of your Azure resources and applications. Learn more about the Azure mobile app.

Download the Azure mobile app today and let us know what you think!

Get early access to new features

In the Azure portal preview, we often deploy features at an early stage. If you want to try those new features, please use the preview stamp and let us know what you think, as your feedback will help us improve those features!

Let us know what you think!

We’ve gone through a lot, and still haven’t covered everything available in the Azure portal! The team is always hard at work improving the experience and is always eager to get your feedback and learn how we can make your experience better. Feel free to reach out directly to me at lwelicki@microsoft.com with your feedback or any thoughts about the Azure user experience.

Let’s build the best cloud experience together!
Source: Azure

Five more reasons why you should download the Azure mobile app

This post was co-authored by Ilse Terrazas Ortega, Program Manager, Azure mobile app

You may have already heard about the Azure mobile app at the Build conference back in May 2017. The app lets you stay connected with Azure even when you are on the go. You can read more details in our launch blog post from May.

Over the last few months, we have been working closely with our customers to improve the Azure mobile app. And today, we are excited to share five more reasons why the Azure app is a must-have.

1. Monitoring resources

The Azure mobile app allows you to quickly check the status of your resources at a glance. Drill in to see more details like metrics, the Activity Log, and properties, and to execute actions.

2. Executing scripts to respond to issues

Need to urgently execute your get-out-of-trouble script? You can use Bash, and now even PowerShell, in Cloud Shell to take full control of your Azure resources. All of your scripts are stored in CloudDrive for use across the app and the portal.


3. Organizing resources and resource groups

Have a lot of resources? No problem, you can favorite your most important resources across subscriptions and keep them in your Favorites tab for easy access.

Start creating your Favorites list now – you can do it from the resource view or directly from the resources list tab as shown below.

4. Resource sharing

Tired of sending screenshots to your coworkers to help them find a resource? Now you can share a direct link to the resource via email, text message or other apps with the click of a button.

5. Tracking Azure Health incidents

The Azure mobile app can even help you track Azure Health incidents. Just scan the QR code from the portal and track the incident from your phone.

Download the preview app today and let us know what you'd like to see next in the feedback forum. Keep an eye out for updates and follow @AzureApp on Twitter for the latest news.
Source: Azure

WANdisco enables continuous data replication on Azure HDInsight for Big Data applications

We are pleased to announce the expansion of the HDInsight Application Platform to include WANdisco. You can install the WANdisco Fusion app and take advantage of the free trial, too.

Azure HDInsight is the industry-leading, fully-managed cloud Apache Hadoop and Spark offering, which gives you optimized open-source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and Microsoft R Server, backed by a 99.9% SLA.

WANdisco Fusion provides continuous replication of selected data at scale between multiple Big Data and cloud environments. With guaranteed data consistency and continuous availability, Microsoft Azure HDInsight customers will now have easy access to the cost-saving benefits of Fusion’s hybrid architecture for on-demand data analytics and offsite disaster recovery.

This combined offering of WANdisco on Azure HDInsight enables customers to connect their Big Data applications from on-premises environments to HDInsight and expand their analytical footprint faster. Customers can easily use more open source workloads and libraries in the cloud, since they can create clusters on demand and run them against the data replicated by WANdisco.

To learn more please come to our presentation Extend on-premises Hadoop and Spark deployments across data centers and the cloud, including Microsoft Azure with Pranav Rastogi, Program Manager, Microsoft and Jagane Sundar, Chief Technology Officer, WANdisco at Strata Data Conference New York on Thursday, September 28, 2017 at 1:15 PM in room 1A03. To find out more, please visit the Strata Data Conference website.

The engineering teams are also hosting a webinar where they will discuss this offering in detail. Please join us by registering today.

Microsoft Azure HDInsight – Reliable Open Source Analytics at Enterprise grade and scale

Azure HDInsight is the only fully-managed cloud Hadoop offering that provides optimized open source analytical clusters for Spark, Hive, Interactive Hive, MapReduce, HBase, Storm, Kafka, and R Server, backed by a 99.9% SLA. Each of these Big Data technologies is easily deployable as a managed cluster, with enterprise-level security and monitoring.

The ecosystem of productivity applications in Big Data has grown immensely to help customers be more productive with their Big Data solutions. Today, customers often find it challenging to discover these productivity applications, and in turn struggle to install and configure them.

To address this gap, the HDInsight Application Platform provides a unique experience in HDInsight where Independent Software Vendors (ISVs) can directly offer their applications to customers. Customers can now easily discover, install, and use these applications built for the Big Data ecosystem with a single click.

Setting up a hybrid environment for Big Data scenarios has always been a huge challenge, since customers had to replicate petabytes of data and keep both environments in sync. To help customers connect their on-premises Big Data environments with HDInsight, WANdisco Fusion can be deployed as an HDInsight application.

WANdisco Fusion on Azure HDInsight – Move petabyte scale data from on-premises Big Data deployments to Azure

The integration of WANdisco Fusion with Azure HDInsight presents an enterprise solution that enables organizations to meet stringent data availability and compliance requirements whilst seamlessly moving production data at petabyte scale from on-premises big data deployments to Microsoft Azure.

As customers start moving parts of their Big Data applications to Azure, they gain the flexibility to experiment with advanced analytical offerings, such as running R Server on HDInsight, and with more open source machine learning libraries. Traditionally, experimenting with these on an on-premises Hadoop deployment has been hard due to IT and hardware procurement, but the elasticity of HDInsight, where you can spin up, scale, and delete clusters on demand, allows you to easily experiment in the cloud. Once you have done your analysis, you can then determine how much of your Big Data deployment you should migrate to the cloud.

Customers can use Fusion for the following scenarios:

Hybrid cloud setup for Big Data applications: Connect on-premises Big Data deployments to HDInsight. You can set up replication from any Hadoop or Spark distribution running any open source workload (Hive, Spark, HBase, and more)
Multi-cloud: Connect any Big Data deployment running in any cloud to Azure HDInsight
Multi-region replication for back-up and disaster recovery

The following are some of the key benefits of Fusion on HDInsight:

Continuous data replication: Data is replicated as soon as changes occur, regardless of where those changes are initiated, with guaranteed consistency
Opt-in backup: An administrator can select subsets of content for replication, with fine-grained control over where data resides
No administrator overhead: Replication is continuous and automatic, recovering from intermittent network or system failures automatically so that the need for administration oversight is eliminated

Getting started with Fusion on HDInsight

Installing Fusion is a two-step process, which configures the Fusion server and the client libraries required on the cluster:

1. Install the Fusion server: This will install the Fusion server in the same Azure Virtual Network as the HDInsight cluster, which allows the server to access the cluster in a secure manner.

2. Install the Fusion app on a new HDInsight cluster or an existing cluster. In the License key field, enter the public IP of the Fusion server.


After you have installed Fusion on HDInsight, you can follow the user guide to set up continuous active replication from on-premises Big Data deployments to Azure HDInsight, multi-region replication, backup and restore, and more.

Strata Presentation and Webinar

To learn more, join our presentation with Pranav Rastogi and Jagane Sundar at Strata Data Conference New York on Thursday, September 28, 2017 (details at the beginning of this post), and register for the engineering teams' webinar, where they will discuss this offering in detail.

Resources

Install WANdisco Fusion App on Azure HDInsight
Install WANdisco Fusion Server
Try WANdisco for free
Learn more about Azure HDInsight
User Guide for WANdisco

Summary

We are pleased to announce the expansion of the HDInsight Application Platform to include WANdisco. This combined offering of WANdisco on Azure HDInsight enables customers to connect their Big Data applications from on-premises environments to HDInsight in the cloud faster. Please visit us at the Strata session and register for the upcoming webinar to learn more.
Source: Azure

Azure Log Analytics – meet our new query language

Azure Log Analytics has recently been enhanced to work with a new query language. The query language itself actually isn’t new at all, and has been used extensively by Application Insights for some time. Recently, the language and the platform it operates on have been integrated into Log Analytics, which allows us to introduce a wealth of new capabilities, and a new portal designed for advanced analytics.

This post reviews some of the cool new features now supported. It’s just the tip of the iceberg though, and you're invited to also review the tutorials on our language site and our Log Analytics community space. The examples shown throughout the post can also be run in our Log Analytics playground – a free demo environment you can always use, no registration needed.

Pipe-away

Queries collect data, stored in one or more tables. Check out this basic query:

Event

This is as simple as you can get, but it's still a valid query that simply returns everything in the Event table. Grabbing every record in a table usually means way too many results, though. When analyzing data, a common first step is to review just a handful of records from a table, and plan how to zoom in on relevant data. This is easily done with “take”:

Event
| take 10

This is the general structure of queries – multiple elements separated by pipes. The output of the first element (i.e., the entire Event table) is the input of the next one. In this case, the final query output will be 10 records from the Event table. After reviewing them, we can decide how to make our query more specific. Often, we will use “where” to filter by a specific condition, such as this:

Event
| where EventLevelName == "Error"

This query returns all records in the table where EventLevelName equals “Error” (the comparison is case sensitive).

Looks like our query still returns a lot of records, though. To make sense of all that data, we can use “summarize”. Summarize identifies groups of records by a common value, and can also apply aggregations to each group.

Event
| where EventLevelName == "Error"
| summarize count() by Computer

This example returns the number of Event records marked as Error, grouped by computer.

Try it out on our playground!

Search

Sometimes we need to search across all our data, instead of restricting the query to a specific table. For this type of query, use the “search” keyword:

search "212.92.108.214"
| where TimeGenerated > ago(1h)

The above example searches all records from the last hour that contain a specific IP address.

Scanning all data could take a bit longer to run. To search for a term across a set of tables, scope the search this way:

search in (ConfigurationData, ApplicationInsights) "logon" or "login"

This example searches only the ConfigurationData and ApplicationInsights tables for records that contain the terms “logon” or “login”.

Note that search terms are case insensitive by default. Search queries have many variants; you can read more about them in our tabular operators.

Query-time custom fields

We often find that we want to calculate custom fields on the fly and use them in our analysis. One way to do this is to assign our own name to automatically-created columns, such as ErrorsCount:

Event
| where EventLevelName == "Error"
| summarize ErrorsCount=count() by Computer
| sort by ErrorsCount

But adding fields does not require using summarize. The easiest way to do it is with extend:

Event
| where TimeGenerated > datetime(2017-09-16)
| where EventLevelName == "Error"
| extend PST_time = TimeGenerated-8h
| where PST_time between (datetime(2017-09-17T04:00:00) .. datetime(2017-09-18T04:00:00))

This example calculates PST_time, which is based on TimeGenerated but adapted from UTC to the PST time zone. The query uses the new field to filter only records created between 2017-09-17 at 4 AM and 2017-09-18 at 4 AM, PST time.

A similar operator is project. Instead of adding the calculated field to the result set, project keeps only the projected fields. In this example, the results will have only four columns:

Event
| where EventLevelName == "Error"
| project TimeGenerated, Computer, EventID, RenderedDescription

Try it out on our playground.

A complementary operator is project-away, which specifies columns to remove from the result set.
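For instance, assuming the Event table includes verbose columns such as ParameterXml and EventData that we don't need, a sketch:

```
Event
| where EventLevelName == "Error"
| project-away ParameterXml, EventData
```

This keeps every remaining column, which is convenient when you only know which fields you want to drop.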

Joins

Join merges the records of two data sets by matching values of the specified columns. This allows richer analysis that relies on the correlation between different data sources.

The following example joins records from two tables – Update and SecurityEvent:

Update
| where TimeGenerated > ago(1d)
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize missing_updates=makeset(Title) by Computer
| join (
SecurityEvent
| where TimeGenerated > ago(1h)
| summarize count() by Computer
) on Computer

Let’s review the two data sets being matched. The first data set is:

Update
| where TimeGenerated > ago(1d)
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize missing_updates=makeset(Title) by Computer

This takes Update records from the last day that describe needed security updates. It then summarizes the set of required updates per computer.

The second data set is:

SecurityEvent
| where TimeGenerated > ago(1h)
| summarize count() by Computer

This counts how many SecurityEvent records were created in the last hour, per computer.

The common field we matched on is Computer, so we eventually get a list of computers, each with its list of missing security updates and its total number of security events in the last hour.

The default visualization for most queries is a table. To visualize the data graphically, add “| render barchart” at the end of the query, or select the Chart button shown above the results. The outcome can help us decide how to manage our next updates:

We can see that the most required update is 2017-09 Cumulative Update for Windows Server, and that the first computer to handle should probably be ContosoAzADDS1.ContosoRetail.com.

Joins have many flavors – inner, outer, semi, etc. These flavors define how matching should be performed and what the output should be. To learn more about joins, review our joins tutorial.

Next steps

Learn more on how to analyze your data:

Query language doc site
Getting started with queries
Upgrading to the new query language

Source: Azure

Announcing general availability of Azure Managed Applications Service Catalog

Today we are pleased to announce the general availability of Azure Managed Applications Service Catalog.

Service Catalog allows corporate central IT teams to create and manage a catalog of solutions and applications to be used by employees in that organization. It enables organizations to centrally manage approved solutions and ensure compliance. It also enables end customers, in this case the employees of an organization, to easily discover the list of approved solutions. They can consume these solutions without having to worry about learning how a solution works in order to service, upgrade, or manage it; all of this is taken care of by the central IT team that publishes and owns the solution.

In this post, we will walk through the new capabilities that have been added to Managed Applications and how they improve the overall experience.

Improvements

We have made improvements to the overall experience and made authoring much easier and more straightforward. Some of the major improvements are described below.

Package construction simplified

In the preview version, the publisher needed to author three files and package them in a .zip. One of them was a template file containing only the Microsoft.Solutions/appliances resource. In this file, the publisher also had to re-specify all of the parameters needed for the deployment of the actual resources, even though these parameters were already specified in the other template file. Although this was needed, it caused redundant and often confusing work for publishers. Going forward, this file will be auto-generated by the service.

So, only two files are now required in the package (.zip): i) mainTemplate.json, the template file that contains the resources that need to be provisioned, and ii) createUIDefinition.json.

If your solution uses nested templates, scripts or extensions, those don’t need to change.
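Putting this together, a minimal package before zipping might look like the following sketch (the file names are the ones required by the service; app.zip is a placeholder name):

```
app.zip
├── mainTemplate.json       (resources to provision)
└── createUIDefinition.json (portal UI for parameter input)
```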

Portal support enabled

At preview, we just had CLI support for creating a managed application definition for the Service Catalog. Now, we have added Portal and PowerShell support. With this, the central IT team of an organization can use the portal to quickly author a managed application definition and share it with others in the organization, without needing to learn the different CLI commands.

These can be discovered in the portal by clicking on “More Services” and then searching for Managed. Don’t use the ones that say “Preview”.


To create a managed application definition, select “Service Catalog managed application definitions” and click the “Add” button. This will open the blade shown below.

Support for providing template files inline instead of packaging as .zip

Creating a .zip file, uploading it to a blob, making it publicly accessible, getting the URL, and then creating the managed application definition still requires a lot of steps. So, we have enabled another option where you can specify these files inline, using new parameters that have been added to CLI and PowerShell. Support for inline template files will be added to the portal shortly.

Service Changes

Please note that the following major changes have been made to the service.

New api-version

The general availability release introduces a new api-version, 2017-09-01, which enables you to leverage all of the above-mentioned improvements. The Azure portal and the latest versions of Azure CLI and Azure PowerShell use this new api-version. You will need to switch to these latest versions to develop and manage Managed Applications. Note that creating and managing Managed Applications will not be supported using the existing version of CLI after 9/25/2017. Existing resources created with the old api-version (old CLI) will continue to work.

Resource type names have changed

The resource type names have changed in the new api-version: Microsoft.Solutions/appliances is now Microsoft.Solutions/applications, and Microsoft.Solutions/applianceDefinitions is now Microsoft.Solutions/applicationDefinitions.
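To illustrate, a Service Catalog managed application resource declared with the new type in a template might look roughly like this (a sketch with placeholder names and IDs, not from this post; consult the Managed Applications documentation for the authoritative schema):

```json
{
  "type": "Microsoft.Solutions/applications",
  "apiVersion": "2017-09-01",
  "name": "myManagedApp",
  "location": "westcentralus",
  "kind": "ServiceCatalog",
  "properties": {
    "managedResourceGroupId": "[concat(subscription().id, '/resourceGroups/myManagedRG')]",
    "applicationDefinitionId": "[resourceId('Microsoft.Solutions/applicationDefinitions', 'myAppDef')]"
  }
}
```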

Upgrade to the latest CLI and PowerShell

As mentioned above, to continue using and creating Managed Applications, you will have to use the latest versions of CLI and PowerShell, or you can use the Azure portal. Existing versions of these clients built on the older api-version will no longer be supported. Your existing resources will be migrated to the new resource types and will continue to work with the new versions of the clients.

Supported locations

Currently, the supported locations are West Central US and West US 2.

Please try out the new version of the service and let us know your feedback through our user voice channel or in the comments below.

Additional resources

Publish a Marketplace Managed Application
Publish a Service Catalog Managed Application
How to create UIDefinition for the Managed Application
Managed Applications samples GitHub repository

Source: Azure

Query across resources

We’re excited to introduce cross-resource querying – the ability to query not only the current workspace or application, but to analyze data from other resources as well, in a single query.

Until now, queries were limited to the scope of a single Application Insights app, or a single Log Analytics workspace. Today, we support querying across multiple apps or across multiple workspaces, providing a true system-wide view on your data.

Querying across Application Insights apps

Refer to an external application by using an app identifier:

union app('mmsportal-prod').requests, app('fabrikamapp').requests, requests
| summarize count() by bin(timestamp, 1h)

The above example queries records of the requests table in 3 separate apps: mmsportal-prod, fabrikamapp, and my current app (which doesn’t require a name, as I refer directly to the table). It then counts the total number of records, regardless of the application that holds each record.

Querying across Log Analytics workspaces

Refer to an external workspace by using a workspace identifier:

union Update, workspace("contosoretail-it").Update
| where TimeGenerated >= ago(1h)
| where UpdateState == "Needed"
| summarize dcount(Computer) by Classification

The above example queries the Update table both in my current workspace, and in another workspace named contosoretail-it. It then counts distinct records of needed updates by their classification, regardless of the workspace that holds each record.

Identifying resources

Identifying applications and workspaces can be done in several ways:

Resource name – this is a human-readable name of the app or workspace. We sometimes refer to this as the “component name”.
workspace("contosoretail").Update | count
Note: Since workspace and app names are not unique across subscriptions or resource groups, this identifier can be ambiguous if the user has access to multiple components with the same name. In such cases the query will fail on ambiguity.
Qualified Name – this is the “full name” of the app or workspace, composed of the subscription name, resource group, and component name, in this format: <subscriptionName>/<resourceGroup>/<componentName>.
app('AI-Prototype/Fabrikam/fabrikamprod').requests | count
Note: Since Azure subscription names are not unique, this identifier might be ambiguous.
App or workspace ID – this is a GUID, the unique, immutable, public identifier of the app or workspace:
workspace("b438b4f6-912a-46d5-9cb1-b44069212ab4").Update | count

Azure Resource ID – the Azure-defined identity of the app or workspace.

For apps, the format is: /subscriptions/<subscriptionId>/resourcegroups/<resourceGroup>/providers/microsoft.insights/components/<componentName>.
For example:
app("/subscriptions/7293b69-db12-44fc-9a66-9c2005c3051d/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count

For workspaces, the format is: /subscriptions/<subscriptionId>/resourcegroups/<resourceGroup>/providers/microsoft.OperationalInsights/workspaces/<componentName>.
For example:
workspace("/subscriptions/e427267-5645-4c4e-9c67-3b84b59a6982/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail").Event | count
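The identifier formats above are fixed path templates, so they can be composed mechanically. As a small illustration (a hypothetical helper, not part of any Azure tooling), the two Azure Resource ID variants differ only in the provider segment:

```shell
# Build the Azure Resource ID for an Application Insights app.
app_resource_id() {
  # $1 = subscription ID, $2 = resource group, $3 = component name
  echo "/subscriptions/$1/resourcegroups/$2/providers/microsoft.insights/components/$3"
}

# Build the Azure Resource ID for a Log Analytics workspace.
workspace_resource_id() {
  # $1 = subscription ID, $2 = resource group, $3 = workspace name
  echo "/subscriptions/$1/resourcegroups/$2/providers/Microsoft.OperationalInsights/workspaces/$3"
}

# Example: compose an identifier to paste into a workspace(...) expression.
workspace_resource_id "e427267-5645-4c4e-9c67-3b84b59a6982" "ContosoAzureHQ" "contosoretail"
```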

Favorite resources

A cool new feature is “Favorites” – in the Analytics portal, the Schema pane now has a list of your favorite resources, intended to provide quick access to the resources you query the most. To add an item to the list, you can either hover over the currently active resource and click the star icon, or select the "Edit" button and search for the relevant resource.

In the context of cross-resource querying, you’ll notice that Intellisense suggests identifiers based on your favorites. Regardless of suggestions, you can in fact refer to any resource you have access to.

Next steps

Learn more on how to analyze your data:

· Query language doc site

· Getting started with queries
Source: Azure

Get started with Monitoring in Azure

We’re happy to announce the public preview of a new overview landing page in Azure Monitor. This landing page is designed to help you understand the monitoring capabilities offered by Azure, and to make it easier to discover, configure, and on-board Azure’s platform and premium monitoring capabilities.

The new Overview helps users that are new to Azure get started by on-boarding Azure alerts, Log Analytics, and Application Insights. It also provides a view to Azure’s always-on platform monitoring, starting with Activity Log error counts and an Azure Service Health summary that helps you catch any failure points in your environment.

As you on-board to richer capabilities, the overview gives you a starting point for navigation, and shows notable issues from different services to let you know if you should take a look at them. You might use the overview for a quick daily review of environment health, or to see what else needs attention after you receive an alert. Today, the overview can be scoped to a single subscription at a time, so the view shows you the health and configuration for the part of your environment you choose. We’ll be expanding out the scope of visibility in future releases.

If any of your Azure resources are logging events with error-level severity, the Activity Log Error count shows you this key indicator, and you can then click through to the dedicated Activity Log page to investigate each event. Similarly, Azure Service Health, which provides personalized information about any issues in Azure that are impacting your services, gives you an always-on view to service issues, planned maintenance events, and health advisories.

We recommend three core services to get more visibility to your Azure resources. Configuring Azure Alerts is a great way to get notified of any unexpected performance degradations or unexpected activity on your resources. If you don’t have Log Analytics set up for your subscription, we’ll guide you to get started so you can unlock deep insights on your data.

Finally, the Azure Monitor overview gives you a new high-level view to your Application Insights monitoring, showing you KPIs for load, latency, failures, and availability. In addition to the alerting and highly customizable workflows you can set up inside Application Insights, the Overview provides a quick view to your application health to see which ones are worth checking on. The Application Insights table is optimized for server-side application monitoring across ASP.NET web apps, Java, and Node.js applications.

We’ll be continuing to expand this page to cover more of Azure’s monitoring capabilities, and to make it as easy as possible to discover and navigate to the monitoring capabilities that are right for your environment. If you have any feedback, please reach us on User Voice.
Source: Azure

Azure SQL Database VNET Service Endpoints now in public preview

We are excited to announce that Azure SQL Database and Azure SQL Data Warehouse VNET Service Endpoints are now in public preview in the following regions: West Central US, West US2, and East US1.

This feature allows you to isolate connectivity to your SQL database to only a given subnet or set of subnets within your VNET(s). Even though the connectivity is still on Azure SQL Database's public endpoint, the traffic stays within the Azure backbone network. This direct route is preferred over any forced-tunneling route that takes Internet traffic back to on-premises. We also provide for separation of roles: VNET Service Endpoints can be provisioned by the Network Admin, by the Database Admin, with the roles split between the two, or by a new entity created with the help of custom RBAC roles.

Limitations

Each SQL Server can have up to 128 Virtual Network based ACLs
Applies only to ARM VNETs

This does not extend to on-premises via ExpressRoute, Site-to-Site (S2S) VPN, or peered VNETs.

Considerations

At the time of this preview, Network Security Groups (NSGs) should be opened to the Internet to allow Azure SQL Database traffic. In the future, NSGs will be able to be opened only to the IP ranges of the PaaS services; IP tags for Azure SQL Database are on the roadmap for CY17.

With VNET Service Endpoints, the source IP addresses of resources in your VNET's subnet switch from public IPv4 addresses to the VNET's private addresses for traffic to Azure SQL Database. Any existing open TCP connections to your database service may be closed during this switch, so please make sure no critical tasks are running when Service Endpoints is turned on or off.

If traffic to Azure SQL Database is to be inspected by a network virtual appliance (NVA), we recommend turning on VNET Service Endpoints for the NVA subnet, instead of the subnet from which the Azure SQL Database traffic originates in the given VNET.

When Service Endpoints is turned on for a subnet, it is applied sequentially to all VMs in that subnet, and the call commits only once Service Endpoints has been successfully applied to every VM. You can ACL a given VNET/subnet on your server only after Service Endpoints has been successfully applied to that VNET/subnet, so there can be a window of potential downtime between issuing the Service Endpoints call and ACLing the server.
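The recommended order of operations can be sketched with two Azure CLI commands (resource names are placeholders, the parameter names should be checked against the current CLI reference, and running this requires an authenticated Azure subscription): first enable the endpoint on the subnet, then ACL the server.

```shell
# 1) Turn on the Microsoft.Sql service endpoint for the subnet.
#    This is the step that is applied sequentially to all VMs in the subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --service-endpoints Microsoft.Sql

# 2) Only after step 1 commits, add the VNET rule (ACL) on the SQL server.
az sql server vnet-rule create \
  --resource-group myResourceGroup \
  --server myserver \
  --name mySubnetRule \
  --vnet-name myVNet \
  --subnet mySubnet
```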

To learn more check out VNet Service Endpoints and rules for Azure SQL Database.
Source: Azure

General availability of HDInsight Interactive Query – blazing fast queries on hyper-scale data

It’s 2017, and big data challenges are as real as they get. Our customers have petabytes of data living in elastic and scalable commodity storage systems such as Azure Data Lake Store and Azure Blob storage.

One of the central questions today is finding insights from data in these storage systems in an interactive manner, at a fraction of the cost. 

Interactive Query leverages Hive on LLAP in Apache Hive 2.1, bringing interactivity to your complex data-warehouse-style queries over large datasets stored in commodity cloud storage.

Today, we announce the general availability of the Interactive Query cluster type in Azure HDInsight (formerly known as Interactive Hive). With this offering, we are bringing the following benefits to our customers:

Fast Data warehouse style SQL queries on petabyte-scale data

Intelligent caching and optimizations in Interactive Query produce blazing-fast query results on remote cloud storage, such as Azure Blob storage and Azure Data Lake Store.

Interactive Query enables data analysts to query data interactively in the same storage where the data is prepared, eliminating the need to move data from storage to another analytical engine for data warehousing. With zero data migration, you gain faster insights, operational resiliency, and reduced effort, as well as a simplified architecture.

Modern scalable query concurrency architecture

With the introduction of much-improved fine-grained resource management and preemption, Interactive Query (Hive on LLAP) is better suited for concurrent users. In addition, HDInsight supports creating multiple clusters on shared Azure storage, and a shared Hive metastore helps in achieving a high degree of concurrency.

Rich connectivity with the most popular authoring tools

Interactive Query enables end users to consume data from rich business intelligence tools such as Power BI, Tableau, Excel, Hive View 2.0, Beeline, the Hive CLI, and Visual Studio, as well as the built-in Zeppelin notebook.

Today, we are happy to announce the preview of Interactive Query tools for Visual Studio Code. Rich connectivity options eliminate the learning curve, so users are productive sooner.

Leverage your existing investments in HDInsight by sharing the data and Hive metastore

If you already run your batch and ETL workloads in HDInsight, leveraging an Interactive Query cluster for fast querying is straightforward: attach an Interactive Query cluster to your existing metastore and data storage, and start querying the data right away.

Achieve low latency with SSD caching without the cost of SSDs

Interactive Query SSD Cache enables you to combine RAM and SSD into a giant pool of memory with all of the other benefits the LLAP cache brings. By using the LLAP SSD cache, a typical daemon can cache four times more data, letting you process larger datasets or support more users. In HDInsight, cluster nodes have built-in SSD at no extra cost.

Say no to data format conversion in order to get faster results

Fast analytics on Hadoop have always come with one big catch: they require up-front conversion to a columnar format such as ORCFile, Parquet, or Avro, which is time-consuming and complex, and limits your agility. With the Interactive Query Dynamic Text Cache, which converts CSV or JSON data into an optimized in-memory format on the fly, caching is dynamic: the queries determine what data is cached. After text data is cached, analytics run just as fast as if you had converted it to a specific file format.

Enterprise Grade Security and Monitoring (preview)

Interactive Query is built on top of the highly secure Azure and HDInsight platform. With features such as domain-joined HDInsight clusters, you can create an Interactive Query cluster joined to an Active Directory domain and configure a list of employees from the enterprise who can authenticate through Azure Active Directory to log on to the HDInsight cluster.

You can monitor Interactive Query clusters with built-in tools such as Grafana and Ambari, as well as the integration we have built with Azure Log Analytics to monitor all of your resources with a single pane of glass.

Additional resources

Get started with HDInsight Interactive Query Cluster in Azure
Learn more about Azure HDInsight
Use Hive on HDInsight
Open Source component guide on HDInsight
HDInsight release notes
Ask HDInsight questions on the MSDN forums
Ask HDInsight questions on Stack Overflow

Summary

This week at Ignite, we are pleased to announce the general availability of Azure HDInsight Interactive Query. Backed by our enterprise-grade SLA, HDInsight Interactive Query brings sub-second speed to data-warehouse-style SQL queries on hyper-scale data stored in commodity cloud storage.
Source: Azure