Microsoft Cognitive Services – General availability for Face API, Computer Vision API and Content Moderator

This post was authored by the Cognitive Services Team.

Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using natural methods of communication. We have made adding intelligent features to your platforms easier.

Today, at the first ever Microsoft Data Amp online event, we’re excited to announce the general availability of Face API, Computer Vision API and Content Moderator API from Microsoft Cognitive Services.

Face API detects human faces and compares similar ones, organizes people into groups according to visual similarity, and identifies previously tagged people and their emotions in images.
Computer Vision API gives you the tools to understand the contents of any image. It generates tags that identify objects, living beings such as celebrities, and actions in an image, and crafts coherent sentences to describe it. You can now also detect landmarks and handwriting in images. Handwriting detection remains in preview.
Content Moderator provides machine assisted moderation of text and images, augmented with human review tools. Video moderation is available in preview as part of Azure Media Services.

Let’s take a closer look at what these APIs can do for you.

Bring vision to your app

Previously, users of Face API could obtain attributes such as age, gender, facial points, and head pose. Now, it's also possible to obtain emotions in the same Face API call. This addresses user scenarios in which both age and emotions were requested simultaneously. Learn more about Face API in our guides.
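To illustrate the combined call, here is a hedged Python sketch that only constructs the request, without sending it. The endpoint URL, placeholder key, and helper name are illustrative assumptions, not an official sample; the attribute list follows the query-string style used by the Face API detect operation:

```python
from urllib.parse import urlencode

# Illustrative values - replace with your region's endpoint and a valid subscription key.
FACE_API_ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
SUBSCRIPTION_KEY = "putyourkeyhere"

def build_detect_request(return_face_attributes=("age", "gender", "headPose", "emotion")):
    """Build the URL and headers for a single detect call that requests
    age, gender, head pose, and emotion together (hypothetical helper)."""
    params = urlencode({
        "returnFaceId": "true",
        "returnFaceAttributes": ",".join(return_face_attributes),
    })
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",  # body would be {"url": "<image url>"}
    }
    return FACE_API_ENDPOINT + "?" + params, headers

url, headers = build_detect_request()
print(url)
```

Sending this request (for example with an HTTP client of your choice) would return the face rectangle plus all requested attributes in one round trip.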

Recognizing landmarks

We’ve added more richness to Computer Vision API by integrating landmark recognition. Landmark models, as well as Celebrity Recognition, are examples of Domain Specific Models. Our landmark recognition model recognizes 9,000 natural and man-made landmarks from around the world. Domain Specific Models is a continuously evolving feature within Computer Vision API.

Let’s say I want my app to recognize this picture I took while traveling:

You might have an idea of where this photo was taken, but how could a machine easily know?

In C#, we can leverage these capabilities by making a simple REST API call, as shown below. Examples in other languages are at the bottom of this post.

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

namespace CSHttpClientSample
{
    static class Program
    {
        static void Main()
        {
            Console.Write("Enter image file path: ");
            string imageFilePath = Console.ReadLine();

            MakeAnalysisRequest(imageFilePath);

            Console.WriteLine("\n\nHit ENTER to exit...\n");
            Console.ReadLine();
        }

        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
            BinaryReader binaryReader = new BinaryReader(fileStream);
            return binaryReader.ReadBytes((int)fileStream.Length);
        }

        static async void MakeAnalysisRequest(string imageFilePath)
        {
            var client = new HttpClient();

            // Request headers. Replace the second parameter with a valid subscription key.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "putyourkeyhere");

            // Request parameters. Change "landmarks" to "celebrities" in requestParameters and uri to use the Celebrities model.
            string requestParameters = "model=landmarks";
            string uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/models/landmarks/analyze?" + requestParameters;
            Console.WriteLine(uri);

            HttpResponseMessage response;

            // Request body. Try this sample with a locally stored JPEG image.
            byte[] byteData = GetImageAsByteArray(imageFilePath);

            using (var content = new ByteArrayContent(byteData))
            {
                // This example uses content type "application/octet-stream".
                // The other content types you can use are "application/json" and "multipart/form-data".
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(uri, content);
                string contentString = await response.Content.ReadAsStringAsync();
                Console.WriteLine("Response:\n");
                Console.WriteLine(contentString);
            }
        }
    }
}

The successful response, returned as JSON, would be the following:

```json
{
  "requestId": "b15f13a4-77d9-4fab-a701-7ad65bcdcaed",
  "metadata": {
    "width": 1024,
    "height": 680,
    "format": "Jpeg"
  },
  "result": {
    "landmarks": [
      {
        "name": "Colosseum",
        "confidence": 0.9448209
      }
    ]
  }
}
```
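Parsing that response to pull out the recognized landmark is straightforward. As a hedged sketch (Python here for brevity; the helper name is ours and the response is hard-coded for illustration):

```python
import json

# The sample response from above, hard-coded for illustration.
response_text = """
{
  "requestId": "b15f13a4-77d9-4fab-a701-7ad65bcdcaed",
  "metadata": {"width": 1024, "height": 680, "format": "Jpeg"},
  "result": {
    "landmarks": [
      {"name": "Colosseum", "confidence": 0.9448209}
    ]
  }
}
"""

def best_landmark(response_json):
    """Return (name, confidence) of the highest-confidence landmark, or None."""
    landmarks = json.loads(response_json).get("result", {}).get("landmarks", [])
    if not landmarks:
        return None
    top = max(landmarks, key=lambda lm: lm["confidence"])
    return top["name"], top["confidence"]

print(best_landmark(response_text))  # → ('Colosseum', 0.9448209)
```

The same pattern applies to the celebrities model, whose result carries a `celebrities` array instead.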

Recognizing handwriting

Handwriting OCR is also available in preview in Computer Vision API. This feature detects handwritten text in an image and extracts the recognized characters into a machine-usable character stream.
It detects and extracts handwritten text from notes, letters, essays, whiteboards, forms, and more. It works with different surfaces and backgrounds, such as white paper, sticky notes, and whiteboards. There is no need to transcribe handwritten notes anymore; you can snap an image instead and use Handwriting OCR to digitize your notes, saving time, effort, and paper clutter. You can even do a quick search when you want to pull the notes up again.

You can try this out yourself by uploading your sample in the interactive demonstration.

Let’s say that I want to recognize the handwriting on this whiteboard:

An inspirational quote I’d like to keep.

In C#, I would use the following:

using System;
using System.IO;
using System.Collections;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;

namespace CSHttpClientSample
{
    static class Program
    {
        static void Main()
        {
            Console.Write("Enter image file path: ");
            string imageFilePath = Console.ReadLine();

            ReadHandwrittenText(imageFilePath);

            Console.WriteLine("\n\n\nHit ENTER to exit...");
            Console.ReadLine();
        }

        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
            BinaryReader binaryReader = new BinaryReader(fileStream);
            return binaryReader.ReadBytes((int)fileStream.Length);
        }

        static async void ReadHandwrittenText(string imageFilePath)
        {
            var client = new HttpClient();

            // Request headers - replace this example key with your valid subscription key.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "putyourkeyhere");

            // Request parameters and URI. Set "handwriting" to false for printed text.
            string requestParameter = "handwriting=true";
            string uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText?" + requestParameter;

            HttpResponseMessage response = null;
            IEnumerable<string> responseValues = null;
            string operationLocation = null;

            // Request body. Try this sample with a locally stored JPEG image.
            byte[] byteData = GetImageAsByteArray(imageFilePath);
            var content = new ByteArrayContent(byteData);

            // This example uses content type "application/octet-stream".
            // You can also use "application/json" and specify an image URL.
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            try
            {
                response = await client.PostAsync(uri, content);
                responseValues = response.Headers.GetValues("Operation-Location");
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }

            foreach (var value in responseValues)
            {
                // This value is the URI where you can get the text recognition operation result.
                operationLocation = value;
                Console.WriteLine(operationLocation);
                break;
            }

            try
            {
                // Note: The response may not be immediately available. Handwriting recognition is an
                // async operation that can take a variable amount of time depending on the length
                // of the text you want to recognize. You may need to wait or retry this operation.
                response = await client.GetAsync(operationLocation);

                // And now you can see the response in JSON:
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }
        }
    }
}
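The sample above fetches the operation result only once, but as the comment notes, recognition is asynchronous and the result may not be ready on the first GET. The retry logic can be sketched generically; this is a hedged Python illustration with the HTTP call stubbed out (a real implementation would GET the Operation-Location URI with the same subscription-key header, and the status strings follow the sample response below, so check the service documentation for the full set):

```python
import time

def poll_operation(get_result, max_attempts=10, delay_seconds=1.0):
    """Poll get_result() until the operation leaves a pending state,
    or raise TimeoutError after max_attempts. get_result must return a
    dict shaped like the recognizeText operation result."""
    for _ in range(max_attempts):
        result = get_result()
        if result.get("status") not in ("NotStarted", "Running"):
            return result  # terminal state, e.g. "Succeeded" or "Failed"
        time.sleep(delay_seconds)
    raise TimeoutError("Text recognition did not finish in time")

# Stubbed 'server' for illustration: succeeds on the third poll.
_responses = iter([{"status": "Running"}, {"status": "Running"},
                   {"status": "Succeeded", "recognitionResult": {"lines": []}}])
result = poll_operation(lambda: next(_responses), delay_seconds=0.01)
print(result["status"])  # → Succeeded
```

In production you would also cap the total wait time and surface a "Failed" status to the caller instead of treating every terminal state as success.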

Upon success, the OCR results include the recognized text plus bounding boxes for regions, lines, and words, in the following JSON:

```json
{
  "status": "Succeeded",
  "recognitionResult": {
    "lines": [
      {
        "boundingBox": [542, 724, 1404, 722, 1406, 819, 544, 820],
        "text": "You must be the change",
        "words": [
          { "boundingBox": [535, 725, 678, 721, 698, 841, 555, 845], "text": "You" },
          { "boundingBox": [713, 720, 886, 715, 906, 835, 734, 840], "text": "must" },
          { "boundingBox": [891, 715, 982, 713, 1002, 833, 911, 835], "text": "be" },
          { "boundingBox": [1002, 712, 1129, 708, 1149, 829, 1022, 832], "text": "the" },
          { "boundingBox": [1159, 708, 1427, 700, 1448, 820, 1179, 828], "text": "change" }
        ]
      },
      {
        "boundingBox": [667, 905, 1766, 868, 1771, 976, 672, 1015],
        "text": "you want to see in the world !",
        "words": [
          { "boundingBox": [665, 901, 758, 899, 768, 1015, 675, 1017], "text": "you" },
          { "boundingBox": [752, 900, 941, 896, 951, 1012, 762, 1015], "text": "want" },
          { "boundingBox": [960, 896, 1058, 895, 1068, 1010, 970, 1012], "text": "to" },
          { "boundingBox": [1077, 894, 1227, 892, 1237, 1007, 1087, 1010], "text": "see" },
          { "boundingBox": [1253, 891, 1338, 890, 1348, 1006, 1263, 1007], "text": "in" },
          { "boundingBox": [1344, 890, 1488, 887, 1498, 1003, 1354, 1005], "text": "the" },
          { "boundingBox": [1494, 887, 1755, 883, 1765, 999, 1504, 1003], "text": "world" },
          { "boundingBox": [1735, 883, 1813, 882, 1823, 998, 1745, 999], "text": "!" }
        ]
      }
    ]
  }
}
```
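To reconstruct the transcribed sentence from a result like this, you can simply join the per-line text fields. A minimal Python sketch (the helper name is ours, and the result dict is abbreviated to the fields used):

```python
def extract_text(operation_result):
    """Join the per-line 'text' fields of a recognizeText operation result."""
    lines = operation_result.get("recognitionResult", {}).get("lines", [])
    return "\n".join(line["text"] for line in lines)

# Abbreviated version of the sample result above.
sample = {
    "status": "Succeeded",
    "recognitionResult": {
        "lines": [
            {"text": "You must be the change"},
            {"text": "you want to see in the world !"},
        ]
    },
}

print(extract_text(sample))
# → You must be the change
#   you want to see in the world !
```

The word-level bounding boxes remain available if you need to highlight or crop individual words in the original image.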

To easily get started in your preferred language, please refer to the following:

The Face API page and quick-start guides for C#, Java, Python, and many more.
The Computer Vision API page and quick-start guides for C#, Java, Python, and more.
The Content Moderator page, where you can test drive Content Moderator to learn how we enable a complete, configurable content moderation lifecycle.

For more information about our use cases, don’t hesitate to take a look at our customer stories, including a great use of our Vision APIs by GrayMeta.

Happy coding!
Source: Azure

Discover and act on insights with new Azure monitoring and diagnostics capabilities

Imagine if you could get a dynamic, end-to-end view of your IT environment, and respond to issues as they arise. You could see how your applications, services, and workloads are connected, and how your servers interact with the networks. You could see connections turn red if they fail, view cascading alerts, and see rogue clients that could be causing problems. Not only that, but with just a tap of a button you could fix the issue, and see the red alerts go away. Bringing together your data from workloads, applications, and networks helps you start to see the big picture. However, when you apply machine learning and crowd-sourced knowledge from around the globe, suddenly you can visualize and act on your data in a way that you never have before.

To help you get this visibility into your cloud and on-premises environment and make it actionable, today we are introducing new monitoring and diagnostics capabilities in Azure:

Visibility

Map out process and server dependencies with Service Map, a new technology in Azure Insight & Analytics, to make it easier to troubleshoot and plan ahead for future changes or migrations.
Use DNS Analytics, a new solution in Azure Insight & Analytics, to help you visualize real-time security, performance, and operations-related data for your DNS servers.

Action

Remediate issues right away with a new option in Azure Insight & Analytics to Take Action and resolve an issue directly from a log search result.
Use the Smart Diagnostics functionality in Azure Application Insights to diagnose sudden changes in the performance or usage of your web application.

In addition, expanded support for Linux has been added to monitoring tools and capabilities in Azure Automation & Control, including Linux patching and Linux file change tracking. You can also now ingest custom logs into Azure Application Insights for more powerful data correlations and analytics.

Visualize dependencies in your environment

A big part of what makes you successful is how quickly you can resolve issues. We take the hard part out of data collection and insight generation, so you can analyze issues and resolve them more quickly. The Service Map technology, now generally available, allows you to automatically discover and build a common reference map of dependencies across servers, processes, and third-party services, in real time. This helps you isolate problems and accelerate root-cause analysis by visualizing process and server dependencies. In the same dashboard, you can see prioritized alerts, recent changes, notable security issues, and system updates that are due.

In addition, you can use this new capability to make sure nothing is left behind during migrations with the help of the detailed process and server inventory, or to identify servers that need to be decommissioned. InSpark uses Service Map to help customers plan and execute migrations to Azure. “Before Service Map, we had to rely on customers to provide information about their servers’ dependencies, and that information was error prone and incomplete,” says Maarten Goet, Managing Partner. “Now, with Service Map, we immediately see all of their dependencies and we can build an accurate plan for moving business services into Azure.”

See across network devices

Modern businesses are powered by apps that need fast, reliable, and secure network connections. A Domain Name System (DNS) server is a core component of an organization’s IT infrastructure that enables such network connections, so visibility into the operations, performance, and auditing of DNS servers is critical for businesses.

You can use DNS Analytics, a solution now in public preview in Azure Insight & Analytics, to get visibility into your entire DNS infrastructure. This solution helps you visualize real-time security, performance, and operations-related information of your DNS servers through real-time dashboard views. You can drill-down into the dashboard to gain granular details on the state of your DNS infrastructure, create alerts, and remediate DNS server issues. “DNS Analytics provided us with the in-depth information I have been missing,” says Marius Sandbu, cloud architect for Evry, “both to be able to troubleshoot DNS registrations from clients and servers, but also to detect traffic to malware domains.”

Take action to remediate

Turning insights into action is made easier by the ability to create alerts when something is out of the norm and to connect alerts with workflows that remediate automatically. Now it’s even easier in Azure with the capability to perform in-line remediation using the Take Action button in Azure Insight & Analytics. In the Log Search view, you can now choose to take action from a search result to immediately address whatever the log detected. This functionality fixes the issue by selecting and deploying a runbook, either one you scripted previously or one taken from the runbook gallery in Azure Automation. You can solve your problem right away, eliminating extra work and time during an already pressing situation.

Diagnose application issues on the spot

You can now diagnose sudden changes in your web app’s performance or usage with a single click, powered by Machine Learning algorithms in Azure Application Insights Analytics. The Smart Diagnostics feature is available whenever you create or render a time chart. Anywhere it finds an unusual change from the trend of your results, such as a spike or a dip, it identifies a pattern of dimensions that might explain the change. This helps you diagnose the problem quickly. Smart Diagnostics can successfully identify a pattern of property values associated with a change, and highlight the difference between results with and without that pattern, essentially suggesting the most probable root cause leading to an anomaly.

Get started today

Azure management and security services help you to gain greater visibility into your environment with advanced data analysis and visualization, and make it easy to turn insights into action. Learn more about the capabilities of Azure Insight & Analytics, Azure Automation, and Azure Application Insights.
Source: Azure

Azure File Storage on-premises access for Ubuntu 17.04

Azure File Storage is a service that offers shared file storage for any OS that implements the supported SMB protocol. Since general availability, we have supported both Windows and Linux; however, on-premises access was only available from Windows. While Windows customers widely use this capability, we have received feedback that Linux customers want to do the same. With this capability, Linux access extends beyond the storage account region to cross-region as well as on-premises scenarios. Today we are happy to announce Azure File Storage on-premises access from across all regions for our first Linux distribution, Ubuntu 17.04. This support works right out of the box, and no extra setup is needed.

How to Access Azure File Share from On-Prem Ubuntu 17.04

Steps to access Azure File Share from an on-premises Ubuntu 17.04 or Azure Linux VM are the same.

Step 1: Check to see if TCP 445 is accessible through your firewall. You can test to see if the port is open using the following command:

nmap -p 445 <azure storage account>.file.core.windows.net
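If nmap is not installed, a raw TCP check works too. Here is a minimal Python sketch (the function name is ours, and the real storage host is left as a placeholder; the local demo in the comment is what a caller would run with network access):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Real usage (requires network access and your actual account name):
# print(port_open("<azure storage account>.file.core.windows.net", 445))
```

If this returns False, TCP 445 is most likely blocked by a firewall or your ISP, and the mount in the next step will fail.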

Step 2: Copy the command from the Azure Portal, or replace <storage account name>, <file share name>, <mountpoint> and <storage account key> in the mount command below. Learn more about mounting in how to use Azure File on Linux.

sudo mount -t cifs //<storage account name>.file.core.windows.net/<file share name> <mountpoint> -o vers=3.0,username=<storage account name>,password=<storage account key>,dir_mode=0777,file_mode=0777,sec=ntlmssp

Step 3: Once mounted, you can perform file operations on the share.

Other Linux Distributions

Backporting of this enhancement to Ubuntu 16.04 and 16.10 is in progress and can be tracked here: CIFS: Enable encryption for SMB3. Support for RHEL is also in progress; full support will be released with the next release of RHEL.

Summary and Next Steps

We are excited to see the tremendous adoption of Azure File Storage. You can try Azure File Storage and get started in under five minutes. Further information and detailed documentation links are provided below.

Use Azure File on Linux
Azure Files Storage: a frictionless cloud SMB file system for Windows and Linux
Inside Azure File Storage

We will continue to enhance the Azure File Storage based on your feedback. If you have any comments, requests, or issues, you can use the following channels to reach out to us:

Stack Overflow
MSDN
User Voice

Source: Azure

Cloud migration and disaster recovery of load balanced multi-tier applications

Support for Microsoft Azure virtual machine availability sets has been a highly anticipated capability among many Azure Site Recovery customers who use the product for either cloud migration or disaster recovery of applications. Today, I am excited to announce that Azure Site Recovery now supports creating failed-over virtual machines in an availability set. This in turn means that you can configure an internal or external load balancer to distribute traffic between multiple virtual machines of the same tier of an application. With the Azure Site Recovery promise of cloud migration and disaster recovery of applications, this first-class integration with availability sets and load balancers makes it simpler for you to run your failed-over applications on Microsoft Azure with the same guarantees that you had while running them on the primary site.

In an earlier blog of this series, you learned about the importance and complexity involved in recovering applications – Cloud migration and disaster recovery for applications, not just virtual machines. The next blog was a deep-dive on recovery plans describing how you can do a One-click cloud migration and disaster recovery of applications. In this blog, we look at how to failover or migrate a load balanced multi-tier application using Azure Site Recovery.

To demonstrate real-world usage of availability sets and load balancers in a recovery plan, a three-tier SharePoint farm with a SQL Always On backend is being used.  A single recovery plan is used to orchestrate failover of this entire SharePoint farm.

 

 

Here are the steps to set up availability sets and load balancers for this SharePoint farm when it needs to run on Microsoft Azure:

Under the Recovery Services vault, go to Compute and Network settings of each of the application tier virtual machines, and configure an availability set for them.
Configure another availability set for each of web tier virtual machines.
Add the two application tier virtual machines and the two web tier virtual machines in Group 1 and Group 2 of a recovery plan respectively.
If you have not already done so, click the following button to import the most popular Azure Site Recovery automation runbooks into your Azure Automation account.

 

Add script ASR-SQL-FailoverAG as a pre-step to Group 1.  
Add script ASR-AddMultipleLoadBalancers as a post-step to both Group 1 and Group 2.
Create an Azure Automation variable using the instructions outlined in the scripts. For this example, these are the exact commands used.

$InputObject = @{
    "TestSQLVMRG" = "SQLRG";
    "TestSQLVMName" = "SharePointSQLServer-test";
    "ProdSQLVMRG" = "SQLRG";
    "ProdSQLVMName" = "SharePointSQLServer";
    "Paths" = @{
        "1" = "SQLSERVER:\SQL\SharePointSQL\DEFAULT\AvailabilityGroups\Config_AG";
        "2" = "SQLSERVER:\SQL\SharePointSQL\DEFAULT\AvailabilityGroups\Content_AG"};
    "406d039a-eeae-11e6-b0b8-0050568f7993" = @{
        "LBName" = "ApptierInternalLB";
        "ResourceGroupName" = "ContosoRG"};
    "c21c5050-fcd5-11e6-a53d-0050568f7993" = @{
        "LBName" = "ApptierInternalLB";
        "ResourceGroupName" = "ContosoRG"};
    "45a4c1fb-fcd3-11e6-a53d-0050568f7993" = @{
        "LBName" = "WebTierExternalLB";
        "ResourceGroupName" = "ContosoRG"};
    "7cfa6ff6-eeab-11e6-b0b8-0050568f7993" = @{
        "LBName" = "WebTierExternalLB";
        "ResourceGroupName" = "ContosoRG"}}

$RPDetails = New-Object -TypeName PSObject -Property $InputObject | ConvertTo-Json

New-AzureRmAutomationVariable -Name "SharePointRecoveryPlan" -ResourceGroupName "AutomationRG" -AutomationAccountName "ASRAutomation" -Value $RPDetails -Encrypted $false

You have now completed customizing your recovery plan and it is ready to be failed over.

 

Once the failover (or test failover) is complete and the SharePoint farm runs in Microsoft Azure, it looks like this:

 

Watch this demo video to see all of this in action: how, using built-in constructs that Azure Site Recovery provides, we can fail over a three-tier application using a single-click recovery plan. The recovery plan automates the following tasks:

Failing over SQL Always On Availability Group to the virtual machine running in Microsoft Azure
Failing over the web and app tier virtual machines that were part of the SharePoint farm
Attaching an internal load balancer on the application tier virtual machines of the SharePoint farm that are in an availability set
Attaching an external load balancer on the web tier virtual machines of the SharePoint farm that are in an availability set

 

With a relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery is the one-stop shop for all your disaster recovery and migration needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure: to enable not just the elite tier-1 applications to have a business continuity plan, but to offer a compelling solution that empowers you to set up a working end-to-end disaster recovery plan for 100% of your organization's IT applications.

You can check out additional product information and start protecting and migrating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.
Source: Azure

Networking to and within the Azure Cloud, part 2

This is the second blog post of a three-part series. Before you begin reading, I would suggest reading the first post, Networking to and within the Azure Cloud, part 1.

Hybrid networking is a nice thing, but the question then is: how do we define hybrid networking? For me, in the context of connectivity to virtual networks over ExpressRoute private peering or VPN, it is the ability to connect cross-premises resources to one or more Virtual Networks (VNets). While this all works nicely, and we know how to connect to the cloud, how do we network within the cloud? There are at least three Azure built-in ways of doing this. In this series of three blog posts, my intent is to briefly explain:

Hybrid networking connectivity options
Intra-cloud connectivity options
Putting all these concepts together

Intra-Cloud Connectivity Options

Now that your workload is connected to the cloud, what are the native options to communicate within the Azure cloud? There are three:

VNet to VNet via VPN (VNet-to-VNet connection)
VNet to VNet via ExpressRoute
VNet to VNet via Virtual Network Peering (VNet peering) and VNet transit

My intent here is to compare these methods, what they allow, and the kinds of topologies you can achieve with them.

VNet-to-VNet via VPN

As shown in the picture below, when two VNets are connected using VNet-to-VNet via VPN, the routing tables of both virtual networks contain routes to each other. This is interesting with two VNets, but it can grow by some measure. Notice that the route tables for VNet4 and VNet5 indicate how to reach VNet3; however, VNet4 is not able to reach VNet5. Despite this, there are two methods to achieve that connectivity:

Full mesh VNet-to-VNet
Using BGP-enabled VPNs

With both of these methods, all three VNets know how to reach each other. Obviously this could scale to many more VNets, assuming the limits of the VPN gateways are respected (maximum number of tunnels, and so on).

VNet-to-VNet via ExpressRoute

While maybe not everyone realizes it, linking a VNet to an ExpressRoute circuit has an interesting side effect when you link more than one VNet to the same circuit. When two VNets are linked to the same ExpressRoute circuit, both VNets are able to communicate with each other without going outside of the Microsoft Enterprise Edge (MSEE) router. This makes communication possible between VNets that are within the same geopolitical region, or globally (except on National Clouds) if it is an ExpressRoute Premium circuit. In other words, you can use the worldwide Microsoft backbone to connect multiple VNets together, and that VNet-to-VNet traffic is free, as long as you can connect these VNets to the same ExpressRoute circuit. In the example below, you would have three VNets connected to the same ExpressRoute circuit. You can also see in the picture the different routes that appear in each VNet subnet's Effective Route Table (read that; it is very useful for understanding why routing doesn't work as you expect, if that ever happens).

VNet-to-VNet with Virtual Network Peering

The final option to connect multiple VNets together is Virtual Network Peering, which is constrained within one Azure region. A peering arrangement between two VNets makes them behave essentially like one big virtual network, but you can govern and control these communications with NSGs and route tables. Taking that to the next level, you could imagine a hub-and-spoke topology. Peering is non-transitive, so in that case the HR VNet cannot talk directly to the Marketing VNet; however, all three VNets (HR, Marketing, and Engineering) can talk to the hub VNet, which would contain shared resources such as domain controllers, monitoring systems, firewalls, or other Network Virtual Appliances (NVAs), using a combination of user-defined routes applied on the spoke VNets and an NVA in the centralized hub VNet. As with VPN, if for some reason you need every VNet to be able to talk to every other VNet, you could create a full-mesh peering topology as well.

When using VNet peering, one of the great resources that can be shared is the gateways, both VPN and ExpressRoute. This way, you do not have to deploy an ExpressRoute or VPN gateway in every spoke VNet, but can centralize the security stamp and gateway access into the hub VNet. Please make sure to check out the next post when it comes out, to put all these concepts together!
Source: Azure

Welcome to Azure CLI Shell Preview

A few months ago, Azure CLI 2.0 was released. It provides a command-line interface to manage and administer Azure resources. Azure CLI is optimized to run automation scripts composed of multiple commands, so the commands and the overall user experience are not interactive. Azure CLI Shell (az-shell) provides an interactive environment to run Azure CLI 2.0 commands, which is ideal for new users learning the Azure CLI’s capabilities, command structures, and output formats. It provides autocomplete dropdowns and auto-cached suggestions, combined with on-the-fly documentation, including examples of how each command is used. Azure CLI Shell provides an experience that makes it easier to learn and use Azure CLI commands.

Features

Gestures

The shell implements gestures that users can employ to customize it. The F3 function key in the toolbar can be used to look up all available gestures at any time.

Scoping

The shell allows users to scope their commands to a specific group of commands. If you only want to work with ‘vm’ commands, you can use the following gesture to set the right scope so that you don’t have to type ‘vm’ with all subsequent commands:

  $ %%vm

Now, all the completions are ‘vm’ specific.

To remove a scope, the gesture is:

  $ ^^vm

Or, to remove all scoping:

   $ ^^

Scrolling through examples

The shell lists many examples for each command contextually, as you type your commands. However, some commands, like ‘vm create’, have too many examples to fit on the terminal screen. To look through all examples, you can scroll through the example pane with Ctrl+Y and Ctrl+N for ‘up’ and ‘down’, respectively.

Step-by-step examples

With all the examples, you can easily select a specific one to view in the example pane.

   $ [command] :: [example number]

The example number is indicated in the example pane. The shell takes the user through a step by step process to create the command and execute it.

Commands outside the shell

Azure CLI Shell allows a user to execute commands outside of Azure CLI from within the Shell itself. So you don’t need to exit out of the az-shell to add something to git, for example. Users can execute commands outside the shell with the gesture:

   $ #[command]

Query Previous Command

You can use a JMESPath query on command outputs of type ‘json’ to quickly find the properties and values that you want.

   $ ? [jmespath query]

Exit Code

There is a gesture that allows users to see the exit code of the last command they ran, to check if it executed properly.

   $ $

Installation

PyPI

   $ pip install --user azure-cli-shell

Docker

   $ docker run -it azuresdk/azure-cli-shell:0.2.0

To start the application

   $ az-shell

Welcome to Azure CLI Shell

Azure CLI Shell is open source and hosted on GitHub. If there are any issues, they can be filed on the GitHub repository or e-mailed to azfeedback. You can also use the “az feedback” command directly from within az-shell or az.
Source: Azure

Visualize Azure Machine Learning Models with MicroStrategy Desktop™

  

Have you ever wondered how you can use machine learning in your work?

It would be easy to assume that this type of advanced technology isn’t available to you because of simple logistics or complexities. The truth is, machine learning is more accessible than ever before – and even easy to use.

Together, Microsoft and MicroStrategy can help users create powerful, cloud-based machine learning applications through self-service analytics. MicroStrategy Desktop™, combined with Microsoft Azure ML, uses a drag-and-drop interface so users can efficiently plan, create, and glean insights from a predictive dashboard.

April 18-20 at MicroStrategy World in Washington, DC, Microsoft will use a hands-on workshop to demonstrate how users can go from nothing to a fully-functional predictive data visualization built on machine learning within an hour. The three tools we’ll use are Microsoft R Open, Azure Machine Learning, and MicroStrategy 10 Desktop.

We invite you to attend the session and see it in action first-hand. If you can’t make the trip to MicroStrategy World, check out our step-by-step guide that we’ll review with the audience.

Sound like an easy way to begin using Machine Learning in your work? Here’s some more information on the three tools you need to get up and running in an hour.

Microsoft R Open

Microsoft R Open, formerly known as Revolution R Open (RRO), is the enhanced distribution of R from Microsoft Corporation. It is a complete open source platform for statistical analysis and data science. The current version, Microsoft R Open 3.3.2, is based on (and 100% compatible with) R-3.3.2, the most widely used statistics software in the world, and is therefore fully compatible with all packages, scripts, and applications that work with that version of R. It includes additional capabilities for improved performance and reproducibility, as well as support for Windows- and Linux-based platforms.

Like R, Microsoft R Open is open source and free to download, use, and share. It is available from https://mran.microsoft.com/open/.

Azure Machine Learning

Data can hold secrets, especially if you have lots of it. With lots of data about something, you can examine that data in intelligent ways to find patterns. And those patterns, which are typically too complex for you to detect yourself, can tell you how to solve a problem.

This is exactly what machine learning does: It examines large amounts of data looking for patterns, then generates code that lets you recognize those patterns in new data. Your applications can use this generated code to make better predictions. In other words, machine learning can help you create smarter applications. Azure Machine Learning enables you to build powerful, cloud-based machine learning applications.

MicroStrategy Desktop

Enterprise organizations use MicroStrategy Desktop to answer some of their most difficult business questions.

Available for Mac and PC, MicroStrategy Desktop is a powerful data discovery tool that allows users to access data on their own and build dashboards. With MicroStrategy Desktop, users can access over 70 data sources, from personal spreadsheets to relational databases and big data sources like Hadoop. With the ability to prepare, blend, and profile datasets with built-in data wrangling, users can go from data to dashboards in minutes on a single interface. MicroStrategy Desktop allows departmental users to visualize information with hundreds of chart and graph options, empowering them to make decisions on their own.

Come join us at MicroStrategy World or try the workshop out for yourself.
Source: Azure

Azure Search releases support for synonyms (public preview)

Today, we are happy to announce public preview support for synonyms in Azure Search, one of our most requested features on UserVoice. Synonym functionality allows Azure Search to return not only results that match the query terms typed into the search box, but also results that match synonyms of those terms. As a search-as-a-service solution, Azure Search is used in a wide variety of applications which span many languages, industries, and scenarios. Since terminology and definitions vary from case to case, Azure Search’s Synonyms API allows customers to define their own synonym mappings.

Synonyms aim to increase recall without sacrificing relevance

Synonyms functionality in Azure Search allows a user to get more results for a given query without sacrificing how relevant those results are to the query terms. In a real estate website, for example, a user may be searching for ‘jacuzzi.’ If some of the listings only have the term ‘hot tub’ or ‘whirlpool bath,’ then the user will not see those results. When ‘jacuzzi’ and ‘hot tub’ are mapped to one another in a synonym map, Azure Search does not have to do any guess-work in understanding that these terms are relevant even though the terms bear no resemblance in spelling.

Multi-word synonyms

In many full text search engines, support for synonyms is limited to single words. Our team has engineered a solution that allows Azure Search to support multi-word synonyms. This allows for phrase queries (“”) to function properly while using synonyms. If someone has mapped ‘hot tub’ to ‘whirlpool bath’ and they then search for “large hot tub,” Azure Search will return matches which contain both “large hot tub” and “large whirlpool bath.”

Support for Solr SynonymFilterFactory format

Azure Search’s synonyms feature supports the same format used by Apache Solr’s SynonymFilterFactory. Because this is a widely used open-source standard, many existing synonym maps for various languages and specific domains can be used out of the box with Azure Search.

Creating or updating a synonym map

Enabling synonyms functionality does not require any re-indexing of your content in Azure Search or any interruption of your service and you can add new synonyms at any time. Currently, the Synonyms API is in Public Preview and only available in the Service REST API (api-version=2016-09-01-Preview) and .NET SDK. 

When defining synonyms for Azure Search, you add a named resource to your search service called a synonymMap. You can enable synonyms for fields in your index by referring to the name of a synonymMap in the new synonymMaps property of the field definition.

Like an index definition, a synonym map is managed as an atomic resource that you read, update, or delete in a single operation. That means that if you want to make incremental changes to a synonym map, you will need to read, modify, and update the entire synonym map.
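Because the resource is atomic, an incremental change is a read-modify-update cycle: GET the map, change it locally, then PUT the whole thing back. As a purely illustrative sketch (this helper is hypothetical, not part of any SDK), the modify step might look like:

```python
def add_synonym_rule(synonym_map, new_rule):
    """Return a copy of a synonym map definition with one more Solr-format rule.

    Rules in the 'synonyms' string are newline-separated; the whole map
    must then be sent back to the service in a single update operation.
    """
    rules = synonym_map["synonyms"].splitlines()
    rules.append(new_rule)
    return {**synonym_map, "synonyms": "\n".join(rules)}

# Start from the map fetched with GET, add a rule, then PUT the result back.
fetched = {"name": "addressmap", "format": "solr",
           "synonyms": "Washington, Wash., WA => WA"}
updated = add_synonym_rule(fetched, "Oregon, Ore., OR => OR")
print(updated["synonyms"])
```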

Below, there are some example operations using an example scenario of a real estate listings data set using the REST API. For examples using the .NET SDK, please visit the documentation. 

POST

You can create a new synonym map in the REST API using HTTP POST:

POST https://[servicename].search.windows.net/synonymmaps?api-version=2016-09-01-Preview
api-key: [admin key]

{
   "name":"addressmap",
   "format":"solr",
   "synonyms": "Washington, Wash., WA => WA"
}

PUT

You can create a new synonym map or update an existing synonym map using HTTP PUT.

When using PUT, you must specify the synonym map name on the URI. If the synonym map does not exist, it will be created.

PUT https://[servicename].search.windows.net/synonymmaps/addressmap?api-version=2016-09-01-Preview
api-key: [admin key]

{
   "name":"addressmap",
   "format":"solr",
   "synonyms": "Washington, Wash., WA => WA"
}

 

Types of synonym mappings

With the Synonyms API, it is possible to define synonyms in two ways: one-way mappings and equivalence-class mappings.

One-way mappings

With one-way mappings, Azure Search will treat multiple terms as if they all are a specific term. For example, the state where a property is located may only be stored as the two-letter abbreviation in the index for real estate listings. However, users may type in the full name of the state or other abbreviations.


{
   "name":"addressmap",
   "format":"solr",
   "synonyms": "Washington, Wash., WA => WA"
}

Equivalence-class mappings

In many domains, there are terms which all have the same or similar meaning. The Synonyms API makes it simple to map all like terms to one another so that the search term is expanded at query-time to include all synonyms. The above example of the multiple ways to describe ‘jacuzzi’ is demonstrated below.


{
   "name":"descriptionmap",
   "format":"solr",
   "synonyms": "hot tub, jacuzzi, whirlpool bath, sauna"
}
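To make the two rule types concrete, the sketch below interprets a single Solr-format rule at query time. This is an illustrative simplification, not Azure Search’s actual query rewriting:

```python
def interpret_rule(rule, term):
    """Expand a query term under one Solr-format synonym rule."""
    if "=>" in rule:
        # One-way mapping: a left-hand term is replaced by the right-hand terms.
        lhs, rhs = rule.split("=>")
        inputs = [t.strip() for t in lhs.split(",")]
        outputs = [t.strip() for t in rhs.split(",")]
        return outputs if term in inputs else [term]
    # Equivalence class: a member expands to every member of the class.
    members = [t.strip() for t in rule.split(",")]
    return members if term in members else [term]

print(interpret_rule("Washington, Wash., WA => WA", "Wash."))  # → ['WA']
print(interpret_rule("hot tub, jacuzzi, whirlpool bath, sauna", "jacuzzi"))
```

Note how the one-way rule collapses variants into a single indexed form, while the equivalence class widens the query to every member.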

Setting the synonym map in the index definition

When defining a searchable field in your index, you can use the new property synonymMaps to specify a synonym map to use for the field. Multiple indexes in the same search service can refer to the same synonym map.

NOTE: Currently only one synonym map per field is supported.

POST https://[servicename].search.windows.net/indexes?api-version=2016-09-01-Preview
api-key: [admin key]
 
{
   "name":"realestateindex",
   "fields":[
      {
         "name":"id",
         "type":"Edm.String",
         "key":true
      },
      {
         "name":"address",
         "type":"Edm.String",
         "searchable":true,
         "analyzer":"en.lucene",
         "synonymMaps":[
            "addressmap"
         ]
      },
      {
         "name":"description",
         "type":"Edm.String",
         "searchable":true,
         "analyzer":"en.lucene",
         "synonymMaps":[
            "descriptionmap"
         ]
      }
   ]
}

Synonyms + Search Traffic Analytics

Coupled with Search Traffic Analytics (STA), synonyms can be powerful in improving the quality of the search results that end users see. STA in Azure Search reveals the most common queries with zero results. By adding relevant synonyms to these terms, these zero-result queries can be mitigated. STA also shows the most common query terms for your Azure Search service, so you do not need to guess when determining proper terms for your synonym map.

Learn more

Follow these links for the detailed documentation on synonyms using the REST API and .NET SDK. Read more about Azure Search and its capabilities, and visit our documentation. Please visit our pricing page to learn about the various tiers of service to fit your needs.
Source: Azure

Azure Analysis Services Backup and Restore

This post is authored by Bret Grinslade, Principal Program Manager and Josh Caplan, Senior Program Manager, Azure Analysis Services.

We have gotten good feedback from customers and partners starting to adopt Azure Analysis Services in production. Based on this feedback, this week we are releasing improvements around pricing options, support for backup and restore, and improved Azure Active Directory support. Please try them out and let us know how they work for you.

New Basic Tier

The new Basic tier is designed to support smaller workloads with simpler refresh and processing needs. While you can put multiple models in one Standard instance, this new tier enables you to create models that are more targeted, at less cost. The key difference between Standard and Basic is that the Basic tier does not support certain enterprise features: Standard supports larger sizes and higher QPUs for concurrent queries, and adds data partitioning for improved processing, translations, perspectives, and DirectQuery. If your solution doesn’t need these capabilities, you can start with Basic. You can also scale up from Basic to Standard at any time; however, once you scale up to the higher tier, you can’t scale back down to Basic. As an example, you can scale from B1 to S0 and then from S0 to S1 and back to S0, but you cannot scale from S0 to either the Basic or Developer tier.
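The scaling rule described above can be summarized in a few lines. The sketch below is illustrative; the tier-name sets stand in for the Basic and Standard SKUs and are not an exhaustive list:

```python
# Hypothetical SKU name sets for the Basic and Standard tiers.
BASIC = {"B1", "B2"}
STANDARD = {"S0", "S1", "S2", "S4"}

def scale_allowed(current, target):
    """Moving within or up into Standard is allowed; once on Standard,
    scaling back down out of Standard (e.g. to Basic) is not."""
    if current in STANDARD and target not in STANDARD:
        return False
    return True

print(scale_allowed("B1", "S0"))  # Basic can scale up to Standard
print(scale_allowed("S1", "S0"))  # movement within Standard is fine
print(scale_allowed("S0", "B1"))  # Standard cannot drop back to Basic
```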

Backup & Restore

We have added backup and restore. At a high level, you configure a backup storage location from your subscription for use with your Azure Analysis Services instance. If you do not have a storage account, you will need to create one; you can do this from the Azure Analysis Services blade for backup configuration, or you can create it separately. Once you have associated a storage location, you can back up and restore from that location using TMSL commands or a tool like SQL Server Management Studio (SSMS), which will support this shortly. The documentation has more details on backing up and restoring Azure Analysis Services models.

One note: to restore a 1200 tabular model you created with an on-premises version of SQL Server Analysis Services, you will need to copy it up to the storage account before it can be restored to Azure Analysis Services. The Microsoft Azure Storage Explorer or the AzCopy command-line utility are useful tools for moving large files into Azure. In addition, if you restore a model from an on-premises server, the on-premises domain users will not have access to the model. You will need to remove all of the on-premises users from the model roles, and then you can add Azure Active Directory users to those roles; the roles themselves will be the same. Azure Analysis Services server admins will still have access, as these are AAD-based members. The “SkipMembership” restore setting will be honored in a future service update to make managing cloud-based role membership easier.

Improved Azure Active Directory integration

We have also done some work to improve the way Azure Analysis Services works with Azure Active Directory. Starting now, any newly created Azure AS server will be tied to the Azure AD tenant with which your Azure subscription is associated, and only users within that directory will be able to use your Azure AS server if granted access. This means that if a server is created in a subscription owned by Contoso.com, then only users within the Contoso.com directory will be able to use those servers. In order to use that server, users must still be granted access to a role within the model. Azure AD supports a few options for allowing users outside of your organization to get access to resources within your tenant. One of these upcoming options will be Azure AD B2B. With B2B, you will be able to grant users outside of your organization guest access to your models through Azure Active Directory. We are hard at work enabling B2B for Azure Analysis Services end-to-end and will post an update when it is fully available in SSMS and SSDT as well as client tools.
Source: Azure

The network is a living organism

Organism, from the Greek word organismos, denotes a complex structure of living elements. But what does a network have in common with organisms?

At Microsoft, we build and manage a hyper-scale global network that’s constantly growing and evolving. Supporting workloads such as Microsoft Azure, Bing, Dynamics, Office 365, OneDrive, Skype, Xbox, and soon LinkedIn imposes stringent requirements for reliability, security, and performance. These requirements make it imperative to continually monitor the pulse of the network to detect anomalies and faults and to drive recovery at the millisecond level, much akin to monitoring a living organism.

Monitoring a large network that connects 38 regions (as of April 2017), hundreds of datacenters, thousands of servers with several thousand devices, and millions of components requires constant innovation and invention.

Figure 1. Microsoft global network

Figure 2. Illustration of a physical network in a datacenter

Four core principles drive the design and innovation of our monitoring services:

Speed and accuracy: It’s imperative to detect failures at the sub-second level and drive recovery of the same.
Coverage: From bit errors to bytes, to packets, to protocols, to components, to devices that make up the end-to-end network, our monitoring services must cover them all.
Scale: The services must process petabytes of logs, millions of events, and thousands of correlations that are spread over several thousand miles of connectivity across the face of the planet.
Optimize based on real user metrics: Our monitoring services must use metrics from a network topology level—within a rack, to a cluster, to a datacenter, to a region, to the WAN and the edge—and they must have the capability to zoom in and out.

We built innovations to proactively detect and localize a network issue, including PingMesh and NetBouncer. These services are always on and monitor the pulse of our network for latency and packet drops.

PingMesh uses lightweight TCP probes (consuming negligible bandwidth) for probing thousands of peers for latency measurement (RTT, or round trip time) and detects whether the issue is related to the physical network. RTT measurement is a good tool for detecting network reachability and packet-level latency issues.

After a latency deviation or packet drop is discovered, NetBouncer’s machine learning algorithms are then used to filter out transient issues, such as top-of-rack reboots for an upgrade. After completing temporal analysis, in which we look at historical data and performance, we can confidently classify the incident as a network issue and accurately localize the faulty component. Once the issue is localized, we can auto-mitigate it by rerouting the impacted traffic, and then either rebooting or removing the faulty component. In the following figure, green, yellow, and red visualize 99th-percentile network latency ranges between source-destination rack pairs.

Figure 3. Examples of network latency patterns for known failure modes
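As a rough illustration of the aggregation behind such a heatmap, the sketch below computes a 99th-percentile RTT (nearest-rank method) for one rack pair and buckets it into a color band. The thresholds are hypothetical; real alerting uses far richer baselines:

```python
import math

def p99(rtts_ms):
    """99th-percentile RTT from a list of samples, nearest-rank method."""
    ordered = sorted(rtts_ms)
    rank = max(1, math.ceil(0.99 * len(ordered)))
    return ordered[rank - 1]

def latency_band(p99_ms, normal_ms=1.0, degraded_ms=5.0):
    """Bucket a rack pair's tail latency for a green/yellow/red heatmap."""
    if p99_ms <= normal_ms:
        return "green"
    if p99_ms <= degraded_ms:
        return "yellow"
    return "red"

# Two bad tail samples are enough to push the p99 into the red band.
samples = [0.2] * 97 + [0.4, 6.0, 7.5]
print(latency_band(p99(samples)))  # → red
```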

In some customer incidents, the incident might need deeper investigation by an on-call engineer to localize and find the root cause. We needed a troubleshooting tool to efficiently capture and analyze the life of a packet through every network hop in its path. This is a difficult problem because of the necessary specificity and scale for packet-level analysis in our datacenters, where traffic can reach hundreds of terabits per second. This motivated us to develop a service called Everflow—it’s used to troubleshoot network faults using packet-level analysis. Everflow can inject traffic patterns, mirror specific packet headers, and mimic the customer’s network packet. Without Everflow, it would be hard to recreate the specific path taken by a customer’s packet; therefore, it would be difficult to accurately localize the problem. The following figure illustrates the high-level architecture of Everflow.

Figure 4. Packet-level telemetry collection and analytics using Everflow

Everflow is one of the tools used to monitor every cable for frame check sequence (FCS) errors. Optical cables can develop errors from human mistakes like bending or bad placement, or simply from aging. The following figure shows examples of cable bending and cables placed near fans, either of which can cause FCS errors on a link.

Figure 5. Examples of cable bending, and cable placed near the fans that can cause an FCS error on this link

We currently monitor every cable and allow only one error for every billion packets sent, and we plan to further reduce this threshold to ensure link quality for loss-sensitive traffic across millions of physical cables in each datacenter. If the cable has a higher error rate, we automatically shut down any links with these errors. After the cable is cleaned or replaced, Everflow is used to send guided probes to ensure that the link quality is acceptable.
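The policy above (at most one error per billion packets, with automatic link shutdown) amounts to a simple threshold check per link. The function and constant below are illustrative, not the production logic:

```python
ERRORS_PER_PACKET_BUDGET = 1e-9  # one FCS error allowed per billion packets

def link_action(fcs_errors, packets_sent):
    """Decide whether a link stays in service or is shut down for cleaning."""
    if packets_sent == 0:
        return "keep"  # no traffic observed yet, nothing to judge
    error_rate = fcs_errors / packets_sent
    return "shut down" if error_rate > ERRORS_PER_PACKET_BUDGET else "keep"

print(link_action(1, 10**9))  # exactly at the one-per-billion budget: keep
print(link_action(3, 10**9))  # error rate exceeds the budget: shut down
```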

Beyond the datacenter, supporting critical customer scenarios on the most reliable cloud requires observing network performance end-to-end from Internet endpoints. The Azure WAN evolved to build a service called the Map of the Internet that monitors Internet performance and customer experience in real time. This system can disambiguate expected client performance across wired and wireless connections, separate sustained issues from transient ones, and provide visibility into any customer perspective on demand. For example, it helps us answer questions like, “Are customers in Los Angeles seeing high RTT on AT&T?”, “Is Taipei seeing increased packet loss through HiNet to Hong Kong?”, and “Is Bucharest seeing reliability issues to Amsterdam?” We use this service to proactively and reactively intervene on impact or risks to customer experiences and quickly correlate them to the scenario, network, and location at fault. This data also triggers automated response and traffic engineering to minimize impact or mitigate ahead of time whenever possible.

Figure 6. Example of latency degradation alert with a peering partner

The innovation built to monitor our datacenters and their connectivity is also leveraged to provide insights to our customers.

Typically, customers use our network services via software abstractions. Such abstractions, including virtual networks, virtual network interface cards, and network access control lists, hide the complexity and intricacies of the datacenter network. We recently launched Azure Network Watcher, a service that provides visibility and diagnostics for the virtual/logical network and related network resources.

Using Network Watcher, you can visualize the topology of your network, understand performance metrics of the resources deployed in the topology, create packet captures to diagnose connectivity issues, and validate the security perimeter of your network to detect vulnerabilities and for compliance/audit needs.

Figure 7. Topology view of a customer network

The following figure shows how a remote packet capture operation can be performed on a virtual machine.

Figure 8. Variable packet capture in a virtual machine

Building and operating the world’s most reliable and hyper-scale cloud is underpinned by the need to proactively monitor and detect network anomalies and take corrective action—much akin to monitoring a living organism. As the pace, scale, and complexity of the datacenters evolve, new challenges and opportunities emerge, paving the way for continuous innovation. We’ll continue to invest in networking monitoring and automatic recovery, while also sharing our innovations with customers to also help them manage their virtual networks.

References

PingMesh: Guo, Chuanxiong, Lihua Yuan, Dong Xiang, Yingnong Dang, Ray Huang, Dave Maltz, Zhaoyi Liu, et al. "Pingmesh: A large-scale system for data center network latency measurement and analysis." ACM SIGCOMM Computer Communication Review 45, no. 4 (2015): 139-152.

Everflow: Zhu, Yibo, Nanxi Kang, Jiaxin Cao, Albert Greenberg, Guohan Lu, Ratul Mahajan, Dave Maltz, et al. "Packet-level telemetry in large datacenter networks." In ACM SIGCOMM Computer Communication Review, vol. 45, no. 4, pp. 479-491. ACM, 2015.

Read more

To read more posts from this series please visit:

Networking innovations that drive the cloud disruption
SONiC: The networking switch software that powers the Microsoft Global Cloud
How Microsoft builds its fast and reliable global network
Lighting up network innovation
Azure Network Security
Microsoft’s open approach to networking

Source: Azure