Azure Black Belt Networking and Security Presents Microsoft Networking Academy – Fall 2017

Welcome to this new season of our networking, and now security, webinar series! We hope this show delivers valuable content both for anyone just getting started with the Azure cloud and for those looking to deepen their cloud connectivity with Azure.

A Microsoft Networking Academy (#MNA) session takes place every four weeks, on Fridays, this fall. It is open to customers, partners, and Microsoft employees who want to learn more about Azure Networking, including ExpressRoute, Virtual Networking, and security — such as how to plan and design connectivity to the Microsoft cloud, as well as the various security elements surrounding Azure.

MNA will typically be delivered in one of two formats, depending on the episode: partner-focused sessions and deep dive sessions. In both formats, there will be an open Q&A session at the end, where customers can "ask the experts". Content and partner speakers will vary for each session, but the general agendas are as follows:

Partner-focused sessions

Azure Networking fundamentals (10 minutes)
Deep dive topic of the week (15-20 minutes)
Partner spotlight of the week (15-20 minutes)
Q&A

Deep dive sessions

Introduction (5 minutes)
Deep dive topic of the week (35-45 minutes)
Q&A (10 minutes)

We will post the agenda in advance on this blog. If you would like to join our distribution list, send an email to gbb-anf@microsoft.com with the subject line “Join Microsoft Networking Academy List”, and we will email you a reminder and the agenda ahead of each upcoming session.

We are kicking off this new season on Friday, September 22nd, 2017.

Join the Skype Meeting, and make sure you don’t miss future sessions by adding the series to your Outlook calendar. You can also download the ICS file for the recurring event; if you can’t make a session, decks and recordings will be posted below.

Here are a few links for your convenience:

Session recordings for Microsoft Networking Academy will be posted on Channel 9
Previous sessions are already posted on Channel 9’s Azure Networking Fridays channel
ExpressRoute checklist
Previous seasons decks and recordings:

Fall 2016 sessions
Early winter 2017 sessions
Later winter 2017 sessions (rebranded Microsoft Networking Academy)

Episode #11: September 22nd – What’s new in Azure Networking and Security since this summer

Quick introduction and announcements (new team member!) 
What’s new with Azure Networking
What’s new with Azure Security
Ask the Experts Q&A!
Links to the deck and video recording on Channel 9 will be posted here.

Episode #12: October 20th – Agenda to be confirmed

Agenda to be confirmed – but we’ll likely be discussing Ignite content… stay tuned!
Links to the deck and video recording on Channel 9 will be posted here

Episode #13: November 17th – Agenda to be confirmed

Agenda to be confirmed.
Links to the deck and video recording on Channel 9 will be posted here

Episode #14: December 15th – Agenda to be confirmed

Agenda to be confirmed.
Links to the deck and video recording on Channel 9 will be posted here

We’re open for feedback!

Feel free to send us your feedback, comments, and suggestions at GBB-ANF@microsoft.com.

-Your Friendly Azure Networking and Security Black Belt Team.
Source: Azure

Perspective View makes deeper analyses easy in Azure Time Series Insights

Perspective View enables analysts and engineers to efficiently perform complex tasks like multistep analysis, viewing data from multiple sites, saving checkpoints, and time-based and pattern-based data comparisons. With Azure Time Series Insights, customers have developed a new appetite to deepen their analysis by further exploiting their data. Until now, this required home-grown workarounds that were costly and time-consuming. This post shows some common ways to harness the full power of Perspective View to boost the insights you get from your data in just a few steps.

To get started with a perspective view, select your environment in the Time Series Insights portal, then click the four-squares button in the top-right corner.

How to do your multistep analysis in a few steps

Almost all data investigations and analyses involve multiple steps to find the actual answers, such as faulty devices, threshold values, or overall impact. By using Perspective View to save checkpoints or intermediate steps, it’s now much easier to tell the complete story.

The Time Series Insights team is religious about dogfooding our product to track and monitor the performance of our service. I constantly find myself using this feature to show my engineers the complete sequence, and full story, of an investigation I’ve conducted! The best part is that I can save the perspective view so others on my team, and across the organization, can quickly see my complete investigation too.

The picture below shows a multi-step investigation story:

Compare today’s trends to yesterday’s

Comparison is a common practice in data analysis: users want to know how things were going during a previous period compared to the current one. Perspective View makes it very easy to perform a time-based comparison in two simple steps:

Go to “perspective view” and clone the current view.
Change the current time to the required time window in the cloned view.

That’s it – now you can visually compare your data.

The time-based comparison is valuable, but many customers also have saved queries for interesting patterns or different business conditions that are not bound to time, yet are important to refer back to when validating expected behavior. Perspective View lets you open a saved query and compare it with the current or required data to quickly verify the state and expedite the analysis or investigation.

The picture below shows time-based and saved query comparisons:

Global view

When you have manufacturing plants or deployments in different locations worldwide, your data is usually siloed in a single location, which makes it extremely difficult to visualize and analyze data from various geo-locations in near real time. In Perspective View, you can see data from multiple environments, which makes it easy to analyze global trends or data per plant or location. For instance, one of our customers has manufacturing plants all over the world, and before Time Series Insights it was very difficult to observe in real time how many parts each plant produced, at what rate, and with what energy consumption. By using the Perspective View feature, they discovered a few insightful differences between plants, and they are now working to improve overall productivity by adopting best practices from the highest-performing plant.

The picture below compares data from the USA and Europe plants:

Get started today

There is currently a limit of four tiles in a perspective view. We are working to add more capabilities and are excited about what’s to come, so be on the lookout for more product news soon! You can also explore Time Series Insights and take the recent improvements for a test drive using our free demo environment; you’ll just need an Azure.com account to get started. You can also stay up to date on all things Time Series Insights by following us on Twitter.
Source: Azure

Seven reasons why so many Node.js devs are excited about Microsoft Azure

Node.js is arguably one of the most popular technologies in use on the cloud. Developers all over the world have embraced the platform, which, according to the Node.js Foundation’s 2017 survey, is used to build back-end, front-end, mobile, and even desktop apps.

At Microsoft, we are investing heavily in Node.js and JavaScript, which are first-class citizens on Azure and across many other products. Whether you are a full-stack developer, a DevOps specialist, an architect, or simply playing around with Node.js, these are seven of the (many) reasons why people love Azure!

Build your app, not the infrastructure. With Azure Web Apps for Node.js (on Linux), you can deploy code via Git or your own Docker container, and run your apps on managed infrastructure. We take care of the rest, letting you use features such as auto-scaling, CI/CD, deployment slots and more! Azure Web Apps are a great choice for Node.js developers who are looking for flexibility, low maintenance overhead, and great scalability. It’s easy to get up and running in five minutes.
The most comprehensive offering for Docker containers of any cloud provider. With Azure Container Service, it’s easy to get a fully-configured Kubernetes cluster (or Docker Swarm or DC/OS) in minutes, based on the official open source code. However, Node.js users can also run their Docker containers on Azure Web Apps on Linux, or use Virtual Machine images from the Azure Marketplace for Cloud Foundry, OpenShift, CoreOS, and more. Lastly, with Azure Container Registry it’s easy to keep your Docker images safe from prying eyes.
Fully embrace NoOps and go serverless with Azure Functions. Run your Node.js app in a reactive way, responding to requests, events and messages, in a massively scalable infrastructure that’s completely transparent to you. Your code is executed only when needed, and you’re billed flexibly based on the resources used.
Full support for Linux and other open source technologies. If you prefer to have greater control over your infrastructure, you can deploy Virtual Machines based on multiple Linux and UNIX distributions, such as Ubuntu, Red Hat Enterprise Linux, CentOS, Debian, CoreOS, FreeBSD, and more. Also, leverage pre-built VM images for popular Node.js stacks, such as N|Solid, and Bitnami’s MEAN.
Your choice of data solutions, including fully-managed solutions like CosmosDB (a global database fully compatible with MongoDB) and Azure Database for MySQL and PostgreSQL.
Monitor your app with Application Insights for Node.js, identify bugs and performance bottlenecks, and collect instant analytics.
Write your apps and scripts with our free, open source editor Visual Studio Code, for Windows, macOS, and Linux. Node.js devs love Visual Studio Code because it offers the tools they need to write JavaScript and TypeScript code, with features such as IntelliSense smart code completion, built-in debugging and support for Git source control. The editor is fully customizable and offers extensions to support Docker containers, other programming languages, and much more.

Bonus reason: leverage the Microsoft Azure global, trusted infrastructure. With 42 regions worldwide (including 6 announced) and a broad portfolio of certifications we comply with, Azure allows for local presence and global expansion, with an emphasis on security and trust.

Thousands of developers, from garage startups to Fortune 500 companies, trust Azure for their projects. Whether you’re using Node.js for mission-critical applications for your business, or just as a hobby, we offer the most complete end-to-end platform for JavaScript developers. Start a free trial today and try out Azure for your Node.js app!
Source: Azure

Using Azure Analysis Services with Azure Data Lake Storage

Support for Azure Data Lake Store (ADLS) is now available in Azure Analysis Services and in SQL Server Data Tools (SSDT). Now you can augment your big data analytics workloads with rich interactive analysis for selected data subsets at the speed of thought! Business users can consume Azure Analysis Services models in Microsoft Power BI, Microsoft Office Excel, and Microsoft SQL Server Reporting Services. Azure Data Lake Analytics (ADLA) can be used to run U-SQL batch jobs directly against the source data, such as to generate targeted output files that Azure Analysis Services can import with less overhead.

Azure Data Lake Analytics (ADLA) can process massive volumes of data extremely quickly. Exporting approximately 2.8 billion rows of TPC-DS store sales data (~500 GB) into a CSV file took less than 7 minutes and importing the full 1 TB set of source data into Azure Analysis Services by using the Azure Data Lake connector took less than 6 hours. These results highlight Azure Data Lake as an attractive big-data backend for Azure Analysis Services.

For more details about the preparation of source data in Azure Data Lake and importing it into an Azure Analysis Services model, see the Using Azure Analysis Services on Top of Azure Data Lake Storage post on the Analysis Services team blog.
Source: Azure

Introducing B-Series, our new burstable VM size

Today I am excited to announce the preview of the B-Series, a new Azure VM family that provides the lowest cost of any existing size, with flexible CPU usage. For many workloads that run in Azure, like web servers, small databases, and development and test environments, CPU usage is very bursty: these workloads run for long stretches using only a small fraction of the available CPU performance, and then spike to needing the full power of the CPU due to incoming traffic or required work. With our current sizes, you are still paying for the full CPU during those low points, so that you can handle the high, bursty points.

The B-Series offers a cost-effective way to deploy workloads that do not need the full performance of the CPU continuously and instead burst. While a B-Series VM is running at a low point and not fully utilizing the baseline performance of the CPU, the VM instance builds up credits. When the VM has accumulated enough credits, you can burst your usage, up to 100% of the vCPU, for the period of time when your application requires higher CPU performance.

These VM sizes allow you to pay and burst as needed, using only a fraction of the CPU when you don’t need more and bursting up to 100% of the CPU when you do (on Intel® Haswell 2.4 GHz E5-2673 v3 processors or better). This level of control gives you significant cost flexibility and value.
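To make the credit mechanism concrete, here is a minimal C# sketch of how credit banking behaves. It is only an illustrative model (the real accounting, credit caps, and units are internal to the Azure platform), using the Standard_B1s 10% baseline as an example:

// Illustrative model of B-Series credit banking (not Azure's actual accounting).
// Assumption: credits are measured in vCPU-minutes and accrue or drain once per minute.
using System;

class BurstCreditModel
{
    static void Main()
    {
        double baseline = 0.10; // Standard_B1s baseline: 10% of one vCPU
        double credits = 0.0;   // banked vCPU-minutes

        // Simulated per-minute CPU usage: a long idle stretch, then a short spike.
        double[] usage = { 0.02, 0.02, 0.02, 0.02, 0.02, 1.00, 1.00, 0.02 };

        foreach (var u in usage)
        {
            // Below the baseline, the unused headroom is banked; above it, credits drain.
            credits = Math.Max(0, credits + (baseline - u));
            Console.WriteLine($"usage={u:P0}, banked credits={credits:F2} vCPU-minutes");
        }
    }
}

In this toy model, five idle minutes at 2% usage bank 0.40 vCPU-minutes of credit, which is then exhausted during the first minute of the full-CPU burst; the real service enforces per-size accrual rates and credit caps that this sketch does not attempt to reproduce.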

The B-Series comes in the following six VM sizes during preview. Prices are for the US East region; preview prices are shown in parentheses:

| Size | vCPUs | Memory (GiB) | Local SSD (GiB) | Baseline CPU performance of VM | Max CPU performance of VM | Linux price/hour (preview) | Windows price/hour (preview) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_B1s | 1 | 1 | 4 | 10% | 100% | $0.012 ($0.006) | $0.017 ($0.009) |
| Standard_B1ms | 1 | 2 | 4 | 20% | 100% | $0.023 ($0.012) | $0.032 ($0.016) |
| Standard_B2s | 2 | 4 | 8 | 40% | 200% | $0.047 ($0.024) | $0.065 ($0.033) |
| Standard_B2ms | 2 | 8 | 16 | 60% | 200% | $0.094 ($0.047) | $0.122 ($0.061) |
| Standard_B4ms | 4 | 16 | 32 | 90% | 400% | $0.188 ($0.094) | $0.229 ($0.115) |
| Standard_B8ms | 8 | 32 | 64 | 135% | 800% | $0.376 ($0.188) | $0.439 ($0.219) |

Get more information on the burstable VM sizes. To participate in this preview, request quota in the supported region you would like to use. After your quota has been approved, you can use the portal or the APIs to deploy as you normally would.

We are launching the preview in the following regions, and expect to add more later this year:

US – West 2
US – East
Europe – West
Asia Pacific – Southeast

See ya around, 

Corey
Source: Azure

Try Azure #CosmosDB for free

Today we are launching Try Azure Cosmos DB for free, an experience that allows anyone to play with Azure Cosmos DB, with no Azure sign-up required and at no charge for a limited time. As many of you know, Azure Cosmos DB is the first globally distributed, massively scalable, multi-model database service. The service is designed to allow customers to elastically and horizontally scale both throughput and storage across any number of geographical regions. It also offers guaranteed <10 ms latencies at the 99th percentile, 99.99% high availability, and five well-defined consistency models that let developers make precise tradeoffs between performance, availability, and consistency of data. Azure Cosmos DB is also the first globally distributed database service in the market today to offer comprehensive Service Level Agreements (SLAs) for throughput, latency, availability, and consistency.

Why did we launch Try Cosmos DB for free? It’s simple. We want to make it easy for developers to evaluate Azure Cosmos DB, build and test their app against Azure Cosmos DB, do a hands-on lab or tutorial, create a demo, or perform unit testing without incurring any costs. Our goal is to enable any developer to easily experience Azure Cosmos DB and what it has to offer, become more comfortable with our database service, and build expertise with our stack at zero cost. With Try Cosmos DB for free, you can go from nothing to a fully running planet-scale Azure Cosmos DB app in less than a minute.

Try Cosmos DB Now

Try it out for yourself; it takes less than a minute. Or watch this quick video.

1. Go to Try Azure Cosmos DB for free page.

2. Pick the API/data model of your choice: SQL (DocumentDB), MongoDB, Table, or Gremlin (Graph). Then click Create. Note that you will need to log in using a Microsoft account (a.k.a. Live ID). 

In seconds, you will have your newly created free Azure Cosmos DB account with an invite to open it in the Azure portal and try out our Quick Starts.

3. Click Open in Azure Portal, which will navigate the browser to the newly created free Azure Cosmos DB account with Quick Starts page open.

4. Follow the Quick Starts to get a running app connected to Azure Cosmos DB in under 30 seconds, or proceed to explore the service on your own.

When in the portal, you will be reminded how long you have before your account expires.

You can extend the trial period for another 24 hours, or click on the link to sign up for a Free Trial (if you are new to Azure) or create a new Azure Cosmos DB account if you already have a subscription.

With Try Azure Cosmos DB for free, you can create a container (a collection of documents, a table, or a graph), globally distribute it to up to three regions, and use any of the capabilities Azure Cosmos DB provides for 24 hours. Once the trial expires, you can always come back and create it all over again. A minimal sketch of what such an app looks like follows below.
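The Quick Starts walk you through building that first app, but as a rough sketch of what the first few calls against the SQL (DocumentDB) API look like with the .NET SDK (the Microsoft.Azure.DocumentDB package), here is a minimal example; the endpoint, key, and database/collection names are placeholders you would replace with the values from your trial account.

// Minimal sketch against the SQL (DocumentDB) API; endpoint, key, and names are placeholders.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class Program
{
    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"),
            "<your-primary-key>");

        // Create a database and a collection if they don't exist yet.
        await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "demoDb" });
        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("demoDb"),
            new DocumentCollection { Id = "demoCollection" });

        // Insert a simple document.
        await client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri("demoDb", "demoCollection"),
            new { id = "1", message = "Hello, Cosmos DB!" });

        Console.WriteLine("Document created.");
    }
}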

Play with Azure Cosmos DB and let us know what you think

Azure Cosmos DB is the database of the future; we believe it is the next big thing in the world of massively scalable databases! It makes your data available close to where your users are, worldwide. It is a globally distributed, multi-model database service for building planet-scale apps with ease, using the API and data model of your choice. You won’t know until YOU TRY IT!

If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow. Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter #CosmosDB, @AzureCosmosDB.

– Your friends at Azure Cosmos DB
Source: Azure

Windows Authentication in Service Fabric and ASP.NET Core 2.0

Recently, I worked on a Service Fabric solution for a customer, where my team had to add secure communication capabilities to existing reliable (stateless) services built on top of the ASP.NET Core 2.0 framework. More specifically, we had to configure Windows Authentication on them and choose WebListener as the web server to process HTTP requests from remote Windows clients. Compared with the previous versions (1.x), the latest version of ASP.NET Core brings slight differences in the names of some packages and libraries, and in the way a WebListener is configured on a stateless service. This article highlights these aspects and describes a way to properly configure a Service Fabric (SF) reliable stateless service given these requirements.

We will leverage the features, improvements, and support made available by the latest release of the Service Fabric SDK for Windows (v5.7.198). Among others, I'd like to focus on the following feature:

ASP.NET Core 2.0 support: the Microsoft.ServiceFabric.AspNetCore.* NuGet packages now support ASP.NET Core 2.0, the latest major version of the open-source, cross-platform framework for building modern cloud-ready web applications.

For fully documented release notes, please visit the Azure Service Fabric Team Blog.

In the following sections, we'll build a simple ASP.NET Core 2.0 application, packaged as a stateless service, using the Stateless ASP.NET Core project template provided by Visual Studio. We'll then configure security on the application to perform Windows-authenticated calls.

Some assumptions:

Since the sample is built and deployed on a local SF cluster, make sure that the latest versions of both the SF SDK and runtime are installed on your local machine through the Web Platform Installer, and that the cluster is started with the 1-Node / 5-Node configuration
The latest .NET Core SDK is installed (v2.0.0)
Visual Studio 2017 with support for ASP.NET Core 2.0 is installed (v15.3)

Service Fabric Application

1. Open Visual Studio as Administrator.
2. Create a Service Fabric application and name it MyApplication.
3. Create a Stateless ASP.NET Core service and name it MyAspNetService.
4. Make sure you select ASP.NET Core 2.0 in the following dialog. For this example, I used the Empty project template and No Authentication as the authentication method (we'll set it programmatically).

Wait until Visual Studio has set up both the application and the service projects, then navigate to the MyAspNetService.cs file, which contains the class representing the SF stateless service used for our purposes. Here's the signature and the body of the CreateServiceInstanceListeners(…) method, which developers can override to create diverse listeners for this service instance, even custom ones:

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
return new ServiceInstanceListener[]
{
new ServiceInstanceListener(serviceContext =>
new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
{
ServiceEventSource.Current.ServiceMessage(serviceContext, $"Starting Kestrel on {url}");

return new WebHostBuilder()
.UseKestrel()
.ConfigureServices(
services => services
.AddSingleton<StatelessServiceContext>(serviceContext))
.UseContentRoot(Directory.GetCurrentDirectory())
.UseStartup<Startup>()
.UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
.UseUrls(url)
.Build();
}
)
)
};
}

As you can see, a communication listener based on Kestrel has already been set up as the default listener for the only service endpoint configured (ServiceEndpoint). ASP.NET Core comes with two server implementations, briefly described below. We chose WebListener as the web server, since it supports Windows Authentication.
Kestrel
Kestrel is a cross-platform HTTP server based on the libuv library, which provides asynchronous I/O across platforms. As previously shown, Kestrel is the web server included by default in new ASP.NET Core project templates.
It supports the following features:

HTTPS
Opaque upgrade used to enable WebSockets
Unix sockets for high performance behind Nginx
WebListener
WebListener is a Windows-only HTTP server, based on the Http.Sys kernel-mode driver.
WebListener supports the following features:

Windows Authentication
Port sharing
HTTPS with SNI
HTTP/2 over TLS (Windows 10)
Direct file transmission
Response caching
WebSockets (Windows 8)
Supported Windows versions:

Windows 7 and Windows Server 2008 R2 and later
Learn more about WebListener web server implementation in ASP.NET Core.
Modify the CreateServiceInstanceListeners(…) method as shown below:

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
return new ServiceInstanceListener[]
{
new ServiceInstanceListener(serviceContext =>
new WebListenerCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
{
ServiceEventSource.Current.ServiceMessage(serviceContext, $"Starting WebListener on {url}");
return new WebHostBuilder()
.UseHttpSys(
options =>
{
options.Authentication.Schemes = AuthenticationSchemes.Negotiate; // Microsoft.AspNetCore.Server.HttpSys
options.Authentication.AllowAnonymous = false;
/* Additional options */
//options.MaxConnections = 100;
//options.MaxRequestBodySize = 30000000;
//options.UrlPrefixes.Add("http://localhost:5000");
}
)
.ConfigureServices(
services => services
.AddSingleton<StatelessServiceContext>(serviceContext))
.UseContentRoot(Directory.GetCurrentDirectory())
.UseStartup<Startup>()
.UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
.UseUrls(url)
.Build();
}
)
)
};
}

The following NuGet packages are required to make a successful build:

Microsoft.ServiceFabric.AspNetCore.WebListener (v2.7.198)
Microsoft.AspNetCore.Server.HttpSys (v2.0.0)
Here's the using region:

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.HttpSys;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ServiceFabric.Services.Communication.AspNetCore;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;
using System.Collections.Generic;
using System.Fabric;
using System.IO;
Considerations

The packages Microsoft.AspNetCore.Server.WebListener and Microsoft.Net.Http.Server have been merged into the aforementioned new package Microsoft.AspNetCore.Server.HttpSys. The namespaces have been updated to match. This is reflected in calling UseHttpSys() extension method instead of UseWebListener().

Windows Authentication is enabled through the HttpSys options, by setting:
options.Authentication.Schemes to the enum value AuthenticationSchemes.Negotiate
options.Authentication.AllowAnonymous to false
Learn more about HTTP.sys web server implementation in ASP.NET Core.

I included additional options in the code (commented out) regarding:

maximum client connections
maximum request body size
URLs and port configuration options
Publish and test
Once you publish the application, and the service instance(s) are up and running in your local cluster environment, you can send HTTP requests to the endpoint configured in the ServiceManifest.xml of the service project (in my example, http://localhost:8234); a sketch of such a client call follows the manifest snippet below.
Snippet from ServiceManifest.xml:

<?xml version="1.0" encoding="utf-8"?>
<ServiceManifest Name="MyAspNetServicePkg"
Version="1.0.0"
xmlns="http://schemas.microsoft.com/2011/01/fabric"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ServiceTypes>
<!-- This is the name of your ServiceType.
This name must match the string used in RegisterServiceType call in Program.cs. -->
<StatelessServiceType ServiceTypeName="MyAspNetServiceType" />
</ServiceTypes>

<!-- Code package is your service executable. -->
<CodePackage Name="Code" Version="1.0.0">
<EntryPoint>
<ExeHost>
<Program>MyAspNetService.exe</Program>
<WorkingFolder>CodePackage</WorkingFolder>
</ExeHost>
</EntryPoint>
</CodePackage>

<!-- Config package is the contents of the Config directory under PackageRoot that contains an
independently-updateable and versioned set of custom configuration settings for your service. -->
<ConfigPackage Name="Config" Version="1.0.0" />

<Resources>
<Endpoints>
<!-- This endpoint is used by the communication listener to obtain the port on which to
listen. Please note that if your service is partitioned, this port is shared with
replicas of different partitions that are placed in your code. -->
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8234" />
</Endpoints>
</Resources>
</ServiceManifest>
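To exercise the Windows-authenticated endpoint from a remote client, one simple option is a console app that sends the caller's Windows credentials. This is only a sketch: the URL matches the manifest above, and the /api/values route is a placeholder for whatever your Startup pipeline actually exposes.

// Sketch of a client call against the Windows-authenticated endpoint.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Client
{
    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        var handler = new HttpClientHandler
        {
            // Send the current Windows user's credentials (Negotiate: Kerberos or NTLM).
            UseDefaultCredentials = true
        };

        using (var client = new HttpClient(handler))
        {
            // Placeholder route; replace with one exposed by your service.
            var response = await client.GetAsync("http://localhost:8234/api/values");
            Console.WriteLine($"Status: {response.StatusCode}");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}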

You can find the fully working example on my GitHub repository. -AA
Source: Azure

Create flexible ARM templates using conditions and logical functions

In this blog post, I will walk you through some of the new capabilities we have in our template language expressions for Azure Resource Manager templates.

Background

A common ask from customers is: "How can we use conditions in our ARM templates, so that if a user selects parameter A, resource A is created, and if not, resource B is created instead?" Until now, the only way to achieve this was to use nested templates and have a mainTemplate manipulate the deployment graph.

A common pattern can be seen below, where the user will select either new or existing.

{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"newOrExisting": {
"type": "String",
"allowedValues": [
"new",
"existing"
]
}
},
"variables": {
"templatelink": "[concat('https://raw.githubusercontent.com/krnese/ARM/master/', concat(parameters('newOrExisting'),'StorageAccount.json'))]"
},
"resources": [
{
"apiVersion": "2017-05-10",
"name": "nestedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "incremental",
"templateLink": {
"uri": "[variables('templatelink')]",
"contentVersion": "1.0.0.0"
},

In the variables declaration, the link to the template is constructed based on the parameter input.

The example shows that the URI to the template will either be ‘https://raw.githubusercontent.com/krnese/ARM/master/newStorageAccount.json’ or ‘https://raw.githubusercontent.com/krnese/ARM/master/existingStorageAccount.json’.  

This approach does work, and the template deploys successfully regardless of whether the user selects new or existing. However, relying on nested templates can lead to many templates over time, where some of them end up completely empty just to avoid a failure in the deployment graph. Further, the complexity grows as you normally use other resource types besides a storage account, and potentially have multiple conditions involved.

Another frequently used technique was to manipulate complex variables based on input parameters to determine certain properties of a resource. The example below shows how ARM could navigate within the complex variables to create either Linux or Windows virtual machines, based on the platform parameter, which allowed Windows or Linux as input.

"osType": "[variables(concat('osType',parameters('platform')))]",
"osTypeWindows": {
"imageOffer": "WindowsServer",
"imageSku": "2016-Datacenter",
"imagepublisher": "MicrosoftWindowsServer"
},
"osTypeLinux": {
"imageOffer": "UbuntuServer",
"imageSku": "12.04.5-LTS",
"imagepublisher": "Canonical"
},

Needless to say, we have heard from customers that this should be simplified, and that the template language itself should support and handle conditions more easily.

Introducing support for conditions

As we always appreciate feedback from our customers, we are glad to remind you that (as announced at //BUILD this year) we have added support for conditions on resources, as well as many more capabilities. These include logical and comparison functions that can be used in the template language when handling conditions.

I will show a practical example of how the new capabilities can be leveraged in ARM templates. Today, letting your users select whether a virtual machine should be Windows or Linux based typically requires manipulating a complex variable, as demonstrated earlier in this blog post. In addition, if your user needs to decide whether the virtual machine should go into production, treating an availability set as an optional resource requires nested templates that either deploy the resource or are empty.

In total, this would require at least 3 templates (remember that 2 of them would be nested templates).

By using conditions and functions, we can now accomplish this with a single template! Not to mention a lot less JSON. Let’s start by walking through some of the parameters we are using in the sample template, and explain the steps we have taken.

"parameters": {
"vmNamePrefix": {
"type": "string",
"defaultValue": "VM",
"metadata": {
"description": "Assign a prefix for the VM you will create."
}
},
"production": {
"type": "string",
"allowedValues": [
"Yes",
"No"
],
"metadata": {
"description": "Select whether the VM should be in production or not."
}
},
"platform": {
"type": "string",
"allowedValues": [
"WinSrv",
"Linux"
],
"metadata": {
"description": "Select the OS type to deploy."
}
},
"pwdOrssh": {
"type": "securestring",
"metadata": {
"description": "If Windows, specify the password for the OS username. If Linux, provide the SSH."
}
},

Besides assigning the virtual machine a prefix, we also have parameters for production and platform.

For production, the user can simply select yes or no. If yes, we want to ensure that the virtual machine being created gets associated with an availability set, since this resource needs to be in place prior to the virtual machine creation. To support this, we have added the following resource to the template:

{
"condition": "[equals(parameters('production'), 'Yes')]",
"type": "Microsoft.Compute/availabilitySets",
"apiVersion": "2017-03-30",
"name": "[variables('availabilitySetName')]",
"location": "[resourceGroup().location]",
"properties": {
"platformFaultDomainCount": 2,
"platformUpdateDomainCount": 3
},
"sku": {
"name": "Aligned"
}
},

Note the condition property. We are using a comparison function, equals(arg1, arg2), which checks whether two values equal each other. In this case, if the production parameter equals Yes, ARM will process this resource during runtime. If not (No is selected), the resource will not be provisioned.

For the virtual machine resource in our template, we have declared the reference to the availability set based on the condition we introduced.

"properties": {
"availabilitySet": "[if(equals(parameters('production'), 'Yes'), variables('availabilitySetId'), json('null'))]",

We’re using if(), which is one of our logical functions. The function takes three arguments: the first is the condition (a Boolean) to evaluate, the second is the value to return when it is true, and the third is the value to return when it is false. The net result is that when the user selects yes for the production parameter, the virtual machine gets associated with the availability set declared in our template, which has already been created because of the condition. If the user selects no, the availability set isn’t created, and hence there is no association from the virtual machine resource.

We also have a platform parameter, which decides whether we are creating a Windows or a Linux virtual machine. To simplify the language expressions throughout the template, we added the values for Linux and Windows to our variables section.

"windowsOffer": "WindowsServer",
"windowsSku": "2016-Datacenter",
"windowsPublisher": "MicrosoftWindowsServer",
"linuxOffer": "UbuntuServer",
"linuxSku": "12.04.5-LTS",
"linuxPublisher": "Canonical",

On the virtual machine resource, more specifically within the storageProfile section where the image is selected, we refer to our variables for Windows or Linux.

"storageProfile": {
"imageReference": {
"publisher": "[if(equals(parameters('platform'), 'WinSrv'), variables('windowsPublisher'), variables('linuxPublisher'))]",
"offer": "[if(equals(parameters('platform'), 'WinSrv'), variables('windowsOffer'), variables('linuxOffer'))]",
"version": "latest",
"sku": "[if(equals(parameters('platform'), 'WinSrv'), variables('windowsSku'), variables('linuxSku'))]"
},

If a user selects WinSrv for the platform parameter, we grab the values of the variables pointing to the Windows image. If Linux is selected instead, we refer to the Linux variables. The result is a virtual machine running either Windows Server 2016 or Ubuntu.

Last but not least, we also have our output section in the template, which will provide the user with some instructions based on their selection.

"outputs": {
"vmEndpoint": {
"type": "string",
"value": "[reference(concat(variables('pNicName'))).dnsSettings.fqdn]"
},
"platform": {
"type": "string",
"value": "[parameters('platform')]"
},
"connectionInfo": {
"type": "string",
"value": "[if(equals(parameters('platform'), 'WinSrv'), 'Use RDP to connect to the VM', 'Use SSH to connect to the VM')]"
}
}

If the user deploys a Windows Server, they will see the following output:

If the user deploys Linux, they will see this:

Summary

We believe that by introducing support for conditions on the resources you declare in your templates, and by enhancing the template language itself with logical and comparison functions, we have set the scene for much simpler templates. You can now move away from the workarounds you previously needed when implementing conditions, and should achieve much more flexibility when deploying complex apps, resources, and topologies in Azure.

The full template is available here.
Source: Azure

Azure Stream Analytics drives retail industry transformation with real-time insights

Over the past decade, few industries have experienced as rapid a pace of transformation as the retail sector. Driven by competition from innovative online merchants, traditional retailers are having to adapt quickly and transform themselves into efficient omnichannel players to successfully carve out an enduring competitive advantage.

Retailers are racing to cater to digitally savvy customers

Evolving shopping habits and customer preferences make this strategic shift challenging. Customers across every demographic are becoming digitally savvy, getting their cues from social media while comparing products and pricing on the web. They are even comfortable placing orders for large, durable items online. Such trends and shifting customer preferences require retailers to market to their target customers digitally and engage them contextually via online advertising, social media, email, and SMS messages in real time. This requires handling large streams of data in real time and making decisions instantaneously.

Sustaining competitive advantage is increasingly complex

As retailers expand their strategy to include multiple channels, they are having to monitor and manage an ever-increasing and complex set of operations, such as fleet management, inventory optimization, anomaly detection within point-of-sale transactions, and much more. These, too, require processing and analytics of real-time data streams.

Azure Stream Analytics delivers real-time insights

To engage their customers and make key operational decisions, retailers across a broad spectrum are turning to Azure Stream Analytics. Using Azure Stream Analytics, customers can uncover insights from data generated by clickstream logs, transaction logs, devices, sensors, and applications with sub-second latencies. These insights can help generate alerts, power rich visual dashboards, and kick off workflows based on pre-determined logic.

A recent customer case study for Azure Stream Analytics features Worldsmart, the largest point of sale (POS) technology provider to independent grocery retailers in Australia. By using emerging, cutting-edge technologies such as Stream Analytics and R Server, they were able to successfully provide real-time machine learning forecasts and analytics to their retail customers. These advancements are helping companies innovate beyond providing just traditional POS systems, and are allowing them to differentiate themselves from competitors.

Azure Stream Analytics – A representative data pipeline

Here are some of the key scenarios that we routinely see our retail customers use Stream Analytics for:

Real-time omnichannel promotions: Azure Stream Analytics enables retailers to pursue real-time customer engagement across multiple channels, while continuously updating and acting upon a 360-degree profile of their customers’ preferences and shopping patterns. This requires that retailers successfully handle huge volumes of website clickstream data alongside geolocation and proximity location data. This in turn helps them offer the right promotion at the right time to their customers whether they are online or in the store.
Real-time website fraud detection and content monitoring: The content on a retail website is ever-changing, with new products being launched and old products retiring. By using Azure Stream Analytics, retailers can monitor all relevant information being added and removed and, more importantly, detect any anomalies in real time. Examples of anomalies include the removal of a product with significant revenue, incomplete information for a newly onboarded product (such as a blank image or description), and the use of offensive language within a description, among others. Azure Stream Analytics can also help retailers identify and prevent fraud on the website in real time and thereby minimize losses; for example, it can be used to identify clickstream fraud, prevent potential denial-of-service attacks, identify web content scraping by third parties, and monitor cardholder authentication bypasses with multiple tries.
Real-time inventory replenishment and demand management: By matching current inventory levels with real-time point-of-sale transaction logs and online orders, retailers can predict potential stock-outs as well as extrapolate future demand trends. With these data points, they can automatically kick off necessary workflows to replenish inventory levels before stock-outs can occur and negatively affect profits and customer satisfaction levels.
Fleet management and driver alerting: Most retailers have their own fleet of vehicles that they use to transport merchandise from distribution centers to stores, and sometimes between stores. Retailers want to ensure that these shipments are delivered in a timely and safe manner. To do so, they equip their fleet with sensors that generate telemetry such as vehicle geo-positioning, driver alertness, fuel levels, and air pressure. These streams of data are continuously analyzed using Azure Stream Analytics, and alerts are generated to help with scenarios such as re-routing vehicles in the wake of anticipated bad weather, alerting the driver to fill up on gasoline if the next gas station is more than a certain distance away, and proactively alerting the authorities in case of any mishaps.

While the examples and benefits above are some of the more prevalent scenarios leveraging Azure Stream Analytics within our retail customer base, there are many other scenarios currently gaining prominence, such as conserving power by adjusting store lighting based on ambient light, and heat-mapping the store’s foot traffic to help with better merchandise placement and staff allocation.

Get started now

Available globally across more than 30 regions, Azure Stream Analytics has thousands of customers worldwide. We welcome you to learn more about Azure Stream Analytics and give it a try today.
Source: Azure

Bot conversation history with Azure Cosmos DB

The Bot Framework provides a service for tracking the context of a conversation, called Bot Framework State. It enables you to store and retrieve data associated with a user, a conversation, or a specific user within the context of a conversation.

In this article, it is assumed you have previous experience developing bots with the Bot Builder SDK (C# or Node.js), so we will not go into the details of bot implementation.

Before using Cosmos DB, it’s helpful to understand the importance of storing conversation data and how it is stored in Bot Framework State.

Why store conversation data?

Some scenarios where conversation data can be useful:

Analytics: When you want to analyze user data and conversations in near real time. You can also apply Machine Learning models and tools (like Microsoft Cognitive Services APIs).
Some examples:

Sentiment analysis to track the quality of a conversation  
Funnel analysis of messages in bots to identify where the Natural Language Processing (such as LUIS) has failed or can be improved to better handle input messages
Metrics: Number of active or new users and message counts (to determine the level of engagement the bot has with users)

Audit: When you have to store data of all users for audit purposes. It can even be a requirement, depending on your solution.

How conversation data is stored

The conversation data is stored in three different structures (in JSON format), known as property bags:

UserData: It's the user property bag, where the ID is the user ID. It stores user data globally across all conversations. It's useful for storing data about the user that isn't dependent on a specific conversation. For example, you can track all conversations of a specific user and get additional information (username, birth date and so on):

{
"id": "emulator:userdefault-user",
"botId": "<your Bot ID>",
"channelId": "emulator",
"conversationId": "<your conversation ID>",
"userId": "<user ID>",
"data": {
"username": "Fernando de Oliveira"
},
"_rid": "9G5GANrnJQADAAAAAAAAAA==",
"_self": "dbs/9G5GAA==/colls/9G5GANrnJQA=/docs/9G5GANrnJQADAAAAAAAAAA==/",
"_etag": ""01008737-0000-0000-0000-5993a11d0000"",
"_attachments": "attachments/",
"_ts": 1502847257
}

This is an example of how you can save data to UserData in C#:

private bool userWelcomed;

public virtual async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
{
var message = await result;

string userName;

if (!context.UserData.TryGetValue("username", out userName))
{
PromptDialog.Text(context, ResumeAfterPrompt, "Before we get started, please tell me your name?");
return;
}

if (!userWelcomed)
{
userWelcomed = true;
await context.PostAsync($"Welcome back {userName}!");
}

// Always register the next wait, even if the user was already welcomed.
context.Wait(MessageReceivedAsync);
}

private async Task ResumeAfterPrompt(IDialogContext context, IAwaitable<string> result)
{
try
{
var userName = await result;
userWelcomed = true;

await context.PostAsync($"Welcome {userName}!");

context.UserData.SetValue("username", userName);
}
catch (TooManyAttemptsException ex)
{
await context.PostAsync($"Oops! Something went wrong :( Technical Details: {ex}");
}

context.Wait(MessageReceivedAsync);
}

ConversationData: It's the conversation property bag, where the ID is the conversation ID. It stores data related to a single conversation globally. This data is visible to all users within the conversation. For example, in a group conversation you can set a default language for your bot (the language your bot will use to understand and interact with group members):

{
"id": "emulator:conversation<your conversation ID>",
"botId": "<your Bot ID>",
"channelId": "emulator",
"conversationId": "<your conversation ID>",
"userId": "default-user",
"data": {
"defaultLanguage": "pt-BR"
},
"_rid": "9G5GANrnJQAEAAAAAAAAAA==",
"_self": "dbs/9G5GAA==/colls/9G5GANrnJQA=/docs/9G5GANrnJQAEAAAAAAAAAA==/",
"_etag": ""0800357b-0000-0000-0000-598b52060000"",
"_attachments": "attachments/",
"_ts": 1502302725
}

This is an example of how you can save data to ConversationData:

public async Task StartAsync(IDialogContext context)
{
string language;

if (!context.ConversationData.TryGetValue("defaultLanguage", out language))
{
language = "pt-BR";
context.ConversationData.SetValue("defaultLanguage", language);
}

await context.PostAsync($"Hi! I'm currently configured for {language} language.");

context.Wait(MessageReceivedAsync);
}

PrivateConversationData: It's the private conversation property bag, where the ID is a merge of the user ID and the conversation ID. It stores data related to a single conversation, visible only to the current user within that conversation. It's useful for storing temporary data that you want to be cleaned up when a conversation ends (like a browser cache). For example, in a bot for online purchases you can save an order ID:

{
"id": "emulator:private<your conversation ID>:default-user",
"botId": "<your Bot ID>",
"channelId": "emulator",
"conversationId": "<your conversation ID>",
"userId": "default-user",
"data": {
"ResumptionContext": {
"locale": pt-BR",
"isTrustedServiceUrl": false
},
"DialogState": "<dialog state ID>",
"orderId": "<order ID>"
},
"_rid": "9G5GANrnJQAXAAAAAAAAAA==",
"_self": "dbs/9G5GAA==/colls/9G5GANrnJQA=/docs/9G5GANrnJQAXAAAAAAAAAA==/",
"_etag": ""0100f938-0000-0000-0000-5993ab090000"",
"_attachments": "attachments/",
"_ts": 1502849796
}

This is an example of how you can save data to PrivateConversationData:

string orderId;

if (!context.PrivateConversationData.TryGetValue("orderId", out orderId))
{
// Generic method to generate an order ID
orderId = await GetOrderIdAsync();

context.PrivateConversationData.SetValue("orderId", orderId);

await context.PostAsync($"{userName}, this is your order ID: {orderId}");
}

Cosmos DB rather than Bot Framework State

By default, the Bot Framework uses the Bot Framework State service to store conversation data. It is designed for prototyping and is useful for development and testing environments. At the time of this writing, it has a size limit of only 32 KB. Learn more about data management.

For production environments, it's highly recommended to use a NoSQL database, such as Azure Cosmos DB, to store data as documents. It's a multi-model database (supporting document, graph, key-value, table, and column-family models) that offers some key benefits, including:

Global distribution: It's possible to distribute your data across different Azure Regions, ensuring low latency to users.
Horizontal Scalability: You can easily scale your database at per-second granularity, and scale storage size up and down automatically according to your needs.
Availability: You can ensure your database will have at least 99.99% availability in a single region.

For the document model, there are options like DocumentDB and MongoDB. In this article we are going to use the DocumentDB API.

Storing Conversation Data

To customize your bot conversation data storage, you can use the Bot Builder SDK Azure Extensions. If you are developing your bot with the Bot Builder SDK in C#, you have to edit the Global.asax.cs file:

protected void Application_Start()
{
// Adding DocumentDB endpoint and primary key
var docDbServiceEndpoint = new Uri("<your documentDB endpoint>");
var docDbKey = "<your documentDB key>";

// Creating a data store based on DocumentDB
var store = new DocumentDbBotDataStore(docDbServiceEndpoint, docDbKey);

// Adding Azure dependencies to your bot (documentDB data store and Azure module)
var builder = new ContainerBuilder();

builder.RegisterModule(new AzureModule(Assembly.GetExecutingAssembly()));

// Key_DataStore is the key for data store register with the container
builder.Register(c => store)
.Keyed<IBotDataStore<BotData>>(AzureModule.Key_DataStore)
.AsSelf()
.SingleInstance();

// After adding new dependencies, update the container
builder.Update(Conversation.Container);

GlobalConfiguration.Configure(WebApiConfig.Register);
}

If you run your bot and open your Cosmos DB account in the Azure portal, you can see all the stored documents (by clicking Data Explorer). You can also query them programmatically, as sketched below.
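Here is a minimal sketch of such a query with the DocumentDB SDK; the endpoint and key are placeholders, and the database and collection names should be whatever your DocumentDbBotDataStore was configured with (or its defaults).

// Sketch: querying the bot state documents with the DocumentDB SDK.
// Endpoint, key, database, and collection names are placeholders.
using System;
using Microsoft.Azure.Documents.Client;

class StateQuery
{
    static void Main()
    {
        var client = new DocumentClient(
            new Uri("<your documentDB endpoint>"), "<your documentDB key>");

        var collectionUri = UriFactory.CreateDocumentCollectionUri("<database id>", "<collection id>");

        // Return every property bag saved for a given user across conversations.
        var query = client.CreateDocumentQuery<dynamic>(
            collectionUri,
            "SELECT * FROM c WHERE c.userId = 'default-user'");

        foreach (var doc in query)
        {
            Console.WriteLine(doc);
        }
    }
}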

References

Key concepts in the Bot Builder SDK for .NET
Bot Builder: Manage state data
Where is conversation state stored?
Azure Cosmos DB Documentation
Bot Builder Samples

Source: Azure