Announcing General Availability of Consumption and Charge APIs for Enterprise Azure customers

We are very pleased to announce the General Availability release of the Azure Consumption and Charge APIs that we previewed in May of this year. Enterprise customers can use these APIs to pull Azure charge and usage data for both Azure and Marketplace resources. These APIs enable organizations to gain deep insights into their usage and spend for all workloads running on Azure. The Usage Details and Marketplace Store Charges APIs have been enhanced to support both custom date ranges and billing periods. Also, the Price Sheet API now has a new Meter ID column, which can be used to look up the usage details for a specific meter.

The current preview APIs mentioned in our preview blog posts will remain operational until December 4, 2017, and will be deprecated after that.

Learn more by reading the detailed documentation on getting started with the APIs. We also offer a Power BI content pack that enterprise customers can use to perform detailed analysis of their Azure usage and spend data.

Details of the APIs

Balance and Summary: The Balance and Summary API offers a monthly summary of information on balances, new purchases, Azure Marketplace service charges, adjustments, and overage charges.
Usage Details: The Usage Details API offers a daily breakdown of consumed quantities and estimated charges for an enrollment. The result also includes information on instances, meters, and departments. The API can be queried by billing period or by a specified start and end date.
Marketplace Store Charge: The Marketplace Store Charge API returns the usage-based marketplace charges breakdown by day for the specified billing period or start and end dates.
Price Sheet: The Price Sheet API provides the applicable rate for each meter for the given enrollment and billing period. We have also added a meterId field to help customers cross check their data with usage.
Billing Periods: The Billing Periods API returns a list of billing periods that have consumption data for the specified enrollment in reverse chronological order. Each period contains a property pointing to the API route for the four sets of data, Balance Summary, Usage Details, Marketplace Charges, and Price Sheet.
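As an illustration, the routes described above can be assembled into request URLs with a few lines of Python. This is only a sketch: the `consumption.azure.com` base path, route names, and query parameter names are assumptions based on the preview documentation and should be verified against the GA reference; the enrollment API key is passed as a bearer token.

```python
import urllib.parse

BASE = "https://consumption.azure.com/v2/enrollments"  # assumed EA endpoint

def usage_details_url(enrollment, start=None, end=None, billing_period=None):
    """Build a Usage Details route for either a billing period or a custom date range."""
    if billing_period:
        return f"{BASE}/{enrollment}/billingperiods/{billing_period}/usagedetails"
    query = urllib.parse.urlencode({"startTime": start, "endTime": end})
    return f"{BASE}/{enrollment}/usagedetailsbycustomdate?{query}"

def api_headers(api_key):
    # EA APIs authenticate with the enrollment access key as a bearer token
    return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
```

The resulting URL can then be fetched with any HTTP client (for example, `requests.get(url, headers=api_headers(key))`) and the JSON response paged through as described in the documentation.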

What’s next?

We are working on providing this data in ARM as part of a consistent, channel-agnostic API set. As always, we welcome any feedback or suggestions you may have; these can be sent to us via the Azure Feedback Forum and the Azure MSDN forum. We will continue to enhance these offerings with additional functionality to provide richer insights into your usage and spend data for all workloads running on Azure.
Source: Azure

SIGGRAPH 2017: Microsoft Azure enables Secure Rendering with services and certs!

At SIGGRAPH 2017 in Los Angeles, CA this week (at the LA Convention Center), Microsoft is demoing the latest in Azure-based rendering services and partner solutions, as well as showcasing a new security guide on safely bursting to the cloud for massive production scale. Come check us out and get more familiar with the industry’s first and only CDSA CPS-certified and MPAA-audited public cloud!

CDSA CPS-certified and MPAA-audited

Come visit us at booth #923 to learn about Azure Batch and High Performance Compute for creating VFX in Azure’s public cloud. We’ll be showcasing secure operations that help you protect sensitive pre-release content while you collaborate with artists and facilities around the world.

Secure Handling in Hollywood

Download the new Microsoft Azure Cloud Platform Hardening Guide developed by Independent Security Evaluators for securely handling sensitive Hollywood studio assets. Using open-source rendering tools such as Blender together with Batch, Key Vault, Azure Active Directory, and other cloud services, you can quickly and easily deploy a small or large render farm to produce the next blockbuster!

Rendering in the Cloud

If you’re already experienced in building render workflows with tools such as Conductor and Royal Render, which can natively burst to Azure resources, then you’ll be pleased to know about Microsoft’s new partnership with Autodesk that brings the power of Maya, 3ds Max, and Arnold to the cloud. Learn more about all of these solutions at our booth, too.

Microsoft is also hosting a number of sessions this year, in particular a deep-dive on Secure Burst Rendering to the Microsoft Cloud on Tuesday, August 1 at 5PM (Room 409A). We’re presenting with Jellyfish Pictures, an innovative VFX studio in London, UK, who uses Azure services today to securely produce assets for big-budget films, international TV, commercial customers, and more.

Just the FACT

Last, but not least, we have renewed Azure’s commitment to protecting intellectual property with our certification from the Federation Against Copyright Theft (FACT) for 2017. Check out the latest report available on our Service Trust Platform.

Enjoy the show!
Source: Azure

Build apps faster with Azure Serverless

Azure’s Serverless offerings allow developers to build and deploy elastic-scale applications faster than ever. Serverless technology allows developers to focus on their apps rather than provisioning, managing, and scaling the underlying infrastructure. Azure provides unique serverless tools to accelerate development by seamlessly tapping into the benefits of the cloud.

See a quick overview here:

Traditionally, writing a new application couldn’t begin until a few fundamental questions had been answered regarding the infrastructure: Where will this app run? How will this app scale to meet demand? How can I monitor my app? These and many similar questions take a significant portion of development and operations investment. Developers like to write code, and businesses like to focus on their business problems. Azure Serverless enables just that by abstracting the infrastructure and making only the app code and business logic central.

The Azure Serverless platform provides a series of fully managed services spanning compute, storage, database, orchestration, monitoring, analytics, and intelligence to help construct serverless applications for any kind of scenario. In this blog post, we focus on two pieces central to serverless application development: Azure Functions and Azure Logic Apps.

Azure Functions provides Functions-as-a-Service: you simply provide your code (whether C#, JavaScript, Python, or one of the many other supported languages) and it executes on demand. Azure Functions can be authored and debugged locally on a developer’s machine and can stream data in and out of other services, such as Azure Storage and Event Hubs, through a unique concept called bindings. Functions scale automatically to meet application needs, so a Function that runs successfully locally will scale up to potentially process billions of events in the cloud.
Azure Logic Apps provides serverless workflows in the cloud. For example, consider an operation like adding a new customer: there may be several pieces of functionality to execute. You may need to add the customer to a database, generate a welcome email, create a new user login, and create an entry in CRM. Logic Apps allows orchestration of data and processing to bring these isolated steps into a coherent workflow. Logic Apps comes with over 150 connectors to services such as Visual Studio Team Services, Salesforce, and SAP, which lets developers easily integrate data in and out of their serverless apps instead of writing complex glue code between disparate systems. Logic Apps also allows you to orchestrate and connect the Functions and APIs of your application together.
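Logic Apps themselves are defined declaratively in the portal rather than in code, so purely to illustrate the orchestration idea, here is a plain-Python sketch of the "new customer" workflow described above. All step names and state fields are hypothetical; each step stands in for one isolated piece of functionality that the workflow stitches together.

```python
def add_to_database(customer, state):
    # stand-in for a database connector action
    state["db_row"] = f"row for {customer['email']}"
    return state

def send_welcome_email(customer, state):
    # stand-in for an email connector action
    state["email_sent"] = True
    return state

def create_login(customer, state):
    # stand-in for an identity provisioning action
    state["login"] = customer["email"].split("@")[0]
    return state

def create_crm_entry(customer, state):
    # stand-in for a CRM connector action (deterministic fake ID)
    state["crm_id"] = 1000 + len(customer["email"])
    return state

NEW_CUSTOMER_WORKFLOW = [add_to_database, send_welcome_email,
                         create_login, create_crm_entry]

def run_workflow(steps, customer):
    """Run each isolated step in order, threading shared state through,
    the way a Logic App carries outputs from one action to the next."""
    state = {}
    for step in steps:
        state = step(customer, state)
    return state
```

In a real Logic App, each of these steps would be a connector action in the designer, and the "state" would be the outputs of earlier actions referenced by later ones.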

The Azure Serverless platform provides an easy-to-use canvas for building virtually any kind of cloud application, bringing together IoT, data processing, automation, messaging, and intelligence with greater agility and power in delivering end-to-end solutions. For example, First Gas recently completed a complex application with Dynamics 365 and SAP in only four months. The Chief Information Officer expressed that without Serverless, “…there’s no way we would have accomplished this level of integration in four months.”

Check out the video above for a demo of serverless tools in Azure, and be sure to try out some of the Serverless quickstarts to get your first serverless app built in a matter of minutes.
Source: Azure

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Deploying DC/OS with Azure Container Service

This post is part of the “Mesosphere DC/OS, Azure, Docker, VMware & Everything Between” multiple blog post series. In the previous posts for this series, I looked at the following topics:

Mesosphere DCOS, Azure, Docker, VMware and everything between – Architecture and CI/CD Flow

Mesosphere DCOS, Azure, Docker, VMware and everything between – Security & Docker Engine Installation

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – SSH Authorized Keys

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Deploying DC/OS with Azure Container Service

What a joy! We have a working DC/OS cluster on top of vSphere, but now it’s time to deploy another cluster using Azure Container Service (ACS). Fear not, it will be much quicker to get this baby up and running in Azure, with no pain whatsoever.

As you remember, in our scenario, we will have two DC/OS clusters. One will be used to run the “Production” Docker containers and the second one for “Integration & Testing”.

To deploy the cluster in Azure, we will use the magic of Azure Container Service, which is a semi-managed container orchestration platform. It supports all of the big three orchestrators: DC/OS, Kubernetes, and Docker Swarm. Unlike a manual on-premises deployment, ACS will do the heavy lifting for us. All you need to do is state how many master and slave nodes you want, and that’s it.

Another major difference between ACS deployment and an on-premises one is that in Azure, DC/OS must be deployed with both private and public slave nodes. If you remember, in our vSphere based deployment, we didn’t install any public agents.

Now, there are many blog posts, KBs, and articles on how to use and deploy DC/OS with ACS, so I’ll try to keep this as short but comprehensive as possible. IMHO, Microsoft’s ACS documentation is a very good place to start.

 

Read more about all the details around DC/OS 1.9 deployment on top of VMware vSphere on my personal blog.
Source: Azure

Training a neural network to play Hangman without a dictionary

Authors: Mary Wahl, Shaheen Gauher, Fidan Boylu Uz, Katherine Zhao

Summary

We used reinforcement learning and CNTK to train a neural network to guess hidden words in a game of Hangman. Our trained model has no reliance on a reference dictionary: it takes as input a variable-length, partially-obscured word (consisting of blank spaces and any correctly-guessed letters), and a binary vector indicating which letters have already been guessed. In the git repository associated with this post, we provide sample code for training the neural network and deploying it in an Azure Web App for gameplay.

Motivation

In the classic children's game of Hangman, a player's objective is to identify a hidden word of which only the number of letters is originally known. In each round, the player guesses a letter of the alphabet: if the letter is present in the word, all instances of the letter are revealed; otherwise, one of the hangman's body parts is drawn in on a gibbet. The game ends in a win if the word is entirely revealed by correct guesses, and ends in loss if the hangman's body is completely revealed instead. To assist the player, a visible record of all letters guessed so far is typically maintained.

A common Hangman strategy is to compare the partially-revealed word against all of the words in a player’s vocabulary. If a unique match is found, the player simply guesses the remaining letters; if there are multiple matches, the player can guess a letter that distinguishes between the possible words while minimizing the expected number of incorrect guesses. Such a strategy can be implemented algorithmically (without machine learning) using a pre-compiled reference dictionary as the vocabulary. Unfortunately, this approach will likely give suboptimal guesses or fail outright if the hidden word is not in the player’s vocabulary. This issue occurs commonly in practice, since children selecting hidden words often choose proper nouns or commit spelling errors that would not be present in a reference dictionary.
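The dictionary-based strategy above can be sketched in a few lines of Python. The scoring used here, guessing the unguessed letter that appears in the most candidate words, is one simple variant of the approach; it is our illustration, not the post's code.

```python
from collections import Counter

def matches(pattern, word, guessed):
    """True if `word` is consistent with the partially revealed `pattern`.
    Pattern uses '_' for unrevealed letters; a blank cannot hide a letter
    the player has already guessed (those would have been revealed)."""
    if len(pattern) != len(word):
        return False
    for p, w in zip(pattern, word):
        if p == "_":
            if w in guessed:
                return False
        elif p != w:
            return False
    return True

def dictionary_guess(pattern, guessed, vocabulary):
    """Guess the unguessed letter that appears in the most candidate words."""
    candidates = [w for w in vocabulary if matches(pattern, w, guessed)]
    counts = Counter(c for w in candidates for c in set(w) if c not in guessed)
    return max(counts, key=counts.get) if counts else None
```

For pattern `"_ha_"` with `h` and `a` already guessed and a vocabulary of `than`, `that`, and `chat`, all three words remain candidates and the strategy guesses `t`, the only letter common to all of them. When the hidden word is outside the vocabulary, `candidates` is empty and the strategy fails outright, which is exactly the weakness described above.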

An alternative strategy robust to such issues is to make guesses based on the frequencies of letters and letter combinations in the target language. For an English-language game, such strategies might include beginning with vowel guesses, guessing the letter U when a Q has already been revealed, recognizing that some letters or n-grams are more common than others, etc. Because of the wide array of learnable patterns and our own a priori uncertainty of which would be most useful in practice, we decided to train a neural network to learn appropriate rules for guessing hidden words without relying on a reference dictionary.

Model Design and Training

Our model has two main inputs: a partially-obscured hidden word, and a binary vector indicating which letters have already been guessed. To accommodate the variable length of hidden words in Hangman, the partially-obscured word (with “blanks” representing any letters in the word that have not yet been guessed) is fed into a Long Short Term Memory (LSTM) recurrent network, from which only the final output is retained. The LSTM’s output is spliced together with the binary vector indicating previous guesses, and the combined input is fed into a single dense layer with 26 output nodes that represent the network’s possible guesses, the letters A-Z. The model’s output “guess” is the letter whose node has the largest value for the given input.
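The post does not spell out the exact input encoding, but a typical scheme for the two inputs described above looks like the following sketch (the 27-dimensional per-character vector, 26 letters plus a blank symbol, is an assumption on our part):

```python
import string

ALPHABET = string.ascii_lowercase  # the 26 possible guesses, A-Z

def encode_word(obscured):
    """One-hot encode a partially obscured word; '_' marks an unrevealed letter.
    Each position becomes a 27-dim vector (26 letters + blank) fed to the LSTM,
    which handles the variable word length."""
    vectors = []
    for ch in obscured:
        v = [0] * 27
        v[26 if ch == "_" else ALPHABET.index(ch)] = 1
        vectors.append(v)
    return vectors

def encode_guesses(guessed):
    """Binary 26-vector of previously guessed letters, spliced onto the
    LSTM's final output before the dense layer."""
    return [1 if c in guessed else 0 for c in ALPHABET]
```

With this encoding, the dense layer sees the LSTM summary of the visible pattern alongside the guess history, and its 26 outputs are scored to pick the next letter.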

We created a wrapper class called HangmanPlayer to train this model using reinforcement learning. The hidden word and model are provided when an instance of HangmanPlayer is created. In the first round, HangmanPlayer queries the model with an appropriately-sized series of blanks (since no letters have been revealed yet in the hidden word) and an all-zero vector of previous guesses. HangmanPlayer stores the input it provided to the model, as well as the model’s guess and feedback on the guess’s quality. Based on the guess, HangmanPlayer updates the input (to reveal any correctly-guessed letters and indicate which letter has been guessed), then queries the model again… and so forth until the game of Hangman ends. Finally, HangmanPlayer uses the input, output, and feedback it stored to further train the model. Training continues when a new game of Hangman is created with the next hidden word in the training set (drawn from Princeton’s WordNet).
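The query-guess-feedback loop that HangmanPlayer drives can be sketched as follows. This is a simplified stand-in: the model is just a callable, and the +1/-1 reward scheme and miss limit are our assumptions, since the post does not specify the exact feedback used for training.

```python
def play_hangman(hidden_word, guess_fn, max_misses=6):
    """Drive one game, recording (pattern, guess, reward) tuples,
    mirroring the input/output/feedback that HangmanPlayer stores
    to further train the model after the game ends."""
    guessed, misses, history = set(), 0, []
    while misses < max_misses:
        pattern = "".join(c if c in guessed else "_" for c in hidden_word)
        if "_" not in pattern:
            return True, history          # word fully revealed: win
        guess = guess_fn(pattern, guessed)
        reward = 1.0 if guess in hidden_word and guess not in guessed else -1.0
        history.append((pattern, guess, reward))
        if reward < 0:
            misses += 1
        guessed.add(guess)
    return False, history                  # hangman completed: loss
```

In the real HangmanPlayer, `guess_fn` is the neural network being trained, and the accumulated history becomes the training signal before the next word from WordNet is drawn.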

Operationalization

Instructions and sample files in our Git repository demonstrate how to create an Azure Web App to operationalize the trained CNTK model for gameplay. This Flask web app is heavily based on Ilia Karmanov’s template for deploying CNTK models using Python 3. The human user visiting the Web App selects their own hidden word – which they never reveal directly – and provides feedback to the model after each guess until the game terminates in either a win or a loss.

For more information on this project, including sample code and instructions for reproducing the work, please see the Azure Hangman git repository.
Source: Azure

App Service Environment v2 release announcement

We are happy to announce an upgrade to the App Service Environment. The App Service Environment (ASE) is a powerful feature of the Azure App Service that provides network isolation and improved scale capabilities. It is essentially a deployment of the Azure App Service into a subnet of a customer’s Azure Virtual Network (VNet). While the feature gave customers the network control and isolation they were looking for, it was not as “PaaS-like” as the rest of the App Service. We took that feedback to heart, and for ASEv2 we focused on making the user experience the same as in the multi-tenant App Service while still providing the benefits the ASE offers. For clarity, I will use the abbreviation ASEv2 to refer to the new App Service Environment and ASEv1 for the initial version.

App Service Plan based scaling

The App Service Plan (ASP) is the scaling container that all apps are in. When you scale the ASP you are also scaling all of the apps in the ASP. This is true for the multi-tenant App Service as well as the ASE. This means that to create an app you need to either choose an ASP or create an ASP. When you wanted to create an ASP in ASEv1 you needed to pick an ASE as your location and then select a worker pool. If the worker pool you wanted to deploy into didn’t have enough capacity then you would have to add more workers to it before you could create your ASP in it.

With ASEv2, when you create an ASP you still select the ASE as your location but instead of picking a worker pool you use the pricing cards just like you do outside of the ASE. There are no more worker pools to manage. When you create or scale your ASP we automatically add the needed workers. To distinguish between ASPs that are in an ASE and those in the multi-tenant service we created a new pricing plan named Isolated. When you pick an Isolated pricing plan during ASP creation, it means that you want the associated ASP to be created in an ASEv2. If you already have an ASEv2 you simply pick the ASE as the location and the size of worker you wish to use.

ASE creation

One of the other things that limited ASE adoption was feature visibility. Many customers did not even know that the ASE feature existed. To create an ASE, you had to find the ASE creation flow, which was completely separate from app creation. In ASEv1, customers needed to add workers to their worker pools in order to create ASPs. Now that workers are added automatically when ASPs are created or scaled, we are able to place the ASEv2 creation experience squarely in the ASP creation flow.

To create a new ASEv2 during the ASP creation experience, select a location that is not an ASE and select one of the new Isolated SKU cards. When you do this the ASE creation UI is displayed which enables you to create a brand new ASEv2 in a new VNet or in a pre-existing VNet.

Additional benefits

Due to the changes that were made with the system architecture, the ASEv2 has a few additional benefits over ASEv1. With an ASEv1 the maximum default scale was 50 workers. With ASEv2 the maximum default scale is now 100. That means that you can have up to 100 ASP instances hosted in an ASEv2. That can be anything from 100 instances of an ASP to 100 individual ASPs, with anything in between.

The ASEv2 also now uses Dv2-based dedicated workers, which have faster CPUs, twice the memory per core, and SSDs. The new ASE dedicated worker sizes are 1 core 3.5 GB, 2 core 7 GB, and 4 core 14 GB. The end result is that 1 core on ASEv2 performs better than 2 cores in ASEv1.

To learn more about the ASEv2 you can start with the Introduction to the App Service Environment. For a list of the ASE related documents you can also look at App Service Documentation.
Source: Azure

Import Power BI Desktop files into Azure Analysis Services

Last week we released a preview of the Azure Analysis Services web designer. This new browser-based experience will allow developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) are still the primary tools for development, this new experience is intended to make simple changes fast and easy. It is great for getting started on a new model or to do things such as adding a new measure to a development or production AAS model.

The Azure Analysis Services web designer now allows you to import data models from a Power BI Desktop file (PBIX) into Azure Analysis Services. Once imported to AAS, you will be able to use those models with all of the AAS features including table partitioning.

You can import your own PBIX file by following the steps below.

Before getting started, you need:

An Azure Analysis Services server at the Standard or Developer tier.
A Power BI Desktop (.pbix) file. New models created from Power BI Desktop files support Azure SQL Database, Azure SQL Data Warehouse, Oracle, and Teradata data sources.

Importing a Power BI Desktop file

1. In your server's Overview blade > Web designer, click Open.

 

2. In Web designer > Models, click + Add.

3. In New model, type a model name, and then select Power BI Desktop file.

4. Browse for the file you wish to import and then click Import.

At this point, the model inside your desktop file will be converted to an Azure Analysis Services model. You can then query the model directly on the web, or open it in Power BI Desktop as a live connection. Further edits to the model can be made in the Azure Analysis Services web designer or through Visual Studio.

Learn more about Azure Analysis Services and the Azure Analysis Services web designer.
Source: Azure

Amping up your disaster recovery with Azure Site Recovery

If you are in the process of building or revising your business continuity plans, it’s worth taking a look at Azure Site Recovery (ASR). ASR is a disaster recovery service that allows you to fail over on-premises applications, running on Linux or Windows and virtualized with VMware or Hyper-V, to Azure in the event of an outage.

On today’s episode of Microsoft Mechanics, I’ll walk you through how Azure Site Recovery can help you to keep your applications available, including setting up replication for your on-premises applications to Azure and testing that the solution meets your compliance needs.

Getting started with Azure Site Recovery

As discussed on today’s demo-bench, we’ve reduced the complexity traditionally involved in setting up disaster recovery. ASR is built into Azure. As long as you have an Azure subscription, you can get started today, and it's free to use for the first 31 days.

Also with the Azure Hybrid use benefit, you can apply existing Windows Server Licenses toward this effort – which you can learn more about from Chris Van Wesep on his recent demo bench.

Three pivotal steps

There are three pivotal steps to get up and running. The first is preparing your local infrastructure, where depending on which platform you are using, we point you to the Azure Site Recovery on-premises components needed to replicate your applications. In our example today, you’ll see the experience for replicating your applications with VMware ESX using vCenter. This directly connects Azure to your vCenter instance on-premises.

The step after that is to replicate your applications, which is facilitated by a guided experience within the Azure Portal. This includes things like selecting the target where your applications will land in Azure, your virtual machines, configuration properties, and replication settings.

The last step is to create and store your recovery plan. This is also where you can customize your recovery and test failover without impacting production workloads or end users. For example, you can sequence the failover of multi-tier applications running on multiple VMs, and you can use Azure Automation to automate some of the common post-failover steps.

Of course, once set up, you can then test for failover as I demonstrate today.

As you move forward with your business continuity plan, you’ll want to use Azure Backup to protect your data to mitigate against corruption, accidental deletion, or ransomware. Azure Backup is also fully integrated with Azure and protects data running on Linux and Windows and virtualized with VMware and Hyper-V. You can learn more here.

We hope that you find today’s overview helpful. Please let us know your thoughts and feel free to post your questions.
Source: Azure

Enhance Azure SQL Data Warehouse performance with new monitoring functionality for Columnstore

Azure SQL Data Warehouse (SQL DW) is a SQL-based petabyte-scale, massively parallel, cloud solution for data warehousing. It is fully managed and highly elastic, enabling you to provision and scale capacity in minutes. You can scale compute and storage independently, allowing you to range from burst to archival scenarios.

Azure SQL DW is powered by a Columnstore engine to provide super-fast performance for analytic workloads. This is the same Columnstore engine that is built into Microsoft’s industry-leading SQL Server database. To get full speed from Azure SQL DW, it is important to maximize Columnstore row group quality. A row group is the chunk of rows that are compressed together in the Columnstore. To enable easier monitoring and tuning of row group quality, we are now exposing a new Dynamic Management View (DMV).

What is a High Quality Row Group?

A row group with 1 million rows (1,048,576 rows to be precise) is of ideal quality, and under the right circumstances this is what Azure SQL DW will create. Under sub-optimal conditions, such as insufficient memory, row groups with fewer rows get created. This can adversely impact compression quality as well as increase the per-row overhead of ancillary structures for row groups. This in turn can dramatically reduce the performance of your queries (note: SQL DW now prevents creation of row groups with fewer than 10,000 rows).

How to Monitor Row Group Quality?

Azure SQL DW now has a new DMV (sys.dm_pdw_nodes_db_column_store_row_group_physical_stats) for exposing information about physical statistics of row groups for a Columnstore table. Don’t be intimidated by the long name – we do have a convenient view (vCS_rg_physical_stats) that you can use to get information from this DMV. The key piece of information is the trim_reason_desc that tells whether a row group was prematurely trimmed or not. If it was not trimmed, then it is of ideal quality (trim_reason_desc = NO_TRIM). If it was trimmed, then the trim_reason_desc is set to the trim reason such as MEMORY_LIMITATION or DICTIONARY_SIZE. The example screenshot below shows a snapshot of a table with poor quality row groups due to various trim reasons.

How to Improve Row Group Quality?

Once you identify trimmed row groups there are corrective actions you can take to fix them depending upon what trim_reason_desc says. Here are some tips for the most significant ones:

BULKLOAD: This trim reason is set when the incoming batch of rows for the load had fewer than 1 million rows. The engine will create compressed row groups any time more than 100,000 rows are being inserted (as opposed to inserting into the delta store) but will set the trim reason to BULKLOAD. To get past this, consider increasing your batch load window to accumulate more rows. Also, re-evaluate your partitioning scheme to ensure it is not too granular, as row groups cannot span partition boundaries.
MEMORY_LIMITATION: To create row groups with 1 million rows, a certain amount of working memory is required by the engine. When the available memory of the loading session is less than the required working memory, row groups get prematurely trimmed. The columnstore compression article explains what you can do to fix this, but in a nutshell the rule of thumb is to use at least a mediumrc user to load your data. You would also need to be on a sufficiently large SLO to have enough memory for your loading needs.
DICTIONARY_SIZE: This indicates that row group trimming occurred because there was at least one string column with very wide and/or high cardinality strings. The dictionary size is limited to 16MB in memory and once this is reached the row group is compressed. If you do run into this situation, consider isolating the problematic column into a separate table.
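The triage above is mechanical enough to script. As a hedged sketch, the helper below summarizes rows fetched from the `vCS_rg_physical_stats` view (for example via pyodbc, filtering on `trim_reason_desc <> 'NO_TRIM'`) and maps each trim reason to the corresponding tip; the advice strings are paraphrases of this post, and the row-dictionary shape is an assumption about how you would fetch the view's rows.

```python
IDEAL_ROWS = 1_048_576  # a full, ideal-quality row group

# Remediation advice keyed by trim_reason_desc, paraphrasing the tips above
ADVICE = {
    "NO_TRIM": "Ideal quality; no action needed.",
    "BULKLOAD": "Batch under 1M rows: widen the load window; check partition granularity.",
    "MEMORY_LIMITATION": "Load with at least a mediumrc user on a large enough SLO.",
    "DICTIONARY_SIZE": "Wide/high-cardinality strings: consider isolating the column.",
}

def row_group_report(row_groups):
    """Count row groups per trim reason from rows of vCS_rg_physical_stats
    (each row assumed to be a dict with a 'trim_reason_desc' key)."""
    reasons = {rg["trim_reason_desc"] for rg in row_groups}
    return {
        reason: sum(1 for rg in row_groups if rg["trim_reason_desc"] == reason)
        for reason in reasons
    }
```

Running the report after each load and printing `ADVICE` for any non-`NO_TRIM` reasons gives a quick, repeatable health check on Columnstore quality.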

The screenshot below shows a copy of the same table with row group quality fixed by following the recommendations to avoid trimming due to MEMORY_LIMITATION.

Next Steps

Now that you know how to monitor your Columnstore row group quality, you can maintain it for optimal performance, both proactively as part of your regular loads and by fixing quality issues if they arise. If you are not already using Azure SQL DW, we encourage you to try it out for your Business Intelligence and Business Analytics workloads.

Learn More

Check out the many resources for learning more about Azure SQL DW, including:

What is Azure SQL Data Warehouse?

SQL Data Warehouse Best Practices

MSDN Forum

Stack Overflow Forum
Source: Azure