Announcing the availability of Clear Linux* OS in Azure Marketplace

Today, we’re excited to announce the availability of Clear Linux* OS for Intel® Architecture in Azure Marketplace. Clear Linux OS is a free, open source Linux distribution built from the ground up for cloud and data center environments and tuned to maximize the performance and value of Intel architecture.

Microsoft Azure is the first public cloud provider to offer Clear Linux, and we’re really excited about what it means for Linux users in the cloud and the community at large. Here’s a little bit more about the Clear Linux offerings in Azure Marketplace that we are announcing today:

A bare-bones VM, intended to serve as a starting point for those wanting to explore and build out a system with bundles of their choosing
A container image that includes the popular Docker container runtime
A sample solution image for developing machine learning applications, pre-loaded with popular open source tools

In addition to the performance features of Clear Linux, we believe that DevOps teams will benefit from the stateless capabilities of Clear Linux in Azure. By separating the system defaults and distribution best practices from the user configuration, Clear Linux simplifies maintenance and deployment, which becomes very important as infrastructure scales. This also pairs well with bundles, a powerful way of distributing software that allows for scenarios like a full system update, including a new kernel, followed by a reboot, in just a few seconds:
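For readers new to Clear Linux, software lands on the system as bundles, and updates are applied with the swupd tool. A quick sketch on a running Clear Linux VM (the bundle name is just an example):

sudo swupd bundle-add containers-basic   # add a bundle with container tooling
sudo swupd update                        # apply the latest OS release, kernel included
sudo reboot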

The availability of Clear Linux in Azure Marketplace adds more open source options to the portfolio of teams and organizations that are looking to accelerate business value and increase agility in the cloud, and it’s a testament to our continued focus on great experiences for Linux users in the hybrid cloud.

It also highlights the openness and flexibility of Azure: teams can choose from a wide array of Linux and open source solutions from Marketplace, partner offerings, or their own portfolio, and add value with a rich and powerful set of cloud services. It is also an important milestone in the broad collaboration between Microsoft and Intel, who worked together throughout 2016 to bring the Azure Linux Agent to Clear Linux and collaborated on kernel and system improvements focused on boot times and performance.

“Our team is delighted to have worked with Intel since day one on this project, and to bring Clear Linux to Azure Marketplace and our customers as a result of that collaboration”, said KY Srinivasan from the Enterprise Open Source Team at Microsoft.

Get started with Clear Linux in Azure today by deploying any of the images in Azure Marketplace, and checking out the Documentation on the Clear Linux site. Don’t have an Azure subscription yet? Get started with a free 30-day trial!
Source: Azure

Azure Automation available in UK and West Central US regions

Azure Automation is now available in the Azure UK and West Central US regions. These new regions give you more options for locating Automation accounts in geographic locations that work best for you.

You can use Azure Automation to create, monitor, deploy, and maintain resources in your Azure, on-premises, and third-party cloud environments, by using highly scalable and reliable process execution and desired state configuration engines.
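If you manage Azure with PowerShell, placing an Automation account in one of the new regions is a one-liner; a minimal sketch using the AzureRM module (resource group and account names are placeholders):

New-AzureRmAutomationAccount -ResourceGroupName "AutomationRG" -Name "MyAutomationAccount" -Location "UK South"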

To learn more and get started with a free trial, see the Azure Automation overview.
Source: Azure

Microsoft Azure Government is First Commercial Cloud to Achieve DoD Impact Level 5 Provisional Authorization, General Availability of DoD Regions

Furthering our commitment to be the most trusted cloud for Government, today Microsoft is proud to announce two milestone achievements in support of the US Department of Defense.

Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency

Azure Government is the first commercial cloud service to be awarded an Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency. This provisional authorization allows all US Department of Defense (DoD) customers to leverage Azure Government for the most sensitive controlled unclassified information (CUI), including CUI of National Security Systems. 

DoD Authorizing Officials can use this Provisional Authorization as a baseline for input into their authorization decisions on behalf of mission owner systems using the Azure Government DoD Regions.

This achievement is the result of the collective efforts of Microsoft, DISA, and its mission partners to work through requirements pertaining to the adoption of cloud services for infrastructure, platform, and productivity across the DoD enterprise.

General Availability of DoD Regions

Information Impact Level 5 requires processing in dedicated infrastructure that ensures physical separation of DoD customers from non-DoD customers. Over the past few months, we ran a preview program with more than 50 customers across the Department of Defense, including all branches of the military, unified combatant commands and defense agencies.

We are thrilled to announce the general availability of the DoD Regions to all validated DoD customers. Key services covering compute, storage, networking, and database are available today with full service level agreements and dedicated Azure Government support.

Dave Milton, Chief Technology Officer for Permuta Technologies, a leading provider of business solutions tailored for the military, affirmed the significance of the general availability of the Azure DoD Regions, saying:

“The Azure Government DoD Regions have given us the ability to deploy our SaaS offering, DefenseReady Cloud, to the US Department of Defense in a scalable, secure, and cost-effective environment. The mission-critical nature of DefenseReady Cloud requires high availability, compliance with DoD’s SRG Impact Level 5 requirements, and scalability to support our customers’ changing demands, with a flexible pricing structure that allows us to offer capability to large enterprises as well as local commands. With the Azure Government DoD Regions, we are now able to onboard a customer in weeks, not months, allowing for a time-to-value that is unparalleled when compared with on-premises or other government-sponsored options. Through our partnership, Microsoft provided direct access to product group engineers, compliance support, training, and other resources needed to bring our SaaS solution to DoD.”

These accomplishments and the commentary of our customers and partners further reinforce our commitment to, and the strength of, our long-standing partnership with the US Department of Defense. For more information on Microsoft Cloud for Government services with Information Impact Level 5 Provisional Authorization, visit the Microsoft in Government blog, and for more detail on the Information Impact Level 5 Provisional Authorization (including in-scope services), please visit the Microsoft Trust Center.

To get started today, customers and mission partners may request access to our Azure Government Trial program.
Source: Azure

New Azure Storage Release – Larger Block Blobs, Incremental Copy, and more!

We are pleased to announce new capabilities in the latest Azure Storage service release and updates to our Storage Client Libraries. This latest release lets users take advantage of an increased block size of 100 MB, which allows block blobs of up to 4.77 TB, as well as features like incremental copy for page blobs and pop-receipt on add message.

REST API version 2016-05-31

Version 2016-05-31 includes these changes:

The maximum block blob size has been increased to 4.77 TB (50,000 blocks × 100 MB) with the increase of the block size limit to 100 MB. Check out our previous announcement for more details.
The Put Message API now returns information about the message that was just added, including the pop receipt. This enables you to call Update Message and Delete Message on the newly enqueued message (see the sketch after this list).
The public access level of a container is now returned from the List Containers and Get Container Properties APIs. Previously this information could only be obtained by calling Get Container ACL.
The List Directories and Files API now accepts a new parameter that limits the listing to a specified prefix.
All Table Storage APIs now accept and enforce the timeout query parameter.
The stored Content-MD5 property is now returned when requesting a range of a blob or file. Previously this was only returned for full blob and file downloads.
A new Incremental Copy Blob API is now available. This allows efficient copying and backup of page blob snapshots.
Using If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
During authentication, the canonicalized header list now includes headers with empty values. Previously these were omitted from the list.
Several error messages have been clarified or made more specific. See the full list of changes in the REST API Reference.
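As an illustration of the new Put Message behavior, here is roughly what the exchange looks like against REST version 2016-05-31 (account, queue, and all values are placeholders):

POST https://myaccount.queue.core.windows.net/myqueue/messages
x-ms-version: 2016-05-31

<QueueMessage><MessageText>SGVsbG8gd29ybGQ=</MessageText></QueueMessage>

The 201 response body now carries the metadata of the enqueued message, including the pop receipt:

<QueueMessagesList>
  <QueueMessage>
    <MessageId>5974b586-0df3-4e2d-ad0c-18e3892bfca2</MessageId>
    <InsertionTime>Fri, 09 Dec 2016 22:15:59 GMT</InsertionTime>
    <ExpirationTime>Fri, 16 Dec 2016 22:15:59 GMT</ExpirationTime>
    <PopReceipt>AgAAAAMAAAAAAAAAtlHkxAEAAAA=</PopReceipt>
    <TimeNextVisible>Fri, 09 Dec 2016 22:15:59 GMT</TimeNextVisible>
  </QueueMessage>
</QueueMessagesList>

The PopReceipt value can be passed straight to Update Message or Delete Message without a separate Get Messages round trip.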

Check out the REST API Reference documentation to learn more.

New client library features

.NET Client Library (version 8.0.1)

All the service features listed above
Support for portable class library (through the NetStandard 1.0 Façade)
Key rotation for client-side encryption for blobs, tables, and queues

For a complete list of changes, check out the change log in our GitHub repository.

Storage Emulator

All the service features listed above

The storage emulator v4.6 is available as part of the latest Microsoft Azure SDK. You can also install the storage emulator using the standalone installer.

We’ll also be releasing new client libraries for Java, C++, Python, and Node.js to support the latest REST version in the next few weeks, along with a new AzCopy release. Stay tuned!
Source: Azure

Azure Networking Fridays with the Azure Black Belt Team – Winter 2017!

Happy 2017 everyone! After wrapping up the Fall 2016 season of Azure Networking Fridays, we're kicking off the 2017 Winter edition!

With that said, join us for our season's premiere on January 20th!

This hour-long session will occur every other Friday this winter and spring. It is open to all customers and partners who want to learn more about Azure Networking, including ExpressRoute and Virtual Networking, and how to plan and design their connectivity to the Microsoft Cloud.

There will be an open Q&A session at the end where customers can ask the experts. Content and partner speakers will vary for each session, but the general agenda is as follows:

Azure Networking fundamentals (10 minutes)
Deep dive topic of the week (15-20 minutes)
Partner spotlight of the week (15-20 minutes)
Q&A

We’re kicking off the winter edition series on Friday, January 20th, 2017.

Join the Skype Meeting and make sure you don’t miss out on future sessions by adding the series to your Outlook calendar. You can also download the ICS file.

Here are a few links that we’re posting for convenience:

Future session recordings will be posted on Channel 9. Previous sessions are already posted on Channel 9.
https://aka.ms/ERCheckList for the checklist presented in our sessions.

January 20th’s call agenda:

Deep dive topic with a Microsoft Guest!
Partner Spotlight with Cisco

Source: Azure

Azure Analysis Services now available in Southeast Asia and East US 2

Last year in October we released the preview of Azure Analysis Services, which is built on the proven analytics engine in Microsoft SQL Server Analysis Services. With Azure Analysis Services you can host semantic data models in the cloud. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc data analysis.
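Client tools address an Azure Analysis Services server by its regional URI; for example, a server hosted in Southeast Asia would be reached with a name along these lines (the server name is a placeholder):

asazure://southeastasia.asazure.windows.net/myanalysisserver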

We are excited to share with you that the preview of Azure Analysis Services is now available in two additional regions: Southeast Asia and East US 2. This means that Azure Analysis Services is available in the following regions: Southeast Asia, North Europe, West Europe, West US, South Central US, East US 2, and West Central US.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Source: Azure

Azure Virtual Machine Internals – Part 2

Continuation from Part 1

In Azure Virtual Machine Internals Part 1 we created a vanilla Windows VM and spent some time poking under the covers and following the leads. In this part we will modify the VM that was created earlier.

Add Disk

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm disk attach-new -g BlogRG -n BlogWindowsVM -z 60 -d newdatadisk -c ReadWrite -o blogrgdisks562 -r newdatadiskc

info: Executing command vm disk attach-new

+ Looking up the VM "BlogWindowsVM"

+ Looking up the storage account blogrgdisks562

info: New data disk location: https://blogrgdisks562.blob.core.windows.net/newdatadiskc/newdatadisk.vhd

+ Updating VM "BlogWindowsVM"

info: vm disk attach-new command OK

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm show -g BlogRG -n BlogWindowsVM -d full --json

Snippet below:

{

"lun": 1,

"name": "newdatadisk",

"vhd": {

"uri": "https://blogrgdisks562.blob.core.windows.net/newdatadiskc/newdatadisk.vhd"

},

"caching": "ReadWrite",

"createOption": "Empty",

"diskSizeGB": 60

}

Note that the caching is by default set to ReadWrite – this means that in addition to read transactions, write transactions are cached and lazily flushed to durable storage. So if the VM fails for whatever reason before the data is flushed, that data will be lost. The topic of data disks, including caching, is covered in the Azure documentation. Set this option to a value that is appropriate for your application.

At this stage you would have to RDP into the VM and initialize, partition, and format the disk. This can be tedious for a large number of VMs or disks and is a natural candidate for automation; a sketch follows.
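A minimal PowerShell sketch of what such automation might run inside the guest, using the built-in Windows Storage cmdlets (the volume label is a placeholder):

# Initialize, partition, and format every raw disk attached to the VM
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -Confirm:$false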

Capture

RDP to the VM, then sysprep, generalize, and shut down.
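For reference, the generalize step inside the guest is the standard Sysprep invocation:

%windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown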

The VM shows as stopped in the Portal, as expected.

The warning about continuing to incur the compute charges is important. Azure charges customers for the usage a VM incurs. Sticking to the core resources, VMs can independently incur Compute, Storage, and Networking charges.

A Compute charge is incurred if a VM is provisioned. The charge is incurred even if the VM is in the ‘stopped’ state (as shown by the warning above), because Azure has still provisioned that VM slot for the customer. The only way to avoid incurring the compute charge is to stop-deallocate or to deprovision the VM.

Stop-deallocating a VM deallocates the VM slot so that it can be allocated to another customer. However, the definition of the VM, along with its state in disks, is maintained so that the VM can be re-provisioned at the customer’s choosing. While a stop-deallocated VM will not accrue Compute charges, it will continue to accrue Storage charges, as the disks are not deleted.

At this stage we have a generalized VM, which means that the OS disk no longer has the customizations that were made when the VM was initially created, including settings like locale, timezone, admin credentials, etc. Let’s see what happens if we try to ‘start’ the stopped VM from the Portal. The ‘start’ operation does a simple power-start and does not take the VM through its specialize sequence as part of provisioning.

Clicking ‘Start’ in the Portal, after a few seconds the VM shows as ‘Running’. Trying to RDP into it times out. Boot Diagnostics to the rescue.

As expected, this VM has detected that its disk is generalized and is in the ‘specialization’ sequence waiting for customer input. This VM cannot be started successfully.

When we sysprep a VM and have generalized its disk, we have to follow through with the following management operations to realize a generalized image that can be used to create one or more new VMs.

Stop VM – powers down the VM

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm stop -g BlogRG -n BlogWindowsVM

info: Executing command vm stop

+ Looking up the VM "BlogWindowsVM"

warn: VM shutdown will not release the compute resources so you will be billed for the compute resources that this Virtual Machine uses.

info: To release the compute resources use "azure vm deallocate".

+ Stopping the virtual machine "BlogWindowsVM"

info: vm stop command OK

Generalize – a metadata change on the VM to mark it as generalized

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm generalize -g BlogRG -n BlogWindowsVM

info: Executing command vm generalize

+ Looking up the VM "BlogWindowsVM"

+ Generalizing the virtual machine "BlogWindowsVM"

info: vm generalize command OK

Capture – captures the generalized image and saves a reusable deployment template

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm capture -g BlogRG -n BlogWindowsVM -p CaptureBlogWVM -R capturecontainer -t C:\temp\CaptureVMTemplate.json

info: Executing command vm capture

+ Looking up the VM "BlogWindowsVM"

+ Capturing the virtual machine "BlogWindowsVM"

info: Saved template to file "C:\temp\CaptureVMTemplate.json"

info: vm capture command OK

The template generated can be used to create new VMs based on the generalized image.

Looking at the storage profile of CaptureVMTemplate.json

"storageProfile": {

"osDisk": {

"osType": "Windows",

"name": "CaptureBlogWVM-osDisk.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd",

"createOption": "FromImage",

"image": {

"uri": https://blogrgdisks562.blob.core.windows.net/system/Microsoft.Compute/Images/capturecontainer/CaptureBlogWVM-osDisk.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd

Above is the URL of the generalized image that the capture operation generated. You will notice that the image is a page blob VHD, just like any other disk VHD. From the Visual Studio Cloud Explorer we can see the three captured VHDs: one for the OS disk and two for the data disks. In storage terms, they are blob snapshots of the disk page blobs. You will notice that the blob snapshots are in the same storage account as the original disks – blogrgdisks562.

Looking at the detail of the generalized OS disk blob (using the command node azure storage blob list -vv), we will notice a few things:

{

"name": "Microsoft.Compute/Images/capturecontainer/CaptureBlogWVM-osDisk.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd",

"lastModified": "Sun, 21 Aug 2016 19:14:25 GMT",

"etag": "0x8D3C9F75DB60C73",

"contentLength": "136367309312",

"contentSettings": {

"contentType": "application/octet-stream",

"contentEncoding": "",

"contentLanguage": "",

"contentMD5": "en7n+5uiKTbMlrhW59lEGg==",

"cacheControl": "",

"contentDisposition": ""

},

"sequenceNumber": "8",

"blobType": "PageBlob",

"lease": {

"status": "unlocked",

This is a generalized image and as such is unlocked, with no outstanding lease

"state": "available"

},

"copy": {

"id": "b069a713-599d-44ad-85d0-e7e255f9a92c",

"progress": "136367309312/136367309312",

"bytesCopied": 136367309312,

"totalBytes": 136367309312,

"source": https://blogrgdisks562.blob.core.windows.net/vhds/BlogWindowsVM2016713231120.vhd?sv=2014-02-14&sr=b&sk=system-1&sig=HTMp66d7wK37Cc12cb2LzDsS6w1YjGyHsIHxPjR2%2F%2F8%3D&st=2016-08-21T18%3A59%3A23Z&se=2016-08-21T20%3A14%3A23Z&sp=rw,

This is the source blob, which is still the OS disk on the stopped VM, BlogWindowsVM. By following the copy trail, you can trace how the blobs get copied. BTW, the source blob still has an infinite lease against it (not shown here) since it is attached as a disk to a VM.

"status": "success",

"completionTime": "Sun, 21 Aug 2016 19:14:25 GMT"

},

"metadata": {

"microsoftazurecompute_capturedvmkey": "/Subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/ResourceGroups/BLOGRG/VMs/BLOGWINDOWSVM",

"microsoftazurecompute_imagetype": "OSDisk",

"microsoftazurecompute_osstate": "Generalized",

"microsoftazurecompute_ostype": "Windows"

}

}

Continuation of the capture template json-

},

"vhd": {

"uri": https://blogrgdisks562.blob.core.windows.net/vmcontainerb05604df-5f0f-4ef2-ab18-76ab7b644cfd/osDisk.b05604df-5f0f-4ef2-ab18-76ab7b644cfd.vhd

The VHD URI mentioned above does not exist yet. It will be created when this template is deployed.

If you intend to create multiple VMs using this template, the VHD URI will have to be changed to be unique from the second VM onwards.

},

"caching": "ReadWrite"

},

"dataDisks": [

{

"lun": 0,

"name": "CaptureBlogWVM-dataDisk-0.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd",

"createOption": "FromImage",

"image": {

"uri": "https://blogrgdisks562.blob.core.windows.net/system/Microsoft.Compute/Images/capturecontainer/CaptureBlogWVM-dataDisk-0.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd"

},

"vhd": {

"uri": "https://blogrgdisks562.blob.core.windows.net/vmcontainerb05604df-5f0f-4ef2-ab18-76ab7b644cfd/dataDisk-0.b05604df-5f0f-4ef2-ab18-76ab7b644cfd.vhd"

},

"caching": "ReadWrite"

},

{

"lun": 1,

"name": "CaptureBlogWVM-dataDisk-1.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd",

"createOption": "FromImage",

"image": {

"uri": "https://blogrgdisks562.blob.core.windows.net/system/Microsoft.Compute/Images/capturecontainer/CaptureBlogWVM-dataDisk-1.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd"

},

"vhd": {

"uri": "https://blogrgdisks562.blob.core.windows.net/vmcontainerb05604df-5f0f-4ef2-ab18-76ab7b644cfd/dataDisk-1.b05604df-5f0f-4ef2-ab18-76ab7b644cfd.vhd"

},

"caching": "ReadWrite"

}

]

},

At this point, the original VM is stopped but still accumulating usage charges for Compute and Storage. If we intend to leave the VM stopped for a length of time, we can save on the Compute charges by stop-deallocating the VM – this deallocates the VM on the Hyper-V host but retains the VM metadata and the disk blobs in Storage.

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm deallocate -g BlogRG -n BlogWindowsVM

info: Executing command vm deallocate

+ Looking up the VM "BlogWindowsVM"

+ Deallocating the virtual machine "BlogWindowsVM"

info: vm deallocate command OK

The get-instance-view command will show the VM status as deallocated (snippets of the result below).

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm get-instance-view -g blogrg -n blogwindowsvm --json

"osDisk": {

"osType": "Windows",

"name": "BlogWindowsVM",

"vhd": {

"uri": https://blogrgdisks562.blob.core.windows.net/vhds/BlogWindowsVM2016713231120.vhd

Even when the VM is stop-deallocated, the OS and data disks are retained

},

"caching": "ReadWrite",

"createOption": "FromImage"

},

"dataDisks": [

{

"lun": 0,

"name": "BlogWindowsVM-20160814-191501427",

"vhd": {

"uri": "https://blogrgdisks562.blob.core.windows.net/vhds/BlogWindowsVM-20160814-191501427.vhd"

},

"caching": "ReadWrite",

"createOption": "Empty",

"diskSizeGB": 50

},

{

"lun": 1,

"name": "newdatadisk",

"vhd": {

"uri": "https://blogrgdisks562.blob.core.windows.net/newdatadiskc/newdatadisk.vhd"

},

"caching": "ReadWrite",

"createOption": "Empty",

"diskSizeGB": 60

}

]

"statuses": [

{

"code": "ProvisioningState/succeeded",

"level": "Info",

"displayStatus": "Provisioning succeeded",

"time": "2016-08-22T02:58:42.797Z"

},

{

"code": "OSState/generalized",

"level": "Info",

"displayStatus": "VM generalized"

OS is generalized, so this VM can be recreated by specializing it

},

{

"code": "PowerState/deallocated",

"level": "Info",

"displayStatus": "VM deallocated"

Compute VM resources are deallocated

Even when a VM is deallocated, its disk storage blobs are still locked, just as when the VM was running. This is to prevent them from being accidentally deleted.

Looking at the blob properties of the disk blob you will notice that Azure continues to maintain an infinite lease on the blob.

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure storage blob list -vv

name: 'BlogWindowsVM-20160814-191501427.vhd',

lastModified: 'Sun, 21 Aug 2016 19:10:50 GMT',

etag: '0x8D3C9F6DD886DB5',

contentLength: '53687091712',

contentSettings: {

contentType: 'application/octet-stream',

contentEncoding: '',

contentLanguage: '',

contentMD5: '',

cacheControl: '',

contentDisposition: ''

},

sequenceNumber: '1',

blobType: 'PageBlob',

lease: {

status: 'locked',

state: 'leased',

duration: 'infinite'

}

Cloning VMs

At this point we can clone VMs from the generalized image that we captured. We can either use the capture template JSON or create a new image using the FromImage option with the generalized image as the parameter value.

I grabbed a template from the Azure Quickstart templates, modified it just enough, and deployed it to create a new VM named CopyBlogVM.

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure group deployment create -g BlogRG -n CopyVMDeployment -f "C:\temp\copyvmtemplate\template.json" -e "C:\temp\copyvmtemplate\parameters.json"

info: Executing command group deployment create

+ Initializing template configurations and parameters

+ Creating a deployment

info: Created template deployment "CopyVMDeployment"

+ Waiting for deployment to complete

+

The relevant snippet from the VM template is the storage profile:

"storageProfile": {

"osDisk": {

"name": "[concat(parameters(&039;virtualMachineName&039;),&039;-osDisk&039;)]",

"osType": "[parameters(&039;osType&039;)]",

"caching": "ReadWrite",

"createOption": "fromImage",

image": {

"uri": "[parameters(&039;osDiskVhdUri&039;)]"

},

"vhd": {

"uri": "[concat(concat(reference(resourceId(&039;blogrg&039;, &039;Microsoft.Storage/storageAccounts&039;, parameters(&039;storageAccountName&039;)), &039;2015-06-15&039;).primaryEndpoints[&039;blob&039;], &039;vhds/&039;), parameters(&039;virtualMachineName&039;), &039;20161228010921.vhd&039;)]"

}

},

"dataDisks": []

},

The osDiskVhdUri parameter is set in the parameters file to the generalized image file:

"osDiskVhdUri": {

"value": https://blogrgdisks562.blob.core.windows.net/system/Microsoft.Compute/Images/capturecontainer/CaptureBlogWVM-osDisk.694733ec-46a0-4e0b-a73b-ee0863a0f12c.vhd

},

CopyBlogVM creates successfully, with an OS disk that starts out as a copy of the generalized OS disk referred to by osDiskVhdUri. Using the generalized OS disk as a template, any number of new VMs can be stamped out. A common scenario would be to capture a generalized VM with new updates/patches and then create new VMs based on the updated image.

In Conclusion

In these two posts we have covered some of the details of how an Azure VM works under the covers. There are other capabilities that we have not covered, including backup, encryption, licensing, planned maintenance, and networking details. Time permitting, we will visit these topics in future posts.
Source: Azure

Microsoft Azure attains DoD PA at Level 5

We are excited to announce that Microsoft Azure Government has received a Provisional Authorization (PA) for DoD Impact Level 5 from the Defense Information Systems Agency (DISA). This PA allows United States Department of Defense (DoD) mission owners and officials to plan, assess, and authorize workloads for Impact Level 5 controlled unclassified information (CUI). This includes workloads supporting National Security Systems, as well as mission-critical data transiting, or being stored or processed within, the Azure Government cloud.

To support our commitment in enabling the DoD Cloud Initiative, Microsoft has established two physically isolated and geographically separated Azure Government regions exclusively for the Department of Defense.  These regions are designed to support Impact Level 5 workloads with stringent DoD security requirements.

The DoD Impact Level 5 PA adds to the list of Azure Government cloud authorizations and assessments which demonstrate adherence to U.S. federal, DoD, and state and local standards. In addition, 2016 saw the Azure Government cloud receive a provisional authorization at the DoD Impact Level 4 and a FedRAMP High P-ATO.  In this way, we are continually expanding our security and compliance footprint by pursuing the most stringent security and compliance requirements to support the needs of government and defense customers.  Achieving the DoD L5 PA affirms our longstanding commitment to deliver the most trusted, comprehensive, and compliant platform, thereby allowing customers to transition their mission-critical workloads to the cloud with confidence.

To get started, DoD customers may request access to our Azure Government Trial program. For a complete list of services certified for DoD Impact Level 5, please refer to the Microsoft Trust Center.
Source: Azure

Azure SQL Database is increasing read and write performance

It is our pleasure to announce that we have doubled the write performance across all Azure SQL Database offerings, and additionally have doubled the read performance for our Premium databases. These performance upgrades come with no price change and are available worldwide.

The increased performance will allow for price optimization of existing workloads as well as for onboarding of even more demanding workloads to the platform.

Heavy OLTP workloads in Premium databases with random read patterns will especially benefit from the increased read performance, and may fit into a smaller performance tier than they are running in today. In general, if your Premium workload is below 50% DTU utilization now, you may be able to run in the next lower Premium performance level.

The increase in write performance will benefit bulk inserts, heavy batched data manipulation, and index maintenance operations. You may notice up to double the logical insert throughput, or half the previous response times.
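One way to check whether a database has the headroom to move down a tier is to query recent resource consumption from inside the database itself; a sketch using the sys.dm_db_resource_stats DMV, which retains roughly an hour of history in 15-second windows:

-- Average resource usage over the retained history (about the last hour)
SELECT AVG(avg_cpu_percent)       AS avg_cpu,
       AVG(avg_data_io_percent)   AS avg_data_io,
       AVG(avg_log_write_percent) AS avg_log_write
FROM sys.dm_db_resource_stats;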

Learn More:

What is Azure SQL Database

Create a SQL Database or an Elastic Database Pool

Azure SQL Database Pricing

Source: Azure

Azure Virtual Machine Internals – Part 1

Introduction

The Azure cloud services are composed of elements from Compute, Storage, and Networking. The compute building block is the Virtual Machine (VM), which is the subject of this post. A web search will yield large amounts of documentation on the commands, APIs, and UX for creating and managing VMs. This is not a 101 or ‘how to’, and the reader is for the most part expected to already be familiar with VM creation and management. The goal of this series is to look at what is happening under the covers as a VM goes through its various states.

Azure provides IaaS and PaaS VMs; in this post, when we refer to a VM we mean an IaaS VM. There are two control-plane stacks in Azure: Azure Service Management (ASM) and Azure Resource Manager (ARM). We will limit ourselves to ARM since it is the forward-looking control plane.

ARM exposes resources like VMs and NICs, but in reality ARM is a thin frontend layer; the resources themselves are exposed by lower-level resource providers like the Compute Resource Provider (CRP), Network Resource Provider (NRP), and Storage Resource Provider (SRP). The Portal calls ARM, which in turn calls the resource providers.

Getting Started

For most customers, their first experience creating a VM is in the Azure Portal. I did the same and created a VM of size ‘Standard DS1 v2’ in the West US region. I mostly stayed with the defaults that the UI presented, but chose to add a ‘CustomScript’ extension. When prompted, I provided a local file ‘Sample.ps’ as the PowerShell script for the ‘CustomScript’ extension. The PS script itself is a single line: Get-Process.

The VM provisioned successfully, but the overall ARM template deployment failed (bright red on my Portal dashboard). A couple of clicks showed that the ‘CustomScript’ extension had failed, and the Portal showed this message:

{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state &;Failed&039;.",
"details": [
{
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.",
"details": [
{
"code": "Conflict",
"message": "{rn "status": "Failed",rn "error": {rn "code": "ResourceDeploymentFailure",rn "message": "The resource operation completed with terminal provisioning state &039;Failed&039;.",rn "details": [rn {rn "code": "VMExtensionProvisioningError",rn "message": "VM has reported a failure when processing extension &039;CustomScriptExtension&039;. Error message: "Finished executing command"."rn }rn ]rn }rn}"
}
]
}
]
}
}

It wasn’t immediately clear what had gone wrong. We can dig in from here and, as is often true, failures teach us more than successes.

I RDPed into the just-provisioned VM. The logs for the VM Agent are in C:\WindowsAzure\Logs. The VM Agent is a system agent that runs in all IaaS VMs (customers can opt out if they would like). The VM Agent is necessary to run extensions. Let’s peek into the logs for the CustomScript extension:

C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\CustomScriptHandler

[1732+00000001] [08/14/2016 06:19:17.77] [INFO] Command execution task started. Awaiting completion…

[1732+00000001] [08/14/2016 06:19:18.80] [ERROR] Command execution finished. Command exited with code: -196608

The fact that the failure logs are cryptic hinted that something catastrophic had happened. So I re-looked at my input and realized that I had the file extension for the PS script wrong. I had it as Sample.ps when it should have been Sample.ps1. I updated the VM this time specifying the script file with the right extension. This succeeded as shown by more records appended to the log file mentioned above.

[3732+00000001] [08/14/2016 08:42:24.04] [INFO] HandlerSettings = ProtectedSettingsCertThumbprint: , ProtectedSettings: {}, PublicSettings: {FileUris: [https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS%2BZwp%2B8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw%3D&se=2016-08-15T08%3A41%3A30Z&sp=rw], CommandToExecute: powershell -ExecutionPolicy Unrestricted -File simple.ps1 }

[3732+00000001] [08/14/2016 08:42:24.04] [INFO] Downloading files specified in configuration…

[3732+00000001] [08/14/2016 08:42:24.05] [INFO] DownloadFiles: fileUri = https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS+Zwp+8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw=&se=2016-08-15T08:41:30Z&sp=rw

[3732+00000001] [08/14/2016 08:42:24.05] [INFO] DownloadFiles: Initializing CloudBlobClient with baseUri = https://iaasv2tempstorewestus.blob.core.windows.net/

[3732+00000001] [08/14/2016 08:42:24.22] [INFO] DownloadFiles: fileDownloadPath = Downloads

[3732+00000001] [08/14/2016 08:42:24.22] [INFO] DownloadFiles: asynchronously downloading file to fileDownloadLocation = Downloads\simple.ps1

[3732+00000001] [08/14/2016 08:42:24.24] [INFO] Waiting for all async file download tasks to complete…

[3732+00000001] [08/14/2016 08:42:24.29] [INFO] Files downloaded. Asynchronously executing command: 'powershell -ExecutionPolicy Unrestricted -File simple.ps1 '

[3732+00000001] [08/14/2016 08:42:24.29] [INFO] Command execution task started. Awaiting completion…

[3732+00000001] [08/14/2016 08:42:25.29] [INFO] Command execution finished. Command exited with code: 0

The CustomScript extension takes a script file, which can be provided as a file in a Storage blob. The Portal offers a convenience where it accepts a file from the local machine. I had provided Simple.ps1, which was in my temp folder. Behind the scenes, the Portal uploads the file to a blob, generates a shared access signature (SAS), and passes it on to CRP. In the logs above you can see that URI.

This URI is worth understanding. It is a Storage blob SAS with the following attributes for an account in West US (which is the same region where my VM is deployed):

se=2016-08-15T08:41:30Z means that the SAS is valid until that time (UTC). Comparing it to the timestamp on the corresponding log record (08/14/2016 08:42:24.05), it is clear that the SAS was generated for a period of 24 hours.
sr=c means that this is a container-level policy.
sp=rw means that the access is for both read and write.
The shared access signature (SAS) documentation has the full details.
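You can mint a similar SAS yourself with the same xplat CLI used elsewhere in these posts; a rough sketch (the container name is a placeholder, and the exact syntax may vary across CLI versions):

node azure storage container sas create mycontainer rw 2016-08-15T08:41:30Z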

I asserted above that this is a storage account in West US. That may be apparent from the name of the storage account (iaasv2tempstorewestus), but it is not a guarantee. So how can you verify that this storage account (or any other storage account) is in the region it claims to be in?

A simple nslookup on the blob DNS name reveals this:

C:\Users\yunusm>nslookup iaasv2tempstorewestus.blob.core.windows.net

Server: PK5001Z.PK5001Z

Address: 192.168.0.1

Non-authoritative answer:

Name: blob.by4prdstr03a.store.core.windows.net

Address: 40.78.112.72

Aliases: iaasv2tempstorewestus.blob.core.windows.net

The blob URL is a CNAME to a canonical DNS name, blob.by4prdstr03a.store.core.windows.net. Experimentation will show that more than one storage account maps to a single canonical DNS name. The ‘by4’ in the name gives a hint as to which region it is located in. As per the Azure Regions page, the West US region is in California. Looking up the geo location of the IP address (40.78.112.72) indicates a more specific area within California.

Understanding the VM

Now that we have a healthy VM, let’s understand it more. As per the Azure VM Sizes page, this is the VM that I just created:

Size: Standard_D1_v2
CPU cores: 1
Memory: 3.5 GB
NICs (max): 1
Max. disk size: Temporary (SSD) = 50 GB
Max. data disks (1023 GB each): 2
Max. IOPS (500 per disk): 2×500
Max network bandwidth: moderate

This information can be fetched programmatically by doing a GET.
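This is presumably the List Virtual Machine Sizes operation, along these lines (the subscription ID is a placeholder):

GET https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.Compute/locations/westus/vmSizes?api-version=2016-03-30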

Returns this:

{

"name": "Standard_DS1_v2",

"numberOfCores": 1,

"osDiskSizeInMB": 1047552,

"resourceDiskSizeInMB": 7168,

"memoryInMB": 3584,

"maxDataDiskCount": 2

}

Doing a GET on the VM we created
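The request is presumably of this shape, using the subscription ID and resource group seen later in the response:

GET https://management.azure.com/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM?api-version=2016-03-30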
Returns the following. Let's understand this response in some detail; I have interleaved inline annotations with the JSON below.
{
"properties": {
"vmId": "694733ec-46a0-4e0b-a73b-ee0863a0f12c",
"hardwareProfile": {
"vmSize": "Standard_DS1_v2"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2012-R2-Datacenter",
"version": "latest"

The interesting field here is the version. Publishers can have multiple versions of the same image at any point in time. Popular images are revved, typically on a monthly frequency, with the security patches. Major new versions are released as new SKUs. The Portal defaulted me to the latest version. As a customer, I can choose to pick a specific version as well, whether I deploy through the Portal or through an ARM template using the CLI or REST API, the latter being the preferred method for automated scenarios. The problem with specifying a particular version is that it can render the ARM template fragile: the deployment will break if the publisher unpublishes that specific version in one or more regions, as a publisher can do. So unless there is a good reason not to, the preferred value for the version setting is latest. As an example, the following images of the SKU 2012-R2-Datacenter are currently in the West US region, as returned by the CLI command azure vm image list:

MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20151120     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20151120
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20151214     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20151214
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160126     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160126
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160229     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160229
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160430     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160430
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160617     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160617
MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160721     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160721

},
"osDisk": {
"osType": "Windows",
"name": "BlogWindowsVM",
"createOption": "FromImage",
"vhd": {
"uri": https://blogrgdisks562.blob.core.windows.net/vhds/BlogWindowsVM2016713231120.vhd

The OS disk is a page blob and starts out as a copy of the source image that the publisher has published. Looking at the metadata of this blob and correlating it to what the VM itself reports is instructive. Using the Cloud Explorer in Microsoft Visual Studio, here is the blob’s property window:

This is a regular page blob that is functioning as an OS disk over the network. You will observe that the Last Modified date pretty much stays at NOW() most of the time; the reason being that as long as the VM is running, some flushes are happening to the disk regularly. The size of the OS disk is 127 GB; the max allowed OS disk in Azure is 1 TB.

Azure Storage Explorer shows more properties for the same blob than the VS Cloud Explorer.

 

The interesting properties are the Lease properties, which show the blob as leased with an infinite duration. Internally, when a page blob is configured to be an OS/data disk for a VM, we take a lease on that blob before attaching it to the VM. This is so that the blob for a running VM is not deleted out of band. If you see that a disk-backing blob has no lease while it shows as attached to a VM, then that is an inconsistent state and will need to be repaired.

RDPing into the VM itself, we can see two drives mounted, and the OS drive is about the same size as the page blob in Storage. The pagefile is on the D drive so that faulted pages are fetched locally rather than over the network from Blob Storage. The temporary storage can be lost in case of events that cause a VM to be relocated to a different node.

},
"caching": "ReadWrite"
},
"dataDisks": []

there are no data disks yet but we will add some soon

},
"osProfile": {
"computerName": "BlogWindowsVM",

The name we chose for the VM in the Portal is the hostname as well. The VM is DHCP-enabled and gets its internal DIP address through DHCP. The VM is registered in an internal DNS and has a generated FQDN.

C:\Users\yunusm>ipconfig /all

Windows IP Configuration

Host Name . . . . . . . . . . . . : BlogWindowsVM
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : qkqr4ajgme4etgyuajvm1sfy3h.dx.internal.cloudapp.net

Ethernet adapter:

Connection-specific DNS Suffix . : qkqr4ajgme4etgyuajvm1sfy3h.dx.internal.cloudapp.net
Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
Physical Address. . . . . . . . . : 00-0D-3A-33-81-01
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::980c:bf29:b2de:8a05%12(Preferred)
IPv4 Address. . . . . . . . . . . : 10.1.0.4(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Lease Obtained. . . . . . . . . . : Saturday, August 13, 2016 11:14:58 PM
Lease Expires . . . . . . . . . . : Wednesday, September 20, 2152 6:24:34 PM
Default Gateway . . . . . . . . . : 10.1.0.1
DHCP Server . . . . . . . . . . . : 168.63.129.16
DHCPv6 IAID . . . . . . . . . . . : 301993274
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1F-41-C4-70-00-0D-3A-33-81-01

DNS Servers . . . . . . . . . . . : 168.63.129.16
NetBIOS over Tcpip. . . . . . . . : Enabled

"adminUsername": "yunusm",
"windowsConfiguration": {
"provisionVMAgent": true,

This is a hint to install a guest agent that does a bunch of config and runs the extensions. The guest agent binaries are here – C:\WindowsAzure\Packages

"enableAutomaticUpdates": true

Windows VMs by default are set to receive auto updates from the Windows Update service. There is a nuance to grasp here regarding availability and auto updates: if you have an Availability Set with multiple VMs for the purpose of getting a high SLA against unexpected faults, then you do not want correlated actions (like Windows Updates) that can take down VMs across the Availability Set.

 

},

"secrets": []

},

"networkProfile": {

"networkInterfaces": [

{

"id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Network/networkInterfaces/blogwindowsvm91"

The NIC is a standalone resource; we are not discussing networking resources yet.

}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "https://blogrgdiag337.blob.core.windows.net/"
}

Boot diagnostics have been enabled. The Portal has a way of viewing the screenshot, and you can get the URL for the screenshot from the CLI:

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm get-serial-output

info: Executing command vm get-serial-output

Resource group name: blogrg

Virtual machine name: blogwindowsvm

+ Getting instance view of virtual machine "blogwindowsvm"

info: Console Screenshot Blob Uri:

https://blogrgdiag337.blob.core.windows.net/bootdiagnostics-blogwindo-694733ec-46a0-4e0b-a73b-ee0863a0f12c/BlogWindowsVM.694733ec-46a0-4e0b-a73b-ee0863a0f12c.screenshot.bmp

info: vm get-serial-output command OK

The boot screenshot can be viewed in the Portal. However, the URL for the screenshot bmp file does not render in a browser.

What gives? It is due to the authentication on the storage account, which blocks anonymous access. For any blob or container in Azure Storage, it is possible to configure anonymous read access. Please do this with caution and only in cases where secrets will not be exposed. It is a useful capability for sharing non-confidential data without having to generate SAS signatures. Once anonymous access is enabled on the container, the screenshot renders in any browser outside of the Portal.
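With the xplat CLI used throughout this post, enabling anonymous blob-level read on the diagnostics container would look roughly like this (a sketch; verify the flags against your CLI version):

node azure storage container set bootdiagnostics-blogwindo-694733ec-46a0-4e0b-a73b-ee0863a0f12c --permission Blob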

    },
    "provisioningState": "Succeeded"
  },
  "resources": [
    {
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.7",
        "autoUpgradeMinorVersion": true,

It is usually safe for extensions to be auto-updated on the minor version. There have been very few surprises in this regard, though you have the option not to auto-update.

"settings": {
"fileUris": [

"https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS%2BZwp%2B8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw%3D&se=2016-08-15T08%3A41%3A30Z&sp=rw"

As discussed earlier, this is the SAS URI for the PowerShell script. You will see this as a commonly used pattern for sharing files and data: upload to a blob, generate a SAS, and pass it around.

],
"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File simple.ps1 "
},
"provisioningState": "Succeeded"
},
"id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM/extensions/CustomScriptExtension",
"name": "CustomScriptExtension",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "westus"
},
{
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"xmlCfg": <trimmed>,
"StorageAccount": "blogrgdiag337"
},
"provisioningState": "Succeeded"
},
"id": "/subscriptions/f028f547-f912-42b0-8892¬-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM/extensions/Microsoft.Insights.VMDiagnosticsSettings",
"name": "Microsoft.Insights.VMDiagnosticsSettings",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "westus"
}
],
"id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM",
"name": "BlogWindowsVM",
"type": "Microsoft.Compute/virtualMachines",
"location": "westus"
}

To Be Continued

We will carry on with what we can learn from a single VM and then move on to other topics.
Source: Azure