Announcing the Public Preview of the Azure Site Recovery Deployment Planner

With large enterprises deploying Azure Site Recovery (ASR) as their trusted disaster recovery solution to protect hundreds of virtual machines to Microsoft Azure, proper deployment planning before production rollout is critical. Today, we are excited to announce the Public Preview of the Azure Site Recovery Deployment Planner. This tool helps enterprise customers understand their on-premises networking requirements and the Microsoft Azure compute and storage requirements for successful ASR replication and test failover or failover of their applications. In the current public preview, the tool is available only for the VMware to Azure scenario.

The Deployment Planner can be run without having to install any ASR components in your on-premises environment.
The tool does not impact the performance of the production servers, as no direct connection is made to them. All performance data is collected from the VMware vCenter Server/VMware vSphere ESXi Server which hosts the production virtual machines.

What aspects does the ASR Deployment Planner cover?

As you move from a proof of concept to a production rollout of ASR, we strongly recommend running the Deployment Planner. The tool will help you answer the following questions:

Compatibility assessment

Which on-premises servers cannot be protected to Azure with ASR, and why?

Network bandwidth need vs. RPO assessment

How much network bandwidth is required to replicate the servers to meet the desired RPO?
How many virtual machines can be replicated to Azure in parallel to complete initial replication in a given time with available bandwidth?
What is the throughput that ASR will achieve on my provisioned network?
What RPO can be achieved with the available bandwidth?
What is the impact on the desired RPO if lower bandwidth is provisioned?
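
The questions above boil down to a queueing trade-off: whenever write churn exceeds the provisioned bandwidth, a replication backlog builds up, and the time needed to drain that backlog bounds the RPO you can achieve. The minimal Python sketch below illustrates the idea with invented numbers; it is not the Deployment Planner's actual algorithm, which works from profiled churn data.

def achieved_rpo_minutes(churn_mb_per_min, bandwidth_mb_per_min):
    """Worst-case replication lag (a proxy for RPO) for a churn time series,
    modeling replication as a simple queue drained at the provisioned rate."""
    backlog = 0.0
    worst_lag = 0.0
    for churn in churn_mb_per_min:
        backlog = max(0.0, backlog + churn - bandwidth_mb_per_min)
        # Time needed to drain the current backlog at the provisioned rate.
        worst_lag = max(worst_lag, backlog / bandwidth_mb_per_min)
    return worst_lag

# A bursty workload: mostly 20 MB/min, with a 10-minute burst of 300 MB/min.
churn = [20] * 25 + [300] * 10 + [20] * 25
print(achieved_rpo_minutes(churn, bandwidth_mb_per_min=60))  # ~40 minutes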

Microsoft Azure infrastructure requirements

How many storage accounts need to be provisioned in Microsoft Azure?
What type of Azure Storage account (standard or premium) should each protected virtual machine be placed on for the best application performance?
Which virtual machines can be replicated to a single storage account?
How many cores are required to be provisioned in the Microsoft Azure subscription for successful test failover/failover?
What Microsoft Azure virtual machine size should be used for each of the on-premises servers to get optimal application performance during a test failover/failover?

On-premises infrastructure requirements

How many on-premises ASR Configuration Servers and Process Servers are needed?

Factoring future growth

How are all the above factors impacted after considering possible future growth of the on-premises workloads with increased usage?

How does the ASR Deployment Planner work?

The ASR Deployment Planner has three main modes of operation:

Profiling
Report generation
Throughput calculation

Profiling

In this mode, you profile all the on-premises servers that you want to protect over a period of time, e.g., 30 days. The tool stores various performance counters like R/W IOPS, write IOPS, and data churn, along with other virtual machine characteristics like the number of cores, the number and size of disks, the number of NICs, etc., by connecting to the VMware vCenter Server/VMware vSphere ESXi Server where the virtual machines are hosted. Learn more about profiling.

Report Generation

In this mode, the tool uses the profiled data to generate a deployment planning report in Microsoft Excel format. The report has five sheets:

Input
Recommendations
Virtual machine to storage placement
Compatible VMs
Incompatible VMs

By default, the tool takes the 95th percentile of all the profiled performance metrics and applies a growth factor of 30%. Both of these parameters, the percentile calculation and the growth factor, are configurable. Learn more about report generation.
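
For intuition, applying a percentile and a growth factor to a profiled metric looks roughly like the following Python sketch; the profiled values here are invented, and the real tool naturally works from the counters it actually collected:

import numpy as np

# 30 days of hourly profiled churn values (invented for illustration).
profiled_churn_mbps = np.random.lognormal(mean=2.0, sigma=0.5, size=30 * 24)

percentile = 95        # configurable; 95th percentile by default
growth_factor = 0.30   # configurable; 30% by default

sizing_value = np.percentile(profiled_churn_mbps, percentile) * (1 + growth_factor)
print(f"Churn figure used for sizing: {sizing_value:.1f} Mbps")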

Throughput Calculation

In this mode, the tool finds the network throughput that can be achieved from your on-premises environment to Microsoft Azure for ASR replication. This will help you determine what additional bandwidth you need to provision for ASR replication. Learn more about throughput calculation.

With ASR’s promise of full application recovery on Microsoft Azure, thorough deployment planning is critical for both disaster recovery and migration scenarios where ASR is used. With the new ASR Deployment Planner, both brand-new deployments and existing deployments that are expanding to protect or migrate more servers can get the best ASR replication experience and application performance when running on Microsoft Azure.

You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the ASR UserVoice to let us know what features you want us to enable next.

Source: Azure

Azure Stack TP3 Delivers Hybrid Application Innovation and Introduces Pay-as-you-Use Pricing Model

Building innovative applications on cloud technologies is critical for organizations to accelerate growth and create differentiated customer experiences. Applications leveraging cloud technologies with pay-as-you-use pricing are now standard. Our goal is to ensure that organizations choosing hybrid cloud environments have this same flexibility and innovation capability to match their business objectives and application designs. This is why we are extending Azure technologies on-premises with Azure Stack and today, are announcing several updates for Azure Stack:

TP3 available for download: Technical Preview 3 (TP3) is available for download today, with new features that enable more modern application capabilities and running in locations without connections to Azure, along with infrastructure and security enhancements.
Packaging and pricing model: Azure Stack brings the cloud economic model on-premises with pay-as-you-use pricing.
Roadmap Update: Shortly after TP3, Azure Functions will be available to run on TP3, followed by Blockchain, Cloud Foundry, and Mesos templates. Continuous innovation will be delivered to Azure Stack up to general availability and beyond. TP3 is the final planned major Technical Preview before Azure Stack integrated systems will be available for order in mid-CY17.

Extending Azure on-premises

Azure Stack enables three unique hybrid cloud scenarios for organizations looking to build new apps and/or renovate existing apps across cloud and on-premises environments:

Consistent hybrid application development: Organizations investing in people, processes, and applications can do so knowing that those investments are transferable between Azure and Azure Stack. Individuals looking to develop skills can take those skills to any organization using Azure. Consistency between Azure and Azure Stack means organizations can draw from a worldwide pool of talent that can be productive on day one, and individuals with Azure skills can move between projects, teams, DevOps processes, or organizations with ease. The APIs, Portal, PowerShell cmdlets, and Visual Studio experiences are all the same.
Azure services available on-premises: Infrastructure and Platform services fuel the next generation of application innovation. Delivering Azure IaaS and PaaS services on-premises empowers organizations to adopt hybrid based on their business and technical requirements. They have the flexibility to choose the right combination of public, service provider, and on-premises deployment models. If they decide an app should be deployed in another location, they can easily move it without any modifications.
Purpose-built systems for operational excellence: To help organizations focus on work that drives their business, Azure Stack is delivered through integrated systems that are designed to continuously incorporate Azure innovation in a predictable, non-disruptive manner.

Hybrid use cases for Azure and Azure Stack

As we talk to customers about their cloud strategy, they tell us that hybrid will be their steady-state operating model, and they are looking to augment their cloud strategy with Azure Stack in a few key scenarios:

Edge and disconnected solutions: Address latency and connectivity requirements by processing data locally in Azure Stack and then aggregating it in Azure for further analytics, with common application logic across both.
Modern applications across cloud and on-premises: Apply Azure web & mobile services, containers, serverless, and microservice architectures to update and extend legacy applications with Azure Stack, while using a consistent DevOps process across on-premises and cloud.
Cloud applications that meet every regulation: Develop and deploy applications in Azure, with full flexibility to deploy on-premises with Azure Stack to meet your regulatory or policy requirements, with no code changes needed.

Customers who have factory floor automation, remote use needs like cruise ships and mines, or requirements for isolation, like government systems, can all adopt modern designs, developing in the cloud and deploying in their locations. 

What’s new in Azure Stack TP3

With Azure Stack TP3, we’ve worked with customers to improve the product through numerous bug fixes, updates, and deployment reliability and compatibility improvements since TP2. With Azure Stack TP3, customers can:

Deploy with ADFS for disconnected scenarios
Start using Azure Virtual Machine Scale Sets for scale out workloads
Syndicate content from the Azure Marketplace to make it available in Azure Stack
Use Azure D-Series VM sizes
Deploy and create templates with Temp Disks that are consistent with Azure
Take comfort in the enhanced security of an isolated administrator portal
Take advantage of improvements to IaaS and PaaS functionality
Use enhanced infrastructure management functionality, such as improved alerting

Roadmap Update

As part of our continuous innovation model, we will be adding Azure Functions, VM Extension syndication, and multi-tenancy shortly after TP3. This will be followed by new workloads such as Blockchain, Cloud Foundry, and Mesos templates. We will continue to refresh TP3 until we reach GA in mid-CY17.

In mid-CY17, the Proof of Concept (POC) deployment will be renamed to the Microsoft Azure Stack Development Kit. This single server dev/test tool enables customers to prototype and validate hybrid applications. It is a key piece of the continuous innovation model that Azure Stack will use to bring new functionality from Azure quickly to customers. It provides a way for new updates to be distributed early to customers so that they can experiment, learn and provide feedback.  

TP3 is our final planned major Technical Preview before GA. The Azure Stack Development Kit will be released as GA first; at the same time, we will release the software to our hardware partners so that they can finish the last mile of co-engineering work required to deliver multi-server Azure Stack integrated systems in mid-CY17.

After GA, we will continuously deliver additional capabilities through frequent updates. The first round of updates after GA are focused on two areas: 1) enhanced application modernization scenarios and 2) enhanced system management and scale. These updates will continue to expand customer choice of IaaS and PaaS technologies when developing applications, as well as improve manageability and grow the footprint of Azure Stack to accommodate growing portfolios of applications.

Extending cloud economics to on-premises with pay-as-you-use pricing

Azure Stack brings the cloud economic model on-premises, with pay-as-you-use pricing. As with Azure, there are no upfront licensing fees for using Azure services in Azure Stack, and customers only pay when they use the services. Services are transacted in the same way as they are in Azure, with the same invoices and subscriptions. Services will typically be metered on the same units as Azure, but prices will be lower, since customers operate their own hardware and facilities. For scenarios where customers are unable to have their metering information sent to Azure, we will also offer a fixed-price “capacity model” based on the number of cores in the system.

Customers will acquire Azure Stack hardware from our hardware partners, Dell EMC, HPE, Lenovo and (later in the year) Cisco. We are excited to work with our hardware partners to provide a flexible range of buying options, including pay-as-you-go, for the hardware that underpins the integrated systems.

Customers can reach out to their Microsoft and hardware partner account representatives for detailed pricing information.

Final thoughts and next steps

Every company in every industry around the world is transforming from an organization that simply uses digital technology into a digital organization. We are dedicated to helping organizations grow by creating continually evolving products for their customers.

Azure and the Azure Stack integrated systems enable businesses to focus on investing energy and talent on turning their application portfolio into a strategic differentiator for their business. This approach enables customer choice and flexibility of deploying and operating their application where it best meets their business needs. IT can deliver far greater value by empowering development teams with self-service provisioning and cloud services while partnering with them to establish DevOps workflows that meet business policies and requirements.

Learn more about Azure Stack and download Azure Stack TP3.

Jeffrey Snover
Azure Infrastructure and Management Technical Fellow
Follow me on Twitter at @jsnover
Source: Azure

Azure Data Sync update

With Azure SQL Data Sync, users can easily synchronize data bi-directionally between multiple Azure SQL Databases and/or on-premises SQL databases. This service is currently in public preview and available only in the old Azure portal. We are currently working to improve the service and bring it to General Availability (GA). In this blog, we share the current roadmap of Azure Data Sync.

Azure Data Sync will be available in the new Azure portal within the next few months. This will come with several improvements to the service, including PowerShell and REST API support, improvements to security and privacy, and enhanced monitoring and troubleshooting.

PowerShell programmability and REST APIs

Previously in Data Sync, creating sync groups and making changes had to be done manually through the UI. This could be a tedious, time-consuming process, especially in complex sync topologies with many member databases or sync groups. We now have support for PowerShell and REST APIs, which developers can leverage to make these tasks faster and easier.
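
As a rough illustration of what scripted management makes possible, the Python sketch below creates a sync group with a single REST call. The resource path, api-version, and property names here are assumptions made for illustration only; consult the published Data Sync reference for the actual contract.

import requests

# Hypothetical illustration: the URL, api-version, and body schema below are
# assumptions, not the documented contract.
token = "<azure-ad-bearer-token>"
subscription = "<subscription-id>"
url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    "/resourceGroups/MyResourceGroup/providers/Microsoft.Sql"
    "/servers/MyServer/databases/MyHubDb/syncGroups/MySyncGroup"
    "?api-version=2015-05-01-preview"  # assumed version string
)
body = {
    "properties": {
        "interval": 300,                       # sync every 5 minutes
        "conflictResolutionPolicy": "HubWin",  # hub database wins conflicts
        "syncDatabaseId": "<resource-id-of-the-sync-database>",
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()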

Better security, better privacy, better resilience

Previously, Data Sync used a shared database to manage the sync metadata and operations for all users. Now each user will have dedicated Sync Databases. A Sync Database is a customer owned Azure SQL Database located in the same region as the Sync Group. One Sync Database can be used for many sync groups in the same region. By replacing the shared sync databases with customer owned sync databases, we provide better privacy and security. In addition, this provides the user flexibility to increase or decrease the performance tier of the Sync Database based on their needs.

Enhanced monitoring and troubleshooting

We have made a few key improvements to monitoring and troubleshooting. Users can now monitor the sync status programmatically using PowerShell and REST APIs. In addition, we’ve improved several error messages, making them clearer and more actionable.

Availability in more regions

Previously, Data Sync was only available in a limited set of Azure regions. The service will now be available in most regions. Support for Azure China, Germany, and Government regions will also come soon.

We will migrate existing Data Sync customers to the new Azure portal once it is available. If you are using Data Sync with any on-premises databases, you will need to download and configure the new Sync Agent to complete the migration. Detailed migration instructions will be provided closer to the time of migration.

The Data Sync service will remain free until GA. The only new cost is for the Sync Database which can be in any service tier. If you use Data Sync in multiple regions, you will need one Sync Database for each region.

If you have any feedback on Azure Data Sync service, we’d love to hear from you! To get the latest update of Azure Data Sync, please join the SQL Advisor Yammer Group or follow us @AzureSQLDB on Twitter.
Source: Azure

Announcing preview of Azure HDInsight 3.6 with Apache Spark 2.1

Today, we are pleased to announce a preview of Azure HDInsight 3.6. We are enabling this preview to get feedback on Apache Spark 2.1. You can try out all the features available in the open source release of Apache Spark 2.1, along with the rich experience of using notebooks on Azure HDInsight. This post is a short summary on how to get started with this preview.

What’s new in Spark 2.1

The open source Apache Spark 2.1 release brings in a ton of improvements for developers. These improvements range from Structured Streaming to allowing developers to use Apache Kafka (version 0.10) with Spark Streaming.
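
For example, the new Kafka source for Structured Streaming can be consumed in just a few lines of PySpark. This is a minimal sketch: the broker address and topic are placeholders, and the spark-sql-kafka-0-10 package must be available on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("KafkaStructuredStreaming").getOrCreate()

# Read a Kafka 0.10 topic as an unbounded DataFrame via the Structured
# Streaming Kafka source. Broker and topic names are placeholders.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "sensor-events")
          .load())

# Kafka delivers key/value as binary, so cast the payload to a string
# before writing the running results to the console sink.
query = (events.select(col("value").cast("string").alias("payload"))
         .writeStream
         .outputMode("append")
         .format("console")
         .start())

query.awaitTermination()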

To learn more about all of the improvements in Apache Spark 2.1, please read the release notes on the Apache Spark project.

Get started with Apache Spark 2.1 on HDInsight

It is very simple to get started with the Apache Spark 2.1 preview. You can go to the Microsoft Azure portal and create an Azure HDInsight service.

Once you select HDInsight, you can pick the Spark cluster type with version Spark 2.1 (HDI 3.6 Preview).


After creating the cluster, you will have access to all the tools, services, and notebooks, including Jupyter. You can access the Jupyter notebook by clicking “Cluster dashboard”.
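
Once a notebook is open, a quick sanity check confirms that it is bound to a Spark 2.1 session. In HDInsight Spark notebooks the spark session object is pre-created by the kernel, so a cell like this small sketch is all you need:

# Run in a PySpark notebook cell; `spark` is provided by the notebook kernel.
print(spark.version)  # expect a 2.1.x version string

# A tiny DataFrame round trip to confirm the session is healthy.
df = spark.createDataFrame([(1, "hello"), (2, "world")], ["id", "word"])
df.show()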

We hope that you like this preview. Following are some resources to learn more about using Spark on HDInsight.

Learn more and get help

Apache Spark 2.1 release notes
Apache Spark on HDInsight
Getting started with Spark on HDInsight
Get help on Spark questions
Ask HDInsight questions on stackoverflow 

Frequently Asked Questions (FAQs)

Following is a set of commonly asked questions and known issues in this preview.

Can I use any other cluster besides Spark in HDInsight 3.6?

For this preview release, we are only enabling the Spark cluster type, with version 2.1.

I cannot connect to BI tools with Spark 2.1.

You cannot connect BI tools to Spark 2.1 using the ODBC driver in this preview.

I cannot use Azure Data Lake Store with Spark 2.1.

In this preview, you can only store data in Azure Blob Storage and use it from your Spark 2.1 cluster. Azure Data Lake Store is not yet supported.
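
Reading that Blob storage from Spark uses the wasb:// scheme, as in this small sketch (the path is a placeholder for data in the cluster's default container):

# Read a text file from the cluster's default Azure Blob Storage container.
# The wasb:// scheme addresses Blob storage from HDInsight; the path below
# is a placeholder.
lines = spark.read.text("wasb:///example/data/sample.log")
print(lines.count())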

Why is Spark 2.1/HDInsight 3.6 in preview?

We are releasing HDInsight 3.6 as preview so that we can enable users to try the improvements in Spark 2.1 and give us feedback. We are working on improving the experience of Spark 2.1 in HDInsight and once ready, we will make it generally available.

What is the Support & SLA provided for this preview?

Since this is a preview release, there is no support or SLA for this preview. Typically, HDInsight has an SLA of 99.9%; during this preview, however, users are subject to the applicable supplemental terms of use.

Summary

We are pleased to announce a preview of Microsoft Azure HDInsight 3.6 along with Apache Spark 2.1. We are inviting you to try this preview and give us feedback so we can improve the experience.

Source: Azure

Azure IoT Suite adds device management capability updates

While the most successful enterprise IoT solutions include a strategy for operators to handle ongoing management of device collections in a simple and reliable manner, this can be a hurdle for companies getting started with IoT. To help with that challenge, we recently introduced device management capabilities in Azure IoT Hub.

Today, we’ve added these device management features to the Azure IoT Suite remote monitoring preconfigured solution. The Azure IoT Suite simplifies deploying and orchestrating advanced services to give businesses a complete IoT solution from proof of concept to broader deployment.

With new device management functionality in Azure IoT Suite, developers will be able to quickly move beyond telemetry processing, rule management, and visualization to customize their device overview, queries and device lists. These enhancements include:

Synchronizing settings and metadata between the cloud and devices using device twins.
Performing an action on a connected device through the cloud using direct methods.
Broadcasting and orchestrating operations on multiple devices at a planned time through jobs.
Attesting the status and health of online or offline device collections using real-time, dynamic queries across device twins and jobs.
Customizing the device information overview by using the Column Editor to provide a dynamic report for the devices you want to monitor right now (the sketch below shows the shape of a device twin and a sample query).
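
For reference, a device twin is a JSON document with tags plus desired and reported properties, and the dynamic queries mentioned above use IoT Hub's SQL-like query language over those documents. The sketch below shows the general shape; the device ID, tag, and property names are invented for illustration.

# Shape of a device twin, shown here as a Python dict; all field values
# below are invented for illustration.
twin = {
    "deviceId": "cooler-42",
    "tags": {"building": "43", "floor": "1"},         # set by the back end
    "properties": {
        "desired": {"telemetryIntervalSec": 30},      # cloud -> device
        "reported": {"telemetryIntervalSec": 30,      # device -> cloud
                     "firmwareVersion": "1.2.0"},
    },
}

# An IoT Hub twin query of the kind that powers the dashboard's device lists:
query = ("SELECT * FROM devices "
         "WHERE tags.building = '43' "
         "AND properties.reported.firmwareVersion = '1.2.0'")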

The Azure IoT Suite remote monitoring preconfigured solution is also open source, which gives developers the flexibility to customize it to their needs as the business evolves. We are excited to see developers achieve even more through the new device management features.

Learn more about today’s enhancements by reviewing our two step-by-step guides: Get started with the preconfigured solutions and Remote monitoring preconfigured solution walkthrough. You can also provision your IoT solution with your Azure subscription today by visiting www.azureiotsuite.com.
Source: Azure

Optimizing rolling feature engineering for time series data

In this blog post, I want to talk about how data scientists can efficiently perform certain types of feature engineering at scale. Before we dive into sample code, I will briefly set the context of how telemetry data gets generated and why businesses are interested in using such data.

To get started, we know that machines these days are instrumented with multiple built-in sensors that record various measurements while they are in operation. These machines thus end up generating a lot of telemetry data, which can be used once it is transferred off the machines and stored in a centralized repository. Businesses hope to use this amassed data to help answer questions like, “When is a machine likely to fail?” or, “When does a spare part for a machine need to be re-ordered?” Eventually, this could help them reduce the time and costs incurred in ad hoc maintenance activities.

After having built many models, I have noticed that typical telemetry data generated by the various sensors adds very little value in its raw format. Sensors by design generate data at a regular time interval, so the data consists of multiple time series that can be sorted by time for each machine to build meaningful additional features. So, data scientists like me end up enhancing the dataset by performing additional feature engineering on this raw sensor data.

The most common features I begin with are rolling aggregates, built using my preferred statistical programming language on a sample dataset. Here are some code snippets showing how I would generate rolling aggregates for a specific window size using R or Python, for machines that record voltage, rotation, pressure, and vibration measurements by date. These code snippets can be run in any local R/Python IDE, within a Jupyter notebook, or within an Azure ML Studio environment.

R

library(dplyr)
library(zoo)  # provides rollapply

telemetrymean <- telemetry %>%
    arrange(machineID, datetime) %>%
    group_by(machineID) %>%
    mutate(voltmean = rollapply(volt, width = 3, FUN = mean, align = "right", fill = NA, by = 3),
           rotatemean = rollapply(rotate, width = 3, FUN = mean, align = "right", fill = NA, by = 3),
           pressuremean = rollapply(pressure, width = 3, FUN = mean, align = "right", fill = NA, by = 3),
           vibrationmean = rollapply(vibration, width = 3, FUN = mean, align = "right", fill = NA, by = 3)) %>%
    select(datetime, machineID, voltmean, rotatemean, pressuremean, vibrationmean) %>%
    filter(!is.na(voltmean)) %>%
    ungroup()

Python

import pandas as pd

# Compute 3-hour means per machine by pivoting each metric to one column
# per machine, resampling, and stacking the result back into long form.
temp = []
fields = ['volt', 'rotate', 'pressure', 'vibration']
for col in fields:
    temp.append(pd.pivot_table(telemetry,
                               index='datetime',
                               columns='machineID',
                               values=col).resample('3H', closed='left', label='right').mean().unstack())
telemetry_mean_3h = pd.concat(temp, axis=1)
telemetry_mean_3h.columns = [i + 'mean_3h' for i in fields]
telemetry_mean_3h.reset_index(inplace=True)

For a full description of the end-to-end use case, please review the R code and Python code.

Once my R/Python code is tested in the local environment with a small dataset and deemed fit, I then need to move it into a production environment, which means considering the various options for scaling the same computation to a much larger dataset while ensuring efficiency. I have noticed that it is often more efficient to work with indexed data for such large-scale computations using some form of SQL query. Here is how I translated the code originally written in R/Python into a SQL query.

Sample SQL code

select rt.datetime, rt.machineID, rt.voltmean, rt.rotatemean, rt.pressuremean, rt.vibrationmean
from
(select avg(volt) over(partition by machineID order by machineID, datetime rows 2 preceding) as voltmean,
        avg(rotate) over(partition by machineID order by machineID, datetime rows 2 preceding) as rotatemean,
        avg(pressure) over(partition by machineID order by machineID, datetime rows 2 preceding) as pressuremean,
        avg(vibration) over(partition by machineID order by machineID, datetime rows 2 preceding) as vibrationmean,
        row_number() over (partition by machineID order by machineID, datetime) as rn,
        machineID, datetime
from telemetry) rt
where rt.rn % 3 = 0 and rt.voltmean is not null
order by rt.machineID, rt.datetime

For more details please review the SQL code.

Based on my experience with predictive maintenance use cases, I have noticed that SQL rolling feature engineering is best suited for time series data ordered by time and split by machine. For on-premises scenarios, SQL Server R Services now also enables R enthusiasts to run their other data wrangling, model building, and even scoring code from right within SQL Server. Overall, this ends up being more efficient, as there is no data movement and the computation is scalable.

However, there are many other ways of operationalizing this type of feature engineering at scale. For example, R Server on HDInsight combines the functionality of R with the power of Hadoop and Spark, and Azure Data Lake Analytics now supports running R on petabytes of data. The power of R can be put towards transforming raw sensor data into meaningful data that can be leveraged by machine learning applications to provide value back to the business.
Source: Azure

Connect Tableau to an Azure Analysis Services server

With Azure Analysis Services, you can connect to your servers by using Power BI, Excel, and many third-party client tools. In this post, we’ll focus on how to connect to your server from Tableau Desktop.

Before getting started, you’ll need:

A data model deployed at an Azure Analysis Services server – Creating your first data model in Azure Analysis Services.
Tableau Desktop
The latest MSOLAP.7 provider

In Tableau Desktop 10.1, under Connect, click To a Server > Microsoft Analysis Services.

In the connection dialog, in Server, enter the name of your Azure Analysis Services server. Then select Use a specific username and password, and then type the organizational user name, for example nancy@adventureworks.com, and password.

In the Data Source tab, select the database and cube/model or perspective, and then click on Sheet 1.

The Tableau workbook is now connected to your Azure Analysis Services server. You will see the fields from your model listed under dimensions and measures on the side. You can drag and drop those fields to the sheet to start building out your visuals.

Learn more about Azure Analysis Services.
Source: Azure

Enterprise Ethereum Alliance


We are proud to announce our participation as a launch partner with the Enterprise Ethereum Alliance, in addition to making the first reference implementations of Ethereum available in a public cloud. Ethereum was the first blockchain supported in Azure, and it is evolving to address the needs of enterprises globally. Focusing on requirements like privacy, permissions, and a pluggable architecture while retaining its public roots, Ethereum continues to widen the scope of what developers, businesses, and consortiums can achieve.

While Azure and Project Bletchley are independent of any particular blockchain system, Ethereum and Enterprise Ethereum are supported by Azure middleware services like Cryptlets, Azure Active Directory for identity, data services via the Cortana Analytics Suite, and Key Vault for key management, along with operations, deployment, and rich tooling. A large partner community offering industry solutions based on Smart Contracts, Cryptlets, and SaaS offerings provides a valuable consortium data tier, outlined in my previous blog post about Smart Contract architecture and Cryptlets.

You can now deploy your own implementation of this platform on Azure: Quorum: Enterprise Ethereum Alliance Reference Implementations
Source: Azure

Azure Command Line 2.0 now generally available

Back in September, we announced the Azure CLI 2.0 Preview. Today, we’re announcing the general availability of the vm, acs, storage, and network commands in Azure CLI 2.0. These commands provide a rich interface for a large array of use cases, from disk and extension management to container cluster creation.

Today’s announcement means that customers can now use these commands in production, with full support by Microsoft both through our Azure support channels or GitHub. We don’t expect breaking changes for these commands in new releases of Azure CLI 2.0.

This new version of the Azure CLI should feel much more native to developers who are familiar with command-line experiences in the bash environment on Linux and macOS, with simple commands that have smart defaults for the most common operations, support tab completion, and produce pipe-able output for interacting with other text-parsing tools like grep, cut, jq, and the popular JMESPath query syntax. It’s easy to install on the platform of your choice and easy to learn.

During the preview period, we’ve received valuable feedback from early adopters and have added new features based on that input. The number of Azure services supported in Azure CLI 2.0 has grown, and we now have command modules for sql, documentdb, redis, and many other services on Azure. We also have new features to make working with Azure CLI 2.0 more productive. For example, we’ve added the "--wait" and "--no-wait" capabilities that enable users to respond to external conditions or continue a script without waiting for a response.

We’re also very excited about some new features in Azure CLI 2.0, particularly the combination of Bash and CLI commands, and support for new platform features like Azure Managed Disks.

Here’s how to get started using Azure CLI 2.0.

Installing the Azure CLI

The CLI runs on Mac, Linux, and of course, Windows. Get started now by installing the CLI on whatever platform you use. Also, review our documentation and samples for full details on getting started with the CLI, and on how to access services provided via Azure using the CLI in scripts.

Here’s an example of the features included with the "vm" command:

Working with the Azure CLI

Accessing Azure and starting one or more VMs is easy. Here are two lines of code that will create a resource group (a way to group and manage Azure resources) and a Linux VM, using Azure’s latest Ubuntu VM image, in the westus2 region of Azure.

az group create -n MyResourceGroup -l westus2
az vm create -g MyResourceGroup -n MyLinuxVM --image UbuntuLTS

Using the public IP address for the VM (which you get in the output of the vm create command or can look up separately using "az vm list-ip-addresses" command), connect directly to your VM from the command line:

ssh <public ip address>

For Windows VMs on Azure, you can connect using remote desktop ("mstsc <public ip address>" from Windows desktops).

The "create vm" command is a long running operation, and it may take some time for the VM to be created, deployed, and be available for use on Azure. In most automation scripting cases, waiting for this command to complete before running the next command may be fine, as the result of this command may be used in next command. However, in other cases, you may want to continue using other commands while a previous one is still running and waiting for the results from the server. Azure CLI 2.0 now supports a new "–no-wait" option for such scenarios.

az vm create -n MyLinuxVM2 -g MyResourceGroup --image UbuntuLTS --no-wait

As with resource groups and virtual machines, you can use Azure CLI 2.0 to create other resource types in Azure using the "az <resource type name> create" naming pattern.

For example, you can create managed resources on Azure like WebApps within Azure AppServices:

# Create an Azure AppService that we can use to host multiple web apps
az appservice plan create -n MyAppServicePlan -g MyResourceGroup

# Create two web apps within the appservice (note: name param must be a unique DNS entry)
az appservice web create -n MyWebApp43432 -g MyResourceGroup --plan MyAppServicePlan
az appservice web create -n MyWebApp43433 -g MyResourceGroup --plan MyAppServicePlan

Read the CLI 2.0 reference docs to learn more about the create command options for various Azure resource types. The Azure CLI 2.0 lets you list your Azure resources and provides different output formats.

--output   Description
json       JSON string. json is the default. Best for integrating with query tools, etc.
jsonc      Colorized JSON string.
table      Table with column headings. Shows only a curated list of common properties for the selected resource type, in human-readable form.
tsv        Tab-separated values with no headers. Optimized for piping to other text-processing commands and tools like grep, awk, etc.

You can use the "--query" option with the list command to find specific resources and to customize the properties that you want to see in the output. Here are a few examples:

# list all VMs in a given Resource Group
az vm list -g MyResourceGroup --output table

# list all VMs in a Resource Group whose name contains the string 'My'
az vm list --query "[?contains(resourceGroup, 'My')]" --output tsv

# same as above, but only show the 'VM name' and 'osType' properties, instead of all default properties for the selected VMs
az vm list --query "[?contains(resourceGroup, 'My')].{name:name, osType:storageProfile.osDisk.osType}" --output table

Azure CLI 2.0 supports management operations against SQL Server on Azure. You can use it to create servers, databases, data warehouses, and other data sources; and to show usage, manage administrative logins, and run other management operations.

# Create a new SQL Server on Azure
az sql server create -n MySqlServer -g MyResourceGroup --administrator-login <admin login> --administrator-login-password <admin password> -l westus2

# Create a new SQL Server database
az sql db create -n MySqlDB -g MyResourceGroup --server-name MySqlServer -l westus2

# list available SQL databases on the server within a Resource Group
az sql db list -g MyResourceGroup --server-name MySqlServer

Scripting with the new Azure CLI 2.0 features

The new ability to combine Bash and Azure CLI 2.0 commands in the same script can be a big time-saver, especially if you’re already familiar with Linux command-line tools like grep, cut, jq, and JMESPath queries.

Let’s start with a simple example that stops a VM in a resource group using the VM’s resource ID (or multiple IDs separated by spaces):

az vm stop --ids '<one or more ids>'

You can also stop a VM in a resource group using the VM’s name. Here’s how to stop the VM we created above:

az vm stop -g MyResourceGroup -n MyLinuxVM

For a more complicated use case, let’s imagine we have a large number of VMs in a resource group, running Windows and Linux. To stop all running Linux VMs in that resource group, we can use a JMESPath query, like this:

os="Linux"
rg="resourceGroup"
ps="VM running"
rvq="[].{resourceGroup: resourceGroup, osType: storageProfile.osDisk.osType, powerState: powerState, id:id}| [?osType==&039;$os&039;]|[?resourceGroup==&039;$rg&039;]| [?powerState==&039;$ps&039;]|[].id"
az vm stop –ids $(az vm list –show-details –query "$rvq" –output tsv)

This script issues an "az vm stop" command, but only for the VMs returned by the JMESPath query (as defined in the rvq variable). The os, rg, and ps variables supply the filter values: rg is compared to each VM’s resourceGroup property, os to the VM’s storageProfile.osDisk.osType property, and ps to its powerState, and the IDs of all matching VMs are returned (in tsv format) for use by the "az vm stop" command.
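
The same JMESPath expressions work outside the CLI, too. As a small sketch, Python's jmespath package (the same query engine the CLI uses) can evaluate an equivalent filter over the JSON that "az vm list" emits:

import json
import subprocess

import jmespath  # pip install jmespath

# Capture `az vm list --show-details` output as JSON, then filter in Python.
raw = subprocess.check_output(["az", "vm", "list", "--show-details"])
vms = json.loads(raw)

linux_running_ids = jmespath.search(
    "[?storageProfile.osDisk.osType=='Linux' && powerState=='VM running'].id",
    vms,
)
print(linux_running_ids)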

Azure Container Services in the CLI

Azure Container Service (ACS) simplifies the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run container applications. You can use Docker images with DC/OS (powered by Apache Mesos), Docker Swarm or Kubernetes for orchestration.

The Azure CLI supports the creation and scaling of ACS clusters via the "az acs" command. You can find full documentation for Azure Container Services, as well as a tutorial for deploying an ACS DC/OS cluster with Azure CLI commands.

Scale with Azure Managed Disks using the CLI

Microsoft recently announced the general availability of Azure Managed Disks to simplify the management and scaling of virtual machines. You can create a virtual machine with an implicit managed disk for a specific disk image, and also create managed disks from blob storage or standalone with the "az vm disk" command. Updates and snapshots are easy as well; check out what you can do with Managed Disks from the CLI.

Start using Azure CLI 2.0 today!

Whether you are an existing CLI user or starting a new Azure project, it’s easy to get started with the CLI at http://aka.ms/CLI and master the command line with our updated docs and samples. Check out topics like installing and updating the CLI, working with virtual machines, creating a complete Linux environment including VMs, Scale Sets, Storage, and network, and deploying Azure Web Apps, and let us know what you think!

Azure CLI 2.0 is open source and on GitHub.

In the next few months, we’ll provide more updates. As ever, we want your ongoing feedback! Customers using the vm, storage and network commands in production can contact Azure Support for any issues, reach out via StackOverflow using the azure-cli tag, or email us directly at azfeedback@microsoft.com.
Source: Azure

Azure brings 5 new services to Canada

Since the beginning of the year, we’ve deployed multiple new services in Canada. Please find below a brief summary of recently deployed services.

Available now

HDInsight is the only fully-managed cloud Hadoop offering that provides optimized open source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server backed by a 99.9% SLA. Each of these big data technologies and ISV applications are easily deployable as managed clusters with enterprise-level security and monitoring.

Learn more about HDInsight.

Azure Functions is an event-based serverless compute experience that accelerates your development. It can scale based on demand, and you pay only for the resources you consume. Azure Functions’ numerous triggers and bindings, such as HTTP, storage, queues, and event streams, allow you to quickly build solutions with less code.

Learn more about Azure Functions.

Managed Disks makes managing your VM disks much simpler. With Managed Disks, customers only need to specify the desired disk type (Standard or Premium) and the disk size, and Azure will create and manage the disk for them. In addition, Managed Disks comes with enhanced virtual machine scale set (VMSS) capabilities, such as being able to define scale sets with attached data drives and to create a scale set with up to 1,000 VMs from Azure platform/marketplace images.

Learn more about Managed Disks in our General Availability Announcement.

Azure Site Recovery contributes to your BCDR strategy by orchestrating replication of on-premises virtual machines and physical servers. You replicate servers and VMs from your primary on-premises datacenter to the cloud, Azure, or to a secondary datacenter.

Learn more about Azure Site Recovery.

For the past few months, Azure Backup required service registration through PowerShell. This is no longer required, and you can use Backup directly in the Azure Portal. All subscriptions that were registered previously will continue to work without any intervention. In addition, Hybrid Backup (on-premises to Azure backup) is now deployed and is also available in the Azure Portal. Azure Backup can be used to back up, protect, and restore your data in the Microsoft cloud, replacing your existing on-premises or off-site backup solution with a cloud-based solution that is reliable, secure, and cost-competitive.

Learn more about Azure Backup.
Source: Azure