Data-driven troubleshooting of Azure Stream Analytics jobs

Today, we are announcing the public preview of metrics in the Job Diagram for data-driven monitoring and troubleshooting of Azure Stream Analytics jobs. This new functionality enables quick and easy isolation of issues to specific components and query steps, using metrics on the state of the inputs, outputs, and each step of the processing logic.

For example, if an Azure Stream Analytics job is not producing the expected output, metrics in the Job Diagram can be used to isolate the issue to the query steps that are receiving inputs but not producing any output. Additionally, when one or more inputs to the Stream Analytics job stop producing events, the new capabilities can help identify the inputs and outputs that have pending diagnosis messages associated with them.

To access this capability, click the “Job diagram” button in the “Settings” blade of the Stream Analytics job. For existing jobs, it is necessary to restart the job first.

Every input and output is color coded to indicate the current state of that component.

When you want to look at intermediate query steps to understand the data flow patterns inside Stream Analytics, the visualization tool provides a view of the breakdown of the query into its component steps and the flow sequence. Each logical node shows the number of partitions it has.

Clicking on each query step will show the corresponding section in a query editing pane as illustrated. A metrics chart for the step is also displayed in a lower pane.

Clicking the … button pops up a context menu that lets you expand partitions, showing the partitions of the Event Hub input in addition to the input merger.


Clicking a single partition node will show the metrics chart only for that partition on the bottom.


Selecting the merger node will show the metrics chart for the merger. The chart below shows that no events got dropped or adjusted.


Hovering the mouse pointer on the chart will show details of the metric value and time.


We are very excited to hear your feedback. Please give it a try and let us know what you think.
Source: Azure

HDInsight tools for IntelliJ & Eclipse April updates

We are pleased to announce the April updates of HDInsight Tools for IntelliJ & Eclipse. This is a quality milestone in which we focused primarily on refactoring components and fixing bugs. We also added Azure Data Lake Store support and Eclipse local emulator support in this release. The HDInsight Tools for IntelliJ & Eclipse serve the open source community and are of interest to HDInsight Spark developers. The tools run smoothly on Linux, Mac, and Windows.

Summary of key updates

Azure Data Lake Store support

The HDInsight Visual Studio, Eclipse, and IntelliJ plugins now support Azure Data Lake Store (ADLS). Users can now view ADLS entities in the service explorer, add an ADLS namespace/path when authoring, and submit Hive/Spark jobs that read from and write to ADLS in an HDInsight cluster.

To use Azure Data Lake Store, users first need to create an Azure HDInsight cluster with Data Lake Store as its storage. Follow the instructions to Create an HDInsight cluster with Data Lake Store using Azure Portal.

As shown below, ADLS entities can be viewed in the service explorer.

By clicking “Explorer” above, users can explore data stored in ADLS, as shown below:

Users can read/write ADLS data in their Hive/Spark jobs, as shown below.

If Data Lake Store is the primary storage for the cluster, use adl:///. This is the root of the cluster storage in Azure Data Lake, which may translate to the path /clusters/CLUSTERNAME in the Data Lake Store account.
If Data Lake Store is additional storage for the cluster, use adl://DATALAKEACCOUNT.azuredatalakestore.net/. The URI specifies the Data Lake Store account the data is written to, and data is written starting at the root of that account.
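The two URI rules above can be captured in a few lines. The following is a hedged sketch: a small helper that assembles the ADLS URI for a path depending on whether Data Lake Store is the cluster's primary or an additional store. The helper class and method names are illustrative only and are not part of any HDInsight or ADLS SDK.

```java
// Illustrative helper (not an SDK API) that builds ADLS URIs per the
// primary-vs-additional storage rules described above.
public class AdlsUriBuilder {
    // primaryStorage = true  -> adl:///<path>            (cluster's default store)
    // primaryStorage = false -> adl://ACCOUNT.azuredatalakestore.net/<path>
    static String adlsUri(String account, String path, boolean primaryStorage) {
        String p = path.startsWith("/") ? path : "/" + path;
        return primaryStorage
                ? "adl://" + p
                : "adl://" + account + ".azuredatalakestore.net" + p;
    }

    public static void main(String[] args) {
        System.out.println(adlsUri("mydatalake", "/data/input.csv", true));
        System.out.println(adlsUri("mydatalake", "data/input.csv", false));
    }
}
```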


Learn how to Use HDInsight Spark cluster to analyze data in Data Lake Store.

Learn how to Use Azure Data Lake Store with Apache Storm with HDInsight.

Local emulator for Eclipse plugin

The local emulator was previously supported only in the IntelliJ plugin. It is now also supported in the Eclipse plugin, with the same functionality and user experience as in IntelliJ.

Get more details about local emulator support.

Quality improvement

The major improvements are code refactoring and telemetry enhancements. More than forty bugs around job authoring, submission, and job view were fixed to improve the quality of the tools in this release.

Installation

If you already have HDInsight Tools for Visual Studio/Eclipse/IntelliJ installed, the new bits can be updated directly in the IDE. Otherwise, please refer to the pages below to download the latest bits or share the information with your customers:

HDInsight Visual Studio plugin
HDInsight Eclipse plugin
HDInsight IntelliJ plugin

Upcoming releases

The following features are planned for upcoming releases:

Debuggability: Remote debugging support for Spark application
Monitoring: Improve Spark application view, job view and job graph
Usability: Improve installation experience; Integrate into IntelliJ run menu
Enable Mooncake support

Feedback

We look forward to your comments and feedback. If there is any feature request, customer ask, or suggestion, please do email us at hdivstool@microsoft.com. For bug submission, please submit using the template provided.
Source: Azure

Enabling Azure CDN from Azure web app and storage account portal extension

Enabling CDN for your Azure workflow becomes easier than ever with this new integration. You can now enable and manage CDN for your Azure web app service or Azure storage account without leaving the portal experience.

When you have a website, a storage account serving downloads, or a streaming endpoint for your media event, you may want to add a CDN to your solution for scalability and better performance. To make CDN enablement easy for these Azure workflows, the Azure portal CDN extension lets you choose an "origin type" when you create a CDN endpoint, listing all the available Azure web apps, storage accounts, and cloud services within your subscription. To deepen the integration, we started with Azure Media Services: from the Azure Media Services portal extension, you can enable CDN for your streaming endpoint with one click. We have now extended this integration to Web App and storage accounts.

Go to the Azure portal web app service or storage account extension, select your resource, then search for "CDN" from the menu and enable CDN! Very little information is required for CDN enablement. After enabling CDN, click the endpoint to manage configuration directly from this extension.

From Azure storage account portal extension:

From Azure web app service portal extension:

Directly manage CDN from Azure web app or storage portal extension:

More information

Enable Azure CDN
Integrate an Azure storage account with Azure CDN
Use Azure CDN with Web app

Is there a feature you'd like to see in Azure CDN? Give us feedback!
Source: Azure

Optimization tips and tricks on Azure SQL Server for Machine Learning Services

Summary

SQL Server 2016 introduced a new feature called R Services. Microsoft recently announced a preview of the next version of SQL Server, which extends this advanced analytical ability to Python. This new capability of running R or Python in-database at scale lets us keep the analytics services close to the data, eliminates the burden of data movement, and simplifies the development and deployment of intelligent applications. To get the most out of SQL Server, knowing how to fine-tune the model itself is far from sufficient and may still fail to meet performance requirements. Quite a few optimization tips and tricks can help boost performance significantly. In this post, we apply a few optimization techniques to a resume-matching scenario, which mimics the workflow of large-volume prediction, to showcase how those techniques can make data analytics more efficient and powerful. The three main optimization techniques introduced in this blog are as follows:

Full durable memory-optimized tables
CPU affinity and memory allocation
Resource governance and concurrent execution

This blog post is a short summary of how the above optimization tips and tricks work with R Services on Azure SQL Server. These optimization techniques work not only for R Services, but for any Machine Learning Services integrated with SQL Server. Please refer to the full tutorial for sample code and step-by-step walkthroughs.

Description of the Sample Use Case

The sample use case for both this blog and its associated tutorial is resume matching. Finding the best candidate for a job position has long been an art that is labor intensive and requires a lot of manual effort from search agents. Finding candidates with certain technical or specialized qualities among massive amounts of information collected from diverse sources has become a new big challenge. We developed a model to search for good matches among millions of resumes for a given position. Formulated as a binary classification problem, the machine learning model takes both the resume and the job description as inputs and produces the probability of being a good match for each resume-job pair. A user-defined probability threshold is then used to keep only the good matches.

A key challenge in this use case is that for each new job, we will need to match it with millions of resumes within a reasonable time frame. The feature engineering step, which produces thousands of features (2600 in this case), is a significant performance bottleneck during scoring. Hence, achieving a low matching (scoring) latency is the main objective in this use case.
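The final filtering step described above is simple: keep every resume-job pair whose predicted probability clears the user-defined threshold. A minimal sketch in Java, where the probabilities and names are made-up illustration inputs, not model output:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the threshold filter: given match probabilities per resume,
// return only the resumes at or above the user-defined threshold.
public class MatchFilter {
    static List<String> goodMatches(Map<String, Double> probabilities, double threshold) {
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, Double> e : probabilities.entrySet()) {
            if (e.getValue() >= threshold) matches.add(e.getKey());
        }
        return matches;
    }
}
```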

Optimizations

There are many different types of optimization techniques, and we are going to discuss a few of them using the resume-matching scenario. In this blog, we will explain why and how those optimization techniques work at a high level. For more detailed explanations and background knowledge, please refer to the included reference links. In the tutorial, the results are expected to be reproducible using a similar hardware configuration and the SQL scripts.

Memory-optimized table

Nowadays, memory is no longer a bottleneck for a modern machine in terms of size and speed, and hardware advances have made large amounts of fast RAM affordable. In the meantime, data is being produced far more quickly than ever before, and some tasks need to process that data with low latency. Memory-optimized tables leverage these hardware advances to tackle this problem. Memory-optimized tables mainly reside in memory, so that data is read from and written to memory [1]. However, for durability purposes a second copy of the table is maintained on disk, and data is only read from disk during database recovery. Performance can be optimized with high scalability and low latency, especially when we need to read from and write to tables very frequently [2]. You can find a detailed introduction to memory-optimized tables in this blog [1]. You can also watch this video [3] to learn more about the performance benefits of using In-Memory OLTP.

In the resume-matching scenario, we need to read all the resume features from the database and match them against a new job opening. By using memory-optimized tables, resume features are stored in main memory and disk IO can be significantly reduced. In addition, since we need to write all the predictions back to the database concurrently from different batches, extra performance gains can be achieved. With the support of memory-optimized tables in SQL Server, we achieved low latency on reading from and writing to tables, and a seamless development experience. Fully durable memory-optimized tables were created along with the database; the rest of the development is exactly the same as with regular tables, with no need to know where the data is stored.

CPU affinity and memory allocation

With SQL Server 2014 SP2 and later versions, soft-NUMA is automatically enabled at the database-instance level when the SQL Server service starts [4, 5, 6]. If the database engine detects more than 8 physical cores per NUMA node or socket, it automatically creates soft-NUMA nodes that ideally contain 8 cores, though they can go down to 5 or up to 9 logical cores per node. You can find the corresponding log information when SQL Server detects more than 8 physical cores per socket.

Figure 1: SQL log of auto Soft-NUMA, 4 soft NUMA nodes were created

As shown in Figure 1, our test machine had 20 physical cores, among which 4 soft-NUMA nodes were created automatically such that each node contained 5 cores. Soft-NUMA makes it possible to partition service threads per node, which generally increases scalability and performance by reducing IO and lazy writer bottlenecks. We then created 4 SQL resource pools and 4 external resource pools [7] to specify CPU affinity, using the same set of CPUs in each node. By doing this, both SQL Server and the R processes avoid foreign memory access, since the processes stay within the same NUMA node; hence, memory access latency can be reduced. Those resource pools are then assigned to different workload groups to improve hardware resource utilization.

Soft-NUMA and CPU affinity cannot divide the physical memory in each physical NUMA node: all the soft-NUMA nodes in the same physical NUMA node receive memory from the same OS memory block, and there is no memory-to-processor affinity. However, we should pay attention to the memory allocation between SQL Server and the R processes. By default, only 20% of memory is allocated to R Services, which is not enough for most data analytical tasks. Please see How To: Create a Resource Pool for R [7] for more information. We need to fine-tune the memory allocation between the two, and of course the best configuration varies case by case. In the resume-matching use case, we increased the external memory resource allocation to 70%, which was the best configuration.

Resource governance and concurrent scoring

To scale out the scoring work, a good practice is to adopt a map-reduce approach in which we split millions of resumes into multiple batches and execute multiple scoring jobs concurrently. The parallel processing framework is illustrated in Figure 2.

Figure 2: Illustration of parallel processing in multiple batches
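The split-and-score scheme in Figure 2 can be sketched in plain Java. This is a conceptual sketch only: `score()` is a placeholder standing in for the real in-database model call, and the batch count is arbitrary.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the batching scheme: split the scoring work into fixed-size
// batches, score batches concurrently, then collect all results in order.
public class BatchScoring {
    static double score(int resumeId) {          // placeholder "model"
        return (resumeId % 100) / 100.0;         // fake match probability
    }

    static List<Double> scoreAll(List<Integer> resumeIds, int batches) {
        ExecutorService pool = Executors.newFixedThreadPool(batches);
        try {
            int batchSize = (resumeIds.size() + batches - 1) / batches;
            List<Future<List<Double>>> futures = new ArrayList<>();
            for (int b = 0; b < batches; b++) {
                int from = Math.min(b * batchSize, resumeIds.size());
                int to = Math.min(from + batchSize, resumeIds.size());
                final List<Integer> batch = resumeIds.subList(from, to);
                futures.add(pool.submit(() -> {
                    List<Double> scores = new ArrayList<>();
                    for (int id : batch) scores.add(score(id));
                    return scores;
                }));
            }
            List<Double> all = new ArrayList<>();
            for (Future<List<Double>> f : futures) all.addAll(f.get());
            return all;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```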

Those batches are processed on different CPU sets, and the results are collected and written back to the database. Resource governance in SQL Server is designed to implement this idea. We can create resource governance for R Services on SQL Server [8] by routing those scoring batches into different workload groups (Figure 3). More information about the resource governor can be found in this blog [9].

Figure 3: Resource governor (from: https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor)

Resource governor can help divide the available resources (CPU and memory) on a SQL Server to minimize the workload competition using a classifier function [10, 11]. It provides multitenancy and resource isolation on SQL Server for different tasks to potentially improve the execution and provide predictable performance.
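In spirit, the classifier function routes each incoming session to a workload group so batches land on separate resource pools. In SQL Server this is a T-SQL function evaluated at login time; the Java sketch below only illustrates the round-robin routing idea, with hypothetical group names.

```java
// Conceptual analogy of a Resource Governor classifier: route each new
// scoring session to one of N workload groups in round-robin fashion.
// The group names are hypothetical; the real classifier is a T-SQL function.
public class WorkloadClassifier {
    private final int groups;
    private int next = 0;

    WorkloadClassifier(int groups) { this.groups = groups; }

    synchronized String classify() {
        String group = "WorkloadGroup" + next;
        next = (next + 1) % groups;
        return group;
    }
}
```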

Other Tricks

One pain point with R is that feature engineering is usually processed on a single CPU, which is a major performance bottleneck for most data analysis tasks. In our resume-matching use case, we need to produce 2,500 cross-product features that are then combined with the original 100 features (Figure 4). This whole process would take a significant amount of time if everything were done on a single CPU.

Figure 4: Feature engineering of our resume-matching use case

One trick here is to create an R function for feature engineering and pass it as an rxTransform function during training. The machine learning algorithm is implemented with parallel processing, so as part of training, the feature engineering is also processed on multiple CPUs. In comparison with the regular approach, in which feature engineering is conducted before training and scoring, we observed a 16% performance improvement in terms of scoring time.

Another trick that can potentially improve performance is to use a SQL compute context within R [12]. Since we have isolated resources for different batch executions, we need to isolate the SQL query for each batch as well. By using a SQL compute context, we can parallelize the SQL queries that extract data from tables and constrain the data to the same workload group.

Results and Conclusion

To fully illustrate those tips and tricks, we have published a very detailed step-by-step tutorial. A few benchmark tests for scoring 1.1 million rows of data were also conducted. We used the RevoScaleR and MicrosoftML packages to train prediction models separately, and then compared the scoring time with and without those optimizations. Figures 5 and 6 summarize the best performance results using the RevoScaleR and MicrosoftML packages. The tests were conducted on the same Azure SQL Server VM using the same SQL query and R code. Eight batches per matching job were used in all tests.

Figure 5: RevoScaleR scoring results

Figure 6: MicrosoftML scoring results

The results suggested that the number of features had a significant impact on the scoring time. Also, using those optimization tips and tricks could significantly improve the performance in terms of scoring time. The improvement was even more prominent if more features were used in the prediction model.

Acknowledgement

Lastly, we would like to express our thanks to Umachandar Jayachandran, Amit Banerjee, Ramkumar Chandrasekaran, Wee Hyong Tok, Xinwei Xue, James Ren, Lixin Gong, Ivan Popivanov, Costin Eseanu, Mario Bourgoin, Katherine Lin and Yiyu Chen for the great discussions, proofreading and test-driving the tutorial accompanying this blog post.

References

[1] Introduction to Memory-Optimized Tables

[2] Demonstration: Performance Improvement of In-Memory OLTP

[3] 17-minute video explaining In-Memory OLTP and demonstrating performance benefits

[4] Understanding Non-uniform Memory Access

[5] How SQL Server Supports NUMA

[6] Soft-NUMA (SQL Server)

[7] How To: Create a Resource Pool for R

[8] Resource Governance for R Services

[9] Resource Governor

[10] Introducing Resource Governor

[11] SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

[12] Define and Use Compute Contexts
Source: Azure

Azure IoT Gateway SDK packages now available

Back in November, we announced the general availability of the Azure IoT Gateway SDK. We’ve already heard from a number of customers who are leveraging the open source Gateway SDK to connect their legacy devices or run analytics at the edge of their network. It’s great to see quick adoption! With the Gateway SDK’s modular architecture, developers can also program their own custom modules to perform specific actions. Thanks to its flexible design, you can create these modules in your preferred language – Node.js, Java, C#, or C.

We want to further simplify the experience of getting started with writing modules for the Gateway SDK. Today, we are announcing availability of packages to streamline the developer experience, enabling you to get started in minutes!

What packages are available?

NPM

azure-iot-gateway: With this you will be able to run the Gateway sample app and start writing Node.js modules. This package contains the Gateway runtime core and auto-installs the module dependencies’ packages for Linux or Windows.
generator-az-iot-gw-module: This provides Gateway module project scaffolding with Yeoman.

Maven

com.microsoft.azure.gateway/gateway-module-base: With this you will be able to run the Gateway sample app and start writing Java modules. You only need this package and its dependencies to run the Gateway app locally, but you do not need to include them when you publish your gateway. This package contains the Gateway runtime core and links to the module dependencies’ packages for Linux or Windows.
com.microsoft.azure.gateway/gateway-java-binding: This package contains the Java binding or interface. This package is required for both runtime and publishing.

NuGet

Microsoft.Azure.Devices.Gateway: This package contains the Gateway runtime core.
Microsoft.Azure.IoT.Gateway.Module: This package includes the module dependencies required to run the Gateway sample app and write .NET Framework modules on Windows.

What does this mean for you?

The primary benefit of these packages is time saved. They significantly reduce the number of steps required to start writing a module. You no longer have to clone and build the whole Gateway project. In addition, the packages include all the dependencies for you to mix modules written in different languages.

What’s next?

.NET Core NuGet packages are coming soon. We are looking for further ways to improve the Gateway SDK developer experience. For more information on getting started with these packages, check out our GitHub sample apps.

We’re delighted to see developers contributing their modules to the Gateway SDK community. Our team looks forward to seeing further activity in this area and learning more about your gateway scenarios. So take the new packages for a spin, and let us know what you think!
Source: Azure

Backup and restore your Azure Analysis Services models

This month we announced the general availability of Azure Analysis Services, which evolved from the proven analytics engine in Microsoft SQL Server Analysis Services. The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

One of the features added to Azure Analysis Services is the ability to back up your semantic models, and all the data within them, to a blob storage account. The backups can later be restored to the same Azure Analysis Services server or to a different one. This method can also be used to back up models from SQL Server Analysis Services and then restore them to Azure Analysis Services. Please note that you can only restore models with a 1200 or higher compatibility level, and that any Active Directory users or groups must be removed from any role membership before restoring. After restoring, you can re-add those users and groups from Azure Active Directory.

Configure storage settings

Before backing up or restoring, you need to configure storage settings for your server. Azure Analysis Services will back up your models to a blob storage account of your choosing. You can configure multiple servers to use the same storage account, making it easy to move models between servers.

To configure storage settings:

In Azure portal > Settings, click Backup.

Click Enabled, then click Storage Settings.

Select your storage account or create a new one.
Select a container or create a new one.

Save your backup settings. You must save your changes whenever you change storage settings, or enable or disable backup.

Backup

Backups can be performed using the latest version of SQL Server Management Studio. They can also be automated through PowerShell or with the Analysis Services Tabular Object Model (TOM).

To backup using SQL Server Management Studio:

In SSMS, right-click a database > Back Up.
In Backup Database > Backup file, click Browse.
In the Save file as dialog, verify the folder path, and then type a name for the backup file. By default, the file name is given a .abf extension.
In the Backup Database dialog, select options.

Allow file overwrite – Select this option to overwrite backup files of the same name. If this option is not selected, the file you are saving cannot have the same name as a file that already exists in the same location.

Apply compression – Select this option to compress the backup file. Compressed backup files save disk space, but require slightly higher CPU utilization.

Encrypt backup file – Select this option to encrypt the backup file. This option requires a user-supplied password to secure the backup file. The password prevents reading of the backup data by any means other than a restore operation. If you choose to encrypt backups, store the password in a safe location.

Click OK to create and save the backup file.

Restore

When restoring, your backup file must be in the storage account you've configured for your server. If you need to move a backup file from an on-premises location to your storage account, use Microsoft Azure Storage Explorer or the AzCopy command-line utility.

If you're restoring a tabular 1200 model database from an on-premises SQL Server Analysis Services server, you must first remove all of the domain users from the model's roles, and add them back to the roles as Azure Active Directory users. The roles themselves remain the same.

To restore by using SSMS:

In SSMS, right-click a database > Restore.
In the Backup Database dialog, in Backup file, click Browse.
In the Locate Database Files dialog, select the file you want to restore.
In Restore database, select the database.
Specify options. Security options must match the backup options you used when backing up.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Source: Azure

Azure Billing Reader role and preview of Invoice API

Today, we are pleased to announce the addition of a new built-in role, Billing Reader. The Billing Reader role allows you to delegate access to just billing information, with no access to services such as VMs and storage accounts. Users in this role can perform Azure billing management operations such as viewing subscription scoped cost reporting data and downloading invoices. We are also releasing the public preview of a new billing API that allows you to programmatically download a subscription's billing invoices.

Allowing additional users to download invoices

Today, only the account administrator for a subscription can download and view invoices. Now the account administrator can allow users in subscription scoped roles (Owner, Contributor, Reader, User Access Administrator, Billing Reader, Service Administrator, and Co-Administrator) to view invoices. Because the invoice contains personal information, the account administrator is required to explicitly enable this access. The steps to allow users in subscription scoped roles to view invoices are below:

Log in to the Azure Management Portal with account administrator credentials.

Select the subscription for which you want to allow additional users to download invoices.

From the subscription blade, select the Invoices tab within the billing section, then click the Access to invoices command. The feature to allow additional users to download invoices is in preview, so not all invoices may be available. The account administrator will have access to all invoices.

Allow subscription scoped roles to download invoice

How to add users to Billing Reader Role

Users in administrative roles (Owner, User Access Administrator, Service Administrator, and Co-administrator) can delegate Billing Reader access to other users. Users in the Billing Reader role can view subscription scoped billing information such as usage and invoices. Note that billing information is currently only viewable for non-enterprise subscriptions; support for enterprise subscriptions will be available in the future.

Select the subscription for which you want to delegate Billing Reader access
From the subscription blade, select Access Control (IAM)

Click Add
Select “Billing Reader” role

Select or add user that you want to delegate access to subscription scoped billing information

The full definition of the access allowed for users in the Billing Reader role is detailed in the built-in roles documentation.

Downloading invoice using new Billing API

Until now, you could only download invoices for your subscription via the Azure management portal. We are now enabling users in administrative roles (Owner, Contributor, Reader, Service Administrator, and Co-administrator) and the Billing Reader role to download invoices for a subscription programmatically. The invoice API allows you to download current and past invoices for an Azure subscription. During the API preview, some invoices may not be available for download. The detailed API documentation is available, and samples can also be downloaded. The feature to download invoices via the API is not available for certain subscription types, such as support, enterprise agreements, or Azure in Open. To download invoices through the API, the account admin has to enable access for users in subscription scoped roles as outlined above.
You can easily download the latest invoice for your subscription using Azure PowerShell.

Log in using Login-AzureRmAccount
Set your subscription context using Set-AzureRmContext -SubscriptionId <subscription Id>
To get the URL of the latest invoice, execute Get-AzureRmBillingInvoice –Latest

The output will give back an invoice link to download the latest invoice document in PDF format, an example is shown below

PS C:> Get-AzureRmBillingInvoice -Latest
Id : /subscriptions/{subscription ID}/providers/Microsoft.Billing/invoices/2017-02-09-117274100066163
Name : 2017-02-09-117274100066163
Type : Microsoft.Billing/invoices
InvoicePeriodStartDate : 1/10/2017 12:00:00 AM
InvoicePeriodEndDate : 2/9/2017 12:00:00 AM
DownloadUrl : https://{billingstorage}.blob.core.windows.net/invoices/{invoice identifier}.pdf?sv=2014-02-14&sr=b&sig=XlW87Ii7A5MhwQVvN1kMa0AR79iGiw72RGzQTT%2Fh4YI%3D&se=2017-03-01T23%3A25%3A56Z&sp=r
DownloadUrlExpiry : 3/1/2017 3:25:56 PM

To download the invoice to a local directory, you can run the following:

PS C:> $invoice = Get-AzureRmBillingInvoice -Latest
PS C:> Invoke-WebRequest -Uri $invoice.DownloadUrl -OutFile <directory>InvoiceLatest.pdf

In the future, you will see additions to this API that will enable expanded programmatic access to billing functionality.
Source: Azure

Azure management libraries for Java generally available now

Today, we are announcing the general availability of the new, simplified Azure management libraries for Java for Compute, Storage, SQL Database, Networking, Resource Manager, Key Vault, Redis, CDN and Batch services.

Azure Management Libraries for Java are open source – https://github.com/Azure/azure-sdk-for-java.

| Service / feature | Generally available | Available as preview | Coming soon |
|---|---|---|---|
| Compute | Virtual machines and VM extensions; Virtual machine scale sets; Managed disks | | Azure container services; Azure container registry |
| Storage | Storage accounts | | Encryption |
| SQL Database | Databases; Firewalls; Elastic pools | | |
| Networking | Virtual networks; Network interfaces; IP addresses; Routing table; Network security groups; DNS; Traffic managers | Load balancers; Application gateways | |
| More services | Resource Manager; Key Vault; Redis; CDN; Batch | App service – Web apps; Functions; Service bus | Monitor; Graph RBAC; DocumentDB; Scheduler |
| Fundamentals | Authentication – core | Async methods | |

Generally available means that developers can use these libraries in production with full support by Microsoft through GitHub or Azure support channels. Preview features are flagged with the @Beta annotation in libraries.
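The @Beta flagging can be illustrated with a small, self-contained sketch. This re-creates the idea with a hypothetical annotation of the same name; the real SDK ships its own @Beta type:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical re-creation of the idea: a marker annotation that flags
// preview API surface so callers and tools can spot it.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface Beta { }

public class BetaSketch {
    @Beta // signals that this method's API may still change
    static String previewFeature() { return "preview"; }

    public static void main(String[] args) throws Exception {
        boolean flagged = BetaSketch.class
                .getDeclaredMethod("previewFeature")
                .isAnnotationPresent(Beta.class);
        System.out.println(flagged); // prints true
    }
}
```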

In Spring 2016, based on Java developer feedback, we started a journey to simplify the Azure management libraries for Java. Our goal is to improve the developer experience by providing a higher-level, object-oriented API optimized for readability and writability. We announced multiple previews of the libraries, and during the preview period early adopters provided valuable feedback and helped us prioritize the features and Azure services to support. For example, we added support for asynchronous methods, which enables developers to use reactive programming patterns, and we added support for Azure Service Bus.
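As a rough, SDK-free illustration of what an asynchronous variant enables, here is a plain-JDK sketch using CompletableFuture as a stand-in (the library exposes its own async types, and createVmAsync is a hypothetical name): follow-up work is composed without blocking the calling thread.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Hypothetical stand-in for an SDK ...Async() method: the "creation"
    // completes on a background thread instead of blocking the caller.
    static CompletableFuture<String> createVmAsync(String name) {
        return CompletableFuture.supplyAsync(() -> "created:" + name);
    }

    public static void main(String[] args) {
        // Chain follow-up work reactively; block only at the very end (demo).
        String result = createVmAsync("linuxVM1")
                .thenApply(id -> id + ":tagged")
                .join();
        System.out.println(result); // created:linuxVM1:tagged
    }
}
```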

Getting Started

Add the following dependency fragment to your Maven POM file to use the generally available version of the libraries:

<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure</artifactId>
<version>1.0.0</version>
</dependency>

Working with the Azure Management Libraries for Java

One Java statement to authenticate. One statement to create a virtual machine. One statement to modify an existing virtual network … No more guessing about what is required vs. optional vs. non-modifiable.
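The "no guessing" property comes from a staged fluent-builder pattern: each stage of the define() … create() chain exposes only the next legal choice. Here is a toy, self-contained sketch of that idea, with hypothetical names rather than the actual SDK types:

```java
import java.util.ArrayList;
import java.util.List;

public class FluentSketch {
    // Stage 1: a region is required before anything else is offered.
    interface WithRegion { WithCreate withRegion(String region); }

    // Final stage: optional settings plus the terminal create() step.
    interface WithCreate {
        WithCreate withTag(String tag); // optional, repeatable
        String create();                // terminal step
    }

    // One concrete builder implements every stage interface.
    static class Builder implements WithRegion, WithCreate {
        private String region;
        private final List<String> tags = new ArrayList<>();
        public WithCreate withRegion(String region) { this.region = region; return this; }
        public WithCreate withTag(String tag) { tags.add(tag); return this; }
        public String create() { return "vm in " + region + " tags=" + tags; }
    }

    static WithRegion define() { return new Builder(); }

    public static void main(String[] args) {
        // The compiler rejects create() before withRegion(), so required
        // settings cannot be skipped and optional ones stay discoverable.
        String vm = define().withRegion("eastus").withTag("demo").create();
        System.out.println(vm); // vm in eastus tags=[demo]
    }
}
```

Because each stage's return type narrows what the IDE offers next, required settings read top to bottom and optional ones are discoverable without consulting documentation.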

Azure Authentication

One statement to authenticate and choose a subscription. The Azure class is the simplest entry point for creating and interacting with Azure resources.

Azure azure = Azure.authenticate(credFile).withDefaultSubscription();

Create a Virtual Machine

You can create a virtual machine instance by using the define() … create() method chain.

VirtualMachine linuxVM = azure.virtualMachines()
.define(linuxVM1Name)
.withRegion(Region.US_EAST)
.withNewResourceGroup(rgName)
.withNewPrimaryNetwork("10.0.0.0/28")
.withPrimaryPrivateIpAddressDynamic()
.withNewPrimaryPublicIpAddress(linuxVM1Pip)
.withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
.withRootUsername("tirekicker")
.withSsh(sshkey)
.withNewDataDisk(100)
.withSize(VirtualMachineSizeTypes.STANDARD_D3_V2)
.create();

Update a Virtual Machine

You can update a virtual machine instance by using an update() … apply() method chain.

linuxVM.update()
.withNewDataDisk(20, lun, CachingTypes.READ_WRITE)
.apply();

Create a Virtual Machine Scale Set

You can create a virtual machine scale set instance by using another define() … create() method chain.

VirtualMachineScaleSet vmScaleSet = azure.virtualMachineScaleSets()
.define(vmssName)
.withRegion(Region.US_EAST)
.withExistingResourceGroup(rgName)
.withSku(VirtualMachineScaleSetSkuTypes.STANDARD_D5_V2)
.withExistingPrimaryNetworkSubnet(network, "subnet1")
.withExistingPrimaryInternetFacingLoadBalancer(publicLoadBalancer)
.withoutPrimaryInternalLoadBalancer()
.withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
.withRootUsername("tirekicker")
.withSsh(sshkey)
.withNewDataDisk(100)
.withNewDataDisk(100, 1, CachingTypes.READ_WRITE)
.withNewDataDisk(100, 2, CachingTypes.READ_ONLY)
.withCapacity(10)
.create();

Create a Network Security Group

You can create a network security group instance by using another define() … create() method chain.

NetworkSecurityGroup frontEndNSG = azure.networkSecurityGroups().define(frontEndNSGName)
.withRegion(Region.US_EAST)
.withNewResourceGroup(rgName)
.defineRule("ALLOW-SSH")
.allowInbound()
.fromAnyAddress()
.fromAnyPort()
.toAnyAddress()
.toPort(22)
.withProtocol(SecurityRuleProtocol.TCP)
.withPriority(100)
.withDescription("Allow SSH")
.attach()
.defineRule("ALLOW-HTTP")
.allowInbound()
.fromAnyAddress()
.fromAnyPort()
.toAnyAddress()
.toPort(80)
.withProtocol(SecurityRuleProtocol.TCP)
.withPriority(101)
.withDescription("Allow HTTP")
.attach()
.create();

Create a Web App

You can create a Web App instance by using another define() … create() method chain.

WebApp webApp = azure.webApps()
.define(appName)
.withRegion(Region.US_WEST)
.withNewResourceGroup(rgName)
.withNewWindowsPlan(PricingTier.STANDARD_S1)
.create();

Create a SQL Database

You can create a SQL server instance by using another define() … create() method chain.

SqlServer sqlServer = azure.sqlServers().define(sqlServerName)
.withRegion(Region.US_EAST)
.withNewResourceGroup(rgName)
.withAdministratorLogin("adminlogin123")
.withAdministratorPassword("myS3cureP@ssword")
.withNewFirewallRule("10.0.0.1")
.withNewFirewallRule("10.2.0.1", "10.2.0.10")
.create();

Then, you can create a SQL database instance by using another define() … create() method chain.

SqlDatabase database = sqlServer.databases().define("myNewDatabase")
.create();

Sample Code

You can find plenty of sample code illustrating management scenarios for Azure (69+ end-to-end scenarios).

Service
Management Scenario

Virtual Machines

Manage virtual machines
Manage virtual machines asynchronously
Manage availability set
List virtual machine images
Manage virtual machines using VM extensions
List virtual machine extension images
Create virtual machines from generalized image or specialized VHD
Create virtual machine using custom image from virtual machine
Create virtual machine using custom image from VHD
Create virtual machine by importing a specialized operating system disk VHD
Create virtual machine using specialized VHD from snapshot
Convert virtual machines to use managed disks
Manage virtual machine with unmanaged disks

Virtual Machines – parallel execution

Create multiple virtual machines in parallel
Create multiple virtual machines with network in parallel
Create multiple virtual machines across regions in parallel

Virtual Machine Scale Sets

Manage virtual machine scale sets (behind an Internet facing load balancer)
Manage virtual machine scale sets (behind an Internet facing load balancer) asynchronously
Manage virtual machine scale sets with unmanaged disks

Storage

Manage storage accounts
Manage storage accounts asynchronously

Networking

Manage virtual network
Manage virtual network asynchronously
Manage network interface
Manage network security group
Manage IP address
Manage Internet facing load balancers
Manage internal load balancers

Networking – DNS

Host and manage domains

Traffic Manager

Manage traffic manager profiles

Application Gateway

Manage application gateways
Manage application gateways with backend pools

SQL Database

Manage SQL databases
Manage SQL databases in elastic pools
Manage firewalls for SQL databases
Manage SQL databases across regions

App Service – Web apps on Windows

Manage Web apps
Manage Web apps with custom domains
Configure deployment sources for Web apps
Configure deployment sources for Web apps asynchronously
Manage staging and production slots for Web apps
Scale Web apps
Manage storage connections for Web apps
Manage data connections (such as SQL database and Redis cache) for Web apps
Manage authentication for Web apps

App Service – Web apps on Linux

Manage Web apps
Manage Web apps with custom domains
Configure deployment sources for Web apps
Scale Web apps
Manage storage connections for Web apps
Manage data connections (such as SQL database and Redis cache) for Web apps

Functions

Manage functions
Manage functions with custom domains
Configure deployment sources for functions
Manage authentication for functions

Service Bus

Manage queues with basic features
Manage publish-subscribe with basic features
Manage queues and publish-subscribe with claims based authorization
Manage publish-subscribe with advanced features – sessions, dead-lettering, de-duplication and auto-deletion of idle entries
Manage queues with advanced features – sessions, dead-lettering, de-duplication and auto-deletion of idle entries

Resource Groups

Manage resource groups
Manage resources
Deploy resources with ARM templates
Deploy resources with ARM templates (with progress)
Deploy a virtual machine with managed disks using an ARM template

Redis Cache

Manage Redis Cache

Key Vault

Manage key vaults

CDN

Manage CDNs

Batch

Manage batch accounts

Start using Azure Management Libraries for Java today!

It is easy to get started: you can run the samples above.
 
As always, we would like to hear your feedback via comments on this blog, by opening issues on GitHub, or via e-mail to Java@Microsoft.com.
 
You can also find plenty of additional information about Java on Azure at http://azure.com/java.
Source: Azure

Networking to and within the Azure Cloud, part 3

This is the third blog post of a three-part series. Before you begin reading, I suggest reading the first two posts, Networking to and within the Azure Cloud, part 1 and Networking to and within the Azure Cloud, part 2.

Hybrid networking is a nice thing, but the question then is how do we define hybrid networking? For me, in the context of connectivity to virtual networks over ExpressRoute private peering or VPN, it is the ability to connect cross-premises resources to one or more Virtual Networks (VNets). While this all works nicely, and we know how to connect to the cloud, how do we network within the cloud? There are at least three built-in ways of doing this in Azure. In this series of three blog posts, my intent is to briefly explain:

Hybrid networking connectivity options
Intra-cloud connectivity options
Putting all these concepts together

Putting it all together – connecting to the cloud and within the cloud

While all these methods and connectivity options are interesting separately, they are most powerful when combined.

Transit VNet with Gateway Sharing

The topology below shows a simple hub-and-spoke topology. Even though VNet peering itself is NOT transitive, transit routing is allowed through gateways (VPN and ExpressRoute). Here is an example combining VNet peering with gateway sharing, but using ExpressRoute:

Transit VNet with Gateway Sharing, 1 ExpressRoute circuit in 2 regions

When combining this with the use of more than one Azure region (remember, VNet peering works within a single region), you could easily create a topology like this one:

In the image above, every VNet within West US and East US is able to talk to every other without ever leaving the Microsoft backbone network, at high speeds, limited only by the ExpressRoute gateway created on each hub VNet, that is, 1, 2, or 10 Gbps depending on the ExpressRoute gateway SKU.

That being the case, note that packets are physically routed through Chicago when going between the West US and East US regions. This makes sense because Chicago is between these two regions.

Transit VNet with Gateway Sharing, 2 ExpressRoute circuits in 2 regions

With another topology, where ExpressRoute connectivity is needed in more locations than Chicago, or where the resiliency of a single ExpressRoute circuit is not enough despite the SLA, you could create the following topology:

In this case, cross-premises connectivity might have better latency if premises are located in the Western and Eastern parts of the United States. There are also two ExpressRoute locations through which packets between the VNets in West US and East US can travel. Since these regions are close to the circuit locations, the added latency should not be an issue. Moreover, this gives a higher potential uptime because of the use of two separate ExpressRoute circuits in two distinct locations across the continental US.

Transit VNet with Gateway Sharing, 3 ExpressRoute circuits in 3 regions

This model could scale to more circuits and more regions, but I believe this gives a good understanding of the kinds of topologies you can create using the Azure networking toolbox. Local circuits are important in that case to ensure optimal routing. On that specific topic, I encourage you to read the article Optimize ExpressRoute Routing. The article discusses optimal routing for virtual networks, which is useful for understanding how to make sure that packets routed from West US and destined for North Europe do not go through the Tokyo ExpressRoute circuit. Assigning weights to connections is how you actually achieve this.

Transit VNet with BGP Enabled VPN Gateway Sharing and VPN Transit Routing

Another interesting use case of VPN transit, with BGP routing, would allow topologies like the one in the image below.

For more information, please read the Overview of BGP with Azure VPN Gateways. In this last case, it is possible for on-premises users located in the Western part of the US to use the VPN on the left to reach on-premises users connected via the VPN to the North Europe VNet on the right, effectively leveraging the Azure backbone between their own facilities without using some kind of proxy mechanism, which would otherwise be required to enable that scenario.
Source: Azure