Announcing App Service Isolated: more power, scale, and ease of use

Today, we are announcing the general availability of App Service Isolated, which brings the simplicity of multi-tenant App Service to the secure, dedicated virtual networks powered by App Service Environment (ASE). 

Azure App Service is Microsoft’s leading PaaS (Platform as a Service) offering, hosting over 1 million external apps and sites. It helps you build, deploy, and scale web, mobile, and API apps instantaneously without worrying about the underlying infrastructure. It allows you to leverage your existing skills by supporting an increasing array of languages, frameworks, and popular OSS, and has built-in capabilities that streamline your CI/CD pipeline. ASE was introduced in 2015 to offer customers network isolation, enhanced control, and increased scale options. The updated ASE capabilities that come with the new pricing tier, App Service Isolated, now allow you to run apps in your dedicated virtual networks with even better scale and performance through an intuitive user experience.

Streamlined scaling

Scaling up or out is now easier. The new ASE eliminates the need to manage and scale worker pools. To scale, either choose a larger Isolated plan (to scale up) or add instances (to scale out), just like the multi-tenant App Service. It’s that easy. To further increase scaling flexibility, App Service Isolated comes with a maximum default scale of 100 Isolated plan instances. You now have more capacity for large implementations.
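As a sketch of what this looks like in practice, both operations map to a single Azure CLI command. The resource group and plan names below are hypothetical, and I2 is one of the Isolated worker sizes:

```shell
# Scale up: move the plan to a larger Isolated worker size (names are placeholders).
az appservice plan update --resource-group MyResourceGroup --name MyIsolatedPlan --sku I2

# Scale out: set the instance count, just as with the multi-tenant App Service.
az appservice plan update --resource-group MyResourceGroup --name MyIsolatedPlan --number-of-workers 10
```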

Upgraded performance

The new ASE uses dedicated Dv2-based machines boasting faster chipsets, SSD storage, and twice the memory per core when compared to the first generation. The dedicated worker sizes for the new ASE are 1 core with 3.5 GB RAM, 2 cores with 7 GB RAM, and 4 cores with 14 GB RAM. With this upgraded infrastructure, you will be able to run your apps with lower latency, have more power to handle heavier workloads, and support more users.

Simplified experience

Creating the new ASE is easy. By selecting the Isolated pricing tier, App Service will create an App Service Plan (ASP) and a new ASE directly. You will just need to specify the Virtual Network that you want to deploy your applications to. There is no separate workflow required to spin up a new ASE in your secure, dedicated virtual networks.
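The same pairing can be expressed in an ARM template. The sketch below is illustrative only — the API version, SKU name, and property shapes are assumptions rather than the exact schema — but it conveys the idea that an ASE (Microsoft.Web/hostingEnvironments) and an App Service Plan referencing it are provisioned together:

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/hostingEnvironments",
      "name": "MyAse",
      "kind": "ASEV2",
      "apiVersion": "2016-09-01",
      "location": "West Europe",
      "properties": {
        "virtualNetwork": { "id": "<subnet resource id of your VNet>" }
      }
    },
    {
      "type": "Microsoft.Web/serverfarms",
      "name": "MyIsolatedPlan",
      "apiVersion": "2016-09-01",
      "location": "West Europe",
      "dependsOn": [ "MyAse" ],
      "sku": { "Tier": "Isolated", "Name": "I1" },
      "properties": {
        "hostingEnvironmentProfile": { "id": "<resource id of MyAse>" }
      }
    }
  ]
}
```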

We’ve made App Service Environment (ASE) faster, more efficient, and easier to deploy into your virtual network, enabling you to run apps with Azure App Service at high scale in an isolated network environment. Check out the Azure Friday video. Partners and customers can also learn more about how to get started and set up.
Source: Azure

Azure App Service Premium V2 in Public Preview

Azure App Service is announcing the Public Preview of the Premium V2 tier! App Service is a platform-as-a-service (PaaS) offering that allows you to quickly build, deploy, and scale enterprise-grade web, mobile, and API apps running on any platform. Apps running on App Service can meet rigorous performance, scalability, security, and compliance requirements while leveraging a fully managed platform that takes care of infrastructure maintenance. The new Premium V2 tier features Dv2-series VMs with even faster processors, SSD storage, and double the memory-to-core ratio compared to the previous compute generation. The following web worker sizes are available with Premium V2:

Small (1 CPU core, 3.5 GiB memory)
Medium (2 CPU cores, 7 GiB memory)
Large (4 CPU cores, 14 GiB memory)

During the Preview timeframe, the pricing for App Service Premium V2 is identical to the pricing for the existing App Service Premium tier. Simply put, you are getting better performance and scalability for the same price with Premium V2. All features included with App Service Premium, such as auto scaling, CI/CD support, and testing in production, are available with App Service Premium V2 as well. To use Premium V2, pick the Premium V2 option from the pricing tier selector in the Azure Portal. If you want to use Premium V2 with an existing app running on App Service, you can redeploy or clone your app to a new Premium V2 App Service Plan. Premium V2 will be available in a growing list of regions, starting with South Central US, West Europe, North Europe, Australia East, and Australia Southeast.

Getting started: You can automate the provisioning of Premium V2 resources via an ARM template such as the sample below.

{
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "sku": {
        "Tier": "PremiumV2",
        "Name": "PV2_1"
      },
      "name": "AppServicePlanName",
      "apiVersion": "2016-03-01",
      "location": "Australia East",
      "properties": {
        "name": "AppServicePlanName"
      }
    }
  ]
}
You can run the ARM template via the Azure Portal or PowerShell. For example, the following sample sequence can be used to create a resource group and provision a Premium V2 App Service Plan:

New-AzureRmResourceGroup -Name {e.g. PremiumV2ResourceGroup} -Location {e.g. "West Europe"}
New-AzureRmResourceGroupDeployment -ResourceGroupName {e.g. PremiumV2ResourceGroup} -TemplateFile {local path for your template .json file}
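If you prefer the cross-platform Azure CLI over PowerShell, a roughly equivalent sketch (the resource group name and template file path are placeholders) would be:

```shell
# Create a resource group, then deploy the ARM template into it.
az group create --name PremiumV2ResourceGroup --location "West Europe"
az group deployment create --resource-group PremiumV2ResourceGroup --template-file ./premiumv2-plan.json
```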
We invite you to try out the new Premium V2 tier and provide your feedback via the App Service MSDN forum. You can also submit feature ideas via the App Service feedback forum.
Source: Azure

Fast and Easy Containers: Azure Container Instances

Containers have fundamentally changed the way developers develop their applications, the way applications are deployed, and the way system administrators manage their environments. Containers offer a broadly accepted and open standard, enabling simple portability between platforms and between clouds. Today, I am extremely excited to announce a new Azure service that makes it even easier to deploy containers. Azure Container Instances (ACI) is the first service of its kind in the cloud, delivering containers with great simplicity and speed and without any virtual machine infrastructure to manage. ACI is the fastest and easiest way to run a container in the cloud.

An Azure Container Instance is a single container that starts in seconds and is billed by the second. ACI offers highly versatile sizing, allowing you to select the exact amount of memory separately from the exact count of vCPUs, so your application fits perfectly on the infrastructure. Your containers won’t be billed for a second longer than required and won’t use a GB more than needed. With ACI, containers are a first-class object of the Azure platform, offering Role-Based Access Control (RBAC) on the instance and billing tags to track usage at the individual container level. As the service directly exposes containers, there is no VM management to think about and no higher-level cluster orchestration concepts to learn. It is simply your code, in a container, running in the cloud.

For those beginning their container journey, Azure Container Instances provide a simple experience to get started with containers in the cloud, enabling you to quickly create and deploy new containers with only a few simple parameters. Here is a sample CLI command that will deploy to ACI:

az container create -g aci_grp --name nginx --image library/nginx --ip-address public

and if you want to control the exact GB of memory and CPU count:

az container create -g aci_grp --name nginx --image library/nginx --ip-address public --cpu 2 --memory 10

Container Instances are available today in public preview for Linux containers. Windows container support will be available in the coming weeks. You can deploy using the Azure CLI or using a template. Furthermore, you can quickly and easily deploy from a public repository, like Docker Hub, or pull from your own private repository using the Azure Container Registry. Each container deployed is securely isolated from other customers using proven virtualization technology.
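Container instances can also be described declaratively in a template. The ARM-style sketch below is an illustration under assumptions — the API version and property names may differ from the preview schema — showing a single public nginx container group with the same CPU and memory settings as the CLI example:

```json
{
  "type": "Microsoft.ContainerInstance/containerGroups",
  "name": "nginx",
  "apiVersion": "2017-08-01-preview",
  "location": "westus",
  "properties": {
    "osType": "Linux",
    "ipAddress": {
      "type": "Public",
      "ports": [ { "protocol": "TCP", "port": 80 } ]
    },
    "containers": [
      {
        "name": "nginx",
        "properties": {
          "image": "library/nginx",
          "resources": {
            "requests": { "cpu": 2, "memoryInGB": 10 }
          },
          "ports": [ { "port": 80 } ]
        }
      }
    ]
  }
}
```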

The above shows the simplicity of ACI. While Azure Container Instances are not an orchestrator and are not intended to replace orchestrators, they will fuel orchestrators and other services as a container building block. In fact, as part of today’s announcement, we are also releasing in open source the ACI Connector for Kubernetes. This open-source connector enables Kubernetes clusters to deploy to Azure Container Instances. It enables on-demand and nearly instantaneous container compute, orchestrated by Kubernetes, without VM infrastructure to manage and while still leveraging the portable Kubernetes API. This allows you to utilize both VMs and container instances simultaneously in the same Kubernetes cluster, giving you the best of both worlds. Azure Container Instances can be used for fast bursting and scaling, whereas VMs can be used for more predictable scaling. Workloads can even migrate back and forth between these underlying infrastructure models. This offers a level of agility for deploying Kubernetes unlike any other cloud provider, enabling services that start in seconds without any underlying VMs and are billed and scaled per second.

Here is a demo of the ACI Connector in action:

We continue to increase our investment and community engagement with containers and with Kubernetes, including Helm, our recent release of Draft, and the open-source ACI connector released today. With these community releases, we continue to learn how important it is to have an open ecosystem to drive innovation in this growing container space, an exciting and humbling experience. To continue our education and community engagement, I am also excited to announce that Microsoft has joined the Cloud Native Computing Foundation (CNCF) as a Platinum member. CNCF is a Collaborative Project of the Linux Foundation (that Microsoft joined last year) which hosts and provides governance for a wide range of projects including Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, containerd, Helm, gRPC, and many others. I am really excited to work closely with the CNCF community and have Gabe Monroy (Lead PM, Containers @ Microsoft Azure) join the CNCF board.

I hope you try out these new services and give us feedback. I am excited to see what you are going to build!

See ya around,

Corey
Source: Azure

Microsoft joins Cloud Native Computing Foundation

I’m excited to share that we have just joined the Cloud Native Computing Foundation (CNCF) as a Platinum member. CNCF is a part of the Linux Foundation, which provides governance for a wide range of cloud-oriented open source projects, such as Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, containerd, Helm, gRPC, and many others.

We joined the Linux Foundation last year, and have now decided to expand that relationship with CNCF membership as a natural next step in investing in open source communities and code at multiple levels, especially in the area of containers. Some specific examples include:

Kubernetes: Microsoft has been contributing code to the Kubernetes project, as well as running Kubernetes as part of the Azure Container Service. Engineering manager and architect Brendan Burns is one of the Kubernetes project maintainers.
Helm: The Helm project was started by Deis before being acquired by Microsoft and continues to be developed and improved by Microsoft engineers. Adam Reese, Michelle Noorali, and Matt Butcher are all project maintainers.
containerd: Microsoft engineers contribute code to expand containerd to Windows Containers; John Howard from the Windows team is one of the project maintainers.
gRPC: A universal, high-performance RPC framework covering multiple languages such as Node.js, Java, Ruby, Go, and C#. We plan to increase our participation in this project.

Open source is a way to scale software development beyond what any single organization can do. It allows vendors, customers, researchers and others to collaborate and share knowledge about problems and solutions, like no other form of development. And I strongly believe the power of open source derives from strong, diverse communities and that we have an obligation to support these communities by participating as code contributors and in the associated foundations and committees. With all that in mind, I look forward to us working with the other CNCF members (most of whom we already know very well) to help make these projects awesome for everyone.

John Gossman (@gossmanster)
Source: Azure

Model comparison and merging for Azure Analysis Services

Relational-database schema comparison and merging is a well-established market. Leading products include SSDT Schema Compare and Redgate SQL Compare, which is partially integrated into Visual Studio. These tools are used by organizations seeking to adopt a DevOps culture to automate build-and-deployment processes and increase the reliability and repeatability of mission-critical systems.

Comparison and merging of BI models also introduces opportunities to bridge the gap between self-service and IT-owned “corporate BI”. This helps organizations seeking to adopt a “bi-modal BI” strategy to mitigate the risk of competing IT-owned and business-owned models offering redundant solutions with conflicting definitions.

Such functionality is available for Analysis Services tabular models. Please see the Model Comparison and Merging for Analysis Services whitepaper for detailed usage scenarios, instructions and workflows.

This is made possible using PBIX import in the Azure Analysis Services web designer (see this post for more information) and BISM Normalizer, which we are pleased to announce now resides on the Analysis Services Git repo. BISM Normalizer is a popular open-source tool that works with Azure Analysis Services and SQL Server Analysis Services. All tabular model objects and compatibility levels, including the new 1400 compatibility level, are supported. As a Visual Studio extension, it is tightly integrated with source control systems, build and deployment processes, and model management workflows.

Thanks to Javier Guillen (Blue Granite), Chris Webb (Crossjoin Consulting), Marco Russo (SQLBI), Chris Woolderink (Tabular) and Bill Anton (Opifex Solutions) for their contributions to the whitepaper.
Source: Azure

Announcing the preview of App Service domain

For a production web app, you probably want users to see a custom domain name. Today we are announcing the preview of App Service domain. App Service domain (preview) gives you a first-class experience in the Azure portal to create and manage domains that will be hosted on Azure DNS for your Azure services such as Web Apps, Traffic Manager, Virtual Machines, and more.


Simplified domain management

App Service domain (preview) simplifies the life cycle of creating and managing a domain for Azure services by leveraging Azure DNS. Azure DNS provides reliable, performant, and secure hosting for your domains. App Service domain is currently limited to the following TLDs: com, net, co.uk, org, nl, in, biz, org.uk, and co.in. To get started with creating a domain, please see How to buy a domain for App Service.

Here are some benefits to using App Service domains:

Subdomain management and assignment

Auto-renew capabilities

Free cancellation within the first five days

Better security, performance, and reliability using Azure DNS

'Privacy Protection' included for free, except for TLDs whose registry does not support privacy, such as .co.in and .co.uk

Check out the following resources to manage your domain:

Configure Domains for Azure services

Manage DNS zones  

Manage DNS records 

Submit your ideas/feedback in UserVoice. Please add [Domain] at the beginning of the title.

Source: Azure

More resource policy aliases

Aliases in resource policies enable you to restrict what values or conditions are permitted for a property on a resource. If you are already familiar with policy aliases, you know they are a crucial part of managing your Azure environment.

We want to keep adding new policy aliases, so you can more easily govern what gets deployed in your environment. In this blog, I would like to share the most recent aliases we have enabled.

First, let’s review how aliases are integrated into user requests. Each policy alias maps to paths in different API versions for a given resource type. During policy evaluation, when the policy engine retrieves the value of a particular field, it looks at the API version of the request and gets the path for that version. The diagram below shows how a policy alias works during policy evaluation.
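To make this concrete, a policy definition references an alias in its "field" condition. The skeleton below is a generic sketch — the alias and value are placeholders — of the shape the examples in this post follow:

```json
{
  "if": {
    "field": "<alias, e.g. Microsoft.Compute/imageId>",
    "contains": "<value or pattern to match>"
  },
  "then": {
    "effect": "deny"
  }
}
```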

Custom Image for virtual machines

For security reasons, lots of customers want to make sure only custom images from the central IT team are deployed in their environment. The IT team approves a set of managed images, and puts them in a resource group. To ensure VMs are created from these images, you implement a resource policy. For implementation, you can either specify the resource group which contains the images or explicitly specify the images.

We added the Microsoft.Compute/imageId alias to enable this scenario. You can use it for virtual machines or virtual machine scale sets by modifying the type condition.

The examples below show what the policies look like.

Example 1: use images from a certain resource group

{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      {
        "not": {
          "field": "Microsoft.Compute/imageId",
          "contains": "resourceGroups/testImage"
        }
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}

Example 2: use specific images

{
  "field": "Microsoft.Compute/imageId",
  "in": [ "<imageId1>", "<imageId2>" ]
}

Platform Images

We introduced a set of aliases that can be used across resource types. These cross resource type aliases enable you to restrict platform images for virtual machines, virtual machine scale sets, and managed disks. For example, the alias Microsoft.Compute/imagePublisher doesn’t have a resource type name, and can work across different resource types. The linked example shows how to use these aliases.
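As a hedged sketch of how such a cross-resource-type alias might be used (the publisher list here is hypothetical, not a recommendation), a policy could deny any image whose publisher is not on an approved list:

```json
{
  "if": {
    "not": {
      "field": "Microsoft.Compute/imagePublisher",
      "in": [ "MicrosoftWindowsServer", "Canonical" ]
    }
  },
  "then": {
    "effect": "deny"
  }
}
```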

Use Managed Disk

With the release of managed disks, many customers want to require that only managed disks are deployed for VMs. With resource policy, you can now restrict your VMs and scale sets to use only managed disks. The policy requires that fields related to managed disks are present in the user request. Those fields are shown in the linked example. By looking for these fields, you can determine whether managed disks are used with the VM or scale set.
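One common formulation — shown here as a sketch rather than the exact sample from the linked example — denies VMs whose OS disk is specified as a blob URI (the unmanaged-disk case), since managed-disk VMs omit that field:

```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      {
        "field": "Microsoft.Compute/virtualMachines/osDisk.uri",
        "exists": true
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```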

VM Extension Types

Organizations may want to forbid usage of certain types of extensions. For example, a VM extension may not be compatible with certain custom VM images. Or, for security reasons, you may not want users to reset the password on a VM. The example below shows how to block a specific VM extension. It uses publisher and type to determine which extension to block.

{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines/extensions"
      },
      {
        "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
        "equals": "Microsoft.Compute"
      },
      {
        "field": "Microsoft.Compute/virtualMachines/extensions/type",
        "equals": "VMAccessAgent"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}

Azure Hybrid Use Benefit

When you have a valid on-premises license, you can save on the license fees for your virtual machines. When you don’t have such a license, you should forbid the option. The following policy forbids usage of the Azure Hybrid Use Benefit (AHUB).

{
  "if": {
    "allOf": [
      {
        "field": "type",
        "in": [ "Microsoft.Compute/virtualMachines", "Microsoft.Compute/virtualMachineScaleSets" ]
      },
      {
        "field": "Microsoft.Compute/licenseType",
        "exists": true
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}

Summary

To do a quick recap, this blog explains how policy aliases work and what you can govern through resource policies. Try them out and let us know what new things you want to govern!
Source: Azure

HDInsight tools for IntelliJ & Eclipse June updates

The June release of the HDInsight tools is now available! In this release, you can choose SBT as a build tool, in addition to Maven, when creating Spark projects in IntelliJ. With improvements to the Spark job view and job graph in IntelliJ and Eclipse, more job info and statistics are now provided. You can also easily view job logs, including driver stderr, stdout, and directory info, in the Spark job view.

Summary of key updates

Improved Spark job view and job graph in IntelliJ and Eclipse

In Azure Explorer (View > Tools window > Azure Explorer in IntelliJ), go to HDInsight node, select the Spark cluster, and then click Jobs as shown below.

The left pane of Spark job view shows all the Spark applications that ran in the cluster. Select one Spark job to view more details.

Hovering over the job graph shows the job run information. Clicking on the job graph dives into the stage graph and shows the statistics of the job. You can also open the Spark history UI or the Yarn UI by clicking the respective link at the top of the Spark job view.

Improved Spark log view in IntelliJ and Eclipse 

In the same Spark job view pane, click the Log tab to view the frequently used logs including driver stderr, driver stdout, and directory info, as shown below.

SBT build tool support when creating Spark project in IntelliJ

As shown below, you can now choose SBT as a build tool in addition to Maven when creating a new Spark project in IntelliJ.

After you complete the project-creation wizard, a new build.sbt file is generated, which contains the build description for the project. You can then author, submit, or debug Spark jobs following the Spark job submission/debugging instructions.
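For reference, the generated build.sbt looks roughly like the sketch below. The project name, Scala version, and Spark version are illustrative placeholders; the actual values depend on your wizard choices and cluster version:

```scala
// build.sbt — build description for the Spark project (illustrative values).
name := "MySparkProject"
version := "1.0"
scalaVersion := "2.11.8"

// Spark dependencies are marked "provided" because the cluster supplies Spark at runtime.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "2.1.0" % "provided"
)
```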

How to install/update

HDInsight Eclipse plugin: Eclipse will prompt you for latest update if you have the plugin installed before, or you can get the latest bits by going to the Eclipse repository and searching “Azure Toolkit for Java”.

HDInsight IntelliJ plugin: IntelliJ will prompt you for latest update if you have the plugin installed before, or you can get the latest bits by going to the IntelliJ repository and searching “Azure Toolkit for IntelliJ”.

For more information, check out the following:

HDInsight Visual Studio plugin (Demo video)
HDInsight Eclipse plugin (Demo video)
HDInsight IntelliJ plugin (Demo video)

Learn more about today’s announcements on the Azure blog and Big Data blog, and discover more Azure service updates.

Feedback

We look forward to your comments and feedback. If you have a feature request, customer ask, or suggestion, please send us a note at hdivstool@microsoft.com. For bug submission, please open a new ticket using the template.
Source: Azure

Go serverless with R Scripts on Azure Function

Serverless is all the rage, and now you can get in on the action using R! Azure Functions supports a variety of languages (C#, F#, JavaScript, Batch, PowerShell, Python, PHP, and the list is growing). However, R is not natively supported. In the following blog we describe how you can run R scripts on Azure Functions using the R site extension.

Azure Functions can be used in several scenarios because of the broad choice of triggers offered:

Timer trigger: executes a function on a schedule.
HTTP trigger: executes a function in response to an HTTP call.
Azure Queue Storage, Service Bus, and Blob Storage triggers: execute a function when a new object or message is received.
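For example, a timer trigger is declared in the function's function.json file. The sketch below is illustrative (the binding name and schedule are placeholders); it runs the function every day at 6 AM using a CRON expression:

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "myTimer",
      "schedule": "0 0 6 * * *"
    }
  ],
  "disabled": false
}
```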

Why would you want to run R scripts on Azure Function?

A typical use case would be replacing R jobs that are currently scheduled with cron, for example. Using Azure Functions you can set up a timer trigger that runs your R script on a periodic basis. You get a fully managed solution where you can be alerted on errors, access the logs, and edit the scripts directly from the browser. If you choose the consumption plan, it is very cost-effective: you pay only per use and for the underlying storage. (There is a free grant for the first million calls on the consumption plan.)

The following tutorial will walk you through the steps to create a twitter bot posting a ggplot of the temperature forecast for the next 5 days using only R and Azure Functions:

Running R scripts on Azure Function Tutorial

Go ahead and try it now, it is simpler than you think! Give us some feedback and let us know what you are using it for.
Source: Azure

Bing Maps v8 available in Azure IoT Suite Remote Monitoring preconfigured solution

Azure IoT Suite is designed to get customers and partners started quickly, connecting and managing their devices in a simple and reliable manner and realizing business value from an IoT solution. Bing Maps v7 has been integrated into the Azure IoT Suite Remote Monitoring preconfigured solution to visualize device locations and status in the preconfigured solution dashboard. We are pleased to announce that an advanced version, Bing Maps v8, has been in effect since July 1st, 2017 for the Azure IoT Suite Remote Monitoring preconfigured solution. It provides customers and partners:

High performance: Data renders 10x faster.
Extended culture support for 95 new languages: uses the Bing Maps REST services to perform geocode and route requests in up to 117 languages.
New features based on developer feedback

To leverage the new advanced map functionality, you can do one of the following:

Deploy a new Remote Monitoring Solution: All solutions deployed from http://azureiotsuite.com will leverage the new V8 control.
Update to the latest code: get all new features from the master branch, such as Device Management and Bing Maps v8.
Update only the map control: Get only the changes for the map control by referencing the single commit.

In all scenarios, the existing Bing Maps API key under your account will remain valid; no specific action is expected from customers due to the migration.

Learn how to re-deploy over an existing Remote Monitoring preconfigured solution.

Source: Azure