Azure Data Lake Tools for Visual Studio Code (VSCode) July updates

We are pleased to announce the July updates of Azure Data Lake Tools for VSCode. This is a quality milestone: we added local debug capability for C# code-behind for Windows users, refined the Azure Data Lake (ADLA & ADLS) integration experiences, and focused on refactoring components and fixing bugs. Azure Data Lake Tools for VSCode is an extension for developing U-SQL projects against Microsoft Azure Data Lake. The extension provides a cross-platform, lightweight, keyboard-focused authoring experience for U-SQL while maintaining a rich set of development functions.

Summary of key updates

Local Run for Windows Users

This update allows you to perform a local run to test against your local data and to execute your script locally before publishing your production-ready code to ADLA. Use the command ADL: Start Local Run Service to start the local run service; the CMD console shows up. First-time users should enter 3 and set up their data root. Use the command ADL: Submit Job to submit your job to your local account. After job submission, you can view the submission details by clicking jobUrl in the output window, or view the job submission status in the CMD console.

Local Debug for Windows Users

Local Debug enables you to debug your C# code-behind, step through the code, and validate your script locally before submitting it to ADLA. Use the command ADL: Start Local Run Service to start the local run service and set a breakpoint in your code-behind, then use the command ADL: Local Debug to start the local debug service. You can debug through the debug console and view parameter, variable, and call stack information.

Register assemblies through configuration

Registering assemblies through configuration gives you more flexibility to register your dependencies and upload your resources. Use the command ADL: Register Assembly through Configuration to register your assembly, register the assembly dependencies, and upload resources through a simple configuration.

Upload files through configuration

Uploading files through configuration boosts your productivity by letting you upload multiple files at the same time. Use the command ADL: Upload File through Configuration to upload multiple files through a simple configuration.

How do I get started?

First, install Visual Studio Code and download the prerequisite files, including JRE 1.8.x, Mono 4.2.x (Linux and Mac), and .NET Core (Linux and Mac). Then get the latest ADL Tools by going to the VSCode Extension repository or the VSCode Marketplace and searching for "Azure Data Lake Tools for VSCode".

For more information about Azure Data Lake Tools for VSCode, please see:

Get more information on using Data Lake Tools for VSCode.
Watch the ADL Tools for VSCode user instructions video.
Learn more about how to get started on Data Lake Analytics.
Learn how to develop U-SQL assemblies for Azure Data Lake Analytics jobs.

If you encounter any issues, please submit them to https://github.com/Microsoft/AzureDatalakeToolsForVSCode/issues. Want to make this extension even more awesome? Share your feedback.
Source: Azure

Azure Stream Analytics now available in UK West, Canada Central and East

As part of our ongoing commitment to enabling higher performance and supporting customer requirements around data location, we're pleased to announce that Azure Stream Analytics is now available in three additional regions: UK West, Canada Central, and Canada East.

With this announcement, Stream Analytics is now available in 26 Azure regions worldwide. For more information about local pricing, please visit the Azure Stream Analytics pricing webpage.

Azure Stream Analytics is a serverless, scale-out job service built to help customers easily develop and run massively parallel real-time analytics across multiple streams of data using a simple SQL-like language. For example, a recent case study demonstrates how SkyAlert leveraged Azure Stream Analytics, in conjunction with other Azure services, to build an early-warning system that alerts citizens about an impending earthquake up to two minutes before it is felt, potentially saving many lives in a natural disaster.

New to Azure Stream Analytics? Learn to build your first Stream Analytics application by following this step-by-step guide.
Source: Azure

Introducing the new Dv3 and Ev3 VM sizes

We are excited to announce the general availability of our new Dv3 VM sizes. We are also renaming the high-memory D sizes (D11-D14) as the Ev3 family. These new sizes introduce Hyper-Threading Technology running on the Intel® Broadwell E5-2673 v4 2.3 GHz processor and the Intel® Haswell E5-2673 v3 2.4 GHz processor. The shift from physical cores to virtual CPUs (vCPUs) is a key architectural change that enables us to unlock the full potential of the latest processors to support even larger VM sizes. By unlocking more power from the underlying hardware, we are able to harness better performance and efficiency, resulting in cost savings that we are passing on to our customers. These new Hyper-Threaded sizes will be priced up to 28% lower than the previous Dv2 sizes.

The Dv3 and Ev3 sizes are also some of the first VMs to run on Windows Server 2016 hosts. Windows Server 2016 hosts enable nested virtualization and Hyper-V containers for these new VM sizes. Nested virtualization allows you to run a Hyper-V server on an Azure virtual machine. With nested virtualization you can run a Hyper-V container in a virtualized container host, set up a Hyper-V lab in a virtualized environment, or test multi-machine scenarios. You can find more information on nested virtualization on Azure.

Our new Dv3 VM sizes offer a good balance of memory to vCPU performance, with up to 64 vCPUs and 256 GiB of RAM. Our newly named Ev3 sizes provide a higher memory-to-vCPU ratio than the Dv3, so you can run larger workloads on sizes up to our largest E64, with 64 vCPUs and 432 GiB of RAM.

The current Dv2, DSv2, F, Fs, and Av2 sizes, with the exception of the D15v2 and DS15v2, will also be available on our new Intel® Broadwell processors. The D15v2 and DS15v2 sizes remain dedicated to our Intel® Haswell processors.
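If you want to explore the new sizes from the Azure CLI, the commands below list the sizes offered in a region and resize an existing VM to one of them; the region, resource group, and VM names are placeholders.

# List the VM sizes (including Dv3/Ev3 where available) offered in a region
az vm list-sizes --location westus2 --output table

# Resize an existing VM to one of the new sizes
az vm resize --resource-group MyResourceGroup --name MyVM --size Standard_D4_v3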

 

| Size | vCPUs | Memory: GiB | Local SSD: GiB | Max data disks | Max local disk throughput: IOPS / Read MBps / Write MBps | Max NICs / Network bandwidth |
| --- | --- | --- | --- | --- | --- | --- |
| Standard_D2_v3 | 2 | 8 | 50 | 4 | 3000 / 46 / 23 | 2 / moderate |
| Standard_D4_v3 | 4 | 16 | 100 | 8 | 6000 / 93 / 46 | 2 / moderate |
| Standard_D8_v3 | 8 | 32 | 200 | 16 | 12000 / 187 / 93 | 4 / high |
| Standard_D16_v3 | 16 | 64 | 400 | 32 | 24000 / 375 / 187 | 8 / high |
| Standard_E2_v3 | 2 | 16 | 50 | 4 | 3000 / 46 / 23 | 2 / moderate |
| Standard_E4_v3 | 4 | 32 | 100 | 8 | 6000 / 93 / 46 | 2 / moderate |
| Standard_E8_v3 | 8 | 64 | 200 | 16 | 12000 / 187 / 93 | 4 / high |
| Standard_E16_v3 | 16 | 128 | 400 | 32 | 24000 / 375 / 187 | 8 / high |
| Standard_E32_v3 | 32 | 256 | 800 | 32 | 48000 / 750 / 375 | 8 / extremely high |
| Standard_E64_v3 | 64 | 432 | 1600 | 32 | 96000 / 1000 / 500 | 8 / extremely high |

 

| Size | vCPUs | Memory: GiB | Local SSD: GiB | Max data disks | Max cached and local disk throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs / Expected network performance (Mbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_D2s_v3 | 2 | 8 | 16 | 4 | 4,000 / 32 (50) | 3,200 / 48 | 2 / moderate |
| Standard_D4s_v3 | 4 | 16 | 32 | 8 | 8,000 / 64 (100) | 6,400 / 96 | 2 / moderate |
| Standard_D8s_v3 | 8 | 32 | 64 | 16 | 16,000 / 128 (200) | 12,800 / 192 | 4 / high |
| Standard_D16s_v3 | 16 | 64 | 128 | 32 | 32,000 / 256 (400) | 25,600 / 384 | 8 / high |
| Standard_E2s_v3 | 2 | 16 | 32 | 4 | 4,000 / 32 (50) | 3,200 / 48 | 2 / moderate |
| Standard_E4s_v3 | 4 | 32 | 64 | 8 | 8,000 / 64 (100) | 6,400 / 96 | 2 / moderate |
| Standard_E8s_v3 | 8 | 64 | 128 | 16 | 16,000 / 128 (200) | 12,800 / 192 | 4 / high |
| Standard_E16s_v3 | 16 | 128 | 256 | 32 | 32,000 / 256 (400) | 25,600 / 384 | 8 / high |
| Standard_E32s_v3 | 32 | 256 | 512 | 32 | 64,000 / 512 (800) | 51,200 / 768 | 8 / extremely high |
| Standard_E64s_v3 | 64 | 432 | 864 | 32 | 128,000 / 1024 (1600) | 80,000 / 1200 | 8 / extremely high |

Geographic Availability

US: West US 2, East US 2
Europe: West Europe
Asia Pacific: Southeast Asia

We will be rapidly adding other regions and will provide more updates as they become available.

Update on Dv2 Promo

With the launch of Dv3 and Ev3, we will be winding down our Dv2 Promo offer in the regions noted above where Dv3 and Ev3 are available. During the transition in these regions, customers will continue to be able to deploy new instances of Dv2_promo VMs until 8/15/2017. In regions where Dv3 and Ev3 are not yet available, Dv2 Promo VMs will continue to be available for at least one month after the availability of Ev3 and Dv3 in that region.
 
All deployed Dv2_promo VMs will benefit from their promotional pricing until 6/30/2018 at which point prices will revert to match Dv2 pricing.
Source: Azure

Nested Virtualization in Azure

We announced nested virtualization support coming to Azure with the Dv3 and Ev3 series at a //build session last month.

Today we are excited to announce that you can now enable nested virtualization using the Dv3 and Ev3 VM sizes. We will continue to expand support to more VM sizes in the coming months.
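As a minimal sketch of getting a suitable host, the Azure CLI command below creates a Windows Server 2016 VM on one of the Dv3 sizes that support nested virtualization; the resource group, VM name, and credentials are placeholders. The Hyper-V role is then enabled inside the guest as described in the document referenced below.

# Create a Windows Server 2016 VM on a Dv3 size, to use as a nested virtualization host
az vm create --resource-group MyResourceGroup --name MyNestedHost --image Win2016Datacenter --size Standard_D8s_v3 --admin-username azureuser --admin-password '<your-password>'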

For software and hardware prerequisites, configuration steps, and limitations for nested virtualization, please see the document here. In this blog we will discuss a couple of interesting use cases and provide a short video demo of enabling a nested VM.

Now you can not only create a Hyper-V container with Docker (see instructions here), but also, by running nested virtualization, create a VM inside a VM. Such a nested environment provides great flexibility in supporting your needs in areas such as development, testing, customer training, and demos. For example, suppose you have a testing team using Hyper-V hosts on-premises today. They can now easily move their workloads to Azure by using nested VMs as virtualized test machines. The nested VM hosts replace the physical Hyper-V hosts, and each testing engineer has full control over the Hyper-V functionality on their own assigned VM host in Azure.

Let's look at another example. Suppose you want to run your development code, tests, or applications on a machine with multiple users on it without impacting them. You can use nested virtualization to spin up independent environments on demand. Within nested VMs, even if you are running a chaos environment, your users will not be impacted.

Ready to try it? Please see the video below, in which my engineer Charles Ding sets up a nested VM (here is the link to the PowerShell script he created and used in the video).

We hope you enjoy using nested virtualization in Azure!
Source: Azure

Running Azure Batch jobs using the Azure CLI – no code required

When we introduced Azure Batch, the target audience was developers producing SaaS or client solutions that needed to run applications or algorithms at scale. Developers use the Batch APIs to integrate with Batch and utilize it as a component within their solutions.

Since launch, we have been adding further capabilities that make it easier to use Batch without code, thereby expanding the audience that can take advantage of Batch. We are pleased to announce the recent addition of new Azure CLI capabilities that make it possible to define and run jobs end-to-end. Users can directly, or via scripting, create pools, upload data, run jobs at scale, and download output data – all using the Azure CLI, no code required.

Batch templates

Batch templates build on the existing Batch support in the Azure CLI, which allows JSON files to specify property names and values for the creation of pools, jobs, tasks, and other items. Compared to plain JSON files, Batch templates add the following capabilities:

Parameters can be defined. When the template is used, only the parameter values are specified to create the item, with other item property values being specified in the template body. A user who understands Batch and the applications to be run by Batch can create the templates, specifying pool, job, and task property values. A user less familiar with Batch and/or the applications simply needs to specify the values for the defined parameters.
Job task factories create the one or more tasks associated with a job, avoiding the need for many task definitions to be created and drastically simplifying job submission.

Upload and download of input and output files

Input data files need to be supplied for jobs and output data files are often produced. A storage account is associated, by default, with each Batch account; using the Azure CLI, files can now be easily transferred between a client and this storage account, with no coding required. Additionally, files are referenced by pool and job templates for transfer to and from pool nodes.

Example – transcoding video files using ffmpeg

ffmpeg is a popular application that processes audio and video files. The Azure Batch CLI can be used to invoke ffmpeg to transcode multiple video files in parallel, converting source video files to different resolutions.

Create a pool template

A pool of VM nodes is required; the ffmpeg application will be installed on these nodes, and the individual transcodes will run on them. Someone with knowledge of Batch and ffmpeg defines a pool template. The template is written so that when it is used to create the pool, only a pool id and the number of nodes need to be specified.

The template defines:

Two parameters whose values need to be supplied when the template is used to create a pool: the pool id and the number of pool nodes.
The template body, which specifies the OS, VM size, the ffmpeg package, and other pool properties.

An example pool template would be:

{
      "parameters": {
          "nodeCount": {
              "type": "int",
              "metadata": { "description": "The number of pool nodes" }
          },
          "poolId": {
              "type": "string",
              "metadata": { "description": "The pool id " }
          }
      },
      "pool": {
          "type": "Microsoft.Batch/batchAccounts/pools",
          "apiVersion": "2016-12-01",
          "properties": {
              "id": "[parameters('poolId')]",
              "virtualMachineConfiguration": {
                  "imageReference": {
                      "publisher": "Canonical",
                      "offer": "UbuntuServer",
                      "sku": "16.04.0-LTS",
                      "version": "latest"
                  },
                  "nodeAgentSKUId": "batch.node.ubuntu 16.04"
              },
              "vmSize": "STANDARD_D3_V2",
              "targetDedicatedNodes": "[parameters('nodeCount')]",
              "enableAutoScale": false,
              "maxTasksPerNode": 1,
              "packageReferences": [
                  {
                      "type": "aptPackage",
                      "id": "ffmpeg"
                  }
              ]
} } }

Create a job template

To transcode the video files, a job will be created with one task per video file. Each task needs to invoke the ffmpeg application with parameters specifying the source video file that will be copied onto the node, the target resolution, the output file name and location, as well as other task properties.

Someone with knowledge of Batch and ffmpeg defines a job template. This template has been written so that when it is used only the pool id and job id need to be specified. For simplicity, it is assumed that source files will be uploaded to a fixed location, the output files will be written to a fixed location, and the output resolution is set to a specific value.

{
      "parameters": {
          "poolId": {
              "type": "string",
              "metadata": {
                  "description": "The pool id which runs the job"
              }
          },
          "jobId": {
              "type": "string",
              "metadata": {
                  "description": "The job id"
              }
          },
          "resolution": {
              "type": "string",
              "defaultValue": "428×240",
              "allowedValues": [
                  "428×240",
                  "854×480"
              ],
              "metadata": {
                  "description": "Target video resolution"
              }
          }
      },
      "job": {
          "type": "Microsoft.Batch/batchAccounts/jobs",
          "apiVersion": "2016-12-01",
          "properties": {
              "id": "[parameters('jobId')]",
              "constraints": {
                  "maxWallClockTime": "PT5H",
                  "maxTaskRetryCount": 1
              },
              "poolInfo": {
                  "poolId": "[parameters('poolId')]"
              },
              "taskFactory": {
                  "type": "taskPerFile",
                  "source": {
                      "fileGroup": "ffmpeg-input"
                  },
                  "repeatTask": {
                      "commandLine": "ffmpeg -i {fileName} -y -s [parameters('resolution')] -strict -2 {fileNameWithoutExtension}_[parameters('resolution')].mp4",
                      "resourceFiles": [
                          {
                              "blobSource": "{url}",
                              "filePath": "{fileName}"
                          }
                      ],
                      "outputFiles": [
                          {
                              "filePattern": "{fileNameWithoutExtension}_[parameters('resolution')].mp4",
                              "destination": {
                                  "autoStorage": {
                                      "path": "{fileNameWithoutExtension}_[parameters('resolution')].mp4",
                                      "fileGroup": "ffmpeg-output"
                                  }
                              },
                            "uploadOptions": {
                                 "uploadCondition": "TaskSuccess"
                             }
                          }
                      ]
                  }
              },
              "onAllTasksComplete": "terminatejob"
} } }

 

Create a pool using the pool template

A user with files to transcode can first create a pool containing the nodes which will perform the transcodes. If scripted, the parameter values can be passed in the command line; if invoked directly the user will be prompted for the parameter values.

C:\BatchCliTemplates>az batch pool create --template pool-ffmpeg.json
You are using an experimental feature {Pool Template}.
nodeCount (The number of pool nodes): 20
poolId (The pool id): MyFfmpegPool
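If you script the same step instead of answering the prompts, the sketch below assumes the Batch CLI template support accepts a JSON file of parameter values via a --parameters option; the flag name and file are assumptions, so check az batch pool create --help for the exact syntax in your CLI version.

# Hypothetical scripted invocation supplying nodeCount and poolId from a JSON file
az batch pool create --template pool-ffmpeg.json --parameters pool-parameters.json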

As a user of the template, I haven't had to understand Azure VM sizes, pool properties, or how to install ffmpeg.

Upload source files to transcode

I need to upload the files to be transcoded to Azure. I was supplied the name of the file group to use, which equates to a container created on the Azure Storage account associated with the Batch account.

az batch file upload --local-path c:\source_videos\*.mp4 --file-group ffmpeg-input

Run a job to transcode the source files using the job template

A job needs to be created that will have one task per input file that was uploaded. If scripted, the parameter values can be passed in the command line; if invoked directly the user will be prompted for the parameter values.

az batch job create --template job-ffmpeg.json

As a user of the template, I haven’t had to understand how to invoke ffmpeg, specifying the appropriate parameters to perform transcoding, plus I haven’t had to specify the Batch properties for jobs and tasks.

Download the transcoded files

If the transcoded output files are required on the client then they can easily be downloaded.

az batch file download --file-group ffmpeg-output --local-path c:\output_lowres_videos

Summary

This example has shown how a user can create a Batch pool and job template to perform video transcoding using ffmpeg. The template author has not needed to use the Batch APIs; they only needed knowledge of the ffmpeg application and Azure Batch. To use the templates, an end user simply uses the Azure CLI to supply the pool and job template parameter values, upload the files to transcode, and download the output files.

More information

More detailed information is available:

Azure Batch CLI documentation
Detailed documentation, samples, and source code in the Azure GitHub repository for the Batch extension.

Source: Azure

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Deploying DC/OS on VMware vSphere

This post is part of the “Mesosphere DC/OS, Azure, Docker, VMware & Everything Between” multiple blog post series. In the previous posts in this series, I looked at the following topics:

Mesosphere DCOS, Azure, Docker, VMware and everything between – Architecture and CI/CD Flow

Mesosphere DCOS, Azure, Docker, VMware and everything between – Security & Docker Engine Installation

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – SSH Authorized Keys

Now that we have the Docker engine up and running and all of our network & security related configurations in place, it’s time to get the DC/OS cluster rolling on top of VMware vSphere. This is the first major milestone in our entire platform setup. Let’s get moving…

Since this is not a “DC/OS Deep Dive” series, I will not go into much detail on DC/OS components, but I will provide relevant info on why things are the way they are.

Before diving into the installation steps, I highly recommend going over the DC/OS Node Types and Network KBs.

For our vSphere DCOS cluster deployment, I will not deploy public agent nodes. To understand why, we need to go back and review the CI/CD flow.

As you remember, in our flow the "production containers" stop at the DC/OS cluster deployed on vSphere. The reason for not deploying public nodes (think of them as your DMZ-deployed hosts) is the customer requirement to have the production containers available only from the corporate LAN or via VPN. Later on, the plan is to provide internet access via the corporate load balancer, but to keep things nice and simple, we will deploy only the private agents.

The agent nodes are responsible for hosting your Docker containers, and for this deployment we will have three of them.

 

Read more about all the details around DC/OS 1.9 deployment on top of VMware vSphere on my blog.
Source: Azure

Service Fabric Community Q&A 14th Edition

We will host our 14th monthly community Q&A call Thursday, July 20th at 10 AM Pacific Time.

The Service Fabric Community Q&A is hosted on the third Thursday of every month by the Azure Service Fabric engineering team. The session provides an opportunity for you to ask any questions you have about Service Fabric.

This is an open event. As always, there is no need to RSVP. Just navigate to http://aka.ms/sfcommunityqa at 10 AM Thursday and you are in!

ICS calendar link: service-fabric-community-qa-14

Talk to you then!

The Service Fabric Team
Source: Azure

Azure Container Registry adds individual identity, webhooks, and delete capabilities

The Azure Container Registry team is excited to share a preview of new Container Registry SKUs with more capabilities and features.

The new preview of the Azure Container Registry managed tier is available in three options: Basic, Standard, and Premium.

These new container registries are available in preview in three regions: East US, West Europe, and West Central US. Support for other regions will roll out over the following weeks based on feedback.

The new features available include:

AAD authentication for repositories
Delete operations
Webhook support

Managed SKUs

The images in these SKUs will be stored in storage accounts managed by the ACR service, rather than relying on a storage account specified by the user. This change improves reliability and enables new features that weren't possible in the original offering. It will still be possible to create and use registries that rely on your own storage account if you wish to continue using that model.

Individual AAD authentication

Previously, there were only two ways to authenticate access to a registry: by enabling the single admin account or by setting up a service principal. Now, with managed registries, you can sign in to a registry using your AAD credentials. This automatically works for any managed registry you create. When you sign in via the portal or the az CLI, you will have access to all the registries under your AAD account. Moreover, if you sign in with your AAD credentials, you won't have to pass in credentials again when pushing images to your repositories.

To authenticate with AAD using the command line, sign in to your Azure account using "az login" in the Azure CLI 2.0. Once logged in, use the command "az acr login" and it will automatically pick up the credentials you used to sign in to your Azure account.
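For example, the sign-in flow looks like the following minimal sketch; the registry and image names are placeholders.

# Sign in to Azure, then to the registry with the same AAD credentials
az login
az acr login --name myregistry

# Pushes to the registry now work without passing credentials explicitly
docker push myregistry.azurecr.io/samples/myimage:latest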

Delete functionality

The new registry SKUs deliver one of our most requested features: repository delete. You can delete specific repositories, images, or tags from your registry without having to delete the entire registry. The feature is available from a context menu in the Azure portal, or via the az acr repository delete command.
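A minimal sketch of the CLI path is shown below; the registry, repository, and tag names are placeholders, and the exact flag names may differ across CLI versions, so check az acr repository delete --help.

# Delete a single tag from a repository (names are placeholders)
az acr repository delete --name myregistry --repository samples/myimage --tag v1

# Delete an entire repository, including all of its tags
az acr repository delete --name myregistry --repository samples/myimage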

Webhook support

To enable cloud connected workflows, we’ve added webhook notifications for registries. You can configure these webhooks to get triggered by push and/or delete actions. Additionally, you can modify the scope of the webhook so that it gets triggered by these actions occurring for any repositories in the registry, a specific repository, or a specific tag.

You can also easily test a webhook by pinging it and then viewing the events showing when it was triggered. For a specific event, you can see the request and response to help with further testing or diagnosing issues. Webhook support is available both in the portal and via the az acr webhook CLI commands.
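A sketch of that flow from the CLI might look like the following; the names and endpoint URI are placeholders, so check az acr webhook --help for the exact syntax in your CLI version.

# Create a webhook that fires on push and delete actions (names and URI are placeholders)
az acr webhook create --registry myregistry --name mywebhook --uri https://example.com/acr-hook --actions push delete

# Ping the webhook and review recent deliveries while testing
az acr webhook ping --registry myregistry --name mywebhook
az acr webhook list-events --registry myregistry --name mywebhook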

Pricing

Please see the ACR pricing page for details. During preview these SKUs will have a 50% discount.

Summary

Individual AAD support, delete, and webhook functionality were designed with the goal of enabling you to do even more with your container registries. We hope you enjoy the new features available for the Azure Container Registry. You can try them out today by creating a new registry in East US, West Europe, or West Central US and selecting Managed Registries as your SKU.

If you have any feedback or questions, please leave a comment below or reach out through our GitHub or StackOverflow.

– The Azure Container Registry Team
Source: Azure

What’s brewing in Visual Studio Team Services: July 2017 Digest

This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure.

In this month’s update, we begin with an industry first by scaling Git beyond what anyone else thought possible. After discussing more improvements in Git, we’ve got a brand new built-in wiki now in public preview. We also have improvements in build, release, package management, and work item tracking. There’s a lot of new stuff, so let’s dive in.

World’s largest Git repo is on VSTS: 3.5M files and 300 GB!

As part of a big effort at Microsoft to modernize engineering systems, which we call One Engineering System (1ES), we set an audacious goal of moving the Windows code base into Git a few years ago. We tried a couple of different approaches, including submodules, before deciding that the best approach was to scale Git by virtualizing the repo. This spring we accomplished our goal when the Windows team moved the entire Windows code base into a single Git repo hosted on VS Team Services. With nearly 4,000 engineers working in a single Git repo called “OS,” Windows is in a single version control repository for the first time in decades. To achieve this, we created Git Virtual File System (GVFS) which we’ve also released as an open source project so that anyone can use it with VSTS.

The scale that Windows development operates at is really amazing. Let’s look at some numbers.

There are over 250,000 reachable Git commits in the history for this repo from the past 4 months.
8,421 pushes per day (on average)
2,500 pull requests, with 6,600 reviewers per work day (on average)
4,352 active topic branches
1,760 official builds per day

We’ve already significantly improved performance from the first release of GVFS. Along the way, we’ve also made performance and scale improvements in Git, and we are contributing those to the Git project. Any account on VSTS can use GVFS, so feel free to try it out.
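If you want to experiment, a hedged sketch of cloning a VSTS repo through GVFS on a Windows machine with the GVFS client installed is shown below; the account and repository names are placeholders, and the exact syntax may differ between GVFS releases.

# Clone a VSTS Git repo through the Git Virtual File System (names are placeholders)
gvfs clone https://myaccount.visualstudio.com/_git/MyLargeRepo

# GVFS places the working tree under a src folder; normal Git commands work from there
cd MyLargeRepo\src
git status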

Since we’re talking about Git, let’s take a look at the improvements we’ve made in the experience.

Collapsible pull request comments

Reviewing code is a critical part of the pull request experience, so we’ve added new features to make it easier for reviewers to focus on the code. Code reviewers can easily hide comments to get them out of the way when reviewing new code for the first time.

Hiding comments hides them from the tree view and collapses the comment threads in the file view:

When comments are collapsed, they can be expanded easily by clicking the icon in the margin, and then collapsed again with another click. Tooltips make it easy to peek at a comment without seeing the entire thread.

Improved workflow when approving pull requests with suggestions

Using the auto-complete option with pull requests is a great way to improve your productivity, but it shouldn’t cut short any active discussions with code reviewers. To better facilitate those discussions, the Approve with suggestions vote will now prompt when a pull request is set to complete automatically. The user will have the option to cancel the auto-complete so that their feedback can be read, or keep the auto-complete set and allow the pull request to be completed automatically when all policies are fulfilled.

Filter tree view in Code

Now you don’t need to scroll through all the files that a commit may have modified to just get to your files. The tree view on commit details, pull requests, shelveset details, and changeset details page now supports file and folder filtering. This is a smart filter that shows child files of a folder when you filter by folder name and shows a collapsed tree view of a file to show the file hierarchy when you filter by file name.

Find a file or folder filter on commit tree:

Git tags

Our Git tags experience in the VSTS web UI continues to evolve quickly. In addition to improvements to viewing, you can also delete, filter, and set security on tags.

View tags

You can view all the tags on your repository on the Tags page. If you manage all your tags as releases, then a user can visit the tags page to get a bird’s-eye view of all the product releases.

You can easily differentiate between a lightweight and an annotated tag here, as annotated tags show the tagger and the creation date alongside the associated commit, while lightweight tags only show the commit information.
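As a reminder of the distinction, the standard Git commands below create each kind of tag and push them so they appear on the Tags page; the tag names are just examples.

# Lightweight tag: a simple pointer to a commit
git tag v1.0-rc1

# Annotated tag: stores the tagger, date, and a message
git tag -a v1.0 -m "Release 1.0"

# Push the tags so they show up in the VSTS Tags page
git push origin v1.0-rc1 v1.0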

Delete tags

Sometimes you need to delete a tag from your remote repo. It could be due to a typo in the tag name, or you might have tagged the wrong commit. You can delete tags from the web UI by clicking the context menu of a tag on the Tags page and selecting Delete tag.

Filtering tags

The number of tags can grow significantly with time. Some repositories may have tags created in hierarchies, which can make finding tags difficult.

If you are unable to find the tag that you were looking for on the tag page, then you can simply search for the tag name using the filter on top of the Tags page.

Tags security

Now you can grant granular permissions to users of the repo to manage tags. You can give users the permission to delete tags or manage tags.

New Wiki experience in public preview

For quite a while we’ve wanted to have a built-in wiki. I’m happy to announce that each project now has its own wiki. Help your team members and other users understand, use, and contribute to your project. Learn more about it in our announcement blog post and check out the docs. Oh, and one more thing. It fully supports emoji, so have fun with it!

Building with the latest Visual Studio

We’re changing the model for handling different versions of Visual Studio. Due to architectural, storage, and performance limitations, we’re no longer going to offer multiple versions of Visual Studio on a single hosted build machine. For details on the history and rationale for these changes, see Visual Studio Team Services VS Hosted Pools.

In this release you’ll see the following changes:

You must now explicitly select a queue when you create a build definition (no default).

To make it easier, we’re moving the default queue to the Tasks tab, in the Process section.

The Visual Studio Build and MSBuild tasks now default to the Latest setting for the version argument.

Coming soon you'll see more changes. For example, the following hosted pools (and corresponding queues) will be available:

Hosted VS2017

Hosted VS2015

Hosted Deprecated (previously called “Hosted Pool”)

Hosted Linux Preview

Chef: Infrastructure as code

Chef is now available in the Visual Studio Team Services Marketplace! If you’re not familiar with Chef, they offer an infrastructure automation platform with a slick custom development kit allowing you to “turn your infrastructure into code.” In their words, “Infrastructure described as code is flexible, versionable, human-readable, and testable.” The Chef team wrote their own extensive blog post about this release, and I encourage you to check that out as well.

The Chef extension adds six new Build & Release tasks for configuring Chef Automate.

The tasks in this extension automate the common activities you perform when interacting with the Chef Automate platform. For a detailed look at setup and configuration, check out the getting started guide on GitHub. The tasks in the extension that are typically used as part of the build process are:

Update cookbook version number: Allows you to take your current build number and set the version of a Chef cookbook with that version prior to uploading.
Upload cookbook to Chef Server: Allows you to specify a path containing a cookbook from within your repo, and have it uploaded to your Chef Server, along with all prerequisites if you have specified them.

The tasks typically used as part of your Release process are:

Add variables to Chef Environment: Using this task allows you to copy a set of VSTS Release Management variables for your Environment, over to a specified Chef environment.
Release cookbook version to environment: This task allows you to specify a version ‘pin’ for a Chef cookbook in a particular environment. You can use this task in a Release Pipeline to ‘release’ cookbooks to that environment.
Execute InSpec: Execute InSpec on machines in a Deployment Group.
Execute Chef Client: Execute Chef Client on machines in a Deployment Group.

We are happy to have Chef join the Team Services extension ecosystem, so take your infrastructure to the next level and give them a shot.

Control releases to an environment based on the source branch

A release definition can be configured to trigger a deployment automatically when a new release is created, typically after a build of the source succeeds. However, you may want to deploy only builds from specific branches of the source, rather than when any build succeeds.

For example, you may want all builds to be deployed to Dev and Test environments, but only specific builds deployed to Production. Previously you were required to maintain two release pipelines for this purpose, one for the Dev and Test environments and another for the Production environment.

Release Management now supports the use of artifact filters for each environment. This means you can specify the releases that will be deployed to each environment when the deployment trigger conditions, such as a build succeeding and creating a new release, are met. In the Trigger section of the environment Deployment conditions dialog, select the artifact conditions such as the source branch and tags for builds that will trigger a new deployment to that environment.

In addition, the Release Summary page now contains a pop-up tip that indicates the reason for all “not started” deployments to be in that state, and suggests how or when the deployment will start.

Release Triggers for Git repositories as an artifact source

Release Management now supports configuring a continuous deployment trigger for Git repositories linked to a release definition in any of the team projects in the same account. This lets you trigger a release automatically when a new commit is made to the repository. You can also specify a branch in the Git repository for which commits will trigger a release. This also means that you can link GitHub and Team Foundation Git repositories as artifact sources to a release definition, and then trigger releases automatically for applications such as Node.js and PHP that are not generated from a build.

On-demand triggering of automated tests

The Test hub now supports triggering automated test cases from test plans and test suites. Running automated tests from the Test hub can be set up similarly to the way you run tests in a scheduled fashion in Release Environments. You will need to set up an environment in the release definition using the Run automated tests from test plans template and associate it with the test plan to run the automated tests. See the documentation for step-by-step guidance on how to set up environments and run automated tests from the Test hub.

Securely store files like Apple certificates

We’ve added a general-purpose secure files library to the Build and Release features. Use the secure files library to store files such as signing certificates, Apple Provisioning Profiles, Android Keystore files, and SSH keys on the server without having to commit them to your source repository.

The contents of secure files are encrypted and can only be used during build or release processes by referencing them from a task. Secure files are available across multiple build and release definitions in the team project based on security settings. Secure files follow the Library security model.

We’ve also added some Apple tasks that leverage this new feature:

Utility: Install Apple Certificate

Utility: Install Apple Provisioning Profile

Consume secrets from an Azure Key Vault as variables

We have also added first-class support for integrating with Azure Key Vault by linking variable groups to Key Vault secrets. This means you can manage secret values completely within Azure Key Vault without changing anything in VSTS (for example, rotate passwords or certificates in Azure Key Vault without affecting release).

To enable this feature in the Variable Groups page, use the toggle button Link secrets from an Azure key vault as variables. After configuring the vault details, choose +Add and select the specific secrets from your vault that are to be mapped to this variable group.

After you have created a variable group mapped to Azure Key Vault, you can link it to your release definitions, as documented in Variable groups.

Note that it’s just the secret names that are mapped to the variable group variables, not the values. The actual values (the latest version) of each secret will be used during the release.
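For example, rotating a secret becomes a single update in Key Vault, with no change to the release definition; the vault and secret names below are placeholders.

# Rotate a secret in Azure Key Vault; linked variable groups pick up the latest version at release time
az keyvault secret set --vault-name MyReleaseVault --name DbAdminPassword --value '<new-password>'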

Package build task updates

We’ve made comprehensive updates to the NuGet, npm, Maven, and dotnet build tasks, including fixes to most of the issues logged in the vsts-tasks repo on GitHub.

New unified NuGet task

We've combined the NuGet Restore, NuGet Packager, and NuGet Publisher tasks into a unified NuGet build task to align better with the rest of the build task library; the new task uses NuGet 4.0.0 by default. Accordingly, we've deprecated the old tasks, and we recommend moving to the new NuGet task as you have time. This change coincides with a wave of improvements outlined below that you'll only be able to access by using the combined task.

As part of this work, we’ve also released a new NuGet Tool Installer task that controls the version of NuGet available on the PATH and used by the new NuGet task. So, to use a newer version of NuGet, just add a NuGet Tool Installer task at the beginning of your build.

npm build task updates

Whether you're building your npm project on Windows, Linux, or Mac, the new npm build task has you covered. We have also reorganized the task to make both npm install and npm publish easier. For install and publish, we have simplified credential acquisition so that credentials for registries listed in your project's .npmrc file can be safely stored in a service endpoint. Alternatively, if you're using a VSTS feed, we have a picker that will let you select a feed, and then we will generate a .npmrc with the requisite credentials that are used by the build agent.

Working outside your account/collection

It’s now easier to work with feeds outside your VSTS account, whether they’re Package Management feeds in another VSTS account or TFS server, or non-Package Management feeds like NuGet.org/npmjs.com, Artifactory, or MyGet. Dedicated Service Endpoint types for NuGet, npm, and Maven make it easy to enter the correct credentials and enable the build tasks to work seamlessly across package download and package push operations.

Maven and dotnet now support authenticated feeds

Unlike NuGet and npm, the Maven and dotnet build tasks did not previously work with authenticated feeds. We’ve added all the same goodness outlined above (feed picker, working outside your account improvements) to the Maven and dotnet tasks so you can work easily with VSTS/TFS and external feeds/repositories and have a consistent experience across all the package types supported by Package Management.

Mobile work item form general availability

The mobile experience for work items in Visual Studio Team Services is now out of preview! We have a full end-to-end experience that includes an optimized look and feel for work items and provides an easy way to interact with items that are assigned to you, that you’re following, or that you have visited or edited recently from your phone.

 

Extension of the month: ProductPlan

Communicating the big picture helps align everyone to the team goals and empowers more people to notice when something may not be lining up. So I am happy to announce that our partners at ProductPlan have brought their roadmap solution to the VSTS Marketplace.

ProductPlan provides an easy way to plan and communicate your product strategy. Get started with a 30-day free trial.

Easily drag and drop bars, milestones, containers, and lanes to build beautiful roadmaps in minutes.
Update your plans on-the-fly.
Securely share with individuals, your whole team, or the entire company – for free. Easily print and export to a PDF, image, or spreadsheet.
Use the Planning Board to score your initiatives objectively.
Capture future opportunities in a central location with the Parking Lot.
Expand lanes and containers to tailor the amount of detail you share.
View multiple roadmaps in a Master Plan to understand your entire product portfolio at a glance.

As always, there’s even more in our sprintly release announcements. Check out the June 1st and June 22nd announcements for the full list of features. Be sure to subscribe to the DevOps blog to keep up with the latest plans and developments for VSTS.

Happy coding!
Source: Azure

Microsoft Cognitive Services updates – Bing Entity Search API and Project Prague

This blog post was authored by the Microsoft Cognitive Services Team.

Microsoft Cognitive Services enables developers to augment the next generation of applications with the ability to see, hear, speak, understand, and interpret needs using natural methods of communication.

Today, we are excited to announce several service updates:

We are launching Bing Entity Search API, a new service available in free preview, which makes it easy for developers to build more engaging, contextual experiences that leverage the power of the Bing knowledge graph. Tap into the power of the web to search for the most relevant entities, such as movies, books, famous people, and US local businesses, and easily provide primary details and information sources about them.
Microsoft Cognitive Services Lab’s Project Prague is now available. Project Prague lets you control and interact with devices using gestures to have a more intuitive and natural experience.
Presentation Translator, a Microsoft Garage project, is now available for download. It gives presenters the ability to add subtitles to their presentations in real time, in the same language for accessibility scenarios or in another language for multi-language situations. With customized speech recognition, presenters have the option to customize the speech recognition engine (English or Chinese) using the vocabulary within the slides and slide notes to adapt to jargon, technical terms, and product and place names. Presentation Translator is powered by the Microsoft Translator live feature, built on the Translator APIs of Microsoft Cognitive Services.

Let’s take a closer look at what these new APIs and services can do for you.

Bring rich knowledge of people, places, things and local businesses to your apps with Bing Entity Search API

As announced today, Bing Entity Search API is a new addition in our already existing set of Microsoft Cognitive Services Search APIs, including Bing Web Search, Image Search, Video Search, News Search, Bing Autosuggest, and Bing Custom Search. This API lets you search for entities in the Bing knowledge graph and retrieve the most relevant entities and primary details and information sources about them. This API also supports searching for local businesses in the US. It helps developers easily build apps that harness the power of the web and delight users with more engaging contextual experiences.

Get started

To get started today, let’s get a free preview subscription key on the Try Cognitive Services webpage.
After getting the key, I can start sending entity search queries to Bing. It’s as simple as sending the following query:

GET https://api.cognitive.microsoft.com/bing/v7.0/entities?q=mount+rainier HTTP/1.1
Ocp-Apim-Subscription-Key: 123456789ABCDE
X-MSEdge-ClientIP: 999.999.999.999
X-Search-Location: lat:47.60357;long:-122.3295;re:100
Host: api.cognitive.microsoft.com

The request must specify the q query parameter, which contains the user's search term, and the Ocp-Apim-Subscription-Key header. For location-aware queries like "restaurants near me", it's important to also include the X-Search-Location and X-MSEdge-ClientIP headers.
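If you prefer a terminal, the same request can be sent with curl as a quick sketch; replace the subscription key placeholder with your own key.

curl -H "Ocp-Apim-Subscription-Key: <your-key>" "https://api.cognitive.microsoft.com/bing/v7.0/entities?q=mount+rainier&mkt=en-US"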

For more information about getting started, see the documentation page Making your first entities request.

The response

The following shows the response to the Mount Rainier query.

{
  "_type" : "SearchResponse",
  "queryContext" : {
    "originalQuery" : "mount rainier"
  },
  "entities" : {
    "queryScenario" : "DominantEntity",
    "value" : [{
      "contractualRules" : [{
        "_type" : "ContractualRules/LicenseAttribution",
        "targetPropertyName" : "description",
        "mustBeCloseToContent" : true,
        "license" : {
          "name" : "CC-BY-SA",
          "url" : "http://creativecommons.org/licenses/by-sa/3.0/"
        },
        "licenseNotice" : "Text under CC-BY-SA license"
      },
      {
        "_type" : "ContractualRules/LinkAttribution",
        "targetPropertyName" : "description",
        "mustBeCloseToContent" : true,
        "text" : "en.wikipedia.org",
        "url" : "http://en.wikipedia.org/wiki/Mount_Rainier"
      },
      {
        "_type" : "ContractualRules/MediaAttribution",
        "targetPropertyName" : "image",
        "mustBeCloseToContent" : true,
        "url" : "http://en.wikipedia.org/wiki/Mount_Rainier"
      }],
      "webSearchUrl" : "https://www.bing.com/search?q=Mount%20Rainier…",
      "name" : "Mount Rainier",
      "image" : {
        "name" : "Mount Rainier",
        "thumbnailUrl" : "https://www.bing.com/th?id=A21890c0e1f…",
        "provider" : [{
          "_type" : "Organization",
          "url" : "http://en.wikipedia.org/wiki/Mount_Rainier"
        }],
        "hostPageUrl" : "http://upload.wikimedia.org/wikipedia…",
        "width" : 110,
        "height" : 110
      },
      "description" : "Mount Rainier, Mount Tacoma, or Mount Tahoma is the highest…",
      "entityPresentationInfo" : {
        "entityScenario" : "DominantEntity",
        "entityTypeHints" : ["Attraction"],
        "entityTypeDisplayHint" : "Mountain"
      },
      "bingId" : "9ae3e6ca-81ea-6fa1-ffa0-42e1d78906"
    }]
  }
}

For more information about consuming the response, please refer to the documentation page Searching the Web for entities and places.

Try it now

Don’t hesitate to try it by yourself by going to the Entities Search API Testing Console.

Create more natural user experiences with gestures – Project Prague

Project Prague is a cutting edge, easy-to-use SDK that helps developers and UX designers incorporate gesture-based controls into their apps. It enables you to quickly define and implement customized hand gestures, creating a more natural user experience.

The SDK enables you to define your desired hand poses using simple constraints built with plain language. Once a gesture is defined and registered in your code, you will get a notification when your user does the gesture, and can select an action to assign in response.

Using Project Prague, you can enable your users to intuitively control videos, bookmark webpages, play music, send emojis, or summon a digital assistant.

Let's say that I want to create a new gesture, "RotateRight", to control my app. First, I need to ensure that I meet the hardware and software requirements; please refer to the requirements section for more information. Intuitively, when performing the "RotateRight" gesture, a user would expect some object in the foreground application to be rotated right by 90°. We have used this gesture to trigger the rotation of an image in a PowerPoint slideshow in the video above.

The following code demonstrates one possible way to define the "RotateRight" gesture:

var rotateSet = new HandPose("RotateSet", new FingerPose(new[] { Finger.Thumb, Finger.Index }, FingerFlexion.Open, PoseDirection.Forward),
new FingertipPlacementRelation(Finger.Index, RelativePlacement.Above, Finger.Thumb),
new FingertipDistanceRelation(Finger.Index, RelativeDistance.NotTouching, Finger.Thumb));

var rotateGo = new HandPose("RotateGo", new FingerPose(new[] { Finger.Thumb, Finger.Index }, FingerFlexion.Open, PoseDirection.Forward),
new FingertipPlacementRelation(Finger.Index, RelativePlacement.Right, Finger.Thumb),
new FingertipDistanceRelation(Finger.Index, RelativeDistance.NotTouching, Finger.Thumb));

var rotateRight = new Gesture("RotateRight", rotateSet, rotateGo);

The "RotateRight" gesture is a sequence of two hand poses, "RotateSet" and "RotateGo". Both poses require the thumb and index finger to be open, pointing forward, and not touching each other. The difference between the poses is that "RotateSet" specifies that the index finger should be above the thumb, while "RotateGo" specifies it should be to the right of the thumb. The transition between "RotateSet" and "RotateGo", therefore, corresponds to a rotation of the hand to the right.

Note that the middle, ring, and pinky fingers do not participate in the definition of the "RotateRight" gesture. This makes sense because we do not wish to constrain the state of these fingers in any way. In other words, these fingers are free to assume any pose during the execution of the "RotateRight" gesture.

Having defined the gesture, I need to hook up the event indicating gesture detection to the appropriate handler in my target application:

rotateRight.Triggered += (sender, args) => { /* This is called when the user performs the "RotateRight" gesture */ };

The detection itself is performed in the Microsoft.Gestures.Service.exe process. This is the process associated with the "Microsoft Gestures Service" window discussed above. This process runs in the background and acts as a service for gesture detection. I will need to create a GesturesServiceEndpoint instance in order to communicate with this service. The following code snippet instantiates a GesturesServiceEndpoint and registers the "RotateRight" gesture for detection:

var gesturesService = GesturesServiceEndpointFactory.Create();
await gesturesService.ConnectAsync();
await gesturesService.RegisterGesture(rotateRight);

When you wish to stop the detection of the "RotateRight" gesture, you can unregister it as follows:

await gesturesService.UnregisterGesture(rotateRight);

The handler will no longer be triggered when the user executes the "RotateRight" gesture. When finished working with gestures, keep in mind that the GesturesServiceEndpoint object should be disposed:

gesturesService?.Dispose();

Please note that in order for the above code to compile, you will need to reference the following assemblies, located in the directory indicated by the MicrosoftGesturesInstallDir environment variable:

Microsoft.Gestures.dll
Microsoft.Gestures.Endpoint.dll
Microsoft.Gestures.Protocol.dll

For more information, please refer to the Getting Started guide in the documentation.

Thank you again and happy coding!

Source: Azure