What’s brewing in Visual Studio Team Services: May 2017 Digest

This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. This sprint has our Build 2017 conference deliverables in it, so it’s a big one, especially in the CI/CD space. Here’s a recap with all of our conference presentations.

One of our goals is to keep lowering the barrier to entry for automating your application deployment. The ease with which teams can deploy and validate their application is a huge part of how quickly they are able to ship. While our CI/CD system is completely open, by doing deep integrations with Azure we can make setting up deployments extremely simple. It also unlocks many opportunities for richer workflows that span both development and operations. To that end, we are continuing to strive to make VSTS + Azure the best end-to-end DevOps experience.

This month brings a bunch of new capabilities toward realizing that goal. We have significantly expanded the breadth of app types we support:

We now support installing the automation agent directly on the VMs to which you deploy and using it to drive your application deployment. This has easily been our most requested feature for Release Management, and we’re excited for it to go live.
We continue to give more and more focus to containers. This sprint, we introduce native support for Kubernetes and Service Fabric, the latter being a great option for Windows containers.
We already have great support for deploying to Azure Web Apps, but we’ve expanded the app types we support with our native task to include Node, PHP, and Linux Web Apps with containers. We’ve also expanded the entry points for setting up CI/CD, with more options in the Azure portal configuration UI and the ability to set up CI/CD for Azure Web Apps from the Azure CLI.

Let’s dive in!

VM Deployment (Public Preview)

Release Management now supports robust out-of-the-box multi-machine deployment. You can now orchestrate deployments across multiple machines and perform rolling updates while ensuring high availability of the application throughout.

Agent-based deployment capability relies on the same build and deployment agents. However, unlike the current approach where you install the build and deployment agents on a set of proxy servers in an agent pool and drive deployments to remote target servers, you install the agent on each of your target servers directly and drive rolling deployment to those servers. You can use the full task catalog on your target machines.

A deployment group is a logical group of targets (machines) with agents installed on each of them. Deployment groups represent your physical environments, such as single-box Dev, multi-machine QA, and a farm of machines for UAT/Prod. They also specify the security context for your physical environments.

You can use this against any VM that you register our agent with. We’ve also made it very easy to register with Azure, with support for an Azure VM extension that auto-installs the agent when the VM spins up. We will automatically inherit the tags on the Azure VM when it’s registered in VSTS.

Once you have a deployment group, you simply configure what you want us to execute on that deployment group. You can control what gets run on which machines using tags and control how fast or slow the rollout happens.
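The tag and rollout-speed controls amount to a simple selection-and-partition loop. Here is a minimal Python sketch of the idea (machine names and tags are invented for illustration; the actual targeting is done by Release Management, not user code):

```python
from typing import Dict, List

def select_targets(machines: Dict[str, List[str]], required_tag: str) -> List[str]:
    """Return the machines in a deployment group that carry a given tag."""
    return sorted(name for name, tags in machines.items() if required_tag in tags)

def rollout_batches(targets: List[str], batch_size: int) -> List[List[str]]:
    """Split targets into batches; a smaller batch size gives a slower, safer rollout."""
    return [targets[i:i + batch_size] for i in range(0, len(targets), batch_size)]

# A hypothetical deployment group: machine name -> tags inherited from the VM.
group = {
    "web-01": ["web"], "web-02": ["web"], "web-03": ["web"],
    "db-01": ["db"],
}

web_targets = select_targets(group, "web")
print(rollout_batches(web_targets, batch_size=2))
# [['web-01', 'web-02'], ['web-03']]
```

Rolling through batches of two here means one web machine stays available at all times while its peers are updated.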

When the deployment is run, the logs show the progression across the entire group of machines you are targeting.

This feature is now an integrated part of Release Management. There are no additional licenses required to use it.

While we’re on the topic of deploying to different environments, check out our post on configuring your release pipelines for safe deployments.

Azure virtual machine scale set deployment

Another common pattern used for deployment is to create a full machine image for each version of the application and then deploy that. To make that easier, we have a new Build immutable machine image task that uses Packer to generate a machine image after deploying applications and all the required prerequisites. The task takes either a deployment script or a Packer configuration template to create the machine image and stores it in an Azure Storage account. This image can then be used for Azure Virtual Machine Scale Set deployments, which work well for this type of immutable image deployment. You can learn more in our post on deploying applications to VM scale sets.

Built-in tasks for building and deploying container based applications

With this release we have pulled most of the tasks in our Docker extension into the product by default, improved them, and introduced a set of new tasks and templates for making a set of container scenarios easier.

Docker: Build, push, or run Docker images, or run a Docker command. This task can be used with Docker Hub or Azure Container Registry. You can now use our built-in service principal authentication with ACR to make it even easier to use.
Docker-Compose: Build, push, or run multi-container Docker applications. This task can be used with Docker Hub or Azure Container Registry.
Kubernetes: Deploy, configure, or update your Kubernetes cluster in Azure Container Service by running kubectl commands.
Service Fabric: Deploy containers to a Service Fabric Cluster. Service Fabric is the best choice today for running Windows Containers in the cloud. In fact, this is where more and more of VSTS itself is running each sprint.

Azure Web App deployment updates

We have made many enhancements for Azure Web Applications:

Azure App Service deployment task supports deploying Node.js and Python applications.
Azure App Service deployment task supports deploying to Azure Web App for Linux using containers.
Azure portal Continuous Delivery is expanded to now support Node applications.

We have also introduced support for configuring CI/CD into the latest version of the Azure CLI. Here is an example:

az appservice web source-control config --name mywebapp --resource-group mywebapp_rg --repo-url https://myaccount.visualstudio.com/myproject/_git/myrepo --cd-provider vsts --cd-app-type AspNetCore

Deploy to Azure Government Cloud

Customers with Azure subscriptions in Government clouds can now configure an Azure Resource Manager service endpoint to target national clouds.

With this, you can now use Release Management to deploy any application to Azure resources hosted in government clouds, using the same deployment tasks. Read more about this in our post on setting up continuous delivery to Microsoft Azure Government.

Automatic linking from work items to builds

With this new setting in the build definition, users can track the builds that have incorporated their work without having to search through a large set of builds manually. Each successful build associated with the work item automatically appears in the development section of the work item form.

To enable this feature, toggle the setting under Options in your build definition.

Note: The feature is only available for definitions building Team Services Git or TFVC repos, and only through the new build definition editor.

Using Jenkins for Continuous Integration with Team Services

Jenkins is a popular continuous integration build server, and there are multiple ways to use Jenkins as a CI server with Team Services. Jenkins’ built-in Git Plugin or Team Foundation Server Plugin can poll a Team Services repository every few minutes and queue a job when changes are detected. For those who need tighter integration, Team Services provides two additional ways to achieve it: 1) the Jenkins Service Hook, and 2) Jenkins build and release tasks.

Team Services adds capabilities over the Jenkins Service Hook by including connectors that allow its build and release systems to integrate with Jenkins. These connectors can be chosen from the list of tasks to execute as steps in a Team Services build or release definition.

A Team Services build or release will queue a Jenkins job and download resulting artifacts. Since these tasks execute in a light-weight, long-polling agent that can be installed in your data center, there is no need to modify inbound firewall rules for Team Services to access your Jenkins server from the cloud.
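Behind the scenes, queueing a Jenkins job comes down to an authenticated POST against Jenkins’ remote access API. A rough standard-library sketch of that call is below; the server URL, job name, and credentials are placeholders, and the real build/release tasks do much more (polling for completion, downloading artifacts):

```python
import base64
import urllib.request

def build_queue_request(base_url: str, job: str,
                        user: str, api_token: str) -> urllib.request.Request:
    """Create a POST request asking Jenkins to queue a build of `job`."""
    url = f"{base_url.rstrip('/')}/job/{job}/build"
    creds = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    req = urllib.request.Request(url, data=b"", method="POST")
    req.add_header("Authorization", f"Basic {creds}")
    return req

req = build_queue_request("https://jenkins.example.com", "my-ci-job",
                          "builder", "secret-token")
print(req.full_url)  # https://jenkins.example.com/job/my-ci-job/build
```

Sending the request (`urllib.request.urlopen(req)`) would require a reachable Jenkins server, which is exactly the inbound access the light-weight agent lets you avoid exposing.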

You can learn more in our blog post on integrating with Jenkins.

Maven for Package Management (Public Preview)

Java developers share components by packaging up their code in Maven artifacts, the Java equivalent of a NuGet package. Team Services customers needing a place to host Maven artifacts used to have to use third-party services, like Nexus or Artifactory, to meet their needs. We’re proud to announce that Team Services Package Management now supports hosting Maven artifacts! Check out our getting started guide.

You’ll also want to check out our recent blog post on the extensive support for Java development with Team Services.

New Git branch policies configuration experience

Branch policies are a great way to ensure quality in your branches by requiring code reviews, automatically running a build and tests for each PR, and more. We’ve redesigned the branch policies configuration experience and added some great new capabilities. One of the most powerful features is the ability to configure policies for branch folders. You can do this from the Branches view by selecting a branch folder and choosing Branch policies from the context menu.

This will open the new policies configuration UX, where you can configure policies that apply to all of the branches in the branch folder.

If you’re using the build policy, you can now configure multiple builds for a single branch. There are also new options to specify the type of trigger, automatic or manual. Manual triggers are useful for things like automated test runs that might take a long time to run, and you only really need to run once before completing the pull request. The build policy also has a display name that is useful if you’re configuring multiple builds.

Share Git pull requests with teams

The Share Pull Request action is a handy way to notify reviewers. In this release, we’ve added support for teams and groups, so you can notify everyone involved in the pull request in a single step.

Visualize your Git repository

Team Services now supports showing a graph alongside commit history for repositories or files. Now you can easily build a mental model of all your branches and commits for your Git repositories using the Git graph. The graph shows all your commits in topological order.

The key elements of the git graph include:

The git graph is right-aligned, so commits associated with the default branch or the selected branch appear on the right while the rest of the graph grows on the left.
Merge commits are represented by grey dots connected to their first parent and second parent.
Normal commits are represented by blue dots.
If the parent of a commit is not visible within the next 50 commits in the viewport, we clip the commit connection. Once you click the arrow, the commit is connected to its parent commit.
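Topological order means every commit is drawn before any of its parents. As a rough illustration of what the graph view computes, here is a small Python sketch that orders a parent map this way (the commit IDs are made up):

```python
def topo_order(parents: dict) -> list:
    """Order commits so each commit appears before any of its parents,
    as in the graph view (newest at the top)."""
    order, seen = [], set()

    def visit(commit):
        if commit in seen:
            return
        seen.add(commit)
        for p in parents.get(commit, []):
            visit(p)            # reach the roots first...
        order.append(commit)    # ...then emit on the way back up

    for c in parents:
        visit(c)
    return order[::-1]          # reverse postorder = topological order

# A tiny history: C is a merge of A and B; A and B share root R.
history = {"C": ["A", "B"], "A": ["R"], "B": ["R"], "R": []}
print(topo_order(history))  # ['C', 'B', 'A', 'R']
```

Note that the merge commit C precedes both of its parents, and the shared root R comes last, just as it would sit at the bottom of the graph.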

You can read more in our post on the Git graph and advanced filters.

Delivery Plans general availability

We are excited to announce that Delivery Plans is out of preview and is now included in the basic access level of VSTS. Delivery Plans is an organizational tool that helps users drive cross-team visibility and alignment by tracking work status on an iteration-based calendar. Users can tailor their plan to include any team or backlog level from across projects in the account. Furthermore, Field Criteria on Plans enables users to further customize their view, while Markers highlight important dates.

Delivery Plans is currently only available for VSTS; however, it will be included in the upcoming TFS 2017 Update 2 release.

Check out the marketplace page for Delivery Plans to learn more and install the extension.

Delivery timeline markers

Have you been looking for a way to highlight key dates on your Delivery Plan? Now you can with plan markers. Plan markers let you visualize key dates for teams directly on your delivery plan. Markers have an associated color and label. The label shows up when you click the marker dot.

Work item search general availability

Thank you all for installing and using the Work Item Search preview from the marketplace. It has been one of our most highly rated extensions. With this release, we are making it easier for you to use work item search by making it a built-in feature of VSTS.

You can get started with work item search using the search box:

Updated process customization experience

We have modernized the pages used for customizing your process. Each page now includes a breadcrumb at the top to clearly show the context you are in when editing the process or the work item types inside it.

Also, it’s much easier to start customizing your work item form. When you select Customize from the context menu in a work item, we automatically create an inherited process for you, if you are not already using one, and bring you into the layout editor.

Insight into your projects with Analytics

Our new Analytics service brings you and your team new insights into the health and status of your work. Analytics is currently in preview, and at this early stage includes three new dashboard widgets: Lead Time, Cycle Time, and Cumulative Flow Diagram (CFD). You can install Analytics from the VS Team Services Marketplace.

We released a long list of new features the last couple of sprints. Be sure to read the release notes for May 11th and April 19th for a full list.

Happy coding!
Source: Azure

Kubernetes in action: How orchestration and containers can increase uptime and resiliency

It’s been about a month since we finalized the acquisition of Deis to expand the Azure Container Service and Kubernetes support on Azure. It’s been really fantastic to watch the two teams come together and begin to work together. In particular I’ve been excited to get to know the Helm team better and begin to see how we can build tight integrations between Helm and Kubernetes on Azure Container Service.

Containers and Kubernetes can dramatically improve the operability and reliability of software developed in the cloud. First, the container itself is an immutable package that carries all of its dependencies with it. This means you have strong assurances that if you build a working solution on your laptop, it will run the same when you deploy it to Azure.


In addition to that, orchestrators like Kubernetes provide native support for application management best practices, like automatic health-check-based restarts and monitoring. Beyond these basic features, Kubernetes also supplies automatic rollout and rollback of a service, which allows a user to do a live update of their service without affecting end-user traffic, even if the new version of that service fails during the update. All of these tools mean that it’s incredibly easy for a user to take an application from a prototype to a reliable production system.

However, in many cases you aren’t deploying your own code, but rather using open source software developed by others, for example MongoDB or the Ghost blogging platform. This is where the Helm tool from Deis really shines. Helm is a package manager for your cluster: it gives a familiar interface to people who have used single-machine package managers like apt, Homebrew, or yum, but in the case of Helm, it installs software into your entire Kubernetes cluster. In a few command lines, you can install a replicated, reliable version of MongoDB for your application to start using.

I’m really excited to see how we can better integrate Helm and other awesome open source tooling from Deis into Azure Container Service to make it even easier for developers to build scalable, reliable distributed applications on Azure. For more details and examples of how Kubernetes changes operations for operators, check out my recent appearance on the Microsoft Mechanics show where I demonstrate and discuss Containers, Kubernetes, and Azure.

Brendan Burns, Azure Container Service 
Source: Azure

New 400GB and 200GB caches available on Azure Analysis Services

In April we announced the general availability of Azure Analysis Services, which evolved from the proven analytics engine in Microsoft SQL Server Analysis Services. The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

Today we are introducing two new SKU sizes, the S8 and S9, allowing you to build data models up to 400 GB in size.

The S8 offers up to 200 GB of cache with 320 QPUs, while the S9 offers up to 400 GB of cache and 640 QPUs. The cache sizes refer to the size of the memory to hold data in after it has been compressed. You do need to reserve some cache for processing and querying. A Query Processing Unit (QPU) in Azure Analysis Services is a unit of measure for relative computational performance for query and data processing. As a rule of thumb, one virtual core approximates to roughly 20 QPUs, although the exact performance depends on the underlying hardware and the generation of hardware used.
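Applying that rule of thumb to the new SKUs gives a feel for the compute behind them; the figures below are approximations only, since actual performance depends on the underlying hardware generation:

```python
QPUS_PER_VCORE = 20  # rule of thumb quoted above

def approx_vcores(qpus: int) -> int:
    """Rough virtual-core equivalent for a given QPU rating."""
    return qpus // QPUS_PER_VCORE

for sku, qpus in [("S8", 320), ("S9", 640)]:
    print(f"{sku}: {qpus} QPUs is roughly {approx_vcores(qpus)} virtual cores")
# S8: 320 QPUs is roughly 16 virtual cores
# S9: 640 QPUs is roughly 32 virtual cores
```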

The new S8 and S9 SKUs are currently only available in the East US 2 and West Europe datacenters. To learn more about pricing for S8 and S9, please visit our pricing page.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Source: Azure

Microsoft and Cisco enable Azure IoT Suite to connect to Cisco Fog Deployments

As we announced recently, Microsoft thinks there is a natural balance between the cloud and the edge in IoT. Why? Cloud is a natural place to manage IoT devices, to collect data from them, gain insights using analytics and then operationalize those insights. Edge is a natural place to collect, optimize and react to data with low latency based on the insights generated in the cloud. In this way, cloud and edge work together to help IoT reach its full potential. 

Today Microsoft is participating in Cisco’s IoT World Forum in London, where conversations like this are taking place about how to help customers gain intelligence from their data faster and more easily.

Microsoft Azure recently announced support for edge intelligence with Azure IoT Edge, and we strongly believe in giving customers options in picking the right edge technology to meet their needs. Today at Cisco’s IoT World Forum we announced we’re partnering with Cisco to make it possible for the Azure IoT Suite to connect to and interoperate with Cisco Fog deployments.

This will have the following benefits for our joint customers: 

Enabling them to build and host their IoT applications in Azure, while extending the power of those applications to the edge via Cisco’s Fog computing solutions
Bringing intelligence and processing capabilities closer to where the data originates so that critical decisions can be processed in real time
Helping them optimize their IoT deployment costs by sending only the necessary information to the cloud, while processing the rest at the edge
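To illustrate the third point, here is a small, hypothetical sketch of edge-side filtering: raw telemetry is aggregated locally, and only a compact summary plus anomalous readings are forwarded to the cloud (the field names and threshold are invented for illustration):

```python
from statistics import mean

def summarize_at_edge(readings, alert_threshold):
    """Aggregate raw sensor readings locally; forward only a compact summary
    plus any readings that need an immediate cloud-side decision."""
    return {
        "count": len(readings),
        "avg": round(mean(readings), 2),
        "alerts": [r for r in readings if r > alert_threshold],
    }

raw = [21.0, 21.5, 22.1, 98.7, 21.8]  # one anomalous spike
summary = summarize_at_edge(raw, alert_threshold=90)
print(summary)
# {'count': 5, 'avg': 37.02, 'alerts': [98.7]}
```

Instead of five raw messages, the cloud receives one summary record plus the single reading that demands attention.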

We believe IoT, Cloud and IoT Edge will continue to play a critical role in digital transformation. We’re excited to extend our work with Cisco and bring the value of Microsoft’s IoT technologies and solutions to even more customers.

– Sam
Source: Azure

Deploy Cognitive Toolkit model to Azure Web Apps

Azure offers several ways of deploying a deep-learning model including Windows Web App, Linux Web App, and Azure Container Services. For those less experienced with a Linux environment/containers, Windows Web Apps offers familiar territory. In this post we will deploy a ResNet-18 model to Azure Web Apps and then submit some test pictures to it using a sample HTML interface, and also via python.

Demo results

HTML interface

Python

The above screenshot is taken from this notebook. If you wish to run some speed tests, this notebook on GitHub shows how to submit asynchronous requests to the created API to get an idea of how long it takes to classify images in bulk. In this example we get 0.86 seconds per image.
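The bulk-scoring measurement can be approximated with the standard library alone: send classification requests concurrently and amortize the elapsed time over the batch. `classify` below is a stand-in for the real HTTP call to the deployed API, simulated here with a short sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def classify(image_id: str) -> str:
    """Stand-in for an HTTP POST to the deployed scoring endpoint."""
    time.sleep(0.05)  # simulate network plus model latency
    return f"label-for-{image_id}"

images = [f"img-{i}" for i in range(20)]
start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    labels = list(pool.map(classify, images))
elapsed = time.time() - start
print(f"{len(labels)} images in {elapsed:.2f}s "
      f"({elapsed / len(labels):.3f}s per image amortized)")
```

With ten workers in flight, the amortized per-image time is well below the per-request latency, which is the whole point of batching requests against the API.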

Replicate demo

1. Download the contents of the repo and open a Command Prompt in the folder.

2. Run the following commands to check you have git and the Azure CLI installed:

az --version # time of writing: 2.0.1
pip install azure-cli # otherwise install azure-cli
git --version # time of writing: 2.9.2.windows.1

3. Set your username and password for local git deployment. Please note, you only need to do this once. For example:

set uname=<username_for_local_git_deployment>
set pass=<password_for_local_git_deployment>
# Create a user-name and password for git deployment of all your apps
az appservice web deployment user set --user-name %uname% --password %pass%

4. Create your web-app by running the below commands:

# Name for your web-app
set appn=<app_name>
# Name for resource-group containing web-app
set rgname=<name_for_resource_group_that_contains_app>
# Login to azure
az login
# Create a resource-group
az group create --location westeurope --name %rgname%
# Create a paid 'S2' plan to support your app
# The standard paid plans are: S1, S2, S3
az appservice plan create --name %appn% --resource-group %rgname% --sku S2
# Create the web-app
az appservice web create --name %appn% --resource-group %rgname% --plan %appn%
# Configure for local git deployment (SAVE URL)
az appservice web source-control config-local-git --name %appn% --resource-group %rgname% --query url --output tsv
# Initialise your git repo
git init
# Add the azure endpoint
git remote add azure <PASTE_URL_FROM_ABOVE>
# e.g. git remote add azure https://ilia2ukdemo@wincntkdemo.scm.azurewebsites.net/wincntkdemo.git

5. We will now install Python. Navigate to your web-app in the Azure Portal and scroll down to the "Extensions" blade:

6. Then, click on "Add", locate "Python 3.5.3 x64", and add it. Please note, you must use this extension.

Make sure you get a notification that this has installed successfully.

7. (Optional) Under the "Application settings" blade set "Always On" to "On" to reduce the response time since your model will be kept loaded.

8. Deploy this demo by running:

git add -A
git commit -m "init"
git push azure master

If everything has gone successfully you should see the following line in the script output:

remote: Successfully installed cntk-2.0rc1
remote: ..
remote: 2.0rc1

You should now be able to navigate to your web-app address and upload a photo that will be classified according to the CNN: ResNet-18.

Advanced modifications (run your own)

You can include references to other modules (e.g. pandas or opencv) in your model.py file, however you must add the module to the "requirements.txt" file so that python installs the module. If the module needs to be built, you can download the pre-built wheel file to the wheels folder. Don't forget to add the wheel path to the "requirements.txt" file at the root of the directory. Note: Numpy, Scipy, and CNTK wheels are automatically installed inside the "deploy.cmd" script. To change this you can edit the deploy.cmd file to point to whichever numpy wheel you require.

Editing deploy.cmd – The install script automatically adds the binaries for CNTK v2.0 rc1. However, if you want to use Python 3.6 or CNTK v2.0 rc1+, alter the below in the "deploy.cmd" script:

:: VARIABLES
echo "ATTENTION"
echo "USER MUST CHECK/SET THESE VARIABLES:"
SET PYTHON_EXE=%SYSTEMDRIVE%\home\python353x64\python.exe
SET NUMPY_WHEEL=https://azurewebappcntk.blob.core.windows.net/wheels/numpy-1.12.1+mkl-cp35-cp35m-win_amd64.whl
SET SCIPY_WHEEL=https://azurewebappcntk.blob.core.windows.net/wheels/scipy-0.19.0-cp35-cp35m-win_amd64.whl
SET CNTK_WHEEL=https://azurewebappcntk.blob.core.windows.net/cntkrc/cntk-2.0rc1-cp35-cp35m-win_amd64.whl
SET CNTK_BIN=https://azurewebappcntk.blob.core.windows.net/cntkrc/cntk.zip

To create the 'cntk.zip' file you just need to zip up the cntk/cntk folder (i.e. the folder that contains 'CNTK.exe' and DLLs; you can remove the python sub-folder which contains the wheels, if it exists) and then reference it with the %CNTK_BIN% environment variable above.

You can also install a different python extension if you wish, however make sure to reference it properly (and also to get the Numpy, Scipy and CNTK wheels for it). For example, the "Python 3.5.3 x64" extension is installed in the directory "D:\home\python353x64", and thus the script references: SET PYTHON_EXE=%SYSTEMDRIVE%\home\python353x64\python.exe

Finally, alter the "model.py" script as desired in the "WebApp" folder, along with the HTML template, "index.html", in "templates", and then push your changes to the repo:

git add -A
git commit -m "modified some script"
git push azure master

Source: Azure

Sneak Peek – PowerShell in Azure Cloud Shell

At BUILD 2017, we announced the preview of Azure Cloud Shell, which supports the Bash shell. As showcased (at 9:35 min) by Corey Sanders, we are adding PowerShell support to Azure Cloud Shell, which gives you a choice of shell to get work done.

The PowerShell experience will provide the same benefits as the Bash shell in Azure Cloud Shell:

Get authenticated shell access to Azure from virtually anywhere.
Use common tools and programming languages in a shell that’s updated and maintained by Microsoft.
Persist your files across sessions in attached Azure File storage.

Additionally, the PowerShell experience will provide:

Azure namespace capability to let you easily discover and navigate all Azure resources.
Interaction with VMs to enable seamless management of guest VMs.
Extensible model to import additional cmdlets and the ability to run any executable.

Sign up today to participate in a limited preview of PowerShell in Azure Cloud Shell.

PowerShell is already the default shell for Windows 10. Adding PowerShell to Cloud Shell ensures you’ll have access to the most common automation tool for managing Azure resources from virtually anywhere.

We look forward to sharing this awesome new PowerShell experience with you!
Source: Azure

Azure enables cutting edge Virtual Apps, Desktops and Workstations with NVIDIA GRID

Professional graphics users in every industry count on an immersive, photorealistic, responsive environment to imagine, design, and build everything from airplanes to animated films. Traditionally, these high-powered workstations were tethered to physical facilities and shared among professional users such as designers, architects, engineers, and researchers. But today’s enterprises find themselves operating in multiple geographies, with distributed teams needing to collaborate in real-time.

Hence, last year we released Azure’s first GPU offerings targeting high-end graphics applications. NV instances are powered by the NVIDIA GRID virtualization platform and NVIDIA Tesla M60 GPUs, which provide 2048 CUDA cores and 8 GB of GDDR5 memory per GPU. These instances provide over a 2x performance increase in graphics-accelerated applications compared to the previous generation.

Targeting the high-end workstation user, you can run NVIDIA Quadro GPU-optimized applications such as Dassault Systèmes CATIA or Siemens PLM directly on the NV instances without the need to deal with the complexity of licensing. Additionally, with up to 4 GPUs via NV24, you’re able to run up to 4 concurrent users of these Quadro applications, with features such as multiple displays, larger maximum resolutions, and certified Quadro software features from hundreds of software vendors.

Furthermore, if your organization has a need to run Virtual Apps or Virtual Desktops using solutions like RDS, Citrix XenApp Essentials, VMware Horizon, or Workspot, you’re now able to run up to 25 concurrent RDSH users per GPU. Office workers and professionals who don’t require Quadro optimized applications, can finally enjoy virtual desktops with a high-quality user experience that's optimized for productivity applications. It's all the performance of a physical PC, where and when you need it. You can now dramatically lower IT operational expense and focus on managing the users instead of PCs.

                                         NV6             NV12            NV24
Cores                                    6               12              24
GPU                                      1 x M60 GPU     2 x M60 GPUs    4 x M60 GPUs
Memory                                   56 GB           112 GB          224 GB
Disk                                     380 GB SSD      680 GB SSD      1.44 TB SSD
Network                                  Azure Network   Azure Network   Azure Network
Virtual Workstations                     1               2               4
RDSH Virtual Apps and Virtual Desktops   25              50              100

“Because so many of today’s modern applications and operating systems require GPU acceleration, organizations are seeking greater flexibility in their deployment and cost options,” says John Fanelli, VP NVIDIA GRID. “With NVIDIA GRID software and NVIDIA Tesla M60s running on Azure, Microsoft is delivering the benefits of cloud-based RDSH virtual apps and desktops to enable broad-scale, graphics-accelerated virtualization in the cloud that meets the needs of any enterprise.”

These new updates will go a long way toward making sure that you have the best infrastructure, whether you’re running the most graphics-demanding CAD applications that require Quadro optimization or just running office productivity applications on the go.

Azure N-Series VMs are now generally available in multiple regions. To launch these VMs please visit the Azure Portal.

Source: Azure

Azure Analysis Services new modeling and tooling features

Following the announcement a few weeks ago that 1400 models are now in Azure Analysis Services, we haven’t stopped there! We are pleased to announce the following further features for 1400 models in Azure.

Shared M expressions are shown in the SSDT Tabular Model Explorer, and can be maintained using the Query Editor.
Dynamic Management View (DMV) improvements.
Opening a file with the .MSDAX extension in SSDT enables non-model-related DAX IntelliSense.

Shared M expressions

Shared M expressions are shown in the Tabular Model Explorer! By right clicking the Expressions node, you can edit the expressions in the Query Editor. This should seem familiar to Power BI Desktop users.

DMV improvements

DMVs expose information about server operations, server health, settings, and model structure. They are used for server monitoring, model documentation, and various other purposes.

DISCOVER_CALC_DEPENDENCY

M expression dependencies are included in DISCOVER_CALC_DEPENDENCY. The following query returns the output shown below. M expressions and structured data sources are included for 1400 models.

SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY
WHERE OBJECT_TYPE = 'PARTITION' OR OBJECT_TYPE = 'M_EXPRESSION';

The output represents the same information that is shown by the Query Dependencies visual, which is now available in SSDT from the Query Editor. This visual should seem familiar to Power BI Desktop users.

MDSCHEMA_MEASUREGROUP_DIMENSIONS

This release provides a fix for MDSCHEMA_MEASUREGROUP_DIMENSIONS. This DMV is used by various client tools to show measure dimensionality. For example, the Explore feature in Excel Pivot Tables allows the user to cross-drill to dimensions related to the selected measures.

Prior to this release, some rows were missing in the output for 1200 models, which meant the Explore feature did not work correctly. This is now fixed for 1200 and 1400 models.

DAX file editing

Opening a file with the .MSDAX extension allows DAX editing with non-model-related IntelliSense, such as syntax highlighting, statement completion, and parameter info. As you can imagine, we intend to use this for interesting features to be released in the future!

Try it Now!

To get started, simply create a 1400 model in SSDT and deploy it to Azure Analysis Services! See this post on how to create your first model. Be sure to keep an eye on this blog to stay up to date on Azure Analysis Services.
Source: Azure

Use SaaS patterns to accelerate SaaS app development on SQL Database

We’re delighted to announce availability of a sample SaaS application and a series of management scripts and tutorials that demonstrate a range of SaaS-focused design and management patterns that can accelerate SaaS application development on SQL Database. These patterns extend the benefits of SQL Database, making it the most effective and easy-to-manage data platform for a wide range of data-intensive multi-tenant SaaS applications.

Database-per-tenant model gives tenant isolation

The discussion around patterns starts with the consideration of what data model to use. Multi-tenant applications have traditionally been implemented using a multi-tenant database. While multi-tenant databases remain effective for some applications, particularly where the amount of data stored per tenant is small, many SaaS applications benefit from the isolation inherent in using a database per tenant. The fully-managed nature of SQL Database and the use of elastic pools have made managing massive numbers of databases practical. Many ISVs are now running SaaS applications on SQL Database with tens of thousands of tenant databases in elastic pools. MYOB, a leading Australian accounting ISV, is managing over 130,000 tenant databases without breaking a sweat! A database-per-tenant model allows these customers to achieve levels of tenant isolation not possible with a multi-tenant database, with improvements in data security, privacy, performance management, extensibility, and more.

Learning from customer experience

By working closely with many of these customers, and learning from their experience, we have harvested a set of design and management patterns applicable to any business domain that simplify the adoption of a database-per-tenant approach and its use at scale. Based on these patterns, a sample SaaS application and a set of management scripts, backed by easy-to-follow tutorials, is now available, with all code on GitHub and the tutorials online.

You can install the sample application in less than 5 minutes and explore the patterns first-hand by playing with the app and looking at how it’s built using the Azure portal, SQL Server Management Studio, and Visual Studio. By studying the app and management scripts, and working through the tutorials, you can jump start your own SaaS app project.

The sample app is a simple event listing and ticketing SaaS app, where each venue has its own database with events, ticket prices, customers, and ticket sales, all securely isolated from other venues’ data. The app uses a canonical SaaS app architecture for the data layer. Each tenant is mapped to its database using a catalog database, which is used for lookup and connectivity. Other databases are installed to enable other scenarios as you explore the various tutorials.
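The catalog lookup at the heart of this architecture can be sketched in a few lines. This is a minimal illustration only, assuming a simple in-memory catalog rather than the shard-map-based catalog database the sample app actually uses; the venue names and the `get_tenant_connection` helper are hypothetical:

```python
# Minimal sketch of catalog-based tenant routing (hypothetical names).
# In the sample app the catalog is itself a SQL database; here a dict
# stands in for the tenant-to-database lookup.

CATALOG = {
    "contosoconcerthall": "tenants1.database.windows.net/contosoconcerthall-db",
    "fabrikamjazzclub":   "tenants1.database.windows.net/fabrikamjazzclub-db",
}

def get_tenant_connection(tenant_name: str) -> str:
    """Look up the tenant's database in the catalog and return the
    connection target; raise if the tenant is not registered."""
    try:
        return CATALOG[tenant_name]
    except KeyError:
        raise LookupError(f"Tenant '{tenant_name}' is not registered") from None

# The app resolves the tenant from the incoming request (e.g. the URL),
# looks up its database, and opens a connection scoped to that tenant only.
print(get_tenant_connection("contosoconcerthall"))
```

Because every connection is resolved through the catalog, moving a tenant database between servers or pools only requires updating the catalog entry, not the app.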

SaaS scenarios explored

The app and management scripts address many common SaaS-related scenarios, including:

Tenant registration, including database provisioning and initialization, and catalog registration
Routing and connection from the app to the correct tenant database
Database performance monitoring, alerting and management, including cross-pool monitoring and alerting
Schema management, including deployment of schema changes and reference data to all tenant databases
Distributed query across all tenant databases, allowing ad hoc real-time query and analysis
Extract of tenant data into an analytics database or data warehouse
Restoring a single tenant database to a point in time
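The distributed-query scenario above boils down to fanning the same query out across every tenant database and merging the per-tenant results. Here is a rough sketch of that fan-out pattern, using in-memory SQLite databases as stand-ins for tenant databases and a hypothetical `ticket_sales` schema (the sample itself uses SQL Database elastic query for this):

```python
import sqlite3

# Stand-ins for per-tenant databases: each venue gets its own database
# containing a ticket_sales table (hypothetical schema).
def make_tenant_db(sales):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ticket_sales (amount REAL)")
    conn.executemany("INSERT INTO ticket_sales VALUES (?)", [(s,) for s in sales])
    conn.commit()
    return conn

tenants = {
    "contosoconcerthall": make_tenant_db([25.0, 40.0]),
    "fabrikamjazzclub":   make_tenant_db([15.0]),
}

# Run the same aggregate query against every tenant database and merge
# the results, tagging each value with the tenant it came from.
def fan_out(query):
    return {name: conn.execute(query).fetchone()[0]
            for name, conn in tenants.items()}

totals = fan_out("SELECT SUM(amount) FROM ticket_sales")
print(totals)  # {'contosoconcerthall': 65.0, 'fabrikamjazzclub': 15.0}
```

The same loop shape underlies schema management too: apply one script to every database the catalog knows about, rather than to one shared database.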

A load generator simulates unpredictable tenant activity, allowing you to explore resource management scenarios, including scaling pools to handle daily or weekly workload patterns, load-balancing pools, and managing large variations in individual tenant workloads. A ticket generator allows you to explore analytics scenarios with significant amounts of data.

The app also benefits from other SQL Database features that are especially relevant in a database-per-tenant context, including automatic intelligent index tuning, which optimizes tenant database performance based on each tenant's actual workload profile.

Integrated with other Azure Services for an end-to-end SaaS scenario

Several other Azure services are also showcased as part of the app, including App Services and Traffic Manager in the app layer, Log Analytics (OMS) for monitoring and alerting at scale, SQL Data Warehouse for cross-tenant analytics, and Azure Resource Manager (ARM) templates for deployment.

The app will be extended over time to include more scenarios, from additional management patterns to deeper integration with other Azure services, including Power BI, Azure Machine Learning, Azure Search, and Active Directory, to build out a complete E2E SaaS scenario. We also want to explore the same scenarios with a multi-tenant database model in due course.

These SaaS patterns are also informing planning for future improvements to the SQL Database service.

Get started

Get started by installing the app with one click from GitHub, where you can download the code and management scripts. Learn more about the patterns and explore the tutorials. Let us know at saasfeedback@microsoft.com what you think of the sample and the patterns, and what you'd like to see added next.
Source: Azure

We're all about the quality: Azure achieves ISO 9001:2015 certification

As part of our ongoing effort to deliver the broadest and deepest set of compliance offerings, Microsoft Azure is proud to announce that we obtained the ISO 9001:2015 certification, addressing Quality Management systems.

This international standard is based on seven quality management principles:

Customer focus
Leadership commitment to quality objectives
Employee engagement in the quality goals set by leadership
Process-driven approach to achieve quality objectives
Continuous improvement
Evidence-based decision making
Customer and partner relationship management

ISO 9001:2015 provides guidance on implementing a quality management system focused on delivering quality products and maintaining a constant state of improvement to exceed customer expectations. This certification (our fifth in the ISO family of certifications) aligns with our goal of enabling customers to leverage our compliant, high-quality services across a broad range of regulated industries, markets, and regions. Achieving this certification underscores our commitment to delivering the highest-quality products possible.

The ISO 9001:2015 certificate for Microsoft Azure can be downloaded here. The certificate covers 52 services across the following offerings: Azure, Cloud App Security, Intune, PowerApps, Power BI, Flow, Genomics, and Graph (the detailed scope is listed on the certificate).

For more information on Microsoft Azure’s ISO 9001:2015 Certification and our vast compliance portfolio, please visit the Microsoft Trust Center.
Source: Azure