Data Integrity in Azure SQL Database

Microsoft takes data integrity very seriously. While there are traditional techniques used by DBAs in SQL Server to monitor data integrity (e.g. DBCC CHECKDB) and various methods to recover from database corruption, the Azure SQL Database engineering team has developed new techniques that can handle some classes of corruption automatically and without data loss. These techniques are used as part of the service to avoid data loss and downtime wherever possible.

This blog post outlines some of those techniques, how they work, and what they mean for customers wondering what steps they should take to safeguard their data in Azure SQL Database.

How we manage data integrity for Azure SQL Database

The job of protecting data integrity in Azure SQL Database involves a combination of techniques and evolving methods:

Extensive data integrity error alert monitoring. Azure SQL Database emits alerts for all errors and unhandled exceptions that indicate data integrity concerns. Each alert is routed to the engineering team for manual handling and investigation (and the general process followed is explained below). This alerting system has caught more integrity issues than anything else described in this blog post.

Backup and restore integrity checks. On an ongoing basis, the Azure SQL Database engineering team automatically tests restores of automated database backups across the service. Upon restore, databases also receive integrity checks using DBCC CHECKDB. Any issues found during the integrity check result in an alert to the engineering team.

I/O system “lost write” detection. Azure SQL Database has additional functionality to detect the most common cause of observed physical corruption issues: I/O system “lost writes”. This functionality tracks page writes and their associated LSNs (Log Sequence Numbers). Any subsequent read of a data page from disk is compared with the page’s expected LSN. If there is a mismatch between the LSN on disk and the expected LSN, the page is considered stale, resulting in an immediate alert to the engineering team.

Automatic Page Repair. Azure SQL Database leverages database replicas behind the scenes for business continuity purposes, so the service also leverages automatic page repair, the same technology used for SQL Server database mirroring and availability groups. If a replica cannot read a page due to a data integrity issue, a fresh copy of the page is retrieved from another replica, replacing the unreadable page without data loss or customer downtime. This functionality applies to premium tier databases and standard tier databases with geo-secondaries.

Data integrity at rest and in transit. Databases created in the service are set by default to verify pages with the CHECKSUM setting, which calculates a checksum over the entire page and stores it in the page header. Transport Layer Security (TLS) is also used for all communication, in addition to the base transport-level checksums provided by TCP/IP.
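As an illustrative sketch (the database name here is hypothetical), you can confirm the page verification setting for a database with a query like the following:

```sql
-- Check the page verification setting; Azure SQL databases default to CHECKSUM
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = N'MyDatabase';
```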

How we handle data integrity incidents

Incidents of wrong results or corruption are treated with the highest severity, with 24×7 work and support from across all Azure engineering teams. The goals when handling integrity incidents are to minimize unavailability and minimize the amount of data loss.

Addressing System Data Integrity Issues

Incidents that do not impact customer data or database availability will be corrected without notifying customers. Examples include those incidents that automatic page repair can address, or corruption to internal database metadata or telemetry that does not affect customer data or query results. 

Addressing Customer Data Integrity Issues

All wrong-results and customer data corruption issues are communicated to impacted customers as soon after confirmed detection as possible. Azure SQL DB engineers will:

Work directly with the customer to explain the scope of the corruption, outline recovery options, and allow the customer to choose the option that works best for their application and scenario.
Immediately initiate a restore to the point just prior to the corruption, and make that restore available free of charge to the customer under a different database name. Customers for whom availability is the highest priority often choose to begin using this version of the database while recovery operations on the version with corrupted data occur in parallel. In this case, after repair of the original database, the support engineer will assist the customer in identifying what data has diverged between the two and what operations should occur to reconcile them.
Where possible, assist the customer in understanding the scope of impact to their application, for example identifying whether data corruption has caused the application to change other data in an unexpected way.

Repair is achieved using various methods, and with steps taken in conjunction with customers. Options can include but are not limited to:

Rebuilding the index – for example for a non-clustered index where the clustered index or heap is not also corrupted.
Running DBCC CHECKDB with REPAIR_REBUILD where the repair has no possibility of data loss.
Running DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS where repairs can cause some data loss.
For scenarios where DBCC CHECKDB cannot repair the data integrity issue, engineers use a point-in-time restore to the point before the issue occurred, plus manual replay of relevant transactions from the transaction log. One example is when the transaction log has been corrupted in a way that prevents automatic replay but does not affect customer data.
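As a hedged sketch of the two DBCC-based repair options above (the database name is hypothetical; in Azure SQL Database these repairs are performed by the engineering team, and the single-user syntax shown is the on-premises SQL Server form):

```sql
-- Repair with no possibility of data loss
DBCC CHECKDB (N'MyDatabase', REPAIR_REBUILD);

-- Repair that may discard corrupted data; DBCC requires single-user mode for repairs
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDatabase', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDatabase SET MULTI_USER;
```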

Issues leading to wrong results or data corruption receive detailed post-mortems from the engineering team, and the associated repair items are closely tracked. Many significant enhancements have resulted from these post-mortems, including the “lost writes” functionality described earlier.

Running integrity checks in Azure SQL Database

The Azure SQL Database engineering team takes responsibility for managing data integrity. As such, it is not necessary for customers to run integrity checks in Azure SQL Database.

Even with the monitoring and protection provided by the service, customers can still choose to execute user-initiated integrity checks in Azure SQL Database. For example, customers may optionally run DBCC CHECKDB.
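Such an optional check can be as simple as the following, run in the context of the target database (WITH NO_INFOMSGS suppresses informational messages so that only errors are reported):

```sql
-- User-initiated integrity check of the current database
DBCC CHECKDB WITH NO_INFOMSGS;
```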

Evolving methodologies

The Azure SQL Database engineering team regularly reviews and continues to enhance the data integrity issue detection capabilities of the software and hardware used in the service. Although it is rare, if you encounter a data integrity error before receiving a notification from Microsoft customer support, please file a support case.

If you have feedback to share regarding Microsoft’s data integrity strategy in Azure SQL Database, we would like to hear from you. To contact the engineering team with feedback or comments on this subject, please email: SQLDBArchitects@microsoft.com.
Source: Azure

Azure is the Enterprise Cloud – highlights from Microsoft Ignite 2017

This is my favorite week of the year – spending five action packed days with customers, hearing what they are doing, and digging into their technology roadmaps. My discussions ranged from containers to cloud security to machine learning to quantum computing – and everything in between. There was also a huge number of new capabilities and Azure services released or updated at Ignite, so no shortage of new areas to dive into. 

To help net out a very big week, I wanted to provide a summary of the top Azure discussions taking place and some useful articles from Ignite. 

Hybrid cloud evolved. The general consensus I heard through the week was that the public vs. private cloud battle is less relevant, and that a distributed hybrid cloud makes sense both technically and business-wise. A distributed hybrid cloud combines public cloud and on-premises resources to optimize for technical situations and/or business policy. With Azure Stack systems now shipping, I saw customers better understand the approach of an intelligent cloud + intelligent edge that Satya talks about. And I was excited to see reporters, analysts, and customers quickly connecting the dots from Azure Stack to hybrid data with SQL Server 2017 + Azure SQL DB managed service + SQL migration, enabling fluid data location across a hybrid deployment. Here are some articles that speak to hybrid being a lot more than infrastructure…

At Ignite, Microsoft extends hybrid cloud beyond just infrastructure 

Microsoft Azure Stack Changes the Game in Hybrid Cloud Computing 

Microsoft just unleashed three long-anticipated secret weapons in the cloud wars 

Microsoft’s Azure has become a terror to other cloud computing players 

AI is here for real. The level of customer discussion about using AI in applications and business processes was impressive, and a bit unexpected. While AI is still relatively new in the market, it’s clear from Ignite that it’s an area many organizations are looking to use in the near term. Even among customers who aren’t using public cloud for infrastructure, I saw strong interest in using Azure Machine Learning and AI to enhance existing systems and explore more significant business process changes through AI. This year at Ignite, I ran into more people who categorize their role as “developer + data scientist”, which was perhaps not surprising to see, but definitely on the rise from past years. The Azure Machine Learning Workbench application was also getting a lot of buzz. Here are just a few examples of the buzz…

Microsoft launches new machine learning tools 

Microsoft Ignite 2017: It's all about choice, from SQL Server 2017 to Azure Machine Learning 

Microsoft Wants Developer to Get More Experimental with AI 

Security, security, security. There is no question that security remains top of mind within the technical community. While security has always been a hot topic, at this year’s Ignite the conversation was less focused on understanding cloud security than in past years. Instead, the security discussions I participated in were about broader security strategies, using machine learning and AI to improve security, and increasingly tapping into cloud technology to improve security efficacy. It’s great to see the shift away from “Is the cloud secure?” discussions to “How do we all get more secure?” discussions. As Brad Smith said during his keynote at Microsoft Envision this week, “The tech industry, customers and governments must work together to address cybersecurity. Security is a team sport.” I couldn’t agree more. While a bunch of the news at Ignite focused on Microsoft 365 security updates, there were also new capabilities released in Azure Security Center for hybrid cloud security…

Microsoft looks to the cloud to expand its security offerings 

Quantum computing takes a step closer to possible. When I asked people throughout Ignite for their keynote highlights, the quantum computing presentation during Satya’s keynote was consistently listed in the top 3. While most people quickly added, “I didn’t understand all of it!”, just being witness to this milestone, and getting a glimpse of the step-function change ahead of us, was inspiring. I’m also certain that the number of searches on ‘topological qubit’ has increased 10x in the past week. Here’s a well-done write-up on this complex topic…

Microsoft makes play for next wave of computing with quantum computing toolkit 

Thanks to everyone who joined us in Orlando. It’s an incredibly inspiring and motivating week for me, and hopefully the same for you! Looking forward to being back in Orlando with you again next year.
Source: Azure

Join us for the Azure Red Shirt Dev Tour

Now that Ignite is over, we can all roll up our sleeves and start using all the new services and features in Azure.

The Azure Red Shirt Dev Tour is for developers who want to receive detailed advice and instructions for building solutions in the cloud. The sessions will give you information about the latest Azure services, including VM scale sets, managed disks, hybrid cloud, Azure Stack, enterprise mobility & security, Xamarin/mobile development, SQL, AI, machine learning, and much more.

The speaker is Scott Guthrie, Microsoft’s cloud boss. He’ll code on stage for five hours, and you and your teams will walk away with firsthand knowledge of the best way to use Azure. Regardless of whether you’re a Java, Node, Python, Docker, .NET, or other developer, beginner or expert, you’ll definitely learn something new.

This FREE event takes place in Chicago (October 16th), Dallas (October 17th), Atlanta (October 18th), Boston (October 19th), and New York (October 20th).

Register today!

We hope to see you there!
Source: Azure

Ready for JavaOne: Bringing Java and Kubernetes together in Azure

The Java team at Microsoft has been working hard this year, collaborating with Java customers and developers around the globe to optimize the Java developer experience in Azure. In the last few weeks we’ve delivered exciting new features in Maven, Jenkins, Visual Studio Code, and IntelliJ. These features help Java developers rapidly adopt cloud-native patterns in Azure and debug faster. We have also added support for Managed Disks, Cosmos DB, and Container Service in the Azure Management Libraries for Java, and collaborated with partners such as Red Hat, Pivotal, CloudBees, and Azul to bring Java closer to the cloud. These are truly momentous days for Java, and as our team gets ready for JavaOne next week (where Microsoft will be a Silver sponsor), we are excited to announce that developers can now securely deploy and redeploy Java apps to Kubernetes in Azure Container Service using Maven!

Getting started

Azure Container Service makes it simple to create an optimized Kubernetes-based container hosting solution for Azure to run containerized applications stored in public or private registries, including Azure Container Registry. Today, you can use Maven to securely deploy and manage your container-based apps. Let’s start with a sample Spring Boot app you can clone from GitHub:

git clone -b k8s-private-registry https://github.com/microsoft/gs-spring-boot-docker
cd gs-spring-boot-docker/complete

After adding your private Docker registry credentials to your Maven settings.xml, build the app and containerize it like you always do, then deploy to Kubernetes in Azure Container Service:

mvn package docker:build docker:push fabric8:resource fabric8:apply

Then, get the IP address for your deployment:

kubectl get svc -w

And that’s it! It’s that easy to use Maven to deploy a Spring Boot app or any other Java app to Kubernetes in Azure Container Service. Make sure you check out the step-by-step instructions to get started today.
Next week: JavaOne

Today’s announcement wouldn’t be possible without our joint efforts with Red Hat and the ongoing collaboration enhancing the Fabric8 Maven plugin to add secure registry references. This collaboration highlights how important engaging with the Java ecosystem is for us – a key aspect of our presence at JavaOne. The Microsoft Java team, including feature owners, developer advocates, support engineers, and others, looks forward to meeting you in San Francisco next week. Swing by our booth in the Expo hall to learn more – we’d love to connect. Not attending JavaOne this year? Follow @OpenAtMicrosoft or sign up for updates!
Source: Azure

Monitoring Azure SQL Data Sync using OMS Log Analytics

Azure SQL Data Sync is a solution which enables customers to easily synchronize data either bidirectionally or unidirectionally between multiple Azure SQL databases and/or on-premises SQL Databases.

Previously, users had to manually look at Azure SQL Data Sync in the Azure portal or use PowerShell/REST APIs to pull the log and detect errors and warnings. By following the steps in this blog post, Data Sync users can configure a custom solution which will greatly improve the Data Sync monitoring experience. Remember, this solution can and should be customized to fit your scenario.

Automated email notifications

Users will no longer need to manually look at the log in the Azure portal or through PowerShell/REST APIs. By leveraging OMS Log Analytics, we can create alerts which, in the event of an error, go directly to the inboxes of those who need to see them.

Monitoring dashboard for all your Sync Groups 

Users will no longer need to manually look through the logs of each Sync Group individually to look for issues. Users will be able to monitor all their Sync Groups from any of their subscriptions in one place using a custom OMS view. With this view users will be able to surface the information that matters to them.

How do you set this up?

You can implement your custom Data Sync OMS monitoring solution in less than an hour by following the steps below and making minimal changes to the given samples.

You’ll need to configure 3 components:

PowerShell Runbook to feed Data Sync Log Data to OMS
OMS Log Analytics alert for email notifications
OMS view for monitoring

Download the 2 samples:

Data Sync Log PowerShell Runbook
Data Sync Log OMS View

Prerequisites:

Azure Automation Account
Log Analytics linked with OMS Workspace

PowerShell Runbook to get Data Sync Log 

We will use a PowerShell runbook hosted in Azure Automation to pull the Data Sync log data and send it to OMS. A sample script is included. As a prerequisite you need to have an Azure Automation Account. You will need to create a runbook and the schedule for running it.

To create the runbook:

Under your Azure Automation Account, click the Runbooks tab under Process Automation.
Click Add a Runbook at the top left corner of the Runbooks blade.
Click Import an existing Runbook.
Under Runbook file use the given “DataSyncLogPowerShellRunbook” file. Set the Runbook type as “PowerShell”. You can use any name you want.
Click Create. You now have your runbook.
Under your Azure Automation Account, click the Variables tab under Shared Resources.
Click Add a variable at the top left side of the variables blade. We need to create a variable to store the last execution time for the runbook. If you have multiple runbooks you'll need one variable for each.
Set the name as “DataSyncLogLastUpdatedTime” and Type as DateTime.
Select the Runbook and click the edit button at the top of the blade.
Make the required changes (details are in the script):

Azure information
Sync Group information
OMS information (find this information at OMS Portal -> Settings -> Connected Sources)

Run the runbook in the test pane and check to make sure it’s successful.

Note: If you have errors make sure you have the newest PowerShell Module installed. You can do this in the Modules Gallery in your Automation Account.

Click Publish

To schedule the runbook:

Under your runbook, click the Schedules tab under Resources.
Click Add a Schedule in the Schedules blade.
Click Link a Schedule to your runbook.
Click Create a new schedule.
Set Recurrence to Recurring and set the interval you’d like. You should use the same interval here, in the script, and in OMS.
Click Create

To monitor if your automation is running:

Under Overview for your automation account, find the Job Statistics view under Monitoring. Pin this to your dashboard for easy viewing.
Successful runs of the runbook will show as “Completed” and failed runs will show up as “Failed”.

OMS log reader alert for email notifications

We will use OMS Log Analytics to create an alert. As a prerequisite, you need to have Log Analytics linked with an OMS workspace.

In the OMS portal click on Log Search towards the top left.
Create a query to select the errors and warnings by sync group within the interval you are using.

Type=DataSyncLog_CL LogLevel_s!=Success | measure count() by SyncGroupName_s interval 60minute

After running the query click the bell that says Alert.
Under Generate alert based on select Metric Measurement.

Set the Aggregate Value to Greater than.
After greater than, use the threshold you’d like to set before you receive notifications.
Transient errors are expected in Data Sync. We recommend that you set the threshold to 5 to reduce noise.

Under Actions set Email notification to “Yes”. Enter the desired recipients.
Click Save. You will now receive email notifications based on errors.

OMS view for monitoring

We will create an OMS view to visually monitor all the sync groups. The view includes a few main components:

The Overview tile shows how many errors, successes, and warnings all your sync groups have.
Tile for all sync groups, showing the number of errors and warnings per Sync Group that has them. Groups with no issues will not appear.
Tile for each Sync Group, showing the number of errors, successes, and warnings, plus the recent error messages.

To configure the view:

On the OMS home page, click the plus on the left to open the view designer.
Click Import on the top bar of the view designer and select the “DataSyncLogOMSView” file.
The given view is a sample for managing two Sync Groups. You can edit it to fit your scenario. Click Edit and make the following changes.

Create new “Donut & List” objects from the Gallery as needed.
In each tile update the queries with your information. 

On all tiles, change the TimeStamp_t interval as desired
On the Sync Group specific tiles, update the Sync Group names.

In each tile update the titles as needed.

Click Save and your view is ready.

Cost

In most cases this solution will be free.

Azure Automation: There may be a cost incurred with the Azure Automation Account depending on your usage. The first 500 minutes of job run time per month is free. In most cases you will use less than 500 minutes for this solution. To avoid charges, schedule the runbook to run at an interval of 2 hours or more.

OMS Log Analytics: There may be a cost associated with OMS depending on your usage. The free tier includes 500 MB of ingested data per day. In most cases this will be enough for this solution. To decrease usage, use the failure-only filtering included in the runbook. If you are using more than 500 MB per day, upgrade to the paid tier so that analytics does not stop when you hit the limit.

Code samples

Data Sync Log PowerShell Runbook
Data Sync Log OMS View

Source: Azure

What’s brewing in Visual Studio Team Services: September 2017 Digest

Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. This month we’ll take a look at support for Git forks, multi-phase builds, Work Items hub improvements, new reporting widgets, and an updated NDepend extension. Let’s get started with our first look at forking in VSTS.

Forks preview

Preview feature

This capability is enabled through the Git Forks preview feature on your account.

You can now fork and push changes back within a VSTS account! This is the first step on our journey with forks. The next step will be to enable you to fork a repository into a different VSTS account.

A fork is a complete, server-side copy of a repository, including all files, commits, and (optionally) branches. Forks are a great way to isolate experimental, risky, or confidential changes from the original codebase. Once you’re ready to share those changes, it’s easy to use pull requests to push the changes back to the original repository.

Using forks, you can also allow a broad range of people to contribute to your repository without giving them direct commit access. Instead, they commit their work to their own fork of the repository. This gives you the opportunity to review their changes in a pull request before accepting those changes into the central repository.

A fork starts with all the contents of its upstream (original) repository. When you create a fork, you can choose whether to include all branches or limit to only the default branch. None of the permissions, policies, or build definitions are applied. The new fork acts as if someone cloned the original repository, then pushed to a new, empty repository. After a fork has been created, new files, folders, and branches are not shared between the repositories unless a PR carries them along.

You can create PRs in either direction: from fork to upstream, or upstream to fork. The most common direction will be from fork to upstream. The destination repository’s permissions, policies, builds, and work items will apply to the PR.

See the documentation for forks for more information.

Create a folder in a repository using web

You can now create folders via the web in your Git and TFVC repositories. This replaces the Folder Management extension, which will now be deprecated.

To create a folder, click New > Folder in either the command bar or context menu:

Wiki page deep linking

Wiki now supports deep linking to sections within a page and across pages, which is really useful for creating a table of contents. You can reference a heading in the same page or another page by using the following syntax:

Same page: [text to display](#section-name)
Another page: [text to display](/page-name#section-name)

See the documentation for Markdown syntax guidance for more information.

Preview content as you edit Wiki pages

Data shows that users almost always Preview a wiki page multiple times while editing content. For each page edit, users click on Preview 1-2 times on average. This can be particularly time-consuming for those new to markdown. Now you can see the preview of your page while editing.

Paste rich content as HTML in Wiki

You can now paste rich text in the markdown editor of Wiki from any browser-based applications such as Confluence, OneNote, SharePoint, and MediaWiki. This is particularly useful for those who have created rich content such as complex tables and want to show it in Wiki. Simply copy content and paste it as HTML.

Multi-phase builds

Modern multi-tier apps often must be built with different sets of tasks, on different sets of agents with varying capabilities, sometimes even on different platforms. Until now, in VSTS you had to create a separate build for each aspect of these kinds of apps. We’re now releasing the first set of features to enable multi-phase builds.

You can configure each phase with the tasks you need, and specify different demands for each phase. Each phase can run multiple jobs in parallel using multipliers. You can publish artifacts in one phase, and then download those artifacts to use them in a subsequent phase.

You’ll also notice that all of your current build definitions have been upgraded to have a single phase. Some of the configuration options such as demands and multi-configuration will be moved to each phase.

We’re still working on a few features, including:

Ability to select a different queue in each phase.
Ability to consume output variables from one phase in a subsequent phase.
Ability to run phases in parallel. (For now, all the phases you define run sequentially).

CI builds for Bitbucket repositories

It's now possible to run CI builds from connected Bitbucket repositories. To get started, set up a service endpoint to connect to Bitbucket. Then in your build definition, on the Tasks tab select the Bitbucket source.

After that, enable CI on the Triggers tab, and you’re good to go.

This feature works only for builds in VSTS accounts and with cloud-hosted Bitbucket repositories.

Release template extensibility

Release templates give you a baseline to get started when defining a release process. Previously, you could upload new templates to your account, but now authors can include release templates in their extensions. You can find an example on the GitHub repo.

Conditional release tasks and phases

Similar to conditional build tasks, you can now run a task or phase only if specific conditions are met. This will help you model rollback scenarios.

If the built-in conditions don’t meet your needs, or if you need more fine-grained control over when the task or phase runs, you can specify custom conditions. Express the condition as a nested set of functions. The agent evaluates the innermost function and works its way outward. The final result is a boolean value that determines whether the task runs.
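For illustration, a custom condition in this nested-function syntax might look like the following (the specific variable shown is an example, not from the announcement above):

```
and(succeeded(), eq(variables['Release.Reason'], 'Manual'))
```

Here the agent evaluates eq(...) first and then and(...), yielding true only when all previous steps succeeded and the release was started manually.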

Personalized notifications for releases

Release notifications are now integrated with the VSTS notification settings experience. Those managing releases are now automatically notified of pending actions (approvals or manual interventions) and important deployment failures. You can turn off these notifications by navigating to the Notification settings under the profile menu and switching off Release Subscriptions. You can also subscribe to additional notifications by creating custom subscriptions. Admins can control subscriptions for teams and groups from the Notification settings under Team and Account settings.

Release definition authors will no longer have to manually send emails for approvals and deployment completions.

This is especially useful for large accounts that have multiple stakeholders for releases, including people other than the approver, release creator, and environment owner who might want to be notified.

See the documentation on managing release notifications for more information.

Branch filters in release environment triggers

In the new release definition editor, you can now specify artifact conditions for a particular environment. Using these artifact conditions, you have more granular control over which artifacts should be deployed to a specific environment. For example, for a production environment, you may want to make sure that only builds generated from the master branch are deployed. This filter needs to be set for all artifacts that should meet this criterion.

You can also add multiple filters for each artifact that is linked to the release definition. Deployment will be triggered to this environment only if all the artifact conditions are successfully met.

Gulp, Yarn, and more authenticated feed support

The npm task today works seamlessly with authenticated npm feeds (in Package Management or external registries like npm Enterprise and Artifactory), but until now it’s been challenging to use a task runner like Gulp or an alternate npm client like Yarn unless that task also supported authenticated feeds. With this update, we’ve added a new npm Authenticate build task that will add credentials to your .npmrc so that subsequent tasks can use authenticated feeds successfully.

Run webtests using the VSTest task

Using the Visual Studio Test task, webtests can now be run in the CI/CD pipeline. Webtests can be run by specifying the tests to run in the task’s assembly input. Any test case work item that has an “associated automation” linked to a webtest can also be run by selecting the test plan/test suite in the task.

Webtest results will be available as an attachment to the test result. This can be downloaded for offline analysis in Visual Studio.

This capability is dependent on changes in the Visual Studio test platform and requires installing Visual Studio 2017 Update 4 on the build/release agent. Webtests cannot be run using prior versions of Visual Studio.

Similarly, webtests can be run using the Run Functional Test task. This capability is dependent on changes in the Test Agent, which will be available with Visual Studio 2017 Update 5.

See the Load test your app in the cloud using Visual Studio and VSTS quickstart as an example of how you can use this together with load testing.

Work Items hub

Preview feature

To use this capability you must have the New Work Items Hub preview feature enabled on your profile and/or account.

The Work Items hub allows you to focus on relevant items inside a team project via five pivots:

Assigned to me – All work items assigned to you in the project in the order they’re last updated. To open or update a work item, click its title.
Following – All work items you’re following.
Mentioned – All work items you’ve been mentioned in, for the last 30 days.
My activity – All work items that you have recently viewed or updated.
Recently created – All work items recently created in the project.

Creating a work item from within the hub is just one click away.

While developing the new Work Items hub, we wanted to ensure that you could re-create each one of the pivots via the Query Editor. Previously, we supported querying on items that you’re following and items that were assigned to you, but this sprint we created two new macros: @RecentMentions and @MyRecentActivity. With these, you can now create a query that returns the work items where you’ve been mentioned in the last 30 days, or one that returns your latest activity.
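As a rough sketch of how these macros could also be used programmatically, the snippet below builds a WIQL query around either macro and posts it to the work-item query REST endpoint. The account/project names, PAT handling, and API version are illustrative assumptions, not part of this announcement:

```python
import base64
import json
import urllib.request

def build_wiql(macro):
    """Build a flat-list WIQL query using one of the new macros,
    e.g. "@RecentMentions" or "@MyRecentActivity"."""
    return (
        "SELECT [System.Id], [System.Title] "
        "FROM WorkItems "
        f"WHERE [System.Id] IN ({macro}) "
        "ORDER BY [System.ChangedDate] DESC"
    )

def run_query(account, project, pat, macro):
    """POST the query to the VSTS WIQL endpoint (network call; sketch only).
    The API version is an assumption -- check the REST reference you target."""
    url = f"https://{account}.visualstudio.com/{project}/_apis/wit/wiql?api-version=3.0"
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=json.dumps({"query": build_wiql(macro)}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["workItems"]

# build_wiql("@RecentMentions")   -> items you were mentioned in (last 30 days)
# build_wiql("@MyRecentActivity") -> items you recently viewed or updated
```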

See the documentation for the Work Items hub for more information.

Customizable work item rules

Whether it be automatically setting the value of certain work item fields or defining the behavior of those fields in specific states, project admins can now use rules to automate the behavior of work item fields and ease the burden on their teams. Here are just a few examples of the key scenarios you will be able to configure using rules.

When a work item state changes to Active, make Remaining Work a required field
When a work item is Proposed and the Original Estimate is changed, copy the value of Original Estimate to the Remaining Work field
When you add a custom state with its own by/date fields, use rules to automatically set those fields’ values on state transitions
When a work item state changes, set the value of custom by/date fields

To get started with rules, simply follow these steps:

Select Customize from the work item’s context menu
Create or select an existing inherited process
Select the work item type you would like to add rules on, click Rules, and click New rule

Check out the documentation for custom rules for more information.

Custom Fields and Tags in Notifications

Notifications can now be defined using conditions on custom fields and tags – not only when they change but when certain values are met. This has been a top customer suggestion in UserVoice (see 6059328 and 2436843) and will allow for a more robust set of notifications that can be set for work items.

Inline add on Delivery Plans

New feature ideas can arrive at any moment, so we’ve made it easier to add new features directly to your Delivery Plans. Simply click the New item button available on hover, enter a name, and hit enter. A new feature will be created with the area path and iteration path you’d expect.

New Queries experience

Preview Feature

To use this capability you must have the New Queries Experience preview feature enabled on your profile.

The Queries hub has a new look and feel, changes in navigation, and some exciting new features such as the ability to search for queries.

You’ll notice that the left pane has been removed. To quickly navigate between your favorite queries, use the dropdown in the query title.

We’ve also made the following improvements:

Create and edit followed work item queries with the @Follows macro
Query for work items you were mentioned in with the @Mentions macro
Save as now copies charts to the new query
Simplified command bars for Results and Editor
Expanded filter capabilities in the result grid

Burndown and Burnup widgets

The Burndown and Burnup widgets are now available for those who have installed the Analytics Extension on their accounts.

The Burndown widget lets you display a burndown across multiple teams and multiple sprints. You can use it to create a release burndown, a bug burndown, or a burndown on just about any scope of work over any time period. You can even create a burndown that spans team projects!

The Burndown widget helps you answer the question: Will we complete this project on time?

To help you answer that question, it provides these features:

Displays percentage complete
Computes average burndown
Shows you when you have items not estimated with story points
Tracks your scope increase over the course of the project
Projects the completion date based on historical burndown and scope increase trends

You can burn down any work item type based on a count of work items or on the sum of a specific field (e.g., Story Points). You can burn down using daily, weekly, or monthly intervals, or based on an iteration schedule. You can even add additional filter criteria to fine-tune the exact scope of work you are burning down.

The widget is highly configurable, allowing you to use it for a wide variety of scenarios. We expect our customers will find amazing ways to use these two widgets.

The Burnup widget is just like the Burndown widget, except that it plots the work you have completed, rather than the work you have remaining.

Streamlined user management

Preview feature

This capability is enabled through the Streamlined User Management preview feature on your profile and/or account.

Effective user management helps administrators ensure they are paying for the right resources and enabling the right access in their projects. We’ve heard repeatedly, in support calls and from our customers, that they want capabilities to simplify this process in VSTS. This sprint, we are releasing a new experience that begins to address these issues. See the documentation for the User hub for more information. Here are some of the changes that you’ll see light up:

Invite people to the account in one easy step

Administrators can now add users to an account with the proper extensions, access level, and group memberships all at the same time, enabling their users to hit the ground running. You can also invite up to 50 users at once through the new invitation experience.

More information when managing users

The Manage users page now shows you more information to help you understand users in your account at a glance. The table of users includes a new column called Extensions that lists the extensions each user has access to.

Adding Users to Projects and Teams

We want to make sure each of your administrators has the tools they need to easily get your team up and running. As part of this effort, we are releasing an improved project invitation dialog. This new dialog enables your project administrators to easily add users to the teams they should be members of. If you are a project or team administrator, you can access this dialog by clicking the Add button on your project home page or the Team Members widget.

Improved authentication documentation and samples

In the past, our REST documentation focused solely on using personal access tokens (PATs) for access to our REST APIs. We’ve updated our documentation for extensions and integrations to give guidance on how best to authenticate given your application scenario. Whether you’re developing a native client application, an interactive web app, or simply calling an API via PowerShell, we have a clear sample of how best to authenticate with VSTS. For more information, see the documentation.
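For instance, PAT-based access boils down to HTTP Basic auth with an empty user name and the PAT as the password. A minimal Python sketch, where the account name and API version are placeholder assumptions:

```python
import base64
import urllib.request

def pat_auth_header(pat):
    """VSTS REST APIs accept a PAT via HTTP Basic auth:
    empty user name, PAT as the password, base64-encoded."""
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

def list_projects(account, pat):
    """GET the team project list (network call; account name is a placeholder)."""
    url = f"https://{account}.visualstudio.com/_apis/projects?api-version=2.0"
    req = urllib.request.Request(url, headers=pat_auth_header(pat))
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

For interactive web apps or native clients, the updated docs steer you toward OAuth or Azure AD library flows instead of PATs; the header-building pattern above is the PowerShell/script scenario.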

Extension of the month: Code Quality NDepend for TFS 2017 and VSTS

With over 200 installs and a solid 5-star rating, NDepend is one of the best code quality solutions in our Marketplace, and the extension adds a number of new features to your VSTS experience:

NDepend Dashboard Hub shows a recap of the most relevant data, including technical debt estimations, code size, Quality Gates status, and rule and issue counts.
Quality Gates, which are code quality criteria that must be enforced before committing to source control and, eventually, before releasing.
Rules that check your code against best practices, with 150 included out of the box and the ability to create custom additions.
Technical Debt and Issues, generated by checking your code against industry best practices, including deep issue drill-down.
Trends are supported and visualized across builds, so you can see if you’re improving or adding more technical debt.
Code Metrics recap for each assembly, namespace, class, or method.
A recap is shown in each Build Summary.

With the new pricing plans for NDepend, you can enable a code quality engineer on your team to use all of these tools starting at a low monthly cost.

For full pricing info, check out the NDepend extension.

These releases always include more than I can cover here. Please check out the August 28th and the September 15th release notes for more information. Be sure to subscribe to the DevOps blog to keep up with the latest plans and developments for VSTS.

Happy coding!

@tfsbuck
Source: Azure

Announcing Kentico Cloud Solution in Azure Marketplace

Kentico Cloud is a cloud-first headless CMS for digital agencies and end clients. We’re excited to announce that its sample site is now available in the Azure Marketplace. With this solution, your Azure App Service web application can read its content from Kentico Cloud. Kentico Cloud stores the content, tracks visitors, provides you with statistics, and allows for personalization of the content for various customer segments. It allows you to distribute the content through an API to any channel and device, such as websites, mobile devices, mixed reality devices, and presentation kiosks.

Here are a few highlights of using Kentico Cloud:

Content is served via REST, backed by a super-fast CDN. This means that the app or website can be developed using any programming language on any platform.
SDKs for multiple programming languages are provided as open source projects developed by Kentico in collaboration with a developer community.
Built-in visitor-tracking feature tracks individual visitors. It allows you to analyze the data to identify customer segments with similar profiles or behavior.
Based on the gathered data, Kentico Cloud allows you to deliver personalized content and interactions with customers.
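Because content is exposed over plain REST, fetching it needs nothing beyond an HTTP client. A minimal Python sketch against the Kentico Cloud Delivery API, where the project ID and content-type filter are hypothetical placeholders:

```python
import json
import urllib.request

# Kentico Cloud Delivery API root (CDN-backed, read-only)
DELIVER_API = "https://deliver.kenticocloud.com"

def items_url(project_id, content_type=None):
    """Build the Delivery API URL for listing items,
    optionally filtered to a single content type."""
    url = f"{DELIVER_API}/{project_id}/items"
    if content_type:
        url += f"?system.type={content_type}"
    return url

def fetch_items(project_id, content_type=None):
    """Fetch published content items over plain REST (network call; sketch only)."""
    with urllib.request.urlopen(items_url(project_id, content_type)) as resp:
        return json.load(resp)["items"]

# e.g. fetch_items("your-project-id", "article") would return the
# published items of the hypothetical "article" content type.
```

The same endpoint is what the language SDKs wrap, which is why any platform that can issue an HTTP GET can consume the content.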

The Kentico Cloud sample application utilizes the Git Deploy feature in Azure App Service. With Git Deploy, you don’t have to package your application for Azure Marketplace. Instead, Azure can build your source code from GitHub itself and deploy the app to an arbitrary Azure App Service Web App instance.

Go to the Azure Portal to create a sample application that uses Kentico Cloud.

References

Kentico Cloud
Case Studies

Source: Azure

Azure Analysis Services adds firewall support

We are pleased to introduce firewall support for Azure Analysis Services. By using the new firewall feature, customers can lock down their Azure Analysis Services (Azure AS) servers to accept network traffic only from desired sources. Firewall settings are available in the Azure Portal in the Azure AS server properties. A preconfigured rule called “Allow access from Power BI” is enabled by default so that customers can continue to use their Power BI dashboards and reports without friction. Permission from an Analysis Services admin is required to enable and configure the firewall feature.

Client computers can be granted access by adding individual IPv4 addresses or IPv4 address ranges to the firewall settings. It is also possible to configure the firewall programmatically by using a Resource Manager template along with Azure PowerShell, the Azure Command Line Interface (CLI), the Azure portal, or the Resource Manager REST API. Forthcoming articles on the Microsoft Azure blog will provide detailed information on how to configure the Azure Analysis Services firewall.
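As an illustration of the programmatic route, the sketch below builds the `ipV4FirewallSettings` fragment that goes in the `properties` of a `Microsoft.AnalysisServices/servers` resource in a Resource Manager template or REST call. Property names follow the published Resource Manager schema, but verify them against the API version you target; the rule names and IP ranges here are made up:

```python
import json

def firewall_settings(rules, allow_power_bi=True):
    """Build the ipV4FirewallSettings fragment for an Azure AS server resource.

    rules: list of (name, start_ip, end_ip) tuples; a single address uses the
    same value for start and end. Property names follow the
    Microsoft.AnalysisServices Resource Manager schema (verify against the
    API version you target)."""
    return {
        "ipV4FirewallSettings": {
            "firewallRules": [
                {"firewallRuleName": name, "rangeStart": start, "rangeEnd": end}
                for name, start, end in rules
            ],
            # corresponds to the preconfigured "Allow access from Power BI" rule
            "enablePowerBIService": allow_power_bi,
        }
    }

# Example: allow one office range plus a single build machine (made-up IPs).
body = {"properties": firewall_settings([
    ("office", "203.0.113.0", "203.0.113.255"),
    ("buildAgent", "198.51.100.7", "198.51.100.7"),
])}
print(json.dumps(body, indent=2))
```

The same JSON body can be sent via the Resource Manager REST API or dropped into a template deployed with Azure PowerShell or the CLI.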

Submit your own ideas for features on our feedback forum and learn more about Azure Analysis Services.
Source: Azure

Introducing HDInsight integration with Azure Log Analytics Preview

Operating Big Data applications successfully at scale is a key consideration for our enterprise customers. Typical Big Data applications span multiple technologies as well as operating environments. Monitoring this complex infrastructure efficiently is a key challenge for our customers.

Today, we announce the preview of HDInsight cluster monitoring in Azure Log Analytics, ushering in a new era of enterprise-grade Hadoop monitoring for mission-critical analytics workloads. HDInsight customers can now monitor and debug their Hadoop, Spark, HBase, Kafka, Interactive Query, and Storm clusters in Azure Log Analytics.

This release introduces the following key capabilities:

Monitor all of your HDInsight clusters and other Azure resources from a single pane of glass.
Extensible, workload-specific dashboards along with a sophisticated analytical query language for deep analytics.
Alerts on critical issues with the built-in Log Analytics alerting infrastructure.
Troubleshoot issues faster by having Hadoop, YARN, Spark, Kafka, HBase, Hive, and Storm logs and metrics in one place.

HDInsight Dashboard in Azure Log Analytics

Here is a quick video on how to use HDInsight in Azure Log Analytics.

Get started today

HDInsight integration with Azure Log Analytics helps you gain greater visibility into your Big Data environment. Learn more about the capabilities and simplify monitoring of your Big Data applications.
Source: Azure

Paxata launches Self-Service Data Preparation on Azure HDInsight to accelerate Data Prep

We are pleased to announce the expansion of HDInsight Application Platform to include Paxata, a leading self-service data preparation offering. You can get this offering now at Azure Marketplace and read more on the press announcement by Paxata.

Azure HDInsight is the industry-leading, fully managed cloud Apache Hadoop and Spark offering, which gives you optimized open-source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and Microsoft R Server, backed by a 99.9% SLA. Paxata’s Adaptive Information Platform empowers business consumers to turn raw data into ready information, instantly and automatically, in order to gain fast time-to-insight, thus accelerating time to value for customers using HDInsight. This combined offering of Paxata on Azure HDInsight enables customers to gain insights faster while running on an enterprise-ready platform.

Microsoft Azure HDInsight – Open Source Big Data Analytics at Enterprise grade & scale

Each of Azure HDInsight's big data technologies is easily deployable as a managed cluster with enterprise-level security and monitoring. The ecosystem of productivity applications in Big Data has grown with the goal of making it easier for customers to solve their big data and analytical problems faster. Today, customers often find it challenging to discover these productivity applications and then, in turn, struggle to install and configure them.

To address this gap, the HDInsight Application Platform provides a unique experience in HDInsight where independent software vendors (ISVs) can offer their applications directly to customers, who can easily discover, install, and use these applications built for the Big Data ecosystem with a single click.

The largest and most time-consuming challenge for analytics is simply getting the data ready. Roughly 80% of the time is spent bringing together data from diverse sources, cleansing, shaping, and preparing data. As part of this integration, Paxata has optimized their Spark-based Adaptive Information Platform on Azure HDInsight to simplify information management for business consumers.

Paxata Self-Service Data Preparation – Accelerates analytics and time to insight

The Paxata Self-Service Data Preparation application, built on the Adaptive Information Platform, provides an intuitive solution that enables any business consumer to turn raw data into trustworthy information and gain insights from their data faster. You can combine unstructured and structured data from various sources, then cleanse, shape, and publish the data to any destination. Business consumers work in an interactive, visual experience with complete governance and reliable performance provided by HDInsight. This truly enables a self-service model for big data, where non-technical users can harness the power of big data to accelerate insights.

The following are the salient highlights of Adaptive Information Platform:

Easy to use: Familiar to customers, business consumers use an Excel-like, intuitive, interactive visual experience to interact with data with no coding required.
Smart: Algorithmic intelligence is used to recommend how to join and append datasets and normalize values.
Unified Information Platform: Paxata provides a unified solution for data integration, data quality, enrichment, collaboration and governance.
Built-in governance and security: Paxata provides self-documenting data lineage and support for authentication, authorization, encryption, auditing, and usage tracking.
Built for scale: Powered by the Apache Spark™-based engine for in-memory high-performance, parallel, pipelined, distributed processing.
Built for the cloud: Elastic scalability to support variable workloads.

The following image shows how the Paxata Adaptive Information Platform delivers comprehensive information management with governance, scalability, and extensibility.

Paxata on Azure HDInsight: Simplified information management at enterprise scale.

Customers can install Paxata on HDInsight using the one-click deploy experience of the HDInsight Application Platform. Paxata’s Adaptive Information Platform is deployed as an application in a secure and compliant manner and doesn’t require customers to open any ports. All requests are routed through the secure gateway on HDInsight, and users are also authenticated with Paxata’s own authentication system.

Once provisioned, business consumers can access the data using the self-service interface in a secure manner and analyze large data volumes interactively. This is made possible because Paxata’s Adaptive Information Platform leverages Apache Spark™ running on Azure HDInsight, a managed service backed by an enterprise-grade SLA of 99.9%. This ensures that the end user, in this case a business consumer, can use Paxata’s Adaptive Information Management Platform and focus on turning raw data into information without worrying about the underlying platform.

Getting started with Paxata’s Adaptive Information Platform on HDInsight

To install Paxata’s Adaptive Information Platform on HDInsight, you have to create an HDInsight 3.6 cluster with Apache Spark 2.1. You can choose Paxata as an application when creating a new cluster, or add Paxata to an existing cluster. If you don’t have a license key, you can get one at the Paxata Azure data prep page.

The following screenshot shows how to install Paxata on HDInsight Spark cluster.

Once Paxata’s Adaptive Information Platform is installed you can launch it by browsing to the applications blade inside the HDInsight cluster.

Resources

Install Paxata on Azure HDInsight
Video series on Paxata
Learn more about Azure HDInsight
Learn more about Paxata
Press release by Paxata

Summary

We are pleased to announce the expansion of HDInsight Application Platform to include Paxata. Paxata’s Adaptive Information Platform empowers business consumers to turn raw data into ready information, instantly, in order to gain fast time-to-insights, thus accelerating time to value for customers using HDInsight. This combined offering of Paxata on Azure HDInsight enables customers to gain insights faster while running on an enterprise ready platform.
Source: Azure