Automation of Azure Analysis Services with Service Principals and PowerShell

Azure Analysis Services presents opportunities for automating administrative tasks, including server provisioning, scale up/down, pause/resume, model management, data refresh, and deployment. Automation leverages cloud efficiencies and helps ensure the repeatability and reliability of mission-critical systems. These tasks can be performed in the Azure cloud using PowerShell in unattended mode, and services such as Azure Automation exist to support these processes. They should be executed using service principals for enhanced security and ease of management.

Service principals are similar to on-premises service accounts, but for Azure. They use credentials in the form of an application ID along with a password or certificate. Model permissions are assigned to service principals through role membership, just like normal Azure Active Directory UPN accounts. The remainder of this post shows how to create a service principal for use with the Analysis Services cmdlets available in the SqlServer PowerShell module.

Creation of service principals

Learn more about how to create service principals in the Azure Portal, and how to create service principals in PowerShell with either a password or a certificate.

Role membership

Once the service principal is created, its application ID can be assigned permissions in Azure Analysis Services server or model roles using the following syntax. The example below adds a service principal to the server administrators group in SSMS.

app:<app guid>@<tenant guid>

The application can also be selected in SSMS using the account picker by searching for its name.

Execution of administrative tasks with PowerShell and service principals

A prerequisite for this section is to install the latest Azure.AnalysisServices and SqlServer PowerShell modules:
Install-Module -Name Azure.AnalysisServices
Install-Module -Name SqlServer

The following example shows how to log in using a service principal application ID and password, and how to process (data refresh) a table in a model.

Param (
    [Parameter(Mandatory=$true)] [String] $AppId,
    [Parameter(Mandatory=$true)] [String] $PlainPWord,
    [Parameter(Mandatory=$true)] [String] $TenantId
)
$PWord = ConvertTo-SecureString -String $PlainPWord -AsPlainText -Force
$Credential = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList $AppId, $PWord
Login-AzureAsAccount -Credential $Credential -ServicePrincipal -TenantId $TenantId -RolloutEnvironment "southcentralus.asazure.windows.net"
Invoke-ProcessTable -Server "asazure://southcentralus.asazure.windows.net/myserver" -TableName "MyTable" -Database "MyDb" -RefreshType "Full"

Note that the login cmdlet used is Login-AzureAsAccount, not Login-AzureRmAccount. The former should be used for Azure Analysis Services database-level operations such as those enabled by the Analysis Services cmdlets in the SqlServer PowerShell module. The latter should be used for Azure resource management operations such as those enabled by the AzureRM.AnalysisServices PowerShell module.

The following example shows how to log in using a service principal application ID and self-signed certificate, and how to process (data refresh) a table in a model.
Param (
    [Parameter(Mandatory=$true)] [String] $AppId,
    [Parameter(Mandatory=$true)] [String] $CertThumbprint,
    [Parameter(Mandatory=$true)] [String] $TenantId
)
Login-AzureAsAccount -RolloutEnvironment "southcentralus.asazure.windows.net" -ServicePrincipal -ApplicationId $AppId -CertificateThumbprint $CertThumbprint -TenantId $TenantId
Invoke-ProcessTable -Server "asazure://southcentralus.asazure.windows.net/myserver" -TableName "MyTable" -Database "MyDb" -RefreshType "Full"

Storing credentials and certificates in Azure Automation

Credentials and certificates can be securely stored in Azure Automation and extracted for use in runbooks. Learn more about credential assets and certificate assets in Azure Automation.
Source: Azure

Announcing Default Encryption for Azure Blobs, Files, Table and Queue Storage

For most customers, security is not only of the utmost importance but also a deciding factor in choosing a public cloud provider. Customers require their data to be encrypted at rest per their security and compliance needs. We at Azure Storage take security and privacy seriously and work tirelessly to help protect your data. Azure customers already benefit from Storage Service Encryption (SSE) for Azure Blob and File storage using Microsoft Managed Keys, or Customer Managed Keys for Azure Blob storage.

Central to our strategy of protecting our customers' data, we are taking security a step further by enabling encryption by default, using Microsoft Managed Keys, for all data written to Azure Storage services (Blob, File, Table, and Queue storage), for all storage accounts (Azure Resource Manager and Classic), both new and existing. SSE for Managed Disks, including the import scenario, will also be supported. To learn more, visit the Managed Disks & SSE FAQ.

All data written to Azure Storage will be automatically encrypted by the Storage service prior to persisting, and decrypted prior to retrieval. Encryption and decryption are completely transparent to the user. All data is encrypted using 256-bit AES encryption, also known as AES-256, one of the strongest block ciphers available. With encryption enabled by default, customers do not have to make any changes to their applications. To verify that encryption is enabled for their storage accounts, customers can either query the encryption status of their data for blobs and files (not available for table and queue storage) or check the account properties. There is neither an additional charge nor any performance degradation in using this feature.

We will be enabling this capability region by region, expanding to all Azure regions and Azure clouds in the coming weeks.

Visit documentation to learn more about Storage Service Encryption with Service Managed Keys and Storage Service Encryption with Customer Managed Keys.
Source: Azure

Create your own pfSense on Azure

pfSense is a widely used open-source firewall product. Azure provides a commercial version of pfSense, but some open-source fans would like to build their own pfSense in the cloud. Here is an example of how to create your own pfSense on Azure. This example requires a Windows 10, Windows Server 2016, or Windows Server 2012 R2 machine with Hyper-V enabled.

Install pfSense 2.3.4 on a VHD

Download pfSense CE 2.3.4
Create a generation 1 VM with a 20 GB VHD in Hyper-V Manager, and install pfSense. Accept all default settings and select quick installation. Note that a VHD smaller than 20 GB is also okay.
After installation, log in and choose:

14) to enable Secure Shell (sshd)
8) to drop to a shell

Install waagent

Update pkg ('su' to become root)

# pkg upgrade

Install python, setuptools, and bash:

# pkg install -y python27-2.7.13_3
# pkg install -y py27-setuptools-32.1.0_1
# ln -s /usr/local/bin/python2.7 /usr/local/bin/python
# pkg install -y bash

Download waagent (v2.2.14):

# fetch https://github.com/Azure/WALinuxAgent/archive/v2.2.14.tar.gz

Untar the package, change into its directory, and install it:

# tar -zxvf v2.2.14.tar.gz
# cd WALinuxAgent-2.2.14
# python setup.py install

Enable udf

Download udf.ko here or from another shared link. Please see the links at the end of this blog post for additional information.
Copy udf.ko to /boot/kernel
Add the following lines into /boot/loader.conf:

udf_load="YES"
console="comconsole"
vfs.mountroot.timeout=300

Add autostart script for waagent

Don't forget to make it executable by "chmod +x waagent.sh"

[2.3.4-RELEASE][root@pfSense.localdomain]/usr/local/etc/rc.d: cat waagent.sh
#! /bin/sh
/usr/local/sbin/waagent --daemon
[2.3.4-RELEASE][root@pfSense.localdomain]/usr/local/etc/rc.d: chmod +x waagent.sh

Upload the VHD to Azure

Learn more about how to upload the VHD to Azure.

Links and reference

The following links provide udf.ko and pfsense2.3.4.vhd for your reference. The SSL certificate is self-signed, so please ignore the certificate error.

udf.ko
pfsense2.3.4

Source: Azure

Machine Learning-based anomaly detection in Azure Stream Analytics

Customers who monitor real-time data can now easily detect events or observations that do not conform to an expected pattern thanks to machine learning-based anomaly detection in Azure Stream Analytics, announced for private preview today.

Up to now, Industrial IoT customers and others who monitor streaming data have relied on expensive custom machine learning models. Implementers needed intimate familiarity with the use case and the problem domain, and integrating these models with stream processing mechanisms required complex data pipeline engineering. This high barrier to entry precluded adoption of anomaly detection in streaming pipelines, despite its value for many Industrial IoT sites.

Monitoring made easy

The new capability makes it quick and easy to do service monitoring by tracking KPIs over time, usage monitoring through metrics such as the number of searches or the number of clicks, and performance monitoring through counters like memory, CPU, and file reads over time. Customers no longer need to build complex and expensive anomaly detection models and integrate them with streaming pipelines.

The new functionality is targeted towards numerical time series data. Azure Stream Analytics can detect positive and negative trends, and changes in a dynamic range of values. For example, in IT monitoring scenarios where event data is streamed to Azure Stream Analytics, trend detection can be used to generate alerts for upward trends in memory usage since it may be indicative of a memory leak. Similarly, alerting on exceptions indicative of service health instability can be obtained by detecting changes in the dynamic range of values. Spikes in the number of login failures can be used to raise security alerts.
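To make the idea of spike detection concrete, here is a toy detector in Python. It is only an illustrative sketch, not the model Azure Stream Analytics uses; the service's general-purpose model is more sophisticated and requires no threshold tuning.

```python
from statistics import mean, stdev

def spike_scores(values, window=5, threshold=3.0):
    """Toy spike detector: flag points more than `threshold` standard
    deviations away from the mean of the preceding `window` points.
    (Illustrative only; not the Azure Stream Analytics model.)"""
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) < 2 or stdev(history) == 0:
            flags.append(False)  # not enough history to judge
            continue
        z = (v - mean(history)) / stdev(history)
        flags.append(abs(z) > threshold)
    return flags

logins = [3, 4, 3, 5, 4, 4, 50, 4]   # failed-login counts per minute
print(spike_scores(logins))          # only the 50 is flagged as a spike
```

A real pipeline would of course compute this incrementally over a stream rather than over a list, which is exactly the plumbing the built-in function saves you from writing.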

The power of Machine Learning

A simple function call in a declarative Azure Stream Analytics query can detect anomalies in the input data. The function calls are powered by an underlying general-purpose machine learning model that is abstracted away from the user. The detectors track values over time and report ongoing changes as anomaly scores. The model requires no ad-hoc threshold tuning and learns continuously over time. The function calls return anomaly scores and binary spike indicators for each point in time.

How to enable anomaly detection with declarative SQL

The following examples highlight the productivity wins of enabling anomaly detection in a declarative, SQL-like query language to reason about data in motion.

Simple usage to detect anomalies over one hour of time series data

SELECT id, val, ANOMALYDETECTION(val) OVER(LIMIT DURATION(hour, 1)) FROM input

Usage with partitioning

SELECT id, val, ANOMALYDETECTION(val) OVER(PARTITION BY id LIMIT DURATION(hour, 1)) FROM input

Usage with partitioning and "when"

SELECT id, val, ANOMALYDETECTION(val) OVER(PARTITION BY id LIMIT DURATION(hour, 1) WHEN id != 2) FROM input

Usage showing the extraction of scores:

SELECT id, val FROM input WHERE (GetRecordPropertyValue(ANOMALYDETECTION(val) OVER(LIMIT DURATION(hour, 1)), 'BiLevelChangeScore')) < -1.0

Three score fields are exposed:

BiLevelChangeScore, SlowPosTrendScore, SlowNegTrendScore

Get started today

We’re excited for you to try out anomaly detection in Azure Stream Analytics. Sign up here to participate in the private preview.
Source: Azure

New performance levels and storage add-ons in Azure SQL Database

We are pleased to announce the public preview of new performance levels and storage add-ons in Azure SQL Database. These new choices enable further price optimization opportunities for CPU intensive and storage bound workloads.

Higher performance levels for Standard databases

Previously, the highest performance level for a single database in the Standard tier was limited to 100 DTUs; this limit now increases 30x to 3,000 DTUs, with a range of new choices in between. This update follows a similar update that increased the database eDTU limits for Standard elastic pools. The new S4 to S12 performance levels provide price-saving opportunities for CPU-intensive workloads that do not demand the high IO performance of the Premium tier. For IO-intensive workloads, the Premium tier continues to provide lower latency per IO and an order of magnitude more IOPS per DTU than the Standard tier.

Storage add-ons for single databases and elastic pools

Previously, the storage size limit was a fixed amount based on the service tier and performance level. Customers can now purchase extra storage above this included amount for single databases and elastic pools in the Standard and Premium tiers. The decoupling of storage from compute reduces costs by allowing more storage without having to increase DTUs or eDTUs.

Storage provisioned above the included amount is charged extra and billed on an hourly basis. The total price for a single database (or an elastic pool) is the price based on DTUs (or eDTUs) plus the price for any extra storage provisioned. Storage for a single database or elastic pool can be provisioned in increments of 250 GB up to 1 TB, and then in increments of 256 GB beyond 1 TB.

Extra storage unit price

Example

Suppose an S3 database has provisioned 1 TB. The amount of storage included for S3 is 250 GB, and so the extra storage amount is 1 TB – 250 GB = 774 GB. The unit price for extra storage in the Standard tier is approximately $0.085/GB/month during preview, and so the extra storage price is 774 GB * $0.085/GB/month = $65.79/month. Therefore, the total price for this database is $150/month for DTUs + $65.79/month for extra storage = $215.79/month.
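The arithmetic in the example can be sketched in Python. The helper name is invented, and the $150/month S3 DTU price and ~$0.085/GB/month preview unit price are taken from the example above:

```python
def extra_storage_price(provisioned_gb, included_gb, unit_price_per_gb_month):
    """Extra-storage charge per the worked example above: storage beyond
    the included amount, billed at the per-GB monthly unit price."""
    extra_gb = max(0, provisioned_gb - included_gb)
    return extra_gb, round(extra_gb * unit_price_per_gb_month, 2)

# The S3 example: 1 TB (1024 GB) provisioned, 250 GB included
extra_gb, extra_price = extra_storage_price(1024, 250, 0.085)
dtu_price = 150.00                       # S3 DTU price from the example
total = round(dtu_price + extra_price, 2)
print(extra_gb, extra_price, total)      # 774 65.79 215.79
```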

New performance levels and storage limits

Single database

Elastic pool

Learn more

To learn more about the new performance levels and storage add-on choices available, please visit the Azure SQL Database resource limits webpage. And for more pricing information, please visit the Azure SQL Database pricing webpage.
Source: Azure

Azure Management Libraries for Java – v1.2

We released version 1.2 of the Azure Management Libraries for Java. This release adds support for additional security and deployment features, and for more Azure services:

Managed service identity
Creating users in Azure Active Directory, updating service principals, and assigning permissions to apps
Storage service encryption
Deploying Web apps and functions using MS Deploy
Network watcher service
Search service

https://github.com/Azure/azure-sdk-for-java

Getting Started

Add the following dependency fragment to your Maven POM file to use the 1.2 version of the libraries:

<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure</artifactId>
<version>1.2.1</version>
</dependency>

Create a Virtual Machine with Managed Service Identity (MSI)
You can create a virtual machine with MSI enabled using a define() … create() method chain:

VirtualMachine virtualMachine = azure.virtualMachines().define("myLinuxVM")
.withRegion(Region.US_EAST)
.withNewResourceGroup(rgName)
.withNewPrimaryNetwork("10.0.0.0/28")
.withPrimaryPrivateIPAddressDynamic()
.withNewPrimaryPublicIPAddress(pipName)
.withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
.withRootUsername("tirekicker")
.withRootPassword(password)
.withSize(VirtualMachineSizeTypes.STANDARD_DS2_V2)
.withOSDiskCaching(CachingTypes.READ_WRITE)
.withManagedServiceIdentity()
.withRoleBasedAccessToCurrentResourceGroup(BuiltInRole.CONTRIBUTOR)
.create();

You can manage any MSI-enabled Azure resources from a virtual machine with MSI and add an MSI service principal to an Azure Active Directory security group.
Add New User to Azure Active Directory
You can add a new user to Azure Active Directory using a define() … create() method chain:

ActiveDirectoryUser user = authenticated.activeDirectoryUsers()
.define("tirekicker")
.withEmailAlias("tirekicker")
.withPassword("StrongPass!12")
.create();

Similarly, you can create and update users and groups in Active Directory.
Enable Storage Service Encryption for a Storage Account
You can enable storage service encryption at the storage account level when you create a storage account using a define() … create() method chain:

StorageAccount storageAccount = azure.storageAccounts().define(storageAccountName)
.withRegion(Region.US_EAST)
.withNewResourceGroup(rgName)
.withEncryption()
.create();

Deploy Web apps and Functions using MS Deploy
You can use MS Deploy to deploy Web apps and functions by using the deploy() method:

// Create a Web app
WebApp webApp = azure.webApps().define(webAppName)
.withExistingWindowsPlan(plan)
.withExistingResourceGroup(rgName)
.withJavaVersion(JavaVersion.JAVA_8_NEWEST)
.withWebContainer(WebContainer.TOMCAT_8_0_NEWEST)
.create();
// Deploy a Web app using MS Deploy
webApp.deploy()
.withPackageUri("link-to-bin-artifacts-in-storage-or-somewhere-else")
.withExistingDeploymentsDeleted(true)
.execute();

And similarly for a function app:

// Create a function app
FunctionApp functionApp = azure.appServices().functionApps()
.define(functionAppName)
.withExistingAppServicePlan(plan)
.withExistingResourceGroup(rgName)
.withExistingStorageAccount(app3.storageAccount())
.create();
// Deploy a function using MS Deploy
functionApp.deploy()
.withPackageUri("link-to-bin-artifacts-in-storage-or-somewhere-else")
.withExistingDeploymentsDeleted(true)
.execute();

Create Network Watcher and start Packet Capture
You can visualize network traffic patterns to and from virtual machines by creating and starting a packet capture using a define() … create() method chain, then downloading the packet capture and visualizing traffic using open source tools:

// Create a Network Watcher
NetworkWatcher networkWatcher = azure.networkWatchers().define(nwName)
.withRegion(Region.US_EAST)
.withNewResourceGroup(rgName)
.create();
// Start a Packet Capture
PacketCapture packetCapture = networkWatcher.packetCaptures()
.define(packetCaptureName)
.withTarget(virtualMachine.id())
.withStorageAccountId(storageAccount.id())
.withTimeLimitInSeconds(1500)
.definePacketCaptureFilter()
.withProtocol(PcProtocol.TCP)
.attach()
.create();

Similarly, you can programmatically:

Verify if traffic is allowed to and from a virtual machine
Get the next hop type and IP address for a virtual machine
Retrieve network topology for a resource group
Analyze virtual machine security by examining effective network security rules applied to a virtual machine
Configure network security group flow logs.
Create a Managed Cloud Search Service
You can create a managed cloud search service (Azure Search) with replicas and partitions using a define() … create() method chain:

SearchService searchService = azure.searchServices().define(searchServiceName)
.withRegion(Region.US_EAST)
.withNewResourceGroup(rgName)
.withStandardSku()
.withPartitionCount(1)
.withReplicaCount(1)
.create();

Similarly, you can programmatically:

Manage query keys
Update search service with replicas and partitions
Regenerate primary and secondary admin keys.
Try it
You can get more samples from our GitHub repo. Give it a try and let us know what you think (via e-mail or comments below). You can find plenty of additional info about Java on Azure at http://azure.com/java.
Source: Azure

What’s brewing in Visual Studio Team Services: August 2017 Digest

This post series provides the latest updates and news for Visual Studio Team Services (VSTS) and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure.

This month we’ll take a look at the new release definition editor, an update on the new wiki, improvements in pull requests, and how you can quickly get started building your own extension for VSTS. Let’s get started with a look at the latest release experience improvements.

New Release Definition Editor

The new Release Definition Editor is going into preview. It is based on the new CI editor we released not long ago, and it's a good example of the overall direction we are heading. It's not just a cleaner experience; it is structurally different in that it lets you visualize your release process and work with Release the way you think about your system. We are going to bring this same approach to the runtime views as well, so that you can visualize a release as it progresses. Unlocking all of your data with richer, easier-to-consume visualizations is something we are trying to do across the product.

Visualization of the pipeline

The pipeline in the editor provides a graphical view of how deployments will progress in a release. The artifacts will be consumed by the release and deployed to the environments. The layout and linking of the environments reflects the trigger settings defined for each environment.

In context configuration UI

Artifacts, release triggers, pre-deployment and post-deployment approvals, environment properties, and deployment settings are now in-context and easily configurable.

Applying deployment templates

The list of featured templates is shown when creating a new environment.

Improved task and phase editor

All the enhancements in the new build definition editor are now available in the release definition editor, too. You can search for tasks and add them either by using the Add button or by using drag and drop. You can reorder or clone tasks using drag and drop.

Code information in Release with Jenkins CI

In Release, we want better integration with popular CI systems like Jenkins. Today, in the release summary tab, we show code commits only if the CI build comes from VSTS. This feature enables code information for Jenkins CI artifacts as well, when the Jenkins server is reachable by the agent executing the release.

Release status badge in Code hub

Today, if you want to know whether a commit is deployed to your customer production environment, you first identify which build consumes the commit and then check all of the release environments where this build is deployed. Now this experience is much easier thanks to the integration of deployment status into the Code hub status badge, which shows the list of environments that your code is deployed to. For every deployment, status is posted to the latest commit that was part of the deployment. If a commit is deployed to multiple release definitions (with multiple environments), then each has an entry in the badge, with status shown for each environment. This improves the traceability from code commit to deployment.

Ansible extension

We have released a new extension that includes a build and release task to integrate with Ansible and execute a given Playbook on a specified list of Inventory nodes via command line interface. Ansible uses Playbooks which express configurations, deployment, and orchestration steps in YAML format. Each Playbook maps a group of hosts to a set of roles. Each role is represented by calls to Ansible tasks. An Inventory file is a description of the host nodes that can be accessed by Ansible.
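For readers unfamiliar with the format, a minimal, hypothetical Playbook (all names invented) mapping a host group to a task might look like:

```yaml
# site.yml - a minimal, hypothetical Playbook
- hosts: webservers        # host group defined in the Inventory file
  become: yes              # escalate privileges for the tasks below
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
```

A matching Inventory file would define the webservers group as a list of host names or addresses.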

The task requires that the Playbook and Inventory files be located either on a private Linux agent or on a remote machine where the Ansible automation engine has been installed. An SSH endpoint needs to be set up if Ansible is located on a remote machine. Inventory can also be specified inline, as Dynamic Inventory, or as a Host list.

Improvements in the Wiki edit experience

As I mentioned last month, each project in VSTS now supports its own Wiki, and it continues to improve every sprint. Let’s look at some of the latest enhancements.

The new Wiki edit experience now supports HTML tags in markdown.

You can also conveniently resize images in the Markdown editor.

Revert a Wiki revision

As you use Wiki more, there is a chance you’ll save unintended changes. Now you can revert a revision to a Wiki page by going to the revision details and clicking on the Revert button.

Learn more about getting started with Wiki.

Git pull request status extensibility in public preview

Using branch policies is a great way to increase the quality of your code, not only through code reviews, but also through automated builds and tests. Until now, those policies have been limited to only the integrations provided natively by VSTS. Using the new PR Status API and the corresponding branch policy, third party services can participate in the PR workflow just like native VSTS features.

When a service posts to the Status API for a pull request, it will immediately appear in the PR details view in a new Status section. The status section shows the description and creates a link to the URL provided by the service. Status entries also support an action menu that is extensible for new actions to be added by web extensions.

Status alone does not block completion of a PR – that’s where the policy comes in. Once PR status has been posted, a policy can then be configured. From the branch policies experience, a new policy is available to Require approval from external services. Select + Add service to begin the process.

In the dialog, select the service that’s posting the status from the list and select the desired policy options.

Once the policy is active, the status will be shown in the Policies section, under Required or Optional as appropriate, and the PR completion will be enforced as appropriate.

To learn more about the status API, and to try it out for yourself, check out the documentation and samples.
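As a rough sketch of what such an integration might post (the exact route, API version, and field names should be taken from the Status API documentation rather than from this invented example), a request could look like:

```
POST https://{account}.visualstudio.com/{project}/_apis/git/repositories/{repository}/pullRequests/{pullRequestId}/statuses?api-version=4.0-preview
Content-Type: application/json

{
  "state": "succeeded",
  "description": "License checks passed",
  "targetUrl": "https://example.com/checks/123",
  "context": {
    "name": "license-check",
    "genre": "example-service"
  }
}
```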

Automatically complete work items when completing pull requests

If you’re linking work items to your PRs (you are, right?), keeping everything up to date just got simpler. Now, when you complete a PR, you’ll have the option to automatically complete the linked work items after the PR has been merged successfully. If you’re using policies and set PRs to auto-complete, you’ll see the same option. No more remembering to revisit work items to update the state once the PR has completed – VSTS will do it for you.

Task lists in pull request descriptions and comments

When preparing a PR or commenting, you sometimes have a short list of things to track, but end up editing the text or adding multiple comments. Lightweight task lists are a great way for a PR creator or reviewer to track progress on a list of to-dos, either in the description or in a single, consolidated comment. Click the Markdown toolbar to get started, or apply the format to selected text.

Once you’ve added a task list, you can simply check the boxes to mark items as completed. These are expressed and stored within the comment as [ ] and [x] in Markdown. See Markdown guidance for more information.
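For example, the Markdown stored for a short checklist (an invented example using the [ ] and [x] syntax above) might read:

```
Deployment checklist:

- [x] Update the changelog
- [x] Bump the version number
- [ ] Verify the staging deployment
```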

Ability to “Like” comments in pull requests

Show your support for a PR comment with a single click on the like button. You can see the list of all people that liked the comment by hovering over the button.

Clean up stale branches

Keeping your repository clean by deleting branches you no longer need enables teams to find branches they care about and set favorites at the right granularity. However, if you have a lot of branches in your repo, it can be hard to figure out which are inactive and can be deleted. We’ve now made it easier to identify “stale” branches (branches that point to commits older than 3 months). To see your stale branches, go to the Stale pivot on the Branches page.

Search for a deleted branch and re-create it

When a branch is accidentally deleted from the server, it can be difficult to figure out what happened to it. Now you can search for a deleted branch, see who deleted it and when, and re-create it if you wish.

To search for a deleted branch, enter the full branch name into the branch search box. It will return any existing branches that match that text. You will also see an option to search for an exact match in the list of deleted branches. Click the link to search deleted branches.

If a match is found, you will see who deleted it and when. You can also restore the branch.

Restoring the branch will re-create it at the commit it last pointed to. However, it will not restore policies and permissions.

Copy work item processes

You can now create a copy of an inherited process to use as a starting point for a new process, or to prepare and test process changes.

If you make a change to the process that is used by one or more team projects, each of these team projects will see these changes immediately. Often, that is not what you want. Instead, you want to bundle the changes to your process and test your changes before they are rolled out to all team projects. You can do this by following these steps:

Create a copy of the process that you want to change.
Make your changes to the duplicated process. Since no team project is using this process, these changes are not affecting anyone.
To test your changes, create a test project based on this duplicated process if you don't have any yet. If you have already created a test project before, you can change the process of the test project using the Change team project to use <process name> option from the context menu.
Now it is time to deploy the changes. To do this, you change the process of the team projects which need the new changes. Select the Change team project to use <process name> option from the context menu.
Optionally, you can disable or delete the original process.

Updated order of the last column in the Kanban board

If you added a custom state to your work item type, you might have noticed that the last column on the Kanban board always presented the card that was closed earliest. We’ve found that seeing the card closed most recently is often more helpful.

The root cause of this behavior is that the last column of the Kanban board is ordered descending on the Closed Date field. In our processes (Scrum, Agile, CMMI), each work item type includes rules to set this field when it is transitioned to the Closed or Done state (depending on the process and work item type). However, if you added a custom state, rules to set the Closed Date field for the new state were not added automatically. If you moved a work item from the New state to the Closed or Done state, the Closed Date would have an empty value. The query engine puts empty values on top when ordering descending, so on the Kanban board you would see the cards that were closed earliest on top.

We first made sure that we are adding the right set of rules to the work items in case you are adding a custom state. You will not see an empty Closed Date anymore when closing a work item. We will not backfill existing closed work items. To ensure that you will see the most recently closed cards on the top in the Kanban board, we have also updated the ordering logic on the last column of the Kanban board to put cards with empty values for the Closed Date field at the bottom.
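The updated ordering can be sketched in Python as a simplified model of the board's sort (not the actual query engine): most recently closed on top, empty Closed Date values at the bottom.

```python
from datetime import date

cards = [
    {"title": "Fix login bug",     "closed_date": date(2017, 8, 1)},
    {"title": "Custom-state card", "closed_date": None},  # no rule set the field
    {"title": "Update docs",       "closed_date": date(2017, 8, 7)},
]

# Sort key: (has no Closed Date, negated ordinal date) so that
# None values sink to the bottom and recent dates rise to the top.
ordered = sorted(
    cards,
    key=lambda c: (c["closed_date"] is None,
                   -(c["closed_date"].toordinal() if c["closed_date"] else 0)),
)
print([c["title"] for c in ordered])
# ['Update docs', 'Fix login bug', 'Custom-state card']
```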

Velocity Widget for Analytics

The Analytics extension now includes a Velocity widget.

With this powerful widget, you can chart your team’s velocity by Story Points, work item count, or any custom field. With advanced options, you can compare what your team delivered as compared to plan, as well as highlighting work that was completed late.

The Velocity Widget provides functionality not available in the Velocity Chart displayed on the Backlog view, such as:

Show velocity for any team, not just the current team
Show velocity for any backlog level or work item type, not just the Stories backlog.
Calculate velocity by sum of any field, not just Story Points. Or, by count of any work item type.
Show planned vs. actual. Did you deliver what you actually planned?
Highlight work that was completed late, after the sprint.
Sizing down to 1×1, for when you just want a tile showing your average velocity.

If you haven’t already, install the Analytics Extension to get access to the Velocity Widget as well as widgets for Lead Time, Cycle Time, and a Cumulative Flow Diagram.

We will be publishing more widgets for the Analytics Extension in the coming months, such as Burndown, Burnup, and Trend.

Extension of the month: SpecFlow+LivingDoc

SpecFlow+LivingDoc is now available in the Marketplace. Living documentation is the term used to describe system documentation that is up to date and easily understood. A prime example is feature files written in Gherkin, which uses natural language to describe how an application is expected to behave in a given scenario. By describing specifications in natural language, all stakeholders (business, development, testing, requirements, etc.) can understand and discuss them on an equal footing. These specifications, in turn, form an important part of the system documentation and are commonly used in agile development methods such as BDD and Specification by Example.

For many .NET developers, the open source project SpecFlow is their tool of choice for automating test scenarios written in Gherkin with Visual Studio. However, these Gherkin files are plain text files, and are generally stored in a code repository and inaccessible to many team members. While the SpecFlow Visual Studio extension supports syntax highlighting for Gherkin in Visual Studio, not all stakeholders have access to Visual Studio, in particular, business stakeholders.

SpecFlow+ LivingDoc bridges this gap, making the specifications written in Gherkin accessible to all team members in VSTS. SpecFlow+ LivingDoc is part of SpecFlow+, a series of (optional) paid extensions for SpecFlow. SpecFlow+ LivingDoc takes your feature files and parses them so that they can be displayed in VSTS with syntax highlighting and formatting. This makes the feature files much easier to navigate than plain text without any formatting.

Formatting includes the following:

Gherkin keywords syntax highlighting
Tables
Alternating background colors for Given/Then/When sections
Support for images embedded via Markdown
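The kind of keyword highlighting LivingDoc applies can be sketched as a tiny formatter. This is an illustrative simplification, not SpecFlow+'s actual rendering code:

```python
import re

GHERKIN_KEYWORDS = ("Feature", "Scenario", "Given", "When", "Then", "And", "But")

def highlight(line):
    """Wrap a leading Gherkin keyword in a <strong> tag, as a rendering
    step might do before displaying a feature file."""
    m = re.match(r"(\s*)(%s)\b(.*)" % "|".join(GHERKIN_KEYWORDS), line)
    if not m:
        return line  # non-keyword lines pass through unchanged
    indent, keyword, rest = m.groups()
    return "%s<strong>%s</strong>%s" % (indent, keyword, rest)

feature = """Feature: Login
  Scenario: Valid user
    Given a registered user
    When they sign in
    Then the dashboard is shown"""

for line in feature.splitlines():
    print(highlight(line))
```

A real renderer would additionally handle tables, section background colors, and embedded Markdown images, as listed above.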

Quickly get started writing your own extension

Over the last few years we have introduced a number of ways to extend and integrate with VSTS. For example, we have client libraries for .NET (which work with both .NET Framework and .NET Core apps) and for Node.js. We also have an extensibility model that allows extension of our web experience.
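As a small taste of integrating from plain code, the sketch below builds the Basic authorization header that the VSTS REST API accepts for a personal access token, plus the URL for listing team projects. The account name and token are placeholders, and no request is actually sent:

```python
import base64

def vsts_auth_header(pat):
    """Build the Basic authorization header VSTS expects for a personal
    access token: base64 of ':' followed by the token."""
    token = base64.b64encode((":" + pat).encode("ascii")).decode("ascii")
    return {"Authorization": "Basic " + token}

def projects_url(account):
    # URL for listing team projects via the VSTS REST API.
    return "https://%s.visualstudio.com/_apis/projects?api-version=2.0" % account

headers = vsts_auth_header("my-personal-access-token")  # placeholder token
print(projects_url("fabrikam"))  # hypothetical account name
```

From here, any HTTP client can call the API; the client libraries mentioned above wrap these details for you.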

With all of these great options, the challenge is knowing what pieces you need to get started and then assembling them in the right way. We have made a lot of improvements to our integration docs and are planning another wave of improvements to our docs soon, but sometimes you need more than docs.

In a new blog post, The fastest path to a new VSTS extension, we show you how to quickly get started building your own extension.

There is always much more in each release than I can cover here. Please check out the July 14th and August 4th release notes for more information. Be sure to subscribe to the DevOps blog to keep up with the latest plans and developments for VSTS.

Happy coding!

@tfsbuck
Source: Azure

Announcing Deploy to Kubernetes & Azure Container Service and Container Agent plugins

Continuous Deployment plugins for Kubernetes and Azure Container Service

We have created an Azure Container Service (ACS) plugin for Jenkins, so that no matter which ACS orchestrator you have chosen, you can continuously deploy to that cluster from Jenkins with the same, simple plugin.

Azure Container Service optimizes the configuration of popular open-source tools and technologies specifically for Azure. You get an open solution that offers portability for both your containers and your application configuration. You select the size, number of hosts, and choice of orchestrator tools (Docker Swarm, Kubernetes or DC/OS) – Container Service handles everything else.

When we were working on the ACS plugin, we surveyed the Jenkins landscape and couldn’t find a plugin that allowed native continuous deployment from Jenkins to Kubernetes. So we decided to create one, as we think it will be extremely valuable to both the Jenkins and Kubernetes communities. Our ACS plugin uses this Kubernetes plugin as a dependency for Kubernetes support.

Jenkins Agent plugin for Azure Container Service and Azure Container Instances

By having a large number of agents, Jenkins is able to run a large number of jobs in parallel. With the VM Agent plugin, Jenkins dynamically provisions a Jenkins VM agent on Azure when there is a new job and deprovisions the VM when the job is done. ci.jenkins.io uses the plugin extensively, as does the .NET Core team, which managed to reduce its monthly build cost by 75%!

Now, imagine that instead of a VM you can create a container agent that takes seconds instead of minutes to provision, as it's based on a Docker image with all the tools and environment settings you need. You can create a new container to run your build and tear it down after the build is complete, without worrying about the provisioning cost. And if you want to experiment with Azure Container Instances (ACI), you can go right ahead and give it a try, as the plugin supports ACI too.


We will be debuting all of these plugins at Jenkins World 2017, demonstrating how to build and deploy a modern Java app to Azure App Service on Linux and to a Kubernetes cluster on Azure. Be sure to catch our talk on Azure DevOps Opensource Integration. See you at Jenkins World 2017!
Source: Azure

SMB Version 1 disabled Azure Gallery Windows operating system images

The Azure security team has recently driven some changes into the default behavior of Windows operating system images that are available in the Azure gallery. These changes are in response to recent concerns over malware that has been able to take advantage of issues with the Server Message Block Version 1 network file sharing protocol. The Petya and WannaCry ransomware attacks are just two types of malware that have been able to spread due to weaknesses in SMB v1.

Due to the security issues related to the use of SMB v1, the SMB v1 protocol is now disabled on almost all Windows operating system images in the Azure Gallery. As a result, when you create a new virtual machine in the Azure Virtual Machines service, that virtual machine will have the SMB v1 protocol disabled by default. You will not need to disable the protocol yourself, for example by using the method shown in the figure below.

While we expect these changes to cause little or no disruption, there are some questions you may want to consider:

What specific Windows operating system images are impacted by this change?
What is your current SMB v1 footprint?
What effect does this change have on your currently running virtual machines?
What about Linux and SMB v1?
What about PaaS Images? Are they involved with this change?
What tools are available for you to be alerted when SMB v1 is enabled on your virtual machines? Can Azure Security Center be helpful in this context?

To learn more about this change and these issues, please read Disabling Server Message Block Version 1 (SMB v1) in Azure.
Source: Azure

Stream Processing Changes: #Azure #CosmosDB change feed + Apache Spark

Azure Cosmos DB: Ingestion and storage all-in-one

Azure Cosmos DB is a blazing fast, globally distributed, multi-model database service. Regardless of where your customers are, they can access data stored in Azure Cosmos DB with single-digit latencies at the 99th percentile at a sustained high rate of ingestion. This speed supports using Azure Cosmos DB, not only as a sink for stream processing, but also as a source. In a previous blog, we explored the potential of performing real-time machine learning with Apache Spark and Azure Cosmos DB. In this article, we will further explore stream processing of updates to data with Azure Cosmos DB change feed and Apache Spark.

What is Azure Cosmos DB change feed?

Azure Cosmos DB change feed provides a sorted list of documents within an Azure Cosmos DB collection in the order in which they were modified. This feed can be used to listen for modifications to data within the collection to perform real-time (stream) processing on updates. Changes in Azure Cosmos DB are persisted and can be processed asynchronously, and distributed across one or more consumers for parallel processing. Change feed is enabled at collection creation and is simple to use with the change feed processor library.
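The change feed semantics can be modeled with a toy collection: every create or update appends the document to an ordered log, and a consumer reads from a continuation token so changes can be processed asynchronously. This is a simplified illustration of the concept, not the Azure Cosmos DB SDK:

```python
class Collection:
    """Toy model of a collection with a change feed."""

    def __init__(self):
        self._log = []  # documents in modification order

    def upsert(self, doc):
        # Every create or update appends the document to the feed.
        self._log.append(dict(doc))

    def read_change_feed(self, continuation=0):
        """Return all changes since the continuation token, plus a new token."""
        changes = self._log[continuation:]
        return changes, continuation + len(changes)

coll = Collection()
coll.upsert({"id": "a", "value": 1})
coll.upsert({"id": "b", "value": 2})

changes, token = coll.read_change_feed()
print([d["id"] for d in changes])  # → ['a', 'b']

coll.upsert({"id": "a", "value": 3})  # an update shows up again in the feed
changes, token = coll.read_change_feed(token)
print([d["id"] for d in changes])  # → ['a']
```

Because each consumer tracks its own continuation token, multiple consumers can process the same feed independently and in parallel, which is what the change feed processor library manages for you.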

Designing your system with Azure Cosmos DB

Traditionally, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hubs or Apache Kafka. After stream processing the data, a materialized view or aggregate is stored in a persistent, queryable database. In this implementation, we can use the Azure Cosmos DB Spark connector to store Spark output in Azure Cosmos DB for document, graph, or table schemas. This design is great for scenarios where only a portion of the incoming data, or only an aggregate of it, is useful.

Figure 1: Traditional stream processing model
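The traditional model, in which only a materialized aggregate survives processing, can be sketched as follows (illustrative only; a real pipeline would use Event Hubs or Kafka rather than an in-memory queue):

```python
from collections import deque

def process_stream(queue, store):
    """Drain a message queue, keep only a running aggregate per key, and
    write the materialized view to a persistent store. The raw events
    are discarded once processed."""
    totals = {}
    while queue:
        event = queue.popleft()
        totals[event["key"]] = totals.get(event["key"], 0) + event["amount"]
    store.update(totals)

queue = deque([
    {"key": "widgets", "amount": 3},
    {"key": "gadgets", "amount": 5},
    {"key": "widgets", "amount": 4},
])
store = {}
process_stream(queue, store)
print(store)  # → {'widgets': 7, 'gadgets': 5}
```

Note that after processing, the individual events are gone; only the aggregate can be queried later. That is exactly the limitation the all-in-one ingestion design below avoids.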

Let’s consider the scenario of credit card fraud detection. All incoming data (new transactions) need to be persisted as soon as they are received. As new data comes in, we want to incrementally apply a machine learning classifier to detect fraudulent behavior.

Figure 2: Detecting credit card fraud

In this scenario, Azure Cosmos DB is a great choice for directly ingesting all the data from new transactions because of its unique ability to support a sustained high rate of ingestion while durably persisting and synchronously indexing the raw records, enabling these records to be served back out with low latency rich queries. From the Azure Cosmos DB change feed, you can connect compute engines such as Apache Storm, Apache Spark or Apache Hadoop to perform stream or batch processing. Post processing, the materialized aggregates or processed data can be stored back into Azure Cosmos DB permanently for future querying.
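The ingest-then-classify flow can be sketched with a toy scoring rule standing in for a trained classifier: every transaction is persisted first, then scored incrementally against the history, as it would be when triggered from the change feed. The threshold and transaction shape are illustrative assumptions:

```python
def classify(txn, history):
    """Flag a transaction as suspicious if it is far above the card's
    average historical amount (a stand-in for a trained ML classifier)."""
    amounts = [t["amount"] for t in history if t["card"] == txn["card"]]
    if not amounts:
        return False  # no history yet for this card
    return txn["amount"] > 5 * (sum(amounts) / len(amounts))

def ingest(txn, db, alerts):
    """Score the new transaction against persisted history, then persist it."""
    if classify(txn, db):
        alerts.append(txn["id"])
    db.append(txn)

db, alerts = [], []
for txn in [
    {"id": 1, "card": "c1", "amount": 20.0},
    {"id": 2, "card": "c1", "amount": 25.0},
    {"id": 3, "card": "c1", "amount": 500.0},  # far above this card's norm
]:
    ingest(txn, db, alerts)

print(alerts)  # → [3]
```

Because every raw transaction is durably stored, the model can later be retrained on the full history, not just on aggregates.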

Figure 3: Azure Cosmos DB sink and source

You can learn more about change feed in the Working with the change feed support in Azure Cosmos DB article, and by trying the change feed + Spark example on GitHub. If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow. Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter @AzureCosmosDB.
Source: Azure