April 2017 Leaderboard of Database Systems contributors on MSDN

Many congratulations to last month's top-10 contributors! Hilary Cotter and Alberto Morillo continue to top the Overall and Cloud Database categories for the third successive month.


This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):


For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com
Source: Azure

Building a Data Lake with Cloudera and Azure Data Lake – Roadshow

Today we are announcing the Cloudera + Microsoft Roadshow to showcase the partnership and integration between Cloudera Enterprise Data Hub and Azure Data Lake Storage (ADLS). Linux and open source (OSS) solutions have been among the fastest-growing workloads in Azure, and Big Data/analytics are popular among our customers. Microsoft works closely with open source partners such as Cloudera and many others to build solutions across infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), keeping things as simple as possible so customers can focus on solving their data challenges. This roadshow builds on the latest Cloudera announcements and will start in the following cities:

Chicago, IL – June 8
Detroit, MI – June 9
Dallas, TX – June 13
Houston, TX – June 14

Join Cloudera and Microsoft for a hands-on lab experience building a Data Lake, where you will learn:

How organizations are finding success with Cloudera Enterprise on Azure
How to deploy and configure your Cloudera cluster on Microsoft Azure Data Lake Store
How to ensure your data is secure and governed in the cloud
How you can get started recognizing value from your Azure data

Click on the city to register! Space is limited, so register early!
Source: Azure

Building an Azure Analysis Services Model for Azure Blobs — Part 1

Azure Analysis Services recently added support for the 1400 compatibility level for Tabular models, as announced in the article “1400 compatibility level in Azure Analysis Services.” The modern Get Data experience in Tabular 1400 is a true game changer. Where previously Tabular models in the cloud would primarily interface with Azure SQL Database or Azure SQL Data Warehouse, you now have more options to bring in data directly from a source. There are pros and cons to either approach. Perhaps most importantly, Azure SQL Data Warehouse can help to ensure data integrity at cloud scale. It is hard to overstate the importance of accurate and trustworthy data for BI solutions. On the other hand, the data warehouse increases the complexity of the data infrastructure and adds latencies to data updates. If data integrity can be ensured without requiring a data warehouse, direct data import into Tabular 1400 can help to avoid the extra complexity and latencies. For an example, see the blog article “Building an Azure Analysis Services Model on Top of Azure Blob Storage” on the Analysis Services team blog.


To learn more, please read the full blog post "Building an Azure Analysis Services Model on Top of Azure Blob Storage—Part 1."

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Source: Azure

How to use Azure Functions with IoT Hub message routing

I get a lot of requests for new routing endpoints in Azure IoT Hub, and one of the more common asks is the ability to route directly to Azure Functions. Having serverless compute at your fingertips allows you to do all sorts of amazing things with your IoT data.
 
(Quick refresher: back in December 2016 we released message routing in IoT Hub. Message routing allows customers to set up automatic routing of events to different systems, and we take care of all of the difficult implementation architecture for you. Today you can configure your IoT Hub to route messages to your backend processing services via Service Bus queues, topics, and Event Hubs as custom endpoints for routing rules.)

One quick note: if you want to trigger an Azure Function on every message sent to IoT Hub, you can do that already! Just use the Event Hubs trigger and specify IoT Hub's built-in Event Hub-compatible endpoint as the trigger in the function. You can get the IoT Hub built-in endpoint information in the portal under Endpoints > Events:

Here’s where you enter that information when setting up your Function:
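For reference, the same trigger can also be expressed in the function's function.json. A minimal sketch, assuming the Functions v1 binding schema (the hub name, connection setting name, and binding name here are placeholders, not values from this post):

```json
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "iotHubMessage",
      "path": "my-iot-hub",
      "connection": "IoTHubEventHubCompatibleConnection",
      "consumerGroup": "$Default"
    }
  ]
}
```

Here "path" is the Event Hub-compatible name and "connection" names an app setting holding the Event Hub-compatible endpoint's connection string, both copied from that Endpoints blade.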

If you’re looking to do more than that, read on.

I have good news and I have bad news. The bad news first: this blog post is not announcing support for Functions as a custom endpoint in IoT Hub (it's on the backlog). The good news is that it's really easy to use an intermediate service to trigger your Azure Function to fire!

Let's take the scenario described in the walkthrough, Process IoT Hub device-to-cloud messages using routes. In the article, a device occasionally sends a critical alert message that requires different processing from the telemetry messages, which comprise the bulk of the traffic through IoT Hub. The article routes messages to a Service Bus queue added to the IoT hub as a custom endpoint. When I demo the message routing feature to customers, I use a Logic App to read from the queue and further process messages, but we can just as easily use an Azure Function to run some custom code. I'm going to assume you've already run through the walkthrough and have created a route to a Service Bus queue, but if you want a quick refresher on how to do that, you can jump straight to the documentation here. This post will be waiting when you get back!

First, create a Function App in the Azure Portal:

Next, create a Function to read data off your queue. From the quickstart page, click on “Create your own custom function” and select the template “ServiceBusQueueTrigger-CSharp”:

Follow the steps to add your Service Bus queue connection information to the function, and you’re done setting up the trigger. Now you can use the power of Azure Functions to trigger your custom message processing code whenever there's a new message on the queue.
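To sketch what that template gives you: it generates a Run method that fires once per queue message, and your processing logic goes inside it. The alert-handling lines below are placeholders of my own, not code from the walkthrough:

```csharp
using System;

// Fires once for each message the route delivers to the Service Bus queue.
public static void Run(string myQueueItem, TraceWriter log)
{
    // myQueueItem contains the routed device-to-cloud message body.
    log.Info($"Critical alert received: {myQueueItem}");

    // Placeholder: add your custom processing here,
    // e.g. notify an operator or write the alert to storage.
}
```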
 
Service Bus is billed per million operations, and it doesn't add an appreciable amount to the cost of your IoT solution. For example, if I send all messages from my 1-unit S1 SKU IoT hub (400k messages/day) through to a function in this manner, I pay less than $0.05 USD per day for the intermediate queue if I use a Basic SKU queue. I'm not breaking the bank there.
 
That should tide you over until we have first-class support for routing to Azure Functions in IoT Hub. In the meantime, you can read more about message routes in the developer guide. As always, please continue to submit your suggestions through the Azure IoT User Voice forum or join the Azure IoT Advisors Yammer group.
Source: Azure

Announcing AzCopy on Linux Preview

Today we are pleased to announce the preview of AzCopy on Linux, with a redesigned command-line interface that adopts POSIX parameter conventions. AzCopy is a command-line utility designed for copying large amounts of data to and from Azure Blob and File storage using simple commands with optimal performance. AzCopy is now built with .NET Core, which supports both Windows and Linux platforms. AzCopy also takes a dependency on the Data Movement Library, which is built with .NET Core, bringing many of the capabilities of the Data Movement Library to AzCopy!

Install and run AzCopy on Linux

Install .NET Core on Linux
Download and extract the tar archive for AzCopy (version 6.0.0-netcorepreview)

wget -O azcopy.tar.gz https://aka.ms/downloadazcopyprlinux
tar -xf azcopy.tar.gz

Install and run azcopy

sudo ./install.sh
azcopy

If you do not have superuser privileges, you can instead run AzCopy by changing to the azcopy directory and running ./azcopy.

Once you are done with the installation, you can remove the extracted files.

What is supported?

Feature parity with AzCopy on Windows (5.2) for Blob and File scenarios

Parallel upload and downloads
Built-in retry mechanism
Resume or restart from a failed transfer session
And many other features highlighted in the AzCopy guide

What is not supported?

Azure Storage Table service is not supported in AzCopy on Linux

Samples

It is as simple as the legacy AzCopy, with command-line options that follow POSIX conventions. Watch the following sample, where I upload a 100 GB directory. It is that simple!

To learn more about all the command-line options, run the 'azcopy --help' command.

Here are a few other samples:

Upload VHD files to Azure Storage

azcopy --source /mnt --include "*.vhd" --destination "https://myaccount.blob.core.windows.net/mycontainer?sv=2016-05-31&ss=bfqt&srt=sco&sp=rwdlacup&se=2017-05-10T21:45:18Z&st=2017-05-09T13:45:18Z&spr=https,http&sig=kQ42XrayIifuE4SGYaAy6COHoIanP7H9Qi3R0KqHs7M%3D"

Download a container using Storage Account Key

azcopy --recursive --source https://myaccount.blob.core.windows.net/mycontainer --source-key "lYZbbIHTePy2Co…..==" --destination /mnt

Synchronous copy across Storage accounts

azcopy --source https://ocvpwd5f77vcqsalinuxvm.blob.core.windows.net/mycontainer --source-key "lXHqgIHTePy2Co….==" --destination https://testaccountseguler.blob.core.windows.net/mycontainer --dest-key "uT8nw5…. ==" --sync-copy


Feedback

AzCopy on Linux is currently in preview, and we will make improvements as we hear from our users. So, if you have any comments or issues, please leave a comment below.

Source: Azure

Azure SQL Database now supports transparent geographic failover of database groups

The built-in geo-replication feature has been generally available to SQL Database customers since 2014. During this time, one of the most common customer requests has been support for transparent failover with automatic activation. Today we are happy to announce a public preview of auto-failover groups, which extends geo-replication with the following additional capabilities:

Geo-replication of a group of databases within a logical server
Ability to choose manual or automatic failover for a group of databases
A connection endpoint for the primary databases in the group that doesn't change after failover
A connection endpoint for the secondary databases in the group that doesn't change after failover (for read-only workloads)

Geo-replication of a group of databases

A typical cloud application includes multiple databases. You can now use the Azure SQL Database API to protect the application by geo-replicating all its databases in one simple step. During an outage, all of these databases will fail over to the secondary server as a group. The group can include individual databases, elastic pools, or a combination of the two. If you are already using geo-replication for your production databases, you can create a failover group and add them to it to take advantage of the above benefits at no extra cost.

Connection endpoints

You no longer need to worry about changing the SQL connection string after failover. Each auto-failover group includes two connection endpoints. The read-write endpoint is a DNS name that always points to the primary database and automatically switches during failover. The read-only endpoint is a DNS name that points to the secondary server, allowing you to use the secondary databases to load-balance read-only workloads.
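Concretely, the two endpoints surface as DNS names derived from the failover group name, so connection strings reference the group rather than a specific server. A hedged sketch, assuming a group named myfailovergroup (the group, database, and remaining connection settings are placeholders):

```
-- Read-write workloads: always resolves to the current primary
Server=tcp:myfailovergroup.database.windows.net,1433;Initial Catalog=mydb;...

-- Read-only workloads: resolves to the current secondary
Server=tcp:myfailovergroup.secondary.database.windows.net,1433;Initial Catalog=mydb;ApplicationIntent=ReadOnly;...
```

After a failover, these names stay the same; only the servers they resolve to change.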
Source: Azure

Manage your business needs with new enhancements in Azure Autoscale

Automatically scaling applications out or in to handle the demands of your business is an essential element of any cloud strategy. Azure's Autoscale service empowers you to automatically scale your compute and App Service workloads based on user-defined rules regarding metric conditions, time/date schedules, or both. Azure Autoscale is available for Classic Cloud Services, Virtual Machine Scale Sets (VMSS), and App Services. Today we are excited to announce a host of improvements to Autoscale, including faster autoscaling, simplified configuration, the ability to scale by a custom metric using Application Insights, and more troubleshooting information available in the Activity Log.

Faster Autoscale

Classic Cloud Services: The Classic Virtual Machine infrastructure that powers Classic Cloud Services now supports more reliable, host-level metrics via the Azure Monitor metric pipeline. Because of this, an Autoscale setting can now use a time window as short as five minutes to activate (previously we recommended a time window of no less than 30 minutes), giving you faster and more reliable autoscaling. If you have a Classic Cloud Service Autoscale setting where you wish to take advantage of the improved scaling, please update it with a shorter time window (as low as five minutes).

VMSS and App Services: The Autoscale engine for VMSS and App Services can also now trigger scale actions faster. The new engine is tuned to check your metric-based rules every minute, enabling it to scale your instances as early as one minute after a metric value crosses the threshold set in an Autoscale setting. To take advantage of the faster Autoscale, please update your existing Autoscale setting. All new Autoscale settings created or updated on VMSS or App Services after May 10th will automatically use the new engine.
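For context on what the engine evaluates every minute, an Autoscale setting is a collection of profiles, each holding capacity limits and metric rules. A minimal sketch of a single scale-out rule, assuming the Microsoft.Insights autoscale settings schema (the resource URI, threshold, and time windows are placeholder values, not from this post):

```json
{
  "metricTrigger": {
    "metricName": "Percentage CPU",
    "metricResourceUri": "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachineScaleSets/my-vmss",
    "timeGrain": "PT1M",
    "statistic": "Average",
    "timeWindow": "PT5M",
    "timeAggregation": "Average",
    "operator": "GreaterThan",
    "threshold": 75
  },
  "scaleAction": {
    "direction": "Increase",
    "type": "ChangeCount",
    "value": "1",
    "cooldown": "PT5M"
  }
}
```

Shorter timeWindow values like this are what let a rule react within minutes rather than half an hour.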

Simplified management experience in the portal

Based on your feedback, we have made it easier to discover and manage Autoscale settings in the portal. Autoscale settings can now be accessed directly from the Azure Monitor blade, use a completely revamped configuration blade, and let you easily see the full template in JSON or the scale action history for a setting. Learn more about how to get started with Autoscale today.

Figure 1. The new tab within Azure Monitor for accessing and managing Autoscale settings.

Figure 2. The simplified Autoscale blade with options to view scale action history, view the JSON object, and edit notifications.

Autoscale using custom metrics

One of our top customer requests has been the ability to autoscale based on a custom, user-defined metric, and we have now enabled this using Application Insights. This new capability lets you scale Classic Cloud Service, VMSS, or App Service workloads by any Azure Monitor based metric, or by custom and application metrics collected by Application Insights, Azure's application performance management service. Here is a sample of an Autoscale setting that allows you to scale your Web API app based on a custom metric ingested into Application Insights. This ability to autoscale via Application Insights based metrics is now in public preview; please try it out and share your feedback.

Figure 3. Selecting Application Insights as a source of metrics and choosing a standard or user-defined metric by which scaling will occur.

Improved Autoscale troubleshooting

The Autoscale engine logs an event in the Activity Log every time it triggers a scale action; however, the target resource being scaled out or in can take time to complete the scale action. It is important to know when the scale action completes or is reported as failed so that you can take automated actions on the resource. To support this, the Autoscale engine now generates a scale action result event when the underlying target service completes the action or reports it as failed. This scale result event is also logged in the Activity Log and includes valuable information about why your Autoscale event failed. We've also introduced a new Autoscale Activity Log category so that you can easily filter to view only Autoscale-related events. You can leverage the new Activity Log Alerts to receive notifications or take automated actions via webhooks and Azure Automation, Logic Apps, or Functions. This feature is now enabled for Cloud Services, VMSS, and App Services.

Figure 4. A view of Activity Logs filtered by Autoscale events, listing the Autoscale trigger action and result event.

Wrapping Up

These new capabilities in Azure Autoscale enable you to efficiently leverage the compute power of Azure to scale your applications to best suit your growing business needs. We are eager to hear your feedback to inform our future work on Autoscale. Please try these new features and let us know what you think. Also be sure you’re getting the most out of this feature by checking out our Autoscale best practices, and most common autoscale patterns.
Source: Azure

Azure Government – The most secure & compliant cloud for defense with new compliance and service offerings

Broad support for regulatory compliance and ongoing innovation are at the core of Microsoft’s commitment to enabling U.S. government missions with a complete, trusted, and secure cloud platform. Today, we are announcing support for Defense Federal Acquisition Regulation Supplement (DFARS) requirements, expanding opportunities for defense contractors to take advantage of cloud computing in meeting the needs of the U.S. Department of Defense (DoD). Adding DFARS compliance extends Azure Government’s lead as the cloud platform with the broadest support for U.S. DoD workloads. In addition to this compliance milestone, we are also announcing enhanced technical capabilities with the expansion of our Cognitive Services preview, addition of Graphics Processing Unit (GPU) clusters, and the addition of new database and storage options in Azure Government. With these expanded compliance and service offerings, government customers will have new opportunities to use cloud computing to help meet their mission goals.

Supporting DFARS requirements

Azure Government’s support for DFARS requirements creates new options for DoD contractors as they partner with the defense department. DoD industry partners can now host Covered Defense Information (CDI) on the Microsoft cloud platform while maintaining compliance with DoD procurement requirements, giving them access to the same set of Azure Government capabilities as the DoD itself.

“As a mission partner of the DoD, the security of covered defense information is of utmost importance. Compliance with DFARS is not only required by regulation, but is also critical to the defense of our nation,” says Michael Hawman, General Atomics CIO, “As more DoD contractors consider the adoption of cloud computing to reduce costs and increase agility and capability, the transparency by which CSPs provide support will be critical to building and maintaining trust with cloud security in the defense contractor community. Commercial cloud service providers must familiarize themselves with, and be capable of accepting flow down DFARS requirements as soon as possible."

Cognitive Services available for all customers

Building on the successful preview of Cognitive Services in March, we are now making Cognitive Services available to all government and defense customers. With the preview open to more customers, U.S. government customers and partners can use Cognitive Services to feed real-time analysis that supports their mission objectives. Leveraging the artificial intelligence of Cognitive Services for tasks like facial recognition or text translation, customers can more easily build applications that help them make informed decisions in critical scenarios such as Public Safety and Justice. Azure Government's support for application innovation is part of why agencies are choosing Microsoft as their partner in digital transformation:

“Before beginning the search for specific technologies and digital platforms to meet DC’s digital needs, we identified our own list of standards for government cloud service providers. The first three criteria are compliance, reliability and the technical architecture and environment of the platform,” says Archana Vemulapalli, CTO of Washington D.C., “Microsoft offers a strong government cloud platform and services that help my staff and me perform our jobs effectively and create the city’s digital future.”

Announcing GPU clusters, Azure Cosmos DB and Cool Storage

Azure Government continues to add services at an accelerated pace to meet the existing as well as still-emerging needs of the U.S. government. By announcing GPU clusters today, Azure Government further enables the use of High Performance Computing (HPC) in the cloud for government. Whether using computational analysis to better research diseases and weather patterns or helping reduce backlogs of citizen questions through predictive analysis, U.S. government customers and partners are sure to benefit.

Additionally, Azure Government now supports Azure Cosmos DB and Cool Blob Storage, which enable government customers to deploy global-scale databases and choose from more options to control storage costs. Azure Cosmos DB is the next big leap in the evolution of DocumentDB and, as part of this Azure Cosmos DB release, DocumentDB customers and their data automatically and seamlessly become Azure Cosmos DB customers. We are also making Cool Storage available so customers can store less frequently accessed data, such as backup data, media content, scientific data, and active archival data, at a reduced cost.

Powering innovation at the Department of Veterans Affairs

Agencies are choosing cloud computing and Azure Government to help speed innovation to those they serve. Last month, the U.S. Department of Veterans Affairs launched its Access to Care site on Azure Government. The site helps veterans and their caregivers decide where to go for healthcare services by providing data on patient satisfaction, appointment wait times, and other quality measures from surrounding clinics and VA facilities. Already, the VA has been able to meet the demand, while enhancing the website and continuously adding new functionality by leveraging the capabilities of Azure Government.

“The VA is focused on driving transparency and empowering the veteran,” said Jack Bates, Director VA OI&T Business Intelligence Service Line, “Working closely with Microsoft to deliver the Patient Wait Times App on Azure Government, we have enabled the Department to be fully transparent about performance, and to improve service to the veteran by providing meaningful data.”

By building and hosting Access to Care on Azure Government, which achieved a FedRAMP High ATO from the VA in March, the VA is continuing to embrace digital transformation and improve its services for veterans around the world.

Cloud computing for U.S. Government

From increased support for compliance requirements to application innovation, Azure Government continues to expand capabilities that make it easier for U.S. government customers and partners to take advantage of the cloud. And with six announced government regions in the U.S., Azure Government enables customers to run mission workloads closer to their users and provides geographic redundancy that is not possible with any other major cloud provider. To learn more about what Microsoft is doing in this area, check out the Azure Government blog and sign up for an Azure Government Trial.

– Tom
Source: Azure