Announcing the General Availability of Geographic Routing capability in Azure Traffic Manager

Do you have a global user base and would like to customize content based on the regions where your users are located? Have you felt the need to comply with policy mandates that require restricting data access within a geography?

With the availability of Geographic Routing capability in Azure Traffic Manager, such needs can be easily addressed. You can now direct user traffic to specific endpoints based on the geographic location from where the requests originate. Azure’s global presence enables you to reach a user base that is vast and diverse across nations and regions, and with Geographic Routing you can now enable a variety of use cases that are tied to geography – such as:

Customizing and localizing content for specific regions, enabling better user experience and engagement. As an example, an e-commerce site can localize the site content and merchandise items to users in a specific region.
Knowing where the users are coming from makes it easier to implement mandates related to data sovereignty.

Configuring geographic routing

Defining user regions

The first thing to do is to partition your user base according to their geographic location/region. There are different levels of granularity by which you can specify a geographic region:

World – any region
Regional Grouping – Africa, Middle East, Australia/Pacific, etc.
Country/Region – Ireland, Peru, Hong Kong SAR, etc.
State / Province – USA-California, Australia-Queensland, Canada-Alberta, etc. (Note: This granularity level is supported only for states/provinces in Australia, Canada, UK, and USA)

To get a list of the supported regions and the various choices within each region, you can refer to the list of regions used by the Azure Traffic Manager geographic routing method. You can also obtain this information programmatically by calling Azure Traffic Manager’s REST API.
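
As an illustration, the short R sketch below (using the httr and jsonlite packages) pulls the geographic hierarchy from the Azure Resource Manager REST API. Treat it as a minimal sketch: the endpoint path and api-version shown are assumptions to check against the current Traffic Manager REST API reference, and access_token is assumed to be an Azure AD bearer token you have already obtained.

library(httr)
library(jsonlite)

# Assumed ARM endpoint that returns the default geographic hierarchy used by
# the Geographic routing method; verify the path and api-version in the REST docs.
url <- paste0("https://management.azure.com/providers/Microsoft.Network/",
              "trafficManagerGeographicHierarchies/default",
              "?api-version=2017-03-01")

resp <- GET(url, add_headers(Authorization = paste("Bearer", access_token)))
stop_for_status(resp)

# The response is a nested hierarchy: World -> regional groupings -> countries/regions -> states.
hierarchy <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
str(hierarchy, max.level = 3)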

The table below gives an example of how you can route traffic to your application deployed in specific Azure regions, based on where users are located.

 

User Location(s) -> Azure Region / Endpoint location
Europe, Africa -> North Europe
Australia, New Zealand -> Australia East
Mexico, USA-California, USA-Oregon, USA-Washington -> West US
Rest of the world, and any requests that cannot be mapped to a geographic region -> Central US
 

Sample mapping directing traffic from user locations to the Azure regions where the application is deployed

Create a Traffic Manager profile with geographic routing

Go to the Azure portal, navigate to Traffic Manager profiles and click on the Add button to create a routing profile.

Add a Traffic Manager Profile

Provide a Name for your profile, select Geographic as the Routing method, and select the Subscription and Resource group you want to use. Click OK to create the profile.

Add geographic routing method to profile

Once the profile has been created successfully, navigate to it. You can now see the details, including the DNS name and the Routing method (Geographic) you specified.

DNS name and routing method

Click the Endpoints button and then the Add button to add your endpoints to this profile.

Add endpoints

When adding an endpoint, you will be prompted to set the Geo-mapping for that endpoint. Add the four endpoints based on the mapping described earlier.

Associate endpoints to geographic routing

Once that is completed, you have an Azure Traffic Manager profile with geographic routing enabled as per your needs! Your users can use the DNS name associated with this profile to connect to your application. During DNS name resolution, Azure Traffic Manager will ensure users are directed to the right endpoint based on where their DNS query originates.

Besides the portal, you can use the REST API and the .NET SDK to provision this capability. PowerShell and CLI support will be available in April 2017.
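
To give a feel for the REST route, here is a minimal R sketch (again using httr) that creates a profile with the Geographic routing method and a single geo-mapped endpoint. The subscription, resource group, profile, and endpoint names are hypothetical, and the endpoint type, the geoMapping property, the region codes (e.g. GEO-EU for Europe, GEO-AF for Africa, WORLD for the catch-all), and the api-version are assumptions to verify against the Traffic Manager REST API reference.

library(httr)

# Hypothetical subscription and resource group
sub <- "00000000-0000-0000-0000-000000000000"
rg  <- "contoso-rg"
url <- sprintf(paste0("https://management.azure.com/subscriptions/%s/resourceGroups/%s/",
                      "providers/Microsoft.Network/trafficManagerProfiles/contoso-geo",
                      "?api-version=2017-03-01"), sub, rg)

profile <- list(
  location = "global",
  properties = list(
    trafficRoutingMethod = "Geographic",
    dnsConfig     = list(relativeName = "contoso-geo", ttl = 30),
    monitorConfig = list(protocol = "HTTP", port = 80, path = "/"),
    endpoints = list(
      list(name = "europe-africa",
           type = "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
           properties = list(target = "northeurope.contoso.com",
                             geoMapping = list("GEO-EU", "GEO-AF")))
      # ...repeat for the Australia/New Zealand, North America, and catch-all ("WORLD") endpoints
    )
  )
)

# access_token is assumed to be an Azure AD bearer token obtained separately
resp <- PUT(url, add_headers(Authorization = paste("Bearer", access_token)),
            body = profile, encode = "json")
stop_for_status(resp)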

Availability

This feature is available today in all Azure public cloud regions. It will be available in Azure Government, Azure Germany, and Azure China in May 2017.

How much does it cost?

The price is the same as with all other routing methods. For details, please refer to the Azure Traffic Manager pricing page.

Next steps

To learn more about the capabilities and best practices related to this feature, please visit the Azure Traffic Manager routing methods and FAQs pages. We look forward to your valuable feedback as you start using this today.
Source: Azure

Announcing application consistent backup for Linux VMs using Azure Backup

Azure Backup provides consistent file system backup of Linux Virtual Machines running in Azure. Today, we are extending this to take application consistent backups for enterprise critical applications such as MySQL, InterSystems Caché® DB, and SAP HANA running on popular Linux distros (e.g. Ubuntu, Red Hat Enterprise Linux, etc.). This framework gives you the flexibility to execute custom pre and post scripts as part of the VM backup process. These scripts can be used to quiesce application IOs while taking backups, which guarantees application consistency.

Value propositions

Customize backup workflow: You now have full flexibility to control your applications and production environment during backup by executing custom scripts while taking the VM snapshot. Listed below are a few examples of how you can leverage the framework:

You can use the pre-script to quiesce or redirect the application IOs momentarily using application-native APIs and flush in-memory content to disk before taking the VM snapshot. You can then use the post-script to thaw the IOs after snapshot completion and resume normal application operation. This ensures application consistent VM backup for any application you are running.
Some applications require fsfreeze to be disabled so that it does not interfere with their quiesce logic, so we also provide a capability to disable Linux fsfreeze, which is executed by default when taking a Linux VM backup using Azure Backup.
You can also invoke native application APIs to take application backups and database log backups; as part of the VM backup, the data will be moved to the Recovery Services Vault, thereby securing it against VM compromise scenarios (e.g. VM deletion or corruption).

Application and distro agnostic: The framework is agnostic of Linux distros and versions, and works seamlessly for all supported Linux distros as long as the guest application has APIs to pause and resume application IOs.

Sample scripts on GitHub: We are working with partners and ISVs to provide open source scripts on GitHub for popular Linux applications. As of this release, we have these in place for MySQL and Caché DB, and are working closely with SAP HANA.

Under the hood

Azure Backup completes a sequence of steps for taking fast and efficient Azure Linux VM backups, as explained in the image below.

Getting started

To use this framework, you need to copy the pre-script and post-script for your application locally onto the VM to be backed up. Then you can follow the steps to configure application-consistent Linux VM backup.

Sample scripts for MySQL & Caché DB

To bootstrap the framework, we worked with InterSystems Caché DB and MySQL DB examples with demo scripts, including clustered deployments, that you can use as a reference to create your own scripts and leverage the power of this framework.

MySQL application consistent backup using Azure Backup by Capside
Caché DB application consistent backup using Azure Backup by InterSystems

Seeking open-source community contribution

We are calling for developers, application vendors, tech enthusiasts, and partners to contribute scripts to our open source GitHub repository. The scripts will be available for everyone to use directly or customize based on their requirements. If you are interested in contributing, please send a mail to linuxazurebackupteam@service.microsoft.com and we will work with you to publish the scripts on GitHub, explain usage guidelines, and ensure that you get acknowledgement for your contribution.

Related links and additional content

Want more details about this feature? Check out the Azure application consistent Linux VM backup documentation
Need help? Reach out to the Azure Backup forum for support
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones
Follow us on Twitter @AzureBackup for the latest news and updates
New to Azure Backup? Sign up for a free Azure trial subscription
Source: Azure

Azure Site Recovery available in five new regions

To increase our service's global footprint, we recently announced the expansion of Azure Site Recovery to the Canada and UK regions. Apart from these two new countries, we have also deployed our service in West US 2, making it available in all non-government Azure regions in the United States.

With this expansion, Azure Site Recovery is now available in 27 regions worldwide, including Australia East, Australia Southeast, Brazil South, Central US, East Asia, East US, East US 2, Japan East, Japan West, North Europe, North Central US, Southeast Asia, South Central US, West Central US, West US 2, US Gov Virginia, US Gov Iowa, West Europe, West US, North East China, East China, South India, Central India, UK South, UK West, Canada East, and Canada Central.

Customers can now select any of the above regions to deploy ASR. Irrespective of the region you choose to deploy in, ASR guarantees the same reliability and performance levels as set forth in the ASR SLA. To learn more about Azure Site Recovery visit Getting started with Azure Site Recovery. For more information about the regional availability of our services, visit the Azure Regions page.
Source: Azure

40TiB of advanced in-memory analytics with Azure and ActivePivot

In-memory computing has accelerated big compute capabilities and enabled customers to extend the experience beyond just Monte Carlo simulation and into analytics. This is of particular note within Financial Services, where business users wish to move away from pre-canned reports and instead interact directly with data. With Azure, banks can analyze in real time, make the right decisions intraday, and be better equipped to meet regulatory standards. This blog explores the new possibilities of a scale-out architecture for in-memory analytics in Azure through ActivePivot. ActivePivot is part of the ActiveViam platform that brings big compute and big data closer together.

ActivePivot is an in-memory database that aggregates large amounts of fast-moving data through incremental, transactional, and analytical processing to enable customers to make the right decisions in a short amount of time. ActivePivot computes sophisticated metrics on data that is updated on the fly without the need for any pre-aggregation and allows the customer to explore metrics across hundreds of dimensions, analyze live data at its most granular level, and perform what-if simulations at unparalleled speed.

To enable this on-premises, purchasing servers with enough memory can be expensive, so such capacity is often reserved for mission-critical workloads. However, the public cloud opens this up to more workloads for research and experimentation, and taking this scenario to Azure is compelling. Utilizing Azure Blob storage to collect and store the historical datasets generated over a period of time allows the customer to use the compute only when it is required. Going from scratch to fully deployed in less than 30 minutes drastically reduces the total cost of ownership and provides enormous business agility.

For our testing, we processed 400 days of historical data to show how 40 TB can be loaded onto a 128-node cluster in 15 minutes and 200 billion records queried in less than 10 seconds. For this we used G5 instances with 32 cores and 448 GiB of RAM running a Linux image with Java and ActivePivot, and 40 storage accounts each holding 10 days of data, roughly 1 TB per account.

The graph above shows the rate of data transfer over a five minute period.

 

Utilizing a special cloud connector, ActivePivot pulls from several storage accounts to transfer at 50 GiB per second. This ActiveViam cloud connector opens several HTTP connections to help fully saturate the bandwidth of the VM and storage and is tailored towards large files.

In parallel with the data fetching, ActivePivot indexes this data in memory to accelerate the analytical workloads. The ActivePivot query nodes distribute the calculations automatically across the data nodes and do not require any cross-node transfer. We expected to see near-linear performance scaling as we doubled the CPU, memory, and data size, and we were very pleased to see it track with our expectation.

As you can see in the graph below, when we multiplied the dataset by 64, from 600 GiB up to 37.5 TiB, throughput increased 54 times.

This test proved to be very successful for us as we were able to execute extensive queries over the entire dataset in less than 10 seconds, but to see more detail and a step-by-step process, please visit the ActiveViam blog. We will also be following up with further scale-up testing later in the year so watch this space.

For more information on Big Compute within Financial Services, please visit the HPC Financial Services page or HPC on Azure.
Source: Azure

Azure Data Factory offers SAP HANA and Business Warehouse data integration

Azure Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data, and it supports copying data from 25+ data stores, on-premises and in the cloud, easily and performantly. Today, we are excited to announce that Azure Data Factory now enables loading data from SAP HANA and SAP Business Warehouse (BW) into various Azure data stores for advanced analytics and reporting, including Azure Blob, Azure Data Lake, Azure SQL DW, and more.

What’s new

SAP is one of the most widely used enterprise software suites in the world. We have heard from you that it is crucial for Microsoft to empower customers to integrate their existing SAP systems with Azure to unlock business insights. Azure Data Factory starts its SAP data integration support with SAP HANA and SAP BW, which are the most popular components of the SAP stack used by enterprise customers.

With this release, you can easily ingest data from your existing SAP HANA and SAP BW systems into Azure, so as to build your own intelligent solutions by leveraging Azure’s first-class information management services, big data stores, advanced analytics tools, and intelligence toolkits to transform data into intelligent action. More specifically:

The SAP HANA connector supports copying data from HANA information models (such as Analytic and Calculation views) as well as from Row and Column tables using SQL queries. To establish the connectivity, you need to install the latest Data Management Gateway (version 2.8) and the SAP HANA ODBC driver. Refer to SAP HANA supported versions and installation for more details.
The SAP BW connector supports copying data from SAP Business Warehouse version 7.x InfoCubes and QueryCubes (including BEx queries) using MDX queries. To establish the connectivity, you need to install the latest Data Management Gateway (version 2.8) and the SAP NetWeaver library. Refer to SAP BW supported versions and installation for more details.

What’s next

Beyond SAP HANA and SAP BW support, we also want to learn from you what other services in SAP stack you are using and looking to integrate. Go vote and comment on Azure feedback site.

Get started

To try out the new capabilities, provision a Data Factory from the Azure portal if you don’t have one, then click Copy Data to launch the intuitive copy wizard which will guide you through the configurations. In the source data store gallery, you will find SAP HANA and SAP BW as follows:

SAP HANA

After selecting SAP HANA from the gallery, specify the connection info: the Data Management Gateway name, HANA server, user name, and password.

Then you will see a page with a navigator and a query editor. Browse and pick the measures and dimensions; a base SQL query will be auto-generated in the Query Editor as a reference based on your selection, and you can review and further customize it there. Click Validate Query to preview the data and schema.

SAP BW

After selecting SAP BW from the gallery, specify the connection info: the Data Management Gateway name, BW server, system number, client ID, user name, and password.

Then you will see a page with a navigator and a query editor. Browse and pick the measures and dimensions; a base MDX query will be auto-generated in the Query Editor as a reference based on your selection, and you can review and further customize it there. Click Validate Query to preview the data and schema.

Once you finish your settings on SAP source, continue following the wizard to configure the destination.

Reference

Learn more about Azure Data Factory from Introduction to Azure Data Factory and Move data by using Copy Activity
Refer to SAP HANA support and SAP Business Warehouse support respectively for connector details

Source: Azure

Data Simulator For Machine Learning

Virtually any data science experiment that uses a new machine learning algorithm requires testing across different scenarios. Simulated data allows one to do this in a controlled and systematic way that is usually not possible with real data.

A convenient way to implement and re-use data simulation in Azure Machine Learning (AML) Studio is through a custom R module. Custom R modules combine the convenience of having an R script packaged inside a drag-and-drop module with the flexibility of custom code, where the user is free to add and remove functionality parameters, seen as module inputs in the AML Studio GUI, as needed. A custom R module behaves identically to native AML Studio modules. Its input and output can be connected to other modules or be set manually, and it can process data of arbitrary schema, if the underlying R code allows it, inside AML experiments. An added benefit is that custom R modules provide a convenient way of deploying code without revealing the source, which may be useful for IP-sensitive scenarios. By publishing a module in the Cortana Intelligence Gallery, one can easily expose any algorithm's functionality to the world without worrying about the classical software deployment process.

Data simulator

We present here an AML Studio custom R module implementation of a data simulator for binary classification. The current version is simple enough to have the complete code inside the Cortana Intelligence Gallery item page. It allows one to generate datasets of custom feature dimensionality, with both label-relevant and irrelevant columns. Relevant features are univariately correlated with the label column. Correlation directionality (i.e. positive or negative correlation coefficient) is controlled by the correlationDirectionality parameter(s). All features are generated using separate runif calls. In the future, the module functionality can be further extended to allow the user to choose other distributions by adding and exposing R's ellipsis (three dots) argument feature. The last module parameter (seedValue) can be used to control the reproducibility of results. Figure 1 shows all module parameters exposed in AML Studio.

 

Figure 1. Data Simulator custom R module in an AML experiment. 1,000,000 samples are simulated, with 1000 irrelevant and 10 label-relevant columns. The data is highly imbalanced since only 20 samples are of the “FALSE” class. The two-value array (.03 and 5) for the “noiseAmplitude” property is reused for all relevant columns. Similarly, the sign of the four-value (1, -1, 0, 3.5) “label-features correlation” property is reused for all 10 relevant columns to control the correlation directionality (i.e. positive or negative) with the label column.
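
To make the simulation logic concrete, here is a minimal stand-alone R sketch of the same idea. It is not the published module code (which lives in the Gallery item); the function and parameter names are only meant to mirror the module parameters described above.

# Relevant columns carry a (possibly sign-flipped) copy of the label plus uniform noise;
# irrelevant columns are pure runif noise with no relation to the label.
simulateBinaryData <- function(nRows = 1000, nRelevant = 10, nIrrelevant = 100,
                               noiseAmplitude = c(0.03, 5),
                               correlationDirectionality = c(1, -1),
                               positiveClassFraction = 0.5, seedValue = 42) {
  set.seed(seedValue)                                      # reproducibility, as with seedValue
  label <- runif(nRows) < positiveClassFraction            # TRUE/FALSE label column

  relevant <- sapply(seq_len(nRelevant), function(i) {
    amp <- noiseAmplitude[(i - 1) %% length(noiseAmplitude) + 1]            # recycled amplitudes
    sgn <- sign(correlationDirectionality[(i - 1) %% length(correlationDirectionality) + 1])
    sgn * as.numeric(label) + amp * runif(nRows)           # small amp => strong label correlation
  })

  irrelevant <- matrix(runif(nRows * nIrrelevant), nrow = nRows)

  data.frame(relevant, irrelevant, label = label)
}

smallSet <- simulateBinaryData()
dim(smallSet)  # 1000 rows, 10 + 100 feature columns plus the label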

By visualizing, as shown below in Figure 2, the module output (right click and then “Visualize”), we can check basic properties of the data. This includes data matrix size and univariate statistics like range and missing values.

 

Figure 2. Visualization of simulated data. The data has 1,000,000 rows and 1011 columns (10 relevant and 1000 irrelevant feature columns, plus the label). The histogram of the label column (right graph) indicates the large class imbalance chosen for this simulation.

Univariate Feature Importance Analysis of simulated data

Note: Depending on the size chosen for the simulated data, it may take some time to generate it: e.g. 1 hour for a 1e6 rows x 2000 feature columns (2001 total columns) dataset. However, new modules can be added to the experiment even after the data have been generated, and the cached data can be processed as described below without having to simulate them again.

Univariate Feature Importance Analysis (FIA) measures the similarity between each feature column and the label values using metrics like Pearsonian Correlation and Mutual Information (MI). MI is more generic than Pearsonian Correlation since it has the nice property that it does not depend on the directionality of the data dependence: a feature that has labels of one class (say “TRUE”) for all middle values, and the other class (“FALSE”) for all small and large values, will still have a large MI value although its Pearsonian Correlation may be close to zero.
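
A toy R example makes the distinction concrete: a feature whose middle values map to one class and whose extreme values map to the other has near-zero Pearson correlation but clearly non-zero MI. The MI estimate below uses a simple discretization and is only illustrative; it is not the exact estimator used by the AML module.

set.seed(1)
x <- runif(10000)
y <- x > 0.25 & x < 0.75          # TRUE for "middle" values, FALSE for the extremes

cor(x, as.numeric(y))             # close to 0: Pearson misses the dependence

# Plug-in MI estimate (in nats) over a binned version of the feature
mutualInformation <- function(x, y, bins = 10) {
  pxy <- table(cut(x, bins), y) / length(x)      # joint distribution
  px  <- rowSums(pxy); py <- colSums(pxy)        # marginals
  nz  <- pxy > 0
  sum(pxy[nz] * log(pxy[nz] / outer(px, py)[nz]))
}
mutualInformation(x, y)           # clearly above 0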

Although feature-wise univariate FIA does not capture multivariate dependencies, it provides a simple-to-understand picture of the relationship between features and the classification target (labels). An easy way to perform univariate FIA in AML Studio is by employing the existing AML module for Filter Based Feature Selection for similarity computation and Execute R Script module(s) for results concatenation. To do this, we extend the default experiment deployed through the CIS gallery page by adding several AML Studio modules as described below.

We first add a second Filter Based Feature Selection module, and we choose the Mutual Information value for its “Feature scoring method” property. The original Filter Based Feature Selection module, with the “Feature scoring method” property set to Pearson Correlation, should be left unchanged. For both Filter Based Feature Selection modules, the setting for the “Number of desired features” property is irrelevant, since we will use the similarity metrics computed for all data columns, available by connecting to the second (right) output of each Filter Based Feature Selection module. The “Target column” property for both modules needs to point to the label column name in the data. Figure 3 shows the settings chosen for the second Filter Based Feature Selection module.

Figure 3. Property settings for the Filter Based Feature Selection AML Studio module added for Mutual Information computation. By connecting to the right side output of the module we get the MI values for all data columns (features and label).​

The next two Execute R Script module(s) added to the experiment are used for results concatenation. Their scripts are listed below.

First module (rbind with different column order):

dataset1 <- maml.mapInputPort(1) # class: data.frame
dataset2 <- maml.mapInputPort(2) # class: data.frame

# Reorder the columns of the second input to match the first, then stack the two result sets
dataset2 <- dataset2[, colnames(dataset1)]
data.set <- rbind(dataset1, dataset2)

maml.mapOutputPort("data.set")

Second module (add row names):

dataset <- maml.mapInputPort(1) # class: data.frame

# Label each concatenated row with the similarity metric that produced it
myRowNames <- c("PearsCorrel", "MI")
data.set <- cbind(myRowNames, dataset)
names(data.set)[1] <- c("Algorithms")

maml.mapOutputPort("data.set")

The last module added to the experiment, Convert to CSV, allows one to download the results in a convenient format (CSV) if needed. The results file is plain text and can be opened in any text editor or in Excel (Figure 4):

Figure 4. Downloaded results file visualized in Excel.

Simulated data properties

FIA results for the relevant columns are shown in Figure 5. Although MI and Pearsonian correlation are on different scales, both similarity metrics are well correlated. They are also in sync with the “noiseAmplitude” property of the custom R module described in Figure 1. The two noiseAmplitude values (.03 and 5) are reused for all 10 relevant columns, such that relevant features 1, 3, 5, 7, and 9 are much better correlated with the labels due to their lower noise amplitude.

Figure 5. FIA results for the 10 relevant features simulated before. Although MI (left axis) and Pearsonian correlation (right axis) are on different scales, both similarity metrics are well correlated.​

As expected, for each of the 1000 irrelevant features columns, min, max and average statistics for both MI and Pearsonian Correlation are below 1e-2 (see Table 1).

 

          PearsCorrel   MI
min       9.48E-07      3.23E-07
max       3.93E-03      8.31E-06
average   7.67E-04      3.02E-06
stdev     5.84E-04      1.27E-06

Table 1. Statistics of similarity metrics for the 1000 irrelevant columns simulated above.

This result is heavily dependent on sample size (i.e. the number of simulated rows). For row sizes significantly smaller than the 1e6 used here, the max and average MI and Pearsonian Correlation values for irrelevant columns may be larger due to the probabilistic nature of the simulated data.
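
A quick, purely illustrative way to see this effect is to repeatedly score a pure-noise feature against a random binary label at different sample sizes and watch the spurious similarity shrink; the sketch below uses absolute Pearson correlation as the metric.

set.seed(123)
# Absolute correlation between a pure-noise feature and an unrelated random binary label
spuriousCorrelation <- function(nRows) {
  abs(cor(runif(nRows), as.numeric(runif(nRows) < 0.5)))
}
rowCounts <- c(100, 1000, 10000, 100000)
sapply(rowCounts, function(n) mean(replicate(100, spuriousCorrelation(n))))
# The averages fall roughly like 1/sqrt(nRows): ~0.08 at 100 rows vs ~0.0025 at 100,000 rows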

Conclusion

Data simulation is an important tool for understanding ML algorithms. The custom R module presented here is available in the Cortana Intelligence Gallery, and its results can be analyzed using the AML module for Filter Based Feature Selection. Future extensions of the algorithm should include regression data and multivariate dependencies.
Source: Azure

Azure SQL Data Warehouse now generally available in 27 regions worldwide

We are excited to announce the general availability of Azure SQL Data Warehouse in four additional regions—Germany Central, Germany Northeast, Korea Central, and Korea South. This takes the SQL Data Warehouse worldwide availability to 27 regions, more than any other major cloud provider.

SQL Data Warehouse is your go-to, fully managed, SQL-based, petabyte-scale cloud solution for data warehousing. SQL Data Warehouse is highly elastic, enabling you to provision in minutes and scale capacity in seconds. You can scale compute and storage independently, allowing you to burst compute for complex analytical workloads or scale down your warehouse for archival scenarios, and pay based on what you're using instead of being locked into predefined cluster configurations. Unlike other cloud data warehouse services, SQL Data Warehouse offers the unique option to pause compute, giving you even more freedom to better manage your cloud costs.
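
As one concrete illustration of the pause capability, the R sketch below calls what we assume to be the ARM pause operation for a SQL Data Warehouse through httr. The resource path, api-version, and the availability of an Azure AD bearer token in access_token are all assumptions to verify against the SQL Data Warehouse management documentation; resume works the same way.

library(httr)

# Hypothetical subscription, resource group, logical server, and warehouse names
sub    <- "00000000-0000-0000-0000-000000000000"
rg     <- "analytics-rg"
server <- "contoso-sqlserver"
dw     <- "contoso-dw"

# Assumed ARM operation for pausing compute on a SQL Data Warehouse
url <- sprintf(paste0("https://management.azure.com/subscriptions/%s/resourceGroups/%s/",
                      "providers/Microsoft.Sql/servers/%s/databases/%s/pause",
                      "?api-version=2014-04-01"), sub, rg, server, dw)

resp <- POST(url, add_headers(Authorization = paste("Bearer", access_token)))
stop_for_status(resp)   # compute is paused; stored data is unaffected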

With general availability, SQL Data Warehouse offers an availability SLA of 99.9%, making it the only public cloud data warehouse service that offers an availability SLA to customers. Geo-backup support has also been added to enable geo-resiliency of your data, allowing a SQL Data Warehouse geo-backup to be restored to any region in Azure. With this feature enabled, backups are available even in the case of a region-wide failure, keeping your data safe. Learn more about the capabilities and features of SQL Data Warehouse with general availability.

Get started with SQL Data Warehouse today and experience the speed, scale, elasticity, security, and ease of use of a cloud-based data warehouse for yourself.

Azure SQL Data Warehouse is generally available across the following regions:

Germany Central, Germany Northeast, Korea Central, Korea South, North Europe, West Europe, North Central US, Central US, East US, East US 2, South Central US, West Central US, West US, West US 2, Canada Central, Canada East, Brazil South, East Asia, Southeast Asia, China East, China North, Japan East, Australia Southeast, Central India, and South India.

Learn more about Azure services availability across regions.

Learn more

Check out the many resources for learning more about SQL Data Warehouse:

What is Azure SQL Data Warehouse?

SQL Data Warehouse best practices

Videos

MSDN forum

Stack Overflow forum
Source: Azure

February 2017 Leaderboard of Database Systems contributors on MSDN

The Leaderboard initiative was started in October last year to recognize the top contributors on MSDN forums related to Database Systems. Many congratulations to the February 2017 top-10 contributors!

Hilary Cotter and Alberto Morillo top the Overall and Cloud database lists this month. The first 7 featured in last month’s Overall Top-10 as well.

The following continues to be the points hierarchy (in decreasing order of points):

For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com
Source: Azure

An introduction to Azure Analysis Services on Microsoft Mechanics

Last year in October we released the preview of Azure Analysis Services, which is built on the proven analytics engine in Microsoft SQL Server Analysis Services. With Azure Analysis Services you can host semantic data models in the cloud. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc data analysis.

I joined Jeremy Chapman on Microsoft Mechanics to discuss the benefits of Analysis Services in Azure.

 

 

Try the preview of Azure Analysis Services and learn about creating your first data model.
Source: Azure