Enhance Azure SQL Data Warehouse performance with new monitoring functionality for Columnstore

Azure SQL Data Warehouse (SQL DW) is a SQL-based, petabyte-scale, massively parallel cloud solution for data warehousing. It is fully managed and highly elastic, enabling you to provision and scale capacity in minutes. You can scale compute and storage independently, allowing you to cover everything from burst to archival scenarios.

Azure SQL DW is powered by a Columnstore engine to provide super-fast performance for analytic workloads. This is the same Columnstore engine that is built into Microsoft's industry-leading SQL Server database. To get full speed from Azure SQL DW, it is important to maximize Columnstore row group quality. A row group is the chunk of rows that are compressed together in the Columnstore. To enable easier monitoring and tuning of row group quality, we are now exposing a new Dynamic Management View (DMV).

What is a High Quality Row Group?

A row group with 1 million rows (1,048,576 rows to be precise) is of ideal quality, and under the right circumstances this is what Azure SQL DW will create. Under sub-optimal conditions such as insufficient memory, row groups with fewer rows are created. This can adversely impact compression quality as well as increase the per-row overhead of ancillary structures for row groups. This in turn can dramatically reduce the performance of your queries (note: SQL DW now prevents creation of row groups with fewer than 10,000 rows).

How to Monitor Row Group Quality?

Azure SQL DW now has a new DMV (sys.dm_pdw_nodes_db_column_store_row_group_physical_stats) that exposes information about the physical statistics of row groups for a Columnstore table. Don’t be intimidated by the long name: we provide a convenient view (vCS_rg_physical_stats) that you can use to get information from this DMV. The key piece of information is trim_reason_desc, which indicates whether a row group was prematurely trimmed. If it was not trimmed, it is of ideal quality (trim_reason_desc = NO_TRIM). If it was trimmed, trim_reason_desc is set to the trim reason, such as MEMORY_LIMITATION or DICTIONARY_SIZE. The example screenshot below shows a snapshot of a table with poor-quality row groups due to various trim reasons.
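With the convenience view in place, spotting trimmed row groups is a single query. A minimal sketch using sqlcmd, assuming the vCS_rg_physical_stats view has already been created in your database; the server, database, and login names below are placeholders to substitute with your own:

```shell
# List every row group that was trimmed before reaching the ideal
# 1,048,576 rows. Server, database, and credentials are placeholders.
sqlcmd -S yourserver.database.windows.net -d yourdw -U loaduser -P '<password>' -I -Q "
SELECT *
FROM   vCS_rg_physical_stats
WHERE  trim_reason_desc <> 'NO_TRIM';"
```

Any row group appearing in this result set was trimmed for the reason shown in its trim_reason_desc column.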

How to Improve Row Group Quality?

Once you identify trimmed row groups, there are corrective actions you can take to fix them, depending on what trim_reason_desc says. Here are some tips for the most significant ones:

BULKLOAD: The trim reason is set to BULKLOAD when the incoming batch of rows for the load had fewer than 1 million rows. The engine will create compressed row groups any time more than 100,000 rows are being inserted (as opposed to inserting into the delta store), but will set the trim reason to BULKLOAD. To get past this, consider increasing your batch load window to accumulate more rows. Also, re-evaluate your partitioning scheme to ensure it is not too granular, as row groups cannot span partition boundaries.
MEMORY_LIMITATION: To create row groups with 1 million rows, a certain amount of working memory is required by the engine. When the available memory of the loading session is less than the required working memory, row groups get prematurely trimmed. The columnstore compression article explains what you can do to fix this, but in a nutshell the rule of thumb is to use at least a mediumrc user to load your data. You would also need to be on a sufficiently large SLO to have enough memory for your loading needs.
DICTIONARY_SIZE: This indicates that row group trimming occurred because there was at least one string column with very wide and/or high-cardinality strings. The dictionary size is limited to 16 MB in memory, and once this limit is reached the row group is compressed. If you run into this situation, consider isolating the problematic column into a separate table.
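Once the underlying cause is addressed, the already-trimmed row groups still need to be re-compressed. One common sequence is sketched below under stated assumptions: a Gen1-era resource-class model assigned via database roles, and hypothetical loaduser login and dbo.FactSales table names. Grant the loading user a larger resource class, then rebuild the columnstore index as that user.

```shell
# Grant the loading user a larger resource class (run as an admin), then
# rebuild the index as that user so the row groups are re-compressed with
# enough working memory. Login and table names are placeholders.
sqlcmd -S yourserver.database.windows.net -d yourdw -U admin -P '<password>' -I -Q "EXEC sp_addrolemember 'mediumrc', 'loaduser';"
sqlcmd -S yourserver.database.windows.net -d yourdw -U loaduser -P '<password>' -I -Q "ALTER INDEX ALL ON dbo.FactSales REBUILD;"
```

After the rebuild completes, re-run the vCS_rg_physical_stats query to confirm the trim reasons have changed to NO_TRIM.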

The screenshot below shows a copy of the same table with row group quality fixed by following the recommendations to avoid trimming due to MEMORY_LIMITATION.

Next Steps

Now that you know how to monitor your Columnstore row group quality, you can maintain it for optimal performance, both proactively as part of your regular loads and by fixing quality issues if they arise. If you are not already using Azure SQL DW, we encourage you to try it out for your Business Intelligence and Business Analytics workloads.

Learn More

Check out the many resources for learning more about Azure SQL DW, including:

What is Azure SQL Data Warehouse?

SQL Data Warehouse Best Practices

MSDN Forum

Stack Overflow Forum
Source: Azure

Amazon WorkDocs Achieves HIPAA Eligibility and PCI DSS Compliance

Amazon WorkDocs is now HIPAA eligible and PCI DSS compliant. If you have an executed Business Associate Agreement (BAA) with Amazon Web Services (AWS), you can now use Amazon WorkDocs for HIPAA-compliant file storage and collaboration, including files that contain protected health information (PHI). With PCI DSS compliance, you can now use Amazon WorkDocs to store and collaborate on files that contain sensitive cardholder data (CHD).
Source: aws.amazon.com

Announcing App Service Isolated, more power, scale and ease of use

Today, we are announcing the general availability of App Service Isolated, which brings the simplicity of multi-tenant App Service to the secure, dedicated virtual networks powered by App Service Environment (ASE). 

Azure App Service is Microsoft’s leading PaaS (Platform as a Service) offering, hosting over 1 million external apps and sites. It helps you build, deploy, and scale web, mobile, and API apps instantaneously without worrying about the underlying infrastructure. It allows you to leverage your existing skills by supporting an increasing array of languages, frameworks, and popular OSS, and has built-in capabilities that streamline your CI/CD pipeline. ASE was introduced in 2015 to offer customers network isolation, enhanced control, and increased scale options. The updated ASE capabilities that come with the new pricing tier, App Service Isolated, now allow you to run apps in your dedicated virtual networks with even better scale and performance through an intuitive user experience.

Streamlined scaling

Scaling up or out is now easier. The new ASE eliminates the need to manage and scale worker pools. To scale, either choose a larger Isolated plan (to scale up) or add instances (to scale out), just like the multi-tenant App Service. It’s that easy. To further increase scaling flexibility, App Service Isolated comes with a maximum default scale of 100 Isolated plan instances. You now have more capacity for large implementations.
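Both operations map onto the standard Azure CLI surface for App Service plans. A sketch, assuming an existing resource group my-rg and Isolated plan my-ase-plan (both names are hypothetical):

```shell
# Scale up: move the plan to a larger Isolated size (e.g. I2).
az appservice plan update --resource-group my-rg --name my-ase-plan --sku I2

# Scale out: raise the instance count, up to the default maximum of 100.
az appservice plan update --resource-group my-rg --name my-ase-plan --number-of-workers 10
```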

Upgraded performance

The new ASE uses dedicated Dv2-based machines boasting faster chipsets, SSD storage, and twice the memory per core when compared to the first generation. The dedicated worker sizes for the new ASE are 1 core with 3.5 GB RAM, 2 cores with 7 GB RAM, and 4 cores with 14 GB RAM. With this upgraded infrastructure, you will be able to run your apps with lower latency, have more power to handle heavier workloads, and support more users.

Simplified experience

Creating the new ASE is easy. By selecting the Isolated pricing tier, App Service will create an App Service Plan (ASP) and a new ASE directly. You will just need to specify the Virtual Network that you want to deploy your applications to. There is no separate workflow required to spin up a new ASE in your secure, dedicated virtual networks.

We’ve made App Service Environment (ASE) faster, more efficient, and easier to deploy into your virtual network, enabling you to run apps with Azure App Service at high scale in an isolated network environment. Check out the Azure Friday video. Partners and customers can also learn more about how to get started and set up.
Source: Azure

Azure App Service Premium V2 in Public Preview

Azure App Service is announcing the Public Preview of the Premium V2 tier! App Service is a platform-as-a-service (PaaS) offer that allows you to quickly build, deploy, and scale enterprise-grade web, mobile, and API apps running on any platform. Apps running on App Service can meet rigorous performance, scalability, security, and compliance requirements while leveraging a fully-managed platform to take care of infrastructure maintenance. The new Premium V2 tier features Dv2-series VMs with even faster processors, SSD storage, and double the memory-to-core ratio compared to the previous compute iteration.

The following web worker sizes are available with Premium V2:

Small (1 CPU core, 3.5 GiB memory)
Medium (2 CPU cores, 7 GiB memory)
Large (4 CPU cores, 14 GiB memory)

During the Preview timeframe, the pricing for App Service Premium V2 is identical to the pricing for the existing App Service Premium tier. Simply put, you are getting better performance and scalability for the same pricing with Premium V2. All features included with App Service Premium, such as auto scaling, CI/CD support, and test in production, are available with App Service Premium V2 as well.

To use Premium V2, you can pick the Premium V2 option from the pricing tier selector in the Azure Portal. If you want to use Premium V2 with your existing app running on App Service, you can choose to redeploy or clone your app to a new Premium V2 App Service Plan. Premium V2 will be available in a growing list of regions, starting with South Central US, West Europe, North Europe, Australia East, and Australia Southeast.

Getting started: you can automate the provisioning of Premium V2 resources via an ARM template such as the sample below.

{
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "sku": {
        "Tier": "PremiumV2",
        "Name": "PV2_1"
      },
      "name": "AppServicePlanName",
      "apiVersion": "2016-03-01",
      "location": "Australia East",
      "properties": {
        "name": "AppServicePlanName"
      }
    }
  ]
}
You can run the ARM template via the Azure Portal or PowerShell. For example, the following sample sequence can be used to create a resource group and provision a Premium V2 App Service Plan:

New-AzureRmResourceGroup -Name {e.g. PremiumV2ResourceGroup} -Location {e.g. "West Europe"}
New-AzureRmResourceGroupDeployment -ResourceGroupName {e.g. PremiumV2ResourceGroup} -TemplateFile {local path for your template .json file}
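The same template can also be deployed with the cross-platform Azure CLI. A sketch, assuming the template above is saved locally as premiumv2.json (the file and resource group names are hypothetical):

```shell
# Create a resource group, then deploy the Premium V2 plan template into it.
az group create --name PremiumV2ResourceGroup --location "West Europe"
az group deployment create --resource-group PremiumV2ResourceGroup --template-file premiumv2.json
```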
We invite you to try out the new Premium V2 tier and provide your feedback via the App Service MSDN forum. You can also submit feature ideas via the App Service feedback forum.
Source: Azure

Fast and Easy Containers: Azure Container Instances

Containers have fundamentally changed the way developers develop their applications, the way applications are deployed, and the way system administrators manage their environments. Containers offer a broadly accepted and open standard, enabling simple portability between platforms and between clouds. Today, I am extremely excited to announce a new Azure service that makes it even easier to deploy containers: Azure Container Instances (ACI). The first service of its kind in the cloud, ACI delivers containers with great simplicity and speed and without any Virtual Machine infrastructure to manage. ACI is the fastest and easiest way to run a container in the cloud.

An Azure Container Instance is a single container that starts in seconds and is billed by the second. ACI offers highly versatile sizing, allowing you to select the exact amount of memory separately from the exact count of vCPUs, so your application fits the infrastructure perfectly. Your containers won’t be billed for a second longer than is required and won’t use a GB more than is needed. With ACI, containers are a first-class object of the Azure platform, offering Role-Based Access Control (RBAC) on the instance and billing tags to track usage at the individual container level. As the service directly exposes containers, there is no VM management to think about and no higher-level cluster orchestration concepts to learn. It is simply your code, in a container, running in the cloud.

For those beginning their container journey, Azure Container Instances provide a simple experience to get started with containers in the cloud, enabling you to quickly create and deploy new containers with only a few simple parameters. Here is a sample CLI command that will deploy to ACI:

az container create -g aci_grp --name nginx --image library/nginx --ip-address public

and if you want to control the exact GB of memory and CPU count:

az container create -g aci_grp --name nginx --image library/nginx --ip-address public --cpu 2 --memory 10

Container Instances are available today in public preview for Linux containers. Windows container support will be available in the coming weeks. You can deploy using the Azure CLI or using a template. Furthermore, you can quickly and easily deploy from a public repository, like Docker Hub, or pull from your own private repository using the Azure Container Registry. Each container deployed is securely isolated from other customers using proven virtualization technology.
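After deployment, the same CLI surface lets you inspect a running instance. A sketch against the nginx container created in the commands above:

```shell
# Show the container group's provisioning state and public IP address.
az container show -g aci_grp --name nginx

# Fetch the container's stdout/stderr logs.
az container logs -g aci_grp --name nginx
```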

The above shows the simplicity of ACI. While Azure Container Instances are not an orchestrator and are not intended to replace orchestrators, they will fuel orchestrators and other services as a container building block. In fact, as part of today’s announcement, we are also releasing in open source the ACI Connector for Kubernetes, a connector that enables Kubernetes clusters to deploy to Azure Container Instances. This enables on-demand and nearly instantaneous container compute, orchestrated by Kubernetes, without having VM infrastructure to manage and while still leveraging the portable Kubernetes API. It allows you to utilize both VMs and container instances simultaneously in the same Kubernetes cluster, giving you the best of both worlds: Azure Container Instances can be used for fast bursting and scaling, whereas VMs can be used for more predictable scaling, and workloads can even migrate back and forth between these underlying infrastructure models. This offers a level of agility for deploying Kubernetes unlike any other cloud provider, enabling services that start in seconds without any underlying VMs and are billed and scaled per second.

Here is a demo of the ACI Connector in action:

We continue to increase our investment and community engagement with containers and with Kubernetes, including Helm, our recent release of Draft, and the open-source ACI connector released today. With these community releases, we continue to learn how important it is to have an open ecosystem to drive innovation in this growing container space, an exciting and humbling experience. To continue our education and community engagement, I am also excited to announce that Microsoft has joined the Cloud Native Computing Foundation (CNCF) as a Platinum member. CNCF is a Collaborative Project of the Linux Foundation (that Microsoft joined last year) which hosts and provides governance for a wide range of projects including Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, containerd, Helm, gRPC, and many others. I am really excited to work closely with the CNCF community and have Gabe Monroy (Lead PM, Containers @ Microsoft Azure) join the CNCF board.

I hope you try out these new services and give us feedback. I am excited to see what you are going to build!

See ya around,

Corey
Source: Azure