Azure Firewall and network virtual appliances

Network security solutions can be delivered as appliances on premises, as network virtual appliances (NVAs) that run in the cloud, or as a cloud-native offering (known as firewall-as-a-service).

Customers often ask us how Azure Firewall is different from Network Virtual Appliances, whether it can coexist with these solutions, where it excels, what’s missing, and the TCO benefits expected. We answer these questions in this blog post.

Network virtual appliances (NVAs)

Third-party networking offerings play a critical role in Azure, allowing you to use brands and solutions you already know, trust, and have the skills to manage. Most third-party networking offerings are delivered as NVAs today and provide a diverse set of capabilities such as firewalls, WAN optimizers, application delivery controllers, routers, load balancers, proxies, and more. These third-party capabilities enable many hybrid solutions and are generally available through the Azure Marketplace. For best practices to consider before deploying an NVA, see Best practices to consider before deploying a network virtual appliance.

Cloud native network security

A cloud native network security service (known as firewall-as-a-service) is highly available by design. It auto scales with usage, and you pay as you use it. Support is included at some level, and it has a published and committed SLA. It fits into the DevOps model for deployment and uses cloud-native monitoring tools.

What is Azure Firewall?

Azure Firewall is a cloud native network security service. It offers fully stateful network and application level traffic filtering for VNet resources, with built-in high availability and cloud scalability delivered as a service. You can protect your VNets by filtering outbound, inbound, spoke-to-spoke, VPN, and ExpressRoute traffic. Connectivity policy enforcement is supported across multiple VNets and Azure subscriptions. You can use Azure Monitor to centrally log all events. You can archive the logs to a storage account, stream events to your Event Hub, or send them to Log Analytics or the security information and event management (SIEM) product of your choice.

Is Azure Firewall a good fit for your organization security architecture?

Organizations have diverse security needs. In certain cases, even the same organization may have different security requirements for different environments. As mentioned above, third-party offerings play a critical role in Azure. Today, most next-generation firewalls are offered as NVAs, and they provide a richer next-generation firewall feature set that is a must-have for specific environments and organizations. In the future, we intend to enable chaining scenarios that allow you to use Azure Firewall for specific traffic types, with an option to send all or some traffic to a third-party offering for further inspection. This third-party offering can be either an NVA or a cloud-native solution.

Many Azure customers find the Azure Firewall feature set is a good fit and it provides some key advantages as a cloud native managed service:

DevOps integration – easily deployed using the Azure portal, templates, PowerShell, CLI, or REST.
Built-in HA with cloud scale.
Zero-maintenance service model – no updates or upgrades.
Azure specialization – for example, service tags and FQDN tags.
Significant total cost of ownership savings for most customers.

But for some customers, third-party solutions are a better fit.

The following table provides a high-level feature comparison for Azure Firewall vs. NVAs:

Figure 1: Azure Firewall versus Network Virtual Appliances – Feature comparison

Why Azure Firewall is cost effective

Azure Firewall pricing includes a fixed hourly cost ($1.25/firewall/hour) and a variable per-GB processed cost to support auto scaling. Based on our observations, most customers save 30–50 percent in comparison to an NVA deployment model. We are announcing a price reduction, effective May 1, 2019, of the per-GB cost to $0.016/GB (a 46.6 percent reduction) to ensure that high-throughput customers maintain cost effectiveness. There is no change to the fixed hourly cost. For the most up-to-date pricing information, please go to the Azure Firewall pricing page.
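As a rough sketch of how this pricing model plays out, the following Python snippet estimates a monthly bill from the two published rates. The 730-hour month and the traffic volume are illustrative assumptions, not part of the official pricing calculator.

```python
# Rough monthly-cost sketch using the rates quoted above: $1.25/firewall/hour
# fixed plus $0.016 per GB processed. The 730-hour month and traffic volume
# are illustrative assumptions; see the Azure Firewall pricing page for
# current rates.

HOURLY_RATE = 1.25      # USD per firewall per hour
PER_GB_RATE = 0.016     # USD per GB processed (rate effective May 1, 2019)

def monthly_firewall_cost(gb_processed, hours=730):
    """Fixed hourly charge plus variable data-processing charge."""
    return hours * HOURLY_RATE + gb_processed * PER_GB_RATE

cost = monthly_firewall_cost(51_200)    # e.g. 50 TB processed in a month
```

The fixed component dominates at low traffic volumes, while the per-GB component grows linearly with throughput.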

The following table provides a conceptual TCO view for a NVA with full HA (active/active) deployment:

| Cost | Azure Firewall | NVAs |
| --- | --- | --- |
| Compute | $1.25/firewall/hour; $0.016/GB processed (30%–50% cost saving) | Two or more VMs to meet peak requirements |
| Licensing | | Per NVA vendor billing model |
| Standard Public Load Balancer | | First five rules: $0.025/hour; additional rules: $0.01/rule/hour; $0.005 per GB processed |
| Standard Internal Load Balancer | | First five rules: $0.025/hour; additional rules: $0.01/rule/hour; $0.005 per GB processed |
| Ongoing/Maintenance | Included | Customer responsibility |
| Support | Included in your Azure Support plan | Per NVA vendor billing model |

Figure 2: Azure Firewall versus Network Virtual Appliances – Cost comparison

Next steps

Azure Firewall Documentation
March blog: Announcing new capabilities in Azure Firewall
Pricing
Azure Firewall management partners:

AlgoSec 
Barracuda
Tufin

Source: Azure

Howden: How they built a knowledge mining solution with Azure Search

Customers across industries including healthcare, legal, media, and manufacturing are looking for new solutions to solve business challenges with AI, including knowledge mining with Azure Search.

Azure Search enables developers to quickly apply AI across their content to unlock untapped information.  Custom or prebuilt cognitive skills like facial recognition, key phrase extraction, and sentiment analysis can be applied to content using the cognitive search capability to extract knowledge that’s then organized within a search index. Let’s take a closer look at how one company, Howden, applies the cognitive search capability to reduce time and risk to their business.

Howden, a global engineering company, focuses on providing quality solutions for air and gas handling. With over a century of engineering experience, Howden creates industrial products that help multiple sectors improve their everyday processes, from mine ventilation and wastewater treatment to heating and cooling.

Too many details, not enough time

Every new project requires the creation of a bid proposal. A typical customer bid can span thousands of pages in differing formats such as Word and PDF. The team has to scour detailed customer requirements to identify key areas of design and specialized components in order to produce accurate bids. If they miss critical details, they can bid too low and lose money, or bid too high and lose the customer opportunity. The manual process is time consuming, labor intensive, and creates multiple opportunities for human error. To learn more about knowledge mining with Azure Search and see how Howden built their solution, check out the Microsoft Mechanics show linked below.

Learn more

Leverage the solution accelerator to build your own application
Learn more about Azure Search

Source: Azure

Premium files redefine limits for Azure Files

Premium files sets a new scale and performance bar for Azure Files, providing more power to developers and IT pros.

Today, we are excited to share that Azure Premium Files preview is now available to everyone! Premium files is a new performance tier that unlocks the next level of performance for fully managed file services in the cloud. Premium tier is optimized to deliver consistent performance for IO-intensive workloads that require high-throughput and low latency. Premium shares store data on the latest solid-state drives (SSDs) making it suitable for a wide variety of workloads like file services, databases, shared cache storage, home directories, content and collaboration repositories, persistent storage for containers, media and analytics, high variable and batch workloads, and many more. Our standard tier continues to provide reliable performance to workloads that are less sensitive to performance variability and is well-suited for general purpose file storage, development/test, and application workloads.

Provisioned performance – Dynamically scalable and consistent

With premium files, you can tailor the performance of file storage to your workload's needs. Premium file shares can be dynamically scaled up and down without any downtime. A premium share's IOPS and throughput scale instantly with changes to your provisioned capacity, while still offering low and consistent latency.

Defining premium shares performance:

Baseline IOPS = 1 × provisioned GiB (up to a maximum of 100,000 IOPS)

Burst IOPS = 3 × provisioned GiB (up to a maximum of 100,000 IOPS)

Egress rate = 60 MiB/s + 0.06 × provisioned GiB

Ingress rate = 40 MiB/s + 0.04 × provisioned GiB

Example: A 10 TiB provisioned share gets 10K baseline IOPS, up to 30K burst IOPS, a 675 MiB/s egress rate, and a 450 MiB/s ingress rate. Please note that IOPS and egress/ingress rates can vary based on access patterns and IO sizes, and performance peaks at 100 TiB shares.
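The formulas above can be sketched as a small calculator; the constants come straight from the text, and actual service behavior may differ.

```python
# Calculator for the premium share performance formulas above. The constants
# are taken directly from the text; actual service behavior may differ.

MAX_IOPS = 100_000

def premium_share_perf(provisioned_gib):
    return {
        "baseline_iops": min(1 * provisioned_gib, MAX_IOPS),
        "burst_iops": min(3 * provisioned_gib, MAX_IOPS),
        "egress_mibps": 60 + 0.06 * provisioned_gib,
        "ingress_mibps": 40 + 0.04 * provisioned_gib,
    }

perf = premium_share_perf(10 * 1024)    # 10 TiB = 10,240 GiB, as in the example
```

For the 10 TiB example this yields roughly the figures quoted in the text (egress computes to 674.4 MiB/s, rounded to 675 in the example).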

So, how fast can it get? Let’s take a look at latency.

These sample test results are based on internal testing performed with 8 KiB reads and writes on a single virtual machine (Standard F16s_v2) connected over Server Message Block (SMB) to a premium share. Our tests revealed that premium shares provide low and consistent latency for reads and writes: between two and three milliseconds for small IO sizes of less than 64 KiB, even with varying numbers of parallel threads (up to 10).

Premium shares offer performance with scale. They can massively scale up to 100K IOPS with a target egress rate of 6 GiB/s and ingress rate of 4 GiB/s for 100 TiB shares. To feed throughput-hungry workloads, we raised the bar for premium share throughput even higher. Now, you can get double the total throughput from when we first introduced premium files. In essence, you can get 100 times the IOPS and a total throughput of 10 GiB/s, which is an improvement of 170 times when compared to our current standard files offering.

What about workloads with variable access patterns? Frequently, applications have short peaks of intense IO, with a more predictable IO pattern most of the time. For these scenarios, premium files offers the best out-of-box experience. All premium shares start with a full burst credit balance, a minimum total throughput of 100 MiB/s, and the ability to operate in burst mode.

Let's look at how burst mode works. Any unused baseline IOs accrue in a burst credit bucket. Shares can burst up to three times their baseline IOPS as long as enough IO credits have accrued. On a best-effort basis, all shares can burst up to 3 IOPS per provisioned GiB for up to 60 minutes, and shares larger than 50 TiB can burst for longer than 60 minutes. For more details, please refer to our documentation on bursting.
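The bursting behavior can be modeled as a simple token bucket: unused baseline IOs accrue as credits, and spending credits lets the share serve up to three times baseline. This is a toy simplification of the documented mechanics, offered only for intuition.

```python
# Toy token-bucket model of bursting: unused baseline IOs accrue as credits,
# and spending credits lets the share serve up to 3x baseline. A
# simplification of the documented mechanics (no credit cap is modeled).

def simulate_bursting(baseline_iops, demand_iops, seconds, credits=0):
    """Return (total IOs served, remaining credits) after `seconds`."""
    burst_cap = 3 * baseline_iops
    served_total = 0
    for _ in range(seconds):
        allowed = baseline_iops + min(credits, burst_cap - baseline_iops)
        served = min(demand_iops, allowed)
        credits += baseline_iops - served   # accrue when idle, spend when bursting
        served_total += served
    return served_total, credits

# Idle for 60 seconds accrues 60,000 credits; then a 30-second burst at
# 3x baseline (3,000 IOPS) drains them.
_, credits = simulate_bursting(1000, 0, 60)
served, remaining = simulate_bursting(1000, 3000, 30, credits)
```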

Pricing – Simple and predictable cost

Premium file shares are billed based on provisioned storage rather than used storage. You only pay for each GiB you provision, with no transaction fees or any additional cost for throughput and bursting. This makes it much simpler to determine the total cost of ownership for a premium files-based deployment. Although the per-GiB cost of premium storage is higher than standard storage, with zero transaction fees, built-in bursting capability, and the flexibility to adjust provisioned size, the premium tier can be a more cost-effective solution than the standard tier for some IO-intensive workloads. Refer to the pricing page for additional details.

Availability – Broad and global

At the time of this announcement, the Azure Premium Files public preview is available in the East US 2, East US, West US, West US 2, Central US, North Europe, West Europe, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, and Australia East regions. We are continuing to expand the service to additional Azure regions. Stay up to date on region availability through the Azure products availability page.

Getting started – Quick and easy

It takes two minutes to get started with premium files. The premium tier is offered on a dedicated storage account type, FileStorage. Simply create a new FileStorage account in any available region and create a new share, provisioning its size based on your workload's performance needs. You can use the Azure portal, PowerShell, or CLI to create premium shares, and any of your favorite Azure Files client tools and/or libraries to access data. Please see the detailed steps for how to create a premium file share.

Currently, the Azure portal allows creating premium shares up to 5 TiB; a portal update supporting larger shares is coming soon. Meanwhile, you can use Azure PowerShell or CLI to create shares larger than 5 TiB, or to grow shares created through the portal beyond 5 TiB.

Next steps

Visit Azure Premium Files documentation to learn more and give it a try.

As always, you can share your feedback and experiences on the Azure Storage forum or just email us at PFSFeedback@microsoft.com. Post your ideas and suggestions about Azure Storage on Azure Storage feedback forum.

Happy sharing!
Source: Azure

Azure SQL Database Edge: Enabling intelligent data at the edge

The world of data changes at a rapid pace, with more and more data projected to be stored and processed at the edge. Microsoft has enabled enterprises to adopt a common programming surface area in their data centers with Microsoft SQL Server and in the cloud with Azure SQL Database. Latency, data governance, and network connectivity continue to pull data compute needs toward the edge. New sensors and chip innovations with analytical capabilities at lower cost enable more edge compute scenarios, driving higher agility for business.

At Microsoft Build 2019, we announced Azure SQL Database Edge, available in preview, to help address the requirements of data and analytics at the edge using the performant, highly available and secure SQL engine. Developers will now be able to adopt a consistent programming surface area to develop on a SQL database and run the same code on-premises, in the cloud, or at the edge.

Azure SQL Database Edge offers:

A small footprint that allows the database engine to run on ARM and x64 devices via containers on interactive devices, edge gateways, and edge servers.
Develop-once, deploy-anywhere scenarios through a common programming surface area across Azure SQL Database, SQL Server, and Azure SQL Database Edge.
Data streaming and time-series support, combined with in-database machine learning, to enable low-latency analytics.
The industry-leading security capabilities of Azure SQL Database to protect data at rest and in motion on edge devices and edge gateways, with management from a central portal in Azure IoT.
Cloud-connected and fully disconnected edge scenarios with local compute and storage.
Support for existing business intelligence (BI) tools for creating powerful visualizations with Power BI and third-party BI tools.
Bi-directional data movement between the edge and on-premises or the cloud.
Compatibility with the popular T-SQL language, so developers can implement complex analytics using R, Python, Java, and Spark, delivering instant analytics and faster real-time insights without data movement.
Support for processing and storing graph, JSON, and time-series data in the database, coupled with the ability to apply analytics and in-database machine learning to non-relational data types.

For example, manufacturers that employ the use of robotics or automated work processes can achieve optimal efficiencies by using Azure SQL Database Edge for analytics and machine learning at the edge. These real-world environments can leverage in-database machine learning for immediate scoring, initiating corrective actions, and detecting anomalies.

Key benefits:

A consistent programming surface area with Azure SQL Database and SQL Server: the SQL engine at the edge allows engineers to build once for on-premises, the cloud, or the edge.
Streaming capability that enables instant analysis of incoming data for intelligent insights.
In-database AI capabilities that enable scenarios like anomaly detection, predictive maintenance, and other analytics without having to move data.

Train in the cloud and score at the edge

Because the programming surface area is consistent on-premises, in the cloud, and at the edge, developers can use identical methods for securing data in motion and at rest, while enabling high availability and disaster recovery architectures equal to those used in Azure SQL Database and SQL Server. This seamless portability means an algorithm can be trained in a cloud data warehouse and the resulting machine learning model pushed to Azure SQL Database Edge to run scoring locally, delivering real-time scoring from a single codebase.

Intelligent store and forward

The engine can take streaming datasets and replicate them directly to the cloud, while also enabling an intelligent store-and-forward pattern. At the same time, the edge can apply its analytical capabilities while processing streaming data, or apply machine learning using in-database machine learning. Fundamentally, the engine can process data locally and upload it using native replication to a central data center or the cloud for aggregated analysis across multiple edge hubs.

Unlock additional insights for your data that resides at the edge. Join the Early Adopter Program to access the preview and get started building your next intelligent edge solution.
Source: Azure

Azure.Source – Volume 82.

What a great week we had at Build 2019! We all had tremendous fun meeting developers, talking about new technologies, and sharing our vision for the future. Plus, the weather was nearly perfect, and attendees had time to see some sights and sample Seattle’s terrific restaurant scene.
Source: Azure

Take your machine learning models to production with new MLOps capabilities

This blog post was authored by Jordan Edwards, Senior Program Manager, Microsoft Azure.

At Microsoft Build 2019 we announced MLOps capabilities in Azure Machine Learning service. MLOps, also known as DevOps for machine learning, is the practice of collaboration and communication between data scientists and DevOps professionals to help manage the production machine learning (ML) lifecycle.

Azure Machine Learning service’s MLOps capabilities provide customers with asset management and orchestration services, enabling effective ML lifecycle management. With this announcement, Azure is reaffirming its commitment to help customers safely bring their machine learning models to production and solve their business’s key problems faster and more accurately than ever before.

Here is a quick look at some of the new features:

Azure Machine Learning Command Line Interface (CLI) 

Azure Machine Learning’s management plane has historically been via the Python SDK. With the new Azure Machine Learning CLI, you can easily perform a variety of automated tasks against the ML workspace including:

Compute target management

Experiment submission

Model registration and deployment

Management capabilities

Azure Machine Learning service introduced new capabilities to help manage the code, data, and environments used in your ML lifecycle.

Code management

Git repositories are commonly used in industry for source control management and as key assets in the software development lifecycle. We are including our first version of Git repository tracking – any time you submit code artifacts to Azure Machine Learning service, you can specify a Git repository reference. This is done automatically when you are running from a CI/CD solution such as Azure Pipelines.

Data set management

With Azure Machine Learning data sets you can version, profile, and snapshot your data to enable you to reproduce your training process by having access to the same data. You can also compare data set profiles and determine how much your data has changed or if you need to retrain your model.

Environment management

Azure Machine Learning Environments are shared across Azure Machine Learning scenarios, from data preparation to model training to inferencing. Shared environments help to simplify handoff from training to inferencing as well as the ability to reproduce a training environment locally.

Environments provide automatic Docker image management (and caching!), plus tracking to streamline reproducibility.

Simplified model debugging and deployment

Some data scientists have difficulty getting an ML model prepared to run in a production system. To alleviate this, we have introduced new capabilities to help you package and debug your ML models locally, prior to pushing them to the cloud. This should greatly reduce the inner loop time required to iterate and arrive at a satisfactory inferencing service, prior to the packaged model reaching the datacenter.

Model validation and profiling 

Another challenge that data scientists commonly face is guaranteeing that models will perform as expected once they are deployed to the cloud or the edge. With the new model validation and profiling capabilities, you can provide sample input queries to your model. We will automatically deploy and test the packaged model on a variety of inference CPU/memory configurations to determine the optimal performance profile. We also check that the inference service is responding correctly to these types of queries.
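Conceptually, the profiling step amounts to trying the packaged model against several configurations and picking the cheapest one that meets a latency target. The sketch below is a made-up illustration of that idea: the configurations and p95 latency numbers are invented, whereas the real service measures them by actually deploying and querying the model.

```python
# Conceptual sketch of model profiling: test a model on several CPU/memory
# configurations and pick the cheapest one that meets a latency target.
# The configurations and p95 latency numbers below are invented for
# illustration; the service measures real deployments.

configs = [
    {"cpu": 1, "mem_gb": 1, "p95_latency_ms": 120},
    {"cpu": 2, "mem_gb": 2, "p95_latency_ms": 60},
    {"cpu": 4, "mem_gb": 8, "p95_latency_ms": 35},
]

def pick_config(configs, target_ms):
    """Cheapest (fewest CPUs, least memory) config meeting the latency target."""
    ok = [c for c in configs if c["p95_latency_ms"] <= target_ms]
    return min(ok, key=lambda c: (c["cpu"], c["mem_gb"])) if ok else None

chosen = pick_config(configs, target_ms=100)
```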

Model interpretability

Data scientists want to know why models predict in a specific manner. With the new model interpretability capabilities, we can explain why a model is behaving a certain way during both training and inferencing.

ML audit trail

Azure Machine Learning is used for managing all of the artifacts in your model training and deployment process. With the new audit trail capabilities, we are enabling automatic tracking of the experiments and datasets that correspond to your registered ML model. This helps to answer the question, “What code/data was used to create this model?”

Azure DevOps extension for machine learning

Azure DevOps provides tools data scientists commonly leverage to manage code, work items, and CI/CD pipelines. With the Azure DevOps extension for machine learning, we are introducing new capabilities to make it easy to manage your ML CI/CD pipelines with the same tools you use for software development processes. The extension includes the abilities to trigger an Azure Pipelines release on model registration, easily connect an Azure Machine Learning workspace to an Azure DevOps project, and perform a series of tasks designed to make interacting with Azure Machine Learning as easy as possible from your existing automation tooling.

Get started today

These new MLOps features in the Azure Machine Learning service aim to enable users to bring their ML scenarios to production by supporting reproducibility, auditability, and automation of the end-to-end ML lifecycle. We’ll be publishing more blogs that go in-depth with these features in the following weeks, so follow along for the latest updates and releases.

Learn more about Azure Machine Learning service
Get started today with a free trial

Source: Azure

Azure SQL Data Warehouse releases new capabilities for performance and security

As the amount of data stored and queried continues to rise, it becomes increasingly important to have the most price-performant data warehouse. While we're excited to be the industry leader in both Gigaom's TPC-H and TPC-DS benchmark reports, we don't plan to stop innovating on behalf of our customers.

As Rohan Kumar mentioned in his blog on Monday, we’re excited to introduce several new features that will continue to make Azure SQL Data Warehouse the unmatched industry leader in price-performance, flexibility, and security.

To enable customers to continue improving the performance of their applications without adding any additional cost, we’re announcing preview availability of result-set caching, materialized views, and ordered clustered columnstore indexes.

In addition to price-performance enhancements, we’ve added new capabilities that enable customers to be more agile and flexible. The first is workload importance, which is a new feature that enables users to decide how workloads with conflicting needs get prioritized. Second, our new support for automatic statistics maintenance (auto-update statistics) means that manageability and maintenance of Azure SQL Data Warehouse just got easier and more effective. And finally, we’re also adding support for managing and querying JSON data. Users can now load JSON data directly into their data warehouses and mix it with other relational data, leading to faster and easier insights.

Our last announcement focuses on security and privacy. As you know, deploying data warehousing solutions in the cloud demands sophisticated and robust security. While Azure SQL Data Warehouse already enables an advanced security model to be deployed, today we’re announcing support for Dynamic Data Masking (DDM). DDM allows you to protect private data, through user-defined policies, ensuring it’s visible only to those that have permission to see it.

In the sections below, we’ll dive into these new features and the benefits that each provide.

Price-performance

Price-performance is a recurring theme in our releases because it ensures we provide one of the fastest analytics services at incredible value. With the new functionality announced today, we continue to demonstrate our commitment to offering the leading price-performance platform.

Interactive dashboarding with result-set caching (preview)

Interactive dashboards come with predictable and repetitive query patterns. Result-set caching, now available in preview, helps with this scenario as it enables instant query response times while reducing time-to-insight for business analysts and reporting users.

With result-set caching enabled, Azure SQL Data Warehouse automatically caches results from repetitive queries, causing subsequent query executions to return results from the persisted cache and skip full query execution. In addition to saving compute cycles, queries satisfied by the result-set cache do not use any concurrency slots and thus do not count against existing concurrency limits. For security reasons, only users with the appropriate security credentials can access the cached result sets.
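To make the caching behavior concrete, here is a toy Python model of a result-set cache: identical query text is answered from cache without re-execution, and the cache is invalidated when the underlying data changes. This is a drastic simplification of the service, offered only for intuition.

```python
# Toy result-set cache: identical query text is answered from cache, and the
# cache is invalidated when data changes. A drastic simplification of the
# real feature, which also checks permissions and freshness automatically.

class ResultSetCache:
    def __init__(self):
        self.cache = {}
        self.executions = 0     # how many times a query actually ran

    def query(self, sql, run):
        if sql not in self.cache:
            self.executions += 1            # full execution only on a miss
            self.cache[sql] = run(sql)
        return self.cache[sql]

    def invalidate(self):
        self.cache.clear()                  # e.g. after a load into a base table

cache = ResultSetCache()
run = lambda sql: 42                        # stand-in for actual query execution
cache.query("SELECT COUNT(*) FROM sales", run)
cache.query("SELECT COUNT(*) FROM sales", run)   # served from cache
cache.invalidate()                               # data changed
cache.query("SELECT COUNT(*) FROM sales", run)   # re-executed
```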

Materialized views to improve performance (preview)

Another new feature that greatly enhances query performance for a wide set of queries is materialized view support, now available in preview. A materialized view improves the performance of complex queries (typically queries with joins and aggregations) while offering simple maintenance operations.

When materialized views are created, the Azure SQL Data Warehouse query optimizer transparently and automatically rewrites user queries to leverage deployed materialized views, leading to improved query performance. Best of all, as data gets loaded into base tables, Azure SQL Data Warehouse automatically maintains and refreshes materialized views, simplifying their maintenance and management. As user queries leverage materialized views, they run significantly faster and use fewer system resources. The more complex and expensive the query within the view, the bigger the potential savings in execution time.
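The idea behind automatic materialized view maintenance can be sketched in a few lines: the "view" is a precomputed aggregate that is updated as rows are loaded into the base table, so aggregate queries never rescan the table. This is a hypothetical illustration, not how the engine is implemented.

```python
# Toy materialized aggregate: the "view" (sum of sales per region) is updated
# as rows are loaded, so aggregate queries read the precomputed result rather
# than rescanning the base table. Purely illustrative.

from collections import defaultdict

base_table = []                          # rows of (region, amount)
sales_by_region = defaultdict(float)     # the "materialized view"

def load(rows):
    for region, amount in rows:
        base_table.append((region, amount))
        sales_by_region[region] += amount    # view maintained at load time

load([("east", 10.0), ("west", 5.0), ("east", 2.5)])
east_total = sales_by_region["east"]     # answered without scanning base_table
```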

Fast scans with ordered clustered columnstore indexes (preview)

Columnstore is a key enabler for storing and efficiently querying large amounts of data. For each table, it divides incoming data into row groups and each column of a row group forms a segment on a disk. When querying columnstore indexes, only the column segments that are relevant to user queries are read from the disk. Ordered clustered columnstore indexes further optimize query execution by enabling efficient segment elimination.

Because the data is pre-ordered, the number of segments read from disk can be drastically reduced, leading to faster query processing. Ordered clustered columnstore indexes are now available in preview, and queries containing filters and predicates can greatly benefit from this feature.
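A small sketch can show why ordering helps segment elimination. Each segment records min/max metadata, and a filtered scan reads only segments whose range overlaps the predicate; with ordered data the ranges are tight and almost everything is skipped. The segment size and data below are arbitrary illustrative choices.

```python
# Each column segment keeps (min, max) metadata; a filtered scan reads only
# segments whose range overlaps the predicate. With ordered data the ranges
# are tight, so far fewer segments are touched. A 100-row segment size and
# the 0..999 data set are arbitrary illustrative choices.

def build_segments(values, size):
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    return [(min(c), max(c), c) for c in chunks]

def segments_read(segs, lo, hi):
    # a segment qualifies if its [mn, mx] range overlaps the predicate [lo, hi]
    return sum(1 for mn, mx, _ in segs if mx >= lo and mn <= hi)

data = list(range(1000))
scrambled = [(i * 37) % 1000 for i in range(1000)]  # same values, no ordering

unordered_reads = segments_read(build_segments(scrambled, 100), 0, 99)
ordered_reads = segments_read(build_segments(sorted(data), 100), 0, 99)
# With ordering, the filter touches 1 segment instead of all 10.
```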

Flexibility

As business requirements evolve, the ability to change and adapt solution behavior is one of the key benefits of a modern data warehousing product. The ability to handle and manage heterogeneous data that enterprises have while offering ease of use and management is critical. To support these needs, Azure SQL Data Warehouse is introducing the following new functionalities to help you deal with ever-evolving requirements.

Prioritize workloads with workload importance (general availability)

Running mixed workloads on your analytics solution is often a necessity to effectively and quickly execute business processes. In situations where resources are constrained, the capability to decide which workloads need to be executed first is critical, as it helps with overall solution cost management. For instance, executive dashboard reports may be more important than ad-hoc queries. Workload importance now enables this scenario. Requests with higher importance are guaranteed quicker access to resources, which helps meet predefined SLAs and ensures important requests are prioritized.

Workload classification concept

To define workload priority, various requests must be classified. Azure SQL Data Warehouse supports flexible classification policies that can be set for a SQL query, a database user, database role, Azure Active Directory login, or Azure Active Directory group. Workload classification is achieved using the new CREATE WORKLOAD CLASSIFIER syntax.

The diagram below illustrates the workload classification and importance function:

Workload importance concept

Workload importance is established through classification. Importance influences a requester's access to system resources, including memory, CPU, IO, and locks. A request can be assigned one of five levels of importance: low, below_normal, normal, above_normal, and high. If a request with above_normal importance is scheduled, it gets access to resources before a request with the default normal importance.
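As a mental model (not the actual engine scheduler), importance-based ordering behaves like a priority queue keyed on importance level and then arrival order.

```python
# Mental model of importance-based scheduling (not the actual engine): queued
# requests are ordered by importance level, then arrival order, so a later
# above_normal request is served before an earlier normal one.

import heapq
import itertools

LEVELS = {"low": 4, "below_normal": 3, "normal": 2, "above_normal": 1, "high": 0}
_arrival = itertools.count()

def submit(queue, name, importance="normal"):
    # lower tuples pop first: importance rank, then arrival order
    heapq.heappush(queue, (LEVELS[importance], next(_arrival), name))

queue = []
submit(queue, "ad_hoc_query")                              # normal importance
submit(queue, "exec_dashboard", importance="above_normal")
first = heapq.heappop(queue)[2]     # the dashboard request jumps the queue
```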

Manage and query JSON data (preview)

Organizations are increasingly faced with multiple data sources and heterogeneous file formats, with JSON among the most common alongside CSV. To speed up time to insight and minimize unnecessary data transformation processes, Azure SQL Data Warehouse now supports querying JSON data. This feature is now available in preview.

Business analysts can now use the familiar T-SQL language to query and manipulate documents that are formatted as JSON data. JSON functions, such as JSON_VALUE, JSON_QUERY, JSON_MODIFY, and OPENJSON are now supported in Azure SQL Data Warehouse. Azure SQL Data Warehouse can now effectively support both relational and non-relational data, including joins between the two, while enabling users to use their traditional BI tools, such as Power BI.
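For intuition, extracting a scalar from a JSON document with JSON_VALUE is analogous to the following Python; in the warehouse itself you would express this directly in T-SQL.

```python
# Python analogue of JSON_VALUE-style scalar extraction, for intuition only.
# In Azure SQL Data Warehouse you would write, e.g.,
#   SELECT JSON_VALUE(doc, '$.customer.name') FROM docs
# directly in T-SQL.

import json

doc = '{"customer": {"name": "Contoso", "orders": 3}}'
name = json.loads(doc)["customer"]["name"]   # like JSON_VALUE(doc, '$.customer.name')
```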

Automatic statistics maintenance and update (preview)

Azure SQL Data Warehouse implements a cost-based optimizer to ensure optimal execution plans are being generated and used. For any cost-based optimizer to be effective, column level statistics are needed. When these statistics are stale, there is potential for selecting a non-optimal plan, leading to slower query performance.

Today, we’re extending that support for auto statistics creation by adding the ability to automatically refresh and maintain statistics. As data warehouse tables get loaded and updated, the system can now automatically detect and update out-of-date statistics. With the auto-update statistics capability now available in preview, Azure SQL Data Warehouse delivers full statistics management capabilities while simplifying statistics maintenance processes. You no longer need to manually maintain statistics, which leads to a simplified and more cost-effective data warehouse deployment.

Security

Azure SQL Data Warehouse provides some of the most advanced security and privacy features in the market, achieved by using proven SQL Server technology. SQL Server, the core engine of Azure SQL Data Warehouse, has been the least vulnerable database over the last eight years according to the NIST National Vulnerability Database. To expand Azure SQL Data Warehouse's existing security and privacy features, we're announcing that Dynamic Data Masking (DDM) support is now available in preview.

Protect sensitive data with dynamic data masking (preview)

Dynamic data masking (DDM) enables administrators and data developers to control access to their company’s data, allowing sensitive data to be safe and restricted. It prevents unauthorized access to private data by obscuring the data on-the-fly. Based on user-defined data masking policies, Azure SQL Data Warehouse can dynamically obfuscate data as the queries execute, and before results are shown to users.

Azure SQL Data Warehouse implements the DDM capability directly inside the engine. When creating tables with DDM, policies are stored in the system's metadata and then enforced by the engine as queries are executed. This centralized policy enforcement simplifies the management of data masking rules, as access control is not implemented and repeated at the application layer. As various users query tables, policies are automatically honored and applied to protect sensitive data. DDM comes with flexible policies: you can choose to define a partial mask, which exposes some of the data in the selected columns, or a full mask that obfuscates the data completely. Azure SQL Data Warehouse also provides built-in masking functions that users can choose from.
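As a toy illustration of what a partial mask does (not the actual DDM implementation), the function below exposes a prefix and suffix of a value and obscures the middle. The parameter choices mirror the shape of a partial masking rule and are purely illustrative.

```python
# Toy partial mask in the spirit of a DDM partial masking rule: expose a
# prefix and suffix, obscure the middle. Parameter names and behavior here
# are illustrative; real policies are defined in T-SQL and enforced by the
# engine at query time.

def partial_mask(value, prefix=1, padding="XXXX", suffix=2):
    if len(value) <= prefix + suffix:
        return padding          # too short to expose anything safely
    return value[:prefix] + padding + value[-suffix:]

masked = partial_mask("4111111111111111")   # a sample card number
```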

Next steps

Get started with a free Azure SQL Data Warehouse account.
Learn more about workload management concepts and workload management scenarios.
Learn more about why analytics in Azure is simply unmatched.

Please note that the preview features mentioned in this blog are being rolled out to all regions. Check the version deployed to your instance and review the latest Azure SQL Data Warehouse release notes to learn more. For preview questions, please contact AskADWPreview@microsoft.com.
Source: Azure