Securely monitoring your Azure Database for PostgreSQL Query Store

A few months ago, I shared best practices for alerting on metrics with Azure Database for PostgreSQL. Though I covered how to monitor certain key metrics on Azure Database for PostgreSQL, I did not cover how to monitor and alert on the performance of the queries your application relies on most heavily. On any PostgreSQL database, you will occasionally need to investigate whether queries are running far longer than expected. These long-running queries can interfere with overall database performance, and are often stuck on some background process. This blog post covers how you can set up alerting on query performance related metrics using Azure Functions and Azure Key Vault.

What is Query Store?

Query Store is a feature of Azure Database for PostgreSQL, announced in early fall 2018, that seamlessly tracks query performance over time. This simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Learn how you can use Query Store in a wide variety of scenarios by visiting our documentation, “Usage scenarios for Query Store.” Query Store, when enabled, automatically captures a history of query runtime and wait statistics. It tracks this data over time so that you can see database usage patterns. Data for all users, databases, and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL instance.

Query Store is not enabled on a server by default. However, it is very straightforward to opt-in on your server by following the simple steps detailed in our documentation, “Monitor performance with the Query Store.” After you have enabled Query Store to monitor your application performance, you can set alerts on various metrics such as long running queries, regressed queries, and more that you want to monitor.

How to set up alerting on Query Store metrics

You can achieve near real-time alerting on Query Store metrics monitoring using Azure Functions and Azure Key Vault. This GitHub repo provides you with an Azure Function and a PowerShell script to deploy a simple monitoring solution, which gives you some flexibility to change what and when to alert.

Alternatively, you can clone the repo to use this as a starting point and make code changes to better fit your scenario. The Visual Studio solution, when built with your changes, will automatically package the zip file you need to complete your deployment in the same fashion that is described here.

In this repo, the script DeployFunction creates an Azure function to serve as a monitor for Azure Database for PostgreSQL Query Store. Understanding the data collected by query performance insights will help you identify the metrics that you can alert on.

If you don't make any changes to the script or the function code itself and only provide the required parameters to the DeployFunction script, here is what you will get:

A function app.
A function called PingMyDatabase that is time triggered every one minute.
An alert condition that looks for any query that has a mean execution time of longer than five seconds since the last time query store data is flushed to the disk.
An email when the alert condition is met, with an attached list of all of the processes that were running on the instance, as well as the list of long running queries.
A key vault that contains two secrets named pgConnectionString and senderSecret that hold the connection string to your database and password to your sender email account respectively.
An identity for your function app with access to a Get policy on your secrets for this key vault.

You simply need to run DeployFunction from a Windows PowerShell command prompt. It is important to run this script from Windows PowerShell; using Windows PowerShell ISE will likely result in errors, as some of the macros may not resolve as expected.

The script then creates the resource group and Key Vault, deploys a monitoring function app, updates app configuration settings, and sets up the required Key Vault secrets. At any point during the deployment, you can view the logs available in the .logs folder.

After the deployment is complete, you can validate the secrets by going to the resource group in the Azure portal. As shown in the following diagram, two secret keys are created, pgConnString and senderSecret. You can select the individual secrets if you want to update the value.

Depending on the condition set in the SENDMAILIF_QUERYRETURNSRESULTS app setting, you will receive an email alert when the condition is met.
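The alert flow the deployed function implements is simple: run the configured SQL, and send mail only if rows come back. Here is a minimal Python sketch of that decision logic; the names `check_alert` and `max_attached_rows` are illustrative, not from the repo.

```python
# Hypothetical sketch (names are illustrative, not from the repo) of the
# decision the timer-triggered function makes after running the SQL in
# the SENDMAILIF_QUERYRETURNSRESULTS app setting.

def check_alert(rows, max_attached_rows=100):
    """Alert only if the query returned rows; cap the email attachment."""
    if not rows:
        return None  # condition not met: send no email
    return {
        "should_send": True,
        "row_count": len(rows),
        "attachment_rows": rows[:max_attached_rows],
    }

# Rows as they might come back from query_store.qs_view:
slow_queries = [
    {"query_id": 3589441560, "mean_time": 7200.0},
    {"query_id": 1122334455, "mean_time": 5100.0},
]
result = check_alert(slow_queries)
print(result["row_count"])  # 2
```

When `check_alert` returns a result, the function would attach the rows and send the email using the sender credential held in Key Vault.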

How can I customize alert condition or supporting data in email?

After the default deployment goes through, you can update settings in the Azure portal by selecting Platform features and then Application settings.

You can change the run interval, mail recipient, alert condition, or supporting data to be attached by making changes to the settings below and saving them when you exit.

Alternatively, you can simply use the Azure CLI (az) to update these settings, as in the following examples.

$cronIntervalSetting="CronTimerInterval=0 */1 * * * *"

az functionapp config appsettings set --resource-group yourResourceGroupName --name yourFunctionAppName --settings $cronIntervalSetting

Or

az functionapp config appsettings set --resource-group $resourceGroupName --name $functionAppName --settings "SENDMAILIF_QUERYRETURNSRESULTS=select * from query_store.qs_view where mean_time > 5000 and start_time >= now() - interval '15 minutes'"
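The CronTimerInterval value is a six-field NCRONTAB expression (seconds, minutes, hours, day, month, day-of-week), which is easy to get wrong coming from five-field cron. A small, hypothetical Python helper to sanity-check the field count before deploying (field contents are not validated):

```python
# Azure Functions timer triggers use a six-field NCRONTAB expression:
# {second} {minute} {hour} {day} {month} {day-of-week}.
# parse_ncrontab is a made-up helper; it checks the field count only.

FIELDS = ("second", "minute", "hour", "day", "month", "day_of_week")

def parse_ncrontab(expr):
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected 6 fields, got {len(parts)}: {expr!r}")
    return dict(zip(FIELDS, parts))

schedule = parse_ncrontab("0 */1 * * * *")  # the script's default: every minute
print(schedule["minute"])  # */1
```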

Below are common conditions that you can monitor and alert on, either by updating the function app settings after your deployment goes through or by updating the corresponding value in DeployFunction.ps1 prior to your deployment:

Case: Query 3589441560 takes more than x milliseconds on average in the last fifteen minutes
Function app setting name: SENDMAILIF_QUERYRETURNSRESULTS
Sample value: select * from query_store.qs_view where query_id = 3589441560 and mean_time > x and start_time >= now() - interval '15 minutes'

Case: Queries with a cache hit ratio of less than 90 percent
Function app setting name: SENDMAILIF_QUERYRETURNSRESULTS
Sample value: select *, shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) as cache_hit from query_store.qs_view where shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) < 0.90

Case: Queries with a mean execution time of more than x milliseconds
Function app setting name: SENDMAILIF_QUERYRETURNSRESULTS
Sample value: select * from query_store.qs_view where mean_time > x and start_time >= now() - interval '15 minutes'

Case: If an alert condition is met, check whether there is an ongoing autovacuum operation, list the running processes, and attach the results to the email
Function app setting name: LIST_OF_QUERIESWITHSUPPORTINGDATA
Sample value: {"count_of_active_autovacuum":"select count(*) from pg_stat_activity where position('autovacuum:' IN query) = 1","list_of_processes_at_the_time_of_alert":"select now()-query_start as Running_Since,pid,client_hostname,client_addr, usename, state, left(query,60) as query_text from pg_stat_activity"}
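The cache-hit sample above guards against division by zero with `nullif`. The same arithmetic, mirrored as a small Python function (purely illustrative):

```python
# Python mirror of the SQL expression
#   shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0)
# nullif makes the divisor NULL (here: None) when nothing was hit or read.

def cache_hit_ratio(shared_blks_hit, shared_blks_read):
    total = shared_blks_hit + shared_blks_read
    if total == 0:
        return None  # SQL NULL: the query touched no shared blocks
    return shared_blks_hit / total

print(cache_hit_ratio(0, 0))    # None
print(cache_hit_ratio(45, 55))  # 0.45 -> below 0.90, would trigger the alert
```

Note that the sample alert uses a strict `< 0.90`, so a ratio of exactly 90 percent does not fire.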

How secure is this?

The script provides you with the mechanism to store your secrets in a Key Vault. Your secrets are secured as they are encrypted in transit and at rest. However, the function app accesses the Key Vault over the public network. If you want to avoid this and access your secrets over your virtual network (VNet) through the Azure backbone, you will need to configure a VNet for both your function app and your Key Vault. Note that VNet support for function apps is in preview and is currently available in selected Azure regions. When the proper deployment scenarios are supported, we may revisit this script to accommodate the changes. Until then, you will need to configure a VNet manually to accomplish this setup.

We are always looking to hear feedback from you. If you have any feedback for the Query Store on PostgreSQL, or monitoring and alerting on query performance, please don’t hesitate to contact the Azure Database for PostgreSQL team.

Acknowledgments

Special thanks to Korhan Ileri, Senior Data Scientist, for developing the script and contributing to this post, and to Tosin Adewale, Software Engineer on the Azure CLI team, for closely partnering with us.
Source: Azure

Reducing security alert fatigue using machine learning in Azure Sentinel

Last week we launched Azure Sentinel, a cloud-native SIEM tool. Machine learning (ML) in Azure Sentinel has been built in right from the beginning. We have thoughtfully designed the system with ML innovations aimed at making security analysts, security data scientists, and engineers productive. The focus is to reduce alert fatigue and offer ML toolkits tailored to the security community. The three ML pillars in Azure Sentinel are Fusion, built-in ML, and build-your-own ML.

Fusion

Alert fatigue is real. Security analysts face a huge burden of triage as they not only have to sift through a sea of alerts, but also correlate alerts from different products manually or using a traditional correlation engine.

Our Fusion technology, currently in public preview, uses state-of-the-art scalable learning algorithms to correlate millions of lower fidelity anomalous activities into tens of high fidelity cases. Azure Sentinel integrates with the Microsoft 365 solution and correlates millions of signals from different products such as Azure Identity Protection, Microsoft Cloud App Security, and soon Azure Advanced Threat Protection, Windows Advanced Threat Protection, O365 Advanced Threat Protection, Intune, and Azure Information Protection. You can learn how to turn Fusion on by visiting our documentation, “Enable Fusion.”

Fusion combines yellow alerts, which individually may not be actionable, into high-fidelity red cases that warrant a security analyst's attention. We look across disparate products to produce actionable incidents and reduce the false positive rate. In measurements with external customers and internal evaluations, we have seen a median 90 percent reduction in alert fatigue. This is possible because Fusion can detect complex, multi-stage attacks and differs from traditional correlation engines in the following ways:

Traditional correlation engines: Assume that the attacker takes only one path to attain their goal.
Fusion: Iterative attack simulation – Fusion encodes uncertainty in paths/stages by simulating different attack paths using iterative Markov chain Monte Carlo simulations.

Traditional correlation engines: Assume the attacker follows a static kill chain as the attack path is executed.
Fusion: Probabilistic cloud kill chain – Fusion constantly updates the probability of moving to the next step in the kill chain through a custom-defined prior probability function.

Traditional correlation engines: Assume that all the information needed to catch the attacker is present in the logs.
Fusion: Advances in graphical methods – Fusion encodes uncertainty in the completeness/connectivity of information in the kill chain, helping it to detect novel attacks.

In the above screenshot, one can see the Fusion case and the two composite alerts that went into it.

Organizations are currently using Fusion for the following scenarios, compounding anomalies from the Identity Protection and Microsoft Cloud App Security products:

Anomalous login leading to O365 mailbox exfiltration
Anomalous login leading to suspicious cloud app administrative activity
Anomalous login leading to mass file deletion
Anomalous login leading to mass file download
Anomalous login leading to O365 impersonation
Anomalous login leading to mass file sharing
Anomalous login leading to ransomware in cloud app
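To make the "anomalous login leading to X" pattern concrete, here is a deliberately simplified Python sketch of entity-based correlation: pair an anomalous login with a later suspicious activity by the same user inside a time window. This is an illustration only, not Fusion's actual algorithm, and the `fuse` function and alert shapes are made up.

```python
from datetime import datetime, timedelta

# Toy correlation sketch (NOT Fusion's algorithm): fuse an anomalous
# login with a later suspicious activity for the same user into one case.

def fuse(alerts, window=timedelta(hours=6)):
    logins = [a for a in alerts if a["type"] == "anomalous_login"]
    actions = [a for a in alerts if a["type"] != "anomalous_login"]
    cases = []
    for login in logins:
        followups = [
            a for a in actions
            if a["user"] == login["user"]
            and timedelta(0) <= a["time"] - login["time"] <= window
        ]
        if followups:
            cases.append({"user": login["user"],
                          "alerts": [login, *followups]})
    return cases

t0 = datetime(2019, 3, 18, 9, 0)
alerts = [
    {"type": "anomalous_login", "user": "amy", "time": t0},
    {"type": "mass_file_download", "user": "amy", "time": t0 + timedelta(hours=1)},
    {"type": "mass_file_deletion", "user": "bob", "time": t0},  # no login: stays noise
]
print(len(fuse(alerts)))  # 1 case, combining two yellow alerts into one red case
```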

Built-in ML

Machine learning is now an essential toolkit in security analytics for detecting novel types of attacks that escape traditional rules-based systems. However, a scarce ML talent pool makes it difficult for security organizations to staff applied security data scientists. To democratize an ML toolkit tailored to the needs of the security community, we are introducing built-in ML, which is currently in limited public preview.

Built-in ML is designed to let security analysts and engineers with no prior ML knowledge reuse ML systems designed by Microsoft’s fleet of security machine learning engineers. The benefit of built-in ML systems is that organizations don’t have to worry about traditional investments like ML training, cross-validation, or deployment, and can quickly identify threats that wouldn’t be found with a traditional approach.

Behind the cover, built-in ML uses principles of model compression and elements of transfer learning to make the model developed by Microsoft’s ML engineers ready to use for any organization’s needs. Our models are trained on diverse datasets, and periodically retrained to take concept drift into account.

We are opening our flagship geo login anomaly model for any security analyst to use to detect unusual logins in SSH logs. No ML expertise is necessary: customers bring their logs into Azure Sentinel and use built-in ML systems to get analysis instantly.
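As a toy illustration of the kind of signal a geo login anomaly model can use (the real model is far more sophisticated), consider flagging "impossible travel": two logins whose implied speed between locations is physically implausible. All names and thresholds below are assumptions.

```python
from math import radians, sin, cos, asin, sqrt

# Toy "impossible travel" check; illustrative only, not the shipped model.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=1000):
    """Flag a login pair whose implied speed exceeds max_kmh."""
    dist = haversine_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    hours = abs(login_b["t"] - login_a["t"]) / 3600  # timestamps in seconds
    return hours > 0 and dist / hours > max_kmh

seattle = {"lat": 47.6, "lon": -122.3, "t": 0}
sydney = {"lat": -33.9, "lon": 151.2, "t": 2 * 3600}  # 2 hours later
print(impossible_travel(seattle, sydney))  # True: ~12,000 km in 2 hours
```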

Build-your-own ML

We recognize that organizations have different levels of investment in machine learning for security use cases. Some organizations may have data scientists who need to go deeper and customize the analysis further. For these organizations, we offer the option of build-your-own ML to author security analytics.

Azure Sentinel will offer a detections-authoring environment based on Databricks, Spark, and Jupyter Notebooks. It takes care of data plumbing, provides ML algorithm templates and code snippets for model training and scheduling, and will soon introduce seamless model management, model deployment, a workflow scheduler, data versioning capabilities, and specialized security analytics libraries. This will free security data scientists from tedious pipeline and platform work, letting them focus on productive analytics on a hyperscale ML-security platform.

Additional resources

We will be updating this space with the technical details behind these innovations! If you have questions about turning on built-in ML or using build-your-own ML infrastructure, please reach out to askepd@microsoft.com. We also strongly recommend customers enable Fusion when they use Azure Sentinel. You can learn how to turn Fusion on by visiting our documentation, “Enable Fusion.”
Source: Azure

Microsoft and NVIDIA extend video analytics to the intelligent edge

Artificial Intelligence (AI) algorithms are becoming more intelligent and sophisticated every day, allowing IoT devices like cameras to bridge the physical and digital worlds. The algorithms can trigger alerts and take actions automatically — from finding available parking spots and missing items in a retail store to detecting anomalies on solar panels or workers approaching hazardous zones.

Processing these state-of-the-art AI algorithms in a datacenter requires a stable, high-bandwidth connection to deliver video feeds to the cloud. However, these cameras are often located in remote areas with unreliable connectivity, or sending footage to the cloud may not be sensible given bandwidth, security, and regulatory needs.

Microsoft and NVIDIA are partnering on a new approach for intelligent video analytics at the edge to transform raw, high-bandwidth videos into lightweight telemetry. This delivers real-time performance and reduces compute costs for users. The “cameras-as-sensors” and edge workloads are managed locally by Azure IoT Edge and the camera stream processing is powered by NVIDIA DeepStream. Once the videos are converted, the data can be ingested to the cloud using Azure IoT Hub.

The companies plan to offer customers enterprise-ready devices running DeepStream in the Azure IoT device catalog, and the NVIDIA DeepStream module will soon be made available in the Azure IoT Edge marketplace.

Over the years, Microsoft and NVIDIA have helped customers run demanding applications on GPUs in the cloud. With this latest collaboration, NVIDIA DeepStream and Azure IoT Edge extend the AI-enhanced video analytics pipeline to where footage is captured, securely and at scale. Now, our customers can get the best of both worlds—accelerated video analytics at the edge with NVIDIA GPUs and secure connectivity and powerful device management with Azure IoT Edge and Azure IoT Hub.

To learn more, visit the Azure IoT Edge and NVIDIA DeepStream product pages. If you are attending GTC in person, join us Tuesday, March 19, 2019 from 9:00 – 10:00 AM at session S9545 – “Using the DeepStream SDK for AI-Based Video Analytics” or visit Microsoft at Booth 1122.
Source: Azure

Azure Machine Learning service now supports NVIDIA’s RAPIDS

Azure Machine Learning service is the first major cloud ML service to support NVIDIA’s RAPIDS, a suite of software libraries for accelerating traditional machine learning pipelines with NVIDIA GPUs.

Just as GPUs revolutionized deep learning through unprecedented training and inferencing performance, RAPIDS enables traditional machine learning practitioners to unlock game-changing performance with GPUs. With RAPIDS on Azure Machine Learning service, users can accelerate the entire machine learning pipeline, including data processing, training and inferencing, with GPUs from the NC_v3, NC_v2, ND or ND_v2 families. Users can unlock performance gains of more than 20X (with 4 GPUs), slashing training times from hours to minutes and dramatically reducing time-to-insight.

The following figure compares training times on CPU and GPUs (Azure NC24s_v3) for a gradient boosted decision tree model using XGBoost. As shown below, performance gains increase with the number of GPUs. In the Jupyter notebook linked below, we’ll walk through how to reproduce these results step by step using RAPIDS on Azure Machine Learning service.

How to use RAPIDS on Azure Machine Learning service

Everything you need to use RAPIDS on Azure Machine Learning service can be found on GitHub.

The above repository consists of a master Jupyter Notebook that uses the Azure Machine Learning service SDK to automatically create a resource group, workspace, compute cluster, and preconfigured environment for using RAPIDS. The notebook also demonstrates a typical ETL and machine learning workflow to train a gradient boosted decision tree model. Users are also free to experiment with different data sizes and the number of GPUs to verify RAPIDS multi-GPU support.

About RAPIDS

RAPIDS uses NVIDIA CUDA for high-performance GPU execution, exposing GPU parallelism and high memory bandwidth through a user-friendly Python interface. It includes a dataframe library called cuDF which will be familiar to Pandas users, as well as an ML library called cuML that provides GPU versions of all machine learning algorithms available in Scikit-learn. And with DASK, RAPIDS can take advantage of multi-node, multi-GPU configurations on Azure.
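Because cuDF mirrors the pandas API, code written against pandas often translates nearly line for line. The following sketch runs with plain pandas; in a RAPIDS GPU environment, swapping the import for `import cudf as pd` would be the main change (this is an illustration, not a benchmark).

```python
import pandas as pd  # in a RAPIDS GPU environment: `import cudf as pd`

# A small aggregation that looks the same in pandas and cuDF.
df = pd.DataFrame({
    "sensor": ["a", "a", "b", "b"],
    "reading": [1.0, 3.0, 10.0, 20.0],
})
means = df.groupby("sensor").reading.mean()  # same call in cuDF
print(means["b"])  # 15.0
```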

Accelerating machine learning for all

With the support for RAPIDS on Azure Machine Learning service, we are continuing our commitment to an open and interoperable ecosystem where developers and data scientists can use the tools and frameworks of their choice. Azure Machine Learning service users will be able to use RAPIDS in the same way they currently use other machine learning frameworks, and they will be able to use RAPIDS in conjunction with Pandas, Scikit-learn, PyTorch, TensorFlow, etc. We strongly encourage the community to try it out and look forward to your feedback!
Source: Azure

Azure Container Registry virtual network and Firewall rules preview support

While Azure Container Registry (ACR) supports user and headless-service account authentication, customers have expressed their requirements for limiting public endpoint access. Customers can now limit registry access within an Azure Virtual Network (VNet), as well as whitelist IP addresses and ranges for on-premises services.

VNet and Firewall rules are supported with virtual machines (VM) and Azure Kubernetes Services (AKS).

Choosing between private and PaaS registries

As customers move into production, their security teams have a checklist they apply to production workloads, one of which is limiting all public endpoints. Without VNet support, customers had to choose between standalone products, or OSS projects they could run and manage themselves. This puts a larger burden on the customers to manage the storage, security, scalability, and reliability a production registry requires.

With VNet and Firewall rules, customers can meet their security requirements while benefiting from a PaaS container registry that is integrated, secured at rest, geo-redundant, and geo-replicated. This frees up their resources to focus on the unique business problems they face.

Azure Container Registry PaaS, enabling registry products

VNet and Firewall rules are just the latest addition to ACR's container lifecycle management capabilities. ACR provides core primitives that other registry or CI/CD products may build upon. Our goal with ACR isn’t to compete with our partners, but rather to enable them with core cloud capabilities, allowing them to focus on the higher-level, unique capabilities each offers.

Getting started

Using the Azure CLI, or the Azure portal, customers can follow our documentation for configuring VNet and Firewall rules.

VNet and Firewall rules preview pricing

During preview, VNet and Firewall rules will be included in the Azure Container Registry’s Premium Tier.

Preview and general availability dates

As of March 18, 2019, VNet and Firewall rules are available for public preview in all 25 public cloud regions. General availability (GA) will be based on a curve of usage and feedback.

More information

Azure Container Registry
Geo-replicating registries
OS & Framework Patching
ACR Tasks

Source: Azure

Power IoT and time-series workloads with TimescaleDB for Azure Database for PostgreSQL

We’re excited to announce a partnership with Timescale that introduces support for TimescaleDB on Azure Database for PostgreSQL for customers building IoT and time-series workloads. TimescaleDB has a proven track record of being deployed in production in a variety of industries including oil & gas, financial services, and manufacturing. The partnership reinforces our commitment to supporting the open-source community to provide our users with the most innovative technologies PostgreSQL has to offer.

TimescaleDB allows you to scale for fast ingest and complex queries while natively supporting full SQL. It leverages PostgreSQL as an essential building block, which means that users get the familiarity and reliability of PostgreSQL, along with the scalability and performance of TimescaleDB. Enabling TimescaleDB on your new or existing Azure Database for PostgreSQL server will eliminate the need to run two databases to collect relational and time-series data.

How to get started

If you don’t already have an Azure Database for PostgreSQL server, you can create one with the Azure CLI command az postgres up. Next, run the following command to add TimescaleDB to your Postgres libraries:

az postgres server configuration set --resource-group mygroup --server-name myserver --name shared_preload_libraries --value timescaledb

Restart the server to load the new library. Then, connect to your Postgres database and run:

CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

You can now create a TimescaleDB hypertable from scratch or migrate your existing time-series data.

Postgres with TimescaleDB as a foundation for IoT applications

PostgreSQL is enabling many IoT scenarios. To learn more, refer to the blog post, “Creating IoT applications with Azure Database for PostgreSQL.” With TimescaleDB, this experience is even better. IoT organizations can now also leverage the insights hidden in machine generated data to build new features, automate processes, and drive efficiency.

Challenge: IoT devices generate a lot of data, which needs to be stored efficiently.
Solution: TimescaleDB automatically partitions data into chunks to scale for these types of workloads.

Challenge: IoT data is complex (e.g., marrying device metadata, geospatial data, and time-series data).
Solution: TimescaleDB combines relational capabilities with time-series-specific functions and is compatible with other PostgreSQL extensions, including PostGIS.

Challenge: IoT data needs to be accessed by multiple users (e.g., internal users for analytics or external users who consume data in real time).
Solution: TimescaleDB speaks full SQL, a query language that is familiar across entire organizations.

Challenge: IoT data requires diverse, customizable ingest pipelines, which call for a database with a broad ecosystem.
Solution: TimescaleDB inherits PostgreSQL’s entire ecosystem of tools and extensions.

Challenge: IoT applications have data at their core, and that data needs to be stored in a reliable database.
Solution: TimescaleDB inherits PostgreSQL’s 20+ years of reliability and stability.
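The chunk partitioning mentioned above can be sketched in a few lines: each row's timestamp maps to exactly one fixed-width time chunk, so recent inserts touch only small, recent chunks instead of one ever-growing table and index. This toy Python model assumes TimescaleDB's default 7-day chunk interval; `chunk_for` and the epoch choice are illustrative, not TimescaleDB internals.

```python
from datetime import datetime, timedelta, timezone

# Toy model of time-based chunk partitioning (hypertable-style).
CHUNK = timedelta(days=7)  # TimescaleDB's default chunk interval is 7 days
EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)  # arbitrary reference point

def chunk_for(ts):
    """Return the [start, end) time range of the chunk a row lands in."""
    n = (ts - EPOCH) // CHUNK
    start = EPOCH + n * CHUNK
    return (start, start + CHUNK)

ts = datetime(2019, 3, 18, 12, 0, tzinfo=timezone.utc)
start, end = chunk_for(ts)
print(start <= ts < end)  # True: every row maps to exactly one chunk
```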

TimescaleDB offers valuable performance characteristics on top of PostgreSQL. For IoT use cases that heavily leverage time-series data, TimescaleDB implements automatic chunk partitioning to support high insert rates. Below is a comparison of insert performance over time on Azure Database for PostgreSQL with and without TimescaleDB; without it, insert performance degrades as data accumulates. For IoT use cases with large amounts of time-series data, TimescaleDB can provide significant value for applications that need both relational features and scalability.

Note: General Purpose Compute Gen 5 with 4 vCores, 20GB RAM with Premium Storage

Although IoT is an obvious use case for a time-series database, time-series data actually exists everywhere. Time-series data is essentially collected over time with an associated timestamp. With TimescaleDB, developers can continue to use PostgreSQL, while leveraging TimescaleDB to scale for time-series workloads.

Next steps

Learn more about Azure Database for PostgreSQL and get started using the Azure portal or command line.
Learn more about TimescaleDB by visiting their website, docs, or join their Slack community and post any questions you may have there.

As always, we encourage you to leave feedback below. You can also engage with the Azure Database for PostgreSQL through our feedback page and our forums if you have questions or feature suggestions.
Source: Azure

Azure Data Studio: An Open Source GUI Editor for Postgres

When you are working with a database, or any other kind of software, your experience is enhanced or hindered by the tools you use to interact with it. PostgreSQL has a command line tool, psql, and it’s pretty powerful, but some people much prefer a graphical editor. Even if you typically use the command line, you may want to go visual sometimes. At Microsoft we've spent many years building experiences to enhance developers' day-to-day productivity. Having choices is important. It allows you to go with the tool that works for you.

Today we're excited to announce preview support for PostgreSQL in Azure Data Studio. Azure Data Studio is a cross-platform modern editor focused on data development. It's available for Linux, MacOS, and Windows. Plus, Azure Data Studio comes with an integrated terminal so you're never far away from psql.

We're also introducing a corresponding preview PostgreSQL extension in Visual Studio Code (VS Code). Both Azure Data Studio and VS Code are open source and extensible – two things that PostgreSQL itself is based on.

Azure Data Studio inherits a lot of VS Code functionality. It also supports most of VS Code's extensions like Python, R, and Kubernetes support. If your primary use case is data, choose Azure Data Studio. You can manage multiple database connections, explore database object hierarchy, set up dashboards, and more.

On the other hand, if you're closer to application development than you are to database administration, then go for our PostgreSQL extension in VS Code. Actually, you don't have to choose – use both, switching according to what works best for you at the time.

Connect to Postgres

Curious about what’s included? Let’s take a deeper look at the development experience for PostgreSQL in Azure Data Studio. You can connect to your Postgres server or establish a connection directly to a database. The Postgres server can be hosted on-premises, in a virtual machine (VM), or from the managed service of any cloud provider.

Organize your servers

Often you have multiple Postgres servers you’re working with. Perhaps there’s one production server, a corresponding stage server, and maybe multiple dev/test servers. Knowing which is which is key, especially being able to clearly identify your production server. In Azure Data Studio you can use server groups to categorize your servers. You can highlight your production server group in red to make it visually distinct from the others.

Track down database objects

Your Postgres server evolves as you add new functionality. It’s helpful to be able to clearly see what columns, indexes, triggers, and functions have been created for each database and table. This is especially true when you’re not the only person working on that Postgres instance. Azure Data Studio provides convenient hierarchical navigation in the sidebar. With it you can easily explore and keep track of your server's databases, tables, views, and other objects.

Write queries efficiently

As you look through the new database objects your teammates have created, it’s helpful to go beyond the name of the object to the DDL that composes it. Even if you’re the only person working on your Postgres instance, there may be objects you created a while back that you want to look up. Checking the DDL is a useful double-check to confirm that an object is doing what you expect.

Azure Data Studio provides “Peek Definition” and “Go to Definition” functionality so you can do that, and even do it as you use the object in a query. For example, let’s say you want to query pg_stat_activity, one of the built-in statistics views that comes with Postgres. You can use “Go to Definition” to see all its columns and understand what this view is based on.

Writing SQL queries is bread and butter when working with Postgres, whether you’re an expert or are just getting started with this RDBMS. Whoever you are, IntelliSense for SQL is integrated into Azure Data Studio to help you write your queries quicker. With IntelliSense’s context-aware code completion suggestions, you can use fewer keystrokes to get the job done.

If you use Postgres a lot, you probably have a few SQL queries you end up reusing over and over. Whether they are detailed CREATE statements or complex SELECTs, you can templatize each one into a SQL code snippet. That way you don’t have to retype it afresh each time. Azure Data Studio inherits its code snippet functionality from Visual Studio Code. Code snippets help you avoid errors from retyping code, and overall let you develop faster.
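As an illustration, a user-defined SQL snippet in the editor's JSON snippet-file format might look like the following (the snippet name, prefix, and placeholder values are made up):

```json
{
  "Select recent slow queries": {
    "prefix": "qs-slow",
    "body": [
      "select * from query_store.qs_view",
      "where mean_time > ${1:5000}",
      "and start_time >= now() - interval '${2:15 minutes}';"
    ],
    "description": "Query Store: slow queries in a recent window"
  }
}
```

Typing the prefix then expands the template, with the cursor tabbing through the numbered placeholders.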

Customize your editor

One advantage of modern development GUIs is the ability to customize them to suit your unique preferences. For example, in this blog we’ve used the Solarized Dark theme in screenshots. Honestly, that isn’t everyone’s cup of tea. Well, there are ten more color themes you can choose from in Azure Data Studio, not to mention a high contrast option.

The personalization options extend to key bindings as well. Don't like using the default Ctrl+N to open a new tab? You can change it. Or maybe you want a keyboard shortcut that doesn't come out of the box with Azure Data Studio. You can create and customize key bindings using the Keyboard Shortcuts editor.

How to get started

There are even more features to discover, like Git source control integration and customized dashboards and widgets. You can start using the preview for PostgreSQL in Azure Data Studio today – check out the install instructions. To start using our preview PostgreSQL extension for Visual Studio Code, learn more on our GitHub page.

These two features are in preview and your feedback is critical to making them better and making them work for you. Share your feedback on our PostgreSQL GitHub pages for Azure Data Studio or Visual Studio Code respectively.
Source: Azure

Azure Backup for SQL Server in Azure Virtual Machines now generally available!

How do you back up your SQL Servers today? You could be using backup software that requires you to manage backup servers, agents, and storage, or you could be writing elaborate custom scripts that require you to manage the backups on each server individually. With the modernization of IT infrastructure and the world rapidly moving to the cloud, do you want to continue using legacy backup methods that are tedious, infrastructure-heavy, and difficult to scale? Azure Backup for SQL Server Virtual Machines (VMs) is the modern way of doing backup in the cloud, and we are excited to announce that it is now generally available! It is an enterprise-scale, zero-infrastructure solution that eliminates the need to deploy and manage backup infrastructure while providing a simple and consistent experience to centrally manage and monitor backups on standalone SQL instances and Always On Availability Groups.


Built into Azure, the solution combines the core cloud promises of simplicity, scalability, security and cost effectiveness with inherent SQL backup capabilities that are leveraged by using native APIs, to yield high fidelity backups and restores. The key value propositions of this solution are:

15-minute Recovery Point Objective (RPO): Working with ultra-critical data and need a low RPO? Schedule a log backup to happen every 15 minutes.
One-click, point-in-time restores: Tired of elaborate manual restore procedures? Restore databases to a point in time up to a second in one click, without having to manually apply a chain of logs over differential and full backups.
Long-term retention: Rigorous compliance and audit needs? Retain your backups for years, based on the retention duration, beyond which the recovery points will be pruned automatically by the built-in lifecycle management capability.
Protection for encrypted databases: Concerned about the security of your data and backups? Back up SQL encrypted databases and secure backups with built-in encryption at rest, while controlling backup and restore operations with Role-Based Access Control.
Auto-protection: Dealing with a dynamic environment where new databases get added frequently? Auto-protect your server to automatically detect and protect the newly added databases.
Central management and monitoring: Losing too much time managing and monitoring backups for each server in isolation? Scale smartly by creating centrally managed backup policies that can be applied across databases. Monitor jobs and get alerts and emails across servers and even vaults from a single pane of glass.
Cost effective: No infrastructure and no overhead of managing the scale, seems like value for the money already? Enjoy reduced total cost of ownership and flexible pay-as-you-go option.

Get started

Customer feedback

We have been in preview for a few months now, and have seen an overwhelming response from our customers:

“Our experience with Azure SQL Server Backup has been fantastic. It’s a solution you can put in place in a couple of minutes and not have to worry about it. To restore DBs, we don’t have to deal with rolling logs and only have to choose a date and time. It gives us great peace of mind to know the data is safely stored in the Recovery Services Vaults with our other protected items.”

– Steven Hayes, Principal Architect, Acuity Brands Lighting, Inc

“We have been using Azure Backup for SQL Server for the past few months and have found it simple to use and easy to set up. The backup and restore operations are performant and reliable as well as easy to monitor. We plan to continue using it in the future."

– Celica E. Candido, Cloud Operations Analyst, Willis Towers Watson

Additional resources

Check out the public preview announcement on the Azure blog, “Azure Backup for SQL Server on Azure now in public preview.”
See different SQL backup options from Microsoft and choose the right solution based on your requirements.
Want more details about this feature? Check out Azure Backup for SQL Server documentation.
Get the pricing details for this feature.
Need help? Reach out to Azure Backup forum for support or browse Azure Backup documentation.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
New to Azure Backup? Sign up for an Azure trial subscription.


Azure.Source – Volume 74

Now in preview

AzCopy support in Azure Storage Explorer now available in public preview

AzCopy in Azure Storage Explorer is now in public preview. AzCopy is a popular command-line utility that provides performant data transfer into and out of a storage account. AzCopy enhances performance and reliability through a scalable design, where concurrency is scaled up according to the number of the machine's logical cores. Azure Storage Explorer provides a UI for various storage tasks, and it now supports using AzCopy as a transfer engine to provide the highest throughput for transferring your files to Azure Storage. This capability is available today as a preview in Azure Storage Explorer.

Now available for preview: Workload importance for Azure SQL Data Warehouse

Announcing the preview of Workload Importance for Azure SQL Data Warehouse on the Gen2 platform. Manage resources more efficiently with Azure SQL Data Warehouse, a fast, flexible, and secure analytics platform for enterprises of all sizes. Workload importance gives data engineers the ability to classify requests by importance. Requests with higher importance are guaranteed quicker access to resources, which helps meet SLAs.

Also available in preview

Public preview: Azure Log Analytics in France Central, Korea Central, North Europe
Public preview: Adaptive network hardening in Azure Security Center
Update 19.03 for Azure Sphere public preview now available for evaluation
Azure Security Center: Regulatory compliance dashboard in public preview

News and updates

Achieve more with Microsoft Game Stack

Announcing Microsoft Game Stack, a new initiative in which we commit to bringing together Microsoft tools and services that empower game developers to achieve more. Game Stack brings together all of our game-development platforms, tools, and services—such as Azure, PlayFab, DirectX, Visual Studio, Xbox Live, App Center, and Havok—into a robust ecosystem that any game developer can use. The goal of Game Stack is to help you easily discover the tools and services you need to create and operate your game.

Azure Databricks – VNet injection, DevOps Version Control and Delta availability

Azure Databricks provides a fast, easy, and collaborative Apache® Spark™-based analytics platform to accelerate and simplify the process of building big data and AI solutions that drive the business forward, all backed by industry-leading SLAs. With Azure Databricks, you can set up your Spark environment in minutes and auto-scale quickly and easily. You can also apply your existing skills and collaborate on shared projects in an interactive workspace with support for Python, Scala, R, and SQL, as well as data science frameworks and libraries like TensorFlow and PyTorch.

Hardware innovation for data growth challenges at cloud-scale

The Open Compute Project (OCP) Global Summit 2019 kicked off on March 14, where a vibrant and growing community shared the latest innovations to make hardware more efficient, flexible, and scalable. This year we turned our attention to the exploding volume of data being created daily. Data is at the heart of digital transformation, and companies are leveraging data to improve customer experiences, open new markets, make employees and processes more productive, and create new sources of competitive advantage.

Azure Data Box family now enables import to Managed Disks

Support for managed disks is now available across the Azure Data Box family of devices, which includes Data Box, Data Box Disk, and Data Box Heavy. The Azure Data Box offline family lets you transfer hundreds of terabytes of data to Microsoft Azure in a quick, inexpensive, and reliable manner. With managed disks support on Data Box, you can now move your on-premises virtual hard disks (VHDs) to Azure as managed disks in one simple step.

Simplify disaster recovery with Managed Disks for VMware and physical servers

Azure Site Recovery (ASR) now supports disaster recovery of VMware virtual machines and physical servers by directly replicating to Managed Disks. To enable replication for a machine, you no longer need to create storage accounts because you can now write replication data directly to a type of Managed Disk. This change will not impact the machines which are already in a protected state; however, all new protections will now have this capability available on the Azure portal.

Simplifying your environment setup while meeting compliance needs with built-in Azure Blueprints

Announcing the release of our first Azure Blueprint built specifically for a compliance standard, the ISO 27001 Shared Services blueprint sample, which maps a set of foundational Azure infrastructure such as virtual networks and policies, to specific ISO controls. Azure Blueprints is a free service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as policies, deployment templates, and role-based access controls. This service is built to help customers set up governed Azure environments and can scale to support production implementations for large-scale migrations. The ISO 27001 Shared Services Blueprint is already available to your Azure tenant.

Microsoft Azure portal March 2019 update

This month’s updates include an improved “All services” view, Virtual Network Gateway overview updates, an improved DNS Zone and Load Balancer creation experience, Management Group integration into Activity Log, redesigned overview screens for certain services within Azure DB, an improved creation experience for Azure SQL Database, multiple changes to the Security Center, and more updates to Intune. Sign in to the Azure portal now and see for yourself everything that’s new.

Approve Azure Pipelines deployments from Slack

The ability to approve Azure Pipelines deployments from Slack is now available. We're making it even easier for you, with a tighter integration that lets you be more productive, even when you're on the go. Approving release deployments in Azure Pipelines is just a click away.

Azure Service Fabric 6.4 Refresh Release

Updates to the .NET SDK, Java SDK and Service Fabric runtimes are rolling out through Web Platform Installer, NuGet packages and Maven repositories in all regions.

Azure Security Center updates

Support for virtual network peering in Azure Security Center
Azure Security Center adaptive application control updates
Azure Security Center: Secure score impact changes
Azure Security Center policy migration to Azure Policy
Azure Security Center update: Secure score for compliance metrics
Azure Security Center update: Azure App Service recommendation improvements

Additional news and updates

Azure Log Analytics is now generally available in Australia East and Australia Central
Service Map available in Central Canada and UK South
Application Insights is now available in France Central and Korea Central
Upcoming change to Azure Monitor Application Insights Smart Detection emails
Azure Resource Manager template language additions

Technical content

Run your code and leave build to us

Getting your app to the cloud is more work than you may anticipate. We're happy to share that there is a faster way. When you need to focus on app code, you can delegate build and deployment to Azure with App Service web apps and we'll take care of building and running your code the way you expect.

Stay informed about service issues with Azure Service Health

Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance. Read how you can use Azure Service Health’s personalized dashboard to stay informed about issues that could affect you now or in the future.

Azure Stack IaaS – part four

Deploying your IaaS VM-based applications to Azure and Azure Stack requires a comprehensive evaluation of your BC/DR strategy. “Business as usual” is not enough in the context of cloud. For Azure Stack, you need to evaluate the resiliency, availability, and recoverability requirements of the applications separate from the protection schemes for the underlying infrastructure. Learn the concepts and best practices to protect your IaaS virtual machines (VMs) on Azure Stack.

Create a transit VNet using VNet peering

Azure Virtual Network (VNet) is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or as I call it, your own network bubble. VNets are crucial to your cloud network as they offer isolation, segmentation, and other key benefits. VNet peering with gateway transit works across classic Azure Service Management (ASM) and Azure Resource Manager (ARM) deployment models, and across subscriptions, including subscriptions belonging to different Azure Active Directory tenants. Gateway transit has been available since September 2016 for VNet peering in all regions and will be available for global VNet peering shortly.

Commit, push, deploy — Git in the Microsoft Azure Cloud

Git is a popular Version Control option — and, instead of asking you to learn something new, this article serves as an introduction to help Git users get familiar with cloud (and Azure), including an end-to-end walkthrough. Chris covers how to download, run, and configure a sample app using Git, and from there, dives into how to deploy, manage, update, and redeploy that app inside Azure.

Azure DevOps Slack Integration

In this quick how-to video, Neil shows how easy it is to set up the Azure DevOps and Slack integration for detailed real-time notifications about your builds and releases. You'll see how to customize what you see, and click through to Azure DevOps to dig into build failures, approve requests, and validate successful deploys.

Azure Functions With F#

This quick post from Aaron Powell walks through how to use VS Code and F# to create Azure Functions v2.

How to query Azure resources using the Azure CLI

The Azure CLI can be used to not only create, configure, and delete resources from Azure — but also to query data from Azure. Querying Azure for resource properties is handy when you're writing scripts using the Azure CLI – for instance, when you want to get an Azure Virtual Machine or Container Instance IP address to perform some action on that resource. This post is a quick exercise that demonstrates several concepts, so you're ready to query a single resource property and store the value of that property in a variable. We'll use Azure Container Instances (ACI), but you don't need to have experience with ACI to complete the steps in this article – the concepts transfer to any Azure resource.
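As a hedged sketch of that pattern (the container and resource group names are placeholders, not taken from the article), querying a container group's IP address and storing it in a shell variable might look like:

```shell
# Fetch only the public IP of a container group; --query selects the
# property via a JMESPath expression, and --output tsv strips JSON quoting
# so the value drops cleanly into a variable.
ACI_IP=$(az container show \
  --name mycontainer \
  --resource-group myResourceGroup \
  --query ipAddress.ip \
  --output tsv)

echo "The container is reachable at $ACI_IP"
```

The same `--query`/`--output tsv` combination works for any resource type the CLI exposes, not just container instances.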

7 things you should know when getting started with Serverless APIs

In this article, based on a talk Simona Cotin gave at Build, she walks you through taking an existing application with an Express back-end and porting it to a serverless back-end by changing a single line of front-end code. By the end of the article, you will have built an API that scales instantly as more and more users come in and your workload increases.

Additional technical content

Identify log write limits on Azure SQL Managed Instance using QPI library
Enhance data protection and compliance with customer managed keys
Running Azure Cosmos DB queries from SQL Server using ODBC driver
Planning the future for NoSQL Cassandra DB Applications on Azure
SAP on Azure High Availability Systems with Heterogenous Windows and Linux Clustering and SAP HANA
Lesson Learned #79: Connecting to Azure SQL Database just using the port 1433 without redirection

Azure shows

Episode 270 – Hammer and Nail | The Azure Podcast

Cale Teeter and Sujit D'Mello discuss using a solutions-based approach when selecting Azure services instead of getting caught in the hype of new services.


Heat Maps and Image Overlays in Azure Maps | Internet of Things Show

Heat maps are used to represent the density of data using a range of colors. They are often used to show the data "hot spots" on a map and are a great aid to understanding data. The heat map layer also supports weighted data points to help bring the most relevant information to the surface. Learn about the heat map and image layer visualizations inside Azure Maps.

What’s New for Visual Studio 2019 Integrations with Azure Boards | The DevOps Lab

In this episode, you see a quick walkthrough of a new experience in Visual Studio 2019, showing how a developer can quickly find the work they need and associate it with their pending changes.

Azure Pipelines multi-cloud support and integration with DevOps tools | Azure Friday

Learn to integrate Azure Pipelines with various third-party tools to achieve a full DevOps cycle with multi-cloud support. You can continue to use your existing tools and get the benefits of Azure Pipelines: application release orchestration, deployment, approvals, and full traceability all the way to the code or issue.

Five Ways You Can Infuse AI into Your Applications | Five Things

Leben Things! In case you don't speak Elvish, that roughly translates to "Five Things". This week I sit down with Noelle LaCharite from the Microsoft Cognitive Services team to learn how machines can translate language, perform search on unstructured data, converse like humans and more. Even better, you can use this stuff in your applications right away; no degree in multi-dimensional calculus required. This is five ways that you can infuse AI into your applications today.

Getting Started with Infrastructure as Code (IaC) | The Open Source Show

Armon Dadgar, HashiCorp CTO and co-founder, and Aaron Schlesinger walk us through the core concepts of Infrastructure as Code (IaC) and how it goes beyond what people typically think when they hear "Infrastructure." They break down the what, when, how, and why IaC makes developers' lives easier, whether you're running a simple application or have a complex, multi-node system. You'll learn how you can use HashiCorp Terraform to get up and running with IaC, going from nothing to a complete carbon copy of your production environment at the click of a button (read: you focus on building, testing, and deploying, not spinning up test environments and hoping they're close to what's in production).

Quick tour of Azure DevOps projects using Node.js and AKS: Part 2 | Azure Tips and Tricks

Learn what Azure DevOps projects are and how to use them with Node.js and Azure Kubernetes Service. In part 2, you’ll get to explore the rest of the resources that Azure DevOps projects has to offer.

How to create a storage account and upload a blob | Azure Portal Series

The Azure Portal enables you to create and manage storage accounts and upload a blob. In this video of the Azure Portal “How To” Series, learn how to easily create a storage account, upload a blob, and manage the storage account within Storage Explorer (preview).

Greg Leonardo on Deploying the Azure Way | Azure DevOps Podcast

Greg Leonardo is a Cloud Architect at Campus Management Corp. and Webonology. In this episode of the Azure Podcast, he discusses some of the topics from his book, Hands-On Cloud Solutions with Azure: architecting, developing, and deploying the Azure way. He also talks about working with infrastructure as code, provisioning and watching environments, and more about what developers targeting Azure need to know.


Episode 2 – WTF Azure (How Do I Get Started?) | AzureABILITY

AzureABILITY host Louis Berman discusses how to get started in Azure with his fellow Cloud Solutions Architect, Srini Ambati. Listen in as Louis and Srini give you a leg up into the cloud.



Events

Microsoft Create – A Global Startup Event Series

Create is for startup founders, technical co-founders, and early or first engineers, with potentially a small number of business-focused attendees. The event is ideal for early-stage startups looking to make technical decisions about platform and technology stack. Our agenda focuses heavily on Azure technologies and highlights Microsoft for Startups offerings and the ScaleUp program. The tour is free to attendees.

Join Microsoft at the NVIDIA GPU Technology Conference

Computing is going deep and wide to work on issues related to our environment, economy, energy, and public health systems. These needs require modern, advanced solutions that were traditionally limited to a few organizations, hard to scale, and slow to deliver. Microsoft Azure delivers High Performance Computing (HPC) capability and tools, integrated into a global-scale cloud platform, to power solutions that address these challenges. Microsoft's partnership with NVIDIA makes access to NVIDIA GPUs easier than ever. This week's NVIDIA GPU Technology Conference teaches Azure customers to combine the flexibility and elasticity of the cloud with the capability of NVIDIA GPUs.

Cloud Commercial Communities webinar and podcast newsletter–March 2019

Each month the Cloud Commercial Communities team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics. Make sure you catch a live webinar and participate in the live Q&A.

IoT in Action: A more sustainable future for farming

The future of food security and feeding an expanding global population depends upon our ability to increase food production globally—an estimated 70 percent by the year 2050, according to the Food and Agriculture Organization of the United Nations. But challenges ranging from climate change, soil quality, pest control, and shrinking land availability, not to mention water resource constraints, must be addressed. We believe that the Internet of Things (IoT) technology and data-driven agriculture is one answer.

IoT in Action: Thriving partner ecosystem key to transformation

The Internet of Things (IoT) is an ongoing journey. Digital transformation requires that solutions be connected so that data can be collected and analyzed more effectively across systems to drive exponential improvements in operations, profitability, and customer and employee loyalty. Through our partner-plus-platform approach, we have committed $5 billion in IoT-focused investments to grow and support our partner ecosystem, specifically through unrelenting R&D innovation in critical areas like security, new development tools and intelligent services, artificial intelligence, and emerging technologies.

Customers, partners, and industries

Spinning up cloud-scale analytics is even more compelling with Talend and Microsoft

Stitch Data Loader is Talend's recent addition to its portfolio for small- and mid-market customers. With Stitch Data Loader, customers can load 5 million rows per month into Azure SQL Data Warehouse for free, or scale up to an unlimited number of rows with a subscription. All across the industry, there is a rapid shift to the cloud. Using a fast, flexible, and secure cloud data warehouse is an important first step in that journey. With Microsoft Azure SQL Data Warehouse and Stitch Data Loader, companies can get started faster than ever.

Economist study: OEMs create new revenue streams with next-gen supply chains

Original equipment manufacturers (OEMs) make the wheels go round for the business world. Successful OEMs are always on the lookout for opportunities to drive down costs and differentiate their brands; and the rise of IoT offers a golden opportunity to fundamentally transform the supply chain. The Economist Intelligence Unit surveyed 250 senior executives at OEMs in North America, Europe, and Asia-Pacific to gain insights from those customers at the center of the supply chain.

Azure Marketplace new offers – Volume 33

The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the first half of February we published 50 new offers.

Accelerating enterprise digital transformation through DevOps

IT organizations are under more pressure than ever to do more with less: they are expected to drive competitive advantage and innovation with higher quality while managing smaller teams. Organizations must now adapt by adopting rapid and strategic transformation while simultaneously working diligently to keep the lights on, all with the important goal of reducing costs. To address these challenges, Sirrus7, GitHub, and HashiCorp have joined together to create the DevOps Acceleration Engine.

Maximize existing vision systems in quality assurance with Cognitive AI

Quality assurance matters to manufacturers. The reputation and bottom line of a company can be adversely affected if defective products are released. If a defect is not detected, and the flawed product is not removed early in the production process, the damage can run in the hundreds of dollars per unit. To mitigate this, many manufacturers install cameras to monitor their products as they move along the production line. Mariner, with its Spyglass solution, uses AI from Azure to achieve visibility over the entire line, and to prevent product defects before they become a problem.

Azure This Week – 15 March 2019 | A Cloud Guru – Azure This Week

This time on Azure This Week, Lars covers the official release of Azure DevOps Server 2019 and the public preview of Azure Premium Blob Storage, and he looks at some new features in Azure Firewall.


ONNX Runtime integration with NVIDIA TensorRT in preview

Today we are excited to open source the preview of the NVIDIA TensorRT execution provider in ONNX Runtime. With this release, we are taking another step towards open and interoperable AI by enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can now tap into the power of TensorRT through ONNX Runtime to accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, and many other popular frameworks.

Microsoft and NVIDIA worked closely to integrate the TensorRT execution provider with ONNX Runtime and have validated support for all the ONNX Models in the model zoo. With the TensorRT execution provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. We have seen up to 2X improved performance using the TensorRT execution provider on internal workloads from Bing MultiMedia services.

How it works

ONNX Runtime together with its TensorRT execution provider accelerates the inferencing of deep learning models by parsing the graph and allocating specific nodes for execution by the TensorRT stack in supported hardware. The TensorRT execution provider interfaces with the TensorRT libraries that are preinstalled in the platform to process the ONNX sub-graph and execute it on NVIDIA hardware. This enables developers to run ONNX models across different flavors of hardware and build applications with the flexibility to target different hardware configurations. This architecture abstracts out the details of the hardware specific libraries that are essential to optimizing the execution of deep neural networks.

How to use the TensorRT execution provider

ONNX Runtime with the TensorRT execution provider supports the ONNX spec v1.2 or higher, with opset version 9. TensorRT-optimized models can be deployed to all N-series VMs powered by NVIDIA GPUs on Azure.

To use TensorRT, you must first build ONNX Runtime with the TensorRT execution provider (pass the --use_tensorrt and --tensorrt_home <path to the TensorRT libraries on your machine> flags to the build.sh tool). You can then take advantage of TensorRT by initiating the inference session through the ONNX Runtime APIs. ONNX Runtime will automatically prioritize the appropriate sub-graphs for execution by TensorRT to maximize performance.
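The build step above might look like the following (the TensorRT install path is an assumption for illustration; adjust it to your machine):

```shell
# Build ONNX Runtime with the TensorRT execution provider enabled.
# --tensorrt_home must point at your local TensorRT installation.
./build.sh --config Release \
  --use_tensorrt \
  --tensorrt_home /usr/local/TensorRT
```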

// Create session options, register the TensorRT execution provider,
// then load the model so eligible sub-graphs are assigned to TensorRT.
SessionOptions so;
InferenceSession session_object{so};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::TensorrtExecutionProvider>());
status = session_object.Load(model_file_name);

Detailed instructions are available on GitHub. In addition, a collection of standard tests is available through the onnx_test_runner utility in the repo to help verify the ONNX Runtime build with the TensorRT execution provider.

What is ONNX and ONNX Runtime

ONNX is an open format for deep learning and traditional machine learning models that Microsoft co-developed with Facebook and AWS. ONNX allows models to be represented in a common format that can be executed across different hardware platforms using ONNX Runtime. This gives developers the freedom to choose the right framework for their task, as well as the confidence to run their models efficiently on a variety of platforms with the hardware of their choice.

ONNX Runtime is the first publicly available inference engine with full support for ONNX 1.2 and higher including the ONNX-ML profile. ONNX Runtime is lightweight and modular with an extensible architecture that allows hardware accelerators such as TensorRT to plug in as “execution providers.” These execution providers unlock low latency and high efficiency neural network computations. Today, ONNX Runtime powers core scenarios that serve billions of users in Bing, Office, and more.

Another step towards open and interoperable AI

The preview of the TensorRT execution provider for ONNX Runtime marks another milestone in our venture to create an open and interoperable ecosystem for AI. We hope this makes it easier to drive AI innovation in a world with ever-increasing latency requirements for production models. We are continuously evolving and improving ONNX Runtime, and look forward to your feedback and contributions!  

To learn more about using ONNX for accelerated inferencing on the cloud and edge, check out the ONNX session at NVIDIA GTC. Have feedback or questions about ONNX Runtime? File an issue on GitHub, and follow us on Twitter. 