Deploy a FHIR sandbox in Azure

This blog post was authored by Michael Hansen, Senior Program Manager, Microsoft Azure.

In connection with HIMSS 2019, we announced the Azure API for FHIR, which provides our customers with an enterprise-grade, managed FHIR® API in Azure. Since then, we have been busy improving the service with new configuration options and features. Some of the features we have been working on include authentication configuration and the SMART on FHIR Azure Active Directory Proxy, which enable the so-called SMART on FHIR EHR launch with the Azure API for FHIR.

We have developed a sandbox environment that illustrates how the service and the configuration options are used. In this blog post, we focus on how to deploy the sandbox in Azure. Later blog posts will dive into some of the technical details of the various configuration options.

The Azure API for FHIR team maintains a GitHub repository with sample applications, which the product engineering team keeps up to date to ensure it works with the latest features of the Azure API for FHIR. The repository contains a patient dashboard application, an Azure Function that loads patient data generated with Synthea, and example templates for SMART on FHIR applications.

Deployment instructions

The repository contains fully automated PowerShell scripts that you can use to deploy the sandbox scenario. The deployment script will create Azure Active Directory application registrations and a test user. If you do not want to create these Azure Active Directory objects in the tenant associated with your Azure subscription, we recommend you create a separate Azure Active Directory tenant to use for data plane access control.

The deployment script is written for PowerShell and uses the AzureAD PowerShell module. If you don’t have access to PowerShell on your computer, you can use the Azure Cloud Shell. In the cloud shell, you can deploy the sandbox environment with:

# Clone source code repository
cd $HOME
git clone https://github.com/Microsoft/fhir-server-samples
cd fhir-server-samples/deploy/scripts

# Log in to Azure AD:
Connect-AzureAd -TenantDomain <mytenantdomain>.onmicrosoft.com

# Connect to Azure Subscription
Login-AzureRmAccount

# Select subscription
Select-AzureRmSubscription -SubscriptionName "Name of your subscription"

# Deploy Sandbox
.\Create-FhirServerSamplesEnvironment.ps1 -EnvironmentName <NameOfEnvironment> -EnvironmentLocation westus2 -AdminPassword $(ConvertTo-SecureString -AsPlainText -Force "MySuperSecretPassword")

It will take around 5 minutes to deploy the environment. The deployment script will create a resource group with the same name as the environment. In there, you will find all the resources associated with the sandbox.

Loading synthetic data

The environment resource group will contain a storage account with a container named “fhirimport.” If you upload Synthea-generated patient bundles to this container, the sandbox’s Azure Function will pick them up and ingest them into the FHIR server.
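As a hedged illustration (the resource group, storage account name, and local path are placeholders you will need to adjust to your environment), you can upload a folder of Synthea output from PowerShell with the AzureRm storage cmdlets:

# Get the sandbox storage account (placeholder names)
$account = Get-AzureRmStorageAccount -ResourceGroupName "<NameOfEnvironment>" -Name "<storageAccountName>"

# Upload every Synthea-generated FHIR bundle from the local output folder
Get-ChildItem .\synthea\output\fhir\*.json | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container "fhirimport" -Blob $_.Name -Context $account.Context
}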

Using the patient dashboard

There are two versions of the patient dashboard, available at:

https://<NameOfEnvironment>dash.azurewebsites.net: This is an ASP.NET patient dashboard. The GitHub repository contains the source code for this patient dashboard.
https://<NameOfEnvironment>js.azurewebsites.net: This is a single page JavaScript application. The source code is also in the GitHub repository.

When you navigate to either of those URLs, you will be prompted to log in. The administrator user is created by the deployment script with the username <NameOfEnvironment>-admin@<mytenantdomain>.onmicrosoft.com and the password you chose during deployment. If you have uploaded some patients using the Synthea uploader, you should be able to display a list of patients in either dashboard.

You can click the details link for a specific patient to get more information.

You can also use the links to the SMART on FHIR applications, for example to launch the growth chart application for this patient.

The sandbox provides other useful tools. As an example, the “About me” link will provide you with details about the FHIR endpoint, including a token that can be used to access the FHIR API using tools like Postman.
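For example (a sketch; the endpoint format assumes the Azure API for FHIR’s standard azurehealthcareapis.com address, and the token placeholder is the value copied from the “About me” page), you can query the FHIR API directly from PowerShell:

# Placeholders: copy the token from the dashboard's "About me" page
$token = "<access token from the About me page>"
$fhirUrl = "https://<NameOfEnvironment>.azurehealthcareapis.com"

# Retrieve the first page of Patient resources
Invoke-RestMethod -Uri "$fhirUrl/Patient" -Headers @{ Authorization = "Bearer $token" }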

Deleting the sandbox

When you are done exploring the Azure API for FHIR and the FHIR sandbox, it is easily deleted with:

.\Delete-FhirServerSamplesEnvironment.ps1 -EnvironmentName <NameOfEnvironment>

FHIR® is the registered trademark of HL7 and is used with the permission of HL7.
Source: Azure

Building recommender systems with Azure Machine Learning service

Recommendation systems are used in a variety of industries, from retail to news and media. If you’ve ever used a streaming service or ecommerce site that has surfaced recommendations for you based on what you’ve previously watched or purchased, you’ve interacted with a recommendation system. With the availability of large amounts of data, many businesses are turning to recommendation systems as a critical revenue driver. However, finding the right recommender algorithms can be very time consuming for data scientists. This is why Microsoft has provided a GitHub repository with Python best practice examples to facilitate the building and evaluation of recommendation systems using Azure Machine Learning services.

What is a recommendation system?

There are two main types of recommendation systems: collaborative filtering and content-based filtering. Collaborative filtering (commonly used in e-commerce scenarios) identifies interactions between users and the items they rate in order to recommend new items they have not seen before. Content-based filtering (commonly used by streaming services) identifies features of users’ profiles or item descriptions to make recommendations for new content. These approaches can also be combined for a hybrid approach.

Recommender systems keep customers on a business’s site longer, encourage them to interact with more products and content, and suggest products or content a customer is likely to purchase or engage with, much as a store sales associate might. Below, we’ll show you what this repository is, and how it eases pain points for data scientists building and implementing recommender systems.

Easing the process for data scientists

The recommender algorithm GitHub repository provides examples and best practices for building recommendation systems, provided as Jupyter notebooks. The examples detail our learnings on five key tasks:

Data preparation – Preparing and loading data for each recommender algorithm
Modeling – Building models using various classical and deep learning recommender algorithms such as Alternating Least Squares (ALS) or eXtreme Deep Factorization Machines (xDeepFM)
Evaluating – Evaluating algorithms with offline metrics
Model selection and optimization – Tuning and optimizing hyperparameters for recommender models
Operationalizing – Operationalizing models in a production environment on Azure

Several utilities are provided in reco_utils to support common tasks such as loading datasets in the format expected by different algorithms, evaluating model outputs, and splitting training/test data. Implementations of several state-of-the-art algorithms are provided for self-study and for customization in an organization’s or data scientist’s own applications.
The repository includes a substantial list of recommender algorithms. We’re always adding more, so go to the GitHub repository to see the most up-to-date list.

Let’s take a closer look at how the recommender repository addresses data scientists’ pain points.

It’s time consuming to evaluate different options for recommender algorithms

One of the key benefits of the recommender GitHub repository is that it provides a set of options and shows which algorithms are best for solving certain types of problems. It also provides a rough framework for how to switch between different algorithms. A data scientist may want to switch to a different algorithm if model accuracy isn’t sufficient, if an algorithm better suited to real-time results is needed, or if the originally chosen algorithm isn’t the best fit for the type of data being used.

Choosing, understanding, and implementing newer models for recommender systems can be costly

Selecting the right recommender algorithm from scratch and implementing new models for recommender systems can be costly, as they require ample time for training and testing as well as large amounts of compute power. The recommender GitHub repository streamlines the selection process, reducing costs by saving data scientists time they would otherwise spend testing algorithms that are not a good fit for their projects or scenarios. This, coupled with Azure’s various pricing options, reduces data scientists’ costs for testing and organizations’ costs for deployment.

Implementing more state-of-the-art algorithms can appear daunting

When asked to build a recommender system, data scientists will often turn to more commonly known algorithms to alleviate the time and costs needed to choose and test more state-of-the-art algorithms, even if these more advanced algorithms may be a better fit for the project/data set. The recommender GitHub repository provides a library of well-known and state-of-the-art recommender algorithms that best fit certain scenarios. It also provides best practices that, when followed, make implementing more state-of-the-art algorithms easier to approach.

Data scientists are unfamiliar with how to use Azure Machine Learning service to train, test, optimize, and deploy recommender algorithms

Finally, the recommender GitHub repository provides best practices for how to train, test, optimize, and deploy recommender models on Azure and Azure Machine Learning (Azure ML) service. In fact, there are several notebooks available on how to run the recommender algorithms in the repository on Azure ML service. Data scientists can also take any notebook that has already been created and submit it to Azure with minimal or no changes.

Azure ML can be used intensively across various notebooks for tasks relating to AI model development, such as:

Hyperparameter tuning
Tracking and monitoring metrics to enhance the model creation process
Scaling up and out on compute like DSVM and Azure ML Compute
Deploying a web service to Azure Kubernetes Service
Submitting pipelines

Learn more

Utilize the GitHub repository for your own recommender systems.

Learn more about the Azure Machine Learning service.

Get started with a free trial of Azure Machine Learning service.
Source: Azure

Quest powers Spotlight Cloud with Azure

This blog post was co-authored by Liz Yu (Marketing), Bryden Oliver (Architect), Iain Shepard (Senior Software Engineer) at Spotlight Cloud, and Deborah Chen (Program Manager), Sri Chintala (Program Manager) at Azure Cosmos DB.


Spotlight Cloud is the first database performance monitoring solution built on Azure that is focused on SQL Server customers. Leveraging the scalability, performance, global distribution, high availability, and built-in security of Microsoft Azure Cosmos DB, Spotlight Cloud combines the best of the cloud with Quest Software’s engineering insights from years of building database performance management tools.

As a tool that delivers database insights that lead customers to higher availability, scalability, and faster resolution of their SQL solutions, Spotlight Cloud needed a database service that provided those exact requirements on the backend as well.

Using Azure Cosmos DB and Azure Functions, Quest was able to build a proof of concept within two months and deploy to production in less than eight months.

“Azure Cosmos DB will allow us to scale as our application scales. As we onboard more customers, we value the predictability in terms of performance, latency, and the availability we get from Azure Cosmos DB.”

– Patrick O’Keeffe, VP of Software Engineering, Quest Software

Spotlight Cloud requirements

The amount of data needed to support a business continually grows. As data scales, so does Spotlight Cloud, as it needs to analyze all that data. Quest’s developers knew they needed a highly available database service, at an affordable cost, that could meet the following requirements:

Collect and store many different types of data and send it to an Azure-based storage service. The data comes from SQL Server DMVs, OS performance counter statistics, SQL plans, and other useful information. The data collected varies greatly in size (100 bytes to multiple megabytes) and shape.
Accept 1,200 operations/second on the data with the ability to continue to scale as more customers use Spotlight Cloud.
Query and return data to aid in the diagnosis and analysis of SQL Server performance problems quickly.

After a thorough evaluation of many products, Quest chose Azure Functions and Azure Cosmos DB as the backbone of their solution. Spotlight Cloud was able to leverage both Azure Function apps and Azure Cosmos DB to reduce cost, improve performance, and deliver a better service to their customers.

Solution

Part of the core data flow in Spotlight Cloud. Other technologies used, not shown, include Event Hub, Application Insights, Key Vault, Storage, DNS.

The core data processing flow within Spotlight Cloud is built on Azure Functions and Azure Cosmos DB. This technology stack provides Quest with the high scale and performance they need.

Scale


Ingest apps handle more than 1,000 sets of customer monitoring data per second. To support this, the Azure Functions consumption plan automatically scales out to hundreds of VMs.

Azure Cosmos DB provides guaranteed throughput for database and containers, measured in Request Units / second (RU/s), and backed by SLAs. By estimating the required throughput of the workload and translating it to RU/s, Quest was able to achieve predictable throughput of reads and writes against Azure Cosmos DB at any scale.

Performance


Azure Cosmos DB handles the write and read operations for Spotlight’s data in under 60 milliseconds. This enables customers’ SQL Server data to be quickly ingested and made available for analysis in near real time.

High availability


Azure Cosmos DB provides 99.999% high availability SLA for reads and writes, when using 2+ regions. Availability is crucial for Spotlight Cloud’s customers, as many are in the healthcare, retail, and financial services industries and cannot afford to experience any database downtime or performance degradation. In the event a failover is needed, Azure Cosmos DB does automatic failover with no manual intervention, enabling business continuity.

With turnkey global distribution, Azure Cosmos DB handles automatic and asynchronous replication of data between regions. To take full advantage of their provisioned throughput, Quest designated one region to handle writes (data ingest) and another for reads. As a result, users’ read response times are never impacted by the write volume.

Flexible schema


Azure Cosmos DB accepts JSON data of varying size and schema. This enabled Quest to store a variety of data from diverse sources, such as SQL Server DMVs, OS performance counter statistics, etc., and removed the need to worry about fixed schemas or schema management.

Developer productivity


Azure Functions tooling made the development and coding process very smooth, which enabled developers to be productive immediately. Developers also found Azure Cosmos DB’s SQL query language to be easy to use, reducing the ramp-up time.

Cost


The Azure Functions consumption pricing model charges only for the compute and memory each function invocation uses. Particularly for lower-volume microservices, this lets users operate at low cost. In addition, using Azure Functions on a consumption plan gives Quest the ability to have failover instances on standby at all times, and only incur cost if failover instances are actually used.

From a Total Cost of Ownership (TCO) perspective, Azure Cosmos DB and Azure Functions are both managed solutions, which reduced the amount of time spent on management and operations. This enabled the team to focus on building services that deliver direct value to their customers.

Support

Microsoft engineers are directly available to help with issues, provide guidance, and share best practices.

With Spotlight Cloud, Quest’s customers have the advantage of storing data in Azure instead of an on-premises SQL Server database. Customers also have access to all the analysis features that Quest provides in the cloud. For example, a customer can investigate the SQL workload and performance on their SQL Server in great detail to optimize the data and queries for their users – all powered by Spotlight Cloud running on top of Azure Cosmos DB.

"We were looking to upgrade our storage solution to better meet our business needs. Azure Cosmos DB gave us built-in high availability and low latency, which allowed us to improve our uptime and performance. I believe Azure Cosmos DB plays an important role in our Spotlight Cloud to enable customers to access real-time data fast."

– Efim Dimenstein, Chief Cloud Architect, Quest Software

Deployment Diagram of Spotlight Cloud’s Ingest and Egress app

In the diagram above, data is routed to an available ingest app by Traffic Manager. The ingest app writes data into the Azure Cosmos DB write region. Data consumers are routed via Traffic Manager to the egress app, which then reads data from the Azure Cosmos DB read region.

Learnings and best practices

In building Spotlight Cloud, Quest gained a deep understanding of how to use Azure Cosmos DB in the most effective way:


Understand Azure Cosmos DB’s provisioned throughput model (RU/s)


Quest measured the cost of each operation, the number of operations/second, and provisioned the total amount of throughput required in Azure Cosmos DB.

Since Azure Cosmos DB cost is based on storage and provisioned throughput, choosing the right amount of RUs was key to using Azure Cosmos DB in a cost effective manner.
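As a back-of-the-envelope sketch (the RU cost per write below is illustrative, not Quest’s measured value), the sizing math looks like this:

# Illustrative numbers only; measure the RU cost of your own operations
$writesPerSecond = 1200    # target ingest rate from the requirements above
$ruPerWrite      = 10      # assumed RU cost of a typical ingest operation
$requiredRus     = $writesPerSecond * $ruPerWrite    # 12,000 RU/s to provision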

Choose a good partition strategy


Quest chose a partition key for their data that resulted in a balanced distribution of request volume and storage. This is critical because Azure Cosmos DB shards data horizontally and distributes total provisioned RUs evenly among the partitions of data.

During the development stage, Quest experimented with several choices of partition key and measured the impact on the performance. If a partition key strategy was unbalanced, a workload would require more RUs than with a balanced partition strategy.

Quest chose a synthetic partition key that incorporated the server ID and the type of data being stored. This gave a high number of distinct values (high cardinality), leading to an even distribution of data – crucial for a write-heavy workload.
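As an illustration (the property names here are hypothetical, not Quest’s actual schema), a synthetic partition key can be built by concatenating the two values before the document is written:

# Hypothetical document shape with a synthetic partition key
$doc = @{
    id           = [guid]::NewGuid().ToString()
    serverId     = $serverId                  # e.g. "sqlserver-042"
    dataType     = $dataType                  # e.g. "perfcounters"
    partitionKey = "$serverId-$dataType"      # high-cardinality synthetic key
}
$doc | ConvertTo-Json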

Tune indexing policy


For Quest’s write-heavy workload, tuning index policy and RU cost on writes was key to achieving good performance. To do this, Quest modified the Azure Cosmos DB indexing policy to explicitly index commonly queried properties in a document and exclude the rest. In addition, Quest included only a few commonly used properties in the body of the document and encoded the rest of the data into a single property.
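A policy of that shape might look like the following sketch (the property paths are hypothetical): commonly queried properties are indexed explicitly and everything else is excluded.

{
    "indexingMode": "consistent",
    "includedPaths": [
        { "path": "/serverId/?" },
        { "path": "/dataType/?" },
        { "path": "/timestamp/?" }
    ],
    "excludedPaths": [
        { "path": "/*" }
    ]
}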

Scale up and down RUs based on data access pattern


In Spotlight Cloud, customers tend to access recent data more frequently than the older data. At the same time, new data continues to be written in a steady stream, making it a write-heavy workload.

To tune the overall provisioned RUs of the workload, Quest split the data into multiple containers. A new container is created regularly (e.g. every week to a few months) with high RUs, ready to receive writes.

Once the next new container is ready, the previous container’s RUs are reduced to only what is required to serve the expected read operations. Writes are then directed to the new container with its high number of RUs.
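As a sketch of that rotation step (names are placeholders, and this uses the Resource Manager-based pattern documented for Azure Cosmos DB at the time; dedicated throughput cmdlets arrived later), lowering the previous container’s throughput might look like:

# Placeholders throughout; lower the old container's RUs once writes move on
Set-AzureRmResource -Force -ApiVersion "2015-04-08" `
    -ResourceType "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers/settings" `
    -ResourceGroupName "<resourceGroupName>" `
    -Name "<accountName>/sql/<databaseName>/<previousContainerName>/throughput" `
    -PropertyObject @{ resource = @{ throughput = 2000 } }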

Tour of Spotlight Cloud’s user interface

About Quest

Quest has provided software solutions for the fast-paced world of enterprise IT since 1987. They are a global provider to 130,000 companies across 100 countries, including 95 percent of the Fortune 500 and 90 percent of the Global 1000.

Find out more about Spotlight Cloud on Twitter, Facebook, and LinkedIn.
Source: Azure

Azure.Source – Volume 80

Spark + AI Summit | Preview | GA | News & updates | Technical content | Azure shows | Events | Customers, partners, and industries

Spark + AI Summit 2019

Spark + AI Summit – Developing for the intelligent cloud and intelligent edge

Last week at Spark + AI Summit 2019, Microsoft announced joining the open source MLflow project as an active contributor. Developers can use the standard MLflow tracking API to track runs and deploy models directly into Azure Machine Learning service. We also announced that managed MLflow is generally available on Azure Databricks and will use Azure Machine Learning to track the full ML lifecycle. The combination of Azure Databricks and Azure Machine Learning makes Azure the best cloud for machine learning. Databricks also open sourced Databricks Delta, which gives Azure Databricks customers greater reliability, improved performance, and the ability to simplify their data pipelines. Lastly, .NET for Apache Spark is available in preview; it is a free, open-source, .NET Standard-compliant, and cross-platform big data analytics framework.

Dear Spark developers: Welcome to Azure Cognitive Services

With only a few lines of code you can start integrating the power of Azure Cognitive Services into your big data workflows on Apache Spark™. The Spark bindings offer high throughput and run anywhere you run Spark. The Cognitive Services on Spark fully integrate with containers for high-performance, on-premises, or low-connectivity scenarios. Finally, we have provided a general framework for working with any web service on Spark. You can start leveraging the Cognitive Services for your project with our open source initiative MMLSpark on Azure Databricks.

Now in preview

Securing Azure SQL Databases with managed identities just got easier

Announcing the second preview release of the Azure Services App Authentication library, version 1.2.0. This release enables simple and seamless authentication to Azure SQL Database for existing .NET applications with no code changes – only configuration changes. Try out the new functionality in existing SQL-backed solutions and gain the security benefits that the App Authentication library and managed identities afford.

Now generally available

Announcing Azure Backup support to move Recovery Services vaults

Announcing the general availability of the move functionality for Recovery Services vaults, an Azure Resource Manager resource to manage your backup and disaster recovery needs natively in the cloud. Migrate a vault between subscriptions and resource groups in a few steps, with minimal downtime and without any data loss for old backups. Move a Recovery Services vault and retain recovery points of protected virtual machines (VMs) to restore to any point in time later.

Azure SQL Data Warehouse reserved capacity and software plans now generally available

Announcing the general availability of Azure SQL Data Warehouse reserved capacity and software plans for Red Hat Enterprise Linux and SUSE. Purchase reserved capacity for Azure SQL Data Warehouse and get up to a 65 percent discount over pay-as-you-go rates, with 1-year or 3-year pre-commit options. Purchase plans for Red Hat Enterprise Linux and save up to 18 percent; these plans are only available for Red Hat Enterprise Linux virtual machines, and the discount does not apply to Red Hat Enterprise Linux SAP HANA VMs or Red Hat Enterprise Linux SAP Business Apps VMs. Save up to 64 percent on your SUSE software costs; SUSE plans get the auto-fit benefit, so you can scale your SUSE VM sizes up or down and the reservations will continue to apply. In addition, there is a new experience to purchase reservations and software plans, including REST APIs to purchase Azure reservations and software plans.

Azure Cost Management now generally available for Pay-As-You-Go customers

Announcing the general availability of Azure Cost Management features for all Pay-As-You-Go and Azure Government customers, which will greatly enhance your ability to analyze and proactively manage your cloud costs. These features enable you to analyze your cost data, configure budgets to drive accountability for cloud costs, and export pre-configured reports on a schedule to support deeper data analysis within your own systems. This release for Pay-As-You-Go customers also provides invoice reconciliation support in the Azure portal via a usage CSV download of all charges applicable to your invoices.

News and updates

Microsoft container registry unaffected by the recent Docker Hub data exposure

Docker recently announced Docker Hub had a brief security exposure that enabled unauthorized access to a Docker Hub database, exposing 190k Hub accounts and their associated GitHub tokens for automated builds. While initial information led people to believe the hashes of the accounts could lead to image:tags being updated with vulnerabilities, including official and microsoft/ org images, this was not the case. Microsoft has confirmed that the official Microsoft images hosted in Docker Hub have not been compromised. Regardless of which cloud you use, or if you are working on-premises, importing production images to a private registry is a best practice that puts you in control of the authentication, availability, reliability and performance of image pulls.

AI for Good: Developer challenge

Do you have an idea that could improve and empower the lives of everyone in a more accessible way? Or perhaps you have an idea that would help create a sustainable balance between modern society and the environment? Even if it’s just the kernel of an idea, it’s a concept worth exploring with the AI for Good Idea Challenge. If you’re a developer, a data scientist, a student of AI, or even just passionate about AI and machine learning, we encourage you to take part in the AI for Good: Developer challenge and improve the world by sharing your ideas.

Azure Notification Hubs and Google’s Firebase Cloud Messaging Migration

When Google announced its migration from Google Cloud Messaging (GCM) to Firebase Cloud Messaging (FCM), push services like Azure Notification Hubs had to adjust how they send notifications to Android devices to accommodate the change. If your app uses the GCM library, follow Google’s instructions to upgrade to the FCM library in your app. Our SDK is compatible with either, so as long as you’re up to date with our SDK version, you won’t have to update anything in your app on our side.

Governance setting for cache refreshes from Azure Analysis Services

Data visualization and consumption tools over Azure Analysis Services (Azure AS) sometimes store data caches to enhance report interactivity for users. The Power BI service, for example, caches dashboard tile data and report data for initial load for Live Connect reports. This post introduces the new governance setting called ClientCacheRefreshPolicy to disable automatic cache refreshes.

Azure Updates

Learn about important Azure product updates, roadmap, and announcements. Subscribe to notifications to stay informed.

Technical content

Best practices in migrating SAP applications to Azure – part 1

This post touches upon the principles outlined in Pillars of a great Azure architecture as they pertain to building your SAP on Azure architecture in readiness for your migration.

Best practices in migrating SAP applications to Azure – part 2

Part 2 covers a common scenario in which SAP customers can experience the speed and agility of the Azure platform: migrating from SAP Business Suite running on-premises to SAP S/4HANA in the cloud.

Use Artificial Intelligence to Suggest 1-5 Star Ratings

When customers are impressed by or dissatisfied with a product, they come back to where it was purchased looking for a way to leave feedback. See how to use artificial intelligence (Cognitive Services) to suggest star ratings based on sentiment – detected as customers write positive or negative words in their product reviews. Learn about Cognitive Services, Sentiment Analysis, and Azure Functions through a full tutorial – as well as where to go to learn more and how to set up a database to store and manage submissions.

You should never ever run directly against Node in production. Maybe.

Running directly against Node means that an unhandled error can crash your app with nothing in place to restart it. To prevent this, you can run Node with a monitoring tool, or you can monitor your applications themselves.

Configure Azure Site Recovery from Windows Admin Center

Learn how to use Windows Admin Center to configure Azure Site Recovery to be able to replicate virtual machines to Azure, which you can use for protection and Disaster Recovery, or even migration.

Connecting Twitter and Twilio with Logic Apps to solve a parking problem

See how Tim Heuer refactored a solution to a common problem NYC residents face – trying to figure out street-side parking rules – using Azure Logic Apps and the provided connectors for Twitter and Twilio to accomplish the same thing.

Creating an Image Recognition Solution with Azure IoT Edge and Azure Cognitive Services

Dave Glover demonstrates how one can use Azure Custom Vision and Azure IoT Edge to build a self-service checkout experience for visually impaired people – all without needing to be a data scientist. The solution is extended with a Python Azure Function, SignalR, and a Static Website Single Page App.

Get Azure Pipeline Build Status with the Azure CLI

For those who prefer the command line, it's possible to interact with Azure DevOps using the Azure CLI. Neil Peterson takes a quick look at the configuration and basic functionality of the CLI extension as related to Azure Pipelines.

dotnet-azure : A .NET Core global tool to deploy an application to Azure in one command

The options for pushing your .NET Core application to the cloud are not lacking depending on what IDE or editor you have in front of you. But what if you just wanted to deploy your application to Azure with a single command? Shayne Boyer shows you how to do just that with the dotnet-azure global tool.

Detecting threats targeting containers with Azure Security Center

More and more services are moving to the cloud and they bring their security challenges with them. In this blog post, we will focus on the security concerns of container environments. This post goes over several security concerns in containerized environments, from the Docker level to the Kubernetes cluster level, and shows how Azure Security Center can help you detect and mitigate threats in the environment as they’re occurring in real time.

Customize your Azure best practice recommendations in Azure Advisor

Cloud optimization is critical to ensuring you get the most out of your Azure investment, especially in complex environments with many Azure subscriptions and resource groups. Learn how Azure Advisor helps you optimize your Azure resources for high availability, security, performance, and cost by providing free, personalized recommendations based on your Azure usage and configurations.

5 tips to get more out of Azure Stream Analytics Visual Studio Tools

Azure Stream Analytics is an on-demand real-time analytics service to power intelligent action. Azure Stream Analytics tools for Visual Studio make it easier for you to develop, manage, and test Stream Analytics jobs. This post introduces capabilities and features to help you improve productivity that were included in two major updates from earlier this year: test partial scripts locally; share inputs, outputs, and functions across multiple scripts; duplicate a job to other regions; local input schema auto-completion; and testing queries against SQL database as reference data.

Azure Tips and Tricks – Become more productive with Azure

Since its inception in 2017, the Azure Tips & Tricks collection has grown to more than 200 tips, as well as videos, conference talks, and several eBooks spanning the entire breadth of the Azure platform. Featuring a new tip and video each week, it is designed to help you boost your productivity with Azure, and all tips are based on practical real-world scenarios. This post re-introduces Azure Tips and Tricks, a web resource that helps developers using Azure learn something new within a couple of minutes.

Optimize performance using Azure Database for PostgreSQL Recommendations

You no longer have to be a database expert to optimize your database. Make your job easier and start taking advantage of recommendations for Microsoft Azure Database for PostgreSQL today. By analyzing the workloads on your server, the recommendations feature gives you daily insights about the Azure Database for PostgreSQL resources that you can optimize for performance. These recommendations are tightly integrated with Azure Advisor to provide you with best practices directly within the Azure portal.

Azure shows

Episode 275 – Azure Foundations | The Azure Podcast

Derek Martin, a Technology Solutions Principal (TSP) at Microsoft talks about his approach to ensuring that customers get the foundational elements of Azure in place first before deploying anything else. He discusses why Microsoft is getting more opinionated, as a company, when advocating for best practices.


Code-free modern data warehouse using Azure SQL DW and Data Factory | Azure Friday

Gaurav Malhotra joins Scott Hanselman to show how to build a modern data warehouse solution, from ingress of structured, unstructured, and semi-structured data, to code-free data transformation at scale, and finally to extracting business insights into your Azure SQL Data Warehouse.

Serverless automation using PowerShell in Azure Functions | Azure Friday

Eamon O'Reilly joins Scott Hanselman to show how PowerShell in Azure Functions makes it possible for you to automate operational tasks and take advantage of the native Azure integration to deliver and maintain services.

Meet our Azure IoT partners: Accenture | Internet of Things Show

Mukund Ghangurde is part of the Industry x.0 practice at Accenture focused on driving digital transformation and digital reinvention with industry customers. Mukund joined us on the IoT Show to discuss the scenarios he is seeing in the industry where IoT is really transforming businesses and how (and why) Accenture is partnering with Azure IoT to accelerate and scale their IoT solutions.

Doing more with Logic Apps | Block Talk

Integration with smart contracts is a common topic with developers. Whether with apps, data, messaging, or services, there is a desire to connect the functions and events of smart contracts in an end-to-end scenario. In this episode, we examine the different types of scenarios and look at the most common use case – how to quickly expose your smart contract functions as microservices with the Ethereum Blockchain Connector for Logic Apps or Flow.

How to get started with Azure API Management | Azure Tips and Tricks

Learn how to get started with Azure API Management, a service that helps protect and manage your APIs.

How to create a load balancer | Azure Portal Series

Learn how to configure load balancers and how to add virtual machines to them in the Azure Portal.

Rockford Lhotka on Software Architecture | The Azure DevOps Podcast

This week, Jeffrey Palermo and Rocky Lhotka are discussing software architecture. They discuss what Rocky is seeing transformation-wise on both the client side and server side, compare and visit the spectrum of Containers vs. virtual machines vs. PaaS vs. Azure Functions, and take a look at microservice architecture. Rocky also gives his tips and recommendations for companies who identify as .NET shops, and whether you should go with Containers or PaaS.


Episode 8 – Partners Help The Azure World Go Round | AzureABILITY Podcast

Microsoft's vast partner ecosystem is a big part of the Azure value proposition. Listen in as Microsoft Enterprise Channel Manager Christine Schanne and Louis Berman delve into the partner experience with Neudesic, a top Gold Microsoft Partner.


Events

Get ready for Global Azure Bootcamp 2019

Global Azure Bootcamp is a free, one-day, local event that takes place globally. This annual event, which is run by the Azure community, took place this past Saturday, April 27, 2019. Each year, thousands attend these free events to expand their knowledge about Azure using a variety of formats as chosen by each location. Did you attend?

Connecting Global Azure Bootcampers with a cosmic chat app

We added a little “cosmic touch” to the Global Azure Bootcamp this past weekend by enabling attendees to greet each other with a chat app powered by Azure Cosmos DB. For a chat app, this means low latency in the ingestion and delivery of messages. To achieve that, we deployed our web chat over several Azure regions worldwide and let Azure Traffic Manager route users’ requests to the nearest region where our Cosmos database was deployed to bring data close to the compute and the users being served. That was enough to yield near real-time message delivery performance as we let Azure Cosmos DB replicate new messages to each covered region.

Customers, partners, and industries

Connect IIoT data from disparate systems to unlock manufacturing insights

Extracting insights from multiple data sources is a new goal for manufacturers. Industrial IoT (IIoT) data is the starting point for new solutions, with the potential for giving manufacturers a competitive edge. These systems contain vast and vital kinds of information, but they run in silos. This data is rarely correlated and exchanged. To help solve this problem, Altizon created the Datonis Suite, which is a complete industrial IoT solution for manufacturers to leverage their existing data sources.

Migrating SAP applications to Azure: Introduction and our partnership with SAP

Just over 25 years ago, Bill Gates and Hasso Plattner met to form an alliance between Microsoft and SAP that has become one of our industry’s longest-lasting alliances. At the time, their conversation focused on how Windows could be the leading operating system for SAP’s SAPGUI desktop client and, a few years later, how Windows NT could be a server operating system of choice for running SAP R/3. Ninety percent of today’s Fortune 500 companies use Microsoft Azure, and an estimated 80 percent of Fortune 500 companies run SAP solutions, so it makes sense why SAP running on Azure is a key joint initiative between Microsoft and SAP. Over the next three weeks leading up to this year’s SAPPHIRENOW conference in Orlando, we’re publishing an SAP on Azure technical blog series (see Parts 1 & 2 in Technical content above).

Azure Marketplace new offers – Volume 35

The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the first half of March, we published 68 new offers.

Azure Government Secret Regions, Azure Batch updates & Service Fabric Mesh new additions | Azure This Week – A Cloud Guru

This time on Azure This Week, Lars covers new Azure Government Secret regions and the new updates to Azure Batch. He also talks about new additions to Service Fabric Mesh. Check it out!

Source: Azure

Serverless automation using PowerShell preview in Azure Functions

As companies of all sizes move their assets and workloads to the cloud, there’s a clear need to provide more powerful ways to manage, govern, and automate their cloud resources. Such automation scenarios require custom logic best expressed in PowerShell. They are also typically executed either on a schedule or when an event happens like an alert on an application, a new resource getting created, or when an approval happens in an external system. 

Azure Functions is a perfect match to address such scenarios as it provides an application development model based on triggers and bindings for accelerated development and serverless hosting of applications. PowerShell support in Functions has been a common request from customers, given its event-based capabilities.

Today, we are pleased to announce that we have brought the benefits of this model to automating operational tasks across Azure and on-premises systems with the preview release of PowerShell support in Azure Functions.

Companies all over the world have been using PowerShell to automate their cloud resources in their organization, as well as on-premises, for years. Most of these scenarios are based on events that happen on the infrastructure or application that must be immediately acted upon in order to meet service level agreements and time to recovery.

With the release of PowerShell support in Azure Functions, it is now possible to automate these operational tasks and take advantage of the native Azure integration to modernize the delivery and maintenance of services.

PowerShell support in Azure Functions is built on the 2.x runtime and uses PowerShell Core 6, so your automation can be developed on Windows, macOS, and Linux. It also integrates natively with Azure Application Insights to give full visibility into each function execution. Previously, Azure Functions had experimental PowerShell support in 1.x, and it is highly recommended that customers move their 1.x PowerShell functions to the latest runtime.

PowerShell in Azure Functions has all the benefits of other languages including:

Native bindings to respond to Azure monitoring alerts, resource changes through Event Grid, HTTP or Timer triggers, and more.
Portal and Visual Studio Code integration for authoring and testing of the scripts.
Integrated security to protect HTTP triggered functions.
Support for hybrid connections and VNet to help manage hybrid environments.
Run in an isolated local environment.

Additionally, functions written with PowerShell have the following capabilities to make it easier to manage Azure resources through automation.

Automatic management of Azure modules

Azure modules are natively available for your scripts so you can manage services available in Azure without having to include these modules with each function created. Critical and security updates to these Az modules will be applied automatically by the service when new minor versions are released.

You can enable this feature through the host.json file by setting "Enabled" to true for managedDependency and updating Requirements.psd1 to include Az. These are automatically set when you create a new function app using PowerShell.

host.json
{
    "version": "2.0",
    "managedDependency": {
       "Enabled": "true"
    }
}

Requirements.psd1
@{
    Az = '1.*'
}

Authenticating against Azure services

When enabling a managed identity for the function app, the PowerShell host can automatically authenticate using this identity, giving functions permission to take actions on services to which the managed identity has been granted access. The profile.ps1 is processed when a function app is started and enables common commands to be executed. By default, if a managed identity is enabled, the function application will authenticate with Connect-AzAccount -Identity.
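For reference, the profile.ps1 generated for a new PowerShell function app contains logic along these lines (a sketch; check the file generated for your app for the exact contents):

# profile.ps1 runs on every cold start of the function app.
# Sign in with the app's managed identity when one is available.
if ($env:MSI_SECRET -and (Get-Module -ListAvailable Az.Accounts)) {
    Connect-AzAccount -Identity
}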

Common automation scenarios in Azure

PowerShell is a great language for automating tasks, and with its availability in Azure Functions, customers can now seamlessly author event-based actions across all services and applications running in Azure. Below are some common scenarios, followed by a minimal sketch of the last one:

Integration with Azure Monitor to process alerts generated by Azure services.
React to Azure events captured by Event Grid and apply operational requirements on resources.
Leverage Logic Apps to connect to external systems like IT service management, DevOps, or monitoring systems while processing the payload with a PowerShell function.
Perform scheduled operational tasks on virtual machines, SQL Server, Web Apps, and other Azure resources.
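To make the last scenario concrete, here is a sketch of a timer-triggered PowerShell function that deallocates tagged virtual machines on a schedule. The tag name is hypothetical, the parameter name must match the binding name in your function.json, and the function app’s managed identity is assumed to have permission to stop VMs:

# run.ps1 for a timer-triggered function
param($Timer)

if ($Timer.IsPastDue) {
    Write-Host "Timer is running late."
}

# Deallocate running VMs that opted in via a hypothetical "autoShutdown" tag
Get-AzVM -Status |
    Where-Object { $_.Tags["autoShutdown"] -eq "true" -and $_.PowerState -eq "VM running" } |
    ForEach-Object {
        Write-Host "Stopping $($_.Name) in $($_.ResourceGroupName)..."
        Stop-AzVM -Name $_.Name -ResourceGroupName $_.ResourceGroupName -Force -NoWait
    }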

Next steps

PowerShell support in Azure Functions is available in preview today. Check out the following resources and start trying it out:

Learn more about using PowerShell in Azure Functions in the documentation, including quick starts and common samples to help get started.
Sign up for an Azure free account if you don’t have one yet, and build your first function using PowerShell.
You can reach the Azure Functions team on Twitter and on GitHub. For specific feedback on the PowerShell language, please review its Azure Functions GitHub repository.
We also actively monitor StackOverflow and UserVoice, so feel free to ask questions or leave your suggestions. We look forward to hearing from you!
Learn more about automation and PowerShell in Functions on Azure Friday and Microsoft Mechanics.

Source: Azure

Microsoft container registry unaffected by the recent Docker Hub data exposure

Docker recently announced Docker Hub had a brief security exposure that enabled unauthorized access to a Docker Hub database, exposing 190k Hub accounts and their associated GitHub tokens for automated builds. While initial information led people to believe the hashes of the accounts could lead to image:tags being updated with vulnerabilities, including official and microsoft/ org images, this was not the case. Microsoft has confirmed that the official Microsoft images hosted in Docker Hub have not been compromised.

Consuming Microsoft images from the Microsoft Container Registry (MCR)

As a cloud and software company, Microsoft has been transitioning official Microsoft images from being served from Docker Hub to being served directly by Microsoft as of May 2018. To avoid breaking existing customers, image:tags previously available on Docker Hub continue to be made available. However, newer Microsoft images and tags are available directly from the Microsoft Container Registry (MCR) at mcr.microsoft.com. Search and discoverability of the images are available through Docker Hub; however, docker pull, run, and build statements should reference mcr.microsoft.com. For example, pulling the windows-servercore image:
docker pull mcr.microsoft.com/windows/servercore

Official microsoft/ org images follow the same format.

Microsoft recommends pulling Microsoft official images from mcr.microsoft.com.

Recommended best practices

Leveraging community and official images from Docker Hub and Microsoft is a critical part of today’s cloud native development. At the same time, it’s always important to create a buffer between these public images and your production workloads. These buffers account for availability, performance, reliability, and the risk of vulnerabilities. Regardless of which cloud you use, or if you are working on-premises, importing production images to a private registry is a best practice that puts you in control of the authentication, availability, reliability, and performance of image pulls. For more information, see Choosing A Docker Container Registry.

Automated container builds

In addition to using a private registry for your images, we also recommend using a cloud container build system that incorporates your company’s integrated authentication. For example, Azure offers Azure Pipelines and ACR Tasks for automating container builds, including OS and .NET Framework patching. ACR also offers az acr import for importing images from Docker Hub and other registries, enabling this buffer.
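For example (the registry and image names are placeholders), importing a public Docker Hub image into your own registry with the Azure CLI:

# Placeholders; pull the image into your private registry once, so production
# pulls no longer depend on Docker Hub
az acr import --name myregistry --source docker.io/library/node:10 --image node:10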

Microsoft remains committed to the security and reliability of your software and workloads.
Source: Azure