Securing the hybrid cloud with Azure Security Center and Azure Sentinel

Infrastructure security is top of mind for organizations managing workloads on-premises, in the cloud, or hybrid. Keeping on top of an ever-changing security landscape presents a major challenge. Fortunately, the power and scale of the public cloud have unlocked powerful new capabilities for helping security operations stay ahead of the changing threat landscape. Microsoft has developed a number of popular cloud-based security technologies that continue to evolve as we gather input from customers. Today we’d like to break down a few key Azure security capabilities and explain how they work together to provide layers of protection.

Azure Security Center provides unified security management by identifying and fixing misconfigurations and providing visibility into threats to quickly remediate them. Security Center has grown rapidly in usage and capabilities, and allowed us to pilot many new solutions, including a security information and event management (SIEM)-like functionality called investigations. While the response to the investigations experience was positive, customers asked us to build out more capabilities. At the same time, the traditional business model of Security Center, which is priced per resource such as per virtual machine (VM), doesn’t necessarily fit a SIEM. We realized that our customers needed a full-fledged standalone SIEM solution that stands apart from, yet integrates with, Security Center, so we created Azure Sentinel. This blog post clarifies what each product does and how Azure Security Center relates to Azure Sentinel.

Going forward, Security Center will continue to develop capabilities in three main areas:

Cloud security posture management: Security Center provides you with a bird’s eye security posture view across your Azure environment, enabling you to continuously monitor and improve your security posture using the Azure secure score. Security Center helps you identify the hardening tasks recommended as security best practices and implement them across your machines, data services, and apps. This includes managing and enforcing your security policies and making sure your Azure Virtual Machine instances, non-Azure servers, and Azure PaaS services are compliant. With newly added IoT capabilities, you can now reduce the attack surface of your Azure IoT solution and remediate issues before they can be exploited. We will continue to expand our resource coverage and the depth of insights available in security posture management. In addition to providing full visibility into the security posture of your environment, Security Center also provides visibility into the compliance state of your Azure environment against common regulatory standards.
Cloud workload protection: Security Center’s threat protection enables you to detect and prevent threats at the infrastructure-as-a-service (IaaS) layer, in platform-as-a-service (PaaS) resources like Azure IoT and Azure App Service, and on on-premises virtual machines. Key features of Security Center threat protection include configuration monitoring, server endpoint detection and response (EDR), application control, and network segmentation, and support is being extended to container and serverless workloads.
Data security: Security Center includes capabilities that identify breaches and anomalous activities against your SQL databases, data warehouse, and storage accounts, and will be extending to other data services. In addition, Security Center helps you perform automatic classification of your data in Azure SQL database.

When it comes to cloud workload protection, the goal is to present the information to users within Security Center in an easy-to-consume manner so that you can address individual threats. Security Center is not intended for advanced security operations (SecOps) hunting scenarios or to be a SIEM tool.

Going forward, SIEM and security orchestration, automation, and response (SOAR) capabilities will be delivered in Azure Sentinel. Azure Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.

Azure Sentinel is your security operations center (SOC) view across the enterprise, alleviating the stress of increasingly sophisticated attacks, increasing volumes of alerts, and long resolution timeframes. With Azure Sentinel you can:

Collect data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
Integrate curated alerts from Microsoft security products like Security Center and Microsoft Threat Protection, as well as from your non-Microsoft security solutions.
Detect previously undetected threats and minimize false positives using Microsoft Intelligent Security Graph, which uses trillions of signals from Microsoft services and systems around the globe to identify new and evolving threats. Investigate threats with artificial intelligence and hunt for suspicious activities at scale, tapping into years of cyber security experience at Microsoft.
Respond to incidents rapidly with built-in orchestration and automation of common tasks.

SIEMs typically integrate with a broad range of applications, including threat intelligence applications for specific workloads, and the same is true for Azure Sentinel. SecOps teams have the full power of querying against the raw data, using AI models, and even building their own models.

So how does Azure Security Center relate to Azure Sentinel?

Security Center is one of the many sources of threat protection information that Azure Sentinel collects data from to create a view for the entire organization. Microsoft recommends that customers using Azure use Azure Security Center for threat protection of workloads such as VMs, SQL, Storage, and IoT; in just a few clicks, they can connect Azure Security Center to Azure Sentinel. Once the Security Center data is in Azure Sentinel, customers can combine that data with other sources like firewalls, users, and devices for proactive hunting and threat mitigation with advanced querying and the power of artificial intelligence.
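Once connected, Security Center alerts flow into the workspace's SecurityAlert table, where they can be queried alongside other log sources. A minimal hunting sketch in Kusto Query Language (the provider filter and summarized columns are illustrative and may vary by connector):

```kusto
// Most frequent Security Center alerts over the last day (illustrative)
SecurityAlert
| where TimeGenerated > ago(1d)
| where ProviderName == "Azure Security Center"
| summarize AlertCount = count() by AlertName, CompromisedEntity
| order by AlertCount desc
```

From here the same query can be joined against firewall or identity tables to correlate an alert with the surrounding activity.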

Are there any changes to Security Center as a result of this strategy?

To reduce confusion and simplify the user experience, two of the early SIEM-like features in Security Center, namely the investigation flow in security alerts and custom alerts, will be removed in the near future. Individual alerts remain in Security Center, and there are equivalents for both security alerts and custom alerts in Azure Sentinel.

Going forward, Microsoft will continue to invest in both Azure Security Center and Azure Sentinel. Azure Security Center will continue to be the unified infrastructure security management system for cloud security posture management and cloud workload protection. Azure Sentinel will continue to focus on SIEM.

To learn more about both products, please visit the Azure Sentinel home page or Azure Security Center home page.
Source: Azure

Announcing self-serve experience for Azure Event Hubs Clusters

For businesses today, data is indispensable. Innovative ideas in manufacturing, health care, transportation, and financial industries are often the result of capturing and correlating data from multiple sources. Now more than ever, the ability to reliably ingest and respond to large volumes of data in real time is the key to gaining competitive advantage for consumer and commercial businesses alike. To meet these big data challenges, Azure Event Hubs offers a fully managed and massively scalable distributed streaming platform designed for a plethora of use cases from telemetry processing to fraud detection.

Event Hubs has been immensely popular with Azure’s largest customers, and now even more so with the recent release of Event Hubs for Apache Kafka. With this powerful new capability, customers can stream events from Kafka applications seamlessly into Event Hubs without having to run ZooKeeper or manage Kafka clusters, all while benefitting from a fully managed platform-as-a-service (PaaS) with features like auto-inflate and geo-disaster recovery. As the front door to Azure’s data pipeline, customers can also use the Capture feature to automatically deliver streaming events into Azure Storage or Azure Data Lake, or natively perform real-time analysis on data streams using Azure Stream Analytics.
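In practice, an existing Kafka client usually needs only a configuration change to point at Event Hubs. A sketch of a producer configuration (the namespace name and connection string are placeholders for your own values):

```properties
bootstrap.servers=mynamespace.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="<your Event Hubs namespace connection string>";
```

With these settings in place, the producer and consumer code itself is unchanged from a standard Kafka application.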

For customers with the most demanding streaming needs, Event Hubs clusters in our Dedicated tier provide a single-tenant offering that guarantees the capacity to ingest millions of events per second while boasting a 99.99% SLA. Clusters are used by the Xbox One Halo team and power both the Microsoft Teams and Microsoft Office client application telemetry pipelines.

Azure portal experience for Event Hubs clusters

Today, we are excited to announce that Azure Event Hubs clusters can be easily created through the Azure portal or through Azure Resource Manager as a self-serve experience (preview), and are instantly available with no further setup. Within your cluster, you can subsequently create and manage namespaces and event hubs as usual and ingest events with no throttling. Creating a cluster to contain your event hubs offers the following benefits:

Single-tenant hosting for better performance with guaranteed capacity at full scale, enabling ingress of gigabytes of streaming data at millions of events per second while maintaining fully durable storage and sub-second latency.
Capture feature included at no additional cost, which allows you to effortlessly batch and deliver your events to Azure Storage or Azure Data Lake.
Significant savings on your Event Hubs cloud costs with fixed hourly billing while scaling your infrastructure with Dedicated Event Hubs.
No maintenance since we take care of load balancing, security patching, and OS updates. You can spend less time on infrastructure maintenance and more time building client-side features.
Exclusive access to upcoming features like bring your own key (BYOK).
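Because clusters are also exposed through Azure Resource Manager, provisioning one can be sketched as a template resource. The resource type, API version, and SKU shape below reflect the preview and should be treated as illustrative rather than definitive:

```json
{
  "type": "Microsoft.EventHub/clusters",
  "apiVersion": "2018-01-01-preview",
  "name": "my-dedicated-cluster",
  "location": "westus2",
  "sku": {
    "name": "Dedicated",
    "capacity": 1
  }
}
```

Namespaces and event hubs are then created inside the cluster just as they would be in the multi-tenant tiers.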

In the self-serve experience (preview), you can create single capacity unit (1 CU) clusters in the following strategic regions through the Azure portal:

North Europe
West Europe
Central US
East US
East US 2
West US
West US 2
North Central US
South Central US
Southeast Asia
UK South

Larger clusters of up to 20 CUs or clusters in regions not listed above will also be available upon direct request to the Event Hubs team.

Data is key to staying competitive in this fast-moving world, and Azure Event Hubs can help your organization gain the competitive edge. With so many possibilities, it’s time to get started.

Learn more about Event Hubs clusters in our Dedicated offering.
Get started with an Event Hubs cluster on the Azure portal.
Quickstart: Data streaming with Event Hubs using the Kafka protocol

Source: Azure

A look at Azure's automated machine learning capabilities

The automated machine learning capability in Azure Machine Learning service allows data scientists, analysts, and developers to build machine learning models with high scalability, efficiency, and productivity, all while sustaining model quality. Automated machine learning builds a set of machine learning models automatically, intelligently selecting models for training and then recommending the best one for your scenario and data set. Traditional machine learning model development is resource-intensive, requiring both significant domain knowledge and time to produce and compare dozens of models.

With automated machine learning in Azure Machine Learning service becoming generally available last December, we started the journey to simplify artificial intelligence (AI). This helps data scientists who want to automate part of their machine learning workflow so they can spend more time focusing on other business objectives. It also makes AI available for a wider audience of business users who don’t have advanced data science and coding knowledge.

We are furthering our investment in accelerating productivity with this release, which includes exciting capabilities and features in the areas of model quality, model transparency, ONNX support, a code-free user interface, time series forecasting, and product integrations.

1. Automated machine learning no-code web interface (preview)

Continuing our mission to simplify machine learning, Azure introduced the automated machine learning web user interface in the Azure portal. The web user interface enables business domain experts to train models on their data without writing a single line of code. Users can simply bring their data and, with a few clicks, start training on it. After automated machine learning comes up with the best possible model, customized to the user’s data, they can deploy the model to Azure Machine Learning service as a web service to generate future predictions on new data.

To start exploring the automated machine learning UI, simply go to the Azure portal and navigate to an Azure Machine Learning workspace, where you will see “Automated machine learning” under the “Authoring” section. If you don’t have an Azure Machine Learning workspace yet, you can always learn how to create a workspace. To learn more, refer to the automated machine learning UI blog.

2. Time series forecasting

Building forecasts is an integral part of any business, whether it’s revenue, inventory, sales, or customer demand. Forecasting with automated machine learning is now generally available. These capabilities improve the accuracy and performance of recommended models with time series data, including a predict forecast function, rolling cross-validation splits for time series data, configurable lags, window aggregation, and a holiday featurizer. Together, these deliver high-accuracy forecasting models and support automation for machine learning across many scenarios.
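Rolling cross-validation for time series must respect temporal order: each fold trains on observations before a validation window and validates on the periods immediately after it, so the model never sees the future. A minimal sketch of the idea in plain Python (this illustrates the general technique, not the service's internal implementation):

```python
def rolling_origin_splits(n_samples, n_splits, horizon):
    """Yield (train_indices, validation_indices) pairs that preserve time order.

    Validation windows of length `horizon` are taken from the end of the
    series, oldest fold first; each fold trains on everything before its
    validation window, so no fold is ever evaluated on data that precedes
    its training data.
    """
    for fold in range(n_splits):
        val_end = n_samples - (n_splits - 1 - fold) * horizon
        val_start = val_end - horizon
        yield list(range(val_start)), list(range(val_start, val_end))

# Example: 12 monthly observations, 3 folds, 2-month forecast horizon.
folds = list(rolling_origin_splits(12, 3, 2))
```

Each successive fold extends the training window by one horizon, mimicking how a forecaster retrains as new data arrives.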

To learn more, refer to the how to guide with time series data and samples on GitHub.

3. Model transparency

We understand transparency is very important for you to trust the models recommended by automated machine learning.

Now you can understand all steps in the machine learning pipeline, including automated featurization (if you set preprocess=True). Learn more about all the preprocessing and featurization steps that automated machine learning performs. You can also programmatically understand how your input data got preprocessed and featurized, what kind of scaling and normalization was done, and the exact machine learning algorithm and hyperparameter values for a chosen machine learning pipeline. Follow these steps to learn more.
Model interpretability (feature importance) was enabled as a preview capability back in December. Since then, we have made improvements including significant performance boost.

4. ONNX Models (preview)

In many enterprises, data scientists build models in Python since the popular machine learning frameworks are in Python. Many Azure Machine Learning service users also create models using Python. However, in many deployment environments, line-of-business applications are written in C# or Java, requiring users to “recode” the model. This adds a lot of friction, and many times models never get deployed into production. With ONNX support, users can build ONNX models using automated machine learning and integrate them with C# applications, without recoding.

To find out more information, please visit GitHub notebook.

5. Enabling .NET developers using Visual Studio/VS Code (preview)

Empower your applications with automated machine learning while remaining in the comfort of the .NET ecosystem. The .NET automated machine learning API enables developers to leverage automated machine learning capabilities without needing to learn Python. Seamlessly integrate automated machine learning within your existing .NET project by using the API's NuGet package. Tackle your binary classification, multiclass classification, and regression tasks within Visual Studio and Visual Studio Code.

6. Empowering data analysts in Power BI (preview)

We have enabled data analysts and BI professionals using Power BI to build, deploy, and run inference with machine learning models, all within Power BI. This integration allows Power BI customers to use their data in Power BI dataflows and leverage the power of the automated machine learning capability of Azure Machine Learning service to build models with a no-code experience, and then deploy and use the models from Power BI. Imagine the kind of machine learning powered Power BI applications and reports you can create with this capability.

7. Automated machine learning in SQL Server

If you are looking to build models using your data in SQL Server with your favorite SQL Server Management Studio interface, you can now leverage automated machine learning in Azure Machine Learning service to build, deploy, and use models. This is made possible by simply wrapping Python-based machine learning training and inferencing scripts in SQL stored procedures. This is well suited for use with data residing in SQL Server tables and provides an ideal solution for any version of SQL Server that supports SQL Server Machine Learning Services.
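The wrapping pattern relies on sp_execute_external_script, which SQL Server Machine Learning Services exposes for running Python inside a stored procedure. A heavily simplified sketch (the table, procedure name, and placeholder script body are hypothetical; a real script would train or score with the automated ML model):

```sql
CREATE PROCEDURE dbo.ScoreSales
AS
BEGIN
    EXEC sp_execute_external_script
        @language = N'Python',
        @script = N'
# Placeholder: a real script would load the trained automated ML model
# and score InputDataSet with it before returning the results.
OutputDataSet = InputDataSet
',
        @input_data_1 = N'SELECT ProductId, Quantity FROM dbo.Sales'
        WITH RESULT SETS ((ProductId INT, Quantity INT));
END
```

The input query feeds rows into the Python script as InputDataSet, and whatever the script assigns to OutputDataSet comes back as the procedure's result set.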

8. Automated machine learning in Spark

HDInsight has been integrated with automated machine learning. With this integration, customers who use automated machine learning can now effortlessly process massive amounts of data and get all the benefits of a broad, open source ecosystem with the global scale of Azure to run automated machine learning experiments. HDInsight allows customers to provision clusters with hundreds of nodes. Automated machine learning running on Apache Spark in the HDInsight cluster allows users to use compute capacity across these nodes to run training jobs at scale, as well as to run multiple training jobs in parallel. This allows users to run automated machine learning experiments while sharing the compute with their other big data workloads. To find out more information, please visit GitHub notebooks and documentation.

We support automated machine learning on Azure Databricks clusters with a simple installation of the SDK in the cluster. You can get started by visiting the “Azure Databricks” section in our documentation, “Configure a development environment for Azure Machine Learning.”

Improved accuracy and performance

Since we announced general availability back in December, we have added several new capabilities to generate high quality models in a shorter amount of time.

An intelligent stopping capability that automatically figures out when to stop an experiment based on progress made on the primary metric. If no significant improvement is seen in the primary metric, an experiment is automatically stopped, saving you time and compute.
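The idea behind such stopping logic can be illustrated with a simple plateau check: if the best primary-metric score of the most recent iterations barely improves on what came before, further iterations are unlikely to pay off. A toy sketch (the thresholds and window are made up, not the service's actual heuristics):

```python
def should_stop(scores, patience=5, min_delta=0.001):
    """Return True when the primary metric has plateaued.

    `scores` holds one primary-metric value per completed iteration (higher
    is better). If the best score in the last `patience` iterations improves
    on the best earlier score by less than `min_delta`, progress has stalled
    and the experiment can be stopped early.
    """
    if len(scores) <= patience:
        return False  # not enough history to judge progress yet
    best_before = max(scores[:-patience])
    best_recent = max(scores[-patience:])
    return best_recent - best_before < min_delta

# A run whose accuracy has flattened out would be stopped:
stalled = should_stop([0.70, 0.78, 0.81, 0.815, 0.8151, 0.8152, 0.8152, 0.8152, 0.8152])
```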

With the goal of exploring a greater number of model pipelines in a given amount of time, users can leverage a sub-sampling strategy to train much faster, while minimizing loss.
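One common way to realize such a strategy is successive halving: score every candidate pipeline on a small fraction of the data, keep the top half, and repeat on larger fractions so only promising pipelines earn a full-data fit. A toy sketch of the general technique (not the exact strategy automated machine learning uses):

```python
def successive_halving(pipelines, evaluate, fractions=(0.1, 0.5, 1.0)):
    """Return the winning pipeline after rounds of sub-sampled evaluation.

    `evaluate(pipeline, fraction)` scores a pipeline trained on the given
    fraction of the data (higher is better). Each round halves the field,
    so cheap small-sample fits weed out weak candidates before anyone
    pays for training on the full dataset.
    """
    survivors = list(pipelines)
    for fraction in fractions:
        ranked = sorted(survivors, key=lambda p: evaluate(p, fraction), reverse=True)
        survivors = ranked[: max(1, len(ranked) // 2)]
    return survivors[0]

# Toy example: "pipelines" are just numbers whose score scales with the data fraction.
best = successive_halving([0.2, 0.9, 0.5, 0.7], evaluate=lambda p, f: p * f)
```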

Specify preprocess=True, to intelligently search across different featurization strategies to find the best one for the specified data with the goal of getting to a better model. Learn more about the various preprocessing/featurization steps.

XGBoost has been added to the set of learners automated machine learning explores, as we see XGBoost models performing well.

Improved support for larger datasets, currently supporting datasets up to 10 GB in size.

Learn more

Automated machine learning makes machine learning more accessible for data scientists of all levels of experience. Get started by visiting our documentation and let us know what you think. We are committed to making automated machine learning better for you!

Learn more about the Azure Machine Learning service.

Get started with a free trial of the Azure Machine Learning service.
Source: Azure

Building a better asset and risk management platform with elastic Azure services

Elasticity means services can expand and contract on demand, so Azure customers on a pay-as-you-go plan reap the most benefit from Azure services. Their service is always available, but the cost is kept to a minimum. This feature is so important that one Microsoft partner is using it as a point of differentiation.

Modular and elastic benefits

A key attribute of Azure is the interchangeable nature of its services. Together with elasticity, Azure lets modern enterprises migrate and evolve more easily. For financial service providers, the modular approach lets customers benefit from best-of-breed analytics in these areas:

Risk and performance analytics: Azure Data Lake Storage, Azure Databricks, and Azure Stream Analytics are just a few of the options for calculating risk.
Regulatory compliance automation (regtech): Automating compliance using Azure DevOps or using a service provider such as CloudNeeti simplifies an arduous task.
Investment management technology: Azure Virtual Machines or Azure Functions are just two options for managing investment portfolios.

With these capabilities, asset managers can build superior products that generate higher returns for their clients.

Financial services is a tough market

Competition in the asset management industry has ramped up: active vs. active, passive vs. active, and passive vs. passive, while margins are shrinking. At the same time, costly, outdated, and difficult-to-maintain legacy systems and technology are impacting both costs and operational efficiencies, putting a further drag on performance, while also making it difficult to scale. A new Microsoft partner, Axioma, is helping its clients in the financial services industry to regain and retain a competitive edge.

On-premises means rigid resources

Many investment firms have relied on physical datacenters as a means of maintaining control and security. But such properties and legacy systems are costly to maintain and difficult to scale. Given market volatility, fee compression, and an overall competitive investment landscape, fund managers are seeking flexible solutions to discover, create, and implement superior investment strategies and products. Specifically, the need is for enterprise-wide analytics, data, reporting, and data storage.

Cloud elasticity is a vital attribute

Axioma offers an open and flexible platform where each building block is accessible via APIs. Their platform is built on a cloud-native architecture, but its modularity allows seamless integration with other best-of-breed providers. For example, Axioma Risk is an enterprise-wide multi-asset class (MAC) risk-management platform. With the solution, asset managers can efficiently scale assets under management (AUM) to drive revenue growth and reduce the effects of margin compression.

Build a unified platform with Azure

When using Azure to build a platform, the users of the platform benefit from a common architecture. For example, Axioma helps to migrate solutions to their platform axiomaBlue. The clients then benefit from a common engine that calculates risk and performance analytics. Having one engine on the platform also means using the same underlying market and reference data. Clients, therefore, have a consistent view of risk and return across their enterprise and across front, middle, and back-office functions.

On a specialized platform, users can create flexible, modular, workflow solutions. For financial services, the platform approach means a highly specialized set of components, as shown in this graphic.

Azure services used

Axioma is a primary example of using the elastic and modular attributes of Azure to its fullest extent. They use these Azure services:

Service Fabric 
Virtual machines (Windows and Linux)
VM Scale Sets / Load Balancers
Azure Active Directory
Azure Data Lake
Azure SQL Database
Azure Database for PostgreSQL
Storage Accounts 
Service Bus/Relays

Recommended next steps

Go to the Azure marketplace listing for AxiomaBlue and click Contact me.
Source: Azure

Customize your automatic update settings for Azure Virtual Machine disaster recovery

In today’s cloud-driven world, employees are only allowed access to data that is absolutely necessary for them to effectively perform their job. This limited access is especially important in scenarios where it's difficult to monitor access behaviors, like if you have many employees and/or engage vendors. Access is usually based on job responsibility, authority, and capability. As a result, some job profiles will not have access to certain data or rights to perform specific actions if they do not need them to fulfill their responsibilities. The ability to control access while still enabling infrastructure administrators to perform their job duties is therefore becoming more relevant and frequently requested by customers.

You asked, we listened!

When we released the automatic update of agents used in disaster recovery (DR) of Azure Virtual Machines (VMs), the most frequent feedback we received was related to access control. Customers had DR admins who were given just enough rights to execute operations to enable, fail over, or test DR. While they wanted to enable automatic updates and avoid the hassle of having to monitor for monthly updates and manually upgrade the agents, they didn't want to give the DR admin contributor access to the subscription, which would allow them to create automation accounts. The request we heard from you was to allow customers to provide an existing automation account, approved and created by a person who is entrusted with the right access in the subscription. This automation account could then be used to execute the runbook that checks for new updates and upgrades the existing agent every time there is a new release.

How to choose an existing automation account

Choose the virtual machine you want to enable replication for.
In the Advanced Settings blade, under Extension Settings, choose a previously created Automation account.

This automation account can be used to automatically update agents for all Azure virtual machines within the Recovery Services vault. If you change it for one virtual machine, the same will be applied to all virtual machines.

Please note that this capability is only applicable for disaster recovery of Azure virtual machines, and not for Hyper-V/VMware VMs.

Related documents:

How does automatic update work
How to enable auto-update

In addition to this, we recently announced one of the top customer requests we've received, which provides better control of your workloads:

Enable replication for a newly added disk – You can enable replication for a data disk that's been newly added to an Azure VM that's already configured for disaster recovery.

Azure natively provides high availability and reliability for your mission-critical workloads, and you can choose to improve your protection and meet compliance requirements using the disaster recovery capabilities of Azure Site Recovery. Getting started with Azure Site Recovery is easy; check out pricing information and sign up for a free Microsoft Azure trial. You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers.
Source: Azure

Symantec’s zero-downtime migration to Azure Cosmos DB

How do you migrate live, mission-critical data for a flagship product that must manage billions of requests with low latency and no downtime? The Consumer Business Unit at Symantec faced this exact challenge when deciding to shift from their costly and complex self-managed database infrastructure, to a geographically dispersed and low latency managed database solution on Azure.
Source: Azure

Azure.Source – Volume 85

News and updates

HB-series Azure Virtual Machines achieve cloud supercomputing milestone

Azure Virtual Machine HB-series are the first on the public cloud to scale an MPI-based high performance computing (HPC) job to 10,000 cores. This level of scaling has long been considered the realm of only the world’s most powerful and exclusive supercomputers, but is now available to anyone using Azure. HB-series virtual machines (VMs) are optimized for HPC applications requiring high memory bandwidth. For this class of workload, HB-series VMs are the most performant, scalable, and price-performant ever launched on Azure or elsewhere on the public cloud.

Simplifying event-driven architectures with the latest updates to Event Grid

Event-driven architectures are increasingly replacing and outpacing less dynamic polling-based systems, bringing the benefits of serverless computing to IoT scenarios, data processing tasks, or infrastructure automation jobs. As the natural evolution of microservices, companies all over the world are taking an event-driven approach to create new experiences in existing applications or bring those applications to the cloud, building more powerful and complex scenarios every day. Today, we’re incredibly excited to announce a series of updates to Event Grid that will power higher performance and more advanced event-driven applications in the cloud.

Generally available

Azure NetApp Files is now generally available

We're excited to announce the general availability (GA) of Azure NetApp Files, the industry’s first bare-metal cloud file storage and data management service. Azure NetApp Files is an Azure first-party service for migrating and running the most demanding enterprise file-workloads in the cloud including databases, SAP, and high-performance computing applications with no code changes. This milestone is the result of deep investment by both companies to provide a great experience for our customers through a service that’s unique in the industry.

Azure IoT Hub message enrichment simplifies downstream processing of your data

We just released a new capability that enables enriching messages that are egressed from Azure IoT Hub to other services. Azure IoT Hub provides an out-of-the-box capability to automatically deliver messages to different services and is built to handle billions of messages from your IoT devices. Messages carry important information that enable various workflows throughout the IoT solution. Message enrichment simplifies the post-processing of your data and can reduce the costs of calling device twin APIs for information. This capability allows you to stamp information on your messages, such as details from your device twin, your IoT Hub name, or any static property you want to add.
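For example, an enrichment might stamp the hub name and a device twin tag onto every routed message as application properties. The shape below is purely illustrative of what a downstream consumer would see, not a literal wire format:

```json
{
  "body": { "temperature": 27.1 },
  "applicationProperties": {
    "iotHubName": "my-iot-hub",
    "deviceLocation": "building-43"
  }
}
```

Because the values ride along with each message, consumers no longer need to call back into the device twin APIs to look them up.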

Now in preview

Simplify the management of application configurations with Azure App Configuration

We’re excited to announce the public preview of Azure App Configuration, a new service aimed at simplifying the management of application configuration and feature flighting for developers and IT. App Configuration provides a centralized place in Microsoft Azure for users to store all their application settings and feature flags (also known as feature toggles), control access to them, and deliver the configuration data where it's needed.
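At its core, the service centralizes settings (optionally distinguished by a label such as an environment name) and feature flags behind one endpoint. The in-memory toy below sketches that contract only; it is not the actual App Configuration client API:

```python
class ConfigStore:
    """Toy stand-in for a centralized configuration and feature-flag store."""

    def __init__(self):
        self._settings = {}
        self._flags = {}

    def set_setting(self, key, value, label=None):
        # Labels let the same key carry different values per environment.
        self._settings[(key, label)] = value

    def get_setting(self, key, label=None):
        return self._settings[(key, label)]

    def set_flag(self, name, enabled):
        self._flags[name] = bool(enabled)

    def is_enabled(self, name):
        # Unknown flags default to off, so dormant code paths stay dark.
        return self._flags.get(name, False)

store = ConfigStore()
store.set_setting("RequestTimeoutSeconds", 5, label="production")
store.set_flag("beta-checkout", True)
```

Applications then ask the store at runtime instead of baking values into each deployment.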

Manage your cross cloud spend using Azure Cost Management

It’s common for enterprises to run workloads on more than one cloud provider. However, adopting a multi-cloud strategy comes with complexities like handling different cost models, varying billing cycles, and different cloud designs that can be difficult to navigate across multiple dashboards and views. We’ve heard from many of you that you need a central cost management solution built to help you manage your spend across multiple cloud providers, prevent budget overruns, maintain control, and create accountability with your consumers. Azure Cost Management now offers cross-cloud support. This is available in preview and can play a critical role in helping you efficiently and effectively manage your organization’s multi-cloud needs.

Technical content

Isolate app integrations for stability, scalability, and speed with an integration service environment

Innovation at scale is a common challenge facing large organizations. A key contributor to the challenge is the complexity of coordinating the sheer number of apps and environments. Integration tools, such as Azure Logic Apps, give you the flexibility to scale and innovate as fast as you want, on-premises or in the cloud. This is a key capability you need to have in place when migrating to the cloud, or even if you're cloud native. Too often, integration has been treated as an afterthought. In the modern enterprise, however, application integration has to happen in conjunction with application development and innovation.

Key causes of performance differences between SQL managed instance and SQL Server

Migrating to a Microsoft Azure SQL Database managed instance provides a host of operational and financial benefits you can only get from a fully managed and intelligent cloud database service. Some of these benefits come from features that optimize or improve overall database performance. After migration, many of our customers are eager to compare workload performance with what they experienced with on-premises SQL Server, and sometimes they're surprised by the results. This article will help you understand the underlying factors that can cause performance differences and the steps you can take to make fair comparisons between SQL Server and SQL Database.

Deploying to Azure with Azure Resource Manager templates and Chef Automate

This video shows how to use a combination of Azure and Chef to build a website.

Serverless Video: Less Servers, More Code

Serverless is a word that marketing teams around the world love to associate with cloud-based offerings, but what does it really mean? What’s the difference between fully managed offerings and true “serverless?” Are there really no servers involved? Should you migrate existing application services to serverless? How do you decide what new projects should incorporate serverless? This video explains.

Azure shows

Serverless geo-distributed applications with Azure Cosmos DB | Azure Friday

Matias Quaranta joins Scott Hanselman to share some best practices for creating serverless geo-distributed applications with Azure Cosmos DB. With the native integration between Azure Cosmos DB and Azure Functions, you can create database triggers, input bindings, and output bindings directly from your Azure Cosmos DB account. Using Azure Functions and Azure Cosmos DB, you can create and deploy event-driven serverless apps with low-latency access to rich data for a global user base.

Updating the WinForms Designer for .NET Core 3.0 | On .NET

There are many benefits that .NET Core can bring to desktop applications. With .NET Core 3.0, support is being added for building desktop applications with WinForms and Windows Presentation Foundation (WPF). In this episode, Jeremy is joined by Merrie McGaw and Dustin Campbell, who share some interesting insights on the work that's going into getting the WinForms designer ready for .NET Core 3.

Parkinson’s patient: Before and after Sensoria Smart Sock | Internet of Things Show

Sensoria is an Azure IoT partner whose vision is The Garment is The Computer®. Sensoria's proprietary sensor-infused smart garments, Sensoria® Core microelectronics, and cloud system enable smart garments to convert data into actionable information for users in real time. Davide Vigano joins the IoT Show to share the vision, the product, and how Sensoria partners with Azure IoT.

Version tracking (Xamarin.Essentials API of the week) | The Xamarin Show

Xamarin.Essentials provides developers with cross-platform APIs for their mobile applications. On this week's Xamarin.Essentials API of the week, we take a look at the Version Tracking API.

Visual Studio Productivity Tips | Visual Studio Toolbox

In this episode, Robert is joined by Kendra Havens. Every version of Visual Studio introduces new productivity features. If you want to see some of the ones introduced in Visual Studio 2019, check out Kendra's video “Write beautiful code, faster.” But what about the ones that have been in Visual Studio for a while that you may have missed? To see some of those, watch this video.

A Cloud Guru: Explaining Azure’s new certifications | A Cloud Guru

Lars sits down with Tiago Costa, Cloud Architect and Advisor, as he breaks down Microsoft’s newly launched role-based certifications, from the MVP Global Summit. We get some insight into the "why” behind the certification change, and some bonus exam tips from this Azure MVP and Microsoft Certified Trainer.

The Azure Podcast

Episode 281 – Disaster Recovery | The Azure Podcast

Kendall and Cynthia talk with Sujay Talasila and Won Huh on how to think about disaster recovery, differences that need to be considered between disaster recovery and backups, and recommended practices that users should consider.

Partners and industries

Visual data ops for Apache Kafka on Azure HDInsight, powered by Lenses

Apache Kafka is one of the most popular open source streaming platforms today. However, deploying and running Kafka remains a challenge for most. Azure HDInsight addresses this challenge by providing a range of improvements. This blog describes them, and also shows how you can now successfully manage your streaming data operations, from visibility to monitoring, with Lenses, an overlay platform now generally available as part of the Azure HDInsight application ecosystem, right from within the Azure portal.
Source: Azure

Announcing service monitor alliances for Azure Deployment Manager

Azure Deployment Manager is a new set of features for Azure Resource Manager that greatly expands your deployment capabilities. If you have a complex service that needs to be deployed to several regions, if you’d like greater control over when your resources are deployed in relation to one another, or if you’d like to limit your customers’ exposure to bad updates by catching them while in progress, then Deployment Manager is for you. Deployment Manager allows you to perform staged rollouts of resources, meaning they are deployed region by region in an ordered fashion.

During Microsoft Build 2019, we announced that Deployment Manager now supports integrated health checks. This means that as your rollout proceeds, Deployment Manager will integrate with your existing service health monitor, and if during deployment unacceptable health signals are reported from your service, the deployment will automatically stop and allow you to troubleshoot.
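The rollout pattern described above can be sketched in a few lines: deploy region by region, and halt as soon as the health monitor reports an unhealthy signal, so later regions are never exposed to the bad update. The function below is a conceptual stand-in; `deploy` and `check_health` are hypothetical callbacks representing your deployment step and your health-monitoring provider.

```python
# Sketch of a health-integrated staged rollout: regions are deployed in
# order, and the rollout stops at the first unhealthy signal so remaining
# regions never receive the bad update.

def staged_rollout(regions, deploy, check_health):
    """Deploy to each region in order; halt on the first unhealthy signal."""
    completed = []
    for region in regions:
        deploy(region)
        if check_health(region) != "healthy":
            # Halt here: regions after this one are left untouched,
            # limiting the blast radius of the bad update.
            return {"status": "halted", "failed_region": region,
                    "completed": completed}
        completed.append(region)
    return {"status": "succeeded", "completed": completed}

health = {"westus": "healthy", "eastus": "unhealthy", "northeurope": "healthy"}
result = staged_rollout(
    ["westus", "eastus", "northeurope"],
    deploy=lambda r: print(f"deploying to {r}"),
    check_health=lambda r: health[r],
)
print(result)  # halted at eastus; northeurope was never deployed
```

In Deployment Manager itself, the health signal comes from your integrated service health monitor rather than a callback, but the control flow is the same: bad signals stop the rollout so you can troubleshoot before more regions are touched.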

In order to make health integration as easy as possible, we’ve been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. If you’re not already using a health monitor, these are great solutions to start with:

Datadog, the leading monitoring and analytics platform for modern cloud environments. See how Datadog integrates with Azure Deployment Manager.
Site24x7, the all-in-one private and public cloud services monitoring solution. See how Site24x7 integrates with Azure Deployment Manager.
Wavefront, the monitoring and analytics platform for multi-cloud application environments. See how Wavefront integrates with Azure Deployment Manager.

These service monitors provide a simple copy/paste solution to integrate with Azure Deployment Manager’s health-integrated rollout feature, allowing you to easily prevent bad updates from having far-reaching impact across your user base. Stay tuned for Azure Monitor integration, which is coming soon.

Additionally, Azure Deployment Manager no longer requires sign-up for use, and is now completely open to the public!

To get started, check out the tutorial “Use Azure Deployment Manager with Resource Manager templates (Public preview)” or the documentation “Enable safe deployment practices with Azure Deployment Manager (Public preview)”. If you want to try out the health integration feature, check out the tutorial “Use health check in Azure Deployment Manager (Public preview)” for an end-to-end walkthrough.

We’re excited to have you give Azure Deployment Manager a try, and, as always, we are listening to your feedback.