Class schedules on Azure Lab Services

Classroom labs in Azure Lab Services make it easy to set up labs by handling the creation and management of virtual machines and enabling the infrastructure to scale. Through our continuous enhancements to Azure Lab Services, we are proud to share that the latest deployment now includes support for class schedules.

Schedule management is one of the key features requested by our customers. This feature helps teachers easily create, edit, and delete schedules for their classes. A teacher can set up a recurring or one-time schedule and provide start and end dates and times for the class in the time zone of their choice. Schedules can be viewed and managed through a simple, easy-to-use calendar view.

Students' virtual machines are turned on and ready to use when a class schedule starts and are turned off at the end of the schedule. This feature helps limit the usage of virtual machines to class times only, thereby helping IT admins and teachers manage costs efficiently.

Schedule hours are not counted against the quota allotted to a student. Quota is the time limit outside of schedule hours during which a student can use the virtual machine.
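As a rough illustration of this accounting (a hypothetical sketch with made-up function names, not the service's actual implementation), only VM usage outside scheduled hours consumes quota:

```python
# Hypothetical sketch of Lab Services quota accounting: only VM usage
# outside scheduled class hours counts against a student's quota.

def quota_hours_used(usage_sessions, schedule_sessions):
    """Each session is a (start_hour, end_hour) pair on a common timeline.
    Returns the number of usage hours that fall outside every schedule."""
    used = 0
    for u_start, u_end in usage_sessions:
        for hour in range(u_start, u_end):
            in_schedule = any(s_start <= hour < s_end
                              for s_start, s_end in schedule_sessions)
            if not in_schedule:
                used += 1
    return used

# A student uses the VM 9:00-13:00; class is scheduled 10:00-12:00.
# Only the 2 hours outside the schedule count against quota.
print(quota_hours_used([(9, 13)], [(10, 12)]))  # 2
```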

With schedules, we are also introducing no quota hours. When no quota hours are set for a lab, students can only use their virtual machines during scheduled hours or if the teacher turns on virtual machines for the students to use.

Students can clearly see when a lab schedule session is in progress in their virtual machines view.

You can learn more about how to use schedules in our documentation, “Create and manage schedules for classroom labs in Azure Lab Services.” Please give this feature a try and provide feedback at the Azure Lab Services UserVoice forum. If you have a question, please post it on Stack Overflow.
Source: Azure

More reliable event-driven applications in Azure with an updated Event Grid

We have been incredibly excited to be a part of the rise of event-driven programming as a core building block for cloud application architecture. We are proud to announce the general availability of the following set of features, previously in preview, which enable you to build more sophisticated, performant, and stable event-driven applications in Azure:

Dead lettering
Retry policies
Storage Queues as a destination
Hybrid Connections as a destination
Manual Validation Handshake

To take advantage of the GA status of these features, make sure you are using our 2019-01-01 API and SDKs. If you are using the Azure portal or Cloud Shell, you're already good to go. If you are using the CLI or PowerShell, make sure you have version 2.0.56 or later of the CLI and version 1.1.0 of the PowerShell module.

Dead lettering

Dead lettering gives you an at-least-once guarantee that you will receive your events in mission-critical systems. With a dead-letter destination set, you will never lose a message, even if your event handler is down, your authorization fails, there is a bug in your endpoint, or it is overwhelmed with volume.

Dead lettering allows you to connect each event subscription to a storage account, so that if your primary event pipeline fails, Azure Event Grid can deliver those events to a storage account for consumption at any time.
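The pattern is easy to sketch. The following is an illustrative simulation of the dead-letter idea, with made-up function names, not the Event Grid implementation: a failed delivery lands in a fallback store instead of being lost.

```python
# Conceptual sketch (not the Event Grid implementation): events that cannot
# be delivered to the primary handler are written to a dead-letter store
# instead of being lost, so they can be consumed later.

def deliver(event, handler, dead_letter_store):
    try:
        handler(event)
        return "delivered"
    except Exception:
        dead_letter_store.append(event)  # keep the event for later replay
        return "dead-lettered"

dead_letters = []

def failing_handler(event):
    raise RuntimeError("endpoint is down")

print(deliver({"id": 1}, failing_handler, dead_letters))  # dead-lettered
print(dead_letters)  # [{'id': 1}]
```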

Retry policies

Retry policies make your primary eventing pipeline more robust in the event of ephemeral failures. While dead lettering provides you with a backstop in case there are long lasting failures in your system, it is more common to see only temporary outages in distributed systems.

Configuring retry policies allows you to set how many times, or for how long you would like an event to be retried before it is dead lettered or dropped. Sometimes, you may want to keep retrying an event as long as possible regardless of how late it is. Other times, once an event is stale, it has no value, so you want it dropped immediately. Retry policies let you choose the delivery schedule that works best for you.
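As a sketch of the decision a retry policy expresses: an event is retried until either its attempt limit or its time-to-live is exhausted, then dead-lettered or dropped. The parameter names below mirror Event Grid's retry policy settings (maxDeliveryAttempts, eventTimeToLiveInMinutes), but the logic is an illustrative simplification, not the service's implementation.

```python
# Illustrative retry decision: retry until the attempt budget or the
# event's time-to-live runs out, then dead-letter or drop the event.

def should_retry(attempts_so_far, event_age_minutes,
                 max_delivery_attempts=30, event_ttl_minutes=1440):
    if attempts_so_far >= max_delivery_attempts:
        return False  # attempt budget exhausted
    if event_age_minutes >= event_ttl_minutes:
        return False  # event is stale; drop or dead-letter it
    return True

print(should_retry(5, 10))                        # True
print(should_retry(30, 10))                       # False: too many attempts
print(should_retry(5, 10, event_ttl_minutes=5))   # False: event is stale
```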

Storage Queues as a destination

Event Grid can directly push your events to an Azure Storage Queue. Queues can be a powerful event handler when you need to buffer your ingress of events to your event handler to allow it to properly scale up. Similarly, if your event handler can’t guarantee uptime, putting a storage queue in between allows you to hold those events and process them when your event handler is ready.

Storage queues also have virtual network (VNet) integration which allows for VNet injection of Event Grid events. If you need to connect an event source to an event handler that is within a VNet, you can tell Event Grid to publish to a storage queue and then consume events in your VNet via your queue.
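The buffering idea above can be sketched in a few lines (an illustrative simulation, not the actual Storage Queue service): events arrive in a burst, the queue absorbs them, and the handler drains at its own pace.

```python
# Sketch of why a queue decouples ingress from the handler: bursts are
# absorbed by the queue, and the handler pulls work when it is ready.
from collections import deque

queue = deque()

def publish(event):              # stands in for Event Grid pushing to the queue
    queue.append(event)

def drain(handler, batch_size):  # the handler pulls when it is ready
    processed = []
    for _ in range(min(batch_size, len(queue))):
        processed.append(handler(queue.popleft()))
    return processed

for i in range(5):               # burst of 5 events
    publish({"id": i})

print(drain(lambda e: e["id"], batch_size=2))  # [0, 1]
print(len(queue))                              # 3 events still buffered
```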

Hybrid connections as a destination

If you want to build and debug locally while connected to cloud resources for an event, have an on-premises service that can’t expose an HTTP endpoint, or need to work from behind a locked down firewall, Hybrid connections allows you to connect those resources to Event Grid.

Hybrid connections as an event handler gives you an HTTP endpoint to connect Event Grid to. It also gives you the option to make an outbound WebSocket connection from your local resource to the same hybrid connection instance. The hybrid connection then relays your incoming events from Event Grid to your on-premises resource.

Manual validation handshake

Not all event handlers can customize their HTTP response in order to provide endpoint proof of ownership. The manual validation handshake makes it as easy as copy and paste to prove you are an authorized owner of an endpoint.

When you register an Event Grid subscription, a validation event will be sent to the endpoint with a validation code. You are still able to respond to the validation event by echoing back the validation code; however, if that is not convenient, you can now copy and paste the validation URL included in the event into any browser to validate the endpoint. Performing a GET on that URL validates proof of ownership.
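A minimal sketch of the programmatic (echo) handshake described above, using the published SubscriptionValidationEvent shape; the validation code and URL in the sample payload are placeholders:

```python
# Sketch of the echo handshake: the endpoint receives a
# SubscriptionValidationEvent and replies with the validation code.
# The manual alternative is simply an HTTP GET on data["validationUrl"].

def handle_validation(events):
    for event in events:
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return {"validationResponse": event["data"]["validationCode"]}
    return None  # no validation event present

incoming = [{
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "data": {"validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6",
             "validationUrl": "https://example.invalid/validate"},
}]
print(handle_validation(incoming))
# {'validationResponse': '512d38b6-c7b8-40c8-89fe-f46f9e9622b6'}
```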

We hope you react well to this news.

The Azure Event Grid team
Source: Azure

Azure.Source – Volume 70

Now in preview

Anomaly detection using built-in machine learning models in Azure Stream Analytics

Many customers use Azure Stream Analytics to continuously monitor massive amounts of fast-moving streams of data to detect issues that do not conform to expected patterns and prevent catastrophic losses. This, in essence, is anomaly detection. Built-in machine learning models for anomaly detection in Azure Stream Analytics significantly reduce the complexity and costs associated with building and training machine learning models. This feature is now available for public preview worldwide. See how to use simple SQL language to author and deploy powerful analytics processing logic that can scale up and scale out to deliver insights with millisecond latencies.

Anomaly detection using machine learning in Azure Stream Analytics
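Stream Analytics exposes anomaly detection through built-in SQL functions; as a language-agnostic illustration of the underlying idea (a simplified sketch, not the models the service actually uses), a sliding-window score can flag values that deviate sharply from the recent pattern:

```python
# Illustrative only: flag a value as anomalous when it lies far from the
# mean of a trailing window of observations.
from statistics import mean, pstdev

def is_anomaly(window, value, threshold=3.0):
    """Flag `value` if it is more than `threshold` standard deviations
    from the mean of the trailing `window` of observations."""
    mu = mean(window)
    sigma = pstdev(window) or 1e-9  # avoid division by zero on flat windows
    return abs(value - mu) / sigma > threshold

history = [10, 11, 9, 10, 12, 10, 11, 10]
print(is_anomaly(history, 11))   # False: within the normal range
print(is_anomaly(history, 60))   # True: clear spike
```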

Update 19.02 for Azure Sphere public preview now available

Device builders can now bring the security of Azure Sphere to products even faster than ever before. The Azure Sphere 19.02 release is now available in preview and focuses on broader enablement of device capabilities, reducing time to market with new reference solutions, and continuing to prioritize features based on feedback from organizations building with Azure Sphere. To build applications that leverage this new functionality, you will need to ensure that you have installed the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere OS.

Also in preview

Azure Sphere 19.02 Release
Public preview: Power BI Embedded support for application authentication with service principal
Public preview: Azure Service Bus for Node.js
SQL Database as a source of reference data input for Stream Analytics

Now generally available

Actuating mobility in the enterprise with new Azure Maps services and SDKs

The mobility space is at the forefront of the most complex challenges faced by cities and urban areas today. Azure Maps has introduced new SDKs and cloud-based services to equip Azure customers with the tools necessary to make smarter, faster, and more informed decisions. These services enable enterprises, partners, and cities to build solutions that help visualize, analyze, and optimize mobility challenges, all while getting the benefits of a rich set of maps and mapping services with the fastest map data refresh rate available.

Also generally available

General availability: Azure Kubernetes Service in Australia Southeast

News and updates

Get started quickly using templates in Azure Data Factory

Cloud data integration helps organizations integrate data of various forms and unify complex processes in a hybrid data environment. Different organizations often have similar data integration needs and repeat business processes. Templates in Azure Data Factory help you get started quickly with building data factory pipelines, improving your productivity and reducing development time for repeat processes. The templates are available in a Template gallery that contains use-case-based templates, data movement templates, SSIS templates, and transformation templates that you can use to get hands-on with building your data factory pipelines.

Quickly build data integration pipelines using templates in Azure Data Factory

Azure IoT Hub Java SDK officially supports Android Things platform

Connectivity is often the first challenge in the Internet of Things (IoT) world. That’s why we released Azure IoT SDKs to enable building IoT applications that interact with IoT Hub and the IoT Hub Device Provisioning Service. These SDKs cover the most popular languages in IoT development, including C, .NET, Java, Python, and Node.js, as well as popular platforms like Windows, Linux, macOS, and mbed, all with support for iOS and Android to enable mobile IoT scenarios. We are happy to share that the Azure IoT Hub Java SDK now officially supports the Android Things platform, so developers can leverage the operating system on the device side while using Azure IoT Hub as the central message hub that scales to millions of simultaneously connected devices.

Azure IoT Edge runtime available for Ubuntu virtual machines

Azure IoT Edge is a fully managed service that allows you to deploy Azure and third-party services to run directly on IoT devices, whether they are cloud-connected or offline. It offers functionality ranging from connectivity to analytics to storage, all while allowing you to deploy modules entirely from the Azure portal without writing any code. Azure IoT Edge deployments are built to scale so that you can deploy globally to any number of devices or simulate the workload with virtual devices. Now generally available: the open-source Azure IoT Edge runtime, preinstalled on Ubuntu virtual machines.

Azure IoT Edge VM on Azure Marketplace

Monitor at scale in Azure Monitor with multi-resource metric alerts

Customers rely on Azure to run large-scale applications and services critical to their business. To run services at scale, you need to set up alerts to proactively detect, notify, and remediate issues before they affect your customers. We’ve just released multi-resource support for metric alerts in Azure Monitor to help you set up critical alerts at scale. Learn about metric alerts in Azure Monitor, which work on a host of multi-dimensional platform and custom metrics and can notify you when a metric breaches a defined threshold. Note that this functionality is currently only supported for virtual machines, with support for other resource types coming soon.

How Azure Security Center helps you protect your environment from new vulnerabilities

Recently, the disclosure of a vulnerability (CVE-2019-5736) was announced in the open-source software (OSS) container runtime runc, which allows an attacker to gain root-level code execution on a host. Azure Security Center can help you detect vulnerable resources in your environment, whether in Microsoft Azure, on-premises, or in other clouds. See how Azure Security Center can help you detect that an exploitation has occurred and alert you.

Announcing launch of Azure Pipelines app for Slack

Use the Azure Pipelines app for Slack to easily monitor the events for your pipelines. Set up and manage subscriptions for completed builds, releases, pending approvals (and more) then get notifications for these events in your Slack channels.

The February release of Azure Data Studio is now available

Azure Data Studio is a new cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms on Windows, macOS, and Linux. The February release of Azure Data Studio (formerly known as SQL Operations Studio) is now generally available. New features include a new Admin Pack for SQL Server, Profiler filtering, a Save as XML option, Data-Tier Application Wizard improvements, updates to the SQL Server 2019 preview extension, results streaming turned on by default, and important bug fixes.

Additional news and updates

Azure API Management update February 14
CVE-2019-5736 and runC vulnerability in AKS
CVE-2019-5736 fix for Azure IoT Edge
Azure Automation supports the Azure PowerShell Az module
New language functions in Stream Analytics
Feature update: Serial console for Azure Virtual Machines
Azure DevOps Roadmap update for 2019 Q1

Technical content

Controlling costs in Azure Data Explorer using down-sampling and aggregation

Azure Data Explorer (ADX) is an outstanding service for continuous ingestion and storage of high velocity telemetry data from cloud services and IoT devices. In this helpful post, we see how ADX users can take advantage of stored functions, the Microsoft Flow Azure Kusto connector, and how to create and update tables with filtered, down-sampled, and aggregated data for controlling storage costs.

Azure Stack IaaS – part one

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform and has created a lot of excitement around new hybrid application patterns, consistent Azure APIs to simplify DevOps practices and processes, the extensive Azure ecosystem available through the Marketplace, and the option to run Azure PaaS Services locally, such as App Services and IoT Hub. Underlying all of these are some exciting IaaS capabilities that this new Azure Stack IaaS blog series outlines.

Benefits of using Azure API Management with microservices

The IT industry is experiencing a shift from monolithic applications to microservices-based architectures. The benefits of this new approach include independent development and the freedom to choose technology, independent deployment and release cycle, individual microservices that can scale independently, and reducing the overall cost while increasing reliability. Azure API Management is now available in a new pricing tier with billing per execution especially suited for microservice-based architectures and event-driven systems. Explore how to design a simplified online store system, why and how to manage public facing APIs in microservice-based architectures, and how to get started with Azure API Management and microservices.

Maximize throughput with repartitioning in Azure Stream Analytics

Customers love Azure Stream Analytics for the ease of analyzing streams of data in motion, with the ability to set up a running pipeline within five minutes. Optimizing throughput has always been a challenge when trying to achieve high performance in a scenario that can't be fully parallelized. Azure Stream Analytics SQL now includes a keyword, INTO, that lets you specify the number of partitions for a stream when reshuffling the data with a PARTITION BY statement. This new capability unlocks performance, helps maximize throughput in such scenarios, and gives you better control over the data streams after a shuffle.
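Conceptually, repartitioning redistributes events into a chosen number of partitions by key so downstream steps can run in parallel; in Stream Analytics you express this in SQL, but the redistribution itself can be sketched as follows (an illustrative simulation with made-up names, not the service's implementation):

```python
# Sketch of hash repartitioning: every event with the same key lands in the
# same partition, and the partition count is chosen by the caller.
from zlib import crc32

def repartition(events, key, partition_count):
    """Group events into `partition_count` buckets by hashing a key field."""
    partitions = [[] for _ in range(partition_count)]
    for event in events:
        idx = crc32(str(event[key]).encode()) % partition_count
        partitions[idx].append(event)
    return partitions

events = [{"device": f"d{i}", "reading": i} for i in range(8)]
parts = repartition(events, "device", 4)
print(sum(len(p) for p in parts))  # 8: every event lands in exactly one partition
```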

Moving your Azure Virtual Machines has never been easier!

Because of geographical proximity, a merger or acquisition, data sovereignty, or SLA requirements, we are often approached by customers who want to move their Azure virtual machines from their current region to another target region. To meet this need, Azure is continuously expanding, adding new Azure regions and introducing new capabilities. Walk through the steps you need to move your virtual machine as is, or to increase availability, across regions.

Protect Azure Virtual Machines using storage spaces direct with Azure Site Recovery

We all need to protect our business-critical applications. Storage Spaces Direct (S2D) lets you host a guest cluster on Microsoft Azure, which is especially useful in scenarios where virtual machines (VMs) host critical applications like SQL, Scale-Out File Server, or SAP ASCS. Learn how Azure Site Recovery support for Storage Spaces Direct allows you to take your highly available application and make it more resilient by providing protection against region-level failure. Disaster recovery between Azure regions is available in all Azure regions where ASR is available. This feature is only available for Azure Virtual Machines’ disaster recovery.

Under the hood: Performance, scale, security for cloud analytics with ADLS Gen2

Since we announced the general availability of Azure Data Lake Storage (ADLS) Gen2, Azure has become the only cloud storage service that is purpose-built for big data analytics. It is designed to integrate with a broad range of analytics frameworks enabling a true enterprise data lake, maximizes performance via true filesystem semantics, scales to meet the needs of the most demanding analytics workloads, is priced at cloud object storage rates, and is flexible enough to support a broad range of workloads so that you are not required to create silos for your data. Take a closer look at the technical foundation of ADLS that will power the end-to-end analytics scenarios our customers demand.

Build a Node.js App with the npm Module for Azure Cosmos DB

Ever wondered what it's like to try the JavaScript SDK to manage Azure Cosmos DB SQL API data? Follow along with John Papa as he walks viewers through the Microsoft quickstart guide, and you'll be able to use the SDK in under six minutes!

Keep Calm, and Keep Coding with Cosmos and Node.js

John Papa digs into the Azure Cosmos DB SDK for Node.js to discover how good it feels when an SDK is fast to install, fast to learn, and fast to execute.

Real Talk JavaScript podcast – Episode 17: Azure Functions & Serverless with Jeff Hollan

Jeff Hollan, Senior Program Manager for Microsoft Azure Functions, joins John to talk about serverless and talks about his serverless doorbell project.


5 Azure Offerings I ❤️For Xamarin Development

It’s no secret that Matt is both an Azure and Xamarin fan. In this post, he rounds up five Azure offerings that are great for Xamarin development—Azure AD (B2C), Azure Key Vault, Azure Functions, Azure Custom Vision API, and Azure App Center—all of which can be accessed with a free Azure account!

Docker from the beginning

In this first part, Chris looks at the basics of containers and gives some hands-on advice and sample code to get the reader started. In part two, Chris looks at Docker volumes and how they can make for a great developer experience. Future parts will continue the Docker story, including a look at Kubernetes and our own AKS service.

Migrating Azure Functions v1 (.NET) to v2 (.NET Core/Standard)

In this post, Jeremy shares the lessons he learned upgrading his serverless link shortener app to the new Azure Functions platform.

Prototyping your first cloud-connected IoT project using an MXChip board and Azure IoT hub

Learn how to quickly build a prototype IoT project using Azure IoT Hub. Jim's blog post gives full instructions on how to get started with the MXChip board using Visual Studio Code, what Azure IoT Hub is, how to send messages, and how to use device twins powered by Azure Functions to sync data.

Kubernetes Basics

In this miniseries, Microsoft Distinguished Engineer and Kubernetes co-creator, Brendan Burns provides foundational knowledge to help you understand Kubernetes and how it works.

AZ 203 Developing Solutions for Microsoft Azure Study Guide

It's essential to be knowledgeable of how the cloud can bring the best value to the developer. App Dev Manager Isaac Levin shares some tips about how to best prepare for the Microsoft Certified Azure Developer Associate certification (AZ-203) exam.

Azure shows

Episode 266 – Azure Kubernetes Service | The Azure Podcast

The dynamic Sean McKenna, Lead PM for AKS, gives us all the details about the service and why and when you should use it for your cloud compute needs. Russell and Kendall get together with him @ Microsoft Ready for a great show.


HashiCorp Vault on Azure | Azure Friday

Working with Microsoft, HashiCorp launched Vault with a number of features to make secret management easier to automate in Azure cloud. Yoko Hyakuna from HashiCorp joins Donovan Brown to show how Azure Key Vault can auto-unseal the HashiCorp Vault server, and then how HashiCorp Vault can dynamically generate Azure credentials for apps using its Azure secrets engine feature.

Using HashiCorp Vault with Azure Kubernetes Service (AKS) | Azure Friday

As the adoption of Kubernetes grows, secret management tools must integrate well with Kubernetes so that the sensitive data can be protected in the containerized world. On this episode, Yoko Hakuna demonstrates the HashiCorp Vault's Kubernetes auth method for identifying the validity of containers requesting access to the secrets.

Azure Instance Metadata Service updates for attested data | Azure Friday

Azure Instance Metadata Service is used to provide information about a running virtual machine that can be used to configure and manage the machine. With the latest updates, Azure Marketplace vendors can now validate that their image is running in Azure.

Ethereum Name Service | Block Talk

This session provides an overview of the Ethereum Name Service and the core features that are included.  We then show a demonstration of how this service can be useful when building decentralized applications.

Introducing Spatial operations for Azure Maps | Internet of Things Show

The ability to analyze data is a core facet of the Internet of Things. Azure Maps Spatial Operations will take location information and analyze it on the fly to help inform our customers of ongoing events happening in time and space. The Spatial Operations we are launching consist of Geofencing, Buffer, Closest Point, Great Circle Distance and Point in Polygon. We will demonstrate geofencing capabilities, how to associate fences with temporal constraints so that fences are evaluated only when relevant, and how to react to Geofence events with Event Grid. Finally, we will talk about how other spatial operations can support geofencing and other scenarios.
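As a taste of one of the listed operations, Great Circle Distance can be computed locally with the standard haversine formula (an illustrative sketch; the Azure Maps service exposes this as an API, and the coordinates below are just sample points):

```python
# Great-circle distance via the haversine formula, assuming a spherical
# Earth with mean radius 6371 km.
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * asin(sqrt(a))

# Seattle to Redmond is roughly 15-20 km as the crow flies.
d = great_circle_km(47.6062, -122.3321, 47.6740, -122.1215)
print(round(d, 1))
```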

Building Applications from Scratch with Azure and Cognitive Services | On .NET

In this episode, Christos Matskas joins us to share the story of an interesting application he built using the Azure SDKs for .NET and Cognitive Services. We not only get an overview of creating custom vision models, but also a demo of the docker containers for cognitive services. Christos also shares how he was able to leverage .NET standard libraries to maximize code portability and re-use.

Open Source Security Best Practices for Developers, Contributors, and Maintainers | The Open Source Show

Armon Dadgar, HashiCorp CTO and co-founder, and Aaron Schlesinger talk about how and why HashiCorp Vault is a security and open source product: two things traditionally considered at odds. You'll learn how to avoid secret sprawl and protect your apps' data, ways for contributors and maintainers to enhance the security of any project, and why you should trust no one (including yourself).

Overview of Open Source DevOps in Azure Government | Azure Government

In this episode of the Azure Government video series, Steve Michelotti talks with Harshal Dharia, Cloud Solution Architect at Microsoft, about open source DevOps in Azure Government. Having a reliable and secure DevOps pipeline is one of the most important factors to a successful development project. However, different organizations and agencies often have different tools for DevOps. Harshal starts out by discussing various DevOps tools available, and specifically focuses this demo-heavy talk on open source DevOps tools in Azure Government. Harshal then shows how Terraform and Jenkins can be used in a robust CI/CD pipeline with other open source tools. For DevOps, your organization or agency can bring all your favorite open source tools and use them, but from within the highly scalable, reliable, and secure environment of Azure Government. If you’re into open source DevOps, this is the video for you!

How to create an Azure Functions project with Visual Studio Code | Azure Tips and Tricks

In this edition of Azure Tips and Tricks, learn how to create an Azure Functions project with Visual Studio Code. To start working with Azure Functions, make sure the "Azure Functions" extension is installed inside of Visual Studio Code.

How to manage virtual machines on the go via the Azure mobile app | Azure Portal Series

Managing your Azure virtual machines while you’re on the go is easy using the Azure mobile app. In this video of the Azure Portal "How To" series, learn how to use the Azure mobile app to monitor, manage, and stay connected to your Azure virtual machines.

How to monitor your Kubernetes clusters | Kubernetes Best Practices Series

Get best practices on how to monitor your Kubernetes clusters from field experts in this episode of the Kubernetes Best Practices Series. In this intermediate level deep dive, you will learn about monitoring and logging in Kubernetes from Dennis Zielke, Technology Solutions Professional in the Global Black Belts Cloud Native Applications team at Microsoft.

Simon Timms on Azure Functions and Processes – Episode 23 | The Azure DevOps Podcast

Simon Timms is a long-time freelance Software Engineer, a multi-time Microsoft MVP, and co-host of ASP.NET Monsters on Channel 9; he also runs the Function Junction YouTube channel. He considers himself a generalist with a history of working in a diverse range of industries. He’s personally interested in A.I., DevOps, and microservices, and skilled in Software as a Service (SaaS), .NET Framework, Continuous Integration, C#, and JavaScript. He’s also written two books with Packt Publishing: Social Data Visualization with HTML5 and JavaScript and Mastering JavaScript Design Patterns. In this week’s episode, Simon and Jeffrey discuss Azure Functions and running processes in Azure. Simon explains how the internal model of Azure Functions works, the difference between Azure Functions and Durable Functions, the benefits of and barriers to Azure Functions, and much, much more.


Events

Learn how to build with Azure IoT: Upcoming IoT Deep Dive events

Microsoft IoT Show, the place to go to hear about the latest announcements, tech talks, and technical demos, is starting a new interactive, live-streaming event and technical video series called IoT Deep Dive.

Join us in Seattle from May 6-8 for Microsoft Build

Join us in Seattle for Microsoft’s premier event for developers. Come and experience the latest developer tools and technologies. Imagine new ways to create software by getting industry insights into the future of software development. Connect with your community to understand new development trends and innovative ways to code. Registration goes live on February 27.

Customers, partners, and industries

PyTorch on Azure: Deep learning in the oil and gas industry

Drilling for oil and gas is one of the most dangerous jobs on Earth. Workers are exposed to the risk of events ranging from small equipment malfunctions to entire offshore rigs catching on fire.

How to avoid overstocks and understocks with better demand forecasting

Promotional planning and demand forecasting are incredibly complex processes. Something seemingly straightforward, like planning the weekly flyer, requires answers to thousands of questions involving a multitude of teams deciding which products to promote and where to position the inventory to maximize sell-through. Explore how Rubikloud’s Price & Promotion Manager enables AI-powered optimization for enterprise retail and allows merchants and supply chain professionals to take a holistic approach to integrated forecasting and replenishment.

Ignite: The Tour was in Australia last week, which is home to A Cloud Guru. Check out this three-part report from Lars Klint.

Azure This Week – Ignite Special – 12 February 2019 | A Cloud Guru – Azure This Week

Lars reports from an exclusive invite-only tour of the Microsoft Quantum research facility at Sydney University.

Azure This Week – Ignite Special – 13 February 2019 | A Cloud Guru – Azure This Week

On this special edition episode of Azure This Week, Lars talks to Anthony Chu, Christina Warren, Jason Hand and looks at the new support for Azure SQL Database in Azure Stream Analytics.

Azure This Week – Ignite Special – 14 February 2019 | A Cloud Guru – Azure This Week

On day 2 of Microsoft Ignite | The Tour in Sydney, Lars talks Azure security with Damian Brady, Tanya Janca, Orin Thomas, and Troy Hunt.

Source: Azure

How Azure Security Center helps you protect your environment from new vulnerabilities

Recently, the disclosure of a vulnerability (CVE-2019-5736) was announced in the open-source software (OSS) container runtime runc. This vulnerability can allow an attacker to gain root-level code execution on a host. runc is the underlying container runtime underneath many popular containers.

Azure Security Center can help you detect vulnerable resources in your environment within Microsoft Azure, on-premises, or other clouds. Azure Security Center can also detect that an exploitation has occurred and alert you.

Azure Security Center offers several methods that can be applied to mitigate or detect malicious behavior:

Strengthen security posture – Azure Security Center periodically analyzes the security state of your resources. When it identifies potential security vulnerabilities, it creates recommendations that guide you through the process of configuring the necessary controls. We plan to add recommendations for when unpatched resources are detected. You can find more information about strengthening security posture in our documentation, “Managing security recommendations in Azure Security Center.”
File Integrity Monitoring (FIM) – This method examines files and registry keys of operating systems, application software, and more, for changes that might indicate an attack. By enabling FIM, Azure Security Center will be able to detect changes in directories that can indicate malicious activity. Guidance on how to enable FIM and add file tracking on Linux machines can be found in our documentation, “File Integrity Monitoring in Azure Security Center.”
Security alerts – Azure Security Center detects suspicious activities on Linux machines using the auditd framework. Collected records flow into a threat detection pipeline and surface as alerts when malicious activity is detected. Security alerts coverage will soon include new analytics to identify machines compromised via the runc vulnerability. You can find more information about security alerts in our documentation, “Azure Security Center detection capabilities.”
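To make the file integrity monitoring idea concrete, here is a simplified, hypothetical sketch of change detection via baseline hashes (real FIM tracks files and registry keys on the machine itself; the paths and contents below are placeholders):

```python
# Simplified illustration of file integrity monitoring: record a baseline
# hash per file, then flag any file whose current hash differs.
import hashlib

def fingerprint(contents: dict) -> dict:
    """Map each path to the SHA-256 hex digest of its contents."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in contents.items()}

baseline = fingerprint({"/usr/bin/runc": b"original binary"})
current = fingerprint({"/usr/bin/runc": b"tampered binary"})

changed = [p for p in baseline if baseline[p] != current.get(p)]
print(changed)  # ['/usr/bin/runc']
```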

To apply the best security hygiene practices, it is recommended that you configure your environment to have the latest updates from your distribution provider. System updates can be performed through Azure Security Center; for more guidance, visit our documentation, “Apply system updates in Azure Security Center.”
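As a quick illustration of the patching check itself, a script along these lines could flag hosts running a pre-fix runc build. This is a sketch only: the fixed-in version used here is an assumption, so confirm the actual patched release with your distribution provider.

```python
# Illustrative sketch: compare a runc version string against the release that
# first shipped the CVE-2019-5736 fix. The "1.0.0-rc6" threshold below is an
# assumption for this example; verify it against your distribution's advisory.

def parse_runc_version(version: str):
    """Turn '1.0.0-rc5' style strings into a comparable tuple, e.g. (1, 0, 0, 5)."""
    base, _, rc = version.strip().partition("-rc")
    parts = [int(p) for p in base.split(".")]
    parts.append(int(rc) if rc else 999)  # a final release sorts after any rc
    return tuple(parts)

def is_vulnerable(version: str, fixed_in: str = "1.0.0-rc6") -> bool:
    return parse_runc_version(version) < parse_runc_version(fixed_in)

print(is_vulnerable("1.0.0-rc5"))  # True
print(is_vulnerable("1.0.0-rc6"))  # False
```

In practice you would feed this the output of `runc --version` gathered from each host, alongside the recommendations Azure Security Center surfaces.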
Quelle: Azure

Update 19.02 for Azure Sphere public preview now available

The Azure Sphere 19.02 release is available today. In our second quarterly release after public preview, our focus is on broader enablement of device capabilities, reducing your time to market with new reference solutions, and continuing to prioritize features based on feedback from organizations building with Azure Sphere.

Today Azure Sphere’s hardware offerings are centered around our first Azure Sphere certified MCU, the MediaTek MT3620. Expect to see additional silicon announcements in the near future, as we work to expand our silicon and hardware ecosystems to enable additional technical scenarios and ultimately deliver more choice to manufacturers.

Our 19.02 release focuses on broadening what you can accomplish with MT3620 solutions. With this release, organizations will be able to use new peripheral classes (I2C, SPI) from the A7 core. We continue to build on the private Ethernet functionality by adding new platform support for critical networking services (DHCP and SNTP) that enable a set of brownfield deployment scenarios. Additionally, by leveraging our new reference solutions and hardware modules, device builders can now bring the security of Azure Sphere to products even faster than before.

To build applications that leverage this new functionality, you will need to ensure that you have installed the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere OS.

New connectivity options – This release supports DHCP and SNTP servers in private LAN configurations. You can optionally enable these services when connecting a MT3620 to a private Ethernet connection.
Broader device enablement – Beta APIs now enable hardware support for both I2C and SPI peripherals. Additionally, we have enabled broader configurability options for UART.
More space for applications – The MT3620 now supports 1 MB of space dedicated for your production application binaries.
Reducing time to market of MT3620-enabled products – To reduce complexity in getting started with the many aspects of Azure Sphere, we have added several samples and reference solutions to our GitHub samples repo:

Private Ethernet – Demonstrates how to wire the supported microchip part and provides the software to begin developing a private Ethernet-based solution.
Real-time clock – Demonstrates how to set, manage, and integrate the MT3620 real time clock with your applications.
Bluetooth command and control – Demonstrates how to enable command and control scenarios by extending the Bluetooth Wi-Fi pairing solution released in 18.11.
Better security options for BLE – Extends the Bluetooth reference solution to support a PIN between the paired device and Azure Sphere.
Azure IoT – Demonstrates how to use Azure Sphere with either Azure IoT Central or an Azure IoT Hub.
CMake preview – Provides an early preview of CMake as an alternative for building Azure Sphere applications both inside and outside Visual Studio. This limited preview lets customers begin testing the use of existing assets in Azure Sphere development.

OS update protection – The Azure Sphere OS now protects against a set of update scenarios that would cause the device to fail to boot. The OS detects and recovers from these scenarios by automatically and atomically rolling back the device OS to its last known good configuration.
Latest Azure IoT SDK – The Azure Sphere OS has updated its Azure IoT SDK to the LTS Oct 2018 version.

All Wi-Fi connected devices that were previously updated to the 18.11 release will automatically receive the 19.02 Azure Sphere OS release. As a reminder, if your device is still running a release older than 18.11, it will be unable to authenticate to an Azure IoT Hub via DPS or receive OTA updates. See the Release Notes for how to proceed in that case.

As always, continued thanks to our preview customers for your comments and suggestions. Microsoft engineers and Azure Sphere community experts will respond to product-related questions on our MSDN forum and development questions on Stack Overflow. We also welcome product feedback and new feature requests.

Visit the Azure Sphere website for documentation and more information on how to get started with your Azure Sphere development kit. You can also email us at nextinfo@microsoft.com to kick off an Azure Sphere engagement with your Microsoft representative.
Quelle: Azure

Learn how to build with Azure IoT: Upcoming IoT Deep Dive events

Microsoft IoT Show, the place to go to hear about the latest announcements, tech talks, and technical demos, is starting a new interactive, live-streaming event and technical video series called IoT Deep Dive!

Each IoT Deep Dive will bring in a set of IoT experts. The first IoT Deep Dive, "Building End to End Industrial Solutions with PTC ThingWorx and Azure IoT," features Joseph Biron, PTC CTO of IoT, and Chafia Aouissi, Azure IoT Senior Program Manager. Join us on February 20, 2019 from 9:00 AM – 9:45 AM Pacific Standard Time to walk through end to end IoT solutions, technical demos, and best practices.

Come learn and ask questions about how to build IoT solutions, and deep dive into intelligent edge, tooling, DevOps, security, asset tracking, and other top-requested technical topics. These events are perfect for developers, architects, or anyone who is ready to accelerate going from proof of concept to production, or who needs best-practice tips while building their solutions.

Upcoming events

IoT Deep Dive Live: Building End to End industrial Solutions with PTC ThingWorx and Azure

PTC ThingWorx and Microsoft Azure IoT are proven industrial innovation solutions with a market-leading IoT cloud infrastructure. Sitting on top of Azure IoT, ThingWorx enables robust and rapid creation of IoT applications and solutions that take full advantage of Azure services such as IoT Hub. Join the event to learn how to build an end-to-end industrial solution. You can set up a reminder to join the live event.

When: February 20, 2019 at 9:00 AM – 9:45 AM Pacific Standard Time | Level 300
Learn about: ThingWorx, Vuforia Studio, Azure IoT, and Dynamics 365
Special guests:

Joseph Biron, Chief Technology Officer of IoT, PTC
Neal Hagermoser, Global ThingWorx COE Lead, PTC
Chafia Aouissi, Senior Program Manager, Azure IoT
Host: Pamela Cortez, Program Manager, Azure IoT

Industries and use cases: Smart connected product manufacturers in verticals including automotive, industrial equipment, aerospace, electronics, and high tech.

Location Intelligence for Transportation with Azure Maps 

Come learn how to use Azure Maps to provide location intelligence in different areas of transportation such as fleet management, asset tracking, and logistics.

When: March 6, 2019 9:00 AM – 9:45 AM Pacific Standard Time | Level 300
Learn about: Azure Maps, Azure IoT Hub, Azure IoT Central, and Azure Event Grid
Guest speakers:

Ricky Brundritt, Senior Program Manager, Azure IoT
Pamela Cortez, Program Manager, Azure IoT

Industries and use cases: Fleet management, logistics, asset management, and IoT

Submit questions before the events on the Microsoft IoT tech community or during the IoT Deep Dive live event itself! All videos will be hosted on Microsoft IoT Show after the live event.
Quelle: Azure

Under the hood: Performance, scale, security for cloud analytics with ADLS Gen2

On February 7, 2019 we announced the general availability of Azure Data Lake Storage (ADLS) Gen2. Azure is now the only cloud provider to offer a no-compromise cloud storage solution that is fast, secure, massively scalable, cost-effective, and fully capable of running the most demanding production workloads. In this blog post we’ll take a closer look at the technical foundation of ADLS that will power the end to end analytics scenarios our customers demand.

ADLS is the only cloud storage service that is purpose-built for big data analytics. It integrates with a broad range of analytics frameworks to enable a true enterprise data lake, maximizes performance via true filesystem semantics, scales to meet the needs of the most demanding analytics workloads, is priced at cloud object storage rates, and is flexible enough to support a broad range of workloads so that you are not required to create silos for your data.

A foundational part of the platform

The Azure Analytics Platform not only features a great data lake for storing your data with ADLS, but is rich with additional services and a vibrant ecosystem that allows you to succeed with your end to end analytics pipelines.

Azure features services such as HDInsight and Azure Databricks for processing data, Azure Data Factory to ingress and orchestrate, Azure SQL Data Warehouse, Azure Analysis Services, and Power BI to consume your data in a pattern known as the Modern Data Warehouse, allowing you to maximize the benefit of your enterprise data lake.

Additionally, an ecosystem of popular analytics tools and frameworks integrate with ADLS so that you can build the solution that meets your needs.

“Data management and data governance is top of mind for customers implementing cloud analytics solutions. The Azure Data Lake Storage Gen2 team have been fantastic partners ensuring tight integration to provide a best-in-class customer experience as our customers adopt ADLS Gen2.”

– Ronen Schwartz, Senior Vice president & General Manager of Data Integration and Cloud Integration, Informatica

"WANDisco’s Fusion data replication technology combined with Azure Data Lake Storage Gen2 provides our customers a compelling LiveData solution for hybrid analytics by enabling easy access to Azure Data Services without imposing any downtime or disruption to on premise operations.”

– David Richards, Co-Founder and CEO, WANdisco

“Microsoft continues to innovate in providing scalable, secure infrastructure, which goes hand in hand with Cloudera’s mission of delivering on the Enterprise Data Cloud. We are very pleased to see Azure Data Lake Storage Gen2 roll out globally. Our mutual customers can take advantage of the simplicity of administration this storage option provides when combined with our analytics platform.”

– Vikram Makhija, General Manager for Cloud, Cloudera

Performance

Performance is the number one driver of value for big data analytics workloads. The reason is simple: the more performant the storage layer, the less compute (the expensive part!) is required to extract value from your data. Therefore, not only do you gain a competitive advantage by achieving insights sooner, you do so at a significantly reduced cost.

“We saw a 40 percent performance improvement and a significant reduction of our storage footprint after testing one of our market risk analytics workflows at Zurich’s Investment Management on Azure Data Lake Storage Gen2.”

– Valerio Bürker, Program Manager Investment Information Solutions, Zurich Insurance

Let’s look at how ADLS achieves this performance. The most notable feature is the Hierarchical Namespace (HNS), which allows this massively scalable storage service to arrange your data like a filesystem, with a hierarchy of directories. All analytics frameworks (e.g., Spark, Hive) are built with an implicit assumption that the underlying storage service is a hierarchical filesystem. This is most obvious when data is written to temporary directories which are renamed at the completion of the job. For traditional cloud-based object stores, this is an O(n) operation, requiring n copies and deletes, which dramatically impacts performance. In ADLS this rename is a single atomic metadata operation.
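The cost difference can be illustrated with a toy namespace model. This is purely illustrative: the structures below are assumptions for the sketch, not ADLS internals.

```python
# Toy model (assumptions throughout): contrasts a flat object store, where
# renaming a "directory" means copying and deleting every object under the
# prefix (O(n)), with a hierarchical namespace, where the same rename is a
# single metadata update.

# Flat object store: keys are full paths, there are no real directories.
flat_store = {"tmp/part-0": b"a", "tmp/part-1": b"b", "tmp/part-2": b"c"}

def flat_rename(store, src_prefix, dst_prefix):
    ops = 0
    for key in [k for k in store if k.startswith(src_prefix + "/")]:
        store[dst_prefix + key[len(src_prefix):]] = store.pop(key)  # copy + delete
        ops += 1
    return ops  # one copy+delete per object under the prefix

# Hierarchical namespace: directories are real nodes; rename rewires one pointer.
hns = {"root": {"tmp": {"part-0": b"a", "part-1": b"b", "part-2": b"c"}}}

def hns_rename(tree, src, dst):
    tree[dst] = tree.pop(src)  # single atomic metadata operation
    return 1

print(flat_rename(flat_store, "tmp", "out"))  # 3 (grows with file count)
print(hns_rename(hns["root"], "tmp", "out"))  # 1 (constant)
```

For a job that commits thousands of output files by renaming a temporary directory, the flat-store approach pays per file while the hierarchical namespace pays once.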

The other contributor to performance is the Azure Blob Filesystem (ABFS) driver. This driver takes advantage of the fact that the ADLS endpoint is optimized for big data analytics workloads. These workloads are most sensitive to maximizing throughput via large IO operations, as distinct from other general purpose cloud stores that must optimize for a much larger range of IO operations. This level of optimization leads to significant IO performance improvements that directly benefits the performance and cost aspects of running big data analytics workloads on Azure. The ABFS driver is contributed as part of Apache Hadoop® and is available in HDInsight and Azure Databricks, as well as other commercial Hadoop distributions.

Scalable

Scalability for big data analytics is also critically important. There’s no point having a solution that works great for a few TBs of data but collapses as the data size inevitably grows. The rate of growth of big data analytics projects tends to be non-linear as a consequence of more diverse and accessible sources of data. Most projects benefit from the principle that the more data you have, the better the insights. However, this leads to design challenges: the system must scale at the same rate as the growth of the data. One of the great design pivots of big data analytics frameworks, such as Hadoop and Spark, is that they scale horizontally. This means that as the data and/or processing grows, you can simply add more nodes to your cluster and processing continues unabated. This, however, relies on the storage layer scaling linearly as well.

This is where the value of building ADLS on top of the existing Azure Blob service shines. The exabyte scale of that service now applies to ADLS, ensuring that no limits exist on the amount of data to be stored or accessed. In practical terms, customers can store hundreds of petabytes of data, accessed with enough throughput to satisfy the most demanding workloads.

Secure

For customers wanting to build a data lake to serve the entire enterprise, security is no lightweight consideration. There are multiple aspects to providing end to end security for your data lake:

Authentication – Azure Active Directory OAuth bearer tokens provide industry standard authentication mechanisms, backed by the same identity service used throughout Azure and Office365.
Access control – A combination of Azure Role Based Access Control (RBAC) and POSIX-compliant Access Control Lists (ACLs) to provide flexible and scalable access control. Significantly, the POSIX ACLs are the same mechanism used within Hadoop.
Encryption at rest and transit – Data stored in ADLS is encrypted using either a system supplied or customer managed key. Additionally, data is encrypted using TLS 1.2 whilst in transit.
Network transport security – Given that ADLS exposes endpoints on the public Internet, transport-level protections are provided via Storage Firewalls that securely restrict where the data may be accessed from, enforced at the packet level.
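The POSIX-style ACL evaluation in the access control bullet above can be sketched as follows. This is a simplified model: the real ADLS Gen2 algorithm also involves the ACL mask, RBAC evaluation, and owner/superuser checks, all omitted here.

```python
# Simplified sketch of POSIX-style ACL resolution: a named-user entry is
# checked first, then matching named-group entries, then the 'other' entry.
# This ordering mirrors POSIX ACLs generally; treating it as the complete
# ADLS Gen2 behavior would be an assumption.

def check_access(acl, user, groups, perm):
    """acl maps entries like 'user:alice' or 'group:analysts' to permission sets."""
    if f"user:{user}" in acl:                        # named user entry wins first
        return perm in acl[f"user:{user}"]
    matched = [g for g in groups if f"group:{g}" in acl]
    if matched:                                      # then any matching group grants
        return any(perm in acl[f"group:{g}"] for g in matched)
    return perm in acl.get("other", set())           # finally fall back to 'other'

acl = {"user:alice": {"r", "w"}, "group:analysts": {"r"}, "other": set()}
print(check_access(acl, "alice", ["analysts"], "w"))  # True  (named user entry)
print(check_access(acl, "bob", ["analysts"], "w"))    # False (group grants only r)
print(check_access(acl, "carol", [], "r"))            # False ('other' grants nothing)
```

Because these are the same ACL semantics used within Hadoop, permissions set on the data lake carry the meaning analytics frameworks already expect.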

Tight integration with analytics frameworks results in an end to end secure pipeline. The HDInsight Enterprise Security Package makes end-user authentication flow through the cluster and to the data in the data lake.

Get started today!

We’re excited for you to try Azure Data Lake Storage! Get started today and let us know your feedback.

Get started with Azure Data Lake Storage.
Watch the video, “Create your first ADLS Gen2 Data Lake.”
Read the general availability announcement.
Learn how ADLS improves the Azure analytics platform in the blog post, “Individually great, collectively unmatched: Announcing updates to 3 great Azure Data Services.”
Refer to the Azure Data Lake Storage documentation.
Learn how to deploy a HDInsight cluster with ADLS.
Deploy an Azure Databricks workspace with ADLS.
Ingest data into ADLS using Azure Data Factory.

Quelle: Azure

Monitor at scale in Azure Monitor with multi-resource metric alerts

Our customers rely on Azure to run large scale applications and services critical to their business. To run services at scale, you need to set up alerts to proactively detect, notify, and remediate issues before they affect your customers. However, configuring alerts can be hard when you have a complex, dynamic environment with lots of moving parts.

Today, we are excited to release multi-resource support for metric alerts in Azure Monitor to help you set up critical alerts at scale. Metric alerts in Azure Monitor work on a host of multi-dimensional platform and custom metrics, and notify you when the metric breaches a threshold that was either defined by you or detected automatically.

With this new feature, you will be able to set up a single metric alert rule that monitors:

A list of virtual machines in one Azure region
All virtual machines in one or more resource groups in one Azure region
All virtual machines in a subscription in one Azure region

Benefits of using multi-resource metric alerts

Get alerting coverage faster: With a small number of rules, you can monitor all the virtual machines in your subscription. Multi-resource rules set at the subscription or resource group level can automatically monitor new virtual machines deployed to the same resource group or subscription (in the same Azure region). Once such a rule is created, you can deploy hundreds of virtual machines, all monitored from day one, without any additional effort.
Much smaller number of rules to manage: You no longer need to have a metric alert for every resource that you want to monitor.
You still get resource level notifications: You still get granular notifications per impacted resource, so you always have the information you need to diagnose issues.
Even simpler at scale experience: Using Dynamic Thresholds along with multi-resource metric alerts, you can monitor each virtual machine without the need to manually identify and set thresholds that fit all the selected resources. The Dynamic Thresholds condition type applies tailored thresholds based on advanced machine learning (ML) capabilities that learn each metric's historical behavior and identify patterns and anomalies.

Setting up a multi-resource metric alert rule

When you set up a new metric alert rule in the alert rule creation experience, use the checkboxes to select all the virtual machines you want the rule to be applied to. Please note that all the resources must be in the same Azure region.

You can select one or more resource groups, or select a whole subscription to apply the rule to all virtual machines in the subscription.

If you select all virtual machines in your subscription, or one or more resource groups, you get the option to auto-grow your selection. Selecting this option means the alert rule will automatically monitor any new virtual machines that are deployed to this subscription or resource group. With this option selected, you don’t need to create a new rule or edit an existing rule whenever a new virtual machine is deployed.
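The scope selection and auto-grow behavior described above can be sketched as a simple predicate. This is a hypothetical helper for illustration, not the Azure Monitor implementation or SDK.

```python
# Hypothetical sketch of how a multi-resource rule's scope might be evaluated.
# The field names and scope values below are assumptions for this example; they
# only encode the selection rules described in the text.

def rule_covers(rule, vm):
    if vm["region"] != rule["region"]:           # all targets share one Azure region
        return False
    if rule["scope"] == "subscription":
        return vm["subscription"] == rule["subscription"]
    if rule["scope"] == "resource_group":
        return (vm["subscription"] == rule["subscription"]
                and vm["resource_group"] in rule["resource_groups"])
    return vm["id"] in rule["vm_ids"]            # explicit list of virtual machines

rule = {"scope": "resource_group", "subscription": "sub1",
        "resource_groups": ["rg-prod"], "region": "westus2"}

old_vm = {"id": "vm1", "subscription": "sub1",
          "resource_group": "rg-prod", "region": "westus2"}
new_vm = {"id": "vm2", "subscription": "sub1",
          "resource_group": "rg-prod", "region": "westus2"}

# With auto-grow, a VM deployed later into the same scope is covered
# without editing the rule:
print(rule_covers(rule, old_vm), rule_covers(rule, new_vm))  # True True
```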

You can also use Azure Resource Manager templates to deploy multi-resource metric alerts. Learn more in our documentation, “Understand how metric alerts work in Azure Monitor.”

Pricing

The pricing for metric alert rules is based on the number of metric time series monitored by an alert rule. The same pricing applies to multi-resource metric alert rules.
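As a rough illustration of what that billing unit means for a multi-resource rule (the cost-model shape here is an assumption; consult the Azure Monitor pricing page for actual rates):

```python
# Back-of-the-envelope sketch (assumed model shape, not official pricing):
# the billed unit is a monitored time series, so a single multi-resource rule
# scales with resources multiplied by dimension combinations.

def monitored_timeseries(vm_count, dimension_combinations=1):
    return vm_count * dimension_combinations

# One rule watching a single metric on 100 VMs -> 100 time series;
# splitting it on a hypothetical 2-value dimension doubles that.
print(monitored_timeseries(100))       # 100
print(monitored_timeseries(100, 2))    # 200
```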

Wrapping up

We are excited about this new capability that makes configuring and managing metric alert rules at scale easier. This functionality is currently supported only for virtual machines, with support for other resource types coming soon. We would love to hear what you think about it and what improvements we should make. Contact us at azurealertsfeedback@microsoft.com.
Quelle: Azure

Protect Azure Virtual Machines using storage spaces direct with Azure Site Recovery

Storage Spaces Direct (S2D) lets you host a guest cluster on Microsoft Azure, which is especially useful in scenarios where virtual machines (VMs) host a critical application like SQL Server, Scale-Out File Server, or SAP ASCS. You can learn more about clustering by reading the article, “Deploying IaaS VM Guest Clusters in Microsoft Azure.” I am also happy to share that with the latest Azure Site Recovery (ASR) update, you can now protect these business-critical applications. ASR support for Storage Spaces Direct allows you to take your highly available application and make it more resilient by providing protection against region-level failure.

We continue to deliver on our promise of simplicity, and you can protect your Storage Spaces Direct cluster in three simple steps:

Inside the recovery services vault, select +replicate.

1. Select a replication policy with application consistency off. Please note that only crash-consistency support is available.

2. Select all the nodes in the cluster and make them part of a Multi-VM consistency group. To learn more about Multi-VM consistency, please visit our documentation, “Common questions: Azure-to-Azure replication.”

3. Lastly, select OK to enable the replication.

Next steps

Begin protecting virtual machines using storage spaces direct. To get started visit our documentation, “Replicate Azure Virtual Machines using storage spaces direct to another Azure region.”

Disaster recovery between Azure regions is available in all Azure regions where ASR is available. Please note, this feature is only available for Azure Virtual Machines’ disaster recovery.

Related links and additional content

Check the most common queries on Azure Virtual Machine disaster recovery.
Learn more about the supported configurations for replicating Azure Virtual Machines.
Need help? Reach out to Azure Site Recovery forum for support.
Tell us how we can improve Azure Site Recovery by contributing new ideas and voting on existing ones.

Quelle: Azure

Anomaly detection using built-in machine learning models in Azure Stream Analytics

Built-in machine learning (ML) models for anomaly detection in Azure Stream Analytics significantly reduce the complexity and costs associated with building and training machine learning models. This feature is now available for public preview worldwide.

What is Azure Stream Analytics?

Azure Stream Analytics is a fully managed serverless PaaS offering on Azure that enables customers to analyze and process fast-moving streams of data and deliver real-time insights for mission-critical scenarios. Developers can use a simple SQL language (extensible to include custom code) to author and deploy powerful analytics processing logic that can scale up and scale out to deliver insights with millisecond latencies.

Traditional way to incorporate anomaly detection capabilities in stream processing

Many customers use Azure Stream Analytics to continuously monitor massive amounts of fast-moving streams of data in order to detect issues that do not conform to expected patterns and prevent catastrophic losses. This in essence is anomaly detection.

For anomaly detection, customers traditionally relied either on sub-optimal methods of hard-coding control limits in their queries, or on custom machine learning models. Developing custom models not only requires time, but also a high level of data science expertise along with nuanced data pipeline engineering skills. Such high barriers to entry precluded the adoption of anomaly detection in streaming pipelines despite its value for many Industrial IoT sites.

Built-in machine learning functions for anomaly detection in Stream Analytics

With built-in machine learning based anomaly detection capabilities, Azure Stream Analytics reduces the complexity of building and training custom machine learning models to simple function calls. Two new unsupervised machine learning functions are being introduced to detect the two most commonly occurring classes of anomalies: temporary and persistent.

AnomalyDetection_SpikeAndDip function to detect temporary or short-lasting anomalies such as spikes or dips. This is based on the well-documented kernel density estimation algorithm.
AnomalyDetection_ChangePoint function to detect persistent or long-lasting anomalies such as bi-level changes and slowly increasing or decreasing trends. This is based on another well-known algorithm called exchangeability martingales.

Example

SELECT sensorid, System.Timestamp as time, temperature as temp,
AnomalyDetection_SpikeAndDip(temperature, 95, 120, 'spikesanddips')
OVER (PARTITION BY sensorid LIMIT DURATION(second, 120)) as SpikeAndDipScores
FROM input

In the example above, the AnomalyDetection_SpikeAndDip function helps monitor a set of sensors for spikes or dips in the temperature readings. The underlying ML model uses a user-supplied confidence level of 95 percent to set the model sensitivity. A training event count of 120, which corresponds to a 120-second sliding window, is supplied as a function parameter. Note that the job is partitioned by sensorid, which results in multiple ML models being trained under the hood, one for each sensor, all within the same single query.
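To make the windowing behavior concrete, here is a simplified Python stand-in for that sliding-window scoring. The real function uses kernel density estimation; the deviation-from-mean test and the minimum-history cutoff below are assumptions chosen purely for illustration.

```python
# Simplified stand-in for AnomalyDetection_SpikeAndDip: flags readings far
# from the sliding window's mean. The 3-sigma threshold and the 10-event
# warm-up are assumptions; the actual function is KDE-based and tuned by
# its confidence-level parameter.

from collections import deque
from statistics import mean, pstdev

def spike_and_dip_scores(readings, window=120, threshold=3.0):
    history = deque(maxlen=window)   # "training" window of recent events
    flags = []
    for value in readings:
        if len(history) >= 10:       # require minimal history before scoring
            mu, sigma = mean(history), pstdev(history)
            is_anomaly = sigma > 0 and abs(value - mu) > threshold * sigma
        else:
            is_anomaly = False
        flags.append(is_anomaly)
        history.append(value)
    return flags

temps = [20.0, 20.1, 19.9, 20.2, 20.0, 19.8,
         20.1, 20.0, 19.9, 20.1, 20.0, 45.0]
print(spike_and_dip_scores(temps))   # only the final 45.0 reading is flagged
```

Partitioning by sensorid in the query corresponds to keeping one such `history` window per sensor, which is why the service effectively trains one model per partition key.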

Get started today

We’re excited for you to try out anomaly detection functions in Azure Stream Analytics. To try this new feature, please refer to the feature documentation, "Anomaly Detection in Azure Stream Analytics."
Quelle: Azure