Custom Vision Service introduces classifier export, starting with CoreML for iOS 11

To enable developers to build for the intelligent edge, Custom Vision Service from Microsoft Cognitive Services has added mobile model export.

Custom Vision Service is a tool for easily training, deploying, and improving custom image classifiers. With just a handful of images per category, you can train your own image classifier in minutes. In addition to hosting your classifiers at a REST endpoint, you can now export models to run offline, starting with export to the CoreML format for iOS 11. Export allows you to embed your classifier directly in your application and run it locally on a device. The models you export are optimized for the constraints of a mobile device, so you can classify on device in real time.
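For the hosted option, a trained classifier can be called from any HTTP client. Below is a minimal Python sketch; the endpoint URL and prediction key are placeholders, and the `Predictions`/`Tag`/`Probability` field names are assumptions based on the prediction API's response shape at the time of writing:

```python
import json
import urllib.request

def top_tag(predictions):
    """Pick the most probable tag from a list of prediction dicts."""
    return max(predictions, key=lambda p: p["Probability"])["Tag"]

def classify_image(endpoint_url, prediction_key, image_bytes):
    """POST raw image bytes to a Custom Vision prediction endpoint and return the best tag."""
    request = urllib.request.Request(
        endpoint_url,
        data=image_bytes,
        headers={
            "Prediction-Key": prediction_key,            # per-project key from the portal
            "Content-Type": "application/octet-stream",  # raw image payload
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return top_tag(body["Predictions"])
```

With an exported CoreML model, the same classification runs on-device instead of through this endpoint.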

Custom Vision Service is designed to build quality classifiers with very small training datasets, helping you build a classifier that is robust to differences in the items you are trying to recognize and that ignores the things you are not interested in. With today's update, you can easily add real time image classification to your mobile applications. Creating, updating, and exporting a compact model takes only minutes, making it easy to build and iteratively improve your application. More export formats and supported devices are coming in the near future.

A sample app and tutorial for adding real-time image classification to an iOS app are now available.

To learn more and start building your own image classifier, visit www.customvision.ai.

Screenshot of a fruit recognition classifier in our sample app.
Source: Azure

IntelliJ Community Edition: 1-Click to Run Java Containers on Azure

Deploying Java containers to Azure made easy from IntelliJ. It takes seconds, not minutes.

Today, I am delighted to announce a new feature that enables a 1-click experience for running containerized Java applications on Azure. It enables Java developers to easily dockerize their projects, push docker images to a public or private registry like Azure Container Registry (ACR), and run them on Web App for Containers. The new feature is supported in IntelliJ IDEA, both Ultimate and Community Edition, and provides a straightforward experience for Java developers who are starting their container journey on Azure.

Run containerized Java applications on Azure

Here is how you can dockerize and run a Maven project on Azure – it only takes three steps:

Create a Java Maven project in IntelliJ IDEA Community Edition.
Add docker support to the project.
Provide Azure Container Registry and Web App for Containers configuration info, and hit run.

1-Click to redeploy your application

Once your app is set up in IntelliJ, you only need to hit run to redeploy your application. You can edit the run configuration at any time.

Give it a try

We have a step-by-step tutorial to help you get started. We would love to get your feedback (via email or comments below). We will continue to light up many more Azure developer experiences in IntelliJ.

You can find more information about Java on Azure:

GitHub Page for the open source project of IntelliJ/Eclipse Toolkits
Home Page of Azure Toolkit for IntelliJ
Home Page of Azure Toolkit for Eclipse
Java on Azure Developer Center

Source: Azure

Introducing Azure confidential computing

Microsoft spends one billion dollars per year on cybersecurity, and much of that goes to making Microsoft Azure the most trusted cloud platform. From strict physical datacenter security and data privacy, to encryption of data at rest and in transit, novel uses of machine learning for threat detection, and stringent operational software development lifecycle controls, Azure represents the cutting edge of cloud security and privacy.

Today, I’m excited to announce that Microsoft Azure is the first cloud to offer new data security capabilities with a collection of features and services called Azure confidential computing. Put simply, confidential computing offers a protection that to date has been missing from public clouds: encryption of data while in use. This means that data can be processed in the cloud with the assurance that it is always under customer control. The Azure team, along with Microsoft Research, Intel, Windows, and our Developer Tools group, has been working on confidential computing software and hardware technologies for over four years. The bottom of this post includes a list of Microsoft Research papers related to confidential computing. Today we take that cutting edge one step further by making it available to customers via an Early Access program.

Data breaches are virtually daily news events, with attackers gaining access to personally identifiable information (PII), financial data, and corporate intellectual property. While many breaches are the result of poorly configured access control, most can be traced to data that is accessed while in use, either through administrative accounts, or by leveraging compromised keys to access encrypted data. Despite advanced cybersecurity controls and mitigations, some customers are reluctant to move their most sensitive data to the cloud for fear of attacks against their data when it is in-use. With confidential computing, they can move the data to Azure knowing that it is safe not only at rest, but also in use from the following threats:

Malicious insiders with administrative privilege or direct access to hardware on which it is being processed
Hackers and malware that exploit bugs in the operating system, application, or hypervisor
Third parties accessing it without the customer’s consent

Confidential computing ensures that when data is “in the clear,” which is required for efficient processing, it is protected inside a Trusted Execution Environment (TEE – also known as an enclave), an example of which is shown in the figure below. TEEs ensure there is no way to view data or the operations inside from the outside, even with a debugger. They also ensure that only authorized code is permitted to access data. If the code is altered or tampered with, the operations are denied and the environment is disabled. The TEE enforces these protections throughout the execution of code within it.

With Azure confidential computing, we’re developing a platform that enables developers to take advantage of different TEEs without having to change their code. Initially we support two TEEs: Virtual Secure Mode and Intel SGX. Virtual Secure Mode (VSM) is a software-based TEE implemented by Hyper-V in Windows 10 and Windows Server 2016. Hyper-V prevents administrator code running on the computer or server, as well as local administrators and cloud service administrators, from viewing the contents of the VSM enclave or modifying its execution. We’re also offering a hardware-based Intel SGX TEE with the first SGX-capable servers in the public cloud. Customers who don’t want Azure or Microsoft in their trust model at all can leverage SGX TEEs. We’re working with Intel and other hardware and software partners to develop additional TEEs and will support them as they become available.

Microsoft already uses enclaves to protect everything from blockchain financial operations to data stored in SQL Server to our own infrastructure within Azure. While we’ve previously spoken about our confidential computing blockchain efforts, known as the Coco Framework, today we are announcing the use of the same technology to implement encryption-in-use for Azure SQL Database and SQL Server. This is an enhancement of our Always Encrypted capability, which ensures that sensitive data within a SQL database can be encrypted at all times without compromising the functionality of SQL queries. Always Encrypted achieves this by delegating computations on sensitive data to an enclave, where the data is safely decrypted and processed. We continue to use enclaves inside Microsoft products and services to ensure that wherever sensitive information needs to be processed, it can be secured while in use.

In addition to SQL Server, we see broad application of Azure confidential computing across many industries including finance, healthcare, AI, and beyond. In finance, for example, personal portfolio data and wealth management strategies would no longer be visible outside of a TEE. Healthcare organizations can collaborate by sharing their private patient data, like genomic sequences, to gain deeper insights from machine learning across multiple data sets without risk of data being leaked to other organizations. In oil and gas and IoT scenarios, sensitive seismic data that represents the core intellectual property of a corporation can be moved to the cloud for processing, but with the protections of encrypted-in-use technology.

Customers can try out Azure confidential computing through our Early Access program, which includes access to Azure VSM and SGX-enabled virtual machines, as well as tools, SDKs, and Windows and Linux support to enable any application in the cloud to protect its data while in use.

Sign up for the Azure confidential computing Early Access program.

I look forward to seeing you at Ignite, where I’ll demonstrate enclaves in Azure. There are so many opportunities and use cases we can secure together using the Azure cloud, Intel hardware, and Microsoft technologies, services, and products.

Today is the exciting beginning of a new era of secure computing. Join us in Azure as we create this future.

– Mark

 

Microsoft Research papers related to confidential computing:

Shielding applications from an untrusted cloud with Haven 
VC3: Trustworthy Data Analytics in the Cloud using SGX
Oblivious Multi-Party Machine Learning on Trusted Processors 
A Design and Verification Methodology for Secure Isolated Regions

See how confidential computing fits within Microsoft’s broader cloud security strategy in the Microsoft Story Labs feature: Securing the Cloud.
Source: Azure

Protecting applications and data on Azure Stack

Azure provides global-scale cloud computing running in Microsoft datacenters in over 40 regions. Azure Stack extends the value of Azure to a wider range of customers and scenarios by making a core set of services available to run on integrated systems in customers’ or service providers’ datacenters. Many companies and service providers are evaluating the Azure Stack Development Kit (ASDK) to learn more. Many of these companies are going beyond evaluation and thinking through the strategy, design, and implementation of a hybrid cloud for modern apps and on-premises cloud services. Business continuity and disaster recovery (BC/DR) planning is an important part of a comprehensive strategy.

Have you downloaded and installed ASDK? Download it now.

Business continuity and disaster recovery planning are closely related processes that a company uses to prepare for unforeseen risks. In this post, we’ll go through the protection objectives for different application types and recovery objectives so you can think about solutions that meet your business needs. These objectives will have a direct impact on the BC/DR strategy that you choose.

Azure Stack BC/DR Areas

There are four main areas to consider for a BC/DR strategy:

Protect the underlying infrastructure that hosts IaaS virtual machines, PaaS services, and tenant applications and data.
Protect IaaS-based applications and data.
Protect PaaS-based applications and data.
Archive PaaS data for long-term retention.

This blog focuses on the second point: protecting IaaS virtual machine-based applications and data using Microsoft and partner products for backup and restore, and replication and failover. The Azure Stack team is engaged with multiple partners to ensure that their third-party solutions work on Azure Stack. Expect additional blog posts covering the other three areas in the near future. 

Backup and restore IaaS virtual machine-based applications

Azure Stack supports Windows- and Linux-based applications deployed on virtual machines that are provisioned as IaaS Azure Resource Manager virtual machines. Backup products with access to the guest operating system (OS) can easily protect file, folder, OS state, and application data using policies similar to those you use today for near-line and long-term retention. You have the flexibility to use Microsoft products like Azure Backup and System Center Data Protection Manager, or third-party products, to back up data on-premises, to a service provider, or directly to Azure. This approach gives you the flexibility of choice for protecting your applications and data using products you are familiar with and trust.

Replication and failover for IaaS virtual machine-based applications

Applications that require minimal downtime and minimal data loss need additional protection. Backup and restore is ideal for customers that can tolerate application downtime for an extended period. However, to achieve minimal downtime, you need to replicate data to a secondary location and orchestrate failover of the application in the event of a disaster. With Azure Stack, you have the flexibility to use Azure Site Recovery to replicate data directly to Azure and fail over your application to run in Azure. Applications deployed on Azure Stack that support native replication, like Microsoft SQL Server, can replicate data to another location where the application is running. You can also use third-party replication and orchestration products to fail over applications to another location. Like backup and restore, this approach gives you flexibility and choice in protecting your applications and data.

Azure Stack partners and deployment scenarios

Important considerations for general availability

In-guest protection—Azure Stack only supports protection technologies at the guest level of an IaaS virtual machine. Azure Stack does not support installing agents on underlying infrastructure servers and does not rely on hypervisor-based technologies.
Azure Site Recovery—Upon general availability, Azure Site Recovery will support failover to Azure, and test failover in Azure. It will support re-protection of the IaaS virtual machine to your Azure Stack instance using a manual process, with automation forthcoming. To fail back from Azure to Azure Stack, you will need to shut down the IaaS virtual machines, manually copy the data to your Azure Stack instance, and then recreate the IaaS virtual machine. At that point, you can enable protection for the IaaS virtual machine as a new instance.
Site to site protection—Protecting IaaS VM resources across two Azure Stack clouds is supported using the Azure Backup Server or a third-party product from one of our partners. 

Recovering from a catastrophic data loss

Your BC/DR planning must prepare your company to recover from a disaster that permanently takes your entire Azure Stack cloud offline and results in complete and unrecoverable data loss. Deploying your Azure Stack cloud will require you to engage your hardware vendor. If you have a secondary site, consider deploying an Azure Stack instance there, which will allow you to recover tenant applications and data without having to wait for a new cloud to be deployed.

Next steps

Download and install ASDK today and get familiar with hosting your applications in a hybrid cloud. As you gain insight into the applications that you plan to deploy into Azure Stack, think through the recovery objectives for your applications. Identify the different protection schemes that make sense, and consider the technologies and products that will help you achieve these objectives.

Reach out to your vendor to discuss how they will support BC/DR scenarios on Azure Stack. Start testing application and data protection on ASDK. As we get closer to general availability, you will see updated documentation about Azure Stack. If you want to talk to the Azure Stack product group about BC/DR, fill out this survey so we can reach out to you.

More information

Attending Microsoft Ignite this year in Orlando? Check out these two sessions for information about Azure Stack BC/DR:

Microsoft Azure Stack business continuity and disaster recovery
Recovering Azure Stack infrastructure from a catastrophic data loss

We always want feedback—please sign up to talk to us.
Source: Azure

Microsoft Tech Summit is back – register for a free event near you!

Announcing the 2017–2018 Microsoft Tech Summit global tour! Build your skills with the latest in cloud technologies at a free, technical learning event for IT professionals and developers, coming to a city near you. We’re hitting the road with our top engineers to bring you two days of in-depth sessions, networking opportunities, industry insights, and hands-on skill-building with the experts behind Microsoft’s cloud services.

The cloud is changing expectations and transforming the way we live and work. Whether you’re developing innovative apps or delivering optimized solutions, Microsoft Tech Summit can help you evolve your skills, deepen your expertise, and grow your career.

Discover the latest trends, tools, and product roadmaps at more than 70 sessions, covering a range of topics across Microsoft Azure and Microsoft 365, which includes Windows 10, Office 365, and Enterprise Mobility + Security. From beginner sessions that will help you develop core cloud skills, to advanced, 400-level training that will take your expertise to the next level, there is something for everyone.

This year, we’re adding a new event Hub, the primary gathering place where you can learn, network, meet partners, and visit the community theater. You’ll also have direct access to Microsoft experts who bring you Azure and Microsoft 365 – ask your toughest questions, learn best practices, provide feedback, and share strategies to optimize operations and deliver more value to your organization.

Microsoft Tech Summit Tour Schedule

 

November 16 – 17, 2017: Sydney
November 29 – 30, 2017: Tel Aviv
December 6 – 7, 2017: São Paulo
December 13 – 14, 2017: Toronto
January 17 – 18, 2018: Singapore
January 24 – 25, 2018: Birmingham (UK)
February 13 – 14, 2018: Cape Town
February 21 – 22, 2018: Frankfurt
March 5 – 6, 2018: Washington, D.C.
March 14 – 15, 2018: Paris
March 19 – 20, 2018: San Francisco
March 28 – 29, 2018: Amsterdam
April 17 – 18, 2018: Stockholm
April 25 – 26, 2018: Warsaw

Join us for the Microsoft Tech Summit, and learn how Microsoft’s cloud platform can help you lead your organization through real digital transformation and shape your future. Registration is now open – find a city near you and reserve your free seat today.

Don’t miss out, register now!
Source: Azure

Five principles of innovation in the cloud

If you work in IT, you may have noticed that you live in a world of competing agendas. On one side are the forces that expect you to do what you have always done: “keep the lights on.” On the other, you are being driven to innovate so you can seize new opportunities, support evolving business needs, and better serve customers.

Despite the tendency to be risk-averse, as more and more companies move to cloud environments certain principles have emerged to help foster a more innovative IT mindset, and a more central role for IT within the corporation. The following are some of the tenets for making this transition, as covered in the book Enterprise Cloud Strategy (2nd ed.) and the related upcoming webinar:

Go fast: The cloud enables projects to be spun up quickly, which allows you to “try many, use best,” and learn quickly from what doesn’t work.
Push the boundaries: IT not only needs to adapt to the cloud, but embrace new architectures and processes, and test limits, including designing “net new” apps and refactoring legacy apps for PaaS and SaaS.
Make data-driven decisions: Use the monitoring and analytics capabilities of the cloud to track costs and technical efficiency so you can make smart decisions about which apps are generating the biggest return.
Simplify: Retire, consolidate, and right-size as many services and applications as possible to free up resources.
Communicate to succeed: Establishing clear, ongoing communication with stakeholders is the single most important factor for successful innovation.

You’ll find in-depth guidance on encouraging experimentation and other important facets of cloud migration in our webinar featuring authors Barry Briggs and Eduardo Kassner.

Sign up for the webinar!
Source: Azure

Microsoft Video Indexer expands functionality unlocking more video insights

In May 2017, we announced the global public preview of the Video Indexer service at the Microsoft Build Developers Conference in Seattle. As part of Microsoft Cognitive Services, Video Indexer was introduced as a unique integrated bundling of Microsoft's cloud-based artificial intelligence and cognitive capabilities applied specifically for video content. It has since become the industry's most comprehensive Video AI service available, making it easy to extract insights from videos. For the last four months, we have received a tremendous, enthusiastic response from our global customers and partners who began using Video Indexer within their own in-house workflows and video solutions. With the International Broadcasters Conference (IBC) just around the corner, we are excited to share significant feature updates to the Video Indexer service.

Additionally, we are very pleased that several customers and partners have already integrated Video Indexer within their own video solutions and product offerings. A few examples are as follows:

Ooyala has integrated Video Indexer into their Flex Media Logistics platform to enable auto-transcription, translation, content-aware advertising insertion, and better content monetization and search.
Zone TV is using Video Indexer to automate the curation of a first-of-its-kind customizable suite of linear TV channels.
Axle Video is taking advantage of Video Indexer to automatically tag entire video files and video segments to greatly accelerate the post-production process and efficient search across video libraries.

Overall Service updates since May

Speech-to-text – Video Indexer now supports Egyptian Arabic for speech-to-text.

With the addition of Arabic, the Video Indexer portal has been updated to support RTL (right-to-left) text in the transcript tab as well as the search page.

Captions – The Video Indexer player can now show captions automatically when you add showCaptions=true to the iframe embed player URL.
Download insights from portal – You can now download the JSON insights extracted by Video Indexer via the portal.
Playback speeds – The Video Indexer player was updated to support different playback speeds.
Inline transcript editing – The transcript panel has been updated to support inline editing.
Editing in insights widget – The insights widget has been updated to support editing in embedded mode (based on an access token).
Keywords editing – The insights widget has been updated to enable the addition and removal of keywords.
Annotations – The insights widget now shows the annotations extracted by the Video Indexer engine.
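The caption toggle is just a query parameter appended to the player embed URL. A small Python sketch of how such a URL could be assembled; the embed base path and video ID here are placeholder assumptions, not documented values:

```python
from urllib.parse import urlencode

# Assumed embed path for illustration only; take the real one from the portal's embed dialog.
PLAYER_BASE = "https://www.videoindexer.ai/embed/player/{video_id}"

def embed_player_url(video_id, show_captions=True):
    """Build an iframe player URL that starts with captions enabled."""
    query = urlencode({"showCaptions": str(show_captions).lower()})
    return PLAYER_BASE.format(video_id=video_id) + "?" + query
```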

API updates

The Video Indexer API reference page provides details about the APIs. Some APIs are new, while some have been enhanced with new features.

Upload – The Upload API has been enhanced with the following:

callbackUrl – This enables you to receive a callback when the indexing operation finishes on a video.
indexingPreset – You can specify if you want to use the default preset or only perform audio indexing.
streamingPreset – With this parameter, you can control whether the video gets encoded to multiple bitrates. You may want to disable encoding if you are only interested in extracting insights from your videos; this helps improve the turnaround time for processing the video.

Re-Index Breakdown – This new API enables you to re-trigger insights extraction on an existing video in your account.
Re-Index Breakdown by External Id – Similar to the above, but this API will enable you to use the external id versus the id provided by Video Indexer to trigger the re-indexing of the video.
Update Transcript – This API enables you to update the transcript for a video by providing a JSON string containing the full video VTT.
Update Face Name – This is an API for providing a friendly name to the faces detected by Video Indexer.
Get Insights Widget URL – This API has been updated with an optional parameter called “allowEdit” that would allow the user to get an access token for editing in embedded mode.
Get Insights Widget URL by External Id – Similar to the above, but this API will enable you to use the external id versus the id provided by Video Indexer.
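The new Upload parameters can be combined in a single request. Here is a hedged Python sketch of building such a request URL; only callbackUrl, indexingPreset, and streamingPreset come from the list above, while the base address and the name/videoUrl parameters are illustrative assumptions (check the API reference page for the actual signature):

```python
from urllib.parse import urlencode

# Assumed base address for illustration only.
UPLOAD_BASE = "https://videobreakdown.azure-api.net/Breakdowns/Api/Partner/Breakdowns"

def build_upload_url(name, video_url, callback_url=None,
                     indexing_preset=None, streaming_preset=None):
    """Assemble the query string for an Upload call, including the new optional parameters."""
    params = {"name": name, "videoUrl": video_url}
    if callback_url:
        params["callbackUrl"] = callback_url          # called back when indexing finishes
    if indexing_preset:
        params["indexingPreset"] = indexing_preset    # e.g. audio-only indexing
    if streaming_preset:
        params["streamingPreset"] = streaming_preset  # e.g. skip multi-bitrate encoding
    return UPLOAD_BASE + "?" + urlencode(params)
```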

Other updates

Logic Apps – With the Video Indexer connector for Logic Apps, you can now set up custom workflows connecting your most used apps with Video Indexer to further automate the process of extracting insights from your video. You can read more about this in the Video Indexer connector for Logic Apps blog.
Microsoft Flow – With the integration between Video Indexer and Microsoft Flow, users can now extract insights from their business videos without writing a single line of code.
UI updates – The Video Indexer web portal UI has been updated with several enhancements based on feedback provided by customers.

Besides all the above listed updates, Video Indexer backend services have been updated to make the service more robust and efficient. Looking ahead, Video Indexer will be making progress along the following dimensions:

More video AI algorithms to extract additional metadata.
Improving accuracy of video AI technologies.
More intelligent correlation among the extracted metadata to provide human friendly insights.
User interface enhancement to showcase the extracted metadata and insights.
Paid offering of Video Indexer.

Please submit your feedback related to Video Indexer on UserVoice. You can also track updates from the Video Indexer team by following us on Twitter (@Video_Indexer). Members of the Video Indexer team will be present at the Microsoft Stand (Hall 15, Stand 1, 35, and 36). Come see us to learn more!
Source: Azure

New Power BI Connector for Azure Enterprise users

We are very pleased to announce the release of the Azure Consumption and Insights Connector in Power BI Desktop. Enterprise customers can use it to pull Azure charge and usage data for both Azure and Marketplace resources, and to explore, analyze, and build their own custom dashboards.

Learn more by reading the detailed documentation on getting started with the APIs. Currently, we also have a Power BI Content Pack that can be used by enterprise customers for performing detailed analysis on their Azure usage and spend details.

What’s new?

Four datasets are set up with the default behavior:

Current Billing period data for Usage and Price Sheet
Data since May 2014 for Balance Summary and Marketplace

We have added new parameters to pull data for any historical period as a moving window of data. For example, if you are interested in doing a month over month comparison you can pull data for that month from the prior and current year by using the new parameters.

Get started

Please check the steps listed in the Azure Consumption and Insights Connector documentation.

We have also added a guidebook section in the above document to help customers move their existing dashboards built on Azure Enterprise Connector (Beta) to the new connector.

What’s next?

We are currently working on providing this data using ARM authentication. As always, we welcome any feedback or suggestion you may have. These can be sent to us using the Azure Feedback Forum and the Azure MSDN forum. We will continue to enhance our collateral with additional functionality to provide richer insights into your usage and spend data for all workloads running on Azure.
Source: Azure

Events, Data Points, and Messages – Choosing the right Azure messaging service for your data

With the introduction of Event Grid, Microsoft Azure now offers an even greater choice of messaging infrastructure options. The expanded messaging service fleet consists of the Service Bus message broker, the Event Hubs streaming platform, and the new Event Grid event distribution service. Those services, which are all focused on moving datagrams, are complemented by the Azure Relay that enables bi-directional, point-to-point network connection bridging.

At first glance, it may appear that Service Bus, Event Hubs, and the Event Grid compete with each other. They all accept datagrams, either called events or messages, and they enable consumers to obtain and handle them.

Looking more closely at the kind of information that is published and how that information is consumed and processed reveals, however, that the usage-scenario overlap of the three services is rather small and that they are very complementary. A single application might use all three services in combination, each for different functional aspects, and we expect a lot of Azure solutions to do so.

To understand the differences, let’s first explore the intent of the publisher.

If a publisher has a certain expectation of how the published information item ought to be handled, and what audience should receive it, it’s issuing a command, assigning a job, or handing over control of a collaborative activity, any of which is expressed in a message.

Message exchanges

Messages often carry information that passes the baton of handling certain steps in a workflow or a processing chain to a different role inside a system. Those messages, like a purchase order or a monetary account transfer record, may express significant inherent monetary value. That value may be lost and/or very difficult to recover if such a message were somehow lost in transfer. The transfer of such messages may be subject to certain deadlines, might have to occur at certain times, and may have to be processed in a certain order. Messages may also express outright commands to perform a specific action. The publisher may also expect that the receiver(s) of a message report back the outcome of the processing, and will make a path available for those reports to be sent back.

This kind of contractual message handling is quite different from a publisher offering facts to an audience without any specific expectations of how they ought to be handled. Such facts are best called events.

Event distribution and streaming

Events are also messages, but they don’t generally convey a publisher intent, other than to inform. An event captures a fact and conveys that fact. A consumer of the event can process the fact as it pleases and doesn’t fulfill any specific expectations held by the publisher.

Events largely fall into two big categories: They either hold information about specific actions that have been carried out by the publishing application, or they carry informational data points as elements of a continuously published stream.

Let’s first consider an example of an event sent based on an activity. Once a sales support application has created a data record for a new sales lead, it might emit an event that makes this fact known. The event will contain some summary information about the new lead, thought to be sufficient for a receiver to decide whether it is interested in more details, and some form of link or reference that allows a receiver to obtain those details.

The ability to subscribe to the source and therefore obtain the event, and to subsequently get the referenced information will obviously be subject to access control checks. Any such authorized party may react to the event with their own logic and therefore extend the functionality of the overall system.

A subscriber to the “new sales lead” event may, for instance, be an application that handles newsletter distribution, and signs up the prospective customer to newsletters that match their interests and which they agreed to receive. Another subscriber to the same event may put the prospective customer on an invitation list for a trade show happening in their home region in the following month and initiate sending an invitation letter via regular mail. The latter system extension may be a function that’s created and run just on behalf of the regional office for the duration of a few weeks before the event, and subsequently removed.

The core of the sales support application isn’t telling those two subscribers what to do and isn’t even aware of them. They are authorized consumers of events published by the source application, but the coupling is very loose, and removing these consumers doesn’t impact the source application’s functional integrity. Creating transparency into the state changes of the core application allows the overall system to be extended easily, either permanently or temporarily.

Events that inform about discrete “business logic activity” are different from events that are emitted with an eye toward statistical evaluation, where the value of emitting those events lies in the derived insights. Such statistics may be used for application and equipment health and load monitoring, for user experience and usage metrics, and many other purposes. Events that support the creation of statistics are emitted very frequently, and usually capture data point observations made at a certain instant.

The most common examples of events carrying data points are log events as they are produced by web servers or, in a different realm, by environmental sensors in field equipment. Typically, an application wouldn’t trigger actions based on such point observations, but rather on a derived trend. If a temperature sensor near a door indicates a chilly temperature for a moment, instantly turning up the heating is likely an overreaction. But if the temperature remains low for a minute or two, turning up the heat a notch and also raising an alert about that door possibly not being shut are good reactions. These examples are based on looking at a series of events and calculating the temperature trend over time, not just a point observation.

The analysis of such streams of events carrying data points, especially when near real-time insights are needed, requires data to be accumulated in a buffer that spans a desired time window and a desired number of events, and then processed using some statistical function or some machine-trained algorithm. The best pattern for acquiring events into such a buffer is to pull them towards the buffer, do the calculation, move the time window, pull the next batch of events to fill the time window, do the next calculation, and so forth.
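The temperature example above can be sketched as a small sliding-window buffer. This is a simplified local illustration of the windowed-analysis pattern, not an Azure API; the class name, window size, and threshold are all illustrative assumptions:

```python
from collections import deque
from statistics import mean

class TemperatureTrend:
    """Sliding time-window buffer over point-observation events.

    Illustrative sketch only: names and thresholds are assumptions,
    not part of any Azure SDK.
    """
    def __init__(self, window_seconds=120, low_threshold=15.0):
        self.window_seconds = window_seconds
        self.low_threshold = low_threshold
        self.buffer = deque()  # (timestamp, reading) pairs

    def add(self, timestamp, reading):
        self.buffer.append((timestamp, reading))
        # Move the window: drop observations older than the window span.
        cutoff = timestamp - self.window_seconds
        while self.buffer and self.buffer[0][0] < cutoff:
            self.buffer.popleft()

    def sustained_low(self):
        # React to the trend over the window, not to a single point.
        readings = [r for _, r in self.buffer]
        return len(readings) > 1 and mean(readings) < self.low_threshold
```

Feeding events into `add` and checking `sustained_low` after each batch mirrors the pull-calculate-slide loop described above.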

The Azure Messaging services fleet

Applications emit action events and data point events as messages to provide insights into what work they do and how that work is progressing. Other messages are used to express commands, work jobs, or transfers of control between collaborating parties. While these are all messages, the usage scenarios are so different that Microsoft Azure provides a differentiated, and yet composable, portfolio of services.

Azure Event Hubs

Azure Event Hubs is designed with a focus on the data point event scenario. An Event Hub is an “event ingestor” that accepts and stores event data, and makes that event data available for fast “pull” retrieval. A stream analytics processor tasked with a particular analytics job can “walk up” to the Event Hub, pick a time offset, and replay the ingested event sequence at the required pace and with full control; an analytics task that requires replaying the same sequence multiple times can do so. Because most modern systems handle many data streams from different publishers in parallel, Event Hubs supports a partitioning model that allows keeping related events together while enabling fast and highly parallelized processing of the individual streams that are multiplexed through the Event Hub. Each Event Hub instance is typically used for events of very similar shape and data content from the same kind of publishers, so that analytics processors get the right content in a timely fashion, and without skipping.

Example: A set of servers that make up a web farm may push their traffic log data into one Event Hub, and the partition distribution of those events may be anchored on the respective client IP address to keep related events together. That Event Hub capturing traffic log events will be distinct from another Event Hub that receives application tracing events from the same set of servers because the shape and context of those events differ.

The temperature sensors discussed earlier each emit a distinct stream that will be kept together using the Event Hubs partitioning model, using the identity of the device as the partitioning key. The same partitioning logic and a compatible consumption model are also used in Azure IoT Hub.
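The idea of a partitioning key can be sketched as stable hashing: every event carrying the same key (a device ID or client IP) lands in the same partition, so related events stay together and stay ordered. This is an illustrative sketch only; the actual Event Hubs service uses its own internal hashing scheme:

```python
import zlib

def choose_partition(partition_key: str, partition_count: int) -> int:
    """Map a partition key (e.g. a device ID or client IP) to a partition.

    Stable hashing sketch for illustration; not the service's actual
    hash function.
    """
    return zlib.crc32(partition_key.encode("utf-8")) % partition_count
```

Because the mapping is deterministic, all events for one device are processed by the same partition consumer, which is what keeps the per-device stream coherent.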

The Event Hubs Capture feature automatically writes batches of captured events into either Azure Storage blob containers or into Azure Data Lake and enables timely batch-oriented processing of events as well as “event sourcing” based on full raw data histories.

Azure Event Grid

Azure Event Grid is the distribution fabric for discrete “business logic activity” events that stand alone and are valuable outside of a stream context. Because those events are not as strongly correlated and also don’t require processing in batches, the model for how those events are being dispatched for processing is very different.

The first assumption made for the model is that there’s a very large number of different events for different contexts emitted by an application or platform service, and a consumer may be interested in just one particular event type or just one particular context. This motivates a filtered subscriber model where a consumer can select a precise subset of the emitted events to be delivered.  
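The filtered subscriber model can be sketched as a small matching function. The field names below mirror Event Grid’s filtering concepts (included event types, subject prefix and suffix), but this is a simplified local sketch, not the service’s actual evaluation logic:

```python
def matches(subscription: dict, event: dict) -> bool:
    """Check an event against a subscription's filters.

    Simplified sketch of filtered subscriptions: a consumer selects a
    precise subset of events by type and by subject prefix/suffix.
    """
    types = subscription.get("included_event_types")
    if types and event["eventType"] not in types:
        return False
    prefix = subscription.get("subject_begins_with", "")
    suffix = subscription.get("subject_ends_with", "")
    subject = event["subject"]
    return subject.startswith(prefix) and subject.endswith(suffix)
```

A subscriber interested only in new sales leads would filter on that one event type and a subject prefix, and would never see the rest of the application’s event traffic.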

The second assumption is that independent events can generally be processed in a highly parallelized fashion using web service calls or “serverless” functions. The most efficient model for dispatching events to those handlers is to “push” them out, and have the existing auto-scaling capabilities of the web site, Azure Functions, or Azure Logic Apps manage the required processing capacity. If Azure Event Grid gets errors indicating that the target is too busy, it will back off briefly, which allows for more resources to be spun up. This composition of Azure Event Grid with existing service capabilities in the platform ensures that customers don’t need to pay for running “idle” functionality, such as a custom VM or container hosting the aforementioned newsletter service that does nothing but wait for the next event, while still having processing capacity ready within milliseconds of such an event occurring.
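The push-with-backoff behavior described above can be sketched as a simple retry loop: on a busy response, wait with exponentially growing delays so auto-scaling has time to add capacity, then try again. This is a simplified sketch, not Event Grid’s actual retry policy, and the function names are illustrative:

```python
import time

def push_with_backoff(deliver, event, max_attempts=5, base_delay=0.5):
    """Push an event to a handler, backing off when the target is busy.

    'deliver' is assumed to return True on success and False on a
    busy/throttled response. Exponentially growing delays give the
    target's auto-scaler time to spin up capacity.
    """
    for attempt in range(max_attempts):
        if deliver(event):
            return True
        time.sleep(base_delay * (2 ** attempt))  # back off, then retry
    return False
```

After the final attempt the caller would typically dead-letter or log the event rather than drop it silently.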

Azure Service Bus

Azure Service Bus is the “Swiss Army Knife” service for all other generic messaging tasks. While Azure Event Grid and Azure Event Hubs have a razor-sharp focus on the collection and distribution of events at great scale, and with great velocity, an Azure Service Bus namespace is a host for queues holding jobs of critical business value. It allows for the creation of routes for messages that need to travel between applications and application modules. It is a solid platform for workflow and transaction handling and has robust facilities for dealing with many application fault conditions.

A sale recorded in a point-of-sale solution is both a financial record and an inventory tracking record, and not a mere event. It’s recorded in a ledger, which will eventually be merged into a centralized accounting system, often via several integration bridges, and the information must not be lost on the way. The sales information, possibly expressed as separate messages to keep track of the stock levels at the point of sale, and across the sales region, may be used to initiate automated resupply orders with order status flowing back to the point of sale.

A particular strength of Service Bus is also its function as a bridge between elements of hybrid cloud solutions and systems that include branch-office or work-site systems. Systems that sit “behind the firewall”, are roaming across networks, or are occasionally offline can’t be reached directly via “push” messaging, but require messages to be sent to an agreed pickup location from where the designated receiver can obtain them.

Service Bus queues or topic subscriptions are ideal for this use-case, where the core of the business application lives in the cloud or in an on-site datacenter, while branch offices, work sites, or service tenants are spread across the world. This model is particularly popular with SaaS providers in health care, tax and legal consulting, restaurant services, and retail.

Composition

Because it’s often difficult to draw sharp lines between the various use-cases, the three services can also be composed. (Note that Event Grid is still in early preview; some of the composition capabilities described here will become available in the coming months.)

First, both Service Bus and Event Hub will emit events into Event Grid that will allow applications to react to changes quickly, while not wasting resources on idle time. When a queue or subscription is “activated” by a message after sitting idle for a period of time, it will emit a Grid event. The Grid event can then trigger a function that spins up a job processor.

This addresses the case where high-value messages flow only very sporadically, maybe at rates of a handful of messages per day, and to keep a service alive on an idle queue will be unnecessarily costly. Even if the processing of said messages were to require substantial resources, the spin-up of those resources can be anchored on the Event Grid event trigger. The available queue messages are then processed, and the resources can again be spun down.
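The spin-up-on-demand pattern can be sketched as a handler for the “queue activated” Grid event: provision a processor, drain the available messages, then release the resources. All function names here are hypothetical helpers used only to illustrate the lifecycle, not Azure APIs:

```python
def on_queue_active(event, drain, spin_up, spin_down):
    """React to a 'queue activated' Grid event.

    Hypothetical sketch: spin up processing resources, drain the
    queue's available messages, then spin resources back down so
    nothing is paid for while the queue sits idle.
    """
    processor = spin_up()
    try:
        return drain(processor)       # process whatever is available
    finally:
        spin_down(processor)          # release resources either way
```

The `finally` block matters: even if draining fails, the expensive resources are released, and the next Grid event can trigger a fresh spin-up.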

Event Hub will emit a Grid event when a Capture package has been dropped into an Azure Storage container, and this can trigger a function to process or move the package.

Second, Event Grid will allow subscribers to drop events into Service Bus queues or topics, and into Event Hubs.

If there’s an on-premises service in a hybrid cloud solution that is interested in specific files appearing in an Azure Storage container so that it can promptly download them, it can reach out to the cloud through NAT and firewall layers and listen on a Service Bus queue that is subscribed to that event on Azure Storage.

The same model of routing a Grid event to a queue is applicable if reacting to the event is particularly costly in terms of time and/or resources, or if there’s a high risk of failure. The Event Grid will wait for, at most, 60 seconds for an event to be positively acknowledged. If there’s any chance that the processing will take longer, it’s better to turn to the Service Bus pull model that allows for processing to take much longer while maintaining a lock on the message.
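The pull model with a held lock can be sketched as a receive loop with complete/abandon semantics. This is a simplified local simulation of Service Bus peek-lock behavior using a plain in-process queue: the message only disappears once the handler succeeds, and on failure it is made available again for redelivery:

```python
import queue

def pull_and_process(work_queue, handler):
    """Pull-model processing with complete/abandon semantics.

    Simplified sketch of peek-lock: 'complete' removes the message
    only after the handler succeeds; on failure the message is
    'abandoned' (put back) so it can be redelivered. Real Service Bus
    also enforces a lock duration, which is not modeled here.
    """
    try:
        message = work_queue.get_nowait()
    except queue.Empty:
        return None
    try:
        result = handler(message)   # may take much longer than 60 seconds
        work_queue.task_done()      # "complete": the message is gone
        return result
    except Exception:
        work_queue.put(message)     # "abandon": available for redelivery
        work_queue.task_done()
        raise
```

Because the receiver controls the pace, long-running or failure-prone work fits this model far better than a push delivery that must be acknowledged within a fixed window.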

Since many Event Grid events are also worth examining statistically and projecting over time, you can route them selectively into an Event Hub, and from there into the many different analytics channels that are Event Hubs enabled, without writing any extra code. Event Hubs Capture is also a great archive facility for Grid events through this easily configured path.

Summary

Azure Messaging provides a fleet of services that allows application builders to pick a fully-managed service that best fits their needs for a particular scenario. The services follow common principles and provide composability that doesn’t force developers into hard decisions choosing between the services. The core messaging fleet that consists of Event Hubs, Event Grid, Service Bus, and the Relay is complemented by further messaging-based or message-driven Azure services for more specific scenarios, such as Logic Apps, IoT Hub and Notification Hubs.

It’s quite common for a single application to rely on multiple messaging services in composition, and we hope this article provides some orientation on which of the core services is most appropriate for each scenario.
Source: Azure

Live from LA: Microsoft at Open Source Summit North America

Today, members of the Microsoft team are at Open Source Summit North America, where attendees will learn more about our open source journey from a keynote, sessions, our booth and much more. It has been a big year for Microsoft’s open source journey, from the technology we’ve delivered, to how we work with the community, to our continued cultural shift.

We’ve joined the Linux Foundation, the Cloud Foundry Foundation, and the Cloud Native Computing Foundation. New Linux VMs have outpaced Windows VMs on Azure. Visual Studio Code, our open source editor, passed 2 million monthly active users. We’ve brought new tools and services – like MySQL and Postgres managed services – to Azure, and we’ve continued to work to bring flagship products like SQL Server to Linux so it’s easier for developers to take advantage of these technologies for their cloud applications.

As we continue to work towards making Azure the best cloud for developers, we want to reduce friction so they can quickly build and deploy open source-based solutions without having to maintain the underlying servers and operating systems. Just last week, we brought Azure App Service to Linux, enabling developers to quickly build, deploy, and scale applications without having to maintain the underlying web servers or operating systems.

More importantly, we’ve learned a lot from the vibrant open source communities we’re working with, which is changing our culture, how we design products, and how we work with our customers. Read more about our latest open source strategy, and how we’re working with communities, from Julia Liuson, Corporate Vice President, Developer Tools & Services, over at the open.microsoft.com blog, and tune in live here. Julia’s keynote starts at 4:25 p.m. PT!
Source: Azure