Simplifying event-driven architectures with the latest updates to Event Grid

Event-driven architectures are increasingly replacing less dynamic polling-based systems, bringing the benefits of serverless computing to IoT scenarios, data processing tasks, and infrastructure automation jobs. As the natural evolution of microservices, event-driven design is being adopted by companies all over the world to create new experiences in existing applications or to bring those applications to the cloud, building more powerful and complex scenarios every day.

Today, we’re incredibly excited to announce a series of updates to Event Grid that will power higher performance and more advanced event-driven applications in the cloud:

Public preview of IoT Hub device telemetry events
Public preview of Service Bus as an event handler
Automatic server-side geo-disaster recovery
General availability of Event Domains, now with up to 100K topics per Domain
Public preview of 1 MB event support
List search and pagination APIs
General availability of advanced filters with increased depth of filtering

Expanded integration with the Azure ecosystem

One of the most requested features since we launched the Azure IoT Hub integration with Event Grid is device telemetry events. Today, we're finally enabling that feature in public preview in all public regions except East US, West US, and West Europe. We're excited for you to try this capability and build more streamlined IoT solutions for your business.

Subscribing to device telemetry events allows you to easily integrate data from your devices into your solution, including serverless applications using Azure Functions or Azure Logic Apps, as well as any other service, on Azure or not, by using webhooks. This simplifies IoT architectures by eliminating the need for additional services that poll for device telemetry before further processing.

By publishing device telemetry events to Event Grid, IoT Hub expands the services your data can reach beyond the endpoints supported through message routing. For example, you can automate downstream workflows by creating different subscriptions to device telemetry events for different device types, identified by a device twin tag, and triggering distinct Azure Functions or third-party applications for unique computation per device type. When you create Event Grid subscriptions to device telemetry events, we create a default route in IoT Hub that handles all of them.
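To make this concrete, here is a minimal sketch of creating such a subscription with the azure-mgmt-eventgrid Python SDK. The resource names and webhook endpoint are hypothetical placeholders, and exact method and model names can vary across SDK versions:

# A minimal sketch (not production code) of subscribing to IoT Hub device
# telemetry events with the azure-mgmt-eventgrid Python SDK. All names and
# the webhook URL are hypothetical; method/model names can vary by version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventgrid import EventGridManagementClient
from azure.mgmt.eventgrid.models import (
    EventSubscription,
    EventSubscriptionFilter,
    WebHookEventSubscriptionDestination,
)

credential = DefaultAzureCredential()
client = EventGridManagementClient(credential, "<subscription-id>")

# The subscription is scoped to the IoT hub that emits the telemetry.
iot_hub_scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Devices/IoTHubs/<iot-hub-name>"
)

subscription = EventSubscription(
    destination=WebHookEventSubscriptionDestination(
        endpoint_url="https://contoso.example.com/api/telemetry-handler"
    ),
    # Deliver only telemetry events, not device lifecycle events.
    filter=EventSubscriptionFilter(
        included_event_types=["Microsoft.Devices.DeviceTelemetry"]
    ),
)

client.event_subscriptions.begin_create_or_update(
    iot_hub_scope, "device-telemetry-to-webhook", subscription
).result()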

Learn more about IoT Hub device telemetry in docs, and continue to submit your suggestions through the Azure IoT User Voice forum.

We are also adding Service Bus as an event handler for Event Grid in public preview, so starting today you can route your events in Event Grid directly to Service Bus queues. Service Bus can now act as either an event source or an event handler, making for a more robust experience delivering events and messages in distributed enterprise applications. The preview does not yet support Service Bus topics or sessions, but it works with all tiers of Service Bus queues.

This enables command-and-control scenarios in which you receive events about activity in other services, such as blob created, device created, or job finished, and pass them along for further processing.
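As an illustration, here is a minimal sketch of routing Blob Storage events to a Service Bus queue with the azure-mgmt-eventgrid Python SDK; all resource IDs below are hypothetical placeholders:

# Sketch: route Blob Storage events to a Service Bus queue. Resource IDs are
# hypothetical; 'client' is the EventGridManagementClient from the earlier sketch.
from azure.mgmt.eventgrid.models import (
    EventSubscription,
    ServiceBusQueueEventSubscriptionDestination,
)

queue_destination = ServiceBusQueueEventSubscriptionDestination(
    resource_id=(
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.ServiceBus/namespaces/<namespace>/queues/<queue-name>"
    )
)

storage_scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)

client.event_subscriptions.begin_create_or_update(
    storage_scope,
    "blob-events-to-queue",
    EventSubscription(destination=queue_destination),
).result()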

Learn more about Service Bus as a destination in docs.

Server-side geo disaster recovery

Event Grid now has built-in automatic geo disaster recovery (GeoDR) of metadata, applicable to all existing Domains, Topics, and Event Subscriptions, not just new ones. This provides vastly improved resilience against service interruptions, fully managed by the platform. If an outage takes out an entire Azure region, the Event Grid service will already have all of your eventing infrastructure metadata synced to a paired region, and your new events will begin to flow again with no intervention required on your side, avoiding service interruption automatically.

Disaster recovery is generally measured with two metrics:

Recovery Point Objective (RPO): the minutes or hours of data that may be lost.
Recovery Time Objective (RTO): the minutes or hours the service may be down.

Event Grid's automatic failover has different RPOs and RTOs for your metadata (event subscriptions and so on) and your data (events). If you need a different specification from the ones below, you can still implement your own client-side failover using the topic health APIs (a minimal sketch follows the list below).

Metadata RPO: Zero minutes. You read that right. Any time a resource is created in Event Grid, it's instantly replicated across regions. In the event of a failover, no metadata is lost.
Metadata RTO: Though failover generally completes much more quickly, Event Grid will begin to accept create/update/delete calls for topics and subscriptions within 60 minutes.
Data RPO: If your system is healthy and caught up on existing traffic at the time of regional failover, the RPO for events is about 5 minutes.
Data RTO: As with metadata, this generally happens much more quickly; however, Event Grid will begin accepting new traffic within 60 minutes of a regional failover.
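If you need tighter guarantees than these, the client-side approach mentioned above is straightforward to sketch. Here is a minimal illustration that publishes to a primary topic and falls back to a secondary topic in a paired region; the endpoints and keys are hypothetical placeholders for topics you would provision yourself:

# Client-side failover sketch: try the primary topic first, then a secondary
# topic in a paired region. Endpoints and keys are hypothetical placeholders.
import requests

TOPICS = [
    ("https://primary-topic.westus2-1.eventgrid.azure.net/api/events", "<primary-key>"),
    ("https://secondary-topic.eastus-1.eventgrid.azure.net/api/events", "<secondary-key>"),
]

def publish(events):
    last_error = None
    for endpoint, key in TOPICS:
        try:
            response = requests.post(
                endpoint, json=events, headers={"aeg-sas-key": key}, timeout=10
            )
            response.raise_for_status()
            return  # delivered successfully
        except requests.RequestException as err:
            last_error = err  # fall through to the next region
    raise RuntimeError("publish failed in all regions") from last_error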

Here's the best part: there is no cost for metadata GeoDR on Event Grid. It is included in the current price of the service and won't incur any additional charges.

Powering advanced event-driven workloads

As we see more advanced event-driven architectures in diverse scenarios such as IoT, CRM, and financial services, we've noticed an increasing need to expand our capabilities for multitenant applications and for workloads that handle larger amounts of data in their events.

Event Domains give you the power to organize your entire eventing infrastructure under a single construct, set fine-grained authorization rules on each topic for who can subscribe, and manage all event publishing with a single endpoint. Classic pub-sub architectures are built exclusively on topics and subscriptions; as you build more advanced and high-fidelity event-driven architectures, however, the maintenance burden increases exponentially. Event Domains take the headache out of this by handling much of the management for you.

Today we're happy to announce that Event Domains are now generally available, and with that, you can have 100,000 topics per Domain. Here's the full set of Event Domain limits at general availability:

100,000 topics per Event Domain
100 Event Domains per Azure Subscription
500 event subscriptions per topic in an Event Domain
50 ‘firehose’ event subscriptions at the Event Domain scope
5,000 events/second into an Event Domain

As always, if these limits don't suit you, feel free to reach out via support ticket or by emailing askgrid@microsoft.com so we can get you higher capacity.
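To give a feel for how Event Domains work in practice, here is a minimal publishing sketch: every event goes to the Domain's single endpoint, and the topic field on each event routes it to the right domain topic. The endpoint, key, and event contents below are hypothetical:

# Sketch: publish through a single Event Domain endpoint. The "topic" field
# on each event selects the domain topic (for example, one per tenant).
import datetime
import uuid

import requests

DOMAIN_ENDPOINT = "https://contoso-domain.westus2-1.eventgrid.azure.net/api/events"
DOMAIN_KEY = "<domain-access-key>"

events = [
    {
        "id": str(uuid.uuid4()),
        "topic": "customer-a",  # the domain topic to route this event to
        "subject": "orders/created",
        "eventType": "Contoso.Order.Created",
        "eventTime": datetime.datetime.utcnow().isoformat() + "Z",
        "dataVersion": "1.0",
        "data": {"orderId": 42},
    }
]

response = requests.post(
    DOMAIN_ENDPOINT, json=events, headers={"aeg-sas-key": DOMAIN_KEY}, timeout=10
)
response.raise_for_status()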

We also acknowledge that advanced event-driven architectures don't always fit within the confines of 64 KB. Handling larger events directly makes for a simpler architecture, so today we're announcing the public preview of events up to 1 MB.

No configuration changes are required, larger events work with existing event subscriptions, and everything under 64 KB is still covered by our general availability SLA. To try it out, just push larger events. Note that events over 64 KB are charged in 64 KB increments, and the batch size limit for events sent to Event Grid as a JSON array is still 1 MB in total.
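Because events over 64 KB are billed in 64 KB increments and a batch is capped at 1 MB, it can be worth checking payload sizes client-side before publishing. A small illustrative helper (the helper itself is hypothetical; the limits are the ones described above):

# Illustrative helper: check a batch's serialized size against the 1 MB cap
# before publishing, since events over 64 KB are billed in 64 KB increments.
import json

MAX_BATCH_BYTES = 1024 * 1024  # 1 MB limit for a JSON array of events

def batch_fits(events):
    size = len(json.dumps(events).encode("utf-8"))
    return size <= MAX_BATCH_BYTES, size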

Simplified management of events

You might have thousands of event subscriptions or, with the general availability of Event Domains, hundreds of thousands of topics floating around your Azure subscription. To make searching and managing these resources easier, we've introduced list search and list pagination APIs throughout Event Grid. For more information, check out the details in the Azure Event Grid documentation.
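As a sketch of how these list APIs can be consumed from Python, the listing operations accept a server-side filter and page size, and the SDK's pager follows continuation tokens for you. The exact filter grammar below is an assumption; consult the documentation for the supported syntax:

# Sketch: server-side search and automatic pagination when listing topics.
# 'client' is an EventGridManagementClient as in the earlier sketches;
# the filter grammar shown is an assumption.
pager = client.topics.list_by_subscription(
    filter="contains(name, 'orders')",  # server-side name search
    top=20,                             # page size
)

for topic in pager:  # the pager follows nextLink continuation tokens
    print(topic.name, topic.provisioning_state)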

Advanced filters used to route events in Event Grid are now generally available, with no restriction on the number of nested objects in your JSON. This allows for more granularity when filtering events before passing them to other services for further processing, reducing the compute time and resources that would otherwise be spent performing this filtering elsewhere.

If you haven’t played with advanced filters yet, you can use the following operators on any part of the event, making the possibilities nearly endless: StringContains, StringBeginsWith, StringEndsWith, StringIn, StringNotIn, NumberGreaterThan, NumberGreaterThanOrEquals, NumberLessThan, NumberLessThanOrEquals, NumberIn, NumberNotIn, BoolEquals.
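For example, here is a hedged sketch of a filter definition using the azure-mgmt-eventgrid Python models; the event payload keys are hypothetical, and model names may vary slightly by SDK version:

# Sketch: an advanced filter combining a string and a numeric condition
# (conditions are ANDed). The payload keys under "data" are hypothetical.
from azure.mgmt.eventgrid.models import (
    EventSubscriptionFilter,
    NumberGreaterThanAdvancedFilter,
    StringInAdvancedFilter,
)

advanced_filter = EventSubscriptionFilter(
    advanced_filters=[
        StringInAdvancedFilter(key="data.device.type", values=["thermostat", "valve"]),
        NumberGreaterThanAdvancedFilter(key="data.reading.temperature", value=80),
    ]
)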

Get started today

As always, we love to hear your thoughts, feedback, and wish lists as you get a chance to try out these new features! You can start now with the following resources, and please reach out with your feedback.

Sign up for an Azure free account if you don’t have one yet
Subscribe to IoT Hub device telemetry events with Event Grid
Learn more about using Service Bus as an event handler
Build more powerful multitenant applications with Event Domains
Perform searches and pagination over thousands and thousands of events with these new APIs
Route only the necessary events for processing using advanced filters

Source: Azure

Manage your cross-cloud spend using Azure Cost Management

It’s common for enterprises to run workloads on more than one cloud provider; however, adopting a multi-cloud strategy comes with complexities like handling different cost models, varying billing cycles, and different cloud designs that can be difficult to navigate across multiple dashboards and views.

We’ve heard from many of you that you need a central cost management solution built to help you manage your spend across multiple cloud providers, prevent budget overruns, maintain control, and create accountability with your consumers.

Azure Cost Management now offers cross-cloud support. This capability is available in preview and can play a critical role in helping you efficiently and effectively manage your organization's multi-cloud needs. Onboard your AWS costs via an easy-to-use cloud connector to get a single view of your expenditures, so you can analyze and budget for the future in one place.

Get started now and create your first AWS connector with the free preview to explore these capabilities and more.

For more information about managing AWS costs, see the Manage AWS costs and usage in Azure documentation.

What’s available in this preview:

Analyzing Azure and AWS spend with cost analysis

Get deep insights into your cloud spend with the rich graphic capabilities in cost analysis. Analyze your costs across 18 available dimensions like provider, service name, usage location, availability zones, meter, and tags. Cost analysis provides the flexibility to view granular charges to understand what’s driving costs and anomalies, or high-level views to analyze total and month-over-month costs for one or both of your cloud providers. Learn more about Azure Cost Management cost analysis.


Budgets and alerts across your Azure & AWS spend

Budgets and alerts in Cost Management help you plan for and drive organizational accountability by letting you set spending limits and alert thresholds. You can now set budgets and receive alerts on AWS scopes as well. You can set budgets for various scopes like subscriptions and resource groups, or for your enterprise agreement hierarchy if you are an enterprise customer. Budget notifications are integrated with Azure Action Groups, which enable email and SMS alerts, or you can plug into your automation scripts using the available webhook and Azure Functions integrations. Learn more about Azure Cost Management budgets.


What’s coming next?

With Azure Cost Management, Microsoft is committed to helping you manage your multi-cloud environment. We will continue to develop additional Cost Management features so you can benefit from a unified user experience across both Azure and AWS.
In the coming months we'll be focusing on features like the ability to save and schedule reports, along with additional capabilities in cost analysis, budgets, forecasts, alerts, exports, and show-backs. Looking further into the future, we plan to continue extending our multi-cloud support to additional cloud products.
Follow us at @AzureCostMgmt on Twitter for more updates and information.

Source: Azure

Azure NetApp Files is now generally available

Today, we're excited to announce the general availability (GA) of Azure NetApp Files, the industry's first bare-metal cloud file storage and data management service. Azure NetApp Files is an Azure first-party service for migrating and running the most demanding enterprise file workloads in the cloud, including databases, SAP, and high-performance computing applications, with no code changes. Today's milestone is the result of deep investment by both companies to provide a great experience for our customers through a service that's unique in the industry.

Since the preview launch, several Fortune 100 enterprises across the world have provided valuable feedback that has helped us enrich the service.

"We wanted an on-premise like performance for our reservoir simulation and analysis software. We were thrilled to see Azure NetApp Files exceeding our expectations with an over 5x performance increase. Most importantly, the massive scale-up/down capability of Azure NetApp Files now allows for pure cloud-based consumption of both capacity AND performance,” said Juan Pedro Brett, Digital Transformation Engineer, E&P at Repsol.

Azure NetApp Files represents the culmination of a deep partnership between Microsoft and NetApp, combining NetApp’s proven and industry-leading ONTAP technology with the scale, reach, and enterprise capabilities of Azure.

“Azure NetApp Files is the most significant proof-point of NetApp’s commitment to accelerate our customers’ digital transformation,” said Anthony Lye, Senior Vice President and General Manager of Cloud BU at NetApp. “We have partnered extensively with Microsoft across engineering, sales, and marketing to help customers benefit from the availability of our industry-leading storage and data management technology on Azure. The general availability of Azure NetApp Files is also a fantastic moment to thank our customers for their enormous interest and feedback that has helped shape this service.”

Read more about NetApp’s perspective on this announcement on the NetApp blog.

Deep integration of NetApp's industry-leading ONTAP storage and data management technology into Azure provides unique value to our customers in several ways:

Seamless Azure experience

Azure NetApp Files is a fully managed cloud service with full Azure portal integration and access via REST API and Azure SDKs, and soon via Azure CLI and PowerShell. It’s sold and supported exclusively by Microsoft. Customers can seamlessly migrate and run applications in the cloud without worrying about procuring or managing storage infrastructure. Additionally, customers can purchase Azure NetApp Files and get support through existing Azure agreements, with no up-front or separate term agreement.

Power of NetApp ONTAP

NetApp's ONTAP systems serve hundreds of thousands of customers and have earned the trust of enterprise organizations over decades. The technology provides proven protocol support, including support for NFSv3 and SMB 3.1. It enables powerful data management with snapshots of datasets and high availability, and it can achieve sub-millisecond latency.

Advanced security for business-critical data

Azure NetApp Files is built and operated with Azure’s industry-leading standards and processes for security, and benefits from the multi-layered security provided by Microsoft across its physical datacenters, infrastructure, and operations. Azure NetApp Files provides FIPS-140-2-compliant data encryption at rest, role-based access control (RBAC), Active Directory authentication (enabled for SMB), and export policies for network-based access control.

Support for hybrid scenarios

Azure NetApp Files enables easy migration of data across on-premises and cloud infrastructures using Cloud Sync, a NetApp service for rapid, security-enhanced data synchronization. This simplifies lift-and-shift and DevOps scenarios with capabilities like instantaneous snapshot, restore, and Active Directory integration (for SMB) that work the same way in the cloud and on premises. Integrated data replication and backup will be available in the near future. Learn more about Cloud Sync.

Get started with Azure NetApp Files

You can request onboarding to Azure NetApp Files by submitting an online request or by reaching out to your Microsoft representative.

We will continue to strongly partner with NetApp and look forward to hearing your feedback on Azure NetApp Files. You can email us at ANFFeedback@microsoft.com or share your ideas and suggestions for Azure Storage on our feedback forum.
Source: Azure

Azure.Source – Volume 84

Now available

All U.S. Azure regions now approved for FedRAMP High impact level

We’re now sharing our ability to provide Azure public services that meet U.S. Federal Risk and Authorization Management Program (FedRAMP) High impact level and extend FedRAMP High Provisional Authorization to Operate (P-ATO) to all of our Azure public regions in the United States. Achieving FedRAMP High means that both Azure public and Azure Government data centers and services meet the demanding requirements of FedRAMP High, making it easier for more federal agencies to benefit from the cost savings and rigorous security of the Microsoft Commercial Cloud.

Now in preview

Drive higher utilization of Azure HDInsight clusters with autoscale

We are excited to share the preview of the Autoscale feature for Azure HDInsight. This feature enables enterprises to become more productive and cost-efficient by automatically scaling clusters up or down based on the load or a customized schedule.

Announcing the preview of Windows Server containers support in Azure Kubernetes Service

Kubernetes is taking the app development world by storm. Earlier this month, we shared that Azure Kubernetes Service (AKS) was the fastest-growing compute service in Azure's history. Today, we're excited to announce the preview of Windows Server containers in AKS for the latest versions, 1.13.5 and 1.14.0. With this, Windows Server containers can now be deployed and orchestrated in AKS, enabling new paths to migrate and modernize Windows Server applications in Azure.

Optimize price-performance with compute auto-scaling in Azure SQL Database serverless

Optimizing compute resource allocation to achieve performance goals while controlling costs can be a challenging balance to strike – especially for database workloads with complex usage patterns. To help address these challenges, we are pleased to announce the preview of Azure SQL Database serverless. SQL Database serverless (preview) is a new compute tier that optimizes price-performance and simplifies performance management for databases with intermittent and unpredictable usage. Line-of-business applications, dev/test databases, content management, and e-commerce systems are just some examples across a range of applications that often fit the usage pattern ideal for SQL Database serverless.

Visual interface for Azure Machine Learning service

During Microsoft Build we announced the preview of the visual interface for Azure Machine Learning service. This new drag-and-drop workflow capability in Azure Machine Learning service simplifies the process of building, testing, and deploying machine learning models for customers who prefer a visual experience to a coding experience. This capability brings the familiarity of what we already provide in our popular Azure Machine Learning Studio with significant improvements to ease the user experience.

Technical content

Kubernetes - from the beginning, Part I, Basics, Deployment and Minikube

Kubernetes is a BIG topic. In this blog series, Chris Noring tackles the basics. Part 1 covers why Kubernetes and orchestration in general; Minikube and a simple deployment example; clusters and basic commands; deploying an app; and pod and node concepts and troubleshooting.

Introduction to AzureKusto

This post is to announce the availability of AzureKusto, the R interface to Azure Data Explorer (internally codenamed “Kusto”), a fast, fully managed data analytics service from Microsoft. It is available from CRAN, or you can install the development version from GitHub.

Microsoft Azure for spoiled people

In this article, the author walks you through the easiest possible way to set up a Vue.js CLI-built web app on Azure with continuous integration via GitHub.

Azure IoT Central and MXChip Hands-on Lab

This hands-on lab covers creating an Azure IoT Central application, connecting an MXChip IoT DevKit device to your Azure IoT Central application, and setting up a device template.

Azure shows

A new way to try .NET

Learning a programming language is becoming a fundamental aspect of education across the world. We're always looking for new and interesting ways to teach programming to learners at all levels. From Microsoft Build 2019, we had Maria Naggaga come on to show us the Try .NET project. She shows us how this simple tool allows us to create interactive documentation, workshops, and other interesting learning experiences.

Xamarin.Forms 101: Dynamic resources

Let's take a step back in a new mini-series that I like to call Xamarin.Forms 101. In each episode we will walk through a basic building block of Xamarin.Forms to help you build awesome cross-platform iOS, Android, and Windows applications in .NET. This week we will look at how to use dynamic resources to change the value of a resource while the application is running.

Cosmos DB data in a smart contract with Logic Apps

We show an IoT use case that highlights how to leverage the power of Cosmos DB to manipulate IoT data and use that data in smart contracts via the Ethereum Logic App connector.

ARM templates and Azure policy

Cynthia talks with Satya Vel on the latest ARM template updates including an enhanced template export experience, best practices for ARM clients, and new capabilities that are now available on ARM templates.

Industries and partners

Securing the pharmaceutical supply chain with Azure IoT

You’re responsible for overseeing the transportation of a pallet of medicine halfway around the world. Drugs will travel from your pharmaceutical company’s manufacturing outbound warehouse in central New Jersey to third-party logistics firms, distributors, pharmacies, and ultimately, patients. Each box in that pallet – no bigger than the box that holds the business cards on your desk – contains very costly medicine, the product of 10 years of research and R&D spending. But there are several big catches. Read on to see what they are and how Azure IoT helps overcome them.

How you can use IoT to power Industry 4.0 innovation

IoT is ushering in an exciting—and sometimes exasperating—time of innovation. Adoption isn’t easy, so it’s important to hold a vision of the promise of Industry 4.0 in mind as you get ready for this next wave of business. This post is the fourth in a four-part series designed to help companies maximize their ROI on IoT.
Source: Azure

HB-series Azure Virtual Machines achieve cloud supercomputing milestone

New HPC-targeted cloud virtual machines are first to scale to 10,000 cores

Azure HB-series Virtual Machines are the first on the public cloud to scale an MPI-based high performance computing (HPC) job to 10,000 cores. This level of scaling has long been considered the realm of only the world's most powerful and exclusive supercomputers, but it is now available to anyone using Azure.

HB-series virtual machines (VMs) are optimized for HPC applications requiring high memory bandwidth. For this class of workload, HB-series VMs are the most performant, scalable, and price-performant VMs ever launched on Azure or elsewhere on the public cloud.

With AMD EPYC processors, the HB-series delivers more than 260 GB/s of memory bandwidth, a 128 MB L3 cache, and SR-IOV-based 100 Gb/s InfiniBand. At scale, a customer can utilize up to 18,000 physical CPU cores and more than 67 terabytes of memory for a single distributed-memory computational workload.

For memory-bandwidth-bound workloads, the HB-series delivers something many in HPC thought might never happen: Azure-based VMs are now as capable as, or more capable than, the bare-metal, on-premises status quo that dominates the HPC market, and at a highly competitive price point.

World-class HPC technology

HB-series VMs feature the cloud's first deployment of AMD EPYC 7000-series CPUs explicitly for HPC customers. AMD EPYC features 33 percent more memory bandwidth than any x86 alternative, and even more than leading POWER and ARM server platforms. For context, the 263 GB/s of memory bandwidth that an HB-series VM delivers is 80 percent more than competing cloud offerings in the same memory-per-core class.

HB-series VMs expose 60 non-hyperthreaded CPU cores and 240 GB of RAM, with a base clock of 2.0 GHz and an all-cores boost speed of 2.55 GHz. HB VMs also feature a 700 GB local NVMe SSD and support up to four Managed Disks, including the new Azure P60/P70/P80 Premium Disks.

A flagship feature of HB-series VMs is 100 Gb/s InfiniBand from Mellanox. HB-series VMs expose the Mellanox ConnectX-5 dedicated back-end NIC via SR-IOV, meaning customers can use the same OFED driver stack that they're accustomed to in a bare-metal context. HB-series VMs deliver MPI latencies as low as 2.1 microseconds, with consistency, bandwidth, and message rates in line with bare-metal InfiniBand deployments.

Cloud HPC scaling achievement

As part of early acceptance testing, the Azure HPC team benchmarked many widely used HPC applications. One common class of applications is those that simulate computational fluid dynamics (CFD). To see how far HB-series VMs could scale, we selected the Le Mans 100-million-cell model available to Star-CCM+ customers, with the following results:


The Le Mans 100-million-cell model scaled to 256 VMs across multiple configurations, accounting for as many as 11,520 CPU cores. Our testing revealed that maximum scaling efficiency was achieved with two MPI ranks per NUMA domain, yielding a top-end scaling efficiency of 71.3 percent. For top-end performance, three MPI ranks per NUMA domain yielded the fastest overall results. Customers can choose whichever metric they find most valuable based on a wide variety of factors.
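For readers less familiar with the metric, scaling efficiency is the measured speedup divided by the ideal linear speedup. A quick illustration with made-up numbers (not from the benchmark above):

# Scaling efficiency = measured speedup / ideal linear speedup.
# The run times below are invented for illustration, not benchmark data.
baseline_vms, baseline_seconds = 8, 4000.0   # small reference run
scaled_vms, scaled_seconds = 256, 175.0      # large run

speedup = baseline_seconds / scaled_seconds  # ~22.9x faster
ideal = scaled_vms / baseline_vms            # 32x if scaling were perfect
print(f"{speedup / ideal:.1%} scaling efficiency")  # ~71.4% with these numbers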

Delighting HPC customers on Azure

The unique capabilities and cost-performance of HB-series VMs are a big win for scientists and engineers who depend on high-performance computing to drive their research and productivity to new heights. Organizations spanning aerospace, automotive, defense, financial services, heavy equipment, manufacturing, oil & gas, public sector academic, and government research have shared feedback on how the HB-series has increased product performance and provided new insights through detailed simulation models.

Rescale partners with Azure to provide HPC resources for computationally complex simulations and analytics. Launching today, Azure HB-series VMs can be consumed through Rescale's ScaleX® platform as the new "Amber" compute resource.

“As the only fully managed HPC cloud service in the market, Rescale creates an elegant way to move on-premises HPC workloads to the cloud. We have been waiting with great anticipation for Microsoft to introduce cloud building blocks specifically engineered for HPC,” said Adam McKenzie, CTO of Rescale. “Now, new HB-series VMs on Azure enable MPI workloads to scale to tens of thousands of cores with the kind of cost-performance that rivals on-premises supercomputers.”

Available now

Azure HB-series Virtual Machines are currently available in the South Central US and West Europe regions, with additional regions rolling out soon.

Find out more about high performance computing (HPC) in Azure
Learn about Azure Virtual Machines

Source: Azure

Transforming Azure Monitor Logs for DevOps, granular access control, and improved Azure integration

Logs are critical for many scenarios in the modern digital world. They are used in tandem with metrics for observability, monitoring, troubleshooting, usage and service level analytics, auditing, security, and much more. Any plan to build an application or IT environment should include a plan for logs.

Logs architecture

There are two main paradigms for logs:

Centralized: All logs are kept in a central repository. In this scenario, it is easy to search across resources and cross-correlate logs, but since these repositories get big and include logs from all kinds of sources, it's hard to maintain access control over them. Some organizations completely avoid centralized logging for that reason, while others that use centralized logging restrict access to very few admins, which prevents most of their users from getting value out of the logs.

Siloed: Logs are either stored within a resource or stored centrally but segregated per resource. In these instances, the repository can be kept secure, and access control is coherent with the resource access, but it's hard or impossible to cross-correlate logs. Users who need a broad view of many resources cannot generate insights. In modern applications, problems and insights span across resources, making the siloed paradigm highly limited in its value.

To accommodate the conflicting needs of security and log correlation, many organizations have implemented both paradigms in parallel, resulting in a complex, expensive, and hard-to-maintain environment with gaps in log coverage. This leads to lower usage of log data in the organization and results in decision-making that is not based on data.

New access control options for Azure Monitor Logs

We have recently announced a new set of Azure Monitor Logs capabilities that allow customers to benefit from the advantages of both paradigms. Customers can now have their logs centralized while seamlessly integrated into Azure and its role-based access control (RBAC) mechanisms. We call this resource-centric logging. It is added to the existing Azure Monitor Logs experience automatically while maintaining the existing experiences and APIs. Delivering a new logs model is a journey, but you can start using this new experience today. We plan to enhance and complete alignment of all of Azure Monitor's components over the next few months.

The basic idea behind resource-centric logs is that every log record emitted by an Azure resource is automatically associated with this resource. Logs are sent to a central workspace container that respects scoping and RBAC based on the resources. Users will have two options for accessing the data:

Workspace-centric: Query all data in a specific workspace, the Azure Monitor Logs container. Workspace access permissions apply. This mode will be used by centralized teams that need access to logs regardless of resource permissions. It can also be used for components that don't support resource-centric logging, or for off-Azure resources, though a new option for them will be available soon.

Resource-centric: Query all logs related to a resource. Resource access permissions apply. Logs are served from all workspaces that contain data for that resource, without the need to specify them. If the workspace access control mode allows it, there is no need to grant users access to the workspace. This mode works for a specific resource, all resources in a specific resource group, or all resources in a specific subscription. Most application and DevOps teams will use this mode to consume their logs.
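To illustrate the two modes described above, here is a sketch using the azure-monitor-query Python package (a client library that postdates this post); the workspace and resource IDs are placeholders:

# Sketch of the two access modes with the azure-monitor-query package
# (a client library that postdates this post). IDs are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
query = "AzureActivity | summarize count() by bin(TimeGenerated, 1h)"

# Workspace-centric: requires permissions on the workspace itself.
client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))

# Resource-centric: requires permissions on the resource only; the service
# locates the workspaces that hold that resource's logs.
client.query_resource(
    "/subscriptions/<sub>/resourceGroups/<rg>/providers"
    "/Microsoft.Compute/virtualMachines/<vm-name>",
    query,
    timespan=timedelta(days=1),
)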

The Azure Monitor experience automatically chooses the right mode depending on the scope the user selects. If the user selects a workspace, queries are sent in workspace-centric mode. If the user selects a resource, resource group, or subscription, resource-centric mode is used. The scope is always presented in the top-left section of the Log Analytics screen.

You can also query all logs of resources in a specific resource group using the resource group screen.

Soon, Azure Monitor will also be able to scope queries for an entire subscription.

To make logs more prevalent and easier to use, they are now integrated into many Azure resource experiences. When log search is opened from a resource menu, the search is automatically scoped to that resource and resource-centric queries are used. This means that if users have access to a resource, they'll be able to access their logs. Workspace owners can block or enable such access using the workspace access control mode.

Another capability we're adding is the ability to set permissions per table storing the logs. By default, users granted access to workspaces or resources can read all of their log types. The new table-level RBAC allows admins to use Azure custom roles to define limited access for users, so they're only able to access some of the tables, or to block users from accessing specific tables. You can use this, for example, if you want the networking team to be able to access only the networking-related tables in a workspace or a subscription.
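For example, a custom role can extend the standard workspace query action with a table name. A sketch of such a role definition follows; the role name and scope are hypothetical, while the action strings follow the documented table-level pattern:

# Sketch of a custom role that can query only the Heartbeat table in a
# workspace. Role name and scope are hypothetical placeholders.
heartbeat_table_reader = {
    "Name": "Heartbeat Table Reader",
    "IsCustom": True,
    "Description": "Read only the Heartbeat table in Log Analytics.",
    "Actions": [
        "Microsoft.OperationalInsights/workspaces/read",
        "Microsoft.OperationalInsights/workspaces/query/read",
        "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}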

As a result of these changes, organizations will have simpler models with fewer workspaces and more secure access control. Workspaces now assume the role of a manageable container, allowing administrators to better govern their environments. Users are empowered to view logs in their natural Azure context, helping them leverage the power of logs in their day-to-day work.

The improved Azure Monitor Logs access control lets you enjoy both worlds at once without compromising usability or security. Central teams can have full access to all logs, while DevOps teams can access logs only for their resources. This comes on top of the powerful log analytics, integration, and scalability capabilities that are used by tens of thousands of customers.

Next steps

To use it today, you need to:

Decide which workspaces should be used to store all data. Take into account billing, regulation, and data ownership.
Change your workspace access control mode to "Use resource or workspace permissions" to enable resource-centric access. Workspaces created after March 2019 are configured to this mode by default.
Remove workspace access permissions from your application teams and DevOps.
Let your users become masters of their logs.

Source: Azure

All US Azure regions now approved for FedRAMP High impact level

Today, I’m excited to share our ability to provide Azure public services that meet US Federal Risk and Authorization Management Program (FedRAMP) High impact level and extend FedRAMP High Provisional Authorization to Operate (P-ATO) to all of our Azure public regions in the United States. In October, we told customers of our plan to expand public cloud services and regions from FedRAMP Moderate to FedRAMP High impact level. FedRAMP High was previously available only to customers using Azure Government. Additionally, we’ve increased the number of services available at High impact level to 90, including powerful services like Azure Policy and Azure Security Center, as we continue to drive to 100 percent FedRAMP compliance for all Azure services per our published listings and roadmap. Azure continues to support more services at FedRAMP High impact levels than any other cloud provider.

Achieving FedRAMP High means that both Azure public and Azure Government data centers and services meet the demanding requirements of FedRAMP High, making it easier for more federal agencies to benefit from the cost savings and rigorous security of the Microsoft Commercial Cloud.

While FedRAMP High in the Azure public cloud will meet the needs of many US government customers, certain agencies with more stringent requirements will continue to rely on Azure Government, which provides additional safeguards such as the heightened screening of personnel. Earlier, we announced the availability of new FedRAMP High services for Azure Government.

FedRAMP was established to provide a standardized approach for assessing, monitoring, and authorizing cloud computing products and services for federal agencies, and to accelerate the adoption of secure cloud solutions by federal agencies. The Office of Management and Budget now requires all executive federal agencies to use FedRAMP to validate the security of cloud services. Cloud service providers demonstrate FedRAMP compliance through an Authority to Operate (ATO) or a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB). FedRAMP authorizations are granted at three impact levels based on NIST guidelines: low, medium, and high.

Microsoft is working closely with our stakeholders to simplify our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly by reducing the time required to take a service from available to certified. Our published FedRAMP services roadmap lists all services currently available in Azure Government in our FedRAMP High boundary, as well as services planned for the current year. We are committed to ensuring that Azure services for government provide the best the cloud has to offer and that all Azure offerings are certified at the highest level of FedRAMP compliance.

New FedRAMP High Azure Government Services include:

Azure DB for MySQL
Azure DB for PostgreSQL
Azure DDoS Protection
Azure File Sync
Azure Lab Services
Azure Migrate
Azure Policy
Azure Security Center
Microsoft Flow
Microsoft PowerApps

We will continue our commitment to provide our customers the broadest compliance in the industry, as Azure now supports 91 compliance offerings, more than any other cloud service provider. For a full listing of our compliance offerings, visit the Microsoft Trust Center.
Source: Azure

How you can use IoT to power Industry 4.0 innovation

IoT is ushering in an exciting—and sometimes exasperating—time of innovation. Adoption isn’t easy, so it’s important to hold a vision of the promise of Industry 4.0 in mind as you get ready for this next wave of business.

IoT can serve as an onramp to continual transformation, providing companies with the ability to capitalize more fully on automation, AI, and machine learning. As companies harness the power of IoT, cloud services, robotics, and other emerging technologies, they’ll discover new ways of working, creating, and living. They’ll test and learn more swiftly, and scale results in the most promising areas. And this innovation will find form in smart buildings, more efficient factories, connected cities, fully autonomous vehicles, a healthier environment, and better lives.

Between now and that digital world, there are years of trial and error and dozens of applications ahead. But companies across the spectrum are embedding IoT to attain data and analytics mastery, optimize processes, create new services, and rethink products right now. Their leaders are positioning themselves and their companies to take advantage of the promise of digitization across industries.

This post is the fourth in a four-part series designed to help companies maximize their ROI on IoT. In the first post, we discussed how IoT can transform businesses. In the second, we shared insights on how to create a successful strategy that yields desired ROI. In the third post, we discussed how companies can fill capability gaps. Now let’s offer some fresh thinking on what innovation could look like for your company.

IoT innovation is not one size fits all. What it means for a process manufacturing firm is necessarily different than what it will mean for a healthcare company. To help you understand how you might apply IoT to your business—and learn from companies that have gone before you—here are four different innovation plays.

Push service optimization to new levels

With almost all companies competing on the customer experience, it makes sense to optimize service levels to trim cost, error, and delay from customer-facing processes. Better service can be a key differentiator in the marketplace. And when it’s paired with continual optimization enabled by IoT, your customers start seeing the benefit in their businesses.

Jabil is one of the world’s largest and most innovative providers of manufacturing, design engineering, and supply-chain-management technologies. Jabil was quick to recognize that keeping and increasing its competitive edge required the company to accelerate production cycles and personalize products. Its customers might order a product only once, meaning that they couldn’t afford the time delays and waste of traditional inspection processes. “We have many products that customers expect to [have] in their shops within a week,” says Matt Behringer, chief information officer for enterprise operations and quality systems at Jabil. “And that is including transit.”

Jabil used an IoT approach based on the Microsoft Azure Cortana Intelligence Suite to connect systems, gain predictive intelligence, and increase its flexibility and scalability. In a pilot project that connected an electronics manufacturing production line to the cloud, Jabil was able to anticipate and avoid more than half of circuit board failures at the second step in the process, and the remaining 45 percent at the sixth step. By using AI and machine learning, Jabil can correct board errors even earlier in the process, reducing scrapped materials, product failures, and warranty issues. Now, the IoT platform monitors all individual production lines and collects data from every Jabil factory and product worldwide. Jabil is pushing optimization further by using deep neural networks to refine its automated optical inspection process, increasing speed and accuracy to new levels.

“One of the things we’re able to do with predictive analytics in Azure is reduce waste, whether it’s from a process or design issue, or as a result of maintaining enough excess inventory to ensure we have enough for shipment. We’re confident we can produce a good-quality product all the way through the line,” says Behringer.

Leverage data from a digital ecosystem

As companies build IoT-enabled systems of intelligence, they’re creating ecosystems where partners work together seamlessly in a fluid and ever-changing digital supply chain. Participants gain access to a centralized view of real-time data they can use to fine-tune processes, and analytics to enable predictive decision-making. In addition, automation can help customers reduce sources of waste such as unnecessary resource use.

PCL Construction comprises a group of independent construction companies that perform work in the United States, the Caribbean, and Australia. Recognizing that smart buildings are the future of construction, PCL is partnering with Microsoft to drive smart building innovation and focus implementation efforts.

The company is using the full range of Azure solutions—Power BI, Azure IoT, advanced analytics, and AI—to develop smart building solutions for multiple use cases, including increasing construction efficiency and workplace safety, improving building efficiency by turning off power and heat in unused rooms, analyzing room utilization to create a more comfortable and productive work environment, and collecting usage information from multiple systems to optimize services at an enterprise level. PCL’s customers benefit with greater control, more efficient buildings, and lower energy consumption and costs.

However, the path forward wasn’t easy. “Cultural transformation was a necessary and a driving factor in PCL’s IoT journey. To drive product, P&L, and a change in approach to partnering, we had to first embrace this change as a leadership team,” says PCL manager of advanced technology services Chris Palmer.

Develop a managed-services business

Essen, Germany-based thyssenkrupp Elevator is one of the world’s leading providers of elevators, escalators, and other passenger transportation solutions. The company uses a wide range of Azure services to improve usage of its solutions and streamline maintenance at customers’ sites around the globe.

With business partner Willow, thyssenkrupp has used the Azure Digital Twins platform to create a virtual replica of its Innovation Test Tower, an 800-foot-tall test laboratory in Rottweil, Germany. The lab is also an active commercial building, with nearly 200,000 square feet of occupied space and IoT sensors that transmit data 24 hours a day. Willow and thyssenkrupp are using IoT to gain new insights into building operations and how space is used to refine products and services.

In addition, thyssenkrupp has developed MAX, a solution built on the Azure platform that uses IoT, AI, and machine learning to help service more than 120,000 elevators worldwide. Using MAX, building operators can reduce elevator downtime by half and cut the average length of service calls by up to four times, while improving user satisfaction.

The company’s MULTI system uses IoT and AI to make better decisions about where elevators go, providing faster travel times or even scheduling elevator arrival to align with routine passenger arrivals.

“We constantly reconfigure the space to test different usage scenarios and see what works best for the people in the [Innovation Test Tower] building. We don’t have to install massive new physical assets for testing because we do it all through the digital replica—with keystrokes rather than sledgehammers. We have this flexibility thanks to Willow Twin and its Azure infrastructure,” says professor Michael Cesarz, chief executive officer for MULTI at thyssenkrupp.

Rethink products and services for the digital era

Kohler, a leading manufacturer, is embedding IoT in its products to create smart kitchens and bathrooms, meeting consumer demand for personalization, convenience, and control. Built on the Microsoft Azure IoT platform, these products respond to voice commands, hand motions, weather, and consumer preset options.

And Kohler innovated fast, using Azure to demo, develop, test, and scale the new solutions. “From zero to demo in two months is incredible. We easily cut our development cycle in half by using Azure platform services while also significantly lowering our startup investment,” says Fei Shen, associate director of IoT engineering at Kohler.

The smart bathroom and kitchen products can start a user's shower, adjust the water temperature to a predetermined level, turn on mirror lights to preferred brightness and color, and share the day's weather and traffic. They also warn users if water floods their kitchen or bathroom. The smart fixtures provide Kohler with critical insights into how consumers use its products, insights it can use to develop new products and fine-tune existing features.

Kohler is betting that consumer adoption of smart home technology will grow and is pivoting its business to meet new demand. “We’ve been making intelligent products for about 10 years, things like digital faucets and showers, but none have had IoT capability. We want to help people live more graciously, and digitally enabling our products is the next step in doing that,” said Jane Yun, Associate Marketing Manager in Smart Kitchens and Baths at Kohler.

As these examples show, the possibilities for IoT are boundless and success is different for every company. Some firms will leverage IoT only for internal processes, while others will use analytics and automation to empower all the partners in their digital ecosystems. Some companies will wrap data services around physical product offerings to optimize the customer experience and deepen relationships, while still others will rethink their products and services to tap emerging market demand and out-position competitors.

How will you apply IoT insights to transform your business and processes? Get help crafting your IoT strategy and maximizing your opportunities for ROI.

Download the Unlocking ROI white paper to learn how to get more value from the Internet of Things.
Source: Azure


Securing the pharmaceutical supply chain with Azure IoT

You’re responsible for overseeing the transportation of a pallet of medicine halfway around the world. Drugs will travel from your pharmaceutical company’s manufacturing outbound warehouse in central New Jersey to third-party logistics firms, distributors, pharmacies, and ultimately, patients. Each box in that pallet – no bigger than the box that holds the business cards on your desk – contains very costly medicine, the product of 10 years of research and R&D spending.

Oh, and there's a catch – actually several. You will need to ensure compliance with a long list of requirements, from temperature and vibration to whether the box has been opened. The box must be kept at a stable temperature between 2 and 8 degrees Celsius for the whole journey. Additionally, the box is as vulnerable to shock as a Fabergé egg, and the contents of each box can easily be faked. And another catch: your company isn't in the global logistics business, and you lose oversight of those boxes of precious medicine as soon as they leave your freight bay in New Jersey.

IoT opens a new era for secure, smart cold chain asset management

It used to be that the only solution available for you to monitor and manage your cold chain was for your freight technicians to toss a data logger into the center of each outbound pallet and hope for the best. The shipment was passed from the third-party logistics firm to distributors, to warehouses, past freight forwarders, on to last-mile distribution, and finally on to the pharmacy and patients. Your visibility was minimal, while your exposure to drug waste or potential counterfeiting was high.

Microsoft and Wipro envisioned a better solution: one that would help ensure the cold chain was maintained from production to delivery to customers, and one that would limit issues like counterfeiting.

We worked with a top 20 global pharmaceutical company to develop Titan Secure, a digital supply chain and anti-counterfeiting platform. The platform was built with Microsoft Azure Internet of Things (IoT) technologies. See the Titan Secure reference architecture below to learn more.

“Azure IoT technology enabled us to develop a real-time IoT solution that provided the alerts and analytics needed to maintain the cold chain and decrease counterfeiting costs for pharmaceutical customers,” explained Sujan Thanjavuru, Head of Life Sciences Strategy & Transformation, Wipro, Ltd. “We worked with our customer to customize the sensors and develop a user interface that made it easy for managers to understand the state of their pharma shipments in real time. The result was an easy-to-use dashboard that provided valuable insights.”

“Azure IoT brings greater efficiency and reliability to customer value chains with world-class IoT and location intelligence services,” added Tony Shakib, IoT Business Acceleration Leader, Microsoft Azure.

Imagine a future with reduced counterfeit drugs and cold chain product wastage

Fast forward: imagine you've implemented Titan Secure from Wipro. Now, your outbound freight technician attaches a small, flexible Bluetooth Low Energy (BLE) beacon sensor to each box of medication, paired with the FDA- and EMA-compliant serial number and barcode. The sensors measure temperature, humidity, shock, vibration, and tamper data. They generate geospatial alerts in real time in the event of a temperature excursion or potential counterfeiting attempt. The information is stored in and displayed from Azure. Data is transferred on the back end using Microsoft blockchain technology, but shipping operators don't need to know what that means to use it. On an easy-to-use, interactive map and dashboard, technicians can easily track each individual box of your company's product as it's shipped from your outbound warehouse all the way to the pharmacy. Your managers receive an alert when a shipment is predicted to get too hot, so you can call the third party and fix the problem before the shipment has to be destroyed. If there is tampering within one of your shipments, you'll quickly find out what happened and how many boxes were affected.

Manage your cold chain in real time

What does this mean for your company? Wipro’s Thanjavuru explained, “Pharmaceutical companies can now digitally transform their cold chain management. They can monitor temperature and telemetry data through the entire product journey, view analytics and alerts within the Titan Secure dashboard for visibility including anti-counterfeiting support, and – with cloud connectivity – information about the shipment is available in near real-time.”
Source: Azure