New enhancements for Azure IoT Edge automatic deployments

Since releasing Microsoft Azure IoT Edge, we have seen many customers using IoT Edge automatic deployments to deploy workloads to the edge at scale. IoT Edge automatic deployments handle the heavy lifting of deploying modules to the relevant Azure IoT Edge devices and allow operators to keep a close eye on status to quickly address any problems. Customers love the benefits and have given us feedback on how to make automatic deployments even better through greater flexibility and seamless experiences. Today, we are sharing a set of enhancements to IoT Edge automatic deployments that are a direct result of this feedback. These enhancements include layered deployments, deploying marketplace modules from the Azure portal and other UI updates, and module support for automatic device configurations.

Layered deployments

Layered deployments are a new type of IoT Edge automatic deployment that allows developers and operators to independently deploy subsets of modules. This avoids the need to create an automatic deployment for every combination of modules that may exist across your device fleet. Microsoft Azure IoT Hub evaluates all applicable layered deployments to determine the final set of modules for a given IoT Edge device. Layered deployments have the same basic components as any automatic deployment: they target devices based on tags in the device twins and provide the same functionality around labels, metrics, and status reporting. Layered deployments also have priorities assigned to them, but instead of using the priority to determine which single deployment is applied to a device, the priority determines how multiple deployments are ranked on a device. For example, if two layered deployments define a module or a route with the same name, the definition from the higher-priority layered deployment is applied, overwriting the one from the lower-priority deployment.
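Conceptually, the ranking behavior is a priority-ordered merge. The following Python sketch is illustrative only; the dictionary shapes and names are hypothetical, not IoT Hub's actual deployment schema:

```python
# Illustrative merge of layered deployments (hypothetical structure,
# not IoT Hub's real schema): higher priority wins on name collisions.
def merge_layered_deployments(deployments):
    """Apply layers in ascending priority order so that a module or route
    defined in a higher-priority layer overwrites the same-named entry
    from a lower-priority layer."""
    merged = {"modules": {}, "routes": {}}
    for layer in sorted(deployments, key=lambda d: d["priority"]):
        merged["modules"].update(layer.get("modules", {}))
        merged["routes"].update(layer.get("routes", {}))
    return merged

base = {"priority": 1, "modules": {"telemetry": "v1"}, "routes": {"upstream": "A"}}
override = {"priority": 5, "modules": {"telemetry": "v2", "filter": "v1"}}
result = merge_layered_deployments([base, override])
# "telemetry" comes from the higher-priority layer; "filter" and "upstream" survive.
```

Modules and routes that appear in only one layer pass through untouched, which is what lets each team deploy its own subset independently.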

This first illustration shows how all modules need to be included in each regular deployment, requiring a separate deployment for each target group.

This second illustration shows how layered deployments allow modules to be deployed independently to each target group, with a lower overall number of deployments.

Revamped UI for IoT Edge automatic deployments

There are updates throughout the IoT Edge automatic deployments UI in the Azure portal. For example, you can now select modules from Microsoft Azure Marketplace from directly within the create deployment experience. The Azure Marketplace features many Azure IoT Edge modules built by Microsoft and partners.

Automatic configuration for module twins

Automatic device management in Azure IoT Hub automates many of the repetitive and complex tasks of managing large device fleets by using automatic device configurations to update and report status on device twin properties. We have heard from many of you that you would like the equivalent functionality for configuring module twins, and are happy to share that this functionality is now available.

Next steps

Learn about layered deployments for Azure IoT Edge
Learn about automatic device management support for module twins

Source: Azure

Better performance with bursting enhancement on Azure Disks

At Microsoft Ignite in November, we introduced the preview of bursting support on Azure Premium SSD disks, along with new 4/8/16 GiB disk sizes on both Premium and Standard SSDs. We would like to share more details about these capabilities. With bursting, eligible Premium SSD disks can now achieve up to 30x the provisioned performance target, handling spiky workloads better. If you have workloads running on-premises with less predictable disk traffic, you can migrate to Azure and improve your overall performance by taking advantage of bursting support.

Disk bursting is enforced on a credit-based system: you accumulate credits when traffic is below the provisioned target and consume credits when it exceeds the target. You can best leverage the capability in the scenarios below:

OS disks to accelerate virtual machine (VM) boot: You can expect a performance boost during VM boot, when reads to the OS disk may be issued at a higher rate. If you are hosting cloud workstations on Azure, your applications' launch time can potentially be reduced by taking advantage of the additional disk throughput.
Data disks to accommodate spiky traffic: Some production operations trigger spikes of disk input/output (IO) by design. For example, if you conduct a database checkpoint, there will be a sudden increase in writes against the data disk, and a similar increase in reads for backup operations. Disk bursting gives you better flexibility to handle any expected or unexpected change in disk traffic pattern.
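The credit mechanics can be illustrated with a toy model. The numbers below mirror the burst-capable sizes described later (120 provisioned IOPS bursting to 3,500 for up to 30 minutes), but the class and its accounting are a simplification, not Azure's actual implementation:

```python
# Illustrative credit-bucket model of disk bursting (simplified; Azure's
# real accounting is more involved).
class BurstBucket:
    def __init__(self, provisioned_iops, burst_iops, max_credits):
        self.provisioned = provisioned_iops
        self.burst = burst_iops
        self.credits = max_credits      # new disks start with a full bucket
        self.max_credits = max_credits

    def tick(self, requested_iops):
        """Advance one second; return the IOPS actually served."""
        if requested_iops <= self.provisioned:
            # Under target: serve everything and bank the unused headroom.
            self.credits = min(self.max_credits,
                               self.credits + (self.provisioned - requested_iops))
            return requested_iops
        # Over target: spend credits to burst, up to the burst ceiling.
        extra = min(requested_iops, self.burst) - self.provisioned
        granted = min(extra, self.credits)
        self.credits -= granted
        return self.provisioned + granted

# A P4-like disk: 120 provisioned IOPS, 3,500 burst IOPS, and enough
# credits for 30 minutes (1,800 s) of extra IOPS at the peak burst rate.
disk = BurstBucket(provisioned_iops=120, burst_iops=3500, max_credits=3380 * 1800)
served_quiet = disk.tick(50)     # under target; bucket is full, nothing to bank
served_spike = disk.tick(3500)   # spike: bursts to the full 3,500
```

The bucket size is what produces the "30 mins at peak burst rate" figure in the table below: 3,380 extra IOPS sustained for 1,800 seconds drains a full bucket.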

With this preview release, we lower the entry cost of cloud adoption with smaller disk sizes and make our disk offerings more performant by leveraging burst support. Start leveraging these new disk capabilities to build your most performant, robust, and cost-efficient solutions on Azure today!

Getting Started

Create new managed disks in the burst-applicable sizes using the Azure portal, PowerShell, or the command-line interface (CLI) now! You can find the specifications of burst-eligible and new disk sizes in the table below. The preview regions that support bursting and the new disk sizes are listed in our Azure Disks frequently asked questions article. We are actively extending preview support to more regions.

Premium SSD managed disks

Bursting capability is supported on Premium SSD managed disks only and will be enabled by default for all new deployments in the supported regions. For existing disks of the applicable sizes, you can enable bursting with either of two options: detach and re-attach the disk, or stop and restart the attached VM. To learn more about how bursting works, refer to the “What disk types are available in Azure?” article.

Burst Capable Disks

| Disk | Disk Size | Provisioned IOPS per disk | Provisioned Bandwidth per disk | Max Burst IOPS per disk | Max Burst Bandwidth per disk | Max Burst Duration at Peak Burst Rate |
|---|---|---|---|---|---|---|
| P1 – New | 4 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P2 – New | 8 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P3 – New | 16 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P4 | 32 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P6 | 64 GiB | 240 | 50 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P10 | 128 GiB | 500 | 100 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P15 | 256 GiB | 1,100 | 125 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P20 | 512 GiB | 2,300 | 150 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |

Standard SSD managed disks

Here are the new disk sizes introduced on Standard SSD disks. The performance targets define the maximum IOPS and bandwidth you can achieve on these sizes. Unlike the Premium SSD disks above, the IOPS and bandwidth offered are not provisioned. For performance-sensitive workloads or single-instance deployments, we recommend you leverage Premium SSDs.

| Disk | Disk Size | Max IOPS per disk | Max Bandwidth per disk |
|---|---|---|---|
| E1 – New | 4 GiB | 120 | 25 MB/sec |
| E2 – New | 8 GiB | 120 | 25 MB/sec |
| E3 – New | 16 GiB | 120 | 25 MB/sec |

Visit our service website to explore the Azure Disk Storage portfolio. To learn about pricing, you can visit the Azure Managed Disks pricing page.

General feedback

We look forward to hearing your feedback on the new disk sizes. Please email us at AzureDisks@microsoft.com.
Source: Azure

New features in Azure Monitor metrics explorer based on your feedback

A few months ago, we posted a survey to gather feedback on your experience with metrics in the Azure portal. Thank you for participating and providing valuable suggestions! We appreciate your input, whether you are working on a hobby project, in a governmental organization, or at a company of any size. We want to share some of the insights we gained from the survey and highlight some of the features we delivered based on your feedback. These features include:

A resource picker that supports multi-resource scoping.
Splitting by dimension with a limit on the number of time series and a configurable sort order.
Charts that can show a large number of datapoints.
Improved chart legends.

Resource picker with multi-resource scoping

One of the key pieces of feedback we heard was about the resource picker panel. You said that being able to select only one resource at a time when choosing a scope is too limiting. Now you can select multiple resources across resource groups in a subscription.

Ability to limit the number of time series and change sort order when splitting by dimension

Many of you asked for the ability to configure the sort order based on dimension values, and for control over the maximum number of time series shown on the chart. Those who asked explained that for some metrics, such as “Available memory” and “Remaining disk space,” they want to see the time series with the smallest values, while for other metrics, such as “CPU Utilization” or “Count of Failures,” showing the time series with the highest values makes more sense. To make this possible, we expanded the dimension splitter selector with Sort order and Limit count inputs.

Charts that show a large number of datapoints

Charts with multiple time series over a long period, especially with a short time grain, are based on queries that return many datapoints, and processing too many datapoints may slow down chart interactions. To ensure the best performance, we used to apply a hard limit on the number of datapoints per chart, prompting users to lower the time range or increase the time grain when the query returned too much data. Some of you found the old experience frustrating: occasionally you want to plot charts with lots of datapoints, regardless of performance. Based on your suggestions, we changed the way we handle the limit. Instead of blocking chart rendering, we now display a message warning that the metrics query will return a lot of data, but let you proceed anyway (with a friendly reminder that you might need to wait longer for the chart to display). High-density charts built from lots of datapoints can be useful for visualizing outliers.

Improved chart legend

A small but useful improvement was made based on your feedback that chart legends often wouldn’t fit on the chart, making it hard to interpret the data. This almost always happened with charts pinned to dashboards and rendered in the tight space of dashboard tiles, or on screens with smaller resolutions. To solve the problem, we now let you scroll the legend until you find the data you need.

Feedback

Let us know how we’re doing and what more you’d like to see. Please stay tuned for more information on these and other new features in the coming months. We are continuously addressing pain points and making improvements based on your input. If you have any questions or comments before our next survey, please use the feedback button on the Metrics blade. Don’t feel shy about giving us a shout-out if you like a new feature or are excited about the direction we’re headed. Smiles are just as important in influencing our plans as frowns!
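The Sort order and Limit count behavior described above can be sketched in a few lines. The function name and data shapes below are hypothetical, purely to illustrate the ranking logic:

```python
# Illustrative sketch of "split by dimension" with a sort order and a
# limit, as in the metrics explorer (hypothetical names, not an Azure API).
def top_series(series_by_dimension, limit, ascending=False):
    """Keep only `limit` time series, ranked by their average value.
    ascending=True suits metrics like 'Available memory' (smallest first);
    ascending=False suits metrics like 'CPU Utilization' (largest first)."""
    ranked = sorted(series_by_dimension.items(),
                    key=lambda kv: sum(kv[1]) / len(kv[1]),
                    reverse=not ascending)
    return dict(ranked[:limit])

cpu = {"vm-a": [90, 95, 92], "vm-b": [10, 12, 9], "vm-c": [55, 60, 52]}
hottest = top_series(cpu, limit=2)   # keeps vm-a and vm-c, the busiest VMs
```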
Source: Azure

Combine the Power of Video Indexer and Computer Vision

We are pleased to introduce the ability to export high-resolution keyframes from Azure Media Service’s Video Indexer. Whereas keyframes were previously exported in reduced resolution compared to the source video, high resolution keyframes extraction gives you original quality images and allows you to make use of the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video. This unlocks a wealth of pre-trained and custom model capabilities. You can use the keyframes extracted from Video Indexer, for example, to identify logos for monetization and brand safety needs, to add scene description for accessibility needs or to accurately identify very specific objects relevant for your organization, like identifying a type of car or a place.

Let’s look at some of the use cases we can enable with this new introduction.

Using keyframes to get image description automatically

You can automate the process of “captioning” different visual shots of your video through the image description model within Computer Vision, in order to make the content more accessible to people with visual impairments. This model provides multiple description suggestions along with confidence values for an image. You can take the descriptions of each high-resolution keyframe and stitch them together to create an audio description track for your video.
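As a sketch of the stitching step, the snippet below turns per-keyframe descriptions (for example, the caption text returned by Computer Vision's describe operation) into a WebVTT-style track. The data here is hand-made for illustration:

```python
# Hypothetical sketch: stitch per-keyframe image descriptions into a
# WebVTT-style description track (timestamps in seconds).
def to_vtt(keyframe_descriptions):
    """keyframe_descriptions: list of (start_seconds, end_seconds, text)."""
    def ts(seconds):
        h, rem = divmod(int(seconds), 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}.000"
    cues = ["WEBVTT", ""]
    for start, end, text in sorted(keyframe_descriptions):
        cues.append(f"{ts(start)} --> {ts(end)}")
        cues.append(text)
        cues.append("")
    return "\n".join(cues)

track = to_vtt([(0, 4, "a person holding a cup of coffee"),
                (4, 9, "a city street at night")])
```

A text-to-speech step could then read the cues aloud to produce the audio description track itself.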

Using Keyframes to get logo detection

While Video Indexer detects brands in speech and visual text, it does not yet support brand detection from logos. Instead, you can run your keyframes through Computer Vision’s logo-based brand detection model to detect instances of logos in your content.

This can also help you with brand safety as you now know and can control the brands showing up in your content. For example, you might not want to showcase the logo of a company directly competing with yours. Also, you can now monetize on the brands showing up in your content through sponsorship agreements or contextual ads.

Furthermore, you can cross-reference the results of this model for your keyframe with the timestamp of the keyframe to determine exactly when a logo is shown in your video and for how long. For example, if you have a sponsorship agreement with a content creator to show your logo for a certain period of time in their video, this can help determine whether the terms of the agreement have been upheld.
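One simple way to approximate a logo's screen time is to attribute the gap between consecutive keyframes to a brand whenever it is detected in the earlier keyframe. The sketch below is illustrative; the detection results are hand-made, not real Computer Vision output:

```python
# Sketch: cross-reference logo detections with keyframe timestamps to
# estimate how long a logo is on screen (hand-made data for illustration).
def logo_screen_time(keyframes, brand):
    """keyframes: list of (timestamp_seconds, detected_brands), in order.
    Counts the gap to the next keyframe whenever the brand is visible."""
    total = 0.0
    for (t, brands), (t_next, _) in zip(keyframes, keyframes[1:]):
        if brand in brands:
            total += t_next - t
    return total

frames = [(0.0, {"Contoso"}), (2.5, {"Contoso"}), (5.0, set()), (7.5, {"Contoso"})]
seconds = logo_screen_time(frames, "Contoso")   # 2.5 + 2.5 = 5.0 seconds
```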

Computer Vision’s logo detection model can detect and recognize thousands of different brands out of the box. However, if you are working with logos that are specific to your use case or otherwise might not be a part of the out of the box logos database, you can also use Custom Vision to build a custom object detector and essentially train your own database of logos by uploading and correctly labeling instances of the logos relevant to you.

Using keyframes with other Computer Vision and Custom Vision offerings

The Computer Vision APIs provide different insights in addition to image description and logo detection, such as object detection, image categorization, and more. The possibilities are endless when you use high-resolution keyframes in conjunction with these offerings.

For example, the object detection model in Computer Vision gives bounding boxes for common out of the box objects that are already detected as part of Video Indexer today. You can use these bounding boxes to blur out certain objects that don’t meet your standards.
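As an illustration of that idea, the sketch below applies a box blur to a bounding box on a grayscale image represented as a nested list; a real pipeline would use an image library on the decoded frame:

```python
# Sketch: blur a detected object's bounding box with a simple box blur
# (pure Python, grayscale image as a nested list; illustrative only).
def blur_region(image, x, y, w, h, radius=1):
    out = [row[:] for row in image]
    for j in range(y, min(y + h, len(image))):
        for i in range(x, min(x + w, len(image[0]))):
            # Average the pixel with its neighbours inside the image bounds.
            vals = [image[jj][ii]
                    for jj in range(max(0, j - radius), min(len(image), j + radius + 1))
                    for ii in range(max(0, i - radius), min(len(image[0]), i + radius + 1))]
            out[j][i] = sum(vals) // len(vals)
    return out

img = [[0, 0, 0, 0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0, 0, 0]]
blurred = blur_region(img, 1, 1, 2, 2)   # softens the bright 2x2 square
```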

High-resolution keyframes in conjunction with Custom Vision can be leveraged to achieve many different custom use cases. For example, you can train a model to determine what type of car (or even what breed of cat) is showing in a shot. Maybe you want to identify the location or the set where a scene was filmed for editing purposes. If you have objects of interest that may be unique to your use case, use Custom Vision to build a custom classifier to tag visuals or a custom object detector to tag and provide bounding boxes for visual objects.

Try it for yourself

These are just a few of the new opportunities enabled by the availability of high-resolution keyframes in Video Indexer. Now it is up to you to get additional insights from your video by taking the keyframes from Video Indexer and running additional image processing using any of the Vision models we have just discussed. You can start by uploading your video to Video Indexer and retrieving the high-resolution keyframes after the indexing job is complete, then creating an account and getting started with the Computer Vision API and Custom Vision.

Have questions or feedback? We would love to hear from you. Use our UserVoice page to help us prioritize features, leave a comment below or email VISupport@Microsoft.com for any questions.

Source: Azure

Azure Sphere guardian module simplifies & secures brownfield IoT

One of the toughest IoT quandaries is figuring out how to bake IoT into existing hardware in a secure, cost-effective way. For many customers, scrapping existing hardware investments for new IoT-enabled devices (“greenfield” installations) isn’t feasible. And retrofitting mission-critical devices that are already in service with IoT (“brownfield” installations) is often deemed too risky, too complicated, and too expensive.

This is why we’re thrilled about a major advancement for Azure Sphere that opens up the brownfield opportunity, helping make IoT retrofits more secure, substantially easier, and more cost effective than ever before. The guardian module with Azure Sphere simplifies the transformation of brownfield devices into locked-down, internet-connected, data-wielding, intelligent devices that can transform business.

For an in-depth exploration of the guardian module and how it’s being used at major corporations like Starbucks, sign up for the upcoming Azure Sphere Guardian Module webinar.

The guardian module with Azure Sphere offers some key advantages

Like all Microsoft products, Azure Sphere is loaded with robust security features at every turn—from silicon to cloud. For brownfield installations, the guardian module with Azure Sphere physically plugs into existing equipment ports without the need for any hardware redesign.

Azure Sphere, rather than the device itself, talks to the cloud. The guardian module processes data and controls the device without exposing existing equipment to the potential dangers of the internet. The module shields brownfield equipment from attack by restricting the flow of data to only trusted cloud and device communication partners while also protecting module and equipment software.

Using the Azure Sphere guardian module, enterprises can enable any number of secure operations between the device and the cloud. The device can even use the Azure Sphere Security Service for certificate-based authentication, failure reporting, and software updates.

Opportunities abound for the Microsoft partner ecosystem

Given the massive scale of connectable equipment already in use in retail, industrial, and commercial settings, the new guardian module presents a lucrative opportunity for Microsoft partners. Azure Sphere can connect an enormous range of devices of all types, leading the way for a multitude of practical applications that can pay off through increased productivity, predictive maintenance, cost savings, new revenue opportunities, and more.

Fulfilling demand for such a diverse set of use cases is only possible thanks to Azure Sphere’s expanding partner ecosystem. Recent examples of this growth include our partnership with NXP to deliver a new Azure Sphere-certified chip that extends their i.MX 8 high-performance applications processor series and brings greater compute capabilities to support advanced workloads, as well as our collaboration with Qualcomm Technologies, Inc. to deliver the first cellular-enabled Azure Sphere chip, which gives our customers the ability to securely connect anytime, anywhere.

Starbucks uses Azure Sphere guardian module to connect coffee machines

If you saw Satya Nadella’s Vision Keynote at Build 2019, you probably recall the demonstration of Starbucks’ IoT-connected coffee machines. But what you may not know is the Azure Sphere guardian module is behind the scenes, enabling Starbucks to connect these existing machines to the cloud.

As customers wait for their double-shot, no-whip mochas to brew, these IoT-enabled machines are doing more than meets the eye. They’re collecting more than a dozen data points for each precious shot, like the types of beans used, water temperature, and water quality. The solution enables Starbucks to proactively identify any issues with their machines in order to smooth their customers’ paths to caffeinated bliss.

Beyond predictive maintenance, Azure Sphere will enable Starbucks to transmit new recipes directly to machines in 30,000 stores rather than manually uploading recipes via thumb drives, saving Starbucks lots of time, money, and thumb drives. Watch this Microsoft Ignite session to see how Starbucks is tackling IoT at scale in pursuit of the perfect pour.

As an ecosystem, we have a tremendous opportunity to meet demand for brownfield installations and help our customers quickly bring their existing investments online without taking on risk and jeopardizing mission-critical equipment. The first guardian modules are available today from Avnet and AI-Link, with more expected soon.

Discover the value of adding secured connectivity to existing mission-critical equipment by registering for our upcoming Azure Sphere Guardian Module webinar. You will get a guided tour of the guardian module, including a deep dive into its architecture and the opportunity this open-source offering presents to our partner community. We’ll also hear from Starbucks about what they’ve learned since implementing the guardian module with Azure Sphere.
Source: Azure

Azure Stack HCI now running on HPE Edgeline EL8000

Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure's hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance? 

Well, Microsoft and HPE have teamed up to validate the HPE Edgeline EL8000 Converged Edge system for Microsoft's Azure Stack HCI program. Designed specifically for space-constrained environments, the HPE Edgeline EL8000 Converged Edge system has a unique 17-inch depth form factor that fits into limited infrastructures too small for other x86 systems. The chassis has an 8.7-inch width, which brings additional flexibility for deploying at the deep edge, whether in a telco environment, a mobile vehicle, or a manufacturing floor. This Network Equipment-Building System (NEBS)-compliant system delivers secure scalability.

The HPE Edgeline EL8000 Converged Edge system offers:

Traditional x86 compute optimized for edge deployments, far from the traditional data center, without sacrificing compute performance.
Edge-optimized remote system management with wireless capabilities, based on the Redfish industry standard.
Compact form factor, with short-depth and half-width options.
Rugged, modular form factor for secure scalability and serviceability in edge and hostile environments, including NEBS Level 3 and American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Level 3/4 compliance.
Broad accelerator support for emerging edge artificial intelligence (AI) use cases, including field programmable gate arrays and graphics processing units.
Up to four independent compute nodes, which are cluster-ready with embedded networks.

Modular design providing broad configuration possibilities

The HPE Edgeline EL8000 Converged Edge system offers flexibility of choice for compute density or for input/output expansion. These compact, ruggedized systems offer high-performance capacity to support the use cases that matter most, including media streaming, IoT, AI, and video analytics. The HPE Edgeline EL8000 is a versatile platform that enables edge compute transformation so as use case requirements change, the system's flexible and modular architecture can scale to meet them.

Seamless management and security features with HPE Edgeline Chassis Manager

The HPE Edgeline EL8000 Converged Edge system features the HPE Edgeline Chassis Manager, which limits downtime by providing system-level health monitoring and alerts. It increases efficiency and reliability by managing chassis fan speeds for each installed server blade and by monitoring the health and status of the power supply, and it simplifies firmware upgrade management and implementation.

Microsoft Azure Stack HCI

Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.

Achieve industry-leading virtual machine performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote-direct memory access (RDMA) networking.

Help keep apps and data secure with shielded virtual machines, network microsegmentation, and native encryption.

You can take advantage of cloud and on-premises working together with a hyperconverged infrastructure platform in the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services, including:

Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).

Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.

Cloud Witness, to use Azure as the lightweight tie breaker for cluster quorum.

Azure Backup for offsite data protection and to protect against ransomware.

Azure Update Management for update assessment and update deployments for Windows virtual machines (VMs) running in Azure and on-premises.

Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site virtual private network (VPN).

Azure File Sync to sync your file server with the cloud.

Azure Arc for Servers to manage role-based access control, governance, and compliance policy from the Azure portal.

By deploying the Microsoft and HPE HCI solution, you can quickly meet your branch office and edge needs with high performance and resiliency while protecting your business assets through the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.
Source: Azure

Microsoft partner ANSYS extends ability of Azure Digital Twins platform

Digital twins have moved from an exciting concept to reality. More companies than ever are connecting assets and production networks with sensors and using analytics to optimize operations across machinery, plants, and industrial networks. As exact virtual representations of the physical environment, digital twins incorporate historical and real-time data to enable sophisticated spatial analysis of key relationships. Teams can use digital twins to model the impact of process changes before putting them into production, reducing time, cost, and risk.

For the second year in a row, Gartner has identified digital twins as one of the top 10 strategic technology trends. According to Gartner, while 13 percent of organizations that are implementing IoT have already adopted digital twins, 62 percent are in the process or plan to do so. Gartner predicts a tipping point in 2022 when two out of three companies will have deployed at least one digital twin to optimize some facet of their business processes.

This is why we’re excited by the great work of ANSYS, a Microsoft partner working to extend the value of the Microsoft Azure Digital Twins platform for our joint customers. The ANSYS Twin Builder combines the power of physics-based simulations and analytics-driven digital twins to provide real-time data transfer, reusable components, ultrafast modeling, and other tools that enable teams to perform myriad “what-if” analyses, and build, validate, and deploy complex systems more easily.

“Collaborating with ANSYS to create an advanced IoT digital twins framework provides our customers with an unprecedented understanding of their deployed assets’ performance by leveraging physics and simulation-based analytics.” — Sam George, corporate vice president of Azure IoT, Microsoft

Digital twins model key relationships, simplifying design

Digital twins will be first and most widely adopted in manufacturing, as industrial companies invest millions to build, maintain, and track the performance of remotely deployed IoT-enabled assets, machinery, and vehicles. Operators depend on near-continuous asset uptime to achieve production goals, meaning supply-chain bottlenecks, machine failures, or other unexpected downtime can hamper production output and reduce revenue recognition for the company and its customers. The use of digital twins, analytics, business rules, and automation helps companies avoid many of these issues by guiding decision-making and enabling instant informed action.

Digital twins can also simulate a multidimensional view of asset performance that can be endlessly manipulated and perfected prior to producing new systems or devices, ending not just the guesswork of manually predicting new processes, but also the cost of developing multiple prototypes. Digital twins, analytics-based tools, and automation also equip companies to avoid unnecessary costs by prioritizing issues for investment and resolution.

Digital twins can optimize production across networks

Longer-term, companies can more easily operate global supply chains, production networks, and digital ecosystems through the use of IoT, digital twins, and other tools. Enterprise teams and their partners will be able to pivot from sensing and reacting to changes to predicting them and responding immediately based on predetermined business rules. Utilities will be better prepared to predict and prevent accidents, companies poised to address infrastructure issues before customers complain, and stores more strategically set up to maintain adequate inventories.

Simulations increase digital twins’ effectiveness

ANSYS’ engineering simulation software enables customers to model the design of nearly every physical product or process. The simulations are then compiled into runtime modules that can execute in a Docker container and integrate automatically into IoT processing systems, reducing the heavy lift of IoT customization.

With the combined Microsoft Azure Digital Twins-ANSYS physics-based simulation capabilities, customers can now:

Simulate baseline and failure data resulting in accurate, physics-based digital twins models.
Use physics-based predictive models to increase accuracy and improve ROI from predictive maintenance programs.
Leverage “what-if analyses” to simulate different solutions before selecting the best one.
Use virtual sensors to estimate critical quantities through simulation.
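As a toy illustration of the virtual-sensor idea, the function below estimates an unmeasured quantity (motor winding temperature) from measured ones using a simple physics-based relationship. Real ANSYS twins compile full simulations into runtime modules; this is only a sketch, and the model and numbers are hypothetical:

```python
# Toy "virtual sensor": estimate an unmeasured quantity from measured ones
# via a physics-based model (illustrative; not an ANSYS runtime module).
def winding_temperature(ambient_c, load_fraction, thermal_rise_c=60.0):
    """Steady-state motor winding temperature: ambient plus a rise that
    grows with the square of load (resistive losses scale with current^2)."""
    return ambient_c + thermal_rise_c * load_fraction ** 2

estimate = winding_temperature(ambient_c=25.0, load_fraction=0.5)  # 40.0 C
```

The value of the digital-twin approach is that such estimates can be computed continuously from live telemetry, standing in for sensors that would be impractical or too costly to install.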

In addition, companies can use physics-based simulations within the Microsoft-ANSYS platform to pursue high-value use cases such as these:

Optimize asset performance: Teams can use digital twins to model asset performance to evaluate current performance versus targets, identifying, resolving, and prioritizing issues for resolution based on the value they create.
Manage systems across their lifecycle: Teams can take a systems approach to managing complex and costly assets, driving throughput and retiring systems at the ideal time to avoid over-investing in market-lagging capabilities.
Perform predictive maintenance: Teams can use analytics to determine and schedule maintenance, reduce unplanned downtime and costly break-fix repairs, and perform repairs in order of importance, which frees team members from unnecessary work.
Orchestrate systems: Companies will eventually create systems of intelligence by linking their equipment, systems, and networks to orchestrate production across plants, campuses, and regions, attaining new levels of visibility and efficiency.
Fuel product innovation: With rapid virtual prototyping, teams will be able to explore myriad product versions, reducing the time and cost required to innovate products, decreasing product failures, and enabling the development of customized products.
Enhance employee training: Companies can use digital twins to conduct training with employees, improving their effectiveness on the job while reducing production design errors due to human error.
Eliminate physical constraints: Digital twins eliminate the physical barriers to experimentation, meaning users can simulate tests and conditions for remote assets, such as equipment in other plants, regions, or space.

Opening up new opportunities for partners

According to Gartner, more than 20 billion connected devices are projected by 2020, and adoption of IoT and digital twins is only going to accelerate. In fact, MarketsandMarkets™ estimates that the digital twins market will reach a value of $3.8 billion in 2019 and grow to $35.8 billion by 2025. Our recent IoT Signals research found that 85 percent of decision-makers have already adopted IoT, 74 percent have projects in the “use” phase, and businesses expect to achieve 30 percent ROI on their IoT projects going forward. The top use case participants want to pursue is operations optimization (56 percent), reaping more value from the assets and processes they already possess. That is why digital twins matter so much right now: they provide a framework to accomplish this goal with greater accuracy than was previously possible.

“As industrial companies require comprehensive field data and actionable insights to further optimize deployed asset performance, ecosystem partners must collaborate to form business solutions. ANSYS Twins Builder’s complementary simulation data stream augments Azure IoT Services and greatly enhances its customers’ understanding of asset performance.”—Eric Bantegnie, vice president and general manager at ANSYS

Thanks to Microsoft partners like ANSYS, companies are better equipped to unlock productivity and efficiency gains by removing critical constraints, including physical barriers, from process modeling. With tools like digital twins, companies will be limited only by their own creativity, creating a more intelligent and connected world where all have more opportunities to flourish.

Learn more about Microsoft Azure Digital Twins and ANSYS Twin Builder.
Source: Azure

Introducing maintenance control for platform updates

Today we are announcing the preview of a maintenance control feature for Azure Virtual Machines that gives customers with highly sensitive workloads more control over platform maintenance. Using this feature, customers can control all impactful host updates, including rebootless updates, for up to 35 days.

Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Almost all updates have zero impact on your Azure virtual machines (VMs). When updates do have an effect, Azure chooses the least impactful method:

If the update does not require a reboot, the VM is briefly paused while the host is updated, or it's live migrated to an already updated host. These rebootless maintenance operations are applied fault domain by fault domain, and progress is stopped if any warning health signals are received.
In the extremely rare scenario when the maintenance requires a reboot, the customer is notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you.

Typically, rebootless updates do not impact the overall customer experience. However, certain very sensitive workloads may require full control of all maintenance activities. This new feature will benefit those customers who deploy this type of workload.

Who is this for?

The ability to control the maintenance window is particularly useful when you deploy workloads that are extremely sensitive to interruptions running on an Azure Dedicated Host or an Isolated VM, where the underlying physical server runs a single customer’s workload. This feature is not supported for VMs deployed in hosts shared with other customers.

The typical customer who should consider this feature needs the latest updates in place, but their business requires that at least some of their cloud resources be updated on their own schedule, with zero impact.

Customers like financial services providers, gaming companies, or media streaming services using Azure Dedicated Hosts or Isolated VMs will benefit by being able to manage necessary updates without any impact on their most critical Azure resources.

How does it work?

You can enable the maintenance control feature for platform updates by adding a custom maintenance configuration to a resource (either an Azure Dedicated Host or an Isolated VM). When the Azure updater sees this custom configuration, it will skip all updates that have any impact, including rebootless updates. For as long as the maintenance configuration is applied to the resource, it is your responsibility to determine when to initiate updates for that resource. You can check for pending updates on the resource and apply them within the 35-day window. When you initiate an update on the resource, Azure applies all pending host updates. A new 35-day window starts when another update becomes pending on the resource. If you choose not to apply the updates within the 35-day window, Azure will automatically apply all pending updates for you, to ensure that your resources remain secure and receive other fixes and features.
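The flow above can be sketched with the Azure CLI, one of the supported interfaces. This is a minimal sketch assuming the preview maintenance CLI extension and an Isolated VM; the resource names are placeholders, and exact parameter names may differ as the preview evolves.

```shell
# Minimal sketch of the maintenance-control flow using the preview
# Azure CLI maintenance extension. Resource names (myRG, myVM,
# myConfig) are placeholders; parameter names may differ in the preview.

az extension add --name maintenance

# 1. Create a maintenance configuration scoped to host updates.
az maintenance configuration create \
  --resource-group myRG --resource-name myConfig \
  --location eastus --maintenance-scope host

# 2. Assign the configuration to an Isolated VM.
az maintenance assignment create \
  --resource-group myRG --resource-name myVM \
  --resource-type virtualMachines --provider-name Microsoft.Compute \
  --configuration-assignment-name myConfig \
  --maintenance-configuration-id "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"

# 3. Check for pending updates on the VM.
az maintenance update list \
  --resource-group myRG --resource-name myVM \
  --resource-type virtualMachines --provider-name Microsoft.Compute

# 4. Apply pending updates at a time that works for you
#    (within the 35-day window).
az maintenance applyupdate create \
  --resource-group myRG --resource-name myVM \
  --resource-type virtualMachines --provider-name Microsoft.Compute
```

Once the assignment in step 2 is in place, Azure skips impactful updates on that VM until you run step 4 or the 35-day window elapses.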

Things to consider

You can automate platform updates for your maintenance window by calling “apply pending update” commands through your automation scripts. This can be batched with your application maintenance. You can also make use of Azure Functions and schedule updates at regular intervals.
Maintenance configurations are supported across subscriptions and resource groups, so you can manage all maintenance configurations in one place and use them anywhere they're needed.
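As a tiny illustration of the window arithmetic such an automation script must respect (assuming GNU date; Azure tracks the real window itself, this is only a local scheduling aid):

```shell
# Given the date an update became pending, compute the last day you can
# still self-initiate it before Azure applies it automatically.
# Assumes GNU date; illustrative only -- Azure tracks the real window.
PENDING_SINCE="2020-03-01"
WINDOW_DAYS=35
DEADLINE=$(date -d "$PENDING_SINCE + $WINDOW_DAYS days" +%F)
echo "Self-update deadline: $DEADLINE"   # -> Self-update deadline: 2020-04-05
```

A scheduled job could compare today's date against this deadline and trigger the "apply pending update" command a few days early as a safety margin.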

Getting started

The maintenance control feature for platform updates is in preview now. You can get started using the CLI, PowerShell, REST APIs, or the .NET SDK. Azure portal support will follow.

For more information, please refer to the documentation: Maintenance for virtual machines in Azure.

FAQ

Q: Are there cases where I can’t control certain updates? 

A: In the case of a high-severity security issue that may endanger the Azure platform or our customers, Azure may need to override customer control of the maintenance window and push the change. This would be rare, used only in extreme cases as a last resort to protect you from critical security issues.

Q: If I don’t self-update within 35 days, what action will Azure take?

A: If you don’t apply a platform update within 35 days, Azure will apply the pending updates fault domain by fault domain. This is done to maintain security and performance, and to fix any defects.

Q: Is this feature supported in all regions?

A: Maintenance control is supported in all public cloud regions. Government cloud regions are not yet supported; support will come later.
Source: Azure

Networking enables the new world of Edge and 5G Computing

At the recent Microsoft Ignite 2019 conference, we introduced two new and related perspectives on the future and roadmap of edge computing.

Before getting further into the details of Network Edge Compute (NEC) and Multi-access Edge Compute (MEC), let’s take a look at the key scenarios that are emerging alongside 5G network deployments. For a decade, we have been working with customers to move their workloads from their on-premises locations to Azure to take advantage of the massive economies of scale of the public cloud. We achieve this scale through the ongoing build-out of new Azure regions and the constant increase of capacity in our existing regions, reducing the overall costs of running data centers.

For most workloads, running in the cloud is the best choice. Our ability to innovate and run Azure as efficiently as possible allows customers to focus on their business instead of managing physical hardware and the associated space, power, cooling, and physical security. Now, with the advent of 5G mobile technology promising larger bandwidth and better reliability, we see significant requirements for low-latency offerings to enable scenarios such as smart buildings, factories, and agriculture. The “smart” prefix highlights that there is a compute-intensive workload, typically running machine learning or artificial intelligence-type logic, requiring compute resources to execute in near real time. Ultimately the latency, the time from when data is generated to when it is analyzed and a meaningful result is available, becomes critical for these smart scenarios. Latency has become the new currency, and to reduce it we need to move the required computing resources closer to the sensors, the data origin, or the users.

Multi-access Edge Compute: The intersection of compute and networking

The Internet of Things (IoT) creates incredible opportunities, but it also presents real challenges. Local connectivity in the enterprise has historically been limited to Ethernet and Wi-Fi. Over the past two decades, Wi-Fi has become the de facto standard for wireless networks, not necessarily because it is the best solution, but because of its entrenchment in the consumer ecosystem and the lack of alternatives. Our customers from around the world tell us that deploying Wi-Fi to serve their IoT devices requires compromises on coverage, bandwidth, security, manageability, reliability, and interoperability/roaming. For example, autonomous robots require better bandwidth, coverage, and reliability to operate safely within a factory. Airports generally have decent Wi-Fi coverage inside the terminals, but on the tarmac coverage often drops significantly, making it insufficient to power a smart airport.

Next-gen private cellular connectivity greatly improves bandwidth, coverage, reliability, and manageability. Through the combination of local compute resources and private mobile connectivity (private LTE), we can enable many new scenarios. For instance, in the smart factory example used earlier, customers can now run their robotic control logic with high availability, independent of connectivity to the public cloud. MEC helps ensure that operations and any associated critical first-stage data processing stay up, so production can continue uninterrupted.

With its promise of near-infinite compute and storage, the cloud is ideal for large data-intensive and computational tasks, such as machine learning jobs for predictive maintenance analytics. At this year’s Ignite conference, we shared our thoughts and experience, along with a technology preview of MEC with Azure. The technology preview brings private mobile network capabilities to Azure Stack Edge, an on-premises compute platform managed from Azure. In practical terms, the MEC allows the robots to be controlled locally, even if the factory suffers a network outage.

From an edge computing perspective, we have containers running across Azure Stack Edge and Azure. A key aspect is that the same programming paradigm can be used for both Azure and the edge-based MEC platform: code can be developed and tested in the cloud, then seamlessly deployed at the edge. Developers can take advantage of the vast array of DevOps tools and solutions available in Azure and apply them to exciting new edge scenarios. The MEC technology preview focuses on a simplified experience for cross-premises deployment and operation of managed compute and Virtual Network Functions, with integration into existing Azure services.

Network Edge Compute

Whereas Multi-access Edge Compute (MEC) is deployed at the customer’s premises, Network Edge Compute (NEC) is the network carrier equivalent, placing the edge computing platform within the carrier’s network. Last week we announced the initial deployment of our NEC platform in AT&T’s Dallas facility. Instead of needing to access applications and games running in the public cloud, software providers can bring their solutions physically closer to their end users. At AT&T’s Business Summit, working with Taqtile, we gave an augmented reality demonstration showing how to perform maintenance on aircraft landing gear.

The HoloLens user sees the real landing gear alongside a virtual manual, with specific parts of the landing gear virtually highlighted. This mixing of real-world and virtual objects displayed via HoloLens is often referred to as augmented reality (AR) or mixed reality (MR).

Edge Computing Scenarios

We have been showcasing multiple MEC and NEC use-cases over these past few weeks. For more details please refer to our Microsoft Ignite MEC and 5G session.

Mixed Reality (MR)

Mixed reality use cases such as remote assistance can revolutionize several industrial automation scenarios. Lower latencies and higher bandwidth, coupled with local compute, enable new remote rendering scenarios that reduce battery consumption in handsets and MR devices.

Retail e-fulfillment

Attabotics provides a robotic warehousing and fulfillment system for the retail and supply chain industries. Attabotics employs robots (Attabots) to store and retrieve goods from a grid of bins. A typical storage structure has about 100,000 bins and is serviced by between 60 and 80 Attabots. Azure Sphere powers the robots themselves. Communications using Wi-Fi or the traditional 900 MHz spectrum do not meet the scale, performance, and reliability requirements.
  
The Nexus robot control system, used for command and control of the warehousing system, is built natively on Azure and uses Azure IoT Central for telemetry. With private LTE (CBRS) radios from our partners Sierra Wireless and Ruckus Wireless and packet core partner Metaswitch, we enabled the Attabots to communicate over a private LTE network. The reduced latency improved reliability and made the warehousing solution more efficient. The entire warehousing solution, including the private LTE network for the warehouse, runs on a single Azure Stack Edge.

Gaming

Multiplayer online gaming is one of the canonical scenarios for low-latency edge computing. Game Cloud Studios has developed a game based on Azure PlayFab called Tap and Field. The game backend and controls run on Azure, while the game server instances run on the NEC platform. Lower latencies result in better gaming experiences for nearby players at e-sports events, arcades, arenas, and similar venues.

Public Safety

The proliferation of drone use is disrupting many industries, raising issues from security and privacy to the delivery of goods. Air traffic control operations are on the cusp of one of the most significant disruptive events in the field, going from monitoring only dozens of aircraft today to thousands tomorrow. This necessitates a sophisticated near-real-time tracking system. Vorpal VigilAir has built a solution in which drone and operator tracking is done using a distributed sensor network powered by a real-time tracking application running on the NEC.

Data-driven digital agriculture solutions

Azure FarmBeats is an Azure solution that aggregates agriculture datasets across providers and generates actionable insights by building artificial intelligence (AI) or machine learning (ML) models that fuse those datasets. Gathering datasets from sensors distributed across a farm requires a reliable private network, and generating insights requires a robust edge computing platform that can operate in a disconnected mode in remote locations where cloud connectivity is often sparse. Our solution, based on Azure Stack Edge along with a managed private LTE network, offers a reliable and scalable connectivity fabric and the right compute resources close to the farm.

MEC, NEC, and Azure: Bringing compute everywhere

MEC enables a low-latency connected Azure platform in your location, NEC provides a similar platform in a network carrier’s central office, and Azure provides a vast array of cloud services and controls.

At Microsoft, we fundamentally believe in providing options for all customers. Because it is impractical to deploy Azure datacenters in every major metropolitan city throughout the world, our new edge computing platforms provide a solution for specific low-latency application requirements that cannot be satisfied in the cloud. Software developers can use the same programming and deployment models for containerized applications using MEC where private mobile connectivity is required, deploying to NEC where apps are optimally located outside the customer’s premises, or directly in Azure. Many applications will look to take advantage of combined compute resources across the edge and public cloud.

We are building a new extended platform and continue to work with the growing ecosystem of mobile connectivity and edge computing partners. We are excited to enable a new wave of innovation unleashed by the convergence of 5G, private mobile connectivity, IoT and containerized software environments, powered by new and distributed programming models. The next phase of computing has begun.
Source: Azure