Combine the Power of Video Indexer and Computer Vision

We are pleased to introduce the ability to export high-resolution keyframes from Azure Media Services' Video Indexer. Whereas keyframes were previously exported at a reduced resolution compared to the source video, high-resolution keyframe extraction gives you original-quality images and allows you to make use of the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video. This unlocks a wealth of pre-trained and custom model capabilities. For example, you can use the keyframes extracted from Video Indexer to identify logos for monetization and brand-safety needs, to add scene descriptions for accessibility needs, or to accurately identify very specific objects relevant to your organization, such as a type of car or a place.

Let’s look at some of the use cases we can enable with this new introduction.

Using keyframes to get image description automatically

You can automate the process of “captioning” different visual shots of your video through the image description model within Computer Vision, in order to make the content more accessible to people with visual impairments. This model provides multiple description suggestions along with confidence values for an image. You can take the descriptions of each high-resolution keyframe and stitch them together to create an audio description track for your video.
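As an illustration, here is a minimal Python sketch of that first step, calling the documented Computer Vision Describe Image REST operation; the endpoint, key, and keyframe filename are placeholders you would replace with your own:

```python
import requests

# Placeholders: substitute your own Computer Vision endpoint, key, and keyframe.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-computer-vision-key>"

def describe_keyframe(image_path: str, max_candidates: int = 3):
    """Send one high-resolution keyframe to the Describe Image operation and
    return its caption candidates with confidence scores."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/describe",
        params={"maxCandidates": max_candidates},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes)
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return [(c["text"], c["confidence"]) for c in captions]

for text, confidence in describe_keyframe("keyframe_001.jpg"):
    print(f"{confidence:.2f}  {text}")
```

Running this over every keyframe, in timestamp order, gives you the raw material for the stitched audio description track.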

Using keyframes to get logo detection

While Video Indexer detects brands in speech and visual text, it does not yet support brand detection from logos. Instead, you can run your keyframes through Computer Vision's logo-based brand detection model to detect instances of logos in your content.
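A sketch of that call, reusing the endpoint and key placeholders from the previous example and requesting the Brands visual feature of the Analyze Image operation:

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-computer-vision-key>"                                # placeholder

def detect_brands(image_path: str):
    """Run one keyframe through Analyze Image with the Brands visual feature
    and return any detected logos with their bounding rectangles."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Brands"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes)
    response.raise_for_status()
    return response.json().get("brands", [])

for brand in detect_brands("keyframe_001.jpg"):
    print(brand["name"], brand["confidence"], brand["rectangle"])
```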

This can also help you with brand safety, as you now know and can control the brands showing up in your content. For example, you might not want to showcase the logo of a company directly competing with yours. You can also monetize the brands appearing in your content through sponsorship agreements or contextual ads.

Furthermore, you can cross-reference the results of this model for each keyframe with that keyframe's timestamp to determine exactly when a logo is shown in your video and for how long. For example, if you have a sponsorship agreement with a content creator to show your logo for a certain period of time in their video, this can help determine whether the terms of the agreement have been upheld.
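For illustration, here is a small sketch of that bookkeeping; the input pairs of keyframe timestamps and detected brands are hypothetical and would come from combining Video Indexer's index with the brand-detection results above:

```python
from collections import defaultdict

# Hypothetical input: (timestamp in seconds, detected brand names) per keyframe,
# e.g. built by pairing Video Indexer keyframe timestamps with detect_brands()
# results from the sketch above.
keyframe_detections = [
    (12.0, ["Contoso"]),
    (14.5, ["Contoso"]),
    (17.0, []),
    (42.0, ["Fabrikam"]),
]

def logo_screen_time(detections):
    """Estimate per-brand screen time by attributing to each keyframe the
    interval until the next keyframe."""
    totals = defaultdict(float)
    for (t, brands), (t_next, _) in zip(detections, detections[1:]):
        for name in brands:
            totals[name] += t_next - t
    return dict(totals)

print(logo_screen_time(keyframe_detections))  # {'Contoso': 5.0}
```

This simple attribution assumes a logo stays on screen until the next keyframe; a production pipeline would refine it with shot boundaries from the Video Indexer index.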

Computer Vision's logo detection model can detect and recognize thousands of different brands out of the box. However, if you are working with logos that are specific to your use case or otherwise might not be part of the built-in logo database, you can use Custom Vision to build a custom object detector, essentially training your own database of logos by uploading and correctly labeling instances of the logos relevant to you.
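The Custom Vision quickstart pattern looks roughly like the sketch below (Python SDK); the project name, tag, file, and region coordinates are illustrative, and each tag needs a minimum number of labeled images before training succeeds:

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry, Region)
from msrest.authentication import ApiKeyCredentials

# Placeholders: your Custom Vision training endpoint and key.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Create an object-detection project and a tag for one custom logo.
domain = next(d for d in trainer.get_domains()
              if d.type == "ObjectDetection" and d.name == "General")
project = trainer.create_project("custom-logos", domain_id=domain.id)
logo_tag = trainer.create_tag(project.id, "MyCompanyLogo")

# Upload one labeled keyframe; region coordinates are normalized (0.0-1.0).
with open("keyframe_with_logo.jpg", "rb") as f:
    entry = ImageFileCreateEntry(
        name="keyframe_with_logo.jpg",
        contents=f.read(),
        regions=[Region(tag_id=logo_tag.id,
                        left=0.1, top=0.2, width=0.3, height=0.15)])
trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=[entry]))

# Training requires a minimum number of labeled images per tag (15 at the
# time of writing); once enough are uploaded, kick off a training iteration.
iteration = trainer.train_project(project.id)
print(iteration.status)
```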

Using keyframes with other Computer Vision and Custom Vision offerings

The Computer Vision APIs provide different insights in addition to image description and logo detection, such as object detection, image categorization, and more. The possibilities are endless when you use high-resolution keyframes in conjunction with these offerings.

For example, the object detection model in Computer Vision provides bounding boxes for common objects, some of which Video Indexer already detects today. You can use these bounding boxes to blur out objects that don't meet your standards.
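For instance, given a rectangle in the {'x', 'y', 'w', 'h'} pixel format that Analyze Image object detection results use, a Pillow-based sketch for blurring that region might look like this (file names are placeholders):

```python
from PIL import Image, ImageFilter

def blur_region(image_path: str, rect: dict, out_path: str, radius: int = 25):
    """Blur one rectangular region of a keyframe, where rect uses the
    {'x', 'y', 'w', 'h'} pixel layout returned by Analyze Image results."""
    img = Image.open(image_path)
    box = (rect["x"], rect["y"], rect["x"] + rect["w"], rect["y"] + rect["h"])
    blurred = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
    img.paste(blurred, box)
    img.save(out_path)

# Example: blur an object detected at (100, 50) with a 200x120-pixel box.
blur_region("keyframe_001.jpg",
            {"x": 100, "y": 50, "w": 200, "h": 120},
            "keyframe_001_blurred.jpg")
```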

High-resolution keyframes in conjunction with Custom Vision can be leveraged for many different custom use cases. For example, you can train a model to determine what type of car (or even what breed of cat) appears in a shot. Maybe you want to identify the location or the set where a scene was filmed for editing purposes. If you have objects of interest that are unique to your use case, use Custom Vision to build a custom classifier to tag visuals or a custom object detector to tag and provide bounding boxes for visual objects.

Try it for yourself

These are just a few of the new opportunities enabled by the availability of high-resolution keyframes in Video Indexer. Now it is up to you to get additional insights from your video by taking the keyframes from Video Indexer and running additional image processing with any of the Vision models we have just discussed. To start, upload your video to Video Indexer and retrieve the high-resolution keyframes once the indexing job is complete; then create an account and get started with the Computer Vision API and Custom Vision.

Have questions or feedback? We would love to hear from you. Use our UserVoice page to help us prioritize features, leave a comment below, or email VISupport@Microsoft.com with any questions.

Source: Azure

Azure Sphere guardian module simplifies & secures brownfield IoT

One of the toughest IoT quandaries is figuring out how to bake IoT into existing hardware in a secure, cost-effective way. For many customers, scrapping existing hardware investments for new IoT-enabled devices (“greenfield” installations) isn’t feasible. And retrofitting mission-critical devices that are already in service with IoT (“brownfield” installations) is often deemed too risky, too complicated, and too expensive.

This is why we’re thrilled about a major advancement for Azure Sphere that opens up the brownfield opportunity, helping make IoT retrofits more secure, substantially easier, and more cost effective than ever before. The guardian module with Azure Sphere simplifies the transformation of brownfield devices into locked-down, internet-connected, data-wielding, intelligent devices that can transform business.

For an in-depth exploration of the guardian module and how it’s being used at major corporations like Starbucks, sign up for the upcoming Azure Sphere Guardian Module webinar.

The guardian module with Azure Sphere offers some key advantages

Like all Microsoft products, Azure Sphere is loaded with robust security features at every turn—from silicon to cloud. For brownfield installations, the guardian module with Azure Sphere physically plugs into existing equipment ports without the need for any hardware redesign.

Azure Sphere, rather than the device itself, talks to the cloud. The guardian module processes data and controls the device without exposing existing equipment to the potential dangers of the internet. The module shields brownfield equipment from attack by restricting the flow of data to only trusted cloud and device communication partners while also protecting module and equipment software.

Using the Azure Sphere guardian module, enterprises can enable any number of secure operations between the device and the cloud. The device can even use the Azure Sphere Security Service for certificate-based authentication, failure reporting, and software updates.

Opportunities abound for the Microsoft partner ecosystem

Given the massive scale of connectable equipment already in use in retail, industrial, and commercial settings, the new guardian module presents a lucrative opportunity for Microsoft partners. Azure Sphere can connect an enormous range of devices of all types, leading the way for a multitude of practical applications that can pay off through increased productivity, predictive maintenance, cost savings, new revenue opportunities, and more.

Fulfilling demand for such a diverse set of use cases is only possible thanks to Azure Sphere's expanding partner ecosystem. Recent examples of this growth include our partnership with NXP to deliver a new Azure Sphere-certified chip, an extension of their i.MX 8 high-performance applications processor series that brings greater compute capabilities to support advanced workloads, as well as our collaboration with Qualcomm Technologies, Inc. to deliver the first cellular-enabled Azure Sphere chip, which gives our customers the ability to securely connect anytime, anywhere.

Starbucks uses Azure Sphere guardian module to connect coffee machines

If you saw Satya Nadella’s Vision Keynote at Build 2019, you probably recall the demonstration of Starbucks’ IoT-connected coffee machines. But what you may not know is the Azure Sphere guardian module is behind the scenes, enabling Starbucks to connect these existing machines to the cloud.

As customers wait for their double-shot, no-whip mochas to brew, these IoT-enabled machines are doing more than meets the eye. They’re collecting more than a dozen data points for each precious shot, like the types of beans used, water temperature, and water quality. The solution enables Starbucks to proactively identify any issues with their machines in order to smooth their customers’ paths to caffeinated bliss.

Beyond predictive maintenance, Azure Sphere will enable Starbucks to transmit new recipes directly to machines in 30,000 stores rather than manually uploading recipes via thumb drives, saving Starbucks lots of time, money, and thumb drives. Watch this Microsoft Ignite session to see how Starbucks is tackling IoT at scale in pursuit of the perfect pour.

As an ecosystem, we have a tremendous opportunity to meet demand for brownfield installations and help our customers quickly bring their existing investments online without taking on risk and jeopardizing mission-critical equipment. The first guardian modules are available today from Avnet and AI-Link, with more expected soon.

Discover the value of adding secured connectivity to existing mission-critical equipment by registering for our upcoming Azure Sphere Guardian Modules webinar. You will get a guided tour of the guardian module, including a deep dive into its architecture and the opportunity this open-source offering presents to our partner community. We'll also hear from Starbucks about what they've learned since implementing the guardian module with Azure Sphere.
Source: Azure

Azure Stack HCI now running on HPE Edgeline EL8000

Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure's hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance? 

Well, Microsoft and HPE have teamed up to validate the HPE Edgeline EL8000 Converged Edge system for Microsoft's Azure Stack HCI program. Designed specifically for space-constrained environments, the HPE Edgeline EL8000 Converged Edge system has a unique 17-inch-depth form factor that fits into limited infrastructures too small for other x86 systems. The chassis has an 8.7-inch width, which brings additional flexibility for deploying at the deep edge, whether in a telco environment, a mobile vehicle, or a manufacturing floor. This Network Equipment-Building System (NEBS)-compliant system delivers secure scalability.

The HPE Edgeline EL8000 Converged Edge system delivers:

Traditional x86 compute optimized for edge deployments, far from the traditional data center, without sacrificing compute performance.
Edge-optimized remote system management with wireless capabilities based on the Redfish industry standard.
Compact form factor, with short-depth and half-width options.
Rugged, modular form factor for secure scalability and serviceability in edge and hostile environments, including NEBS level three and American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) level three/four compliance.
Broad accelerator support for emerging edge artificial intelligence (AI) use cases, such as field-programmable gate arrays or graphics processing units.
Up to four independent compute nodes, which are cluster-ready with embedded networks.

Modular design providing broad configuration possibilities

The HPE Edgeline EL8000 Converged Edge system offers flexibility of choice for compute density or for input/output expansion. These compact, ruggedized systems offer high-performance capacity to support the use cases that matter most, including media streaming, IoT, AI, and video analytics. The HPE Edgeline EL8000 is a versatile platform that enables edge compute transformation: as use case requirements change, the system's flexible and modular architecture can scale to meet them.

Seamless management and security features with HPE Edgeline Chassis Manager

The HPE Edgeline EL8000 Converged Edge system features the HPE Edgeline Chassis Manager, which limits downtime by providing system-level health monitoring and alerts. The Chassis Manager increases efficiency and reliability by managing the chassis fan speeds for each installed server blade and monitoring the health and status of the power supply, and it simplifies firmware upgrade management and implementation.

Microsoft Azure Stack HCI:

Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.

Achieve industry-leading virtual machine performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote-direct memory access (RDMA) networking.

Help keep apps and data secure with shielded virtual machines, network microsegmentation, and native encryption.

You can take advantage of cloud and on-premises working together with a hyperconverged infrastructure platform connected to the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services, including:

Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).

Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.

Cloud Witness, to use Azure as the lightweight tie breaker for cluster quorum.

Azure Backup for offsite data protection and to protect against ransomware.

Azure Update Management for update assessment and update deployments for Windows virtual machines (VMs) running in Azure and on-premises.

Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site virtual private network (VPN).

Azure File Sync, to sync your file server with the cloud.

Azure Arc for Servers to manage role-based access control, governance, and compliance policy from the Azure portal.

By deploying the Microsoft and HPE HCI solution, you can quickly solve your branch office and edge needs with high performance and resiliency while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.
Source: Azure

Microsoft partner ANSYS extends ability of Azure Digital Twins platform

Digital twins have moved from an exciting concept to reality. More companies than ever are connecting assets and production networks with sensors and using analytics to optimize operations across machinery, plants, and industrial networks. As exact virtual representations of the physical environment, digital twins incorporate historical and real-time data to enable sophisticated spatial analysis of key relationships. Teams can use digital twins to model the impact of process changes before putting them into production, reducing time, cost, and risk.

For the second year in a row, Gartner has identified digital twins as one of the top 10 strategic technology trends. According to Gartner, while 13 percent of organizations that are implementing IoT have already adopted digital twins, 62 percent are in the process or plan to do so. Gartner predicts a tipping point in 2022 when two out of three companies will have deployed at least one digital twin to optimize some facet of their business processes.

This is why we’re excited by the great work of ANSYS, a Microsoft partner working to extend the value of the Microsoft Azure Digital Twins platform for our joint customers. The ANSYS Twin Builder combines the power of physics-based simulations and analytics-driven digital twins to provide real-time data transfer, reusable components, ultrafast modeling, and other tools that enable teams to perform myriad “what-if” analyses, and build, validate, and deploy complex systems more easily.

“Collaborating with ANSYS to create an advanced IoT digital twins framework provides our customers with an unprecedented understanding of their deployed assets’ performance by leveraging physics and simulation-based analytics.” — Sam George, corporate vice president of Azure IoT, Microsoft

Digital twins model key relationships, simplifying design

Digital twins will be first and most widely adopted in manufacturing, as industrial companies invest millions to build, maintain, and track the performance of remotely deployed IoT-enabled assets, machinery, and vehicles. Operators depend on near-continuous asset uptime to achieve production goals, meaning supply-chain bottlenecks, machine failures, or other unexpected downtime can hamper production output and reduce revenue recognition for the company and its customers. The use of digital twins, analytics, business rules, and automation helps companies avoid many of these issues by guiding decision-making and enabling instant informed action.

Digital twins can also simulate a multidimensional view of asset performance that can be endlessly manipulated and perfected prior to producing new systems or devices, ending not just the guesswork of manually predicting new processes, but also the cost of developing multiple prototypes. Digital twins, analytics-based tools, and automation also equip companies to avoid unnecessary costs by prioritizing issues for investment and resolution.

Digital twins can optimize production across networks

Longer-term, companies can more easily operate global supply chains, production networks, and digital ecosystems through the use of IoT, digital twins, and other tools. Enterprise teams and their partners will be able to pivot from sensing and reacting to changes to predicting them and responding immediately based on predetermined business rules. Utilities will be better prepared to predict and prevent accidents, companies poised to address infrastructure issues before customers complain, and stores more strategically set up to maintain adequate inventories.

Simulations increase digital twins’ effectiveness

ANSYS' engineering simulation software enables customers to model the design of nearly every physical product or process. The simulations are then compiled into runtime modules that can execute in a Docker container and integrate automatically into IoT processing systems, reducing the heavy lift of IoT customization.

With the combined Microsoft Azure Digital Twins-ANSYS physics-based simulation capabilities, customers can now:

Simulate baseline and failure data resulting in accurate, physics-based digital twins models.
Use physics-based predictive models to increase accuracy and improve ROI from predictive maintenance programs.
Leverage “what-if analyses” to simulate different solutions before selecting the best one.
Use virtual sensors to estimate critical quantities through simulation.

In addition, companies can use physics-based simulations within the Microsoft-ANSYS platform to pursue high-value use cases such as these:

Optimize asset performance: Teams can use digital twins to model asset performance, evaluate current performance against targets, and identify, prioritize, and resolve issues based on the value they create.
Manage systems across their lifecycle: Teams can take a systems approach to managing complex and costly assets, driving throughput and retiring systems at the ideal time to avoid over-investing in market-lagging capabilities.
Perform predictive maintenance: Teams can use analytics to determine and schedule maintenance, reduce unplanned downtime and costly break-fix repairs, and perform repairs in order of importance, freeing team members from unnecessary work.
Orchestrate systems: Companies will eventually create systems of intelligence by linking their equipment, systems, and networks to orchestrate production across plants, campuses, and regions, attaining new levels of visibility and efficiency.
Fuel product innovation: With rapid virtual prototyping, teams will be able to explore myriad product versions, reducing the time and cost required to innovate, decreasing product failures, and enabling the development of customized products.
Enhance employee training: Companies can use digital twins to conduct employee training, improving effectiveness on the job while reducing production design errors caused by human error.
Eliminate physical constraints: Digital twins eliminate the physical barriers to experimentation, meaning users can simulate tests and conditions for remote assets, such as equipment in other plants, regions, or space.

Opening up new opportunities for partners

According to Gartner, more than 20 billion connected devices are projected by 2020, and adoption of IoT and digital twins is only going to accelerate. In fact, MarketsandMarkets™ estimates that the digital twins market will reach a value of $3.8 billion in 2019 and grow to $35.8 billion by 2025. Our recent IoT Signals research found that 85 percent of decision-makers have already adopted IoT, 74 percent have projects in the "use" phase, and businesses expect to achieve 30 percent ROI on their IoT projects going forward. The top use case participants want to pursue is operations optimization (56 percent), to reap more value from the assets and processes they already possess. That's why digital twins are so important right now: they provide a framework to accomplish this goal with greater accuracy than was previously possible.

“As industrial companies require comprehensive field data and actionable insights to further optimize deployed asset performance, ecosystem partners must collaborate to form business solutions. ANSYS Twins Builder’s complementary simulation data stream augments Azure IoT Services and greatly enhances its customers’ understanding of asset performance.”—Eric Bantegnie, vice president and general manager at ANSYS

Thanks to Microsoft partners like ANSYS, companies are better equipped to unlock productivity and efficiency gains by removing critical constraints, including physical barriers, from process modeling. With tools like digital twins, companies will be limited only by their own creativity, creating a more intelligent and connected world where all have more opportunities to flourish.

Learn more about Microsoft Azure Digital Twins and ANSYS Twin Builder.
Source: Azure

Introducing maintenance control for platform updates

Today we are announcing the preview of a maintenance control feature for Azure Virtual Machines that gives customers with highly sensitive workloads more control over platform maintenance. Using this feature, customers can control all impactful host updates, including rebootless updates, for up to 35 days.

Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Almost all updates have zero impact on your Azure virtual machines (VMs). When updates do have an effect, Azure chooses the least impactful method:

If the update does not require a reboot, the VM is briefly paused while the host is updated, or it's live migrated to an already updated host. These rebootless maintenance operations are applied fault domain by fault domain, and progress is stopped if any warning health signals are received.
In the extremely rare scenario when the maintenance requires a reboot, the customer is notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you.

Typically, rebootless updates do not impact the overall customer experience. However, certain very sensitive workloads may require full control of all maintenance activities. This new feature will benefit those customers who deploy this type of workload.

Who is this for?

The ability to control the maintenance window is particularly useful when you deploy workloads that are extremely sensitive to interruptions running on an Azure Dedicated Host or an Isolated VM, where the underlying physical server runs a single customer’s workload. This feature is not supported for VMs deployed in hosts shared with other customers.

The typical customer who should consider using this feature requires full control over updates: they need the latest updates in place, but their business requires that at least some of their cloud resources be updated on their own schedule, with zero impact.

Customers like financial services providers, gaming companies, or media streaming services using Azure Dedicated Hosts or Isolated VMs will benefit by being able to manage necessary updates without any impact on their most critical Azure resources.

How does it work?

You can enable the maintenance control feature for platform updates by adding a custom maintenance configuration to a resource (either an Azure Dedicated Host or an Isolated VM). When the Azure updater sees this custom configuration, it will skip all non-zero-impact updates, including rebootless updates. For as long as the maintenance configuration is applied to the resource, it will be your responsibility to determine when to initiate updates for that resource. You can check for pending updates on the resource and apply updates within the 35-day window. When you initiate an update on the resource, Azure applies all pending host updates. A new 35-day window starts after another update becomes pending on the resource. If you choose not to apply the updates within the 35-day window, Azure will automatically apply all pending updates for you, to ensure that your resources remain secure and get other fixes and features.
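As a rough sketch of that flow, the snippet below checks a resource for pending updates and then applies them through the Microsoft.Maintenance resource provider. The resource paths and API version shown are assumptions based on the preview documentation, so verify them against the documentation linked below before relying on them:

```python
import requests
from azure.identity import DefaultAzureCredential

# Assumption: the Microsoft.Maintenance resource paths and API version below
# follow the preview documentation; verify them before relying on this sketch.
SUB, RG, VM = "<subscription-id>", "<resource-group>", "<isolated-vm-name>"
API = "2018-06-01-preview"
BASE = (f"https://management.azure.com/subscriptions/{SUB}"
        f"/resourceGroups/{RG}/providers/Microsoft.Compute/virtualMachines/{VM}")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
headers = {"Authorization": f"Bearer {token.token}"}

# 1. Check the resource for pending platform updates.
pending = requests.get(
    f"{BASE}/providers/Microsoft.Maintenance/updates?api-version={API}",
    headers=headers)
pending.raise_for_status()
print(pending.json())

# 2. At a time that suits you (within the 35-day window), apply them.
apply_result = requests.put(
    f"{BASE}/providers/Microsoft.Maintenance/applyUpdates/default?api-version={API}",
    headers=headers)
apply_result.raise_for_status()
```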

Things to consider

You can automate platform updates for your maintenance window by calling “apply pending update” commands through your automation scripts. This can be batched with your application maintenance. You can also make use of Azure Functions and schedule updates at regular intervals.
Maintenance configurations are supported across subscriptions and resource groups, so you can manage all maintenance configurations in one place and use them anywhere they're needed.

Getting started

The maintenance control feature for platform updates is available in preview now. You can get started by using CLI, PowerShell, REST APIs, .NET, or SDK. Azure portal support will follow.

For more information, please refer to the documentation: Maintenance for virtual machines in Azure.

FAQ

Q: Are there cases where I can’t control certain updates? 

A:  In case of a high-severity security issue that may endanger the Azure platform or our customers, Azure may need to override customer control of the maintenance window and push the change. This is a rare occurrence that would only be used in extreme cases, such as a last resort to protect you from critical security issues.

Q: If I don't self-update within 35 days, what action will Azure take?

A:  If you don't execute a platform update within 35 days, Azure will apply the pending updates fault domain by fault domain. This is done to maintain security and performance, and to fix any defects.

Q: Is this feature supported in all regions?

A:  Maintenance control is supported in all public cloud regions. Government cloud regions are not currently supported, but that support will come later.
Source: Azure

Networking enables the new world of Edge and 5G Computing

At the recent Microsoft Ignite 2019 conference, we introduced two new and related perspectives on the future and roadmap of edge computing.

Before getting further into the details of Network Edge Compute (NEC) and Multi-access Edge Compute (MEC), let's take a look at the key scenarios that are emerging in line with 5G network deployments. For a decade, we have been working with customers to move their workloads from their on-premises locations to Azure to take advantage of the massive economies of scale of the public cloud. We get this scale with the ongoing build-out of new Azure regions and the constant increase of capacity in our existing regions, reducing the overall costs of running data centers.

For most workloads, running in the cloud is the best choice. Our ability to innovate and run Azure as efficiently as possible allows customers to focus on their business instead of managing physical hardware and associated space, power, cooling, and physical security. Now, with the advent of 5G mobile technology promising larger bandwidth and better reliability, we see significant requirements for low-latency offerings to enable scenarios such as smart buildings, factories, and agriculture. The "smart" prefix highlights that there is a compute-intensive workload, typically running machine learning or artificial intelligence logic, that must execute in near real time. Ultimately, the latency, that is, the time from when data is generated until it is analyzed and a meaningful result is available, becomes critical for these smart scenarios. Latency has become the new currency, and to reduce latency we need to move the required computing resources closer to the sensors, data origin, or users.

Multi-access Edge Compute: The intersection of compute and networking

Internet of Things (IoT) creates incredible opportunities, but it also presents real challenges. Local connectivity in the enterprise has historically been limited to Ethernet and Wi-Fi. Over the past two decades, Wi-Fi has become the de facto standard for wireless networks, not because it is necessarily the best solution, but because of its entrenchment in the consumer ecosystem and the lack of alternatives. Our customers from around the world tell us that deploying Wi-Fi to service their IoT devices requires compromises on coverage, bandwidth, security, manageability, reliability, and interoperability/roaming. For example, autonomous robots require better bandwidth, coverage, and reliability to operate safely within a factory. Airports generally have decent Wi-Fi coverage inside the terminals, but on the tarmac coverage often drops significantly, making it insufficient to power the smart airport.

Next-gen private cellular connectivity greatly improves bandwidth, coverage, reliability, and manageability. Through the combination of local compute resources and private mobile connectivity (private LTE), we can enable many new scenarios. For instance, in the smart-factory example used earlier, customers can now run their robotic control logic highly available and independent of connectivity to the public cloud. MEC helps ensure that operations and any associated critical first-stage data processing remain up and that production can continue uninterrupted.

With its promise of near-infinite compute and storage, the cloud is ideal for large data-intensive and computational tasks, such as machine learning jobs for predictive maintenance analytics. At this year's Ignite conference, we shared our thoughts and experience, along with a technology preview of MEC with Azure. The technology preview brings private mobile network capabilities to Azure Stack Edge, an on-premises compute platform managed from Azure. In practical terms, the MEC allows locally controlling the robots, even if the factory suffers a network outage.

From an edge computing perspective, we have containers running across Azure Stack Edge and Azure. A key aspect is that the same programming paradigm can be used for Azure and the edge-based MEC platform. Code can be developed and tested in the cloud, then seamlessly deployed at the edge. Developers can take advantage of the vast array of DevOps tools and solutions available in Azure and apply them to new and exciting edge scenarios. The MEC technology preview focuses on a simplified experience for cross-premises deployment and operation of managed compute and Virtual Network Functions, with integration to existing Azure services.

Network Edge Compute

Whereas Multi-access Edge Compute (MEC) is intended to be deployed at the customer’s premises, Network Edge Compute (NEC) is the network carrier equivalent, placing the edge computing platform within their network. Last week we announced the initial deployment of our NEC platform in AT&T’s Dallas facility. Instead of needing to access applications and games running in the public cloud, software providers can bring their solutions physically closer to their end-users. At AT&T’s Business Summit we gave an augmented reality demonstration, working with Taqtile, and showed how to perform maintenance on an aircraft landing gear.

The HoloLens user sees the real landing gear alongside a virtual manual, with specific parts of the landing gear virtually highlighted. This mixing of real-world and virtual objects displayed via HoloLens is what is often referred to as augmented reality (AR) or mixed reality (MR).

Edge Computing Scenarios

We have been showcasing multiple MEC and NEC use-cases over these past few weeks. For more details please refer to our Microsoft Ignite MEC and 5G session.

Mixed Reality (MR)

Mixed reality use cases such as remote assistance can revolutionize several industrial automation scenarios. Lower latencies and higher bandwidth, coupled with local compute, enable new remote-rendering scenarios that reduce battery consumption in handsets and MR devices.

Retail e-fulfillment

Attabotics provides a robotic warehousing and fulfillment system for the retail and supply chain industries. Attabotics employs robots (Attabots) for storage and retrieval of goods from a grid of bins. A typical storage structure has about 100,000 bins and is serviced by between 60 and 80 Attabots. Azure Sphere powers the robots themselves. Communications using Wi-Fi or traditional 900 MHz spectrum do not meet the scale, performance, and reliability requirements.
  
The Nexus robot control system, used for command and control of the warehousing system, is built natively on Azure and uses Azure IoT Central for telemetry. With a private LTE (CBRS) radio from our partners Sierra Wireless and Ruckus Wireless and packet core partner Metaswitch, we enabled the Attabots to communicate over a private LTE network. The reduced latency improved reliability and made the warehousing solution more efficient. The entire warehousing solution, including the private LTE network used for a warehouse, runs on a single Azure Stack Edge.

Gaming

Multi-player online gaming is one of the canonical scenarios for low-latency edge computing. Game Cloud Studios has developed a game based on Azure PlayFab, called Tap and Field. The game backend and controls run on Azure, while the game server instances reside and run on the NEC platform. Lower latencies result in better gaming experiences for nearby players at e-sports events, arcades, arenas, and similar venues.

Public Safety

The proliferation of drone use is disrupting many industries, from security and privacy to the delivery of goods. Air Traffic Control operations are on the cusp of one of the most significant disruptive events in the field, going from monitoring only dozens of aircraft today to thousands tomorrow. This necessitates a sophisticated near real-time tracking system. Vorpal VigilAir has built a solution in which drone and operator tracking is done using a distributed sensor network powered by a real-time tracking application running on the NEC.

Data-driven digital agriculture solutions

Azure FarmBeats is an Azure solution that enables the aggregation of agriculture datasets across providers and the generation of actionable insights by building artificial intelligence (AI) or machine learning (ML) models that fuse those datasets. Gathering datasets from sensors distributed across the farm requires a reliable private network, and generating insights requires a robust edge computing platform capable of operating in a disconnected mode in remote locations where connectivity to the cloud is often sparse. Our solution, based on Azure Stack Edge along with a managed private LTE network, offers a reliable and scalable connectivity fabric along with the right compute resources close to the farm.

MEC, NEC, and Azure: Bringing compute everywhere

MEC enables a low-latency connected Azure platform in your location, NEC provides a similar platform in a network carrier’s central office, and Azure provides a vast array of cloud services and controls.

At Microsoft, we fundamentally believe in providing options for all customers. Because it is impractical to deploy Azure datacenters in every major metropolitan city throughout the world, our new edge computing platforms provide a solution for specific low-latency application requirements that cannot be satisfied in the cloud. Software developers can use the same programming and deployment models for containerized applications using MEC where private mobile connectivity is required, deploying to NEC where apps are optimally located outside the customer’s premises, or directly in Azure. Many applications will look to take advantage of combined compute resources across the edge and public cloud.

We are building a new extended platform and continue to work with the growing ecosystem of mobile connectivity and edge computing partners. We are excited to enable a new wave of innovation unleashed by the convergence of 5G, private mobile connectivity, IoT and containerized software environments, powered by new and distributed programming models. The next phase of computing has begun.
Source: Azure

Building Xbox game streaming with Site Reliability best practices

Last month, we started sharing the DevOps journey at Microsoft through the stories of several teams at Microsoft and how they approach DevOps adoption. As the next story in this series, we want to share the transition one team made from a classic operations role to a Site Reliability Engineering (SRE) role: the story of the Xbox Reliability Engineering and Operations (xREO) team.

This transition was not easy and came out of necessity when Microsoft decided to bring Xbox games to gamers wherever they are through cloud game streaming (Project xCloud). In order to deliver cutting-edge technology with a top-notch customer experience, the team had to redefine the way it worked: improving collaboration with the development team, investing in automation, and getting involved in the early stages of the application lifecycle. In this blog, we'll review some of the key learnings the team collected along the way. To explore the full story of the team, see the journey of the xREO team.

Consistent gameplay requirements and the need to collaborate

A consistent experience is crucial to a successful game streaming session. A game streamed from the cloud has to feel like it is running on a nearby console. This means creating a globally distributed cloud solution that runs in many data centers, close to end users. Azure's global infrastructure makes this possible, but operating a system running on top of so many Azure regions is a serious challenge.

The Xbox developers who started architecting and building this technology understood that they could not just build this system and "throw it over the wall" to operations. Both teams had to come together and collaborate through the entire application lifecycle so the system could be designed from the start with consideration for how it would be operated in a production environment.

Architecting a cloud solution with operations in mind

In many large organizations, it is common to see development and operations teams working in silos. Developers don't always consider operations when planning and building a system, while operations teams are not empowered to touch code even though they deploy it and operate it in production. With an SRE approach, system reliability is baked into the entire application lifecycle, and the team that operates the system in production is a valued contributor in the planning phase. Involving the xREO team in the design phase created a collaborative environment: the teams made joint technology choices and architected a system that could operate at the required scale.

Leveraging containers to clearly define ownership

One of the first technological decisions the development and xREO teams made together was to implement a microservices architecture using container technologies. This allowed the development teams to containerize the .NET Core microservices they would own and decouple them from the cloud infrastructure running the containers, which would be owned by the xREO team.

Another technological decision both teams made early on was to use Kubernetes as the underlying container orchestration platform. This allowed the xREO team to leverage Azure Kubernetes Service (AKS), a managed Kubernetes cloud platform that simplifies the deployment of Kubernetes clusters, removing a lot of the operational complexity the team would otherwise face running multiple clusters across several Azure regions. These joint choices made ownership clear: the developers are responsible for everything inside the containers, and the xREO team is responsible for the AKS clusters and the other Azure services that make up the cloud infrastructure hosting those containers. Each team owns the deployment, monitoring, and operation of its respective piece in production.

This kind of approach creates clear accountability and allows for easier incident management in production, something that can be very challenging in a monolithic architecture where infrastructure and application logic have code dependencies and are hard to untangle when things go sideways.

Scaling through infrastructure automation

Another best practice the xREO team invested in was infrastructure automation. Deploying multiple cloud services manually in each Azure region was not scalable and would take too much time. With a practice known as "infrastructure as code" (IaC), the team used Azure Resource Manager templates to create declarative definitions of cloud environments, allowing deployments to multiple Azure regions with minimal effort.

With infrastructure managed as code, it can also be deployed using continuous integration and continuous delivery (CI/CD), bringing further automation to the process of deploying new Azure resources to existing data centers, updating infrastructure definitions, or bringing new Azure regions online when needed. Together, IaC and CI/CD allowed the team to remain lean, avoid repetitive mundane work, and remove most of the risk of human error that comes with manual steps. Instead of spending time on manual work and checklists, the team can focus on further improving the platform and its resilience.
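As a generic illustration of this pattern (not the xREO team's actual pipeline), the sketch below uses the Azure SDK for Python to deploy one declarative template to several regions; the template file, resource group naming scheme, and region list are hypothetical:

```python
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("cluster-infra.json") as f:  # hypothetical per-region ARM template
    template = json.load(f)

# Roll the same declarative definition out to each region, e.g. from a CI/CD job.
for region in ["westus2", "westeurope", "koreacentral"]:
    client.deployments.begin_create_or_update(
        resource_group_name=f"streaming-{region}",  # hypothetical naming scheme
        deployment_name=f"infra-{region}",
        parameters={
            "properties": {
                "mode": "Incremental",
                "template": template,
                "parameters": {"location": {"value": region}},
            }
        },
    ).result()  # block until this region's deployment completes
```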

Site Reliability Engineering in action 

The journey of the xREO team started with a need to bring the best customer experience to gamers. This is a great example that shows how teams who want to delight customers with new experiences through cutting edge innovation must evolve the way they design, build, and operate software. Shifting their approach to operations and collaborating more closely with the development teams was the true transformation the xREO team has undergone.

With this new mindset in place, the team is now well positioned to continue building in more resilience and further scaling the system, and, in doing so, to deliver the promise of cloud game streaming to every gamer.

Resources

The full story of the xREO team
Additional stories: The DevOps journey at Microsoft
Microsoft Game Stack

Source: Azure

Announcing the preview of Azure Spot Virtual Machines

We're announcing the preview of Azure Spot Virtual Machines. Azure Spot Virtual Machines provide access to unused Azure compute capacity at deep discounts. Spot pricing is available on single Virtual Machines in addition to Virtual Machine Scale Sets (VMSS). This enables you to deploy a broader variety of workloads on Azure while enjoying access to discounted pricing. Spot Virtual Machines offer the same characteristics as pay-as-you-go Virtual Machines, with differences in pricing and eviction: Spot Virtual Machines can be evicted at any time if Azure needs the capacity back.

The workloads that are ideally suited to run on Spot Virtual Machines include, but are not necessarily limited to, the following:

•    Batch jobs.
•    Workloads that can sustain and/or recover from interruptions.
•    Development and test.
•    Stateless applications that can use Spot Virtual Machines to scale out, opportunistically saving cost.
•    Short-lived jobs which can easily be run again if the Virtual Machine is evicted.

Preview for Spot Virtual Machines will replace the preview of Azure low-priority Virtual Machines on scale sets. Eligible low-priority Virtual Machines will be automatically transitioned over to Spot Virtual Machines. Please refer to the FAQ for additional information. 

Pricing

Unlike low-priority Virtual Machines, prices for Spot Virtual Machines will vary based on capacity for a size or SKU in an Azure region. Spot pricing can give you insights into the availability and demand for a given Azure Virtual Machine series and specific size in a region. The prices will change slowly to provide stabilization, thus allowing you to better manage budgets. In the Azure portal, you will have access to the current Azure Virtual Machine Spot prices to easily determine which region or Virtual Machine size best fits your needs. Spot prices are capped at pay-as-you-go prices.
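For programmatic access, one option is the public Azure Retail Prices API, which can be queried without authentication. Here is a minimal sketch, assuming the documented response fields (Items, retailPrice, meterName) and an example SKU and region:

```python
import requests

# The Azure Retail Prices API is public (no authentication). Field names follow
# its documented response shape (Items, retailPrice, meterName); the SKU and
# region below are just examples.
url = "https://prices.azure.com/api/retail/prices"
params = {"$filter": "serviceName eq 'Virtual Machines' "
                     "and armRegionName eq 'eastus' "
                     "and armSkuName eq 'Standard_D2s_v3'"}
items = requests.get(url, params=params).json()["Items"]
for item in items:
    if "Spot" in item["meterName"]:  # Spot meters are flagged in the meter name
        print(item["meterName"], item["retailPrice"], item["unitOfMeasure"])
```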
 

Deployment

Spot Virtual Machines are easy to deploy and manage. Deploying a Spot Virtual Machine is similar to configuring and deploying a regular Virtual Machine. For example, in the Azure portal, you can simply select Azure Spot Instance to deploy a Spot Virtual Machine. You can also define the maximum price for your Spot Virtual Machines. Here are the two options (a code sketch follows them):

You can choose to deploy your Spot Virtual Machines without capping the price. Azure will charge you the Spot Virtual Machine price at any given time, giving you peace of mind that your Virtual Machines will not be evicted for price reasons.
 
Alternatively, you can decide to provide a specific price to stay in your budget. Azure will not charge you above the maximum price you set and will evict the Virtual Machine if the spot price rises above your defined maximum price.
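As a sketch of the second option using the Azure SDK for Python (azure-mgmt-compute), the request below caps the price at $0.05 per hour; all resource names are placeholders, and the network interface is assumed to already exist:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import (
    BillingProfile, HardwareProfile, ImageReference, NetworkInterfaceReference,
    NetworkProfile, OSProfile, StorageProfile, VirtualMachine)

# All names are placeholders; the network interface must already exist.
compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

spot_vm = VirtualMachine(
    location="eastus",
    priority="Spot",                                 # request Spot capacity
    eviction_policy="Deallocate",                    # stop-deallocate on eviction
    billing_profile=BillingProfile(max_price=0.05),  # your cap; -1 = price-uncapped
    hardware_profile=HardwareProfile(vm_size="Standard_D2s_v3"),
    storage_profile=StorageProfile(image_reference=ImageReference(
        publisher="Canonical", offer="UbuntuServer",
        sku="18.04-LTS", version="latest")),
    os_profile=OSProfile(computer_name="spot-worker-1",
                         admin_username="azureuser",
                         admin_password="<your-password>"),
    network_profile=NetworkProfile(network_interfaces=[
        NetworkInterfaceReference(id="<existing-nic-resource-id>")]),
)
compute.virtual_machines.begin_create_or_update(
    "<resource-group>", "spot-worker-1", spot_vm).result()
```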
 

There are a few other options available to lower costs:

If your workload does not require a specific Virtual Machine series and size, then you can find other Virtual Machines in the same region that may be cheaper.
If your workload is not dependent on a specific region, then you can find a different Azure region to reduce your cost.

Quota

As part of this announcement, to give you better flexibility, Azure is also rolling out a quota for Spot Virtual Machines that is separate from your pay-as-you-go Virtual Machine quota. The quota for Spot Virtual Machines and Spot VMSS instances is a single quota for all Virtual Machine sizes in a specific Azure region. This approach will give you easy access to a broader set of Virtual Machines.
 

Handling Evictions

Azure will try to keep your Spot Virtual Machine running and minimize evictions, but your workload should be prepared to handle evictions, as runtime for Azure Spot Virtual Machines and VMSS instances is not guaranteed. You can optionally get a 30-second eviction notice by subscribing to scheduled events (a polling sketch follows below). Virtual Machines can be evicted for the following reasons:

Spot prices have gone above the max price you defined for the Virtual Machine. Azure Spot Virtual Machines are evicted when the Spot price for the Virtual Machine you have chosen rises above the price you defined at the time of deployment. You can try to redeploy the Virtual Machine with a higher maximum price.
Azure needs to reclaim capacity.

In both scenarios, you can try to redeploy the Virtual Machine in the same region or availability zone.
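Here is a minimal polling sketch for those eviction notifications, using the Azure Instance Metadata Service Scheduled Events endpoint from inside the VM; a Spot eviction surfaces as a "Preempt" event (the API version shown is from the preview-era documentation, so check the current docs):

```python
import time
import requests

# Azure Instance Metadata Service (IMDS) Scheduled Events endpoint, queried
# from inside the VM; the first call can take up to two minutes while the
# service is provisioned. A Spot eviction appears as a "Preempt" event.
EVENTS_URL = ("http://169.254.169.254/metadata/scheduledevents"
              "?api-version=2019-08-01")

def preemption_pending() -> bool:
    doc = requests.get(EVENTS_URL, headers={"Metadata": "true"}).json()
    return any(e["EventType"] == "Preempt" for e in doc.get("Events", []))

while not preemption_pending():
    time.sleep(10)  # poll well inside the ~30-second notice window
print("Eviction notice received: checkpoint state and drain work now.")
```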

Best practices

Here are some effective ways to best utilize Azure Spot Virtual Machines:

For long-running operations, try to create checkpoints so that you can restart your workload from a previous known checkpoint to handle evictions and save time.
In scale-out scenarios, to save costs, you can have two VMSS, where one has regular Virtual Machines and the other has Spot Virtual Machines. You can put both in the same load balancer to opportunistically scale out.
Listen to eviction notifications in the Virtual Machine to get notified when your Virtual Machine is about to be evicted.
If you are willing to pay up to the pay-as-you-go price, set the eviction type to "Capacity Eviction only" and provide -1 as the max price in the API; Azure never charges you more than the current Spot Virtual Machine price.
To handle evictions, build retry logic to redeploy Virtual Machines. If you do not require a specific Virtual Machine series and size, then try to deploy a different size that matches your workload needs.
While deploying VMSS, select max spread in the portal's management tab (FD==1 in the API) to find capacity in a zone or region.

Learn more

Spot Virtual Machine details
Spot Virtual Machine pricing: Windows and Linux
Create Spot Virtual Machines in Portal
Create Spot Virtual Machines in Azure CLI
Create Spot Virtual Machines in Azure PowerShell
Create Spot Virtual Machines in Azure Resource Manager templates
Create Spot VMSS in Azure Resource Manager templates
Planned Azure Batch support for Spot Virtual Machines 

Source: Azure

Microsoft has validated the Lenovo ThinkSystem SE350 edge server for Azure Stack HCI

Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure's hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance?

Microsoft and Lenovo have teamed up to validate the Lenovo ThinkSystem SE350 for Microsoft's Azure Stack HCI program. To see all Lenovo servers validated for Azure Stack HCI, visit the Azure Stack HCI catalog.

Lenovo ThinkSystem SE350:

The ThinkSystem SE350 is the latest workhorse for the edge. Designed and built with the unique requirements for edge servers in mind, it is versatile enough to stretch the limitations of server locations, providing a variety of connectivity and security options and is easily managed with Lenovo XClarity Controller. The ThinkSystem SE350 is a rugged compact-sized edge solution with a focus on smart connectivity, business security, and manageability for the harsh environment.

The ThinkSystem SE350 is an Intel® Xeon® D processor-based server with a 1U-height, half-width, short-depth case that can go anywhere. Mount it on a wall, stack it on a shelf, or install it in a rack. This rugged edge server can operate anywhere from 0 to 55°C and delivers full performance in high-dust and high-vibration environments.

Information availability is another challenging issue for users at the edge, who require insight into their operations at all times to ensure they are making the right decisions. The ThinkSystem SE350 is designed to provide several connectivity options with wired and secure wireless Wi-Fi and LTE connection ability. This purpose-built compact server is reliable for a wide variety of edge and IoT workloads.

Microsoft Azure Stack HCI:

Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.

Achieve industry-leading virtual machine (VM) performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote direct memory access (RDMA) networking.

Help keep apps and data secure with shielded virtual machines, network micro-segmentation, and native encryption.

You can take advantage of cloud and on-premises working together with a hyperconverged infrastructure platform connected to the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services:

Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).

Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure, with advanced analytics powered by artificial intelligence.

Cloud Witness, to use Azure as the lightweight tie-breaker for cluster quorum.

Azure Backup for offsite data protection and to protect against ransomware.

Azure Update Management for update assessment and update deployments for Windows Virtual Machines running in Azure and on-premises.

Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site VPN.

Azure File Sync, to sync your file server with the cloud.

Azure Arc for Servers to manage role-based access control, governance, and compliance policy from the Azure portal.

By deploying the Microsoft and Lenovo HCI solution, you can quickly solve your branch office and edge needs with high performance and resiliency while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.
Source: Azure