Azure Orbital Ground Station as a Service extends life and reduces costs for satellite operators

How can Microsoft empower satellite operators to focus on their mission and enable them to continue the operation of their satellites, without making capital investments in their ground infrastructure?

To answer that question, Microsoft worked alongside the National Oceanic and Atmospheric Administration (NOAA), and our partner Xplore, to demonstrate how the commercial cloud can provide satellite mission management for NOAA’s legacy polar-orbiting satellites, such as NOAA-18—extending the mission life of these satellites while reducing the cost of operation through Azure Orbital Ground Station as a Service (GSaaS).

Partnering with the National Oceanic and Atmospheric Administration and Xplore

The initiative was part of a year-long cooperative research and development agreement (CRADA) with NOAA, where we worked together to determine the ability of the Azure Orbital platform to connect and downlink data from NOAA satellites. NOAA also tested the ability of Microsoft Azure to comply with specified security controls in a rapid and effective manner. Our cloud-based solutions performed successfully across all measures.

Partners are central to Microsoft’s approach to space, and they played a key role in this project. As part of the CRADA, we leveraged our partner network to bring together Azure Orbital with Xplore’s Major Tom mission control software platform. This approach enabled NOAA to transmit commands to the NOAA-18 spacecraft and verify the receipt of these commands. This test was conducted in real time, with data flowing bi-directionally with the NOAA-18 satellite.

Commercial technology enabled the rapid demonstration of these innovative capabilities. Xplore was able to move quickly to bring functions of NOAA’s heritage space system architecture to the Azure cloud through their Major Tom platform. This highlights the power of Azure as a platform to bring together Azure Orbital as the ground station, Major Tom to provide the mission control software for commanding and telemetry viewing, and the NOAA operators to monitor the scenarios.

This successful demonstration shows that the Azure Orbital GSaaS, and the partner network it brings together, enables sustainable outcomes for satellite operators. Our work with NOAA is just the beginning of the journey. We look forward to partnering with additional satellite operators to help them reduce their infrastructure management costs, lower latency, increase capacity and resiliency, and empower their missions through the power of Azure Orbital GSaaS and the Azure cloud.

Learn more about Azure Orbital and Azure Space

To learn more about Azure Orbital GSaaS, visit our product page, or take a look at the session with Microsoft Mechanics, which goes into more detail on how we connect space satellites around the world and bring earth observational data into Azure for analytics via Microsoft and partner ground stations. We demonstrate how it works and how it fits into Microsoft’s strategy with Azure Space to bring cloud connectivity everywhere on earth and to make space satellite data accessible for everyday use cases.

More broadly, Azure Space marks the convergence between global satellite constellations and the cloud. As the two join together, our purpose is to bring cloud connectivity to even the most remote corners of the earth, connect to satellites, and harness the vast amount of data collected from space. This can help solve both long-term trending issues affecting the earth like climate change, or short-term real-time issues such as connected agriculture, monitoring and controlling wildfires, or identifying supply chain bottlenecks.

Learn more about Azure Space today.
Source: Azure

MLOPs Blog Series Part 2: Testing robustness of secure machine learning systems using machine learning ops

Robustness is the ability of a closed-loop system to tolerate perturbations or anomalies while system parameters are varied over a wide range. There are three essential tests to ensure that a machine learning system is robust in production environments: unit testing, data and model testing, and integration testing.

Unit testing

Tests are performed on individual components that each have a single function within the bigger system (for example, a function that creates a new feature, a column in a DataFrame, or a function that adds two numbers). We can perform unit tests on individual functions or components; a recommended method for performing unit tests is the Arrange, Act, Assert (AAA) approach:

1.    Arrange: Set up the schema, create object instances, and create test data/inputs.
2.    Act: Execute code, call methods, set properties, and apply inputs to the components to test.
3.    Assert: Check the results, validate that the outputs received are as expected, and clean up any test-related remains.
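The AAA pattern above can be sketched with a small, self-contained example. The feature-engineering function `add_ratio_feature` is hypothetical—a stand-in for any single-purpose component you would unit test:

```python
# A minimal sketch of the Arrange, Act, Assert (AAA) pattern. The function
# under test, add_ratio_feature, is illustrative: it appends a derived
# ratio column to tabular data held as a list of dicts.

def add_ratio_feature(rows, numerator, denominator, name):
    """Add a new feature computed as numerator / denominator per row."""
    for row in rows:
        row[name] = row[numerator] / row[denominator]
    return rows

def test_add_ratio_feature():
    # Arrange: create test data/inputs with a known schema.
    rows = [{"clicks": 10, "views": 100}, {"clicks": 3, "views": 60}]

    # Act: apply the inputs to the component under test.
    result = add_ratio_feature(rows, "clicks", "views", "ctr")

    # Assert: confirm the outputs received are as expected.
    assert result[0]["ctr"] == 0.1
    assert result[1]["ctr"] == 0.05
    assert all("ctr" in row for row in result)

test_add_ratio_feature()
print("unit test passed")
```

In a real MLOps pipeline, tests like this would typically run under a framework such as pytest as part of the CI stage.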

Data and model testing

It is important to test the integrity of the data and models in operation. Tests can be performed in the MLOps pipeline to validate the integrity of data and the model robustness for training and inference. The following are some general tests that can be performed to validate the integrity of data and the robustness of the models:

1.    Data testing: The integrity of the test data can be checked by inspecting the following five factors—accuracy, completeness, consistency, relevance, and timeliness. Some important aspects to consider when ingesting or exporting data for model training and inference include the following:

•    Rows and columns: Check rows and columns to ensure no missing values or incorrect patterns are found.

•    Individual values: Check whether individual values fall within the expected range or are missing, to ensure the correctness of the data.

•    Aggregated values: Check statistical aggregations for columns or groups within the data to understand the correspondence, coherence, and accuracy of the data.
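The three data checks above can be sketched in plain Python. The `validate_batch` function and its thresholds are illustrative, not part of any specific library:

```python
# A hedged sketch of row/column, individual-value, and aggregated-value
# checks for an ingested data batch. Function name and thresholds are
# hypothetical, chosen only to illustrate the three checks.
import statistics

def validate_batch(rows, required_cols, value_range):
    errors = []
    lo, hi = value_range

    # Rows and columns: ensure no missing values are found.
    for i, row in enumerate(rows):
        missing = [c for c in required_cols if row.get(c) is None]
        if missing:
            errors.append(f"row {i}: missing {missing}")

    # Individual values: each value must fall within the expected range.
    for i, row in enumerate(rows):
        for c in required_cols:
            v = row.get(c)
            if v is not None and not (lo <= v <= hi):
                errors.append(f"row {i}: {c}={v} out of range")

    # Aggregated values: column-level statistics should stay plausible.
    for c in required_cols:
        vals = [r[c] for r in rows if r.get(c) is not None]
        if vals and not (lo <= statistics.mean(vals) <= hi):
            errors.append(f"column {c}: mean out of range")

    return errors

batch = [{"temp": 21.5}, {"temp": None}, {"temp": 400.0}]
print(validate_batch(batch, ["temp"], (-50.0, 60.0)))
```

An empty error list means the batch passes all three checks and can flow on to training or inference.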

2.   Model testing: The model should be tested both during training and after it has been trained to ensure that it is robust, scalable, and secure. The following are some aspects of model testing:

•    Check the shape of the model input (for the serialized or non-serialized model).

•    Check the shape of the model output.

•    Behavioral testing (combinations of inputs and expected outputs).

•    Load serialized or packaged model artifacts into memory and deployment targets. This will ensure that the model is de-serialized properly and is ready to be served in the memory and deployment targets.

•    Evaluate the accuracy or key metrics of the ML model.
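The model checks listed above can be illustrated on a toy model. `TinyModel` is a hypothetical stand-in for a trained model; it is not a specific Azure Machine Learning API:

```python
# An illustrative sketch of the model tests above: input/output shape
# checks, a behavioral test, a de-serialization check, and a key-metric
# evaluation. The model and thresholds are hypothetical.
import pickle

class TinyModel:
    """A toy two-feature linear classifier standing in for a trained model."""
    n_features = 2

    def __init__(self, w=(1.0, -1.0), b=0.0):
        self.w, self.b = w, b

    def predict(self, X):
        return [1 if sum(wi * xi for wi, xi in zip(self.w, x)) + self.b > 0
                else 0
                for x in X]

model = TinyModel()
X = [[2.0, 1.0], [0.0, 3.0]]
y_true = [1, 0]

# Check the shape of the model input.
assert all(len(x) == TinyModel.n_features for x in X)

# Check the shape of the model output: one prediction per input row.
preds = model.predict(X)
assert len(preds) == len(X)

# Behavioral test: a known combination of input and expected output.
assert model.predict([[5.0, 0.0]]) == [1]

# Load a serialized model artifact back into memory to confirm it
# de-serializes properly and is ready to be served.
restored = pickle.loads(pickle.dumps(model))
assert restored.predict(X) == preds

# Evaluate a key metric (accuracy) of the model.
accuracy = sum(p == t for p, t in zip(preds, y_true)) / len(y_true)
assert accuracy >= 0.5
print(f"model tests passed, accuracy={accuracy}")
```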

Integration testing

Integration testing is a process where individual software components are combined and tested as a group (for example, data processing, inference, or CI/CD).

Figure 1: Integration testing (two modules)

Let’s look at a simple hypothetical example of performing integration testing for two components of the MLOps workflow. In the Build module, data ingestion and model training steps have individual functionalities, but when integrated, they perform ML model training using data ingested to the training step. By integrating both module 1 (data ingestion) and module 2 (model training), we can perform data loading tests (to see whether the ingested data is going to the model training step), input and outputs tests (to confirm that expected formats are inputted and outputted from each step), as well as any other tests that are use case-specific.
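The two-module scenario above can be sketched as follows. Both functions and the data formats are hypothetical, chosen only to show a data loading test and an input/output format test across the module boundary:

```python
# A simple hypothetical integration test for the two Build modules
# described above: module 1 (data ingestion) feeds module 2 (model
# training). All names and formats here are illustrative.

def ingest_data(raw_records):
    """Module 1: parse raw CSV-like strings into (features, label) pairs."""
    dataset = []
    for rec in raw_records:
        *features, label = rec.split(",")
        dataset.append(([float(f) for f in features], int(label)))
    return dataset

def train_model(dataset):
    """Module 2: 'train' a trivial majority-class model from ingested pairs."""
    labels = [label for _, label in dataset]
    majority = max(set(labels), key=labels.count)
    return {"type": "majority", "prediction": majority}

raw = ["1.0,2.0,1", "0.5,0.1,0", "3.0,4.0,1"]

# Data loading test: the ingested data reaches the training step intact.
dataset = ingest_data(raw)
assert len(dataset) == len(raw)

# Input/output format test: each step emits the structure the next expects.
assert all(isinstance(f, list) and isinstance(y, int) for f, y in dataset)
model = train_model(dataset)
assert model["prediction"] in {0, 1}

print("integration test passed:", model)
```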

In general, integration testing can be done in two ways:

1.    Big Bang testing: An approach in which all the components or modules are integrated simultaneously and then tested as a unit.

2.    Incremental testing: Testing is carried out by merging two or more modules that are logically connected to one another and then testing the application's functionality. Incremental tests are conducted in three ways:

•    Top-down approach

•    Bottom-up approach

•    Sandwich approach: a combination of top-down and bottom-up

Figure 2: Integration testing (incremental testing)

The top-down testing approach is a way of doing integration testing from the top to the bottom of the control flow of a software system. Higher-level modules are tested first, and then lower-level modules are evaluated and merged to ensure software operation. Stubs are used to test modules that aren't yet ready. The advantages of a top-down strategy include the ability to get an early prototype, test essential modules on a high-priority basis, and uncover and correct serious defects sooner. One downside is that it necessitates a large number of stubs, and lower-level components may be insufficiently tested in some cases.

The bottom-up testing approach tests the lower-level modules first. The modules that have been tested are then used to assist in the testing of higher-level modules. This procedure is continued until all top-level modules have been thoroughly evaluated. When the lower-level modules have been tested and integrated, the next level of modules is created. With the bottom-up technique, you don’t have to wait for all the modules to be built. One downside is that essential modules (at the top level of the software architecture) that impact the program's flow are tested last and are thus more likely to have defects.
The sandwich testing approach tests top-level modules alongside lower-level modules, while lower-level components are merged with top-level modules and evaluated as a system. This is termed hybrid integration testing because it combines top-down and bottom-up methodologies.

Learn more

For further details and to learn about hands-on implementation, check out the Engineering MLOps book, or learn how to build and deploy a model in Azure Machine Learning using MLOps in the “Get Time to Value with MLOps Best Practices” on-demand webinar. Also, check out our recently announced blog about solution accelerators (MLOps v2) to simplify your MLOps workstream in Azure Machine Learning.
Source: Azure

Responsible AI investments and safeguards for facial recognition

A core priority for the Cognitive Services team is to ensure its AI technology, including facial recognition, is developed and used responsibly. While we have adopted six essential principles to guide our work in AI more broadly, we recognized early on that the unique risks and opportunities posed by facial recognition technology necessitate its own set of guiding principles.

To strengthen our commitment to these principles and set up a stronger foundation for the future, Microsoft is announcing meaningful updates to its Responsible AI Standard, the internal playbook that guides our AI product development and deployment. As part of aligning our products to this new Standard, we have updated our approach to facial recognition including adding a new Limited Access policy, removing AI classifiers of sensitive attributes, and bolstering our investments in fairness and transparency.

Safeguards for responsible use

We continue to provide consistent and clear guidance on the responsible deployment of facial recognition technology and advocate for laws to regulate it, but there is still more we must do.

Effective today, new customers need to apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit. This includes introducing use case and customer eligibility requirements to gain access to these services. Read about example use cases, and use cases to avoid, here. Starting June 30, 2023, existing customers will no longer be able to access facial recognition capabilities if their facial recognition application has not been approved. Submit an application form for facial and celebrity recognition operations in Face API, Computer Vision, and Azure Video Indexer here, and our team will be in touch via email.

Facial detection capabilities (including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box) will remain generally available and do not require an application.

In another change, we will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup. We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs. In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of “emotions,” and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.

To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup. Detection of these attributes will no longer be available to new customers beginning June 21, 2022, and existing customers have until June 30, 2023, to discontinue use of these attributes before they are retired.

While API access to these attributes will no longer be available to customers for general-purpose use, Microsoft recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios. Microsoft remains committed to supporting technology for people with disabilities and will continue to use these capabilities in support of this goal by integrating them into applications such as Seeing AI.

Responsible development: improving performance for inclusive AI

In line with Microsoft’s AI principle of fairness and the supporting goals and requirements outlined in the Responsible AI Standard, we are bolstering our investments in fairness and transparency. We are undertaking responsible data collections to identify and mitigate disparities in the performance of the technology across demographic groups and assessing ways to present this information in a way that would be insightful and actionable for our customers.

Given the potential socio-technical risks posed by facial recognition technology, we are looking both within and beyond Microsoft to include the expertise of statisticians, AI/ML fairness experts, and human-computer interaction experts in this effort. We have also consulted with anthropologists to help us deepen our understanding of human facial morphology and ensure that our data collection is reflective of the diversity our customers encounter in their applications.

While this work is underway, and in addition to the safeguards described above, we are providing guidance and tools to empower our customers to deploy this technology responsibly. Microsoft is providing customers with new tools and resources to help evaluate how well the models are performing against their own data and to use the technology to understand limitations in their own deployments. Azure Cognitive Services customers can now take advantage of the open-source Fairlearn package and Microsoft’s Fairness Dashboard to measure the fairness of Microsoft’s facial verification algorithms on their own data—allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology. We encourage you to contact us with any questions about how to conduct a fairness evaluation with your own data.

We have also updated the transparency documentation with guidance to assist our customers to improve the accuracy and fairness of their systems by incorporating meaningful human review to detect and resolve cases of misidentification or other failures, by providing support to people who believe their results were incorrect, and by identifying and addressing fluctuations in accuracy due to variation in operational conditions.

In working with customers using our Face service, we also realized that some errors originally attributed to fairness issues were caused by poor image quality. If the image someone submits is too dark or blurry, the model may not be able to match it correctly. We acknowledge that this poor image quality can be unfairly concentrated among demographic groups.

That is why Microsoft is offering customers a new Recognition Quality API that flags problems with lighting, blur, occlusions, or head angle in images submitted for facial verification. Microsoft also offers a reference app that provides real-time suggestions to help users capture higher-quality images that are more likely to yield accurate results.

To leverage the image quality attribute, users need to call the Face Detect API. See the Face QuickStart to test out the API.
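As a sketch of what such a call looks like, the snippet below constructs a Face Detect request URL that asks for the quality attribute. The parameter names follow the public Face REST API documentation at the time of writing, but the endpoint is a placeholder and the exact attribute and model names should be verified against the current Face QuickStart:

```python
# A hedged sketch of building a Face Detect request that asks for the
# image quality attribute (qualityForRecognition). The endpoint below is
# a placeholder; parameter names should be checked against current docs.
from urllib.parse import urlencode

def build_detect_url(endpoint):
    params = {
        # Request the quality attribute for facial verification.
        "returnFaceAttributes": "qualityForRecognition",
        # Quality requires a recent detection/recognition model pair.
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceId": "true",
    }
    return f"{endpoint}/face/v1.0/detect?{urlencode(params)}"

url = build_detect_url("https://<your-resource>.cognitiveservices.azure.com")
print(url)
# The request itself would be an HTTP POST carrying the image bytes and an
# "Ocp-Apim-Subscription-Key" header; sending it is omitted here.
```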

Looking to the future

We are excited about the future of Azure AI and what responsibly developed technologies can do for the world. We thank our customers and partners for adopting responsible AI practices and being on the journey with us as we adapt our approach to new responsible AI standards and practices. As we launch the new Limited Access policy for our facial recognition service, in addition to new computer vision features, your feedback will further advance our understanding, practices, and technology for responsible AI.

Learn more at the Limited Access FAQ.
Source: Azure

See how 3 industry-leading companies are driving innovation in a new episode of Inside Azure for IT

I had the awesome opportunity to talk with a few people innovating with some of the most exciting next-generation tech in our latest episode of the Inside Azure for IT fireside chat series. Many of us, myself included, spend a lot of time focused on challenges that need to be addressed today—in this minute—leaving less time for creativity and longer-range planning. The same is true for many organizations. When businesses are faced with downtime, traditional hardware restrictions, or have to adapt quickly to new changes afoot, it can limit productivity and stifle innovation.

What we hear from IT leaders is that digital transformation becomes a reality when they can go from doing their job despite technology limitations to innovating and delivering on priorities because of the technology they’re using—specifically global, cloud-based infrastructure.

In this episode, you’ll get a behind-the-scenes look at how three companies are using cutting-edge technologies like high-performance computing, Quantum, and AI to solve complex challenges, power innovation, and generate new kinds of business impact.

Driving innovation across industries with Azure

The episode is divided into three separate segments so you can watch them individually on-demand, at your convenience.

Part 1: Jeremy Smith and Karla Young on how Jellyfish Pictures virtualized their entire animation and visual effects studio with Azure

In this segment, you’ll hear from Jeremy Smith, CTO, and Karla Young, Head of PR, Marketing, and Communications at Jellyfish Pictures about how they create the amazing visuals we see in movies like How to Train Your Dragon: Homecoming, or some of the recent Star Wars films—both big favorites for my family! Using Azure high-performance computing to accelerate image rendering, they can spin up tens of thousands of cores at a moment’s notice and manage all that rich content securely in a single place, without replication.
Watch now: Virtualizing animation with Azure high-performance computing.

Part 2: Anita Ramanan and Viktor Veis on using quantum computing to address a complex scheduling challenge for NASA’s Jet Propulsion Laboratory

In the second segment, I’m joined by members of the Azure Quantum team—Anita Ramanan, Technical Program Manager Lead for Optimization in Azure Quantum, and Viktor Veis, Azure Quantum Group Software Engineering Manager—to talk about a project they worked on with NASA’s Jet Propulsion Laboratory. They share how they used quantum-inspired algorithms to create schedules for spacecraft communications in minutes rather than hours—and how Azure Quantum can address similar challenges in almost every industry, from manufacturing to healthcare.
Watch now: A quantum-inspired approach to scheduling communications in space.

Part 3: Alex Oelling on how Volocopter is powering an urban air mobility ecosystem of self-flying air taxis and drone services with Azure infrastructure and AI

In the third segment, I chat with Alex Oelling, Chief Digital Officer at Volocopter about how they are bringing urban air travel to life in major cities. A true pioneer in providing air taxi and drone services in urban environments, Volocopter is building a cloud-based solution to work with smart cities and existing mobility operations using Azure infrastructure and AI.
Watch now: Pioneering urban air travel in major cities with Azure infrastructure and AI.

When we launched Inside Azure for IT last July, our goal was to create a place where cloud professionals could come to learn Azure best practices and insights that would help them transform their IT operations. Whether you’ve tuned in for our live ask-the-experts sessions, watched deep-dive skilling videos, or joined us for fireside chats—we want to say "thank you" for engaging with us and bringing us your hardest questions.

Stay current with Inside Azure for IT

Beyond this latest episode, there are many more technical and cloud-skilling resources available through Inside Azure for IT. Learn more about empowering an adaptive IT environment with best practices and resources designed to enable productivity, digital transformation, and innovation. Take advantage of technical training videos and learn about implementing these scenarios.

Watch the free Azure Hybrid, Multicloud, and Edge Day event on-demand.
Watch past episodes of the Inside Azure for IT fireside chats.
Watch part 1: Virtualizing animation with Azure high-performance computing.
Watch part 2: A quantum-inspired approach to scheduling communications in space.
Watch part 3: Pioneering urban air travel in major cities with Azure infrastructure and AI.

Source: Azure

Azure IoT increases enterprise-level intelligent edge and cloud capabilities

For Microsoft Azure IoT, our approach is connecting devices at the edge to the cloud seamlessly and securely to help customers achieve desired business outcomes. At this year’s Embedded World 2022, we’ll share how our Azure IoT solutions are delivering enhanced device security, seamless cloud integration, and device certification.

One of the key ways we’re delivering cost-efficient and energy-efficient solutions to IoT customers at Embedded World is with new Arm64 support. Partners such as NXP, with i.MX 8M SoC processors, are bringing full Windows IoT Enterprise capabilities in a small footprint ideal for compact and fanless designs.

Arm64 for low-cost, low-power benefits without compromise

Following our preview of the NXP i.MX 8M BSP release on Windows IoT Enterprise earlier this year, we are extending Arm64 support on the NXP i.MX 8M for Windows 10 IoT Enterprise.

Windows on Arm was launched in 2017 to provide better battery life, always-online internet connectivity, and quick boot-up via a Microsoft OS experience running on hardware powered by Arm processors. As enterprise-level IoT deployment has evolved, today’s edge devices have greater demands for compute-intensive applications, such as rich graphics and grid computing.

That’s why we’re now bringing full Windows application compatibility to IoT to deliver low-power and low-cost benefits of Arm64 through a multi-year collaboration between Microsoft and NXP, an Industrial IoT provider. Customers can get started by downloading the i.MX 8M Public Preview BSP and user guide. Additional partners announcing support for Windows IoT on Arm64 with their devices include Reycom and Avnet.

Security at the edge

Cyberattacks on IoT devices and other connected technology can put businesses at risk. An attack can result in stolen IP or other highly valuable data, compromised regulatory status or certification, costly downtime, as well as complex financial and legal ramifications. The following security announcement is one more way Microsoft is helping ensure security is built into the foundation of IoT solutions from the start.

Edge Secured-core

Edge Secured-core is a trusted certification program helping customers select hardware that meets a higher security standard. Edge Secured-core, including Edge Secured-core for Windows IoT, brings this certification into the IoT Edge ecosystem, making it easier for companies to identify edge hardware that meets this higher bar in protecting data.

MCU Security Platform

Microsoft also has partnered with STMicroelectronics to jointly develop a security platform for MCUs enabling ST’s ultra-low-power STM32U5 microcontrollers (MCUs) to connect securely to Azure IoT cloud services. The STM32U5 with Trusted Firmware for Cortex-M (TF-M) has been independently certified to PSA Level 3 and SESIP Level 3, and the STSAFE secure element has been certified to Common Criteria EAL 5+.

The security platform is built on Microsoft’s production-ready Azure real-time operating system (RTOS) which has received EAL4+ Common Criteria security certification and PSA Level 1 certification. The offering leverages best-in-class security with Microsoft Defender for IoT, Device Update for IoT Hub, and Device Provisioning Services with X.509 Certificate management.

Enhanced Azure RTOS

As software solutions become more complex, a robust RTOS becomes more important for seamless development. Microsoft announced three enhancements for Azure RTOS at Embedded World 2022.

Embedded Wireless Framework

The Embedded Wireless Framework defines a common set of APIs for wireless interfaces used in IoT. The application programming interface covers multiple wireless network protocols, including Wi-Fi and cellular, with their unique proprietary drivers. The Wireless Framework also allows users to reuse application code across different devices leveraging IoT.

Visual Studio Code for Embedded

Visual Studio and VS Code have recently added embedded capabilities to C++ scenarios, opening a previously untapped market of developers for those products. Developers can use Visual Studio and VS Code for embedded development with Azure RTOS, FreeRTOS, and Zephyr. Industry partnerships will continue to extend capabilities.

Connecting IoT devices to Azure with LwM2M

Microsoft has collaborated with several partners to enable bridging the LwM2M protocol to Azure IoT cloud services, offering greater flexibility for device builders designing for low-power and low-bandwidth optimized applications over low-power wide-area (LPWA) technologies such as NB-IoT. Device certification enforces security standards.

Azure Sphere and Rust for continual innovation

Azure Sphere previously enabled programming exclusively in C. However, Rust has become one of the most popular embedded developer languages due to the safety and development ease it provides. Rust decreases time to market and lowers risks associated with security vulnerabilities in customer application code. Azure Sphere is now previewing support for Rust, ensuring a safe IoT device from the silicon through the application and to the cloud. Developers interested in joining the preview or getting updates can contact Azure Sphere at Microsoft.

Expanding enterprise-level intelligent edge capabilities

Enhanced device security, seamless cloud integration, and device certification support the Microsoft approach of making intelligent edge devices connect seamlessly and securely to the intelligent cloud. Visit the Microsoft Azure IoT booth at Embedded World 2022 to learn more about these latest announcements.
Source: Azure

Discover how you can innovate anywhere with Azure Arc

Welcome to Azure Hybrid, Multicloud, and Edge Day—please join us for the digital event. Today, we’re sharing how Azure Arc extends Azure platform capabilities to datacenters, edge, and multicloud environments through an impactful, 90-minute lineup of keynotes, breakouts, and technical sessions available live and on-demand. As part of today’s event, we’re announcing the general availability of Azure Machine Learning for hybrid and multicloud deployments with Azure Arc. Now you can build, train, and deploy your machine learning models right where the data lives, such as your new or existing hardware and IoT devices.

When I talk with customers, one of the things I hear most frequently is how new cloud-based applications drive business forward. And as these new applications are built, they need to take full advantage of the agility, efficiency, and speed of cloud innovation. However, not all applications and infrastructure they run on can physically reside in the cloud. That’s why 93 percent of enterprises are committed to hybrid deployments for their on-premises, multicloud, and edge workloads.1

With Azure, we meet you where you are, so you can innovate anywhere. The Azure cloud platform helps you bring new solutions to life—to solve today’s challenges and create the future. Azure Arc is a bridge that extends the Azure platform so you can build applications and services with the flexibility to run across datacenters, edge, and multicloud environments.

Azure Arc provides a consistent development, operations, and security model for both new and existing applications. Our customers are using it to revolutionize their businesses, whether they’re building on new and existing hardware, virtualization and Kubernetes platforms, IoT devices, or integrated systems.

I’m constantly amazed by the ways people are using Azure and Azure Arc to create innovative solutions, and at the same time, overcome longstanding security and governance challenges.

John Deere brings modern cloud benefits on-premises and at the edge with hybrid data services

The iconic green and yellow John Deere tractors are a familiar sight in fields around the world. With a well-stocked technology portfolio that spans cloud platforms, on-premises datacenters, and edge devices at factories, John Deere’s modernization strategy makes the most of its assets while cultivating a path for the future.

With Azure Arc–enabled SQL Managed Instance, John Deere connects the dots across all these environments and puts the power of the cloud to work in the company’s existing infrastructure. The result? A unified view of operations across platforms that pivots on Azure Arc, helping John Deere optimize manufacturing operations, drive down operational costs, and accelerate innovation.

Another opportunity the cloud provides is to transform data insights into new products and services. For years, Azure has provided machine learning and IoT solutions to unlock signals and data from the physical world. Azure Arc brings data services from Azure, like SQL, PostgreSQL, and Machine Learning, so you can harness data insights from edge to cloud with an end-to-end solution spanning local data collection, compute, storage, and real-time analysis.

We recently announced Azure Arc–enabled SQL Managed Instance Business Critical is now generally available. The Business Critical tier of Azure Arc–enabled SQL Managed Instance is built for mission-critical workloads requiring the most demanding performance, high availability, and security. Azure Arc–enabled SQL Managed Instance comes from the same evergreen SQL in Azure that is always up to date with no end of support.

Wolverine Worldwide analyzes sensitive data on-premises to optimize the supply chain

Wolverine Worldwide owns beloved activewear and lifestyle brands such as Chaco, Saucony, Merrell, Keds, Sperry, and more. When the pandemic created a new set of unanticipated supply chain challenges across the global economy, Wolverine turned to cloud innovation to help its 13 brands.

“Previously, data was a little tough to get at. It was either a gut feel, or the opportunity bypassed us while we were doing our analysis.”—Jason Miller, Vice President for Enterprise Data, Planning & Analytics, Wolverine Worldwide

With Azure Arc, Wolverine can use Azure Machine Learning and data services to holistically analyze data from its supply chain, manufacturing, and ecommerce business while keeping sensitive data on-premises.

Whether you want to secure and govern servers or create a self-service experience on VMware from Azure, Azure Arc is validated on a variety of infrastructures so you can always get your applications and data to run where you need them.

Businesses can start with single-node cluster support in Azure Stack HCI, now generally available, which provides the flexibility to deploy Azure Stack HCI in smaller spaces and with lower processing needs. Additionally, we’re announcing today that Windows Admin Center can now manage your Azure Arc–enabled servers and Azure Stack HCI clusters from the Azure portal. Using this functionality, you can securely manage your servers and clusters from Azure—without needing a VPN, public IP address, or other inbound connectivity to your machine.

Greggs modernizes security and operations

A bakery and coffee shop in the UK with over 2,200 retail locations, Greggs is another customer using Azure Arc–enabled security and management tools. The company needed visibility across its digital estate from on-premises Windows Servers to Kubernetes running in AKS.

“By deploying Azure Arc, we can use Microsoft Defender for Cloud for our on-premises server estate, something we couldn’t do before. We’ve gained significant security benefits—like secure risk score, compliance scoring, and assessments. The central aggregation of logs shows us if a security event actually occurs across multiple devices so that we can pinpoint potential causes.”—Scott Clennell, Head of Infrastructure and Networks, Greggs

For customers like Greggs, we continue to innovate on Azure Arc–enabled servers. We recently announced Azure Arc–enabled servers support for private endpoints, a new server monitoring workbook created in the public Azure Monitor GitHub repository, and a preview of SSH access to Azure Arc–enabled servers.

With Azure Arc, you have access today to a comprehensive set of Azure services, such as Microsoft Defender for Cloud, Microsoft Sentinel, Azure Policy, Azure Monitor, and more to secure and manage resources and data anywhere.

Millennium bcp streamlines multicloud app deployments with Azure Arc

“We needed…the ability to move a workload running in an Azure Kubernetes Service (AKS) cluster to a Google Cloud Platform or Amazon Web Services cluster, or vice versa, in case of emergency. We needed something that could help us turn those into an enterprise-level service. That’s where Azure Arc came in.”—Nuno Guedes, Cloud Compute Lead, Millennium bcp

Millennium bcp is the largest private bank in Portugal and uses Azure Arc for a standard approach to deploy containers to its multicloud environment. Azure Arc helps companies like Millennium build and modernize cloud-native apps on any Kubernetes using familiar developer tools, like Visual Studio Code and GitHub, as well as implement consistent GitOps and policy-driven deployments across environments.

To support our customers’ app development, we recently announced GitOps with Flux v2 in AKS and Azure Arc–enabled Kubernetes, general availability of Arc–enabled Open Service Mesh, general availability of Azure Key Vault Secrets Provider extension, and the landing zone accelerator for Azure Arc–enabled Kubernetes.

Finally, a huge thank you to our partners and customers in the Azure Arc community. We hope you will enjoy the event and learn how Azure Arc can benefit your organization. We look forward to connecting and listening to your feedback.

Azure Hybrid, Multicloud, and Edge Day highlights

You can access everything on-demand, and check out the additional demos and customer stories in the event portal. Enjoy the event experience. I can’t wait to see how you innovate anywhere.

1Hybrid & Multicloud Perceptions Survey, Microsoft.
Source: Azure

Simplify and centralize network security management with Azure Firewall Manager

We are excited to share that Azure Web Application Firewall (WAF) policy and Azure DDoS Protection plan management in Microsoft Azure Firewall Manager is now generally available.

With an increasing need to secure cloud deployments through a Zero Trust approach, the ability to manage network security policies and resources in one central place is a key security measure.

You can now centrally manage Azure Web Application Firewall (WAF) policies to provide Layer 7 application security for your application delivery platforms—Azure Front Door and Azure Application Gateway—across your networks and subscriptions. You can also configure DDoS Protection Standard to protect your virtual networks from Layer 3 and Layer 4 attacks.

Azure Firewall Manager is a central network security policy and route management service that allows administrators and organizations to protect their networks and cloud platforms at scale, all in one central place.

Azure Web Application Firewall is a cloud-native web application firewall (WAF) service that provides powerful protection for web apps from common hacking techniques such as SQL injection and security vulnerabilities such as cross-site scripting.

Azure DDoS Protection Standard provides enhanced Distributed Denial-of-Service (DDoS) mitigation features to defend against DDoS attacks. It is automatically tuned to protect all public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes. 

Using both WAF policies and DDoS protection in your network provides multi-layered protection across all your essential workloads and applications.

WAF policy and DDoS Protection plan management join the existing Azure Firewall management capabilities in Azure Firewall Manager.

Centrally protect your application delivery platforms using WAF policies 

In Azure Firewall Manager, you can now manage and protect your Azure Front Door or Application Gateway deployments by associating WAF policies at scale. This allows you to view all your key deployments in one central place, alongside Azure Firewall deployments and DDoS Protection plans.

Upgrade from WAF configuration to WAF policy

In addition, the platform lets administrators upgrade from a WAF configuration to WAF policies for Application Gateway by selecting the service and then Upgrade from WAF configuration. This makes migrating to WAF policies more seamless; policies support WAF policy settings, managed rule sets, exclusions, and disabled rule groups.

Note that anything previously configured through a WAF configuration in Application Gateway can now be done through a WAF policy.

Manage DDoS Protection plans for your virtual networks

You can enable DDoS Protection Standard on the virtual networks listed in Azure Firewall Manager, across subscriptions and regions. This allows you to see which virtual networks have Azure Firewall and/or DDoS protection in a single place.

View and create WAF policies and DDoS Protection Plans in Azure Firewall Manager

You can view and create WAF policies and DDoS Protection Plans from the Azure Firewall Manager experience, alongside Azure Firewall policies.

In addition, you can import existing WAF policies to create a new WAF policy, so you do not need to start from scratch if you want to maintain similar settings.

Monitor your overall network security posture

Azure Firewall Manager provides monitoring of your overall network security posture. Here, you can easily see which virtual networks and virtual hubs are protected by Azure Firewall, a third-party security provider, or DDoS Protection Standard. This overview can help you identify and prioritize any security gaps in your Azure environment, across subscriptions or for the whole tenant.

Coming soon, you’ll also be able to view your Application Gateway and Azure Front Door monitors, for a full network security overview.

Learn more

To learn more about these features in Azure Firewall Manager, visit the Manage Web Application Firewall policies tutorial, WAF on Application Gateway documentation, and WAF on Azure Front Door documentation. For DDoS information, visit the Configure Azure DDoS Protection Plan using Azure Firewall Manager tutorial and Azure DDoS Protection documentation.

To learn more about Azure Firewall Manager, please visit the Azure Firewall Manager home page.

MLOps Blog Series Part 1: The art of testing machine learning systems using MLOps

Testing is an important exercise in the life cycle of developing a machine learning system to ensure high-quality operations. We use tests to confirm that something functions as it should. Once tests are created, we can run them automatically whenever we make a change to our system and continue to improve them over time. It is good practice to implement tests early and identify sources of error as early as possible in the development cycle to prevent rising downstream costs and lost time.

In this blog, we will look at testing machine learning systems from a Machine Learning Operations (MLOps) perspective and learn about good practices and a testing framework that you can use to build robust, scalable, and secure machine learning systems. Before we delve into testing, let’s see what MLOps is and its value for developing machine learning systems.

Figure 1: MLOps = DevOps + Machine Learning.

Software development is interdisciplinary and is evolving to facilitate machine learning. MLOps is a process for fusing machine learning with software development by coupling machine learning and DevOps. MLOps aims to build, deploy, and maintain machine learning models in production reliably and efficiently. DevOps drives machine learning operations. Let’s look at how that works in practice. Below is an MLOps workflow illustration of how machine learning is enabled by DevOps to orchestrate robust, scalable, and secure machine learning solutions.

Figure 2: MLOps workflow.

The MLOps workflow is modular, flexible, and can be used to build proofs of concept or operationalize machine learning solutions in any business or industry. The workflow is segmented into three modules: Build, Deploy, and Monitor. The Build module is used to develop machine learning models using a machine learning pipeline. The Deploy module is used to deploy models to developer, test, and production environments. The Monitor module is used to monitor, analyze, and govern the machine learning system to achieve maximum business value. Tests are performed primarily in two modules: Build and Deploy. In the Build module, data is ingested for training, the model is trained on the ingested data, and then it is evaluated in the model testing step.

1. Model testing: In this step, we evaluate the performance of the trained model on a held-out set of data points called test data (which was split and versioned in the data ingestion step). The inference of the trained model is evaluated against metrics selected for the use case. The output of this step is a report on the trained model's performance. In the Deploy module, we deploy the trained models to dev, test, and production environments, respectively. First, we start with application testing (done in the dev and test environments).

2. Application testing: Before deploying a machine learning model to production, it is vital to test the robustness, scalability, and security of the model. Hence, we have the "application testing" phase, where we rigorously test all the trained models and the application in a production-like environment called a test, or staging, environment. In this phase, we may perform tests such as A/B tests, integration tests, user acceptance tests (UAT), shadow testing, or load testing.
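Shadow testing, one of the techniques listed above, can be sketched as follows: the candidate model receives the same live inputs as the production model while only the production output is served, and agreement between the two is tracked before promotion. The models, traffic, and agreement bar here are illustrative assumptions:

```python
# Sketch of shadow testing: run the candidate model on the same inputs
# as the production model (without serving its output) and measure how
# often the two agree. The 0.95 agreement bar is an assumed policy.

def shadow_test(production_model, candidate_model, inputs, min_agreement=0.95):
    """Return the fraction of inputs on which both models agree."""
    agree = sum(production_model(x) == candidate_model(x) for x in inputs)
    agreement = agree / len(inputs)
    if agreement < min_agreement:
        print(f"candidate disagrees too often: {agreement:.2%}")
    return agreement

prod = lambda x: int(x > 0.5)
cand = lambda x: int(x >= 0.5)   # candidate differs only at x == 0.5
traffic = [0.1, 0.5, 0.6, 0.9, 0.3]
print(shadow_test(prod, cand, traffic))  # 0.8 (they disagree on 0.5)
```

A real staging environment would mirror live traffic to both models and log disagreements for review rather than comparing lambdas in memory.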

Below is the framework for testing that reflects the hierarchy of needs for testing machine learning systems.

Figure 3: Hierarchy of needs for testing machine learning systems.

One way to think about machine learning systems is to consider Maslow's hierarchy of needs. Lower levels of a pyramid reflect “survival,” and the true human potential is unleashed only after basic survival and emotional needs are met. Likewise, tests that inspect robustness, scalability, and security ensure that the system not only performs at the basic level but reaches its true potential. One thing to note is that there are many additional forms of functional and nonfunctional testing, including smoke tests (rapid health checks) and performance tests (stress), but they may all be classified as system tests.
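As a small example of the "rapid health check" idea, a smoke test can simply verify that the system loads its model and returns a well-formed prediction at all, before the deeper robustness, scalability, or security tests run. The stand-in model loader below is an assumption for illustration:

```python
# Sketch of a smoke test: a rapid health check that the system can load
# its model and produce a well-formed prediction before deeper tests
# run. The model loader here is a stand-in, not a real artifact store.

def smoke_test(load_model, sample):
    """Load the model and check it yields a valid class label."""
    model = load_model()
    prediction = model(sample)
    assert prediction in (0, 1), f"unexpected prediction: {prediction!r}"
    return prediction

load_stub = lambda: (lambda x: int(x > 0.5))
print(smoke_test(load_stub, 0.8))  # 1
```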

Over the next three posts, we’ll cover each of the three broad levels of testing, starting with robustness and then moving on to scalability and finally, security.

For further details and to learn about hands-on implementation, check out the Engineering MLOps book, or learn how to build and deploy a model in Microsoft Azure Machine Learning using MLOps in the Get Time to Value with MLOps Best Practices on-demand webinar.

Source for images: Engineering MLOps book

Azure powers rapid deployment of private 4G and 5G networks

As the cloud continues to expand into a ubiquitous and highly distributed fabric, a new breed of application is emerging: Modern Connected Applications. We define these new offerings as network-intelligent applications at the edge, powered by 5G, and enabled by programmable interfaces that give developers access to network resources. Along with internet of things (IoT) and real-time AI, 5G is enabling this new app paradigm, unlocking new services and business models for enterprises, while accelerating their network and IT transformation.

At Mobile World Congress this year, Microsoft announced a significant step towards helping enterprises in this journey: Azure Private 5G Core, available as a part of the Azure private multi-access edge compute (MEC) solution. Azure Private 5G Core enables operators and system integrators (SIs) to provide a simple, scalable, and secure deployment of private 4G and 5G networks on small footprint infrastructure, at the enterprise edge.

This blog dives a little deeper into the fundamentals of the service and highlights some extensions that enterprises can leverage to gain more visibility and control over their private network. It also includes a use case of an early deployment of Azure Kubernetes Service (AKS) on an edge platform, leveraged by the Azure Private 5G Core to rapidly deploy such networks.

Building simple, scalable, and secure private networks

Azure Private 5G Core dramatically simplifies the deployment and operation of private networks. With just a few clicks, organizations can deploy a customized set of selectable 5G core functions, radio access network (RAN), and applications on a small edge-compute platform, at thousands of locations. Built-in automation delivers security patches, assures compliance, and performs audits and reporting. Enterprises benefit from a consistent management experience and improved service assurance experience, with all logs and metrics from cloud to edge available for viewing within Azure dashboards.

Enterprises need the highest level of security to connect their mission critical operations. Azure Private 5G Core makes this possible by natively integrating into a broad range of Azure capabilities. With Azure Arc, we provide seamless and secure connectivity from an on-premises edge platform into the Azure cloud. With Azure role-based access control (RBAC), administrators can author policies and define privileges that will allow an application to access all necessary resources. Likewise, users can be given appropriate access to manage all resources in a resource group, such as virtual machines, websites, and subnets. Our Zero Trust security frameworks are integrated from devices to the cloud to keep users and data secure. And our complete, “full-stack” solution (hardware, host and guest operating system, hypervisor, AKS, packet core, IoT Edge Runtime for applications, and more) meets standard Azure privacy and compliance benchmarks in the cloud and on the enterprise edge, meaning that data privacy requirements are adhered to in each geographic region.

Deploying private 5G networks in minutes

Microsoft partner Inventec is a leading design manufacturer of enterprise-class technology solutions like laptops, servers, and wireless communication products. The company has been quick to see the potential benefit in transforming its own world-class manufacturing sites into 5G smart factories to fully utilize the power of AI and IoT.

In a compelling example of rapid private 5G network deployment, Inventec recently installed our Azure private MEC solution in their Taiwan smart factory. It took only 56 minutes to fully deploy the Azure Private 5G Core and connect it to 5G access points that served multiple 5G endpoints—a significant reduction from the months that enterprises have come to expect. Azure Private 5G Core leverages Azure Arc and Azure Kubernetes Service on-premises to provide security and manageability for the entire core network stack. Figures 1 and 2 below show snapshots from the trial.

Figure 1: Screenshot of logs with time stamps showing start and completion of the core network deployment.

Figure 2: Screenshot from the trial showing one access point successfully connected to seven endpoints.

Inventec is developing applications for manufacturing use-cases that leverage private 5G networks and Microsoft’s Azure Private 5G Core. Examples of these high-value MEC use cases include Automatic Optical Inspection (AOI), facial recognition, and security surveillance systems.

Extending enterprise control and visibility from the 5G core

Through close integration with other elements of the Azure private MEC solution, our Azure Private 5G Core essentially acts as an enterprise “control point” for private wireless networks. Through comprehensive APIs, the Azure Private 5G Core can extend visibility into the performance of connected network elements, simplify the provisioning of subscriber identity modules (SIMs) for end devices, secure private wireless deployments, and offer 5G connectivity between cloud services (like IoT Hub) and associated on-premises devices.

Figure 3: Azure Private 5G Core is a central control point for private wireless networks.

Customers, developers, and partners are finding value today with a number of early integrations with both Azure and third-party services that include:

Plug and play RAN: Azure private MEC offers a choice of 4G or 5G Standalone radio access network (RAN) partners that integrate directly with the Azure Private 5G Core. By integrating RAN monitoring with the Azure Private 5G Core, RAN performance can be made visible through the Azure management portal. Our RAN partners are also onboarding their Element Management System (EMS) and Service Management and Orchestrator (SMO) products to Azure, simplifying deployment processes and providing a framework for closed-loop radio performance automation.
Azure Arc managed edge: The Azure Private 5G Core takes advantage of the security and reliability capabilities of Azure Arc-enabled Azure Kubernetes Service running on Azure Stack Edge Pro. These include policy definitions with Azure Policy for Kubernetes, simplified access to AKS clusters for High Availability with Cluster Connect and fine-grained identity and access management with Azure RBAC. 
Device and Profile Management: Azure Private 5G Core APIs integrate with SIM management services to securely provision the 5G devices with appropriate profiles. In addition, integration with Azure IoT Hub enables unified management of all connected IoT devices across an enterprise and provides a message hub for IoT telemetry data. 
Localized ISV MEC applications: Low-latency MEC applications benefit from running side-by-side with core network functions on the common (Azure private MEC) edge-compute platform. By integrating tightly with the Azure Private 5G Core using Azure Resource Manager APIs, third-party applications can configure network resources and devices. Applications offered by partners are available in, and deployable from the Azure Marketplace.

It’s easy to get started with Azure private MEC

As innovative use cases for private wireless networks continue to develop and industry 4.0 transformation accelerates, we welcome ISVs, platform partners, operators, and SIs to learn more about Azure private MEC.

Application ISVs interested in deploying their industry or horizontal solutions on Azure should begin by onboarding their applications to Azure Marketplace.
Platform partners, operators, and SIs interested in partnering with Microsoft to deploy or integrate with private MEC can get started by reaching out to the Azure private MEC Team.

Microsoft is committed to helping organizations innovate from the cloud, to the edge, and to space—offering the platform and ecosystem strong enough to support the vision and vast potential of 5G. As the cloud continues to expand and a new breed of modern connected apps at the edge emerges, the growth and transformation opportunities for enterprises will be profound. Learn more about how Microsoft is helping developers embrace 5G.

Supporting openEHR with Azure Health Data Services

This blog post is co-authored by Trent Norris, Cloud and Data Partner Alliances, HLS.

This blog is part of a series in collaboration with our partners and customers leveraging the newly announced Azure Health Data Services. Azure Health Data Services, a platform as a service (PaaS) offering designed exclusively to support Protected Health Information (PHI) in the cloud, is a new way of working with unified data—providing care teams with a platform to support both transactional and analytical workloads from the same data store and enabling cloud computing to transform how we develop and deliver AI across the healthcare ecosystem.

Microsoft Cloud for Healthcare and the Azure Health Data Services product engineering team are committed to global patient health information interoperability. We believe interoperability is table stakes for unlocking a more comprehensive assessment of the available clinical evidence.

In order to accomplish this connected interoperable data flow, we have built Azure Health Data Services around HL7v2, CCDA, FHIR, DICOM, and other connected standards and common schemas. Aligning to these common standards and schemas enables Microsoft’s partner-led strategy and GTM engagement with organizations that include but aren’t limited to openEHR, Better, EY, and EPAM.

Azure Health Data Services is the first of its kind to unify diverse data types in the same data store at the patient level as you bring it into the cloud, which means you can view structured, unstructured, and imaging data together for a holistic, real-time view—in just minutes. With the service, you can search and query across your data using a unified Fast Healthcare Interoperability Resources (FHIR®) structure and deploy a suite of services to connect it rapidly to the technology you need. Whether you’re blending patient data with population health data sets for AI development and analytics, visualizing data for operational efficiencies, deploying patient engagement tools for personalized care, or querying imaging metadata alongside clinical data using our new DICOMcast feature, Azure Health Data Services works with your existing systems to enhance what you’re doing today. It’s also built on open standards to ensure you can support new solutions and innovations yet to come.
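To make the search model concrete, here is a minimal sketch of composing a FHIR REST search URL of the kind the service accepts. The workspace URL is a made-up placeholder, and a real request would also need an Azure AD bearer token, which is omitted here:

```python
from urllib.parse import urlencode

# Sketch of a FHIR REST search: GET [base]/[ResourceType]?param=value.
# The base URL below is a fictional placeholder; authentication against
# Azure AD is required in practice and is not shown.

def fhir_search_url(base_url, resource_type, **params):
    """Compose a FHIR search URL such as [base]/Patient?family=Smith."""
    return f"{base_url}/{resource_type}?{urlencode(params)}"

base = "https://example-workspace-fhir.fhir.azurehealthcareapis.com"
url = fhir_search_url(base, "Patient", family="Smith", birthdate="ge1970-01-01")
print(url)
# https://example-workspace-fhir.fhir.azurehealthcareapis.com/Patient?family=Smith&birthdate=ge1970-01-01
```

The `ge` prefix on the birthdate parameter is standard FHIR search syntax for "greater than or equal to."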

We are major supporters and members of the openEHR foundation. We believe their commitment to open specifications, clinical models, and software that can be used to create and build solutions for healthcare is closely aligned with our mission at Microsoft. Azure Health Data Services provides rich pipelines for bringing data into the cloud, where you can integrate analytical tools such as Azure Synapse or Azure Databricks with healthcare’s transactional systems of record. We believe FHIR (with SMART on FHIR) is the standard that complements openEHR by acting as the USB exchange connector that standardizes access for other Microsoft Azure services and FHIR application ecosystems.

Do more with your data with Microsoft Cloud for Healthcare

With Azure Health Data Services, health organizations can transform their patient experience, discover new insights with the power of machine learning and AI, and manage PHI data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.

We look forward to being your partner as you build the future of health.

Learn more about Azure Health Data Services.
Learn more about Microsoft Cloud for Healthcare.
Learn more about how health companies are using Azure to drive better health outcomes.

®FHIR is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office, and is used with their permission.