IoT Signals healthcare report: Key opportunities to unlock IoT’s promise

The cost of healthcare is rising globally, and to tackle this, medical providers, from hospitals to your local doctor’s office, are looking to IoT to streamline processes and minimize costs. Few industries stand to gain more from emerging technology. And in few industries are the stakes higher, because in healthcare, incremental efficiencies can make the difference between life and death.

The International Data Corporation (IDC) expects that by 2025 there will be 41.6 billion connected IoT devices or ‘things,’ generating more than 79 zettabytes (ZB) of data.i In the healthcare industry, IoT has emerged as a valuable tool to help ensure quality and better patient care. IoT is used to manage everything from chronic diseases to medication dosages to medical equipment—situations where security flaws in devices are potentially life-threatening. By helping to reduce human error, improve safety conditions, increase staff satisfaction, and make organizations more efficient, IoT can ultimately improve health outcomes.

Insights from new IoT Signals Healthcare report

Today we're launching a new IoT Signals report focused on the healthcare industry that provides an industry pulse on the state of IoT adoption. This research enables us to better serve our partners and customers, as well as help healthcare leaders develop their own IoT strategies. We surveyed 152 decision-makers in enterprise healthcare organizations across multiple countries to deliver an industry-level view of the IoT ecosystem, including adoption rates, related technology trends, challenges, and benefits of IoT.

What the study found is that while IoT has seen broad adoption in healthcare (89 percent) and is considered critical to success, healthcare organizations are still challenged by security, compliance, and privacy concerns, as well as skills shortages. To summarize the findings:

IoT is helping healthcare organizations become safer and more efficient. With the sensitive and highly regulated nature of healthcare work, leveraging IoT for patient monitoring, quality assurance, and logistical support is quite prevalent. IoT is helping organizations ensure quality in these areas while improving patient care.
To expand IoT implementations, organizations must tackle regulatory and compliance challenges. Healthcare organizations must continue to keep patient information private and comply with evolving regulatory standards while proving the return on investment of IoT. Overcoming barriers around evolving data regulations is key for healthcare organizations, and many are adopting numerous standards. Over 8 in 10 have adopted either HL7, DICOM, or CMS Interoperability, with HL7 FHIR and DICOM being the most common.
IoT talent shortages exist. Getting IoT off the ground is a challenge for any company, given technology challenges, long-term commitments, and the investment required. It’s doubly so for healthcare organizations that lack talent and resources. In fact, 43 percent of those surveyed cited lack of budget and staff as roadblocks to success, with 34 percent specifically concerned about a lack of skilled workers and technical knowledge. Furthermore, 25 percent said a lack of resources and knowledge were key factors in their ability to scale, and in proof-of-concept failures.
The future of IoT in healthcare will extend beyond patient care, with strong growth in optimizing logistics and operations. While IoT usage for patient care will continue to grow and remain a top use case in the future, decision-makers see strong potential to leverage IoT more to support the logistics and operational side of their organizations. Significant IoT growth is expected in facilities management and staff tracking. Decision-makers also anticipate improved safety, compliance, and efficiency through increased IoT implementation within supply chain management, inventory tracking, and quality assurance, as healthcare catches up with industries where manufacturing, logistics, supply chain, and quality are traditional IoT scenarios.

Microsoft is leading the charge to address these IoT challenges

There are many ways in which healthcare organizations can benefit by leveraging the Azure IoT platform to connect and control devices:

Simplify patient monitoring while reducing healthcare costs. Continuous monitoring of assets connected to healthcare applications, including battery life and general health of devices, allows providers to deliver personalized patient care anytime, anywhere and equips their care team with a near real-time view of the patient’s health and activities.
Optimize medical equipment utilization. Medical staff can avoid equipment downtime and misplacement, and allocate more time for patients, when they connect and track machines, supplies, and other assets through the cloud and monitor their usage for optimal deployment.
Proactively replenish supplies. Healthcare facilities can better ensure safety and efficacy through cold chain tracking to monitor, maintain, and automate life-saving vaccine storage and distribution by connecting devices to the cloud and proactively replenishing contents.

Across all these applications, we see common benefits provided by cloud computing, including:

Greater trust around the security of health data.
Near infinite scale for storing and processing large amounts of data.
Increased speed in gaining access to new tools, more storage space, or greater computing power.
Economical use of resources.
Scaling up and down as demand fluctuates, for instance during natural disasters.

Our commitment

We are committed to helping healthcare customers bring their visions to life with IoT, and this starts with simplifying and securing IoT. Our customers are embracing IoT as a core strategy to drive better patient outcomes, and we are heavily investing in this space, committing $5 billion to IoT and intelligent edge innovation by 2022 and growing our IoT and intelligent edge partner ecosystem to more than 10,000 partners.

Our vision is to simplify IoT, enabling every business on the planet to benefit. We have the most comprehensive portfolio of IoT platform services and are pushing to further simplify IoT solution development with our scalable, fully managed IoT app platform Azure IoT Central. IoT Central application templates, like our healthcare template for continuous patient monitoring, accelerate solution builders from proof of concept to production. We work hard to ensure healthcare organizations have a robust talent pool of IoT developers, providing free training for common application patterns and deployments through our IoT School and AI School.

Security is paramount for healthcare customers. Azure Sphere takes a holistic security approach from silicon to cloud, providing a highly secured solution for the connected microcontroller units (MCUs) that go into devices ranging from connected home devices to medical and industrial equipment. Azure Security Center provides unified security management and advanced threat protection for systems running in the cloud and on the edge. Azure Sphere combined with a real-time operating system (RTOS) delivers a better-together solution that can help real-time medical applications improve performance in IoT medical devices, including medical imaging systems, while ensuring they meet data regulation requirements.

Finally, we’re helping our healthcare customers leverage their IoT investments with AI and at the intelligent edge. Azure IoT Edge enables customers to distribute cloud intelligence to run in isolation on IoT devices directly and Azure Stack Edge builds on Azure IoT Edge and adds virtual machine and mass storage support.

When IoT is foundational to a healthcare organization’s transformation strategy, it can have a significant positive impact on patient care, safety, and the bottom line. We're invested in helping our partners, customers, and the broader industry to take the necessary steps to address barriers to success and invent with purpose.

Read the full IoT Signals healthcare report and learn how we're helping healthcare providers embrace the future and unlock new opportunities with IoT.

i Worldwide Global DataSphere IoT Device and Data Forecast, 2019–2023 (Doc #US45066919), May 2019.
Source: Azure

Reimagining healthcare with Azure IoT

Providers, payors, pharmaceuticals, and life sciences companies are leading the next wave of healthcare innovation by utilizing connected devices. From continuous patient monitoring to optimized operations for manufacturers and cold-chain supply tracking for the pharmaceutical industry, the healthcare industry has embraced IoT technology to improve patient outcomes and operations.

In our latest IoT Signals for Healthcare research, we spoke with over 150 health organizations about the role that IoT will play in helping them deliver better health outcomes in the years to come. Across the ecosystem, 85 percent see IoT as “critical” to their success, with 78 percent planning to increase their investment in IoT technologies over the next few years. Real-time data from connected devices and sensors provides benefits across the health ecosystem, from manufacturers and pharmaceuticals to health providers and patients.

For health providers, IoT unlocks efficiencies for clinical staff and equipment:

Reduces human error.
Ensures regulatory compliance when exchanging patient health data across systems.
Coordinates the productivity of medical professionals across clinical facilities.

For manufacturers, IoT creates new digital feedback loops connecting their employees, facilities, products, and end customers. Real-time data can help:

Reduce costly downtime with predictive maintenance.
Improve sustainable practices by reducing waste and ensuring worker safety.
Contribute to improved product quality and quantity.

For the pharmaceutical industry, IoT provides greater traceability for inventory along a supply chain:

Improved visibility into environmental conditions.
Reduced costly inventory spoilage.
Increased control against theft or counterfeiting.

For end patients, IoT can improve health outcomes with continuous patient monitoring:

Reduces the need for unnecessary readmissions.
Improves treatment success rates by providing continuous data to care professionals.
Personalizes care based on patient needs.

In this blog, we’ll cover how our portfolio can support different IoT solution needs for software developers, hardware developers, and healthcare customers. We’ll also cover new product updates for healthcare solution builders, review a sample solution architecture, and showcase two case studies that illustrate different approaches for building innovative healthcare solutions. To further explore applications of IoT in healthcare and customer case studies, head to our IoT in Healthcare page.

Building healthcare IoT solutions with Azure IoT

As Microsoft and its global partners continue to build solutions that empower healthcare organizations around the world, a key question continues to face IoT decision makers: whether to build a solution from scratch or buy an existing solution that fits their needs.

From ensuring device-to-cloud security with Azure Sphere to providing multiple approaches for device management and connectivity with Platform as a Service (PaaS) options or a managed app platform, Azure IoT provides the most comprehensive IoT and Edge product portfolio on the market, designed to meet the diverse needs of healthcare solution builders.

Solution builders who want to invest their resources in designing, maintaining, and customizing IoT systems from the ground up can do so with our growing portfolio of IoT platform services, leveraging Azure IoT Hub as a starting point.

While this approach may be tempting, solution builders often struggle when growing a pilot into a globally scalable IoT solution. This process introduces significant complexity to an IoT architecture, requiring expertise across cloud and device security, DevOps, compliance, and more. For this reason, many solution builders may be better served by starting with a managed platform approach with Azure IoT Central. Using more than two dozen Azure services, Azure IoT Central is designed to continually evolve with the latest service updates and seamlessly accompany solution builders along their IoT journey from pilot to production. With predictable pricing, white labeling, healthcare-specific application templates, and extensibility, solution builders can focus their time on how their device insights can improve outcomes, instead of common infrastructure questions like ingesting device data or ensuring disaster recovery.

New tools to accelerate building a healthcare IoT solution

Over the past year, we’ve been working hard to create new tools to make IoT solution development easier for our healthcare partners and customers:

Azure IoT Central app templates.
Internet of Medical Things (IoMT) Fast Healthcare Interoperability Resource (FHIR) Connector for Azure.

To help you put all of these tools together, we’ve also published a reference architecture diagram for continuous patient monitoring solutions.

Continuous patient monitoring reference architecture

Azure IoT Central app templates

Last November, we announced the first IoT Central healthcare application template, designed for continuous patient monitoring applications. In-patient monitoring and remote patient monitoring are top of mind for many healthcare organizations; monitoring is the number one application of IoT in healthcare today, according to our survey of health organizations (mentioned above).

Application templates help solution builders get started even faster by providing scenario-specific resources such as:

Sample device operator dashboards.
Sample device templates.
Preconfigured rules and alerts.

An IoT device operator might set alerts to be notified when patient devices have low battery levels or exceed a certain temperature threshold, so that they can take timely action before devices lose connectivity, are damaged, or run out of battery. Furthermore, the application template has rich documentation detailing integration with the Azure API for FHIR, ensuring scalable compliance with the HL7 FHIR standard (more on this in the next section).
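
As a concrete illustration, the kind of threshold rule described above can be sketched in a few lines of Python. The field names and thresholds here are assumptions for the example, not IoT Central's actual rule schema:

```python
def evaluate_alerts(telemetry, battery_floor=20, temp_ceiling=38.0):
    """Return alert messages for a single telemetry reading."""
    alerts = []
    if telemetry.get("battery_percent", 100) < battery_floor:
        alerts.append(f"low battery: {telemetry['battery_percent']}%")
    if telemetry.get("temperature_c", 0.0) > temp_ceiling:
        alerts.append(f"temperature above threshold: {telemetry['temperature_c']} C")
    return alerts

# Example reading from a hypothetical wearable device
reading = {"device_id": "wearable-01", "battery_percent": 12, "temperature_c": 39.1}
print(evaluate_alerts(reading))
```

In a real deployment, rules like these are configured in the IoT Central UI and evaluated by the service; the sketch only shows the logical shape of such a rule.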

Beyond the existing app templates, solution builders can also leverage the “Custom App” option to build IoT applications for other healthcare scenarios.

IoMT FHIR Connector for Azure

Interoperability remains a critical challenge for most healthcare organizations looking to use healthcare data in innovative ways. Microsoft proudly announced the general availability of our own FHIR server offering, Azure API for FHIR, in October 2019. We are now further enriching the FHIR ecosystem with the IoMT FHIR Connector for Azure, a connector designed to ingest, transform, and store IoT protected health information (PHI) data in a FHIR-compatible format.

Innovative healthcare companies share their IoT stories

In addition to rich industry insights like those found in IoT Signals for Healthcare and our previously published stories from Stryker, Gojo, and Wipro, we are releasing two new case studies. They detail the decisions, trade-offs, processes, and results of top healthcare organizations investing in IoT solutions, as well as the healthcare solution builders supporting them. These case studies showcase different approaches to building an IoT solution, based on the unique needs of each business. Read more about how these companies are implementing and winning with their IoT investments.

ThoughtWire and Schneider Electric leverage IoT for hospital operations

Clinical environments are managed by traditionally disconnected systems (facility management, clinical operations, inventory management, and more), operated by entirely separate teams. This makes it difficult to holistically manage and optimize clinical operations. Schneider Electric, a global expert in facilities management, partnered with ThoughtWire, a specialist in operations management systems, to deliver an end-to-end solution for facilities and clinical operations management. The joint Smart Hospital solution uses Azure’s IoT platform to help hospitals and clinics reduce costs, minimize their carbon footprint, and promote better staff satisfaction, patient experiences, and health outcomes.

“We don’t just want to understand how the facility operates, we want to understand how patients and clinical staff interact with that infrastructure,” says Chris Roberts, Healthcare Solution Architect at Schneider Electric. “That includes everything to do with patient experience and patient safety. And when you talk about those things, the clinical world and the infrastructure world start to merge and connect. Working with ThoughtWire, we bridge the gap between those two worlds and drive performance improvements.”

To learn more, read the case study here.

Sensoria Health creates a new gold standard for managing diabetic foot ulcers

Diabetic Foot Ulcers (DFUs) are the leading cause of hospitalizations for diabetics, with a notoriously high treatment failure rate (over 75 percent) and an annual cost of $40 billion globally. To improve treatment success, Sensoria partnered with leading diabetic foot boot manufacturer Optima Molliter to create the Motus Smart Solution. The solution enables clinicians to remotely monitor patients wearing removable offloading devices (casts) when they leave the clinic and to track patient compliance against recommended care plans, enabling more personalized and more impactful care.

Sensoria turned to Azure IoT Central to develop a solution that would handle device management at scale while ensuring compliance in storing and sharing patient data. They leveraged the Continuous Patient Monitoring app template as their starting point to quickly design, launch, and scale their solution. With native IoMT Connector for FHIR integration, the template ensures that patient data is ultimately stored and shared in a secure and compliant format.

As stated by Davide Vigano, Cofounder and CEO of Sensoria, “We needed to quickly build enterprise-class applications for both doctors and patients to use with the device, send data from the device in a way that would help people remain compliant with HIPAA and other similar privacy-related legislation around the world, and find a way for the device’s data to easily flow from clinician to clinician across the very siloed healthcare industry. Using Azure IoT Central helped us deliver on all those requirements in a very short period of time.”

To learn more, read the case study here.

We look forward to seeing healthcare organizations continue to innovate with IoT to drive better health outcomes. We’ll continue to build the tools and platforms to empower our partners to invent with purpose.

Getting started

Explore other case studies and applications of IoT in healthcare.
Check out the IoMT FHIR Connector for Azure.
Try out the IoT Central Continuous Patient Monitoring template.

Source: Azure

Data agility and open standards in health: FHIR fueling interoperability in Azure

Data agility in healthcare sounds fantastic, but there are few data ecosystems as sophisticated and complex as healthcare, and the path to data agility can often be elusive. Leaders in health are prioritizing and demanding cloud technology that works on open standards like Fast Healthcare Interoperability Resources (FHIR) to transform how we manage data. Open standards will drive the future of healthcare, and today we're sharing the expansion of Microsoft’s portfolio for FHIR, with new open-source software (OSS) and connectors that will help customers at different stages of their journey to advance interoperability and the secure exchange of protected health information (PHI):

FHIR Converter: Transform legacy health data into FHIR.
FHIR Tools for Anonymization: Enables secondary use of FHIR data.
IoMT FHIR Connector: Ingest, normalize, and transform data from health devices, the Internet of Medical Things (IoMT), into FHIR.
Power BI FHIR Connector: Connect FHIR APIs to the Power BI platform for analytics and visualization.

Making health data available in the open FHIR format enables us to innovate for the future of health. The Microsoft Azure API for FHIR was released to general availability in November 2019, making Azure the first cloud with a fully managed, enterprise-grade service for health data in the FHIR format. Since then, we’ve been actively working with customers so they can easily deploy an end-to-end pipeline for PHI in the cloud with the added security of FHIR APIs. From remote patient monitoring or clinical trials in the home environment to clinics and research teams, data needs to flow seamlessly in a trusted environment. Microsoft is empowering data agility with seamless data flows that leverage the open and secure framework of FHIR APIs.

Transform data to FHIR with the FHIR Converter

Health systems today have data in a variety of formats and systems. The FHIR Converter provides your data team with a simple API call to convert data in legacy formats, such as HL7 V2, into FHIR in real time. The current release can transform HL7 V2 messages utilizing a set of starting templates, generated from mappings defined by the HL7 community, and allows for customization to match each organization’s implementation of the HL7 V2 standard using a simple web UI. The FHIR Converter is designed as a simple yet powerful tool to reduce the time and manual effort required for data mapping and the exchange of data in FHIR.
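
To make the mapping idea concrete, here is a minimal, hypothetical sketch of the kind of transformation the FHIR Converter performs: pulling fields out of an HL7 V2 PID segment and emitting a FHIR Patient resource. The real converter is template-driven; the hard-coded field positions below cover only this toy message:

```python
def pid_to_patient(pid_segment: str) -> dict:
    """Map a (simplified) HL7 V2 PID segment to a FHIR Patient resource."""
    fields = pid_segment.split("|")
    family, given = fields[5].split("^")[:2]      # PID-5: patient name
    dob = fields[7]                               # PID-7: date of birth, YYYYMMDD in V2
    return {
        "resourceType": "Patient",
        "identifier": [{"value": fields[3]}],     # PID-3: patient identifier
        "name": [{"family": family, "given": [given]}],
        "birthDate": f"{dob[:4]}-{dob[4:6]}-{dob[6:]}",  # FHIR dates use YYYY-MM-DD
    }

pid = "PID|1||12345||Doe^Jane||19800101|F"
print(pid_to_patient(pid))
```

The community-defined templates generalize this across message types and site-specific V2 variations, which is what the customization UI mentioned above is for.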

Enable secondary use of FHIR data

The power of data organized in the FHIR framework means you can manage it more efficiently, particularly when you need to make data available for secondary use. Using FHIR Tools for Anonymization, your teams can leverage techniques including de-identification through redaction and date-shifting, enabling extraction and exchange of data in anonymized formats. Because FHIR Tools for Anonymization is open source, you can work with it locally or with a cloud-based FHIR service like the Azure API for FHIR.

FHIR Tools for Anonymization enables de-identification of the 18 identifiers per the HIPAA Safe Harbor method. A configuration file is available for customers to create custom templates that meet their needs for Expert Determination methods.
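
The two techniques mentioned above, redaction and date-shifting, can be sketched as follows. This is an illustrative stand-in for FHIR Tools for Anonymization, not its actual implementation; a fixed shift offset is passed in so the result is reproducible, whereas a real pipeline would derive a consistent per-patient offset:

```python
from datetime import date, timedelta

def anonymize(patient: dict, shift_days: int) -> dict:
    """Redact direct identifiers and shift dates by a fixed offset."""
    # Redaction: drop fields carrying direct identifiers
    anon = {k: v for k, v in patient.items() if k not in ("name", "identifier")}
    # Date-shifting: move dates by a consistent number of days
    if "birthDate" in anon:
        y, m, d = map(int, anon["birthDate"].split("-"))
        anon["birthDate"] = (date(y, m, d) + timedelta(days=shift_days)).isoformat()
    return anon

patient = {"resourceType": "Patient",
           "name": [{"family": "Doe"}],
           "birthDate": "1980-01-01"}
print(anonymize(patient, shift_days=17))
```

The real tool drives decisions like these from the configuration file described above, so each of the HIPAA Safe Harbor identifiers can be handled with its own rule.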

Ingesting PHI data from the Internet of Medical Things (IoMT) with FHIR

Today’s healthcare data is not limited to patient charts and documents; it is expanding rapidly to include device data captured both inside and outside the clinician’s office. Customers can already use the powerful Azure IoT platform to manage devices and IoT solutions, but in the health industry, we need to pay special attention to managing PHI data from devices.

The IoMT FHIR Connector for Azure has been specifically designed for devices in health scenarios. Developed to work seamlessly with pre-built Azure functions and Microsoft Azure Event Hubs or the Microsoft Azure IoT platform, the IoMT connector ingests streaming data in real time at millions of events per second. Customized settings allow developers to manage device content, sample data rates, and set the desired capture thresholds. Upon ingestion, device data is normalized, grouped, and mapped to FHIR resources that can be sent via FHIR APIs to an electronic health record (EHR) or other FHIR service. Supporting the open standard of FHIR means the IoMT FHIR Connector works with most devices, eliminating the need for custom integration for multiple device scenarios.
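
The normalize-and-map step described above might look roughly like this for a single heart-rate event. The event field names are illustrative assumptions rather than the connector's actual mapping templates, though the LOINC code shown (8867-4) is the standard coding for heart rate:

```python
def device_event_to_observation(event: dict) -> dict:
    """Normalize a raw device heart-rate event into a FHIR Observation resource."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8867-4",        # LOINC code for heart rate
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{event['patient_id']}"},
        "effectiveDateTime": event["timestamp"],
        "valueQuantity": {"value": event["heart_rate"], "unit": "beats/minute"},
    }

event = {"patient_id": "123", "timestamp": "2020-03-01T12:00:00Z", "heart_rate": 72}
print(device_event_to_observation(event))
```

Because the output is a standard FHIR Observation, any FHIR API, whether an EHR or the Azure API for FHIR, can accept it without device-specific integration, which is the interoperability point made above.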

To enhance scale and connectivity with common patient-facing platforms that collect device data, the IoMT FHIR Connector is also launching with a FHIR HealthKit framework to quickly bring Apple HealthKit data to the cloud. 

Fueling data visualization in Power BI with real data

Customers love the rich data visualizations in Power BI that help everyone make decisions based on facts, not instinct. The Power BI Connector enables our health customers to light up robust tools for data visualization, analytics, and data exploration in Power BI using data in the FHIR format. Because the connector works against standard FHIR APIs at a FHIR endpoint, you maintain flexibility and control over data access, defining user access as needed. Whether you need consistent event tracking or patient management reporting for your care teams, research tools and self-serve exploration for your clinical research teams, or predictive analytics and systems efficiency for your operations teams, the connection of FHIR and Power BI provides a powerful new tool for health organizations.

Check out the new FHIR tech

Microsoft is committed to data agility through FHIR. We believe FHIR is the fuel for innovation in healthcare and life sciences, and we’re excited to see what you build with it. The future of health is ours to create and we are excited to be at the innovation forefront of that journey with you.

We’d love to hear from health developers about the new FHIR products rolling out. Check out the OSS releases in GitHub.
Source: Azure

Announcing preview of Backup Reports

We recently announced a new solution, Backup Explorer, to enable you as a backup administrator to perform real-time monitoring of your backups, helping you achieve increased efficiency in your day-to-day operations.

But what if you could also be proactive in the way you manage your backup estate? What if there was a way to unlock the latent power of your backup metadata to make more informed business decisions?

For instance, any business would be well-served following a systematic way of forecasting backup usage. Often, this involves analyzing how backup storage has increased over time for a given tenant, subscription, resource group, or for individual workloads. Such analysis requires the paired ability to aggregate data over a long period of time and present it in a way that allows the reader to quickly derive insights.
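
As a sketch of the forecasting idea, a simple least-squares trend over monthly storage totals can project next month's usage. The figures are invented for illustration; a real analysis would aggregate this data from Azure Monitor Logs:

```python
def forecast_next(points):
    """Fit a least-squares line to equally spaced points; return the next value."""
    n = len(points)
    x_mean = (n - 1) / 2
    y_mean = sum(points) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(points))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den
    intercept = y_mean - slope * x_mean
    return slope * n + intercept        # project one step past the last point

monthly_gb = [410, 425, 441, 456, 470]  # backup storage consumed per month (GB)
print(round(forecast_next(monthly_gb), 1))
```

The same trend fit can be applied at any granularity, per tenant, subscription, resource group, or workload, which is exactly the aggregate-then-derive-insights pattern the paragraph above describes.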

Today, we are pleased to announce the public preview of Backup Reports. Leveraging Azure Monitor Logs and Azure Workbooks, Backup Reports serve as a one-stop destination for tracking usage, auditing of backups and restores, and identifying key trends at different levels of granularity.

With our reports, you can answer questions including ‘Which Backup Item(s) consume the most storage?’, ‘Which machines have had consistently misbehaving backups?’, ‘What are the main causes of backup job failure?’, and many more.
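
For instance, the first question reduces to a group-and-rank aggregation, sketched here over invented records; the actual reports run equivalent queries against Log Analytics data:

```python
from collections import defaultdict

# Invented per-backup-item storage records, standing in for Log Analytics data.
records = [
    {"item": "vm-sql-01", "gb": 120}, {"item": "vm-web-02", "gb": 35},
    {"item": "vm-sql-01", "gb": 20},  {"item": "fileshare-hr", "gb": 60},
]

# Sum storage per backup item
totals = defaultdict(int)
for rec in records:
    totals[rec["item"]] += rec["gb"]

# Rank items by total storage consumed, largest first
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(top)
```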

Key benefits

Boundary-less reporting: Backup Reports work across multiple workload types that are supported by Azure Backup. This includes Azure workloads such as Azure Virtual Machines, SQL in Azure Virtual Machines, SAP HANA/ASE in Azure Virtual Machines, as well as on-premises workloads including Data Protection Manager (DPM), Azure Backup Server, and Azure Backup Agent. The reports can aggregate information across multiple vaults, subscriptions, and regions. If you are an Azure Lighthouse user with delegated access to your customers’ subscriptions/Log Analytics workspaces, you can also view reporting data across all your tenants within a single pane of glass.
Rich slicing, dicing, and drill-down capabilities: Backup Reports offers a range of filters and visualization experiences that enable you, as a backup administrator, to easily scope down your analysis and derive valuable insights. You can also slice and dice on backup item-specific properties, such as the backup item type, protection state, and more.
Native Azure-based experience: Backup Reports can be viewed right on the Azure portal without the need to purchase any additional software licenses. This native integration also makes it possible to seamlessly navigate to (and from) the individual dashboards for backup items and vaults and take action.

Note: Backup Reports will start showing data for Azure file share backup in each region once Azure file share backup becomes generally available there.

Getting started

To start using Backup Reports, you will first need to configure your vaults to send diagnostics data to Log Analytics. To make this task easier, we have provided a built-in Azure Policy that auto-enables Log Analytics diagnostics for all vaults in a chosen scope.

Once all your vaults have been configured to send data to Log Analytics, you can simply navigate to any vault and click on the Backup Reports menu item.

This opens a report that aggregates data across your entire backup estate. Simply select one or more Log Analytics workspaces to view the data, and you’ll be ready to go.

Next steps

Read the Backup Reports Documentation to learn how to make the most of your reports.
New to Azure Backup? Sign up for a free Azure trial subscription.
Need help? Reach out to Azure Backup forum for support or browse Azure Backup documentation.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

Azure IoT introduces seamless integration with Cisco IoT

The pace of technological change is relentless across all markets. Edge computing continues to play an essential role in allowing data to be managed closer to its source, where workloads can range from basic services like data filtering and de-duplication to advanced capabilities like event-driven processing. Gartner estimates that by 2025, 75 percent of enterprise data will be generated at the edge. As computing resources and IoT networking devices become more powerful, the ability to manage vast amounts of data near the edge will mean infrastructure and operations teams are required to manage more advanced data workloads while keeping pace with business needs.

Our leadership in the cloud and in the Internet of Things is no coincidence; the two are intertwined. These technology trends are accelerating ubiquitous computing and bringing unparalleled opportunities for transformation across industries. Our goal has been to create trusted, scalable solutions that our customers and partners can build on, no matter where they are starting in their IoT journey.

What if there was an integrated set of hardware, software, and cloud capabilities that allowed seamless connectivity and streamlined edge data flow directly from essential operations like autonomous driving, robotic factory lines, and oil and gas refinery operations into Azure IoT? This is why Azure IoT is partnering with Cisco to provide customers with a pre-integrated Cisco Edge to Microsoft Azure IoT Hub solution.

Value of the partnership, Microsoft Azure IoT and Cisco IoT

As recognized leaders in the industrial IoT market, Azure IoT and Cisco IoT have teamed up to announce the availability of an integrated Azure IoT solution that provides the software, hardware, and cloud services businesses need to rapidly launch IoT initiatives and quickly realize business value. Using software-based intelligence pre-loaded onto Cisco IoT network devices, telemetry data pipelines from industry-standard protocols like OPC Unified Architecture (OPC UA) and Modbus can be easily established, through a friendly UI, directly into Azure IoT Hub. Services like Microsoft Azure Stream Analytics, Microsoft Azure Machine Learning, and Microsoft Azure Notification Hubs can be used to quickly build IoT applications for the enterprise. Additional telemetry processing is also supported by Cisco through local scripts developed in Microsoft Visual Studio, where filtered data can also be uploaded directly into Azure IoT Hub. This collaboration provides customers with a fully integrated solution that gives access to powerful design tools, global connectivity, advanced analytics, and cognitive services for analyzing IoT data.

These capabilities will help illuminate business opportunities across many industries. Using Cisco Edge Intelligence software to connect to Azure IoT Hub and the Device Provisioning Service enables simple device provisioning and management at scale, without the headache of a complex setup.

Customers across industries want to leverage IoT data to deliver new use-cases and solve business problems.

“This partnership between Cisco and Azure IoT will significantly simplify customer deployments. Customers can now securely connect their assets, and simply ingest and send IoT data to the cloud. Our IoT Gateways will now be pre-integrated to take advantage of the latest in cloud technology from Azure. Cisco and Microsoft are happy to help our customers realize the value of their IoT projects faster than ever before. Our early field customer, voestalpine, is benefiting from this integration as they digitize their operations to improve production planning and operational efficiencies.”—Vikas Butaney, Cisco IoT VP of Product Management

“At voestalpine, we are going through a digital journey to rethink and innovate manufacturing processes to bring increased operational efficiency. We face challenges to consistently and securely extract data from these machines and deliver the right data to our analytics applications. We are validating Cisco’s next-generation edge data software, Cisco Edge Intelligence along with Azure IoT services for our cloud software development. Cisco’s out-of-the-box edge solution with Azure IoT services helps us accelerate our digital journey.”—Stefan Pöchtrager, Enterprise Architect, voestalpine AG

By enabling Azure IoT on Cisco IoT network device infrastructure, IT and operations teams can quickly take advantage of a wide variety of hardware and easily scalable telemetry collection from connected assets to kickstart their Azure IoT application development. Our customers can now augment their existing Cisco networks with Azure IoT-ready gateways across multiple industries and use cases, without compromising the data control and security that both Microsoft and Cisco are known for.

Please visit Microsoft Azure for more information regarding Azure IoT.

Please visit Cisco Edge Intelligence for more information regarding Cisco IoT.
Source: Azure

Azure HDInsight and Azure Database for PostgreSQL news

I’ve been committed to open source software for over a decade because it fosters a deep collaboration across the developer community, resulting in ground-breaking innovation. At the heart of open source is the freedom to learn from each other and share ideas, empowering the brightest minds to work together on the cutting edge of software development.

Over the last decade, Microsoft has become one of the largest open source contributors in the world, adding to Hadoop, Linux, Kubernetes, Python, and more. Not only did we release our own technologies like Visual Studio Code as open source, we have also collaborated and contributed to existing open source projects. One of our proudest moments was when we became the release masters for YARN in late 2018, having open sourced over 150,000 lines of code, which enabled YARN to run on clusters 10x larger than before. We're actively growing our community of open source committers within Microsoft.

We’re constantly exploring new ways to better serve our customers in their open source journey. Our commitment is to combine the innovation open source has to offer with the global reach and scale of Azure. Today, we're excited to share a few important updates to accelerate our customers’ open source innovation.

Microsoft supported distribution of Apache Hadoop

Microsoft has been an early supporter of the Hadoop ecosystem since the launch of HDInsight in 2013. With HDInsight, we have been focused on delivering seamless integration of key Azure services like Azure Data Factory and Azure Data Lake Storage, with the power of the most popular open source frameworks to enable comprehensive analytics pipelines. To accelerate this momentum, we're pleased to share a Microsoft supported distribution of Apache Hadoop and Spark for our new and existing HDInsight customers. This distribution of Apache Hadoop is 100 percent open source and compatible with the latest version of Hadoop. Users can now provision a new HDInsight cluster based on Apache code that is built and wholly supported by Microsoft.
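For readers new to the ecosystem, the programming model at the heart of Hadoop can be sketched in miniature. The following single-process Python example is purely illustrative of the MapReduce pattern that Hadoop runs distributed across HDInsight worker nodes; it is not HDInsight code.

```python
# A minimal, single-process sketch of the MapReduce model that Hadoop
# implements at cluster scale: map records to key/value pairs, shuffle
# (group) by key, then reduce each group.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["Hadoop on Azure", "Spark on Azure"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'hadoop': 1, 'on': 2, 'azure': 2, 'spark': 1}
```

On a real cluster, the map and reduce phases run in parallel across many nodes and the shuffle moves data over the network; the logic, however, is the same.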

By providing a Microsoft supported distribution of Apache Hadoop and Spark, our customers will benefit from enterprise-grade security features like encryption, and from native integration with key Azure stores and services like Azure Synapse Analytics and Azure Cosmos DB. Best of all, because Microsoft directly supports this distribution, we can quickly provide support and upgrades to our customers and deliver the latest innovation from the Hadoop ecosystem. All of this will enable customers to innovate faster, without being restricted to proprietary technology just to get our support and features. Additionally, Azure will continue to develop a vibrant marketplace of open source vendors.

“We at Cloudera welcome the commitment from Microsoft to Apache Hadoop and Spark. Open-source is key to our mutual customers’ success. Microsoft’s initiative represents a strong endorsement of open-source for the enterprise and we are excited to continue our partnership with Cloudera Data Platform for Microsoft Azure.”—Mick Hollison, Chief Marketing Officer at Cloudera

This is part of our strong commitment to Hadoop, open source analytics, and the HDInsight service. In addition to our deeper engagement in supporting open source Hadoop and Spark, in the coming months, we’ll enable the most requested features on HDInsight that lower costs and accelerate time to value. These include an improved provisioning and management experience, reserved instance pricing, low-priority virtual machines, and auto-scale.

We have always sought to meet customers where they are, from our decision four years ago to support HDInsight solely on Linux, to our recent migration of the cluster distribution in-house. Customers don't need to take any specific actions to benefit from these changes. These upcoming improvements to HDInsight will be seamless and automatic, with no business interruption or pricing changes.

Welcome new PostgreSQL committers

Since the Citus Data acquisition, we have doubled down on our PostgreSQL investment based on the tremendous customer demand and developer enthusiasm for one of the most versatile databases in the world. Today, Azure Database for PostgreSQL Hyperscale is generally available, and it’s one of our first Azure Arc-enabled services.

The innovation and ingenuity of PostgreSQL continue to inspire us, and it would not be possible without the contribution and passion of a dedicated community. We will continue to contribute to PostgreSQL. Recently, we contributed pg_autofailover to the community to share our learnings of operating PostgreSQL at cloud scale.

To build on our investment in PostgreSQL, we're excited to welcome Andres Freund, Thomas Munro, and Jeff Davis to the team. Together, they bring a decade of collective experience and a leading track record as core committers to PostgreSQL. They, like the rest of the team, are engaging with and listening to the global Postgres community, as we work to deliver the best of cloud scale, security, and manageability to open source innovation.      

We're committed to actively engaging the open source community and providing our customers with choice and flexibility. The true open source spirit is about collaboration, and we’re excited to combine the best of open source software with the breadth of Azure. Most importantly, we are bringing together the best minds and talented visionaries, both at Microsoft and in the broader open source community, to constantly improve our open source products and deliver the newest features to our customers. Here’s to open source!

Additional resources

HDInsight Documentation: your one-stop shop for learning all about this analytics platform.
PostgreSQL Committers Blog: learn more about the three new committers we hired.

Source: Azure

ExpressRoute Global Reach: Building your own cloud-based global backbone

Connectivity has gone through a fundamental shift as more workloads and services have moved to the cloud. Traditional enterprise wide area networks (WANs) have been fixed in nature, unable to dynamically scale to meet modern customer demands. For customers increasingly applying a cloud-first approach as the basis for their app and networking strategy, hybrid cloud enables applications and services to be deployed across premises as a fully connected, seamless architecture. Cross-premises connectivity is moving toward a more cloud-first model, with services offered by global hyper-scale networks.

Microsoft global network

Microsoft operates one of the largest networks on the globe, spanning over 130,000 miles of terrestrial and subsea fiber cable systems across six continents. Besides Azure, the global network powers all our cloud services, including Bing, Office 365, and Xbox. The network carries more than 30 billion packets per second at any one time and is accessible for peering, private connectivity, and application content delivery through our more than 160 global network PoPs. Microsoft continuously adds new network PoPs to optimize the experience for customers accessing Microsoft services.

The global network is built and operated using intelligent software-defined traffic engineering technologies that allow Microsoft to dynamically select optimal paths and route around network faults and congestion in near real-time. The network has multiple redundant paths to ensure maximum uptime and reliability when powering mission-critical workloads for our customers.

ExpressRoute overview

Azure ExpressRoute provides enterprises with a service that bypasses the Internet to securely and privately connect to Azure and to create their own global network. A common scenario is for enterprises to use ExpressRoute to access their Azure virtual networks (VNets) containing their own private IP addresses. This allows Azure to become a seamless hybrid extension of their on-premises networks. Another scenario includes using ExpressRoute to access public services over a private connection such as Azure Storage or Azure SQL. Traffic for ExpressRoute enters the Microsoft network at our networking Points of Presence (or PoPs) strategically distributed across the world, which are hosted in carrier-neutral facilities to provide customers options when picking a carrier or Telco partner.

ExpressRoute provides three different SKUs of ExpressRoute circuits:

ExpressRoute Local: Available at ExpressRoute sites physically close to an Azure region and can be used only to access the local Azure region. Because the traffic stays in the regional network and does not traverse the global network, the ExpressRoute Local traffic has no egress charge.
ExpressRoute Standard: Provides connectivity to any Azure region within the same geopolitical region as the ExpressRoute site (from the London site to the West Europe region, for example).
ExpressRoute Premium: Provides connectivity to any Azure region within the cloud environment. For example, an ExpressRoute Premium circuit at the New Zealand site can access Azure regions in Australia, as well as other geographies such as Europe or North America.

In addition to connecting through the more than 200 ExpressRoute partners, enterprises can connect directly to ExpressRoute routers with the ExpressRoute Direct option, over either 10 Gbps or 100 Gbps physical interfaces. With ExpressRoute Direct, enterprises can divide the physical port into multiple ExpressRoute circuits to serve different business units and use cases.
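The port-carving idea can be sketched as simple capacity accounting. The business-unit names and circuit sizes below are hypothetical; this is not an Azure API, just an illustration of dividing a fixed-capacity ExpressRoute Direct port into circuits.

```python
# Illustrative sketch: carving an ExpressRoute Direct 100 Gbps port into
# per-business-unit circuits, with basic over-subscription checking.
# All names and sizes are hypothetical.

PORT_CAPACITY_GBPS = 100

def allocate(circuits):
    """Return the unallocated capacity after provisioning the given
    circuits (Gbps), raising if the port would be over-subscribed."""
    total = sum(circuits.values())
    if total > PORT_CAPACITY_GBPS:
        raise ValueError(f"over-subscribed: {total} Gbps > {PORT_CAPACITY_GBPS} Gbps")
    return PORT_CAPACITY_GBPS - total

remaining = allocate({"erp": 40, "analytics": 30, "dev-test": 10})
print(remaining)  # 20 Gbps left for future circuits
```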

Many customers want to take further advantage of their existing architecture and ExpressRoute connections to provide connectivity between their on-premises sites or data centers. Enabling site-to-site connectivity across our global network is now very easy. When Azure introduced ExpressRoute Global Reach, a first in the public cloud, we provided a sleek and simple way to take full advantage of our global backbone assets.

ExpressRoute Global Reach

With ExpressRoute Global Reach, we are democratizing connectivity, allowing enterprises to build cloud-based virtual global backbones using ExpressRoute and Microsoft’s global network. ExpressRoute Global Reach enables on-premises-to-on-premises connectivity, fully and privately routed within the Microsoft global backbone. This capability can serve as a backup to existing network infrastructure, or as the primary means of serving enterprise wide area network (WAN) needs. Microsoft takes care of redundancy, the larger global infrastructure investments, and the scale-out requirements, allowing customers to focus on their core mission.

Consider Contoso, a multi-national company headquartered in Dallas, Texas with global offices in London and Tokyo. These three main locations also serve as major connectivity hubs for branch offices and on-premises datacenters. Utilizing a local last-mile carrier, Contoso invests in redundant paths that meet at the ExpressRoute sites in these same locations. After establishing the physical connectivity, Contoso stands up its ExpressRoute connectivity through a local provider or via ExpressRoute Direct and starts advertising routes via the industry-standard Border Gateway Protocol (BGP). Contoso can now connect all these sites together and opt to enable Global Reach, which takes the on-premises routes and advertises them to the peered circuits in the remote locations, enabling cross-premises connectivity. Contoso has now created a cloud-based wide area network, all within minutes: effectively end-to-end global connectivity without long-haul investments or fixed contracts.
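The route re-advertisement at the heart of this scenario can be sketched in a few lines. This is not a BGP implementation, just an illustration of what Global Reach accomplishes: each site's on-premises prefixes become reachable from every peered site. The site names and prefixes are hypothetical.

```python
# Illustrative sketch (not real BGP): with Global Reach enabled, the
# prefixes advertised on one ExpressRoute circuit are re-advertised to
# the peered circuits, so each site learns routes to every other site.

circuits = {
    "dallas": ["10.1.0.0/16"],  # prefixes each site advertises (hypothetical)
    "london": ["10.2.0.0/16"],
    "tokyo":  ["10.3.0.0/16"],
}

def global_reach_routes(circuits):
    """Return, per site, the remote prefixes it learns once the
    circuits are peered via Global Reach."""
    return {
        site: sorted(p for other, prefixes in circuits.items()
                     if other != site for p in prefixes)
        for site in circuits
    }

print(global_reach_routes(circuits)["dallas"])  # ['10.2.0.0/16', '10.3.0.0/16']
```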

Modernizing the network and applying the cloud-first model helps customers scale with their needs while taking full advantage of, and building onto, their existing cloud infrastructure. As on-premises sites and branches emerge or change, global connectivity should be as easy as the click of a button. ExpressRoute Global Reach enables companies to provide best-in-class connectivity on one of the most comprehensive software-defined networks on the planet.

ExpressRoute Global Reach is generally available in these locations, including Azure US Government.
Source: Azure

Azure HBv2 Virtual Machines eclipse 80,000 cores for MPI HPC

HPC-optimized virtual machines now available

Azure HBv2-series Virtual Machines (VMs) are now generally available in the South Central US region. HBv2 VMs will also be available in West Europe, East US, West US 2, North Central US, and Japan East soon.

HBv2 VMs deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world high performance computing (HPC) workloads, such as CFD, explicit finite element analysis, seismic processing, reservoir modeling, rendering, and weather simulation.

Azure HBv2 VMs are the first in the public cloud to feature 200 gigabit per second HDR InfiniBand from Mellanox. HDR InfiniBand on Azure delivers latencies as low as 1.5 microseconds, more than 200 million messages per second per VM, and advanced in-network computing engines like hardware offload of MPI collectives and adaptive routing for higher performance on the largest scaling HPC workloads. HBv2 VMs use standard Mellanox OFED drivers that support all RDMA verbs and MPI variants.

Each HBv2 VM features 120 AMD EPYC™ 7002-series CPU cores with clock frequencies up to 3.3 GHz, 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading (SMT). HBv2 VMs provide up to 340 GB/sec of memory bandwidth, which is 45-50 percent more than comparable x86 alternatives and three times faster than what most HPC customers have in their datacenters today. An HBv2 virtual machine is capable of up to 4 double-precision teraFLOPS and up to 8 single-precision teraFLOPS.
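As a back-of-envelope check of the 4 double-precision teraFLOPS figure, the arithmetic works out under some assumptions not stated in the article: Zen 2 cores can retire 16 double-precision FLOPs per cycle (two 256-bit FMA units), and we assume an all-core clock of roughly 2.1 GHz under heavy vector load.

```python
# Back-of-envelope peak DP FLOPS for one HBv2 VM.
# Assumptions (ours, not from the article): 16 DP FLOPs/cycle per Zen 2
# core (2 FMA units x 4 DP lanes x 2 ops) at ~2.1 GHz all-core.
cores = 120
flops_per_cycle_dp = 16   # assumed per-core throughput
clock_hz = 2.1e9          # assumed all-core vector clock

peak_dp_tflops = cores * flops_per_cycle_dp * clock_hz / 1e12
print(round(peak_dp_tflops, 1))  # 4.0
```

The same arithmetic with single-precision (32 FLOPs per cycle) doubles the figure, matching the stated 8 single-precision teraFLOPS.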

One- and three-year Reserved Instance, Pay-As-You-Go, and Spot pricing for HBv2 VMs is available now for both Linux and Windows deployments. For information about five-year Reserved Instances, contact your Azure representative.

Disruptive speed for critical weather forecasting

Numerical Weather Prediction (NWP) and simulation has long been one of the most beneficial use cases for HPC. Using NWP techniques, scientists can better understand and predict the behavior of our atmosphere, which in turn drives advances in everything from coordinating airline traffic and shipping goods around the globe to ensuring business continuity and preparing for the most adverse weather. Microsoft recognizes how critical this field is to science and society, which is why Azure shares US hourly weather forecast data produced by the Global Forecast System (GFS) from the National Oceanic and Atmospheric Administration (NOAA) as part of the Azure Open Datasets initiative.

Cormac Garvey, a member of the HPC Azure Global team, has extensive experience supporting weather simulation teams on the world’s most powerful supercomputers. Today, he’s published a guide to running the widely-used Weather Research and Forecasting (WRF) Version 4 simulation suite on HBv2 VMs.

Cormac used a 371M grid point simulation of Hurricane Maria, a Category 5 storm that struck the Caribbean in 2017, with a resolution of 1 kilometer. This model was chosen not only as a rigorous benchmark of HBv2 VMs but also because the fast and accurate simulation of dangerous storms is one of the most vital functions of the meteorology community.

Figure 1: WRF Speedup from 1 to 672 Azure HBv2 VMs.

Nodes (VMs) | Parallel Processes | Average Time (s) per Time Step | Scaling Efficiency | Speedup (VM-based)
1 | 120 | 18.51 | 100 percent | 1.00
2 | 240 | 8.9 | 104 percent | 2.08
4 | 480 | 4.37 | 106 percent | 4.24
8 | 960 | 2.21 | 105 percent | 8.38
16 | 1,920 | 1.16 | 100 percent | 15.96
32 | 3,840 | 0.58 | 100 percent | 31.91
64 | 7,680 | 0.31 | 93 percent | 59.71
128 | 15,360 | 0.131 | 110 percent | 141.30
256 | 23,040 | 0.082 | 88 percent | 225.73
512 | 46,080 | 0.0456 | 79 percent | 405.92
640 | 57,600 | 0.0393 | 74 percent | 470.99
672 | 80,640 | 0.0384 | 72 percent | 482.03

Figure 2: Scaling and configuration data for WRF on Azure HBv2 VMs.

Note: For some scaling points, optimal performance is achieved with 30 MPI ranks and 4 threads per rank, while for others 90 MPI ranks per VM was optimal. All tests were run with Open MPI 4.0.2.

Azure HBv2 VMs executed the “Maria” simulation with mostly super-linear scalability up to 128 VMs (15,360 parallel processes). Improvements from scaling continue up to the largest scale tested in this exercise, 672 VMs (80,640 parallel processes), where a 482x speedup over a single VM was achieved. At 512 nodes (VMs), we observe a ~2.2x performance increase compared to a leading supercomputer that debuted among the top 20 fastest machines in 2016.
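The scaling-efficiency and speedup columns in Figure 2 follow directly from the per-time-step timings: speedup is the 1-VM time divided by the measured time, and efficiency is speedup divided by the VM count. A quick sketch of that derivation:

```python
# Deriving Figure 2's speedup and efficiency columns from the timings.
baseline = 18.51  # seconds per time step on 1 VM (from Figure 2)

def scaling(vms, time_per_step):
    speedup = baseline / time_per_step
    efficiency = speedup / vms
    return round(speedup, 2), round(efficiency * 100)

print(scaling(2, 8.9))       # (2.08, 104) -> super-linear at 2 VMs
print(scaling(672, 0.0384))  # (482.03, 72) at the largest scale
```

Efficiency above 100 percent (super-linear scaling) typically arises because the per-VM working set shrinks as VMs are added, fitting better into cache.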

The gating factor to higher levels of scaling efficiency? The 371M grid point model, even as one of the largest known WRF models, is too small at such extreme levels of parallel processing. This opens the door for leading weather forecasting organizations to leverage Azure to build and operationalize even higher-resolution models that deliver higher numerical accuracy and a more realistic understanding of these complex weather phenomena.

Visit Cormac’s blog post on the Azure Tech Community to learn how to run WRF on our family of H-series Virtual Machines, including HBv2.

Better, safer product design from hyper-realistic CFD

Computational fluid dynamics (CFD) is core to the simulation-driven businesses of many Azure customers. A common request from customers is to “10x” their capabilities while keeping costs as close to constant as possible. Specifically, customers often seek ways to significantly increase the accuracy of their models by simulating them at higher resolution. Given that many customers already solve CFD problems with ~500-1000 parallel processes per job, this is a tall task that implies linear scaling to at least 5,000-10,000 parallel processes. Last year, Azure accomplished one of these objectives when it became the first public cloud to scale a CFD application to more than 10,000 parallel processes. With the launch of HBv2 VMs, Azure’s CFD capabilities are increasing again.

Jon Shelley, also a member of the Azure Global HPC team, worked with Siemens PLM to validate one of its largest CFD simulations ever: a 1-billion-cell model of a sports car, named after the famed 24 Hours of Le Mans race, with a 10x higher-resolution mesh than what Azure tested just last year. Jon has published a guide to running Simcenter STAR-CCM+ at large scale on HBv2 VMs.

Figure 3: Simcenter STAR-CCM+ Scaling Efficiency from 1 to 640 Azure HBv2 VMs

Nodes (VMs) | Parallel Processes | Solver Elapsed Time | Scaling Efficiency | Speedup (VM-based)
8 | 928 | 337.71 | 100 percent | 1.00
16 | 1,856 | 164.79 | 102.5 percent | 2.05
32 | 3,712 | 82.07 | 102.9 percent | 4.11
64 | 7,424 | 41.02 | 102.9 percent | 8.23
128 | 14,848 | 20.94 | 100.8 percent | 16.13
256 | 29,696 | 12.02 | 87.8 percent | 28.10
320 | 37,120 | 9.57 | 88.2 percent | 35.29
384 | 44,544 | 7.117 | 98.9 percent | 47.45
512 | 59,392 | 6.417 | 82.2 percent | 52.63
640 | 57,600 | 5.03 | 83.9 percent | 67.14

Figure 4: Scaling and configuration data for STAR-CCM+ on Azure HBv2 VMs

Note: A given scaling point may achieve optimal performance with 90, 112, 116, or 120 parallel processes per VM. The plotted data shows optimal performance figures. All tests were run with HPC-X MPI ver. 2.50.

Once again, Azure HBv2 executed the challenging problem with linear efficiency to more than 15,000 parallel processes across 128 VMs. From there, high scaling efficiency continued, peaking at nearly 99 percent at more than 44,000 parallel processes. At the largest scale of 640 VMs and 57,600 parallel processes, HBv2 delivered 84 percent scaling efficiency. This is among the largest scaling CFD simulations with Simcenter STAR-CCM+ ever performed, and now can be replicated by Azure customers.

Visit Jon’s blog post on the Azure Tech Community site to learn how to run Simcenter STAR-CCM+ on our family of H-series Virtual Machines, including HBv2.

Extreme HPC I/O meets cost-efficiency

An increasingly common scenario in the cloud is on-demand HPC-grade parallel filesystems. The rationale is straightforward: if a customer needs to perform a large quantity of compute, that customer often also needs to move a lot of data into and out of those compute resources. The catch? Simple cost comparisons against traditional on-premises HPC filesystem appliances can be unfavorable, depending on circumstances. With Azure HBv2 VMs, however, NVMeDirect technology can be combined with ultra-low-latency RDMA capabilities to deliver on-demand “burst buffer” parallel filesystems at no additional cost beyond the HBv2 VMs already provisioned for compute purposes.

BeeGFS is one such filesystem and has a rapidly growing user base among both entry-level and extreme-scale users. The BeeOND filesystem is even used in production on the novel HPC + AI hybrid supercomputer “Tsubame 3.0.”

Here is a high-level summary of how a sample BeeOND filesystem looks when created across 352 HBv2 VMs, providing 308 terabytes of usable, high-performance namespace.

Figure 5: Overview of example BeeOND filesystem on HBv2 VMs.

Running the widely-used IOR test of parallel filesystems across 352 HBv2 VMs, BeeOND achieved peak read performance of 763 gigabytes per second, and peak write performance of 352 gigabytes per second.
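To put those aggregate IOR numbers in per-VM terms, a quick division is enough, under the simplifying assumption (ours, not the article's) that bandwidth is spread evenly across the 352 VMs:

```python
# Rough per-VM view of the aggregate BeeOND IOR results.
# Assumption: bandwidth is spread evenly across the VMs, which a real
# IOR run does not guarantee.
vms = 352
peak_read_gbs = 763   # aggregate read, gigabytes per second
peak_write_gbs = 352  # aggregate write, gigabytes per second

print(round(peak_read_gbs / vms, 2))   # ~2.17 GB/s read per VM
print(round(peak_write_gbs / vms, 2))  # 1.0 GB/s write per VM
```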

Visit Cormac’s blog post on the Azure Tech Community to learn how to run BeeGFS on RDMA-powered Azure Virtual Machines.

10x-ing the cloud HPC experience

Microsoft Azure is committed to delivering to our customers a world-class HPC experience, and maximum levels of performance, price/performance, and scalability.

“The 2nd Gen AMD EPYC processors provide fantastic core scaling, access to massive memory bandwidth and are the first x86 server processors that support PCIe 4.0; all of these features enable some of the best high-performance computing experiences for the industry,” said Ram Peddibhotla, corporate vice president, Data Center Product Management, AMD. “What Azure has done for HPC in the cloud is amazing; demonstrating that HBv2 VMs and 2nd Gen EPYC processors can deliver supercomputer-class performance, MPI scalability, and cost efficiency for a variety of real-world HPC workloads, while democratizing access to HPC that will help drive the advancement of science and research.”

“200 gigabit HDR InfiniBand delivers high data throughput, extremely low latency, and smart in-network computing engines, enabling high performance and scalability for compute and data applications. We are excited to collaborate with Microsoft to bring the InfiniBand advantages into Azure, providing users with leading HPC cloud services,” said Gilad Shainer, Senior Vice President of Marketing at Mellanox Technologies. “By taking advantage of InfiniBand RDMA and its MPI acceleration engines, Azure delivers higher performance compared to other cloud options based on Ethernet. We look forward to continuing to work with Microsoft to introduce future generations and capabilities.”

Find out more about High Performance Computing in Azure.
Running WRF v4 on Azure.
Running Siemens Simcenter Star-CCM+ on Azure.
Tuning BeeGFS and BeeOND on Azure for Specific I/O Patterns.
Azure HPC on Github.
Azure HPC CentOS 7.6 and 7.7 images.
Learn about Azure Virtual Machines.
AMD EPYC™ 7002-series.

Source: Azure

Accelerate your cloud strategy with Skytap on Azure

Azure is the best cloud for existing Microsoft workloads, and we want to ensure all of our customers can take full advantage of Azure services. We work hard to understand the needs of those customers running Microsoft workloads on premises, including Windows Server, and help them to navigate a path to the cloud. But not all customers can take advantage of Azure services due to the diversity of their on-premises platforms, the complexity of their environments, and the mission-critical applications running in those environments.

Microsoft works with many partners to create strategic partnerships to unlock the power of the cloud for customers relying on traditional on-premises application platforms. Azure currently offers several specialized application platforms and experiences, including Cray, SAP, and NetApp, and we continue to invest in additional options and platforms.

Allowing businesses to innovate with the cloud faster

Today we're pleased to share that we are enabling more customers to start on their journey to the cloud. Skytap has announced the availability of Skytap on Azure. The Skytap on Azure service simplifies cloud migration for traditional applications running on IBM Power while minimizing disruption to the business. Skytap has more than a decade of experience working with customers and offering extensible application environments that are compatible with on-premises data centers; Skytap’s environments simplify migration and provide self-service access to develop, deploy, and accelerate innovation for complex applications.

Brad Schick, Skytap CEO: “Today, we are thrilled to make the service generally available. Enterprises and ISVs can now move their traditional applications from aging data centers and use all the benefits of Azure to innovate faster.”

Customers can learn more about Skytap and the Skytap on Azure service here.

Cloud migration remains a crucial component for any organization in the transformation of their business, and Microsoft continues to focus on how best to support customers in that journey. We often hear about the importance of enabling the easy movement of existing applications running on traditional on-premises platforms to the cloud and the desire to have those platforms be available on Azure.

The migration of applications running on IBM Power to the cloud is often seen as a difficult and challenging move involving re-platforming. For many businesses, these environments are running traditional, and frequently, mission-critical applications. The idea of re-architecting or re-platforming these applications to be cloud native can be daunting. With Skytap on Azure, customers gain the ability to run native Power workloads, including AIX, IBM i, and Linux on Azure. The Skytap service allows customers to unlock the benefits of the cloud faster and begin innovating across applications sooner, by providing the ability to take advantage of and integrate with the breadth of Azure native services. All of this is possible with minimal changes to the way existing IBM Power applications are managed on-premises.

Application running on IBM Power and x86 in Skytap on Azure.

With Skytap on Azure, Microsoft brings the unique capabilities of IBM Power9 servers to Azure data centers, directly integrating with the Azure network and enabling Skytap to provide their platform with minimal connectivity latency to Azure native services such as Blob Storage, Azure NetApp Files, or Azure Virtual Machines.

Skytap on Azure is now available in the East US Azure region. Given the high level of interest we have seen already, we intend to expand availability to additional regions across Europe, the United States, and Asia Pacific. Stay tuned for more details on specific regional rollout availability.

Try Skytap on Azure today, available through the Azure Marketplace. For more information on the Public Availability of Skytap on Azure, please access the full Skytap press release. Skytap on Azure is a Skytap first-party service delivered on Microsoft Azure’s global cloud infrastructure.
Source: Azure

Azure Cost Management + Billing updates – February 2020

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

New Power BI reports for Azure reservations and Azure Hybrid Benefit
Quicker access to help and support
We need your feedback
What's new in Cost Management Labs
Drill in to the costs for your resources
Understanding why you see "not applicable"
Upcoming changes to Azure usage data
New videos and learning opportunities
Documentation updates

Let's dig into the details.

 

New Power BI reports for Azure reservations and Azure Hybrid Benefit

Azure Cost Management + Billing offers several ways to report on your cost and usage data. You can start in the portal, download data or schedule an automated export for offline analysis, or even integrate with the Cost Management APIs directly. But maybe you just need detailed reporting alongside other business reports. This is where Power BI comes in. We last talked about the addition of reservation purchases to the Azure Cost Management Power BI connector in October. Building on top of that, the new Azure Cost Management Power BI app offers an extensive set of reports to get you started, including detailed reservation and Azure Hybrid Benefit reports.

The Account overview offers a summary of all usage and purchases as well as your credit balance to help you track monthly expenses. From here, you can dig in to usage costs broken down by subscription, resource group, or service in additional pages. Or, if you simply want to see your prices, take a look at the Price sheet page.

If you’re already using Azure Hybrid Benefit (AHB) or have existing, unused on-prem Windows licenses, check out the Windows Server AHB Usage page. Start by checking how many VMs currently have AHB enabled to determine if you have additional licenses that could help you further lower your costs. If you do have additional licenses, you can also identify eligible VMs based on their core/vCPU count. Apply AHB to your most expensive VMs to maximize your potential savings.

If you’re using Azure reservations, or are curious how much you could save if you were, check out the VM RI coverage pages to identify new opportunities to save with reservations, including the historical usage that explains why each reservation is recommended. You can drill in to a specific region or instance size flexibility group and more. The RI purchases page shows your past purchases, and the RI chargeback page breaks those costs down by region, subscription, or resource group if you need to do any internal chargeback. And don’t forget the RI savings page, which shows how much you’ve saved so far by using Azure reservations.
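The core number behind the VM RI coverage pages is the share of usage hours covered by reservations, per instance size. Here's a minimal sketch of that calculation; the record shapes and function name are assumptions for illustration, not the report's implementation.

```python
# Sketch: estimate reservation coverage per VM size from hourly usage.
# Hypothetical inputs; real data would come from the usage details
# file or the Cost Management APIs.

def ri_coverage(usage_hours, reserved_hours):
    """Percent of usage hours covered by reservations, keyed by VM size.

    usage_hours: {vm_size: total hours used in the period}
    reserved_hours: {vm_size: reserved hours available in the period}
    """
    coverage = {}
    for size, used in usage_hours.items():
        reserved = reserved_hours.get(size, 0)
        # Coverage can't exceed 100% even if reservations go unused.
        coverage[size] = round(min(reserved, used) / used * 100, 1) if used else 0.0
    return coverage
```

A size with low coverage and steady historical usage is exactly the kind of opportunity the coverage pages surface as a recommendation.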

This is just the first release of a new generation of Power BI reports. Get started with the Azure Cost Management Power BI quickstart today and let us know what you’d like to see next.

 

Quicker access to help and support

Learning something new can be a challenge, especially when it's not your primary focus. But given how critical meeting your financial goals is, help and support need to be front and center. To support this, Cost Management now includes a contextual Help menu that directs you to documentation and support experiences.

Get started with a quickstart tutorial and, when you're ready to automate that experience or integrate it into your own apps, check out the API reference. If you have any suggestions on how the experience could be improved for you, please don't hesitate to share your feedback. If you run into an issue or see something that doesn't make sense, start with Diagnose and solve problems, and if you don't see a solution, then please do submit a new support request. We're closely monitoring all feedback and support requests to identify ways the experience could be streamlined for you. Let us know what you'd like to see next.

 

We need your feedback

As you know, we're always looking for ways to learn more about your needs and expectations. This month, we'd like to learn more about how you report on and analyze your cloud usage and costs in a brief survey. We'll use your input from this survey to inform ease-of-use and navigation improvements within Cost Management + Billing experiences. The 15-question survey should take about 10 minutes.

Take the survey.

 

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Get started quicker with the cost analysis Home view
Azure Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quick access to those views so you get to what you need faster.
New: More details in the cost by resource view
Drill in to the cost of your resources to break them down by meter. Simply expand the row to see more details or click the link to open and take action on your resources.
New: Explain what "not applicable" means
Break down "not applicable" to explain why specific properties don't have values within cost analysis.

Of course, that's not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

 

Drill in to the costs for your resources

Resources are the fundamental building block in the cloud. Whether you're using the cloud as infrastructure or componentized microservices, you use resources to piece together your solution and achieve your vision. And how you use these resources ultimately determines what you're billed for, which breaks down to individual "meters" for each of your resources. Each service tracks a unique set of meters covering time, size, or other generalized units. The more units you use, the higher the cost.

Today, you can see costs broken down by resource or meter with built-in views, but seeing both together requires additional filtering and grouping to get down to the data you need, which can be tedious. To simplify this, you can now expand each row in the Cost by resource view to see the individual meters that contribute to the cost of that resource.

This additional clarity and transparency should help you better understand the costs you're accruing for each resource at the lowest level. And if you see a resource that shouldn't be running, simply click the name to open the resource, where you can stop or delete it to avoid incurring additional cost.
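The aggregation behind the expanded view is straightforward: sum quantity times unit price per meter, nested under each resource. A minimal sketch, assuming a hypothetical record shape (real usage-detail rows have many more fields):

```python
# Sketch: break each resource's cost down by meter, mirroring the
# expanded Cost by resource view. Record fields are hypothetical.
from collections import defaultdict

def cost_by_resource_and_meter(records):
    """Return nested totals: {resource: {meter: cost}}.

    Each record needs 'resource', 'meter', 'quantity', 'unit_price';
    cost per row is quantity * unit_price, summed per meter.
    """
    totals = defaultdict(lambda: defaultdict(float))
    for r in records:
        totals[r["resource"]][r["meter"]] += r["quantity"] * r["unit_price"]
    # Convert to plain dicts for a stable, inspectable result.
    return {res: dict(meters) for res, meters in totals.items()}
```

This is the same roll-up you'd previously have assembled by hand with filters and grouping in cost analysis.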

You can see the updated Cost by resource view in Cost Management Labs today, while in preview. Let us know if you have any feedback. We'd love to know what you'd like to see next. This should be available everywhere within the next few weeks.

 

Understanding why you see "not applicable"

Azure Cost Management + Billing includes all usage, purchases, and refunds for your billing account. Seeing every line item in the full usage and charges file allows you to reconcile your bill at the lowest level, but since each of these records has different properties, aggregating them within cost analysis can result in groups of empty properties. This is when you see "not applicable" today.

Now, in Cost Management Labs, you can see these costs broken down and categorized into separate groups to bring additional clarity and explain what each represents. Here are a few examples:

You may see Other classic resources for any classic resources that don't include resource group in usage data when grouping by resource or resource group.
If you're using any services that aren't deployed to resource groups, like Security Center or Azure DevOps (Visual Studio Online), you will see Other subscription resources when grouping by resource group.
You may recall seeing Untagged costs when grouping by a specific tag. This group is now broken down further into Tags not available and Tags not supported groups. These signify services that don't include tags in usage data (see How tags are used) and costs that can't be tagged, like purchases and resources not deployed to resource groups, covered above.
Since purchases aren't associated with an Azure resource, you might see Other Azure purchases or Other Marketplace purchases when grouping by resource, resource group, or subscription.
You may also see Other Marketplace purchases when grouping by reservation. This represents other purchases, which aren't associated with a reservation.
If you have a reservation, you may see Unused reservation when viewing amortized costs and grouping by resource, resource group, or subscription. This represents the unused portion of your reservation that isn't associated with any resources. These costs will only be visible from your billing account or billing profile.

Of course, these are just a few examples. You may see more. When there simply isn't a value, you'll see something like No department, as an example, which represents Enterprise Agreement (EA) subscriptions that aren't grouped into a department.
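To make the categorization concrete, here's a sketch of how rows that would otherwise show "not applicable" might be bucketed when grouping by resource group. The record shape and rule ordering are illustrative assumptions based on the examples above, not the service's actual implementation.

```python
# Sketch: classify rows that lack a resource group into the named
# buckets described above. Hypothetical record fields: 'kind',
# 'marketplace', 'classic', 'resource_group'.

def classify_for_resource_group(record):
    """Return the group label shown when grouping by resource group."""
    if record.get("kind") == "purchase":
        # Purchases aren't associated with an Azure resource.
        return ("Other Marketplace purchases" if record.get("marketplace")
                else "Other Azure purchases")
    if record.get("classic"):
        # Classic resources don't include a resource group in usage data.
        return "Other classic resources"
    if not record.get("resource_group"):
        # Services not deployed to resource groups (e.g. Azure DevOps).
        return "Other subscription resources"
    return record["resource_group"]
```

Each bucket maps directly to one of the examples in the list above, so a reconciliation script can mirror what cost analysis now shows.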

We hope these changes help you better understand your cost and usage data. You can see this today in Cost Management Labs while in preview. Please check it out and let us know if you have any feedback. This should be available everywhere within the next few weeks.

 

Upcoming changes to Azure usage data

Many organizations use the full Azure usage and charges data to understand what's being used, identify which charges should be internally billed to which teams, and look for opportunities to optimize costs with Azure reservations and Azure Hybrid Benefit, just to name a few. If you're doing any analysis or have set up integration based on product details in the usage data, please update your logic for the following services.

The following change will start effective March 1:

Autodesk Arnold Service meter IDs will change.

Also, remember the key-based Enterprise Agreement (EA) billing APIs have been replaced by new Azure Resource Manager APIs. The key-based APIs will still work through the end of your enrollment, but will no longer be available when you renew and transition into Microsoft Customer Agreement. Please plan your migration to the latest version of the UsageDetails API to ease your transition to Microsoft Customer Agreement at your next renewal.
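If you're migrating off the key-based EA APIs, the replacement calls go through Azure Resource Manager. Here's a sketch of assembling the Consumption UsageDetails request URL at subscription scope; the api-version shown is one known to exist at the time of writing, so check the current API reference before relying on it.

```python
# Sketch: build the Azure Resource Manager UsageDetails URL that
# replaces a key-based EA billing API call. Authentication (an Azure AD
# bearer token) is omitted; this only shows the endpoint shape.

def usage_details_url(subscription_id, api_version="2019-10-01"):
    """Return the GET URL for usage details at subscription scope."""
    return (
        "https://management.azure.com/subscriptions/"
        f"{subscription_id}/providers/Microsoft.Consumption/usageDetails"
        f"?api-version={api_version}"
    )
```

In practice you'd issue the GET with an `Authorization: Bearer <token>` header and follow the `nextLink` in each response page.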

 

New videos and learning opportunities

For the visual learners out there, here are two new resources you should check out:

Optimize Spending with Azure Cost Management + Billing (60m) – Attend this webinar on February 27 to learn about how to optimize your costs.
Azure Machine Learning datasets (10m) – Learn about datasets, which can help you reduce storage costs.

Follow the Azure Cost Management + Billing YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next!

 

Documentation updates

There were lots of documentation updates. Here are a few you might be interested in:

Walk through for the new Power BI template app for EA.
New PowerShell sample in the Budgets quickstart.
Added details about reservation purchases being included in budgets.
Detailed how tags are represented in cost and usage data.
Explained why certain attributes show "Not applicable" to the cost analysis quickstart.
Documented how reservation recommendations are calculated.
Expanded the list of services that support monthly reservation payments.
Noted that subscriptions can be moved between directories in Account admin tasks in the Azure portal.
Documented options for transferring CSP subscriptions.
Updated API references for billing accounts, billing profiles, subscription billing properties, transactions, and lines of credit.

Want to keep an eye on all of the documentation updates? Check out the Cost Management + Billing doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

What's next?

These are just a few of the big updates from last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.
Source: Azure