Find the clarity and guidance you need to realize cloud value

A modernized cloud workload offers significant benefits—including cost savings, optimized security and management, and opportunities for ongoing innovation. But the process of migrating and modernizing workloads can be challenging. That’s why it’s essential to prepare and plan ahead—and to ensure that your organization finds continued value in the cloud.

Whether you’re just starting your move to the cloud or are looking for ways to optimize your current cloud workloads, my team and I are committed to helping you maximize your cloud investments, overcome technical barriers, adopt new business processes, develop your team’s skills, and achieve sustainable innovation in the cloud. That’s why we invite you to watch sessions from Realizing Success in the Cloud—now available on-demand.

At this digital event, attendees learned about the key components of a successful cloud adoption journey. They heard Microsoft leaders, industry experts, and Azure customers discuss ways to drive value with migration and modernization. They also discovered best practices for boosting adoption across organizations and enabling sustainable innovation in the long term.

Check out these session highlights, which cover three critical areas of the cloud journey:

1. Optimize your business value in the cloud

In the early phases of any new cloud project, it’s essential that you define strategy, understand motivations, and identify business outcomes. Maybe you’re looking to optimize your cost investment and reduce technical debt. Or maybe adoption might enable your team to build new technical capabilities and products. Whether you’re looking to migrate, modernize, or innovate in the cloud, you’ll want to build a business case that sets your organization up for success—and we’ll show you how to put one together.

With the help of Jeremy Winter, Azure VP of Program Management, you’ll explore the process using key technical and financial guidelines. In this session, you’ll discover templates, assessments, and tools for estimating your cloud costs, managing spending, and maximizing the overall value you get from Azure. You’ll also hear how the cloud experts at Insight, a Microsoft technology partner, use Azure enablement resources to help their clients realize savings.

2. Customize your Azure journey

Your organization’s business, security, and industry requirements are unique, which is why you’ll need to develop a tailored plan that will help you successfully execute your vision—and ensure that your deployment and operations needs are being met. That’s why it’s important to understand when to adhere to the best practices of your cloud vendor—and when to customize your journey—with guidance from the experts.

In the session led by Uli Homann, Microsoft VP of Cloud and AI, you’ll learn how to set up scalable, modular cloud environments using Azure landing zones. As you prepare for post-deployment, you’ll find out how to evaluate the cost efficiency, performance, reliability, and security of your workload performance using recommendations from the Azure Well-Architected Framework and Azure Advisor. Uli also speaks with NHS Digital, the technology partner for the UK’s public healthcare system, to discuss how they built a responsive system architecture that could scale and perform under unprecedented demand.

3. Accelerate success with Azure skills training

Whether you’re migrating to the cloud or building a cloud-native app, the skills of your team are key to enabling successful business outcomes. Azure skills training fosters a growth mindset and helps your team develop expertise that impacts your entire organization, from individual career advancement to sustainable, long-term innovation.

In a fireside chat between Sandeep Bhanot, Microsoft VP of Global Technical Learning, and Cushing Anderson, VP of IT Education and Certification at IDC, you’ll hear about key learnings from research that highlight the business value of skills training for accelerating success. You’ll also explore how to use these findings to build a compelling business case for developing skills training programs in your organization.

Watch this event on-demand to:

Get an overview of the cloud enablement tools, programs, and frameworks available to help you realize your goals on Azure.
See these resources in action. Hear success stories from customers like KPMG who have used Azure enablement resources to build, optimize, and achieve ongoing value in the cloud.
Hear insights from Microsoft product experts as they answer questions from the Azure community during the live Q&A.

The live event may be over, but you still have the chance to learn and explore at your own pace, on your own time. Discover how to quickly access and use the right set of Azure enablement tools for your specific needs—and pave the way for ongoing success in the cloud. 

Watch now.
Source: Azure

Learn what’s new in Azure Firewall

This post was co-authored by Suren Jamiyanaa, Program Manager 2, Azure Networking.

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are happy to share several key Azure Firewall capabilities as well as an update on recent important releases into general availability and preview.

Intrusion Detection and Prevention System (IDPS) signatures lookup now generally available.
TLS inspection (TLSi) Certification Auto-Generation now generally available.
Web categories lookup now generally available.
Structured Firewall Logs now in preview.
IDPS Private IP ranges now in preview.

Azure Firewall is a cloud-native firewall-as-a-service offering that enables customers to centrally govern and log all their traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto-scaling.

IDPS signatures lookup

Azure Firewall Premium IDPS signature lookup is a great way to better understand the applied IDPS signatures on your network as well as fine-tuning them according to your specific needs. IDPS signatures lookup allows you to:

Customize one or more signatures and change their mode to Disabled, Alert, or Alert and Deny. For example, if you receive a false positive where a legitimate request is blocked by Azure Firewall due to a faulty signature, you can take the signature ID from the network rules logs and set its IDPS mode to Disabled. This causes the faulty signature to be ignored and resolves the false positive issue.
You can apply the same fine-tuning procedure for signatures that are creating too many low-priority alerts, and therefore interfering with visibility for high-priority alerts.
Get a holistic view of all 58,000 signatures.
Use smart search to search the entire signature database by any type of attribute. For example, you can type a specific CVE ID in the search bar to discover which signatures cover that CVE.

TLSi Certification Auto-Generation

For non-production deployments, you can use the Azure Firewall Premium TLS inspection Certification Auto-Generation mechanism, which automatically creates the following three resources for you:

Managed Identity
Key Vault
Self-signed Root CA certificate

Simply select the new managed identity; Azure Firewall ties the three resources together in your Premium policy and sets up TLS inspection.

Web categories lookup

Web Categories is a filtering feature that lets administrators allow or deny web traffic based on categories, such as gambling, social media, and more. We added two tools to help manage these web categories: Category Check and Mis-Categorization Request.

Using Category Check, an admin can determine which category a given FQDN or URL falls under. If an FQDN or URL fits better under a different category, an administrator can also report the incorrect classification; the request will then be evaluated and, if approved, the categorization updated.

Structured Firewall Logs

Today, the following diagnostic log categories are available for Azure Firewall:

Application rule log
Network rule log
DNS proxy log

These log categories use Azure diagnostics mode, in which all data from any diagnostic setting is collected in the AzureDiagnostics table.

With this new feature, customers can choose to use resource-specific tables instead of the existing AzureDiagnostics table. If both sets of logs are required, at least two diagnostic settings need to be created per firewall.

In Resource Specific mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting.

This method is recommended because it makes it much easier to work with the data in log queries, provides better discoverability of schemas and their structure, improves performance across both ingestion latency and query times, and allows you to grant Azure role-based access control (RBAC) rights on a specific table.

New resource-specific tables are now available in diagnostic settings, allowing you to use the following newly added categories:

Network rule log: contains all Network Rule log data. Each match between a data plane packet and a network rule creates a log entry with the packet and the matched rule's attributes.
NAT rule log: contains all destination network address translation (DNAT) event log data. Each match between a data plane packet and a DNAT rule creates a log entry with the packet and the matched rule's attributes.
Application rule log: contains all Application rule log data. Each match between a data plane packet and an application rule creates a log entry with the packet and the matched rule's attributes.
Threat Intelligence log: contains all Threat Intelligence events.
IDPS log: contains all data plane packets that were matched with one or more IDPS signatures.
DNS proxy log: contains all DNS Proxy events log data.
Internal FQDN resolve failure log: contains all internal Firewall FQDN resolution requests that resulted in failure.
Application rule aggregation log: contains aggregated Application rule log data for Policy Analytics.
Network rule aggregation log: contains aggregated Network rule log data for Policy Analytics.
NAT rule aggregation log: contains aggregated NAT rule log data for Policy Analytics.

Additional Kusto Query Language (KQL) log queries were added to query structured firewall logs.

IDPS Private IP ranges

In Azure Firewall Premium IDPS, private IP address ranges are used to identify whether traffic is inbound or outbound. By default, only ranges defined by the Internet Assigned Numbers Authority (IANA) in RFC 1918 are considered private IP addresses. To modify your private IP address ranges, you can now easily edit, remove, or add ranges as needed.

Learn more

Azure Firewall Documentation.
Azure Firewall Preview Features.
Azure Firewall Premium.
Azure Firewall Web Categories.

Source: Azure

Achieve seamless observability with Dynatrace for Azure

This blog post has been co-authored by Manju Ramanathpura, Principal Group PM, Azure DevEx & Partner Ecosystem.

As adoption of public cloud grows by leaps and bounds, organizations want to leverage software and services that they love and are familiar with as a part of their overall cloud solution. Microsoft Azure enables customers to host their apps on the globally trusted cloud platform and use the services of their choice by closely partnering with popular SaaS offerings. Dynatrace is one such partner that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities on Azure.

“Deep and broad observability, runtime application security, and advanced AI and automation are key for any successful cloud transformation. Through the Dynatrace platform’s integration with Microsoft Azure, customers will now have immediate access to these capabilities. This integration will deliver answers and intelligent automation from the massive amount of data generated by modern hybrid-cloud environments, enabling flawless and secure digital interactions.”—Steve Tack, SVP Product Management, Dynatrace.

Modern cloud-native environments are complex and dynamic. When failures occur, development teams need deep visibility into the systems to get to the root cause of the issues and understand the impact of potential fixes. Good observability solutions such as Dynatrace for Azure not only enable you to understand what is broken, but also provide the ability to proactively identify and resolve issues before they impact your customers. Currently, if you want to leverage Dynatrace for observability, you go through a complex process of setting up credentials, Event Hubs, and writing custom code to send monitoring data from Azure to Dynatrace. This is often time-consuming and hard to troubleshoot when issues occur. To alleviate this customer pain, we worked with Dynatrace to create a seamlessly integrated solution on Azure that’s now available on the Azure Marketplace.

Dynatrace’s integration provides a unified experience with which you can:

Create a new Dynatrace environment in the cloud with just a few clicks. Dynatrace SaaS on Azure is a fully managed offering that takes away the need to set up and operate infrastructure.
Seamlessly ship logs and metrics to Dynatrace. Using just a few clicks, configure auto-discovery of resources to monitor and set up automatic log forwarding. Configuring Event Hubs and writing custom code to get monitoring data is now a thing of the past.
Easily install Dynatrace OneAgent on virtual machines (VMs) and App Services through a single click. OneAgent continuously monitors the health of host and processes and automatically instruments any new processes.
Use single sign-on to access the Dynatrace SaaS portal—no need to remember multiple credentials and log in separately.
Get consolidated billing for the Dynatrace service through Azure Marketplace.

“Microsoft is committed to providing a complete and seamless experience for our customers on Azure. Enabling developers to use their most loved tools and services makes them more productive and efficient. Azure native integration of Dynatrace makes it effortless for developers and IT administrators to monitor their cloud applications with the best of Azure and Dynatrace together.”—Balan Subramanian, Partner Director of Product Management, Azure Developer Experiences.

Get started with Dynatrace for Azure

Let’s now look at how you can easily set up and configure Dynatrace for Azure:

Acquire the Dynatrace for Azure offering: You can find and acquire the solution from the Azure Marketplace.


Create a Dynatrace resource in Azure portal: Once the Dynatrace solution is acquired, you can seamlessly create a Dynatrace resource using the Azure portal. Using the Dynatrace resource, you can configure and manage your Dynatrace environments within the Azure portal.


Configure log forwarding: Configure which Azure resources send logs to Dynatrace, using the familiar concept of resource tags.


Install Dynatrace OneAgent: With a single click, you can install Dynatrace OneAgent on multiple VMs and App Services.


Access Dynatrace native service for Azure with single sign-on: Use the single sign-on experience to effortlessly access dashboards, Smartscape® topology visualization, log content, and more on the Dynatrace portal.

Next steps

Subscribe to the preview of Dynatrace’s integration with Azure available in the Azure Marketplace.
Learn more about the Dynatrace integration.

Source: Azure

SUSECON 2022: Powering Business Critical Linux workloads on Azure

Since 2009, Microsoft and SUSE have partnered to provide Azure-optimized solutions for SUSE Linux Enterprise Server (SLES). SLES for SAP Applications is the leading platform for SAP solutions on Linux, with over 90 percent of SAP HANA deployments and 70 percent of SAP NetWeaver applications running on SUSE. Microsoft and SUSE jointly offer agility and flexibility for next-generation SAP landscapes powered by SAP HANA.

Microsoft is sponsoring SUSECON Digital 2022 to bring the latest technological advancements to customers and the open-source community at large. In keeping with SUSECON Digital 2022's Future Forward theme, we'll be shining a light on the latest and greatest, exploring the worlds of business-critical Linux, enterprise container management, and edge innovation. During the three-day event from June 7 to 9, Microsoft will participate in several activities, including keynotes, demo sessions, and virtual booths. When you need a break, there are wellness sessions, games, and opportunities to win exciting prizes.

The need for innovation has never been greater. To become more agile and accelerate innovation, organizations are embarking on journeys of digital transformation. These transformations require modernizing legacy infrastructure and applications, adopting cloud-native technologies, and pushing organizational boundaries beyond the data center and cloud, all the way to the edge.

Hiren Shah, Head of Products for SAP on Azure Core Platform at Microsoft, will join Markus Noga, GM of Linux at SUSE, at the Business Critical Linux keynote session to highlight how our partnership powers SAP workloads in critical business functions, such as finance, supply chain, and procurement, and enhances customer experience. Downtime caused by infrastructure outages can result in business disruption and lost revenue. A few areas where joint development is currently in progress include:

High availability with SUSE Pacemaker clusters for SAP HANA workloads.
Automated resource migration when infrastructure fails.
Automated deployment of SAP HANA workloads and operating system configuration, including high availability.
Monitoring of high availability (HA) setups with Azure Monitor for SAP Solutions.
Identifying common configuration errors through customer engagements.
Live patching, balanced uptime, and security needs.

In addition to the Cornerstone Keynote discussion, the Microsoft team will deliver six breakout sessions, covering topics such as the Azure high-performance computing software platform, SLES for SAP applications, Azure Hybrid Benefit, SQL Server, automotive software development, and more. We will focus on best practices for SQL Server on SLES-based Azure Virtual Machines. Many of our customers are now deploying SQL Server containers as part of their data estate modernization strategy; the sessions will cover how Rancher can be used to deploy SQL Server containers and manage production workloads.

Migrating mission-critical SAP workloads can be complex for enterprises. Microsoft's open source SAP Deployment Automation Framework can help customers deploy infrastructure using Terraform (infrastructure as code) and install SAP using Ansible (configuration as code). SUSE has been a co-development partner with Microsoft in developing this open source framework. The framework enables accelerated deployment of SAP and is aligned with reference architectures and best practices. We are excited to continue our partnership with SUSE as we explore synergies within SAP operations (and beyond) and help an increasing number of customers and partners leverage our framework.

Learn more about the latest updates

Join us at SUSECON Digital 2022 to learn more and bring your questions to our experts. Learn more about deploying secure, reliable, flexible hybrid cloud environments using SUSE solutions on Azure.
Explore how Azure Hybrid Benefit extends SLES workloads on Azure, removing migration friction with integrated support provided by Microsoft.

Resources

Learn how Azure Monitor for SAP Solutions monitors products for customers.
Read more about SQL Server on Virtual Machines and how to migrate to the cloud.
Discover how to use SUSE SAP automation solution on Azure.

Source: Azure

Top 5 reasons to attend Azure Hybrid, Multicloud, and Edge Day

Infrastructure and app development are becoming more complex as organizations span a combination of on-premises, cloud, and edge environments. Such complexities arise when:

Organizations want to maximize their existing on-premises investments like traditional apps and datacenters.
Workloads can’t be moved to public clouds due to regulatory or data sovereignty requirements.
Low latency is required, especially for edge workloads.
Organizations need innovative ways to transform their data insights into new products and services.

Operating across disparate environments presents management and security complexities. But comprehensive hybrid solutions can not only address these complexities but also offer new opportunities for innovation. For example, organizations can innovate anywhere across hybrid, multicloud, and edge environments by bringing Azure security and cloud-native services to those environments with a solution like Azure Arc.

That’s why we’re excited to present Azure Hybrid, Multicloud, and Edge Day—your chance to see how to innovate anywhere with Azure Arc. Join us at this free digital event on Wednesday, June 15, 2022, from 9:00 AM‒10:30 AM Pacific Time.

Here are five reasons to attend Azure Hybrid, Multicloud, and Edge Day:

Hear real-world success stories, tips, and best practices from customers using Azure Arc. IT leaders from current customers will share how they use Azure Arc to enable IT, database, and developer teams to deliver value to their users faster, quickly mine business data for deeper insights, modernize existing on-premises apps, and easily keep environments and systems up to date.
Be among the first to hear Microsoft product experts present innovations, news, and announcements for Azure Arc. Get the latest updates on the most comprehensive portfolio of hybrid solutions available.
See hybrid solutions in action. Watch demos and technical deep dives—led by Microsoft engineers—on hybrid and multicloud solutions, including Azure Arc and Azure Stack HCI. You’ll also hear product leaders present demos on Azure Arc–enabled SQL Managed Instance, Business Critical—a service tier that just recently became generally available. Business Critical is built for mission-critical workloads that require the most demanding performance, high availability, and security.
Get answers to your questions. Use the live Q&A chat to ask your questions and get insights on your specific scenario from Microsoft product experts and engineers.
Discover new skill-building opportunities. Learn how you can expand your hybrid and multicloud skillset with the latest trainings and certifications from Microsoft, including the Windows Server Hybrid Administrator Associate certification.

And here’s a first look at one of the Azure customers sharing their perspective at this digital event: Greggs

A United Kingdom favorite for breakfast, lunch, and coffee on the go, Greggs has been modernizing their 80-year-old business through digital transformation. When they needed to consolidate their sprawl between their on-premises server estate and their virtual machines, their IT team turned to Azure Arc.

“One of the advantages of Arc was that we could use one strategy across both on-premises and off-premises architecture,” says Scott Clennell, Head of Infrastructure and Networks at Greggs. “We deployed Azure Arc on our on-premises architecture, then throughout the rest of the infrastructure very rapidly—a matter of a couple of weeks.”

Not only has Azure Arc helped the IT team manage their digital estate better—it’s transformed their team culture. By uniting their entire IT team around Azure Arc, they can work better with their developers using common systems and collaboration tools.

Hear from Greggs and more featured customers at Azure Hybrid, Multicloud, and Edge Day. We hope you can attend!

Azure Hybrid, Multicloud, and Edge Day

June 15, 2022
9:00 AM‒10:30 AM Pacific Time

Delivered in partnership with Intel.

Source: Azure

Improve outbound connectivity with Azure Virtual Network NAT

For many customers, making outbound connections to the internet from their virtual networks is a fundamental requirement of their Azure solution architectures. Factors such as security, resiliency, and scalability are important to consider when designing how outbound connectivity will work for a given architecture. Luckily, Azure has just the solution for ensuring highly available and secure outbound connectivity to the internet: Virtual Network NAT. Virtual Network NAT, also known as NAT gateway, is a fully managed and highly resilient service that is easy to scale and specifically designed to handle large-scale and variable workloads.

NAT gateway provides outbound connectivity to the internet through its attachment to a subnet and a public IP address. NAT stands for network address translation, and as the name implies, when NAT gateway is associated with a subnet, all of the private IPs of the subnet's resources (such as virtual machines) are translated to NAT gateway's public IP address. The NAT gateway public IP address then serves as the source IP address for the subnet's resources. NAT gateway can be attached to a total of 16 IP addresses from any combination of public IP addresses and prefixes.

Figure 1: NAT gateway configuration with a subnet and a public IP address and prefix.

Customer is halted by connection timeouts while trying to make thousands of connections to the same destination endpoint

Customers in industries like finance, retail, or other scenarios that require leveraging large sets of data from the same source need a reliable and scalable method to connect to this data source.

In this blog, we’re going to walk through one such example that was made possible by leveraging NAT gateway.

Customer background

A customer collects a high volume of data to track, analyze, and ultimately make business decisions for one of their primary workloads. This data is collected over the internet from a service provider’s REST APIs, hosted in a data center they own. Because the data sets the customer is interested in may change daily, a recurring report can’t be relied on—they must request the data sets each day. Because of the volume of data, results are paginated and shared in chunks. This means that the customer must make tens of thousands of API requests for this one workload each day, typically taking from one to two hours. Each request correlates to its own separate HTTP connection, similar to their previous on-premises setup.

The starting architecture

In this scenario, the customer connects to REST APIs in the service provider’s on-premises network from their Azure virtual network. The service provider’s on-premises network sits behind a firewall. The customer started to notice that sometimes one or more virtual machines waited for long periods of time for responses from the REST API endpoint. These connections waiting for a response would eventually time out and result in connection failures.

Figure 2: The customer sends traffic from their virtual machine scale set (VMSS) in their Azure virtual network over the internet to an on-premises service provider’s data center server (REST API) that is fronted by a firewall.

The investigation

Upon deeper inspection with packet captures, it was found that the service provider’s firewall was silently dropping incoming connections from their Azure network. Since the customer’s architecture in Azure was specifically designed and scaled to handle the volume of connections going to the service provider’s REST APIs for collecting the data they required, this seemed puzzling. So, what exactly was causing the issue?

The customer, the service provider, and Microsoft support engineers collectively investigated why connections from the Azure network were being sporadically dropped, and made a key discovery: only connections coming from a source port and IP address that had been used recently (within about 20 seconds) were dropped by the service provider's firewall. This is because the firewall enforces a 20-second cooldown period on new connections coming from the same source IP and port. Connections using a new source port on the same public IP were not impacted by the cooldown timer.

From these findings, it was concluded that source network address translation (SNAT) ports from the customer's Azure virtual network were being reused too quickly to make new connections to the service provider's REST API. When ports were reused before the cooldown timer completed, the connection would time out and ultimately fail. The customer was then confronted with the question: how do we prevent ports from being reused too quickly? Since the firewall's cooldown timer could not be changed, the customer had to work within its constraints.
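The interaction between port reuse and the firewall's cooldown window can be illustrated with a small simulation. This is an illustrative model only, not Azure's actual SNAT implementation; the pool sizes, connection rate, and random port selection are hypothetical:

```python
import random

COOLDOWN = 20.0  # seconds: the firewall drops a reused (source IP, port) pair within this window

def simulate(port_pool_size, connections, interval=0.1, seed=1):
    """Count connections dropped because their source port was reused
    within the cooldown window. Ports are drawn at random from the pool;
    a smaller pool means faster reuse."""
    rng = random.Random(seed)
    last_used = {}  # port -> time of last use
    dropped = 0
    t = 0.0
    for _ in range(connections):
        port = rng.randrange(port_pool_size)
        if port in last_used and t - last_used[port] < COOLDOWN:
            dropped += 1  # firewall silently drops this connection
        last_used[port] = t
        t += interval  # next connection attempt
    return dropped

# A small, quickly recycled port pool trips the cooldown constantly;
# a NAT gateway-sized pool (64,512 ports per public IP) almost never does.
small_pool_drops = simulate(port_pool_size=512, connections=5000)
large_pool_drops = simulate(port_pool_size=64512, connections=5000)
print(small_pool_drops, large_pool_drops)
```

With these hypothetical numbers, the small pool recycles ports well inside the 20-second window and sees frequent drops, while the large pool rarely collides, mirroring the behavior observed in the packet captures.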

NAT gateway to the rescue

Based on this data, NAT gateway was introduced into the customer’s setup in Azure as a proof of concept. With this one change, connection timeout issues became a thing of the past.

NAT gateway was able to resolve this customer's outbound connectivity issue to the service provider's REST APIs for two reasons. First, NAT gateway selects ports at random from a large inventory of ports. The source port selected to make a new connection has a high probability of being new and will therefore pass through the firewall without issue. This large inventory of ports is derived from the public IPs attached to NAT gateway: each public IP address provides 64,512 SNAT ports to a subnet's resources, and up to 16 public IP addresses can be attached to NAT gateway. That means a customer can have over 1 million SNAT ports available to a subnet for making outbound connections. Second, source ports reused by NAT gateway to connect to the service provider's REST APIs are not impacted by the firewall's 20-second cooldown timer, because NAT gateway places reused source ports on their own cooldown timer that lasts at least as long as the firewall's before they can be reused. See our public article on NAT gateway SNAT port reuse timers to learn more.
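The port inventory described above can be sanity-checked with quick arithmetic (a sketch based solely on the numbers quoted in this post):

```python
# Each public IP attached to NAT gateway provides 64,512 SNAT ports,
# and up to 16 public IPs can be attached to a single NAT gateway.
PORTS_PER_PUBLIC_IP = 64_512
MAX_PUBLIC_IPS = 16

total_snat_ports = PORTS_PER_PUBLIC_IP * MAX_PUBLIC_IPS
print(total_snat_ports)  # -> 1032192, i.e., over 1 million SNAT ports
```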

Stay tuned for our next blog where we’ll do a deep dive into how NAT gateway solves for SNAT port exhaustion through not only its SNAT port reuse behavior but also through how it dynamically allocates SNAT ports across a subnet’s resources.

Learn more

Through the customer scenario above, we saw how NAT gateway's selection and reuse of SNAT ports makes it Azure's recommended option for connecting outbound to the internet. Because NAT gateway mitigates not only the risk of SNAT port exhaustion but also connection timeouts through its randomized port selection, it serves as the best option for connecting outbound to the internet from your Azure network.

To learn more about NAT gateway, see Design virtual networks with NAT gateway.
Source: Azure

Power your file storage-intensive workloads with Azure VMware Solution

This blog has been co-authored by Ram Kakani, Principal Program Manager, Azure Dedicated.

If you’ve been waiting for the right time to optimize your storage-intensive VMware applications in the cloud, I have great news for you: Azure NetApp Files for Network File System (NFS) datastores in Azure VMware Solution is now available in preview.

With Azure VMware Solution you can now scale storage independently from compute using Azure NetApp Files datastores, enabling you to run VMware-based storage-intensive workloads like SQL Server, general-purpose file servers, and others in Azure.

Gain the flexibility and scalability of running your storage-heavy workloads on Azure VMware Solution, while delivering high performance and low latency.

Azure NetApp Files as a datastore choice for Azure VMware Solution

Azure NetApp Files is available in preview as a datastore choice for Azure VMware Solution, and Azure NetApp Files NFS volumes can now be attached to the Azure VMware Solution clusters of your choice.

Use cases include migration and disaster recovery (DR)

Azure NetApp Files datastores for Azure VMware Solution enable VMware customers to:

Flexibly manage and scale storage resources for workloads running on Azure VMware Solution, independently of compute.
Lower total cost of ownership (TCO) through storage optimization for VMware workloads.
More efficiently leverage Azure VMware Solution as a DR endpoint for business continuity.

Let the powerful file storage solution in the cloud power your VMware workloads

Azure NetApp Files is a fully managed file share service built on trusted NetApp ONTAP storage technology and offered as an Azure first-party solution.

“Azure NetApp Files helps deliver the performance, flexibility, scalability, and cost optimization customers need to migrate any VMware workload, including ‘un-migratable,’ storage-intensive VMware applications, to the Azure cloud and to securely back up on-premises VMware applications to Azure.”—Ronen Schwartz, Senior Vice President and General Manager, NetApp Cloud Volumes

We know every business is different and scales on its own timetable, so we created three performance tiers for Azure NetApp Files: Standard, Premium, and Ultra. Scale up and down on demand as your requirements change. You can store up to 10 PB in a single deployment and achieve up to 4.4 GBps of throughput with sub-millisecond minimum latency in a single volume.
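As a rough illustration of how these service levels relate provisioned capacity to throughput, here is a sketch. The per-TiB figures are assumptions drawn from the tiers’ documented throughput limits and should be verified against the current Azure NetApp Files documentation before relying on them.

```python
# Illustrative sketch of how Azure NetApp Files service levels scale
# throughput with provisioned volume quota. The per-TiB figures below are
# assumptions based on the documented tier limits; verify them against
# the current service documentation.
THROUGHPUT_MIB_PER_TIB = {
    "Standard": 16,   # MiB/s per provisioned TiB (assumed)
    "Premium": 64,    # (assumed)
    "Ultra": 128,     # (assumed)
}

def volume_throughput_mib(tier: str, quota_tib: float) -> float:
    """Estimated throughput ceiling (MiB/s) for a volume of quota_tib TiB."""
    return THROUGHPUT_MIB_PER_TIB[tier] * quota_tib

# Under these assumed figures, a 10 TiB Ultra volume would be capped at
# roughly 1,280 MiB/s.
print(volume_throughput_mib("Ultra", 10))
```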

We continue to add features and regions and listen to our customers to better understand what they need to migrate their workloads to Azure. We heard loud and clear from VMware customers that Azure NetApp Files was exactly what they needed to make the move to the cloud.

Fully integrated with Azure VMware Solution

But we didn’t build a siloed solution that works only with Azure VMware Solution. We built the most powerful file storage solution in the public cloud to work seamlessly with other Azure services. Now we have extended Azure NetApp Files to work perfectly with Azure VMware Solution to meet the needs of VMware customers.

Get started today

On Azure VMware Solution you can now scale storage independently of compute and gain the performance, scalability, reliability, and security you need with Azure NetApp Files for Azure VMware Solution.

Learn more

Sign up for the preview now.
Microsoft documentation for attaching Azure NetApp Files to Azure VMware Solution VMs.
Read the NetApp blog.

Source: Azure

Unlocking innovative at-home patient care solutions with Azure

This post was co-authored by Stuart Bailey, Product Director, Capita Healthcare Decisions

This blog is part of a series in collaboration with our partners and customers leveraging the newly announced Azure Health Data Services. Azure Health Data Services, a platform as a service (PaaS) offering designed exclusively to support Protected Health Information (PHI) in the cloud, is a new way of working with unified data—providing care teams with a platform to support both transactional and analytical workloads from the same data store and enabling cloud computing to transform how we develop and deliver AI across the healthcare ecosystem.

As pressures on the National Health Service (NHS) in the United Kingdom continue to grow, so does the need for safe and effective home health care. Head Home is a remote patient monitoring (RPM) solution that looks to streamline current at-home care for patients and their health and care professionals.

The NHS is currently experiencing the most severe pressures it has faced in its 70-year history, with an already strained system being stretched beyond its limits by the impact of the COVID-19 pandemic.1 In hospitals, the number of general and acute beds available has been declining since 2010,2 and it has been estimated that up to 15 percent of beds are being used by people waiting for care.3 Finding innovative ways to relieve these pressures remains critical in supporting the NHS’ recovery.

To find solutions to this challenge, a key area to address is facilitating more efficient patient discharge and at-home care. Patient surveys have long shown that most older people prefer to receive care at home, and recent research by the University of Oxford has found that this may improve patient outcomes and satisfaction, while simultaneously helping to reduce hospital pressures.4 This approach is known as “hospital-at-home” and its use has been accelerated by the pandemic. Hospital-at-home aims to allow health and care professionals to provide remote monitoring and communication for patients from their own homes, whilst helping healthcare facilities to free up vital resources. However, while wearable devices such as temperature monitors, pulse monitors, blood pressure monitors, and even heart monitors are readily available, solutions that enable them to be monitored remotely are less common, and the hospital-at-home approach is currently reliant on expensive, hard-to-maintain devices and bespoke manufacturer software.

This is largely due to data still being stored on-premises in a siloed healthcare industry, and a lack of interoperability among these on-premises systems. Disparate datasets are collected from a variety of wearables without a unified solution to manage them, making it difficult for providers to access patient data collected from wearable devices at home in a timely fashion. This results in delays in patient monitoring and formulating treatment plans when patients are out of the hospital, making monitoring and treating patients remotely unachievable.

To help solve this problem, Microsoft released Azure Health Data Services, a suite of purpose-built technologies for protected health information (PHI) in the cloud built on the global open standards Fast Healthcare Interoperability Resources (FHIR)® and Digital Imaging and Communications in Medicine (DICOM). This solution enables providers to unify and manage data on a trusted cloud, making it possible to standardize diverse data streams such as clinical, imaging, device, and unstructured data using FHIR, DICOM, and MedTech services. Data collected from various wearables and in different formats can be ingested and persisted in Azure Health Data Services, allowing data to be managed in one place and reducing the need for numerous manufacturers’ software. It enables providers to view the standardized data in context with other clinical datasets, supporting the goal of moving from reactive care to proactive care while reducing cost, empowering a more effective and personalized approach to at-home care.
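To make the standardization step concrete, here is a minimal sketch of a FHIR R4 Observation resource for a heart-rate reading from a home wearable, the kind of normalized record such a pipeline could persist. The patient reference and timestamp are illustrative placeholders; the LOINC and UCUM codes shown are the standard ones for heart rate.

```python
import json

# Minimal FHIR R4 Observation for a heart-rate reading from a wearable.
# The subject reference and timestamp are illustrative; the coding systems
# (LOINC 8867-4, UCUM "/min") are the standard codes for heart rate.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-patient"},  # hypothetical patient
    "effectiveDateTime": "2022-06-01T09:30:00Z",          # illustrative timestamp
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

print(json.dumps(observation, indent=2))
```

Once persisted in the FHIR service, a record like this can be queried alongside clinical data from other sources, which is what enables the unified view described above.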

Expanding healthcare support with Azure Health Data Services

Aiming to enable the hospital-at-home approach to better support patients and help relieve existing pressures on the NHS, Capita Healthcare Decisions leverages Microsoft Azure Health Data Services, which enables healthcare professionals and patients to manage patient data in the cloud. Head Home by Capita is a remote patient monitoring (RPM) solution that enables the health indicators of patients to be monitored by health and care professionals from within their own homes. Through Head Home, personalized health indicator thresholds can be set, ensuring that if there is a change in the condition of a patient, their care team is notified over the provider’s preferred interface (SMS or push notification). This allows health and care professionals to react in an appropriate and timely manner, whilst reassuring patients that, should their wellbeing change, they will be cared for. Head Home can currently support the monitoring of blood oxygen level, heart rate, body temperature, respiratory rate, blood pressure, and single-lead ECG, ensuring a range of key health indicators can be effectively monitored in a hospital-at-home setting.
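The personalized-threshold pattern described above can be sketched in a few lines. This is purely illustrative (not Capita’s implementation): each patient has their own acceptable range per indicator, and a reading outside that range produces a notification for the care team. All names and values here are hypothetical.

```python
# Illustrative sketch of a personalized health-indicator threshold check.
# Not Capita's implementation; all thresholds and names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Threshold:
    low: float
    high: float

# Hypothetical per-patient thresholds for two monitored indicators.
patient_thresholds = {
    "heart_rate": Threshold(low=50, high=110),  # beats per minute
    "spo2": Threshold(low=92, high=100),        # blood oxygen saturation, %
}

def check_reading(indicator: str, value: float) -> Optional[str]:
    """Return an alert message if the reading breaches the patient's threshold."""
    t = patient_thresholds[indicator]
    if not (t.low <= value <= t.high):
        return f"ALERT: {indicator}={value} outside [{t.low}, {t.high}]"
    return None  # within range: no notification needed

print(check_reading("spo2", 89))        # breaches the low bound -> alert
print(check_reading("heart_rate", 72))  # within range -> None
```

In a real deployment the alert string would instead be routed to the notification channel the provider has configured, such as SMS or a push notification.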

In addition to the indicator monitoring and warning system, Head Home enables patients to talk to a personal assistant via voice interface to communicate with their care team, ensuring a greater connection for patients receiving care at home. This type of communication between a patient and their health and care professionals has been shown to be critical for recovery, helping to develop trust and transparency during the care process. This verbal communication is recorded in the Head Home dashboard, alongside notes from patient appointments and check-ins, helping to improve clinical documentation and efficiency.

The hospital-at-home model provides faster access to appropriate and targeted care in people’s homes and introduces the right digital infrastructure to deliver system benefits, as well as helping to tackle the elective care backlog. With Head Home, Capita Healthcare Decisions has pioneered a digital solution to enable clinicians to support patient recovery at home by providing a better-connected, real-time monitoring solution whilst reducing the need for healthcare delivery resources.

As existing providers of clinical decision support software, Capita Healthcare Decisions utilizes Azure Health Data Services to persist health data in the cloud. This enables rapid exchange of data backed by a PaaS offering on a trusted cloud. In addition, Azure Health Data Services allows Capita Healthcare Decisions to ingest patient data from wearables providers (HealthKit and Google Fit) and device aggregators for persistence and analysis, opening new opportunities to gain insights in research and improve patient care. By integrating this with a variety of Internet of Medical Things (IoMT) devices and making use of personal assistant voice interfaces, Capita Healthcare Decisions aims to deliver an accessible and easy-to-use service that can provide the monitoring required to keep patients safe during their care at home. By using the FHIR standard, Capita Healthcare Decisions is leveraging the power of an open standard that will evolve with the science of healthcare and enable interoperability with data flows in existing healthcare systems. The interfaces that sit between the monitoring devices and Capita Healthcare Decisions’ intuitive monitoring platform enable these readily available, relatively low-cost devices to be easily deployed at scale. By providing these complementary functions, Head Home is helping to deliver a more viable hospital-at-home environment.

At a time when NHS resources are being stretched to new levels, innovative technology platforms such as Head Home offer a much-needed solution. Leveraging Microsoft Azure Health Data Services, Capita Healthcare Decisions offers an agile way to monitor the health of patients remotely, ensuring that at-home care can be delivered safely and effectively, with the associated potential to improve outcomes and patient satisfaction while reducing healthcare delivery costs.

Do more with your data with Microsoft Cloud for Healthcare

With Azure Health Data Services, health organizations can transform their patient experience, discover new insights with the power of machine learning and AI, and manage PHI data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.

We look forward to being your partner as you build the future of health.

Learn more about Azure Health Data Services.
Learn more about Capita Healthcare Decisions, or email healthcaredecisions@capita.com.
Read our recent blog, “Microsoft launches Azure Health Data Services to unify health data and power AI in the cloud.”
Learn more about Microsoft Cloud for Healthcare.

References

®FHIR is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.

1An NHS under pressure. (2021). The British Medical Association Is the Trade Union and Professional Body for Doctors in the UK.

2The number of hospital beds. (2021, November 5). The King’s Fund.

3Pollock, B. I. (2021, November 18). NHS: Up to 15 percent of hospital beds used by people waiting for care. BBC News.

4The University of Oxford. (n.d.). Study finds that caring for older people at home can be just as good—or even better—than hospital care. www.ox.ac.uk.
Source: Azure

Virtual desktop infrastructure security best practices

It’s no longer a matter of organizations deciding whether to embrace remote and hybrid work, but of finding the best way to do so. A recent study showed most employees are happier having the option to work from home, and 80 percent say they’re as productive or more productive when they do. One of the most popular options for organizations that want to offer remote work is virtual desktop infrastructure, or VDI.

What is VDI?

Virtual desktop infrastructure (VDI) is an IT infrastructure that virtualizes desktops to give employees access to enterprise data and applications from anywhere, on most personal and professional devices. Organizations host applications and data on servers and, through VDI, enable their employees to work remotely via remote desktops. VDI is popular for enabling remote work because, with the right configuration, it’s highly secure and relatively inexpensive compared to on-premises options.

What are some of the security benefits of cloud-based VDI migration?

Migrating to a cloud-based VDI solution allows organizations to take advantage of built-in security features that mitigate or eliminate the risks associated with traditional desktop virtualization. Azure Virtual Desktop in combination with the Azure public cloud, for example, offers comprehensive security features, like Azure Sentinel and Microsoft Defender for Endpoint, that are built in before deployment. This enables an organization to follow critical VDI security best practices from the start of its virtualization journey.

What are some VDI security best practices?

Conditional access applies access controls based on signals like group membership, type of device, and IP address to enforce policies.
Multifactor authentication requires that users consistently verify their identities to access sensitive data.
Audit logs are used to gain insight into user and admin activities.
Endpoint security like Microsoft Defender for Endpoint offers built-in protection against malware and other advanced threats for all your endpoints.
Application restriction mitigates security threats by limiting which applications certain users are allowed to access, using software like Windows Defender Application Control.
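To make the conditional access idea from the list above concrete, here is a minimal, purely illustrative sketch of how a policy engine might combine signals (group membership, device compliance, network location) into an access decision. This is not the Azure AD conditional access API; all group names, network ranges, and decision labels are hypothetical.

```python
# Illustrative conditional-access decision logic. Not the Azure AD API;
# all group names, networks, and actions here are hypothetical.
ALLOWED_GROUPS = {"vdi-users", "it-admins"}  # hypothetical permitted groups

def evaluate_access(groups, device_compliant, on_trusted_network):
    """Return the action a policy engine might take for these signals."""
    if not ALLOWED_GROUPS & set(groups):
        return "block"            # user is not in any permitted group
    if not device_compliant:
        return "block"            # non-compliant device is denied outright
    if not on_trusted_network:
        return "require_mfa"      # off-network access triggers step-up auth
    return "allow"                # all signals satisfied

print(evaluate_access(["vdi-users"], True, False))  # require_mfa
print(evaluate_access(["guests"], True, True))      # block
```

Real policy engines evaluate many more signals (sign-in risk, location, client app), but the shape of the decision, combining signals into allow, block, or step-up outcomes, is the same.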

Following these VDI security practices helps organizations secure user identities, data, and access to their VDI. They’re the reason a comprehensive VDI solution, like Azure Virtual Desktop, doesn’t just mitigate security risks associated with virtualization, but increases overall security.

Of course, there are numerous factors and potential issues for an organization to consider in choosing to implement a VDI solution. Most of these issues stem from hosting virtual desktops on-premises, as traditional VDIs do.

What are some concerns for an organization considering a traditional VDI?

First, there’s the cost. Traditionally, implementing VDI is an involved, complicated process. It often requires employees with specialized roles to deploy, manage, and scale an organization’s VDI as needed. Cloud-based VDI solutions like Azure Virtual Desktop are managed and scaled by the cloud VDI solution provider themselves, which lowers cost considerably.

Second, and most importantly, there are the security concerns that come with adopting a hybrid model through traditional VDI. After deploying a VDI, IT managers must consider the security of both home and corporate networks when developing security protocols. Employees using different types of devices to access data also opens networks to new vulnerabilities, as these devices can be more susceptible to cyberattacks. Most of these vulnerabilities are eliminated when you use a cloud-based VDI with built-in security features and endpoint protection.

How do you choose a secure VDI for your organization?

Meeting these implementation and security challenges often poses a barrier to organizations fully embracing a hybrid work model. IT decision makers must consider the challenges along with the benefits of enabling remote work when choosing a VDI solution for their organization. Adopting a comprehensive, cloud-based virtual desktop solution, like Azure Virtual Desktop, mitigates and eliminates many of these security concerns.

Also referred to as desktop-as-a-service, cloud-based VDI solutions host their virtual desktops in the cloud using a subscription model instead of on-premises, locally operated and maintained servers. Not only does this lower the cost and time of implementing VDI by decreasing the labor needed to maintain it, but it also ensures that the cloud-based virtual desktop solution provider shares responsibility with its customers for security. With the right provider, this can prove to be an enormous benefit.

Learn more

To explore the possibility of implementing Azure Virtual Desktop at your organization, read the 17-page e-book, Delivering Secure Remote and Hybrid Work with Azure Virtual Desktop, to learn more about how to:

Increase your end-to-end security through VDI migration.
Implement and maintain VDI security best practices.
Scale resources on demand for your employees without the limitations of on-premises data centers using Azure Virtual Desktop.
Lower your costs by running multiple virtual desktop user sessions on a single virtual machine.

Source: Azure

Start skilling on Azure with these helpful guides

We are excited to introduce Azure Skills Navigator, a new learning resource designed especially for those who are new to Azure and want to learn more. Azure Skills Navigator is our very own ramp-up guide intended to help you develop a strong foundation in cloud technologies as you begin to explore Azure.

These downloadable Azure Skills Navigator guides offer a variety of resources to help you build your skills and knowledge of Azure. Each guide features carefully selected digital training, learning courses, videos, documents, certifications, and more. We understand how important it is in today’s market to stay ahead of the tech curve, and there is high demand for professionals skilled in cloud technologies. We have hand-picked a selection of resources that will help you develop a strong foundation in Microsoft Azure, allowing you to build and explore today. After you’ve mastered the content, we will help you navigate our intermediate and advanced content.

We have guides tailored for a number of roles—System Administrators, Solution Architects, Developers, Data Engineers, and Data Scientists. Given the high demand for these guides, we will be launching more for a number of new roles in the coming months. These role-based guides map out an itinerary for deepening your knowledge of Azure, helping you build a strong foundation for cloud computing in a way that is tailored and personalized for you. You can travel at your own pace, and then continue your Azure exploration with ongoing learning resources, from blog updates and videos to events where you can connect with technical communities. These guides are just the beginning; Microsoft Learn will be your trusted partner as you progress through your learning journey, and there are numerous options for continuing your training and certification beyond these guides as well.

Explore the guides by role below to get started

Azure Skills Navigator for System Administrators: A guide for deepening your knowledge of fundamental concepts of cloud computing and Azure core infrastructure services, management, monitoring, security, and compliance.
Azure Skills Navigator for Solution Architects: A guide for deepening your knowledge of fundamental concepts of Microsoft Azure, core solutions, solution design principles, including security and compliance, and deployment tools and methods to help bring your solution architectures to life.
Azure Skills Navigator for Developers: A guide to building your skills in architecting and deploying apps in the cloud, and in maintaining and instrumenting those apps once deployed. Our guide provides an overview of key concepts across Java, .NET, Node.js, and Python, topics crucial to establishing a strong foundation on Microsoft Azure.
Azure AI Learning Journey for Developers: A guide to achieving artificial intelligence expertise on Azure AI, creating the next generation of applications, and preparing for Azure AI Fundamental certification.
Azure Data Engineer Learning Journey for Data Engineers: A guide to achieving expertise in data engineering; explore how Azure Synapse enables you to leverage all your data to unlock powerful insights.
Azure Data Scientist Learning Journey for Data Scientists: A guide to achieving Machine Learning expertise on Azure; learn how to collaborate and build models faster with the latest machine learning tools and frameworks.

Learn more

On-demand Intro to Tech session at Microsoft Build: The New Developer’s Guide to the Cloud hosted by Christoffer Noring, Nitya Narasimhan, and Someleze Diko.
GitHub repo containing all the resources and space for you to share suggestions for improvement.
Blog announcement for Azure Infrastructure guides.
Blog announcement for developers on Azure.
Blog announcement for Azure Data and AI.

Quelle: Azure