Modernize with Azure Migrate

With the pandemic mostly behind us, several large economies have reopened in some shape or form, despite the uneven supply of goods and services and higher-than-usual energy costs. Higher energy costs, and the resulting increase in the cost of doing business, have led to a tighter economic outlook. Coupled with long lead times for required parts and continued remote work, this has made datacenter management harder and costlier than it has been. Yet maintaining and growing any business requires additional information technology (IT) resources, so there is an increased need for IT solutions that maintain business continuity and sustain innovation. Hyperscalers such as Microsoft Azure fill this need and are less affected by these constraints due to economies of scale. Further, the cloud consumption model allows customers to quickly scale resources up or down to support agile businesses. This is why public cloud spend continues to accelerate, and the top cloud initiatives for organizations of all sizes are migrating more workloads, optimizing existing use, and modernizing through platform as a service (PaaS) or software as a service (SaaS)1.

Customer requirements

Customers need to stay competitive, on both the technical and business fronts, to ensure continued success. Technical competency requires an agile and innovative IT platform with data analytics that provide insights to differentiate from the competition. Ideally, such an innovative platform would also be available at a lower cost. Modernizing existing IT infrastructure, applications, and data to PaaS/SaaS models in the cloud delivers on all these requirements, leading to a higher return on investment (ROI) for the customer.

The higher efficiency and lower cost that come with adopting modern cloud-native architectures, such as PaaS and SaaS, also lead to greater flexibility, setting the stage for customers to realize greater value as they progress from IaaS to PaaS and on to SaaS models. Please download our analyst report for details on the options and value of application modernization in Azure.

Microsoft’s commitment to modernization

This week at Microsoft Inspire, we are highlighting our commitment to modernization with integrated, at-scale modernization of ASP.NET applications to Azure App Service. Also in preview is Azure Migrate’s support for discovery and assessment of SQL Server running in Hyper-V and physical environments, and in the IaaS services of other public clouds. Please see our tech community blog for more details on these and other Azure Migrate features available for Linux and Windows workloads.

Enabling deeper integration with our ISV partners

Azure Migrate’s extensible framework is ideal for deeper integration of first-party features to drive automation, while also leveraging third-party tools. Here is a brief view of partner capabilities that can be added to this flexible framework:

Over the years, enterprises have built and expanded custom applications, which require modernization to better support fast-changing business needs. See how Microsoft and CAST partner by combining Azure Migrate and software intelligence produced by CAST technology to automate migration and modernization under the Azure Migration and Modernization Program (AMMP).
Operability of your cloud infrastructure and workloads is key to cloud adoption success and Azure landing zones provide prescriptive guidance to set a well architected foundation for your Azure infrastructure. In partnership with HashiCorp and our Terraform Azure community, we now have the reference implementation for deploying and managing Azure resources at enterprise scale.

Learn more

Attend this Microsoft Inspire on-demand session to learn more about cloud migration and modernization. Check out this FastTrack link for moving to Azure efficiently and get best practice guidance from the Azure migration and modernization center (AMMC). AMMP is now one comprehensive program for all migration and modernization needs of our customers. Learn more and join AMMP today.

Sources: 

1. Trends in Cloud Computing: 2022 State of the Cloud Report, Flexera.com.
Source: Azure

Accelerating capital markets workloads for Murex on Azure

The financial services industry is constantly evolving to meet customer and regulatory demands and faces a variety of challenges spanning people, processes, and technology. Financial institutions (FIs) need to continuously accelerate technology adoption and innovation while maintaining scale, quality, speed, and safety. Simultaneously, they need to handle evolving regulatory frameworks, manage risk, digitally transform, process financial transaction volumes, and accelerate cost reductions and restructuring efforts.

Murex is a leading global software provider of trading, risk management, processing operations, and post-trade solutions for capital markets. FIs around the world deploy Murex’s MX.3 platform to better manage risk, accelerate transformation, and simplify compliance while driving revenue growth.

Murex MX.3 on Azure

Murex MX.3 has been certified for Microsoft Azure since version 3.1.35. We have been collaborating with Murex and global strategic partners like Accenture and DXC to provide Murex customers with a simple way to create and scale MX.3 infrastructure and achieve agility in business transformation. As of version 3.1.48, SQL Server is supported, and customers can now benefit from its performance, scalability, resilience, and cost savings. With the SQL Server IaaS Extension, Murex customers can run SQL Server virtual machines (VMs) in Azure with PaaS capabilities for Windows OS (with the automated patching setting disabled to prevent the installation of a cumulative update that may not yet be supported by MX.3).

Architecture

Murex customers can now refer to this architecture to implement the MX.3 application on Azure. Azure enables a secure, reliable, and efficient environment, significantly reducing the infrastructure cost needed to operate the MX.3 environment while providing scalability and high performance. Customers running MX.3 on Azure can take advantage of the multilayered security provided by Microsoft across physical datacenters, infrastructure, and operations in Azure. They can benefit from the Compliance Program, which helps accelerate cloud adoption with proactive compliance assurance for highly critical and regulated workloads. Customers can maximize their existing on-premises investments using an effective hybrid approach. Azure provides a holistic, seamless, and more secure approach to innovation across customers’ on-premises, multicloud, and edge environments.

The architecture is designed to provide high availability and disaster recovery. Murex customers can achieve threat intelligence and traffic control using Azure Firewall, cost optimization using Reserved Instances and VM scale sets, and high storage throughput using Azure NetApp Files Ultra Storage.

“With the deployment of large scale—originally specialized platform-based—Murex workloads, Azure NetApp Files has proven to deliver the ideal Azure landing zone for storage-performance intensive, mission-critical enterprise applications and to live up to its promise to Migrate the Un-migratable," says Geert van Teylingen, Azure NetApp Files Principal Product Manager from NetApp.

Customers running Murex on Azure

Customers around the world are migrating the Murex platform from on-premises to Azure.

ABN AMRO has moved their MX.3 trading and treasury front-to-back-to-risk platform to Azure, achieving flexibility, agility, and improved time to market. ABN AMRO’s journey to Azure progressed from proof of concept to production, with the Murex MX.3 platform now entirely operational on Azure.

“The key focus for us was always to make sure that we could automate most processes while preserving its operational excellence and key features,” says Kees van Duin, IT Integrator at ABN AMRO.

“Thanks to Microsoft, we were able to preserve nearly 90 percent of our original design and move our platform to the cloud, while in-production, as efficiently as possible. We couldn’t be happier with the result,” he continues.

For Pavilion Energy, Upskills helped drive implementation for Murex Trading in Azure, helping reduce the risk of errors, increase the volume of trading activities, and optimize the management of their Murex MX.3 platform environments.

“We have been working on the Murex technology for over 10 years. Implementing the Murex Trading Platform fully on Azure has proven to be the right decision to reduce delivery risk, optimize environment management, and provide sustainable solutions and support to Pavilion Energy,” says Thong Tran, Chief Executive Officer (CEO) of Upskills.

Strategic partners helping accelerate Murex workloads

Murex customers can modernize MX.3 workloads, reduce time to market and operational costs, and accelerate delivery by leveraging accelerators, scripts, and blueprints from our partners, Accenture and DXC.

Accenture and Microsoft have decades of experience partnering with each other and building joint solutions that help customers achieve their goals. Leveraging our strategic alliance to better serve our customers, Accenture has designed and created specific accelerators, tools, and methodologies for MX.3 on Azure that could help organizations develop richer DevOps and become more agile while controlling costs.

Luxoft, a DXC Technology company, has worked with Microsoft as a global strategic partner for more than 30 years and with Murex as a top-tier alliance partner for more than 13 years, helping modernize solutions that connect people, data, and processes to deliver tangible business results. DXC has developed execution frameworks that adopt market best practices to accelerate MX.3 cloud migrations to Azure and minimize their risks.

Keeping pace with the changing regulatory and compliance constraints, financial innovation, computation complexity, and cyber threats is essential for FIs. FIs around the world are relying on Murex MX.3 to accelerate transformation and drive growth and innovation while complying with complex regulations. Customers are using Azure to enhance business agility and operation efficiency, reduce risk and total cost of ownership, and achieve scalability and robustness.

Additional resources

Microsoft and Murex team to help FIs move to Azure
Murex MX.3 architecture
ABN AMRO digital transformation journey with Murex


MLOps Blog Series Part 4: Testing security of secure machine learning systems using MLOps

The growing adoption of data-driven and machine learning–based solutions is driving the need for businesses to handle growing workloads, exposing them to additional complexity and vulnerabilities.

Cybersecurity is the biggest risk for AI developers and adopters. According to a survey released by Deloitte in July 2020, 62 percent of adopters saw cybersecurity risks as a significant or extreme threat, but only 39 percent said they felt prepared to address those risks.

In Figure 1, we can observe possible attacks on a machine learning system (in the training and inference stages).

Figure 1: Vulnerabilities of a machine learning system.

To know more about how these attacks are carried out, check out the Engineering MLOps book. Here are some key approaches and tests for securing your machine learning systems against these attacks:

Homomorphic encryption

Homomorphic encryption is a type of encryption that allows direct calculations on encrypted data. It ensures that the decrypted output is identical to the result obtained using unencrypted inputs.

For example, decrypt(encrypt(x) + encrypt(y)) = x + y.
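
To make this concrete, here is a toy sketch of the Paillier cryptosystem, a well-known additively homomorphic scheme, in Python. In Paillier, multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The primes and parameters below are illustrative only; real deployments use vetted libraries (such as Microsoft SEAL) with large keys.

```python
# Toy Paillier cryptosystem (additively homomorphic). Illustrative only:
# tiny hard-coded primes, no hardening. Requires Python 3.9+ for math.lcm
# and pow(x, -1, m).
import math
import random

p, q = 293, 433                        # demo primes; never use sizes like this
n = p * q
n_sq = n * n
g = n + 1                              # standard generator choice
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)  # inverse of L(g^lambda mod n^2) mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
x, y = 20, 22
assert decrypt((encrypt(x) * encrypt(y)) % n_sq) == x + y
```

Note that the "addition" on ciphertexts is realized here as modular multiplication; the decrypted result still equals the sum of the unencrypted inputs.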

Privacy by design

Privacy by design is a philosophy or approach for embedding privacy, fairness, and transparency into the design of information technology, networked infrastructure, and business practices. The concept brings an extensive understanding of the principles needed to achieve privacy, fairness, and transparency. This approach helps prevent possible data breaches and attacks.

Figure 2: Privacy by design for machine learning systems.

Figure 2 depicts some core foundations to consider when building a privacy by design–driven machine learning system. Let’s reflect on some of these key areas:

Maintaining strong access control is fundamental.
Utilizing robust de-identification techniques (in other words, pseudonymization) for personal identifiers, along with data aggregation and encryption approaches, is critical.
Securing personally identifiable information and data minimization are crucial. This involves collecting and processing the smallest amounts of data possible in terms of the personal identifiers associated with the data.
Understanding, documenting, and displaying data as it travels from data sources to consumers is known as data lineage tracking. This covers all of the data's changes along the journey, including how the data was converted, what changed, and why. In a data analytics process, data lineage provides visibility while considerably simplifying the ability to track data breaches, mistakes, and fundamental causes.
Explaining and justifying automated decisions when you need to are vital for compliance and fairness. High explainability mechanisms are required to interpret automated decisions.
Avoiding quasi-identifiers and non-unique identifiers (for example, gender, postcode, occupation, or languages spoken) is best practice, as they can be used to re-identify persons when combined.
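
As an illustration of pseudonymization, a keyed hash can replace a direct identifier while still allowing records to be joined. This is a minimal sketch; the helper name is hypothetical, and in practice the key would live in a secret store, not in code.

```python
import hashlib
import hmac
import secrets

# Hypothetical pseudonymization helper: replace a direct identifier with a
# keyed hash so records remain joinable without exposing the raw value.
SECRET_KEY = secrets.token_bytes(32)  # in practice, fetch from a key vault

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user": "alice@example.com", "purchase": "laptop"}
safe_record = {**record, "user": pseudonymize(record["user"])}
```

The same identifier always maps to the same token under a given key, so aggregation still works; without the key, the token cannot be reversed to the original value.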

As artificial intelligence is fast evolving, it is critical to incorporate privacy and proper technological and organizational safeguards into the process so that privacy concerns do not stifle its progress but instead lead to beneficial outcomes.

Real-time monitoring for security

Real-time monitoring of data (inputs and outputs) can be used to defend against backdoor or adversarial attacks by:

Monitoring data (inputs and outputs).
Managing access efficiently.
Monitoring telemetry data.

One key solution is to monitor inputs during training or testing. To sanitize (pre-process, decrypt, transform, and so on) the model input data, autoencoders or other classifiers can be used to monitor the integrity of the input data. Efficient monitoring of access management (who gets access, and when and where access is obtained) and of telemetry data can surface quasi-identifiers and help prevent suspicious attacks.
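
A linear autoencoder (here, PCA via SVD) can serve as such an integrity monitor: inputs whose reconstruction error far exceeds what was seen on clean training data are flagged as suspicious. The following is a minimal numpy sketch; the synthetic data, dimensions, and 99th-percentile threshold are all illustrative choices.

```python
# Sketch of input-integrity monitoring with a linear autoencoder (PCA).
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))           # "clean" training inputs

# Fit PCA via SVD; keeping 4 of 8 components acts as an encode/decode pair.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:4]

def reconstruction_error(x):
    z = (x - mean) @ components.T           # encode
    x_hat = z @ components + mean           # decode
    return float(np.linalg.norm(x - x_hat))

# Flag inputs whose error exceeds the 99th percentile seen on clean data.
threshold = np.percentile([reconstruction_error(x) for x in train], 99)

def is_suspicious(x):
    return reconstruction_error(x) > threshold

clean = rng.normal(size=8)
poisoned = clean + 25.0                     # grossly out-of-distribution input
```

A real deployment would use a nonlinear autoencoder trained on production traffic, but the gating logic is the same: sanitize or reject inputs that the model of "normal" data cannot reconstruct.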

Learn more

For further details and to learn about hands-on implementation, check out the Engineering MLOps book, or learn how to build and deploy a model in Azure Machine Learning using MLOps in the Get Time to Value with MLOps Best Practices on-demand webinar. Also, check out our recently announced blog about solution accelerators (MLOps v2) to simplify your MLOps workstream in Azure Machine Learning.

How Microsoft Azure Cross-region Load Balancer helps create region redundancy and low latency

In this blog, we’ll walk through Microsoft Azure Cross-region Load Balancer (also known as the Global tier of Standard Load Balancer) through a case study with a retail customer. By incorporating Azure Cross-region Load Balancer into their end-to-end architecture, the customer was able to achieve region redundancy, high availability, and low latency for their applications, with a quick turnaround time for scaling events, while retaining their IPs for TCP and UDP connections. A DNS-based global load-balancing solution was considered but not adopted due to the long failover times caused by time-to-live (TTL) values not being honored.

Low latency with geo-proximity-based routing algorithm

Figure 1: With Azure Load Balancer all traffic will be routed to a random backend server based on 5-tuple hash.

Figure 2: With Cross-region Load Balancer traffic will be routed to the closest regional deployment.

With the previous setup, all traffic, regardless of source IP location, was first forwarded to the load balancer’s region. This could take several hops across datacenters, introducing additional latency to network requests. With Azure Cross-region Load Balancer’s geo-proximity-based routing, end customers are routed to the closest regional deployment, which dramatically improves latency.
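
The 5-tuple hashing shown in Figure 1 can be sketched as follows (an illustrative Python model, not Azure’s actual implementation): the load balancer hashes source IP, source port, destination IP, destination port, and protocol, so packets belonging to the same flow always reach the same backend.

```python
import hashlib

backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]  # illustrative backend pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    # Hash the full 5-tuple; a given flow always maps to the same backend.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    return backends[int.from_bytes(digest[:8], "big") % len(backends)]

# Same flow, same backend; a different source port may land elsewhere.
flow_a = pick_backend("203.0.113.7", 50432, "20.50.1.1", 443, "TCP")
assert flow_a == pick_backend("203.0.113.7", 50432, "20.50.1.1", 443, "TCP")
```

Because the backend is chosen purely from the flow’s 5-tuple, the distribution looks random across flows while remaining sticky within each flow.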

Automatic failover for disaster recovery

Figure 3: With Standard SKU Load Balancer, when the only regional deployment or the Load Balancer goes down, all traffic can be impacted.

Figure 4: Cross-region Load Balancer ensures seamless failover for disaster recovery.

Even though Standard Load Balancer offers zone redundancy, it is a regional resource. If a regional outage causes the Load Balancer or all the backend servers to become unavailable, traffic cannot be forwarded once it arrives at the Load Balancer frontend, and the website becomes unavailable to end customers. By adding a Cross-region Load Balancer on top of several existing regional deployments, the customer gains region redundancy, which ensures high availability of their application. If web server one goes down, end-customer traffic is re-routed to web server two to ensure no packets are dropped.
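
The failover behavior in Figure 4 can be modeled simply: the global tier tracks health per region and routes each client to the nearest healthy regional frontend. A hypothetical sketch:

```python
# Hypothetical model of cross-region failover: regions are ordered by
# proximity to the client; traffic goes to the first healthy one.
regions = [
    {"name": "westus", "healthy": True},   # closest to this client
    {"name": "eastus", "healthy": True},   # next closest
]

def route(ordered_regions):
    for region in ordered_regions:
        if region["healthy"]:
            return region["name"]
    raise RuntimeError("no healthy region available")

nearest = route(regions)                   # nearest region while healthy
regions[0]["healthy"] = False              # simulate a regional outage
failover = route(regions)                  # traffic fails over automatically
```

In the actual service, health is determined by probes against each regional deployment; this sketch only captures the routing decision.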

Scale up and down with no downtime

Figure 5: Easy scaling when using Microsoft Azure Virtual Machine Scale Sets (VMSS) combined with Cross-region Load Balancer.

Like many other industries, the retail industry faces frequent changes in traffic volume due to seasonality and other spontaneous trends. As a result, the customer’s top concern is to scale up and down in real-time. There are two ways to achieve this today with a Cross-region Load Balancer. One way is to directly add or remove a regional Public Load Balancer behind the Cross-region Load Balancer. Another way is to use Microsoft Azure Virtual Machine Scale Sets with a pre-configured autoscaling policy.

Zero friction for adoption

Azure Load Balancer has been an important part of the customer’s end-to-end architecture for stable connectivity and smart load balancing. By leaving the existing network architecture as is and simply adding a Cross-region Load Balancer on top of the existing load balancer set up, the customer is saved from any additional overhead or friction due to the addition of a Cross-region Load Balancer.

Client IP preservation

Cross-region Load Balancer is a Layer 4 pass-through network load balancer, which ensures that the original IP address of the network packet is preserved. IP preservation allows you to apply logic in the backend server that is specific to the original client IP address.

Next steps

Cross-region Load Balancer is now in preview.

Read our Microsoft Docs page to learn about creating a Cross-region Load Balancer using the Azure portal.

Digital transformation for manufacturers requires additional IT/OT security

While every industry is vulnerable to ransomware attacks, manufacturers are at particular risk. Digitization and automation have helped transform the industry, but they have simultaneously opened new attack vectors within organizations. Manufacturing is now the most targeted industry, having seen a 300 percent increase in cyberattacks in a single year.

Beyond the tremendous growth in attacks, manufacturing companies make an ideal target for hackers due to the high value of the companies themselves, the high costs of unplanned downtime, and the highly visible impact that downtime has on consumers’ daily lives. With the risks so high, an enterprise-level solution that provides visibility and protection like Microsoft Defender for IoT is essential.

Visibility is the first step to network protection

The number of connected industrial control system (ICS)/operational technology (OT) devices in manufacturing facilities continues to grow. The benefits for the operations side of the house are clear, but the lack of visibility into them poses serious security risks for chief information security officers (CISOs).

Manufacturers often have no way to identify and monitor what all their connected devices are doing and with whom or what they are communicating. Worse, all too often they lack even a simple inventory of all the connected devices they have in their facilities. In case of an attack, the lack of visibility means that they have no way of tracing the attack vector the hacker took, making them vulnerable to a second wave and delaying recovery and remediation.

Continuous monitoring without impacting productivity

Microsoft Defender for IoT not only creates asset maps within minutes of being turned on, but it also provides continuous monitoring of every device in every facility around the world. Microsoft’s Section 52 has access to tens of trillions of identity, endpoint, and other signals each day. The threat intelligence from this specialized IoT and ICS research team produces high-impact insights that help keep manufacturers safe from attacks.

The agentless nature of the system protects companies without impacting production, no matter the topology of the network or the regulations governing the industry. And, with round-the-clock protection, Microsoft Defender for IoT can alert the SecOps team about an intrusion any time, any place.

Security for networks in an age of IT and OT convergence

As their digital transformations have progressed, manufacturers have seen their IT and OT environments converge. The air gap between them that ensured production would continue even if IT assets were taken offline is increasingly a thing of the past. With these trendlines, forward-thinking CISOs and their boards are taking proactive steps to protect the entire company from cyber-physical attacks that could have huge costs to safety, production, reputation, and the bottom line.

Fortunately, Microsoft Defender for IoT can usually be deployed in less than a single day per facility and works right out of the box for large enterprises and small, niche facilities. With it, defenders of OT networks have a powerful new tool at their disposal to help keep hackers out and people, production, and profits safe.

For more information on how Microsoft Defender for IoT can help protect your business, visit Microsoft Defender for IoT | Microsoft Azure today.

What is desktop as a service (DaaS) and how can it help your organization?

Today’s workers want the freedom to respond to email and collaborate with colleagues from anywhere, on any device—whether they’re working at their kitchen table, at the airport waiting for their flight to board, or in the carpool line waiting for their kids to get out of school. The pandemic proved that remote teams could succeed, no matter where they worked and how far-flung they were.

Even so, many companies are still scrambling to accommodate the technological needs of their hybrid and remote workers. Desktop as a service, sometimes known by the acronym DaaS, can help.

What is desktop as a service (DaaS)?

DaaS is a high-performing, secure, cost-effective type of desktop virtualization. DaaS frees businesses from tethering their computer operating systems and productivity software to any physical hardware. Instead, businesses can use DaaS to access virtual desktops over the internet from a cloud provider. Cloud providers that offer this service distribute and manage virtual desktops from their own datacenters. 

DaaS vs. on-premises

DaaS solutions differ from on-premises software in a number of ways, most notably:

Pricing. With DaaS, companies can avoid making advance purchases of hardware that they anticipate their employees needing, such as expensive desktops and laptops. Instead, companies pay cloud providers only for the data, resources, and services that they use.

Scalability. Cloud providers offer companies the freedom to use any number of desktops on a fluctuating basis. This gives companies instant access to the precise number of desktops they need, whenever and wherever they need them.

Management. Cloud providers offering DaaS conduct maintenance, data storage, updates, backup, and other desktop management for companies that outsource these solutions. DaaS providers often manage their customers’ desktops, applications, and security as well.

What are the benefits of DaaS?

The financial, performance, and administrative benefits of using DaaS are numerous. Let’s look at some of the biggest reasons businesses use this type of desktop virtualization.

Enables remote work. The rise of hybrid and remote workplaces calls for a different approach to accessing applications and data. With DaaS, IT teams can easily move data between different platforms and users can easily access the data they need from multiple machines, no matter where they work.

Supports BYOD. Besides freeing employees from physical offices, DaaS can free employees from solely working on company-issued devices or with one particular device. With DaaS, IT teams can more easily support bring your own device, or BYOD, policies that let employees work on their own phones, tablets, and laptops.

Simplifies desktop management. For IT teams, outsourcing the deployment, configuration, and management of virtual desktops helps reduce the administrative load. The ability to quickly scale up or down the use of desktops, applications, and data based on user need also helps to ease IT duties.

Helps increase security. DaaS poses fewer security risks because the data resides in the cloud provider’s datacenter, not on the laptops, tablets, and phones that employees use. If a computer or device is lost or stolen, it can easily be disconnected from the cloud service.

Reduces IT costs. DaaS solutions save businesses money by shifting IT costs from traditional on-premises hardware and software purchased up front and in bulk to cloud-based services and desktops purchased as needed. DaaS can run on devices that require far less computing power than a standard laptop or desktop machine, which helps companies save money. Allowing employees to use their own devices also helps save on hardware costs, as does reducing the workload of IT teams.

Extends the life of legacy machines. Companies that lack the immediate funds to upgrade all of their outdated machines can use DaaS to install a newer operating system on them. Serving the newer operating system from the cloud is a more affordable prospect than replacing an entire fleet of on-premises equipment all at once.

Real-world uses for DaaS

Cloud providers usually offer two flavors of DaaS, persistent desktop and nonpersistent desktop:

Persistent desktop offers the greatest degree of application compatibility and personalization and is necessary for users who require elevated permissions. It usually costs more per user than a nonpersistent desktop. A persistent desktop is a good fit for developers and IT professionals.
Nonpersistent desktop offers the lowest-cost option by separating the personalization layer from the underlying operating system. This enables any user to log on to any virtual machine (VM) and maintain a personalized environment. It is a good fit for knowledge workers and task workers.

We’ve already looked at how DaaS benefits remote and hybrid workforces, BYOD programs, and companies looking to optimize their IT assets and costs. But there are many other business uses for DaaS, including:

Modernizing call centers. Organizations with shift workers who require the same software to do task-based work can optimize IT resources by using nonpersistent desktops and remote applications.
Accelerating deployment and decommissioning. Nonpersistent desktops can help seasonal businesses that routinely undergo staffing fluctuations reduce the time and costs associated with deploying and decommissioning desktop users.
Granting contractors and partners secure data access. Companies can increase the login security of their contractors, vendors, and business partners by enabling them to work on virtual desktops from their own devices.
Ensuring business continuity. Companies can help safeguard their data against natural disasters and other threats to daily operations by outsourcing desktop management to cloud providers that offer airtight data protection at remote datacenters.
Increasing sustainability. By using cloud-based virtual desktops to reduce the amount of hardware used onsite, businesses can decrease their power consumption and electronic waste, thus reducing their environmental impact.

Explore the flexibility of Azure Virtual Desktop

Azure Virtual Desktop is a desktop and application solution that enables your remote workforce to stay productive regardless of location or device—all while being secure, scalable, and cost-effective. With Azure Virtual Desktop, you can:

Deliver Windows 10 and Windows 11 desktops virtually anywhere. Give employees the only virtual desktop solution that’s fully optimized for Windows 10, Windows 11, and Microsoft 365 with multisession capabilities—no matter what device they’re using, no matter where they’re using it.

Keep your applications and data secure and compliant. Use the built-in, reliable security features of Azure to stay ahead of potential threats and take remedial action against breaches.

Simplify deployment and management. The Azure portal enables you to configure your network settings, add users, deploy desktops and applications, and enable security with just a few clicks. Citrix and VMware customers also can streamline the delivery of virtual desktops and applications with Azure.

Reduce costs with multisession and existing licenses. Optimize costs with the eligible Microsoft 365 or Windows licenses that you already have. Use Windows 10 and Windows 11 multisession support to reduce infrastructure costs. Plus, take advantage of flexible, consumption-based pricing to pay for only what you use.

To explore how to get started with Azure Virtual Desktop, read the Quickstart Guide to Azure Virtual Desktop. In it, you’ll find:

Guidance on planning a successful deployment of Azure Virtual Desktop.
Steps to set up and optimize your virtual desktops with just a few clicks.
Best practices, recommendations, and troubleshooting tips.

If you’d like to continue your exploration of Azure:

Try Azure Virtual Desktop free.
Get started with 12 months of free services.


Choose the right size for your workload with NVads A10 v5 virtual machines, now generally available

Visualization workloads entail a wide range of use cases: from computer-aided design (CAD), to virtual desktops, to high-end simulations. Traditionally, when running these graphics-heavy visualization workloads in the cloud, customers have been limited to purchasing virtual machines (VMs) with full GPUs, which increased costs and limited flexibility. So, in 2019, we introduced the first GPU-partitioned (GPU-P) virtual machine offering in the cloud. Today, your options just got wider with the general availability of NVads A10 v5 GPU-accelerated virtual machines, now available in the South Central US, West US 2, West US 3, West Europe, and North Europe regions. Azure is the first public cloud to offer GPU partitioning (GPU-P) on NVIDIA GPUs.

NVads A10 v5 virtual machines feature NVIDIA A10 Tensor Core GPUs, up to 72 AMD EPYC™ 74F3 vCPUs with clock frequencies up to 4.0 GHz, 880 GB of RAM, 256 MB of L3 cache, and simultaneous multithreading (SMT).

Pay-as-you-go, one-year and three-year Azure Reserved Instances, and Spot virtual machines pricing for Windows and Linux deployments are now available.

Flexible and affordable NVIDIA GPU-powered workstations in the cloud

Many enterprises today use NVIDIA vGPU technology on-premises to create virtual GPUs that can be shared across multiple virtual machines. We are always innovating to provide cloud infrastructure that makes it easy for customers to migrate to the cloud. By working with NVIDIA, we have implemented SR-IOV-based GPU partitioning that gives customers cost-effective options, similar to the vGPU profiles configured on-premises, to pick the right-sized GPU-powered virtual machine for the workload. SR-IOV-based GPU partitioning provides a strong, hardware-backed security boundary with predictable performance for each virtual machine.

With support for NVIDIA vGPU, customers can select from virtual machines with one-sixth of an A10 GPU and scale all the way up to configurations with two full A10 GPUs. This enables cost-effective entry-level and low-intensity GPU workloads on NVIDIA GPUs, while still giving customers the option to scale up to powerful full-GPU and multi-GPU processing power. Each GPU partition in the NVads A10 v5 series virtual machines includes the full NVIDIA RTX (GRID) license, and customers can either deploy a single virtual workstation per user or offer multiple sessions using the Windows Enterprise multi-session operating system. Our customers love the integrated license validation feature: it simplifies the user experience by eliminating the need to deploy dedicated license server infrastructure and provides a unified pricing model.

"The NVIDIA A10 GPU-accelerated instances in Azure with support for GPU partitioning are transformational for customers seeking cost-effective cloud options for graphics- and compute-intensive workloads. Now, enterprises can access powerful RTX Virtual Workstation instances accelerated by NVIDIA Ampere architecture-based A10 GPUs—sized to meet the performance requirements of creative and technical professionals working across industries such as manufacturing, architecture, and media and entertainment."—Anne Hecht, Senior Director, Product Marketing, NVIDIA.

NVIDIA RTX Virtual Workstations include the latest enhancements in AI, ray tracing, and simulation to enable incredible 3D designs, photorealistic simulations, and stunning visual effects—at faster speeds than ever.

Pick the right-sized GPU virtual machine for any workload

The NVads A10 v5 virtual machine series is designed to offer the right choice for any workload and provide the optimum configurations for both single-user and multi-session environments. The flexible GPU-partitioned virtual machine sizes enable a wide variety of graphics, video, and AI workloads—some of which weren’t previously possible. These include virtual production and visual effects, engineering design and simulation, game development and streaming, virtual desktops/workstations, and many more.

“In the world of CAD design, cost performance and flexibility are of prime importance for our users. Microsoft has completed extensive testing with Siemens NX and we found significant benefits in performance for multiple user scenarios. With GPU partitioning, Microsoft Azure can now enable multiple users to use Siemens NX and efficiently utilize GPU resources offering customers great performance at a reasonable hardware price point.”—George Rendell, Vice President Product Management, Siemens NX.

High performance for GPU-accelerated graphics applications

The NVIDIA A10 Tensor Core GPUs in the NVads A10 v5 virtual machines offer great performance for graphics applications. The AMD EPYC™ 74F3 vCPUs with clock frequencies up to 4.0 GHz offer impressive performance for single-threaded applications.

Next steps

For more information on topics covered here, see the following documentation:

NVads A10 v5 virtual machine documentation
Virtual machine pricing
Learn more about Azure HPC + AI
Read about visualization workloads on Azure

Source: Azure

How to choose the right Azure services for your applications—It’s not A or B

This post was co-authored by Ajai Peddapanga, Principal Cloud Solution Architect.

If you have been working with Azure for any period, you might have grappled with the question—which Azure service is best to run my apps on? This is an important decision because the services you choose will dictate your resource planning, budget, timelines, and, ultimately, the time to market for your business. It impacts the cost of not only the initial delivery, but also the ongoing maintenance of your applications.

Traditionally, organizations have assumed they must choose between two platforms, technologies, or competing solutions to build and run their software applications. For example, they ask questions like: "Do we use WebLogic or WebSphere for hosting our Java Enterprise applications?", "Should Docker Swarm be the enterprise-wide container platform, or Kubernetes?", or "Do we adopt containers or just stick with virtual machines (VMs)?" They try to fit all their applications on platform A or B. This A or B mindset stems from outdated practices shaped by the constraints of the on-premises world: packaged software delivery models, significant upfront investments in infrastructure and software licensing, and the long lead times required to build and deploy any application platform. With that history, it's easy to bring the same mindset to Azure and spend a lot of time building a single platform, based on a single Azure service, that can host as many applications as possible—if not all of them. Companies then try to force-fit all their applications into this single platform, introducing delays and roadblocks that could have been avoided.

There's a better approach possible in Azure that yields higher returns on investment (ROI). As you transition to Azure, where you provision and deprovision resources on an as-needed basis, you don't have to choose between A or B. Azure makes it easy and cost-effective to take a different—and better—approach: the A+B approach. An A+B mindset simply means instead of limiting yourself to a predetermined service, you choose the service(s) that best meet your application needs; you choose the right tool for the right job.

Figure 1: Azure enables you to shift your thinking from an A or B to an A+B mindset, which has many benefits.

With A+B thinking, you can:

Select the right tool for the right job instead of force-fitting use cases to a predetermined solution.
Innovate and go to market faster with the greater agility afforded by the A+B approach.
Accelerate your app modernizations and build new cloud-native apps by taking a modular approach to picking the right Azure services for running your applications.
Achieve greater process and cost efficiencies, and operational excellence.
Build best-in-class applications tailor-fit for your business.

As organizations expand their decision-making process and technical strategy from an A or B mindset to encompass the possibilities and new opportunities offered by an A+B mindset, there are many new considerations. In our new book, we introduce the principles of the A+B mindset that you can use to choose the right Azure services for your applications. We illustrate the A+B approach using two Azure services as examples; however, you can apply these principles to evaluate any number of Azure services for hosting your applications. Azure Spring Apps, Azure App Service, Azure Container Apps, Azure Kubernetes Service, and Virtual Machines are commonly used Azure services for application hosting. The A+B mindset applies to any application, written in any language.

Learn more

Asir and Ajai are the authors of a new Microsoft e-book that helps you transition to an A+B mindset and answer the question: “What is the right service for my applications?” Get the Microsoft e-book to learn more about how to transition to an A+B mindset to choose the right Azure services for your applications.
Source: Azure

MLOps Blog Series Part 3: Testing scalability of secure machine learning systems using MLOps

The capacity of a system to adjust to changes by adding or removing resources to meet demand is known as scalability. Here are some tests to check the scalability of your model.

System testing

System tests are carried out to test the robustness of the design of a system for given inputs and expected outputs (for example, an MLOps pipeline, inference). Acceptance tests (to fulfill user requirements) can be performed as part of system tests.

A/B testing

A/B testing is performed by sending production traffic to alternate systems that will be evaluated. Statistical hypothesis testing is used to decide which system is better.

Figure 1: A/B testing
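As an illustration of the statistical decision step, here is a stdlib-only sketch of a two-proportion z-test comparing conversion (or success) rates from the two systems. The counts are invented for the example; production experiments typically run on a dedicated experimentation platform.

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, p_value) for H0: systems A and B have equal success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that both systems are identical.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical traffic: A converted 480/10,000 requests, B converted 560/10,000.
z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"z={z:.2f}, p={p:.4f}")  # reject H0 at the 5% level when p < 0.05
```

If the p-value falls below the chosen significance level, the traffic split has produced evidence that system B really does behave differently from system A.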

Canary testing

Canary testing is done by delivering the majority of production traffic to the current system while sending traffic from a small group of users to the new system we're evaluating.

Figure 2: Canary testing
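One common way to implement the small-cohort split is deterministic hash-based routing, sketched below. This is illustrative only (the function and threshold are hypothetical); in practice the split is usually configured at the load balancer or via managed weighted-routing features.

```python
import hashlib

def route(user_id: str, canary_fraction: float = 0.05) -> str:
    """Stable per-user routing: the same user always lands on the same system."""
    digest = hashlib.sha256(user_id.encode()).digest()
    # Map the first 4 bytes of the hash to a uniform value in [0, 1).
    bucket = int.from_bytes(digest[:4], "big") / 2**32
    return "canary" if bucket < canary_fraction else "stable"

hits = sum(route(f"user-{i}") == "canary" for i in range(20_000))
print(f"{hits / 20_000:.1%} of users hit the canary")  # close to 5%
```

Hashing the user ID (rather than picking randomly per request) keeps each user's experience consistent while the canary is evaluated.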

Shadow testing

Sending the same production traffic to various systems is known as shadow testing. Shadow testing is simple to monitor and validates operational consistency.

Figure 3: Shadow testing
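A minimal sketch of the shadow pattern follows, with hypothetical `primary` and `shadow` callables standing in for the two systems; real deployments mirror traffic at the gateway or service mesh, and the shadow's output is only recorded, never served.

```python
def shadow_call(request, primary, shadow, tolerance=1e-6):
    """Serve the live response from `primary`, mirror the request to `shadow`,
    and flag any divergence for offline analysis."""
    live = primary(request)
    try:
        mirrored = shadow(request)
        diverged = abs(live - mirrored) > tolerance
    except Exception:
        diverged = True  # a failing shadow must never affect the live path
    return live, diverged

# Compare the current model against a candidate on the same traffic.
current = lambda x: round(x * 0.5, 6)
candidate = lambda x: round(x * 0.5 + 0.001, 6)
print(shadow_call(10.0, current, candidate))  # -> (5.0, True)
```

Because the live response never depends on the shadow system, this check validates operational consistency at zero user-facing risk.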

Load testing

Load testing is a technique for simulating a real-world load on software, applications, and websites. Load testing simulates numerous users using a software application to simulate the expected usage of the program. It measures the following:

•    Endurance: Whether an application can withstand the processing load it is expected to endure for an extended period.
•    Volume: Whether the application performs as expected when subjected to a large volume of data.
•    Stress: The application's capacity to sustain a specified degree of efficacy in adverse situations.
•    Performance: How the system performs in terms of responsiveness and stability under a particular workload.
•    Scalability: The application's ability to scale up or down in response to changes in the number of users.

Load tests can be performed to test the above factors using various software applications. Let’s look at an example of load testing an AI microservice using locust.io. The dashboard in Figure 4 reflects the total requests made to the microservice per second as well as the response times. Using these insights, we can gauge the performance of the AI microservice under a certain load.

Figure 4: Load testing using Locust.io
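A real locustfile expresses user behavior as a Python class that the Locust runner executes. Purely to illustrate the metrics such a tool reports (total requests, throughput, latency percentiles), here is a self-contained stdlib sketch that simulates concurrent users against any callable:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(call, users=20, requests_per_user=50):
    """Drive `call` from `users` concurrent simulated users and report
    throughput and tail latency, similar to a load-testing dashboard."""
    latencies = []

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            call()
            latencies.append(time.perf_counter() - start)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(user_session)
    elapsed = time.perf_counter() - start

    return {
        "requests": len(latencies),
        "rps": len(latencies) / elapsed,          # requests per second
        "p95_ms": 1000 * statistics.quantiles(latencies, n=20)[-1],
    }

# Simulate a service with ~2 ms of processing time.
report = load_test(lambda: time.sleep(0.002), users=10, requests_per_user=20)
print(report)
```

The requests-per-second and 95th-percentile latency values are the same signals shown on the Locust dashboard in Figure 4, and they are what you watch as you ramp up the number of simulated users.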

Learn more

To learn more about the implementation of the above test, watch this demo video and view the code of load testing AI microservices using locust.io. You can check out the code on the load testing microservices GitHub repository. For further details and to learn about hands-on implementation, check out the Engineering MLOps book, or learn how to build and deploy a model in Azure Machine Learning using MLOps in the “Get Time to Value with MLOps Best Practices” on-demand webinar.
Source: Azure

Microsoft Cost Management updates – June 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Viewing cost in the Azure mobile app
Introducing a new API for configuring cost alerts
Prevent budget overages with action groups common alert schema
Amplify your learning experience in Cost Management
Help shape the future of navigation in Cost Management and Billing
What's new in Cost Management Labs
New ways to save money with Microsoft Cloud
New videos and learning opportunities
Documentation updates
Join the Microsoft Cost Management team

Let's dig into the details.

Viewing cost in the Azure mobile app

The Azure mobile app is like having the portal in your pocket, allowing you to stay connected to your Azure resources on the go. In addition to managing access, checking resource status, monitoring health, and all the other great capabilities, you can also keep an eye on the cost of your subscriptions and resource groups. Simply open any subscription or resource group and scroll down to see cost.

Let us know what you’d like to see next!

Introducing a new API for configuring cost alerts

We’ve talked about how one of the most critical aspects of cost management is staying informed about changes to your costs. You already know how to get alerted when cost exceeds predefined thresholds with budgets and you may have seen that you can subscribe to updates of cost analysis views or subscribe to anomaly alerts from the portal. These are all great resources when getting started, but when it comes to getting set up for success at scale, automation is essential. Now you can automate subscribing to views or anomaly alerts with the ScheduledActions API.

Check out the ScheduledActions API to get started today and let us know what new alerts you’d like to see next!

Prevent budget overages with action groups common alert schema

Speaking of automation, the best way to stay within your budget is to automate actions to minimize cost before you exceed your budget. If you’re interested in setting a hard limit on your budget, configure your budget to trigger an action group. Action groups allow you to run custom scripts that can shut down VMs, archive data, or even delete test resources, giving you ultimate control of your finances to ensure you never get surprised.

Cost Management budget alerts now support the Azure Monitor common alert schema, making it easier than ever to automate actions that keep you under your budget.

Learn more about configuring action groups for your budgets and how the Azure Monitor common alert schema can help.
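To make the flow concrete, here is a hypothetical sketch of the decision logic a webhook or Azure Function behind an action group might apply. The nested field names follow the Azure Monitor common alert schema; the action strings are placeholders for real automation (runbooks, VM deallocation, and so on), not actual API calls.

```python
def handle_budget_alert(payload: dict) -> str:
    """Map an incoming alert (common alert schema) to an automated action.

    The payload shape (data.essentials.*) comes from the Azure Monitor
    common alert schema; the returned action names are illustrative.
    """
    essentials = payload["data"]["essentials"]
    if essentials["monitorCondition"] != "Fired":
        return "no-op"  # resolved alerts need no cost action
    if essentials["severity"] in ("Sev0", "Sev1"):
        return "deallocate-test-vms"  # hard stop: shut down non-production VMs
    return "notify-owners"  # softer thresholds just escalate to humans

sample = {"data": {"essentials": {"monitorCondition": "Fired", "severity": "Sev1"}}}
print(handle_budget_alert(sample))  # -> deallocate-test-vms
```

Because every alert type that supports the common schema shares this envelope, the same handler can branch on budget alerts alongside metric or log alerts.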

Amplify your learning experience in Cost Management

Cost can be a daunting topic. Whether you're just getting started or looking to learn more about specific features, there are many ways to learn – from our monthly blog posts and smaller feature updates, to full product documentation and Microsoft Learn modules, to videos on YouTube. And that's just scratching the surface. To help streamline your learning experience, you can now explore the many learning options from the Cost Management overview.

Check out Cost Management tutorials yourself and let us know what you’d like to see added.

Help shape the future of navigation in Cost Management and Billing

Do you manage the billing account or monitor cloud costs for your team or organization? We’re exploring navigation pathways for key tasks within the Azure portal and would love to get your feedback in a 30-minute, unmoderated walkthrough.

If you are interested in participating in this study, please contact our research team and we’ll schedule a time.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Update: Cost Management tutorials – Now available in the public portal
Whether you’re just getting started or looking to learn more about specific features, tutorials are now a click away from the Cost Management overview in Cost Management Labs.
Product column experiment in the cost analysis preview
We’re testing new columns in the Resources and Services views in the cost analysis preview for Microsoft Customer Agreement. You may see a single Product column instead of the Service, Tier, and Meter columns. Please leave feedback to let us know which you prefer.
Group-related resources in the cost analysis preview
Group-related resources, like disks under VMs or web apps under App Service plans, by adding a “costanalysis-parent” tag to the child resources with a value of the parent resource ID. Wait 24 hours for tags to be available in usage and your resources will be grouped. Leave feedback to let us know how we can improve this experience further for you.
Charts in the cost analysis preview
View your daily or monthly cost over time in the cost analysis preview. You can opt-in using Try Preview.
View cost for your resources
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that particular resource.
Change scope from the menu
Change scope from the menu for quicker navigation. You can opt-in using Try Preview.
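The parent-tag grouping described above (disks rolling up under their VM via the "costanalysis-parent" tag) can be sketched in a few lines. This is a hypothetical illustration of the roll-up, with made-up resource IDs, not the service's actual implementation:

```python
from collections import defaultdict

def group_costs_by_parent(resources):
    """Roll child resources up under the parent named in their
    'costanalysis-parent' tag; untagged resources group under themselves."""
    groups = defaultdict(list)
    for res in resources:
        parent = res.get("tags", {}).get("costanalysis-parent", res["id"])
        groups[parent].append(res["id"])
    return dict(groups)

# Example: a VM with its OS disk and public IP tagged to roll up under it.
resources = [
    {"id": "/vm/web-1"},
    {"id": "/disks/web-1-os", "tags": {"costanalysis-parent": "/vm/web-1"}},
    {"id": "/ips/web-1-pip", "tags": {"costanalysis-parent": "/vm/web-1"}},
]
print(group_costs_by_parent(resources))
```

Once the tags propagate into usage data (up to 24 hours), the cost analysis preview presents exactly this kind of parent-child roll-up.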

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money with Microsoft Cloud

Lots of cost optimization improvements over the last month! Here are some of the generally available offers you might be interested in:

NC A100 v4 virtual machines for AI.
DCsv3 and DCdsv3 series virtual machines.
Azure Arc-enabled SQL Managed Instance Business Critical.
Increased size of Stream Analytics jobs and cluster.
Azure Ebsv5 now available in 13 additional regions.
Azure Databricks available in Sweden Central and West Central US.

And here are some of the new previews:

New Cosmos DB features for scalable, cost-effective application development.
Azure Cosmos DB serverless container storage limit increase to 1 TB.
16 MB limit per document in API for MongoDB.
Autoscale Stream Analytics jobs.

New videos and learning opportunities

Here’s a new video you might be interested in:

MySQL Developer Essentials Season 1 Episode 3: Cost management and optimization (9 minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

New FAQ: When does Azure finalize or close the billing cycle of a closed month?
New tutorial: Update tax details for an Azure billing account.
New tutorial: Elevate access to manage billing accounts.
New tutorial: How to create an anomaly alert.
Added additional details about the anomaly detection model.
Payment updates to account for the Reserve Bank of India regulation for recurring payments.
Split out tutorials for creating subscriptions for EA, CSP, MCA (same directory), and MCA (separate directory).
Marketplace price list in the EA portal has been retired.
Budget API is preferred over Azure PowerShell/CLI.

Want to keep an eye on all of the documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
Source: Azure