Azure Lab Services August 2022 update—Improved classroom and training experience

The new updated Azure Lab Services allows you to set up and configure cloud labs for your classroom and training scenarios. You no longer have to worry about setting up, expanding, or managing on-premises labs. We provide a managed service and take the hassle out of managing and maintaining these labs. The updated service comes with improved performance and enhanced backend reliability. With the introduction of virtual network (VNet) injection and more control over the virtual network, you can now unlock key training and classroom scenarios such as lab-to-lab communication and use the service to teach a wide range of courses requiring complex configurations. With this update, you also have the option to integrate the service with the Canvas learning management system.

The introduction of additional roles, Azure policies, and enhanced cost tracking and management capability provides the features you need to fully understand, manage and maintain your service. The availability of a .NET SDK, Python SDK, Azure PowerShell module, and Azure Resource Manager (ARM) templates makes it easy for IT and administrators to automate and manage all aspects of the service. Learn more about the Azure Lab Services update and how to use it.

Alongside major reliability and performance enhancements to the original service, this update brings a wide range of additional features for all personas of the service, including IT organizations, administrators, educators, and students.

New features help IT departments and administrators automate and manage

For IT staff and service administrators, the process of creating labs in the Azure portal now starts with creating a lab plan instead of a lab account. A lab plan is used to create labs, configure settings, and manage them. To ease lab administration, new roles provide granular control to the different people in your organization who manage and maintain labs. With this update, we are also introducing default and custom Azure policies to give administrators more control over lab management.

As with the older service, you will have to request additional virtual processor (vCPU) capacity, depending on your Azure subscription and how many labs and virtual machines you want to create. With this release, vCPU capacity management for your subscription is improved, and you no longer share vCPU capacity with other customers when using the service.

With the new release, it is also easier to track costs for your labs or virtual machines using Azure Cost Management. On the networking front, we are introducing virtual network injection, which replaces the virtual network peering offered in the older service. Virtual network injection gives you control of the Azure network security group (NSG) and load balancer for your virtual network, and supports common scenarios such as lab-to-lab communication, access to an Azure or on-premises license server, and use of Azure Files.

To make it easy for administrators to manage and maintain the service, we offer a range of tools, including a .NET SDK, a Python SDK, an Azure PowerShell module, and ARM templates. These tools help you automate and manage the service, and can also be used to build value-added services on top of it for your customers.
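As a small illustration of what this automation can look like, the sketch below builds a minimal ARM template for a lab plan in Python. The resource type and API version strings are assumptions for illustration only; check the Azure Lab Services ARM reference for the exact values before deploying.

```python
# Minimal sketch: build an ARM template that deploys a single lab plan.
# NOTE: the resource type and apiVersion below are assumptions for
# illustration; confirm them in the Azure Lab Services ARM reference.
import json

def lab_plan_template(name: str, location: str) -> dict:
    """Return a bare-bones ARM deployment template for one lab plan."""
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [
            {
                "type": "Microsoft.LabServices/labPlans",  # assumed resource type
                "apiVersion": "2022-08-01",                # assumed API version
                "name": name,
                "location": location,
                "properties": {},  # lab default settings would go here
            }
        ],
    }

template = lab_plan_template("contoso-lab-plan", "westus2")
print(json.dumps(template, indent=2))
```

The rendered JSON can then be deployed like any other ARM template, for example with the Azure PowerShell module or the Azure CLI.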

To align with global compliance and data-residency regulations, customers now have the choice to deploy labs and related virtual machines in the region of their choice, so their data stays where they want it.

More options and flexibility for educators

Educators and instructors also get new features and functionality that improve their experience with the service. The updated service can be integrated with Canvas, a popular learning management system, making it easy for educators to create, manage, and maintain their labs without leaving Canvas; students can likewise access labs and virtual machines from within Canvas. Educators also have the option to create labs with virtual machines and assign them to students with non-admin access.

The auto-shutdown feature for virtual machines now works across both Windows and Linux, and virtual machine idle detection has been improved to consider resource usage and user presence. The update also gives educators the flexibility to skip the virtual machine template creation process if they already have an image to use and don't want to customize it. Using an existing image, or a default image from the Azure Marketplace, makes lab creation faster than creating a lab from an image and customizing it afterward.

Faster, easier access for students

The updated service also introduces improvements to the student experience. Students can now troubleshoot virtual machine access issues by redeploying their virtual machine without losing data. If the lab is set up to use Azure Active Directory (AAD) group sync, there is no longer a need to send students an invitation email to register for the lab and get access to a virtual machine; a virtual machine is automatically assigned to each student, who can access it immediately.

Learn more

Whatever your industry, you can enable your educational, learning, and training scenarios with the service. Get started today with the enhanced experience and new features in the Azure Lab Services August 2022 update!
Source: Azure

5 steps to prepare developers for cloud modernization

If you’re thinking about what it takes to modernize your applications, you’re not alone. Companies everywhere now understand that migrating applications to the cloud and shifting to a cloud-first approach is critical to business competitiveness. The purpose of modernizing applications is to better align them to current and future business needs. By deploying enterprise applications to the cloud, you gain greater ability to innovate, improve security, scale to meet demand, manage costs, and deliver rich and consistent customer experiences anywhere in the world more quickly.

But as you move to the cloud, there are many options to choose from and skills to gain. One of the most important parts of this effort is understanding how to prepare developers for cloud modernization—and one of the trickiest parts is knowing where to start.

According to research on Developer Velocity, the number one driver of business performance is best-in-class developer tools.1 Companies that create the right environment—by providing strong tools and removing points of friction for developers to innovate—have 47 percent higher developer satisfaction and retention rates than those in the lowest quartile for Developer Velocity. With Microsoft Azure, you’ll find not only the tools and technologies that you need to move to the cloud, but also extensive developer support for cloud modernization.

In this article, we’ll walk you through technical documentation, educational resources, and step-by-step guidance to help you build the skills and strategy needed to successfully modernize your applications. We use Azure App Service as our example, but the same concepts apply to other tools you might use in your modernization efforts.

Here are five steps to take to start preparing for cloud modernization:

1.    Watch how application migration works.

Migrating existing on-premises applications to the cloud is often the focus of initial application modernization efforts. Once the business case has been made to migrate an application to the cloud, you’ll need to assess the application for all the dependencies that can affect whether it can be successfully migrated without modifying the application. In the case of App Service, a migration assistant guides you through the assessment. Then, if the assessment indicates that the application can be migrated, the migration assistant performs the migration. To get an introduction to how the assessment and migration process works, watch the overview video on how to migrate web apps to App Service.

2.    Learn to migrate an on-premises application to the cloud.

The best way to understand what it takes to migrate an application is to try it for yourself. To learn how to migrate an on-premises web application to App Service, take the step-by-step online course—including a hands-on lab—that guides you through migration and post-migration. Using a sandbox environment and access to free resources, you’ll get an in-depth walkthrough of how to migrate your web application, from assessment through post-migration tasks. You’ll also get background on why the assessment phase is so important, what types of problems it’s intended to identify, and what to do if any problems are found. Next, the course takes you through the migration process and provides guidance on the settings you’ll need to choose from, and it prepares you for additional tasks that might be necessary to get the web app in working order.

3.    Build a web app in the language of your choice.

Learning how to build a cloud-native application is another important step in preparing yourself to shift to a cloud-first approach. To give it a try, sign up for an Azure free account, which gives you access to dozens of free services, including App Service. Along with access to a wide range of cloud resources, you get developer support for cloud modernization through quickstart guides that walk you through creating and deploying a web app in App Service using the language of your choice, including .NET, Node.js, Java, Python, and other languages. This is also a great time to explore other Azure cloud capabilities and use the $200 credit that you get with the Azure free account.

4.    Assess your own web apps for modernization readiness.

Once you understand the basics of migrating and deploying applications in the cloud, it’s time to get to work on the process of assessing and migrating your own web apps. Use the free App Service migration tool to run a scan on your web app’s public URL. The tool will provide you with a compatibility report on the technologies your app uses and whether App Service fully supports them. If compatible, the tool will guide you through downloading the migration assistant, which simplifies migration in an automated way with minimal or no code changes.
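To make the idea of a compatibility report concrete, here is a toy Python sketch of the kind of check such an assessment performs. This is not the actual migration tool; the technology names and the blocker list are entirely hypothetical examples.

```python
# Toy illustration (NOT the actual App Service migration tool) of the kind
# of compatibility check an assessment performs. Technology names and the
# blocker list below are hypothetical examples.
UNSUPPORTED = {"local_disk_sessions", "gac_dependencies", "com_interop"}

def assess(detected: set) -> dict:
    """Return a minimal compatibility report for a detected technology set."""
    blockers = sorted(detected & UNSUPPORTED)  # anything the target can't host as-is
    return {"compatible": not blockers, "blockers": blockers}

report = assess({"aspnet_4.8", "local_disk_sessions"})
```

The real tool produces a far richer report, but the shape is the same: a list of detected technologies, a verdict, and the specific blockers to address before migration.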

5.    Download the App Migration Toolkit.

With a solid background in how to prepare for modernization, you’re in a good position to start putting the full range of Azure developer support for cloud modernization to work. Download the App Migration Toolkit to find the resources you need to successfully modernize your ASP.NET applications from start to finish. From building your business case to best practices and help gaining skills, the toolkit provides practical guidance and support to help you turn your application modernization plans into reality.

While application modernization is a significant initiative that requires strategy, planning, skill-building, and investment of time and resources, the benefits to the business are worth the effort. Fortunately, Azure simplifies the process of figuring out how to prepare developers for cloud modernization. The App Migration Toolkit gives you the skills and knowledge needed to help your organization innovate and stay competitive.

1Developer Velocity: How software excellence fuels business performance.
Source: Azure

Track adversaries and improve posture with Microsoft threat intelligence solutions

Today, we’re thrilled to announce two new security products, driven by our acquisition of RiskIQ just over one year ago, that deliver on our vision to provide deeper context into threat actors and help customers lock down their infrastructure.

Track threat actor activity and patterns with Microsoft Defender Threat Intelligence

This new product helps security operations teams uncover attacker infrastructure and accelerate investigation and remediation with more context, insights, and analysis than ever before. While threat intelligence is already built into the real-time detections of our platform and security products like Microsoft Sentinel, customers also need direct access to real-time data and Microsoft’s unmatched signal to proactively hunt for threats across their environments.

For example, adversaries often run their attacks from many machines, with unique IP addresses. Tracing the actor behind an attack and tracking down their entire toolkit is challenging and time-consuming. Using built-in AI and machine learning, Defender Threat Intelligence uncovers the attacker or threat family and the elements of their malicious infrastructure. Armed with this information, security teams can then find and remove adversary tools within their organization and block their future use in tools like Microsoft Sentinel, helping to prevent future attacks.

See your business the way an attacker can with Microsoft Defender External Attack Surface Management

The new Defender External Attack Surface Management gives security teams the ability to discover unknown and unmanaged resources that are visible and accessible from the internet—essentially the same view an attacker has when selecting their target. Defender External Attack Surface Management helps customers discover unmanaged resources that could be potential entry points for an attacker.

Microsoft Defender External Attack Surface Management scans the internet and its connections every day, building a complete catalog of a customer’s environment and discovering internet-facing resources, including agentless and unmanaged assets. Continuous monitoring, without the need for agents or credentials, prioritizes new vulnerabilities. With this complete view of the organization, customers can take recommended steps to mitigate risk by bringing these resources under secure management within tools like Microsoft Defender for Cloud.

Read the full threat intelligence announcement, and to learn more about how Microsoft Defender Threat Intelligence and Microsoft Sentinel work together, read the Tech Communities blog.

Additionally, in the spirit of continuous innovation and bringing as much of the digital environment under secure management as possible, we are proud to announce the new Microsoft Sentinel solution for SAP. Security teams can now monitor, detect, and respond to SAP alerts all from our cloud-native SIEM, Microsoft Sentinel.

To learn more about these products and to see live demos, visit us at Black Hat USA, Microsoft Booth 2340. You can also register now for the Stop Ransomware with Microsoft Security digital event on September 15, 2022, to watch in-depth demos of the latest threat intelligence technology.
Source: Azure

Microsoft Cost Management updates – July 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Introducing the Cost Details API
Filter cost recommendations by tag
How to choose the right Azure services for your applications—It’s not A or B
What's new in Cost Management Labs
New ways to save money in the Microsoft Cloud
New videos and learning opportunities
Documentation updates
Join the Microsoft Cost Management team

Let's dig into the details.

Introducing the Cost Details API

You already know you can dig into your cost and usage data from the Azure portal or Microsoft 365 admin center. You may even know that you can export cost data to storage on a recurring schedule. While many organizations use both, some have more ad-hoc requirements where they need an on-demand solution. Traditionally, these organizations would use the Consumption UsageDetails API or the older Enterprise Agreement (EA) consumption.azure.com APIs. This month, we introduced a new on-demand solution for downloading granular cost details with the new Cost Details API – now generally available for Enterprise Agreement and Microsoft Customer Agreement accounts.

The Cost Details API comes with improved security, stability, and scalability over the UsageDetails API and aligns with the schema already used by Cost Management exports. If you’re still using the older EA key-based APIs, you’ll also get additional benefits such as a single dataset for all cost data (including Marketplace and reservation purchases), an option to amortize reservation purchases, and support for splitting shared costs with cost allocation.

With the general availability of Cost Details this month, the UsageDetails and consumption.azure.com APIs are in maintenance mode and will not receive updates. For large accounts with a lot of cost data, or to streamline recurring data dumps, please migrate to scheduled exports. If your requirements call for a more on-demand solution, please migrate to the Cost Details API.

To learn more about the Cost Details API, see Get cost details for a pay-as-you-go subscription. For additional information about when to select Exports or Cost Details, see Choose a cost details solution.
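As an illustration, the sketch below prepares an on-demand request against the documented generateCostDetailsReport operation. The endpoint path and api-version shown are believed current as of this writing; verify them against the REST reference before relying on them.

```python
# Sketch: prepare a Cost Details report request (URL + body only; no call
# is made). Verify the path and api-version against the current REST docs.
BASE = "https://management.azure.com"

def cost_details_request(scope: str, start: str, end: str, amortized: bool = False):
    """Build the URL and JSON body for an on-demand cost details report."""
    url = (f"{BASE}/{scope}/providers/Microsoft.CostManagement/"
           f"generateCostDetailsReport?api-version=2022-05-01")
    body = {
        "metric": "AmortizedCost" if amortized else "ActualCost",
        "timePeriod": {"start": start, "end": end},
    }
    return url, body

# Example: actual costs for July 2022 at a subscription scope.
url, body = cost_details_request(
    "subscriptions/00000000-0000-0000-0000-000000000000",
    "2022-07-01", "2022-07-31",
)
```

Note that the operation is asynchronous: the POST returns 202 Accepted with a Location header that you poll until the report, a set of download URLs for the cost details files, is ready.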

Filter cost recommendations by tag

Nearly every conversation we have with organizations starts with cost optimization and ensuring their workloads are running efficiently. And when it comes to cost optimization, we always tell people to start in Azure Advisor, which gives a great picture of high-confidence recommendations to reduce cost. At the same time, this can be daunting for large teams with resources spread across many resource groups and subscriptions. To help facilitate this, you can now filter your recommendations by tag in Azure Advisor.

With tag filters, you can scope recommendations to a business unit, project, or application, and calculate scores using tags you have already assigned to Azure resources, resource groups, and subscriptions.

To learn more, visit how to filter Advisor recommendations using tags.

How to choose the right Azure services for your applications—It’s not A or B

If you’ve been working with Azure for any period, you might have grappled with the question—which Azure service is best to run my apps on? This is an important decision because the services you choose will dictate your resource planning, budget, timelines, and, ultimately, the time to market for your business. It impacts the cost of not only the initial delivery, but also the ongoing maintenance of your applications.

Read on as Asir Selvasingh and Ajai Peddapanga summarize your options on the Azure blog to get you started on the right foot. They’ll also share details about their new e-book on the subject.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

New: What’s new in Cost Management – now enabled by default in Labs
Learn about new announcements from the Cost Management overview. You can opt in using Try preview.
New: Cost savings insights in the cost analysis preview
Identify potential savings available from Azure Advisor cost recommendations for your Azure subscription. You can opt in using Try preview.
New: Forecast in the cost analysis preview
Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.
Product column experiment in the cost analysis preview
We’re testing new columns in the Resources and Services views in the cost analysis preview for Microsoft Customer Agreement. You may see a single Product column instead of the Service, Tier, and Meter columns. Please leave feedback to let us know which you prefer.
Group related resources in the cost analysis preview
Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.
Charts in the cost analysis preview
View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try preview.
View cost for your resources
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that particular resource.
Change scope from the menu
Change scope from the menu for quicker navigation. You can opt in using Try preview.
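Because the resource-grouping feature above relies only on a tag, it can be automated with any tagging tool. A minimal sketch, with illustrative resource IDs:

```python
# Sketch: build the tag that groups a child resource (e.g., a disk) under
# its parent in the cost analysis preview. The tag name "cm-resource-parent"
# comes from the feature description; the resource IDs are illustrative.
def resource_parent_tag(parent_resource_id: str) -> dict:
    """Return the tag to apply to a child resource to group it under a parent."""
    return {"cm-resource-parent": parent_resource_id}

vm_id = ("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/"
         "rg-demo/providers/Microsoft.Compute/virtualMachines/vm-demo")
disk_tags = resource_parent_tag(vm_id)  # apply to the VM's disk resource
```

Apply the resulting tag to each child resource with your usual tooling (portal, CLI, SDK, or ARM templates), and the children roll up under the parent in the Resources view.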

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal or Microsoft 365 admin center. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today. 

New ways to save money in the Microsoft Cloud

Here are new and updated offers you might be interested in:

Generally available: NVads A10 v5 GPU-accelerated virtual machines.
Generally available: Improved Try Azure Cosmos DB for free experience.
Generally available: Azure SQL Database Hyperscale Named replicas.
Preview: Create an additional 5000 Azure Storage accounts within your subscription.

New videos and learning opportunities

One new video this month for those automating reporting and optimization:

Creating context for your Advisor recommendations (eight minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

Added Create an Azure budget with Bicep.
Added Enable preview features in Cost Management Labs.
Updated the effective date for Reserve Bank of India directive updates in Pay your Microsoft Customer Agreement or Microsoft Online Subscription Program Azure bill.
Added details about Warned subscriptions to Azure subscription states.
Noted cost analysis date range limit of 13 months in Understand Cost Management data.
15 updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
Source: Azure

Azure empowers easy-to-use, high-performance, and hyperscale model training using DeepSpeed

This blog was written in collaboration with the DeepSpeed team, the Azure ML team, and the Azure HPC team at Microsoft.

Large-scale transformer-based deep learning models trained on large amounts of data have shown great results in recent years in several cognitive tasks and are behind new products and features that augment human capabilities. These models have grown several orders of magnitude in size during the last five years, from the few million parameters of the original transformer model all the way to the latest 530-billion-parameter Megatron-Turing (MT-NLG 530B) model, as shown in Figure 1. There is a growing need for customers to train and fine-tune large models at an unprecedented scale.

Figure 1: Landscape of large models and hardware capabilities.

Azure Machine Learning (AzureML) brings large fleets of the latest GPUs powered by the InfiniBand interconnect to tackle large-scale AI training. We already train some of the largest models including Megatron/Turing and GPT-3 on Azure. Previously, to train these models, users needed to set up and maintain a complex distributed training infrastructure that usually required several manual and error-prone steps. This led to a subpar experience both in terms of usability and performance.

Today, we are proud to announce a breakthrough in our software stack, using DeepSpeed and 1024 A100s to scale the training of a 2T parameter model with a streamlined user experience at 1K+ GPU scale. We are bringing these software innovations to you through AzureML (including a fully optimized PyTorch environment) that offers great performance and an easy-to-use interface for large-scale training.

Customers can now use DeepSpeed on Azure with simple-to-use training pipelines that utilize either the recommended AzureML recipes or via bash scripts for VMSS-based environments. As shown in Figure 2, Microsoft is taking a full stack optimization approach where all the necessary pieces including the hardware, the OS, the VM image, the Docker image (containing optimized PyTorch, DeepSpeed, ONNX Runtime, and other Python packages), and the user-facing Azure ML APIs have been optimized, integrated, and well-tested for excellent performance and scalability without unnecessary complexity.

Figure 2: Microsoft full-stack optimizations for scalable distributed training on Azure.

This optimized stack enabled us to efficiently scale training of large models using DeepSpeed on Azure. We are happy to share our performance results supporting 2x larger model sizes (2 trillion vs. 1 trillion parameters), scaling to 2x more GPUs (1024 vs. 512), and up to 1.8x higher compute throughput/GPU (150 TFLOPs vs. 81 TFLOPs) compared to those published on other cloud providers.

We offer near-linear scalability both as model size increases and as the number of GPUs increases. As shown in Figure 3a, with DeepSpeed ZeRO-3, its novel CPU offloading capabilities, and a high-performance Azure stack powered by InfiniBand interconnects and A100 GPUs, we were able to maintain efficient throughput/GPU (>157 TFLOPs) in a near-linear fashion as the model size increased from 175 billion to 2 trillion parameters. Conversely, for a given model size, for example 175B, we achieve near-linear scaling as we increase the number of GPUs from 128 all the way to 1024, as shown in Figure 3b. The key takeaway from these results is that Azure and DeepSpeed together are breaking the GPU memory wall and enabling our customers to easily and efficiently train trillion-parameter models at scale.

Figure 3: (a) Near-perfect throughput/GPU as we increase the model size from 175 billion to 2 trillion parameters (BS/GPU=8), (b) Near-perfect performance scaling with the increase in number of GPU devices for the 175B model (BS/GPU=16). The sequence length is 1024 for both cases.
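For readers experimenting with a similar setup, the sketch below shows an illustrative DeepSpeed configuration that enables ZeRO stage 3 with CPU offloading, the combination described above. The specific values are examples, not the settings used to produce the published numbers.

```python
# Illustrative DeepSpeed configuration enabling ZeRO stage 3 with CPU
# offloading. Values such as the micro-batch size are examples only.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": 8,  # e.g., BS/GPU=8 as in Figure 3a
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                              # partition params, grads, optimizer state
        "offload_optimizer": {"device": "cpu"},  # push optimizer state to host memory
        "offload_param": {"device": "cpu"},      # push parameters to host memory
    },
}

config_json = json.dumps(ds_config, indent=2)  # saved as ds_config.json in practice
```

A file like this is passed to the DeepSpeed launcher (or to `deepspeed.initialize`) alongside the training script; the Megatron-DeepSpeed repository referenced below contains the complete, tuned recipes.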

Learn more

To learn more about the optimizations, technologies, and detailed performance trends presented above, please refer to our extended technical blog.

Learn more about DeepSpeed, which is part of Microsoft’s AI at Scale initiative.
Learn more about Azure HPC + AI.
To get started with DeepSpeed on Azure, please follow our getting started tutorial.
The results presented in this blog were produced on Azure by following the recipes and scripts published as part of the Megatron-DeepSpeed repository. The recommended and easiest method to run the training experiments is to use the AzureML recipe.
If you are running experiments on a custom environment built using Azure VMs or VMSS, please refer to the bash scripts we provide in Megatron-DeepSpeed.

Source: Azure

Microsoft Cost Details API now generally available for EA and MCA customers

The Cost Details API is now generally available for Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) customers. This API provides on-demand download of the granular cost details (formerly referred to as usage details) associated with your Microsoft charges. It replaces all existing Usage Details APIs and provides data for the charges in your invoice; for customers with an MCA agreement, this includes Microsoft 365, Dynamics 365, Power Platform, and Azure charges. This API, along with Exports, is the go-to solution for ingesting the raw cost data needed to build any custom reporting solution. To learn more about how to call the Cost Details API, please see our documentation on how to get small cost datasets on demand.

Customers who are not using Exports or the Cost Details API should migrate to one of these solutions. To learn more about which solution is best for your scenario, see our best practices for choosing a cost details solution.

Benefits of the new solutions

Both the Cost Details API and Exports provide extensive benefits on top of existing solutions today.

Security and stability—The new solutions require service principal and/or user tokens to access data. For EA customers, keys used in the EA Reporting APIs are valid for only six months. Going forward, we recommend token-based solutions through service principal or user authentication, consistent with Azure REST APIs.
Scalability—The EA Reporting APIs (available only for EA customers) and the Consumption Usage Details (available for both EA and MCA customers) aren't built to scale well as your Microsoft and Azure costs increase. The number of Azure cost records in your cost details dataset can get exceedingly large as you deploy more resources into the cloud. The new solutions are asynchronous and have extensive infrastructure enhancements behind them to ensure successful downloads for any size dataset.
Single dataset for all usage details—For EA customers, the existing EA Reporting APIs have separate endpoints for Azure usage charges versus Azure Marketplace charges. These datasets have been merged in the new solutions. A single dataset reduces the number of APIs that you need to call to see all your charges.
Purchase amortization—Customers who purchase Reservations can see an amortized view of their costs using the new solutions. You can request amortized or actual cost datasets as part of report configuration. Learn more about using amortized cost details datasets.
Schema consistency—The Cost Details API and Exports provide files with matching fields, allowing you to easily move between solutions based on your scenario. Learn more about the available fields in understand cost details data fields.
Cost allocation integration—EA and MCA customers can use the new solutions to view charges in relation to the cost allocation rules that have been configured. Learn more about creating and managing Azure cost allocation rules.
Go forward improvements—The new solutions are being actively developed moving forward. They'll receive all new features as they're released.

Migrating to the new solutions

The EA Reporting APIs and the Consumption Usage Details API are no longer actively being developed. We strongly recommend migrating off these APIs as soon as possible. A retirement announcement will be made in the future with formal timelines for when the APIs will be turned off. However, the APIs will be fully supported from a reliability standpoint until the formal retirement date is reached.

When assessing the migration off existing usage details solutions, please use the documents below as a reference point:

Migrate from the Enterprise Reporting Usage Details API
Migrate from the Consumption Usage Details API

Please note that pay-as-you-go, MSDN, and Visual Studio customers who are not a part of either an Enterprise Agreement (EA) or Microsoft Customer Agreement (MCA) should continue using the Consumption Usage Details API.

Next steps

Learn more about automation with our Cost Management automation overview.
Assign permissions to a service principal (SPN) to call the APIs.
Understand the basics of how to work with cost details data.
Pick the cost details solution that is right for you.
Get small cost datasets on demand with the Cost Details API.

Follow us on Twitter @MSCostMgmt for more exciting Cost Management updates.
Source: Azure

Migrate and modernize with Azure to power innovation across the entire digital estate

Cloud adoption increased significantly during COVID-19 and continues for many companies. However, an enormous migration and modernization opportunity remains as organizations continue their digital transformation. In fact, 72 percent of organizations reported their industry’s pace of digital transformation accelerated because of COVID-19, according to a survey sponsored by Microsoft in The Economist1. And we don’t expect that to slow down anytime soon.

We are hearing key themes from our customers that reinforce this, including:

Cloud has become the catalyst for innovation. Customers are moving beyond operational efficiency to create new products and offerings, leveraging the unique capabilities of cloud to differentiate themselves.
Customers are not just looking for technology, they’re looking for a trusted partner. They need an expert to help them navigate these tough issues as they move toward hybrid, multi-cloud, and edge environments, facing new complexities and opportunities.
Security, data privacy, and compliance are top of mind for customers in every industry as cyberattacks continue to rise2.

These areas are opportunities for us to strengthen our partnerships and help our mutual customers realize their greatest potential. This is why we are focusing on three key areas of growth for Azure at Microsoft Inspire:

Innovation with new cloud-native experiences.
Modernization of app and data estates.
Migration and modernization of infrastructure and mission-critical workloads.

This week, we are announcing new Microsoft Azure capabilities to help partners increase return on investments, grow leads, and shorten sales cycles. This includes updates to our Azure Migration and Modernization Program (AMMP) and announcing the ISV Success Program.

Read on for more details on our latest developments and each of the opportunity areas to see how we can deliver the greatest value to our customers. Our Azure keynote session is also a great resource to learn more.

Key investments to help partners achieve success

Azure Migration and Modernization Program

We now have more than 500 partners enrolled in the Azure Migration and Modernization Program (AMMP) across apps, data, and infrastructure. AMMP is our hero program to help simplify and accelerate migration and modernization, with the right mix of incentives, best practice guidance, tools, and expert help. AMMP has powered deeper go-to-market connections with partners—and our goal is to ensure every migration and modernization opportunity has a partner attached to it. Partners are at the center of how we execute with customers.

We’re making substantive investments and updates to AMMP to help drive scale and velocity of migrations:

Up to 2.5 times larger incentives for Windows Server and SQL Server migrations. 
Empowerment for Microsoft sales organizations to locally allocate incentives for their areas, providing more opportunity for partners to engage and feed into local plans.
Updated best practice guidance with the Cloud Adoption Framework for Azure and the Azure Well-Architected Framework (WAF).
New modernization capabilities in Azure Migrate, including the option for ISVs to integrate their own IP.

Now more than ever, we need partners’ help to scale our customers’ migration and modernization journeys. Sign up or nominate customers for the AMMP.

ISV Success Program

As the cloud becomes the fabric of every business, across every industry, customers need more complete solutions to support their growth and innovation. This creates tremendous opportunity as the demand for software as a service (SaaS) and "anything as a service" solutions continues to increase. To help unlock opportunities for software providers, we are excited to announce new benefits with the ISV Success Program to help ISVs innovate rapidly, build well-architected applications, publish them to our commercial marketplace, and grow their sales. Currently in preview, and broadly available in fall 2022, the program is intended to be the pathway to ISV success in the Microsoft Cloud Partner Program.

Software providers can take advantage of this new program to build across the Microsoft Cloud and get access to cloud sandboxes, developer tooling, technical and business resources, and a dedicated community.

Innovate with new cloud-native experiences

Organizations across industries are looking to deliver highly personalized experiences to their end customers. Cloud-native applications can help meet these needs. For example, Microsoft Azure has helped retailers like Walgreens gain immediate access to rich transactional data and insights, enabling faster decisions and better customer experiences. According to IDC3, 750 million new logical applications will be built by 2025. This is why so many ISVs and enterprises are turning to Azure for their cloud-native applications. Partners can help customers build cloud apps using scalable containerized architectures combined with globally scalable databases infused with intelligence through AI.

Modernize application and data estates

Modernization of applications and data represents a huge partner opportunity where one project will lead to the next. In fact, 59 percent of organizations see modernizing apps to the cloud as a top initiative4. According to Microsoft estimates, the opportunity for data exceeds $42 billion today, and will grow to $85 billion by fiscal year 2025. Our goal is to help partners close deals to modernize the application and data estates even faster. We’ve made this even more seamless with our newly released Microsoft Intelligent Data Platform. This end-to-end ecosystem integrates databases, analytics, and governance across the customer estate—enabling organizations to adapt in real-time, add layers of intelligence to their applications, unlock fast and predictive insights, and govern their data—wherever it resides.

No matter where our customers are in their modernization journey, Azure offers flexibility between control over managing infrastructure and the level of productivity desired. Partner advisory services and technical expertise are extremely valuable in this estate-level opportunity.

Migrate and modernize infrastructure and mission-critical workloads

Our customers often face time-sensitive decisions with datacenter contract renewals or software end-of-support. As a result, many customers are looking to shift large parts of their IT spend to the cloud with infrastructure as a service (IaaS) seeing the biggest increase. 

This year also brings timely migration opportunities for Windows Server and SQL Server. It’s a great time for partners to advise customers using SQL Server 2012 and Windows Server 2012/2012 R2 about End of Support (EOS) timelines and help them take action to stay secure. It’s also an opportunity to talk with customers about their cloud migration and modernization plans. We offer the best value at every stage of cloud migration. To share just two examples, it’s up to 80 percent less expensive to run Windows Server VMs and Azure SQL Managed Instance on Azure than it is with our main competitor.  And it’s not just about costs—we offer unique capabilities like Azure Automanage to simplify VM management and the broadest SQL Server compatibility to ease the move.

Read more about the top workloads with the largest migration opportunity as a key growth and revenue driver this coming year with announcement details, including:

Azure Confidential Computing capabilities, now generally available, allow partners to transition to Azure workloads that handle sensitive data with additional levels of protection.
Azure Center for SAP solutions, now in preview, is an end-to-end solution to deploy and manage SAP workloads on Azure, enabling customers and partners to create and run an SAP system as a unified workload, and providing a more seamless foundation for innovation on the Microsoft Cloud.
A new Azure Arc Boost Program, in partnership with Intel, will drive the deployment of Azure Arc and Azure Stack HCI in customer hybrid environments through our Systems Integrator partner ecosystem.
Our most recent release of Azure Stack HCI, in preview, delivers more customer value and partner opportunity with new features, providing increased return on investment, shorter time to value, and an improved support experience.

Get started at Microsoft Inspire

With so much opportunity ahead, where should you get started? Be sure to tune into the keynote and the Azure sessions linked below to hear from partners about how they’re already growing their business through Microsoft Cloud and Azure. Partners can bring tailored industry expertise and solutions to complement the innovation that Azure delivers. Together with our amazing partner community, we are creating opportunities with the most trusted cloud to empower customers to transform today, tomorrow, and build for the future.

Azure sessions

Power innovation across the digital estate
Grow revenue by accelerating customer adoption of Azure infrastructure
New SAP on Azure solutions and GTM offers to accelerate your business
Winning the data estate with Microsoft Intelligent Data Platform
Drive digital and application innovation with Microsoft Azure
Addressing sovereign requirements with Microsoft Cloud
SQL Server 2022: the most Azure-connected SQL Server release ever
Move your Azure hybrid business forward with Azure Arc
Enhance your customers' network security and drive business growth on
Latest Azure Confidential Computing innovations – generally available
How to make money migrating with Azure VMware Solution
Onboard as a HPC partner
Azure migration and modernization – Tools & Programs
Bring AI to Every App, Process and employee with Azure AI
Building industry sustainability solutions together with our partners
Innovate with cloud-scale apps, data, and AI
Winning the toughest analytics workloads on Azure
Advancing enterprise Linux application modernization on Azure
New opportunities to grow your practice with Azure Virtual Desktop

Sources: 

1The transformation imperative: Digital drivers in the covid-19 pandemic, The Economist

2Microsoft Digital Defense Report

3750 Million New Logical Applications: More Background, IDC

4Flexera Releases 2021 State of the Cloud Report Press Release, flexera.com

 
Source: Azure

Azure Premium SSD v2 Disk Storage in preview

We are excited to announce the preview of Premium SSD v2, the next generation of Microsoft Azure Premium SSD Disk Storage. This new disk offering provides the most advanced block storage solution designed for a broad range of input/output (IO)-intensive enterprise production workloads that require sub-millisecond disk latencies as well as high input/output operations per second (IOPS) and throughput—at a low cost. With Premium SSD v2, you can now provision up to 64 TiB of storage capacity, 80,000 IOPS, and 1,200 MBPS throughput on a single disk. With best-in-class IOPS and bandwidth, Premium SSD v2 provides the most flexible and scalable general-purpose block storage in the cloud, enabling you to meet the ever-growing demands of your production workloads such as SQL Server, Oracle, MariaDB, SAP, Cassandra, MongoDB, big data, analytics, and gaming, whether on virtual machines or stateful containers. Moreover, with Premium SSD v2, you can provision granular disk sizes, IOPS, and throughput independently based on your workload needs, providing you more flexibility in managing performance and costs.

With the launch of Premium SSD v2, our Azure Disk Storage portfolio now includes one of the most comprehensive sets of disk storage offerings to satisfy workloads ranging from Tier-1, IOPS-intensive workloads such as SAP HANA to general-purpose workloads such as RDBMS and NoSQL databases and cost-sensitive Dev/Test workloads.

Benefits of Premium SSD v2

As customers transition their production workloads to the cloud or deploy new cloud-native applications, balancing performance and cost is top of mind. For example, transaction-intensive database workloads may require high IOPS on a small disk size or a gaming application may need very high IOPS during peak hours. Similarly, big data applications like Cloudera/Hadoop may require very high throughput at a low cost. Hence, customers need the flexibility to scale their IOPS and throughput independent of the disk size. With Premium SSD v2, you can customize disk performance to precisely meet your workload requirements or seasonal demands, without the need to provision additional storage capacity.

Premium SSD v2 also enables you to provision storage capacity ranging from 1 GiB up to 64 TiB with GiB increments. All Premium SSD v2 disks provide a baseline performance of 3,000 IOPS and 125 MB/sec. If your disk requires higher performance, you can provision the required IOPS and throughput at a low cost, up to the max limits shown below. You can dynamically scale up or scale down the IOPS and throughput as needed without downtime, allowing you to manage disk performance cost-effectively while avoiding the maintenance overhead of striping multiple disks to achieve more performance. Summarizing the key benefits:

Granular disk size in 1 GiB increments.
Independent provisioning of IOPS, throughput, and GiB.
Consistent sub-millisecond latency.
Easier maintenance with scaling performance up and down without downtime.
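The provisioning rules above (1 GiB increments, a 3,000 IOPS / 125 MB/s baseline on every disk, and per-disk maxima of 80,000 IOPS and 1,200 MB/s) can be expressed as a simple validation step. The helper below is an illustrative sketch, not part of any Azure SDK.

```python
# Sketch: validating a requested Premium SSD v2 configuration against the
# limits described above (baseline 3,000 IOPS / 125 MB/s on every disk,
# up to 80,000 IOPS and 1,200 MB/s on a single disk). Names are illustrative.

BASELINE_IOPS, MAX_IOPS = 3_000, 80_000
BASELINE_MBPS, MAX_MBPS = 125, 1_200
MIN_GIB, MAX_GIB = 1, 64 * 1024  # 1 GiB to 64 TiB, in 1 GiB increments

def validate_disk(size_gib: int, iops: int, mbps: int) -> list:
    """Return a list of problems with the requested configuration (empty if OK)."""
    problems = []
    if not MIN_GIB <= size_gib <= MAX_GIB:
        problems.append(f"size {size_gib} GiB outside {MIN_GIB}-{MAX_GIB} GiB")
    if not BASELINE_IOPS <= iops <= MAX_IOPS:
        problems.append(f"IOPS {iops} outside {BASELINE_IOPS}-{MAX_IOPS}")
    if not BASELINE_MBPS <= mbps <= MAX_MBPS:
        problems.append(f"throughput {mbps} MB/s outside {BASELINE_MBPS}-{MAX_MBPS}")
    return problems
```

For example, a 100 GiB disk with 5,000 IOPS and 150 MB/s passes, while a request for 100,000 IOPS would be rejected as above the per-disk maximum.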

Premium SSD v2, like all other Azure Disk Storage offerings, will provide our industry-leading data durability and high availability at general availability.

Following is a summary comparing Premium SSD v2 with the current Premium SSD and Ultra Disk.

 

                      Ultra Disk            Premium SSD v2        Premium SSD
Disk size             4 GiB – 64 TiB        1 GiB – 64 TiB        4 GiB – 32 TiB
Baseline IOPS         Varies by disk size   3,000 IOPS free       Varies by disk size
Baseline throughput   Varies by disk size   125 MBPS free         Varies by disk size
Peak IOPS             160,000 IOPS          80,000 IOPS           20,000 IOPS
Peak throughput       4,000 MBPS            1,200 MBPS            900 MBPS
Durability            99.999999999% durability (~0% annual failure rate), all offerings

Supported Azure Virtual Machines

Premium SSD v2 disks can be attached to any premium storage-enabled virtual machine size, giving you a diverse set of sizes to choose from. Currently, Premium SSD v2 can only be used as data disks. Premium SSDs and Standard SSDs can be used as OS disks for virtual machines using Premium SSD v2 data disks.

Pricing

Premium SSD v2 disks are billed hourly based on the provisioned capacity, IOPS, and MBPS. Let’s take an example of a disk that you provision with 100 GiB capacity, 5,000 IOPS, and 150 MB/sec throughput.

The disks are billed per GiB of the provisioned capacity. Hence, you will be charged for 100 GiB of the provisioned capacity.
The disks are billed for any additional IOPS provisioned over the free baseline of 3,000 IOPS. In this case, since you provisioned 5,000 IOPS, you will be billed for the additional 2,000 IOPS.
The disks are billed for any additional throughput over the free baseline throughput of 125 MB/s. In this case, since you provisioned 150 MB/sec throughput, you will be billed for the additional 25 MB/s throughput.
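The three billing steps above reduce to a small piece of arithmetic: you pay for every provisioned GiB, plus whatever you provision above the free IOPS and throughput baselines. The sketch below computes only the billable quantities; actual rates vary by region and are not shown.

```python
# Sketch of the billing arithmetic described above: capacity is billed per
# provisioned GiB, and only IOPS/throughput above the free baselines
# (3,000 IOPS, 125 MB/s) incur extra charges. Function name is illustrative.

FREE_IOPS = 3_000
FREE_MBPS = 125

def billable_quantities(size_gib: int, iops: int, mbps: int) -> dict:
    return {
        "capacity_gib": size_gib,                 # billed per GiB provisioned
        "extra_iops": max(0, iops - FREE_IOPS),   # IOPS above the free baseline
        "extra_mbps": max(0, mbps - FREE_MBPS),   # MB/s above the free baseline
    }

# The example disk from the text: 100 GiB, 5,000 IOPS, 150 MB/s.
print(billable_quantities(100, 5_000, 150))
# -> {'capacity_gib': 100, 'extra_iops': 2000, 'extra_mbps': 25}
```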

You can learn more on the Azure Managed Disks pricing page.

Getting Started

Premium SSD v2 is currently available in preview in select regions. If you are interested in participating in the preview, you can request access to get started. Once enrolled in the preview program, you will be able to create and manage Premium SSD v2 via the Azure portal, PowerShell, the CLI, and SDKs. You can refer to the Premium SSD v2 documentation to learn more.

We look forward to hearing your feedback. Please email us at AzureDisks@microsoft.com with any questions.
Source: Azure

Microsoft joins Jakarta EE and MicroProfile Working Groups at Eclipse Foundation

We’re excited to announce that Microsoft has joined the Eclipse Foundation Jakarta EE and MicroProfile Working Groups as an Enterprise and Corporate member, respectively. Our goal is to help advance these technologies to deliver better outcomes for our Java customers and the broader community. We’re committed to the health and well-being of the vibrant Java ecosystem, including Spring (Spring utilizes several key Jakarta EE technologies). Joining the Jakarta EE and MicroProfile groups complements our participation in the Java Community Process (JCP) to help advance Java SE.

Over the past few years, Microsoft has made substantial investments in offerings for Java, Jakarta EE, MicroProfile, and Spring technologies on Azure in collaboration with our strategic partners. With Red Hat, we’ve built a managed service for JBoss EAP on Azure App Service. We’re also collaborating with Red Hat to enable robust solutions for JBoss EAP on Virtual Machines (VMs) and Azure Red Hat OpenShift (ARO). With VMware, we jointly develop and support Azure Spring Apps (formerly Azure Spring Cloud), a fully managed service for Spring Boot applications. And with Oracle and IBM, we’ve been building solutions for customers to run WebLogic and WebSphere Liberty/Open Liberty on VMs, Azure Kubernetes Service, and ARO (WebSphere). Other work includes a first-party managed service to run Tomcat and Java SE (App Service) and Jakarta Messaging support in Azure Service Bus. Learn more about these Java EE, Jakarta EE, and MicroProfile on Azure offerings.

Our strategic partners

Microsoft is actively improving our support for running Quarkus on Azure, including on emerging platforms such as Azure Container Apps. The expanded investment in Jakarta EE and MicroProfile is a natural progression of our work to enable Java on Azure. Our broad and deep partnerships with key Java ecosystem stakeholders such as Oracle, IBM, Red Hat, and VMware power our Java on Azure work. These strategic partners share our enthusiasm for the Jakarta EE and MicroProfile journeys that Microsoft has embarked upon.

"We're thrilled to have an organization with the influence and reach of Microsoft joining the Jakarta EE Working Group. Microsoft has warmly embraced all things Java across its product and service portfolio, particularly Azure. Its enterprise customers can be confident that they will be actively participating in the further evolution of the Jakarta EE specifications which are defining enterprise Java for today's cloud-native world."—Mike Milinkovich, Executive Director, Eclipse Foundation.

“We welcome Microsoft to the Jakarta EE and MicroProfile Working Groups. We are pleased with our collaboration with Microsoft in delivering Oracle WebLogic Server solutions in Azure, which are helping customers to use Jakarta EE in the cloud. We look forward to more collaboration in the Jakarta EE and MicroProfile Working Groups.”—Tom Snyder, Vice President, Oracle Enterprise Cloud Native Java.

“IBM’s collaboration with Microsoft has shown Jakarta EE and MicroProfile running well in a number of Azure environments on the Liberty runtime, so it’s exciting to see Microsoft now joining the Jakarta EE and MicroProfile Working Groups. I look forward to seeing Microsoft bringing another perspective to the Working Groups based on their experience and needs for Azure customers.”—Ian Robinson, Chief Technology Officer, IBM Application Platform.

"It is great to see Microsoft officially join both MicroProfile and Jakarta EE as they'd been informally involved in these efforts for a long time. I hope to see Microsoft's participation bring experience from their many users and partners who have developed and deployed enterprise Java applications on Azure for several years."—Mark Little, Vice President, Software Engineering, Red Hat.

"We are excited to see Microsoft supporting the Jakarta EE Working Group. Jakarta EE serves as a key integration point for Spring applications and we look forward to the future evolution of common specifications like Servlet, JPA, and others. Microsoft delights developers with their continued support of the Java ecosystem along with their work with VMware on bringing a fully managed Spring service to Azure.”—Ryan Morgan, Vice President, Software Engineering, VMware.

Looking to the future

As part of the Jakarta EE and MicroProfile working groups, we’ll continue to work closely with our long-standing partners. We believe our experience with running Java workloads in the cloud will be valuable to the working groups, and we look forward to building a strong future for Java together with our customers, partners, and the community.

Learn more about Java on Azure offerings for Jakarta EE and MicroProfile.
Source: Azure

Gateway Load Balancer now generally available in all regions

Previously, we announced the public preview release of Gateway Load Balancer (GWLB), a new SKU of Azure Load Balancer targeted at transparent NVA (network virtual appliance) insertion, supported by a growing list of NVA providers. Today, placing NVAs in the path of traffic is a growing need for customers as their workloads scale. Common use cases of NVAs we’ve seen are:

Allowing or blocking specific IPs using virtual firewalls.
Protecting applications from DDoS attacks.
Analyzing or visualizing traffic patterns.

And GWLB now offers the following benefits for NVA scenarios:

Source IP preservation.
Flow symmetry.
Lightweight NVA management at scale.
Auto-scaling with Azure Virtual Machine Scale Sets (VMSS).

With GWLB, bump-in-the-wire service chaining becomes easy to add on to new or existing architectures in Azure. This means customers can easily “chain” a new GWLB resource to both Standard Public Load Balancers and individual virtual machines with Standard Public IPs, covering scenarios involving both highly available, zonally resilient deployments and simpler workloads.

Figure 1: GWLB can be associated to multiple consumer resources, including both Standard Public Load Balancers and Virtual Machines with Standard Public IPs. When GWLB is chained to the front-end configuration or VM NIC IP configuration, unfiltered traffic from the internet will first be directed to the GWLB and then reach the configured NVAs. The NVAs will then inspect the traffic and send the filtered traffic to the final destination, the consumer application hosted on either the load balancer or virtual machine.

What’s new with Gateway Load Balancer

GWLB shares most of its concepts with the Standard Load Balancers that customers are familiar with today. You’ll have most of the same components, such as frontend IPs, load balancing rules, backend pools, health probes, and metrics, but you’ll also see a new component unique to GWLB—VXLAN tunnel interfaces.

VXLAN is an encapsulation protocol utilized by GWLB. This allows traffic packets to be encapsulated and decapsulated with VXLAN headers as they traverse the appropriate data path, all while maintaining their original source IP and flow symmetry without requiring Source Network Address Translation (SNAT) or other complex configurations like user-defined routes (UDRs).

The VXLAN tunnel interfaces are configured as part of the GWLB’s back-end pool and enable the NVAs to isolate “untrusted” traffic from “trusted” traffic. Tunnel interfaces can either be internal or external and each backend pool can have up to two tunnel interfaces. Typically, the external interface is used for “untrusted” traffic—traffic coming from the internet and headed to the appliance. Correspondingly, the internal interface is used for “trusted” traffic—traffic going from your appliances to your application.
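The VXLAN encapsulation described above wraps each original packet in an 8-byte outer header carrying a 24-bit VXLAN Network Identifier (VNI), which is how the "internal" and "external" tunnel interfaces are distinguished while the inner packet keeps its original source IP. A minimal sketch of packing and unpacking that header (per the RFC 7348 layout; the VNI values used are illustrative):

```python
import struct

# Sketch: the 8-byte VXLAN header (RFC 7348) used for encapsulation.
# Byte 0: flags (the I bit, 0x08, marks the VNI as valid); bytes 1-3 reserved;
# bytes 4-6: the 24-bit VNI; byte 7 reserved.

VXLAN_FLAG_VNI_VALID = 0x08

def pack_vxlan_header(vni: int) -> bytes:
    # VNI sits in the top 3 bytes of the last 32-bit word; low byte is reserved.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def unpack_vni(header: bytes) -> int:
    flags, vni_field = struct.unpack("!B3xI", header)
    assert flags & VXLAN_FLAG_VNI_VALID, "VNI flag not set"
    return vni_field >> 8
```

In a GWLB deployment, the encapsulation and decapsulation are handled by the platform and the NVA; this sketch only illustrates why flow symmetry and the original source IP survive the trip through the appliance.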

Contoso case study

To better understand the use case of GWLB, let’s dive deeper into the example of Contoso, a retail company.

Who is Contoso?

Contoso is a retail company that uses Azure Load Balancer today to make their web servers supporting their retail platform regionally resilient. In the past few years, they’ve experienced exponential growth and now serve over 20 million visitors per month. When faced with the need to scale their retail platform, they chose Azure Load Balancer because of its high performance coupled with ultra-low latency. As a result of their success, they’ve begun to adopt stricter security practices to protect customer transactions and reduce the risk of harmful traffic reaching their platforms.

What does Contoso’s architecture look like today?

One of their load balancers supporting the eastus region is called contoso-eastus and has a front-end IP configuration with the public IP 203.0.113.62. Today, traffic headed to 203.0.113.62 on port 80 is distributed to the backend instances on port 80 as well.

What’s the problem?

The security team recently identified some potentially malicious IP addresses that have been attempting to access their retail platform. As a result, they’re looking to place a network-layer virtual firewall to protect their applications from IP addresses with poor reputations.

What’s the plan?

Contoso has decided to go with a third-party NVA vendor whose appliances the team has used in other contexts, such as smaller-scale applications or other internal-facing tools. The security team wants to keep the creation of additional resources to a minimum to simplify their NVA management architecture, so they decide to map one GWLB with an auto-scaling backend pool of NVAs using Azure VMSS to each group of load balancers deployed in the same region.

Deploying Gateway Load Balancer

The cloud infrastructure team at Contoso creates a GWLB with their NVAs deployed using Azure VMSS. Then, they chain this GWLB to their five Standard Public LBs for the eastus region. After verifying that their Data Path Availability and Health Probe Status metrics are 100 percent on both their GWLB and on each chained Standard Public LB, they run a quick packet capture to ensure everything is working as expected.

What happens now?

Now, traffic packets whose destination is any of the frontend IPs of the Standard Public LBs for eastus will be encapsulated using VXLAN and sent to the GWLB first. At this point, the firewall NVAs will decapsulate the traffic, inspect the source IP, and determine whether this traffic is safe to continue on towards the end application. The NVA will then re-encapsulate the traffic packets that meet the firewall’s criteria and send them back to the Standard LB. When the traffic reaches the Standard LB, the packets will be decapsulated, meaning that the traffic will appear as if it came directly from the internet, with its original source IP intact. This is what we mean by transparent NVA insertion: Contoso’s retail platform applications will behave exactly as they did before, without ever knowing that the packet was inspected or filtered by a firewall appliance prior to reaching the application server.

Gateway Load Balancer partners

Gateway Load Balancer supports a variety of NVA providers; you can learn more about each of our partners on our partners page.

Virtual firewalls

Check Point
Cisco
F5
Fortinet
Palo Alto Networks

Traffic observability

cPacket Networks
Glasnostic

Network security

Citrix
Trend Micro
Valtix

DDoS protection

A10 Networks

Learn more

Try out Gateway Load Balancer today with the help of our quickstart tutorials, or read more about Gateway Load Balancer on our public documentation.
Source: Azure