Announcing new investments to help accelerate your move to Azure

As businesses adapt to new ways of operating, IT leaders are presented with increasing challenges to achieving sustainable growth. Ensuring your business continues to run without interruptions while adapting and transforming can be paramount. If your company is looking for options to migrate your server estate to the cloud, we have news for you.

Outstanding offers

Extended Security Updates and Azure Migration and Modernization Program support for larger migration projects.

Microsoft has great offers for Windows Server and SQL Server customers looking to move to the cloud. Azure offers free Extended Security Updates for SQL Server 2012 and Windows Server 2012/2012 R2, giving you three additional years beyond the 10 years granted by Microsoft Support to modernize your applications. Microsoft also allows customers to save significantly when running their workloads in Azure Virtual Machines with Azure Hybrid Benefit, which, combined with reserved instances, can enable up to 85 percent savings compared to other cloud services.
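As a rough illustration of how two stacked discounts can approach the quoted "up to 85 percent" figure, here is a back-of-the-envelope sketch. The percentages used below are placeholders for illustration only, not actual Azure rates:

```python
def combined_savings(hybrid_benefit_discount: float, reserved_instance_discount: float) -> float:
    """Combined fractional savings when two independent discounts stack
    multiplicatively (illustrative model, not actual Azure pricing)."""
    effective_rate = (1 - hybrid_benefit_discount) * (1 - reserved_instance_discount)
    return 1 - effective_rate

# Placeholder figures: a ~40% licensing discount combined with a ~72%
# reserved-instance discount yields roughly 83% total savings.
print(f"{combined_savings(0.40, 0.72):.0%}")  # → 83%
```

The key point of the model is that discounts compound on the remaining price, not add together, which is why two moderate discounts can approach a very large total saving.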

To help support your migration and modernization to the cloud, mitigating potential unforeseen risks and costs, Microsoft is expanding the Azure Migration and Modernization Program (AMMP). In the past years, AMMP has helped thousands of customers like Jotun unlock the value of the cloud, bringing together the right mix of resources and best practices at every stage of their journey. We’re now investing significantly more to support your largest Windows/SQL Server migration and modernization projects—up to 2.5 times larger based on project eligibility. This investment will help with your migration in two ways: partner assistance with planning and moving your workloads, and Azure credits that offset transition costs during your move to Azure SQL Managed Instance and Azure SQL Database.

Unparalleled innovation

Unlock your SQL Server and Windows Server's greatest potential in Azure, with unique capabilities and more options for true hybrid cloud flexibility. With Microsoft, you can choose the option that aligns best with your business needs, migrating and modernizing servers with solutions such as Windows Server and SQL Server running in virtual machines (VMs), Azure SQL managed databases, and hybrid management through Azure Arc.

When you have your VMs in Azure, management becomes simpler with dedicated solutions such as Azure Automanage and Windows Admin Center in the Azure portal. Azure SQL lets you spend more time innovating and less time patching, updating, and backing up your databases: Azure is the only cloud with evergreen SQL, which automatically applies the latest updates and patches so your databases are always up to date, eliminating end-of-support hassles. Azure SQL also features built-in AI that automatically tunes every database for peak performance, delivering leading price-performance.

Unmatched security

Security is foundational for Azure. If your company is running SQL Server 2012 or Windows Server 2012/2012 R2, now is the time to assess those environments, as they reach end of support on July 12, 2022, and October 10, 2023, respectively. The end of support means the end of security updates, which may leave your business exposed to security risks and compliance concerns. Azure offers three years of free Extended Security Updates. You can learn more here.

Multilayered security is provided across physical datacenters, infrastructure, and operations, with cybersecurity experts actively monitoring to protect your Windows Server and SQL Server workloads, including in hybrid deployments with Azure Arc. Microsoft has more than 3,500 cybersecurity professionals and spends $1 billion annually on security to help protect, detect, and respond to threats, so you can grow a safe and secure business. The Azure platform is a leader in compliance coverage, with more than 90 compliance offerings that allow you to proactively safeguard your data and streamline compliance. Our commitment to privacy is uncompromising. Our core privacy principle is that you own your data; we will never use it for marketing or advertising purposes, giving you confidence around data storage and security.

Get started

Learn more about your end of support options for SQL Server 2012 and Windows Server 2012/R2.

Get started with the Azure Migration and Modernization Program (AMMP). Talk to your Microsoft representative to understand eligibility requirements and submit your Windows Server and SQL Server project today.
Source: Azure

Azure Cost Management and Billing updates – April 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management and Billing comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management and Billing can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Summarized totals in the cost analysis preview
Download your Azure prices as a ZIP file
Unlock cloud savings on the fly with autoscale on Azure
What's new in Cost Management Labs
New ways to save money with Azure
New videos and learning opportunities
Documentation updates
Join the Azure Cost Management and Billing team

Let's dig into the details.

Summarized totals in the cost analysis preview

I’ve talked about how the cost analysis preview is the future of analytics and insights in Cost Management. While what we have today is a solid foundation that most prefer over classic cost analysis, there’s still a lot left before we can fully replace the classic experience. This month’s update is one small step in that direction with the addition of the Total, Average, and Budget key performance indicators (KPIs) at the top of cost analysis.

The Total KPI shows the summarized total across all rows. If you have charges in multiple currencies, cost is normalized to USD to show an overall total. Most views default to showing actual, billed charges. The Reservations view shows amortized costs to break down and allocate your reservation purchases to the resources that received the prepurchase benefit. As a reminder, if you’d like to switch to amortized cost from another view, you can select the Customize command at the top. To learn more about amortization, see View amortized reservation costs.

The Average KPI shows the average daily cost for the period. If your period includes the current day, the average is calculated up to and including yesterday, but does not include partial cost from the current day, since the data for that day is not yet complete. Keep in mind that every service submits usage on a different timeline, which affects the average calculation. Learn more about data latency and refresh processing at Understand Cost Management data.
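That "exclude the partial current day" behavior is simple to mirror in your own reporting. Here is a minimal sketch; the helper function and sample figures are hypothetical, not the Cost Management implementation:

```python
from datetime import date

def average_daily_cost(daily_costs: dict, today: date) -> float:
    """Average daily cost over a period, excluding the current (partial) day,
    as the Average KPI is described. Hypothetical helper for illustration."""
    complete_days = {d: c for d, c in daily_costs.items() if d < today}
    if not complete_days:
        return 0.0
    return sum(complete_days.values()) / len(complete_days)

costs = {
    date(2022, 4, 1): 120.0,
    date(2022, 4, 2): 80.0,
    date(2022, 4, 3): 45.5,   # current day: data incomplete, so excluded
}
print(average_daily_cost(costs, today=date(2022, 4, 3)))  # → 100.0
```

Note that the third day's partial charge is ignored entirely rather than averaged in, which keeps the KPI from being dragged down by incomplete data.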

The Budget KPI shows the monthly budget you have configured, with a quick link to edit the budget. If you don’t have a budget yet, you’ll see a link to create a new budget. Budgets created from the cost analysis preview are preconfigured with alerts when your actual cost exceeds 50 percent, 80 percent, or 95 percent of your budget, or when your forecast exceeds 100 percent of your budget for the month. You can add additional recipients or update alerts from the Budgets page.
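To make those default thresholds concrete, here is a hedged sketch that evaluates them relative to a configured budget. The function name and return format are illustrative, not the service's actual alerting logic:

```python
def budget_alerts(cost: float, forecast: float, budget: float) -> list:
    """Return which of the default preview-budget alert conditions fire:
    50/80/95 percent of budget on actual cost, 100 percent on forecast.
    Illustrative sketch only, not Cost Management's implementation."""
    alerts = []
    for pct in (50, 80, 95):
        if cost >= budget * pct / 100:
            alerts.append(f"actual cost >= {pct}% of budget")
    if forecast >= budget:
        alerts.append("forecast >= 100% of budget")
    return alerts

# $850 spent and $1,050 forecast against a $1,000 budget trips three alerts.
print(budget_alerts(cost=850, forecast=1050, budget=1000))
```

In this example the 50 and 80 percent actual-cost alerts fire, the 95 percent alert does not, and the forecast alert fires because the forecast exceeds the full budget.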

You may have seen these rolling out over the past few months, but they are now available to everyone. If you’re interested in what’s coming next, check out What’s new in Cost Management Labs below. Labs includes additional previews you might be interested in, like charts and grouping related resources. Check out the latest updates in cost analysis preview and let us know what you’d like to see next.

Download your Azure prices as a ZIP file

One important aspect of optimizing cost is comparing prices across different resource SKUs and regions. This can be cumbersome when using the portal or Azure pricing calculator but is a perfect scenario for automation with the Cost Management Price Sheets API. Now you can download your Azure prices as a ZIP file with multiple, smaller CSV files to make parsing the file easier. This helps avoid issues where the file can grow too big to be opened in tools like Microsoft Excel.
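If you are scripting against the download, the ZIP-of-CSVs format is straightforward to consume with standard tooling. The sketch below builds a tiny stand-in ZIP so it is self-contained; the column names are illustrative, so check the actual download for the exact schema:

```python
import csv
import io
import zipfile

def load_price_rows(zip_bytes: bytes) -> list:
    """Parse every CSV inside a price-sheet ZIP into one list of row dicts.
    Column names in the sample data below are illustrative assumptions."""
    rows = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if not name.endswith(".csv"):
                continue
            with zf.open(name) as f:
                reader = csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig"))
                rows.extend(reader)
    return rows

# Stand-in for a downloaded price sheet: two small CSV parts in one ZIP.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("part1.csv", "meterName,unitPrice\nD2s v3,0.096\n")
    zf.writestr("part2.csv", "meterName,unitPrice\nE2s v3,0.126\n")

rows = load_price_rows(buf.getvalue())
print(len(rows))  # → 2
```

Because each CSV part is small, you can also process them one file at a time instead of concatenating, which keeps memory use flat even for very large price sheets.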

Learn more about the Price Sheets API and update your scripts today.

Unlock cloud savings on the fly with autoscale on Azure

Unused cloud resources can put an unnecessary drain on your computing budget. Unlike with legacy on-premises architectures, in the cloud there is no need to over-provision compute resources for times of heavy usage.

Autoscaling is one of the value levers that can help unlock cost savings for your Azure workloads by automatically scaling up and down the resources in use to better align capacity to demand. This practice can greatly reduce wasted spend for those dynamic workloads with inherently “peaky” demand.
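The core idea of aligning capacity to demand can be sketched as a toy threshold rule. In practice Azure autoscale rules are configured declaratively rather than coded like this; the function, thresholds, and bounds below are all illustrative:

```python
def desired_instances(current: int, cpu_percent: float,
                      scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Toy threshold-based autoscale rule: add an instance when average CPU
    is high, remove one when it is low, always staying within bounds.
    Illustrative only; not how Azure autoscale is actually configured."""
    if cpu_percent > scale_out_at:
        current += 1
    elif cpu_percent < scale_in_at:
        current -= 1
    return max(minimum, min(maximum, current))

print(desired_instances(3, 85.0))  # high load: scale out → 4
print(desired_instances(3, 20.0))  # low load: scale in   → 2
print(desired_instances(1, 10.0))  # already at floor     → 1
```

The gap between the scale-out and scale-in thresholds is deliberate: it prevents "flapping," where capacity oscillates up and down around a single threshold.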

To learn more, read Unlock cloud savings on the fly with autoscale on Azure.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

New: Cost Management tutorials
Whether you’re just getting started or looking to learn more about specific features, tutorials are now a click away from the Cost Management overview in Cost Management Labs.
Update: Access preview views from classic cost analysis – Now available in the public portal
Get one-click access to the new preview views from classic cost analysis in the View menu. You can see this in classic cost analysis in Cost Management Labs.
Update: Average cost in the cost analysis preview – Now available in the public portal
See your average daily cost at the top of the cost analysis preview. You can opt in using Try Preview.
Update: Budgets in the cost analysis preview – Now available in the public portal
Quickly create and edit budgets directly from the cost analysis preview. If you don’t have a budget yet, you’ll see a suggested budget based on your forecast. You can opt in using Try Preview.
Update: Anomaly detection alerts – Now enabled by default in Labs
Subscribe to automatic email alerts when a new anomaly has been detected. Anomaly detection is only available for subscriptions in the cost analysis preview. You can opt into this preview using Try Preview and then configure anomaly alerts from the Alerts page.
Update: Grouping SQL databases and elastic pools – Now enabled by default in Labs
Get an at-a-glance view of your total SQL costs by grouping SQL databases and elastic pools under their parent server in the cost analysis preview. You can opt in using Try Preview.
Charts in the cost analysis preview
View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try Preview.
View cost for your resources
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that particular resource.
Change scope from the menu
Change scope from the menu for quicker navigation. You can opt in using Try Preview.

Of course, that's not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money with Azure

Lots of cost optimization improvements over the last month! Here are some of the generally available offers you might be interested in:

On-demand capacity reservations for virtual machines.
Ebsv5 virtual machines increase remote storage performance.
Azure HBv3 virtual machines for HPC now upgraded.
Cosmos DB autoscale RU/s entry point is 4x lower.
Azure Database for PostgreSQL – Flexible Server now supports more high availability regions and US Gov Virginia and US Gov Arizona for Azure Government.
Azure Database for MySQL – Flexible Server in China East 2 and China North 2.
Azure Batch supports Spot Virtual Machines.
IBM WebSphere on Azure with evaluation licensing.
Azure Stream Analytics in 10 new regions.

And here are some of the new previews:

Capacity reservation support in AKS.
Azure Dedicated Host support in AKS.
Arm64-based virtual machines can deliver up to 50% better price-performance.
NC A100 v4 virtual machines accelerate AI applications.
Virtual machines with Ampere Altra Arm-based processors.
DCsv3 virtual machines available in Switzerland and West US.
Azure SignalR Service Premium tier.

New videos and learning opportunities

Here are a couple new videos you might be interested in:

Reduce your costs with Azure Spot Virtual Machines (18 minutes).
Announcing Microsoft Azure FX Series Virtual Machine General Availability (2 minutes).

Follow the Azure Cost Management and Billing YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Azure Cost Management and Billing.

Documentation updates

Here are a few documentation updates you might be interested in:

New: Prepay for Virtual machine software reservations.
New: View amortized reservation costs.
Identify anomalies and unexpected changes in cost now covers anomaly detection.
Analyze Azure costs with the Power BI App includes details about how cost may differ from the EA portal.
Save and share customized views includes a note about how many views you can save.

Want to keep an eye on all of the documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

Join the Azure Cost Management and Billing team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Azure Cost Management and Billing team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Azure Cost Management and Billing updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Azure Cost Management and Billing.

We know these are trying times for everyone. Best wishes from the Azure Cost Management and Billing team. Stay safe and stay healthy.
Source: Azure

Enhance your classroom experience with Azure Lab Services—April 2022 update

Azure Lab Services offers classroom labs for higher education, K-12 institutions, and commercial organizations that don't want to use on-premises hardware but would rather harness the power of the cloud to host labs for students or users. We are excited to announce major updates to Azure Lab Services, including enhanced lab creation, improved backend reliability and access performance, extended virtual network support, easier lab administration via new roles, improved cost tracking via the Azure Cost Management service, availability of a PowerShell module and .NET API SDK for advanced automation and customization, and integration with the Canvas learning management system. Learn more about the new update and how to use it.

Along with significant reliability enhancements to the backend and improvements to lab creation and access performance, this major update brings a whole slew of additional features for the three key personas that use the service: IT departments and administrators, educators, and students.

IT and administrators

For IT departments and administrators, we have introduced the concept of a lab plan in place of the lab account to provide more control over the creation, configuration, and management of labs. For ease of administration, new roles have been created to provide granular control for the different people managing labs in a large organization.

Creating a large number of labs with many virtual machines requires additional vCPUs, which you have to request from us. With this update, vCPU capacity management for your subscription is improved, and you no longer share vCPU capacity with others using the service. We have also made it easier for you to track costs for your lab resources in Azure Cost Management. We have replaced virtual network peering with virtual network injection, which gives you more control over the network for lab virtual machines. In your own subscription, create a virtual network in the same region as the lab, delegate a subnet to Azure Lab Services, and you're off and running.

For advanced automation, deployment, configuration, and management, we offer a PowerShell module and a .NET API SDK. The Azure Lab Services PowerShell module will be integrated with the Azure PowerShell module and will be released in early February. In alignment with global compliance and regulatory requirements around data residency, we now store customer data in the regions where the labs are set up.

Educators

For all the educators and instructors using the service, we have added new functionality to improve their experience. Azure Lab Services can now be integrated within Canvas, a popular learning management system. Educators can use Canvas to create and configure labs for the students. Students can connect to the virtual machine from inside their course in Canvas. We have improved the auto-shutdown feature of the virtual machine. Auto-shutdown settings are now available for all operating systems. In addition, we have improved idle detection based on resource usage. For more flexibility, an instructor or IT Administrator can choose to skip the virtual machine template creation process if they already have an image ready to use or want to quickly deploy virtual machines for their lab.

Students

Student experiences have also improved. Students can now redeploy their virtual machine without losing data if they have issues accessing or using it. If the lab is set up to use Azure AD group sync, there is no longer a need to send an invitation email for students to access their virtual machines; one is assigned to each student automatically.

Learn more

We are eager to have you use our new and improved service to realize your educational, learning, and training scenarios no matter what industry you work in. Contact us directly or get started today to use the enhanced experience!
Source: Azure

How Microsoft measures datacenter water and energy use to improve Azure Cloud sustainability

One of the biggest topics of discussion at COP26, the global climate conference held in November 2021, was how a lack of reliable and consistent measurement hampers progress on the path to Net Zero. I have been reflecting on this issue and, on this Earth Day, I would like to provide an update on how we are measuring energy and water use at our datacenters to improve sustainability across the Azure Cloud.

Today, we’re sharing an important update on how Microsoft, and our datacenters, are helping to solve our part of this measurement challenge.

While the environmental goals are similar, each industry has unique challenges in measuring its carbon emissions to build its sustainability strategy. It’s one of the key reasons we, together with ClimateWorks Foundation and 20 other leading organizations, launched the Carbon Call. It’s also why we developed Microsoft Cloud for Sustainability, an Azure-based platform that allows organizations to combine disparate data sources into one place and help provide insights into how to improve their sustainability approaches.

You’ve told us just how important measuring energy and water consumption from our datacenters is when taking sustainability into account in commercial decisions. Below you will see, for the first time, our datacenter PUE (Power Usage Effectiveness) and WUE (Water Usage Effectiveness) metrics. To put these figures in context, we set design goals—our theoretical estimates of the most efficiently we can operate our datacenters—and measure our actual efficiencies against them. These targets can vary between datacenter generations and usage; for instance, newer datacenter generations, as well as datacenters operating at peak utilization, are more efficient. We track these statistics at a global level and by our operating geographies—Americas, Asia Pacific, and EMEA (Europe, the Middle East, and Africa).

Understanding Power Usage Effectiveness (PUE)

PUE is an industry metric that measures how efficiently a datacenter consumes the energy that powers it, including the systems that power, cool, and light the facility and run the servers and data networks. The closer the PUE number is to “1,” the more efficient the use of energy.

While local environment and infrastructure can affect how PUE is calculated, there are also slight variations across providers. Here’s the simplest way to think about PUE.
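The ratio itself can be expressed in a couple of lines. This is a hedged sketch of the standard industry definition (total facility energy divided by IT equipment energy); the function and figures are illustrative, not Microsoft's reporting code:

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total energy entering the facility divided
    by the energy consumed by IT equipment alone. A value of 1.0 would mean
    every kilowatt-hour reaches the servers."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

# A facility drawing 1.12 kWh for every 1 kWh of IT load corresponds to the
# design PUE of 1.12 cited for the newest datacenter generation.
print(pue(1.12, 1.0))  # → 1.12
```

Everything above 1.0 is overhead: cooling, power distribution losses, lighting, and so on, which is why lowering PUE toward 1 is the design goal.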

We design and build our datacenters toward the optimum PUE figure, and we can predict that figure with a high degree of accuracy. As we constantly innovate, we factor these changes into our datacenter designs to get as close to “1” as feasible. Our newest generation of datacenters has a design PUE of 1.12, and with each new generation we strive to become even more efficient. In the chart below, the blue bars show our estimated, or design, PUE figures, while the grey bars indicate our actual PUE figures. As you can see, in Asia Pacific our actual PUE is higher; that’s due in part to higher ambient temperatures in the region, which necessitate additional cooling.

In almost every region, our actual operating PUE is more efficient than our designs.

Understanding Water Usage Effectiveness (WUE)

Water Usage Effectiveness (WUE) is another key metric relating to the efficient and sustainable operations of our datacenters and is a crucial aspect as we work towards our commitment to be water positive by 2030.

WUE is calculated by dividing the number of liters of water used for humidification and cooling by the total annual amount of power (measured in kWh) needed to operate our datacenter IT equipment.
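That definition translates directly into a one-line calculation. This is a hedged sketch using made-up numbers purely for illustration, not Microsoft's actual reported figures:

```python
def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water used for humidification and
    cooling per kWh of IT equipment energy, per the definition above."""
    return water_liters / it_energy_kwh

# Illustrative numbers only: 49,000 L of cooling water against 100,000 kWh
# of IT load gives a WUE of 0.49 L/kWh.
print(wue(49_000, 100_000))  # → 0.49
```

Lower is better: a datacenter that cools with outside air for most of the year uses fewer liters per kWh than one relying on water-cooled chillers.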

Like PUE, there are variables that can impact WUE, many of which relate to the location of the datacenter. Humid locations often have more atmospheric water, while arid locations have very little. Datacenters in colder parts of the world, like Sweden and Finland, operate in naturally cooler environments and so require less water for cooling. Our datacenter designs minimize water use. The chart below shows (in blue) our estimated, or design, WUE figure and (in grey) our actual WUE figure. Again, Asia Pacific is higher due to higher ambient temperatures and, as a result, the need in some places for water-cooled chillers.

We continue to adopt water-reduction technologies, such as those in our Phoenix, Arizona datacenter, where we use direct outside air for most of the year to cool servers. Otherwise, we cool through direct evaporation, which requires a fraction of the water used by conventional water-based cooling systems such as water-cooled chillers.

Furthermore, by powering our datacenter with power from the Sun Streams 2 Solar Project owned by local partner, Longroad Energy, we’re displacing the water needed in the traditional electricity generation process and expect to save 356 million liters of water annually.

Scope 3 and supply chain

As we shared in March with our annual sustainability report, we made good progress on a number of our goals. Across the company’s operations, we saw an overall reduction in our Scope 1 and Scope 2 emissions of about 17 percent year over year, through our purchasing of renewable energy. At the same time, we also saw a rise in our Scope 3 emissions, which increased about 23 percent year over year.

We know that Scope 3 emissions (representing the total emissions across a company’s entire value chain) are the most difficult to control and reduce, because we can often only influence change. We know this is a long-term effort and this year we have increased our focus on operational discipline that is rooted in reliable data. We’ve also been working with partners across the industry, including Infrastructure Masons on carbon transparency within the datacenter supply chain, and will have exciting news to share at the Datacloud Global Congress on April 25 to 27.

Learn more

We know just how crucial data transparency and consistency are in helping our customers make the correct choices for their business, and hope that today’s announcement on our PUE and WUE data will be an important step forward in informing decisions about their sustainability strategies.

To learn more about our datacenter operations and commitments in action today, you can visit:

Microsoft sustainability
Azure sustainability
Microsoft Azure's global infrastructure
Take a virtual tour of Microsoft’s datacenters

Source: Azure

Microsoft announces new collaboration with Red Button for attack simulation testing

As we highlighted in our latest attack trends report, Distributed Denial-of-Service (DDoS) attacks are one of the biggest security concerns today. Whether in the cloud or on-premises, DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet. Planning and preparing for a DDoS attack is crucial to a well-vetted incident management response plan.

Today, Microsoft is excited to announce a new collaboration with Red Button, offering our customers an additional DDoS attack simulation testing provider to choose from. With Red Button’s DDoS Testing service suite, you will be able to work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment. Simulation testing allows you to assess your current state of readiness, identify gaps in your incident response procedures, and develop a proper DDoS response strategy.

Red Button DDoS Testing

Red Button’s DDoS Testing service suite includes three stages:

1. Planning session

Red Button experts meet with your team to understand your network architecture, assemble technical details, and define clear goals and testing schedules. This includes planning the DDoS test scope and targets, attack vectors, and attack rates. The joint planning effort is detailed in a test plan document.

2. Controlled DDoS attack

Based on the defined goals, the Red Button team launches a combination of multi-vector DDoS attacks. The test typically lasts three to six hours. Attacks are securely executed using dedicated servers and are controlled and monitored using Red Button’s management console.

3. Summary and recommendations

The Red Button team provides you with a written DDoS Test Report outlining the effectiveness of DDoS mitigation. The report includes an executive summary of the test results, a complete log of the simulation, a list of vulnerabilities within your infrastructure, and recommendations on how to correct them.

Here is an example of a DDoS Test Report from Red Button:

In addition, Red Button offers two other service suites that can complement the DDoS Testing service suite:

DDoS 360 is an “all included” annual service that includes the DDoS Testing, DDoS Hardening, DDoS team skills development, and DDoS Incident Response services. The program consists of multiple year-round activities carried out by Red Button’s top DDoS experts, which includes extensive pre-attack activities to strengthen your technological infrastructure and improve the skills of your teams as well as a dedicated incident response expert team in the event of an attack.
DDoS Incident Response (IR) is a 30-day incident response service that consists of three phases. When under a DDoS attack or DDoS threat (for example, a DDoS ransom threat), Red Button DDoS experts are immediately assigned and work closely with your security and IT teams to analyze the attack and apply the appropriate mitigations. Once the attack has been fully mitigated, Red Button audits your network architecture and DDoS protection system configuration, including running a DDoS test, and provides detailed recommendations for hardening and optimization to prevent future attacks. Lastly, Red Button conducts DDoS training for your teams to increase your skills and readiness, and helps you build a DDoS playbook with detailed procedures and activities to prepare for any future attack.

Azure DDoS simulation testing policy

Red Button’s simulation environment is built within Azure. You can only simulate attacks against Azure-hosted public IP addresses that belong to an Azure subscription of your own, which will be validated by Azure Active Directory (Azure AD) before testing. Additionally, these target public IP addresses must be protected under Azure DDoS Protection.

You may only simulate attacks using our approved testing partners:

Red Button.
BreakingPoint Cloud.

Learn more

Red Button: DDoS Services—Protection Consulting and Testing.
Azure DDoS Protection simulation testing partners: Azure DDoS Protection simulation testing documentation.
Microsoft penetration testing guidelines: Penetration testing documentation.
Azure DDoS Protection Standard product page.
Azure DDoS Protection Standard documentation.
DDoS Protection best practices.

Source: Azure

Azure Purview is now Microsoft Purview

In September of 2021, we announced the highly anticipated general availability of Azure Purview—a cloud-native data governance solution to enable organizations of all sizes to manage and govern their on-premises, multicloud, and software as a service (SaaS) data. Since Azure Purview was brought onto the market, thousands of organizations including London Heathrow Airport, Grundfos, and illimity have collectively discovered tens of billions of data assets as well as served up millions of searches every month to empower knowledge workers to find valuable enterprise data quickly and easily. 

Organizations that use Azure Purview have a more holistic understanding of their hybrid data estate, which is always kept up to date with automated data discovery and sensitive data classification. In addition to empowering knowledge workers, this understanding, along with insights from sensitivity, business context, and relationships between data assets is also being used by teams working under the Chief Data Officers (CDO), the Chief Information and Security Officers (CIO and CISO) and the Chief Risk and Compliance Officers (CRO and CCO) to govern, protect, and manage data more effectively.

Traditional data management solutions rely on multiple unconnected, duplicative business processes, and a patchwork of software products augmented with custom code and point-wise integrations. Dozens of products are sometimes used together to address fragments of the data governance and compliance landscape, forcing Chief Data, Security, Compliance, and Legal Officers to stitch together solutions that don’t work together, expose infrastructure gaps, and are costly and complex to manage. A survey of US-based decision-makers showed that to meet their compliance and data-protection needs, almost 80 percent had purchased multiple products, and a majority had purchased three or more.¹ The result is increased operations costs, ineffective data governance, poor security outcomes, failed compliance audits, and damage to brand reputation. Additionally, as the threat landscape continues to evolve, the types of risks organizations face inevitably expand and extend well beyond the traditional cybersecurity risks. This means that risk roles within the organization are blurring, requiring a collaborative and cohesive approach across data, compliance, and risk officers, as each drives an integral part of an effective data strategy. We believe the new way to optimize your data strategy is to deliver a unified view of data in the organization across hybrid, multicloud environments by bringing together the business users of data with the protectors of data.

In the past, we have shared how Azure Purview and Microsoft 365 Compliance are used together to ensure consistent, automated application of sensitivity labels to data assets across the data estate to simplify how organizations understand their sensitive data.

Today, we are excited to introduce Microsoft Purview, a comprehensive set of solutions from Microsoft to help you govern, protect, and manage your entire data estate. By bringing together the former Azure Purview and the former Microsoft 365 Compliance portfolio under one brand, and over time a more unified platform, Microsoft Purview can help you understand and govern the data across your estate, safeguard that data wherever it lives, and improve your risk and compliance posture in a much simpler way than traditional solutions on the market today.

Microsoft Purview

Helps you gain visibility into assets across your entire data estate.
Leverages that visibility to manage end-to-end data risks and regulatory compliance.
Governs, protects, and manages data in a new, more comprehensive, and simpler way. 

Customers of the Azure Purview portal can now use the Microsoft Purview governance portal. For customers of Microsoft 365 E5 or Microsoft 365 E5 Compliance, check out the Microsoft Purview compliance portal to see what's new!

Get started with Microsoft Purview today

Watch a video introducing the new Microsoft Purview.
Get started quickly and easily with a new Microsoft Purview account to try the Microsoft Purview Data Map and Microsoft Purview Data Catalog.

¹ February 2022 survey of 200 US compliance decision-makers (n=100 with 500-999 employees; n=100 with 1,000+ employees), commissioned by Microsoft with MDC Research.
Source: Azure

Enhance your data visualizations with Azure Managed Grafana—now in preview

This blog has been co-authored by Ye Gu, Principal Program Manager.

Organizations are transforming their digital environments to increase agility and to operate more efficiently. We see this transformation in how customers migrate to the cloud and adopt cloud-native technologies and practices in their own environments. As their digital estates become increasingly complex and critical to their business operations, it becomes even more important to effectively manage and monitor their applications and infrastructure.

Grafana is a popular open-source analytics visualization tool that allows users to bring together logs, traces, metrics, and other disparate data from across an organization, regardless of where they are stored. Last year, we announced our strategic partnership with Grafana Labs to develop a Microsoft Azure managed service that lets customers run Grafana natively within the Azure cloud platform. Today, we are announcing that Azure Managed Grafana is available in preview. With Azure Managed Grafana, the Grafana dashboards our customers are familiar with are now integrated seamlessly with the services and security of Azure.

Seamless connection across Azure data sources and beyond

The Grafana application lets users easily visualize all their telemetry data in a single user interface. With Grafana's extensible architecture, users can visualize and correlate multiple data sources across on-premises, Azure, and multicloud environments. Azure Managed Grafana particularly optimizes this experience for Azure-native data stores such as Azure Monitor and Azure Data Explorer, making it easy for customers to connect to any resource in their subscription and view all resulting telemetry in a familiar Grafana dashboard.

Customers can preserve existing charts in the Azure portal that are used for monitoring. Through service-to-service integration, our customers can bring any chart in the Azure portal over to their Azure Managed Grafana instance with a one-click "pin to" operation, automating the entire migration process.

Azure Managed Grafana also provides a rich set of built-in dashboards for various Azure Monitor features to help customers easily build new visualizations. For example, some features with built-in dashboards include Azure Monitor application insights, Azure Monitor container insights, Azure Monitor virtual machines insights, and Azure Monitor alerts.

Secured access and sharing of Grafana dashboards with Azure Active Directory

In Azure Managed Grafana, customers can customize user permissions with specific roles and assignments stored in Azure Active Directory. These definitions are mapped transparently to Grafana’s internal roles, which enforces the actual access control. This integration enables both simplicity and consistency by allowing customers to manage users in their teams and authorize their use of a Grafana instance centrally through Azure Active Directory.

On the backend, Azure Managed Grafana can be configured to access Azure Monitor through a managed identity that was set up as part of the Grafana instance creation. Using this option, customers do not need to deal with another credential separately—though that is still possible if preferred.

Get started with Azure Managed Grafana

Try it free for the first 30 days from the Azure portal today.

Go to the Azure Managed Grafana product page.
Read the technical documentation.
Share feedback on Microsoft Q&A.
Join the Azure Observability Tech Community for detailed blogs and discussions.
Read the Grafana integrations with Azure Monitor blog.

Source: Azure

Feathr: LinkedIn’s feature store is now available on Azure

This blog post is co-authored by David Stein, Senior Staff Software Engineer, Jinghui Mo, Staff Software Engineer, and Hangfei Lin, Staff Software Engineer, all from Feathr team.

Feature store motivation

With advances in AI and machine learning, companies have started to use complex machine learning pipelines in various applications, such as recommendation systems, fraud detection, and more. These complex systems usually require hundreds to thousands of features to support time-sensitive business applications, and the feature pipelines are maintained by different team members across various business groups.

In these machine learning systems, we see many problems that consume a great deal of machine learning engineers' and data scientists' time, in particular duplicated feature engineering, online-offline skew, and low-latency feature serving.

Figure 1: Illustration of the problems a feature store solves.

Duplicated feature engineering

In an organization, thousands of features are buried in different scripts and in different formats; they are not captured, organized, or preserved, and thus cannot be reused and leveraged by teams other than those who generated them.
Because feature engineering is so important for machine learning models and features cannot be shared, data scientists must duplicate their feature engineering efforts across teams.

Online-offline skew

For features, offline training and online inference usually require different data serving pipelines—ensuring consistent features across different environments is expensive.
Teams are deterred from using real-time data for inference due to the difficulty of serving the right data.
Providing a convenient way to ensure data point-in-time correctness is key to avoid label leakage.
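
The point-in-time rule above can be sketched in a few lines of plain Python. This is a conceptual illustration with hypothetical data, not Feathr's implementation: for each training label, only the latest feature value observed at or before the label's timestamp may be joined.

```python
from bisect import bisect_right

# Hypothetical data for illustration: a feature's snapshot history and
# training labels, with timestamps as integers (e.g. epoch days).
feature_history = [(1, 3.0), (4, 5.0), (9, 9.0)]  # (ts, clicks_7d)
labels = [(2, 0), (5, 1)]                         # (ts, label)

feature_ts = [ts for ts, _ in feature_history]

def point_in_time_value(label_ts):
    """Return the latest feature value with timestamp <= label_ts.

    Only past feature values may be joined: joining the day-9 snapshot
    to a day-5 label would leak future information into training.
    """
    i = bisect_right(feature_ts, label_ts) - 1
    return feature_history[i][1] if i >= 0 else None

training_rows = [(ts, label, point_in_time_value(ts)) for ts, label in labels]
print(training_rows)  # [(2, 0, 3.0), (5, 1, 5.0)]
```

Note that the day-9 feature snapshot never appears in the output: even though it exists in the history, it lies in the future relative to both labels.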

Serving features with low latency

For real-time applications, fetching features from a database for real-time inference without compromising response latency and with high throughput can be challenging.
Accessing features with very low latency is key in many machine learning scenarios, and optimizations are needed, such as combining separate REST API calls to features into batched lookups.
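
The batching idea can be illustrated with a toy in-memory store. This is hypothetical code, not Feathr's API: the point is that the serving path fetches all features for an entity in one round trip rather than one call per feature.

```python
# Toy in-memory online store for illustration only (a production system
# would use a low-latency store such as Azure Cache for Redis). The point:
# fetch all features for an entity in ONE round trip, not one call each.
class OnlineFeatureStore:
    def __init__(self):
        self._table = {}  # (entity_id, feature_name) -> value

    def put(self, entity_id, feature_name, value):
        self._table[(entity_id, feature_name)] = value

    def get_many(self, entity_id, feature_names):
        # One batched lookup; missing features come back as None.
        return {name: self._table.get((entity_id, name)) for name in feature_names}

store = OnlineFeatureStore()
store.put("user42", "clicks_7d", 5)
store.put("user42", "avg_session_minutes", 12.5)

# Model-serving path: a single batched call keeps tail latency low.
features = store.get_many("user42", ["clicks_7d", "avg_session_minutes", "purchases_30d"])
print(features)  # {'clicks_7d': 5, 'avg_session_minutes': 12.5, 'purchases_30d': None}
```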

To solve those problems, a concept called feature store was developed, so that:

Features are centralized in an organization and can be reused
Features can be served consistently between offline and online environments
Features can be served in real-time with low latency

Introducing Feathr, a battle-tested feature store

Developing a feature store from scratch takes time, and it takes much more time to make it stable, scalable, and user-friendly. Feathr is a feature store that has been used in production and battle-tested at LinkedIn for over six years, powering LinkedIn's entire machine learning feature platform with thousands of features in production.

At Microsoft, the LinkedIn team and the Azure team have worked very closely to open source Feathr, make it extensible, and build native integration with Azure. It’s available in this GitHub repository and you can read more about Feathr on the LinkedIn Engineering Blog.

Some of the highlights for Feathr include:

Scalable with built-in optimizations. For example, in internal use cases, Feathr has processed billions of rows and petabyte-scale data with built-in optimizations such as bloom filters and salted joins.
Rich support for point-in-time joins and aggregations: Feathr has high-performance built-in operators designed for feature stores, including time-based aggregation, sliding window joins, and look-up features, all with point-in-time correctness.
Highly customizable user-defined functions (UDFs) with native PySpark and Spark SQL support to lower the learning curve for data scientists.
Pythonic APIs to access everything with a low learning curve, integrated with model building so data scientists can be productive from day one.
Rich type system, including support for embeddings for advanced machine learning and deep learning scenarios. A common use case is building embeddings for customer profiles, which can then be reused across an organization in all its machine learning applications.
Native cloud integration with a simplified and scalable architecture, illustrated in the next section.
Feature sharing and reuse made easy: Feathr has a built-in feature registry so that features can be easily shared across different teams, boosting team productivity.
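
To make the salted-join optimization mentioned above concrete, here is a conceptual pure-Python sketch with hypothetical data. Feathr applies this technique inside its Spark-based join engine at scale; this toy version only demonstrates the key transformation.

```python
import random

# Conceptual sketch of a salted join. In a distributed join, one hot key
# can overload a single worker; salting spreads that key's rows across
# N_SALTS partitions while preserving the join result.
N_SALTS = 4
random.seed(0)

events = [("hot_user", i) for i in range(8)] + [("cold_user", 99)]
profiles = {"hot_user": "premium", "cold_user": "free"}

# 1. Salt the large, skewed side: append a random suffix to each key.
salted_events = [(f"{k}#{random.randrange(N_SALTS)}", v) for k, v in events]

# 2. Replicate the small side once per possible salt value.
salted_profiles = {f"{k}#{s}": tier for k, tier in profiles.items() for s in range(N_SALTS)}

# 3. Join on the salted key; the result equals the unsalted join.
joined = [(k.split("#")[0], v, salted_profiles[k]) for k, v in salted_events]
print(sorted(joined)[:2])  # [('cold_user', 99, 'free'), ('hot_user', 0, 'premium')]
```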

Feathr on Azure architecture

The high-level architecture diagram below illustrates how a user interacts with Feathr on Azure:

Figure 2: Feathr on Azure architecture.

A data or machine learning engineer creates features using their preferred tools (like pandas, Azure Machine Learning, Azure Databricks, and more). These features are ingested into offline stores, which can be either:

Azure SQL Database (including serverless), Azure Synapse Dedicated SQL Pool (formerly SQL DW).
Object storage, such as Azure Blob storage, Azure Data Lake Store, and more. The format can be Parquet, Avro, or Delta Lake.

The data or machine learning engineer can persist the feature definitions into a central registry, which is built with Azure Purview.
The data or machine learning engineer can join all the feature datasets in a point-in-time correct way, using the Feathr Python SDK with Spark engines such as Azure Synapse or Databricks.
The data or machine learning engineer can materialize features into an online store such as Azure Cache for Redis with Active-Active, enabling multi-primary, multi-write architecture that ensures eventual consistency between clusters.
Data scientists or machine learning engineers consume offline features with their favorite machine learning libraries, for example scikit-learn, PyTorch, or TensorFlow to train a model in their favorite machine learning platform such as Azure Machine Learning, then deploy the models in their favorite environment with services such as Azure Machine Learning endpoint.
The backend system makes a request to the deployed model, which makes a request to the Azure Cache for Redis to get the online features with Feathr Python SDK.

A sample notebook covering the full flow above is available in the Feathr repository for reference.

Feathr has native integration with Azure and other cloud services. The table below shows these integrations:

Feathr component | Cloud integrations
Offline store – Object Store | Azure Blob Storage, Azure ADLS Gen2, AWS S3
Offline store – SQL | Azure SQL DB, Azure Synapse Dedicated SQL Pools (formerly SQL DW), Azure SQL in VM, Snowflake
Online store | Azure Cache for Redis
Feature Registry | Azure Purview
Compute Engine | Azure Synapse Spark Pools, Databricks
Machine Learning Platform | Azure Machine Learning, Jupyter Notebook
File Format | Parquet, ORC, Avro, Delta Lake

Table 1: Feathr on Azure integration with Azure services.

Installation and getting started

Feathr has a Pythonic interface to access all Feathr components, including feature definition and cloud interactions, and is open sourced here. The Feathr Python client can be easily installed with pip:

pip install -U feathr

For more details on getting started, please refer to the Feathr Quickstart Guide. The Feathr team can also be reached in the Feathr community.

Going forward

In this blog, we introduced Feathr, a battle-tested feature store that is scalable and enterprise-ready, with native Azure integrations. We are dedicated to bringing more functionality to Feathr and its Azure integrations. Feel free to give feedback by raising issues in the Feathr GitHub repository.

Check out the GitHub repository for Feathr.
Reach out to the Feathr community.
Read Feathr open-source blog post from our LinkedIn colleagues.

Source: Azure

Accelerate your AI applications with Azure NC A100 v4 virtual machines

Real-world AI has transformed how people live during the past decade, across industries including media and entertainment, healthcare and life sciences, retail, automotive, financial services, manufacturing, and oil and gas. Speaking to a smart home device, browsing social media with recommended content, or taking a ride in a self-driving vehicle is no longer the future. With just your smartphone, you can now deposit checks without going to the bank. All of these advances have been made possible by new AI breakthroughs in software and hardware.

At Microsoft, we host our deep learning inferencing, cognitive science, and applied AI services on NC-series instances. The learnings and advancements made in these areas with regard to our infrastructure are helping drive the design decisions for the next generation of NC systems. Because of this approach, our Azure customers benefit directly from our internal learnings.

We are pleased to announce that the next-generation NC A100 v4 series is now available in preview. These virtual machines (VMs) come equipped with NVIDIA A100 80GB Tensor Core PCIe GPUs and 3rd Gen AMD EPYC™ Milan processors. These new offerings improve the performance and cost-effectiveness of a variety of GPU performance-bound real-world AI training and inferencing workloads, covering object detection, video processing, image classification, speech recognition, recommender systems, autonomous-driving reinforcement learning, oil and gas reservoir simulation, financial document parsing, web inferencing, and more.

The NC A100 v4-series offers three classes of VM ranging from one to four NVIDIA A100 80GB PCIe Tensor Core GPUs. It is more cost-effective than ever before, while still giving customers the options and flexibility they need for their workloads.

Size | vCPU | Memory (GB) | GPUs (NVIDIA A100 80 GB Tensor Core) | Azure Network (Gbps)
Standard_NC24ads_A100_v4 | 24 | 220 | 1 | 20
Standard_NC48ads_A100_v4 | 48 | 440 | 2 | 40
Standard_NC96ads_A100_v4 | 96 | 880 | 4 | 80

Compared to the previous NC generation (NCv3), based on NVIDIA Volta architecture GPUs, customers will see a performance boost of 1.5 to 2.5 times, due to:

Two times the GPU-to-host bandwidth.
Four times the vCPU cores per GPU VM.
Two times the RAM per GPU VM.
Seven independent GPU instances on a single NVIDIA A100 GPU through Multi-Instance GPU (MIG) on Linux OS.

Below is a sample of what we observed when running ResNet50 model training on the NC96ads_A100_v4 VM size (four A100 GPUs), compared to the existing NCv3 NC24s_v3 VM size (four V100 GPUs). Tests were conducted across a range of batch sizes, from one to 256.

Figure 1: ResNet50 results were generated using NC24s_v3 and NC96ads_A100_v4 virtual machine sizes.

For additional information on how to run this on Azure and additional results please check out our performance technical community blog.

With the latest addition to the NC series, you can train your models in around half the time and still stay within budget. You can seamlessly apply trained cognitive science models to applications through batch inferencing, run multimillion-atom biochemistry simulations for next-generation medicine, host your web and media services in the cloud for tens of thousands of end users, and so much more.

Learn more

The NC A100 v4 series is currently available in the South Central US, East US, and Southeast Asia Azure regions. It will be available in additional regions in the coming months.
For more information on the Azure NC A100 v4-series, please see:

Sign up for the preview of the NVIDIA A100 Tensor Core PCIe GPU in the Azure NC A100 v4-series. 
Performance of NC A100 v4-series.
Find out more about high-performance computing (HPC) in Azure.
Microsoft documentation for NC A100 v4-series VM.
Azure HPC optimized OS images.
Azure GPU virtual machines.

Source: Azure

Optimize your cloud investment with Azure Reservations

Continuous cost optimization can take place at all stages of an Azure workload's lifecycle, and your Azure subscription provides a very effective benefit to further optimize your investment when you are ready to deploy that workload.

For cloud workloads with consistent resource usage, you can buy reserved instances at a significant discount and reduce your workload costs by up to 72 percent compared to pay-as-you-go prices. Azure Reservations can be obtained by committing to one-year or three-year plans for virtual machines, Azure Blob storage or Azure Data Lake Storage Gen2, SQL Database compute capacity, Azure Cosmos DB throughput, and other Azure resources.

When you can predict and commit to needed capacity, it gives us visibility into your resource requirements in advance, allowing us to be more efficient in our operations. We can then pass the savings on to you. This benefit applies to both Windows and Linux virtual machines (VMs).

In addition, you can now combine the cost savings of reserved instances with Azure Hybrid Benefit when running on-premises and Azure workloads, to save up to 80 percent over pay-as-you-go pricing.
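
The quoted discounts are easy to sanity-check with simple arithmetic. The sketch below uses a hypothetical pay-as-you-go rate of 1.00 USD/hour purely for illustration; actual rates vary by region, VM size, and agreement.

```python
# Sanity-check the quoted savings with a hypothetical pay-as-you-go rate.
# 1.00 USD/hour is illustrative only; real rates vary by region and VM size.
HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate, discount=0.0):
    return HOURS_PER_YEAR * hourly_rate * (1 - discount)

payg = annual_cost(1.00)                         # pay-as-you-go
reserved = annual_cost(1.00, discount=0.72)      # reserved instance alone
reserved_ahb = annual_cost(1.00, discount=0.80)  # reserved + Azure Hybrid Benefit

print(f"pay-as-you-go:   ${payg:,.0f}/year")
print(f"3-year reserved: ${reserved:,.0f}/year")
print(f"reserved + AHB:  ${reserved_ahb:,.0f}/year")
```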

How to get your reservation

A reservation discount only applies to resources associated with Enterprise Agreement, Microsoft Customer Agreement, Cloud Solution Provider (CSP), or subscriptions with pay-as-you-go rates. These are billing discounts (paid upfront or monthly) and do not affect the runtime state of your resources. And do not worry, you will not pay any extra fees when you choose to pay monthly.

To determine which reservation to purchase, analyze your usage data in the Azure portal, or use the reservation recommendations available in Azure Advisor (VMs only), the Cost Management Power BI app, or the Reservation Recommendations REST API.

Reservation purchase recommendations are calculated by analyzing your hourly usage data over the last seven, 30, and 60 days.
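
Conceptually, a recommendation looks for the sustained baseline in that hourly usage: reserve what runs nearly all the time and pay as you go for bursts. The sketch below is a simplified illustration of that idea, not Azure's actual recommendation algorithm, which also factors in pricing and term options.

```python
# Simplified sketch of deriving a reservation quantity from hourly usage.
# This is NOT Azure's actual algorithm; it only illustrates reserving the
# sustained baseline and paying as you go for bursts above it.
def recommend_reserved_quantity(hourly_vm_counts, coverage=0.95):
    """Recommend reserving the number of VMs that were running in at
    least `coverage` of the observed hours (the sustained baseline)."""
    counts = sorted(hourly_vm_counts)
    # The quantity sustained for `coverage` of hours sits near the low end.
    idx = int(len(counts) * (1 - coverage))
    return counts[idx]

# Hypothetical week of hourly usage: a steady baseline of 4 VMs with
# occasional bursts up to 10.
usage = [4] * 150 + [6] * 10 + [10] * 8
print(recommend_reserved_quantity(usage))  # 4
```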

Simple and flexible

You can purchase Azure Reserved VM Instances in three easy steps—just specify your Azure region, virtual machine type, and term (one year or three years)—that's it.

Here is how it works: discounts apply to the resource usage matching the attributes you select when you buy the reservation. Attributes include the SKU, region (where applicable), and scope. The reservation scope determines where the reservation savings apply: you can scope a reservation to a subscription or a resource group. When you scope a reservation to a resource group, the discount applies only to that resource group, not the entire subscription.

You can manage reservations for Azure resources including updating the scope to apply reservations to a different subscription, changing who can manage the reservation, splitting a reservation into smaller parts, or changing instance size. Enhanced data for reservation costs and usage is available for Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) usage in Azure Cost Management and Billing. Those same customers can view amortized cost data for reservations and use that data to chargeback the monetary value for a subscription, resource group, or resource.

Capacity on demand

The ability for you to access compute capacity with service-level agreements, and ahead of actual VM deployments, is important to ensure the availability of mission-critical applications running on Azure. On-demand capacity reservations, now in preview, enable you to reserve compute capacity for one or more virtual machine size(s) in an Azure region or availability zone for any length of time. You can create and cancel an on-demand capacity reservation at any time, no commitment is required.

You can also exchange a reservation for another reservation of the same type, refund a reservation (up to 50,000 USD in a 12-month rolling window) if you no longer need it, or cancel a reserved instance at any time and return the remaining months to Microsoft.

Learn more

Purchase reservations from the Azure portal, APIs, PowerShell, or CLI. Cloud solution providers can use the Azure portal or Partner Center to purchase Azure Reservations.

To dive deeper, check out the learning module, “Save money with Azure Reserved Instances.”
Source: Azure