Azure Cost Management + Billing updates – July 2020

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Drilling into empty fields and untagged resources in cost analysis.
What's new in Cost Management Labs.
New ways to save money with Azure.
New videos and learning opportunities.
Documentation updates.

Let's dig into the details.

 

Drilling into empty fields and untagged resources in cost analysis

Azure Cost Management + Billing includes all usage, purchases, and refunds for your billing account. Seeing every line item in the full usage and charges file allows you to reconcile your bill at the lowest level, but since each record can represent different charge types, which may have different properties, aggregating them within cost analysis can result in groups of empty results. This is when you see groups like "no value," "other purchases," or "untagged". Now you can filter down to these empty values and group by other attributes to drill in and understand your costs.

You can drill into data in cost analysis by either adding an explicit filter using the filter pills at the top or by clicking any grouped segment in the charts. When you add a filter using the filter pills, you'll see a new "No value" option. This accounts for any and all scenarios where that property might be empty. Here are a few examples:

Other subscription resources: Services that aren't deployed to resource groups do not have a resource group name.
Untagged resources: There are three categories of costs without tags: resources that simply don't have tags applied (Untagged), resources with tags that aren't included in usage data (Tags not available), and charges that cannot be tagged at all (Tags not supported).
Purchases: Since purchases aren't associated with an Azure resource, you might see placeholders for Azure or Marketplace purchases. Azure purchases cover Microsoft offers, like reservations and Azure Active Directory. Marketplace purchases cover any third-party offers available from the Azure Marketplace.

After filtering down to "No value," group data by different properties to get a clearer picture of what that represents. As an example, group by publisher type or charge type to identify Marketplace costs or purchases, respectively, when you see meter and service properties are empty.

You can also click a chart segment to drill into these costs. Clicking any of the placeholders will automatically apply the "No value" filter pill for that property.

Use this new filtering capability to drill in to and understand your costs and let us know what you'd like to see next.
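For teams that want to reproduce this drill-down outside the portal, the same data is available through the Cost Management Query REST API, where empty dimension values can be spotted in code. The sketch below builds a query body grouped by resource group and flags rows whose group value came back empty (shown as "No value" in cost analysis); the scope URL, api-version, and helper names are our own assumptions to verify against the API reference.

```python
# Sketch: body for POST {scope}/providers/Microsoft.CostManagement/query
# (api-version is illustrative; check the current API reference).

def build_cost_query(group_by: str = "ResourceGroupName") -> dict:
    """Build a Cost Management Query request body grouped by one dimension."""
    return {
        "type": "ActualCost",
        "timeframe": "MonthToDate",
        "dataset": {
            "granularity": "None",
            "aggregation": {"totalCost": {"type": "Sum", "name": "PreTaxCost"}},
            "grouping": [{"type": "Dimension", "name": group_by}],
        },
    }

def empty_dimension_rows(rows, group_index: int = 1):
    """Rows whose grouped dimension is empty map to 'No value' in the portal."""
    return [row for row in rows if not row[group_index]]
```

Posting the body to the query endpoint for a subscription or billing scope returns rows of cost per group; feeding those rows to `empty_dimension_rows` isolates the charges you would otherwise drill into with the "No value" filter pill.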

 

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Show billing menu items on the Cost Management menu – Now available in the portal.
See all Cost Management + Billing menu items together in one place with quick navigation between scopes.

Of course, that's not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

 

New ways to save money with Azure

We're always looking for ways to help you optimize costs. Here's what's new this month:

Save even more on VMs with five-year Hybrid Benefit reservations.
Support for Azure Hybrid Benefit v2 VMs in Japan East.
Reduce your Data Lake storage costs with the new, ultra low-cost Archive tier.
More flexible options with ephemeral OS disks, enabling you to save on storage costs.

 

New videos and learning opportunities

For those visual learners out there, here are a few new videos you might be interested in:

Azure Cosmos DB: A cost-effective database for cloud native applications (part one) (12 minutes).
Azure Cosmos DB: A cost-effective database for cloud native applications (part two) (11 minutes).
How to optimize costs with Azure Kubernetes Service (AKS) and PostgreSQL (10 minutes).
Cost optimization with Windows containers (6 minutes).

Follow the Azure Cost Management + Billing YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Azure Cost Management + Billing.

 

Documentation updates

Here are a couple of documentation updates you might be interested in:

Noted that early termination fees are not being charged for reservation refunds.
Documented support for budget alert thresholds above 100 percent.
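The second update above is easy to picture in code. Here is a minimal sketch of the notification portion of a Microsoft.Consumption/budgets payload with an alert threshold above 100 percent; the property names follow the budgets API, but the amount, contact address, and the 1000-percent upper bound used for validation are assumptions to verify against the current documentation.

```python
# Sketch of budget properties with an over-100% alert threshold.
# The 1000% ceiling and contact e-mail are illustrative assumptions.

def budget_properties(amount: float, threshold_percent: float) -> dict:
    """Build the properties body of a Microsoft.Consumption/budgets resource."""
    if not 0 < threshold_percent <= 1000:
        raise ValueError("threshold must be between 0 and 1000 percent")
    return {
        "category": "Cost",
        "amount": amount,
        "timeGrain": "Monthly",
        "notifications": {
            "overspend": {
                "enabled": True,
                "operator": "GreaterThan",
                "threshold": threshold_percent,  # e.g. 110 alerts at 110% of budget
                "contactEmails": ["finance@example.com"],  # placeholder address
            }
        },
    }
```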

Want to keep an eye on all of the documentation updates? Check out the Cost Management + Billing doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

 

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Azure Cost Management + Billing updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

We know these are trying times for everyone. Best wishes from the Azure Cost Management team. Stay safe, and stay healthy!
Source: Azure

Eight ways to optimize costs on Azure SQL

Across the globe, businesses are emerging into a new normal, eager to restart or rebuild, but still operating in uncertain times. Optimizing costs and redirecting the spend to where it matters most is as important as ever, and many companies see the cloud as a way to control costs, build resilience, and accelerate time to market.

Customers choose Azure for a variety of reasons, but one of the main reasons is to lower their costs. What more could you do if you could save 80 percent or more on your database costs? We introduced the Azure SQL family of database services to help businesses cost-effectively adapt and scale to rapidly changing conditions. Here are the top eight ways you can optimize your data spend, with savings available wherever you are in your digital transformation journey.

1. Maintain business continuity in the cloud with free SQL Server licenses

Use your active Software Assurance benefit to get a free license for a secondary passive replica of every SQL Server in your datacenter, which you can use for disaster recovery on an Azure virtual machine.

2. Shift capex to opex with SQL Server on Azure Virtual Machines

Migrating your data to virtual machines hosted on Azure can yield real savings (over $10 million in three years1) by avoiding the cost and complexity of buying and managing your own physical servers. With SQL Server on Azure Virtual Machines, Azure manages the infrastructure while you purchase, install, configure, and manage your own software. Benefit even more when you register your VM with the SQL Server resource provider and operate more productively with a comprehensive set of manageability features like automated backups, patching, and Always On availability groups.

3. Protect your data with free security updates

For applications that rely on SQL Server 2008 or 2008 R2, activate three years of free extended security updates when you migrate to Azure Virtual Machines. Use Azure Site Recovery for easy migration to the cloud, with pre-configured SQL Server 2008 and 2008 R2 images available in the Azure Gallery.

4. Boost productivity with fully managed Azure SQL database services

Modernize your existing apps on evergreen, fully managed services that are always on the latest version of SQL Server, where backups, high availability, performance tuning, data protection, and more are performed on your behalf. A recent Forrester Consulting study indicated Azure SQL Database and Azure SQL Managed Instance provide up to a 238 percent return on investment in addition to productivity improvements up to 40 percent.2

“We’ve reduced our operating costs by about 70 percent or one-seventh of our previous IT budget. We’re using those savings to focus on research and development to make our product better and faster.” —Shoji Ueda, Senior Architect, Benesse Corporation

5. Use your SQL Server licenses for discounted rates on Azure

Save up to 80 percent3 versus other cloud providers with Azure Hybrid Benefit, a unique offer that maximizes the value of your on-premises licenses in the cloud. Unlike the License Mobility benefit on other clouds, Azure Hybrid Benefit covers your Windows Server licenses, too, and eases the migration of heavily virtualized SQL Server workloads by providing four vCores of SQL Database or SQL Managed Instance for every one core of SQL Server Enterprise. On top of this, you get 180 days of dual-use rights, so you can maintain your on-premises operation while migrating to Azure.
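The 4:1 conversion described above is simple arithmetic, but worth making explicit. A sketch, assuming the documented ratios of four General Purpose vCores per SQL Server Enterprise Edition core and a 1:1 conversion for Standard Edition cores (confirm current conversion rules in the Azure Hybrid Benefit licensing documentation before planning a migration):

```python
# Azure Hybrid Benefit vCore entitlement, per the 4:1 Enterprise ratio
# mentioned in the text; the Standard 1:1 ratio is a documented assumption.

ENTERPRISE_RATIO = 4  # vCores of SQL Database/Managed Instance per Enterprise core
STANDARD_RATIO = 1    # vCores per Standard core

def covered_vcores(cores: int, edition: str = "Enterprise") -> int:
    """How many General Purpose vCores a set of on-premises cores covers."""
    ratio = ENTERPRISE_RATIO if edition == "Enterprise" else STANDARD_RATIO
    return cores * ratio
```

For example, a heavily virtualized workload licensed with 8 Enterprise cores would be entitled to 32 General Purpose vCores under this model.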

6. Optimize costs through better insights

Use Azure Advisor to obtain cost savings insights on idle or underutilized VMs. Or, use Azure Cost Management to monitor and control your storage expenses and optimize usage in your SQL databases.

7. Pay only for the resources you use

Pay by the second with the only serverless SQL in the cloud. SQL Database serverless automatically scales, pauses, and resumes compute resources based upon your workload activity, so you only pay for the resources you consume. Icertis, a leading provider of contract lifecycle management in the cloud, cut its database costs by nearly 70 percent with SQL Database serverless.

“Azure SQL Database serverless enables us to offer an even more robust and resilient solution, helping us build deeper partnerships with our customers and go to market stronger than ever before.” Purna Rao, Senior DevOps Architect, Icertis
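The pay-per-second model above can be sketched as a simplified billing function. Assuming the documented serverless model of metering compute in vCore-seconds as the maximum of CPU used, memory used (scaled at roughly 3 GB per vCore), and the configured minimum vCores, a rough sketch looks like this; the unit price is a placeholder, not an Azure rate:

```python
# Simplified model of SQL Database serverless per-second compute billing.
# The 3 GB-per-vCore memory scaling and the max() rule follow the documented
# model; the price constant is a hypothetical placeholder.

GB_PER_VCORE = 3.0

def billed_vcores(cpu_vcores_used: float, memory_gb_used: float,
                  min_vcores: float) -> float:
    """vCores metered for one second of workload activity."""
    return max(cpu_vcores_used, memory_gb_used / GB_PER_VCORE, min_vcores)

def second_cost(cpu: float, mem_gb: float, min_vcores: float,
                price_per_vcore_second: float = 0.000145) -> float:
    # Placeholder rate; look up your region's actual vCore-second price.
    return billed_vcores(cpu, mem_gb, min_vcores) * price_per_vcore_second
```

The key consequence for cost is visible in the function: when the database is idle (and not paused), you pay only for the configured minimum, and when it is paused, compute billing stops entirely.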

8. Commit upfront and lock-in rates for up to three years

Reduce your compute costs by up to 72 percent4 versus pay-as-you-go pricing and budget more effectively with reservation pricing. You can save even more, up to 80 percent, when you combine reservation pricing with Azure Hybrid Benefit. Prepay upfront at a reserved price or with convenient monthly payments at no extra cost.
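As a worked example of the percentages above, with hypothetical monthly rates rather than actual Azure prices: a committed rate of $28/month against a pay-as-you-go rate of $100/month is a 72 percent saving, and $20/month is an 80 percent saving.

```python
# Hypothetical rates; savings_percent formalizes the comparison between
# pay-as-you-go and a committed (reservation) monthly price.

def savings_percent(payg_monthly: float, committed_monthly: float) -> float:
    """Percentage saved by the committed rate versus pay-as-you-go."""
    return round(100 * (1 - committed_monthly / payg_monthly), 1)
```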

When you factor in the savings from Azure Hybrid Benefit with the performance on Azure, you get an unbeatable value for your mission-critical workloads, costing up to 86 percent less5 than AWS on SQL Database and up to 84 percent less6 for workloads on SQL Server on Azure Virtual Machines.

Get started with Azure SQL today

Need help with next steps? We can guide you to the right Azure SQL service for your workload and the tools and services to help you cost-effectively migrate to the cloud.

Azure. Invent with Purpose.

1 “The Total Economic Impact™ of Microsoft Azure IaaS,” a commissioned study conducted by Forrester Consulting in August 2019 on behalf of Microsoft.

2 “The Total Economic Impact™ of Migration to Azure SQL Managed Databases,” a commissioned study conducted by Forrester Consulting in March 2020 on behalf of Microsoft.

3 Calculations based on scenarios running 744 hours/month for 12 months at 3-year Reserved Instances or Reserved Capacity. Prices as of 10/24/2018, subject to change. Azure Windows VM calculations based on one D2V3 Azure VM in US West 2 region at the SUSE Linux Enterprise Basic rate. AWS calculations based on one m5.Large VM in US West (Oregon) using Windows Server pay-as-you-go rate for Reserved Instances under Standard 3-year term, all upfront payment. SQL Server calculations based on 8 vCore Azure SQL Database Managed Instance Business Critical in US West 2 running at Azure Hybrid Benefit rate. AWS calculations based on RDS for SQL EE for db.r4.2xlarge on US West (Oregon) in a multi AZ deployment for Reserved Instances under Standard 3-year term, all upfront payment. Extended security updates cost used for AWS is based on Windows Server Standard open NL ERP pricing in USD. Actual savings may vary based on region, instance size, and performance tier. Savings exclude Software Assurance costs, which may vary based on Volume Licensing agreement. Contact your sales representative for details.

4 The 72 percent saving is based on one M32ts Azure VM for Windows OS in US Gov Virginia region running for 36 months at a Pay as You Go rate of ~$3,660.81/month; reduced rate for a 3-year Reserved Instance of ~$663.45/month. Azure pricing as of 10/30/2018 (prices subject to change). Actual savings may vary based on location, instance type, or usage.

5 Price-performance claim based on data from a study commissioned by Microsoft and conducted by GigaOm in August 2019. The study compared price performance between a single, 80 vCore, Gen 5 Azure SQL Database on the business-critical service tier and the db.r4.16xlarge offering for SQL Server on AWS RDS. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E), and is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in East US for Azure SQL Database and US East (Ohio) for AWS RDS as of August 2019. Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test. Actual results and prices may vary based on configuration and region.

6 Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in February 2020. The study compared price performance between SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in Azure E32as_v4 instance type with P30 Premium SSD Disks and the SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in AWS EC2 r5a.8xlarge instance type with General Purpose (gp2) volumes. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of January 2020. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs. Actual results and prices may vary based on configuration and region.
Source: Azure

Creating cloud ready environments with Azure landing zones

Moving to the cloud creates an opportunity to pause and think about how to operate the IT environment. Most organizations have seen their ability to innovate and adopt cloud technologies slowed by the rules and operating model that govern their existing IT environments. Organizations have their own processes, tools, and dedicated staff to ensure that these environments continuously support business needs.

With the move to a cloud environment, IT has access to new tools and processes that unblock IT operations. By revisiting the operating model, technology-focused teams and Azure partners can help organizations improve agility, cost, and scale.

Azure landing zones in the Microsoft Cloud Adoption Framework for Azure are designed to accelerate efforts to map, modernize, or even reimagine the operating model. Azure landing zones help build a cloud environment aligned to the optimal technology operations specific to your needs in the cloud.

As the following analogy illustrates, a standardized foundation can’t fit the variety of needs seen by organizations and operating models. Respecting any need for options and customization, we provide a range of landing zone architectures and implementation options. Organizations can use the implementation option that most clearly aligns to their current cloud strategy. As the approach to managing, operating, and governing the cloud platform matures, you can support your customers and refactor their Azure landing zone implementation to reflect changes to their operating model.

Landing zone analogy

Creating a cloud environment is similar to laying the foundation in a construction project. Architects face a common set of decisions when designing and laying the foundation for any building, and all foundations share elements like concrete, rebar, and conduits that bring in necessary utilities such as plumbing and electricity. Yet while foundations contain similar elements and considerations, each has requirements that make it unique. The foundation for a house is concise and well contained. The foundation for a stadium is larger and more complex. The foundation for a bridge is more complex still and may require stricter governance and performance standards. Designing the right foundation requires an understanding of what that foundation will support.

The cloud environments created by Azure landing zones are similar in that they are all built from the same common design elements. While commonalities exist across all environments, each landing zone implementation is customized to support a specific type of structure, or cloud operating model. Like traditional foundations, the cloud environment will require review, modification, and iteration by an experienced architect to ensure that it supports the organization’s long-term needs.

When getting started or rethinking operations, Azure landing zones help accelerate the design, review, and implementation of the cloud environment. When working with your customers to accelerate their journey, Azure landing zones can guide your collaboration, as you validate, customize, and expand Azure landing zones to build the foundation for their digital transformation.

Azure landing zones

Azure landing zones provide a clear architecture, reference implementations, and code samples to create the initial cloud environment. This environment will support all other adoption efforts by consistently applying a set of common design areas. These design areas represent how the operating model is supported in the cloud.

Azure landing zone implementation options provide a reference implementation and approach to help make decisions regarding networking, identity, resource organization, governance, operations, and other design areas that impact the environment. The options provide a structure that organizations can follow to ensure all minimal design considerations have been made and that decisions are reflected consistently across the cloud environment.

Azure landing zones implementation options

Azure landing zones are designed to meet our customers’ distinct needs based on today’s requirements, and then provide a clear path to customize and mature any personalized landing zone implementation. This starts with choosing a landing zone implementation option, which will quickly deploy a starting point for the cloud environment.

Some of the Azure landing zones are small by design to encourage skills development and customization. The “start small” implementation options establish an infrastructure-as-code approach and then provide the IT team with a series of decision guides. This approach helps guide the decisions that need to happen, building the foundation in parallel with the cloud adoption plan so the team can make concrete decisions as its cloud experience matures.

For organizations with well-defined operating models, the “enterprise-scale” implementation option fills in those decisions. This option includes very detailed solutions for security, governance, and operations. These solutions are automated and enforced by Azure Policy and other governance tools in the reference implementations. When starting with enterprise-scale, organizations can reduce the number of decision points and implement a proven cloud operating model faster.

Azure landing zones development

Regardless of the landing zone chosen, the Ready methodology of the Cloud Adoption Framework (CAF) for Azure helps guide organizations while developing the skills needed to create and support their cloud environment. The theory behind Azure landing zones brings well-established development practices to the infrastructure management function.

As Azure landing zones are implemented and customized, the team will develop skills in general Azure architecture. It is also important to learn how to refactor landing zones to meet new business and technical requirements, and how test-driven development can ensure that high-quality changes add value to the cloud environment. You’ll also experience how the governance tools in Azure can be used to create an environment factory that provides your customers with the rapid deployment of secure, well-governed, well-managed Azure landing zones.

As your customer’s cloud adoption efforts advance, you can use the guidance found in the Govern and Manage methodologies to further help them mature their governance and operational management postures. As these processes and disciplines mature, Azure landing zones and the suite of Azure governance tools provide a convenient approach to apply changes to existing environments. This allows the collective technology teams to mature governance and management at the right pace, while ensuring that such progress isn’t stalled by technical compatibility challenges.

Learn more

To learn more about Azure landing zones, check out the Ready section under the Cloud Adoption Framework (CAF) including:

Read Azure landing zones defined.
Review the Azure landing zone design areas and begin thinking about your landing zone requirements.
Evaluate the Azure landing zone implementation options to find the deployment approach that best aligns with your needs.

If you are ready to help your customers deploy Azure landing zones, the following resources will help you get started:

Start small and expand: Deploy the CAF migration landing zone blueprint to start building out a migration-ready environment. Add the CAF Foundation blueprint to begin adding governance tooling to any environment.
Start with enterprise-scale: For a more robust implementation, deploy the CAF enterprise-scale landing zones leveraging the reference implementation.
Third-party, multi-cloud option: Use CAF Terraform modules to deploy landing zones.

Already have workloads on Azure and want to assess them against best practices? Check out the Microsoft Azure Well-Architected Framework and the Microsoft Azure Well-Architected Review.

Grow your business and strengthen your position as a trusted cloud advisor by leveraging Azure landing zones to create the right cloud environment for your customers’ cloud adoption needs. Building on the right environment ensures that your own and your customers’ modern operations can support the organization’s innovation and migration needs. Adopting the cloud on top of Azure landing zones is the first step to unlocking the agility, scale, and cost benefits of the cloud across your customers’ IT portfolios.
Source: Azure

Monitoring Azure Arc enabled Kubernetes and servers

Azure Arc is a preview service that enables users to connect and manage Kubernetes clusters both inside and outside of Azure. Azure Arc also enables you to manage Windows and Linux machines outside of Azure the same way native Azure virtual machines are managed. To monitor these Azure Arc enabled clusters and servers, you can use Azure Monitor the same way you would for native Azure resources.

With Azure Arc, Kubernetes clusters and servers are given a full-fledged Azure resource ID and managed identity, enabling scenarios that simplify management and monitoring of these resources from a common control plane. For Kubernetes, this enables scenarios such as deploying applications through GitOps-based management, applying Azure Policy, or monitoring your containers. For servers, users also benefit from applying Azure policies and collecting logs with the Log Analytics agent for virtual machine (VM) monitoring.

Monitoring Azure and on-premises resources with Azure Monitor

As customers begin their transition to the cloud, monitoring on-premises resources alongside cloud infrastructure can feel disjointed and cumbersome to manage. With Azure Arc enabled Kubernetes and servers, Azure Monitor lets you view your full telemetry across cloud-native and on-premises resources in a single place. This saves you the hassle of configuring and managing multiple monitoring services and bridges the disconnect that many people experience when working across multiple environments.

For example, the below view shows the Map experience of Azure Monitor on an Azure Arc enabled server, with the dashed red lines showing failed connections. The graphs on the right side of the map show detailed metrics about the selected connection.

Also, here you can see your data from Azure Kubernetes Services (AKS), Azure Arc, and Azure Red Hat OpenShift side-by-side in Azure Monitor for containers:

Using Azure Monitor for Azure Arc enabled servers

Azure Monitor for VMs is a complete monitoring offering that gives you views and information about the performance of your virtual machines, as well as dependencies your monitored machines may have. It provides an insights view of a single monitored machine, as well as an at-scale view to look at the performance of multiple machines at once.

Azure Arc enabled servers fit right into the existing monitoring view for Azure Virtual Machines, so the monitoring view of an Azure Arc enabled server looks the same as that of a native Azure virtual machine. From within the Azure Arc blade, you can look at your Azure Arc machines and dive into their monitoring, both through the Performance tab, which shows insights about metrics such as CPU utilization, and the Map tab, which shows dependencies.

In the at-scale monitoring view, your Azure Arc machines are co-mingled with your native Azure Virtual Machines and Virtual Machines Scale Sets to create a single place to view performance information about your machines. The monitoring data shown in these at-scale views will include all VMs, Virtual Machines Scale Sets, and Azure Arc enabled servers that you have onboarded to Azure Monitor.

The Getting Started tab provides an overview of the monitoring status of your machines, broken down by subscription and resource group.

The Performance tab shows trends at scale: how all the machines in the chosen subscription and resource group perform on key metrics. Within the at-scale view, the provided Type filter lets you drill down any view to show only your native Azure Virtual Machines, native Virtual Machine Scale Sets, or Azure Arc enabled servers.

You can check out our onboarding documentation to learn how to start monitoring your Azure Arc enabled servers.

Using Azure Monitor for Azure Arc enabled Kubernetes

Azure Monitor for containers provides numerous monitoring features that create a thorough experience for understanding the health and performance of your Azure Arc clusters.

Azure Monitor provides an at-scale view for all your clusters, spanning standard AKS, AKS engine, Azure Red Hat OpenShift, and Azure Arc enabled clusters, with important details such as:

Health statuses (healthy, critical, warning, unknown).
Node count.
Pod count (user and system).

At the resource level for your Azure Arc enabled Kubernetes, there are several key performance indicators for your cluster. Users can toggle the metrics for these charts based on percentile and pin them to their Azure Dashboards.

In the Nodes, Controllers, and Containers tabs, data is displayed across various levels of the hierarchy, with detailed information in the context blade. By clicking View in Analytics, you can take a deep dive into the full container logs to analyze and troubleshoot.

Next steps

There are Azure Monitor Workbooks and Grafana integrations available as well if you want to explore additional metrics or create your own custom monitoring experiences.

You can check out our onboarding documentation to learn how to start monitoring your Azure Arc enabled Kubernetes clusters.
Source: Azure

Three reasons to migrate your ASP.NET apps and SQL Server data to Azure

The way we work and live has changed. Over the last several months, enterprises have had to shift their strategy from "physical first" to "digital first" and accelerate their digital transformation to enable remote productivity, reduce costs, or rapidly address new opportunities. In a digital-first world, websites and web applications play a significant role in how customers interact with a business. To make a great first impression, companies are moving their web applications and data to the cloud for optimal performance, and saving money along the way.

Nearly a third1 of the world’s public websites are built on ASP.NET, and for good reason: it’s fast, scalable, and secure. What if you could combine those benefits with the operational and financial benefits of the cloud? Microsoft Azure offers the only end-to-end application hosting platform for building and managing .NET applications, enabling significant cost savings, operational efficiencies, and business agility.

Here are three ways you’ll benefit from migrating your ASP.NET apps and SQL Server data to Azure.

Optimize costs with fully managed services that do more for you

Operating your .NET applications on a fully managed platform allows your teams to focus on what matters most by offloading apps, infrastructure, and data management to Azure. With our deep expertise in Windows, Visual Studio and ASP.NET, we have designed Azure App Service and Azure SQL Database from the ground up for .NET applications and the SQL Server Databases that power them. Simply put, there is no better place to build, host, manage, and scale your .NET applications.

“After having used Azure for the past couple of years, we don’t want to do it any other way, and the capital cost savings were just too compelling.” —Anand Kulanthaivelu, Solutions Architect for autoTRADER.ca

Use the power of built-in AI, with capabilities that surface savings opportunities for you. Azure SQL Database offers automatic tuning and adaptive query optimization to support peak performance. You can break down how increased traffic or upstream dependencies are affecting website response times with rich out of the box monitoring for ASP.NET applications in Application Insights.

Operate confidently with mission-critical performance and security

App Service and SQL Database can help simplify your operations while enabling you to operate confidently to address any business need. With App Service on Windows operating on an Internet Information Services (IIS) server in the backend and SQL Database sharing the same codebase with on-premises SQL Server, your developers and database admins can continue to use their familiar tools and processes to be effective from day one without a steep learning curve.

The Azure .NET app hosting platform serves over 2 million websites and processes 41 billion requests and 9 trillion SQL queries per day. Built-in auto-scaling quickly adapts to meet workload demand from these apps, ensuring that user experiences stay great. Cloud-native technology like Azure SQL Database Hyperscale removes many of the limits seen in other cloud databases, with a flexible storage architecture that grows as needed, up to 100 TB. Both App Service and SQL Database are available in all 60+ Azure regions, enabling you to deploy your applications closer to customers and meet local regulatory compliance needs.

“Hyperscale made it easy for us to support our growing workload and the dozens of microservices that power our core ecosystem.” —Andrew Wieck, Manager of Business Analytics, Clearent

Azure Security Center provides enterprise-grade protection for all your Azure resources, so you can monitor all your applications and databases and receive recommendations to improve your security posture and threat protection. Azure SQL Database offers the broadest range of built-in security controls across T-SQL, authentication, networking, and key management, as well as advanced data security that proactively detects threats and vulnerabilities. You can protect applications with Azure Web Application Firewall, leverage Azure CDN to optimize performance, or use Azure Front Door to route user traffic to the lowest-latency backend, all while gaining built-in distributed denial of service (DDoS) protection and global load balancing.

“We looked at moving to the cloud for better DDoS protection and lowered cost of operations for our apps. We have successfully migrated 200 apps to Azure, while using the App Service Migration Assistant to migrate 60 different .NET Apps. The Azure Migrate App Service Migration Assistant really simplified our migration journey by identifying any migration blockers and enabling us to migrate apps with just a few clicks. As an App development team, we really like the value proposition of Azure managed services for .NET Apps such as App Service and Azure SQL Database. We don’t have to worry about patching virtual machines or containers.” —Tim Fragakis, Director of Cloud Services, IT, Clover Imaging Group

Accelerate innovation and ship new features faster

Native integration between Visual Studio, GitHub, App Service, and CI/CD enables developers to build and ship changes faster. Features such as remote and live-site debugging for ASP.NET apps let developers and operators diagnose issues in production environments and resolve them quickly, without impacting traffic.

Building on Azure opens the door to new features and services that provide off-the-shelf value to accelerate innovation. Developers can easily connect to new data sources and backend systems with 300+ pre-built connectors for Azure Logic Apps. Turn legacy web services into modern REST-based APIs by creating façades with Azure API Management, then innovate with many of the pre-built APIs for Azure Cognitive Services such as Speech, Text and Image processing. Add interactivity to your website with Azure Bot Service to serve customers more efficiently and deliver personalized results faster with Azure Cognitive Search.

Get started today

Azure offers easy-to-use tools with step-by-step guidance to help you migrate your apps and data quickly and efficiently. Use the Azure App Service Migration Assistant to perform readiness checks on your application and receive a detailed assessment that walks you through the migration process. Azure Database Migration Service provides a step-by-step guide to help you get to the cloud with near-zero downtime from multiple database sources. Go through the Microsoft Learn module for migrating .NET apps to get hands-on migration experience.

Learn more about building .NET applications on Azure and view our on-demand webinar to learn more about the tools you can use to migrate those apps to the cloud. For best-practice guidance and access to Azure engineers, consider the Azure Migration Program. If you are a Microsoft Partner, view our recent session at Microsoft Inspire to learn how you can build and grow your .NET Apps Modernization practice.

¹ Framework Usage Distribution on the Entire Internet (as of July 2020).
Source: Azure

Aiming for more than just net zero

Climate experts across the globe agree: if we can’t drastically reduce carbon emissions, our planet will face catastrophic consequences. Microsoft has operated carbon neutral since 2012, and in January 2020 Brad Smith announced our commitment to going carbon negative by 2030. This isn’t a goal we can reach in one fell swoop—it will take time, dedication, and many small steps that coalesce into something greater.

As the cloud business grows, our datacenter footprint grows. In our journey toward carbon negative, Microsoft is taking steps to roll back the effect datacenters have on the environment. Reaching this goal will take many steps, along with the implementation of innovative technologies that have yet to be developed.

Many companies are reaching for net zero emissions, but we’re taking it even further. We’re not just reducing our output to zero. We’re committed to reducing our emissions by half, and then removing the carbon we’ve emitted since 1975, to truly go carbon negative.

The journey to carbon negative

A big part of going carbon negative means completely changing the way datacenters operate. Datacenters have adopted some sustainable methods around cooling, including open-air and adiabatic cooling. These methods have helped to drastically reduce the water and energy consumption of datacenters, but they’re not enough. Currently, datacenters and the backup that powers them in peak load times depend on fossil fuels like diesel. Microsoft is working to change that.

Our ambitious goals to cut down our carbon footprint have necessitated exploration into various technologies. With each kind of technology, we’re determining the best combination to implement based on our overall goal as well as the specific datacenter locations and their local needs.

Liquid immersion cooling

Liquid immersion cooling is predicted not only to help eliminate water consumption but also to lower energy consumption by at least 5 to 15 percent. As a further benefit, this closed-loop cooling system leads to fewer server racks and smaller datacenter configurations. Datacenters take up a massive amount of space in their current configuration, making this a huge advantage.

Learn more about liquid immersion cooling.

Grid-interactive UPS batteries

Grid-interactive Uninterruptible Power Supply (UPS) batteries help to balance supply and lower energy demand on the grid by directing microbursts of electricity to datacenters or the grid as needed. These batteries store energy at close to 90 percent efficiency and smooth out intermittency from renewables. As we continue to explore this technology further, we could potentially extend the duration of the batteries from a few minutes to several hours—potentially using these long-duration batteries as a replacement for traditional backup generators.

Learn more about powering sustainability goals.

Clean power backup

Clean power backup has the potential to easily replace conventional diesel with less harmful emissions. Synthetic diesel causes less harm to the environment and provides a much-needed bridge to using renewables. Synthetic diesel can even be used in diesel generators without any modifications, reducing emissions on the way to carbon negative.

Hydrogen fuel cells provide another option for green backup energy to datacenters and are almost two times more efficient than combustion engines. The only output is food-grade steam that is then recaptured and reused.

Learn more about clean power generators.

Learn more about hydrogen innovations.

Power your sustainability goals

These small steps add up to something big. According to a 2018 study, running workloads on Azure can be up to 98 percent more carbon efficient than running them in traditional on-premises datacenters, and we’re making more investments in the future of sustainability. By moving your workloads to Azure, you’re ensuring that they are powered by datacenters with reduced emissions and lowered energy consumption. We’re committed to not only reducing our own carbon footprint but also helping you reduce yours.

Work with us toward a carbon-negative future.

Visit Microsoft Sustainability.
Source: Azure

NFS 3.0 support for Azure Blob storage is now in preview

Many enterprises and organizations are moving their data to Microsoft Azure Blob storage for its massive scale, security capabilities, and low total cost of ownership. At the same time, they continue running many apps on different storage systems using the Network File System (NFS) protocol. Companies that use different storage systems due to protocol requirements are challenged by data silos, where data resides in different places and requires additional migration or app-rewrite steps.

To help break down these silos and enable customers to run NFS-based applications at scale, we are announcing the preview of NFS 3.0 protocol support for Azure Blob storage. Azure Blob storage is the only storage platform that supports NFS 3.0 protocol over object storage natively (no gateway or data copying required), with object storage economics, which is essential for our customers.

One of our Media and Entertainment (M&E) customers said, “NFS access to blob storage will enable our customers to preserve their legacy data access methods when migrating the underlying storage to Azure Blob storage.” Other customers have requested NFS for blob storage so they can reuse the same code from an on-premises solution to access files while controlling the overall cost of the solution. Financial services customers want an NFS-based offering for their analytic workloads. These are a few of the many examples from customers that have embraced the private preview of NFS 3.0 support for Azure Blob storage.

NFS 3.0 support for Azure Blob storage helps with large scale read-heavy sequential access workloads where data will be ingested once and minimally modified further including large scale analytic data, backup and archive, NFS apps for seismic and subsurface processing, media rendering, genomic sequencing, and line-of-business applications.

During the preview, NFS 3.0 is available to BlockBlobStorage accounts with premium performance in the following regions: East US, Central US, and Canada Central. Support for general-purpose v2 (GPv2) accounts with standard performance will be announced soon.

Mount blob container using NFS 3.0

Each container in a newly created NFS 3.0 enabled storage account is automatically exported. NFS clients within the same network can mount it using this sample command:

mount -o sec=sys,vers=3,nolock,proto=tcp <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /mnt/test

Replace the <storage-account-name> placeholders with the name of your storage account.
Replace the <container-name> placeholder with the name of your container.

During the preview, test data stored in your NFS 3.0 enabled storage accounts is billed at the same per-GB-per-month capacity rate as Azure Blob storage. Pricing for transactions is subject to change and will be determined when the feature is generally available. To learn more, visit our documentation, NFS 3.0 protocol support in Azure Blob storage (preview).

Next steps

We are confident that NFS 3.0 on Azure Blob storage can simplify your workload migration to Azure. To register the NFS 3.0 protocol feature with your subscription, see the step-by-step guide. We look forward to hearing your feedback on this feature and suggestions for future improvements through email at azurenfspreview@microsoft.com.
Source: Azure

Advancing resilience through chaos engineering and fault injection

“When I first kicked off this Advancing Reliability blog series in my post last July, I highlighted several initiatives underway to keep improving platform availability, as part of our commitment to provide a trusted set of cloud services. One area I mentioned was fault injection, through which we’re increasingly validating that systems will perform as designed in the face of failures. Today I’ve asked our Principal Program Manager in this space, Chris Ashton, to shed some light on these broader ‘chaos engineering’ concepts, and to outline Azure examples of how we’re already applying these, together with stress testing and synthetic workloads, to improve application and service resilience.” – Mark Russinovich, CTO, Azure

 

Developing large-scale, distributed applications has never been easier, but there is a catch. Yes, infrastructure is provided in minutes thanks to your public cloud, there are many language options to choose from, swaths of open source code available to leverage, and abundant components and services in the marketplace to build upon. Yes, there are good reference guides that help give a leg up on your solution architecture and design, such as the Azure Well-Architected Framework and other resources in the Azure Architecture Center. But while application development is easier, there’s also an increased risk of impact from dependency disruptions. However rare, outages beyond your control could occur at any time, your dependencies could have incidents, or your key services/systems could become slow to respond. Minor disruptions in one area can be magnified or have longstanding side effects in another. These service disruptions can rob developer productivity, negatively affect customer trust, cause lost business, and even impact an organization’s bottom line.

Modern applications, and the cloud platforms upon which they are built, need to be designed and continuously validated for failure. Developers need to account for known and unknown failure conditions, applications and services must be architected for redundancy, and algorithms need retry and back-off mechanisms. Systems need to be resilient to the scenarios and conditions caused by infrequent but inevitable production outages and disruptions. This post is designed to get you thinking about how best to validate typical failure conditions, including examples of how we at Microsoft validate our own systems.
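
The retry and back-off mechanisms mentioned above can be sketched in a few lines (illustrative Python, not a specific Azure SDK API): exponential back-off caps how quickly retries hammer a struggling dependency, and jitter avoids synchronized retry storms when many clients recover at once.

```python
import random
import time

def call_with_retry(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Call `operation`, retrying transient failures with exponential
    back-off plus full jitter to avoid synchronized retry storms."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # Delay doubles each attempt, capped at max_delay, with full jitter.
            time.sleep(random.uniform(0, min(max_delay, base_delay * 2 ** attempt)))
```

A dependency that fails twice and then recovers is retried transparently, while a hard outage still surfaces to the caller after the final attempt.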

Resilience

Resilience is the ability of a system to fail gracefully in the face of—and eventually recover from—disruptive events. Validating that an application, service, or platform is resilient is equally as important as building for failure. It is easy and tempting to validate the reliability of individual components in isolation and infer that the entire system will be just as reliable, but that could be a mistake. Resilience is a property of an entire system, not just its components. To understand if a system is truly resilient, it is best to measure and understand the resilience of the entire system in the environment where it will run. But how do you do this, and where do you start?

Chaos engineering and fault injection

Chaos engineering is the practice of subjecting a system to the real-world failures and dependency disruptions it will face in production. Fault injection is the deliberate introduction of failure into a system in order to validate its robustness and error handling.

Through the use of fault injection and the application of chaos engineering practices generally, architects can build confidence in their designs – and developers can measure, understand, and improve the resilience of their applications. Similarly, Site Reliability Engineers (SREs) and in fact anyone who holds their wider teams accountable in this space can ensure that their service level objectives are within target, and monitor system health in production. Likewise, operations teams can validate new hardware and datacenters before rolling out for customer use. Incorporation of chaos techniques in release validation gives everyone, including management, confidence in the systems that their organization is building.

Throughout the development process, as you are hopefully doing already, test early and test often. As you prepare to take your application or service to production, follow normal testing practices by adding and running unit, functional, stress, and integration tests. Where it makes sense, add test coverage for failure cases, and use fault injection to confirm error handling and algorithm behavior. For even greater impact, and this is where chaos engineering really comes into play, augment end-to-end workloads (such as stress tests, performance benchmarks, or a synthetic workload) with fault injection. Start in a pre-production test environment before performing experiments in production, and understand how your solution behaves in a safe environment with a synthetic workload before introducing potential impact to real customer traffic.
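
The idea of augmenting an end-to-end workload with fault injection can be sketched minimally: run the workload once for a baseline, then again with injected latency, and compare the measurements. All names and numbers below are illustrative, not a Microsoft tool.

```python
import time

def inject_latency(operation, added_seconds):
    """Fault: wrap an operation so every call pays an injected delay,
    simulating network jitter toward a dependency."""
    def degraded(*args, **kwargs):
        time.sleep(added_seconds)
        return operation(*args, **kwargs)
    return degraded

def run_workload(operation, requests=20):
    """Synthetic workload: drive the operation and record per-request latency."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)
    return latencies

def fetch():
    return "ok"  # stand-in for a real end-to-end request

baseline = run_workload(fetch)                                       # normal run
fault_run = run_workload(inject_latency(fetch, added_seconds=0.01))  # faulted run
```

Comparing `fault_run` against `baseline` shows whether response times degrade gracefully under jitter or collapse, which is exactly the signal a resiliency gate needs.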

Healthy use of fault injection in a validation process might include one or more of the following:

Ad hoc validation of new features in a test environment:
A developer could stand up a test virtual machine (VM) and run new code in isolation. While executing existing functional or stress tests, faults could be injected to block network access to a remote dependency (such as SQL Server) to prove that the new code handles the scenario correctly.
Automated fault injection coverage in a CI/CD pipeline, including deployment or resiliency gates:
Existing end-to-end scenario tests (such as integration or stress tests) can be augmented with fault injection. Simply insert a new step after normal execution to continue running, or run again, with some faults applied. The addition of faults can surface issues that the tests would not normally find, or accelerate the discovery of issues that would otherwise be found only eventually.
Incident fix validation and incident regression testing:
Fault injection can be used in conjunction with a workload or manual execution to induce the same conditions that caused an incident, enabling validation of a specific incident fix or regression testing of an incident scenario.
BCDR drills in a pre-production environment:
Faults that cause database failover or take storage offline can be used in BCDR drills, to validate that systems behave appropriately in the face of these faults and that data is not lost during any failover tests.
Game days in production:
A ‘game day’ is a coordinated simulation of an outage or incident, to validate that systems handle the event correctly. This typically includes validation of monitoring systems as well as human processes that come into play during an incident. Teams that perform game days can leverage fault injection tooling, to orchestrate faults that represent a hypothetical scenario in a controlled manner.

Typical release pipeline

This figure shows a typical release pipeline, and opportunities to include fault injection:

An investment in fault injection will be more successful if it is built upon a few foundational components:

Coordinated deployment pipeline.
Automated ARM deployments.
Synthetic runners and synthetic end-to-end workloads.
Monitoring, alerting, and livesite dashboards.

With these things in place, fault injection can be integrated in the deployment process with little to no additional overhead – and can be used to gate code flow on its way to production.

Localized rack power outages and equipment failures have been found as single points of failure in root cause analysis of past incidents. Learning that a service is impacted by, and not resilient to, one of these events in production is a timebound, painful, and expensive process for an on-call engineer. There are several opportunities to use fault injection to validate resilience to these failures throughout the release pipeline in a controlled environment and timeframe, which also gives more opportunity for the code author to lead an investigation of issues uncovered. A developer who has code changes or new code can create a test environment, deploy the code, and perform ad hoc experiments using functional tests and tools with faults that simulate taking dependencies offline – such as killing VMs, blocking access to services, or simply altering permissions. In a staging environment, injection of similar faults can be added to automated end-to-end and integration tests or other synthetic workloads. Test results and telemetry can then be used to determine impact of the faults and compared against baseline performance to block code flow if necessary.

In a pre-production or ‘Canary’ environment, automated runners can be used with faults that again block access to dependencies or take them offline. Monitoring, alerting, and livesite dashboards can then be used to validate that the outages were observed as well as that the system reacted and compensated for the issue—that it demonstrated resilience. In this same environment, SREs or operations teams may also perform business continuity/disaster recovery (BCDR) drills, using fault injection to take storage or databases offline and once again monitoring system metrics to validate resilience and data integrity. These same Canary activities can also be performed in production where there is real customer traffic, but doing so incurs a higher possibility of impact to customers so it is recommended only to do this after leveraging fault injection earlier in the pipeline. Establishing these practices and incorporating fault injection into a deployment pipeline allows systematic and controlled resilience validation which enables teams to mitigate issues, and improve application reliability, without impacting end customers.

Fault injection at Microsoft

At Microsoft, some teams incorporate fault injection early in their validation pipeline and automated test passes. Different teams run stress tests, performance benchmarks, or synthetic workloads in their automated validation gates as normal and a baseline is established. Then the workload is run again, this time with faults applied – such as CPU pressure, disk IO jitter, or network latency. Workload results are monitored, telemetry is scanned, crash dumps are checked, and Service Level Indicators (SLIs) are compared with Service Level Objectives (SLOs) to gauge the impact. If results are deemed a failure, code may not flow to the next stage in the pipeline.
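
A gate like the one just described reduces to a small computation: derive an SLI from the fault run’s measurements and block code flow when it falls below the SLO. The threshold and target values here are illustrative, not Microsoft’s actual targets.

```python
def latency_sli(samples_ms, threshold_ms=500.0):
    """SLI: fraction of requests that completed within the latency threshold."""
    return sum(1 for s in samples_ms if s <= threshold_ms) / len(samples_ms)

def gate_code_flow(fault_run_samples_ms, slo=0.99, threshold_ms=500.0):
    """Return True if the fault-injected run still meets the SLO,
    i.e. the build may flow to the next pipeline stage."""
    return latency_sli(fault_run_samples_ms, threshold_ms) >= slo
```

For example, a run where 99 of 100 requests finish under 500 ms just meets a 0.99 SLO and passes, while five slow requests out of 100 would block the release.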

Other Microsoft teams use fault injection in regular business continuity and disaster recovery (BCDR) drills and Game Days. Some teams have monthly, quarterly, or half-yearly BCDR drills and use fault injection to induce a disaster and validate both the recovery process as well as the alerting, monitoring, and live site processes. This is often done in a pre-production Canary environment before being used in production itself with real customer traffic. Some teams also carry out Game Days, where they come up with a hypothetical scenario, such as replication of a past incident, and use fault injection to help orchestrate it. Faults, in this case, might be more destructive—such as crashing VMs, turning off network access, causing database failover, or simulating an entire datacenter going offline. Again, normal live site monitoring and alerting are used, so your DevOps and incident management processes are also validated. To be kind to all involved, these activities are typically performed during business hours and not overnight or over a weekend.

Our operations teams also use fault injection to validate new hardware before it is deployed for customer use. Drills are performed where the power is shut off to a rack or datacenter, so the monitoring and backup systems can be observed to ensure they behave as expected.

At Microsoft, we use chaos engineering principles and fault injection techniques to increase resilience, and confidence, in the products we ship. They are used to validate the applications we deliver to customers, and the services we make available to developers. They are used to validate the underlying Azure platform itself, to test new hardware before it is deployed. Separately and together, these contribute to the overall reliability of the Azure platform—and improved quality in our services all up.

Unintended consequences

Remember, fault injection is a powerful tool and should be used with caution. Safeguards should be in place to ensure that faults introduced in a test or pre-production environment will not also affect production. The blast radius of a fault scenario should be contained to minimize impact to other components and to end customers. The ability to inject faults should have restricted access, to prevent accidents and prevent potential use by hackers with malicious intent. Fault injection can be used in production, but plan carefully, test first in pre-production, limit the blast radius, and have a failsafe to ensure that an experiment can be ended abruptly if needed. The 1986 Chernobyl nuclear accident is a sobering example of a fault injection drill gone wrong. Be careful to insulate your system from unintended consequences.

Chaos as a service?

As Mark Russinovich mentioned in this earlier blog post, our goal is to make native fault injection services available to customers and partners so they can perform the same validation on their own applications and services. This is an exciting space with so much potential to improve cloud service reliability and reduce the impact of rare but inevitable disruptions. There are many teams doing lots of interesting things in this space, and we’re exploring how best to bring all these disparate tools and faults together to make our lives easier—for our internal developers building Azure services, for built-on-Azure services like Microsoft 365, Microsoft Teams, and Dynamics, and eventually for our customers and partners to use the same tooling to wreak havoc on (and ultimately improve the resilience of) their own applications and solutions.
Source: Azure

New Windows Virtual Desktop capabilities now generally available

With the global pandemic, customers are relying on remote work more than ever, and Windows Virtual Desktop is helping customers rapidly deliver a secure Windows 10 desktop experience to their users. Charlie Anderson, CIO of Fife Council in the United Kingdom, was planning to modernize the council’s existing Remote Desktop Services (RDS) infrastructure when business requirements changed: he needed increased agility and scale to meet the new demands. In his own words:

“Windows Virtual Desktop was absolutely essential for us in terms of our response to the COVID-19 pandemic. Like many, we were faced with a continuity issue unparalleled in recent times. For us, this meant not only the continuation of services we already delivered, but also responding very quickly to new demands arising as a result of our public response to the pandemic.

To do that, we needed to provide as close to the “in-office” experience as we could to a workforce now working away from our offices. This meant multiplying previous remote working capacities by a factor of 15 almost overnight – something which would have been impossible without a scalable and cloud-based approach, which also worked well on a range of Council and self-provided devices.

There is little doubt that the Windows Virtual Desktop solution will not only be vital to the future resilience of our public services to the people of Fife, but it will also form a key part of our future device strategy as we seek to develop new, agile, and cost-effective approaches going forward.”

In April 2020, we released the public preview of Azure portal integration, which made it easier to deploy and manage Windows Virtual Desktop. We also announced a new audio/video redirection (A/V redirect) capability that provides a seamless meeting and collaboration experience for Microsoft Teams. We are humbled by the amazing feedback we’ve received from you on these capabilities, and that’s been a huge motivation for our team to accelerate development. We are happy to announce that both the Azure portal integration and A/V redirect in Microsoft Teams are now generally available.

Azure portal integration

With the Azure portal integration, you get a simple interface to deploy and manage your apps and virtual desktops. Host pools, workspaces, and all other objects you create are Azure Resource Manager objects and are managed the same way you manage other Azure resources.

Customers who have existing deployments based on the previous (classic) model can continue using it. We will soon publish guidance on migrating to the new Azure Resource Manager-based deployment model so you can take advantage of all the new capabilities, including:

Azure role-based access control (RBAC)

You can use Azure RBAC to provide fine-grained access control to your Windows Virtual Desktop resources. There are four built-in admin roles that you can get started with, and you can create custom roles if necessary.

User management

Previously, you could only publish Remote Apps and Desktops to individual users. You can now publish resources to Azure Active Directory (Azure AD) groups, which makes it much easier to scale.

Monitoring

The monitoring logs are now stored in Azure Monitor Logs. You can analyze the logs with Log Analytics and create visualizations to help you quickly troubleshoot issues.

A/V redirect for Microsoft Teams

Many of you use Microsoft Teams to collaborate with your colleagues. Traditionally, virtual desktops have not been ideal for audio and video conferencing due to latency issues. That changes with the new A/V redirect feature in Windows Virtual Desktop. Once you enable A/V redirect in the Desktop client for Windows, the audio and video will be handled locally for Microsoft Teams calls and meetings. You can still use Microsoft Teams on Windows Virtual Desktop with other clients without optimized calling and meetings. Microsoft Teams chat and collaboration features are supported on all platforms.

Next steps

You can read more about these updates in the Azure portal integration and Microsoft Teams integration documentation pages.

Thank you for your support during the preview. If you have any questions, please reach out to us on Tech Community and UserVoice. 
Source: Azure

Fully managed HashiCorp Consul Service generally available on Azure today

I want to congratulate the HashiCorp and Microsoft Azure teams on the general availability of HashiCorp Consul Service (HCS) on Azure. This is a first-of-its-kind achievement for HashiCorp in running a cloud-based service. Within Azure, we have a deep commitment to build a platform where anyone from startups to large-scale enterprises can deliver reliable, compelling services that augment the Azure platform and benefit our customers.

Throughout the process of bringing this service to production-grade availability, the HashiCorp team has been an awesome partner. We learned a lot together and I’m grateful for the strength of our relationship. Seeing HCS launch on Azure is awesome and a great example of the depth of our collaboration and commitment to serve our joint customers.

HCS on Azure enables Azure users to natively provision Consul servers in any supported Azure region directly through the Azure Marketplace. Consul is delivered “as-a-service" where the Consul servers themselves are managed and operated by HashiCorp SREs while Azure takes care of the underlying infrastructure, virtual machines (VMs), and networks. This ensures customers can focus on the application and business logic they’re building and can offload the operational overhead of running Consul to experts at HashiCorp, including managing upgrades, patching, and providing technical support.

One of the major challenges of adopting open source technology like Consul is learning how to operate it yourself. This new HCS service eliminates this barrier. You can experiment and prototype with an open source solution and go to production with the confidence of the managed service offering.

Consul is a great option for service discovery and service mesh, especially in hybrid environments connecting the Azure Kubernetes Service (AKS) to legacy services running on VMs, or even in on-premises environments. You can secure traffic between components, perform health checking, and even implement access control to on-premises resources with a single solution that also integrates with modern cloud-native services running in Kubernetes.

Because HCS is offered as an Azure managed application, it is integrated with all native Azure experiences. You can create a cluster with a push-button experience in the Azure portal, pay for the service using centralized Azure billing and spending commitments, and integrate identity management with Azure Active Directory (Azure AD). Because of the native platform integration, it’s super easy to integrate HashiCorp Consul with AKS through Helm and the Service Mesh Interface. You can even deploy AKS and Consul using the same HashiCorp Terraform template.

I have always said that it is an open ecosystem that powers the success of a platform, whether that is Kubernetes or Azure. Today’s launch is a great example of how we’re making that vision a reality with fantastic partners like HashiCorp. Many congratulations and thanks to the HashiCorp and Azure teams!

To learn more and get started, visit HashiCorp Consul Service on Azure.
Source: Azure