How InstaDeep used Cloud TPU v4 to help sustainable agriculture

You are what you eat. We’ve all been told this, but the truth is what we eat is often more complex than we are – genetically at least. Take a grain of rice. The plant that produces rice has 40,000 to 50,000 genes, double that of humans, yet we know far more about the composition of the human genome than of plant life. We need to close this knowledge gap quickly if we are to answer the urgent challenge of feeding 8 billion people, especially as food security around the globe is likely to worsen with climate change.

For this reason, AI company InstaDeep has teamed up with Google Cloud to train a large AI model with more than 20 billion parameters on a dataset of reference genomes for cereal crops and edible vegetables, using the latest generation of Google’s Tensor Processing Units (Cloud TPU v4), which is particularly suited for training efficiency at scale. Our aim is to improve food security and sustainable agriculture by creating a tool that can analyze and predict plants’ agronomic traits from genomic sequences. This will help identify which genes make some crops more nutritious, more efficient to grow, and more resilient and resistant to pests, disease and drought.

Genomic language models for sustainable agriculture

Ever since farming began, we have been, directly or indirectly, trying to breed better crops with higher yields, better resilience and, if we’re lucky, better taste too. For thousands of years, this was done by trial and error, growing crops year-on-year while trying to identify and retain only the most beneficial traits as they naturally arise from evolutionary mutations. Now that we have access to the genomic sequences of plants, we hope to directly identify beneficial genes and predict the effect of novel mutations.

However, the complexity of plant genomes often makes it difficult to identify which variants are beneficial. Revolutionary advances in machine learning (ML) can help us understand the link between DNA sequences and molecular phenotypes. This means we now have precise and cost-effective prediction methods to help us close the gap between genetic information and observable traits. These predictions can help identify functional variants and accelerate our understanding of which genes link to which traits – so we can make better crop selections.

Moreover, thanks to the vast library of available crop genetic sequences, training large models on hundreds of plant genomes means we can transfer knowledge from thoroughly studied species to those that are less understood but important for food production – especially in developing countries. And by doing this digitally, AI can quickly map and annotate the genomes of both common and rare crop variants.

One of the major limitations of traditional ML methods for plant genomics has been that they mostly rely on supervised learning techniques, which need labeled data. Such data is scarce and expensive to collect, severely limiting these methods. Recent advances in natural language processing (NLP), such as Transformer architectures and BERT-style training (Bidirectional Encoder Representations from Transformers), allow scientists to train massive language models on raw text data to learn meaningful representations. This unsupervised learning technique changes the game.
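
To make the BERT-style idea concrete for DNA, here is a minimal, hypothetical sketch of the data preparation step: splitting a sequence into k-mer "words" and masking a fraction of them so a model can be trained to predict the hidden tokens. The 6-mer tokenization and 15% mask rate are illustrative assumptions, not InstaDeep's actual pipeline.

```python
# Minimal, hypothetical sketch of BERT-style pretraining data preparation for DNA.
# The 6-mer tokenization and 15% mask rate are illustrative assumptions only.
import random

VOCAB = {"[PAD]": 0, "[MASK]": 1}

def tokenize(sequence: str, k: int = 6) -> list[int]:
    """Split a nucleotide sequence into non-overlapping k-mer 'words'."""
    ids = []
    for i in range(0, len(sequence) - k + 1, k):
        kmer = sequence[i:i + k]
        ids.append(VOCAB.setdefault(kmer, len(VOCAB)))
    return ids

def mask_tokens(token_ids: list[int], mask_rate: float = 0.15):
    """Randomly hide tokens; a model is then trained to predict the hidden ones."""
    inputs, labels = [], []
    for t in token_ids:
        if random.random() < mask_rate:
            inputs.append(VOCAB["[MASK]"])
            labels.append(t)      # target: the original token
        else:
            inputs.append(t)
            labels.append(-100)   # conventionally ignored by the loss
    return inputs, labels

ids = tokenize("ATGGCGTACGTTAGCATGCATGGCATCGATCG")
masked_inputs, targets = mask_tokens(ids)
```
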
Once learned, the representations can be leveraged to solve complex regression or classification tasks – even when there is a lack of labeled data.

InstaDeep partners with Google Cloud to train the new generation of AI models for genomics on TPUs

Researchers have demonstrated that large language models can be especially effective in proteomics. To understand how this works, imagine reading amino acids as words and proteins as sentences. The treasure trove of raw genomics data – in sequence form – inspired InstaDeep and Google Cloud to apply similar technologies to nucleotides, this time reading them as words and chunks of genomes as sentences. Moreover, NLP research studies showed that the representations the system learned improved in line with the size of the models and datasets. This finding led InstaDeep researchers to train a set of increasingly larger language models on genomics datasets, ranging from 1 billion to 20 billion parameters. Models of 1 billion and 5 billion parameters were trained on a dataset comprising the reference genomes of several edible plants, including fruit, cereals and vegetables, for a total of 75 billion nucleotides.

Recent work has shown that the training dataset must increase in the same proportion as the model capacity. Thus, we created a larger dataset gathering all reference genomes available in the National Center for Biotechnology Information (NCBI) database, including human, animal, non-edible plant and bacteria genomes. This dataset, which we used to train a 20 billion-parameter Transformer model, comprised 700 billion tokens, exceeding the size of most datasets typically used for NLP applications, such as the Common Crawl or Wikipedia datasets. Both teams announced that the 1 billion-parameter model will be shared with the scientific community to further accelerate plant genomics research.

The compact and meaningful representations of nucleotide sequences learned by these models can be used to tackle molecular phenotype prediction problems. To showcase their ability, we trained a model to predict gene function and gene ontology (i.e., a gene’s attributes) for different edible plant species. Early results have demonstrated that this model can predict these characteristics with high accuracy – encouraging us to look deeper at what these models can tell us. Based on these results, we decided to annotate the genomes of three plant species of considerable importance for many developing countries: cassava, sweet potato, and yam. We are working on making these annotations freely available to the scientific community and hope that they will be used to further guide and accelerate new genomic research.

Overcoming scaling challenges with massive models and datasets with Cloud TPUs

The compute requirement for training our 20 billion-parameter model on billions of tokens is massive. While modern accelerators offer impressive peak performance per chip, utilizing this performance often requires tightly coupled hardware and software optimizations. Moreover, maintaining this efficiency when scaling to hundreds of chips presents additional system design challenges. The Cloud TPU’s tightly coupled hardware and software stack is especially well suited to such challenges. The Cloud TPU software stack is based on the XLA compiler, which offers out-of-the-box optimizations (such as compute and communication overlap) and an easy programming model for expressing parallelism.

We successfully trained our large models for genomics by leveraging Google Tensor Processing Units (TPU v4). Our code is implemented with the JAX framework. JAX provides a functional programming-based approach to express computations as functions that can be easily parallelized using JAX APIs powered by XLA. This helped us scale from a single-host (four chips) configuration to a multi-host configuration without having to tackle any of the system design challenges. The TPU’s cost-effective inter- and intra-chip communication capabilities led to almost linear scaling of training time with the number of chips. This allowed us to train the models quickly and efficiently on a grid of 1,024 TPU v4 cores (512 chips).

Conclusion

Ultimately, our hope is that the functional characterization of genomic variants predicted by deep learning models will be critical to the next era in agriculture, which will largely depend on genome editing and analysis. We envisage that novel approaches, such as in-silico mutagenesis – the assessment of all possible changes in a genomic region by a computer model – will be invaluable in prioritizing mutations that improve plant fitness and guiding crop improvements. Attempting similar work in wet-lab experiments would be difficult to scale and nearly impossible in nature. By making our current and future annotations available to the research community, we also hope to help democratize breeding technologies so that they can benefit all of global agriculture.

Further Reading

To learn more about the unique features of the Cloud TPU v4 hardware and software stack, we encourage readers to explore the Cloud TPU v4 announcement. To learn more about scaling characteristics, please see this benchmark, and finally we recommend reading the PJIT Introduction to get started with JAX and SPMD parallelism on Cloud TPU.

This research was made possible thanks to the support of Google’s TPU Research Cloud (TRC) Program, which enabled us to use the Cloud TPU v4 chips that were critical to this work.
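
As a small illustration of this programming model, here is a sketch of a data-parallel training step written with jax.pmap. The model, loss, and learning rate are toy placeholders rather than InstaDeep's actual training code, and the production setup described above would use SPMD partitioning (pjit) rather than this simplified form.

```python
# Toy data-parallel training step with jax.pmap; model, loss, and learning rate are
# placeholders, not InstaDeep's actual training code.
import functools
import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    # Stand-in loss: mean squared error of a linear model.
    preds = jnp.dot(batch["inputs"], params["w"])
    return jnp.mean((preds - batch["targets"]) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, batch):
    grads = jax.grad(loss_fn)(params, batch)
    # Average gradients across devices before applying the update.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

n = jax.local_device_count()
params = jax.device_put_replicated({"w": jnp.zeros((16, 1))}, jax.local_devices())
batch = {"inputs": jnp.ones((n, 8, 16)), "targets": jnp.zeros((n, 8, 1))}
params = train_step(params, batch)
```
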
Source: Google Cloud Platform

BigQuery’s performance powers AutoTrader UK’s real-time analytics

Editor’s note: We’re hearing today from Auto Trader UK, the UK and Ireland’s largest online automotive marketplace, about how BigQuery’s robust performance has become the data engine powering real-time inventory and pricing information across the entire organization.

Auto Trader UK has spent nearly 40 years perfecting our craft of connecting buyers and sellers of new and used vehicles. We host the largest pool of sellers, listing more than 430,000 cars every day, and attract an average of over 63 million cross-platform visits each month. For the more than 13,000 retailers who advertise their cars on our platform, it’s important for them (and their customers) to be able to quickly see the most accurate, up-to-date information about what cars are available and their pricing.

BigQuery is the engine feeding our data infrastructure

Like many organizations, we started developing our data analytics environment with an on-premises solution and then migrated to a cloud-based data platform, which we used to build a data lake. But as the volume and variety of data we collected continued to increase, we started to run into challenges that slowed us down. We had built a fairly complex pipeline to manage our data ingestion, which relied on Apache Spark to ingest data from a variety of data sources from our online traffic and channels. However, ingesting data from multiple data sources in a consistent, fast, and reliable way is never a straightforward task.

Our initial interest in BigQuery came after we discovered it integrated with a more robust event management tool for handling data updates. We had also started using Looker for analytics, which already connected to BigQuery and worked well together. As a result, it made sense to replace many parts of our existing cloud-based platform with Google Cloud Storage and BigQuery.

Originally, we had only anticipated using BigQuery for the final stage of our data pipeline, but we quickly discovered that many of our data management jobs could take place entirely within a BigQuery environment. For example, we use the command-line tool DBT, which offers support for BigQuery, to transform our data. It’s much easier for our developers and analysts to work with than Apache Spark, since they can work directly in SQL. In addition, BigQuery allowed us to further simplify our data ingestion. Today, we mainly use Kafka Connect to sync data sources with BigQuery.

Looker + BigQuery puts the power of data in the hands of everyone

When our data was in the previous data lake architecture, it wasn’t easy to consume. The complexity of managing the data pipeline and running Spark jobs made it nearly impossible to expose it to users effectively. With BigQuery, ingesting data is not only easier, we also have multiple ways to consume it through easy-to-use languages and interfaces. Ultimately, this makes our data more useful to a much wider audience.

Now that our BigQuery environment is in place, our analysts can query the warehouse directly using the SQL interface. In addition, Looker provides an even easier way for business users to interact with our data. Today, we have over 500 active users on Looker—more than half the company. Data modeled in BigQuery gets pushed out to our customer-facing applications, so that dealers can log into a tool and manage stock or see how their inventory is performing.
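
To give a flavor of that direct SQL access, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical examples, not Auto Trader's actual schema.

```python
# Minimal sketch: querying listings data in BigQuery from Python.
# The project, dataset, table, and column names below are hypothetical examples.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT make, model, COUNT(*) AS listings, AVG(price) AS avg_price
    FROM `my-analytics-project.marketplace.vehicle_listings`
    WHERE listing_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY make, model
    ORDER BY listings DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.make, row.model, row.listings, round(row.avg_price, 2))
```
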
Striking a balance between optimization and experimentation

Performance in BigQuery can be almost too robust: it will power through even very unoptimized queries. When we were starting out, we had a number of dashboards running very complex queries against data that was not well modeled for the purpose, meaning every tile was demanding a lot of resources. Over time, we have learned to model data more appropriately before making it available to end-user analytics. With Looker, we use aggregate awareness, which allows users to run common query patterns across large data sets that have been pre-aggregated. The result is that the number of interactively run queries is relatively small.

The overall system comes together to create a very effective analytics environment — we have the flexibility and freedom to experiment with new queries and get them out to end users even before we fully understand the best way to model. For more established use cases, we can continue optimizing to save our resources for new innovations. BigQuery’s slot reservation system also protects us from unanticipated cost overruns when we are experimenting.

One example of where this played out was when we rolled new analytic capabilities out to our sales teams. They wanted to use analytics to drive conversations with customers in real time, to demonstrate how advertisements were performing on our platform and show the customer’s return on their investment. When we initially released those dashboards, we saw a huge jump in usage of the slot pool. However, we were able to reshape the data quickly and make it more efficient to run the needed queries by matching our optimizations to the pattern of usage we were seeing.

Enabling decentralized data management

Another change we experienced with BigQuery is that business units are increasingly empowered to manage their own data and derive value from it. Historically, we had a centralized data team doing everything from ingesting data to modeling it to building out reports. As more people adopt BigQuery across Auto Trader, distributed teams build up their own analytics and create new data products. Recent examples include stock inventory reporting, trade marketing and financial reporting.

Going forward, we are focused on expanding BigQuery out into a self-service platform that enables analysts within the business to directly build what they need. Our central data team will then evolve into a shared service, focused on maintaining the data infrastructure and adding abstraction layers where needed so it is easier for those teams to perform their tasks and get the answers they need.

BigQuery kicks our data efforts into overdrive

At Auto Trader UK, we initially planned for BigQuery to play a specific part in our data management solution, but it has become the center of our data ingestion and access ecosystem. The robust performance of BigQuery allows us to get prototypes out to business users rapidly, which we can then optimize once we fully understand what types of queries will be run in the real world. The ease of working with BigQuery through a well-established and familiar SQL interface has also enabled analysts across our entire organization to build their own dashboards and find innovative uses for our data without relying on our core team. Instead, they are free to focus on building an even richer toolset and data pipeline for the future.
Source: Google Cloud Platform

Seer Interactive gets the best marketing results for their clients using Looker

Marketing strategies based on complex and dynamic data get results. However, it’s no small task to extract easy-to-act-on insights from increasing volumes and ever-evolving sources of data, including search engines, social media platforms, third-party services, and internal systems. That’s why organizations turn to us at Seer Interactive. We provide every client with differentiating analysis and analytics, SEO, paid media, and other channels and services that are based on fresh and reliable data, not stale data or just hunches.

More data, more ways

As digital commerce and footprints have become foundational for success over the past five years, we’ve experienced exponential growth in clientele. Keeping up with the unique analytics requirements of each client has required a fair amount of IT agility on our part. After outgrowing spreadsheets as our core BI tool, we adopted a well-known data visualization app, only to find that it couldn’t scale with our growth and increasingly complex requirements either. We needed a solution that would allow us to pull hundreds of millions of data signals into one centralized system to give our clients as much strategic information as possible, while increasing our efficiency.

After outlining our short- and long-term solution goals, we weighed the trade-offs of different designs. It was clear that the data replication required by our existing BI solution design was unsustainable. Previously, all our customer-facing teams created their own insights. More than 200 consultants were spending hours each week pulling and compiling data for our clients, and then creating their own custom reports and dashboards. As data sets grew larger and larger, our desktop solutions simply didn’t have the processing power required to keep up, and we had to invest significant money in training any new employees in these complex BI processes. Our ability to best serve our customers was being jeopardized because we were having trouble serving basic needs, let alone advanced use cases.

We selected Looker, Google Cloud’s business intelligence solution, as our BI platform. As the direct query leader, Looker gives us the best available capabilities for real-time analytics and time to value. Instead of lifting and shifting, we designed a new, consolidated data analytics foundation with Looker that uses our existing BigQuery platform, which can scale with any amount and type of data. We then identified and tackled quick-win use cases that delivered immediate business value for our team and clients.

Meet users where they are in skills, requirements, and preferences

One of our first Looker projects involved redesigning our BI workflows. We built dashboards in Looker that automatically serve up the data our employees need, along with filters they use to customize insights and set up custom alerts. Users can now explore information on their own to answer new questions, knowing insights are reliable because they’re based on consistent data and definitions. More technical staff create ad hoc insights with governed datasets in BigQuery and use their preferred visualization tools like Looker Studio, Power BI, and Tableau. We’ve also duplicated some of our data lakes to give teams a sandbox that they can experiment in using Looker embedded analytics. This enables them to quickly see more data and uncover new opportunities that provide value to our clients.
Our product development team is also able to build and test prototypes more quickly, letting us validate hypotheses for a subsection of clients before making them available across the company. And because Looker is cloud based, all our users can analyze as much data as they want without exceeding the computing power of their laptops.

Seamless security and faster development

We leverage BigQuery’s access and permissioning capabilities. Looker can inherit data permissions directly from BigQuery and multiple third-party CRMs, so we’ve also been able to add granular governance strategies within our Looker user groups. This powerful combination ensures that data is accessed only by users who have the right permissions. And Looker’s unique “in-database” architecture means that we aren’t replicating and storing any data on local devices, which reduces both our time and costs spent on data management while bolstering our security posture.

Better services and hundreds of thousands of dollars in savings

Time spent on repetitive tasks adds up over months and years. With Looker, we automate reports and alerts that people frequently create. Not only does this free up teams to discover insights that they previously wouldn’t have time to pinpoint, but they have fresh reports whenever they are needed. For instance, we automated the creation of multiple internal dashboards and external client analyses that utilize cross-channel data. In the past, before we had automation capabilities, we used to only generate these analyses up to four times a year. With Looker, we can scale and automate refreshed analyses instantly—and we can add alerts that flag trends as they emerge. We also use Looker dashboards and alerts to improve project management by identifying external issues, such as teams who are nearing their allocated client budgets too quickly, or internal retention concerns, like employees who aren’t taking enough vacation time.

“Using back-of-the-napkin math, let’s say every week 50 different people spend at least one hour looking up how team members are tracking their time. By building a dashboard that provides time-tracking insights at a glance, we save our collective team 2,500 hours a year. And if we assume the hourly billable rate is $200 an hour, we’re talking $500,000 in savings—just from one dashboard.” – Drew Meyer, Director of Product, Seer Interactive

The insights and new offerings to stay ahead of trends

Looker enables us to deliver better experiences for our team members and clients that weren’t possible even two years ago, including faster development of analytics that improve our services and processes. For example, when off-the-shelf tools could not deliver the keyword-tracking insights and controls we required to deliver differentiating SEO strategies for clients, we created our own keyword rank tracking application using Looker embedded analytics. Our application provides deep-dive SEO data-exploration capabilities and gives teams unique flexibility in analyzing data while ensuring accurate, consistent insights. Going forward, we’ll continue adding new insights, data sources, and automations with Looker to create even better-informed marketing strategies that fuel our clients’ success.
Source: Google Cloud Platform

Voltus and Azure—no power integrity challenge too big to solve

This post was co-authored by Giancarlo DiPasquale, Microsoft Director, Semiconductor & EDA; Rajat Chaudhry, Product Management Director, Cadence; and Adrian Lao, Senior Software Architect, Cadence.

With the advent of AI and hyperscale designs on advanced nodes, it is common to see designs in the over-50-billion-transistor category, with tens of billions to more than 100 billion nodes in the on-chip power network. This explosion in scale requires solutions that meet the following requirements:

High performance and capacity.
Elasticity.
The ability to manage varying compute resource requirements.
Low cost to manage the exponential increase in compute requirements.

Voltus on Azure

Voltus is a leading IC Power Integrity Signoff Solution from Cadence Design Systems. It is used by top chip design companies to verify the reliability of their on-chip power networks and enables power integrity and thermal analysis at the system level.

Microsoft Azure provides a cloud-based high-performance computing (HPC) infrastructure with security, reliability, and scalability that is a natural fit for electronic design automation (EDA) workloads, especially power integrity analysis.

Azure supports both a hybrid model and an all-in model. In the hybrid model, customers mainly use their on-premises infrastructure but can add to their compute and storage capacity on an on-demand basis to satisfy peak demand. The hybrid approach is typically used by customers new to using the cloud. In an all-in model, customers primarily use Azure infrastructure for all their EDA workloads. The all-in model is a great use case for startups and customers who really want to optimize their costs while taking advantage of the scale and flexibility of Azure. Voltus supports both the hybrid and the all-in model with Azure.

Managing variable compute costs through the design cycle

Using Azure can help customers optimize their costs as compute requirements will vary through the design cycle with lower requirements early on and peak demand near signoff. This is in contrast to the high fixed cost of on-premises infrastructure.

Running Voltus on Azure

We used block-level and full-chip test cases to demonstrate our results.

The Azure team selected Edsv4 virtual machines (VMs) based on second-generation Intel Xeon Platinum 8272CL (Cascade Lake). These VMs are well suited for both compute and memory-intensive workloads.

The Voltus use case setup on Azure is illustrated in Figure 1.

Figure 1

High performance and elasticity

Voltus has a fully distributed and scalable architecture. Every step of the power integrity analysis flow, from design parsing to the solver, is fully distributed and scalable. Data from each part of the automatically partitioned design is assigned to compute nodes on the compute infrastructure for various steps in the analysis. This process is managed by a master machine as illustrated in Figure 2.

Figure 2
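
The partition-and-dispatch pattern described above can be sketched generically as a master/worker loop. The following is a simplified, hypothetical illustration of that pattern, not Cadence's actual implementation.

```python
# Simplified, hypothetical master/worker sketch of the partition-and-dispatch pattern;
# it does not reflect Voltus internals.
from concurrent.futures import ProcessPoolExecutor

def partition_design(design, num_partitions):
    """Split the design's nodes into roughly equal chunks (toy partitioner)."""
    nodes = design["nodes"]
    size = max(1, len(nodes) // num_partitions)
    return [nodes[i:i + size] for i in range(0, len(nodes), size)]

def analyze_partition(nodes):
    """Stand-in for the per-partition analysis step."""
    return sum(node["current"] * node["resistance"] for node in nodes)

def run_analysis(design, num_workers):
    partitions = partition_design(design, num_workers)
    # The "master" dispatches partitions to workers and aggregates partial results.
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        return sum(pool.map(analyze_partition, partitions))

if __name__ == "__main__":
    design = {"nodes": [{"current": 0.01, "resistance": 2.0} for _ in range(10_000)]}
    print(run_analysis(design, num_workers=4))
```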

The level of distribution is user-controlled, which allows the user to take advantage of compute elasticity and manage performance. As Figure 3 illustrates for both the block and full chip run, we observe near-linear scalability in performance with respect to the number of CPUs.

Figure 3

Higher performance with lower costs

Believe it or not, that is indeed true. The elasticity of the Voltus architecture enables the tool to run faster with a higher number of CPUs, and since the CPUs are used for a shorter amount of time, the total cost drops to an optimal point. This can be seen at both the block and full chip levels, as illustrated in Figure 4. This is a win-win situation where you can improve your performance and reduce your costs.

Figure 4
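
To see why such an optimum can exist, consider a purely illustrative back-of-the-envelope model: if, on top of per-CPU pricing, some cost accrues for the whole duration of the run (for example a coordinating VM, a license, or engineer time), then adding CPUs lowers total cost until the extra CPUs cost more than the time they save. All numbers below are made up and are not Voltus or Azure benchmark data.

```python
# Purely illustrative toy model; all numbers are made up, not Voltus/Azure benchmarks.
FIXED_HOURLY_COST = 5.00      # assumed cost per hour regardless of CPU count
RATE_PER_CPU_HOUR = 0.10      # assumed price per CPU-hour
SERIAL_HOURS = 2.0            # assumed non-parallelizable runtime
PARALLEL_CPU_HOURS = 400.0    # assumed perfectly parallelizable work

def runtime_hours(cpus: int) -> float:
    """Amdahl-style runtime: serial part plus parallel work spread over the CPUs."""
    return SERIAL_HOURS + PARALLEL_CPU_HOURS / cpus

def total_cost(cpus: int) -> float:
    return (FIXED_HOURLY_COST + RATE_PER_CPU_HOUR * cpus) * runtime_hours(cpus)

for cpus in (8, 16, 32, 64, 128, 256, 512):
    print(f"{cpus:4d} CPUs: {runtime_hours(cpus):6.1f} h, ${total_cost(cpus):8.2f}")
```

In this toy model the cost falls until roughly 100 CPUs and rises afterwards, which mirrors the shape of the curve the benchmarks describe.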

The magic of Voltus hierarchical analysis

Designers can further increase their performance and reduce cost by using Voltus XM hierarchical analysis. With Voltus XM, block-level models can be used instead of the full flattened design as illustrated in Figure 5. This method significantly reduces node count while maintaining accuracy. We can even further reduce our runtime and costs with Voltus XM and Azure. We observe a 4.5x reduction in cost and a 2x improvement in performance over the flat run for the full chip test case (Figure 6).

Figure 5

Figure 6

We have demonstrated the benefit of using Voltus on Azure at both the block level and the chip level. These benchmarks show that customers not only benefit from higher performance using elastic compute, but can also find an optimal point for performance and cost. Using Voltus XM hierarchical analysis further improves cost and performance. With Voltus on Azure, semiconductor companies have the ideal solution to verify power integrity for their most complex designs.

Learn more about Voltus on Azure

View our new high performance computing hub on Microsoft Docs
Read more about Azure HPC + AI

Please contact your Cadence sales representative for help enabling Voltus on Azure.


#AzureHPCAI
Source: Azure

Microsoft Cost Management updates—November 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Use tag inheritance to group by subscription and resource group tags.
View cost change since previous period in the cost analysis preview.
New cost recommendations for virtual machine scale sets.
What's new in Cost Management Labs.
New ways to save money with Microsoft Cloud.
New videos and learning opportunities.
Documentation updates.
Join the Microsoft Cost Management team.

Let's dig into the details.

Use tag inheritance to group by subscription and resource group tags

As organizations grow their cloud usage, they want the ability to slice their cloud costs in multiple ways to better manage and optimize their cloud costs.

For example—finance teams may want costs grouped by department for cost allocation reasons, making each department responsible for the costs depending on their cloud usage. Engineering teams typically want to group costs by application or environment to understand where and how much they’re spending.

Tagging is an effective mechanism to group your costs but requires tagging every resource and relying on resource providers to support and emit tags with usage in the billing pipeline. To overcome these limitations and make it easier to use tags for cost reporting, you can now use the Cost Management tag inheritance preview to apply resource group and subscription tags to resource usage automatically in cost details.

With tag inheritance enabled, you can easily apply a single set of tags to your subscriptions rather than enforcing tag policies and tracking adoption, only to be left with some resources that still don’t include tags in their usage data. This covers broad scenarios like departmental chargeback or environment. To tag lower-level data like applications, you can apply tags to each resource group.
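
Conceptually, tag inheritance behaves like a tag merge applied to each cost record, with a conflict rule for names that already exist on the resource. The sketch below only illustrates that concept; the field names are made up and this is not how the billing pipeline is implemented.

```python
# Illustration of the tag-inheritance concept only; record and field names are made up,
# and this is not how the Cost Management billing pipeline is implemented.
def apply_inherited_tags(cost_record, subscription_tags, resource_group_tags,
                         prefer_resource_tags=True):
    """Merge subscription and resource group tags into a cost record's tags."""
    inherited = {**subscription_tags, **resource_group_tags}
    resource_tags = dict(cost_record.get("tags", {}))
    if prefer_resource_tags:
        merged = {**inherited, **resource_tags}   # on conflict, keep the resource's tag
    else:
        merged = {**resource_tags, **inherited}   # on conflict, the inherited tag wins
    return {**cost_record, "tags": merged}

record = {"resourceId": "/subscriptions/.../vm1", "cost": 12.34, "tags": {"env": "dev"}}
print(apply_inherited_tags(record, {"department": "finance"}, {"app": "billing"}))
```
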
Tag inheritance can be enabled on any Enterprise Agreement (EA) or Microsoft Customer Agreement (MCA) subscription. To enable tag inheritance across all subscriptions, enable it from the EA billing account or MCA billing profile.

You can enable tag inheritance in Cost Management from Cost analysis by selecting Configure at the top of the page or by opening Cost Management directly and selecting Manage billing account (or billing profile or subscription) from the menu. On the management settings page, you’ll see a Tag inheritance (preview) option with the current status.

Select Edit to enable tag inheritance and decide how to handle conflicts when tag names match. Once enabled, you should start to see inherited tags in cost details APIs and experiences, like Cost analysis and scheduled exports, in 8–24 hours.

To learn more, see Group and allocate costs using tag inheritance.

View cost change since previous period in the Cost analysis preview

Perhaps the most powerful aspect of the cloud is the flexibility it offers. But that flexibility comes at a cost–and while you can always get a good cost estimate from the Azure pricing calculator, most of us aren’t thinking about cost when we’re focused on solving a problem. This is where cloud computing becomes challenging–if we don’t understand the cost implications of the changes we make, we may very well get a surprise at the end of the month. To help spot these changes sooner, you can now find the percentage change since the previous period in the Cost analysis preview.

When your view is showing three months or less, the difference is calculated as the cost from the start of the period through yesterday, compared to the same days from the previous period. If showing more than three months, the date range uses the first month through the last month. If the current day or month are not part of the period you’re looking at (such as last month), the entire period is compared to the previous period.
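
As a rough illustration of the first case (a view of three months or less), the comparison can be sketched as follows. This is an approximation for illustration, not the actual Cost analysis implementation; in particular, how the previous period is anchored may differ.

```python
# Illustrative sketch of the "three months or less" case described above; not the
# actual Cost analysis implementation (the previous period is approximated here as
# the window of equal length immediately before the current one).
import datetime

def change_since_previous_period(daily_costs, period_start, today):
    """Period-to-date cost (start through yesterday) vs. the same days last period."""
    yesterday = today - datetime.timedelta(days=1)
    days = [period_start + datetime.timedelta(d)
            for d in range((yesterday - period_start).days + 1)]
    current = sum(daily_costs.get(day, 0.0) for day in days)
    previous = sum(daily_costs.get(day - datetime.timedelta(days=len(days)), 0.0)
                   for day in days)
    return (current - previous) / previous * 100 if previous else None

costs = {datetime.date(2022, 11, d): 10.0 + d for d in range(1, 16)}
costs.update({datetime.date(2022, 10, d): 10.0 for d in range(1, 32)})
print(change_since_previous_period(costs, datetime.date(2022, 11, 1),
                                   datetime.date(2022, 11, 15)))
```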

Pair this with the average cost KPI and anomaly insights, and the Cost analysis preview gives you several new ways to catch unexpected changes in your cost patterns. If you aren’t using the Cost analysis preview yet, I recommend checking it out. We’re currently rolling out another change to help you start with the best view, so it would be good to share your thoughts early. Give the Cost analysis preview a shot and let us know what you think using the rating button at the bottom.

New cost recommendations for virtual machine scale sets

Cost optimization is on everyone’s minds these days. With a huge uptick in the usage of virtual machine scale sets (VMSS) over recent years, ensuring efficient use of VMSS resources is more important than ever. And as with virtual machines, one of the best ways to drive efficiency of VMSS is by right-sizing or deleting underutilized resources. To that end, Azure Advisor now includes cost optimization recommendations for VMSS.

Given the scale at which VMSS runs with multiple virtual machine instances, right-sizing is even more critical. So not only is it possible to over-provision the size or stock keeping unit (SKU) of the virtual machines, it’s also possible to over-provision the instances relative to the needs of the workloads running on these virtual machines. VMSS may also be used as the underlying infrastructure for Service Fabric, which has certain recommendations on the number of instances to be used, based on the reliability/durability tier.

Azure Advisor takes all these complexities into account while generating recommendations that are sure to save on your costs, while not impacting the performance or reliability of your workloads.

Overall, these recommendations represent close to $23 million in potential monthly savings! We want to help you do more with Azure for less.

To learn more, see Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

New: Change since previous period in the cost analysis preview—Now available in the public portal. 
Show the percentage difference in cost compared to the previous period at the top of the cost analysis preview. You can opt in using Try Preview.

New: Recent and pinned views in the cost analysis preview—Now enabled by default in Labs. 
Show all classic and preview views in the cost analysis preview and streamline navigation by prioritizing recently used and pinned views. You can see this in the Cost Management Labs or by opting in using Try Preview.
New: Recommendations view.
View a summary of cost recommendations that help you optimize your Azure resources in the cost analysis preview. You can opt in using Try Preview.
Forecast in the cost analysis preview.
Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.
Group related resources in the cost analysis preview.
Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.
Charts in the cost analysis preview. 
View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try Preview.
View cost for your resources. 
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.
Change scope from the menu.
Change scope from the menu for quicker navigation. You can opt in using Try Preview.

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal or Microsoft 365 admin center. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money in the Microsoft Cloud

Lots of cost optimization improvements happened over the last month. Here are some of the notable general availability offers you might be interested in:

Virtual Machine software reservations.
Azure Premium SSD v2 Disk Storage.
Auto-shutdown for Machine Learning compute instances.
New node sizing for Azure VMware Solution.
Azure Database for PostgreSQL in China North 3 and China East 3.
Azure Stream Analytics in Qatar Central.

And here are two new previews:

Azure HX and HBv4 virtual machines for HPC.
Azure Network Watcher for hybrid networks.

New videos and learning opportunities

Cost management and optimization were popular topics at Microsoft Ignite last month. Explore all 76 sessions with topics covering Azure, Microsoft 365, and more.

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

There were plenty of minor documentation updates. Here are a few you might be interested in:

New: Understand reservations discount for Azure SQL Edge.
New: Error when you create multiple subscriptions.
Updated: Overview of Cost Management + Billing–Complete rewrite to offer a more detailed overview.
Updated: How an Azure savings plan discount is applied–Covered how discounts are applied when both savings plans and reservations are available.
Updated: Azure portal administration for direct Enterprise Agreements–Added details about how to enable.
Updated: Reservation discount for Azure Data Explorer–Added details about stopping or suspending Data Explorer clusters.
Updated: Transfer Azure subscriptions between subscribers and CSPs–Added details about MCA subscription transfers.
9 updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions.

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best in class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
Source: Azure

November Extensions Roundup: Kubernetes Observability, API Testing, and More

November’s been a busy month, and we’ve got three new Docker Extensions for you to try! Docker Extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s take a look at some of the latest ones.

And if you’d like to see everything available, check out our full Extensions Marketplace!

Look inside Kubernetes clusters with Calyptia Core

Do you struggle to understand what’s happening inside your Kubernetes clusters? Do you need help automating logging and a simpler way to aggregate your observability data? If the answer is yes, the Calyptia Core extension is definitely worth trying! The extension allows developers to build, configure, and manage highly performant Kubernetes-based observability data pipelines with point-and-click ease.

With the extension, you can:

Eliminate the complexity of configuring and maintaining your observability pipeline

Create an integration between Calyptia Core and your local Docker Desktop Kubernetes cluster

Automate logging

Create custom data pipelines with support for user-defined processing rules

Check out this video to watch it in action:

Automate API testing with Postman’s Newman

Testing and debugging is an important part of any developer’s workflow. While working with APIs, you may need to automate API testing to run tests locally, run collections to assess the current status and health of your API, log test results and filter by test failures to debug unexpected API behavior, or run collections to execute an API workflow against different environment configurations.

Collections are a great way to handle these needs. With Postman collections and Postman’s Newman extension, you can run collections during development in both Docker Desktop and the command line.

See the resource usage of your containers

Docker stats is a great command for making it simple to see the amount of resources your containers are using. But what happens if you need to see resource usage over time? How do you see how much CPU and memory a Compose project is using?

That’s where the Resource Usage extension comes in. With this extension, you can:

Analyze the most resource-intensive containers or Docker Compose projects

Observe how resource usage changes over time for containers

View how much CPU, memory, network, and disk space your containers use
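
For comparison, here is roughly what collecting those numbers by hand looks like with the Docker SDK for Python, polling one-shot stats snapshots; the nested field names follow the Docker Engine API stats format, and the polling loop is only an illustration of tracking usage over time.

```python
# Polling one-shot container stats with the Docker SDK for Python; the nested field
# names follow the Docker Engine API stats format.
import time
import docker

client = docker.from_env()

def sample_usage():
    rows = []
    for container in client.containers.list():
        stats = container.stats(stream=False)  # single stats snapshot
        cpu_total = stats.get("cpu_stats", {}).get("cpu_usage", {}).get("total_usage", 0)
        mem_bytes = stats.get("memory_stats", {}).get("usage", 0)
        rows.append((container.name, cpu_total, mem_bytes))
    return rows

# Take a few samples over time to see how usage changes.
for _ in range(3):
    for name, cpu_total, mem_bytes in sample_usage():
        print(f"{name}: cpu_total={cpu_total} mem_bytes={mem_bytes}")
    time.sleep(5)
```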

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more info on extensions:

Install Docker Desktop for Mac, Windows, or Linux to try extensions yourself.

Visit our Extensions Marketplace to see all of our extensions.

Build your own extension with our Extensions SDK.

Source: https://blog.docker.com/feed/

AWS Microservice Extractor for .NET now offers AI-powered automated refactoring recommendations

AI-powered automated refactoring recommendations are now available in AWS Microservice Extractor for .NET, an assistive tool that simplifies the process of refactoring monolithic .NET applications into independent microservices. With automated recommendations, developers can begin refactoring an older monolithic application even if they are unfamiliar with the application’s original architecture or with features retrofitted over the years. The prescriptive guidance provided by automated recommendations in Microservice Extractor enables developers to cut in half the time needed to identify and extract microservices from legacy applications, and to accelerate the overall transformation of enterprise applications to run in the cloud.
Source: aws.amazon.com

AWS Migration Hub Refactor Spaces is now integrated with CloudHedge OmniDeq to accelerate container modernization

Starting today, customers can use Migration Hub Refactor Spaces with CloudHedge OmniDeq to replatform applications into containers and deploy them directly into refactor environments. Customers can now begin modernizing within minutes, without having to build or manage additional AWS infrastructure.
Source: aws.amazon.com