New Tools for More Successful Editing Adventures

Spacing in Galleries
Lock Those Blocks
Did You Notice the Table of Contents on This Post?

Spacing in Galleries

You now have the power to control how much space lives between your images, both horizontally and vertically. This lets you present a collection of photos in a variety of ways. You can even eliminate all of the space to give it a collage effect.

Lock Those Blocks

Have you ever accidentally messed up a block or pattern? While there’s always some comfort in knowing you can “undo” most accidents, there’s now a better way to prevent them from happening in the first place.

You can now lock blocks from within the editor or the list view. This makes it easier than ever to protect components, sections, or even full layouts.

When locking a block, you can choose to prevent it from being moved, from being removed, or both. You can also look forward to the ability to lock “inner blocks,” which will make it easier to lock any number of blocks you have set up within a layout block like the Group, Row, or Stack blocks.
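As a rough sketch of what this looks like under the hood (assuming the standard block-markup format, where lock settings are stored as a block attribute), a locked paragraph might be serialized like this:

```
<!-- wp:paragraph {"lock":{"move":true,"remove":true}} -->
<p>This paragraph cannot be moved or removed in the editor.</p>
<!-- /wp:paragraph -->
```

The `move` and `remove` flags correspond to the two locking choices described above.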

Did You Notice the Table of Contents on This Post?

That’s right, it’s finally here. The table of contents block automatically updates as you edit your content, giving your users a direct path to the content they are most interested in.

What other features have you discovered that make creating and editing content easier for you? Share your tips and tricks in the comments.
Source: RedHat Stack

Sharing is caring: How GPU sharing on GKE saves you money

Developers and data scientists are increasingly turning to Google Kubernetes Engine (GKE) to run demanding workloads like machine learning, visualization/rendering, and high-performance computing, leveraging GKE’s support for NVIDIA GPUs. In the current economic climate, customers are under pressure to do more with fewer resources, and cost savings are top of mind. To help, in July we launched a GPU time-sharing feature on GKE that lets multiple containers share a single physical GPU, thereby improving its utilization. In addition to GKE’s existing support for multi-instance GPUs on NVIDIA A100 GPUs, this feature extends the benefits of GPU sharing to all families of GPUs on GKE. Contrast this with open source Kubernetes, which only allows allocation of one full GPU per container. For workloads that only require a fraction of the GPU, this results in under-utilization of the GPU’s massive computational power. Examples of such applications include notebooks and chat bots, which stay idle for prolonged periods and, when active, consume only a fraction of a GPU. Underutilized GPUs are an acute problem for many inference workloads such as real-time advertising and product recommendations. Since these applications are revenue-generating, business-critical, and latency-sensitive, the underlying infrastructure needs to handle sudden load spikes gracefully. While GKE’s autoscaling feature comes in handy, not being able to share a GPU across multiple containers often leads to over-provisioning and cost overruns.

Time-sharing GPUs in GKE

GPU time-sharing works by allocating time slices to containers sharing a physical GPU in a round-robin fashion. Under the hood, time-slicing works by context switching among all the processes that share the GPU. At any point in time, only one container can occupy the GPU, but at a fixed time interval, the context switch ensures that each container gets a fair time slice.
The great thing about time-slicing is that if only one container is using the GPU, it gets the full capacity of the GPU. If another container is added to the same GPU, then each container gets 50% of the GPU’s compute time. This means time-sharing is a great way to oversubscribe GPUs and improve their utilization. By combining GPU sharing capabilities with GKE’s industry-leading auto-scaling and auto-provisioning capabilities, you can scale GPUs up or down automatically, offering superior performance at lower cost. Early adopters of time-sharing GPU nodes are using the technology to turbocharge their use of GKE for demanding workloads. San Diego Supercomputing Center (SDSC) benchmarked the performance of time-sharing GPUs on GKE and found that even for the low-end T4 GPUs, sharing increased job throughput by about 40%. For the high-end A100 GPUs, GPU sharing offered a 4.5x throughput increase, which is truly transformational.

NVIDIA multi-instance GPUs (MIG) in GKE

GKE’s GPU time-sharing feature is complementary to multi-instance GPUs, which allow you to partition a single NVIDIA A100 GPU into up to seven instances, improving GPU utilization and reducing your costs. Each instance, with its own high-bandwidth memory, cache, and compute cores, can be allocated to one container, for a maximum of seven containers per NVIDIA A100 GPU. Multi-instance GPUs provide hardware isolation between workloads, and consistent and predictable QoS for all containers running on the GPU.

Time-sharing GPUs vs. multi-instance GPUs

You can configure time-sharing on any NVIDIA GPU on GKE, including the A100. Multi-instance GPUs are only available on A100 accelerators. If your workloads require hardware isolation from other containers on the same physical GPU, you should use multi-instance GPUs. A container that uses a multi-instance GPU instance can only access the CPU and memory resources available to that instance.
As such, multi-instance GPUs are better suited for when you need predictable throughput and latency for parallel workloads. But if fewer containers are running on a multi-instance GPU than there are available instances, the remaining instances sit unused. With time-sharing, on the other hand, context switching lets every container access the full power of the underlying physical GPU, so if only one container is running, it still gets the full capacity of the GPU. Time-shared GPUs are ideal for burstable workloads and workloads that need only a fraction of a GPU’s power. Time-sharing allows a maximum of 48 containers to share a physical GPU, whereas multi-instance GPUs on the A100 allow a maximum of 7 partitions. If you want to maximize your GPU utilization, you can configure time-sharing for each multi-instance GPU partition. You can then run multiple containers on each partition, with those containers sharing access to the resources on that partition.

Get started today

The combination of GPUs and GKE is proving to be a real game-changer. GKE brings auto-provisioning, autoscaling, and management simplicity, while GPUs bring superior processing power. With the help of GKE, data scientists, developers, and infrastructure teams can build, train, and serve their workloads without having to worry about underlying infrastructure, portability, compatibility, load balancing, and scalability issues. And now, with GPU time-sharing, you can match your workload acceleration needs with right-sized GPU resources. Moreover, you can leverage the power of GKE to automatically scale the infrastructure to efficiently serve your acceleration needs while delivering a better user experience and minimizing operational costs.
To get started with time-sharing GPUs in GKE, check out the documentation.
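As a minimal sketch of enabling time-sharing (the cluster name, zone, machine type, and client count below are placeholder assumptions; verify the flags against the current GKE documentation):

```shell
# Sketch: create a GKE node pool whose T4 GPUs can be time-shared
# by up to two containers. All names and values are placeholders.
gcloud container node-pools create gpu-timeshare-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1,gpu-sharing-strategy=time-sharing,max-shared-clients-per-gpu=2
# Pods on this pool then request the shared GPU as a normal resource:
#   resources:
#     limits:
#       nvidia.com/gpu: 1
```

With `max-shared-clients-per-gpu=2`, two such pods can land on one physical GPU and split its compute time as described above.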
Source: Google Cloud Platform

SevenRooms serves up personalized hospitality, food, and beverage services with Google Cloud

Finding ways to increase customer loyalty and profitability in a post-COVID world is top of mind for hotels, bars, and restaurants. Unfortunately, many food and beverage service providers struggle to deliver the personalized experiences that keep guests coming back for more. The reality is that most traditional hospitality apps offer only limited insight into guest activities, preferences, underlying operating costs, and other essential details. We built SevenRooms to help food and beverage operators create truly memorable experiences by managing and personalizing every step of the guest journey. With SevenRooms, restaurants in more than 250 cities globally leverage technology and data to provide unforgettable hospitality experiences with a human touch. That includes seating guests at their favorite table, pouring complimentary glasses of wine from a preferred vintage, offering special menu options, and personalizing experiences for special occasions.

Scaling guest experience and retention on Google Cloud

When developing SevenRooms, we needed a technology partner that would enable our small team to securely scale services, automate manual tasks, and accelerate time to market while reducing IT costs. That’s why we started working with Google App Engine—and later became more involved with the Google for Startups Cloud Program. We soon realized that many traditional apps lacked the integrations and capabilities needed to respond to the unique challenges facing food and beverage operators, capabilities that the Google for Startups Cloud Program services provide. With guidance from experts on the Google Startups Success team, we quickly transformed SevenRooms from a beta restaurant and nightclub reservation and guest management point solution into a full-fledged guest experience and retention platform that analyzes actionable data to automatically build and continuously update detailed guest profiles.
The combination of Google App Engine and other Google Cloud solutions makes everything easier to build and scale. We’re seeing our total cost of ownership (TCO) decline by 10-15%, so we can shift additional resources to R&D to help customers create one-of-a-kind interactions with their guests. And importantly, we can bring new products to market 200-300% faster than competitors. Because SevenRooms handles a lot of sensitive guest data, stringent security protocols are vital. We store all data on Google Cloud, taking advantage of its highly secure-by-design infrastructure and built-in support for international data privacy laws such as GDPR. We use BigQuery and Looker for data analysis and reporting, and power our NoSQL database with Firestore. We also scale workloads and run Elasticsearch on Google Compute Engine (GCE)—and seamlessly integrate our reservation booking engine with Google Maps and Google Search. In the future, we’re looking to further market actionable guest data with the help of advanced machine learning (ML) models with TensorFlow and Google Cloud Tensor Processing Units (TPUs). Cloud TPUs and Google Cloud data and analytics services are fully integrated with other Google Cloud offerings, including Google Kubernetes Engine (GKE). By running ML workloads on Cloud TPUs, SevenRooms will benefit from Google Cloud’s leading networking and data analytics technologies such as BigQuery. We’re also exploring additional Google Cloud solutions such as Anthos, to unify the management of infrastructure and applications across on-premises, edge, and multiple public clouds, as well as Google Cloud Run, to deploy scalable containerized applications on a fully managed serverless platform. These solutions will enable us to continue to quickly expand our services and offer customers a variety of new benefits.

Building a profitable, sustainable future in food and beverage

Our work with the Google Startups Success team has been instrumental in helping us get where we are today.
Their responsiveness is incredible and stands out compared to services from other technology providers. Google Cloud gives us a highly secure infrastructure and next-level training to evolve our infrastructure using solutions such as BigQuery and Backup and Disaster Recovery. We also work with Google Cloud partner DoiT International to further scale and optimize our operations. In particular, DoiT provided expert scripts to shortcut lengthy processes, while actively troubleshooting any issues or questions that came up. DoiT continues to share guidance in key areas for future products and features, and has provided expertise in architecture, infrastructure, and cost management. Moving forward, we’re excited to work with Google Cloud and DoiT to handle the growing surge in users we anticipate in 2023 and beyond. With Google Cloud, SevenRooms is revitalizing food and beverage service delivery by enabling businesses to cultivate and maintain direct guest relationships, deliver exceptional experiences, and encourage repeat visits. We’ve compiled many case studies that demonstrate how our customers see great results by personalizing their interactions with guests, from $1.5M in cost savings, to $400K of additional revenue, to a 68% jump in email open rates. Demand for our guest experience and retention platform keeps growing as we help our customers take a people-first approach by delivering unique and tailored dining experiences. We can’t wait to see what we accomplish next as we expand our team and reach new markets worldwide.
If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

From NASA to Google Cloud, Ivan Ramirez helps top gaming companies reach new levels

Editor’s note: Ivan Ramirez, Gaming Team Lead, works in one of Google Cloud’s most dynamic, and least understood, businesses, supporting some of the world’s biggest online gaming companies. He’s in a world of extreme potential, heart-stopping challenges, big teamwork, and managing scarce resources for the maximum outcome. And that’s before we get to the gaming.

I assume most people in gaming are lifelong gamers. True?

Not in my case, but I did start out playing the world’s best video game. I graduated from Georgia Tech with a degree in Aerospace Engineering and went to NASA. I trained to work on the electrical and thermal control systems of the International Space Station, and simulated things like an explosion or a medical emergency for 12 hours at a time.

What was it like moving over to the gaming industry?

I had a lot of jobs before I started at Google in 2016. Now, as a Gaming Team Lead, I’m working with customers in many different aspects of the technology relationship, from working hands-on-keyboard alongside engineers to giving industry vision presentations to executives, and everything in between. The great thing about this industry is that at every level, gaming wants to be at the bleeding edge of technology. They want to be using the best graphics chips, have the most players online at once, or the fastest networking. They want lots of analytics, for things like placing ads in real time or detecting cheaters while a game is going on. Look at something like Niantic’s Pokémon GO Fest this year, where players caught over one billion Pokémon, spun over 750 million PokéStops, and collectively explored over 100 million kilometers. We’ve got big scale numbers like that with a few customers.

How does that affect the rest of Google Cloud?

When they push us to go faster and deliver more, it helps us invent the future.
Gaming companies also value the freedom to innovate, and have a real passion for their customers, which is something Google Cloud shares in our culture, as well as our leadership in scale, data analytics, and more.

You say you did a lot of different jobs, but you’ve been here six years. Why?

I grew up in Lima, Peru. When I was 10, my dad got an offer to relocate to Miami. It was tough for him, but it was an opportunity he couldn’t pass up. Later, I wanted to go to Georgia Tech because they were strong in Aerospace, even though in Peru you traditionally stay close to family. I think I learned early on that you have to get out of your comfort zone to rise up. I’ve had a great time here at Google because it enables me to continue to grow. Over the six years it’s always stayed interesting. Being at Google pushes me to try new things.

Do you think gaming has affected you personally, too?

Maybe it affects the way I think about work and people. Some of my proudest moments are helping people, connecting them with others. I try to teach them some of the things I’ve learned, including taking care of yourself. We are people who want to say “yes” to everything, who feel like there’s always something more we can do, or another project we can improve. You have to find limits and ways to care for yourself and your family too, or you won’t be able to last over the long haul, or even be a good partner and teammate.
Source: Google Cloud Platform

How to help ensure smooth shift handoffs in security operations

Editor’s note: This blog was originally published by Siemplify on Oct. 29, 2019.

Much the same way that nursing teams need to share patient healthcare updates when their shift ends, security operations centers (SOCs) need smooth shift-handoff procedures in place to ensure continuous monitoring of their networks and systems.

Without proper planning, knowledge gaps can arise during the shift-change process. These include:

Incomplete details: Updates, such as the work that has been done to address active incidents and the proposed duties to continue these efforts, are not thoroughly shared.
Incorrect assumptions: Operating with fragmented information, teams engage in repetitive tasks, or worse, specific investigations are skipped entirely because it is assumed they were completed by another shift.
Dropped tasks: From one shift to the next, some tasks can fall entirely through the cracks and are never reported to the incoming personnel.

Because of these gaps, security analysts tend to spend too much time following up with each other to ensure items are completed. With major incidents, this may mean keeping personnel from the previous shift on for a partial or even full second shift until the incident is closed out. Ramifications of being overworked can include physical and mental fatigue and even burnout. Fortunately, these gaps are not inevitable.

Getting a process in place

Decide on the basics

Before you can succeed with shift handoffs, you need to decide how you will design your shifts. Will shifts be staggered? Will they be covered from geographically different regions (i.e., a “follow the sun” model)? If so, handovers may be challenged by language and cultural differences. Do you allow people to swap shifts (i.e., work the early shift one week and the graveyard the next)? If shifts are fixed, then you can create shift teams.
If shifts rotate, you need to ensure analysts work each shift for a set period of time to adapt to the specific types of cases and ancillary work each shift is responsible for. Rotating shifts can also bring a fresh set of eyes to processes or problems. They may also help retain talent, as working consistently irregular hours can have a negative impact on one’s health.

Properly share communication

When you work in a SOC, you don’t punch in and out like in the old factory days. Active cases may require you or someone from the team to arrive early to receive a debriefing, or stay late to deliver your own to arriving colleagues (as well as complete any pending paperwork). Streamlining the transfer process is critical but simple: create a standard handoff log template that each shift uses to clearly communicate tasks and action items. Be prepared for questions.

Log activities

Security orchestration, automation, and response (SOAR) technology can help in the collaboration process. In addition, SOAR gives managers the ability to automatically assign cases to the appropriate analyst. Through playbooks, escalations can be defined and automated based on the processes that are unique to your organization.
Source: Google Cloud Platform

Easily connect SaaS platforms to Google Cloud with Eventarc

Last year, we launched Eventarc, a unified eventing platform with 90+ sources of events from Google Cloud, helping make it a more programmable cloud. We recognize that most Google Cloud customers utilize a myriad of platforms to run their business, from internal IT systems to hosted vendor software and SaaS services. Creating and maintaining integrations between these platforms is time consuming and complex. With third-party sources in Eventarc, adding integrations between supported SaaS platforms and your applications in Google Cloud is easier than ever.

Today we are happy to announce the Public Preview of third-party sources in Eventarc, with the first cohort of sources provided by ecosystem partners. Here are some highlights of this exciting new platform:

Simple discovery and setup: Configure an integration in two easy steps.
Fully managed event infrastructure: With Eventarc, there is nothing to maintain or manage, so connecting your SaaS ecosystem to Google Cloud couldn’t be simpler.
Consistency: Third-party sources are consistent with the rest of Eventarc, including a consistent trigger configuration and invocations in CloudEvent format.
Trigger multiple workloads: All supported Eventarc destinations are available to target with third-party source triggers (Cloud Functions Gen2, Cloud Run, GKE, and Cloud Workflows).
Built-in filtering: Filter on most CloudEvent attributes to allow for robust and easy filtering in the Eventarc trigger.

These partners help to improve the value of the connected cloud and open exciting new use cases for our customers:

The Datadog source is available today in public preview (codelab, setup instructions).
The Lacework source is available in private preview. Sign up today.
The Check Point CloudGuard source is available in private preview.
Sign up today.

Next steps

To learn more about third-party providers offering an Eventarc source, to run through the quickstart, or to provide feedback, please see the links below.

Learn more about third-party sources in Eventarc
Learn about third-party providers currently offering an Eventarc source
Try out the Datadog source codelab

Interested in becoming a third-party source of events on Google Cloud? Contact us at eventarc-integrations@google.com
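As an illustrative sketch of wiring a third-party source to a workload (the channel name, service name, and event-type string below are hypothetical placeholders; the real values come from the provider’s setup instructions), a trigger is created much like any other Eventarc trigger:

```shell
# Sketch: route events from a third-party channel to a Cloud Run service.
# The channel, service, and event type are placeholders, not confirmed values.
gcloud eventarc triggers create datadog-alerts-trigger \
  --location=us-central1 \
  --channel=datadog-channel \
  --destination-run-service=alert-handler \
  --event-filters="type=datadog.v1.alert"
```

The `--event-filters` flag is where the built-in CloudEvent attribute filtering described above is configured.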
Source: Google Cloud Platform

Track adversaries and improve posture with Microsoft threat intelligence solutions

Today, we’re thrilled to announce two new security products driven by our acquisition of RiskIQ just over one year ago that deliver on our vision to provide deeper context into threat actors and help customers lock down their infrastructure.

Track threat actor activity and patterns with Microsoft Defender Threat Intelligence

This new product helps security operations teams uncover attacker infrastructure and accelerate investigation and remediation with more context, insights, and analysis than ever before. While threat intelligence is already built into the real time detections of our platform and security products like Microsoft Sentinel, customers also need direct access to real-time data and Microsoft’s unmatched signal to proactively hunt for threats across their environments.

For example, adversaries often run their attacks from many machines, with unique IP addresses. Tracing the actor behind an attack and tracking down their entire toolkit is challenging and time-consuming. Using built-in AI and machine learning, Defender Threat Intelligence uncovers the attacker or threat family and the elements of their malicious infrastructure. Armed with this information, security teams can then find and remove adversary tools within their organization and block their future use in tools like Microsoft Sentinel, helping to prevent future attacks.

See your business the way an attacker can with Microsoft Defender External Attack Surface Management

The new Defender External Attack Surface Management gives security teams the ability to discover unknown and unmanaged resources that are visible and accessible from the internet—essentially the same view an attacker has when selecting their target. Defender External Attack Surface Management helps customers discover unmanaged resources that could be potential entry points for an attacker.

Microsoft Defender External Attack Surface Management scans the internet and its connections every day. This builds a complete catalogue of a customer’s environment, discovering internet-facing resources, even the agentless and unmanaged assets. Continuous monitoring, without the need for agents or credentials, prioritizes new vulnerabilities. With this complete view of the organization, customers can take recommended steps to mitigate risk by bringing these resources under secure management within tools like Microsoft Defender for Cloud.

Read the full threat intelligence announcement, and to learn more about how Microsoft Defender Threat Intelligence and Microsoft Sentinel work together, read the Tech Communities blog.

Additionally, in the spirit of continuous innovation and bringing as much of the digital environment under secure management as possible, we are proud to announce the new Microsoft Sentinel solution for SAP. Security teams can now monitor, detect, and respond to SAP alerts, all from our cloud-native SIEM, Microsoft Sentinel.

To learn more about these products and to see live demos, visit us at Black Hat USA, Microsoft Booth 2340. You can also register now for the Stop Ransomware with Microsoft Security digital event on September 15, 2022, to watch in-depth demos of the latest threat intelligence technology.
Source: Azure

Microsoft Cost Management updates – July 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Introducing the Cost Details API
Filter cost recommendations by tag
How to choose the right Azure services for your applications—It’s not A or B
What's new in Cost Management Labs
New ways to save money in the Microsoft Cloud
New videos and learning opportunities
Documentation updates
Join the Microsoft Cost Management team

Let's dig into the details.

Introducing the Cost Details API

You already know you can dig into your cost and usage data from the Azure portal or Microsoft 365 admin center. You may even know that you can export cost data to storage on a recurring schedule. While many organizations use both, some have more ad-hoc requirements where they need an on-demand solution. Traditionally, these organizations would use the Consumption UsageDetails API or the older Enterprise Agreement (EA) consumption.azure.com APIs. This month, we introduced a new on-demand solution for downloading granular cost details with the new Cost Details API – now generally available for Enterprise Agreement and Microsoft Customer Agreement accounts.

The Cost Details API comes with improved security, stability, and scalability over the UsageDetails API and aligns with the schema already being used by Cost Management exports. If you’re still using the older EA key-based APIs, then you’ll also get additional benefits like a single dataset for all cost data, including Marketplace and reservation purchases, an option to amortize reservation purchases, as well as support for splitting shared costs with cost allocation.

With the general availability of Cost Details this month, the UsageDetails and consumption.azure.com APIs are in maintenance and will not receive updates. Please migrate to scheduled exports for large accounts with a lot of cost data or to streamline recurring data dumps. If you have requirements that necessitate a more on-demand solution, please migrate to the Cost Details API.

To learn more about the Cost Details API, see Get cost details for a pay-as-you-go subscription. For additional information about when to select Exports or Cost Details, see Choose a cost details solution.
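As a rough sketch of the on-demand flow (the scope string and api-version below are assumptions to check against the documentation, and authentication is left to the caller), a Cost Details report request can be constructed like this:

```python
# Sketch: build and submit a one-time cost details report request.
# The api-version and body shape are assumptions; verify against the
# Cost Details API documentation before relying on them.
import json
import urllib.request

API_VERSION = "2022-05-01"  # assumed api-version

def build_cost_details_request(scope: str, start: str, end: str):
    """Build the URL and JSON body for a generateCostDetailsReport call."""
    url = (
        "https://management.azure.com/" + scope.strip("/")
        + "/providers/Microsoft.CostManagement/generateCostDetailsReport"
        + "?api-version=" + API_VERSION
    )
    body = {
        "metric": "ActualCost",  # or "AmortizedCost" to amortize reservations
        "timePeriod": {"start": start, "end": end},
    }
    return url, body

def request_report(scope: str, start: str, end: str, token: str):
    """Submit the request. The call is asynchronous: a 202 response carries
    a Location header to poll until blob download URLs for the report are
    ready. Requires a valid Azure AD bearer token."""
    url, body = build_cost_details_request(scope, start, end)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

For recurring, large-volume needs, scheduled exports remain the recommended path; this on-demand call suits ad-hoc pulls.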

Filter cost recommendations by tag

Nearly every conversation we have with organizations starts with cost optimization and ensuring their workloads are running efficiently. And when it comes to cost optimization, we always tell people to start in Azure Advisor, which gives a great picture of high-confidence recommendations to reduce cost. At the same time, this can be daunting for large teams with resources spread across many resource groups and subscriptions. To help facilitate this, you can now filter your recommendations by tag in Azure Advisor.

With the power of tag filters, you can now get recommendations scoped to a business unit, project, or application to filter recommendations and calculate scores using tags you have already assigned to Azure resources, resource groups, and subscriptions.

To learn more, visit how to filter Advisor recommendations using tags.

How to choose the right Azure services for your applications—It’s not A or B

If you’ve been working with Azure for any period, you might have grappled with the question—which Azure service is best to run my apps on? This is an important decision because the services you choose will dictate your resource planning, budget, timelines, and, ultimately, the time to market for your business. It impacts the cost of not only the initial delivery, but also the ongoing maintenance of your applications.

Read on as Asir Selvasingh and Ajai Peddapanga summarize your options on the Azure blog to get you started on the right foot. They’ll also share details about their new e-book on the subject.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

New: What’s new in Cost Management – now enabled by default in Labs
Learn about new announcements from the Cost Management overview. You can opt in using Try Preview.
New: Cost savings insights in the cost analysis preview
Identify potential savings available from Azure Advisor cost recommendations for your Azure subscription. You can opt in using Try preview.
New: Forecast in the cost analysis preview
Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.
Product column experiment in the cost analysis preview
We’re testing new columns in the Resources and Services views in the cost analysis preview for Microsoft Customer Agreement. You may see a single Product column instead of the Service, Tier, and Meter columns. Please leave feedback to let us know which you prefer.
Group related resources in the cost analysis preview
Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.
Charts in the cost analysis preview
View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try Preview.
View cost for your resources
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that particular resource.
Change scope from the menu
Change scope from the menu for quicker navigation. You can opt in using Try Preview.
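For the resource-grouping item above, a minimal sketch of tagging a child resource with its parent (both resource IDs are placeholders; only the "cm-resource-parent" tag name comes from the feature description):

```shell
# Sketch: group a disk under its VM in the cost analysis preview by
# tagging the child resource with the parent's full resource ID.
# Both IDs below are placeholders.
DISK_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk>"
VM_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>"
az tag update --resource-id "$DISK_ID" --operation Merge \
  --tags cm-resource-parent="$VM_ID"
```

Once tagged, the disk's cost rolls up under the VM in the grouped view.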

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal or Microsoft 365 admin center. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today. 

New ways to save money in the Microsoft Cloud

Here are new and updated offers you might be interested in:

Generally available: NVads A10 v5 GPU-accelerated virtual machines.
Generally available: Improved Try Azure Cosmos DB for free experience.
Generally available: Azure SQL Database Hyperscale Named replicas.
Preview: Create an additional 5000 Azure Storage accounts within your subscription.

New videos and learning opportunities

One new video this month for those automating reporting and optimization:

Creating context for your Advisor recommendations (eight minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

Added Create an Azure budget with Bicep.
Added Enable preview features in Cost Management Labs.
Updated the effective date for Reserve Bank of India directive updates in Pay your Microsoft Customer Agreement or Microsoft Online Subscription Program Azure bill.
Added details about Warned subscriptions to Azure subscription states.
Noted cost analysis date range limit of 13 months in Understand Cost Management data.
15 updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
Source: Azure