Access your Professional Email inbox directly from WordPress.com

Every day we’re connected to a million apps, and we browse through multiple browser tabs just to complete a single action item. We have busy schedules that would benefit from streamlined processes, simple tools, and powerful workflows. With that in mind, our team asked for feedback, and that inspired us to put together a new solution—your Professional Email inbox baked right into your WordPress.com site.

You can now manage your inbox and website from the same place, eliminating the need for multiple sets of credentials and URLs. Once you’re securely logged in, we’ll save you the clicks and multiple tabs that managing your work used to take, allowing you to operate directly from your website dashboard.

A few time-saving hacks to get the most from your embedded inbox:

Easily connect with your audience or community from your site while checking your site followers.

Publish a blog post, head directly to your inbox, and share it with your customers.

Create a new product to sell and share the news directly from your site dashboard.

Ready to try it out?

Start here

Source: RedHat Stack

Billing info is at your fingertips in the latest Cloud Console mobile app

Cloud billing is an important part of managing your cloud resources, and understanding your cloud spend estimates or accessing invoices is critical for many businesses. Until now, the best way to check your billing information has been to use the Google Cloud Console from your favorite web browser. But Google Cloud users tell us that they want to be able to access billing data on the go.

Today, we're introducing a new way of accessing billing information: from the Cloud Console mobile app. Now, with your Android or iOS mobile device, you can access not only your resources (App Engine, Compute, Databases, Storage, or IAM), logs, incidents, and errors, but also your billing information. With these enhanced billing features, we are making it easier for you to understand your cloud spend.

Billing in the Cloud Console mobile app

With the newest app release, you can add a billing widget to the home dashboard of the Cloud Console mobile app using the "plus" button on the home screen. Whenever you open the app, you will see the current spend of the selected billing account. You can also switch the active project from the home screen.

If you go from the main screen to the Billing tab, you can see your cost forecast. We also added sections to make navigation easier, while the Overview screen lets you see graphs of monthly trends and costs per project or per cloud service. The Budgets screen, meanwhile, lets you preview how each of your predefined budgets is being spent. You can learn more about Cloud Billing budgets in this blog post.

The new Credits page shows you the usage of all the credits ever applied to your account, such as the one from the Climate Innovation Challenge, and the Account management page shows you details about your billing account. You can also check the account's ID, which users are managing the billing account, and which projects are using the active account.

And if you ever need help, you can always reach out to us directly from the app using the "Help with billing" section on the Billing tab!

To summarize, we've enhanced billing on your smartphone with:

Smoother navigation
Forecasted cost
Access to billing graphs

These new features are available for you to use today. If you have any feedback, we want to hear from you: just click the "Send feedback" button in the app. And don't forget to pin the dashboard card to the main screen, so you always have your billing information at your fingertips. Go ahead and download the app today from Google Play or the Apple App Store.
Source: Google Cloud Platform

Now in preview, BigQuery BI Engine Preferred Tables

Earlier in the quarter, we announced that BigQuery BI Engine support for all BI and custom applications was generally available. Today we are excited to announce the preview launch of Preferred Tables support in BigQuery BI Engine! BI Engine is an in-memory analysis service that helps customers get low-latency performance for their queries across all BI tools that connect to BigQuery. With support for preferred tables, BigQuery customers can now prioritize specific tables for acceleration, achieving predictable performance and optimized use of their BI Engine resources.

BigQuery BI Engine is designed to deliver the freshest insights without sacrificing query performance, by accelerating your most popular dashboards and reports. It provides intelligent scaling and ease of configuration: customers do not have to worry about any changes to their BI tools or to the way they interact with BigQuery. They simply have to create a project-level memory reservation. BI Engine's smart caching algorithm ensures that data queried often stays in memory for faster response times. BI Engine also creates replicas of the data being queried to support concurrent access; this is based on query patterns and does not require manual tuning from the administrator.

However, some workloads are more latency-sensitive than others, so customers want more control over which tables are accelerated within a project, to ensure reliable performance and better utilization of their BI Engine reservations. Before this feature, BigQuery BI Engine customers could achieve this by using separate projects for only those tables that need acceleration, but that requires additional configuration and is not the best reason to use separate projects.

With the launch of preferred tables in BI Engine, you can now tell BI Engine which tables should be accelerated. For example, suppose two types of tables are queried from your project: a set of pre-aggregated or dimension tables queried by dashboards for executive reporting, and all other tables used for ad hoc analysis. You can now ensure that your reporting dashboards get predictable performance by configuring the former as "preferred tables" in the BigQuery project. That way, other workloads from the same project will not consume memory required for interactive use cases.

Getting started

To configure preferred tables, you can use the Cloud console, the BigQuery Reservation API, or a data definition language (DDL) statement in SQL. We show the UI experience below; you can find detailed documentation of the preview feature here.

You can simply edit the existing BI Engine configuration in the project. You will see an optional step for specifying preferred tables, followed by a box to specify the tables you want to set as preferred. The next step is to confirm and submit the configuration, and you will be ready to go!

Alternatively, you can achieve the same result by issuing a DDL statement in the SQL editor as follows:

ALTER BI_CAPACITY `<PROJECT_ID>.region-<REGION>.default`
SET OPTIONS(
  size_gb = 100,
  preferred_tables = ["bienginedemo.faadata.faadata1"]);

This feature is available in all regions today and is rolled out to all BigQuery customers.
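If you'd rather script the change than click through the console, the same DDL can be submitted with the BigQuery client library for Python. This is a minimal sketch, reusing the placeholder project, region, and table name from the example above; adjust them for your own reservation (and note it assumes the US multi-region, BigQuery's default query location).

from google.cloud import bigquery

# Uses application-default credentials; the project ID is the placeholder from above.
client = bigquery.Client(project="bienginedemo")

# Reserve 100 GB of BI Engine capacity and mark a single table as preferred.
ddl = """
ALTER BI_CAPACITY `bienginedemo.region-us.default`
SET OPTIONS(
  size_gb = 100,
  preferred_tables = ["bienginedemo.faadata.faadata1"]);
"""

# DDL statements return no rows; result() simply blocks until completion.
client.query(ddl).result()
print("BI Engine preferred tables updated.")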
Please give it a spin!
Source: Google Cloud Platform

Incorporating quota regression detection into your release pipeline

On Google Cloud, one of the ways an organization may want to enforce fairness in how much of a resource can be consumed is through the use of quotas. Limiting resource consumption on services is one way that companies can better manage their cloud costs. Oftentimes, people associate quotas with the APIs used to access a given resource. Although an endpoint may be able to handle a high number of Queries Per Second (QPS), a quota gives providers a means to ensure that no one user or customer monopolizes the available capacity. This is where fairness comes into play: limits can be scoped per user or per customer, and they can be raised or lowered.

Although quota limits address the issue of fairness from a resource provider's point of view (in this case, Google Cloud), you still need a way, as the resource consumer, to ensure that those limits are adhered to and, just as importantly, that you don't inadvertently violate them. This is especially important in a continuous integration and continuous delivery (CI/CD) environment, where so much is automated. CI/CD is heavily based on automating product releases, and you want to ensure that the products released are always stable. This brings us to the issue of quota regression.

What is quota regression and how can it occur?

Quota regression refers to an unplanned change in an allocated quota that often results in reduced capacity for resource consumption. Take, for example, an accounting firm. I have many friends in this sector, and they can never hang out with me during their busy season between January and April. At least, that's the excuse. During the busy season they have an extraordinarily high caseload, and a low caseload the rest of the year. Let's assume that these caseloads have an immediate impact on the firm's resource costs on Google Cloud. Since the high caseload occurs only at a particular point in the year, it may not be necessary to maintain a high quota at all times; it's not financially prudent, since resources are paid for on a per-usage model.

If the accounting firm has an in-house engineering team that has built load tests to ensure the system functions as intended, you would expect the load capacity to increase before the busy season. If the load test is run in an environment separate from the serving one (which it should be, for reasons such as security and avoiding unnecessary access grants to data), this is where you might start to see a quota regression. An example of this is load testing in your non-prod Google Cloud project (e.g., your-project-name-nonprod) and promoting images to your serving project (e.g., your-project-name-prod).

In order for the load tests to pass, sufficient quota must be allocated to the load-testing environment. However, that quota may not have been granted in the serving environment. It could be a simple oversight, where the admin needed to request the additional quota in the serving environment, or the quota may have been reverted after a busy season and the change went unnoticed. Whatever the reason, it still depends on human intervention to assert that quotas are consistent across environments.
If this is missed, the firm can go into a busy season with passing load tests and still have a system outage due to lack of quota in the serving environment.

Why not just use traditional monitoring?

This brings to mind the argument of "security monitor vs. security guard." Even with monitoring to detect such inconsistencies, alerts can be ignored, and alerts can be late. Alerts work if there is no automation tied to the behavior, and in the example above, alerts may suffice. However, in the context of CI/CD, a deployment that introduces higher QPS on dependencies is likely to be promoted from a lower environment to the serving environment, because the load tests pass as long as the lower environment has sufficient quota. The problem is that the deployment is then automatically pushed to production, with alerts probably arriving along with the outage. The best way to handle these scenarios is to incorporate not just automated monitoring and alerting, but a means of preventing the promotion of that regressive behavior to the serving environment. The last thing you want is new logic that requires a higher resource quota than what is granted being automatically promoted to prod.

Why not use existing checks in tests?

The software engineering discipline offers several types of tests (unit, integration, performance, load, smoke, etc.), none of which address something as complex as cross-environment consistency. Most of them focus on the user and expected behaviors. The only test that really focuses on infrastructure is the load test, but a quota regression is not necessarily part of a load test: it's not something you'll detect, since a load test occurs in its own environment and is agnostic of where it actually runs. In other words, a quota regression test needs to be aware of the environments; it needs an expected baseline environment where the load test occurs and an actual serving environment where the product will be deployed. What I am proposing is an environment-aware test to be included in the suite alongside the many other tests.

Quota regression testing on Google Cloud

Google Cloud already provides services that you can use to easily incorporate this check; it's a systems architecture practice that you can exercise. The Service Consumer Management API provides the tools you need to create your own quota regression test. Take, for example, the ConsumerQuotaLimit resource that's returned via the list API. For the remainder of this discussion, assume an environment set up as follows:

Diagram demonstrating an extremely simple deployment pipeline for a resource provider.

In the diagram above, we have a simplified deployment pipeline:

1. Developers submit code to some repository.
2. The Cloud Build build and deployment trigger is fired.
3. Tests are run.
4. Deployment images are pushed if the prerequisite steps succeed.
5. Images are pushed to their respective environments (in this case, build to dev, and the previous dev image to prod).
6. Quotas are defined for the endpoints on deployment.
7. Cloud Load Balancer makes the endpoints available to end users.

Quota limits

With this mental model, let's hone in on the role quotas play in the big picture. Assume we have the following service definition for an endpoint called "FooService".
The service name, metric label, and quota limit value are what we care about in this example.

gRPC Cloud Endpoint YAML example:

type: google.api.Service
config_version: 3
name: fooservice.endpoints.my-project-id.cloud.goog
title: Foo Service gRPC Cloud Endpoints
apis:
  - name: com.foos.demo.proto.v1.FooService
usage:
  rules:
    # ListFoos methods can be called without an API Key.
    - selector: com.foos.demo.proto.v1.FooService.ListFoos
      allow_unregistered_calls: true
    # GetFoo methods can be called without an API Key.
    - selector: com.foos.demo.proto.v1.FooService.GetFoo
      allow_unregistered_calls: true
    # UpdateFoo methods can be called without an API Key.
    - selector: com.foos.demo.proto.v1.FooService.UpdateFoo
      allow_unregistered_calls: true
metrics:
  - name: library.googleapis.com/read_calls
    display_name: "Read Quota"
    value_type: INT64
    metric_kind: DELTA
  - name: library.googleapis.com/write_calls
    display_name: "Write Quota"
    value_type: INT64
    metric_kind: DELTA
quota:
  limits:
    - name: "apiReadQpmPerProject"
      metric: library.googleapis.com/read_calls
      unit: "1/min/{project}"
      values:
        STANDARD: 1
    - name: "apiWriteQpmPerProject"
      metric: library.googleapis.com/write_calls
      unit: "1/min/{project}"
      values:
        STANDARD: 1
  # By default, all calls are measured with a cost of 1:1 for QPM.
  # See https://github.com/googleapis/googleapis/blob/master/google/api/quota.proto
  metric_rules:
    - selector: "*"
      metric_costs:
        library.googleapis.com/read_calls: 1
    - selector: com.foos.demo.proto.v1.FooService.UpdateFoo
      metric_costs:
        library.googleapis.com/write_calls: 2

In our definition we've established:

Service name: fooservice.endpoints.my-project-id.cloud.goog
Metric label: library.googleapis.com/read_calls
Quota limit: 1

With these elements defined, we've now restricted read calls to exactly one per minute for the service. Given a project number (e.g., 123456789), we can now issue a call to the Consumer Quota Metrics Service to display the service quota.

Example commands:

$ alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json"'
$ gcurl https://serviceconsumermanagement.googleapis.com/v1beta1/services/fooservice.endpoints.my-project-id.cloud.goog/projects/my-project-id/consumerQuotaMetrics

Response example (truncated):

{
  "metrics": [
    {
      "name": "services/fooservice.endpoints.my-project-id.cloud.goog/projects/123456789/consumerQuotaMetrics/library.googleapis.com%2Fread_calls",
      "displayName": "Read Quota",
      "consumerQuotaLimits": [
        {
          "name": "services/fooservice.endpoints.my-project-id.cloud.goog/projects/123456789/consumerQuotaMetrics/library.googleapis.com%2Fread_calls/limits/%2Fmin%2Fproject",
          "unit": "1/min/{project}",
          "metric": "library.googleapis.com/read_calls",
          "quotaBuckets": [
            {
              "effectiveLimit": "1",
              "defaultLimit": "1"
            }
          ]
        }
      ],
      "metric": "library.googleapis.com/read_calls"
    }
    …

In the above response, the most important thing to note is the effective limit for a given service's metric.
The effective limit is the limit applied to a resource consumer when enforcing customer fairness, as discussed earlier.

Now that we've established how to get the effectiveLimit for a quota definition on a resource per project, we can define the assertion of quota consistency as:

Load Test Environment Quota Effective Limit <= Serving Environment Quota Effective Limit

Having a test like this (a sketch appears at the end of this section), you can then integrate it with something like Cloud Build to block the promotion of your image from the lower environment to your serving environment if the test fails. That saves you from introducing regressive behavior from the new image into the serving environment that would otherwise result in an outage.

The importance of early detection

It's not enough to alert on a detected quota regression and block the image promotion to prod; it's better to raise alarms as soon as possible. If resources are lacking when it's time to promote to production, you're now faced with the problem of wrangling enough resources in time. This may not always be possible in the desired timeline: the resource provider may need to scale up its own resources to handle the increase in quota, which is not always something that can be done in a day. For example, is the service hosted on Google Kubernetes Engine (GKE)? Even with autoscaling, what if the IP pool is exhausted? Cloud infrastructure changes, although elastic, are not instant. Production planning needs to account for the time needed to scale.

In summary, quota regression testing is a key component that should be added to the overall practice of handling overload and load balancing in any cloud service, not just Google Cloud. It is important for product stability amid the dips and spikes in demand that will inevitably show up as a problem in many spaces. If you continue to rely on human intervention to ensure consistency of your quota across your configurations, you only guarantee that eventually you will have an outage when that consistency is not met. For more on working with quotas, check out the documentation.
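To make the assertion concrete, here is a minimal sketch of an environment-aware quota regression test, written in Python against the Consumer Quota Metrics endpoint shown above. The service and project names are the placeholders used throughout this post, and the sketch compares only each limit's lowest effectiveLimit, ignoring per-dimension quota buckets and pagination; treat it as a starting point rather than a complete implementation.

import google.auth
from google.auth.transport.requests import AuthorizedSession

SERVICE = "fooservice.endpoints.my-project-id.cloud.goog"
LOAD_TEST_PROJECT = "your-project-name-nonprod"  # where load tests run
SERVING_PROJECT = "your-project-name-prod"       # where images are promoted

def effective_limits(session, project):
    """Map (metric, unit) -> lowest effectiveLimit for the given project."""
    url = ("https://serviceconsumermanagement.googleapis.com/v1beta1/"
           f"services/{SERVICE}/projects/{project}/consumerQuotaMetrics")
    limits = {}
    for metric in session.get(url).json().get("metrics", []):
        for limit in metric.get("consumerQuotaLimits", []):
            key = (limit["metric"], limit["unit"])
            for bucket in limit.get("quotaBuckets", []):
                # effectiveLimit is serialized as a string; -1 means unlimited.
                value = int(bucket.get("effectiveLimit", "-1"))
                value = float("inf") if value == -1 else value
                limits[key] = min(value, limits.get(key, float("inf")))
    return limits

def main():
    credentials, _ = google.auth.default()
    session = AuthorizedSession(credentials)
    baseline = effective_limits(session, LOAD_TEST_PROJECT)
    serving = effective_limits(session, SERVING_PROJECT)
    # Fail if any serving limit is below the load-tested baseline.
    failures = [key for key, limit in baseline.items()
                if serving.get(key, 0) < limit]
    if failures:
        raise SystemExit(f"Quota regression detected: {failures}")
    print("Serving quotas meet or exceed the load-tested baseline.")

if __name__ == "__main__":
    main()

Run as a step in your Cloud Build pipeline, a non-zero exit here blocks the image promotion described above.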
Source: Google Cloud Platform

CISO Perspectives: June 2022

June saw the in-person return of the RSA Conference in San Francisco, one of the largest cybersecurity enterprise conferences in the world. It was great to meet with so many of you at our Google Cloud events, at our panel hosted in partnership with Cyversity, and throughout the conference. At RSA we focused on our industry-leading security products, but even more importantly on our goal to make (and encourage others to make) more secure products, not just security products. And remember, we make this newsletter available on the Google Cloud blog and by email; you can subscribe here.

RSA Conference

Those of us who attended RSA from Google Cloud were grateful for the chance to connect in person with so many of our customers, partners, and peers from across the industry. Some key themes Google Cloud discussed at press, analyst, government, and customer meetings at the conference included:

Digital sovereignty: How the cloud can be used to help organizations address and manage requirements around data localization, and achieve the necessary operational and software sovereignty. We believe that sovereignty is more than just meeting regulatory requirements. These principles can help organizations become more innovative and resilient while giving them the ability to control their digital future.

Defending against advanced threats: Organizations are operating against a backdrop of ever more advanced threats, and are looking to enhance their protection through capabilities like posture management and more pervasive implementation of Zero Trust capabilities. We also focused on work to increase the productivity and upskilling of threat management and security operations teams.

Threat intelligence: A big part of supporting customers is ongoing interest in how we can further curate and release threat intelligence through our various products and capabilities.

These themes point to what security and tech decision-makers are looking for: secure products overall, not just security products. This is the backbone of our "shared fate" philosophy at Google Cloud. We know that in today's environment, we can reduce and prevent toil for our customers by prioritizing security first and building secure capabilities into all our products and solutions.

As RSA brings together incredible people and organizations, we also took stock of work happening across the industry to grow a more diverse cybersecurity workforce. We had the opportunity to host a panel discussion at Google's San Francisco office with Cyversity and UC Berkeley's Center for Long-Term Cybersecurity, two organizations that are deeply committed to advancing diversity in our industry.

MK Palmore, Director, Office of the CISO at Google Cloud, moderates a panel on diversity and cybersecurity with Ann Cleaveland, UC Berkeley; Rob Duhart, Walmart; and Larry Whiteside, Jr., Cyversity. Photo courtesy MK Palmore.

One resounding takeaway was that diversity of background, experience, and perspective is vital for cybersecurity organizations to effectively manage risks, especially security risks. As my colleague MK Palmore noted, so much of the threat landscape is about problem solving. This is why it's imperative to bring different views and vantage points to address the most challenging issues. One way we can achieve this is through expanding the talent pipeline.
Over one million cybersecurity positions go unfilled each year across the industry, so we need to actively introduce cybersecurity topics to students and new job seekers, including those who come to security from non-traditional backgrounds. Progress requires a combination of private and public partnership, and organizations like Cyversity have established track records of providing women and individuals from underrepresented communities with the right resources and opportunities. As a company, Google is committed to growing a more diverse workforce for today and for the future.

Secure Products, not just Security Products

Security should be built into all products, and we all should be focused on constantly improving the base levels of security in every product. One recent example is our guide on how to incorporate Google Cloud's new Assured Open Source Software service into your software supply chain. Assured OSS can provide you with a higher-assurance collection of the open source software that you rely on. Additionally, we are working hard to embed security capabilities across all of our developer tooling, such as Cloud Build, Artifact Registry, and Container/Artifact Analysis.

Google Cybersecurity Action Team Highlights

Here are the latest updates, products, services, and resources from our cloud security teams this month:

Security

Mapping security with MITRE: Through our research partnership with the MITRE Engenuity Center for Threat-Informed Defense, we have mapped the native security capabilities of Google Cloud to MITRE ATT&CK. This can help customers with their adoption of Autonomic Security Operations, which requires the ability to use threat-informed decision making throughout the continuous detection and continuous response (CD/CR) workflow. Read more.

Two new BigQuery capabilities to help secure and manage sensitive data: Managing data access continues to be an important concern for organizations and regulators. To fully address those concerns, sensitive data needs to be protected with the right mechanisms so that data can be kept secure throughout its entire lifecycle. We're offering two new features in BigQuery that can help secure and manage sensitive data. Now generally available, encryption SQL functions can encrypt and decrypt data at the column level; and in preview is dynamic data masking, which can selectively mask column-level data at query time based on the defined masking rules, user roles, and privileges.

Introducing Confidential GKE Nodes: Part of the growing Confidential Computing product portfolio, Confidential GKE Nodes make sure your data is encrypted in memory. GKE workloads you run today can run confidentially without any code changes.

Adding more granular GKE release controls: Customers can now subscribe their GKE clusters to release channels, so that they can decide when, how, and what to upgrade in clusters and nodes. These upgrade release controls can help organizations automate tasks such as notifying their DevOps teams when a new security patch is available.

Detecting password leaks using reCAPTCHA Enterprise: We all know that reusing passwords is a risk. But as long as the password remains an unfortunately common form of account authentication, people will wind up reusing them. reCAPTCHA Enterprise's password leak detection can help organizations warn their end users to change passwords. It uses a privacy-preserving API which hides the credential details from Google's backend services and allows customers to keep their users' credentials private.
Database auditing comes to Cloud SQL: This security feature lets customers monitor changes to their Google Cloud SQL Server databases, including database creations, data inserts, and table deletions.

DNS zone permissions: Cloud DNS has introduced, in Preview, a new managed zone permissions capability that allows enterprises with distributed DevOps teams to delegate Cloud DNS managed zone administration to their individual application teams. It can prevent one application team from accidentally changing the DNS records of another application, and it also allows for a better security posture, because only authorized users will be able to modify managed zones. This better supports the principle of least privilege.

New capabilities in Cloud Armor: We've expanded Cloud Armor's coverage to more types of workloads. New edge security policies can help defend workloads using Cloud CDN, Media CDN, and Cloud Storage, and filter requests before they are served from cache. Cloud Armor also now supports the TCP Proxy and SSL Proxy Load Balancers to help block malicious traffic attempting to reach backends behind these load balancers. We've also added features to improve the security, reliability, and availability of deployments, including two new rule actions for per-client rate limiting, malicious bot defense in reCAPTCHA Enterprise, and machine learning-based Adaptive Protection to help counter advanced Layer 7 attacks.

Industry updates

How SLSA and SBOM can help healthcare resiliency: Healthcare organizations continue to be a significant target of many different threats, and we are helping the healthcare industry develop more resilient cybersecurity practices. We believe software bills of materials (SBOM) and the Supply-chain Levels for Software Artifacts (SLSA) framework are part of developing that resiliency in the face of rising cyberattacks. Securing the software supply chain is a critical priority for defenders and something Google is committed to helping organizations do, which we explain more in-depth in this deep dive on SLSA and SBOM.

Google Cloud guidance on merging organizations: When two organizations merge, it's vital that they integrate their two cloud deployments as securely as possible. We've published these best practices that address some security concerns they may have, especially around Identity and Access Management.

Stronger privacy controls for the public sector: Google Workspace has added client-side encryption to let public agencies retain complete confidentiality and control over their data by choosing how and where their encryption keys are stored.

Compliance & Controls

Google Cloud security overview: Whether your organization is just getting started with its digital transformation or is running on a mature cloud, this wonderfully illustrated summary of how Google Cloud security works is a great way for business and dev teams to explain what Google Cloud security can do to make your organization more secure.

New commitments on processing of service data for Google Cloud customers: As part of our work with the Dutch government and its Data Protection Impact Assessment (DPIA) of Google Workspace and Workspace for Education, Google intends to offer new contractual privacy commitments for service data that align with the commitments we offer for customer data. Read more.

Google Cloud's preparations to address DORA: Google Cloud welcomes the inter-institutional agreement reached by European legislators on the Digital Operational Resilience Act (DORA).
This is a major milestone in the adoption of new rules designed to ensure financial entities can withstand, respond to, and recover from all types of information and communications technology-related disruptions and threats, including increasingly sophisticated cyberattacks. Read more.

Google Cloud Security Podcasts

In February 2021 we launched a new podcast focusing on cloud security. If you haven't checked it out, we publish four or five podcasts a month in which hosts Anton Chuvakin and Timothy Peacock chat with cybersecurity experts about the most important and challenging topics facing the industry today. This month, they discussed:

What good detection and response looks like in the cloud, with Dave Merkel and Peter Silberman, who lead managed detection and response company Expel. Listen here.

How Google runs "red team" exercises, with our own Stefan Friedli, senior security engineer. Listen here.

Anton and Timothy's reactions to RSA 2022. Listen here.

How best to observe and track cloud security threats, with James Condon, director of security research at cloud security startup Lacework. Listen here.

And everything you wanted to know about AI threats but might've been afraid to ask, with Nicholas Carlini, research scientist at Google. Listen here.

To have our Cloud CISO Perspectives post delivered every month to your inbox, sign up for our newsletter. We'll be back next month with more security-related updates.
Source: Google Cloud Platform

Microsoft Cost Management updates – June 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Viewing cost in the Azure mobile app
Introducing a new API for configuring cost alerts
Prevent budget overages with action groups common alert schema
Amplify your learning experience in Cost Management
Help shape the future of navigation in Cost Management and Billing
What's new in Cost Management Labs
New ways to save money with Microsoft Cloud
New videos and learning opportunities
Documentation updates
Join the Microsoft Cost Management team

Let's dig into the details.

Viewing cost in the Azure mobile app

The Azure mobile app is like having the portal in your pocket, allowing you to stay connected to your Azure resources on the go. In addition to managing access, checking resource status, monitoring health, and all the other great capabilities, you can also keep an eye on the cost of your subscriptions and resource groups. Simply open any subscription or resource group and scroll down to see cost.

Let us know what you’d like to see next!

Introducing a new API for configuring cost alerts

We’ve talked about how one of the most critical aspects of cost management is staying informed about changes to your costs. You already know how to get alerted when cost exceeds predefined thresholds with budgets and you may have seen that you can subscribe to updates of cost analysis views or subscribe to anomaly alerts from the portal. These are all great resources when getting started, but when it comes to getting set up for success at scale, automation is essential. Now you can automate subscribing to views or anomaly alerts with the ScheduledActions API.

Check out the ScheduledActions API to get started today and let us know what new alerts you’d like to see next!

Prevent budget overages with action groups common alert schema

Speaking of automation, the best way to stay within your budget is to automate actions to minimize cost before you exceed your budget. If you’re interested in setting a hard limit on your budget, configure your budget to trigger an action group. Action groups allow you to run custom scripts that can shut down VMs, archive data, or even delete test resources, giving you ultimate control of your finances to ensure you never get surprised.

Cost Management budget alerts now support the Azure Monitor common alert schema, making it easier than ever to automate actions that keep you under your budget.
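To illustrate, here is a minimal sketch of a webhook receiver (for example, the body of an Azure Function) that picks the key fields out of a common alert schema payload before deciding what to act on. The envelope layout follows the published common alert schema; the cost-cutting action itself is left as a comment, since it depends entirely on your environment.

import json

def handle_budget_alert(request_body: str) -> None:
    """Parse a common-alert-schema payload posted by an action group."""
    payload = json.loads(request_body)

    # Every common-schema alert carries the same "essentials" envelope.
    essentials = payload["data"]["essentials"]
    rule = essentials["alertRule"]
    targets = essentials.get("alertTargetIDs", [])

    print(f"Alert '{rule}' fired for: {targets}")

    # React here: stop VMs, archive data, or delete test resources, e.g. with
    # azure-mgmt-compute:
    #   ComputeManagementClient(...).virtual_machines.begin_deallocate(rg, vm)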

Learn more about configuring action groups for your budgets and how the Azure Monitor common alert schema can help.

Amplify your learning experience in Cost Management

Cost can be a daunting topic. Whether you’re just getting started or looking to learn more about specific features, there are many ways for you to learn about features – from our monthly blog posts and smaller feature updates to full product documentation and MS Learn modules to videos on YouTube. And that’s just scratching the surface. In an effort to help streamline your learning experience, you can now explore the many learning options from the Cost Management overview.

Check out Cost Management tutorials yourself and let us know what you’d like to see added.

Help shape the future of navigation in Cost Management and Billing

Do you manage the billing account or monitor cloud costs for your team or organization? We’re exploring navigation pathways for key tasks within the Azure portal and would love to get your feedback in a 30-minute, unmoderated walkthrough.

If you are interested in participating in this study, please contact our research team and we’ll schedule a time.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Update: Cost Management tutorials – Now available in the public portal
Whether you’re just getting started or looking to learn more about specific features, tutorials are now a click away from the Cost Management overview in Cost Management Labs.
Product column experiment in the cost analysis preview
We’re testing new columns in the Resources and Services views in the cost analysis preview for Microsoft Customer Agreement. You may see a single Product column instead of the Service, Tier, and Meter columns. Please leave feedback to let us know which you prefer.
Group-related resources in the cost analysis preview
Group related resources, like disks under VMs or web apps under App Service plans, by adding a "costanalysis-parent" tag to the child resources with a value of the parent resource ID. Wait 24 hours for tags to be available in usage and your resources will be grouped (see the tagging sketch after this list). Leave feedback to let us know how we can improve this experience further for you.
Charts in the cost analysis preview
View your daily or monthly cost over time in the cost analysis preview. You can opt-in using Try Preview.
View cost for your resources
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that particular resource.
Change scope from the menu
Change scope from the menu for quicker navigation. You can opt-in using Try Preview.
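The tagging step for the grouping feature above can be automated. Here is a minimal sketch that stamps the "costanalysis-parent" tag on a child resource using the Azure Resource Manager Tags API; the resource IDs are placeholders, and the tag value, per the description above, is the parent resource's ID.

from azure.identity import DefaultAzureCredential
import requests

# Placeholder resource IDs: a disk (child) grouped under its VM (parent).
child = ("/subscriptions/00000000-0000-0000-0000-000000000000"
         "/resourceGroups/demo-rg/providers/Microsoft.Compute/disks/demo-disk")
parent = ("/subscriptions/00000000-0000-0000-0000-000000000000"
          "/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachines/demo-vm")

url = (f"https://management.azure.com{child}"
       "/providers/Microsoft.Resources/tags/default?api-version=2021-04-01")
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# PATCH with "Merge" adds the tag without disturbing any existing tags.
resp = requests.patch(
    url,
    json={"operation": "Merge",
          "properties": {"tags": {"costanalysis-parent": parent}}},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print("Tagged; allow up to 24 hours before the grouping appears in cost analysis.")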

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money with Microsoft Cloud

Lots of cost optimization improvements over the last month! Here are some of the generally available offers you might be interested in:

NC A100 v4 virtual machines for AI.
DCsv3 and DCdsv3 series virtual machines.
Azure Arc-enabled SQL Managed Instance Business Critical.
Increased size of Stream Analytics jobs and clusters.
Azure Ebsv5 now available in 13 additional regions.
Azure Databricks available in Sweden Central and West Central US.

And here are some of the new previews:

New Cosmos DB features for scalable, cost-effective application development.
Azure Cosmos DB serverless container storage limit increase to 1TB.
16MB limit per document in API for MongoDB.
Autoscale Stream Analytics jobs.

New videos and learning opportunities

Here’s a new video you might be interested in:

MySQL Developer Essentials Season 1 Episode 3: Cost management and optimization (9 minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

New FAQ: When does Azure finalize or close the billing cycle of a closed month?
New tutorial: Update tax details for an Azure billing account.
New tutorial: Elevate access to manage billing accounts.
New tutorial: How to create an anomaly alert.
Added additional details about the anomaly detection model.
Payment updates to account for the Reserve Bank of India regulation for recurring payments.
Split out tutorials for creating subscriptions for EA, CSP, MCA (same directory), and MCA (separate directory).
Marketplace price list in the EA portal has been retired.
Budget API is preferred over Azure PowerShell/CLI.

Want to keep an eye on all of the documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best in class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
Source: Azure