Avoid cost overruns: How to manage your quotas programmatically

One important aspect of managing a cloud environment is setting up financial governance to safeguard against budget overruns. Fortunately, Google Cloud lets you set quotas for a variety of services, which can play a key role in establishing guardrails—and protect against unforeseen cost spikes. And to help you set and manage quotas programmatically, we’re pleased to announce that the Service Usage API now supports quota limits in Preview.

The Service Usage API is a service that lets you view and manage other APIs and services in your cloud projects. With support for quota limits, you can now leverage the Service Usage API to manage service quotas, such as those from Compute Engine. In this blog post, we’ll take a look at how to use this new functionality with Google Cloud operations tools, so you can track the resources consumed by your projects, set alerts, and right-size your deployments for better cost control.

Understanding quotas

Quotas can be used to limit the resources a project or organization is authorized to consume. From the type and number of Compute Engine CPUs to the maximum number of requests made to an API over a certain period of time, quota metrics have associated quota limits that express the ceiling that the quota metric can reach.

A quota limit may be applied globally, in which case there is only one quota limit for the project, independent of where the resource is consumed. Other quota limits may be applied separately for each cloud region (a regional limit) or cloud zone (a zonal limit). As a project administrator, you can use these quota limits to control how much and where a project can use resources, so that costs stay under control. As an example, you may want to allow production workloads to use a substantial number of high-end CPUs and a large number of external VPN gateways to allow for scaling flexibility. Experimental projects, meanwhile, may have significantly lower limits to make sure they stay within their allocated research budget.

Quota limits were initially managed exclusively via the Google Cloud Console. This interface is ideal when you only need to apply a few changes. However, when you need to adjust a large number of quota limits, or when you need to apply these changes as part of an automated workflow, a programmatic approach is preferable.

Setting quota limits programmatically

With the Service Usage API, you can discover the quota limits that are available as well as set new ceilings (called consumer overrides). This API allows you to set quota limits programmatically in workflows and scripts when projects are created, or to leverage automation tools that you might already be using, such as Terraform. Note that you can’t use the Service Usage API to increase the available quota above what is allowed by default. For this, you need to place a Quota Increase Request (QIR) via the Quota page.

You can invoke the Service Usage API by making direct HTTP requests, or by using the client libraries that Google provides in your favorite languages (Go, Java, Python, etc.).
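As a rough sketch of what a consumer override looks like in code (not an authoritative recipe), the following Python snippet uses the discovery-based client with the v1beta1 Service Usage API to cap a project's Compute Engine CPUs quota in one region. The project ID, metric and limit names, dimension, and override value are illustrative; list your project's consumerQuotaMetrics first to confirm the exact resource names for your quota.

```python
# Sketch: cap the Compute Engine "CPUs" quota at 50 in us-central1 by
# creating a consumer override via the Service Usage API (v1beta1).
# Resource names and values are illustrative.
from googleapiclient import discovery

service = discovery.build("serviceusage", "v1beta1")

# Parent is the quota limit the override applies to (URL-encoded names).
parent = (
    "projects/my-project/services/compute.googleapis.com/"
    "consumerQuotaMetrics/compute.googleapis.com%2Fcpus/"
    "limits/%2Fproject%2Fregion"
)

request = (
    service.services()
    .consumerQuotaMetrics()
    .limits()
    .consumerOverrides()
    .create(
        parent=parent,
        body={"overrideValue": "50", "dimensions": {"region": "us-central1"}},
    )
)
operation = request.execute()
print(operation["name"])  # long-running operation; poll it until done
```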
Monitoring and alerting on quota

You can now monitor quotas, graph historical usage, and set alerts when certain thresholds are reached with the help of Cloud Monitoring, from both the user interface and its API (see Using quota metrics). Cloud Monitoring starts tracking each of the quotas supported by the Service Usage API the moment the project starts consuming them. Allocation quota usage, rate quota usage, quota limits, and quota exceeded errors (attempts to go over quota that failed) are all stored automatically by Cloud Monitoring under the “Consumer Quota” resource type.

You can use Metrics Explorer to query quota data, create charts, and easily incorporate them into a monitoring dashboard. This enables you and your team to see historical events, track trends, and monitor usage over time.

You can also create alerts on quota data in order to be notified when consumption thresholds you define are exceeded or when you are approaching a quota limit. You have to define which conditions trigger the alert and where you want to be notified (notification channels include email, SMS, the Cloud Console app, PagerDuty, Slack, Pub/Sub, and webhooks). Cloud Monitoring offers both a UI and an API to create and configure these alerts.

Ratio alerting

The new Monitoring Query Language (MQL) makes it possible to create flexible and powerful ratio alerts. With ratio alerts, you can set an alerting threshold as a percentage of a quota limit instead of a fixed number. The advantage of an alert based on a ratio is that you don’t need to redefine the alert when the quota limit changes. For example, you could set an alert threshold of 75% for the CPUs quota, which triggers the alert if the number of CPUs in use exceeds 75, given a quota limit of 100. If you then increase the quota limit to 300 CPUs, the alert triggers if the number of CPUs in use exceeds 225.

Combined with wildcard filters, MQL can help set up powerful alerts, e.g., “alert me if any of my quotas reach 80% of their limits.” This allows you to create one alert that covers a significant portion of your quotas.
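The MQL used by an actual ratio alert is covered in the ratio alerting documentation. As a rough illustration of the idea, the sketch below reads the allocation usage and quota limit metrics with the Cloud Monitoring Python client and flags anything above 80% of its limit on the client side. The project ID and threshold are examples, and matching usage series to limit series on (service, location, quota_metric) is a simplification, since some quota metrics expose more than one limit.

```python
# Sketch: flag any quota whose latest allocation usage exceeds 80% of its
# latest reported limit. IDs, labels used for matching, and the threshold
# are illustrative simplifications.
import time

from google.cloud import monitoring_v3

PROJECT = "projects/my-project"   # illustrative project ID
USAGE = "serviceruntime.googleapis.com/quota/allocation/usage"
LIMIT = "serviceruntime.googleapis.com/quota/limit"

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
# Allocation usage and limits are reported roughly daily, so look back >24h.
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 25 * 3600}}
)

def latest(metric_type):
    """Return the newest point per (service, location, quota_metric)."""
    series = client.list_time_series(
        request={
            "name": PROJECT,
            "filter": f'metric.type = "{metric_type}"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    out = {}
    for ts in series:
        key = (
            ts.resource.labels["service"],
            ts.resource.labels["location"],
            ts.metric.labels["quota_metric"],
        )
        out[key] = ts.points[0].value.int64_value  # newest point comes first
    return out

usage, limits = latest(USAGE), latest(LIMIT)
for key, used in usage.items():
    limit = limits.get(key)
    if limit and used / limit > 0.8:   # example threshold: 80% of the limit
        print(f"{key}: {used}/{limit} ({used / limit:.0%})")
```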
Get started

Any project owner, viewer, or editor can access quota usage within the Cloud Console. You can get started by reviewing the Quota and Service Usage documentation and then Managing service quota using the Service Usage API. For quota monitoring and alerting, start with the documentation on using quota metrics, followed by more in-depth documentation on MQL, ratio alerting, and wildcards.

Related article: Cloud cost optimization: principles for lasting success, which covers organizational principles that can help you run your cloud environment efficiently and cost effectively.

Source: Google Cloud Platform

Black History Month: Celebrating the success of Black founders with Google Cloud

February is Black History Month—a time for us to come together to celebrate and remember the important people and history of African heritage. Over the next four weeks, we will highlight four Black-led startups and how they use Google Cloud to grow their businesses. Our first feature highlights TQIntelligence and its founder, Yared.

As a psychologist, administrator, educator, and researcher, I’ve always wondered how we could use data to better serve children and adolescents who are being treated for trauma-related mental health disorders. Inspired by 20 years of experience focusing on poverty and trauma, I sought to rethink mental health services in low-income communities. This inspired me to found my startup TQIntelligence, which uses artificial intelligence and voice recognition technology to enable therapists to make more accurate diagnoses faster.

Poverty and trauma—A double whammy for low-income communities

Poverty and trauma are twofold blows. They go hand in hand, and quite often, one compounds the other. Children and adolescents in marginalized communities are at the highest risk for Adverse Childhood Experiences (ACEs): traumatic events in childhood like abuse, neglect, or violence, which lead to severe mental health disorders. These disorders impact children’s ability to learn and function on par with their peers. Severe ACEs can lead to mental impairments and an increased risk for suicide, and may inflict long-term adverse effects on children’s health and well-being into adulthood.

Furthermore, a lack of quality health care exacerbates the trauma that low-income communities face, despite more than $500 billion in funding—the equivalent of nearly 4% of GDP spent per year on Medicare/Medicaid, Child Protective Services, and legal fees. This occurs primarily because therapists in low-income communities tend to be the least experienced in treating the youth with the most severe disorders. Addressing these disparities in treatment outcomes requires innovation in the assessment and measurement of mental health disorder severity.

TQIntelligence—Adding intelligence to mental healthcare with Clarity AI

At TQIntelligence, our goal is to enable therapists serving at-risk youth to make more accurate diagnoses, faster. Our proprietary Clarity AI technology uses scientifically validated diagnostic tools, artificial intelligence, and voice recognition technology to bring a data-driven approach that has long been standard in physical medicine to the mental health arena. Similar to other medical practices, where the diagnosis is based on a variety of tests and calibrations such as blood tests, x-rays, or MRIs to gain a holistic understanding of a patient’s problem, Clarity AI uses the science of Speech Emotion Recognition (SER) and AI to identify trauma biomarkers in a patient’s voice. These technologies give therapists the tools they need to better support and provide services to children and adolescents from low-income families.

Google for Startups: Black Founders Fund—Google Cloud technology transforming behavioral health care

TQIntelligence relies on data and cloud technology to drive accurate mental health diagnostics. Google has provided us with a wealth of resources and products to help us along this journey. Our Clarity AI tool utilizes Google’s Cloud Healthcare API, supporting HIPAA compliance and ensuring the safety of patient data. Google Cloud also allowed us to collaborate with their voice analytics team, including giving us access to Cloud AutoML tools to advance our speech emotion recognition technology.
Once fully developed, this algorithm will help us identify a patient’s level of emotional distress and help therapists establish the patient’s baseline and track patient progress over time, monitoring for improvement, stagnation, or deterioration. So far, we have successfully collected and analyzed more than 800 voice samples as part of our ML model training. These audio clips go through a manual voice labeling process by four mental health professionals, since there is no open-source clinical voice sample set for youth to help us train our model. With help from Google Cloud, we have been able to scale our data collection process while providing data-driven solutions to our pilot site collaborators.

We also leverage Cloud Functions, Firebase, and Cloud SQL to create a highly scalable serverless application. Firebase offers seamless integration of the collected data with our smartphone applications. The data to be used for our machine learning model training is collected on the mobile platforms and then moved into Cloud Storage buckets using Cloud Functions. Cloud Functions is also used to retrieve data in the web portal. The tabular data is then stored in databases in Cloud SQL.

Black-led startups—Innovating for the future

Over the past year, there has been a lot of noise surrounding the topic of social justice. I see many organizations using current conversations as a branding gimmick without any quantifiable commitment; others only offer reactionary or temporary responses. By showing up and putting their money where their mouth is, Google is leading the way for corporations, investors, and foundations looking for ways to start to address the dismal gap in funding for BIPOC and women founders.

Google is empowering underrepresented founders like myself. For us, that came through our participation in the Google for Startups Founders Academy (apply here by February 9th!) and the Google for Startups Accelerator: Black Founders. These programs connected us to industry leaders and subject matter experts—relationships that converted into progress, and have proven to be a tremendous advantage in a space where Black-led startups have been excluded from developing AI technology. Receiving $100K in non-dilutive capital from the Google for Startups: Black Founders Fund truly changed the game for us. In addition to financial support, the Black Founders Fund provided us with product support, including 1:1 mentorship from Google Cloud’s engineering team as well as access to Google Cloud credits. Furthermore, we were able to secure an additional $1 million grant from the National Science Foundation.

But most importantly, the fund is fostering meaningful change toward equitable funding. Despite being one of the fastest-growing groups of entrepreneurs, Black founders receive less than 1% of venture capital in the U.S. Excluding founders of color from access to funding isn’t just bad business; it reflects the legacy of white supremacy and racism in this country. At this pivotal point in our country’s history, we need more investors and decision-makers recognizing that diversity drives innovation. Part of this change requires organizations to fearlessly and courageously examine their role in perpetuating the legacy of white supremacy, followed by intentional and well-thought-out actions on how they can contribute to social equity. As someone who grew up with limited resources, I know firsthand the consequences of scarcity for individuals’ emotional, psychological, and spiritual development.
Founders of color like me are the most appropriate change agents to address the ills of poverty and the psychological impact of scarcity. Going forward, the TQIntelligence team will continue our work with low-income communities and strive to help therapists in these communities make the most of their limited resources to provide effective psychotherapeutic intervention; effective therapy has the most significant impact on future generations. With the support of the Google for Startups Black Founders Fund and our mentors at Google—such as Josh Belanich, Jason Scott, Jewel Burks Solomon, and many others—we’ve been able to positively impact the lives of young people from low-income communities. I look forward to continuing this journey with the Google team and changing the world for the better, one diagnosis at a time!

If you want to learn more about how Google Cloud can help your startup, visit our startup page here and sign up for our monthly startup newsletter to get a peek at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

A primer on Cloud Bigtable cost optimization

To serve the various workloads that you might have, Google Cloud offers a selection of managed databases. In addition to partner-managed services, including MongoDB, Cassandra by DataStax, Redis Labs, and Neo4j, Google Cloud provides a series of managed database options: Cloud SQL and Cloud Spanner for relational use cases, Firestore and Firebase for document data, Memorystore for in-memory data management, and Cloud Bigtable, a wide-column, key-value database that can scale horizontally to support millions of requests per second with low latency.

Fully managed cloud databases such as Cloud Bigtable enable organizations to store, analyze, and manage petabytes of data without the operational overhead of traditional self-managed databases. Even with all the cost efficiencies that cloud databases offer, as these systems continue to grow and support your applications, there are additional opportunities to optimize costs. This blog post reviews the billable components of Cloud Bigtable, discusses the impact various resource changes can have on cost, and introduces several high-level best practices that may help manage resource consumption for your most demanding workloads. (In later posts, we’ll discuss optimizing costs while balancing performance trade-offs using methods and best practices that apply to organizations of all sizes.)

Understand the resources that contribute to Cloud Bigtable costs

The cost of your Bigtable instance is directly correlated to the quantity of consumed resources. Compute resources are charged according to the amount of time the resources are provisioned, whereas for network traffic and storage, you are charged by the quantity consumed. More specifically, when you use Cloud Bigtable, you are charged according to the following:

Nodes

In Cloud Bigtable, a node is a compute resource unit. As the node count increases, the instance is able to respond to a progressively higher request (writes and reads) load, as well as serve an increasingly larger quantity of data. Node charges are the same for an instance regardless of whether its clusters store data on solid-state drives (SSD) or hard disk drives (HDD). Bigtable keeps track of how many nodes exist in your instance's clusters during each hour. You are charged for the maximum number of nodes during that hour, according to the regional rates for each cluster. Nodes are priced per node-hour; the per-node unit price is determined by the cluster region.

Data storage

When you create a Cloud Bigtable instance, you choose the storage type, SSD or HDD; this cannot be changed afterward. The average used storage over a one-month period is used to calculate the monthly rate. Since data storage costs are region-dependent, there will be a separate line item on your bill for each region where an instance cluster has been provisioned. The underlying storage format of Cloud Bigtable is the SSTable, and you are billed only for the compressed disk storage consumed by this internal representation. This means that you are charged for the data as it is compressed on disk by the Bigtable service. Further, all data in Google Cloud is persisted in the Colossus file storage system for improved durability. Data storage is priced in gibibytes (GiB) per month; the storage unit price is determined by the deployment region and the storage type, either SSD or HDD.
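Putting the node and storage components together, a back-of-the-envelope monthly estimate is just node-hours plus average stored GiB (the network and backup components described next add their own per-GiB terms). The sketch below shows the arithmetic only; the unit prices are placeholders rather than actual rates, so substitute the current prices for your region and storage type from the Cloud Bigtable pricing page.

```python
# Back-of-the-envelope monthly estimate for the node and storage components
# of a single cluster. The unit prices below are placeholders, not real
# rates -- use the current prices for your region and storage type.
NODE_HOUR_PRICE = 0.65        # hypothetical $ per node per hour
SSD_GIB_MONTH_PRICE = 0.17    # hypothetical $ per GiB per month

def estimate_monthly_cost(nodes, avg_stored_gib, hours_in_month=730):
    node_cost = nodes * hours_in_month * NODE_HOUR_PRICE
    storage_cost = avg_stored_gib * SSD_GIB_MONTH_PRICE
    return node_cost + storage_cost

# e.g. a 6-node SSD cluster storing 4 TiB (4096 GiB) on average
print(estimate_monthly_cost(nodes=6, avg_stored_gib=4096))
```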
Network traffic

Ingress traffic, or the quantity of bytes sent to Bigtable, is free. Egress traffic, or the quantity of bytes sent from Bigtable, is priced according to the destination. Egress to the same zone and egress between zones in the same region are free, whereas cross-region egress and inter-continent egress incur progressively increasing costs based on the total quantity of bytes transferred during the billing period. Egress traffic is priced in GiB sent.

Backup storage

Cloud Bigtable users can readily initiate, within the bounds of project quota, managed table backups to protect against data corruption or operator error. Backups are stored in the zone of the cluster from which they are taken, and will never be larger than the size of the archived table. You are billed according to the storage used and the duration of the backup between backup creation and removal, via either manual deletion or an assigned time-to-live (TTL). Backup storage is priced in GiB per month; the storage unit price depends on the deployment region but is the same regardless of the instance storage type.

Understand what you can adjust to affect Bigtable cost

As discussed, the billable costs of Cloud Bigtable are directly correlated to the compute nodes provisioned as well as the storage and network resources consumed over the billing period. Thus, it is intuitive that consuming fewer resources will result in reduced operational costs. At the same time, there are performance and functional implications of resource consumption reductions that require consideration. Any effort to reduce the operational cost of a running database-dependent production system is best undertaken with a concurrent assessment of the necessary development or administrative effort, while also evaluating potential performance tradeoffs. Some resource consumption rates can be changed easily, others require application or policy changes, and some can be changed only by completing a data migration.

Node count

Depending on your application or workload, any of the resources consumed by your instance might represent the most significant portion of your bill, but it is very possible that the provisioned node count constitutes the largest single line item (we know, for example, that Cloud Bigtable nodes generally represent 50-80% of costs, depending on the workload). Thus it is likely that a reduction in the number of nodes offers the best opportunity for expeditious cost reduction with the most impact.

As one would expect, cluster CPU load is the direct result of the database operations served by the cluster nodes. At a high level, this load is generated by a combination of the database operation complexity, the rate of read or write operations per second, and the rate of data throughput required by your workload. The operation composition of your workload may be cyclical and change over time, providing you the opportunity to shape your node count to the needs of the workload.

When running a Cloud Bigtable cluster, there are two inflexible upper bounds: the maximum available CPU (i.e., 100% average CPU utilization) and the maximum average quantity of stored data that can be managed by a node. At the time of writing, nodes of SSD and HDD clusters are limited to managing no more than 2.5 TiB and 8 TiB of data per node, respectively. If your workload attempts to exceed these limits, your cluster performance may be severely degraded. If available CPU is exhausted, your database operations will increasingly experience undesirable results: high request latency and an elevated service error rate. If the amount of storage per node exceeds the hard limit in any instance cluster, writes to all clusters in that instance will fail until you add nodes to each cluster that is over the limit.

As a result, we recommend choosing a node count for your cluster such that some headroom is maintained below these upper bounds. That way, in the event of an increase in database operations, the database can continue to serve requests with optimal latency, and it will have room to absorb spikes in load before hitting the hard serving limits. Alternatively, if your workload is more data-intensive than compute-intensive, it might be possible to reduce the amount of data stored in your cluster such that the minimum required node count is lowered.
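To make that headroom reasoning concrete, here is a small sizing sketch that combines the per-node storage limits quoted above with a target CPU utilization. It assumes load scales roughly linearly with node count, and the example inputs (a 70% utilization target, 9 TiB of data) are illustrative, not recommendations.

```python
# Estimate the node-count floor implied by per-node storage limits
# (2.5 TiB for SSD, 8 TiB for HDD) and a CPU headroom target.
import math

STORAGE_LIMIT_TIB = {"SSD": 2.5, "HDD": 8.0}

def min_nodes(storage_type, stored_tib, current_nodes,
              observed_peak_cpu, target_cpu=0.70):
    # Floor imposed by storage: each node can manage only so much data.
    storage_floor = math.ceil(stored_tib / STORAGE_LIMIT_TIB[storage_type])
    # Floor imposed by CPU: scale today's peak load to the target utilization,
    # assuming load spreads roughly evenly across nodes.
    cpu_floor = math.ceil(current_nodes * observed_peak_cpu / target_cpu)
    return max(storage_floor, cpu_floor)

# e.g. an SSD cluster holding 9 TiB, currently 10 nodes peaking at 45% CPU
print(min_nodes("SSD", stored_tib=9, current_nodes=10, observed_peak_cpu=0.45))
```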
Data storage volume

Some applications, or workloads, generate and store a significant amount of data. If this describes your workload, there might be an opportunity to reduce costs by storing, or retaining, less data in Cloud Bigtable. As discussed, data storage costs are correlated to the amount of data stored over time: if less data is stored in an instance, the incurred storage costs will be lower. Depending on the storage volume, the structure of your data, and your retention policies, an opportunity for cost savings could exist for instances of either the SSD or HDD storage type. As noted above, since there is a minimum node requirement based on the total data stored, reducing the data stored might reduce both data storage costs and node costs.

Backup storage volume

Each table backup performed will incur additional cost for the duration of the backup storage retention. If you can determine an acceptable backup strategy that retains fewer copies of your data for less time, you will be able to reduce this portion of your bill.

Storage type

Depending on the performance needs of your application or workload, there is a possibility that both node and data storage costs can be reduced if your database is migrated from SSD to HDD. This is because HDD nodes can manage more data than SSD nodes, and the storage costs for HDD are an order of magnitude lower than for SSD storage. However, the performance characteristics of HDD are different: read and write latencies are higher, supported reads per second are lower, and throughput is lower. Therefore, it is essential that you assess the suitability of HDD for the needs of your particular workload before choosing this storage type.

Instance topology

At the time of writing, a Cloud Bigtable instance can contain up to four clusters provisioned in the available Google Cloud zones of your choice. If your instance topology encompasses more than one cluster, there are several potential opportunities for reducing your resource consumption costs. Take a moment to assess the number and the locations of clusters in your instance. Each additional cluster results in additional node and data storage costs, but there is also a network cost implication. When there is more than one cluster in your instance, data is automatically replicated between all of the clusters in your instance topology. If instance clusters are located in different regions, the instance will accrue network egress costs for inter-region data replication. If an application workload issues database operations to a cluster in a different region, there will be network egress costs for both the calls originating from the application and the responses from Cloud Bigtable.

There are strong business rationales, such as system availability requirements, for creating more than one cluster in your instance. For instance, a single cluster provides three nines (99.9%) availability, and a replicated instance with two or more clusters provides four nines (99.99%) availability when a multi-cluster routing policy is used. These options should be taken into account when evaluating the needs for your instance topology. When choosing the locations for additional clusters in a Cloud Bigtable instance, you can place replicas in geo-disparate locations such that data serving and persistence capacity are close to your distributed application endpoints. While this can provide various benefits to your application, it is also useful to weigh the cost implications of the additional nodes, the location of the clusters, and the data replication costs that can result from instances that span the globe. Finally, while limited to a minimum node count by the amount of data managed, clusters are not required to have a symmetric node count. As a result, you can size your clusters asymmetrically, according to the application traffic expected for each cluster.
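As a sketch of what an asymmetric, replicated topology looks like when provisioned programmatically, the snippet below creates a two-cluster instance with the google-cloud-bigtable admin client. The project, instance, and cluster IDs, zones, and node counts are illustrative.

```python
# Sketch: create a two-cluster, asymmetrically sized Bigtable instance.
# IDs, zones, and node counts are illustrative.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance(
    "prod-instance",
    display_name="Prod instance",
    instance_type=enums.Instance.Type.PRODUCTION,
)

# Primary cluster sized for the main application traffic.
primary = instance.cluster(
    "prod-us-east1-b", location_id="us-east1-b",
    serve_nodes=6, default_storage_type=enums.StorageType.SSD,
)
# Smaller replica: replication keeps data in sync, but this cluster only
# needs enough nodes for its own (lighter) serving load and its stored data.
replica = instance.cluster(
    "prod-us-west1-a", location_id="us-west1-a",
    serve_nodes=3, default_storage_type=enums.StorageType.SSD,
)

operation = instance.create(clusters=[primary, replica])
operation.result(timeout=300)  # wait for the long-running operation
```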
High-level best practices for cost optimization

Now that you have had a chance to review how costs are apportioned for Cloud Bigtable instance resources, and you have been introduced to the resource consumption adjustments that affect billing cost, check out some strategies available to realize cost savings that balance the tradeoffs relative to your performance goals. (We’ll discuss techniques and recommendations to follow these best practices in the next post.)

Options to reduce node costs

If your database is overprovisioned, meaning that it has more nodes than needed to serve database operations from your workloads, there is an opportunity to save costs by reducing the number of nodes.

Manually optimize node count

If the load generated by your workload is reasonably uniform, and your node count is not constrained by the quantity of managed data, it may be possible to gradually decrease the number of nodes using a manual process to find your minimum required count.

Deploy an autoscaler

If the database demand of your application workload is cyclical in nature, or undergoes short-term periods of elevated load bookended by significantly lower amounts, your infrastructure may benefit from an autoscaler that can automatically increase and decrease the number of nodes according to a schedule or metric thresholds.
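A minimal building block for such an autoscaler is simply resizing a cluster from code. The sketch below does this with the google-cloud-bigtable admin client; the IDs are illustrative, and a real autoscaler would first read CPU and storage metrics from Cloud Monitoring (or follow a schedule) before choosing the new node count.

```python
# Sketch: resize a Bigtable cluster programmatically. IDs are illustrative;
# the choice of new_node_count would come from a schedule or from metrics.
from google.cloud import bigtable

def resize_cluster(project_id, instance_id, cluster_id, new_node_count):
    client = bigtable.Client(project=project_id, admin=True)
    cluster = client.instance(instance_id).cluster(cluster_id)
    cluster.reload()                      # fetch the current configuration
    if cluster.serve_nodes != new_node_count:
        cluster.serve_nodes = new_node_count
        cluster.update()                  # apply the new node count

resize_cluster("my-project", "prod-instance", "prod-us-east1-b", 4)
```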
Optimize database performance

As discussed earlier, your Cloud Bigtable cluster should be sized to accommodate the load generated by database operations originating from your application workloads, with a sufficient amount of headroom to absorb any spikes in load. Since there is a direct correlation between the minimum required node count and the amount of work performed by the database, an opportunity may exist to improve the performance of your cluster so that the minimum number of required nodes is reduced. Possible changes to your database schema or application logic include rowkey design modifications, filtering logic adjustments, column naming standards, and column value design. In each of these cases, the goal is to reduce the amount of computation needed to respond to your application requests.

Store many columns in a serialized data structure

Cloud Bigtable organizes data in a wide-column format. This structure significantly reduces the amount of computational effort required to serve sparse data. On the other hand, if your data is relatively dense, meaning that most columns are populated for most rows, and your application retrieves most columns for each request, you might benefit from combining the columnar values into fields in a single serialized data structure. A protocol buffer is one such serialization structure.

Assess architectural alternatives

Cloud Bigtable provides the highest level of performance when reads are uniformly distributed across the rowkey space. While such an access pattern is ideal, since serving load is shared evenly across the compute resources, it is likely that some applications will interact with data in a less uniformly distributed manner. For example, for certain workload patterns, there may be an opportunity to use Cloud Memorystore to provide a read-through, or capacity, cache. The additional infrastructure adds cost, but for certain system behaviors it may yield a larger decrease in Bigtable node cost. This option would most likely benefit cases where your workload queries data according to a power-law distribution, such as the Zipf distribution, where a small percentage of keys accounts for a large percentage of the requests, and where your application requires extremely low P99 latency. The tradeoff is that the cache will be eventually consistent, so your application must be able to tolerate some data latency. Such an architectural change could allow you to serve requests with greater efficiency, while also allowing you to decrease the number of nodes in your cluster.

Options to reduce data storage costs

Depending on the data volume of your workload, your data storage costs might account for a large portion of your Cloud Bigtable cost. Data storage costs can be reduced in one of two ways: store less data in Cloud Bigtable, or choose a lower-cost storage type. Developing a strategy for offloading longer-term data to either Cloud Storage or BigQuery may provide a viable alternative to keeping infrequently accessed data in Cloud Bigtable, without giving up the opportunity for comprehensive analytics use cases.

Assess data retention policies

One straightforward method to reduce the volume of data stored is to amend your data retention policies so that older data can be removed from the database after a certain age threshold. While writing an automated process to periodically remove data outside the retention policy limits would accomplish this goal, Cloud Bigtable has a built-in feature that applies garbage collection to columns according to policies assigned to their column family. It is possible to set policies that limit the number of cell versions, or define a maximum age, or time-to-live (TTL), for each cell based on its version timestamp. With garbage collection policies in place, you have the tools to safeguard against unbounded Cloud Bigtable data volume growth for applications that have established data retention requirements.
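As a sketch of how such a policy is applied with the Python client, the snippet below attaches a union garbage-collection rule to a column family so that a cell is removed once it is older than 30 days or is no longer one of the two most recent versions. The instance, table, and column-family names are illustrative, and the table is assumed to already exist.

```python
# Sketch: apply a garbage-collection policy to a column family.
# Union rule: a cell is eligible for collection if it exceeds 2 versions
# OR is older than 30 days. Names are illustrative.
import datetime

from google.cloud import bigtable
from google.cloud.bigtable import column_family

client = bigtable.Client(project="my-project", admin=True)
table = client.instance("prod-instance").table("events")

gc_rule = column_family.GCRuleUnion(rules=[
    column_family.MaxVersionsGCRule(2),
    column_family.MaxAgeGCRule(datetime.timedelta(days=30)),
])

# Create the family with the rule (or call .update() on an existing family).
table.column_family("metrics", gc_rule=gc_rule).create()
```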
Offload larger data structures

Cloud Bigtable performs well with rows up to 100 binary megabytes (MiB) in total size, and can support rows up to 256 MiB, which gives you quite a bit of flexibility about what your application can store in each row. Yet, if you are using all of that available space in every row, the size of your database might grow to be quite large. For some datasets, it might be possible to split the data structures into multiple parts: one, optimally smaller, part in Cloud Bigtable and another, optimally larger, part in Google Cloud Storage. While this would require your application to manage the two data stores, it could decrease the size of the data stored in Cloud Bigtable, which could in turn lower storage costs.

Migrate instance storage from SSD to HDD

A final option that may be considered to reduce storage cost for certain applications is a migration of your storage type from SSD to HDD. Per-gigabyte storage costs for HDD storage are an order of magnitude less expensive than for SSD. Thus, if you need to keep a large volume of data online, you might assess this type of migration. That said, this path should not be embarked upon without serious consideration. Only once you have comprehensively evaluated the performance tradeoffs, and you have allotted the operational capacity to conduct a data migration, might this be chosen as a viable path forward.

Options to reduce backup storage costs

At the time of writing, you can create up to 50 backups of each table and retain each for up to 30 days. If left unchecked, this can add up quickly. Take a moment to assess the frequency of your backups and the retention policies you have in place. If there are no established business or technical requirements for the quantity of archives that you currently retain, there might be an opportunity for cost reduction.

What’s next

Cloud Bigtable is an incredibly powerful database that provides low-latency database operations and linear scalability for both data storage and data processing. As with any provisioned component in your infrastructure stack, the cost of operating Cloud Bigtable is directly proportional to the resources consumed by its operation. Understanding the resource costs, the adjustments available, and some of the cost optimization best practices is your first step toward finding a balance between your application performance requirements and your monthly spend. In the next post in this series, you will learn about some of the observations you can make of your application to better understand the options available for cost optimization. Until then, you can:

- Learn more about Cloud Bigtable pricing
- Review the recommendations about choosing between SSD and HDD storage
- Understand more about the various aspects of Cloud Bigtable performance
Source: Google Cloud Platform

Search and browse Google Cloud code samples

We’ve added new features to our documentation to provide quick and easy ways to search and browse all code samples that are available for Google Cloud. When getting started with a new technology, are you the type of developer who immediately looks for code samples? If so, you’ll be happy to check out these new features.

All code samples search page

The first new feature is the all code samples search page. On this page, you can browse through 1,200+ code sample “tiles,” each of which includes a list of programming languages that the code sample is available in, along with the relevant Google Cloud product. At the top of the page are two filters: the first provides a quick way to filter the code samples listed by programming language, and the second lets you filter the list of code samples by one or more Google Cloud products. Next to the filters is a text search box where you can search the titles and descriptions of code samples.

All code samples page for a specific Google Cloud product

The next feature provides new pages that contain all code samples for a specific Google Cloud product, integrated into the product’s existing documentation. Use these pages to see all the code samples for a product like BigQuery. The code samples listed on these pages include a View in documentation section, which lists all of the documentation pages that include the code sample.

Individual code sample pages

If you click the “View Sample” button from one of the new code sample pages, you open an individual page for that code sample. This page displays the code sample along with a list of documentation pages that include it. You can click the View on GitHub button to see the complete code sample on GitHub. As we add new code samples to the Google Cloud open source GitHub repositories, we automatically create new standalone code sample pages in our documentation to feature these new additions.

Try searching and browsing our new code sample pages. We hope they help you get started more quickly. Or, if your development is already well underway, we hope that these improvements speed up your coding efforts by getting you right to the code samples that you’re looking for.

Related article: Introducing interactive code samples in Google Cloud documentation. With interactive code samples, you can replace the variables inline before you even copy the snippet.
Source: Google Cloud Platform