A Birthday Challenge as Docker Turns 8

Time flies. Eight years ago Docker was introduced to the world and forever changed the way applications are developed. We have enjoyed watching developers from all walks of life and from every corner of the globe bring their ideas to life using our technology. 

As is our tradition in the Docker community, and as announced during our last Community All-Hands, we are celebrating Docker's big day with a birthday challenge in which Docker users are encouraged to learn some of our Docker Captains' favorite tips and tricks by completing 8 hands-on interactive exercises. Unlike last year's challenge, this year each completed exercise earns you not only badges but also points based on speed and accuracy, which are displayed on a leaderboard organized by individual score, country score, and Captain score.

The challenge is on for the next month and we will announce the winners and award special prizes to the top three individual scores. 

So let’s celebrate 8 years of Docker and let the challenge begin!
Source: https://blog.docker.com/feed/

WordPress.com Design Update for a More Intuitive Experience

We're excited to tell you more about WordPress.com's new navigation experience. The update makes managing your entire site more intuitive and creates consistency across all parts of WordPress.com. This update also allows you to take advantage of the wider WordPress open source community by adopting the same admin menu as WordPress.org, the one you see referenced in lots of documentation and tutorials.

As we continue to grow, we've heard that navigating WordPress.com could be confusing because of different sidebar layouts, menus, and more. We listened to your feedback and ran usability tests on new designs to improve the experience.

Updated Sidebar Menu Design in WordPress.com

This design update combines sidebars and menus that were mismatched and streamlines them into one dashboard that’s consistent for everyone.

For many of you, there will be little to no change in how you use WordPress.com. For those of you who access advanced features, you can easily enable wp-admin as your default view in your Account Settings. We’re surfacing most menu items from third-party plugins and themes by default. You can learn more about interface and account settings here.

While you’re in Account Settings, give your WordPress.com interface a new color scheme. You can now choose a color for your dashboard.

Thanks for the continued feedback to make WordPress.com more intuitive to use for all. And we’re not done yet – we’ll continue to listen and evolve as WordPress.com grows. Happy WordPressing!
Source: RedHat Stack

Albertsons and Google are making grocery shopping easier with cloud technology

The past year shook all of us out of our routines as we adapted to the pandemic. Simple things like grocery shopping took on new importance and presented new challenges. Ordering groceries online became commonplace almost overnight. Making that easier was one of the goals we tackled alongside Albertsons Companies when we started working together last spring. Together with Albertsons, we want to make grocery shopping easy, exciting, and friendly, building a digital experience that sets the foundation for Albertsons Cos.' long-term strategy. Albertsons Cos. operates more than 20 grocery brands—including Albertsons, Safeway, Vons, Jewel-Osco, Shaw's, Acme, Tom Thumb, Randalls, United Supermarkets, Pavilions, Star Market, Haggen and Carrs—serving millions of customers across the United States.

Last spring, as the world was in the throes of adapting to life amid COVID-19, Albertsons Cos. and Google held a joint innovation day—all conducted virtually—to figure out what could be done to help people during the pandemic and beyond. In just one day, we came up with a litany of ideas for how technology could be applied to make grocery shopping easier.

Virtual joint innovation sparks ongoing ideas

Together, we've spent the last year turning many of these ideas into reality, and one of those ideas becomes available today: we're announcing new pickup and delivery actions that share additional online information—like availability windows and order minimums—from Albertsons Cos.' stores directly on their Business Profiles on mobile Search, and coming to Google Maps later this year. This new feature joins another idea that became reality earlier this month, when Albertsons announced its use of Business Messages to help people get up-to-date information about COVID-19 vaccines at Albertsons Cos. pharmacies.

And as we look to the future, the two companies are also announcing a multi-year partnership to make shopping easier and more convenient for millions of customers, coast to coast. Through this partnership, we're looking to transform the grocery shopping experience far beyond the pandemic. For example, AI-powered hyperlocal information and features will make it easier to get your grocery shopping done—enabling things like personalized service, easier ordering, pickup and delivery, predictive shopping carts, and more. As we look forward to the future of grocery shopping, let's look at some of the trends from the past year that defined how we're thinking about it.

We don't know exactly what the future will look like, but we know that some things will be forever changed. While many of us will happily let restaurants cook our seafood again, there are things that all of us will take forward from this time. Albertsons Cos. and Google are making sure that easy grocery shopping will be one of them.

Learn more

Keep up with the latest Google Cloud news on our newsroom and blog.
- Albertsons Companies helps customers find COVID-19 vaccines with Business Messages
- Conversational AI with Apigee API Management for enhancing customer experiences
Source: Google Cloud Platform

Take a tour of best practices for Cloud Bigtable performance and cost optimization

To serve your various application workloads, Google Cloud offers a selection of managed database options: Cloud SQL and Cloud Spanner for relational use cases, Firestore and Firebase for document data, Memorystore for in-memory data management, and Cloud Bigtable, a wide-column, NoSQL key-value database. Bigtable was designed by Google to store, analyze, and manage petabytes of data while supporting horizontal scalability to millions of requests per second at low latency. Cloud Bigtable offers Google Cloud customers this same database that has been battle tested within Google for over a decade, without the operational overhead of traditional self-managed databases.

When considering the total cost of ownership, fully managed cloud databases are often far less expensive to operate than self-managed databases. Nonetheless, as your databases continue to support your growing applications, there are coincident opportunities to optimize cost. This blog provides best practices for optimizing a Cloud Bigtable deployment for cost savings. A series of options are presented, and the respective tradeoffs to be considered are discussed.

Before you begin

Written for developers, database administrators, and system architects who currently use Cloud Bigtable, or are considering using it, this blog will help you strike the right balance between performance and cost. The first installment in this blog series, A primer on Cloud Bigtable cost optimization, reviews the billable components of Cloud Bigtable, discusses the impact various resource changes can have on cost, and introduces the best practices that are covered in more detail in this article.

Note: This blog does not replace the public Cloud Bigtable documentation, and you should be familiar with that documentation before you read this guide. Further, this article is not intended to go into the details of optimizing a particular workload to support a business goal, but instead provides some general best practices that can be employed to balance cost and performance.

Understand the current database behavior

Before you make any changes, spend some time to observe and document the current behavior of your clusters. Use Cloud Bigtable Monitoring to document and understand the existing values and trends for these key metrics:

- Reads/writes per second
- CPU utilization
- Request latency
- Read/write throughput
- Disk usage

You will want to look at the metric values at various points throughout the day, as well as the longer-term trends. To start, look at the current and previous weeks to see if the values are constant throughout the day, follow a daily cycle, or follow some other periodic pattern. Assessing longer periods of time can also provide valuable insight, as there may be monthly or seasonal patterns.

Take some time to review your workload requirements, use cases, and access patterns. For instance, are they read-heavy or write-heavy? Are they throughput or latency sensitive? Knowledge of these constraints will help you balance performance with costs.
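As a concrete starting point, the sketch below pulls one of the metrics listed above (average cluster CPU load over the last hour) with the Cloud Monitoring client library. This is only a minimal example under stated assumptions: the project and instance IDs are placeholders, and the metric used is the standard bigtable.googleapis.com/cluster/cpu_load time series.

```python
import time
from google.cloud import monitoring_v3

# Placeholders: substitute your own project and Bigtable instance IDs.
PROJECT_ID = "my-project"
INSTANCE_ID = "my-bigtable-instance"

client = monitoring_v3.MetricServiceClient()
now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now - 3600)},  # last hour
    }
)

# Average CPU load per cluster over the last hour.
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type = "bigtable.googleapis.com/cluster/cpu_load" '
            f'AND resource.labels.instance = "{INSTANCE_ID}"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    cluster = series.resource.labels["cluster"]
    values = [point.value.double_value for point in series.points]
    print(cluster, f"avg CPU load: {sum(values) / len(values):.2%}")
```

A similar query against the other key metrics, recorded over several weeks, gives you the baseline trends described above.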
Define minimum acceptable performance thresholds

Before making any changes to your Cloud Bigtable cluster, take a moment to acknowledge the potential tradeoffs in this optimization exercise. The goal is to reduce operational costs by reducing your cluster resources, changing your instance configuration, or reducing storage requirements to the minimum resources required to serve your workload according to your performance requirements.

Some resource optimization may be possible without any effect on your application performance, but more likely, cost-reducing changes will influence application performance metric values. Knowing the minimum acceptable performance thresholds for your application is important so that you know when you have reached the optimal balance of cost and performance.

First, create a metric budget. Since you will use your application performance requirements to drive the database performance targets, take a moment to quantify the minimum acceptable latency and throughput metric values for each application use case. These values represent the total metric budget for the use case. For a given use case, you may have numerous backend services which interact with Cloud Bigtable to support your application. Use your knowledge of the respective backend services and their behaviors to allocate a fraction of the total budget to each backend service. It is likely that each use case is supported by more than one backend service, but if Cloud Bigtable is the only backend service, then the entire metric budget can be allocated to Cloud Bigtable.

Now, compare the measured Cloud Bigtable metrics with the available metric budget. If the budget is greater than the metrics you observed, there is room to reduce the resources provisioned for Cloud Bigtable without making any other changes. If there is no headroom when you compare the two, you will likely need to make architectural or application logic changes before the provisioned resources can be reduced.

This diagram shows an example of the apportioned latency metric budget for an application with two use cases. Each of these use cases calls backend services, which in turn use additional backend services as well as Cloud Bigtable.

Notice in the examples shown in the illustration above that the budget available for the Cloud Bigtable operations is only a portion of the total service call budget. For instance, the Estimation Service has a total budget of 300ms, and the component call to Cloud Bigtable Workload A has been allotted a minimum acceptable performance threshold of 150ms. As long as this database operation finishes in 150ms or less, the budget has not been exhausted. If, when reviewing your actual database metrics, you find that Cloud Bigtable Workload A is completing more quickly than this, then you have some room to maneuver in your budget that may provide an opportunity to reduce your compute costs.

Four methods to balance performance and cost

Now that you have a better understanding of the behavior and resource requirements of your workload, you can consider the available opportunities for cost optimization. Next, we'll cover four potential and complementary methods to help you:

- Size your cluster optimally
- Optimize your database performance
- Evaluate your data storage usage
- Consider architectural alternatives

Method 1: Size clusters to an optimal cluster node count

Before you consider making any changes to your application or data serving architecture, make certain that you have optimized the number of nodes provisioned for your clusters for your current workloads.

Assess observed metrics for overprovisioning signals

For single clusters, or multi-cluster instances with single-cluster routing, the recommended maximum average CPU utilization is 70% for the cluster and 90% for the hottest node.
For an instance composed of multiple clusters with multi-cluster routing, the recommended maximum average CPU utilization is 35% for the cluster and 45% for the hottest node.

Compare the appropriate recommended maximum CPU utilization values to the metric trends you observe on your existing cluster(s). If you find a cluster with average utilization significantly lower than the recommended value, the cluster is likely underutilized and could be a good candidate for downsizing. Keep in mind that instance clusters need not have a symmetric node count; you can size each cluster in an instance according to its utilization.

When you compare your observations with the recommended values, take into account the various periodic maximums you observed when assessing the cluster metrics. For example, if your cluster reaches a peak weekday average of 55% CPU utilization, but also reaches a maximum average of 65% on the weekend, the latter value should be used to determine the CPU headroom in your cluster.

Manually optimize node count

To right-size your cluster following this strategy, decrease the number of nodes slowly and observe any change in behavior during a period of time when the cluster has reached a steady state. A good rule of thumb is to reduce the cluster node count by no more than 5% to 10% every 10 to 20 minutes. This will allow the cluster to smoothly rebalance the splits as the number of serving nodes decreases.

When planning modifications to your instances, take your application traffic patterns into consideration. Traffic during the modification period should be representative of a typical application load; downsizing and monitoring during off-hours, for example, may give false signals when determining the optimal node count.

Keep in mind that any changes to your database instance should be complemented by active monitoring of your application behavior. As the node count decreases, you will observe a corresponding increase in average CPU utilization. When it reaches the desired level, no additional node reduction is needed. If, during this process, the CPU value rises above your target, you will need to increase the number of nodes in the cluster to serve the load.

Use autoscaling to maintain node count at an optimal level over time

If you observed a regular daily, weekly, or seasonal pattern when assessing the metric trends, you might benefit from metric-based or schedule-based autoscaling. With a well-formulated autoscaling strategy in place, your cluster will expand when additional serving capacity is necessary and contract when the need has subsided. On average, you will have a more cost-efficient deployment that meets your application performance goals.

Because Cloud Bigtable does not provide a native autoscaling solution just yet, you can use the Cloud Bigtable Admin API to programmatically resize your clusters. We've seen customers build their own autoscaler using this API. One such open source solution for Cloud Bigtable autoscaling that has been reused by numerous Google Cloud customers is available on GitHub.
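To illustrate the kind of call such an autoscaler makes, here is a minimal sketch that resizes a cluster with the Python admin client. The project, instance, and cluster IDs are placeholders, and a production autoscaler would derive the target node count from the CPU utilization targets discussed above rather than hard-coding a change.

```python
from google.cloud import bigtable

# Placeholders: substitute your own project, instance, and cluster IDs.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-bigtable-instance")
cluster = instance.cluster("my-bigtable-cluster")

cluster.reload()  # fetch the current configuration
print("current node count:", cluster.serve_nodes)

# A real autoscaler would compute the target from monitoring metrics and
# change the count gradually (for example, 5% to 10% every 10 to 20 minutes).
cluster.serve_nodes = cluster.serve_nodes + 1
operation = cluster.update()   # returns a long-running operation
operation.result(timeout=300)  # wait for the resize to complete
```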
As you implement your autoscaling logic, here are some helpful pointers:

- Scale up and down according to a measured strategy. When scaling up, consider cost: scaling up too quickly will lead to increased costs. When scaling down, scale down gradually for optimal performance.
- Frequent increases and decreases in cluster node count in a short time period are cost ineffective. Since you are charged each hour for the maximum number of nodes that exist during that hour, granular up and down scaling within an hour will be cost inefficient.
- Autoscaling is only effective for the right workloads. There is a short lag time, on the order of minutes, after adding nodes to your cluster before they can serve traffic effectively. This means that autoscaling is not an ideal solution for addressing short-duration traffic bursts.
- Choose autoscaling for traffic that follows a periodic pattern. Autoscaling works well for solutions with normal, diurnal traffic patterns, such as an application where traffic follows normal business hours. It is also effective for anticipated bursts: for scheduled batch workloads, an autoscaling solution with scheduling capability that scales up ahead of the batch traffic can work well.

Method 2: Optimize database performance to lower cost

If you can reduce the database CPU load by improving the performance of your application or optimizing your data schema, this will, in turn, provide the opportunity to reduce the number of cluster nodes. As discussed, this would then lower your database operational costs.

Apply best practices to rowkey design to avoid hotspots

It's worth repeating: the most frequently experienced performance issues for Cloud Bigtable are related to rowkey design, and of those, the most common performance issues result from data access hotspots. As a reminder, a hotspot occurs when a disproportionate share of database operations interact with data in an adjacent rowkey range. Often, hotspots are caused by rowkey designs consisting of monotonically increasing values such as sequential numeric identifiers or timestamp values. Other causes include frequently updated rows, and access patterns resulting from certain batch jobs.

You can use Key Visualizer to identify hotspots and hotkeys in your Cloud Bigtable clusters. This powerful monitoring tool generates visual reports for each of your tables, showing your usage based on the rowkeys that are accessed. Heatmaps provide a quick method to visually inspect table access and identify common patterns, including periodic usage spikes, read or write pressure for specific hotkey ranges, and telltale signs of sequential reads and writes.

If you identify hotspots in your data access patterns, there are a few strategies to consider (a small sketch of the first follows this list):

- Ensure that your rowkey space is well distributed.
- Avoid repeatedly updating the same row with new values; it is far more efficient to create new rows.
- Design batch jobs to access data in a well-distributed pattern.
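To make the first strategy concrete, here is a minimal, hypothetical write path that leads the rowkey with a well-distributed identifier (a device ID) rather than a monotonically increasing timestamp. The table, column family, and field names are illustrative only.

```python
import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-bigtable-instance").table("sensor-readings")

def write_reading(device_id: str, reading: bytes) -> None:
    # Hotspot-prone: a timestamp-first key such as "20210322T101500#device42"
    # concentrates all current writes on one rowkey range (one node).
    # Better: lead with the well-distributed device ID, then the timestamp,
    # so concurrent writes spread across the keyspace.
    ts = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%S%f")
    row_key = f"{device_id}#{ts}".encode()

    row = table.direct_row(row_key)
    row.set_cell("readings", "value", reading)
    row.commit()

write_reading("device42", b"21.5C")
```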
Consolidate datasets with similar schema and contemporaneous access

You may be familiar with database systems where there are benefits in manually partitioning data across multiple tables, or in normalizing relational schema to create more efficient storage structures. However, in Cloud Bigtable, it can often be better to store all your data in one (no pun intended) big table. The best practice is to design your tables to consolidate datasets into larger tables in cases where they have similar schema, or they consist of data, in columns or adjacent rows, that are concurrently accessed.

There are a few reasons for this strategy:

- Cloud Bigtable has a limit of 1,000 tables per instance.
- A single request to a larger table can be more efficient than concurrent requests to many smaller tables.
- Larger tables can take better advantage of the load-balancing features that provide the high performance of Cloud Bigtable.

Further, since Key Visualizer is only available for tables with at least 30 GB of data, table consolidation might provide additional observability.

Compartmentalize datasets which are not accessed together

For example, if you have two datasets, and one dataset is accessed less frequently than the other, designing a schema to separate these datasets on disk might be beneficial. This is especially true if the less frequently accessed dataset is much larger than the other, or if the rowkeys of the two datasets are interleaved.

There are several design strategies available to compartmentalize dataset storage. If atomic row-level updates are not required, and the data is rarely accessed together, two options can be considered:

- Store the data in separate tables. Even if both datasets share the same rowkey space, the datasets can be separated into two separate tables.
- Keep the data in one table, but use separate rowkey prefixes to store related data in contiguous rows, in order to separate the disparate dataset rows from each other.

If you need atomic updates across datasets which share a rowkey space, you will want to keep those datasets in the same table, but each dataset can be placed in a different column family. This is especially effective if your workload concurrently ingests the disparate datasets with the shared keyspace, but reads those datasets separately.

When a query uses a Cloud Bigtable filter to ask for columns from just one family, Cloud Bigtable efficiently seeks the next row when it reaches the last of that column family's cells. In contrast, if independently requested column sets are interleaved within a single column family, Cloud Bigtable will not be able to read the desired cells contiguously. Due to the layout of data on disk, this results in a more resource-expensive series of filtering operations to retrieve the requested cells one at a time.

These schema design recommendations have the same result: the two datasets will be more addressable on disk, which makes the frequent accesses to the smaller dataset much more efficient. Further, separating data which you write together but do not read together lets Cloud Bigtable more efficiently seek the relevant blocks of the SSTable and skip past irrelevant blocks. Generally, any schema design changes made to control relative sort order can potentially help improve performance, which in turn could reduce the number of needed compute nodes and deliver cost savings.
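As a small, hypothetical illustration of the column-family approach, the sketch below ingests two datasets that share a rowkey into separate families, then reads back only the small, frequently accessed family with a server-side family filter. The family and column names are placeholders.

```python
from google.cloud import bigtable
from google.cloud.bigtable import row_filters

client = bigtable.Client(project="my-project")
table = client.instance("my-bigtable-instance").table("products")

# Ingest both datasets atomically under the same rowkey,
# but keep them in separate column families.
row = table.direct_row(b"sku#12345")
row.set_cell("summary", "price", b"9.99")        # small, read often
row.set_cell("detail", "description", b"...")    # large, read rarely
row.commit()

# Read back only the frequently accessed family; Bigtable can skip the
# other family's cells and seek straight to the next row.
only_summary = row_filters.FamilyNameRegexFilter("summary")
for r in table.read_rows(filter_=only_summary):
    print(r.row_key, r.cells["summary"])
```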
Store many column values in a serialized data structure

Each cell traversed by a read incurs a small additional overhead, and each cell returned comes with further overhead at each level of the stack. You may realize performance gains if you store structured data in a single column as a blob, rather than spreading it across a row with one value per column.

There are two exceptions to this recommendation. First, if the blob is large and you frequently want only part of it, splitting up the data can result in higher data throughput. If your queries generally target disjoint subsets of the data, make a column for each respective smaller blob. If there is some overlap, try a tiered system. For example, you might make columns A, B, and C to support queries which want just blob A, sometimes request blobs A and B or blobs B and C, but rarely require all three. Second, if you want to use Cloud Bigtable filters (see the caveats above) on a portion of the data, that portion will need to be in its own column.

If this method fits your data and use case, consider using the protocol buffer (Protobuf) binary format, which may reduce storage overhead as well as improve performance. The tradeoff is that additional client-side processing will be required to decode the protobuf and extract data values. (Check out the post on the two sides of this tradeoff and potential cost optimization for more detail.)

Consider use of timestamps as part of the rowkey

If you are keeping multiple versions of your data, consider adding timestamps at the end of your rowkey instead of keeping multiple timestamped cells of a column in a row. This changes the disk sort order from (row, column, timestamp) to (row, timestamp, column). In the former case, the cell timestamp is assigned as part of the row mutation and is a final part of the cell identifier. In the latter case, the data timestamp is explicitly added to the rowkey. The latter rowkey design is much more efficient if you want to retrieve many columns per row but only a single timestamp or a limited range of timestamps.

This approach is complementary to the previous serialized-structure recommendation: if you collect multiple timestamped cells for each column, an equivalent serialized data structure design will require the timestamp to be promoted to the rowkey. If you cannot store all columns together in a serialized structure, storing values in individual columns will still provide benefits if you read columns in a manner well suited to this pattern.

If you frequently add new timestamped data for an entity in order to persist a time series, this design is most advantageous. However, if you only keep a few versions for historical purposes, intrinsic Cloud Bigtable timestamped cells will be preferable, as these timestamps are obtained and applied to the data automatically, and will not have a detrimental performance impact. Keep in mind that if you only have one column, the two sort orders are equivalent.
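To illustrate the (row, timestamp, column) layout described above, here is a hypothetical sketch that appends a zero-padded timestamp to the rowkey and then reads back a limited time window with a contiguous row-range scan. The entity, table, and family names are placeholders.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-bigtable-instance").table("metrics")

def write_snapshot(entity_id: str, epoch_seconds: int, fields: dict) -> None:
    # The timestamp is promoted into the rowkey (zero-padded so keys sort
    # lexicographically in time order): sort order becomes (row, timestamp, column).
    row_key = f"{entity_id}#{epoch_seconds:011d}".encode()
    row = table.direct_row(row_key)
    for column, value in fields.items():
        row.set_cell("f", column, value)
    row.commit()

def read_window(entity_id: str, start_epoch: int, end_epoch: int):
    # Retrieve every column for one entity, but only for a limited time range,
    # by scanning the contiguous rowkey range for that window.
    return table.read_rows(
        start_key=f"{entity_id}#{start_epoch:011d}".encode(),
        end_key=f"{entity_id}#{end_epoch:011d}".encode(),
    )

write_snapshot("meter-7", 1616410800, {"temp": b"21.5", "humidity": b"40"})
for row in read_window("meter-7", 1616407200, 1616414400):
    print(row.row_key)
```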
Consider client filtering logic over complex query filter predicates

The Cloud Bigtable API has a rich, chainable filtering mechanism which can be very useful when searching a large dataset for a small subset of results. However, if your query is not very selective in the range of rowkeys requested, it is likely more efficient to return all the data as fast as possible and filter in your application. To justify the increased processing cost, only queries with a selective result set should be written with server-side filtering.

Utilize garbage-collection policies to automatically minimize row size

While Cloud Bigtable can support rows with data up to 256 MB in size, performance may be impacted if you store more than 100 MB per row. Since large rows negatively affect performance, you will want to prevent unbounded row growth. You could explicitly delete the data by removing unneeded cells, column families, or rows; however, this process would either have to be performed manually or would require automation, management, and monitoring.

Alternatively, you can set a garbage-collection policy to automatically mark cells for deletion at the next compaction, which typically takes a few days but might take up to a week. You can set policies, by column family, to remove cells that exceed either a fixed number of versions or an age-based expiration, commonly known as a time to live (TTL). It is also possible to apply one of each policy type and define how they are combined: either the intersection (both) or the union (either) of the rules.

There are some subtleties on the exact timing of when data is removed from query results that are worth reviewing: explicit deletes, those performed by the Cloud Bigtable Data API DeleteFromRow mutation, are immediately omitted, whereas the specific moment a garbage-collected cell is excluded cannot be guaranteed.

Once you have assessed your requirements for data retention, and understand the growth patterns for your various datasets, you can develop a strategy for garbage collection that will ensure row sizes do not exceed the recommended maximum and adversely affect performance.

Method 3: Evaluate data storage for cost saving opportunities

While it is more likely that Cloud Bigtable nodes account for a large proportion of your monthly spend, you should also evaluate your storage for cost-reduction prospects. As separate line items, you are billed for the storage used by Cloud Bigtable's internal representation on disk, and for the compressed storage required to retain any active table backups. There are several active and passive methods at your disposal to control data storage costs.

Utilize garbage-collection policies to remove data automatically

As discussed above, the use of garbage-collection policies can simplify dataset pruning. In the same way that you might choose to control the size of rows to ensure proper performance, you can also set policies that remove data to control data storage costs. Garbage collection allows you to save money by removing data that is no longer required or used. This is especially true if you are using the SSD storage type.

In the case that you want to apply garbage-collection policies to serve both this purpose and the one discussed earlier, you can use a policy based on multiple criteria: either a union policy, or a nested policy with both an intersection and a union. To take an extreme example, imagine you have a column that stores values of approximately 10 MB, so you would need to make sure that no more than ten versions are held to keep the row size below 100 MB. There is business value in keeping these 10 versions for the short term, but in the long term, in order to control the amount of data storage, you only need to keep a few versions. In this case you could set a policy such as: (maxage=7d and maxversions=2) or maxversions=10. This garbage-collection policy would remove cells in the column family that meet either of the following conditions (a sketch of defining this policy follows below):

- Older than the 10 most recent cells
- More than seven days old and older than the two most recent cells

A final note on garbage-collection policies: take into consideration that you will continue to be charged for storage of expired or obsolete data until compaction occurs (when garbage collection happens) and the data is physically removed. This process typically occurs within a few days but might require up to a week.
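Here is a minimal sketch of how that nested policy could be expressed with the Python client when creating a column family. The table and family names are placeholders.

```python
import datetime
from google.cloud import bigtable
from google.cloud.bigtable import column_family

client = bigtable.Client(project="my-project", admin=True)
table = client.instance("my-bigtable-instance").table("measurements")

# (maxage=7d and maxversions=2) or maxversions=10
intersection = column_family.GCRuleIntersection(
    rules=[
        column_family.MaxAgeGCRule(datetime.timedelta(days=7)),
        column_family.MaxVersionsGCRule(2),
    ]
)
policy = column_family.GCRuleUnion(
    rules=[intersection, column_family.MaxVersionsGCRule(10)]
)

# Apply the policy when creating the column family.
table.column_family("cf1", gc_rule=policy).create()
```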
Choose a cost-aware backup plan

Database backups are an essential aspect of a backup and recovery strategy. With Cloud Bigtable managed table backups, you can protect your data against operator error and application data corruption scenarios.

Cloud Bigtable backups are handled entirely by the Cloud Bigtable service, and you are only charged for storage during the retention period. Since there is no processing cost to create or restore a backup, they are less expensive than external backups that export and import data using separately provisioned services. Table backups are stored with the cluster where the backup was initiated and include, with some minor caveats, all the data that was in the table at backup creation time.

When a backup is created, you define an expiration date. While this date can be up to 30 days after the backup is created, the retention period should be carefully considered so that you do not keep backups longer than necessary. You can establish a retention period according to your requirements for backup redundancy and table backup frequency. The latter should reflect the amount of acceptable data loss: the recovery point objective (RPO) of your backup strategy.

For example, if you have a table with an RPO of one hour, you can configure a schedule to create a new table backup every hour. You could set the backup expiration to the 30-day maximum; however, depending on the size of the table, this setting would incur a significant cost that might not deliver commensurate business value. Alternatively, based on your backup retention policy, you could choose to set a much shorter backup expiration period: for example, four hours. In this hypothetical example, you could still recover your table within the required RPO of one hour, yet at any point in time you would retain only four or five table backups, compared to 720 backups if the expiration were set to 30 days.
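A minimal sketch of creating such a short-lived backup with the Python client might look like the following; the IDs are placeholders, and the hourly scheduling (for example with Cloud Scheduler or cron) is left out. The exact client surface may differ between library versions, so treat this as an assumption-laden outline rather than a definitive recipe.

```python
import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-bigtable-instance")
table = instance.table("orders")

# Hourly backup that expires after four hours, per the example above.
now = datetime.datetime.now(datetime.timezone.utc)
backup = table.backup(
    backup_id=now.strftime("orders-%Y%m%d-%H%M"),
    cluster_id="my-bigtable-cluster",
    expire_time=now + datetime.timedelta(hours=4),
)
operation = backup.create()   # long-running operation
operation.result(timeout=600)
```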
Provision with HDD storage

When a Cloud Bigtable instance is created, you must choose between SSD and HDD storage. SSD nodes are significantly faster, with more predictable performance, but come at a premium cost and a lower storage capacity per node. Our general recommendation is: when in doubt, choose SSD storage. However, an instance with HDD storage can provide significant cost savings for workloads of a suitable use case. Signs that your use case may be a good fit for HDD instance storage include:

- Your use case has large storage requirements (greater than 10 TB), especially relative to the anticipated read throughput: for example, a time series database for classes of data, such as archival data, that are infrequently read.
- Your data access traffic is largely composed of writes, and predominantly scan reads. HDD storage provides reasonable performance for sequential reads and writes, but supports only a small fraction of the random-read rows per second provided by SSD storage.
- Your use case is not latency sensitive: for example, batch workloads that drive internal analytics workflows.

That said, this choice must be made judiciously. HDD instances can become more expensive than SSD instances if, due to the differing characteristics of the storage media, your cluster becomes disk I/O bound. In this circumstance, an SSD instance could serve the same amount of traffic with fewer nodes than an HDD instance. Moreover, the instance storage type cannot be changed after creation time; to switch between SSD and HDD storage types, you would need to create a new instance and migrate the data. Review the Cloud Bigtable documentation for a more thorough discussion of the tradeoffs between SSD and HDD storage types.

Method 4: Consider architectural changes to lower database load

Depending on your workload, you might be able to make some architectural changes to reduce the load on the database, which would allow you to decrease the number of nodes in your cluster. Fewer nodes will result in a lower cluster cost.

Add a capacity cache

Cloud Bigtable is often selected for its low latency in serving read requests. One of the reasons it works well for these types of workloads is that Cloud Bigtable provides a Block Cache that caches SSTable blocks read from Colossus, the underlying distributed file system. Nonetheless, there are certain data access patterns, such as rows with a frequently read column containing a small value and an infrequently read column containing a large value, where additional cost and performance optimization can be achieved by introducing a capacity cache into your architecture.

In such an architecture, you provision a caching infrastructure that is queried by your application before a read operation is sent to Cloud Bigtable. If the desired result is present in the caching layer, also known as a cache hit, Cloud Bigtable does not need to be consulted. This use of a caching layer is known as the cache-aside pattern.

Cloud Memorystore offers both Redis and Memcached as managed cache offerings. Memcached is typically chosen for Cloud Bigtable workloads given its distributed architecture. Check out this tutorial for an example of how to modify your application logic to add a Memcached cache layer in front of Cloud Bigtable.

If a high cache hit ratio can be maintained, this type of architecture offers two notable optimization options. First, it might allow you to downsize your Cloud Bigtable cluster node count. If the cache is able to serve a sizable portion of read traffic, the Cloud Bigtable cluster can be provisioned with lower read capacity. This is especially true if the request profile follows a power-law probability distribution: one where a small number of rowkeys represents a significant proportion of the requests.

Second, as discussed above, if you have a very large dataset, you could consider provisioning a Cloud Bigtable instance with HDDs rather than SSDs. For large data volumes, the HDD storage type for Cloud Bigtable might be significantly less expensive than the SSD storage type. SSD-backed Cloud Bigtable clusters have significantly higher point-read capacity than the HDD equivalents, but the same write capacity. If less read capacity is needed because of the capacity cache, an HDD instance could be utilized while still maintaining the same write throughput.

These optimizations do come with a risk if a high cache hit ratio cannot be maintained, whether due to a change in the query distribution or to downtime in the caching layer. In these cases, an increased amount of traffic will be passed to Cloud Bigtable. If Cloud Bigtable does not have the necessary read capacity, your application performance may suffer: request latency will increase and request throughput will be limited. In such a situation, having an autoscaling solution in place can provide some safeguard; however, this architecture should be adopted only once the failure-state risks have been assessed.
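For reference, here is a minimal cache-aside sketch in front of a Bigtable read, using the pymemcache client against a Memorystore for Memcached endpoint. The host, table, family, and column names are placeholders, and production code would add serialization, error handling, and cache invalidation.

```python
from typing import Optional

from google.cloud import bigtable
from pymemcache.client.base import Client as MemcacheClient

bt_client = bigtable.Client(project="my-project")
table = bt_client.instance("my-bigtable-instance").table("profiles")
cache = MemcacheClient(("10.0.0.3", 11211))  # Memorystore for Memcached endpoint

def get_profile(row_key: bytes) -> Optional[bytes]:
    cache_key = b"profiles:" + row_key
    value = cache.get(cache_key)
    if value is not None:
        return value                       # cache hit: Bigtable is not consulted

    row = table.read_row(row_key)          # cache miss: fall through to Bigtable
    if row is None:
        return None
    value = row.cells["summary"][b"data"][0].value
    cache.set(cache_key, value, expire=300)  # populate the cache for later reads
    return value
```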
What's next

Cloud Bigtable is a powerful, fully managed cloud database that supports low-latency operations and provides linear scalability to petabytes of data storage and compute resources.

As discussed in the first part of this series, the cost of operating a Cloud Bigtable instance is related to the reserved and consumed resources. An overprovisioned Cloud Bigtable instance will incur higher cost than one that is tuned to the specific requirements of your workload; however, you'll need some time to observe the database to determine the appropriate metric targets. A Cloud Bigtable instance that is tuned to best utilize the provisioned compute resources will be more cost-optimized.

In the next post in this series, you will learn more about certain under-the-hood aspects of Cloud Bigtable that help shed some light on why various optimizations have a direct correlation to cost reduction. Until then, you can:

- Learn more about Cloud Bigtable performance.
- Explore the Key Visualizer diagnostic tool for Cloud Bigtable.
- Understand more about Cloud Bigtable garbage collection.

While there have been many improvements and optimizations to the design since publication, the original Cloud Bigtable whitepaper remains a useful resource.

Related article: A primer on Cloud Bigtable cost optimization, which explains the resources that contribute to costs and how to think about cost optimization for the Cloud Bigtable database.
Source: Google Cloud Platform

Auto Trader: Charting the road from Oracle to PostgreSQL

Editor's note: We're hearing today from Auto Trader, the UK's largest online automotive marketplace. Auto Trader aims to improve the process of buying and selling vehicles in the UK, providing a platform for consumers to connect with retailers and manufacturers. Here's how Auto Trader used Google's Cloud SQL on their database migration journey from on-premises into the cloud.

You could say innovation is in our DNA at Auto Trader—we've spent nearly 40 years growing and evolving our business solutions alongside our customers. Established as a print magazine in 1977, Auto Trader went completely digital in 2013 and has since become one of the leading digital brands in the UK. Currently, Auto Trader brings in around 50 million cross-platform impressions a month and holds a 75% share of all minutes spent on automotive classified platforms.

As we've grown, we found ourselves needing to move faster. Over the years we had invested a lot in our on-premises infrastructure, and as we started shifting to the cloud we had made significant strides in ensuring our estate was cloud-native. Still, there were capabilities that were becoming increasingly difficult to realize without a significant overhaul. In 2018, we decided to move to Google Cloud, adopting Google Kubernetes Engine (GKE), as we felt we could unlock some of our development goals faster by leveraging these solutions. For our team, this meant focusing more on the services and databases rather than the day-to-day building and management of the infrastructure.

From proprietary to open source

Historically, we had a massive on-premises Oracle database where we consolidated all of our services—around 200 in total. While this worked well for monolithic application development, it became clear we needed to break up these large databases into smaller chunks, more tightly integrated with their owning service. Our long-term vision has always been to be more database-agnostic to avoid getting locked into a single vendor. As a result, we already had a very low PL/SQL footprint. Google Cloud SQL was a natural fit for us and now sits at the heart of our data store strategy.

Cloud SQL's fully managed services took away the headache of database maintenance that would typically take up a lot of our energy. We could rely on Google to deal with the behind-the-scenes management of upgrades, backups, patches, or failures, enabling our data engineers to invest more time in learning and performance tuning.

To date, we have migrated around 65% of our Oracle footprint to Cloud SQL, with approximately 2TB (13% of pre-migration size) of data still left to shift across several services. Migrating and moving off of our Oracle footprint remains a strategic priority for us in 2021.

Transforming mindset with migration

Our long-term goal is to move away from a consolidated database architecture where services have to share a database engine. A service should only be able to access its own data stores, and not the databases of other services. All other access should be through a service layer.

Auto Trader was well positioned for migration, with over 60% of our services already "cloud native," running on our private cloud prior to moving to GKE clusters. The remaining services were re-engineered for the cloud, removing dependencies on local stateful storage and ensuring horizontal scalability.
We have a clear set of rules around how services of any tier or criticality need to run in our cloud environment. Thus far, we have 14 MySQL-backed instances supporting 63 services and 11 PostgreSQL instances running 17 services. These instances support our critical Vehicle Data Service, which contains details on every vehicle and powers our inventory service. What's impressive is that we've seen strong performance improvements since we migrated. We also recently migrated our registration and single sign-on service to Postgres with very little fuss or drama, and have since scaled resources on the Cloud SQL instance for this service with ease within a five-minute window.

As part of this migration, we're also trying to change behavior for our users. We restrict direct programmatic access to the Cloud SQL databases for anything other than the owning service, to help avoid unknown external dependencies, something which has caused us pain historically while on Oracle.

Instead, we now facilitate access to data through Google's data cloud, which is centered around moving data from operational data stores, usually in Cloud SQL databases, using Kafka as the stream processing framework to land data in BigQuery, Google Cloud's enterprise data warehouse. The source data stored in BigQuery is then processed using a tool called dbt (data build tool) to clean it and join it with other useful datasets, and the result is stored back into BigQuery. Looker, our business intelligence (BI) tool, is then connected to BigQuery to allow colleagues to explore, analyse and share business insights.

Cloud SQL delivers speed, freedom, and innovation

Moving to Cloud SQL has significantly impacted the way our teams work and has helped us create a seamless development experience. For instance, it has removed the burden of maintenance from our team. We used to schedule maintenance outside of business hours, which would take away a database engineer for days at a time. Adding memory and CPU and generally scaling up instances has become a non-event and allows us to move at a much faster pace from the point of decision making to actioning. Cloud SQL is much easier to manage, and the team no longer needs to worry about spending hours on maintenance patches, which has improved overall team productivity.

Cloud SQL has also changed how we provision new instances for developers. Before we migrated, we couldn't even offer developers the option of having their own instance. They had to use a consolidated instance, regardless of what else was running on it. Now, we can provision a new dedicated instance in under an hour with a couple of lines of Terraform code. Developers have their own space and freedom to work without the risk of impacting other services. We are also able to troubleshoot issues more quickly and can even link developers directly to our Grafana dashboard to give them better visibility.

Since modernizing to GKE, Istio, and Cloud SQL, Auto Trader's release cadence has improved by over 140% (year over year), enabling an impressive peak of 458 releases to production in a single day. Auto Trader's fast-paced delivery platform managed over 36,000 releases in a year with an improved success rate of 99.87%, and it continues to grow.

To find out more about how Auto Trader improved visibility, agility, and security by migrating to Google Cloud, make sure to read the Auto Trader UK case study.
Source: Google Cloud Platform