Building internet-scale event-driven applications with Cloud Spanner change streams

Since its launch, Cloud Spanner change streams has seen broad adoption by Spanner customers in healthcare, retail, financial services, and other industries. This blog post provides an overview of the latest updates to Cloud Spanner change streams and how they can be used to build event-driven applications.

A change stream watches for changes to your Spanner database (inserts, updates, and deletes) and streams out these changes in near real time. One of the most common uses of change streams is replicating Spanner data to BigQuery for analytics. With change streams, it's as easy as writing data definition language (DDL) to create a change stream on the desired tables and configuring Dataflow to replicate these changes to BigQuery, so that you can take advantage of BigQuery's advanced analytic capabilities.

Yet analytics is just the start of what change streams can enable. Pub/Sub and Apache Kafka are asynchronous, scalable messaging services that decouple the services that produce messages from the services that process those messages. With support for Pub/Sub and Apache Kafka, Spanner change streams now lets you use Spanner transactional data to build event-driven applications.

An example of an event-driven architecture is an order system that triggers inventory updates to an inventory management system whenever orders are placed. In this example, orders are saved in a table called order_items, so changes to this table will trigger events in the inventory system. To create a change stream that tracks all changes made to order_items, run the following DDL statement:

```sql
CREATE CHANGE STREAM order_items_changes FOR order_items
```

Once the order_items_changes change stream is created, you can create event streaming pipelines to Pub/Sub and Kafka.

Creating an event streaming pipeline to Pub/Sub

The change streams Pub/Sub Dataflow template lets you create Dataflow jobs that send change events from Spanner to Pub/Sub and build these kinds of event streaming pipelines. Once the Dataflow job is running, we can simulate inventory changes by inserting and updating order items in the Spanner database:

```sql
INSERT INTO order_items (order_item_id, order_id, article_id, quantity)
VALUES (
  '5fb2dcaa-2513-1337-9b50-cc4c56a06fda',
  'b79a2147-bf9a-4b66-9c7f-ab8bc6c38953',
  'f1d7f2f4-1337-4d08-a65e-525ec79a1417',
  5
);
```

```sql
UPDATE order_items
SET quantity = 10
WHERE order_item_id = '5fb2dcaa-2513-1337-9b50-cc4c56a06fda';
```

This causes two change records to be streamed out through Dataflow and published as messages to the given Pub/Sub topic: the first Pub/Sub message contains the inventory insert, and the second message contains the inventory update. From here, the data can be consumed using any of the many integration options Pub/Sub offers.

Creating an event streaming pipeline to Apache Kafka

In many event-driven architectures, Apache Kafka is the central event store and stream-processing platform. With our newly added Debezium-based Kafka connector, you can build event streaming pipelines with Spanner change streams and Apache Kafka.
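For example, a downstream inventory service might consume these change events with a standard Kafka client. The sketch below uses the kafka-python library; the topic name and the shape of the message payload are assumptions (Debezium connectors typically publish one topic per tracked table), so check the connector documentation for the exact topic naming and record schema.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Assumed topic name: Debezium-style connectors usually publish one topic per
# table, prefixed with a configurable connector name. Verify against the
# Spanner connector's documentation.
TOPIC = "spanner-connector.order_items"

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=["localhost:9092"],
    group_id="inventory-service",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # The payload structure (mod type plus new column values) is illustrative;
    # inspect a real record to see the actual field names before relying on them.
    print("Received change event:", event)
    # e.g., update inventory counts in the downstream system here.
```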
The Kafka connector produces a change event for every insert, update, and delete. It groups change event records for each Spanner table into a separate Kafka topic. Client applications then read the Kafka topics that correspond to the database tables of interest, and can react to every row-level event they receive from those topics.

The connector has built-in fault tolerance. As the connector reads changes and produces events, it records the last commit timestamp processed for each change stream partition. If the connector stops for any reason (e.g., communication failures, network problems, or crashes), it simply continues streaming records where it last left off once it restarts. To learn more about the change streams connector for Kafka, see Build change streams connections to Kafka. You can download the change streams connector for Kafka from Debezium.

Fine-tuning your event messages with new value capture types

In the example above, the stream order_items_changes uses the default value capture type, OLD_AND_NEW_VALUES. This means that the change record includes both the old and new values of a row's modified columns, along with the primary key of the row. Sometimes, however, you don't need to capture all that change data. For this reason, we added two new value capture types: NEW_VALUES and NEW_ROW.

To continue with our existing example, let's create another change stream that contains only the new values of changed columns. This is the value capture type with the lowest memory and storage footprint.

```sql
CREATE CHANGE STREAM order_items_changed_values
FOR order_items
WITH ( value_capture_type = 'NEW_VALUES' )
```

The DDL above creates a change stream using the PostgreSQL interface syntax. Read Create and manage change streams to learn more about the DDL for creating change streams for both PostgreSQL and GoogleSQL Spanner databases.

Summary

With change streams, your Spanner data follows you wherever you need it, whether that's for analytics with BigQuery, for triggering events in downstream applications, or for compliance and archiving. And because change streams are built into Spanner, there's no software to install, and you get external consistency, high scale, and up to 99.999% availability. With support for Pub/Sub and Kafka, Spanner change streams makes it easier than ever to build event-driven pipelines with whatever flexibility you need for your business.

- To get started with Spanner, create an instance, try it out for free, or take a Spanner Qwiklab.
- To learn more about Spanner change streams, check out About change streams.
- To learn more about the change streams Dataflow template for Pub/Sub, go to Cloud Spanner change streams to Pub/Sub template.
- To learn more about the change streams connector for Kafka, go to Build change streams connections to Kafka.
Source: Google Cloud Platform

Unlock insights faster from your MySQL data in BigQuery

Data practitioners know that relational databases are not designed for analytical queries. Data-driven organizations that connect their relational database infrastructure to their data warehouse get the best of both worlds: a production database unburdened by a barrage of analytical queries, and a data warehouse that is free to mine for insights without the fear of bringing down production applications. The remaining question is how to create a connection between two disparate systems with as little operational overhead as possible.

Dataflow Templates make connecting your MySQL database with BigQuery as simple as filling out a web form. No custom code to write, no infrastructure to manage. Dataflow is Google Cloud's serverless data processing service for batch and streaming workloads that makes data processing fast, autotuned, and cost-effective. Dataflow Templates are reusable snippets of code that define data pipelines; by using templates, a user doesn't have to worry about writing a custom Dataflow application. Google provides a catalog of templates that help automate common workflows and ETL use cases. This post will dive into how to schedule a recurring batch pipeline for replicating data from MySQL to BigQuery.

Launching a MySQL-to-BigQuery Dataflow Data Pipeline

For our pipeline, we will launch a Dataflow Data Pipeline. Data Pipelines allow you to schedule recurring batch jobs¹ and feature a suite of lifecycle management features for streaming jobs, which makes them an excellent starting point. We'll click on the "Create Data Pipeline" button at the top and select the MySQL to BigQuery pipeline. If your relational database is PostgreSQL or SQL Server, we have templates for those systems as well.

The form will now expand to provide a list of parameters the pipeline needs in order to execute:

Required parameters
- Schedule: the recurring schedule for your pipeline (you can schedule hourly, daily, or weekly jobs, or define your own schedule with unix cron)
- Source: the URL connection string for the JDBC source. If your database requires SSL certificates, you can append query strings that enable SSL mode and the Cloud Storage locations of the certificates. These can be encrypted using Google Cloud Key Management Service.
- Target: the BigQuery output table
- Temp bucket: a Cloud Storage bucket for staging files

Optional parameters
- A JDBC source SQL query, if you want to replicate only a portion of the database
- Username and password, if your database requires authentication. You can also pass in an encrypted string from Google Cloud KMS, if you prefer.
- Partitioning parameters
- Dataflow-related parameters, including options to modify autoscaling, the number of workers, and other configurations related to the worker environment. If you require an SSL certificate and you have truststore and certificate files, use the "extra files to stage" parameter to pass in their respective locations.

Once you've entered your configuration, you are ready to hit the Create Pipeline button. Creating the pipeline takes you to the Pipeline Info screen, which shows a history of executions of the pipeline. This is a helpful view if you are looking for jobs that ran long, or identifying patterns that recur across multiple executions. You'll find a list of jobs related to the pipeline in a table view near the bottom of the page.
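Once a scheduled execution completes, you can also confirm programmatically that rows landed in BigQuery. The snippet below is a minimal sketch using the BigQuery Python client; the project, dataset, and table names are placeholders rather than values from this walkthrough.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Placeholder destination table; use the BigQuery output table you set
# in the pipeline form.
query = """
    SELECT COUNT(*) AS row_count
    FROM `my-project.my_dataset.orders_replica`
"""

result = client.query(query).result()  # runs the query and waits for completion
for row in result:
    print(f"Replicated rows: {row.row_count}")
```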
Back on the Pipeline Info screen, clicking on one of the job IDs lets you inspect a specific execution in more detail. The Dataflow monitoring experience features a job graph showing a visual representation of the pipeline you launched, and includes a logging panel at the bottom that displays logs collected from the job and workers. You will find information associated with the job on the right-hand panel, as well as several other tabs that help you understand your job's optimized execution, performance metrics, and cost.

Finally, you can go to the BigQuery SQL workspace to see your table written to its final destination. If you prefer a video walkthrough of this tutorial, you can find it here. You're all set for unlocking value from your relational database — and it didn't take an entire team to set it up!

What's next

If your use case involves reading and writing changes continuously, we recommend checking out our Datastream product, which serves change data capture and real-time replication use cases. If you prefer a solution based on open-source technology, you can also explore our Change Data Capture Dataflow template, which uses a Debezium connector to publish messages to Pub/Sub and then writes to BigQuery.

Happy Dataflowing!

1. If you do not need to run your job on a scheduled basis, we recommend using the "Create Job from Template" workflow, found on the "Jobs" page.
Source: Google Cloud Platform

How to use custom holidays for time-series forecasting in BigQuery ML

Time-series forecasting is one of the most important model types across a variety of industries, such as retail, telecom, entertainment, and manufacturing. It serves many use cases, such as forecasting revenues and predicting inventory levels. It's no surprise that time series is one of the most popular models in BigQuery ML. Defining holidays is important in any time-series forecasting model to account for variations and fluctuations in the time-series data. In this blog post we discuss how you can take advantage of recent enhancements to define custom holidays and get better explainability for your forecasting models in BigQuery ML.

You could already specify HOLIDAY_REGION when creating a time-series model, and the model would use the holiday information within that HOLIDAY_REGION to capture the holiday effect. However, we heard from our customers that they want to understand the holiday effect in more detail: which holidays are used in modeling, what each individual holiday contributes to the model, and how to customize or create their own holidays for modeling.

To address these needs, we recently launched the preview of custom holiday modeling capabilities in ARIMA_PLUS and ARIMA_PLUS_XREG. With these capabilities, you can now do the following:

- Access all the built-in holiday data by querying the BigQuery public dataset bigquery-public-data.ml_datasets.holidays_and_events_for_forecasting or by using the table-valued function ML.HOLIDAY_INFO, and inspect the holiday data used for fitting your forecasting model (see the example query below).
- Customize the holiday data (e.g., the primary date and holiday effect window) using standard GoogleSQL to improve time-series forecasting accuracy.
- Explain the contribution of each holiday to the forecasting result.
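If you'd like to see which built-in holidays are available before customizing anything, you can query the public dataset directly. The sketch below uses the BigQuery Python client; the column names (region, holiday_name, primary_date) are assumptions based on the custom holiday schema shown later in this post, so confirm them against the dataset's actual schema.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Column names are assumptions; run a `SELECT *` first to confirm the schema
# of the public holiday dataset.
query = """
    SELECT region, holiday_name, primary_date
    FROM `bigquery-public-data.ml_datasets.holidays_and_events_for_forecasting`
    WHERE region = 'US'
    ORDER BY primary_date
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.region, row.holiday_name, row.primary_date)
```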
Before we dive into using these features, let's first understand custom holiday modeling and why you might need it. Say you want to forecast the number of daily page views of the Wikipedia page for Google I/O, Google's flagship event for developers. Given the large attendance of Google I/O, you can expect significantly increased traffic to this page around the event days. Because these are Google-specific dates and not included in the default HOLIDAY_REGION, the forecasted page views will not provide a good explanation for the spikes around those dates. You need the ability to specify custom holidays in your model so that you get better explainability for your forecasts. With custom holiday modeling features, you can now build more powerful and accurate time-series forecasting models using BigQuery ML.

"The bank would like to utilize a custom holiday calendar as it has 'tech holidays' due to various reasons like technology freezes, market instability freeze etc. And, it would like to incorporate those freeze calendars while training the ML model for Arima," said a data scientist at a large US-based financial institution.

The following sections show some examples of the new custom holiday modeling for forecasting in BigQuery ML. In this example, we explore the bigquery-public-data.wikipedia dataset, which has the daily pageviews for Google I/O, create a custom holiday for the Google I/O event, and then use the model to forecast the daily pageviews based on historical data while factoring in the customized holiday calendar.

An example: forecast Wikipedia daily pageviews for Google I/O

Step 1. Create the dataset

BigQuery hosts hourly Wikipedia page view data across all languages. As a first step, we aggregate the page views by day across all languages.

```sql
CREATE OR REPLACE TABLE `bqml_tutorial.googleio_page_views`
AS
SELECT
  DATETIME_TRUNC(datehour, DAY) AS date,
  SUM(views) AS views
FROM
  `bigquery-public-data.wikipedia.pageviews_*`
WHERE
  datehour >= '2017-01-01'
  AND datehour < '2023-01-01'
  AND title = 'Google_I/O'
GROUP BY
  DATETIME_TRUNC(datehour, DAY)
```

Step 2: Forecast without custom holidays

Now we do a regular forecast. We use the daily page view data from 2017 to 2021 and forecast into the year 2022.

```sql
CREATE OR REPLACE MODEL `bqml_tutorial.forecast_googleio`
  OPTIONS (
    model_type = 'ARIMA_PLUS',
    holiday_region = 'US',
    time_series_timestamp_col = 'date',
    time_series_data_col = 'views',
    data_frequency = 'DAILY',
    horizon = 365)
AS
SELECT
  *
FROM
  `bqml_tutorial.googleio_page_views`
WHERE
  date < '2022-01-01';
```

We can visualize the result from ml.explain_forecast using Looker Studio. The forecasting model captures the general trend pretty well. However, it does not capture the increased traffic related to previous Google I/O events, nor does it generate an accurate forecast for 2022.

Step 3: Forecast with custom holidays

Google I/O happened on the dates listed below between 2017 and 2022 (it was cancelled in 2020 due to the pandemic). We would like to instruct the forecasting model to consider these dates as well.

```sql
CREATE OR REPLACE MODEL `bqml_tutorial.forecast_googleio_with_custom_holiday`
  OPTIONS (
    model_type = 'ARIMA_PLUS',
    holiday_region = 'US',
    time_series_timestamp_col = 'date',
    time_series_data_col = 'views',
    data_frequency = 'DAILY',
    horizon = 365)
AS (
  training_data AS (
    SELECT
      *
    FROM
      `bqml_tutorial.googleio_page_views`
    WHERE
      date < '2022-01-01'
  ),
  custom_holiday AS (
    SELECT
      'US' AS region,
      'GoogleIO' AS holiday_name,
      primary_date,
      1 AS preholiday_days,
      2 AS postholiday_days
    FROM
      UNNEST(
        [
          DATE('2017-05-17'),
          DATE('2018-05-08'),
          DATE('2019-05-07'),
          -- cancelled in 2020 due to pandemic
          DATE('2021-05-18'),
          DATE('2022-05-11')])
        AS primary_date
  )
);
```

As you can see, we provide the full list of Google I/O's event dates to our forecasting model.
We also adjust the holiday effect window to cover four days around each event date, to better capture potential view traffic before and after the event. After visualizing in Looker Studio, we can see that the custom holiday significantly boosts the performance of the forecasting model: it now captures the increase in page views caused by Google I/O.

Step 4: Explain fine-grained holiday effects

You can further inspect the holiday effect contributed by each individual holiday by using ml.explain_forecast:

```sql
SELECT
  time_series_timestamp,
  holiday_effect_GoogleIO,
  holiday_effect_US_Juneteenth,
  holiday_effect_Christmas,
  holiday_effect_NewYear
FROM
  ml.explain_forecast(
    MODEL bqml_tutorial.forecast_googleio_with_custom_holiday,
    STRUCT(365 AS horizon))
WHERE holiday_effect != 0;
```

As the results show, Google I/O indeed contributes a large holiday effect to the overall forecast result on those custom holiday dates.

Step 5: Compare model performance

Finally, we use ml.evaluate to compare the performance of the previous model created without custom holidays and the new model created with custom holidays. Specifically, we would like to see how the new model performs when forecasting a future custom holiday, so we set the time range to the week of Google I/O in 2022.

```sql
SELECT
  "original" AS model_type,
  *
FROM
  ml.evaluate(
    MODEL bqml_tutorial.forecast_googleio,
    (
      SELECT
        *
      FROM
        `bqml_tutorial.googleio_page_views`
      WHERE
        date >= '2022-05-08'
        AND date < '2022-05-12'
    ),
    STRUCT(
      365 AS horizon,
      TRUE AS perform_aggregation))
UNION ALL
SELECT
  "with_custom_holiday" AS model_type,
  *
FROM
  ml.evaluate(
    MODEL
      bqml_tutorial.forecast_googleio_with_custom_holiday,
    (
      SELECT
        *
      FROM
        `bqml_tutorial.googleio_page_views`
      WHERE
        date >= '2022-05-08'
        AND date < '2022-05-12'
    ),
    STRUCT(
      365 AS horizon,
      TRUE AS perform_aggregation));
```

The result demonstrates the significant performance boost of the new model.

Conclusion

In the example above, we demonstrated how to use custom holidays in forecasting and how to evaluate their impact on a forecasting model. The public dataset and the ML.HOLIDAY_INFO table-valued function are also helpful for understanding which holidays are used to fit your model. Some of the gains brought by this feature:

- You can configure custom holidays easily using standard GoogleSQL, while enjoying BigQuery scalability, data governance, and more.
- You get elevated transparency and explainability of time-series forecasting in BigQuery.

What's next?

Custom holiday modeling in forecasting models is now available for you to try in preview. Check out the tutorial in BigQuery ML to learn how to use it. For more information, please refer to the documentation.

Acknowledgements: Thanks to Xi Cheng, Haoming Chen, Jiashang Liu, Amir Hormati, Mingge Deng, Eric Schmidt and Abhinav Khushraj from the BigQuery ML team.
Also thanks to Weijie Shen and Jean Ortega from the Fargo team of the Resource Efficiency Data Science team.

Related Article: How to do multivariate time series forecasting in BigQuery ML. Multivariate time series forecasting allows BigQuery users to use external covariates along with the target metric for forecasting.
Source: Google Cloud Platform

The big picture: How Google Photos scaled rapidly on Spanner

Mobile photography has become ubiquitous over the past decade, and it's now easier than ever to take professional quality photos with the push of a button. This has resulted in explosive growth in the number of photo and video captures, and a huge portion of these photos and videos contain private, cherished, and beloved memories — everything from small, everyday moments to life's biggest milestones. Google Photos aims to be the home for all these memories, organized and brought to life so that users can share and save what matters.

With more than one billion users and four trillion photos and videos — and with the responsibility to protect personal, private, and sensitive user data — Google Photos needs a database solution that is highly scalable, reliable, secure, and supports large-scale data processing workloads conducive to AI/ML applications. Spanner has proved to be exactly the database we needed.

A picture says a thousand words

Google Photos offers a complete consumer photo workflow app for mobile and web. Users can automatically back up, organize, edit, and share their photos and videos with friends and family. All of this data can be accessed and experienced in delightful ways thanks to machine learning-powered features like search, suggested edits, suggested sharing, and Memories.

With Photos storing over 4 trillion photos and videos, we need a database that can handle a staggering amount of data with a wide variety of read and write patterns. We store all the metadata that powers Google Photos in Spanner, including both media-specific and product-specific metadata for features like album organization, search, and clustering. The Photos backend is composed of dozens of microservices, all of which interact with Spanner in different ways, some serving user-facing traffic and others handling batch traffic. Photos also has dozens of large batch-processing Flume pipelines that power our most expensive workloads: AI/ML processes, data integrity management, and other types of full-account or database-wide processing.

[Image: High-level architecture for media processing in Google Photos using Spanner]

Despite Google Photos' size and complexity, Spanner has a number of features that make our integration easy to maintain. Thanks to Spanner's traffic isolation, capacity management, and automatic sharding capabilities, we are able to provide a highly reliable user experience even with unpredictably bursty traffic loads. Balancing our online and offline traffic is also manageable thanks to Spanner's workload-tunable replication capabilities.

Photos enables users to access all of their photos at any time, reliably across the globe. Photos relies on Spanner to automatically replicate data with 99.999% availability. Spanner's sharding capabilities give us low latency worldwide, help us smooth our computational workloads, and make it easy for us to support the ever increasing set of regulatory requirements concerning data residency.

The system has to be reliable and available for user uploads, while simultaneously ensuring that ML-based features not only perform well, but also don't impact interactive traffic. Spanner's sharding flexibility allows both these use cases to be satisfied in the same database: we have read-only and read/write shards to separate them. We need to serve our active online users quickly because we know they expect their photos to be instantaneously displayed and shareable.

Photos also has strict consistency and concurrency needs.
That's not surprising when you consider the variety of first- and third-party clients that upload media, processing pipelines performing updates, and various feature needs, many of which involve cross-user sharing. It's Spanner's high write throughput, consistency guarantees, and resource management tools that have allowed Photos to build and scale these features and pipelines by 10x with minimal re-architecture. Our use of Spanner has proven Spanner's ability to scale rapidly without compromise — something rare in traditional, vertically scalable SQL databases.

Equally as important, Spanner has significantly increased our operational efficiency. We now save a lot of time and energy on tactical placement, location distribution, redundancy, and backup management. Replica management is a simple matter of configuration management, and we rely on Spanner to manage the changes. In addition, automated index verifications, automatic sharding, and guaranteed data consistency across all regions save us a lot of manual work.

Trust paints the whole picture

Our users entrust us with their private and precious data, and we take that responsibility very seriously. Privacy, security, and safety are incredibly important to Google Photos — they are core principles that are considered in every feature and user experience that we build. Spanner's secure access controls help significantly by eliminating unilateral data access, managing the risk of internal or external data breaches, and ensuring that data privacy is respected throughout our backend.

Reliability and trust are the cornerstones of Google Photos. It's critical that users can access their data whenever they want it, and that fundamental product features like backup and sharing remain highly available even during peak load (holidays, for example). The Photos team continues to heavily focus on reliability improvements to ensure that we're delivering the experience that our users have come to expect from Google. Thanks to Spanner's ongoing investment in this area, Photos has been able to continuously raise this bar — which is particularly notable given Photos' own rapid growth rate. Running multiple replicas is a key aspect of how our system runs reliably, and Spanner's strong external consistency features and continuous index verifications ensure that data remains correct. In addition, Spanner offers robust backup and recovery systems, which give us even more confidence that our datastores will remain correct and complete.

Picture perfect

The numbers speak for themselves. Spanner supports a staggering amount of traffic across many regions, over a billion users, and metadata for more than four trillion images. We've already experienced 10x growth since launching our Spanner database, and we're confident that Spanner can support another 10-fold increase in the future. Going forward, we're confident in Spanner's robust, easy-to-use nature to help us scale to the next billion users and drive even more incredible experiences for our users.

Learn more

Read the blog "SIGOPS Hall of Fame goes to Spanner paper — here's why that matters" by Chris Taylor, Distinguished Software Engineer, on the recent award and the evolution of Spanner. Learn more about Cloud Spanner and create a 90-day Spanner free trial instance.

Related Article: Evaluating the true cost or TCO of a database — and how Cloud Spanner compares. Cloud Spanner databases offer high performance at lower costs by providing a fully managed experience with unlimited scalability and high…
Source: Google Cloud Platform

Expanding 24/7 multilingual support: Now in Mandarin Chinese and Korean

At Google Cloud, we fully grasp that the need for technical support can arise at any hour. Further, the importance of communicating in a language you're comfortable with is paramount, particularly when dealing with urgent issues.

To help, we're expanding the current 8×5 support offering to 24×7 for P1 and P2 cases for Korean Enhanced and Premium Support customers, as well as for Chinese Enhanced customers, aligning with what we already provide for Chinese Premium Support customers today. With this, Premium and Enhanced Support customers will be able to reach out to us in Mandarin Chinese or Korean for urgent issues, regardless of the time or day.

For a comprehensive understanding of our Customer Care offerings, we invite you to visit cloud.google.com/support. In accordance with these additions, we have amended our Technical Support Services Guidelines to reflect the recent enhancements.

We're eager to engage with an increasing number of our customers in these important Google Cloud regions, offering support and solutions in a language they're comfortable with.
Source: Google Cloud Platform

Four steps to managing your Cloud Logging costs on a budget

As part of our ongoing series on cost management for observability data in Google Cloud, we're going to share four steps for getting the most out of your logs while on a budget. While we'll focus on optimizing your costs within Google Cloud, we've found that this approach also works for customers with infrastructure and logs on-prem and in other clouds.

Step 1: Analyze your current spending on logging tools

To get started, create an itemized list of what volume of data is going where and what it costs. We'll start with the billing report and the obvious line items, including those under Operations Tools / Cloud Logging:

- Log Volume: the cost to write log data to disk once (see our previous blog post for an explanation)
- Log Storage Volume: the cost to retain logs for more than 30 days

If you're using tools outside Cloud Logging, you'll also need to include any costs related to those solutions. Here's a list to get you started:

- Log vendor and hardware costs: what are you paying to observability vendors? If you're running your own logging solution, you'll want to include the cost of compute and disk.
- If you export logs within Google Cloud, include Cloud Storage and BigQuery costs.
- Processing costs: consider the costs for Kafka, Pub/Sub or Dataflow to process logs. Network egress charges may apply if you're moving logs outside Google Cloud.
- Engineering resources dedicated to managing your logging tools across your enterprise are often significant too!

Step 2: Eliminate waste — don't pay for logs you don't need

While not all costs scale directly with volume, optimizing your log volume is often the best way to reduce spend. Even if you are using a vendor with a contract that locks you into a fixed price for a period of time, you may still have costs in your pipeline, such as Kafka, Pub/Sub or Dataflow costs, that can be reduced by avoiding wasteful logs.

Finding chatty logs in Google Cloud

The easiest way to understand which sources are generating the highest volume of logs within Google Cloud is to start with our pre-built dashboards in Cloud Monitoring. To access the available dashboards:

1. Go to Monitoring -> Dashboards
2. Select "Sample Library" -> "Logging"

This blog post has some specific recommendations for optimizing logs for GKE and GCE using prebuilt dashboards.

As a second option, you can use Metrics Explorer and system metrics to analyze the volume of logs. For example, type "log bytes ingested" into the filter; this specific metric corresponds to the Cloud Logging "Log Volume" charge. There are many ways to filter this data. To get the big picture, we often start by grouping by both "resource_type" and "project_id". To narrow down the resource type in a particular project, add a "project_id" filter. Under Advanced Options, click on Aligner and select "sum", then sort by volume to see the resources with the highest log volume.

While these rich metrics are great for understanding volumes, you'll probably want to eventually look at the logs themselves to see whether they're critical to your observability strategy. In Logs Explorer, the log fields on the left side help you understand volumes and filter logs from a resource type.

Reducing log volume with the Logs Router

Now that we understand which types of logs are expensive, we can use the Log Router and our sink definitions to reduce these volumes.
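As a concrete illustration of the exclusion filters discussed below, the following sketch creates an exclusion that drops roughly 90% of successful HTTP load balancer request logs before they are stored, using the sample() function in the filter. It assumes the generated config client in the google-cloud-logging Python library; verify the exact client, type, and method names against the library documentation, and note that you may prefer to attach the exclusion to a specific sink instead.

```python
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogExclusion

# Assumed client/type names from the google-cloud-logging GAPIC layer;
# check the library docs if they differ in your installed version.
client = ConfigServiceV2Client()

exclusion = LogExclusion(
    name="sample-lb-200s",
    description="Keep only ~10% of successful load balancer request logs",
    # sample(insertId, 0.9) matches ~90% of entries, which this exclusion drops.
    filter=(
        'resource.type="http_load_balancer" '
        "AND httpRequest.status=200 "
        "AND sample(insertId, 0.9)"
    ),
)

client.create_exclusion(
    parent="projects/YOUR_PROJECT_ID",  # placeholder project
    exclusion=exclusion,
)
```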
Your strategy will depend on your observability goals, but here are some general approaches we've found to work well.

The most obvious way to reduce your log volume is not to send the same logs to multiple storage destinations. One common example is when a central security team uses an aggregated log sink to centralize its audit logs, but individual projects still ingest those same logs. Instead, use exclusion filters on the _Default log sink and any other log sinks in each project to avoid ingesting these logs twice. Exclusion filters also work on log sinks to BigQuery, Pub/Sub, or Cloud Storage.

Similarly, if you're paying to store logs in an external log management tool, you don't have to save those same logs to Cloud Logging. We recommend keeping a small set of system logs from Google Cloud services such as GKE in Cloud Logging in case you need assistance from Google Cloud support, but what you store is up to you, and you can still export logs to the destination of your choice.

Another powerful way to reduce log volume is to sample a percentage of chatty logs. This can be particularly useful with 2XX load balancer logs, for example. We recommend you design a sampling strategy based on your usage, security, and compliance requirements, and document it clearly.

Step 3: Optimize costs over the lifecycle of your logs

Another option to reduce costs is to avoid storing logs for longer than you need them. Cloud Logging charges based on the log volume retained each month. There's no need to switch between hot and cold storage in Cloud Logging; doubling the default amount of retention only increases the cost by 2%, and you can change your custom log retention at any time. If you are storing your logs outside of Cloud Logging, it is a good idea to compare retention costs before deciding where to keep them.

Step 4: Set up alerts to avoid surprise bills

Once you are confident that the volume of logs being routed through log sinks fits in your budget, set up alerts so that you can detect any spikes before you get a large bill. To alert based on the volume of logs ingested into Cloud Logging:

1. Go to the Logs-based metrics page.
2. Scroll down to the bottom of the page and click the three dots on "billing/bytes_ingested" under System-defined metrics.
3. Click "Create alert from metric".
4. Optionally, add filters (for example, on resource_id or project_id).
5. Select the logs-based metric for the alert policy.

You can also set up similar alerts on the volume for log sinks to Pub/Sub, BigQuery or Cloud Storage.

Conclusion

One final way to stretch your observability budget is to use more of Cloud Operations. We're always working to bring our customers the most value possible for their budget, such as our latest feature, Log Analytics, which adds querying capabilities and makes the same data available for analytics, reducing the need for data silos. Many small customers can operate entirely on our free tier. Larger customers have expressed their appreciation for the scalable Log Router functionality, available at no extra charge, that would otherwise require an expensive event store to process data. So it's no surprise that a 2022 IDC report showed that more than half of respondents surveyed stated that managing and monitoring tools from public cloud platforms provide more value compared to third-party tools. Get started with Cloud Logging and Monitoring today.
Source: Google Cloud Platform

How an open data cloud is enabling Airports of Thailand and EVme to reshape the future of travel

Aviation and accommodation play a big role in the tourism economy, but analysis of recent data also highlights tourism's impact on other sectors, from financial services to healthcare, retail, and transportation. With travel recovery in full swing post-pandemic, Google search queries related to "travel insurance" and "medical tourism" in Thailand have increased by more than 900% and 500% respectively. Financial institutions and healthcare providers must therefore find ways to deliver tailored offerings to travelers who are seeking peace of mind from unexpected changes or visiting the country to receive specialized medical treatment.

Interest in visiting Thailand for "gastronomy tourism" is also growing, with online searches increasing by more than 110% year-on-year. Players in the food and beverage industry should therefore be looking at ways to better engage tourists keen on authentic Thai cuisine.

Most importantly, digital services will play an integral role in travel recovery. More than one in two consumers in Thailand are already using online travel services, with this category expected to grow 22% year-on-year and contribute US$9 billion to Thailand's digital economy by 2025. To seize growth opportunities amidst the country's tourism rebound, businesses cannot afford to overlook the importance of offering always-on, simple, personalized, and secure digital services.

That is why Airports of Thailand (AOT), SKY ICT (SKY) and EVME PLUS (EVme) are adopting Google Cloud's open data cloud to deliver sustainable, digital-first travel experiences.

Improving the passenger experience in the cloud

With Thailand reopening its borders, there has been an upturn in both inbound and outbound air travel. To accommodate these spikes in passenger traffic across its six international airports, AOT migrated its entire IT footprint to Google Cloud, which offers an open, scalable, and secure data platform, with implementation support from its partner SKY, an aviation technology solutions provider.

Tapping Google Cloud's dynamic autoscaling capabilities, the IT systems underpinning AOT's ground aviation services and the SAWASDEE by AOT app can now accommodate up to 10 times their usual workloads. AOT can also automatically scale down its resources to reduce costs when they are no longer in use. Using the database management services of Google Cloud to eliminate data silos, the organization is able to enhance its capacity to deliver real-time airport and flight information to millions of passengers. As a result, travelers enjoy a smoother passenger experience, from check-in to baggage collection.

At the same time, SKY uses Google Kubernetes Engine (GKE) to transform SAWASDEE by AOT into an essential, all-in-one travel app that offers a full range of tourism-related services. GKE allows AOT to automate application deployment and upgrades without causing downtime. This frees up time for the tech team to accelerate the launch of new in-app features, such as a baggage tracker service, airport loyalty programs, curated travel recommendations, an e-payment system, and more.

EVme drives sustainable travel with data

Being able to travel more efficiently is only one part of the future of travel. More than ever, sustainability is becoming a priority for consumers when they plan their travel itineraries.
For instance, search queries related to "sustainable tourism" in Thailand have increased by more than 200% in the past year, with close to four in 10 consumers sharing that they are willing to pay more for a sustainable product or service.

To meet this increasing demand and support Thailand's national efforts to become a low-carbon society, EVme, a subsidiary of PTT Group, is building its electric vehicle lifestyle app on Google Cloud, the industry's cleanest cloud. It has also deployed the advanced analytics and business intelligence tools of Google Cloud to offer its employees improved access to data-driven insights, which helps them better understand customer needs and deliver personalized interactions. These insights have helped EVme determine the range of electric vehicle models it offers for rental via its app, so as to cater to different preferences. At the same time, the app can also share crucial information, such as the availability of public electric vehicle charging stations, while providing timely support and 24-hour emergency assistance to customers.

As we empower organizations across industries with intelligent, data-driven capabilities to make smarter business decisions and be part of an integrated ecosystem that delivers world-class visitor experiences, our collaborations with AOT, SKY, and EVme will enhance their ability to serve travelers with personalized, digital-first offerings powered by our secure and scalable open data cloud.
Source: Google Cloud Platform

How to easily migrate your apps to containers — free deep dive and workshop

Just here for the event registration link? Click here.

Are you looking to migrate your applications to Google Cloud? Thinking about using containers for some of those apps? If so, you're in luck! Google Cloud is hosting a free workshop on May 24th, 2023, that will teach you everything you need to know about migrating your app to containers in Google Cloud. The workshop starts at 9AM PST and will be led by Google Cloud experts who will walk you through some of your migration options, the costs involved, and the security considerations. We'll also feature a hands-on lab so you can get familiar with some of the tools we use to achieve your migration goals. And we'll wrap up with a live Q&A, so you have the opportunity to ask questions of the experts and get your specific questions answered.

Whether you're a developer, a system administrator, or a business decision-maker, this workshop will give you the insights you need to make an informed decision about how to migrate your apps to Google Cloud. Click here to register for this free workshop. We hope to see you there!

Need a bit more info before you sign up? No problem. Let's chat about some of the benefits of migrating on-prem workloads to containers in Google Cloud:

- Wide range of container services: choose between Google Kubernetes Engine (GKE), Cloud Run, and Anthos, giving you the flexibility to pick the container service that best meets your needs.
- Global network infrastructure: with our global network of data centers, you can deploy containers close to your users. This improves performance and reduces latency.
- Tools and resources: there's a variety of tools and resources to help you manage and deploy your containers, including the Google Cloud console, the gcloud command-line tool, and the GKE dashboard.
- Commitment to security: Google Cloud takes security seriously, and our container services are built on a secure foundation. This includes features like role-based access control (RBAC), network policies, and encryption.

Still have questions? We've got answers, and we hope you'll join us for this free workshop on May 24th at 9AM PST. And if you can't wait until then, you can also check out our new whitepaper: The future of infrastructure will be containerized.

We hope to see you on the 24th!
Source: Google Cloud Platform

At Google I/O, generative AI gets to work

Over the past decade, artificial intelligence has evolved from experimental prototypes and early successes to mainstream enterprise use. And the recent advancements in generative AI have begun to change the way we create, connect, and collaborate. As Google CEO Sundar Pichai said in his keynote, every business and organization is thinking about how to drive transformation. That's why we're focused on making it easy and scalable for others to innovate with AI.

In March, we announced exciting new products that infuse generative AI into our Google Cloud offerings, empowering developers to responsibly build with enterprise-level safety, security, and privacy. They include Gen App Builder, which lets developers quickly and easily create generative chat and enterprise search applications, and Generative AI support in Vertex AI, which expands our machine learning development platform with access to foundation models from Google and others to quickly build, customize and deploy models. We also introduced our vision for Google Workspace, and delivered generative AI features to trusted testers in Gmail and Google Docs that help people write.

Last month we introduced Security AI Workbench, an industry-first extensible platform powered by our new LLM security model, Sec-PaLM, which incorporates Google's unique visibility into the evolving threat landscape and is fine-tuned for cybersecurity operations.

Today at Google I/O, we are excited to share the next steps not only in our own AI journey, but in those of our customers and partners as well. We've already seen a number of organizations begin to develop with and deploy our generative AI offerings. These organizations have been able to move their ideas from experimentation to enterprise-ready applications with the training models, security, compute infrastructure, and cost controls needed to provide their customers with transformative experiences. Our open ecosystem, which provides opportunities for every kind of partner, continues to grow as well. And we are also pleased to share new services and capabilities across Google Cloud and Workspace, including Duet AI, our AI-powered collaborator, to enable more users and developers to start seeing the impact AI can have on their organization.

Customers bringing ideas to life with generative AI

Leading companies in a variety of industries, like eDreams ODIGEO, GitLab, Oxbotica, and more, are using our generative AI technologies to create engaging content, synthesize and organize information, automate business processes, and build amazing customer experiences. A few examples we showcased today include:

Adore Me, a New York-based intimate apparel brand, is creating production-worthy copy with generative AI features in Docs and Gmail. This is accelerating projects and processes in ways that even surprised the company.

Canva, the visual communication platform, uses Google Cloud's rich generative AI capabilities in language translation to better support its non-English speaking users. Users can now easily translate presentations, posters, social media posts, and more into over a hundred languages. The company is also testing ways that Google's PaLM technology can turn short video clips into longer, more compelling stories. The result will be a more seamless design experience while growing the Canva brand.
Character.AI, a leading conversational AI platform, selected Google Cloud as its preferred cloud infrastructure provider because we offer the speed, security, and flexibility required to meet the needs of its rapidly growing community of creators. We are enabling Character.AI to train and infer LLMs faster and more efficiently, and enhancing the customer experience by inspiring imagination, discovery, and understanding.

Deutsche Bank is testing Google's generative AI and large language models (LLMs) at scale to provide new insights to financial analysts, driving operational efficiencies and execution velocity. There is an opportunity to significantly reduce the time it takes to perform banking operations and financial analysts' tasks, empowering employees by increasing their productivity while helping to safeguard customer data privacy, data integrity, and system security.

Instacart is always looking for opportunities to adopt the latest technological innovations, and by joining the Workspace Labs program, the company has access to the new features and can discover how generative AI will make an impact for its teams.

Orange is exploring a next-generation contact center with Google Cloud. With customers in 26 countries, the global telecommunications firm is testing generative AI to transcribe calls, summarize the exchange between the customer and service representatives, and suggest possible follow-up actions to the agent based on the discussion. This experiment has the potential to dramatically improve both the efficiency and quality of customer interactions. Orange is working closely with Google to help ensure data protection, and to make sure that systematic employee review of generative AI output and transparency can be implemented.

Replit is developing a collaborative software development platform powered by AI. Developers using Replit's Ghostwriter coding AI already have 30% of their code written by generative AI today. With real-time debugging of the code output and context awareness of the program's files, Ghostwriter frees up developers' time for more challenging and creative aspects of programming.

Uber is creating generative AI for customer-service chatbots and agent-assist capabilities, which handle a range of common service issues with human-like interactions, with the aim of achieving greater customer satisfaction and cost efficiency. Additionally, Uber is working on using our synthetic data systems (a technique for improving the quality of LLMs) in areas like product development, fraud detection, and employee productivity.

Wendy's is working with Google Cloud on a groundbreaking AI solution, Wendy's FreshAI, designed to revolutionize the quick service restaurant industry. The technology is transforming Wendy's drive-thru food ordering experience with Google Cloud's generative AI and LLMs, with the ability to discern the billions of possible order combinations on the Wendy's menu. In June, Wendy's plans to launch its first pilot of the technology in a Columbus, Ohio-area restaurant, before expanding to more drive-thru locations.

[Image: Leading companies build with generative AI on Google Cloud]

Partnering creates a strong ecosystem of real-world options for customers

At Google Cloud, we are dedicated to being the most open hyperscale cloud provider, and that includes our AI ecosystem.
Today, we are excited to expand upon the partnerships announced earlier this year for every layer of the AI stack: chipmakers, companies building foundation models and AI platforms, technology partners enabling companies to develop and deploy machine learning (ML) models, app builders solving customer use cases with generative AI, and global services and consulting firms that help enterprise customers implement all of this technology at scale. We announced new or expanded partnerships with SaaS companies like Box, Dialpad, Jasper, Salesforce, and UKG, and consultancies including Accenture, BCG, Cognizant, Deloitte, and KPMG. Together with our previous announcements with companies like AI21 Labs, Aible, Anthropic, Anyscale, Bending Spoons, Cohere, Faraday, Glean, Gretel, Labelbox, Midjourney, Osmo, Replit, Snorkel AI, Tabnine, Weights & Biases, and many more, they provide a wide range of options for businesses and governments looking to bring generative AI into their organizations.

Introducing new generative AI capabilities for Google Cloud

To help cloud users of all skill levels solve their everyday work challenges, we're excited to announce Duet AI for Google Cloud, a new generative AI-powered collaborator. Duet AI serves as your expert pair programmer and assists cloud users with contextual code completion, offering suggestions tuned to your code base, generating entire functions in real time, and assisting you with code reviews and inspections. It can fundamentally transform the way cloud users of all skill sets build new experiences, and is embedded across Google Cloud interfaces — within the integrated development environment (IDE), Google Cloud Console, and even chat.

For developers looking to create generative AI applications more simply and efficiently, we are also introducing new foundation models and capabilities across our Google Cloud AI products. And to continue to enable and inspire more customers and partners, we are opening up generative AI support in Vertex AI and expanding access to many of these new innovations to more organizations.

New foundation models are now available in Vertex AI. Codey, our code generation foundation model, helps accelerate software development with code generation, code completion, and code chat. Imagen, our text-to-image foundation model, lets customers generate and customize studio-grade images. And Chirp, our state-of-the-art speech model, allows customers to more deeply engage with their customers and constituents inclusively in their native languages with captioning and voice assistance. They can each be accessed via APIs, tuned through our intuitive Generative AI Studio, and feature enterprise-grade security and reliability, including encryption, access control, content moderation, and recitation capabilities that let organizations see the sources behind model outputs.

Text Embeddings API is a new API endpoint that lets developers build recommendation engines, classifiers, question-answering systems, similarity matching, and other sophisticated applications based on semantic understanding of text or images. Reinforcement Learning from Human Feedback (RLHF) allows organizations to incorporate human feedback to deeply customize and improve model performance.

Underpinning all of these innovations is our AI-optimized infrastructure. We provide the widest choice of compute options among leading cloud providers, and are excited to continue to build them out with the introduction of new A3 Virtual Machines based on NVIDIA's H100 GPU.
These VMs, alongside the recently announced G2 VMs, offer a comprehensive range of GPU power for training and serving AI models.

Extending generative AI across Google Workspace

Earlier this year, we shared our vision for bringing generative AI to Workspace, and gave many users early access to features that helped them write in Gmail and Google Docs. Today, we are excited to announce Duet AI for Google Workspace, which brings together our powerful generative AI features and lets users collaborate with AI so they can get more done every day. We're delivering the following features to trusted testers via Workspace Labs:

- In Gmail, we're adding the ability to draft responses that consider the context of your existing email thread, and making the experience available on mobile.
- In Google Slides and Meet, we're enabling you to easily generate images from text descriptions. Custom images in slides can help bring your story to life, and in Meet they can be used to create custom backgrounds.
- In Google Sheets, we're automating data classification and the creation of custom plans, helping you analyze and organize data faster than ever.

Moving the industry forward, responsibly

Customers continue to amaze us with their ideas and creativity, and we look forward to continuing to help them discover their own paths forward with generative AI. While the potential for impact on business is great, we remain committed to taking a responsible approach, guided by our AI Principles. As we gather more feedback from our customers and users, we will continue to bring new innovations to market, with the goal of enabling organizations of every size and industry to increase efficiency, connect with customers in new ways, and unlock entirely new revenue streams.
Source: Google Cloud Platform

Dialing up the impact of digital natives in the MENA region with Google Cloud

Entrepreneurs with the passion to drive positive impact have been selecting the Middle East and North Africa (MENA) region as the launchpad for their businesses since 2015, based on insights from Google Cloud's digital natives unit. Today, the region is taking center stage thanks to the thousands of startups, digital natives, and web3 companies thriving there. With more than 5,500 technology startups in the region, an amicable business climate, ample access to venture capital, and digital transformation a top priority on government agendas, digital natives in the region have been on the rise.

Forbes recently announced the Top 50 most funded startups in the MENA region. Collectively, these companies raised a whopping USD 3.2 billion in 2022, with startups in the region continuing to attract significant funding in comparison to other regions. The list also highlighted that UAE-based companies were the most represented, raising USD 964 million in total funding for that year, followed by the Kingdom of Saudi Arabia (KSA), where USD 946.7 million was raised, and Egypt in third place with USD 508.5 million.

On the list, UAE-based fintech Tabby ranked second with USD 275 million in funds, and Sary, a Saudi-based online marketplace, came in seventh for securing USD 112 million. Breadfast, an on-demand supermarket and household essentials provider based in Egypt, also secured USD 26 million in funds during the same year.

Tech-enabled success for digital natives in the MENA region

The common factor between companies such as Tabby, Sary and Breadfast is that they are all fully tech-enabled businesses running on Google Cloud. These three companies leverage Google Cloud's scalable, secure and reliable platform, and innovative cloud solutions, to create seamless experiences every day for their customers across KSA, the United Arab Emirates (UAE), Egypt, Kuwait, and Pakistan.

Tabby provides "buy now, pay later" solutions via an online application that has been built on Google Cloud from day one. Tabby has successfully grown a customer base of 2.5 million active shoppers in the region since its start in 2019, with the support of the scalability provided by Google Cloud, which enables uninterrupted and secure financial services for customers. With an online retail boom on the horizon for the MENA region, Tabby is poised for a growth trajectory as the volume of active e-shoppers continues to rise and more markets become activated in the region's digital economy. Tabby's development team is able to stay several strides ahead of market demand by developing a seamless and innovative product that can accommodate an average of 10 million shoppers per day. By running the entire IT infrastructure on Google Cloud, the team dedicates its time and resources to what is important to the business: providing a product that caters to customer and market requirements, rather than exhausting resources on time-consuming tasks such as the daily management of IT assets.

Tabby also believes in the power of big data and turns to Google Cloud's data analytics solutions such as BigQuery to roll out new monetary policies for customers. Before a new credit policy is introduced to shoppers, Tabby tests its viability on BigQuery and analyzes different implementation scenarios in real time to test its effectiveness.
Throughout the year, the MENA region experiences peaks in shopping cycles connected with local festivities such as the holy month of Ramadan, White Friday, and Christmas. It is around these high-peak shopping periods that Tabby’s application sees significant spikes, with the team handling 140 million requests per day compared to 80 million requests on a regular day. Nonetheless, with the support of Google Cloud’s scalable infrastructure, Tabby holds a record of zero downtime during peak periods and can scale operations successfully with low latency, ultimately delivering excellent service to customers.

“From the first day Tabby went live in 2019 to date, we have experienced zero downtime in our systems during high-traffic periods because of Google Cloud’s scalable and flexible infrastructure. We are able to support 2.5 million shoppers across the Middle East because we run on a robust and reliable infrastructure. Scalability is key for the team at Tabby. We are able to build new products very quickly on Google Cloud in comparison to other cloud providers.”

A report by eCommerce DB revealed that Saudi Arabia is the 27th largest e-commerce market globally, with projected revenue of USD 11,977 million by the end of 2023. Mordor Intelligence also reported that the Saudi e-commerce market is expected to grow at a compound annual growth rate (CAGR 2023-2027) of 13.9%, resulting in a projected market volume of USD 20,155.8 million by 2027. Enter Sary, a Saudi-based B2B marketplace that connects businesses of all sizes to millions of shoppers in Saudi Arabia, Egypt, and Pakistan via mobile and web applications. Sary is not a typical marketplace: it aims to support local businesses and empower homegrown names to reach customers at scale via its platform in the countries where it operates.

Sary is home to 70,000 businesses from all walks of life, and as the company set out to expand its footprint, it was time to move away from an unsophisticated cloud setup to a more advanced and robust cloud provider offering the security and scalability to support its plans to tap into new markets. Sary attributes a big part of its success to running a robust infrastructure on Google Cloud, having seen an 84% increase in operational system throughput since migrating its entire IT infrastructure. This means that businesses relying on the platform as their main marketplace can process orders at scale without downtime or system interruptions and generate positive revenue streams. Sary also leverages Google Kubernetes Engine (GKE) to automatically scale capacity based on the volume of traffic the website or application receives, helping the company manage IT costs effectively while still delivering an uncompromised service to customers.

“The support we receive every day from the Google Cloud team has been phenomenal. They have been with us every step of the way. We are able to free up time to focus on what is important, and that is to deliver business value to our customers who depend on Sary for their success.”

Egypt is another country that has risen as a strategic player in the MENA digital natives scene in recent years. The 2022 Egypt Venture Investment Report revealed that the startup ecosystem saw a 168% year-on-year increase in capital investments, reaching a new all-time high of USD 491 million.
Breadfast is one of the companies disrupting the scene in Egypt, having adopted a cloud-native supply chain before the arrival of rapid online grocery delivery companies in the country. Now a household name, Breadfast is a cloud-native, on-demand supermarket and household essentials provider that delivers to over 200,000 homes in Cairo. The team at Breadfast built a fully tech-enabled business across all operational touchpoints, comprising manufacturing facilities, supply fulfillment points, 30 dark stores, 15 specialized coffee outlets, and last-mile delivery.

Running a tech-driven business generates additional costs that can be optimized when working with a cloud provider. Since Breadfast migrated its entire IT infrastructure to Google Cloud in 2022, the company has become more profitable, reducing operating costs by 35% while improving system throughput with the support of Google Cloud’s scalable and secure infrastructure. To fulfill its brand promise of product delivery within 60 minutes anywhere in Cairo, Breadfast also relies on Google Cloud’s resilient infrastructure, which delivers efficient operational throughput and ensures no interruptions affect server health or delay order processing. Breadfast has increased system uptime to 99.5% since migrating to Google Cloud and delivered six million orders across the city in 2022, each within a span of 30 minutes.

“In our line of business, time is of the essence. Two minutes of downtime in our systems takes 12 hours to fix on the ground, which can have a downward impact on our customers. We decided to migrate our IT infrastructure to Google Cloud as the trusted cloud provider because of its resilience, and operational uptime has been at 99.5% ever since we made the move. This enabled Breadfast to deliver millions of orders in 2022.”

Build your business with Google Cloud

Google Cloud has opened up its secure and scalable infrastructure to businesses in the Middle East and North Africa region, where artificial intelligence (AI) and machine learning (ML) are embedded in cloud solutions that bring meaning to data and can help automate almost everything. Google Cloud also gives digital natives the freedom to run applications where they need them with open, hybrid, and multi-cloud solutions. This way, an application is built once and can run anywhere, even on-premises.

With minimal configuration, digital natives can access and analyze data at scale with Google Cloud solutions such as BigQuery and Looker. These data analytics solutions serve as a single source of truth and rely on AI and ML to provide a deep understanding of customer data. Powered by this data-driven understanding of customers, businesses can preempt customer trends and bring them the right products and solutions based on their needs. Businesses can also accurately track granular information, such as whether a driver delivered an order on time or which item needs to be restocked in a warehouse.

Google Cloud provides data loss prevention solutions that help digital natives encrypt critical data such as customer information and financial records. Businesses can also discover, classify, and protect their most sensitive data, and detect customer churn or fraudulent activity using machine learning capabilities embedded in BigQuery.
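As an illustration of the kind of in-database machine learning described above, the sketch below trains and applies a simple churn classifier with BigQuery ML. It is a minimal example under assumed names: the dataset, tables, label column (churned), and feature columns are hypothetical placeholders, not any customer’s real schema.

-- Hypothetical: train a logistic regression churn model directly in BigQuery.
CREATE OR REPLACE MODEL `my-project.analytics.churn_model`
OPTIONS (
  model_type = 'logistic_reg',
  input_label_cols = ['churned']   -- assumed label: 1 if the customer churned
) AS
SELECT
  orders_last_90d,                 -- assumed feature columns
  days_since_last_order,
  avg_basket_value,
  churned
FROM
  `my-project.analytics.customer_features`;

-- Score customers with the trained model; predicted_churned is the class the
-- model assigns to each row.
SELECT
  customer_id,
  predicted_churned
FROM
  ML.PREDICT(MODEL `my-project.analytics.churn_model`,
             TABLE `my-project.analytics.customer_features`);

The same pattern can extend to fraud detection: swap in a different label and feature set, and the model can be retrained and queried without moving data out of BigQuery.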
To help entrepreneurs in the MENA region supercharge business growth, Google Cloud runs the Google for Startups Cloud Program, which offers access to startup experts, cloud cost coverage of up to USD 100,000 for each of the first two years, technical training, business support, and Google-wide offers. Sign up here for the program.

Note: All customer metrics featured in this blog post were derived from direct customer interviews with Google Cloud.
Source: Google Cloud Platform