Ad agencies choose BigQuery to drive campaign performance

Advertising agencies face the challenge of providing the precision data that marketers require to make better decisions at a time when customers' digital footprints are rapidly changing. They need to transform customer information and real-time data into actionable insights that tell clients what to execute to ensure the highest campaign performance. In this post, we'll explore how two of our advertising agency customers are turning to Google BigQuery to innovate, succeed, and meet the next generation of digital advertising head on.

Net Conversion eliminated legacy toil to reach new heights

Paid marketing and comprehensive analytics agency Net Conversion has made a name for itself with its relentless attitude and data-driven mindset. But like many agencies, Net Conversion felt limited by traditional data management and reporting practices. A few years ago, Net Conversion was still using legacy data servers to mine and process data across the organization, and analysts relied heavily on Microsoft Excel spreadsheets to generate reports. The process was lengthy, fragmented, and slow, especially when spreadsheets exceeded the million-row limit.

To transform, Net Conversion built Conversionomics, a serverless platform that leverages BigQuery, Google Cloud's enterprise data warehouse, to centralize all of its data and handle all of its data transformation and ETL processes. BigQuery was selected for its serverless architecture, high scalability, and integration with tools that analysts were already using daily, such as Google Ads, Google Analytics, and Data Hub. After moving to BigQuery, Net Conversion discovered surprising benefits that streamlined reporting processes beyond initial expectations. For instance, many analysts had started using Google Sheets for reports, and BigQuery's native integration with Connected Sheets gave them the power to analyze billions of rows of data and generate visualizations right where they were already working.

"If you're still sending Excel files that are larger than 1MB, you should explore Google Cloud." – Kenneth Eisinger, Manager of Paid Media Analytics at Net Conversion

Since modernizing its data analytics stack, Net Conversion has saved countless hours that can now be spent on taking insights to the next level. Plus, BigQuery's advanced data analytics capabilities and robust integrations have opened up new roads to offer more dynamic insights that help clients better understand their audience.

For instance, Net Conversion recently helped a large grocery retailer launch a more targeted campaign that significantly increased downloads of its mobile application. The agency was able to better understand and predict customers' needs by analyzing buyer behavior across the website, the mobile application, and purchase history. Net Conversion analyzed website data in real time with BigQuery, ran analytics on mobile app data through Firebase's integration with BigQuery, and enriched these insights with sales information from the grocery retailer's CRM to generate propensity models that accurately predicted which customers were most likely to install the mobile app.
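The post doesn't share Net Conversion's actual queries, but as a rough sketch of how a propensity model like this can be built where the data already lives, BigQuery ML lets you train a logistic-regression classifier directly over joined web, Firebase app, and CRM tables. Every project, dataset, table, and column name below is a hypothetical placeholder.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-agency-project")  # hypothetical project ID

# Train a logistic-regression propensity model in place with BigQuery ML.
# Tables and columns are illustrative stand-ins for web, app, and CRM data.
train_model = """
CREATE OR REPLACE MODEL `my-agency-project.retail_demo.app_install_propensity`
OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['installed_app']) AS
SELECT
  w.sessions_last_30d,
  w.grocery_pages_viewed,
  c.orders_last_90d,
  c.loyalty_member,
  a.installed_app
FROM `my-agency-project.retail_demo.web_analytics` AS w
JOIN `my-agency-project.retail_demo.crm_sales` AS c USING (customer_id)
JOIN `my-agency-project.retail_demo.firebase_app_events` AS a USING (customer_id)
"""
client.query(train_model).result()  # blocks until the model is trained

# Score known customers; the predicted label flags likely app installers.
score = """
SELECT customer_id, predicted_installed_app
FROM ML.PREDICT(
  MODEL `my-agency-project.retail_demo.app_install_propensity`,
  (SELECT customer_id, sessions_last_30d, grocery_pages_viewed,
          orders_last_90d, loyalty_member
   FROM `my-agency-project.retail_demo.customer_features`))
"""
for row in client.query(score).result():
    print(row.customer_id, row.predicted_installed_app)
```

The point of the pattern is that the web, app, and CRM data never leave BigQuery; the model trains and scores next to the data the campaign team already queries.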
WITHIN helped companies weather the COVID storm

WITHIN is a performance branding company focused on helping brands maximize growth by fusing marketing and business goals together in a single funnel.

During the COVID-19 health crisis, WITHIN became an innovator in the ad agency world by sharing real-time trends and insights with customers through its Marketing Pulse Dashboard. The dashboard was part of the company's path to adopting BigQuery for data analytics transformation. Prior to using BigQuery, WITHIN used a PostgreSQL database to house its data and relied on manual reporting. Not only was the team responsible for managing and maintaining the server, which took focus away from data analytics, but query latency issues often slowed them down. BigQuery's serverless architecture, blazing-fast compute, and rich ecosystem of integrations with other Google Cloud and partner solutions made it possible to query rapidly, automate reporting, and completely get rid of CSV files.

Using BigQuery, WITHIN is able to run customer lifetime value (LTV) analytics and quickly share the insights with clients in a collaborative Google Sheet. To improve the effectiveness of campaigns across marketing channels, WITHIN further segments the data into high- and low-LTV cohorts and shares the predictive insights with clients for in-platform optimizations.

By distilling these LTV insights from BigQuery, WITHIN has been able to power its clients' campaigns on Google Ads, with a few notable success stories:

- WITHIN worked with a pet food company to analyze historical transactional data and model the predicted LTV of new customers. The team found significant differences between product categories and between autoship and single-order customers, and implemented LTV-based optimization. As a result, average customer LTV increased 400%.
- WITHIN helped a coffee brand increase its customer base by 560%, with the projected 12-month LTV of newly acquired customers jumping a staggering 1,280%.

Through integration with Google AI Platform Notebooks, BigQuery also advanced WITHIN's ability to use machine learning (ML) models. Today, the team can build and deploy models to predict dedicated campaign impact across channels without moving the data. The integration of clients' LTV data through Google Ads has also shaped how WITHIN structures clients' accounts and how it makes performance optimization decisions.

Now, WITHIN can capitalize on the entire data lifecycle: ingesting data from multiple sources into BigQuery, running data analytics, and empowering people with data by automatically visualizing it right in Google Data Studio or Google Sheets.

"A year ago, we delivered client reporting once a week. Now, it's daily. Customers can view real-time campaign performance in Data Studio — all they have to do is refresh." – Evan Vaughan, Head of Data Science at WITHIN

Having a consistent nomenclature and being able to stitch together a unified code name has allowed WITHIN to scale its analytics. Today, WITHIN is able to create an internal Media Mix Modeling (MMM) tool with the help of Google Cloud that it is trialing with clients.

The overall unseen benefit of BigQuery was that it put WITHIN in a position to remain nimble and spot trends before other agencies when COVID-19 hit. This aggregated view of data allowed WITHIN to provide unique insights to serve customers better and advise them on rapidly evolving conditions.

Ready to modernize your data analytics?
Learn more about how Google BigQuery unlocks the insights hidden in your data.

Related article: Query BIG with BigQuery: A cheat sheet
Source: Google Cloud Platform

To the cloud and beyond! Migration Enablement with Google Cloud’s Professional Services Organization

Google Cloud's Professional Services Organization (PSO) engages with customers to ensure effective and efficient operations in the cloud, from the time they begin considering how cloud can help them overcome operational, business, or technical challenges, to the time they're looking to optimize their cloud workloads. We know that all parts of the cloud journey are important and can be complex. In this blog post, we want to focus specifically on the migration process and how PSO engages in a myriad of activities to ensure a successful migration.

As a team of trusted technical advisors, PSO approaches migrations in three phases:

- Pre-Migration Planning
- Cutover Activities
- Post-Migration Operations

While this post will not cover in detail all of the steps required for a migration, it will focus on how PSO engages in specific activities to meet customer objectives, manage risk, and deliver value. We will discuss the assets, processes, and tools that we leverage to ensure success.

Pre-Migration Planning

Assess Scope

Before the migration happens, you will need to understand and clarify the future state that you're working towards. From a logistical perspective, PSO will help you with capacity planning to ensure sufficient resources are available for your envisioned future state.

While migration into the cloud does allow you to eliminate many of the physical, logistical, and financial concerns of traditional data centers and co-locations, it does not remove the need for active management of quotas, preparation for large migrations, and forecasting. PSO will help you forecast your needs in advance and work with the capacity team to adjust quotas, manage resources, and ensure availability.

Once the future state has been determined, PSO will also work with the product teams to determine any gaps in functionality. PSO captures feature requests across Google Cloud services and makes sure they are understood, logged, tracked, and prioritized appropriately with the relevant product teams. From there, they work closely with the customer to determine any interim workarounds that can be leveraged while waiting for the feature to land, as well as providing updates on the upcoming roadmap.

Develop Migration Approach and Tooling

Within Google Cloud, we have a library of assets and tools we use to assist in the migration process. We have seen these assets help us complete migrations for other customers efficiently and effectively.

Based on the scoping requirements and the tooling available to assist in the migration, PSO will help recommend a migration approach. We understand that enterprises have specific needs: differing levels of complexity and scale, and regulatory, operational, or organizational challenges that need to be factored into the migration. PSO will help customers think through the different migration options and how all of these considerations will play out.

PSO will work with the customer team to determine the best approach for moving servers from on-premises to Google Cloud, walking customers through options such as refactoring, lift-and-shift, or new installs. From there, the customer can determine the best fit for their migration. PSO will provide guidance on best practices and use cases from other customers with similar needs. Google offers a variety of cloud-native tools that can assist with asset discovery, the migration itself, and post-migration optimization.
As one example, PSO will work with project managers to determine the tooling that best accommodates the customer's requirements for migrating servers. PSO will also engage the Google product teams to ensure the customer fully understands the capabilities of each tool and the best fit for the use case. Google understands that, from a tooling perspective, one size does not fit all, so PSO will work with the customer to determine the best migration approach and tooling for different requirements.

Cutover Activities

Once all of the planning activities have been completed, PSO will assist in making sure the cutover is successful.

During and leading up to critical customer events, PSO can provide proactive event management services, which deliver increased support and readiness for key workloads. Beyond having a solid architecture and infrastructure on the platform, support for this infrastructure is essential, and Technical Account Managers (TAMs) will help ensure that there are additional resources to support and unblock the customer when challenges arise.

As part of event management activities, PSO liaises with the Google Cloud Support organization to ensure quick remediation and high resilience in situations where challenges arise. A war room is usually created to facilitate quick communication about the critical activities and roadblocks that come up. These war rooms give customers a direct line to the support and engineering teams that will triage and resolve their issues.

Post-Migration Activities

Once cutover is complete, PSO will continue to provide support in areas such as incident management, capacity planning, continuous operational support, and optimization to ensure the customer is successful from start to finish.

PSO will serve as the liaison between the customer and Google engineers. If support cases need to be escalated, PSO will ensure the appropriate parties are involved and work to get the case resolved in a timely manner.

Through operational rigor, PSO will work with the customer to determine whether certain Google Cloud services will benefit the customer's objectives. If services will add value, PSO will help enable them so they align with the customer's goals and current cloud architecture. Where there are gaps in services, PSO will proactively work with the customer and Google engineering teams to close them by enabling additional functionality. PSO will continue to work with the engineering teams to review the customer's cloud architecture and provide recommendations that ensure an optimal, cost-efficient design that adheres to Google's best practices guidelines.

Aside from migrations, PSO is also responsible for providing continuous Google Cloud training to customers. PSO will work with the customer to jointly develop a learning roadmap that ensures the customer has the necessary skills to deliver successful projects on Google Cloud.

Conclusion

Google PSO will be actively engaged throughout the customer's cloud journey to ensure the necessary guidance, methodology, and tools are presented to the customer. PSO engages in a series of activities from pre-migration planning to post-migration, in key areas ranging from capacity planning, to ensure sufficient resources are allocated for future workloads, to support on technical cases for troubleshooting.
PSO will serve as a long-term trusted advisor, acting as the voice of the customer and helping ensure the reliability and stability of the customer's Google Cloud environment.

Click here if you'd like to engage with our PSO team on your migration. Or, you can get started with a free discovery and assessment of your current IT landscape.

Reference material:
- Migration service kit
- Migration trip reports
Source: Google Cloud Platform

Reimagine what’s possible with Google Cloud for human services and labor

The COVID-19 pandemic tested our nation's public benefits system in unimaginable ways. With an unprecedented 60 million individuals turning to unemployment and social services to satisfy their basic needs, state and local governments were stretched to meet the demand.

But state and local government leaders have risen to the challenge of providing their constituents with critical services. They created and administered innovative solutions, complete with wholly original processes, and brought desperately needed employment, cash, food, and healthcare support to families in crisis. These public sector heroes raised the bar during this critical time and facilitated economic recovery for our local communities, making the impossible possible for their people. When state and local government agencies partner with Google Cloud, they can reimagine how they deliver human services and labor services for their communities, all in a matter of weeks instead of months or years.

Tools that solve for remote work and service delivery hurdles

Never before was there such a need for modern, accessible tools that allow a workforce to secure new jobs and operate remotely. The Virtual Career Center (VCC), built on Google Cloud, was designed with these needs in mind. Google Workspace, including Google Meet, allows job seekers to schedule video meetings with career coaches, job recruiters, and potential employers. The Google Job Search API enables them to explore career opportunities best suited to their skills and interests.

Meanwhile, the Rhode Island Virtual Career Center was helping its job seekers by offering virtual meetings with career coaches, the ability to schedule meetings with prospective employers, and help building effective resumes. Skipper, the CareerCompass RI bot, an intelligent agent for careers, uses data and machine learning to suggest potential new career paths and reskilling opportunities for Rhode Islanders.

In addition to enabling remote work, Google Workspace can support remote service delivery, a critical need for those involved in health and human services programs. Engaging with individuals and families is an essential component of many of these programs, including eligibility determination, assessment, and service delivery. The ability to engage directly with the public through virtual interviews, counseling sessions, telehealth, and more improves access for those on both sides of the screen. Further, through Google Classroom, a component of Google Workspace for Education, foster and adoptive parents can obtain the virtual training required for licensing.

Tools to expedite relief funds and employment assistance

As the number of people applying for unemployment assistance skyrocketed, so did the backlog of claims requiring review. The pandemic also saw a rise in fraudulent claims, which caused delays for families waiting on legitimate payments and resulted in unnecessary spending for government agencies. Realizing the magnitude of the challenge was beyond their current resources, agencies shifted their focus to machine learning.

To address the massive backlog of claims, SpringML partnered with Google Cloud to develop the Improper Payment Analytics solution, which leverages AI to help agencies identify fraudulent claims and avoid improper payments so aid can be prioritized for those who need it. Though such a tool would typically take months to develop, the team was able to launch it in just weeks, to the benefit of all stakeholders involved.
Families awaiting legitimate payments received checks more quickly, and state and local governments saved millions in spending.

For instance, to address erroneous payments totaling $330 million to fraudulent applicants, the State of Ohio partnered with Google Cloud to use artificial intelligence and machine learning solutions to proactively identify and decline improper payment risks. As a result, the State avoided paying fraudulent claims and accelerated the payment of legitimate benefits to families facing financial hardship.

The New York State Department of Labor launched a streamlined unemployment application to allow residents to apply for pandemic unemployment assistance without the added burden of applying for unemployment insurance. Success was immediate, and the application backlog plummeted as New Yorkers got the financial aid they desperately needed.

Illinois took a different approach, deploying Contact Center Artificial Intelligence (CCAI) to create virtual agents that assist with specialized calls 24/7, in multiple languages, providing turn-by-turn guidance in real time. Conversations can be turned into insights through analytics and reporting tools that uncover key call drivers and customer sentiment. By solving for spikes in call volume, CCAI has helped process more than 1 million unemployment claims.

Automated data solutions bring immediate food and cash assistance resources

With millions out of work and out of school during the pandemic, food and cash assistance became a critical need. Applying for these benefits requires significant documentation that, in most cases, has historically been processed manually. Given the volume of people in need, this took a massive amount of human resources and time.

Using Document AI (DocAI), agencies can automate this highly manual process and speed up the delivery of critical benefits to individuals and families. DocAI extracts the key data, provides a confidence score in a single review pane for staff review, and automatically uploads that data to the case management system, significantly reducing manual processing.
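The post doesn't include code, but a minimal sketch of that flow with the Document AI Python client might look like the following. The project, location, and processor IDs are hypothetical placeholders, and the entity fields returned depend on the processor used.

```python
from google.api_core.client_options import ClientOptions
from google.cloud import documentai_v1 as documentai  # pip install google-cloud-documentai

PROJECT_ID = "my-agency-project"   # hypothetical project ID
LOCATION = "us"                    # processor region
PROCESSOR_ID = "abc123def456"      # hypothetical specialized-parser processor ID

def extract_fields(pdf_path: str):
    """Send a scanned benefits application to Document AI and print extracted entities."""
    client = documentai.DocumentProcessorServiceClient(
        client_options=ClientOptions(api_endpoint=f"{LOCATION}-documentai.googleapis.com")
    )
    name = client.processor_path(PROJECT_ID, LOCATION, PROCESSOR_ID)

    with open(pdf_path, "rb") as f:
        raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

    result = client.process_document(
        request=documentai.ProcessRequest(name=name, raw_document=raw_document)
    )

    # Each entity carries the extracted text plus a confidence score that staff
    # can use to decide whether a manual review is needed before the record is
    # pushed into the case management system.
    for entity in result.document.entities:
        print(f"{entity.type_}: {entity.mention_text} (confidence {entity.confidence:.2f})")

if __name__ == "__main__":
    extract_fields("sample_claim.pdf")
```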
Today, state and local government organizations turn to DocAI to eliminate the difficult paperwork application process and help meet the benefit needs of their residents now and tomorrow, as illustrated by both the Wisconsin Department of Workforce Development (DWD) and the State of Hawaii.

The Wisconsin DWD streamlined its paper unemployment insurance claims using DocAI, enabling DWD staff to receive critical data extracted from submitted applications and make decisions rapidly, saving time for both applicants and staff. The State of Hawaii also used DocAI to extract, interpret, and transport COVID-19 test result data of incoming travelers to Google Cloud instantly. As a result, Hawaii was able to welcome travelers and reopen its economy in the midst of the pandemic.

Partner with us

After making it through this very difficult period, agencies have emerged from the pandemic with stronger labor, health, and human service delivery for their residents. State and local governments recognized the necessity of virtual engagement and expanded access to services to meet this critical moment, solving for work, cash, food, and healthcare needs.

To learn more about Google Cloud for human services and labor, watch our video on the power of customer innovation or contact your Google Cloud sales representative. Let's get solving together.

Source: Google Cloud Platform

Sopra Steria uses Google Cloud, Cisco, and ACTIVEO to power new generation of Virtual Agents

With people expecting to access products and services through easy, always-on experiences delivered across channels, transforming approaches to digital services is a must for every business. As a result, business leaders also have to strive to provide employees with these same frictionless experiences.

Sopra Steria is a European leader in consulting, digital services, and software development, with 46,000 employees in 25 countries and revenue of €4.3 billion in 2020. It provides end-to-end solutions that help customers drive their digital transformation and obtain tangible, sustainable benefits by combining in-depth knowledge of a wide range of business sectors with innovative technologies.

To accomplish its goals, Sopra Steria made conversational artificial intelligence part of its strategy and began working closely with Google Cloud, Cisco, and Activeo. It believed that by building more advanced Virtual Agents to serve its customers, it could usher in a new era of frictionless experiences in end-user support services.

"To address customer demands for office and business applications support services, we looked to integrate a new generation of virtual voice assistant into our platform," says Xavier Leroux, CTO End User Services at Sopra Steria. "We did this using Google's proven AI and integrated it with our Cisco Contact Center by Activeo."

As many companies have learned, the ability to provide customers with advanced experiences is directly tied to the services offered to their employees. By providing more contextual information and other forms of digital support to staff, those same team members will be better positioned to serve customers.

Solving a complex problem with ease

Sopra Steria sought to solve several challenges for its customers, enabling them to:

- Provide all their employees with seamless, quick access to IT and business support
- Offer 24/7 phone support capabilities for employees
- Make services easily deployable, flexible, agile, and available in multiple languages
- Reduce costs associated with delivering better support services

Sopra Steria chose Google Cloud Contact Center AI (CCAI), built by Google Cloud and Cisco, as the best option to achieve its vision. Google Cloud CCAI is currently used in some of the world's largest call centers, building on Google's expertise in natural language processing.

Sopra Steria implemented CCAI as its new Virtual Assistant on phone channels managed by a Cisco telephony solution, and used Activeo for integration and implementation support. The new Virtual Assistant allows Sopra Steria clients to qualify employee requests and direct them to the right agent based on natural language understanding, while offering self-service options for basic requests such as password resets.
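CCAI's conversational core is Dialogflow, and the qualification step described above boils down to detecting an intent for each employee utterance. As a rough illustration only (Sopra Steria's production path goes through the Cisco/CCAI telephony integration rather than direct API calls, and the project ID and intent names here are hypothetical), a Dialogflow ES detect-intent call looks like this:

```python
import uuid
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

PROJECT_ID = "sopra-support-demo"  # hypothetical Dialogflow agent project

def qualify_request(text: str, language_code: str = "en-US"):
    """Send one utterance to the virtual agent and return the matched intent."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(PROJECT_ID, uuid.uuid4().hex)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )

    result = response.query_result
    # The matched intent (e.g. a hypothetical "password.reset") drives either
    # self-service fulfillment or routing to the right human agent.
    return result.intent.display_name, result.intent_detection_confidence, result.fulfillment_text

if __name__ == "__main__":
    intent, confidence, reply = qualify_request("I forgot my laptop password")
    print(intent, confidence, reply)
```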
These Virtual Assistants also fully automate the creation of complex incident tickets, in sync with the IT service management system.

Google Cloud CCAI covers three main use cases:

- Autonomously handling full-length conversations using natural language
- Augmenting operator support with contextual assistance through a desktop application based on live conversation analysis
- Semantic analysis of all audio and text conversations processed by customer service through Google Cloud machine learning

Cisco provides native integration of CCAI within its Cisco Contact Center solution as part of its global partnership with Google Cloud.

"The work we have done with Google Cloud has allowed Sopra Steria to create innovative and personalized conversational services to enhance both employee satisfaction and operator productivity without sacrificing security," says Christian Laloy, EMEA Contact Center Sales Specialist at Cisco.

Unlocking new, more powerful contact center experiences

Since standing up the new CCAI solution, Sopra Steria has been able to enhance its service catalog through Virtual Assistants. The offering has generated immense interest among existing and new Sopra Steria clients because it can simultaneously reduce average waiting, handling, and resolution times while dramatically decreasing service costs.

"The Virtual Assistant serves both the user and operator alike," says Xavier Leroux. "The user gets a conversational experience like no other and a seamless journey to resolve their requests. With Google Cloud, Cisco and Activeo, we have increased operator efficiency, so they can focus more on adding value to every business interaction."

Related article: Customers handle up to 28% more concurrent chats with Agent Assist for Chat
Source: Google Cloud Platform

Helping build the digital future. On Europe’s terms.

Cloud computing is globally recognized as the single most effective, agile and scalable path to digitally transform and drive value creation. It has been a critical catalyst for growth, allowing private organizations and governments to support consumers and citizens alike, delivering services quickly without prohibitive capital investment. European organizations, in both the public and private sectors, want a provider to deliver a cloud on their terms, one that meets their requirements for security, privacy, and digital sovereignty, without compromising on functionality or innovation.

Last year, we set out an ambitious vision of sovereignty along three distinct pillars: data sovereignty (including control over encryption and data access), operational sovereignty (visibility and control over provider operations), and software sovereignty (providing the ability to run and move cloud workloads without being locked in to a particular provider, including in extraordinary situations such as stressed exits). After extensive dialogue with customers and policymakers, we are today unveiling 'Cloud. On Europe's Terms'. As part of this initiative, we will continue to demonstrate our commitment to deliver cloud services that provide the highest levels of digital sovereignty, all while enabling the next wave of growth and transformation for Europe's businesses and organizations.

Google Cloud's baseline controls and security features offer strong protections, meet current robust security and privacy requirements, and address many customer needs. Yet each country in Europe has its own characteristics and expectations. Certain customers in Europe may require more flexibility than current public and private cloud offerings may provide. We want to deliver a platform that allows customers to deploy workloads with the desired local control, without losing the transformational benefits of the public cloud.

We are now delivering on this new vision collaboratively with trusted local technology providers in Europe, starting with T-Systems in Germany. Today, together with T-Systems, we announced a partnership to build a Sovereign Cloud offering in Germany for private and public sector organizations. The offering will become available in mid-2022, with additional features being added over time. In this new joint offering, T-Systems will manage sovereignty controls and measures, including encryption and identity management of the Google Cloud Platform. In addition, as part of their offering, T-Systems will operate and independently control key parts of the Google Cloud infrastructure for T-Systems Sovereign Cloud customers in Germany.

We are committed to building trust with European governments and enterprises with a cloud that meets their digital sovereignty, sustainability and economic objectives. We are starting with T-Systems today and will continue by partnering with trusted technology providers in selected markets across the region. Customers in other markets across Europe will be able to use these trusted partner offerings or use Google Cloud's controls to exercise autonomous control over data access and use; exercise choice over the infrastructure that is used to process that data; and avoid cloud vendor lock-in. With Google Cloud, our customers also automatically benefit from sustainable business transformation on the cleanest cloud in the industry.
Today, we are the largest annual corporate purchaser of renewable energy globally, and by 2030, we aim to operate entirely on 24/7 carbon-free energy in all of our cloud regions worldwide.

We'll continue to listen to our customers and key stakeholders across Europe who are setting policy and helping shape requirements for customer control of data. Our goal is to make Google Cloud the best possible place for sustainable, digital transformation for European organizations on their terms, and there is much more to come.

Related article: The cloud trust paradox: 3 scenarios where keeping encryption keys off the cloud may be necessary
Source: Google Cloud Platform

PyTorch on Google Cloud: How To train and tune PyTorch models on Vertex AI

Since the publication of the inaugural post in the PyTorch on Google Cloud blog series, we announced Vertex AI, Google Cloud's end-to-end ML platform, at Google I/O 2021. Vertex AI unifies Google Cloud's existing ML offerings into a single platform for efficiently building and managing the lifecycle of ML projects. It provides tools for every step of the machine learning workflow across various model types, for varying levels of machine learning expertise.

We will continue the blog series with Vertex AI to share how to build, train and deploy PyTorch models at scale and how to create reproducible machine learning pipelines on Google Cloud.

Figure 1. What's included in Vertex AI?

In this post, we will show how to use:

- Vertex AI Training to build and train a sentiment text classification model using PyTorch
- Vertex AI Hyperparameter Tuning to tune hyperparameters of PyTorch models

You can find the accompanying code for this blog post in the GitHub repository and the Jupyter Notebook. Let's get started!

Use case and dataset

In this article we will fine-tune a transformer model (BERT-base) from the Hugging Face Transformers library for a sentiment analysis task using PyTorch. BERT (Bidirectional Encoder Representations from Transformers) is a transformer model pre-trained on a large corpus of unlabeled text in a self-supervised fashion. We will begin experimentation with the IMDB sentiment classification dataset on Notebooks. We recommend using a Notebook instance with limited compute for development and experimentation purposes. Once we are satisfied with the local experiment in the notebook, we show how you can submit a training job from the same Jupyter notebook to the Vertex Training service to scale the training with bigger GPU shapes. The Vertex Training service optimizes the training pipeline by spinning up infrastructure for the training job and spinning it down after the training is complete, without you having to manage the infrastructure.

Figure 2. ML workflow on Vertex AI

In upcoming posts, we will show how you can deploy and serve these PyTorch models on the Vertex Prediction service, followed by Vertex Pipelines to automate, monitor and govern your ML systems by orchestrating the ML workflow in a serverless manner and storing the workflow's artifacts using Vertex ML Metadata.

Creating a development environment on Notebooks

To set up a PyTorch development environment on JupyterLab notebooks with Notebooks, follow the setup section in the earlier post here. To interact with the new notebook instance, go to the Notebooks page in the Google Cloud Console and click the "OPEN JUPYTERLAB" link next to the new instance, which becomes active when the instance is ready to use.

Figure 3. Notebook instance

Training a PyTorch model on Vertex Training

After creating a Notebooks instance, you can start with your experiments. Let's look into the model specifics for the use case.

The model specifics

For analyzing sentiments of the movie reviews in the IMDB dataset, we will fine-tune a pre-trained BERT model from Hugging Face. The pre-trained BERT model already encodes a lot of information about the language, as the model was trained on a large corpus of English data in a self-supervised fashion. Now we only need to slightly tune it, using its outputs as features for the sentiment classification task. This means quicker development iteration on a much smaller dataset, instead of training a specific Natural Language Processing (NLP) model with a larger training dataset.

Figure 4. Pretrained model with classification layer: the blue box indicates the pre-trained BERT encoder module. The output of the encoder is pooled into a linear layer with the number of outputs equal to the number of target labels (classes).

For training the sentiment classification model, we will:

- Preprocess and transform (tokenize) the reviews data
- Load the pre-trained BERT model and add the sequence classification head for sentiment analysis
- Fine-tune the BERT model for sentence classification

The following code snippet shows how to preprocess the data and fine-tune a pre-trained BERT model. Please refer to the Jupyter Notebook for the complete code and a detailed explanation.
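(The original snippet is not reproduced in this excerpt; the following is a minimal sketch of the same flow using the Hugging Face datasets and transformers APIs. The subset sizes, output directory, and some argument values are illustrative and may differ from the notebook.)

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load the IMDB reviews and tokenize them for BERT.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

encoded = dataset.map(tokenize, batched=True)
train_ds = encoded["train"].shuffle(seed=42).select(range(5000))  # small subset for local runs
eval_ds = encoded["test"].shuffle(seed=42).select(range(1000))

# Pre-trained BERT encoder with a fresh sequence-classification head (2 classes).
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": (predictions == labels).mean()}

training_args = TrainingArguments(
    output_dir="./sentiment-bert",
    learning_rate=2e-5,                 # small LR so the pre-trained weights aren't destroyed
    per_device_train_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```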
In the snippet above, notice that the encoder (also referred to as the base model) weights are not frozen. This is why a very small learning rate (2e-5) is chosen, to avoid loss of the pre-trained representations. The learning rate and other hyperparameters are captured in the TrainingArguments object. During training, we only capture accuracy metrics; you can modify the compute_metrics function to capture and report other metrics.

Training the model on Vertex AI

While you can do local experimentation on your Notebooks instance, for larger datasets or large models a vertically scaled compute resource or horizontally distributed training is often required. The most effective way to perform this task is the Vertex Training service, for the following reasons:

- Automatically provision and deprovision resources: a training job on Vertex AI will automatically provision computing resources, perform the training task, and ensure deletion of compute resources once the training job is finished.
- Reusability and portability: you can package training code with its parameters and dependencies into a container and create a portable component. This container can then be run in different scenarios, such as hyperparameter tuning, with various data sources, and more.
- Training at scale: you can run a distributed training job on Vertex Training to train models in a cluster across multiple nodes in parallel, resulting in faster training time.
- Logging and monitoring: the training service logs messages from the job to Cloud Logging and can be monitored while the job is running.

In this post, we show how to scale a training job with Vertex Training by packaging the code and creating a training pipeline to orchestrate the training job. There are three steps to run a training job using the Vertex AI custom training service:

Figure 5. Custom training on Vertex AI
STEP 1 – Determine training code structure: package the training application code as a Python source distribution or as a custom container image (Docker).

STEP 2 – Choose a custom training method: you can run a training job on Vertex Training as a custom job, a hyperparameter tuning job, or a training pipeline.

- Custom jobs: with a custom job you configure the settings to run your training code on Vertex AI, such as worker pool specs (machine types and accelerators), a Python training spec, or a custom container spec.
- Hyperparameter tuning jobs: hyperparameter tuning jobs automate tuning of your model's hyperparameters based on the criteria you configure, such as the goal or metric to optimize, the hyperparameter values, and the number of trials to run.
- Training pipelines: orchestrate custom training jobs or hyperparameter tuning jobs with additional steps after the training job completes successfully.

STEP 3 – Run the training job: you can submit the training job to run on Vertex Training using the gcloud CLI or any of the client SDK libraries, such as the Vertex SDK for Python.

Refer to the documentation for further details on custom training methods.

Packaging the training application

Before running the training application on Vertex Training, the training application code with its required dependencies must be packaged and uploaded to a Cloud Storage bucket that your Google Cloud project can access. There are two ways to package the application and run it on Vertex Training:

- Create a Python source distribution with the training code and dependencies, to use with one of the pre-built containers on Vertex AI
- Use custom containers to package dependencies using Docker containers

You can structure your training code in any way you prefer. Refer to the GitHub repository or Jupyter Notebook for our recommended approach to structuring training code.

Run Custom Job on Vertex Training with a pre-built container

Vertex AI provides Docker container images that can be used as pre-built containers for custom training. These containers include common dependencies used in training code, based on the machine learning framework and framework version.

For the sentiment analysis task, we use Hugging Face Datasets and fine-tune a transformer model from the Hugging Face Transformers library using PyTorch. We use the pre-built container for PyTorch and package the training application code as a Python source distribution by adding the standard Python dependencies required by the training algorithm – transformers, datasets and tqdm – in the setup.py file.

Figure 6. Custom training with pre-built containers on Vertex Training

The find_packages() function inside setup.py includes the training code in the package as dependencies.

We use the Vertex SDK for Python to create and submit the training job to the Vertex Training service, configuring a custom job with the pre-built container image for PyTorch and specifying the training code packaged as a Python source distribution. We attach an NVIDIA Tesla T4 GPU to the training job to accelerate the training.
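(The blog's own submission code lives in the linked notebook; the sketch below shows one way to express an equivalent job with the Vertex SDK for Python. The project, bucket, package path, display name, and trainer flags are hypothetical.)

```python
from google.cloud import aiplatform  # pip install google-cloud-aiplatform

aiplatform.init(
    project="my-ml-project",              # hypothetical project ID
    location="us-central1",
    staging_bucket="gs://my-ml-bucket",   # hypothetical staging bucket
)

# Pre-built PyTorch GPU training container referenced later in this post.
PYTORCH_GPU_IMAGE = "us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-7:latest"

job = aiplatform.CustomPythonPackageTrainingJob(
    display_name="pytorch-sentiment-training",
    python_package_gcs_uri="gs://my-ml-bucket/python_package/trainer-0.1.tar.gz",
    python_module_name="trainer.task",    # entry point of the packaged training code
    container_uri=PYTORCH_GPU_IMAGE,
)

# Attach a single NVIDIA Tesla T4 GPU, as described above.
job.run(
    replica_count=1,
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
    args=["--num-epochs", "2", "--model-name", "sentiment-bert"],  # illustrative flags
)
```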
Alternatively, you can submit the training job to the Vertex AI training service using the gcloud beta ai custom-jobs create command. The gcloud command stages your training application in the GCS bucket and submits the training job. The worker-pool-spec parameter in the command defines the worker pool configuration used by the custom job. The following fields are set within worker-pool-spec:

- Set executor-image-uri to us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-7:latest to train on the pre-built PyTorch v1.7 image for GPU
- Set local-package-path to the path to the training code
- Set python-module to trainer.task, which is the main module that starts the training application
- Set accelerator-type and machine-type to set the compute type used to run the application

Refer to the documentation for the gcloud beta ai custom-jobs create command for details.

Run Custom Job on Vertex Training with custom container

To create a training job with a custom container, you define a Dockerfile to install or add the dependencies required for the training job. Then you build and test your Docker image locally to verify it, push the image to Container Registry, and submit a custom job to the Vertex Training service.

Figure 7. Custom training with custom containers on Vertex Training

We create a Dockerfile with a pre-built PyTorch container image provided by Vertex AI as the base image, install the dependencies – transformers, datasets, tqdm and cloudml-hypertune – and copy the training application code. Now, build and push the image to Google Cloud Container Registry, and submit the custom training job to Vertex Training using the Vertex SDK for Python.

Alternatively, you can submit the training job to the Vertex AI training service using the gcloud beta ai custom-jobs create command with a custom container spec. The gcloud command submits the training job and launches a worker pool with the custom container image specified. The worker-pool-spec parameter defines the worker pool configuration used by the custom job. The following fields are set within worker-pool-spec:

- Set container-image-uri to the custom container image pushed to Google Cloud Container Registry for training
- Set accelerator-type and machine-type to set the compute type used to run the application

Once the job is submitted, you can monitor the status and progress of the training job either in the Google Cloud Console or with the gcloud CLI command gcloud beta ai custom-jobs stream-logs, as shown below.

Figure 8. Monitor progress and logs of custom training jobs from the Google Cloud Console

Hyperparameter tuning on Vertex AI

The training application code for fine-tuning a transformer model uses hyperparameters such as learning rate and weight decay. These hyperparameters control the behavior of the training algorithm and can have a substantial effect on the performance of the resulting model. In this section, we show how you can automate tuning these hyperparameters with Vertex Training.

We submit a hyperparameter tuning job to the Vertex Training service by packaging the training application code and dependencies in a Docker container and pushing the container to Google Container Registry, similar to running a CustomJob on Vertex AI with a custom container as shown in the earlier section.

Figure 9. Hyperparameter Tuning on Vertex Training

How does hyperparameter tuning work in Vertex AI?

Following are the high-level steps involved in running a hyperparameter tuning job on the Vertex Training service:

- Define the hyperparameters to tune the model, along with the metric to optimize.
- The Vertex Training service runs multiple trials of the training application with the hyperparameters and limits you specify: the maximum number of trials to run and the number of parallel trials.
- Vertex AI keeps track of the results from each trial and makes adjustments for subsequent trials. This requires your training application to report the metrics to Vertex AI using the Python package cloudml-hypertune, as sketched after this list.
- When the job is finished, get the summary of all the trials with the most effective configuration of values based on the criteria you configured.
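(The blog's actual callback implementation is in the linked repository; the following is a minimal sketch of how a transformers TrainerCallback could report the evaluation metric with cloudml-hypertune. The metric tag must match the one configured in the tuning job.)

```python
import hypertune  # pip install cloudml-hypertune
from transformers import TrainerCallback

class HPTuneCallback(TrainerCallback):
    """Reports the metric from each evaluation phase back to Vertex AI."""

    def __init__(self, metric_tag="accuracy", metric_key="eval_accuracy"):
        self.metric_tag = metric_tag   # name used in the tuning job's metric_spec
        self.metric_key = metric_key   # key produced by compute_metrics
        self.hpt = hypertune.HyperTune()

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        if metrics and self.metric_key in metrics:
            self.hpt.report_hyperparameter_tuning_metric(
                hyperparameter_metric_tag=self.metric_tag,
                metric_value=metrics[self.metric_key],
                global_step=state.global_step,
            )

# Attach it to the Trainer defined earlier, for example:
# trainer.add_callback(HPTuneCallback())
```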
Refer to the Vertex AI documentation to understand how to configure and select hyperparameters for tuning, how to configure the tuning strategy, and how Vertex AI optimizes hyperparameter tuning jobs. The default tuning strategy uses results from previous trials to inform the assignment of values in subsequent trials.

Changes to training application code for hyperparameter tuning

There are a few requirements to follow that are specific to hyperparameter tuning in Vertex AI:

- To pass the hyperparameter values to the training code, you must define a command-line argument in the main training module for each tuned hyperparameter. Use the value passed in those arguments to set the corresponding hyperparameter in the training application's code.
- You must pass metrics from the training application to Vertex AI to evaluate the efficacy of a trial. You can use the cloudml-hypertune Python package to report metrics.

Previously, in the training application code, we instantiated the Trainer with hyperparameters passed as training arguments (training_args). These hyperparameters are passed as command-line arguments to the training module trainer.task, which then passes them to training_args. Refer to the ./python_package/trainer module for the training application code.

To report metrics to Vertex AI when hyperparameter tuning is enabled, we call the cloudml-hypertune Python package after the evaluation phase, as a callback to the Trainer object. The Trainer passes the metrics computed in the last evaluation phase to the callback, and the hypertune library reports them to Vertex AI for evaluating trials.

Run Hyperparameter Tuning Job on Vertex AI

Before submitting the hyperparameter tuning job to Vertex AI, push the custom container image with the training application to the Cloud Container Registry repository and then submit the job to Vertex AI using the Vertex SDK for Python. We use the same image as before, when running the Custom Job on the Vertex Training service.

Define the training arguments with the hp-tune argument set to y so that the training application code can report metrics to the Vertex Training service. Create a CustomJob with worker pool specs to define machine types, accelerators, and the custom container spec with the training application code.

Next, define the parameter and metric specifications:

- parameter_spec defines the search space, i.e., the parameters to search and optimize. The spec requires you to specify the hyperparameter data type as an instance of a parameter value specification. Refer to the documentation on selecting the hyperparameters to tune and how to define them.
- metric_spec defines the goal of the metric to optimize. The goal specifies whether you want to tune your model to maximize or minimize the value of this metric.

Configure and submit a HyperparameterTuningJob with the CustomJob, metric_spec, parameter_spec, and trial limits (a sketch follows the list below). Trial limits define how many trials to allow the service to run:

- max_trial_count: the maximum number of trials run by the service. Start with a smaller value to understand the impact of the hyperparameters chosen before scaling up.
- parallel_trial_count: the number of trials to run in parallel. Start with a smaller value, as Vertex AI uses results from previous trials to inform the assignment of values in subsequent trials; a higher number of parallel trials means those trials start without the benefit of results from trials that are still running.
- search_algorithm: the search algorithm specified for the study. When not specified, Vertex AI by default applies Bayesian optimization to arrive at the optimal solution to search over the parameter space.
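(As above, the notebook holds the exact job definition; this is a minimal sketch of the same flow with the Vertex SDK for Python. The project, bucket, container image, and hyperparameter flag names are hypothetical and must match the arguments your trainer.task actually parses.)

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-ml-project", location="us-central1",
                staging_bucket="gs://my-ml-bucket")  # hypothetical values

worker_pool_specs = [{
    "machine_spec": {
        "machine_type": "n1-standard-8",
        "accelerator_type": "NVIDIA_TESLA_T4",
        "accelerator_count": 1,
    },
    "replica_count": 1,
    "container_spec": {
        "image_uri": "gcr.io/my-ml-project/pytorch-sentiment-hpt:latest",  # custom image built above
        "args": ["--hp-tune", "y"],  # tells the trainer to report metrics via cloudml-hypertune
    },
}]

custom_job = aiplatform.CustomJob(
    display_name="pytorch-sentiment-hpt-job",
    worker_pool_specs=worker_pool_specs,
)

hpt_job = aiplatform.HyperparameterTuningJob(
    display_name="pytorch-sentiment-hpt",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},  # must match the reported metric tag
    parameter_spec={
        "learning-rate": hpt.DoubleParameterSpec(min=1e-5, max=1e-4, scale="log"),
        "weight-decay": hpt.DoubleParameterSpec(min=0.0, max=0.1, scale="linear"),
    },
    max_trial_count=8,
    parallel_trial_count=2,
)

hpt_job.run()
print(hpt_job.trials)  # summary of all trials after completion
```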
Refer to the documentation to understand the hyperparameter tuning job configuration.

Alternatively, you can submit a hyperparameter tuning job to the Vertex AI training service using gcloud beta ai hp-tuning-jobs create. The gcloud command submits the hyperparameter tuning job and launches multiple trials with a worker pool based on the custom container image specified, the number of trials, and the criteria set. The command requires the hyperparameter tuning job configuration provided as a configuration file in YAML format, along with the job name. Refer to the Jupyter notebook for creating the YAML configuration and submitting the job via the gcloud command.

You can monitor the hyperparameter tuning job from the Cloud Console following the link here, or use the gcloud CLI command gcloud beta ai custom-jobs stream-logs.

Figure 10. Monitor progress and logs of hyperparameter tuning jobs from the Google Cloud Console

After the job is finished, you can view and format the results of the hyperparameter tuning trials (run by the Vertex Training service) and pick the best-performing trial to deploy to the Vertex Prediction service.

Run predictions locally

Let's run prediction calls on the trained model locally with a few examples (refer to the notebook for the complete code). The next post in this series will show you how to deploy this model on the Vertex Prediction service.

Cleaning up the Notebook environment

After you are done experimenting, you can either stop or delete the Notebooks instance. Delete the Notebooks instance to prevent any further charges. If you want to save your work, you can choose to stop the instance instead.

What's next?

In this article, we explored Notebooks for PyTorch model development. We then trained and tuned the model on the Vertex Training service, a fully managed service for training machine learning models at scale. We looked at how you can submit training jobs as a Custom Job and a Hyperparameter Tuning Job to Vertex Training using the Vertex SDK for Python and gcloud CLI commands, with both pre-built and custom containers for PyTorch.

In the next installments of this series, we will show how to deploy PyTorch models on the Vertex Prediction service and orchestrate a machine learning workflow using Vertex Pipelines. We encourage you to explore the Vertex AI features and read the reference guide on best practices for implementing machine learning on Google Cloud.

References:
- Introduction to Notebooks
- Custom training on Vertex Training
- Configuring distributed training on Vertex Training
- GitHub repository with code and accompanying notebook

Stay tuned. Thank you for reading! Have a question or want to chat? Find the authors here – Rajesh [Twitter | LinkedIn] and Vaibhav [LinkedIn]. Thanks to Karl Weinmeister and Jordan Totten for helping with and reviewing the post.

Related article: PyTorch on Google Cloud: How to train PyTorch models on AI Platform
Source: Google Cloud Platform

How Lowe’s SRE reduced its mean time to recovery (MTTR) by over 80 percent

Editor's note: In a previous blog, we discussed how home improvement retailer Lowe's was able to increase the number of releases it supports by adopting Google's Site Reliability Engineering (SRE) framework on Google Cloud. Lowe's went from one release every two weeks to 20+ releases daily, helping meet its customer needs faster and more effectively. Today, the Lowe's SRE team shares how they used SRE principles to decrease their mean time to recovery (MTTR) by over 80 percent.

The stakes of managing Lowes.com have never been higher, and that means spotting, troubleshooting, and recovering from incidents as quickly as possible, so that customers can continue to do business on our site. To do that, it's crucial to have solid incident engineering practices in place.

Resolving an incident means mitigating the impact and/or restoring the service to its previous condition. The average time it takes to do this is called mean time to recovery (MTTR). Tracking this metric helps us stay on top of the overall reliability of our systems at Lowe's, while simultaneously improving the speed with which we recover. Our goal is to keep MTTR as low as possible, so that failures don't negatively impact our business. Here are the four areas we addressed to drive holistic improvement in our MTTR.

Lowe's incident reporting process

To reduce MTTR, we created a seamless incident reporting process following SRE principles. Our incident reporting process is a workflow that starts at the time an incident occurs and ends with an SRE captain closing out the action items after a postmortem report. With this approach, we are able to limit the number of critical incidents. The reporting process involves three core components: monitoring, alerting, and blameless postmortems.

Monitoring and alerting

Having proper monitoring and alerting in place is crucial when it comes to incident management. Monitoring and alerting tools let you detect issues as soon as they occur and notify the right person in the shortest possible time to take action. From a measurement standpoint, we track this as our mean time to acknowledge (MTTA): the average time from when an alert is triggered to when work on the issue begins.

At the time of an incident, our monitoring and alerting tools notify the on-call SRE first responder via PagerDuty in the form of a phone call, text message, and email. Our SRE software engineering team has done a lot of automation to enable various Service Level Indicator (SLI) alerts and Service Level Agreement (SLA) notifications. The on-call SRE then initiates a triage call with our service/domain stakeholders to resolve the incident. As a result, we reduced our MTTA from 30 minutes in 2019 to one minute – a 97 percent decrease.

Blameless postmortems: learning from incidents

A postmortem is a written record of an incident, its impact, the actions taken to resolve it, the root cause, and the follow-up actions to prevent the incident from recurring (see example here). A blameless postmortem builds on that and is a core part of an SRE culture, and of our culture at Lowe's. We ensure that individuals are not singled out, and the outcomes of all postmortems are directed toward learnings and process improvement.

For us, the postmortem process is the biggest part of our incident workflow. When an SRE creates a new postmortem report, the first step is to conduct a postmortem session with domain stakeholders to review the report.
The postmortem then goes into the review stage and gets reviewed by more stakeholders in our weekly postmortem meeting. In the final stage of this process, the SRE captain closes the report once everyone in the weekly meeting agrees that the report is complete.

To conduct a successful postmortem, it is critical to keep the focus on identifying gaps and issues with the system and operations processes, rather than an individual, and to generate concrete actions to address the problems we've identified. To ensure this, we follow a couple of best practices:

- We start by gathering the facts from the person who identified the problem, and each SLI owner has to identify a gap or the next upstream SLI owner who created the impact for them.
- Every SLI owner is given a full opportunity to present their case, and identifying the issue is done as a community exercise. Once action items and process changes are identified, an owner is nominated to complete the actions, or they volunteer.

For easy reference, we publish and store postmortems in our incident knowledge base. This process helps SREs continuously improve as future incidents arise.

Continuous improvement

Encouraging the culture of honest, transparent, and direct feedback that you need for blameless postmortems is often an iterative process that needs sponsorship from executives, empowering incident captains to lead the entirety of the discussion and outcomes. Running successful postmortems, and completing the action items from them, needs to be recognized and accounted for in SRE performance objective assessments. As shared in Google's SRE book, the best practice is to ensure that writing effective postmortems is a rewarded and celebrated practice, with leadership's acknowledgement and participation. This is possibly the hardest part of an effective postmortem to accomplish during a cultural transformation unless you have full buy-in from leadership.

However, it's all well worth it. This process is a key part of how we were able to improve our MTTR over time, from two hours in 2019 to just 17 minutes. Our SRE incident reporting process has also transformed how our company solves issues. By streamlining this workflow from alerting, to solving an issue, to blameless postmortems, we have reduced our MTTR by 82 percent and our MTTA by 97 percent. Most importantly, our team is learning from every incident and becoming better engineers as a result.

Visit the SRE Google Cloud website to learn more about implementing SRE best practices in the cloud.

Acknowledgement: Special thanks to Rahul Mohan Kola Kandy, Vivek Balivada, and the Digital SRE team at Lowe's for contributing to this blog post.

Related article: How Lowe's meets customer demand with Google SRE practices

Staying Ahead of New Regulations in APAC

Over the course of the COVID-19 pandemic, we’ve seen our customers across the globe increase their use of cloud services, in large part due to an increase in e-commerce activity, digitization efforts, and the move to remote work. This shift has put further emphasis on the importance of security and control in cloud computing. Cloud service providers (CSPs) have a responsibility to provide transparency and assurance around how customer data is stored, processed, and protected, which is why in 2021 we’ve increased our efforts to support security and compliance in the APAC region.

At Google Cloud, we strongly believe in trust and transparency, and we recently outlined the criteria we believe define what it means to be a trusted cloud service provider. Data protection is a baseline requirement across many industries, and the need for a trusted, compliant cloud service provider becomes increasingly important as new regulations are published and organizations shift their IT operations and workloads to public cloud platforms. In the APAC region, there have been some key regulatory updates over the last year, including:

- IRAP (Information Security Registered Assessors Program): a framework for assessing the implementation and effectiveness of an organization’s security controls against the Australian government’s security requirements.
- ISMAP (Information System Security Management and Assessment Program): a Japanese government program for assessing the security and operation of cloud service providers that want to participate in public sector tenders.
- ETDA (Electronic Transactions Development Agency): the Thai agency that sets security standards for electronic transactions and the control systems that support them.
- RBIA (Risk Based Internal Audit): an internal audit methodology that provides assurance to a board of directors on how effectively risks are managed.
- GR 95 (Presidential Regulation No. 95): Indonesian regulation providing guidance to government agencies and businesses on implementing the online governance tools used for public services.

We have posted updates to guidance and resources to help support regulatory and compliance requirements as part of our compliance offerings, which include compliance mappings geared toward assisting regulated entities with their regulatory notification and outsourcing requirements. You can also see the results of the assessments and certifications we’ve completed so far this year:

- Australia: IRAP
- India: RBI Outsourcing Guidelines
- India: Ministry of Electronics and Information Technology (MeitY)
- Indonesia: Government Regulation (GR) 95
- Japan: ISMAP
- Korea: Regulation on Outsourcing of Information Processing Business of Financial Institutions
- Korea: K-ISMS (Korea Information Security Management System)
- Singapore: Multi-Tier Cloud Security (MTCS) Tier 3
- Singapore: MAS Technology Risk Management Guidelines (MAS TRM)
- Thailand: ETDA

In the coming months we will continue providing updates, and you can look forward to the following:

- Australia, SCEC Zone 3/PSZ 3: enablement of SCEC Zone 3 for our Melbourne region, allowing for regional replication.
- Japan, 2G3M: Healthcare Security Guidelines for the Ministry of Health, Labor, and Welfare.
- MAMPU (Malaysian Administrative Modernization and Management Planning Unit): the Malaysian government agency tasked with modernizing the public administrative system and driving economic growth by helping public sector agencies adopt innovative technologies.

As this space continues to evolve, we are committed to doing our best to stay ahead of new and changing regulations. Look for updated compliance offerings and continued momentum in this space.

Related article: Building global momentum with government and security compliance certifications. Operating virtually has heightened the importance of security and compliance for public sector agencies around the world.

What is Cloud SQL?

If you are building an application, chances are you will need a relational database for transaction processing. That’s where Cloud SQL comes in. Cloud SQL is a fully managed relational database for MySQL, PostgreSQL, and SQL Server. It reduces maintenance costs and automates database provisioning, storage capacity management, and backups, and it provides out-of-the-box high availability and disaster recovery/failover.

How to set up Cloud SQL?

Cloud SQL is easy to set up:

- Select the region and zone where you would like the instance to live; the instance is created there.
- Configure the machine type with the number of CPUs and the amount of memory your application needs.
- Choose between solid-state and hard-disk storage depending on your latency, QPS, and cost requirements.

Reliability & Availability

Cloud SQL offers automated backups and point-in-time recovery, and you can set time windows and locations for backups. For production applications, it is recommended to enable the built-in high availability (HA) option, which supports a 99.95% SLA. With HA enabled, Google Cloud continuously monitors the Cloud SQL instance with a heartbeat signal, and if the primary fails during an outage, an automatic failover to another zone in your selected region is triggered. You can also create replicas across regions to protect against regional failure, and you can enable automatic storage increases to add more storage as you near capacity.

Cloud SQL Insights, a free tool, helps you detect, diagnose, and identify query performance problems in Cloud SQL databases. It provides self-service, intuitive monitoring and diagnostic information that goes beyond detection to help you identify the root cause of performance problems.

How to migrate an existing MySQL database to Cloud SQL?

If you are moving an existing application to the cloud, chances are you need to migrate its SQL database to Cloud SQL. Database Migration Service (DMS) simplifies migration of MySQL and PostgreSQL databases from on-premises, Google Compute Engine, and other clouds to Cloud SQL. It is serverless, easy to set up, and available at no additional cost, and it replicates data continuously for minimal-downtime migrations. Here’s how it works:

- Provide your data source details: the type of database engine, such as MySQL, PostgreSQL, Amazon RDS, or others.
- Pick one-time or continuous replication for minimal downtime.
- Create a Cloud SQL instance as your destination.
- For connectivity to the source instance, DMS provides multiple options: allow-list an IP address, create a reverse SSH tunnel via a cloud-hosted virtual machine, or set up VPC peering.
- Finally, test and promote the migrated instance to be your primary Cloud SQL instance.

Security & Compliance

Data in Cloud SQL is automatically encrypted at rest and in transit, and external connections can be enforced to be SSL-only. For secure connectivity you can also use the Cloud SQL Proxy, a tool that helps you connect to your Cloud SQL instance from your local machine, and you can control network access with firewall protection. From a compliance perspective, Cloud SQL is SSAE 16, ISO 27001, PCI DSS v3.0, and HIPAA compliant.
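To make the connectivity piece concrete, here is a minimal sketch of an application writing a transactional record to a Cloud SQL for MySQL instance through the Cloud SQL Proxy running locally. It is a sketch under stated assumptions, not part of the original post: the database name, table, credentials, and the proxy’s 127.0.0.1:3306 address are placeholders you would replace with your own.

```python
import os

import pymysql  # pip install pymysql

# Assumes the Cloud SQL Proxy is already running locally and listening on
# 127.0.0.1:3306 for your instance, e.g. started with something like:
#   cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306
# The database, table, user, and password below are hypothetical placeholders.
connection = pymysql.connect(
    host="127.0.0.1",
    port=3306,
    user=os.environ.get("DB_USER", "app_user"),
    password=os.environ["DB_PASS"],  # deliberately fails fast if unset
    database="orders",
)

try:
    with connection.cursor() as cursor:
        # A simple transactional write: the kind of OLTP workload Cloud SQL targets.
        cursor.execute(
            "INSERT INTO purchases (customer_id, amount_cents) VALUES (%s, %s)",
            (42, 1999),
        )
    connection.commit()
finally:
    connection.close()
```

Because the proxy handles encryption and authorization of the connection, the application only ever talks to a local port and never embeds TLS or network details of the instance.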
Cloud SQL in action

Cloud SQL can be used in many scenarios in conjunction with different compute options. You can use it in any application as a transactional database, pair it with BigQuery as a long-term analytics backend, with Vertex AI for predictive analytics, and with Pub/Sub for event-driven messaging (a brief sketch of that pattern appears at the end of this post). Combined with Datastream for change data capture, Cloud SQL also makes a great real-time analysis solution for incoming data. Here are a few examples:

- Web and mobile applications
- Mobile gaming
- Predictive analysis
- Inventory tracking

Conclusion

Whatever your application use case may be, Cloud SQL is designed to integrate with services within and outside Google Cloud. Use this fully managed relational database and let Google take care of the endless maintenance required to run a database, including setting up servers, applying patches and updates, configuring replication, and managing backups. Instead, focus your energy on higher-priority work where you can really add value. For a more in-depth look at Cloud SQL, check out the documentation.

For more #GCPSketchnote, follow the GitHub repo. For similar cloud content, follow me on Twitter @pvergadia and keep an eye on thecloudgirl.dev.

Related article: Migrate your Microsoft SQL Server workloads to Google Cloud. Cloud SQL for SQL Server is now available, so it’s easy to migrate your Microsoft SQL Server 2008 instances for a managed, compatible dat…
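As promised above, here is a minimal sketch of the event-driven pattern: once a transaction has been committed to Cloud SQL, the application publishes an event to Pub/Sub for downstream consumers. The project ID, topic name, and payload fields are hypothetical placeholders, not part of the original post.

```python
import json

from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

# Hypothetical project and topic names; replace with your own.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "order-events")


def publish_order_event(order_id: int, customer_id: int, amount_cents: int) -> None:
    # Call this only after the corresponding row has been committed to Cloud SQL,
    # so consumers never see events for rows that were rolled back.
    payload = {
        "order_id": order_id,
        "customer_id": customer_id,
        "amount_cents": amount_cents,
    }
    future = publisher.publish(topic_path, data=json.dumps(payload).encode("utf-8"))
    future.result()  # block until Pub/Sub acknowledges the message; raises on failure


if __name__ == "__main__":
    publish_order_event(order_id=1001, customer_id=42, amount_cents=1999)
```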

How to check all the right boxes for your cloud migration before you get started

Cloud adoption and migrations have continued to accelerate over the years. According to Flexera’s 2021 State of the Cloud report, 31% of respondents spend at least $12 million per year on public cloud, up from 16% in the same survey in 2020. Additionally, enterprises already run about 47% of their workloads and store 44% of their data in the public cloud. Organizations have come a long way, but many are still only about halfway to a full migration to the public cloud.

As the public cloud becomes a multi-billion-dollar business and IaaS continues to grow its share of computing, enterprises are leveraging the public cloud more each year. But even though self-provisioning new workloads in public clouds is simple, migrating existing services to the cloud requires more preparation. A common misperception is that migrating existing workloads to the public cloud, especially those with a lot of data, is complex, time-consuming, and risky. With the right planning, however, enterprise IT organizations can rapidly establish good migration practices that accelerate migrations and lower risk.

To help you through all of this, we’ve put together a white paper of essential tips for the four key phases of the migration process: Assess, Plan, Migrate, and Optimize. It explores each phase and concludes with a handy checklist to help you get started right away.

At Google Cloud, we’re here to help make sure your migration is successful from start to finish (and beyond). To learn more, download the migration guide and checklist. Or, if you’re ready to jump-start your migration today, take advantage of our current offer by signing up for a free discovery and assessment, or explore our Rapid Assessment and Migration Program (also known as RAMP).