Customer Care portfolio: Flexible, scalable, robust support

Technical support is now more critical than ever. It’s crucial to keeping your business running smoothly, while rapidly adjusting to an increasingly hybrid workforce that needs to stay connected at all times. Although the scale may vary, organizations of all sizes face similar challenges. We launched the Cloud Customer Care portfolio, a significant evolution in our technical support services, to address your needs with more comprehensive, scalable, and flexible services that can help you focus on your core business and provide the service you expect from Google Cloud – regardless of the size of your organization.

A reasonably priced technical support service for an unlimited number of users, Standard Support is intended for the general needs of small- to medium-sized organizations that have workloads in development. But as your business looks to build capacity and maintain workloads in production, you’ll need rapid critical-incident response, greater flexibility, and more specialized features. That’s where our Enhanced Support can provide exceptional value.

Enhanced Support

Unplanned downtime, especially during planned events, can be catastrophic. Our Enhanced Support service is designed to keep you up and running with faster response times 24/7, along with direct access to technical support cases, our Cloud Support API to optimize management, and workload-centric support for multitechnology environments.

But special circumstances demand special attention. That’s why we’ve created Value-Add Services for Enhanced Support that can give you the flexibility to:

- Receive expert assistance with our Technical Account Advisor Service. This service includes guided onboarding and ongoing hands-on stewardship, as well as monthly, quarterly, and yearly reviews, trend analysis, optimization recommendations, and dedicated case-escalation management for critical incident response.
- Get ahead of key business events that drive sudden high-traffic spikes like product launches, grand openings, or data migrations with Planned Event Support. Working with your team, we cover pre-event architecture reviews and accelerated response times, all followed by comprehensive post-event reporting that details pitfalls, successes, and lessons learned.
- Add a layer of governance to your support experience with Assured Support. By restricting support services to personnel who meet geographical-location and attribute-based requirements, it helps you ensure compliance with local standards, maintain data integrity and sovereignty, and maximize operational efficiencies.

“The combination of Enhanced Support and the Technical Account Advisor Service is the ideal solution for us at Moloco. It is an inexpensive way to access the timely attention we need, when we need it. From the start, we’ve experienced noticeable improvements with response times, technical guidance, and service reviews critical to our business success.”
Changhoon Kim, VP of Engineering, Moloco

In short, Enhanced Support helps you optimize your cloud experience with high-quality and robust support, fast response times, and additional services for businesses of all sizes. And if you sign up for Enhanced Support now, you’ll receive a 50% discount until March 31, 2022.

What’s next for customers?

Existing Silver, Gold, and Role-Based Support services will end for customers on May 31, 2022. Make the move now to our new Customer Care portfolio and keep your support services running seamlessly – with added capabilities.
What’s next for partners?

Existing Role-Based Support services will end for partners on May 31, 2022. To help ensure services continue to run seamlessly, be sure to move your organization – or, for resellers, your customer’s organization – to our new Customer Care portfolio prior to that date. For more information on partner programs and benefits, please refer to the Partner Advantage portal.

If customers and partners choose not to make the transition, current support services will automatically transition to Basic Support, a nontechnical service for admin and billing inquiries only.

What’s right for you?

To get started, compare support services – including Basic, Standard, Enhanced, and Premium Support – and explore our pricing calculator to find the level that’s best for your needs and budget. Once you’ve selected your service, making the switch is simple, but the process looks a little different depending on your current plan. Check out step-by-step instructions for transitioning from Role-Based Support or transitioning from Silver or Gold Support. You can also sign up through the Google Cloud Console or contact your sales rep.

Questions? Concerns? Suggestions? We want to hear from you.

Your input is critical to how we continue to grow and refine the entire Cloud Customer Care portfolio. That’s why we regularly assess the effectiveness of our support services and base future improvements directly on your feedback. If you have any questions regarding which service is right for you or need assistance making the move, please contact us at Cloud Customer Care Support. Sign up for Enhanced Support through the Cloud Console or contact your sales rep, and receive a 50% discount until March 31, 2022.

Related article: Mission Critical Services: for the most demanding enterprise environments
Mission Critical Services (MCS), a new Value Add Service available for purchase by Premium Support customers, is based on Google Cloud’s …
Source: Google Cloud Platform

Unlock more choice with updates to Google Cloud’s infrastructure capabilities and pricing

Over the past several years, Google Cloud has made significant investments in our infrastructure product portfolio. We launched new Tau T2D VMs, which deliver 42% better price-performance vs. other leading cloud providers. We upgraded Cloud Storage to offer more flexibility to support customers’ enterprise and analytics workloads, with dual-region buckets and upcoming Turbo Replication. And we’ve delivered numerous improvements to our global network, including expansion to 29 cloud regions.

However, from conversations with customers, we’ve also learned we can do more to align our capabilities and pricing with their varied workloads. So, today, we are announcing we will adjust our infrastructure product and pricing structure to give customers more choice in how they pay for what they use, alongside new, flexible SKUs with new product options and capabilities. These changes are designed to help ensure better product fit for our customers’ use cases across a wider array of workloads. They are also designed to better align with how other leading cloud providers charge for similar products, so customers can more easily compare services between leading cloud providers. Some of these changes will provide new, lower-cost options and features for Google Cloud products. Other changes will raise prices on certain products. Ultimately, our goal is to provide more flexible pricing models and options for how customers are using our cloud services. Here’s an overview of what customers can expect:

Which services are changing? What new services are being introduced?

We are changing prices for some storage, compute, and networking products. The changes provide customers with new ways to optimize their spending based on workload type and size, or data portability needs, as well as reducing costs on some services. Specific changes include:

- Cloud Storage pricing changes for data mobility, including replication of data written to a dual- or multi-region storage bucket, and inter-region data access
- Introduction of a new lower-cost archive snapshot option for Persistent Disk (PD), so that compliance/archiving use cases are charged less than compute-intensive DevOps workloads
- New outbound data processing pricing for Cloud Load Balancing, in line with other leading cloud providers
- New pricing for Network Topology, which will include Performance Dashboard within Network Intelligence Center at no additional charge

Will customers’ bills increase? Decrease?

The impact of the pricing changes depends on customers’ use cases and usage. While some customers may see an increase in their bills, we’re also introducing new options for some services to better align with usage, which could lower some customers’ bills. In fact, many customers will be able to adapt their portfolios and usage to decrease costs. We’re working directly with customers to help them understand which changes may impact them.

When will the new prices go into effect?

Today, we sent customers a six-month notice on the price changes, which go into effect on October 1, 2022. Customers under existing commit contracts with a floating or fixed discount will not face any changes until renewal. Our goal is to help our customers manage any impact of these changes and allow time for them to adjust or modify their implementations.
What should customers do next?

There are a number of things customers can do to prepare for the changes:

- Read through the Mandatory Service Announcement (MSA) sent on March 14.
- Consider what actions, if any, they may want to take based on current storage, networking, and compute needs. Many of these changes may have simple choices associated with them.
- Consider using the Storage Transfer Service to select the right Cloud Storage bucket locations. Storage Transfer Service will be available free of cost for transfers within Cloud Storage, starting April 2 until the end of the year.
- For those customers under contract, Google Cloud account representatives are available to discuss these changes.

Please visit our pricing page and the links below for more details on our updates to storage, networking, and PD pricing, including information on how to modify your implementations if needed. If you do not have an account manager and still have questions, please review our public FAQ, which will be updated regularly, as well as the resource links below.

Note: This pricing analysis is valid as of February 2022.

Resources:

- Cloud Storage pricing announcements
- Load Balancing pricing announcements
- Network Intelligence Center pricing announcements
- PD pricing announcements
- Public FAQ

Related article: A year in review: Advancements in infrastructure at Google Cloud
A recap of the year’s infrastructure progress, from impressive Tau VMs, to industry-leading storage capabilities, to major networking leaps.
Source: Google Cloud Platform

Data Governance in the Cloud – part 2 – Tools

This is part 2 of the Data Governance blog series published in January. This blog focuses on technology to implement data governance in the cloud.

Along with a corporate governance policy and a dedicated team of people, implementing a successful data governance program requires tooling. From securing data, retaining and reporting audits, enabling data discovery, and tracking lineage, to automating monitoring and alerts, multiple technologies are integrated to manage the data life cycle. Google Cloud offers a comprehensive set of tools that enable organizations to manage their data securely, ensure governance, and drive data democratization. These tools fall into the following categories:

Data Security

Data security encompasses securing data from the point it is generated, acquired, transmitted, stored in permanent storage, and retired at the end of its life. Multiple strategies, supported by various tools, are used to ensure data security and to identify and fix vulnerabilities as data moves through the data pipeline.

Google Cloud’s Security Command Center is a centralized vulnerability and threat reporting service. Security Command Center is a built-in security management tool for the Google Cloud platform that helps organizations prevent, detect, and remediate vulnerabilities and threats. It can identify security and compliance misconfigurations in your Google Cloud assets and provides actionable recommendations to resolve the issues.

Data Encryption

All data in Google Cloud is encrypted by default, both in transit and at rest. All VM-to-VM traffic, client connections to BigQuery, serverless Spark, Cloud Functions, and communication to all other services in Google Cloud within a VPC, as well as between peered VPCs, is encrypted by default. In addition to default encryption, which is provided out of the box, customers can also manage their own encryption keys in Cloud KMS. Client-side encryption, where customers keep full control of the encryption keys at all times, is also available.

Data Masking and Tokenization

While data encryption ensures that data is stored and travels in an encrypted form, end users are still able to see the sensitive data when they query the database or read a file. Several compliance regulations require de-identifying or tokenizing sensitive data. For example, GDPR recommends data pseudonymization to “reduce the risk on data subjects”. De-identified data reduces the organization’s obligations on data processing and usage. Tokenization, another data obfuscation method, provides the ability to do data processing tasks such as verifying credit card transactions without knowing the real credit card number. Tokenization replaces the original value of the data with a unique token. The difference between tokenization and encryption is that data encrypted using keys can be deciphered using the same keys, while tokens are mapped to the original data in the tokenization server. Without access to the token server, tokens prevent deciphering of the original value even if a bad actor gets access to the token.

Google’s Cloud Data Loss Prevention (DLP) automatically detects, obfuscates, and de-identifies sensitive information in your data using methods like data masking and tokenization.
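To make this concrete, here is a minimal sketch in Java of the kind of de-identification call Cloud DLP performs. It masks email addresses found in free text; the project ID and input string are placeholder values for this illustration, and a production pipeline would typically drive the same API through de-identification templates or inspection jobs.

import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.CharacterMaskConfig;
import com.google.privacy.dlp.v2.ContentItem;
import com.google.privacy.dlp.v2.DeidentifyConfig;
import com.google.privacy.dlp.v2.DeidentifyContentRequest;
import com.google.privacy.dlp.v2.DeidentifyContentResponse;
import com.google.privacy.dlp.v2.InfoType;
import com.google.privacy.dlp.v2.InfoTypeTransformations;
import com.google.privacy.dlp.v2.InspectConfig;
import com.google.privacy.dlp.v2.LocationName;
import com.google.privacy.dlp.v2.PrimitiveTransformation;

public class DeidentifyWithMaskingSketch {
  public static void main(String[] args) throws Exception {
    String projectId = "my-project";                     // placeholder project
    String text = "Contact me at jane.doe@example.com";  // placeholder input

    try (DlpServiceClient dlp = DlpServiceClient.create()) {
      // Inspect for email addresses only; add more infoTypes as needed.
      InspectConfig inspectConfig =
          InspectConfig.newBuilder()
              .addInfoTypes(InfoType.newBuilder().setName("EMAIL_ADDRESS").build())
              .build();

      // Replace every character of a finding with '#'.
      CharacterMaskConfig maskConfig =
          CharacterMaskConfig.newBuilder().setMaskingCharacter("#").build();
      PrimitiveTransformation transformation =
          PrimitiveTransformation.newBuilder().setCharacterMaskConfig(maskConfig).build();
      DeidentifyConfig deidentifyConfig =
          DeidentifyConfig.newBuilder()
              .setInfoTypeTransformations(
                  InfoTypeTransformations.newBuilder()
                      .addTransformations(
                          InfoTypeTransformations.InfoTypeTransformation.newBuilder()
                              .setPrimitiveTransformation(transformation)
                              .build()))
              .build();

      DeidentifyContentRequest request =
          DeidentifyContentRequest.newBuilder()
              .setParent(LocationName.of(projectId, "global").toString())
              .setItem(ContentItem.newBuilder().setValue(text).build())
              .setInspectConfig(inspectConfig)
              .setDeidentifyConfig(deidentifyConfig)
              .build();

      DeidentifyContentResponse response = dlp.deidentifyContent(request);
      // Prints the masked text, e.g. "Contact me at #########################"
      System.out.println(response.getItem().getValue());
    }
  }
}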
When building data pipelines or migrating data into the cloud, integrate Cloud DLP to automatically detect and de-identify or tokenize sensitive data, and allow data scientists and users to build models and reports while minimizing the risk of compliance violations.

Fine-Grained Access Control

BigQuery supports fine-grained access control for your data in Google Cloud. BigQuery access control policies can be created to limit access at the column and row level. Column- and row-level access control combined with DLP allows you to create datasets that have a safe (masked or encrypted) version of the data and a clear version of the data. This promotes data democratization, where the CDO can trust the guardrails of Google Cloud to allow access correctly according to the user identity, accompanied by audit logs to ensure a system of record. Data can be shared across the organization to run analysis and build machine learning models while ensuring that sensitive data remains inaccessible to unauthorized users.

Data Discovery, Classification and Data Sharing

The ability to find data easily is crucial to an effective data-driven organization. Data governance programs leverage data catalogs to create an enterprise repository of all metadata. These catalogs allow data stewards and data users to add custom metadata and create business glossaries, and allow data analysts and scientists to search for data to analyze across the organization. Certain data catalogs also let users request access to data within the catalog, which can be approved or denied based on policies created by data stewards.

Google Cloud offers a fully managed and scalable Data Catalog to centralize metadata and support data discovery. Google’s Data Catalog adheres to the same access controls the user has on the data (so users will not be able to search for data they cannot access). Further, Google’s Data Catalog is natively integrated into the GCP data fabric, without the need to manually register new datasets in the catalog – the same “search” technology that scours the web auto-indexes newly created data. In addition, Google partners with major data governance platforms, e.g. Collibra and Informatica, to provide unified support for your on-prem and multi-cloud data ecosystem.

Data Lineage

Data lineage allows tracing data back to its sources, allowing data scientists to ensure their models are trained on carefully sourced data, allowing data engineers to build better dashboards from known data sources, and allowing policies to be inherited from data sources by their derivatives (so if a sensitive data source is used to create an ML model, that ML model can be labeled sensitive as well).

The ability to trace data to the source and keep a log of all changes made as the data progresses through the data pipeline gives data owners a clear picture of the data landscape. It makes it easier to identify data not tracked in data lineage and take corrective action to bring it under established governance and controls. When data is scattered across on-prem, cloud, or multi-cloud environments, a centralized lineage tracking platform gives a single view of where data originated and how it is moving across the organization. Tracking lineage is imperative to control costs, ensure compliance, reduce data duplication, and improve data quality.

Google Cloud’s Data Fusion provides end-to-end data lineage to help with governance and ensure compliance.
A data lineage system for BigQuery can also be built using Cloud Audit Logs, Data Catalog, Pub/Sub, and Dataflow. The architecture of building such a lineage system is described here. Additionally, Google’s rich partner ecosystem includes market leaders providing data lineage capabilities for on-prem and hybrid clouds, e.g. Collibra. Open source systems, e.g. Apache Atlas, can also be implemented to collect metadata and track lineage in Google Cloud.

Auditing

It is important to keep all data access records for auditing purposes. Audits can be internal or external. Internal audits ensure that the organization is meeting all compliance criteria and that corrective action is taken if needed. If an organization is operating in a regulated industry or keeping personal information, then keeping audit records is a compliance requirement.

Google Cloud Audit Logs can be turned on to ensure compliance with audits in Google Cloud and answer “who did what, where, and when?” across Google Cloud services. Cloud Logging (formerly Stackdriver) aggregates all the log data from your infrastructure and applications in one place. Cloud Logging automatically collects data from Google Cloud services, and you can feed in application logs using the Cloud Logging agent, Fluentd, or the Cloud Logging API. Logs in Cloud Logging can be forwarded to Cloud Storage for archival, to BigQuery for analysis, and also streamed to Pub/Sub to share logs with external third-party systems. Finally, the Logs Explorer allows you to easily retrieve, parse, and analyze logs and build dashboards to monitor logging data in real time.

Data Quality

Before data can be embedded in the decision-making process, organizations need to ensure data meets the established quality standards. These standards are created by data stewards for their data domains. Google Dataprep by Trifacta provides a friendly user interface to explore data and visualize data distributions. Business users can use Dataprep to quickly identify outliers, duplicates, and missing values before using data for analysis.

GCP’s Dataplex enables data quality assessment through declarative rules that can be executed on Dataplex serverless infrastructure. Data owners can create rules to find duplicate records and ensure completeness, accuracy, and validity (e.g., a transaction date cannot be in the future). Data owners can schedule these checks using Dataplex’s scheduler or include them in a pipeline by using the APIs. Data quality metrics are stored in a BigQuery table and/or made available in Cloud Logging for further dashboarding and automation.

Additionally, Google’s rich partner ecosystem includes leading data quality software providers, e.g. Informatica and Collibra. Data quality tools are used to monitor on-prem, cloud, and multi-cloud data pipelines to identify quality issues and quarantine or fix poor-quality data.

Analytics Exchange

Organizations looking to democratize data need a platform to easily share and exchange data analytics assets. The dashboard, report, or model that one team has built is often useful to other teams. In large organizations, in the absence of an easy way to discover and share these assets, work is duplicated, leading to higher costs and lost time. Exchanging analytics assets also enables teams to discover data issues, improving reliability and data quality. Increasingly, organizations are also looking to exchange analytics assets with external partners.
These exchanges can be used to negotiate better costs with vendors and even create a revenue stream, depending on the use case.

Analytics Hub enables organizations to securely share and subscribe to analytics assets. Analytics Hub is a critical tool for organizations looking to democratize data and embed data in all decision-making across the organization.

Compliance Certifications

Before organizations can migrate data to the cloud, they need to ensure all compliance requirements have been met. An organization may be required to comply with regulations because of the region it operates in, e.g. CCPA in California, GDPR in Europe, and LGPD in Brazil. Organizations are also subject to regulations because of their specific industry, e.g. PCI DSS in banking, HIPAA in healthcare, or FedRAMP when working with the US federal government.

Google Cloud has more than 100 compliance certifications that are specific to regions and industries, and continues to add regulatory and compliance certifications to its portfolio. Dedicated compliance teams help customers ensure compliance as they migrate their data and onboard to Google Cloud.

Conclusion

Start your data governance journey by exploring Dataplex, Google’s solution for centrally managing and governing data across your organization. As you look towards implementing data democratization, consider Analytics Hub to build a data analytics exchange and share your analytics assets easily. Security is built into every Google product, and compliance certifications across the globe and industries ease data migrations to the cloud. If you have already started your cloud journey, ensure high-quality data and secure access to sensitive data attributes by using native Google Cloud and partner products in GCP.

Where to learn more: Google Data Governance leaders have captured best practices and data governance learnings in an O’Reilly publication: Data Governance: The Definitive Guide.

Related article: Data governance in the cloud – part 1 – People and processes
The role of data governance, why it’s important, and processes that need to be implemented to run an effective data governance program.
Source: Google Cloud Platform

Maximize your Cloud Spanner savings with new committed use discounts

Cloud Spanner is a fully managed relational database that offers near-unlimited scale, strong consistency, and industry-leading high availability of up to 99.999%. Spanner powers applications of all sizes in multiple industries, including financial services, gaming, retail, and healthcare. Spanner provides great value and price-performance, since it helps you save operational costs, provides multiple replicas of your data by default, and allows you to pay for only what you need. Many customers have built their mission-critical applications on Spanner and are committed to expanding its usage to transform many more applications.

We are excited to announce the launch of Committed Use Discounts (CUDs) to further reduce costs for customers committing to use Spanner. You can get up to a 40% discount on Spanner compute capacity by purchasing committed use discounts. Spanner committed use discounts provide deeply discounted prices in exchange for your commitment to continuously use Spanner compute capacity (as measured in nodes or processing units) for a one- or three-year period. A one-year commitment provides a 20% discount, whereas a three-year commitment provides a 40% discount! Spanner committed use discounts are available now and are applicable for all Spanner instance configurations in all regions.

Greater flexibility drives higher utilization

Spanner committed use discounts provide full flexibility in terms of how discounts are applied. Once you make a commitment to spend a certain amount on an hourly basis on Spanner from a billing account, you can get discounts on instances in different instance configurations, regions, and projects associated with that billing account. Both regional and multi-region instances can utilize the same spend commitment. This flexibility helps you achieve a high utilization rate of your commitment across regions and projects without manual intervention, saving you time and money. If for business reasons you need to migrate your application from single-region to multi-region in the future, you can do so with the same commitment while continuing to enjoy the discounts.

Committed use discounts, along with other launches such as the PostgreSQL interface and granular instance sizing, democratize access to Spanner and make it easier for you to power more of your workloads with Spanner.

How to purchase committed use discounts

You can purchase a Cloud Spanner committed use discount on the Google Cloud Console billing page by selecting the Commitments tab and then selecting PURCHASE at the top, as shown below. Read the purchasing spend-based commitments section in Google Cloud’s documentation for more details.

Once you click PURCHASE, choose the billing account, commitment period, and hourly commitment amount, in terms of equivalent on-demand spend. This amount represents the equivalent of on-demand costs that you would have incurred without the committed use discount.

After you purchase a Spanner committed use discount, it automatically applies to aggregated spending on compute capacity (as measured in nodes or processing units) in all regions, instance configurations, and projects. We have provided this flexibility so that you need not make separate commitments for each region, and instead achieve higher savings by automatically applying the discount in all regions. Spanner committed use discounts don’t apply to storage, backup, and network pricing. When you purchase a Spanner committed use discount, you pay the same commitment fee for the entire commitment period.
You still receive the same discount percentage on applicable usage in the event of a price change. The commitment fee is billed monthly.

When are committed use discounts right for you?

Spanner committed use discounts are ideal when your spending on Spanner compute capacity has a predictable portion that you can commit to for a one- or three-year period. Let’s take an example. Say you have a couple of Spanner instances in different regions and have provisioned a total of 125 nodes, and you are consistently spending an average of $100/hour on this compute capacity. Let’s further assume that you feel confident this usage rate will not decline over the next year.

This sort of steady usage represents an excellent opportunity to buy a Spanner committed use discount—in this case, a one-year commitment to spend $100/hour on Spanner nodes, in exchange for a 20% discount on that commitment. Let’s look at how such a purchase would apply to three different per-hour billing scenarios.

- In the first hour, you spend $100 on Spanner nodes. This matches your commitment exactly, with no overage. With the commitment’s 20 percent discount applied, this hour would cost you $80, saving you $20.
- Let’s say you scaled up Spanner nodes in the next hour, spending $110. You still enjoy a 20% discount for the $100 that your commitment covers. The remaining $10 gets billed at the on-demand rate. Your bill for this hour would come to $80 plus the $10 of usage beyond the coverage of the commitment, for a total of $90. Compared to the full $110 for the second hour, that still nets a $20 savings, just as in the previous hour.
- In the third hour, you scale down Spanner nodes, spending only $85. Your bill for this hour would still be $80, based on a $100/hour commitment with the 20% discount applied.

As you can see, this commitment has saved you $45 over a three-hour span—even though one of those hours had spending below the $100/hour commitment. Given that a typical month contains around 730 hours, a well-chosen committed use discount can add up to significant monthly savings for you. Let us see how, considering $100 per hour of on-demand spend on Spanner:

Monthly expenditure based on the on-demand rate = $100 per hour * 730 hours = $73,000
Monthly expenditure based on a 1-year commitment = ($100 per hour * (1 – 20%)) * 730 hours = $58,400 per month
Total savings per month = $73,000 – $58,400 = $14,600
Total savings in 1 year = $14,600 per month * 12 months = $175,200

You can save even more by making a 3-year commitment:

Monthly spend based on a 3-year commitment = ($100 per hour * (1 – 40%)) * 730 hours = $43,800 per month
Total savings per month = $73,000 – $43,800 = $29,200
Total savings in 3 years = $29,200 per month * 36 months = $1,051,200

These examples show how committed use discounts can help you achieve significant savings on Spanner usage. Now you can use committed use discounts to expand your Spanner usage and power more applications with Spanner’s consistency, availability, and scalability guarantees.

Learn more

Check out our documentation for more details on Spanner committed use discounts. For Spanner pricing information, take a look at our pricing page. To get started with Spanner, create an instance or try it out with a Spanner Qwiklab.

Related article: Vimeo builds a fully responsive video platform on Google Cloud
The video platform Vimeo leverages managed database services from Google Cloud to serve up billions of views around the world each day.
Source: Google Cloud Platform

Google Cloud Partners driving Retail and Commerce Innovation

With strains on the supply chain and other pandemic-driven economic challenges intensifying, retailers this year face some of their biggest mandates to date: increasing operational efficiency, delivering a seamless online and in-store experience, and staying one step ahead of rapidly changing customer preferences while offering exceptional customer service. This is why Google’s internal media teams continuously monitor and analyze the retail market and innovate with their services partners. Their goal is to help business leaders maximize business outcomes in this new landscape.

All of these challenges require business leaders to precisely balance investments in pricing, promotion, product assortment, technology, and in-store experiences. The good news is these same leaders know intuitively that cloud technology can help. And with the help of a trusted partner who has the necessary technical knowledge and business experience, they can put the right plan in place to move forward and win.

Let me show you how some of our Google Cloud partners and customers are solving real-world retail and commerce business challenges.

Air Asia: accelerating forecasting and budgeting, and increasing agility

Faster budgeting and forecasting enabled Air Asia’s BIG Rewards leadership team to transition the business from survival mode during the pandemic to more long-term program stability looking forward. BIG Rewards partnered with Searce to innovate and quickly implement a solution with improved speed and accuracy that democratizes data to better serve the business and empower all users to create relevant reports for decision-making. With Connected Sheets, BIG Rewards employees can analyze and report on large volumes of data through a familiar Sheets interface, accelerating decision-making and enabling the business to become more agile and relevant to a fast-changing market.

“With the Google Cloud and Google Workspace solution, we expect to reduce the time to complete budgets from up to three months to just two to three weeks and lower the time to undertake quarterly forecasts from one month to one to two weeks.”
—Sereen Teoh, Chief Financial Officer, BIG Rewards

Well-designed experiments delivering measurable ROI for American Eagle Outfitters

American Eagle Outfitters conducts store experiments on its key initiatives before scale-up by leveraging Google Cloud data and machine learning capabilities and Accenture’s retail data science expertise to remain quick and agile. From concept to production and deployment in four months, American Eagle Outfitters saved millions of dollars through store testing and a cost-effective platform, and gained the ability to understand the performance of in-store tests at a granular level across multiple metrics. Their centralized data store for transactions, inventory, and web data is used for multiple solutions without compromising on performance, and leads to accelerated solution development.

“BigQuery gave us the scalability and processing power to analyze massive datasets that were previously too hard to manage in our old systems.”
—Jimmy Hunkele, Director of Data Analytics, American Eagle Outfitters

Uniting top retail brand Unify onto one collaboration platform

With the help of Devoteam, Unify successfully brought the companies behind France’s top digital media brands together onto one communications platform in just two months.
Despite the COVID-19 lockdown, the migration helped improve the speed of collaboration by reducing the dependence on email using Google Meet and Google Sheets. By installing a single communications solution for multiple companies, the team reaps the rewards of a shared CRM system to illuminate new synergies and enable remote change management using face-to-face interaction. The migration transforms working norms by enabling remote collaboration between brands with tools that create harmony and support customer experiences.

“Google Workspace is more intuitive than other solutions and simplifies large account migrations with automated processes. The question was never ‘Which system should we use?’ but always ‘How can we bring everyone to Google Workspace?’”
—Charles Misson, Manager of Corporate IT, Unify

Cultivating a vision at 1-800-FLOWERS.COM, Inc.

To best manage all the eCommerce environments associated with its family of brands and ensure outstanding customer service, 1-800-FLOWERS.COM, Inc. has been working with MongoDB and Google Cloud to revolutionize its DevOps culture. As organizations modernize IT, MongoDB encourages DevOps professionals to place more importance on understanding customers, driving business value, and taking a people-first approach to work. By encouraging experimentation and innovation, 1-800-FLOWERS.COM, Inc. opens up new possibilities for software and infrastructure together. As a result, their DevOps team is able to act independently and bolster performance as demand fluctuates due to turbulent external factors across the retail marketplace.

“From agility in scaling and improved resource management to seamless global clusters and premium monitoring, MongoDB and Google Cloud reduce complexity and allow our teams to stay lean and focused on innovation rather than infrastructure.”
—Abi Sachdeva, Chief Technology Officer

Partner specializations create unique opportunities for retailers

These four examples show how Google Cloud, along with its services partners, helps retailers achieve their digital transformation goals with intelligent, data-driven solutions extended by our partner ecosystem. One of the beauties of working with a partner is the instant access to the expertise and experience necessary to align challenges with solutions and aspirations with reality. We continue to add thousands of people across Google Cloud to ensure our partners and customers receive all the support needed to thrive and win.

Looking for a solution-focused partner in your region who has achieved an Expertise and/or Specialization in your industry? Search our Global Partner Directory. Not yet a Google Cloud partner? Visit Partner Advantage and learn how to become one today! Learn more about how Google Cloud is transforming retail and e-commerce to meet changing customer expectations at the NRF 2022 archives.

Related article: Leading with Google Cloud & Partners to modernize infrastructure in manufacturing
Learn how Google Cloud Partner Advantage partners help customers solve real-world business challenges in manufacturing.
Source: Google Cloud Platform

The L’Oréal Beauty Tech Data Platform – A data story of terabytes and serverless

Editor’s note: In today’s guest post we hear from beauty leader L’Oréal about their approach to building a modern data platform on fully managed services: managing the ingest of diverse datasets into BigQuery with Cloud Run, and orchestrating transformations into relevant business domain representations for stakeholders across the organization. Learn more about how businesses have benefited from Cloud Run in Forrester’s report on Total Economic Impact.

L’Oréal was born out of science. For over 100 years, we have always shaped the future of beauty and taken its eternal quest to new horizons. This has earned us our current position as the world’s uncontested beauty leader (~€32B in annual sales in 2021), present in 150 countries with over 85,000 employees. Today, with the power of our game-changing science, multiplied by cutting-edge technologies, we continue our lifelong journey of shaping the future of beauty.

As a Beauty Tech company, we leverage our decades-long heritage of rich data assets to empower our decision-making with instant, sophisticated analysis. Because we oversee global brands, which must adapt to local requirements, we need to maintain a deep understanding of what a brand’s data represents, while managing disparate legal and regulatory requirements for different countries. Our end goal is to run a safe, compliant, and sustainable data warehouse as efficiently and effectively as possible. We sync and aggregate internal and external data from a wide variety of sources across organizations and retail stores. Before Google Cloud, this made the management of our data warehouse infrastructure very complex. L’Oréal’s footprint was so large that we once found it impossible to have a standardized method to handle data. Every process was vendor-specific, and the infrastructure was brittle.

We went looking for a solution to our complex data infrastructure needs, and defined the following non-negotiable principles:

- No Ops: The job of a developer at L’Oréal is not to manage servers. We need an elastic infrastructure that scales on demand, so that our developers can focus on delivering customized and inclusive beauty experiences to all consumers, rather than focusing on managing servers.
- Secure: We have strict security and compliance requirements which vary by country, and we employ a zero-trust security strategy. We must keep both our own internal data and customer data safe and encrypted.
- Sustainable: Our data lives in multiple environments, including on-prem data centers and public cloud services. We must be able to securely access and analyze this data while minimizing the complexity and environmental impact of moving and duplicating data.
- End-to-end supervision: Because developers shouldn’t be managing servers, we need a “single pane of glass” dashboard to monitor and triage the system if something goes wrong.
- Easy-to-deploy: Deploying code safely should not compromise velocity. We are constantly developing innovations that push the boundaries of science and reinvent beauty rituals. We need integrated tools to make our code deployment process seamless and safe.
- Event-driven architecture: Our data is used globally by research, product, business, and engineering teams with high expectations on data quality and timeliness. Many of our internal processes and analyses are based on near real-time data.
- Data products delivered “as a service”: We want to empower our employees to drive business value at record speed.
To that end, we need solutions that enable us to remove the developers from the critical path of solution delivery as much as possible.
- Extract-load-transform (ELT): Our goal is to implement the pattern of loading data into the data warehouse as soon as possible to take advantage of SQL transformations.

After considering multiple vendors on the market, with these principles in mind, we landed on end-to-end Google Cloud serverless and data tooling. We were already using Google Cloud for a few processes, including BigQuery, and loved the experience. We’ve now expanded our use of Google Cloud to fully support the L’Oréal Beauty Tech Data Platform.

L’Oréal’s Beauty Tech Data Platform incorporates data from two types of sources: directly via API, which is data that adapts easily to our schema and is inserted directly into BigQuery, and bulk data from integrations, which requires event-driven transformations using Eventarc mechanisms. These transformations are performed in Cloud Run and Cloud Functions (2nd gen), or directly in SQL. With Google Cloud, we can adapt very quickly. Today, we have 8,500 flows for ~5,000 users using the native zero-trust capabilities offered by Google Cloud; the flows come from Google Cloud and other third-party services.

BigQuery enabled us to adopt standard SQL as our universal language in our data warehouse and meet all expectations for queries and reporting. We were also able to load original data using features like federated queries, and efficiently transitioned from ETL to ELT data ingestion by handling semi-structured data with SQL. This approach of loading original data from sources into BigQuery with non-destructive transformations allows us to reprocess data for new use cases easily, directly within BigQuery.

Our applications are hosted on multiple environments – on-premises, in Google Cloud, and in other public clouds. This made it difficult for our data engineers and analysts to natively analyze data across clouds until we started using BigQuery Omni. This capability of BigQuery allowed us to globally access and analyze data across clouds through a single pane of glass using the native BigQuery user interface itself. Without BigQuery Omni, it would’ve been impossible for our teams to natively do cross-cloud analytics. Moreover, it eliminated the need for us to move sensitive data, which is not only expensive because of local tax and subsea transport, but also incredibly risky – sometimes even forbidden – because of local regulations.

Today Google Cloud powers our Beauty Tech Data Platform, which stores 100TB of production data in BigQuery and processes 20TB of data each month. We have more than 8,000 governed datasets and 2 million BigQuery tables coming from multiple data sources such as Salesforce, SAP, Microsoft, and Google Ads. For more complex transformations where custom and specific libraries are required, Cloud Workflows helps us manage the complexity very efficiently by orchestrating steps in containers through Cloud Run, Cloud Functions, and even BigQuery jobs — the most used way to transform and add value to the L’Oréal data. Additionally, by using BigQuery and Google Cloud’s serverless compute for API ingestion, bulk data loading, and post-loading transformations, we can keep the entire system in a single boundary of trust at a fraction of the cost.
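A minimal sketch of that “directly via API” ingestion path, written in Java with the BigQuery client library’s streaming insert API, looks like the following. The dataset, table, and field names are hypothetical stand-ins for a source whose records already match the destination schema; in the ELT spirit described above, transformations then happen later in SQL.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;
import java.util.Map;

public class DirectApiIngestSketch {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Hypothetical dataset and table, used only for illustration.
    TableId table = TableId.of("sales_domain", "pos_transactions_raw");

    // The record is appended as-is; SQL transformations run afterwards (ELT).
    Map<String, Object> row =
        Map.of("transaction_id", "tx-123", "country", "FR", "amount", 42.50);

    InsertAllResponse response =
        bigquery.insertAll(InsertAllRequest.newBuilder(table).addRow(row).build());

    if (response.hasErrors()) {
      // Per-row errors are keyed by the index of the failed row.
      response.getInsertErrors().forEach(
          (index, errors) -> System.err.println("Row " + index + " failed: " + errors));
    }
  }
}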
With ingest, queries, and transformations all being fully elastic and on-demand, we no longer have to perform capacity planning for either the compute or analytics components of the system. And of course these services’ pay-as-you-go model perfectly aligns with L’Oréal’s strategy of only paying for something when you use it.

Google Cloud fulfilled the requirements of our Beauty Tech Data Platform. And as if offering us a no-ops, secure, easy-to-deploy, custom-development-free, event-based platform with end-to-end supervision wasn’t enough, Google Cloud also helped us with our sustainability efforts. Being able to measure and understand the environmental footprint of our public cloud usage is also a key part of our sustainable tech roadmap. With Google Cloud Carbon Footprint, we can easily see the impact of our sustainable infrastructure approach and architecture principles.

Our Beauty Tech platform is a strategic ambition for L’Oréal: inventing the beauty products of the future while becoming the company of the future. Sustainable tech is an imperative and a very important step towards this ambition of creating responsible beauty for our consumers, and sustainable-by-design tech services for our employees. We all have a role to play, and by joining forces, we can have a positive impact.

Google Cloud’s data ecosystem and serverless tools are highly complementary, and made it possible to build a next-generation data analytics platform that met all our needs. Get started using serverless and BigQuery together on Google Cloud today.

Related article: Showing the speed of serverless through hackathon solutions
Google Cloud Easy as Pie Hackathon, the results are in.
Source: Google Cloud Platform

Women Techmakers journey to Google Cloud certification

In many places across the globe, March is celebrated as Women’s History Month, and March 8th, specifically, marks the day known around the world as International Women’s Day. Here at Google, we’re excited to celebrate women from all backgrounds and are committed to increasing the number of women in the technology industry.

Google’s Women Techmakers community provides visibility, community, and resources for women in technology to drive participation and innovation in the field. This is achieved by hosting events, launching resources, and piloting new initiatives with communities and partners globally. By joining Women Techmakers, you’ll receive regular emails with access to resources, tools, and opportunities from Google and Women Techmakers partnerships to support you in your career.

Google Cloud, in partnership with Women Techmakers, has created an opportunity to bridge the gaps in the credentialing space by offering a certification journey for Ambassadors of the Women Techmakers community. Participants will have the opportunity to take part in a free-of-charge, 6-week cohort learning journey, including weekly 90-minute exam guide review sessions led by a technical mentor, peer-to-peer support in the form of an online community, and 12 months of access to Google Cloud’s on-demand learning platform, Google Cloud Skills Boost. Upon completion of the coursework required in the learning journey, participants will receive a voucher for the Associate Cloud Engineer certification exam.

This program, and other similar offerings such as Cloud Career Jumpstart and the learning journey for members transitioning out of the military, are just a few examples of the investment Google Cloud is making in the future of the technology workforce. Are you interested in staying in the loop with future opportunities with Google Cloud? Join our community here.

Related article: Cloud Career Jump Start: our virtual certification readiness program
Cloud Career Jump Start is Google Cloud’s first virtual Certification Journey Learning program for underrepresented communities.
Source: Google Cloud Platform

Leveraging OpenTelemetry to democratize Cloud Spanner Observability

Today we’re announcing the launch of an OpenTelemetry receiver for Cloud Spanner, which provides an easy way for you to process and visualize metrics from Cloud Spanner system tables and export them to the APM tool of your choice. We have also built a reference integration with Prometheus and sample Grafana dashboards, which customers can use as a template for their own troubleshooting needs. This receiver is available starting with version v0.41.0.

Whether you are a database admin or a developer, it is important to have tools that help you understand the performance of your database, detect if something goes wrong (elevated latencies, increased error rates, reduced throughput, etc.), and identify the root cause of these signals. Cloud Spanner offers a wide portfolio of observability tools that allow you to easily monitor database performance and diagnose and fix potential issues. However, some of our customers would like the flexibility of consuming Cloud Spanner metrics in their own observability tooling, which could be either an open source combination of a time-series database like Prometheus coupled with a Grafana dashboard, or a commercial Application Performance Monitoring (APM) tool like Splunk, Datadog, Dynatrace, New Relic, or AppDynamics. The reason is that organizations have already invested in their own observability tooling and don’t want to switch, since switching to a different vendor or visualization console would require a great deal of effort. This is where OpenTelemetry comes in.

OpenTelemetry is a vendor-agnostic observability framework for instrumenting, generating, collecting, and exporting telemetry data (traces, metrics, and logs). It integrates with many libraries and frameworks across various languages to offer a large set of automatic instrumentation capabilities.

The OpenTelemetry Receiver

An OpenTelemetry receiver is a component of the OpenTelemetry Collector, which is built on a receiver-exporter model; by installing the new receiver for Cloud Spanner and configuring a corresponding exporter, developers can export metrics to their APM tool of choice. This architecture offers a vendor-agnostic implementation of how to receive, process, and export telemetry data. It removes the need to run, operate, and maintain multiple agents/collectors which send traces and metrics in proprietary formats to one or more tracing and/or metrics backends.

Cloud Spanner has a number of introspection tools in the form of system tables (built-in tables that you can query to gain helpful insights about operations in Spanner such as queries, reads, and transactions). Now, with the introduction of the OpenTelemetry receiver for Cloud Spanner, developers can consume these metrics and visualize them in their APM tool.

Reference Implementation

As a reference implementation, we have created a set of sample dashboards on Grafana, which consume metrics both from Prometheus (exported by the OpenTelemetry Collector) and from Cloud Monitoring to enable an end-to-end debugging experience.

NOTE: Instead of deploying a self-managed instance of Prometheus, customers can now also use Google’s managed service for Prometheus. Using this service will let you monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale. Learn more about using this service here.

Prerequisites

- Prometheus installed and configured.
- OpenTelemetry version v0.41.0 (or higher).

Here are the specific configurations of these components:

OpenTelemetry collector

Below is a sample configuration file that enables the receiver and sets up an endpoint for Prometheus to scrape metrics from.

[config.yml]

receivers:
  googlecloudspanner:
    collection_interval: 60s
    top_metrics_query_max_rows: 100
    # backfill_enabled: true
    projects:
      - project_id: "<YOUR_PROJECT>"
        service_account_key: "<SERVICE_ACCOUNT_KEY>.json"
        instances:
          - instance_id: "<YOUR_INSTANCE>"
            databases:
              - "<YOUR_DATABASE>"

exporters:
  prometheus:
    send_timestamps: true
    endpoint: "0.0.0.0:8889"

  logging:
    loglevel: debug

processors:
  batch:
    send_batch_size: 200

service:
  pipelines:
    metrics:
      receivers: [googlecloudspanner]
      processors: [batch]
      exporters: [logging, prometheus]

Prometheus

On Prometheus, you need to add a scrape configuration like so:

[prometheus.yml]

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "otel"
    honor_timestamps: true
    static_configs:
      - targets: ["collector:8888", "collector:8889"]

Grafana

Finally, you need to configure Grafana and add data sources and dashboards. Our reference dashboards use two data sources – Cloud Monitoring and Prometheus. This sample configuration file can be used with the dashboards we’ve shared above.

[datasource.yml]

apiVersion: 1

datasources:
- name: Google Cloud Monitoring
  type: stackdriver
  access: proxy
  jsonData:
    tokenUri: https://oauth2.googleapis.com/token
    clientEmail: <YOUR SERVICE-ACCOUNT EMAIL>
    authenticationType: jwt
    defaultProject: <YOUR SPANNER PROJECT NAME>
  secureJsonData:
    privateKey: |
      <YOUR SERVICE-ACCOUNT PRIVATE KEY BELOW>
      -----BEGIN PRIVATE KEY-----

      -----END PRIVATE KEY-----

- name: Prometheus
  type: prometheus
  # Access mode - proxy (server in the UI) or direct (browser in the UI).
  access: proxy
  url: http://prometheus:9090

Sample Dashboards

The monitoring dashboard powered by Cloud Monitoring metrics.

The Query Insights dashboard powered by Prometheus.

We believe that a healthy observability ecosystem serves our customers well, and this is reflected in our continued commitment to open-source initiatives. We’ve received the following feedback from the OpenTelemetry community on this implementation:

“OpenTelemetry has grown from a proposal between two open-source communities to the north star for the collection of metrics and other observability signals. Google has strengthened their commitment to our community by constantly supporting OpenTelemetry standards. Using this implementation and the corresponding dashboards, developers can now consume these metrics in any tooling of their choice, and will be very easily able to debug common issues with Cloud Spanner.”
—Bogdan Drutu, Co-Founder of OpenTelemetry

What’s next?

We will continue to provide flexible experiences to developers, embrace open standards, support our partner ecosystem, and continue being a key contributor to the open source ecosystem. We will also continue to provide best-in-cloud native observability tooling in our console so that our customers get the best experience wherever they are.
To learn more about Cloud Spanner’s introspection capabilities, read this blog post, and to learn more about Cloud Spanner in general, visit our website.

Related article: Improved troubleshooting with Cloud Spanner introspection capabilities
Cloud-native database Spanner has new introspection capabilities to monitor database performance and optimize application efficiency.
Source: Google Cloud Platform

Get more insights from your Java applications logs

Today it is even easier to capture logs in your Java applications. Developers can get more data with their application logs using a new version of the Cloud Logging client library for Java. The library implicitly populates every ingested log entry with information from the current execution context. Read this if you want to learn how to get HTTP request and tracing information and additional metadata in your logs without writing a single line of code.

There are three ways to ingest log data into Google Cloud Logging:

- Develop a proprietary solution that directly calls the Logging API.
- Leverage the logging capabilities of Google Cloud managed environments like GKE, or install the Google Cloud Ops agent and print your application logs to stdout and stderr.
- Use the Google Cloud Logging client library in one of many supported programming languages.

The library provides you with ready-to-use boilerplate constructs built following the best practices for using the Logging API. Java applications can use the Google Cloud Logging library to ingest logs using the integrations with Java Logging and the Logback framework. If you are new to using the Google Logging client libraries for Java, follow the steps to set up Cloud Logging for Java and get started.

The version 3.6 release of the Logging client library for Java brings many long-requested features, including automatic population of metadata about the environment’s resource (with support for Cloud Run and Cloud Functions), HTTP request contextual information, tracing correlation that enables displaying grouped log entries in Logs Explorer, and more. This release of the library is composed of three packages:

- google-cloud-logging — provides the hand-written layer above the Cloud Logging API and the integration with the legacy Java Logging solution.
- google-cloud-logging-logback is the integration with the Logback framework and ingests logs using the google-cloud-logging package.
- google-cloud-logging-servlet-initializer is a new addition to the library; it provides integration with servlet-based Web applications.

The features are available in versions ≥3.6.3 and ≥0.123.3-alpha of the google-cloud-logging and google-cloud-logging-logback packages respectively.

If you are using Maven, update the packages’ versions in the pom.xml:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-logging</artifactId>
  <version>3.6.3</version>
</dependency>
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-logging-logback</artifactId>
  <version>0.123.3-alpha</version>
</dependency>

If you are using Gradle, update your dependencies:

implementation 'com.google.cloud:google-cloud-logging:3.6.3'
implementation 'com.google.cloud:google-cloud-logging-logback:0.123.3-alpha'

You can use the official Google Cloud BOM version 0.167.0, which includes the new releases of the packages.

What is new

The Java library inserts structured information about the executing environment, including resource types, HTTP request metadata, tracing, and more. Using the library you can write your payloads in one of three formats:

- A text payload provided as a Java string
- A JSON object provided as an instance of Map<String, ?> or Struct
- A protobuf object provided as an instance of Any

You can use the structured logs with enhanced filtering in Logs Explorer to observe and troubleshoot your applications.
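For example, here is a minimal sketch of ingesting a JSON payload with the google-cloud-logging package; the log name and payload fields are hypothetical, and each map entry becomes a structured field (e.g. jsonPayload.orderId) that you can filter on in Logs Explorer.

import com.google.cloud.MonitoredResource;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.JsonPayload;
import com.google.cloud.logging.Severity;
import java.util.Collections;
import java.util.Map;

public class StructuredLogSketch {
  public static void main(String[] args) throws Exception {
    try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
      // A JSON payload supplied as a Map; values may be nested maps or lists.
      Map<String, Object> payload =
          Map.of("message", "order processed", "orderId", "A-1001", "latencyMs", 87);

      LogEntry entry =
          LogEntry.newBuilder(JsonPayload.of(payload))
              .setSeverity(Severity.INFO)
              .setLogName("orders") // hypothetical log name
              .setResource(MonitoredResource.newBuilder("global").build())
              .build();

      logging.write(Collections.singleton(entry));
    } // close() flushes any buffered entries
  }
}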
Logs Explorer uses structured logs to establish correlations between traces and logs and to group together logs that belong to the same transaction. The correlated "child" logs are displayed "under" the entry of the "parent" log:

[Image: Grouped logs displayed in Logs Explorer]

With previous versions of the Logging library you had to write code to populate these fields explicitly. For example, developers using the Logback framework had to write code like the following to populate the trace field of the ingested logs, and invoke it at the beginning of each transaction:

// . . .
String traceInfo = request.getHeader("x-cloud-trace-context");
TraceLoggingEventEnhancer.setCurrentTraceId(traceInfo);
// . . .

The new features of the Logging library make implementing this population logic unnecessary. The new version of the library supports automatic population of the following log entry fields:

- resource: describes the resource type and its attributes where the application is running. Along with GCE instances, it supports Google Cloud managed services such as GKE, App Engine (both standard and flexible environments), Cloud Run, and Cloud Functions.
- httpRequest: captures information about HTTP requests from the current application context. The context is defined per thread and can be populated explicitly in the application code or implicitly from the Jakarta servlet request pipeline.
- trace and spanId: read the tracing data from the HTTP request header. The tracing data helps correlate multiple logs that belong to the same transaction.
- sourceLocation: stores the class and method names as well as the line of code where the application called the log ingestion method. The library retrieves this data by traversing the stack trace up to the first entry that is not part of the Logging library code or a system package.

All that is left to you is to set the payload and any relevant payload metadata labels. The only log entry field that the library does not automatically populate is the operation field.

Disable information auto-population in log entries

You have full control over the auto-population functionality. Auto-population is enabled by default for your convenience, but in certain scenarios it can be desirable to disable it. For example, if your application is log intensive and bandwidth is limited, you may want to disable auto-population to save the connection's bandwidth for application traffic.

If you are ingesting logs using the write() method of the Logging interface, you can configure the LoggingOptions argument to disable auto-population:

LoggingOptions options = LoggingOptions.newBuilder()
    .setAutoPopulateMetadata(false).build();
Logging logging = options.getService();

If you are using Java Logging, you can disable auto-population by adding the following to your logging.properties file:

com.google.cloud.logging.LoggingHandler.autoPopulateMetadata=false

If you are using the Logback framework, you can disable auto-population by adding the following to your Logback configuration:

<autoPopulateMetadata>false</autoPopulateMetadata>
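If you prefer to keep this switch in code rather than in a configuration file, the same LoggingOptions object can also be handed to a LoggingHandler directly. This is a minimal sketch under assumed setup; the logger and the log name my-app-log are illustrative:

import com.google.cloud.logging.LoggingHandler;
import com.google.cloud.logging.LoggingOptions;
import java.util.logging.Logger;

public class DisableAutoPopulationExample {
  public static void main(String[] args) {
    // Build LoggingOptions with implicit metadata population turned off.
    LoggingOptions options = LoggingOptions.newBuilder()
        .setAutoPopulateMetadata(false)
        .build();

    // Attach a Cloud Logging handler that uses those options to a standard
    // java.util.logging logger ("my-app-log" is an illustrative log name).
    Logger logger = Logger.getLogger(DisableAutoPopulationExample.class.getName());
    logger.addHandler(new LoggingHandler("my-app-log", options));

    // This record is ingested without auto-populated httpRequest/trace metadata.
    logger.warning("Low disk space");
  }
}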
How the current context is populated

Rich query and display capabilities of Logs Explorer, such as displaying correlated logs, rely on log entry fields such as httpRequest and trace. The new version of the library uses the Context class to store information about the HTTP request and tracing data in the current application context. The context's scope is per thread. Before the library ingests logs into Cloud Logging, it reads the HTTP request and tracing information from the current context and sets the respective fields in the log entries. The fields are populated only if the caller did not explicitly provide values for them. Using the ContextHandler class, you can set up the HTTP request and tracing data of the current context:

import com.google.cloud.logging.HttpRequest;
// . . .
HttpRequest request;
// . . .
ContextHandler ctxHandler = new ContextHandler();
Context ctx = Context.newBuilder()
    .setRequest(request)
    .setTraceId(traceId)
    .setSpanId(spanId)
    .build();
ctxHandler.setCurrentContext(ctx);

After the context is set, all logs ingested in the same scope as the context are populated with the HTTP request and tracing information from that context. The Context builder can also set up the HTTP request from partial data, such as the URL or request method:

import com.google.cloud.logging.HttpRequest.RequestMethod;
// . . .
ContextHandler ctxHandler = new ContextHandler();
Context ctx = Context.newBuilder()
    .setRequestUrl("https://example.com/info")
    .setRequestMethod(RequestMethod.GET)
    .build();
ctxHandler.setCurrentContext(ctx);

The builder of the Context class also supports setting the tracing information from parsed Google tracing context and W3C tracing context strings using the methods loadCloudTraceContext() and loadW3CTraceParentContext(), respectively.

Implementing context population can be a complex task. Java web servers support asynchronous execution of request handlers, so managing the context in the right scope may require in-depth knowledge of each web server's implementation details. The new version of the Logging library provides a simple way to automate current-context management, saving you the effort of implementing this code yourself. The automation supports all web servers based on Jakarta servlets, such as Tomcat, Jetty, or Undertow; the current implementation supports Jakarta servlets version ≥ 4.0.4. The implementation is provided in the new google-cloud-logging-servlet-initializer package. All you have to do to enable automatic capturing of the current context is add the package to your application.

If you are using Maven, add the following to your pom.xml:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-logging-servlet-initializer</artifactId>
  <version>0.1.7-alpha</version>
  <type>pom</type>
</dependency>

If you are using Gradle, add the following to your dependencies:

implementation 'com.google.cloud:google-cloud-logging-servlet-initializer:0.1.7-alpha'

The added package uses Java's Service Provider Interface to register the ContextCaptureInitializer class, which integrates into the servlet pipeline to capture information about current HTTP requests. The information is parsed to populate the HttpRequest structure. The package also parses the request headers to retrieve tracing information; it supports the "x-cloud-trace-context" (Google tracing context) and "traceparent" (W3C tracing context) headers.
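As an illustration of what the servlet initializer buys you, here is a hedged sketch of a Jakarta servlet (javax.servlet namespace, as used by servlet 4.x) that simply logs through java.util.logging. The servlet path, class name, and log message are hypothetical. With google-cloud-logging-servlet-initializer on the classpath, the request and tracing context for the handling thread is captured for you, so the ingested entry should carry httpRequest, trace, and spanId without any context-handling code:

import com.google.cloud.logging.LoggingHandler;
import java.io.IOException;
import java.util.logging.Logger;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/checkout")
public class CheckoutServlet extends HttpServlet {
  private static final Logger logger = Logger.getLogger(CheckoutServlet.class.getName());

  static {
    // Route java.util.logging records to Cloud Logging; this could equally be
    // configured through logging.properties instead of code.
    logger.addHandler(new LoggingHandler());
  }

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    // No explicit ContextHandler calls: the servlet initializer has already
    // stored this request's HTTP and tracing data in the per-thread context.
    logger.info("Processing checkout request");
    resp.getWriter().println("ok");
  }
}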
Use the Logging library with logging agents

Many applications rely on the logging capabilities of Google Cloud managed services. The applications write their logs to stdout and stderr, and the logs are ingested into Cloud Logging by logging agents or by Cloud managed services with logging agent capabilities. This approach benefits from asynchronous log processing that does not consume application resources. The drawback is that if you want to populate fields in the structured logs or provide a structured payload, you have to format your output following the special JSON format that the logging agents can parse. Also, while the logging agents can detect and populate the resource information about the managed environment, they cannot help with auto-population of other log entry fields such as trace or sourceLocation.

The new release of the Logging library for Java introduces support for logging agents in both its Java Logging and Logback integrations. You can now instruct the appropriate handler to redirect log writing to stdout instead of the Logging API.

If you are using Java Logging, add the following to your logging.properties file:

com.google.cloud.logging.LoggingHandler.redirectToStdout=true

If you are using Logback, add the following to your Logback configuration:

<redirectToStdout>true</redirectToStdout>

By default, both LoggingHandler and LoggingAppender write logs by calling the Logging API. You have to add the above configuration to make them use the logging agents for log ingestion.

Some limitations of using logging agents

When configuring the library's Java Logging handler or Logback appender to redirect log writing to stdout, be aware of the constraints that the use of logging agents implies.

Google Cloud managed services (for example, GKE) automatically install logging agents in the resources that they provision. For example, a GKE cluster has a logging agent installed on each worker node (GCE instance) of the cluster. As a result, logging agents are constrained to the resource they run on and do not support customization of the resource field of the ingested log entries.

Additionally, the logName of all ingested logs is defined by the agent and cannot be changed*. This means the application cannot define the log name or where the log entry will be stored (that is, the log's destination). If it is essential for you to define a custom resource type, or to control to which project the logs are routed and/or the log name, you should not redirect log writing to standard output.

* It is possible to customize the log name (but not the destination) in GCE instances by customizing the logging agent's configuration and defining the name as the "tag".

What is next

Let's recap the benefits of upgrading your logging client to the latest version:

- Use the new Logging library if you need the log correlation capabilities of Logs Explorer, or if you forward Cloud Logging structured logs to external solutions and use the data in the auto-populated fields.
- Use the google-cloud-logging-servlet-initializer package to automate context management if you run a request-based application that uses Jakarta servlets.
Note that this automation will not work with legacy Java EE servlets or with web servers that are not based on Java servlets, such as Netty.

- If you run your application in Google Cloud serverless environments such as Cloud Run or Cloud Functions, consider using Java Logging or Logback with the configuration that redirects formatted logs to standard output, as described in the previous section. Leveraging logging agents for log ingestion avoids some reliability problems of asynchronous log ingestion, such as CPU throttling on Cloud Run or the lack of a grace period in Cloud Functions.

Related article: Getting Started with Google Cloud Logging Python v3.0.0. Learn how to manage your app's Python logs and related metadata using Google Cloud client libraries.
Source: Google Cloud Platform

How Google Cloud helps you architect for DR when you have locality-restricted workloads

There are many reasons why locality restrictions need to be taken into consideration, and they are something CISOs and resiliency officers need to factor in. A balance must be struck between taking advantage of the best features of the cloud and meeting your locality requirements. Google Cloud helps you meet your business objectives whether your architecture is all-in on Google Cloud, a hybrid pattern spanning on-premises and Google Cloud, or spread across Google Cloud and an alternative cloud provider.

Before starting to design your architecture, you need to consider the locality requirements you have to meet. At a high level, these fall into one or more of three scenarios:

- Data localization: data needs to be stored and processed within a specified entity (for example, the EU), a specific country, or designated countries.
- Data residency: data is stored in a specified geographical location.
- Data sovereignty: builds upon both data localization and residency; to meet sovereignty requirements, you are subject to the regulations and laws of an entity such as the EU, regulated industry groups, or a specific country.

These scenarios are often conflated because they are related, yet they are distinct. Designing a locality-restricted architecture requires you to also design your disaster recovery (DR) architecture to meet your localization requirements. The approach to designing a DR architecture for locality-restricted workloads is the same as for DR architectures without locality restrictions, but augmented to address the locality requirements. Start by reading the Google Cloud disaster recovery planning guide. Next, as you consider locality-restricted workloads, we have two additional DR guides that focus on meeting locality restrictions:

- Architecting disaster recovery for locality-restricted workloads: start here, focusing first on the requirements discussed in the planning section of this guide. It also discusses the locality features of a subset of the Google Cloud portfolio, which is useful to review when designing your overall architecture.
- Disaster recovery use cases: locality-restricted data analytics applications: this guide helps you understand what designing your DR architecture looks like in practice. It walks through two data analytics use cases with locality-restricted requirements and discusses the locality considerations for both.

Use the following flowchart to help you determine what you need to take into consideration when designing your DR architecture:

[Flowchart: decision points for designing a locality-restricted DR architecture]

If you end up considering custom solutions or partner offerings, use the Google Cloud disaster recovery planning guide together with the locality-restricted guides, Architecting disaster recovery for locality-restricted workloads and Disaster recovery use cases: locality-restricted data analytics applications, to help you design your locality-restricted DR architecture.

Related article: New in Google Cloud VMware Engine: Single nodes, certifications and more. The latest version of Google Cloud VMware Engine now supports single-node clouds, compliance certifications, and Toronto availability.
Source: Google Cloud Platform