Committed use discounts for RHEL and RHEL for SAP now available on Compute Engine

Optimizing your costs is a major priority for us here at Google Cloud. We are pleased to announce the general availability of committed use discounts (“CUDs”) for Red Hat Enterprise Linux and Red Hat Enterprise Linux for SAP. If you run consistent and predictable workloads on Compute Engine, you can use CUDs to save as much as 24% on Red Hat Enterprise Linux subscription costs compared to on-demand (“PAYG”) prices.

“Red Hat Enterprise Linux on Google Cloud provides a consistent foundation for hybrid cloud environments and a reliable, high-performance operating environment for applications and cloud infrastructure. The introduction of committed use discounts for Red Hat Enterprise Linux for Google Cloud makes it even easier for customers to deploy on the world’s leading enterprise Linux platform to unlock greater business value in the cloud.” — Gunnar Hellekson, Vice President and General Manager, Red Hat Enterprise Linux Business Unit, Red Hat

What are committed use discounts for Red Hat Enterprise Linux?
Red Hat Enterprise Linux and Red Hat Enterprise Linux for SAP committed use discounts (collectively referred to as “Red Hat Enterprise Linux CUDs”) are resource-based commitments available for purchase in one- or three-year terms. When you purchase Red Hat Enterprise Linux CUDs, you commit to paying the monthly Red Hat Enterprise Linux subscription fees for the term you’ve selected and the number of licenses you specify, regardless of your actual usage. In exchange, you can save as much as 24% on Red Hat Enterprise Linux subscription costs compared to on-demand rates. Because you are billed monthly regardless of actual usage, CUDs are ideal for predictable, steady-state workloads: they maximize your savings and make budget planning easier.

How do committed use discounts work for Red Hat Enterprise Linux?
Red Hat Enterprise Linux CUDs are project- and region-specific, similar to the other software license CUDs available today. This means you need to purchase Red Hat Enterprise Linux CUDs in the same region and project as the instances consuming these subscriptions. After you purchase Red Hat Enterprise Linux CUDs, discounts automatically apply to any running virtual machine (VM) instances within the selected project in the specified region. If you have multiple projects under the same billing account, commitments can also be shared across projects by turning on billing account sharing.

When commitments expire, your running VMs continue to run at on-demand rates. Note that after you purchase a commitment, you cannot edit or cancel it; you must pay the agreed-upon monthly amount for the duration of the commitment. Refer to Purchasing commitments for licenses for more information.

How much can I save by using committed use discounts for Red Hat Enterprise Linux?
By purchasing Red Hat Enterprise Linux CUDs, you can save as much as 20% with one-year commitments and up to 24% with three-year commitments compared to current on-demand prices. Keep in mind that with CUDs you are charged the monthly subscription fees regardless of your actual Red Hat Enterprise Linux usage, so to maximize your savings we recommend purchasing CUDs for steady, predictable workloads. Here is a comparison of the maximum discounts possible with CUDs versus the corresponding on-demand prices (prices as of this article’s publish date; hourly costs are approximate).
Calculations are derived from the full CUD prices (as of this article’s publish date), assuming VMs run 730 hours per month, 12 months per year. Discounts are relative to current on-demand pricing, rounded to the nearest whole number.

Based on our research, CUDs are a good fit for many Red Hat Enterprise Linux VMs, the majority of which run 24/7 workloads. When evaluating whether purchasing a Red Hat Enterprise Linux CUD is a good choice for you, consider the following: based on list prices, a one-year Red Hat Enterprise Linux CUD saves you money on subscription costs if you use a Red Hat Enterprise Linux instance for roughly 80% or more of the one-year term. With a three-year Red Hat Enterprise Linux CUD, you start saving when a Red Hat Enterprise Linux instance runs for roughly 76% or more of the time. Also remember that Red Hat Enterprise Linux CUDs automatically apply to all running VM instances within the same region and project (although one Red Hat Enterprise Linux CUD can only be applied to one VM instance at a time).

*Savings are estimates only. This analysis assumes only one Red Hat Enterprise Linux (large) instance running in the CUD’s project and region.

What if I need to upgrade my Red Hat Enterprise Linux version after purchasing a commitment?
Red Hat Enterprise Linux CUDs are version-agnostic and are not affected when you perform operating system (OS) upgrades or downgrades. For example, if you purchased a commitment for Red Hat Enterprise Linux 7, you may upgrade to Red Hat Enterprise Linux 8 and continue to use the same commitment without any action on your end. Additionally, commitments are not affected by future changes to on-demand prices for Compute Engine resources.

How can I purchase committed use discounts for Red Hat Enterprise Linux?
The easiest way to purchase Red Hat Enterprise Linux CUDs is through the Google Cloud console:

1. In the Google Cloud console, go to the Committed Use Discounts page.
2. Click Purchase commitment to purchase a new commitment.
3. Click New license committed use discount to purchase a new license commitment.
4. Name your commitment and choose the region where you want it to apply.
5. Choose the duration of the commitment, either 1 or 3 years.
6. Choose a License family.
7. Choose the License type and the Number of licenses.
8. Click Purchase.

You can also purchase Red Hat Enterprise Linux commitments using the Google Cloud CLI or the Compute Engine API. For more information, refer to Purchasing commitments for licenses.
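
Returning to the break-even guidance above, the underlying arithmetic is simple: with a CUD you pay the discounted rate for every hour of the term whether or not the instance runs, so the commitment wins whenever expected utilization exceeds one minus the discount. Here is a minimal sketch of that calculation, using the approximate discount figures quoted in this post rather than actual list prices:

```python
def breakeven_utilization(cud_discount: float) -> float:
    """Fraction of the term an instance must run for a CUD to beat on-demand pricing.

    With a CUD you pay (1 - discount) * on_demand_rate for every hour of the term,
    used or not; on demand you pay only for the hours the instance actually runs.
    The two costs are equal when utilization == (1 - discount).
    """
    return 1.0 - cud_discount

# Approximate discounts from this post: ~20% for one-year, ~24% for three-year terms.
print(f"1-year break-even: {breakeven_utilization(0.20):.0%} utilization")  # ~80%
print(f"3-year break-even: {breakeven_utilization(0.24):.0%} utilization")  # ~76%
```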
We hope this helps you find the most cost-optimal plan for your Red Hat Enterprise Linux deployment needs.
Source: Google Cloud Platform

Introducing BigQuery Partner Center — a new way to discover and leverage integrated partner solutions

At Google, we are committed to building the most open and extensible Data Cloud. We want to provide our customers with more flexibility, interoperability and agility when building analytics solutions using BigQuery and tightly integrated partner products. We have therefore significantly expanded our Data Cloud partner ecosystem, and are increasing our investment in technology partners in a number of new areas.

At the Google Data Cloud & AI Summit in March, we introduced BigQuery Partner Center, a new user interface in the Google Cloud console that enables our customers to easily discover, try, purchase and use a diverse range of partner products that have been validated through the Google Cloud Ready – BigQuery program. Google Cloud Ready – BigQuery is a program in which Google Cloud engineering teams evaluate and validate BigQuery partner integrations and connectors using a series of tests and benchmarks based on standards set by Google, so customers can be assured of the quality of the integrations when using these partner products with BigQuery. These validated partners and solutions are now accessible directly from BigQuery Partner Center.

Navigating in BigQuery Partner Center
Customers can start exploring BigQuery Partner Center by launching the BigQuery Cloud Console. (A video demo shows how to discover and install a free trial of Confluent Cloud from the BigQuery Partner Center.)

Discover: In the Partner Center, you can find a list of validated partners organized in the following categories:
- BI, ML, and Advanced Analytics
- Connectors & Development Tools
- Data Governance, Master Data Management
- Data Quality & Observability
- ETL & Data Integration

Try: You have the option to try out a product by signing up for a free trial version offered by the partner.

Buy: If you choose to purchase any of the partner products, you can do it directly from Google Cloud Marketplace by clicking on the Marketplace hyperlink tag.

Here’s an overview of how you can discover and use some of BigQuery’s partner solutions:

Confluent Cloud is now available in BigQuery Partner Center to help customers easily connect and create Confluent streaming data pipelines into BigQuery, and extract real-time insights to support proactive decision-making while offloading the operational burdens associated with managing open source Kafka.

Fivetran offers a trial experience through the BigQuery Partner Center, which allows customers to replicate data from key applications, event streams, and file stores to BigQuery continuously. Moreover, customers can actively monitor their connector’s performance and health using logs and metrics provided through Google Cloud Monitoring.

Neo4j provides an integration through BigQuery Partner Center that allows users to extend SQL analysis with graph-native data science and machine learning by working seamlessly between BigQuery and Neo4j Graph Data Science, whether using BigQuery SQL or notebooks. Data science teams can now improve and enrich existing analysis and ML using the graph-native data science capabilities within Neo4j by running in-memory graph analysis directly from BigQuery.

Expanding the partner ecosystem through Google Cloud Ready – BigQuery
We are also excited to share that, since we introduced the Google Cloud Ready – BigQuery initiative last year, we have recognized over 50 technology partners that have successfully met a core set of integration requirements with BigQuery.
To unlock more use cases that are critical to customers’ data-driven transformation journeys, Google Cloud engineering teams worked closely with partners across many categories to test compatibility, tune functionality, and optimize integrations so that our customers have the best experience when using these partner products with BigQuery.

For example, in the Data Quality & Observability category, we have most recently validated products from Anomalo, Datadog, Dynatrace, Monte Carlo and New Relic to enable better detection and remediation of data quality issues. In the Reverse ETL & Master Data Management category, we worked with Hightouch and Tamr to expand data management use cases for data cleansing, preparation, enrichment and data synchronization from BigQuery back to SaaS-based applications. Data Governance and Security partners like Immuta and Privacera provide enhanced data access controls and management capabilities for BigQuery, while Carto offers advanced geospatial and location intelligence capabilities that are well integrated with BigQuery. We also continue to expand partnerships in key categories such as Advanced Analytics and Data Integration with industry-leading partners like Starburst, Sisense, Hex, and Hevo Data to ensure our customers have flexibility and options in choosing the right partner products to meet their business needs.

With the general availability of BigQuery Partner Center, customers can now conveniently discover, try out and install a growing list of Google Cloud Ready – BigQuery validated partners directly from the BigQuery Cloud Console.

Getting started
To explore the new Partner Center, launch the BigQuery Cloud Console. To see a full list of partner solutions and connectors that have been validated to work well with BigQuery, visit here. To learn more about the Google Cloud Ready – BigQuery validation program, visit our documentation page. If you are a partner interested in becoming “Google Cloud Ready” for BigQuery, please fill out this intake form. If you have any questions, feel free to contact us.
Source: Google Cloud Platform

Bringing our world-class expertise together under Google Cloud Consulting

Every day, we see how much our customers value Google Cloud experts working alongside their teams to drive innovation. We also know that being connected to the right services and partners at the right time accelerates customer success. Last year, we expanded our custom AI solution practice and launched our Global Delivery Center to deliver deep product expertise at a global scale. Today, we’re excited to announce the next step on our journey to bring all our services together with the launch of Google Cloud Consulting and our unified services portfolio at cloud.google.com/consulting.

The Google Cloud Consulting portfolio provides a unified services capability, bringing together offerings across multiple specializations into a single place. This includes services from learning to technical account management to professional services and customer success. Through this single portfolio, you’ll have access to detailed descriptions of the various services, with examples of how you can leverage them to solve specific business challenges. This will make it easy to identify the right package of services for your business and will ensure you get the most out of your investment.

At Google Cloud, we always work closely with our ecosystem of partners to deliver innovation and value to our customers, and Google Cloud Consulting further reinforces our commitment to being partner-first. By bringing together capabilities across the customer lifecycle — from onboarding to enablement to co-delivery and assurance — this unified portfolio makes it simpler for partners to work with Google Cloud Consulting and help drive the best outcomes for customers.

“Our partnership with Google Cloud Consulting is helping us to grow our Google Cloud practice globally and accelerate our customers’ adoption of the platform. We are pushing the bounds of innovation together as the AI wave approaches,” said Ankur Kashyap, SVP and Global Head of Google Ecosystem Unit, HCLTech.

Broadcom, a provider of enterprise security solutions, recently worked with Google Cloud Consulting to migrate its infrastructure from Amazon Web Services (AWS), and found the combination of technology and expertise critical for success. “Google’s deep technical skills and its data, security and AI offerings have accelerated our transformation towards becoming a software-led company,” said Andy Nallappan, Vice President, CTO and CSO, Broadcom.

Kroger, the American retailer, worked with Google Cloud Consulting and Deloitte to accelerate its technical objectives. “Google Cloud Consulting and Deloitte brought us a technology architecture and application framework that we could implement in record time. We’re already seeing results across our stores, with associate tasks being optimized and overall productivity increasing,” said Jim Clendenen, VP, Enterprise Retail Systems, Kroger.

Whether you’re just getting started in the cloud or seeking new ways to innovate, our portfolio of offerings is built to help you:
- Leverage Google Cloud professional service engineers and consultants to kickstart your cloud journey, from testing, planning and executing migrations to optimizing your operations.
- Work alongside our partners to provide expertise and assurance services.
- Benefit from access to cutting-edge tools, including best-in-class Artificial Intelligence and Machine Learning (AI/ML) solutions, data resources, and security services that you can use to build robust data platforms and protect your business from security threats.
- Receive bespoke, hands-on guidance from our Technical Account Managers, who build familiarity with your applications, systems, and business goals, and proactively advise on and accelerate your digital transformation.
- Train and certify your teams in Google Cloud with a range of learning services that can boost your long-term self-sufficiency and help you foster a culture of innovation.

These end-to-end capabilities are designed to meet you wherever you are in your cloud journey, so you can both build your business in the cloud and make digital breakthroughs safely.

At Google Cloud, we’re committed to providing technology and services to help you grow and succeed. From developing innovative solutions, to pioneering with generative AI, to securely managing your data in the cloud, to transforming user experiences, we’re with you at all the key moments of your cloud journey. As we look forward to 2023, we’ll continue to expand the service catalog, focus on making it even easier to find and transact these services, and further streamline the experience of engaging with our services.

Click here to see Google Cloud Consulting’s full portfolio of offerings.
Source: Google Cloud Platform

Accelerate time to value with Google’s Data Cloud for your industry

Many data analytics practitioners today are interested in ways they can accelerate new scenarios and use cases to enable business outcomes and competitive advantage. As many enterprises look at rationalizing their data investments and modernizing their data analytics platform strategies, the prospect of migrating to a new cloud-first data platform like Google’s Data Cloud can be perceived as a risky and daunting task — not to mention the expense of the transition, from redesigning and remodeling legacy data models in traditional data warehouse platforms to refactoring analytics dashboards and reporting for end users. The time and cost of this transition are not trivial, and many enterprises are looking for ways to deliver innovation at cloud speed without the time and costs of traditional replatforming, where millions are spent on this type of transition. When access to all data within the enterprise and beyond is the future, it’s a big problem if you can’t leverage all of your data for insights at cloud scale because you’re stuck with technologies and approaches that aren’t designed to match your unique industry requirements.

So, what is out there to address these challenges? Google’s Data Cloud for industries combines pre-built industry content, ecosystem integrations, and solution frameworks to accelerate your time to value. Google has developed a set of solutions and frameworks to address these issues as part of its latest offering, Google Cloud Cortex Framework, which is part of Google’s Data Cloud. Customers like Camanchaca accelerated build time for analytical models by 6x, integrated Cortex content for improved supply chain and sustainability insights, and saved 12,000 hours by deploying 60 data models in less than six months.

Accelerating time to value with Google Cloud Cortex Framework
Cortex Framework provides accelerators to simplify your cloud transition and data analytics journey in your industry. This blog explores some essentials you need to know about Cortex and how you can adopt and leverage its content to rapidly onramp your enterprise data from key applications such as SAP and Salesforce, along with data from Google, third-party, public and community data sets. Cortex is available today, and it allows enterprises to accelerate time to value by providing endorsed connectors delivered by Google and our partners, reference architectures, ready-to-use data models and templates for BigQuery, Vertex AI examples, and an application layer that includes microservices templates for data sharing with BigQuery, all of which developers can easily deploy, enhance, and make their own depending on the scope of their data analytics project or use case. Cortex content helps you get there faster, with lower time and complexity to implement.

Let’s now explore some details of Cortex and how you can best take advantage of it with Google’s Data Cloud. First, Cortex is both a framework for data analytics and a set of deployable accelerators; the image below provides an overview of the essentials of Cortex Framework, focusing on key areas of endorsed connectors, reference architectures, deployment templates, and innovative solution accelerators delivered by Google and our partners. We’ll explore each of these focus areas of Cortex in greater depth below.

Why Cortex Framework?
Leading connectors: First, Cortex provides leading connectors delivered by Google and our partners.
These connectors have been tested and validated to provide interoperability with Cortex data models in BigQuery, Google’s cloud-scale enterprise data warehouse. By taking the guesswork out of selecting which tooling integrates Cortex with BigQuery, we remove the time, effort, and cost of evaluating the various tooling available in the market.

Deployment accelerators: Cortex provides a set of predefined, deployable templates and content for enterprise use cases with SAP and Salesforce that include BigQuery data models, Looker dashboards, Vertex AI examples, and microservices templates for synchronous and asynchronous data sharing with surrounding applications. These accelerators are available free of charge today via Cortex Foundation and can easily be deployed in hours. The figure below provides an overview of Cortex Foundation and the focus areas for templates and content available today.

Reference architectures: Cortex provides reference architectures for integrating with leading enterprise applications such as SAP and Salesforce as well as Google and third-party data sets and data providers. These include blueprints for integration and deployment with BigQuery that are based on best practices for integration with Google’s Data Cloud and partner solutions, drawn from real-world deployments. Examples include best practices and reference architectures for CDC (Change Data Capture) processing as well as BigQuery architecture and deployment best practices. The image below shows an example of reference architectures based on Cortex published best practices and options for CDC processing with Salesforce. You can take advantage of reference architectures such as this one today and benefit from these best practices to reduce the time, effort and cost of implementation based on what has been successful in real-world customer deployments.

Innovative solutions: Cortex Foundation includes support for various use cases and insights across a variety of data sources. For example, Cortex Demand Sensing is a solution accelerator that leverages Google Cloud Cortex Framework to deliver accelerated value to Consumer Packaged Goods (CPG) customers who are looking to infuse innovation into their Supply Chain Management and Demand Forecasting processes. An accurate forecast is critical to reducing costs and maximizing profitability. One gap for many CPG organizations is a near-term forecast that leverages all of the available information from various internal and external data sources to predict near-term changes in demand. As an enhanced view of demand materializes, CPG companies also need to manage and match demand and supply to identify near-term changes in demand and their root cause, and then shape supply and demand to improve SLAs and increase profitability. Our approach for Demand Sensing, shown below, integrates SAP ERP and other data sets (e.g., Weather Trends, Demand Plan) together with our Data Cloud solutions like BigQuery, Vertex AI and Looker to deliver extended insights to demand planners, improve the accuracy of demand predictions, and help defer cost and drive new revenue opportunities.
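
Once the Cortex Foundation content is deployed, the resulting BigQuery data models are queried like any other dataset. Here is a minimal sketch using the BigQuery Python client; the project, dataset, and view names are hypothetical placeholders rather than actual Cortex object names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Hypothetical reporting view standing in for whatever the Cortex deployment
# creates in your project (for example, an SAP sales reporting model).
query = """
    SELECT *
    FROM `my-project.cortex_reporting.sales_orders`
    LIMIT 10
"""

for row in client.query(query).result():
    print(dict(row.items()))
```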
The ecosystem advantage
Building an ecosystem means connecting with a diverse set of partners that accelerate your time to value. Google Cloud is excited to announce a range of new partner innovations that bring you more choice and optionality. Over 900 partners put their trust in BigQuery and Vertex AI to power their business by being part of the “Built with” Google Cloud initiative. These partners build their business on top of our data platform, enabling them to scale at high performance – both their technology and their business. In addition, more than 50 data platform partners offer fully validated integrations through our Google Cloud Ready – BigQuery initiative.

A look ahead
Our solutions roadmap will target expansion of Cortex Foundation templates and content to support additional solutions in sales and marketing and supply chain, and to expand use cases and models for finance. You will also see significant expansion of predefined BigQuery data models and content for Google Ads, Google Marketing Platform, and other cross-media platforms and applications, along with improvements to the deployment experience and expansion into analytical accelerators that span data sets and industries. If you would like to connect with us to share more details on what we are working on and our roadmap, we’re happy to engage with you! Please feel free to contact us at cortex-framework@google.com to learn more about the work we are doing and how we might help with your specific use cases or project. We’d love to hear from you!

Ready to start your journey?
With Cortex Framework, you can start benefiting from our open source Data Foundation solutions content and packaged industry solutions content available on Google Cloud Marketplace and Looker. The Cortex content is available free of charge, so you can easily get started on your Google Data Cloud journey today! Learn more about Google Cloud Cortex Framework and how you can accelerate business outcomes with less risk, complexity and cost. Cortex will help you get there faster with your enterprise data sources and establish a cloud-first data foundation with Google’s Data Cloud. Join the Data Cloud Summit to learn how customers like Richemont & Cartier use Cortex Framework to speed up time to value.
Source: Google Cloud Platform

Pub/Sub schema evolution is now GA

Pub/Sub schemas are designed to allow safe, structured communication between publishers and subscribers. In particular, the use of schemas guarantees that any published message adheres to a schema and encoding, which the subscriber can rely on when reading the data. Schemas tend to evolve over time. For example, a retailer capturing web events and sending them to Pub/Sub for downstream analytics with BigQuery may find that the schema now needs additional fields, which have to be propagated through Pub/Sub. Until now, Pub/Sub has not allowed the schema associated with a topic to be altered; instead, customers had to create new topics. That limitation changes today as the Pub/Sub team is excited to introduce schema evolution, designed to allow the safe and convenient update of schemas with zero downtime for publishers or subscribers.

Schema revisions
A new revision of a schema can now be created by updating an existing schema. Most often, schema updates only include adding or removing optional fields, which is considered a compatible change. All the versions of the schema are available on the schema details page. You can delete one or more schema revisions from a schema; however, you cannot delete a revision if the schema has only one revision. You can also quickly compare two revisions by using the view diff functionality.

Topic changes
Currently, you can attach an existing schema or create a new schema to be associated with a topic so that all messages published to the topic are validated against the schema by Pub/Sub. With the schema evolution capability, you can now update a topic to specify a range of schema revisions against which Pub/Sub will try to validate messages, starting with the last revision and working towards the first. If first-revision is not specified, any revision <= last revision is allowed, and if last revision is not specified, then any revision >= first revision is allowed.

Schema evolution example
Let’s take a look at a typical way schema evolution may be used. You have a topic T that has a schema S associated with it. Publishers publish to the topic and subscribers subscribe to a subscription on the topic. Now you wish to add a new field to the schema and you want publishers to start including that field in messages. As the topic and schema owner, you may not necessarily have control over updates to all of the subscribers, nor the schedule on which they get updated. You may also not be able to update all of your publishers simultaneously to publish messages with the new schema. You want to update the schema and allow publishers and subscribers to be updated at their own pace to take advantage of the new field. With schema evolution, you can perform the following steps to ensure a zero-downtime update that adds the new field:

1. Create a new schema revision that adds the field.
2. Ensure the new revision is included in the range of revisions accepted by the topic.
3. Update publishers to publish with the new schema revision.
4. Update subscribers to accept messages with the new schema revision.

Steps 3 and 4 can be interchanged since all schema updates ensure backwards and forwards compatibility. Once your migration to the new schema revision is complete, you may choose to update the topic to exclude the original revision, ensuring that publishers only use the new schema.
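
Here is a minimal sketch of steps 1 and 2 using the Pub/Sub Python client library. It assumes a recent client version that exposes commit_schema and schema revision ranges; the project, schema, topic, and file names are hypothetical, and the exact request shapes may differ slightly in your client version, so treat this as an outline rather than the documented procedure:

```python
from google.cloud.pubsub import PublisherClient, SchemaServiceClient
from google.pubsub_v1.types import Schema

project_id = "my-project"        # hypothetical
schema_id = "web-events-schema"  # hypothetical
topic_id = "web-events"          # hypothetical

schema_client = SchemaServiceClient()
schema_path = schema_client.schema_path(project_id, schema_id)

# Step 1: commit a new revision of the existing schema (here, an Avro
# definition to which an optional field has been added).
new_definition = open("web_events_v2.avsc").read()  # hypothetical file
revision = schema_client.commit_schema(
    request={
        "name": schema_path,
        "schema": Schema(
            name=schema_path, type_=Schema.Type.AVRO, definition=new_definition
        ),
    }
)
print(f"Committed revision {revision.revision_id}")

# Step 2: widen the range of revisions the topic accepts so that the newly
# committed revision becomes the last accepted one.
publisher = PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
publisher.update_topic(
    request={
        "topic": {
            "name": topic_path,
            "schema_settings": {
                "schema": schema_path,
                "last_revision_id": revision.revision_id,
            },
        },
        "update_mask": {"paths": ["schema_settings.last_revision_id"]},
    }
)
```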
These steps work for both protocol buffer and Avro schemas. However, some extra care needs to be taken when using Avro schemas. Your subscriber likely has a version of the schema compiled into it (the “reader” schema), but messages must be parsed with the schema that was used to encode them (the “writer” schema). Avro defines the rules for translating from the writer schema to the reader schema. Pub/Sub only allows schema revisions where both the new schema and the old schema could be used as the reader or writer schema. However, you may still need to fetch the writer schema from Pub/Sub, using the attributes passed in to identify the schema, and then parse using both the reader and writer schema. Our documentation provides examples on the best way to do this.
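
As a rough illustration of that Avro handling (not the documented example), the sketch below fetches the writer schema from Pub/Sub and decodes the message with fastavro against the subscriber's compiled-in reader schema. The schema name, the "@revision" addressing, and the assumption that the revision ID arrives via message attributes are all placeholders to verify against the documentation:

```python
import io
import json

import fastavro
from google.cloud.pubsub import SchemaServiceClient

schema_client = SchemaServiceClient()

# Reader schema: the version the subscriber was built against (hypothetical file).
reader_schema = json.load(open("web_events_v1.avsc"))

def handle(message):
    # In practice the writer schema revision is identified from attributes that
    # Pub/Sub attaches to the message; this fully qualified name is hypothetical.
    writer_name = "projects/my-project/schemas/web-events-schema@abcd1234"
    writer_schema = json.loads(
        schema_client.get_schema(request={"name": writer_name}).definition
    )

    # Decode with the writer schema and translate into the reader schema.
    record = fastavro.schemaless_reader(
        io.BytesIO(message.data), writer_schema, reader_schema
    )
    print(record)
    message.ack()
```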
BigQuery subscriptions
Pub/Sub schema evolution is also powerful when combined with BigQuery subscriptions, which allow you to write messages published to Pub/Sub directly to BigQuery. When using the topic schema to write data, Pub/Sub ensures that at least one of the revisions associated with the topic is compatible with the BigQuery table. If you want to update your messages to add a new field that should be written to BigQuery, do the following:

1. Add the OPTIONAL field to the BigQuery table schema.
2. Add the field to your Pub/Sub schema.
3. Ensure the new revision is included in the range of revisions accepted by the topic.
4. Start publishing messages with the new schema revision.

With these simple steps, you can evolve the data written to BigQuery as your needs change.

Quotas and limits
The schema evolution feature comes with the following limits:
- 20 revisions per schema name are allowed at any time.
- Each individual schema revision does not count against the maximum of 10,000 schemas per project.

Additional resources
Please check out the following resources to explore this feature further:
- Documentation
- Client libraries
- Samples
- Quotas

Source: Google Cloud Platform

Coop reduces food waste by forecasting with Google’s AI and Data Cloud

Although Coop has a rich history spanning nearly 160 years, the machine learning (ML) team supporting its modern operations is quite young. Its story began in 2018 with one simple mission: to leverage ML-powered forecasting to help inform business decisions, such as demand planning based on supply chain seasonality and expected customer demand. The end goal? By having insight into not only current data but also projections of what could happen in the future, the business can optimize operations to keep customers happy, save costs, and support its sustainability goals (more on that later!).

Coop’s initial forecasting environment was a single on-premises workstation that leveraged open-source frameworks such as PyTorch and TensorFlow. Fine-tuning and scaling models to a larger number of CPUs or GPUs was cumbersome. In other words, the infrastructure couldn’t keep up with the team’s ideas. So when the question arose of how to solve these challenges and operationalize the resulting models beyond those local machines, Coop leveraged the company’s wider migration to Google Cloud to find a solution that could stand the test of time.

Setting up new grounds for innovation
During a two-day workshop with the Google Cloud team, Coop kicked things off by ingesting data from its vast data pipelines and SAP systems into BigQuery. At the same time, Coop’s ML team implemented physical accumulation cues for incoming new information and sorted out what kind of information it was. The team was relieved to not have to worry about setting up infrastructure and new instances. Next, the Coop team turned to Vertex AI Workbench to further develop its data science workflow, finding it surprisingly fast to get started. The goal was to train forecasting models to support Coop’s distribution centers so they could optimize their stock of fresh produce based on accurate numbers.

Achieving higher accuracy, faster, to better meet customer demand
During the proof-of-concept (POC) phase, Coop’s ML team had two custom-built models, a single Extreme Gradient Boosting model and a Temporal Fusion Transformer in PyTorch, competing against an AutoML-powered Vertex AI Forecast model, which the team ultimately operationalized on Vertex AI. The team established that using Vertex AI Forecast was faster and more accurate than training a model manually on a custom virtual machine (VM). On the test set in the POC, the team reached 14.5 WAPE (Weighted Average Percentage Error), which means Vertex AI Forecast provided a 43% performance improvement relative to models trained in-house on a custom VM.

After a successful POC and several internal tests, Coop is building a small-scale pilot (to be put live in production for one distribution center) that will conclude with the Coop ML team streaming the forecasting insights back to SAP, where processes such as placing orders with importers and distributors take place. Upon successful completion and evaluation of the small-scale pilot in production in the next few months, Coop could scale it out to full-blown production across distribution centers throughout Switzerland. The architecture diagram below approximately illustrates the steps involved in both stages.
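
For reference, WAPE, the accuracy metric cited above, is the total absolute forecast error divided by the total actual demand, expressed as a percentage. A minimal sketch with hypothetical demand figures:

```python
def wape(actual, forecast):
    """Weighted Average Percentage Error: total absolute error over total actual demand."""
    total_error = sum(abs(a - f) for a, f in zip(actual, forecast))
    return 100 * total_error / sum(actual)

# Hypothetical daily demand for one product at one distribution center.
actual = [120, 95, 140, 110, 160]
forecast = [110, 100, 150, 100, 150]
print(f"WAPE: {wape(actual, forecast):.1f}%")  # lower is better; Coop's POC reached 14.5
```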
The vision, of course, is to leverage Google’s data and AI services, including forecasting and post-forecasting optimization, to support all of Coop’s distribution centers in Switzerland in the near future. Leveraging Google Cloud to increase relative forecasting accuracy by 43% over custom models trained by the Coop team can significantly affect the retailer’s supply chain. By taking this POC to pilot and possibly production, the Coop ML team hopes to improve its forecasting model to better support wider company goals, such as reducing food waste.

Driving sustainability by reducing food waste
Coop believes that sustainability must be a key component of its business activity. With the aim of becoming a zero-waste company, its sustainability strategy feeds into all corporate divisions, from how it selects suppliers of organic, animal-friendly, and fair-trade products to efforts for reducing energy, CO2 emissions, waste materials, and water usage in its supply chains. Achieving these goals boils down to an optimal control problem in a Bayesian framework: Coop must carry out quantile inference to determine the spread of its distributions. For example, is it expecting to sell between 35 and 40 tomatoes on a given day, or is its confidence interval between 20 and 400? Reducing this uncertainty with more specific and accurate numbers means Coop can order the precise number of units for distribution centers, ensuring customers can always find the products they need. At the same time, it prevents ordering in excess, which reduces food waste.

Pushing the envelope of what can be achieved company-wide
Having challenged its in-house models against the Vertex AI Forecast model in the POC, Coop is in the process of rolling out a production pilot to one distribution center in the coming months, and possibly to all distribution centers across Switzerland thereafter. In the process, one of the most rewarding realizations was that the ML team behind the project could use different Google Cloud tools, such as Google Kubernetes Engine, BigQuery, and Vertex AI, to create its own ML platform. Beyond using pre-trained Vertex AI models, the team can automate and create data science workflows quickly so it’s not always dependent on infrastructure teams.

Next, Coop’s ML team aims to use BigQuery as a pre-stage for Vertex AI. This will allow the entire data streaming process to flow more efficiently, serving data to any part of Vertex AI when needed. “The two tools integrate seamlessly, so we look forward to trying that combination for our forecasting use cases and potentially new use cases, too. We are also exploring the possibility of deploying different types of natural language processing-based solutions to other data science departments within Coop that are relying heavily on TensorFlow models,” says Martin Mendelin, Head of AI/ML Analytics, Coop. “By creating and customizing our own ML platform on Google Cloud, we’re creating a standard for other teams to follow, with the flexibility to work with open-source programs but in a stable, reliable environment where their ingenuity can flourish,” Mendelin adds. “The Google team went above and beyond with its expertise and customer focus to help us make this a reality. We’re confident that this will be a nice differentiator for our business.”
Source: Google Cloud Platform

Introducing time-bound Session Length defaults to improve your security posture

Google Cloud provides many layers of security for protecting your users and data. Session length is a configuration parameter that administrators can set to control how long users can access Google Cloud without having to reauthenticate. Managing session length is foundational to cloud security, and it ensures access to Google Cloud services is time-bound after a successful authentication. Google Cloud session management provides flexible options for setting up session controls based on your organization’s security policy needs. To further improve security for our customers, we are rolling out a recommended default 16-hour session length to existing Google Cloud customers.

Many apps and services can access sensitive data or perform sensitive actions. It’s important that only specific users can access that information and functionality, and only for a limited period of time. By requiring periodic reauthentication, you make it more difficult for unauthorized people to obtain that data if they gain access to credentials or devices.

Enhancing your security with Google Cloud session controls
There are two tiers of session management for Google Cloud: one for managing user connections to Google services (e.g., Gmail on the web), and another for managing user connections to Google Cloud services (e.g., the Google Cloud console). This blog outlines the session control updates for Google Cloud services. Google Cloud customers can quickly set up session length controls by selecting the default recommended reauthentication frequency. For existing customers who have session length configured to Never Expire, we are updating the session length to 16 hours.

Google Cloud session control: Reauthentication policy
This new default session length rollout helps our customers gain situational awareness of their security posture. It ensures that customers have not mistakenly granted infinite session length to users or apps using OAuth user scopes. After the time-bound session expires, users need to reauthenticate with their login credentials to continue their access. The session length changes impact the following services and apps:
- Google Cloud Console
- gcloud command-line tool
- Any other app that requires Google Cloud scopes

The session control settings can be customized for specific organizations, and the policies apply to all users within that organization. When choosing a session length, admins have the following options:
- Choose from a range of predefined session lengths, or set a custom session length between 1 and 24 hours. This is a timed session length that expires the session after the chosen duration, regardless of the user’s activity.
- Configure whether users can use just their password, or are required to use a Security Key to reauthenticate.

How to get started
The session length will be set to 16 hours by default for existing customers and can be enabled at the Organizational Unit (OU) level. Here are the steps for admins and users to get started:
- Admins: Find the session length controls at Admin console > Security > Access and data control > Google Cloud session control. Visit the Help Center to learn more about how to set session length for Google Cloud services.
- End users: If a session ends, users simply need to log in to their account again using the familiar Google login flow.
Sample Use Cases
Third-party SAML identity providers and session length controls: If your organization uses a third-party SAML-based identity provider (IdP), the cloud sessions will expire, but the user may be transparently re-authenticated (i.e., without actually being asked to present their credentials) if their session with the IdP is valid at that time. This is expected behavior, as Google will redirect the user to the IdP and accept a valid assertion from the IdP. To ensure that users are required to reauthenticate at the correct frequency, evaluate the configuration options on your IdP and review the Help Center article Set up SSO via a third-party identity provider.

Trusted applications and session length controls: Some apps are not designed to gracefully handle the reauthentication scenario, causing confusing app behaviors or stack traces. Other apps are deployed for server-to-server use cases with user credentials instead of the recommended service account credentials, in which case there is no user to periodically reauthenticate. If you have specific apps like this and you do not want them to be impacted by session length reauthentication, the org admin can add these apps to the trusted list for your organization. This exempts the app from session length constraints, while implementing session controls for the rest of the apps and users within the organization.

General Availability & Rollout Plan
- Available to all Google Cloud customers.
- Gradual rollout starting on March 15, 2023.

Helpful links
- Help Center: Set session length for Google Cloud services
- Help Center: Control which third-party & internal apps access Google Workspace data
- Help Center: Use a security key for 2-Step Verification
- Creating and managing organizations
- Using OAuth 2.0 for Server to Server Applications

Related article: Introducing IAM Deny, a simple way to harden your security posture at scale. IAM Deny is our latest new capability for Google Cloud IAM and can help create more effective security guardrails.
Source: Google Cloud Platform

Google Cloud and MongoDB expand partnership to support startups

Scale your startup from ideation to growth with MongoDB Atlas on Google Cloud. By providing an integrated set of database and data services and a unified developer experience, MongoDB Atlas on Google Cloud lets companies at all stages build applications that are highly available, performant at global scale, and compliant with the most demanding security and privacy standards. Today we’re excited to announce that we’re expanding our partnership to also support startups together. In addition to the technology, each company has dedicated programs to help startups scale faster with financial, business and technical support.

Harness the power of our partnership for startups
There are two key ways in which we believe our partnership can help startups scale faster, more safely and more successfully:

1. Our technologies
MongoDB Atlas allows you to run a fully managed developer data platform on Google Cloud in just a few clicks. Set up, scale, and operate MongoDB Atlas anywhere in the world with the versatility, security and high availability you need. Run MongoDB Atlas on Google Cloud to gain true multi-cloud capabilities, best-in-class automation, workload intelligence, and proven practices with the most modern developer data platform available. With the Pay-As-You-Go option on the Google Cloud Marketplace, you only pay for the Atlas resources you use, with no upfront commitment required.

Got global customers? Google Cloud is wherever they are, and MongoDB Atlas makes it easy to distribute your data for low-latency performance and global compliance needs. Selling to a tough enterprise crowd? Data in MongoDB Atlas is protected from the start with preconfigured security features for authentication, authorization, and encryption, and is stored in the same zero-trust, shared-risk model that Google itself depends on. As partners, Google Cloud and MongoDB co-engineer streamlined integrations between MongoDB Atlas and many Google Cloud services to make it easier to deploy apps (Dataflow, GKE, Cloud Run), pull in data from other sources (Apigee), run in flexible multi-cloud environments (Anthos), deploy the MEAN stack and Terraform easily, and analyze data (BigQuery, Vertex AI).

2. Our dedicated startup programs
The Google for Startups Cloud program provides credits for Google Cloud and Google Workspace, access to training programs and technical support via a dedicated Startup Success Manager, our global Google Cloud Startup Community, and co-marketing opportunities for select startups:
- Credits: If you’re early in your startup journey and not yet backed with equity funding, you’ll have access to $2,000 of Google Cloud credits. If you are, your first year of Cloud and Firebase usage is covered with credits up to $100,000. Plus, in year two, get 20% of Google Cloud and Firebase usage covered, up to an additional $100,000 in credits*.
- Google-wide discounts: Free Google Workspace Business Plus for new signups and monthly credits on Google Maps Platform for 12 months for new signups.
- Training: Google Cloud Skills Boost credits giving access to online courses and hands-on labs.
- Technical support: Get timely help 24/7 through Enhanced Support by applying Google Cloud credits.
- Business Support & Networking: Access to a Startup Success Manager, our global Google Cloud Startup Community, and co-marketing opportunities for select startups.

The MongoDB for Startups program provides credits for MongoDB Atlas, dedicated onboarding support, a wide range of hands-on training available on demand, a complimentary technical advisor session, and co-marketing opportunities to help you amplify your business:
- Credits: Free credits for MongoDB Atlas, including usage of the core Atlas Database, in addition to extended data services for full-text search, data visualization, real-time analytics, building event-driven applications and more to supercharge your data infrastructure.
- Dedicated Onboarding Support: Bespoke onboarding resources tailored to help you successfully adopt and scale MongoDB Atlas.
- Hands-on Training: Free on-demand access to MongoDB’s library of training with 150+ hands-on labs.
- Expert Technical Advice: A dedicated one-on-one session with our technical experts for personalized recommendations to add scale and optimize.
- Go-to-Market Opportunities: Engage with MongoDB’s diverse community of startups and developers through networking events, and work with MongoDB on co-marketing initiatives to amplify your startup’s growth and promote the innovative tech you are building.

Startups finding success with Google Cloud and MongoDB Atlas startup programs
Many startups have found these integrations and the interoperability between Google Cloud and MongoDB Atlas to be a powerful combination.

Thunkable, a no-code app development platform, has found quick success (3 million users) with a team of just four to six engineers. “The engineering team has always been focused on building the product,” said Thunkable engineer Jose Dominguez. “So not having to worry about the database was a great win for us. It allowed us to iterate very fast…. As we scale, supporting more enterprise customers, we don’t have to worry about database management issues.”

Phonic — a software company that applies intelligent analytics to qualitative research in order to break down barriers between qualitative and quantitative data — uses Google Cloud for distributed file storage, App Engine for auto-scaling, and MongoDB Atlas to support its needs for flexible databases that can adjust to frequent schema changes.

Next steps
To apply to join the Google for Startups Cloud program and the MongoDB Atlas Startup program, and to learn more about the benefits each offers, visit our partnership page. Companies enrolled in both startup programs will have exclusive access to joint events, technical support, bespoke offers and much more.
Source: Google Cloud Platform

Rapidly expand the reach of Spanner databases with read-only replicas and zero-downtime moves

As Google Cloud’s fully managed relational database that offers near-unlimited scale, strong consistency, and availability up to 99.999%, Cloud Spanner powers applications at any scale in industries such as financial services, games, retail, and healthcare. When you set up a Spanner instance, you can choose from two different kinds of configurations: regional and multi-regional. Both configuration types offer high availability, near-unlimited scale, and strong consistency. Regional configurations offer 99.99% availability and can survive zone outages. Multi-regional configurations offer 99.999% availability and can survive two zone outages and entire regional outages.

Today, we’re announcing a number of significant enhancements to Spanner’s regional and multi-regional capabilities:
- Configurable read-only replicas let you add read-only replicas to any regional or multi-regional Spanner instance to deliver low-latency reads to clients in any geography.
- Spanner’s zero-downtime instance move service gives you the freedom to move your production Spanner instances from any configuration to another on the fly, with zero downtime, whether it’s regional, multi-regional, or a custom configuration with configurable read-only replicas.
- We’re also dropping the list prices of our nine-replica global multi-regional configurations nam-eur-asia1 and nam-eur-asia3 to make them even more affordable for global workloads.

Let’s take a look at each of these enhancements in a bit more detail.

Configurable read-only replicas
One of Spanner’s most powerful capabilities is its ability to deliver high performance across vast geographic territories. Spanner achieves this performance with read-only replicas. As its name suggests, a read-only replica contains an entire copy of the database and can serve stale reads without requiring a round trip back to the leader region. In doing so, read-only replicas deliver low-latency stale reads to nearby clients and help increase a node’s overall read scalability.

For example, a global online retailer would likely want to ensure that its customers worldwide can search and view products from its catalog efficiently. This product catalog would be ideally suited for Spanner’s nam-eur-asia1 multi-region configuration, which has read/write replicas in the United States and read-only replicas in Belgium and Taiwan. This would ensure that customers can view the product catalog with low latency around the globe.

Until today, read-only replicas were available in several multi-region configurations: nam6, nam9, nam12, nam-eur-asia1, and nam-eur-asia3. But now, with configurable read-only replicas, you can add read-only replicas to any regional or multi-regional Spanner instance so that you can deliver low-latency stale reads to clients everywhere. To add read-only replicas to a configuration, go to the Create Instance page in the Google Cloud console. You’ll now see a “Configure read-only replicas” section. In this section, select the region for the read-only replica, along with the number of replicas you want per node, and create the instance. It’s as simple as that! The following snapshot shows how to add a read-only replica in us-west2 (Los Angeles) to the nam3 multi-regional configuration.
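
Once a read-only replica is running near your clients, those clients can issue stale reads that the replica serves locally instead of routing to the leader region. Here is a minimal sketch using the Spanner Python client; the instance, database, and table names are hypothetical:

```python
import datetime

from google.cloud import spanner

client = spanner.Client()
instance = client.instance("retail-instance")  # hypothetical instance ID
database = instance.database("catalog-db")     # hypothetical database ID

# A read at a timestamp 15 seconds in the past can be served entirely by a
# nearby read-only replica, avoiding the round trip a strong read would need.
with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
    rows = snapshot.execute_sql(
        "SELECT ProductId, Name, Price FROM Products LIMIT 10"  # hypothetical table
    )
    for row in rows:
        print(row)
```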
As we roll out configurable read-only replicas, we do not yet offer read-only replicas in every configuration/region pair. If you find that your desired read-only replica region is not yet listed, simply fill out this request form. Configurable read-only replicas are available today for $1/replica/node-hour plus storage costs. Full details on pricing are available at Cloud Spanner pricing.

Also announcing: Spanner’s zero-downtime instance move service
Now that you can use configurable read-only replicas to create new instance configurations that are tailored to your specific needs, how can you migrate your current Spanner instances to these new configurations without any downtime? Spanner database instances are mission-critical and can scale to many petabytes and millions of queries per second. So you can imagine that moving a Spanner instance from one configuration to another — say us-central1 in Iowa to nam3 with a read-only replica in us-west2 — is no small feat. Factor in Spanner’s stringent availability of up to 99.999% while serving traffic at extreme scale, and it might seem impossible to move a Spanner instance from us-central1 to nam3 with zero downtime.

However, that’s exactly what we’re announcing today! With the instance move service, now generally available, you can request a zero-downtime, live migration of your Spanner instances from any configuration to any other configuration — whether they are regional, multi-regional, or custom configurations with configurable read-only replicas. To request an instance move, select “contact Google” on the Edit Instance page of the Google Cloud console and fill out the instance move request form. Once you make a move request, we’ll contact you to let you know the start date of your instance configuration move, and then move your configuration with zero downtime and no code changes while preserving the SLA guarantees of your configuration. When moving an instance, both the source and destination instance configurations are subject to hourly compute and storage charges, as outlined in Cloud Spanner pricing. Depending on your environment, instance moves can take anywhere from a few hours to a few days to complete. Most importantly, during the instance move, your Spanner instance continues to run without any downtime, and can continue to rely on Spanner’s high availability, near-unlimited scale, and strong consistency to serve your mission-critical production workloads.

Price drops for global 9-replica Spanner multi-regional configurations
Finally, we’re also pleased to announce that we’re making it even more compelling to use Spanner’s global configurations nam-eur-asia1 and nam-eur-asia3 by dropping the compute list price of these configurations from $9/node/hour to $7/node/hour. With write quorums in North America and read-only replicas in both Europe and Asia, these configurations are perfectly suited for global applications with strict performance requirements and 99.999% availability. And now, they’re even more cost-effective to use!

Learn more
- If you are new to Spanner, try Spanner at no charge with a 90-day free trial instance.
- Learn more about multi-regional Spanner configurations by reading Demystifying Cloud Spanner multi-region configurations.
Source: Google Cloud Platform

Node hosting on Google Cloud: a pillar of Web3 infrastructure

Blockchain nodes are the physical machines that power the virtual computer that comprises a blockchain network and store the distributed ledger. There are several types of blockchain nodes, such as:
- RPC nodes, which DApps, wallets, and other blockchain “clients” use as their blockchain “gateway” to read or submit transactions
- Validator nodes, which secure the network by participating in consensus and producing blocks
- Archive nodes, which indexers use to get the full history of on-chain transactions

Deploying and managing nodes can be costly, time consuming, and complex. Cloud providers can help abstract away the complexities of node hosting so that Web3 developers do not need to think about infrastructure. In this article, we’ll explore both how organizations can avoid challenges by running their own nodes on Google Cloud, and how in many scenarios, our fully managed offering, Blockchain Node Engine, can make node hosting even easier.

Figure 1 – Blockchain nodes

Why running nodes is often difficult and costly
Developers often choose a mix of deploying their own nodes or using shared nodes provided by third parties. Free RPC nodes are sufficient to start exploring but may not offer the required latency or performance. Web3 infrastructure providers’ APIs or dedicated nodes are another option, letting developers focus on their app without worrying about the underlying blockchain node infrastructure. There are situations, however, in which it is beneficial to run your own nodes in the cloud. For example:
- Privacy is too critical for RPC calls to go over the public internet.
- Certain regulated industries require organizations to operate in a specific jurisdiction and control their nodes.
- Node hardware needs to be configured for optimal performance.
- A DApp requires low latency to the node.
- An organization is a validator with a significant stake and needs to be in control of the uptime and security of its validator node.
- An organization needs predictable and consistent high performance that will not be impacted by others using its node.
- In Ethereum, the fee recipient is an address nominated by a validator to receive tips from user transactions. The node controls the fee recipient, not the validator client, so to guarantee control of the fee recipient, the organization must run its own nodes.

Figure 2 – Dedicated blockchain nodes

Organizations can face challenges running their own nodes. At a macro level, node infrastructure challenges fall into one of these buckets:
- Sustainability (impact on the environment)
- Security (DDoS attacks, private key management)
- Performance (can the hardware keep up with the blockchain software)
- Scalability (how a network starts and grows)

In addition, there is a learning curve related to how each protocol works (e.g., Ethereum, Solana, Arbitrum, Aptos, etc.), what hardware specifications the protocol requires (compute, memory, disk, network), and how to optimize (e.g., sync modes).

Hyperscalers have been perceived as not performant enough and too expensive. As a result, a lot of the Web3 infrastructure today runs in bare-metal server providers or in one hyperscaler. For example, as of September 20, 2022, more than 40% of Solana validators ran in Hetzner. But then, Hetzner blocked Solana activity on its servers, causing disruption to the protocol. Similarly, as of October 2022, 5 out of the top 10 Solana validators by SOL staked (representing 8.3% of all staked SOL) ran in AWS, per validators.app.
Figure 2 – Dedicated blockchain nodes

Organizations can face challenges running their own nodes. At a macro level, node infrastructure challenges fall into one of these buckets:

- Sustainability (impact on the environment)
- Security (DDoS attacks, private key management)
- Performance (can the hardware keep up with the blockchain software?)
- Scalability (how a network starts and grows)

In addition, there is a learning curve related to how each protocol works (e.g., Ethereum, Solana, Arbitrum, Aptos), what hardware specifications the protocol requires (compute, memory, disk, network), and how to optimize (e.g., sync modes).

Hyperscalers have been perceived as not performant enough and too expensive. As a result, much of today’s Web3 infrastructure runs with bare-metal server providers or in a single hyperscaler. For example, as of September 20, 2022, more than 40% of Solana validators ran on Hetzner. Hetzner then blocked Solana activity on its servers, causing disruption to the protocol. Similarly, as of October 2022, 5 of the top 10 Solana validators by SOL staked (representing 8.3% of all staked SOL) ran on AWS, per validators.app.

Simply put, this concentration of validators creates a dependency on only a select few hosting providers. As a result, an outage or a ban from a single provider can lead to a material failure of the underlying protocol. Moreover, this centralization goes against the Web3 ethos of decentralization and diversification. Healthy protocols require a diversity of participants, clients, and geographic distribution. In fact, the Solana Foundation, via its delegation program, incentivizes infrastructure diversity with its data center criteria.

Running nodes on Google Cloud for security, resiliency, and speed

To avoid the aforementioned challenges and improve decentralization on major protocols, organizations have been using Google Cloud to host nodes for several years. For example, we are a validator for protocols such as Aptos, Arbitrum, Solana, and Hedera, and Web3 customers that use Google Cloud to power nodes include Blockdaemon, Bullish, Coinbase, and Dapper Labs. We support a diverse set of ecosystems and use cases. For example:

- Nodes can run in Google Cloud regardless of the protocol (we run nodes for Ethereum, layer 2s, alternative layer 1s, and more). Please note that Proof of Work mining is restricted.
- We have nodes running in both live and test networks, which is important for building up the know-how each protocol requires.
- While these examples are public (permissionless) networks, we also support the private networks favored by some of our regulated customers.

Streamlining and accelerating node hosting with Blockchain Node Engine

Blockchain Node Engine provides streamlined provisioning and a secure environment as a fully managed service. A developer using Blockchain Node Engine doesn’t need to worry about configuring or running nodes; Blockchain Node Engine does all of this so that the developer can focus on building a superb DApp. We’ve simplified the process by collapsing all the required node hosting steps into one.

For protocols not supported by Blockchain Node Engine, or if an organization wants to manage its own nodes, Google Cloud services cover the organization’s full Web3 journey:

- An organization might start with a simple Compute Engine VM instance, using the machine family that works for the protocol (we support the most demanding protocols, including Solana); see the provisioning sketch after this list.
- It can then make the architecture more resilient with a managed instance group fronted by Cloud Load Balancing.
- Next, the organization might secure its user-facing nodes by fronting them with Cloud Armor for web application firewall (WAF) and DDoS protection.
- This node hosting infrastructure is fully automated and integrated with the organization’s DevOps pipelines, helping it seamlessly accelerate development.
- As the organization grows and its apps attract more traffic, Kubernetes becomes a natural choice for health monitoring and management, and blockchain nodes can be migrated to GKE node pools (pun intended). Note: organizations can also start directly in GKE rather than Compute Engine.
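As an illustration of the first step, here is a minimal sketch that provisions a Compute Engine VM for a node with the google-cloud-compute Python client. The project, zone, machine type, image, and disk size are placeholders; the right values depend on the protocol and client software being run.

```python
from google.cloud import compute_v1

PROJECT_ID = "my-web3-project"   # placeholder project
ZONE = "us-central1-a"           # placeholder zone

# Boot disk: image and size are placeholders; many node clients need large SSDs.
boot_disk = compute_v1.AttachedDisk(
    boot=True,
    auto_delete=True,
    initialize_params=compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-12",
        disk_size_gb=2000,
        disk_type=f"zones/{ZONE}/diskTypes/pd-ssd",
    ),
)

instance = compute_v1.Instance(
    name="blockchain-node-1",
    machine_type=f"zones/{ZONE}/machineTypes/n2-standard-16",  # placeholder machine type
    disks=[boot_disk],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

# Create the VM and wait for the operation to complete.
operation = compute_v1.InstancesClient().insert(
    project=PROJECT_ID, zone=ZONE, instance_resource=instance
)
operation.result()
print(f"Created {instance.name} in {ZONE}")
```

From there, the same instance definition can become an instance template for a managed instance group, which is what Cloud Load Balancing and Cloud Armor then front.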
As the organization continues to grow, it can benefit from cloud-native services running close to its nodes:

- Customers use caching solutions such as Cloud CDN, Memorystore, and Spanner (as blockchain.com does) so that most requests never have to hit the nodes.
- On the data side, the organization can build pipelines that extract data from the node and ingest it into BigQuery, making it available for analysis and ML (see the sketch after this list).
- It can also leverage Confidential Computing to keep data encrypted while in use (e.g., for Multi-Party Computation, as Bullish does).
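As a rough sketch of such a pipeline, the snippet below reads the latest block header from a node over standard Ethereum JSON-RPC and streams one row into BigQuery. The node endpoint, table name, and schema are placeholders, and a production pipeline would more likely use Pub/Sub or Dataflow than a single script.

```python
import requests
from google.cloud import bigquery

NODE_URL = "http://10.0.0.5:8545"          # placeholder node endpoint
TABLE_ID = "my-web3-project.chain.blocks"  # placeholder BigQuery table

# Fetch the latest block header (transaction hashes only) from the node.
payload = {
    "jsonrpc": "2.0",
    "method": "eth_getBlockByNumber",
    "params": ["latest", False],
    "id": 1,
}
block = requests.post(NODE_URL, json=payload, timeout=10).json()["result"]

# Flatten the fields we care about into a row matching the (assumed) table schema.
row = {
    "number": int(block["number"], 16),
    "hash": block["hash"],
    "timestamp": int(block["timestamp"], 16),
    "tx_count": len(block["transactions"]),
}

# Streaming insert into BigQuery; insert_rows_json returns a list of errors, if any.
errors = bigquery.Client().insert_rows_json(TABLE_ID, [row])
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
```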
Next steps

As we’ve shown with the formation of both customer-facing and product teams dedicated to Web3, Google Cloud is inspired by the Web3 community and grateful to work with so many innovators within it. We’ve been excited to see our work in open-source projects, security, reliability, and sustainability address core needs we see in Web3 communities, and we look forward to seeing more creative decentralized apps and services as Web3 businesses continue to accelerate. To get started with Blockchain Node Engine or explore hosting your own nodes in Google Cloud, contact sales or visit our Google Cloud for Web3 page.

Acknowledgements: I’d like to thank customer engineers David Mehi and Sam Padilla and staff software engineer Ross Nicoll, who helped me better understand node hosting, and Richard Widmann, head of strategy for digital assets, for his review of this post.