Rapidly expand the reach of Spanner databases with read-only replicas and zero-downtime moves

As Google Cloud’s fully managed relational database offering near-unlimited scale, strong consistency, and availability of up to 99.999%, Cloud Spanner powers applications at any scale in industries such as financial services, games, retail, and healthcare. When you set up a Spanner instance, you can choose between two kinds of configurations: regional and multi-regional. Both configuration types offer high availability, near-unlimited scale, and strong consistency. Regional configurations offer 99.99% availability and can survive zone outages. Multi-regional configurations offer 99.999% availability and can survive two zone outages as well as entire regional outages.

Today, we’re announcing a number of significant enhancements to Spanner’s regional and multi-regional capabilities:

- Configurable read-only replicas let you add read-only replicas to any regional or multi-regional Spanner instance to deliver low-latency reads to clients in any geography.
- Spanner’s zero-downtime instance move service gives you the freedom to move your production Spanner instances from any configuration to another on the fly, with zero downtime, whether the configuration is regional, multi-regional, or a custom configuration with configurable read-only replicas.
- We’re also dropping the list prices of our nine-replica global multi-regional configurations, nam-eur-asia1 and nam-eur-asia3, to make them even more affordable for global workloads.

Let’s take a look at each of these enhancements in a bit more detail.

Configurable read-only replicas

One of Spanner’s most powerful capabilities is its ability to deliver high performance across vast geographic territories. Spanner achieves this with read-only replicas. As the name suggests, a read-only replica contains an entire copy of the database and can serve stale reads without requiring a round trip back to the leader region. In doing so, read-only replicas deliver low-latency stale reads to nearby clients and help increase overall read scalability.

For example, a global online retailer would likely want to ensure that its customers worldwide can search and view products from its catalog efficiently. This product catalog would be ideally suited to Spanner’s nam-eur-asia1 multi-region configuration, which has read/write replicas in the United States and read-only replicas in Belgium and Taiwan, ensuring that customers around the globe can view the product catalog with low latency.
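To take advantage of read-only replicas, clients issue stale reads, which the nearest replica can serve locally. As a minimal sketch, assuming a Python environment with the google-cloud-spanner client library and hypothetical instance, database, and table names, a 15-second exact-staleness read looks like this:

```python
import datetime

from google.cloud import spanner

client = spanner.Client()
instance = client.instance("my-instance")        # hypothetical instance ID
database = instance.database("product-catalog")  # hypothetical database ID

# A stale read with 15 seconds of exact staleness can be served by a nearby
# read-only replica without a round trip to the leader region.
with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
    rows = snapshot.execute_sql("SELECT ProductId, Name FROM Products")
    for row in rows:
        print(row)
```

Strong reads remain available whenever the freshest data is required; staleness bounds simply let you trade a bounded amount of freshness for latency.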
Until today, read-only replicas were available in several multi-region configurations: nam6, nam9, nam12, nam-eur-asia1, and nam-eur-asia3. Now, with configurable read-only replicas, you can add read-only replicas to any regional or multi-regional Spanner instance and deliver low-latency stale reads to clients everywhere. To add read-only replicas to a configuration, go to the Create Instance page in the Google Cloud console. You’ll now see a “Configure read-only replicas” section. In this section, select the region for the read-only replica along with the number of replicas you want per node, and create the instance. It’s as simple as that! The following snapshot shows how to add a read-only replica in us-west2 (Los Angeles) to the nam3 multi-regional configuration.

As we roll out configurable read-only replicas, we do not yet offer read-only replicas in every configuration/region pair. If you find that your desired read-only replica region is not yet listed, simply fill out this request form. Configurable read-only replicas are available today for $1/replica/node-hour plus storage costs. Full details are available at Cloud Spanner pricing.

Also announcing: Spanner’s zero-downtime instance move service

Now that you can use configurable read-only replicas to create new instance configurations tailored to your specific needs, how can you migrate your current Spanner instances to these new configurations without any downtime? Spanner instances are mission critical and can scale to many petabytes and millions of queries per second, so moving a Spanner instance from one configuration to another — say, us-central1 in Iowa to nam3 with a read-only replica in us-west2 — is no small feat. Factor in Spanner’s stringent availability of up to 99.999% while serving traffic at extreme scale, and it might seem impossible to move a Spanner instance from us-central1 to nam3 with zero downtime.

However, that’s exactly what we’re announcing today! With the instance move service, now generally available, you can request a zero-downtime, live migration of your Spanner instances from any configuration to any other configuration — whether regional, multi-regional, or a custom configuration with configurable read-only replicas. To request an instance move, select “contact Google” on the Edit Instance page of the Google Cloud console and fill out the instance move request form. Once you make a move request, we’ll contact you with the start date of your instance configuration move, and then move your configuration with zero downtime and no code changes while preserving the SLA guarantees of your configuration.

When moving an instance, both the source and destination instance configurations are subject to hourly compute and storage charges, as outlined in Cloud Spanner pricing. Depending on your environment, instance moves can take anywhere from a few hours to a few days to complete. Most importantly, during the move your Spanner instance continues to run without any downtime and can continue to rely on Spanner’s high availability, near-unlimited scale, and strong consistency to serve your mission-critical production workloads.

Price drops for global nine-replica Spanner multi-regional configurations

Finally, we’re pleased to announce that we’re making Spanner’s global configurations nam-eur-asia1 and nam-eur-asia3 even more compelling by dropping their compute list price from $9/node/hour to $7/node/hour. With write quorums in North America and read-only replicas in both Europe and Asia, these configurations are perfectly suited to global applications with strict performance requirements and 99.999% availability. And now, they’re even more cost-effective to use!

Learn more

If you are new to Spanner, try Spanner at no charge with a 90-day free trial instance. Learn more about multi-regional Spanner configurations by reading Demystifying Cloud Spanner multi-region configurations.
Quelle: Google Cloud Platform

Node hosting on Google Cloud: a pillar of Web3 infrastructure

Blockchain nodes are the physical machines that power the virtual computer comprising a blockchain network and store the distributed ledger. There are several types of blockchain nodes, such as:

- RPC nodes, which DApps, wallets, and other blockchain “clients” use as their blockchain “gateway” to read or submit transactions
- Validator nodes, which secure the network by participating in consensus and producing blocks
- Archive nodes, which indexers query to get the full history of on-chain transactions

Deploying and managing nodes can be costly, time consuming, and complex. Cloud providers can help abstract away the complexities of node hosting so that Web3 developers do not need to think about infrastructure. In this article, we’ll explore how organizations can avoid these challenges by running their own nodes on Google Cloud, and how, in many scenarios, our fully managed offering, Blockchain Node Engine, can make node hosting even easier.

Figure 1 – Blockchain nodes

Why running nodes is often difficult and costly

Developers often choose a mix of deploying their own nodes and using shared nodes provided by third parties. Free RPC nodes are sufficient to start exploring but may not offer the required latency or performance. Web3 infrastructure providers’ APIs or dedicated nodes are another option, letting developers focus on their app without worrying about the underlying blockchain node infrastructure. There are situations, however, in which it is beneficial to run your own nodes in the cloud. For example:

- Privacy is too critical for RPC calls to go over the public internet.
- Certain regulated industries require organizations to operate in a specific jurisdiction and control their nodes.
- Node hardware needs to be configured for optimal performance.
- A DApp requires low latency to the node.
- An organization is a validator with a significant stake and needs to control the uptime and security of its validator node.
- An organization needs predictable, consistently high performance that will not be impacted by others using its node.
- In Ethereum, the fee recipient is an address nominated by a validator to receive tips from user transactions. The node, not the validator client, controls the fee recipient, so to guarantee control of the fee recipient, the organization must run its own nodes.

Figure 2 – Dedicated blockchain nodes

Organizations can face challenges running their own nodes. At a macro level, node infrastructure challenges fall into one of these buckets:

- Sustainability (impact on the environment)
- Security (DDoS attacks, private key management)
- Performance (whether the hardware can keep up with the blockchain software)
- Scalability (how a network starts and grows)

In addition, there is a learning curve related to how each protocol works (e.g., Ethereum, Solana, Arbitrum, Aptos), what hardware specifications the protocol requires (compute, memory, disk, network), and how to optimize (e.g., sync modes).

Hyperscalers have been perceived as not performant enough and too expensive. As a result, a lot of Web3 infrastructure today runs on bare-metal server providers or in a single hyperscaler. For example, as of September 20, 2022, more than 40% of Solana validators ran in Hetzner. But then Hetzner blocked Solana activity on its servers, causing disruption to the protocol. Similarly, as of October 2022, 5 of the top 10 Solana validators by SOL staked (representing 8.3% of all staked SOL) ran in AWS, per validators.app.
Simply put, this concentration of validators creates a dependency on only a select few hosting providers. As a result, an outage – or a ban – from a single provider can lead to a material failure of the underlying protocol. Moreover, this centralization goes against the Web3 ethos of decentralization and diversification. Healthy protocols require a diversity of participants, clients, and geographic distribution. In fact, the Solana Foundation, via its delegation program, incentivizes infrastructure diversity with its data center criteria.

Running nodes on Google Cloud for security, resiliency, and speed

To avoid the aforementioned challenges and improve decentralization on major protocols, organizations have been using Google Cloud to host nodes for several years. For example, we are a validator for protocols like Aptos, Arbitrum, Solana, and Hedera, and Web3 customers that use Google Cloud to power nodes include Blockdaemon, Bullish, Coinbase, and Dapper Labs. We support a diverse set of ecosystems and use cases. For example:

- Nodes can run in Google Cloud regardless of the protocol (we run nodes for Ethereum, layer 2s, alternative layer 1s, etc.). Please note that proof-of-work mining is restricted.
- We have nodes running in both live and test networks, which is important for the learnings required for each protocol.
- While these examples are public (permissionless) networks, we also support the private networks favored by some of our regulated customers.

Streamlining and accelerating node hosting with Blockchain Node Engine

Blockchain Node Engine provides streamlined provisioning and a secure environment as a fully managed service. A developer using Blockchain Node Engine doesn’t need to worry about configuring or running nodes. Blockchain Node Engine does all this so that the developer can focus on building a superb DApp. We’ve simplified this process and collapsed all the required node hosting steps into one.

For protocols not supported by Blockchain Node Engine, or if an organization wants to manage its own nodes, Google Cloud services are built to cover an organization’s full Web3 journey:

- An organization might start with a simple Compute Engine VM instance using the machine family that works for the protocol (see the sketch after this list). We support the most demanding protocols, including Solana.
- Then, they’ll make their architecture more resilient with a managed instance group fronted by Cloud Load Balancing.
- Next, the organization might secure the user-facing nodes by fronting them with Cloud Armor as a web application firewall and for DDoS protection.
- This node hosting infrastructure is fully automated and integrated with the organization’s DevOps pipelines, helping them to seamlessly accelerate development.
- As the organization grows and its apps attract more traffic, Kubernetes becomes a natural choice for health monitoring and management. Blockchain nodes can be migrated to GKE node pools (pun intended). (Note: Organizations can also start directly in GKE, rather than Compute Engine.)
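As a rough illustration of that first step, the following sketch provisions a dedicated node VM with the google-cloud-compute Python client library. The machine type, disk size, and image here are illustrative assumptions, not protocol recommendations; size them according to the protocol’s hardware requirements.

```python
# A minimal sketch (not an official reference) of provisioning a node VM.
from google.cloud import compute_v1

def create_node_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/n2-standard-16",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                    disk_size_gb=2000,  # chain data grows quickly; monitor and resize
                    disk_type=f"zones/{zone}/diskTypes/pd-ssd",
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation completes
```

From there, the same instance definition can become an instance template behind a managed instance group, which is what the load-balanced and autohealing steps above build on.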
As the organization continues to grow, it can benefit from access to cloud-native services close to the nodes. For example, customers use various caching solutions like Cloud CDN, Memorystore, and/or Spanner (as blockchain.com does) so that most requests never have to hit the nodes. On the data side, the organization can implement pipelines that extract data from the node and ingest it into BigQuery to make it available for analysis and ML. It can also leverage Confidential Computing to keep data encrypted while in use (e.g., Multi-Party Computation at Bullish).

Next steps

As we’ve shown with the formation of both customer-facing and product teams dedicated to Web3, Google Cloud is inspired by the Web3 community and grateful to work with so many innovators within it. We’ve been excited to see our work in open-source projects, security, reliability, and sustainability address core needs of Web3 communities, and we look forward to seeing more creative decentralized apps and services as Web3 businesses continue to accelerate. To get started with Blockchain Node Engine or explore hosting your own nodes on Google Cloud, contact sales or visit our Google Cloud for Web3 page.

Acknowledgements: I’d like to thank customer engineers David Mehi and Sam Padilla and staff software engineer Ross Nicoll, who helped me to better understand node hosting, and Richard Widmann, digital assets head of strategy, for his review of this post.
Quelle: Google Cloud Platform

Snap partners with Google Cloud to upskill teams around the globe

Snap Inc., the developer of the Snapchat platform, has become a global leader in the social media industry. Snap runs its business on Google Cloud and relies on Premium Support to optimize its cloud business imperatives. When Snap sought new ways to extract business value from its cloud data, it turned to its assigned Google Technical Account Management team to develop a means of strengthening and expanding cloud skills to meet that goal.

The Technical Account Managers (TAMs) serve as an extension of the Snap Engineering Program Management team, delivering deep Google Cloud expertise and guiding Snap on its cloud journey, including mapping the essential cloud skills Snap needs to achieve its broader business strategy. Because Snap employees had varying levels of cloud expertise, the TAM team needed to design a tailored learning program to optimally meet Snap’s needs.

The TAM team engaged the Google Cloud Customer Experience (CCE) team, which includes cloud support, learning, consulting, customer success, and customer insight and advocacy services. The Google team used a skill training survey to identify the Snap team’s existing skills and map them against the targeted cloud skills, enabling them to design a learning curriculum to boost employee productivity, enable efficient scaling, and strengthen the mitigation of technical issues while optimizing Snap’s environment for issue prevention. In addition, Google Cloud offered instructor-led virtual training focused on Looker, Kubernetes, and other topics of interest, and a tailored Snap Global Training Program launched in the last quarter of 2022.

Including training for Looker expanded Snap employees’ ability to reach data-driven decisions. Snap also took advantage of the Google Cloud Skills Boost licenses included with Premium Support, which deliver access to a learning platform with over 700 courses and learning labs. Next, the TAM and CCE teams were tasked with raising internal awareness of the global Snap training program, so they developed a comprehensive marketing and communications plan to drive promotion over a twelve-week period to prospective trainees through newsletters, email groups, Slack channels, engineering meetings, and an internal site dedicated to Snap training resources.

“Partnering with Google to provide Snap engineers with learning opportunities aligns with Snap’s values of Kind, Smart and Creative. Investing in growing our team members’ skills helps them personally advance and helps our business achieve our goals.” — Michele Vaughan, Snap Engineering Program Manager

The Google-led Snap Global Training Program includes hands-on, instructor-led training, in-person gamified Cloud Hero learning events, and access to on-demand Google Cloud Skills Boost labs. Over 100 trainees at Snap initially participated in the instructor-led training, and more than 500 employees completed the on-demand labs. The program has enabled Snap employees to develop and strengthen skills in targeted cloud areas, including data visualization, AI and ML, and Kubernetes. In addition, it sparked a Looker Advisory Professional Services initiative to advise Snap on best practices and improvements in its usage of Looker. These skills enable Snap to extract increased value from its cloud data and guide the future of its business, sustaining a competitive advantage in a dynamic marketplace.
To learn more about how Google Cloud Customer Experience can support your organization’s business transformation journey with cloud support, learning, consulting, customer success, and customer insight and advocacy services, visit:

- Premium Support, to empower business innovation with expert-led technical guidance and cloud support
- Google Cloud Training & Certification, to expand and diversify your team’s cloud education
Quelle: Google Cloud Platform

Three new Specializations help partners digitally transform customers

Two of our most enduring commitments to partners are our mission to provide you with the support, tools, and resources you need to grow and drive customer delivery excellence, and to ensure Google Cloud partners stand apart as deeply skilled technology pace-setters. This includes working with partners to stay ahead of important new trends that have the potential to disrupt our shared customers — and that also have the potential to accelerate your business growth. To help do this, we’ve rolled out three new Specializations aligned to three very important new trends.

- Partners who earn our new Data Center Modernization Services Specialization have demonstrated success with data center transformation of workloads from on-premises, private cloud, or other public clouds.
- Partners who earn our new DevOps Services Specialization have demonstrated success implementing, managing, and improving the quality and speed of creating new applications on Google Cloud.
- Finally, partners who earn our new Contact Center AI Services Specialization have demonstrated success in implementing and migrating Contact Center AI projects with Dialogflow.

I am also very proud to announce that several partners have already earned these Specializations. I’d like to briefly explain why each area is important, name the launch partners, and provide information to learn more about each one.

Data Center Modernization

Google worked with IDC on multiple studies involving global organizations across industries. This research projects that by 2026, the world will create 7 petabytes of data each second* — equal to about 500 billion full pages of text every second. At some point all of this data will run through, or reside in, a data center, putting enormous pressure on customer infrastructures.

Google’s perspective is to construct a unified data cloud “that supports every stage of the data lifecycle” in which “databases, data warehouses, data lakes, streaming, BI, AI, and ML all reside on a common infrastructure that is pre-configured to work together seamlessly.”

Regardless of the approach, customers can rely on the partners who have earned our new Data Center Modernization Services Specialization to lead the way to the modernized data center: Proquire, HCL Technologies, SADA Systems, Wipro, and Deloitte Consulting.

DevOps Services

We live in an era in which customer demand for software solutions is rising so fast that quality and delivery times are becoming critical points of failure. Our DevOps Services Specialization positions our partners to meet this challenge head on and deliver sophisticated, reliable, secure software, fast — and manage it as a service, if required.

In fact, DevOps is so important that it is regarded as a critical ingredient in driving customer satisfaction. According to the newly released 2023 Testing in DevOps report, nearly 90% of coding teams with “highly automated pipelines and mature DevOps practices” report high customer satisfaction rates.

Congratulations to partners 66degrees and DoiT for being the first two companies to achieve this critically important Specialization.

Contact Center AI

Contact center support is a significant area of focus for organizations across the globe for one major reason: business reputations can be made or broken more by the quality of their support systems than by the quality of a product or service. This is why digitally transforming the contact center has become a priority for business leaders.

Dialogflow is the foundation of Google Cloud’s Contact Center AI Specialization. The platform understands natural language, making it easy to design and integrate a conversational user interface into apps, web applications, devices, bots, interactive voice response systems, and more. In sum, it enables partners to transform the contact center by making it available to anyone, anywhere, on any device, using a variety of communication modes. All quickly and accurately.

Dialogflow can analyze multiple types of input from your customers, including text and audio (such as from a phone call or voice recording), and can respond to customers either through text or with synthetic speech.
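To make that concrete, here is a minimal sketch of a detect-intent request using the google-cloud-dialogflow Python client library; the project ID, session ID, and agent are hypothetical, and an agent must already exist in the project.

```python
# A minimal sketch of a Dialogflow detect-intent call.
from google.cloud import dialogflow

def detect_intent_text(project_id: str, session_id: str, text: str) -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en-US")
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # The matched intent's fulfillment text drives the agent's reply.
    return response.query_result.fulfillment_text
```

The same request shape accepts audio in place of the text input via an input audio config, and the response can include synthesized speech when an output audio config is supplied.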
Customers looking to transform their contact center experience can work with our first group of partners to earn this Specialization: Solstice Consulting (DBA Kin + Carta U.S.), Teksystems Global Services, IBM, Quantiphi, and yosh.ai (Shopai Spółka Z Ograniczoną Odpowiedzialnością in Poland).

If you’re a customer looking for a partner with a particular Specialization, we invite you to search our Partner Directory.

If you’re a partner who wants to learn more about how to earn Specializations, check out everything you need to know — including certification requirements, customer success stories, and more — on the Partner Advantage portal. Partners can also schedule an optional pre-assessment with ISSI (for a fee) before applying for a Specialization by emailing googlespecadmin@issi-inc.com.

*IDC, 2023 Data and AI Trends Report, February 2023.
Quelle: Google Cloud Platform

What would you build with $500 in Google Cloud credits included with Innovators Plus?

Imagine you had $500 in Google Cloud credits to build whatever you want. That’s what you get when you start an Innovators Plus subscription, along with a range of other benefits, including access to the entire catalog of on-demand training on Google Cloud Skills Boost, a certification exam voucher, invite-only quarterly technical briefings, and live learning events. It’s the best of Google Cloud for anyone looking to skill up for in-demand cloud roles. I’ll get into the details below.

I asked our community what they would use the $500 in Google Cloud credits for, and they came back with some really great ideas. Here are a few I wanted to share:

- Build your own Mastodon server on Google Cloud (@lukwam, @ehienabs). This is a hot topic these days; picking the right server involves a variety of choices and ultimately comes down to the experience you want for your community. Justin Ribeiro talks about how to do that on Google Cloud in this article.
- Use Vision AI to identify beer preferences and choices (@Pistol_Peter_D). This is a cool idea in which you could take a picture of a refrigerator door in a store, for example, and the photo would be used to recommend a beer based on your personal preferences, ratings, and other details. Check out Vision AI to see how it could help build out that idea.
- An outdoor activity map tracker app with journaling enabled (@rmcsqrd). A great idea for anyone who lives an active lifestyle! Consider using Google Maps under the hood for this.
- Indulge your domain name addiction (@lukeschlangen). Are you into buying and selling interesting domain names? Our Cloud Domains documentation covers registration, transfer, and management of domains with Google Cloud.
- Invent a tool that picks green regions, balancing latency and emissions to choose the most sustainable Google Cloud region for your app (@taylorkstacey). Check out the Google Cloud region picker tool, which considers carbon footprint, price, and latency.
- Build your skill set by using the Cloud Resume Challenge as a self-guided tour of Google Cloud (@billblum). Google Cloud Skills Boost has loads of resume-building, hands-on labs that give you access to Google Cloud.

Watch this 60-second YouTube Short where I talk about these ideas some more. OK, so you have the vision; how do you get the credits and get started building?

Explore Innovators Plus today and make the most of great savings

Innovators Plus is an annual subscription for technical practitioners and developers, offering extensive training benefits to grow your career in cloud with new knowledge and skills. For $299/year, Innovators Plus offers benefits with up to 80% savings off the full retail value of the package. Innovators Plus includes:

- $500 of Google Cloud credits – what will you build?
- BONUS: an extra $500 credit after the first certification earned each year
- Access to the entire Google Cloud Skills Boost catalog of on-demand training: over 700 courses, labs, certification preparation resources, and learning paths to help grow your career
- Ready to get that Google Cloud certification? You’ll get a voucher to help you reach for that goal
- Special access to Google Cloud experts, executives, and learning events throughout the year
- Invitations to technical briefings and private events
- 1:1 consultations with Google Cloud experts

Learn more about the benefits of Innovators Plus and see what you can build and learn in 2023!

*Innovators Plus requires you to use a Google Account and a Developer Profile. For customers in the EEA, the UK, and Switzerland, Innovators Plus is restricted to business or professional use.
Quelle: Google Cloud Platform

Building your own private knowledge graph on Google Cloud

A knowledge graph ingests data from multiple sources, extracts entities (e.g., people, organizations, places, or things), and establishes relationships among the entities (e.g., owner of, related to) with the help of common attributes such as surnames, addresses, and IDs. Entities form the nodes in the graph, and the relationships are the edges or connections. Building this graph is a valuable step for data analysts and software developers in establishing entity linking and data validation.

The term “Knowledge Graph” was first introduced by Google in 2012 as part of a new Search feature that provides users with answer summaries based on previously collected data from other top results and sources.

Advantages of a knowledge graph

Building a knowledge graph for your data has multiple benefits:

- Clustering text together that is identified as a single entity, like “Da Vinci,” “Leonardo Da Vinci,” “L Da Vinci,” and “Leonardo di ser Piero da Vinci.”
- Attaching attributes and relationships to this particular entity, such as “painter of the Mona Lisa.”
- Grouping entities based on similarities, e.g., grouping Da Vinci with Michelangelo because both are famous artists from the late 15th century.

A knowledge graph also provides a single source of truth that helps users discover hidden patterns and connections between entities. These linkages would be more challenging and computationally intensive to identify using traditional relational databases.

Knowledge graphs are widely deployed for various use cases, including but not limited to:

- Supply chain: mapping out suppliers, product parts, shipping, etc.
- Lending: connecting real estate agents, borrowers, insurers, etc.
- Know your customer: anti-money laundering, identity verification, etc.

Deploying on Google Cloud

Google Cloud has introduced two new services (both in Preview as of today):

- The Entity Reconciliation API lets customers build their own private knowledge graph with data stored in BigQuery.
- The Google Knowledge Graph Search API lets customers search for more information about their entities from the Google Knowledge Graph.

To illustrate the new services, let’s explore how to build a private knowledge graph using the Entity Reconciliation API and use the generated ID to query the Google Knowledge Graph Search API. We’ll use the sample data from zoominfo.com for retail companies, available on Google Cloud Marketplace (link 1, link 2). To start, enable the Enterprise Knowledge Graph API and then navigate to Enterprise Knowledge Graph from the Google Cloud console.

The Entity Reconciliation API can reconcile tabular records of organization, local business, and person entities in just a few clicks. Three simple steps are involved:

1. Identify the data sources in BigQuery that need to be reconciled and create a schema mapping file for each source.
2. Configure and kick off a reconciliation job through our console or API.
3. Review the results after job completion.

Step 1

For each job and data source, create a schema mapping file that tells Enterprise Knowledge Graph how to ingest the data and map it to a common ontology using schema.org. This mapping file is stored in a Cloud Storage bucket. For the purposes of this demo, I am choosing the organization entity type and passing in the database schema that I have for my BigQuery table.
Note: always use the latest schema mapping format from our documentation.

```
prefixes:
  ekg: http://cloud.google.com/ekg/0.0.1#
  schema: https://schema.org/

mappings:
  organization:
    sources:
      - [yourprojectid:yourdataset.yourtable~bigquery]
    s: ekg:company_$(id_column_from_table)
    po:
      - [a, schema:Organization]
      - [schema:name, $(name_column_from_table)]
      - [schema:streetAddress, $(address_column_from_table)]
      - [schema:postalCode, $(ZIP_column_from_table)]
      - [schema:addressCountry, $(country_column_from_table)]
      - [schema:addressLocality, $(city_column_from_table)]
      - [schema:addressRegion, $(state_column_from_table)]
      - [ekg:recon.source_name, (chosen_source_name)]
      - [ekg:recon.source_key, $(id_column_from_table)]
```

Step 2

The console page shows the list of existing entity reconciliation jobs in the project. Create a new job by clicking the “Run A Job” button in the action bar, then select an entity type for entity reconciliation. Add one or more BigQuery data sources and specify a BigQuery dataset destination; EKG will create new tables with unique names under the destination dataset. To keep the generated cluster IDs constant across different runs, advanced settings like “previous BigQuery result table” are available. Click “DONE” to create the job.

Step 3

After the job completes, navigate to the output BigQuery table, then use a simple join query like the one below to review the output:

```
SELECT *
FROM `<dataset>.clusters_14002307131693260818` AS RS
JOIN `<dataset>.retail_companies` AS SRC
  ON RS.source_key = SRC.COMPANY_ID
ORDER BY cluster_id;
```

This query joins the output table with the input table(s) of the Entity Reconciliation API and orders by cluster ID. Upon investigation, we can see that two entities are grouped into one cluster. The confidence score indicates how likely it is that these entities belong to this group. Last but not least, the cloud_kg_mid column returns the linked Google Cloud Knowledge Graph machine ID (MID), which can be used with the Google Knowledge Graph Search API. Querying that API with the MID (for example, via a simple cURL request) returns a response that contains a list of entities, presented in JSON-LD format and compatible with schema.org schemas, with limited external extensions.

For more information, kindly visit our documentation.

Special thanks to Lewis Liu, Product Manager, and Holt Skinner, Developer Advocate, for their valuable feedback on this content.
Quelle: Google Cloud Platform

Introducing new cloud services and pricing for ultimate flexibility

As the saying goes, “It’s hard to make predictions, especially about the future.” Some organizations find it challenging to predict what cloud resources they’ll need in the months or years ahead, and every organization is on its own unique cloud journey. To help, we’re developing new ways for customers to consume and pay for Google Cloud services: removing barriers to entry, aligning cost to consumption, and providing contractual and product flexibility. Read on to learn how we’re rolling out several new go-to-market programs across these key areas to help our customers purchase and consume Google Cloud services more easily.

Removing barriers to entry with Google Cloud Flex Agreements

Many customers choose multi-year commitments because they provide better line of sight into IT spend and budgeting. However, these commitments can create difficulty for those who don’t have clear visibility into their future cloud consumption needs. That’s why today we’re launching Flex Agreements, which enable customers to migrate their workloads to the cloud with no up-front commitments. As part of this new licensing option, Google Cloud customers still get access to unique incentives, such as monthly spend discounts1, committed use discounts, cloud credits, and access to professional services, based on monthly spend and workloads migrated to Google Cloud.

Flex Agreements are just one example of how we are removing barriers to help customers start using Google Cloud. In 2022, we launched the Innovators Plus annual subscription, which gives developers a curated toolkit to accelerate their expertise, including access to live and on-demand training through Google Cloud Skills Boost, Google Cloud credits, and more. We also recently expanded trials for Google Cloud products. For example, the new Spanner free trial instance is good for 90 days, allowing developers to create Google Standard SQL or PostgreSQL databases, explore Spanner capabilities, and prototype applications — with no commitment or contract needed.

Contractual and feature flexibility

Contractual flexibility has always been one of our core principles. Committed use discounts (CUDs), for example, provide discounted prices in exchange for a commitment to use a minimum level of resources for a specified term. Last year, we introduced Flexible CUDs, spend-based commitments that offer predictable, simple flat-rate discounts that apply across multiple virtual machine families and regions.

In addition to contractual flexibility, our customers also need the flexibility to choose features and functionality based on their stage of cloud adoption and the complexity of their business requirements. Over the next few quarters, we will therefore launch new product pricing editions — Standard, Enterprise, and Enterprise Plus — in parts of our cloud portfolio. This new commercial packaging model will give customers more choice and flexibility to optimize their cloud spend.

For customers running workloads such as those in regulated industries like banking and the public sector, the higher-end Enterprise Plus tier will offer compute, storage, networking, and analytics services with high availability, multi-region support, regional failover and disaster recovery, advanced security, and a broad range of regulatory compliance support. The Enterprise pricing tier will include a broad range of features designed for customers with workloads that demand a high level of scalability, flexibility, and reliability. The Standard pricing tier will offer cost-efficient and easy-to-use managed services with all the essential capabilities, such as autoscaling, to meet customers’ core workload requirements.

Align costs to consumption with autoscaling

At Google Cloud, a core requirement for the products we build is providing customers industry-leading capabilities to automatically scale (autoscale) services up and down to match capacity with real-time demand. Autoscaling improves uptime, reduces infrastructure costs, and removes the operational burden of managing resources. Many Google Cloud products include autoscaling capabilities to help customers manage unplanned variations in demand. For example, Dataflow vertical and horizontal autoscaling, in combination with granular adaptive resource configuration (a.k.a. “right-fitting”), has resulted in up to 50% savings in infrastructure costs for streaming by automatically choosing the right number of instances required to run jobs and dynamically re-allocating more or fewer instances during their runtime. Bigtable also provides native autoscaling capabilities, and Spanner’s Autoscaler is an open-source tool that works across regional and multi-regional Spanner deployments. Similarly, we added features such as Cluster Autoscaler, Horizontal Pod Autoscaling, Vertical Pod Autoscaling, and node auto-provisioning to GKE for elasticity and cost efficiency.

For L.L.Bean, the ability to quickly scale capacity to meet changing usage patterns (e.g., during the holidays), as well as to rapidly perform load tests, is “night and day” with Google Cloud compared to L.L.Bean’s legacy on-premises IT system. “We won’t have to pay for peak capacity to have it available during peak shopping times. We just scale capacity up or down as needed.” — Randy Dyer, Enterprise Architect, L.L.Bean

We are now taking these capabilities to the next level by enabling autoscaling in BigQuery at a more granular level, so you never pay for more than you use. This lets you provision additional capacity in smaller increments, so you never overprovision and overpay for underutilized capacity. BigQuery customers can now try the new BigQuery autoscaler (currently in public preview) in the Google Cloud console.
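As a rough sketch of what slot autoscaling looks like programmatically, the following uses the google-cloud-bigquery-reservation Python client to create a reservation with a baseline and an autoscale ceiling. This is a hedged illustration: the autoscale field reflects the preview-era API surface as we understand it and may evolve, and the project, location, and slot counts are hypothetical.

```python
# A hedged sketch, not an official reference: create a BigQuery reservation
# whose slot capacity can autoscale between a baseline and a ceiling.
from google.cloud import bigquery_reservation_v1 as bqr

client = bqr.ReservationServiceClient()
parent = client.common_location_path("my-project", "US")  # hypothetical

reservation = bqr.Reservation(
    slot_capacity=100,  # baseline slots that are always provisioned
    # Autoscale ceiling (assumed preview-era field): up to 300 extra slots
    # are added in small increments only while demand requires them.
    autoscale=bqr.Reservation.Autoscale(max_slots=300),
)

created = client.create_reservation(
    parent=parent,
    reservation_id="autoscaling-reservation",
    reservation=reservation,
)
print(created.name)
```

Queries draw on the baseline slots first and scale into the autoscale range only under load, which is how cost tracks consumption.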
A commitment to flexibility and choice

At Google Cloud, we remain deeply committed to the success of our customers and partners, and we are uniquely positioned to help organizations transform their business. By providing you with more flexibility and choice in how you purchase our products, we are empowering you to be more efficient and resilient.

Join the Google Data Cloud & AI Summit to hear the latest announcements about innovations in Google Data Cloud for databases, data analytics, business intelligence, and AI. Gain expert insights, new solutions, and strategies that can help you transform customer experiences with modern apps, boost revenue, and reduce costs.

1. Not available for customers buying through Partner Advantage.
Quelle: Google Cloud Platform

Grow and scale your startup with Google Cloud

At Google Cloud, we understand how important it is for startups to get holistic technical support so that you can build and scale your business to the next level. The Google Cloud Technical Guides for Startups series helps you do this and more with its Start, Build, and Grow multi-series. In the Start and Build series, we explored how to get started on Google Cloud, as well as how to build and optimize existing applications. It’s time to take the next step and learn how to scale them.

Boost your startup game with our Grow series

We are excited to announce the launch of our third installment: the Grow series! This series focuses on growing and scaling your deployments, and it is the final piece of the technical enablement multi-series. Not only will we traverse some exciting and innovative Google Cloud solutions, but we will also throw the spotlight on some industry-specific use cases.

Scale your deployments

Learn to scale with solutions such as Looker for powerful insights, Cloud Spanner (a highly scalable relational database), AlloyDB (our high-performing PostgreSQL-compatible database), and Anthos for your hybrid connectivity needs.

Explore industry-specific architectures

Deep dive into various industry examples as we explore startup architectures from healthcare to retail verticals and more.

Optimize for sustainability

Learn about implementing digital and operational sustainability as we discuss how to build your startup on Google Cloud, a platform with net-zero carbon emissions.

Get started with the first episode

Check out the first episode, “Introducing Google Cloud Technical Guides for Startups – Grow Series,” for an overview of the topics covered, and find out what else is in store.

Hop on to the Google Cloud channel

We are excited to have you with us on the final chapter of this journey. Check out our website and join us on the Google Cloud Tech channel to find the Start, Build, and Grow series. If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Quelle: Google Cloud Platform

Reducing the storage impact of Point-in-Time Recovery

Point-in-time recovery (PITR) is a critical capability for enterprise applications. It allows database administrators to recover from accidental data deletion by restoring their production databases to a time before the incident. Cloud SQL for PostgreSQL launched support for PITR in July 2020, allowing you to recover from disasters like data corruption or accidental deletion by restoring your Cloud SQL instance to a previous time.

We’re excited to announce an enhancement that makes enabling PITR an even easier decision: for instances with PITR newly enabled, the write-ahead logs stored for PITR operations (the transaction logs used to go back in time) no longer consume disk storage space. Instead, when you enable PITR for new instances, Cloud SQL stores the transaction logs collected during the retention window in Google Cloud Storage and retrieves them when you perform a restore. Because transaction logs can grow rapidly when your database experiences a burst of activity, this change helps reduce the impact such bursts have on your provisioned disk storage. These logs are stored for up to seven days, in the same Google Cloud region as your instance, at no additional cost to you.

PITR is enabled by default when you create a new Cloud SQL for PostgreSQL instance from the Google Cloud console, and transaction logs are no longer stored on instances that have PITR newly enabled. If you have already enabled PITR on your PostgreSQL instances, this enhancement will be rolled out to your instances at a later point. If you want to take advantage of the change sooner, you can disable and then re-enable PITR on your instance (which resets your ability to perform a point-in-time restore to the time at which PITR was re-enabled).

On instances with this feature enabled, you’ll notice that consumed storage on your instance decreases relative to the volume of write-ahead logs (WAL) generated by your instance. The actual amount of storage your logs consume will vary by instance and by database activity; during busy times, log volume may shrink or grow. However, these logs are now kept on your instance only long enough to replicate successfully to any replicas of the instance and to be safely written to Cloud Storage; afterwards, they are removed from the instance.

We’re excited to continue to enhance Cloud SQL for PostgreSQL to ensure that disaster recovery is easy to enable, cost effective, and seamless to use. Learn more about this change in our documentation.
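To make the recovery workflow concrete, here is a minimal sketch of a point-in-time restore via the Cloud SQL Admin API, using the google-api-python-client library. A point-in-time restore creates a new instance cloned from the source at the requested timestamp; the project, instance names, and timestamp below are hypothetical.

```python
# A hedged sketch of a point-in-time restore with the Cloud SQL Admin API.
# Restores create a new instance; they do not overwrite the source.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

body = {
    "cloneContext": {
        # Hypothetical target name for the restored instance.
        "destinationInstanceName": "prod-pg-restored",
        # RFC 3339 timestamp inside the transaction-log retention window.
        "pointInTime": "2023-03-01T10:00:00.000Z",
    }
}

# instances().clone() returns a long-running operation to poll.
operation = (
    sqladmin.instances()
    .clone(project="my-project", instance="prod-pg", body=body)
    .execute()
)
print(operation["name"])
```

The same operation is available in the console and the gcloud CLI; the API form is shown here simply because it is easy to script into recovery runbooks.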
Quelle: Google Cloud Platform

Extending reality: Immersive Stream for XR is now Generally Available

Last year at Google I/O, we announced the preview of Immersive Stream for XR, which leverages Google Cloud GPUs to host, render, and stream high-quality, photorealistic experiences to millions of mobile devices around the world. Today, we are excited to announce that the service is now generally available for Google Cloud customers. With Immersive Stream for XR, users don’t need powerful hardware or a special application to be immersed in a 3D or AR world; instead, they can click a link or scan a QR code and immediately be transported to extended reality. Immersive Stream for XR powers the immersive view feature in Google Maps, while automotive and retail brands are enhancing at-home shopping experiences for consumers, from virtually configuring a new vehicle to visualizing new appliances in the home.

What’s new with GA

With this latest product milestone, Immersive Stream for XR now supports content developed in Unreal Engine 5.0. We have also added the ability to render content in landscape mode to support tablet and desktop devices. With landscape mode and the ability to render to larger screens, there is more real estate for creating sophisticated UIs and interactions for more full-featured immersive applications. Finally, you can now embed Immersive Stream for XR content on your own website using an HTML iframe, allowing users to access your immersive applications without leaving your domain.

How customers are using Immersive Stream for XR

A common type of experience our customers want to create is a “space” where users can walk around and interact with objects. For example, home improvement retailers can let shoppers place appliance options or furniture in renderings of their actual living spaces; travel and hospitality companies can provide virtual tours of a hotel room or event space; and museums can offer virtual experiences where users walk around and interact with virtual exhibits. To help customers create these experiences faster, we collaborated with Google Partner Innovation (PI) to create a spaces template, the first of a series of templates developed with close customer involvement within the PI Early Access Program. The spaces template standardizes the interactions common to these scenarios, such as user movement and object interaction.

Aosom, a home and garden ecommerce retailer, recently used this template to launch an experience that allows users to place furniture in either a virtual living room or their own space using AR. Users can customize an item’s color and options, then add products to their shopping cart once satisfied. “Home and garden shoppers are always looking for offerings that are unique and compatible with their own living space,” said Chunhua Wang, Chief Executive Officer, Aosom. “Google Cloud’s Immersive Stream for XR has enabled Aosom to deliver a visually vivid and immersive shopping experience to our customers.”

Immersive Stream for XR especially benefits automakers, who can now enable prospective buyers to browse and customize new vehicles in photorealistic detail and visualize them in their own driveway.
Most recently, Kia Germany leveraged the technology to promote the Kia Sportage, one of its top-selling vehicles. The virtual experience was accessible via a QR code on the Kia website. “At Kia Germany we are excited to use Google Immersive Stream for XR to reach new consumers and provide them the perfect experience to discover our Sportage,” said Jean-Philippe Pottier, Manager of Digital Platforms at Kia Germany. “Our users love that they can change colors, engines, and interact with the model in 3D and augmented reality.”

Last, with the addition of Unreal Engine 5.0 and support for bigger, more realistic worlds, users can explore faraway historical landmarks without leaving home. For example, Virtual Worlds uses photogrammetry techniques to capture historical sites, polish them with a team of designers, and then create interactive experiences on top. Because of the visual detail involved, these experiences have historically required expensive workstations with GPUs to perform the rendering, limiting their availability to physical exhibits. Using Unreal Engine 5.0’s new Nanite and Lumen capabilities, the team created an educational tour of the Great Sphinx of Giza and made it accessible to anyone using Immersive Stream for XR, available here. Elliot Mizroch, CEO of Virtual Worlds, explains, “We’ve captured incredible sites from Machu Picchu to the Pyramids of Giza and we want everyone to be able to explore these monuments and learn about our heritage. Immersive Stream for XR finally gives us this opportunity.”

Next steps

We’re excited to see all of the innovative use cases you build using Google Cloud’s Immersive Stream for XR. Learn more by reading our documentation, or get started by downloading the Immersive Stream for XR template project. To get started with Unreal Engine 5.0 and landscape mode, download our updated Immersive Stream for XR template project, load it into Unreal Engine 5.0.3, and start creating your content. If you’d like to embed your experience on your own website, contact us to allowlist your domain.
Quelle: Google Cloud Platform