Achieving cloud-native network automation at a global scale with Nephio

In 2007, to meet the ever-increasing traffic demands of YouTube, Google started building what is now the Google Global Cache program. Over the past 15 years, we have added thousands of edge caching locations around the world, with widely varying hosting conditions—some in customer data centers, some in remote locations with limited connectivity. Google manages the software and hardware lifecycles of all these systems remotely. Although the fleet size and serving corpus have grown by several orders of magnitude during this time, the operations team overseeing it has remained relatively small and agile. How did we do it?

We started with a set of automation tools for software deployment (remotely executing commands), a set of tools for auditing and repairs (if this condition occurs, run that command), and a third set of tools for configuration management. As the fleet grew and was deployed in more varied environments, we discovered and fixed more edge cases in our automation tools. Soon, the system started reaching its scaling limits, and we built a new, more uniform and more scalable system in its place. We learned a few key lessons in the process:

- Intent-driven, continuously reconciling systems are more robust at scale than imperative, fire-and-forget tools.
- Distributed actuation of intent is a must for large-scale edge deployments. Triggering all actions from a centralized location is not reliable and does not scale, especially for edge deployments.
- Uniformity in systems is easier to maintain. Being able to manage deployment, repairs, and configuration using common components and common workflows (in other words, files checked into a repository with presubmit validation, review, version control, and rollback capability) reduces cognitive load for the operations team and allows more rapid response with fewer human errors.

This pattern repeats time and time again across many large distributed systems at Google, and we believe these tenets are key as network function vendors and communication service providers look to adopt cloud-based network technologies. For example, in a 5G deployment involving hundreds of locations (or many hundreds of thousands, in the case of RAN), with containerized software components, the industry needs better tools to handle deployment and operations at scale. By working with the community to address these issues, we hope to drive a common Kubernetes-based, cloud-native network automation architecture, while also providing extension points for vendors to innovate and adapt to their specific requirements.

That's why Google Cloud founded the Nephio project in April 2022. The Nephio community launched with 24 founding organizations and has since doubled in size. In addition to the founding members, new participating organizations include Vodafone, Verizon, Telefonica, Deutsche Telekom, KT, HPE, Red Hat, Wind River, Tech Mahindra, and others. Over 150 developers across the globe participated in the community kickoff meeting hosted by the Linux Foundation on May 17, 2022.

Google Cloud is collaborating with communication service providers, network function vendors, and cloud providers in Nephio by:

- Working with the community to refine the cloud-native automation architecture and define a common data model based on the Kubernetes Resource Model (KRM) and Configuration as Data (CaD) approach. This new model needs to support cloud infrastructure, network function deployment, and management of user journeys.
- Contributing to the development of an open, fully functional reference implementation of this architecture.
- Open sourcing several key building blocks, such as kpt, Porch, and Config Sync. We are also planning to open source controllers, Google Cloud infrastructure CRDs, additional sample NF CRDs, and operators to jumpstart the Nephio project.

Google Cloud will also integrate Nephio with our Google Distributed Cloud Edge platform, combining the advantages of a fully managed hardware platform with Nephio-powered deployment and management of network functions for our customers.

The Nephio community is complementary to many existing open source communities and standards. Nephio is working closely with adjacent communities in CNCF, LF Networking, and LF Edge to provide an end-to-end automation framework for telecommunication networks.

By working with the community in this open manner, we believe that, together, we can advance the state of the art of network automation, improving the deployment and management of network functions on cloud-native infrastructure. We welcome the industry to join us in this effort. For more information, please visit the Nephio website at www.nephio.org. And please register to join us online or in person at the Nephio developer summit on June 22 and 23.
Source: Google Cloud Platform

The Retirement Tracker simplifies and socializes early retirement on Google Cloud

A lot of people talk about retirement, but far fewer have the information and tools to plan for it properly. Just how much money you need to live comfortably once you stop working can be the million-dollar (or more!) question. Although there is no shortage of retirement calculators, many only provide a limited one-time analysis and require detailed personal information that may be sold to third parties. We developed The Retirement Tracker with one idea: to empower individuals to take control of their retirement planning with tools to easily plan, track, and even socialize their early retirement.

With The Retirement Tracker, people can aggregate their financial accounts—including savings, 401Ks, and stock portfolios—in one safe, convenient retirement app. The Retirement Tracker analyzes real-time data from these accounts to track net worth and automatically update retirement targets. A small part of this information, such as stock transactions, can even be shared among people's self-created investment groups to encourage information sharing and friendly competition.

Scaling up for early retirement on Google Cloud

When building The Retirement Tracker, we needed a technology partner that would enable us to securely and effectively scale while saving time and administrative costs. That's why we started working with Google Cloud and partnering with the Google for Startups Cloud Program. Google Cloud gives us a highly secure-by-design infrastructure, valuable cloud credits to obtain products from an expansive technology platform, access to dedicated startup experts, and the potential to join the Google Cloud Marketplace.

Even though we are a small team, we innovate quickly and easily on Google Workspace using Gmail, Google Docs, Sheets, Calendar, and Meet. We also store and protect all sensitive company documents on Google Cloud and post our "Restimators" investment video series on YouTube. More recently, we've adopted Firebase to scale and manage our infrastructure while accelerating the development of The Retirement Tracker.

In just days, we implemented Plaid authentication and authorization protocols, enabling customers to quickly and securely connect details about their investment and savings accounts to The Retirement Tracker. This process would likely have taken us months if we had to build these security capabilities from scratch.

Google Firebase now delivers a seamless customer experience by aggregating and displaying near real-time data from multiple financial accounts on a single dashboard. On the back end, Firebase automatically queries read-only tokens, securely accesses account balance changes, and encrypts sensitive data in the cloud. Firebase also makes it easy for customers to administer internal investment groups and selectively socialize information such as stock purchases and sales—without revealing transaction quantities or prices. Customers create these small invite-only groups to help family and friends improve their retirement portfolios with friendly competition and strategic crowdsourcing. Customers can also participate in additional investment discussions hosted by The Retirement Tracker on Discord.

Building a sustainable financial future

Since we started using Google and Google Cloud solutions, everything is easier to build and scale. We constantly refine the customer experience with new features and services, while leaving our IT and cloud infrastructure in the hands of Google Cloud experts.

Demand for our app is growing fast as we prepare to move The Retirement Tracker out of beta in 2022. Moving forward, we're excited to continue to grow in the Google for Startups Cloud Program and to work with our dedicated Google team to improve the observability and reliability of The Retirement Tracker so it can handle the volume of users we're anticipating in 2023 and beyond. To help us do so, we're exploring additional Google Cloud solutions such as Looker, BigQuery, and Cloud Spanner. These solutions will enable us to rapidly expand our services and offer customers a variety of new benefits from using The Retirement Tracker.

Our participation in the Google for Startups Cloud Program has been instrumental to our success. The Startup Success Manager has worked with our team to identify programs we could apply to in order to strengthen our relationship even further. With Google Cloud, we're making early retirement easier and more accessible on one convenient, highly secure mobile app. We can't wait to see what we accomplish next as we drive innovation and financial inclusion by empowering people to plan, track, and socialize retirement planning that can be at once so important and so difficult for so many people.

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

Snap Inc. adopts Google Cloud TPU for deep learning recommendation models

While many people still think of academic research when it comes to deep learning, Snap Inc. has been applying deep learning models to improve its recommendation engines on a daily basis. Using Google's Cloud Tensor Processing Units (TPUs), Snap has accelerated its pace of innovation and model improvement to enhance the user experience. Snap's blog Training Large-Scale Recommendation Models with TPUs tells the story of how the Snap ad ranking team leveraged Google's leading-edge TPUs to train deep learning models quickly and efficiently. But there's a lot more to the story than the how, and that's what we're sharing here.

Faster leads to better

Snap's ad ranking team is charged with training the models that make sure the right ad is served to the right Snapchatter at the right time. With 300+ million daily users and millions of ads to rank, training models quickly and efficiently is a large part of a Snap ML engineer's daily workload. It's simple, really: the more models Snap's engineers can train, the more likely they are to find the models that perform better—and the less it costs to do so. Better ad recommendation models translate to more relevant ads for users, driving greater engagement and improving conversion rates for advertisers.

Over the past decade, there has been tremendous evolution in the hardware accelerators used to train large ML models like those Snap uses for ad ranking, from general-purpose multicore central processing units (CPUs) to graphics processing units (GPUs) to TPUs. TPUs are Google's custom-developed application-specific integrated circuits (ASICs) used to accelerate ML workloads. TPUs are designed from the ground up to minimize time to accuracy when training large models. Models that previously took weeks to train on other hardware platforms can now be trained in hours on TPUs—a product of Google's leadership and experience in machine learning (dig into the technology in Snap's blog).

Benchmarking success

Snap wanted to understand for itself what kind of improvements in training speed it might see using TPUs. So the Snap team benchmarked model training using TPUs versus both GPUs and CPUs, and the results were impressive. GPUs underperformed TPUs on both throughput and cost: switching to GPUs reduced throughput by 67 percent and increased costs by 52 percent. Similarly, TPU-based training drastically outperformed CPU-based training for Snap's most common models. For example, on Snap's standard ad recommendation model, TPUs cut processing costs by as much as 74 percent while increasing throughput by as much as 250 percent—all with the same level of accuracy.

Because the TPU embedding API is a native, optimized solution for embedding-based operations, it performs embedding-based computations and lookups more efficiently. This is particularly valuable for recommenders, which have additional requirements such as fast embedding lookups and high memory bandwidth.

Benefits across the board

For Snap's ad ranking team, those improvements translate into tangible workflow advantages. It's not unusual for Snap to have a month's worth of data that includes all the logs of users who were shown particular ads and a record of whether they interacted with an ad or not. That means it has millions of data points to process, and Snap wants to model them as quickly as possible so it can make better recommendations going forward.

It's an iterative process, and the faster Snap can get the results from one experiment, the faster its engineers can spin up another with even better results—and they'd much prefer to do that in hours rather than days.

Increased efficiency and velocity benefit Snapchatters, too. The better the models are, the more likely they are to correctly predict the likelihood that a given user will interact with a particular ad, improving the user experience and boosting engagement. Improved engagement leads to higher conversion rates and greater advertiser value—and given the volumes of ads and users Snap deals with, even a one percent improvement has real monetary impact.

Working at the leading edge

Snap is working hard to improve its recommendation quality with the goal of delivering greater value to advertisers and a better experience for Snapchatters. That includes going all-in on leading-edge solutions like Google TPUs that allow its talented ML engineers to shine. Now that you know the whole story, see how Snap got there with the help of Google: Training Large-Scale Recommendation Models with TPUs.
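To give a concrete sense of what moving recommendation-model training onto TPUs involves, here is a minimal, hedged sketch in TensorFlow/Keras. It is not Snap's actual training code: the model architecture, embedding sizes, and TPU address are illustrative placeholders.

import tensorflow as tf

# Connect to the TPU runtime. On a Cloud TPU VM, "local" typically resolves to the
# attached TPU; elsewhere the TPU name or gRPC address would go here (assumption).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Build a toy click-probability model inside the strategy scope so that its
# variables, including the (large) embedding table, are placed on the TPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=1_000_000, output_dim=64),  # hashed ad/user IDs
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of an ad click
    ])
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC()],
    )

# train_dataset would be a tf.data.Dataset of (id_sequences, click_labels);
# model.fit(train_dataset, epochs=...) then runs the training loop on the TPU cores.

The speedups described above come from running exactly this kind of loop, embedding lookups plus dense layers, on hardware built for it.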
Source: Google Cloud Platform

No more normal? No problem when you build supply chains with data and AI

What if, after all the upheavals and innovations of the past two years, we're not headed for some new normal but instead an era of no more normal?

"There are big, big challenges that need to be solved every single day by supply chain professionals," Hans Thalbauer, Google Cloud's managing director for supply chain and logistics, pointed out during our recent Supply Chain & Logistics Spotlight event. Among the issues Thalbauer ticked off were changes from the pandemic, consumer demand, labor shortages, the climate crisis, geopolitical instability, and energy shortages.

"And the thing is, it's not just a short-term issue; we think it's a long-term and systemic issue," Thalbauer said. "There's a big question out there, which is: How will global trade change? Is it really transforming and translating into something new? Will global trade continue to work as is?"

Even experts at the White House are asking these very questions at this very time. The same day as the Supply Chain & Logistics Spotlight, the president's Council of Economic Advisers released their annual report with an entire chapter dedicated to supply chains. In it, they noted that once-obscure, and ideally invisible, supply chains had "entered dinner table conversations." And for good reason. "Because of outsourcing, offshoring, and insufficient investment in resilience, many supply chains have become complex and fragile," the economists wrote. Nor are they alone in worrying about the future of logistics.

Whatever the outcomes—more global or local, more automated or disintermediated, more agile or fragile—one of the likeliest results is a greater reliance on technology, and especially data, to help handle all the disruptions and interruptions on the horizon. Leaders in the field, including at The Home Depot, Paack, and Seara Foods, are discovering opportunities in a few key areas: connecting data from end to end, the power of platforms to access and share information, and the importance of predictive analytics to mitigate issues as, or even before, they arise.

"We need to create visibility, flexibility, and innovation," Thalbauer said. "Too often companies just focus on their orders, forecasts, and inventory, but typically they ignore the rest of the world. We need to bring in the public information, the traffic, weather, climate, and financial risks, connect that with the enterprise data, and we need to actually enable community data to create collaboration between business partners at every tier."

End-to-end data

Companies have always sought visibility from the factory to the warehouse to the store and now the front door, and all the points in between. The challenge, and the necessity, of seeing into all these points is that as the data has grown, and our capabilities along with it, so has the complexity. It's at a scale no human can manage, which makes not only data but also analytics and AI all the more essential.

Home Depot has had a front-row seat to these growing interdependencies—especially when it comes to serving competing yet complementary clienteles. The pandemic presented its share of unexpected opportunities, as the combination of soaring home values, disposable income, and DIYers looking for (stay-at-)home projects led to runs on everything from lumber to sheds-turned-offices to garage doors. Empty shelves can lead to angry customers.

And in this case, it wasn't just homeowners and renters Home Depot was contending with, explained Chris Smith, vice president of IT Supply Chain at The Home Depot, but also an increasingly important base of contractors and even large-scale developers. Both tend to need different materials, at different scales, and to shop in different ways, and these demands have only expanded during the pandemic.

"We really have what we call an omnichannel algorithm," Smith said. "It's really marrying up the customer's preferences with our understanding of capacity, assortment, inventory availability, taking all that together, and saying: How do we best meet the customer promise and do it with the most efficient use of our supply chain? So where do we fulfill it from, where is the inventory available, and how do we do that in a way that's most economical for us while still meeting the promise of the customer."

Paack, a last-mile delivery start-up serving the UK, Spain, France, Portugal, and Italy, is similarly pushing the envelope on fulfillment. The company focuses on combining a wealth of data—from drivers, customers, sensors, weather, and more—to ensure guaranteed delivery. So far, its on-time delivery rate is approaching 98%, with special scheduling tools to ensure customers are available to receive their packages. Using solutions like the Last Mile Fleet Solution from Google Maps Platform, Paack can manage drivers and customers in real time.

"The granularity of information we can collect in terms of which routes are being effectively followed by the driver versus planned routes, the ability for them to change directions, because we might know locally of better ways to go, notifications from the customer as to their availability—these really allow us to build a better experience for everyone," said Olivier Colinet, chief product and technology officer for Paack. "We want first-time drivers to be the most productive drivers, and this first step allows us to do so."

Power of platforms

Paack's success exemplifies the power of building a strong platform for customers and workers, as well as tapping existing platforms, like Google Maps, to bolster your own.

On the other side of the globe, the world's largest meat supplier is seeking to empower thousands of ranchers and farmers with a platform of their own. Seara, a Brazil-based supplier of pork, chicken, and eggs that is part of the globe-spanning JBS conglomerate, launched its SuperAgroTech platform in July 2021. Though in development for years, the program could hardly have come at a more critical time for the global food supply. The food industry was already coping with pandemic-related shortages and shutdowns, and then came the spillover effects from the war in Ukraine.

"In general, the entire supply chain was affected and the operation had to adapt to new working conditions," said Thiago Acconcia, the director of innovation and strategy at Seara. "So in the farms, in the field, the same situations are repeated, and the creation of this digital online platform enters as a facilitator when it gives autonomy to the farmer, providing them with the data input and digital communication." It's a level of connectivity the farmers never had with Seara before—and vice versa.

The technology was deployed to more than 9,000 farms at launch. Through a range of IoT sensors, monitoring devices, and data inputs from farmers, operators, and Seara itself, teams can track a host of results. These include yields, animal health, profits, and even environmental and social impacts, which are becoming increasingly important to consumers. The eventual goal is to reach 100% digital management of the farm.

"So today, we are able to activate any producer in a few seconds, regardless of the location," Acconcia said. With SuperAgroTech, the platform "doesn't mind if it's in the very south of the country, if it's in the central part. It's strengthening the relationships with our producers and also promoting a level of personalized attention they've never had."

Such platforms also provide a level of visibility and connectivity rarely enjoyed before, as well as a virtuous cycle between data collection, analysis, and insights put back into action on the platform. In an unpredictable world, this kind of integration is becoming essential.

Predictive analytics

As a company's digital strategies evolve through integrated data and robust platforms, one of the most exciting opportunities arises around predictive analytics. While seeing into the future remains science fiction (at least for now), AI, cloud, and even emerging quantum computing provide robust ways to reveal trends, make connections, and anticipate both opportunities and interruptions.

Home Depot has looked at ways to quickly adapt its digital stores using consumer data and AI to create better experiences, as well as to smooth out supply chain issues. Home Depot's Chris Smith pointed to a listing for an out-of-stock appliance or tool, for example, that will quickly offer other locations or items for sale as a convenient alternative.

"We can apply machine learning in many different ways to make better, faster decisions, both in how we support moving inventory through our supply chain or how we understand available capacity to support our customers," Smith said. "And with automation, from our distribution centers to our forecasting and replenishment systems, we're going to continue to look at places where we can optimize and automate to make better decisions."

For Paack, predictions could come in the form of traffic or storms, or even the likelihood that a repeat customer will be available or not, without having to prompt them. And at Seara, the role of data and analytics is vital not just to the business but to the very vitality of the world. As climate, supply chains, global conflicts, migration, and other issues continue to constrain the food supply, anticipating issues could be the difference between salvaging a crop or not.

"We started creating advanced analytics by means of AI tools to not only notify real-time problems but also to predict what's going to happen in the near and long future," Acconcia said. "We are talking about the world's food, and SuperAgroTech has the role to feed the world, and to overcome these biggest challenges."
Source: Google Cloud Platform

Introducing granular instance sizing for Cloud Spanner: now run production workloads for as low as $40/month

Cloud Spanner is a relational database service that offers industry-leading 99.999% availability and near-unlimited scale to handle even the most demanding workloads. For these reasons, customers in various industries trust Spanner for workloads with significant throughput requirements. We have heard from our customers that they would like to standardize on Spanner for all their workloads, big and small, because they value the manageability, scale insurance, and consistent performance that Spanner offers.

That's why, last year, we launched granular instance sizing in preview, so that you can run your workloads on Spanner starting at approximately $65/month. Today, we are excited to announce the general availability of granular instance sizing. With granular instance sizing, you still get all of the Spanner benefits, like transparent replication across zones and regions, high availability, resilience to different types of failures, and the ability to scale up and down as needed without any downtime, at a much lower cost. And with Committed Use Discounts, the entry price for production workloads drops to less than $40/month, since you receive a 40% discount for a three-year commitment.

How granular instances work

With granular instance sizing, we are introducing a new unit for provisioning resources in Spanner: Processing Units (PUs), in addition to nodes. One Spanner node is equal to 1,000 PUs, so you can start with a 100 PU instance, provision in increments of 100 PUs, and get a proportional amount of compute and storage resources. All Spanner instances, including those with less than 1,000 PUs (1 node), have the same availability SLA: 99.99% for regional instances and 99.999% for multi-regional instances. You can use this feature to cost-effectively run workloads of all sizes on Spanner and scale seamlessly as needed.

With granular instance sizing you get proportional resources for a proportional price. For example, a 100 PU Spanner instance can support a maximum of 10 databases with up to ~410 GB of data storage. The limit on the number of databases per Spanner instance scales proportionally with the provisioned compute capacity, up to a maximum of 100 databases per instance. Additionally, a Spanner instance can store unlimited data as long as sufficient compute capacity is provisioned, since each 1,000 PUs (1 node) supports up to 4 TB of data.

You can use granular instance sizing by selecting the instance configuration, choosing Processing Units (PUs) as the unit of compute capacity, and then providing their quantity. The summary on the right side of the console page displays the per-hour compute cost based on the number of Processing Units; it also lists the maximum storage available for the instance.

Making Spanner more accessible for every developer and workload

Our mission is to democratize access to Spanner so that developers can easily get started with a familiar interface and a low entry cost, and seamlessly scale their workloads without downtime. In addition to reducing the cost of entry for production workloads with granular instance sizing, we also recently introduced Committed Use Discounts (CUDs). With CUDs, you make hourly spend-based usage commitments for a year or longer for Spanner compute capacity and get discounted prices in return.

Spend-based commitments offer maximum flexibility: the discount is automatically applied to the compute capacity of instances in any instance configuration (regional or multi-regional), across projects. You can reduce your costs by purchasing either a one-year CUD that provides a 20% discount or a three-year CUD that provides a 40% discount. So if a three-year committed use discount of 40% is applied to a 100 PU regional Spanner instance in us-central1, for example, your monthly bill will be less than $40.

We also announced the preview of the PostgreSQL interface for Spanner at Google Cloud Next '21. With this capability, you can build transformative applications with Spanner while using the familiar PostgreSQL dialect. You can leverage a core subset of the capabilities that PostgreSQL offers with the scale, consistency, and high availability of Spanner. We will soon be announcing the general availability of the PostgreSQL interface for Spanner.

With granular instance sizing, CUDs, and the PostgreSQL interface for Spanner, our goal is to address the popular demand from developers to make the best-in-class experience on Spanner more accessible and cost-effective. For example, a two-person game development startup built their first game, which they plan to launch soon, on a 100 PU Spanner instance. This gaming startup aspires for their title to have the same success as Pokémon GO (also built on Spanner), and when it does, they won't have to worry about re-architecting their database, because Spanner offers them seamless scaling to support their millions of users.

Learn more

We invite you to build your applications on Spanner and scale as your business grows. To get started with Spanner, create an instance or try it out with a Spanner Qwiklab.
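If you prefer to provision programmatically rather than through the console, a 100 PU instance can also be created with the Spanner client library. The sketch below uses Python; the project, instance ID, and display name are placeholders, and it assumes a recent google-cloud-spanner release that accepts a processing_units argument.

from google.cloud import spanner

# Placeholders: substitute your own project and instance identifiers.
client = spanner.Client(project="my-project")
config_name = "projects/my-project/instanceConfigs/regional-us-central1"

instance = client.instance(
    "granular-demo",                  # instance ID
    configuration_name=config_name,
    display_name="Granular demo",
    processing_units=100,             # 100 PUs = 1/10 of a node (assumed keyword)
)

# instance.create() returns a long-running operation; wait for it to finish.
operation = instance.create()
operation.result(timeout=300)
print("Instance ready:", instance.name)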
Source: Google Cloud Platform

Change streams for Cloud Spanner: now generally available

At this year's Google Data Cloud Summit, we announced Cloud Spanner change streams. Today, we are thrilled to announce the general availability of change streams. With change streams, Spanner users can track and stream out changes (inserts, updates, and deletes) from their Cloud Spanner database in near real time.

Change streams provide a wide range of options for integrating change data with other Google Cloud services. Common use cases include:

- Analytics: Send change events to BigQuery to ensure that BigQuery has the most recent data available for analytics.
- Event triggering: Send data change events to Pub/Sub for further processing by downstream systems.
- Compliance: Save change events in Google Cloud Storage for archiving purposes.

Getting started with change streams

This section walks you through a simple example of creating a change stream, reading its data, and sending the data to BigQuery for analytics. If you haven't already done so, get familiar with Cloud Spanner basics with the Spanner Qwiklab.

Creating a change stream

Spanner change streams are created with DDL, similar to creating tables and indexes. Change stream DDL requires the same IAM permission as any other schema change (spanner.databases.updateDdl). A change stream can track changes on a set of columns, a set of tables, or an entire database. Each change stream can have a retention period of anywhere from one day to seven days, and you can set up multiple change streams to track exactly what you need for your specific objectives. Learn more about creating and managing change streams.

Suppose you have a table Orders like this:

  CREATE TABLE Orders (
    OrderID INT64 NOT NULL,
    CustomerID INT64 NOT NULL,
    ProductId INT64 NOT NULL,
    OrderDate DATE,
    Price INT64,
  ) PRIMARY KEY(OrderID);

The DDL to create a change stream that tracks the entire Orders table, with an (implicit) default retention of one day, would be:

  CREATE CHANGE STREAM OrdersStream FOR Orders;

Creating a change stream is a long-running operation. You can check the progress on the change streams page in the Cloud console. Once created, you can click on the change stream name to view more details, and your database DDL will show the new change stream alongside the table. Now that the change stream has been created, you can process the change stream data.

Streaming data to BigQuery

There are several ways to process change stream data. The easiest is to use the Spanner connector for Apache Beam, which lets you build scalable data processing pipelines for Google Cloud Dataflow. We provide Dataflow templates for processing change data and writing it to BigQuery or Google Cloud Storage. Learn more about how Cloud Spanner change streams work with Dataflow.

In this example, we will use the Spanner change streams to BigQuery template to write change stream data to BigQuery. First, navigate to your project's Dataflow Jobs page in the Google Cloud Console. Click CREATE JOB FROM TEMPLATE, choose the Change streams to BigQuery template, then fill in the required fields. Click RUN JOB, and wait for Dataflow to build the pipeline and launch the job.

Once your Dataflow pipeline is running, you can view the job graph, execution details, and metrics on the Dataflow Jobs page. Now let's write some data into the tracked table Orders (a minimal Python sketch of such an insert appears at the end of this post).

Under the hood, when Spanner detects a data change in a data set tracked by a change stream, it writes a data change record synchronously with that data change, within the same transaction. Spanner co-locates both of these writes so they are processed by the same server, minimizing write processing. Learn more about how Spanner writes and stores change streams.

Finally, when you view your BigQuery dataset, you will see the row that you just inserted, along with some additional information from the change stream records. You are all set! As long as your Dataflow pipeline is running, data changes to the tracked tables will be seamlessly streamed to your BigQuery dataset. Learn more about monitoring your pipeline.

More ways to process change stream data

Instead of using the Google-provided Dataflow templates for BigQuery and Google Cloud Storage, you can choose to build a custom Dataflow pipeline to process change data with Apache Beam. For this case, we provide the SpannerIO Dataflow connector, which outputs change data as an Apache Beam PCollection of DataChangeRecord objects. This is a great choice if you want to define your own data transforms or want a different sink than BigQuery or Google Cloud Storage. Learn more about how to create custom Dataflow pipelines that consume and forward change stream data.

Alternatively, you can process change streams with the Spanner API. This approach, which is particularly well suited to more latency-sensitive applications, does not rely on Dataflow. The Spanner API is a powerful interface that lets you read directly from a change stream to implement your own connector and stream changes to the pipeline of your choice. With the Spanner API, a change stream is divided into multiple partitions, each of which can be used to query the change stream in parallel for higher throughput. Spanner dynamically creates these partitions based on load and size. Each partition is associated with a Spanner database split, allowing change streams to scale as effortlessly as the rest of Spanner. Learn more about using the change stream query API.

What's next

Spanner change streams are available to all customers today at no additional cost: you'll pay only for any extra compute and storage of the change stream data at the regular Spanner rates. Since change streams are built right into Spanner, there's no software to install, and you get external consistency, industry-leading availability, and effortless scale with the rest of the database. Start exploring change streams from the change stream overview today!
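For reference, the "write some data into the tracked table Orders" step above could look like the following with the Spanner Python client. This is a sketch, not code from the original post: the instance and database IDs are placeholders, and the row values are arbitrary.

import datetime
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

# Insert one row into the tracked Orders table. Because OrdersStream tracks the
# whole table, Spanner writes a data change record for this insert in the same
# transaction, and the running Dataflow pipeline forwards it to BigQuery.
with database.batch() as batch:
    batch.insert(
        table="Orders",
        columns=("OrderID", "CustomerID", "ProductId", "OrderDate", "Price"),
        values=[(1, 1001, 37, datetime.date(2022, 6, 1), 250)],
    )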
Source: Google Cloud Platform

Application Rationalization through Google Cloud’s CAMP Framework

On April 6, 2022, Google Cloud established a new partnership with CAST to help accelerate the migration and application modernization programs of customers worldwide, complementing the Google capabilities already available through the Google Cloud Application Modernization Program (CAMP).

Application rationalization (App Rat) is the first step of a cloud adoption or migration journey: you go over the application inventory to determine which applications should be Retired, Retained, Refactored, Replatformed, or Reimagined.

Why is this important to you?

Have the majority of your in-house applications still not moved to the cloud? How much time does your development team spend on support (bug fixes, tickets, etc.) versus feature development? Have infrastructure or platform dependencies ever delayed a product rollout? Would an auto-scalable, managed cloud increase stakeholder buy-in? Can Google simplify this journey?

The Google Cloud Application Modernization Program (CAMP) has been designed as an end-to-end framework to guide organizations through their modernization journey by assessing where they are today and providing a path forward. When it comes to application rationalization, the right starting point depends on your role.

Step 1 (Assess): Who is the target audience: the platform team or the application team? This determines what kind of challenges we are trying to solve. For example, the centralized platform team wants to set guardrails on how the application teams deploy their apps; streamlining this would allow the platform team to mature into SRE territory. The application team, on the other hand, values flexibility and the ability to perform continuous delivery. These examples are only the tip of the iceberg. Most enterprise customers still have a majority of their applications in the legacy world, and unless those business-critical applications move to the cloud, it's impossible to mature as an enterprise. For more information, check the State of DevOps 2021 report.

Step 2 (Analyze): Google Cloud offers the tooling and the framework to analyze your legacy applications. Platform owners (persona) usually have very little information on which workloads are a good fit for modernization. Google's StratoZone® SaaS platform provides customers with a data-driven cloud decision framework, and the StratoProbe® Data Collector Application makes it easy to deploy and scale discovery of a customer's IT environment for private, public, or hybrid-cloud planning. To ease and accelerate the VM migration journey, Google Cloud offers assistance and guidance in making the right decisions when going to the cloud. Google's mFit aims at unblocking customers in their transformation by providing workload selection for successful onboarding, at scale, to Anthos, GKE, and Cloud Run, in both pre-sales (e.g., proof-of-concept/proof-of-value) and post-sales (e.g., pilot and at-scale execution) scenarios. App and/or business owners (persona) get involved in a one-week workshop using CAST Highlight, which provides a rapid portfolio assessment through automated source code analysis for cloud readiness, open source risks, resiliency, and agility.

Step 3 (Plan & execute): Each organization is different. Some may follow a "migration factory" approach, some a "modernization factory" approach, and some both. Irrespective of which approach you choose, it is important to plan just enough so that you can start your execution. Be sure to set OKRs that enable the right measurements before you start the execution. The learnings from execution help the team(s) understand the cloud migration process and refine it for their organization. Using CAST Highlight in the assessment step, we get recommendations for the analyzed applications. From there, for certain workloads, we can use Migrate to Containers to automate the containerization of suitable workloads. However, certain applications require manual code changes. You have a few options for that: our experts can help you get started, and our partners can help you as well.

Step 4 (Measure & reiterate): Measure progress using the metrics defined in the previous step. Celebrate the wins. Consistently share the learnings and best practices with the developer community. Then pick the next challenge.

Take the next step

Tell us what you're solving for. A Google Cloud expert will help you find the best solution.
Source: Google Cloud Platform

Why IT leaders choose Google Cloud certification for their teams

As organizations worldwide move to the cloud, it's become increasingly crucial to give teams the confidence and the right skills to get the most out of cloud technology. With demand for cloud expertise exceeding the supply of talent, many businesses are looking for new, cost-effective ways to keep up.

When ongoing skills gaps stifle productivity, it can cost you money. In Global Knowledge's 2021 report, 42% of IT decision-makers reported having "difficulty meeting quality objectives" as a result of skills gaps, and in an IDC survey cited in the same Global Knowledge report, roughly 60% of organizations described a lack of skills as a cause of lost revenue. In today's fast-paced environment, businesses with cloud knowledge are in a stronger position to achieve more. So what more could you be doing to develop and showcase cloud expertise in your organization?

Google Cloud certification helps validate your teams' technical capabilities, while demonstrating your organization's commitment to the fast pace of the cloud.

"What certification offers that experience doesn't is peace of mind. I'm not only talking about self-confidence, but also for our customers. Having us certified, working on their projects, really gives them peace of mind that they're working with a partner who knows what they're doing," says Niels Buekers, managing director at Fourcast BVBA.

Why get your team Google Cloud certified?

When you invest in cloud, you also want to invest in your people. Google Cloud certification equips your teams with the skills they need to support your growing business.

Speed up technology implementation

Organizations want to speed up transformation and make the most of their cloud investment. Nearly 70% of partner organizations recognize that certifications speed up technology implementation and lead to greater staff productivity, according to a May 2021 IDC Software Partner Survey. The same report also found that 85% of partner IT consultants agree that "certification represents validation of extensive product and process knowledge."

Improve client satisfaction and success

Getting your teams certified can be the first step to improving client satisfaction and success. Research covering more than 600 IT consultants and resellers in a September 2021 IDC study found that "fully certified teams met 95% of their clients' objectives, compared to a 36% lower average net promoter score for partially certified teams."

Motivate your team and retain talent

In today's age of the ongoing Great Resignation, IT leaders are rightly concerned about employee attrition, which can result in stalled projects, unmet business objectives, and new or overextended team members needing time to ramp up. In other words, attrition hurts. But when IT leaders invest in skills development for their teams, talent tends to stick around. According to a business value paper from IDC, comprehensive training leads to 133% greater employee retention compared to untrained teams. When organizations help people develop skills, people stay longer, morale improves, and productivity increases. Organizations wind up with a classic win-win situation as business value accelerates.

Finish your projects ahead of schedule

With your employees feeling supported and well equipped to handle workloads, they can also stay engaged and innovate faster with Google Cloud certifications. "Fully certified teams are 35% more likely than partially certified teams to finish projects ahead of schedule, typically reaching their targets more than two weeks early," according to research in an IDC InfoBrief.

Certify your teams

Google Cloud certification is more than a seal of approval; it can be your framework to increase staff tenure, improve productivity, satisfy your customers, and gain other key advantages to launch your organization into the future. Once you get your teams certified, they'll join a trusted network of IT professionals in the Google Cloud certified community, with access to resources and continuous learning opportunities.

To discover more about the value of certification for your team, download the IDC paper today and invite your teams to join our upcoming webinar to get started on their certification journey.
Source: Google Cloud Platform

Built with BigQuery: Gain instant access to comprehensive B2B data in BigQuery with ZoomInfo

Editor's note: This post is part of a series highlighting our partners, and their solutions, that are Built with BigQuery.

To fully leverage the data that's critical for modern businesses, it must be accurate, complete, and up to date. Since 2007, ZoomInfo has provided B2B teams with the accurate firmographic, technographic, contact, and intent data they need to hit their marketing, sales, and revenue targets. While smart analytics teams have long combined ZoomInfo data sets with other sources in Google BigQuery to deliver reliable, actionable insights powered by machine learning, Google Cloud and ZoomInfo have recently partnered to give organizations even richer data sets and more powerful analytics tools.

Today, customers have instant access to ZoomInfo data and intelligence directly within Google BigQuery. ZoomInfo is available as a virtual view in BigQuery, so analysts can explore the data there even before importing it. Once ZoomInfo data has been imported into BigQuery, data and operations teams can use it in their workflows quickly and easily, saving their sales and marketing teams time, money, and resources. ZoomInfo data sets include:

- Contact and company: Capture essential prospect and customer data—from verified email addresses and direct-dial business phone and mobile numbers to job responsibilities and web mentions. Get B2B company insights, including organizational charts, employee and revenue growth rates, and look-alike companies.
- Technographics and scoops: Uncover the technologies that prospects use, and how they use them, to inform your marketing and sales efforts. Discover trends to shape the right outreach messaging and determine a buyer's needs before making the first touch.
- Buyer intent: ZoomInfo's buyer intent engine captures real-time buying signals from companies researching relevant topics and keywords related to your business solution across the web.
- Website IP traffic: Enrich data around traffic from your digital properties, so your customer-facing teams can take immediate action and turn traffic into sales opportunities.

In the future, ZoomInfo data sets will be available in the Google Cloud Marketplace as well as Google Cloud Analytics Hub (now in preview), alongside popular Google and third-party data sets including Google Trends, Google Analytics, and Census Bureau data.

The new features will help ZoomInfo and Google Cloud customers such as Wayfair Professional, one of the world's largest home retailers. Wayfair Professional is a long-time user of ZoomInfo's Company Data Brick, API, and enrichment services. It has historically accessed ZoomInfo data through file transfer, which involved shuffling encrypted CSVs back and forth over SFTP and manual file processing to ingest the data into Google BigQuery. Ryan Sigurdson, senior analytics manager at Wayfair Professional, shared that moving their monthly offline company enrichment workflow to BigQuery could save them weeks of manual work and maintenance every month.

Built with BigQuery

ZoomInfo is one of over 700 tech companies powering their products and businesses using data cloud products from Google, such as BigQuery, Looker, Spanner, and Vertex AI. Recently at the Data Cloud Summit, Google Cloud announced Built with BigQuery, which helps ISVs like ZoomInfo get started building applications using data and machine learning products. By providing dedicated access to technology, expertise, and go-to-market programs, this initiative helps tech companies accelerate, optimize, and amplify their success.

ZoomInfo's SaaS solutions have been built on Google Cloud for years. By partnering with Google Cloud, ZoomInfo can leverage an all-in-one cloud platform to develop its data collection, data processing, data storage, and data analytics solutions.

"Enabling customers to gain superior insights and intelligence from data is core to the ZoomInfo strategy. We are excited about the innovation Google Cloud is bringing to market and how it is creating a differentiated ecosystem that allows customers to gain insights from their data securely, at scale, and without having to move data around," says Henry Schuck, ZoomInfo's chief executive officer. "Working with the Built with BigQuery team enables us to rapidly gain deep insight into the opportunities available and accelerate our speed to market."

Google Cloud provides a platform for building data-driven applications like ZoomInfo, from simplified data ingestion, processing, and storage to powerful analytics, AI/ML, and data sharing capabilities, all integrated with the open, secure, and sustainable Google Cloud platform. With a diverse partner ecosystem and support for multicloud, open source tools, and APIs, Google Cloud gives technology companies the portability and extensibility they need to avoid data lock-in.

To learn more about ZoomInfo on Google Cloud, visit https://www.zoominfo.com/offers/google-bigquery. To learn more about Built with BigQuery, visit https://cloud.google.com/solutions/data-cloud-isvs
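As a simple illustration of the workflow described above, once a ZoomInfo view or table is available in a BigQuery project, an analyst can query it with the BigQuery Python client. This is a sketch only: the project, dataset, table, and column names below are hypothetical placeholders, not ZoomInfo's actual schema.

from google.cloud import bigquery

client = bigquery.Client()  # uses the project and credentials from your environment

# Hypothetical dataset, table, and columns; substitute the names shared with your project.
query = """
    SELECT company_name, employee_count, primary_industry
    FROM `my-project.zoominfo_share.company_firmographics`
    WHERE primary_industry = 'Software'
    ORDER BY employee_count DESC
    LIMIT 20
"""

for row in client.query(query).result():
    print(row.company_name, row.employee_count)

From here, the same rows can be joined against first-party tables in BigQuery or fed into downstream enrichment jobs, which is the pattern the Wayfair Professional example above is moving toward.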
Source: Google Cloud Platform

Take the 2022 Accelerate State of DevOps Survey

The State of DevOps report by Google Cloud and the DORA research team is the largest and longest-running research of its kind, with inputs from over 32,000 professionals worldwide. It provides an independent view into the practices and capabilities that organizations, irrespective of their size, industry, and region, can employ to drive better performance. Today, Google Cloud and the DORA research team are excited to announce the launch of the 2022 State of DevOps survey.

For the 2022 State of DevOps report we will be focusing on a topic that has been top of mind recently: security. As technology teams continue to accelerate and evolve, so do the quantity and sophistication of security threats. Security can no longer be an afterthought or the final step before delivery; it must be integrated throughout the software development process.

Shift Left

The industry must shift from reactive practices to proactive and diagnostic measures, where software teams assume that their systems are already compromised and build security into their supply chain. In the 2021 State of DevOps report we found that elite performers who met or exceeded their reliability targets were twice as likely to have shifted their security practices left (that is, implemented security practices earlier in the software development lifecycle) and to deliver reliable software quickly and safely. Not only that, but teams who integrate security best practices throughout their development process are 1.6 times more likely to meet or exceed their organizational goals. But how do companies know where to start when it comes to getting security right? In last year's report we found that companies can integrate security, improve software delivery and operational performance, and improve organizational performance by leveraging the practices described in that report.

2022 State of DevOps Survey

Like the past six research reports, our goal this year is to perform detailed analysis to help teams benchmark their performance against the industry and provide strategies that teams can employ to improve their performance. In last year's report, high and elite performers made up two-thirds of respondents for the first time. We can confidently say that as the industry continues to accelerate its adoption of DevOps principles, teams see meaningful benefits as a result. This year we are doing a deeper investigation into how security practices and capabilities predict overall software delivery and operations performance.

Achieving elite performance is a team endeavor, and diverse, inclusive teams drive the best performance. The research program benefits from the participation of a diverse group of people. Please help us encourage more voices by sharing this survey with your network, especially with your colleagues from underrepresented parts of our industry. This survey is for everyone, no matter where you are on your DevOps journey, the size of your organization, your organization's industry, or how you identify. There are no right or wrong answers; in fact, we often hear feedback that questions in the survey prompt ideas for improvement.

The survey will remain open until midnight PDT on July 22, 2022. We look forward to hearing from you and your teams!
Source: Google Cloud Platform