Anthos makes multi-cloud easier with new API, support for Azure

One of the main reasons organizations adopt Anthos is to simplify the management of Kubernetes-based applications across a variety of clouds. And now, with our latest release, we've made multi-cloud management even easier with the general availability of the Anthos Multi-Cloud API. In addition, in this latest release:

- Support for Anthos clusters1 running on Azure is now generally available
- We've added integrated logging and monitoring
- We've introduced Connect Gateway support for unified cluster access, with Terraform and Kubernetes Config Connector support coming soon!

Let's take a deeper look at what you can find in our latest Anthos release.

Exploring the Multi-Cloud API

With the latest release of Anthos, we've trimmed our installation footprint and streamlined our cluster management technology to allow you to use a single API for full lifecycle management of Anthos clusters running in AWS or Azure. Compare that to previous releases, which required you to install a management cluster in each cloud. Now, with the Anthos Multi-Cloud API, the Google Cloud control plane does all the work! This release standardizes the gcloud CLI for deploying Anthos clusters in AWS, Azure, and GCP (with full Terraform support on the way). Clusters you create in other clouds appear in the Google Cloud Console, creating a centralized management view complete with cluster telemetry and logging. Now, creating a new Anthos cluster on Google Cloud, AWS, or Azure is a simple gcloud command (a sketch appears at the end of this article), and the associated view of the new cluster shows up in the Cloud Console.

The Multi-Cloud API performs authentication with each cloud via service account or application registration, and allows clusters to be deployed on existing or newly created VPCs/VNets. It supports multiple machine types in each cloud, with plans to support even more soon (AWS, Azure). As a reminder, Anthos clusters on Azure or AWS integrate with each respective cloud's native KMS, storage facilities, and load balancing.

Using Connect Gateway to connect to Anthos clusters in AWS and Azure

Connect Gateway allows you to interact with your Anthos clusters securely, and now it works with Anthos clusters running on AWS and Azure too. Cluster commands are routed through a GCP service to your clusters over an encrypted connection, removing the need for end users to use a VPN.

Putting together a multi-cloud strategy

Operationalizing Google-managed Kubernetes clusters in all three major clouds is now much easier with the release of the Multi-Cloud API. The next step is to apply configuration governance and policy controls to the clusters, which creates safe and secure deployment landing zones for your applications regardless of the environment.

For one thing, you can now leverage Anthos Config Management (ACM), which automates policy and security at scale for Kubernetes clusters whether they are running on-premises, on GCP, or on other public clouds. ACM synchronizes your clusters to a git repository that contains your business-specific configurations and policies. Developers can launch their applications by adding configuration files to the ACM repo, or they can use their existing CD tooling. In either case, by using ACM, you can be sure security and governance are applied uniformly across your fleet of clusters.

Meanwhile, Cloud Run for Anthos and Anthos Service Mesh offer tremendous value to organizations looking to optimize and secure Kubernetes-based workloads.
Cloud Run for Anthos enables container-based application deployments that scale to zero with predictable costs in your own clusters, while making use of existing CI/CD pipelines and security tooling. Anthos Service Mesh brings advanced application networking capabilities to your services and valuable inter-cluster communication telemetry, and is designed to work on Anthos clusters running on GKE, AWS, and Azure. These Anthos capabilities are critical to businesses that manage microservice-based applications at scale; look for them to be released in the coming months.

Get started today

Anthos clusters are enterprise-grade Kubernetes clusters that are entirely supported by Google Cloud — and now running them in AWS and Azure is a seamless experience. To get started, check out our Install Anthos Clusters on AWS or Azure guide.

1. An Anthos cluster refers to a Google-managed Kubernetes cluster that can run outside of Google Cloud.
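As referenced in the Multi-Cloud API section above, cluster creation is driven by the gcloud CLI. The following is a minimal sketch of what creating an Anthos cluster on Azure might look like; every identifier, region, CIDR range, and version here is a placeholder, and the exact flag set should be verified against the gcloud container azure clusters create reference for your gcloud version.

```
# Illustrative sketch only: all names, IDs, CIDR ranges, and versions are placeholders;
# confirm the flag set against the "gcloud container azure clusters create" reference.
gcloud container azure clusters create my-azure-cluster \
  --location=us-west1 \
  --azure-region=eastus \
  --cluster-version=1.21.6-gke.1500 \
  --client=my-azure-client \
  --resource-group-id="/subscriptions/SUBSCRIPTION_ID/resourceGroups/CLUSTER_RG" \
  --vnet-id="/subscriptions/SUBSCRIPTION_ID/resourceGroups/VNET_RG/providers/Microsoft.Network/virtualNetworks/VNET_NAME" \
  --pod-address-cidr-blocks=10.200.0.0/16 \
  --service-address-cidr-blocks=10.32.0.0/24 \
  --fleet-project=PROJECT_ID
```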
Source: Google Cloud Platform

Unlock the power of change data capture and replication with new, serverless Datastream, now GA

We're excited to announce that Datastream, Google Cloud's serverless change data capture (CDC) and replication service, is now generally available. Datastream allows you to synchronize data across disparate databases, storage systems, and applications reliably and with minimal latency to support real-time analytics, database replication, and event-driven architectures. You can easily and seamlessly deliver change streams from Oracle and MySQL databases into Google Cloud services such as BigQuery, Cloud SQL, Google Cloud Storage, and Cloud Spanner, saving time and resources and ensuring your data is accurate and up to date. Get started with Datastream today.

Datastream provides an integrated solution for CDC replication use cases with custom sources and destinations.*
*Check the documentation page for all supported sources and destinations.

Since our public preview launch earlier this year, we've seen Datastream used across a variety of industries, by customers such as Chess.com, Cogeco, Schnuck Markets, and MuchBetter. This early adoption strengthens the message we've been hearing from customers about the demand for change data capture to provide replication and streaming capabilities for real-time analytics and business operations.

MuchBetter is a multi-award-winning e-wallet app, providing a truly secure and enjoyable banking alternative for customers all over the world. Working with Google Cloud Premier Partner Datatonic, they're leveraging Datastream to replicate real-time data from MySQL OLTP databases into a BigQuery data warehouse to power their analytics needs. According to Andrew McBrearty, Head of Technology at MuchBetter, "From MuchBetter's point of view, leveraging Dataflow, BigQuery and Looker has unlocked additional insights from our ever-increasing operational data. Using Datastream in our solution ensured continued real-time capability – we now have trend analysis in place, improved efficiency across the business, and the ability to use our data to derive actionable insights and to make data-driven decisions. This means we can continue to grow and adapt at a pace our customers have come to expect from MuchBetter. And for the first time, the world of ML and AI is open to us."

Getting to know Datastream

Google Cloud customers are choosing Datastream for real-time change data capture because of its differentiated approach:

Simple experience

Real-time replication of change data shouldn't be complicated: database preparation documentation, secure connectivity setup, and stream validation should be built right into the flow. Datastream delivers on this experience, as MuchBetter discovered during their evaluation of the product. "Datastream's ease-of-use and immediate availability (serverless) meant we could start our evaluation and immediately see results," says Mark Venables, Principal Data Engineer at MuchBetter. "For us, this meant getting rid of the considerable pre-work needed to align proof of concept tests with third-party CDC suppliers." Datastream guides you to success by providing detailed prerequisites and step-by-step configuration guidelines to prepare your source database for CDC ingestion.

End-to-end solution

Building pipelines to replicate changes from your source database shouldn't take up all of your team's time. Use pre-built Dataflow templates to easily replicate data into BigQuery, Cloud Spanner, or Cloud SQL.
Out of the box, these Dataflow templates will automatically create the tables and update the data at the destination, taking care of any out-of-order or duplicate events, and providing error resolution capabilities. Leverage the templates' flexibility to fine-tune Dataflow to fit your specific needs. "Google-managed Dataflow templates meant getting our pipelines up and running with minimal effort and fuss – this allowed more time to be spent on more complex pipeline development whilst tactically delivering solutions to our users," says Venables.

Secure

Datastream keeps your migrated data secure, supporting private connectivity between source and destination databases. "Establishing connectivity is often viewed as hard. Datastream surprised us with its ease of use & setup, even in more secure modes," says Grzegorz Dlugolecki, Principal Cloud Architect at Chess.com, a leading online chess community and mobile application, hosting more than ten million chess games every day. "Datastream's private connectivity configuration allowed us to easily create a private connection between our source and the destination, and ensure our data is safe and secure."

Datastream provides a simple wizard to automatically set up private, secure connectivity to your source database.

High throughput, low latency

With Datastream's serverless architecture, you don't need to worry about provisioning, managing machines, or scaling up resources to meet fluctuations in data throughput. Datastream guarantees high performance – a single stream can process tens of MBs per second while ensuring minimal latency. "We evaluated several market-leading ETL solutions," says Dlugolecki. "Datastream was the only tool able to successfully sync our complex, single-table datasets, doing this in weeks instead of the years estimated by the other vendors."

Getting started with Datastream

You can start streaming real-time changes from your Oracle and MySQL databases today using Datastream:

1. Navigate to the Datastream area of your Google Cloud console, under Big Data, and click Create Stream.
2. Choose the source database type, and see what actions you need to take to set up your source.
3. Create your source connection profile, which can later be used for additional streams.
4. Define how you want to connect your source.
5. Create and configure your destination connection profile.
6. Validate your stream and make sure the test was successful. Start the stream when you're ready.

Once the stream is started, Datastream will backfill historical data and will continuously replicate new changes as they happen.

Learn more and start using Datastream today

Datastream is now generally available for Oracle and MySQL sources. Datastream supports sources both on-premises and in the cloud, and captures historical data and changes into Cloud Storage. Integrations with Cloud Data Fusion and Cloud Dataflow (our data integration and stream processing products, respectively) replicate changes to other Google Cloud destinations, including BigQuery, Cloud Spanner, and Cloud SQL. For more information, head on over to the Datastream documentation, see our step-by-step Datastream + Dataflow to BigQuery tutorial, or start training with this Datastream Qwiklab.
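For teams that prefer scripting over the console wizard above, the same setup can in principle be driven from the gcloud CLI. The following is an illustrative, unverified sketch of creating a MySQL source connection profile and a stream; the command group, flag names, and config file formats are assumptions and should be confirmed against the current gcloud datastream reference before use.

```
# Illustrative sketch only: flag names, values, and file formats are assumptions;
# verify them against the gcloud datastream reference for your gcloud version.
gcloud datastream connection-profiles create mysql-source-profile \
  --location=us-central1 \
  --type=mysql \
  --display-name="MySQL source" \
  --mysql-hostname=10.0.0.5 \
  --mysql-port=3306 \
  --mysql-username=datastream \
  --mysql-password-file=./mysql_password.txt

gcloud datastream streams create mysql-to-gcs-stream \
  --location=us-central1 \
  --display-name="MySQL to Cloud Storage" \
  --source=mysql-source-profile \
  --mysql-source-config=mysql_source_config.json \
  --destination=gcs-destination-profile \
  --gcs-destination-config=gcs_destination_config.json \
  --backfill-all
```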
Source: Google Cloud Platform

How data and AI can help media companies better personalize; and what to watch out for

Media companies now have access to an ever-expanding pool of data from the digitally connected consumer. And over the past two years, as content consumption and audience behaviors have shifted in response to the world around us, direct-to-consumer has only accelerated. As media organizations pivot from third-party to first-party data, the volume, velocity, and fragmentation of that data present challenges. It's also an opportunity to better understand how to acquire, engage and retain audiences — and inject agility into their business amidst a competitive landscape. How should media companies be thinking about their data, and its value, to capitalize on this opportunity? To help answer these questions, we sat down with Gloria Lee, Executive Account Director in Media & Entertainment, and John Abel, Technical Director for the Office of the CTO at Google Cloud.

Data and the growing importance of personalization

There's no doubt customer needs and expectations are in a constant state of flux. Across the media industry, audiences are increasingly expecting personalized content. In fact, a PwC study conducted in 2020 found that nearly one-third (31%) of survey respondents said easy, personalized content recommendations would be a reason for staying with a streaming service.

Audience engagement is the currency, and in a crowded space where attention is finite, media companies need a granular understanding of their audience. To do this, there's an opportunity to capture and capitalize on first-party data, so they can better serve their audience. Not just what audiences are consuming, but also when, where, and on which platform (and increasingly, those platforms are digital). These data points are key to understanding audiences deeply and delivering the hyper-personalized experiences that audiences are expecting.

"If you look across the world today, we know that through digitalization, [that] hyper-personalization is required," says John. "So that hyper-personalization, the volume of data and the value of the data is super critical across all industries. Media and entertainment is no different," he adds.

Enriching storytelling through AI & ML

Extracting insights on how, where and when the consumer wants to receive content will accelerate the need for data research; AI and ML will be critical to unlocking data's full potential. "The most valuable data is generated data, typically from machine learning or AI, where you're seeing new insights in data that give you new opportunities," explains Gloria.

New technologies are providing insights — often in real time — about audiences, making personalization an easier task. An example use case would be recommending a new song based on a user's listening history. This kind of personalization is just the start, as AI/ML unlocks more novel opportunities. For example, AI/ML can also be used to enrich the watching experience by finding opportune moments to integrate brands. As Gloria puts it, "Artificial intelligence, and machine learning is what enables people to quickly look through their content to find relevant moments for marketing purposes."

Getting personalization right while making sure to keep consumer information safe and private is a challenge for all consumer companies, not just M&E. John explains, "there's a blend of how they move technology to the edge and they don't break privacy." Media and entertainment companies will need to keep their data secure and private, using sophisticated practices like data federation.
In this model, individual data is not exchanged. Rather, data is first aggregated into cohorts to anonymize the individual. The goal of methods like this is to obtain useful insights while retaining privacy and security.

How data is driving audience experiences

Spotify is a prime example of a media company using data-led insights to provide personalized content for their customers — making it easier for users to discover new audio content and connect with their favorite artists or podcasts.

"[With] Google Cloud…we can iterate quicker on key needs, like data insights and machine learning…[streamlining] our ability to concentrate on what's important to our users and give them the experiences they know and love about Spotify." — Tyson Singer, VP of technology and platform at Spotify

Sky, one of Europe's leading broadcasters, is also transforming its data strategy to better serve their customers. By creating a scalable cloud-based architecture, Sky can keep up with increasing amounts of TV box diagnostic data on service uptime and delivery — meaning less data lost and more time to focus on improving user experience through personalization. "The data will sit right at the heart of Sky's future strategy. It will help ensure that our products are intuitive and easy to use and that we can keep seamlessly connecting customers with the content and services they know and love," says Oliver Tweedie, Director of Data Engineering at Sky.

Transforming to a data-oriented media company

Keeping up with new technology trends, inside and outside of the industry, will play a critical role in how media and entertainment companies can survive and thrive into the future. And without a way to centralize and draw insights from their data quickly, media organizations will struggle to stay in the race. With any type of change comes resistance. But at the end of the day, it all comes down to people. When navigating digital transformations, Gloria touches on three categories of people: supporters, who are excited about the change; those who couldn't care less; and detractors, who are opposed to it. "It's really tapping into the leaders for those three different groups within the company and trying to get them on board and seeing what their drivers are," Gloria explains.

So what advice do John and Gloria have for media players looking into the data-led future?
Source: Google Cloud Platform

Expanding our infrastructure with cloud regions around the world

Businesses and organizations around the world depend on Google Cloud to help them digitally transform, innovate across their industries, and drive operational efficiencies for long-term growth. Our global network of Google Cloud Platform regions is the foundation of this capability, delivering high-performance, low-latency cloud-based services to customers and their users in more than 200 countries and territories globally. With 29 cloud regions and 88 zones, we operate more regions with multiple availability zones than any other hyperscale cloud provider. So far in 2021, we've opened new regions in Warsaw (Poland), Delhi NCR (India), Melbourne (Australia), and Toronto (Canada), bringing the cleanest cloud in the industry closer to more customers across multiple continents. Building on this momentum, today we are excited to share further updates to our expansion strategy.

Extending our Google Cloud region roadmap

Chile

Today, we're proud to announce that our Santiago cloud region is now operational, ready to help more South American customers and partners build a digital-first future. This marks our first cloud region in Chile and second in South America, complementing São Paulo, which opened in 2017. The Santiago cloud region brings our high-performance, low-latency cloud services closer to customers across Latin America, from financial institutions like Caja Los Andes to health providers like Red Salud and enterprises like LATAM Airlines. Join us as we celebrate the opening of the Santiago cloud region with our customers and local Google leaders.

Israel

Today, we are excited to share that our Google Cloud region in Israel will be located near Tel Aviv. When operational, the Tel Aviv region will enable us to meet growing demand for cloud services in Israel across industries, from retail to financial services to the public sector. Israeli companies like BreezoMeter, Haaretz, PayBox, and Wix already run on our cloud, making it easier to operate their businesses faster, more securely, and more reliably.

"Google Cloud's global network helps Wix to achieve the best performance around the globe. For example, with Google Cloud CDN, we are able to serve tens of millions of requests per day seamlessly, while ensuring that our customers get a consistently great web experience worldwide." – Eugene Olshenbaum, VP Technology at Wix

Recently, Google Cloud was selected by the Israeli government to provide public cloud services to all government entities across the state, including ministries, authorities, and government-owned companies.

Germany

Our second cloud region in Germany will be located in Berlin-Brandenburg, complementing our existing cloud region in Frankfurt. Once launched, our cloud region in Berlin-Brandenburg will strengthen our safe and secure platform for customers in Germany, including both public sector organizations and businesses like BMG, helping them scale and adapt to changing requirements.

"At BMG, we're continuously pushing digitization further to help ensure that when our artists release new music, it is promoted effectively around the world. With autoscaling via BigQuery, excellent customer support, and a clean and simple user interface, Google Cloud has been a partner to our technology team and beyond.
The new Google Cloud region in Berlin-Brandenburg will further improve collaboration company-wide and make data more accessible to all teams." – Gaurav Mittal, VP IT & Systems at BMG

Saudi Arabia

Last year, we announced our plans to deploy and operate a cloud region in Saudi Arabia, while a local strategic reseller, sponsored by Aramco, will offer cloud services to businesses in the Kingdom. Today, we are announcing Dammam as the location for this cloud region. As we prepare for launch, we will start hiring out of a Riyadh-based office to support the cloud region's deployment and operation.

United States

Over the next year, we will add cloud regions in Columbus, Ohio, and Dallas, Texas, providing customers operating in North America with the capacity they need to run mission-critical services at the lowest possible latency. These new U.S. regions will bring our services even closer to existing customers such as J.B. Hunt Transport, Inc., which is implementing Google Cloud solutions to help create the most efficient transportation network in North America.

Beyond performance and capacity

As we add regions across the Americas, Asia, Europe and the Middle East, Google Cloud is committed to continuing to help build a more sustainable future and create opportunities for everyone. We operate the cleanest cloud in the industry. Google was the first organization of its size to become carbon neutral in 2007, and we were the first major company to match 100% of our electricity consumption with renewable energy, starting in 2017 and every year since then. As we now work toward achieving carbon-free energy 24/7 by 2030, we're proud to support our customers with cloud infrastructure and tools to reduce their environmental impact.

In addition to decarbonizing our energy consumption around the world, we are committed to upholding human rights in every country where we operate. This includes respecting the Universal Declaration of Human Rights, as well as the standards established in the United Nations Guiding Principles on Business and Human Rights and the Global Network Initiative Principles. We're a proud founding member of the Global Network Initiative, in which we work closely with civil society, academics, investors and industry peers to protect and advance freedom of expression and privacy globally as we deliver high-quality, relevant and useful content.

Whenever we expand operations in a new country, we undertake thorough human-rights due diligence. This often includes external human-rights assessments, which identify risks that we review carefully and decide how to address. We maintain a clear position on requests from governments for access to data. We also recently announced the Trusted Cloud Principles initiative, led by Google, Amazon, Microsoft, and other technology companies, to protect the rights of customers as they move to the cloud.

From the beginning, Google's mission has been to organize the world's information and make it universally accessible and useful. Within Google Cloud, we aim to do the same for enterprise organizations, in ways that meet international and local standards. As the global landscape continues to evolve, we are committed to collaborating with human rights organizations and the broader technology industry to uphold human rights in every country where we operate.
Learn more about our human rights efforts and global cloud infrastructure.
Source: Google Cloud Platform

Google showcases Cloud TPU v4 Pods for large model training

Recently, models with billions or trillions of parameters have shown significant advances in machine learning capabilities and accuracy. For example, Google's LaMDA model is able to engage in a free-flowing conversation with users about a large variety of topics. There is enormous interest within the machine learning research and product communities in leveraging large models to deliver breakthrough capabilities. The high computational demand of these large models requires an increased focus on improving the efficiency of the model training process, and benchmarking is an important means to coalesce the ML systems community towards realizing higher efficiencies.

In the recently concluded MLPerf v1.1 Training round1, Google submitted two large language model benchmarks into the Open division, one with 480 billion parameters and a second with 200 billion parameters. These submissions make use of publicly available infrastructure, including Cloud TPU v4 Pod slices and the Lingvo open source modeling framework. Traditionally, training models at these scales would require building a supercomputer at a cost of tens or even hundreds of millions of dollars – something only a few companies can afford to do. Customers can achieve the same results using exaflop-scale Cloud TPU v4 Pods without incurring the costs of installing and maintaining an on-premise system.

Large model benchmarks

Google's Open division submissions consist of a 480 billion parameter dense Transformer-based encoder-only benchmark using TensorFlow and a 200 billion parameter JAX benchmark. These models are architecturally similar to MLPerf's BERT model but with larger dimensions and more layers. These submissions demonstrate large model scalability and high performance on TPUs across two distinct frameworks. Notably, these benchmarks, with their stacked transformer architecture, are fairly comparable in their compute characteristics to other large language models.

Figure 1: Architecture of the encoder-only model used in Google's MLPerf 1.1 submissions.

Our two submissions were benchmarked on 2048-chip and 1024-chip TPU v4 Pod slices, respectively. We were able to achieve an end-to-end training time of ~55 hours for the 480B parameter model and ~40 hours for the 200B parameter model. Each of these runs achieved a computational efficiency of 63%, calculated as the fraction of the model's floating point operations (together with compiler rematerialization) over the peak FLOPs of the system used; a worked sketch of this calculation appears at the end of this article.

Next-generation ML infrastructure for large model training

Achieving these impressive results required a combination of several cutting-edge technologies. First, each TPU v4 chip provides more than 2X the compute power of a TPU v3 chip – up to 275 peak TFLOPS. Second, 4,096 TPU v4 chips are networked together into a Cloud TPU v4 Pod by an ultra-fast interconnect that provides 10x the bandwidth per chip at scale compared to typical GPU-based large scale training systems. Large models are very communication intensive: local computation often depends on results from remote computation that are communicated across the network.
TPU v4's ultra-fast interconnect has an outsized impact on the computational efficiency of large models by eliminating latency and congestion in the network.

Figure 2: A portion of one of Google's Cloud TPU v4 Pods, each of which is capable of delivering in excess of 1 exaflop/s of computing power.

The performance numbers demonstrated by our submission also rely on our XLA linear algebra compiler and leverage the Lingvo framework. XLA transparently performs a number of optimizations, including GSPMD-based automatic parallelization of many of the computation graphs that form the building blocks of the ML model. XLA also allows for reduction in latency by overlapping communication with the computations. Our two submissions demonstrate the versatility and performance of our software stack across two frameworks, TensorFlow and JAX.

Large models in MLPerf

Google's submissions represent an important class of models that have become increasingly important in ML research and production, but are currently not represented in MLPerf's Closed division benchmark suite. We believe that adding these models to the benchmark suite is an important next step and can inspire the ML systems community to focus on addressing the scalability challenges that large models present. Our submissions demonstrate 63% computational efficiency, which is cutting-edge in the industry. This high computational efficiency enables higher experimentation velocity through faster training. This directly translates into cost savings for Google's Cloud TPU customers.

Please visit the Cloud TPU homepage and documentation to learn more about leveraging Cloud TPUs using TensorFlow, PyTorch, and JAX.

1. The MLPerf name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See www.mlcommons.org for more information.
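To make the efficiency definition above concrete, here is a small back-of-the-envelope sketch in Python. It only uses figures quoted in this post (275 peak TFLOPS per chip, a 2,048-chip slice, ~55 hours of training, 63% efficiency); the FLOP total it derives is an illustrative estimate, not an official number.

```python
# Back-of-the-envelope check of the computational-efficiency definition:
# efficiency = (model FLOPs, incl. rematerialization) / (peak FLOPs the system could deliver).
# Inputs are the figures quoted in the post; the derived total is only illustrative.

peak_flops_per_chip = 275e12     # 275 peak TFLOPS per TPU v4 chip
num_chips = 2048                 # TPU v4 Pod slice used for the 480B-parameter run
training_seconds = 55 * 3600     # ~55 hours end-to-end training time
efficiency = 0.63                # reported computational efficiency

# Total FLOPs the slice could have executed at peak over the run.
peak_system_flops = peak_flops_per_chip * num_chips * training_seconds

# Implied model FLOPs (including compiler rematerialization) actually performed.
model_flops = efficiency * peak_system_flops

print(f"Peak system FLOPs over the run: {peak_system_flops:.3e}")
print(f"Implied model FLOPs at 63% efficiency: {model_flops:.3e}")
```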
Source: Google Cloud Platform

Shopify engineers deliver on peak performance during Black Friday Cyber Monday 2021

Many predicted this would be the biggest Black Friday/Cyber Monday (BFCM) weekend ever recorded. And it looks like those predictions were right — especially for independent and direct-to-consumer (DTC) brands. Shopify, a leading provider of essential internet infrastructure for commerce, works with more than 1.7 million merchants worldwide. Over the course of the long weekend, the company's merchants welcomed a record number of consumers purchasing from independent and DTC brands (47 million globally) and drove $6.3 billion in global sales, a 23% increase year over year (up from $5.1 billion in global sales in 2020). These results are incredible on their own, and even more so when we remember that these increases are on top of last year's pandemic-fueled shift to online shopping.

But what does it take to handle this type of scale? Noted across the industry for their innovative approaches to peak-time shopping solutions, Shopify engineers are leveraging Google services to enhance performance for merchants and shoppers like never before.

Shopify engineers are showing commerce how big event performance is done

Shopify knows how critical peak seasons are for its merchants. With peak sales of more than $3.1 million per minute at 12:02 PM EST on Black Friday, November 26, Shopify leverages one of the most skilled teams of engineers to develop pioneering tools that improve scalability and velocity for merchants.

"Achieving a record-breaking sales weekend for BFCM 2021 is only possible with an infrastructure that's built for performance and scale," said Delaney Manders, VP of Engineering for Shopify. "With the incredible collaboration between Shopify engineering and Google Cloud, we averaged about 30TB/min of egress traffic across our infrastructure and helped our merchants deliver near-perfect uptime for their consumers during peak sales periods."

Through customer experience and operational optimization tools designed with Google Cloud services, Shopify delivers streamlined and reliable performance for merchants and consumers during high-traffic events. With tools like Shopify Inbox and Shop Pay, Shopify engineers have designed some of the finest merchant and consumer experience optimization tools in commerce. Through their solutions, Shopify engineers work smarter, not harder, and enable merchants to meet peak event demand without added hassle or stress. Black Friday/Cyber Monday may be the event of the season for many Shopify merchants, but the massive scalability, low latency, and optimized uptime make high-traffic and high-risk events as easy to execute as any other day.

From the merchant, to the backend, to the consumer, Shopify's engineers have developed tools to optimize performance for merchants and shoppers, while simplifying their workloads. With these approaches, big event days at Shopify go as smoothly as any other.

Shop Pay takes the lines (and the forms) out of shopping

For the end customer, a time-consuming checkout experience can easily discourage or delay a purchase. When it comes to big events like Black Friday/Cyber Monday, Shopify recognizes the particular significance of an intelligent and seamless checkout experience. The company's engineers developed Shop Pay, a solution that helps accelerate the purchase process. In addition to supporting faster checkout, Shop Pay also personalizes the shopping experience by remembering a shopper's preferences and encrypting everything for optimal safety. Shopify's data shows that Shop Pay increases checkout speed by 4x.
Following an analysis of 10,000 of its largest merchants, Shopify found merchants who enabled Shop Pay had an average checkout-to-order rate 1.72x higher than those going through regular checkouts. This innovation in platform performance significantly increased growth and retention for merchants.

Shop Mover gets the backend ready for a crowd

To keep operations as agile and modern as possible, Shopify developed and open-sourced a general-purpose MySQL data migration tool, Ghostferry. Supported by Google Cloud services, Ghostferry moves data across different MySQL instances while the application is still running and with minimal downtime (<5 seconds). Shopify's Shop Mover is built on top of Ghostferry and Google Cloud services, enabling load-balancing of data shards across multiple databases. In addition to being the tool used for the initial migration out of Shopify's data centers and into Google Cloud, Shop Mover now moves hundreds of thousands of shops every year and is a reason Shopify merchants could handle the high volume of BFCM 2021.

Shopify's global reach means better deals for merchants and shoppers

To expand access to new regions while optimizing platform performance for merchants, Shopify engineers leveraged the global infrastructure of Google Cloud. Shopify supports merchants from around the world with Google Cloud's network of 25 regions and 76 availability zones. With lower latency and higher reliability, performance is optimized for shoppers and merchants around the world. Through its partnership with Google, Shopify engineers are leveraging Global Virtual Private Cloud (VPC) to streamline the writing and deployment of applications that span multiple regions, and in-country disaster recovery, which helps Shopify maintain business continuity across the globe.

The culture of creativity, scale, and velocity among Shopify's engineering team continues to drive exciting solutions for enhanced peak event performance. Backed by fresh approaches to improving the customer experience and streamlining operations at scale, Shopify engineers can expect a streamlined workday and merchants can confidently deliver for shoppers without a hassle. And this translates to a better holiday shopping experience for merchants and shoppers alike.
Source: Google Cloud Platform

Vertex AI NAS: higher accuracy and lower latency for complex ML models

Vertex AI launched with the premise "one AI platform, every ML tool you need." Let's talk about how Vertex AI streamlines modeling universally for a broad range of use cases. The overall purpose of Vertex AI is to simplify modeling so that enterprises can fast-track their innovation, accelerate time to market, and ultimately increase return on ML investments. Vertex AI facilitates this in several ways. Features like Vertex AI Workbench, for example, speed up training and deployment of models by five times compared to traditional notebooks. Vertex AI Workbench's native integration with BigQuery and Spark means that users without data science expertise can more easily perform machine learning work. Tools integrated into the unified Vertex AI platform, such as state-of-the-art pre-trained APIs and AutoML, make it easier for data scientists to build models in less time. And for modeling work that lends itself best to custom modeling, Vertex AI's custom model tooling supports advanced ML coding, with nearly 80% fewer lines of code required (compared to competitive platforms) to train a model with custom libraries. Vertex AI delivers all this while maintaining a strong focus on Explainable AI.

Yet organizations with the largest investments in AI and machine learning, with teams of ML experts, require extremely advanced toolsets to deliver on their most complex problems. Simplified ML modeling isn't relegated to simple use cases only. Let's look at Vertex AI Neural Architecture Search (NAS), for instance. Vertex AI NAS enables ML experts at the highest level to perform their most complex tasks with higher accuracy, lower latency, and low power requirements.

Vertex AI NAS originates from the deep experience Alphabet has with building advanced AI at scale. In 2017, the Google Brain team recognized the need for a better way to scale AI modeling, so they developed Neural Architecture Search technology to create an AI that generates other neural networks, trained to optimize their performance on a specific task the user provides. To the astonishment of many in the field, these AI-optimized models were able to beat a number of state-of-the-art benchmarks, such as ImageNet and SOTA mobilenets, setting a new standard for many of the applications we see in use today, including many Google-internal products. Google Cloud saw the potential of such a technology and, in less than a year, shipped a productized version of the technique (under the AutoML brand). Vertex AI NAS is the newest and most powerful version of this idea, using the most sophisticated innovation that has emerged since the initial research.

Customer organizations are already implementing Vertex AI NAS for their most advanced workloads. Autonomous vehicle company Nuro is using Vertex AI NAS, and Jack Guo, Head of Autonomy Platform at the company, states, "Nuro's perception team has accelerated their AI model development with Vertex AI NAS. Vertex AI NAS have enabled us to innovate AI models to achieve good accuracy and optimize memory and latency for the target hardware. Overall, this has increased our team's productivity for developing and deploying perception AI models."

And our partner ecosystem is growing for Vertex AI NAS. Google Cloud and Qualcomm Technologies have collaborated to bring Vertex AI NAS to the Qualcomm Technologies Neural Processing SDK, optimized for Snapdragon 8.
This will bring AI to different device types and use cases, such as those involving IoT, mixed reality, automobiles, and mobile.

Google Cloud's commitments to making machine learning more accessible and useful for data users, from the novice to the expert, and to increasing the efficacy of machine learning for enterprises are at the core of everything we do. With the suite of unified machine learning tools within Vertex AI, organizations can take advantage of every ML tool they need on one AI platform.

Ready to start ML modeling with Vertex AI? Start building for free. Want to know how Vertex AI Platform can help your enterprise increase return on ML investments? Contact us.
Source: Google Cloud Platform

Google Cloud’s 5 ways to create differentiated value in post-merger integrations

Across all industries, the last few years have accelerated the need to transform to digital, acquire new talent and capabilities at a rapid pace, and enlist new operating models, such as cloud technology. The impetus for these changes has been not only to drive down costs but also to increase competitiveness and boost productivity. Mergers & acquisitions (M&A), as well as restructurings such as carve-outs, divestitures, spin-offs, etc., have always been a common tool on the CEO and Board agenda to deliver added growth, create and deliver synergies, reposition the company's strategy, and rebalance the corporate portfolio to its most efficient and forward-looking uses. Increasingly, strategic access to innovative technologies, data acquisition and monetization, technical debt elimination, and capabilities such as Artificial Intelligence and Machine Learning (AI/ML) are some of the main reasons to pursue a deal. The post-merger integration of the new, bigger, and more complex technology estate is likely to be an increasingly important element of delivering added value…if the integration is completed successfully.

Google Cloud has a unique set of solutions, processes, partners and people to accelerate strategic deals and retain or create even more value than was initially built into the deal's valuation model. In this blog post, we expand on how Google Cloud acts as an accelerating agent for realizing additional value propositions.

Google Cloud's 5 ways to create differentiated value in post-merger integrations

Google Cloud can be a trusted advisor in tracing the strategic integration journey after an M&A deal. More concretely, Google Cloud's value can be summarized in the following:

1. Seamless integration and a single pane of control across Cloud Service Providers (CSPs) and companies' physical data centers with Anthos.

What we typically observe is that after M&A, customers end up with a fragmented technology stack across multiple CSPs and a private cloud on-premises. This increases the friction in the software development lifecycle (SDLC) due to the different processes and skill sets required to develop, test and release software across the various environments. Without a consistent platform, companies squander valuable technical resources and fall short of business demands for velocity and customer experience. In a recent study by Forrester Consulting (commissioned by Google Cloud), using Anthos as a managed platform to control the SDLC across environments led to a projected 4.8x return on investment (ROI), a 38% reduction in non-coding activities for technology teams, and a 75% increase in application migration and modernization.

2. Increased optionality, observability and control for IT and vendor rationalization in a post-merger landscape, with our API management platform Apigee.

For technology teams, the merger impact is felt immediately. The technology estate is bigger, more complex, and most probably has multiple pockets of duplication, and it will likely be a shifting landscape in the years to come. APIs are a key element to the success of post-merger integration. By putting Apigee, an API management platform, in front of all HTTP application and data traffic, an organization can create a single pane of glass across all infrastructure, allowing companies undergoing an M&A to make more strategic, data-driven decisions. For instance, traffic to vendor or legacy systems can be centrally monitored and measured to make their use observable, which leads to more data-driven rationalization decisions.
Also, introducing an API layer around pockets of duplication provides optionality and relaxes time constraints, both likely critical elements of a successful integration.

3. Rapid and automated modernization of legacy technical debt, reducing the time post-merger operations spend gauging IT priorities.

At Google Cloud we offer three main levers to help with these post-merger situations:

- Post-merger modernization: Using Google Cloud-owned tools, processes and methodologies, which we refer to as the G4 Platform, Google Cloud can help automate and rapidly modernize legacy systems, e.g., mainframe modernization by automatically translating code written in COBOL to modern, easily and cost-effectively maintainable languages such as Java.
- Attacking the post-merger backlog: Google Cloud Cortex Framework offers repeatable blueprints and reference architectures that can accelerate time to value. Backlogs often multiply after a merger, and leveraging repeatable patterns is key to scaling the output of the technology teams.
- Divide and conquer complexity: Decomposing legacy applications through containers opens the path for step-by-step migrations and modernization, gradually moving processes to Virtual Machines on Cloud and then onwards to containers to leverage the full power of Kubernetes.

4. Merging data and re-prioritizing data operations and licenses, while avoiding sunk costs and fixed, often duplicated, multi-year contracts with data providers.

Using market and alternative data as a service on Google Cloud, merged companies can quickly and efficiently rationalize their post-merger data licensing and infrastructure needs. For instance, financial institutions can leverage commercial market data readily available on Google Cloud in an analysis-ready state, while corporate sustainability teams can use geolocation datasets available through Google Earth Engine. The business can focus on new business opportunities instead of adapting and merging legacy data estates and operations. Additionally, with BigQuery Omni, any data infrastructure merge doesn't have to be a big bang – the data can live in other cloud providers or on-premises while you still manage it from a single pane of control.

5. Placing security, operational resilience, and sovereignty at the center of post-deal operations.

Post-M&A, merged companies will likely have to adhere to more jurisdictions and regulatory oversight compared to what each individual entity had to adhere to and report to previously. This can pose significant challenges in terms of ensuring that the merged company's technology estate operates within the bounds of the data, operational and software sovereignty restrictions of each jurisdiction. Google Cloud offers a series of characteristics that inherently help in this situation. For example, Google Cloud offers access transparency and data sovereignty, portability during stressed exit scenarios for jurisdictions that mandate it, and a Sovereign Cloud offering with trusted partners for projects and jurisdictions that demand the highest level of sovereignty assurances.

M&A can place significant pressures on technology teams in the race to identify and merge processes, technologies, and responsibilities.
Google Cloud has a plethora of technologies, solutions, and patterns to help you through the journey and unlock the potential of the new combined entity to focus on what matters.

Acknowledgments

Special thanks to Mose Tronci, Solutions Architect – Financial Services, and Prue Mackenzie, Key Account Director – Financial Services, for their contributions to this blog post.
Source: Google Cloud Platform

The next big evolution in serverless computing

The term "serverless" has infiltrated most cloud conversations, shorthand for the natural evolution of cloud-native computing, complete with many productivity, efficiency and simplicity benefits. The advent of modern "Functions as a Service" platforms like AWS Lambda and Google Cloud Functions heralded a new way of thinking about cloud-based applications: a move away from monolithic, slow-moving applications toward more distributed, event-based, serverless applications built from lightweight, single-purpose functions, where managing underlying infrastructure was a thing of the past.

With these early serverless platforms, developers got a taste for not needing to reason about, or pay for, raw infrastructure. Not surprisingly, that led them to apply the benefits of serverless to more traditional workloads. Whether it was simple ETL use cases or legacy web applications, developers wanted the benefits of serverless platforms to increase their productivity and time-to-value. Needless to say, many traditional workloads turned out to be a poor fit for the assumptions of most serverless platforms, and the task of rewriting those large, critical, legacy applications into a swarm of event-based functions wasn't all that appealing. What developers needed was a platform that could provide all the core benefits of serverless, without requiring them to rewrite their application — or really have an opinion at all about the workload they wanted to run.

With the introduction of Cloud Run in 2019, the team here at Google Cloud aimed to redefine how the market, and our customers, thought about serverless. We created a platform that is serverless at its core, but that's capable of running a far wider set of applications than previous serverless platforms. Cloud Run does this by using the container as its fundamental primitive. And in the two years since launch, the team has released 80 distinct updates to the platform, averaging an update every 10 days. Customers have similarly accelerated their adoption: Cloud Run deployments more than quadrupled from September 2020 to September 2021.

The next generation of serverless platforms will need to maintain the core, high-value characteristics of the first generation, things like:

- Rapid auto-scaling from, and to, zero
- The option of pay-per-use billing models
- Low barriers to entry through simplicity

Looking ahead, serverless platforms will need a much more robust set of capabilities to serve a new, broader range of workloads and customers. Here are the top five trends in serverless platforms that we see for 2022 and beyond.

1. More (legacy) workloads

Serverless's value proposition isn't limited to new applications, and shouldn't require a wholesale rewrite of what is (and has been) working just fine. Developers ought to be able to apply the benefits of serverless to a wider range of workloads, including existing ones. Cloud Run has been able to expand the range of workloads it can address with several new capabilities, including:

- Per-instance concurrency. Many traditional applications run poorly when constrained to the single-request model that's common in FaaS platforms. Cloud Run allows for up to 1,000 concurrent requests on a single instance of an application, providing a far greater level of efficiency.
- Background processing. Current-generation serverless platforms often "freeze" the function when it's not in use. This makes for a simplified billing model (only pay while it's running), but can make it difficult to run workloads that expect to do work in the background. Cloud Run supports new CPU allocation controls, which allow these background processes to run as expected.
- Any runtime. Modern languages or runtimes are usually appropriate for new applications, but many existing applications either can't be rewritten, or depend on a language that the serverless platform does not support. Cloud Run supports standard Docker images and can run any runtime, or runtime version, that you can run in a container.

2. Security and supply chain integrity

Recent high-profile hacks like SolarWinds, Mimecast/Microsoft Exchange, and Codecov have preyed on software supply chain vulnerabilities. Malicious actors are compromising the software supply chain — from bad code submission to bypassing the CI/CD pipeline altogether. Cloud Run integrates with Cloud Build, which offers SLSA Level 1 compliance by default and verifiable build provenance. With code provenance, you can trace a binary to the source code to prevent tampering and prove that the code you're running is the code you think you're running. Additionally, the new Build Integrity feature automatically generates digital signatures, which can then be validated before deployment by Binary Authorization.

3. Cost controls and billing flexibility

Workloads with highly variable traffic patterns, or those with generally low traffic, are a great fit for the rapid auto-scaling and scale-to-zero characteristics of serverless. But workloads with a more steady-state pattern can often be expensive when run with fine-grained pay-per-use billing models. In addition, as powerful as unbounded auto-scaling can be, it can make it difficult to predict the future cost of running an application.

Cloud Run includes multiple features to help you manage and reduce costs for serverless workloads. Organizations with stable, steady-state, and predictable usage can now purchase committed use contracts directly in the billing UI, for deeply discounted prices. There are no upfront payments, and these discounts can help you reduce your spend by as much as 17%. The always-on CPU feature removes all per-request fees, and is priced 25% lower than the standard pay-per-request model. This model is generally preferred for applications with either more predictable traffic patterns, or those that require background processing.

For applications that require high availability with global deployments, traditional "fixed footprint" platforms can be incredibly costly, with each redundant region needing to carry the capacity for all global traffic. The scale-to-zero behavior of Cloud Run, together with its availability in all GCP regions, makes it possible to have a globally distributed application without needing a fixed capacity allocation in any region.

4. Integrated DevOps experience, with built-in best practices

A large part of increasing simplicity and productivity for developers is about reducing the barriers to entry so they can just focus on their code. This simplicity needs to extend beyond "day one" operations, and provide an integrated DevOps experience. Cloud Run supports an end-to-end DevOps experience, all the way from source code to "day-two" operations tooling:

- Start with a container or use buildpacks to create container images directly from source code. In fact, you don't even need to learn Docker or containers. With a single "gcloud run deploy" command, you can build and deploy your code to Cloud Run (a sketch appears at the end of this post).
- Built-in tutorials in Cloud Shell Editor and Cloud Code make it easy to come up to speed on serverless. No more switching between tabs, docs, your terminal, and your code. You can even author your own tutorials, allowing your organization to share best practices and onboard new hires faster.
- Experiment and test ideas quickly. In just a few clicks, you can perform gradual rollouts and rollbacks, and perform advanced traffic management in Cloud Run.
- Get access to distributed tracing with no setup or configuration, allowing you to find performance bottlenecks in production in minutes.

5. Portability

The code you write and the applications you run should not be tied to a single vendor. The benefits of the vendor's platform should be applied to your application, without you needing to alter your application in unnecessary ways that lock you in to a particular vendor. Cloud Run runs standard Docker container images. When deploying source code directly to Cloud Run, we use open source buildpacks to turn your source code into a container. Your source code, your buildpack used, and your container can always be run locally, on-prem, or on any other cloud.

Look no further

These five trends are important things to consider as you compare the various serverless solutions in the market in the coming year. The best serverless solution will allow you to run a broad spectrum of apps, without language, networking or regional restrictions. It will also offer secure multi-tenancy, with an integrated secure software supply chain. And you'll want to consider how the platform helps you keep costs in check, whether it provides an integrated DevOps experience, and ensures portability. Once you've answered these questions for yourself, we encourage you to try out Cloud Run with these Quickstart guides.
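As a concrete illustration of the source-based deploy flow referenced in the DevOps section above, here is a minimal sketch. The service name, region, and flag values are placeholders; adjust them to your project and confirm the options against the gcloud run deploy reference for your gcloud version.

```
# Hypothetical example: build from local source with buildpacks and deploy to Cloud Run.
# Service name, region, and flag values are placeholders.
gcloud run deploy my-service \
  --source . \
  --region us-central1 \
  --allow-unauthenticated \
  --concurrency 1000   # per-instance concurrency, up to 1,000 requests
```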
Source: Google Cloud Platform

Learn how Notified accelerated discovery and classification of journalists at scale with Google Cloud AI

Notified is a leading communications cloud for events, public relations, and investor relations to drive meaningful insights and outcomes. They provide communications solutions to effectively reach and engage customers, investors, employees, and the media. One of Notified's Public Relations solutions is the 'Media Contact Database', which allows customers to discover media and influencers in a unique media database powered by AI and human-curated research. The goal of the initiative is to expand the scope of the AI-driven, dynamically discovered influencers, and to analyze online news articles using AI/ML technologies to extract entities and classify content. The prior process to extract insights from news articles provided only 30-40% of the desired results, and there were accuracy and stability issues that resulted in a lot of manual intervention.

Journalist Beat

A key outcome of the AI-driven process is to identify the 'Journalist Beat'. A Journalist Beat essentially summarizes the individual's area of focus, such as a sports writer, financial journalist, etc. Three options were evaluated for the AI/ML process to generate the Journalist Beats:

- Option 1: Topic ML. An unsupervised ML approach to determine the commonly used terms.
  Pro: A common approach to grouping documents and determining similar text.
  Con: An unbounded list of text.
- Option 2: ML Classification. Build classification models (supervised) to map reference articles to 'Beats'.
  Pro: Aligns to 'Research Analytics' existing processes.
  Con: Time to build and maintain ML models for hundreds of beats.
- Option 3: GCP Context Classification. Leverage GCP's Natural Language API for initial classification and as input to Notified's single model.
  Pro: Aligns to 'Research Analytics' without building ML models.

Ultimately the GCP Natural Language API solution was chosen because of the speed of execution and a high level of accuracy with the pretrained models. The Notified team was able to launch the product feature within a few weeks, without ever needing to do extensive data collection and train the models.

Here is the high-level process that was implemented for Journalist Beats. Since Notified supports curated media contacts globally, news articles were instantly translated to English using the GCP Translation API. The GCP Natural Language API's text classification was then used to analyze the translated text and generate the list of content categories.

Solution Architecture

Here is a sample solution architecture for the 'Discovered Journalist' process. Three core principles guided the architecture: serverless and fully managed; scalability and elasticity, for flexibility and to optimize costs; and API-led real-time processing. In addition to the GCP Natural Language API and Translation API, below are a few serverless GCP products that were part of the automated solution:

- BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time.
- Cloud Run is a fully managed serverless platform that can be used to develop and deploy highly scalable containerized applications.
- Cloud Tasks is a fully managed service that allows you to manage the execution, dispatch, and delivery of a large number of distributed tasks.

The powerful pre-trained models of the Natural Language API provide a comprehensive set of features to apply natural language understanding to applications such as sentiment analysis, entity analysis, entity sentiment analysis, content classification, and syntax analysis.
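To illustrate the translate-then-classify step described above, here is a minimal Python sketch using the Google Cloud client libraries. It is not Notified's production code; the article text is a placeholder, and it assumes Application Default Credentials plus the google-cloud-translate and google-cloud-language packages are installed.

```python
# Minimal sketch: translate an article to English, then classify it with the Natural Language API.
# Assumes Application Default Credentials and the google-cloud-translate / google-cloud-language packages.
from google.cloud import language_v1
from google.cloud import translate_v2 as translate

article_text = "..."  # placeholder for the scraped news article text

# Step 1: translate the article to English.
translate_client = translate.Client()
translation = translate_client.translate(article_text, target_language="en")
english_text = translation["translatedText"]

# Step 2: classify the translated text into content categories.
language_client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content=english_text, type_=language_v1.Document.Type.PLAIN_TEXT
)
response = language_client.classify_text(request={"document": document})

# The returned categories (e.g. "/Sports") could feed downstream Journalist Beat logic.
for category in response.categories:
    print(category.name, category.confidence)
```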
Notified looks ahead to super-scaling

In an effort to further improve its best-in-class 'Media Contact Database', Notified looks to super-scale the above AI-driven Influencer Discovery process to the order of 100+ million news articles per month. It plans to expand the scope of entities extracted from the news articles and provide a news exploration service for its customers by performing intelligent entity-based searches.

"To watch your markets evolve, see how competitors add AI insights. To actually stay in the market, make AI the main driver of your product road maps. GCP Natural Language API accelerated our ability to adopt AI at scale." – Thomas Squeo, CTO, Notified

Acknowledgments

We'd like to thank our collaborators at Google and Notified for making this blog post possible. Thanks to Arpit Agrawal at MediaAgility for contributing to this blog post. To learn more about how Google Cloud Natural Language AI can help your enterprise, try out an interactive demo, and to take the next step, visit the product overview page here.
Source: Google Cloud Platform