Google Cloud unveils world’s largest publicly available ML hub with Cloud TPU v4, 90% carbon-free energy

At Google, the state-of-the-art capabilities you see in our products such as Search and YouTube are made possible by Tensor Processing Units (TPUs), our custom machine learning (ML) accelerators. We offer these accelerators to Google Cloud customers as Cloud TPUs. Customer demand for ML capacity, performance, and scale continues to increase at an unprecedented rate. To support the next generation of fundamental advances in artificial intelligence (AI), today we announced Google Cloud’s machine learning cluster with Cloud TPU v4 Pods in Preview — one of the fastest, most efficient, and most sustainable ML infrastructure hubs in the world.

Powered by Cloud TPU v4 Pods, Google Cloud’s ML cluster enables researchers and developers to make breakthroughs at the forefront of AI, allowing them to train increasingly sophisticated models to power workloads such as large-scale natural language processing (NLP), recommendation systems, and computer vision algorithms. At 9 exaflops of peak aggregate performance, we believe our cluster of Cloud TPU v4 Pods is the world’s largest publicly available ML hub in terms of cumulative computing power, while operating at 90% carbon-free energy.

“Based on our recent survey of 2,000 IT decision makers, we found that inadequate infrastructure capabilities are often the underlying cause of AI projects failing. To address the growing importance of purpose-built AI infrastructure for enterprises, Google launched its new machine learning cluster in Oklahoma with nine exaflops of aggregated compute. We believe that this is the largest publicly available ML hub, with 90% of the operation reported to be powered by carbon-free energy.
This demonstrates Google’s ongoing commitment to innovating in AI infrastructure with sustainability in mind.” —Matt Eastwood, Senior Vice President, Research, IDC

Pushing the boundaries of what’s possible

Building on the announcement of Cloud TPU v4 at Google I/O 2021, we granted early access to Cloud TPU v4 Pods to several top AI research teams, including Cohere, LG AI Research, Meta AI, and Salesforce Research. Researchers liked the performance and scalability that TPU v4 provides with its fast interconnect and optimized software stack, the ability to set up their own interactive development environments with our new TPU VM architecture, and the flexibility to use their preferred frameworks, including JAX, PyTorch, and TensorFlow. These characteristics allow researchers to push the boundaries of AI, training large-scale, state-of-the-art ML models with high price-performance and carbon efficiency.

In addition, TPU v4 has enabled breakthroughs at Google Research in the areas of language understanding, computer vision, speech recognition, and much more, including the recently announced Pathways Language Model (PaLM), trained across two TPU v4 Pods.

“In order to make advanced AI hardware more accessible, a few years ago we launched the TPU Research Cloud (TRC) program, which has provided TPU access at no charge to thousands of ML enthusiasts around the world. They have published hundreds of papers and open-source GitHub libraries on topics ranging from ‘Writing Persian poetry with AI’ to ‘Discriminating between sleep and exercise-induced fatigue using computer vision and behavioral genetics’.
The Cloud TPU v4 launch is a major milestone for both Google Research and our TRC program, and we are very excited about our long-term collaboration with ML developers around the world to use AI for good.” —Jeff Dean, SVP, Google Research and AI

Sustainable ML breakthroughs

The fact that this research is powered predominantly by carbon-free energy makes the Google Cloud ML cluster all the more remarkable. As part of Google’s commitment to sustainability, we have been matching 100% of our data centers’ and cloud regions’ annual energy consumption with renewable energy purchases since 2017. By 2030, our goal is to run our entire business on carbon-free energy (CFE) every hour of every day. Google’s Oklahoma data center, where the ML cluster is located, is well on its way to achieving this goal, operating at 90% carbon-free energy on an hourly basis within the same grid. In addition to the direct clean energy supply, the data center has a Power Usage Effectiveness (PUE)1 rating of 1.10, making it one of the most energy-efficient data centers in the world. Finally, the TPU v4 chip itself is highly energy efficient, with about 3x the peak FLOPs per watt of TPU v3. With energy-efficient ML-specific hardware, in a highly efficient data center, supplied by exceptionally clean power, Cloud TPU v4 embodies three key best practices that can help significantly reduce energy use and carbon emissions.

Breathtaking scale and price-performance

In addition to sustainability, in our work with leading ML teams we have observed two other pain points: scale and price-performance. Our ML cluster in Oklahoma offers the capacity that researchers need to train their models, at compelling price-performance, on the cleanest cloud in the industry. Cloud TPU v4 is central to solving these challenges.
Scale: Each Cloud TPU v4 Pod consists of 4,096 chips connected via an ultra-fast interconnect network with the equivalent of an industry-leading 6 terabits per second (Tbps) of bandwidth per host, enabling rapid training of the largest models.

Price-performance: Each Cloud TPU v4 chip has ~2.2x more peak FLOPs than Cloud TPU v3, for ~1.4x more peak FLOPs per dollar. Cloud TPU v4 also achieves exceptionally high utilization of these FLOPs when training ML models at scales of up to thousands of chips. While many quote peak FLOPs as the basis for comparing systems, it is actually sustained FLOPs at scale that determines model training efficiency, and Cloud TPU v4’s high FLOPs utilization (significantly better than other systems, thanks to high network bandwidth and compiler optimizations) helps yield shorter training times and better cost efficiency.

Table 1: Cloud TPU v4 Pods deliver state-of-the-art performance through significant advancements in FLOPs, interconnect, and energy efficiency.

Cloud TPU v4 Pod slices are available in configurations ranging from four chips (one TPU VM) to thousands of chips. While slices of previous-generation TPUs smaller than a full Pod lacked torus links (“wraparound connections”), all Cloud TPU v4 Pod slices of at least 64 chips have torus links on all three dimensions, providing higher bandwidth for collective communication operations. Cloud TPU v4 also enables access to a full 32 GiB of memory from a single device, up from 16 GiB in TPU v3, and offers two times faster embedding acceleration, helping to improve performance when training large-scale recommendation models.

Pricing

Access to Cloud TPU v4 Pods comes in evaluation (on-demand), preemptible, and committed use discount (CUD) options.
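As a back-of-the-envelope sketch (not official tooling), the headline figures above can be encoded directly: the 64-chip torus threshold and the ~2.2x/~1.4x ratios come from this post, and the implied relative chip price is simple arithmetic.

```python
# Back-of-the-envelope sketch using the approximate figures quoted in
# this post; these are marketing numbers, not official hardware specs.

PEAK_FLOPS_V4_OVER_V3 = 2.2        # ~2.2x more peak FLOPs per chip
FLOPS_PER_DOLLAR_V4_OVER_V3 = 1.4  # ~1.4x more peak FLOPs per dollar

# Implied relative chip price: (FLOPs ratio) / (FLOPs-per-dollar ratio)
implied_price_ratio = PEAK_FLOPS_V4_OVER_V3 / FLOPS_PER_DOLLAR_V4_OVER_V3

def has_full_torus(x, y, z):
    """Per the post, v4 slices of at least 64 chips have torus
    (wraparound) links on all three dimensions."""
    return x * y * z >= 64

assert abs(implied_price_ratio - 1.571) < 0.001  # a v4 chip prices at ~1.57x a v3 chip
assert has_full_torus(4, 4, 4)       # smallest all-torus slice: 64 chips
assert not has_full_torus(2, 2, 1)   # one TPU VM (4 chips): no wraparound
```

The point of the sustained-FLOPs argument in the text is that the 2.2x peak number only translates into real training speedups when utilization stays high at large slice sizes.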
Please refer to this page for more details.

Get started today

We are excited to offer the state-of-the-art ML infrastructure that powers Google services to all of our users, and we look forward to seeing how the community leverages Cloud TPU v4’s combination of industry-leading scale, performance, sustainability, and cost efficiency to deliver the next wave of ML-powered breakthroughs. Ready to start using Cloud TPU v4 Pods for your AI workloads? Reach out to your Google Cloud account manager or fill in this form. Interested in access to Cloud TPU for open-source ML research? Check out our TPU Research Cloud program.

Acknowledgements

The authors would like to thank the Cloud TPU engineering and product teams for making this launch possible. We also want to thank James Bradbury, Software Engineer; Vaibhav Singh, Outbound Product Manager; and Aarush Selvan, Product Manager, for their contributions to this blog post.

1. We report a comprehensive trailing-twelve-month (TTM) PUE in all seasons, including all sources of overhead.
Source: Google Cloud Platform

Google Cloud at I/O: Everything you need to know

We love this time of year. This week is Google I/O, our largest developer conference, where developer communities from around the world come together to learn, catch up, and have fun. Google Cloud and Google Workspace had a big presence at the show, talking about our commitment to building intuitive and helpful developer experiences that help you innovate freely and quickly. We do the heavy lifting, embedding the expertise from years of Google research in areas like AI/ML and security, so you can easily build secure and intelligent solutions for your customers.

So, what’s happening at I/O this year? Let’s start with the keynotes…

Google I/O keynote

Google and Alphabet CEO Sundar Pichai kicked off Day 1 of I/O with a powerhouse keynote highlighting recent breakthroughs in machine learning, including one of the fastest, most efficient, and most sustainable ML infrastructure hubs in the world. Google Cloud’s machine learning cluster with Cloud TPU v4 Pods (in Preview) allows researchers and developers to make AI breakthroughs by training larger and more complex models faster, to power workloads like large-scale natural language processing (NLP), recommendation systems, and computer vision. With eight TPU v4 Pods in a single data center generating 9 exaflops of peak performance, we believe this system is the world’s largest publicly available ML hub in terms of cumulative computing power, while operating at 90% carbon-free energy.
Read more about the ML hub with Cloud TPU v4 here.

“Early access to TPU v4 has enabled us to achieve breakthroughs in conversational AI programming with our CodeGen, a 16-billion-parameter auto-regressive language model that turns simple English prompts into executable code.” —Erik Nijkamp, Research Scientist, Salesforce

“…we saw a 70% improvement in training time for our ‘extremely large’ model when moving from TPU v3 to TPU v4… The exceptionally low carbon footprint of Cloud TPU v4 Pods was another key factor…” —Aidan Gomez, CEO and Co-Founder, Cohere

In the keynote, Sundar also announced new user-focused, AI-enabled features in Google Workspace that are designed to help people thrive in the hybrid workplace. New advancements in NLP enable summaries in Spaces, helping users catch up on missed conversations with a helpful digest. Automated meeting transcription for Google Meet allows users who didn’t attend a meeting to stay in the loop, and lets attendees easily reference the discussion at a later time. Users can also now leverage portrait restore, which automatically improves video image quality — even on devices with lower-quality webcams. And they can filter out the reverberation in large spaces with hard surfaces, giving users “conference-room-quality” audio whether they are in their basement, kitchen, or garage. These new features deliver high-quality experiences, allowing Google Workspace users to benefit from our AI leadership.

Developer keynote

Next up, we heard from Jeanine Banks, Google Vice President of Developer Experiences and DevRel, and a number of product teams, who led us through a flurry of exciting new updates about everything from Android to Flutter to Cloud. On the Google Cloud front, we announced the preview of Cloud Run jobs, which can reduce the time developers spend performing administrative tasks such as database migration, managing scheduled jobs like nightly reports, or doing batch data transformation.
With Cloud Run jobs, you can execute your code on the highly scalable, fully managed Cloud Run platform, but pay only when your jobs are executing — and without having to worry about managing infrastructure. Learn more about Cloud Run jobs here.

Then, we announced the preview of AlloyDB for PostgreSQL, a new fully managed relational database service that gives enterprises the performance, availability, and ease of management they need to migrate from their expensive legacy database systems onto Google Cloud. AlloyDB combines the proven, disaggregated storage and compute that power our most popular, globally available products such as Google Maps, YouTube, Search, and Ads — with PostgreSQL, an open source database engine beloved by developers. Our performance tests show that AlloyDB is four times faster for transaction processing and up to 100 times faster for analytical queries than standard PostgreSQL. It’s also two times faster than AWS’s comparable PostgreSQL-compatible service for transactional workloads. AlloyDB’s fully managed database operations and ML-based management systems can relieve administrators and developers of daunting database management tasks. Of course, AlloyDB is fully PostgreSQL-compatible, meaning that developers can reuse their existing development skills and tools. It also offers an impressive 99.99% SLA inclusive of maintenance, with no complex licensing or I/O charges. You can learn more about AlloyDB for PostgreSQL here.

“Developers have many choices for building, innovating, and migrating their applications. AlloyDB provides us with a compelling relational database option with full PostgreSQL compatibility, great performance, availability, and cloud integration. We are really excited to co-innovate with Google and can now benefit from enterprise-grade features while cost-effectively modernizing from legacy, proprietary databases.” —Bala Natrajan, Sr.
Director, Data Infrastructure and Cloud Engineering at PayPal

Cloud keynote – “The cloud built for developers”

Moving on to the Cloud keynote, Google Cloud’s very own Aparna Sinha, Director of Product Management, and Google Workspace’s Matthew Izatt, Product Lead, gave the I/O audience exciting cloud updates. Aparna reiterated the benefits of Cloud Run jobs and AlloyDB while showcasing how our services integrate nicely to give you a full stack specifically tailored for backend, web, mobile, and data analytics applications. These stacks also natively embed key security and AI/ML features for simplicity. Specifically, with build integrity, a new feature in Cloud Build, you get out-of-the-box build provenance and “Built by Cloud Build” attestations, including details like the images generated, the input sources, the build arguments, and the build time, helping you achieve up to SLSA Level 2 assurance. Next, you can use Binary Authorization to help ensure that only verified builds with the right attestations are deployed to production. You can get the same results as the experts — without having to be a security expert yourself.

Aparna also announced the preview of Network Analyzer, showing how developers can quickly and easily troubleshoot and isolate the root causes of complex service disruptions. The new Network Analyzer module in Network Intelligence Center can proactively detect network failures to prevent downtime caused by accidental misconfiguration, over-utilization, and suboptimal routes. Network Analyzer is available for services like Compute Engine, Google Kubernetes Engine (GKE), Cloud SQL, and more. You can visit the Network Analyzer page to learn more.

Something that really got the developer audience excited was the announcement of the preview of Immersive Stream for XR, which lets you render eXtended Reality experiences using powerful Google Cloud GPUs and stream these experiences to mobile devices around the world.
Immersive Stream for XR streamlines the process of creating, maintaining, and scaling high-quality XR. In fact, XR content delivered using Immersive Stream for XR works on nearly every mobile device, regardless of model, year, or operating system. Your users can enjoy these immersive experiences simply by clicking a link or scanning a QR code.

“We know that our new and existing customers expect unique and innovative campaigns for two of the most unique and innovative vehicles in our brand’s history, and Google Cloud helped us create something very special to share with them.” —Albi Pagenstert, Head of Brand Communications and Strategy, BMW of North America

To learn more, visit xr.withgoogle.com, and check out this video to see for yourself!

And finally, Matthew brought it all home, highlighting the incredible innovation coming from Google Workspace. He detailed how we are making it easier for developers to extend and customize the suite and simplify integration with existing tools. For example, Google Workspace Add-ons let you build applications using your preferred stack and languages; you build once, and your application is available across Google Workspace apps such as Gmail, Google Calendar, Drive, and Docs. Matthew also shared how we are improving the development experience by allowing you to easily connect DevOps tools like PagerDuty to the Google Workspace platform. Finally, he noted the critical role that Google Workspace Marketplace can play in increasing the growth and engagement of your application. If you’re interested in learning how we’re using machine learning to make people’s workday more productive and impactful, here’s where you can find all of this week’s Workspace news.

Sessions and workshops

Whew… that was a lot of cloud updates in three keynotes! But wait… there’s more! Google Cloud also had 14 cloud breakout sessions and 5 workshops at I/O, covering loads of different topics.
Here’s the full list for you, all available on demand:

Sessions

An introduction to MLOps with TFX
Asynchronous operations in your UI using Workflows and Firestore
Auto alerts for Firebase users with Functions, Logging, and BigQuery
Conversational AI for business messaging
Develop for Google Cloud Platform faster with Cloud Code
Extending Google Workspace with AppSheet’s no-code platform and Apps Script
Fraudfinder: A comprehensive solution for real data science problems
From Colab to Cloud in five steps
Learn how to enable shared experiences across platforms
Learn to refactor Cloud applications in Go 1.18 with Generics
Modern Angular deployment with Google Cloud
Run your jobs on serverless
The future of app development with cloud databases
What’s new in the world of Google Chat apps

Workshops

Apply responsible AI principles when building remote sensing datasets
Build an event-driven orchestration with Eventarc and Workflows
Building AppSheet apps with the new Apps Script connector
Faster model training and experimentation with Vertex AI
Spring Native on GCP – what, when, and why?

And finally, what would I/O be without some massively fun interactive experiences? Take our cloud island at I/O Adventure, featuring custom interactive demos and sandboxes. Here, attendees can explore content, chat with Googlers, and earn some really cool swag.

So that’s a wrap on Google Cloud announcements at I/O. We’ll have lots more exciting announcements in the next few months that will make your developer experience even simpler and more intuitive. In the meantime, join our developer community, Google Cloud Innovators, where you’ll make lots of awesome new friends. And be sure to register for Google Cloud Next ’22 in October. We can’t wait to see you again!
Source: Google Cloud Platform

How Google Cloud and SAP solve big problems for big companies

With SAP Sapphire kicking off today in Orlando, we’re looking forward to seeing our customers and discussing how they can make core processes more efficient and improve how they serve their customers.

One thing is certain to be top of mind – the global supply chain challenges facing the world today. They are affecting every business across every industry, from common household items that once filled store shelves and are now on backorder, to essential goods and services like food and medical treatments, which are at risk. Even cloud-native companies are making changes to ensure they have the insights, equipment, and other assets they need to continue serving customers. We are proud to work with SAP on many initiatives that are driving results for our customers and helping them run more intelligent and sustainable companies. I’d like to highlight three of these important initiatives and how they are helping address global supply chain challenges.

Enabling more efficient migrations of critical workloads

We know a key barrier to entry in the cloud is the ability to easily migrate from on-premises environments. Our cloud provides a safe path that helps companies including Johnson Controls, PayPal, and Kaeser Compressor digitize and solve large, complex business problems, reduce costs, scale without cycles of investment, and gain access to key services and capabilities that can unlock value and enable growth. Singapore-based shipping company Ocean Network Express (ONE) has become more agile by running their mission-critical SAP workloads on Google Cloud and using our data analytics to improve operational efficiency and make faster decisions.
They have gone from an on-premises data warehouse solution that would take a full day to load data from SAP S/4HANA to using our BigQuery solution, which delivers business insights in minutes.

Since The Home Depot moved its critical SAP workloads to Google Cloud, the company has been able to shorten the time it takes to prepare a supply chain use case from 8 hours to 5 minutes by using BigQuery to analyze large volumes of internal and external data. This helps improve forecast accuracy and replenish inventory more effectively, by making it possible to create a new plan when circumstances change unexpectedly with demand or a supplier.

Accelerating cloud benefits through RISE and LiveMigration

At Google Cloud, we have dedicated programs to help migrate SAP and other mission-critical workloads to our cloud with our Cloud Acceleration Program for SAP.

For SAP customers moving to Google Cloud, we provide LiveMigration for superior uptime and business continuity. LiveMigration eliminates the downtime required for planned infrastructure maintenance. This means that your SAP system continues running even while Google Cloud is performing planned infrastructure maintenance upgrades, ensuring superior business continuity for your mission-critical workloads. We are also proud to be a strategic partner in the RISE with SAP program, which helps accelerate cloud migration for SAP’s global customer base while minimizing risks along the migration journey. This program provides solutions and expertise from SAP and technology ecosystem partners to help companies transform through process consulting, workload migration services, cloud infrastructure, and ongoing training and support.
To secure your mission-critical workloads, SAP and Google Cloud can provide a 99.9% uptime SLA as part of the RISE with SAP program.

Many large manufacturers have taken advantage of RISE with SAP to forge a secure, proven path to our cloud, including Energizer Holdings Inc., a leading manufacturer and distributor of primary batteries, portable lights, and auto care products. Energizer has turned to RISE with SAP on Google Cloud to power its move to SAP S/4HANA. The company wants to automate essential business processes, improve customer service, and boost innovation. It had been using a private cloud solution but needed to gain flexibility while better containing costs.

“SAP S/4HANA for central finance will help us automate essential business processes, improve customer service, and fuel innovation that grows our company’s leadership position globally. We selected RISE with SAP to begin our journey to SAP S/4HANA and maintain the freedom and flexibility to move at our own pace,” said Energizer Chief Information Officer Dan McCarthy.

Another example is global automotive distributor Inchcape, which moved its mission-critical sales, marketing, finance, and operations systems and data to Google Cloud. With its diverse data sets now in a single, secure cloud platform, Inchcape is applying Google Cloud AI and ML capabilities to manage and analyze its data, automate operations, and ultimately transform the car ownership experience for millions. “Google Cloud’s close relationship with SAP and its strong technical expertise in this space were a big pull for us,” said Mark Dearnley, Chief Digital Officer at Inchcape.
“Ultimately, we wanted a headache-free RISE with SAP implementation and to unlock value for auto makers and consumers in all our regions, while continuing to have the choice and flexibility to modernize our 150-year-old business in a way that works for us.”

A new intelligence layer for all SAP Google Cloud customers

When moving mission-critical workloads to the cloud, companies not only need to migrate safely, they also need to quickly realize value, which we enable with Google Cloud Cortex Framework — a layer of intelligence that integrates with SAP Business Technology Platform (SAP BTP). Google Cloud Cortex Framework provides reference architectures, deployment accelerators, and integration services for analytics scenarios. Like many large e-commerce companies, Mercado Libre experienced skyrocketing transactions that more than doubled in 2020 as people sheltered at home during the pandemic, and it is anticipating more growth. The Google Cloud Cortex Framework is enabling Mercado Libre to respond, run more efficiently, and make faster, data-driven decisions.

Continued partnership to support organizations around the world

Our longstanding partnership with SAP continues to yield exciting innovations for our customers, and we’re honored to work with them to help customers address the ongoing impact of global supply chain challenges. We’re looking forward to sharing new insights and innovations at SAP Sapphire this week, and to listening and learning from you about your plans and challenges, and how we can best support your transformation to the cloud.
Source: Google Cloud Platform

3co reinvents the digital shopping experience with augmented reality on Google Cloud

Giving people as close to a “try-before-you-buy” experience as possible is essential for retailers. With the move to online shopping further accelerated by the COVID-19 pandemic, many people are now comfortable shopping online for items they previously only considered buying in stores. The problem for shoppers is that it can still be difficult to get what feels like a hands-on experience of an item, given the limitations of even some of today’s most advanced augmented reality (AR) technologies. And while retailers continue to invest heavily in creating the most life-like digital experiences possible, the results often come up short for shoppers, who have more digital buying options than ever.

To make AR experiences more convincing for shoppers—and for anyone wanting richer, more immersive experiences in entertainment and other industries—the depiction of real-world physical objects in digital spaces needs to continue to improve and evolve. As avid plant lovers, we knew the experience of viewing and buying plants online was severely lacking. That prompted our initial exploration into rethinking what’s possible with AR: we built a direct-to-consumer app for buying plants in AR. However, during our time in the Techstars program, we quickly realized that improving how people see and experience plants online was just a fraction of a much bigger, multi-billion-dollar opportunity for us. Since 2018, 3co has been laser-focused (quite literally) on scaling 3D tech for all of e-commerce.

[Image: An automated 3D scanning system for photorealistic 3D modeling of retail products, designed by 3co and powered by Google Cloud.]

Closing the gap between imagination and reality with Google Cloud

With that in mind, 3co began developing breakthroughs needed in 3D computer vision.
Our advanced artificial intelligence (AI) stack is designed to give companies an all-in-one 3D commerce platform to easily and cost-effectively create realistic 3D models of physical objects and stage them in virtual showrooms.

When building our AR platform, we quickly understood that engineering 3D simulations with sub-perceptual precision requires an enormous amount of compute power. Fortunately, the problems are parallelizable. But it simply isn’t possible to 3D model the complex real world with superhuman precision on conventional laptops or desktops.

As a part of the Google for Startups Cloud Program, Startup Success Managers helped 3co plug into the full power of Google’s industry-leading compute capabilities. For several projects, we selected scalable Compute Engine instances powerful enough to solve even the most complex 3D graphics optimizations at scale. Today, with the A2 virtual machine, 3co leverages NVIDIA Ampere A100 Tensor Core GPUs to create more life-like 3D renderings over ten times faster. And this is just the beginning.

We’re also proud to have deployed a customized streaming GUI on top of Google’s monstrous machines, which allowed our colleagues across the world (including in Amsterdam and Miami) to plug-and-play with the latest 3D models on a world-class industrial GPU. I would highly recommend that companies solving super-hard AI and/or 3D challenges in a distributed team consider adopting cloud resources in the same way. It was a delight to see Blender render gigabyte-sized 3D models faster than I had ever seen before.

[Image: GUI for 3D modeling, streamed from Google Cloud computers by 3co, which unlocked previously impossible collaborative workflows on gigabyte-sized 3D models.]

Equally critical, with our technology, 3D artists in retail, media and entertainment, and other industries pressured to deliver more—and more immersive—AR experiences can cut the cost and time to generate photorealistic 3D models by as much as tenfold.
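A toy sketch of why the parallelizability mentioned above matters (this is illustrative only, not 3co's pipeline; the function names are made up): per-view error terms in a 3D reconstruction objective are independent of one another, so they can be fanned out across workers on one machine, or across many cloud machines.

```python
# Toy illustration (not 3co's code): per-view terms of a 3D reconstruction
# objective are independent, so they can be evaluated in parallel across
# workers -- and, at larger scale, across machines.
from concurrent.futures import ThreadPoolExecutor

def per_view_error(view_id: int) -> float:
    # Stand-in for an expensive, independent render-and-compare step.
    return float(view_id * view_id)

def total_error(view_ids) -> float:
    # Fan the independent per-view computations out across workers,
    # then reduce their results into a single objective value.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return sum(pool.map(per_view_error, view_ids))

print(total_error(range(4)))  # 0 + 1 + 4 + 9 = 14.0
```

On a cloud GPU fleet the same fan-out/reduce shape applies; only the per-view step becomes a real rendering or optimization kernel.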
We know this from our own work, because we’ve seen the computing costs to generate the highest-quality 3D experiences drop significantly—even though we run advanced Compute Engine instances loaded with powerful GPUs, high-end CPUs, and massive amounts of RAM. If the goal is to scale industry-leading compute power quickly for a global customer base, Google Cloud is the proper solution.

Cloud Storage is another key but often overlooked component of the Google Cloud ecosystem, and it is critical for 3co. We need the high throughput, low latency, and instant scalability delivered by local cloud SSDs to support the massive amounts of data we generate, store, and stream. The local SSDs complement our A2 Compute Engine instances and are physically attached to the servers hosting the virtual machine instances. This local configuration supports extremely high input/output operations per second (IOPS) with very low latency compared to persistent disks.

To top it off, Cloud Logging delivers real-time log management at exabyte scale—ingesting analytic events that are streamed to data lakes with Pub/Sub—so we can know, while enjoying the beach here in Miami, Florida, that everything is going smoothly in the cloud.

Building the 3co AI stack with TensorFlow

Building one of the world’s most advanced 3D computer vision solutions would not have been possible without TensorFlow and its comprehensive ecosystem of tools, libraries, and community resources. Since the launch of TensorFlow in 2015, I’ve personally built dozens of deep learning systems using this battle-hardened, open source Google technology for AI. Through TensorFlow on Google Cloud, 3co is able to scale its compute power for the creation of truly photorealistic digital models of physical objects—down to microscopic computation of material textures, and deep representations of surface light transport from all angles.
Most recently, 3co has been making massive progress on top of the TensorFlow implementation of Neural Radiance Fields (“NeRF”, Mildenhall et al. 2020). We are humbled to note that this breakthrough AI in TensorFlow truly is disruptive for the 3D modeling industry: we anticipate the next decade in 3D modeling will be increasingly shaped and colored by similar neural networks. (I believe the key insight of the original NeRF authors is to force a neural network to learn a physics-based model of light transport.) For our contribution, 3co is now (1) adapting NeRF-like neural networks to optimally leverage sensor data from various leading devices for 3D computer vision, and (2) forcing these neural networks to learn industry-standard 3D modeling data structures, which can instantly plug-and-play on the leading 3D platforms.

As Isaac Newton said, “If I have seen further, it is by standing on the shoulders of giants.” That is, tech giants. In several ways, TensorFlow is the go-to solution both for prototyping and for large-scale deployment of AI in general. Under the hood, TensorFlow uses a sophisticated compiler (XLA) for optimizing how computations are allocated on the underlying hardware. 3co achieved a 10x speed-up in neural network training time (for inverse rendering optimization) by compiling its computations with TensorFlow XLA.

TensorFlow can also compile models to run on TPUs and, through TFLite and TensorFlow.js, across device architectures (e.g. iOS, Android, JavaScript). This ability is important because 3co is committed to delivering 3D computer vision wherever it is needed, with maximum speed and accuracy.
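As a minimal sketch of the XLA compilation just mentioned (illustrative only, not 3co's inverse-rendering code), TensorFlow lets you opt a function into XLA with the `jit_compile=True` argument to `tf.function`, which asks the compiler to fuse and optimize the enclosed ops:

```python
import tensorflow as tf

# Minimal, illustrative example of XLA compilation in TensorFlow:
# jit_compile=True requests that XLA fuse and optimize the ops in
# this function into a single compiled kernel.
@tf.function(jit_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.ones((2, 3))
w = tf.ones((3, 4))
b = tf.zeros((4,))
y = dense_relu(x, w, b)  # first call triggers tracing + XLA compilation
print(y.shape)  # (2, 4)
```

Whether a given workload sees a speedup anywhere near 10x depends heavily on how fusable its ops are; inverse-rendering inner loops, with many small elementwise ops, are a favorable case.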
Through TensorFlow on Google Cloud, 3co has been able to speed up experimental validation of patent-pending 3D computer vision systems that can run the same TensorFlow code across smartphones, LIDAR scanners, AR glasses, and much more. 3co is developing an operating system for 3D computer vision powered by TensorFlow, in order to unify development of a single AI codebase across the most common sensors and processors. TensorFlow also enables 3co’s neural networks to train faster, through an easy API for distributed training across many computers. Distributed deep learning was the focus of my master’s thesis in 2013 (inspired by work from Jeff Dean, Andrew Ng, and Google Brain), so you can imagine how excited I was to see Google optimize these industry-leading capabilities for the open source community over the following years. Parallelization of deep learning has consistently proven essential for creating this advanced AI, and 3co is no exception to this rule. Faster AI training also means faster conclusion of R&D experiments. As Sam Altman says, “The number one predictor of success for a very young startup: rate of iteration.” From day one, TensorFlow was built to speed up Google’s AI computing challenges at the biggest scale, but it also “just works” at the earliest stages of exploration. Through TensorFlow on Google Cloud, 3co is steadily improving our capabilities for autonomous photorealistic 3D modeling. Simple and flexible architectures for fast experimentation enable us to quickly move from concept to code, and from code to state-of-the-art deployed ML models. Through TensorFlow, Google has given 3co a powerful tool to better serve customers with modern AI and computer vision. In the future, 3co has big plans involving supercomputers of Google Cloud Tensor Processing Units (TPUs), so we plan to achieve even greater speed and cost optimization.
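The distributed training pattern referred to above can be sketched in miniature. This toy NumPy example (our conceptual illustration, not 3co's or TensorFlow's code) shows synchronous data parallelism: each replica computes gradients on its own shard of the batch, the gradients are averaged (an all-reduce), and every replica applies the identical update. TensorFlow's distributed training APIs automate this pattern across many machines.

```python
import numpy as np

# Conceptual sketch of synchronous data-parallel training: shard the
# batch, compute per-replica gradients, average them, apply one update.
def grad_mse(w, x, y):
    """Gradient of mean squared error for a scalar linear model y ≈ w * x."""
    return 2.0 * np.mean((w * x - y) * x)

def data_parallel_step(w, x, y, num_replicas, lr=0.1):
    x_shards = np.array_split(x, num_replicas)
    y_shards = np.array_split(y, num_replicas)
    grads = [grad_mse(w, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    return w - lr * np.mean(grads)   # "all-reduce" the gradients, then update

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x                          # ground-truth weight is 3
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, x, y, num_replicas=2)
# w converges to 3.0
```

Because every replica ends each step with the same weights, adding replicas speeds up the epoch without changing the learning dynamics, which is why this scheme scales so well for large models.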
Running TensorFlow on Cloud TPUs requires a little extra work by the AI developer, but Google is increasingly making it easier to plug and play on these gargantuan computing architectures. They truly are world-class servers for AI. I remember being as excited as a little boy in a candy store, reading research back in 2017 on Google’s TPUs, which represented the culmination of R&D by literally dozens of super smart computer engineers. Since then, several versions of TPUs have been deployed internally at Google for many kinds of applications (e.g. Google Translate), and they have increasingly been made more useful and accessible. Startups like 3co – and our customers – can benefit so much here. Through the use of advanced processors like TPUs, 3co expects to parallelize its AI to perform photorealistic 3D modeling of real scenes in real time. Imagine the possibilities for commerce, gaming, entertainment, design, and architecture that this ability could unlock.

Scaling 3D commerce with Google Cloud and credits

3co’s participation in the Google for Startups Cloud Program (facilitated via Techstars, whom we also can’t thank enough) has been instrumental to our success in closing the gap between imagination and reality. It’s a mission we’ve been working on for years – and will continue to hone for many years to come. And this success is thanks to the Google for Startups Success team: they are truly amazing. They just care about you. If you’re a startup founder, just reach out to them: they really do wonders. We especially want to highlight the Google Cloud research credits, which provided 3co access to vastly greater amounts of compute power. We are so grateful to Google Cloud for enabling 3co to scale its 3D computer vision services to customers worldwide. I love that 3co is empowered by Google to help many people see the world in a new light.
If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

Security through collaboration: Building a more secure future with Confidential Computing

At Google Cloud, we believe that the protection of our customers’ sensitive data is paramount, and encryption is a powerful mechanism to help achieve this goal. For years, we have supported encryption in transit when our customers ingest their data to bring it to the cloud. We’ve also long supported encryption at rest, for all customer content stored in Google Cloud. To complete the full data protection lifecycle, we can protect customer data when it’s processed through our Confidential Computing portfolio. Confidential Computing products from Google Cloud protect data in use by performing computation in a hardware-isolated environment that is encrypted with keys managed by the processor and unavailable to the operator. These isolated environments help prevent unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data in public cloud infrastructure. Secure isolation has always been a critical component of our cloud infrastructure; with Confidential Computing, this isolation is cryptographically reinforced. Google Cloud’s Confidential Computing products leverage security components in AMD EPYC™ processors, including AMD Secure Encrypted Virtualization (SEV) technology.

Building trust in Confidential Computing through industry collaboration

Part of our mission to bring Confidential Computing technology to more cloud workloads and services is to make sure that the hardware and software used to build these technologies are continuously reviewed and tested. We evaluate different attack vectors to help ensure Google Cloud Confidential Computing environments are protected against a broad range of attacks. As part of this evaluation, we recognize that the secure use of our services and the Internet ecosystem as a whole depends on interactions with applications, hardware, software, and services that Google doesn’t own or operate.
The Google Cloud Security team, Google Project Zero, and the AMD firmware and product security teams collaborated for several months to conduct a detailed review of the technology and firmware that powers AMD Confidential Computing technology. This review covered both Secure Encrypted Virtualization (SEV) capable CPUs and the next generation of Secure Nested Paging (SEV-SNP) capable CPUs, which protect confidential VMs against the hypervisor itself. The goal of this review was to work together and analyze the firmware and technologies AMD uses to help build Google Cloud’s Confidential Computing services, to further build trust in these technologies. This in-depth review focused on the implementation of the AMD secure processor in the third-generation AMD EPYC processor family delivering SEV-SNP. SNP further improves the posture of confidential computing using technology that removes the hypervisor from the trust boundary of the guest, allowing customers to treat the cloud service provider as another untrusted party. The review covered several AMD secure processor components and evaluated multiple different attack vectors. The collective group reviewed the design and source code implementation of SEV, wrote custom test code, and ran hardware security tests, attempting to identify any potential vulnerabilities that could affect this environment.

PCIe hardware pentesting using an IO screamer

Working on this review, the security teams identified and confirmed potential issues of varying severity. AMD was diligent in fixing all applicable issues and now offers updated firmware through its OEM channels. Google Cloud’s AMD-based Confidential Computing solutions now include all the mitigations implemented during the security review. “At Google, we believe that investing in security research outside of our own platforms is a critical step in keeping organizations across the broader ecosystem safe,” said Royal Hansen, vice president of Security Engineering at Google.
“At the end of the day, we all benefit from a secure ecosystem that organizations rely on for their technology needs, and that is why we’re incredibly appreciative of our strong collaboration with AMD on these efforts.” “Together, AMD and Google Cloud are continuing to advance Confidential Computing, helping enterprises to move sensitive workloads to the cloud with high levels of privacy and security, without compromising performance,” said Mark Papermaster, AMD’s executive vice president and chief technology officer. “Continuously investing in the security of these technologies through collaboration with the industry is critical to providing customer transformation through Confidential Computing. We’re thankful to have partnered with Google Cloud and the Google Security teams to advance our security technology and help shape future Confidential Computing innovations to come.” Reviewing trusted execution environments for security is difficult given the closed-source firmware and proprietary hardware components. This is why research and collaborations such as this are critical to improving the security of the foundational components that support the broader Internet ecosystem. AMD and Google believe that transparency helps provide further assurance to customers adopting Confidential Computing, and to that end AMD is working toward a model of open source security firmware. With the analysis now complete and the vulnerabilities addressed, the AMD and Google security teams agree that the AMD firmware which enables Confidential Computing solutions meets an elevated security bar for customers, as the firmware design updates mitigate several bug classes and offer a way to recover from vulnerabilities.
More importantly, the review also found that Confidential VMs are protected against a broad range of attacks described in the review.

Google Cloud’s Confidential Computing portfolio

Google Cloud Confidential VMs, Dataproc Confidential Compute, and Confidential GKE Nodes have enabled high levels of security and privacy to address our customers’ data protection needs without compromising usability, performance, and scale. Our mission is to make this technology ubiquitous across the cloud. Confidential VMs run on hosts with AMD EPYC processors, which feature AMD Secure Encrypted Virtualization (SEV). Incorporating SEV into Confidential VMs provides benefits and features including:

Isolation: Memory encryption keys are generated by the AMD Secure Processor during VM creation and reside solely within the AMD Secure Processor. Other VM encryption keys, such as those for disk encryption, can be generated and managed by an external key manager or in Google Cloud HSM. Neither set of keys is accessible by Google Cloud, offering strong isolation.

Attestation: Confidential VMs use Virtual Trusted Platform Module (vTPM) attestation. Every time a Confidential VM boots, a launch attestation report event is generated and posted to the customer’s Cloud Logging, which gives administrators the opportunity to act as necessary.

Performance: Confidential Computing offers high performance for demanding computational tasks. Enabling Confidential VM has little or no impact on most workloads.

The future of Confidential Computing and secure platforms

While there are no absolutes in computer security, collaborative research efforts help uncover security vulnerabilities that can emerge in complex environments and help protect Confidential Computing solutions from threats today and into the future. Ultimately, this helps us increase levels of trust for customers.
We believe Confidential Computing is an industry-wide effort that is critical for securing sensitive workloads in the cloud, and we are grateful to AMD for their continued collaboration on this journey. To read the full security review, visit this page.

Acknowledgments

We thank the many Google security team members who contributed to this ongoing security collaboration and review, including James Forshaw, Jann Horn, and Mark Brand. We are grateful for the open collaboration with AMD engineers, and wish to thank David Kaplan, Richard Relph, and Nathan Nadarajah for their commitment to product security. We would also like to thank AMD leadership: Ab Nacef, Prabhu Jayanna, Hugo Romero, Andrej Zdravkovic, and Mark Papermaster for their support of this joint effort.

Cloud TPU VMs are generally available

Early last year, Cloud TPU VMs on Google Cloud were introduced to make it easier to use TPU hardware by providing direct access to TPU host machines. Today, we are excited to announce the general availability (GA) of TPU VMs. With Cloud TPU VMs you can work interactively on the same hosts where the physical TPU hardware is attached. Our rapidly growing TPU user community has enthusiastically adopted this access mechanism, because it not only makes a better debugging experience possible, but also enables certain training setups, such as distributed reinforcement learning, which were not feasible with the TPU Node (network-accessed) architecture.

What’s new for the GA release?

Cloud TPUs are now optimized for large-scale ranking and recommendation workloads. We are also thrilled to share that Snap, an early adopter of this new capability, achieved a ~4.65x perf/TCO improvement on their business-critical ad ranking workload. Here are a few highlights from Snap’s blog post on Training Large Scale Recommendation Models:

> TPUs can offer much faster training speed and significantly lower training costs for recommendation system models than CPUs;
> TensorFlow for Cloud TPU provides a powerful API to handle large embedding tables and fast lookups;
> On a TPU v3-32 slice, Snap was able to get ~3x better throughput (-67.3% throughput on A100) with 52.1% lower cost compared to an equivalent A100 configuration (~4.65x perf/TCO).

Ranking and recommendation

With the TPU VMs GA release, we are introducing the new TPU Embedding API, which can accelerate ML-based ranking and recommendation workloads. Many businesses today are built around ranking and recommendation use cases, such as audio/video recommendations, product recommendations (apps, e-commerce), and ad ranking. These businesses rely on ranking and recommendation algorithms to serve their users and drive their business goals.
In the last few years, the approaches to these algorithms have evolved from purely statistical to deep neural network-based. These modern DNN-based algorithms offer greater scalability and accuracy, but they can come at a cost: they tend to use large amounts of data and can be difficult and expensive to train and deploy with traditional ML infrastructure. Embedding acceleration with Cloud TPU can solve this problem at a lower cost. Embedding APIs can efficiently handle large amounts of data, such as embedding tables, by automatically sharding across hundreds of Cloud TPU chips in a pod, all connected to one another via the custom-built interconnect. To help you get started, we are releasing the TF2 ranking and recommendation APIs as part of the TensorFlow Recommenders library. We have also open sourced the DLRM and DCN v2 ranking models in the TF2 model garden, and the detailed tutorials are available here.

Framework support

The TPU VM GA release supports the three major frameworks (TensorFlow, PyTorch, and JAX), now offered through three optimized environments for ease of setup with the respective framework. The GA release has been validated with TensorFlow v2-tf-stable, PyTorch/XLA v1.11, and JAX 0.3.6.

TPU VM-specific features

TPU VMs offer several additional capabilities over the TPU Node architecture thanks to the local execution setup, i.e., the TPU hardware is connected to the same host on which users execute their training workloads.

Local execution of the input pipeline

The input data pipeline executes directly on the TPU hosts. This functionality saves precious computing resources that were previously needed in the form of instance groups for PyTorch/JAX distributed training.
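As a rough illustration of the sharding idea behind embedding acceleration (this NumPy sketch is our own, not the TPU Embedding API), a large embedding table can be mod-sharded across chips so each chip holds only a slice of the rows, and lookups are routed to whichever shard owns each row:

```python
import numpy as np

# Conceptual sketch: mod-shard an embedding table across `num_chips`
# accelerators, then gather rows from the shard that owns each id.
def shard_table(table, num_chips):
    """Assign row i of the table to chip i % num_chips."""
    return [table[chip::num_chips] for chip in range(num_chips)]

def sharded_lookup(shards, ids, num_chips):
    """Gather embedding rows for `ids` from their owning shards."""
    out = np.empty((len(ids), shards[0].shape[1]))
    for pos, row_id in enumerate(ids):
        chip = row_id % num_chips      # which chip owns this row
        local = row_id // num_chips    # row index within that shard
        out[pos] = shards[chip][local]
    return out

table = np.arange(12.0).reshape(6, 2)     # 6 rows, embedding dim 2
shards = shard_table(table, num_chips=3)  # 2 rows per chip
vectors = sharded_lookup(shards, [0, 4, 5], num_chips=3)
```

On real hardware, the per-chip memory adds up across hundreds of chips, which is what lets tables far too large for one accelerator's memory be served at high throughput over the pod interconnect.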
In the case of TensorFlow, the distributed training setup requires only one user VM, and the data pipeline executes directly on the TPU hosts. The following study summarizes the cost comparison for Transformer (FairSeq; PyTorch/XLA) training executed for 10 epochs on the TPU VM vs. TPU Node architecture (network-attached Cloud TPUs):

Google internal data (published benchmark conducted on Cloud TPU by Google).

Distributed Reinforcement Learning with TPU VMs

Local execution on the host with the accelerator also enables use cases such as distributed reinforcement learning. Canonical works in this domain, such as SEED RL, IMPALA, and Podracer, have been developed using Cloud TPUs. “…we argue that the compute requirements of large scale reinforcement learning systems are particularly well suited for making use of Cloud TPUs, and specifically TPU Pods: special configurations in a Google data center that feature multiple TPU devices interconnected by low latency communication channels.” —Podracer, DeepMind

Custom ops support for TensorFlow

With direct execution on the TPU VM, users can now build their own custom ops, such as TensorFlow Text. With this feature, users are no longer bound to TensorFlow runtime release versions.

What are our customers saying?

“Over the last couple of years, Kakao Brain has developed numerous groundbreaking AI services and models, including minDALL-E, KoGPT and, most recently, RQ-Transformer. We’ve been using the TPU VM architecture since its early launch on Google Cloud, and have experienced significant performance improvements compared to the original TPU Node setup.
We are very excited about the new features added in the generally available version of TPU VM, such as the Embedding API, and plan to continue using TPUs to solve some of the globe’s biggest ‘unthinkable questions’ with solutions enabled by its lifestyle-transforming AI technologies.” —Kim Il-doo, CEO of Kakao Brain

Additional customer testimonials are available here.

How to get started?

To start using TPU VMs, you can follow one of our quickstarts or tutorials. If you are new to TPUs, you can explore our concept deep-dives and system architecture. We strive to make Cloud TPUs – Google’s advanced AI infrastructure – universally useful and accessible.

The Future of Data: Unified, flexible, and accessible

As the volume of data that people and businesses produce continues to grow exponentially, it goes without saying that data-driven approaches are critical for tech companies and startups across all industries. But our conversations with customers, as well as numerous industry commentaries, reiterate that managing data and extracting value from it remains difficult, especially at scale. Numerous factors underpin the challenges, including access to and storage of data, inconsistent tools, new and evolving data sources and formats, compliance concerns, and security considerations. To help you identify and solve these challenges, we’ve created a new whitepaper, “The future of data will be unified, flexible, and accessible,” which explores many of the most common reasons our customers tell us they’re choosing Google Cloud to get the most out of their data. For example, you might need to combine data in legacy systems with new technologies. Does this mean moving all your data to the cloud? Should it be in one cloud or distributed across several? How do you extract real value from all of this data without creating more silos? You might also be limited to analyzing your data in batch instead of processing it in real time, adding complexity to your architecture and necessitating expensive maintenance to combat latency. Or you might be struggling with unstructured data, with no scalable way to analyze and manage it. Again, the factors are numerous—but many of them stem from inadequate access to data, often exacerbated by silos, and insufficient ability to process and understand it. The modern tech stack should be a streaming stack that scales with your data, provides real-time analytics, incorporates and understands different types of data, and lets you use AI/ML to predictively derive insights and operationalize processes.
These requirements mean that to effectively leverage your data assets:

Data should be unified across your entire company, even across suppliers, partners, and platforms, eliminating organizational and technology silos.

Unstructured data should be unlocked and leveraged in your analytics strategy.

The technology stack should be unified and flexible enough to support use cases ranging from analysis of offline data to real-time streaming and application of ML, without maintaining multiple bespoke tech stacks.

The technology stack should be accessible on demand, with support for different platforms, programming languages, tools, and open standards compatible with your employees’ existing skill sets.

With these requirements met, you’ll be equipped to maximize your data, whether that means discerning and adapting to changing customer expectations or understanding and optimizing how your data engineers and data scientists spend their time. In the coming weeks, we’ll explore aspects of the whitepaper in additional blog posts—but if you’re ready to dive in now, and to steer your tech company or startup towards success by making your data work better for you, click here to download your copy, free of charge.

Now generally available: BigQuery BI Engine supports any BI tool or custom application

Customers who run BI on large data warehouse datasets used to have to choose low latency at the expense of data freshness. With BigQuery BI Engine, they can accelerate the dashboards and reports that connect to BigQuery without sacrificing the freshness of the data. Using the latest insights helps them make better decisions for the business. BI Engine gives customers “formula one” performance for their queries across all BI tools that connect with BigQuery, thereby helping them leverage existing investments. Last year, we launched a preview of BigQuery BI Engine, a fast in-memory analysis service that accelerates and provides sub-second query performance for dashboards and reports that connect to BigQuery. BI Engine works with any BI or custom dashboarding tool. It was designed to help analysts identify trends faster, reduce risk, match the pace of customer demand, and improve operational efficiency in an ever-changing business climate. With this launch, customers were able to build fast, interactive dashboards using any of the popular tools, such as Looker, Tableau, Sheets, Power BI, Qlik, or even any custom application. And our customers have realized this value quickly. “We have seen significant performance improvements within BigQuery after implementing BI Engine. Our views and materialized views have been especially improved after implementing BI Engine,” says Yell McGuyer, Data Architect at Keller Williams Realty. Today, we are very excited to announce the general availability of BigQuery BI Engine for all BI and custom applications that work with BigQuery! BI Engine acceleration works seamlessly with BigQuery.

Native integration with the BigQuery API

BI Engine natively integrates with the BigQuery API, which means that if your dashboards use standard interfaces like SQL, the BigQuery APIs, or JDBC/ODBC drivers to connect to BigQuery, then BI Engine is automatically supported.
No changes are required for applications or dashboards to get sub-second, scalable dashboards up and running. If you run a query with BigQuery and it can be accelerated, it will be accelerated with BI Engine.

Intelligent scaling. Customers do not have to worry about efficiently using the memory reservation; BI Engine does it for you based on the access patterns. BI Engine leverages advanced techniques like vectorized processing, advanced data encodings, and adaptive caching to maximize performance while optimizing memory usage. It can also intelligently create replicas of the same data to enable concurrent access.

Simple configuration. The only configuration needed when using BI Engine is to set up the memory reservation, which is provided in fine-grained increments of 1 GB each.

Full visibility. Monitoring and logging are critical for running applications in the cloud and for gaining insight into performance and opportunities for optimization. BI Engine integrates with familiar tools such as INFORMATION_SCHEMA for job details (e.g. aggregate refresh time, cache hit ratios, query latency, etc.) and Stackdriver for monitoring of usage.

Getting started with BI Engine

BI Engine is now available in all regions where BigQuery is available. You can sign up for a BigQuery sandbox here and enable BI Engine for your project. Feel free to read through the documentation and quickstart guides for popular BI tools. You can also watch the demo from the Data Cloud Summit to see how BI Engine works with BI tools like Looker, Data Studio, and Tableau. If you’re a partner with an integration to BigQuery, consider joining the Google Cloud Ready – BigQuery initiative. You can find more details about the program here.

How Google Cloud and partners can accelerate your migration success

As enterprises accelerate their migration to the cloud, they experience more notable mid- and late-phase migration challenges. Specifically, 41% face challenges when optimizing apps in the cloud post-migration, and 38% struggle with performance issues on workloads migrated to the cloud. Further, organizations have also increased their reliance on outside consultants and other service providers, from early-stage cloud migration tasks to ongoing management post-implementation.¹ To help customers through these challenges with a simple, quick path to a successful cloud migration, Google Cloud created our comprehensive Rapid Assessment & Migration Program (RAMP). And we’ve got some exciting developments to share with our customers and partners.

Expanded focus on post-migration TCO/ROI

Given the complex nature of cloud migrations, we are committed to meeting our customers where they are in their cloud journeys and partnering with them to achieve their business goals — be it building customer value through innovation, driving cost efficiencies, or increasing competitive differentiation and productivity. RAMP is a holistic framework, based on tangible customer TCO and ROI analyses, that supports our customers’ journeys all the way through: from assessing their digital landscapes across multiple sources, including on-prem and other clouds, and identifying prioritized target workloads, to building a comprehensive migration and modernization plan.

Accelerate positive outcomes with expert partners

Customers can also now expect a more streamlined migration experience through our ecosystem of partners who have completed their cloud migration specialization. Last week, we announced industry-leading updates to our partner funding programs with new assessment and consumption packages that simplify and accelerate our customers’ journey to Google Cloud, at little to no cost.
These packages offer prescriptive pathways for infrastructure and application modernization initiatives, empowering our partners to support our customers at every stage — from discovery and planning to migration and modernization. Through our partner ecosystem, our customers can expect:

Distinct funding packages for assessment, planning, and migration

Faster approval processes for accelerated deployments

More partners eligible to participate in RAMP and access these new funding packages

Sustainability through migration

Another major focus area for RAMP is helping enterprises optimize their migration planning and maximize their ROI by including their business and technical considerations early in the process, including any sustainability goals they may have. To aid with their sustainability efforts, we are excited to share that customers can now receive a Digital Sustainability Report along with their IT assessments – enabling sustainability to be built into their migration strategies. The report provides actionable insights to measure and reduce their environmental impact, and is based on some of Google Cloud’s own best practices: Google has been carbon-neutral since 2007 and aims to run on carbon-free energy by 2030. We are committed to solving complex problems for our customers and partners, and these updates are a reflection of the feedback we receive. Simplify your cloud migration strategy today by requesting your free assessment, finding a partner to work with, or talking to your existing partner to get started.

1. Forrester Consulting, State Of Public Cloud Migration; a study commissioned by Google, 2022

Introducing new Google Cloud manufacturing solutions: smart factories, smarter workers

Today, manufacturers are advancing on their digital transformation journey, betting on innovative technologies like cloud and AI to strengthen competitiveness and deliver sustainable growth. Nearly two thirds of manufacturers already use cloud solutions, according to McKinsey. The actual work of scaling digital transformation projects from proof of concept to production, however, still remains a challenge for the majority of them, according to analysts. We believe the scalability challenges revolve around two factors—the lack of access to contextualized operational data, and the skills gap in using complex data science and AI tools on the factory floor. To ensure manufacturers can scale their digital transformation efforts into production, Google Cloud is announcing new manufacturing solutions, specifically designed for manufacturers’ needs. The new manufacturing solutions from Google Cloud give manufacturing engineers and plant managers access to unified and contextualized data from across their disparate assets and processes. Let’s take a look at the new solutions as we follow the data journey, from the factory floor to the cloud:

Manufacturing Data Engine is the foundational cloud solution to process, contextualize, and store factory data. The cloud platform can acquire data from any type of machine, supporting a wide range of data, from telemetry to image data, via a private, secure, and low-cost connection between edge and cloud. With built-in data normalization and context-enrichment capabilities, it provides a common data model, with a factory-optimized data lakehouse for storage.

Manufacturing Connect is the factory edge platform co-developed with Litmus Automation that quickly connects with nearly any manufacturing asset via an extensive library of 250-plus machine protocols. It translates machine data into a digestible dataset and sends it to the Manufacturing Data Engine for processing, contextualization, and storage.
By supporting containerized workloads, it allows manufacturers to run low-latency data visualization, analytics, and ML capabilities directly on the edge. Built on the Manufacturing Data Engine is a growing set of data analytics and AI use cases, enabled by Google Cloud and our partners:

Manufacturing analytics & insights: An out-of-the-box integration with Looker templates that delivers a dashboarding and analytics experience. As an easy-to-use, no-code data and analytics model, it empowers manufacturing engineers and plant managers to quickly create and modify custom dashboards, adding new machines, setups, and factories automatically. The solution enables drill-down into the data against KPIs, or on demand, to uncover new insights and improvement opportunities throughout the factory. Shareable insights unlock collaboration across the enterprise and with partners.

Predictive maintenance: Pre-built predictive maintenance machine learning models allow manufacturers to deploy in weeks without compromising on prediction accuracy. Manufacturers can continuously improve their models and refine them in collaboration with Google Cloud engineers.

Machine-level anomaly detection: A purpose-built integration that leverages Google Cloud’s Time Series Insights API on real-time machine and sensor data to identify anomalies as they occur and provide alerts.

“The growing amount of sensor data generated on our assembly lines creates an opportunity for smarter analytics around product quality, production efficiency, and equipment health monitoring, but it also means new data intake and management challenges,” said Jason Ryska, director of manufacturing technology development at Ford Motor Company. “We worked with Google Cloud to implement a data platform now operating on more than 100 key machines connected across two plants, streaming and storing over 25 million records per week.
We’re gaining strong insights from the data that will help us implement predictive and preventive actions and continue to become even more efficient in our manufacturing plants.”

“With the tight integration of a powerful factory edge solution with Google Cloud, it is easier than ever for factories to tap into cloud capabilities,” said Masaharu Akieda, general manager for the Digital Solutions Division at KYOCERA Communication Systems Company. “Google Cloud’s solutions enable a broader group of users beyond data scientists to quickly access, analyze, and use data in a variety of use cases. We are excited to partner with Google Cloud as we implement new manufacturing solutions to optimize production operations and consistently increase quality.”

“As the global innovator of solid-state cooling and heating technology, we’ve developed a sustainable manufacturing platform that uses less water, less electricity, and less chemical waste,” said Jason Ruppert, chief operations officer of Phononic. “This partnership with Google Cloud allows us to contextualize data across all of our manufacturing processes, ultimately providing us the analytics and insights to optimize our operations and continue to bring to the world products that cool sustainably, reducing greenhouse gas (GHG) emissions and improving the environment.”

A growing number of partners are extending Google Cloud’s manufacturing solutions, from connectors to AI-driven use cases. Take a look at what our partners are saying about the Manufacturing Data Engine and Manufacturing Connect at our upcoming Google Cloud Manufacturing Spotlight.

With Google Cloud’s new manufacturing solutions, three critical pieces of smart manufacturing operations are strengthened and integrated: factory-floor engineers, data, and AI.

Empowering factory-floor engineers to be the hub of smart manufacturing

Over the last few years, the manufacturing industry contributed more than 10% of the U.S. gross domestic product, or 24% of GDP with indirect value (i.e., purchases from other industries) included. The sector also employs approximately 15 million people, representing 10% of total U.S. employment. However, more than 20% of the manufacturing workforce in the U.S. is older than 55, with an average age of 44, and similar patterns are seen across the world. Finding new talent to replace the retiring workforce is becoming increasingly hard for manufacturers.

Companies therefore need to both enable their existing workforce and make the sector more attractive for new talent to join. This balance requires making critical technology such as cloud and AI accessible, easier to use, and deeply embedded in manufacturers’ day-to-day operations.

Google Cloud’s manufacturing solutions are designed with this end in mind. Combining fast implementation and ease of use, they put powerful digital tools directly into the hands of the manufacturing workforce to uncover new insights and optimize operations in entirely new ways. Key parts of the solution are low- to no-code in setup and use, and therefore suitable for a large variety of end users. Built for scale, the solutions allow for template-based rollouts and encourage reuse through standardization. Designed with best practices in mind, they let manufacturers focus precious resources on use cases instead of the underlying infrastructure.

Manufacturing engineers can visualize and drill down into data using Manufacturing Analytics & Insights, built on Looker’s business intelligence engine. Integrated with the Manufacturing Data Engine, its automatic configuration provides an up-to-date view into any aspect of manufacturing operations.
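The kind of drill-down described above, aggregating the same underlying records at machine, line, or factory level, can be sketched as a simple roll-up. The record fields and the yield KPI below are illustrative assumptions, not the product’s data model:

```python
from collections import defaultdict

# Illustrative flat records, each tagged with its place in an assumed
# factory > line > machine hierarchy.
records = [
    {"factory": "Plant-A", "line": "L1", "machine": "Press01", "good_parts": 950, "total_parts": 1000},
    {"factory": "Plant-A", "line": "L1", "machine": "Press02", "good_parts": 880, "total_parts": 1000},
    {"factory": "Plant-A", "line": "L2", "machine": "Mill01",  "good_parts": 990, "total_parts": 1000},
]

def rollup(records, level):
    """Aggregate a yield KPI (good parts / total parts) at the chosen level."""
    good = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        good[r[level]] += r["good_parts"]
        total[r[level]] += r["total_parts"]
    return {key: good[key] / total[key] for key in good}

line_yield = rollup(records, "line")        # {'L1': 0.915, 'L2': 0.99}
factory_yield = rollup(records, "factory")  # {'Plant-A': 0.94}
```

The same function serves every level of the hierarchy, which is what makes a dashboard able to drill from a factory-wide number down to a single machine.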
From the COO to plant managers and factory engineers, users can easily browse and explore factory data at the enterprise, factory, line, machine, and sensor level.

Besides designing the manufacturing solutions from the ground up for ease of use, Google Cloud and partners are actively helping manufacturers upskill their workforce with a dedicated enablement service.

Making every data point accessible and actionable

Data is the backbone of digital manufacturing transformation, and manufacturers have a potential abundance of it: performance logs from a single machine can generate 5 gigabytes of data per week, and a typical smart factory can produce 5 petabytes per week. However, this wealth of data, and the insights contained within it, remains largely inaccessible for many manufacturers today: data is often only partially captured, and then locked away in a variety of disparate and proprietary systems.

Manufacturing Connect, co-developed with Litmus Automation, provides an industry-leading breadth of more than 250 native protocol connectors to quickly connect to and acquire data from nearly any production asset and system with a few clicks. Integrated analytics features and support for containerized workloads give manufacturers the option of on-premises processing of data. A complementary cloud component allows manufacturers to centrally manage, configure, standardize, and update edge instances across all their factories for roll-outs on a global scale. Integrated in the same UI, users can also manage downstream processing of data sent to the cloud by configuring Google Cloud’s Manufacturing Data Engine solution.

The Manufacturing Data Engine provides structure to the data and allows for semantic contextualization. In doing so, it makes data universally accessible and useful across the enterprise.
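As one illustration of the machine-level anomaly detection use case mentioned earlier, here is a minimal rolling z-score detector over a sensor stream. This is a generic sketch with assumed window and threshold parameters, not Google Cloud’s Time Series Insights API integration:

```python
import statistics
from collections import deque

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard deviations
    from the mean of the previous `window` readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) > threshold * stdev:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# A steady, slightly oscillating sensor signal with one spike at index 30.
signal = [20.0 + 0.1 * (i % 5) for i in range(60)]
signal[30] = 35.0
print(detect_anomalies(signal))  # → [(30, 35.0)]
```

A production system would need to handle seasonality, sensor drift, and missing samples, but the core idea, comparing each new reading against a rolling baseline, is the same.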
By abstracting away the underlying complexity of manufacturing data, manufacturers and partners can develop analytics and AI use cases that are high-value, repeatable, scalable, and quick to implement.

AI for smart manufacturing demands a broad partner ecosystem

Manufacturers recognize the value of AI solutions in driving cost and production optimizations; several of them even hold active patents from AI initiatives. In fact, according to research from Google in June 2021, 66% of manufacturers that use AI in their day-to-day operations report that their reliance on AI is increasing. Google Cloud helps manufacturers put cloud technology and artificial intelligence to work, helping factories run faster and smoother. Customers using the Manufacturing Data Engine from Google Cloud can directly access Google Cloud’s industry-leading Vertex AI platform, which offers integrated AI/ML tools ranging from AutoML for manufacturing engineers to advanced AI tools that let experts fine-tune results. With Google Cloud, AI/ML use case development has never been more accessible for manufacturers.

Crossing the scalability chasm for using the power of cloud and AI in manufacturing

Our mission is to accelerate your digital transformation by bridging data silos, and to help make every engineer into a data scientist with easy-to-use AI technologies and an industry data platform. Join us at the Google Cloud Manufacturing Spotlight to learn more.

The new manufacturing solutions will be demonstrated in person for the first time at Hannover Messe 2022, May 30–June 2. Visit us at Stand E68, Hall 004, or schedule a meeting for an onsite demonstration with our experts.

Related Article: Leading with Google Cloud & Partners to modernize infrastructure in manufacturing. Learn how Google Cloud Partner Advantage partners help customers solve real-world business challenges in manufacturing.
Source: Google Cloud Platform