Your guide to sessions at Google Cloud Security Summit 2022

Google Cloud Security Summit is just a few days away! We have an exciting agenda with a keynote, demo, and breakout sessions across four tracks: Zero Trust, Secure Software Supply Chain, Ransomware & Emerging Threats, and Cloud Governance & Sovereignty. By attending this summit, you will be the first to learn about the new products and advanced capabilities we are announcing from Google Cloud security, and discover new ways to define and drive your security strategy and solve your biggest challenges.

We hope you'll join us for the Security Summit digital event on May 17, 2022, to learn from experts, explore the latest tools, and share our vision for the future of security. Register here for the event and watch the sessions live and on demand. If you are in Europe, the Middle East, or Africa, please visit the EMEA page to view summit events in your time zone and captions in your local language.

Security Summit Keynote

Charting a safer future with Google Cloud
Featured Speakers:
Chris Inglis, National Cyber Director, Executive Office of the President, White House
Jonathan Meadows, Head of Cloud Cyber Security Engineering, Citibank
Sunil Potti, General Manager and Vice President of Cloud Security, Google Cloud

Cybersecurity remains at the top of every organization's agenda. Join our opening keynote to hear how Google Cloud's unique capabilities and expertise can help organizations, large and small, in the public or private sector, address today's most prominent security challenges and imperatives: Zero Trust, securing the software supply chain, ransomware and other emerging threats, and cloud governance and digital sovereignty. Whether you use our trusted cloud for digital transformation, or continue to operate on-premises or in other clouds, you'll learn how we can help you be safer with Google.

Demo

Modern threat detection, investigation, and response with Google Cloud's SecOps suite
Featured Speakers:
Arnaud Loos, Customer Engineer, Google Cloud
Svetla Yankova, Head of Customer Engineering, Google Cloud

To stay secure in today's growing threat landscape, organizations must detect and respond to cyber threats at unprecedented speed and scale. This demonstration will showcase Google Cloud's Security Operations Suite and its unique approach to building modern threat detection, investigation, and response.

Breakout Sessions

We have 19 breakout sessions featuring Google speakers, our customers, and partners. The breakout sessions are spread across four tracks: Zero Trust, Secure Software Supply Chain, Ransomware & Emerging Threats, and Cloud Governance & Sovereignty.

Zero Trust Track

1. How Google is helping customers move to Zero Trust
Featured Speakers:
Aman Diwakar, Security Engineering Manager – Corporate Security, DoorDash
Jeanette Manfra, Senior Director, Risk and Compliance, Google Cloud
Tanisha Rai, Product Manager, Google Cloud

Enterprises around the globe are committed to moving to a Zero Trust architecture, but actually making that happen can be hard. Every day, we hear from customers asking how they can set up a Zero Trust model like Google's, and we are here to help. Tune in to this session to hear speakers discuss how Google did it and how we can now help you with a comprehensive set of products, advisory services, and solutions. Whether you're "born in the cloud," a government agency looking to meet federal directives, or somewhere in between, Google Cloud products like BeyondCorp Enterprise and our set of partner solutions can help you jump-start your Zero Trust approach.

2. A look ahead: the future of BeyondCorp Enterprise
Featured Speakers:
Prashant Jain, Product Manager, Google Cloud
Jian Zhen, Product Manager, Google Cloud

Google pioneered Zero Trust. Now we're pioneering rapid Zero Trust transformation. We know one size does not fit all, and Zero Trust capabilities should conform to your needs – not vice versa. Join this session to learn how BeyondCorp Enterprise enables you to quickly and flexibly apply a Zero Trust approach to meet your application use cases and security requirements. Hear from product leaders as they share updates on new BeyondCorp capabilities, partnerships, and integrations that enable you to deliver rapid wins and avoid drawn-out deployment projects.

3. CrowdStrike and Deloitte: Managing cloud migration, remote workforce, and today's threats
Featured Speakers:
Chris Kachigian, Sr. Director, Global Solutions Architecture, CrowdStrike
Mike Morris, Detect and Respond CTO, Head of Engineering, Deloitte
McCall McIntyre, Strategic Technology Partner Lead, Google Cloud

Your organization is on its cloud migration journey, you have a remote or hybrid workforce, and your extended infrastructure is more dependent than ever on disparate devices, partners, and apps. To make things even more complicated, threat actors are targeting you across all of these fronts, causing business disruption. How can you secure this new extended environment without negatively impacting user productivity? Join this Lightning Talk to learn how CrowdStrike and Deloitte have helped customers solve exactly that.

4. Working safer with Google Workspace
Featured Speakers:
Neil Kumaran, Product Lead, Gmail & Chat Security & Trust, Google Cloud
Nikhil Sinha, Sr. Product Manager, Workspace Security, Google Cloud

Google Workspace is on a mission to make phishing and malware attacks a thing of the past. Google keeps more people safe online than anyone else in the world: according to our research, Gmail blocks more than 99.9% of malware and phishing attempts from reaching users' inboxes. We do this by using our expertise in protecting against threats at scale to protect every customer by default. This session will provide an overview of how Google Workspace's layered, AI-powered protections function across Gmail, Docs, Sheets, Slides, and Drive. We'll examine real-life examples of large malware attacks to showcase how advanced capabilities like sandboxing, deep-learning-based malicious document classification, and performant, deep antivirus protections work to help stop threats.

5. Securing IoT devices using Certificate Authority Service
Featured Speakers:
Sudhi Herle, Director, Engineering & Product Management, Android Platform Security, Google Cloud
Anoosh Saboori, Product Manager, Google Cloud
Mahesh Venugopala, Director of Security, Autonomic

Scaling security for IoT devices can be challenging. As the IoT market continues to grow, it is imperative that strong security measures are put into place to protect the information these devices send to the cloud. Join this session to learn how Google customers can leverage capabilities such as Certificate Authority Service to apply Zero Trust principles to secure IoT devices.

Secure Software Supply Chain Track

6. Building trust in your software supply chain
Featured Speakers:
Nikhil Kaul, Head of Product Marketing – Application Modernization, Google Cloud
Victor Szalvay, Outbound Product Manager, Google Cloud

Whether you're building an application on Kubernetes, or in a serverless or virtual machine environment, end-to-end security is critical for mitigating the vulnerabilities lurking within open source software, as well as those related to recent cybersecurity attacks and data breaches. Come learn how you can meet guidelines from the U.S. government and adopt an in-depth, security-first approach with Google Cloud that embeds security at every step of your software life cycle.

7. Protecting and securing your Kubernetes infrastructure with enterprise-grade controls
Featured Speaker:
Gari Singh, Product Manager, Google Cloud

Kubernetes is not just a technology. It's also a model for creating value for your business, a way of developing apps and services, and a means to help secure and develop cloud-native IT capabilities for innovation. Google Kubernetes Engine (GKE) allows your developers to spend less time worrying about security and to achieve more secure outcomes. In this session, learn how you can set up enterprise-grade security for your app right out of the box. We'll cover the latest security controls, hardened configurations, and policies for GKE, including confidential computing options.

8. Managing the risks of open source dependencies in your software supply chain
Featured Speaker:
Andy Chang, Group Product Manager, Google Cloud

Open source software code is available to the public – free for anyone to use, modify, or inspect. But securing open source code, including fixing known vulnerabilities, is often done on an ad hoc, volunteer basis. Join this session to learn how our new Google Cloud solution addresses open source software security.

Ransomware and Emerging Threats Track

9. A holistic defense strategy for modern ransomware attacks
Featured Speaker:
Adrian Corona, Head of Security Solutions GTM, Google Cloud

Making your organization resilient against modern ransomware attacks requires holistic detection, protection, and response capabilities. In this session, we'll demonstrate how you can apply a cyber resilience framework, and products from Google Cloud and partners, to help thwart threats and combat ransomware attacks.

10. Taking an autonomic approach to security operations
Featured Speakers:
Anton Chuvakin, Head of Security Solution Strategy, Google Cloud
Iman Ghanizada, Head of Autonomic Security Operations, Google Cloud

Security operations centers are constantly pressed for time. Analysts seldom have the luxury to "clear the board" of active attacks and, as a result, can often feel overwhelmed. In this talk, we'll show you how you can turn the tide and leverage Chronicle and Siemplify to prioritize and automate your SecOps, giving analysts valuable time back to focus on the threats that matter.

11. Insight and perspective from the Unit 42 Ransomware Threat Report
Featured Speakers:
Joshua Haslett, Strategic Technology Partnership Manager, Google Cloud
Josh Zelonis, Field CTO and Evangelist, Palo Alto Networks

Ransomware groups turned up the pressure on their victims in 2021, demanding higher ransoms and using new tactics to force them into paying. In fact, the average ransomware demand in cases handled by Unit 42 in 2021 climbed 144% over 2020. At the same time, there was an 85% increase in the number of victims who had their names and other details posted publicly on dark web "leak sites" that ransomware groups use to coerce their targets. As the ransomware landscape continues to evolve, and threat actors leverage creative new techniques to cripple business operations, what can your organization do to prepare and stay ahead of threats? Join us for this presentation as we discuss the key findings of our 2022 Unit 42 Ransomware Threat Report.

12. Cloud-native risk management and threat detection with Security Command Center
Featured Speakers:
Thomas Meriadec, Head of Cloud Platforms Security & Compliance, Veolia
Tim Wingerter, Product Manager, Google Cloud

As organizations move to the cloud, continuous monitoring of the environment for risk posture and threats is critical. In this session, learn how Security Command Center Premium provides risk management and threat detection capabilities to help you manage and improve your cloud security and risk posture. Join us to hear about Veolia's experience with Security Command Center Premium.

13. Securing web applications and APIs anywhere
Featured Speakers:
Shelly Hershkovitz, Product Manager, Apigee API Security, Google Cloud
Gregory Lebovitz, Product Management, Cloud Network Security, Google Cloud

Application attack vectors are increasing rapidly, and many organizations seek to protect against the different types of application and API attacks. Join this session to learn how Google Cloud can help protect and secure applications and APIs from fraud, abuse, and attacks – such as DDoS, API abuse, bot fraud, and more – using our Web App and API Protection (WAAP) offering.

14. Maximizing your detection & response capabilities
Featured Speakers:
Magali Bohn, Director, Partnerships and Channels GSEC, Google Cloud
Brett Perry, CISO, Dot Foods
Jason Sloderbeck, Vice President, Worldwide Channels, CYDERES

Join Google Cloud, Cyderes (Cyber Defense and Response), and Dot Foods as we discuss best practices and real-world use cases that enable a company to detect threats and respond to incidents in real time. Learn about their autonomic security operations journey and how they've scaled a robust, cost-efficient program to accelerate their digital transformation and overall growth.

Cloud Governance & Sovereignty Track

15. Achieving your digital sovereignty with Google Cloud
Featured Speaker:
Dr. Wieland Holfelder, Vice President, Engineering, Google Cloud

Google Cloud's unique approach, which includes strong local partnerships, helps organizations balance transparency, control, and the ability to survive the unexpected – on a global scale. Join this session to learn how you can meet current and emerging digital sovereignty goals.

16. Compliance with confidence: Meeting regulatory mandates using software-defined community clouds
Featured Speakers:
Bryce Buffaloe, Product Manager, Security & Compliance, Google Cloud
Jamal Mahboob, Customer Engineer, Google Cloud

Adopting the cloud in regulated industries can come with data residency constraints and the need for specific support and security controls. Learn how Google Cloud can help provide these assurances without the strict physical infrastructure constraints of legacy approaches, enabling organizations to benefit from cloud innovation while meeting their compliance needs.

17. Demystifying cyber security analytics – scalable approaches for the real world
Featured Speakers:
Philip Bice, Global Lead – Service Provider Partnerships, Google Cloud
Chris Knackstedt, Sr. Manager / Data Scientist, Deloitte & Touche LLP

In this session, join security leaders from Deloitte & Touche LLP and Google Cloud for an insightful conversation on the key trends and challenges driving the need for scalable, flexible, and predictive security analytics solutions for today's hybrid, multicloud technology environments. The speakers will share practical approaches to designing and deploying use-case-driven security analytics by leveraging the power of Google Cloud's native data management and analytics services. The session will also cover solutions and managed services offered jointly by Deloitte and Google Cloud that can help organizations maintain their competitive differentiation and continually accelerate their cybersecurity maturity.

18. Best practices for defining and enforcing policies across your Google Cloud environment
Featured Speakers:
Vandhana Ramadurai, Sr. Product Manager, Google Cloud
Sri Subramanian, Head of Product, Cloud Identity and Access Management, Google Cloud

Learn how to take a policy-driven approach to governing your cloud resources. In this session, we'll cover best practices that enable organizations to shift from remediating resources that violate requirements to a more proactive state of preventing those violations.

19. A comprehensive strategy for managing sensitive data in the cloud
Featured Speakers:
Nelly Porter, Group Product Manager, Google Cloud
Matt Presson, Lead Security Architect, Product Security, Bullish

Data is a big asset and a big risk, and classifying and protecting it is an important task for organizations. In this session, learn how you can leverage Google security tools to take back control of your data with less effort.

In addition to these sessions, on-demand videos and demos will be published on May 17 that you can watch at your convenience by visiting the Security Summit page. We can't wait for you to join us and learn all things security at Google Cloud Security Summit!

Related article: Cloud CISO Perspectives: April 2022 – Google Cloud CISO Phil Venables shares his thoughts on the latest security updates from the Google Cybersecurity Action Team.
Source: Google Cloud Platform

Extending BigQuery Functions beyond SQL with Remote Functions, now in preview

Today we are announcing the Preview of BigQuery Remote Functions. Remote Functions are user-defined functions (UDFs) that let you extend BigQuery SQL with your own custom code, written and hosted in Cloud Functions, Google Cloud's scalable, pay-as-you-go functions as a service. A remote UDF accepts columns from BigQuery as input, performs actions on that input using a Cloud Function, and returns the result of those actions as a value in the query result. With Remote Functions, you can now write custom SQL functions in Node.js, Python, Go, Java, .NET, Ruby, or PHP. This means you can personalize BigQuery for your company and leverage the same management and permission models, without having to manage a server.

In what type of situations could you use remote functions?

Before today, BigQuery customers had the ability to create user-defined functions (UDFs) in either SQL or JavaScript that ran entirely within BigQuery. While these functions are performant and fully managed from within BigQuery, customers expressed a desire to extend BigQuery UDFs with their own external code. Here are some examples of what they have asked for:

Security and compliance: Use data encryption and tokenization services from the Google Cloud security ecosystem for external encryption and de-identification. We've already started working with key partners like Protegrity and CyberRes Voltage on using these external functions as a mechanism to merge BigQuery into their security platforms, which will help our mutual customers address strict compliance controls.
Real-time APIs: Enrich BigQuery data using external APIs to obtain the latest stock price data, weather updates, or geocoding information.
Code migration: Migrate legacy UDFs or other procedural functions written in Node.js, Python, Go, Java, .NET, Ruby, or PHP.
Data science: Encapsulate complex business logic and score BigQuery datasets by calling models hosted in Vertex AI or other machine learning platforms.

Getting Started

Let's go through the steps to use a BigQuery remote UDF.

Set up the BigQuery connection:
1. Create a BigQuery connection.
   a. You may need to enable the BigQuery Connection API.

Deploy a Cloud Function with your code:
1. Deploy your Cloud Function.
   a. You may need to enable the Cloud Functions API.
   b. You may need to enable the Cloud Build API.
2. Grant the BigQuery connection's service account access to the Cloud Function.
   a. One way to find the service account is with the bq CLI show command:

```
bq show --location=US --connection $CONNECTION_NAME
```

Define the BigQuery remote UDF:
1. Create the remote UDF definition within BigQuery.
   a. One way to find the endpoint name is with the gcloud CLI functions describe command:

```
gcloud functions describe $FUNCTION_NAME
```

Use the BigQuery remote UDF in SQL:
1. Write a SQL statement as you would when calling any UDF.
2. Get your results!

How remote functions can help you with common data tasks

Let's take a look at some examples of how using BigQuery with remote UDFs can help accelerate development and enhance data processing and analysis.

Encryption and Decryption

As an example, let's create a simple custom encryption and decryption Cloud Function in Python. The encryption function can receive the data and return an encrypted, base64-encoded string. In the same Cloud Function, the decryption function can receive an encrypted, base64-encoded string and return the decrypted string.
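Once the connection, Cloud Function, and UDF definition described below are in place, calling the remote function from SQL is no different from calling any built-in function. Here is a minimal sketch using the BigQuery Python client; the project, dataset, table, and column names are hypothetical, and demo.decryption refers to the function defined later in this example:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

# Hypothetical table with a column previously encrypted by the remote UDF.
query = """
    SELECT
      user_id,
      `my-project.demo.decryption`(encrypted_email) AS email
    FROM `my-project.demo.users`
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.user_id, row.email)
```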
A data engineer would be able to enable this functionality in BigQuery. The Cloud Function receives the data and determines which function you want to invoke; the data arrives as an HTTP request. The additional userDefinedContext fields allow you to send extra pieces of data to the Cloud Function.

```python
# Imports used by the helpers below (AES comes from the pycryptodome package).
import base64
import json
from Crypto.Cipher import AES


def remote_security(request):
    request_json = request.get_json()
    mode = request_json['userDefinedContext']['mode']
    calls = request_json['calls']
    not_extremely_secure_key = 'not_really_secure'
    if mode == "encryption":
        return encryption(calls, not_extremely_secure_key)
    elif mode == "decryption":
        return decryption(calls, not_extremely_secure_key)
    return json.dumps({"Error in Request": request_json}), 400
```

The result is returned in a specific JSON-formatted response that is sent back to BigQuery to be parsed.

```python
def encryption(calls, not_extremely_secure_key):
    return_value = []
    for call in calls:
        data = call[0].encode('utf-8')
        cipher = AES.new(
            not_extremely_secure_key.encode('utf-8')[:16],
            AES.MODE_EAX
        )
        cipher_text = cipher.encrypt(data)
        # Prepend the nonce so the decryption function can rebuild the cipher.
        return_value.append(
            str(base64.b64encode(cipher.nonce + cipher_text))[2:-1]
        )
    return json.dumps({"replies": return_value})
```

This Python code is deployed to Cloud Functions, where it waits to be invoked.

Let's add the user-defined function to BigQuery so we can invoke it from a SQL statement. The additional user_defined_context is sent to the Cloud Function as extra context in the request payload, so you can map multiple remote functions to one endpoint.

```sql
CREATE OR REPLACE FUNCTION `<project-id>.demo.decryption` (x STRING) RETURNS STRING
REMOTE WITH CONNECTION `<project-id>.us.my-bq-cf-connection`
OPTIONS (
  endpoint = 'https://us-central1-<project-id>.cloudfunctions.net/remote_security',
  user_defined_context = [("mode", "decryption")]
);
```

Once we've created our functions, users with the right IAM permissions can use them in SQL on BigQuery.

If you're new to Cloud Functions, be aware that there are minimal delays known as "cold starts." The neat thing is you can call APIs as well, which is how our partners at Protegrity and Voltage enable their platforms to perform encryption and decryption of BigQuery data.

Calling APIs to enrich your data

Users such as data analysts can easily use the user-defined functions you create, without needing other tools or moving data out of BigQuery. You can enrich your dataset with many more APIs, for example the Google Cloud Natural Language API, to analyze sentiment in your text without having to use another tool.

```python
from google.cloud import language_v1
import json


def call_nlp(calls):
    return_value = []
    client = language_v1.LanguageServiceClient()
    for call in calls:
        text = call[0]
        document = language_v1.Document(
            content=text, type_=language_v1.Document.Type.PLAIN_TEXT
        )
        sentiment = client.analyze_sentiment(
            request={"document": document}
        ).document_sentiment
        return_value.append(str(sentiment.score))
    return_json = json.dumps({"replies": return_value})
    return return_json
```

Once the Cloud Function is deployed and the remote UDF definition is created in BigQuery, you are able to invoke the NLP API and use the returned data in your queries.

Custom Vertex AI endpoint

Data scientists can integrate Vertex AI endpoints and other APIs for custom models, all from the SQL console. Remember, remote UDFs are meant for scalar executions. You are able to deploy a model to a Vertex AI endpoint, which is another API, and then call that endpoint from Cloud Functions.

```python
# Assumes the standard Vertex AI client libraries, and that project, location,
# endpoint_id, and client_options are defined for your environment.
from google.cloud import aiplatform
from google.cloud.aiplatform.gapic.schema import predict
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
import json


def predict_classification(calls):
    # Vertex AI endpoint details
    client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
    endpoint = client.endpoint_path(
        project=project, location=location, endpoint=endpoint_id
    )
    return_value = []
    # Call the endpoint for each input row
    for call in calls:
        content = call[0]
        instance = predict.instance.TextClassificationPredictionInstance(
            content=content,
        ).to_value()
        instances = [instance]
        parameters_dict = {}
        parameters = json_format.ParseDict(parameters_dict, Value())
        response = client.predict(
            endpoint=endpoint, instances=instances, parameters=parameters
        )
        # Collect one reply per input row in the format BigQuery expects.
        return_value.append(str(response.predictions[0]))
    return json.dumps({"replies": return_value})
```

Try it out today

Try out the BigQuery remote UDFs today!
Source: Google Cloud Platform

Introducing AlloyDB for PostgreSQL: Free yourself from expensive, legacy databases

Enterprises are struggling to free themselves from legacy database systems and need an alternative for modernizing their applications. Today at Google I/O, we're thrilled to announce the preview of AlloyDB for PostgreSQL, a fully managed, PostgreSQL-compatible database service that provides a powerful option for modernizing your most demanding enterprise database workloads. Compared with standard PostgreSQL, in our performance tests, AlloyDB was more than four times faster for transactional workloads and up to 100 times faster for analytical queries. AlloyDB was also two times faster for transactional workloads than Amazon's comparable service. This makes AlloyDB a powerful new modernization option for transitioning off of legacy databases.

As organizations modernize their database estates in the cloud, many struggle to eliminate their dependency on legacy database engines. In particular, enterprise customers are looking to standardize on open systems such as PostgreSQL to eliminate expensive, unfriendly licensing and the vendor lock-in that comes with legacy products. However, running and replatforming business-critical workloads onto an open source database can be daunting: teams often struggle with performance tuning, disruptions caused by vacuuming, and managing application availability. AlloyDB combines the best of Google's scale-out compute and storage, industry-leading availability, security, and AI/ML-powered management with full PostgreSQL compatibility, paired with the performance, scalability, manageability, and reliability that enterprises expect for their mission-critical applications.

As noted by Carl Olofson, Research Vice President, Data Management Software, IDC: "Databases are increasingly shifting into the cloud and we expect this trend to continue as more companies digitally transform their businesses. With AlloyDB, Google Cloud offers large enterprises a big leap forward, helping companies to have all the advantages of PostgreSQL, with the promise of improved speed and functionality, and predictable and transparent pricing."

AlloyDB is the next major milestone in our journey to support customers' heterogeneous migrations. For example, we recently added Oracle-to-PostgreSQL schema conversion and data replication capabilities to our Database Migration Service, while our new Database Migration Program helps you accelerate your move to the cloud with tooling and incentive funding.

"Developers have many choices for building, innovating and migrating their applications. AlloyDB provides us with a compelling relational database option with full PostgreSQL compatibility, great performance, availability and cloud integration. We are really excited to co-innovate with Google and can now benefit from enterprise grade features while cost-effectively modernizing from legacy, proprietary databases."
—Bala Natrajan, Sr. Director, Data Infrastructure and Cloud Engineering, PayPal

Let's dive into what makes AlloyDB unique

With AlloyDB, we're tapping into decades of experience designing and managing some of the world's most scalable and available database services, bringing the best of Google to the PostgreSQL ecosystem. At AlloyDB's core is an intelligent, database-optimized storage service built specifically for PostgreSQL. AlloyDB disaggregates compute and storage at every layer of the stack, using the same infrastructure building blocks that power large-scale Google services such as YouTube, Search, Maps, and Gmail. This unique technology allows it to scale seamlessly while offering predictable performance. Additional investments in analytical acceleration, embedded AI/ML, and automatic tiering of data mean that AlloyDB is ready to handle any workload you throw at it, with minimal management overhead.

Finally, we do all this while maintaining full compatibility with PostgreSQL 14, the latest version of the advanced open source database, so you can reuse your existing development skills and tools and migrate your existing PostgreSQL applications with no code changes, benefiting from the entire PostgreSQL ecosystem. Furthermore, by using PostgreSQL as the foundation of AlloyDB, we're continuing our commitment to openness while delivering differentiated value to our customers.

"We have been so delighted to try out the new AlloyDB for PostgreSQL service. With AlloyDB, we have significantly increased throughput, with no application changes to our PostgreSQL workloads. And since it's a managed service, our teams can spend less time on database operations, and more time on value added tasks."
—Sofian Hadiwijaya, CTO and Co-Founder, Warung Pintar

With AlloyDB you can modernize your existing applications with:

1. Superior performance and scale
AlloyDB delivers superior performance and scale for your most demanding commercial-grade workloads. AlloyDB is four times faster than standard PostgreSQL and two times faster than Amazon's comparable PostgreSQL-compatible service for transactional workloads. Multiple layers of caching, automatically tiered based on workload patterns, provide customers best-in-class price/performance.

2. Industry-leading availability
AlloyDB provides a high-availability SLA of 99.99%, inclusive of maintenance. AlloyDB automatically detects and recovers from most database failures within seconds, independent of database size and load. AlloyDB's architecture also supports non-disruptive instance resizing and database maintenance: the primary instance can resume normal operations in seconds, while replica pool updates are fully transparent to users. This ensures that customers have a highly reliable, continuously available database for their mission-critical workloads.

"We are excited about the new PostgreSQL-compatible database. AlloyDB will bring more scalability and availability with no application changes. As we run our e-commerce platform and its availability is important, we are specially expecting AlloyDB to minimize the maintenance downtime."
—Ryuzo Yamamoto, Software Engineer, Mercari (Souzoh, Inc.)

3. Real-time business insights
AlloyDB delivers up to 100 times faster analytical queries than standard PostgreSQL. This is enabled by a vectorized columnar accelerator that stores data in memory in an optimized columnar format for faster scans and aggregations. This makes AlloyDB a great fit for business intelligence, reporting, and hybrid transactional and analytical processing (HTAP) workloads. Even better, the accelerator is auto-populated, so you can improve analytical performance with the click of a button.

"At PLAID, we are developing KARTE, a customer experience platform. It provides advanced real-time analytics capabilities for vast amounts of behavioral data to discover deep insights and create an environment for communicating with customers. AlloyDB is fully compatible with PostgreSQL and can transparently extend column-oriented processing. We think it's a new powerful option with a unique technical approach that enables system designs to integrate isolated OLTP, OLAP, and HTAP workloads with minimal investment in new expertise. We look forward to bringing more performance, scalability, and extensibility to our analytics capabilities by enhancing data integration with Google Cloud's other powerful database services in the future."
—Takuya Ogawa, Lead Product Engineer, PLAID

4. Predictable, transparent pricing
AlloyDB makes keeping costs in check easier than ever. Pricing is transparent and predictable, with no expensive, proprietary licensing and no opaque I/O charges. Storage is automatically provisioned, and customers are only charged for what they use, with no additional storage costs for read replicas. A free, ultra-fast cache, automatically provisioned in addition to instance memory, allows you to maximize price/performance.

5. ML-assisted management and insights
Like many managed database services, AlloyDB automatically handles database patching, backups, scaling, and replication for you. But it goes several steps further by using adaptive algorithms and machine learning for PostgreSQL vacuum management, storage and memory management, data tiering, and analytics acceleration. It learns about your workload and intelligently organizes your data across memory, an ultra-fast secondary cache, and durable storage. These automated capabilities simplify management for DBAs and developers. AlloyDB also empowers customers to better leverage machine learning in their applications. Built-in integration with Vertex AI, Google Cloud's artificial intelligence platform, allows users to call models directly within a query or transaction. That means high throughput, low latency, and augmented insights, without having to write any additional application code.

Get started with AlloyDB

A modern database strategy plays a critical role in developing great applications faster and delivering new experiences to your customers. The AlloyDB launch is an exciting milestone for Google Cloud databases, and we're thrilled to see how you use it to drive innovation across your organization and regain control and freedom of your database workloads.

To learn more about the technology innovations behind AlloyDB, check out this deep dive into its intelligent storage system. Then visit cloud.google.com/alloydb to get started and create your first cluster. You can also review the demos and launch announcements from Google I/O 2022.

Related article: AlloyDB for PostgreSQL under the hood: Intelligent, database-aware storage – In this technical deep dive, we take a look at the intelligent, scalable storage system that powers AlloyDB for PostgreSQL.
Source: Google Cloud Platform

Google Cloud unveils world’s largest publicly available ML hub with Cloud TPU v4, 90% carbon-free energy

At Google, the state-of-the-art capabilities you see in our products such as Search and YouTube are made possible by Tensor Processing Units (TPUs), our custom machine learning (ML) accelerators. We offer these accelerators to Google Cloud customers as Cloud TPUs. Customer demand for ML capacity, performance, and scale continues to increase at an unprecedented rate. To support the next generation of fundamental advances in artificial intelligence (AI), today we announced Google Cloud's machine learning cluster with Cloud TPU v4 Pods in Preview – one of the fastest, most efficient, and most sustainable ML infrastructure hubs in the world.

Powered by Cloud TPU v4 Pods, Google Cloud's ML cluster enables researchers and developers to make breakthroughs at the forefront of AI, allowing them to train increasingly sophisticated models to power workloads such as large-scale natural language processing (NLP), recommendation systems, and computer vision algorithms. At 9 exaflops of peak aggregate performance, we believe our cluster of Cloud TPU v4 Pods is the world's largest publicly available ML hub in terms of cumulative computing power, while operating at 90% carbon-free energy.

"Based on our recent survey of 2000 IT decision makers, we found that inadequate infrastructure capabilities are often the underlying cause of AI projects failing. To address the growing importance for purpose-built AI infrastructure for enterprises, Google launched its new machine learning cluster in Oklahoma with nine exaflops of aggregated compute. We believe that this is the largest publicly available ML hub with 90% of the operation reported to be powered by carbon free energy. This demonstrates Google's ongoing commitment to innovating in AI infrastructure with sustainability in mind."
—Matt Eastwood, Senior Vice President, Research, IDC

Pushing the boundaries of what's possible

Building on the announcement of Cloud TPU v4 at Google I/O 2021, we granted early access to Cloud TPU v4 Pods to several top AI research teams, including Cohere, LG AI Research, Meta AI, and Salesforce Research. Researchers liked the performance and scalability that TPU v4 provides with its fast interconnect and optimized software stack, the ability to set up their own interactive development environment with our new TPU VM architecture, and the flexibility to use their preferred frameworks, including JAX, PyTorch, and TensorFlow. These characteristics allow researchers to push the boundaries of AI, training large-scale, state-of-the-art ML models with high price-performance and carbon efficiency.

[Image: testimonial quotes from Cohere, LG AI Research, Meta AI, and Salesforce Research]

In addition, TPU v4 has enabled breakthroughs at Google Research in the areas of language understanding, computer vision, speech recognition, and much more, including the recently announced Pathways Language Model (PaLM), trained across two TPU v4 Pods.

"In order to make advanced AI hardware more accessible, a few years ago we launched the TPU Research Cloud (TRC) program that has provided access at no charge to TPUs to thousands of ML enthusiasts around the world. They have published hundreds of papers and open-source github libraries on topics ranging from 'Writing Persian poetry with AI' to 'Discriminating between sleep and exercise-induced fatigue using computer vision and behavioral genetics'. The Cloud TPU v4 launch is a major milestone for both Google Research and our TRC program, and we are very excited about our long-term collaboration with ML developers around the world to use AI for good."
—Jeff Dean, SVP, Google Research and AI

Sustainable ML breakthroughs

The fact that this research is powered predominantly by carbon-free energy makes the Google Cloud ML cluster all the more remarkable. As part of Google's commitment to sustainability, we've been matching 100% of our data centers' and cloud regions' annual energy consumption with renewable energy purchases since 2017. By 2030, our goal is to run our entire business on carbon-free energy (CFE) every hour of every day. Google's Oklahoma data center, where the ML cluster is located, is well on its way to achieving this goal, operating at 90% carbon-free energy on an hourly basis within the same grid. In addition to the direct clean energy supply, the data center has a Power Usage Effectiveness (PUE)[1] rating of 1.10, making it one of the most energy-efficient data centers in the world. Finally, the TPU v4 chip itself is highly energy efficient, with about 3x the peak FLOPs per watt of max power of TPU v3. With energy-efficient, ML-specific hardware, in a highly efficient data center, supplied by exceptionally clean power, Cloud TPU v4 combines three key best practices that can help significantly reduce energy use and carbon emissions.

Breathtaking scale and price-performance

In addition to sustainability, in our work with leading ML teams we have observed two other pain points: scale and price-performance. Our ML cluster in Oklahoma offers the capacity that researchers need to train their models, at compelling price-performance, on the cleanest cloud in the industry. Cloud TPU v4 is central to solving these challenges:

Scale: Each Cloud TPU v4 Pod consists of 4096 chips connected together via an ultra-fast interconnect network with the equivalent of an industry-leading 6 terabits per second (Tbps) of bandwidth per host, enabling rapid training for the largest models.

Price-performance: Each Cloud TPU v4 chip has ~2.2x more peak FLOPs than Cloud TPU v3, for ~1.4x more peak FLOPs per dollar. Cloud TPU v4 also achieves exceptionally high utilization of these FLOPs when training ML models at scale, up through thousands of chips. While many quote peak FLOPs as the basis for comparing systems, it is actually sustained FLOPs at scale that determines model training efficiency, and Cloud TPU v4's high FLOPs utilization (significantly better than other systems, thanks to high network bandwidth and compiler optimizations) helps yield shorter training times and better cost efficiency.

[Table 1: Cloud TPU v4 Pods deliver state-of-the-art performance through significant advancements in FLOPs, interconnect, and energy efficiency.]

Cloud TPU v4 Pod slices are available in configurations ranging from four chips (one TPU VM) to thousands of chips. While slices of previous-generation TPUs smaller than a full Pod lacked torus links ("wraparound connections"), all Cloud TPU v4 Pod slices of at least 64 chips have torus links on all three dimensions, providing higher bandwidth for collective communication operations.

Cloud TPU v4 also enables accessing a full 32 GiB of memory from a single device, up from 16 GiB in TPU v3, and offers two times faster embedding acceleration, helping to improve performance for training large-scale recommendation models.
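For developers starting out on the new TPU VM architecture mentioned above, a common first step after creating a VM is to confirm that the accelerators are visible from your framework of choice. Below is a minimal sketch using JAX, one of the supported frameworks; it assumes JAX with TPU support is already installed on the TPU VM, and the device counts it prints will vary with the slice size you provision:

```python
# Sanity check from a Cloud TPU VM: confirm the TPU chips are visible to JAX
# and run a tiny computation on them.
import jax
import jax.numpy as jnp

print(jax.devices())        # e.g. a list of TpuDevice objects
print(jax.device_count())   # number of TPU cores visible to this host

x = jnp.ones((1024, 1024))
y = jnp.dot(x, x)           # runs on the TPU by default when one is present
print(y.shape, y.dtype)
```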
Pricing

Access to Cloud TPU v4 Pods comes in evaluation (on-demand), preemptible, and committed use discount (CUD) options. Please refer to this page for more details.

Get started today

We are excited to offer the state-of-the-art ML infrastructure that powers Google services to all of our users, and we look forward to seeing how the community leverages Cloud TPU v4's combination of industry-leading scale, performance, sustainability, and cost efficiency to deliver the next wave of ML-powered breakthroughs. Ready to start using Cloud TPU v4 Pods for your AI workloads? Reach out to your Google Cloud account manager or fill in this form. Interested in access to Cloud TPUs for open-source ML research? Check out our TPU Research Cloud program.

Acknowledgements

The authors would like to thank the Cloud TPU engineering and product teams for making this launch possible. We also want to thank James Bradbury, Software Engineer, Vaibhav Singh, Outbound Product Manager, and Aarush Selvan, Product Manager, for their contributions to this blog post.

1. We report a comprehensive trailing-twelve-month (TTM) PUE in all seasons, including all sources of overhead.

Related article: Cloud TPU VMs are generally available – Cloud TPU VMs with Ranking & Recommendation acceleration are generally available on Google Cloud. Customers will have direct access to TP…
Source: Google Cloud Platform

Google Cloud at I/O: Everything you need to know

We love this time of year. This week is Google I/O, our largest developer conference, where developer communities from around the world come together to learn, catch up, and have fun. Google Cloud and Google Workspace had a big presence at the show, talking about our commitment to building intuitive and helpful developer experiences that help you innovate freely and quickly. We do the heavy lifting, embedding the expertise from years of Google research in areas like AI/ML and security, so you can easily build secure and intelligent solutions for your customers.

So, what's happening at I/O this year? Let's start with the keynotes…

Google I/O keynote

Google and Alphabet CEO Sundar Pichai kicked off Day 1 of I/O with a powerhouse keynote highlighting recent breakthroughs in machine learning, including one of the fastest, most efficient, and most sustainable ML infrastructure hubs in the world. Google Cloud's machine learning cluster with Cloud TPU v4 Pods (in Preview) allows researchers and developers to make AI breakthroughs by training larger and more complex models faster, to power workloads like large-scale natural language processing (NLP), recommendation systems, and computer vision. With eight TPU v4 Pods in a single data center generating 9 exaflops of peak performance, we believe this system is the world's largest publicly available ML hub in terms of cumulative computing power, while operating at 90% carbon-free energy. Read more about the ML hub with Cloud TPU v4 here.

"Early access to TPU v4 has enabled us to achieve breakthroughs in conversational AI programming with our CodeGen, a 16-billion parameter auto-regressive language model that turns simple English prompts into executable code."
—Erik Nijkamp, Research Scientist, Salesforce

"…we saw a 70% improvement in training time for our 'extremely large' model when moving from TPU v3 to TPU v4… The exceptionally low carbon footprint of Cloud TPU v4 Pods was another key factor.…"
—Aidan Gomez, CEO and Co-Founder, Cohere

In the keynote, Sundar also announced new, user-focused AI-enabled features in Google Workspace that are designed to help people thrive in the hybrid workplace. New advancements in NLP enable summaries in Spaces to help users catch up on missed conversations with a helpful digest. Automated meeting transcription for Google Meet allows users who didn't attend a meeting to stay in the loop, or attendees to easily reference the discussion at a later time. Users can also now leverage portrait restore, which automatically improves video image quality, even on devices with lower-quality webcams. And they can filter out the reverberation in large spaces with hard surfaces, giving users "conference-room-quality" audio whether they are in their basement, kitchen, or garage. These new features deliver high-quality experiences, allowing Google Workspace users to benefit from our AI leadership.

Developer keynote

Next up, we heard from Jeanine Banks, Google Vice President of Developer Experiences and DevRel, and a number of product teams who led us through a flurry of exciting new updates about everything from Android to Flutter to Cloud.

On the Google Cloud front, we announced the preview of Cloud Run jobs, which can reduce the time developers spend performing administrative tasks such as database migrations, managing scheduled jobs like nightly reports, or doing batch data transformation. With Cloud Run jobs, you can execute your code on the highly scalable, fully managed Cloud Run platform, but only pay when your jobs are executing, and without having to worry about managing infrastructure. Learn more about Cloud Run jobs here.

Then, we announced the preview of AlloyDB for PostgreSQL, a new fully managed, relational database service that gives enterprises the performance, availability, and ease of management they need to migrate from their expensive legacy database systems onto Google Cloud. AlloyDB combines proven, disaggregated storage and compute that powers our most popular, globally available products such as Google Maps, YouTube, Search, and Ads with PostgreSQL, an open source database engine beloved by developers.

Our performance tests show that AlloyDB is four times faster for transaction processing and up to 100 times faster for analytical queries than standard PostgreSQL. It's also two times faster than AWS's comparable PostgreSQL-compatible service for transactional workloads. AlloyDB's fully managed database operations and ML-based management systems can relieve administrators and developers from daunting database management tasks. Of course, AlloyDB is fully PostgreSQL-compatible, meaning that developers can reuse their existing development skills and tools. It also offers an impressive 99.99% SLA inclusive of maintenance, and no complex licensing or I/O charges. You can learn more about AlloyDB for PostgreSQL here.

"Developers have many choices for building, innovating and migrating their applications. AlloyDB provides us with a compelling relational database option with full PostgreSQL compatibility, great performance, availability, and cloud integration. We are really excited to co-innovate with Google and can now benefit from enterprise grade features while cost-effectively modernizing from legacy, proprietary databases."
—Bala Natrajan, Sr. Director, Data Infrastructure and Cloud Engineering at PayPal

Cloud keynote – "The cloud built for developers"

Moving on to the Cloud keynote, Google Cloud's very own Aparna Sinha, Director of Product Management, Google Cloud, and Google Workspace's Matthew Izatt, Product Lead, gave the I/O audience exciting cloud updates. Aparna reiterated the benefits of Cloud Run jobs and AlloyDB, while showcasing how our services integrate nicely to give you a full stack specifically tailored for backend, web, mobile, and data analytics applications. These stacks also natively embed key security and AI/ML features for simplicity. Specifically, with build integrity, a new feature in Cloud Build, you get out-of-the-box build provenance and "Built by Cloud Build" attestations, including details like the images generated, the input sources, the build arguments, and the build time, helping you achieve up to SLSA Level 2 assurance. Next, you can use Binary Authorization to help ensure that only verified builds with the right attestations are deployed to production. You can get the same results as the experts, without having to be a security expert yourself.

Aparna also announced the preview of Network Analyzer, showing how developers can troubleshoot and isolate the root causes of complex service disruptions quickly and easily. The new Network Analyzer module in Network Intelligence Center can proactively detect network failures to prevent downtime caused by accidental misconfiguration, over-utilization, and suboptimal routes. Network Analyzer is available for services like Compute Engine, Google Kubernetes Engine (GKE), Cloud SQL, and more. You can visit the Network Analyzer page to learn more.

Something that really got the developer audience excited was the announcement of the preview of Immersive Stream for XR, which lets you render extended reality (XR) experiences using powerful Google Cloud GPUs and stream these experiences to mobile devices around the world. Immersive Stream for XR integrates the process of creating, maintaining, and scaling high-quality XR. In fact, XR content delivered using Immersive Stream for XR works on nearly every mobile device regardless of model, year, or operating system. And your users can enjoy these immersive experiences simply by clicking a link or scanning a QR code.

"We know that our new and existing customers expect unique and innovative campaigns for two of the most unique and innovative vehicles in our brand's history, and Google Cloud helped us create something very special to share with them."
—Albi Pagenstert, Head of Brand Communications and Strategy, BMW of North America

To learn more, visit xr.withgoogle.com, and check out this video to see for yourself!

And finally, Matthew brought it all home, highlighting the incredible innovation coming from Google Workspace. He detailed how we are making it easier for developers to extend and customize the suite, and to simplify integration with existing tools. For example, Google Workspace Add-ons allow you to build applications using your preferred stack and languages; you build once, and your application is available across Google Workspace apps such as Gmail, Google Calendar, Drive, and Docs. Matthew also shared how we are improving the development experience by allowing you to easily connect DevOps tools like PagerDuty to the Google Workspace platform. Finally, he noted the critical role that Google Workspace Marketplace can play in increasing the growth and engagement of your application. If you're interested in learning about how we're using machine learning to help make people's work day more productive and impactful, here's where you can find all of this week's Workspace news.

Sessions and workshops

Whew… that was a lot of cloud updates in three keynotes! But wait… there's more!

Google Cloud also had 14 cloud breakout sessions and 5 workshops at I/O, covering loads of different topics. Here's the full list, all available on demand:

Sessions
An introduction to MLOps with TFX
Asynchronous operations in your UI using Workflows and Firestore
Auto alerts for Firebase users with Functions, Logging, and BigQuery
Conversational AI for business messaging
Develop for Google Cloud Platform faster with Cloud Code
Extending Google Workspace with AppSheet's no-code platform and Apps Script
Fraudfinder: A comprehensive solution for real data science problems
From Colab to Cloud in five steps
Learn how to enable shared experiences across platforms
Learn to refactor Cloud applications in Go 1.18 with Generics
Modern Angular deployment with Google Cloud
Run your jobs on serverless
The future of app development with cloud databases
What's new in the world of Google Chat apps

Workshops
Apply responsible AI principles when building remote sensing datasets
Build an event-driven orchestration with Eventarc and Workflows
Building AppSheet apps with the new Apps Script connector
Faster model training and experimentation with Vertex AI
Spring Native on GCP – what, when, and why?

And finally, what would I/O be without some massively fun interactive experiences? Take our cloud island at I/O Adventure, featuring custom interactive demos and sandboxes. Here, attendees can explore content, chat with Googlers, and earn some really cool swag.

So that's a wrap on Google Cloud announcements at I/O. We'll have lots more exciting announcements in the next few months that will make your developer experience even simpler and more intuitive. In the meantime, join our developer community, Google Cloud Innovators, where you'll make lots of awesome new friends. And be sure to register for Google Cloud Next '22 in October. We can't wait to see you again!
Source: Google Cloud Platform

How Google Cloud and SAP solve big problems for big companies

With SAP Sapphire kicking off today in Orlando, we're looking forward to seeing our customers and discussing how they can make core processes more efficient and improve how they serve their customers.

One thing is certain to be top of mind: the global supply chain challenges facing the world today. They're affecting every business across every industry, from common household items that once filled store shelves and are now on backorder, to essential goods and services like food and medical treatments, which are at risk. Even cloud-native companies are making changes to ensure they have the insights, equipment, and other assets they need to continue serving customers.

We are proud to work with SAP on many initiatives that are driving results for our customers and helping them run more intelligent and sustainable companies. I'd like to highlight three of these important initiatives and how they are helping address global supply chain challenges.

Enabling more efficient migrations of critical workloads

We know a key barrier to entry in the cloud is the ability to easily migrate from on-premises environments. Our cloud provides a safe path to help companies including Johnson Controls, PayPal, and Kaeser Compressor digitize and solve large, complex business problems, reduce costs, scale without cycles of investment, and gain access to key services and capabilities that can unlock value and enable growth.

Singapore-based shipping company Ocean Network Express (ONE) has become more agile by running its mission-critical SAP workloads on Google Cloud and using our data analytics to improve operational efficiency and make faster decisions. ONE has gone from an on-premises data warehouse solution that would take a full day to load data from SAP S/4HANA, to using our BigQuery solution, which delivers business insights in minutes.

Since The Home Depot moved its critical SAP workloads to Google Cloud, the company has been able to shorten the time it takes to prepare a supply chain use case from 8 hours to 5 minutes by using BigQuery to analyze large volumes of internal and external data. This helps improve forecast accuracy and replenish inventory more effectively, since a new plan can be created when circumstances change unexpectedly with demand or a supplier.

Accelerating cloud benefits through RISE and LiveMigration

At Google Cloud, we have dedicated programs to help migrate SAP and other mission-critical workloads to our cloud with our Cloud Acceleration Program for SAP.

For SAP customers moving to Google Cloud, LiveMigration provides superior uptime and business continuity by eliminating the downtime required for planned infrastructure maintenance. Your SAP system continues running even while Google Cloud performs planned infrastructure maintenance upgrades, ensuring superior business continuity for your mission-critical workloads.

We are also proud to be a strategic partner in the RISE with SAP program, which helps accelerate cloud migration for SAP's global customer base while minimizing risks along the migration journey. This program provides solutions and expertise from SAP and technology ecosystem partners to help companies transform through process consulting, workload migration services, cloud infrastructure, and ongoing training and support. To secure your mission-critical workloads, SAP and Google Cloud can provide a 99.9% uptime SLA as part of the RISE with SAP program.

Many large manufacturers have taken advantage of RISE with SAP to forge a secure, proven path to our cloud, including Energizer Holdings Inc., a leading manufacturer and distributor of primary batteries, portable lights, and auto care products. Energizer has turned to RISE with SAP on Google Cloud to power its move to SAP S/4HANA. The company wants to automate essential business processes, improve customer service, and boost innovation. It had been using a private cloud solution but needed to gain flexibility while better containing costs.

"SAP S/4HANA for central finance will help us automate essential business processes, improve customer service, and fuel innovation that grows our company's leadership position globally. We selected RISE with SAP to begin our journey to SAP S/4HANA and maintain the freedom and flexibility to move at our own pace," said Energizer Chief Information Officer Dan McCarthy.

Another example is global automotive distributor Inchcape, which moved its mission-critical sales, marketing, finance, and operations systems and data to Google Cloud. With its diverse data sets now in a single, secure cloud platform, Inchcape is applying Google Cloud AI and ML capabilities to manage and analyze its data, automate operations, and ultimately transform the car ownership experience for millions.

"Google Cloud's close relationship with SAP and its strong technical expertise in this space were a big pull for us," said Mark Dearnley, Chief Digital Officer at Inchcape. "Ultimately, we wanted a headache-free RISE with SAP implementation and to unlock value for auto makers and consumers in all our regions, while continuing to have the choice and flexibility to modernize our 150-year-old business in a way that works for us."

A new intelligence layer for all SAP Google Cloud customers

When moving mission-critical workloads to the cloud, companies not only need to migrate safely, they also need to quickly realize value, which we enable with Google Cloud Cortex Framework, a layer of intelligence that integrates with SAP Business Technology Platform (SAP BTP). Google Cloud Cortex Framework provides reference architectures, deployment accelerators, and integration services for analytics scenarios.

Like many large e-commerce companies, Mercado Libre experienced skyrocketing transactions that more than doubled in 2020 as people sheltered at home during the pandemic, and the company is anticipating more growth. The Google Cloud Cortex Framework is enabling Mercado Libre to respond, run more efficiently, and make faster, data-driven decisions.

Continued partnership to support organizations around the world

Our longstanding partnership with SAP continues to yield exciting innovations for our customers, and we're honored to work with them to help customers address the ongoing impact of global supply chain challenges. We're looking forward to sharing new insights and innovations at SAP Sapphire this week, and to listening and learning from you about your plans and challenges, and how we can best support your transformation to the cloud.

Related article: 6 SAP companies driving business results with BigQuery – SAP systems generate large amounts of key operational data. Learn how six Google Cloud customers are leveraging BigQuery to drive value f…
Source: Google Cloud Platform

3co reinvents the digital shopping experience with augmented reality on Google Cloud

Giving people as close to a “try-before-you-buy” experience as possible is essential for retailers. With the move to online shopping further accelerated by the COVID-19 pandemic, many people are now comfortable shopping online for items they previously only considered buying in stores. The problem for shoppers is that it can still be difficult to get a truly hands-on feel for an item, given the limitations of even some of today’s most advanced augmented reality (AR) technologies. And while retailers continue to invest heavily in creating the most life-like digital experiences possible, the results often come up short for shoppers who have more digital buying options than ever.

To make AR experiences more convincing for shoppers, and for anyone wanting richer, more immersive experiences in entertainment and other industries, the depiction of real-world physical objects in digital spaces needs to continue to improve and evolve.

As avid plant lovers, we knew the experience of viewing and buying plants online was severely lacking. That prompted our initial exploration into rethinking what’s possible with AR: we built a direct-to-consumer app for buying plants in AR. However, during our time in the Techstars program, we quickly realized that improving how people see and experience plants online was just a fraction of a much bigger, multi-billion-dollar opportunity for us. Since 2018, 3co has been laser-focused (quite literally) on scaling 3D tech for all of e-commerce.

An automated 3D scanning system for photorealistic 3D modeling of retail products, designed by 3co and powered by Google Cloud.

Closing the gap between imagination and reality with Google Cloud

With that in mind, 3co began developing the breakthroughs needed in 3D computer vision. Our advanced artificial intelligence (AI) stack is designed to give companies an all-in-one 3D commerce platform to easily and cost-effectively create realistic 3D models of physical objects and stage them in virtual showrooms.

When building our AR platform, we quickly understood that engineering 3D simulations with sub-perceptual precision requires an enormous amount of compute power. Fortunately, the problems are parallelizable, but it simply isn’t possible to 3D model the complex real world with superhuman precision on conventional laptops or desktops.

As part of the Google for Startups Cloud Program, Startup Success Managers helped 3co plug into the full power of Google’s industry-leading compute capabilities. For several projects, we selected a scalable Compute Engine configuration powerful enough to solve even the most complex 3D graphics optimizations at scale. Today, with the A2 virtual machine, 3co leverages NVIDIA Ampere A100 Tensor Core GPUs to create more life-like 3D renderings over ten times faster. And this is just the beginning.

We’re also proud to have deployed a customized streaming GUI on top of Google’s monstrous machines, which allowed our colleagues across the world (including in Amsterdam and Miami) to plug and play with the latest 3D models on a world-class industrial GPU. I would highly recommend that companies solving very hard AI or 3D challenges with a distributed team consider adopting cloud resources in the same way.
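For readers curious what provisioning such a machine might look like, here is a minimal, hypothetical sketch using the google-cloud-compute Python client. The project, zone, and image choices are placeholders rather than 3co’s actual configuration, and details may need adjusting for your environment.

```python
# Hypothetical sketch: provision an A2 VM (one NVIDIA A100 is bundled with
# the a2-highgpu-1g machine type). Project, zone, and name are placeholders.
from google.cloud import compute_v1

def create_a100_vm(project: str, zone: str, name: str) -> None:
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
            disk_size_gb=200,
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/a2-highgpu-1g",
        disks=[boot_disk],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
        # GPU VMs cannot live-migrate, so host maintenance must terminate the VM.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE", automatic_restart=True),
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation completes

create_a100_vm("my-project", "us-central1-a", "render-node-1")
```

In practice you would also attach Local SSDs and install the NVIDIA driver stack on top of a base image like this before running rendering or training jobs.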
It was a delight to see Blender render gigabyte 3D models faster than ever before in my life.

GUI for 3D modeling, streamed from Google Cloud computers by 3co, which unlocked previously impossible collaborative workflows on gigabyte-sized 3D models.

Equally critical, with our technology, 3D artists in retail, media and entertainment, and other industries pressured to deliver more (and more immersive) AR experiences can cut the cost and time to generate photorealistic 3D models by as much as tenfold. We know this from our own work, because we’ve seen the computing costs to generate the highest-quality 3D experiences drop significantly, even though we run an advanced Compute Engine setup loaded with powerful GPUs, high-end CPUs, and massive amounts of RAM. If the goal is to scale industry-leading compute power quickly for a global customer base, Google Cloud is the proper solution.

Cloud Storage is another key but often overlooked component of the Google Cloud ecosystem, and it is critical for 3co. We need the high throughput, low latency, and instant scalability delivered by local cloud SSDs to support the massive amounts of data we generate, store, and stream. The Local SSDs complement our A2 Compute Engine instances and are physically attached to the servers hosting the virtual machine instances. This local configuration supports extremely high input/output operations per second (IOPS) with very low latency compared to persistent disks.

To top it off, Cloud Logging delivers real-time log management at exabyte scale, ingesting analytic events that are streamed to data lakes with Pub/Sub, so we can know, while enjoying the beach here in Miami, Florida, that everything is going smoothly in the cloud.

Building the 3co AI stack with TensorFlow

Building one of the world’s most advanced 3D computer vision solutions would not have been possible without TensorFlow and its comprehensive ecosystem of tools, libraries, and community resources. Since the launch of TensorFlow in 2015, I’ve personally built dozens of deep learning systems using this battle-hardened, open source machine learning framework from Google. Through TensorFlow on Google Cloud, 3co is able to scale its compute power for the creation of truly photorealistic digital models of physical objects, down to microscopic computation of material textures and deep representations of surface light transport from all angles.

Most recently, 3co has been making massive progress on top of the TensorFlow implementation of Neural Radiance Fields (“NeRF”, Mildenhall et al. 2020). We are humbled to note that this breakthrough AI in TensorFlow truly is disruptive for the 3D modeling industry: we anticipate the next decade in 3D modeling will be increasingly shaped and colored by similar neural networks (I believe the key insight of the original NeRF authors was to force a neural network to learn a physics-based model of light transport). For our contribution, 3co is now (1) adapting NeRF-like neural networks to optimally leverage sensor data from various leading devices for 3D computer vision, and (2) forcing these neural networks to learn industry-standard 3D modeling data structures, which can instantly plug and play on the leading 3D platforms.

As Isaac Newton said, “If I have seen further, it is by standing on the shoulders of giants.” That is, tech giants. In several ways, TensorFlow is the go-to solution both for prototyping and for large-scale deployment of AI in general.
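To make the NeRF idea above a little more concrete, here is a deliberately toy sketch of a NeRF-style network in TensorFlow: a positional encoding feeding a small MLP that predicts color and density per 3D point. This is an illustrative simplification, not 3co’s actual architecture or the reference NeRF implementation.

```python
import numpy as np
import tensorflow as tf

def positional_encoding(x, num_bands=6):
    # Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)] features,
    # following the frequency-encoding idea from the NeRF paper.
    feats = [x]
    for k in range(num_bands):
        feats.append(tf.sin((2.0 ** k) * np.pi * x))
        feats.append(tf.cos((2.0 ** k) * np.pi * x))
    return tf.concat(feats, axis=-1)

def make_nerf_style_mlp(hidden=256, depth=8):
    # Toy NeRF-style MLP: encoded 3D position in, RGB color + density out.
    layers = [tf.keras.layers.Dense(hidden, activation="relu") for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(4))  # (r, g, b, sigma)
    return tf.keras.Sequential(layers)

model = make_nerf_style_mlp()
points = tf.random.uniform((1024, 3))          # sampled 3D points along camera rays
outputs = model(positional_encoding(points))   # predicted color and density per point
```

A full NeRF additionally samples points along camera rays and integrates the predicted densities into pixel colors, which is where most of the compute, and hence the need for A100-class hardware, comes from.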
Under the hood, TensorFlow uses a sophisticated compiler (XLA) to optimize how computations are allocated on the underlying hardware.

3co achieved a 10x speed-up in neural network training time (for inverse rendering optimization) by compiling its computations with TensorFlow XLA.

Unlike its competitors (e.g. PyTorch, JAX), TensorFlow can also compile binaries to run on TPUs and, via TensorFlow Lite and TensorFlow.js, across device architectures such as iOS, Android, and JavaScript environments. This ability is important because 3co is committed to delivering 3D computer vision wherever it is needed, with maximum speed and accuracy. Through TensorFlow on Google Cloud, 3co has been able to speed up experimental validation of patent-pending 3D computer vision systems that can run the same TensorFlow code across smartphones, LIDAR scanners, AR glasses, and much more.

3co is developing an operating system for 3D computer vision powered by TensorFlow, in order to unify development of a single codebase for AI across the most common sensors and processors.

TensorFlow also enables 3co’s neural networks to train faster, through an easy API for distributed training across many computers. Distributed deep learning was the focus of my master’s thesis in 2013 (inspired by work from Jeff Dean, Andrew Ng, and Google Brain), so you can imagine how excited I was to see Google optimize these industry-leading capabilities for the open source community over the following years. Parallelization of deep learning has consistently proven essential for creating this advanced AI, and 3co is no exception to this rule. Faster AI training also means faster conclusion of R&D experiments. As Sam Altman says, “The number one predictor of success for a very young startup: rate of iteration.” From day one, TensorFlow was built to speed up Google’s AI computing challenges at the biggest scale, but it also “just works” at the earliest stages of exploration.

Through TensorFlow on Google Cloud, 3co is steadily improving our capabilities for autonomous photorealistic 3D modeling. Simple and flexible architectures for fast experimentation enable us to quickly move from concept to code, and from code to state-of-the-art deployed ML models. Through TensorFlow, Google has given 3co a powerful tool to better serve customers with modern AI and computer vision.

In the future, 3co has big plans involving supercomputers built from Google Cloud Tensor Processing Units (TPUs), with which we expect to achieve even greater speed and cost optimization. Running TensorFlow on Cloud TPUs requires a little extra work from the AI developer, but Google is increasingly making it easier to plug and play on these gargantuan computing architectures. They truly are world-class servers for AI. I remember being as excited as a little boy in a candy store reading the research Google published back in 2017 on its TPUs, the culmination of R&D by literally dozens of very smart computer engineers. Since then, several generations of TPUs have been deployed internally at Google for many kinds of applications (e.g. Google Translate) and have increasingly been made more useful and accessible. Startups like 3co, and our customers, can benefit enormously here. Through the use of advanced processors like TPUs, 3co expects to parallelize its AI to perform photorealistic 3D modeling of real scenes in real time. Imagine the possibilities for commerce, gaming, entertainment, design, and architecture that this ability could unlock.
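As a concrete illustration of the XLA path mentioned at the start of this section, the snippet below asks TensorFlow to JIT-compile a function with XLA. The computation itself is just a stand-in, not 3co’s inverse-rendering code.

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # request XLA compilation of this function
def heavy_step(x, w):
    # Stand-in for an expensive training or rendering step.
    return tf.nn.relu(tf.matmul(x, w))

x = tf.random.normal((2048, 2048))
w = tf.random.normal((2048, 2048))

y = heavy_step(x, w)  # first call compiles with XLA; later calls reuse the compiled program
```

How much XLA helps depends on the workload; large matrix multiplies and chains of element-wise operations that XLA can fuse tend to benefit the most.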
Scaling 3D commerce with Google Cloud and credits

3co’s participation in the Google for Startups Cloud Program (facilitated via Techstars, whom we also can’t thank enough) has been instrumental to our success in closing the gap between imagination and reality. It’s a mission we’ve been working on for years, and one we will continue to hone for many years to come. And this success is thanks to the Google for Startups Success team: they are truly amazing, and they genuinely care about you. If you’re a startup founder, just reach out to them; they really do work wonders. We especially want to highlight the Google Cloud research credits, which gave 3co access to vastly greater amounts of compute power. We are so grateful to Google Cloud for enabling 3co to scale its 3D computer vision services to customers worldwide. I love that 3co is empowered by Google to help many people see the world in a new light.

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.

Related article: The Future of Data: Unified, flexible, and accessible
Source: Google Cloud Platform

Security through collaboration: Building a more secure future with Confidential Computing

At Google Cloud, we believe that the protection of our customers’ sensitive data is paramount, and encryption is a powerful mechanism to help achieve this goal. For years, we have supported encryption in transit when our customers ingest their data to bring it to the cloud. We’ve also long supported encryption at rest for all customer content stored in Google Cloud. To complete the full data protection lifecycle, we can protect customer data when it’s processed through our Confidential Computing portfolio.

Confidential Computing products from Google Cloud protect data in use by performing computation in a hardware-isolated environment that is encrypted with keys managed by the processor and unavailable to the operator. These isolated environments help prevent unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data in public cloud infrastructure. Secure isolation has always been a critical component of our cloud infrastructure; with Confidential Computing, this isolation is cryptographically reinforced. Google Cloud’s Confidential Computing products leverage security components in AMD EPYC™ processors, including AMD Secure Encrypted Virtualization (SEV) technology.

Building trust in Confidential Computing through industry collaboration

Part of our mission to bring Confidential Computing technology to more cloud workloads and services is to make sure that the hardware and software used to build these technologies are continuously reviewed and tested. We evaluate different attack vectors to help ensure Google Cloud Confidential Computing environments are protected against a broad range of attacks. As part of this evaluation, we recognize that the secure use of our services, and of the Internet ecosystem as a whole, depends on interactions with applications, hardware, software, and services that Google doesn’t own or operate.

The Google Cloud Security team, Google Project Zero, and the AMD firmware and product security teams collaborated for several months to conduct a detailed review of the technology and firmware that powers AMD Confidential Computing technology. This review covered both Secure Encrypted Virtualization (SEV) capable CPUs and the next generation of Secure Nested Paging (SEV-SNP) capable CPUs, which protect confidential VMs against the hypervisor itself. The goal of this review was to work together to analyze the firmware and technologies AMD uses to help build Google Cloud’s Confidential Computing services, and to further build trust in these technologies.

This in-depth review focused on the implementation of the AMD secure processor in the third-generation AMD EPYC processor family delivering SEV-SNP. SNP further improves the posture of Confidential Computing by removing the hypervisor from the trust boundary of the guest, allowing customers to treat the cloud service provider as another untrusted party. The review covered several AMD secure processor components and evaluated multiple different attack vectors. The collective group reviewed the design and source code implementation of SEV, wrote custom test code, and ran hardware security tests, attempting to identify any potential vulnerabilities that could affect this environment.

PCIe hardware pentesting using an IO screamer

Working on this review, the security teams identified and confirmed potential issues of varying severity.
AMD was diligent in fixing all applicable issues and now offers updated firmware through its OEM channels. Google Cloud’s AMD-based Confidential Computing solutions now include all the mitigations implemented during the security review.

“At Google, we believe that investing in security research outside of our own platforms is a critical step in keeping organizations across the broader ecosystem safe,” said Royal Hansen, vice president of Security Engineering at Google. “At the end of the day, we all benefit from a secure ecosystem that organizations rely on for their technology needs, and that is why we’re incredibly appreciative of our strong collaboration with AMD on these efforts.”

“Together, AMD and Google Cloud are continuing to advance Confidential Computing, helping enterprises to move sensitive workloads to the cloud with high levels of privacy and security, without compromising performance,” said Mark Papermaster, AMD’s executive vice president and chief technology officer. “Continuously investing in the security of these technologies through collaboration with the industry is critical to providing customer transformation through Confidential Computing. We’re thankful to have partnered with Google Cloud and the Google Security teams to advance our security technology and help shape future Confidential Computing innovations to come.”

Reviewing trusted execution environments for security is difficult given the closed-source firmware and proprietary hardware components involved. This is why research and collaborations such as this are critical to improving the security of foundational components that support the broader Internet ecosystem. AMD and Google believe that transparency helps provide further assurance to customers adopting Confidential Computing, and to that end AMD is working toward a model of open source security firmware.

With the analysis now complete and the vulnerabilities addressed, the AMD and Google security teams agree that the AMD firmware which enables Confidential Computing solutions meets an elevated security bar for customers, as the firmware design updates mitigate several bug classes and offer a way to recover from vulnerabilities. More importantly, the review also found that Confidential VMs are protected against the broad range of attacks described in the review.

Google Cloud’s Confidential Computing portfolio

Google Cloud Confidential VMs, Dataproc Confidential Compute, and Confidential GKE Nodes have enabled high levels of security and privacy to address our customers’ data protection needs without compromising usability, performance, and scale. Our mission is to make this technology ubiquitous across the cloud. Confidential VMs run on hosts with AMD EPYC processors that feature AMD Secure Encrypted Virtualization (SEV). Incorporating SEV into Confidential VMs provides benefits and features including:

Isolation: Memory encryption keys are generated by the AMD Secure Processor during VM creation and reside solely within the AMD Secure Processor. Other VM encryption keys, such as those for disk encryption, can be generated and managed by an external key manager or in Google Cloud HSM. Neither set of keys is accessible to Google Cloud, offering strong isolation.

Attestation: Confidential VMs use Virtual Trusted Platform Module (vTPM) attestation.
Every time a Confidential VM boots, a launch attestation report event is generated and posted to the customer’s Cloud Logging, which gives administrators the opportunity to act as necessary.

Performance: Confidential Computing offers high performance for demanding computational tasks. Enabling Confidential VMs has little or no impact on most workloads. (A minimal provisioning sketch appears at the end of this post.)

The future of Confidential Computing and secure platforms

While there are no absolutes in computer security, collaborative research efforts help uncover security vulnerabilities that can emerge in complex environments and help protect Confidential Computing solutions from threats today and into the future. Ultimately, this helps us increase levels of trust for customers. We believe Confidential Computing is an industry-wide effort that is critical for securing sensitive workloads in the cloud, and we are grateful to AMD for their continued collaboration on this journey. To read the full security review, visit this page.

Acknowledgments

We thank the many Google security team members who contributed to this ongoing security collaboration and review, including James Forshaw, Jann Horn, and Mark Brand. We are grateful for the open collaboration with AMD engineers, and wish to thank David Kaplan, Richard Relph, and Nathan Nadarajah for their commitment to product security. We would also like to thank AMD leadership: Ab Nacef, Prabhu Jayanna, Hugo Romero, Andrej Zdravkovic, and Mark Papermaster for their support of this joint effort.

Related article: Expanding Google Cloud’s Confidential Computing portfolio
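As the provisioning sketch promised above: here is a minimal, hypothetical example of creating a Confidential VM with the google-cloud-compute Python client. The project, zone, names, and image are placeholders, and you should consult the Confidential VM documentation for currently supported machine types and images.

```python
# Hypothetical sketch: create a Confidential VM (AMD SEV) with the
# google-cloud-compute client. Names and image choices are placeholders.
from google.cloud import compute_v1

def create_confidential_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        # Confidential VMs run on AMD EPYC hosts, exposed via N2D machine types.
        machine_type=f"zones/{zone}/machineTypes/n2d-standard-4",
        confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
            enable_confidential_compute=True,
        ),
        # Confidential VMs cannot live-migrate, so maintenance must terminate them.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts",
                ),
            )
        ],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    ).result()

create_confidential_vm("my-project", "us-central1-a", "confidential-vm-1")
```

Once the VM is up, the launch attestation events described above appear in Cloud Logging for that instance.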
Source: Google Cloud Platform

Cloud TPU VMs are generally available

Last year, we introduced Cloud TPU VMs on Google Cloud to make it easier to use TPU hardware by providing direct access to TPU host machines. Today, we are excited to announce the general availability (GA) of TPU VMs.

With Cloud TPU VMs you can work interactively on the same hosts where the physical TPU hardware is attached. Our rapidly growing TPU user community has enthusiastically adopted this access mechanism, because it not only makes for a better debugging experience, it also enables certain training setups, such as distributed reinforcement learning, that were not feasible with the TPU Node (network-attached) architecture.

What’s new for the GA release?

Cloud TPUs are now optimized for large-scale ranking and recommendation workloads. We are also thrilled to share that Snap, an early adopter of this new capability, achieved a ~4.65x perf/TCO improvement on its business-critical ad ranking workload. Here are a few highlights from Snap’s blog post on training large-scale recommendation models:

TPUs can offer much faster training speed and significantly lower training costs for recommendation system models than CPUs.

TensorFlow for Cloud TPU provides a powerful API to handle large embedding tables and fast lookups.

On a TPU v3-32 slice, Snap was able to get ~3x better throughput (-67.3% throughput on A100) with 52.1% lower cost compared to an equivalent A100 configuration (~4.65x perf/TCO).

Ranking and recommendation

With the TPU VMs GA release, we are introducing the new TPU Embedding API, which can accelerate ML-based ranking and recommendation workloads.

Many businesses today are built around ranking and recommendation use cases, such as audio/video recommendations, product recommendations (apps, e-commerce), and ad ranking. These businesses rely on ranking and recommendation algorithms to serve their users and drive their business goals. In the last few years, the approaches to these algorithms have evolved from purely statistical to deep neural network-based. These modern DNN-based algorithms offer greater scalability and accuracy, but they can come at a cost: they tend to use large amounts of data and can be difficult and expensive to train and deploy with traditional ML infrastructure.

Embedding acceleration with Cloud TPU can solve this problem at a lower cost. The Embedding API can efficiently handle large amounts of data, such as embedding tables, by automatically sharding them across hundreds of Cloud TPU chips in a pod, all connected to one another via the custom-built interconnect.

To help you get started, we are releasing the TF2 ranking and recommendation APIs as part of the TensorFlow Recommenders library. We have also open sourced the DLRM and DCN v2 ranking models in the TF2 Model Garden, and detailed tutorials are available here.

Framework support

The TPU VM GA release supports the three major frameworks (TensorFlow, PyTorch, and JAX), now offered through three optimized environments for easy setup with the respective framework. The GA release has been validated with TensorFlow v2-tf-stable, PyTorch/XLA v1.11, and JAX 0.3.6.

TPU VM-specific features

TPU VMs offer several additional capabilities over the TPU Node architecture thanks to the local execution setup, i.e. the TPU hardware is connected to the same host where users execute their training workloads.

Local execution of the input pipeline

The input data pipeline executes directly on the TPU hosts.
This saves precious computing resources that were previously consumed by instance groups for PyTorch/JAX distributed training. In the case of TensorFlow, the distributed training setup now requires only one user VM, with the data pipeline executing directly on the TPU hosts.

The following study summarizes the cost comparison for Transformer (FairSeq; PyTorch/XLA) training executed for 10 epochs on the TPU VM vs. TPU Node architecture (network-attached Cloud TPUs). Source: Google internal data (published benchmark conducted on Cloud TPU by Google).

Distributed reinforcement learning with TPU VMs

Local execution on the host with the accelerator also enables use cases such as distributed reinforcement learning. Canonical works in this domain, such as seed-RL, IMPALA, and PodTracer, have been developed using Cloud TPUs.

“…we argue that the compute requirements of large scale reinforcement learning systems are particularly well suited for making use of Cloud TPUs, and specifically TPU Pods: special configurations in a Google data center that feature multiple TPU devices interconnected by low latency communication channels.” —PodTracer, DeepMind

Custom ops support for TensorFlow

With direct execution on the TPU VM, users can now build their own custom ops, such as TensorFlow Text. With this feature, users are no longer bound to TensorFlow runtime release versions.

What are our customers saying?

“Over the last couple of years, Kakao Brain has developed numerous groundbreaking AI services and models, including minDALL-E, KoGPT and, most recently, RQ-Transformer. We’ve been using TPU VM architecture since its early launch on Google Cloud, and have experienced significant performance improvements compared to the original TPU node set up. We are very excited about the new features added in the Generally Available version of TPU VM, such as Embeddings API, and plan to continue using TPUs to solve some of the globe’s biggest ‘unthinkable questions’ with solutions enabled by its lifestyle-transforming AI technologies.” —Kim Il-doo, CEO of Kakao Brain

Additional customer testimonials are available here.

How to get started?

To start using TPU VMs, you can follow one of our quickstarts or tutorials. If you are new to TPUs, you can explore our concept deep-dives and system architecture. We strive to make Cloud TPUs, Google’s advanced AI infrastructure, universally useful and accessible.

Related article: Google showcases Cloud TPU v4 Pods for large model training
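To give a flavor of what getting started looks like in code, here is a minimal sketch of initializing TensorFlow on a TPU VM and training under TPUStrategy. It assumes the script runs on the TPU VM itself, and the model is a placeholder rather than a real ranking or recommendation workload.

```python
# Minimal sketch: run Keras training on a Cloud TPU VM with TPUStrategy.
# Assumes this script is executed on the TPU VM itself (hence tpu="local").
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Placeholder model; a real ranking workload would use embedding layers here.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy in-memory dataset; on a TPU VM the input pipeline also runs on the TPU host.
x = tf.random.normal((1024, 32))
y = tf.random.normal((1024, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(128, drop_remainder=True)

model.fit(dataset, epochs=2)
```

The ranking and recommendation tutorials mentioned above build on the same resolver and strategy setup, with embedding-based models in place of the placeholder.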
Source: Google Cloud Platform

The Future of Data: Unified, flexible, and accessible

As the volume of data that people and businesses produce continues to grow exponentially, it goes without saying that data-driven approaches are critical for tech companies and startups across all industries. But our conversations with customers, as well as numerous industry commentaries, reiterate that managing data and extracting value from it remains difficult, especially at scale.

Numerous factors underpin the challenges, including access to and storage of data, inconsistent tools, new and evolving data sources and formats, compliance concerns, and security considerations. To help you identify and solve these challenges, we’ve created a new whitepaper, “The future of data will be unified, flexible, and accessible,” which explores many of the most common reasons our customers tell us they’re choosing Google Cloud to get the most out of their data.

For example, you might need to combine data in legacy systems with new technologies. Does this mean moving all your data to the cloud? Should it be in one cloud or distributed across several? How do you extract real value from all of this data without creating more silos?

You might also be limited to analyzing your data in batch instead of processing it in real time, adding complexity to your architecture and necessitating expensive maintenance to combat latency. Or you might be struggling with unstructured data, with no scalable way to analyze and manage it. Again, the factors are numerous, but many of them come down to inadequate access to data, often exacerbated by silos, and insufficient ability to process and understand it.

The modern tech stack should be a streaming stack that scales with your data, provides real-time analytics, incorporates and understands different types of data, and lets you use AI/ML to predictively derive insights and operationalize processes. These requirements mean that to effectively leverage your data assets:

Data should be unified across your entire company, even across suppliers, partners, and platforms, eliminating organizational and technology silos.

Unstructured data should be unlocked and leveraged in your analytics strategy.

The technology stack should be unified and flexible enough to support use cases ranging from analysis of offline data to real-time streaming and application of ML, without maintaining multiple bespoke tech stacks.

The technology stack should be accessible on demand, with support for different platforms, programming languages, tools, and open standards compatible with your employees’ existing skill sets.

With these requirements met, you’ll be equipped to maximize your data, whether that means discerning and adapting to changing customer expectations or understanding and optimizing how your data engineers and data scientists spend their time.

In the coming weeks, we’ll explore aspects of the whitepaper in additional blog posts. But if you’re ready to dive in now, and to steer your tech company or startup toward success by making your data work better for you, click here to download your copy, free of charge.

Related article: Celebrating our tech and startup customers
Source: Google Cloud Platform