The Year in Google Cloud — 2025

In the AI era, when one year can feel like 10, you’re forgiven for forgetting what happened last month, much less what happened all the way back in January. To jog your memory, we pulled the readership data for top product and company news of 2025. And because we publish a lot of great thought leadership and customer stories, we pulled that data too. Long story short: the most popular stories largely mapped to our biggest announcements. But not always — there were more than a few sleeper hits on this year’s list. Read on to relive this huge year, and perhaps discover a few gems that you may have missed. 

Building tomorrow, today: 2025 customer AI innovation highlights with Google Cloud

January
2025 started strong with important new virtual machine offerings, foundational AI tooling, and tools for both Kubernetes and data professionals. We also launched our “How Google Does It” series, looking at the internal systems and engineering principles behind how we run a modern threat-detection pipeline. We showed developers how to get started with JAX and made AI predictions for the year ahead. Readers were excited to learn about how L’Oréal built its MLOps platform and Deutsche Börse’s pioneering work on cloud-native financial trading.
Product news

Simplify the developer experience on Kubernetes with KRO

Blackwell is here — new A4 VMs powered by NVIDIA B200 now in preview

Introducing Vertex AI RAG Engine: Scale your Vertex AI RAG pipeline with confidence

Introducing BigQuery metastore, a unified metadata service with Apache Iceberg support

C4A, the first Google Axion Processor, now GA with Titanium SSD

Thought leadership

How Google Does It: Making threat detection high-quality, scalable, and modern

2025 and the Next Chapter(s) of AI

Customer stories

How L’Oréal Tech Accelerator built its end-to-end MLOps platform

Trading in the Cloud: Lessons from Deutsche Börse Group’s cloud-native trading engine

February
There are AI products, and then there are products enhanced by AI. This month’s top launch, Gen AI Toolbox for Databases, falls into the latter category. This was also the month readers got serious about learning, with blogs about upskilling, resources, and certifications topping the charts. The fruits of our partnership with Anthropic made an appearance in our best-read list, and engineering leaders detailed Google’s extensive efforts to optimize AI system energy consumption. Execs ate up an opinion piece about how agents will unlock insights into unstructured data (which makes up 90% of enterprises’ information assets), and digested a sobering report on AI and cybercrime. During the Mobile World Congress event, we saw considerable interest in our work with telco leaders like Vodafone Italy and Amdocs.
Product and company news

Announcing public beta of Gen AI Toolbox for Databases

Get Google Cloud certified in 2025—and see why the latest research says it matters

Discover Google Cloud careers and credentials in our new Career Dreamer

Announcing Claude 3.7 Sonnet, Anthropic’s first hybrid reasoning model, is available on Vertex AI

Thought leadership

Designing sustainable AI: A deep dive into TPU efficiency and lifecycle emissions

From dark data to bright insights: How AI agents make data simple

New AI, cybercrime reports underscore need for security best practices

Customer stories

Transforming data: How Vodafone Italy modernized its data architecture in the cloud

AI-powered network optimization: Unlocking 5G’s potential with Amdocs

March
Back when we announced it, our intent to purchase cybersecurity startup Wiz was Google’s largest deal ever, and the biggest tech deal of the year. We built on that security momentum with the launch of AI Protection. We also spread our wings to the Nordics with a new region, and announced the Gemma 3 open model on Vertex AI. Meanwhile, we explained the threat that North Korean IT workers pose to employers, gave readers a peek under the hood of the Colossus file system, and reminisced about what we’ve learned over 25 years of building data centers. Readers were interested in Levi’s approach to data and weaving it into future AI efforts, and in honor of the GDC Festival of Gaming, our AI partners shared some new perspectives on “living games.”
Product and company news

Google + Wiz: Strengthening Multicloud Security

Announcing AI Protection: Security for the AI era

Hej Sverige! Google Cloud launches new region in Sweden

Announcing Gemma 3 on Vertex AI

Thought leadership

The ultimate insider threat: North Korean IT workers

Colossus under the hood: How we deliver SSD performance at HDD prices

3 key lessons from 25 years of warehouse scale computing

Customer stories

Levi’s seamless data strategy: How tailor-made AI keeps an icon from getting hemmed in

Co-op mode: New partners driving the future of gaming with AI

April
With April came Google Cloud Next, our flagship annual conference. From Firebase Studio, Ironwood TPUs, and Google Agentspace, to Vertex AI, Cloud WAN, and Gemini 2.5, there were so many bangers that it’s hard to limit ourselves to just a few stories (for the whole list, there’s always the event recap). Meanwhile, our systems team discussed innovations to keep data center infrastructure’s thermal envelope in check. And at the RSA Conference, we unveiled our vision for the agentic security operations center of the future. On the customer front, we highlighted the startups who played a starring role at Next, and took a peek behind the curtain of The Wizard of Oz at Sphere.
Product and company news

Introducing Firebase Studio and agentic developer tools to build with Gemini

Introducing Ironwood TPUs and new innovations in AI Hypercomputer

Vertex AI offers new ways to build and manage multi-agent systems

Scale enterprise search and agent adoption with Google Agentspace

Cloud WAN: Connect your global enterprise with a network built for the AI era

Gemini 2.5 brings enhanced reasoning to enterprise use cases

The dawn of agentic AI in security operations at RSAC 2025

Thought leadership

AI infrastructure is hot. New power distribution and liquid cooling infrastructure can help

3 new ways to use AI as your security sidekick

Customer stories

Global startups are building the future of AI on Google Cloud

The AI magic behind Sphere’s upcoming ‘The Wizard of Oz’ experience

May
School was almost out, but readers got back into learning mode to get certified as generative AI leaders. You were also excited about new gen AI media models in Vertex AI, and the availability of Anthropic’s Claude Opus 4 and Claude Sonnet 4. We also learned that you’re very excited to use AI to generate SQL code, and about using Cloud Run as a destination for your AI apps. We outlined the steps for building a well-defined data strategy, and showed governments how AI can actually improve their security posture. And on the customer front, we launched our “Cool Stuff Customers Built” round-ups, and ran stories from Formula E and MLB.
Product and company news

Google Cloud announces first-of-its-kind generative AI leader certification

Expanding Vertex AI with the next wave of generative AI media models

Announcing Anthropic’s Claude Opus 4 and Claude Sonnet 4 on Vertex AI

Thought leadership

Getting AI to write good SQL: Text-to-SQL techniques explained

AI deployments made easy: Deploy to Cloud Run from AI Studio or any MCP client

Building a data strategy for the AI era

How governments can use AI to improve threat detection and reduce cost

Customer stories

Cool Stuff Customers Built: May Edition

Pushing the limits of electric mobility: Formula E’s Mountain Recharge

Tuning in with AI: How MLB My Daily Story creates truly personalized highlight videos

June
Up until this point, the promise of generative AI was largely around text and code. The launch of Veo 3 changed all that. Developers writing and deploying AI apps saw the availability of GPUs on Cloud Run as a big win, and we continued our steady drumbeat of Gemini innovation with 2.5 Flash and Flash-Lite. We also shared our thoughts on securing AI agents. And to learn how to actually build these agents, readers turned to stories about Box, British asset manager Schroders, and French luxury conglomerate LVMH (home of Louis Vuitton, Chanel, Sephora, and more).
Product and company news

You dream it, Veo creates it: Veo 3 is now available for everyone in public preview on Vertex AI

Cloud Run GPUs, now GA, makes running AI workloads easier for everyone

Gemini momentum continues with launch of 2.5 Flash-Lite and general availability of 2.5 Flash and Pro on Vertex AI

Thought leadership

Ask OCTO: Making sense of AI agents

Cloud CISO Perspectives: How Google secures AI agents

Customer stories

The secret to document intelligence: Box builds Enhanced Extract Agents with A2A framework

How Schroders built its multi-agent financial analysis research assistant

Inside LVMH’s perfectly manicured data estate, where luxury AI agents are taking root

July
Readers took a break from reading about AI to read about network infrastructure — the new Sol transatlantic cable, to be precise. Then it was back to AI: new video generation models in Vertex AI; a crucial component for building stateful, context-aware agents; and a new toolset for connecting BigQuery data to Agent Development Kit (ADK) and Model Context Protocol (MCP) environments. Developers cheered the integration between Cloud Run and Docker Compose, and executive audiences enjoyed a listicle on actionable, real-world uses for AI agents.
On the security front, we took a back-to-basics approach this month, exploring the persistence of some cloud security problems. And then, back to AI again, with our Big Sleep agent. Readers were also interested in how AI is alleviating record-keeping for nurses at HCA Healthcare, Ulta Beauty’s data warehousing and mobile record keeping initiatives, and how SmarterX migrated from Snowflake to BigQuery.
Product and company news

Strengthening network resilience with the Sol transatlantic cable

Veo 3 and Veo 3 Fast are now generally available on Vertex AI

Announcing Vertex AI Agent Engine Memory Bank available for everyone in preview

BigQuery meets ADK & MCP: Accelerate agent development with BigQuery’s new first-party toolset

From localhost to launch: Simplify AI app deployment with Cloud Run and Docker Compose

Thought leadership

Secure cloud. Insecure use. (And what you can do about it)

Our Big Sleep agent makes a big leap

Customer stories

How nurses are charting the future of AI at America’s largest hospital network, HCA Healthcare

Ulta Beauty redefines beauty retail with BigQuery

SmarterX’s migration from Snowflake to BigQuery accelerated model building and cut costs in half

August
AI is compute- and energy-intensive; in a new technical paper, we released concrete numbers about our AI infrastructure’s power consumption. Then people went [nano] bananas for Gemini 2.5 Flash Image on Vertex AI, and developers got a jump on their AI projects with a wealth of technical blueprints to work from. The summer doldrums didn’t stop our security experts from tackling the serious challenge of cyber-enabled fraud. We also took a closer look at the specific agentic tools empowering workers at Wells Fargo, and how Keeta processes 11 million blockchain transactions per second with Spanner.
Product and company news

How much energy does Google’s AI use? We did the math

Building next-gen visuals with Gemini 2.5 Flash Image (aka nano-banana) on Vertex AI

101+ gen AI use cases with technical blueprints

Thought leadership

New Threat Horizons report details evolving risks — and defenses

How CISOs and boards of directors can help fight cyber-enabled fraud

How AI-powered weather forecasting can transform energy operations

Customer stories

How Wells Fargo is using Google Cloud AI to empower its workforce with agentic tools

How Keeta processes 11 million financial transactions per second on the blockchain with Spanner

September
AI is cool tech, but how do you monetize it? One answer is the Agent Payments Protocol, or AP2. Developers and data scientists preparing for AI flocked to blogs about new Data Cloud offerings, the 2025 DORA Report, and new trainings. Executives took in our thoughts on building an agentic data strategy, and took notes on the best prompts with which to kickstart their AI usage. And because everybody is impacted by the AI era, including business leaders, we explained what it means to be “bilingual” in AI and security. Then, at Google’s AI Builders Forum, startups described how Google’s AI, infrastructure, and services are supporting their growth. Not to be left out, enterprises like Target and Mr. Cooper also showed off their AI chops.
Product and company news

Powering AI commerce with the new Agent Payments Protocol (AP2)

The new data scientist: From analyst to agentic architect

Announcing the 2025 DORA Report: State of AI-Assisted Software Development

Back to AI school: New Google Cloud training to future-proof your AI skills

Thought leadership

Building better data platforms, for AI and beyond

Boards should be ‘bilingual’ in AI, security to gain advantage

A leader’s guide to five essential AI prompts

Customer stories

How Google Cloud’s AI tech stack powers today’s startups

From query to cart: Inside Target’s search bar overhaul with AlloyDB AI

How Mr. Cooper assembled a “team” of AI agents to handle complex mortgage questions

October
Welcome to the Gemini Enterprise era, which brings enhanced security, data control, and advanced agent capabilities to large organizations. To help you prepare, we relaunched our learning platform with a variety of enhancements, and added new commerce and security programs. And while developers versed themselves on the finer points of Veo prompts, we discussed securing the AI supply chain, building AI agents for cybersecurity and defense, and a new vision on economic threat modeling. We partnered with PayPal to enable commerce in AI chats, Germany’s Max Planck Institute showed how AI can help share deep scientific expertise, and DZ Bank pioneered ways to make blockchain-based finance more reliable.
Product and company news

Introducing Gemini Enterprise

Google Skills: Your new home for cloud learning

Enabling a safe agentic web with reCAPTCHA

Partners powering the Gemini Enterprise agent ecosystem

Thought leadership

The ultimate prompting guide for Veo 3.1

How you can secure your AI supply chain

How Google Does It: Building AI agents for cybersecurity and defense

Customer stories

Introducing an agentic commerce solution for merchants from PayPal and Google Cloud

How the Max Planck Institute is sharing expert skills through multimodal agents

The oracles of DeFi: How DZ Bank builds trustworthy data feeds for decentralized applications

November
Whether it was Gemini 3, Nano Banana Pro, or our seventh-generation Ironwood TPUs, this was the month that we gave enterprise customers access to all our latest and greatest AI tech. We also did a deep dive on how we built the largest-ever Kubernetes cluster, clocking in at a massive 130,000 nodes, and we announced a new collaboration with AWS to improve connectivity between clouds.
Meanwhile, we updated our findings on the adversarial misuse of AI by threat actors and on the ROI of AI for security, and executives vibed out on our piece about vibe coding. Then, just in time for the holidays, we took a look at how Mattel is using AI tools to revamp its toys, and Waze showed how it uses Memorystore to keep the holiday traffic flowing.
Product and company news

Bringing Gemini 3 to Enterprise

How Google Does It: Building the largest known Kubernetes cluster, with 130,000 nodes

Announcing Nano Banana Pro for every builder and business

Announcing Ironwood TPUs General Availability and new Axion VMs to power the age of inference

AWS and Google Cloud collaborate to simplify multicloud networking

Thought leadership

Recent advances in how threat actors use AI tools

Beyond the hype: Analyzing new data on ROI of AI in security

How vibe coding can help leaders move faster

Customer stories

Mattel’s game changer: How AI is turning customer feedback into real-time product updates

Waze keeps traffic flowing with 1M+ real-time reads per second on Memorystore

December
The year is winding down, but we still have lots to say. Early returns show that you were interested in how to mitigate the React2Shell vulnerability, support for MCP across Google services, and the early access launch of AlphaEvolve. And let’s not forget Gemini 3 Flash, which is turning heads with its high-level reasoning, plus amazing speed and a flexible cost profile.
What does this all mean for you and your future? It’s important to contextualize these technology developments, especially AI. For example, the DORA team put together a guide on how high-performing platform teams can integrate AI capabilities into their workflows, we discussed what it looks like to have an AI-ready workforce, and our Office of the CISO colleagues put out their 2026 cybersecurity predictions. More to the point (guard), you could follow the lead of Golden State Warriors guard Stephen Curry and turn to Gemini to analyze your game and prepare for the year ahead. We’ll be watching on Christmas Day to see how Steph is faring with Gemini’s advice.
Product and company news

Responding to React2Shell (CVE-2025-55182): Secure your React and Next.js workloads

Announcing Model Context Protocol (MCP) support for Google services

AlphaEvolve on Google Cloud: AI for agentic discovery and optimization

Introducing Gemini 3 Flash: Intelligence and speed for enterprises

Thought leadership

From adoption to impact: Putting the DORA AI Capabilities Model to work

Is AI fluency the ingredient or the result of an AI-ready workforce?

Our 2026 Cybersecurity Forecast report

Customer stories

What Stephen Curry learned about his game from a custom Gemini agent

The Curry sibling rivalry is going strong

And that’s a wrap on 2025! Thanks for reading, and see you next year!
Source: Google Cloud Platform

Supporting Viksit Bharat: Announcing our newest AI investments in India

India’s developer community, vibrant startup ecosystem, and leading enterprises are embracing AI with incredible speed. To meet this moment, we are investing in powerful, locally available tools in India that can help foster a diverse ecosystem and ensure our platform delivers the controls you need for compliance and AI sovereignty.
Today, we’re announcing a significant expansion of our local AI hardware capacity for customers in India. This increase in local compute, powered by Google’s AI Hypercomputer architecture with the latest Trillium TPUs, will help more businesses and public sector organizations train and serve the most advanced Gemini models in India.
This unlocks new opportunities for high-performance, low-latency AI applications while helping customers meet India’s data residency and sovereignty requirements.
Enabling models and control: AI tools built for India’s context
While infrastructure is the foundation of digital sovereignty, true sovereignty also requires control over the data and the models built on it. We’re committed to bringing our latest AI advancements to India faster than ever, with the controls you need.
Our new services enable you to build, tune, and deploy models that understand India’s unique business logic and rich cultural context.

Next-generation models, here in India: Earlier this year, Google Cloud made Gemini available to regulated Indian customers by deploying Gemini 2.5 Flash with local machine-learning processing support. Now, we’re opening early testing for our latest and most advanced Gemini models to Indian customers. We’re also committing to launching the most powerful Gemini models in India with full data residency support. This is a first for Google Cloud, and a direct response to the needs of our Indian customers.

More AI capabilities, available locally: We’re providing additional consumption models and pre-built AI-powered applications tailored for local context by launching a suite of new capabilities with data residency support in India:

Batch support for Gemini 2.5 Flash: Now generally available, this allows organizations to run high-volume, non-real-time AI tasks at a lower cost, all in India.

Document AI: Now in preview, we’re providing local support to help Indian businesses automate document processing.

More local context in your AI: Grounding with Google Maps is a new capability that grounds model responses in real-time data from Google Maps, so AI applications can provide accurate, location-aware answers.

A sovereign AI ecosystem: Building for India, with India
The most durable and decisive factor in long-term digital sovereignty is cultivating the “human element”: the skilled talent and innovation ecosystem on which a sovereign AI future depends.
Our strategy is to support India’s ecosystem-led approach by investing in the researchers, developers, and startups who are building for India’s specific needs.
Collaboration with IIT Madras: Google Cloud and Google DeepMind are thrilled to collaborate with IIT Madras to support the launch of Indic Arena. Run independently by the renowned AI4Bharat center at IIT Madras, this platform will allow users from all over India to anonymously evaluate and rank AI models on tasks unique to India’s rich multilingual landscape. To support this initiative, we are providing cloud credits to power this critical, community-driven resource.
“At AI4Bharat, our mission is to build AI for India’s specific needs. A critical part of this is having a neutral, standardized benchmark to understand how models are performing across our many languages,” said Mitesh Khapra, associate professor, IIT Madras. “Indic Arena will be that platform. We are delighted to have Google Cloud’s support to provide the initial compute power to bring this independent, public-facing project to life for the entire Indian AI community.”
We encourage all developers, researchers, and organizations in India to explore the Indic Arena platform and contribute to building a more inclusive AI future.
We invite the entire Indian ecosystem, from startups and universities to government bodies and enterprises, to take advantage of this new, dedicated capacity for Gemini in Vertex AI and our sovereign-ready infrastructure to build the next generation of AI that is built by Indians, for Indians.
Source: Google Cloud Platform

How scientists can leverage AI agents using Gemini Enterprise, Gemini Code Assist, and Gemini CLI

Scientific inquiry has always been a journey of curiosity, meticulous effort, and groundbreaking discoveries. Today, that journey is being redefined, fueled by the incredible capabilities of AI. It’s moving beyond simply processing data to actively participating in every stage of discovery, and Google Cloud is at the forefront of this transformation, building the tools and platforms that make it possible. 
The sheer volume of data generated by modern research is immense, often too vast for human analysis alone. This is where AI steps in, not just as a tool, but as a collaborative force. We’re seeing powerful new models and AI agents assist with everything from identifying relevant literature and generating novel hypotheses to designing experiments, running simulations, and making sense of complex results. This collaboration doesn’t replace human intellect; it amplifies it, allowing researchers to explore more avenues, more quickly, and with greater precision. 
At Google Cloud, we’re bringing together high-performance computing (HPC) and advanced AI on a single, integrated platform. This means you can seamlessly move from running massive-scale simulations to applying sophisticated machine learning models, all in one environment. 
So, how can you leverage these capabilities to get to insights faster? The journey begins at the foundation of scientific inquiry: the hypothesis.
AI-enhanced scientific inquiry
Every great discovery starts with a powerful hypothesis. With millions of research papers published annually, identifying novel opportunities is a monumental task. To overcome this information overload, scientists can now turn to AI as a powerful research partner.
Our Deep Research agent tackles the first step: performing a comprehensive analysis of published literature to produce detailed reports on a given topic that would otherwise take months to compile. Building on that foundation, our Idea Generation agent then deploys an ensemble of AI collaborators to brainstorm, evaluate, propose, debate, and rank novel hypotheses. This powerful combination, available in Gemini Enterprise, transforms the initial phase of scientific inquiry, empowering researchers to augment their expertise and find connections they might otherwise miss.
Go from hypothesis to results, faster
Once a hypothesis is formed, the work of translating it into executable code begins. This is where AI coding assistants, such as Gemini Code Assist, excel. They automate the tedious tasks of writing analysis scripts and simulation models by generating code from natural language and providing real-time suggestions, dramatically speeding up the core development process. 
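For illustration, this is the kind of small, self-contained script such an assistant might generate from a one-line prompt like “fit a straight line to my measurements and report slope and intercept” (the function and data here are hypothetical examples, not actual assistant output):

```python
# Ordinary least-squares fit of y = slope * x + intercept,
# the kind of analysis boilerplate an AI coding assistant can generate on request.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Example: measurements that lie exactly on y = 2x + 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(f"slope={slope:.3f}, intercept={intercept:.3f}")  # slope=2.000, intercept=1.000
```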
But modern research is more than just a single script; it’s a complete workflow of data, environments, and results managed from the command line. For this, Gemini CLI brings that same conversational power directly to your terminal. It acts as a workflow accelerator, letting you synthesize research and generate hypotheses with simple commands, then seamlessly transition to experimentation by generating sophisticated analysis scripts and debugging errors on the fly, all without breaking your focus. Gemini CLI can further accelerate your path to impact by transforming raw results into publication-ready text, generating the code for figures and tables, and refining your work for submission.
This capability extends to automating the entire research environment. Beyond single commands, Gemini CLI can manage complex, multi-step processes like cloning a scientific application, installing its dependencies, and then building and testing it, all from a single prompt, maximizing your productivity.
The new era of discovery: Your expertise, AI agents, and Google Cloud
The new era of scientific discovery is here. By embedding AI into every stage of the scientific process – from sparking the initial idea to accelerating the final analysis – Google Cloud provides a single, unified platform for discovery. This AI-enhanced inquiry is built on a robust, intelligent infrastructure that combines the strengths of HPC simulation and AI. It includes purpose-built solutions like our H4D VMs optimized for scientific simulations, alongside our A4 and A4X VMs powered by the latest NVIDIA GPUs, and Google Cloud Managed Lustre, a parallel file system that eliminates storage bottlenecks and lets your HPC and AI workloads create and analyze massive datasets simultaneously. We provide the power to streamline the entire process so you can focus on scientific creativity – and changing the world!
Join the Google Cloud Advanced Computing Community to connect with other researchers, share best practices, and stay up to date on the latest advancements in AI for scientific and technical computing, or contact sales to get started today.
Source: Google Cloud Platform

United against money laundering: How EuroDaT enables the secure exchange of sensitive financial data

A guest post by Dr. Alexander Alldridge, Managing Director of EuroDaT
Fighting money laundering is a team effort. Banks, governments, and technology partners must work closely together to effectively uncover criminal networks. This challenge is especially complex in the tightly regulated financial sector: how do you reconcile data when the data in question is highly sensitive? In this blog post, Dr. Alexander Alldridge, Managing Director of EuroDaT, explains the role a data trustee can play, and how EuroDaT has used Google Cloud to build a scalable, GDPR-compliant infrastructure for exactly this purpose.
When a bank notices a suspicious transaction, a delicate coordination process begins. To trace possible money flows, the bank asks other institutions for information about specific transactions or accounts. Today, this mostly happens by phone, not because digital alternatives don’t exist, but because passing on sensitive financial data such as IBANs or account activity is only permitted under very narrow legal conditions.
This back and forth by phone is not only tedious but also error-prone. A digital data exchange that gives authorized parties access to exactly the information they need in a concrete case of suspicion would be much faster and safer.
Here at EuroDaT, a subsidiary of the German state of Hesse, we offer exactly that: as Europe’s first transaction-based data trustee, we enable a controlled, case-specific exchange of sensitive financial data that protects confidential information and meets all legal requirements.
safeAML: A new way to exchange data in the financial sector
With safeAML, developed together with Commerzbank, Deutsche Bank, and N26, we have built a system that digitizes the exchange of information between financial institutions. Instead of laboriously phoning other institutions, every bank will be able to pull in the relevant data from other banks itself to better assess suspicious transactions.
The data exchange is controlled and privacy-compliant: data is processed pseudonymously and passed on in such a way that only the requesting bank can ultimately re-identify it. As the data trustee, we at EuroDaT never have access to personal data at any point.
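The pseudonymization principle can be sketched in a few lines of Python, assuming keyed hashing (HMAC) as the derivation step. This is an illustration of the concept only, not EuroDaT’s actual implementation:

```python
import hashlib
import hmac

def pseudonymize(iban: str, bank_key: bytes) -> str:
    """Derive a stable pseudonym for an account identifier.

    Only the holder of bank_key can reproduce the mapping, so shared
    data reveals pseudonyms but never raw IBANs.
    """
    return hmac.new(bank_key, iban.encode(), hashlib.sha256).hexdigest()

def build_lookup(ibans, bank_key: bytes) -> dict:
    """Local table the requesting bank keeps to re-identify results."""
    return {pseudonymize(iban, bank_key): iban for iban in ibans}

# The requesting bank pseudonymizes its query, receives matches on
# pseudonyms from the trustee, and re-identifies them locally.
key = b"requesting-bank-secret"
table = build_lookup(["DE89370400440532013000"], key)
match = pseudonymize("DE89370400440532013000", key)
print(table[match])  # only this bank can map the pseudonym back
```

Because the key never leaves the requesting bank, an intermediary can match and route records on pseudonyms without ever being able to reverse them.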

The safeAML application

The highest security and compliance standards with Google Cloud
safeAML is a cloud-native application: it is developed and operated entirely in the cloud. That requires infrastructure that is not only technically capable but also meets the financial sector’s strict requirements, from the GDPR to industry-specific security and cyber-resilience rules. Google Cloud provides a strong foundation here, because the Google Cloud team laid the right technical and contractual groundwork for such sensitive use cases early on. For us, that was a decisive advantage over other providers.
Our entire infrastructure is built on Google Kubernetes Engine (GKE). On top of it, we set up secure, isolated environments in which each request can be processed traceably and separately from others. All technical resources, including our Virtual Private Clouds (VPCs), are defined in the Google Cloud environment through infrastructure as code. This means EuroDaT’s entire infrastructure is built in an automated, repeatable way, including the rules for which data may flow where.
This transparent, easily reproducible architecture also helps us meet the financial sector’s strict compliance requirements: we can demonstrate at any time that security-relevant rules are enforced and verified automatically.
Banks use safeAML for faster review of suspicious activity
safeAML is now being piloted at the first German banks to help them assess suspicious transactions faster and more accurately. Instead of reaching for the phone, investigators can now request targeted supplementary information from other institutions without exposing sensitive data.
This not only speeds up reviews but also reduces the false positives that previously consumed significant time and capacity. Whether a case warrants a money laundering report remains a human, case-by-case decision, as German law requires.
That banks can exchange data in a controlled way via safeAML for the first time is already a major step for anti-money-laundering efforts in Germany. But we are only at the beginning: the next steps are to bring more banks on board, expand the network nationally and internationally, and make the process as simple as possible. The more institutions participate, the more complete a picture of suspicious money flows we can draw. In the future, this new data foundation can also help classify and assess suspicious cases with greater confidence.
Nachhaltiger Datenschutz: Sicherer Austausch von ESG-DatenUnsere Lösung ist aber nicht auf den Finanzbereich beschränkt. Als Datentreuhänder können wir das Grundprinzip, sensible Daten nur gezielt und kontrolliert zwischen dazu berechtigten Parteien zugänglich zu machen, auch auf viele andere Bereiche übertragen. Wir arbeiten dabei immer mit Partnern zusammen, die ihre Anwendungsideen auf EuroDaT umsetzen, und bleiben als Datentreuhänder selbst neutral.

Leistungsangebot EuroDaT

Ein aktuelles Beispiel sind ESG-Daten: Nicht nur große Firmen, sondern auch kleine und mittlere Unternehmen stehen zunehmend unter Druck, Nachhaltigkeitskennzahlen offenzulegen – sei es wegen neuer gesetzlicher Vorgaben oder weil Geschäftspartner wie Banken und Versicherer sie einfordern.Gerade für kleinere Firmen ist es schwierig, diesen Anforderungen gerecht zu werden. Sie haben oft nicht die nötigen Strukturen oder Ressourcen, um ESG-Daten standardisiert bereitzustellen, und möchten sensible Informationen wie Verbrauchsdaten verständlicherweise auch nicht einfach öffentlich machen.Hier kommt EuroDaT ins Spiel: Wir sorgen als vertrauenswürdige Zwischenstelle dafür, dass Nachhaltigkeitsdaten sicher weitergegeben werden, ohne dass Unternehmen die Kontrolle darüber verlieren. Mit dem Deutschen Nachhaltigkeitskodex (DNK) führen wir aktuell Gespräche zu einer Lösung, die kleinen Firmen das Übermitteln von ESG-Daten an Banken, Versicherungen und Investor*innen über EuroDaT als Datentreuhänder erleichtern kann.
Forschung im Gesundheitssektor: Sensible Daten, sichere ErkenntnisseAuch im Gesundheitssektor sehen wir großes Potenzial für unsere Technologie. Hier geht es natürlich um besonders sensible Daten, die nur unter strengen Auflagen verarbeitet werden dürfen. Trotzdem gibt es viele Fälle, in denen Gesundheitsdaten zusammengeführt werden müssen – etwa für die Grundlagenforschung, die Ausgestaltung klinischer Studien und politische Entscheidungen.Im Auftrag der Bundesregierung hat die Unternehmensberatung d-fine jetzt gezeigt, wie Gesundheitsdaten mithilfe von EuroDaT genutzt werden können – etwa zur Analyse der Auswirkungen von Post-COVID auf die Erwerbstätigkeit. Dafür müssen diese Daten mit ebenfalls hochsensiblen Erwerbsdaten zusammengeführt werden, was durch EuroDaT möglich wird: Als Datentreuhänder stellen wir sicher, dass die Daten vertraulich bleiben und dennoch sinnvoll genutzt werden können.Datensouveränität als Schlüssel zur digitalen ZusammenarbeitWenn Daten nicht ohne Weiteres geteilt werden dürfen, hat das meist gute Gründe. Gerade im Finanzwesen oder im Gesundheitssektor sind Datenschutz und Vertraulichkeit nicht verhandelbar. Umso wichtiger ist, dass der Austausch dieser Daten, wenn er tatsächlich notwendig wird, rechtlich sicher und kontrolliert stattfinden kann.Als Datentreuhänder sorgen wir deshalb nicht nur für sicheren Datenaustausch in sensiblen Branchen, sondern stärken dabei auch die Datensouveränität aller Beteiligten. Gemeinsam mit Google Cloud verankern wir Datenschutz fest im Kern der digitalen Zusammenarbeit zwischen Unternehmen, Behörden und Forschungseinrichtungen.
Quelle: Google Cloud Platform

United against money laundering: How EuroDaT enables the secure exchange of sensitive financial data

A guest post by Dr. Alexander Alldridge, Managing Director of EuroDaT

Fighting money laundering is a team effort. Banks, governments, and technology partners must work closely together to uncover criminal networks effectively. In the tightly regulated financial sector, this challenge is especially complex: how do you reconcile data when the data in question is highly sensitive? In this blog post, Dr. Alexander Alldridge, Managing Director of EuroDaT, explains the role a data trustee can play, and how EuroDaT used Google Cloud solutions to build a scalable, GDPR-compliant infrastructure for exactly this purpose.
When a bank notices a suspicious transaction, a delicate coordination process begins. To trace potential money flows, it asks other banks for information about specific transactions or accounts. Today this mostly happens by phone. Not because digital alternatives don't exist, but because passing on sensitive financial data such as IBANs or account activity is only permitted under very narrow legal conditions.

The back-and-forth by phone is not only tedious but also error-prone. A digital data reconciliation that gives authorized parties access to exactly the information they need in a specific case of suspicion would be much faster and safer.

Here at EuroDaT, a subsidiary of the German state of Hesse, we offer exactly that: as Europe's first transaction-based data trustee, we enable a controlled, case-by-case exchange of sensitive financial data that protects confidential information and meets all legal requirements.

safeAML: A new approach to data exchange in the financial sector

Together with Commerzbank, Deutsche Bank, and N26, we developed safeAML, a system that digitizes the exchange of information between financial institutions. Instead of laboriously phoning other institutions, every bank will be able to pull in the relevant data from other banks itself to better assess conspicuous transactions.

The data exchange is controlled and privacy-compliant: the data is processed pseudonymously and passed on in such a way that only the requesting bank can ultimately re-identify it. As the data trustee, we at EuroDaT never have access to personal data at any point.

The safeAML application
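As a loose illustration of the pseudonymized exchange described above, a keyed hash can produce tokens that participating banks can match without the trustee ever seeing raw account data, while only the requesting bank keeps the mapping needed to re-identify results. This is a minimal Python sketch, not EuroDaT's actual protocol; the key-distribution scheme, key value, and IBANs are invented for illustration.

```python
import hmac
import hashlib

def pseudonymize(iban: str, request_key: bytes) -> str:
    """Keyed pseudonym: banks holding the same per-request key produce
    matching tokens, but the token cannot be inverted without the key."""
    return hmac.new(request_key, iban.encode(), hashlib.sha256).hexdigest()

# Hypothetical per-request key, shared only among the participating banks.
request_key = b"per-request key shared among banks"

# The requesting bank pseudonymizes the IBAN under investigation and keeps
# the mapping, so it alone can re-identify matching results later.
query_token = pseudonymize("DE89370400440532013000", request_key)
mapping = {query_token: "DE89370400440532013000"}

# A responding bank pseudonymizes its own records the same way and returns
# only the matching tokens, never the raw IBANs.
records = ["DE89370400440532013000", "FR1420041010050500013M02606"]
matches = [t for t in (pseudonymize(r, request_key) for r in records)
           if t == query_token]

# Only the requesting bank can map tokens back to accounts.
resolved = [mapping[t] for t in matches]
print(resolved)
```

The point of the sketch is the asymmetry: the trustee and responding banks handle only tokens, while re-identification requires the mapping held solely by the requester.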

The highest security and compliance standards with Google Cloud

safeAML is a cloud-native application: it is developed and operated entirely in the cloud. That requires an infrastructure that is not only technically capable but also meets the financial sector's strict requirements, from GDPR to industry-specific security and cyber-resilience rules. Google Cloud provides a strong foundation here, because the Google Cloud team laid the technical and contractual groundwork for such sensitive use cases early on. For us, that was a decisive advantage over other providers.

Our entire infrastructure is built on Google Kubernetes Engine (GKE). On top of it, we set up secure, isolated environments in which every request can be processed traceably and separately from all others. All technical resources, including our Virtual Private Clouds (VPCs), are defined in the Google Cloud environment through infrastructure as code. This means EuroDaT's entire infrastructure is built automatically and reproducibly, including the rules governing which data may flow where.

This transparent, easily reproducible architecture also helps us meet the financial sector's strict compliance requirements: we can demonstrate at any time that security-relevant rules are enforced and verified automatically.
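To make the infrastructure-as-code idea concrete, the following Python sketch generates Kubernetes manifests for a per-request isolated environment: a dedicated namespace plus a default-deny NetworkPolicy. The resource names and labels are invented for illustration; EuroDaT's actual definitions are not public, and production setups would typically use a dedicated IaC tool rather than hand-built dicts.

```python
def isolated_request_env(request_id: str) -> list[dict]:
    """Build manifests for one isolated environment per request:
    a dedicated namespace and a default-deny network policy, so a
    request's workloads cannot talk to anything unless explicitly allowed."""
    ns = f"request-{request_id}"
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": ns, "labels": {"request-id": request_id}},
    }
    deny_all = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny", "namespace": ns},
        # Empty podSelector selects every pod in the namespace; listing
        # both policy types with no rules denies all ingress and egress.
        "spec": {"podSelector": {}, "policyTypes": ["Ingress", "Egress"]},
    }
    return [namespace, deny_all]

manifests = isolated_request_env("42")
print(manifests[0]["metadata"]["name"])  # request-42
```

Because the environment is generated from code, every request gets an identical, auditable sandbox, which is exactly the reproducibility property the compliance argument above relies on.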
Banks use safeAML for faster reviews of suspicious activity

safeAML is now being piloted at the first German banks to help them assess suspicious transactions faster and more accurately. Instead of reaching for the phone as usual, investigators can now request targeted supplementary information from other institutions without exposing sensitive data.

This not only speeds up reviews but also reduces false positives, which previously consumed a great deal of time and capacity. The decision on whether to report a suspicion of money laundering remains a human, case-by-case judgment, as German law requires.

That banks can exchange data in a controlled way via safeAML for the first time is already a major step for anti-money-laundering efforts in Germany. But we are still at the beginning: the next steps are to onboard more banks, expand the network nationally and internationally, and make the process as simple as possible. The more institutions participate, the more complete a picture of suspicious money flows we can draw. In the future, this new data foundation can also help classify suspicious cases more accurately and assess them on a sounder basis.
Sustainable data protection: Secure exchange of ESG data

Our solution is not limited to finance, though. As a data trustee, we can apply the same basic principle, making sensitive data accessible only in a targeted, controlled way between authorized parties, to many other areas. We always work with partners who implement their application ideas on EuroDaT, while we ourselves remain neutral as the data trustee.

EuroDaT's service offering

A current example is ESG data: not only large companies but also small and medium-sized enterprises are under growing pressure to disclose sustainability metrics, whether because of new legal requirements or because business partners such as banks and insurers demand them.

Meeting these requirements is particularly difficult for smaller companies. They often lack the structures or resources to provide ESG data in a standardized form, and they understandably do not want to simply publish sensitive information such as consumption data.

This is where EuroDaT comes in: as a trusted intermediary, we ensure that sustainability data is passed on securely without companies losing control over it. We are currently in talks with the German Sustainability Code (Deutscher Nachhaltigkeitskodex, DNK) about a solution that would make it easier for small companies to transmit ESG data to banks, insurers, and investors via EuroDaT as the data trustee.
Research in the healthcare sector: Sensitive data, secure insights

We also see great potential for our technology in the healthcare sector. This naturally involves especially sensitive data that may only be processed under strict conditions. Still, there are many cases in which health data must be combined, for example for basic research, the design of clinical trials, and policy decisions.

On behalf of the German federal government, the consultancy d-fine has now demonstrated how health data can be used with the help of EuroDaT, for instance to analyze the impact of post-COVID on employment. To do so, the health data must be combined with equally sensitive employment data, which EuroDaT makes possible: as the data trustee, we ensure the data remains confidential while still being put to meaningful use.

Data sovereignty as the key to digital collaboration

When data cannot simply be shared, there are usually good reasons. In finance and healthcare in particular, data protection and confidentiality are non-negotiable. That makes it all the more important that, when an exchange of this data does become necessary, it can take place in a legally sound and controlled manner.

As a data trustee, we therefore not only enable secure data exchange in sensitive industries but also strengthen the data sovereignty of everyone involved. Together with Google Cloud, we are anchoring data protection firmly at the core of digital collaboration between companies, public authorities, and research institutions.
Source: Google Cloud Platform

Top 25 blogs of 2025… so far

Six months into 2025, we’ve already published hundreds of posts here on the Google Cloud blog. We asked ourselves: why wait until the busy end of the year to review your favorites? Covering everything from new AI models, product launches, and emerging cyber threats to company news, certifications, and customer stories, here is a mid-year recap to get you up to speed on the latest from Google Cloud and the rapidly evolving cloud and AI landscape.
25. How Google Does It: Making threat detection high-quality, scalable, and modern
Published January 7, 2025
Google and Alphabet run the largest Linux fleet in the world, with nearly every flavor of operating system available, and see a steady stream of malicious system and network activity. Learn how our threat detection and response team detects, analyzes, and responds to threats on a vast scale.   
Read the blog. 
24. Cloud Run GPUs are now generally available
Published June 2, 2025
More and more organizations are turning to Cloud Run, Google Cloud’s serverless runtime, for its simplicity, flexibility, and scalability. And now, with the general availability of NVIDIA GPUs on the platform, developers can choose Cloud Run for applications that require powerful graphics processing, like machine learning models.
Read the blog. 
23. BigQuery emerges as autonomous data-to-AI platform
Published April 10, 2025
This is not your grandfather’s data warehouse. BigQuery is now an AI-native, multimodal, and agentic data-to-AI platform. The blog post provides an overview of the many new features and capabilities behind this new designation, including new data preparation, data analysis, code generation, and management and troubleshooting capabilities.
Read the blog. 
22. Announcing Gen AI Toolbox for Databases. Get started today
Published February 6, 2025
Tired of building custom plumbing to connect your AI apps to your databases? This article announces the public beta of the Gen AI Toolbox for Databases, an open-source server built in partnership with LangChain that provides a secure, scalable, and manageable way to connect your generative AI applications to your data.
Read the blog. 
21. Ghost in the router: China-nexus espionage actor UNC3886 targets Juniper Networks
Published March 11, 2025
After discovering in 2024 that threat actors deployed custom backdoors to Juniper Networks’ Junos OS routers, Mandiant worked with Juniper to investigate this activity and observed that the affected routers were running end-of-life hardware and software. Learn more about the threat and how to remediate it in your environment. 
Read the blog.
20. What’s new with AI Hypercomputer?
Published April 9, 2025
It’s a platform, it’s a system, it’s AI Hypercomputer, Google Cloud’s fully managed supercomputing system for running AI and HPC workloads. As discussed at Google Cloud Next 2025, AI Hypercomputer supports all the latest and greatest compute, networking and storage infrastructure, and its software layer helps AI practitioners and engineers move faster with open and popular ML frameworks. Finally, there’s a full suite of workload management and observability tools to help you manage the thing.
Read the blog. 
19. Ipsos research shows why cloud certification matters — get certified with Google Cloud

Published February 25, 2025
Google Cloud partnered with Ipsos, the global research firm, to study the impact of cloud certifications on career advancement and achievement. For example, 8 out of 10 survey respondents said earning a recognized certificate helped them land a job faster, and 75% believe they secured a higher salary through their certification.
Read the blog.

18. Connect globally with Cloud WAN for the AI Era
Published April 9, 2025
With 202 points of presence (PoPs), powered by over 2 million miles of fiber, 33 subsea cables, and backed by a 99.99% reliability SLA, Google’s backbone network is, how do we put it? Vast. And with Cloud WAN, enterprises can now use it for their own wide area network (WAN) architectures. 
Read the blog. 
17. Expanding generative media for enterprise on Vertex AI

Published April 9, 2025
At Google Cloud Next 25, we announced powerful new creative controls for our generative media models on Vertex AI. Now you can edit video with in-painting and out-painting, use camera controls for dynamic shots, and even create custom voices for AI-powered narration with as little as 10 seconds of audio.
Read the blog.

16. Suspected China-nexus threat actor actively exploiting critical Ivanti Connect Secure vulnerability
Published April 3, 2025
Threat actors continue to target edge devices globally, leveraging deep device knowledge and using both zero-day and now n-day flaws. This activity aligns with the broader strategy that the Google Threat Intelligence Group has observed among suspected China-nexus espionage groups, who invest significantly in exploits and custom malware for critical edge infrastructure.
Read the blog. 
15. Defending against UNC3944: Cybercrime hardening guidance from the frontlines 
Published May 6, 2025
Who is UNC3944? A financially-motivated threat actor characterized by its persistent use of social engineering and brazen communications with victims. Mandiant provides guidance and strategies for hardening systems and defenses against the cybercrime group, offering practical steps to protect against their specific attack methods.
Read the blog. 
14. MCP Toolbox for Databases (formerly Gen AI Toolbox for Databases)
Published April 22, 2025
Ready to build AI agents that can actually use your data? This article announces that our MCP Toolbox for Databases now supports the Model Context Protocol (MCP), making it easier than ever to connect your generative AI agents to enterprise data. With new support for the Agent Development Kit (ADK) and LangGraph, you can build powerful, stateful agents with intuitive code and connect them to your databases securely.
Read the blog.
13. Formula E’s AI equation: A new Driver Agent for the next era of racing

Published March 25, 2025
As motorsport has grown in popularity, the ability of fans from diverse backgrounds to enter the cockpit has not always kept up. Formula E sought to level the course for aspiring drivers by creating an AI-powered Driver Agent; connected to a Formula E simulator, the agent provides drivers and coaches with real-time feedback on technique and tactics, helping them improve faster than a flying lap.
Read the blog.

12. Google Agentspace enables the agent-driven enterprise
Published April 9, 2025
Do you want to search all your company’s information in a few clicks, or generate ideas with built-in agents that already know your company’s style? Google Agentspace now includes a no-code agent designer, a gallery for discovering agents, and two new expert agents for deep research and idea generation, all integrated directly into Chrome.
Read the blog.
11. Announcing Veo 3, Imagen 4, and Lyria 2 on Vertex AI

Published May 20, 2025
The next generation of creative tools for the enterprise is here. We expanded Vertex AI to include our most powerful generative AI media models: Imagen 4 for stunningly realistic images with crisp text, Veo 3 for breathtaking video with synchronized audio, and Lyria 2 for composing high-fidelity, original music.
Read the blog.

10. Adversarial misuse of generative AI
Published January 19, 2025
In the security realm, large language models (LLMs) open a world of new possibilities, from sifting through complex telemetry to secure coding, vulnerability discovery, and streamlining operations. However, some of these same AI capabilities are also available to attackers, leading to understandable anxieties about the potential for AI to be misused for malicious purposes.
Read the blog.
9. Ivanti Connect Secure VPN targeted in new zero-day exploitation
Published January 8, 2025
Ivanti kicked off the year by disclosing two new vulnerabilities impacting its Ivanti Connect Secure (ICS) VPN appliances. Mandiant identified UNC5221, a suspected China-nexus espionage actor that previously exploited two other Ivanti vulnerabilities as early as December 2023, as the threat actor targeting the new zero-days. Successfully exploiting one of the vulnerabilities could result in downstream compromise of a victim network.
Read the blog. 
8. Google announces agreement to acquire Wiz
Published March 18, 2025
Google Cloud shares a vision with Wiz to improve security by making it easier and faster for organizations of all types and sizes to protect themselves, end-to-end, across all major clouds, and this post announces Google’s agreement to acquire the cloud security startup.
Read the blog.
7. Veo 3 available for everyone in preview on Vertex AI
Published June 26, 2025
You dream it, Veo creates it. This post announces that Veo 3, our most powerful text-to-video model yet, is now open for everyone to try in public preview on Vertex AI. Create stunning, near-cinematic videos with synchronized sound, and join the next wave of creative storytelling, now available to Google Cloud customers and partners.
Read the blog. 
6. Vertex AI offers new ways to build and manage multi-agent systems

Published April 9, 2025
This article announces new ways to build multi-agent systems, an evolution of traditional AI agents. To get there, we launched a new suite of tools in Vertex AI to help developers build and deploy them, including an open-source Agent Development Kit (ADK) and a managed Agent Engine. We also introduced the Agent2Agent (A2A) protocol, a new open standard that allows agents built by different companies to communicate and collaborate.
Read the blog.

5. Techniques for improving text-to-SQL
Published May 16, 2025
Even though it’s been around for a long time, not all developers speak fluent SQL. English, on the other hand, is pretty well-known. In this technical deep dive for developers working with natural language processing and databases, get the insights and techniques you need to enhance the accuracy and performance of your text-to-SQL conversions.
Read the blog.
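To give a flavor of the kind of techniques such a deep dive covers, here is a minimal, self-contained sketch of two common text-to-SQL aids: grounding the prompt in the actual schema, and cheaply validating candidate SQL before execution. The schema, question, and candidate query are invented for illustration, and the hard-coded candidate stands in for a real model call; the blog's own techniques go considerably further.

```python
import sqlite3

SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed_at TEXT);"

def build_prompt(question: str) -> str:
    """Schema grounding: show the model the real DDL so it can only
    reference columns that actually exist."""
    return (
        "Given this SQLite schema:\n" + SCHEMA + "\n"
        "Write one SQL query answering: " + question + "\n"
        "Return only SQL."
    )

def is_valid_sql(candidate: str) -> bool:
    """Post-hoc check: run EXPLAIN against an empty database with the
    same schema; syntactically or semantically invalid queries raise."""
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    try:
        conn.execute("EXPLAIN " + candidate)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

# A model call would produce this candidate from build_prompt(...).
candidate = "SELECT customer, SUM(total) FROM orders GROUP BY customer"
print(is_valid_sql(candidate))  # True
print(is_valid_sql("SELECT nope FROM missing"))  # False
```

Validating against an in-memory copy of the schema is cheap and catches hallucinated tables or columns before a query ever touches production data.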
4. Firebase Studio lets you build full-stack AI apps with Gemini
Published April 9, 2025
For over a decade, developers the world over have relied on Firebase’s backend cloud computing services and application development platforms to power their web applications. And with the new Firebase Studio, they can now use it to develop full-stack AI applications, integrating with the Gemini AI model.
Read the blog.  
3. Multiple Russia-aligned threat actors targeting Signal Messenger
Published February 19, 2025
As part of the ongoing Russian-Ukrainian conflict, Signal Messenger accounts are of great interest to Russia’s intelligence services for their potential to deliver sensitive government and military communications. Google Threat Intelligence Group has observed increasing efforts from several Russia state-aligned threat actors to compromise Signal Messenger accounts used by individuals of interest to Russia’s intelligence services.
Read the blog.
2. New Google Cloud certification in generative AI
One of the top questions we hear is “How do I get ahead?” This isn’t just another certification in a sea of technical qualifications. The Generative AI Leader certification is specifically focused on generative AI, and designed for visionary professionals like you — the managers, administrators, strategic leaders and more who understand that AI’s impact stretches far beyond code.
Read the blog.
1. 601 real-world gen AI use cases from the world’s leading organizations

Published April 9, 2025
Since Next 2024, we’ve been gathering examples of how our customers are putting generative AI to use every day across their operations and offerings. We nearly doubled the number of entries for Next 2025, and clearly they’re still resonating, as this has been our most popular story of the year. Which use cases excite you most? Pop over to our LinkedIn page and let us know.
Read the blog.

Thank you for being a part of the Google Cloud blog community! We look forward to bringing you many more blogs to devour in the second half of the year.
Source: Google Cloud Platform

News you can use: What we announced in AI this month

2025 is off to a racing start. From announcing strides in the new Gemini 2.0 model family to retailers accelerating with Cloud AI, we spent January investing in our partner ecosystem, open source, and ways to make AI more useful. We’ve heard from people everywhere, from developers to CMOs, about the pressure to adopt the latest in AI with efficiency and speed – and the delicate balance of being both conservative and forward-thinking. We’re here to help. Each month, we’ll post a retrospective that recaps Google Cloud’s latest announcements in AI – and importantly, how to make the most of these innovations.
Top announcements: Bringing AI to you 
This month, we announced agent evaluation in Vertex AI. A surprise to nobody, AI agents are top of mind for many industries looking to deploy their AI and boost productivity. But closing the gap between impressive model demos and real-world performance is crucial for successfully deploying generative AI. That’s why we announced Vertex AI’s RAG Engine, a fully managed service that helps you build and deploy RAG implementations with your data and methods. Together, these new innovations can help you build reliable, trustworthy models.
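Conceptually, a RAG pipeline like the ones RAG Engine manages boils down to retrieve-then-generate: fetch the passages most relevant to a question, then hand them to the model as grounding context. Here is a deliberately naive Python sketch, with token overlap standing in for real embeddings and a vector index; the corpus and query are invented, and this illustrates the pattern rather than RAG Engine's API.

```python
def score(query: str, passage: str) -> int:
    """Toy relevance signal: count shared lowercase tokens.
    Real pipelines use embedding similarity instead."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

corpus = [
    "Vertex AI RAG Engine is a managed service for RAG pipelines.",
    "Cloud Run is a serverless runtime for containers.",
]
context = retrieve("What is the RAG Engine?", corpus)

# The retrieved passages become grounding context for the generator.
prompt = "Answer using only this context:\n" + "\n".join(context)
print(context[0])
```

Swapping the toy scorer for embeddings, adding chunking, and managing the index is exactly the plumbing a managed RAG service takes off your hands.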
From an infrastructure perspective, we announced new updates to AI Hypercomputer. We wanted to make it easier for you to run large multi-node workloads on GPUs by launching A3 Ultra VMs and Hypercompute Cluster, our new highly scalable clustering system. This builds on multiple advancements in AI infrastructure, including Trillium, our sixth-generation TPU.


What’s new in partners and open-source 
This month, we invested in our relationship with our partners. We shared how Gemini-powered content creation in Partner Marketing Studio will help partners co-market faster. These features are designed to streamline marketing efforts across our entire ecosystem, empowering our partners to unlock new levels of success, efficiency, and impact. 
At the same time, we shared several important announcements in the world of open-source. We announced Mistral AI’s Mistral Large 24.11 and Codestral 25.01 models on Vertex AI. These models will help developers write code and build faster – from high-complexity tasks to reasoning tasks, like creative writing. To help you get started, we provided sample code and documentation.
And, most recently, we announced the public beta of Gen AI Toolbox for Databases in partnership with LangChain, the leading orchestration framework for developers building LLM applications. Toolbox is an open-source server that empowers application developers to connect production-grade, agent-based generative AI applications to databases. You can get started here.
Industry news: Google Cloud at the National Retail Federation (NRF) 
The National Retail Federation kicked off the year with its annual NRF conference, where Google Cloud showed how AI agents and AI-powered search are already helping retailers operate more efficiently, create personalized shopping experiences, and use AI to get the latest products and experiences to their customers. Check out our new AI tools that help retailers build gen AI search and agents.
As an example, Google Cloud worked with NVIDIA to empower retailers to boost their customer engagements in exciting new ways, deliver more hyper-personalized recommendations, and build their own AI applications and agents. Now with NVIDIA’s AI Enterprise software available on Google Cloud, retailers can handle more data and more complex AI tasks without their systems getting bogged down.
News you can use 
This month, we shared several ways to better implement fast-moving AI, from a comprehensive guide on Supervised Fine Tuning (SFT), to how developers can help their LLMs deliver more accurate, relevant, and contextually aware responses, minimizing hallucinations and building trust in AI applications by optimizing their RAG retrieval.
We also published new documentation to use open models in Vertex AI Studio. Model selection isn’t limited to Google’s Gemini anymore. Now, choose models from Anthropic, Meta, and more when writing or comparing prompts.
Hear from our leaders
We closed out the month with The Prompt, our monthly column that brings observations from the field of AI. This month, we heard from Warren Barkley, AI product leader, who shares some best practices and essential guidance to help organizations successfully move AI pilots to production. Here’s a snippet:
More than 60% of enterprises are now actively using gen AI in production, helping to boost productivity and business growth, bolster security, and improve user experiences. In the last year alone, we witnessed a staggering 36x increase in Gemini API usage and a nearly 5x increase of Imagen API usage on Vertex AI — clear evidence that our customers are making the move towards bringing gen AI to their real-world applications.
Stay tuned for monthly updates on Google Cloud’s AI announcements, news, and best practices. For a deeper dive into the latest from Google Cloud, read our weekly updates, The Overwhelmed Person’s Guide to Google Cloud.
Source: Google Cloud Platform

What’s new with Google Cloud – 2024

Week of Dec 16 – Dec 20

Windows Server 2025 is now available on Google Compute Engine. We are excited to announce the general availability of Windows Server 2025 on Google Compute Engine. You can now run the Windows Server 2025 Datacenter and Windows Server 2025 Datacenter Core editions, as well as SQL Server 2022 on Windows Server 2025, with pay-as-you-go licenses. Customers that don’t already have a Microsoft Enterprise Agreement can use Google Compute Engine’s provided images to take advantage of Google Cloud’s relationships with Microsoft for pay-as-you-go licenses that scale with your workload and offer premium support.

Google Agentspace is here: unlock enterprise expertise for employees with agents that bring together Gemini’s advanced reasoning, Google-quality search, and enterprise data, regardless of where it’s hosted. Google Agentspace makes your employees highly productive by helping them accomplish complex tasks that require planning, research, content generation, and actions – all with a single prompt.

Week of Dec 9 – Dec 13

Best of N: Generating high-quality grounded answers with multiple drafts. We are excited to announce that the Check Grounding API has released a new helpfulness score feature. Building on top of our existing groundedness score, we now enable users to implement Best of N to improve RAG response quality without requiring extensive model retraining.

A3 Ultra VMs powered by NVIDIA H200 Tensor Core GPUs and Hypercompute Clusters are in preview. A3 Ultra VMs offer a significant leap in performance over previous generations. Coupled with Hypercompute Cluster, they make the infrastructure and workload provisioning, as well as the ongoing operations, of AI supercomputers with up to tens of thousands of accelerators seamless. This delivers an easy-to-manage, secure, and high-performance cloud experience for AI workloads. Learn more about the offering!

Week of Nov 25 – Nov 29

The Cyber Threat Intelligence Program Design Playbook is now available.
Mandiant Academy published a new on-demand course, the Cyber Threat Intelligence (CTI) Program Design Playbook. Composed of three 2-hour courses, this learning track explains Mandiant’s approach to designing, building, efficiently operating, and enhancing a CTI program. This is the inaugural track from the Academy’s updated approach to on-demand learning: succinct, operations-oriented lessons designed to give the world’s cybersecurity professionals the answers and resources they need to succeed.

Week of Nov 11 – Nov 15

Subsea cable connectivity is coming to Tuvalu for the first time with the addition of the Tuvalu Vaka cable. Building on the Bulikula subsea cable system announced last year, this new network infrastructure is a collaboration among several partners, including Australia, Japan, New Zealand, Taiwan, Tuvalu, Tuvalu Telecommunications Corporation, and the United States, and will help reduce the digital divide in the Pacific.

Week of Nov 4-8

We are excited to announce the reCAPTCHA Password Leak Detection Container App, a new tool that makes it easier than ever to protect your users from account takeovers. This container app simplifies the integration of reCAPTCHA’s powerful password leak detection, allowing you to instantly detect compromised credentials and proactively prompt users to change their password before their account is compromised. With pre-built libraries and a streamlined process, you can significantly reduce integration time and enhance your website’s security with ease.

Week of Oct 21-25

We’re excited to announce GA support for scanning Rocky Linux, AlmaLinux, SUSE (SLES), Red Hat (UBI), Chainguard, Wolfi, and Google Distroless images. These operating systems are now supported in both Artifact Registry scanning and On-Demand Scanning. When the Container Scanning API is enabled, any container with these new operating systems or distroless images will automatically be scanned for vulnerabilities when pushed to Artifact Registry.
We’ve also upgraded On-Demand Scanning to include NPM, Python, Ruby, Rust, .NET, and PHP language packages. See all supported package types.

Term extension now available for Compute Engine committed use discounts: You can now extend the term length of your Compute Engine resource-based committed use discounts (CUDs) beyond the preset 1-year and 3-year options. CUDs offer significant cost savings for predictable workloads. You can now choose a CUD term length beyond the original commitment end date that aligns with your workload needs, from one year and one day up to six years. Learn more.

Week of Oct 14 – Oct 18

Announcing Google Cloud Marketplace private offer enhancements that enable additional payment flexibility for enterprises, including when transacting generative AI models.

Week of Oct 7 – Oct 11

We are excited to announce the launch of new Google Cloud Cortex Framework data integration and analytics solution content for BigQuery and Looker with Oracle EBS data. To learn more, read our announcement blog.

Google Cloud is partnering with leading AI and cybersecurity startups to accelerate their growth and innovation through the ISV Startup Springboard program, announced this week at the Google Cloud Startup Summit. Learn more and register interest.

Privileged Access Manager (PAM) is now generally available. The GA release offers new capabilities on top of the recently released public preview, including Pub/Sub integration for custom alerting and monitoring, alerts on IAM grant modifications made outside of PAM, and integration with VPC Service Controls to tackle data exfiltration. Learn more.

Week of Sept 23 – Sept 27

We are excited to announce that registration is open for the App Dev & Infrastructure Summit on October 30 (AMER) and October 31 (EMEA).
Google Technology Fellows, our luminary technical leaders, and industry experts will share strategies and learnings on how to improve efficiency, reduce costs, and speed up AI innovation for your cloud and application infrastructure at this global digital event. Register here.

Week of Sept 16 – Sept 20

Starting this week, Google Cloud customers with eligible support plans can access assistance for the Cluster Toolkit through the Cloud Console. Cluster Toolkit, formerly known as Cloud HPC Toolkit, is open-source software from Google Cloud that simplifies deploying HPC, AI, and ML workloads on Google Cloud. The Cloud Support team will handle filed cases, ensuring that users receive timely and effective support for their Cluster Toolkit implementations. Select ‘Cluster Toolkit’ as the sub-category under ‘Compute Engine’ when creating a support ticket in the Cloud Console to get in touch about any Cluster Toolkit issues.

The Backup and DR service is excited to announce the public preview of backup vaults and a simplified VM backup offering. Backup vaults provide secure backups for cyber resilience through immutable and indelible backups for VMs and databases, protecting against accidental or malicious data deletion. Simplified Compute Engine VM backup, with a fully managed experience integrated directly into the Cloud console, makes backing up VMs as easy as 1-2-3. The solution also enables backup admins to let application developers self-protect their VMs while retaining centralized governance and oversight. Read the full blog to learn more and try out the new features.

Week of Sept 2 – Sept 6

We’re excited to share that Topaz will be extended to Taiwan. Announced in 2022, the transpacific subsea cable system was the first to connect Canada and Japan.
Now, with the extension of Topaz to Taiwan, we’ll provide the region with increased reliability and resilience for network operators, for Google, and for users.

Week of Aug 26 – Aug 30

We are excited to announce the general availability of instant snapshots for Google Compute Engine Persistent Disks, which provide near-instantaneous, high-frequency, point-in-time checkpoints of a disk that can be rapidly restored as needed. Read the full blog to try it out.

In response to customer and partner requests for pollen data in Japan, we are excited to announce that data for Japanese cedar and cypress trees, the two main sources of pollen allergens in Japan, has been added to the Pollen API from Google Maps Platform.

Week of Aug 19 – Aug 23

We are excited to announce that we’re adding support for NVIDIA L4 GPUs to Cloud Run, in preview. Developers love Cloud Run for its simplicity, fast autoscaling, scale-to-zero capabilities, and pay-per-use pricing. Those same benefits come into play for real-time inference apps serving open gen AI models. Check out the launch blog, and watch demos from the launch event webinar, Run AI on Cloud Run.

We are excited to announce that Google Cloud Functions is now Cloud Run functions: event-driven programming in one unified serverless platform. This goes beyond a simple name change. We’ve unified the Cloud Functions infrastructure with Cloud Run, and developers of Cloud Functions (2nd gen) get immediate access to all new Cloud Run features, including NVIDIA GPUs. Read the launch blog and watch demos from the launch event webinar, Run AI on Cloud Run.

Week of Aug 5 – Aug 9

Google’s Workforce Identity Federation now enables Microsoft Entra ID users to access Google BigQuery from Microsoft Power BI with single sign-on.
No users or groups need to be provisioned in Google Cloud, as Workforce Identity Federation leverages a syncless federation capability, using attribute-based access control to authorize access to BigQuery based on Microsoft Entra user attributes such as group membership. Refer to our documentation to learn more.

We are excited to announce the preview of SQL support in Bigtable, bringing Google’s pioneering NoSQL database to a broader developer audience. Bigtable uses GoogleSQL, the same SQL dialect used by BigQuery, making it easier to use Bigtable as a low-latency analytics serving layer in combination with BigQuery’s newly announced continuous queries. It does so with extensions that support Bigtable’s signature data model, so you can use SQL without giving up the flexibility that comes with a NoSQL database. It also simplifies migrations from open-source databases such as Apache Cassandra. With over 100 new functions, from JSON processing capabilities to kNN for gen AI and HLL for real-time analytics, SQL opens the door to many new possibilities with Bigtable. Learn more in our detailed blog post.

We are excited to announce the public preview of BigQuery continuous queries, a groundbreaking new feature that lets users run continuously processing SQL statements that can process, analyze, and transform data as new events arrive in BigQuery, ensuring insights are always up to date. Native integration with the Google Cloud ecosystem unlocks the ability of Vertex AI and Gemini to perform machine learning inference on incoming data in real time, as well as streaming replication of continuous query results to Pub/Sub topics, Bigtable instances, or other BigQuery tables. Read the full blog and try it out!

AlloyDB’s autopilot capabilities (automatic memory management, adaptive autovacuum, automatic storage tiering, and automatic data columnarization and query rewrite) make management highly efficient and easy.
AlloyDB eliminates the drudgery of maintaining a PostgreSQL database by using advanced, self-tuning machine learning algorithms behind the scenes. In this blog, we look at a real-world example of AlloyDB adaptive autovacuum at work and how AlloyDB cluster storage space is released.

Google Cloud Identity Platform, our consumer identity solution, now supports passkeys. With passkeys, developers can authenticate their app’s end users securely, protecting them from account takeover attacks such as phishing and leaked credentials. To join the private preview, contact your Google account team.

Week of July 15 – July 19

Google Cloud is excited to launch the Modern SecOps Masterclass, now available on Coursera. This course equips security professionals with cutting-edge skills to modernize their security operations centers (SOCs) using our Autonomic Security Operations framework and Continuous Detection, Continuous Response (CD/CR) methodology. Read the full blog and enroll now.

Learn how to achieve strong consistency in Cloud Bigtable for your next big data solution. Bigtable is ideal for storing large amounts of data in a key-value store while supporting high read and write throughput at low latency for fast access. Bigtable provides both eventual consistency and strong consistency. This blog discusses achieving strong data consistency in a multi-cluster Bigtable instance. Read the full blog.

Week of June 24 – June 28

Introducing Google Cloud Marketplace Channel Private Offers, enabling customers, ISV partners, and channel partners to efficiently transact private offers via reseller-initiated sales of third-party solutions listed on the Google Cloud Marketplace. This differentiated program also empowers channel partners to manage the customer relationship, from billing and collections to revenue recognition.
Read the full blog.

A benchmark study (in collaboration with Yahoo) compares the cost and performance of Apache Flink and Google Cloud Dataflow for two specific streaming data processing use cases. The goal of the study was to determine the most cost-effective platform for these use cases by establishing a fair comparison methodology and controlling variables such as throughput and workload. The results indicate that, with some optimization, Dataflow can perform on par with Apache Flink. Read the full blog.

“Secure Gateways: Mutual TLS for Ingress Gateway” discusses the implementation of mutual TLS (mTLS) for enhanced security in ingress gateways. It explains how mTLS ensures both client and server authentication through certificates, going beyond traditional server-only verification. The article explores the setup process and the benefits of using mTLS, emphasizing its role in establishing secure communication channels in modern cloud architectures. Read the full blog.

“Wildcard certificates with Ingress Gateway” provides a guide on how to use wildcard certificates to secure multiple services behind a single Istio ingress gateway. This simplifies certificate management and improves the user experience by allowing seamless connections across different services within the same domain. The article demonstrates the configuration process step by step and explains how wildcard certificates are matched to incoming requests. Read the full blog.

Week of June 17 – June 21

Learn how to leverage BigQuery vector search to analyze your logs and asset metadata stored in BigQuery. Using vector search, you can find semantically similar logs, which is helpful in use cases such as outlier detection, triage, and investigation. This how-to blog walks you through the setup, from processing logs and generating vector embeddings to analyzing vector search results.
It includes sample SQL queries that can be adapted for your own logs and use cases. Read the full blog.

Nuvem, first announced last year, is a transatlantic subsea cable system that will connect Portugal, Bermuda, and the United States. We are now working with the Regional Government of the Azores to extend the system to the Azores as well. Named after the Portuguese word for “cloud,” Nuvem will improve network resiliency across the Atlantic, helping meet growing demand for digital services and further establishing its landing locations as digital hubs.

Week of June 10 – June 14

General availability of A3 Mega, a new instance type in the A3 VM family. A3 Mega is powered by the NVIDIA H100 Tensor Core GPU and delivers a 2.4x improvement in large-scale training performance over A3 instances, with 2x the GPU-to-GPU networking bandwidth of A3 instances. Enhanced GPUDirect-TCPXO networking offloads GPUDirect memory access from the CPU, providing direct access through the NIC (network interface card) to GPU memory, based on Titanium TOPs, which improves the performance of multi-node distributed training workloads.

Simplify your network: The Cloud Networking product management and engineering team will be traveling across US cities in June, July, and September. Learn how Cross-Cloud Network can transform your infrastructure. The workshop will address Cross-Cloud Networking for hybrid and multicloud enterprises with distributed applications, internet-facing content and applications, security, and AI-assisted network operations with Gemini Cloud Assist. Join us at one of the following Google office locations and meet the experts, who will share the latest innovations, use cases, and demos. Register here.

Learn how you can leverage the cloud deployment archetypes (zonal, regional, multi-regional, global, hybrid, and multicloud) to architect cloud topologies that meet your workload’s requirements for reliability, cost, performance, and operational simplicity.
Read the full blog.

Week of May 20 – May 24

Maximize performance and optimize spend with Compute Engine’s latest general-purpose VMs, N4 and C4. N4’s flexible configurations and price-performance gains help optimize costs, while C4 provides top-tier performance for demanding applications. With N4 and C4, you get tailored solutions for all your general-purpose workloads, so you can lower the total cost of running your business without compromising on performance or workload-specific requirements. Learn more here.

Week of Apr 22 – Apr 26

Simplify your connectivity to Google by using a Verified Peering Provider instead of Direct Peering. Verified Peering Providers handle all of the complex connectivity, allowing you to focus on your core business. Learn more here.

Week of Apr 15 – Apr 19

New training in AI, data analytics, and cybersecurity, designed to expand on-ramps to tech careers through colleges and employers. Learn more.

Week of Apr 1 – Apr 5

Security Command Center (SCC) Enterprise is now generally available (GA). It is the industry’s first cloud risk-management solution that converges cloud security and enterprise security operations into a single platform, supercharged by Mandiant expertise and AI. Learn more in our announcement blog.

Identify common container runtime attacks, analyze suspicious code, and use natural language processing to pinpoint malicious scripts with GKE threat detection, powered by Security Command Center. Now in public preview.

Get a fully managed compliance service that automatically delivers end-to-end coverage for GKE, scanning for issues against the most important benchmarks, with GKE compliance, now in public preview. Near-real-time insights are available in a centralized dashboard, with compliance reports automatically produced for you.

Streamline your GCE backup strategy!
With tag-based backups in Google Backup and DR, protection is automated: new VMs with the right tags are protected immediately, saving you time and increasing reliability. Read more on the blog here.

Differential privacy enforcement with privacy budgeting is now available in BigQuery data clean rooms, so organizations can prevent data from being re-identified when it is shared.

Week of Mar 18 – Mar 22

Google Kubernetes Engine (GKE) and the NVIDIA NeMo framework can be used to train large language models (LLMs). With the increasing demand for efficient and scalable LLM training, the need for large-scale GPU fleets with high-speed networking is growing rapidly. GKE offers a comprehensive set of features that make it suitable for enterprise-level training and inference. This blog post shows how generative AI models can be adapted to your use cases by demonstrating how to train models on GKE using the NVIDIA NeMo framework.

Cloud Run now supports volume mounts! Mount a Cloud Storage bucket or NFS file share as a volume to easily serve static assets, access app configuration data, or access an AI/ML model. Learn more in our blog post.

Week of Mar 11 – Mar 15

Datastream adds support for SQL Server sources, now in preview. With existing support for MySQL, PostgreSQL, and Oracle, support for SQL Server sources extends the reach of Datastream and empowers you to replicate data from a range of relational sources to several Google Cloud services, such as BigQuery, Cloud Storage, AlloyDB, and Spanner. Read more in the blog here.

Week of Feb 5 – Feb 9

Check out this new blog to learn more about the Integrated Commerce Network (ICN), delivered by Kin + Carta and built on Google Cloud.
The ICN features three of our premier digital commerce partners, Bloomreach, commercetools, and Quantum Metric, in an integrated end-to-end solution.

Week of Jan 29 – Feb 2

IDC finds 318% ROI from migrating to Google Cloud IaaS: Check out the latest IDC research study to learn how organizations worldwide are benefiting from adopting Google Cloud Infrastructure as a Service.

Week of Jan 15 – Jan 19

Check out the latest generative AI training available from Google Cloud: Take a look at our top ten trainings in Duet AI to help boost your productivity in 2024.

Week of Jan 1 – Jan 5

The year in Google Cloud: Top news of 2023: A look back at the biggest stories of 2023 from Google Cloud, covering generative AI, DevOps, containers, data and databases, security, and more.
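The Best of N announcement above (week of Dec 9) rests on a simple pattern: generate several candidate answers, score each draft, and return the best one. Here is a minimal, framework-free Python sketch of that selection step; the `toy_score` function is a stand-in for the Check Grounding API’s groundedness and helpfulness scores, not the real API.

```python
def best_of_n(drafts, score):
    """Return the highest-scoring draft among N candidate answers."""
    return max(drafts, key=score)

def toy_score(draft, source="BigQuery is a serverless data warehouse."):
    # Stand-in scorer: fraction of the draft's words that also appear
    # in the source document (a real pipeline would call a grounding API).
    draft_words = set(draft.lower().split())
    source_words = set(source.lower().split())
    return len(draft_words & source_words) / max(len(draft_words), 1)

drafts = [
    "BigQuery is a serverless data warehouse.",  # well grounded
    "BigQuery is a spreadsheet application.",    # poorly grounded
]
print(best_of_n(drafts, toy_score))  # picks the well-grounded draft
```

Because only the scoring function touches the model or API, Best of N can be layered onto an existing RAG pipeline without retraining anything, which is the point of the announcement.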
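The BigQuery vector search workflow above (week of June 17) reduces, at its core, to nearest-neighbor search over embedding vectors. This toy Python sketch shows the idea with hand-made three-dimensional "embeddings"; in practice the vectors come from an embedding model and the search runs inside BigQuery.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: higher means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy log "embeddings" (real ones come from an embedding model).
logs = {
    "disk full on /var": [0.9, 0.1, 0.0],
    "out of disk space": [0.7, 0.3, 0.1],
    "user login succeeded": [0.0, 0.1, 0.9],
}

# Pretend embedding of the query "no space left on device".
query = [0.85, 0.15, 0.05]
nearest = max(logs, key=lambda k: cosine_similarity(query, logs[k]))
print(nearest)  # a disk-related log, not the login log
```

The same ranking is what a nearest-neighbor index computes at scale: semantically similar logs cluster together even when they share no exact keywords, which is what makes this useful for triage and outlier detection.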
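Privacy budgeting (mentioned above for BigQuery data clean rooms) combines two ideas: add calibrated noise to each released statistic, and deduct that release’s epsilon cost from a finite budget, refusing further queries once it is spent. The following standalone Python sketch uses the classic Laplace mechanism; it illustrates the concept only and is not how BigQuery implements it.

```python
import random

class PrivacyBudget:
    """Tracks a total epsilon budget across noisy releases."""

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def noisy_count(self, true_count, epsilon):
        """Spend `epsilon` of the budget to release a noisy count."""
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon
        # Laplace noise with scale = sensitivity / epsilon (sensitivity is 1
        # for a count), sampled as the difference of two exponential variates.
        scale = 1.0 / epsilon
        noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
        return true_count + noise

budget = PrivacyBudget(total_epsilon=1.0)
print(budget.noisy_count(true_count=120, epsilon=0.5))  # roughly 120, plus noise
print(budget.remaining)  # 0.5
```

Smaller epsilon values mean more noise per query but more queries before the budget runs out; that trade-off is exactly what the budgeting feature makes explicit when data is shared.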
Source: Google Cloud Platform

The Year in Google Cloud – 2024

If you’re a regular reader of this blog, you know that 2024 was a busy year for Google Cloud. From AI to Zero Trust, and everything in between, here’s a chronological recap of our top blogs of 2024, according to readership. You’ll probably see some favorites on this list, and discover a few that you missed the first time around. 
January
We started the new year strong, removing data transfer fees for anyone moving data off of our platform. Translation: Anyone doing cool things on Google Cloud (like using generative AI to analyze their microservices deployment) is doing it because they want to, not because they have to. And in business news, we shared how to make the most out of your data and AI in the coming year. 

Cloud switching just got easier: Removing data transfer fees when moving off Google Cloud

Figuring out microservices running on your GKE cluster with help from Duet AI

Transform: Five ways your data can thrive with generative AI in 2024

February
From local GPUs, to model libraries, to distributed system design, the second month of 2024 was the first of many to come where AI topics dominated the charts. Our Transform team explored gen AI’s impact on various industries.

No GPU? No problem. localllm lets you develop gen AI apps on local CPUs

Google Cloud expands access to Gemini models for Vertex AI customers

Coming of age in the fifth epoch of distributed computing, accelerated by machine learning

Your RAGs powered by Google Search technology, part 1

Transform: Putting AI to work: Where gen AI is impacting industries in 2024

March
If it wasn’t already obvious, this month’s top-read blogs showed that our core audience is developers pushing the boundaries of innovation. Business leaders, meanwhile, read about best practices for securely deploying AI on Google Cloud. 

Domain-specific AI apps: A three-step design pattern for specializing LLMs

Announcing Anthropic’s Claude 3 models in Google Cloud Vertex AI

Transform: To securely build AI on Google Cloud, follow these best practices

April
Watch out, here comes Google Cloud Next, where we made a record 218 announcements; the top three are listed here. Readers were also keen to hear about how Citadel Securities built out a research platform on Google Cloud. 

Introducing Google Axion Processors, our new Arm-based CPUs

Powering Google Cloud with Gemini

BigQuery is now your single, unified AI-ready data platform

Transform: How Citadel Securities is reimagining quantitative research on the cloud

May
We don’t always get it right, but when there’s a problem, we’re committed to providing you with accurate, timely information with which to make your own assessments. We’re also committed to making you, and customers like McLaren Racing, go really, really fast when developing new AI-based applications.  

Sharing details on a recent incident impacting one of our customers

RSA ’24: Introducing Google Threat Intelligence: Actionable threat intelligence at Google scale

Announcing Trillium, the sixth generation of Google Cloud TPU

Improving connectivity and accelerating economic growth across Africa with new investments

Transform: AI at the track: How McLaren Racing leverages terabytes of data each race for a winning edge

June
Whether you wanted to modernize your databases, deliver higher system reliability, create really cool AI-powered apps, or learn how legendary companies tackle data management, the Google Cloud blog was your go-to source midway through the year. 

Accelerating cloud transformation with Google Cloud and Oracle

Free to be SRE, with this systems engineering syllabus

Announcing Anthropic’s Claude 3.5 Sonnet on Vertex AI, providing more choice for enterprises

Google Cloud expands grounding capabilities on Vertex AI

Transform: We are Legend: How Goldman Sachs’ open-source data platform democratizes information

July
We talk a lot about “meeting customers where they are.” Sometimes that means a disaster zone, a remote research station, or a truck cruising down the highway. Over on Transform, you read about the history of our custom Axion and TPU chips. 

Bringing cloud and AI capabilities to the tactical edge: Google Distributed Cloud air-gapped appliance is generally available

VMware Cloud Foundation on Google Cloud VMware Engine: 20% lower price and up to 40% in migration incentives

Transform: Why Google keeps building custom silicon: The story behind Axion

Transform: TPU transformation: A look back at 10 years of our AI-specialized chips

August
Just when you thought you knew how to run AI inference, your available graph database options, or the name of Google Cloud’s event-driven programming service, we went and changed things up. We like to keep you on your toes ;) And business readers got a first look at AI agents — more to come on this. 

Run your AI inference applications on Cloud Run with NVIDIA GPUs

GenOps: learning from the world of microservices and traditional DevOps

Real-time in no time: Introducing BigQuery continuous queries for up-to-the-minute insights

Introducing Spanner Graph: Graph databases reimagined

Cloud Functions is now Cloud Run functions — event-driven programming in one unified serverless platform

Transform: So much more than gen AI: Meet all the other AI making AI agents possible

September
You’ve been generating (and storing) business data for years. Now, we’re making it easier for you to make sense of, and actually use, that data. Speaking of using data, the Transform team compiled a jaw-dropping list of the real-world ways customers are using gen AI in their organizations. 

Chat with your business data – Conversational Analytics comes to Gemini in Looker

Find key insights faster with a new contribution analysis model in BigQuery ML

Transform: 185 real-world gen AI use cases from the world’s leading organizations

October
According to this month’s most-read blog, 75% of you rely on AI for at least one daily professional responsibility, including code writing, information summarization, and code explanation, and experience “moderate” to “extreme” productivity gains. So it was no big surprise that business leaders wanted to read about how to develop an AI strategy. 

Highlights from the 10th DORA report

Gemini models are coming to GitHub Copilot

Revolutionizing SQL with pipe syntax, now available in BigQuery and Cloud Logging

BigQuery tables for Apache Iceberg: optimized storage for the open lakehouse

Transform: How to build an effective AI strategy

November
Not content to hold the existing record for most nodes in a Kubernetes cluster (15,000), we went ahead and more than quadrupled it, to the delight of AI unicorns. But whether you work for an AI unicorn or just a plain old zebra, all Google Cloud users need to start using multi-factor authentication next year, as well as learn how to avoid common AI pitfalls. 

65,000 nodes and counting: Google Kubernetes Engine is ready for trillion-parameter AI models

Mandatory MFA is coming to Google Cloud. Here’s what you need to know

Transform: Unlocking gen AI success: Five pitfalls every executive should know

December
We’re closing out the year on an AI high note, with the availability of amazing new image and video generation models, as well as the new Trillium TPU, which Google used to train Gemini 2.0, our most capable AI model… yet. Be on the lookout for how these technologies — and many others — will reshape your industry and how you work in the coming year. 

Veo and Imagen 3: Announcing new video and image generation models on Vertex AI

Announcing the general availability of Trillium, our sixth-generation TPU

Introducing Google Agentspace: Bringing AI agents and AI-powered search to enterprises

Transform: AI’s impact on industries in 2025

To all our loyal readers and customers, thanks for an amazing 2024, and here’s to an even more exciting and productive 2025!

AI Playground: Where learning and innovation converge in the heart of London

AI is rapidly transforming industries and redefining the future of work. However, many organizations face a significant hurdle: bridging the knowledge gap and acquiring the necessary skills to effectively harness the power of AI. 
Recognizing this challenge, Google Cloud is set to launch the AI Playground in Shoreditch, Central London, in the first quarter of 2025. This innovative space will serve as a dynamic hub for businesses and individuals to demystify AI, explore its potential, and develop practical expertise.
More than a showcase — dive deep into AI
Google’s powerful Gemini model family — now in its supercharged second generation — takes center stage at the AI Playground, featuring several interactive demos that put its multimodal and agentic capabilities on display. Among the capabilities guests can experience and experiment with firsthand are Gemini’s abilities to analyze complex data, generate creative formats, and power innovative solutions.
The AI Playground is much more than a technology showcase. 
We’ve built the space to serve as an immersive learning environment where visitors can actively engage with AI, participate in hands-on workshops and hackathons, and connect with Google Cloud AI experts. This unique approach fosters a deeper understanding of AI concepts and encourages experimentation with cutting-edge tools and techniques.


Addressing the AI skills gap
The AI Playground directly addresses the growing need for AI skills development in today’s rapidly evolving technological landscape, providing a dedicated space for:

Hands-on experimentation: Gain practical experience with cutting-edge AI tools and techniques, moving beyond theoretical knowledge to real-world application.

Skills development: Build on-demand AI skills through interactive workshops and hackathons led by Google Cloud experts, equipping individuals and teams with the expertise needed to thrive in the AI era.

Community building: Connect with fellow AI enthusiasts, share knowledge, and facilitate collaboration, creating a vibrant ecosystem for learning and innovation.

Real-world inspiration: Explore AI applications across diverse industries and discover new possibilities for your organization, sparking creativity and driving the development of novel AI solutions.

Empowering the future of AI
By providing accessible learning opportunities and fostering a thriving AI community, the AI Playground empowers individuals and businesses to embrace the transformative power of AI and contribute to shaping a better future. It’s a place where curiosity meets innovation, learning is hands-on, and the potential of AI is unlocked. Mark your calendars for Q1 2025 and prepare to embark on your AI journey at Google Cloud’s AI Playground.