When two become one: Integrating Google Cloud Organizations after a merger or acquisition

Congratulations! Your company just acquired or merged with another organization, beginning an important new chapter in its history. But like with many business deals, the devil is in the details — particularly when it comes to integrating the two companies' cloud domains and organizations. In this blog post, we look at how to approach mergers and acquisitions (M&A) from the perspective of Google Cloud. These are the best practices that your Google Cloud Technical Account Managers follow — or that we recommend you follow if you plan to perform the integration yourself.

Although there are various M&A scenarios, here are the two most common ones we will focus on:

- Both entities engaged in the M&A have some presence on Google Cloud and are looking for some level of integration
- Only one of the entities has a presence on Google Cloud, and is looking at the best ways to work together

Depending on your situation, your approach to integrating the two companies will vary substantially.

When both companies have a Google Cloud presence

In the first scenario, let's assume company A is acquiring company B. Prior to the M&A, both companies have their own Google Cloud Organizations — the top-level structures in the Google Cloud resource hierarchy — and have one or more billing accounts associated with them. There are also various Folders and Projects below each Google Cloud Organization. In this scenario, here are the key questions to ask:

- How do you plan to integrate/consolidate two distinct Google Cloud Organizations?
- How do you plan to organize the billing structure?
- How do you handle Projects under the two Organizations?
- What is the identity management strategy for the two Organizations?

For each of these key questions, go ahead and formulate a detailed plan of action. If you have access to the Technical Account Manager service through Google Cloud Premium Support, you can reach out to them to further develop this plan.

Understanding Google Cloud Organizations

From an organizational integration standpoint, when each entity in an M&A has its own Google Cloud Organization, you have various options: no integration, partial, or full.

- No integration – When company B operates as an independent entity from company A, no migration is required. One caveat is if company A has negotiated better pricing terms/discounts and support packages with Google Cloud. In that case, you can sign an affiliate addendum by working with your Google Cloud account team to help unlock the same benefits for company B.
- Partial integration – Some projects move over to company A from company B and others stay with company B. There can be some shared access between the two companies, and each of the organizations can continue to use their existing identity providers. This can be a self-serve or a paid services engagement with Google Cloud, depending on the complexity of the two companies and how many project migrations need to take place between them.
- Full integration – Company B is fully incorporated into company A. This means you go through a full billing, Google Workspace identity, and project migration from company B into company A. This can be a complex process, and we highly recommend engaging your Google Cloud account teams to scope out a paid services engagement to go through this transition.

Planning your project migration

No matter what you want your end state to look like, project migration requires careful planning.
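Before getting into the details, it helps to see what a single project move looks like in practice. The Resource Manager API mentioned in the next paragraph is the underlying mechanism; a minimal, hedged sketch of its gcloud wrapper follows, with placeholder IDs (on some gcloud versions the command lives in the beta track as gcloud beta projects move), and the prerequisites and permissions discussed below still apply:

# Move one project from the acquired company's Organization into the
# acquiring company's Organization (IDs are placeholders; the caller needs
# the documented permissions on both the project and the destination)
gcloud projects move example-project-id --organization=123456789012

Repeat per project, or script the call over a list of project IDs exported from the source Organization.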
Again, if you have an assigned Technical Account Manager, please reach out to them to ensure that you have a conversation around best practices before starting this migration. If you're taking a self-service approach, at a high level we recommend leveraging the Resource Manager API to manage your project migrations. Do keep in mind that there are several prerequisites and required permissions, documented here, that need to be assigned before going down this path. In addition, please be sure to read the billing and identity management considerations below to ensure that you are covering all of the bases associated with such a migration, as your choices can fundamentally alter your Google Cloud footprint.

Billing considerations

When deciding how to structure your Organizations and billing accounts, our recommendation is to always limit the number of Organization nodes and use the Folder structure to manage departments/teams within it. Creating additional Organization nodes is only advised in cases where you require a level of isolation for certain Projects from central administration for a specific business reason, for example, if the company being acquired already has its own Organization node and there is a business justification to let it operate as a standalone entity.

Warning: If you have multiple Organization nodes, be aware that you will not have central visibility across all your organizational resources, and that policy management across different Organization nodes can be cumbersome. You will also have to manage multiple Workspace accounts and manage identities across them, which can be difficult, especially when operating at scale.

From a billing account management perspective, our recommendation is to create one central billing account that lives within the Organization node, with tags and labels incorporated for additional granularity. However, there are a few business cases which warrant the creation of additional billing accounts, such as:

- You need to split charges for legal or accounting purposes
- Invoices are paid in multiple currencies
- You need to segregate usage to draw down on a Google Cloud promotional credit
- Subsidiaries need their own invoice

Keep in mind that committed-use and spend-based discounts and promotional credits cannot be shared across billing accounts and are provisioned on a per-billing-account basis. As such, more billing accounts can make it harder to leverage these discounts and credits.

Identity management

As you might expect, merging two entities has identity management implications. Cloud Identity is the solution leveraged by Google Cloud to help you manage your user and group identities.
Even if the acquired company only uses the productivity products that are part of Google Workspace, the identities would still be managed by Cloud Identity.

Google Workspace considerations

To move large amounts of content into a Google Workspace domain, we recommend one of three options, depending on your end goal and data complexity:

- For general migrations: Leverage Google Workspace Migrate to move data into your Workspace domain from either another Workspace domain or a third-party productivity solution
- For manual migrations: Use the Export tool to move your organization's data to a Cloud Storage archive so you can selectively download exported data by user and service
- For complex Google Workspace scenarios: Speak with your Google Cloud Technical Account Manager about the possibility of using a custom scoped engagement to merge two Google Workspace environments without business interruption

When only one company is on Google Cloud

Now, let's consider the scenario where only company A has a presence on Google Cloud and company B does not. The approach you take to integrate the two organizations largely depends on your desired end state — full, partial, or no integration. If the plan is to eventually integrate company B into company A, your approach here will have a lot of similarities with the 'full integration' option mentioned above — just at a later point in time. You may also run into a scenario where company B has a presence on an alternative cloud platform and you need to migrate resources into or out of Google Cloud. Again, similar to the partial integration option called out above, a paid engagement or a self-service exercise could be a good fit, depending on the complexity of the desired end state.

Here to help

A merger or acquisition is an exciting milestone for any company, but one that needs to be managed carefully. Once you have carefully reviewed these considerations, develop a plan of action for your organization. You can also engage Google's Professional Services for a paid engagement or Google's Technical Account Management service for a self-managed process to achieve the desired results. If you are going through, or considering going through, M&A at your organization and have a different scenario than what we have discussed, please feel free to reach out to your account teams for guidance or contact us at https://cloud.google.com/contact.

Related article: Google Cloud's 5 ways to create differentiated value in post-merger integrations
Source: Google Cloud Platform

Package management for Debian/Ubuntu operating systems on Google Cloud

Most customers are operating in a restrictive environment with limited egress connectivity to the Internet. This results in customers investing in third-party tools such as JFrog Artifactory, Nexus, etc. to store operating system packages and libraries. There is a pressing need to download these dependencies without going to the internet, and also to avoid investing in a third-party tool if there are budgetary or time constraints. In this blog, we will describe how the packages.cloud.google.com subdomain works and how it helps start to address these challenges. This solution focuses on how to download Debian/Ubuntu packages from the Google-managed repositories; however, the repo does not contain packages for popular programming languages such as Python, JavaScript, etc.

So let's get started…

Apt package manager

If you create a Linux VM on Google Cloud with a Debian or Ubuntu operating system, one of the first commands you have to run before downloading a package is the one that downloads package information from all configured sources:

sudo apt-get update

Apt is a package management tool that downloads packages from one or more software repositories (sources) and installs them onto your computer. A repository is generally a network server, such as the official Debian stable repository. The main Apt sources configuration file is at /etc/apt/sources.list. To add custom sources, creating separate files under /etc/apt/sources.list.d/ is preferred.

Understanding the configuration file

Let's take a look at the files in the /etc/apt directory, starting with the /etc/apt/sources.list file. In the screenshot above, the /etc/apt/sources.list file contains multiple entries that notably show the archive type, repository URL, distribution, and component. For more details on each attribute for the Debian distribution, please refer to this link.

Archive type: The first word on each line, deb or deb-src, indicates the type of archive. deb indicates that the archive contains binary packages (deb), the pre-compiled packages that we normally use. deb-src indicates source packages, which are the original program sources plus the Debian control file (.dsc) and the diff.gz containing the changes needed for packaging the program. Source packages provide you with all of the necessary files to compile, or otherwise build, the desired piece of software.

Repository URL: The next entry on the line is a URL to the repository that you want to download the packages from. The main list of Debian repository mirrors is located here.

Distribution: The 'distribution' can be either the release code name / alias (stretch, buster, bullseye, bookworm, sid) or the release class (oldoldstable, oldstable, stable, testing, unstable), respectively.

Component: main consists of DFSG-compliant packages, which do not rely on software outside this area to operate. These are the only packages considered part of the Debian distribution.

Google startup process

Let's see what is under the sources.list.d directory that Google adds as part of the startup process. There are a couple of files, and both contain links to Google-managed repositories (packages.cloud.google.com). However, the repositories that are added by default will only help us download gcloud CLI components such as google-cloud-sdk-datalab, google-cloud-sdk-spanner-emulator, and kubectl. For example, if you wanted to learn which repository a potential package would be downloaded from, the command below shows which repository and version you would be directed to.
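A hedged example using apt-cache policy (kubectl is used here purely as an example package name):

# Show which repository and candidate version apt would use for a package
apt-cache policy kubectl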
The screenshots below show that apt will try to look for those packages in the repositories that are configured by default in gce_sdk.list and google-cloud.list. But if we run a sudo apt-get update command, it will fail if we do not have egress connectivity to the internet: when it tries to connect to the external Debian repository that is configured by default in the /etc/apt/sources.list file, it will time out.

Packages.cloud.google.com – Apt mirror repo

Packages.cloud.google.com is a repository that Google maintains, hosting mirror repositories for popular Debian/Ubuntu releases. See the table below to understand the mapping between the OS release code names indicated by the arrows and the OS versions. Please note that Ubuntu repositories are subdivided into base (no suffix), updates (-updates), security (-security), backports (-backports), universe (-universe), security universe (-security-universe), and updates universe (-updates-universe). This subdivision has to be followed when configuring repositories on Ubuntu instances.

Packages.cloud.google.com – demo

For the rest of this demo, I will be working out of a Debian OS VM. I will verify which version I am running and modify the apt sources accordingly to point to the right URLs. The approach shown in subsequent steps can be extended to Ubuntu OS as long as you point to the appropriate repository URLs, following the Ubuntu-specific repository structure described earlier. I will create a new file under the /etc/apt/sources.list.d directory called "google-packages.list" that points to the appropriate repository URLs, based on the semantics explained in the sources.list format:

cat << EOF > google-packages.list
deb https://packages.cloud.google.com/mirror/cloud-apt/bullseye bullseye main
deb https://packages.cloud.google.com/mirror/cloud-apt/bullseye-security bullseye-security main
deb https://packages.cloud.google.com/mirror/cloud-apt/bullseye-updates bullseye-updates main
EOF

Now that I have configured the alternate repository, let's test by installing a Debian package, htop. Since I still have a file at /etc/apt/sources.list that refers to Debian mirrors, our update will still check that location first before falling back on the new packages.cloud.google.com mirror repositories.

When multiple Apt repositories are enabled, a package can exist in several of them. To know which one should be installed, Apt assigns priorities to packages. The default is 500. If the packages have the same priority, the package with the higher version number (most recent) wins. If packages have different priorities, the one with the higher priority wins. Since our package is now installed into the local OS, we see a priority (100) for locally installed packages.

Prerequisites

To utilize packages.cloud.google.com, there are also other networking configurations that you need to set up in Google Cloud, illustrated below.

1. Ensure that the subnet where the VM is created has Private Google Access enabled.
2. Create a firewall rule that allows egress to the private VIP. Packages.cloud.google.com is only supported by the Private Google API endpoint.
3. Create DNS records to resolve the packages.cloud.google.com domain.
4. Create a route to the private Google API endpoints.
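A hedged sketch of these four steps with gcloud follows. The network, subnet, and region names are placeholders, and the 199.36.153.8/30 range is the VIP commonly documented for the private.googleapis.com endpoint; verify the exact ranges, records, and flags against the current Private Google Access documentation before applying them.

# 1. Enable Private Google Access on the subnet that hosts the VM
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# 2. Allow egress to the Private Google API VIP range
gcloud compute firewall-rules create allow-private-google-apis \
    --network=my-vpc --direction=EGRESS --action=ALLOW \
    --rules=tcp:443 --destination-ranges=199.36.153.8/30

# 3. Resolve packages.cloud.google.com to the VIP via a private Cloud DNS zone
gcloud dns managed-zones create google-packages \
    --visibility=private --networks=my-vpc \
    --dns-name=packages.cloud.google.com. \
    --description="Private zone for packages.cloud.google.com"
gcloud dns record-sets create packages.cloud.google.com. \
    --zone=google-packages --type=A --ttl=300 \
    --rrdatas=199.36.153.8,199.36.153.9,199.36.153.10,199.36.153.11

# 4. Route the VIP range through the default internet gateway
#    (required when the VPC has no 0.0.0.0/0 route)
gcloud compute routes create private-google-apis \
    --network=my-vpc --destination-range=199.36.153.8/30 \
    --next-hop-gateway=default-internet-gateway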
Creating this route is necessary if you do not have the default route to the internet (0.0.0.0/0).

Summary

In this blog post, we provided an overview of the packages.cloud.google.com subdomain and how it can be used to download software packages for Debian and Ubuntu distributions. We also covered the networking requirements that are needed to make it work in a tightly controlled environment. To view the contents of the repository, please refer to this link.

Related article: VM Manager simplifies compliance with OS configuration management
Source: Google Cloud Platform

Pride Month: Q&A with Beepboop founders about more creative, effective approaches to learning a new language

June is Pride Month—a time for us to come together to bring visibility and belonging, and to celebrate the diverse set of experiences, perspectives, and identities of the LGBTQ+ community. Over the next few weeks, Lindsey Scrase, Managing Director, Global SMB and Startups at Google Cloud, will showcase conversations with startups led by LGBTQ+ founders and how they use Google Cloud to grow their businesses. This feature highlights Beepboop and its co-founders, Devon Saliga, CEO, and Matt Douglass, CTO.

Lindsey: Thank you so much, Devon and Matt, for taking the time to speak with me today. Let's start by learning more about your company – what inspired you to found Beepboop?

Devon Saliga: As a closeted gay kid, language learning provided me an escape to another world. This passion led me to Dartmouth College's Rassias Center, which is known for developing innovative language drills where a highly trained instructor guides full classrooms of students through rapid-paced, round-robin speaking exercises proven to be 40% more effective than traditional classroom techniques in helping students gain fluency. It was the opposite of boring lectures and rote memorization of vocab and grammar. Through these methods I learned Japanese, which opened up a world of opportunity, helping me to get my first job at Goldman Sachs. Sadly, not everyone has access to this type of language education. That's why we created Beepboop, where our technology gives all language teachers the ability to run these effective exercises in both their online and in-person classrooms.

Lindsey: What an incredible way to learn a new language, and clearly it's quite effective. My wife is Austrian and I've been slowly trying to learn German, and I agree that conversation and engagement with the language is vastly more effective than trying to memorize! So from there, where did the name Beepboop come from, and what makes the company unique?

Devon: Our classes are like massive multiplayer games of language-learning hot potato where spoken challenges are passed from student to student. Students can hop into ongoing live classes without any scheduling and start playing. Speaking a language can be intimidating, so our instructors say "beepboop" to let students know when they've made a mistake. It's a lighthearted word that puts a smile on everyone's face.

Lindsey: Maybe I'll try out using "beepboop" with my kids when they make a mistake; it has a nice ring to it! So what languages do you teach on Beepboop?

Devon: Our go-to-market languages are English and Spanish, and our curriculum is geared toward employers who want to recruit, retain, and upskill their workforce through language learning. Globally, over $9 billion is spent on business English education alone. It's a gigantic market, and our innovative group-based learning approach enables companies to offer Beepboop to more of their employees for less. We have over 100,000 students and have carved out some really interesting niches for ourselves, like medical Spanish.

Devon Saliga, CEO & Co-Founder of Beepboop

Lindsey: Clearly there is a market for this, and it's an incredible opportunity to have an impact in helping so many people. What major challenges did your team overcome in getting to where you are today?

Matt Douglass: We adapted in-person language drills to support remote learning and developed unique techniques to quickly train new instructors. We initially struggled with lagged video and frozen screens because many students worldwide don't have fast internet.
In response, we built an inclusive, audio-only teaching platform that enables everyone to comfortably participate in conversational drills—without worrying about slow connections or how they look on camera.

Lindsey: OK, so it sounds like having the right platform and technology has been critical to supporting you in scaling and pivoting when needed. Why did Beepboop standardize on Google Cloud?

Devon: Before I partnered with an amazing CTO like Matt, I had to run product and engineering while focusing on business development and creating a compelling online curriculum. I honestly didn't have the time or technical skills to create a minimum viable product (MVP) from the ground up. Fortunately, Google Cloud offered easy-to-use tools, APIs, and integrated solutions such as Firebase that enabled my small team of Dartmouth students to code an alpha version of Beepboop in just three months. The Startup Success Managers also provided much-needed technical guidance and credits so that we could affordably trial different solutions.

Lindsey: You're not alone; we hear time and time again from startups who appreciate the simplicity and speed of going from concept to MVP with Firebase and our tools and APIs. I'm so glad that was your experience and that our Startup Success team provided the support you needed to get going quickly. From there, how have Google Cloud solutions helped Beepboop grow?

Matt: Beepboop now supports massive classes of up to 200 students with a customized WebRTC platform built on the highly secure Google Cloud. We use Firestore for all data that doesn't require a real-time lookup and Firebase for our React apps. We also leverage Firebase Realtime Database to automatically message teachers when students need extra help, alert students and teachers when their internet connection slows, and even power peer-to-peer language-learning games that run autonomously without live teachers! Right now our instructors are fully responsible for tracking student performance in real time and then adjusting the intensity and the pace of their classes accordingly. This becomes more and more challenging with each additional student in a class. We're aiming to simplify the process of teaching while giving our students more corrective feedback by using Google Cloud AI and machine learning products to develop deep learning algorithms that automatically detect slight mispronunciations and monitor the melodic intonations of Spanish and English.

Lindsey: It's incredible to see what you've already done in such a short period of time, and also your vision of what's next. Before we switch topics, can you share what excites you the most about Beepboop?

Devon: Seeing Beepboop positively disrupt the education industry and democratize foreign language instruction. We hear every day from our students how their language skills got them a promotion or how, just after a few months on our platform, they can now confidently interact with native speakers. Our high success rate speaks for itself, pun intended.

Matt: It's exciting to see how the technology behind Beepboop creates safe and supportive spaces for our students and instructors. Beepboop automatically mutes and unmutes microphones so that every student can equally participate in our conversational drills.
Beepboop also gives students a chance to correct their mistakes—and alerts teachers if people need extra help or time to answer a question.

Matt Douglass, CTO & Co-Founder of Beepboop

Lindsey: One of the most rewarding parts of my job is seeing how companies are using our technology to drive incredible impact in the world, and this is an amazing example of doing just that! Thank you for sharing more about Beepboop and your vision for the future. Now, given it's Pride Month, let's shift gears. As a member of the LGBTQ+ community, I am thrilled to see increasing visibility of LGBTQ+ founders. Can you talk about how being part of the LGBTQ+ community impacted your and Beepboop's success?

Matt: Even before the days of Harvey Milk, the LGBTQ+ community has always found creative ways to work together to further important causes. I've experienced the same support as an LGBTQ+ entrepreneur. Working through our community, I've met many other founders and shared ideas and strategies we're incorporating to help Beepboop succeed. We also connected with StartOut, an organization focused on building a world where every LGBTQ+ entrepreneur has equal access to lead, succeed, and shape the workforce of the future. StartOut gives us further networking opportunities too, which is why I'm talking to you today.

Devon: As part of StartOut, we joined their Growth Lab, which is a six-month accelerator that provides strategic guidance and mentorship. It was a game changer for us. Now we're connected to tons of investors and are part of a dynamic and diverse community that continues to be supportive, understanding, and encouraging.

Lindsey: I love seeing the community coming together to provide support, which as you mentioned is such a cornerstone of LGBTQ+ history. Do you have any advice for others in the LGBTQ+ community looking to start and grow their own companies?

Devon: We've learned it takes a lot of networking, listening, and collaboration to build a successful company. Don't be afraid to ask for help from family, friends, and your community—and don't be afraid to ask yourself tough questions about what you're doing and change course if needed. There are many organizations dedicated to helping the LGBTQ+ high-tech community, including StartOut, Serif, and Gaingels. Google for Small Business also offers tools and resources for LGBTQ-friendly businesses, such as LGBTQ+ friendly tags, transgender safe space attributes for business profiles, and tips to create more inclusive and innovative workplaces.

Matt: Giving back to the LGBTQ+ community by mentoring new startups is equally important. Sharing your successes and failures can help others avoid similar mistakes and bring their ideas to market faster. We're fortunate that LGBTQ+ leaders—especially the startup organizations and founder networks—have been extremely supportive of Beepboop.

Lindsey: Thank you so much for sharing those insights and resources, and I'll add a couple of others: Lesbians Who Tech and Out in Tech. I also want to thank you for all you're doing to be visible and give back. I have no doubt you're an inspiration for so many founders. So in closing, Devon, what are the next steps for Beepboop?

Devon: We look forward to working more with partners such as Google for Startups and StartOut to further democratize language learning and teach students around the world how to confidently speak a new language.

Lindsey: Thank you.
And we look forward to partnering with you to do just that!

From left in back: Devon Saliga, CEO & Co-Founder; Matt Douglass, CTO & Co-Founder; Alejandra Molina, Director of Marketing & Co-Founder. Front: Lucas Ogden-Davis, Founding Engineer

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

GKE release channels: Balancing innovation and speed of change, now with more granular controls

If you run your business on Kubernetes, you know how important it is to perform regular patching and upgrades to maintain a healthy environment. At Google Cloud, we automatically upgrade the Google Kubernetes Engine (GKE) cluster control plane, and if you enable node auto-upgrade, we also automatically upgrade and patch your cluster nodes. Moreover, you can subscribe your cluster to a release channel that meets your constraints and needs — rapid, regular, or stable. This is just one of the ways that GKE supports enterprises running production workloads, and it is broadly used by GKE customers as part of their continuous upgrade strategy. For enterprises, release channels provide the level of predictability needed for advanced planning, and the flexibility to orchestrate custom workflows automatically when a change is scheduled (e.g., informing their DevOps team when a new security patch is available).

While automating upgrades with release channels simplifies keeping track of Kubernetes versions and OSS compatibility, you may want to upgrade your cluster at specific times, for business continuity or quality assurance reasons. You may also want to control the scope of the upgrades allowed on your cluster, such as applying just patches or avoiding minor releases. This is critical, especially when you feel that an upgrade requires more qualification, testing, or preparation before it can be rolled out to production.

We recently enhanced the maintenance exclusions that you can set on clusters subscribed to GKE release channels. Previously, maintenance exclusions allowed you to specify "no upgrades" to the control plane or nodes for up to 30 days. Now, the upgrade-scope exclusion allows you to control and limit the scope of upgrades for up to 180 days, or the end-of-life date, whichever comes first. Likewise, you can preclude minor upgrades and node upgrades from being applied to your control planes and nodes with two new modes, "no_minor_upgrades" and "no_minor_or_node_upgrades". And in both cases, you can "pin" your cluster to a specific minor Kubernetes version (say, 1.21) for a prolonged period of up to six months.

GKE release channels in action

German food-delivery network Delivery Hero is one GKE customer that recently began using release channels. With 791 million orders processed in the third quarter of 2021, Delivery Hero initially chose to eliminate potential disruptions to its customers by relying on a manual process to control the timing of changes and reduce the risk that an untested update might impact availability. But this was not an ideal solution: constantly monitoring the Kubernetes release schedule, tracking version-skew compatibility, and applying security patches was cumbersome.

In an effort to balance risk mitigation and operational efficiency, Delivery Hero decided to subscribe their GKE clusters to a release channel. But in order to do it even more safely, they also defined the scope of auto-upgrades to include only patch versions, and to postpone minor upgrades. This way their GKE clusters are patched automatically to ensure security and compliance, but they hold back on minor version upgrades until they can be internally tested and qualified.

"Before the option to control upgrade scope, and especially the ability to postpone minor upgrades up to 6 months, we were struggling to align our qualification timeline with the cadence of Kubernetes OSS releases, especially with the API deprecations in recent releases," said Kumar Saurabh Sinha, Engineering Manager (Platform & SRE) at Delivery Hero.
"With upgrade scope exclusions, we managed to safely migrate our clusters and subscribe them on release channels while still having the ability to mitigate the risk of untested minor releases."

Get started with GKE release channels today. When you create a new GKE cluster, it is automatically subscribed to the 'regular' release channel by default. You can also migrate existing clusters onto release channels from the Google Cloud console or the command line. Read more here.

For example, say you want to subscribe a cluster to a release channel and also avoid minor Kubernetes upgrades for three months. You can follow these steps:

1. Ensure that the cluster is running a version supported by the channel.
2. Exclude auto-upgrades for 24 hours. This is an optional safety step to avoid unplanned upgrades immediately after subscribing the cluster to a channel.
3. Subscribe the cluster to your desired release channel.
4. Set the upgrade scope to no_minor_upgrades, allowing only patch versions to be applied to the cluster while keeping it on the same minor release.

With GKE release channels, you have the power to decide not only when, but how, and what to upgrade in your clusters and nodes. You can learn more about release channels here, and about maintenance windows here. And for even more, tune into this episode of the Google Cloud Platform Podcast, where my colleague Abdelfettah Sghiouar and I discuss the past and future of GKE release channels with the hosts Kaslin Fields and Mark Mirchandani.

Related article: How a robotics startup switched clouds and reduced its Kubernetes ops costs with GKE Autopilot
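Returning to the four steps listed above, here is a hedged sketch of steps 3 and 4 with gcloud; the cluster name, location, and exclusion dates are placeholders, and you should confirm flag names against the current gcloud container clusters update reference:

# Step 3: subscribe an existing cluster to the regular release channel
gcloud container clusters update my-cluster \
    --region=us-central1 \
    --release-channel=regular

# Step 4: add a maintenance exclusion that blocks minor upgrades for roughly three months
gcloud container clusters update my-cluster \
    --region=us-central1 \
    --add-maintenance-exclusion-name=hold-minor-upgrades \
    --add-maintenance-exclusion-start=2022-07-01T00:00:00Z \
    --add-maintenance-exclusion-end=2022-09-30T23:59:59Z \
    --add-maintenance-exclusion-scope=no_minor_upgrades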
Source: Google Cloud Platform

Startup Highnote builds end-to-end embedded finance platform on Google Cloud

The ability to quickly introduce and evolve payment options for products or services is essential for businesses, as nearly 50% of consumers who can't use a preferred payment method abandon their purchase. At the same time, gift cards, branded credit cards, and rewards programs are critical tools that companies rely on to build more loyal and lasting customer relationships. With Highnote, companies have an all-in-one embedded platform to quickly create payment cards and wallets, offer innovative rewards programs and credit, and provide sustainable wage access. It is the first platform that allows enterprises to make card issuance an embedded capability of their product without creating an entirely new (and costly) organization.

Creating an exciting fintech future with Google Cloud

When thinking about building the industry's first end-to-end embedded finance platform, we quickly realized Highnote would only be successful if it enabled companies to truly innovate and quickly roll out new programs. To do so, the platform would have to be built on scalable infrastructure capable of securely delivering services with speed and reliability while offering easy access to actionable big data analytics.

Working closely with the team at the Google for Startups Cloud Program, we successfully implemented Google Cloud as a versatile, future-proof foundation of our platform—and built Highnote from the ground up in just one year. Highnote's GraphQL-based API platform reinvents the card issuance process. Utilizing the developer-friendly Highnote platform, product and engineering teams at digital enterprises of all sizes can easily and efficiently embed virtual and physical payment cards (commercial and consumer prepaid, debit, credit, and charge), ledger, and wallet capabilities into their existing products. This creates compelling value while growing revenue and building a unique and differentiated brand.

We leverage Cloud Spanner, BigQuery, and Google Kubernetes Engine (GKE) to create a unified and highly secure PCI DSS-compliant platform with GraphQL APIs that provide rapid and flexible money transfers. This gives us a reliable platform to deliver and test customer experiences, respond to outcomes, and make better business decisions. Powered by Google Cloud, our data models and application domains are architected to support configurations and customizations that unlock a diverse set of new use cases across industries, including retail, travel, logistics, healthcare, and sustainable wage access programs.

We are especially proud to highlight our enablement of sustainable wage access, as this program helps the 50% of Americans living paycheck to paycheck. Embedding this program within payroll systems provides a viable alternative to payday lenders, who often charge exorbitant fees and interest rates. In real-world terms, this means Highnote helps people access earned wages before payday at no cost. Another customer we recently went live with is Tillful, whose Tillful card helps small businesses build their business credit. This program will help new and emerging businesses, as well as underrepresented owners of small businesses, by making the credit ecosystem accessible. Highnote's platform is designed to support multiple use cases across many industries. For example, we also help trucking and logistics companies develop fleet and fuel cards, and spend management companies who are looking to uplevel their offerings.
Delivering high-performance transactions with Cloud Spanner

Building one of the world's most modern card platforms would not have been possible without Cloud Spanner. We needed a solution that would keep our massive petabyte databases from buckling and more securely deliver data anywhere in the U.S. Cloud Spanner does all this and more, as it routinely connects purchases from millions of customers to tens of thousands of vendors. We also wanted to reduce overhead by 80% by eliminating manual sharding, partitioning, and optimization of data. These processes are automatic with Cloud Spanner, so we can operate at maximum efficiency. We specifically selected Cloud Spanner as our distributed SQL database management and storage solution because of its outstanding availability, zero planned-maintenance downtime, security certifications, and the highest consistency guarantees of any scale-out database. We continue to optimally scale without any downtime or compromises to the integrity or security of our data. This is key for us because we can address unexpected spikes, long-term growth, and new services without costly rearchitecting. Highnote is designed to perform over billions of transactions on Cloud Spanner, and the average latency of less than 250 ms is a testament to the robustness of Google Cloud services.

Enabling actionable customer insights at scale

BigQuery is another key Google Cloud solution that we rely on to deliver deep insights and visibility for our customers on a highly secure and scalable platform. When building Highnote, we knew we needed a cost-effective solution that excelled at data analytics. This is particularly critical for accurately measuring the performance—whether profitability or efficacy—of any program or card. Using BigQuery, we successfully run analytics at scale with as much as a 34% lower three-year TCO than cloud data warehouse alternatives. Over the past year, BigQuery has enabled our customers to unlock data-rich capabilities with a ledger that tracks money in real time and serves up complete debit and credit entries for every event across their accounts. Companies also access real-time balances for revenue, fees, customer accounts, and available funds management without complicated spreadsheets.

To quickly and efficiently roll out Highnote to our customers, we needed a simple way to automatically deploy, scale, and manage Kubernetes. When selecting a Kubernetes management tool, our top priorities were rapidly spinning up and securely scaling across multiple sites. As part of Google Cloud's expansive ecosystem, Google Kubernetes Engine (GKE) was the top choice due to its seamless and automatic Kubernetes scaling and management. We quickly got off the ground with single-click clusters and scaled up by using the high-availability control plane—including multi-zonal and regional clusters—to easily accommodate multiple active-active regions (which other solutions cannot do). As an embedded finance platform, stringent security protocols were obviously a key consideration for us. GKE is secure by default, runs routine vulnerability scans of container images, and encrypts data. Further security assistance was provided by Google Cloud partners 66degrees and DoiT International to help us rapidly validate VPC PCI compliance and ensure the uninterrupted performance of thousands of transactions per second.
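For readers unfamiliar with GKE, here is a hedged sketch of the kind of regional, autoscaling cluster described above; the cluster name, region, and node counts are placeholders rather than Highnote's actual configuration:

# Create a regional GKE cluster with nodes spread across zones and autoscaling enabled
gcloud container clusters create payments-cluster \
    --region=us-central1 \
    --num-nodes=1 \
    --enable-autoscaling --min-nodes=1 --max-nodes=5 \
    --release-channel=regular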
Winning in fintech with Google for Startups

Building the industry's first end-to-end embedded finance platform would have been extremely challenging without the extensive Google Cloud support. By working closely with our Startups team and Google partners, we had access to Google Cloud services to more easily validate VPC PCI compliance and address most issues before we exited stealth. Their responsiveness is incredible and stands out compared to support services we've seen from other technology providers.

Our participation in the Google for Startups Cloud Program has been instrumental to our success. With Google Cloud, we are making embedded payments accessible to our customers without a big-budget price tag. By doing so, we help unleash the creativity of emerging enterprises by enabling them to innovate with payment services and rewards programs to reach new markets and customers. If companies can dream it, we can enable them to realize it on Highnote. Our platform really is that flexible. We're excited about where we can go and grow with Google Cloud.

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.

Related article: Why managed container services help startups and tech companies build smarter
Source: Google Cloud Platform

How The Home Depot is teaming up with Google Cloud to delight customers with personalized shopping experiences

The Home Depot, Inc., is the world's largest home improvement retailer, with annual revenue of over $151B. Delighting our customers—whether do-it-yourselfers or professionals—by providing the home improvement products, services, and equipment rentals they need, when they need them, is key to our success. We operate more than 2,300 stores throughout the United States, Canada, and Mexico. We also have a substantial online presence through HomeDepot.com, which is one of the largest e-commerce platforms in the world in terms of revenue. The site has experienced significant growth in both traffic and revenue since the onset of COVID-19.

Because many of our customers shop at both our brick-and-mortar stores and online, we've embarked on a multi-year strategy to offer a shopping experience that seamlessly bridges the physical and digital worlds. To maximize value for the increasing number of online shoppers, we've shifted our focus from event marketing to personalized marketing, as we found it to be far more effective in improving the customer experience throughout the sales journey. This led to changing our approach to marketing content, email communications, product recommendations, and the overall website experience.

Challenge: launching a modern marketing strategy using legacy IT

For personalized marketing to be successful, we had to improve our ability to recognize a customer at the point of transaction so we could—among other things—suspend irrelevant and unnecessary advertising. Most of us have experienced the annoyance of receiving ads for something we've already purchased, which can degrade our perception of the brand itself. While many online retailers can identify 100% of their customer transactions due to the rich information captured during checkout, most of our transactions flow through physical stores, making this a more difficult problem to solve.

Our old legacy IT system, which ran in an on-premises data center and leveraged Hadoop, also challenged us, since maintaining both the hardware and software stack required significant resources. When that system was built, personalized marketing was not a priority, so it took several days to process customer transaction data and several weeks to roll out any system changes. Further, managing and maintaining the large Hadoop cluster base presented its own set of issues in terms of quality control and reliability, as did keeping up with open-source community updates for each data processing layer.

Adopting a hybrid approach

As we worked through the challenges of our legacy system, we started thinking about what we wanted our future system to look like. Like many companies, we began with a "build vs. buy" analysis. We looked at several products on the market and determined that while each of them had their strengths, none was able to offer the complete set of features we needed. Our project team didn't think it made sense to build a solution from scratch, nor did we have access to the third-party data we needed. After much consideration, we decided to adopt a solution that combined a complete rewrite of the legacy system with the support of a partner to help with the customer transaction matching process.

Building the foundation on Google Cloud

We chose Google Cloud's data platform, specifically BigQuery, Dataflow, Dataproc, Cloud Storage, and Cloud Composer. The Google Cloud platform empowered us to break down data silos and unify each stage of the data lifecycle, from ingestion, storage, and processing to analysis and insights.
Google Cloud offered best-in-class integration with open-source standards and provided the portability and extensibility we needed to make our hybrid solution work well. The open standards of BigQuery's Storage API allowed the fast BigQuery storage layer to be utilized with other compute platforms, e.g., Dataproc. We used BigQuery combined with Dataflow to integrate our first- and third-party data into an enterprise data and analytics data lake architecture. The system then combined previously siloed data and used BigQuery ML to create complete customer profiles spanning the entire shopping experience, both in-store and online.

Understanding the customer journey with the help of Dataflow and BigQuery

The process of developing customer profiles involves aggregating a number of first- and third-party data sources to create a 360-degree view of the customer based on both their history and intent. It starts with creating a single historical customer profile through data aggregation, deduplication, and enrichment. We used several vendors to help with customer resolution and NCOA (National Change of Address) updates, which allows the profile to be householded and transactions to be properly reconciled to both the individual and the household. This output is then matched to different customer signals to help create an understanding of where the customer is in their journey—and how we can help.

The initial implementation used Dataflow, Google's streaming analytics solution, to load data from Cloud Storage into BigQuery and perform all necessary transformations. The Dataflow process was later converted into BQML (BigQuery Machine Learning), since this significantly reduced costs and increased visibility into data jobs. We used Cloud Composer, a fully managed workflow orchestration service, to help orchestrate all data operations, and Dataproc and Google Kubernetes Engine to enable special-case data integration so we could quickly pivot and test new campaigns. The architecture diagram below shows the overall structure of our solution.

Taking full advantage of cloud-native technology

In our initial migration to Google Cloud, we moved most of our legacy processes in their original form. However, we quickly learned that this approach didn't take full advantage of the cloud-native and improved features Google Cloud offered, such as autoscaling of resources, the flexibility to decouple storage from the compute layer, and a wide variety of options to choose the best tool for the job. We refactored our Hadoop-based data pipelines written in Java-based MapReduce and our Pig Latin jobs into Dataflow and BigQuery jobs. This dramatically reduced processing time and made our data pipeline code concise and efficient. Previously, our legacy system processes ran longer than intended, and data was not used efficiently. Optimizing our code to be cloud-native and leveraging all the capabilities of Google Cloud services resulted in reduced run times. We decreased our data processing window from three days to 24 hours, improved resource usage by dramatically reducing the amount of compute we used to process this data, and built a more streamlined system. This in turn reduced cloud costs and provided better insight.
For example, Dataflow offers powerful native features to monitor data pipelines, enabling us to be more agile.

Leveraging the flexibility and speed of the cloud to improve outcomes

Today, using a continuous integration/continuous delivery (CI/CD) approach, we can deploy multiple system changes each week to further improve our ability to recognize in-store transactions. Leveraging the combined capabilities of various Google Cloud systems—BigQuery, Dataflow, Cloud Composer, Dataproc, and Cloud Storage—we drastically increased our ability to recognize transactions and can now connect over 75% of all transactions to an existing household. Further, the flexible Google Cloud environment coupled with our cloud-native application makes our team more nimble and better able to respond to emerging problems or new opportunities. Increased speed has led to better outcomes in our ability to match transactions across all sales channels to a customer and thereby improve their experience. Before moving to Google Cloud, it took 48 to 72 hours to match customers to their transactions, but now we can do it in less than 24 hours.

Making marketing more personal—and more efficient

The ability to quickly match customers to transactions has huge implications for our downstream marketing efforts in terms of both cost and effectiveness. By knowing what a customer has purchased, we can turn off ads for products they've already bought or offer ads for things that support what they've bought recently. This helps us use our marketing dollars much more efficiently and offer an improved customer experience. Additionally, we can now apply the analytical models developed using BQML and Vertex AI to sort customers into audiences. This allows us to more quickly identify a customer's current project, such as remodeling a kitchen or finishing a basement, and then personalize their journey by offering them information on products and services that matter most at a given point through our various marketing channels. This provides customers with a more relevant and customized shopping journey that mirrors their individual needs.

Protecting a customer's privacy

With this ability to better understand our customers, we also have the responsibility to ensure we have good oversight and maintain their data privacy. Google's cloud solutions provide us the security needed to help protect our customers' data, while also being flexible enough to allow us to support state and federal regulations, like the California Consumer Privacy Act. This way we can provide a customer the personalized experience they desire without them having to fear how their data is being used.

With flexible Google Cloud technology in place, The Home Depot is well positioned to compete in an industry where customers have many choices. By putting our customers' needs first, we can stay top of mind whenever the next project comes up.

Related article: How The Home Depot helps doers get more done with SAP and Google Cloud
Source: Google Cloud Platform

Announcing private network solutions on Google Distributed Cloud Edge

Today, we're announcing a new private networking solutions portfolio to further accelerate adoption of private cellular networks. Based on Google Distributed Cloud Edge and leveraging our ISV ecosystem, these solutions address the distinct performance, service-level, and economic needs of key industry verticals by combining dedicated network capabilities with full edge-computing application stacks.

Enterprises today are facing a network coverage and quality of service challenge that strains existing solutions like WiFi. Whether being used to add users, deploy industry-specific workloads, or support Internet of Things (IoT) and other connected devices, existing networking solutions struggle to deliver the connectivity, control, and scalability that enterprises need. Private networks based on cellular technologies like 5G offer a variety of benefits over WiFi for several enterprise use cases. For example, WiFi can be noisy and deliver inconsistent performance in terms of both latency and bandwidth, which impacts its ability to deliver the Quality of Service (QoS) you need for real-time applications like video monitoring and robotic manufacturing. It's also hard to use WiFi to provide capacity and coverage in large areas like entertainment venues, nor is WiFi well suited for connecting large numbers of sensors and IoT devices. And in places where a connected device is on the move, like in a warehouse or distribution center, WiFi doesn't offer the seamless connectivity that workers and vehicles require.

Private networks, meanwhile, allow organizations to introduce local private cellular networks to complement existing WiFi and public cellular connectivity. For example, manufacturers can deploy a private network across a large factory site, bridging operations, automation, and IoT devices, with robust baseline connectivity and support for next-generation functionality such as predictive maintenance and quality control through computer vision analytics. For educators, private networks can extend connectivity to underserved communities and students, enabling distance learning outside the classroom. Building and venue owners can use private networks to improve occupant safety, reduce costs and lower energy consumption via smart-building applications, and deliver new occupant and visitor experiences. And critically, cellular networks' built-in security provides peace of mind for data privacy in a way that other approaches do not.

A flexible, mature solution

Many enterprises have been experimenting with private networks, but operating and scaling them presents numerous challenges. With this new portfolio, built upon Google Distributed Cloud (GDC) Edge and new key partnerships, customers can rapidly adopt turn-key private network solutions with the flexibility to deploy management, control, and user plane functions both in the cloud and at the edge. GDC Edge has access to Google Cloud services and is backed by Google's security best practices. By building on a mature, cloud-native management experience, powered by Anthos, enterprises benefit from a consistent developer and operational model across their entire IT estate. In addition, Distributed Cloud Edge offers the flexibility to scale to other use cases that need low latency and Quality of Service (QoS) for critical applications.

Every enterprise has unique topography, latency, and QoS requirements for their applications.
Google Distributed Cloud Edge provides a centralized control and management plane for secure networks, scaling from one to thousands of locations. With GDC Edge, customers can run private networks, including virtualized RAN for connectivity, and edge applications in a single solution. Our partnerships with Communications Service Providers (CSPs) further enable enterprises with roaming connectivity while retaining control of their private environments.

A broad ecosystem of partners

Given the variety of needs across different industries, we are working with key ISV ecosystem partners to deliver integrated solutions built on our GDC Edge portfolio combined with their own distinct solutions. Our launch partners include:

- Betacom will deploy its fully managed private wireless service, 5G as a Service (5GaaS), on GDC Edge, giving enterprises access to cost-effective, high-performance 5G networks that are designed, deployed, and managed to support new intelligent manufacturing applications.
- Boingo Wireless will deploy its fully managed, end-to-end private cellular networks for enterprise customers using GDC Edge at major airports, stadiums, hospitals, manufacturing facilities, and U.S. military bases.
- Celona's 5G LAN solution automates the rollout of private cellular networks that are tightly integrated with existing security and app QoS policies. Celona's 5G LAN network operating system can also be deployed as a resource within GDC Edge, further accelerating private cellular adoption.
- Crown Castle owns and operates communications infrastructure, including wireless infrastructure and fiber networks, that serves the demands of telecommunications network operators, enterprises, and the public sector, and seeks to enable the next wave of deployments with partners leveraging GDC Edge for private network deployments.
- Kajeet will deploy its 5G solution on GDC Edge with a mission to connect students and communities with safe, simple, and secure high-speed wireless Internet to eliminate the digital divide once and for all.

Several countries, including the US, UK, Germany, Japan, and South Korea, allocate spectrum for private networking, and CSPs have spectrum that can be extended for private use as well. In the US, private network solution partners can also utilize our Spectrum Access System (SAS) to leverage the Citizens Broadband Radio Service (CBRS). Google Cloud has led the way in this space, laying the foundation for low-friction private network deployments by promoting industry-wide adoption of CBRS, and by operating a market-leading SAS.

Get started today

The network is the backbone of any business, and running a private network solution on Google Distributed Cloud Edge opens the door to new use cases predicated on fast, flexible, and secure connectivity. Click here to learn more about Google Distributed Cloud Edge, and check out the news releases from our partners Betacom, Celona, and Kajeet. And if you want to become a private 5G solution early adopter, reach out to us at private-networks@google.com.

Related article: Deploying and operating cloud-based 5G networks
Source: Google Cloud Platform

Why managed container services help startups and tech companies build smarter

Learning how to install and manage Kubernetes clusters can be a big obstacle between your organization and all the potential magic of a container orchestration platform. You have to consider tasks like provisioning machines, choosing an OS and runtime, and setting up networking and security. Integrating it all correctly and understanding it well enough to roll out new updates can take months, if not years. When adopting containers, you should always ask yourself: does my startup or tech company have the skills, resources, and time to maintain, upgrade, and secure this platform?

If the answer is no, you can instead use a fully managed Kubernetes service. On AWS, that means EKS; on Azure, AKS; and on Google Cloud, Google Kubernetes Engine (GKE).

This post examines the benefits of managed container platforms covered in our recent whitepaper, “The future of infrastructure will be containerized,” which is informed by our work with startups and tech companies choosing Google Cloud.

The benefits of embracing managed services

Going with a fully managed service has many positive implications for your business that can help you keep moving forward as efficiently as possible. First, a fully managed container service eliminates most of the Kubernetes platform operational overhead. For instance, GKE offers fully integrated infrastructure, from VM provisioning (including Tau VMs) and autoscaling across multiple zones to on-demand upgrades, GPUs and TPUs for machine learning, storage volumes, and security credentials. Simply put your application into a container and select the system that works best for your needs.

If you don’t want the responsibility of provisioning, scaling, and upgrading clusters, you can opt for the production-ready GKE Autopilot mode, which manages your cluster infrastructure, control plane, and nodes for you.

For cases where you don’t need total control over container orchestration and operations, you can simplify app delivery further by leveraging a managed serverless compute platform like Cloud Run. While GKE is a fully managed service, you still need to make key decisions, such as which zones to run in, where to store logs, and how to manage traffic between different application versions. Cloud Run eliminates those decisions, allowing you to run cloud-native workloads and scale based on incoming requests without having to configure or manage a Kubernetes cluster. Using Cloud Run also lets your teams focus on more impactful work: developers can spend more time writing code, and rather than dedicating their time to automation and operational tasks, DevOps engineers and cloud system admins can focus on application performance, security, compliance, and API integration.
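To make that concrete, here is a minimal sketch of a Cloud Run deployment using the google-cloud-run Python client (run_v2). The project ID, region, service name, and container image are placeholder values, and the gcloud CLI or the console would work just as well.

```python
# Minimal sketch: deploy a container image as a Cloud Run service with the
# google-cloud-run client library (pip install google-cloud-run).
# Project, region, service name, and image below are placeholder values.
from google.cloud import run_v2


def deploy_service(project_id: str, region: str, service_id: str, image: str) -> None:
    client = run_v2.ServicesClient()

    # A Cloud Run service is essentially a revision template pointing at a
    # container image; scaling, networking, and infrastructure are managed for you.
    service = run_v2.Service(
        template=run_v2.RevisionTemplate(
            containers=[run_v2.Container(image=image)],
        ),
    )

    operation = client.create_service(
        parent=f"projects/{project_id}/locations/{region}",
        service=service,
        service_id=service_id,
    )
    response = operation.result()  # Block until the first revision is ready.
    print(f"Deployed {response.name} at {response.uri}")


if __name__ == "__main__":
    deploy_service(
        "my-project", "us-central1", "hello-service",
        "us-docker.pkg.dev/cloudrun/container/hello",
    )
```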
Beyond easing the management of containers, managed services also help with other aspects of operating in today’s digital world, such as the increasing need to operate in multiple clouds. Being able to mix and match the providers that work best for individual workloads is key to making the best use of existing assets, increasing flexibility and resilience, and leveraging new technologies and capabilities in the future. And when it comes to implementing a multicloud strategy, the migrations that work best are those that leverage open standards. Let’s say, for example, that AWS is the cloud provider for your VMs. You can manage and maintain any Kubernetes clusters you have, no matter where they are, using GKE Connect. And because GKE builds on the provider-agnostic Kubernetes API, you can create tooling and workflows once and deploy them across multiple clouds while making updates from a single, central platform (a minimal sketch of this pattern appears at the end of this post). With Kubernetes, your startup or tech company can save huge amounts of time on deployments and automation without having to build and maintain separate infrastructure for each individual cloud provider.

Always be building towards an open future

Containers and managed services based on open standards are a powerful combination that allows you to take advantage of best-of-breed capabilities on any platform while standardizing skills and processes. As a leader at a startup or tech company, you’re always looking for ways to move faster, work more efficiently, and make the most of the technical talent you have. You want to spend more time on roadmap priorities and as little as possible on maintaining your infrastructure. When you choose open platforms and technology, you ensure that your team and business have a front-row seat to cutting-edge innovations that deliver new ways to build great products. Plus, developers and engineers want to work with systems and projects where they build transferable rather than proprietary skills, and where they can grow their careers.

To learn more about how containers and Kubernetes can help you get to market faster, we highly recommend reading our previous post on this whitepaper, and for a deeper dive into how managed containers can help streamline app development in the cloud, read the full whitepaper. To learn more about how startups and tech companies are working with Google Cloud, check out this post or visit the Google Cloud page for startups and tech companies.
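As referenced above, here is a minimal sketch of the "build once, deploy to any cluster" pattern using the open-source Kubernetes Python client. The kubeconfig context names, namespace, and image are placeholders; in practice the contexts would point at your GKE cluster and any clusters attached from other clouds.

```python
# Minimal sketch: apply the same Deployment to clusters running in different
# clouds by switching kubeconfig contexts (pip install kubernetes).
# Context names, image, and namespace below are placeholder values.
from kubernetes import client, config


def build_deployment(name: str, image: str, replicas: int = 2) -> client.V1Deployment:
    labels = {"app": name}
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )


deployment = build_deployment("web", "us-docker.pkg.dev/my-project/web/app:1.0")

# The same object is applied to every cluster, wherever it runs.
for context in ["gke-us-central1", "attached-eks-us-east-1"]:
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="default", body=deployment)
```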
Source: Google Cloud Platform

Introducing managed zone permissions for Cloud DNS

The Cloud DNS team is pleased to announce the Preview launch of managed zone permissions. Cloud DNS is integrated with the Identity and Access Management (IAM) service, which gives you the ability to control access to Cloud DNS resources and prevent unwanted access. This launch enables enterprises with distributed DevOps teams to delegate Cloud DNS managed zone administration to their individual application teams.

Until now, Cloud DNS supported resource permissions at the project level only, so IAM permissions were managed centrally and coarsely: any user with a specific permission in a project had that permission on all the resources under it. If the project contained multiple managed zones, for instance, a single user with project access could change the DNS records in any of the managed zones in that project. This created two challenges for some customers:

It forced reliance on a centralized team to manage all DNS zones and record creation. This is usually fine when deployments are small, but for customers with a large number of managed DNS zones, administration becomes toil borne by the central team.

A single user could modify the DNS records in multiple managed zones, potentially breaking DNS records or creating security issues.

With this launch, Cloud DNS allows access control at a finer granularity: the managed zone level. Admins can delegate responsibility for managed zone operations to individual application teams, which prevents one application team from accidentally changing the DNS records of another. It also improves your security posture, because only authorized users can modify a given managed zone, and it better supports the principle of least privilege. The launch does not change the existing IAM roles and permissions; we’ve merely added additional granularity to use when needed (a minimal sketch of granting a zone-level role appears at the end of this post). For more details on IAM roles for Cloud DNS, please see Access control with IAM.

This capability also helps when customers are using a Shared VPC environment. A typical Shared VPC setup has service projects that own a virtual machine (VM) application or services, while the host project owns the VPC network and network infrastructure. Often, a DNS namespace is carved out of the VPC network’s namespace to match the service project’s resources. For such a setup, it can be easier to delegate the administration of each service project’s DNS namespace to the administrators of that service project (which are often different departments or businesses). Cross-project binding lets you separate the ownership of the service project’s DNS namespace from the ownership of the DNS namespace of the entire VPC network. This, coupled with managed zone permissions, ensures that the managed zone within the service project can only be administered by service project owners, while still allowing the host project (and other service projects) to access the namespace from their respective projects.

For more details on how to configure and use managed zone permissions, please look at our documentation.
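As referenced above, here is a minimal sketch of what granting a zone-level role could look like, assuming the managed zone getIamPolicy/setIamPolicy REST methods described in the Cloud DNS documentation. The project, zone, group, and role below are placeholders, and the exact endpoint and request shape should be confirmed against that documentation before use.

```python
# Minimal sketch: let one team administer a single managed zone by adding an
# IAM binding on that zone via Cloud DNS's getIamPolicy/setIamPolicy REST
# methods (pip install google-auth requests). Project, zone, group, and role
# below are placeholders; verify the endpoint against the Cloud DNS docs.
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT = "my-project"
ZONE = "team-a-zone"
MEMBER = "group:team-a-dns-admins@example.com"
ROLE = "roles/dns.admin"  # Applies only to this zone, not the whole project.

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

resource = f"https://dns.googleapis.com/dns/v1/projects/{PROJECT}/managedZones/{ZONE}"

# Read-modify-write: fetch the current policy, append the binding, write it back.
policy = session.post(f"{resource}:getIamPolicy", json={}).json()
policy.setdefault("bindings", []).append({"role": ROLE, "members": [MEMBER]})

updated = session.post(f"{resource}:setIamPolicy", json={"policy": policy}).json()
print(updated)
```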
Source: Google Cloud Platform

Announcing general availability of reCAPTCHA Enterprise password leak detection

As long as passwords remain an incredibly common form of account authentication, password reuse attacks, which take advantage of people reusing the same password across multiple services, will be one of the most common ways for malicious hackers to hijack user accounts. Password reuse is such a serious problem that more than 52% of users admitted to reusing their password on some sites, and 13% confessed that they use the same password on all their accounts, according to a Google/Harris Poll conducted in 2019.

When malicious hackers steal passwords in data breaches, they’re looking to exploit password reuse and increase the chances that future attacks against organizations will be more successful. They can automate these attacks with bots, too, which lets them scale. Ultimately, these attacks can lead to account takeovers and can create real regulatory, trust, and reputational risk for organizations.

reCAPTCHA Enterprise password leak detection can help prevent account takeovers

Although it’s considered a best practice to discourage password reuse, we realize that it still happens. That’s why we’ve built a password leak detection capability into reCAPTCHA Enterprise, now generally available to all reCAPTCHA Enterprise users. One of the most effective ways to prevent a successful account takeover is to warn users as early as possible that their passwords need to be changed. Customers can check user credentials as part of any assessment to confirm that they have not been leaked or breached elsewhere, and take immediate action if they have.

reCAPTCHA Enterprise gives organizations the ability to protect their users against account takeovers. The password leak detection feature is implemented using a privacy-preserving API that hides the details of the credentials and the result from Google’s backend services, allowing customers to keep their users’ credentials private (a rough sketch of the assessment call appears at the end of this post). By pairing it with reCAPTCHA Enterprise bot management, account defender, and two-factor authentication, organizations can build robust protections against attacks such as credential stuffing and account takeovers.

Get started protecting your app against account takeovers, bots, and credential stuffing

New and current reCAPTCHA Enterprise customers can activate password leak detection by following the detailed documentation available in our help center, or reach out to sales to learn more about the capabilities of reCAPTCHA Enterprise.
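As mentioned above, here is a rough sketch of the assessment call using the recaptchaenterprise_v1 Python client. The project ID is a placeholder, and the two hash arguments must come from the client-side hashing and encryption step described in the password leak detection documentation; no helper for producing them is shown or named here, since that step depends on the documented protocol.

```python
# Rough sketch: check a username/password pair against known breach data with
# reCAPTCHA Enterprise password leak detection
# (pip install google-cloud-recaptcha-enterprise).
# "my-project" is a placeholder; the two hash arguments must be produced by the
# client-side hashing/encryption step described in the documentation.
from google.cloud import recaptchaenterprise_v1


def check_password_leak(project_id: str,
                        lookup_hash_prefix: bytes,
                        encrypted_user_credentials_hash: bytes):
    client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()

    # Only salted/encrypted hash material is sent to Google, never the raw
    # username or password.
    assessment = recaptchaenterprise_v1.Assessment(
        private_password_leak_verification=recaptchaenterprise_v1.PrivatePasswordLeakVerification(
            lookup_hash_prefix=lookup_hash_prefix,
            encrypted_user_credentials_hash=encrypted_user_credentials_hash,
        )
    )

    response = client.create_assessment(
        parent=f"projects/{project_id}",
        assessment=assessment,
    )

    # The response carries candidate matches in encrypted form; the caller
    # finishes the check client-side by decrypting reencrypted_user_credentials
    # and comparing it against encrypted_leak_match_prefixes, so the verdict is
    # never visible to Google either.
    return response.private_password_leak_verification
```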
Source: Google Cloud Platform