Accelerate your developer productivity with Query Library

Our goal in Cloud Logging is to help increase developer productivity by streamlining the troubleshooting process. The time spent writing and executing a query, and then analyzing the errors, can impact developer productivity. Whether you're troubleshooting an issue or analyzing your logs, finding the right logs quickly is critical. That's why we recently launched a Query Library and other new features to make querying your logs even easier. The Query Library in Cloud Logging helps you find logs faster by offering common queries.

Build queries faster with our templates

The new text search and drop-down features are designed to make querying something you can achieve with a few mouse clicks. These features automatically generate the necessary Logging query language for you. The Query Library extends this simplicity with templates for common GCP queries.

The Query Library is located in the query builder bar next to the Suggested queries. To help you find the most relevant queries, you'll notice the following details:

Query categories – Each query is broken down into categories that can be used to easily narrow down to relevant queries.
Query occurrences – To help you pick queries that have the most useful results, sparklines are displayed for queries that have logs in your project.
Query details – Each query has a description along with the Logging query.
Run/Stream – Run the query or start streaming logs right from the library.
Save – Save the query in your list of saved queries.

The road ahead

We're committed to making Logs Explorer the best place to troubleshoot your applications running on Google Cloud. Over the coming months, we have many more changes planned to make Logs Explorer both easier and more powerful for all users. If you haven't already, get started with the Logs Explorer and join the discussion in our Cloud Operations page on the Google Cloud Community site.

Related article: Google Cloud Deploy gets continuous delivery productivity enhancements. In this latest release, Google Cloud Deploy got improved onboarding, delivery pipeline management and additional enterprise features.
Source: Google Cloud Platform

Google Cloud and Apollo24|7: Building Clinical Decision Support System (CDSS) together

Clinical Decision Support System (CDSS) is an important technology for the healthcare industry that analyzes data to help healthcare professionals make decisions related to patient care. The market size for the global clinical decision support system appears poised for expansion, with one study predicting a compound annual growth rate (CAGR) of 10.4% from 2022 to 2030, to $10.7 billion.

For any health organization that wants to build a CDSS, one key building block is locating and extracting the medical entities present in clinical notes, medical journals, discharge summaries, and similar documents. Along with entity extraction, the other key components of a CDSS are capturing temporal relationships, subjects, and certainty assessments.

At Google Cloud, we know how critical it is for the healthcare industry to build CDSS systems, so we worked with Apollo 24|7, the largest multi-channel digital healthcare platform in India, to build the key blocks of their CDSS solution. We helped them parse discharge summaries and prescriptions to extract the medical entities. These entities can then be used to build a recommendation engine that would help doctors with the "Next Best Action" recommendation for medicines, lab tests, and more.

Let's take a sneak peek at Apollo 24|7's entity extraction solutions, and the various Google AI technologies that were tested to form the technology stack.

Datasets Used

To perform our experiments on entity extraction, we used two types of datasets.

i2b2 Dataset – i2b2 is an open-source clinical data warehousing and analytics research platform that provides annotated, de-identified patient discharge summaries made available to the community for research purposes. This dataset was primarily used for training and validation of the models.
Apollo 24|7's Dataset – De-identified doctor's notes from Apollo 24|7 were used for testing. Doctors annotated them to label the entities and offset values.

Experimentation and choosing the right approach — Four models put to test

For entity extraction, both Google Cloud products and open-source approaches were explored. Below are the details:

1. Healthcare Natural Language API: This is a no-code approach that provides machine learning solutions for deriving insights from medical text. Using this, we parsed unstructured medical text and then generated a structured data representation of the medical knowledge entities stored in the data for downstream analysis and automation. The process includes:

Extract information about medical concepts like diseases, medications, medical devices, procedures, and their clinically relevant attributes;
Map medical concepts to standard medical vocabularies such as RxNorm, ICD-10, MeSH, and SNOMED CT (US users only);
Derive medical insights from text and integrate them with data analytics products in Google Cloud.

The advantage of using this approach is that it not only extracts a wide range of entity types like MED_DOSE, MED_DURATION, LAB_UNIT, LAB_VALUE, etc., but also captures functional features such as temporal relationships, subjects, and certainty assessments, along with confidence scores. Since it is available on Google Cloud, it offers long-term product support. It is also the only fully managed NLP service among all the approaches tested and hence requires the least effort to implement and manage.
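For readers who want to try the Healthcare Natural Language API directly, here is a minimal sketch of calling its analyzeEntities method over REST with application-default credentials. The project ID, location, and sample text are placeholders, and the response fields shown (entityMentions with type, text, and confidence) follow our reading of the API documentation rather than Apollo 24|7's implementation.

    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    # Placeholder project and location; adjust to your own environment.
    URL = ("https://healthcare.googleapis.com/v1/projects/my-project/"
           "locations/us-central1/services/nlp:analyzeEntities")

    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])
    session = AuthorizedSession(credentials)

    # Sample clinical text (synthetic, not patient data).
    body = {"documentContent": "Patient reports abdominal pain. Started metformin 500 mg daily."}
    response = session.post(URL, json=body)
    response.raise_for_status()

    # Print each detected entity mention with its type and confidence score.
    for mention in response.json().get("entityMentions", []):
        print(mention.get("type"),
              mention.get("text", {}).get("content"),
              mention.get("confidence"))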
But one thing to keep in mind is that since the Healthcare NL API offers pre-trained natural language models, it currently cannot be used to train custom entity extraction models on custom-annotated medical text or to extract custom entities. That has to be done via AutoML Entity Extraction for Healthcare, another Google Cloud service for custom model development. Custom model development is important for adapting pre-trained models to new languages or region-specific natural language processing, such as medical terms whose use may be more prevalent in India than in other regions.

2. Vertex AutoML Entity Extraction for Healthcare: This is a low-code approach that's already available on Google Cloud. We used AutoML Entity Extraction to build and deploy custom machine learning models that analyzed documents, categorized them, and identified entities within them. This custom machine learning model was trained on the annotated dataset provided by the Apollo 24|7 team.

The advantage of AutoML Entity Extraction is that it gives the option to train on a new dataset. However, one prerequisite to keep in mind is that it needs a little pre-processing to capture the input data in the required JSONL format. Since this is an AutoML model just for entity extraction, it does not extract relationships, certainty assessments, etc.

3. BERT-based Models on Vertex AI: Vertex AI is Google Cloud's fully managed, unified AI platform to build, deploy, and scale ML models faster, with pre-trained and custom tooling. We experimented with multiple custom approaches based on pre-trained BERT-based models, which have shown state-of-the-art performance in many natural language tasks. To gain better contextual understanding of medical terms and procedures, these BERT-based approaches are explicitly trained on medical domain data. Our experiments were based on BioClinical BERT, BioLink BERT, Blue BERT trained on the PubMed dataset, and Blue BERT trained on the PubMed + MIMIC datasets.

The major advantage of these BERT-based models is that they can be fine-tuned on any entity recognition task with minimal effort. However, since this is a custom approach, it requires some technical expertise. Additionally, it does not extract relationships, certainty assessments, etc. This is one of the main limitations of using BERT-based models.

4. ScispaCy on Vertex AI: We used Vertex AI to perform experiments based on ScispaCy, a Python package containing spaCy models for processing biomedical, scientific, or clinical text. Along with entity extraction, ScispaCy on Vertex AI provides additional components like the Abbreviation Detector, Entity Linking, etc. However, when compared to other models, it was less precise, with too many junk phrases, like "Admission Date," captured as entities.

"Exploring multiple approaches and understanding the pros/cons of each approach helped us to decide the one that would fit our business requirements," said Abdussamad M, Engineering Lead at Apollo 24|7.

Evaluation Strategy

In order to match the parsed entities with the test data labels, we used extensive matching logic comprising the following four methods:

Exact Match – Captures entities where the model output and the entities in the test dataset match. Here, the offset values of the entities have also been considered.
For example, the entity "gastrointestinal infection" that is present as-is in both the model output and the test label will be considered an "Exact Match."

Match-Score Logic – We used a scoring logic for matching the entities. For each word in the test data labels, every word in the model output is matched along with the offset. A score is calculated between the entities, and if it exceeds the threshold, the pair is considered a match.
Partial Match – In this matching logic, entities like "hypertension" and "hypertensive" are matched based on fuzzy logic.
UMLS Abbreviation Lookup – We also observed that the medical text had some abbreviations, like AP meaning abdominal pain. These were first expanded by doing a lookup on the respective UMLS (Unified Medical Language System) tables and then passed to the individual entity extraction models.

Performance Metrics

We used precision and recall metrics to compare the outcomes of different models/experiments. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. The example below shows how to calculate these metrics for a given sample.

Example sample: "Krish has fever, headache and feels uncomfortable"
Expected Entities: ["fever", "headache"]
Model Output: ["fever", "feels", "uncomfortable"]

Thus, precision = 1/3 ≈ 0.33 (only one of the three predicted entities is correct) and recall = 1/2 = 0.50 (one of the two expected entities was retrieved).
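As a quick sanity check, the same calculation can be expressed as a small Python helper. This is a minimal sketch that uses exact string matching only; the offset-aware, score-based, and fuzzy variants described above would extend it.

    def precision_recall(expected, predicted):
        # Exact-match comparison on entity strings; offsets are ignored here.
        expected_set, predicted_set = set(expected), set(predicted)
        true_positives = len(expected_set & predicted_set)
        precision = true_positives / len(predicted_set) if predicted_set else 0.0
        recall = true_positives / len(expected_set) if expected_set else 0.0
        return precision, recall

    # The worked example from above.
    print(precision_recall(["fever", "headache"],
                           ["fever", "feels", "uncomfortable"]))
    # -> (0.3333333333333333, 0.5)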
Experimentation Results

The following table captures the results of the above experiments on Apollo 24|7's internal datasets.

Finally, the Blue BERT model trained on the PubMed dataset had the best performance metrics, with an 81% improvement over Apollo 24|7's baseline model, while the Healthcare Natural Language API provided the context, relationships, and codes. This performance could be further improved by implementing an ensemble of these two models.

"With the Blue BERT model giving the best performance for entity extraction on Vertex AI and the Healthcare NL API being able to extract the relationships, certainty assessments etc, we finally decided to go with an ensemble of these 2 approaches," Abdussamad added.

Fast track end-to-end deployment with Google Cloud AI Services (AIS)

Google AIS (Professional Services Organization) helped Apollo 24|7 build the key blocks of the CDSS system. The partnership between Google Cloud and Apollo 24|7 is just one of the latest examples of how we're providing AI-powered solutions to solve complex problems and help organizations drive their desired outcomes. To learn more about Google Cloud's AI services, visit our AI & ML Products page, and to learn more about Google Cloud solutions for healthcare, explore our Google Cloud Healthcare Data Engine page.

Acknowledgements

We'd like to give special thanks to Nitin Aggarwal, Gopala Dhar and Kartik Chaudhary for their support and guidance throughout the project. We are also thankful to Manisha Yadav, Santosh Gadgei and Vasantha Kumar for implementing the GCP infrastructure. We are grateful to the Apollo team (Chaitanya Bharadwaj, Abdussamad GM, Lavish M, Dinesh Singamsetty, Anmol Singh and Prithwiraj) and our partner team from HCL/Wipro (Durga Tulluru and Praful Turanur) who partnered with us in delivering this successful project. Special thanks to the Cloud Healthcare NLP API team (Donny Cheung, Amirhossein Simjour, and Kalyan Pamarthy).

Related article: HIMSS 2022: Improving health through data interoperability and natural language processing. At HIMSS 2022, Google Cloud showcases how data interoperability and natural language processing can help improve health outcomes.
Source: Google Cloud Platform

Google Cloud Deploy gets continuous delivery productivity enhancements

Since Google Cloud Deploy became generally available in January 2022, we've remained focused on our core mission: making it easier to establish and operate software continuous delivery to a Google Kubernetes Engine environment. Through ongoing conversations with developers, DevOps engineers, and business decision makers alike, we've received feedback about onboarding speed, delivery pipeline management, and expanding enterprise features. Today, we are pleased to introduce numerous feature additions to Google Cloud Deploy in these areas.

Faster onboarding

Skaffold is an open source tool that orchestrates continuous development, continuous integration (CI), and continuous delivery (CD), and it's integral to Google Cloud Deploy. Through Skaffold and Google Cloud Deploy, the local application development loop is seamlessly connected to a continuous delivery capability, bringing consistency to your end-to-end software delivery lifecycle tooling.

This may be the first time your team is using Skaffold. To help, Google Cloud Deploy can now generate a Skaffold configuration for single-manifest applications when one is not present. When you create a release, the new 'gcloud deploy releases create … --from-k8s-manifest' command takes an application manifest and generates a Skaffold configuration. This lets your application development teams and continuous delivery operators familiarize themselves with Google Cloud Deploy, reducing early-stage configuration and learning friction as they establish their continuous delivery capabilities. When you use this option, you can review the generated Skaffold configuration, and as your comfort with Skaffold configuration and Google Cloud Deploy increases, you can develop your own Skaffold configurations tailored to your specific delivery pipeline needs.

Delivery pipeline management

Continuous delivery pipelines are always in use. New releases navigate a progression sequence as they make their way out to the production target. The journey, however, isn't always smooth. In that case, you may need to manage your delivery pipeline and related resources more discretely.

With the addition of delivery pipeline suspension, you can now temporarily pause problematic delivery pipelines to restrict all release and rollout activity. By pausing the activity, you can undertake an investigation to identify problems and their root cause.

Sometimes it isn't the delivery pipeline that has a problem, but rather a release. Through release abandonment, you can prohibit application releases that have a feature defect, outdated library, or other identified issues from being deployed further. Release abandonment ensures an undesired release won't be used again, while keeping it available for issue review and troubleshooting.

A suspended delivery pipeline and abandoned releases

When reviewing or troubleshooting release application manifest issues, you may want to compare application manifests between releases and target environments to determine when an application configuration changed and why. But comparing application manifests can be hard, requiring you to use the command line to locate and diff multiple files. To help, Google Cloud Deploy now has a Release inspector, which makes it easy to review application manifests and compare them against releases and targets within a delivery pipeline.

Reviewing and comparing application manifests with the Release Inspector

Rollout listings within the Google Cloud Deploy console have, to date, been limited to a specific release or target.
A complete delivery pipeline rollout listing (and filtering) has been a standing request, and you can now find it on the delivery pipeline details page.

Delivery pipeline details now with complete Rollouts listing

Finally, execution environments are an important part of configuring custom render and deploy environments. In addition to the ability to specify custom worker pools, Cloud Storage buckets, and service accounts, we've added an execution timeout to better support long-running deployments.

Expanded enterprise features

Enterprise environments frequently have numerous operational requirements, such as security controls, logging, Terraform support, and regional availability. In a previous blog post, we announced support for VPC Service Controls (VPC-SC) in Preview. We are pleased to announce that VPC-SC support in Google Cloud Deploy is now generally available. We've also documented how you can configure customer-managed encryption keys (CMEK) with services that depend on Google Cloud Deploy.

There are also times when reviewing manifest-render and application deployment logs may not be sufficient for troubleshooting. For these situations, we've added Google Cloud Deploy service platform logs, which may provide additional details toward issue resolution.

Terraform plays an important role in deploying Google Cloud resources. You can now deploy Google Cloud Deploy delivery pipelines and target resources using Google Cloud Platform's Terraform provider. With this, you can deploy Google Cloud Deploy resources as part of a broader Google Cloud Platform resource deployment.

Regional availability is important for businesses that need a regional service presence. Google Cloud Deploy is now available in an additional nine regions, bringing the total number of Google Cloud Deploy worldwide regions to 15.

The future

Comprehensive, easy-to-use, and cost-effective DevOps tools are key to building an efficient software delivery capability, and it's our hope that Google Cloud Deploy will help you implement complete CI/CD pipelines. And we're just getting started. Stay tuned as we introduce exciting new capabilities and features to Google Cloud Deploy in the months to come. In the meantime, check out the product page, documentation, quickstart, and tutorials. Finally, if you have feedback on Google Cloud Deploy, you can join the conversation. We look forward to hearing from you.

Related article: Google Cloud Deploy, now GA, makes it easier to do continuous delivery to GKE. The Google Cloud Deploy managed service, now GA, makes it easier to do continuous delivery to Google Kubernetes Engine.
Source: Google Cloud Platform

Cloud Functions 2nd gen is GA, delivering more events, compute and control

For over seven years, Functions-as-a-Service has changed how developers create solutions and move toward a programmable cloud. Functions made it easy for developers to build highly scalable, easy-to-understand, loosely coupled services. But as these services evolved, developers faced challenges such as cold starts, latency, connecting disparate sources, and managing costs. In response, we are evolving Cloud Functions to meet these demands, with a new generation of the service that offers increased compute power, granular controls, more event sources, and an improved developer experience.

Today, we are announcing the general availability of the 2nd generation of Cloud Functions, enabling a greater variety of workloads with more control than ever before. Since the initial public preview, we've equipped Cloud Functions 2nd gen with more powerful and efficient compute options, granular controls for faster rollbacks, and new triggers from over 125 Google and third-party SaaS event sources using Eventarc. Best of all, you can start to use 2nd gen Cloud Functions for new workloads while continuing to use your 1st gen Cloud Functions. Let's take a closer look at what you'll find in Cloud Functions 2nd gen.

Increased compute with granular controls

Organizations are choosing Cloud Functions for increasingly demanding and sophisticated workloads that require increased compute power and more granular controls. Functions built on Cloud Functions 2nd gen have the following features and characteristics:

Instance concurrency – Process up to 1000 concurrent requests with a single instance. Concurrency can drastically reduce cold starts, improve latency and lower cost.
Fast rollbacks, gradual rollouts – Quickly and safely roll back your function to any prior deployment, or configure how traffic is routed across revisions. A new revision is created every time you deploy your function.
6x longer request processing – Run your 2nd gen HTTP-triggered Cloud Functions for up to one hour. This makes it easier to run longer request workloads such as processing large streams of data from Cloud Storage or BigQuery.
4x larger instances – Leverage up to 16GB of RAM and 4 vCPUs on 2nd gen Cloud Functions, allowing larger in-memory, compute-intensive and more parallel workloads. 32GB / 8 vCPU instances are in preview.
Pre-warmed instances – Configure a minimum number of instances that will always be ready to go to cut your cold starts and make sure your application's bootstrap time doesn't impact its performance.
More regions – 2nd gen Cloud Functions will be available in all 1st gen regions plus new regions including Finland (europe-north1) and Netherlands (europe-west4).
Extensibility and portability – By harnessing the power of Cloud Run's scalable container platform, 2nd gen Cloud Functions let you move your function to Cloud Run or even to Kubernetes if your needs change.

Lots more event sources

As more workloads move to the cloud, you need to connect more event sources together. Using Eventarc, 2nd gen Cloud Functions supports 14x more event sources than 1st gen, supporting business-critical event-driven workloads. Here are some highlights of events in 2nd gen Cloud Functions:

125+ Event sources: 2nd gen Cloud Functions can be triggered from a growing set of Google and third-party SaaS event sources (through Eventarc) and events from custom sources (by publishing to Pub/Sub directly).
Standards-based Event schema for consistent developer experience: These event-driven functions are able to make use of the industry-standard CloudEvents format (sketched below). Having a common standards-based event schema for publishing and consuming events can dramatically simplify your event-handling code.
CMEK support: Eventarc supports customer-managed encryption keys, allowing you to encrypt your events using your own managed encryption keys that only you can access.

As Eventarc adds new event providers, they become available in 2nd gen Cloud Functions as well. Recently, Eventarc added Firebase Realtime Database, Datadog, Check Point CloudGuard, Lacework and ForgeRock, as well as the Firebase Stripe / RevenueCat extensions, as event sources.
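To make the CloudEvents-based trigger model concrete, here is a minimal sketch of a 2nd gen, event-driven function written with the open source Functions Framework for Python. The handler name is arbitrary, and the trigger (for example, a Cloud Storage or Pub/Sub event routed through Eventarc) is chosen at deploy time rather than in code.

    import functions_framework

    # A 2nd gen, event-driven function receives a single CloudEvent object.
    @functions_framework.cloud_event
    def handle_event(cloud_event):
        # Standard CloudEvents attributes, common to every provider.
        print(f"Event type:   {cloud_event['type']}")
        print(f"Event source: {cloud_event['source']}")
        # The payload; its shape depends on the provider that emitted the event.
        print(f"Data:         {cloud_event.data}")

Because the event envelope is the same for every provider, switching the trigger from, say, Cloud Storage to a third-party SaaS source does not change the handler's signature.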
Improved developer experience

You can use the same UI and gcloud commands for your 2nd gen functions as for 1st gen, helping you get started quickly from one place. That's not to say we didn't make some big improvements to the UI:

Eventarc subtask – Allows you to easily discover and configure how your function is triggered during creation.
Deployment tracker – Enables you to view the status of your deployment and spot any errors quickly if they occur during deployment.
Improved testing tab – Simplifies calling your function with sample payloads.
Customizable dashboard – Gives you important metrics at a glance, and the accessibility updates improve the experience for screen readers.

As with 1st gen, you can drastically speed up development time by using our open source Functions Framework to develop your functions locally.

Tying it together

2nd gen Cloud Functions allows developers to connect anything from anywhere to get important work done. This example shows an end-to-end architecture for an event-driven solution that uses new features in 2nd gen Cloud Functions and Eventarc. It starts with identifying the data sources to which you want to programmatically respond. These can be any of the 125+ Google Cloud or third-party sources supported by Eventarc. Then you're able to configure the trigger and code the function while specifying instance size, concurrency and processing time based on your workload. Your function can process and store the data using Google Cloud's AI and data platforms to transform data into actionable insights.

Get started with the new Cloud Functions

We built Cloud Functions to be the future of how organizations build enterprise applications. Our 2nd generation incorporates feedback we've received from customers to meet their needs for more compute, control and event sources with an improved developer experience. We're excited to see what you build with 2nd gen functions. You can learn more about Cloud Functions in the documentation and get started using Quickstarts: Cloud Functions.

Related article: Supercharge your event-driven architecture with new Cloud Functions (2nd gen). The next generation of our Cloud Functions Functions-as-a-Service platform gives you more features, control, performance, scalability and…
Source: Google Cloud Platform

Fueling business growth with a seamless Google Cloud migration

In today's hybrid office environments, it can be difficult to know where your most valuable, sensitive content is, who's accessing it, and how people are using it. That's why Egnyte focuses on making it simple for IT teams to manage and control a full spectrum of content risks, from accidental data deletion to privacy compliance. I used to be an Egnyte customer before joining the team, so I've experienced first-hand the transformative effects that Egnyte can have on a company. Because data is fundamental to a company's success, we take the trust of our 16,000 clients very seriously. There is no room for error with a cloud governance platform, which means that the technology providers we work with can't fail either. That's why we work with Google Cloud.

Since Egnyte was founded in 2007, we have delivered our services to clients 24/7. We do this by running our own data centers: two in the USA and one in Europe. But as the company continued its steady growth, owning and managing these data centers became unsustainable. A tremendous amount of work goes into managing everything we need from a data center. Not only were we constantly building, maintaining, and paying for all this infrastructure, but we'd have to constantly expand our data centers to accommodate our business growth. This caused a never-ending pipeline issue because we had to predict how many businesses we were going to win over the next 12 to 18 months. What if we planned to grow the business by 20%, and ended up growing by 25% instead?

We knew that being limited to our own data centers was going to negatively impact our business, so we looked for alternatives. To gain scalability and introduce another layer of reliability to our business, we decided to collaborate with a reputable cloud provider who could reliably back up our data. We examined the offerings of every cloud provider, and found that in every category we analyzed, Google was hands-down the winner.

One of these categories is the reach of the network. With its own transoceanic fiber and points of presence in all markets where we're currently doing business, as well as markets where we intend to do business one day, Google's network is second to none. Another important criterion for us was flexibility in the product offering, so we could better consider the financial risks of this large-scale data migration. For a while, we needed to pay for both our new cloud infrastructure and our old on-premises one while they overlapped during the migration, but Google Cloud made it easier for us to plan for this.

By December 2021, we had completed our full migration to Google Cloud. This significant migration was completed gradually and without disrupting our services at any point. Our close collaboration with the Google Cloud team is one of the big reasons we completed it so successfully. Google Cloud was able to anticipate some of the problems we'd likely be facing and helped us overcome them along the way. We were able to shut off our last data center in February 2022, and the beneficial changes to the business are already obvious.

Capacity planning, which used to be our biggest challenge on-premises, is now a problem of the past. The ability to spin up new resources on Google Cloud means we no longer need to buy additional resources a year in advance and wait for them to be shipped. Using Google Cloud means that we no longer rely on aging infrastructure, which is a very limiting factor when you're developing and engineering a platform as complex as Egnyte.
Our entire platform now always operates on the latest storage, processing, network, and services available on Google Cloud. Additionally, we have services embedded in our infrastructure such as Cloud SQL, Cloud Bigtable, BigQuery, Dataflow, Pub/Sub, and Memorystore for Redis, which means we no longer need to build services from scratch or shop for, install, and build them into the product and company workflow. There's a long list of Google Cloud services that have significantly simplified our processes and that now support our flagship products, Egnyte Collaborate and Secure and Govern.

Looking ahead, we'll continue to take advantage of what Google Cloud has to offer. Our migration has impacted not only our business but also our clients. We can offer even higher reliability and faster scalability to our clients whenever they need our platform to protect and manage critical content on any cloud or any app, anywhere in the world. We look forward to seeing what's next.

Related article: 4 new ways Citrix & Google Cloud can simplify your Cloud Migration. Citrix and Google Cloud simplify your cloud migration. The expanding partnership between Citrix and Google Cloud means that customers con…
Source: Google Cloud Platform

Introducing Google Cloud and Google Workspace support for multiple Identity providers with Single Sign-On

Google is one of the largest identity providers on the Internet. Users rely on our identity systems to log into Google's own offerings, as well as third-party apps and services. For our business customers, we provide administratively managed Google accounts that can be used to access Google Workspace, Google Cloud, and BeyondCorp Enterprise. Today we're announcing that these organizational accounts support single sign-on (SSO) from multiple third-party identity providers (IdPs), generally available immediately. This allows customers to more easily access Google's services using their existing identity systems.

Google has long provided customers with a choice of digital identity providers. For over a decade, we have supported SSO via the SAML protocol. Currently, Google Cloud customers can enable a single identity provider for their users with the SAML 2.0 protocol. This release significantly enhances our SSO capabilities by supporting multiple SAML-based identity providers instead of just one.

Business cases for supporting multiple identity providers

There are many reasons for customers to federate identity to multiple third-party identity providers. Often, organizations have multiple identity providers resulting from mergers and acquisitions, or due to differing IT strategies across corporate divisions and subsidiaries. Supporting multiple identity providers allows the users from these different organizations to all use Google Cloud without time-consuming and costly migrations.

Another increasingly common use case is data sovereignty. Companies that need to store the data of their employees in specific jurisdictional locations may need to use different identity providers. Migrations are yet another common use case for supporting multiple identity providers. Organizations transitioning to new identity providers can now keep their old system active alongside the new one during the transition phase.

"The City of Los Angeles is launching a unified directory containing all of the city's workforce. Known as 'One Digital City,' the directory provides L.A. city systems with better security and a single source for authentication, authorization, and directory information," said Nima Asgari, Google Team Manager for the City of Los Angeles. "As the second largest city in the United States, this directory comes at a critical time for hybrid teleworkers, allowing a standard collaboration platform based on Google Docs, Sheets, Slides, Forms, and Sites. From our experience, Google Cloud's support of multiple identity providers has saved us from having to create a number of custom solutions that would require valuable staff time and infrastructure costs."

How it works

To use these new identity federation capabilities, Google Cloud administrators must first configure one or more identity provider profiles in the Google Cloud Admin console; we support up to 100 profiles. These profiles require information from your identity provider, including a sign-in URL and an X.509 certificate. Once these profiles have been created, they can then be assigned to the root level for your organization or to any organizational unit (OU). In addition, profiles can be assigned to a Group as an override for the OU.
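For administrators who prefer to script this configuration, the same profiles can, to our understanding, also be managed through the Cloud Identity API's inboundSamlSsoProfiles resource. The sketch below is illustrative only: the endpoint, field names, and values are assumptions based on that API's documentation and should be verified against it before use.

    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])
    session = AuthorizedSession(credentials)

    # Assumed request shape; the customer ID, display name, and IdP URLs are placeholders.
    profile = {
        "customer": "customers/C0123abcd",
        "displayName": "Subsidiary IdP",
        "idpConfig": {
            "entityId": "https://idp.example.com/metadata",
            "singleSignOnServiceUri": "https://idp.example.com/sso",
        },
    }
    response = session.post(
        "https://cloudidentity.googleapis.com/v1/inboundSamlSsoProfiles",
        json=profile)
    response.raise_for_status()
    print(response.json())  # expected to describe the newly created profile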
It is also possible to configure an organizational unit or group to sign in with Google usernames and passwords instead of a third-party IdP. For detailed information on configuring SSO with third-party IdPs, see the documentation.

OIDC Support, Coming Soon

Currently, SSO supports the popular SAML 2.0 protocol. Later this year, we plan on adding support for OIDC. OIDC is becoming increasingly popular for both consumer and corporate SSO. By supporting OIDC, Google Cloud customers can choose which protocol is best for the needs of their organization. OIDC works alongside the multi-IdP support being released now, so administrators can configure IdPs using both SAML and OIDC.

Related article: Announcing Sovereign Controls for Google Workspace. To further enable EU organizations through digital sovereignty, we're launching new capabilities to control, limit, and monitor transfers…
Source: Google Cloud Platform

New Google Cloud regions are coming to Asia Pacific

Digital tools offered by cloud computing are fueling transformation around the world, including in Asia Pacific. In fact, IDC expects that total spending on cloud services in Asia Pacific (excluding Japan) will reach 282 billion USD by 2025.[1] To meet growing demand for cloud services in Asia Pacific, we are excited to announce our plans to bring three new Google Cloud regions to Malaysia, Thailand, and New Zealand — on top of six other regions that we previously announced are coming to Berlin, Dammam, Doha, Mexico, Tel Aviv, and Turin.

When they launch, these new regions will join our 34 cloud regions currently in operation around the world — 11 of which are located in Asia Pacific — delivering high-performance services running on the cleanest cloud in the industry. Enterprises across industries, startups, and public sector organizations across Asia Pacific will benefit from key controls that enable them to maintain low latency and the highest security, data residency, and compliance standards, including specific data storage requirements.

"The new Google Cloud regions will help to address organizations' increasing needs in the area of digital sovereignty and enable more opportunities for digital transformation and innovation in Asia Pacific. With this announcement, Google Cloud is providing customers with more choices in accessing capabilities from local cloud regions while aiding their journeys to hybrid and multi-cloud environments," said Daphne Chung, Research Director, Cloud Services and Software Research, IDC Asia/Pacific.

What customers and partners are saying

From retail and media & entertainment to financial services and public sector, leading organizations come to Google Cloud as their trusted innovation partner. The new Google Cloud regions in Malaysia, Thailand, and New Zealand will help our customers continue to enable growth and solve their most critical business problems. We will work with our customers to ensure the cloud regions fit their evolving needs.

"Kami was born out of the digital native era, where in order to scale globally we needed a partner like Google Cloud who could support us on our ongoing innovation journey. We have since delivered an engaging and dependable experience for millions of teachers and students around the world, so it's incredibly exciting to hear about the new region coming to New Zealand. This investment from Google Cloud will enable us to deliver services with lower latency to our Kiwi users, which will further elevate and optimize our free premium offering to all New Zealand schools." – Jordan Thoms, Chief Technology Officer, Kami

"Our customers are at the heart of our business, and helping Kiwis find what they are looking for, faster than ever before, is our key priority. Our collaboration with Google Cloud has been pivotal in ensuring the stability and resilience of our infrastructure, allowing us to deliver world-class experiences to the 650,000 Kiwis that visit our site every day. We welcome Google Cloud's investment in New Zealand, and are looking forward to more opportunities to partner closely on our technology transformation journey." – Anders Skoe, CEO, Trade Me
"Digital transformation plays a key role in helping Vodafone deliver better customer experiences and connect all Kiwis. We welcome Google Cloud's investment in New Zealand and look forward to working together to offer more enriched experiences for local businesses, and the communities we serve," said Jason Paris, CEO, Vodafone New Zealand.

"Our journey with Google Cloud spans almost half a decade, with our most recent partnership and co-innovation initiatives paving the way for AirAsia and Capital A to disrupt the digital platform arena in the same vein as we did airlines. The announcement of a new cloud region that's coming to Malaysia – and Thailand too if I may add – showcases Google Cloud's continuous desire to expand its in-region capabilities to complement and support our aspiration of establishing the airasia Super App at the center of our e-commerce, logistics and fintech ecosystem, while enriching the local community and giving all 700 million people in Asean inclusivity, accessibility, and value. I couldn't be more excited about this massive milestone and the new possibilities that Google Cloud's growing network of cloud regions will create for us, our peers, and the common man." – Tony Fernandes, CEO, Capital A

"Google Cloud's world-class cloud-based analytics and artificial intelligence (AI) tools have enabled Media Prima to embed a digital DNA across our organization, deliver trusted and real-time news updates during peak periods when people need them the most, and implement whole new engagement models like content commerce, thereby allowing us to diversify our revenue streams and remain at the forefront of an industry in transition. By allowing us to place our digital infrastructure and applications even closer to our audiences, this cloud region will supercharge data-driven content production and distribution, and our ability to enrich the lives of Malaysians by informing, entertaining, and engaging them through new and innovative mediums." – Rafiq Razali, Group Managing Director, Media Prima

"Google Cloud's global network has been playing an integral role in Krungthai Bank's adoption of advanced data analytics, cybersecurity, AI, and open banking capabilities to earn and retain the trust of the 40 million Thais who use our digital services to meet their daily financing needs. This new cloud region is a fundamentally important milestone that will help accelerate our continuous digital reinvention and sustainable growth strategy within the local regulatory framework, thereby allowing us to reach and serve Thais at all levels, including unbanked consumers and small business owners, no matter where they may be." – Payong Srivanich, CEO, Krungthai Bank

"Having migrated our operations and applications onto Google Cloud's superior data cloud infrastructure, we are already delivering more personalized services and experiences to small business owners, delivery riders, and consumers than ever before – and in a more cost efficient and sustainable way. With the new cloud region, we will be physically closer to the computing resources that Google Cloud has to offer, and able to access cloud technologies in a faster and even more complete way. This will help strengthen our mission: to build a homegrown 'super app' that assists smaller players and revitalizes the grassroots economy." – Thana Thienachariya, Chairman of the Board, Purple Ventures Co., Ltd. (Robinhood)

Delivering a global network

These new cloud regions represent our ongoing commitment to supporting digital transformation across Asia Pacific.
We continue to invest in expanding connectivity throughout the region by working with partners in the telecommunications industry to establish subsea cables — including Apricot, Echo, JGA South, INDIGO, and Topaz — and points of presence in major cities. Learn more about our global cloud infrastructure, including new and upcoming regions.

[1] Source: Asia/Pacific (Excluding Japan) Whole Cloud Forecast, 2020—2025, Doc # AP47756122, February 2022

Related article: A new Google Cloud region is coming to Mexico. The new Google Cloud region in Mexico will be the third in Latin America, joining Chile and Brazil, and bringing the total of regions and…
Source: Google Cloud Platform

How NTUC FairPrice delivers a seamless shopping and payment experience through Google Cloud

Editor's note: Today we hear from NTUC Enterprise, which operates a sprawling retail ecosystem, including NTUC FairPrice, Singapore's largest grocery chain, and a network of Unity pharmacies and Cheers convenience stores. As a social enterprise, NTUC Enterprise's mission is to deliver affordable value in areas like daily essentials, healthcare, childcare, ready-to-eat meals, and financial services. Serving over two million customers annually, NTUC Enterprise strives to empower all Singaporeans to live more meaningful lives.

In August 2021, NTUC FairPrice launched a new app payment solution, allowing customers to pay for purchases and accumulate reward points at any FairPrice store or Unity pharmacy by simply scanning a QR code. The app eliminates the need to present a physical loyalty or credit card and integrates all customer activities across NTUC FairPrice's network of stores and services. By using the FairPrice app's payment feature, customers enjoy a seamless checkout experience at all FairPrice and Unity outlets.

The mission to build an integrated app across a network of over 200 stores encountered two challenges that we were able to overcome with Google Cloud solutions, namely Cloud Functions, BigQuery, and Cloud Run:

Financial reconciliation: FairPrice transactional data sits in multiple locations, including the point-of-sale (POS) server, the FairPrice app server, and the payment gateway (third-party technology used by merchants to accept purchases from customers). For the app to work, our finance team needed to ensure that all sales data coming from disparate sources are tallied correctly.
Platform stability: Instead of a staggered rollout, we opted for a 'big bang' launch across all FairPrice and Unity stores in Singapore. System resilience, seamless autoscaling, and a near-zero latency network environment were critical for ensuring that customers weren't stuck in line due to network delays or outages, especially during peak hours.

For a complex operation such as ours, the main technical hurdle was agile syncing between transaction systems. Our finance team needed to ensure that all funds that land in the bank correspond with sales made through our stores' POS machines. Resolving this issue required a custom solution that integrates disparate data sources across the sales spectrum.

At the time, the architecture of our sales system was as follows: the POS system processed a transaction and sent the data into our SAP network. From there, the data was funneled through enterprise resource planning (ERP) workflows managed by the finance team. The actual financial transaction, however, was performed on our FairPrice app server. This communicated with a third-party payment gateway that then transferred funds electronically to our bank.

The POS system was sophisticated enough to aggregate different payment methods registered in the FairPrice app, from GrabPay to Visa and Mastercard, and send granular transaction information to finance. But given that it wasn't executing actual transactions, finance then needed to make extra reconciliations to ensure that the POS data and payment data corresponded. This process placed a significant manual strain on the team, which would only increase as the business continued to grow.

Automating financial processes to drive business growth with cloud technology

To automate financial reconciliation, we used Google Cloud tools and designed a custom solution to integrate POS data with the payment network.
Here's a step-by-step summary of how we integrated all the elements of our transactions ecosystem, unlocking the potential for growth in our FairPrice app:

We first worked with the POS team to set up real-time data pipelines to import all transactional data across our retail network, from online sales to physical store purchases, into Cloud Storage every five minutes.
Next, we deployed Cloud Functions to detect changes made in Cloud Storage, before processing the data and syncing it with BigQuery, our main data analytics engine for data imported from POS systems into SAP systems (a simplified sketch of this step follows the list).
Leveraging Cloud SQL as our main managed database, we created data pipelines from the app server into BigQuery. We created two parallel channels to ingest datastreams into BigQuery, which then became a unified data processing engine for unlimited, real-time insights.
At that point, we used Google Cloud Scheduler to send BigQuery data analytics at the end of each day to a SAP secure file transfer protocol (SFTP) folder for processing by the data analytics team.
Combining readouts from these two data sources, our data scientists can now build an easy-to-read Data Studio dashboard. If transactions from different systems do not match up, an email alert will be sent to the finance team and other platform stakeholders. Finance then reconciles the transactions in Data Studio to ensure all sales from the POS system and the app server correspond correctly.

Combining the power of BigQuery with the convenience of Data Studio provided us with an additional advantage: the finance team, without requiring in-depth technical knowledge, can now easily obtain any piece of data they need without seeking help from the engineering team. The finance team can directly query what they need in BigQuery, and create an instant visualization on the Data Studio dashboard for any business use case.
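As an illustration of the Cloud Functions step above, the sketch below shows a Cloud Storage-triggered function (written with the Functions Framework for Python) that loads a newly arrived file into BigQuery. It is a minimal example under assumed names: the dataset, table, and CSV format are placeholders, not FairPrice's actual implementation.

    import functions_framework
    from google.cloud import bigquery

    # Deployed with a Cloud Storage "object finalized" trigger, so the function
    # runs each time a new POS export lands in the bucket.
    @functions_framework.cloud_event
    def pos_file_to_bigquery(cloud_event):
        data = cloud_event.data
        uri = f"gs://{data['bucket']}/{data['name']}"

        client = bigquery.Client()
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,   # placeholder file format
            skip_leading_rows=1,
            autodetect=True,
            write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        )
        # Append the new export to a reconciliation table (placeholder name).
        client.load_table_from_uri(
            uri, "finance.pos_transactions", job_config=job_config
        ).result()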
Keeping customers happy with seamless autoscaling driven by Cloud Run

One of the key objectives of launching Pay for our FairPrice app is to enable faster, more seamless checkouts across all stores in our retail network, through quick-and-easy QR code scanning. With a 'big bang' rollout, we needed the most powerful and agile computing infrastructure available to handle fluctuations in footfall at our stores, from peak lunchtime to dips in the middle of the night. We sought to combine this ability to autoscale with minimal infrastructure configuration, so our development team could focus on building innovative solutions that delight our customers.

Google Kubernetes Engine (GKE) had been powering NTUC FairPrice solutions for a long time. When it came to developing our new payment solution, we decided that it would be a good time to try Cloud Run, cognizant of the complex interplay of APIs required to evolve the app. The aim was to see if we could achieve even more automation and ease of use in scaling and deploying our solution. The experiment paid off, as we gained a new dimension of API agility through the optimized deployment of Cloud Run features. Here's an overview of how we leveraged Cloud Run to support failure-free store operation with virtually no manual configuration:

Default endpoints: Our FairPrice app deploys a wide range of proprietary APIs to synchronize all aspects of the solution. Cloud Run's key advantage here is that it provides a secure HTTPS endpoint by default. This automates the connection of APIs to software programs, removing the need to set up extra layers of network components to manage APIs. Ensuring this strong connectivity results in a seamless experience for shoppers.
Automated configuration: Even with Autopilot, the new GKE operating environment, we still needed to set up CPU and memory for our microservices clusters. With Cloud Run, we're freed from this task. All that is required is to set the maximum instance variable, and Cloud Run takes care of the rest by automatically scaling microservice clusters in real time according to our needs. This saves our DevOps team several hours per week, which they can devote to developing new features and updates for the FairPrice app. Ultimately, this convenience is passed on to our customers, translating into a more seamless and enjoyable shopping experience.

In just one year since launching Pay for the FairPrice app, we have gained measurable benefits from the innovations enabled by Google Cloud tools:

A 90% rate of return to the app. This means that nine out of 10 customers who use the app once will continue using it for subsequent purchases.
Added nearly 270,000 new FairPrice customers to the app, and that number is growing at an exponential rate.
Since launching the app, we have been able to convert 6% of offline transactions into digital transactions.
Given that app availability and seamless end user experiences are major factors in Net Promoter Scores (NPS), NTUC FairPrice has achieved ~75% in overall customer satisfaction.

Building Singapore's food "super app" with Google Cloud tools

We're excited about the next stage of our digital evolution, which is to turn the FairPrice app into a food 'super app.' We aim to spark customer delight in everything related to food within the NTUC FairPrice Group network. This includes "hawker centers" (food courts), restaurants, deliveries, and takeaways. All of these services will be built on Google Cloud solutions, in particular Cloud Run and BigQuery. We believe that with our newfound autoscaling and data analytics capabilities, NTUC FairPrice is ready to bring the business to new heights, and meet Singapore's appetite for great food.

Related article: New Singapore GCP region – open now. The Singapore region is now open as asia-southeast. This is our first Google Cloud Platform (GCP) region in Southeast Asia and our third …
Source: Google Cloud Platform

Filestore Enterprise for fully managed, fault tolerant persistent storage on GKE

Storing state with containers

Kubernetes has become the preferred choice for running not only stateless workloads (e.g., web services) but also stateful applications (e.g., e-commerce applications). According to the Data on Kubernetes report, over 70% of Kubernetes users run stateful applications in containers. Additionally, there is a rising trend of managed data services like MariaDB and Databricks using Google Kubernetes Engine to power their SaaS businesses, to benefit from the portability of Kubernetes, built-in auto-upgrade features such as blue-green deployments, backup for GKE, and out-of-the-box cost efficiency for better unit economics.

All of this means that container-native storage on GKE is increasingly important. Specifically, storage that can be seamlessly attached to and detached from containers as they churn (because the average container lifetime is much shorter than VMs) and remain portable across zones to stay resilient. That's where Filestore Enterprise fits in. Customers get a fully managed regional file system with four 9s of availability. Storage is instantaneously attached to containers as they churn, and zonal failovers are handled seamlessly. The rest of this blog explores multiple storage options with containers and how Filestore Enterprise fits in, to help customers decide which storage option best meets their needs.

External persistent state for "stateless" containers (left) vs. persistent containers with CSI managed state within persistent volumes (right)

Storage options

Three storage models (from left to right): local file system, SAN and NAS.

To understand the lay of the land, let's explore three options for common patterns of attached storage with containers (note: Cloud Storage is accessed via the application code in a container and not covered here).

Local file system over a local SSD device: A local file system (over a local SSD block device) is the simplest to set up and can be very cost-effective with good performance, but in most cases it lacks enterprise storage capabilities such as snapshots, backups, and asynchronous DR. It also provides limited reliability and redundancy as the state is host local. This model is well suited for scratch space/ephemeral storage use cases, but much less so for production-level, mission-critical use cases.
Local file system over a remote/shared block device (SAN): The SAN (Storage Area Network) model is powerful and well known. A SAN-backed remote volume can provide good performance, advanced storage services, and good reliability. As the volume is external to the container's host, the persistent volume can be reattached (mounted) to a different host in case of container migration or if the original one failed, but it is predominantly limited to only one host and Pod at a time. In the cloud world, SAN devices are replaced by networked block services, such as Google Cloud Persistent Disk (PD).
Remote/networked file system (NAS): The NAS (Network Attached Storage) model is semantically a powerful storage model as it also allows read-write sharing of the volume across several containers. In such a model the file system logic is implemented in a remote filer and accessed via a dedicated file system protocol, most commonly Network File System (NFS). In the cloud world, NAS devices are commonly replaced by file system services such as Filestore.

GCP block and file storage backends

In Google Cloud, non-local storage can be implemented using either PD or Filestore.
PD provides flexible SSD- or HDD-backed block storage, while Filestore provides NFSv3 file volumes. Both models are CSI (Container Storage Interface) managed and fully integrated into the GKE management system. The main advantages and disadvantages of both models (depicted below) are as follows:

PD provides capacity-optimized storage (HDD) and good price-performance variants (SSD, Balanced). PD provides flexible sizes and zonal volumes. On the other hand, PD-based volumes do not support read-write sharing. This means multiple containers can't read and write to the same volume. Customers can choose regional support (RePD), but this is limited to active-passive models. PD-backed volumes support container migration and failover (after host failures), but such migration or failover may require time and expertise to implement.
Filestore provides similar HDD and SSD variants and active-active regional (enterprise) variants. All Filestore variants support the read-write sharing model and almost instantaneous container migration and failover. Because of this increased functionality, Filestore-backed volumes have a higher cost compared to PD-backed volumes and have a minimum size limit of 1TB.

Main Google Cloud storage models PD & Filestore

Filestore as fully managed container storage

Both PD and Filestore support container-native operations such as migrating containers across hosts for use cases such as upgrades or failover. Customers on PD get best-in-class price/performance with an extensive selection of multiple PD types. That's why PD is popular with many GKE customers, as they benefit from its price-performance and capabilities. However, with PD, customers need to have expertise in storage systems. In PD, the file system logic is built into the host. This coupling means that during migration the host must cleanly shut down the container, unmount the file system, reattach the PD to the target host, mount the file system, and only then boot the container. While GKE manages a lot of these operations automatically, in the case of failover there are potential file system and disk corruption issues. Users will need to run some cleanup processes ("fsck") on the mounted volume before it can be used.

With Filestore, customers get a fully managed regional file system that is decoupled from the host. Customers don't need any expertise to operate storage, and failovers are handled seamlessly as there are no infrastructure operations to attach/detach volumes. In addition, customers also benefit from storage that can be simultaneously read and written to by multiple containers. In addition to the general value of Filestore as a GKE backend, Filestore Enterprise supports mission-critical and medium-to-large stateful deployments as it adds regional (four 9s) availability, active-active zone access, instantaneous snapshots, and a smaller SSD entry point for each volume.
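As a rough illustration of what the read-write-sharing model looks like from the GKE side, here is a sketch that requests a ReadWriteMany persistent volume through the Kubernetes Python client. The StorageClass name is an assumption; in practice it would be a class backed by the Filestore CSI driver (provisioner filestore.csi.storage.gke.io) at the enterprise tier.

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a Pod

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="shared-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],           # shared read-write across Pods
            storage_class_name="enterprise-rwx",      # assumed Filestore-backed class
            resources=client.V1ResourceRequirements(
                requests={"storage": "1Ti"}           # at or above the Filestore minimum
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc)

Any number of Pods in the region can then mount the resulting volume read-write at the same time, which is the sharing behavior that distinguishes Filestore from PD in the comparison above.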
Summary table

Links

Accessing file shares from Google Kubernetes Engine clusters | Filestore
How persistent container storage works — and why it matters
Disk and image pricing | Compute Engine: Virtual Machines (VMs) | Google Cloud
Persistent disks
Service tiers
Using the Filestore CSI driver

1. The full list of PD models and pricing can be found here: https://cloud.google.com/compute/disks-image-pricing#disk
Source: Google Cloud Platform

Accelerating migrations to Google Cloud with migVisor by EPAM

Application modernization is quickly becoming one of the pillars of successful digital transformation and cloud migration initiatives. Many organizations are becoming aware of the dramatic benefits of moving legacy, on-premises apps and databases onto cloud-native infrastructure and services, such as reduced Total Cost of Ownership (TCO), elimination of expensive commercial software licenses, and improved performance, scalability, security, and availability.

Migrating complex applications and databases to a cloud-centric architecture requires a rapid, accurate, and customized assessment of modernization potential and identification of challenges. Addressing business and functional drivers, calculating TCO, uncovering technological challenges and cross-platform incompatibilities, and preparing migration and rollback plans are all essential to the success and outcome of the migration. These cloud migration initiatives are often divided into three high-level phases:

Discovery: identifying and cataloging the source inventory. The output is usually an inventory of source apps, databases, servers, networking, storage, etc. The discovery of existing assets within a data center is usually straightforward and can often be highly automated.

Pre-migration readiness: the planning phase. This includes analyzing the current portfolio of databases and applications for migration readiness, determining the target architecture, identifying technological challenges or incompatibilities, calculating TCO, and preparing detailed migration plans.

Migration execution: where the rubber hits the road. During this phase, database schemas are actively converted, the application data access layer is refactored, data is replicated from source to target (often in real time), and the application is deployed on its chosen compute platform(s).

A successful evaluation and planning effort during the pre-migration readiness phase bolsters confidence in the investment in modernization. Skipping the pre-migration phase, or completing it inaccurately, can lead to a costly and sub-optimal result. Relying on manual pre-migration assessments can lead to long migration timelines, reduced success rates, poor confidence in the post-migration state, and increased risk and total migration cost. Commonly asked questions during pre-migration include:

How compatible are my source databases, which are often commercial and proprietary in nature, with their open-source, cloud-native alternatives? For example, how compatible are my Oracle workloads and usage patterns with Cloud SQL for PostgreSQL?

What is my degree of vendor lock-in with my current technology stack? Are proprietary features and capabilities being used that are incompatible with open-source database technologies?

How tightly coupled are my applications with my current database engine? Can my applications be deployed as-is, refactored for cloud readiness with ease, or will it be a big undertaking?

How much effort will my migration require? How expensive will it be? What will be my run rate in Google Cloud post-migration, and my ROI?

Can we identify quick-win applications and databases to start with?

There is a direct association between the accuracy and speed of the pre-migration phase and the outcome of the migration itself. The faster and more accurately organizations complete the required pre-migration analysis, the more cost-efficient and successful the migration itself will usually be.
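As a flavor of the automated, read-only discovery described above, the sketch below inventories schema objects in a PostgreSQL source using psycopg2. The connection details are placeholders, and this is an illustration of a generic assessment-style query, not migVisor's actual implementation.

```python
import psycopg2

# Placeholder connection details; in practice, point this at a read replica
# or a non-production copy of the source database.
conn = psycopg2.connect(
    host="source-db.example.internal",
    dbname="appdb",
    user="readonly_assessor",
    password="change-me",
)

# Read-only inventory query: count schema objects by type from the system
# catalog. Illustrative only; a real assessment tool goes much deeper.
INVENTORY_SQL = """
    SELECT n.nspname AS schema_name,
           c.relkind  AS object_kind,
           count(*)   AS object_count
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
      AND n.nspname NOT LIKE 'pg_toast%'
    GROUP BY n.nspname, c.relkind
    ORDER BY n.nspname, c.relkind;
"""

with conn, conn.cursor() as cur:
    cur.execute(INVENTORY_SQL)
    for schema_name, object_kind, object_count in cur.fetchall():
        print(f"{schema_name}\t{object_kind}\t{object_count}")

conn.close()
```

A full assessment also inspects code objects (procedures, triggers), configuration, and usage patterns, and maps proprietary features to cloud-native equivalents; automating that depth across an entire database estate is what assessment tooling such as migVisor provides.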
EPAM Systems, Inc., a leader in digital transformation, worked with Google Cloud as a preferred partner to accelerate cloud migrations, beginning with pre-migration assessments. Leveraging EPAM's migVisor for Google Cloud, a unique pre-migration accelerator that automates the pre-migration process, together with EPAM's consulting and support services, organizations can quickly generate a cloud migration roadmap for rapid and systematic pre-migration analysis. This approach has resulted in the completion of thousands of database assessments for hundreds of customers.

migVisor is agentless, non-intrusive, and hosted in the EPAM cloud. It connects to your source databases and runs SQL queries to ascertain the database configuration, code, schema objects, and infrastructure setup. Scanning of source databases is done rapidly and without interruption to production workloads.

migVisor prepares customers to land applications in Google Cloud and its managed suite of database services and platforms, such as Cloud SQL, bare metal hosting, Spanner, and Cloud Bigtable. migVisor supports re-hosting (lift-and-shift), re-platforming, and re-factoring.

"EPAM's recent application assessment update to its migration tooling system, migVisor, will bring a new level of transparency to the entire application and database modernization process," said Dan Sandlin, Google Cloud Data GTM Director at Google Cloud. "This enables organizations to make the most of digital technologies and provides a clear IT ecosystem transformation that allows our customers to build a flexible foundation for future innovation."

Previously, migVisor focused on assessments of the source databases and the compatibility of customers' existing database portfolio with cloud-centric database technologies. Coming this quarter, migVisor adds support for application assessments, augmenting its existing, class-leading capabilities in the database space. The addition of application modernization assessment functionality in migVisor, combined with EPAM's certification and specialization in Google Cloud Data Management and hands-on engineering experience, strengthens EPAM's position as a leader for large-scale digital transformation projects and migVisor as a trusted product for cloud migration assessments for Google Cloud customers. EPAM provides customers with an end-to-end solution for faster and more cost-effective migrations. Assessments that used to take weeks can now be completed in mere days.

Within minutes of registering for an account, anyone can start using migVisor by EPAM to automatically assess applications and application code. Visit the migVisor page to learn more and sign up for your account.

Related Article: Accelerate Google Cloud database migration assessments with EPAM's migVisor
The Database Migration Assessment is a Google Cloud-led project to help customers accelerate their deployment to Google Cloud databases w…
Read Article
Source: Google Cloud Platform