How DueDil leverages Apigee's API-first approach to deliver data insights at scale

As its name reflects, DueDil provides due diligence services ranging from customer-specific risk evaluations and selections to customer onboarding and real-time risk monitoring for leading financial services, high-growth tech, and insurance companies. Founded in 2009, the company helps more than 3,000 enterprise users from over 400 clients not only understand with whom they're doing business, but do so with increased efficiency and in compliance with regulatory requirements.

Due diligence services have evolved in recent years, both because of new regulations and because new technologies are supplanting legacy systems and processes, many of which relied until recently on pen-and-paper workflows or exhaustive spreadsheet work. DueDil knew this technology transformation represented an opportunity to replace manual processes with automation, but it also recognized a second opportunity: to not merely process data but also activate it, by connecting information in disparate IT systems and generating data-driven insights delivered at scale.

To capitalize on this opportunity, the company built its Business Information Graph, or B.I.G., a platform that maps approximately 300 million connections among companies. B.I.G. ingests billions of data points, and is refreshed multiple times per day, to surface unique insights about businesses' relationships, such as fraud risks. The results that B.I.G. drives often speak for themselves: some DueDil customers onboard partners up to 80% faster, perform risk verification up to 18 times faster, and reduce time spent on manual portfolio checks by up to 80%.

What powers all of this transformation? Application programming interfaces (APIs). "From a go-to-market standpoint, our product is an API," said Denis Dorval, DueDil COO, in a recent webcast, explaining that customers can directly tap B.I.G.'s resources for themselves, and build atop them for their own needs, via DueDil's API.
Choosing an API management platform to deliver fast, secure, and scalable APIs

To execute on their vision of connecting B2B ecosystems for better insights and efficiency, DueDil looked for a cloud provider that could fulfill several specific criteria. They needed robust management for the APIs with which their internal developers leverage different systems for new use cases and process automations, as well as for the productized API they offer to customers. They needed sophisticated analytics and abundant processing power to crunch through billions of data points. And they needed enterprise-grade security, scalability, and agility to underpin it all. Last but not least, the company prioritized a smooth transition; DueDil did not want the user experience to suffer as it switched providers.

"The stability of Google Cloud's Apigee API management platform and the strength of its services stood out," said DueDil's Engineering Manager, Robert Cicero. "Apigee is a resilient and agile platform, fulfilling our need to build APIs quickly, safely, and at scale," he remarked, noting that he appreciated that many of Apigee's API security defense tools and policies work out of the box. For instance, Apigee's JSON threat detection policies, custom policies, and authentication and authorization processes can be deployed instantly and add minimal latency, meaning DueDil can stop security threats before they enter its network while still avoiding the risk of service lags.

Today, DueDil has five internal services that facilitate business due diligence, all exposed via Apigee. They also use Apigee's monetization feature to drive API consumption. That said, because DueDil's go-to-market strategy is fast-paced and client-oriented, they most often use Apigee to rapidly prototype APIs for their clients, so they can understand what a specific API would look like and how it would behave.
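Policies of this kind are configured declaratively in Apigee and attached to an API proxy. As a hedged illustration only (the limits below are arbitrary examples, not DueDil's actual settings), a JSON threat protection policy that rejects oversized or deeply nested payloads before they reach the backend might look like:

```xml
<JSONThreatProtection async="false" continueOnError="false" enabled="true"
                      name="JSON-Threat-Protection-Example">
  <DisplayName>JSON Threat Protection (example limits)</DisplayName>
  <!-- Cap structure size so hostile payloads are rejected at the edge -->
  <ArrayElementCount>20</ArrayElementCount>
  <ContainerDepth>10</ContainerDepth>
  <ObjectEntryCount>15</ObjectEntryCount>
  <ObjectEntryNameLength>50</ObjectEntryNameLength>
  <StringValueLength>500</StringValueLength>
  <!-- Inspect incoming requests (could also be set to a response variable) -->
  <Source>request</Source>
</JSONThreatProtection>
```

Because the check runs in the proxy layer, malformed or malicious JSON is stopped before it ever touches internal services, which is the low-latency, edge-enforcement pattern described above.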
This allows DueDil, its partners, and its customers to spend more time delivering value from insights rather than getting bogged down in building backend systems. Moreover, Apigee made it simpler to connect to other Google Cloud services, such as BigQuery, Google Data Studio, and Google Cloud Storage. Apigee acts as a central nervous system among systems, giving DueDil not only the ability to connect systems and automate processes but also insight and visibility into how its B.I.G. services are being used by partners and customers. Plus, added Cicero, "the migration to Apigee was seamless, with arguably our biggest win being that no one knew that we had switched API management providers to Apigee."

Leveraging APIs to provide self-service while enforcing security and governance policies

Moving forward, DueDil plans to leverage Apigee to give staff members and clients more privileges, visibility, and opportunity to create and edit apps in a self-service manner, without needing to rely on an IT department or endure long approval processes. Harnessing APIs to open up B.I.G. and other capabilities to more teams across the company will also allow DueDil to move faster and include more people in the innovation process. Leveraging Apigee's API management capabilities, DueDil also intends to dive deeper and experiment with other Google Cloud products and services, including Cloud Functions, Cloud Pub/Sub, and more.

"At the end of the day, every company goes about due diligence a little differently. The only way that we at DueDil are able to provide something that is configurable and dynamic to diverse businesses is if we use platforms that can adapt, too," said Cicero. "Apigee gives us the agility required to create and deliver for a wide variety of businesses."

Today, Google Cloud works with organizations across banking, capital markets, insurance, and payments worldwide to solve their most challenging problems.
Click here to learn more about how Google Cloud Apigee API management can help you design, secure, analyze, and scale APIs anywhere with visibility and control. To try Apigee API management for free, click here.
Source: Google Cloud Platform

Automate your budgeting with the Billing Budgets API

TL;DR – Budgets are ideal for visibility into your costs, but they can become tedious to update manually. Using the Billing Budgets API, you can automate updates and changes with your own custom business logic!

We've talked a lot about what budgets are and how to set them up. Once you're working with a larger company or lots of projects, it can be useful to have multiple budgets that line up with things like lines of business or certain teams. Unfortunately, manually going in and updating budgets can be a tedious task, especially if you're in a rapidly changing environment. Thankfully, the Billing Budgets API can help!

Automation through APIs and Service Accounts

There are plenty of ways you can automate Google Cloud project creation, such as Terraform or using the API directly. For this post, we'll focus on the steps for interacting with the Billing Budgets API so you can automate budgets regardless of which tool you're using. We'll also be using Python, but the documentation covers more ways and languages to use the API.

The first thing we'll need is an identity with which to manage these budgets! To create a Service Account, head to the Service Accounts page under IAM & Admin. (Feel free to use an even more descriptive name, like "Billing Budget API Automation Service Account".) If you need more information about Service Accounts, check out the documentation here. We're keeping things simple by creating a basic Service Account and a JSON key (don't forget to download this!) to authenticate with it.

We'll also need to do one more thing: give the Service Account access to manage budgets on the billing account itself. Choose Billing from the menu in the top left, choose which billing account you want to work with, and then choose Account management from the left navigation. We'll grant this Service Account administrator access so it can manage all budgets.
Of course, if this were a production account and environment, it would be worth setting up a custom role to ensure that, just in case the Service Account key were compromised, the amount of damage would be limited. Principle of least privilege and all that! You can copy/paste the Service Account email and choose the Billing Account Administrator role to give that account full, unfiltered access to your billing account.

We'll also have to make sure the API is enabled, which you can do from the Cloud Billing Budget API page. Each project that you plan to use this API from will need to enable it, which is all the more reason to have a centralized billing administration project for tasks like these.

Get to the code

With all the setup done, the next step is to start coding! We'll work with a quick Python script here to keep it simple, but this could run from anywhere that makes sense for your setup, such as a Cloud Function that you could call whenever a new project gets created. Since we're using Python and the client libraries here, we'll first create a virtualenv and then run:

pip install google-cloud-billing-budgets

Now let's look at some code! This part is pretty simple, as all we're doing is listing out the existing budgets. To break it down a bit further: we'll need to import the budgets resource from the Google Cloud billing client library. We're also setting the location of the Service Account key manually (though you could skip this if you set the environment variable outside of the script), as well as setting a variable for the billing account ID. You can find this ID on the billing account page; it should look like three sets of six characters or numbers. Finally, we set up the client so we can make calls. In the list_budgets function, we're using the client library to request a list of the budgets, passing in that billing account ID as the parent.
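As a small, self-contained sketch of that "parent" plumbing (the helper name `parent_for` is ours, not part of the client library), you can validate the billing account ID format described above and build the resource name every Budgets API call expects:

```python
import re

# A billing account ID looks like three groups of six letters/digits
# separated by hyphens, e.g. 012345-6789AB-CDEF01 (illustrative, not real).
_BILLING_ID_RE = re.compile(r"^[A-Z0-9]{6}-[A-Z0-9]{6}-[A-Z0-9]{6}$")


def parent_for(billing_account_id: str) -> str:
    """Return the 'parent' resource name the Billing Budgets API expects."""
    if not _BILLING_ID_RE.match(billing_account_id):
        raise ValueError(f"doesn't look like a billing account ID: {billing_account_id!r}")
    return f"billingAccounts/{billing_account_id}"
```

Catching a malformed ID up front gives you a clear error instead of a cryptic permission failure from the API.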
Simple enough! Now we're looping through the results and printing out their display name, along with the budget amount (refer back to this post for more details on how the different amounts work). Running this function lists each existing budget and the details that are relevant. That's a great starting point, but let's take it a step further.

Create and update

First, let's create a new budget, which can be useful to automate if you're doing things like spinning up multiple projects or a whole new environment. Here's some quick code to do just that! Keeping it simple, this function will take in the name for the budget and the amount, as well as a flag for a specified amount versus the dynamic last-month amount. So if we call the function with something like this:

create_budget("Dynamically created budget", 100, False)

we'll see our new budget show up in the console. Isn't it great when code works?

Of course, this is a pretty default budget, since all we've set is the name and an amount. What if we only want to limit this budget to a certain set of projects? Back to the code! This function takes the budget ID (you can find this through the API or the console URL) and the project number you want to add (project IDs are separate from project numbers, but you can find both through the API or console) in order to add that project to the scope of the budget. If the budget didn't have a scope before, this scopes the budget to just a single project.

We created and updated a budget through code today! Pat yourself on the back, you've done a great job.

Next steps

Hopefully this code serves as a good starting point for how you can automate your budgets to make sure they stay updated even if your environments are changing quickly.
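To make the create/update shape concrete without needing credentials, here is a hedged sketch that builds the v1 Budget resource as a plain dict, following the Cloud Billing Budget API's REST representation (displayName, amount.specifiedAmount or amount.lastPeriodAmount, and budgetFilter.projects). The helper name `budget_body` and its parameters are our own illustration; you would pass the result to the create or patch call:

```python
def budget_body(display_name, amount_units=None, currency_code="USD",
                last_period=False, project_numbers=None):
    """Build a Budget resource as a plain dict, ready to send as JSON.

    Set last_period=True to track against the last calendar period's spend
    instead of a fixed amount; project_numbers scopes the budget to those
    projects (project *numbers*, not project IDs).
    """
    body = {"displayName": display_name}
    if last_period:
        # Dynamic budget: amount is last period's actual spend.
        body["amount"] = {"lastPeriodAmount": {}}
    else:
        # In the JSON mapping, Money.units is an int64 serialized as a string.
        body["amount"] = {"specifiedAmount": {"currencyCode": currency_code,
                                              "units": str(amount_units)}}
    if project_numbers:
        body["budgetFilter"] = {
            "projects": [f"projects/{n}" for n in project_numbers]
        }
    return body
```

Mirroring the article's example, `budget_body("Dynamically created budget", 100)` yields a fixed $100 budget, while passing `project_numbers=[1234567890]` adds the single-project scope discussed above.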
Check out the client library and the documentation if you want to learn more and automate away!
Source: Google Cloud Platform

Why embedding financial services into digital experiences can generate new revenue

Faced with changing customer behaviors and demands, tightening margins, and increasing threats from digital competitors, financial services institutions (FSIs) will need to meet customers where they are, open up their services, and establish new ways to monetize their products. Doing so will also enable them to build a better profile of their customers, and deliver more personalized user experiences and fast, convenient banking and payment services. Cloud technology plays a big role in this shift toward digital FSIs.

In Asia, bank branches now account for just 12% to 21% of monthly transactions in the region, with customers turning to digital channels for routine transactions such as peer-to-peer transfers and bill payments, according to McKinsey & Company. Overall customer engagement has climbed from an average of 12.7 to 14.9 transactions a month in Asia's developed markets, and from 6 to 8.1 in emerging markets.1

Fueled by growing smartphone adoption, evolving customer behavior and momentum toward digital platforms have enabled digital-first players to snag a growing piece of the banking pie. McKinsey estimates that digital banking penetration has grown an average of 97% in Asia's developed markets, and 52% in emerging markets, with between 30% and 50% of those who have yet to use digital banking likely to do so.

Consumers now are more than ready to make the switch to neobanks, or digital banks. In Singapore, 63% are open to banking with digital-only players, according to a Visa study. On what will entice them to do so, 63% point to bill payments, while 56% will use neobank services to make payments at retail outlets. Furthermore, 54% prefer digital banks for the convenience they offer, while 52% like the faster service. Among those who are open to digital banks, 60% will move some services from their current bank to these new players even if the latter have no prior banking experience.
One in five respondents say they are willing to switch all services to a neobank.

The same is true for small and midsize businesses (SMBs) in Singapore. According to a separate survey by Visa, 88% of these companies will consider moving some services to digital banks. Driven by their frustration over a lack of quality corporate products and control of their banking experience, 55% of SMBs believe neobanks will help bring down overall banking costs. Another 54% say digital banks offer greater convenience, while 53% point to greater ease in paying bills online. These stats should worry even established FSIs, especially those that have not done quite enough to open up their service ecosystems and drive innovation through APIs.

An API toward new revenue

While most banks have active APIs, the services that some of them currently provide are merely functional; they're the means to an end for partners to obtain their targeted products and services. Without knowing it, consumers use these types of APIs indirectly through their favorite applications every day: a payment processing API will enable them to purchase their lunch, while a loan application API will get them that dream home. But while banks do not always own the customer journey, they can still find opportunities to sell their products via partners.

Many leading banks are leveraging key technologies, such as API management, artificial intelligence (AI), and data analytics, to embed digital banking into consumers' everyday lives, including groceries, travel, entertainment, healthcare, and food delivery. When traditional banks open up their APIs to third parties offering broader services that pull unique services into their own apps, they become plugged into the broader customer journey. This helps boost usage of their services and embeds them in the overall customer experience.
It also provides aggregated data that will help banks build richer consumer profiles, and deliver more personalized products and services.

APIs also create equal opportunities for smaller participants to be involved in the financial services ecosystem, potentially creating micro-segments that previously may not have existed. With insufficient demand within a closed system to justify the provision of such services, some customers in these micro-segments have previously been left unserved. APIs, which facilitate collaboration between the different micro-segments so they can be commercially viable, help solve this problem. Some banks are also opening up APIs to allow access to datasets that enable businesses to trigger automated workflows and enhance their operational efficiency. Others, such as Bank Rakyat Indonesia (Bank BRI), have generated new revenue by leveraging Google Cloud's Apigee to manage their API lifecycle and identify new revenue opportunities.

Apigee's monetization feature has helped Bank BRI realize $50 million in revenue and enabled the bank to define its pricing based on API calls and automatically bill based on usage. In addition, the Indonesian bank uses data analysis alongside Google Maps Platform to score its customer base of 75.5 million, and identify those who can be recruited as BRILink agents for underbanked areas. These agents are customers who maintain a minimum balance of $800 and score high on reliability. The appointment of branchless agents via the Agent BRILink app has pushed the loan volume from the bank's branchless business to $26 billion in 2018, up from $15 billion the year before.

How banks can get started with APIs

Clearly, there are new revenue opportunities for banks to leverage the data they already have. Here are some tips to help FSIs kickstart their API journey:

Align with internal leadership growth initiatives.
Leverage executive key performance indicators around growth and cost savings to foster a culture that offers APIs to micro-segmented markets with an eye on cultivating a healthy financial services ecosystem.

Productize APIs with a strong value proposition. Starting with an API-first approach, stock the shelves of your API shop with new services and a strong inventory of APIs that will entice third parties (i.e., retailers, telcos, etc.) to start using them. This customer-first, outside-in approach will serve as a strong base to build on and enable the addition of more APIs as adoption grows.

Actively nurture a developer community. A properly trained API manager will ensure constant contact with the developer community, and that partners are provided with case studies to help them identify viable use cases for your APIs.

Leverage security as a strategic enabler. Security is a key enabler of the API economy, and most API security postures are defensive. By leveraging deep security tooling together with strong identification of developers, banks can better track information and data usage offensively.

FSIs also need to avoid some common pitfalls, such as overlooking the need to continuously improve their APIs. If no one is using an API, it is clearly failing to provide any real value to third-party developers. In addition, efforts should be made to market the APIs and let developers know what is available. A common mistake FSIs make is assuming their work is done once their APIs are released, neglecting the need to carry out community outreach and marketing to generate awareness about the APIs.

If you are interested in learning more about this topic, don't miss our session at the Google Cloud Financial Services Summit on Embedded Finance: The Future of Banking.

1. McKinsey & Company. "Asia's digital banking race: Giving customers what they want." Global Banking Practice.
April 2018.
Source: Google Cloud Platform

Costa Mesa Sanitary District improves manhole maintenance with machine learning

Local governments are embracing more modern and scalable ways to support their communities. In an effort to save both time and money, Costa Mesa Sanitary District ("CMSD") used machine learning to automate and streamline manhole maintenance.

Manhole maintenance is an essential part of the upkeep of cities. Manholes provide critical access points for underground public utilities, allowing inspection, maintenance, and system upgrades. Failure to maintain manholes can cause a multitude of problems, from road hazards to sewer blockages, and can make it difficult for workers to access underground public utilities, which can lead to other safety issues. Manhole maintenance is an essential part of CMSD's work, but this process required an outside consultant and cost the CMSD over $100,000.

ML to the rescue

CMSD, in collaboration with SpringML and Google Cloud, developed a project to streamline manhole maintenance by leveraging the power of machine learning (ML) to detect sewer manholes and rate their conditions. This solution saves CMSD $40,000 every year, freeing up funds for other public service projects.

Every quarter, one member of the CMSD drives a car outfitted with a GoPro camera. This car travels through the entire District area, which includes the city of Costa Mesa and small portions of Newport Beach, CA (about 218 street miles), and records the roads to detect approximately 5,000 manholes. At the end of the recording day, CMSD members transfer images and videos from the GoPro SD card to a local server. Then these files are automatically ingested into Google Cloud Storage for processing. From there, the machine learning algorithms detect which manholes need repair.

Google Cloud products are used throughout this project. Once the images and videos are in Google Cloud Storage, a workflow with Cloud Scheduler spins up a VM every night to detect whether there are new videos in Cloud Storage.
If there are, this triggers the machine learning pipeline, which reviews the numerous images and videos, rates each manhole, and determines whether any require maintenance.

Machine learning to detect and grade manholes

SpringML applied a very systematic approach to detecting and grading the manholes. First, image processing ensures that the region of interest is only the section of the road in front of the vehicle, to avoid any privacy concerns. Then SpringML applied a five-step process using machine learning to detect and grade the manholes.

SpringML developed two separate custom TensorFlow-based Mask R-CNN models. Mask R-CNN is a deep neural network used for image segmentation tasks, which means it can separate different objects in an image or a video. The first Mask R-CNN model was created to accurately detect whether an image showed a manhole cover. This was a critical step because sewer manhole covers look similar to water main covers, and to make the model accurate, it was important that there be no false positives. SpringML used around 50 images of sewer manholes, along with other images, to train and validate that the algorithm could successfully detect sewer manhole covers.

Once a manhole is accurately detected, the surrounding area is masked using image processing to focus on the manhole, and the cropped images are sent to the second model, which detects damage in the area around the manhole, called the apron. To reduce processing time, the second model only runs inference if a manhole cover is detected. The second Mask R-CNN model uses CMSD guidelines to decide what types of damage contribute to the rating system. For training purposes, SpringML used TensorFlow and Keras inside a virtual machine on Google Cloud. The initial model was trained and the Keras weights were saved on Google Cloud Storage.
This helped create a versioned system for the models as they get refined over time. Then, duplicated detections are cleaned up so that there is a unique detection for each manhole cover. Finally, manholes are graded on a 1–5 rating, with 5 representing high damage and 1 representing low damage.

A smooth road ahead

Once the machine learning pipeline has analyzed the new videos and images, the final scores are stored in BigQuery. Results are then served to members of CMSD via a simple web application where they can see which manholes need to be maintained. Two staff members review the ML results and determine priorities for repairs.

One of the most interesting features of this project is that the model gets continually retrained based on feedback submitted in the web application. For example, if the model inaccurately detects a manhole, a member of CMSD can mark that in the web application and their feedback is immediately used to refine the model.

This solution shows how leveraging machine learning can streamline a necessary local government project and save money and labor while being highly scalable! Not only does this system streamline the manhole maintenance process, but it also allows for more frequent review of manhole conditions and provides a historical view of how the District's manholes change over time.

Want to learn more about Google Cloud machine learning? Check out this tutorial to learn TensorFlow and Keras, and check out more machine learning tools in our AI Platform.
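The article mentions that duplicated detections are cleaned up to leave one detection per manhole cover, but doesn't describe the method. One common way to do this in object-detection pipelines (an assumption on our part, not necessarily CMSD's actual approach) is non-maximum suppression: keep the highest-confidence detection and drop any overlapping boxes:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def dedupe(detections, threshold=0.5):
    """Keep one detection per manhole: the highest-score box wins among overlaps.

    detections: list of {"box": (x1, y1, x2, y2), "score": float} dicts
    (a hypothetical detection format, not the actual pipeline's schema).
    """
    kept = []
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        if all(iou(det["box"], k["box"]) < threshold for k in kept):
            kept.append(det)
    return kept
```

Because the camera records video, the same cover appears in many consecutive frames; suppressing overlapping boxes this way collapses those repeats into a single detection to grade.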
Source: Google Cloud Platform

Cloud CISO Perspectives: May 2021

May is a big month for the security industry. It's been over a year since we gathered for RSA in San Francisco for one of 2020's last major in-person events. While we likely won't be together in person this year, it's an important time for the security community to come together, reflect on many accomplishments, and consider the challenges still ahead of us. As the world focuses on security incidents and all the risks that still need resolving, it is important to stand back on occasion and also note the immense progress made by large numbers of small, medium, and large enterprises to protect themselves and their customers against increased threats. What is also amazing is to see organizations do this while accelerating their digital transformations, supporting and protecting customers, and managing ongoing remote working challenges. We are privileged to play our part in supporting those great teams.

It's also been a busy month for us here at Google Cloud since our inaugural CISO Perspectives blog post in April. Today, I'll recap our cloud security and industry highlights, offer a sneak peek of what's ahead from Google at RSA, and more.

Thoughts from around the industry

Risk Governance of Digital Transformation in the Cloud – In our latest Office of the CISO whitepaper, we shared guidance on both the challenges and opportunities of cloud transformation for Chief Risk Officers, Chief Compliance Officers, Heads of Internal Audit, and their teams. A misconception we sometimes see among these executives is that moving to the cloud creates more risk to manage. Having held these leadership positions in previous roles, I believe that the cloud is as much a means of managing security, resilience, and other risks as it is a risk in its own right. The whitepaper dives deep into considerations for each of these leadership functions as their organization embarks on a digital transformation journey.
The importance of meeting global compliance requirements – Compliance is critical for building trust with customers in regulated industries, especially the public sector. It is worth remembering that in any critical industry, where there can be material impact from incidents, strong industry practices and standards to protect customers are vital (I wrote about this last summer). At Google Cloud, we're regularly adding new compliance and security certifications to meet our customers' needs globally. Recently, we expanded our list of FedRAMP High-certified products to include Cloud DNS, and helped our customers in the Asia-Pacific region address various compliance requirements to meet new government regulations for security and data protection. Google Cloud was also the only cloud service provider to complete an annual pooled audit with the Collaborative Cloud Audit Group (CCAG), a syndicate of 39 leading European financial institutions and insurance companies that depend on cloud infrastructure and technologies to deliver innovative solutions and experiences for their customers. Having spent most of my career in the financial services industry, I know firsthand the importance of managing risk assessments for outsourced vendors to provide the necessary assurances customers need from their cloud providers.

RSA 2021

We have a great lineup of speaking sessions and keynotes from Googlers at RSA this year. Below are the highlights you don't want to miss:

I'll be doing a session on May 20 about supply chain resilience, where a panel of experts will dive into how we can adjust risk and security initiatives to handle the next "punch to the supply chain." Additionally, on May 18 I'll join many esteemed CISO leaders from various industries and governments for a keynote discussion on our top security insights, lessons learned, and best practices for how we move forward as an industry to address the next wave of challenges.
Google's Senior Director of Information Security, Heather Adkins, will deliver a session on how to build secure and reliable systems at scale, which will cover principles from Google's Site Reliability Engineering book of the same title (available for free download here). I'm most looking forward to Heather's advice on how we as an industry can reshape our security thinking, based on modern architectures and technologies that can help organizations design scalable and reliable systems that are fundamentally secure.

Nelly Porter, Senior Product Manager at Google Cloud Security, will participate in a panel discussion with security experts on the importance of Confidential Computing technology, how it's changing the security landscape, and where it's headed. Google Cloud has made great progress in delivering a Confidential Computing portfolio for our customers in regulated industries over the past year, and we're excited for new milestones in 2021.

Google Cloud security highlights

Infrastructure and SRE spotlight – Before I joined Google Cloud, I always admired the infrastructure and benefits this organization delivers that are uniquely Google, from the subsea cable innovations to SRE inventions and principles. Security and resiliency are baked into every layer of our infrastructure. Many of the Googlers who build and support our platform have sat in the same seat as our customers, so they understand those needs intimately. Over the last few months it's been amazing to watch our technical infrastructure team grow, and to see the direct reliability, operational resilience, and security benefits that team brings to our customers. For example, we've opened a new region in Poland, announced the first subsea cable that will directly connect the U.S.
to Singapore with fiber pairs over an express route, and released an SRE book focused on how organizations can complete a successful cloud migration.

New security foundations blueprint guide – As part of our mission to deliver the industry's most trusted cloud, we strive to operate in a shared-fate model for risk management in conjunction with our customers. This includes sharing opinionated, step-by-step guidance with key decision points and focus areas for how our customers deploy workloads in Google Cloud. This is why we've updated our Google Cloud security foundations guide and corresponding Terraform blueprint scripts. These blueprints are tremendously helpful to many stakeholders within an enterprise, like a CISO who needs to understand our key principles for cloud security, or a C-suite business leader who needs to quickly identify the skills their teams need to meet an organization's security, risk, and compliance needs on Google Cloud.

When we think about the types of features to build into products, we have many principles we follow. But the two that I keep coming back to as crucial are:

The need for secure products, not just security products. All products should have security built in, and while we do build great security products, our security and other teams remain focused on constantly enhancing the base levels of security and the security features in all our products.

Defense in depth. We don't just focus on defense in depth from attacks, for ourselves and our customers. We also prioritize defense in depth from configuration errors and other hazards.

As you see below in some of the highlights of new features and products, these represent our commitment to secure products and all forms of defense in depth.

Workload identity federation – Service account keys are powerful credentials, and can represent a security risk if they are not managed correctly.
A safer approach is to use workload identity federation, which uses IAM to grant external identities IAM roles, including the ability to impersonate service accounts. This lets you access resources directly and eliminates the maintenance and security burden associated with service account keys. We also offered related overall guidance on the best way to use and authenticate service accounts on Google Cloud.

VPC-SC directional policies – With VPC Service Controls (VPC-SC), admins can define a security perimeter around Google-managed services to control communication to and between those services. Using VPC-SC, you can isolate your production GCP resources from unauthorized VPC networks or the internet. But what if you need to transfer data between isolated environments that you’ve set up? VPC-SC directional policies are a new secure data exchange feature that lets you configure efficient, private, and secure data exchange between isolated environments.

Anthos Service Mesh supports VMs as well as clusters – Most enterprise compute resources are still in VMs, and many will remain there for a long time to come. In Anthos 1.7, your VM-based workloads can now take advantage of the same mesh functionality as your container-based workloads.

Cloud Spanner CMEK and Access Approval – Cloud Spanner is Google Cloud’s fully managed relational database that offers unlimited scale, high performance, strong consistency across regions, and high availability. Spanner now supports customer-managed encryption keys (CMEK) and Access Approval, Google Cloud’s industry-leading controls that require approval before Google support and engineering teams can access your content.

External Key Manager enhancements – In early 2020 we launched Cloud External Key Manager (Cloud EKM), the industry’s leading Hold-Your-Own-Key (HYOK) product. Using Cloud EKM, the keys used to protect your data stored and processed in Google Cloud are completely hosted and managed outside of Google Cloud infrastructure.
Cloud EKM initially launched with support for BigQuery and GCE/PD; we have since expanded support to Cloud SQL, GKE, Dataflow Shuffle, and Secret Manager, with CMEK support currently in beta. We also provided in-depth documentation on the functionality, architecture, and use cases for Cloud EKM in a new whitepaper.

Web App and API Protection solution – Web applications and public APIs are increasingly important to how organizations interface with their customers and partners, and we’ve seen increased investment in tools to protect these resources from fraud and abuse. Google Cloud’s new Web App and API Protection solution is based on the same technology Google uses to protect its public-facing services against web application exploits, DDoS attacks, fraudulent bot activity, and API-targeted threats. It provides protection across clouds and on-premises environments.

Threat Intel for Chronicle – Most threat intelligence feeds require security teams to do the implementation and legwork. With our new Threat Intel for Chronicle offering, however, our intelligence insights are applied automatically across your security telemetry to present unique observations within your environment. Threat Intel for Chronicle is exclusively curated for enterprise customers by Uppercase, Google Cloud’s intelligence research and applications team, to provide our perspective on threats across the internet and surface them as relevant alerts.

That wraps up another month of thoughts and highlights. If you’d like to have this Cloud CISO Perspectives post delivered to your inbox every month, sign up, and we’ll see you in June!

Related article: Risk governance of digital transformation: a guide for risk, compliance, and audit teams.
Source: Google Cloud Platform

Maximize your Cloud Run investments with new committed use discounts

One of the key benefits of Cloud Run is that it lets you pay only for what you use, down to 100-millisecond granularity. This is ideal for elastic workloads, notably workloads that can scale to zero or that need instant scaling. However, the traffic on your website or API doesn’t always need this kind of elasticity. Often, customers have a steady stream of requests, or the same daily traffic pattern, resulting in a predictable spend on Cloud Run resources. To help address these use cases, today we’re introducing self-service, spend-based committed use discounts for Cloud Run, which let you commit for a year to spending a certain amount on Cloud Run and receive a 17% discount on the amount you committed. After you purchase a Cloud Run committed use discount, it automatically applies to all aggregated Cloud Run CPU, memory, and request usage in a region, across all projects in your billing account.

An example

How do committed use discounts for Cloud Run work? Let’s take an example. Assume that you’re running a few services on Cloud Run in Oregon (us-west1), serving user requests with a consistent daily traffic pattern, resulting in a predictable pattern on your bill. As you can see in the chart above, the Cloud Run cost per hour is $1 or above most of the time. In that case, it’s more economical for you to commit to spending $1 per hour on Cloud Run for one year in us-west1 across all your projects. With the 17% discount, you are then charged a minimum of $1 * 0.83 = $0.83 per hour for the whole year, independent of your actual usage. At times when the cost per hour exceeds your commitment, you still benefit from the discount rate on the committed portion. Thus, in this example, at 1 PM Cloud Run pricing is $1.50 for this region before the committed use discount, so you only pay $1 * 0.83 + ($1.50 - $1.00) = $1.33 for that hour.

Get started today

Cloud Run is an easy-to-use platform for running container-based workloads that need to scale up and down on demand.
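The discount arithmetic above can be sketched as a small calculation. This is a simplified model for illustration only: actual billing aggregates CPU, memory, and request charges across a whole billing account, and the function name is made up.

```python
def effective_hourly_cost(actual_cost: float, committed: float,
                          discount: float = 0.17) -> float:
    """Simplified model of a spend-based committed use discount.

    You always pay at least the discounted commitment; usage above the
    committed amount is billed at the on-demand rate, so the discount
    applies only up to the commitment in this sketch.
    """
    discounted_commitment = committed * (1 - discount)
    overage = max(0.0, actual_cost - committed)
    return round(discounted_commitment + overage, 2)

# Quiet hour: $0.60 of usage still costs the $0.83 floor.
print(effective_hourly_cost(0.60, committed=1.00))  # 0.83
# Busy hour: $1.50 of usage costs $0.83 + $0.50 overage = $1.33.
print(effective_hourly_cost(1.50, committed=1.00))  # 1.33
```

The two printed values reproduce the $0.83 floor and the $1.33 busy-hour charge worked through in the example.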
And now, with committed use discounts for Cloud Run, you benefit from predictable costs as well. To purchase a committed use discount for your Cloud Run services, open the Cloud Console Billing page, select the Commitments tab, then select the PURCHASE action at the top. Check out our documentation for more details on Cloud Run committed use discounts, and take a look at our pricing page for Cloud Run pricing information.

Related article: 4 new features to secure your Cloud Run services.

Next-generation serverless: three ways enterprises can benefit

As we reflect on the past year, Heraclitus’ phrase “The only constant in life is change” has never rung more true. With the pandemic, companies had to shift operations, launch new products, and adapt to extreme demand patterns, sometimes within a matter of weeks.

To respond to customer needs faster and more efficiently, many companies turned to serverless technology, designing applications with real-time signals and intelligence built in. From apps and sites for healthcare appointments and vaccinations, public-sector employment benefits, contact tracing, retail logistics, curbside delivery, and hotel and travel booking—you name it, companies built it with serverless.

Redefining serverless

The world changed, the market changed, our lives changed, and we here at Google Cloud also changed, introducing new products to meet our customers’ needs and grow with them. Serverless technology, in particular, has changed a lot since it was first introduced. Google first offered serverless compute in 2008 with the launch of App Engine, helping customers scale their applications quickly and seamlessly. We then added the ability to run Functions as a Service with Cloud Functions, giving customers a simple developer experience with integrated telemetry and observability. In parallel, we also introduced innovations to the container market with Kubernetes. Pretty soon, customers started asking us if we could combine the serverless attributes of auto-scaling and developer experience with the flexibility of containers. Enter Cloud Run, the next generation of serverless. Serverless is now no longer just about event-driven programming or microservices. It’s also about running complex workloads at scale while still preserving a delightful developer experience.
In fact, serverless with Cloud Run is about having a true developer platform with the flexibility to run any language, any library, any binary. Three capabilities make Cloud Run the next generation of serverless, and not the same ‘serverless’ you find elsewhere:

A great developer-centric experience

Versatility: expanding to a broader set of containerized apps

Built-in DevOps and security

Let’s take a look at each of these attributes in greater depth.

A great developer experience

Being developer-centric comes from having fully managed, self-operating infrastructure and a great developer experience. We want everyone to be able to develop smart applications, and for that we have to make it easy. We also want to bring your technical talent closer to where you generate your business value. To make things easy, last year we introduced buildpacks, which create container images directly from source code. There’s no need to learn Docker or containers; although there are containers underneath, they’re transparent to the developer. To simplify things further, we also introduced a single “gcloud run deploy” command to build and deploy code to Cloud Run. Features like these are some of the reasons why 98% of Cloud Run users deploy an application on their first try in less than 5 minutes. In the past year alone, we added over 25 new features and services to our serverless stack, making development of complex apps easier. One of our main launches was Workflows, which lets you combine Cloud Run with any Google Cloud product or any HTTP-based API service. As a developer, this is very useful when automating complex processes or integrating GCP’s analytics services across a variety of systems. Taken together, all these new features make the Cloud Run developer experience far easier than its competitors’, according to a recent report by User Research International.

Versatility

Next-generation serverless is also about versatility.
It supports a wider variety of applications and caters to enterprise requirements: functions and web apps, of course, but also heavyweight applications and, in the fullness of time, brownfield and third-party containerized apps. This versatility is enabled by the container primitive, which removes restrictions on languages, runtimes, and hardware. Being able to run a greater variety of apps on our serverless stack means you can optimize for predictable usage. Today, we announced new spend-based committed use discounts for Cloud Run. Enterprises with stable, steady-state, and predictable usage can now purchase committed use contracts directly in the billing UI. There are no upfront payments, so these discounts are a perfect way to reduce your spend by as much as 17%.

Related article: Maximize your Cloud Run investments with new committed use discounts.

Another way we provide versatility is with support for WebSockets and gRPC in Cloud Run. With these new additions, you get the benefits of serverless infrastructure when building responsive, high-performance applications. We also added min instances to Cloud Run. This feature lets you cut cold-start times and run latency-sensitive applications on Cloud Run. At the same time, you can still scale to zero, or keep a minimum amount of compute available, for example when running brownfield Java applications.

Built-in DevOps

Serverless doesn’t just make it faster for developers to set up their apps—it also helps once the application is up and running, taking a big management load off of operations teams. Notably, serverless systems take care of scaling an application up or down. That means that if your application suddenly starts fielding a lot of traffic, the serverless platform automatically spins up more resources to handle the load.
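The scale-up, scale-to-zero, and min-instances behavior described above can be pictured with a rough request-based scaling rule. This is an illustrative sketch, not the real Cloud Run autoscaler (which also weighs CPU utilization and startup behavior); the function name and default values are assumptions.

```python
import math

def target_instances(concurrent_requests: int, concurrency: int = 80,
                     min_instances: int = 0, max_instances: int = 100) -> int:
    """Rough sketch of request-based autoscaling: provision enough
    instances to serve the current concurrent requests at the
    configured per-instance concurrency, clamped to [min, max]."""
    needed = math.ceil(concurrent_requests / concurrency)
    return max(min_instances, min(needed, max_instances))

print(target_instances(0))                     # 0 -> scale to zero
print(target_instances(0, min_instances=2))    # 2 -> warm floor cuts cold starts
print(target_instances(1000, concurrency=80))  # 13
```

The min_instances parameter mirrors the min instances feature mentioned above: a warm floor for latency-sensitive apps while still allowing scale-down when it is zero.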
No more dreaded timeouts, wheels, or hourglasses—or work for your operations team. Likewise, as soon as demand goes down, the platform takes care of decommissioning resources, i.e., scaling down, so that you’re not paying for resources you no longer need. Want to run your service globally with low latency, without an operations team, and with zero stranded costs? Cloud Run takes care of global load balancing and autoscaling to zero for you in every Google Cloud region. Further, features like support for gradual rollouts and rollbacks, along with sophisticated traffic management in Cloud Run, allow developers to experiment and test ideas quickly. Likewise, Cloud Run provides access to distributed tracing with no setup or configuration, allowing developers to find performance bottlenecks in production.

Next up: serverless security

As part of DevOps best practices, we build in security for your serverless applications at every layer: deployment time, runtime, and networking. For example, built-in vulnerability scanning ensures you only deploy artifacts you trust. Today, we are announcing Cloud Run support for Google Secret Manager and customer-managed encryption keys (CMEK), making it easy to protect data at rest and store sensitive data. We’re also integrating Cloud Run with Binary Authorization, which lets you enforce specific policies to make sure only verified images make it to production. And finally, we added a new integration with Identity-Aware Proxy, support for VPC-SC, and egress controls that you can use to enforce a security perimeter, limiting both who can access specific services and what resources can be accessed when these services run in production. You can read more about these security enhancements here.
Related article: 4 new features to secure your Cloud Run services.

In summary, the next generation of serverless combines the best of serverless with containers to run a broad spectrum of apps, with no language, networking, or regional restrictions. It will help developers build the modern applications of tomorrow—applications that adapt easily to change, scale as needed, and respond to customers’ needs faster and more efficiently, all while providing the best developer experience. Learn more by attending The Power of Serverless, a two-hour virtual event where we’ll lay out our vision for serverless compute and where serverless subject-matter experts will present in-depth serverless development topics. Hope to see you there! Want to learn even more about serverless and cloud-native application development? Check out the upcoming Modern App Dev & Delivery workshop and our Ask the Experts roundtable.

4 new features to secure your Cloud Run services

Cloud Run makes developing and deploying containerized applications easier for developers. At the same time, Cloud Run services need to be secure. Today, we’re announcing several new ways for you to secure your Cloud Run environments:

1. Mount secrets from Google Secret Manager
2. Use Binary Authorization to ensure you only deploy trusted container images
3. Use your own encryption keys
4. Get recommendations based on the principle of least privilege

Let’s take a closer look at each of these new features.

1. Mount secrets from Google Secret Manager

You might have previously stored API keys, passwords, certificates, and other sensitive data in environment variables, but this is not considered a best practice. More security-conscious customers store sensitive data in Secret Manager, Google Cloud’s secure system for storing API keys, passwords, certificates, and other sensitive data. However, accessing these secrets from a Cloud Run service traditionally required developers to use client libraries. Now, you can mount secrets from Secret Manager as environment variables or file system volumes in your Cloud Run services, from the command line:

gcloud beta run services update my-service --update-secrets API_KEY=api-key:1

or from the Google Cloud Console. Secret Manager features fine-grained IAM permissions: by default, Cloud Run services do not have permission to access Secret Manager secrets; access must be specifically granted. Integration with Secret Manager also allows you to encrypt secrets with your own encryption keys. Learn more about Cloud Run integration with Secret Manager in this video.

2. Binary Authorization for Cloud Run

Containers are the industry-standard way to package and run software, and Cloud Run enables developers to deploy and scale containers in a fully managed environment.
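Tying back to the Secret Manager integration in item 1: once a secret is mounted, application code can read it without any client library, either from an environment variable or from a volume-mounted file. A minimal sketch, where the variable name and mount path are illustrative assumptions:

```python
import os

def read_secret(env_var: str = "API_KEY",
                mounted_path: str = "/secrets/api-key") -> str:
    """Read a secret exposed to the container, preferring the env var
    and falling back to a volume-mounted file. Both the variable name
    and the path are hypothetical examples."""
    value = os.environ.get(env_var)
    if value is not None:
        return value
    with open(mounted_path) as f:
        return f.read().strip()

os.environ["API_KEY"] = "s3cr3t"   # simulate Cloud Run injecting the secret
print(read_secret())               # s3cr3t
```

The same code runs unchanged whether the secret arrives as an env var or a file, which is the point of the mount-based integration.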
Today, we’re introducing Cloud Run integration with Binary Authorization. With Binary Authorization, security administrators can define and enforce policies about the containers being deployed to Cloud Run. For example, organization administrators can require that Binary Authorization be used on Cloud Run services in a certain project. This process is transparent to developers, who can deploy to Cloud Run as usual, confident that the container will meet any defined policies. With Binary Authorization and Cloud Run, you can enforce that a container:

was created using an approved build system
was approved by the QA team
was analyzed for vulnerabilities

Mike Helmick, Principal Software Engineer, COVID-19 Exposure Notifications, uses Binary Authorization alongside Cloud Run, which he credits for improving their compliance posture: “As part of our efforts to combat COVID-19, Google built the exposure notifications key and verification servers. We leverage Binary Authorization for Cloud Run to ensure that the servers are always built with our trusted Cloud Build builders, give us an audit trail, and ensure that the code that is running is what we intend.”

3. Customer-managed encryption keys

By default, container images deployed to Cloud Run are encrypted at rest with Google-managed encryption keys. Customers with higher security needs might want more control over the encryption of their resources at rest to improve their confidence in using the public cloud, and a container image is such a resource. Today, we are introducing a preview of customer-managed encryption keys for Cloud Run. This capability allows you to protect container images using encryption keys that you manage within Cloud Key Management Service (KMS). The keys in Cloud KMS are called customer-managed encryption keys (CMEK). When you protect data in Cloud Run with CMEK, the key is within your control, not Google’s.
This means that when these keys are disabled or destroyed, no one (including Google) can access the data protected by them, putting protection in your control. Support for CMEK in Cloud Run is particularly important for regulated industries such as financial services. Just as Artifact Registry uses CMEK to protect the storage of container images, Cloud Run’s support of CMEK extends this protection to container images that are deployed into production, protecting the container across the build, deploy, and run process. You can get started with Binary Authorization or customer-managed encryption keys from the command line or the Cloud Console. Further, you can enforce the use of Binary Authorization or customer-managed encryption keys on all Cloud Run services in your projects using an Organization Policy constraint.

4. Get recommended best practices

By default, Cloud Run services run with the same identity as Compute Engine VMs: the default Compute service account. This is a great way to get started quickly without bothering with permissions. At the same time, it’s a best practice to grant your Cloud Run service only the permissions it needs to operate. Starting today, you will see recommendations in Recommendation Hub to create dedicated service accounts for your Cloud Run services with minimal sets of permissions. Look for a blue light bulb icon in the Cloud Run user interface, or find recommendations to improve the security of your Cloud Run services in the Recommendations tab of the Cloud Console home page.

Get started today

These new security capabilities for Cloud Run join a long list of other security-related features, such as making services internal by restricting their ingress, and integrations with Identity-Aware Proxy and Cloud Armor. We hope these new features will help you adopt Cloud Run without compromising on security best practices.
You can start to secure your Cloud Run services today by following recommendations, enabling Binary Authorization, or moving sensitive data to Secret Manager.

Related article: Maximize your Cloud Run investments with new committed use discounts.

The cloud developer’s guide to Google I/O 2021

Google I/O 2021 may look a little different this year, but don’t worry, you’ll still get the same first-hand look at the newest launches and projects coming from Google. Best of all, it’s free and available to all (virtually) on May 18-20! You probably already know that Google I/O is where the world first finds out about the latest releases for Google Assistant, Maps, Android, and Workspace. In recent years, Google Cloud has taken the I/O stage as well, with our own show-and-tell ahead of our annual conference, Next. Our teams love I/O as a way to connect with developer communities, whether you’re new to the platform or need to secure your apps and data on Google Cloud. This year, there’s a lot to look forward to from Cloud during I/O, and I’m here to guide you through the most inspiring sessions and events. Many of my teammates in Developer Relations and Product Management are leading these sessions, and from the work I’ve seen, you won’t be disappointed with what they have in store.

To start I/O off, two keynotes will give you an overview of the themes and most notable launches Google and Google Cloud have this year: Sundar Pichai’s I/O Keynote and Jason Titus’ Developer Keynote. Pro tip: add them to your calendar by clicking the calendar icon on each event card. In the schedule, you’ll also find dozens of sessions spanning Google Maps, Firebase, Flutter, and more. To find sessions related to Cloud, use the filter in the top right. A central theme this year is machine learning and artificial intelligence, so let’s start with some of those sessions.

AI

What’s new in machine learning

I won’t spoil exactly what will be covered in this session, but you are not going to want to miss it! Our product and engineering directors will cover some exciting announcements around ML, including developer tooling for creating, understanding, and deploying models for a variety of applications.
From responsible AI to TensorFlow 2.5, mobile devices, microcontrollers, and beyond, you’ll be the first to learn about our latest releases. You’ll also hear about how to enable an end-to-end ML pipeline.

AI/ML demo derby

Demo derbies are rich product demos in quick succession. This is your chance to see our newest products in action. Our demo champions are Zack Akil, Markku Lepisto, and Kaz Sato, who have done incredible work creating imaginative demos using ML. In the past, they’ve created ML models to analyze live soccer kicks, improve your athletic form, read PDF files as audiobooks, and even create a rock-paper-scissors machine. In this session, they’ll show you three quick and fun demos of what developers can accomplish with Cloud AI.

Build end-to-end AI solutions with Google Cloud

If you’ve been paying attention to ML at Google Cloud, you might recognize Developer Advocate Sara Robinson from her work creating brand new baking recipes using ML and live-coding an ML model from scratch. In this session, you’ll learn how to use Google Cloud to build, train, and deploy scalable AI applications. From raw data to deployed model, Sara will cover each step in the ML process. You’ll leave this session ready to accelerate your own ML projects with Cloud AI.

Spotting and solving everyday problems with machine learning

Whether you’re new to ML or well-seasoned, a common sticking point comes early in the process: deciding when to use ML and figuring out the quickest way to integrate it into your app. Dale Markowitz, another accomplished Developer Advocate at Google Cloud, has integrated ML into everyday life, like helping her choose an outfit, turning PDFs into audiobooks, and creating a video archive for Father’s Day. In this session, she’ll teach you how to spot the most common ML use cases—analyzing multimedia, building smart search, and transforming data—and how to quickly build them into your app with user-friendly tools.
She’ll use ML to intelligently search through videos and articles, analyze your tennis serve, translate and dub videos, and more, using tools like Google Cloud, Firebase, and TensorFlow.

Serverless

Serverless has become the next-generation cloud offering, defined by both its operational model (no infrastructure management, managed security, pay for usage) and its programming model (service-based, event-driven, uses open source). This year, it’s taken center stage as Google Cloud redefines the space through developer-centric design and versatility. Here are my top picks for serverless:

Go full-stack with Kotlin or Dart on Google Cloud

Developers using Kotlin or Dart to create mobile, desktop, or web apps can take a full-stack approach to building backends that run on Google Cloud. My colleagues Tony Pujals, Grant Timmerman, and James Ward will show you how to take advantage of Cloud Run with Kotlin or Dart to create high-performance, autoscaling, event-driven services without creating servers or managing infrastructure.

Serverless demo derby

Join Abby Carey, Sara Ford, and Katie McLaughlin for three rapid demos showcasing the art of the possible with our serverless computing solutions. They’ve each been working on original demos recently, like using Cloud Code to maximize developer productivity or running stress tests against Cloud Run. In this session, they’ll demonstrate how to use buildpacks to create containers, how to use Cloud Code to deploy Cloud Run services, and how to use Visual Studio Code to debug Cloud Functions.

Dev to prod in 3 easy steps with Google Cloud Run

In this workshop, my teammate Marc Cohen will run a simple web app in three ways: locally, in a Docker container, and on Google Cloud.
You’ll see how easy it is to cross those boundaries without changing your code.

Cross Product

Developing AppSheet with Workspace AMP for Gmail and Apps Script apps

If you’ve been following my work lately, I’ve written about automation in the application development space. AppSheet is our platform for reducing development hours through a no-code workflow for creating bots and integrations. It now supports seamless integration with Workspace technologies, including AMP for Email and Apps Script. Developer Advocate Christian Schalk will demo how to use AppSheet with AMP to send out emails with form controls, so recipients can submit responses back to the AppSheet app without leaving their email context (no app required). The second demo will show how to connect an AppSheet app with existing Apps Script functions, allowing for sophisticated Workspace automations.

Exposure notifications: Building infrastructure to serve 1M users

The Exposure Notifications project is a joint effort by Google and Apple to help governments and global communities fight the pandemic through digital contact tracing using iOS and Android devices. Seth Vargo will give you a guided tour of the backend code and infrastructure systems that support this global program, all of which is open source on GitHub. You’ll learn how a team of six engineered a secure, rapidly deployable, and globally available system using only Google Cloud services.

While there are many more sessions to check out, don’t forget you can explore content by topic and level (beginner, intermediate, and advanced) on the I/O Discover page.

Workshops and AMAs

Be sure to also take a look at the Workshops and AMAs page, and filter by Cloud (top right). There you’ll find workshops for AI, building chatbots with Dialogflow, and AMAs for full-stack Flutter and more.

Codelabs and learning

You can grow your knowledge with hands-on codelabs and learning pathways.
These are self-guided learning experiences that help you adopt our products. Plus, you can earn Google Developer profile badges for completing these lessons.

Adventure

I/O will have a virtual Shoreline where you can “walk around” and explore each product area’s sandbox dome. Here you’ll be able to see additional content, like videos, codelabs, round tables, and our Rainbow Rumpus competition. In this competition, get to know Google Cloud by deploying a microservice on Cloud Run and joining a virtual rumpus where your microservice throws “rainbows” at other microservices, competing to win some swag. You’ll get hands-on experience deploying Kotlin, Java, Go, Python, or Node.js microservices, learning about containers and Cloud Run along the way.

Community lounge

There are also a handful of events at the Community Lounge where you can connect with other Cloud developers in meetups for Women Techmakers, Google Cloud certifications, regional developers (in various languages), career development, open source, and more. Keep in touch with me during I/O online at @stephr_wong. See you there!

Related article: A handy new Google Cloud, AWS, and Azure product map.

Translation API Advanced can translate business documents across 100+ languages

Translation is critical to many developers and localization providers, whether you’re releasing a document, a piece of software, training materials, or a website in multiple languages. Companies acquire and share content in many languages and formats, and scaling translation to meet this need is a tall order due to the variety of document formats, integrations with OCR, and the need to correct for domain terminology. Now, developers can use machine learning to translate faster and more efficiently than ever with Google Cloud’s flagship Translation products.

Today, we’re excited to announce a new feature for Google Cloud’s Translation services: Document Translation, now in preview for Translation API Advanced. This feature allows customers to directly translate documents in 100+ languages and formats such as DOCX, PPTX, XLSX, and PDF while preserving document formatting.

One company that’s pulling all this together is Welocalize. It uses Translation API Advanced to translate hundreds of millions of words per year using machine translation in widely disparate enterprise customer scenarios like multimedia, e-learning, and localization. “Google Cloud’s Translation API has helped us enforce broad terminology coverage for customers with sparse data, providing highly accurate translations for their documents. Translation API’s pre-trained models have allowed us to deliver real-time, on-demand translation, reducing lag so that our end users can get content in their language in seconds.” – Olga Beregovaya, VP Language Services, Welocalize

Get real-time online translation in seconds

Traditional businesses may use batch translation for their translation needs, but some companies require more immediate time to value. One of the biggest differentiators of Translation API Advanced’s document feature is the ability to do real-time (synchronous), online translation of a single file.
For example, if you are translating a business document such as human resources (HR) documentation, online translation provides flexibility for customers who have smaller files and want faster results. You can easily integrate with our APIs via REST or gRPC from mobile or browser applications, with instant access to 100+ language pairs so that content can be understood in any supported language. The figure below shows the workflow in which documents are translated with Translation API Advanced.

Use AutoML Translation to build custom translation models

Instead of the Google-managed model, you can also use your own AutoML Translation models to translate documents. The new Document Translation feature translates business documents quickly and easily with our state-of-the-art translation models, and it combines with other Translation API Advanced features so you can control custom translations through a glossary or through models you have trained in AutoML. Translation API’s glossary feature helps maintain brand names in translated content: you define the names and vocabulary in your source and target languages, then save the glossary file to your translation project. Those words and phrases are then automatically applied in the translated output of your request.

Our full Translation portfolio includes Translation API (Basic and Advanced) for those who want to use pre-trained models for common use cases such as chat applications, social media, and gaming. We also offer AutoML Translation to help businesses build high-quality, production-ready custom translation models without writing a single line of code.

This is just the latest example of how Google is continuing to drive AI-powered innovation in extracting structured data from unstructured sources. With Document AI, we brought this technology to some of the largest document-based workflows in the world through data extraction and classification.
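The glossary and custom-model options described above can be sketched as a `translate_text` request in the google-cloud-translate v3 Python client, where both a trained AutoML Translation model and a glossary are referenced by resource name. The project ID, location, glossary ID, and model ID below are placeholders for illustration, not values from the post.

```python
def build_custom_translation_request(project_id: str, texts: list,
                                     source_lang: str, target_lang: str,
                                     glossary_id: str, model_id: str) -> dict:
    """Assemble the request dict for TranslationServiceClient.translate_text()
    (google-cloud-translate v3), combining an AutoML custom model with a
    glossary. Resource IDs here are hypothetical placeholders."""
    location = f"projects/{project_id}/locations/us-central1"
    return {
        "parent": location,
        "contents": texts,
        "mime_type": "text/plain",
        "source_language_code": source_lang,
        "target_language_code": target_lang,
        # Custom model previously trained in AutoML Translation.
        "model": f"{location}/models/{model_id}",
        # Glossary enforces fixed renderings, e.g. keeping brand names intact.
        "glossary_config": {"glossary": f"{location}/glossaries/{glossary_id}"},
    }

request = build_custom_translation_request(
    "my-project", ["Welcome to Acme Cloud"], "en", "fr",
    glossary_id="brand-terms", model_id="my-automl-model")
```

With a request like this, any term defined in the glossary (such as a product name) is rendered exactly as specified in the target language, while the custom model handles the rest of the text.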
And now, with document support for Translation API Advanced, we’re delivering document processing solutions to help you translate your business documents at scale.

More Cloud Translation resources

Learn more about our Cloud Translation services on our website. For a technical review of how to use this feature, view the documentation.