Know before you go: Google Cloud Next

Google Cloud Next is one day away. Don't miss out on the latest news, product announcements, and predictions from Google that will shape the cloud of tomorrow. Sign up now if you haven't registered, so you can:

Join the Opening Keynote with Google and Alphabet CEO Sundar Pichai and Google Cloud CEO Thomas Kurian tomorrow at 9 AM PDT, kicking off 24 hours of live broadcasts from New York, Sunnyvale, Tokyo, Bengaluru, and Munich.

Browse the Google Cloud Next session catalog for curated content by track: Build for application developers, Analyze for data analysts and scientists, Design for data engineers, Modernize for enterprise architects and developers, Operate for DevOps, sysadmins, and operations, Secure for security professionals, Collaborate for business leaders and IT administrators, and Innovate for executive and technology business leaders. Once you're registered, you can create your playlists with live broadcasts and 125 on-demand sessions. You can also check out Curated by Google playlists to discover which sessions some of today's brightest minds are excited to attend.

Watch Innovators Hive at Google Cloud Next, the event experience designed for developers and technical practitioners, on demand on Day 3 from Sunnyvale, Munich, Bengaluru, and Tokyo (where it is presented live in Japanese).

Get ready to unlock Google Cloud training and certification offerings and other exciting opportunities that will be announced at Next '22.

The cloud event of the year starts tomorrow, and it's going to be big. Register now.
Source: Google Cloud Platform

Streamline your models to production with the Vertex AI Model Registry

Machine learning (ML) is iterative in nature — model improvement is a necessity to drive the best business outcomes. Yet, with the proliferation of model artifacts, it can be difficult to ensure that only the best models make it into production.

Data science teams may get access to new training data, expand the scope of their use cases, implement better model architectures, or simply make adjustments as the world around their models constantly changes. All of these scenarios require building new versions of models to be released into production, and with the addition of new versions, it matters to be able to manage, compare, and organize them. Moreover, without a central place to manage your models at scale, it is difficult to govern model deployment with appropriate gates on release and maintenance that comply with industry standards and regulations. To address these challenges, today we are excited to announce the general availability (GA) of the Vertex AI Model Registry.

Fig. 1 – Vertex AI Model Registry – Landing page

With the Vertex AI Model Registry, you have a central place to manage and govern the deployment of all of your models, including BigQuery ML, AutoML, and custom models. You can use the Vertex AI Model Registry at no charge; the only costs you incur are when you deploy a model to an endpoint or run a batch prediction.

Vertex AI Model Registry offers key benefits to build a streamlined MLOps process:

Version control and ML metadata tracking to guarantee reproducibility across different model versions over time.

Integrated model evaluation to validate and understand new models using evaluation and explainability metrics.

Simplified model validation to enhance model release.

Easy deployment to streamline models to production.

Unified model reporting to ensure model performance.

Version control and ML metadata tracking to guarantee model reproducibility

Vertex AI Model Registry lets you simplify model versioning and track all model metadata to guarantee reproducibility over time. With the Vertex AI SDK, you can register custom models, all AutoML models (text, tabular, image, and video), and BQML models. You can also register models that you trained outside of Vertex AI by importing them into the registry.

Fig. 2 – Vertex AI Model Registry – Versioning view

In Vertex AI Model Registry, you can organize, label, evaluate, and version models. The registry gives you a wealth of model information at your fingertips, such as the model version description, model type, and model deployment status. You can also associate additional information, such as the team that built a particular version or the application the model is serving.

In the end, you can get a single picture of your models and all of their versions using the Model Registry console. You can drill down and get all the information about a specific model and its associated versions, so you can guarantee reproducibility across different model versions over time.

Integrated model evaluation to ensure model quality

Thanks to the integration with the new Vertex AI Model Evaluation service, you can now validate and understand your model versions using evaluation and explainability metrics. This integration allows you to quickly identify the best model version and audit the quality of the model before deploying it to production. For each model version, the Vertex AI Model Registry console shows classification, regression, or forecasting metrics depending on the type of model.

Fig. 3 – Vertex AI Model Registry – Model Evaluation view
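The registration and versioning workflow described above can also be driven from code. Below is a minimal, hedged sketch using the Vertex AI SDK for Python; the project, bucket, model names, and container image are placeholders, and the exact argument names may vary slightly across SDK versions.

    from google.cloud import aiplatform

    # Initialize the SDK against your project and region (placeholders).
    aiplatform.init(project="my-project", location="us-central1")

    # Upload a new version of an already-registered model into the Model Registry.
    # `parent_model` points at the existing registry entry; this upload becomes a
    # new version of that model rather than a brand-new model.
    model_v2 = aiplatform.Model.upload(
        display_name="churn-classifier",
        artifact_uri="gs://my-bucket/churn-model/v2/",  # exported model artifacts
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
        parent_model="projects/my-project/locations/us-central1/models/1234567890",
        is_default_version=False,          # keep the current default version as-is
        version_aliases=["staging"],       # custom lifecycle alias
        version_description="Retrained on September data",
    )

    print(model_v2.resource_name, model_v2.version_id)

With this pattern, every retraining run lands in the registry as a tracked, aliased version instead of an anonymous artifact in a bucket.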
Simplified model validation to improve model release

In an MLOps environment, automation is critical for ensuring that the correct model version is used consistently across all downstream systems. As you scale your deployments and expand the scope of your use cases, your team will need solid infrastructure for flagging that a particular model version is ready for production.

In Vertex AI Model Registry, aliases are uniquely named references to a specific model version. When you register a new model, the first version is automatically assigned the default alias. You can then create and assign custom aliases to your models depending on how you decide to organize your model lifecycle. For example, aliases could capture the stage of the review process (not started, in progress, under review, approved) or the status of the model lifecycle (experimental, staging, or production).

Fig. 4 – Vertex AI Model Registry – Aliases view

In this way, the Model Registry simplifies the entire model validation process by making it easy for downstream services, such as model deployment pipelines or model serving infrastructure, to automatically fetch the right model.

Easy deployment to streamline models to production

After a model has been trained, registered, and validated, it is ready to be deployed. With Vertex AI Model Registry, you can easily productionize all of your models (BigQuery ML models included) with point-and-click model deployment, thanks to the integration with Vertex AI Endpoints and Vertex AI Batch Predictions. In the Vertex AI Model Registry console, you select the approved model version, define the endpoint, and specify the model deployment and model monitoring settings. Then you deploy the model. After the model has been successfully deployed, its status is automatically updated in the models view and it is ready to generate both online and batch predictions.

Fig. 5 – Vertex AI Model Registry – Model Deployment

Unified model reporting to ensure model performance

A deployed model keeps performing well only as long as the input data remains similar to the training data. Realistically, data changes over time and model performance degrades, which is why model retraining is so important. Typically, models are retrained at regular intervals, but ideally models should be continuously evaluated with new data before any retraining decisions are made.

With the integration of Vertex AI Model Evaluation, now in preview, after you deploy your model you define a test dataset and an evaluation configuration as inputs. In turn, it returns model performance and fairness metrics directly in the Vertex AI Model Registry console. Looking at those metrics, you can determine when the model needs to be retrained based on the data you record in production. These are important capabilities for model governance, ensuring that only the freshest, most accurate models are used to drive your business forward.

Fig. 6 – Vertex AI Model Registry – Model Evaluation comparison view

Conclusion

The Vertex AI Model Registry is a step forward for model management in Vertex AI. It provides a seamless user interface that shows you all of the models that matter most to you, free of charge, with at-a-glance metadata to help you make business decisions. In addition to a central repository where you can manage the lifecycle of your ML models, it introduces new ways to work with models you've trained outside of Vertex AI, like your BQML models.
It also provides model comparison functionality via the integration with our Model Evaluation service, which makes it easy to ensure that only the best and freshest models are deployed. Additionally, this one-stop view improves governance and communication across all stakeholders involved in the model training and deployment process. With all these benefits of the Vertex AI Model Registry, you can confidently move your best models to production faster.

Want to learn more? Visit our other resources: the Vertex AI Model Registry documentation, the BQML Model Registry documentation, and the Vertex AI Model Evaluation documentation.

Want to dive right in? Check out some of our notebooks, where you can get hands-on practice: Get started with Vertex AI Model Registry; Get started with Model Governance with Vertex AI Model Registry; Deploy BigQuery ML Model on Vertex AI Model Registry and Make Predictions; Get started with Vertex AI Model Evaluation.

Special thanks to Ethan Bao, Shangjie Chen, Marton Balint, Phani Kolli, Andrew Ferlitch, Katie O'Leary, and the entire Vertex AI Model Registry team for their support and great feedback.
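To give a flavor of what those hands-on notebooks cover, here is a short, hedged sketch of how a downstream pipeline might fetch whichever version currently holds the "production" alias and deploy it; all resource names and the example prediction payload are placeholders, and the "@alias" addressing assumes the registry behavior described above.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Address a model version by alias: "<model resource name>@<alias>".
    # A deployment pipeline can always pull whatever version holds the
    # "production" alias, without hard-coding a version number.
    prod_model = aiplatform.Model(
        "projects/my-project/locations/us-central1/models/1234567890@production"
    )

    # Deploy the aliased version to an endpoint for online predictions.
    endpoint = prod_model.deploy(
        deployed_model_display_name="churn-classifier-prod",
        machine_type="n1-standard-4",
        min_replica_count=1,
    )

    # Placeholder request; the instance schema depends on your model.
    prediction = endpoint.predict(instances=[{"tenure": 12, "monthly_charges": 70.5}])
    print(prediction.predictions)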
Source: Google Cloud Platform

Built with BigQuery: How Tinyclues and Google Cloud deliver the CDP capabilities that marketers need

Editor's note: This post is part of a series highlighting our awesome partners, and their solutions, that are Built with BigQuery.

What are Customer Data Platforms (CDPs) and why do we need them?

Today, customers use a wide array of devices when interacting with a brand. As an example, think about the last time you bought a shirt. You may start with a search on your phone as you take the subway to work, and during that 20-minute ride you narrow down the type of shirt. Later, on your lunch break, you spend a few more minutes refining your search on your work laptop and find two shirt models of interest. Pressed for time, you add both to your shopping cart at an online retailer to review later. Finally, after you arrive back home and check your physical mail, you stumble across a sales advertisement for the type of shirt you are looking for, available at your local brick-and-mortar store. The next day you visit that store during your lunch break and purchase the shirt.

Many marketers face the challenge of creating a consistent 360-degree customer view that captures a customer lifecycle like the one illustrated above, including the online and offline journey and interactions with multiple data points across multiple data sources.

The evolution of managing customer data reached a turning point in the late '90s with CRM software that sought to match current and potential customers with their interactions. Later, as a backbone of data-driven marketing, Data Management Platforms (DMPs) expanded the reach of data management to include second- and third-party datasets, including anonymous IDs. A Customer Data Platform combines these two types of systems, creating a unified, persistent customer view across channels (mobile, web, and so on) that provides data visibility and granularity at the individual level.

A new approach to empowering marketing heroes

Tinyclues is a company that specializes in empowering marketers to drive sustainable engagement from their customers and generate additional revenue, without damaging customer equity. The company was founded in 2010 on a simple hunch: B2C marketing databases contain sufficient amounts of implicit information (data unrelated to explicit actions) to transform the way marketers interact with customers, and a new class of algorithms based on deep learning (sophisticated machine learning that mimics the way humans learn) holds the power to unlock this data's potential. Where other players in the space have historically relied – and continue to rely – on a handful of explicit past behaviors and more than a handful of assumptions, Tinyclues' predictive engine uses all of the customer data that marketers have available to formulate deeply precise models, down even to the SKU level. Tinyclues' algorithms are designed to detect changes in consumption patterns in real time and adapt predictions accordingly.

This technology allows marketers to find precisely the right audiences for any offer during any timeframe, increasing engagement with those offers and, ultimately, revenue. Marketers are also able to increase campaign volume while decreasing customer fatigue and opt-outs, knowing that audiences receive only the most relevant messages.
Tinyclues' technology also reduces time spent building and planning campaigns by upwards of 80%, as valuable internal resources can be diverted away from manual audience-building. Google Cloud's data platform, spearheaded by BigQuery, provides a serverless, highly scalable, and cost-effective foundation to build this next generation of CDPs.

Tinyclues architecture

To enable this scalable solution for clients, Tinyclues receives purchase and interaction logs from clients in addition to product and user tables. In most cases, this data is already in the client's BigQuery instance, in which case it can easily be shared with Tinyclues using BigQuery authorized views. In cases where the data is not in BigQuery, flat files are sent to Tinyclues via Cloud Storage and are ingested into the client's dataset via a lightweight Cloud Function. The orchestration of all pipelines is implemented via Cloud Composer (Google's managed Airflow). Data transformation is accomplished with simple SELECT statements in dbt (data build tool), wrapped inside an Airflow DAG that powers all data normalization and transformations. There are several other DAGs that fulfill additional functions, including:

Indexing the product catalog on Elastic Cloud (the managed Elasticsearch service) on Google Cloud to provide auto-complete search capabilities to Tinyclues' clients.

Exporting Tinyclues-powered audiences to the clients' activation channels, whether they are using SFMC, Braze, Adobe, GMP, or Meta.

Tinyclues AI/ML pipeline powered by Google Vertex AI

Tinyclues' ML training pipelines are used to train the models that calculate propensity scores. They are composed using Airflow DAGs, powered by TensorFlow and Vertex AI Pipelines. BigQuery is used natively, without data movement, to perform as much feature engineering as possible in place. Tinyclues uses the TFX library to run ML pipelines in Vertex AI, building on TensorFlow as its main deep learning framework of choice due to its maturity, open source platform, scalability, and support for complex data structures (ragged and sparse tensors).

A partial example of Tinyclues' Vertex AI Pipeline graph illustrates the workflow steps in the training pipeline. This pipeline allows for the modularization and standardization of functionality into easily manageable building blocks. These blocks are composed of TFX components; Tinyclues reuses most of the standard components and customizes others, such as a proprietary implementation of the Evaluator that computes both ML metrics (part of the standard implementation) and business metrics such as overlap of clickers. The individual components and steps are chained with the DSL to form a pipeline that is modular and easily orchestrated or updated as needed.

With the trained TensorFlow models available in Cloud Storage, Tinyclues exposes them in BigQuery ML (BQML) to let clients score millions of users for their propensity to buy a given product within minutes. This would not be possible without the power of BigQuery, and it also frees Tinyclues from previously experienced scalability issues. As an illustration, Tinyclues needs to score thousands of topics among millions of users. This used to take north of 20 hours on the previous stack and now takes less than 20 minutes, thanks to the optimization work Tinyclues has implemented in its custom algorithm and the sheer power of BigQuery to scale to any workload.
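As a generic, hedged illustration of this pattern (not Tinyclues' production code), a TensorFlow SavedModel exported to Cloud Storage can be registered as a BQML model and scored entirely with SQL; the dataset, bucket, table, and column names below are placeholders, and the prediction column name depends on the model's serving signature.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    # Register an exported TensorFlow SavedModel as a BigQuery ML model.
    client.query(
        """
        CREATE OR REPLACE MODEL `my_dataset.propensity_model`
        OPTIONS (model_type='TENSORFLOW',
                 model_path='gs://my-bucket/exported_model/*')
        """
    ).result()

    # Score users in place: no data leaves BigQuery.
    # The output column name ("predicted_propensity" here) is a placeholder and
    # is actually determined by the SavedModel's output signature.
    scores = client.query(
        """
        SELECT user_id, predicted_propensity
        FROM ML.PREDICT(MODEL `my_dataset.propensity_model`,
                        (SELECT * FROM `my_dataset.user_features`))
        """
    ).result()

    for row in scores:
        print(row.user_id, row.predicted_propensity)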
Data Gravity: Breaking the Paradigm – Bringing the Model to your Data

BQML enables Tinyclues to call pre-trained TensorFlow models within a SQL environment, avoiding exporting data in and out of BigQuery and using already-provisioned BigQuery serverless processing power. Using BQML removes the layers between the models and the data warehouse and allows the entire inference pipeline to be expressed as a set of SQL requests. Tinyclues no longer has to export data to load it into their models; instead, they bring their models to the data.

Avoiding the export of data in and out of BigQuery, along with the serverless provisioning and startup of machines, saves significant time. As an example, exporting an 11M-line campaign for a large client previously took 15 minutes or more to process. Deployed on BQML, it now takes minutes, with more than half of the processing time attributed to network transfers to the client's system.

Inference times in BQML compared to Tinyclues' legacy stack: with this approach enabled by BQML, the reduction in the number of steps leads to a 50% decrease in overall inference time, improving upon each step of the prediction.

The proof is in the pudding

Tinyclues has consistently delivered on its promises of increased autonomy for CRM teams, rapid audience building, superior performance against in-house segmentation, identification of untapped messaging and revenue opportunities, fatigue management, and more, working with partners like Tiffany & Co., Rakuten, and Samsung, among many others.

Conclusion

Google's data cloud provides a complete platform for building data-driven applications like the headless CDP solution developed by Tinyclues, from simplified data ingestion, processing, and storage to powerful analytics, AI, ML, and data sharing capabilities, all integrated with the open, secure, and sustainable Google Cloud platform. With a diverse partner ecosystem, open-source tools, and APIs, Google Cloud can provide technology companies the portability and differentiators they need to serve the next generation of marketing customers.

To learn more about Tinyclues on Google Cloud, visit Tinyclues. Click here to learn more about Google Cloud's Built with BigQuery initiative. We thank the many Google Cloud team members who contributed to this ongoing data platform collaboration and review, especially Dr. Ali Arsanjani in Partner Engineering.
Source: Google Cloud Platform

Google Cloud Next for application developers: 5 can’t miss breakout sessions

Containers. Serverless. CI/CD. For forward-looking developers, Google Cloud is practically synonymous with the latest trends in application development. With Google Cloud Next starting on October 11, here are a few must-watch developer sessions to add to your playlist:

1. BLD106: What's next for application developers. Start your foray into Google Cloud application development news here, where Tom De Leo, Director, Product Management, Platform Developer Tools, will take you through all the new application developer services and features we are announcing at Next '22.

2. BLD201: Building a serverless event-driven web app in under 10 mins. Led by Google Cloud Developer Advocate Prashanth Subrahmanyam, this session takes a traditional monolithic use case, breaks it down into composable pieces, and builds an end-to-end application using Google Cloud's portfolio of serverless products.

3. BLD209: What's new in cloud-native CI/CD: speed, scale, security. Application development teams are increasingly embracing CI/CD. Join Google Cloud Product Manager David Jacobs and Software Engineer Edward Thiele to learn about new capabilities in Cloud Build, Artifact Registry, Artifact Analysis, and Google Cloud Deploy, and how they can help your teams deliver software to Cloud Run and Google Kubernetes Engine (GKE).

4. BLD300: What's new in Kubernetes: run batch and high performance computing in GKE. Speaking of GKE, did you know that it has emerged as a great place to deploy high performance computing workloads? Here, PGS Chief Enterprise Architect Louis Bailleul and Google Cloud Senior Product Manager Maciek Różacki share how PGS used GKE to replace its 260,000-core Cray supercomputers. The session will also cover recent feature launches in the data processing space for GKE and what's coming up on the roadmap.

5. BLD205: 5 reasons why your Java apps are better on Google Cloud. Why should you run your Java workloads on Google Cloud? Simple: Java Cloud Client Libraries now support Native Image out of the box. In this session, Google Cloud Senior Product Manager Cameron Balahan and Developer Advocate Aaron Wanjala show you how to compile your Java applications ahead of time, so you can dramatically speed up your cold start times.

Build your developer playlist

To explore the full catalog of breakout sessions and labs designed for application developers, check out the entire Build track in the catalog. And don't forget to tune into the Developer Keynote, presented live from the Next Innovators Hive in Sunnyvale on Tuesday, October 11 from 10:00 to 11:00 PT, and again from Bengaluru, Munich, and Tokyo. See the Innovators Hive page for local broadcast times. Register for Next '22.
Source: Google Cloud Platform

How to secure APIs against fraud and abuse with reCAPTCHA Enterprise and Apigee X

A comprehensive API security strategy requires protection from fraud and abuse. To better protect our publicly facing APIs from malicious software that engages in abusive activities, we can deploy CAPTCHAs to disrupt abuse patterns. Developers can prevent attacks, reduce their API security surface area, and minimize disruption to users by implementing Google Cloud's reCAPTCHA Enterprise and Apigee X solutions.

As Google Cloud's API management platform, Apigee X can help protect APIs using a reverse-proxy approach to HTTP requests and responses. One important feature of Apigee X is the ability to include a reCAPTCHA Enterprise challenge in the authentication (AuthN) stage of the request. This post shows how to provision a reCAPTCHA proxy flow to protect your APIs. Complete code samples are available in this GitHub repo.

When and why to use Apigee X for implementing CAPTCHAs

The initial way to use reCAPTCHA Enterprise as part of a Web Application and API Protection (WAAP) solution is through Cloud Armor. For developers who want a purely API-based solution, Apigee X allows them to define the reCAPTCHA process as a set of Apigee X proxy flows. As a dedicated solution, it moves as much API security code as possible into Apigee. This method can also make code maintenance easier and allows API business rules to be managed in code. The reCAPTCHA process can be included directly in Apigee proxies, either individually or as shared flows. This code can then be added to the same source control as all the Apigee proxy code, in line with the API business rules.

Let's first review a few implementations of reCAPTCHA Enterprise, and then contrast those with an Apigee X implementation example to see which might be best for you.

An introduction to reCAPTCHA Enterprise

A reCAPTCHA challenge page can redirect incoming HTTP requests to reCAPTCHA Enterprise, which can help stop possible malicious attacks. When reCAPTCHA Enterprise is integrated with Cloud Armor and the challenge page option is selected, a reCAPTCHA is triggered when a Cloud Armor policy rule matches the incoming URL or traffic pattern. To avoid CAPTCHA fatigue (mouse-click fatigue due to too many CAPTCHA challenges), developers should consider using reCAPTCHA session-tokens, which we explain in more detail below.

A challenge page is most useful for dealing with a bot making repeated programmatic HTTP requests. The challenge page redirect and possible reCAPTCHA challenge can stop malicious bots. However, the challenge page can also interrupt a legitimate user's activity; a reCAPTCHA challenge page is less desirable for a well-intended human user. For more details, please check out the reCAPTCHA challenge page documentation.

To protect important user interactions, reCAPTCHA Enterprise uses an object called an action-token. Action-tokens help protect human users and their legitimate interactions, such as shopping cart checkouts or sensitive knowledge base requests that you want to safeguard. A deeper review can be found in the reCAPTCHA action-tokens documentation.

As an alternative to action-tokens, session-tokens protect the whole user session on the site's domain. This can help developers reuse an existing reCAPTCHA Enterprise assessment, which is analogous to a session key, but for authentication rather than encryption. It is recommended to use a reCAPTCHA session-token on all the web pages of your site.
This enables reCAPTCHA Enterprise to secure your entire site and recognize deviations in human browsing patterns, such as a bot crawling your site. For more details, please check out the reCAPTCHA session-tokens documentation.

Using Apigee X and reCAPTCHA Enterprise

All of the above can also be accomplished in Apigee X, without the need for Cloud Armor. Code for an Apigee X flow that initiates a reCAPTCHA Enterprise challenge is shown below, and is also available in our GitHub repo as SC-AccessReCaptchaEnterprise.xml.

    <ServiceCallout name="SC-AccessReCaptchaEnterprise">
      <Request>
        <Set>
          <Payload contentType="application/json">{
      "event": {
        "token": "{flow.recaptcha.token}",
        "siteKey": "{flow.recaptcha.sitekey}"
      }
    }</Payload>
          <Verb>POST</Verb>
        </Set>
      </Request>
      <Response>recaptchaAssessmentResponse</Response>
      <HTTPTargetConnection>
        <Authentication>
          <GoogleAccessToken>
            <Scopes>
              <Scope>https://www.googleapis.com/auth/cloud-platform</Scope>
            </Scopes>
          </GoogleAccessToken>
        </Authentication>
        <URL>https://recaptchaenterprise.googleapis.com/v1/projects/{flow.recaptcha.gcp-projectid}/assessments</URL>
      </HTTPTargetConnection>
    </ServiceCallout>

The most important part is the initiation of the reCAPTCHA handshake with a POST request. The POST request includes both the reCAPTCHA token (either an action-token or a session-token, discussed above) and the reCAPTCHA sitekey (how reCAPTCHA Enterprise protects your API endpoint):

    <Request>
      <Set>
        <Payload contentType="application/json">{
      "event": {
        "token": "{flow.recaptcha.token}",
        "siteKey": "{flow.recaptcha.sitekey}"
      }
    }</Payload>
        <Verb>POST</Verb>
      </Set>
    </Request>

Here is an explanation of the proxy definitions included in the GitHub repo. A reCAPTCHA token is silently and periodically retrieved by a client app and transmitted to an Apigee runtime when an API is invoked. The sf-recaptcha-enterprise-v1 Apigee X shared flow gets a reCAPTCHA token validation status and a risk score from the Google reCAPTCHA Enterprise assessment endpoint. The risk score is a decimal value between 0.0 and 1.0: a score of 1.0 indicates that the interaction poses low risk and is very likely legitimate, whereas 0.0 indicates that the interaction poses high risk and might be fraudulent. Between both extremes, the shared flow's processing decides whether an API invocation must be rejected. For the purpose of this reference, we consider a minimum score of 0.6; this value is configurable and can be set higher or lower depending on the risk profile of the client application.

The pipeline script deploys the shared flow (sf-recaptcha-enterprise-v1) on Apigee X, containing the full configuration of the reCAPTCHA Enterprise reference, as well as the following artifacts:

recaptcha-data-proxy-v1: a data proxy, which calls the reCAPTCHA Enterprise shared flow. The target endpoint of this proxy is httpbin.org.

recaptcha-deliver-token-v1: an API proxy used to deliver an HTML page that includes a valid reCAPTCHA token (cf. Option 2 above).
This proxy is not intended for production use, only for test phases.

The reCAPTCHA Enterprise API product.

A developer (Jane Doe).

app-recaptcha-enterprise: a single developer app, when Option 1 has been selected.

Two developer apps with real app credentials and reCAPTCHA Enterprise sitekeys, when Option 2 has been selected: app-recaptcha-enterprise-always0 and app-recaptcha-enterprise-always1.

Google Cloud's Web App and API Protection (WAAP) solution

This implementation is part of Google Cloud's WAAP solution. Google's WAAP security solution stack is a comprehensive offering that integrates a web application firewall (WAF), DDoS prevention, bot mitigation, a content delivery network, Zero Trust, and API protection. The Google Cloud WAAP solution consists of Cloud Armor (for DDoS and web app defense), reCAPTCHA Enterprise (for bot defense), and Apigee (for API defense). It is a set of tools and controls designed to protect web applications, APIs, and associated assets. Learn more about the WAAP solution here.

Google's WAAP security solution is driven by the following principles:

Safe by default: build on tested and proven components and code.

Detect risky functionality: new code should be reviewed, bypassing safe patterns should be justified, and high-risk activities should be scrutinized.

Automate: if you do it more than once, automate it.

What's next

Give it a try and test out the reCAPTCHA Enterprise Apigee proxy flow code for yourself. An existing reCAPTCHA token and sitekey are required, so please acquire those first. When you are ready, you can explore all of Apigee X's security features in the following documentation: Securing a proxy and Overview of Advanced API Security.
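If you want to experiment with the same assessment call outside of Apigee first, here is a minimal, hedged sketch using the reCAPTCHA Enterprise Python client; the project ID, site key, and token are placeholders, and the 0.6 threshold simply mirrors the reference value discussed above.

    from google.cloud import recaptchaenterprise_v1

    def assess_token(project_id: str, site_key: str, token: str) -> bool:
        """Create a reCAPTCHA Enterprise assessment and apply a 0.6 score threshold."""
        client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()

        event = recaptchaenterprise_v1.Event(site_key=site_key, token=token)
        assessment = recaptchaenterprise_v1.Assessment(event=event)

        response = client.create_assessment(
            request=recaptchaenterprise_v1.CreateAssessmentRequest(
                parent=f"projects/{project_id}",
                assessment=assessment,
            )
        )

        # Reject the call if the token itself is invalid (expired, duplicated, malformed).
        if not response.token_properties.valid:
            print("Invalid token:", response.token_properties.invalid_reason)
            return False

        # 1.0 = very likely legitimate, 0.0 = very likely abusive.
        print("Risk score:", response.risk_analysis.score)
        return response.risk_analysis.score >= 0.6

    # Example (placeholders):
    # allowed = assess_token("my-project", "my-site-key", "token-from-client")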
Source: Google Cloud Platform

Analyzing satellite images in Google Earth Engine with BigQuery SQL

Google Earth Engine (GEE) is a groundbreaking product that has been available for research and government use for more than a decade, and Google Cloud recently launched GEE into general availability for commercial use. This blog post describes a method to use GEE from within BigQuery SQL, allowing SQL speakers to get access to, and value from, the vast troves of data available within Earth Engine. We will use Cloud Functions to let SQL users at your organization make use of the computation and data catalog superpowers of Google Earth Engine. So, if you are a SQL speaker and you want to understand how to leverage a massive library of earth observation data in your analysis, buckle up and read on.

Before we get started, let's spend thirty seconds setting the geospatial context for our use case. BigQuery excels at operations on vector data: things like points and polygons that you can fit into a table. We use the PostGIS syntax, so users who have used spatial SQL before will feel right at home in BigQuery. BigQuery has more than 175 public datasets available within Analytics Hub. After doing analysis in BigQuery, users can use tools like GeoViz, Data Studio, Carto, and Looker to visualize those insights.

Earth Engine is designed for raster or imagery analysis, particularly satellite imagery. GEE, which holds more than 70 PB of satellite imagery, is used to detect changes, map trends, and quantify differences on the Earth's surface. GEE is widely used to extract insights from satellite images to make better use of land, based on its diverse geospatial datasets and easy-to-use application programming interface (API).

By using these two products together, you can expand your analysis to incorporate both vector and raster datasets, combining insights from 70 PB of GEE imagery and 175+ datasets from BigQuery. For example, in this blog we'll create a Cloud Function that pulls temperature and vegetation data from Landsat satellite imagery in the GEE catalog, and we'll do it all from SQL in BigQuery. If you are curious about how to move data from BigQuery into Earth Engine, you can read about it in this post. While our example is focused on agriculture, this method can apply to any industry that matters to you.

Let's get started

Agriculture is transforming with the implementation of modern technologies. Technologies such as GPS and satellite image dissemination allow researchers and farmers to gain more information and to monitor and manage agricultural resources. Satellite imagery can be a reliable source for tracking how a field is developing. A common analysis of imagery used in agricultural tools today is the Normalized Difference Vegetation Index (NDVI). NDVI is a measurement of plant health displayed on a scale from -1 to +1. Negative values are indicative of water and moisture, while high NDVI values suggest a dense vegetation canopy. Imagery and yield tend to have a high correlation; thus, NDVI can be used with other data, such as weather, to drive seeding prescriptions.

As an agricultural engineer, you are keenly interested in crop health for all the farms and fields that you manage. The healthier the crop, the better the yield and the more profit the farm will produce. Let's assume you have mapped all your fields and the coordinates are available in BigQuery.
You now want to calculate the NDVI of every field, along with the average temperature for different months, to ensure the crop is healthy and to take action if there is an unexpected fall in NDVI. So the question is: how do we pull NDVI and temperature information into BigQuery for these fields using only SQL?

Using GEE's ready-to-go Landsat 8 imagery, we can calculate NDVI for any given point on the planet. Similarly, we can use the publicly available ERA5 dataset of monthly climate for global terrestrial surfaces to calculate the average temperature for any given point.

Architecture

Cloud Functions are a powerful tool to augment the SQL commands in BigQuery. In this case we are going to wrap a GEE script in a Cloud Function and call that function directly from BigQuery SQL. Before we start, let's get the environment set up.

Environment setup

Before you proceed, you need:

A Google Cloud project with billing enabled. (Note: this example cannot run within the BigQuery sandbox, as a billing account is required to run Cloud Functions.)

A GCP user with access to Earth Engine that can create service accounts and assign roles. You can sign up for Earth Engine at Earth Engine Sign Up; to verify you have access, check that you can view the Earth Engine Code Editor with your GCP user.

At this point Earth Engine and BigQuery are enabled and ready to work for you. Now let's set up the environment and define the Cloud Functions.

1. Once you have created your project in GCP, select it in the console and open Cloud Shell.

2. In Cloud Shell, clone the git repository that contains the shell scripts and assets required for this demo:

    git clone https://github.com/dojowahi/earth-engine-on-bigquery.git
    cd ~/earth-engine-on-bigquery
    chmod +x *.sh

3. Edit config.sh: in your editor of choice, update the variables in config.sh to reflect your GCP project.

4. Execute setup_sa.sh. You will be prompted to authenticate; you can choose "n" to use your existing auth.

    sh setup_sa.sh

If the shell script has executed successfully, you should now have a new service account.

5. A service account (SA) in the format <PROJECT_NUMBER>-compute@developer.gserviceaccount.com was created in the previous step. You need to sign up this SA for Earth Engine at EE SA signup; the setup script prints the SA name on its last line.

6. Execute deploy_cf.sh; it should take around 10 minutes for the deployment to complete.

    sh deploy_cf.sh

You should now have a dataset named gee and a table land_coords under your project in BigQuery, along with the functions get_poly_ndvi_month and get_poly_temp_month. You will also see a sample query output in Cloud Shell.
7. Now execute the command below in Cloud Shell:

    bq query --use_legacy_sql=false 'SELECT name, gee.get_poly_ndvi_month(aoi,2020,7) AS ndvi_jul, gee.get_poly_temp_month(aoi,2020,7) AS temp_jul FROM `gee.land_coords` LIMIT 10'

If you get a result with NDVI and temperature values per field, you have successfully executed SQL over Landsat imagery.

Now navigate to the BigQuery console. You should see a new external connection, us.gcf-ee-conn, two external routines called get_poly_ndvi_month and get_poly_temp_month, and a new table, land_coords. Next navigate to the Cloud Functions console, where you should see two new functions, polyndvicf-gen2 and polytempcf-gen2.

At this stage your environment is ready, and you can run queries from the BigQuery console. The query below calculates the NDVI and temperature for July 2020 for all the field polygons stored in the table land_coords:

    SELECT name,
      ST_CENTROID(ST_GEOGFROMTEXT(aoi)) AS centroid,
      gee.get_poly_ndvi_month(aoi, 2020, 7) AS ndvi_jul,
      gee.get_poly_temp_month(aoi, 2020, 7) AS temp_jul
    FROM `gee.land_coords`

When the user executes the query in BigQuery, the functions get_poly_ndvi_month and get_poly_temp_month trigger remote calls to the Cloud Functions polyndvicf-gen2 and polytempcf-gen2, which initiate the script on GEE. The results from GEE are streamed back to the BigQuery console and shown to the user.

What's next?

You can plot this data on a map in Data Studio or GeoViz and publish it to your users. Now that your data is within BigQuery, you can join it with your private datasets or other public datasets in BigQuery and build ML models using BigQuery ML to predict crop yields and seed prescriptions.

Summary

The example above demonstrates how users can wrap GEE functionality in Cloud Functions so that GEE can be executed exclusively within SQL. The method we have described requires someone who can write GEE scripts, but once the script is built, all of your SQL-speaking data analysts, scientists, and engineers can do calculations on vast troves of satellite imagery in GEE directly from the BigQuery UI or API.

Once the data and results are in BigQuery, you can join the data with other tables in BigQuery or with the data available through Analytics Hub. Additionally, with this method users can combine GEE data with other functionality such as geospatial functions or BQML. In the future we'll expand our examples to include these other BigQuery capabilities.
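For readers who want to peek inside the wrapper before cloning the repo, here is a heavily simplified, hedged sketch of what a Cloud Function backing a remote function such as gee.get_poly_ndvi_month could look like. The actual implementation lives in the GitHub repo above; the Landsat collection, band names, GeoJSON input format, and authentication shortcut below are illustrative assumptions only.

    import json
    import ee
    import functions_framework
    import google.auth

    # Initialize Earth Engine with the function's runtime service account
    # (assumes the SA has been registered for Earth Engine, as in step 5 above).
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/earthengine"]
    )
    ee.Initialize(credentials)

    @functions_framework.http
    def poly_ndvi_month(request):
        """BigQuery remote function: mean NDVI for a polygon, year, and month."""
        payload = request.get_json()
        replies = []
        # BigQuery remote functions batch rows as {"calls": [[arg1, arg2, ...], ...]}.
        for aoi_geojson, year, month in payload["calls"]:
            geom = ee.Geometry(json.loads(aoi_geojson))  # assumes GeoJSON polygons
            start = ee.Date.fromYMD(int(year), int(month), 1)
            image = (
                ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
                .filterDate(start, start.advance(1, "month"))
                .filterBounds(geom)
                .median()
            )
            ndvi = image.normalizedDifference(["B5", "B4"])  # (NIR - Red) / (NIR + Red)
            mean = ndvi.reduceRegion(
                reducer=ee.Reducer.mean(), geometry=geom, scale=30
            ).get("nd")
            replies.append(mean.getInfo())
        # BigQuery expects a JSON object with a "replies" array, one value per call.
        return json.dumps({"replies": replies})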
Thanks for reading, and remember: if you are interested in learning more about how to move data from BigQuery to Earth Engine, check out this blog post. It outlines a solution for a sustainable sourcing use case for a fictional consumer packaged goods company trying to understand its palm oil supply chain, which is primarily located in Indonesia.

Acknowledgements: shout out to David Gibson and Chao Shen for valuable feedback.

Related article: Mosquitoes get the swat with new Mosquito Forecast built by OFF! Insect Repellents and Google Cloud
Source: Google Cloud Platform

How to simplify and fast-track your data warehouse migrations using BigQuery Migration Service

Migrating data to the cloud can be a daunting task. Moving data from warehouses and legacy environments in particular requires a systematic approach. These migrations usually need manual effort and can be error-prone. They are complex and involve several steps, such as planning, system setup, query translation, schema analysis, data movement, validation, and performance optimization. To mitigate the risks, migrations necessitate a structured approach with a set of consistent tools to help make the outcomes more predictable.

Typical data warehouse migrations: error-prone, labor-intensive, trial-and-error based

Google Cloud simplifies this with the BigQuery Migration Service – a suite of managed tools that lets users reliably plan and execute migrations, making outcomes more predictable. It is free to use and generates consistent results with a high degree of accuracy. Major brands like PayPal, HSBC, Vodafone, and Major League Baseball use BigQuery Migration Service to accelerate the time to unlock the power of BigQuery, deploy new use cases, break down data silos, and harness the full potential of their data. It's incredibly easy to use, open, and customizable, so customers can migrate on their own or choose from our wide range of specialized migration partners.

BigQuery Migration Service: automatically assess, translate SQL, transfer data, and validate

BigQuery Migration Service automates most of the migration journey for you. It divides the end-to-end journey into four components: assessment, SQL translation, data transfer, and validation. Users can accelerate migrations through each of these phases, often with just the push of a few buttons. In this blog, we'll dive deeper into each phase and learn how to reduce the risk and cost of your data warehouse migrations.

Step 1: Assessment

BigQuery Migration Service generates a detailed plan with a view of dependencies, risks, and the optimized migrated state on BigQuery by profiling the source workload logs and metadata. During the assessment phase, BigQuery Migration Service guides you through a set of steps using an intuitive interface and automatically generates a Google Data Studio report with rich insights and actionable steps. Assessment capabilities are currently available for Teradata and Redshift, and will soon be expanded to additional sources.

Assessment Report: Know before you start and eliminate surprises. See your data objects and query characteristics before you start the data transfer.

Step 2: SQL Translation

This phase is often the most difficult part of any migration. BigQuery Migration Service provides fast, semantically correct, human-readable translations from most SQL flavors to BigQuery. It can intelligently translate SQL statements in high-throughput batch and Google Translate-like interactive modes from Amazon Redshift SQL, Apache HiveQL, Apache Spark SQL, Azure Synapse T-SQL, IBM Netezza SQL/NZPLSQL, MySQL, Oracle SQL/PL/SQL/Exadata, Presto SQL, PostgreSQL, Snowflake SQL, SQL Server T-SQL, Teradata SQL/SPL/BTEQ, and Vertica SQL.

Unlike most existing offerings, which parse regular expressions, BigQuery's SQL translation is truly compiler-based, with advanced customizable capabilities to handle macro substitutions, user-defined functions, output name mapping, and other source-context-aware nuances. The output is detailed and prescriptive, with clear next actions.
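For teams that want to drive batch translations programmatically rather than through the console, here is a minimal, hedged sketch using the google-cloud-bigquery-migration Python client, adapted from its quickstart pattern; the project, location, and Cloud Storage paths are placeholders, and this example assumes a Teradata-to-BigQuery translation.

    from google.cloud import bigquery_migration_v2

    def start_teradata_translation(project_id: str, gcs_input: str, gcs_output: str):
        """Kick off a batch SQL translation workflow (Teradata -> BigQuery)."""
        client = bigquery_migration_v2.MigrationServiceClient()

        translation_config = bigquery_migration_v2.TranslationConfigDetails(
            gcs_source_path=gcs_input,    # e.g. gs://my-bucket/teradata-sql/
            gcs_target_path=gcs_output,   # translated SQL is written here
            source_dialect=bigquery_migration_v2.Dialect(
                teradata_dialect=bigquery_migration_v2.TeradataDialect(
                    mode=bigquery_migration_v2.TeradataDialect.Mode.SQL
                )
            ),
            target_dialect=bigquery_migration_v2.Dialect(
                bigquery_dialect=bigquery_migration_v2.BigQueryDialect()
            ),
        )

        task = bigquery_migration_v2.MigrationTask(
            type_="Translation_Teradata2BQ",
            translation_config_details=translation_config,
        )

        workflow = bigquery_migration_v2.MigrationWorkflow(display_name="demo-translation")
        workflow.tasks["translation-task"] = task

        response = client.create_migration_workflow(
            request=bigquery_migration_v2.CreateMigrationWorkflowRequest(
                parent=f"projects/{project_id}/locations/us",
                migration_workflow=workflow,
            )
        )
        print("Started workflow:", response.name)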
Data engineers and data analysts save countless hours leveraging this automated SQL translation service.

Batch Translations: Automatic translations from a comprehensive list of SQL dialects accelerate large migrations.

Interactive Translations: A favorite feature of data engineers, interactive translations simplify refactoring, reduce errors dramatically, and serve as a great learning aid.

Step 3: Data Transfer

BigQuery offers a data transfer service from source systems into BigQuery using a simple guided wizard. Users create a transfer configuration and choose a data source from the drop-down list. Destination settings walk the user through connection options and securely connect the source and target systems. A critical feature of BigQuery's data transfer is the ability to schedule jobs: large data transfers can impose additional burdens on operational systems and impact the data sources, so BigQuery Migration Service provides the flexibility to schedule transfer jobs at user-specified times to avoid any adverse impact on production environments.

Data Transfer Wizard: A step-by-step wizard guides the user in moving data from source systems to BigQuery.

Step 4: Validation

This phase ensures that data at the legacy source and in BigQuery are consistent after the migration is completed. Validation allows highly configurable and orchestratable rules to perform a granular per-row, per-column, or per-table left-to-right comparison between the source system and BigQuery. Labeling, aggregation, group-by, and filtering enable deep validations.

Validation: The peace-of-mind module of BigQuery Migration Service.

If you would like to leverage BigQuery Migration Service for an upcoming proof of concept or migration, reach out to your GCP partner or your GCP sales rep, or check out our documentation to try it out yourself.

Related article: MLB's fan data team hits it out of the park with data warehouse modernization. See how the fan data team at Major League Baseball (MLB) migrated its enterprise data warehouse (EDW) from Teradata to BigQuery.
Source: Google Cloud Platform

EyecareLive transforms healthcare ecosystem with Enhanced Support

EyecareLive transforms the healthcare ecosystem with Enhanced Support, a support service from the Google Cloud Customer Care portfolio.

Telemedicine is now mainstream; it exploded during the COVID-19 pandemic. A 2022 survey by Jones Lang LaSalle (registration required) found that 38% of U.S. patients were using some form of telemedicine. This number is expected to grow as consumers demand more convenient and immediate access to care, and as doctors seek efficiencies, cost savings, and closer relationships with patients. But because the eye-care field is so heavily regulated, optometrists and ophthalmologists face more technical hurdles to perform telemedicine than their peers in other medical practices. To join the telemedicine revolution, a generic technology solution wouldn't do. Eye-care professionals need a more carefully architected and rigorously secure platform, one that ensures a very high degree of compliance and privacy.

EyecareLive provides exactly that. Their comprehensive cloud-based solution was built specifically for eye-care telemedicine practices. They not only facilitate telemedicine visits with patients via video, but help providers stay in compliance with complex industry regulations. What's more, EyecareLive is the only platform in the industry that conducts vision screening using Food and Drug Administration (FDA)-registered tests to check a patient's vision before connecting them to a doctor through a video call. The doctor can thus triage any issues immediately and quickly determine the right next steps for proper care. In addition, the platform digitally connects optometrists and ophthalmologists to the entire eye-care ecosystem, including other doctors for referrals, insurance companies, hospitals, pharmaceutical firms, pharmacies, and, of course, patients.

On top of all of this, the automated back office for eye-care practices brings electronic health records (EHRs), clinical workflow, billing, coding, and more into one platform. EyecareLive streamlines operations and frees up doctors to focus on delivering the highest possible quality of eye healthcare and on building stronger relationships with patients.

"Considering the number of plug-and-play services that Google has built into the Google Cloud Healthcare solutions, Google is basically supporting the entire healthcare industry from an infrastructure provider point of view." — Raj Ramchandani, CEO, EyecareLive

Seeking greater agility, EyecareLive migrated to Google Cloud

EyecareLive is truly cloud first: they had operated entirely in the AWS cloud since opening their doors in 2017. Several years in, they decided to look for an additional cloud provider with broader support for digital health platforms. They specifically wanted to migrate to one they could rely on to deliver plug-and-play services that would accelerate innovation on their platform. Rather than re-architecting for a new cloud, EyecareLive wanted a cloud platform offering compatible services they could use to meet their needs for reliability and availability.

"If we want to deploy a new conversational bot or build AI models that assist doctors to diagnose based on a retina image, Google Cloud provides these services which are reliable and tested by Google Cloud Healthcare solutions in many cases." — Raj Ramchandani, CEO, EyecareLive

Versatility was another requirement. The EyecareLive platform must fulfill the demands of a variety of organizations — doctors, pharmaceutical companies, clinics, and others.
EyecareLive also has an international deployment strategy that goes far beyond offering a domestic telehealth solution, so they needed cloud functionality that extends into the broader global eye-care ecosystem.

EyecareLive chose Google Cloud. The most compelling reason was the deep industry expertise found in Google Cloud for healthcare and life sciences, which distinguished Google Cloud from all the other cloud providers EyecareLive considered. "We like Google Cloud because of the differentiations such as Google Cloud Healthcare solutions, computer vision, and AI models that can be used out of the box," says Raj Ramchandani, CEO of EyecareLive. "We found these features more robust for our use cases on Google Cloud than any other."

"Google is heavily into its Healthcare Cloud. That's what differentiates it. We love that part because we can tap into innovative healthcare cloud functionality quickly." — Raj Ramchandani, CEO, EyecareLive

Key to production deployment (and beyond): Google Cloud Enhanced Support

As a cloud-born company, EyecareLive had an exceedingly tech-savvy team. But the migration was a complex one that involved moving third-party software and networking products tightly integrated with EyecareLive's own code, and the team knew it needed expert help. What's more, doctors, patients, and other users required 24/7 access to the platform, and any interruptions to availability or infrastructure hiccups during the migration would disrupt their online experiences. The EyecareLive team was already stretched by continuing to grow and innovate the business, so they asked Google Cloud for help.

EyecareLive purchased Enhanced Support, a support service in the Google Cloud Customer Care portfolio specifically designed for small and midsized businesses (SMBs). Enhanced Support gave EyecareLive unlimited, fast access to expert support from a team of experienced Google Cloud engineers during the intricate, multifaceted migration. "It was my top priority to engage Google Cloud Customer Care to help us keep the platform always available for our doctors and users," says Ramchandani. "The level of detail to the answers, the clarifications of having the Enhanced Support experts tell us to do it a certain way has been enormously helpful."

For example, one of the valuable features delivered by Enhanced Support is Third-Party Technology Support, which gives EyecareLive access to experts with specialized knowledge of third-party technologies, such as networking, MongoDB, and infrastructure. This meant all components in EyecareLive's infrastructure could be seamlessly migrated to Google Cloud, and afterward EyecareLive could lean on Enhanced Support experts to continue to troubleshoot and mitigate issues as necessary.

"The response times to the questions and issues we had when going live was fantastic. It was the best experience with a tech vendor we've had in a long time." — Raj Ramchandani, CEO, EyecareLive

With Enhanced Support at their side, EyecareLive was able to get up and running quickly in preparation for their international expansion by using Google Cloud's prebuilt AI models, load balancers, and networking technologies, which are designed to be easily deployed across multiple regions throughout the globe.
"We know exactly how to implement data locality to scale our deployment into different regions and into different countries, because we've learned that from the Google Cloud support team." — Raj Ramchandani, CEO, EyecareLive

EyecareLive then proceeded to rapidly scale their business, knowing that Google Cloud would ensure they could meet compliance standards in whatever country or region they expanded into.

"Since we've moved to Google Cloud and chose Enhanced Support, we've had 100% availability. That's zero downtime, which is incredible." — Raj Ramchandani, CEO, EyecareLive

Enhanced Support also provided the capabilities for EyecareLive to:

Resolve issues and minimize any unplanned downtime to maintain a high-quality, secure experience for doctors and patients during and after migration.

Acquire fast responses to questions from technical support experts.

Learn from the Enhanced Support team's guidance beyond immediate technical issues.

EyecareLive builds momentum toward their vision for eye-care telemedicine

By working closely with the Google Cloud Enhanced Support team, EyecareLive was able to successfully migrate their platform. "If you ask any of my engineers which cloud provider they prefer, they'd all respond 'Google Cloud,'" says Ramchandani. "The documentation is there, the sample code is there, everything that we need to get started is available."

EyecareLive was then able to grow and scale their business in the cloud in the following ways:

Successfully managed a complex migration with minimal disruption and maximum availability, ensuring a consistent, secure, and compliance-ready experience for doctors and patients.

Gained the trust of both doctors and patients, who know that EyecareLive protects their sensitive medical data.

Kept EyecareLive agile and focused on innovating forward rather than building new features from scratch, by supporting the team as they took advantage of Google's tailored, plug-and-play technologies.

Analyzed performance over time to plan for future growth by partnering with Enhanced Support for the long term.

"We know we can rely on Google Cloud from a security point of view. We love the fact that Google Cloud Healthcare solution is HIPAA compliant. Those are the things that make us trust Google to do the right thing." — Raj Ramchandani, CEO, EyecareLive

With Enhanced Support, EyecareLive sees a bright future in the cloud

With the help of Enhanced Support, EyecareLive brings digital transformation to eye care within the healthcare industry by integrating the entire ecosystem of eye-care partners onto one platform, making EyecareLive a leader in their industry. Learn more about Google Cloud Customer Care services and sign up today.
Source: Google Cloud Platform

U.S. Army chooses Google Workspace to deliver cutting-edge collaboration

In June, we announced the creation of Google Public Sector, a new Google division focused on helping U.S. public sector entities—including federal, state, and local governments, and educational institutions—accelerate their digital transformations. It is our belief that Google Public Sector, and our industry partner ecosystem, will play an important role in applying cloud technology to solve complex problems for our nation.

Today, I'm proud to announce one of our first big partnerships following the launch of this new subsidiary: Google Public Sector will provide up to 250,000 active-duty personnel of the U.S. Army workforce with Google Workspace. The government has asked for more choice in cloud vendors who can support its missions, and Google Workspace will equip today's military with a leading suite of collaboration tools to get their work done.

In the Army, personnel often need to work across remote locations and in challenging environments, where seamless collaboration is key to their success. Google Workspace was designed with these challenges in mind and can be deployed quickly across a diverse set of working conditions, locations, jobs, and skill levels. And more than three billion users already rely on Google Workspace, which means these are familiar tools that require little training or extended ramp-up time for Army personnel—ultimately helping Soldiers and employees communicate better and more securely than ever before.

Working with Accenture Federal Services under the Army Cloud Account Management Optimization contract and our implementation partner SADA, we're excited to help the Army deploy a cloud-first collaboration solution, improving upon more traditional technologies with unparalleled security and versatility. Google Workspace is not only "born in the cloud" with secure-by-design architecture, but also provides a clear path to future features and innovations.

One of the key reasons we are able to serve the U.S. Army is that Google Workspace recently received an Impact Level 4 (IL4) authorization from the DoD. IL4 is a DoD security designation related to the safe handling of controlled unclassified information (CUI). That means government officials and others can use Google Workspace with more confidence and ease than ever before.

Momentum for Google Public Sector

With Google Public Sector, we are committed to building our relationship with the U.S. Army and with other public sector agencies. In fact, we recently announced a partnership with Aclima to provide New York State with hyperlocal air quality monitoring and an alliance with Arizona State University to deliver an immersive online K-12 learning technology to students in the United States and around the world.

This is just the start. Google Public Sector is dedicated to helping U.S. public sector customers become experts in Google Cloud's advanced cybersecurity products, protecting people, data, and applications from increasingly pervasive and challenging cyber threats. We have numerous training and certification programs in digital and cloud skills for public sector employees and our partners, and we continue to expand our ecosystem of partners capable of building new solutions to better serve U.S. public sector organizations.

Delivering the best possible public services means making life better and work more fulfilling for millions of people, inside and outside of government. We're thrilled by what we are accomplishing at Google Public Sector, particularly with today's partnership with the U.S. Army, and we look forward to announcing even more great developments in the future.
Learn more about how the government is accelerating innovation at our Google Government Summit, taking place in person on Nov. 15 in Washington, D.C. Join agency leaders as they explore best practices and share industry insights on using cloud technology to meet their missions.
Source: Google Cloud Platform

Use Artifact Registry and Container Scanning to shift left on security and streamline your deployments

3 ways Artifact Registry & Container Analysis can help optimize and protect container workloads

Cybercrime costs companies 6 trillion dollars annually, with ransomware damage accounting for $20B alone.1 A major source of attack vectors is vulnerabilities in your open source software, and vulnerabilities are more common in popular projects: in 2021, the top 10% of most popular OSS project versions were on average 29% likely to contain known vulnerabilities, while the remaining 90% of project versions were only 6.5% likely to contain known vulnerabilities.2

Google understands the challenges of working with open source software. We've been doing it for decades and are making some of our best practices available to customers through our solutions on Google Cloud. Below are three simple ways to get started with our artifact management platform.

Use Google Cloud's native registry solution: Artifact Registry is the next generation of Container Registry and a great option for securing and optimizing storage of your images. It provides centralized management and lets you store a diverse set of artifacts, with seamless integration with Google Cloud runtimes and DevOps solutions, letting you build and deploy your applications with ease.

Shift left to discover critical vulnerabilities sooner: By enabling automatic scanning of containers in Artifact Registry, you get vulnerability detection early in the development process. Once enabled, any image pushed to the registry is scanned automatically for a growing number of operating system and language package vulnerabilities. Continuous analysis updates vulnerability information for the image as long as it's in active use. This simple step lets you shift security left and detect critical vulnerabilities in your running applications before they become more broadly available to malicious actors.

Deployments made easy and optimized for GKE: With regionalized repositories, your images are well positioned for quick and easy deployment to Google Cloud runtimes. You can further reduce the start-up latency of your applications running on GKE with image streaming.

Our native artifact management solutions integrate tightly with other Google Cloud services like IAM and Binary Authorization. Using Artifact Registry with automatic scanning is a key step toward improving the security posture of your software development lifecycle.

End-to-end software supply chain

Leverage these Google Cloud solutions to optimize your container workloads and help your organization shift security left. Learn more about Artifact Registry and enabling automated scanning. These features are available now.

1. Cyberwarfare In The C-Suite
2. State of the software supply chain 2021

Related article: How Google Cloud can help secure your software supply chain. Google Cloud just introduced its new Assured OSS service; here's how it can help secure your software supply chain.
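Once automatic scanning is enabled, the resulting vulnerability occurrences can also be queried programmatically, for example to gate a deployment. Here is a minimal, hedged sketch using the Container Analysis Python client; the project ID, image URL, and the severity gate are placeholders for illustration only.

    from google.cloud.devtools import containeranalysis_v1

    def count_critical_vulnerabilities(project_id: str, resource_url: str) -> int:
        """Count CRITICAL vulnerability occurrences recorded for a scanned image.

        resource_url looks something like (placeholder):
        https://us-docker.pkg.dev/PROJECT/REPO/IMAGE@sha256:DIGEST
        """
        client = containeranalysis_v1.ContainerAnalysisClient()
        grafeas_client = client.get_grafeas_client()

        occurrences = grafeas_client.list_occurrences(
            request={
                "parent": f"projects/{project_id}",
                "filter": f'kind="VULNERABILITY" AND resourceUrl="{resource_url}"',
            }
        )

        critical = 0
        for occurrence in occurrences:
            if occurrence.vulnerability.effective_severity.name == "CRITICAL":
                critical += 1
        return critical

    # Example gate (placeholder values): fail a pipeline if any CRITICAL findings exist.
    # if count_critical_vulnerabilities("my-project", image_url) > 0:
    #     raise SystemExit("Blocking deploy: critical vulnerabilities found")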
Source: Google Cloud Platform