Understanding Firestore performance with Key Visualizer

Firestore is a serverless, scalable, NoSQL document database. It is ideal for rapid and flexible web and mobile application development, and uniquely supports real-time syncing of client devices to the database. To get the best performance out of Firestore, while also making the most of Firestore’s automatic scaling and load balancing features, you need to make sure the data layout of your application allows requests to be processed optimally, particularly as your user traffic increases. There are some subtleties to be aware of as traffic ramps up, and to make them easier to identify, we’re announcing the General Availability of Key Visualizer, an interactive performance monitoring tool for Firestore.

Key Visualizer generates visual reports based on the Firestore documents accessed over time, which help you understand and optimize the access patterns of your database, as well as troubleshoot performance issues. With Key Visualizer, you can iteratively design a data model or improve your existing application’s data usage pattern.

Tip: While Key Visualizer can be used with production databases, it’s best to identify performance issues prior to rolling out changes in production. Consider running application load tests with Firestore in a pre-production environment, and using Key Visualizer to identify potential issues.

Viewing a visualization

The Key Visualizer tool is available to all Firestore customers. Visualizations are generated at every hour boundary, covering data for the preceding two hours, and are produced as long as overall database traffic during the selected period meets the scan eligibility criteria.

To get an overview of activity using Key Visualizer, first select a two-hour time period and review the heatmap for the “Total ops/s” metric. This view estimates the number of operations per second and how they are distributed across your database. Total ops/s is an estimated sum of write, lookup, and query operations, averaged per second.

Firestore automatically scales using a technique called range sharding. When using Firestore, you model data in the form of documents stored in hierarchies of collections. The collection hierarchy and document ID are translated to a single key for each document. Documents are logically stored and ordered lexicographically by this key. We use the term “key range” to refer to a range of keys. The full key range is then automatically split up as needed, driven by storage and traffic load, and served by many replicated servers inside of Firestore.

The following example Key Visualizer visualization shows a heatmap with some major differences in the usage pattern across the database. The X-axis is time, and the Y-axis is the key range for your database, broken down into buckets by traffic. Ranges shown in dark colors have little or no activity; ranges in bright colors have significantly more activity. In the example below, you can see the “Bar” and “Qux” collections going beyond 50 operations per second for some period of time. Additional methods of interpreting Key Visualizer visualizations are detailed in our documentation.

Besides the total number of operations, Key Visualizer also provides views with metrics for ops per second, average latency, and tail latency, where traffic is broken down into writes and deletes, lookups, and queries.
This capability allows you to identify issues with your data layout or poorly balanced traffic that may be contributing to increased latencies.

Hotspots and heatmap patterns

Key Visualizer gives you insight into how your traffic is distributed, and lets you understand whether latency increases correlate with a hotspot, giving you the information to determine which parts of your application need to change. We refer to a “hotspot” as a case where traffic is poorly balanced across the database’s keyspace. For the best performance, requests should be distributed evenly across the keyspace. The effect of a hotspot can vary, but typically hotspotting causes higher latency and, in some cases, even failed operations.

Firestore automatically splits a key range into smaller pieces and distributes the work of serving traffic to more servers when needed. However, this has some limitations. Splitting storage and load takes time, and ramping up traffic too fast may cause hotspots while the service adjusts. The best practice is to distribute operations across the key range, start traffic on a cold database at no more than 500 operations per second, and then increase traffic by up to 50% every 5 minutes. This is called the “500/50/5” rule, and it allows you to rapidly warm up a cold database safely. For example, ramping to 1,000,000 ops/s can be achieved in under two hours.

Firestore can automatically split a key range until it is serving a single document using a dedicated set of replicated servers. Once this threshold is hit, Firestore is unable to create further splits beyond a single document. As a result, high and sustained volumes of concurrent operations on a single document may result in elevated latencies. You can observe these high latencies using Key Visualizer’s average and tail latency metrics. If you encounter sustained high latencies on a single document, you should consider modifying your data model to split or replicate the data across multiple documents.

Key Visualizer will also help you identify additional traffic patterns:

- Evenly distributed usage: If a heatmap shows a fine-grained mix of dark and bright colors, then reads and writes are evenly distributed throughout the database. This heatmap represents an effective usage pattern for Firestore, and no additional action is required.
- Sequential keys: A heatmap with a single bright diagonal line can indicate a special hotspotting case where the database is using strictly increasing or decreasing keys (document IDs). Sequential keys are an anti-pattern in Firestore that results in elevated latency, especially at higher operations per second. In this case, the document IDs that are generated and used should be randomized (see the sketch after this list). To learn more, see the best practices page.
- Sudden traffic increase: A heatmap with a key range that suddenly changes from dark to bright indicates a sudden spike in load. If the load increase isn’t well distributed across the key range and exceeds the 500/50/5 best practice, the database can experience elevated latency in the operations. In this case, the data layout should be modified to better distribute usage and traffic across the keyspace.
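Because the sequential-key pattern above usually comes from the application choosing its own monotonically increasing document IDs (timestamps, counters, and the like), the simplest fix is to let Firestore generate IDs for you. Below is a minimal sketch using the Firestore client library for Python; the collection and field names are illustrative.

```python
# Minimal sketch: avoid sequential document IDs by letting Firestore
# auto-generate random IDs. Collection and field names are illustrative.
from google.cloud import firestore

db = firestore.Client()

# Anti-pattern (shown for contrast): an increasing ID such as a timestamp
# concentrates writes at one end of the key range and can create a hotspot.
# db.collection("events").document(str(time.time_ns())).set(payload)

# Preferred: document() with no argument returns a reference with a
# randomly generated ID, which spreads writes across the key range.
doc_ref = db.collection("events").document()
doc_ref.set({"type": "page_view", "created": firestore.SERVER_TIMESTAMP})

# add() does the same in one call and returns (update_time, reference).
_, ref = db.collection("events").add({"type": "click"})
print(ref.id)
```

On the ramp-up side, the arithmetic behind the 500/50/5 rule is straightforward: starting at 500 ops/s and growing 50% every 5 minutes crosses 1,000,000 ops/s after about 19 increases, roughly 95 minutes, which is where the “under two hours” figure above comes from.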
Next steps

Firestore Key Visualizer is a performance monitoring tool available to administrators and developers to better understand how their applications interact with Firestore. With this launch, Firestore joins our family of cloud-native databases, including Cloud Spanner and Cloud Bigtable, in offering Key Visualizer to its customers. You can get started with Firestore Key Visualizer for free, from the Cloud Console.

Acknowledgement

Special thanks to Minh Nguyen, Lead Product Manager for Firestore, for contributing to this post.
Source: Google Cloud Platform

How can demand forecasting approach real-time responsiveness? Vertex AI makes it possible

Everyone wishes they had a crystal ball—especially retailers and consumer goods companies looking for the next big trend, or logistics companies worried about the next big storm. With a veritable universe of data now at their fingertips (or at least at their keyboards), these companies can get closer to real-time forecasting across a range of areas when they leverage the right AI and machine learning tools.

For retailers, supply chain, and consumer goods organizations, accurate demand forecasting has always been a key driver of efficient business planning, inventory management, streamlined logistics, and customer satisfaction. Accurate forecasting is critical to ensure that the right products, in the right volumes, are delivered to the right locations. Customers don’t like to see items out of stock, but too much inventory is costly and wasteful. Retailers lose over a trillion dollars a year to mismanaged inventory, according to IHL Group, whereas a 10% to 20% improvement in demand forecasting accuracy can directly produce a 5% reduction in inventory costs and a 2% to 3% increase in revenue (Notes from the AI Frontier, McKinsey & Company).

Yet inventory management is only one of the many applications that demand forecasting can support—retailers also need to staff their stores and support centers for busy periods, plan promotions, and evaluate the different factors that can impact store or online traffic. As retailers’ product catalogs and global reach broaden, the available data becomes more complex and more difficult to forecast accurately. Disruptions through the pandemic have only accentuated supply chain bottlenecks and forecasting challenges, as the pace of change has been so rapid. Retailers can now infuse machine learning into their existing demand forecasting to achieve high forecast accuracy by leveraging Vertex AI Forecast. This is one of the latest innovations born of Google Brain researchers and made available to enterprises within an accelerated time frame.

Top performing models within two hours

Vertex AI Forecast can ingest datasets of up to 100 million rows, covering years of historical data for many thousands of product lines, from BigQuery or CSV files. The modeling engine automatically processes the data, evaluates hundreds of different model architectures, and packages the best ones into one model that is easy to manage, even without advanced data science expertise. Users can include up to 1,000 different demand drivers (color, brand, promotion schedule, e-commerce traffic statistics, and more) and set budgets to create the forecast. Given how quickly market conditions change, retailers need an agile system that can learn quickly. Teams can build demand forecasts at top-scoring accuracy with Vertex AI Forecast within just two hours of training time and no manual model tuning.

The key part of Vertex AI Forecast is model architecture search, where the service evaluates hundreds of different model architectures and settings. This allows Vertex AI Forecast to consistently find the best performing model setups for a wide variety of customers and datasets. Google has effectively built the brain that is applied to demand forecasting in a non-intrusive and contextual way, merging the art and (data) science of accurate demand forecasting. In benchmarking tests based on Kaggle datasets, Vertex AI Forecast performed in the highest 3% of accuracy in M5, the world’s top forecasting competition.
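To give a sense of what this looks like in practice, here is a minimal sketch of training a forecasting model from a BigQuery table with the Vertex AI SDK for Python. The project, table, column names, horizon, and budget are illustrative placeholders, and the training-job parameters should be checked against the current SDK reference rather than taken as authoritative.

```python
# Minimal sketch of training a Vertex AI forecasting model from BigQuery.
# Project, dataset/table, and column names are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Historical demand data previously exported to BigQuery.
dataset = aiplatform.TimeSeriesDataset.create(
    display_name="demand-history",
    bq_source="bq://my-project.retail.sales_history",
)

job = aiplatform.AutoMLForecastingTrainingJob(
    display_name="demand-forecast",
    optimization_objective="minimize-rmse",
)

model = job.run(
    dataset=dataset,
    target_column="units_sold",
    time_column="sale_date",
    time_series_identifier_column="sku",
    available_at_forecast_columns=["sale_date", "promo_flag"],
    unavailable_at_forecast_columns=["units_sold"],
    forecast_horizon=28,              # predict 28 days ahead
    data_granularity_unit="day",
    data_granularity_count=1,
    budget_milli_node_hours=2000,     # roughly two node hours of training
)

print(model.resource_name)
```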
Leading retailers are already transforming their operations and reaping the benefits of highly accurate forecasting. “Magalu has deployed Vertex AI Forecast to transform our forecasting predictions, by implementing distribution center level forecasting and reducing prediction errors simultaneously,” said Fernando Nagano, director of Analytics and Strategic Planning at Magalu. “Four-week live forecasting showed significant improvements in error (WAPE) compared to our previous models,” Nagano added. “This high accuracy insight has helped us to plan our inventory allocation and replenishment more efficiently to ensure that the right items are in the right locations at the right time to meet customer demand and manage costs appropriately.”

From weather to leather, Vertex AI can handle all kinds of inputs

With the hierarchical forecast capabilities of Vertex AI Forecast, retailers can generate a highly accurate forecast that works on multiple levels (for example, tying together demand at the individual item, store, and regional levels) to minimize challenges created by organizational silos. Hierarchical models can also improve overall accuracy when historical data is sparse: when the demand for individual items is too random to forecast, the model can still pick up on patterns at the product category level.

Vertex AI can ingest large volumes of structured and unstructured data, allowing planners to include many relevant demand drivers such as weather, product reviews, macroeconomic indicators, competitor actions, commodity prices, freight charges, ocean shipping carrier costs, and more. Vertex AI Forecast’s explainability features can show how each of these drivers contributes to the forecast and help decision makers understand what drives demand so they can take corrective action early. The demand driver attributions are available not only for the overall forecast but for each individual item at every point. For instance, planners may discover that promotions are the main drivers of demand in the clothing category on weekdays, but not during the holidays. These kinds of insights can be invaluable when decisions are made on how to act on forecasts.

Vertex AI Forecast is already helping Lowe’s with a range of models at the company’s more than 1,700 stores, according to Amaresh Siva, senior vice president for Innovation, Data and Supply Chain Technology at Lowe’s.

“At Lowe’s, our stores and operations stretch across the United States, so it’s critical that we have highly accurate SKU-level forecasts to make decisions about allocating inventory to specific stores and replenishing items in high demand,” Siva said. “Using Vertex AI Forecast, Lowe’s has been able to create accurate hierarchical models that balance between SKU and store-level forecasts. These models take into account our store-level, SKU-level, and region-level inventory, promotions data and multiple other signals, and are yielding more accurate forecasts.”

Key retail and supply chain partners, including o9 Solutions and Quantiphi, are already integrating Vertex AI Forecast to provide value-added services to customers. To learn more about demand forecasting with Vertex AI, please contact your Field Sales Representative, or try Vertex AI for free here.
Source: Google Cloud Platform

How Macy’s enhances the customer experience with Google Cloud services

Editor’s note: Learn from Mohamed Nazeemudeen, Director of software engineering at Macy’s, about Macy’s strategy for choosing cloud databases and how Macy’s pricing services leverage Cloud Bigtable under the hood. You can also find Mohamed’s Google Cloud Next ‘21 session on this topic on YouTube.

At Macy’s we lead with our aim of fostering memorable shopping experiences for our customers. Our transition from on-premises operations to Google Cloud Platform (GCP) cloud-first managed database services is an extension of this dedication. Our mutual commitment to innovation in customer service led to the acceleration of our digital transition at an uncertain time for our industry and our company.

As one of the nation’s premier omnichannel fashion retailers, Macy’s has 727 stores and operates in 43 states in the US. By leveraging Google’s databases, we’ve emerged from the COVID-19 pandemic with newfound scalability, flexibility, customer growth, and a vision that consistently challenges and inspires us to enhance the customer experience. Through our Google partnership, we succeeded at bolstering our e-commerce platform, optimizing internal operational efficiency, and enhancing every critical component of our services by choosing the appropriate database tools.

How Macy’s leveraged GCP services to optimize efficiency

Common Services is a strategic initiative at Macy’s that leverages GCP-managed services. The goal of Common Services is to provide a single source of truth for all internal clients of Macy’s selling channels. This centralization of our operations allows us to provide an integrated customer experience across the various channels of our company (digital, stores, enterprise, call centers, etc.).

How Bigtable and Spanner support pricing and inventory management

The SLA for Common Services is 99.99% uptime, with cross-regional availability, supporting tens of thousands of queries per second at single-digit latency at the 95th percentile. We decided to use GCP-managed services to lower our operational overhead.

To store data from our catalog and support our inventory management, we leveraged Spanner. Our catalog service requires low latency and is tolerant of slightly stale data, so we used stale reads from Spanner with about 10 seconds of exact staleness to keep latency low (single digit).

We utilized Bigtable on Google Cloud as the backing database for our pricing system, as it entails a very intensive workload and is highly sensitive to latency. Bigtable allows us to get the information we need, with latency under 10ms at p99, regardless of the scale and size of the data. Our access pattern entails finding an item’s ticket price based on a given division, location, and the Universal Product Code (UPC) which identifies the item. The system on Bigtable supports a time range that spans from multiple days in the past to multiple days in the future.

We have millions of UPCs and more are added every day. With 700+ stores, and potentially multiple price points per item, we create billions of data points. Our calculations therefore show that we will require dozens of terabytes of storage. The storage available on GCP supports all our extensive storage needs while optimizing speed, functionality, and efficiency.

How we designed our Bigtable schema

We wanted to access the information with one row key lookup to keep the overall latency low. For the row key, we use the location and the UPC.
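A minimal sketch of what such a single-row lookup can look like with the Cloud Bigtable client library for Python is shown below. The instance, table, column family, app profile, and the exact row-key format (here location and UPC joined with a delimiter) are illustrative assumptions, not Macy’s actual schema.

```python
# Minimal sketch of a single-row-key price lookup in Bigtable.
# Instance, table, column family, app profile, and key format are
# illustrative assumptions rather than the actual production schema.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
instance = client.instance("pricing-instance")

# Reads can be routed through a read-oriented app profile.
table = instance.table("prices", app_profile_id="price-reader")

def price_row_key(location: str, upc: str) -> bytes:
    # One composite key per (location, UPC) keeps each lookup to a single
    # row read instead of a key-range scan.
    return f"{location}#{upc}".encode("utf-8")

row = table.read_row(price_row_key("store-0123", "012345678905"))
if row is not None:
    for family, qualifiers in row.cells.items():
        print(family, [q.decode() for q in qualifiers])
```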
To avoid key range scans, and to be mindful of storage requirements, we chose to store the timestamped price values as a protobuf inside a cell. Our performance testing showed that the cost of deserializing the protobuf was negligible, and with GCP our latency remained in single-digit milliseconds.

The Cloud Bigtable schema design for the price common service

Because our price systems involve heavy batch writes while processing price adjustment instructions, we have isolated the read and write workloads using Bigtable app profiles. The app profile is configured with multi-cluster routing so that Bigtable handles high availability for us.

Our ability to enhance the performance of our operations and deliver a better experience for our customers is a direct reflection of GCP-managed services. The success of our partnership with Google reflects a mutual commitment to embracing innovation and imagination. We enjoyed this opportunity to expand Macy’s reach and streamline the shopping experience for our customers. We are excited to bring a new standard of personalization, accessibility, and comfort to today’s retail industry.
Source: Google Cloud Platform

Quantum Metric explores retail big data use cases on BigQuery

Editor’s note: To kick off the new year, we invited partners from across our retail ecosystem to share stories, best practices, and tips and tricks on how they are helping retailers transform during a time that has seen tremendous change. The original version of this blog was published by Quantum Metric. Please enjoy this updated entry from our partner.

If you had access to 100% of the behavioral data on the visitors to your digital properties, what would you change? The key to any digital intelligence platform is adoption. For this to happen, you need data – big data. Our most advanced customers are using Quantum Metric data outside the walls of the UI and exploring big data use cases for experience data.

As such, Quantum Metric is built on Google Cloud BigQuery, which enables our customers, many of which are retailers, to have access to their raw data. They can leverage this data directly in BigQuery or stream it to any data lake, cloud, or other system of their choosing. Through the Quantum Metric and BigQuery integration, customers can start leveraging experience data in more ways than you might realize. Let’s explore three ways enterprises are leveraging Quantum Metric data in BigQuery to enhance the customer experience.

Use Case 1: Retargeting consumers when they don’t complete an online purchase

First, we look at retargeting. Often, when a shopping cart is abandoned or an error occurs during a consumer’s online shopping experience, you may not know why the situation occurred nor how to fix it in real time. With Quantum Metric data in Google BigQuery, you can see user behavior, including what happens when a cohort of users don’t convert. As a result, enterprises can leverage those insights to retarget and win the consumer over.

Use Case 2: Enable real-time decision making with a customer data platform

Next, consider how you can inform a customer data platform (CDP) to enable real-time decision making – the holy grail of data analytics. Imagine you are an airline undergoing digital transformation. Most airlines offer loyalty status or programs, and this program is usually built in tandem with a CDP, which allows airlines to get a 360-degree view of their customer from multiple sources of data and from different systems. With Quantum Metric on Google Cloud, you can combine customer data with experience data, empowering you to better understand how users are experiencing your products, applications, or services, and enabling you to take action as needed in real time.

For example, you can see when loyalty members are showing traits of frustration and deploy a rescue via chat, or even trigger a call from a special support agent. You can also send follow-up offers like promos to drive frustrated customers back to your website. The combined context of behavior data and customer loyalty status data allows you to be more pragmatic and effective with your resources. This means taking actions that rescue frustrated customers and drive conversion rates.

Use Case 3: Developing impactful personalization

The above CDP example is just the beginning of what you can achieve with the Quantum Metric and BigQuery integration. To develop truly impactful personalization programs, you need a joint dataset that is informed by real-time behavioral data. With Quantum Metric and BigQuery, customers can access real-time behavioral data, such as clicks, view time, and frustrations, which allows you to develop impactful personalized experiences. Let’s think about this through an example.
Imagine a large retailer that specializes in selling commodities and needs to perform well on Black Friday. Through the Quantum Metric and BigQuery integration, they have real-time data on product engagement, such as clicks, taps, view time, frustration, and other statistics. When they combine these insights with products available by region and competitive pricing data, they have a recipe for success when it comes to generating sales on Black Friday.

With these data insights, retailers can create cohorts of users (by age, device, loyalty status, purchase history, etc.). These cohorts receive personalized product recommendations based on the critical sources of data. These recommendations are compelling for consumers, since they are well priced, popular products that shoppers know are in stock. This approach to personalization will become more important as supply chain inventory challenges continue into 2022.

Quantum Metric

With Quantum Metric and BigQuery, you can explore these three big data use cases. The exciting part is, this is just the beginning of what you can accomplish when you combine real-time experience analytics data with critical business systems. Read the companion piece to learn more about how companies are making the most of Quantum Metric and BigQuery today.
Source: Google Cloud Platform

10 questions to help boards safely maximize cloud opportunities

The accelerating pursuit of cloud-enabled digital transformations brings new growth opportunities to organizations, but also raises new challenges. To ensure that they can lock in newfound agility, quality improvements, and marketplace relevance, boards of directors must prioritize safe, secure, and compliant adoption processes that support this new technological environment. The adoption of cloud at scale by a large enterprise requires the orchestration of a number of significant activities, including:

- Rethinking how strategic outcomes leverage technology, and how to enable those outcomes by changing how software is designed, delivered, and managed across the organization.
- Refactoring security, controls, and risk governance processes to ensure that the organization stays within its risk appetite and in compliance with regulation during and following the transformation.
- Implementing new organizational and operating models to empower a broad and deep skills and capabilities uplift, and fostering the right culture for success.

As such, the organization across all lines of defense has significant work to do. The board of directors plays a key role in overseeing and supporting management on this journey, and our new paper is designed to provide a framework and handbook for boards of directors in that position. We provide a summary of our recommendations, in addition to a more detailed handbook. This paper complements two papers we published in 2021: The CISO’s Guide to Cloud Security Transformation, and Risk Governance of Digital Transformation in the Cloud, which is a detailed guide for chief risk officers, chief compliance officers, and heads of internal audit.

We have identified 10 questions that we believe help a board of directors hold a structured, meaningful discussion with their organization about its approach to cloud. We’ve included additional points with each, as examples of what a good approach could look like, and potential red flags that might indicate all is not well with the program. At a high level, those questions are:

1. How is the use of cloud technology being governed within the organization? Is clear accountability assigned, and is there clarity of responsibility in decision-making structures?
2. How well does the use of cloud technology align with, and support, the technology and data strategy for the organization, and, ideally, the overarching business strategy, so that the cloud approach can be tailored to achieve the right outcomes?
3. Is there a clear technical and architectural approach for the use of cloud that incorporates the controls necessary to ensure that infrastructure and applications are deployed and maintained in a secure state?
4. Has a skills and capabilities assessment been conducted, in order to determine what investments are needed across the organization?
5. How is the organization structure and operating model evolving to fully leverage cloud, while also increasing the likelihood of a secure and compliant adoption?
6. How are risk and control frameworks being adjusted, with an emphasis on understanding how the organization’s risk profile is changing and how the organization is staying within risk appetite?
7. How are independent risk and audit functions adjusting their approach in light of the organization’s adoption of cloud?
8. How are regulators and other authorities being engaged, in order to keep them informed and abreast of the organization’s strategy and of the plans for the migration of specific business processes and data sets?
9. How is the organization prioritizing resourcing to enable the adoption of cloud, while also maintaining adequate focus on managing existing and legacy technologies?
10. Is the organization consuming and adopting the cloud provider’s set of best practices and leveraging the lessons the cloud provider will have learned from their other customers?

Our conclusions in this whitepaper have been guided by Google’s years of leading and innovating in cloud security and risk management, and the experience that Google Cloud experts have gained from their previous roles in risk and control functions in large enterprises. The board of directors plays a critical role in overseeing any organization’s cloud-enabled digital transformation. We recommend a structured approach to that oversight, and asking the questions we pose in this whitepaper. We are excited to collaborate with you on the risk governance of your cloud transformation.
Source: Google Cloud Platform

Where is your Cloud Bigtable cluster spending its CPU?

CPU utilization is a key performance indicator for Cloud Bigtable. Understanding CPU spend is essential for optimizing Bigtable performance and cost. We have significantly improved Bigtable’s observability by allowing you to visualize your Bigtable cluster’s CPU utilization in more detail: you can now break the utilization down by dimensions such as app profile, method, and table. This finer-grained reporting can help you make more informed application design choices and help with diagnosing performance-related incidents.

In this post, we present how this visibility may be used in the real world, through example persona-based user journeys.

User Journey: Investigate an incident with high latency

Target Persona: Site Reliability Engineer (SRE)

ABC Corp runs Cloud Bigtable in a multi-tenant environment. Multiple teams at ABC Corp use the same Bigtable instance.

Alice is an SRE at ABC Corp. Alice gets paged because the tail latency of a cluster exceeded the acceptable performance threshold. Alice looks at the cluster-level CPU utilization chart and sees that the CPU usage spiked during the incident window.

P99 latency for app profile personalization-reader spikes

CPU utilization for the cluster spikes

Alice wants to drill down further to get more details about this spike. The primary question she wants to answer is “Which team should I be reaching out to?” Fortunately, teams at ABC Corp follow the best practice of tagging each team’s usage with an app profile in the format <teamname>-<workload-type>.

The Bigtable instance has the following app profiles:

- revenue-updater
- info-updater
- personalization-reader
- personalization-batch-updater

The instance’s data is stored in the following tables:

- revenue
- client-info
- personalization

She uses the CPU per app profile chart to determine that the personalization-batch-updater app profile utilized the most CPU during the time of the incident, and also saw a spike that corresponded with the spike in latency of the serving path traffic under the personalization-reader app profile.

At this point, Alice knows that the personalization-batch-updater traffic is adversely impacting the personalization-reader traffic. She further digs into the dashboards in Metrics Explorer to figure out the problematic method and table.

CPU usage breakdown by app profile, table and method

Alice has now identified the personalization-batch-updater app profile, the personalization table, and the MutateRows method as the reason for the increase in CPU utilization that is causing high tail latency of the serving path traffic.

With this information, she reaches out to the personalization team to provision the cluster correctly before the batch job starts so that the performance of other tenants is not affected. The following options can be considered in this scenario:

- Run the batch job on a replicated instance with multiple clusters. Provision a dedicated cluster for the batch job and use single-cluster routing to completely isolate the serving path traffic from the batch updates.
- Provision more nodes for the cluster before the batch job starts and for the duration of the batch job (see the sketch below). This option is less preferred than option 1, since serving path traffic may still be impacted. However, this option is more cost effective.
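For the second option, the resize can be scripted around the batch job’s schedule. The snippet below is a minimal sketch using the Cloud Bigtable admin client for Python; the instance ID, cluster ID, and node counts are illustrative.

```python
# Minimal sketch: temporarily add nodes to a Bigtable cluster around a
# batch job. Instance ID, cluster ID, and node counts are illustrative.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("prod-instance")
cluster = instance.cluster("prod-instance-c1")

def resize(node_count: int) -> None:
    cluster.reload()                # fetch the current cluster configuration
    cluster.serve_nodes = node_count
    operation = cluster.update()    # returns a long-running operation
    operation.result(timeout=600)   # block until the resize completes

resize(12)  # scale up before the personalization batch job starts
# ... run the batch job under its own app profile ...
resize(6)   # scale back down once the batch job finishes
```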
User Journey: Schema and cost optimization

Target Persona: Developer

Bob is a developer who is onboarding a new workload on Bigtable. He completes the development of his feature and moves on to the performance benchmarking phase before releasing to production. He notices that both the throughput and latency of his queries are worse than he expected and begins debugging the issue. His first step is to look at the CPU utilization of the cluster, which is higher than expected and is hovering around the recommended maximum.

CPU utilization by cluster

To debug further, he looks at the CPU utilization by app profile and the CPU utilization by table charts. He determines that the majority of the CPU is consumed by the product-reader app profile and the product_info table.

CPU utilization by app profile

CPU utilization by table

He inspects the application code and notices that the query includes a value range filter. He realizes that value filters are expensive, so he moves the filtering to the application. This leads to a substantial decrease in Bigtable cluster CPU utilization. Consequently, not only does he improve performance, but he can also lower costs for the Bigtable cluster.

CPU utilization by cluster after removing value range filter

CPU utilization by app profile after removing value range filter

CPU utilization by table after removing value range filter

We hope that this blog helps you understand why and when you might want to use our new observability metric – CPU per app profile, method, and table.

Accessing the metrics

These metrics can be accessed in the Bigtable Monitoring UI under the Tables and Application Profiles tabs. To see the method breakdown, view the metric in Metrics Explorer, which you can also navigate to from the Cloud Monitoring UI.
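If you prefer to pull these breakdowns programmatically, the Cloud Monitoring API exposes the same data. The sketch below uses the Python client; the metric type and label names are written as assumptions based on the breakdowns described above, so verify the exact identifiers in Metrics Explorer before relying on them.

```python
# Minimal sketch: read Bigtable CPU utilization broken down by app profile,
# method, and table via the Cloud Monitoring API. The metric type and label
# names here are assumptions; confirm them in Metrics Explorer first.
import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"  # placeholder
METRIC_TYPE = "bigtable.googleapis.com/cluster/cpu_load_by_app_profile_by_method_by_table"

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": f'metric.type = "{METRIC_TYPE}"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    labels = dict(series.metric.labels)
    latest = series.points[0].value.double_value if series.points else 0.0
    print(labels.get("app_profile"), labels.get("method"), labels.get("table"), latest)
```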
Source: Google Cloud Platform

How Bayer Crop Science uses BigQuery and geobeam to improve soil health

Bayer Crop Science uses Google Cloud to analyze billions of acres of land to better understand the characteristics of the soil that produces our food crops. Bayer’s teams of data scientists are leveraging services from across Google Cloud to load, store, analyze, and visualize geospatial data to develop unique business insights. And because much of this important work is done using publicly available data, you can too!

Agencies such as the United States Geological Survey (USGS), National Oceanic and Atmospheric Administration (NOAA), and the National Weather Service (NWS) perform measurements of the earth’s surface and atmosphere on a vast scale, and make this data available to the public. But it is up to the public to turn this data into insights and information. In this post, we’ll walk you through some ways that Google Cloud services such as BigQuery and Dataflow make it easy for anyone to analyze earth observation data at scale.

Bringing data together

First, let’s look at some of the datasets we have available. For this project, the Bayer team was very interested in one dataset in particular from ISRIC, a custodian of global soil information. ISRIC maps the spatial distribution of soil properties across the globe, and collects soil measurements such as pH, organic matter content, nitrogen levels, and much more. These measurements are encoded into “raster” files, which are large images where each pixel represents a location on the earth, and the “color” of the pixel represents the measured value at that location. You can think of each raster as a layer, which typically corresponds to a table in a database. Many earth observation datasets are made available as rasters, and they are excellent for storing gridded data such as point measurements, but it can be difficult to understand spatial relationships between different areas of a raster, and between multiple raster tiles and layers.

Processing data into insights

To help with this, Bayer used Dataflow with geobeam to do the heavy lifting of converting the rasters into vector data by turning them into polygons, reprojecting them to the WGS 84 coordinate system used by BigQuery, and generating h3 indexes to help us connect the dots — literally. Polygonization in particular is a very complex operation, and its difficulty scales exponentially with file size, but Dataflow is able to divide and conquer by splitting large raster files into smaller blocks and processing them in parallel at massive scale. You can process any amount of data this way, at a scale and speed that is not possible on any single machine using traditional GIS tools. What’s best is that this is all done on the fly with minimal custom programming. Once the raster data is polygonized, reprojected, and fully discombobulated, the vector data is written directly to BigQuery tables from Dataflow.

Once the data is loaded into BigQuery, Bayer uses BigQuery GIS and the h3 indexes computed by geobeam to join the data across multiple tables and create a single view of all of their soil layers. From this single view, Bayer can analyze the combined data, visualize all the layers at once using BigQuery GeoViz, and apply machine learning models to look for patterns that humans might not see.

Screenshot of Bayer’s soil analysis in GeoViz
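To make the join step concrete, here is a minimal sketch of an h3-keyed join across two soil-layer tables, run from Python with the BigQuery client. The project, dataset, table, and column names (including the h3_index column assumed to be written by the pipeline) are illustrative placeholders, not Bayer’s actual schema.

```python
# Minimal sketch: join two soil-layer tables in BigQuery on a shared h3 index.
# Project, dataset, table, and column names are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
SELECT
  ph.h3_index,
  AVG(ph.value)       AS mean_ph,
  AVG(nitrogen.value) AS mean_nitrogen
FROM `my-project.soilgrids.ph` AS ph
JOIN `my-project.soilgrids.nitrogen` AS nitrogen
  ON ph.h3_index = nitrogen.h3_index
GROUP BY ph.h3_index
"""

# Each output row combines two soil layers for one h3 cell.
for row in client.query(query).result():
    print(row.h3_index, row.mean_ph, row.mean_nitrogen)
```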
Using geospatial insights to improve the business

The soil grid data is essential to help characterize the soil properties of the crop growth environments experienced by Bayer’s customers. Bayer can compute soil environmental scenarios for global crop lands to better understand what their customers experience, in order to aid in testing network optimization, product characterization, and precision product design. It also advances Bayer’s real-world objectives by enabling them to characterize the soil properties of their internal testing network fields, helping establish a global testing network and enabling environmental similarity calculations and historical modeling.

It’s easy to see why developing spatial insights for planting crops is game-changing for Bayer Crop Science, and these same strategies and tools can be used across a variety of industries and businesses.

Google’s mission is to organize the world’s information and make it universally accessible and useful, and we’re excited to work with customers like Bayer Crop Science who want to harness their data to build products that are beneficial to their customers and the environment. To get started building amazing geospatial applications for your business, check out our reference guide to learn more about geospatial capabilities in Google Cloud, and open BigQuery in the Google Cloud console to get started using BigQuery and geobeam for your geospatial workloads.
Source: Google Cloud Platform

The Google Cloud DevOps Awards: Final call for submissions!

DevOps continues to be a major business accelerator for our customers, and we continually see success from customers applying DevOps Research and Assessment (DORA) principles and findings to their organization. This is why the first annual DevOps Awards is targeted at recognizing customers shaping the future of DevOps with DORA. Share your inspirational story, supported by examples of business transformation and operational excellence, today.

With input from over 32,000 professionals worldwide and seven years of research, the Accelerate State of DevOps Report is the largest and longest-running DevOps research of its kind. The different categories of DevOps Awards map closely to the practices and capabilities that drive high performance, as identified by the report. Organizations, irrespective of their size, industry, and region, are able to apply to one or all ten categories. Please find the categories and their descriptions below:

- Optimizing for Speed without sacrificing stability: This award recognizes one Google Cloud customer that has driven improvements in speed without sacrificing quality.
- Embracing easy-to-use tools to improve remote productivity: The research shows that high-performing engineers are 1.5 times more likely to have easy-to-use tools. To be eligible for this award, share your stories on how easy-to-use DevOps tools have helped you improve engineer productivity.
- Mastering effective disaster recovery: This award will go to a customer that demonstrates how a robust, well-tested disaster recovery (DR) plan can protect business operations.
- Leveraging loosely coupled architecture: This award recognizes one customer that successfully transitioned from a tightly coupled architecture to service-oriented and microservice architectures.
- Unleashing the full power of the Cloud: This award recognizes a Google Cloud customer leveraging all five capabilities of cloud computing to improve software delivery and organizational performance. Specifically, these five capabilities are on-demand self-service, broad network access, measured service, rapid elasticity, and resource pooling. Read more about the five essential characteristics of cloud computing.
- Most improved documentation quality: This award recognizes one customer that has successfully integrated documentation into their DevOps workflow using Google Cloud Platform tools.
- Reducing burnout during COVID-19: We will recognize one customer that implemented effective processes to improve work/life balance, foster a healthy DevOps culture, and ultimately prevent burnout.
- Utilizing IT operations to drive informed business decisions: This award will go to one customer that employed DevOps best practices to break down silos between development and operations teams.
- Driving inclusion and diversity in DevOps: To highlight the importance of a diverse organization, this award honors one Google Cloud customer that either prioritizes diversity and inclusion initiatives to transform and strengthen their business, or creates unique solutions that help build a more diverse, inclusive, and accessible workplace for their customers, leading to higher levels of engagement, productivity, and innovation.
- Accelerating DevOps with DORA: This award recognizes one customer that has successfully integrated the most DORA practices and capabilities into their workflow using Google Cloud Platform tools.

This is your chance to show your innovation globally and become a role model for the industry.
Winners will receive invitations to roundtables and discussions, press materials, website and social badges, special announcements, and even a trophy. We are excited to see all your great submissions. Applications are open until January 31st, so apply in the category that best suits your company and stay tuned for our awards show in February 2022! For more information on the awards, visit our webpage and check out The Google Cloud DevOps Awards Guidebook.
Source: Google Cloud Platform

Data governance in the cloud – part 1 – People and processes

In this blog, we’ll cover data governance as it relates to managing data in the cloud. We’ll discuss the operating model, which is independent of technologies whether on-premises or cloud, the processes needed to ensure governance, and finally the technologies that are available to ensure data governance in the cloud. This is a two-part blog on data governance. In this first part, we’ll discuss the role of data governance, why it’s important, and the processes that need to be implemented to run an effective data governance program. In the second part, we’ll dive into the tools and technologies that are available to implement data governance processes, e.g. data quality, data discovery, tracking lineage, and security.

For an in-depth and comprehensive text on data governance, see Data Governance: People, Processes, and Tools to Operationalize Data Trustworthiness.

What is Data Governance?

Data governance is a function of data management that creates value for the organization by implementing processes to ensure high data quality, and provides a platform that makes it easier to share data securely across the organization while ensuring compliance with all regulations. The goal of data governance is to maximize the value derived from data, build user trust, and ensure compliance by implementing the required security measures.

Data governance needs to be in place from the time a piece of data is collected or generated until the point in time at which that data is retired. Along the way, across this full data lifecycle, data governance focuses on making the data available to all stakeholders in a form that they can readily access and use in a manner that generates the desired business outcomes (insights, analysis), and, if relevant, conforms to regulatory standards. These regulatory standards are often an intersection of industry (e.g. healthcare), government (e.g. privacy), and company (e.g. non-partisan) rules and codes of behavior. See more details here.

Why is Data Governance Important?

In the last decade, the data generated by users of mobile phones, health and fitness devices, IoT devices, retail beacons, and more has grown exponentially. At the same time, the cloud has made it easier to collect, store, and analyze this data at a lower cost. As the volume of data and adoption of cloud continue to grow, organizations face a dual mandate: democratize and embed data in all decision making, while ensuring that it is secured and protected from unauthorized use. An effective data governance program is needed to implement this dual mandate, making the organization data-driven on one hand and securing data from unauthorized use on the other. Organizations without an effective data governance program will suffer from compliance violations leading to fines; poor data quality, which leads to lower-quality insights and weaker business decisions; challenges in finding data, which result in delayed analysis and missed business opportunities; and poorly trained AI models, which reduce model accuracy and the benefits of using AI.

An effective data governance strategy encompasses people, processes, and tools and technologies.
It drives data democratization to embed data in all decision making, builds user trust, increases brand value, and reduces the chances of compliance violations, which can lead to substantial fines and loss of business.

Components of Data Governance

People and Roles in Data Governance

A comprehensive data governance program starts with a data governance council composed of leaders representing each business unit in the organization. This council establishes the high-level governing principles for how data will be used to drive business decisions. The council, with the help of key people in each business function, identifies the data domains, e.g. customer, product, patient, and provider. The council then assigns data ownership and stewardship roles for each data domain. These are senior-level roles, and each owner is held accountable and accordingly rewarded for driving the data goals set by the data governance council. Data owners and stewards are assigned from the business; for example, the customer data owner may be from marketing or sales, the finance data owner from finance, and the HR data owner from HR.

The role of IT is that of data custodian. IT ensures the data is acquired, protected, stored, and shared according to the policies specified by data owners. As data custodian, IT does not make the decisions on data access or data sharing. IT’s role is limited to managing technology to support the implementation of data management policies set by data owners.

Processes in Data Governance

Each organization will establish processes to drive towards the implementation of the goals set by the data governance council. The processes are established by data owners and data stewards for each of their data domains. The processes focus on the following high-level goals:

1. Data meets the specified data quality standards, e.g. 98% completeness, no more than 0.1% duplicate values, 99.99% consistent data across different tables, and an agreed definition of on-time delivery.
2. Data security policies ensure compliance with internal and external policies:
   - Data is encrypted at rest and on the wire.
   - Data access is limited to authorized users only.
   - All sensitive data fields are redacted or encrypted, and dynamically decrypted only for authorized users.
   - Data can be joined for analytics in de-identified form, e.g. using deterministic encryption or hashing (see the sketch after this list).
   - Audit logs are available for authorized access as well as unauthorized attempts.
3. Data sharing with external partners is available securely via APIs.
4. Compliance with industry- and geo-specific regulations, e.g. HIPAA, PCI DSS, GDPR, CCPA, LGPD.
5. Data replication is minimized.
6. Centralized data discovery is available for data users via data catalogs.
7. Data lineage can be traced to identify data quality issues and data replication sources, and to help with audits.
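The de-identified join requirement in item 2 can be met with deterministic hashing: the same input always produces the same token, so two datasets can still be joined on the token without exposing the raw value. Below is a minimal, generic sketch (HMAC-SHA-256 with a secret key held outside the datasets); it illustrates the idea rather than any specific Google Cloud feature.

```python
# Minimal sketch: deterministic pseudonymization so datasets remain joinable
# without exposing the raw identifier. The key must be stored and governed
# separately (e.g. in a secrets manager), not alongside the data.
import hashlib
import hmac

PEPPER = b"replace-with-a-secret-key-from-your-secrets-manager"

def pseudonymize(value: str) -> str:
    """Same input -> same token, so joins on the token still work."""
    return hmac.new(PEPPER, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Two independently produced records about the same customer...
orders_row = {"customer_id": pseudonymize("cust-42"), "order_total": 99.50}
support_row = {"customer_id": pseudonymize("cust-42"), "tickets_open": 1}

# ...can still be joined on the de-identified key.
assert orders_row["customer_id"] == support_row["customer_id"]
print(orders_row["customer_id"][:16], "...")
```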
Technology

Implementing the processes specified in the data governance program requires the use of technology. From securing data and retaining and reporting audits to automating monitoring and alerts, multiple technologies are integrated to manage the data lifecycle.

In Google Cloud, a comprehensive set of tools enables organizations to manage their data securely and drive data democratization. Data Catalog enables users to easily find data from one centralized place across Google Cloud. Data Fusion tracks lineage so data owners can trace data at every point in the data lifecycle and fix issues that may be corrupting data. Cloud Audit Logs retain the audits needed for compliance. Dataplex provides intelligent data management, centralized security and governance, automatic data discovery, metadata harvesting, lifecycle management, and data quality with built-in AI-driven intelligence.

We will discuss the use of tools and technologies to implement governance in part 2 of this blog.
Source: Google Cloud Platform

Megatrends drive cloud adoption—and improve security for all

We are often asked if the cloud is more secure than on-premises infrastructure. The quick answer is that, in general, it is. The complete answer is more nuanced and is grounded in a series of cloud security “megatrends” that drive technological innovation and improve the overall security posture of cloud providers and customers.

An on-prem environment can, with a lot of effort, have the same default level of security as a reputable cloud provider’s infrastructure. Conversely, a weak cloud configuration can give rise to many security issues. But in general, the base security of the cloud coupled with a suitably protected customer configuration is stronger than most on-prem environments.

Google Cloud’s baseline security architecture adheres to zero-trust principles—the idea that every network, device, person, and service is untrusted until it proves itself. It also relies on defense in depth, with multiple layers of controls and capabilities to protect against the impact of configuration errors and attacks. At Google Cloud, we prioritize security by design and have a team of security engineers who work continuously to deliver secure products and customer controls. Additionally, we take advantage of industry megatrends that increase cloud security further, outpacing the security of on-prem infrastructure.

These eight megatrends compound the security advantages of the cloud compared with on-prem environments (or at least those that are not part of a distributed or trusted partner cloud). IT decision-makers should pay close attention to these megatrends because they’re not just transient issues to be ignored once 2023 rolls around—they guide the development of cloud security and technology, and will continue to do so for the foreseeable future.

At a high level, these eight megatrends are:

- Economy of scale: Decreasing the marginal cost of security raises the baseline level of security.
- Shared fate: A flywheel of increasing trust drives more transition to the cloud, which compels even higher security and even more skin in the game from the cloud provider.
- Healthy competition: The race by deep-pocketed cloud providers to create and implement leading security technologies is the tip of the spear of innovation.
- Cloud as the digital immune system: Every security update the cloud gives the customer is informed by some threat, vulnerability, or new attack technique often identified by someone else’s experience. Enterprise IT leaders use this accelerating feedback loop to get better protection.
- Software-defined infrastructure: Cloud is software defined, so it can be dynamically configured without customers having to manage hardware placement or cope with administrative toil. From a security standpoint, that means specifying security policies as code, and continuously monitoring their effectiveness.
- Increasing deployment velocity: Because of cloud’s vast scale, providers have had to automate software deployments and updates, usually with automated continuous integration/continuous deployment (CI/CD) systems. That same automation delivers security enhancements, resulting in more frequent security updates.
- Simplicity: Cloud becomes an abstraction-generating machine for identifying, creating, and deploying simpler default modes of operating securely and autonomically.
- Sovereignty meets sustainability: The cloud’s global scale and ability to operate in localized and distributed ways creates three pillars of sovereignty. This global scale can also be leveraged to improve energy efficiency.
Let’s look at these megatrends in more depth.

Economy of scale: Decreasing marginal cost of security

Public clouds are of sufficient scale to implement levels of security and resilience that few organizations have previously constructed. At Google, we run a global network, and we build our own systems, networks, storage, and software stacks. We equip this with a level of default security that has not been seen before, from our Titan security chips, which assure a secure boot, to our pervasive data-in-transit and data-at-rest encryption, and we make available confidential computing nodes that encrypt data even while it’s in use.

We prioritize security, of course, but prioritizing security becomes easier and cheaper because the cost of an individual control at such scale decreases per unit of deployment. As the scale increases, the unit cost of control goes down. As the unit cost goes down, it becomes cheaper to put those increasing baseline controls everywhere. Finally, where there is necessary incremental cost to support specific configurations, enhanced security features, and services to support customer security operations and updates, even the per-unit cost of that will decrease. It may be chargeable, but it is still a lower cost than on-prem services, whose economics are going in the other direction. Cloud is, therefore, the strategic epitome of raising the security baseline by reducing the cost of control. The measurable level of security can’t help but increase.

Shared fate: The flywheel of cloud expansion

The long-standing shared responsibility model is conceptually correct. The cloud provider offers a secure base infrastructure (security of the cloud) and the customer configures their services on that in a secure way (security in the cloud). But if the shared responsibility model is used more to allocate responsibility when incidents occur and less as a means of understanding mutual collective responsibility, then we are not living up to mutual expectations or responsibility.

Taking a broader view of a “shared responsibility” model, we should use such a model to create a mutually beneficial shared fate. We’re in this together. We know that if our customers are not secure, then we as cloud providers are collectively not successful. This shared fate extends beyond just Google Cloud and our customers—it affects all the clouds, because a trust issue in one impacts the trust in all. If that trust issue makes the cloud “look bad,” then current and potential future customers might shy away from the cloud, which ultimately puts them in a less secure position. This is why our security mission is a triad of Secure the Cloud (not only Google Cloud), Secure the Customer (shared fate), and Secure the Planet (and beyond).

Further, “shared fate” goes beyond just the reality of shared consequences. We view this as a philosophy of deeply caring about customer security, which gives rise over time to elements like:

- Secure-by-default configurations. Our default configurations ensure security basics have been enabled and that all customers start from a high security baseline, even if some customers change that later.
- Secure blueprints. Highly opinionated configurations for assemblies of products and services in secure-by-default ways, with actual configuration code, so customers can more easily bootstrap a secure cloud environment.
- Secure policy hierarchies. Setting policy intent at one level in an application environment should automatically configure down the stack so there are no surprises or additional toil in lower-level security settings.
- Consistent availability of advanced security features. Providing advanced features to customers across a product suite, and available for new products at launch, is part of the balancing act between faster new launches and the need for security consistency across the platform. We reduce the risks customers face by consistently providing advanced security features.
- High-assurance attestation of controls. We provide this through compliance certifications, audit content, regulatory compliance support, and configuration transparency for ratings and insurance coverage from partners such as our Risk Protection Program.

Shared fate drives a flywheel of cloud adoption. Visibility into the presence of strong default controls and transparency into their operation increases customer confidence, which in turn drives more workloads coming onto cloud. The presence of and potential for more sensitive workloads in turn inspires the development of even stronger default protections that benefit customers.

Healthy competition: The race to the top

The pace and extent of security feature enhancement to products is accelerating across the industry. This massive, global-scale competition to keep increasing security in tandem with agility and productivity is a benefit to all. For the first time in history, we have companies with vast resources working hard to deliver better security, as well as more precise and consistent ways of helping customers manage security. While some are ahead of others, perhaps sustainably so, what is consistent is that cloud will always lead on-prem environments, which have less of a competitive impetus to provide progressively better security. On-prem may not ever go away completely, but cloud competition drives security innovation in a way that on-prem hasn’t and won’t.

Cloud as the digital immune system: Benefit for the many from the needs of the few(er)

Security improvements in the cloud happen for several reasons:

- The cloud provider’s large number of security researchers and engineers postulate a need for an improvement based on a deep theoretical and practical knowledge of attacks.
- A cloud provider with significant visibility on the global threat landscape applies knowledge of threat actors and their evolving attack tactics to drive not just specific new countermeasures but also means of defeating whole classes of attacks.
- A cloud provider deploys red teams and world-leading vulnerability researchers to constantly probe for weaknesses that are then mitigated across the platform.
- The cloud provider’s software engineers often incorporate and curate open-source software, and often support the community to drive improvements for the benefit of all.
- The cloud provider embraces vulnerability discovery and bug bounty programs to attract many of the world’s best independent security researchers.
- And, perhaps most importantly, the cloud provider partners with many of its customers’ security teams, who have a deep understanding of their own security needs, to drive security enhancements and new features across the platform.

This is a vast, global forcing function of security enhancements which, given the other megatrends, is applied relatively quickly and cost-effectively.
If the customer's organization cannot apply this level of resources, and realistically even some of the biggest organizations can't, then an optimal security strategy is to embrace every security feature update the cloud provides to protect networks, systems, and data. It's like tapping into a global digital immune system.

Software-defined infrastructure: Continuous controls monitoring vs. policy intent

One source of the cloud's comparative advantage over on-prem is that it is a software-defined infrastructure. This is a particular advantage for security, since configuration in the cloud is inherently declarative and programmatically applied. It also means that configuration code can be overlaid with embedded policy intent (policy-as-code and controls-as-code). Customers can validate their configuration by analysis, and then continuously assure that the configuration corresponds to reality. They can model changes and apply them with less operating risk, permitting phased-in changes and experiments. As a result, they can take more aggressive stances and apply tighter controls with less reliability risk. This means they can easily add more controls to their environment and update them continuously. This is another example of where cloud security aligns fully with business and technology agility.

The BeyondProd model and the SLSA framework are prime examples of how our software-defined infrastructure has helped improve cloud security. Where the BeyondCorp framework applies zero-trust principles to how users access services, BeyondProd applies them to protecting cloud services. Just as not all users are in the same physical location or using the same devices, developers do not all deploy code to the same environment. BeyondProd enables microservices to run securely, with granular controls, in public clouds, private clouds, and third-party hosted services.

The SLSA framework applies this approach to the complex nature of modern software development and deployment. Developed in collaboration with the Open Source Security Foundation, the SLSA framework formalizes criteria for software supply chain integrity. That's no small hill to climb, given that today's software is made up of code, binaries, networked APIs, and their assorted configuration files.

Managing security in a software-defined infrastructure means the customer can intrinsically deliver continuous controls monitoring and constant inventory assurance, and can operate at an "efficient frontier" of a highly secure environment without incurring significant operating risk.

Increasing deployment velocity

Cloud providers use a continuous integration/continuous deployment (CI/CD) model. This is a necessity both for enabling innovation through frequent improvements, including security updates supported by a consistent version of each product everywhere, and for achieving reliability at scale. Cloud security and other mechanisms are API-based and uniform across products, which enables configuration to be managed programmatically, also known as configuration-as-code. When configuration-as-code is combined with the cloud's nature as a software-defined infrastructure, customers can implement CI/CD approaches for software deployment and configuration, bringing consistency to their use of the cloud. This automation and increased velocity reduce the time customers spend waiting for fixes and features to be applied, including the time to deploy security features and updates, and permit fast rollback for any reason.
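
To make the policy-as-code idea concrete, here is a minimal sketch, assuming a simplified resource and rule format of our own invention (it is not a real cloud provider's schema or API). It shows the kind of check a CI/CD pipeline could run on every proposed configuration change: declarative resources are validated against policy intent expressed as code, and the deployment is blocked if any rule is violated. The resource names, rules, and the check_policies helper are all illustrative assumptions.

```python
# Minimal policy-as-code sketch: validate a declarative configuration against
# policy intent before it is deployed. Illustrative only; the resource shapes
# and rules below are assumptions, not a real cloud provider's schema.
from typing import Callable, Dict, List

# Declarative configuration, as it might be rendered from configuration-as-code.
config: List[Dict] = [
    {"type": "bucket", "name": "analytics-raw", "encryption": "cmek", "public_access": False},
    {"type": "bucket", "name": "public-assets", "encryption": "default", "public_access": True},
    {"type": "vm", "name": "batch-worker", "external_ip": False, "secure_boot": True},
]

# Policy intent expressed as code: each rule returns True when a resource complies.
policies: Dict[str, Callable[[Dict], bool]] = {
    "buckets must not allow public access":
        lambda r: r["type"] != "bucket" or not r["public_access"],
    "buckets must use customer-managed encryption keys":
        lambda r: r["type"] != "bucket" or r["encryption"] == "cmek",
    "VMs must boot with secure boot enabled":
        lambda r: r["type"] != "vm" or r["secure_boot"],
}

def check_policies(resources: List[Dict]) -> List[str]:
    """Return human-readable violations; an empty list means compliant."""
    return [
        f"{resource['name']}: {rule}"
        for resource in resources
        for rule, compliant in policies.items()
        if not compliant(resource)
    ]

if __name__ == "__main__":
    violations = check_policies(config)
    for violation in violations:
        print("POLICY VIOLATION:", violation)
    # A CI/CD pipeline would fail the build here, blocking the deployment.
    raise SystemExit(1 if violations else 0)
```

Run on every change, a check like this is one way policy intent becomes continuous controls monitoring: the same rules that gate a deployment can be replayed against the exported live configuration to confirm it still corresponds to reality.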
Ultimately, this means that customers can move even faster, yet with demonstrably less risk: having their cake and eating it too, as it were. Overall, we find deployment velocity to be a critical tool for strong security.

Simplicity: Cloud as an abstraction machine

A common concern about moving to the cloud is that it's too complex. Admittedly, starting from scratch and learning all the features the cloud offers may seem daunting. Yet even today's feature-rich cloud offerings are much simpler than the on-prem environments that preceded them, which are also far less robust. The perception of complexity comes from people being exposed to the scope of the whole platform, even though much more of the underlying platform configuration is abstracted away. In on-prem environments, there are large teams of network engineers, system administrators, system programmers, software developers, security engineers, storage admins, and many more roles and teams, each with their own domain or silo to operate in. That loose-flying collection of technologies, with its myriad configuration options and incompatibilities, required a degree of artisanal engineering that represents more complexity, and less security and resilience, than customers will encounter in the cloud.

Cloud is only going to get simpler, because the market rewards cloud providers for abstraction and autonomic operations. In turn, this permits more scale and more use, creating a relentless hunt for further abstraction. As with our digital immune system analogy, the customer should see the cloud as an abstraction and pattern-generating machine: it takes the best operational innovations from tens of thousands of customers and assimilates them for the benefit of everyone. The increased simplicity and abstraction permit more explicit assertion of security policy, in more precise and expressive ways, applied in the right context. Simply put, simplicity removes potential surprise, and security issues are often rooted in surprise.

Sovereignty meets sustainability: Global to local

The cloud's global scale and its ability to operate in localized and distributed ways create three potential pillars of sovereignty, which will be increasingly important in all jurisdictions and sectors. The cloud can intrinsically support the need for national or regional controls, limits on data access, delegation of certain operations, and means for greater portability across services. The global footprint of many cloud providers means that cloud can more easily meet national or regional deployment needs. Workloads can also be more easily deployed to more energy-efficient infrastructures. That, coupled with the cloud's inherent efficiency due to higher resource utilization, means cloud is more sustainable overall. By engaging with customers and policymakers across these pillars, we can provide solutions that address their requirements while optimizing for additional considerations like functionality, cost, infrastructure consistency, and developer experience.

Data sovereignty provides customers with a mechanism to prevent the provider from accessing their data, approving access only for specific provider behaviors that customers think are necessary. Examples of customer controls provided by Google Cloud include storing and managing encryption keys outside the cloud, giving customers the power to grant access to these keys only on the basis of detailed access justifications, and protecting data in use. With these features, the customer is the ultimate arbiter of access to their data.
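
As a rough illustration of the "keys held outside the cloud" idea, the sketch below shows envelope encryption in Python using the cryptography package: a data key protects the payload, an externally held key-encryption key wraps the data key, and only the ciphertext and the wrapped key would ever be handed to the provider. The ExternalKeyManager class and its access-justification check are hypothetical stand-ins, not Google Cloud's external key management or access-approval APIs.

```python
# Envelope-encryption sketch for the "keys stay outside the cloud" idea.
# The ExternalKeyManager below is a hypothetical stand-in for a key service
# the customer runs outside the provider; it is not a real cloud API.
from cryptography.fernet import Fernet


class ExternalKeyManager:
    """Holds the key-encryption key (KEK) outside the cloud provider."""

    def __init__(self) -> None:
        self._kek = Fernet(Fernet.generate_key())  # KEK never leaves this system

    def wrap(self, data_key: bytes) -> bytes:
        return self._kek.encrypt(data_key)

    def unwrap(self, wrapped_key: bytes, justification: str) -> bytes:
        # Hypothetical access-justification check: the customer decides which
        # provider behaviors are acceptable before releasing the key.
        if justification not in {"customer-initiated-support", "customer-access"}:
            raise PermissionError(f"Access denied for justification: {justification}")
        return self._kek.decrypt(wrapped_key)


ekm = ExternalKeyManager()

# Encrypt locally: generate a data key, protect the payload, wrap the data key.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"customer record")
wrapped_key = ekm.wrap(data_key)
# Only `ciphertext` and `wrapped_key` would be stored with the cloud provider.

# Decrypt later: the data key is released only for an approved justification.
plaintext = Fernet(ekm.unwrap(wrapped_key, "customer-access")).decrypt(ciphertext)
assert plaintext == b"customer record"
```

The point of the pattern is that revoking or withholding the externally held key renders the provider-side copy of the data unreadable, which is what makes the customer the ultimate arbiter of access.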
Operational sovereignty provides customers with assurances that the people working at a cloud provider cannot compromise customer workloads. The customer benefits from the scale of a multi-tenant environment while preserving control similar to a traditional on-prem environment. Examples of these controls include restricting the deployment of new resources to specific provider regions, and limiting support personnel access based on predefined attributes such as citizenship or a particular geographic location.

Software sovereignty provides customers with assurances that they can control the availability of their workloads and run them wherever they want, without being dependent on, or locked into, a single cloud provider. This includes the ability to survive events that require them to quickly change where their workloads are deployed and what level of outside connection is allowed. This is only possible when two requirements are met, both of which simplify workload management and mitigate concentration risk: first, customers have access to platforms that embrace open APIs and services; and second, customers have access to technologies that support the deployment of applications across many platforms, in a full range of configurations including multi-cloud, hybrid, and on-prem, using orchestration tooling. Examples of these controls are platforms that allow customers to manage workloads across providers, and orchestration tooling that allows customers to expose a single API backed by applications running on different providers, including proprietary cloud-based and open-source alternatives.

This overall approach also provides a means for organizations (and groups of organizations that make up a sector or national critical infrastructure) to manage concentration risk, either by relying on the increased regional and zonal isolation mechanisms in the cloud, or through improved means of configuring resilient multi-cloud services. This is also why the commitment to open source and open standards is so important.

The bottom line is that cloud computing megatrends will propel security forward faster, for less cost and less effort, than any other security initiative. With the help of these megatrends, the advantage of cloud security over on-prem is inevitable.