Monitoring BigQuery reservations and slot utilization with INFORMATION_SCHEMA

BigQuery Reservations help you manage your BigQuery workloads. With flat-rate pricing, you can purchase BigQuery slot commitments in 100-slot increments on flex, monthly, or yearly plans instead of paying for queries on demand. You can then create and manage buckets of slots, called reservations, and assign projects, folders, or organizations to use the slots in those reservations. By default, queries running in a reservation automatically use idle slots from other reservations. This gives organizations greater control over workload management and ensures that high-priority jobs always have access to the resources they need without contention. Currently, two ways to monitor these reservations and slots are the BigQuery Reservations UI and Cloud Monitoring.

But how does an organization know how many slots to delegate to a reservation? Or whether a reservation is being over- or underutilized? Or what the overall slot utilization is across all reservations? In this blog post, we discuss how we used BigQuery’s INFORMATION_SCHEMA system tables to create the System Tables Reports Dashboard and answer these questions.

Using INFORMATION_SCHEMA tables

The INFORMATION_SCHEMA metadata tables contain relevant, granular information about jobs, reservations, capacity commitments, and assignments. Using the data from these tables, users can create custom dashboards that report on the metrics they care about in ways that inform their decision-making.

While several tables make up INFORMATION_SCHEMA, a few are specifically relevant to monitoring slot utilization across jobs and reservations. The JOBS_BY_ORGANIZATION table is the primary table for extracting job-level data across all projects in the organization. This information can be supplemented with data from the CAPACITY_COMMITMENT_CHANGES_BY_PROJECT, RESERVATION_CHANGES_BY_PROJECT, and ASSIGNMENT_CHANGES_BY_PROJECT tables to include details about specific capacity commitments, reservations, and assignments. It’s worth noting that the data retention period for INFORMATION_SCHEMA is 180 days and that all timestamps are in UTC. For information about the permissions required to query these tables, follow the links above.

Monitoring with the System Tables Reports Dashboard

The System Tables Reports Dashboard is a Data Studio dashboard that queries data from INFORMATION_SCHEMA using Data Studio’s BigQuery connector. Organizations can use this dashboard and its underlying queries as-is, or as a starting point for more complex solutions in Data Studio or any other dashboarding tool.

Daily Utilization Report

The Daily Utilization Report gives an overview of an organization’s daily slot utilization, measured in slot days. The primary chart in the report shows overall slot utilization per day alongside the organization’s active capacity commitments. This chart is ideal for gaining a high-level understanding of how an organization’s usage compares to the total number of slots it has committed to (or purchased).

Slot utilization is derived by dividing the total number of slot-milliseconds (total_slot_ms from INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION) consumed by all jobs on a given day by the number of milliseconds in a day (1000 * 60 * 60 * 24). This aggregate-level computation provides the most accurate approximation of the overall slot utilization for a given day.
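The dashboard’s exact queries live in the GitHub repository linked at the end of this post, but as a rough sketch of this calculation (assuming jobs in the US region and the google-cloud-bigquery Python client), it might look like this:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Approximate average slot utilization per day: the slot-milliseconds
# consumed by all jobs, divided by the milliseconds in a day.
query = """
SELECT
  DATE(creation_time) AS usage_date,
  SUM(total_slot_ms) / (1000 * 60 * 60 * 24) AS avg_slots
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY usage_date
ORDER BY usage_date
"""
for row in client.query(query):
    print(row.usage_date, round(row.avg_slots, 1))
```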
Note that this calculation is most accurate for organizations with consistent daily slot usage. If an organization does not have consistent slot usage, this number might be lower than expected. For more information about calculating average slot utilization, see our public documentation.

This report also includes charts that break down utilization further by job type, project ID, reservation ID, user email, and top usage.

Hourly Utilization Report

The Hourly Utilization Report is similar to the Daily Utilization Report but gives an overview of an organization’s hourly slot utilization, measured in slot hours. This report can help an organization understand its workloads at a more granular level in a way that helps with workload management.

Reservation Utilization Report

The Reservation Utilization Report gives an overview of an organization’s current assignments and reservation utilization over the last 7 and 30 days.

The current reservation assignments table displays details for the current assignments across an organization, including the assignment type, job type, and reservation capacity.

The reservation utilization tables display information about the utilization of a given reservation over the last 7 or 30 days. This includes average weekly or monthly slot utilization, average reservation capacity, current reservation capacity, and average reservation utilization. Average weekly and monthly utilization is derived using the same calculation as daily utilization, adjusted for a week or month accordingly.

These tables are great for understanding whether an organization is making the most of its allocated reservations. Reservations that are severely over- or underutilized are colored red, while reservations that are close to 100% utilization are colored green. That said, because idle slot capacity is shared across reservations by default, underutilized reservations do not necessarily indicate that slots are being wasted. Instead, the jobs in that reservation simply do not need as many slots reserved, and those slots could be allocated to a different reservation.

Job Execution Report

The Job Execution Report provides a per-job breakdown of slot utilization, among other job statistics. The purpose of this report is to allow users to drill down into individual jobs or understand trends in a specific group of jobs.

In this report, average slot utilization is displayed at a per-job level instead of an aggregate level. It is calculated by dividing total_slot_ms for the job by the job’s duration in milliseconds, which can be computed by subtracting creation_time from end_time (a sketch of such a query appears at the end of this post).

Job Error Report

The Job Error Report provides an overview of the types of errors encountered by jobs in the organization, aggregated by project and error reason, among other fields. The INFORMATION_SCHEMA tables provide detailed information about job-level errors, so depending on an organization’s use case, this report can be customized with more specific error-reporting information.

What’s next?

To learn more about INFORMATION_SCHEMA and the System Tables Reports Dashboard, check out the videos in our Modernizing Data Lakes and Data Warehouses with GCP course on Coursera. For more detailed information about each report, the queries used, and how to copy the dashboard for your own organization, visit our GitHub repository.
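As promised above, here is a sketch of the kind of per-job query the Job Execution Report is built on (again assuming the US region; the dashboard’s exact queries are in the GitHub repository):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Per-job average slots: total_slot_ms divided by the job's duration
# in milliseconds (end_time - creation_time).
query = """
SELECT
  job_id,
  user_email,
  total_slot_ms
    / TIMESTAMP_DIFF(end_time, creation_time, MILLISECOND) AS avg_slots
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND end_time > creation_time
ORDER BY avg_slots DESC
LIMIT 20
"""
for row in client.query(query):
    print(row.job_id, row.user_email, round(row.avg_slots, 1))
```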
Source: Google Cloud Platform

Why you need to explain machine learning models

Many companies today are actively using AI or have plans to incorporate it into their future strategies: 76% of enterprises now prioritize AI and ML over other initiatives in their IT budgets, and the global AI industry is expected to reach over $260 billion by 2027. But as AI and advanced analytics become more pervasive, the need for transparency around how AI technologies work will be paramount. In this post, we’ll explore why explainable AI (XAI) is essential to widespread AI adoption, common XAI methods, and how Google Cloud can help.

Why you need to explain ML models

AI technology suffers from what we call a black box problem: you might know the question or the data (the input), but you have no visibility into the steps or processes that produce the final answer (the output). This is especially problematic in deep learning and artificial neural network approaches, which contain many hidden layers of nodes that “learn” through pattern recognition.

Stakeholders are often reluctant to trust ML projects because they don’t understand what they do. It’s hard for decision-makers to relinquish control to a mysterious machine learning model, especially when it’s responsible for critical decisions. AI systems are making predictions with profound impact, and in some areas, like healthcare or driverless cars, they can mean the difference between life and death. It’s often hard to build support for the idea that a model can be trusted to make decisions, let alone make them better than a human can, especially when there is no explanation of how a decision was made. How did the AI model arrive at a prediction or decision? How can you be sure no bias is creeping into the algorithms? Is there enough transparency and interpretability to trust the model’s decision? Decision-makers want to know the reasons behind an AI-based decision so they have confidence that it is the right one. In fact, according to a PwC survey, the majority of CEOs (82%) believe that AI-based decisions must be explainable to be trusted.

What is Explainable AI?

Explainable AI (XAI) is a set of tools and frameworks that can be used to help you understand how your machine learning models make decisions. This shouldn’t be confused with a complete step-by-step deconstruction of an AI model, which can be close to impossible if you’re attempting to trace the millions of parameters used in deep learning algorithms. Rather, XAI aims to provide insights into how models work, so human experts are able to understand the logic that goes into making a decision. When you apply XAI successfully, it offers three important benefits:

1. Increases trust in ML models

When decision-makers and other stakeholders have more visibility into how an ML model found its final output, they are more likely to trust AI-based systems. Explainable AI tools can be used to provide clear and understandable explanations of the reasoning that led to the model’s output. Say you are using a deep learning model to analyze medical images like X-rays; you can use explainable AI to produce saliency maps (i.e., heatmaps) that highlight the pixels that were used to reach the diagnosis. For instance, an ML model that classifies a fracture would also highlight the pixels used to determine that the patient is suffering from a fracture.

2. Improves overall troubleshooting

Explainability in AI can also enable you to debug a model and troubleshoot how well it is working. Let’s imagine your model is supposed to identify animals in images.
Over time, you notice that the model keeps classifying images of dogs playing in snow as foxes. Explainable AI tools make it easier to figure out why this error keeps occurring. As you look into the explainable AI models you’re using to show how a prediction is made, you discover that the ML model is using the background of an image to differentiate between dogs and foxes: it has mistakenly learned that a domestic background means a dog, and snow in an image means the image contains a fox.

3. Busts biases and other potential AI potholes

XAI is also useful for identifying sources of bias. For example, you might have a model that identifies when cars are making illegal left-hand turns. When you are asked to define what the violation is based on in an image, you find out that the model has picked up a bias from the training data: instead of focusing on cars turning left illegally, it’s looking to see if there is a pothole. This influence could be caused by a skewed dataset that contained a large number of images taken on poorly maintained roads, or even by real-world bias, where a ticket might be more likely to be given out in an underfunded area of a city.

Where does explainability fit into the ML lifecycle?

Explainable AI should not be an afterthought at the end of your ML workflow. Instead, explainability should be integrated and applied every step of the way, from data collection and processing to model training, evaluation, and model serving. There are a few ways you can work explainability into your ML lifecycle. This could mean using explainable AI to identify dataset imbalances, ensure model behavior satisfies specific rules and fairness metrics, or show model behavior both locally and globally. For instance, if a model was trained using synthetic data, you need to ensure it behaves the same when it uses real data. Or, as we discussed above with deep learning models for medical imaging, a common form of explainability is to create heatmaps that identify the pixels used for image classification.

Another tool you might use is sliced evaluations of machine learning model performance. According to our AI principles, you should avoid creating or reinforcing unfair bias; AI algorithms and datasets can often reflect or reinforce it. If you notice that a model is not performing well for a small minority of cases, it’s important to address any fairness concerns. Sliced evaluations allow you to explore how different parts of a dataset might be affecting your results. In the case of imaging models, you might explore different images based on factors like poor lighting or overexposure. We also recommend creating model cards, which can help explain any potential limitations and any trade-offs you have to make for performance, and then provide a way to test what the model does.

Explainable AI methods

When we talk about explainable AI methods, it’s important to understand the difference between global and local methods. A global method describes the overall structure of how a model makes decisions; a local method explains how the model made a decision for a single instance. For instance, a global method might be a table that includes all the features the model uses, ranked by the overall importance they have for making a decision.
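To make that concrete, here is a minimal, generic sketch of building such a ranking with scikit-learn on a public dataset (an illustration of the idea, not a Google Cloud feature):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops -- a simple, model-agnostic global explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked[:10]:
    print(f"{name}: {importance:.4f}")
```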
Feature importance tables like this are commonly used to explain structured data models and to help people understand how specific input variables impact the final output of a model. But what about explaining how a model makes a decision for an individual prediction or a specific person? This is where local methods come into play. For the purpose of this post, we’ll cover local methods based on how they can be used to explain model predictions on image data. Here are the most common explainable AI local methods:

- Local interpretable model-agnostic explanations (LIME)
- Kernel Shapley additive explanations (KernelSHAP)
- Integrated gradients (IG)
- eXplanation with Ranked Area Integrals (XRAI)

Both LIME and KernelSHAP break an image down into patches, which are randomly sampled from the prediction to create a number of perturbed (i.e., changed) images. Each perturbed image looks like the original, but parts of it have been zeroed out. The perturbed images are then fed to the trained model, which is asked to make a prediction; for example: is this image a frog or not a frog? The model then provides the probability that the image is a frog. Based on the patches that were selected, you can rank the importance of each patch to the final probability. Both of these methods can be used to help explain the local importance of each region in determining whether the image contains a frog.

Integrated gradients is a technique that assigns importance values based on gradients of the final output. IG takes a baseline image and compares it to the actual pixel values of the image the model is meant to classify. It measures how the gradients change along the path from the baseline image to the point where the model makes a prediction, producing an attribution mask that shows which pixels the model is using to classify the image.

XRAI combines the three methods mentioned above, pairing patch identification with integrated gradients to show the salient regions that have the most impact on a decision, rather than individual pixels. The larger regions in this approach tend to deliver better results.

Another emerging method that we’re starting to incorporate at Google Cloud is TracIn, a simple, scalable approach that estimates training data influence. The quality of an ML model’s training data can have a huge impact on its performance. TracIn tracks mislabeled examples and outliers across datasets and helps explain predictions by assigning an influence score to each training example. If you are training a model to predict whether images contain zucchinis, you would look at the gradient changes to determine which training examples reduce loss (proponents) and which increase loss (opponents). TracIn allows you to identify which images help the model learn to identify a zucchini and which are used to distinguish what’s not a zucchini.

Using Explainable AI in Google Cloud

We launched Vertex Explainable AI to help data scientists not only improve their models but also provide insights that make them more accessible for decision-makers. Our aim is to provide a set of helpful tools and frameworks that can help data science teams in a number of ways, such as explaining how ML models reach a conclusion, debugging models, and combating bias. With the Vertex Explainable AI platform, you can:
- Design interpretable and inclusive AI. Build AI systems from the ground up with Vertex Explainable AI tools designed to help detect and resolve bias, drift, and other gaps in data and models. With AI Explanations, data scientists can use AutoML Tables, Vertex Predictions, and Notebooks to explain how much a factor contributed to model predictions, helping to improve datasets and model architecture. The What-If Tool enables you to investigate model performance across a wide range of features, optimize strategies, and even manipulate individual datapoint values.

- Deploy ML models with confidence by providing human-friendly explanations. When deploying a model on AutoML Tables or Vertex AI, you can reflect patterns found in your training data to get a prediction and a score in real time about how different factors affected the final output (a minimal sketch of requesting explanations appears at the end of this post).

- Streamline model governance with performance monitoring and training. You can easily monitor predictions and provide ground truth labels for prediction inputs with the continuous evaluation feature. Vertex Data Labeling compares predictions with ground truth labels to incorporate feedback and optimize model performance.

AI continues to be an exciting frontier that will shape and inspire the future of enterprises across all industries. But for AI to reach its full potential and gain wider adoption, all stakeholders, not just data scientists, need to understand how ML models work. That’s why we remain committed to ensuring that no matter where AI goes in the future, it will serve everyone, be it customers, business users, or decision-makers.

Next steps

Learn how to serve explanations alongside predictions by running this Jupyter notebook on Cloud AI Platform. Step-by-step instructions are also available on Qwiklabs. And if you are interested in what’s coming in machine learning over the next five years, check out our Applied ML Summit to hear from Spotify, Google, Kaggle, Facebook, and other leaders in the machine learning community.
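As promised above, once a model has been deployed to a Vertex AI endpoint with an explanation spec, requesting attributions can be as simple as the following sketch (the project, endpoint ID, and feature names here are hypothetical):

```python
from google.cloud import aiplatform

# Hypothetical project, region, and endpoint ID -- substitute your own.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")

# explain() returns predictions plus per-feature attributions for models
# deployed with an explanation spec.
response = endpoint.explain(instances=[{"age": "39", "income": "52000"}])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)
```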
Source: Google Cloud Platform

The top cloud capabilities industry leaders want for sustained innovation

Cloud computing technologies help companies and governments deliver essential services to their customers and citizens; never was this more evident than during the pandemic. From enabling the quick rollout of indispensable programs like unemployment assistance or online portals for COVID-19 testing, to leveraging on-demand infrastructure to meet enterprise compute needs, cloud empowers IT leaders to react and respond quickly under extreme pressure.

With increasingly complex environments, which include a mix of proprietary and vendor solutions, legacy apps, and geographically distributed resources living on-premises or across multiple clouds, enterprises and agencies want to achieve more agility and improve cost efficiency without getting locked into a single vendor. At the same time, they are looking to leverage emerging technologies, such as edge solutions enabled by the rollout of 5G.

Multicloud and hybrid cloud approaches, coupled with open-source technology adoption, enable IT teams to take full advantage of the best the cloud has to offer. And a recent study from the International Data Group (IDG) shows just how much of a priority this has become for business leaders.

Multicloud and hybrid cloud capabilities among ‘must-haves’ from cloud providers

After more than a year of uncertainty, organizations are applying the lessons learned along the way as they assess the capabilities they need from their cloud providers to keep pace with rapidly evolving requirements.

The results of the Google-commissioned study by IDG, based on a global survey of over 2,000 IT decision-makers, show that multicloud/hybrid cloud support and other cutting-edge technologies, such as containers, microservices, service mesh, and AI-powered analytics, are now major considerations for enterprises when selecting a cloud provider. This is true at almost all companies, regardless of their digital maturity, including those that are fully transformed (digital natives), currently implementing a strategy (digital forwards), or not yet implementing any transformation strategy (digital conservatives).

Organizations are progressively more committed to the cloud, especially those further along on their digital transformation journey. The survey found that the majority of digital-native (83%) and digital-forward (81%) companies list multicloud/hybrid cloud support and cutting-edge technology as key considerations when selecting a cloud provider. The same factors are still among the top considerations for over 70% of digital conservatives.

Another trend that goes hand in hand with multicloud and hybrid cloud support is the broader adoption of open-source software solutions. In particular, open-source technologies address barriers that arise from the need to modernize or integrate legacy systems and technologies, a primary pain point that impedes transformation efforts. While once viewed as unconventional, open source has become vital to unlocking cloud innovation, delivering the speed and rich capabilities needed to accelerate production and increase creativity. This link between cloud and open source is also reflected in the IDG study results. Globally, 74% of IT leaders say they prefer open-source cloud solutions; this number jumps to 82% at digital-forward organizations and 87% for digital natives.
By comparison, the same is true for just over half of digital-conservative companies.

Freedom to innovate—anywhere

Google Cloud’s commitment to multicloud, hybrid cloud, and open source enables our customers to use their data and to build and run apps in the environment of their choice, whether on-premises, in Google Cloud, on another cloud provider, or across geographic regions. To learn more about the IDG findings and how IT leaders are creating new ways to operate and innovate, download the full report.

Interested in how Google Cloud’s commitment to multicloud/hybrid cloud and OSS empowers transformation and drives innovation? Our distributed cloud services, including Anthos and Google Kubernetes Engine (GKE), provide consistency between any public and private clouds as well as a solid foundation for modernization and future growth, while allowing developers to build, manage, and innovate faster, anywhere. Anthos extends Google Cloud’s best-in-breed solutions to any environment, enabling teams to modernize apps faster and establish operational consistency. It can be used for both legacy and cloud-native deployments, running on existing virtual machines (VMs) and bare-metal services, while minimizing vendor lock-in and meeting regulatory requirements.

Google Cloud’s commitment to multicloud, hybrid cloud, and open source enables organizations to leverage their data and run their applications and services in the environment of their choice, rather than relying on a single-vendor solution. We aim to support our customers’ journeys to reinvention, and we hope that together we can pave the way for whatever is coming next.
Source: Google Cloud Platform

All about cables: A guide to posts on our infrastructure under the sea

From data centers and cloud regions to subsea cables, Google is committed to connecting the world. Our investments in infrastructure aim to further improve our network—one of the world’s largest—which helps improve global connectivity, supporting users and Google Cloud customers. Our subsea cables play a starring role in this work, linking up cloud infrastructure that includes more than 100 network edge locations and over 7,500 edge caching nodes.

As it turns out, readers of this blog seem to find what happens under the sea just as fascinating as what’s going on in the cloud. Posts on our cables are consistently among our most popular, which is why we brought them together for you here so you can take a deeper dive (pun intended). Here’s a list of our most popular posts on our underwater infrastructure:

2021
- Hola, South America! Announcing the Firmina subsea cable
- This bears repeating: Introducing the Echo subsea cable
- The Dunant subsea cable, connecting the US and mainland Europe, is ready for service

2020
- Announcing the Grace Hopper subsea cable, linking the U.S., U.K. and Spain

2019
- Introducing Equiano, a subsea cable from Portugal to South Africa
- A quick hop across the pond: Supercharging the Dunant subsea cable with SDM technology
- Curie subsea cable set to transmit to Chile, with a pit stop to Panama

2018
- Expanding our cloud network for a faster, more reliable experience between Australia and Southeast Asia
- Delivering increased connectivity with our first private trans-Atlantic subsea cable

2017
- Google invests in INDIGO undersea cable to improve cloud infrastructure in Southeast Asia

2016
- New undersea cable expands capacity for Google APAC customers and users
- Google Cloud customers run at the speed of light with new FASTER undersea pipe
- A journey to the bottom of the internet

Our cable systems provide the speed, capacity and reliability Google is known for worldwide, and at Google Cloud, our customers can make use of the same network infrastructure that powers Google’s own services. To learn more, you can view our network on a map, or read more about our network.
Source: Google Cloud Platform

How Zebra Technologies manages security & risk using Security Command Center

Zebra Technologies enables businesses around the world to gain a performance edge: our products, software, services, analytics, and solutions are used principally in the manufacturing, retail, healthcare, transportation and logistics, and public sectors. With more than 10,000 partners across 100 countries, our businesses and workflows operate across the interconnected world. Many of these workflows run through Google Cloud, where we host environments both for our own enterprise use and for our customer solutions. As CISO for Zebra, it’s my team’s responsibility to keep our organization’s network and data secure. We must ensure the confidentiality, integrity, and availability of all our assets and data. Google Cloud’s Security Command Center (SCC) supports our approach to protecting our technology environment.

Adopting a cloud-native security approach

Securing our environment at all times is of utmost importance. As a security-conscious organization, we run best-of-breed security technologies in our environment. During our cloud transformation, we found security visibility gaps that our existing tools did not address when it came to our cloud assets and infrastructure. We needed to augment our cloud-agnostic security stack with cloud-native security tools that could provide a holistic view of our Google Cloud assets. That’s when we found Security Command Center.

SCC’s cloud-native, platform-level integration with Google Cloud provides real-time visibility into our assets. It gives us the ability to see the resources that are currently deployed, their attributes, and changes to those resources in real time. For instance, SCC gives us visibility into how many projects are new, which resources such as Compute Engine instances and Cloud Storage buckets are deployed, which images are running on our containers, and security findings in our firewall configurations.

At Zebra, the infosec team partners with product and security solution teams to manage risk and to provide technology platforms that detect and respond to threats in our environment. We use SCC across our teams to monitor our Google Cloud environment, quickly discover misconfigurations, and detect and respond to threats. We were also looking for new ways to get additional vulnerability information from vulnerability scanners into the hands of our development and support teams. Security Command Center emerged as a means to communicate that information through a common user interface. SCC’s third-party integration capabilities enabled us to surface findings from our vulnerability scanner in the same Security Command Center user interface, so our development and support teams can assess risk and drive resolution. The compliance benchmark views provided out of the box by Security Command Center revealed how we stacked up against key industry standards and showed best practices for taking corrective action.

Operationalizing with Security Command Center Premium

We run a 24/7 infosec operation that monitors and responds to threats across our environment. We use SCC for critical detection and response both in our Security Operations Center (SOC) and in our Vulnerability Management functions. SCC helps us identify threats such as potential malware infections, data exfiltration, cryptomining activity, brute-force SSH attacks, and more. We particularly like SCC’s container security capabilities, which enable us to detect top attack techniques and foreclose adversarial pathways for container threats.
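Findings like these are also available programmatically, which is part of what makes the SIEM integration described next possible. As a minimal sketch using the Security Command Center Python client (the organization ID and filter here are hypothetical):

```python
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# Hypothetical organization ID; "sources/-" wildcards across all sources.
org_id = "1234567890"
findings = client.list_findings(
    request={
        "parent": f"organizations/{org_id}/sources/-",
        "filter": 'state="ACTIVE"',
    }
)
for result in findings:
    print(result.finding.category, result.finding.resource_name)
```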
We’ve also integrated Security Command Center into our Security Information and Event Management (SIEM) tool to ensure that threat detections surfaced by Security Command Center get an immediate response. The integration capabilities provided by SCC allow us to seamlessly embed it into our SOC design, where we triage and respond to events. Being able to act on SCC detections in the same manner and in the same timeframe as detections from our other tools allows us to respond effectively using the same standard processes. Our SOC team has seen great value in how SCC allows us to pivot directly from a finding to detailed logs, which has helped us significantly reduce triage time.

We have a dedicated Vulnerability Management function that addresses misconfiguration risks in our resources and vulnerabilities in our applications. Our Vulnerability Management team uses the information presented in SCC’s dashboards to assess potential configuration risks and works with asset owners to drive resolution. SCC helped us see what needed to be fixed; as new resources came onboard into our environment, we were able to detect whether those assets had misconfigurations or violated any compliance controls. This team uses third-party tools to scan for known common vulnerabilities and exposures (CVEs). We liked how SCC integrates with third-party vulnerability tools, so we can use SCC as a single pane of glass for our vulnerability information. For instance, we can use SCC to identify critical assets that have misconfigurations or vulnerabilities and assess the severity in one view, so we can act immediately to fix the issue. Before deploying SCC, we relied on spreadsheets or other mechanisms to share this information. Now, all security vulnerability findings exist in a unified view, and all relevant information is available for our teams to digest and address from the same place.

We also engage our engineering development teams to take ownership of addressing the security findings for the assets in their line of business. This is what the industry refers to as “shift left” security. We have multiple development teams at Zebra, and believe they should have the power to address security findings within their teams. SCC’s granular access control (scoped view) capability enables us to provide asset visibility and security findings in real time based on roles and responsibilities. This ensures individual teams see only the assets and findings for which they are responsible. It helps us limit sensitive information to those with a need to know, and helps individual teams act quickly because they are not overwhelmed or distracted by security findings outside their ownership. It also helps us reduce security risk and achieve compliance goals by limiting access as needed within our organization. In addition, this scoped-view capability has created operational efficiencies in how we address asset misconfigurations and vulnerability findings.

Securing the future together with Google Cloud

Security Command Center has become integral to our security fabric thanks to its native platform-level integration with Google Cloud, as well as its ease of use. Overall, Security Command Center helps us continuously monitor our Google Cloud environment, providing real-time visibility and a prioritized view of security findings so that we can quickly respond and take action. Both Zebra and Google share the goal of keeping cloud environments secure.
With the help of Google Cloud and Security Command Center, Zebra Technologies has improved our overall security posture and workload protection. It has also helped us enhance collaboration between our development and security teams, and manage and lower the company’s security risk.

Google Cloud blog note: Security Command Center is a native security and risk management platform for Google Cloud. The Security Command Center Premium tier provides built-in services that enable you to gain visibility into your cloud assets, discover misconfigurations and vulnerabilities in your resources, detect threats targeting your assets, and help maintain compliance based on industry standards and benchmarks. To enable a Premium subscription, contact your Google Cloud Platform sales team. You can learn more about Security Command Center and how it can help with your security operations in our product documentation.
Source: Google Cloud Platform

How BBVA is using Cloud SQL for its next-generation IT initiatives

Editor’s note: Today we’re hearing from Gerardo Mongelli de Borja, Diego Garcia Teba, and Víctor Armingol Guisado, Google Cloud Architects at BBVA. They share how Google Cloud fits into their multi-cloud strategy and how their team provides Google Cloud services to stakeholders in BBVA.

Banco Bilbao Vizcaya Argentaria, S.A. (BBVA) is a Spanish multinational financial services company and one of the largest financial institutions in the world. Based in Madrid and Bilbao, Spain, BBVA has been engaged in a digital transformation on a multi-cloud architecture that started nine years ago. Services like Cloud SQL and other solutions from Google Cloud have played instrumental roles in our transformation. Financial institutions aren’t typically known for their quick embrace of new technology, but our willingness to try and benefit from new Google Cloud solutions has helped us carve a trailblazing path of digital adoption and innovation, not only within the Spanish banking sector but within the European and Americas sectors as well.

How we started on Google Cloud

We began building on Google Cloud by deploying a social network service on Google App Engine with Firestore (then called Datastore). This proved to be an incredibly flexible solution with such short delivery times that we decided to integrate our organization’s intranet on the same system. From that point forward, BBVA stakeholders requested a number of internal employee-related applications, and we developed them using the same App Engine/Firestore system.

Since then, BBVA has further extended its cloud adoption. We established a global architecture department whose main purpose was to build an internal cloud called Ether Cloud Services (ECS). 90 to 95 percent of our current Google Cloud services were born in the cloud, and to avoid vendor lock-in, we’ve designed and built a multi-cloud architecture, with our entire ECS spanning Google Cloud, AWS, and Azure. To better iterate on our long-term plans, our section of the engineering team was moved within the architecture department and tasked with building an integration architecture for Google Cloud. This internal team provides the solutions and archetypes that allow the rest of BBVA to build their services on top of Google Cloud, following our established patterns.

Cloud SQL fits our strategy for effective managed services

Over those nine years, our database architecture has transformed as well, and we’ve tested various services within Google Cloud to determine which best suited our needs and our roadmap, starting with Datastore and later moving to Cloud SQL as we explored relational database engines. We also used Bigtable upon its release, and more recently, we’ve been using Firestore. BBVA prioritizes managed services where available for their speed, ease of maintenance, and centralized control features. The fully managed relational database service provided by Cloud SQL fits perfectly within our internal strategy. Any time there’s a management application with a use case for a transactional relational database, we consider Cloud SQL as an option. For most initiatives we use MySQL, since people often have experience working with it. PostgreSQL is used for more specific use cases, such as global deployments, which are typically regional in Europe or the U.S.
and provide service to Mexico and other countries in the Americas.

How BBVA approaches new initiatives

Whenever there’s a business requirement within BBVA, the solution architecture department first jumps in and analyzes our overall technology stack and the initiative’s requirements. When a Google Cloud use case arises (mainly for internal employee-activity applications), we pull from many of the Google Cloud solutions, deciding which tools can be used within the organization. The internal application examples include paycheck portals, internal directories, and intranet applications like procurement, project control, and management control, all developed within BBVA. For example, we have many WordPress apps within the organization that use Cloud SQL. Most of the applications are built on top of our base stack of App Engine with Datastore. From there, if the initiative needs relational data coverage, we propose Cloud SQL as a solution. If the internal stakeholders need to install their own third-party product, we may suggest using Compute Engine, Cloud Run, or Google Kubernetes Engine (GKE).

Because the Google stack is so deep and diverse, our internal Google Cloud team often fields questions about how to use a service, such as how to integrate Dataflow with an external cloud. Solution architects often come to us to ask for a proof of concept or an investigation, which leads to a new integration. With that in mind, when an initiative brings its own use case, the solution architecture department sets up the solution and turns to us to set up the whole Google Cloud environment. Part of our job is to provide daily support for such tasks. We set up the project, the Cloud Identity and Access Management (Cloud IAM) roles, and all the permissions. More specifically for Cloud SQL, we set up the database itself according to the initiative’s needs. We give them a root user with a generated root password, and we provide initial guidelines on how to start using Cloud SQL. For example, we try to avoid direct external connections, since we want to avoid IP whitelisting, so we recommend using Cloud SQL Proxy for their direct connections (a connection sketch appears at the end of this post). From time to time, we monitor their use and consumption, the billing for those projects, and whether their Cloud SQL databases are sized properly.

As part of our constant monitoring work on initiatives, we continue to benchmark Cloud SQL against other databases within Google Cloud, like Datastore and MySQL, in order to recommend the best option for each use case. Using Cloud Composer, we also provide backup systems for individual databases to comply with legal standards. For example, we might need a full backup for the last ten years, one backup for a week, or the last 30 full logical backups.

We have many IT silos within BBVA; different teams try to tackle a problem with a solution they arrange themselves. So as part of our digital transformation, we may offer these teams the option to put their information on a database type of their choice, as long as it’s within Google Cloud. That way, they get the features they need, and we get the control we need.

Using Cloud SQL to solve shadow IT

One of the next big things for us to solve is shadow IT. Cloud SQL allows us to give project owners, solution architects, and other groups a way to create resources in a secure, controlled, and approved way, while at the same time giving them the freedom and flexibility to spin up resources without us becoming a bottleneck in the process.
This allows us to apply best practices, keep things secure and compliant, and get out-of-the-box monitoring and alarms, and it gives us better visibility into BBVA’s database inventory on GCP.

Google Cloud supports our multi-cloud strategy

The full integration of Google Cloud solutions feels natural and intuitive, and makes it easy to work with its various tools, such as Cloud SQL Proxy or Identity-Aware Proxy (IAP). Everything is connected and easy to use. And when we find a solution that works for a use case, we reproduce that solution over and over within the organization. In addition to Cloud SQL, we’re super fans of Firebase, and we have an explosion of use cases within BBVA that are being handled well with this solution. We’re currently migrating to Memorystore for Redis as we move our applications from Google App Engine version one to version two.

As our embrace of the full Google Cloud stack of products shows, we’ve found them to be instrumental and effective solutions in our digital transformation, offering the security, scalability, and fully managed services that perform across our multi-cloud architecture and allow us to focus on new initiatives and the needs of our future roadmap.

Learn more about BBVA. To further explore the benefits of a multi-cloud strategy for your organization, check out our recent blog.
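As promised above, here is one way a guided connection can look. This sketch uses the Cloud SQL Python Connector (a language-level counterpart to the Cloud SQL Proxy), with entirely hypothetical instance and credential values:

```python
import sqlalchemy
from google.cloud.sql.connector import Connector

connector = Connector()

# Hypothetical instance connection name and credentials.
def getconn():
    return connector.connect(
        "my-project:europe-west1:my-instance",
        "pymysql",
        user="app-user",
        password="change-me",
        db="reporting",
    )

# SQLAlchemy pools connections created through the connector, so no
# IP allowlisting or direct external connection is needed.
pool = sqlalchemy.create_engine("mysql+pymysql://", creator=getconn)
with pool.connect() as conn:
    print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())
```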
Source: Google Cloud Platform

How Anthos clusters on AWS delivers on the promise of multicloud

Editor’s note: Today’s post comes from Naohiko Takemura, Head of Engineering, and Kosuke Oya, Engineer, both from Japanese customer experience platform PLAID. The company runs its platform in a multicloud environment through Anthos clusters on AWS, and shares its experiences and best practices.

At PLAID, our mission is to maximize the value of people through data, and we are developing a range of products that focus on improving customer experience. Our core product is a customer experience platform, KARTE, that can analyze the behavior and emotions of website visitors and application users, enabling businesses to deliver relevant communications in real time. We make KARTE available as a service to functions such as human resources and industries such as real estate and finance, and we run the platform in a multicloud environment to achieve high-speed response and meet availability requirements. This is where Anthos comes in.

We introduced KARTE in 2015 and have updated the system configuration in line with the addition of new functions and the need to increase scale. Our multicloud configuration is optimized through Anthos clusters on AWS, which give us access to the capabilities of Google Kubernetes Engine (GKE). KARTE runs in two groups of server instances in each cloud: one group runs the management screens used by clients, and the other provides content for visitors to our website. In Google Cloud, the management system runs in GKE and content is delivered through Compute Engine.

We initially developed and operated the core of our services on another provider, and from 2016 began to transition to Google Cloud due to its strong data processing capabilities. The products that handled big data, such as Cloud Bigtable and BigQuery, were attractive because they could handle data in real time and were compatible with KARTE. Now most functions, including peripheral aspects, run in Google Cloud, because we thought that if we built a system centered on these products, it would become efficient to build other parts on Google Cloud as well.

While we considered migrating everything to Google Cloud, we decided to leverage its strengths alongside those of our existing provider, AWS. We felt a multicloud approach could create more opportunities and deliver higher growth than a mono-cloud environment. We completed our move to a multicloud environment in 2017 and found that by building systems with almost the same content on two cloud services to leverage the strengths of each, we could reduce costs and improve performance and availability.

However, as KARTE grew and the content of the service increased in complexity, we began to experience new problems. The increased load on the system due to an influx of in-house engineers from 2018 onwards impacted the scalability and development speed of our conventional monolithic architecture running in virtual machines. We opted for an approach based on microservices and containerization, excluding the components that enable real-time analysis, as these had been modernized since initially being deployed in 2016, and the management screens, as the infrastructure running these did not require fine-grained tuning. Our key priority was to improve the ability of our engineers to deliver quickly.

From 2019, we turned to promoting microservices that make full use of container technology. When deciding to move from a target built on virtual machines to containerization, we evaluated the ease of use of GKE and decided to build in Google Cloud.
At the same time, the number of systems with strict service-level obligations was increasing, so to ensure higher availability, we considered running these in a multicloud environment. The announcement of Anthos clusters on AWS at Google Cloud Next ’19 in San Francisco provided an answer. We had been wondering how to achieve the equivalent of GKE’s smooth operation in our AWS environment, and we welcomed the Anthos clusters on AWS announcement. We consulted with a Google Cloud customer engineer through an early access program and quickly gained an opportunity to work with this version of Anthos. This allowed us to provide feedback and requests for improvement, and paved the way for us to implement the product and take advantage of its functionality and future enhancements. With Google Cloud, we have been able to continue to interact closely with the development team to understand and provide input into the product roadmap.

We are now realizing the benefits of multicloud, including faster development speeds and higher availability. For businesses in general, we recommend taking a thoughtful approach to multicloud. While for us multicloud is a useful mechanism that enables us to provide large-scale data analysis in real time, other businesses should consider whether multicloud is right for them and, if so, the role of a technology like Anthos. They should also start small before ramping up. Moving forward, we are keen to see what other products Google Cloud is creating that can help drive our business to a higher level.
Source: Google Cloud Platform

How to detect machine-learned anomalies in real-time foreign exchange data

Let’s say you are a quantitative trader with access to real-time foreign exchange (forex) price data from your favorite market data provider. Perhaps you have a data partner subscription, or you’re using a synthetic data generator to prove value first. You know there must be thousands of other quants out there with your same goal. How will you differentiate your anomaly detector?

What if, instead of training an anomaly detector on raw forex price data, you detected anomalies in an indicator that already provides generally agreed buy and sell signals? Relative Strength Index (RSI) is one such indicator; it is often said that RSI going above 70 is a sell signal, and RSI going below 30 is a buy signal (a quick sketch of the RSI calculation appears at the end of this post). As this is just a simplified rule, there could be times when the signal is inaccurate, such as during a currency market correction, making it a prime opportunity for an anomaly detector.

This gives us a handful of high-level components. Of course, we want each of these components to handle data in real time and scale elastically as needed. Dataflow pipelines and Pub/Sub are the perfect services for this. All we need to do is write our components on top of the Apache Beam SDK, and they’ll have the benefit of distributed, resilient, and scalable compute.

Luckily for us, there are some great existing Google plugins for Apache Beam: namely, a Dataflow time-series sample library that includes RSI calculations and a lot of other useful time-series metrics, and a connector for using AI Platform or Vertex AI inference within a Dataflow pipeline. These slot into the architecture directly, with the components communicating over Pub/Sub topics. The Dataflow time-series sample library also provides us with gap-filling capabilities, which means we can rely on having contiguous data once the flow reaches our machine learning (ML) model. This lets us implement quite complex ML models, and means we have one less edge case to worry about.

So far we’ve only talked about the real-time data flow, but for visualization and continuous retraining of our ML model, we’re going to want historical data as well. Let’s use BigQuery as our data warehouse, and Dataflow to plumb Pub/Sub into it. As this plumbing job is embarrassingly parallelizable, we wrote our pipeline to be generic across data types and share the same Dataflow job, so that compute resources can be shared. This results in efficiencies of scale, both in cost savings and in the time required to scale up.

Data Modeling

Let’s discuss data formats a bit further here. An important aspect of running any data engineering project at scale is flexibility, interoperability, and ease of debugging. As such, we opted to use flat JSON structures for each of our data types, because they are human readable and ubiquitously understood by tooling. As BigQuery understands them too, it’s easy to jump into the BigQuery console and confirm each component of the project is working as expected.

The Dataflow sample library is able to generate many more metrics than RSI. It supports two types of metrics across time-series windows: metrics that can be calculated on unordered windows, and metrics that require ordered windows, which the library refers to as Type 1 and Type 2 metrics, respectively. Unordered metrics have a many-to-one relationship, which can help reduce the size of your data by reducing the frequency of points through time.
Ordered metrics run on the outputs of the unordered metrics, and help to spread information through the time domain without loss of resolution. Be sure to check out the Dataflow sample library documentation for a comprehensive list of metrics supported out of the box.

As our output is going to be interpreted by our human quant, let’s use the unordered metrics to reduce the time resolution of our flow of real-time data to one point per second, or one hertz. If our output were being passed into an automated trading algorithm, we might choose a higher frequency. The decision about the size of our ordered-metrics window is a little more difficult, but it broadly determines the number of time steps our ML model will have for context, and therefore the window of time for which our anomaly detection will be relevant. We at least need it to be larger than our end-to-end latency, to ensure our quant will have time to act. Let’s set it to five minutes.

Data Visualization

Before we dive into our ML model, let’s work on visualization to get a more intuitive feel for what’s happening with the metrics, and to confirm everything we’ve built so far is working. We use the Grafana Helm chart with the BigQuery plugin on a Google Kubernetes Engine (GKE) Autopilot cluster. The visualization setup is entirely config-driven and provides out-of-the-box scaling, and GKE gives us a place to host some other components later on. GKE Autopilot has Workload Identity enabled by default, which means we don’t need to worry about passing around secrets for BigQuery access; we can instead just create a GCP service account that has read access to BigQuery and assign it to our deployment through the linked Kubernetes service account.

That’s it! We can now create some panels in a Grafana dashboard and see the gap filling and metrics working in real time.

Building and deploying the Machine Learning Model

Ok, ML time. As we alluded to earlier, we want to continuously retrain our ML model as new data becomes available, to ensure it remains up to date with the current trend of the market. TensorFlow Extended (TFX) is a platform for creating end-to-end machine learning pipelines in production, and it eases the process of building a reusable training pipeline. It also has extensions for publishing to AI Platform or Vertex AI, and it can use Dataflow runners, which makes it a good fit for our architecture. The TFX pipeline still needs an orchestrator, so we can host that in a Kubernetes job, and if we wrap it in a scheduled job, our retraining happens on a schedule too!

TFX requires our data to be in the tf.Example format. The Dataflow sample library can output tf.Examples directly, but this tightly couples our two pipelines together. If we want to be able to run multiple ML models in parallel, or train new models on existing historical data, we need our pipelines to be only loosely coupled. Another option is to use the default TFX BigQuery adaptor, but this restricts us to each row in BigQuery mapping to exactly one ML sample, meaning we can’t use recurrent networks. As neither of the out-of-the-box solutions met our requirements, we decided to write a custom TFX component that did what we needed. Our custom TFX BigQuery adaptor enables us to keep our standard JSON data format in BigQuery and train recurrent networks, and it keeps our pipelines loosely coupled!
We need the windowing logic to be the same at both training and inference time, so we built our custom TFX component using standard Beam components, such that the same code can be imported in both pipelines.

With our custom generator done, we can start designing our anomaly detection model. An autoencoder utilizing long short-term memory (LSTM) is a good fit for our time-series use case. The autoencoder will try to reconstruct the sample input data, and we can then measure how close it gets. That difference is known as the reconstruction error. If there is a large enough error, we call that sample an anomaly. To learn more about autoencoders, please consider reading chapter 14 of Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.

Our model uses simple moving average, exponential moving average, standard deviation, and log returns as input and output features. For both the encoder and decoder subnetworks, we have two layers of 30-time-step LSTMs, with 32 and 16 neurons, respectively (a minimal Keras sketch of this architecture appears later in this post).

In our training pipeline, we include z-score scaling as a preprocessing transformer, which is usually a good idea when it comes to ML. However, there’s a nuance to using an autoencoder for anomaly detection: we need not only the output of the model but also the input in order to calculate the reconstruction error. We’re able to do this by using model serving functions to ensure our model returns both the output and the preprocessed input as part of its response. As TFX has out-of-the-box support for pushing trained models to AI Platform, all we need to do is configure the pusher, and our (re)training component is complete.

Detecting Anomalies in real time

Now that we have our model in Google Cloud AI Platform, we need our inference pipeline to call it in real time. As our data uses standard JSON, we can easily apply our RSI rule of thumb inline, ensuring our model only runs when needed. Using the reconstructed output from AI Platform, we are then able to calculate the reconstruction error. We choose to stream this directly into Pub/Sub so we can dynamically apply an anomaly threshold when visualizing, but if you had a static threshold, you could apply it here too.

Summary

Putting it all together, the wider architecture now spans data generation, metric computation, storage, training, inference, and visualization. More importantly though, does it fit our use case? We can plot the reconstruction error of our anomaly detector against the standard RSI buy/sell signal, and see when our model is telling us that perhaps we shouldn’t blindly trust our rule of thumb. Go get ’em, quant!

In terms of next steps, there are many things you could do to extend or adapt what we’ve covered. You might want to experiment with multi-currency models, where you could detect when the price action of correlated currencies is unexpected, or you could connect all of the Pub/Sub topics to a visualization tool to provide a real-time dashboard.

Give it a try

To finish it all off, and to enable you to clone the repo and set everything up in your own environment, we include a data synthesizer to generate forex data without needing access to a real exchange. As you might have guessed, we host this on our GKE cluster as well.
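As promised earlier, here is a minimal Keras rendering of the autoencoder described in this post. It is a sketch of the architecture only, not our production TFX code, and the anomaly-scoring helper at the end is illustrative:

```python
import tensorflow as tf

TIMESTEPS, FEATURES = 30, 4  # SMA, EMA, standard deviation, log returns

inputs = tf.keras.Input(shape=(TIMESTEPS, FEATURES))

# Encoder: two LSTM layers with 32 and 16 units.
x = tf.keras.layers.LSTM(32, return_sequences=True)(inputs)
encoded = tf.keras.layers.LSTM(16)(x)

# Decoder mirrors the encoder and reconstructs the input window.
x = tf.keras.layers.RepeatVector(TIMESTEPS)(encoded)
x = tf.keras.layers.LSTM(16, return_sequences=True)(x)
x = tf.keras.layers.LSTM(32, return_sequences=True)(x)
outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(FEATURES))(x)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Anomaly score: per-window mean squared reconstruction error; windows with
# a large enough error are flagged as anomalies.
def reconstruction_error(windows):
    reconstructed = autoencoder(windows)
    return tf.reduce_mean(tf.square(windows - reconstructed), axis=[1, 2])
```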
Detecting Anomalies in real time

Now that we have our model in Google Cloud AI Platform, we need our inference pipeline to call it in real time. As our data uses standard JSON, we can easily apply our RSI rule of thumb inline, ensuring our model only runs when needed. Using the reconstructed output from AI Platform, we are then able to calculate the reconstruction error. We choose to stream this directly into Pub/Sub, which lets us apply an anomaly threshold dynamically when visualizing, but if you had a static threshold you could apply it here too.

Summary

Here's what the wider architecture looks like now:

(Architecture diagram.)

More importantly, though, does it fit our use case? We can plot the reconstruction error of our anomaly detector against the standard RSI buy/sell signal, and see when our model is telling us that perhaps we shouldn't blindly trust our rule of thumb. Go get 'em, quant!

In terms of next steps, there are many things you could do to extend or adapt what we've covered. You might want to experiment with multi-currency models, where you could detect when the price action of correlated currencies is unexpected, or you could connect all of the Pub/Sub topics to a visualization tool to provide a real-time dashboard.

Give it a try

To finish it all off, and to enable you to clone the repo and set everything up in your own environment, we include a data synthesizer that generates forex data without needing access to a real exchange. As you might have guessed, we host this on our GKE cluster as well. There are a lot of other moving parts: TFX uses a SQL database, and all of the application code is packaged into a Docker image and deployed along with the infrastructure using Terraform and Cloud Build. If you're interested in those nitty-gritty details, head over to the repo and get cloning!

Feel free to reach out to our teams at Google Cloud and Kasna for help in making this pattern work best for your company.

Hola, South America! Announcing the Firmina subsea cable

Today, we're announcing Firmina, an open subsea cable being built by Google that will run from the East Coast of the United States to Las Toninas, Argentina, with additional landings in Praia Grande, Brazil, and Punta del Este, Uruguay. Firmina will be the longest cable in the world capable of running entirely from a single power source at one end if its other power source(s) become temporarily unavailable, a resilience boost at a time when reliable connectivity is more important than ever. As people and businesses have come to depend on digital services for many aspects of their lives, Firmina will improve access to Google services for users in South America. With 12 fiber pairs, the cable will carry traffic quickly and securely between North and South America, giving users fast, low-latency access to Google products such as Search, Gmail, and YouTube, as well as Google Cloud services.

Single-end power capability is important for reliability, a key priority for Google's network. With submarine cables, data travels as pulses of light inside the cable's optical fibers. That light signal is amplified every 100 km with a high-voltage electrical current supplied at landing stations in each country. While shorter cable systems can enjoy the higher availability of power feeding from a single end, longer cables with large fiber-pair counts make this harder to do. Firmina breaks this barrier: connecting North to South America, it will be the longest cable ever to feature single-end power feeding. This record-breaking, highly resilient design is achieved by supplying the cable with a voltage 20% higher than in previous systems.

Celebrating the world's visionaries

We sought to honor a luminary who worked to advance human understanding and social justice. The cable is named after Maria Firmina dos Reis (1825–1917), a Brazilian abolitionist and author whose 1859 novel, Úrsula, depicted life for Afro-Brazilians under slavery. A mixed-race woman and intellectual, Firmina is considered Brazil's first novelist. With this cable, we're thrilled to draw attention to her pioneering work and spirit. You can learn more about Firmina in this Google Doodle.

Including Firmina, we now have investments in 16 subsea cables, such as Dunant, Equiano, and Grace Hopper, and consortium cables like Echo, JGA, INDIGO, and Havfrue. We're continuing our work of building out a robust global network and infrastructure, which includes Google data centers and Google Cloud regions around the world. Learn more about our infrastructure.

New research reveals what’s needed for AI acceleration in manufacturing

While the promise of artificial intelligence transforming the manufacturing industry is not new, long-running experimentation hasn't yet led to widespread business benefits. Manufacturers remain in "pilot purgatory": Gartner reports that only 21% of companies in the industry have active AI initiatives in production. However, new research from Google Cloud reveals that the COVID-19 pandemic may have spurred a significant increase in the use of AI and other digital enablers among manufacturers. According to our data, which polled more than 1,000 senior manufacturing executives across seven countries, 76% have turned to digital enablers and disruptive technologies such as data and analytics, cloud, and artificial intelligence (AI) as a result of the pandemic. And 66% of manufacturers who use AI in their day-to-day operations report that their reliance on AI is increasing.

The top three sub-sectors deploying AI to assist in day-to-day operations are automotive/OEMs (76%), automotive suppliers (68%), and heavy machinery (67%).

In fact, Bryan Goodman, Director of Artificial Intelligence and Cloud, Ford Global Data & Insight and Analytics, shares: "Our new relationship with Google will supercharge our efforts to democratize AI across our business, from the plant floor to vehicles to dealerships. We used to count the number of AI and machine learning projects at Ford. Now it's so commonplace that it's like asking how many people are using math. This includes an AI ecosystem that is fueled by data, and that powers a 'digital network flywheel.'"

Moving from edge cases to mainstream business needs

Why are manufacturers now turning to AI in increasing numbers? Our research shows that companies that currently use AI in day-to-day operations are looking to it to support business continuity (38%), to make employees more efficient (38%), and to be helpful for employees overall (34%). It's clear that AI/ML technology can augment manufacturing employees' efforts, whether by providing prescriptive analytics like real-time guidance and training, flagging safety hazards, or detecting potential defects on the assembly line.

In terms of specific AI use cases called out by the research, two main areas emerged: quality control and supply chain optimization. In the quality control category, 39% of surveyed manufacturers who use AI in their day-to-day operations use it for quality inspection, and 35% use it for product and/or production line quality checks. At Google Cloud, we often speak with manufacturers about AI for visual inspection of finished products. Using AI vision, production line workers can spend less time on repetitive product inspections and can instead focus on more complex tasks, such as root cause analysis. In the supply chain optimization category, manufacturers said they tapped AI for supply chain management (36%), risk management (36%), and inventory management (34%).

In our day-to-day work, we're seeing many manufacturers rethink their supply chains and operating models to better accommodate the increased volatility brought about by the pandemic, and to support the secular trend of consumers asking for increasingly individualized products. We'll share more on deglobalization in the third installment of our manufacturing insights series.

AI use differs by geography, but not for the reasons you may think

The extent to which AI is already being used today varies quite strongly between geographies, according to our research. While 80% and 79% of manufacturers in Italy and Germany respectively report using AI in day-to-day operations, that percentage plummets in the United States (64%), Japan (50%), and Korea (39%).

It's tempting to attribute this disparity to an "AI talent gap." Yet although talent is the most commonly cited barrier, only about a quarter (23%) of manufacturers surveyed believe they don't have the talent to properly leverage AI. Cost, too, does not appear to be a roadblock (cited by 21% of those surveyed). Rather, from our observations, the missing link appears to be having the right technology platform and tools to manage a production-grade AI pipeline. This is the focus of our efforts and those of others in the space, as we believe the cloud can truly help the industry make a step change.

Looking ahead: The Golden Age of AI for manufacturing

The key to widespread adoption of AI lies in its ease of deployment and use. As AI becomes more pervasive in solving real-world problems for manufacturers, we see the industry moving away from "pilot purgatory" to the "golden age of AI." The manufacturing industry is no stranger to innovation, from the days of mass production, to lean manufacturing, Six Sigma, and, more recently, enterprise resource planning. AI promises to bring even more innovation to the forefront. To learn more about these findings, download our infographic here and our full report here.

Research methodology

The survey was conducted online by The Harris Poll on behalf of Google Cloud, from October 15 to November 4, 2020, among 1,154 senior manufacturing executives in France (n=150), Germany (n=200), Italy (n=154), Japan (n=150), South Korea (n=150), the UK (n=150), and the U.S. (n=200) who are employed full-time at a company with more than 500 employees and who work in the manufacturing industry with a title of director level or higher. The data in each country were weighted by number of employees to bring them into line with actual company size proportions in the population. A global post-weight was applied to ensure equal weight for each country in the global total.