Introducing Autonomic Security Operations for the U.S. public sector

As sophisticated cyberattack campaigns increasingly target the U.S. public and private sectors during the COVID era, the White House and federal agencies have taken steps to protect critical infrastructure and remote-work infrastructure. These include Executive Order 14028 and the Office of Management and Budget's Memorandum M-21-31, which recommend adopting Zero Trust policies and span software supply chain security, cybersecurity threat management, and strengthening cyberattack detection and response. However, implementation can be a challenge for many agencies due to cost, scalability, engineering, and a lack of resources. Meeting the requirements of the EO and OMB guidance may require technology modernization and transformational changes around workforce and business processes.

Today we are announcing Autonomic Security Operations (ASO) for the U.S. public sector, a solution framework to modernize cybersecurity analytics and threat management that's aligned with the objectives of EO 14028 and OMB M-21-31. Powered by Google's Chronicle and Siemplify, ASO helps agencies comprehensively manage cybersecurity telemetry across an organization, meet the Event Logging Tier requirements of the White House guidance, and transform the scale and speed of threat detection and response. ASO can support government agencies in achieving continuous detection and continuous response so that security teams can increase their productivity, reduce detection and response time, and keep pace with – or ideally, stay ahead of – attackers.

While the focus of OMB M-21-31 is on the implementation of technical capabilities, transforming security operations will require more than just technology. Transforming processes and people in the security organization is also important for long-term success. ASO provides a more comprehensive lens through which to view the OMB event logging capability tiers, which can help drive a parallel transformation of security-operations processes and personnel.

Modern Cybersecurity Threat Detection and Response

Google provides powerful technical capabilities to help your organization achieve the requirements of M-21-31 and EO 14028:

- Security Information & Event Management (SIEM) – Chronicle provides high-speed, petabyte-scale analysis and can consume the log types outlined in the Event Logging (EL) tiers in a highly cost-effective manner.
- Security Orchestration, Automation, and Response (SOAR) – Siemplify offers dozens of out-of-the-box playbooks to deliver agile cybersecurity response and drive mission impact, including instances of automating 98% of Tier-1 alerts and driving an 80% reduction in caseload.
- User and Entity Behavior Analytics (UEBA) – Agencies that want to develop their own behavioral analytics can use BigQuery, Google's petabyte-scale data lake, to store, manage, and analyze diverse data types from many sources. Telemetry can be exported out of Chronicle, and custom data pipelines can be built to import other relevant data from disparate tools and systems, such as IT Ops, HR and personnel data, and physical security data. From there, users can leverage BQML to readily generate machine learning models without needing to move the data out of BigQuery (see the sketch after this list). For Google Cloud workloads, our Security Command Center Premium product offers native, turnkey UEBA across GCP workloads.
- Endpoint Detection and Response (EDR) – For most agencies, EDR is a heavily adopted technology that has broad applicability in Security Operations. We offer integrations to many EDR vendors. Take a look at our broad list of Chronicle integrations here.
- Threat intelligence – Our solution offers a native integration with VirusTotal, can operationalize threat intelligence feeds natively in Chronicle, and integrates with various TI and TIP solutions.
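To make the UEBA option above more concrete, here is a minimal, hedged sketch of a BQML model built over telemetry exported into BigQuery. The dataset, table, and column names (security_analytics.exported_auth_telemetry and so on) are illustrative assumptions, not names used by Chronicle or the ASO solution.

```bash
# Illustrative only: cluster per-user login behavior in place with BQML.
# Dataset, table, and column names are placeholders for your own telemetry export.
bq query --use_legacy_sql=false '
CREATE OR REPLACE MODEL security_analytics.login_behavior_clusters
OPTIONS (model_type = "kmeans", num_clusters = 5) AS
SELECT
  COUNT(*) AS login_count,
  COUNT(DISTINCT source_ip) AS distinct_source_ips
FROM security_analytics.exported_auth_telemetry
GROUP BY user_id'
```

Once trained, the model can be queried in place with ML.PREDICT to surface users whose behavior sits far from any cluster centroid, without moving the underlying data out of BigQuery.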
Community Security Analytics

To increase collaboration across public-sector and private-sector organizations, we recently launched our Community Security Analytics (CSA) repository, where we've partnered with the MITRE Engenuity Center for Threat-Informed Defense, CYDERES, and others to develop open-source queries and rules that support self-service security analytics for detecting common cloud-based security threats. CSA queries are mapped to the MITRE ATT&CK® framework of tactics, techniques, and procedures (TTPs) to help you evaluate their applicability in your environment and include them in your threat model coverage.

"Deloitte is excited to collaborate with Google Cloud on their transformational public sector Autonomic Security Operations (ASO) solution offering. Deloitte has been recognized as Google Cloud's Global Services Partner of the Year for four consecutive years, and also as their inaugural Public Sector Partner of the Year in 2020," said Chris Weggeman, managing director of GPS Cyber and Strategic Risk, Google Cloud Cyber Alliance Leader, Deloitte & Touche LLP. "Our deep bench of more than 1,000 Google Cloud certifications, capabilities spanning the Google Cloud security portfolio, and decades of delivery experience in the government and public sector makes us well-positioned to help our clients undertake critical Security Operations Center transformation efforts with Google Cloud ASO."

Cost-effective for government agencies

To help federal agencies meet the requirements of M-21-31 and the broader EO, Google's ASO solutions can drive efficiencies and help manage the overall costs of the transformation. ASO can make petabyte-scale data ingestion and management more viable and cost-effective. This is critical at a time when M-21-31 requires many agencies to ingest and manage dramatically higher volumes of data that had not been previously budgeted for.

Partners

We're investing in key partners who can help support U.S. government agencies on this journey. Deloitte and CYDERES both have deep expertise to help transform agencies' Security Operations capabilities, and we continue to expand our partner ecosystem to support the needs of our clients. A prototypical journey can be seen below.

"Cyderes shares Google Cloud's mission to transform security operations, and we are honored to deliver the Autonomic Security Operations solution to the U.S. public sector. As the number one MSSP in the world (according to Cyber Defense Magazine's 2021 Top MSSPs List), with decades of advisory and technology experience detecting and responding to the world's biggest cybersecurity threats, Cyderes is uniquely positioned to equip federal agencies and departments to go far beyond the requirements of the executive order to transform their security programs entirely via Google's unique ASO approach," said Robert Herjavec, CEO of CYDERES. "As an original launch partner of Google Cloud's Chronicle, our deep expertise will propel our joint offering to modernize security operations in the public sector, all with significant cost efficiency compared to competing solutions," said Eric Foster, President of CYDERES.

Embracing ASO

Autonomic Security Operations can help U.S.
government agencies advance their event logging capabilities in alignment with OMB maturity tiers. More broadly, ASO can help the U.S. government undertake a larger transformation of technology, process, and people, toward a model of continuous threat detection and response. As such, we believe that ASO can help address a number of challenges presently facing cybersecurity teams, from the global shortage of skilled workers, to the overproliferation of security tools, to poor cybersecurity situational awareness and analyst burnout caused by an increase of data without sufficient context or tools to automate and scale detection and response.

We believe that by embracing ASO, agencies can achieve:

- 10x technology, through the use of cloud-native tools that help agencies meet event logging requirements in the near term, while powering a longer-term transformation in threat management;
- 10x process, by redesigning workflows and using automation to achieve Continuous Detection and Continuous Response in security operations;
- 10x people, by transforming the productivity and effectiveness of security teams and expanding their diversity; and
- 10x influence across the enterprise, through a more collaborative and data-driven approach to solving security problems between security teams and non-security stakeholders.

To learn more about Google's Autonomic Security Operations solution for the U.S. public sector, please read our whitepaper. More broadly, Google Cloud continues to provide leadership and support for a wide range of critical public-sector initiatives, including our work with the MITRE Engenuity Center for Threat-Informed Defense; the membership of Google executives on the President's Council of Advisors on Science and Technology and the newly established Cyber Safety Review Board; Google's White House commitment to invest $10 billion in Zero Trust and software supply chain security; and Google Cloud's introduction of a framework for software supply chain integrity. We look forward to working with the U.S. government to make the nation more secure.

Visit our Google Cloud for U.S. federal cybersecurity webpage.

Related posts:
- Autonomic Security Operations for the U.S. Public Sector Whitepaper
- "Achieving Autonomic Security Operations: Reducing toil"
- "Achieving Autonomic Security Operations: Automation as a Force Multiplier"
- "Advancing Autonomic Security Operations: New resources for your modernization journey"
Source: Google Cloud Platform

Twitter takes data activation to new heights with Google Cloud

Twitter is an open, social platform that's home to a world of diverse people, perspectives, ideas, and information. We aim to foster free and global conversations that allow people to consume, create, distribute, and discover information about the topics they care about the most.

Founded in 2006, Twitter keeps a watchful eye on emerging technologies to maintain a modern platform that can meet the needs of the changing times. Its early technology investments helped accelerate Twitter's product but predated modern open source equivalents. Eager to adopt more open source technologies, Twitter wanted to use the data it collected to maximize the user experience. However, its past generation of operational tools highlighted the need for less time-consuming and more reliable data processing techniques that would allow Twitter developers to automate complex, manual tasks and relieve developer burden. This presented an opportunity for Twitter to modernize its tools and glean valuable insights that would be transformative for the evolution of its products and partnerships with advertisers. With a plan to standardize and simplify its approach to data processing across its operations, Twitter progressively migrated its operations to BigQuery on Google Cloud.

In the complex, competitive world of programmatic advertising, the relevance, quality, and interpretation of data insights are critical to a company's ability to stay ahead of ever-changing needs. The ability to streamline its approach to large-scale data processing quickly became an anchor in Twitter's plan to better align its goals with those of its advertisers and customers. With the recent migration of its advertising data from on-premises systems to Google Cloud, Twitter has leveraged several Google Cloud solutions, notably BigQuery and Dataflow, to facilitate this greater alignment.

Leveraging BigQuery for improved advertising partnerships and data extraction

Aligning the goals of advertisers and customers with those of a company is a considerable challenge, but for a company with hundreds of millions of avid users like Twitter, developing and executing an approach that balanced the needs of all parties was proving to be a complex task. Pradip Thachile, a senior data scientist responsible for Twitter's revenue team's adoption of Google Cloud, likened the process to a flywheel that allows the Twitter team to work in collaboration with advertising partners to develop and test hypothetical approaches that center its goals and those of advertising partners. He explained the essential role of BigQuery in the synthesis of these goals, with an eye on optimizing business growth for all involved: "Mating all this is a nontrivial problem at scale. The only way we can accomplish it is by being able to build this kind of scientific learning flywheel. BigQuery is a critical component, because the velocity with which we can go from hypothesizing to actual action through BigQuery is huge."

As the anchoring service for the ingestion, movement, and extraction of valuable insights from all data at Twitter, BigQuery is the engine of Twitter's recent optimization of internal productivity and revenue growth.

Data modeling for optimized productivity and value extraction with Dataflow

As a fully managed streaming analytics service, Dataflow has proven to be a time-saving solution that contributes significantly to the enhancement of productivity at Twitter.
By reducing the time invested in manual scaling tasks, Dataflow enables the seamless organization and templatization of the movement of Twitter's archetypal data sets. With less time devoted to the calibration of operational tools, Twitter's team can focus on higher-value tasks related to discovering and developing innovative ways to further leverage its data insights.

Reliable support with data expertise from Google

Notable for its expertise in data, Google Cloud contributed substantial technical support to Twitter. The Twitter team routinely turned to the Google Cloud product team for guidance on ingestion velocity as they leveraged the sizable ingestion capabilities of BigQuery for their data. At a higher level, the Google Cloud support team supplied valuable resources, including white papers and use cases, that could enhance Twitter's performance. Thachile describes the value of Google Cloud's support: "Google Cloud provides a very effective stratified layer of support. They can be as close to the problem as you'd like them to be."

For more of the story about how Twitter is using BigQuery, read this blog from Twitter.

Related article: Now generally available: BigQuery BI Engine supports many BI tools or custom application
Source: Google Cloud Platform

Maintenance made flexible: Cloud SQL launches self-service maintenance

Routine maintenance is critical to the upkeep of any healthy database system. Maintenance involves updating your operating system and upgrading your database software so that you can rest assured that your system is secure, performant, and up-to-date. When you run your database on Cloud SQL, we schedule maintenance for you once every few months during your weekly maintenance window, so that you can turn your attention to more interesting matters. However, from time to time, you may find that Cloud SQL's regular maintenance cadence just doesn't work for you. Maybe you need a bug fix from the latest database minor version to address a performance issue, or maybe there's an operating system vulnerability that your security team wants patched as soon as possible. Whatever the case, having the flexibility to update before the next scheduled maintenance event would be ideal.

Cloud SQL has now made self-service maintenance generally available. Self-service maintenance gives you the freedom to upgrade your Cloud SQL instance's maintenance version to the latest on your own, so that you can receive the latest security patches, bug fixes, and new features on demand. When combined with deny maintenance periods, self-service maintenance gives you the flexibility to upgrade your instance according to your own maintenance schedule. You can perform self-service maintenance using just a single command through gcloud or the API.

Cloud SQL has also launched maintenance changelogs, a new section in our documentation that describes the contents of maintenance versions released by Cloud SQL. For each database engine major version, Cloud SQL publishes a running list of the maintenance versions and the changes introduced in each, such as database minor version upgrades and security patches. With maintenance changelogs, you can know what's new in the latest maintenance version and make informed decisions about when to maintain your instance on your own ahead of regularly scheduled maintenance. Cloud SQL also maintains an RSS feed for each maintenance changelog that you can subscribe your feed reader to and receive notifications when Cloud SQL releases new maintenance versions.

How to perform self-service maintenance

Say you're a PostgreSQL database administrator at a tax accounting software firm named Taxio. During Q1 of each year, you use a deny maintenance period to skip maintenance on your database instance named tax-services-prod in order to keep your environment as stable as possible during your busy season. Now that it's May, you take a closer look at how your PostgreSQL 12.8 instance is operating on the older maintenance version.
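As an aside, a deny maintenance period like the one described above could be configured with a command along the following lines. This is a hedged sketch: the flag names reflect our reading of the gcloud sql reference and the dates are illustrative, so verify them against the current documentation before relying on them.

```bash
# Illustrative only: block maintenance on tax-services-prod for Q1.
# Flag names and date/time formats should be confirmed in the gcloud sql reference.
gcloud sql instances patch tax-services-prod \
  --deny-maintenance-period-start-date=2022-01-01 \
  --deny-maintenance-period-end-date=2022-03-31 \
  --deny-maintenance-period-time=00:00:00
```

If set, the deny period should then appear under the instance's settings when you describe the instance, as in the walkthrough that follows.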
After studying the query performance patterns using Query Insights, you realize that your queries that use regular expressions are running far slower than you expected. You check the PostgreSQL bugs page and see that other users reported the same performance regression in PostgreSQL 12.8. Fortunately, it looks like the issue was patched in PostgreSQL 12.9 and later minor versions.

You decide to take care of the issue right away, ahead of the next scheduled maintenance event, which is a few months away. First, you need to see what maintenance version tax-services-prod is running and what the latest available maintenance version is. You spin up gcloud and retrieve the instance's configuration information with the following command:

```
gcloud sql instances describe tax-services-prod
```

Cloud SQL returns the following information:

```
connectionName: taxio:us-central1:tax-services-prod
createTime: '2019-03-22T03:30:48.231Z'
databaseInstalledVersion: POSTGRES_12_8
…
maintenanceVersion: POSTGRES_12_8.R20210922.02_00
…
availableMaintenanceVersions:
- POSTGRES_12_10.R20220331.02_01
…
```

You see that there is a new maintenance version, POSTGRES_12_10.R20220331.02_01, that is much more current than your current maintenance version, POSTGRES_12_8.R20210922.02_00. From the version name, it looks like the new maintenance version runs on PostgreSQL 12.10, but you want to be sure. You navigate to the PostgreSQL 12 maintenance changelog page in the documentation and confirm that the new maintenance version upgrades the database minor version to PostgreSQL 12.10.

You decide to perform self-service maintenance. You enter the following command into gcloud:

```
gcloud sql instances patch tax-services-prod \
  --maintenance-version=POSTGRES_12_10.R20220331.02_01
```

Cloud SQL returns the following response:

```
The following message will be used for the patch API method.
{"maintenanceVersion": "POSTGRES_12_10.R20220331.02_01", "name": "tax-services-prod", "project": "taxio", "settings": {}}
Patching Cloud SQL instance...working..
```

A few minutes later, tax-services-prod is up-to-date, running PostgreSQL 12.10. You run some acceptance tests and are delighted to see that performance for queries with regular expressions is much better.

Learn more

With self-service maintenance, you can update your instance to the latest maintenance version outside of the flow of regularly scheduled maintenance. You can also use maintenance changelogs to review the contents of new maintenance versions. See our documentation to learn more about self-service maintenance and maintenance changelogs.

Related article: Understanding Cloud SQL Maintenance: why is it needed?
Source: Google Cloud Platform

Join us in evolving the usability of GitOps

Kubernetes configuration automation remains challenging

Companies of all sizes are leveraging Kubernetes to modernize how they build, deploy, and operate applications on their infrastructure. As these companies expand the number of development and production clusters they use, creating and enforcing consistent configurations and security policies across a growing environment becomes difficult. To address this challenge, it is increasingly common for platform teams to use GitOps methodology to deploy configuration and policies consistently across clusters and environments with a version-controlled deployment process. Using the same principles as Kubernetes itself, GitOps reconciles the desired state of clusters with a set of declarative Kubernetes configuration files in a versioned storage system, typically git.

However, implementing the git workflow is often left as an exercise for the user: repo, branch, and directory organization; versioning and tagging; change proposal and approval authorization; pre-merge validation checks; and so on. It can be difficult to set up appropriately, especially when managing changes across the tens, hundreds, or even thousands of applications that are deployed at large enterprises. Moreover, configuration is typically represented using code and code-like formats, such as templates, domain-specific languages, and general-purpose programming languages, which effectively require manual authoring and editing. Here is a very simple template for generating Kubernetes RoleBindings:

```
{{- range .roleBindings }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ .name }}
  namespace: {{ .namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: {{ .roleKind }}
  name: {{ .role }}
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: {{ .namespace }}.admin@bigco.com
{{- end }}
```

Cross-functional collaboration across platform and application teams can become a bottleneck, especially as the needs of individual teams differ from one another, requiring frequent template changes that potentially affect all uses of the templates. For example, the template above does not support binding to ServiceAccounts. Adding that option could potentially affect all uses of the template.

Since such configuration tools assume they exclusively generate and set the desired state, they are not interoperable with easier-to-use client surfaces, such as graphical user interfaces (GUIs) and command-line interfaces (CLIs). Some of these tools support transitioning to configuration tools by providing the ability to download or output the YAML representations of resources. Once that transition is made, however, it's a one-way door, and future edits must be made manually, in a different format, through a different process. We've heard from users that changes that take only seconds to make in a GUI can take days to make through configuration tools. Wouldn't it be great if you didn't have to choose between "the easy way" and "the right way"?

To really make GitOps usable, we need to address the inherent dichotomy between preferred client surfaces and configuration tools.

Making configuration authoring and editing a first-class citizen

We previously open sourced kpt, a package-centric toolchain for helping platform teams manage their infrastructure.
To address the usability challenges outlined previously, we are extending that toolchain with Porch, the package orchestrator, which enhances the toolchain by enabling a What You See Is What You Get (WYSIWYG) configuration authoring, automation, and delivery experience. This experience simplifies managing Kubernetes platforms and KRM-driven infrastructure at scale by manipulating declarative Configuration as Data, separated from the code that transforms it. Whereas GitOps automates on-the-fly configuration generation from existing configuration packages and repositories and deployment of the output of that process to Kubernetes, the package orchestrator automates configuration package creation, editing, transformation, upgrades, and other package lifecycle operations, creating and managing the content to be deployed via GitOps.

We created an open-source plugin for the Backstage platform portal framework that provides a WYSIWYG GUI experience. It builds on the package orchestrator to allow platform and application teams to easily author and edit configuration, while enforcing guardrails. You don't need to write YAML, patches, or templates, or even branch, commit, tag, push, and merge changes.

This approach is unique in that it avoids many of the pitfalls currently faced in the ecosystem when building a GUI on top of GitOps. In particular, prevailing approaches require creating abstractions, often thin ones, that need to be custom-built on top of the Kubernetes resource model. This creates a situation where platform teams need to do a lot of additional work to create a management experience on top of Kubernetes, and they lose out on the value of the ecosystem of tooling and educational content built around the standard Kubernetes (and extensions') resource types.

By leveraging Configuration as Data and package orchestration, we enable a GUI that complements the existing ecosystem rather than requiring thin abstractions that just get in the way. The GUI modifies configuration data very similarly to GUIs that directly operate on the live state in Kubernetes – the resource schemas are identical, since Kubernetes is natively declarative. Since it is early, the GUI supports a limited use case: provisioning and managing namespaces and their adjacent Kubernetes policy resources. Over time we plan to build in support for other critical use cases faced by cluster administrators today, which is mostly a matter of implementing form editors for additional resource types and transformer functions for additional customization scenarios.

As shown in our tutorial, blueprints can be created through a simple form-based UI, again, without templates. Just draft examples of the resources to deploy, similar to kustomize bases. Resources can be added, edited, or deleted, without writing YAML.

Like kustomize, kpt uses KRM functions to transform the resources in order to create variants. You can select functions from the catalog and choose their inputs. Now you have a recipe for creating similar instances, as many as are needed. Functions can also be used to validate blueprints and their derived instances, similar to Kubernetes admission control. There's no need to build a whole new Operator or monolithic configuration generator just to automate provisioning groups of resources. Composable functions enable a low-code experience for platform builders and a no-code experience for platform users.
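For readers who prefer the command line to the GUI, here is a minimal, hedged sketch of the equivalent kpt workflow; the repository URL and package name are placeholders, and the packages and function images available in your environment will differ.

```bash
# Fetch an example blueprint package (repository and package path are hypothetical).
kpt pkg get https://github.com/example-org/blueprints.git/namespace-package@v0.1 my-namespace

# Render the package: run the KRM functions declared in its Kptfile pipeline
# (mutators and validators) to produce the variant's final resources.
kpt fn render my-namespace

# Initialize and apply the rendered package to a cluster.
kpt live init my-namespace
kpt live apply my-namespace --reconcile-timeout=2m
```

Each step operates on plain KRM YAML in the package directory, which is what keeps the result equally editable by the GUI, by automation, and by hand.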
To see this in action, check out our demo video.

A GUI isn't the only capability enabled by making the configuration in storage mutable. Nephio, the Cloud Native Network Automation project, is building on kpt, Porch, and Config Sync to fully automate configuration of interconnected network functions and the underlying infrastructure that supports those functions. Configuration as Data provides the foundational API for configuration data, enabling mutation by Nephio automation controllers.

Configuration as Data is a novel approach that doesn't sacrifice usability or the potential for higher-level automation in order to enable reproducibility. Instead, it supports an interoperable, WYSIWYG, automatable configuration authoring and editing experience. We are looking to demonstrate this innovative approach and engage with the community on advancing it further.

Come innovate with us

We are looking to engage with the community to advance this technology. In particular, we are deeply interested in collaborating with developers working on GitOps technologies or looking to build around existing GitOps technologies. We are including our own GitOps reference implementation, Config Sync, as part of kpt, but our interface to GitOps is intended to be extensible. Please check out our contact page to connect with us, or jump straight to contributing. We'd love to hear from and collaborate with you so that we can make GitOps usable by everyone.
Source: Google Cloud Platform

Standardization, security, and governance across environments with Anthos Multi-Cloud

Kubernetes is being used for an ever-growing percentage of the production applications that power the world. Day 2 operations are now in focus as organizations scale from just a few clusters and applications to many clusters across multiple environments, whether in one cloud, multiple clouds, or even on premises. How do you establish "sameness" across all of your clusters, regardless of where they are?

Standardization, security, and governance

Container platform teams are tasked with keeping groups of clusters up to date and aligned with their organization's standards and security policies. They need to automate as much of this work as possible, since managing one cluster is very different from managing tens or hundreds across geographies. Automation and keeping things as similar as possible (or "sameness," a concept Google uses internally for Kubernetes management) are critical. Anthos has a number of benefits operators can take advantage of when it comes to establishing sameness with regard to standardization, security, and governance across Kubernetes clusters. As a first step in evaluating Anthos, it is best to define the environment you will be operating in:

- Do you want to utilize existing Kubernetes clusters deployed with first-party Kubernetes services such as Google Cloud's GKE, Amazon's EKS, or Azure's AKS?
- Are you looking to standardize on GKE across clouds for runtime consistency?

This decision will define which multi-cloud product, Anthos Clusters (GKE on AWS/Azure/GCP) or Attached Clusters (any CNCF-conformant Kubernetes), is best suited to your use case when it comes to applying standardization, security, and governance across your Kubernetes estate.

Standardization, security, and governance across environments

Anthos Config Management (ACM) Config Sync, Policy Controller, and Service Mesh can be extended to popular Kubernetes distributions such as EKS and AKS in addition to GKE. In a multi-tenant environment, you can manage the baseline configurations required across all clusters, such as telemetry, infosec tooling, and networking controls, centrally in your ACM git repo while allowing your teams access to namespaces for application deployment and configuration. This architecture provides a safe landing zone for applications while providing automation tooling for day 2 operations. Application teams are free to use their application deployment tool of choice within a defined namespace while the operations group manages each cluster from a centralized git repo. ACM does allow fine-grained configuration syncing per cluster based on labeling schemas, which may be required if operating across environments or geographies where different tooling or policy is required.

Example multi-cluster/multi-environment strategy for establishing standardization, security, and governance
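As a rough illustration of how an existing non-GKE cluster might be brought under this model, the following sketch registers a cluster to the fleet and enables Config Sync against a central repo. The commands and flags reflect our reading of the Anthos documentation, and the cluster, context, key file, and config file names are placeholders, so treat this as a starting point rather than a verified recipe.

```bash
# Register an existing (e.g., EKS) cluster to the fleet as an attached cluster.
# The kubeconfig context and service account key file are placeholders.
gcloud container hub memberships register my-eks-cluster \
  --context=my-eks-context \
  --kubeconfig=$HOME/.kube/config \
  --service-account-key-file=register-sa-key.json

# Enable Config Sync on the registered cluster, pointing it at the central ACM repo.
# apply-spec.yaml would declare the git repo, branch, and policy directory to sync.
gcloud beta container hub config-management apply \
  --membership=my-eks-cluster \
  --config=apply-spec.yaml
```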
Use case spotlight

Hosted SaaS deployments

Deploying microservice-based software across many public cloud accounts is made possible with the Anthos Multi-Cloud API, which allows standardization of your Kubernetes runtime and lifecycle management activities for the cluster and associated infrastructure across environments, with centralized remote management, telemetry, and logging. Maintaining a common runtime, security posture, toolset, and observability plane across customer deployments is critical to scaling and supporting a distributed user base. These capabilities of the Anthos Multi-Cloud product have been embraced by software vendors that need to provide infrastructure and application-level support into their customers' cloud environments.

In the example diagram below, Anthos maintains the state of each cluster in each end-user account and associated GCP project. Clusters are connected to a unified CD pipeline via Config Sync. Telemetry across the cluster projects is consolidated into a custom dashboard in Google Cloud operations for a consolidated view of the entire estate.

Multi-cluster/multi-account strategy for establishing standardization, security and governance over remote applications

"As an integration platform that runs on multi-clouds, we chose Anthos for multi-cloud deployments to standardize our operations across multiple clouds while relying on GKE's valuable security and governance features, which already serve us far and wide. With Anthos, we have normalized our operations and fully unified our infrastructure support." – Diego Maia, Head of SRE, Digibee

New features with Anthos 1.11 for Multi-Cloud

The following multi-cloud features are part of our Anthos 1.11 release:
- Anthos Service Mesh Topology Diagrams for GKE on AWS
- Support for Windows Worker Nodes
- Support for Dedicated Hosts/Instances for GKE on AWS
- Application Logging for Linux and Windows workloads

Related article: Best practices for upgrading your Anthos clusters on bare metal
Source: Google Cloud Platform

Google Cloud at KubeCon EU: New projects, updated services, and how to connect

It's that time of the year again, when we get excited about all things cloud-native and gear up to connect, share, and learn from fellow developers and technologists at KubeCon EU 2022. Here is a quick roundup of the latest news from the Google open source and Kubernetes teams, and how to connect with us this week at the event.

Google's continued commitment to the open-source community

For over 20 years, Google has helped define the state of computing with its commitment to open source. Google originated Kubernetes and has supported the evolution of the project since contributing it to the Cloud Native Computing Foundation (CNCF) in 2015. Kubernetes became central to cloud-native computing because it was open sourced and placed under the governance of a neutral body. Since then, we've continued to invest deeply in cloud-native open source technologies on behalf of our customers. Most recently we completed the transition of Knative to the CNCF and announced our intent to contribute Istio to the organization, which, alongside Kubernetes and Knative, is a critical part of cloud-native infrastructure. We continue to support the evolution of these projects and will be hosting KnativeCon at KubeCon EU, where you can learn more about the project and how you can join the community to help it grow further.

Building new capabilities for critical workloads in the cloud

Kubernetes has been a transformative technology, bringing cloud-native best practices and design patterns to a number of industries. Yet AI/ML, batch, and HPC workloads have lagged behind their traditional enterprise counterparts, primarily due to the complex scheduling and resource allocation needs that make it difficult to deploy and scale these scientific workloads. Google, along with a number of community members, is working to make Kubernetes a first-class platform for these workloads through improvements to the batch API, improving scheduling performance, and leading the development of Kueue, a Kubernetes-native work queue. Combined with Google Cloud's leading hardware and autoscaling capabilities, these upstream efforts make Google Kubernetes Engine (GKE) an ideal platform for AI/ML and batch computing. To learn more about how Google is helping to add these critical capabilities to the project, you can join us at Batch and HPC Day and Kubernetes AI Day onsite during KubeCon.

Driving Kubernetes ease of use for customers through new open source projects

We are embedded in open source communities, and believe in the power of the community to drive innovation and make it easier for everyone to build in the cloud. This week we reached an important milestone with a new open source offering from Google: Config Connector and Config Sync are now available as open source, joining Gatekeeper. Now, the entirety of Anthos Config Management is based on open source. We've also added Config Sync and the new package orchestrator to the kpt project. Together, these projects provide an end-to-end, portable solution that enables a "What You See Is What You Get" configuration authoring, automation, and delivery experience and simplifies managing Kubernetes platforms and KRM-driven infrastructure at scale. We are seeking help from the community to innovate with us on this project, as we hope that it can help improve how others build platforms on top of Kubernetes. We are happy to accept contributions to kpt from the community and our customers.
You can check out more information here on how to get involved as this project grows.

Adding a high-usage tier to Managed Service for Prometheus

This March, we introduced Google Cloud Managed Service for Prometheus, and Kubernetes users are enthusiastic about the monitoring service's ease of use and scalability. To get a sense of why customers are using it, you can read about the experience of Maisons du Monde, a French furniture and home decor company that adopted Managed Service for Prometheus after first running the open source version.

In fact, Managed Service for Prometheus' scalability is so strong that we've introduced a new high-usage tier designed for customers with extremely large volumes of metrics: more than 500 billion metric samples per month. This new pricing tier is 50% less than the previous highest-tier list price. We've also reduced the list price of lower-usage tiers by 25%. To get started with Managed Service for Prometheus, try out our new Managed Service for Prometheus qwiklab at no charge now through June 15, and join us on Tuesday at KubeCon for the presentation: Easy, scalable metrics for Kubernetes with Managed Service for Prometheus.

The most automated and scalable managed Kubernetes

Kubernetes is not just a technology; it's a model for creating value for your business, a way of developing apps and services, and a means to secure and develop cloud-native IT capabilities for innovation. Given our long history with Kubernetes, we are able to offer unparalleled managed services based on critical open source projects. Created by the same developers that built Kubernetes, Google Kubernetes Engine (GKE) leads the way in cloud-based Kubernetes services for running containerized applications. GKE makes it easy to realize the benefits of innovation initiatives without getting bogged down troubleshooting infrastructure issues and managing day-to-day operations related to enterprise-scale container deployment. With the fully managed Autopilot mode of operation combined with multi-dimensional autoscaling capabilities, GKE delivers the most dimensions of automation to efficiently and easily operate your applications. Only GKE can run 15,000-node clusters, outscaling other cloud providers by up to 10X, letting you run applications effectively and reliably at scale.

At KubeCon you will have direct access to our Kubernetes experts, starting on May 17 at our co-located event: Build with the most automated and scalable Kubernetes, hosted by Google Cloud. Join us to learn what is new in the world of containers and Kubernetes here at Google Cloud and get access to technical demos.

More ways to engage with Google expertise at KubeCon EU

Explore several interesting courses to help get you started with Kubernetes by visiting our virtual booth. This includes some top sessions produced in the Learn Kubernetes with Google video series and an opportunity to claim exclusive swag to support your Kubernetes learning from Google Cloud. You can also join over 25 sessions from Googlers onsite at the event. Kubernetes builds on more than 15 years of running Google's containerized workloads and the invaluable contributions from the open source community. Have a question? Curious about the latest things in Google Cloud, or want to talk to Kubernetes experts? Join us virtually on the CNCF Slack in the #6-kubecon-googlecloud channel! There will be a number of Google Cloud and cloud-native open source community members available to field your questions.
You can also request some time with our team on the ground. We are looking forward to connecting with developers and sharing expertise from some of our top Kubernetes experts this week at the event.
Source: Google Cloud Platform

New observability features for your Splunk Dataflow streaming pipelines

We're thrilled to announce several new observability features for the Pub/Sub to Splunk Dataflow template to help operators keep tabs on their streaming pipeline performance. Splunk Enterprise and Splunk Cloud customers use the Splunk Dataflow template to reliably export Google Cloud logs for in-depth analytics for security, IT, or business use cases. With newly added metrics and improved logging for the Splunk IO sink, it's now easier to answer operational questions such as:

- Is the Dataflow pipeline keeping up with the volume of logs generated?
- What is the latency and throughput (events per second, or EPS) when writing to Splunk?
- What is the response status breakdown of downstream Splunk HTTP Event Collector (HEC) requests, and what are the potential error messages?

This critical visibility helps you derive your log export service-level indicators (SLIs) and monitor for any pipeline performance regressions. You can also more easily root-cause potential downstream failures between Dataflow and Splunk, such as Splunk HEC network connection or server issues, and fix the problem before it cascades. To help you quickly chart these new metrics, we've included them in the custom dashboard that is part of the updated Terraform module for Splunk Dataflow. You can use those Terraform templates to deploy the entire infrastructure for log export to Splunk, or just the Monitoring dashboard alone.

Log Export Ops Dashboard for Splunk Dataflow

More metrics

In your Dataflow console, you may have noticed several new custom metrics (highlighted below) for launched jobs as of template version 2022-03-21-00_RC01 (that is, gs://dataflow-templates/2022-03-21-00_RC01/Cloud_PubSub_to_Splunk) or later.

Pipeline instrumentation

Before we dive into the new metrics, let's take a step back and go over the Splunk Dataflow job steps. The following flowchart represents the different stages that comprise a Splunk Dataflow job along with the corresponding custom metrics:

In this pipeline, we utilize two types of Apache Beam custom metrics:
- Counter metrics, labeled 1 through 10 above, used to count messages and requests (both successful and failed).
- Distribution metrics, labeled A through C above, used to report on the distribution of request latency (both successful and failed) and batch size.
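Besides the console view, the same job-level metrics can also be listed from the command line. The sketch below is a hedged example with a placeholder job ID; the exact output format and available flags may vary by gcloud version.

```bash
# List metrics for a launched Splunk Dataflow job to see the counters and
# distributions described below. JOB_ID and region are placeholders.
gcloud dataflow metrics list JOB_ID --region=us-central1
```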
Downstream request visibility

Splunk Dataflow operators have relied on some of these pre-built custom metrics to monitor log messages' progress through the different pipeline stages, particularly in the last stage, Write To Splunk, with the metrics outbound-successful-events (counter #6 above) and outbound-failed-events (counter #7 above) tracking the number of messages that were successfully exported (or not) to Splunk. While operators had visibility into the outbound message success rate, they lacked visibility at the HEC request level. Splunk Dataflow operators can now monitor not only the number of successful and failed HEC requests over time, but also the response status breakdown, to determine whether a request failed due to a client request issue (e.g. an invalid Splunk index or HEC token) or a transient network or Splunk issue (e.g. server busy or down), all from the Dataflow console, with the addition of counters #7-10 above, that is:
- http-valid-requests
- http-invalid-requests
- http-server-error-requests

Splunk Dataflow operators can also now track the average latency of downstream requests to Splunk HEC, as well as the average request batch size, by using the new distribution metrics #A-C, that is:
- successful_write_to_splunk_latency_ms
- unsuccessful_write_to_splunk_latency_ms
- write_to_splunk_batch

Note that a Distribution metric in Beam is reported by Dataflow as four sub-metrics suffixed with _MAX, _MIN, _MEAN, and _COUNT. That is why those three new distribution metrics translate to 12 new metrics in Cloud Monitoring, as you can see in the earlier job info screenshot from the Dataflow console. Dataflow currently does not support creating a histogram to visualize the breakdown of these metrics' values. Therefore, the _MEAN metric is the only useful sub-metric for our purposes. As an all-time average value, _MEAN cannot be used to track changes over arbitrary time intervals (e.g. hourly), but it is useful for capturing a baseline, tracking a trend, or comparing different pipelines.

Dataflow custom metrics, including the aforementioned metrics reported by the Splunk Dataflow template, are a chargeable feature of Cloud Monitoring. For more information on metrics pricing, see Pricing for Cloud Monitoring.

Improved logging

Logging HEC errors

To further root-cause downstream issues, HEC request errors are now adequately logged, including both the response status code and message. You can retrieve them directly in Worker Logs from the Dataflow console by setting log severity to Error. Alternatively, for those who prefer using Logs Explorer, you can use the following query:

```
log_id("dataflow.googleapis.com/worker")
resource.type="dataflow_step"
resource.labels.step_id="WriteToSplunk/Write Splunk events"
severity=ERROR
```

Disabling batch logs

By default, Splunk Dataflow workers log every HEC request. Even though these requests are often batched events, these 'batch logs' are chatty, as they add two log messages for every HEC request. With the addition of the request-level counters (http-*-requests), latency and batch size distributions, and HEC error logging mentioned above, these batch logs are generally redundant. To control worker log volume, you can now disable these batch logs by setting the new optional template parameter enableBatchLogs to false when deploying the Splunk Dataflow job. For more details on the latest template parameters, refer to the template user documentation.

Enabling debug level logs

The default logging level for Google-provided templates written using the Apache Beam Java SDK is INFO, which means all messages of INFO and higher (i.e. WARN and ERROR) will be logged. If you'd like to enable lower log levels like DEBUG, you can do so by setting the --defaultWorkerLogLevel flag to DEBUG while starting the pipeline using the gcloud command-line tool. You can also override log levels for specific packages or classes with the --workerLogLevelOverrides flag. For example, the HttpEventPublisher class logs the final payload sent to Splunk at the DEBUG level. You can set the --workerLogLevelOverrides flag to {"com.google.cloud.teleport.splunk.HttpEventPublisher":"DEBUG"} to view the final message in the logs before it is sent to Splunk, and keep the log level at INFO for other classes. Exercise caution while using this, as it will log all messages sent to Splunk under the Worker Logs tab in the console, which might lead to log throttling or reveal sensitive information.
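To tie the logging options above to an actual deployment, a launch command might look like the following hedged sketch. The enableBatchLogs parameter comes from this post; the remaining parameter names (inputSubscription, url, token, outputDeadletterTopic) and all resource values are assumptions based on the template's public documentation, so confirm them against the current parameter reference before use.

```bash
# Illustrative launch of the Pub/Sub to Splunk template with batch logs disabled.
# Project, subscription, topic, URL, and token values are placeholders.
gcloud dataflow jobs run pubsub-to-splunk \
  --gcs-location=gs://dataflow-templates/2022-03-21-00_RC01/Cloud_PubSub_to_Splunk \
  --region=us-central1 \
  --parameters=\
inputSubscription=projects/my-project/subscriptions/logs-export-sub,\
url=https://splunk-hec.example.com:8088,\
token=MY_HEC_TOKEN,\
outputDeadletterTopic=projects/my-project/topics/splunk-deadletter,\
enableBatchLogs=false
```

Once the job is running, the new http-* counters and write-to-Splunk latency distributions should start appearing alongside the existing job metrics.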
Putting it all together

We put all this together in a single Monitoring dashboard that you can readily use to monitor your log export operations.

Pipeline Throughput, Latency & Errors

This dashboard is a single pane of glass for monitoring your Pub/Sub to Splunk Dataflow pipeline. Use it to ensure your log export is meeting your dynamic log volume requirements by scaling to an adequate throughput (EPS) rate, while keeping latency and backlog to a minimum. There's also a panel to track pipeline resource usage and utilization, to help you validate that the pipeline is running cost-efficiently during steady state.

Pipeline Utilization and Worker Logs

For specific guidance on handling and replaying failed messages, refer to Troubleshoot failed messages in the Splunk Dataflow reference guide. For general information on troubleshooting any Dataflow pipeline, check out the Troubleshooting and debugging documentation, and for a list of common errors and their resolutions, look through the Common error guidance documentation. If you encounter any issues, please open an issue in the Dataflow templates GitHub repository, or open a support case directly in your Google Cloud console.

For a step-by-step guide on how to export GCP logs to Splunk, check out the Deploy production-ready log exports to Splunk using Dataflow tutorial, or use the accompanying Terraform scripts to automate the setup of your log export infrastructure along with the associated operational dashboard.

Related article: What's new with Splunk Dataflow template: Automatic log parsing, UDF support, and more
Source: Google Cloud Platform

Google’s open-source solution to DFDL Processing

The cloud has become the choice for extending and modernizing applications, but there are some situations where the transition is not straightforward, such as migrating applications that access data from a mainframe environment. During the transition, data and application migrations can be out of sync at certain points, so mechanisms need to be in place to support interoperability with legacy workloads and to access data out of the mainframe. For the latter, the Data Format Description Language (DFDL), an open standard modeling language from the Open Grid Forum (OGF), has been used to access data from a mainframe, e.g. with IBM Integration Bus. DFDL uses a model or schema that allows text or binary data to be parsed from its native format and presented as an information set out of the mainframe (i.e., a logical representation of the data contents, independent of the physical format).

DFDL Processing with IBM App Connect

When it comes to solutions for parsing and processing data described by DFDL, one option in the past has been IBM App Connect, which allows development of custom solutions via IBM DFDL. The following diagram represents a high-level architecture of a DFDL solution implemented on IBM App Connect:

IBM App Connect brings stable integration to the table at an enterprise-level cost. According to IBM's sticker pricing as of May 2022, IBM App Connect charges $500 and above per month for using App Connect with IBM Cloud services. These prices exclude the cost of storing and maintaining DFDL definitions on the mainframe. With the introduction of Tailored Fit Pricing on IBM z15, the cost of maintaining the mainframe can range from $4,900 to $9,300 per month over the span of five years, which may be costly for a small or medium business that only wants to process data defined by DFDL.

Introducing Google Open-Source DFDL Processor with Google Cloud

At Google our mission is to build for everyone, everywhere. With this commitment in mind, the Google Cloud team has developed and open-sourced a DFDL Processor solution that organizations can easily access and customize. We understand that mainframes can be expensive to maintain and use, which is why we have integrated Cloud Firestore and Bigtable as the databases to store the DFDL definitions. Firestore can provide 100K reads, 25K writes, 100K deletes, and 1 TB of storage per month for approximately $186 per month, while Bigtable provides a fast, scalable database solution for storing terabytes, or even petabytes, of data at a relatively low cost. This move away from the mainframe toward cloud-native database solutions can save organizations thousands of dollars every month.

Next, we have substituted App Connect with a combination of our open-source DFDL processor, the Cloud Pub/Sub service, and the open-source Apache Daffodil library. Pub/Sub provides the connection between the mainframe and the processor, and from the processor to the downstream applications. The Daffodil library helps in compiling schemas and outputting infosets for a given DFDL definition and message.
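To make the Pub/Sub wiring concrete, the following is a minimal, hedged sketch of the topics and subscription such a processor might sit between; all resource names are placeholders rather than names used by the open-source project.

```bash
# Topic the mainframe publishes raw DFDL-described messages to,
# plus the subscription the DFDL processor reads from.
gcloud pubsub topics create dfdl-input
gcloud pubsub subscriptions create dfdl-input-sub --topic=dfdl-input

# Topic where the processor publishes the resulting JSON for downstream consumers.
gcloud pubsub topics create dfdl-json-output

# Publish a sample placeholder message to exercise the pipeline end to end.
gcloud pubsub topics publish dfdl-input --message="SAMPLE_BINARY_PAYLOAD"
```

The processor service would then attach to dfdl-input-sub and publish its JSON output to dfdl-json-output for downstream applications.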
The total cost of employing the Pub/Sub service and the Daffodil library comes out to approximately $117 per month, which means an organization can save a minimum of $380 per month by using this solution. The table below shows a summary of the cost difference breakdown between the solutions discussed above.

How it works

The data described by the DFDL usually needs to be available in widely used formats such as JSON in order to be consumed by downstream applications, which might have already been migrated to a cloud-native environment. To achieve this, cloud-native applications or services can be implemented in conjunction with Google Cloud services to accept the textual or binary data from the mainframe as input, fetch the corresponding DFDL from a database, and finally compile and output the equivalent JSON for downstream applications to consume. The following diagram describes the high-level architecture:

An application can be built to process the information received from the mainframe, e.g. a DFDL Processor Service, leveraging the Daffodil API to parse the data against a corresponding DFDL schema and output the JSON. DFDL schema definitions can be migrated and stored in Firestore or Bigtable. Since these definitions rarely change and can be stored in a key-value format, the storage of preference is a non-relational managed database. Google Cloud Pub/Sub can provide an eventing mechanism that receives the binary or textual message from a data source, i.e. the mainframe, in a Pub/Sub topic. This allows the DFDL Processor to access the data, retrieve the corresponding DFDL definition from Firestore or Bigtable, and finally pass both on to the Daffodil API to compile and output the JSON result. The JSON result is finally published to a resulting Pub/Sub topic for any downstream application to consume. It is recommended to follow the CloudEvents schema specification, which allows events to be described in common formats, providing interoperability across services, platforms, and systems.

You can find examples of the implementation on GitHub:
- Firestore Example
- Bigtable Example

Conclusion

In this post, we have discussed different pipelines used to process data defined by DFDL, along with cost comparisons of these pipelines. Additionally, we have demonstrated how to use Cloud Pub/Sub, Firestore, and Bigtable to create a service that is capable of listening to binary event messages, extracting the corresponding DFDL definition from a managed database, and processing it to output JSON that can then be consumed by downstream applications using well-established technologies and libraries.

1. Price comparison analysis as of May 2022 and subject to change based on usage

Related article: 5 principles for cloud-native architecture—what it is and how to master it
Source: Google Cloud Platform

How a top gaming company transformed its approach to security with Splunk and Google Cloud

Since Aristocrat's founding in 1953, technology has constantly transformed gaming, and the digital demands on our gaming business are a far cry from the challenges we faced when we started. As we continue to expand globally, security and compliance are top priorities. Managing IT security for several gaming subsidiaries and our core business became more complex as we entered new markets and scaled up our number of users. We needed a centralized platform that could give us full visibility into all of our systems and efficient monitoring capabilities to keep data and applications secure. We also needed the ability to secure our systems without compromising user experiences.

We turned to Google Cloud and Splunk to better manage complexity and support highly efficient, secure, and more dynamic gaming experiences for everyone. We are committed to using today's modern technologies to give players more optimal experiences.

Bringing our digital footprint into the cloud

When we set out on our digital transformation, we looked to address many business requirements, including:

- Regulation: We wanted a platform that could efficiently address our industry's stringent, global regulatory compliance requirements.
- Player experience: Our IT environment must support smooth gaming experiences to keep users engaged and satisfied.
- Scalability: As we grow and diversify, meeting the changing demands of an increasingly global gaming community, we need an easily scalable platform that aligns with our current and future needs.

Google Cloud offered us the perfect foundation through solutions such as Compute Engine, Google Kubernetes Engine, BigQuery, and Google Cloud Storage. These acted as the right infrastructure components for us for the following reasons:

- Google Cloud is globally accessible and supports compliance, helping to streamline security and regulatory processes for our team. With Google Cloud, we can manage our entire development and delivery processes globally with fast and efficient reconciliation of regional compliance requirements. When we need to adjust existing infrastructure or deliver new capabilities, Google Cloud accelerates the process and takes the heavy lifting off of our team.
- Google Cloud allows us to support tens of thousands of players on each of our apps while experiencing minimal downtime and low latency. The importance of this support can't be underestimated in an industry where players have little to no patience for lags in games.

We migrated our back-office IT stack alongside our consumer-facing production applications to Google Cloud, given our positive experiences with compliance, security, scalability, and process management. This migration has significantly accelerated our digital transformation while streamlining our infrastructure for faster and more cost-effective performance.

In many ways, Google Cloud has been, pun perhaps intended, a game-changer for us. For instance, when we suddenly had to support a lot of remote work during the COVID-19 pandemic, native identity and access management tools in Google Cloud allowed us to retire costly VPNs used for backend access and quickly adopt a more easily managed, cost-effective zero-trust security posture.

Accessing vital third-party partners and managed services

Aristocrat has many IT needs best addressed in a multi-cloud environment. Google Cloud is particularly attractive given its strong cloud interoperability, as well as the many products and services available on Google Cloud Marketplace.
Accessing vital third-party partners and managed services

Aristocrat has many IT needs that are best addressed in a multi-cloud environment. Google Cloud is particularly attractive given its strong cloud interoperability and the many products and services available on Google Cloud Marketplace. The marketplace accelerated our deployment of key third-party apps, including Splunk and Qualys.

Given the personal information we store and the global regulatory compliance statutes we must abide by, security lies at the heart of our business. Splunk is a critical component of our digital transformation because it provides the enhanced monitoring capabilities and visibility we need. The integration between Splunk and Google Cloud gives us confidence that our data is secure, while simplified billing through Google Cloud Marketplace makes payments and license tracking easier for our procurement team.

As part of our protected environment, we use the Splunk platform as our security information and event management (SIEM) system, leveraging the InfoSec app for Splunk, which provides continuous monitoring and advanced threat detection to significantly improve our security. (A simplified example of sending an event to Splunk appears at the end of this post.)

We can manipulate and present data in Splunk in a way that gives us a single pane of glass across our hybrid, multi-cloud environment and our third-party apps and systems. Splunk observability tools have likewise helped us track browser-based applications, like our online gaming apps, to monitor security and performance.

Splunk and Google Cloud have transformed how we operate. By offloading software management to Splunk and Google Cloud, we can now quickly ingest and analyze data at scale within a refined approach to security management. This enables us to approach security more strategically and positions us to integrate more AI/ML capabilities into our products for even greater governance and performance.

This is just the beginning of our journey with Splunk and Google Cloud. We’re excited to see the innovation we can continue bringing to the gaming community worldwide.
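For readers who want a concrete picture of what feeding telemetry into Splunk can look like, here is a minimal, hypothetical sketch (not Aristocrat’s actual pipeline) that posts a single security event to Splunk’s HTTP Event Collector; the endpoint URL, token, and index name are placeholders.

# Hypothetical sketch: sending one application security event to Splunk's
# HTTP Event Collector (HEC). The endpoint, token, and index are placeholders.
import json
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

def send_security_event(event: dict) -> None:
    # Wrap the event in the HEC JSON envelope and post it with the HEC token.
    payload = {
        "event": event,
        "sourcetype": "_json",
        "index": "gaming_security",  # placeholder index name
    }
    response = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    response.raise_for_status()

if __name__ == "__main__":
    send_security_event({"action": "login_failure", "user": "player123", "source_ip": "203.0.113.7"})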
Source: Google Cloud Platform

Sharpen your machine learning skills at Google Cloud Applied ML Summit

Artificial intelligence (AI), and particularly machine learning (ML), continues to advance at a breakneck pace. We see it throughout projects and commentary across the broader technology industry. We see it in the amazing things our customers are doing, from creating friendly robots to aid childhood development, to leveraging data for better manufacturing and distribution, to fostering internal innovation through hackathons. And we see it in our own research and product development at Google, from improved machine learning models for our Speech API, to integrations that streamline data management and ML modeling, to making AlphaFold (DeepMind’s breakthrough protein structure prediction system) available to researchers throughout the world on Vertex AI.

At Google Cloud, we’ve helped thousands of companies accelerate their AI efforts, empower their data scientists, and extend the ability to build AI-driven apps and workflows to more people, including those without data science or ML expertise. Next month, we’ll take the next step in this journey with our customers at the Google Cloud Applied ML Summit. Join us June 9 for this digital event, which will bring together some of the world’s leading ML and data science professionals to explore the latest cutting-edge AI tools for developing, deploying, and managing ML models at scale.

On-demand sessions kick off at 9:00 AM Pacific with “Accelerating the deployment of predictable ML in production,” featuring VP & GM of Google Cloud AI & Industry Solutions Andrew Moore; Google Cloud Developer Advocate Priyanka Vergadia; Ford Director of AI and Cloud Bryan Goodman; and Uber AI Director of Engineering Smitha Shyam.

At the summit, you’ll learn how companies like General Mills, Vodafone, H&M, and CNA Insurance are developing, deploying, and safely managing long-running, self-improving AI services. Get insights in practitioner sessions where you can find new ways to:

Build reliable, standardized AI pipelines across Spark on Google Cloud, Dataproc, BigQuery, Dataplex, Looker, and more, with a unified experience from Vertex AI, all in the session “Data to deployment – 5x faster.”
Train high-quality ML models in minutes with AutoML innovations born of the latest Google Brain research, explored in the session “End-to-end AutoML for model prep.”
Make the most of your Google Cloud investments in Vertex AI Training and Vertex AI Prediction to deploy custom models built on TensorFlow, PyTorch, scikit-learn, XGBoost, and other frameworks. Check out the session “ML prediction and serving: Vertex AI roadmap.” (A short deployment sketch appears at the end of this post.)
Automate and monitor AI integration, deployment, and infrastructure management to drive greater speed and efficiency. Don’t miss the session “Machine learning operations (MLOps) strategy and roadmap.”
Streamline the process to audit, track, and govern ML models as they adapt to live data in a dynamic environment, without degrading performance. Dive into this topic in the session “Model governance and auditability.”

You can choose from over a dozen sessions across three tracks: “Data to ML Essentials,” “Fast-track Innovation,” and “Self-improving ML.” Session topics range from MLOps best practices to Google Cloud customer experiences to the importance of model auditability and explainable, responsible AI, with multiple customer panels and “ask me anything” sessions to help you get the insights and develop the skills to take your business’s ML efforts to the next level.

We’re committed to continuing to serve our customers in this rapidly evolving space, and we’re excited to learn and collaborate with you at this event. Register now to reserve your seat for the Applied ML Summit.
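To give a flavor of the custom-model serving workflow that sessions like “ML prediction and serving: Vertex AI roadmap” cover, here is a minimal, hypothetical sketch that registers a pre-trained scikit-learn model with Vertex AI and deploys it to an endpoint using the google-cloud-aiplatform SDK; the project, region, bucket path, and container image tag are placeholders and may need adjusting for current releases.

# Hypothetical sketch: deploying a pre-trained scikit-learn model to a Vertex AI
# endpoint with the google-cloud-aiplatform SDK. Project, region, bucket, and
# container image tag are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Register the model artifact (e.g. model.joblib under the bucket path) with a
# prebuilt scikit-learn serving container.
model = aiplatform.Model.upload(
    display_name="demo-sklearn-model",
    artifact_uri="gs://my-bucket/models/demo/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
)

# Deploy to a managed endpoint and request an online prediction.
endpoint = model.deploy(machine_type="n1-standard-2")
prediction = endpoint.predict(instances=[[5.1, 3.5, 1.4, 0.2]])
print(prediction.predictions)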
Source: Google Cloud Platform