10 questions to help boards safely maximize cloud opportunities

The accelerating pursuit of cloud-enabled digital transformations brings new growth opportunities to organizations, but also raises new challenges. To ensure that they can lock in newfound agility, quality improvements, and marketplace relevance, boards of directors must prioritize safe, secure, and compliant adoption processes that support this new technological environment. The adoption of cloud at scale by a large enterprise requires the orchestration of a number of significant activities, including:

- Rethinking how strategic outcomes leverage technology, and how to enable those outcomes by changing how software is designed, delivered, and managed across the organization.
- Refactoring security, controls, and risk governance processes to ensure that the organization stays within its risk appetite and in compliance with regulation during and following the transformation.
- Implementing new organizational and operating models to empower a broad and deep skills and capabilities uplift, and fostering the right culture for success.

As such, the organization across all lines of defense has significant work to do. The board of directors plays a key role in overseeing and supporting management on this journey, and our new paper is designed to provide a framework and handbook for boards of directors in that position. We provide a summary of our recommendations, in addition to a more detailed handbook. This paper complements two papers we published in 2021: The CISO's Guide to Cloud Security Transformation, and Risk Governance of Digital Transformation in the Cloud, which is a detailed guide for chief risk officers, chief compliance officers, and heads of internal audit.

We have identified 10 questions that we believe will help a board of directors hold a structured, meaningful discussion with their organization about its approach to cloud. With each question we have included additional points, as examples of what a good approach could look like, and potential red flags that might indicate all is not well with the program. At a high level, those questions are:

1. How is the use of cloud technology being governed within the organization? Is clear accountability assigned, and is there clarity of responsibility in decision-making structures?
2. How well does the use of cloud technology align with, and support, the technology and data strategy for the organization and, ideally, the overarching business strategy, so that the cloud approach can be tailored to achieve the right outcomes?
3. Is there a clear technical and architectural approach for the use of cloud that incorporates the controls necessary to ensure that infrastructure and applications are deployed and maintained in a secure state?
4. Has a skills and capabilities assessment been conducted, in order to determine what investments are needed across the organization?
5. How are the organizational structure and operating model evolving, both to fully leverage cloud and to increase the likelihood of a secure and compliant adoption?
6. How are risk and control frameworks being adjusted, with an emphasis on understanding how the organization's risk profile is changing and how the organization is staying within risk appetite?
7. How are independent risk and audit functions adjusting their approach in light of the organization's adoption of cloud?
8. How are regulators and other authorities being engaged, in order to keep them informed of the organization's strategy and of the plans for the migration of specific business processes and data sets?
9. How is the organization prioritizing resourcing to enable the adoption of cloud, while also maintaining adequate focus on managing existing and legacy technologies?
10. Is the organization consuming and adopting the cloud provider's set of best practices and leveraging the lessons the cloud provider will have learned from their other customers?

Our conclusions in this whitepaper have been guided by Google's years of leading and innovating in cloud security and risk management, and the experience that Google Cloud experts have gained from their previous roles in risk and control functions in large enterprises. The board of directors plays a critical role in overseeing any organization's cloud-enabled digital transformation. We recommend taking a structured approach to that oversight and asking the questions we pose in this whitepaper. We are excited to collaborate with you on the risk governance of your cloud transformation.
Source: Google Cloud Platform

Where is your Cloud Bigtable cluster spending its CPU?

CPU utilization is a key performance indicator for Cloud Bigtable, and understanding where CPU is spent is essential for optimizing Bigtable performance and cost. We have significantly improved Bigtable's observability by allowing you to visualize your Bigtable cluster's CPU utilization in more detail: you can now break utilization down by dimensions such as app profile, method, and table. This finer-grained reporting can help you make more informed application design choices and help with diagnosing performance-related incidents.

In this post, we present how this visibility may be used in the real world, through example persona-based user journeys.

User Journey: Investigate an incident with high latency
Target Persona: Site Reliability Engineer (SRE)

ABC Corp runs Cloud Bigtable in a multi-tenant environment; multiple teams at ABC Corp use the same Bigtable instance. Alice is an SRE at ABC Corp. She gets paged because the tail latency of a cluster exceeded the acceptable performance threshold. She looks at the cluster-level CPU utilization chart and sees that CPU usage spiked during the incident window.

[Chart: P99 latency for app profile personalization-reader spikes]
[Chart: CPU utilization for the cluster spikes]

Alice wants to drill down further to get more details about this spike. The primary question she wants to answer is "Which team should I be reaching out to?" Fortunately, teams at ABC Corp follow the best practice of tagging each team's usage with an app profile in the following format: <teamname>-<workload-type>

The Bigtable instance has the following app profiles:
- revenue-updater
- info-updater
- personalization-reader
- personalization-batch-updater

The instance's data is stored in the following tables:
- revenue
- client-info
- personalization

She uses the CPU per app profile chart to determine that the personalization-batch-updater app profile utilized the most CPU during the incident, and that its spike corresponded with the spike in latency of the serving path traffic under the personalization-reader app profile. At this point, Alice knows that the personalization-batch-updater traffic is adversely impacting the personalization-reader traffic. She digs further into the dashboards in Metrics Explorer to figure out the problematic method and table.

[Chart: CPU usage breakdown by app profile, table and method]

Alice has now identified the personalization-batch-updater app profile, the personalization table, and the MutateRows method as the reason for the increase in CPU utilization that is causing high tail latency for the serving path traffic. With this information, she reaches out to the personalization team to provision the cluster correctly before the batch job starts so that the performance of other tenants is not affected. The following options can be considered in this scenario:

1. Run the batch job on a replicated instance with multiple clusters. Provision a dedicated cluster for the batch job and use single-cluster routing to completely isolate the serving path traffic from the batch updates.
2. Provision more nodes for the cluster before the batch job starts and for the duration of the batch job. This option is less preferred than option 1, since serving path traffic may still be impacted; however, it is more cost effective.
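If you prefer to pull these numbers programmatically rather than reading them off a dashboard, you can query Cloud Monitoring directly. The minimal sketch below uses the Cloud Monitoring Python client to list the per-app-profile CPU time series for the last hour. The metric type and label names shown here are our assumption of how the new breakdown is exposed, and the project, instance, and cluster identifiers are placeholders; confirm the exact names in Metrics Explorer for your project.

```python
import time
from google.cloud import monitoring_v3

project_id = "my-project"  # hypothetical project ID
client = monitoring_v3.MetricServiceClient()

now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now - 3600)},  # last hour
    }
)

# Assumption: the breakdown is published under this metric type with
# app_profile / method / table labels. Verify in Metrics Explorer.
metric_filter = (
    'metric.type = "bigtable.googleapis.com/cluster/cpu_load_by_app_profile_by_method_by_table" '
    'AND resource.labels.instance = "my-instance"'
)

results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": metric_filter,
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

# Print the most recent point of each series, broken down by the new labels.
for series in results:
    labels = series.metric.labels
    latest = series.points[0].value.double_value
    print(
        f"cluster={series.resource.labels.get('cluster')} "
        f"app_profile={labels.get('app_profile')} "
        f"method={labels.get('method')} "
        f"table={labels.get('table')} "
        f"cpu_load={latest:.3f}"
    )
```

Sorting or aggregating this output by app profile gives the same "which team should I contact?" answer that Alice reads from the chart.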
User Journey: Schema and cost optimization
Target Persona: Developer

Bob is a developer who is onboarding a new workload on Bigtable. He completes the development of his feature and moves on to the performance benchmarking phase before releasing to production. He notices that both the throughput and the latency of his queries are worse than he expected and begins debugging the issue. His first step is to look at the CPU utilization of the cluster, which is higher than expected and is hovering around the recommended maximum.

[Chart: CPU utilization by cluster]

To debug further, he looks at the CPU utilization by app profile and CPU utilization by table charts. He determines that the majority of the CPU is consumed by the product-reader app profile and the product_info table.

[Chart: CPU utilization by app profile]
[Chart: CPU utilization by table]

He inspects the application code and notices that the query includes a value range filter. He realizes that value filters are expensive, so he moves the filtering to the application (see the sketch at the end of this post). This leads to a substantial decrease in Bigtable cluster CPU utilization. Consequently, not only does he improve performance, but he can also lower costs for the Bigtable cluster.

[Chart: CPU utilization by cluster after removing value range filter]
[Chart: CPU utilization by app profile after removing value range filter]
[Chart: CPU utilization by table after removing value range filter]

We hope that this blog helps you understand why and when you might want to use our new observability metrics – CPU per app profile, method, and table.

Accessing the metrics
These metrics can be accessed in the Bigtable Monitoring UI under the Tables and Application Profiles tabs. To see the method breakdown, view the metric in Metrics Explorer, which you can also navigate to from the Cloud Monitoring UI.
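As a rough illustration of the kind of change Bob made, here is a minimal sketch using the Bigtable Python client. The project, instance, table, column family, and qualifier names are hypothetical, and the byte comparison assumes values are encoded so that lexicographic order matches the intended order; the point is only to contrast a server-side value filter with client-side filtering.

```python
from google.cloud import bigtable
from google.cloud.bigtable import row_filters

client = bigtable.Client(project="my-project")   # hypothetical project ID
instance = client.instance("my-instance")        # hypothetical instance ID
table = instance.table("product_info")           # hypothetical table

# Before: server-side value range filter. Bigtable must read and compare
# every cell value, which consumes cluster CPU.
expensive_filter = row_filters.ValueRangeFilter(start_value=b"100", end_value=b"500")
matching_before = list(table.read_rows(filter_=expensive_filter))

# After: only narrow the read to the relevant column, then compare values
# in the application. The comparison cost moves off the Bigtable cluster.
rows = table.read_rows(filter_=row_filters.ColumnQualifierRegexFilter(b"price"))
matching_after = [
    row
    for row in rows
    # "product" is the assumed column family, "price" the assumed qualifier.
    if b"100" <= row.cells["product"][b"price"][0].value <= b"500"
]

print(len(matching_before), len(matching_after))
```

Whether this trade is worthwhile depends on how selective the filter is: if only a tiny fraction of rows match, shipping everything to the client may cost more in network and client CPU than it saves on the cluster.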
Source: Google Cloud Platform

How Bayer Crop Science uses BigQuery and geobeam to improve soil health

Bayer Crop Science uses Google Cloud to analyze billions of acres of land to better understand the characteristics of the soil that produces our food crops. Bayer's teams of data scientists are leveraging services from across Google Cloud to load, store, analyze, and visualize geospatial data to develop unique business insights. And because much of this important work is done using publicly available data, you can too!

Agencies such as the United States Geological Survey (USGS), National Oceanic and Atmospheric Administration (NOAA), and the National Weather Service (NWS) perform measurements of the earth's surface and atmosphere on a vast scale, and make this data available to the public. But it is up to the public to turn this data into insights and information. In this post, we'll walk you through some ways that Google Cloud services such as BigQuery and Dataflow make it easy for anyone to analyze earth observation data at scale.

Bringing data together
First, let's look at some of the datasets we have available. For this project, the Bayer team was very interested in one dataset in particular from ISRIC, a custodian of global soil information. ISRIC maps the spatial distribution of soil properties across the globe, and collects soil measurements such as pH, organic matter content, nitrogen levels, and much more. These measurements are encoded into "raster" files, which are large images where each pixel represents a location on the earth, and the "color" of the pixel represents the measured value at that location. You can think of each raster as a layer, which typically corresponds to a table in a database. Many earth observation datasets are made available as rasters, and they are excellent for storing gridded data such as point measurements, but it can be difficult to understand spatial relationships between different areas of a raster, and between multiple raster tiles and layers.

Processing data into insights
To help with this, Bayer used Dataflow with geobeam to do the heavy lifting of converting the rasters into vector data by turning them into polygons, reprojecting them to the WGS 84 coordinate system used by BigQuery, and generating h3 indexes to help us connect the dots — literally. Polygonization in particular is a very complex operation and its difficulty scales exponentially with file size, but Dataflow is able to divide and conquer by splitting large raster files into smaller blocks and processing them in parallel at massive scale. You can process any amount of data this way, at a scale and speed that is not possible on any single machine using traditional GIS tools. What's best is that this is all done on the fly with minimal custom programming. Once the raster data is polygonized, reprojected, and fully discombobulated, the vector data is written directly to BigQuery tables from Dataflow.

Once the data is loaded into BigQuery, Bayer uses BigQuery GIS and the h3 indexes computed by geobeam to join the data across multiple tables and create a single view of all of their soil layers (see the sketch below). From this single view, Bayer can analyze the combined data, visualize all the layers at once using BigQuery GeoViz, and apply machine learning models to look for patterns that humans might not see.

[Screenshot: Bayer's soil analysis in GeoViz]
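To make the "single view" idea concrete, here is a minimal sketch of what such a join can look like when run through the BigQuery Python client. The project, dataset, table, and column names are hypothetical; the only assumption is that each layer table carries the h3 index that geobeam computed for its polygons.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical layer tables, one per soil property, each keyed by h3_index.
query = """
SELECT
  ph.h3_index,
  ph.geom,
  ph.value AS ph,
  n.value  AS nitrogen,
  o.value  AS organic_matter
FROM `my-project.soil.ph` AS ph
JOIN `my-project.soil.nitrogen` AS n
  ON n.h3_index = ph.h3_index
JOIN `my-project.soil.organic_matter` AS o
  ON o.h3_index = ph.h3_index
"""

# Each result row is one grid cell with all soil layers side by side,
# ready to visualize in GeoViz or feed into a model.
for row in client.query(query).result():
    print(row.h3_index, row.ph, row.nitrogen, row.organic_matter)
```

Because every layer shares the same h3 grid, the join is a plain equality join on the index rather than an expensive spatial join, which is what makes combining many layers at scale practical.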
Using geospatial insights to improve the business
The soil grid data is essential to help characterize the soil properties of the crop growth environments experienced by Bayer's customers. Bayer can compute soil environmental scenarios for global crop lands to better understand what their customers experience, in order to aid in testing network optimization, product characterization, and precision product design. It also advances Bayer's real-world objectives by enabling them to characterize the soil properties of their internal testing network fields to help establish a global testing network and enable environmental similarity calculations and historical modeling.

It's easy to see why developing spatial insights for planting crops is game-changing for Bayer Crop Science, and these same strategies and tools can be used across a variety of industries and businesses.

Google's mission is to organize the world's information and make it universally accessible and useful, and we're excited to work with customers like Bayer Crop Science who want to harness their data to build products that are beneficial to their customers and the environment. To get started building amazing geospatial applications for your business, check out our reference guide to learn more about geospatial capabilities in Google Cloud, and open BigQuery in the Google Cloud console to get started using BigQuery and geobeam for your geospatial workloads.
Source: Google Cloud Platform

The Google Cloud DevOps Awards: Final call for submissions!

DevOps continues to be a major business accelerator for our customers, and we continually see success from customers applying DevOps Research and Assessment (DORA) principles and findings to their organization. This is why the first annual DevOps Awards is targeted at recognizing customers shaping the future of DevOps with DORA. Share your inspirational story, supported by examples of business transformation and operational excellence, today.

With input from over 32,000 professionals worldwide and seven years of research, the Accelerate State of DevOps Report is the largest and longest-running DevOps research of its kind. The different categories of DevOps Awards map closely to the practices and capabilities that drive high performance, as identified by the report. Organizations, irrespective of their size, industry, and region, are able to apply to one or all ten categories. Please find the categories and their descriptions below:

- Optimizing for speed without sacrificing stability: This award recognizes one Google Cloud customer that has driven improvements in speed without sacrificing quality.
- Embracing easy-to-use tools to improve remote productivity: The research showcases how high-performing engineers are 1.5 times more likely to have easy-to-use tools. To be eligible for this award, share your stories on how easy-to-use DevOps tools have helped you improve engineer productivity.
- Mastering effective disaster recovery: This award will go to one customer that can demonstrate how a robust, well-tested disaster recovery (DR) plan can protect business operations.
- Leveraging loosely coupled architecture: This award recognizes one customer that successfully transitioned from a tightly coupled architecture to service-oriented and microservice architectures.
- Unleashing the full power of the cloud: This award recognizes a Google Cloud customer leveraging all five capabilities of cloud computing to improve software delivery and organizational performance. Specifically, these five capabilities are: on-demand self-service, broad network access, measured service, rapid elasticity, and resource pooling. Read more about the five essential characteristics of cloud computing.
- Most improved documentation quality: This award recognizes one customer that has successfully integrated documentation into their DevOps workflow using Google Cloud Platform tools.
- Reducing burnout during COVID-19: We will recognize one customer that implemented effective processes to improve work/life balance, foster a healthy DevOps culture, and ultimately prevent burnout.
- Utilizing IT operations to drive informed business decisions: This award will go to one customer that employed DevOps best practices to break down silos between development and operations teams.
- Driving inclusion and diversity in DevOps: To highlight the importance of a diverse organization, this award honors one Google Cloud customer that either prioritizes diversity and inclusion initiatives for their organization to transform and strengthen their business, or creates unique solutions to help build a more diverse, inclusive, and accessible workplace for their customers, leading to higher levels of engagement, productivity, and innovation.
- Accelerating DevOps with DORA: This award recognizes one customer that has successfully integrated the most DORA practices and capabilities into their workflow using Google Cloud Platform tools.

This is your chance to show your innovation globally and become a role model for the industry.
Winners will receive invitations to roundtables and discussions, press materials, website and social badges, special announcements, and even a trophy. We are excited to see all your great submissions. Applications are open until January 31st, so apply for the category that best suits your company and stay tuned for our awards show in February 2022! For more information on the awards, visit our webpage and check out The Google Cloud DevOps Awards Guidebook.
Source: Google Cloud Platform

Data governance in the cloud – part 1 – People and processes

In this blog, we'll cover data governance as it relates to managing data in the cloud. We'll discuss the operating model, which is independent of technologies whether on-prem or cloud; the processes needed to ensure governance; and finally the technologies that are available to ensure data governance in the cloud. This is a two-part blog on data governance. In this first part, we'll discuss the role of data governance, why it's important, and the processes that need to be implemented to run an effective data governance program. In the second part, we'll dive into the tools and technologies that are available to implement data governance processes, e.g. data quality, data discovery, tracking lineage, and security. For an in-depth and comprehensive text on data governance, check Data Governance: People, Processes, and Tools to Operationalize Data Trustworthiness.

What is Data Governance?
Data governance is a function of data management that creates value for the organization by implementing processes to ensure high data quality, and provides a platform that makes it easier to share data securely across the organization while ensuring compliance with all regulations. The goal of data governance is to maximize the value derived from data, build user trust, and ensure compliance by implementing the required security measures.

Data governance needs to be in place from the time a piece of data is collected or generated until the point in time at which that data is retired. Along the way, over the full lifecycle of the data, data governance focuses on making the data available to all stakeholders in a form that they can readily access and use in a manner that generates the desired business outcomes (insights, analysis) and, if relevant, conforms to regulatory standards. These regulatory standards are often an intersection of industry (e.g. healthcare), government (e.g. privacy), and company (e.g. non-partisan) rules and codes of behavior. See more details here.

Why is Data Governance Important?
In the last decade, data generated by users on mobile phones, health and fitness devices, IoT devices, retail beacons, and the like has grown exponentially. At the same time, the cloud has made it easier to collect, store, and analyze this data at a lower cost. As the volume of data and the adoption of cloud continue to grow, organizations face a dual mandate: democratize and embed data in all decision making while ensuring that it is secured and protected from unauthorized use. An effective data governance program is needed to implement this dual mandate, making the organization data-driven on one hand and securing data from unauthorized use on the other. Organizations without an effective data governance program will suffer from compliance violations leading to fines; poor data quality, which leads to lower-quality insights impacting business decisions; challenges in finding data, which result in delayed analysis and missed business opportunities; and poorly trained AI models, which reduce model accuracy and the benefits of using AI.

An effective data governance strategy encompasses people, processes, and tools and technologies.
It drives data democratization to embed data in all decision making, builds user trust, increases brand value, and reduces the chances of compliance violations, which can lead to substantial fines and loss of business.

Components of Data Governance

People and Roles in Data Governance
A comprehensive data governance program starts with a data governance council composed of leaders representing each business unit in the organization. This council establishes the high-level governing principles for how data will be used to drive business decisions. The council, with the help of key people in each business function, identifies the data domains, e.g. customer, product, patient, and provider. The council then assigns data ownership and stewardship roles for each data domain. These are senior-level roles, and each owner is held accountable and accordingly rewarded for driving the data goals set by the data governance council. Data owners and stewards are assigned from the business; for example, the customer data owner may be from marketing or sales, the finance data owner from finance, and the HR data owner from HR.

The role of IT is that of data custodian. IT ensures the data is acquired, protected, stored, and shared according to the policies specified by data owners. As data custodian, IT does not make decisions on data access or data sharing; its role is limited to managing technology to support the implementation of the data management policies set by data owners.

Processes in Data Governance
Each organization will establish processes to drive towards the implementation of the goals set by the data governance council. The processes are established by data owners and data stewards for each of their data domains, and they focus on the following high-level goals:

1. Data meets the specified data quality standards, e.g. 98% completeness, no more than 0.1% duplicate values, 99.99% consistent data across different tables, and what constitutes on-time delivery.
2. Data security policies ensure compliance with internal and external policies:
   - Data is encrypted at rest and on the wire.
   - Data access is limited to authorized users only.
   - All sensitive data fields are redacted or encrypted, and dynamically decrypted only for authorized users.
   - Data can be joined for analytics in de-identified form, e.g. using deterministic encryption or hashing (see the sketch after this list).
   - Audits are available for authorized access as well as unauthorized attempts.
3. Data sharing with external partners is available securely via APIs.
4. Compliance with industry- and geo-specific regulations, e.g. HIPAA, PCI DSS, GDPR, CCPA, LGPD.
5. Data replication is minimized.
6. Centralized data discovery is available to data users via data catalogs.
7. Data lineage can be traced to identify data quality issues and data replication sources, and to help with audits.
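To make the de-identified join concrete, here is a minimal sketch of deterministic pseudonymization using a keyed hash (HMAC). It is an illustration of the technique only, not a specific Google Cloud feature: the column names are hypothetical, and in practice the key would live in a key management service rather than in code.

```python
import hmac
import hashlib

# In production this key would come from a KMS or secret manager, not source code.
PSEUDONYMIZATION_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Deterministically hash an identifier so equal inputs map to equal tokens."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Two datasets that should be joinable without exposing the raw customer ID.
orders = [{"customer_id": "C-1001", "amount": 42.50}]
support_tickets = [{"customer_id": "C-1001", "priority": "high"}]

orders_deid = [
    {"customer_token": pseudonymize(r["customer_id"]), "amount": r["amount"]}
    for r in orders
]
tickets_deid = [
    {"customer_token": pseudonymize(r["customer_id"]), "priority": r["priority"]}
    for r in support_tickets
]

# Because the hash is deterministic, the two de-identified tables still join
# on customer_token even though neither contains the original identifier.
joined = [
    {**o, **t}
    for o in orders_deid
    for t in tickets_deid
    if o["customer_token"] == t["customer_token"]
]
print(joined)
```

The same idea carries over to warehouse-scale joins: as long as every dataset applies the same keyed transformation to the identifier, analysts can join de-identified tables without ever handling the raw values.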
Technology
Implementing the processes specified in the data governance program requires the use of technology. From securing data and retaining and reporting audits to automating monitoring and alerts, multiple technologies are integrated to manage the data lifecycle. In Google Cloud, a comprehensive set of tools enables organizations to manage their data securely and drive data democratization. Data Catalog enables users to easily find data from one centralized place across Google Cloud. Data Fusion tracks lineage, so data owners can trace data at every point in the data lifecycle and fix issues that may be corrupting data. Cloud Audit Logs retain the audits needed for compliance. Dataplex provides intelligent data management, centralized security and governance, automatic data discovery, metadata harvesting, lifecycle management, and data quality with built-in AI-driven intelligence.

We will discuss the use of these tools and technologies to implement governance in part 2 of this blog.
Source: Google Cloud Platform

Megatrends drive cloud adoption—and improve security for all

We are often asked if the cloud is more secure than on-premises infrastructure. The quick answer is that, in general, it is. The complete answer is more nuanced and is grounded in a series of cloud security "megatrends" that drive technological innovation and improve the overall security posture of cloud providers and customers.

An on-prem environment can, with a lot of effort, have the same default level of security as a reputable cloud provider's infrastructure. Conversely, a weak cloud configuration can give rise to many security issues. But in general, the base security of the cloud coupled with a suitably protected customer configuration is stronger than most on-prem environments. Google Cloud's baseline security architecture adheres to zero-trust principles—the idea that every network, device, person, and service is untrusted until it proves itself. It also relies on defense in depth, with multiple layers of controls and capabilities to protect against the impact of configuration errors and attacks. At Google Cloud, we prioritize security by design and have a team of security engineers who work continuously to deliver secure products and customer controls. Additionally, we take advantage of industry megatrends that increase cloud security further, outpacing the security of on-prem infrastructure.

These eight megatrends compound the security advantages of the cloud compared with on-prem environments (or at least those that are not part of a distributed or trusted partner cloud). IT decision-makers should pay close attention to these megatrends because they're not just transient issues to be ignored once 2023 rolls around—they guide the development of cloud security and technology, and will continue to do so for the foreseeable future. At a high level, these eight megatrends are:

- Economy of scale: Decreasing the marginal cost of security raises the baseline level of security.
- Shared fate: A flywheel of increasing trust drives more transition to the cloud, which compels even higher security and even more skin in the game from the cloud provider.
- Healthy competition: The race by deep-pocketed cloud providers to create and implement leading security technologies is the tip of the spear of innovation.
- Cloud as the digital immune system: Every security update the cloud gives the customer is informed by some threat, vulnerability, or new attack technique, often identified by someone else's experience. Enterprise IT leaders use this accelerating feedback loop to get better protection.
- Software-defined infrastructure: Cloud is software-defined, so it can be dynamically configured without customers having to manage hardware placement or cope with administrative toil. From a security standpoint, that means specifying security policies as code and continuously monitoring their effectiveness.
- Increasing deployment velocity: Because of cloud's vast scale, providers have had to automate software deployments and updates, usually with automated continuous integration/continuous deployment (CI/CD) systems. That same automation delivers security enhancements, resulting in more frequent security updates.
- Simplicity: Cloud becomes an abstraction-generating machine for identifying, creating, and deploying simpler default modes of operating securely and autonomically.
- Sovereignty meets sustainability: The cloud's global scale and ability to operate in localized and distributed ways creates three pillars of sovereignty. This global scale can also be leveraged to improve energy efficiency.
Let's look at these megatrends in more depth.

Economy of scale: Decreasing marginal cost of security
Public clouds are of sufficient scale to implement levels of security and resilience that few organizations have previously constructed. At Google, we run a global network, and we build our own systems, networks, storage, and software stacks. We equip all of this with a level of default security that has not been seen before, from our Titan security chips, which assure a secure boot, to pervasive data-in-transit and data-at-rest encryption, to confidential computing nodes that encrypt data even while it's in use.

We prioritize security, of course, but prioritizing security becomes easier and cheaper because the cost of an individual control at such scale decreases per unit of deployment. As the scale increases, the unit cost of control goes down. As the unit cost goes down, it becomes cheaper to put those increasing baseline controls everywhere. Finally, where there is necessary incremental cost to support specific configurations, enhanced security features, and services that support customer security operations and updates, even the per-unit cost of that will decrease. It may be chargeable, but it is still a lower cost than on-prem services, whose economics are going in the other direction. Cloud is, therefore, the strategic epitome of raising the security baseline by reducing the cost of control. The measurable level of security can't help but increase.

Shared fate: The flywheel of cloud expansion
The long-standing shared responsibility model is conceptually correct: the cloud provider offers a secure base infrastructure (security of the cloud), and the customer configures their services on that in a secure way (security in the cloud). But if the shared responsibility model is used more to allocate responsibility when incidents occur and less as a means of understanding mutual collective responsibility, then we are not living up to mutual expectations or responsibility.

Taking a broader view of a "shared responsibility" model, we should use such a model to create a mutually beneficial shared fate. We're in this together. We know that if our customers are not secure, then we as cloud providers are collectively not successful. This shared fate extends beyond just Google Cloud and our customers—it affects all the clouds, because a trust issue in one impacts the trust in all. If that trust issue makes the cloud "look bad," then current and potential future customers might shy away from the cloud, which ultimately puts them in a less-secure position. This is why our security mission is a triad of Secure the Cloud (not only Google Cloud), Secure the Customer (shared fate), and Secure the Planet (and beyond).

Further, "shared fate" goes beyond just the reality of shared consequences. We view this as a philosophy of deeply caring about customer security, which gives rise over time to elements like:

- Secure-by-default configurations. Our default configurations ensure security basics have been enabled and that all customers start from a high security baseline, even if some customers change that later.
- Secure blueprints. Highly opinionated configurations for assemblies of products and services in secure-by-default ways, with actual configuration code, so customers can more easily bootstrap a secure cloud environment.
- Secure policy hierarchies. Setting policy intent at one level in an application environment should automatically configure down the stack, so there are no surprises or additional toil in lower-level security settings.
- Consistent availability of advanced security features. Providing advanced features to customers across a product suite, and making them available for new products at launch, is part of the balancing act between faster new launches and the need for security consistency across the platform. We reduce the risks customers face by consistently providing advanced security features.
- High-assurance attestation of controls. We provide this through compliance certifications, audit content, regulatory compliance support, and configuration transparency for ratings and insurance coverage from partners such as our Risk Protection Program.

Shared fate drives a flywheel of cloud adoption. Visibility into the presence of strong default controls and transparency into their operation increase customer confidence, which in turn drives more workloads coming onto cloud. The presence of and potential for more sensitive workloads in turn inspires the development of even stronger default protections that benefit customers.

Healthy competition: The race to the top
The pace and extent of security feature enhancement to products is accelerating across the industry. This massive, global-scale competition to keep increasing security in tandem with agility and productivity is a benefit to all. For the first time in history, we have companies with vast resources working hard to deliver better security, as well as more precise and consistent ways of helping customers manage security. While some are ahead of others, perhaps sustainably so, what is consistent is that cloud will always lead on-prem environments, which have less of a competitive impetus to provide progressively better security. On-prem may not ever go away completely, but cloud competition drives security innovation in a way that on-prem hasn't and won't.

Cloud as the digital immune system: Benefit for the many from the needs of the few(er)
Security improvements in the cloud happen for several reasons:

- The cloud provider's large number of security researchers and engineers postulate a need for an improvement based on a deep theoretical and practical knowledge of attacks.
- A cloud provider with significant visibility into the global threat landscape applies knowledge of threat actors and their evolving attack tactics to drive not just specific new countermeasures but also means of defeating whole classes of attacks.
- A cloud provider deploys red teams and world-leading vulnerability researchers to constantly probe for weaknesses that are then mitigated across the platform.
- The cloud provider's software engineers often incorporate and curate open-source software, and often support the community to drive improvements for the benefit of all.
- The cloud provider embraces vulnerability discovery and bug bounty programs to attract many of the world's best independent security researchers.
- And, perhaps most importantly, the cloud provider partners with many of its customers' security teams, who have a deep understanding of their own security needs, to drive security enhancements and new features across the platform.

This is a vast, global forcing function of security enhancements which, given the other megatrends, is applied relatively quickly and cost-effectively.
If the customer's organization cannot apply this level of resources, and realistically even some of the biggest organizations can't, then an optimal security strategy is to embrace every security feature update the cloud provides to protect networks, systems, and data. It's like tapping into a global digital immune system.

Software-defined infrastructure: Continuous controls monitoring vs. policy intent
One source of the comparative advantage of the cloud over on-prem is that it is a software-defined infrastructure. This is a particular advantage for security, since configuration in the cloud is inherently declarative and programmatically applied. This also means that configuration code can be overlaid with embedded policy intent (policy-as-code and controls-as-code). The customer validates their configuration by analysis, and then can continuously assure that the configuration corresponds to reality. They can model changes and apply them with less operating risk, permitting phased-in changes and experiments. As a result, they can take more aggressive stances and apply tighter controls with less reliability risk. This means they can easily add more controls to their environment and update it continuously. This is another example of where cloud security aligns fully with business and technology agility.

The BeyondProd model and SLSA framework are prime examples of how our software-defined infrastructure has helped improve cloud security. BeyondProd and the BeyondCorp framework apply zero-trust principles to protecting cloud services. Just as not all users are in the same physical location or using the same devices, developers do not all deploy code to the same environment. BeyondProd enables microservices to run securely with granular controls in public clouds, private clouds, and third-party hosted services. The SLSA framework applies this approach to the complex nature of modern software development and deployment. Developed in collaboration with the Open Source Security Foundation, the SLSA framework formalizes criteria for software supply chain integrity. That's no small hill to climb, given that today's software is made up of code, binaries, networked APIs, and their assorted configuration files.

Managing security in a software-defined infrastructure means the customer can intrinsically deliver continuous controls monitoring and constant inventory assurance, and can operate at an "efficient frontier" of a highly secure environment without having to incur significant operating risks.
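As a rough illustration of what "controls-as-code" with continuous monitoring can look like, here is a minimal, hypothetical sketch: policy intent expressed as data, plus a check that can run on a schedule against a configuration inventory exported from the environment. This is a pattern sketch only, not a Google Cloud API; real deployments would rely on the provider's policy and asset-inventory tooling.

```python
from dataclasses import dataclass

# Declarative policy intent, kept in version control alongside infrastructure code.
POLICY = {
    "require_encryption_at_rest": True,
    "allow_public_buckets": False,
    "max_firewall_open_ports": 3,
}

@dataclass
class ResourceConfig:
    """A simplified snapshot of one deployed resource's security-relevant settings."""
    name: str
    encrypted_at_rest: bool
    publicly_readable: bool
    open_ports: int

def evaluate(resource: ResourceConfig, policy: dict) -> list[str]:
    """Return the list of policy violations for a single resource."""
    violations = []
    if policy["require_encryption_at_rest"] and not resource.encrypted_at_rest:
        violations.append(f"{resource.name}: encryption at rest is disabled")
    if not policy["allow_public_buckets"] and resource.publicly_readable:
        violations.append(f"{resource.name}: resource is publicly readable")
    if resource.open_ports > policy["max_firewall_open_ports"]:
        violations.append(f"{resource.name}: too many open ports ({resource.open_ports})")
    return violations

# In a real pipeline this inventory would be exported from the environment on a
# schedule; here it is hard-coded purely for illustration.
inventory = [
    ResourceConfig("orders-bucket", encrypted_at_rest=True, publicly_readable=False, open_ports=0),
    ResourceConfig("legacy-vm", encrypted_at_rest=False, publicly_readable=False, open_ports=5),
]

for resource in inventory:
    for violation in evaluate(resource, POLICY):
        print("POLICY VIOLATION:", violation)
```

The value of the pattern is that the policy, the evaluation logic, and the evidence of compliance all live in code, so the same check can gate a change before deployment and re-run continuously afterwards.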
Increasing deployment velocity
Cloud providers use a continuous integration/continuous deployment model. This is a necessity for enabling innovation through frequent improvements, including security updates supported by a consistent version of products everywhere, as well as for achieving reliability at scale. Cloud security and other mechanisms are API-based and uniform across products, which enables the management of configuration in programmatic ways—also known as configuration-as-code. When configuration-as-code is combined with the overall nature of cloud as a software-defined infrastructure, it enables customers to implement CI/CD approaches for software deployment and configuration, bringing consistency to their use of the cloud. This automation and increased velocity decreases the time customers spend waiting for fixes and features to be applied. That includes the speed of deploying security features and updates, and it permits fast rollback for any reason. Ultimately, this means that the customer can move even faster yet with demonstrably less risk—eating and having your cake, as it were. Overall, we find deployment velocity to be a critical tool for strong security.

Simplicity: Cloud as an abstraction machine
A common concern about moving to the cloud is that it's too complex. Admittedly, starting from scratch and learning all the features the cloud offers may seem daunting. Yet even today's feature-rich cloud offerings are much simpler than prior on-prem environments—which are far less robust. The perception of complexity comes from people being exposed to the scope of the whole platform, despite more abstraction of the underlying platform configuration. In on-prem environments, there are large teams of network engineers, system administrators, system programmers, software developers, security engineers, storage admins, and many more roles and teams, each with their own domain or silo to operate in. That loose-flying collection of technologies, with its myriad configuration options and incompatibilities, required a degree of artisanal engineering that represents more complexity and less security and resilience than customers will encounter in the cloud.

Cloud is only going to get simpler, because the market rewards cloud providers for abstraction and autonomic operations. In turn, this permits more scale and more use, creating a relentless hunt for abstraction. Like our digital immune system analogy, the customer should see the cloud as an abstraction pattern-generating machine: it takes the best operational innovations from tens of thousands of customers and assimilates them for the benefit of everyone. The increased simplicity and abstraction permit more explicit assertion of security policy in more precise and expressive ways, applied in the right context. Simply put, simplicity removes more potential surprise—and security issues are often rooted in surprise.

Sovereignty meets sustainability: Global to local
The cloud's global scale and ability to operate in localized and distributed ways creates three potential pillars of sovereignty, which will be increasingly important in all jurisdictions and sectors. It can intrinsically support the need for national or regional controls, limits on data access, delegation of certain operations, and means for greater portability across services. The global footprint of many cloud providers means that cloud can more easily meet national or regional deployment needs. Workloads can be more easily deployed to more energy-efficient infrastructures. That, coupled with cloud's inherent efficiency due to higher resource utilization, means cloud is more sustainable overall. By engaging with customers and policymakers across these pillars, we can provide solutions that address their requirements while optimizing for additional considerations like functionality, cost, infrastructure consistency, and developer experience.

Data sovereignty provides customers with a mechanism to prevent the provider from accessing their data, approving access only for specific provider behaviors that customers think are necessary. Examples of customer controls provided by Google Cloud include storing and managing encryption keys outside the cloud, giving customers the power to grant access to these keys only on the basis of detailed access justifications, and protecting data-in-use. With these features, the customer is the ultimate arbiter of access to their data.
Operational sovereignty provides customers with assurances that the people working at a cloud provider cannot compromise customer workloads. The customer benefits from the scale of a multi-tenant environment while preserving control similar to a traditional on-prem environment. Examples of these controls include restricting the deployment of new resources to specific provider regions, and limiting support personnel access based on predefined attributes such as citizenship or a particular geographic location.

Software sovereignty provides customers with assurances that they can control the availability of their workloads and run them wherever they want, without being dependent on or locked in to a single cloud provider. This includes the ability to survive events that require them to quickly change where their workloads are deployed and what level of outside connection is allowed. This is only possible when two requirements are met, both of which simplify workload management and mitigate concentration risks: first, when customers have access to platforms that embrace open APIs and services; and second, when customers have access to technologies that support the deployment of applications across many platforms, in a full range of configurations including multi-cloud, hybrid, and on-prem, using orchestration tooling. Examples of these controls are platforms that allow customers to manage workloads across providers, and orchestration tooling that allows customers to create a single API that can be backed by applications running on different providers, including proprietary cloud-based and open-source alternatives.

This overall approach also provides a means for organizations (and groups of organizations that make up a sector or national critical infrastructure) to manage concentration risks. They can do this either by relying on the increased regional and zonal isolation mechanisms in the cloud, or through improved means of configuring resilient multi-cloud services. This is also why the commitment to open source and open standards is so important.

The bottom line is that cloud computing megatrends will propel security forward faster, for less cost and less effort, than any other security initiative. With the help of these megatrends, the advantage of cloud security over on-prem is inevitable.
Source: Google Cloud Platform

Carrefour Belgium: Driving a seamless digital experience with SAP on Google Cloud

Stijn Stabel's first days as CTO of Carrefour Belgium were… challenging. "The data center was out of date, with a roof that leaked," he recalls. Not quite what one would expect from the eighth-largest retailer in the world. Carrefour is a household name in Europe, Asia, and the Middle East, operating over 12,000 hypermarkets, groceries, and convenience stores in more than 30 countries. Carrefour Belgium has more than 10,000 employees operating 700 stores along with an eCommerce business.

Nonetheless, Stabel's goals were ambitious: "Our goal is to become a digital retail company," he says. "We want to move quickly from being a slow mover in digital transformation to becoming trailblazing. That's one of the coolest challenges you can have as a CTO."

Three years later, Carrefour Belgium is well along the path to achieving that goal, having migrated nearly all of its SAP application stack to Google Cloud, including finance, HR, and other systems. "We're really going headfirst and full-on. It's a huge challenge, but it's definitely one of the most exciting transformations I have seen so far," he says.

The challenges that Carrefour Belgium faced went beyond an aging data center. With systems divided between two private clouds, there was no efficient way to leverage data for the advanced analytics Stabel knew the company would need to compete. "The world is changing at a record pace," he says. "Either you're keeping up with that or you're not. Standing still means basically choosing to give up on the company's progress — and at some point, to really give up on the company altogether." This is especially true, he says, when it comes to creating a seamless customer experience both online and in stores. "Everything in the store will be digital," he says. "How much longer are people going to put up with price tags printed on paper that have to be changed over and over? How long will it be acceptable to maintain a large ecological footprint? Sustainability will be increasingly important."

Olivier Luxon, Carrefour Belgium's CIO, agrees, emphasizing the centrality of customer experience in everything the company does and how quickly customer needs shifted due to the global pandemic. "What we really saw with COVID was customers seeking more digital services. That had been a trend previously, but it accelerated dramatically with COVID."

The decision to move SAP to Google Cloud
After researching its options, Carrefour Belgium chose a "Google-first" cloud strategy for four reasons:

- Partnership: "Google listens to our needs and adapts to them instead of trying to box us in," Stabel says.
- Technology: Google's analytical and other tools, particularly BigQuery, were the deciding factor when it came to choosing a cloud provider for its SAP systems. "There's no sound reason to move your SAP environment to a cloud host other than the one you're going to use for analytics," he says, noting that the company will eventually bring all of its SAP data into BigQuery for analytics. "Our data team is working on building out a foundation based on BigQuery, which will enable them to work on multiple use cases such as assortment planning and promotion. The goal is to become a truly data-driven company."
- Security: Carrefour Belgium is implementing BeyondCorp, the zero trust security solution from Google Cloud. By shifting access controls from the network perimeter to individual users, BeyondCorp enables secure work from virtually any location without the need for a traditional VPN. "We're going to be the first Carrefour division moving to that platform, so we're very excited to be blazing the trail," Stabel says.
- Value: "I have to report to a CFO, so partnership and technology alone are not enough to make a business case," Stabel says. "I have to show demonstrable business value, and that's what we get with Google Cloud."

Carrefour Belgium's migration strategy has been to lift and shift its legacy SAP ERP Central Component (ECC) environment before doing a greenfield implementation of S/4HANA on Google Cloud, a process that is already underway. Currently, the HR module has been upgraded to S/4HANA, with retail operations to follow.

More performance, better experience, greater insights
It's still early days, but the move to Google Cloud has already paid dividends in improved performance for back-office operations, which, Olivier points out, frees time and resources to devote to serving customers better. Eventually, he feels that SAP on Google Cloud will have a direct impact on customer experience, particularly given the opportunities that data analytics will provide to better understand customer needs and meet them more effectively. "Data is becoming more and more important, not only for Carrefour, but for all the companies in the world," Olivier says. "It drives personalized customer experience, promotions, operational efficiency, and finance. If you don't set up the right data architecture on Day One, it will be close to impossible to be efficient as a company a few years from now."

In the end, the goal is to provide Carrefour Belgium the tools it needs to serve customers better. "SAP supports our business by giving us the right tools and processes to manage areas including supply chain, finance, HR, and retail," Olivier says. "What was missing, however, was the availability, scalability, and security we needed to better serve our employees, stores and customers, and that's something we got by moving to Google Cloud." And by moving to Google Cloud — which has been carbon neutral since 2007 and is committed to operating entirely carbon-free by 2030 — Carrefour is also able to pursue its sustainability objectives simply by modernizing its business operations in the cloud.

"Google is a company that eats, breathes, and sleeps digital," Stabel says. "At its heart, Carrefour is a retail company. We know how to be a retailer. Our partnership is a cross-pollination. What I'm really looking forward to is continuing to learn from Google Cloud and see what other solutions we can adopt to improve Carrefour Belgium and better serve our users and customers."

Learn more about Carrefour Belgium's deployment and how you can accelerate your organization's digital transformation by moving your SAP environment to Google Cloud.
Source: Google Cloud Platform

Top 10 takeaways from Looker’s 2021 JOIN@Home conference

JOIN@Home was an incredible celebration of the achievements that the Looker community made in the last year, and I was proud to be a part of it. Prominent leaders in the data world shared their successes, tips, and plans for the future. In the spirit of keeping the learning alive, I summarized the top two takeaways from each of the keynotes. They're accompanied by illustrations that were captured live during the sessions by a local artist. Plus, there's a fun surprise for you at the end.

"Celebrating Data Heroes – Transforming Our World with Data"
Our opening keynote featured a number of inspiring data professionals who use Looker in their work every day to see trends, drive decision making, and grow their customer base. Some of their main takeaways were:

You can use analytics to make change for the greater good.
Surgeon scientist Dr. Cherisse Berry spoke of cross-referencing healthcare outcomes data, like trauma care survival rates, how long patients wait before being seen, and whether patients were appropriately triaged, with demographic data to find gender and racial disparities in healthcare. For instance, she found that critically injured women receive trauma care less often than men. Because her analysis made the disparity known, informed decisions and actions can be taken to bring greater equality to New York state's trauma care system.

Provide templates to make insights more easily available to more users, especially non-technical ones.
Michelle Yurovsky of UIPath, an automation platform that helps customers avoid repetitive tasks, shared one of the key ways UIPath gets customers engaged: by providing dashboard templates that answer common automation questions. Customers get useful insights the second they click on the product. They can copy and modify the templates according to their business needs, so they're less intimidated to start working with analytics – especially if they have no previous experience building dashboards.

"Developing a Better Future with Data"
This keynote looked to the future of analytics. Two major themes were:

Composable analytics capabilities help make application development faster, easier, and more accessible.
Composable analytics means creating a custom analytics solution using readily available components. You have access to composable analytics with Looker through the extension framework, which offers downloadable components you can use to build your application right on top of the Looker platform. Filter and visualization components enable you to more easily create the visual side of these data experiences.

Augmented analytics help make it easier to handle the scale and complexity of data in modern business – and to make smarter decisions about probable future outcomes.
Augmented analytics generate sophisticated analyses, created by integrating machine learning (ML) and artificial intelligence (AI) with data. The Looker team has worked to make augmented analytics more accessible to everyone this year. In particular, new Blocks give you access to ML insights through the familiar Looker interface, enabling you to more quickly prototype ML- and AI-driven solutions. For instance, the Time-series Forecasting Block (which uses BigQuery ML) can be installed to give analysts deeper insights into future demand for better inventory and supply chain management; a sketch of what forecasting with BigQuery ML involves follows below. CCAI Insights gives call centers access to Contact Center AI Insights data with analysis they can use immediately.
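For readers curious what time-series forecasting with BigQuery ML looks like underneath a Block like this, here is a minimal sketch run through the BigQuery Python client. It is not the Block's internals, just the general pattern: train an ARIMA_PLUS model on a demand table and query a 30-day forecast. The project, dataset, table, and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a time-series model on daily demand (hypothetical table and columns).
client.query("""
CREATE OR REPLACE MODEL `my-project.demand.daily_demand_forecast`
OPTIONS(
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'order_date',
  time_series_data_col = 'units_sold',
  time_series_id_col = 'sku'
) AS
SELECT order_date, sku, units_sold
FROM `my-project.demand.daily_sales`
""").result()

# Forecast the next 30 days per SKU, with a 90% prediction interval.
forecast = client.query("""
SELECT sku, forecast_timestamp, forecast_value,
       prediction_interval_lower_bound, prediction_interval_upper_bound
FROM ML.FORECAST(
  MODEL `my-project.demand.daily_demand_forecast`,
  STRUCT(30 AS horizon, 0.9 AS confidence_level)
)
""").result()

for row in forecast:
    print(row.sku, row.forecast_timestamp, round(row.forecast_value, 1))
```

A Looker Block layered on top of results like these is what turns the raw forecast table into dashboards that analysts can explore without writing SQL.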
"The Looker Difference"
Product Managers Ani Jain and Tej Toor highlighted many recent features you might find useful for activating and enabling users with Looker. Here are two moments that stood out:

Giving your teams better starting points can lead to more engagement with analytics.
Two improved ways to find insights from this year: Quick Starts and board improvements. Quick Starts function as pre-built Explore pages that your users can open with a click, helping to make ad hoc analysis more accessible and less intimidating. They're also a convenient way to save an analysis you find yourself doing frequently – and they even save your filter settings. And, with new navigation improvements in Looker, boards are easier to find and use. Now you can pin links to a board, whether it's a dashboard, a Look, an Explore, or something else, including external links. So go ahead: try your hand at creating a useful data hub for your team with a new board.

Natural language processing and Looker can help you make sense of relationships within data, quickly.
A great example of this is the Healthcare NLP API Block, which creates an interactive user interface where healthcare providers, payers, pharma companies, and others in the healthcare industry can more easily access intelligent insights. Under the hood, this Block works on top of the GCP Healthcare NLP API, an API offering pre-trained natural language models to extract medical concepts and relationships from medical text. The NLP API helps to structure the data, and the Looker Block can make the insights within that data more accessible.

"Building and Monetizing Custom Data Experiences with Looker"
Pedro Arellano, Product Director at Looker, and Jawad Laraqui, CEO of Boston-based consultancy Data Driven, chatted about embedded analytics, the remarkable speed with which one can build data applications with Looker, and monetization strategies. Points you don't want to miss from this one:

Looker can help you augment an existing customer experience and create a new revenue stream with embedded data.
For example, you can provide personalized insights to a customer engaged with your product, or automate business processes, such as using data to trigger a service order workflow when an issue is encountered with a particular product (a small sketch of this pattern follows below). Embedding data in these ways can make the customer experience smoother all around. To take it a step further, you can monetize a data product you build to help create a new revenue stream.

Building for the Looker Marketplace can help you find more customers for your app and can promote a better user experience.
Jawad compared using the extension framework to build for the Looker Marketplace to having an app in the Apple store. Being in the Marketplace is a way for customers to find and use his product organically, and it helps give the end users a streamlined experience. He said: "We were able to quickly copy and paste our whole application from a stand-alone website into something that is inside of Looker. And we did this quickly—one developer did this in one day. It's a lot easier than you think, so I encourage everyone to give it a try. Just go build!"
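Here is a minimal sketch of the "data triggers a workflow" idea using the Looker Python SDK. The Look ID, the field names, the 5% threshold, and the create_service_order function are all hypothetical placeholders; only the SDK entry points (init40 and run_look) are standard, and your own instance's authentication and models will differ.

```python
import json
import looker_sdk  # pip install looker-sdk; reads credentials from looker.ini or env vars

sdk = looker_sdk.init40()  # authenticate against the Looker API

def create_service_order(product_id: str, error_rate: float) -> None:
    """Hypothetical hook into a ticketing or field-service system."""
    print(f"Opening service order for {product_id} (error rate {error_rate:.1%})")

# Run a saved Look that reports per-product error rates (Look ID is illustrative).
rows = json.loads(sdk.run_look(look_id="42", result_format="json"))

# Trigger the downstream workflow for any product whose error rate crosses a threshold.
for row in rows:
    error_rate = float(row["products.error_rate"])
    if error_rate > 0.05:
        create_service_order(row["products.id"], error_rate)
```

Run on a schedule (or replaced by a Looker alert or action), a small script like this is the glue between governed Looker data and an operational system such as service orders.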
This layer sits atop multiple sources of data and standardizes common metrics and definitions, so it can be governed and fed into modern, built-in business intelligence (BI) interactive dashboards, connected into familiar tools such as Google Sheets, and surfaced in other BI tools where users work; we're calling this the unified semantic model. Capabilities to look out for:

Take advantage of Persistent Derived Table (PDT) upgrades that improve the end-user experience.

You can use incremental PDTs to capture data updates without rebuilding the whole table, meaning your users get fresh data more regularly with a lower load on your data warehouse. It's also now possible to validate PDT build status in development mode, giving you the visibility needed to decide when to push updates to production. Coming soon, you'll be able to run an impact analysis on proposed changes, with visualized dependencies between PDTs.

Reach users where they are with Connected Sheets and other BI tools.

Coming soon, you'll be able to explore Looker data in Google Sheets and share charts to Slides, too. And with Governed BI Connectors, Looker can act as a source of truth for users who are accustomed to interacting with data in Tableau, Power BI, and Google Data Studio. You can sign up to hear when the Connected Sheets and Looker integration is available, or separately to hear about preview availability for Governed BI Connectors.

Hackathon

Speaking of interesting new developments, here's your fun surprise: a hackathon recap with a new chart you can use in your own analytics. The Looker developer community came together to create innovative Looker projects at this year's JOIN hackathon, Hack@Home 2021. The event gave participants access to the latest Looker features and resources to create tools useful for all Looker developers. The Nearly Best Hack Winner demonstrated how easy it is to make custom visualizations by creating an animated bar chart race visualization that anyone can use. The Best Hack Winner showcased the power of the Looker extension framework with a Looker application that conveniently writes CSV data into Looker database connections.

You can still view all the keynotes, as well as the breakout sessions and learning deep dives, on demand on the JOIN@Home content hub. They'll be available through the end of the month, so go soak up the learning while you can.
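And if the hackathon has you in the mood to write some code of your own, here's one more bonus: a minimal, hypothetical sketch of pulling governed results out of Looker programmatically with the Looker Python SDK. The Look ID and file name are placeholders, and this is a general illustration of the pattern rather than anything shown at JOIN.

```python
# A minimal sketch: fetch the results of a saved Look as CSV using the Looker Python SDK.
# Assumes LOOKERSDK_BASE_URL, LOOKERSDK_CLIENT_ID, and LOOKERSDK_CLIENT_SECRET are set in
# the environment; the Look ID below is a hypothetical placeholder.
import looker_sdk

sdk = looker_sdk.init40()  # authenticate against the Looker API 4.0

# Pull the governed, modeled results behind a saved Look as CSV text.
csv_data = sdk.run_look(look_id="42", result_format="csv")

# Hand the CSV off to whatever tool your users already work in.
with open("daily_metrics.csv", "w") as f:
    f.write(csv_data)
```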
Source: Google Cloud Platform

Want multi-cluster Kubernetes without all the cost and overhead? Here’s how

Editor's note: Today, we hear from Mengliao Wang, Senior Software Developer, Team Lead at Geotab, a leading provider of fleet management hardware and software solutions. Read on to hear how the company is expanding on its adoption of Google Cloud to deliver new services for its customers by leveraging Google Kubernetes Engine (GKE) multi-cluster features.

Geotab's customers ask a lot of our platform: They use it to gain insights from vast amounts of telemetry data collected from their fleet vehicles. They rely on it to adhere to strict data privacy requirements. And, because our customers are located all over the world, they need the platform to address their data residency and other jurisdictional processing requirements, which require compute and storage to live within a specific geographic region. Meanwhile, as a managed service provider, we need a cost-efficient business model — that was certainly a driving factor for adopting containers and GKE. As we started architecting the deployment of multiple clusters to support our customers' data residency requirements, we determined we also needed to explore ways to reduce the total operational maintenance and cost of our multi-cluster environment.

To meet customers where they are, we moved forward with running GKE clusters in multiple Google Cloud Platform regions. At the same time, we recently began using GKE multi-cluster services, which provides our customers with the security and low latency they need while giving us cost savings and an easy-to-maintain solution. Read on to learn more about Geotab, our journey to Google Cloud and GKE, and, more recently, how we deployed multi-cluster Kubernetes using GKE multi-cluster services.

The rise of connected fleet vehicles

“By 2024, 82% of all manufactured vehicles will be equipped with embedded telematics.” — Berg Insight

As a global leader in IoT and connected transportation, Geotab is advancing security, connecting commercial vehicles to the internet, and providing web-based analytics to help customers better manage their fleet vehicles. With over 2.5 million connected vehicles and billions of data points processed per day, we leverage data analytics and machine learning to support our customers in several ways. We help them improve productivity, optimize fleets by reducing fuel consumption, enhance driver safety, maintain compliance with regulatory changes, and meet sustainability goals. Geotab partners with Original Equipment Manufacturers (OEMs) to help expand customers' fleet management capabilities through access to the Geotab platform.

Our journey to Google Cloud and GKE

We originally chose Google Cloud as our primary cloud provider because we found it to be the most stable of the cloud providers we tried, with the least unscheduled downtime. End-to-end reliability is an important consideration for our customers' safety and their confidence in Geotab's driver-assistance features. Since getting started on our public cloud journey, we've leveraged Google Cloud to modernize different aspects of the Geotab platform. First, we embarked on a multi-milestone, multi-year initiative to modernize the Geotab Data Platform, adopting a container-based architecture built on open source technologies; we continue to leverage Google Cloud services to launch innovative solutions that combine analytics and access to massive data volumes for better transportation planning decisions.
Today, the Geotab Data Platform is built entirely on GKE, with multiple services such as data ingestion, data digestion, data processing, monitoring and alerting, a management console, and several applications. We are now leveraging this modern platform to introduce new Geotab services to our customers.

Exploring multi-cluster Kubernetes

As discussed above, we recently began deploying our GKE clusters into multiple regions to meet our customers' performance and data residency requirements. However, not every service that makes up the Geotab platform is created equal. For example, data digestion and data ingestion services are at the core of the data platform. Data digestion services are the Application Programming Interfaces (APIs), machine learning models, and business intelligence (BI) tools that consume data from the data environment for various data analysis purposes and are served directly to customers. Data ingestion services ingest billions of telematics data records per day from Geotab GO devices and are responsible for persisting them into our data environment.

But when looking at optimizing operating costs, we identified several services outside of the data platform that do not process sensitive customer information; our monitoring and alerting services are examples. Duplicating these services in multiple regions would drive up infrastructure costs and add maintenance complexity and overhead. We decided to deploy the services that do not process any customer data as shared services in a dedicated cluster. Not only does this lower the cost of resources, it also makes the environment easier to manage from an operational perspective. However, this approach introduced two new challenges:

- Services such as data ingestion and data digestion that run in each jurisdiction needed to expose their metrics outside of their cluster to make them available to the shared services (monitoring and alerting, management console) running on the shared cluster, raising security concerns.
- Since metrics would no longer be passing within a cluster subnetwork, they would travel via the public network, resulting in higher latency as well as additional security concerns.

This is where GKE Multi-cluster Services (MCS) came in, solving both concerns without introducing any new architectural components for us to configure and maintain. MCS is a cross-cluster service discovery and invocation mechanism built into GKE that extends the capabilities of the standard Kubernetes Service object. Services that are configured to be exported with MCS are discoverable and accessible across all clusters within a fleet of clusters via a virtual IP address, matching the behavior of a ClusterIP Service that is accessible within a single cluster. With MCS, we do not need to expose public endpoints, and all traffic is routed within the Google network. (A minimal configuration sketch follows below.)

With MCS configured, we get the best of both worlds: services on the shared cluster and on the regionally hosted clusters communicate as if they were all hosted in one cluster. Problem solved!

Reflecting on the journey

Our modernization journey on Google Cloud continues to pay dividends. During the first phase of our journey, we reaped the benefits of being able to scale up our systems with less downtime. With GKE features like MCS, we are able to reduce the time required to roll out new features to our global customers while addressing our business objectives to manage operating costs.
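As referenced above, here is a minimal sketch of what exporting a service with MCS can look like, using the Kubernetes Python client to create a ServiceExport object. The namespace and service name are hypothetical placeholders, and this illustrates the general pattern rather than Geotab's actual configuration; MCS also needs to be enabled on the fleet for the export to take effect.

```python
# A minimal sketch: export an existing Service ("metrics-exporter") from one GKE cluster so
# other clusters in the same fleet can reach it via MCS. Names are hypothetical; assumes MCS
# is enabled for the fleet and that kubeconfig points at the exporting cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
api = client.CustomObjectsApi()

service_export = {
    "apiVersion": "net.gke.io/v1",
    "kind": "ServiceExport",
    "metadata": {
        "name": "metrics-exporter",   # must match the name of the Service being exported
        "namespace": "monitoring",    # must match the Service's namespace
    },
}

# Creating the ServiceExport registers the Service with the fleet, making it importable
# by the other member clusters behind a fleet-wide virtual IP.
api.create_namespaced_custom_object(
    group="net.gke.io",
    version="v1",
    namespace="monitoring",
    plural="serviceexports",
    body=service_export,
)
```

Once exported, the service typically becomes reachable from the other clusters in the fleet at a clusterset DNS name such as metrics-exporter.monitoring.svc.clusterset.local, with no public endpoint involved.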
We look forward to continuing on our multi-cluster journey with Google Cloud and GKE. Are you interested in learning more about how GKE multi-cluster services can help with your Kubernetes multi-cluster challenges? Check out this guide to configuring multi-cluster services, or reach out to a Google Cloud expert — we're eager to help!

Related article: Driving change: How Geotab is modernizing applications with Google Cloud
Source: Google Cloud Platform

Are you a multicloud engineer yet? The case for building skills on more than one cloud

Over the past few months, I made the choice to move from the AWS ecosystem to Google Cloud — both great clouds! — and I think it's made me a stronger, more well-rounded technologist. But I'm just one data point in a big trend. Multicloud is an inevitability in medium-to-large organizations at this point, as I and others have been saying for a while now. As IT footprints get more complex, you should expect to see a broader range of cloud provider requirements showing up where you work and interview. Ready or not, multicloud is happening.

In fact, HashiCorp's recent State of Cloud Strategy Survey found that 76% of employers are already using multiple clouds in some fashion, with more than 50% flagging lack of skills among their employees as a top challenge to survival in the cloud. That spells opportunity for you as an engineer. But with limited time and bandwidth, where do you place your bets to ensure that you're staying competitive in this ever-cloudier world?

You could pick one cloud to get good at and stick with it; that's a perfectly valid career bet. (And if you do bet your career on one cloud, you should totally pick Google Cloud! I have reasons!) But in this post I'm arguing that expanding your scope of professional fluency to at least two of the three major US cloud providers (Google Cloud, AWS, Microsoft Azure) opens up some unique, future-optimized career options.

What do I mean by “multicloud fluency”?

For the sake of this discussion, I'm defining “multicloud fluency” as a level of familiarity with each cloud that would enable you to, say, pass the flagship professional-level certification offered by that cloud provider: for example, Google Cloud's Professional Cloud Architect certification or AWS's Certified Solutions Architect Professional. Notably, I am not saying that multicloud fluency implies experience maintaining production workloads on more than one cloud, and I'll clarify why in a minute.

How does multicloud fluency make you a better cloud engineer?

I asked the cloud community on Twitter to give me some examples of how knowledge of multiple clouds has helped their careers, and dozens of engineers responded with a great discussion. Turns out that even if you never incorporate services from multiple clouds in the same project — and many people don't! — there's still value in understanding how the other cloud lives.

Learning the lingua franca of cloud

I like this framing of the different cloud providers as “Romance languages”: as with human languages in the same family tree, clouds share many of the same conceptual building blocks. Adults learn primarily by analogy to things we've already encountered. Just as learning one programming language makes it easier to learn more, learning one cloud reduces your ramp-up time on others.

More than just helping you absorb new information faster, understanding the strengths and tradeoffs of different cloud providers can help you make the best choice of services and architectures for new projects. I actually remember struggling with this at times when I worked for a consulting shop that focused exclusively on AWS. A client would ask, “What if we did this on Azure?” and I really didn't have the context to be sure. But if you have a solid foundational understanding of the landscape across the major providers, you can feel confident — and inspire confidence! — in your technical choices.

Becoming a unicorn

To be clear, this level of awareness isn't common among engineering talent.
That's why people with multicloud chops are often considered “unicorns” in the hiring market. Want to stand out in 2022? Show that you're conversant in more than just one cloud. At the very least, it expands the market for your skills to include companies that focus on each of the clouds you know.

Taking that idea to its extreme, some of the biggest advocates for the value of a multicloud resumé are consultants, which makes sense given that they often work on different clouds depending on the client project of the week. Lynn Langit, an independent consultant and one of the cloud technologists I most respect, estimates that she spends about 40% of her consulting time on Google Cloud, 40% on AWS, and 20% on Azure. Fluency across providers lets her select the engagements that are most interesting to her and allows her to recommend the technology that provides the greatest value.

But don't get me wrong: multicloud skills can also be great for your career progression if you work on an in-house engineering team. As companies' cloud posture becomes more complex, they need technical leaders and decision-makers who comprehend their full cloud footprint. Want to become a principal engineer or engineering manager at a mid-to-large-sized enterprise or growing startup? Those roles require an organization-wide understanding of your technology landscape, and that's probably going to include services from more than one cloud.

How to multicloud-ify your career

We've established that some familiarity with multiple clouds expands your career options. But learning one cloud can seem daunting enough, especially if it's not part of your current day job. How do you chart a multicloud career path that doesn't end with you spreading yourself too thin to be effective at anything?

Get good at the core concepts

Yes, all the clouds are different. But they share many of the same basic approaches to IAM, virtual networking, high availability, and more. These are portable fundamentals that you can move between clouds as needed. If you're new to cloud, an associate-level solutions architect certification will help you cover the basics. Make sure to do hands-on labs to help make the concepts real, though — we learn much more by doing than by reading.

Go deep on your primary cloud

Fundamentals aside, it's really important that you have a native level of fluency in one cloud provider. You may have the opportunity to pick up multicloud skills on the job, but to get a cloud engineering role you're almost certainly going to need to show significant expertise on a specific cloud.

Note: If you're brand new to cloud and not sure which provider to start with, my biased (but informed) recommendation is to give Google Cloud a try. It has a free tier that won't bill you until you give permission, and the nifty project structure makes it really easy to spin up and tear down different test environments.

It's worth noting that engineering teams specialize, too; everybody has loose ends, but they'll often try to standardize on one cloud provider as much as they can. If you work on such a team, take advantage of the opportunity to get as much hands-on experience with their preferred cloud as possible.

Go broad on your secondary cloud

You may have heard of the concept of T-shaped skills. A well-rounded developer is broadly familiar with a range of relevant technologies (the horizontal part of the “T”), and an expert in a deep, specific niche. You can think of your skills on your primary cloud provider as the deep part of your “T”.
(Actually, let's be real — even a single cloud has too many services for any one person to hold in their heads at an expert level. Your niche is likely to be a subset of your primary cloud's services: say, security or data.)

We could put this a different way: build on your primary cloud, get certified on your secondary. This gives you hirable expertise on your “native” cloud and situational awareness of the rest of the market. As opportunities come up to build on that secondary cloud, you'll be ready.

I should add that several people have emphasized to me that they sense diminishing returns when keeping up with more than one secondary cloud. At some point the cognitive switching gets overwhelming and the additional learning doesn't add much value. Perhaps the sweet spot looks like this: 1 < 2 > 3 (two clouds beats both one and three).

Bet on cloud-native services and multicloud tooling

The whole point of building on the cloud is to take advantage of what the cloud does best — and usually that means leveraging powerful, native managed services like Spanner and Vertex AI. On the other hand, the cloud ecosystem has now matured to the point where fantastic, open source multicloud management tooling for wrangling those provider-specific services is readily available. (Doing containers on cloud? Probably using Kubernetes! Looking for a DevOps role? The team is probably looking for Terraform expertise no matter what cloud they major on.) By investing learning time in some of these cross-cloud tools, you open even more doors to build interesting things with the team of your choice.

Multicloud and you

When I moved into the Google Cloud world after years of being an AWS Hero, I made sure to follow a new set of Google Cloud voices like Stephanie Wong and Richard Seroter. But I didn't ghost my AWS-using friends, either! I'm a better technologist (and a better community member) when I keep up with both ecosystems.

“But I can hardly keep up with the firehose of features and updates coming from Cloud A. How will I be able to add in Cloud B?” Accept that you can't know everything. Nobody does. Use your broad knowledge of cloud fundamentals as an index, read the docs frequently for services that you use a lot, and keep your awareness of your secondary cloud fresh:

- Follow a few trusted voices who can help you filter the signal from the noise.
- Attend a virtual event once a quarter or so; it's never been easier to access live learning.
- Build a weekend side project that puts your skills into practice.

Ultimately, you (not your team or their technology choices!) are responsible for the trajectory of your career. If this post has raised career questions that I can help answer, please feel free to hit me up on Twitter. Let's continue the conversation.

Related article: Five do's and don'ts of multicloud, according to the experts. We talked with experts about why to do multicloud, and how to do it right.
Source: Google Cloud Platform