Cloud CISO Perspectives: June 2021

It’s been another busy month for security teams around the globe with no signs of slowing down. Many of us virtually attended RSA, and ransomware attacks continue to dominate headlines. The Biden Administration’s Executive Order on Cybersecurity is officially underway, with important milestones like the NIST workshops where many of us discussed the Standards and Guidelines to Enhance Software Supply Chain Security. In this month’s post I’ll recap these topics, the latest security updates from our Google Cloud product teams, and more. Don’t forget we have a new newsletter sign-up for this series, so you can get the latest updates delivered to your inbox.

Thoughts from around the industry

Post-RSA takeaways: Resilience was the theme of this year’s event, and it’s something we need to think about throughout the rest of this year and beyond. Last year I wrote about general resilience, and how one of the common mistakes many organizations make is to think that resilience can be achieved simply by writing down plans and procedures for what to do and how to respond to specific events. As recent cyber events have shown us, there are problems with this approach. Most crises or significant events are unique, for example, and if an organization can only respond to what is in a plan or procedure, the muscle memory needed for an agile response may not be there. What we need instead is a set of foundational capabilities across prevention, detection, response, and recovery that are continuously exercised and improved. As everyone is learning, scenario-specific plans are necessary, but real resilience comes from organizational muscle memory joined with continuously tested people, process, and technology capabilities that can be adapted to meet any challenge.

Zero trust was also a hot topic during this year’s event.
Between the COVID-19 pandemic and recent cyber attacks, it’s promising to see that organizations everywhere are now realizing they need a comprehensive and modern zero trust access approach that removes overreliance on the network perimeter to protect themselves against a variety of threats. For Google, zero trust is more than a marketing buzzword or trend to attach to—it’s how we have operated and helped to protect our internal operations over the last decade with our BeyondCorp framework. We will continue to improve upon the industry standard with our lessons learned, so that other organizations can benefit from zero trust access platforms with BeyondCorp Enterprise and move toward a safer security posture.

Ransomware: From Colonial Pipeline to JBS, rarely a day goes by without a new attack in the news. The reality is that many of these problems stem from a lack of rigor in implementing a range of basic technology controls. We’re at an inflection point where both the private and public sector need to work together to prioritize the right defenses against these rising threats. We think it’s a mistake to assume one control or one product can be the solution to ransomware. Many organizations have started to realize you need an array of controls working together to create and sustain a defensible security posture. We recently highlighted our recommendations to protect against ransomware based on the National Institute of Standards and Technology (NIST) primary pillars for a successful and comprehensive cybersecurity program.

Securing open source software: The Open Source team at Google recently announced an incredibly useful exploratory visualization site called Open Source Insights, which provides an interactive view of the dependencies of open source projects, including both first-layer and transitive dependencies.
This is an extremely important effort for the industry, especially as more and more organizations rely on open source software for critical aspects of their environments. While the benefits of open source software are clear, challenges persist. Take for example the complexity of the supply chain; open source software projects often have many hundreds of dependencies. Open Source Insights gives developers a comprehensive visualization of a project’s dependencies and their properties and vulnerabilities. This includes interactive visualizations for developers to analyze transitive dependency graphs, and a comparison tool to highlight how different versions of a package might affect their dependencies by introducing or removing licenses, fixing security problems, or changing the packages’ own dependencies. While much more work and research is needed in this space, Open Source Insights is a critical step in helping secure the open source software supply chain.

The EU Cloud Code of Conduct: While it went into force in 2018, the EU’s General Data Protection Regulation (GDPR) remains firmly top of mind as organizations use the cloud for processing of sensitive data. Providers like Google Cloud are often asked derivatives of the question “how can we be sure you’re taking appropriate measures to safeguard data under the GDPR?” We now have a definitive answer. The EU GDPR Cloud Code of Conduct (CoC) is a mechanism for cloud providers to demonstrate how they offer sufficient guarantees to implement appropriate technical and organizational measures as data processors under the GDPR. The Belgian Data Protection Authority, based on a positive opinion by the European Data Protection Board (EDPB), last month approved the CoC, a product of years of constructive collaboration between the cloud computing community, the European Commission, and European data protection authorities.
This is the first European code approved under the GDPR; it is excellent news for the industry to have a new transparency and accountability tool that helps promote trust in the cloud. We are proud to say that Google Cloud Platform and Google Workspace already adhere to these provisions.

Spotlight on the Administration’s Executive Order on Cybersecurity

The Administration’s recent Executive Order to shore up our nation’s cyber defenses is an important milestone for both public and private sector organizations. At Google, we are deeply committed to advancing cybersecurity issues and believe that government officials shouldn’t have to tackle these issues on their own. Importantly, the EO makes critical strides to help modernize government technology, advance security innovation, and improve standards for secure software development. We’ve already shared our perspective with the government and will continue to advocate on these issues in the coming months.

Modernization and security innovation: One of the most promising aspects of the government’s approach is to set agencies and departments on a path to modernize security practices and strengthen cyber defenses across the federal government. For too long, the public sector has tried to solve security challenges by spending more on security products, but as recent events have proved, spending billions of dollars on cybersecurity on an unmodernized IT platform is like building on sand. We strongly support this push toward modernization and agree with the government’s focus on making security simple and scalable, by default.
Modernizing not only builds cybersecurity at a foundational level but also gives the federal government the opportunity to diversify its vendors, which can lead to improved resilience.

Secure software development: Earlier this month Google participated in the NIST workshops and submitted position papers on how the industry can enhance software supply chain security. We believe that the government’s call to action on secure software development practices could bring about the most significant progress on cybersecurity in a decade and will likely have the biggest impact on the government’s risk posture in the long term. To further the adoption of supply chain integrity best practices, Google, in collaboration with the OpenSSF, has proposed Supply-chain Levels for Software Artifacts (SLSA) to formalize criteria around software supply chain integrity. We look forward to continuing to collaborate and engage with the Administration on this important work.

Google Cloud Security highlights

Google Cloud named a Leader in Forrester Wave™: Unstructured Data Security Platforms: Providing effective controls to protect sensitive data in the cloud is a core part of our Google Cloud product strategy, and unstructured data in particular can be challenging to secure. Given the importance of these capabilities to our customers, we were happy to see that Forrester Research named Google Cloud a Leader in The Forrester Wave™: Unstructured Data Security Platforms, Q2 2021 report. The report evaluated the 11 most significant providers with platform solutions to secure and protect unstructured data, spanning cloud providers to data security-focused vendors. Google Cloud rated highest in the current offering category among all the providers evaluated and received the highest possible score in sixteen criteria.
A copy of the full report can be viewed here.

Security benefits of a Data Cloud: Last month, we held our first Data Cloud Summit, where we announced three new services as part of our database and data analytics portfolio to provide organizations with a unified data platform: Dataplex, Analytics Hub, and Datastream. Security professionals often default to using only security-branded tools, but some of the best tools for security teams may be the data and analytics products that are key to other business functions within the organization. Digital technologies like AI, ML, and data can be used to power innovation, especially for security efforts. At Google, security is the cornerstone of our product strategy, and our customers can take advantage of the same secure-by-design infrastructure, built-in data protection, and global network that we use to ensure compliance, redundancy, and reliability.

New features to secure your Cloud Run environments: Cloud Run makes developing and deploying containerized applications easier for developers. We announced several new ways to help make Cloud Run environments more secure based on enhanced integrations with Secret Manager, Binary Authorization, Cloud KMS, and Recommendation Hub.

Advanced counter-abuse and threat analysis features in Google Workspace: We continue to add controls and capabilities for Workspace admins to protect their users and organizations against threats and abuse. We recently added features that enrich security alerts with VirusTotal threat context and reputation data, enable blocking of abusive users and bulk removal of content they’ve shared in Drive, and allow programmatic blocking of third-party API access.

That wraps up another month of thoughts and highlights.
If you’d like to have this Cloud CISO Perspectives post delivered every month to your inbox, click here to sign up. Next month, we’ll be busy hosting our first digital Security Summit, where you can hear from industry leaders and engage in interactive sessions that can help you solve your most critical security challenges. Be sure to register and tune in to the great event we have planned!
Source: Google Cloud Platform

Creating custom financial indices with Dataflow and Apache Beam

Financial institutions across the globe rely on real-time indices to inform real-time portfolio valuations, to provide benchmarks for other investments, and as a basis for passive investment instruments including exchange-traded products (ETPs). This reliance is growing—the index industry dramatically expanded in 2020, reaching revenues of $4.08 billion.

Today, indices are calculated and distributed by index providers with proximity and access to underlying asset data, and with differentiating real-time data processing capabilities. These providers offer subscriptions to real-time feeds of index prices and publish the constituents, calculation methodology, and update frequency for each index. But as new assets, markets, and data sources have proliferated, financial institutions have developed new requirements: they need to quickly create bespoke, frequently updating indices that represent a specific actual or theoretical portfolio, with its unique constituents and weightings. In other words, existing index providers and other financial institutions alike will need mechanisms for rapid creation of real-time indices. This blog post’s focus—an index publication pipeline collaboratively developed by CME Group and Google Cloud—is an example of such a mechanism. The pipeline closely approximates a particular CME Group index benchmark, but with far greater frequency (in near real time vs. daily) than its official counterpart. It does so by leveraging open-source frameworks such as Apache Beam and cloud-based technologies such as Dataflow, which automatically scales pipelines based on inbound data volume.

Machine learning’s production problem

In the past decade, advances in AI toolchains have enabled faster ML model training—and yet a majority of ML models are still not making it into production.
As organizations endeavor to develop their ML capabilities, they soon realize that a real-world ML system consists of a small amount of ML code embedded in a network of large and complex ancillary components. Each component brings its own development and operational challenges, which are met by bringing a DevOps methodology to the ML system, commonly referred to as MLOps (Machine Learning Operations). To apply ML to business problems, a firm must develop continuous delivery and automation pipelines for ML.

This index publication collaboration is instructive because it demonstrates MLOps best practices for just such a pipeline. One Apache Beam pipeline, suited for operating on both batch and streaming data, extracts insights and packages them for downstream consumers. These consumers may include ML pipelines that, thanks to Apache Beam, require only one code path for inference across batch and real-time data sources. The pipeline runs inside Google Cloud’s Dataflow execution engine, greatly simplifying management of underlying compute resources. But the collaboration’s value is not constrained to the ML and data science realm. The project shows that consumers of the Apache Beam pipeline’s insights may also include traditional business intelligence dashboards and reporting tools. It also demonstrates the simplicity and economy of cloud-based time series data such as CME Smart Stream, which is metered by the hour, quickly and automatically provisioned, and consumable at a per-product-code (not per-feed) level.

A focus on real-time processing for financial services

To illustrate the above points, the collaboration applies data engineering and MLOps best practices to a financial services problem.
We chose the financial services domain because many financial institutions do not yet have real-time market data processing or MLOps capabilities today, owing to a significant gap on either side of their ML/AI objectives. Upstream from ML/AI models, financial institutions often experience a data engineering gap. For many financial institutions, batch processes have sufficiently addressed business requirements. As a result, the temporal nature of the time series data underlying these processes is deemphasized. For example, the original purpose of most trade booking systems was to capture a trade and ensure that it found its way to the middle and back office for settlement. Such a system was not built with ML/AI in mind, and its underlying data therefore has not been packaged for consumption by ML/AI processes. And downstream from ML/AI models, financial institutions often encounter the aforementioned “ML production problem.”

As ML/AI becomes ever more strategic, these two gaps have left many financial institutions in a conundrum—unable to train ML models for lack of properly packaged time series data, and unmotivated to package time series data for lack of ML models. By recreating a key energy market index using open-source libraries and cloud-based tools, this collaboration demonstrates that for the financial services domain, a solution to this conundrum is more accessible today than ever.

Creating a new index

We modeled our new index after one of CME Group’s many index benchmarks. The particular index expresses the value of a basket of three New York Mercantile Exchange-listed energy futures as a single price. Today, CME Group publishes the index at the end of the day by calculating the settlement price of each underlying futures contract, and then weighting and summing these values.
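The end-of-day calculation described above is simply a weighted sum of settlement prices. A minimal sketch in Python, using made-up contract symbols, weights, and prices (the real constituents and weights come from CME Group's published methodology):

```python
# Hypothetical constituents and weights for illustration only; the actual
# index uses CME Group's published constituents and methodology.
WEIGHTS = {"CL": 0.70, "HO": 0.15, "RB": 0.15}

def index_value(settlement_prices: dict) -> float:
    """Weight and sum the settlement price of each underlying contract."""
    return sum(WEIGHTS[sym] * settlement_prices[sym] for sym in WEIGHTS)

# Made-up settlement prices for the three contracts.
prices = {"CL": 70.00, "HO": 2.10, "RB": 2.20}
print(round(index_value(prices), 4))  # 0.7*70 + 0.15*2.10 + 0.15*2.20 = 49.645
```

The real work of the pipeline is not this arithmetic but producing trustworthy inputs to it, in real time, from sparse tick data.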
While CME Group does not currently publish this index in real time, this collaboration aims to create a near real-time solution leveraging Google Cloud capabilities and CME Group market data delivered via CME Smart Stream. However, in order to publish the value so frequently—every five seconds, with 40-second publish latency—this collaboration’s pipeline has to solve a number of challenges in near real time.

First, the pipeline must process sparse data from three separate trade feeds in memory to create open-high-low-close (OHLC) bars. More specifically, a bar must be produced for each five-second window for each of the three front-month (and sometimes second-month) energy contracts. This is solved by using the Apache Beam library to implement functions which, when executed on Dataflow, automatically scale out as input load increases. The bars must be time-aligned across the underlying feeds, which is greatly simplified by Beam’s watermark feature. And for intervals in which no tick data is observed, the Beam library is used to pull forward the last value received, yielding perfectly gap-free bars for downstream processors.

Second, the pipeline must calculate the volume-weighted average price (VWAP) in near real time for each front-month contract. The VWAP calculations are also written using the Beam API and executed on Dataflow. Each of these functions requires visibility of every element in the time window, so the functions cannot be arbitrarily scaled out. Nonetheless, this is tractable because their input—OHLC bars—is manageably small.

Third, the pipeline must replicate CME Group’s specific settlement price methodology for each contract. The rules specify whether to use VWAP or another source as the price, depending on certain conditions. They also specify how to weight combinations of monthly contracts during a roll period.
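The first two challenges can be made concrete with a plain-Python sketch of the per-window logic; in the actual pipeline this is expressed with Beam transforms (fixed windows, combiners, and watermarks) so Dataflow can scale it, and the tick data below is invented for illustration:

```python
from collections import defaultdict

def ohlc_bars(ticks, window_secs=5.0):
    """Bucket sparse (timestamp, price) ticks into fixed five-second windows
    and emit open-high-low-close bars, pulling the last close forward through
    empty windows so downstream consumers see a gap-free series."""
    buckets = defaultdict(list)
    for ts, price in sorted(ticks):
        buckets[int(ts // window_secs)].append(price)

    bars, last_close = [], None
    for w in range(min(buckets), max(buckets) + 1):
        prices = buckets.get(w)
        if prices:      # ticks observed: build the bar from them
            bar = (prices[0], max(prices), min(prices), prices[-1])
            last_close = prices[-1]
        else:           # no ticks this window: pull the last value forward
            bar = (last_close,) * 4
        bars.append(bar)
    return bars

def vwap(ticks):
    """Volume-weighted average price over (timestamp, price, size) ticks."""
    notional = sum(price * size for _, price, size in ticks)
    volume = sum(size for _, _, size in ticks)
    return notional / volume

# Three sparse ticks for one contract; the 5-10s window has no ticks,
# so its bar repeats the prior close.
ticks = [(1.0, 70.2), (3.5, 70.6), (12.0, 70.4)]
print(ohlc_bars(ticks))
# [(70.2, 70.6, 70.2, 70.6), (70.6, 70.6, 70.6, 70.6), (70.4, 70.4, 70.4, 70.4)]
```

Beam's watermark then handles time alignment across the three feeds, something this single-feed sketch glosses over.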
The pipeline again encapsulates these requirements as an Apache Beam class, and joins the separate price streams at the correct time boundary.

The end result is a new stream publishing bespoke index data to a Google Cloud Pub/Sub topic thousands of times daily, enabling AI models as well as traditional industry index usage, dashboards, and other tools to assist real-time decision making. The stream’s pipeline uses open-source libraries that solve common time series problems out of the box, and cloud-based services that reduce the user’s operational and scaling burden.

The importance of cloud-based data

The promise of cloud-based pipeline execution services cannot be realized using legacy data access patterns, which often require market data users to colocate and configure servers and network gear. Such patterns inject expense and scaling complexity into the pipeline’s overall operation, diverting resources from the adoption of MLOps best practices. Instead, a newer, cloud-based access pattern—in which resources subscribe to data streams inexpensively, rapidly, and programmatically—is necessary.

In 2018, CME Group identified the customer need for accessible futures and options market data. CME Group collaborated with Google Cloud to launch CME Smart Stream, which distributes CME Group’s real-time market data across Google Cloud’s global infrastructure with sub-second latency. Any customer with a CME Group data usage license and a Google Cloud project can consume this data for an hourly usage fee, without purchasing and configuring servers and network gear.

CME Smart Stream met this index pipeline’s requirements for cost-effective, cloud-based streaming data, but this is just one use case. Since the launch of the CME Smart Stream offering on Google Cloud, globally dispersed firms have adopted the solution. For example, Coin Metrics has been using the offering to better inform its customers in the crypto markets.
According to CME Group, Smart Stream has become popular with new customers as the fastest, simplest way to access CME Group’s market data from anywhere in the world.

Adapt the design pattern to your needs

By combining cloud-based data, open-source libraries, and cloud-based pipeline execution services, we created a real-time index using the same constituents as its end-of-day counterpart. Additionally, financial institutions will find this approach addresses many other challenges: real-time valuation of a large set of portfolios, benchmark creation for new ETPs, or external publication of new indices.

Give it a try

This approach is available to help you meet your organization’s needs. We’ll be discussing this topic in CME Group’s webinar End-to-End Market Data Solutions in the Cloud at 10:30 am ET on June 16th.

Cloud Run: A story of serverless containers

Mindful Containers is a fictitious company that is creating containerized microservice applications. They need a fully managed compute environment for deploying and scaling serverless containerized microservices, so they are considering Cloud Run. They are excited about Cloud Run because it abstracts away cluster configuration, monitoring, and management so they can focus on building the features for their apps.

What is Cloud Run?

Cloud Run is a fully managed compute environment for deploying and scaling serverless HTTP containers without worrying about provisioning machines, configuring clusters, or autoscaling.

- No vendor lock-in – Because Cloud Run takes standard OCI containers and implements the standard Knative Serving API, you can easily port your applications to on-premises or any other cloud environment.
- Fast autoscaling – Microservices deployed in Cloud Run scale automatically based on the number of incoming requests, without you having to configure or manage a full-fledged Kubernetes cluster. Cloud Run scales to zero—that is, uses no resources—if there are no requests.
- Split traffic – Cloud Run enables you to split traffic between multiple revisions, so you can perform gradual rollouts such as canary deployments or blue/green deployments.
- Custom domains – You can set up custom domain mapping in Cloud Run, and it will provision a TLS certificate for your domain.
- Automatic redundancy – Cloud Run offers automatic redundancy, so you don’t have to worry about creating multiple instances for high availability.

How to use Cloud Run

With Cloud Run, you write your code in your favorite language and/or use a binary library of your choice. Then push it to Cloud Build to create a container build.
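The code you write can be as small as a single HTTP handler. A hedged sketch in Python using only the standard library (the greeting and function names are ours; Cloud Run passes the port to listen on via the PORT environment variable):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Answers every GET with a plain-text greeting."""
    def do_GET(self):
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the sketch's container logs quiet.
        pass

def serve():
    # Cloud Run tells the container which port to listen on via $PORT.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), HelloHandler).serve_forever()
```

Wrapped in a standard OCI container image whose entrypoint calls serve(), this is everything Cloud Run needs from the application side.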
With a single command—“gcloud run deploy”—you go from a container image to a fully managed web application that runs on a domain with a TLS certificate and auto-scales with requests.

How does Cloud Run work?

A Cloud Run service can be invoked in the following ways:

HTTPS: You can send HTTPS requests to trigger a Cloud Run-hosted service. Note that all Cloud Run services have a stable HTTPS URL. Some use cases include:
- Custom RESTful web API
- Private microservice
- HTTP middleware or reverse proxy for your web applications
- Prepackaged web application

gRPC: You can use gRPC to connect Cloud Run services with other services—for example, to provide simple, high-performance communication between internal microservices. gRPC is a good option when you:
- Want to communicate between internal microservices
- Support high data loads (gRPC uses protocol buffers, which are up to seven times faster than REST calls)
- Need only a simple service definition and don’t want to write a full client library
- Want to use streaming gRPC in your gRPC server to build more responsive applications and APIs

WebSockets: WebSockets applications are supported on Cloud Run with no additional configuration required. Potential use cases include any application that requires a streaming service, such as a chat application.

Trigger from Pub/Sub: You can use Pub/Sub to push messages to the endpoint of your Cloud Run service, where the messages are subsequently delivered to containers as HTTP requests. Possible use cases include:
- Transforming data after receiving an event upon a file upload to a Cloud Storage bucket
- Processing your Google Cloud operations suite logs with Cloud Run by exporting them to Pub/Sub
- Publishing and processing your own custom events from your Cloud Run services

Running services on a schedule: You can use Cloud Scheduler to securely trigger a Cloud Run service on a schedule. This is similar to using cron jobs.
Possible use cases include:
- Performing backups on a regular basis
- Performing recurrent administration tasks, such as regenerating a sitemap or deleting old data, content, configurations, synchronizations, or revisions
- Generating bills or other documents

Executing asynchronous tasks: You can use Cloud Tasks to securely enqueue a task to be asynchronously processed by a Cloud Run service. Typical use cases include:
- Handling requests through unexpected production incidents
- Smoothing traffic spikes by delaying work that is not user-facing
- Reducing user response time by delegating slow background operations, such as database updates or batch processing, to be handled by another service
- Limiting the call rate to backend services like databases and third-party APIs

Events from Eventarc: You can trigger Cloud Run with events from more than 60 Google Cloud sources. For example:
- Use a Cloud Storage event (via Cloud Audit Logs) to trigger a data processing pipeline
- Use a BigQuery event (via Cloud Audit Logs) to initiate downstream processing in Cloud Run each time a job is completed

How is Cloud Run different from Cloud Functions?

Cloud Run and Cloud Functions are both fully managed services that run on Google Cloud’s serverless infrastructure, auto-scale, and handle HTTP requests or events. They do, however, have some important differences: Cloud Functions lets you deploy snippets of code (functions) written in a limited set of programming languages, while Cloud Run lets you deploy container images using the programming language of your choice. Cloud Run also supports the use of any tool or system library from your application; Cloud Functions does not let you use custom executables. Cloud Run offers a longer request timeout duration of up to 60 minutes, while with Cloud Functions the request timeout can be set as high as 9 minutes.
Cloud Functions sends only one request at a time to each function instance, while by default Cloud Run is configured to send multiple concurrent requests to each container instance. This is helpful for improving latency and reducing costs if you’re expecting large volumes.

Pricing

Cloud Run comes with a generous free tier and is pay per use, which means you only pay while a request is being handled on your container instance. If it is idle with no traffic, then you don’t pay anything.

Conclusion

After learning about the ease of setup, scalability, and management capabilities of Cloud Run, the Mindful Containers team is using it to deploy stateless microservices. If you are interested in learning more, check out the documentation. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content follow me on Twitter @pvergadia and keep an eye out on thecloudgirl.dev

Streamline your application migration journey with Migrate for Anthos and GKE

Most customers I talk to today are excited about the opportunities that modernizing their workloads in the cloud affords them. In particular, they are very interested in how they can leverage Kubernetes to speed up application deployment while increasing security. Additionally, they are happy to turn over some cluster management responsibilities to Google Cloud’s SREs so they can focus on solving core business challenges. However, moving VM-based applications to containers can present its own unique set of challenges:
- Assessing which applications are best suited for migration
- Figuring out what is actually running inside a given virtual machine
- Setting up ingress and egress for migrated applications
- Reconfiguring service discovery
- Adapting day 2 processes for patching and upgrading applications

While those challenges may seem daunting, Google Cloud has a tool that can help you solve them in a few clicks. Migrate for Anthos helps automate the process of moving your applications—whether they are Linux or Windows—from various virtual machine environments to containers on either Google Kubernetes Engine (GKE) or Anthos. There is even a specialized capability to migrate WebSphere applications. Your source VMs can be running in GCP, AWS, Azure, or VMware. Once a workload has been containerized, it can be easily deployed to Kubernetes running in either a GKE or an Anthos cluster on GCP, AWS, or VMware. Let’s walk through the migration process together, and I will show you how Migrate for Anthos can help you efficiently migrate virtual machines to containers.

The first step in any application migration journey is to determine which applications are suitable for migration. I always recommend picking a few low-risk applications with a high probability of success. This allows your team to build knowledge and define processes while simultaneously establishing credibility with key stakeholders.
Migrate for Anthos has an application assessment component that will inspect the applications running inside your VM and provide guidance on the likelihood of success. There are different tools for Windows and Linux, and for WebSphere applications we leverage tooling directly from IBM.

After you’ve chosen a good migration candidate, the next step is to perform the actual migration. Migrate for Anthos breaks this down into a couple of discrete steps. First, Migrate for Anthos does a dry run in which it inspects the virtual machine and determines what is actually running inside it. The artifact from this step is a migration plan in the form of a YAML file. Next, review the YAML file and adjust any settings you want to change. For instance, if you were migrating a database, you would want to update the YAML file with the point in the file system where the persistent volume holding the database’s data should be mounted. After you’ve reviewed and adjusted the migration YAML, you can perform the actual migration.

This process creates a couple of key artifacts. The first is a Docker container image. The second is the matching Dockerfile, along with a Kubernetes deployment YAML that includes definitions for all the relevant primitives (services, pods, stateful sets, etc.). The Docker image is actually built using a multi-stage build leveraging two different images: the first is the Migrate for Anthos runtime, and the second includes the workload extracted from the source VM. This is important to understand as you plan day 2 operations. The Dockerfile can be edited to update not only the underlying Migrate for Anthos runtime layer, but also the application components. And while not mandatory, you can easily manage all of that through a CI/CD pipeline.

If you want to ease complexity and accelerate your cloud migration journey, I highly recommend you check out Migrate for Anthos.
Watch the videos I linked above, and then get your hands on the keyboard and try out our Qwiklab.
Source: Google Cloud Platform

Arab Bank: Accelerating application innovation with Anthos and Apigee

Founded in 1930 and headquartered in Jordan, Arab Bank is one of the oldest banks in the Middle East. Operating out of 28 countries, we’ve earned our customers’ trust with a prudent approach to operations and respect for the cultures and customs in the region. With a few exceptions where cloud providers have hosted datacenters in a Middle Eastern or North African country, the banking sector in the region has generally been slow to adopt cloud technology for a number of reasons, including concerns about data security, the maturity and security controls of cloud services (PaaS and SaaS), and the regulations in place. But on the other hand, we saw the opportunity to accelerate our development and testing using the cloud, as well as to partner with the fintech community and digital service providers to integrate their solutions into the banking ecosystem. We needed more flexibility to connect with the outside world, and a more open architecture to help us drive our internal innovation at a faster rate with the help of the fintech industry. By collaborating with Google Cloud, we reached those goals and accelerated app development and testing through products like Apigee and Anthos. We’re now offering innovative apps and services to our customers and employees that leverage new technological capabilities to give us more agility and flexibility, and to optimize our workloads.

Embracing the cloud in a regulated industry

To get started with the cloud, we needed to create internal awareness of cloud technology, the API layer, containers, and their benefits amongst our leaders and staff. Google helped us educate and get buy-in from key functions by organizing open technology demonstration sessions and discussion panels. When considering potential cloud providers we had four decision criteria: maturity of security controls, ease of use, cost, and scalability/agility for new deployments and continuous innovation.
This last factor was critical, and we were impressed by Google Cloud’s innovation roadmap, both via direct conversations and at Google’s Next conference, where we met a lot of people passionate about technology, innovation, and building something new. Going back to our journey, given the above-mentioned regional limitations, we started with a hybrid cloud approach. This let us continue to operate on-premises for a number of services in production, particularly those that have personally identifiable information (PII) or other sensitive data attached, and leverage the cloud for development, testing, and production workloads that don’t contain customer data. In the short term, we didn’t anticipate that our many jurisdictions would allow data to be transported to other countries. But cloud tools will allow us to tokenize or anonymize customer data while maintaining customer data on-premises. This applies to many digital journeys such as customer onboarding, online credit facility applications, or marketplace navigation. In the coming years, we predict our API integration with partners will accelerate and enrich the overall digital value proposition of our business segments, namely consumer banking, small and medium-size businesses, and large corporate and institutional clients.

Building connections and cornerstones in the cloud

The first move in our digital transformation was to implement Apigee, Google Cloud’s API management platform, to connect to the world’s digital banking ecosystem. Apigee provides the security, sharing, and mediation policies, and the developer portal capabilities, for us to successfully meet Open Banking standards while focusing on innovation. On the back of the Apigee implementation, we created an accelerator program to incubate fintech ideas that can, in turn, integrate into our digital platforms and be offered to our customers.
We also developed various banking APIs, all designed and documented in accordance with PSD2 and Open Banking regulations, and made them available to our partners. These APIs, exposed on our API developer portal, offer the code structure fintech companies need to design creative solutions around them. Next, we adopted Anthos, Google Cloud’s managed application platform. Anthos has become a cornerstone of our operations because it works across hybrid cloud, offering integration of microservice containers and fueling collaborative opportunities with external parties. Our current Anthos infrastructure includes several hundred microservices running in containers on Google Kubernetes Engine (GKE) and on-premises. We now use the cloud for collaboration, development, and testing, but not for production, which is done on-premises. Along the way, Google Cloud’s Professional Services Organization (PSO) helped us through the entire cloud setup process and with the adoption of Anthos. We originally built on the cloud tools through an iterative process, learning from our successes and errors along the way. Now that we have a better sense of how Anthos operates, we’re building a fresh infrastructure atop a sound, stable, and resilient foundation that will let us scale easily as we pursue our ambition of transforming Arab Bank into a digital-first enterprise. Products currently running on Anthos include customer acquisition and onboarding via mobile apps, and our Arabi-Pay app, which allows customers to instantly pay each other via WhatsApp or other messaging platforms.
Leveraging Anthos, our instant loan service for Arab Bank salaried employees can grant and disburse loans of up to $7,000 in less than seven minutes. In addition, we’ve built a number of digital journeys for our Small and Medium Enterprise (SME) customers, such as our SME client digital onboarding process and paperless SME lending platform. While some may think that digital adoption in this part of the world can be slow, as customer contact remains anchored in our customs, the recent COVID-19 pandemic has accelerated the adoption of digital banking services and electronic payments, inspiring more confidence to buy and pay online. Thanks to our rich and user-friendly banking app, which relies on Apigee and Anthos for critical customer journeys, over 90% of new-to-bank customers are using our mobile apps. Within the next 18 months, we predict that number will be closer to 100%. Of course, with higher customer adoption comes the challenge of potential service interruptions. A single moment of downtime can be highly visible to many digital customers. But Google Cloud’s Anthos and Apigee give us the flexibility to resume processes quickly, so any interruptions are almost invisible to our customers. In fact, when the COVID-19 pandemic hit, though our branches could be open only for limited hours each day, our consumer clients in particular were able to take advantage of our digital services in a very self-sufficient manner. Being well positioned with Google Cloud, we could also keep our internal teams and external partners connected and productive. Without Google Cloud, continuing the digital transformation of the bank at the pace we wanted would have been a big challenge.

Collaborating across borders and time zones

With Google Cloud, our ability to collaborate and partner has transformed significantly. We now operate 24/7 because our developers are scattered across multiple geographies and time zones.
Because testing and deployment can run around the clock, including on weekends, we can now deploy a new digital journey in a few weeks, faster than ever before. This has given our organization a spirited mindset that prioritizes innovation, and raised the bar in terms of our operating model. Another benefit of Google Cloud tools is the elimination of inefficient processes typically seen in a software development lifecycle. We now build in a completely agile manner, from design squads through deployment in production. Compared to where we were two years ago, when we had a maximum of two system releases in production per year, we now have close to monthly releases of our digital packages, supplemented with additional ad-hoc releases and fixes in between. As a result, we have removed internal silos and tremendously improved collaboration between the product and sales teams, operations, the IT Dev Factory and Infrastructure, and our supporting functions. Through the agile process facilitated by APIs, we introduced Design Thinking workshops that involve external customers and prospects early on, to better understand their true pain points and emotions during existing journeys, and how to make new digital journeys frictionless. As a result, the relevance of our products for various customer personas has improved tremendously.

Transcending banking

With Google Cloud, we can offer our customers so much more than just banking. We’ve become more digitally relevant to their lives. For example, we recently launched a mortgage app that helps customers all the way from home selection through mortgage negotiations and closing, and even to getting the home decorated. It’s an end-to-end journey in which API integration with key regional players was a cornerstone of our success. For other digital products, we have an extensive roadmap of lifestyle-based solutions relevant to each segment and age group.
We’ve only scratched the surface of the services we can provide, and we see the cloud as the future for everything we want to do. Read more about Google Cloud’s Open Banking solution to learn how you can simplify and accelerate the process of delivering open banking as required by PSD2. You can also view our video on Open Banking, powered by Apigee API Management.
Source: Google Cloud Platform

Ubuntu Pro lands on Google Cloud

Today, we’re pleased to announce the general availability of Ubuntu Pro images on Google Cloud, providing customers with an improved Ubuntu experience, expanded security coverage, and integration with critical Google Cloud features. In partnership with Canonical, we’re making it even easier for customers that have fully embraced open source to ensure security and compliance for their most mission-critical and enterprise workloads. With Ubuntu Pro on Google Cloud, you now have access to features like:

- 10-year lifetime security updates – Canonical backs Ubuntu Pro for 10 years with security updates and a guaranteed upgrade path.
- FIPS and CC-EAL2 certification – Ubuntu Pro includes components that meet requirements from entities like FedRAMP, HIPAA, ISO, and PCI.
- Open-source security coverage – Protect your most important open source workloads, including MongoDB, Apache Kafka, Redis, NGINX, and PostgreSQL.
- Multi-version availability – Pro images are available for the three most popular Ubuntu Server distributions: 16.04 LTS, 18.04 LTS, and 20.04 LTS.
- Kernel Livepatch – Kernel patches are delivered immediately, without having to reboot your VMs.
- Optional CIS and DISA STIG profiles – Choose from two leading profiles to harden your environment according to industry benchmarks.
- Cloud-based pricing – Ubuntu Pro does not require a contract, and pricing tracks with the underlying compute cost depending on the instance type.

Extended Security Maintenance (ESM) for Ubuntu 16.04 LTS with Ubuntu Pro

Availability of Ubuntu Pro images is especially important if you’re an Ubuntu 16.04 LTS customer and want extended security maintenance (ESM) for your virtual machines but don’t want to upgrade to Ubuntu 18.04 LTS or Ubuntu 20.04 LTS immediately. ESM is included with Ubuntu Pro 16.04.
You can move your workloads from Ubuntu 16.04 LTS VM instances to Ubuntu Pro 16.04 instances to continue receiving ESM and all the above-mentioned benefits, without having to test your applications on a new version of the OS. Gojek, for example, has evolved from offering just ride-hailing to a suite of more than 20 services today, delivering everyday solutions for millions of users across Southeast Asia.

“We needed more time to comprehensively test and migrate our Ubuntu 16.04 LTS workloads to Ubuntu 20.04 LTS, which would mean stretching beyond the standard maintenance timelines for Ubuntu 16.04 LTS. With Ubuntu Pro on Google Cloud, we now have the ability to postpone this, and in moving our 16.04 workloads to Ubuntu Pro, we benefit from its live kernel patching and improved security coverage for our key open source components.” – Kartik Gupta, Engineering Manager for CI/CD & FinOps at Gojek

“With the launch of Ubuntu Pro on Google Cloud, we build on our joint investments with Google to optimize Ubuntu performance on Google Cloud, and add comprehensive security patching and Long Term Support for another 30,000 open source packages—the widest range of security-maintained open source on the planet,” said Mark Shuttleworth, CEO of Canonical. “As the world moves to open source for everything, Canonical offers the safety net of security maintenance that enterprises count on to unleash their developers.”

Getting started

Getting started with Ubuntu Pro on Google Cloud is simple. You can now purchase these premium images directly from Google Cloud by selecting Ubuntu Pro as the operating system straight from the Google Cloud Console. To learn more about Ubuntu Pro on Google Cloud, please visit the documentation page and read the announcement from Canonical.
Source: Google Cloud Platform

Monitoring BigQuery reservations and slot utilization with INFORMATION_SCHEMA

BigQuery Reservations help you manage your BigQuery workloads. With flat-rate pricing, you can purchase BigQuery slot commitments in 100-slot increments in flex, monthly, or yearly plans instead of paying for queries on demand. You can then create and manage buckets of slots called reservations and assign projects, folders, or organizations to use the slots in these reservations. By default, queries running in a reservation automatically use idle slots from other reservations. In this way, organizations have greater control over workload management, ensuring that high-priority jobs always have access to the resources they need without contention. Currently, two ways to monitor these reservations and slots are the BigQuery Reservations UI and Cloud Monitoring. But how does an organization know how many slots to delegate to a reservation? Or whether a reservation is being over- or underutilized? Or what the overall slot utilization is across all reservations? In this blog post, we will discuss how we used BigQuery’s INFORMATION_SCHEMA system tables to create the System Tables Reports Dashboard and answer these questions.

Using INFORMATION_SCHEMA tables

The INFORMATION_SCHEMA metadata tables contain relevant, granular information about jobs, reservations, capacity commitments, and assignments. Using the data from these tables, users can create custom dashboards that report on the metrics they care about in ways that inform their decision making. While there are several tables that make up INFORMATION_SCHEMA, a few are specifically relevant to monitoring slot utilization across jobs and reservations. The JOBS_BY_ORGANIZATION table is the primary table for extracting job-level data across all projects in the organization.
This information can be supplemented with data from the CAPACITY_COMMITMENT_CHANGES_BY_PROJECT, RESERVATION_CHANGES_BY_PROJECT, and ASSIGNMENT_CHANGES_BY_PROJECT tables to include details about specific capacity commitments, reservations, and assignments. It’s worth noting that the data retention period for INFORMATION_SCHEMA is 180 days and all timestamps are in UTC. For information about the permissions required to query these tables, follow the links above.

Monitoring with the System Tables Reports Dashboard

The System Tables Reports Dashboard is a Data Studio dashboard that queries data from INFORMATION_SCHEMA using Data Studio’s BigQuery connector. Organizations can use this dashboard and/or its underlying queries as-is, or as a starting point for more complex solutions in Data Studio or any other dashboarding tool.

Daily Utilization Report

The Daily Utilization Report gives an overview of an organization’s daily slot utilization measured in slot days. The primary chart in the report shows overall slot utilization per day alongside the active capacity commitments for the organization. This chart is ideal for gaining a high-level understanding of how an organization’s usage compares to the total number of slots it has committed to (or purchased). Slot utilization is derived by dividing the total number of slot-milliseconds (total_slot_ms from INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION) consumed by all jobs on a given day by the number of milliseconds in a day (1000 * 60 * 60 * 24). This aggregate-level computation provides the most accurate approximation of the overall slot utilization for a given day. Note that this calculation is most accurate for organizations with consistent daily slot usage; if an organization does not have consistent slot usage, this number might be lower than expected.
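The arithmetic behind these utilization numbers is simple enough to sketch in a few lines of Python (the dashboard itself does this in SQL over INFORMATION_SCHEMA; the figures below are hypothetical):

```python
MS_PER_DAY = 1000 * 60 * 60 * 24  # milliseconds in one day

def daily_slot_utilization(total_slot_ms_for_day):
    """Average slots in use across a day: the slot-milliseconds consumed
    by all jobs that day, divided by the milliseconds in a day."""
    return total_slot_ms_for_day / MS_PER_DAY

def job_slot_utilization(total_slot_ms, creation_time_ms, end_time_ms):
    """Average slots a single job held while running (the per-job variant
    used in the Job Execution Report): the job's slot-milliseconds
    divided by its duration (end_time minus creation_time)."""
    return total_slot_ms / (end_time_ms - creation_time_ms)

# 43.2 billion slot-ms consumed in one day averages out to 500 slots.
print(daily_slot_utilization(43_200_000_000))  # 500.0

# A job that burned 1,200,000 slot-ms over a 6-second run averaged 200 slots.
print(job_slot_utilization(1_200_000, 0, 6_000))  # 200.0
```

As the report notes, the aggregate daily number understates peak demand when usage is bursty, since idle periods are averaged in.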
For more information about calculating average slot utilization, see our public documentation. This report also includes charts that break down utilization further by job type, project ID, reservation ID, user email, and top usage.

Hourly Utilization Report

The Hourly Utilization Report is similar to the Daily Utilization Report, but gives an overview of an organization’s hourly slot utilization measured in slot hours. This report can help an organization understand its workloads at a more granular level in a way that helps with workload management.

Reservation Utilization Report

The Reservation Utilization Report gives an overview of an organization’s current assignments and reservation utilization over the last 7 and 30 days. The current reservation assignments table displays details for the current assignments across an organization, including the assignment type, job type, and reservation capacity. The reservation utilization tables display information about the utilization of a given reservation over the last 7 or 30 days. This includes average weekly or monthly slot utilization, average reservation capacity, current reservation capacity, and average reservation utilization. Average weekly and monthly utilization is derived using the same calculation as daily utilization, adjusted for a week or month accordingly. These tables are great for understanding whether an organization is making the most of its allocated reservations. Reservations that are severely over- or underutilized are colored red, while reservations that are close to 100% utilization are colored green. That said, because idle slot capacity is shared across reservations by default, underutilized reservations do not necessarily indicate that slots are being wasted.
Instead, the jobs in that reservation simply do not need as many slots reserved, and those slots could be allocated to a different reservation.

Job Execution Report

The Job Execution Report provides a per-job breakdown of slot utilization, among other job statistics. The purpose of this report is to allow users to drill down into individual jobs or understand trends in a specific group of jobs. In this report, the average slot utilization is displayed on a per-job level instead of an aggregate level. It is calculated by dividing total_slot_ms for a job by the job’s duration in milliseconds (computed by subtracting creation_time from end_time).

Job Error Report

The Job Error Report provides an overview of the types of errors encountered by jobs in the organization, aggregated by project and error reason, among other fields. The INFORMATION_SCHEMA tables provide detailed information about job-level errors, so depending on an organization’s use case this report can be customized with more specific error reporting.

What’s next?

To learn more about INFORMATION_SCHEMA and the System Tables Reports Dashboard, check out the videos in our Modernizing Data Lakes and Data Warehouses with GCP course on Coursera. For more detailed information about each report, the queries used, and how to copy the dashboard for your own organization, visit our GitHub repository.
Source: Google Cloud Platform

Why you need to explain machine learning models

Many companies today are actively using AI or have plans to incorporate it into their future strategies: 76% of enterprises now prioritize AI and ML over other initiatives in their IT budgets, and the global AI industry is expected to reach over $260 billion by 2027. But as AI and advanced analytics become more pervasive, the need for transparency around how AI technologies work will be paramount. In this post, we’ll explore why explainable AI (XAI) is essential to widespread AI adoption, common XAI methods, and how Google Cloud can help.

Why you need to explain ML models

AI technology suffers from what we call a black box problem: you might know the question and the data (the input), but you have no visibility into the steps or processes that produce the final answer (the output). This is especially problematic in deep learning and artificial neural network approaches, which contain many hidden layers of nodes that “learn” through pattern recognition. Stakeholders are often reluctant to trust ML projects because they don’t understand what they do. It’s hard for decision-makers to relinquish control to a mysterious machine learning model, especially when it’s responsible for critical decisions. AI systems are making predictions that have a profound impact, and in domains like healthcare or autonomous driving, that can mean the difference between life and death. It’s often hard to build confidence that a model can be trusted to make decisions, let alone make them better than a human can, especially when there is no explanation of how a decision was made. How did the AI model arrive at its prediction or decision? How can you be sure no bias is creeping into the algorithms? Is there enough transparency and interpretability to trust the model’s decision? Decision-makers want to know the reasons behind an AI-based decision, so they have the confidence that it is the right one.
In fact, according to a PwC survey, the majority of CEOs (82%) believe that AI-based decisions must be explainable to be trusted.

What is Explainable AI?

Explainable AI (XAI) is a set of tools and frameworks that help you understand how your machine learning models make decisions. This shouldn’t be confused with showing a complete step-by-step deconstruction of an AI model, which can be close to impossible if you’re attempting to trace the millions of parameters used in deep learning algorithms. Rather, XAI aims to provide insight into how models work, so human experts can understand the logic that goes into making a decision. When you apply XAI successfully, it offers three important benefits:

1. Increases trust in ML models. When decision-makers and other stakeholders have more visibility into how a ML model found its final output, they are more likely to trust AI-based systems. Explainable AI tools can provide clear and understandable explanations of the reasoning that led to the model’s output. Say you are using a deep learning model to analyze medical images like X-rays; you can use explainable AI to produce saliency maps (i.e., heatmaps) that highlight the pixels that were used to reach the diagnosis. For instance, a ML model that classifies a fracture would also highlight the pixels used to determine that the patient is suffering from a fracture.

2. Improves overall troubleshooting. Explainability in AI can also enable you to debug a model and troubleshoot how well it is working. Let’s imagine your model is supposed to identify animals in images. Over time, you notice that the model keeps classifying images of dogs playing in snow as foxes. Explainable AI tools make it easier to figure out why this error keeps occurring.
As you look into the explanations of how a prediction is made, you discover that the ML model is using the background of an image to differentiate between dogs and foxes. The model has mistakenly learned that domestic backgrounds mean dogs and that snow in an image means the image contains a fox.

3. Busts biases and other potential AI potholes. XAI is also useful for identifying sources of bias. For example, you might have a model to identify when cars are making illegal left-hand turns. When you are asked to define what the violation is based on in an image, you find out that the model has picked up a bias from the training data. Instead of focusing on cars turning left illegally, it’s looking to see if there is a pothole. This influence could be caused by a skewed dataset that contained a large number of images taken on poorly maintained roads, or even real bias, where a ticket might be more likely to be given out in an underfunded area of a city.

Where does explainability fit into the ML lifecycle?

Explainable AI should not be an afterthought at the end of your ML workflow. Instead, explainability should be integrated and applied at every step of the way, from data collection and processing to model training, evaluation, and model serving. There are a few ways you can work explainability into your ML lifecycle. This could mean using explainable AI to identify dataset imbalances, ensure model behavior satisfies specific rules and fairness metrics, or show model behavior both locally and globally. For instance, if a model was trained using synthetic data, you need to ensure it behaves the same when it uses real data. Or, as we discussed above with deep learning models for medical imaging, a common form of explainability is to create heatmaps that identify the pixels used for image classification. Another tool you might use is sliced evaluations of machine learning model performance.
According to our AI principles, you should avoid creating or reinforcing unfair bias. AI algorithms and datasets can often reflect or reinforce unfair biases. If you notice that a model is not performing well for a small minority of cases, it’s important to address any fairness concerns. Sliced evaluations allow you to explore how different parts of a dataset might be affecting your results. In the case of imaging models, you might explore different images based on factors like poor lighting or overexposure. We also recommend creating model cards, which can help explain any potential limitations and any trade-offs you have made for performance, and then provide a way to test out what the model does.

Explainable AI methods

When we talk about explainable AI methods, it’s important to understand the difference between global and local methods. A global method explains the overall structure of how a model makes decisions; a local method explains how the model made a decision for a single instance. For instance, a global method might be a table that includes all the features that were used, ranked by the overall importance they have for making a decision. Feature importance tables are commonly used to explain structured data models and help people understand how specific input variables impact the final output of a model. But what about explaining how a model makes a decision for an individual prediction or a specific person? This is where local methods come into play.
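Before moving on to local methods, here is a minimal sketch of how a global feature-importance ranking like the one described above can be computed. The algorithm shown, permutation importance, is our illustrative choice; the post itself doesn’t name a specific method. The idea: shuffle one feature column at a time and measure how much a quality metric degrades; features whose shuffling hurts most rank highest.

```python
import random

def permutation_importance(model, X, y, metric, seed=0):
    """Rank features by how much the metric drops when each feature's
    column is shuffled (breaking its relationship with the labels)."""
    rng = random.Random(seed)
    base = metric(model, X, y)
    importances = {}
    for j in range(len(X[0])):
        shuffled_col = [row[j] for row in X]
        rng.shuffle(shuffled_col)
        # Rebuild the dataset with only column j permuted.
        X_perm = [row[:j] + [shuffled_col[i]] + row[j + 1:]
                  for i, row in enumerate(X)]
        importances[j] = base - metric(model, X_perm, y)
    # Features whose shuffling caused the biggest metric drop come first.
    return sorted(importances, key=importances.get, reverse=True)

# Toy model that only looks at feature 0, evaluated with accuracy.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda m, X, y: sum(m(r) == t for r, t in zip(X, y)) / len(y)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, accuracy)[0])  # 0 (the feature the model uses)
```

Shuffling feature 1 never changes the toy model’s output, so feature 0 always tops the ranking.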
For the purposes of this post, we’ll cover local methods based on how they can be used to explain model predictions on image data. Here are the most common explainable AI local methods:

- Local interpretable model-agnostic explanations (LIME)
- Kernel Shapley additive explanations (KernelSHAP)
- Integrated gradients (IG)
- eXplanation with Ranked Area Integrals (XRAI)

Both LIME and KernelSHAP break an image down into patches, which are randomly sampled to create a number of perturbed (i.e., changed) images. Each perturbed image looks like the original, but with parts of the image zeroed out. The perturbed images are then fed to the trained model, which is asked to make a prediction. In the example below, the model would be asked: is this image a frog or not? The model then provides the probability that the image is a frog, and based on the patches that were zeroed out, you can rank the importance of each patch to the final probability. Both of these methods can be used to explain the local importance of each region in determining whether the image contains a frog. Integrated gradients is a technique that assigns importance values based on gradients of the final output. IG compares a baseline image to the actual image the model is asked to classify, accumulating how the gradients change along the path from the baseline to the input. The result is an attribution mask that shows which pixels the model is using to classify the image. XRAI is a technique that builds on the methods mentioned above, combining patch identification with integrated gradients to show salient regions that have the most impact on a decision, rather than individual pixels.
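The patch-perturbation idea behind LIME and KernelSHAP can be sketched in a few lines. As a simplifying assumption, this toy version zeroes out each patch exhaustively rather than sampling random combinations of patches and fitting a surrogate model, as the real methods do:

```python
def patch_importance(image, model, patch_size):
    """Zero out ("perturb") each patch in turn, re-score the perturbed
    copy with the model, and rank patches by how much the predicted
    probability drops. Patches whose removal hurts most mattered most."""
    h, w = len(image), len(image[0])
    base_score = model(image)
    drops = {}
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            perturbed = [row[:] for row in image]  # copy the image
            for r in range(top, min(top + patch_size, h)):
                for c in range(left, min(left + patch_size, w)):
                    perturbed[r][c] = 0.0          # occlude this patch
            drops[(top, left)] = base_score - model(perturbed)
    # Most important patches (largest score drop) first.
    return sorted(drops, key=drops.get, reverse=True)

# Toy "is this a frog?" scorer: the probability is the mean of the
# bottom-right 2x2 quadrant, where our imaginary frog sits.
frog_score = lambda img: sum(img[r][c] for r in (2, 3) for c in (2, 3)) / 4
image = [[1.0] * 4 for _ in range(4)]
print(patch_importance(image, frog_score, patch_size=2)[0])  # (2, 2)
```

Only occluding the bottom-right patch changes the toy score, so that patch tops the ranking, which is exactly the intuition behind the frog example above.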
The larger regions XRAI identifies tend to deliver better results. Another emerging method that we’re starting to incorporate at Google Cloud is TracIn, a simple, scalable approach that estimates training data influence. The quality of an ML model’s training data can have a huge impact on the model’s performance. TracIn tracks mislabeled examples and outliers across datasets and helps explain predictions by assigning an influence score to each training example. If you are training a model to predict whether images contain zucchinis, you would look at the gradient changes to determine which examples reduce loss (proponents) and which increase loss (opponents). TracIn allows you to identify which images help the model learn to identify a zucchini and which are used to distinguish what’s not a zucchini.

Using Explainable AI in Google Cloud

We launched Vertex Explainable AI to help data scientists not only improve their models but also provide insights that make those models more accessible to decision-makers. Our aim is to provide a set of helpful tools and frameworks that can help data science teams in a number of ways, such as explaining how ML models reach a conclusion, debugging models, and combating bias. With the Vertex Explainable AI platform, you can:

- Design interpretable and inclusive AI. Build AI systems from the ground up with Vertex Explainable AI tools designed to help detect and resolve bias, drift, and other gaps in data and models. With AI Explanations, data scientists can use AutoML Tables, Vertex Predictions, and Notebooks to explain how much a factor contributed to model predictions, helping to improve datasets and model architecture. The What-If Tool enables you to investigate model performance across a wide range of features, optimize strategies, and even manipulate individual datapoint values.
- Deploy ML models with confidence by providing human-friendly explanations.
When deploying a model on AutoML Tables or Vertex AI, you can reflect patterns found in your training data to get a prediction and a score in real time about how different factors affected the final output.

Streamline model governance with performance monitoring and training. You can easily monitor predictions and provide ground truth labels for prediction inputs with the continuous evaluation feature. Vertex Data Labeling compares predictions with ground truth labels to incorporate feedback and optimize model performance.

AI continues to be an exciting frontier that will shape and inspire the future of enterprises across all industries. But for AI to reach its full potential and gain wider adoption, all stakeholders, not just data scientists, need to understand how ML models work. That's why we remain committed to ensuring that no matter where AI goes in the future, it will serve everyone—be it customers, business users, or decision-makers.

Next steps

Learn how to serve out explanations alongside predictions by running this Jupyter notebook on Cloud AI Platform. Step-by-step instructions are also available on Qwiklabs. And if you are interested in what's coming in machine learning over the next five years, check out our Applied ML Summit to hear from Spotify, Google, Kaggle, Facebook, and other leaders in the machine learning community.
Source: Google Cloud Platform

The top cloud capabilities industry leaders want for sustained innovation

Cloud computing technologies help companies and governments deliver essential services to their customers and citizens—never was this more evident than during the pandemic. From enabling the quick rollout of indispensable programs like unemployment assistance and online portals for COVID-19 testing, to leveraging on-demand infrastructure to meet enterprise compute needs, cloud empowers IT leaders to react and respond quickly under extreme pressure.

With increasingly complex environments that include a mix of proprietary and vendor solutions, legacy apps, and geographically distributed resources living on-premises or across multiple clouds, enterprises and agencies want to achieve more agility and improve cost efficiency without getting locked into a single vendor. At the same time, they are looking to leverage emerging technologies like edge solutions enabled by the rollout of 5G. Multicloud and hybrid cloud approaches, coupled with open-source technology adoption, enable IT teams to take full advantage of the best the cloud has to offer. A recent study from International Data Group (IDG) shows just how much of a priority this has become for business leaders.

Multicloud and hybrid cloud capabilities among ‘must-haves’ from cloud providers

After more than a year of uncertainty, organizations are applying lessons learned along the way as they assess the capabilities they need from their cloud providers to keep pace with rapidly evolving requirements. The results of the Google-commissioned study by IDG, based on a global survey of over 2,000 IT decision-makers, show that multicloud/hybrid cloud support and other cutting-edge technologies, such as containers, microservices, service mesh, and AI-powered analytics, are now major considerations for enterprises when selecting a cloud provider.
This is true at almost all companies, regardless of their digital maturity, including those that are fully transformed (digital natives), currently implementing a strategy (digital forwards), or not yet implementing any transformation strategy (digital conservatives).

Organizations are progressively more committed to the cloud, especially those further along on their digital transformation journey. The survey found that the majority of digital-native (83%) and digital-forward (81%) companies list multicloud/hybrid cloud support and cutting-edge technology as key considerations when selecting a cloud provider. The same factors are still among the top considerations for over 70% of digital conservatives.

Another trend that goes hand-in-hand with multicloud and hybrid cloud support is the broader adoption of open-source software solutions. In particular, open-source technologies address barriers that arise from the need to modernize or integrate legacy systems and technologies—a primary pain point that impedes transformation efforts. While once viewed as unconventional, open source has become vital to unlocking cloud innovation, delivering the speed and rich capabilities needed to accelerate production and increase creativity. This link between cloud and open source is also reflected in the IDG study results. While 74% of global IT leaders say they prefer open-source cloud solutions, this number jumps to 82% at digital-forward organizations and 87% at digital natives. By comparison, the same is true for just over half of digital-conservative companies.

Freedom to innovate—anywhere

Google Cloud’s commitment to multicloud, hybrid cloud, and open source enables our customers to use their data as well as build and run apps in the environment of their choice, whether on-premises, in Google Cloud, on another cloud provider, or across geographic regions.
To learn more about the IDG findings and how IT leaders are creating new ways to operate and innovate, download the full report.

Interested in how Google Cloud’s commitment to multicloud, hybrid cloud, and open source empowers transformation and drives innovation? Our distributed cloud services, including Anthos and Google Kubernetes Engine (GKE), provide consistency across public and private clouds as well as a solid foundation for modernization and future growth, while allowing developers to build, manage, and innovate faster, anywhere. Anthos extends Google Cloud’s best-in-breed solutions to any environment, enabling teams to modernize apps faster and establish operational consistency. It can be used for both legacy and cloud-native deployments, running on existing virtual machines (VMs) and bare metal servers, while minimizing vendor lock-in and meeting regulatory requirements.

Google Cloud’s commitment to multicloud, hybrid cloud, and open source enables organizations to leverage their data and run their applications and services in the environment of their choice, rather than relying on a single-vendor solution. We aim to support our customers’ journeys to reinvention, and we hope that together we can pave the way for whatever comes next.

All about cables: A guide to posts on our infrastructure under the sea

From data centers and cloud regions to subsea cables, Google is committed to connecting the world. Our investments in infrastructure aim to further improve our network—one of the world’s largest—which helps improve global connectivity, supporting users and Google Cloud customers. Our subsea cables play a starring role in this work, linking up cloud infrastructure that includes more than 100 network edge locations and over 7,500 edge caching nodes.

As it turns out, readers of this blog seem to find what happens under the sea just as fascinating as what’s going on in the cloud. Posts on our cables are consistently among our most popular, which is why we brought them together for you here so you can take a deeper dive (pun intended). Here’s a list of our most popular posts on our underwater infrastructure:

2021
Hola, South America! Announcing the Firmina subsea cable
This bears repeating: Introducing the Echo subsea cable
The Dunant subsea cable, connecting the US and mainland Europe, is ready for service

2020
Announcing the Grace Hopper subsea cable, linking the U.S., U.K. and Spain

2019
Introducing Equiano, a subsea cable from Portugal to South Africa
A quick hop across the pond: Supercharging the Dunant subsea cable with SDM technology
Curie subsea cable set to transmit to Chile, with a pit stop to Panama

2018
Expanding our cloud network for a faster, more reliable experience between Australia and Southeast Asia
Delivering increased connectivity with our first private trans-Atlantic subsea cable

2017
Google invests in INDIGO undersea cable to improve cloud infrastructure in Southeast Asia

2016
New undersea cable expands capacity for Google APAC customers and users
Google Cloud customers run at the speed of light with new FASTER undersea pipe
A journey to the bottom of the internet

Our cable systems provide the speed, capacity and reliability Google is known for worldwide, and at Google Cloud, our customers can make use of the same network infrastructure that powers Google’s own services.
To learn more, you can view our network on a map, or read more about our network.