2 ways to migrate your SAP HANA database to Google Cloud

Many of the world's leading companies run on SAP—and deploying it on Google Cloud extends the benefits of SAP even further. Migrating your current SAP S/4HANA deployment to Google Cloud—whether it resides on your company's on-premises servers or another cloud service—provides your organization with a flexible virtualized architecture that lets you scale your environment to match your workloads, so you pay only for the compute and storage capacity you need at any given moment. Google Cloud includes built-in features, such as Compute Engine live migration and automatic restart, that minimize downtime for infrastructure maintenance. And it allows you to integrate your SAP data with multiple data sources and process it using Google Cloud technology such as BigQuery to drive data analytics.

SAP server-side architecture consists of two layers: the SAP HANA database and the NetWeaver application layer. In this blog post, we'll look at the options and steps for moving the database layer to Google Cloud as a lift and shift, or rehost, a straightforward approach that entails moving your current SAP environment unchanged onto Google Cloud.

Deploying an SAP HANA system on Google Cloud

Google Cloud offers SAP-certified virtual machines (VMs) optimized for SAP products, including SAP HANA and SAP HANA Enterprise Cloud, as well as dedicated servers for SAP HANA for environments greater than 12 TB. (For a complete list of VM and hardware options, visit the Certified and Supported SAP HANA Hardware Directory.)

Before proceeding with a rehost migration to Google Cloud, your current (source) environment and your Google Cloud (target) environment should meet these prerequisites:

- The configuration of the Google Cloud environment (i.e., VM resources, SSD storage capacity) should be identical to that of the source environment. If the underlying hardware is different, you must use Option 2 for your migration, detailed below.
- Both environments should be running the same operating system (SUSE or RHEL Linux).
- The HANA version, instance number, and system ID (SID) should be identical.
- Schema names must remain the same.
- A network connection between the on-premises environment and Google Cloud is required in this phase to support the rehost of the SAP application. You can use Cloud VPN or Dedicated Interconnect; learn more about Dedicated Interconnect and Cloud VPN.

Note: Depending on your internet connection and bandwidth requirements, we recommend using Dedicated Interconnect over Cloud VPN for production environments.

We offer a number of automated processes to accelerate your cloud journey. To deploy the SAP HANA system on Google Cloud, you can use the Google Cloud Deployment Manager or the Terraform and Ansible scripts available on GitHub, with configuration file templates to define your installation. For more details, see the Google Cloud SAP HANA Planning Guide.

Note: To deploy SAP HANA on Google Cloud machine types that are certified by SAP for production, please review the Certification for SAP HANA on Google Cloud page.

Moving an SAP HANA Database to Google Cloud

There are two options you can use to rehost your SAP HANA database on Google Cloud, and each has pros and cons to consider when deciding on your approach.

Option 1: Asynchronous replication uses SAP's built-in replication tool to provide continuous data replication from the source system (also known as the primary system) to the destination, or secondary, system—in this case residing on Google Cloud.
It's best for mission-critical applications for which minimal downtime is a high priority, and for large databases. In addition, the high level of automation means that the process requires less manual intervention. You can learn more about HANA asynchronous replication in the SAP documentation.

Option 2: Backup and restore relies on SAP's backup utility to create an image of the database that is then transferred to Google Cloud, where it is restored in the new environment. Downtime for this method varies with database size, so large databases may require more downtime than with asynchronous replication. It also involves more manual tasks. However, it requires fewer resources to perform, making it an attractive option for less urgent use cases. You can learn more about SAP HANA database backup and restore in the SAP documentation.

How to migrate the SAP HANA database to Google Cloud using Asynchronous Replication

1. Create and configure Dedicated Interconnect or Cloud VPN between the current environment and Google Cloud.
2. Set up SAP HANA asynchronous replication. You can configure system replication using SAP HANA Cockpit, SAP HANA Studio, or hdbnsutil. See Setting Up SAP HANA System Replication in the SAP HANA Administration Guide. Be sure to use the same instance number and HANA SID in the template as the primary instance.
3. Configure the Google Cloud instance as the secondary node for HANA asynchronous replication.
4. Perform data validation once full data replication to the SAP HANA database in Google Cloud is complete. To learn more, see the HANA System Replication overview.
5. Perform an SAP HANA takeover on your standby database. This switches your active system from the current primary system onto the secondary system on Google Cloud. Once the takeover command runs, the system on Google Cloud becomes the new primary system. To learn more, see HANA Takeover.

How to migrate the SAP HANA database to Google Cloud using Backup and Restore

1. Create a full backup of your SAP HANA database in your current environment.
2. Create a new storage bucket in your Google Cloud environment. See Creating Storage Buckets in the Google Cloud Storage documentation.
3. Download and install gsutil on the source environment and run it to upload the HANA backup to the Cloud Storage bucket. To install the gsutil utility on any computer or server, see Install gsutil in the Google Cloud Storage documentation. Note: gsutil can copy large files more quickly by running uploads in parallel across multiple threads or processes. (A scripted alternative is sketched after these steps.)
4. Recover the HANA database on Google Cloud using SAP's RECOVER DATABASE statement. See RECOVER DATABASE Statement (Backup and Recovery) in the SAP HANA SQL Reference Guide for SAP HANA Platform.

Note: The Backint agent for SAP HANA is an SAP-integrated interface that can store and retrieve backups directly from Google Cloud Storage. It is supported and certified by SAP on Google Cloud. To learn more, see SAP HANA Backint Agent on Google Cloud.
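If you prefer a scripted upload over running gsutil interactively, the following is a minimal sketch using the google-cloud-storage Python client to upload backup files in parallel. It is an illustration only: the bucket name, backup directory, and worker count are assumptions, not values from this guide.

```python
# upload_hana_backup.py -- illustrative sketch only; the bucket name,
# backup directory, and worker count are assumptions for this example.
# Requires: pip install google-cloud-storage
import pathlib
from concurrent.futures import ThreadPoolExecutor

from google.cloud import storage

BUCKET_NAME = "my-hana-backup-bucket"           # assumed bucket name
BACKUP_DIR = pathlib.Path("/hana/backup/data")  # assumed local backup path
MAX_WORKERS = 8                                 # parallel upload streams


def upload_file(bucket: storage.Bucket, path: pathlib.Path) -> str:
    """Uploads one backup file, keeping its path relative to BACKUP_DIR."""
    blob = bucket.blob(str(path.relative_to(BACKUP_DIR)))
    blob.upload_from_filename(str(path))
    return blob.name


def main() -> None:
    client = storage.Client()
    bucket = client.bucket(BUCKET_NAME)
    files = [p for p in BACKUP_DIR.rglob("*") if p.is_file()]

    # Upload files concurrently, similar in spirit to gsutil's parallel copies.
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        for name in pool.map(lambda p: upload_file(bucket, p), files):
            print(f"Uploaded {name}")


if __name__ == "__main__":
    main()
```

Run it on the source host with application default credentials that can write to the bucket; the recovery step that follows is unchanged.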
In summary, we recommend using Asynchronous Replication (Option 1) for mission-critical applications that require the lowest downtime window. For all other applications, we recommend Backup and Restore (Option 2), as this approach requires fewer resources. It's also a great way to implement backup and restore functionality on Google Cloud.

A rehost migration is the most straightforward path to getting your SAP on HANA system up and running on Google Cloud, and the sooner you migrate, the sooner you can take advantage of the many benefits Google Cloud brings to your SAP solution. For more information on the different migration options, please review SAP on Google Cloud: Migration strategies, and learn more about deploying SAP on Google Cloud. Technical resources can be found here.
Source: Google Cloud Platform

Cloud CISO Perspectives: June 2021

It's been another busy month for security teams around the globe, with no signs of slowing down. Many of us virtually attended RSA, and ransomware attacks continue to dominate headlines. The Biden Administration's Executive Order on Cybersecurity is officially underway, with important milestones like the NIST workshops where many of us discussed the Standards and Guidelines to Enhance Software Supply Chain Security. In this month's post I'll recap these topics, the latest security updates from our Google Cloud product teams, and more. Don't forget, we have a new newsletter sign-up for this series, so you can get the latest updates delivered to your inbox.

Thoughts from around the industry

Post-RSA takeaways: Resilience was the theme of this year's event, and it's something we need to think about throughout the rest of this year and beyond. Last year I wrote about general resilience, and how one of the common mistakes many organizations make is to think that resilience can be obtained by simply writing down plans and procedures on what to do and how to respond to specific events. As recent cyber events have shown us, there are potential problems with this approach. For example, most crises or significant events are unique, and if the organization can only respond to what is in a plan or procedure, then the muscle memory for the needed agility in response may not be there. In many ways, what we need is a set of foundational capabilities across prevention, detection, response, and recovery that are continuously exercised and improved. As everyone is learning, scenario-specific plans are necessary, but real resilience comes from organizational muscle memory joined with continuously tested people, process, and technology capabilities that can be adapted to meet any challenge.

Zero trust was also a hot topic during this year's event. Between the COVID-19 pandemic and recent cyber attacks, it's promising to see that organizations everywhere are now realizing they need a comprehensive and modern zero trust access approach that removes overreliance on the network perimeter to protect themselves against a variety of threats. For Google, zero trust is more than a marketing buzzword or trend to attach to—it's how we have operated and helped to protect our internal operations over the last decade with our BeyondCorp framework. We will continue to improve upon the industry standard with our lessons learned, so that other organizations can benefit from zero trust access platforms with BeyondCorp Enterprise and move toward a safer security posture.

Ransomware: From Colonial Pipeline to JBS, rarely a day goes by without a new attack in the news. The reality is that many of these problems stem from a lack of rigor in implementing a range of basic technology controls. We're at an inflection point where both the private and public sector need to work together to prioritize the right defenses against these rising threats. We think it's a mistake to assume one control or one product can be the solution to ransomware. Many organizations have started to realize you need an array of controls working together to create and sustain a defensible security posture. We recently highlighted our recommendations to protect against ransomware based on the National Institute of Standards and Technology (NIST) primary pillars for a successful and comprehensive cybersecurity program.
Securing open source software: The Open Source team at Google recently announced an incredibly useful exploratory visualization site called Open Source Insights, which provides an interactive view of the dependencies of open source projects, including first-layer and transitive dependencies. This is an extremely important effort for the industry, especially as more and more organizations rely on open source software for critical aspects of their environments. While the benefits of open source software are clear, challenges persist. Take, for example, the complexity of the supply chain; open source software projects often have many hundreds of dependencies. Open Source Insights gives developers a comprehensive visualization of a project's dependencies and their properties and vulnerabilities. This includes interactive visualizations for developers to analyze transitive dependency graphs, and a comparison tool to highlight how different versions of a package might affect their dependencies by introducing or removing licenses, fixing security problems, or changing the packages' own dependencies. While much more work and research is needed in this space, Open Source Insights is a critical step in helping secure the open source software supply chain.

The EU Cloud Code of Conduct: While it went into force in 2018, the EU's General Data Protection Regulation (GDPR) remains firmly top of mind as organizations use the cloud for processing of sensitive data. Providers like Google Cloud are often asked derivatives of the question "how can we be sure you're taking appropriate measures to safeguard data under the GDPR." We now have a definitive answer. The EU GDPR Cloud Code of Conduct (CoC) is a mechanism for cloud providers to demonstrate how they offer sufficient guarantees to implement appropriate technical and organizational measures as data processors under the GDPR. The Belgian Data Protection Authority, based on a positive opinion by the European Data Protection Board (EDPB), last month approved the CoC, a product of years of constructive collaboration between the cloud computing community, the European Commission, and European data protection authorities. This is the first European code approved under the GDPR; it is excellent news for the industry to have a new transparency and accountability tool that helps promote trust in the cloud. We are proud to say that Google Cloud Platform and Google Workspace already adhere to these provisions.

Spotlight on the Administration's Executive Order on Cybersecurity

The Administration's recent moves in the Executive Order to shore up our nation's cyber defenses are an important milestone for both public and private sector organizations. At Google, we are deeply committed to advancing cybersecurity issues and believe that government officials shouldn't have to tackle these issues on their own. Importantly, the EO makes critical strides to help modernize government technology and advance security innovation, as well as improve standards for secure software development. We've already shared our perspective with the government and will continue to advocate on these issues in the coming months.

Modernization and security innovation: One of the most promising aspects of the government's approach is to set agencies and departments on a path to modernize security practices and strengthen cyber defenses across the federal government.
For too long, the public sector has tried to solve security challenges by spending more on security products, but as recent events have proved, spending billions of dollars on cybersecurity on an unmodernized IT platform is like building on sand. We strongly support this push toward modernization and agree with the government's focus on making security simple and scalable, by default. Modernizing not only builds cybersecurity at a foundational level but also gives the federal government the opportunity to diversify its vendors, which can lead to improved resilience.

Secure software development: Earlier this month Google participated in the NIST workshops and submitted position papers on how the industry can enhance software supply chain security. We believe that the government's call to action on secure software development practices could bring about the most significant progress on cybersecurity in a decade and will likely have the biggest impact on the government's risk posture in the long term. To further the adoption of supply chain integrity best practices, Google, in collaboration with the OpenSSF, has proposed Supply-chain Levels for Software Artifacts (SLSA) to formalize criteria around software supply chain integrity. We look forward to continuing to collaborate and engage with the Administration on this important work.

Google Cloud Security highlights

Google Cloud named a Leader in Forrester Wave™: Unstructured Data Security Platforms: Providing effective controls to protect sensitive data in the cloud is a core part of our Google Cloud product strategy, and unstructured data in particular can be challenging to secure. Given the importance of these capabilities to our customers, we were happy to see that Forrester Research named Google Cloud a Leader in The Forrester Wave™: Unstructured Data Security Platforms, Q2 2021 report. The report evaluated the 11 most significant providers with platform solutions to secure and protect unstructured data, spanning cloud providers to data security-focused vendors. Google Cloud rated highest in the current offering category among all the providers evaluated and received the highest possible score in sixteen criteria. A copy of the full report can be viewed here.

Security benefits of a Data Cloud: Last month, we held our first Data Cloud Summit, where we announced three new services as part of our database and data analytics portfolio to provide organizations with a unified data platform: Dataplex, Analytics Hub, and Datastream. Security professionals often default to using only security-branded tools, but some of the best tools for security teams may be the data and analytics products that are key to other business functions within the organization. Digital technologies like AI, ML, and data can be used to power innovation, especially for security efforts. At Google, security is the cornerstone of our product strategy, and our customers can take advantage of the same secure-by-design infrastructure, built-in data protection, and global network that we use to ensure compliance, redundancy, and reliability.

New features to secure your Cloud Run environments: Cloud Run makes developing and deploying containerized applications easier for developers.
We announced several new ways to help make Cloud Run environments more secure based on enhanced integrations with Secret Manager, Binary Authorization, Cloud KMS, and Recommendation Hub.

Advanced counter-abuse and threat analysis features in Google Workspace: We continue to add controls and capabilities for Workspace admins to protect their users and organizations against threats and abuse. We recently added features that enrich security alerts with VirusTotal threat context and reputation data, enable blocking of abusive users and bulk removal of content they've shared in Drive, and allow programmatic blocking of third-party API access.

That wraps up another month of thoughts and highlights. If you'd like to have this Cloud CISO Perspectives post delivered every month to your inbox, click here to sign up. Next month, we'll be busy hosting our first digital Security Summit, where you can hear from industry leaders and engage in interactive sessions that can help you solve your most critical security challenges. Be sure to register and tune in to the great event we have planned!
Source: Google Cloud Platform

Creating custom financial indices with Dataflow and Apache Beam

Financial institutions across the globe rely on real-time indices to inform real-time portfolio valuations, to provide benchmarks for other investments, and as a basis for passive investment instruments including exchange-traded products (ETPs). This reliance is growing—the index industry dramatically expanded in 2020, reaching revenues of $4.08 billion.

Today, indices are calculated and distributed by index providers with proximity and access to underlying asset data, and with differentiating real-time data processing capabilities. These providers offer subscriptions to real-time feeds of index prices and publish the constituents, calculation methodology, and update frequency for each index.

But as new assets, markets, and data sources have proliferated, financial institutions have developed new requirements. They will need to quickly create bespoke, frequently updating indices that represent a specific actual or theoretical portfolio, with its unique constituents and weightings.

In other words, existing index providers and other financial institutions alike will need mechanisms for rapid creation of real-time indices. This blog post's focus—an index publication pipeline collaboratively developed by CME Group and Google Cloud—is an example of such a mechanism. The pipeline closely approximates a particular CME Group index benchmark, but with far greater frequency (in near real time vs. daily) than its official counterpart. It does so by leveraging open-source programming models such as Apache Beam and cloud-based technologies such as Dataflow, which automatically scales pipelines based on inbound data volume.

Machine learning's production problem

In the past decade, advances in AI toolchains have enabled faster ML model training—and yet a majority of ML models are still not making it into production. As organizations endeavor to develop their ML capabilities, they soon realize that a real-world ML system consists of a small amount of ML code embedded in a network of complex and large ancillary components. Each component brings its own development and operational challenges, which are met by bringing a DevOps methodology to the ML system, commonly referred to as MLOps (Machine Learning Operations). To apply ML to business problems, a firm must develop continuous delivery and automation pipelines for ML.

This index publication collaboration is instructive because it demonstrates MLOps best practices for just such a pipeline. One Apache Beam pipeline, suited for operating on both batch and streaming data, extracts insights and packages them for downstream consumers. These consumers may include ML pipelines that, thanks to Apache Beam, require only one code path for inference across batch and real-time data sources. The pipeline is run inside Google Cloud's Dataflow execution engine, greatly simplifying management of underlying compute resources. But the collaboration's value is not constrained to the ML and data science realm. The project shows that consumers of the Apache Beam pipeline's insights may also include traditional business intelligence dashboards and reporting tools.
It also demonstrates the simplicity and economy of cloud-based time series data such as CME Smart Stream, which is metered by the hour, quickly and automatically provisioned, and consumable at a per-product-code (not per-feed) level.

A focus on real-time processing for financial services

To illustrate the above points, the collaboration applies data engineering and MLOps best practices to a financial services problem. We chose the financial services domain because many financial institutions do not yet have real-time market data processing or MLOps capabilities today, owing to a significant gap on either side of their ML/AI objectives.

Upstream from ML/AI models, financial institutions often experience a data engineering gap. For many financial institutions, batch processes have sufficiently addressed business requirements. As a result, the temporal nature of the time series data underlying these processes is deemphasized. For example, the original purpose of most trade booking systems was to capture a trade and ensure that it found its way to the middle and back office for settlement. They were not built with ML/AI in mind, and their underlying data therefore has not been packaged for consumption by ML/AI processes. And downstream from ML/AI models, financial institutions often encounter the aforementioned "ML production problem."

As ML/AI becomes ever more strategic, these two gaps have left many financial institutions in a conundrum—unable to train ML models for lack of properly packaged time series data, and unmotivated to package time series data for lack of ML models. By recreating a key energy market index using open-source libraries and cloud-based tools, this collaboration demonstrates that, for the financial services domain, a solution to this conundrum is more accessible today than ever.

Creating a new index

We modeled our new index after one of CME Group's many index benchmarks. The particular index expresses the value of a basket of three New York Mercantile Exchange-listed energy futures as a single price. Today, CME Group publishes the index at the end of the day by calculating the settlement price of each underlying futures contract, and then weighting and summing these values. While CME Group does not currently publish this index in real time, this collaboration aims to create a near real-time solution leveraging Google Cloud capabilities and CME Group market data delivered via CME Smart Stream. However, in order to publish the value so frequently—every five seconds, with 40-second publish latency—this collaboration's pipeline has to solve a number of challenges in near real time.

First, the pipeline must process sparse data from three separate trade feeds in memory to create open-high-low-close (OHLC) bars. More specifically, a bar must be produced for each five-second window for each of the three front-month (and sometimes second-month) energy contracts. This is solved by using the Apache Beam library to implement functions which, when executed on Dataflow, automatically scale out as input load increases. The bars must be time-aligned across the underlying feeds, which is greatly simplified by Beam's watermark feature. And for intervals in which no tick data is observed, the Beam library is used to pull forward the last value received, yielding perfect gap-free bars for downstream processors.
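To make this first step concrete, here is a minimal Apache Beam (Python) sketch of the five-second OHLC bar computation. It is not the production pipeline: the Pub/Sub topic and tick field names are illustrative assumptions, and it omits the watermark-based time alignment and gap filling described above.

```python
# A minimal sketch, not the CME Group pipeline: assumes a Pub/Sub topic of
# JSON ticks like {"symbol": "CL", "price": 71.32, "event_ts": 1623855600}.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window


class ToOhlcBar(beam.CombineFn):
    """Combines the ticks in one window into an open-high-low-close bar."""

    def create_accumulator(self):
        # [first_ts, open, high, low, close, last_ts]
        return None

    def add_input(self, acc, tick):
        ts, price = tick["event_ts"], tick["price"]
        if acc is None:
            return [ts, price, price, price, price, ts]
        if ts < acc[0]:
            acc[0], acc[1] = ts, price       # earlier tick becomes the open
        acc[2] = max(acc[2], price)          # high
        acc[3] = min(acc[3], price)          # low
        if ts >= acc[5]:
            acc[5], acc[4] = ts, price       # later tick becomes the close
        return acc

    def merge_accumulators(self, accs):
        merged = None
        for acc in accs:
            if acc is None:
                continue
            if merged is None:
                merged = list(acc)
                continue
            if acc[0] < merged[0]:
                merged[0], merged[1] = acc[0], acc[1]
            merged[2] = max(merged[2], acc[2])
            merged[3] = min(merged[3], acc[3])
            if acc[5] >= merged[5]:
                merged[5], merged[4] = acc[5], acc[4]
        return merged

    def extract_output(self, acc):
        if acc is None:
            return None  # defensive: no ticks were combined
        return {"open": acc[1], "high": acc[2], "low": acc[3], "close": acc[4]}


def run():
    opts = PipelineOptions(streaming=True)
    with beam.Pipeline(options=opts) as p:
        (
            p
            | "ReadTicks" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/ticks")  # assumed topic
            | "Parse" >> beam.Map(json.loads)
            | "KeyBySymbol" >> beam.Map(lambda t: (t["symbol"], t))
            | "FiveSecondWindows" >> beam.WindowInto(window.FixedWindows(5))
            | "OhlcPerSymbol" >> beam.CombinePerKey(ToOhlcBar())
            | "Print" >> beam.Map(print)  # replace with a real sink, e.g. Pub/Sub
        )


if __name__ == "__main__":
    run()
```

Run with the DataflowRunner and the same code scales out automatically as tick volume grows, which is the property the production pipeline relies on.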
Second, the pipeline must calculate the volume-weighted average price (VWAP) in near real time for each front-month contract. The VWAP calculations are also written using the Beam API and executed on Dataflow. Each of these functions requires visibility of every element in the time window, so the functions cannot be arbitrarily scaled out. Nonetheless, this is tractable because their input—OHLC bars—is manageably small.

Third, the pipeline must replicate CME Group's specific settlement price methodology for each contract. The rules specify whether to use VWAP or another source as the price, depending on certain conditions. They also specify how to weight combinations of monthly contracts during a roll period. The pipeline again encapsulates these requirements as an Apache Beam class, and joins the separate price streams at the correct time boundary.

The end result is a new stream publishing bespoke index data to a Google Cloud Pub/Sub topic thousands of times daily, enabling AI models as well as traditional industry index usage, dashboards, and other tools to assist real-time decision making. The stream's pipeline uses open source libraries that solve common time series problems out of the box, and cloud-based services to reduce the user's operational and scaling burden.

The importance of cloud-based data

The promise of cloud-based pipeline execution services cannot be realized using legacy data access patterns, which often require market data users to colocate and configure servers and network gear. Such patterns inject expense and scaling complexity into the pipeline's overall operation, diverting resources from the adoption of MLOps best practices. Instead, a newer, cloud-based access pattern—in which resources subscribe to data streams inexpensively, rapidly, and programmatically—is necessary.

In 2018, CME Group identified the customer need for accessible futures and options market data. CME Group collaborated with Google Cloud to launch CME Smart Stream, which distributes CME Group's real-time market data across Google Cloud's global infrastructure with sub-second latency. Any customer with a CME Group data usage license and a Google Cloud project can consume this data for an hourly usage fee, without purchasing and configuring servers and network gear.

CME Smart Stream met this index pipeline's requirements for cost-effective, cloud-based streaming data, but this is just one use case. Since the launch of the CME Smart Stream offering on Google Cloud, globally dispersed firms have adopted the solution. For example, Coin Metrics has been using the offering to better inform its customers in the crypto markets. According to CME Group, Smart Stream has become popular with new customers as the fastest, simplest way to access CME Group's market data from anywhere in the world.

Adapt the design pattern to your needs

By combining cloud-based data, open-source libraries, and cloud-based pipeline execution services, we created a real-time index using the same constituents as its end-of-day counterpart. Additionally, financial institutions will find this approach addresses many other challenges—real-time valuation of a large set of portfolios, benchmark creation for new ETPs, or external publication of new indices.

Give it a try

This approach is available to help you meet your organization's needs. We'll be discussing this topic in CME Group's webinar, End-to-End Market Data Solutions in the Cloud, at 10:30 am ET on June 16th.
Source: Google Cloud Platform

Cloud Run: A story of serverless containers

Mindful Containers is a fictitious company that builds containerized microservice applications. They need a fully managed compute environment for deploying and scaling serverless containerized microservices, so they are considering Cloud Run. They are excited about Cloud Run because it abstracts away cluster configuration, monitoring, and management, letting them focus on building features for their apps.

What is Cloud Run?

Cloud Run is a fully managed compute environment for deploying and scaling serverless HTTP containers without worrying about provisioning machines, configuring clusters, or autoscaling.

- No vendor lock-in – Because Cloud Run takes standard OCI containers and implements the standard Knative Serving API, you can easily port your applications to on-premises environments or any other cloud.
- Fast autoscaling – Microservices deployed in Cloud Run scale automatically based on the number of incoming requests, without you having to configure or manage a full-fledged Kubernetes cluster. Cloud Run scales to zero—that is, uses no resources—if there are no requests.
- Split traffic – Cloud Run enables you to split traffic between multiple revisions, so you can perform gradual rollouts such as canary or blue/green deployments.
- Custom domains – You can set up custom domain mapping in Cloud Run, and it will provision a TLS certificate for your domain.
- Automatic redundancy – Cloud Run offers automatic redundancy, so you don't have to worry about creating multiple instances for high availability.

How to use Cloud Run

With Cloud Run, you write your code in your favorite language and/or use a binary library of your choice, then push it to Cloud Build to create a container image. With a single command—"gcloud run deploy"—you go from a container image to a fully managed web application that runs on a domain with a TLS certificate and autoscales with requests.
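To make that concrete, here is a minimal sketch of a containerizable Python (Flask) service. The route names, message fields, and port handling are illustrative assumptions rather than part of the Mindful Containers scenario; the /pubsub route anticipates the Pub/Sub push trigger described in the next section.

```python
# app.py -- a minimal sketch of a Cloud Run-deployable HTTP service.
import base64
import json
import os

from flask import Flask, request

app = Flask(__name__)


@app.route("/", methods=["GET"])
def index():
    # Plain HTTPS invocation: every Cloud Run service gets a stable HTTPS URL.
    return "Hello from Cloud Run!\n"


@app.route("/pubsub", methods=["POST"])
def pubsub_push():
    # Pub/Sub push subscriptions wrap the message in a JSON envelope and
    # base64-encode the payload.
    envelope = request.get_json(silent=True) or {}
    message = envelope.get("message", {})
    payload = base64.b64decode(message.get("data", "")).decode("utf-8")
    print(f"Received Pub/Sub message: {payload}")
    return ("", 204)


if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Packaged with a simple Dockerfile, this service can be built with Cloud Build and rolled out with the gcloud run deploy command mentioned above.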
How does Cloud Run work?

A Cloud Run service can be invoked in the following ways:

HTTPS: You can send HTTPS requests to trigger a Cloud Run-hosted service. Note that all Cloud Run services have a stable HTTPS URL. Some use cases include:
- Custom RESTful web APIs
- Private microservices
- HTTP middleware or reverse proxies for your web applications
- Prepackaged web applications

gRPC: You can use gRPC to connect Cloud Run services with other services—for example, to provide simple, high-performance communication between internal microservices. gRPC is a good option when you:
- Want to communicate between internal microservices
- Support high data loads (gRPC uses protocol buffers, which are up to seven times faster than REST calls)
- Need only a simple service definition and don't want to write a full client library
- Use streaming gRPCs in your gRPC server to build more responsive applications and APIs

WebSockets: WebSockets applications are supported on Cloud Run with no additional configuration required. Potential use cases include any application that requires a streaming service, such as a chat application.

Trigger from Pub/Sub: You can use Pub/Sub to push messages to the endpoint of your Cloud Run service, where the messages are subsequently delivered to containers as HTTP requests. Possible use cases include:
- Transforming data after receiving an event upon a file upload to a Cloud Storage bucket
- Processing your Google Cloud operations suite logs with Cloud Run by exporting them to Pub/Sub
- Publishing and processing your own custom events from your Cloud Run services

Running services on a schedule: You can use Cloud Scheduler to securely trigger a Cloud Run service on a schedule, similar to using cron jobs. Possible use cases include:
- Performing backups on a regular basis
- Performing recurrent administration tasks, such as regenerating a sitemap or deleting old data, content, configurations, synchronizations, or revisions
- Generating bills or other documents

Executing asynchronous tasks: You can use Cloud Tasks to securely enqueue a task to be asynchronously processed by a Cloud Run service. Typical use cases include:
- Handling requests during unexpected production incidents
- Smoothing traffic spikes by delaying work that is not user-facing
- Reducing user response time by delegating slow background operations, such as database updates or batch processing, to another service
- Limiting the call rate to backend services like databases and third-party APIs

Events from Eventarc: You can trigger Cloud Run with events from more than 60 Google Cloud sources. For example:
- Use a Cloud Storage event (via Cloud Audit Logs) to trigger a data processing pipeline
- Use a BigQuery event (via Cloud Audit Logs) to initiate downstream processing in Cloud Run each time a job is completed

How is Cloud Run different from Cloud Functions?

Cloud Run and Cloud Functions are both fully managed services that run on Google Cloud's serverless infrastructure, autoscale, and handle HTTP requests or events. They do, however, have some important differences:
- Cloud Functions lets you deploy snippets of code (functions) written in a limited set of programming languages, while Cloud Run lets you deploy container images using the programming language of your choice.
- Cloud Run supports the use of any tool or system library from your application; Cloud Functions does not let you use custom executables.
- Cloud Run offers a longer request timeout of up to 60 minutes, while with Cloud Functions the request timeout can be set as high as 9 minutes.
- Cloud Functions sends only one request at a time to each function instance, while by default Cloud Run is configured to send multiple concurrent requests to each container instance. This helps improve latency and reduce costs if you're expecting large volumes of requests.

Pricing

Cloud Run comes with a generous free tier and is pay per use, which means you only pay while a request is being handled on your container instance. If it is idle with no traffic, you don't pay anything.

Conclusion

After learning about the ease of setup, scalability, and management capabilities of Cloud Run, the Mindful Containers team is using it to deploy stateless microservices. If you are interested in learning more, check out the documentation. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content, follow me on Twitter @pvergadia and keep an eye on thecloudgirl.dev.
Source: Google Cloud Platform

Streamline your application migration journey with Migrate for Anthos and GKE

Most customers I talk to today are excited about the opportunities that modernizing their workloads in the cloud affords them. In particular, they are very interested in how they can leverage Kubernetes to speed up application deployment while increasing security. Additionally, they are happy to turn over some cluster management responsibilities to Google Cloud's SREs so they can focus on solving core business challenges. However, moving VM-based applications to containers can present its own unique set of challenges:

- Assessing which applications are best suited for migration
- Figuring out what is actually running inside a given virtual machine
- Setting up ingress and egress for migrated applications
- Reconfiguring service discovery
- Adapting day 2 processes for patching and upgrading applications

While those challenges may seem daunting, Google Cloud has a tool that can help you easily solve them in a few clicks. Migrate for Anthos helps automate the process of moving your applications, whether they are Linux or Windows, from various virtual machine environments to containers on either Google Kubernetes Engine (GKE) or Anthos. There is even a specialized capability to migrate WebSphere applications. Your source VMs can be running in GCP, AWS, Azure, or VMware. Once the workload has been containerized, it can then be easily deployed to Kubernetes running in either a GKE or an Anthos cluster on GCP, AWS, or VMware.

Let's walk through the migration process together, and I will show you how Migrate for Anthos can help you efficiently migrate virtual machines to containers.

The first step in any application migration journey is to determine which applications are suitable for migration. I always recommend picking a few low-risk applications with a high probability of success. This allows your team to build knowledge and define processes while simultaneously establishing credibility with key stakeholders. Migrate for Anthos has an application assessment component that will inspect the applications running inside your VM and provide guidance on the likelihood of success. There are different tools for Windows and Linux, and for WebSphere applications we leverage tooling directly from IBM.

After you've chosen a good migration candidate, the next step is to perform the actual migration. Migrate for Anthos breaks this down into a couple of discrete steps. First, Migrate for Anthos does a dry run in which it inspects the virtual machine and determines what is actually running inside it. The artifact from this step is a migration plan in the form of a YAML file. Next, review the YAML file and adjust any settings you want to change. For instance, if you were migrating a database, you would update the YAML file with the point in the file system at which to mount the persistent volume that holds the database's data. After you've reviewed and adjusted the migration YAML, you can perform the actual migration. This process creates a few key artifacts: a Docker container image, the matching Dockerfile, and a Kubernetes deployment YAML that includes definitions for all the relevant primitives (services, pods, stateful sets, etc.). The Docker image that is created is actually built using a multi-stage build leveraging two different images: the first is the Migrate for Anthos runtime, and the second includes the workload extracted from the source VM. This is important to understand as you plan Day 2 operations.
This Dockerfile can be edited to update not only the underlying Migrate for Anthos runtime layer, but also the application components. And while not mandatory, you can easily manage all of that through a CI/CD pipeline.

If you want to ease complexity and accelerate your cloud migration journey, I highly recommend you check out Migrate for Anthos. Watch the videos linked above, and then get your hands on the keyboard and try out our Qwiklab.
Source: Google Cloud Platform