Go green: Sustainable disaster recovery using Google Cloud

At Google, we're dedicated to building technology that helps people do more for the planet, and to fostering sustainability at scale. We continue to be the world's largest corporate purchaser of renewable energy, and in September made a commitment to operate on 24/7 carbon-free energy in all our data centers and campuses worldwide by 2030.

As we've shared previously, our commitment to a sustainable future for the earth takes many forms. This includes empowering our partners and customers to establish a disaster recovery (DR) strategy with zero net operational carbon emissions, regardless of where their production workload is.

In this post, we'll explore carbon considerations for your disaster recovery strategy, how you can take advantage of Google Cloud to reduce net carbon emissions, and three basic scenarios that can help optimize the design of your DR failover site.

Balancing your DR plan with carbon emissions considerations: It's easier than you think

A DR strategy entails the policies, tools, and procedures that enable your organization to support business-critical functions following a major disaster and recover from an unexpected regional failure. Sustainable DR, then, means running your failover site (a standby computer server or system) with the lowest possible carbon footprint. From a sustainability perspective, we frequently hear that organizations have trouble balancing a robust DR approach with carbon emissions considerations. In order to be prepared for a crisis, they purchase extra power and cooling, buy backup servers, and staff an entire facility—all of which sit idle during normal operations.

In contrast, Google Cloud customers can lower their carbon footprint by running their applications and workloads on a cloud provider that has procured enough renewable energy to offset the operational emissions of its usage. In terms of traditional DR planning, Google Cloud customers don't have to worry about capacity (securing enough resources to scale as needed) or the facilities and energy expenditure associated with running equipment that may only be needed in the event of a disaster.

When it comes to implementing a DR strategy using Google Cloud, there are three basic scenarios. To help guide your DR strategy, here's a look at what those scenarios are, plus resources and important questions to ask along the way.

1. Production on-premises, with Google Cloud as the DR site

If you operate your own data centers or use a non-hyperscale data center, like many operated by hosting providers, some of the energy efficiency advantages that can be achieved at scale might not be available to you. For example, an average data center uses almost as much non-computing or "overhead" energy (such as cooling and power conversion) as it does to power its servers.

Creating a failover site on-premises means not only are you running data centers that are not optimized for energy efficiency, but you are also operating idle servers in a backup location, consuming electricity with associated carbon emissions that are likely not offset. When designing your DR strategy, you can avoid increasing your carbon footprint by using Google Cloud as the target for your failover site.

You could create your DR site on Google Cloud by replicating your on-prem environment. Replicating environments means that your DR failover site can directly take advantage of Google Cloud's carbon-neutral data centers, which offsets the energy consumption and costs of running a DR site on-prem.
However, the reality is that if you are just replicating your on-prem environment, there is an opportunity for you to optimize how your DR site will consume electricity. Google Cloud will offset all of the emissions of a DR site running on our infrastructure, but to truly operate at the lowest possible carbon footprint, you should optimize the way you configure your DR failover environment on Google Cloud.

To do that, there are three patterns—cold, warm, and hot—that can be implemented when your application runs on-prem and your DR solution is on Google Cloud. Get an in-depth look at those patterns here. The graph below illustrates how the pattern you choose relates to your "personal" energy use. In this context, we define "personal" energy costs as energy wasted on idle resources.

Optimizing your personal energy use consists of more than offsetting where you run your DR site. It involves thinking about your DR strategy carefully, beyond taking the simplest "let's just replicate everything" approach. Some of the important questions you need to ask include:

- Are there some parts of your application that can withstand a longer recovery time objective (RTO) than others?
- Can you make use of Google Cloud storage as part of your DR configuration?
- Can you get closer to a cold DR pattern, and thus optimize your personal energy consumption?

The elephant in the room, though, is: "What if I absolutely need to have resources when I need them? How do I know the resources will be there when I need them? How will this work if I optimize the design of my DR failover site on Google Cloud such that I have minimal resources running until I need them?" In this situation, you should look into the ability to reserve Compute Engine zonal resources. This ensures resources are available for your DR workloads when you need them. Using reservations for virtual machines also means you can take advantage of discounting options (which we discuss later in this post).
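As a sketch of what creating such a reservation can look like, here is a minimal example; the reservation name, zone, VM count, and machine type are illustrative, so size them to your own DR needs:

# reserve zonal Compute Engine capacity for DR failover
gcloud compute reservations create dr-failover-pool \
    --zone=us-central1-a \
    --vm-count=8 \
    --machine-type=n2-standard-8

VMs created later in the same zone with a matching machine type can then consume this reserved capacity during a failover, instead of competing for on-demand resources.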
In summary, using Google Cloud as the target for your failover site can help immediately lower your net carbon emissions, but it's also important to optimize your DR configuration by asking the right questions and implementing the right pattern. Lastly, if your particular use case permits, consider migrating your on-prem workloads to Google Cloud altogether. This will enable your organization to really move the needle in terms of reducing its carbon footprint as much as possible.

2. Production on Google Cloud, with Google Cloud as the DR site

Running your applications and DR failover site on Google Cloud means there are zero net operational emissions for both your production application and the DR configuration. From here, you want to focus on optimizing the design of your DR failover site on Google Cloud. The most suitable pattern depends on your use case.

For example, a full high availability (HA) configuration, or hot pattern, means you are using all your resources. There are no standby resources idling; you are using what you need, when you need it, all the time. Alternatively, your RTO may not require a full HA configuration, and you can adopt a warm or cold pattern in which you scale or spin up resources as needed in the event of a disaster or major event. Adopting a warm or cold pattern means all or some of the resources needed for DR are not in use until you need them.

This may lead to the exact same questions we mentioned in scenario #1: What if I absolutely need to have resources in case of a disaster or major event? How do I know the resources will be there when I need them? How will this work? A simple solution is, as in the previous scenario, to reserve Compute Engine zonal resources for your workloads. And since you're running your production on Google Cloud, you can work with your Google Cloud sales representative to forecast your usage and take advantage of committed use discounts, where you purchase compute resources (vCPUs, memory, GPUs, and local SSDs) at a discounted price in return for committing to pay for those resources for one or three years. Committed use discounts are ideal for workloads with predictable resource needs.

Taking advantage of committed use discounts enables Google Cloud to use your forecasting to help ensure our data centers are optimized for what you need, when you need it—rather than Google Cloud over-provisioning and essentially running servers that are not optimally used. Sustainability is a balancing act between the power that is being consumed, what sort of power is in use, and the utilization of the resources that are being powered by the data centers.

Related article: Rodan + Fields achieve business continuity for retail workloads with SAP on Google Cloud. Learn how Rodan + Fields designed and implemented a cloud-native, automated resilience strategy for their SAP workloads on Google Cloud.

3. Production on another cloud, with Google Cloud as the DR site

As with running production on-prem, your overall carbon footprint is a combination of what you use outside of Google Cloud and what you're running on Google Cloud (which is carbon neutral). If you're running production on another cloud, you should investigate the sustainability characteristics of its infrastructure relative to your own sustainability goals. There are multiple ways to achieve carbon neutrality, and many providers are on different journeys towards their own sustainability goals. For the past three years, Google has focused on matching its electricity consumption with renewable energy, and in September 2020 we set a target to source carbon-free energy 24/7 for every data center. We believe these commitments will help our cloud customers meet their own sustainability targets.

Regardless of which scenario applies to your organization, using Google Cloud for DR is an easy way to lower your energy consumption. When Google Cloud says we partner with our customers, we really mean it. We meet our customers where they are, and we are grateful for the customers who work with us by forecasting their resource consumption so we know where to focus our data center expansion. Our data centers are designed to achieve net-zero emissions and are optimized for maximum utilization. The resulting benefits get passed to our customers, who in turn can lower their carbon footprint. When it comes to sustainability, we get more done when we work together.

Keep reading: Get more insights that can guide your journey toward 24×7 carbon-free energy. Download the free whitepaper, "Moving toward 24×7 Carbon-Free Energy at Google Data Centers: Progress and Insights."
Source: Google Cloud Platform

New whitepaper: Designing and deploying a data security strategy with Google Cloud

William Gibson said it best: "The future is already here—it's just not evenly distributed."

The cloud has arrived, but data security in the cloud is too often a novel problem for our customers. Well-worn paths to security are lacking, and we often see customers struggling to adapt their data security posture to this new reality. There is an understanding that data security is critical, but a lack of well-understood principles to drive an effective data security program. Thus, we are excited to share a view of how to deploy a modern and effective data security program.

Today, we are releasing a new whitepaper, "Designing and deploying a data security strategy with Google Cloud," that accomplishes exactly that. It was written jointly by Andrew Lance of Sidechain (see the Sidechain blog post about this paper) and Dr. Anton Chuvakin, with a fair amount of help from other Googlers, of course.

Before we share some of our favorite quotes from the paper, let me spend a few more minutes explaining the vision behind it. Specifically, we wanted to explore both the question of starting a data security program in a cloud-native way, and that of adjusting your existing data security program when you start utilizing cloud computing.

Imagine you are a traditional company migrating to the cloud. You have some data security capabilities, and most likely an existing data security program as part of your overall security program. Perhaps you are deploying tools like DLP, encryption, and data classification, and possibly others. Suddenly, or perhaps not so suddenly, you're migrating some of your data processing and some of your data to the cloud. What to do? Do my controls still work? Are my practices current? Am I looking at the right threats? How do I marry my cloud migration effort with my existing data security effort? Our paper seeks to address this scenario by giving you advice on the strategy, complete with Google Cloud examples.

On the other hand, perhaps you are a company that was born in the cloud. In this case, you may not have an existing data security effort. However, if you plan to process sensitive or regulated data in the cloud, you need to create one. What does a cloud-native data security program look like? Which of the lessons learned by others on-premises can you ignore? What are some of the cloud-native ways to secure data?

As a quick final comment, the paper does not address the inclusion of privacy requirements. That is a worthwhile and valuable goal, just not one we touched on in this paper.

Here are some of our favorite quotes from the paper:

"Simply applying a data security strategy designed for on-premise workloads isn't adequate [for the cloud]. It lacks the ability to address cloud-specific requirements and doesn't take advantage of the great amount of [cloud] security services and capabilities."

A solid cloud data security strategy should rely on three pillars: "Identity / Access Boundaries / Visibility" (the last item covers the spectrum of assessment, detection, investigation, and other monitoring and observability needs).

Useful questions to ponder include: "How does my data security strategy need to change to accommodate a shift to the cloud? What new security challenges for data protection do I need to be aware of in the cloud?
What does my cloud provider offer that could streamline or replace my on-premise controls?"

"You will invariably need to confront data security requirements in your journey to the cloud, and performing a 'lift and shift' for your data security program won't work to address the unique opportunities and challenges the cloud offers."

"As your organization moves its infrastructure and operations to the cloud, shift your data protection strategies to cloud-native thinking."

At Google Cloud, we strive to accelerate our customers' digital transformations. As our customers leverage the cloud for business transformation, adapting data security programs to this new environment is essential. Enjoy the paper!

Related article: Improving security, compliance, and governance with cloud-based DLP data discovery. Data discovery, a key component of DLP technology, has never been more important. Here's why.
Source: Google Cloud Platform

Take the first step toward SRE with Cloud Operations Sandbox

At Google Cloud, we strive to bring Site Reliability Engineering (SRE) culture to our customers not only through training on organizational best practices, but also with the tools you need to run successful cloud services. Part and parcel of that is comprehensive observability tooling—logging, monitoring, tracing, profiling, and debugging—which can help you troubleshoot production issues faster, increase release velocity, and improve service reliability.

We often hear that implementing observability is hard, especially for complex distributed applications whose services are implemented in different programming languages, deployed in a variety of environments, and have different operational costs, among many other factors. As a result, when migrating and modernizing workloads onto Google Cloud, observability is often an afterthought. Nevertheless, being able to debug a system and gain insight into its behavior is essential for running reliable production systems. Customers want to learn how to instrument services for observability and implement SRE best practices using the tools Google Cloud has to offer, but without risking production environments. With Cloud Operations Sandbox, you can learn in practice how to kickstart your observability journey and answer the question, "Will it work for my use case?"

Cloud Operations Sandbox is an open-source tool that helps you learn SRE practices from Google and apply them to cloud services using Google Cloud's operations suite (formerly Stackdriver). Cloud Operations Sandbox has everything you need to get started in one click:

- Demo service: an application built using a microservices architecture on a modern, cloud-native stack (a modified fork of the Online Boutique microservices demo app)
- One-click deployment: an automated script that deploys and configures the service to Google Cloud, including Service Monitoring configuration; tracing with OpenTelemetry; and Cloud Profiling, Logging, Error Reporting, Debugging, and more
- Load generator: a component that produces synthetic traffic against the demo service
- SRE recipes: pre-built tasks that manufacture intentional errors in the demo app so you can use Cloud Operations tools to find the root cause of problems like you would in production
- An interactive walkthrough to get started with Cloud Operations

Getting started

Launching the Cloud Operations Sandbox is as easy as can be. Simply:

1. Go to cloud-ops-sandbox.dev
2. Click on the "Open in Google Cloud Shell" button.

This creates a new Google Cloud project. Within that project, a Terraform script creates a Google Kubernetes Engine (GKE) cluster and deploys a sample application to it. The microservices that make up the demo app are pre-instrumented with logging, monitoring, tracing, debugging, and profiling as appropriate for each microservice's language runtime. As such, sending traffic to the demo app generates telemetry that is useful for diagnosing the cloud service's operation.
In order to generate production-like traffic against the demo app, the automated script deploys a synthetic load generator in a different geo-location than the demo app. It also creates 11 custom dashboards (one for each microservice) to illustrate the four golden signals of monitoring as described in Google's SRE book, and it automatically configures uptime checks, service monitoring (SLOs and SLIs), log-based metrics, alerting policies, and more. At the end of the provisioning script you'll get a few URLs for the newly created project.

You can follow the user guide to learn about the entire Cloud Operations suite of tools, including tracking microservices interactions in Cloud Trace (thanks to the OpenTelemetry instrumentation of the demo app), and see how to apply the learnings to your own scenario.

Finally, when you're finished with the Sandbox, you can tear it down. Because the sandbox lives in its own dedicated project, deleting that project (for example, with gcloud projects delete <PROJECT_ID>) removes everything it created.

Next steps

Following SRE principles is a proven method for running highly reliable applications in the cloud. We hope that the Cloud Operations Sandbox gives you the understanding and confidence you need to jumpstart your SRE practice. To get started, visit cloud-ops-sandbox.dev, explore the project repo, and follow along in the user guide.
Source: Google Cloud Platform

How Cloud SQL freed Arcules to keep building

Editor's note: Arcules, a Canon Company, delivers the next generation of cloud-based video monitoring, access control, and video analytics—all in one unified, intuitive platform. Here, we look at how they turned to Google Cloud SQL's fully managed services so they could focus more of their engineers' time on improving their architecture.

As the leading provider of unified, intelligent security-as-a-service solutions, Arcules understands the power of cloud architecture. We help security leaders in retail, hospitality, and financial and professional services use their IP cameras and access control devices from a single, unified platform in the cloud, where they can gather actionable insights from video analytics to enable better decision-making. Since Arcules is built on an open platform model, organizations can use any of their existing cameras with our system; they aren't locked into particular brands, ensuring a more scalable and flexible solution for growing businesses.

As a relatively young organization, we were born on Google Cloud, where the support of open-source tools like MySQL allowed us to bootstrap very quickly. We used MySQL heavily at the time of our launch, though we've since migrated most of our data over to PostgreSQL, which works better for us from the perspective of both security and data segregation.

Our data backbone

Google Cloud SQL, the fully managed relational database service, plays a significant role in our architecture. For Arcules, convenience was the biggest factor in choosing Cloud SQL. With Google Cloud's managed services taking care of tasks like patch management, those tasks are out of sight, out of mind. If we were handling it all ourselves by deploying databases on Google Kubernetes Engine (GKE), for example, we'd have to manage the updates, migrations, and more. Instead of patching databases, our engineers can spend their time improving the performance of our code, building product features, and automating other areas of our infrastructure so we can maintain an immutable infrastructure.

Because we have an immutable infrastructure involving a lot of automation, it's important that we stay on top of keeping everything clean and reproducible. Our setup includes containerized microservices on GKE, connecting to their data through Cloud SQL Proxy sidecars. Our services are all highly available, and we use multi-region databases. Nearly everything else is fully automated from a backup and deployment perspective, so all of the microservices handle the databases directly.
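For readers new to the sidecar pattern mentioned above, here is a minimal sketch of the Cloud SQL Auth Proxy invocation that such a sidecar wraps; the project, region, and instance names are placeholders:

# run the proxy so the application can reach the Cloud SQL instance on localhost:5432
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432

In a GKE deployment, this same binary runs as a second container in the pod, and the application connects to 127.0.0.1 rather than to a public database address.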
All five of our teams work directly with Cloud SQL, with four of them building services and one providing ancillary support. Our data analytics platform (covering many centuries of video data) was born on PostgreSQL, and we have two main types of analytics: one for measuring overall people traffic in a location and one for heat maps of a location. Because our technology is so geographically relevant, we use the PostGIS plugin for PostgreSQL to work with intersections, so we can re-regress over the data. In heat mapping, we generate a colorized map over a configurable time period—such as one hour or 30 days—using data that shows where security cameras have detected people. This allows a customer to see, for example, a summary of a building's main traffic and congestion points during that time window. This is an aggregation query that we run on demand or periodically, whichever happens first. That can be in response to a query to the database, or it can be calculated as a summary of aggregated data over a set period of time.

We also store data in Cloud SQL for user management, which tracks data starting from UI login. And we track data around the management of remote video and other devices, such as when a user plugs a video camera into our video management software or adds access control. That is all orchestrated through Cloud SQL, so it's essential to our work. We're moving to have the databases fully instrumented in the deployment pipeline, and ultimately to embed site reliability engineering (SRE) practices with the teams as well.

Cloud SQL lets us do what we do best

Geographical restrictions and data sovereignty issues have forced us to reexamine our architecture and perhaps deploy some databases on GKE or Compute Engine, though one thing is clear: we'll still be deploying every database we can on Cloud SQL. The time we save having Google manage our databases is time better spent on building new solutions. We ask ourselves: how can we make our infrastructure do more for us? With Cloud SQL handling our database management tasks, we're free to do more of what we're really good at.

Learn more about Arcules and its cloud-based services. Curious about fully managed relational database services? Check out the features of Cloud SQL.
Source: Google Cloud Platform

Introducing WebSockets, HTTP/2 and gRPC bidirectional streams for Cloud Run

We are excited to announce a broad set of new traffic serving capabilities for Cloud Run: end-to-end HTTP/2 connections, WebSockets support, and gRPC bidirectional streaming, completing the types of RPCs that are offered by gRPC. With these capabilities, you can deploy new kinds of applications to Cloud Run that were not previously supported, while taking advantage of serverless infrastructure. These features are now available in public preview for all Cloud Run locations.

Support for streaming is an important part of building responsive, high-performance applications. The initial release of Cloud Run did not support streaming, as it buffered both the request from the client and the service's response. In October, we announced server-side streaming support, which lets you stream data from your serverless container to your clients. This allowed us to lift the prior response limit of 32 MB and support server-side streaming for gRPC. However, it still did not allow you to run WebSockets or gRPC with either client-side or bidirectional streaming.

WebSockets and gRPC bidirectional streaming

With the new bidirectional streaming capabilities, Cloud Run can now run applications that use WebSockets (e.g., social feeds, collaborative editing, multiplayer games) as well as the full range of gRPC bidirectional streaming APIs. With these capabilities, both the server and the client keep exchanging data over the same request. WebSockets and bidirectional RPCs allow you to build more responsive applications and APIs: you can now build a chat app on top of Cloud Run using a protocol like WebSockets, or design streaming APIs using gRPC.

Here's an example of a collaborative live "whiteboard" application running as a container on Cloud Run, serving two separate WebSocket sessions in different browser windows. Note the real-time updates to the canvases in both windows.

Using WebSockets on Cloud Run doesn't require any extra configuration and works out of the box. To use client-side streaming or bidirectional streaming with gRPC, you need to enable HTTP/2 support, which we discuss in the next section. To try out a sample WebSockets application on Cloud Run, deploy this whiteboard example from Socket.io by clicking on this link.

It's worth noting that WebSockets streams are still subject to the request timeouts configured on your Cloud Run service. If you plan to use WebSockets, make sure to set your request timeout accordingly.

End-to-end HTTP/2 support

Even though many apps don't support it, Cloud Run has supported HTTP/2 since its first release, including end-to-end HTTP/2 for gRPC. It does so by automatically upgrading clients to use the protocol, making your services faster and more efficient. However, until now, HTTP/2 requests were downgraded to HTTP/1 when they were sent to a container.

Starting today, you can use end-to-end HTTP/2 transport on Cloud Run. This is useful for applications that already support HTTP/2. For apps that don't, Cloud Run will simply continue to handle HTTP/2 traffic up until it arrives at your container.

For your service to serve traffic with end-to-end HTTP/2, your application needs to be able to handle requests in the HTTP/2 cleartext (also known as "h2c") format. We have developed a sample h2c server application in Go for you to try out the "h2c" protocol. You can build and deploy this app to Cloud Run by cloning the linked repository and running a command along the lines of the following.
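The exact commands are in the linked repository; illustratively, and assuming the container image has been built and pushed, the deployment looks something like this (the service and image names are placeholders, and the flag may require the beta component while the feature is in preview):

# build the container image with Cloud Build, then deploy with end-to-end HTTP/2 enabled
gcloud builds submit --tag gcr.io/$PROJECT_ID/h2c-example
gcloud beta run deploy h2c-example --image gcr.io/$PROJECT_ID/h2c-example --use-http2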
In the example command above, the "--use-http2" option indicates that the application supports the "h2c" protocol and ensures the service gets HTTP/2 requests without downgrading them. Once you've deployed the service, use the following command to validate that requests are served using HTTP/2 and not downgraded to HTTP/1:

curl -v --http2-prior-knowledge https://<SERVICE_URL>

You can also configure your service to use HTTP/2 in the Google Cloud Console.

Getting started

With these new networking capabilities, you can now deploy and run a broader variety of web services and APIs on Cloud Run. To learn more about these new capabilities, now in preview, check out the WebSockets demo app or the sample h2c server app. If you encounter issues or have suggestions, please let us know. You can also help us shape the future of Cloud Run by participating in our research studies.

Related article: Introducing HTTP/gRPC server streaming for Cloud Run. You can now stream large or partial responses from Cloud Run to clients, improving the performance of your applications.
Source: Google Cloud Platform

Hands-on with Anthos on bare metal

In this blog post I want to walk you through my experience of installing Anthos on bare metal (ABM) in my home lab. It covers the benefits of deploying Anthos on bare metal, the necessary prerequisites, the installation process, and using Google Cloud operations capabilities to inspect the health of the deployed cluster. This post isn't meant to be a complete guide to installing Anthos on bare metal; for that, I'd point you to the tutorial I posted on our community site.

What is Anthos and Why Run it on Bare Metal?

We recently announced that Anthos on bare metal is generally available. I don't want to rehash the entirety of that post, but I do want to recap some key benefits of running Anthos on your own systems, in particular:

- Removing the dependency on a hypervisor can lower both the cost and complexity of running your applications.
- In many use cases, there are performance advantages to running workloads directly on the server.
- Having the flexibility to deploy workloads closer to the customer can open up new use cases by lowering latency and increasing application responsiveness.

Environment Overview

In my home lab I have a couple of Intel Next Unit of Computing (NUC) machines. Each is equipped with an i7 processor, 32GB of RAM, and a single 250GB SSD. Anthos on bare metal requires 32GB of RAM and at least 128GB of free disk space. Both of these machines are running Ubuntu Server 20.04 LTS, which is one of the supported distributions for Anthos on bare metal; the others are Red Hat Enterprise Linux 8.1 and CentOS 8.1.

One of these machines will act as the Kubernetes control plane, and the other will be my worker node. Additionally, I will use the worker node to run bmctl, the Anthos on bare metal command line utility used to provision and manage the Anthos on bare metal Kubernetes cluster. On Ubuntu machines, AppArmor and UFW both need to be disabled. And since I'm using the worker node to run bmctl, I need to make sure that gcloud, gsutil, and Docker 19.03 or later are all installed.

On the Google Cloud side, I need to make sure I have a project where I have the owner and editor roles. Anthos on bare metal also makes use of three service accounts and requires a handful of APIs. Rather than creating the service accounts and enabling the APIs myself, I chose to let bmctl do that work for me. Since I want to take a look at the Cloud Operations dashboards that Anthos on bare metal creates, I also need to provision a Cloud Monitoring Workspace.

When you run bmctl to perform the installation, it uses SSH to execute commands on the target nodes. For this to work, I need to ensure I've configured passwordless SSH between the worker node and the control plane node. If I were using more than two nodes, I'd need to configure connectivity between the node where I run bmctl and all the targeted nodes. With all the prerequisites met, I was ready to download bmctl and set up my cluster.

Deploying Your Cluster

To deploy a cluster I need to perform the following high-level steps:

1. Install bmctl.
2. Verify my network settings.
3. Create a cluster configuration file.
4. Modify the cluster configuration file.
5. Deploy the cluster using bmctl and my customized cluster configuration file.

Installing bmctl is pretty straightforward. I used gsutil to copy it down from a Google Cloud Storage bucket to my worker machine, and set the execution bit.
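For illustration, the download amounts to something like the following; the version segment of the bucket path is an assumption, so substitute the release you intend to install:

# copy the bmctl binary from the release bucket and make it executable
gsutil cp gs://anthos-baremetal-release/bmctl/1.6.0/linux-amd64/bmctl .
chmod +x bmctl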
Anthos on Bare Metal Networking

When configuring Anthos on bare metal, you will need to specify three distinct IP subnets. Two are fairly standard in Kubernetes: the pod network and the services network. The third subnet is used for ingress and load balancing. The IPs associated with this network must be on the same local L2 network as your load balancer node (which in my case is the same as the control plane node). You will need to specify an IP for the load balancer, one for ingress, and then a range for the load balancers to draw from to expose your services outside the cluster. The ingress VIP must be within the range you specify for the load balancers, but the load balancer IP may not be in that range.

The CIDR range for my local network is 192.168.86.0/24. Furthermore, I have my Intel NUCs all on the same switch, so they are all on the same L2 network. One thing to note is that the default pod network (192.168.0.0/16) overlapped with my home network. To avoid any conflicts, I set my pod network to use 172.16.0.0/16. Because there is no conflict, my services network is using the default (10.96.0.0/12). It's important to ensure that your chosen local network doesn't conflict with the bmctl defaults.

Given this configuration, I've set my control plane VIP to 192.168.86.99. The ingress VIP, which needs to be part of the range that you specify for your load balancer pool, is 192.168.86.100. And I've set my pool of addresses for my load balancers to 192.168.86.100-192.168.86.150. In addition to the IP ranges, you will also need to specify the IP addresses of the control plane node and the worker node. In my case the control plane is 192.168.86.51 and the worker node IP is 192.168.86.52.

Create the Cluster Configuration File

To create the cluster configuration file, I connected to my worker node via SSH. Once connected, I authenticated to Google Cloud. The command below creates a cluster configuration file for a new cluster named demo-cluster. Notice that I used the --enable-apis and --create-service-accounts flags; these flags tell bmctl to create the necessary service accounts and enable the appropriate APIs.

./bmctl create config -c demo-cluster --enable-apis --create-service-accounts --project-id=$PROJECT_ID

Edit the Cluster Configuration File

The output from the bmctl create config command is a YAML file that defines how my cluster should be built. I needed to edit this file to provide the networking details mentioned above, the location of the SSH key to be used to connect to the target nodes, and the type of cluster I want to deploy. With Anthos on bare metal, you can create standalone and multi-cluster deployments:

- Standalone: This deployment model has a single cluster that serves as both a user cluster and an admin cluster.
- Multi-cluster: Used to manage fleets of clusters; includes both admin and user clusters.

Since I'm deploying just a single cluster, I chose standalone. Here are the specific changes I made to the cluster definition file.
Under the list of access keys at the top of the file:
- For the sshPrivateKeyPath variable, I specified the path to my SSH private key.

Under the Cluster definition:
- Changed the type to standalone
- Set the IP address of the control plane node
- Adjusted the CIDR range for the pod network
- Specified the control plane VIP
- Uncommented and specified the ingress VIP
- Uncommented the addressPools section (excluding actual comments) and specified the load balancer address pool

Under the NodePool definition:
- Specified the IP address of the worker node

For reference, I've created a GitLab snippet of my cluster definition YAML (with the comments removed for the sake of brevity).

Create the Cluster

Once I had modified the configuration file, I was ready to deploy the cluster using the bmctl create cluster command:

./bmctl create cluster -c demo-cluster

bmctl will complete a series of preflight checks before creating your cluster. If any of the checks fail, check the log files specified in the output. Once the installation is complete, the kubeconfig file is written to /bmctl-workspace/demo-cluster/demo-cluster-kubeconfig. Using the supplied kubeconfig file, I can operate against the cluster as I would any other Kubernetes cluster.

Exploring Logging and Monitoring

Anthos on bare metal automatically creates three Google Cloud Operations (formerly Stackdriver) logging and monitoring dashboards when a cluster is provisioned: node status, pod status, and control plane status. These dashboards enable you to quickly gain visual insight into the health of your cluster. In addition to the three dashboards, you can use Google Cloud Operations Metrics Explorer to create custom queries for a wide variety of performance data points.

To view the dashboards, return to the Google Cloud Console, navigate to the Operations section, and then choose Monitoring and Dashboards. You should see the three dashboards in the list in the middle of the screen. Choose each of the three dashboards and examine the available graphs.

Conclusion

That's it! Anthos on bare metal enables you to create centrally managed Kubernetes clusters with a few commands. Once deployed, you can view your clusters in the Google Cloud Console and deploy applications as you would with any other GKE cluster. If you've got the hardware available, I'd encourage you to run through my hands-on tutorial.

Related article: Anthos in depth: exploring a bare-metal deployment option. Running Anthos on bare metal may provide better performance and lower costs for some workloads.
Source: Google Cloud Platform

Enforcing least privilege by bulk-applying IAM recommendations

Imagine this scenario: Your company has been using Google Cloud for a little while now. Things are going pretty well—no outages, no security breaches, and no unexpected costs. You've just begun to feel comfortable when an email comes in from a developer. She noticed that the project she works on has a service account with a Project Owner role, even though this service account was created solely to access the Cloud Storage API. She's uncomfortable with these elevated permissions, so you begin investigating.

As you dig deeper and start looking at a few projects in your organization, you notice multiple instances of highly privileged roles like Project Owner and Editor assigned to people, groups, and service accounts that don't need them. The worst part is you don't even know how big the problem is. There are hundreds of projects at your company and thousands of GCP identities. You can't check them all manually because you don't have time, and you don't know what permissions each identity needs to do its job.

If any part of this scenario sounds familiar, that's because it's incredibly common. Managing identities and privileges is extremely challenging, even for the most sophisticated of organizations. There is good news, though. Google Cloud's IAM Recommender can help your security organization adhere to the principle of least privilege—the idea that a subject should only be given the access or privileges it needs to complete a task. As we discussed in this blog post, IAM Recommender uses machine learning to inspect every principal's permission usage across your entire GCP environment for the last 90 days. Based on that scan, it either deems that a user has a role that is a good fit, or it recommends a new role that would be a better fit for that user's needs. For example, suppose a senior manager uses Google Cloud only to look at BigQuery reports. IAM Recommender notices that pattern and recommends changing the manager's role from Owner to something more appropriate, like BigQuery Data Viewer.

In this blog, we'll walk through one way to analyze IAM recommendations across all your projects and bulk-apply those recommendations for an entire project using a set of commands in Cloud Shell. With this process, we'll show you how to:

1. View the total number of service accounts, members, and groups that have IAM recommendations, broken out by project.
2. Identify a project with IAM recommendations that you feel comfortable applying.
3. Bulk-apply recommendations on that project.
4. (Optional) Revert the bulk-applied recommendations if you find that you need to.
5. Identify more projects with recommendations.
6. Repeat steps 1-3.

Let's get started.

Get ready to bulk-apply IAM Recommendations

Before you get started, there's a bit of work that needs to be done to get your Google Cloud environment ready:

- Make sure that the Recommender API and Cloud Asset API are enabled.
- Create a service account and give it the IAM Recommender Admin, Role Viewer, Cloud Asset Viewer, and Cloud Security Admin roles at the org level. You will need to reference this service account and its associated key later while running these scripts.
- Note that these scripts will not run if the Cloud Asset API of a project is inside a VPC Service Controls perimeter.

Now you're ready to start.

Step 1: View your IAM recommendations

1. Run this command in Cloud Shell to save all the required code in a folder named iam_recommender_at_scale. This command also creates a Python virtual environment within the folder to execute the code.

2. Go to the source directory and activate the Python environment.
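A minimal sketch of this step, assuming the setup in step 1 created a virtualenv named "venv" inside the folder:

# enter the folder created in step 1 and activate its Python virtual environment
cd iam_recommender_at_scale
source venv/bin/activate  # adjust the path if the venv was created under a different name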
3. Next, retrieve all the IAM recommendations in your organization, broken out by project. Make sure to enter your organization ID, called out here as "<YOUR-ORGANIZATION-ID>". You'll also need to include a path to the service account key you stored earlier in the preparation steps, called out below as "<SERVICE-ACCOUNT-FILE-PATH>". Here's an example:

4. For this demo we exported the results from step 1.3 into a CSV and uploaded it into a Google Sheet. However, you could just as easily use something like BigQuery or your own data analytics tool to look at the data.

Table 1: The resource column lists the name of every project with active IAM recommendations within your organization. Subsequent columns break out the total number of recommendations by service accounts, users, and groups.

Step 2: Pick a project to apply IAM recommendations on

1. Analyze the output of the work you've done so far.

Table 2: When we visualize table 1 as a column chart, it becomes clear that there are a couple of outliers in terms of the total number of recommendations. We will focus on the "project/organization1:TestProj" project for the duration of this document.

2. Choose a project whose recommendations you want to bulk-apply. In our example, we had two qualifying criteria that we felt were met by "project/organization1:TestProj":

- Does the project have a relatively high number of recommendations? "TestProj" has the second-highest total number of recommendations, so it qualified.
- Is the project a safe environment on which to test-drive IAM Recommender? Yes, because "TestProj" is a sandbox.

3. (Optional) If you don't have a sandbox project, or the criteria we mentioned in step 2 don't feel right, here are some other ideas:

- Choose a project you are very familiar with, one where you would notice any unwanted changes.
- Ask a security-conscious colleague if they'd be willing to use IAM Recommender on their project.
- Choose a legacy project with very predictable usage patterns. While IAM Recommender uses machine learning to make accurate recommendations for even the most dynamic of projects, this might be a more manageable risk.

Step 3: Apply the IAM recommendations

1. Surface each principal with a recommendation in "TestProj". This step doesn't apply the recommendations, only displays them. For example:

2. The resulting JSON is the template for making actual changes to your IAM access policy. This JSON also serves as the mechanism to revert these changes should you find later that you need to, so make sure to store it somewhere safe. Below is a generic example. Each recommendation in the JSON contains:

- id: a unique identifier for the recommendation
- etag: the modification time of the recommendation
- member: the identity, or principal, that the recommendation is about. There can be more than one recommendation per member because a member can have more than one role.
- roles_recommended_to_be_removed: the role(s) that IAM Recommender will remove
- roles_recommended_to_be_replaced_with: the role(s) that will replace the existing role. Depending on the recommendation, IAM Recommender replaces the existing role with one role, many roles, or no roles (i.e., removes that role altogether), with the goal of adhering to the principle of least privilege.

3. (Optional) This demonstration doesn't alter the JSON, but rather applies all the recommendations as is. However, if you want to customize this JSON and remove certain recommendations, this is the time. Simply delete a recommendation with the editor of your choice, save the file, and upload it into the Cloud Shell file manager. You can even write a script that goes through the JSON and removes certain types of recommendations (e.g., maybe you don't want to take recommendations associated with a certain principal or role), as in the sketch below.
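Here is one hypothetical way to do that kind of scripted filtering with jq; it assumes the exported file is a JSON array of recommendation objects with the fields described above, so adjust the selector and file names to match the actual layout:

# keep every recommendation except those targeting one specific principal (example member name)
jq '[.[] | select(.member != "user:senior-manager@example.com")]' \
    recommendations.json > filtered_recommendations.json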
4. Apply all the changes described in the JSON created in step 3.1 by executing the command below. Step 4 describes how you can revert these changes later if you want to. Example:

5. Just like that, your project is far closer to adhering to the principle of least privilege than it was at the beginning of this process! When we run step 1.3 again, we see that recommendations for "TestProj" went from 483 to 0.

Step 4: Revert the changes (optional)

Refer back to the JSON you created in step 3.1 and run this code to revert the changes. Example:

Step 5: Apply more recommendations

At this point, there are a couple of options for what to do next. You can start applying more recommendations: run this script again, or go to the IAM page in the Console and look for individual recommendations via the IAM Recommendation icon. Another option is to go to the Recommendations Hub and look at all your GCP recommendations, not just the IAM-related ones. Or, as a bonus step, you can set up an infrastructure-as-code pipeline for IAM Recommender using something like Terraform. Check out this tutorial to learn how to set that up.

And that's the least of it

There are many ways to use the IAM Recommender to ensure least privilege. We hope this blog has helped you identify and mitigate projects that could represent a security risk to your company. You can read about how companies like Veolia used the IAM Recommender to remove millions of permissions with no adverse effects. We are hopeful that your company will have a similar experience. Good luck, and thanks for reading!

Special thanks to Googlers Asjad Nasir, Bakh Inamov, and Tom Nikl for their valuable contributions.

Related article: Under the hood: The security analytics that drive IAM recommendations on Google Cloud. An in-depth look at how IAM Recommender works and the benefits it provides.
Source: Google Cloud Platform

Work at warp-speed in the BigQuery UI

Data analysts can spend hours writing SQL each day to get the right insights, so it's crucial that the tools in the Google Cloud Console make that job as easy and as fast as possible. Now, we're excited to show you how BigQuery's Cloud Console UI has been updated with radical usability improvements for more efficient work, making it easier to find the data you need and write the right SQL quickly. The new capabilities span the entire SQL workspace experience across three feature areas:

- New multi-tab navigation
- New resource panel
- New SQL editor

New multi-tab navigation

One of the most popular requests for BigQuery has been to support tabs. Now you can work on multiple queries at once and iterate faster with tabbed navigation:

- Multitask by working on a new query while you're waiting for another query to run.
- Compare queries or result sets side by side by splitting your tabs to the left and right.
- Reference a table schema while you're authoring a query: just click the table to open its tab.
- Reference history at any time with the panel at the bottom of the workspace.
- Reduce your browser's memory footprint by avoiding the overhead of opening the Cloud Console in multiple browser tabs.

New resource panel

Now it's easier than ever to find relevant data at your organization:

- Your resources and search results are loaded dynamically as you need them, so your workspace is more responsive.
- The navigation buttons for transfers, scheduled queries, and administration have been moved to a collapsible panel on the far left to give you more space for writing queries.
- Before, you needed to know the exact name of a project and pin it to the resources panel on the left-hand side of the page before you could see its resources. Now you can expand a search to find resources outside your pinned projects with a single click on "Broaden search to all projects".
- Pin and unpin projects quickly with a single click on the pin icon next to each project.

New SQL editor

Finally, we've updated the SQL editor itself with support for tons of new features. In addition to faster performance, you get as-you-type suggestions for SQL functions and metadata like column names, plus time-saving IDE capabilities, powered by Monaco, to help you write faster:

- Find/replace text within the editor
- Multi-cursor and multi-selection support
- Collapse and expand line sections
- Type F1 in the editor to see dozens of other handy new shortcuts and features.

While these features are in preview, you can hide them with the "Hide Preview Features" button. If you encounter issues, let us know with the Send Feedback button in the top right of the Cloud Console.

Get started by visiting BigQuery's Cloud Console UI. Happy querying!

Related article: Query without a credit card: introducing BigQuery sandbox. With BigQuery sandbox, you can try out queries for free, to test performance or to try Standard SQL before you migrate your data warehouse.
Source: Google Cloud Platform

Build your own workout app in 5 steps—without coding

With the holidays behind us and a new year ahead, it's time to reset our goals and find ways to make our lives healthier and happier. This time last year, like many people, I decided to create a more regimented exercise routine and track my progress. I looked at several fitness and workout apps I could use, but none of them let me track my workouts exactly the way I wanted to—so I made my own, all without writing any code.

If you've found yourself in a similar situation, don't worry: using AppSheet, Google Cloud's no-code app development platform, you can also build a custom fitness app that can do things like record your sets, reps, and weights, log your workouts, and show you how you're progressing. To get started, copy the completed version here. If you run into any snags along the way or have questions, we've also started a thread on AppSheet's Community that you can join.

Step 1: Set up your data and create your app

First, you'll need to organize your data and connect it to AppSheet. AppSheet can connect to a number of data sources, but it'll be easiest to connect it to Google Sheets, as we've built some nifty integrations with Google Workspace. I've already set up some sample data. There are two tables (one on each tab): the first has a list of exercises I do each week, and the second is a running log of each exercise I do and my results (such as the weight used and my number of reps). Feel free to copy this Sheet and use it to start your app.

Once you've done that, you can create your app directly from Google Sheets. Go to Tools > AppSheet > Create an App, and AppSheet will read your data and set up your app. Note that if you're using another data source, you can follow these steps to connect to AppSheet.

Step 2: Create a form to log your exercises

You should now be in the AppSheet editor, with a live preview of your app on the right side of your screen. At this point, AppSheet has only connected to one of the two tables we had in our spreadsheet (whichever was open when we created our app), so we'll want to connect the other by going to Data > Tables > "Add table for 'Workout Log'".

Before creating the form, we need to tell AppSheet what type of data is in each column and how that data should be used. Go to Data > Columns > Workout Log and set the following columns with these settings (you can adjust column settings by clicking on the pencil icon to the left of each column):

This image shows how I adjusted the settings for "Key," "Set 1 Weights (lbs)," "Set 1 Reps," and "How I Feel."

Now let's create a view for this form. A view is similar to a web page, but for apps. Go to UX > Views and click on New View. Set the view name to "Record Exercise", select "Workout Log" next to "For this data", set your view type to "form," and set the position to "Left." Now, if you save your app, you should be able to click on "Record exercise" in your app and it will open a form where you can log your exercise.

Step 3: Set up your digital workout log book

I like to quickly see past workouts while I'm exercising so I know how many reps and how much weight I should be doing. To make our workout log book, we'll want to create a new view. Go to UX > Views and click on New View. Name this view "Log Book," select "Workout Log" as your data, select "Table" as the view type, and set the position to "Right." Then, in the View Options section, choose Sort by "Date," "Ascending" and Group by "Date," "Ascending."

Step 4: Create your Stats Dashboard

At this point, we already have a working app that lets us record and review workouts.
However, being the data geek I am, I love using graphs and charts to track progress. Essentially, we'll be making an interactive dashboard with charts that show stats for whichever exercise we select. This step is a little more involved, so feel free to skip it if you'd like—it is your app, after all!

Before we make the dashboard view, we need to decide what metrics we want to see. I like to see the total number of reps per set, along with the amount of weight I lifted in my first set. We already have a column for weights (Set 1 Weights (lbs)), but we'll need to set up a virtual column to calculate total reps. To do this, select Data > Columns > Workout Log > Add Virtual Column. For advanced logic such as these calculations, AppSheet uses expressions, similar to those used in Google Sheets. Call the virtual column "Total Reps" and add this formula in the pop-up box to calculate total reps:

[Set 1 reps] + [Set 2 reps] + [Set 3 reps] + [Set 4 reps] + [Set 5 reps]

Now we can work on creating our dashboard view. In AppSheet, a dashboard view is basically a view with several other views inside it. So before we create our dashboard, let's create the following views.

Now we can create our dashboard view. Let's call the view "Stats," set the view type to "Dashboard," and the position to "Center." For View Entries, we'll select "Exercise" (not Exercises!), "Total Reps," "Set 1 Weight (lbs.)," "Sentiment," and "Calendar." Enable Interactive Mode, and under Display > Icon type "chart" and select the icon of your choosing. Hit Save, and you should now have a pretty neat dashboard that adjusts each chart based on the exercise you select.

Step 5: Personalize your app and send it to your phone!

Now that your app is ready, you can personalize it by adjusting the look and feel or adding additional functionality. At this point, feel free to poke around the AppSheet editor and test out some of the functionality. For my app, here are a few of the customizations I added:

- I went to UX > Brand and changed my primary color to blue.
- I went to Behavior > Offline/Sync and turned on Offline Use so that I can use my app when I don't have an internet connection.
- I changed the position of my Exercises view to Menu, so it only appears in the menu in the top-left corner of my app.

Once you've adjusted your app the way you want it, feel free to send it to your phone. Go to Users > Users > Share App, type in your email address next to User emails, check "I'm not a robot," and select "Add users + send invite." Now check your email on your phone and follow the steps to download your app!

AppSheet offers plenty of ways to simplify your life by building apps—see what other apps you can make. Happy app building!
Source: Google Cloud Platform

BenchSci helps pharma deliver new medicines—stat!—with Google Cloud

Every startup should have a lofty goal, even if they're not 100% certain how they'll reach it. Our company, BenchSci, is a Canadian biotech startup whose mission is to help scientists bring new medicines to patients 50% faster by 2025. Since founding the company in 2015, we've been building a platform to help scientists design better experiments by mining a vast catalog of public datasets, research articles, and proprietary customer datasets. And that platform is built entirely on Google Cloud, whose breadth and depth of features have supported us as we move toward our goal.

There's urgency to our mission because pharmaceutical R&D can be inefficient. Take preclinical research, for example: one study estimates that half of preclinical research spending is wasted, amounting to $28.2 billion annually in the U.S. alone and up to $48.6 billion globally[1]. And by our estimates, about 36.1% of that preclinical research waste comes from scientists using inappropriate reagents—materials such as antibodies used in life science experiments.

As such, our first product was an AI-assisted reagent selection tool. It collects relevant scientific papers and reagent catalogs, extracts relevant data points from them with proprietary machine learning models, and makes the results searchable to scientists from an easy-to-use interface. Scientists can quickly determine up front whether a particular reagent is a good fit for their experiment, based on existing experimental evidence. That way, they can focus on experiments with the greatest likelihood of productive results and bring new treatments to patients faster.

All this runs on Google Cloud. We collect papers, theses, product catalogs, medical and biological databases, and other data, and store them in Cloud Storage. We then organize and extract insights from the data using a pipeline built from tools including Dataflow and BigQuery. Next, we process the data with our machine learning algorithms and store the results in Cloud SQL and Cloud Storage. Scientists access the results via a web interface built on Google Kubernetes Engine (GKE), Cloud Load Balancing, Identity-Aware Proxy, Cloud CDN, Cloud DNS, and other services. Finally, we use multiple cloud projects, IAM, and infrastructure as code to keep data secure and each customer isolated.

As a result, we've eliminated the need for all but the most specialized R&D infrastructure, as well as for operational hardware, and slashed our management overhead. The combination of Google Cloud's managed services and easily scalable persistent containers and VMs also lets us prototype and test new capabilities, then bring them to production with minimal management on our part.

Google Cloud has also scaled with BenchSci's needs. The data we analyze has increased by an order of magnitude over three years, and switching to BigQuery and Cloud SQL, for example, removed a great deal of our operational overhead. We also appreciate the flexibility of BigQuery to drive critical steps in our text-processing ML pipeline and the stability of Cloud SQL to drive data access. Over time, we've also evolved our data processing pipeline. We started out with Dataproc, a managed Hadoop service, but eventually rewrote this system in Dataflow, which uses Apache Beam. Dataflow can handle hundreds of terabytes and lets us focus on implementing our business logic rather than managing the underlying infrastructure.

Recently, we've expanded our platform to support private datasets.
Initially, we served all our customers different views of the same underlying public data. In time, though, some customers asked if we could include their proprietary pharmacological data in our system. Rather than managing multitenant systems with strict project isolation between them, we leveraged GKE and Config Connector to create unique environments for each customer's data—without increasing the operational demand on our teams.

In short, Google Cloud has enabled us to focus on solving problems without being distracted by having to build and operate computing infrastructure and services. Looking ahead, running our company on Google Cloud gives us the confidence to grow by collecting more and broader data sources; extracting more information from each unit of data with ML algorithms; processing ever larger and more proprietary datasets; and serving a broader range of customer needs through a varied set of interfaces and access points. Our goal is still ambitious, but by partnering with Google Cloud, it feels attainable.

Learn more about healthcare and life sciences solutions on Google Cloud.

[1] https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002165

Related article: Healthcare gets more productive with new industry-specific AI tools. We're launching in public preview a suite of fully-managed AI tools to help healthcare professionals with the review and analysis of medi…
Source: Google Cloud Platform