Compute Engine explained: Picking the right licensing model for your VMs

We recently posted a guide to choosing the right Compute Engine machine family and type for your workloads. Once you’ve settled on the right technical specs, you need to decide how to license the software you run on them, what kind of image to use, and how to weigh the tradeoffs between your options. This is an important process, as licensing decisions can add a layer of complexity to future operational or architectural decisions. To support your cloud journey, Compute Engine provides licensing flexibility and compliance support through various licensing options, as illustrated in the table below.

Let’s take a closer look at the four available licensing options and their respective benefits and considerations. For specific questions around your licenses or software rights, please work with your legal team, license reseller, or license provider.

Option 1: Google Cloud-provided image and license

Arguably the most straightforward option is to purchase new licenses from Google Cloud. For your convenience, Compute Engine provides prebuilt images that have pay-as-you-go licenses attached. This approach can help minimize licensing agreements and obligations, and lets you take advantage of pay-as-you-go billing for elastic workloads. In addition, using Google Cloud’s premium Compute Engine images and licensing relieves you of a lot of operational burden, as Google Cloud:

- Provides ongoing updates and patches to base images
- Manages license reporting and compliance
- Simplifies your support model by letting you leverage Google for your software support needs

Premium images and licenses for Compute Engine are available for both Linux and Windows workloads.
To ensure proper license reporting and compliance, you are automatically billed based on usage for all VMs created with these images. If you don’t already have a Microsoft Enterprise Agreement or Red Hat Cloud Access licenses, this model lets you take advantage of Google Cloud’s relationships with third-party software vendors for pay-as-you-go licenses that scale with your workload and come with premium support. You pay for exactly what you need when your workloads spike, rather than paying a predetermined amount through fixed third-party contracts.

For pay-as-you-go licenses, Compute Engine offers the following premium images with built-in licensing:

- Red Hat Enterprise Linux (RHEL and RHEL for SAP)
- SUSE Linux Enterprise Server (SLES and SLES for SAP)
- Microsoft Windows Server
- Microsoft SQL Server

With the exception of Microsoft SQL Server, all licenses associated with these images are charged in one-second increments, with a one-minute minimum. SQL Server images are also charged in one-second increments, but have a 10-minute minimum. For additional pricing details, visit the premium images pricing documentation.

Option 2: Bring your own image with Google Cloud-provided licenses

If you want to import your own images but still wish to use pay-as-you-go licenses provided by Google Cloud, Compute Engine lets you import your virtual disks or virtual appliances and specify a license provided by Compute Engine to attach to the image. This model lets you bring a custom image to Compute Engine, ensure the proper Compute Engine drivers are installed, and use a Compute Engine pay-as-you-go license. As with Compute Engine premium images, all images created through this import process will have the appropriate license(s) attached.
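The billing increments described above reduce to a simple rounding rule. The sketch below is our own illustration, not an official billing formula, and the per-second rate used in the example is a made-up placeholder rather than a real Google Cloud price.

```python
# Sketch of the license billing increments described above:
# per-second billing with a one-minute minimum, except SQL Server,
# which has a ten-minute minimum. Rates here are hypothetical
# placeholders, not real Google Cloud prices.

LICENSE_MINIMUM_SECONDS = {
    "rhel": 60,
    "sles": 60,
    "windows-server": 60,
    "sql-server": 600,  # 10-minute minimum
}

def billable_seconds(license_name: str, run_seconds: int) -> int:
    """Round license usage up to the license's minimum billing duration."""
    minimum = LICENSE_MINIMUM_SECONDS[license_name]
    return max(run_seconds, minimum)

def license_cost(license_name: str, run_seconds: int, rate_per_second: float) -> float:
    """License cost for a single VM at a hypothetical per-second rate."""
    return billable_seconds(license_name, run_seconds) * rate_per_second
```

For example, a RHEL VM that runs for 30 seconds is billed for the full one-minute minimum, while a SQL Server VM that runs for five minutes is billed for the full ten-minute minimum.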
VMs created using these images are billed automatically to help ensure correct license reporting and compliance. Some of the benefits of using your own image with Google Cloud licensing include:

- You can use your own custom image
- Google Cloud manages license reporting and compliance
- Simplified support by leveraging Google for your vendor software support needs

This option, available for both Linux (RHEL) and Windows workloads, helps reduce licensing agreements and complexity, and lets you take advantage of pay-as-you-go billing for elastic workloads.

Option 3: Google Cloud-provided image with bring your own license or subscription

If you want to use an image from Google Cloud but bring your own licenses or subscriptions, that’s an option too. You can choose SUSE Linux Enterprise Server with bring-your-own-subscription (BYOS) support from the Google Cloud Marketplace, allowing you to spin up your own images while taking advantage of any licensing agreements or subscriptions you may have with your Linux operating system vendor. To use BYOS, sign up for a license on the vendor’s website when you deploy the solution. Under this model, the vendor bills you directly for licensing, and Google Cloud bills you separately for infrastructure costs.

This option is not available for Windows Server or SQL Server, as both require you to bring your own image when you bring your own licenses. Additional details on bringing your own Windows licenses are covered below.

In short, with a Google Cloud-provided image plus your own license or subscription, you can:

- Use a Google Cloud Marketplace solution with pre-installed software packages
- Reuse existing licensing agreements
- Pay Google Cloud only for infrastructure costs

Option 4: Bring your own image and your own license

Lastly, you can bring eligible licenses to Google Cloud to use with your own imported images. With this option, you import your virtual disks or virtual appliances and specify a ‘Bring Your Own License’ (BYOL) option.
Like other BYOL or BYOS options, images created this way are billed only for infrastructure. This option supports customers with eligible Red Hat Enterprise Linux, Windows Server, and other Microsoft application (e.g., SQL Server) licenses.

For Red Hat Enterprise Linux, you can import your RHEL images using the image import tool and specify your own licenses. You can run these workloads on either multi-tenant VMs or single-tenant VMs on Compute Engine sole-tenant nodes.

For Windows licenses, you can import your own image using the same image import tooling. If you run Microsoft application servers with Software Assurance (which covers SQL Server, but not the underlying OS), you can bring your licenses using License Mobility. However, for Windows OS licenses, regardless of whether or not you have Software Assurance, you are restricted to running your BYOL Windows Server or BYOL desktop OS on dedicated hardware, available on Compute Engine sole-tenant nodes or Google Cloud VMware Engine.

Sole-tenant nodes let you launch your instances onto physical Compute Engine servers that are dedicated exclusively to your workloads, while providing visibility into the underlying hardware to support your license usage and reporting needs. When running on sole-tenant nodes, different host maintenance configurations help support your physical-server affinity licensing restrictions while still ensuring you receive the latest host updates.
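To make the sole-tenant placement concrete, here is a sketch of the node-affinity scheduling section of a Compute Engine instances.insert request that pins a BYOL Windows VM to a dedicated node group. The instance name, image path, and node group name are hypothetical placeholders; see the sole-tenant node documentation for the authoritative request shape.

```python
# Sketch: pinning a BYOL Windows VM to a sole-tenant node group via the
# Compute Engine API's node-affinity scheduling field. The node group,
# project, and image names below are hypothetical placeholders.

def sole_tenant_scheduling(node_group: str) -> dict:
    """Build the `scheduling` section that restricts placement to one node group."""
    return {
        "nodeAffinities": [
            {
                "key": "compute.googleapis.com/node-group-name",
                "operator": "IN",
                "values": [node_group],
            }
        ]
    }

instance_body = {
    "name": "byol-windows-vm",  # hypothetical
    "machineType": "zones/us-central1-a/machineTypes/n1-standard-8",
    "disks": [{
        "boot": True,
        "initializeParams": {
            # The BYOL image imported with the image import tooling.
            "sourceImage": "projects/my-project/global/images/my-byol-windows-image",
        },
    }],
    "scheduling": sole_tenant_scheduling("my-byol-node-group"),
}
```

The same `nodeAffinities` mechanism also supports pinning to individual nodes, which can matter for per-physical-core license reporting.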
Additional details on these options can be found in the sole-tenant node documentation.

There are several benefits to using your own image with your own license or subscription:

- Save on licensing costs by reusing existing investments in licenses
- Take advantage of unlimited virtualization rights for your per-physical-core licenses by using CPU overcommit on sole-tenant nodes
- Leverage Compute Engine tooling for license reporting and compliance

However, before going down this path, consider the following:

- Although Compute Engine provides support and tooling on sole-tenant infrastructure, you’re responsible for license activation, reporting, and compliance.
- Windows Server BYOL requires the use of dedicated hardware, in the form of Compute Engine sole-tenant nodes.
- Sole-tenant nodes provide maintenance policy configurations that let you adjust maintenance behavior to best comply with your licensing requirements.

The licensing low-down

Choosing the right image and licensing options for a given workload depends on a variety of factors, including the operating system or application that you’re running, and whether you have existing licenses, images, or vendor relationships that you want to take advantage of. We hope this blog post helps you make sense of all your options. For more on licensing pricing, check out these resources:

- See the estimated costs of your instances and Compute Engine resources when you create them in the Google Cloud Console.
- Estimate your total project costs with the Google Cloud Pricing Calculator.
- View and download prices from the Pricing Table in the Cloud Console.
- Use the Cloud Billing Catalog API for programmatic access to SKU information.
- Gauge your costs with the premium image pricing documentation.

For detailed information on licensing Microsoft workloads on Google Cloud, please reference this guide authored by SoftwareOne.
Source: Google Cloud Platform

The best of Google Cloud Next ’20: OnAir's Security Week for technical practitioners

Hello security aficionados! This is your week for Google Cloud Next ’20: OnAir. There is a ton of security content coming out this week, across a wide range of topics and audiences. With that in mind, here are some sessions that I think are particularly useful for security professionals and technical practitioners:

- Take Control of Security with Cloud Security Command Center: Guillaume Blaquiere from Veolia, along with Kyle Olive and Andy Chang from Google Cloud, demonstrates how to prevent, detect, and respond to threats in virtual machines, containers, and more using Cloud Security Command Center.
- Authentication for Anthos Clusters: Google’s Richard Liu and Pradeep Sawlani show you how to authenticate to Anthos clusters, including how to integrate with your identity providers using protocols such as OIDC and LDAP.
- Minimizing Permissions Using IAM Recommender: Find out how Uber developed automation to minimize permissions org-wide from Uber’s Senior Cloud Security Engineer Sonal Desai, along with Cloud IAM Product Manager Abhi Yadav.

Check out our full Security Session Guide for a look at everything going on this week. In addition to sessions, our weekly Talks by DevRel series is a great companion to the conference. Join our host Kaslin Fields for our security technical practitioner-focused recap, Q&A, and deep-dive sessions by Sandro Badame and Stephanie Wong on Friday, August 7th at 9 AM PST.
For folks in Asia-Pacific, I will be hosting the APAC edition with our APAC team on Friday at 11 AM SGT.

If you want hands-on technical experience, we also have security-focused Study Jams available this week:

- HTTP Load Balancer with Cloud Armor
- Hands-on Lab: User Authentication: Identity-Aware Proxy

Security Week has something for everyone, so be sure to take a look at the full security session catalog for sessions that cover more, including security, compliance, and handling sensitive data.

Beyond this week, we also have a lot of exciting security learning opportunities coming over the rest of the summer. Application Modernization Week (starting on August 24th), in particular, has some interesting security-related sessions:

- Secrets in Serverless – 2.0: Discover all the secrets about how to store secrets for serverless workloads from my DevRel colleague and Cloud Secrets Product Manager Seth Vargo.
- Evolve to Zero Trust Security Model with Anthos Security: Find out how you can protect your software supply chain with Binary Authorization and Anthos.
- Anthos Security: Modernize Your Security Posture for Cloud-Native Applications: Learn about all the cloud-native security tools that GKE and Anthos make available from GKE Security Engineer Greg Castle and Senior Product Manager Samrat Ray.

Security touches many different areas, and practitioners need to be constantly learning, so check back on the blog every Monday from now until the first week of September for session guides, and be on the lookout for sessions, demos, and Study Jams in other weeks as well!
Source: Google Cloud Platform

Google Cloud named a Leader in the 2020 Forrester Wave for API Management Solutions

APIs are a critical component of any enterprise’s digital transformation strategy. They can drive customer engagement, accelerate time to market for new services, power innovation, and unlock new business opportunities. Choosing the right API management platform is therefore critical to running a successful API program, and research from industry analyst firms like Forrester Research can help enterprises evaluate and choose the right solution.

Today, we’re proud to share that Google Cloud has been recognized by Forrester as a Leader in The Forrester Wave™: API Management Solutions, Q3 2020. We believe this recognition is a testament to our continued strategic investments in product innovation and laser-sharp focus on the success of our customers. Today, six of the top 10 healthcare companies, seven of the top 10 retailers, five of the top 10 financial services companies, and six of the top 10 telecom providers trust Apigee to drive their digital transformation efforts. Moreover, many global organizations that were impacted by the COVID-19 pandemic continued to invest in Apigee as they doubled down on their digital strategy.

In this report, Forrester assessed 15 API management solutions against a set of pre-defined criteria. In addition to being named a Leader, Google Cloud received the highest possible score in the market presence category; in the strategy category criteria of product vision and planned enhancements; and in current offering criteria such as API user engagement, REST API documentation, formal lifecycle management, data validation and attack protection, API product management, and analytics and reporting.

“As a long-standing player in the market, Google Cloud has rich resources for educating customers and prospects on API business potential, including using them to create new ecosystem models. This shows in the product’s rich set of API product and pricing definition features,” according to the Forrester report.
Forrester also noted that Google Cloud has deepened its integration of Apigee with other Google Cloud capabilities such as reCAPTCHA, machine learning, and Istio-based service mesh, and that Google Cloud’s reference customers expressed high satisfaction with Apigee API management. We believe this reflects why large global brands like Nationwide Insurance, Philips, Pizza Hut, ABN Amro, Ticketmaster, and Change Healthcare partner with Google Cloud to drive their digital transformation programs.

“What I’ve really appreciated about Apigee isn’t just the functionality in the developer portal, but the guidance they provided on how we should roll out our API strategy and how we can think strategically about digital transformation using APIs,” said Rick Schnierer, AVP, One IT Applications Business Solutions, Nationwide Insurance.

You can download the full The Forrester Wave™: API Management Solutions, Q3 2020 report here (requires an email address). To learn more about Apigee, visit the website here.
Source: Google Cloud Platform

Introducing CAS: Securing applications with private CAs and certificates

Digital certificates underpin identity and authentication for many networked devices and services. Recently, we’ve seen increased interest in using public key infrastructure (PKI) in DevOps and device management, particularly for IoT devices. But one of the most fundamental problems with PKI remains: it’s hard to set up Certificate Authorities (CAs), and even harder to do it reliably at scale. To help, we’re announcing Certificate Authority Service (CAS), now in beta, from Google Cloud: a highly scalable and available service that simplifies and automates the management and deployment of private CAs while meeting the needs of modern developers and applications.

To see how CAS can help, let’s look a bit deeper at the challenges surrounding certificate use. As we mentioned, private certificates are one of the most common ways to authenticate users, machines, or services over networks. Digital certificates help make many interactions more secure, including when a user connects to an enterprise-owned website over HTTPS, when a laptop tries to connect to a WiFi access point, or when a user tries to sign in to their email account. These certificates are normally issued from a private Certificate Authority (CA) that is hosted on-premises, and they tend to have an expiry date in the distant future (i.e., they are long-lived), with a device- or application-specific certificate enrollment process that happens infrequently.

An emerging scenario for private certificates is in DevOps environments, to protect containers, microservices, VMs, and service accounts. These emerging use cases, however, have drastically different requirements. As a result, organizations with an on-premises private CA quickly realize its limitations in supporting these emerging scenarios:

- These new use cases require short-lived certificates that are renewed frequently, which in turn require high availability and scalability from the CA.
- Existing private CA solutions fall short of this scale. For example, a company may have to issue 10 million certificates in one year vs. 10 thousand when dealing with IoT devices.
- Certificate enrollment processes do not support the modern APIs expected in modern applications and CI/CD toolchains, which results in longer time to market and delays in adoption and revenue.
- They are incompatible with cloud providers’ built-in CAs, resulting in customers losing a single point of management and monitoring for certificates.

Moreover, organizations that leapfrogged building on-premises infrastructure and were cloud native from day one (i.e., they never had to set up a private CA) have started seeing a need for private certificates. Existing on-premises private CAs are not compatible with cloud platforms and can’t support the scale associated with cloud-native businesses and hyperscalers. The only option these organizations have is to build their own private CA, and they quickly discover the high cost of setting up and running one (infrastructure, licensing, and operations costs), in addition to the high skill set required to successfully manage a private CA, which is not tied to their core business and only lengthens their go-to-market timeline. Often, it’s easier and more cost-effective to offload this task to a trusted provider, ideally a cloud provider.

Certificate Authority Service is designed to meet both traditional and emerging needs. With CAS, you can set up a private CA in minutes, rather than the months it would take to deploy a traditional private CA.

Create private CAs in minutes

CAS also lets you leverage simple, descriptive RESTful APIs to fully automate the acquisition and management of certificates without being a PKI expert. You can use these APIs for integration with your existing tooling and CI/CD channels.
Moreover, you can manage, automate, and integrate private CAs in whichever way is most convenient for you: via APIs, the gcloud command line, or the Cloud Console. CAS is an enterprise-ready service that enables you to:

- Store the private CA keys in a Cloud HSM that is FIPS 140-2 Level 3 validated and available in several regions across the Americas, Europe, and Asia Pacific. You can select a subordinate CA’s region independent of its root CA’s region.
- Obtain logs and gain visibility into who did what, when, and where with Cloud Audit Logs.
- Define granular access controls and virtual security perimeters with Cloud IAM and VPC Service Controls.
- Scale with confidence, knowing that the service supports up to 25 queries per second (QPS) per instance (in DevOps mode), which means it can issue millions of certificates. And it comes with an enterprise-grade SLA (at GA).
- Have assurance that CA private keys are protected by FIPS 140-2 Level 3 validated HSMs.
- Bring your own root: chain CAs up to an existing root running on-premises or anywhere else outside Google Cloud.

Integration with the certificate management ecosystem

We also understand that the most important requirement for deploying a new service at an enterprise level is compatibility, with ease of use a close second. After all, security measures that are hard to use end up going unused. We worked with leading partners in the certificate lifecycle management (CLM) space to make sure CAS is integrated with their solutions:

Venafi is a leading vendor in machine identity protection, with more than 400 worldwide customers and 20-plus years of cybersecurity research and innovation. Venafi’s role has been cited in industry research like Gartner’s 2020 Hype Cycle for IAM and Forrester’s 2020 Now Tech report on Zero Trust Solution Providers.
For more information on their integration with CAS, see their blog.

AppViewX CERT+ is a certificate management suite that lets you automate key and certificate lifecycles across multi-cloud environments. It also protects keys, delivers compliance, allows for role-based self-servicing of PKI, and enables hyperscalability and cryptographic agility. For more information on their integration with CAS, see their blog.

Getting started with CAS

With CAS, you can offload time-consuming tasks associated with operating a private CA, like hardware provisioning, infrastructure security, software deployment, high-availability configuration, disaster recovery, backups, and more, to the cloud. This will lower your total cost of ownership (TCO) and shorten time to market for your products. CAS also simplifies licensing with pay-as-you-go pricing and zero capital expenditures (CapEx): you pay only for what you use.

During beta availability, you can use CAS at no charge; visit the sign-up form to register. Pricing will go into effect once the product is generally available. For more information, check out our product videos and the CAS home page. If you have any questions, just email us at cas-support@google.com.
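To illustrate the kind of CI/CD automation those RESTful APIs enable, here is a minimal sketch that builds a short-lived certificate request and computes a renewal time. The field names (`pemCsr`, `lifetime`) and the two-thirds renewal rule are assumptions for illustration, not the official CAS schema; consult the CAS API reference for the real request format.

```python
# Hypothetical sketch of requesting a short-lived certificate from a
# private CA over REST. Field names are assumptions for illustration;
# check the CAS API reference for the real schema before using.
import json

def certificate_request(pem_csr: str, lifetime_seconds: int) -> dict:
    """Build a request body asking the CA to sign a CSR for a short lifetime."""
    return {
        "pemCsr": pem_csr,                  # assumed field name
        "lifetime": f"{lifetime_seconds}s", # e.g. "86400s" for one day
    }

def renewal_time(issued_at: float, lifetime_seconds: int, fraction: float = 2 / 3) -> float:
    """Renew short-lived certs well before expiry (a common rule of thumb)."""
    return issued_at + lifetime_seconds * fraction

# A 24-hour certificate for an ephemeral DevOps workload:
body = certificate_request("-----BEGIN CERTIFICATE REQUEST-----\n...", 24 * 3600)
payload = json.dumps(body)
```

The short lifetime is what makes the CA's availability and QPS limits matter: every workload comes back for a fresh certificate on the order of once a day rather than once a year.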
Source: Google Cloud Platform

A better, safer normal: Helping you modernize security in the cloud or in place

During the first few months of the COVID-19 pandemic, many organizations expected a slowdown in their digital transformation efforts. Instead, we saw many enterprises accelerate their use of cloud-based services to help them manage and address emerging priorities in the new normal, which includes a distributed workforce and new digital strategies. Today, to kick off our Google Cloud Next ‘20: OnAir Security Week, we’re sharing the latest on some unique and powerful capabilities to help you simplify security operations in your organization and make the new normal a better, safer normal.

Advanced security tools to support compliance and data confidentiality

More and more companies, especially those in regulated industries, want to adopt the latest cloud technologies, but they often face barriers due to strict data privacy or compliance requirements. Last month, we introduced two new capabilities that help you securely take advantage of all the cloud has to offer while also simplifying security operations.

Assured Workloads for Government, now in private beta, lets those in regulated industries like the public sector configure and deploy sensitive workloads according to their security and compliance requirements, in just a few clicks. Unlike traditional “government clouds,” Assured Workloads removes the tradeoff between meeting compliance requirements and having the latest capabilities in your cloud.

Configuring a new workload in Assured Workloads for Government

Confidential VMs, the first product in our Confidential Computing portfolio, helps you protect your sensitive data in the cloud. We already encrypt data at rest and in transit, but customer data must traditionally be decrypted for processing. Confidential Computing is a breakthrough technology that encrypts data in use, while it’s being processed. Confidential VMs take this technology to the next level by offering memory encryption so that you can further isolate your workloads in the cloud.
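At the API level, enabling a Confidential VM amounts to one extra flag on an otherwise ordinary instance request, plus a supported machine type (N2D, which uses AMD EPYC processors). The sketch below shows that shape; the project, image, and instance names are hypothetical placeholders.

```python
# Sketch of a Compute Engine instances.insert request body with
# Confidential VM enabled. Confidential VMs require an N2D machine
# type; the project, image, and names here are hypothetical.

def confidential_vm_body(name: str, zone: str) -> dict:
    """Request body for a Confidential VM (memory encrypted in use)."""
    return {
        "name": name,
        "machineType": f"zones/{zone}/machineTypes/n2d-standard-2",
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": True,  # turns on memory encryption
        },
        "disks": [{
            "boot": True,
            "initializeParams": {
                # Any supported image works for "lift and shift" workloads.
                "sourceImage": "projects/my-project/global/images/my-app-image",
            },
        }],
    }

body = confidential_vm_body("confidential-vm-1", "us-central1-a")
```

Because the change is a single config block rather than an application rewrite, existing workloads can be moved onto Confidential VMs without code changes.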
With the beta launch of Confidential VMs, we’re the first major cloud provider to offer this level of security and isolation while giving you a simple, easy-to-use option for your newly built and “lift and shift” applications.

Confidential VMs demo

A cloud-based, managed CA for the DevOps and IoT world

Recently, we’ve seen a surge of interest in using public key infrastructure (PKI) in DevOps and IoT device management. But a fundamental problem with PKI remains: it’s hard to set up Certificate Authorities (CAs), and even harder to do it reliably at scale. These issues are front and center for these growing use cases. To help, we’re announcing the beta availability of Google Cloud’s new Certificate Authority Service (CAS), a highly scalable and available service that simplifies and automates the management and deployment of private CAs while meeting the needs of modern developers and applications. With CAS, you can offload to the cloud time-consuming tasks associated with operating a private CA, like hardware provisioning, infrastructure security, software deployment, high-availability configuration, disaster recovery, backups, and more, allowing you to stand up a private CA in minutes rather than the months it might normally take to deploy one.

A single pane of glass into your security posture

Protecting your users, data, and applications while staying compliant can be challenging. Add in the demands of managing a remote workforce, and the complexity increases. With Cloud Security Command Center (SCC), our native posture management platform, you can prevent and detect abuse of your cloud resources, centralize security findings from Google Cloud services and partner products, and detect common misconfigurations, all in one easy-to-use platform. We recently announced a Premium tier for Security Command Center to provide even more tools to protect your cloud resources.
It adds new capabilities that let you:

- Spot threats using Google intelligence for events in Google Cloud Platform (GCP) logs and containers
- Surface large sets of misconfigurations
- Perform automated compliance scanning and reporting

Reporting on CIS Benchmarks in the SCC Compliance Dashboard

These features help you understand your risks on Google Cloud, verify that you’ve configured your resources properly and safely, and document it for anyone who asks.

Collaborating with partners on orchestration and endpoints

As part of our mission to enable operational security and simplicity, we’re committed to working with our security partners to help you on this journey. This week we’re announcing new integrations and go-to-market activities with Palo Alto Networks on their XSOAR Marketplace. Additionally, we’re announcing an expanded partnership with Tanium, which is integrating and offering Chronicle with their endpoint security and management solution. This integrated solution, sold by Tanium, links endpoint data from Tanium with other telemetry, such as DNS and proxy data, in Chronicle to provide a broader, clearer picture of threats in the enterprise. Chronicle retains Tanium telemetry for a year by default, improving your ability to investigate incidents over long periods of time.

Simplifying protection against DDoS and web attacks

We’re simplifying how you can use Google Cloud Armor to help protect your websites and applications from exploit attempts as well as Distributed Denial of Service (DDoS) attacks. With Cloud Armor Managed Protection Plus (in beta), you get access to DDoS and WAF services, curated rule sets, and other services for a predictable monthly price. You can learn more about our Cloud Armor announcements here.

Automating more secure deployments with blueprints

While these new products provide real benefits, you also need to configure cloud deployments to meet your own unique security and compliance requirements.
To help, we’re publishing a comprehensive new Google Cloud security foundations blueprint that provides curated, opinionated guidance and accompanying automation to help you build a secure starting point for Google Cloud deployments. It’s launching as the cornerstone of our Google Cloud security best practices resource center, a new web destination that delivers world-class security expertise from Google and our partners in the form of security blueprints, guides, whitepapers, and more.

A better, safer normal together

Defending your enterprise requires continuous evolution, and the events of 2020 so far have made that even clearer. With compliance automation, simpler security operations, and better protection for employees and customers, we’re committed to helping you adjust and evolve to make today’s new normal a safer normal. Be sure to check out our security sessions throughout this week, where we’ll dig into the new capabilities we’re introducing, as well as some we’ve already launched in 2020. You can also find more information at our privacy and security home page.
Source: Google Cloud Platform

Session guide: Get the most out of Next OnAir Security Week

With new threats, constantly evolving regulations, and the need to secure increasingly decentralized environments, keeping your organization safe and compliant keeps getting more complex. This year, COVID-19 has impacted all of us in multiple ways, including in how we think about and manage security. Google Cloud Next ‘20: OnAir has a range of sessions on ways we can help secure your organization in today’s new normal. If you haven’t registered yet, no problem. Just head to the Next OnAir website and you’re ready to go.

Security Week starts with our welcome session, where you’ll get an overview of what’s to come. Then, be sure to check out our solution keynote, A Better, Safer Normal, where we’ll look at how you can operate securely and simply in this new security reality we’re facing. This year features over 20 security sessions, all going live at 9 AM PT on Tuesday, August 4, so there’s sure to be something for every security professional.

Data Protection and Customer Controls

These sessions provide best practices and tips to help you ensure the confidentiality and privacy of your sensitive data. While they focus on encryption and key management options, we’ll also present options for data discovery, classification, and anonymization.

- Confidential Computing: The Next Frontier in Data Protection: Learn about Confidential Computing and how it’s enabling the shift to private, encrypted services where organizations can be confident that their data stays private and isn’t exposed to cloud providers or their own insiders.
- Cloud HSM Deep Dive and Best Practices: Cloud HSM has enjoyed fast growth since its 2018 debut. See what’s coming next in this deep dive.
- The New World of Controlling Your Data in the Cloud: Cloud External Key Manager and Key Access Justifications: Need to retain tight control over access to your data?
Learn more about these products and how they can help you solve your security and compliance challenges.

- Managing Sensitive Data in Hybrid Environments: Learn how Google Cloud Data Loss Prevention can help you manage data, focusing on support for inspection of content in hybrid environments.

Threat Prevention, Detection, and Response

Preventing threats, whether you’re operating in the cloud, on-premises, or in a hybrid model, is a huge part of almost any security practitioner’s job. Check out these sessions to improve your organization’s threat preparedness.

- Take Control of Security with Cloud Security Command Center: See how Cloud SCC can help you prevent, detect, and respond to threats in your Google Cloud environment.
- How You Can Protect Your Websites and Applications with Google Cloud Armor: Learn what the latest Cloud Armor announcements mean for you, complete with real-life examples and case studies.
- Protect Your Customers from Phishing, Malware, Fraud, and Hijacking with Google Cloud User Protection Suite, Powered by Google Safe Browsing and reCAPTCHA: Keeping your users safe is job one. This session shows how you can reduce online fraud with reCAPTCHA Enterprise and the Web Risk API.
- Scale Up Your Security Telemetry: Take an in-depth look at how to use Chronicle, our cloud-native security analytics system built on core Google infrastructure and fed by a massive threat database, to be better prepared for threats.

Identity & Access Management

Identity and access management is one of the key control points in the cloud.
These sessions detail the ways you can make sure that the right users have access to the right resources to get their jobs done.

- Getting Started With BeyondCorp: A Deeper Look into IAP: Check out this session to understand why your company may need to adopt a new, zero-trust security model and how to get started.
- Using Policy Intelligence to Achieve Least Privilege Access: See how to improve visibility into who has access to what, quickly resolve access control issues, and discover best practices for putting tight controls around your cloud resources to increase security and reduce risk.
- Uber Presents: Minimizing Permissions Using IAM Recommender: Learn how Uber integrated IAM Recommender with their security hardening pipeline to tighten IAM policies, then look into best practices for improving your IAM without increasing your workload.
- Advanced IAM: Hacks, Tips, and Tricks for Policy Management: Take a deep dive into access policy management in IAM, with advanced topics like avoiding policy change conflicts through concurrency control, policy attachment point discovery, client library usage, and more.
- Deep-Dive: Google Cloud’s Managed Microsoft AD and Applications: Get back on the board for another deep dive, this time into the Google Cloud Managed Service for Microsoft Active Directory.
- Authentication for Anthos Clusters: Dry off and take a walk through how to authenticate to Anthos clusters, including how to integrate with your identity providers using protocols such as OIDC and LDAP, then check out some hybrid-cloud scenarios to see how to leverage existing authentication tools with Anthos.

Collaboration

Our productivity and collaboration tools help teams work better together.
These sessions will provide you with a blueprint for how G Suite secures your data, and the controls you have to do even more.

- Keep Hackers Out: A Use-case-led Approach to G Suite Security: Take a deep dive into the security use cases that customers can solve with G Suite: What do I do if I detect a phishing incident? If a device looks compromised, how do I go about remediating? How can I find out who is sharing what outside the organization?
- No User Left Behind: Empowering Collaboration with Security: Enable seamless security to ensure that G Suite users can access data safely and securely while preserving individual trust and privacy.

Compliance

Security and compliance are among the first topics that companies face when considering moving workloads to public clouds. These sessions provide the best practices and technology you need to help meet your compliance obligations.

- Master Security and Compliance in the Public Cloud: Discuss how customers and Google work together to deploy workloads that must meet regulatory and compliance requirements.
- Building Trust Through Customer Collaboration: Learn how our Cloud Trust team engages with customers to ensure you can confidently adopt our products, whether through pooled audits, contractual commitments, or even a walkthrough of one of our global data centers.
- Security, Privacy, and Compliance Solutions for Healthcare: Take a detailed look at how Google Cloud can help healthcare organizations address the many challenges they’re facing and improve the security and governance of their systems while efficiently migrating to the cloud.
- Mission Possible: Moving Your Most Sensitive Data to GCP: With Assured Workloads, you can move your most sensitive data to Google Cloud while configuring data location controls, administrative access controls, and encryption parameters.
Check out this session to learn how.

When you’re done with your sessions, be sure to check out the Security Week recap, and keep visiting the blog for more security content.
Source: Google Cloud Platform

Google Cloud AI and Harvard Global Health Institute Collaborate on new COVID-19 forecasting model

The COVID-19 pandemic has had a tremendous impact on the world, from changing the way we live to driving extraordinary acts of human compassion. Nowhere have both the disruption and perseverance been more evident than among the front-line workers who continue to respond tirelessly.

In partnership with the Harvard Global Health Institute, Google Cloud is releasing the COVID-19 Public Forecasts to serve as an additional resource for first responders in healthcare, the public sector, and other impacted organizations preparing for what lies ahead. These forecasts are available for free and provide a projection of COVID-19 cases, deaths, and other metrics over the next 14 days for US counties and states. The COVID-19 Public Forecasts are trained on public data, such as data from Johns Hopkins University, Descartes Labs, and the United States Census Bureau, and will continue to be updated with guidance from the Harvard Global Health Institute.

“The COVID-19 Public Forecasts model produces forecasts at the critical jurisdiction of public health action—the county. Coupled with the work of the Harvard Global Health Institute’s county-level COVID-19 Suppression Metrics, the COVID-19 Public Forecast Model will allow for targeted testing and public health interventions on a county-by-county basis. By providing accurate, timely predictions of cases, infections, hospitalizations, and deaths to both policy makers and the general public, it will enhance our ability to understand and respond to the rapidly evolving COVID-19 pandemic,” said Dr. Thomas Tsai, surgeon and health policy researcher in the Department of Surgery at Brigham and Women’s Hospital and in the Department of Health Policy and Management at Harvard T.H.
Chan School of Public Health.

Alongside other data sources, the COVID-19 Public Forecasts can be a helpful resource for those at the front lines of responding to this pandemic who are seeking to better understand and prepare for the progression of COVID-19 in their region. For example, healthcare providers can incorporate the forecasted number of COVID-19 cases as one data point in resource planning for PPE, staffing, and scheduling. Similarly, state and county health departments can use the forecast of infections over the next two weeks to help inform their testing strategy and identify areas at risk of new outbreaks.

“As healthcare providers, the ability to ever more accurately predict the evolution of this pandemic is vital to our ability to prepare for, and manage, the COVID-19 crisis,” said Dr. Edmund Jackson, Chief Data Officer at HCA Healthcare. “Having Google bring their unique compute and AI prowess to better answering this question is enormously helpful. We are excited to be part of this work.”

To generate the COVID-19 Public Forecasts, Google Cloud researchers developed a novel time-series machine learning approach that combines AI with a robust epidemiological foundation. By design, this new model is trained on public data and leverages an architecture that allows researchers to dive into the different relationships the model has learned, to better interpret why it makes certain forecasts. We hope that these measures not only help the public understand how the model works, but also enable further innovation in infectious disease modeling.

The COVID-19 Public Forecasts are free to query in BigQuery or to download as CSVs (state forecast CSV and county forecast CSV). Additionally, they are available through our Data Studio dashboard and as part of the National Response Portal. We are also publishing a full explanation of the novel methodology and the datasets used in our White Paper and User Guide.
As with any forecasts, the COVID-19 Public Forecasts have limitations that should be carefully considered before being used to inform decisions. In order to download or use the forecasts, users must agree to the Google Terms of Service.

Google is committed to a core set of AI principles. In developing the COVID-19 Public Forecasts, we paid close attention to the disproportionate impact the disease has had and how that would affect our adherence to these principles, particularly principle #2: “Avoid creating or reinforcing unfair bias.” CDC research has shown that communities of color in the United States have been the hardest hit by COVID-19, with disproportionately high rates of cases and deaths. Our team has conducted a comprehensive fairness analysis to investigate how that disproportionate impact affects the accuracy of our forecasts and how they should be interpreted. We encourage all users who intend to make decisions based in part on the COVID-19 Public Forecasts to closely review the Fairness Analysis. Additionally, we call for an open dialog among public health officials and the AI community on how to address these inequities and measure how their impact may appear in various AI models.

We are excited to focus Google Cloud’s commitment to innovation in AI on helping those on the front lines of the COVID-19 response. Learn more about the COVID-19 Public Forecasts from our User Guide and White Paper, or get started with the data now in BigQuery, the Data Studio dashboard, the National Response Portal, or via the CSV data (state, county).
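For readers who want to start in BigQuery, a query over the public forecasts might look like the sketch below. The dataset, table, and column names here are assumptions about the public dataset's schema, not values taken from this post; verify them against the dataset schema in the BigQuery console before relying on them.

```sql
-- Hedged sketch: table and column names are assumptions; check the
-- covid19_public_forecasts dataset schema in BigQuery before using.
SELECT
  county_name,
  state_name,
  prediction_date,
  new_confirmed,
  new_deaths
FROM
  `bigquery-public-data.covid19_public_forecasts.county_14d`
WHERE
  state_name = 'New York'
ORDER BY
  prediction_date
LIMIT 14;
```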
Source: Google Cloud Platform

Getting to know Looker: How LookML can simplify BI workflows

Business intelligence workflows at any organization can help the business make better decisions, like where to expand your company and how to most effectively deploy your resources. My work as a data analyst, open data advocate, and former lead of the Google Cloud Public Datasets Program has given me a broad view into how data teams develop their business intelligence workflows. I’ve been fortunate to work with data analysis teams across numerous industries, including retail, weather, financial services, and more. Throughout this, I’ve seen the common challenges that teams run into in their business intelligence workflows.

You might think that teams’ challenges stem from the tool they pick, but every business intelligence tool has its own advantages. Some excel at data visualization, others are great for sharing dashboards, and some BI tools do well with data preparation. Most BI tools connect easily to at least some data warehouses, and have the ability to visualize that data. But every BI tool also has its drawbacks. And because every team across an enterprise has slightly different requirements for BI, they often choose different tools, creating a segmentation problem within a company. The most common form of this I’ve seen is that metrics are defined differently within each tool and there is no centralized data governance, which leads to unnecessarily duplicated workflows across the company.

Looker, now part of Google Cloud, can help address these “in-between product” issues. LookML, Looker’s powerful semantic modeling layer, gives teams the ability to easily create a standardized data governance structure and empowers users across the enterprise to undertake their own analysis while trusting that it is all built on the same single source of truth. (You can read more about why Looker developed LookML in this blog post.) In this post, though, we’ll focus on five groups who can benefit from LookML and see how it can simplify their BI workflows.
For each group, you’ll see how LookML can help, with a snippet of LookML code as an example. Click through to the “Here’s an example” link to the GitHub repositories to see the full LookML file if you’d like more detail.

Data engineers and modelers

Who are you: You are the group that most obviously benefits from LookML. Your title is probably “business intelligence analyst” or “data engineer.” Your team builds the underlying infrastructure that makes data-driven decision-making possible, standardizes the data that feeds key metrics, and helps measure progress toward KPIs.

How LookML helps: LookML is all about reusability. It brings to data modeling many of the tools and methodologies used in software development, such as collaborative development with Git integration, object definitions, and inheritance. It allows you to define a dimension or measure once and build on it, instead of having to repeat this effort. This enables you to standardize metrics and the data that define them across the entire enterprise in a scalable manner that saves you time. By converting raw data into meaningful metrics using LookML, you empower BI users across the entire enterprise, from accounting to marketing, to easily get started building their dashboards with the confidence that comes from knowing their metrics are properly defined and aggregated.

Here’s an example: One common challenge for businesses is being able to compare profit and margin across different business units because of differences in revenue sources, inventory costs, personnel expenses, and other factors. This often leaves decision-makers siloed from each other and requires you to make manual adjustments any time you want to do cross-silo comparisons. However, LookML can eliminate that challenge. The LookML snippet below joins the item cost from the inventory_items table with the sale price from the order_items table, so that gross_margin can be defined as sale_price - inventory_items.cost.
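A minimal sketch of what that model might look like is below. Only the table and field names come from the text; the schema name, the join key (inventory_item_id), and the surrounding view structure are illustrative assumptions, not the full file from the GitHub repository.

```lookml
# Illustrative sketch; join key and schema name are assumptions.
explore: order_items {
  join: inventory_items {
    type: left_outer
    sql_on: ${order_items.inventory_item_id} = ${inventory_items.id} ;;
    relationship: many_to_one
  }
}

view: order_items {
  sql_table_name: ecommerce.order_items ;;

  dimension: sale_price {
    type: number
    sql: ${TABLE}.sale_price ;;
  }

  # Defined once, then reused by any dimension or measure that needs it.
  dimension: gross_margin {
    type: number
    value_format_name: usd
    sql: ${sale_price} - ${inventory_items.cost} ;;
  }

  measure: total_gross_margin {
    type: sum
    sql: ${gross_margin} ;;
  }
}

view: inventory_items {
  sql_table_name: ecommerce.inventory_items ;;

  dimension: id {
    primary_key: yes
    type: number
    sql: ${TABLE}.id ;;
  }

  dimension: cost {
    type: number
    sql: ${TABLE}.cost ;;
  }
}
```

Because ${gross_margin} resolves to its SQL definition wherever it is referenced, downstream fields can build on it without repeating the arithmetic.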
Once that’s in place, you can see how easily gross_margin is referenced repeatedly throughout other dimension definitions. You can call ${gross_margin} without having to either rewrite the SQL each time or rerun the same SQL statement in several different places.

Why this matters: Centralizing the definition of a metric decreases the likelihood of introducing human error that can potentially break data pipelines, and reduces the demand on the SQL engine by not having to run the same SQL statement every time you want to reference gross margin. Most importantly, it ensures there’s a single, standard definition of gross margin across the business under the hood, hiding that complexity from downstream users.

Data analysts

Who are you: You are the people that companies rely on to verify that an analysis is accurate. Your title is something along the lines of “data analyst” or “business analyst,” and you know the intricacies of the data that lie behind these analyses. Your colleagues trust and rely on you to make sure they are interpreting the data correctly.

How LookML helps: LookML empowers data professionals to empower others. It allows you to document your extensive knowledge of the data, pre-define common aggregations to prevent misuse, and provide your colleagues with a trusted layer of data definitions that allows them to focus on their individual analysis. LookML can reduce the number of chats, emails, and calls that you get, since it gives users a central repository of information about the data where they can get answers directly.

Here’s an example: Data analysts are regularly asked to develop a one-off analysis that answers a question that either doesn’t come up very often, or needs to be answered in a certain way. This ends up consuming time that could be spent on higher-impact projects. LookML helps analysts expose key metrics, so users can do their own analysis without having to worry about whether they are calculated consistently.
This means that everyone across the business gets a single source of truth for their data. Let’s look at an example from the healthcare industry. Customer health score has many different definitions. It’s easy for two individuals to define it differently, especially when various tools have definitions that aren’t always clear about how they calculate it. However, LookML enables you to set this definition for business users so they can dig into the trends on their own using a reliable source. It also allows developers to add comments and descriptions to help users understand how a specific metric is calculated and what assumptions were made. These custom metric descriptions propagate into Looker’s front-end Explore environment as tooltips (see image below). Plus, Looker’s new Data Dictionary (available for free in the Looker Marketplace) hosts all of the descriptions and SQL definitions for each metric (see image below).

Why this matters: LookML’s ability to help you communicate and collaborate with end users helps business users and decision-makers dig into the data further without having to reach out to a data analyst. Looker’s Explore capabilities empower those users to dig into the relationships in your data on their own, knowing they can trust the underlying metrics and calculations. This means that you can spend less of your time in meetings, emails, and chats explaining the analysis and its underlying assumptions, and more time analyzing data for more impactful projects.

IT security

Who are you: You make sure that those who are supposed to access data can, and those who aren’t supposed to cannot. Your title might be “IT specialist” or “data security specialist.” You help companies comply with regulatory requirements such as HIPAA, GDPR, and CCPA, and protect customer privacy by ensuring that the company’s data isn’t exposed to those who don’t have a need for access.

How LookML helps: LookML is built for today’s complex data.
It allows you to provide access to those who need it and deny access to those who do not, without having to create and manage separate dimensions, dashboards, or instances of Looker. LookML’s implementation of the open-source template language Liquid lets you tap into Looker’s user permissions to programmatically determine a user’s access level. This means you can dynamically restrict a user’s ability to view a dimension or measure based on the user’s access permissions, managed directly in LookML. These controls enable you to set row-based access filters, column hiding determined by user attributes and groups within Looker, and data masking, which is demonstrated in the image below. This means that data analysts only have to maintain one dashboard, instead of several, while security professionals like you have all the tools you need to manage permissions.

Here’s an example: Currently, security teams have to manage user permissions across multiple data warehouses, each of which has its own permissions and definitions. Not only is this cumbersome, but it adds an unnecessary layer of complication when trying to get the right permissions in each individual product so that you provide only the right data to users. LookML’s support for Liquid SQL helps simplify this entire process. It directly connects to Looker’s user permissions, where you can set attributes and roles for each individual user. This example looks at a retail banking use case, where an individual may need to see a customer’s full credit card number. Instead of having to manage column-level permissions across multiple warehouses, LookML’s Liquid SQL enables Looker to dynamically determine whether a user should have access to a given column based on their user attributes.

Why this matters: Managing all user access permissions in a single place saves you time and effort while reducing the risk of a mistake leading to inappropriate data access.
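The dynamic masking in this example might look like the following LookML sketch. The view name, column name, and the user attribute can_view_financial_data are illustrative assumptions, not taken from the post.

```lookml
# Illustrative sketch: shows the full credit card number only when the
# viewer's (hypothetical) user attribute can_view_financial_data is "yes";
# everyone else sees a masked value with the last four digits.
view: payments {
  dimension: credit_card_number {
    type: string
    sql:
      {% if _user_attributes['can_view_financial_data'] == 'yes' %}
        ${TABLE}.credit_card_number
      {% else %}
        CONCAT('XXXX-XXXX-XXXX-', RIGHT(${TABLE}.credit_card_number, 4))
      {% endif %} ;;
  }
}
```

Because the Liquid condition is evaluated per user at query time, a single dimension (and a single dashboard) serves both audiences.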
LookML’s Liquid templating means you can manage permissions right alongside the metric definition, so you know exactly what you are allowing appropriate users to access. Plus, you no longer need to determine which permissions need to be granted in which data warehouse using what syntax.

Executive/CXO

Who are you: Your job is to make sure that the enterprise as a whole can use data to be more effective. Your title could be “chief data officer” or “SVP of analytics.” Your job is to ensure the company runs as smoothly as possible. This means making sure that metrics are measured and reported uniformly across the business.

How LookML helps: Data warehousing technology and needs have evolved over the years. These repeated paradigm shifts have left many enterprises with data stored in different places, in different structures, and with different definitions of key performance indicators. This can sometimes make it hard or impossible to compare performance between different business units. LookML lets you standardize these definitions across the business, so you can more easily analyze the success of the business. This means less duplication of work and more streamlined metrics.

Here’s an example: You are constantly running between meetings, and don’t always have time to dig into the data as deeply as you might prefer. This means you have to trust that the answers provided for you are comparable between different business units, so you can prioritize resources and allocate budget as effectively as possible. Let’s look at how LookML can simplify life for software-as-a-service (SaaS) companies by calculating dollar-based net retention, instead of requiring you to manipulate data in a spreadsheet and slice it in different ways. The importance of this metric, often presented at board meetings and during investor meetings, means there is incredible pressure to make sure it is accurate—every time.
LookML allows you to define the metric once and never have to worry about it again, eliminating redundant effort between your teams. Plus, Looker can send you the results of the analysis on a scheduled basis in the format you need, whether it’s an emailed PDF for a quick review, a Google Sheet where you can dig in deeper, or an automatic update to the slide deck for your next presentation.

Why this matters: All of this automation saves you time and hassle, making it easier for you to stay on top of your business. You never have to worry about data freshness, no matter where you are reviewing the data. More importantly, it means that your teams no longer have to duplicate the same work across the entire business. This saves you time and ensures that you are receiving the most streamlined, consistent updates.

Operations

Who are you: Your job is to oversee the day-to-day operations of the business. Your title might be “operations manager” or “customer support director.” You care about understanding the high-level trends, but need to be able to dive down into the details of the data at a moment’s notice.

How LookML helps: LookML is built for today’s complex data. The business segments you oversee are complex, and the data they produce reflects that. While high-level trends are valuable, a single abstraction or aggregation of the data will never fully communicate the intricacies you need to know to succeed. LookML’s ability to define how you want to drill down into your data gives you access to the details you need and can help you take action from the dashboard. When combined with Looker’s architecture, it gives access to the full, row-level data, not just extracts. You get all the tools you need to understand anomalies and important trends in your data.

Here’s an example: You need to know the in-depth details about every part of the business you oversee, but can’t spend all day meeting with your teams to get them.
While a dashboard with aggregated trends helps you know where to focus your energy, it’ll never give you enough detail to take meaningful action to correct potential problems and communicate with your teams or customers. The images below demonstrate how LookML can be used to help a retail operations or customer support team lead. The first two show how LookML can be used to define drill_fields on a dashboard. This allows a user to click on an aggregated metric, such as number of orders, to see a table with the specified fields of each data row that makes up the aggregation. You can also drill down to a separate dashboard that has more detail for a given metric.

The final image shows how an action can be defined in LookML as well. This makes Looker’s drill-down capabilities even more powerful. An action can be defined on a given field to let you act on the data you’re examining without having to leave Looker. In this case, the action defined can send a promotional email to a customer when the data viewer clicks on the user’s email address. You can define any number of actions as appropriate for your data.

Why this matters: These capabilities let you see exactly the information you need, find more detail when you need it, and take action immediately. This removes dependencies on others and frees up your team’s time and energy to focus on the most important projects. It also lets you engage and take action directly from the insights you uncover in your data, allowing you to operate more efficiently.

If you’re interested in learning more about how LookML can support your BI workflows, check out the LookML documentation or join our deep-dive Looker session during Google Cloud Next ‘20: OnAir. If you’re ready to get started taking advantage of LookML’s unique capabilities, sign up for a free trial.
Source: Google Cloud Platform

Preventing lateral movement in Google Compute Engine

When organizations move to Google Cloud, a question we often hear from security operations teams is, “How can we prevent compromises in our deployments and better defend against lateral movement?” With lateral movement, an attacker can move within a system after an initial compromise and gain access to even more sensitive data. To prevent lateral movement and keep your organization secure, we recommend taking a “defense in depth” approach, which helps protect users and data with multiple layers of security that build upon and reinforce one another. In the event that one layer is circumvented or compromised, many more are in place to prevent potential attackers from accomplishing their objectives.

To implement a defense in depth approach for Compute Engine, there are a few things you should do:

- Isolate your production resources from the internet
- Disable the use of default service accounts
- Limit access to service account credentials
- Use OS Login to manage access to your VMs
- Apply the principle of least privilege
- Collect logs and monitor your system

Let’s explore each of these recommended actions in more depth.

Isolate your production resources from the internet

The most effective way to ensure that Compute Engine instances don’t get compromised is to minimize their exposure to the public internet. Compute Engine VMs can have internal or external IP addresses. To minimize your attack surface, you should use Cloud NAT to assign your VMs only internal IP addresses and use Identity-Aware Proxy (IAP) to allow curated access from the internet.

[Figure: Using IAP to protect a VM]

When you do have to directly expose a VM with an external IP address, ensure that your firewall rules restrict network access to only the ports and IP addresses that your application needs.

Don’t:
- Assign external IP addresses to your VMs.
- Configure permissive firewall rules that allow anyone on the internet to connect to your VMs.

Do:
- Assign private IP addresses to your VMs; don’t give them public IP addresses at all.
- Use IAP TCP forwarding to connect to your VMs for administration, and Cloud NAT to allow your VMs to access the internet. IAP works by verifying a user’s identity and the context of the request to determine whether a user should be allowed to access an application or a VM. Follow these instructions to set up IAP for your Compute Engine instances.
- Use Organization Policies to define allowed external IPs for VM instances, so new VM instances don’t get created or configured with an external IP address.
- Use Security Health Analytics to detect VMs that have external IP addresses and firewall rules that are too permissive. (Note: Some Security Health Analytics features are available only in the Premium edition of Security Command Center.) Identify and resolve the following security findings:
  - PUBLIC_IP_ADDRESS: Indicates that a Compute Engine instance is assigned an external IP address.
  - OPEN_FIREWALL: Indicates that a firewall rule is configured to allow access from any IP address or on any port.
- Use VPC Service Controls to configure security perimeters that isolate Google Cloud resources and prevent sensitive data exfiltration—even by authorized clients.
- Disable legacy Compute Engine instance metadata APIs and migrate your application to the v1 metadata API to help protect it against Server-Side Request Forgery (SSRF) attacks.

But what if, despite your preventative efforts, an attacker still manages to compromise a VM? Let’s look at some ways to minimize the impact and ensure that the attacker isn’t able to move laterally and gain access to more resources.

Disable the default service accounts

Configuring identity and API access is a critical step in creating a VM. This configuration includes specifying which service account should be used by applications running on the VM.
Google Cloud offers two approaches for granting privileges to your application: using the Compute Engine default service account or a user-created service account. The Compute Engine default service account is automatically created with the Google Cloud Console project and has an auto-generated name and email address. To simplify customer onboarding, it is automatically granted the Project Editor IAM role, which means that it has read and write access to almost all resources in the project (including the ability to impersonate other service accounts). Because these privileges are so permissive, we recommend using Organization Policies to disable the automatic granting of this role. Instead, remove any access grants given to the Compute Engine default service account, and use a new service account that’s granted only the permissions your VM needs.

Don’t:
- Use the Compute Engine default service account with the primitive Editor role.
- Use the same service account for different applications running on different VMs.

Do:
- Revoke the Editor role for the Compute Engine default service account and create a new service account for your VM that has only the needed permissions.
- Disable the default Compute Engine service account.
- Use Organization Policies to “Disable Automatic IAM Grants for Default Service Accounts” so the Compute Engine default service account is not granted the Editor role by default.
- Use Security Health Analytics to detect and resolve the following misconfiguration:
  - FULL_API_ACCESS: Indicates that a VM instance is configured to use the default service account with full access to all Google Cloud APIs.

Limit access to service account credentials

To limit an attacker’s ability to impersonate your service accounts, you should avoid creating service account keys whenever possible and protect access to existing keys.
Service account keys are intended to allow external, non-Google Cloud workloads to authenticate as the service account, but when you’re operating inside Google Cloud, it’s almost never necessary to use a service account key.

Don’t:
- Generate and download private keys for your service accounts. Your Compute Engine instance can automatically assume the identity of the configured service account.
- Grant the Service Account User or Service Account Token Creator roles at the project level. Instead, grant these roles on individual service accounts, when needed. The Service Account User role allows a user to start a long-running job on behalf of a service account. The Service Account Token Creator role allows a user to directly impersonate (or assert) the identity of that service account.

Do:
- Use Organization Policies to:
  - “Disable service account key creation,” ensuring that users can’t create and download user-managed private keys for service accounts.
  - “Disable service account creation” for projects that shouldn’t host service accounts.
- Use Upload Service Account key to authenticate on-prem services with Google Cloud using a key in a Hardware Security Module. This minimizes the possibility that the service account private key will be exposed.
- Use the --impersonate-service-account flag to execute gcloud commands as a service account instead of using an exported service account key. This can be configured per request, or for all gcloud commands by running “gcloud config set auth/impersonate_service_account {service_account_email}”.
- Use Security Health Analytics to detect broad use of the Service Account User role and existing service account keys.
Look for:
- SERVICE_ACCOUNT_KEY_USER_MANAGED: Indicates that a user-managed private key for a service account exists.
- SERVICE_ACCOUNT_KEY_NOT_ROTATED: Indicates that a user-managed private key for a service account has not been rotated in 90 days.
- OVER_PRIVILEGED_SERVICE_ACCOUNT_USER: Indicates that an IAM member has the Service Account User role at the project level, instead of for a specific service account.

Use OS Login to manage access to your VMs

One way for an attacker to escalate privileges and gain access to additional VMs is by looking for SSH keys stored in the project's metadata. Manually managing SSH keys used for VM access is time-consuming and risky. Instead, use OS Login to grant access to your VMs based on IAM identities. If OS Login is enabled, an attacker can't obtain access to new VMs by uploading SSH keys to the instance metadata, because those keys are ignored. To preserve backwards compatibility for workflows that rely on configuring their own users and SSH keys, OS Login is not enabled by default. You can learn more about managing access to your VM instances here.

Don't:
- Manually manage SSH keys in VM instance metadata.
- Allow project-wide SSH keys that can be used to connect to all VMs in a project.

Do:
- Use OS Login to manage access to your VMs based on IAM identities.
- Use Organization Policies to Require OS Login: all VM instances created in new projects will have OS Login enabled. On new and existing projects, this constraint prevents metadata updates that disable OS Login at the project or instance level.
- Use Security Health Analytics to ensure that OS Login is enabled and that project-wide SSH keys are not used.
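Turning on OS Login takes a single metadata key, after which SSH access is granted through IAM roles rather than distributed keys. A sketch with assumed names — my-project, the alice@example.com user, and the organization ID are placeholders:

```shell
# Enable OS Login for every VM in the project via project-wide metadata.
gcloud compute project-info add-metadata \
    --project=my-project \
    --metadata=enable-oslogin=TRUE

# Grant a user SSH access through IAM instead of uploading SSH keys.
# (Use roles/compute.osAdminLogin instead if the user needs sudo.)
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@example.com" \
    --role="roles/compute.osLogin"

# Enforce OS Login org-wide so metadata updates can't switch it back off.
gcloud resource-manager org-policies enable-enforce \
    compute.requireOsLogin \
    --organization=123456789012
```

Because access now flows through IAM, revoking the role binding immediately removes the user's SSH access to every VM in the project — no key cleanup required.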
Look for the following misconfiguration types:
- OS_LOGIN_DISABLED: Indicates that OS Login is disabled on a VM instance.
- COMPUTE_PROJECT_WIDE_SSH_KEYS_ALLOWED: Indicates that project-wide SSH keys are used, allowing login to all Compute Engine instances in a project.
- ADMIN_SERVICE_ACCOUNT: Indicates that a service account is configured with Owner access or administrator roles such as roles/compute.osAdminLogin, which may allow it to change OS Login settings or gain sudo access.

Apply the principle of least privilege

Ensuring that every VM instance, service account, or user can access only the information and resources necessary for legitimate business operations can be a challenge, especially if you have a lot of VMs. Google Cloud's IAM Recommender uses machine learning to help organizations right-size privilege management by monitoring a project's actual permission usage and recommending specific, constrained roles to replace overly permissive ones. It recommends permissions that are safe to remove, and also predicts future access needs.

Reviewing permissions in IAM Recommender

To learn more about how to review and apply such recommendations, check out our page on enforcing least privilege with recommendations. For more tips on how to apply the principle of least privilege, read this blog post.

Don't:
- Use the primitive IAM roles: Owner, Editor, Viewer.

Do:
- Turn on Policy Intelligence and use IAM Recommender to discover and remediate excessive permissions.
- Set up an Organization Policy for "Domain Restricted Sharing" to prevent members from outside of configured organizations from receiving IAM policy grants.
- Consider creating and using a custom role if the predefined IAM roles are broader than what you need.

Collect logs and monitor your system

Strong preventative controls need to be complemented with effective detection of malicious activity. Ensuring the right logs are collected is fundamental for security investigations and forensics.
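As a sketch of the least-privilege workflow above: list the IAM Recommender's findings, then replace a broad role with a custom one. The project name, the instanceViewerRestarter role ID, the chosen permissions, and the oncall user are all hypothetical examples:

```shell
# List IAM Recommender suggestions for right-sizing role grants.
gcloud recommender recommendations list \
    --project=my-project \
    --location=global \
    --recommender=google.iam.policy.Recommender

# Define a custom role carrying only the permissions the user actually uses
# (here: viewing and resetting instances, a made-up on-call profile).
gcloud iam roles create instanceViewerRestarter \
    --project=my-project \
    --title="Instance viewer/restarter" \
    --permissions=compute.instances.get,compute.instances.list,compute.instances.reset

# Bind the custom role in place of a broad primitive role like Editor.
gcloud projects add-iam-policy-binding my-project \
    --member="user:oncall@example.com" \
    --role="projects/my-project/roles/instanceViewerRestarter"
```

After the binding is in place, remember to remove the old, overly permissive grant — the recommender's output will keep flagging it until you do.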
Make sure to turn on Data Access logs, which are part of Cloud Audit Logs and can help you answer the question "Who did what, where, and when?" within your Google Cloud resources.

Reviewing recommendations in Security Health Analytics

Do:
- Turn on Data Access logs. Data Access audit logs record API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data. They are disabled by default because they can generate large volumes of log data.
- Enable VPC Service Controls in dry run mode. Even if you can't enforce service perimeters in your organization yet, dry run mode logs such requests, giving you visibility into cross-org or cross-project data movement.
- Use Security Command Center Premium capabilities, including Security Health Analytics, to ensure that logging is properly turned on across your organization: AUDIT_LOGGING_DISABLED indicates that Data Access logs are not enabled or that certain users are exempted; FLOW_LOGS_DISABLED indicates that a VPC subnetwork has flow logs disabled.
- Use Security Health Analytics to detect whether users outside of your organization, such as those using Gmail addresses, have been granted access to your projects: NON_ORG_IAM_MEMBER indicates that an IAM member isn't using organizational credentials.
- Use Event Threat Detection to automatically scan logs and be alerted to suspicious activity such as overly permissive IAM grants to users outside your organization, cryptomining, connections to known bad IP addresses or domains, and outgoing DDoS attacks.

Additional resources

We hope that these suggestions will help you defend against and detect lateral movement in your cloud environment.
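One way to enable Data Access logs is through the auditConfigs section of a project's IAM policy: download the policy, add the stanza, and write it back. A sketch, assuming a project named my-project and logging for all services — scope the service field more narrowly if you only need logs for specific APIs:

```shell
# Export the current IAM policy, which carries the auditConfigs section.
gcloud projects get-iam-policy my-project --format=json > policy.json

# Merge a stanza like the following into policy.json to enable Data Access
# logs (ADMIN_READ, DATA_READ, DATA_WRITE) for all services:
#
# "auditConfigs": [
#   {
#     "service": "allServices",
#     "auditLogConfigs": [
#       {"logType": "ADMIN_READ"},
#       {"logType": "DATA_READ"},
#       {"logType": "DATA_WRITE"}
#     ]
#   }
# ]

# Write the updated policy back to the project.
gcloud projects set-iam-policy my-project policy.json
```

Because DATA_READ in particular can generate a large volume of entries (as noted above), it's worth starting with a narrow service list and widening it as your log budget allows.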
To learn more about security best practices on Google Cloud, follow the Center for Internet Security’s CIS Google Cloud Platform Foundation Benchmark and check out further recommendations provided by Security Health Analytics. For more information on how to secure Kubernetes workloads, see the CIS Google Kubernetes Engine (GKE) Benchmarks. If you use Terraform to manage your deployments, set up Config Validator to detect security misconfigurations at pre-deployment time. Finally, to better understand how we protect our infrastructure and to learn more about our security solutions, check out our security page.
Source: Google Cloud Platform

In hybrid and multi-cloud environments, the network really matters

According to recent research [1], among organizations adopting public cloud, a full 70% say that they will use a combination of public cloud and on-premises data centers. At the same time, 21% of business users reported that poor network connectivity negatively impacts web or cloud-based application performance [2]. How can you ensure that your hybrid or multi-cloud deployment doesn't suffer such a fate? Networks, clearly, are the foundation of a successful digital transformation. Yet with the growing heterogeneity in customer environments, they are becoming increasingly complex. Nor are all cloud networks created equal. For businesses migrating to the cloud, performance, reliability, and security are table stakes to ensure a risk-free migration of business-critical workloads. For networks that support hybrid and multi-cloud environments, you need comprehensive visibility and monitoring, along with proactive network operations, to help ensure a positive customer experience. At the same time, that cloud network must position you for growth, modernization, and future innovation, e.g., adoption of a microservices architecture or edge services like 5G. Further, you need to be able to take advantage of these capabilities at your own pace. As IT architectures evolve, Enterprise Strategy Group (ESG) has been watching key industry trends at play, the networking challenges of modern application environments, and how these translate into customer requirements for hybrid and multi-cloud networking infrastructure and services. In a new whitepaper, The Network Matters, ESG discusses the network's role in infrastructure modernization, and the top challenges and requirements for the network in a multi-cloud and hybrid environment. It also discusses how Google Cloud networking solutions deliver on innovation, security, performance, and simplicity, and improve customer experiences globally.
To learn how to ensure that your hybrid or multi-cloud deployment is a success, download the whitepaper today.

[1] https://research.esg-global.com/reportaction/ContainerTrendsSurveyMSR2019/Marketing
[2] https://research.esg-global.com/reportaction/2019technologyspendingintentionsmsr/Marketing
Source: Google Cloud Platform