Announcing Cloud DNS forwarding: Unifying hybrid cloud naming

A key part of a successful hybrid cloud strategy is making sure your resources can find each other via DNS, whether they are in the cloud or on-prem. Rather than create separate islands of DNS namespaces, we’ve added new forwarding capability to Cloud DNS, our managed DNS service, letting you easily link your cloud and on-prem environments so you can use the same DNS service for all your workloads and resources.

Built with Cloud DNS’ new network policy capability, DNS forwarding allows you to create bi-directional forwarding zones between your on-prem name servers and Google Cloud Platform’s internal name servers. Currently in beta, DNS forwarding provides the following features and benefits:

- Outbound forwarding lets your GCP resources use your existing authoritative DNS servers on-prem, including BIND, Active Directory, and others.
- Inbound forwarding allows on-prem (or other cloud) resources to resolve names via Cloud DNS.
- Intelligent Google caching improves the performance of your queries; cached queries do not travel over your connectivity links.
- DNS forwarding is a fully managed service—no need to use additional software or your own compute and support resources.

In a nutshell, DNS forwarding provides a first-class, GCP-managed service to connect your cloud and on-prem DNS environments, providing unified naming for your workloads and resources. Further, you can use DNS forwarding for inbound traffic, outbound traffic, or both, to support existing or future network architecture needs.

DNS is a critical component of tying hybrid cloud architectures together. DNS forwarding, in combination with GCP connectivity solutions such as Cloud Interconnect and Cloud VPN, creates a seamless and secure network environment between your GCP cloud and on-prem data centers. To learn more, check out the DNS forwarding documentation and get started using DNS forwarding today.
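If you want to try the beta from the command line, here’s a minimal, hedged sketch; the VPC network name (my-vpc), zone name, domain, and on-prem name server address (192.168.0.53) are all hypothetical placeholders, and flag spellings may change while the feature is in beta:

    # Outbound: forward queries for corp.example.com from GCP to an on-prem name server
    gcloud beta dns managed-zones create corp-forwarding-zone \
        --description="Forward corp.example.com to on-prem" \
        --dns-name="corp.example.com." \
        --visibility=private \
        --networks=my-vpc \
        --forwarding-targets=192.168.0.53

    # Inbound: create resolver entry points so on-prem hosts can query Cloud DNS
    gcloud beta dns policies create inbound-policy \
        --description="Inbound forwarding for on-prem resolvers" \
        --networks=my-vpc \
        --enable-inbound-forwarding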
Source: Google Cloud Platform

Readers’ choice: Top Google Cloud Platform stories of 2018

We’re wrapping up a busy year here at Google Cloud. As you head into a new year, take a minute to catch up on what happened in 2018—and get some ideas about what you might do in 2019. Here’s what was most popular this year on the Google Cloud Platform (GCP) blog, based on readership, and organized generally by key areas of cloud.

Building the right cloud infrastructure for your business

The many ways to build a cloud infrastructure keep expanding. Container tools like Kubernetes continued to grow in popularity, and we started to learn more about serverless computing possibilities.

On the container front, this year brought news of the gVisor sandbox for secure container isolation, so you can run a bigger variety of workloads. Plus, Jib came out this year: it’s an open-source Java containerizer, so you can build containers using familiar Java tools.

And at Next ‘18 we announced the Cloud Services Platform, a consistent development framework for your IT resources that gathers together cloud services to automate away tasks across on-prem and cloud infrastructure. The beta release of GPUs attached to preemptible VMs also came this year, making it more affordable to run large-scale ML workloads. And Cloud TPU hardware accelerators arrived (and continued to mature) to speed up and scale ML workloads programmed with TensorFlow.

Developing cloud apps on that infrastructure

Along with solid cloud foundations, cloud app development made strides in 2018. News of support for headless Chrome for Google Cloud Functions and Cloud Functions for Firebase got attention. And the newly revamped Cloud Source Repositories made a splash—it’s powered by the same underlying code search infrastructure that Google engineers use every day.

Now that you’ve found cloud, what are you gonna do with it?

Cloud technology infrastructure really started to mature this year, especially for emerging use cases like machine learning (ML) that need powerful back-end tools. News of the Ethereum cryptocurrency dataset on BigQuery was a hit; it’s publicly available to use for analysis. A partnership with NASA’s Frontier Development Lab brought in Google Cloud to work on simulating and classifying the possible atmospheres of exoplanets.

Also popular on the blog this year: we added a PyTorch 1.0 preview VM image to GCP so you can easily conduct deep learning experimentation with the newest PyTorch framework. Cloud Text-to-Speech made Google’s internal technology, powered by DeepMind, available for uses like call center responses, IoT device speech, and converting text into audio format.

And don’t forget the fun that’s powered by cloud, too. A post on the new open-source Agones project got a lot of attention; Agones uses Kubernetes to host and scale dedicated game servers. Open Match arrived this year too—this open-source project lets game developers bring their own logic to a common matchmaking framework when building multiplayer games.

Building the future cloud IT team

Cloud technology hasn’t just changed IT infrastructure; it’s changed IT teams and processes as well. Concepts like site reliability engineering (SRE) bring some new ways of thinking about structuring these processes. This popular SRE vs. DevOps blog post laid out how SRE is similar to and different from DevOps, and described its availability targets, risk and error budgets, toil budgets, and more.
Then, there was the Accelerate: State of DevOps 2018 research report, with lots of takeaways based on survey results from DevOps professionals.

Managing the modern cloud

Some essential cloud management basics also stuck out among all the future-oriented, big-idea projects that got attention this year. The guide to best practices for user account authorization was a useful read for anyone creating, handling, and authenticating GCP user accounts. Choosing strong database consistency also struck a chord, detailing why and how it’s important, with a particular focus on Cloud Spanner. Titan Security Keys became available in the Google Store this year. These FIDO security keys include a hardware chip with Google-engineered firmware for strong two-factor authentication.

That’s a wrap for 2018! We’re looking forward to seeing what you build (and read) next.
Source: Google Cloud Platform

Accelerate your app delivery with Kubernetes and Istio on GKE

It’s no wonder so many organizations have moved all or part of their IT to the cloud; it offers a range of powerful benefits. However, making the jump is often easier said than done. Many organizations have a significant on-premises IT footprint, aren’t quite cloud-ready, or are constrained by regulations or by the lack of a consistent security and operating model across on-premises and the cloud.

We are dedicated to helping you modernize your existing on-premises IT and move to the cloud at a pace that works for you. To do that, we are leading the charge on a number of open-source technologies for containers and microservices-based architectures. Let’s take a look at some of these and how they can help your organization prepare for a successful journey to the cloud.

Toward an open cloud stack

At Google Cloud Next ‘18, we announced Cloud Services Platform, a fully managed solution based on Google open-source technologies. With Cloud Services Platform, you have the tools to transform your IT operations and build applications for today and the future, using containerized infrastructure and a microservices-based application architecture.

Cloud Services Platform combines Kubernetes for container orchestration with Istio, the service management platform, helping you implement infrastructure, security, and operations best practices. The goal is to bring you increased velocity and reliability, as well as to help manage governance at the scale you need. Today, we are taking another step towards this vision with Istio on GKE.

Think services first with Istio

We truly believe that Istio will play a key role in helping you make the most of your microservices. One way Istio does this is by providing improved visibility and security, making it easier to work with containerized workloads. With Istio on GKE, we are the first major cloud provider to offer direct integration to a Kubernetes service and simplified lifecycle management for your containers.

Istio is a service mesh that lets you manage and visualize your applications as services, rather than individual infrastructure components. It collects logs, traces, and telemetry, which you can use to set and enforce policies on your services. Istio also lets you add security by encrypting network traffic, all while layering transparently onto any existing distributed application—you don’t need to embed any client libraries in your code.

Istio securely authenticates and connects your services to one another. By transparently adding mTLS to your service communication, all information is encrypted in transit. Istio provides a service identity for each service, allowing you to create service-level policies that are enforced for each individual application transaction, while providing non-replayable identity protection. Out of the gate, you can also benefit from Istio’s visibility features thanks to its integration with Stackdriver, GCP’s native monitoring and logging suite. This integration sends service metrics, logs, and traces to Stackdriver, letting you monitor your golden signals (traffic, error rates, and latencies) for every service running in GKE.

Istio 1.0 was a key step toward helping you manage your services in a hybrid world, where multiple workloads run in different environments—clouds and on-premises, in containerized microservices or monolithic virtual machines.
With Istio on GKE, you get granular visibility, security, and resilience for your containerized applications, with a dead-simple add-on that works out of the box with all your existing applications.

Using Istio on GKE

The service-level view and security that Istio delivers are especially important for distributed applications deployed as containerized microservices, and Istio on GKE lets you deploy Istio to your Kubernetes clusters with the click of a button.

Istio on GKE works with both new and existing container deployments. It lets you incrementally roll out features, such as Istio security, bringing the benefits of Istio to your existing deployments. It also simplifies Istio lifecycle management by automatically upgrading your Istio deployments when newer versions become available.

Today’s beta availability of Istio on GKE is just the latest of many advancements we have made to make GKE the ideal choice for enterprises. Try Istio on GKE today by visiting the Google Cloud Platform console. To learn more, please visit cloud.google.com/istio or the Istio on GKE documentation.
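If you’d rather script this than click through the console, here’s a minimal, hedged sketch of enabling the add-on at cluster-creation time with the beta gcloud command; the cluster name and zone are placeholders, and flag spellings may change while the feature is in beta:

    # Create a GKE cluster with the Istio add-on enabled, starting in permissive mTLS mode
    gcloud beta container clusters create istio-demo \
        --zone=us-central1-a \
        --addons=Istio \
        --istio-config=auth=MTLS_PERMISSIVE

Permissive mode lets existing plaintext clients keep working while you roll out the mesh; switching to MTLS_STRICT enforces encryption for all service-to-service traffic.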
Enhancing GKE networking

Earlier this year we announced many new networking features for GKE, including VPC-native clusters, Shared VPC, container-native load balancing, and container-native network services for applications running on GKE and self-managed Kubernetes in Google Cloud.

- With VPC-native clusters, GKE natively supports many VPC features such as scale enhancements, IP management, security checks, and hybrid connectivity.
- Shared VPC lets you delegate administrative responsibilities to cluster admins while ensuring your critical network resources are managed by network admins.
- Container-native load balancing lets you program load balancers with containers as endpoints directly, for more optimal load balancing.
- Network services let you use Cloud Armor, Cloud CDN, and Identity-Aware Proxy natively with your container workloads.

We also announced new features to help simplify the configuration of containerized deployments, with some backend and frontend config enhancements. These improvements make everything easier, from identity and access management for network resources to better controls for CDN, Cloud Armor, and load balancing for easier application delivery.

Improving GKE security

GCP helps you secure your container environment at each stage of the build-and-deploy lifecycle with software supply chain and runtime security tools. These include integrations with tools from multiple security partners, all on top of Google’s security-focused infrastructure and security best practices. New features like node auto-upgrade and private clusters increase the security options available to GKE users. You can read more about new security features in GKE in “Exploring container security: This year, it’s all about security. Again.”

Delivering Kubernetes apps via GCP Marketplace

Enterprises usually work with a number of partners within their IT environments, whether in the cloud or on-premises. Six months ago, we introduced Kubernetes applications delivered through GCP Marketplace. Kubernetes apps offer more than just a container image; they are production-ready solutions that are integrated with GKE for simple click-to-deploy launches. Once deployed to GKE, Kubernetes apps are managed as full applications, simplifying resource management. You can also deploy Kubernetes apps to non-GKE Kubernetes clusters, whether they’re on-premises or in the cloud, for quick deployment that’s billed alongside other GCP spend.

With Kubernetes, your cloud, your way

If you use containers and Kubernetes, you already know how they can optimize infrastructure resources, reduce operational overhead, and improve application portability. But by standardizing on Kubernetes, you’ve also laid the foundation for improved service management and security, as well as simplified application procurement and deployment, across clouds and on-prem. Stay tuned in the coming months for more about Kubernetes, microservices, and Cloud Services Platform.
Source: Google Cloud Platform

Nurture what you create: How Google Cloud supports Kubernetes and the cloud-native ecosystem

At Google Cloud, we talk a lot about our belief in open source and open cloud. But what does that actually mean?

Usually, when you’re a leader in an open-source community like Kubernetes and there’s a big event (like this week’s KubeCon North America), that means launching a brand new project. Launches are exciting, but maintaining a successful project like Kubernetes requires sustained investment and maintenance. We find that what really distinguishes a successful open-source project is the day-in, day-out nurturing that happens behind the scenes. And it’s more than coding—it’s things like keeping the project safe and inclusive, writing documentation, managing test infrastructure, responding to issues, working in project governance, creating mentoring programs, reviewing pull requests, and participating in release teams. So today, we thought we’d take this opportunity not to announce a project, but rather reflect on some examples of what it means to us to be a part of the open-source cloud-native community.

“Open-source software is not free like sunshine, it’s free like a puppy.” – Sarah Novotny, Head of Open Source Strategy for GCP

Supporting communities and thinking differently

First and foremost, with Kubernetes, we fully support the core values of the project, as well as provide technical and non-technical contributions in ways that reinforce positive results for the entire community. Since its inception, we’ve remained the top contributor to the project. This is something we’re incredibly proud of, and we hope that our work helps make the entire cloud-native landscape richer.

Our commitment to open source also extends to making more impactful events. For example, this year, rather than produce new KubeCon conference swag, we donated diversity scholarships for 2019 to the CNCF instead. This aligns with our desire for inclusivity, and helps cultivate a stronger community. We also co-organized the Kubernetes Contributor Summit, so our community can have critical in-person interactions ahead of the full event.

Supporting the existing cloud-native ecosystem: etcd

Another example of our commitment to open source is supporting the etcd distributed key-value store, which has now joined the roster of CNCF projects. As the Kubernetes ecosystem matured, we saw the need for more support in this critical component. We dedicated full-time engineers to the project, including an etcd maintainer, and two of the top five code committers in 2018. We led improvements to the etcd release process, expanding release branch support from just the latest minor version to the latest three minor versions. We also dedicated staff to patch management duties and automating the release workflow, and actively helped stabilize etcd, hunting down and fixing issues including a critical boltdb data corruption issue. More recently, we contributed to the rewrite of the etcd client-side load balancer and led efforts to expand the metrics exposed by etcd for monitoring system health and performance.

We’re committed to the quality and production readiness of etcd. Our plans include making upgrades safer by adding zero-downtime downgrade support, and expanding test coverage over more version pairings of etcd with Kubernetes.
Finally, we’re continually making coordinated improvements to both etcd and the Kubernetes storage layer that interfaces with it, to optimize scalability, performance, and ease of operability.

Enriching the cloud-native landscape

Our commitment to open source isn’t just limited to supporting communities and existing projects. We also hope to share many of the valuable lessons we have learned while building scalable, secure, and reliable systems, Kubernetes being a prime example.

A recent example is gVisor, based on technology Google uses to isolate and secure containerized workloads. As organizations run more heterogeneous and less trusted workloads, there’s new interest in containers that provide a secure isolation boundary, and we wanted to share how we’ve been tackling the problem internally with the community. This in turn opened up broader discussions about the security challenges inherent in cloud-native architecture.

In an effort to make gVisor more accessible, we integrated it with Minikube, so you can try out gVisor locally, in a VM on your laptop. We’re also actively working to open more of the project’s support infrastructure, plans, and processes, starting with a substantial system call compatibility test suite with more than 1,500 tests.

Releasing gVisor as an open-source project underscores the many different ways communities can form and contribute across the cloud-native landscape. Sometimes those contributions aren’t explicitly code, but instead feedback or ways to do things better. Being open helps build communities of practice across all technology groups and stakeholders.

Improving the cloud-native developer experience

We understand that the day-to-day life of an application developer can be challenging in the cloud-native world, due to multiple points of divergence between how you run your application locally and in a production Kubernetes cluster. Our goal is to reduce these differences so all developers can have a positive experience in the Kubernetes ecosystem.

In March we released an important open-source tool for cloud-native development called Skaffold, which allows you to define the build, test, and deployment phases of your Kubernetes application with a single YAML file. In the skaffold dev command, this local pipeline is combined with an automated file watcher based on the build definition, creating a fast feedback loop—you can see your source file changes in your deployed app in seconds. This works both locally and in Google Kubernetes Engine (GKE), helping to provide a cohesive workflow.
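To make that concrete, here’s a minimal skaffold.yaml sketch; the image name and manifest path are placeholders, and the schema version evolves quickly, so treat this as illustrative rather than canonical:

    apiVersion: skaffold/v1beta1
    kind: Config
    build:
      artifacts:
      # Image that Skaffold rebuilds whenever watched source files change
      - image: gcr.io/my-project/my-app
    deploy:
      kubectl:
        manifests:
        # Kubernetes manifests that Skaffold re-applies on each iteration
        - k8s/*.yaml

With this file in place, running skaffold dev watches your source tree, rebuilds the image, and redeploys it to your active cluster on every change.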
Learn and share: How we cross-pollinate communities

Another effort within Google open source is to create templates and other starter materials for emerging projects to use for things like governance and contributions. Our hope is to eventually provide everything necessary to bootstrap a successful open-source project, as well as offer guidance at key inflection points in the project lifecycle. These are distilled from our experience working on projects like Kubernetes, Istio, Knative, and TensorFlow. To further improve these materials, we regularly bring community managers together across projects to discuss shared struggles, opportunities, and lessons learned, to avoid repeating antipatterns across projects. Scaling open-source contributions is important, especially if the goal is to ensure consistently positive and inclusive interactions across every project we support.

So, as we all celebrate the continued success of Kubernetes, remember to take the time to thank someone you see helping make the community better. It’s up to all of us to foster a cloud-native ecosystem that prizes the efforts of everyone who helps maintain and nurture the work we do together.

To stay up to date on what’s going on in the cloud-native community, both from Google and beyond, we urge you to subscribe to the Kubernetes Podcast. And if you’re interested in getting involved, please visit the links below.

- Kubernetes for container scheduling and management [ Google Cloud | GitHub ]
- Istio to connect, monitor, and secure microservices [ Google Cloud | GitHub ]
- Knative to build, deploy, and manage modern serverless workloads [ Google Cloud | GitHub ]
- Container tools to help with the entire lifecycle of containerized applications [ Google Cloud | GitHub ]
- Kubeflow Pipelines to compose, deploy, and manage end-to-end machine learning workflows [ Google Cloud | GitHub ]
Source: Google Cloud Platform

Knative: bringing serverless to Kubernetes everywhere

Knative, the open-source framework that provides serverless building blocks for Kubernetes, is on a roll, and the GKE serverless add-on, the first commercial Knative offering that we announced this summer, is enjoying strong uptake with our customers. Today, we are announcing that we’ve updated the GKE serverless add-on to support Knative 0.2. In addition, today at KubeCon, Red Hat, IBM, and SAP announced their own commercial offerings based on Knative. We are excited for this growing ecosystem of products based on Knative.

Knative allows developers to easily leverage the power of Kubernetes, the de facto cross-cloud container orchestrator. Although Kubernetes provides a rich toolkit for empowering the application operator, it offers less built-in convenience for application developers. Knative solves this by integrating automated container build, fast serving, autoscaling, and eventing capabilities on top of Kubernetes, so you get the benefits of serverless, all on the extensible Kubernetes platform. In addition, Knative applications are fully portable, enabling hybrid applications that can run both on-prem and in the public cloud.

Knative plus Kubernetes together form a general-purpose platform with the unique ability to run serverless, stateful, batch, and machine learning (ML) workloads alongside one another. That means developers can use existing Kubernetes capabilities for monitoring, logging, authentication, identity, security, and more, across all their modern applications. This consistency saves time and effort, reduces errors and fragmentation, and improves your time to market. As a user, you get the ease of use of Knative where you want it, with the power of Kubernetes when you need it.

Knative rising

In the four months since we announced Knative, an active and diverse community of companies has contributed to the project. Google Kubernetes Engine (GKE) users have been actively using the GKE serverless add-on since its launch in July and have provided valuable feedback, leading to many of the improvements in Knative 0.2.

In addition to Google, multiple partners are now delivering commercial offerings based on Knative. Red Hat announced that you can now start trying Knative as part of its OpenShift container application platform. IBM has committed to supporting Knative on its IBM Cloud Kubernetes Service. SAP is using Knative as part of its SAP Cloud Platform and open-source Kyma project.

A consistent experience, with the flexibility to run where you want, resonates with many enterprises and startups. We are pleased that Red Hat, IBM, and SAP are embracing Knative as a powerful, open, industry-wide approach to serverless. Here’s what Knative brings to each of the new commercial offerings:

“The serverless paradigm has already demonstrated that it can accelerate developer productivity and significantly optimize compute resources utilization. However, serverless offerings have also historically come with deep vendor lock-in. Red Hat believes that Knative, with its availability on Red Hat OpenShift, and collaboration within the open source community behind the project, will enable enterprises to benefit from the advantages of serverless while also minimizing lock-in, both from a perspective of application portability, as well as that of day-2 operations management.” – Reza Shafii, VP of product, platform services, at Red Hat

“IBM believes open standards are key to success as enterprises are shifting to the era of hybrid multi-cloud, where portability and no vendor lock-in are crucial.
We think Knative is a key technology that enables the community to unify containers, apps, and functions deployment on Kubernetes.” – Jason McGee, IBM Fellow, VP and CTO, Cloud Platform

“SAP’s focus has always been centered around simplifying and facilitating end-to-end business processes. SAP Cloud Platform Extension Factory is addressing the need to integrate and extend business solutions by providing a central point of control, allowing developers to react on business events and orchestrate complex workflows across all connected systems. Under the hood, we are leveraging cloud-native technologies such as Knative, Kubernetes, Istio and Kyma. Knative tremendously simplifies the overall architecture of SAP Cloud Platform Extension Factory and we will continue to collaborate and actively contribute to the Knative codebase together with Google and other industry leaders.” – Michael Wintergerst, SVP, SAP Cloud Platform

We’re excited to deliver enterprise-grade Knative functionality as part of Google Kubernetes Engine, and we’re encouraged by its momentum in the industry. To get started, take part in the GKE serverless add-on alpha. To learn more about the Knative ecosystem, check out our post on the Google Open Source blog.
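If you’re curious what a Knative application looks like in practice, here’s a minimal, hedged sketch of a Service using the v1alpha1 serving API from these early releases; the sample image comes from the public knative-samples repository, and field names may shift as the API matures:

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: helloworld-go
    spec:
      runLatest:
        configuration:
          revisionTemplate:
            spec:
              container:
                image: gcr.io/knative-samples/helloworld-go
                env:
                # Environment variable read by the sample application
                - name: TARGET
                  value: "World"

Applying this single manifest gives you a route, revision tracking, and scale-to-zero autoscaling, without hand-writing separate Deployment, Service, and Ingress objects.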
Source: Google Cloud Platform

Expanding our partnership with Palo Alto Networks to simplify cloud security and accelerate cloud adoption

Security remains a top concern and challenge for enterprises, and Google Cloud provides a strong and flexible toolkit to help make a smooth transition to the cloud. We partnered with Palo Alto Networks in 2017 because we both shared a belief that moving to the cloud can help enterprises simplify security, and that improved security will drive cloud adoption. Today we’re expanding that partnership to help more enterprises increase control of their own security in the cloud.

As part of our partnership, Palo Alto Networks will run its Application Framework on Google Cloud to take advantage of Google Cloud Platform’s secure, durable cloud storage and highly scalable AI and analytics tools. Services such as BigQuery will help Application Framework customers accelerate time-to-insight as they work to detect and respond to security threats. Palo Alto Networks will also run their GlobalProtect cloud service on Google Cloud Platform. Google Cloud’s reliable, performant, and secure global-scale network and infrastructure offer many advantages for a service that helps protect branch and mobile workforces.

“This partnership makes us a Google Cloud customer, allowing us to run important cloud-delivered security services at scale and with the benefits of Google’s AI and analytics expertise,” said Varun Badhwar, SVP Products & Engineering for Public Cloud Security at Palo Alto Networks. “We’ll also be working with Google Cloud to offer organizations moving to Google Cloud additional visibility, compliance and security capabilities they need to prevent cyberattacks.”

New solutions to help customers automate compliance audits and reporting

RedLock from Palo Alto Networks helps organizations manage security risks and achieve and maintain compliance. By monitoring the use of GCP APIs, RedLock delivers real-time visibility across GCP resources, including containerized workloads in Google Kubernetes Engine. This enables continuous compliance monitoring and auto-generated reports for common regulations and standards such as GDPR, HIPAA, PCI DSS, and NIST, eliminating the need for lengthy manual audits. A new integration with GCP’s Security Baseline API (Alpha) means that customers can combine a view of their own security and compliance posture with data from GCP’s infrastructure, a capability not available on any other public cloud.

Solutions that will help increase visibility and enhance security analytics

Deep integration of Palo Alto Networks products with Google’s Cloud Security Command Center helps centralize visibility into security and compliance risks on GCP. Palo Alto Networks integrations send alerts from the VM-Series next-generation firewalls, Traps endpoint protection, and RedLock to help provide centralized visibility into security and compliance risks in a Google Cloud environment.

[Image: Findings from Palo Alto Networks products in the Cloud Security Command Center dashboard]

This new functionality complements the already extensive set of joint capabilities that help Google Cloud customers define, enforce, monitor, and maintain consistent security policies across on-premises, public cloud, and hybrid environments. For example:

- The entire line of Palo Alto Networks next-generation firewalls, both physical and virtualized, supports standards-based IPsec VPN connectivity to ensure a secure connection from on-premises to Google Cloud. In addition, GlobalProtect cloud service provides secure connectivity to GCP as a service, removing some of the operational burden associated with firewall deployments.
- Palo Alto Networks VM-Series virtualized firewalls protect and segment cloud workloads in GCP to safeguard against internal and external threats, and can be deployed directly from the GCP Marketplace.
- Panorama network security management provides unified management of both physical and VM-Series firewalls deployed on-premises and on GCP. Customers can create policies once and enforce them everywhere.
- Traps helps secure the operating system and applications within workloads on GCP. A lightweight host agent deployed within the cloud instance detects zero-day exploits and ensures the integrity of the operating system and applications. As attackers uncover vulnerabilities, the agent-based approach can provide protection until organizations are able to patch cloud workloads.
- Through in-line protection provided by Palo Alto Networks firewall appliances or GlobalProtect cloud service, organizations can understand SaaS usage and build policies to help control risk exposure. They can complement the robust security capabilities in G Suite with the Aperture SaaS security service, which offers additional options for protection of data at rest as well as ongoing monitoring of user activity and administrative configurations.

Through our extended partnership, enterprises using Palo Alto Networks offerings on-premises will have an easier path to move to the cloud while leveraging their existing security investments. Organizations that run on Google Cloud will have easy access to security functionality from Palo Alto Networks, with enhanced capabilities available only on Google Cloud.

“We are pleased to see Google Cloud and Palo Alto Networks strengthening their partnership. Security is a top priority for Broadcom, and we depend on both organizations to help protect our networks, infrastructure, data, and applications,” notes Andy Nallappan, Vice President and Chief Information Officer, Global Information Technology for Broadcom. “We look forward to increased collaboration that will provide us with new capabilities to enhance our security posture and further simplify deployment and operations across our data centers and the cloud.”

Start today for free

Organizations can take a free, 2-week VM-Series Test Drive and learn how the VM-Series can be deployed on GCP to prevent data loss and potential business disruption. Organizations can also sign up for a free, 2-week RedLock trial to continuously monitor and secure their Google Cloud environment and identify vulnerable resources and potential points of exposure.

To learn more about our partnership with Palo Alto Networks, please visit this site and read their announcement.
Source: Google Cloud Platform

Exploring container security: This year, it’s all about security. Again.

Earlier this year at KubeCon in Copenhagen, the message from the community was resoundingly clear: “this year, it’s about security.” If Kubernetes was to move into the enterprise, there were real security challenges that needed to be addressed. Six months later, at this week’s KubeCon in Seattle, we’re happy to report that the community has largely answered that call. In general, Kubernetes has made huge security strides this year, and giant strides on Google Cloud. Let’s take a look at what changed this year for Kubernetes security.

Kubernetes attacks in the wild

Where developers go, hackers follow. This year, Kubernetes graduated from the CNCF, and it also earned another badge of honor: weathering its first real security attacks. Earlier this year, several unsecured Kubernetes dashboards made the news for leaking cloud credentials. At the time, Lacework estimated that of over 20,000 public dashboards, 300 were open without requiring any access credentials. (Note that Google Kubernetes Engine no longer deploys this dashboard by default.) Elsewhere, attackers added binaries to images on Docker Hub to mine cryptocurrency; those images were then downloaded an estimated five million times and deployed to production clusters.

The majority of attacks against containers, however, remain “drive-by” attacks, where an attacker is only interested in finding unpatched vulnerabilities to exploit. This means that the best thing you can do to protect your containers is to patch: your base image, your packages, your application code—everything. We expect attackers to start targeting containers more, but since containers make it easier to patch your environment, hopefully they’ll have less success.

Luckily, we also saw the community responding to security threats by donating multiple security-related projects to the CNCF, including SPIFFE, OPA, and Project Harbor.

Developing container isolation, together

Isolation was a hot topic for the container community this year, even though there still haven’t been any reports of container escapes in the wild, where an attacker gains control of a container and uses it to gain control of other containers on the same host. The Kata Containers project kicked things off in December 2017, and other sandboxing technologies quickly followed suit in 2018, including gVisor and Nabla containers. While different in implementation, the goal of each of these technologies is to create a second layer of isolation for containerized workloads and bring defense-in-depth principles to containers, without compromising performance.

Container isolation is frequently misunderstood (after all, containers don’t contain), and lack of isolation has been a primary argument against adopting them. Unlike virtual machines, containers don’t provide a strong isolation boundary on par with a hypervisor. That makes some users hesitant about running multi-tenant environments—deploying two containers for different workloads on the same VM—because they worry that the workload in one container might affect the other. To address this, Kubernetes 1.12 added RuntimeClass, which lets you use new sandboxing technologies to isolate individual pods. RuntimeClass gives you the ability to select which runtime to use with each pod, letting you choose hardened runtimes like gVisor or Kata depending on how much you trust the workload.
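To make that concrete, here’s a hedged sketch of the alpha API as it appeared around Kubernetes 1.12; it assumes a cluster where the RuntimeClass feature gate is enabled and the node’s container runtime already maps a handler named runsc to gVisor, and field names may change while the feature is alpha:

    apiVersion: node.k8s.io/v1alpha1
    kind: RuntimeClass
    metadata:
      name: gvisor
    spec:
      # Handler name the node's container runtime maps to the gVisor runtime (runsc)
      runtimeHandler: runsc
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: untrusted-workload
    spec:
      # Run this pod's containers inside the gVisor sandbox
      runtimeClassName: gvisor
      containers:
      - name: app
        image: nginx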
With this tooling, the primary argument against containers is now one of their greatest strengths.

Protecting the software supply chain

At Google Cloud, we focused our efforts on securing the software supply chain—protecting your container from the base image, to code, to an application image, to what you deploy in production. Recently we released two new products in this space: Container Registry vulnerability scanning, which scans your images for known vulnerabilities, and Binary Authorization, which lets you enforce your policy requirements at deployment time. Both of these products are currently in beta.

Since a container is meant to be immutable, you’re constantly redeploying, and constantly pushing things down your supply chain. Binary Authorization gives you a single enforcement point where you can dictate what’s running in your environment. In addition to the GCP-hosted product, we also published an open-source reference implementation, Kritis, to ensure that your containers are scanned and patched for any known vulnerabilities before you let them into your environment.

Hardening GKE and its network

We keep GKE up to date with Kubernetes open-source releases, but we also introduce new features and new defaults to help you better protect your clusters. We made huge headway in network security recently, namely with the general availability of private clusters and master authorized networks. Together, these help you further limit access to your cluster by malicious attackers who are scanning IP addresses for vulnerabilities. Now, you can restrict access to your cluster’s master to a set of whitelisted IP addresses, and can further ensure that your cluster’s nodes only have private IP addresses. And since GKE now works with Shared Virtual Private Cloud, your network team can manage this environment directly. To learn more about GKE networking and network security, see the GKE network overview.

Then, in the small-but-mighty category, we turned node auto-upgrade on by default in the GCP Console. Unpatched environments are an easy target for attackers, and it only takes one missed security notice or delayed patch to become suddenly vulnerable. Node auto-upgrade delivers security patches automatically to keep your nodes up to date. Note that on GKE, Google manages and patches the control plane. While you probably didn’t notice it, our team has been very active patching GCP and GKE for Linux and Kubernetes vulnerabilities this year, most notably last week!

In addition to new network security features, we are always striving to improve GKE’s default security settings, so you can implement security best practices without having to be a security expert. We’ve consolidated our hardening advice into a single guide that’s easy to follow, and noted when we’ve changed defaults. (It’s also an easy link to share with auditors.)

There’s so much more we want to do, and we’re going to keep on keeping on, so that 2019 can be all about security too. If you’re at KubeCon this year, check out some of our container security talks:

- How Symlinks Pwned Kubernetes (And How We Fixed It) – Tues, Dec 11, 10:50-11:25
- Recent Advancements in Container Isolation – Tues, Dec 11, 1:45-2:20
- This Year, It’s About Security – Tues, Dec 11, 4:30-5:05
- So You Want to Run Vault in Kubernetes? – Wed, Dec 12, 11:40-12:15
- Navigating Workload Identity in Kubernetes – Wed, Dec 12, 4:30-5:05
- Shopify’s $25k Bug Report, and the Cluster Takeover That Didn’t Happen – Thurs, Dec 13, 4:30-5:05

Hope to see you there!
Source: Google Cloud Platform

A little light reading: What to read to stay updated on cloud technology

It sometimes feels like keeping up with technology news could be a full-time job, whatever your particular area of interest. We collected this list of useful, interesting, and otherwise cool stories for you to catch up on what was new in the bigger Google world in November. Here’s what caught our attention.

Newspaper puts cloud infrastructure, AI into action

The New York Times built a processing pipeline using GCP products to digitize and organize its more than five million physical photos. They are using Cloud Storage to hold the photo scans, Cloud Pub/Sub to provide the data pipeline, services on Google Kubernetes Engine (GKE) to resize images, and a Cloud SQL database to store metadata. Here’s the full story.

Pixel photos get better with machine learning

We also recently got a look at how the Pixel 3 phone incorporates machine learning so the Portrait Mode of the camera can better predict depth when taking photos. Using machine learning allowed Pixel developers to consider multiple “depth cues,” which would be extremely difficult to do with a manually created algorithm. They instead trained a neural network written in TensorFlow to achieve the improved depth. Get all the technical details here.

Want your phone to find the best signal? There’s a network for that

Ever wish your phone could find another, better option when your signal is bad? There’s actually a virtual mobile network service from Google that automatically switches your phone service between Wi-Fi hotspots and cellular networks based on signal strength and speed wherever you are. The news in November was that Google Fi is now available for the majority of Android devices and iPhones.

Get to know the service mesh concept

If you’ve started playing around with Istio—or want to try it—this Medium piece is a great explainer on how to create an Istio-enabled “Hello, world” app. It focuses on routing in particular, since Istio manages the traffic of your app, and assumes you have some knowledge of containers and Kubernetes.

Need some techie fun in your spare time?

It’s now easier to find and enter one of Google’s three coding competitions: Hash Code, Code Jam, and Kick Start. All the competitions are now global, and the site has a simplified interface. Will you accept the challenge? Sign up to get notifications for the early 2019 start dates.

Anything you would add to this list? Tell us about your recommended reading.
Source: Google Cloud Platform

Kubernetes and GKE for developers: a year of Cloud Console

As a Google Kubernetes Engine (GKE) administrator, you routinely need to create and manage new clusters, deploy applications to them, and monitor and troubleshoot how those applications are performing. Today, there are two main ways to do that: the kubectl command-line interface and Cloud Console, a web-based dashboard. Cloud Console for GKE has been generally available for a year now, and we’ve been adding some exciting new features to give you an easy, intuitive interface for managing your GKE environment.

Off to an easy start

To present these new features to you, meet Taylor, an infrastructure and DevOps admin who wants to kick the tires before deciding whether to start using GKE in her company. She logs into Cloud Console, selects the option to create a new cluster, and is prompted to choose what kind of cluster she wants.

Cloud Console makes choosing the right cluster easy. On the left, Taylor can see a list of preconfigured templates that match common use cases. On the right, she sees an option to customize a cluster exactly how she wants it. Out of curiosity, she expands the advanced view and plays around with different templates to see what settings Google would recommend. In the end she decides to go with the ‘Your First Cluster’ configuration—it looks perfect for the occasion!

After her new cluster is provisioned, Taylor wants to put it to work. She clicks the ‘Deploy’ button and gets a form that guides her through setting up her first-ever GKE application. Each step explains what the different options mean and what the next actions should be—from picking an image to exposing it to the internet.

Taylor doesn’t have a specific application in mind, but GKE provisions nginx by default, so she decides to just go with that—she doesn’t even have to fill in or change any of the fields! A few minutes later, Taylor has a working instance of nginx that is accessible from the internet via a Kubernetes service.
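For readers who prefer the other path the post mentions, kubectl, the same flow can be sketched in two commands; the deployment name and port here are arbitrary:

    # Create a Deployment running the stock nginx image
    kubectl create deployment nginx --image=nginx

    # Expose it to the internet through a LoadBalancer Service
    kubectl expose deployment nginx --type=LoadBalancer --port=80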
Easier troubleshooting and monitoring

After playing around with GKE for a bit, Taylor decides to try it out with some of her production workloads. She creates a larger cluster with several microservices. With each microservice she deploys, Cloud Console provides helpful CPU, memory, and disk utilization charts, and highlights new features from Stackdriver, like limits.

Unfortunately, it looks like some of her workloads are not working properly. To start troubleshooting, Taylor clicks on the error and sees the likely reason for the failure, as well as a link to documentation that contains more tips on how to debug this particular issue. You can read more about troubleshooting applications using Cloud Console in our previous blog post.

Find commercial Kubernetes applications in GCP Marketplace

After some time, Taylor decides she’d like a commercial database to use with her applications. She finds a prepackaged version of Couchbase in Cloud Marketplace (formerly Cloud Launcher) and uses click-to-deploy to get it running fast. Apart from making GKE installs easy, Cloud Console also offers a rich UI to monitor and manage applications installed from the marketplace.

Meet the GKE team at KubeCon

These are just some of the things you can do in Cloud Console to manage your GKE environment. The Google Cloud team will be presenting on these and other GKE topics at KubeCon North America ‘18 in Seattle next week—be sure to check out the agenda. And if nothing else, stop by the Google Cloud booth D5 to try Cloud Console for yourself!
Source: Google Cloud Platform

How to connect Cloudera’s CDH to Cloud Storage

If you are running CDH, Cloudera’s distribution of Hadoop, we aim to provide you with first-class integration on Google Cloud so you can run a CDH cluster with Cloud Storage integration.

In this post, we’ll help you get started deploying the Cloud Storage connector for your CDH clusters. The methods and steps we discuss here apply to both on-premises and cloud-based clusters. Keep in mind that the Cloud Storage connector uses Java, so you’ll want to make sure that the appropriate Java 8 packages are installed on your CDH cluster. Java 8 should come pre-configured as your default Java Development Kit.

[Check out this post if you’re deciding how and when to use Cloud Storage over the Hadoop Distributed File System (HDFS).]

Here’s how to get started:

Distribute using the Cloudera parcel

If you’re running a large Hadoop cluster, or more than one cluster, it can be hard to deploy libraries and configure Hadoop services to use those libraries without making mistakes. Fortunately, Cloudera Manager provides a way to install packages with parcels. A parcel is a binary distribution format that consists of a gzipped (compressed) tar archive file with metadata.

We recommend using a parcel to install the Cloud Storage connector. There are two big advantages of using a parcel instead of manual deployment and configuration:

- Self-contained distribution: All related libraries, scripts, and metadata are packaged into a single parcel file. You can host it at an internal location that is accessible to the cluster, or even upload it directly to the Cloudera Manager node.
- No need for sudo or root access: The parcel is not deployed under /usr or any of the system directories. Cloudera Manager deploys it through agents, which eliminates the need for sudo-enabled users or the root user.

Create your own Cloud Storage connector parcel

To create the parcel for your clusters, download and use this script. You can do this on any machine with access to the internet. The script executes the following actions:

- Downloads the Cloud Storage connector to a local drive
- Packages the connector Java Archive (JAR) file into a parcel
- Places the parcel under the Cloudera Manager’s parcel repo directory

If you’re connecting an on-premises CDH cluster, or a cluster on a cloud provider other than Google Cloud Platform (GCP), follow the instructions from this page to create a service account and download its JSON key file.

Create the Cloud Storage parcel

Next, you’ll want to run the script to create the parcel file and checksum file, and let Cloudera Manager find them, with the following steps:

1. Place the service account JSON key file and the create_parcel.sh script in the same directory. Make sure that there are no other files in this directory.

2. Run the script, which will look something like this:

    $ ./create_parcel.sh -f <parcel_name> -v <version> -o <os_distro_suffix>

- parcel_name is the name of the parcel as a single string, without any spaces or special characters (e.g., gcsconnector).
- version is the version of the parcel in the format x.x.x (e.g., 1.0.0).
- os_distro_suffix: Like RPM or deb packages, parcels follow a naming convention with a distribution suffix. A full list of possible distribution suffixes can be found here.
- -d is an optional flag you can use to deploy the parcel to the Cloudera Manager parcel repo folder; if not provided, the parcel file is created in the same directory where the script ran.

3. Logs of the script can be found in /var/log/build_script.log.

Distribute and activate the parcel

Once you’ve created the Cloud Storage parcel, Cloudera Manager has to recognize the parcel and install it on the cluster.

1. The script you ran generated a .parcel file and a .parcel.sha checksum file. Put these two files on the Cloudera Manager node under the directory /opt/cloudera/parcel-repo. If you already host Cloudera parcels somewhere, you can instead place these files there and add an entry to the manifest.json file.

2. On the Cloudera Manager interface, go to Hosts -> Parcels and click Check for New Parcels to refresh the list and load any new parcels. The Cloud Storage connector parcel should show up in the list.

3. In the Actions column of the new parcel, click Distribute. Cloudera Manager will start distributing the Cloud Storage connector JAR file to every node in the cluster.

4. When distribution is finished, click Activate to enable the parcel.

Configure CDH clusters to use the Cloud Storage connector

After the Cloud Storage connector is distributed on the cluster, you’ll need a few additional configuration steps to let the cluster use the connector. These steps differ depending on whether you’re using HDFS, Spark, or Hive for your Hadoop jobs.

Configuration for the HDFS service

1. From the Cloudera Manager UI, click HDFS service > Configurations. In the search bar, type core-site.xml. In the box titled “Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml,” add the Cloud Storage connector properties.
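The original post showed these properties in a screenshot. Here’s a hedged sketch of the standard Cloud Storage connector settings; the key-file path is a placeholder, and your exact set may differ, so check the connector documentation:

    <!-- Register the Cloud Storage connector as the handler for gs:// paths -->
    <property>
      <name>fs.gs.impl</name>
      <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
    </property>
    <property>
      <name>fs.AbstractFileSystem.gs.impl</name>
      <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
    </property>
    <!-- Authenticate using the service account JSON key packaged with the parcel -->
    <property>
      <name>google.cloud.auth.service.account.enable</name>
      <value>true</value>
    </property>
    <property>
      <name>google.cloud.auth.service.account.json.keyfile</name>
      <value>/path/to/service-account-key.json</value>
    </property>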
2. Click Save configurations > Restart required services.

3. Export the Hadoop classpath to point to the Cloud Storage connector JAR file (see the command-line sketch at the end of this post).

4. Run the hdfs dfs -ls command against a bucket the service account has access to, to validate the setup.

Configuration for the Spark service

In order to let Spark recognize Cloud Storage paths, you have to let Spark load the connector JAR. Here is how to configure it:

1. From the Cloudera Manager home page, go to Spark > Configuration > Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh. Add the Cloud Storage connector JAR path to the Spark classpath (again, see the sketch at the end of this post).

2. Next, use Cloudera Manager to deploy the configuration, and restart the service if necessary.

3. Open a Spark shell to validate that you can access Cloud Storage from Spark.

Configuration for the Hive service

If you also need to store Hive table data in Cloud Storage, configure Hive to load the connector JAR file with the following steps:

1. From the Cloudera Manager home page, go to Hive service > Configuration, search for “Hive Auxiliary JARs Directory,” and enter the path to the Cloud Storage connector JAR.

2. Validate that the JAR is picked up by accessing the Hive CLI and querying a table stored on Cloud Storage.

That’s it! You’ve now connected Cloudera’s CDH to Google Cloud Storage, so you can store and access your data on Cloud Storage with high performance and scalability. Learn more here about running these workloads on Google Cloud.
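Steps 3 and 4 of the HDFS configuration, and the Spark classpath setting, appeared as screenshots in the original post. Here’s a hedged command-line sketch of those steps; the parcel path and bucket name are hypothetical placeholders, so adjust them to your environment:

    # Step 3: make the connector JAR visible to Hadoop CLI tools (adjust to your parcel path)
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/cloudera/parcels/gcsconnector/lib/gcs-connector-hadoop2-latest.jar

    # Step 4: validate access against a bucket the service account can read
    hdfs dfs -ls gs://my-bucket/

    # Spark: append the same JAR to the Spark classpath in the spark-env.sh safety valve
    export SPARK_DIST_CLASSPATH=$SPARK_DIST_CLASSPATH:/opt/cloudera/parcels/gcsconnector/lib/gcs-connector-hadoop2-latest.jar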
Source: Google Cloud Platform