Application management made easier with Kubernetes Operators on GCP Marketplace

More and more enterprises leverage Google Kubernetes Engine (GKE) to containerize and manage their applications, and to deliver new features to their end users. However, application and cluster admins often struggle with authoring, releasing, and managing applications on top of Kubernetes. To make this easier, we have published a set of Kubernetes Operators to our marketplace that encapsulate best practices and end-to-end solutions for specific applications.

Lifecycle of a Kubernetes application

For example, Operators automate the creation of core resources such as pods, containers, persistent volumes, and services, as well as workload resources including deployments and statefulsets, and they enable application-specific visibility and orchestration.

We worked with the Kubernetes open-source community in the Apps Special Interest Group (SIG) to introduce and standardize an Application resource that defines the various resources within an application and manages them as a group. The Application resource includes a standard API for creating, viewing, and managing applications in Kubernetes, making it easy to perform health checks, do garbage collection, and manage application dependencies. Moreover, it provides a standard mechanism for viewing and managing apps in the GKE UI and other UI dashboards.

Application view in the GKE UI

Operators exercise one of the most valuable capabilities of Kubernetes—its extensibility—through the use of CRDs and custom controllers. Operators extend the Kubernetes API to support the modernization of different categories of workloads and to provide improved lifecycle management, scheduling, and more.

Kubernetes applications on GCP Marketplace recently became generally available, and are an excellent place to find ISV-supported Operators.

Operators by Google

In addition to helping define the application standard, we’ve also created several critical Operators.
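Stepping back to the Application resource for a moment: because it manages an application's components as a group, a controller can roll per-component health up into a single application-level status. Here is a toy Python sketch of that idea; the function name and status values are hypothetical illustrations, not the actual Application API.

```python
def application_health(component_statuses):
    """Aggregate per-component statuses into one application-level status.

    component_statuses maps component name -> "Healthy" or "Unhealthy".
    Hypothetical policy: Healthy if every component is healthy, Unhealthy
    if none are, Degraded otherwise.
    """
    if not component_statuses:
        return "Unknown"
    healthy = sum(1 for s in component_statuses.values() if s == "Healthy")
    if healthy == len(component_statuses):
        return "Healthy"
    return "Degraded" if healthy else "Unhealthy"

print(application_health({"frontend": "Healthy", "db": "Unhealthy"}))  # Degraded
```

The real Application resource exposes this kind of group-level view through the Kubernetes API rather than a standalone function, but the aggregation principle is the same.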
These highlight the extensibility of Kubernetes, and demonstrate best practices for authoring and managing the lifecycle of a Kubernetes application.

Java Operator

Java is one of the most popular programming languages on Earth, with 10 million developers worldwide writing apps for 15 billion Java-enabled devices (source). Java apps rely on a Java virtual machine (JVM), which lets a computer run Java and other related programs. Examples of containerized apps that run in a JVM, and that are often deployed on GKE, include Spark, Elasticsearch, Kafka, and Cassandra. However, there are several challenges in running JVM applications on Kubernetes. The JVM is often not fully aware of the isolation mechanisms that containers use internally, leading to unexpected behavior between different environments (such as test and production).

To solve these challenges, we created a Java Operator that automatically configures various aspects of a JVM application running in a Kubernetes cluster, including JVM memory, garbage collection logging, monitoring, and debugging. You can find this Java Operator on Google Cloud Platform (GCP) Marketplace.

Spark Operator

Apache Spark is a popular analytics engine for large-scale batch and streaming data processing and machine learning. We recently launched an open-source Kubernetes Operator for Apache Spark in beta that simplifies lifecycle management of Spark applications running on Kubernetes in a Kubernetes-native way. You can find it on GCP Marketplace.

Airflow Operator

Apache Airflow allows programmatic management of complex workflows as directed acyclic graphs for dependency management and scheduling.
We published the open-source Airflow Operator, which simplifies the installation and management of Apache Airflow on Kubernetes and is available on GCP Marketplace.

Building your own Operator

While we provide several high-quality extensions and applications on GCP Marketplace, you may also want to write your own extensions for custom use cases. To do so, you can follow these best practices and use Kubebuilder, reducing development time from months to weeks or days. To learn more, check out the Kubebuilder book.

Automatic updates for your Kubernetes apps

To simplify the management experience, Managed Updates on GCP Marketplace lets you easily update your Kubernetes apps, with health checks and automatic rollback. You no longer need to stop your applications, search for the right patches and releases, verify and validate them, and finally update them manually. With Managed Updates, we update applications with the latest features and security patches, removing significant operational burden. We are working with partners on GCP Marketplace to enable seamless updates of their applications.

Managing Kubernetes apps made easy

Creating, configuring, and deploying apps to run on top of Kubernetes doesn’t have to be hard. Operators simplify the process of deploying many common applications directly from the GCP Marketplace and CLI, and if you can’t find what you need, you can build it yourself. Check out GCP Marketplace for the full catalog of pre-configured Kubernetes apps that are ready to deploy into your cluster today.
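As a small aside on the Airflow workflows mentioned above: scheduling a workflow expressed as a directed acyclic graph amounts to running tasks in a dependency-respecting order, i.e. a topological sort. The sketch below uses Kahn's algorithm; it is illustrative only, not Airflow's actual scheduler.

```python
from collections import deque

def topological_order(tasks):
    """Return a run order for a DAG given as {task: set_of_upstream_tasks}.

    Kahn's algorithm: repeatedly run any task whose dependencies are done.
    Raises ValueError if the graph contains a cycle.
    """
    pending = {t: set(deps) for t, deps in tasks.items()}
    ready = deque(t for t, deps in pending.items() if not deps)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for t, deps in pending.items():
            if task in deps:
                deps.remove(task)
                if not deps:
                    ready.append(t)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: not a DAG")
    return order

dag = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
print(topological_order(dag))  # ['extract', 'transform', 'load']
```

A real scheduler layers retries, parallelism, and time-based triggers on top of this ordering, but the dependency logic is the core.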
Source: Google Cloud Platform

Preview zone: How Google Cloud CRE helps SaaS companies prevent unanticipated failures

At Google, our Customer Reliability Engineering (CRE) teams work with customers to help implement Site Reliability Engineering (SRE) practices so they can continually attain their reliability goals. This work often includes defining objectives and implementing operational best practices like blameless postmortems or analyzing error budget spend.

Following CRE practices is especially important when changes are made in the customer’s product. But what about when changes are released within Google Cloud Platform (GCP), where the product runs? We’ve heard that you want to test your products against future GCP releases to ensure reliability and performance when the underlying cloud service changes. We are happy to announce that preview zones are now available to let you test your own production code against future releases of GCP.

We’ve been working closely with many of our SaaS company partners, and we’ve expanded our CRE for SaaS program to address these needs. You can see how it works here:

With this expansion, SaaS partners who have enrolled in our CRE for SaaS program now have the option to run a copy of their production applications in the preview zone. This lets partners detect unanticipated failures of applications running on future releases of GCP services. We put a number of unreleased “Day 0 binaries,” our soon-to-be-released code, in this zone. Then partners can test their production applications against that code.
This way, we can anticipate and avoid previously unknown failure modes before users do—giving both us and our partners a chance to investigate the pending changes and address them.

BrightInsight (a Flex company), this year’s winner of the Google Cloud Healthcare Partner award, has been using the preview zone, and finds it helpful both in preventing unanticipated failures and in supporting regulatory compliance requirements within the healthcare industry.

To use the preview zone, you’ll need to have defined your SLOs so that Google can integrate them with additional test frameworks. If you don’t have SLOs defined, we’ve built SLO Guide, a new tool to help you discover what you should measure based on common architectures and critical user journeys. It will help you quickly create SLOs that measure what your users actually care about. You can request access to the tool here. Finally, if you’re not a Google Cloud SaaS partner yet, kick off the process here.
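As an aside, the SLOs mentioned above translate directly into error budgets: an availability target implies a fixed amount of allowed downtime per window. The arithmetic is simple enough to sketch (this is generic SLO math, not the SLO Guide tool itself):

```python
def error_budget_minutes(slo, days=30):
    """Allowed downtime (in minutes) for an availability SLO over a window.

    slo is the availability target as a fraction, e.g. 0.999 for 99.9%.
    """
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo)

print(round(error_budget_minutes(0.999), 1))   # 43.2 minutes per 30 days
print(round(error_budget_minutes(0.9999), 2))  # 4.32 minutes per 30 days
```

Each extra "nine" shrinks the budget by a factor of ten, which is why tightening an SLO is an engineering decision rather than a slogan.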
Source: Google Cloud Platform

Google Cloud networking in depth: Cloud Load Balancing deconstructed

Google has eight services that serve over one billion users every day. To offer the best availability and user experience for these services, we at Google engineered load-balancing infrastructure that scales on demand, utilizes resources efficiently, and is secure and optimized for latency. This same load-balancing infrastructure is what we provide for your applications, in the form of the Google Cloud Load Balancing family. Unlike traditional load-balancing solutions, each of our load-balancing offerings is designed as a large-scale, distributed, software-defined system that scales out and is highly resilient.

In this blog we cover our portfolio of load-balancing offerings. We start with our internet-facing load balancers, which deliver Google’s massive edge-as-a-service to you via Network Load Balancing and Global Load Balancing. We present the benefits of container-native load balancing and show you how to secure the edge and optimize for latency and cost. Since many of you have services that are internal to Google Cloud, we then cover your Internal Load Balancing options. We wrap up by showing how we can help you grow your cloud footprint and manage multi-cloud and heterogeneous services with internal layer-7 load balancing and Traffic Director for global service mesh.

Maglev for fast and reliable Network Load Balancing

For load-balancing external layer-4 TCP/UDP traffic, we offer Network Load Balancing built using our Maglevs. In production since 2008, Maglevs load balance all traffic that comes into our data centers, and distribute traffic to front-end engines at our network edges. The layer-4 traffic is distributed to a set of regional backend instances using a 5-tuple hash consisting of the source and destination IP addresses, the protocol, and the source and destination ports.

Maglev was a break from traditional load balancers in that it is software-based and operates in an active-active scale-out architecture.
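The consistent-hashing idea at the heart of this design can be sketched in a few lines: hash each connection's 5-tuple onto a set of backends so that adding or removing a backend remaps only the flows that must move. The snippet below uses rendezvous (highest-random-weight) hashing as a simplified illustration; Maglev's actual algorithm builds a lookup table and is considerably more sophisticated.

```python
import hashlib

def pick_backend(five_tuple, backends):
    """Pick a backend for a flow via rendezvous (highest-random-weight) hashing.

    five_tuple: (src_ip, dst_ip, protocol, src_port, dst_port).
    The same flow always maps to the same backend, and removing a backend
    only remaps the flows that were assigned to it.
    """
    def score(backend):
        key = repr(five_tuple) + backend
        return hashlib.sha256(key.encode()).hexdigest()
    return max(backends, key=score)

flow = ("10.0.0.1", "203.0.113.5", "tcp", 52431, 443)
backends = ["be-1", "be-2", "be-3"]
# Deterministic: repeated lookups for the same flow agree.
assert pick_backend(flow, backends) == pick_backend(flow, backends)
```

That stability under backend churn is what preserves connection-oriented protocols when the backend pool changes.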
With Maglev Consistent Hashing, Maglev-based load balancers evenly distribute traffic over hundreds of backends and minimize the negative impact of unexpected faults on connection-oriented protocols. Network Load Balancing is a great solution for lightweight L4 load balancing where you want to preserve the client IP address all the way to the backend instance and perform TLS termination on those instances.

Global Load Balancing for a single VIP, global reach

For our global load-balancing solution, we pushed load balancing to the edge of Google’s global network, fronting global load-balancing capacity behind a single Anycast virtual IPv4 or IPv6 address. You can deploy capacity in multiple regions without having to modify DNS entries or add new load balancer front-end IP addresses (VIPs) for new regions. You don’t have to deal with the challenges of traditional DNS-based load balancers, such as clients caching IP addresses, or regional siloed resources resulting in sub-optimal load balancing and utilization of backend instances.

With global load balancing, you get cross-region failover and overflow. Global LB’s traffic distribution algorithm automatically directs traffic to the next closest instance with available capacity in the event of failure of, or lack of capacity in, the region closest to the end user.

Global LB delivers first-class support for both VMs and containers. For containers, we built an abstraction called Network Endpoint Groups (NEGs), which are essentially groups of IP address and port pairs. NEGs enable you to directly specify a container endpoint, as opposed to first directing traffic to the node on which it resides and then redirecting to the container using kube-proxy. As a result, you can deliver lower latency, greater throughput, and higher-fidelity health checks for your services using NEGs.

Secure the edge

To secure your service, we recommend taking a defense-in-depth approach.
We also recommend that you deploy TLS for data privacy and integrity purposes. We do not charge extra for encrypted vs. unencrypted traffic. We offer HTTPS and SSL proxy load balancers in our global load-balancing family, as well as Managed Certificates to reduce the work of procuring certs and managing their lifecycle. With SSL policies, you can specify the minimum TLS version and SSL features that you wish to enable on your HTTP(S) and SSL proxy load balancers. We also offer multiple pre-configured profiles, including a custom one that lets you specify the ciphers and SSL features you want to use.

With Google’s global network and global load balancing, Google is able to mitigate and dissipate layer-3 and layer-4 volumetric attacks. To protect against application-layer attacks, we recommend using Cloud Armor attached to your global HTTP(S) load balancer. Use this in concert with Identity-Aware Proxy to authenticate users and authorize access to your backend services.

Optimize for latency and cost

Make the web faster

We spend a lot of time at Google working to make the web faster. QUIC is a UDP-based encrypted transport optimized for HTTPS, and HTTP/2 is foundational for gRPC support. Google Cloud load balancing supports QUIC traffic to the load balancer, and supports multiplexed streams of HTTP/2 to the load balancer, followed by load balancing those multiple HTTP/2 streams to the backend.

Google Cloud CDN runs on our globally distributed edge points, so you can reduce network latency when serving website content, offload content origins, and reduce serving costs.
Just set up HTTP(S) Load Balancing and then enable CDN by clicking a single checkbox.

Optimize for performance or cost with Network Tiers

With Network Tiers, you can optimize your workload for performance with Premium Tier, which takes advantage of Google’s performant network, or optimize for cost with Standard Tier, where your return traffic travels over regular ISP networks, like other public clouds, but incurs lower egress costs.

Internal Load Balancing for private services

Many Google Cloud customers have private workloads that need to be protected from the public internet. Those services need to scale and grow behind a private VIP that is accessible only by internal instances. For such users we offer regional layer-4 Internal Load Balancing based on our Andromeda network virtualization stack. Similar to our HTTP(S) Load Balancer and Network Load Balancer, Internal L4 Load Balancing is neither a hardware appliance nor an instance-based solution, and it can support as many connections per second as you need, since there’s no load balancer in the path between your client and backend instances.

What’s next?

For business agility, many organizations are transitioning from monolithic applications to microservices, looking for a uniform way to create and manage heterogeneous and multi-cloud services with security, observability, and resiliency. This is where service mesh comes in, providing software-defined networking (SDN) for services, including load balancing. With service mesh, networking complexity is abstracted away into the service mesh’s data plane, which is implemented as a service proxy such as Envoy, leaving you free to focus on building business logic. Envoy is a performant, feature-rich, open-source service mesh data plane that you can configure and manage via the service mesh’s control plane (such as Istio).
Google is a key contributor to both the Envoy and Istio open-source initiatives.

We recently launched Traffic Director, a GCP-managed traffic management control plane for service mesh. Traffic Director communicates with the service proxies in the data plane using the open-source xDS APIs to enable global load balancing, scalable health checking, autoscaling, resiliency, and policy-driven traffic steering.

Learn more

To learn more about Cloud Load Balancing, start with the Next ‘19 talks on Google Cloud Load Balancing Deep Dive and Best Practices and Traffic Director and Envoy-based ILB for Production Grade Service Mesh & Istio, and read the documentation. We’d love your feedback on these features and what else you’d like to see from our load-balancing portfolio. You can reach us at gcp-networking@google.com.
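As a footnote, the policy-driven traffic steering mentioned above often boils down to weighted traffic splitting, for example sending 10% of requests to a canary. A toy model (purely illustrative, not Traffic Director's implementation): hash each request id into weighted buckets so the split is both proportional and stable per request.

```python
import hashlib

def route(request_id, weights):
    """Split traffic across services according to integer weights.

    weights: dict mapping service name -> weight, e.g. {"v1": 90, "v2": 10}.
    A stable hash of the request id lands in one weighted bucket, so the
    same request id always routes to the same service.
    """
    total = sum(weights.values())
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % total
    for service in sorted(weights):
        if bucket < weights[service]:
            return service
        bucket -= weights[service]

counts = {"v1": 0, "v2": 0}
for i in range(1000):
    counts[route("req-%d" % i, {"v1": 90, "v2": 10})] += 1
print(counts)  # roughly a 900/100 split
```

Hashing on a stable key (rather than flipping a coin per request) keeps any one client's experience consistent while the aggregate split matches the policy.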
Source: Google Cloud Platform

Cloud Audit Logs: Integrated audit transparency for GCP and G Suite

Google Groups is a critical tool to control access to your Google Cloud Platform (GCP) projects, and you’ve told us that having Google Groups audit logs available in Cloud Audit Logs would help streamline security and access monitoring. We’ve been working to unify these audit logs so you don’t have to integrate with multiple APIs to get a complete audit inventory of your GCP environment. Now, you can access the Google Groups audit logs right from within Cloud Audit Logs. This is an opt-in feature that you can turn on through the Admin console’s Data Sharing section under Legal & Compliance.

Using Google Groups to manage your organization’s data access

Google Groups are the recommended way to grant access to GCP resources when using IAM policies. Groups help you centralize access control, reduce duplication, delegate access management, and scale your GCP environments securely. This launch is one of many investments we’re making to simplify using Google Groups within GCP.

Google Cloud Audit Logs

Cloud Audit Logs is a Stackdriver security offering that lets you answer the question “who did what, when and where?” for your GCP environment. It contains audit trails of all administrative changes and data accesses of cloud resources by users.

At the nucleus of all security operations, Cloud Audit Logs makes it possible to identify patterns of threat via Event Threat Detection, alert on security abnormalities via Cloud Security Command Center, remediate incidents via Stackdriver Incident Response and Remediation, and satisfy compliance requirements such as the NIST 800-92 Guide to Computer Security Log Management.

A view into the future

As more customers adopt G Suite and GCP to modernize their collaboration tools and applications, you’ve asked us to provide a more unified and consistent management plane. That is why we are bringing group management directly into the Google Cloud Console.
This includes various streams of security logs, audit logs from Cloud Identity, and G Suite audit logs. For example, when a Cloud Identity or G Suite administrator adds a user or turns on a G Suite service, an audit log appears both in the G Suite Admin Audit Log and in the GCP Admin Activity Audit Log. Likewise, when a user signs into your domain, it’s recorded in the G Suite Login Audit Log and the GCP Cloud Audit Log.

To learn more about using Google Groups to manage access control, check out our overview of Identity and Access Management.
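To make "who did what, when and where" concrete, here is a toy sketch that answers that question over simplified audit entries. The entry structure is a hypothetical stand-in for the real AuditLog record format, which carries far more detail.

```python
def who_did_what(entries, actor=None):
    """Answer 'who did what, when and where' over simplified audit entries.

    Each entry is a dict with "who", "what", "where", and "when" keys (a toy
    stand-in for real audit log records); actor optionally filters by user.
    """
    selected = [e for e in entries if actor is None or e["who"] == actor]
    return ["{who} did {what} on {where} at {when}".format(**e) for e in selected]

log = [
    {"who": "alice@example.com", "what": "AddGroupMember",   # hypothetical method
     "where": "groups/eng", "when": "2019-06-01T12:00:00Z"},
    {"who": "bob@example.com", "what": "SetIamPolicy",
     "where": "projects/demo", "when": "2019-06-01T12:05:00Z"},
]
for line in who_did_what(log, actor="alice@example.com"):
    print(line)
```

In practice you would run this kind of query with Cloud Logging filters rather than in application code; the sketch only illustrates the shape of the question audit logs answer.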
Source: Google Cloud Platform

Optimize your organization’s cloud journey with a Cloud Center of Excellence

The cloud has become a foundational part of the business and digital transformation journeys of many organizations. The Google Cloud Professional Services team has learned through working with our customers that a Cloud Center of Excellence (COE) is one of the ways that enterprises can get to the cloud faster and maintain stronger alignment between their business strategy and cloud investments.

A Cloud COE can help accelerate cloud adoption benefits in a number of ways, including:

- Driving momentum across the organization
- Developing reusable frameworks for cloud governance
- Managing cloud knowledge sharing and learning initiatives
- Overseeing cloud usage and plans for scale
- Aligning cloud offerings to the larger organizational strategy

Further, we’ve observed that successful Cloud COE teams exhibit many of the following characteristics:

- Multidisciplinary: Members of the team reflect the diverse perspectives of the stakeholders in the organization.
- Empowered: Team members have decision-making authority without need for higher-level sign-off.
- Visionary: They take a multi-project viewpoint to understand repeatability and long-term benefits or goals for the organization.
- Agile: The team understands how to deliver short-term wins, such as short development cycles and an iterative approach to building products.
- Technical: The Cloud COE should include experienced individuals with a history of architecting and building past solutions within the organization.
- Integrated: Individual members come from existing areas of the business to allow for easy integration into existing teams and organizational constructs.
- Hands-on: The group includes individuals who are able to do the hands-on work needed to build and test cloud solutions.

Google Cloud Professional Services is excited to release a new whitepaper that can help guide your organization through the process of building a Cloud COE to meet your needs both now and in the future.
“Building a Cloud COE” is closely aligned with the Google Cloud Adoption Framework and is a practical guide for organizations looking to build or evolve their Cloud COE.To learn about how to build a Cloud Center of Excellence, download the whitepaper for practical guidance and strategies.
Source: Google Cloud Platform

Forseti intelligent agents: an open-source anomaly detection module

Among security professionals, one way to identify a breach or spurious entity is to detect anomalies and abnormalities in customers’ usage trends. At Google, we use Forseti, a community-driven collection of open-source tools to improve the security of Google Cloud Platform (GCP) environments. Recently, we launched the “Forseti Intelligent Agents” initiative to identify anomalies, enable systems to take advantage of common usage patterns, and identify other outlier data points. In this way, we hope to help security specialists for whom it’s otherwise cumbersome and time-consuming to manually flag these data points.

Anomaly detection is a classic and common solution implemented across multiple business domains. We tested several machine-learning (ML) techniques for use in anomaly detection, analyzing existing data that had been used to create firewall rules and identifying outliers. The approach, the results of which you can find in this whitepaper, was experimental and based on static analysis.

At a high level, our goal is to use Forseti inventory data to achieve the following:

- Detect unusual instances between snapshots.
- Alert users to unusual firewall rules and provide comparisons with expected behaviors.
- Provide potential remediation steps.

Below is our solution. Note that it uses static data for now, but we can transform it to use dynamic data if needed.

The Forseti intelligent agents workflow

To build this solution, we took a multi-phase approach that imported firewall data into a BigQuery table, prepared and manipulated the data, then generated and evaluated a model. At the same time, we engaged in “feature-level decision stumps” (i.e., decision trees built after considering one feature as the label and all the rest as regular features) and performed bucketing and sample detection. Figure 1 is a high-level depiction of our initial workflow. For pre-processing, we experimented with approaches such as penalizing subnets with a wider range.
We also looked at supernets, an example of which is depicted below. Some of the flattened firewall rules that we used to train the model can be depicted as follows.

Then, for unsupervised learning, we experimented with techniques including k-means clustering, decision stumps, and visualization in low-dimensional space.

Feature weights for both principal components:

Based on these results, we looked at a normal organization with thousands of firewall rules and, examining the points and clusters to the right, found some of the following anomalies (marked in red below):

*Model output has been anonymized for privacy and security.

We conducted these experiments with firewall rules to prototype different approaches. You can read about these approaches in detail in the whitepaper.

A next step to follow up on this framework would be to use semi-supervised learning. Using some of the data points that our models can confidently flag as anomalous would also help in generating annotated data for such detailed analysis. Since we only used firewall rules in this initial study, as a next step we plan to use other features, such as the hierarchical location of the firewall rules and network-related metadata.

If you’re interested in contributing to the Forseti intelligent agents initiative, you can play around with any sample inventory data (or even your own), helping us generate broader anomaly detection mechanisms. By enlisting the community’s help with intelligent agents, we hope to continue to expand the Forseti toolset to help ensure the security of your cloud environment. For more details about this initiative, check out the solution here.

Joe Cheuk, Cloud Application Engineer; Praneet Dutta, Cloud Machine Learning Engineer; and Nitin Aggarwal, Technical Program Manager, Cloud Machine Learning contributed to this report.
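As a closing illustration of the outlier-detection idea described above (and emphatically not Forseti's actual model), even a one-dimensional z-score test captures the intuition: flag the points that sit far from the bulk of the data. Real firewall-rule features are multi-dimensional, which is why the whitepaper uses clustering instead.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A deliberately simple stand-in for the clustering-based approach in the
    whitepaper, using the population standard deviation over all points.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# e.g. a per-rule feature such as open port count; one rule is far too permissive
open_ports = [1, 2, 1, 3, 2, 2, 1, 2, 3, 1, 2, 500]
print(flag_anomalies(open_ports))  # [500]
```

A semi-supervised next step, as the post suggests, would use confidently flagged points like this one as seed labels for a richer model.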
Source: Google Cloud Platform

Building the cloud-native future at Google Cloud

From its first open-source commit five years ago to now, Kubernetes has become the industry standard for modern application architecture. It was built on over a decade of Google’s experience as the world’s largest user of containerized applications. And it’s from this deep and continued investment that Google Cloud provides industry-leading solutions for running workloads at enterprise scale.

One of the most exciting outcomes of this shift toward cloud-native computing is the innovation built on top of Kubernetes. At Google, we love to solve challenging problems and then share our experiences at scale with the world. This ethos is what brought Kubernetes to life, and it’s also the force behind Knative, Istio, gVisor, Kubeflow, Tekton, and other cloud-native open-source projects that we lead.

We think of it as our job not only to dream about the future, but also to design and implement it. Here’s an overview of open-source projects tied to Kubernetes that we’re working on. We know that speculating about the future can be tricky, but these projects offer a glimpse into how we’re building a cloud-native future. Let’s take a look.

Start with Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It is the industry’s de facto container orchestrator, and is the heart of the cloud-native movement.

We’re proud of our contributions to the Kubernetes project, as we serve the community in many important ways. Google remains the top technical contributor to the project, and is actively involved in nearly all special interest groups (SIGs), subprojects, and the steering committee, as well as serving as code approvers and reviewers. We constantly integrate our real-world experience at scale into the project, just as we have from the beginning.

When we look at the future of Kubernetes, we see the API extension ecosystem maturing and growing even further.
We also see a more holistic approach to scalability, so it’s not just about how many nodes or pods are deployed, but how Kubernetes is used across real-world production environments with widely varying requirements. Improved reliability is another important facet of this work, as even more mission-critical workloads move to Kubernetes.

Istio

Istio is a service mesh that helps manage, secure, and observe traffic between services. The project evolved out of the need for developers adopting microservices to understand and control the traffic between those services without requiring code changes.

Istio uses the Envoy proxy as a sidecar to collect detailed network traffic statistics and other data from the co-located application, as well as to provide logging and tracing. It optionally secures traffic using mTLS (and automatically generates and rotates certificates). Finally, it provides Kubernetes-style APIs for advanced networking functionality (for example, the ability to run canary tests, change retry policy at runtime, or add circuit breaking).

The upcoming version, 1.2, will feature a new operator-based installer and numerous testing and quality improvements. For the rest of 2019, componentization and ease of use will take center stage, along with architectural improvements that will increase modularity, allow powerful data-plane extensibility, and enhance reliability and performance.

Knative

Knative is a Kubernetes-based platform to build, deploy, and manage modern stateless workloads. Knative components abstract away complexity and enable developers to focus on what matters to them—solving important business problems.

Just last week, the Knative team released the latest version, v0.6. Besides incremental reliability and stability enhancements, this release also exposes more powerful routing capabilities and improved support for GitOps-like operational use cases.
Also, starting with this release, developers can easily migrate simple apps from Kubernetes Deployments without changes, making service deployment easier for anyone who’s familiar with the Kubernetes resource model.

Since Knative was announced 10 months ago, a number of commercial offerings have come to market using its underlying primitives. Today, the Knative community includes 400+ contributors associated with over 50 different companies, who as of the v0.6 release have made 4,000+ pull requests. We are excited about this momentum and look forward to working with the community on further improving the developer experience on Kubernetes.

gVisor

gVisor is an open-source, OCI-compatible sandbox runtime that provides a virtualized container environment. It runs containers with a new user-space kernel, delivering a low-overhead container security solution for high-density applications. gVisor integrates with Docker, containerd, and Kubernetes, making it easier to improve the security isolation of your containers while still using familiar tooling. Additionally, gVisor supports a variety of underlying mechanisms for intercepting application calls, allowing it to run in diverse host environments, including cloud-hosted virtual machines.

gVisor was open sourced in May 2018 at KubeCon EU. Since then, the gVisor team has added multi-container support for Kubernetes, released a suite of more than 1,500 individual tests, released a minikube add-on, integrated gVisor with containerd, and further improved isolation and compatibility. The team recently began hosting community meetings and is working to grow the user base and community around container isolation and gVisor.

Tekton

Tekton is a set of standardized Kubernetes-native primitives for building and running Continuous Delivery workflows.
It allows users to express their Continuous Integration, Deployment, and Delivery pipelines as Kubernetes CRDs, and run them in any Kubernetes cluster. We started Tekton last year and donated it to the open Continuous Delivery Foundation earlier this year. Tekton APIs are still in alpha, but we look forward to stabilizing them and adding support for automated deployments, vendor-agnostic pull requests, GitOps workflows, automated compliance-as-code, and more!

Forseti Security

Forseti Security is a collection of community-driven, open-source tools to help you expand upon the security of your Google Cloud Platform (GCP) environments. It takes a snapshot of your GCP resource metadata, audits those resources by comparing their configuration with the policies you defined, and notifies you of violations on an ongoing basis.

With Forseti, you can ensure your GKE clusters are provisioned with security and governance guardrails by scanning your GKE resource metadata and making sure the configurations are as expected. Forseti’s Validator Scanner lets you define custom security and governance constraints in Rego to check for violations in your GKE resource metadata. In addition, you can reuse these constraints for pre-deployment checks with Terraform Validator. A set of canned constraints is available in the Policy Library, and the Forseti community will continue contributing new constraints to harden your GKE environment. Get started with Forseti Validator Scanner here.

Kubeflow

Kubeflow is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable. Its goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML on a variety of infrastructures.
The Kubeflow project is supported by 100+ contributors from 20+ organizations. Kubeflow is on the road to 1.0, and we’re hard at work building a powerful development experience that will allow data scientists to build, train and deploy from notebooks, as well as the enterprise stability and features that ML operations teams need to deploy and scale advanced data science workflows. Hear more about this effort in this session from KubeCon NA 2018, and follow us on Twitter @kubeflow.

Skaffold

Skaffold is a command line tool that makes it fast and easy to develop applications on Kubernetes. Skaffold automates the local development loop for you; skaffold dev rebuilds your images and redeploys your app to Kubernetes on every code change. You can also use Skaffold as a building block for CI/CD pipelines with skaffold run. It’s language-agnostic and has a growing number of configurable, flexible image builders (jib, docker, bazel, kaniko), deployers (kustomize, kubectl, helm) and automated tagging policies, making it a great fit for more and more Kubernetes development workflows.

We use Skaffold under the hood for Cloud Code for IntelliJ and VS Code, and also for Jenkins X. Skaffold is currently in beta, and will soon graduate to 1.0.0. Follow our progress on our GitHub repo, and share your thoughts with the #skaffold hashtag on Twitter!

Gatekeeper

Gatekeeper is a customizable admission webhook. It allows cluster administrators and security practitioners to develop, share and enforce policies and config validation via parameterized, easily configurable constraint CRDs. Constraints are portable and could also be used to validate commits to the source-of-truth repo in CI/CD pipelines.

With Gatekeeper, you can help developers comply with internal governance and best practices, freeing up your time and theirs. You can do things like require developers to set ownership labels, apply resource limits to their pods, or prohibit them from using the :latest tag.
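To make the :latest prohibition concrete, the check such a constraint encodes boils down to inspecting every container image reference in a pod spec. Real Gatekeeper constraints are written in Rego and evaluated by Open Policy Agent; this Python sketch only mirrors the logic, and the pod spec shown is an illustrative example, not Gatekeeper’s API:

```python
# Illustrative sketch of the validation a ":latest"-prohibiting constraint
# performs. Gatekeeper itself expresses this in Rego; the logic is the same.

def uses_latest_tag(image: str) -> bool:
    """Return True if an image reference resolves to the :latest tag."""
    # Digest-pinned images (name@sha256:...) are never "latest".
    if "@" in image:
        return False
    # Look for the tag only after the last slash, so registry ports
    # (e.g. "registry:5000/app") are not mistaken for tags.
    name = image.rsplit("/", 1)[-1]
    if ":" not in name:
        return True  # no tag at all defaults to :latest
    return name.rsplit(":", 1)[-1] == "latest"

def validate_pod(pod_spec: dict) -> list[str]:
    """Collect policy violations for every container in a pod spec."""
    violations = []
    for container in pod_spec.get("containers", []):
        if uses_latest_tag(container["image"]):
            violations.append(
                f"container {container['name']!r} uses a :latest image"
            )
    return violations

pod = {"containers": [
    {"name": "web", "image": "gcr.io/my-project/web:v1.2.3"},
    {"name": "sidecar", "image": "nginx"},            # implicit :latest
    {"name": "cache", "image": "redis:latest"},
]}
print(validate_pod(pod))
```

Running this flags the sidecar and cache containers but not the version-pinned web container; an admission webhook would reject the pod, while Gatekeeper’s audit mode would report the same violations for resources that already exist.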
Using Gatekeeper’s audit functionality, you can easily find any pre-existing resources that are in violation of current best practices. Google is proud to be collaborating with Microsoft and Styra (the creators of Open Policy Agent) on this project. Gatekeeper is currently in alpha, and we welcome user feedback and contributions.

Krew

Krew is a plugin manager for kubectl that helps users discover and install kubectl plugins to improve their kubectl experience. Originally developed at Google, Krew is now a part of Kubernetes SIG CLI.

The future is now

Building cloud-native apps on top of Kubernetes isn’t some abstract, aspirational goal. The tools you need are here today, and they’re only getting better. To stay up to date on what else is happening in the cloud-native community, both from Google and beyond, we urge you to subscribe to the Kubernetes Podcast.
Source: Google Cloud Platform

Delivering end-to-end data analytics and data management solutions with Informatica

As more enterprises transition from on-premises data centers to the cloud, they increasingly need hybrid and multi-cloud solutions that can help them get the most from existing investments and take advantage of familiar, easy-to-use tooling. Today, we’re extending our strategic partnership with Informatica, a leader in enterprise cloud data management, to meet these hybrid and multi-cloud needs. This includes the availability of Informatica Intelligent Cloud Services (IICS) and Master Data Management (MDM) on Google Cloud Platform (GCP), offering advanced data integration, data governance, data quality, and broader data management solutions for a seamless end-to-end data lifecycle management experience.

In our conversations with enterprise customers across every industry, we frequently hear that data management and analytics are top of mind. Through our expanded collaboration with Informatica, we’re bringing these enterprises solutions that address their challenges in three key ways: data warehouse modernization for smart analytics and real-time insights, data management for marketing analytics, and data governance.

Data warehouse modernization

With the availability of IICS on Google Cloud Platform, customers will be able to easily and securely move data from their hybrid and multi-cloud applications and systems into GCP, and analyze that data at scale with smart analytics solutions like BigQuery, Cloud Dataproc and cloud AI capabilities. Existing Google Cloud customers will find that these new Informatica product integrations make GCP’s data management and data analytics solutions even easier to use. Our partnership will help accelerate data warehouse modernization by making data and schema migration, including ETL pipelines, seamless.

In addition, with the availability of IICS on GCP, customers will be able to take advantage of our leading AI and ML capabilities.
This means they can move data from multiple hybrid and multi-cloud systems into BigQuery, and use BigQuery ML or AutoML Tables to build machine learning models on their datasets. They can even fast-track data preparation with out-of-the-box data quality solutions for training AI models using cloud machine learning APIs or Cloud AutoML.

Master data management for marketing analytics

Enterprise CMOs have told us they want to make decisions backed by data, but struggle with data fragmentation across systems such as CRM, POS, and billing. By bringing all their data together inside BigQuery and then applying Informatica’s Master Data Management solution, they get a single source of truth for customer data, and they can apply analytics that generate meaningful insights and improve the customer experience.

For instance, by taking advantage of IICS on GCP, businesses can now build marketing-specific data lakes filled from sources such as Google AdWords, YouTube, DoubleClick, and more than 100 SaaS applications. This can help enterprises create a 360-degree view of their customers to enhance customer experiences, predict business outcomes, and improve campaign performance. Informatica MDM will also be available as a managed service within IICS on Google Cloud, offering better data governance and master data management, and giving customers the ability to govern and understand all of their data across on-prem, public cloud, and GCP-native stores like Cloud Storage, BigQuery, and Cloud Spanner.

Data governance

IICS Data Quality and Governance Cloud will make it easy for customers to explore, govern and manage data quality across a variety of on-premises systems and Google Cloud data stores such as Cloud Storage, BigQuery, and Cloud Spanner using a single pane of glass.

For Informatica customers moving to the cloud, GCP offers a secure, scalable, and reliable infrastructure to help build and operate mission-critical data analytics solutions.
They can easily port their data pipelines and move data into GCP to realize the benefits of Google Cloud’s serverless, integrated, and intelligent data analytics services. Beyond analysis, Informatica customers will be able to apply Google Cloud’s industry-leading AI and machine learning capabilities to their data for predictive analytics, so they can make their applications and business processes even more intelligent.

“Informatica is committed to providing our customers with the broadest ecosystem support across the industry, and our new integration with Google Cloud creates an enhanced strategic alignment with a rapidly growing enterprise-ready cloud platform,” said Anil Chakravarthy, chief executive officer, Informatica. “Today, two major industry leaders are coming together to offer data integration innovations that power our customers’ digital transformations and expands our strategic partnership with Google Cloud.”

Equinix, the world’s largest global interconnection platform, is already using Informatica and Google Cloud to connect digital businesses directly to their customers, clouds, employees and partners inside its more than 200 data centers. “One of our strategic goals is to deliver rich, on-demand business insights through big data and advanced analytics capabilities to help scale the organization and deliver a superior experience for our customers,” says Milind Wagle, CIO, Equinix. “As part of modernizing our technology in support of this goal, we selected Informatica and Google Cloud as strategic platforms in our data and analytics architecture stack.
The strategic alignment between the Informatica and Google Cloud platforms, leveraging Equinix ECX Fabric, is helping us fast-track our enterprise digital transformation.”

Informatica Intelligent Cloud Services (IICS) and Master Data Management (MDM) on Google Cloud Platform (GCP) will be available to customers through an early access program later in 2019. To learn more, visit informatica.com/gcp.
Source: Google Cloud Platform

Putting the ‘ease’ in Kubernetes with latest enhancements to GKE

Since its launch in 2015, Google Kubernetes Engine (GKE) has set the standard for what it means to provide a managed Kubernetes service that puts security, reliability, and ease of use first. This innovation is driven by feedback from all Kubernetes users and GKE customers. Thank you for trusting us to provide the platform that powers your business transformation. Today, on the first day of KubeCon EU, we want to update you on what’s new in GKE.

Your Kubernetes, your way with GKE

Move quickly and reliably with GKE release channels

As a Google Cloud customer, you have a wide range of requirements for how to use your clusters and when to upgrade them. GKE has always abstracted away the complexity of managing Kubernetes releases by automating the upgrade and delivery of new versions to your clusters. When you create a cluster on GKE, it is created with the default certified version in the GKE fleet, and you can use the auto-upgrade capability to keep your clusters up to date with bug fixes and security patches.

Starting this month in alpha, GKE will offer release channels, allowing you to upgrade your clusters in a way that fits your business. We’ll offer three channels: Rapid, Regular, and Stable, each with different version maturity and stability, so you can subscribe your cluster to an update stream that matches your risk tolerance and business requirements. We’re excited to introduce the first of these, the Rapid channel. You can subscribe your clusters to the Rapid channel starting now, and get early access to the latest Kubernetes version as it matures through the Regular and, finally, the Stable channel.

Try out Windows Server Containers in the Rapid channel

We heard you—being able to easily deploy Windows containers is critical for enterprises looking to modernize existing applications and move them towards cloud-native technology.
In Kubernetes 1.14, the upstream open-source community announced support for Windows nodes, and we’re pleased to offer Windows Server Containers in Kubernetes 1.14 on GKE. You’ll be able to experiment with Windows Server Containers and modernize your existing Windows applications from the new Rapid channel in June.

Be in the know with Stackdriver Kubernetes Engine Monitoring

Today, we’re excited to announce general availability of Stackdriver Kubernetes Engine Monitoring, a tool that gives you GKE observability (metrics, logs, events, and metadata) all in one place, to help provide faster time to resolution for issues, no matter the scale.

Stackdriver Kubernetes Engine Monitoring observability data (metrics, logs and traces) and events for infrastructure, workloads and services

Kubernetes Engine Monitoring gives you a comprehensive view into your Kubernetes environment, including infrastructure, application and service data, with speed and reliability. It comes pre-integrated with GKE, so you can use it to improve the reliability of the services running there from the get-go.

“Stackdriver Kubernetes Engine Monitoring gives us complete visibility into our GKE environment, is easy to use and fosters a meaningful conversation between SREs and Engineering on the reliability of our applications. We now have a full view across all of our clusters and can dig into further details as needed, allowing us to diagnose issues faster and keep our applications available to users.” – Pardeep Sandhu, Site Reliability Engineer, Schlumberger

Develop faster with Kubernetes apps on GKE

It’s easy to deploy integrated Kubernetes applications to GKE directly from the Google Cloud Platform (GCP) Marketplace, bringing an ecosystem of open-source and commercial applications to you in the cloud and on-prem in a simple, integrated way.
Kubernetes apps also support organizations in their application modernization journey, by helping them deploy containerized applications on-prem before moving them to the cloud. Get started today, and encourage your ISV partners to offer their solutions as Kubernetes apps on GCP Marketplace.

Use GKE Advanced for enhanced reliability, simplicity and scale

Finally, we recently introduced GKE Advanced, with features and tooling to help you operate in fast-moving environments, simplify the management of workloads and clusters, and scale hands-free. You still benefit from Kubernetes’ portability and third-party ecosystem, but with an enhanced, and evolving, feature set. At Google Cloud, our users are the inspiration for everything we do. Seeing how you use GKE to transform your business inspires us.

Transforming businesses with GKE

The momentum we see across a variety of industries and customer segments is a testament to the value of Kubernetes and GKE. On the occasion of the five-year anniversary of Kubernetes, we want to thank and celebrate some of our customers, who are transforming their businesses through GKE.

Leading the way to cloud-native apps

Whether you want to modernize your environment or build cloud-native applications from scratch, consider choosing GKE as your container management and orchestration foundation. To celebrate Kubernetes’ 5th birthday, we’re giving away a free month of learning through our new Coursera course, Architecting with GKE†. And if you’re at KubeCon, be sure to check out our talks, and stop by the booth to say hi!

† Offer valid until 30 September 2019, while supplies last.
Source: Google Cloud Platform

Got hybrid? Getting started with hybrid patterns and practices

Our solutions team here at Google Cloud is made up of solutions architects who are industry veterans and experts in cloud architecture and applications. Our goal is to help you put Google Cloud Platform (GCP) services together to solve your business needs and create the best solution for the infrastructure you’re building.

One topic we work on a lot is hybrid cloud. As we hear from many of our customers, you want to move some of your workloads to the cloud to create a hybrid setup, with some workloads on-premises and some on GCP. Where do you start? What do you have to think about? What does the topology look like? In this post, we’ll look at some of the solutions that we’ve published that can help you implement hybrid cloud topologies, starting from the very beginning of setting them up.

Hybrid cloud patterns and best practices

Implementing a cloud architecture that involves workloads that run on-premises, on GCP, and possibly on another cloud provider can be a bit challenging. Let’s start with an overview of the process and what that architecture might look like once you’ve implemented it. Our Hybrid and Multi-Cloud Patterns and Practices series addresses precisely the types of questions you’re probably asking. This series, written by one of our solutions architects, Johannes Passing, distills his decade-plus of experience with creating cloud-based architectures into solutions.

The series starts by walking through the preliminaries, like articulating your goals for using hybrid cloud. You’ll then see some of the options for moving workloads to the cloud, and which approach might best suit your goals. The discussion is copiously illustrated with diagrams that offer a high-level view of what a hybrid solution might look like. At each stage, you’ll see a list of the advantages of the various approaches to hybrid cloud and a concise list of best practices.
Everything in the documents is very much rooted in the author’s hands-on experience with designing these types of systems.

Authentication and single sign-on in hybrid cloud

Managing authentication and authorization in a hybrid environment generally means matching your existing, on-premises identity system with how it’s done in GCP. For example, you might already run Active Directory on-premises. How can you map your user identities to GCP identities so that your users don’t have to sign in separately to your on-premises services and to GCP? In a three-part series, Federating Google Cloud Platform with Active Directory, Johannes tackles the topic of integrating Active Directory with Cloud Identity using Google Cloud Directory Sync. This series discusses how to deal with various Active Directory topologies (such as single or multi-forest), and how to perform Windows-integrated authentication and single sign-on (SSO) for apps running on GCP.

“Rip and replace” with GKE

There’s another approach to moving systems to the cloud. If you’re modernizing a complex website into a refactored, container-based microservices platform on Google Kubernetes Engine (GKE), check out Migrating a monolithic application to microservices on GCP, from solutions architect and DevOps engineering veteran Théo Chamley.

As an example in this solution, Théo uses an e-commerce site. You’ll see how to perform the migration feature by feature, avoiding the risks of a large-scale, all-at-once migration. During the migration itself, the application has a hybrid architecture, where some features are in the cloud and some are still on-premises. After the migration is finished, the complete application is hosted in the cloud, but it still interacts with back-end services that remain on-premises.
In addition to describing the architecture of various steps in this migration, you’ll see how to take advantage of a variety of GCP services as part of the process, including Cloud Interconnect.

Wait, there’s more

Several other solutions architects have also been writing about hybrid architectures to share best practices and offer advice. Here are a few to check out:

In TCP optimization for network performance in GCP and hybrid scenarios, Kishor Aher explains how to tune network performance when moving workloads from on-premises to GCP. You’ll get a look at the details of TCP transmission so that you can understand why his recommendations can help reduce network latencies.

What if you want to communicate between GCP and another cloud without using public IP addresses? Etsuji Nakai’s solution Using APIs from an External Network shows how to use a private network on Amazon Virtual Private Cloud (Amazon VPC) to emulate an on-premises private network.

Check out all of our solutions here. And take a look through all the hybrid cloud sessions from Google Cloud Next ’19.
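A core idea behind the kind of TCP tuning discussed in that first solution is sizing socket buffers to the bandwidth-delay product (BDP) of the path, so the sender can keep a high-latency hybrid link full. The formula is standard, but the link figures below are illustrative numbers of our own, not values from the solution:

```python
# Bandwidth-delay product: the amount of data that can be "in flight" on a
# network path. If TCP buffers are smaller than this, throughput is capped
# at roughly buffer_size / RTT, regardless of the link's actual bandwidth.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> int:
    """Bandwidth-delay product of a path, in bytes."""
    return int(bandwidth_bps * rtt_seconds / 8)

def max_throughput_bps(buffer_bytes: int, rtt_seconds: float) -> float:
    """Throughput ceiling (bits/s) imposed by a fixed-size TCP buffer."""
    return buffer_bytes * 8 / rtt_seconds

# Example: a 1 Gbps interconnect with a 30 ms round-trip time between an
# on-premises data center and a GCP region (illustrative numbers).
bdp = bdp_bytes(1e9, 0.030)
print(f"BDP: {bdp / 1e6:.2f} MB")  # buffers should be at least this large

# With a common Linux default receive buffer of 212992 bytes, the same
# link would be limited to a small fraction of its bandwidth:
ceiling = max_throughput_bps(212992, 0.030)
print(f"Ceiling with default buffer: {ceiling / 1e6:.1f} Mbps")
```

For this example link the BDP works out to 3.75 MB, while the default-sized buffer caps throughput near 57 Mbps, which is why raising the kernel buffer limits is one of the first recommendations for hybrid paths with long round-trip times.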
Source: Google Cloud Platform