Using eBPF in Kubernetes

Introduction

Kubernetes provides a high-level API and a set of components that hide almost all of the intricate and—to some of us—interesting details of what happens at the systems level. Application developers are not required to have knowledge of the machines' iptables rules, cgroups, namespaces, seccomp, or, nowadays, even the container runtime that their application runs on top of. But underneath, Kubernetes and the technologies upon which it relies (for example, the container runtime) heavily leverage core Linux functionalities.

This article focuses on a core Linux functionality increasingly used in networking, security and auditing, and tracing and monitoring tools. This functionality is called extended Berkeley Packet Filter (eBPF).

Note: In this article we use both acronyms: eBPF and BPF. The former is used for the extended BPF functionality, and the latter for "classic" BPF functionality.

What is BPF?

BPF is a mini-VM residing in the Linux kernel that runs BPF programs. Before running, BPF programs are loaded with the bpf() syscall and are validated for safety: checking for loops, code size, etc. BPF programs are attached to kernel objects and executed when events happen on those objects; for example, when a network interface emits a packet.

BPF Superpowers

BPF programs are event-driven by definition: code is executed in the kernel whenever an event occurs on the object the program is attached to. This is an incredibly powerful concept; Netflix's Brendan Gregg refers to BPF as a Linux superpower.

The 'e' in eBPF

Traditionally, BPF could only be attached to sockets for socket filtering. BPF's first use case was in `tcpdump`. When you run `tcpdump`, the filter is compiled into a BPF program and attached to a raw `AF_PACKET` socket in order to print out filtered packets.

But over the years, eBPF added the ability to attach to other kernel objects.
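To make the tcpdump example concrete, here is a small sketch (standard-library Python only) of how such a classic BPF filter is encoded: each instruction is a fixed 8-byte `sock_filter` struct. The program below is the well-known "accept only IPv4" filter that `tcpdump -d ip` prints. Attaching it to a raw socket with `SO_ATTACH_FILTER` would additionally require `CAP_NET_RAW`, so this sketch only builds the program bytes.

```python
import struct

def ins(code, jt, jf, k):
    # struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
    return struct.pack("=HBBI", code, jt, jf, k)

# Classic BPF equivalent of the tcpdump filter "ip":
# load the EtherType halfword and accept the packet only if it is IPv4.
prog = b"".join([
    ins(0x28, 0, 0, 12),      # ldh [12]       ; A = EtherType
    ins(0x15, 0, 1, 0x0800),  # jeq #0x800     ; IPv4? fall through : skip
    ins(0x06, 0, 0, 0xFFFF),  # ret #65535     ; accept (copy up to 64k)
    ins(0x06, 0, 0, 0x0000),  # ret #0         ; drop
])

assert len(prog) == 4 * 8  # four 8-byte sock_filter instructions
```

This is exactly the bytecode layout the kernel expects; tcpdump generates an equivalent program from its filter expression and hands it to the kernel, which then runs it on every packet arriving on the socket.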
In addition to socket filtering, some supported attach points are:

Kprobes (and their userspace equivalents, uprobes)
Tracepoints
Network schedulers or qdiscs for classification or action (tc)
XDP (eXpress Data Path)

These and other newer features, like in-kernel helper functions and shared data structures (maps) that can be used to communicate with user space, extend BPF's capabilities.

Existing Use Cases of eBPF with Kubernetes

Several open-source Kubernetes tools already use eBPF, and many use cases warrant a closer look, especially in areas such as networking, monitoring and security tools.

Dynamic Network Control and Visibility with Cilium

Cilium is a networking project that makes heavy use of eBPF superpowers to route and filter network traffic for container-based systems. By using eBPF, Cilium can dynamically generate and apply rules—even at the device level with XDP—without making changes to the Linux kernel itself.

The Cilium Agent runs on each host. Instead of managing iptables, it translates network policy definitions to BPF programs that are loaded into the kernel and attached to a container's virtual ethernet device. These programs are executed—rules applied—on each packet that is sent or received.

This diagram shows how the Cilium project works:

Depending on what network rules are applied, BPF programs may be attached with tc or XDP. By using XDP, Cilium can attach the BPF programs at the lowest possible point, which is also the most performant point in the networking software stack.

If you'd like to learn more about how Cilium uses eBPF, take a look at the project's BPF and XDP reference guide.

Tracking TCP Connections in Weave Scope

Weave Scope is a tool for monitoring, visualizing and interacting with container-based systems. For our purposes, we'll focus on how Weave Scope gets the TCP connections.

Weave Scope employs an agent that runs on each node of a cluster. The agent monitors the system, generates a report and sends it to the app server.
The app server compiles the reports it receives and presents the results in the Weave Scope UI.

To accurately draw connections between containers, the agent attaches a BPF program to kprobes that track socket events: opening and closing connections. The BPF program, tcptracer-bpf, is compiled into an ELF object file and loaded using gobpf.

(As a side note, Weave Scope also has a plugin that makes use of eBPF: HTTP statistics.)

To learn more about how this works and why it's done this way, read this extensive post that the Kinvolk team wrote for the Weaveworks Blog. You can also watch a recent talk about the topic.

Limiting syscalls with seccomp-bpf

Linux has more than 300 system calls (read, write, open, close, etc.) available for use—or misuse. Most applications only need a small subset of syscalls to function properly. seccomp is a Linux security facility used to limit the set of syscalls that an application can use, thereby limiting potential misuse.

The original implementation of seccomp was highly restrictive. Once applied, if an application attempted to do anything beyond reading and writing to files it had already opened, seccomp sent a `SIGKILL` signal.

seccomp-bpf enables more complex filters and a wider range of actions. seccomp-bpf, also known as seccomp mode 2, allows for applying custom filters in the form of BPF programs. When the BPF program is loaded, the filter is applied to each syscall and the appropriate action is taken (Allow, Kill, Trap, etc.).

seccomp-bpf is widely used in Kubernetes tools and exposed in Kubernetes itself. For example, seccomp-bpf is used in Docker to apply custom seccomp security profiles, in rkt to apply seccomp isolators, and in Kubernetes itself in its Security Context.

But in all of these cases the use of BPF is hidden behind libseccomp. Behind the scenes, libseccomp generates BPF code from the rules provided to it.
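The filters libseccomp generates are sequences of classic BPF instructions that inspect the kernel's seccomp_data structure. As a sketch of what such a filter looks like, here is a hand-rolled "allow write(), kill everything else" program built in standard-library Python (the syscall number is the x86_64 value, and the program is only constructed, never installed, since installing it would constrain the running interpreter):

```python
import struct

# Classic BPF opcode values (stable kernel ABI, see <linux/bpf_common.h>).
BPF_LD_W_ABS = 0x20   # load 32-bit word at an absolute offset
BPF_JEQ_K    = 0x15   # jump if accumulator == constant
BPF_RET_K    = 0x06   # return constant

SECCOMP_RET_ALLOW = 0x7FFF0000
SECCOMP_RET_KILL  = 0x00000000
NR_WRITE = 1          # write() syscall number on x86_64

def ins(code, jt, jf, k):
    # struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
    return struct.pack("=HBBI", code, jt, jf, k)

# (A production filter would first check seccomp_data.arch as well.)
prog = b"".join([
    ins(BPF_LD_W_ABS, 0, 0, 0),              # A = seccomp_data.nr (offset 0)
    ins(BPF_JEQ_K, 0, 1, NR_WRITE),          # nr == write? fall through : skip
    ins(BPF_RET_K, 0, 0, SECCOMP_RET_ALLOW), # allow write()
    ins(BPF_RET_K, 0, 0, SECCOMP_RET_KILL),  # kill on anything else
])

# Installing it would be done in C roughly as:
#   prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
#   prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &sock_fprog);
```

libseccomp hides this encoding behind a rule-based API, but the result handed to the kernel has this shape.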
Once generated, the BPF program is loaded and the rules applied.

Potential Use Cases for eBPF with Kubernetes

eBPF is a relatively new Linux technology. As such, there are many uses that remain unexplored. eBPF itself is also evolving: new features are being added that will enable new use cases that aren't currently possible. In the following sections, we're going to look at some use cases that have only recently become possible and some that are on the horizon. Our hope is that these features will be leveraged by open source tooling.

Pod and container level network statistics

BPF socket filtering is nothing new, but BPF socket filtering per cgroup is. Introduced in Linux 4.10, cgroup-bpf allows attaching eBPF programs to cgroups. Once attached, the program is executed for all packets entering or exiting any process in the cgroup.

A cgroup is, amongst other things, a hierarchical grouping of processes. In Kubernetes, this grouping is found at the container level. One idea for making use of cgroup-bpf is to install BPF programs that collect detailed per-pod and/or per-container network statistics.

Generally, such statistics are collected by periodically checking the relevant files in Linux's `/sys` directory or by using Netlink. By using BPF programs attached to cgroups instead, we can get much more detailed statistics: for example, how many packets/bytes on TCP port 443, or how many packets/bytes from IP 10.2.3.4. In general, because BPF programs have a kernel context, they can safely and efficiently deliver more detailed information to user space.

To explore the idea, the Kinvolk team implemented a proof-of-concept: https://github.com/kinvolk/cgnet. This project attaches a BPF program to each cgroup and exports the information to Prometheus.

There are of course other interesting possibilities, like doing actual packet filtering.
But the obstacle currently standing in the way of this is cgroup v2 support—required by cgroup-bpf—in Docker and Kubernetes.

Application-applied LSM

Linux Security Modules (LSM) is a generic framework for security policies in the Linux kernel. SELinux and AppArmor are examples of modules built on it. Both implement rules at a system-global scope, placing the onus on the administrator to configure the security policies.

Landlock is another LSM under development that would co-exist with SELinux and AppArmor. An initial patchset has been submitted to the Linux kernel and is in an early stage of development. The main difference from other LSMs is that Landlock is designed to allow unprivileged applications to build their own sandboxes, effectively restricting themselves instead of relying on a global configuration. With Landlock, an application can load a BPF program and have it executed when the process performs a specific action. For example, when the application opens a file with the open() system call, the kernel will execute the BPF program, and, depending on what the BPF program returns, the action will be accepted or denied.

In some ways, it is similar to seccomp-bpf: using a BPF program, seccomp-bpf allows unprivileged processes to restrict what system calls they can perform. Landlock will be more powerful and provide more flexibility. Consider the following system call:

    fd = open("myfile.txt", O_RDWR);

The first argument is a `char *`, a pointer to a memory address, such as `0xab004718`. With seccomp, a BPF program only has access to the parameters of the syscall but cannot dereference the pointers, making it impossible to make security decisions based on a file. seccomp also uses classic BPF, meaning it cannot make use of eBPF maps, the mechanism for interfacing with user space.
This restriction means that with seccomp-bpf, security policies cannot be changed based on a configuration in an eBPF map.

BPF programs with Landlock don't receive the arguments of the syscalls but a reference to a kernel object. In the example above, this means the program will have a reference to the file, so it does not need to dereference a pointer, consider relative paths, or perform chroots.

Use Case: Landlock in Kubernetes-based serverless frameworks

In Kubernetes, the unit of deployment is a pod. Pods and containers are the main units of isolation. In serverless frameworks, however, the main unit of deployment is a function. Ideally, the unit of deployment equals the unit of isolation. This puts serverless frameworks like Kubeless or OpenFaaS into a predicament: optimize for the unit of isolation or the unit of deployment?

To achieve the best possible isolation, each function call would have to happen in its own container—but what's good for isolation is not always good for performance. Inversely, if we run function calls within the same container, we increase the likelihood of collisions.

By using Landlock, we could isolate function calls from each other within the same container, making a temporary file created by one function call inaccessible to the next function call, for example. Integration between Landlock and technologies like Kubernetes-based serverless frameworks would be a ripe area for further exploration.

Auditing kubectl-exec with eBPF

In Kubernetes 1.7 the audit proposal started making its way in. It's currently pre-stable, with plans to be stable in the 1.10 release. As the name implies, it allows administrators to log and audit events that take place in a Kubernetes cluster. While these events log Kubernetes events, they don't currently provide the level of visibility that some may require. For example, while we can see that someone has used `kubectl exec` to enter a container, we are not able to see what commands were executed in that session.
With eBPF one can attach a BPF program that would record any commands executed in the `kubectl exec` session and pass those commands to a user-space program that logs those events. We could then play that session back and know the exact sequence of events that took place.

Learn more about eBPF

If you're interested in learning more about eBPF, here are some resources:

A comprehensive reading list about eBPF for doing just that
BCC (BPF Compiler Collection), which provides tools for working with eBPF as well as many example tools making use of BCC
Some videos:
BPF: Tracing and More by Brendan Gregg
Cilium – Container Security and Networking Using BPF and XDP by Thomas Graf
Using BPF in Kubernetes by Alban Crequy

Conclusion

We are just starting to see the Linux superpowers of eBPF being put to use in Kubernetes tools and technologies. We will undoubtedly see increased use of eBPF. What we have highlighted here is just a taste of what you might expect in the future. What will be really exciting is seeing how these technologies will be used in ways that we have not yet thought about. Stay tuned!

The Kinvolk team will be hanging out at the Kinvolk booth at KubeCon in Austin. Come by to talk to us about all things Kubernetes, Linux, container runtimes and, yeah, eBPF.
Source: kubernetes

Introducing Kubeflow – A Composable, Portable, Scalable ML Stack Built for Kubernetes

Today's post is by David Aronchick and Jeremy Lewi, a PM and Engineer on the Kubeflow project, a new open source GitHub repo dedicated to making using machine learning (ML) stacks on Kubernetes easy, fast and extensible.

Kubernetes and Machine Learning

Kubernetes has quickly become the hybrid solution for deploying complicated workloads anywhere. While it started with just stateless services, customers have begun to move complex workloads to the platform, taking advantage of the rich APIs, reliability and performance provided by Kubernetes. One of the fastest growing use cases is to use Kubernetes as the deployment platform of choice for machine learning.

Building any production-ready machine learning system involves various components, often mixing vendors and hand-rolled solutions. Connecting and managing these services for even moderately sophisticated setups introduces huge barriers of complexity in adopting machine learning. Infrastructure engineers will often spend a significant amount of time manually tweaking deployments and hand-rolling solutions before a single model can be tested.

Worse, these deployments are so tied to the clusters they have been deployed to that the stacks are immobile, meaning that moving a model from a laptop to a highly scalable cloud cluster is effectively impossible without significant re-architecture. All these differences add up to wasted effort and create opportunities to introduce bugs at each transition.

Introducing Kubeflow

To address these concerns, we're announcing the creation of the Kubeflow project, a new open source GitHub repo dedicated to making using ML stacks on Kubernetes easy, fast and extensible. This repository contains:

JupyterHub to create and manage interactive Jupyter notebooks
A TensorFlow Custom Resource (CRD) that can be configured to use CPUs or GPUs, and adjusted to the size of a cluster with a single setting
A TF Serving container

Because this solution relies on Kubernetes, it runs wherever Kubernetes runs.
Just spin up a cluster and go!

Using Kubeflow

Let's suppose you are working with two different Kubernetes clusters: a local minikube cluster and a GKE cluster with GPUs, and that you have two kubectl contexts defined, named minikube and gke.

First we need to initialize our ksonnet application and install the Kubeflow packages. (To use ksonnet, you must first install it on your operating system; the instructions for doing so are here.)

     ks init my-kubeflow
     cd my-kubeflow
     ks registry add kubeflow github.com/google/kubeflow/tree/master/kubeflow
     ks pkg install kubeflow/core
     ks pkg install kubeflow/tf-serving
     ks pkg install kubeflow/tf-job
     ks generate core kubeflow-core --name=kubeflow-core

We can now define environments corresponding to our two clusters.

     kubectl config use-context minikube
     ks env add minikube
     kubectl config use-context gke
     ks env add gke

And we're done! Now just create the environments on your cluster. First, on minikube:

     ks apply minikube -c kubeflow-core

And to create it on our multi-node GKE cluster for quicker training:

     ks apply gke -c kubeflow-core

By making it easy to deploy the same rich ML stack everywhere, the drift and rewriting between these environments is kept to a minimum.

To access either deployment, you can execute the following command:

     kubectl port-forward tf-hub-0 8100:8000

and then open up http://127.0.0.1:8100 to access JupyterHub. To change the environment used by kubectl, use either of these commands:

     # To access minikube
     kubectl config use-context minikube
     # To access GKE
     kubectl config use-context gke

When you execute apply, you are launching on K8s:

JupyterHub for launching and managing Jupyter notebooks on K8s
A TF CRD

Let's suppose you want to submit a training job. Kubeflow provides ksonnet prototypes that make it easy to define components.
The tf-job prototype makes it easy to create a job for your code, but for this example, we'll use the tf-cnn prototype, which runs TensorFlow's CNN benchmark.

To submit a training job, you first generate a new job from a prototype:

     ks generate tf-cnn cnn --name=cnn

By default the tf-cnn prototype uses 1 worker and no GPUs, which is perfect for our minikube cluster, so we can just submit it.

     ks apply minikube -c cnn

On GKE, we'll want to tweak the prototype to take advantage of the multiple nodes and GPUs. First, let's list all the parameters available:

     # To see a list of parameters
     ks prototype list tf-job

Now let's adjust the parameters to take advantage of GPUs and access to multiple nodes.

     ks param set --env=gke cnn num_gpus 1
     ks param set --env=gke cnn num_workers 1
     ks apply gke -c cnn

Note how we set those parameters so they are used only when you deploy to GKE. Your minikube parameters are unchanged!

After training, you export your model to a serving location. Kubeflow also includes a serving package. In a separate example, we trained a standard Inception model and stored the trained model in a bucket we created called 'gs://kubeflow-models' at the path '/inception'.

To deploy the trained model for serving, execute the following:

     ks generate tf-serving inception --name=inception \
       --namespace=default --model_path=gs://kubeflow-models/inception
     ks apply gke -c inception

This highlights one more option in Kubeflow: the ability to pass in inputs based on your deployment. This command creates a tf-serving service on the GKE cluster and makes it available to your application.

For more information about deploying and monitoring TensorFlow training jobs and TensorFlow models, please refer to the user guide.

Kubeflow + ksonnet

One choice we want to call out is the use of the ksonnet project. We think working with multiple environments (dev, test, prod) will be the norm for most Kubeflow users.
By making environments a first-class concept, ksonnet makes it easy for Kubeflow users to move their workloads between their different environments.

Particularly now that Helm is integrating ksonnet with the next version of their platform, we felt it was the perfect choice for us. More information about ksonnet can be found in the ksonnet docs.

We also want to thank the team at Heptio for expediting features critical to Kubeflow's use of ksonnet.

What's Next?

We are in the midst of building out a community effort right now, and we would love your help! We've already been collaborating with many teams: CaiCloud, Red Hat & OpenShift, Canonical, Weaveworks, Container Solutions and many others. CoreOS, for example, is already seeing the promise of Kubeflow:

"The Kubeflow project was a needed advancement to make it significantly easier to set up and productionize machine learning workloads on Kubernetes, and we anticipate that it will greatly expand the opportunity for even more enterprises to embrace the platform. We look forward to working with the project members in providing tight integration of Kubeflow with Tectonic, the enterprise Kubernetes platform." — Reza Shafii, VP of product, CoreOS

If you'd like to try out Kubeflow right now in your browser, we've partnered with Katacoda to make it super easy. You can try it here!

And we're just getting started! We would love for you to help. How, you might ask? Well…

Please join the slack channel
Please join the kubeflow-discuss email list
Please subscribe to the Kubeflow twitter account
Please download and run Kubeflow, and submit bugs!

Thank you for your support so far; we could not be more excited!

Jeremy Lewi & David Aronchick
Google
Source: kubernetes

Kubernetes 1.9: Apps Workloads GA and Expanded Ecosystem

We're pleased to announce the delivery of Kubernetes 1.9, our fourth and final release this year.

Today's release continues the evolution of an increasingly rich feature set, more robust stability, and even greater community contributions. As the fourth release of the year, it gives us an opportunity to look back at the progress made in key areas. Particularly notable is the advancement of the Apps Workloads API to stable. This removes any reservations potential adopters might have had about the functional stability required to run mission-critical workloads. Another big milestone is the beta release of Windows support, which opens the door for many Windows-specific applications and workloads to run in Kubernetes, significantly expanding the implementation scenarios and enterprise readiness of Kubernetes.

Workloads API GA

We're excited to announce General Availability (GA) of the apps/v1 Workloads API, which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Note that the Batch Workloads API (Job and CronJob) is not part of this effort and will have a separate path to GA stability.

Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback. SIG Apps has applied the lessons from this process to all four resource kinds over the last several release cycles, enabling DaemonSet and StatefulSet to join this graduation. The v1 (GA) designation indicates production hardening and readiness, and comes with the guarantee of long-term backwards compatibility.

Windows Support (beta)

Kubernetes was originally developed for Linux systems, but as our users are realizing the benefits of container orchestration at scale, we are seeing demand for Kubernetes to run Windows workloads.
Work to support Windows Server in Kubernetes began in earnest about 12 months ago. SIG-Windows has now promoted this feature to beta status, which means that we can evaluate it for usage.

Storage Enhancements

From the first release, Kubernetes has supported multiple options for persistent data storage, including commonly-used NFS or iSCSI, along with native support for storage solutions from the major public and private cloud providers. As the project and ecosystem grow, more and more storage options have become available for Kubernetes. Adding volume plugins for new storage systems, however, has been a challenge.

Container Storage Interface (CSI) is a cross-industry standards initiative that aims to lower the barrier for cloud native storage development and ensure compatibility. SIG-Storage and the CSI Community are collaborating to deliver a single interface for provisioning, attaching, and mounting storage compatible with Kubernetes.

Kubernetes 1.9 introduces an alpha implementation of CSI, which will make installing new volume plugins as easy as deploying a pod, and enable third-party storage providers to develop their solutions without the need to add to the core Kubernetes codebase.

Because the feature is alpha in 1.9, it must be explicitly enabled and is not recommended for production usage, but it indicates the roadmap working toward a more extensible and standards-based Kubernetes storage ecosystem.

Additional Features

Custom Resource Definition (CRD) Validation, now graduating to beta and enabled by default, helps CRD authors give clear and immediate feedback for invalid objects
SIG Node hardware accelerator moves to alpha, enabling GPUs and consequently machine learning and other high performance workloads
CoreDNS alpha makes it possible to install CoreDNS with standard tools
IPVS mode for kube-proxy goes beta, providing better scalability and performance for large clusters

Each Special Interest Group (SIG) in the community
continues to deliver the most requested user features for their area. For a complete list, please visit the release notes.

Availability

Kubernetes 1.9 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials.

Release team

This release is made possible through the effort of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Anthony Yeh, Software Engineer at Google. The 14 individuals on the release team coordinate many aspects of the release, from documentation to testing, validation, and feature completeness.

As the Kubernetes community has grown, our release process has become an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code, creating a more vibrant ecosystem.

Project Velocity

The CNCF has embarked on an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors. Open issues remained relatively stable over the course of the release, while forks rose approximately 20%, as did individuals starring the various project repositories. Approver volume has risen slightly since the last release, but a lull is commonplace during the last quarter of the year. With 75,000+ comments, Kubernetes remains one of the most actively discussed projects on GitHub.

User highlights

According to the latest survey conducted by CNCF, 61 percent of organizations are evaluating Kubernetes and 83 percent are using it in production. Examples of user stories from the community include:

BlaBlaCar, the world's largest long distance carpooling community, connects 40 million members across 22 countries.
The company has about 3,000 pods, with 1,200 of them running on Kubernetes, leading to improved website availability for customers.

Pokémon GO, the popular free-to-play, location-based augmented reality game developed by Niantic for iOS and Android devices, has its application logic running on Google Container Engine powered by Kubernetes. This was the largest Kubernetes deployment ever on Google Container Engine.

Is Kubernetes helping your team? Share your story with the community.

Ecosystem updates

Announced on November 13, the Certified Kubernetes Conformance Program ensures that Certified Kubernetes™ products deliver consistency and portability. Thirty-two Certified Kubernetes Distributions and Platforms are now available. Development of the certification program involved close collaboration between CNCF and the rest of the Kubernetes community, especially the Testing and Architecture Special Interest Groups (SIGs). The Kubernetes Architecture SIG is the final arbiter of the definition of API conformance for the program. The program also includes strong guarantees that commercial providers of Kubernetes will continue to release new versions to ensure that customers can take advantage of the rapid pace of ongoing development.

CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.

KubeCon

For recorded sessions from the largest Kubernetes gathering, KubeCon + CloudNativeCon in Austin from December 6-8, 2017, visit YouTube/CNCF. The premiere Kubernetes event will be back May 2-4, 2018 in Copenhagen and will feature technical sessions, case studies, developer deep dives, salons and more! CFP closes January 12, 2018.

Webinar

Join members of the Kubernetes 1.9 release team on January 9th from 10am-11am PT to learn about the major features in this release as they demo some of the highlights in the areas of Windows and Docker support, storage, admission control, and the workloads API.
Register here.

Get involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you'd like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for the latest updates
Chat with the community on Slack
Share your Kubernetes story
Source: kubernetes

Top 5 blogs of 2017: LinuxKit, A Toolkit for building Secure, Lean and Portable Linux Subsystems

In case you've missed it, this week we're highlighting the top five most popular Docker blogs of 2017. Coming in third place is the announcement of LinuxKit, a toolkit for building secure, lean and portable Linux subsystems.

LinuxKit includes the tooling to allow building custom Linux subsystems that only include exactly the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. All components can be substituted with ones that match specific needs. It is a kit, very much in the Docker philosophy of batteries included but swappable. LinuxKit is an open source project available at https://github.com/linuxkit/linuxkit.
To achieve our goals of a secure, lean and portable OS, we built it from containers, for containers. Security is a top-level objective and aligns with NIST, which states in its draft Application Container Security Guide: “Use container-specific OSes instead of general-purpose ones to reduce attack surfaces. When using a container-specific OS, attack surfaces are typically much smaller than they would be with a general-purpose OS, so there are fewer opportunities to attack and compromise a container-specific OS.”
The leanness directly helps with security by removing parts not needed when the OS is designed around the single use case of running containers. Because LinuxKit is container-native, it has a very small size (35MB) and a very fast boot time. All system services are containers, which means that everything can be removed or replaced.
System services are sandboxed in containers, with only the privileges they need. The configuration is designed for the container use case. The whole system is built to be used as immutable infrastructure, so it can be built and tested in your CI pipeline, deployed, and new versions are redeployed when you wish to upgrade.
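Concretely, a LinuxKit OS image is described by a small YAML file that lists the kernel and each containerized service. A minimal sketch might look like the following (the image names are real LinuxKit component images, but the tags here are illustrative placeholders rather than pinned releases):

```yaml
kernel:
  image: linuxkit/kernel:4.9.x      # minimal kernel, tag is illustrative
  cmdline: "console=tty0"
init:                               # the base init system, containerd, runc
  - linuxkit/init:latest
  - linuxkit/runc:latest
onboot:                             # one-shot containers run in order at boot
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
    command: ["/sbin/dhcpcd", "--nobackground"]
services:                           # long-running, sandboxed system services
  - name: getty
    image: linuxkit/getty:latest
```

Because every component is just an image reference, swapping a service means editing one line and rebuilding, which is what makes the immutable, CI-built workflow described above practical.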
The kernel comes from our collaboration with the Linux kernel community, participating in the process and work with groups such as the Kernel Self Protection Project (KSPP), while shipping recent kernels with only the minimal patches needed to fix issues with the platforms LinuxKit supports. The kernel security process is too big for a single company to try to develop on their own therefore a broad industry collaboration is necessary.
In addition LinuxKit provides a space to incubate security projects that show promise for improving Linux security. We are working with external open source projects such as Wireguard, Landlock, Mirage, oKernel, Clear Containers and more to provide a testbed and focus for innovation in the container space, and a route to production.
LinuxKit is portable: it was built for the many platforms Docker runs on now, and with a view to making it run on far more, whether they are large or small machines, bare metal or virtualized, mainframes or the kind of devices used in Internet of Things scenarios, as containers reach into every area of computing.
Learn More about Linuxkit:

Check out the LinuxKit repository on GitHub
Watch the LinuxKit video from the last Moby Summit to learn more about the latest features, updates and use cases from the community
Read the Announcement


The post Top 5 blogs of 2017: LinuxKit, A Toolkit for building Secure, Lean and Portable Linux Subsystems appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Top 5 blogs of 2017: Spring Boot Development with Docker

We’ve rounded up the top five most popular Docker blogs of 2017. Coming in at number four is Spring Boot Development With Docker, part of a multi-part tutorial series.

The AtSea Shop is an example storefront application that can be deployed on different operating systems and can be customized to both your enterprise development and operational environments. In my last post, I discussed the architecture of the app. In this post, I will cover how to setup your development environment to debug the Java REST backend that runs in a container.
Building the REST Application
I used the Spring Boot framework to rapidly develop the REST backend that manages the products, customers and orders tables used in the AtSea Shop. The application takes advantage of Spring Boot’s built-in application server, support for REST interfaces and ability to define multiple data sources. Because it was written in Java, it is agnostic to the base operating system and runs in either Windows or Linux containers. This allows developers to build against a heterogeneous architecture.
Project setup
The AtSea project uses multi-stage builds, a new Docker feature, which allows me to use multiple images to build a single Docker image that includes all the components needed for the application. The multi-stage build uses a Maven container to build the application jar file. The jar file is then copied to a Java Development Kit image. This makes for a more compact and efficient image because Maven is not included with the application. Similarly, the React store front client is built in a Node image and the compiled application is also added to the final application image.
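As an illustration of the multi-stage flow described above, here is a minimal Dockerfile sketch. The image tags, stage names and paths are assumptions for illustration, not the actual AtSea Dockerfile:

```dockerfile
# Stage 1: build the Spring Boot jar with Maven (illustrative image and paths)
FROM maven:3.5-jdk-8 AS maven-build
WORKDIR /usr/src/atsea
COPY pom.xml .
COPY src ./src
RUN mvn -B package -DskipTests

# Stage 2: build the React store front with Node
FROM node:8 AS node-build
WORKDIR /usr/src/atsea/react-app
COPY react-app .
RUN npm install && npm run build

# Final stage: only the JDK, the jar and the static assets ship in the image;
# neither Maven nor Node is included.
FROM openjdk:8-jdk-alpine
WORKDIR /app
COPY --from=maven-build /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
COPY --from=node-build /usr/src/atsea/react-app/build ./static
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
```

Each `COPY --from` pulls artifacts out of an earlier stage, so the build images are discarded and only the final stage becomes the application image.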
I used Eclipse to write the AtSea app. If you want info on configuring IntelliJ or NetBeans for remote debugging, you can check out the Docker Labs repository. You can also check out the code in the AtSea app GitHub repository.
I built the application by cloning the repository and importing the project into Eclipse, setting the Root Directory to the project and clicking Finish:
    File > Import > Maven > Existing Maven Projects 
Since I was using Spring Boot, I took advantage of spring-boot-devtools to do remote debugging in the application. I had to add the Spring Boot devtools dependency to the pom.xml file.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>
Note that developer tools are automatically disabled when the application is fully packaged as a jar. To ensure that devtools are available during development, I set the <excludeDevtools> configuration to false in the spring-boot-maven build plugin:
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <excludeDevtools>false</excludeDevtools>
            </configuration>
        </plugin>
    </plugins>
</build>
This example uses a Docker Compose file that creates a simplified build of the containers specifically needed for development and debugging.
version: "3.1"

services:
  database:
    build:
      context: ./database
    image: atsea_db
    environment:
      POSTGRES_USER: gordonuser
      POSTGRES_DB: atsea
    ports:
      - "5432:5432"
    networks:
      - back-tier
    secrets:
      - postgres_password

  appserver:
    build:
      context: .
      dockerfile: app/Dockerfile-dev
    image: atsea_app
    ports:
      - "8080:8080"
      - "5005:5005"
    networks:
      - front-tier
      - back-tier
    secrets:
      - postgres_password

secrets:
  postgres_password:
    file: ./devsecrets/postgres_password

networks:
  front-tier:
  back-tier:
  payment:
    driver: overlay
The Compose file uses secrets to provision passwords and other sensitive information such as certificates, without relying on environment variables. Although the example uses PostgreSQL, the application can use secrets to connect to any database defined as a Spring Boot datasource. From JpaConfiguration.java:
public DataSourceProperties dataSourceProperties() {
    DataSourceProperties dataSourceProperties = new DataSourceProperties();

    // Set password to connect to database using Docker secrets.
    try (BufferedReader br = new BufferedReader(new FileReader("/run/secrets/postgres_password"))) {
        StringBuilder sb = new StringBuilder();
        String line = br.readLine();
        while (line != null) {
            sb.append(line);
            sb.append(System.lineSeparator());
            line = br.readLine();
        }
        dataSourceProperties.setDataPassword(sb.toString());
    } catch (IOException e) {
        System.err.println("Could not successfully load DB password file");
    }
    return dataSourceProperties;
}
Also note that the appserver opens port 5005 for remote debugging, and that the build uses the Dockerfile-dev file to build a container that has remote debugging turned on. This is set in the ENTRYPOINT, which specifies the transport and address for the debugger.
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
Remote Debugging
To start remote debugging on the application, run compose using the docker-compose-dev.yml file.
docker-compose -f docker-compose-dev.yml up --build
Docker will build the images and start the AtSea Shop database and appserver containers. However, the application will not fully load until Eclipse’s remote debugger attaches to the application. To start remote debugging, click Run > Debug Configurations…
Select Remote Java Application then press the new button to create a configuration. In the Debug Configurations panel, you give the configuration a name, select the AtSea project and set the connection properties for host and the port to 5005. Click Apply > Debug.  

The appserver will start up.
appserver_1 | 2017-05-09 03:22:23.095  INFO 1 --- [main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
appserver_1 | 2017-05-09 03:22:23.118  INFO 1 --- [main] com.docker.atsea.AtSeaApp                : Started AtSeaApp in 38.923 seconds (JVM running for 109.984)
To test remote debugging set a breakpoint on ProductController.java where it returns a list of products.

You can test it using curl or your preferred tool for making HTTP requests:
curl -H "Content-Type: application/json" -X GET http://localhost:8080/api/product/
Eclipse will switch to the debug perspective where you can step through the code.

The AtSea Shop example shows how easy it is to use containers as part of your normal development environment, using tools that you and your team are familiar with. Download the application to try out developing with containers, or use it as a basis for your own Spring Boot REST application.
Interested in more? Check out these developer resources and videos from DockerCon 2017.

AtSea Shop demo
Docker Reference Architecture: Development Pipeline Best Practices Using Docker EE
Docker Labs

Developer Tools
Java development using Docker

DockerCon videos

Docker for Java Developers
The Rise of Cloud Development with Docker & Eclipse Che
All the New Goodness of Docker Compose
Docker for Devs


The post Top 5 blogs of 2017: Spring Boot Development with Docker appeared first on Docker Blog.

Top 5 Blogs of 2017: Docker Platform and Moby Project add Kubernetes

As we count down the final days of 2017, we would like to bring you the final installment of the top 5 blogs of 2017. On day 5, we take a look back at DockerCon EU, when we announced Kubernetes support in the Docker platform. This blog takes an in-depth look at the industry-leading container platform and the addition of Kubernetes.

The Docker platform is integrating support for Kubernetes so that Docker customers and developers have the option to use both Kubernetes and Swarm to orchestrate container workloads. Register for beta access and check out the detailed blog posts to learn how we’re bringing Kubernetes to:

Docker Enterprise Edition
Docker Community Edition on the desktop with Docker for Mac and Windows
The Moby Project

Docker is a platform that sits between apps and infrastructure. By building apps on Docker, developers and IT operations get freedom and flexibility. That’s because Docker runs everywhere that enterprises deploy apps: on-prem (including on IBM mainframes, enterprise Linux and Windows) and in the cloud. Once an application is containerized, it’s easy to re-build, re-deploy and move around, or even run in hybrid setups that straddle on-prem and cloud infrastructure.
The Docker platform is composed of many components, assembled in four layers:

containerd is an industry-standard container runtime implementing the OCI standards
Swarm orchestration that transforms a group of nodes into a distributed system
Docker Community Edition providing developers a simple workflow to build and ship container applications, with features like application composition, image build and management
Docker Enterprise Edition, to manage an end to end secure software supply chain and run containers in production

These four layers are assembled from upstream components that are part of the open source Moby Project.
Docker’s design philosophy has always been about providing choice and flexibility. This is important for customers that are integrating Docker with existing IT systems, and that’s why Docker is built to work well with already-deployed networking, logging, storage, load balancers and CI/CD systems. For all of these (and more), Docker relies on industry-standard protocols or published and documented interfaces. And for all of these, Docker Enterprise Edition ships with sensible defaults, but those defaults can be swapped for certified third party options for customers that have existing systems or prefer an alternative solution.
In 2016, Docker added orchestration to the platform, powered by the SwarmKit project. In the past year, we’ve received lots of positive feedback on Swarm: it’s easy to set up, is scalable and is secure out-of-the-box.
We’ve also gotten feedback that some users really like the integrated Docker platform with end-to-end container management, but that they want to use other orchestrators, like Kubernetes, for container scheduling. Either because they’ve already designed services to work on Kubernetes or because Kubernetes has particular features they’re looking for. This is why we are adding Kubernetes support as an orchestration option (alongside Swarm) in both Docker Enterprise Edition, and in Docker for Mac and Windows.

We’re also working on innovative components that make it easier for Docker users to deploy Docker apps natively with Kubernetes orchestration. For example, by using Kubernetes extension mechanisms like Custom Resources and the API server aggregation layer, the coming version of Docker with Kubernetes support will allow users to deploy their Docker Compose apps as Kubernetes-native Pods and Services.
With the next version of the Docker platform, developers can build and test apps destined for production directly on Kubernetes, on their workstation. And ops can get all the benefits of Docker Enterprise Edition – secure multi-tenancy, image scanning and role-based access control – while running apps in production orchestrated with either Kubernetes or Swarm.
The Kubernetes version that we’re incorporating into Docker will be the vanilla Kubernetes that everyone is familiar with, direct from the CNCF.  It won’t be a fork, nor an outdated version, nor wrapped or limited in any way.
Through the Moby Project, Docker has been working to adopt and contribute to Kubernetes over the last year. We’ve been working on containerd (now 1.0)  and cri-containerd for the container runtime, on InfraKit for creating and managing Kubernetes installs, and on libnetwork for overlay networking. See the Moby Project blog post for more examples and details.
Docker and Kubernetes share much lineage, are written using the same programming language and have overlapping components, contributors and ideals. We at Docker are excited to have Kubernetes support in our products and into the open source projects we work on. And we can’t wait to work with the Kubernetes community to make containers and container-orchestration ever more powerful and easier to use.
While we’re adding Kubernetes as an orchestration option in Docker, we remain committed to Swarm and our customers and users that rely on Swarm and Docker for running critical apps at scale in production. To learn more about how Docker is integrating Kubernetes, check out the sessions “What’s New in Docker” and “Gordon’s Secret Session” at DockerCon EU.
Where to go from here?

Sign up for the Kubernetes for Docker beta
Docker Enterprise Edition with Kubernetes
Community Edition for Mac and Windows with Kubernetes
Moby and Kubernetes


The post Top 5 Blogs of 2017: Docker Platform and Moby Project add Kubernetes appeared first on Docker Blog.

Using Docker to Scale Operational Intelligence at Splunk

Splunk wants to make machine data accessible, usable and valuable to everyone. With over 14,000 customers in 110 countries, providing the best software for visualizing machine data involves hours and hours of testing against multiple supported platforms and various configurations. For Mike Dickey, Sr. Director in charge of engineering infrastructure at Splunk, the challenge was that 13 different engineering teams in California and Shanghai had contributed to test infrastructure sprawl, with hundreds of different projects and plans that were all being managed manually.
At DockerCon Europe, Mike and Harish Jayakumar, Docker Solutions Engineer, shared how Splunk leveraged Docker Enterprise Edition (Docker EE) to dramatically improve build and deployment times on their test infrastructure, converge on a unified Continuous Integration (CI) workflow, and how they’ve now grown to 600 bare-metal servers deploying tens of thousands of Docker containers per day.
You can watch the entire session here:

Hitting the Limits of Manual Test Configurations
As Splunk has grown, so has their customers’ use of their software. Many Splunk customers now process petabytes of data, and that has forced Splunk to scale their testing to match. That means more infrastructure needs to be reserved in the shared test environment for these large-scale tests. Besides running out of data center capacity, reserving test infrastructure was being managed manually through a Wiki page – a process with obvious limitations.

At the time Mike was leading the Performance Engineering team, and they had started working with Docker containers. Seeing near-bare metal performance for containerized applications, Splunk began to test Docker in smaller proof-of-concept projects and saw that it could be effective for performance testing. They saw the ability to leverage Docker as the foundation for a unified test and CI platform.
Building an App Development Platform with Docker EE

Splunk chose Docker EE to power their test and CI platform for a few key reasons:

Windows and Linux support: Splunk software runs on both Linux and Windows, so they wanted a single solution that could support both.
Role-Based Access Control: As the environment is a shared resource between multiple teams, Splunk needed a way to integrate with Active Directory and assign resources by roles.
Consistent Dev Experience: With most developers already using Docker on their desktops, Splunk wanted to maintain a consistent experience with support for Docker APIs and the use of Compose files.
Vendor to Partner With: Given the scale of this project, Splunk wanted to work with a vendor who would be their partner. A bonus was that our offices were only a few blocks apart.

Results and What’s Next
Today, Docker EE powers Splunk’s CI and test platforms. As part of the CI solution, Splunk is leveraging Docker to create an agentless Jenkins workflow where each build stage is replaced by a container. This delivers a more consistent and scalable experience (2000 concurrent jobs today vs. 200 per master with standard agents) that is much more efficient as well. For performance testing, teams can reserve an entire host to get accurate performance results. These can be dynamically provisioned for different configurations in minutes instead of days.
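The session does not show Splunk’s actual pipeline definition, but a per-stage-container (“agentless”) workflow like the one described can be sketched with a Jenkins declarative pipeline using its `docker` agent support. The stage names and images below are hypothetical:

```groovy
// Hypothetical Jenkinsfile sketch: no static build agents are provisioned;
// each stage spins up its own container and discards it afterwards.
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'maven:3.5-jdk-8' } }
            steps { sh 'mvn -B package -DskipTests' }
        }
        stage('Test') {
            agent { docker { image 'maven:3.5-jdk-8' } }
            steps { sh 'mvn -B test' }
        }
    }
}
```

Because each stage is just a container, concurrency is limited by cluster capacity rather than by the number of pre-configured agents, which is consistent with the jump from 200 jobs per master to 2000 concurrent jobs described above.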

At Splunk, the Docker EE environment has grown from 150 servers to now 600 servers, starting with one team of developers to now 385 unique developers who deploy between 10,000 and 20,000 containers a day. In addition to the fast deployment times, Splunk is seeing more efficient use of the hardware than before, averaging 75% utilization of the available capacity. With the platform in place, the developers at Splunk have a simple and fast way to provision and execute tests. As a result, Splunk has seen an increase in testing frequency, which is helping to improve product quality.


To learn more about Docker EE, check out the following resources:

Learn more about Docker EE
Try Docker EE for yourself
Contact Sales for more information

The post Using Docker to Scale Operational Intelligence at Splunk appeared first on Docker Blog.

5 tips to learn Docker in 2018

As the holiday season ends, many of us are making New Year’s resolutions for 2018. Now is a great time to think about the new skills or technologies you’d like to learn. So much can change each year as technology progresses and companies are looking to innovate or modernize their legacy applications or infrastructure. At the same time the market for Docker jobs continues to grow as companies such as Visa, MetLife and Splunk adopt Docker Enterprise Edition ( EE) in production. So how about learning Docker in 2018 ? Here are a few tips to help you along the way.
 

1. Play With Docker: the Docker Playground and Training site
 
Play with Docker (PWD) is a Docker playground and training site which allows users to run Docker commands in a matter of seconds. It gives the experience of having a free Linux virtual machine in the browser, where you can build and run Docker containers and even create clusters. Check out this video from DockerCon 2017 to learn more about this project. The training site is composed of a large set of Docker labs and quizzes, from beginner to advanced level, available for both developers and IT pros at training.play-with-docker.com.

 
2. DockerCon 2018
 
In case you missed it, DockerCon 2018 will take place at Moscone Center, San Francisco, CA on June 13-15, 2018. DockerCon is where the container community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate, and advanced users who are all looking to level up their skills and go home inspired. With 2 full days of training, more than 100 sessions, free workshops and hands-on labs, and the wealth of experience brought by each attendee, DockerCon is the place to be if you’re looking to learn Docker in 2018.

 
3. Docker Meetups
 
Look at our Docker Meetup Chapters page to see if there is a Docker user group in your city. With more than 200 local chapters in 81 countries, you should be able to find one near you! Attending local Docker meetups is an excellent way to learn Docker. The community leaders who run the user groups often schedule Docker 101 talks and hands-on training for newcomers!
Can’t find a chapter near you? Join the Docker Online meetup group to attend meetups remotely!

 
4. Docker Captains
 
Captains are Docker experts that are leaders in their communities, organizations or ecosystems. As Docker advocates, they are committed to sharing their knowledge and do so every chance they get! Captains are advisors, ambassadors, coders, contributors, creators, tool builders, speakers, mentors, maintainers and super users, and are required to be active stewards of Docker in order to remain in the program.
Follow all of the Captains on Twitter. Also check out the Captains GitHub repo to see what projects they have been working on. Docker Captains are eager to bring their technical expertise to new audiences both offline and online around the world – don’t hesitate to reach out to them via the social links on their Captain profile pages. You can filter the Captains by location, expertise, and more.

5. Training and Certification
 
The new Docker Certified Associate (DCA) certification, launched at DockerCon Europe on October 16, 2017, serves as a foundational benchmark for real-world container technology expertise with Docker Enterprise Edition. In today’s job market, container technology skills are highly sought after, and this certification sets the bar for well-qualified professionals. The professionals who earn the certification will set themselves apart as uniquely qualified to run enterprise workloads at scale with Docker Enterprise Edition, and will be able to display the certification logo on resumes and social media profiles. Want to be as prepared as you can be? Check out our study guide with sample questions and exam preparation tips before you schedule your exam.
 

 


The post 5 tips to learn Docker in 2018 appeared first on Docker Blog.

Using RBAC, Generally Available in Kubernetes v1.8

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.8. Today’s post comes from Eric Chiang, software engineer, CoreOS, and SIG-Auth co-lead.
Kubernetes 1.8 represents a significant milestone for the role-based access control (RBAC) authorizer, which was promoted to GA in this release. RBAC is a mechanism for controlling access to the Kubernetes API, and since its beta in 1.6, many Kubernetes clusters and provisioning strategies have enabled it by default.
Going forward, we expect to see RBAC become a fundamental building block for securing Kubernetes clusters. This post explores using RBAC to manage user and application access to the Kubernetes API.
Granting access to users
RBAC is configured using standard Kubernetes resources. Users can be bound to a set of roles (ClusterRoles and Roles) through bindings (ClusterRoleBindings and RoleBindings). Users start with no permissions and must explicitly be granted access by an administrator.
All Kubernetes clusters install a default set of ClusterRoles, representing common buckets users can be placed in. The “edit” role lets users perform basic actions like deploying pods; “view” lets a user observe non-sensitive resources; “admin” allows a user to administer a namespace; and “cluster-admin” grants access to administer a cluster.

$ kubectl get clusterroles
NAME            AGE
admin           40m
cluster-admin   40m
edit            40m
# ...
view            40m

ClusterRoleBindings grant a user, group, or service account a ClusterRole’s power across the entire cluster.
Using kubectl, we can let a sample user “jane” perform basic actions in all namespaces by binding her to the “edit” ClusterRole:

$ kubectl create clusterrolebinding jane --clusterrole=edit --user=jane
$ kubectl get namespaces --as=jane
NAME          STATUS    AGE
default       Active    43m
kube-public   Active    43m
kube-system   Active    43m
$ kubectl auth can-i create deployments --namespace=dev --as=jane
yes

RoleBindings grant a ClusterRole’s power within a namespace, allowing administrators to manage a central list of ClusterRoles that are reused throughout the cluster. For example, as new resources are added to Kubernetes, the default ClusterRoles are updated to automatically grant the correct permissions to RoleBinding subjects within their namespace.
Next we’ll let the group “infra” modify resources in the “dev” namespace:

$ kubectl create rolebinding infra --clusterrole=edit --group=infra --namespace=dev
rolebinding "infra" created

Because we used a RoleBinding, these powers only apply within the RoleBinding’s namespace. In our case, a user in the “infra” group can view resources in the “dev” namespace but not in “prod”:

$ kubectl get deployments --as=dave --as-group=infra --namespace dev
No resources found.
$ kubectl get deployments --as=dave --as-group=infra --namespace prod
Error from server (Forbidden): deployments.extensions is forbidden: User "dave" cannot list deployments.extensions in the namespace "prod".

Creating custom roles
When the default ClusterRoles aren’t enough, it’s possible to create new roles that define a custom set of permissions. Since ClusterRoles are just regular API resources, they can be expressed as YAML or JSON manifests and applied using kubectl.
Each ClusterRole holds a list of permissions specifying “rules.” Rules are purely additive and allow a specific HTTP verb to be performed on a set of resources.
For example, the following ClusterRole holds the permissions to perform any action on “deployments”, “configmaps,” or “secrets”, and to view any “pod”:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
- apiGroups: [""] # "" indicates the core API group
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

Verbs correspond to the HTTP verb of the request, while the resource and API groups refer to the resource being referenced. Consider the following Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80

To POST the resource, the user would need the following permissions:

rules:
- apiGroups: ["extensions"] # "apiVersion" without version
  resources: ["ingresses"]  # Plural of "kind"
  verbs: ["create"]         # "POST" maps to "create"

Roles for applications
When deploying containers that require access to the Kubernetes API, it’s good practice to ship an RBAC Role with your application manifests. Besides ensuring your app works on RBAC-enabled clusters, this helps users audit what actions your app will perform on the cluster and consider their security implications.
A namespaced Role is usually more appropriate for an application, since apps are traditionally run inside a single namespace and the namespace’s resources should be tied to the lifecycle of the app.
However, Roles cannot grant access to non-namespaced resources (such as nodes) or across namespaces, so some apps may still require ClusterRoles.
The following Role allows a Prometheus instance to monitor and discover services, endpoints, and pods in the “dev” namespace:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus-role
  namespace: dev
rules:
- apiGroups: [""] # "" refers to the core API group
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]

Containers running in a Kubernetes cluster receive service account credentials to talk to the Kubernetes API, and service accounts can be targeted by a RoleBinding. Pods normally run with the “default” service account, but it’s good practice to run each app with a unique service account so RoleBindings don’t unintentionally grant permissions to other apps.
To run a pod with a custom service account, create a ServiceAccount resource in the same namespace and specify the `serviceAccountName` field of the manifest.

apiVersion: apps/v1beta2 # Abbreviated, not a full manifest
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: dev
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:v1.8.0
        command: ["prometheus", "-config.file=/etc/prom/config.yml"]
      # Run this pod using the "prometheus-sa" service account.
      serviceAccountName: prometheus-sa
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-sa
  namespace: dev

Get involved
Development of RBAC is a community effort organized through the Auth Special Interest Group, one of the many SIGs responsible for maintaining Kubernetes. A great way to get involved in the Kubernetes community is to join a SIG that aligns with your interests, provide feedback, and help with the roadmap.
About the author
Eric Chiang is a software engineer and technical lead of Kubernetes development at CoreOS, the creator of Tectonic, the enterprise-ready Kubernetes platform.
Eric co-leads Kubernetes SIG Auth and maintains several open source projects and libraries on behalf of CoreOS.
Source: kubernetes

MetLife Uses Docker Enterprise Edition to Self Fund Containerization

MetLife is a 150 year old company in the business of securing promises and the information management of over 100M customers and their insurance policies. As a global company, MetLife delivers promises into every corner of the world – some of them built to last a lifetime. With this rich legacy comes a diverse portfolio of IT infrastructure to maintain those promises.
In April, Aaron Aedes from MetLife spoke about their first foray into Docker containerization with a new application, GSSP, delivered through Azure. Six months later, MetLife returns to the DockerCon stage to share their journey since this initial deployment motivated them to find other ways to leverage Docker Enterprise Edition [EE] within MetLife.
Jeff Murr, Director of Engineering for Containers and Open Source at MetLife, spoke in the Day 1 DockerCon keynote session about how they are looking to scale containerization with Docker as they grow. He states that new technology typically adds more cost and overhead to an already taxed IT budget. But the Docker Modernize Traditional Apps [MTA] Program presented an opportunity to reduce the costs of their existing applications.
The MTA project at MetLife started with a single Linux Java-based application that handled the “Do Not Call / Opt out” process, a simple but important application in handling the customer experience. The app was containerized with Docker EE in a single day and they immediately saw improvements in the time to deploy and scale the application in addition to the amount of resources the app now required as a Docker EE container.

The Business Case for Traditional Apps in Docker EE
After a successful MTA POC, Jeff’s team looked at the rest of their application landscape to find other applications that were candidates for modernization. Of the almost 6,000 applications at MetLife, 593 of them (roughly 10%) used the same technology stack as the application from the MTA POC. The resulting analysis projected a 66% total cost savings for those 593 applications. This savings represents not only the infrastructure efficiency gains but also the time saved in maintaining and supporting the applications, for their US operations only. This represents tens of millions of dollars in savings for MetLife – and it all started with a single application.

“The do-not-call app wasn’t an exciting app – it is pretty exciting now.” Jeff Murr, MetLife

For Jeff’s team, this project has created a repeatable model to offer to the various application teams within MetLife, whether the applications are Windows or Linux. Docker EE created a unique opportunity where the disruption itself is self-funded while providing a platform to innovate at scale. With Docker and by migrating to the cloud, MetLife is able to flip the 80/20 maintenance-to-innovation ratio on its head.
Begin Your Journey
Companies looking to get started with Docker Enterprise Edition can take some tips from the experience at MetLife with both containerizing an existing application as well as building and deploying new microservices. The key is to start small and incrementally innovate with one app, one technology stack at a time and ease the operational changes over time, establish a pattern, rinse and repeat.

To learn more about Docker Enterprise Edition:

Learn more about Docker EE
Try Docker EE for free and view subscription plans 
Register for the Modernize Traditional Apps Digital Kit
Sign up for upcoming webinars


The post MetLife Uses Docker Enterprise Edition to Self Fund Containerization appeared first on Docker Blog.