How Bitmovin is Doing Multi-Stage Canary Deployments with Kubernetes in the Cloud and On-Prem

Editor’s Note: Today’s post is by Daniel Hoelbling-Inzko, Infrastructure Architect at Bitmovin, a company that provides services to transcode digital video and audio to streaming formats, sharing insights about their use of Kubernetes.

Running a large-scale video encoding infrastructure on multiple public clouds is tough. At Bitmovin, we have been doing it successfully for the last few years, but from an engineering perspective, it has been neither enjoyable nor particularly fun. So one of the main things that really sold us on using Kubernetes was its common abstraction across the different supported cloud providers and the well-thought-out programming interface it provides. More importantly, the Kubernetes project did not settle for a lowest-common-denominator approach. Instead, it added the abstract concepts that are required and useful for running containerized workloads in a cloud, and then did all the hard work of mapping these concepts to the different cloud providers and their offerings.

The great stability, speed and operational reliability we saw in our early tests in mid-2016 made the migration to Kubernetes a no-brainer. And it didn’t hurt that the vision for scale the Kubernetes project has been pursuing is closely aligned with our own goals as a company. Aiming for >1,000-node clusters might be a lofty goal, but for a fast-growing video company like ours, having infrastructure that aims to support future growth is essential. Also, after the initial brainstorming for our new infrastructure, we immediately knew that we would be running a huge number of containers, so a system with the expressed goal of working at global scale was the perfect fit for us.
Now with the recent Kubernetes 1.6 release and its support for 5,000-node clusters, we feel even more validated in our choice of a container orchestration system.

During the testing and migration phase of getting our infrastructure running on Kubernetes, we got quite familiar with the Kubernetes API and the whole ecosystem around it. So when we were looking at expanding our cloud video encoding offering for customers to use in their own datacenters or cloud environments, we quickly decided to leverage Kubernetes as our ubiquitous cloud operating system to base the solution on. Just a few months later, this effort has become our newest service offering: Bitmovin Managed On-Premise encoding. Since all Kubernetes clusters share the same API, adapting our cloud encoding service to also run on Kubernetes enabled us to deploy into our customers’ datacenters, regardless of the hardware infrastructure running underneath. With great tools from the community, like kube-up, and turnkey solutions, like Google Container Engine, anyone can easily provision a new Kubernetes cluster, either within their own infrastructure or in their own cloud accounts.

To give us maximum flexibility for customers that deploy to bare metal and might not have any custom cloud integrations for Kubernetes yet, we decided to base our solution solely on facilities that are available in any Kubernetes install and that don’t require any integration into the surrounding infrastructure (it will even run inside Minikube!). We don’t rely on Services of type LoadBalancer, primarily because enterprise IT is usually reluctant to open up ports to the open internet – and not every bare metal Kubernetes install supports externally provisioned load balancers out of the box. To avoid these issues, we deploy a BitmovinAgent that runs inside the cluster and polls our API for new encoding jobs, without requiring any network setup.
This agent then uses the locally available Kubernetes credentials to start up new deployments that run the encoders on the available hardware through the Kubernetes API. Even without a full cloud integration available, the consistent scheduling, health checking and monitoring we get from the Kubernetes API enabled us to focus on making the encoder work inside a container, rather than spending precious engineering resources on integrating a bunch of different hypervisors, machine provisioners and monitoring systems.

Multi-Stage Canary Deployments

Our first encounters with the Kubernetes API were not for the On-Premise encoding product. Building our containerized encoding workflow on Kubernetes was rather a decision we made after seeing how incredibly easy and powerful the Kubernetes platform proved during the development and rollout of our Bitmovin API infrastructure. We migrated to Kubernetes around four months ago, and it has enabled us to provide rapid development iterations to our service while meeting our requirements of downtime-free deployments and a stable development-to-production pipeline.
To achieve this, we came up with an architecture that runs almost a thousand containers and meets the requirements we had laid out on day one:

1. Zero downtime deployments for our customers
2. Continuous deployment to production on each git mainline push
3. High stability of deployed services for customers

Obviously, (2) and (3) are at odds with each other – if each merged feature gets deployed to production right away, how can we ensure these releases are bug-free and don’t have adverse side effects for our customers? To overcome this apparent contradiction, we came up with a four-stage canary pipeline for each microservice, where we simultaneously deploy to production and keep changes away from customers until the new build has proven to work reliably and correctly in the production environment.

Once a new build is pushed, we deploy it to an internal stage that’s only accessible to our internal tests and the integration test suite. Once the internal test suite passes, QA reports no issues, and we don’t detect any abnormal behavior, we push the new build to our free stage. This means that 5% of our free users get randomly assigned to the new build. After some time in this stage, the build gets promoted to the next stage, which gets 5% of our paid users routed to it. Only once the build has successfully passed all three of these hurdles does it get deployed to the production tier, where it receives all traffic from our remaining users as well as our enterprise customers, who are not part of the paid bucket and never see their traffic routed to a canary track.

This setup makes us a pretty big Kubernetes installation by default, since all of our canary tiers are available at a minimum replication of 2. Since we are currently deploying around 30 microservices (and growing) to our clusters, that adds up to a minimum of 10 pods per service (8 application pods + a minimum of 2 HAProxy pods that do the canary routing).
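The bucket assignment behind this pipeline can be illustrated with a small sketch. This is not Bitmovin's actual code (in reality the routing happens in HAProxy, based on a header set by the API gateway); the function names and the hash-based assignment are illustrative, but they mirror the rules described above: internal users hit the internal stage, enterprise customers never see a canary, and 5% of free and paid users are routed to the new build.

```python
import hashlib

CANARY_SHARE = 5  # percent of free/paid users routed to the new build

def traffic_group(user_id, is_internal=False, is_enterprise=False, plan="free"):
    """Assign a user to a canary tier, mirroring the four-stage pipeline."""
    if is_internal:
        return "internal"
    if is_enterprise:
        # Enterprise customers never see canary traffic.
        return "production"
    # Stable hash so a user keeps the same assignment across requests.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    if bucket < CANARY_SHARE:
        # Canary tiers: "free" for free users, "paid" for paying users.
        return "free" if plan == "free" else "paid"
    return "production"
```

A deterministic hash (rather than a per-request random draw) keeps a given user pinned to the same build for the lifetime of a canary stage.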
In reality, our preferred standard configuration is usually 2 internal, 4 free, 4 paid and 10 production pods per service, alongside 4 HAProxy pods – around 700 pods in total. This also means that we are running at least 150 Services that provide a static ClusterIP to their underlying microservice canary tier.

A typical deployment looks like this:

Service (ClusterIP)          Deployment                            # of pods
account-service-haproxy      account-service-haproxy               4
account-service-internal     account-service-internal-v1.18.0      2
account-service-canary       account-service-canary-v1.17.0        4
account-service-paid         account-service-paid-v1.15.0          4
account-service-production   account-service-production-v1.15.0    10

An example service definition for the production track has the following label selectors:

apiVersion: v1
kind: Service
metadata:
  name: account-service-production
  labels:
    app: account-service-production
    tier: service
    lb: private
spec:
  ports:
  - port: 8080
    name: http
    targetPort: 8080
    protocol: TCP
  selector:
    app: account-service
    tier: service
    track: production

In front of the Kubernetes Services, load balancing between the different canary versions of the service, lives a small cluster of HAProxy pods that get their haproxy.conf from a Kubernetes ConfigMap. It looks something like this:

frontend http-in
  bind *:80
  log 127.0.0.1 local2 debug
  acl traffic_internal    hdr(X-Traffic-Group) -m str -i INTERNAL
  acl traffic_free        hdr(X-Traffic-Group) -m str -i FREE
  acl traffic_enterprise  hdr(X-Traffic-Group) -m str -i ENTERPRISE
  use_backend internal   if traffic_internal
  use_backend canary     if traffic_free
  use_backend enterprise if traffic_enterprise
  default_backend paid

backend internal
  balance roundrobin
  server internal-lb    user-resource-service-internal:8080   resolvers dns check inter 2000

backend canary
  balance roundrobin
  server canary-lb      user-resource-service-canary:8080     resolvers dns check inter 2000 weight 5
  server production-lb  user-resource-service-production:8080 resolvers dns check inter 2000 weight 95

backend paid
  balance roundrobin
  server canary-paid-lb user-resource-service-paid:8080       resolvers dns check inter 2000 weight 5
  server production-lb  user-resource-service-production:8080 resolvers dns check inter 2000 weight 95

backend enterprise
  balance roundrobin
  server production-lb  user-resource-service-production:8080 resolvers dns check inter 2000 weight 100

Each HAProxy inspects a header called X-Traffic-Group, assigned by our API gateway, which determines which bucket of customers a request belongs to. Based on that header, a decision is made to hit either a canary deployment or the production deployment.

Obviously, at this scale, kubectl (while still our main day-to-day tool for working on the cluster) doesn’t really give us a good overview of whether everything is actually running as it’s supposed to, or of what might be over- or under-replicated. Since we do blue/green deployments, we sometimes forget to shut down the old version after the new one comes up, so some services might be running over-replicated, and finding these issues in a soup of 25 deployments listed in kubectl is not trivial, to say the least. So having a container orchestrator like Kubernetes that is very API-driven was really a godsend for us, as it allowed us to write tools that take care of this.

We built tools that either run directly off kubectl (e.g., bash scripts) or interact directly with the API and understand our special architecture to give us a quick overview of the system. These tools were mostly built in Go, using the client-go library.

One of these tools is worth highlighting, as it’s basically our only way to really see service health at a glance. It goes through all our Kubernetes Services that have the tier: service selector and checks that the accompanying HAProxy deployment is available and that all pods are running with 4 replicas.
It also checks that the 4 services behind the HAProxys (internal, free, paid and production) have at least 2 endpoints running. If any of these conditions are not met, we immediately get a notification in Slack and by email.

Managing this many pods with our previous orchestrator proved very unreliable, and the overlay network frequently caused issues. Not so with Kubernetes – even doubling our current workload for test purposes worked flawlessly, and in general the cluster has been working like clockwork ever since we installed it.

Another advantage of switching over to Kubernetes was the availability of the Kubernetes resource specifications, in addition to the API (which we used to write some internal tools for deployment). This enabled us to keep a Git repo with all our Kubernetes specifications, where each track is generated off a common template and only contains placeholders for variable things like the canary track and the names. All changes to the cluster have to go through tools that modify these resource specifications and get checked into git automatically, so whenever we see issues, we can debug what changes the infrastructure went through over time!

To summarize this post – by migrating our infrastructure to Kubernetes, Bitmovin is able to have:

Zero downtime deployments, allowing our customers to encode 24/7 without interruption
Fast development-to-production cycles, enabling us to ship new features faster
Multiple levels of quality assurance and high confidence in production deployments
Ubiquitous abstractions across cloud architectures and on-premise deployments
Stable and reliable health-checking and scheduling of services
Custom tooling around our infrastructure to check and validate the system
History of deployments (resource specifications in git + custom tooling)

We want to thank the Kubernetes community for the incredible job they have done with the project. The velocity at which the project moves is just breathtaking!
Maintaining such a high level of quality and robustness in such a diverse environment is really astonishing.

– Daniel Hoelbling-Inzko, Infrastructure Architect, Bitmovin

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Get involved with the Kubernetes project on GitHub
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Download Kubernetes
Quelle: kubernetes

DockerCon 2017: Moby’s Cool Hack sessions

Every year at DockerCon, we expand the bounds of what Docker can do with new features and products. And every day, we see great new apps built on top of Docker. Yet there are always a few that stand out not just for being cool apps, but for pushing the bounds of what you can do with Docker.
This year we had two great apps that we featured in the Docker Cool Hacks closing keynote. Both hacks came from members of our Docker Captains program, a group of people from the Docker community recognized by Docker for their deep knowledge of Docker and their many contributions to the community.
Play with Docker
The first Cool Hack was Play with Docker by Marcos Nils and Jonathan Leibiusky. Marcos and Jonathan actually were featured in the Cool Hacks session at DockerCon EU in 2015 for their work on a Container Migration Tool.
Play with Docker is a Docker playground that you can run in your browser.

Play with Docker’s architecture is a Swarm of Swarms, running Docker in Docker instances.

Running on pretty beefy hosts (r3.4xlarge on AWS), Play with Docker is able to run about 3,500 containers per host, only starting containers as needed for a session. Play with Docker is completely open source, so you can run it on your own infrastructure, and they welcome contributions on their GitHub repo.
FaaS (Function as a Service)
The second Cool Hack was Functions as a Service (FaaS) by Alex Ellis. FaaS is a framework for building serverless functions on Docker Swarm with first-class support for metrics. Any UNIX process can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding. Each function runs as a container that lives only as long as it takes to run the function.
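The "any UNIX process as a function" idea can be sketched as a small wrapper that feeds a request body to a short-lived process's stdin and returns its stdout. This is a simplified illustration, not FaaS's actual watchdog implementation; the function and argument names here are made up for the example.

```python
import subprocess
import sys

def invoke(argv, request_body: bytes) -> bytes:
    """Run one short-lived process per invocation: the request goes in
    on stdin, the response comes out on stdout, and the process exits
    as soon as the function has finished."""
    result = subprocess.run(argv, input=request_body,
                            capture_output=True, check=True)
    return result.stdout

# Use the Python interpreter itself as a portable stand-in for a
# packaged UNIX binary: an "uppercase" function.
out = invoke([sys.executable, "-c",
              "import sys; sys.stdout.write(sys.stdin.read().upper())"],
             b"hello faas")
print(out)  # b'HELLO FAAS'
```

Because each invocation is a fresh process (and, in FaaS, a fresh container), functions are stateless by construction and cost nothing while idle.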

FaaS also comes with a convenient gateway tester that allows you to try out each of your functions directly in the browser.

FaaS is actively seeking contributions, so feel free to send issues and PRs on the GitHub repo.
Check out the video recording of the cool hack sessions below:


Learn more about our DockerCon 2017 cool hacks:

Check out Play with Docker
Check out and contribute to FaaS
Contribute to Play with Docker

The post DockerCon 2017: Moby’s Cool Hack sessions appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

The Agility and Flexibility of Docker including Oracle Database and Development Tools

A company’s important applications are often subjected to random and capricious changes due to forces well beyond the control of IT or management. Events like a corporate merger, or even a top programmer on an extended vacation, can have an adverse impact on the performance and reliability of critical company infrastructure.
During the second-day keynote at DockerCon 2017 in Austin, TX, Lily Guo and Vivek Saraswat showed a simulation of how to use Docker Enterprise Edition and its application transformation tools to respond to random events that threaten to undermine the stability of a company-critical service.
The demo begins as two developers are returning to work after an extended vacation.  They discover that, during their absence, their CEO has unexpectedly hired an outside contract programmer to rapidly code and introduce an entire application service that they know nothing about.  As they try to build the new service, however, Docker Security Scan detects that a deprecated library has been incorporated by the contractor.  This library is found to have a security vulnerability which violates the company’s best practice standards.  As part of Docker Enterprise Edition Advanced, Docker Security Scan automatically keeps track of code contributions and acts as a gatekeeper to flag issues and protect company standards.   In this case, they are able to find a newer version of the library and build the service successfully.
The next step is to deploy the service. Docker Compose is the way to describe the application's dependencies and secrets access. It is tempting to simply insert the passwords into the Compose file as plain text. However, the better choice is to let Docker Secrets manage sensitive application configuration data and take advantage of Docker EE's ability to manage and enforce RBAC (Role-Based Access Control).
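As a sketch of what this looks like in a Compose file (the service and image names below are made up for illustration), the application references a Docker-managed secret instead of embedding a plaintext password; the container reads it from `/run/secrets/` at runtime:

```yaml
version: "3.1"
services:
  account-service:                      # hypothetical service name
    image: example/account-service:1.0  # placeholder image
    secrets:
      - db_password                     # mounted at /run/secrets/db_password

secrets:
  db_password:
    external: true   # created out-of-band, e.g. with `docker secret create`
```

Because the secret is created separately (and access to it is governed by Docker EE's RBAC), the Compose file can be checked into version control without exposing credentials.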
It is interesting that the service consists of a Microsoft SQL Server database container that is interacting with other containers that are running Linux.  Docker Enterprise Edition features this ability to run a cluster of microservices in a hybrid Windows and Linux environment.  “It just works.”
All of the problems from the beginning of the demo now seem to be resolved, but the CEO rushes in to announce that they have just purchased a company that uses a traditional on-premise application. The merger press announcement will be tomorrow, and there is concern about the scope and cost of updating the application and moving it to a modern infrastructure. However, they know they can use the Docker transformation tool, image2docker, to do the hard work of converting the traditional application into modern Docker Enterprise Edition containers, which can be deployed on any infrastructure, including the cloud.
One final step is needed to complete the move from the traditional architecture. As the traditional application relies on the popular and powerful Oracle Database, that database will need to be acquired and adapted. Time to go out to the Docker Store. Lily finds the Oracle Database on the Docker Store and integrates it directly into the transformed application – and “it just works.”
The Docker Store is the place where developers can find trusted and scanned commercial content, with collaborative support from Docker and the application container image provider. Oracle today announced that its flagship databases and developer tools will be immediately available as Docker containers through the Docker Store marketplace. The first set of certified images includes: Oracle Database, Oracle MySQL, Oracle WebLogic Server, Oracle Coherence, Oracle Instant Client, and Oracle Java 8 SE (Server JRE).
The demo ends, having shown how developers can use Docker Enterprise Edition to quickly resolve a library incompatibility issue, and how easy it is to take traditional applications and accomplish the first steps toward adapting them to a modern container infrastructure.


 
The post The Agility and Flexibility of Docker including Oracle Database and Development Tools appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Introducing the Modernize Traditional Apps Program

Today at DockerCon, we announced the Modernize Traditional Applications (MTA) Program to help enterprises make their existing legacy apps more secure, more efficient and portable to hybrid cloud infrastructure.  Collaboratively developed and brought to market with partners Avanade, Cisco, HPE, and Microsoft, the MTA Program consists of consulting services, Docker Enterprise Edition, and hybrid cloud infrastructure from partners to modernize existing .NET Windows or Java Linux applications in five days or less.  Designed for IT operations teams, the MTA Program modernizes existing legacy applications without modifying source code or re-architecting the application.

The First Step In The Microservices Journey
In working with hundreds of our enterprise IT customers over the last couple of years, one of the first questions they inevitably ask when we sit down with them is, “What is the first step we should take toward microservices?”
Through experience we have found that, for the vast majority of them, the best answer is, “Start with what you have today – with your existing applications.” Why is this the right place for them to start? Because it recognizes two realities facing enterprise IT organizations today: existing applications consume 80% of IT budgets, and most IT organizations responsible for existing apps are also tasked with hybrid cloud initiatives.
Seeing this pattern repeatedly, we developed this program as a solution for IT operations teams to rapidly address both realities.
Bringing Portability, Security, and Efficiency to Legacy Applications
The heart of the program is methodology and automation tooling to containerize existing .NET Windows or Java Linux applications without modifying source code or re-architecting the app.  Then, using Docker’s Containers-as-a-Service (CaaS) offering, Docker Enterprise Edition (Docker EE), IT operations teams deploy and manage the newly-containerized applications onto partners’ hybrid cloud infrastructure.
The result?  Customers participating in the private beta of the Modernize Traditional Apps Program during the last six months report the following benefits:

Portability.  Customers share that previous attempts to move legacy applications to hybrid cloud infrastructure took months and suffered from high failure rates.  In contrast, thanks to the ability of containers to package together an application and its dependencies, MTA’d legacy applications can be moved in weeks.
Security.  Docker Enterprise Edition (Docker EE) improves the security profile of existing legacy applications through container-based isolation, automated scanning and alerting for vulnerabilities, and integrity verification through digital signatures.

Efficiency.  Customers realize significant improvements in the total cost of ownership (TCO) in both CapEx and OpEx of their existing legacy applications.

To give specific examples, today at DockerCon private beta program participants Northern Trust and Microsoft IT both shared their experiences and results:

Northern Trust, a leading international financial services company, experienced  deployment times that were 4X faster and noted a 2X improvement in infrastructure utilization;
Microsoft is not only a partner in this program; their IT organization is also a beta customer.  Microsoft IT increased app density 4X with zero impact to performance and were able to reduce their infrastructure costs by a third.

Program Powered By Partners
The goal of the program is to accelerate the time-to-value for customers.  To achieve this, we worked closely with our partners to define tightly-scoped, turnkey solutions consisting of consulting services, Docker Enterprise Edition (Docker EE) software, and hybrid cloud infrastructure.

Avanade and Microsoft Azure.  Avanade’s consulting services provides a structured approach for evaluating the customer’s existing legacy applications, containerizing, and deploying and managing the containerized apps on Microsoft Azure hybrid cloud using Docker EE.
Cisco.  Cisco offers consulting services and their UCS converged infrastructure products for MTA Program customers.  Used together with Docker EE,  the solution helps customers take advantage of Cisco’s policy-based container networking technology, Contiv, in deploying containerized apps to hybrid cloud environments.
HPE.  For hybrid cloud solutions employing composable infrastructure, HPE offers MTA Program customers consulting services and converged infrastructure products together with Docker EE to deploy and manage containerized legacy apps.

Docker Enterprise Edition Empowers Customers to Control the Journey
These turnkey bundles make the MTA Program a quick, efficient solution for IT operations taking the first step toward microservices. And with the modernized applications being managed by the Docker CaaS offering, Docker Enterprise Edition (Docker EE), customers have control over the journey’s pace and direction – how fast or slow, as well as which application functionality to re-factor into microservices and which to leave as-is. This flexibility stems from Docker EE’s ability to manage the lifecycle of any containerized app, from 10-year-old legacy applications to just-released microservices-based ones and anything in between.

We are excited to announce the Modernize Traditional Apps Program today at DockerCon with partners Avanade, Cisco, HPE, and Microsoft and share the stories from IT organizations who are making their existing legacy applications portable, secure, and efficient.  Use the links below to learn more about how the MTA Program from Docker and its partners can help breathe new life into your existing legacy applications.
More Resources:

Learn more about modernizing traditional apps with Docker EE
Read the press release
Read more about the program from Microsoft
Read more about the program with Cisco  


The post Introducing the Modernize Traditional Apps Program appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Introducing Moby Project: a new open-source project to advance the software containerization movement

Since Docker democratized software four years ago, a whole ecosystem grew around containerization and in this compressed time period it has gone through two distinct phases of growth. In each of these two phases, the model for producing container systems evolved to adapt to the size and needs of the user community as well as the project and the growing contributor ecosystem.
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas.
Let’s review how we got to where we are today. In 2013-2014, pioneers started to use containers and collaborate in a monolithic open-source codebase – Docker and a few other projects – to help tools mature.

Then in 2015-2016, containers were massively adopted in production for cloud-native applications. In this phase, the user community grew to support tens of thousands of deployments that were backed by hundreds of ecosystem projects and thousands of contributors. It is during this phase, that Docker evolved its production model to an open component based approach. In this way, it allowed us to increase both the surface area of innovation and collaboration.
What sprang up were new independent Docker component projects that helped spur growth in the partner ecosystem and the user community. During that period, we extracted components out of the Docker codebase and rapidly innovated on them, so that systems makers could reuse them independently as they were building their own container systems: runc, HyperKit, VPNKit, SwarmKit, InfraKit, containerd, etc.

Being at the forefront of the container wave, one trend we see emerging in 2017 is containers going mainstream, spreading to every category of computing: server, data center, cloud, desktop, Internet of Things and mobile. To every industry and vertical market: finance, healthcare, government, travel, manufacturing. And to every use case: modern web applications, traditional server applications, machine learning, industrial control systems, robotics. What many new entrants in the container ecosystem have in common is that they build specialized systems, targeted at a particular infrastructure, industry or use case.
As a company Docker uses open source as our innovation lab, in collaboration with a whole ecosystem. Docker’s success is tied to the success of the container ecosystem: if the ecosystem succeeds, we succeed. Hence we have been planning for the next phase of the container ecosystem growth: what production model will help us scale the container ecosystem to fulfill the promise of making containers mainstream?
Last year, our customers started to ask for Docker on many platforms beyond Linux: Mac and Windows desktop, Windows Server, and cloud platforms like Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, and we created a dozen Docker editions specialized for these platforms. In order to build and ship these specialized editions in a relatively short time, with small teams, in a scalable way, without having to reinvent the wheel, it was clear we needed a new approach. We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry, where assemblies of components are reused to build completely different cars.

We think the best way to scale the container ecosystem to the next level to get containers mainstream is to collaborate on assemblies at the ecosystem level.

In order to enable this new level of collaboration, today we are announcing the Moby Project, a new open-source project to advance the software containerization movement. It provides a “Lego set” of dozens of components, a framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas. Think of Moby as the “Lego Club” of container systems.
Moby is comprised of:

A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

Moby is designed for system builders, who want to build their own container-based systems – not for application developers, who can use Docker or other container platforms. Participants in the Moby project can choose from the library of components derived from Docker, or they can elect to “bring your own components” (BYOC), packaged as containers, with the option to mix and match among all of the components to create a customized container system.
Docker uses the Moby Project as an open R&D lab, to experiment, develop new components, and collaborate with the ecosystem on the future of container technology. All our open source collaboration will move to the Moby project.
Please join us in helping take software containers mainstream, and grow our ecosystem and our user community to the next level by collaborating on components and assemblies.


 
Learn more about the Moby Project:

https://mobyproject.org/

Join us for the DockerCon 2017 Online Meetup Recap
Read the Announcement

The post Introducing Moby Project: a new open-source project to advance the software containerization movement appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems

 
Last year, one of the most common requests we heard from our users was to bring a Docker-native experience to their platforms. These platforms were many and varied: from cloud platforms such as AWS, Azure, and Google Cloud, to server platforms such as Windows Server, to desktop platforms that their developers used, such as OSX and Windows 10, to mainframes and IoT platforms; the list went on.
We started working on support for these platforms, and we initially shipped Docker for Mac and Docker for Windows, followed by Docker for AWS and Docker for Azure. Most recently, we announced the beta of Docker for GCP. The customizations we applied to make Docker native for each platform have furthered the adoption of the Docker editions.
One of the issues we encountered was that for many of these platforms, the users wanted Linux container support but the platform itself did not ship with Linux included. Mac OS and Windows are two obvious examples, but cloud platforms do not ship with a standard Linux either. So it made sense for us to bundle Linux into the Docker platform to run in these places.
What we needed to bundle was a secure, lean and portable Linux subsystem that can provide Linux container functionality as a component of a container platform. As it turned out, this is what many other people working with containers wanted as well: a secure, lean and portable Linux subsystem for the container movement. So we partnered with several companies and the Linux Foundation to build this component. These companies include HPE, Intel, ARM, IBM and Microsoft, all of whom are interested in bringing Linux container functionality to new and varied platforms, from IoT to mainframes.
LinuxKit includes the tooling to build custom Linux subsystems that include exactly the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. All components can be substituted with ones that match specific needs. It is a kit, very much in the Docker philosophy of batteries included but swappable. Today, onstage at DockerCon 2017, we open sourced LinuxKit at https://github.com/linuxkit/linuxkit.
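To give a feel for how such an assembly is described, a LinuxKit build file is a small YAML document that declares the kernel, init images, one-shot boot containers and long-running system services. The sketch below is illustrative only; the image names and tags are placeholders, not pinned releases, and the exact schema is defined by the LinuxKit repository:

```yaml
# Minimal illustrative LinuxKit assembly; every section below is a
# container image that can be swapped out or removed entirely.
kernel:
  image: linuxkit/kernel:4.9.x          # kernel + initramfs (placeholder tag)
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest                # PID 1 and containerd setup
  - linuxkit/runc:latest                # container runtime
onboot:                                 # one-shot containers, run in order at boot
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
    command: ["/sbin/dhcpcd", "--nobackground", "-1"]
services:                               # long-running system services, sandboxed
  - name: getty
    image: linuxkit/getty:latest
```

Building such a file with the LinuxKit tooling produces a bootable image for your chosen target (ISO, raw disk, cloud image formats, and so on); consult the repository README for the exact build commands, as they have evolved since launch.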
To achieve our goals of a secure, lean and portable OS, we built it from containers, for containers. Security is a top-level objective and aligns with NIST, which states in its draft Application Container Security Guide: “Use container-specific OSes instead of general-purpose ones to reduce attack surfaces. When using a container-specific OS, attack surfaces are typically much smaller than they would be with a general-purpose OS, so there are fewer opportunities to attack and compromise a container-specific OS.”
The leanness directly helps with security by removing parts not needed if the OS is designed around the single use case of running containers. Because LinuxKit is container-native, it has a very minimal size (about 35MB) and a correspondingly minimal boot time. All system services are containers, which means that everything can be removed or replaced.
System services are sandboxed in containers, with only the privileges they need. The configuration is designed for the container use case. The whole system is built to be used as immutable infrastructure, so it can be built and tested in your CI pipeline, deployed, and new versions are redeployed when you wish to upgrade.
The kernel comes from our collaboration with the Linux kernel community: we participate in the process and work with groups such as the Kernel Self Protection Project (KSPP), while shipping recent kernels with only the minimal patches needed to fix issues with the platforms LinuxKit supports. The kernel security process is too big for a single company to tackle on its own, so broad industry collaboration is necessary.
In addition, LinuxKit provides a space to incubate security projects that show promise for improving Linux security. We are working with external open source projects such as WireGuard, Landlock, Mirage, oKernel, Clear Containers and more to provide a testbed and focus for innovation in the container space, and a route to production.
LinuxKit is portable: it was built for the many platforms Docker runs on now, and with a view to making it run on far more, whether they are large or small machines, bare metal or virtualized, mainframes or the kind of devices used in Internet of Things scenarios, as containers reach into every area of computing.
For the launch we invited John Gossman from Microsoft onto the stage. We have a long history of collaboration with Microsoft, on Docker for Windows Server, Docker for Windows and Docker for Azure. Part of that collaboration has been work on the Linux subsystem in Docker for Windows and Docker for Azure, and working on Hyper-V integration with LinuxKit on those platforms. The next step in that collaboration announced today is that all Windows Server and Windows 10 customers will get access to Linux containers, and we will be working together on how to integrate LinuxKit with Hyper-V isolation.
Today we open up LinuxKit to partners and open source enthusiasts to build new things with Linux and to expand the container platform. We look forward to seeing what you make from it and contribute back to the community.


Learn more about LinuxKit:

Check out the LinuxKit repository on GitHub
Join us for the DockerCon 2017 Online Meetup Recap
Read the Announcement

 
The post Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems appeared first on Docker Blog.

WEBINAR Q&A: Modernizing Traditional Applications with Docker

This webinar covers the “why now” and the “how” of modernizing traditional applications with Docker Enterprise Edition. Legacy applications often serve critical business needs and have to be maintained for a long time. The maintenance of these applications can become expensive and very time consuming. Some applications may have been written decades ago, have grown to millions of lines of code, and the team that built and deployed the app may no longer be at your company. That can pose a challenge for app maintenance, security and support. Docker Enterprise Edition and the Image2Docker tool present a unique opportunity to move these apps into containers, making them portable, more secure and cost efficient to operate.
View the recorded session below and read through some of the most popular questions.

Q: Do I need to follow all the steps in the exact sequential manner or do all of them to qualify as modernizing traditional applications?
A: Outside of the first step of converting the existing app to a container with Image2Docker, the decision to refactor, automate or deploy to new infrastructure is up to you. You can strictly lift and shift some apps, while others are candidates for refactoring or a complete rewrite. Modernization can also include migrating to a more modern infrastructure or adding modern services to an existing app. With Docker, managing and deploying apps is straightforward, whether it is a microservice or a monolith.
Q: What kind of apps does the Image2Docker tool support, and how do I get it?
A: Image2Docker is a free tool for use by Windows and Linux teams to convert apps to Dockerfiles. Whether it’s a .NET application running on Windows or a Java or LAMP stack application running on Linux, Image2Docker will help you take an existing, deployed application and convert it to a Dockerfile. Of course, not all apps are equal when it comes to converting to images.
For Windows: 2-3 tier IIS and ASP.NET applications with limited external dependencies. For Linux: 2-3 tier Java apps running frameworks suited for isolation within a container, and LAMP stack apps with limited external dependencies.
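To give a feel for the output, here is a hypothetical Dockerfile of the kind Image2Docker might generate for a simple single-site IIS/ASP.NET application. The base image, site name and paths are invented for illustration; real output depends entirely on what the tool discovers on the scanned machine:

```dockerfile
# escape=`
FROM microsoft/aspnet
SHELL ["powershell", "-Command"]
# Copy the extracted website content discovered on the source machine
COPY MyWebApp C:\inetpub\wwwroot\MyWebApp
# Recreate the IIS site that was found during the scan
RUN Remove-Website -Name 'Default Web Site'; `
    New-Website -Name 'MyWebApp' -Port 80 -PhysicalPath 'C:\inetpub\wwwroot\MyWebApp'
EXPOSE 80
```

The generated Dockerfile is a starting point: you would typically review it, trim dependencies, and build and test the image before deploying.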
Q: Is Image2Docker part of Docker Enterprise Edition?
A: No, Image2Docker is a free open source tool available for download; both Linux and Windows Server versions are available for anyone interested in this use case.
Q: How do I manage and deploy my apps securely at scale?
A: Docker Enterprise Edition (EE) is the platform for enterprise container management and security. It is an application-centric platform that can orchestrate, accelerate and secure applications across the entire software supply chain, from development to production running on any infrastructure. Docker EE is a single pane of glass to manage and deploy your apps at scale, securely and efficiently.
Q: How is Docker EE licensed and where can I see key features included in Docker EE tiers?
A: Docker EE is licensed per node. A node is an instance running on a bare metal or virtual server. Docker EE is available in three tiers: Basic, Standard and Advanced to address a wide range of requirements. For more details visit www.docker.com/pricing.
Q: Can Docker EE  run within my enterprise or is it only run externally?
A: Docker EE can be deployed on-premises or in your VPC.
Q: How do I modernize from monolith to microservices?
A: There is a lot of interest in microservices apps, which are discrete, distributed and independently maintainable blocks of functionality. Many organizations have this as a goal, and the reality is that the first step is to modernize the app into a single container. The next step is to start refactoring the app, and over time one container can become hundreds. A recommended best practice is to start with the functionality that changes most frequently, target that for refactoring and, most importantly, test, test, test.
Continue your Docker journey with these helpful links:

Learn More about Docker Enterprise Edition
Try Docker Enterprise Edition for free
Learn more about Image2Docker for Linux and Windows Server
Build your own images


The post WEBINAR Q&A: Modernizing Traditional Applications with Docker appeared first on Docker Blog.

DockerCon Agenda, Mobile App and DockerCon Slack

From Docker use cases at large corporations to advanced technical talks and hands-on lab tutorials, the agenda includes sessions adapted to every attendee profile, expertise level and domain of interest.
If you’re a registered attendee, log in to the DockerCon portal using the information you set up during the registration process. You can use the keyword search bar or filter by topics, days, tracks, experience level or target audience.

Once logged in, you can “star” your interests and create your DockerCon schedule. Your saved interests and schedule will be available on the DockerCon mobile app you can download here.
Below are some useful tips and tricks for getting the most out of the DockerCon App.
Add More Sessions in the App
If you have not started already, we encourage you to review DockerCon sessions and build your agenda for next week. The process is very simple and will help you organize sessions and activities by the topics that you are interested in. Just click the “Schedule” widget and explore sessions by day or track. When you add a session to “My Agenda”, you’ll be able to find it later in “My Event”.
You can use the DockerCon App to take notes and rate both speakers and sessions. You can also access your Moby Mingle account to submit offers or join requests allowing you to connect with other attendees. Just log in once using your registration credentials and then it will be saved for the week.
The Mapping section includes a map of the Ecosystem Expo Hall giving you the details of where you can find sponsor’s booths.

Don’t forget to post your DockerSelfie photos to the Photo Feed! What’s a DockerSelfie? It’s just a selfie-style picture that features something Docker-related. Share pics with your Docker swag, at DockerCon or from other Docker events. Let us know how excited you are for DockerCon.
Introducing a DockerCon Slack
We’ve set up a DockerCon slack so that it’s easier for attendees to participate in topic-based conversations in specific channels. This is a great way to interact with attendees online and ask questions to the Docker team who will be looking after the different channels. As always, please remember that this is a professional event and it’s important to adhere to the Code of Conduct.


The post DockerCon Agenda, Mobile App and DockerCon Slack appeared first on Docker Blog.

Docker Docs Hackathon: April 17-21, 2017

During DockerCon 2017, the docs team will be running the first-ever Docker Docs Hackathon, and you’re invited to participate and win prizes, whether you attend DockerCon or are just watching the proceedings online.
Essentially, it’s a bug-bash! We have a number of bugs filed against our docs up on GitHub for you to grab.
You can participate in one of two ways:

With the docs team’s help in the fourth floor hack room at DockerCon on Tuesday, April 18th and Wednesday, April 19th, from 1-6pm.
Online! Right here! During the whole week of DockerCon (April 17th through 21st).

Or, both – if you want to have the best shot. After all, we won’t be in the hack room 24/7 that whole week.
All participants who show up in the 4th floor hack room at DockerCon will get this way-cool magnet just for stopping by.

Quick links

Official hackathon page on Docs site
Event page on DockerCon website
View hackathon bugs on GitHub
Report your hackathon work
Browse prizes
The docs channel on Slack, if you have questions

How it works
We have a number of bugs that have built up in our docs queue on GitHub, and we have labeled a whole slew of them with the tag hackathon, which you can see here.
Submit fixes for these bugs, or close them if, after a bit of research, it turns out they aren’t actually valid. Every action you take earns you points, and the points are redeemable for dollars in our hackathon store. These points also qualify you for valuable prizes like an Amazon gift card and a personally engraved trophy!
Prizes

All participants: Points are redeemable for t-shirts, hoodies, sweatshirts, mugs, beer steins, pint glasses, flasks, stickers, buttons, magnets, wall clocks, postcards, and even doggie t-shirts.
3rd place: A small trophy with a personal engraving, plus store credit
2nd place: A small trophy with a personal engraving, plus store credit, plus a $150 Amazon Gift Card
1st place: A large trophy with a personal engraving, plus store credit, plus a $300 Amazon Gift Card

Bonuses
A select few will get bonuses for being extra special contributors:

Largest single change introduced in a fix (files changed/lines of delta): 1000 points
Most bugs closed (resolved as no-op or handled): 1000 points
Most participation (attended all days): 1000 points

Choosing a prize
You can see the point values for the bugs in the GitHub queue. Those are worth cash in our rewards store at http://www.cafepress.com/dockerdocshackathon.
Our points-to-cash conversion rate will be figured out at the end of the hackathon, and will essentially be a function of the number of points that hackathon participants logged, and the number of dollars we have to spend on prizes.
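The conversion itself is just a ratio of prize budget to total points logged. As a made-up illustration of the mechanic (the budget and point totals below are invented, not actual hackathon figures):

```python
def points_to_cash_rate(prize_budget_dollars: float, total_points_logged: int) -> float:
    """Dollars of store credit earned per point, given a fixed prize budget."""
    if total_points_logged == 0:
        return 0.0
    return prize_budget_dollars / total_points_logged

# Example: a hypothetical $2,000 prize budget spread across 10,000 logged points
rate = points_to_cash_rate(2000, 10000)   # 0.20 dollars per point
credit = round(350 * rate, 2)             # a 350-point participant earns $70.00 of credit
```

The actual rate depends on the final totals, which is why it can only be posted after the hackathon ends.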

View available rewards

When?
The docs hackathon runs from April 17th through April 21st, 2017. This is when it’s possible to claim and resolve bugs.
Where?
In-person
Attending DockerCon? Come to the fourth floor hack room on Tuesday and Wednesday from 1pm to 6pm. We’ll be there to answer questions and help you.
Note: While the hackathon is officially ongoing all week online, working in the hack room with us for these two days is by far the best way to participate; the docs team will be on-hand to get you started, get you unstuck, and guide you.
Online
Drop into the community Slack channel for the docs and ask any questions you have. Otherwise, just go to GitHub and look at our hackathon label and come here to claim your points when you’re done.
Claiming a bug
Whether attending in-person or online, to claim a bug as one that you are working on (so nobody else grabs it out from under you) you must type a comment saying you claim it. Respect it when you see other comments claiming a bug.

View available bugs

Claiming your points
Simply fill out this form when you’re done participating. We’ll take it from there.
Conversion rate
The points-to-cash ratio will be posted on the official page for the hackathon no later than Friday the 21st. We need to figure out how many points’ worth of fixes come in first.
Sorry, but we cannot send you cash for these points under any circumstances, even if you don’t spend them.
Questions?
Ask us anything at docs@docker.com or in the docs channel on Slack.
Thank you for participating in the 2017 Docs Hackathon!


The post Docker Docs Hackathon: April 17-21, 2017 appeared first on Docker Blog.

Introducing Moby Mingle at DockerCon 2017

If you’re pumped about all the things you learn and all the people you meet at Docker events, you’re going to love what we have planned for you at this year’s DockerCon! With more than 5000 attendees, there will be a wealth of knowledge in the room, ready to be shared, explored and cultivated. This year we’re going to draw on the power of the DockerCon crowd to open-source the attendee experience and bring the focus of the conference back to our users. Every attendee has different experiences, backgrounds, and interests to share. The trick becomes finding the right individual, with the specific knowledge you’re looking for.
So we’re excited to give everyone at DockerCon access to a tool called Moby Mingle to connect with people who share the same Docker use cases, topics of interest or hack ideas, or even your favorite TV shows. So no matter where you’re traveling from or how many people you know before the conference, we will make sure you end up feeling at home!
Using a web-based platform, you can build a profile, set goals around what you want to get out of DockerCon, and then make Offers and Requests to help you achieve those goals. In practice, attendees will use the platform to identify other attendees they want to meet, one-on-one or as a group, and then check in onsite at the Moby Mingle lounge. You can access Moby Mingle here and log in using the credentials you created during the DockerCon registration process.


The post Introducing Moby Mingle at DockerCon 2017 appeared first on Docker Blog.