Moby Project and Open Source Summit North America

Docker will be at Open Source Summit to highlight new developments with the Moby Project and its various components: containerd, LinuxKit, InfraKit, Notary, etc.
Come see us at Booth #510 to learn more about:

The different use cases for the Moby Project and its components
The difference between Docker and the Moby Project
How to get started with each component

As part of OSS NA, Docker is also organizing a Moby Summit on September 14, 2017. Following the success of previous editions, we’ll keep the same format: short technical talks and demos in the morning and Birds-of-a-Feather sessions in the afternoon.

We have an excellent lineup of speakers in store for you and are excited to share the agenda below. We hope these sessions inspire you to participate in the Moby community and register for the Moby Summit.
For those of you who can’t attend the summit we recommend the following sessions as part of the main event / tracks:
 
Building specialized container-based systems with Moby: a few use cases
Speaker: Patrick Chanezon

This talk will explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud or bare metal scenarios. We will cover Moby itself, the framework, and tooling around the project, as well as many of its components: LinuxKit, InfraKit, containerd, SwarmKit, Notary. Then we will present a few use cases and demos of how different companies have leveraged Moby and some of its components to create their own container-based systems.
Easier, faster, better testing with LinuxKit
Speaker: Justin Cormack (Docker) and Gianluca Arbezzano (InfluxData)
LinuxKit is a new toolkit for quickly and easily building custom Linux systems. It builds an immutable image containing applications built in as containers, and the build and boot process takes under a minute. While it was primarily built at Docker for running container orchestration systems, it is also ideal as a testing tool. In particular, we use it to test across wide ranges of kernel versions in order to verify support for the different setups customers may be using.
From this talk, attendees should get a practical overview of how they can use LinuxKit in their own testing workflows. It will be illustrated with real world examples from projects that are using LinuxKit for testing, with all the CI code being open source and available to be adapted to other use cases. We hope to encourage people to test more and in more depth and breadth, by making it easier.
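As a rough illustration of how LinuxKit builds are described, a small YAML file lists the kernel, init containers, one-shot boot containers, and long-running services. The image names and tags below are placeholders, not taken from the talk:

```yaml
# Hypothetical linuxkit.yml sketch; component images/versions are illustrative.
kernel:
  image: linuxkit/kernel:4.9.x        # kernel packaged as a container image
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest              # minimal init system
  - linuxkit/runc:latest
onboot:
  - name: dhcpcd                      # one-shot containers run at boot
    image: linuxkit/dhcpcd:latest
    command: ["/sbin/dhcpcd", "--nobackground", "-1"]
services:
  - name: my-test-suite               # long-running service containers
    image: example/my-tests:latest    # hypothetical test image
```

Building such a file (e.g. with `linuxkit build`) produces an immutable image that can be booted under a local hypervisor, which is what makes fast, repeatable test runs across kernel versions practical.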
Unikernels: where are they now?
Speaker: Amir Chaudhry
In this talk we’ll review the progress in the unikernel ecosystem and highlight the advances of the most active open-source projects:
– MirageOS, which has improved the dev experience and supports new cloud targets.
– HaLVM, which created a product to help detect network intrusions.
– IncludeOS, which has made rapid progress and introduced POSIX compatibility.
We’ll also discuss how the underlying ideas behind unikernels, of minimalism, composability, and security, have found their way into other projects and products, and the questions this poses for building maintainable systems.
Container Orchestration from Theory to Practice
Speakers: Laura Frank and Aaron Lehmann
Join Laura Frank and Aaron Lehmann as they explain and examine technical concepts behind container orchestration systems, like distributed consensus, object models, and node topology. These concepts build the foundation of every modern orchestration system, and each technical explanation will be illustrated using Docker’s SwarmKit as a real-world example. Gain a deeper understanding of how orchestration systems like SwarmKit work in practice, and walk away with more insights into your production applications.
From 0 to Serverless in 60 Seconds
Speaker: Alex Ellis
This talk gives an overview of serverless – an architectural pattern that lets us focus on building discrete, reusable chunks of code. A key use case is integration with third-party services and applications. You will see demos of functions with the Amazon Alexa voice assistant, Twitter and Slack.
Functions as a Service (FaaS) is an open-source framework for building serverless functions on top of Docker’s API and native orchestration, meaning you can be up and running in less than a minute. Any code can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding, and Prometheus metrics allow functions to scale with demand. With Docker Swarm you can run your code anywhere, which is important for businesses with compliance needs and existing systems.
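To make the “any code can be packaged as a function” point concrete, here is a minimal handler in the style of the framework’s Python template, where the platform passes in the raw request and the return value becomes the response. Treat the exact entry-point shape as illustrative rather than authoritative:

```python
# handler.py - a minimal function body in the style of a FaaS Python
# template: the framework passes the raw request body in as a string,
# and whatever is returned becomes the HTTP response.
import json

def handle(req):
    """Echo a greeting; `req` is the raw request body as a string."""
    try:
        name = json.loads(req).get("name", "world")
    except (ValueError, AttributeError):
        # Not JSON (or not an object): fall back to the raw text.
        name = req.strip() or "world"
    return json.dumps({"message": f"Hello, {name}!"})
```

The framework wraps a handler like this in a container image, so deploying it is just a build and push away; no web-server boilerplate is needed in the function itself.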
 
Containerd Internals: Building a core container runtime
Speakers: Phil Estes and Stephen Day
Containerd is the core container runtime used in Docker to execute containers and distribute images. It was designed from the ground up to support the OCI image and runtime specifications. The design of containerd is carefully crafted to fit the use cases of modern container orchestrators like Kubernetes and Swarm. In this talk, we dive into the design decisions that help containerd meet a diverse set of requirements for a growing container world. Developing an understanding of its decoupled components will give attendees a grasp of where they can leverage that functionality in their own platforms. By slicing the components of a container runtime into the right pieces, integrators can choose only what they need.


The post Moby Project and Open Source Summit North America appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Announcing new DockerCon Europe tracks, sessions, speakers and website!

The DockerCon Europe website has a fresh look and new sessions added. The DockerCon Review Committee is still working through announcing final sessions in each breakout track, but below is an overview of the tracks and content you’ll find this year in Copenhagen. To view abstracts in more detail check out the Agenda Page.
In case you missed it, two summits are happening on Thursday, October 19th. The Moby Summit is a hands-on collaborative event for advanced container users who are actively maintaining, contributing to, or generally interested in the design and development of the Moby Project and its components. The Enterprise Summit is a full-day event for enterprise IT practitioners who want to learn how to embrace the journey to hybrid IT and implement a new strategy to help fund their modernization efforts.

We have an excellent lineup of speakers in store for you and are excited to share the agenda below. We hope that these sessions inspire you to register for DockerCon Europe.
Using Docker
Using Docker sessions are introductory sessions for Docker users, dev and ops alike. Filled with practical advice, learnings, and insight, these sessions will help you get started with Docker or better implement Docker into your workflow.

Creating Effective Images by Abby Fuller, AWS
Docker?!?! But I’m a SysAdmin by Mike Coleman, Docker
Modernizing .NET Apps by Elton Stoneman, Docker & Iris Classon, Konstrukt
Modernizing Java Apps by Arun Gupta, AWS
Road to Docker Production: What You Need to Know and Decide by Bret Fisher, Independent Consultant
Tips and Tricks of the Docker Captains by Adrian Mouat, Container Solutions
Learning Docker From Square One by Chloe Condon, CodeFresh
Practical Design Patterns in Docker Networking by Mark Church, Docker

 Docker Advanced
Docker Advanced sessions provide a deeper dive into Docker tooling, implementation and real world production use recommendations. If you are ready to get to the next level with your Docker usage, join Docker Advanced for best practices from the Docker team.

Sessions to be announced soon!

 Use Case
Use case sessions highlight how companies are using Docker to modernize their infrastructure and build, ship and run distributed applications. These sessions are heavy on business value, ROI and production implementation advice and learnings.

Back To The Future: Containerize Legacy Applications by Brandon Royal, Docker
Using Docker For Product Testing and CI at Splunk by Mike Dickey, Splunk & Harish Jayakumar, Docker
Shipping and Shifting ~100 Apps with Docker by Sune Keller, Alm Brand
How Docker Helps Open Doors At Assa Abloy by Jan Hëdstrom, Assa Abloy & Patrick van der Bleek, Docker

 Black Belt
Black Belt talks are code and demo heavy and light on slides. Experts in the Docker ecosystem dive deep into a technical topic. Container connoisseurs, prepare to learn and be delighted.

What Have Syscalls Done For You Lately? by Liz Rice, Aqua Security
A Deeper Dive Into Docker Overlay Networks by Laurent Bernaille, D2SI
Container-relevant Upstream Kernel Developments by Tycho Andersen, Docker
The Truth Behind Serverless by Erica Windisch, IOpipe

 Edge [NEW!]
The Edge track shows how containers are redefining our technology toolbox, from solving old problems in a new way to pushing the boundaries of what we can accomplish with software. Sessions in this track provide a glimpse into the new container frontier.  

Take Control Of Your Maps With Docker by Petr Pridal, Klokan Technologies GmbH
Panel: Modern App Security Requires Containers moderated by Sean Michael Kerner
Skynet vs Planet of The Apes, Duel! by Adrien Blind, Societe Generale
How to Secure the Journey to Microservices – Fraud Management at Arvato by Tobias Gurtzick, Arvato

 Transform [NEW!]
The transform track focuses on the impact that change has on organizations, individuals and communities. Filled with inspiration, insights and new perspectives, these stories will leave you energized and equipped to drive innovation.

Learn Fast, Fail Fast, Deliver Fast: The ModSquad Way by Tim Tyler, Metlife
The Value of Diverse Experiences by Nandhini Santhanam, Docker
We Need To Talk: How Communication Helps Code by Lauri Apple, Zalando
My Journey To Go by Ashley McNamara, Microsoft
A Strong Belief, Loosely Held: Bringing Empathy to IT by Nirmal Mehta, Booz Allen Hamilton

 Community Theater
Located in the main conference hall, the Community Theater will feature lightning talks and cool hacks from the Docker community and ecosystem.

Looking Under The Hood: containerD by Scott Coulton, Puppet
From Zero to Serverless in 60 Seconds, Anywhere by Alex Ellis, ADP
Deploying Software Containers on Heterogeneous IoT Devices by Daniel Bruzual, Aalto University
Android Meets Docker by Jing Li, Viacom
Cluster Symphony by Anton Weiss, Otomato Software
Containerizing Hardware Accelerated Applications by Chelsea Mafrica
Empowering Docker with Linked Data Principles by Riccardo Tommasini, Politecnico di Milano
Tales of Training: Scaling CodeLabs with Swarm Mode and Docker-Compose by Damien Duportal, CloudBees
Experience the Swarm API in Virtual Reality by Leigh Capili, Beatport
Repainting the Past with Distributed Machine Learning and Docker by Finnian Anderson, Student & Oli Callaghan, Student

 Ecosystem
Ecosystem Track showcases work done by sponsoring partners at DockerCon. Ecosystem sessions include a diverse range of topics and opportunity to learn more about the variety of solutions available in the Docker ecosystem.

Sessions to be announced soon!

We hope you can join us in Copenhagen for an amazing event! Tickets have sold out each year, so make sure to register soon!


Test Drive Docker Enterprise Edition at VMworld 2017

Docker will be at VMworld 2017 next week (August 27-31) in Las Vegas to highlight new developments with Docker Enterprise Edition (EE), the only Container as a Service (CaaS) platform for managing and securing Windows, Linux and mainframe applications across any infrastructure, both on premises and in the cloud.
Stop by Booth #1206 to learn more about:

How VMs and containers work together for improved application lifecycle management
How containers and Docker EE can help IT with day-to-day maintenance and operations tasks
How IT can lead modernization efforts with Docker EE and become drivers of innovation in their organizations

Just as VMware vSphere simplified the management of VMs and made virtualization the de facto standard inside the data center, Docker is driving containerization of your entire application portfolio with Docker EE and helping organizations like yours to achieve their cloud and app modernization goals without requiring you to change how you operate.
Test Drive Docker EE in the Booth
Don’t miss the chance to get hands-on experience with Docker with our in-booth labs. Led by Docker experts, you will get to see for yourself how Docker brings all applications—traditional and cloud-native, Windows and Linux, on-prem and in the cloud—into a single experience for IT. Learn how standard IT tasks like patching and rolling updates are 10x easier with Docker EE and see how you can centralize security and access control through Docker EE’s management interface.
Pre-register here to sign up for one of the lab sessions and get a free Docker t-shirt when you show up!

Monday, August 28th @ 1:15pm
Tuesday, August 29th @ 1:15pm
Wednesday, August 30th @ 1:15pm

Congrats to 2017 vExperts!
We didn’t forget you! Come by our booth to collect special vExpert swag or better yet, sign up for our vExpert challenge! Special prizes in store for winners of our challenge.
Register here for special vExpert Challenge time slots and compete against fellow vExperts for some great Docker swag.

Monday, August 28th @ 3:10pm
Tuesday, August 29th @ 3:10pm


To learn more about Docker solutions for IT:

Visit IT Starts with Docker and sign up for ongoing alerts
Learn more about Docker Enterprise Edition
Start a hosted trial
Sign up for upcoming webinars


Kubernetes Meets High-Performance Computing

Editor’s note: today’s post is by Robert Lalonde, general manager at Univa, on supporting mixed HPC and containerized applications.

Anyone who has worked with Docker can appreciate the enormous gains in efficiency achievable with containers. While Kubernetes excels at orchestrating containers, high-performance computing (HPC) applications can be tricky to deploy on Kubernetes. In this post, I discuss some of the challenges of running HPC workloads with Kubernetes, explain how organizations approach these challenges today, and suggest an approach for supporting mixed workloads on a shared Kubernetes cluster. We will also provide information and links to a case study on a customer, IHME, showing how Kubernetes is extended to service their HPC workloads seamlessly while retaining the scalability and interfaces familiar to HPC users.

HPC workloads’ unique challenges
In Kubernetes, the base unit of scheduling is a Pod: one or more Docker containers scheduled to a cluster host. Kubernetes assumes that workloads are containers. While Kubernetes has the notion of Cron Jobs and Jobs that run to completion, applications deployed on Kubernetes are typically long-running services like web servers, load balancers or data stores. While they are highly dynamic, with pods coming and going, they differ greatly from HPC application patterns.

Traditional HPC applications often exhibit different characteristics:
– In financial or engineering simulations, a job may be comprised of tens of thousands of short-running tasks, demanding low-latency and high-throughput scheduling to complete a simulation in an acceptable amount of time.
– A computational fluid dynamics (CFD) problem may execute in parallel across many hundreds or even thousands of nodes, using a message passing library to synchronize state.
This requires specialized scheduling and job management features to allocate and launch such jobs and then to checkpoint, suspend/resume or backfill them. Other HPC workloads may require specialized resources like GPUs or access to limited software licenses. Organizations may enforce policies around what types of resources can be used by whom, to ensure projects are adequately resourced and deadlines are met.

HPC workload schedulers have evolved to support exactly these kinds of workloads. Examples include Univa Grid Engine, IBM Spectrum LSF and Altair’s PBS Professional. Sites managing HPC workloads have come to rely on capabilities like array jobs, configurable pre-emption, user-, group- or project-based quotas and a variety of other features.

Blurring the lines between containers and HPC
HPC users believe containers are valuable for the same reasons as other organizations. Packaging logic in a container to make it portable, insulated from environmental dependencies, and easily exchanged with other containers clearly has value. However, making the switch to containers can be difficult.

HPC workloads are often integrated at the command line level. Rather than requiring coding, jobs are submitted to queues via the command line as binaries or simple shell scripts that act as wrappers. There are literally hundreds of engineering, scientific and analytic applications used by HPC sites that take this approach and have mature and certified integrations with popular workload schedulers.

While the notion of packaging a workload into a Docker container, publishing it to a registry, and submitting a YAML description of the workload is second nature to users of Kubernetes, this is foreign to most HPC users. An analyst running models in R, MATLAB or Stata simply wants to submit their simulation quickly, monitor its execution, and get a result as quickly as possible.
Existing approaches
To deal with the challenges of migrating to containers, organizations running container and HPC workloads have several options:

Maintain separate infrastructures
For sites with sunk investments in HPC, this may be a preferred approach. Rather than disrupt existing environments, it may be easier to deploy new containerized applications on a separate cluster and leave the HPC environment alone. The challenge is that this comes at the cost of siloed clusters, increasing infrastructure and management costs.

Run containerized workloads under an existing HPC workload manager
For sites running traditional HPC workloads, another approach is to use existing job submission mechanisms to launch jobs that in turn instantiate Docker containers on one or more target hosts. Sites using this approach can introduce containerized workloads with minimal disruption to their environment. Leading HPC workload managers such as Univa Grid Engine Container Edition and IBM Spectrum LSF are adding native support for Docker containers. Shifter and Singularity are also important open source tools supporting this type of deployment. While this is a good solution for sites with simple requirements that want to stick with their HPC scheduler, they will not have access to native Kubernetes features, and this may constrain flexibility in managing the long-running services where Kubernetes excels.

Use native job scheduling features in Kubernetes
Sites less invested in existing HPC applications can use existing scheduling facilities in Kubernetes for jobs that run to completion. While this is an option, it may be impractical for many HPC users. HPC applications are often optimized for either massive throughput or large-scale parallelism. In both cases, startup and teardown latencies have a discriminating impact.
Latencies that appear to be acceptable for containerized microservices today would render such applications unable to scale to the required levels.

All of these solutions involve tradeoffs. The first option doesn’t allow resources to be shared (increasing costs), and the second and third options require customers to pick a single scheduler, constraining future flexibility.

Mixed workloads on Kubernetes
A better approach is to support HPC and container workloads natively in the same shared environment. Ideally, users should see the environment appropriate to their workload or workflow type.

One approach to supporting mixed workloads is to allow Kubernetes and the HPC workload manager to co-exist on the same cluster, throttling resources to avoid conflicts. While simple, this means that neither workload manager can fully utilize the cluster. Another approach is to use a peer scheduler that coordinates with the Kubernetes scheduler. Navops Command by Univa is a solution that takes this latter approach, augmenting the functionality of the Kubernetes scheduler. Navops Command provides its own web interface and CLI, and allows additional scheduling policies to be enabled on Kubernetes without impacting the operation of the Kubernetes scheduler or existing containerized applications. It plugs into the Kubernetes architecture via the ‘schedulerName’ attribute in the pod spec, acting as a peer scheduler that workloads can choose to use instead of the stock Kubernetes scheduler.

With this approach, Kubernetes acts as a resource manager, making resources available to a separate HPC scheduler.
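The ‘schedulerName’ hook is standard Kubernetes: any pod can opt into an alternate scheduler simply by naming it in its spec. A minimal sketch, in which the scheduler name and image are placeholders rather than real Navops or product identifiers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hpc-style-job
spec:
  # If omitted, this defaults to "default-scheduler"; naming a peer
  # scheduler here hands placement of this pod over to that scheduler.
  schedulerName: my-peer-scheduler    # placeholder name
  containers:
  - name: worker
    image: example/hpc-worker:latest  # hypothetical image
    command: ["./run-simulation"]
```

Pods that omit the attribute continue to be placed by the stock scheduler, which is what lets the two schedulers co-exist on one cluster.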
Cluster administrators can use a visual interface to allocate resources based on policy, or simply drag sliders in a web UI to allocate different proportions of the Kubernetes environment to non-container (HPC) workloads and to native Kubernetes applications and services.

From a client perspective, the HPC scheduler runs as a service deployed in Kubernetes pods, operating just as it would on a bare metal cluster. Navops Command provides additional scheduling features, including resource reservation, run-time quotas, workload preemption and more. This environment works equally well for on-premises, cloud-based or hybrid deployments.

Deploying mixed workloads at IHME
One client having success with mixed workloads is the Institute for Health Metrics & Evaluation (IHME), an independent health research center at the University of Washington. In support of their globally recognized Global Health Data Exchange (GHDx), IHME operates a significantly sized environment comprised of 500 nodes and 20,000 cores running a mix of analytic, HPC, and container-based applications on Kubernetes. This case study describes IHME’s success hosting existing HPC workloads on a shared Kubernetes cluster using Navops Command.

For sites deploying new clusters that want access to the rich capabilities of Kubernetes but need the flexibility to run non-containerized workloads, this approach is worth a look. It offers the opportunity for sites to share infrastructure between Kubernetes and HPC workloads without disrupting existing applications and business processes. It also allows them to migrate their HPC workloads to Docker containers at their own pace.
Quelle: kubernetes

My Three Favorite New Features in Docker Enterprise Edition

I’ve been at Docker for just over two years now, and I’ve worked with every version of Docker Enterprise Edition (née Docker Datacenter) since before there even was a Docker Enterprise Edition (EE). I’m more excited about this new release than any previous release.
There are several new features that are going to ease the management of your applications (both traditional and cloud-native) wherever you need them to run: the cloud or the data center, virtual or physical, Linux or Windows – and now even IBM Z mainframes.
It would take too long to discuss all of the new features, so with that in mind, I’m going to talk about my three favorite features in Docker EE 17.06.

Hybrid-OS Clusters
Docker and Microsoft introduced support for Windows Server containers last fall. This was a major milestone that helped Docker move towards the goal of embracing apps across the entirety of the data center. With this latest release Docker extends hybrid OS operations even further: IT admins can now build and manage clusters comprised of Linux, Windows Server 2016, and IBM Z mainframes  – all from the same management plane. This means you can manage applications comprised of both Windows and Linux components from Docker Universal Control Plane. For instance, you can run your web front end on Linux and connect that to Microsoft SQL Server running on Windows.
Docker EE 17.06 is the first Containers-as-a-Service platform to offer production-level support for the integrated management and security of Windows Server Containers.
For more information on hybrid-OS clusters, check out this video.
Enhanced Role-based Access Control (RBAC)
Docker EE has always featured RBAC. With Docker EE 17.06 we’ve enhanced these capabilities to further extend the way administrators manage access to cluster resources.
To better understand how RBAC works in Docker EE 17.06 it’s probably best if I define four concepts:

Custom Roles: A role is essentially a set of permissions that define what operations someone can perform on cluster resources. As in previous releases, Docker EE 17.06 has a set of predefined roles (View Only, Full Control, etc). What’s new in this release is the ability for administrators to choose from dozens of individual capabilities to define custom roles.

For instance, an admin could define a ‘network-ops’ role that only grants the ability to perform a subset of tasks specifically related to network functionality.
Note: This image shows only a small subset of the operation permissions available in Docker EE 17.06.
In short, roles are what someone can do when working with your Docker EE cluster.

Subject: Subjects define who can perform certain tasks. Subjects can be Docker EE users, teams or organizations.

Collections: Collections are a new concept in Docker EE. They provide a mechanism for administrators to group cluster resources (services, containers, volumes, networks, secrets, etc) together. An admin assigns a special Docker label (com.docker.ucp.access.label) to a particular resource to define what collection the resource belongs to.
Collections can be nested into a directory-like hierarchy. For instance, an admin can create a prod collection, and then a webserver collection beneath it.

Nested collections will inherit permissions from their parent collections.
You can think of collections as where someone can perform tasks.
 

Grant: A grant defines who (subject) can do what (role) where (collection). For example, you can create a grant that specifies that the “Dev Team” gets “View Only” access against resources in the “/Production” collection.

In addition to these new capabilities, Docker EE Advanced 17.06 extends the concept of RBAC to nodes as well. So now administrators can subdivide cluster servers between teams, and ensure that those dedicated resources are only accessed by individuals who have been explicitly granted permission. These features give administrators nearly infinite flexibility with regards to how they want to secure their cluster resources.
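The who/what/where model can be sketched as a tiny access check. This toy code is only meant to illustrate how grants, roles, and nested collections fit together; it is not how Docker EE implements RBAC, and all names in it are invented:

```python
# Toy sketch of Docker EE-style grants: who (subject) can do
# what (role = a set of operations) where (collection path).
# Nested collections inherit grants from their parent collections.

GRANTS = [
    # (subject, operations granted by the role, collection)
    ("dev-team", {"view"}, "/Production"),
    ("network-ops", {"view", "update-network"}, "/Production/webserver"),
]

def allowed(subject, operation, collection):
    """True if some grant covers this subject and operation at this
    collection or at any ancestor collection (inheritance)."""
    for who, ops, where in GRANTS:
        if who == subject and operation in ops:
            # Exact match, or `collection` nested beneath `where`.
            if collection == where or collection.startswith(where + "/"):
                return True
    return False
```

For example, a "View Only" grant on /Production also covers /Production/webserver, while the network-ops role grants nothing outside its own subtree.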
For more information on RBAC in Docker EE 17.06 check out this video.
Automated Image Promotion and Immutable Repos
Ok, this is technically two features, but they’re both awesome: Automated Image Promotion and Immutable Repos. These two capabilities allow administrators to further ensure the integrity of Docker images.
Automated image promotion gives IT practitioners the ability to define criteria that, when met, will automatically promote an image from one Docker Trusted Registry (DTR) repository to another.
For instance, today you might create a new version of an application, run it through QA, and then – if it passes – manually promote it to the production repo. The QA process could include steps such as scanning for vulnerabilities or the usage of components with certain licenses.
With Docker EE 17.06, you can automate portions of this process. You can define criteria based on the image tag, the number of vulnerabilities in the image, the presence of certain packages, or the type of license found in the image. If those criteria are met, the image will automatically be promoted from one repo to the other.

Additionally, you can apply multiple policies to create sophisticated automated promotion scenarios.
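Conceptually, a promotion policy is just a predicate over image metadata. The sketch below uses invented field names purely for illustration; DTR expresses these policies through its own UI and API, not this code:

```python
# Toy promotion-policy check: promote an image from a staging repo to
# production only if every criterion passes. Field names are invented.
def should_promote(image, max_vulns=0, required_tag_prefix="release-",
                   banned_licenses=("AGPL",)):
    if not image["tag"].startswith(required_tag_prefix):
        return False                      # tag criterion failed
    if image["vulnerabilities"] > max_vulns:
        return False                      # scan criterion failed
    if any(lic in banned_licenses for lic in image["licenses"]):
        return False                      # license criterion failed
    return True                           # all criteria met: promote
```

Applying several such policies in sequence is how the more sophisticated automated promotion scenarios compose.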
Immutable repos work alongside image promotion (as well as the existing security scanning and image signing features) to help protect the integrity of your Docker images. As the name implies, immutable repos allow administrators to prevent image tags from being changed in a given repository.
This is aimed at stopping a scenario where someone pushes a version of an image with a given tag, and then someone else overwrites that image by pushing a different version using the same tag as the original user. With immutable repos you can be assured that your images will not be accidentally (or intentionally) overwritten.
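The overwrite scenario is easy to see in a toy model of a registry repository that can be marked immutable. Again, this is an illustration of the behavior, not DTR's implementation:

```python
# Toy model of an immutable repository: once a tag exists, pushing a
# different image under the same tag is rejected instead of silently
# replacing the original.
class Repo:
    def __init__(self, immutable=False):
        self.immutable = immutable
        self.tags = {}  # tag -> image digest

    def push(self, tag, digest):
        if self.immutable and tag in self.tags and self.tags[tag] != digest:
            raise PermissionError(f"tag '{tag}' is immutable; push rejected")
        self.tags[tag] = digest
```

With `immutable=True`, a second user pushing a different image as `1.0` gets an error, and the first user's image stays intact.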
For more information on image promotion and immutable repos, please see this video.
Secure and Manage More Applications
Ok – I know I said I was going to talk about my three favorite new features, but I have to add my other favorite new feature: Docker Security Scanning for Windows images. Docker Security Scanning, part of Docker EE Advanced, automatically scans images for common vulnerabilities and exploits as they are pushed to DTR. Previously this has only worked with Linux images, but with Docker EE Advanced 17.06 it now also works with Windows images!
So there you have it: my three (or four or five depending on how you counted) favorite new features in Docker EE 17.06.
Thanks for taking the time to learn what’s new in Docker EE 17.06. Like I said, there are plenty of other new features. Heck, I didn’t even talk about multi-stage builds or the new UI. I hope after reading this, that you’re as excited about Docker EE 17.06 as I am.
Continue your Docker journey with these helpful links:

Try Docker Enterprise Edition for free
Register for an upcoming Docker webinar
Review What’s New with Docker EE
Read the documentation


Docker is Headed to Gartner Catalyst 2017

The Docker team will be in sunny San Diego, CA, August 21-23 for Gartner Catalyst. Come by and visit us in Booth #508 to meet with our Docker Enterprise Edition (EE) experts, see a demo of Docker EE, and ask us any questions you may have before and after any of the Gartner sessions on Docker and containers. Better yet, schedule a meeting with us and we’ll not only answer all your questions, you will also get a special gift.

This year’s Catalyst event includes an entire topic dedicated to Docker and containers, which you can find by looking for the topic “Docker & Containers” in the schedule builder. If you are still trying to separate fact from fiction about Docker and want a specific recommendation, there is a great Tech Demo session by Gartner analyst Richard Watson we think you might like, titled Seven Docker & Container Myths We Need To Bust.
We hope you will join us at Gartner Catalyst to get the latest research on the next big trends for IT, but if you are not in San Diego, we hope to see you at one of these other upcoming events:

VMworld, Las Vegas, NV, August 27-31 (Booth #1206)
Microsoft Ignite, Orlando, FL, September 25-29
DockerCon Europe, Copenhagen, DK, October 16-19

To learn more about Docker solutions for the enterprise:

Test drive Docker Enterprise Edition for free
Read more about Docker or view pricing
Learn what’s new with Docker Enterprise Edition


Announcing the New Release of Docker Enterprise Edition

We are excited to share the new release of Docker Enterprise Edition. By supporting IBM Z and Windows Server 2016, this release makes Docker Enterprise Edition the first Containers-as-a-Service (CaaS) solution on the market for modernizing all applications without disruption to you and your IT environment.
 

 
Docker Enterprise Edition (EE) 17.06 embraces Windows, Linux and Linux-based mainframe applications, bringing the key benefits of CaaS to the enterprise application portfolio. Most enterprises manage a diverse set of applications that includes both traditional applications and microservices, built on Linux and Windows, and intended for x86 servers, mainframes, and public clouds. Docker EE unites all of these applications into a single platform, complete with customizable and flexible access control, support for a broad range of applications and infrastructure, and a highly automated software supply chain. These capabilities allow organizations to easily layer Docker EE onto existing processes and workflows, aligning to existing organizational structures while delivering improved resource utilization and reduced maintenance time.
This release includes UCP 2.2 and DTR 2.3 and establishes Docker EE as a key IT platform for both new application development as well as application modernization across both on-premises and cloud environments.
 

Multi-Architecture Orchestration
Docker EE is the only solution for modernizing Windows, Linux, and mainframe applications across on-premises and cloud, without requiring code changes. With organizations dedicating large portions of their IT budget towards maintaining existing apps and the digital era forcing everyone to focus on innovation, Docker EE provides a non-disruptive way to modernize existing applications to make them more portable, more scalable, and easier to update. Most enterprise organizations have a mixture of .NET, Java, and mainframe applications in their portfolio. Docker EE provides a way to modernize all these different applications by packaging them in a standard format which does not require software development teams to change their code. Organizations can containerize traditional apps and microservices and deploy them in the same cluster, either on-premises or in the cloud.
Key new features include:

Support for full lifecycle management of Docker Windows containers including image scanning, secrets management, and overlay networking
Integrate Windows and Linux applications through the use of overlay networking to support hybrid applications
Ability to intelligently orchestrate across mixed clusters of Windows, Linux, and mainframe worker nodes
With added support of Linux on IBM z Systems, Docker delivers a consistent experience (Compose files, networking, security, lifecycle management) across Linux, Windows, and Linux-on-mainframe applications
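In practice, mixed-OS orchestration shows up as ordinary service placement. Below is a minimal sketch of a hybrid stack file; the service names and the Windows image are assumptions for illustration, though the node.platform.os placement constraint is real swarm syntax:

```yaml
# Illustrative stack file for a mixed Windows/Linux swarm (hypothetical services).
version: "3.3"
services:
  web:                                    # Windows-based front end (made-up image name)
    image: myorg/aspnet-app:latest
    deploy:
      placement:
        constraints:
          - node.platform.os == windows   # schedule only on Windows workers
  db:                                     # Linux-based back end
    image: postgres:9.6
    deploy:
      placement:
        constraints:
          - node.platform.os == linux     # schedule only on Linux workers
networks:
  default:
    driver: overlay                       # overlay networking connects both halves
```

Deployed with `docker stack deploy`, the scheduler sends each service to workers of the matching OS while the overlay network lets the two halves talk to each other.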

 
Secure Multi-Tenancy
As container adoption grows across an organization, roles and responsibilities need to align with existing organizational structures and processes. The latest release of Docker EE allows organizations to customize role-based access control and define both physical and logical boundaries for different teams sharing the same Docker EE environment. These new capabilities allow teams to bring their own organizational models to a Docker environment whether that is a shared IT services model where different teams rent their own nodes, multiple teams share resources, or a specific team is granted access to a collection of specific resources. The enhancements allow complex organizations to easily onboard new lines of business while keeping application owners separate across a shared environment.
Key new features include:

Leverage built-in default roles or create custom roles with granular permissions drawn from the entire Docker API
Assign grants to users and teams for resource collections that include services, containers, volumes, networks, and secrets
Leverage RBAC for nodes to segment a team’s access to a specific set of nodes within a Docker EE environment
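Conceptually, each grant in this model ties together three things: a subject (who), a role (what they may do), and a collection (which resources). The sketch below illustrates the model only; it is not actual UCP configuration syntax, and the team, role, and collection names are made up:

```yaml
# Conceptual illustration of a UCP grant -- not a real configuration file.
grant:
  subject: team:payments-dev       # user or team (hypothetical)
  role: restricted-control         # built-in or custom role, i.e. a set of API permissions
  collection: /Shared/payments     # the slice of cluster resources the role applies to
```

Because the three pieces are independent, the same team can hold a powerful role over its own collection and a read-only role over a shared one.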

 
Policy-Based Automation
Docker EE is a platform solution that supports a wide variety of applications, and a key priority is ensuring that this diversity does not add complexity nor slow down the software supply chain. In a dynamic container environment, organizations need to automate as much of the process as they can without sacrificing security. New features in Docker EE allow organizations to create predefined policies that can remove bottlenecks in the process to maintain compliance and prevent human errors, while still accelerating application delivery.
Key new features include:

Automatic image promotion using pre-defined policies to move images from one repository to another within the same registry
Immutable repositories prevent image tags from being modified or deleted, ensuring that production app repositories are secure and locked down
New APIs for:

Access control permissions
User / Team / Org management
Cluster configuration
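A promotion policy, for instance, boils down to a predicate on images plus a target repository. The fragment below is a conceptual sketch only, not DTR's actual policy schema; the repository names and criteria keys are invented for illustration:

```yaml
# Conceptual sketch of an image-promotion policy -- not DTR's real schema.
promotion-policy:
  source: dev/webapp               # images land here first
  target: prod/webapp              # promoted automatically when criteria pass
  criteria:
    tag-matches: "v*"              # only release-style tags
    scan-result: no-critical-cves  # the image scan must come back clean
```

Combined with an immutable target repository, a policy like this keeps humans out of the promotion path while still gating what reaches production.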

 
Next Steps:
There are many new and exciting capabilities with this release of Docker Enterprise Edition and over the next few weeks, we’ll be going into more detail on each of them. To learn more, check out these additional resources:

See the new features in action in our new hosted demo environment. With no software to install, you’re just minutes away from experiencing Docker EE 17.06 for yourself.
Register for these upcoming webinars:

Thursday, Aug. 24th: What’s New with Docker Enterprise Edition
Tuesday, Aug. 29th: Docker Captains on Deck: Swarm 
Thursday, Aug. 31st: Deploying Multi-OS Applications with Docker
Also stay tuned for What’s New sessions in your local region

Read the documentation or learn more about Docker EE


The post Announcing the New Release of Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

High Performance Networking with EC2 Virtual Private Clouds

One of the most popular platforms for running Kubernetes is Amazon Web Services’ Elastic Compute Cloud (AWS EC2). With more than a decade of experience delivering IaaS, and expanding over time to include a rich set of services with easy-to-consume APIs, EC2 has captured developer mindshare and loyalty worldwide. When it comes to networking, however, EC2 has some limits that hinder performance and make deploying Kubernetes clusters to production unnecessarily complex. The preview release of Romana v2.0, a network and security automation solution for Cloud Native applications, includes features that address some well-known network issues when running Kubernetes in EC2.

Traditional VPC Networking Performance Roadblocks

A Kubernetes pod network is separate from an Amazon Virtual Private Cloud (VPC) instance network; consequently, off-instance pod traffic needs a route to the destination pods. Fortunately, VPCs support setting these routes. When building a cluster network with the kubenet plugin, whenever new nodes are added, the AWS cloud provider will automatically add a VPC route to the pods running on that node.

Using kubenet to set routes provides native VPC network performance and visibility. However, since kubenet does not support more advanced network functions like network policy for pod traffic isolation, many users choose to run a Container Network Interface (CNI) provider on the back end. Before Romana v2.0, all CNI network providers required an overlay when used across Availability Zones (AZs), leaving CNI users who want to deploy HA clusters unable to get the performance of native VPC networking.

Even users who don’t need advanced networking run into a restriction: VPC route tables support a maximum of 50 entries, which limits the size of a cluster to 50 nodes (or fewer, if some VPC routes are needed for other purposes). Until Romana v2.0, users also needed to run an overlay network to get around this limit. So whether you were interested in advanced networking for traffic isolation or in running large production HA clusters (or both), you were unable to get the performance and visibility of native VPC networking.

[Table — Native VPC Networking Availability before Romana v2.0, broken out by cluster size (Small / >50 nodes) for Advanced Network Features and HA Production Deployment, in Single Zone and Multi-zone configurations.] Before Romana v2.0, native VPC networking wasn’t available for HA clusters greater than 50 nodes, and network policy required an overlay across zones.

Kubernetes on Multi-Segment Networks

The way to avoid running out of VPC routes is to use them sparingly, by making each one forward pod traffic for multiple instances. From a networking perspective, that means the VPC route needs to forward to a router, which can then forward traffic on to the final destination instance.

Romana is a CNI network provider that configures routes on the host to forward pod network traffic without an overlay. Since inter-node routes are installed on hosts, no VPC routes are necessary at all. However, when the VPC is split into subnets for an HA deployment across zones, VPC routes are necessary. Fortunately, the inter-node routes on hosts allow each host to act as a network router and forward traffic inbound from another zone just as it would forward traffic from local pods. This makes any Kubernetes node configured by Romana able to accept inbound pod traffic from other zones and forward it to the proper destination node on the subnet.

Because of this local routing function, top-level routes to pods on other instances on the subnet can be aggregated, collapsing the total number of routes necessary to as few as one per subnet. To avoid using a single instance to forward all traffic, more routes can be used to spread traffic across multiple instances, up to the maximum number of available routes (i.e. equivalent to kubenet). The net result is that you can now build clusters of any size across AZs without an overlay. Romana clusters also support network policies for better security through network isolation.

Making it All Work

While the combination of aggregated routes and node forwarding on a subnet eliminates overlays and avoids the VPC 50-route limitation, it imposes certain requirements on the CNI provider. For example, hosts should be configured with inter-node routes only to other nodes in the same zone on the local subnet. Traffic to all other hosts must use the default route off host, then use the (aggregated) VPC route to forward traffic out of the zone. Also: when adding a new host, in order to maintain aggregated VPC routes, the CNI plugin needs to use IP addresses for pods that are reachable on the new host.

The latest release of Romana also addresses questions about how VPC routes are installed; what happens when a node that is forwarding traffic fails; how forwarding node failures are detected; and how routes get updated and the cluster recovers.

Romana v2.0 includes a new AWS route configuration function to set VPC routes. This is part of a new set of network advertising features that automate route configuration in L3 networks. Romana v2.0 also includes topology-aware IP address management (IPAM) that enables VPC route aggregation to stay within the 50-route limit as described here, as well as new health checks to update VPC routes when a routing instance fails. For smaller clusters, Romana configures VPC routes as kubenet does, with a route to each instance, taking advantage of every available VPC route.

Native VPC Networking Everywhere

When using Romana v2.0, native VPC networking is now available for clusters of any size, with or without network policies, and for HA production deployments split across multiple zones.

[Table — Native VPC Networking Availability with Romana v2.0, same breakdown as above.] With Romana v2.0, native VPC networking is available for HA clusters of any size, and network policy never requires an overlay.

The preview release of Romana v2.0 is available here. We welcome comments and feedback so we can make EC2 deployments of Kubernetes as fast and reliable as possible.

— Juergen Brendel and Chris Marino, co-founders of Pani Networks, sponsor of the Romana project
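The route-aggregation arithmetic above can be sketched in a few lines of illustrative Python (this is not Romana code, and the CIDR blocks are made-up assumptions):

```python
import ipaddress

# Hypothetical topology-aware IPAM layout: one zone's pod CIDR is carved
# into one /26 block per node.
zone_block = ipaddress.ip_network("10.112.0.0/18")     # all pod IPs in one zone
node_blocks = list(zone_block.subnets(new_prefix=26))  # one /26 per node

# Without aggregation: one VPC route per node, which blows through the
# 50-entry VPC route table limit long before the zone is full.
per_node_routes = len(node_blocks)

# With aggregation: the contiguous per-node blocks collapse back into the
# zone block, so a single VPC route (zone CIDR -> forwarding instance)
# covers every pod in the zone.
aggregated_routes = len(list(ipaddress.collapse_addresses(node_blocks)))

print(per_node_routes, aggregated_routes)  # 256 per-node routes collapse to 1
```

In practice Romana spreads traffic across more than one forwarding instance, which corresponds to keeping several intermediate blocks instead of collapsing all the way down to one route per subnet.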
Quelle: kubernetes

Kompose Helps Developers Move Docker Compose Files to Kubernetes

I’m pleased to announce that Kompose, a conversion tool for developers to transition Docker Compose applications to Kubernetes, has graduated from the Kubernetes Incubator to become an official part of the project. Since our first commit on June 27, 2016, Kompose has achieved 13 releases over 851 commits, gaining 21 contributors since the inception of the project. Our work started at Skippbox (now part of Bitnami) and grew through contributions from Google and Red Hat.

The Kubernetes Incubator allowed contributors to get to know each other across companies, as well as collaborate effectively under guidance from Kubernetes contributors and maintainers. Our incubation led to the development and release of a new and useful tool for the Kubernetes ecosystem.

We’ve created a reliable, scalable Kubernetes environment from an initial Docker Compose file. We worked hard to convert as many keys as possible to their Kubernetes equivalent. Running a single command gets you up and running on Kubernetes: kompose up.

We couldn’t have done it without feedback and contributions from the community! If you haven’t yet tried Kompose on GitHub, check it out!

Kubernetes guestbook

The go-to example for Kubernetes is the famous guestbook, which we use as a base for conversion. Here is an example from the official kompose.io site, starting with a simple Docker Compose file.

First, we’ll retrieve the file:

$ wget https://raw.githubusercontent.com/kubernetes/kompose/master/examples/docker-compose.yaml

You can test it out by first deploying to Docker Compose:

$ docker-compose up -d
Creating network “examples_default” with the default driver
Creating examples_redis-slave_1
Creating examples_frontend_1
Creating examples_redis-master_1

And when you’re ready to deploy to Kubernetes:

$ kompose up
We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the kompose convert and kubectl create -f commands instead.
INFO Successfully created Service: redis
INFO Successfully created Service: web
INFO Successfully created Deployment: redis
INFO Successfully created Deployment: web
Your application has been deployed to Kubernetes. You can run kubectl get deployment,svc,pods,pvc for details

Check out other examples of what Kompose can do.

Converting to alternative Kubernetes controllers

Kompose can also convert to specific Kubernetes controllers with the use of flags:

$ kompose convert --help
Usage:
  kompose convert [file] [flags]

Kubernetes Flags:
      --daemon-set               Generate a Kubernetes daemonset object
  -d, --deployment               Generate a Kubernetes deployment object
  -c, --chart                    Create a Helm chart for converted objects
      --replication-controller   Generate a Kubernetes replication controller object
…

For example, let’s convert our guestbook example to a DaemonSet:

$ kompose convert --daemon-set
INFO Kubernetes file “frontend-service.yaml” created
INFO Kubernetes file “redis-master-service.yaml” created
INFO Kubernetes file “redis-slave-service.yaml” created
INFO Kubernetes file “frontend-daemonset.yaml” created
INFO Kubernetes file “redis-master-daemonset.yaml” created
INFO Kubernetes file “redis-slave-daemonset.yaml” created

Key Kompose 1.0 features

With our graduation comes the release of Kompose 1.0.0. Here’s what’s new:

Docker Compose Version 3: Kompose now supports Docker Compose version 3. New keys such as ‘deploy’ now convert to their Kubernetes equivalent.
Docker Push and Build Support: When you supply a ‘build’ key within your docker-compose.yaml file, Kompose will automatically build and push the image to the respective Docker repository for Kubernetes to consume.
New Keys: With the addition of version 3 support, new keys such as pid and deploy are supported. For full details on what Kompose supports, view our conversion document.
Bug Fixes: In every release we fix any bugs related to edge cases when converting. This release fixes issues relating to converting volumes with ‘./’ in the target name.

What’s ahead?

As we continue development, we will strive to convert as many Docker Compose keys as possible for all current and future Docker Compose releases, converting each one to its Kubernetes equivalent. All future releases will be backwards-compatible.

Install Kompose
Kompose Quick Start Guide
Kompose Web Site
Kompose Documentation

— Charlie Drage, Software Engineer, Red Hat

The Kubernetes Incubator helps new projects adopt Kubernetes best practices as well as develop a healthy community.

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Get involved with the Kubernetes project on GitHub
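To make the version-3 key conversion concrete, here is a hand-written illustrative fragment (not taken from the guestbook example; the service name and image are assumptions):

```yaml
# Illustrative docker-compose.yaml fragment (version 3) -- hypothetical service.
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3   # a version-3 'deploy' key; kompose carries this into
                    # the replica count of the generated Kubernetes Deployment
```

Running kompose convert on a file like this should yield a web Deployment (with three replicas) and a matching web Service.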
Quelle: kubernetes

DockerCon Europe Diversity Scholarship!

Each year, DockerCon brings the community together to learn, belong and contribute. With generous support from the Open Container Initiative (OCI), our team has created the DockerCon Europe Scholarship Program to provide members of the Docker community who are traditionally underrepresented with mentorship and a financial scholarship to attend DockerCon in Copenhagen this year. This scholarship program aims to foster inclusivity by supporting members of our community through access to the resources, tools and mentorship needed to facilitate career and educational development.

If you are interested in applying for the DockerCon Scholarship, follow the steps below:
Application Process:
The application process includes completing one of the five self-paced trainings along with the scholarship application form.
Step 1
Complete at least one of the free self-paced courses available in the Docker Playground. These courses are intended for both Dev and Ops beginner and intermediate level Docker users. Select which course you feel best fits you.
Step 2
After you’ve finished one of the courses, complete the application here. In the application, you will need to provide the name(s) of the lab(s) you completed along with the answers to the quiz at the end of the course.  
Deadline to Apply:
Tuesday, 5 September, 2017 at 5:00PM PST
Selection Process
A committee of Docker community members will review and select the scholarship recipients based on completion of one of the courses and the application. Recipients will be notified by the week of 18 September 2017.
What’s included if you are selected for the scholarship:

Full DockerCon Conference Pass
Round-trip airfare
Hotel accommodations for 4 nights (16 October, 17 October, 18 October, 19 October )
1:1 mentorship session with a member of the  Docker community on-site at DockerCon

Requirements

All applicants must complete at least one self-paced online course found at: http://training.docker.com/category/self-paced-online 
Must be able to attend DockerCon Europe 2017 (16 October – 19 October, 2017)
Must be 18 years old or older to apply
Must have a valid passport to travel to Copenhagen, Denmark.
Learn more about the DockerCon Diversity Scholarship here. 

Have questions or concerns? Reach us at dockercon@docker.com
More free Docker resources:

Attend local Docker meetups
Check out the Docker Playground
View the Docker Youtube Channel


The post DockerCon Europe Diversity Scholarship! appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/