Beta Docker for Mac and Windows with Kubernetes

Today, as part of our effort to bring Kubernetes support to the Docker platform, we’re excited to announce that we will also add optional Kubernetes to Docker Community Edition for Mac and Windows. We’re demoing previews at DockerCon (stop by the Docker booth!) and will have a beta program ready at the end of 2017. Sign up to be notified when the beta is ready.
With Kubernetes support in Docker CE for Mac and Windows, Docker Inc. can provide customers an end-to-end suite of container-management software and services that spans from developer workstations through test and CI/CD to production, on-prem or in the cloud.
Docker for Mac and Windows are the most popular way to configure a Docker dev environment, used every day by hundreds of thousands of developers to build, test and debug containerized apps. Docker for Mac and Windows are popular because they’re simple to install, stay up-to-date automatically and are tightly integrated with macOS and Windows, respectively.
The Kubernetes community has built solid solutions for installing limited Kubernetes development setups on developer workstations, including Minikube (itself based partly on the docker-machine project that predated Docker for Mac and Windows). Common to these solutions, however, is that they can be tricky to configure for a tight docker build → run → test iteration, and that they rely on outdated Docker versions.
Once Kubernetes support lands in Docker for Mac and Windows, developers building docker-compose and Swarm-based apps, as well as apps destined for deployment on Kubernetes, will get a simple-to-use development system that takes optimal advantage of their laptop or workstation. All container tasks (whether build, run or push) will run on the same Docker instance with a shared set of images, volumes and containers. And it’ll be based on the latest-and-greatest version of the Docker platform, giving Kubernetes desktop users access to enhancements like multi-stage builds.
As part of our effort to integrate Kubernetes with Docker, we’re building Kubernetes components that use Custom Resources and the API server aggregation layer to make it simpler to deploy Docker Compose apps as Kubernetes-native Pods and Services. These components will ship in both Docker EE and in Docker CE for Mac and Windows.

We can’t wait to show you Kubernetes running in Docker for Mac and Windows. Drop by the Docker booth at DockerCon EU 17 and sign up for the beta to be notified when we have something that’s ready to try.


The post Beta Docker for Mac and Windows with Kubernetes appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Video series: Modernizing Java Apps for IT Pros

Today we start releasing a new video series in Docker’s Modernize Traditional Apps (MTA) program, aimed at IT pros who manage, maintain and deploy Java apps. These are the first four of a five-part series showing you how to take a Java EE application written to run on JBoss Wildfly, move it to a Docker container and deploy it to a scalable, highly-available environment in the cloud, without any changes to the app.

Part 1 introduces the series, explaining what is meant by “traditional” apps and the problems they present. Traditional apps are built to run on a server, rather than on a modern application platform. They have common traits, like being complex to manage and difficult to deploy. A portfolio of traditional applications tends to under-utilize its infrastructure, and over-utilize the humans who manage it. Docker Enterprise Edition (EE) fixes that, giving you a consistent way to package, release and manage all your apps, without having to re-write them.

Part 2 shows how easy it is to move traditional apps to Docker. I start with a Java EE application running on Wildfly, and package the entire monolithic application as a Docker image. Then I run the application in a container on my MacBook Pro. I do that without changing the app, and without needing to access the original source code.
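As a sketch of what that packaging can look like (the base image tag and WAR filename here are hypothetical, not necessarily the ones used in the video), a Dockerfile that lifts an existing compiled application onto a Wildfly base image might be:

```dockerfile
# Hypothetical example: package a pre-built WAR onto an official Wildfly base image.
# No application source code is needed, only the deployable artifact.
FROM jboss/wildfly:10.1.0.Final

# Wildfly auto-deploys anything placed in this directory at startup.
COPY app.war /opt/jboss/wildfly/standalone/deployments/

EXPOSE 8080
```

Building and running this image is all it takes to containerize the monolith: docker build -t my-java-app . followed by docker run -p 8080:8080 my-java-app.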

Part 3 covers the upgrade workflow in Docker. I build a new version of the Docker image for my app by migrating it to a Tomcat EE image. I also replace the presentation layer, implemented with JavaServer Faces, with a JavaScript client written in React. I show how to do this using Maven and Node.js images to build them without having those toolchains on your laptop. Docker allows you to split off parts of the application and update them with modern technology. In this case, I make use of the application’s REST interface to start moving towards a microservices architecture that’s suited to deployment in the cloud.
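The “build inside an image” approach described above can be sketched with a multi-stage Dockerfile (image tags, paths and filenames here are illustrative, not those from the video): a Maven stage compiles the app, and only the resulting artifact is copied into the runtime image, so no Java toolchain is needed on the host:

```dockerfile
# Illustrative multi-stage build: compile with Maven inside a container.
FROM maven:3.5-jdk-8 AS builder
WORKDIR /src
COPY . .
RUN mvn -q package

# Runtime stage: only the built WAR ends up in the final image.
FROM tomee:8-jre-7.0.2-webprofile
COPY --from=builder /src/target/app.war /usr/local/tomee/webapps/
```

A similar two-stage pattern with a node image works for building the React client, keeping the final images free of build tools.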

Part 4 shows how to share the application images through a registry, in this case Docker Hub. A registry allows you to share the image publicly. In addition to sharing images, Docker Hub and Docker Trusted Registry support automating the build process. I’ll connect the GitHub repository containing the application source code to the Docker Hub repository and configure it to build a new image every time code is pushed. Updated images of the application will always be available for deployment.

In an upcoming Part 5, I’ll deploy the application as a cluster in the cloud using Docker EE. Migrating traditional apps to Docker EE gives you increased efficiency, portability and security. If you’re planning a move to the cloud, or upgrading to modern infrastructure – or if you just want to consolidate workloads on existing infrastructure – Docker makes it easy.
For more information about Modernizing Traditional Applications:

Download the MTA Kit 
Watch the MTA Webinar
Get in touch with us


Least Privilege Container Orchestration

The Docker platform and the container have become the standard for packaging, deploying, and managing applications. To coordinate containers running across multiple nodes in a cluster, a key capability is required: a container orchestrator.

Orchestrators are responsible for critical clustering and scheduling tasks, such as:

Managing container scheduling and resource allocation.
Supporting service discovery and hitless application deploys.
Distributing the necessary resources that applications need to run.

Unfortunately, the distributed nature of orchestrators and the ephemeral nature of resources in this environment makes securing orchestrators a challenging task. In this post, we will describe in detail the less-considered—yet vital—aspect of the security model of container orchestrators, and how Docker Enterprise Edition with its built-in orchestration capability, Swarm mode, overcomes these difficulties.
Motivation and threat model
One of the primary objectives of Docker EE with swarm mode is to provide an orchestrator with security built-in. To achieve this goal, we developed the first container orchestrator designed with the principle of least privilege in mind.
In computer science, the principle of least privilege in a distributed system requires that each participant of the system must have access only to the information and resources that are necessary for its legitimate purpose. No more, no less.

”A process must be able to access only the information and resources that are necessary for its legitimate purpose.”

Principle of Least Privilege

Each node in a Docker EE swarm is assigned a role: either manager or worker. These roles define a coarse-grained level of privilege for the node: administration and task execution, respectively. However, regardless of its role, a node has access only to the information and resources it needs to perform its tasks, with cryptographically enforced guarantees. As a result, it becomes easier to secure clusters against even the most sophisticated attacker models: attackers that control the underlying communication networks or even compromised cluster nodes.
Secure-by-default core
There is an old security maxim that states: if it doesn’t come by default, no one will use it. Docker Swarm mode takes this notion to heart, and ships with secure-by-default mechanisms to solve three of the hardest and most important aspects of the orchestration lifecycle:

Trust bootstrap and node introduction.
Node identity issuance and management.
Authenticated, Authorized, Encrypted information storage and dissemination.

Let’s look at each of these aspects individually.
Trust Bootstrap and Node Introduction
The first step to a secure cluster is tight control over membership and identity. Without it, administrators cannot rely on the identities of their nodes and enforce strict workload separation between nodes. This means that unauthorized nodes can’t be allowed to join the cluster, and nodes that are already part of the cluster aren’t able to change identities, suddenly pretending to be another node.
To address this need, nodes managed by Docker EE’s Swarm mode maintain strong, immutable identities. The desired properties are cryptographically guaranteed by using two key building-blocks:

Secure join tokens for cluster membership.
Unique identities embedded in certificates issued from a central certificate authority.

Joining the Swarm
To join the swarm, a node needs a copy of a secure join token. The token is unique to each operational role within the cluster—there are currently two types of nodes: workers and managers. Due to this separation, a node with a copy of a worker token will not be allowed to join the cluster as a manager. The only way to get this special token is for a cluster administrator to interactively request it from the cluster’s manager through the swarm administration API.
The token is securely and randomly generated, but it also has a special syntax that makes leaks of this token easier to detect: a special prefix that you can easily monitor for in your logs and repositories. Fortunately, even if a leak does occur, tokens are easy to rotate, and we recommend that you rotate them often—particularly in the case where your cluster will not be scaling up for a while.
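Because swarm join tokens all carry the recognizable “SWMTKN-” prefix, a leak check over logs or repositories can be as simple as a pattern scan. A minimal sketch (the log lines below are made up for illustration):

```python
import re

# Swarm join tokens all begin with the "SWMTKN-" prefix, which makes
# accidental leaks easy to spot with a simple pattern scan.
TOKEN_PATTERN = re.compile(r"SWMTKN-\S+")

def find_leaked_tokens(lines):
    """Return every string that looks like a swarm join token."""
    leaks = []
    for line in lines:
        leaks.extend(TOKEN_PATTERN.findall(line))
    return leaks

# Hypothetical log lines, for illustration only.
log = [
    "INFO starting service",
    "DEBUG ran: docker swarm join --token SWMTKN-1-abc123-def456 10.0.0.1:2377",
]
print(find_leaked_tokens(log))  # → ['SWMTKN-1-abc123-def456']
```

If a scan like this ever fires, rotating the token with docker swarm join-token --rotate immediately invalidates the leaked value.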

Bootstrapping trust
As part of establishing its identity, a new node will ask for a new identity to be issued by any of the network managers. However, under our threat model, all communications can be intercepted by a third party. This raises the question: how does a node know that it is talking to a legitimate manager?

Fortunately, Docker has a built-in mechanism for preventing this from happening. The join token, which the host uses to join the swarm, includes a hash of the root CA’s certificate. The host can therefore use one-way TLS and use the hash to verify that it’s joining the right swarm: if the manager presents a certificate not signed by a CA that matches the hash, the node knows not to trust it.
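The pinning idea can be illustrated with a short sketch (the digest format here is simplified and is not Docker’s exact token encoding): the joining node hashes the CA certificate the manager presents and compares it to the hash carried in the token:

```python
import hashlib

def ca_hash(cert_pem: bytes) -> str:
    # Digest of the root CA certificate, as it would be pinned in the join token.
    return hashlib.sha256(cert_pem).hexdigest()

def manager_is_legitimate(presented_cert: bytes, pinned_hash: str) -> bool:
    # The joining node trusts the manager only if the CA certificate it
    # presents matches the hash embedded in the join token.
    return ca_hash(presented_cert) == pinned_hash

# Placeholder certificate bytes, for illustration only.
root_ca = b"-----BEGIN CERTIFICATE-----\n...example...\n-----END CERTIFICATE-----\n"
pinned = ca_hash(root_ca)  # distributed out-of-band inside the join token

print(manager_is_legitimate(root_ca, pinned))         # True: the real CA
print(manager_is_legitimate(b"attacker CA", pinned))  # False: an impostor
```

Because the hash travels inside the token handed out by an administrator, an attacker who controls the network cannot substitute their own CA without being detected.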
Node identity issuance and management
Identities in a swarm are embedded in x509 certificates held by each individual node. In a manifestation of the least privilege principle, the certificates’ private keys are restricted strictly to the hosts where they originate. In particular, managers do not have access to private keys of any certificate but their own.
Identity Issuance
To receive their certificates without sharing their private keys, new hosts begin by issuing a certificate signing request (CSR), which the managers then convert into a certificate. This certificate now becomes the new host’s identity, making the node a full-fledged member of the swarm!

When used alongside the secure bootstrapping mechanism, this process for issuing identities to joining nodes is secure by default: all communicating parties are authenticated and authorized, and no sensitive information is ever exchanged in cleartext.
Identity Renewal
However, securely joining nodes to a swarm is only part of the story. To minimize the impact of leaked or stolen certificates and to remove the complexity of managing CRL lists, Swarm mode uses short-lived certificates for the identities. These certificates have a default expiration of three months, but can be configured to expire every hour!

This short certificate expiration time means that certificate rotation can’t be a manual process, as it usually is for most PKI systems. With swarm, all certificates are rotated automatically and in a hitless fashion. The process is simple: using a mutually authenticated TLS connection to prove ownership over a particular identity, a Swarm node regularly generates a new public/private key pair and sends the corresponding CSR to be signed, creating a completely new certificate while maintaining the same identity.
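Conceptually, the renewal step a node performs is the same as generating a fresh key pair and a CSR that re-asserts the old subject name, which can be sketched with plain OpenSSL (the subject below is a made-up node identity; real Swarm nodes do this internally over mTLS, not with the CLI):

```shell
# Generate a brand-new RSA key pair and a CSR that re-asserts the
# same identity (Common Name) as the expiring certificate.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout node.key \
    -out node.csr \
    -subj "/CN=swarm-node-1"

# The CSR (not the private key) is what gets sent to the CA for signing.
openssl req -in node.csr -noout -subject
```

Note that the private key never leaves the host; only the CSR travels to the manager, preserving the least-privilege property described above.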
Authenticated, Authorized, Encrypted information storage and dissemination
During the normal operation of a swarm, information about tasks has to be sent to the worker nodes for execution. This includes not only which containers are to be executed by a node, but also all the resources that are necessary for the successful execution of those containers, including sensitive secrets such as private keys, passwords, and API tokens.
Transport Security
Since every node participating in a swarm is in possession of a unique identity in the form of an X.509 certificate, communicating securely between nodes is trivial: nodes can use their respective certificates to establish mutually authenticated connections between one another, inheriting the confidentiality, authenticity and integrity properties of TLS.

One interesting detail about Swarm mode is the fact that it uses a push model: only managers are allowed to send information to workers—significantly reducing the surface of attack manager nodes expose to the less privileged worker nodes.
Strict Workload Separation Into Security Zones
One of the responsibilities of manager nodes is deciding which tasks to send to each of the workers. Managers make this determination using a variety of strategies, scheduling workloads across the swarm depending on both the unique properties of each node and each workload.
In Docker EE with Swarm mode, administrators can influence these scheduling decisions by using labels that are securely attached to individual node identities. These labels allow administrators to group nodes together into different security zones, limiting the exposure of particularly sensitive workloads and any secrets related to them.

Secure Secret Distribution
In addition to facilitating the identity issuance process, manager nodes have the important task of storing and distributing any resources needed by a worker. Secrets are treated like any other type of resource, and are pushed down from the manager to the worker over the secure mTLS connection.

On the hosts, Docker EE ensures that secrets are provided only to the containers they are destined for. Other containers on the same host will not have access to them. Docker exposes secrets to a container as a temporary file system, ensuring that secrets are always stored in memory and never written to disk. This method is more secure than competing alternatives, such as storing them in environment variables. Once a task completes, the secret is gone forever.
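Granting a secret to a service can be sketched with a stack file like the following (the service and secret names are placeholders); the secret surfaces inside the container at /run/secrets/db_password, on the in-memory filesystem described above:

```yaml
# Illustrative stack file: the "web" service is granted exactly one secret.
# Other services on the same host never see it.
version: "3.1"
services:
  web:
    image: nginx:alpine
    secrets:
      - db_password   # mounted at /run/secrets/db_password, in memory only
secrets:
  db_password:
    external: true    # created beforehand, e.g. with: docker secret create db_password -
```

Because the secret is declared per service, the scheduler only pushes it to hosts actually running a task for that service.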
Storing secrets
On manager hosts, secrets are always encrypted at rest. By default, the key that encrypts these secrets (known as the Data Encryption Key, DEK) is also stored in plaintext on disk. This makes it easy for those with minimal security requirements to start using Docker Swarm mode.
However, once you are running a production cluster, we recommend you enable auto-lock mode. When auto-lock mode is enabled, a newly rotated DEK is encrypted with a separate Key Encryption Key (KEK). This key is never stored on the cluster; the administrator is responsible for storing it securely and providing it when the cluster starts up. This is known as unlocking the swarm.
Swarm mode supports multiple managers, relying on the Raft Consensus Algorithm for fault tolerance. Secure secret storage scales seamlessly in this scenario. Each manager host has a unique disk encryption key, in addition to the shared key. Furthermore, Raft logs are encrypted on disk and are similarly unavailable without the KEK when in auto-lock mode.
What happens when a node is compromised?

In traditional orchestrators, recovering from a compromised host is a slow and complicated process. With Swarm mode, recovery is as easy as running the docker node rm command. This removes the affected node from the cluster, and Docker will take care of the rest, namely re-balancing services and making sure other hosts know not to talk to the affected node.
As we have seen, thanks to least privilege orchestration, even if the attacker were still active on the host, they would be cut off from the rest of the network. The host’s certificate — its identity — is blacklisted, so the managers will not accept it as valid.
Conclusion
Docker EE with Swarm mode ensures security by default in all key areas of orchestration:

Joining the cluster. Prevents malicious nodes from joining the cluster.
Organizing hosts into security zones. Prevents lateral movement by attackers.
Scheduling tasks. Tasks will be issued only to designated and allowed nodes.
Allocating resources. A malicious node cannot “steal” another’s workload or resources.
Storing secrets. Never stored in plaintext and never written to disk on worker nodes.
Communicating with the workers. Encrypted using mutually authenticated TLS.

As Swarm mode continues to improve, the Docker team is working to take the principle of least privilege orchestration even further. The task we are tackling is: how can systems remain secure if a manager is compromised? The roadmap is in place, with some of the features already available, such as the ability to whitelist only specific Docker images, preventing managers from executing arbitrary workloads. This is achieved quite naturally using Docker Content Trust.



Register for DockerCon Europe 2017 Livestream

For those of you who can’t make it to DockerCon Europe 2017 in Copenhagen, we are thrilled to announce that the General Sessions on both Day 1 and Day 2 of DockerCon will be livestreamed!
Find out about the latest Docker announcements live from Steve Singh (CEO) and Solomon Hykes (Founder and CTO) and enjoy the highly technical demos the Docker team has prepared for you!
Livestream schedule:

General Session Day 1 on 10/17 from 9am UTC+2
General Session Day 2 on 10/18 from 9am UTC+2

The livestream player will be embedded on the DockerCon site a few hours prior to the event. Be sure to sign up here to receive an email with the link to the livestream before the general session starts!
Sign up for the DockerCon EU Livestream
 
We invite you to follow the official Twitter account: @DockerCon and hashtag #DockerCon in order to get the latest updates.
Learn More about DockerCon

Visit the DockerCon Europe 2017 Website
Save the date for DockerCon 2018



Request Routing and Policy Management with the Istio Service Mesh

Editor’s note: Today’s post by Frank Budinsky, Software Engineer, IBM, Andra Cismaru, Software Engineer, Google, and Israel Shalom, Product Manager, Google, is the second post in a three-part series on Istio. It offers a closer look at request routing and policy management.

In a previous article, we looked at a simple application (Bookinfo) that is composed of four separate microservices. The article showed how to deploy an application with Kubernetes and an Istio-enabled cluster without changing any application code. The article also outlined how to view Istio-provided L7 metrics on the running services.

This article follows up by taking a deeper look at Istio using Bookinfo. Specifically, we’ll look at two more features of Istio: request routing and policy management.

Running the Bookinfo Application

As before, we run the v1 version of the Bookinfo application. After installing Istio in our cluster, we start the app defined in bookinfo-v1.yaml using the following command:

```shell
kubectl apply -f <(istioctl kube-inject -f bookinfo-v1.yaml)
```

We created an Ingress resource for the app:

```shell
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookinfo
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /productpage
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /login
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /logout
        backend:
          serviceName: productpage
          servicePort: 9080
EOF
```

Then we retrieved the NodePort address of the Istio Ingress controller:

```shell
export BOOKINFO_URL=$(kubectl get po -n istio-system -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc -n istio-system istio-ingress -o jsonpath={.spec.ports[0].nodePort})
```

Finally, we pointed our browser to http://$BOOKINFO_URL/productpage to see the running v1 application.

HTTP request routing

Existing container orchestration platforms like Kubernetes, Mesos, and other microservice frameworks allow operators to control when a particular set of pods/VMs should receive traffic (e.g., by adding/removing specific labels). Unlike existing techniques, Istio decouples traffic flow and infrastructure scaling. This allows Istio to provide a variety of traffic management features that reside outside the application code, including dynamic HTTP request routing for A/B testing, canary releases, gradual rollouts, failure recovery using timeouts, retries, circuit breakers, and fault injection to test compatibility of failure recovery policies across services.

To demonstrate, we’ll deploy v2 of the reviews service and use Istio to make it visible only for a specific test user. We can create a Kubernetes deployment, reviews-v2, with this YAML file:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v2:0.2.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
```

From a Kubernetes perspective, the v2 deployment adds additional pods that the reviews service selector includes in the round-robin load balancing algorithm. This is also the default behavior for Istio.

Before we start reviews:v2, we’ll start the last of the four Bookinfo services, ratings, which is used by the v2 version to provide rating stars corresponding to each review:

```shell
kubectl apply -f <(istioctl kube-inject -f bookinfo-ratings.yaml)
```

If we were to start reviews:v2 now, we would see browser responses alternating between v1 (reviews with no corresponding ratings) and v2 (reviews with black rating stars). This will not happen, however, because we’ll use Istio’s traffic management feature to control traffic.

With Istio, new versions don’t need to become visible based on the number of running pods. Version visibility is controlled instead by rules that specify the exact criteria. To demonstrate, we start by using Istio to specify that we want to send 100% of reviews traffic to v1 pods only.

Immediately setting a default rule for every service in the mesh is an Istio best practice. Doing so avoids accidental visibility of newer, potentially unstable versions. For the purpose of this demonstration, however, we’ll only do it for the reviews service:

```shell
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v1
    weight: 100
EOF
```

This command directs the service mesh to send 100% of traffic for the reviews service to pods with the label “version: v1”. With this rule in place, we can safely deploy the v2 version without exposing it:

```shell
kubectl apply -f <(istioctl kube-inject -f bookinfo-reviews-v2.yaml)
```

Refreshing the Bookinfo web page confirms that nothing has changed.

At this point we have all kinds of options for how we might want to expose reviews:v2. If, for example, we wanted to do a simple canary test, we could send 10% of the traffic to v2 using a rule like this:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v2
    weight: 10
  - labels:
      version: v1
    weight: 90
```

A better approach for early testing of a service version is to instead restrict access to it much more specifically. To demonstrate, we’ll set a rule to only make reviews:v2 visible to a specific test user. We do this by setting a second, higher-priority rule that will only be applied if the request matches a specific condition:

```shell
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-v2
spec:
  destination:
    name: reviews
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
  route:
  - labels:
      version: v2
    weight: 100
EOF
```

Here we’re specifying that the request headers need to include a user cookie with value “jason” as the condition. If this rule is not matched, we fall back to the default routing rule for v1.

If we log in to the Bookinfo UI with the user name “jason” (no password needed), we will now see version v2 of the application (each review includes 1-5 black rating stars). Every other user is unaffected by this change.

Once the v2 version has been thoroughly tested, we can use Istio to proceed with a canary test using the rule shown previously, or we can simply migrate all of the traffic from v1 to v2, optionally in a gradual fashion by using a sequence of rules with weights less than 100 (for example: 10, 20, 30, … 100). This traffic control is independent of the number of pods implementing each version. If, for example, we had autoscaling in place and high traffic volumes, we would likely see a corresponding scale-up of v2 and scale-down of v1 pods happening independently at the same time. For more about version routing with autoscaling, check out “Canary Deployments using Istio”.

In our case, we’ll send all of the traffic to v2 with one command:

```shell
cat <<EOF | istioctl replace -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v2
    weight: 100
EOF
```

We should also remove the special rule we created for the tester so that it doesn’t override any future rollouts we decide to do:

```shell
istioctl delete routerule reviews-test-v2
```

In the Bookinfo UI, we’ll see that we are now exposing the v2 version of reviews to all users.

Policy enforcement

Istio provides policy enforcement functions, such as quotas, precondition checking, and access control. We can demonstrate Istio’s open and extensible framework for policies with an example: rate limiting.

Let’s pretend that the Bookinfo ratings service is an external paid service (for example, Rotten Tomatoes®) with a free quota of 1 request per second (req/sec). To make sure the application doesn’t exceed this limit, we’ll specify an Istio policy to cut off requests once the limit is reached. We’ll use one of Istio’s built-in policies for this purpose.

To set a 1 req/sec quota, we first configure a memquota handler with rate limits:

```shell
cat <<EOF | istioctl create -f -
apiVersion: "config.istio.io/v1alpha2"
kind: memquota
metadata:
  name: handler
  namespace: default
spec:
  quotas:
  - name: requestcount.quota.default
    maxAmount: 5000
    validDuration: 1s
    overrides:
    - dimensions:
        destination: ratings
      maxAmount: 1
      validDuration: 1s
EOF
```

Then we create a quota instance that maps incoming attributes to quota dimensions, and create a rule that uses it with the memquota handler:

```shell
cat <<EOF | istioctl create -f -
apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
  name: requestcount
  namespace: default
spec:
  dimensions:
    source: source.labels["app"] | source.service | "unknown"
    sourceVersion: source.labels["version"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: quota
  namespace: default
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
EOF
```

To see the rate limiting in action, we’ll generate some load on the application:

```shell
wrk -t1 -c1 -d20s http://$BOOKINFO_URL/productpage
```

In the web browser, we’ll notice that while the load generator is running (i.e., generating more than 1 req/sec), browser traffic is cut off. Instead of the black stars next to each review, the page now displays a message indicating that ratings are not currently available. Stopping the load generator means the limit will no longer be exceeded: the black stars return when we refresh the page.

Summary

We’ve shown you how to introduce advanced features like HTTP request routing and policy injection into a service mesh configured with Istio, without restarting any of the services. This lets you develop and deploy without worrying about the ongoing management of the service mesh; service-wide policies can always be added later.

In the next and last installment of this series, we’ll focus on Istio’s security and authentication capabilities. We’ll discuss how to secure all interservice communications in a mesh, even against insiders with access to the network, without any changes to the application code or the deployment.
Source: kubernetes

Brace yourselves, DockerCon Europe 2017 is coming!

DockerCon Europe 2017 is just around the corner and the whole European Docker community is getting ready for four days of incredible learning, networking and collaboration!
If you’re a registered attendee, log in to the DockerCon Europe Agenda Builder using the information you set up during the registration process. You can use the keyword search bar or filter by topics, days, tracks, experience level or target audience to get recommended sessions and build your schedule.
Every DockerCon Europe Attendee should have received an invitation to join the Docker Community Slack (dockercommunity.slack.com). If that’s not the case, please reach out to community@docker.com and we’ll make sure to resend the invitation.

Monday 16 October
Attendees who have signed up for paid workshops or want to check in and pick up their badge and backpack early should plan to be in Copenhagen by Monday morning.
Registration
Registration will be open from 12:00 – 19:30.
Workshops
Interested in attending a DockerCon EU workshop on Monday? Here is the list of the workshops that are still available:

Introduction to Docker for Enterprise Developers
Docker on Windows: From 101 to Production
Docker for Java Developers
Learn Docker

If you’ve already registered for a workshop, full day workshops run from 9:00 – 17:00 and the half-day workshops from 14:00 – 18:00 at the Bella Center. Room assignments will be emailed out.
Hallway Track
From 12:00 to 20:00 on Monday you’ll be able to meet and share knowledge with community members and practitioners using the DockerCon Hallway track recommendation algorithm.
Docker Pals
It can be downright intimidating to attend a conference by yourself, much less figure out how to make the most of your experience! Docker Pals gives you a built-in network at the conference by pairing you with another attendee and a DockerCon veteran as your guide. You will meet your pals at a Meet Your Pals Pre-Welcome Reception in the Expo Hall from 17:30 – 18:00. Pre-registration is required.
Welcome Reception
Join us at the evening Welcome reception in the Ecosystem Expo starting at 18:00.
 
Tuesday 17 October
Conference sessions start on Tuesday. Come early and be ready to learn, connect and collaborate with the Docker community.
Registration and Hallway Track
Registration and the Hallway track will be open from 07:30 – 18:00.
Ecosystem Expo
Stop by the booths of the DockerCon Europe sponsors from 8:00 – 17:50 to learn, connect and network! Don’t forget to make your way to the Docker booth to learn more about our products and meet the Docker team.
General Session
Make sure to arrive early to be on time for our Day 1 General Session which starts at 09:00 sharp!
Breakout Sessions
Download the DockerCon App and start scheduling your DockerCon Agenda.
Hands-on Labs
From 11:00 – 18:00, take your Docker learning to the next level by completing self-paced Hands-on Labs that walk through the process of managing and securing Docker containers.
Docker Professional Certification 
We are launching Docker Certification in Copenhagen. As a DockerCon attendee, you’ll have the opportunity to be among the first in the world to earn the ‘Docker Certified Associate’ designation with the digital certificate and verification to prove it! Learn more.
DockerCon After Party
Starting at 19:00, arcade and classic games like Pong, Asteroids, Tetris, Tron and Breakout will fill the venue providing you with ample entertainment and opportunities to challenge your fellow attendees to some friendly competition. You will be transported to a whole new gaming universe!
Wednesday 18 October
Wednesday brings more awesome content, learning and networking:

Docker Professional Certification: 07:00 – 17:30
Hallway Track: 07:30 – 17:00
Ecosystem Expo: 8:00 – 16:30
General Session: 9:00 – 10:30
Breakout Sessions: 10:30 – 18:30
Hands-on Labs: 11:00 – 18:00

Thursday 19 October
On Thursday, attendees have the option to attend the Enterprise Summit (sold out) to learn how Docker customers have transformed their Windows or Linux applications to run as containers, making them more efficient, more portable, and more secure, all without touching a line of code. To join the waitlist, email dockercon@docker.com.
The Moby Summit (sold out) is also taking place on Thursday. You can join the waitlist by logging into the DockerCon portal for a chance to attend.
Finally, the DockerCon Hands-on Labs will be open all day on Thursday, offering a broad range of topics that cover the interests of both developers and IT operations personnel on Windows and Linux.
Learn More:

Visit the DockerCon Europe Website
Register for DockerCon Europe
Visit the Agenda builder


The post Brace yourselves, DockerCon Europe 2017 is coming! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Kubernetes Community Steering Committee Election Results

Beginning with the announcement of Kubernetes 1.0 at OSCON in 2015, there has been a concerted effort to share the power and burden of leadership across the Kubernetes community.
With the work of the Bootstrap Governance Committee – consisting of Brandon Philips, Brendan Burns, Brian Grant, Clayton Coleman, Joe Beda, Sarah Novotny and Tim Hockin, a cross section of long-time leaders representing 5 different companies with major investments of talent and effort in the Kubernetes ecosystem – we wrote an initial Steering Committee Charter and launched a community-wide election to seat a Kubernetes Steering Committee. To quote from the Charter:
The initial role of the steering committee is to instantiate the formal process for Kubernetes governance. In addition to defining the initial governance process, the bootstrap committee strongly believes that it is important to provide a means for iterating the processes defined by the steering committee. We do not believe that we will get it right the first time, or possibly ever, and won’t even complete the governance development in a single shot. The role of the steering committee is to be a live, responsive body that can refactor and reform as necessary to adapt to a changing project and community.
This is our largest step yet toward making an implicit governance structure explicit. The Kubernetes vision has been one of an inclusive and broad community seeking to build software which empowers our users with the portability of containers. The Steering Committee will be a strong leadership voice guiding the project toward success.
The Kubernetes Community is pleased to announce the results of the 2017 Steering Committee elections. Please congratulate Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole and Timothy St. Clair, who will be joining the members of the Bootstrap Governance Committee on the newly formed Kubernetes Steering Committee. Derek, Michelle, and Phillip will serve for 2 years. Aaron, Quinton, and Timothy will serve for 1 year.
This group will meet regularly in order to clarify and streamline the structure and operation of the project. Early work will include electing a representative to the CNCF Governing Board, evolving project processes, refining and documenting the vision and scope of the project, and chartering and delegating to more topical community groups. Please see the full Steering Committee backlog for more details.
Source: kubernetes

Introducing Hallway Track: Learn from People Around You at DockerCon

Photo by: Youssef Shoufan at DockerCon Austin 2017
The DockerCon Hallway Track is coming to DockerCon Europe in Copenhagen. We’ve partnered with e180.co once again to deliver the next level of conference attendee networking. Together, we believe that education is a relationship, not an institution, and that a conversation can change someone’s life. After the success of our collaboration in Austin with Moby Mingle, we’re happy to be growing this idea further for Copenhagen.
DockerCon is all about learning new things and connecting with the right people. The Hallway Track will help you meet and share knowledge with community members and practitioners at the conference.  

So, what’s a Hallway Track?
The DockerCon Hallway Track consists of one-on-one or group conversations, based on topics of interest, that you schedule with other attendees during DockerCon. The Hallway Track recommendation algorithm curates an individualized selection of topics for each participant, based on their behavior and interests.
It’s simple:

Explore the knowledge Offers and Requests, where all participants post the knowledge they are willing to share.
Pick something you want to learn or create your own Offer or Request.
Book your Hallway Tracks and meet in person at the Hallway Track Lounge!

If you are interested in attending DockerCon, please register soon, as we have only 100 tickets left! If you are already registered and want to book your Hallway Tracks, the platform will be launching today – look out for the email with instructions for logging into the system.


The post Introducing Hallway Track: Learn from People Around You at DockerCon appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Your Docker Agenda for JavaOne

If you are one of the thousands who will be in San Francisco for JavaOne, Oct 1-5, don’t miss the opportunity to level up your knowledge of container technology and Docker Community and Enterprise Edition. We’ve listed our must-attend sessions below:
Monday, October 2nd
Monday, Oct 02, 11:00 a.m. – 11:45 a.m. | Java in a World of Containers [CON4429]
Speakers: Paul Sandoz and Mikael Vidstedt, Oracle
This session explains how OpenJDK 9 fits into the world of containers, specifically how it fits with Docker images and containers. The first part of the session focuses on the production of Docker images containing a JDK. It introduces technologies, such as jlink, that can be used to reduce the size of the JDK, and discusses the inclusion of class-data-sharing (CDS) archives and ahead-of-time (AOT) shared object libraries. The second part describes how the Java process can be a good citizen when running within a container and obeying resource limits. The presentation also covers the role of CDS archives and AOT shared object libraries that can be shared across running containers to reduce startup time or memory usage.
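To give a flavor of the jlink approach the abstract mentions, here is a minimal multi-stage Dockerfile sketch. It assumes a modular application whose main module is com.example.app and whose compiled modules live under mods/; the module and path names are illustrative, not taken from the session.

```dockerfile
# Stage 1: use jlink (JDK 9+) to assemble a trimmed Java runtime
# containing only the modules the application actually needs.
FROM openjdk:9-jdk AS build
COPY mods/ /opt/mods/
RUN jlink \
      --module-path /opt/mods:$JAVA_HOME/jmods \
      --add-modules com.example.app \
      --strip-debug --compress=2 --no-header-files --no-man-pages \
      --output /opt/runtime

# Stage 2: copy only the trimmed runtime into a small base image,
# leaving the full JDK behind.
FROM debian:stretch-slim
COPY --from=build /opt/runtime /opt/runtime
CMD ["/opt/runtime/bin/java", "-m", "com.example.app/com.example.app.Main"]
```

The resulting image carries a runtime of a few tens of megabytes instead of a full JDK, which is the size reduction the session describes.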
 
Tuesday, October 3rd
8:30 a.m. – 10:30 a.m. |  Hands-on Lab: Docker 101 [HOL7960]
Eric Smalling, Ben Bonnefoy, Mano Marks, Docker
Dennis Foley and Richard Wark, Oracle
If you are just getting started learning about the Docker platform and want to get up to speed, this is the lab for you. Come learn the basics, including running containers, building images, and the fundamentals of networking, orchestration, security, and volumes.
8:30 a.m. – 9:15 a.m. | Modernizing Traditional Apps with Docker EE: Java Edition [CON7951]
Sophia Parafina, Docker
Most large enterprises have huge application install bases. Many have apps running in production that were written by people who have moved on to other projects, or even other companies. How do you bring older, critical apps into a new, modern containerized infrastructure? In this presentation, you’ll learn the benefits of moving to a containerized infrastructure and how to easily package a Java EE application into a Docker Enterprise Edition container without changing any code. You’ll then begin the process of modernizing it by replacing the JavaServer Faces client with a JavaScript client written in React.
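Containerizing a traditional Java EE app without code changes can be as small as a Dockerfile like the sketch below. The WAR name, path and Tomcat base image are assumptions for illustration; the session may use a different application server.

```dockerfile
# Package an existing, unmodified WAR into an application-server image.
FROM tomcat:8.5-jre8
# Deploy the legacy app as the root web application.
COPY target/app.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
```

Build it with `docker build -t legacy-app .` and the app runs in a container, ready to be managed by Docker EE alongside newer services.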
 
Wednesday, October 4th
Wednesday, Oct 04, 2:45 p.m. – 3:30 p.m. | Best Practices for Developing and Deploying Java Applications with Docker [CON7957]
Speaker: Eric Smalling, Docker
What if your Java application could run from the same artifacts on your developer workstation and in your integration and user acceptance testing environments as it does in production? With the Docker platform, your deployment artifacts conform to a common, portable standard that allows your team to do exactly that. In this session, learn how to best run the JVM inside containers; ensure it is built and tested in a deterministic, repeatable fashion; and deploy it in a guaranteed known-good state in every environment. This session explores the basics of the Docker platform, how to build and run your applications in containers, how to deploy a web application using the same artifacts on workstations and servers, and best practices for managing and configuring JVM-based applications in containers.
Wednesday, Oct 04, 2:45 p.m. – 3:30 p.m. | Docker Tips and Tricks for Java Developers [CON4060]
Speaker: Ray Tsang, Google
Everyone is talking about containers, but be aware: it takes discipline to use container technology. It may not be as secure or as optimal as you thought it would be. Although it’s relatively easy to create a new immutable container image to run everywhere, you may have fallen into many of the caveats. Is it running as the root user? Why are the images taking up so much space? Why did your containers run out of space in the first place? Most importantly, your container images may not be as immutable or repeatable as you thought, and your Java process might be over-utilizing its assigned resources! Attend this session to learn how to best address these issues when building your Java container images.
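Two of the caveats in this abstract, running as root and bloated images, can be sketched in a short Dockerfile. The image, user and JAR names below are illustrative assumptions, not taken from the talk.

```dockerfile
# Start from a small JRE-only base image rather than a full JDK image.
FROM openjdk:8-jre-alpine

# Create an unprivileged user so the JVM does not run as root.
RUN addgroup -S app && adduser -S -G app app

COPY target/app.jar /home/app/app.jar

# Switch to the unprivileged user before starting the process.
USER app
CMD ["java", "-jar", "/home/app/app.jar"]
```

Keeping temporary files out of image layers (clean up inside the same RUN step that creates them) is the usual fix for the image-size question the speaker raises.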


The post Your Docker Agenda for JavaOne appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Introducing the Docker Global Professional Certification Program

Docker is excited to announce the first and only official professional certification program for the Docker Enterprise Edition (EE) platform.
The new Docker Certified Associate (DCA) certification, launching at DockerCon Europe on October 16, 2017, serves as a foundational benchmark for real-world container technology expertise with Docker Enterprise Edition. In today’s job market, container technology skills are highly sought after and this certification sets the bar for well-qualified professionals. The professionals that earn the certification will set themselves apart as uniquely qualified to run enterprise workloads at scale with Docker Enterprise Edition and be able to display the certification logo on resumes and social media profiles.
The DCA is the first in a comprehensive, multi-tiered certification program, and the exam was created by top practitioners using a rigorous development process. It consists of 55 questions to be completed in 80 minutes, covering essential skills on Docker Enterprise Edition. The exam can be taken anywhere in the world at any time and is delivered using remote proctoring technology to ensure exam security while creating a simple and streamlined test-taking experience for candidates.
Be among the first to earn the DCA designation and gain recognition for your enterprise container skills.
Get Started now
 
Be Among the First to Get Certified, at DockerCon Europe
Be one of the first to get your DCA on-site at DockerCon Europe. If you’ve reviewed the study guide and think you’ve got what it takes, join us in Copenhagen and take the exam. Testing is offered Tuesday through Thursday and we’ve got some special gifts to hand out to our first Docker Certified Associates.


Learn more:

Docker Professional Certification program
Docker Trainings

The post Introducing the Docker Global Professional Certification Program appeared first on Docker Blog.
Source: https://blog.docker.com/feed/