Convince your manager to send you to DockerCon

Has it sunk in yet that DockerCon is in roughly 2 months? That’s right, this year we gather in April as a community and ecosystem in Austin, Texas for 3 days of deep learning and networking (with a side serving of Docker fun). DockerCon is the annual community and industry event for makers and operators of next generation distributed apps built with containers. If Docker is important to your daily workflow or your business, you and your team (reach out for group discounts) should attend this conference to stay up to date on the latest progress with the Docker platform and ecosystem.
Do you really want to go to DockerCon, but are having a hard time convincing your manager to pull the trigger and send you? Have you already explained that the sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
Well, fear not! We’ve put together a few more resources and reasons to help convince your manager that DockerCon 2017, April 17-20, is an invaluable experience you need to attend.
Something for everyone
DockerCon is the best place to learn from and share your experiences with the industry’s greatest minds, and a guarantee that you will bring learnings back to your team. We will have plenty of learning materials and networking opportunities specific to our 3 main audiences:
1. Developers
From programming language specific workshops such as Docker for Java developers or modernizing monolithic ASP.NET applications with Docker, to sessions on building effective Docker images or using Docker for Mac, Docker for Windows and Docker Cloud, the DockerCon agenda will showcase plenty of hands-on content for developers.

2. IT Pros
The review committee is also making sure to include lots of learning materials for IT pros. Register now if you want to attend the orchestration and networking workshops, as they will sell out soon. Here is the list of Ops-centric topics covered in the breakout sessions: tracing, container and cluster monitoring, container orchestration, securing the software supply chain, and the Docker for AWS and Docker for Azure editions.

3. Enterprise
Every DockerCon attendee will also have the opportunity to learn how Docker offers an integrated Container-as-a-Service platform for developers and IT operations to collaborate in the enterprise software supply chain. Companies with a lot of experience running Docker in production will go over their reference architecture and explain how they brought security, policy and controls to their application lifecycle without sacrificing any agility or application portability. Use case sessions will be heavy on technical detail and implementation advice.
Proof is in the numbers

According to surveyed DockerCon attendees, 91% would recommend investing in DockerCon again, not to mention DockerCon 2016 scored an improved NPS of 61.
DockerCon continues to grow due to high demand. DockerCon attendance has increased 900% since 2014, including 25% growth in just the last year. We hope to continue to welcome more people to DockerCon and the community each year while preserving the intimacy of the conference.
87% of surveyed attendees said the content and subject matter was relevant to their professional endeavours.  

Take part in priceless learning opportunities
At the heart of DockerCon are amazing learning opportunities from not just the Docker team but the entire community. This year we will provide even more tools and resources to facilitate professional networking, learning and mentorship opportunities based on areas of expertise and interest. No matter your expertise level, DockerCon is the one place where you can not only learn and ask, but also teach and help. To us, this is what makes DockerCon unlike any other conference.
Leave motivated to create something amazing
The core theme of every DockerCon is to celebrate the makers within us all. Through the robust content and pure energy of the community, we are confident that you will leave DockerCon inspired to return to work to apply all of your new knowledge and best practices to your line of work. Don’t believe us? Just check out our closing session of 2016 that featured cool hacks created by community and Docker team members.
DockerCon Schedule
We have extended the conference to 3 days this year, with instructor-led workshops beginning on Monday afternoon. General sessions, breakout sessions and the ecosystem expo will take place Tuesday & Wednesday. We’ve added the extra day to relieve your over-crammed agendas: repeat top sessions, hands-on labs and mini summits will take place on Thursday, April 20.

Plan Your Trip

Sending an employee to a conference is an investment and can be a big expense. Below you will find a budget template to help you plan for your trip. Ready to send an email to your boss about DockerCon? Here is a sample letter you can use as a starting point.
We invite you to join the community and us at DockerCon 2017, and we hope you find this packet of event information useful, including a helpful letter you can use to send to your manager to justify your trip and build a budget estimate. We are confident there’s something at DockerCon for everyone, so feel free to share within your company and networks.


The post Convince your manager to send you to DockerCon appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Running MongoDB on Kubernetes with StatefulSets

Editor’s note: Today’s post is by Sandeep Dinesh, Developer Advocate, Google Cloud Platform, showing how to run a database in a container.

Conventional wisdom says you can’t run a database in a container. “Containers are stateless!” they say, and “databases are pointless without state!” Of course, this is not true at all. At Google, everything runs in a container, including databases. You just need the right tools. Kubernetes 1.5 includes the new StatefulSet API object (in previous versions, StatefulSet was known as PetSet). With StatefulSets, Kubernetes makes it much easier to run stateful workloads such as databases.

If you’ve followed my previous posts, you know how to create a MEAN Stack app with Docker, then migrate it to Kubernetes to provide easier management and reliability, and create a MongoDB replica set to provide redundancy and high availability.

While the replica set in my previous blog post worked, there were some annoying steps that you needed to follow. You had to manually create a disk, a ReplicationController, and a service for each replica. Scaling the set up and down meant managing all of these resources manually, which is an opportunity for error and would put your stateful application at risk. In the previous example, we created a Makefile to ease the management of these resources, but it would have been great if Kubernetes could just take care of all of this for us.

With StatefulSets, these headaches finally go away. You can create and manage your MongoDB replica set natively in Kubernetes, without the need for scripts and Makefiles. Let’s take a look at how.

Note: StatefulSets are currently a beta resource. The sidecar container used for auto-configuration is also unsupported.

Prerequisites and Setup

Before we get started, you’ll need a Kubernetes 1.5+ cluster and the Kubernetes command line tool.
If you want to follow along with this tutorial and use Google Cloud Platform, you also need the Google Cloud SDK. Once you have a Google Cloud project created and have your Google Cloud SDK set up (hint: gcloud init), we can create our cluster.

To create a Kubernetes 1.5 cluster, run the following command:

gcloud container clusters create "test-cluster"

This will make a three node Kubernetes cluster. Feel free to customize the command as you see fit. Then, authenticate into the cluster:

gcloud container clusters get-credentials test-cluster

Setting up the MongoDB replica set

To set up the MongoDB replica set, you need three things: a StorageClass, a Headless Service, and a StatefulSet. I’ve created the configuration files for these already, and you can clone the example from GitHub:

git clone https://github.com/thesandlord/mongo-k8s-sidecar.git
cd example/StatefulSet/

To create the MongoDB replica set, run these two commands:

kubectl apply -f googlecloud_ssd.yaml
kubectl apply -f mongo-statefulset.yaml

That’s it! With these two commands, you have launched all the components required to run a highly available and redundant MongoDB replica set. At a high level, it looks something like this:

Let’s examine each piece in more detail.

StorageClass

The StorageClass tells Kubernetes what kind of storage to use for the database nodes. You can set up many different types of StorageClasses in a ton of different environments. For example, if you run Kubernetes in your own datacenter, you can use GlusterFS. On GCP, your storage choices are SSDs and hard disks. There are currently drivers for AWS, Azure, Google Cloud, GlusterFS, OpenStack Cinder, vSphere, Ceph RBD, and Quobyte.

The configuration for the StorageClass looks like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

This configuration creates a new StorageClass called “fast” that is backed by SSD volumes.
The StatefulSet can now request a volume, and the StorageClass will automatically create it!

Deploy this StorageClass:

kubectl apply -f googlecloud_ssd.yaml

Headless Service

Now that you have created the StorageClass, you need to make a Headless Service. These are just like normal Kubernetes Services, except they don’t do any load balancing for you. When combined with StatefulSets, they can give you unique DNS addresses that let you directly access the pods! This is perfect for creating MongoDB replica sets, because our app needs to connect to all of the MongoDB nodes individually.

The configuration for the Headless Service looks like this:

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo

You can tell this is a Headless Service because the clusterIP is set to “None.” Other than that, it looks exactly the same as any normal Kubernetes Service.

StatefulSet

The pièce de résistance. The StatefulSet actually runs MongoDB and orchestrates everything together. StatefulSets differ from Kubernetes ReplicaSets (not to be confused with MongoDB replica sets!) in certain ways that make them more suited for stateful applications. Unlike Kubernetes ReplicaSets, pods created under a StatefulSet have a few unique attributes. The name of the pod is not random; instead, each pod gets an ordinal name. Combined with the Headless Service, this allows pods to have stable identification. In addition, pods are created one at a time instead of all at once, which can help when bootstrapping a stateful system. You can read more about StatefulSets in the documentation.

Just like before, this “sidecar” container will configure the MongoDB replica set automatically.
A “sidecar” is a helper container which helps the main container do its work.

The configuration for the StatefulSet looks like this:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi

It’s a little long, but fairly straightforward. The first section describes the StatefulSet object. Then we move into the metadata section, where you can specify labels and the number of replicas.

Next comes the pod spec. The terminationGracePeriodSeconds is used to gracefully shut down the pod when you scale down the number of replicas, which is important for databases! Then the configurations for the two containers are shown. The first one runs MongoDB with command line flags that configure the replica set name. It also mounts the persistent storage volume to /data/db, the location where MongoDB saves its data. The second container runs the sidecar.

Finally, there is the volumeClaimTemplates. This is what talks to the StorageClass we created before to provision the volume.
It will provision a 100 GB disk for each MongoDB replica.

Using the MongoDB replica set

At this point, you should have three pods created in your cluster. These correspond to the three nodes in your MongoDB replica set. You can see them with this command:

kubectl get pods

NAME      READY     STATUS    RESTARTS   AGE
mongo-0   2/2       Running   0          3m
mongo-1   2/2       Running   0          3m
mongo-2   2/2       Running   0          3m

Each pod in a StatefulSet backed by a Headless Service will have a stable DNS name. The template follows this format: <pod-name>.<service-name>

This means the DNS names for the MongoDB replica set are:

mongo-0.mongo
mongo-1.mongo
mongo-2.mongo

You can use these names directly in the connection string URI of your app. In this case, the connection string URI would be:

"mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?"

That’s it!

Scaling the MongoDB replica set

A huge advantage of StatefulSets is that you can scale them just like Kubernetes ReplicaSets. If you want 5 MongoDB nodes instead of 3, just run the scale command:

kubectl scale --replicas=5 statefulset mongo

The sidecar container will automatically configure the new MongoDB nodes to join the replica set. Include the two new nodes (mongo-3.mongo & mongo-4.mongo) in your connection string URI and you are good to go. Too easy!

Cleaning Up

To clean up the deployed resources, delete the StatefulSet, Headless Service, and the provisioned volumes.

Delete the StatefulSet:

kubectl delete statefulset mongo

Delete the Service:

kubectl delete svc mongo

Delete the Volumes:

kubectl delete pvc -l role=mongo

Finally, you can delete the test cluster:

gcloud container clusters delete "test-cluster"

Happy Hacking! For more cool Kubernetes and Container blog posts, follow me on Twitter and Medium.

–Sandeep Dinesh, Developer Advocate, Google Cloud Platform
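As a quick sanity check, the stable DNS naming scheme above can be generated programmatically. Below is a minimal Python sketch; the names `mongo` and port `27017` mirror the manifests in this post, but the helper functions themselves are hypothetical, not part of the tutorial:

```python
def statefulset_dns_names(set_name, service_name, replicas):
    """StatefulSet pods get ordinal names (<set-name>-0, <set-name>-1, ...);
    behind a Headless Service each resolves at <pod-name>.<service-name>."""
    return ["%s-%d.%s" % (set_name, i, service_name) for i in range(replicas)]

def mongo_uri(hosts, port=27017, db="dbname"):
    """Build a MongoDB connection string URI listing every replica set member."""
    return "mongodb://%s:%d/%s" % (",".join(hosts), port, db)

hosts = statefulset_dns_names("mongo", "mongo", 3)
print(mongo_uri(hosts))
# mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname
```

Scaling to 5 replicas simply extends the list with mongo-3.mongo and mongo-4.mongo, which is why the scale command needs no extra DNS configuration.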
Source: kubernetes

Fission: Serverless Functions as a Service for Kubernetes

Editor’s note: Today’s post is by Soam Vasani, Software Engineer at Platform9 Systems, talking about a new open source Serverless Function (FaaS) framework for Kubernetes.

Fission is a Functions as a Service (FaaS) / Serverless function framework built on Kubernetes. Fission allows you to easily create HTTP services on Kubernetes from functions. It works at the source level and abstracts away container images (in most cases). It also simplifies the Kubernetes learning curve, by enabling you to make useful services without knowing much about Kubernetes.

To use Fission, you simply create functions and add them with a CLI. You can associate functions with HTTP routes, Kubernetes events, or other triggers. Fission supports NodeJS and Python today. Functions are invoked when their trigger fires, and they only consume CPU and memory when they’re running. Idle functions don’t consume any resources except storage.

Why make a FaaS framework on Kubernetes?

We think there’s a need for a FaaS framework that can be run on diverse infrastructure, both in public clouds and on-premise infrastructure. Next, we had to decide whether to build it from scratch, or on top of an existing orchestration system. It was quickly clear that we shouldn’t build it from scratch — we would just end up having to re-invent cluster management, scheduling, network management, and lots more.

Kubernetes offered a powerful and flexible orchestration system with a comprehensive API backed by a strong and growing community. Building on it meant Fission could leave container orchestration functionality to Kubernetes, and focus on FaaS features.

The other reason we don’t want a separate FaaS cluster is that FaaS works best in combination with other infrastructure. For example, it may be the right fit for a small REST API, but it needs to work with other services to store state. FaaS also works great as a mechanism for event handlers to handle notifications from storage, databases, and from Kubernetes itself.
Kubernetes is a great platform for all these services to interoperate on top of.

Deploying and Using Fission

Fission can be installed with a `kubectl create` command: see the project README for instructions. Here’s how you’d write a “hello world” HTTP service:

$ cat > hello.py
def main(context):
    print "Hello, world!"

$ fission function create --name hello --env python --code hello.py --route /hello

$ curl http://<fission router>/hello
Hello, world!

Fission takes care of loading the function into a container, routing the request to it, and so on. We go into more details in the next section.

How Fission Is Implemented on Kubernetes

At its core, a FaaS framework must (1) turn functions into services and (2) manage the lifecycle of these services. There are a number of ways to achieve these goals, and each comes with different tradeoffs. Should the framework operate at the source level, or at the level of Docker images (or something in-between, like “buildpacks”)? What’s an acceptable amount of overhead the first time a function runs? Choices made here affect platform flexibility, ease of use, resource usage and costs, and of course, performance.

Packaging, source code, and images

One of our goals is to make Fission very easy to use for new users. We chose to operate at the source level, so that users can avoid having to deal with container image building, pushing images to registries, managing registry credentials, image versioning, and so on.

However, container images are the most flexible way to package apps. A purely source-level interface wouldn’t allow users to package binary dependencies, for example. So, Fission goes with a hybrid approach — container images that contain a dynamic loader for functions.
This approach allows most users to use Fission purely at the source level, but enables them to customize the container image when needed.

These images, called “environment images” in Fission, contain the runtime for the language (such as NodeJS or Python), a set of commonly used dependencies and a dynamic loader for functions. If these dependencies are sufficient for the function the user is writing, no image rebuilding is needed. Otherwise, the list of dependencies can be modified, and the image rebuilt.

These environment images are the only language-specific parts of Fission. They present a uniform interface to the rest of the framework. This design allows Fission to be easily extended to more languages.

Cold start performance

One of the goals of serverless functions is that functions use CPU/memory resources only when running. This optimizes the resource cost of functions, but it comes at the cost of some performance overhead when starting from idle (the “cold start” overhead). Cold start overhead is important in many use cases. In particular, with functions used in an interactive use case — like a web or mobile app, where a user is waiting for the action to complete — several-second cold start latencies would be unacceptable.

To optimize cold start overheads, Fission keeps a running pool of containers for each environment. When a request for a function comes in, Fission doesn’t have to deploy a new container — it just chooses one that’s already running, copies the function into the container, loads it dynamically, and routes the request to that instance. The overhead of this process takes on the order of 100msec for NodeJS and Python functions.

How Fission works on Kubernetes

Fission is designed as a set of microservices. A Controller keeps track of functions, HTTP routes, event triggers, and environment images.
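To make the “dynamic loader” idea concrete, here is a rough Python 3 sketch of how an environment container might load a user function from source at runtime and invoke it. This is an illustration only, not Fission’s actual loader; the entrypoint name `main` follows the hello-world example above:

```python
import importlib.util

def load_function(path, entrypoint="main"):
    """Load a user-supplied source file at runtime and return its entrypoint,
    the way a FaaS environment image might, without rebuilding the image."""
    spec = importlib.util.spec_from_file_location("userfunc", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # execute the user's source
    return getattr(module, entrypoint)

# Example: write a function to disk, then load and call it dynamically.
with open("hello.py", "w") as f:
    f.write("def main(context):\n    return 'Hello, world!'\n")

handler = load_function("hello.py")
print(handler(context=None))  # Hello, world!
```

Because the loader only depends on the file path and entrypoint name, swapping in a new function is a file copy rather than an image rebuild, which is exactly what keeps the source-level workflow fast.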
A Pool Manager manages pools of idle environment containers, the loading of functions into these containers, and the killing of function instances when they’re idle. A Router receives HTTP requests and routes them to function instances, requesting an instance from the Pool Manager if necessary.

The controller serves the Fission API. All the other components watch the controller for updates. The router is exposed as a Kubernetes Service of the LoadBalancer or NodePort type, depending on where the Kubernetes cluster is hosted.

When the router gets a request, it looks up a cache to see if this request already has a service it should be routed to. If not, it looks up the function to map the request to, and requests the poolmgr for an instance. The poolmgr has a pool of idle pods; it picks one, loads the function into it (by sending a request to a sidecar container in the pod), and returns the address of the pod to the router. The router proxies the request over to this pod. The pod is cached for subsequent requests, and if it’s been idle for a few minutes, it is killed.

(For now, Fission maps one function to one container; autoscaling to multiple instances is planned for a later release. Re-using function pods to host multiple functions is also planned, for cases where isolation isn’t a requirement.)

Use Cases for Fission

Bots, Webhooks, REST APIs

Fission is a good framework to make small REST APIs, implement webhooks, and write chatbots for Slack or other services. As an example of a simple REST API, we’ve made a small guestbook app that uses functions for reading and writing to the guestbook, and works with a Redis instance to keep track of state. You can find the app in the Fission GitHub repo.

The app contains two endpoints — the GET endpoint lists out guestbook entries from Redis and renders them into HTML. The POST endpoint adds a new entry to the guestbook list in Redis.
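The routing flow described above (cache lookup first, poolmgr request only on a miss) can be sketched as a toy simulation. Everything here is a hypothetical illustration; a real router proxies HTTP traffic and evicts idle pods, which this sketch omits:

```python
class PoolManager:
    """Hands out an idle pod from the pool and 'loads' the function into it."""
    def __init__(self, idle_pods):
        self.idle_pods = list(idle_pods)
        self.specialize_calls = 0
    def get_instance(self, function):
        self.specialize_calls += 1          # simulates loading the function
        return self.idle_pods.pop(0)        # address of the specialized pod

class Router:
    """Caches function -> pod mappings so repeat requests skip the poolmgr."""
    def __init__(self, poolmgr):
        self.poolmgr = poolmgr
        self.cache = {}
    def route(self, function):
        if function not in self.cache:      # cold path: ask the poolmgr
            self.cache[function] = self.poolmgr.get_instance(function)
        return self.cache[function]         # warm path: cached pod address

poolmgr = PoolManager(["10.0.0.7:8888", "10.0.0.8:8888"])
router = Router(poolmgr)
router.route("hello")   # cold start: a pod is specialized
router.route("hello")   # warm: served straight from the cache
print(poolmgr.specialize_calls)  # 1
```

The cache is what keeps warm invocations on the order of a lookup rather than a pod specialization, which is the source of the ~100msec cold-path figure quoted above.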
That’s all there is — there’s no Dockerfile to manage, and updating the app is as simple as calling fission function update.

Handling Kubernetes Events

Fission also supports triggering functions based on Kubernetes watches. For example, you can set up a function to watch for all pods in a certain namespace matching a certain label. The function gets the serialized object and the watch event type (added/removed/updated) in its context. These event handler functions could be used for simple monitoring — for example, you could send a Slack message whenever a new service is added to the cluster. There are also more complex use cases, such as writing a custom controller by watching Kubernetes’ Third Party Resources.

Status and Roadmap

Fission is in early alpha for now (Jan 2017). It’s not ready for production use just yet. We’re looking for early adopters and feedback.

What’s ahead for Fission? We’re working on making FaaS on Kubernetes more convenient, easy to use and easier to integrate with. In the coming months we’re working on adding support for unit testing, integration with Git, function monitoring and log aggregation. We’re also working on integration with other sources of events. Creating more language environments is also in the works. NodeJS and Python are supported today. Preliminary support for C# .NET has been contributed by Klavs Madsen. You can find our current roadmap on our GitHub issues and projects.

Get Involved

Fission is open source and developed in the open by Platform9 Systems. Check us out on GitHub, and join our slack channel if you’d like to chat with us. We’re also on twitter at @fissionio.

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates

–Soam Vasani, Software Engineer, Platform9 Systems
Source: kubernetes

Docker Online Meetup recap: Introducing Docker 1.13

Last week, we released Docker 1.13 to introduce several new enhancements in addition to building on and improving Docker swarm mode, introduced in Docker 1.12. Docker 1.13 has many new features and fixes that we are excited about, so we asked core team member and release captain Victor Vieux to introduce Docker 1.13 in an online meetup.
The meetup took place on Wednesday, Jan 25 and over 1000 people RSVPed to hear Victor’s presentation live. Victor gave an overview and demo of many of the new features:

Restructuring of CLI commands
Experimental build
CLI backward compatibility
Swarm default encryption at rest
Compose to Swarm
Data management commands
Brand new “init system”
Various orchestration enhancements

In case you missed it, you can watch the recording and access Victor’s slides below.

 
Below is a short list of the questions asked to Victor at the end of the Online meetup:
Q: What will happen if we call docker stack deploy multiple times with the same file?
A: All the services that were modified in the compose file will be updated according to their respective update policy. It won’t recreate a new stack; it will update the current one. This is the same mechanism used in the docker-compose python tool.
Q: In “docker system df”, what exactly constitutes an “active” image?
A: It means it’s associated with a container. If you have (even stopped) container(s) using the `redis` image, then the `redis` image is “active”.
Q: criu integration is available with `--experimental` then?
A: Yes! This is one of the many features I didn’t cover in the slides, as there are so many new ones in Docker 1.13. There is no need to download a separate build anymore; it’s just a flag away.
Q: When will we know when certain features are out of the experimental state and part of the foundation of this product?
A: Usually experimental features tend to remain in an experimental state for only one release. Larger or more complex features and capabilities may require two releases to gather feedback and make incremental improvements.
Q: Can I configure docker with multiple registries (some private and some public)?
A: It’s not really necessary to configure docker, as the “configuration” happens in the image name:
docker pull my-private-registry.com:9876/my-image
docker pull my-public-registry.com:5432/my-image
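In other words, the registry host is simply the first component of the image reference. A rough Python sketch of that parsing rule follows (simplified; the real Docker reference grammar also handles tags, digests, and the implicit library/ namespace):

```python
def registry_for(image, default="docker.io"):
    """Return the registry host an image reference points at.
    Docker treats the first path component as a registry only if it
    looks like a hostname (contains '.' or ':', or is 'localhost')."""
    first, sep, _ = image.partition("/")
    if sep and ("." in first or ":" in first or first == "localhost"):
        return first
    return default  # bare names like 'redis' come from the default registry

print(registry_for("my-private-registry.com:9876/my-image"))  # my-private-registry.com:9876
print(registry_for("redis"))                                  # docker.io
```

This is why pulling from several private and public registries needs no daemon-side configuration beyond credentials: each image name already carries its registry.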


The post Docker Online Meetup recap: Introducing Docker 1.13 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker Meetup Community reaches 150K members

We are thrilled to announce that the Docker meetup community has reached over 150,000 members! We’d like to take a moment to acknowledge all the amazing contributors and Docker enthusiasts who are working hard to organize frequent and interesting Docker-centric meetups. Thanks to you, there are 275 Docker meetup groups, in 75 countries, across 6 continents.
There were over 1000 Docker meetups held all over the world last year. Big shout out to Ben Griffin, organizer of Docker Melbourne, who organized 18 meetups in 2016; Karthik Gaekwad, Lee Calcote, Vikram Sabnis and Everett Toews, organizers of Docker Austin, who organized 16 meetups; Gerhard Schweinitz and Stephen J Wallace, organizers of Docker Sydney, who organized 13; and Jesse White, Luisa Morales and Doug Masiero from Docker NYC, who organized 12.

We also wanted to thank and give a massive shout out to organizers Adrien Blind and Patrick Aljord, who have grown the Docker Paris meetup group to nearly 4,000 members and have hosted 46 events since they launched the group almost 4 years ago!
 

Reached 3925 @DockerParis meetup members ! We may be able to celebrate 4000 members during feb docker event @vcoisne @jpetazzo @docker pic.twitter.com/CGmvShIj0L
— Adrien Blind (@AdrienBlind) January 17, 2017

One of our newest groups, Docker Havana, started last November and already has 200+ members! The founding organizers, Enrique Carbonell and Manuel Morejón, are doing a fantastic job recruiting new members and have even started planning awesome meetups in other Cuban cities too!

 
Interested in getting involved with the Docker Community? The best way to participate is through your local meetup group. Check out this map to see if a Docker user group exists in your city, or take a look at the list of upcoming Docker events.

Can’t find a group near you? Learn more here about how to start a group and the process of becoming an organizer. Our community team would be happy to work with you on solving some of the challenges associated with organizing meetups in your area.
Not interested in starting a group? You can always join the Docker Online Meetup Group!
In case you missed it, we’ve recently introduced a Docker Community Directory and Slack to further enable community building and collaboration. Our goal is to give everyone the opportunity to become a more informed and engaged member of the community by creating sub groups and channels based on location, language, use cases, interest in specific Docker-centric projects or initiatives.
Sign up for the Docker Community Directory and Slack  
 


The post Docker Meetup Community reaches 150K members appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

How we run Kubernetes in Kubernetes aka Kubeception

Editor’s note: Today’s post is by the team at Giant Swarm, showing how they run Kubernetes in Kubernetes.

Giant Swarm’s container infrastructure started out with the goal to be an easy way for developers to deploy containerized microservices. Our first generation was extensively using fleet as a base layer for our infrastructure components as well as for scheduling user containers.

In order to give our users a more powerful way to manage their containers we introduced Kubernetes into our stack in early 2016. However, as we needed a quick way to flexibly spin up and manage different users’ Kubernetes clusters resiliently, we kept the underlying fleet layer.

As we insist on running all our underlying infrastructure components in containers, fleet gave us the flexibility of using systemd unit files to define our infrastructure components declaratively. Our self-developed deployment tooling allowed us to deploy and manage the infrastructure without the need for imperative configuration management tools.

However, fleet is just a distributed init and not a complete scheduling and orchestration system. Next to a lot of work on our tooling, it required significant improvements that we had to work on in terms of communication between peers, its reconciliation loop, and stability. Also, the uptake in Kubernetes usage would ensure that issues are found and fixed faster.

As we had had good experiences with introducing Kubernetes on the user side, and with recent developments like rktnetes and stackanetes, it felt like time for us to also move our base layer to Kubernetes.

Why Kubernetes in Kubernetes

Now, you could ask, why would anyone want to run multiple Kubernetes clusters inside of a Kubernetes cluster? Are we crazy? The answer is advanced multi-tenancy use cases as well as operability and automation thereof.

Kubernetes comes with its own growing feature set for multi-tenancy use cases.
However, we had the goal of offering our users a fully-managed Kubernetes without any limitations to the functionality they would get using any vanilla Kubernetes environment, including privileged access to the nodes. Further, in bigger enterprise scenarios a single Kubernetes cluster with its inbuilt isolation mechanisms is often not sufficient to satisfy compliance and security requirements. More advanced (firewalled) zoning or layered security concepts are tough to reproduce with a single installation. With namespace isolation, both privileged access and firewalled zones can hardly be implemented without sidestepping security measures.

Now you could go and set up multiple completely separate (and federated) installations of Kubernetes. However, automating the deployment and management of these clusters would need additional tooling and complex monitoring setups. Further, we wanted to be able to spin clusters up and down on demand, scale them, update them, keep track of which clusters are available, and be able to assign them to organizations and teams flexibly. In fact this setup can be combined with a federation control plane to federate deployments to the clusters over one API endpoint. And wouldn’t it be nice to have an API and frontend for that?

Enter Giantnetes

Based on the above requirements we set out to build what we call Giantnetes – or if you’re into movies, Kubeception. At the most basic abstraction it is an outer Kubernetes cluster (the actual Giantnetes), which is used to run and manage multiple completely isolated user Kubernetes clusters.

The physical machines are bootstrapped by using our CoreOS bootstrapping tool, Mayu. The Giantnetes components themselves are self-hosted, i.e. a kubelet is in charge of automatically bootstrapping the components that reside in a manifests folder.
You could call this the first level of Kubeception.

Once the Giantnetes cluster is running, we use it to schedule the user Kubernetes clusters as well as our tooling for managing and securing them. We chose Calico as the Giantnetes network plugin to ensure security, isolation, and the right performance for all the applications running on top of Giantnetes.

Then, to create the inner Kubernetes clusters, we initiate a few pods, which configure the network bridge, create certificates and tokens, and launch virtual machines for the future cluster. To do so, we use lightweight technologies such as KVM and qemu to provision CoreOS VMs that become the nodes of an inner Kubernetes cluster. You could call this the second level of Kubeception.

Currently this means we are starting Pods with Docker containers that in turn start VMs with KVM and qemu. However, we are looking into doing this with rkt qemu-kvm, which would result in using a rktnetes setup for our Giantnetes.

The networking solution for the inner Kubernetes clusters has two levels. It is based on a combination of flannel’s server/client architecture model and Calico BGP. While a flannel client is used to create the network bridge between the VMs of each virtualized inner Kubernetes cluster, Calico runs inside the virtual machines to connect the different Kubernetes nodes and create a single network for the inner Kubernetes. By using Calico, we mimic the Giantnetes networking solution inside of each Kubernetes cluster and provide the primitives to secure and isolate workloads through the Kubernetes network policy API.

Regarding security, we aim to separate privileges as much as possible and make things auditable. Currently this means we use certificates to secure access to the clusters and encrypt communication between all the components that form a cluster (i.e. VM to VM, Kubernetes components to each other, etcd master to Calico workers, etc.).
For this we create a PKI backend per cluster and then issue certificates per service in Vault on-demand. Every component uses a different certificate, thus avoiding exposure of the whole cluster if any of the components or nodes gets compromised. We also rotate the certificates on a regular basis.

To ensure access to the API and to services of each inner Kubernetes cluster from the outside, we run a multi-level HAProxy ingress controller setup in the Giantnetes that connects the Kubernetes VMs to hardware load balancers.

Looking into Giantnetes with kubectl

Let’s have a look at a minimal sample deployment of Giantnetes. In the above example you see a user Kubernetes cluster `customera` running in VM-containers on top of Giantnetes. We currently use Jobs for the network and certificate setups. Peeking inside the user cluster, you see the DNS pods and a helloworld running. Each one of these user clusters can be scheduled and used independently. They can be spun up and down on-demand.

Conclusion

To sum up, we have shown how Kubernetes is able not only to self-host easily, but also to flexibly schedule a multitude of inner Kubernetes clusters while ensuring higher isolation and security. A highlight of this setup is the composability and automation of the installation and the robust coordination between the Kubernetes components. This allows us to easily create, destroy, and reschedule clusters on-demand without affecting users or compromising the security of the infrastructure. It further allows us to spin up clusters with varying sizes, configurations, or even versions by just changing some arguments at cluster creation. This setup is still in its early days and our roadmap is planning for improvements in many areas such as transparent upgrades, dynamic reconfiguration and scaling of clusters, performance improvements, and (even more) security.
Furthermore, we are looking forward to improving our setup by making use of the ever-advancing state of Kubernetes operations tooling and upcoming features, such as Init Containers, Scheduled Jobs, Pod and Node affinity and anti-affinity, etc.

Most importantly, we are working on making the inner Kubernetes clusters a third party resource that can then be managed by a custom controller. The result would be much like the Operator concept by CoreOS. And to ensure that the community at large can benefit from this project, we will be open sourcing it in the near future.

– Hector Fernandez, Software Engineer & Puja Abbassi, Developer Advocate, Giant Swarm
Source: kubernetes

CPU Management in Docker 1.13

Resource management for containers is a huge requirement for production users. Being able to run multiple containers on a single host and ensure that one container does not starve the others in terms of cpu, memory, io, or networking in an efficient way is why I like working with containers. However, cpu management for containers is still not as straightforward as I would like. There are many different options when it comes to restricting the cpu usage of a container. With memory, it is easy for people to understand that `--memory 512m` gives the container up to 512MB. With CPU, it's hard for people to understand a container's limit with the current options.
In 1.13 we added a `--cpus` flag, which is the best way to limit the cpu usage of a container, with a sane UX that the majority of users can understand. Let's take a look at a couple of the options in 1.12 to show why this is necessary.
There are various ways to set a cpu limit for a container. Cpu shares, cpuset, and cfs quota and period are the three most common ways. We can just go ahead and say that cpu shares are the most confusing and worst functionality out of all the options we have. The numbers don't make sense. For example, is 5 a large number? Is 512 half of my system's resources if there is a max of 1024 shares? Is 100 shares significant when I only have one container? And if I add two more containers, each with 100 shares, what does that mean? We could go in depth on cpu shares, but you have to remember that cpu shares are relative to everything else on the system.
Cpuset is a viable alternative, but it takes much more thought and planning to use it correctly and in the correct circumstances. The cfs scheduler, along with quota and period, is one of the best options for limiting container cpu usage, but it comes with a bad user interface. Specifying cpu usage in microseconds of quota per period is hard for a user who just wants to do a simple task such as limiting a container to one core.
In 1.13, though, if you want a container to be limited to one cpu, you can just add `--cpus 1.0` to your Docker run/create command line. If you would like two and a half cpus as the limit, just add `--cpus 2.5`. Docker uses the CFS quota and period to limit the container's cpu usage to what you want, doing the calculations for you.
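As a sketch of how the flag is used and what it translates to (the image name is illustrative, and the arithmetic assumes the default 100000-microsecond CFS period):

```shell
# On the command line (requires a running Docker 1.13 daemon):
#   docker run -it --cpus 1.0 ubuntu /bin/bash   # at most one cpu
#   docker run -it --cpus 2.5 ubuntu /bin/bash   # at most two and a half cpus
#
# Under the hood, --cpus becomes a CFS quota over the default period:
#   quota = cpus * period
period=100000   # default CFS period in microseconds
cpus=2.5
quota=$(awk -v c="$cpus" -v p="$period" 'BEGIN { printf "%d", c * p }')
echo "quota=${quota}"   # the container may use this many microseconds of cpu time per period
```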
If you are limiting cpu usage for your containers, look into using this new flag and API to handle your needs. This flag works on both Linux and Windows.
For more information on the feature, see the docs: https://docs.docker.com/engine/admin/resource_constraints/
For more information on Docker 1.13 in general, check out these links:

Read the product documentation
Learn more about the latest Docker 1.13 release
Get started and install Docker
Attend the next Docker Online Meetup on Wed 1/25 at 10am PST


The post CPU Management in Docker 1.13 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Introducing Docker 1.13

Today we’re releasing 1.13 with lots of new features, improvements and fixes to help Docker users with New Year’s resolutions to build more and better container apps. Docker 1.13 builds on and improves Docker swarm mode introduced in Docker 1.12 and has lots of other fixes. Read on for Docker 1.13 highlights.

Use compose-files to deploy swarm mode services
Docker 1.13 adds Compose-file support to the `docker stack deploy` command so that services can be deployed using a `docker-compose.yml` file directly. Powering this is a major effort to extend the swarm service API to make it more flexible and useful.
Benefits include:

Specifying the number of desired instances for each service
Rolling update policies
Service constraints

Deploying a multi-host, multi-service stack is now as simple as:
docker stack deploy --compose-file=docker-compose.yml my_stack
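A minimal compose file for such a stack might look like the following (service name, image, and constraint are illustrative; the `deploy` section requires the version 3 Compose file format):

```shell
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3                # desired number of instances
      update_config:
        parallelism: 1           # rolling update policy: one task at a time
      placement:
        constraints: [node.role == worker]   # service constraint
EOF

# Deployed to a swarm with:
#   docker stack deploy --compose-file=docker-compose.yml my_stack
```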
Improved CLI backwards compatibility
Ever been bitten by the dreaded `Error response from daemon: client is newer than server` problem because your Docker CLI was updated, but you still need to use it with older Docker engines?
Starting with 1.13, newer CLIs can talk to older daemons. We’re also adding feature negotiation so that proper errors are returned if a new client is attempting to use features not supported in an older daemon. This greatly improves interoperability and makes it much simpler to manage Docker installs with different versions from the same machine.
Clean-up commands
Docker 1.13 introduces a couple of nifty commands to help users understand how much disk space Docker is using, and help remove unused data.

docker system df will show used space, similar to the unix tool df
docker system prune will remove all unused data.

Prune can also be used to clean up just some types of data. For example: docker volume prune removes unused volumes only.
CLI restructured
Docker has grown many features over the past couple of years, and the Docker CLI now has a lot of commands (40 at the time of writing). Some, like build or run, are used a lot; others, like pause or history, are more obscure. The many top-level commands clutter help pages and make tab-completion harder.
In Docker 1.13, we regrouped every command to sit under the logical object it's interacting with. For example, list and start of containers are now subcommands of docker container, and history is a subcommand of docker image.
docker container list
docker container start
docker image history
These changes let us clean up the Docker CLI syntax, improve help text and make Docker simpler to use. The old command syntax is still supported, but we encourage everybody to adopt the new syntax.
Monitoring improvements
docker service logs is a powerful new experimental command that makes debugging services much simpler. Instead of having to track down hosts and containers powering a particular service and pulling logs from those containers, docker service logs pulls logs from all containers running a service and streams them to your console.
Docker 1.13 also adds an experimental Prometheus-style endpoint with basic metrics on containers, images and other daemon stats.
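A quick sketch of both features (the service name and address are illustrative; the metrics endpoint assumes the daemon was started with experimental features enabled and a metrics address configured, e.g. `--metrics-addr=127.0.0.1:9323`):

```shell
# Stream logs from every container backing a service (experimental):
docker service logs my_service

# If the daemon exposes the experimental Prometheus-style endpoint,
# the metrics can be scraped over plain HTTP:
curl http://127.0.0.1:9323/metrics
```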
Build improvements
docker build has a new experimental --squash flag. When squashing, Docker will take all the filesystem layers produced by a build and collapse them into a single new layer. This can simplify the process of creating minimal container images, but may result in slightly higher overhead when images are moved around (because squashed layers can no longer be shared between images). Docker still caches individual layers to make subsequent builds fast.
1.13 also has support for compressing the build context that is sent from CLI to daemon using the --compress flag. This will speed up builds done on remote daemons by reducing the amount of data sent.
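For example (the Dockerfile contents and tags are illustrative; `--squash` requires the daemon to run in experimental mode):

```shell
cat > Dockerfile <<'EOF'
FROM alpine:3.5
RUN apk add --no-cache curl
RUN rm -rf /tmp/*
EOF

# Collapse the layers produced by the build into a single new layer:
#   docker build --squash -t myapp:squashed .
# Compress the build context sent from the CLI to a (remote) daemon:
#   docker build --compress -t myapp:latest .
```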
Docker for AWS and Azure out of beta
Docker for AWS and Azure are out of public beta and ready for production. We’ve spent the past 6 months perfecting Docker for AWS and Azure and incorporating user feedback, and we’re incredibly grateful to all the users that helped us test and identify problems. Also, stay tuned for more updates and enhancements over the coming months.
Get started with Docker 1.13
Docker for Mac and Windows users on both beta and stable channels will get automatic upgrade notifications (in fact, beta channel users have been running Docker 1.13 release clients for the past couple of months). If you’re new to Docker, download Docker for Mac and Windows to get started.
To deploy Docker for AWS or Docker for Azure, check out the docs to get started.

If you are interested in installing Docker on Linux, check the install instructions for details.
Helpful links:

Join us for the next Docker Online Meetup on Wed 1/25 at 9am PST 
Become a member of the Docker community
Read the product documentation

 


The post Introducing Docker 1.13 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Scaling Kubernetes deployments with Policy-Based Networking

Editor’s note: Today’s post is by Harmeet Sahni, Director of Product Management at Nuage Networks, writing about their contributions to Kubernetes and insights on policy-based networking.

Although it’s just been eighteen months since Kubernetes 1.0 was released, we’ve seen Kubernetes emerge as the leading container orchestration platform for deploying distributed applications. One of the biggest reasons for this is the vibrant open source community that has developed around it. The fact that the large number of Kubernetes contributors come from diverse backgrounds means we, and the community of users, are assured that we are investing in an open platform. Companies like Google (Container Engine), Red Hat (OpenShift), and CoreOS (Tectonic) are developing their own commercial offerings based on Kubernetes. This is a good thing since it will lead to more standardization and offer choice to the users.

Networking requirements for Kubernetes applications

For companies deploying applications on Kubernetes, one of the biggest questions is how to deploy and orchestrate containers at scale. They’re aware that the underlying infrastructure, including networking and storage, needs to support distributed applications. Software-defined networking (SDN) is a great fit for such applications because the flexibility and agility of the networking infrastructure can match that of the applications themselves. The networking requirements of such applications include:

Network automation
Distributed load balancing and service discovery
Distributed security with fine-grained policies
QoS policies
Scalable real-time monitoring
Hybrid application environments with services spread across containers, VMs, and bare metal servers
Service insertion (e.g. firewalls)
Support for private and public cloud deployments

Kubernetes Networking

Kubernetes provides a core set of platform services exposed through APIs. The platform can be extended in several ways through the extensions API, plugins, and labels.
This has allowed a wide variety of integrations and tools to be developed for Kubernetes. Kubernetes recognizes that the network in each deployment is going to be unique. Instead of trying to make the core system handle all those use cases, Kubernetes chose to make the network pluggable.

With Nuage Networks we provide a scalable policy-based SDN platform. The platform is managed by a Network Policy Engine that abstracts away the complexity associated with configuring the system. There is a separate SDN Controller that comes with a very rich routing feature set and is designed to scale horizontally. Nuage uses the open source Open vSwitch (OVS) for the data plane, with some enhancements in the OVS user space. Just like Kubernetes, Nuage has embraced openness as a core tenet for its platform. Nuage provides open APIs that allow users to orchestrate their networks and integrate network services such as firewalls, load balancers, IPAM tools, etc. Nuage is supported in a wide variety of cloud platforms like OpenStack and VMware, as well as container platforms like Kubernetes and others.

The Nuage platform implements a Kubernetes network plugin that creates VXLAN overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Each Pod is given an IP address from a network that belongs to a Namespace and is not tied to the Kubernetes node.

As cloud applications are built using microservices, the ability to control traffic among these microservices is a fundamental requirement. It is important to point out that these network policies also need to control traffic that is going to, or coming from, external networks and services. Nuage’s policy abstraction model makes it easy to declare fine-grained ingress/egress policies for applications. Kubernetes has a beta Network Policy API implemented using the Kubernetes Extensions API.
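As a concrete illustration of the kind of fine-grained ingress policy such an API supports (pod labels, names, and namespace are invented for this example; the resource uses the beta API group current at the time of writing):

```shell
cat > db-allow-frontend.yaml <<'EOF'
# Allow only Pods labeled role=frontend to reach role=db Pods on port 6379
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
  namespace: myproject
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
EOF

# Applied to the cluster with:
#   kubectl apply -f db-allow-frontend.yaml
```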
Nuage implements this Network Policy API to address a wide variety of policy use cases, such as:

Kubernetes Namespace isolation
Inter-Namespace policies
Policies between groups of Pods (Policy Groups) for Pods in the same or different Namespaces
Policies between Kubernetes Pods/Namespaces and external networks/services

A key question for users to consider is the scalability of the policy implementation. Some networking setups require creating access control list (ACL) entries telling Pods how they can interact with one another. In most cases, this eventually leads to an n-squared pileup of ACL entries. The Nuage platform avoids this problem and can quickly assign a policy that applies to a whole group of Pods. The Nuage platform implements these policies using a fully distributed stateful firewall based on OVS.

Being able to monitor the traffic flowing between Kubernetes Pods is very useful to both development and operations teams. The Nuage platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications. Users can get a visual representation of the traffic flows between groups of Pods, making it easy to see how the network policies are taking effect. Users can also get a rich set of traffic and policy statistics. Further, users can set alerts to be triggered based on policy event thresholds.

Conclusion

Even though we started working on our integration with Kubernetes over a year ago, it feels like we are just getting started. We have always felt that this is a truly open community and we want to be an integral part of it. You can find out more about our Kubernetes integration on our GitHub page.

– Harmeet Sahni, Director of Product Management, Nuage Networks
Source: kubernetes

Docker for Windows Server and Image2Docker

In December we had a live webinar focused on Windows Server Docker containers. We covered a lot of ground and we had some great feedback – thanks to all the folks who joined us. This is a brief recap of the session, which also answers the questions we didn’t get round to.
Webinar Recording
You can view the webinar on YouTube:

The recording clocks in at just under an hour. Here’s what we covered:

00:00 Introduction
02:00 Docker on Windows Server 2016
05:30 Windows Server 2016 technical details
10:30 Hyper-V and Windows Server Containers
13:00 Docker for Windows Demo – ASP.NET Core app with SQL Server
25:30 Additional Partnerships between Docker, Inc. and Microsoft
27:30 Introduction to Image2Docker
30:00 Demo – Extracting ASP.NET Apps from a VM using Image2Docker
52:00 Next steps and resources for learning Docker on Windows

Q&A
Can these [Windows] containers be hosted on a Linux host?
No. Docker containers use the underlying operating system kernel to run processes, so you can’t mix and match kernels. You can only run Windows Docker images on Windows, and Linux Docker images on Linux.
However, with an upcoming release to the Windows network stack, you will be able to run a hybrid Docker Swarm – a single cluster containing a mixture of Linux and Windows hosts. Then you can run distributed apps with Linux containers and Windows containers communicating in the same Docker Swarm, using Docker’s networking layer.
Is this only for ASP.NET Core apps?
No. You can package pretty much any Windows application into a Docker image, provided it can be installed and run without a UI.
The first demo in the Webinar showed an ASP.NET Core app running in Docker. The advantage with .NET Core is that it’s cross-platform so the same app can run in Linux or Windows containers, and on Windows you can use the lightweight Nano Server option.
In the second demo we showed ASP.NET WebForms and ASP.NET MVC apps running in Docker. Full .NET Framework apps need to use the Windows Server Core base image, but that gives you access to the whole feature set of Windows Server 2016.
If you have existing ASP.NET applications running in VMs, you can use the Image2Docker tool to port them across to Docker images. Image2Docker works on any Windows Server VM, from Server 2003 to Server 2016.

How does licensing work?
For production, licensing is at the host level, i.e. each machine or VM which is running Docker. Your Windows licence on the host allows you to run any number of Windows Docker containers on that host. With Windows Server 2016 you get the commercially supported version of Docker included in the licence costs, with support from Microsoft and Docker, Inc.
For development, Docker for Windows runs on Windows 10 and is free, open-source software. Docker for Windows can also run a Linux VM on your machine, so you can use both Linux and Windows containers in development. Like the server version, your Windows 10 licence allows you to run any number of Windows Docker containers.
Windows admins will want a unified platform for managing images and containers. That’s Docker Datacenter which is separately licensed, and will be available for Windows soon.
What about Windows updates for the containers?
Docker containers have a different life cycle from full VMs or bare-metal servers. You wouldn’t deploy an app update or a Windows update inside a running container – instead you update the image that packages your app, then just kill the container and start a new container from the updated image.
Microsoft are supporting that workflow with the two Windows base images on Docker Hub – for Windows Server Core and Nano Server. They are following a monthly release cycle, and each release adds an incremental update with new patches and security updates.
For your own applications, you would aim to have the same deployment schedule – after a new release of the Windows base image, you would rebuild your application images and deploy new containers. All this can be automated, so it’s much faster and more reliable than manual patching. Docker Captain Stefan Scherer has a great blog post on keeping your Windows containers up to date.
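The rebuild-and-replace cycle described above can be sketched as follows (image and container names are illustrative, and the commands assume a Windows Docker host with a running daemon):

```shell
# Rebuild the application image; --pull ensures the freshly patched base image is used
docker build --pull -t myapp:2017-01 .

# Replace the running container instead of patching it in place
docker stop myapp
docker rm myapp
docker run -d --name myapp myapp:2017-01
```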
Additional Resources

Get everything Docker and Microsoft here
Windows and Docker case-study: Tyco’s life-safety applications 
Self-paced labs for Windows Containers from Docker
Packaging ASP.NET 4.5 Applications in Docker
Subscribe to Docker’s weekly newsletter


The post Docker for Windows Server and Image2Docker appeared first on Docker Blog.
Source: https://blog.docker.com/feed/