Deploying PostgreSQL Clusters using StatefulSets

Editor’s note: Today’s guest post is by Jeff McCormick, a developer at Crunchy Data, showing how to build a PostgreSQL cluster using the new Kubernetes StatefulSet feature.

In an earlier post, I described how to deploy a PostgreSQL cluster using Helm, a Kubernetes package manager. The following example provides the steps for building a PostgreSQL cluster using the new Kubernetes StatefulSets feature.

StatefulSets Example

Step 1 – Create Kubernetes Environment

StatefulSets is a new feature implemented in Kubernetes 1.5 (in prior versions it was known as PetSets). As a result, running this example requires an environment based on Kubernetes 1.5.0 or above. The example in this blog deploys on CentOS 7 using kubeadm. Some instructions on what kubeadm provides and how to deploy a Kubernetes cluster are located here.

Step 2 – Install NFS

The example in this blog uses NFS for the Persistent Volumes, but any shared file system would also work (e.g. Ceph, Gluster). The example script assumes your NFS server is running locally and your hostname resolves to a known IP address. In summary, the steps used to get NFS working on a CentOS 7 host are as follows:

    sudo setsebool -P virt_use_nfs 1
    sudo yum -y install nfs-utils libnfsidmap
    sudo systemctl enable rpcbind nfs-server
    sudo systemctl start rpcbind nfs-server rpc-statd nfs-idmapd
    sudo mkdir /nfsfileshare
    sudo chmod 777 /nfsfileshare/
    sudo vi /etc/exports
    sudo exportfs -r

The /etc/exports file should contain a line similar to this one, except with the applicable IP address specified:

    /nfsfileshare 192.168.122.9(rw,sync)

After these steps, NFS should be running in the test environment.

Step 3 – Clone the Crunchy PostgreSQL Container Suite

The example used in this blog is found in the Crunchy Containers GitHub repo here.
Clone the Crunchy Containers repository to your test Kubernetes host and go to the example:

    cd $HOME
    git clone https://github.com/CrunchyData/crunchy-containers.git
    cd crunchy-containers/examples/kube/statefulset

Next, pull down the Crunchy PostgreSQL container image:

    docker pull crunchydata/crunchy-postgres:centos7-9.5-1.2.6

Step 4 – Run the Example

To begin, set a few of the environment variables used in the example:

    export BUILDBASE=$HOME/crunchy-containers
    export CCP_IMAGE_TAG=centos7-9.5-1.2.6

BUILDBASE is where you cloned the repository, and CCP_IMAGE_TAG is the container image version we want to use. Next, run the example:

    ./run.sh

That script will create several Kubernetes objects, including:

    Persistent Volumes (pv1, pv2, pv3)
    Persistent Volume Claim (pgset-pvc)
    Service Account (pgset-sa)
    Services (pgset, pgset-master, pgset-replica)
    StatefulSet (pgset)
    Pods (pgset-0, pgset-1)

At this point, two pods will be running in the Kubernetes environment:

    $ kubectl get pod
    NAME      READY     STATUS    RESTARTS   AGE
    pgset-0   1/1       Running   0          2m
    pgset-1   1/1       Running   1          2m

Immediately after the pods are created, the deployment will be as depicted below:

Step 5 – What Just Happened?

This example deploys a StatefulSet, which in turn creates two pods. The containers in those two pods run the PostgreSQL database. For a PostgreSQL cluster, we need one of the containers to assume the master role and the other containers to assume the replica role. So, how do the containers determine who will be the master, and who will be the replica?

This is where the new StatefulSet mechanics come into play. The StatefulSet mechanics assign a unique ordinal value to each pod in the set; these ordinal values always start with 0. During initialization, each container examines its assigned ordinal value. An ordinal value of 0 causes the container to assume the master role within the PostgreSQL cluster.
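As an illustration, the ordinal-to-role decision could be sketched in a few lines of shell. This is a hypothetical sketch, not the actual Crunchy container entrypoint; it only assumes the StatefulSet naming pattern <setname>-<ordinal> described here:

```shell
# Derive the PostgreSQL role from the StatefulSet ordinal encoded in the
# pod host name (pgset-0, pgset-1, ...). Hypothetical illustration only.
pod_name="pgset-0"            # in a real pod this would be: pod_name=$(hostname)
ordinal="${pod_name##*-}"     # strip everything up to the last dash
if [ "$ordinal" -eq 0 ]; then
  PG_ROLE=master              # ordinal 0 assumes the master role
else
  PG_ROLE=replica             # all other ordinals become replicas
fi
echo "$PG_ROLE"
```

In a real pod, pod_name would come from $(hostname); StatefulSet pod host names are stable and encode the ordinal, which is what makes this simple form of discovery possible.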
For all other ordinal values, the container assumes a replica role. This is a very simple form of discovery made possible by the StatefulSet mechanics.

PostgreSQL replicas are configured to connect to the master database via a Service dedicated to the master database. To support this replication, the example creates a separate Service for each of the master and replica roles. Once the replica has connected, it begins replicating state from the master. During container initialization, a master container uses a Service Account (pgset-sa) to change its container label value to match the master Service selector. Changing the label is important to enable traffic destined for the master database to reach the correct container within the StatefulSet. All other pods in the set assume the replica Service label by default.

Step 6 – Deployment Diagram

The example results in the deployment depicted below. In this deployment, there is a Service for the master and a separate Service for the replica. The replica is connected to the master, and replication of state has started.

The Crunchy PostgreSQL container supports other forms of cluster deployment; the style of deployment is dictated by setting the PG_MODE environment variable for the container. In the case of a StatefulSet deployment, that value is set to:

    PG_MODE=set

This environment variable is a hint to the container initialization logic as to the style of deployment we intend.

Step 7 – Testing the Example

The tests below assume that the psql client has been installed on the test system. If the psql client has not been previously installed, it can be installed as follows:

    sudo yum -y install postgresql

In addition, the tests below assume that the test environment resolves names via the Kube DNS and that its DNS search path is set to match the applicable Kube namespace and domain.
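For reference, cluster DNS resolves a Service under a fully qualified name of the form <service>.<namespace>.svc.<cluster domain>; the short service names used below only work when the DNS search path covers the namespace. A small sketch of the naming scheme (the default namespace and the cluster.local domain are assumptions, not taken from the example):

```shell
# Construct the fully qualified name under which the master Service would
# resolve via the Kube DNS. Namespace and cluster domain are assumed defaults.
service=pgset-master
namespace=default
cluster_domain=cluster.local
fqdn="${service}.${namespace}.svc.${cluster_domain}"
echo "$fqdn"
```

If the short name pgset-master does not resolve in your environment, the fully qualified form can be passed to psql with -h directly.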
The master service is named pgset-master and the replica service is named pgset-replica.

Test the master as follows (the password is password):

    psql -h pgset-master -U postgres postgres -c 'table pg_stat_replication'

If things are working, the command above will return output indicating that a single replica is connecting to the master.

Next, test the replica as follows:

    psql -h pgset-replica -U postgres postgres -c 'create table foo (id int)'

The command above should fail, as the replica is read-only within a PostgreSQL cluster.

Next, scale up the set as follows:

    kubectl scale statefulset pgset --replicas=3

The command above should successfully create a new replica pod called pgset-2, as depicted below:

Step 8 – Persistence Explained

Take a look at the persisted PostgreSQL data files on the resulting NFS mount path:

    $ ls -l /nfsfileshare/
    total 12
    drwx------ 20   26   26 4096 Jan 17 16:35 pgset-0
    drwx------ 20   26   26 4096 Jan 17 16:35 pgset-1
    drwx------ 20   26   26 4096 Jan 17 16:48 pgset-2

Each container in the StatefulSet binds to the single NFS Persistent Volume Claim (pgset-pvc) created in the example script. Since NFS and the PVC can be shared, each pod can write to this NFS path. The container is designed to create a subdirectory on that path using the pod host name for uniqueness.

Conclusion

StatefulSets is an exciting feature added to Kubernetes for container builders that are implementing clustering. The ordinal values assigned to the set provide a very simple mechanism to make clustering decisions when deploying a PostgreSQL cluster.

–Jeff McCormick, Developer, Crunchy Data
Source: kubernetes

How about family spring break in Austin?

Are you looking for Spring Break plans with the family? Look no further than DockerCon 2017! Located in sunny Austin, Texas, April 17-20, DockerCon provides learning and entertainment for all members of the family.

Childcare
As part of our efforts to make DockerCon’s doors open to all, we are excited to announce that we will be partnering again this year with Big Time Kid Care to provide childcare at DockerCon! Gone are the days of “Mom / Dad has to stay home with the kids…” – you can now bring the whole family to DockerCon!
Childcare will be offered:

Monday, April 17  1:00pm – 7:30pm
Tuesday, April 18  8:00am – 6:30pm
Wednesday, April 19 8:00am – 5:30pm
Thursday, April 20 8:00am – 12:00pm

Following the success of last year, we have chosen Big Time Kid Care as our childcare provider. All caregivers and staff are certified, fully insured and experienced in child education and care, with police background checks. Big Time Kid Care will be well equipped and excited to take good care of your little ones in a kid-friendly play room close to the DockerCon activities at the Austin Convention Center. Games, activities, breakfast and lunch will be provided.
Spousetivities
Interested in exploring the diverse culture of Austin? Spousetivities offers great events for partners and family members. Activities range from checking out the legendary Magnolia Market to exploring the famous Salt Lick BBQ. Spousetivities is a great opportunity to meet new folks and learn about the history of Austin. All activities are suitable for the whole family, so kids are welcome as well. Check out all the “Spousetivities” planned for DockerCon week!
This year, turn DockerCon into a family affair! We hope to see you all in Austin.


The post How about family spring break in Austin? appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Containers as a Service, the foundation for next generation PaaS

Today’s post is by Brendan Burns, Partner Architect at Microsoft and Kubernetes co-founder.

Containers are revolutionizing the way that people build, package and deploy software. But what is often overlooked is how they are revolutionizing the way that people build the software that builds, packages and deploys software. (It’s OK if you have to read that sentence twice…) Today, and in a talk at Container World tomorrow, I’m taking a look at how container orchestrators like Kubernetes form the foundation for next generation platform as a service (PaaS). In particular, I’m interested in how cloud container as a service (CaaS) platforms like Azure Container Service, Google Container Engine and others are becoming the new infrastructure layer that PaaS is built upon.

To see this, it’s important to consider the set of services that have traditionally been provided by PaaS platforms:

Source code and executable packaging and distribution
Reliable, zero-downtime rollout of software versions
Healing, auto-scaling, load balancing

When you look at this list, it’s clear that most of these traditional “PaaS” roles have now been taken over by containers. The container image and container image build tooling have become the way to package up your application. Container registries have become the way to distribute your application across the world. Reliable software rollout is achieved using orchestrator concepts like Deployment in Kubernetes, and service healing, auto-scaling and load-balancing are all properties of an application deployed in Kubernetes using ReplicaSets and Services.

What then is left for PaaS? Is PaaS going to be replaced by container as a service? I think the answer is “no.” The piece that is left for PaaS is the part that was always the most important part of PaaS in the first place, and that’s the opinionated developer experience.
In addition to all of the generic parts of PaaS that I listed above, the most important part of a PaaS has always been the way in which the developer experience and application framework made developers more productive within the boundaries of the platform. PaaS enables developers to go from source code on their laptop to a world-wide scalable service in less than an hour. That’s hugely powerful. However, in the world of traditional PaaS, building the PaaS infrastructure itself, the software on which the user’s software ran, required very strong skills and experience with distributed systems. Consequently, PaaS tended to be built by distributed system engineers rather than experts in a particular vertical developer experience. This meant that PaaS platforms tended towards general purpose infrastructure rather than targeting specific verticals. Recently, we have seen this start to change, first with PaaS targeted at mobile API backends, and later with PaaS targeting “function as a service”. However, these products were still built from the ground up on top of raw infrastructure.

More recently, we are starting to see these platforms built on top of container infrastructure. Taking “function as a service” as an example, there are at least two (and likely more) open source implementations of functions as a service that run on top of Kubernetes (fission and funktion). This trend will only continue. Building a platform as a service on top of container as a service is easy enough that you could imagine giving it out as an undergraduate computer science assignment. This ease of development means that individual developers with specific expertise in a vertical (say, software for running three-dimensional simulations) can and will build PaaS platforms targeted at that specific vertical experience.
In turn, by targeting such a narrow experience, they will build an experience that fits that narrow vertical perfectly, making their solution a compelling one in that target market.

This then points to the other benefit of next generation PaaS being built on top of container as a service: it frees the developer from having to make an “all-in” choice on a particular PaaS platform. When layered on top of container as a service, the basic functionality (naming, discovery, packaging, etc.) is all provided by the CaaS, and thus common across the multiple PaaS platforms that happen to be deployed on top of that CaaS. This means that developers can mix and match, deploying multiple PaaS platforms to the same container infrastructure and choosing for each application the PaaS platform that best suits that particular application. Also, importantly, they can choose to “drop down” to raw CaaS infrastructure if that is a better fit for their application. Freeing PaaS from providing the infrastructure layer enables PaaS to diversify and target specific experiences without fear of being too narrow. The experiences become more targeted, more powerful, and yet, by building on top of container as a service, more flexible as well.

Kubernetes is infrastructure for next generation applications, PaaS and more. Given this, I’m really excited by our announcement today that Kubernetes on Azure Container Service has reached general availability. When you deploy your next generation application to Azure, whether on a PaaS or deployed directly onto Kubernetes itself (or both), you can deploy it onto a managed, supported Kubernetes cluster.

Furthermore, because we know that the world of PaaS and software development in general is a hybrid one, we’re excited to announce the preview availability of Windows clusters in Azure Container Service.
We’re also working on hybrid clusters in ACS-Engine and expect to roll those out to general availability in the coming months.

I’m thrilled to see how containers and container as a service are changing the world of compute, and I’m confident that we’re only scratching the surface of the transformation we’ll see in the coming months and years.

–Brendan Burns, Partner Architect at Microsoft and co-founder of Kubernetes

Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Download Kubernetes
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Source: kubernetes

What’s new at DockerCon 2017

If you’ve attended multiple DockerCons, you know that the team is always looking for new and exciting programs to improve on previous editions. Last year, we introduced a ton of new DockerCon programs, including a new Black Belt track, DockerCon scholarships, workshops and more. This year we’re excited to introduce more DockerCon goodness!
Using Docker and Docker Deep Dive Tracks
In past editions, we received great attendee feedback requesting that we split the Docker, Docker, Docker track into two separate tracks. We’ve heard you, and as a result are happy to introduce the Using Docker and Docker Deep Dive tracks.
The Using Docker track is for everyone who’s getting started with Docker or wants to better implement Docker in their workflow. Whether you’re a .NET, Java or Node.js developer looking to modernize your applications, or an IT pro who wants to learn about Docker orchestration and application troubleshooting, this track will have specific sessions to get you up to speed with Docker.
The Docker Deep Dive track focuses on the technical details associated with the different components of the Docker platform: advanced orchestration, networking, security, storage, management and plug-ins. The Docker engineering leads will walk you through the best way to build, ship and run distributed applications with Docker as well as give you a hint at what’s on the roadmap.
More Community Theater
Located in the Ecosystem Expo, the Community Theater features cool Docker hacks and lightning talks by various community members on a range of topics. Because this “expo track” was very popular last year and in order to showcase more cool projects and use cases from the community, we’ve decided to add a second community theater! Check out the talks and abstracts from the 30 extra speakers featured in that track.
Adding a third day to the conference!
Repeating top sessions
With all these tracks and awesome sessions, we know that it can be difficult to choose which ones to attend, especially if they are scheduled at the same time. This year, based on your session ratings during the conference, the top 8 sessions will be delivered again on Thursday!
Mentor Summit
Also new this year, we will host a summit for current and aspiring Docker Mentors on Thursday, April 20th. Mentorship can be a fun and rewarding experience, and you don’t need to be an expert in order to mentor someone. Come learn the ins and outs of being an awesome mentor, both in industry and in the Docker community!
Docker Internals Summit
Finally, we’re excited to host a Docker Internals Summit. This is a collaborative event for advanced Docker Operators who are actively maintaining, contributing or generally involved in the design and development of the following Docker open source projects: Infrakit, SwarmKit, Hyperkit, Notary, containerd, runC, libnetwork and underlying technologies TUF, IPVS, Raft, etc.
The goals of the summit are twofold:

Get everyone up to speed with each project’s mission, scope, insights into their architecture, roadmap and integration with other systems.
Drive architecture, design decisions and code contributions through collaboration with project maintainers during the hands-on sessions.


The post What’s new at DockerCon 2017 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Dockercast Interview: Docker Captain Stefan Scherer on Microsoft and Docker

In this podcast we chat with Captain and newly minted Microsoft MVP Stefan Scherer. Stefan has done some fantastic work with Docker for Windows and Microservices. We also talk about how lift and shift models work really well for Docker and Windows and Stefan walks us through some of the basics of running Docker on Windows. In addition to the podcast, below is his interview on why being a Captain allows him to give back to the awesome Docker community.
Dockercast with Stefan Scherer

Interview with Stefan Scherer
How has Docker impacted what you do on a daily basis?

Docker helps me keep my machines clean. I realize more and more that you only need a few tools on your laptop, keeping it clean and lean. And instead of writing documentation on how to build a piece of software, you describe all the steps in a Dockerfile. So the multi-GByte fat developer VMs we maintained some years ago shrink down to a few KByte of Dockerfiles for each project. No time-consuming backups needed; just keep the Dockerfile in your sources and have a backup of your Git repos.
Having practiced that on Mac and Linux now for a while, I’m happy to see that this will work on Windows as well. I see the same patterns there to get rid of an exploding PATH variable, keeping all the dependencies out of your machine and inside a container.
As a Docker Captain, how do you share that learning with the community?
When I’ve found something or solved a problem that could be useful for others, I like to write a blog post about my experience. I’m trying to show it in a simple way. If it’s just a cool hack that fits into a Tweet, then you can find it on Twitter.
I’m also watching some GitHub repos and helping people there by answering their questions or giving them some useful links to find the relevant documentation.
More and more people ask me questions directly through Twitter or email, but I gently ask them to ask the question in a public forum like GitHub, Gitter or Slack. Not that I don’t want to answer them, but instead others can profit from the discussion and the given solution.
I also speak at local Meetups. Our Hypriot team has been organizing Docker Meetups for about a year to bring together students and those interested in Docker that are working in various companies.
Why do you like Docker?
What I really like about Docker is that, although many new features came in the last year, it is still small and simple to use, at least from a developer’s point of view.
What’s so cool about Docker is that, with the availability of Windows Containers earlier this year, you now have the same tools and mindset on a formerly very different platform. I believe that this lowers the barrier between Linux and Windows. Once you know the basic Docker commands, you are able to do things on both platforms. Before that, you were probably unsure how to run a given piece of software as a service on that previously unknown platform.
What’s your favorite thing about the Docker community?
I remember when I started to test the Windows Docker engine and found the first bugs. So I wrote an issue on GitHub and, you know what, I immediately got an answer from employees at Microsoft. I’ve previously pressed the “Send feedback report to Microsoft” button when Word crashed and nothing happened. But with the Docker project, I learned that there is a much better feedback loop. I think it’s important for both sides to give feedback to the developers about the software they are writing.
Are you working on any fun projects on the side?
After some first baby steps with Docker, I joined four other friends at the end of 2014 to really learn Docker together during the holidays. And we wanted to try it out on a Raspberry Pi, with only a single-core CPU and half a gig of memory. We hadn’t the slightest idea where this fun idea would lead us. This is probably not the most straightforward way to learn Docker, but we learned a lot of the basics and what’s needed, such as a suitable Linux kernel. In less than two months, we released our version of what was later called HypriotOS. You can’t imagine what hard work is hidden behind an easy-to-use SD image that you just plug into your Raspberry Pi and boot into Docker.
And we’re happy to see that this project, our work and the efforts of others led to the official ARM support of Docker in the upstream GitHub repo.
How did you first learn about Docker?
We were in the middle of a new software project where we automated a lot of our development and testing environments with Vagrant. We heard about this Docker thing and that it would be much faster and smaller. It took a few weeks to find the time to play with Docker, but it felt right to learn more about it.
Docker Captains
Captains are Docker ambassadors (not Docker employees) and their genuine love of all things Docker has a huge impact on the Docker community. Whether they are blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events – they make Docker’s mission of democratizing technology possible. Whether you are new to Docker or have been a part of the community for a while, please don’t hesitate to reach out to Docker Captains with your challenges, questions, speaking requests and more.
While Docker does not accept applications for the Captains program, we are always on the lookout for additional leaders that inspire and educate the Docker community. If you are interested in becoming a Docker Captain, we need to know how you are giving back. Sign up for community.docker.com, share your Docker activities on social media, get involved in a local meetup as a speaker or organizer, and continue to share your knowledge of Docker in your community.
Follow the Docker Captains
You can now follow all of the Docker Captains on Twitter using Docker with Alex Ellis’ tutorial.

The post Dockercast Interview: Docker Captain Stefan Scherer on Microsoft and Docker appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Announcing Docker Birthday #4: Spreading the Docker Love!

Community is at the heart of Docker, and thanks to the hard work of thousands of maintainers, contributors, Captains, mentors, organizers, and the entire Docker community, the Docker platform is now used in production by companies of all sizes and industries.
To show our love and gratitude, it has become a tradition for Docker and our awesome network of meetup organizers to host Docker Birthday meetup celebrations all over the world. This year the celebrations will take place during the week of March 13-19, 2017. Come learn, mentor, celebrate, and eat cake!
Docker Love
We wanted to hear from the community about why they love Docker!
Wellington Silva, Docker São Paulo meetup organizer said “Docker changed my life, I used to spend days compiling and configuring environments. Then I used to spend hours setting up using VM. Nowadays I setup an environment in minutes, sometimes in seconds.”

Love the new organization of commands in Docker 1.13!
— Kaslin Fields (@kaslinfields) January 25, 2017

Docker Santo Domingo organizer Victor Recio said, “Docker has increased my effectiveness at work; currently I can deploy software to a production environment without worrying that it will not work when delivery takes place. I love Docker and I’m very grateful for it, and whenever I can share my knowledge about Docker with the young people of the communities of my country, I do it. I am proud that there are already startups that have reached a Silicon Valley level.”

We love docker here at @Harvard for our screening platform. https://t.co/zpp8Wpqvk5
— Alan Aspuru-Guzik (@A_Aspuru_Guzik) January 12, 2017

Docker Birthday Labs
At the local birthday 4 meetups, there will be Docker labs and challenges to help attendees at all levels and welcome new members into the community. We’re partnering with CS schools, non-profit organizations, and local meetup groups to throw a series of events around the world. While the courses and labs are geared towards newcomers and intermediate level users, advanced and expert community members are invited to join as mentors to help attendees work through the materials.
Find a Birthday meetup near you!
There are already 46 Docker Birthday 4 celebrations scheduled around the world with more on the way! Check back as more events are announced.

Thursday, March 9th

Fulda, Germany

Saturday, March 11th

Madurai, India

Sunday, March 12th

Mumbai, India

Monday, March 13th

Atlanta, GA
Dallas, TX
Grenoble, France
Liège, Belgium
Luxembourg, Luxembourg

Tuesday, March 14th

Austin, TX
Berlin, Germany
Las Vegas, NV
Malmö, Sweden
Miami, FL
Saint Louis, MO

Wednesday, March 15th

Blacksburg, VA
Columbus, OH
Istanbul, Turkey
Nantes, France
Phoenix, AZ
Prague, Czech Republic
San Francisco, CA
Santa Barbara, CA
Singapore, Singapore

Thursday, March 16th

Brussels, Belgium
Budapest, Hungary
Dhahran, Saudi Arabia
Dortmund, Germany
Iráklion, Greece
Montreal, Canada
Nice, France
Stuttgart, Germany
Tokyo, Japan
Washington, DC

Saturday, March 18th

Delhi, India
Hermosillo, Mexico
Kanpur, India
Kisumu, Kenya
Novosibirsk, Russia
Porto, Portugal
Rio de Janeiro, Brazil
Thanh Pho Ho Chi Minh, Vietnam

Monday, March 20th

London, United Kingdom
Milan, Italy

Thursday, March 23rd

Dublin, Ireland

Wednesday, March 29th

Colorado Springs, CO
Ottawa, Canada

Want to help us organize a Docker Birthday celebration in your city? Email us at meetups@docker.com for more information!
Are you an advanced Docker user? Join us as a mentor!
We are recruiting a network of mentors to attend the local events and help guide attendees through the Docker Birthday labs. Mentors should have experience working with Docker Engine, Docker Networking, Docker Hub, Docker Machine, Docker Orchestration and Docker Compose. Click here to sign up as a mentor.


The post Announcing Docker Birthday 4: Spreading the Docker Love! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Announcing Docker Birthday #4: Spreading the Docker Love!

Community is at the heart of and thanks to the hard work of thousands of maintainers, contributors, Captains, mentors, organizers, and the entire Docker community, the Docker platform is now used in production by companies of all sizes and industries.
To show our love and gratitude, it has become a tradition for Docker and our awesome network of meetup organizers to host Docker Birthday meetup celebrations all over the world. This year the celebrations will take place during the week of March 13-19, 2017. Come learn, mentor, celebrate, eat cake, and take an epic !
Docker Love
We wanted to hear from the community about why they love Docker!
Wellington Silva, Docker São Paulo meetup organizer said “Docker changed my life, I used to spend days compiling and configuring environments. Then I used to spend hours setting up using VM. Nowadays I setup an environment in minutes, sometimes in seconds.”

Love the new organization of commands in Docker 1.13!
— Kaslin Fields (@kaslinfields) January 25, 2017

Docker Santo Domingo organizer, Victor Recio said, “Docker has increased my effectiveness at work, currently I can deploy software to production environment without worrying that it will not work when the delivery takes place. I love docker and I&;m very grateful with it and whenever I can share my knowledge about docker with the young people of the communities of my country I do it and I am proud that there are already startups that have reach a Silicon Valley level.”

We love docker here at @Harvard for our screening platform. https://t.co/zpp8Wpqvk5
— Alan Aspuru-Guzik (@A_Aspuru_Guzik) January 12, 2017

Docker Birthday Labs
At the local birthday 4 meetups, there will be Docker labs and challenges to help attendees at all levels and welcome new members into the community. We’re partnering with CS schools, non-profit organizations, and local meetup groups to throw a series of events around the world. While the courses and labs are geared towards newcomers and intermediate level users, advanced and expert community members are invited to join as mentors to help attendees work through the materials.
Find a Birthday meetup near you!
There are already 44 Docker Birthday 4 celebrations scheduled around the world with more on the way! Check back as more events are announced.

Thursday, March 9th

Fulda, Germany

Saturday, March 11th

Madurai, India

Sunday, March 12th

Mumbai, India

Monday, March 13th

Dallas, TX
Grenoble, France
Liège, Belgium
Luxembourg, Luxembourg

Tuesday, March 14th

Austin, TX
Berlin, Germany
Las Vegas, NV
Malmö, Sweden
Miami, FL

Wednesday, March 15th

Columbus, OH
Istanbul, Turkey
Nantes, France
Phoenix, AZ
Prague, Czech Republic
San Francisco, CA
Santa Barbara, CA
Singapore, Singapore

Thursday, March 16th

Brussels, Belgium
Budapest, Hungary
Dhahran, Saudi Arabia
Dortmund, Germany
Iráklion, Greece
Montreal, Canada
Nice, France
Saint Louis, MO
Stuttgart, Germany
Tokyo, Japan
Washington, DC

Saturday, March 18th

Delhi, India
Hermosillo, Mexico
Kanpur, India
Kisumu, Kenya
Novosibirsk, Russia
Porto, Portugal
Rio de Janeiro, Brazil
Thanh Pho Ho Chi Minh, Vietnam

Monday, March 20th

London, United Kingdom
Milan, Italy

Thursday, March 23rd

Dublin, Ireland

Wednesday, March 29th

Colorado Springs, CO
Ottawa, Canada

Want to help us organize a Docker Birthday celebration in your city? Email us at meetups@docker.com for more information!
Are you an advanced Docker user? Join us as a mentor!
We are recruiting a network of mentors to attend the local events and help guide attendees through the Docker Birthday labs. Mentors should have experience working with Docker Engine, Docker Networking, Docker Hub, Docker Machine, Docker Orchestration and Docker Compose. Click here to sign up as a mentor.


The post Announcing Docker Birthday 4: Spreading the Docker Love! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

5 reasons to attend DockerCon 2017

2017 is for the hackers, the makers and those who want to build tools of mass innovation.
In April, 5,000 of the best and brightest will come together to share and learn from different experiences, diverse backgrounds, and common interests. We know that part of what makes DockerCon so special is what happens in the hallways, not just the main stage. Those spontaneous connections between attendees, and the endless networking and learning opportunities, are where the most meaningful interactions occur.

If you haven’t been to a DockerCon yet, you may not know what you are missing. To try to explain why DockerCon 2017 is a must attend conference, we took the liberty of putting together the Top 5 reasons to join us April 17-20 in Austin, Texas.

The.Best.Content. From beginner to deep dive, DockerCon brings together the brightest minds to talk about their passion. Those passions range from tracing containers, building containers from scratch, monitoring and storage, to creating effective images. The list goes on.
Experts Everywhere. Want to meet the maintainers and tech leads of the Docker project? DockerCon! The community members that put together the coolest IoT hack to make walking in between sessions fun? DockerCon! What about chatting directly with the developers and IT professionals at Fortune 500 enterprises that are transforming their organizations by using Docker? DockerCon!
A Hallway Track like you’ve never experienced. DockerCon took conference networking to a new level last year with Bump Up. We can’t wait to share what we have planned this year that will make connecting, learning, and sharing with other like-minded attendees one of the most valuable takeaways of the event.  
DockerCon For All. DockerCon will always be an open and inclusive event for all. We are excited to announce the launch of this year’s DockerCon Diversity Scholarship. The scholarship’s purpose is to provide members of the Docker community who are traditionally underrepresented with financial support, on-site mentorship, and a scholarship to attend DockerCon.
Community &amp; Docker Swag. As a part of Docker’s community, you already know that it rocks, thanks to you! Now just imagine the energy when 5,000 of us are in one room doing what we love! Now imagine we all just got the most amazing Docker swag to top it off! We are talking backpacks, t-shirts, umbrellas, scarves, LEGO whales; this year will be no exception.

We hope you’ve read to this point and are so inspired to be a part of something innovative and unique that you’ll join us in Austin for DockerCon 2017. And in case you need some extra help convincing a manager to let you go, we’ve put together a few more resources and a request letter for you to use.


The post 5 reasons to attend DockerCon 2017 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Inside JD.com's Shift to Kubernetes from OpenStack

Editor’s note: Today’s post is by the Infrastructure Platform Department team at JD.com about their transition from OpenStack to Kubernetes. JD.com is one of China’s largest companies and the first Chinese Internet company to make the Global Fortune 500 list.

History of cluster building

The era of physical machines (2004-2014)

Before 2014, all of our company’s applications were deployed on physical machines. In that era, we waited an average of one week for resources to be allocated before an application could come online. Due to the lack of isolation, applications affected each other, creating many potential risks. At that time, the average number of Tomcat instances per physical machine was no more than nine; machine resources were seriously wasted and scheduling was inflexible. Migrating applications off a failed physical machine took hours, and auto-scaling could not be achieved. To improve the efficiency of application deployment, we developed systems for compilation and packaging, automatic deployment, log collection, resource monitoring, and more.

Containerized era (2014-2016)

The Infrastructure Platform Department (IPD), led by Liu Haifeng, Chief Architect of JD.com, sought a new solution in the fall of 2014, and Docker came onto our radar. At that time Docker was on the rise, but it was still somewhat immature and lacked production experience. We tested Docker repeatedly and customized it to fix a couple of issues, such as system crashes caused by device mapper and some Linux kernel bugs. We also added plenty of new features, including disk speed limits, capacity management, and layer merging during image builds. To manage the container cluster properly, we chose an architecture of OpenStack plus the novadocker driver, with containers managed as virtual machines.
This became the first generation of the JD container engine platform: JDOS 1.0 (JD Datacenter Operating System). The main purpose of JDOS 1.0 was to containerize the infrastructure; since then, all applications have run in containers rather than on physical machines. For the operation and maintenance of applications, we took full advantage of existing tools. The time for developers to request computing resources in production dropped from a week to several minutes, and after pooling computing resources, even scaling to 1,000 containers could be finished in seconds. Application instances were isolated from each other, and both the average deployment density of applications and physical machine utilization increased threefold, which brought great economic benefits.

We deployed clusters in each IDC and provided unified global APIs to support deployment across IDCs. A single OpenStack distributed container cluster in our production environment has between 4,000 and 10,000 compute nodes. JDOS 1.0 successfully supported the “6.18” and “11.11” promotional events in both 2015 and 2016, and by November 2016 there were already 150,000 containers running online. “6.18” and “11.11” are the two most popular online promotions at JD.com, similar to the Black Friday promotions; fulfilled orders on November 11, 2016 reached 30 million.

In developing and operating JDOS 1.0, applications were migrated directly from physical machines to containers. Essentially, JDOS 1.0 was an implementation of IaaS, so application deployment still depended heavily on compilation-packaging and automatic deployment tools. The experience was nevertheless very valuable: first, we successfully moved the business into containers; second, we gained a deep understanding of container networking and storage, and learned how to polish them.
Finally, all of this experience laid a solid foundation for us to develop a brand-new application container platform.

New container engine platform (JDOS 2.0)

Platform architecture

When JDOS 1.0 grew from 2,000 containers to 100,000, we launched a new container engine platform, JDOS 2.0. The goal of JDOS 2.0 is to be not just an infrastructure management platform, but a container engine platform facing applications. Building on JDOS 1.0 and Kubernetes, JDOS 2.0 integrates the storage and network of JDOS 1.0 and covers the CI/CD process from source code to image and finally to deployment. JDOS 2.0 also provides one-stop services such as logging, monitoring, troubleshooting, terminals, and orchestration. The platform architecture of JDOS 2.0 is shown below.

Function | Product
Source Code Management | GitLab
Container Tool | Docker
Container Networking | Cane
Container Engine | Kubernetes
Image Registry | Harbor
CI Tool | Jenkins
Log Management | Logstash + Elasticsearch
Monitor | Prometheus

In JDOS 2.0, we define two levels: system and application. A system consists of several applications, and an application consists of several Pods that provide the same service. In general, a department can apply for one or more systems, and each system corresponds directly to a Kubernetes namespace.
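This system-to-namespace mapping can be sketched in a few lines. The following is purely illustrative: the function and field names are hypothetical, not real JDOS 2.0 APIs, and it only shows how an application's Pods all land in their system's namespace.

```python
# Illustrative sketch of the JDOS 2.0 system/application model: a system
# maps to one Kubernetes namespace, and every Pod of an application in
# that system is placed in that namespace. All names are hypothetical.

def namespace_for(system: str) -> str:
    """Each system corresponds directly to one Kubernetes namespace."""
    return system.lower()

def pod_manifest(system: str, app: str, ordinal: int, image: str) -> dict:
    """Build a minimal Pod manifest for one instance of an application."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": f"{app}-{ordinal}",
            "namespace": namespace_for(system),
            "labels": {"system": system, "app": app},
        },
        "spec": {"containers": [{"name": app, "image": image}]},
    }

# Two Pods of the same application share their system's namespace.
p0 = pod_manifest("Trade", "order-service", 0, "harbor.local/order:1.0")
p1 = pod_manifest("Trade", "order-service", 1, "harbor.local/order:1.0")
```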
This means that the Pods of the same system are all in the same namespace. Most of the JDOS 2.0 components (GitLab, Jenkins, Harbor, Logstash, Elasticsearch, Prometheus) are themselves containerized and deployed on the Kubernetes platform.

One Stop Solution

JDOS 2.0 puts the Docker image at the core of continuous integration and continuous deployment:

1. The developer pushes code to Git.
2. Git triggers the Jenkins master to generate a build job.
3. The Jenkins master invokes Kubernetes to create a Jenkins slave Pod.
4. The Jenkins slave pulls the source code, compiles, and packages.
5. The Jenkins slave sends the package and the Dockerfile to an image build node running Docker.
6. The image build node builds the image.
7. The image build node pushes the image to the image registry, Harbor.
8. The user creates or updates application Pods in the different zones.

The Docker image in JDOS 1.0 consisted primarily of the operating system and the runtime software stack of the application, so application deployment still depended on auto-deployment and other tools. In JDOS 2.0, deployment is done during image building, and the image contains the complete software stack, including the application itself. With such an image, we can run the application as designed in any environment.

Networking and External Service Load Balancing

JDOS 2.0 keeps the network solution of JDOS 1.0, which is implemented with the VLAN model of OpenStack Neutron. This solution enables highly efficient communication between containers, making it ideal for a cluster environment within a company. Each Pod occupies a port in Neutron, with its own IP. Based on the Container Network Interface (CNI) standard, we developed a new project, Cane, to integrate the kubelet with Neutron. Cane is also responsible for managing LoadBalancers for Kubernetes Services.
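Conceptually, this kind of controller watches Service events and translates LoadBalancer Services into load-balancer API calls. Below is a toy, in-memory sketch of that idea; Cane's real interfaces and the Neutron LBaaS client are not public, so the class and method names here are stand-ins, not actual APIs.

```python
# Toy model of a Service-to-LBaaS reconciler: LoadBalancer Services are
# created/removed in a (fake) Neutron LBaaS backend; other Service types
# are ignored. All interfaces here are hypothetical stand-ins.

class FakeNeutronLBaaS:
    """Stand-in for the Neutron LBaaS API: tracks load balancers by name."""
    def __init__(self):
        self.loadbalancers = {}

    def create(self, name, vip):
        self.loadbalancers[name] = vip

    def delete(self, name):
        self.loadbalancers.pop(name, None)

def reconcile(event, service, lbaas):
    """Map one Service watch event onto an LBaaS call."""
    if service.get("type") != "LoadBalancer":
        return  # ClusterIP and other Services need no external LB
    name = service["name"]
    if event in ("ADDED", "MODIFIED"):
        lbaas.create(name, service.get("vip", ""))
    elif event == "DELETED":
        lbaas.delete(name)

lbaas = FakeNeutronLBaaS()
reconcile("ADDED", {"name": "web", "type": "LoadBalancer", "vip": "10.0.0.8"}, lbaas)
reconcile("ADDED", {"name": "db", "type": "ClusterIP"}, lbaas)      # ignored
reconcile("DELETED", {"name": "web", "type": "LoadBalancer"}, lbaas)
```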
When a LoadBalancer is created, deleted, or modified, Cane calls the corresponding create, remove, or modify interface of the LBaaS service in Neutron. In addition, the Hades component in the Cane project provides an internal DNS resolution service for the Pods. The source code of the Cane project is currently being finalized and will be released on GitHub soon.

Flexible Scheduling

JDOS 2.0 hosts many types of applications, including big data, web applications, and deep learning, and takes more diverse and flexible scheduling approaches. In some IDCs, we experimented with mixed deployment of online and offline tasks; compared to JDOS 1.0, overall resource utilization increased by about 30%.

Summary

The rich functionality of Kubernetes allows us to pay more attention to the entire ecosystem of the platform, such as network performance, rather than the platform itself. In particular, our SREs highly appreciated the replication controller: with it, applications can be scaled in several seconds. JDOS 2.0 now hosts about 20% of our applications, with 2 clusters and about 20,000 Pods running daily. We plan to onboard more of the company’s applications and replace the current JDOS 1.0, and we are glad to share our experience from this process with the community.

Thank you to all the contributors to Kubernetes and the other open source projects.

–Infrastructure Platform Department team at JD.com

Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Download Kubernetes
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for the latest updates
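The replication-controller scaling mentioned above is fast because scaling only changes a desired replica count, which a control loop then reconciles by creating or deleting Pods. A minimal in-memory sketch of that reconciliation idea (not the actual Kubernetes controller code):

```python
# Purely illustrative model of replica reconciliation: a control loop
# adjusts the set of Pods until it matches the desired count. This is an
# in-memory sketch, not real Kubernetes controller code.

def reconcile_replicas(pods, desired, app):
    """Return the Pod list adjusted to the desired replica count."""
    pods = list(pods)
    while len(pods) < desired:            # scale up: create missing Pods
        pods.append(f"{app}-{len(pods)}")
    while len(pods) > desired:            # scale down: delete surplus Pods
        pods.pop()
    return pods

pods = reconcile_replicas([], desired=3, app="web")    # initial rollout
pods = reconcile_replicas(pods, desired=5, app="web")  # scale up
pods = reconcile_replicas(pods, desired=2, app="web")  # scale down
```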
Source: kubernetes
