Docker Weekly Roundup | October 9, 2016

 

It’s time for your weekly roundup! Get caught up on the top news, including the expansion into China through a commercial partnership with Alibaba Cloud, the announcement of DockerCon 2017, and information on the upcoming Global Mentor Week. As we begin a new week, let’s recap the top five most-read stories of the week of October 9, 2016:

Alibaba Cloud Partnership: Docker expands into the China market through a new partnership with the Alibaba Group, the world’s largest retail commerce group. The focus of the partnership is to provide a China-based Docker Hub, enable Alibaba to resell Docker’s commercial offerings, and create a “Docker for Alibaba Cloud”.

DockerCon 2017: a three-day conference organized by Docker. This year’s US edition will take place in Austin, TX and continue to build on the success of previous events as it grows and reflects Docker’s established ecosystem and ever-growing community.

Global Mentor Week is a global event series aimed at providing Docker training to both newcomers and intermediate users. Participants will work through self-paced labs that will be available through an online Learning Management System (LMS). There will be different labs for different skill levels: Developers, Ops, and Linux and Windows users.

Docker on Windows: check out this blog from Elton Stoneman with three tips for setting a solid foundation and improving the Docker on Windows experience.

SQL Server 2016 became publicly available this week, and SQL Server 2016 Express Edition in Windows Containers is now available on Docker Hub. In addition, the build scripts are hosted on the SQL Server Samples GitHub repository, and the image can be used in both Windows Server Containers and Hyper-V Containers.


The post Docker Weekly Roundup | October 9, 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Distributed System Summit videos & podcast episodes

Following LinuxCon Europe in Berlin last week, we organized a first-of-its-kind Docker event called the Docker Distributed Systems Summit. This two-day event was an opportunity for core Docker engineers and Docker experts from the community to learn, collaborate, problem-solve and hack around the next generation of distributed systems in areas such as orchestration, networking, security and storage.

More specifically, the goal of the summit was to dive deep into Docker’s infrastructure plumbing tools and internals: SwarmKit, InfraKit, Hyperkit, Notary, libnetwork, IPVS, Raft, TUF and provide attendees with the working knowledge of how to leverage these tools while building their own systems.
We’re happy to share all the video recordings, slides and audio files, available as podcast episodes!
Youtube playlist

Podcast playlist

All the slides from the summit are available on the official Docker slideshare account.
 
Please join us in giving a big shout out to our awesome speakers for creating and presenting the following projects:
 

InfraKit: A toolkit for creating and managing declarative, self-healing infrastructure

Speakers: Bill Farner and David Chung (Docker)
GitHub repo, Slides, video, podcast and Liveblogging

Heart of the SwarmKit: Store, Topology & Object Model

Speakers: Aaron Lehmann, Andrea Luzzardi and Stephen Day (Docker)
GitHub Repo, Slides, video, podcast and Liveblogging

Persistent storage tailored for containers

Speaker: Quentin Hocquet (CTO at Infinit)
GitHub repo, Slides, video, podcast and Liveblogging

Prometheus: Design and Philosophy

Speaker: Julius Volz, @juliusvolz (Author of Prometheus)
GitHub repo, Slides, video, podcast and Liveblogging

Talking TUF: Securing Software Distribution

Speaker: Justin Cappos (Professor at New York University)
GitHub repo, Slides, video, podcast and Liveblogging

Orchestrating Least Privilege

Speaker: Diogo Monica (Docker)
GitHub Repo, Slides, video, podcast and Liveblogging

Cilium: BPF & XDP for containers

Speaker: Thomas Graf (Principal at Noiro Networks)
GitHub repo, Slides, video, podcast and Liveblogging

Docker Networking: Control Plane and Data Plane

Speakers: Madhu Venugopal and Jana Radhakrishnan (Docker)
GitHub repo, Slides, video, podcast and Liveblogging

Unikernels: the rise of the library hypervisor in MirageOS

Speakers: Anil Madhavapeddy and Martin Lucina (Docker)
GitHub repo, Slides, video, podcast and Liveblogging

 
The Docker team would also like to extend a huge thank you to everyone who attended the Summit in Berlin last week. The event was a success because of the amazing participation and energy of the community.
 

Thanks for this wonderful @docker team! Lot ´s of great ppl, great detailled tech infos, I learn so many things! See you soon! pic.twitter.com/oDrSs6XATH
— Julien Maitrehenry (@jmaitrehenry) October 9, 2016

 

Great talks, deep-dive content, really enjoyable ppl, Thks @docker 4 t dockersummit Berlin, bye-bye till next year pic.twitter.com/TgitFaieWQ
— grealish (@grealish) October 8, 2016

 

The post Docker Distributed System Summit videos & podcast episodes appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Announcing Docker Global Mentor Week 2016

Building on the success of the Docker Birthday Celebration and Training events earlier this year, we’re excited to announce Docker Global Mentor Week. This global event series aims to provide Docker training to both newcomers and intermediate Docker users. More advanced users will have the opportunity to get involved as mentors to further encourage connection and collaboration within the community.

Docker Global Mentor Week is your opportunity to either learn Docker or help others learn Docker. Participants will work through self-paced labs that will be available through an online Learning Management System (LMS). We’ll have different labs for beginners and intermediate users, Developers and Ops, and Linux or Windows users.
Are you an advanced Docker user?
We are recruiting a network of mentors to help guide learners as they work through the labs. Mentors will be invited to attend local events to help answer questions attendees may have while completing the self-paced beginner and intermediate labs. To help mentors prepare for their events, we’ll be sharing the content of the labs and hosting a Q&A session with the Docker team before the start of Global Mentor Week.
 
Sign up as a Mentor!
 
With over 250 Docker Meetup groups worldwide, there is always an opportunity for collaboration and knowledge sharing. With the launch of Global Mentor Week, Docker is also introducing a Sister City program to help create and strengthen partnerships between local Docker communities which share similar challenges.
Docker NYC Organiser Jesse White talks about their collaboration with Docker London:
“Having been a part of the Docker community ecosystem from the beginning, it’s thrilling for us at Docker NYC to see the community spread across the globe. As direct acknowledgment and support of the importance of always reaching out and working together, we’re partnering with Docker London to capture the essence of what’s great about Docker Global Mentor Week. We’ll be creating a transatlantic, volunteer-based partnership to help get the word out, collaborate on and develop training materials, and boost the recruitment of mentors. If we’re lucky, we might get some international dial-in and mentorship at each event too!”
If you’re part of a community group for a specific programming language, an open source software project, CS students at a local university, a coding institution or an organization promoting inclusion in the larger tech community, and you’re interested in learning about Docker, we’d love to partner with you. Please email us at meetups@docker.com for more information about next steps.
We’re thrilled to announce that there are already 37 events scheduled around the world! Check out the list of confirmed events below to see if there is one happening near you. Make sure to check back, as we’ll be updating this list as more events are announced. Want to help us organize a Mentor Week training in your city? Email us at meetups@docker.com for more information!
 
Saturday, November 12th

New Delhi, India

Sunday, November 13th

Mumbai, India

Monday, November 14th

Auckland, New Zealand
London, United Kingdom
Mexico City, Mexico
Orange County, CA

Tuesday, November 15th

Atlanta, GA
Austin, TX
Brussels, Belgium
Denver, CO
Jakarta, Indonesia
Las Vegas, NV
Medan, Indonesia
Nice, France
Singapore, Singapore

Wednesday, November 16th

Århus, Denmark
Boston, MA
Dhahran, Saudi Arabia
Hamburg, Germany
Novosibirsk, Russia
San Francisco, CA
Santa Barbara, CA
Santa Clara, CA
Washington, D.C.
Rio de Janeiro, Brazil

Thursday, November 17th

Berlin, Germany
Budapest, Hungary
Glasgow, United Kingdom
Lima, Peru
Minneapolis, MN
Oslo, Norway
Richmond, VA

Friday, November 18th

Kanpur, India
Tokyo, Japan

Saturday, November 19th

Ha Noi, Vietnam
Mangaluru, India
Taipei, Taiwan

Excited about Docker Global Mentor Week? Let your community know!


The post Announcing Docker Global Mentor Week 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Announces Expansion To China Through Commercial Partnership with Alibaba Cloud

The containerization movement fueled by Docker has extended across all geographic boundaries since the very beginning. Some of Docker’s earliest success stories were from Chinese, web-scale companies running Docker in production before Docker had released its 1.0 version. Additionally, through the grassroots efforts of the development community, we have thriving Docker Meetups in 20 of China’s largest cities. This is a testament to the innovative spirit within the Chinese developer community, because the ability to deliver great community content from Docker Hub has been highly constrained. That is why a partnership with China’s largest public cloud provider is so significant. Docker, in concert with Alibaba Cloud, is going to deliver a China-based instance of Docker Hub to ensure optimal access and performance for the thousands of Dockerized images that will serve as the foundation of a new generation of distributed applications in China.
In addition to formally providing Dockerized content on Docker Hub to China, Docker is commercially partnering with Alibaba to address the substantial demand for running enterprise applications in containers. A June 2016 Alibaba Cloud survey indicates that more than 80% of respondents are already using or plan to use containers. Together, Alibaba Cloud and Docker will make it easier for organizations of all sizes to containerize legacy applications, accelerate their digital transformations and build new microservices. Through this commercial partnership with Alibaba Cloud, we look to serve the unique needs of global enterprises in China and to deepen our roots in the market.
Specifically, the commercial partnership entails:

Providing a China-based Docker Hub running on Alibaba Cloud for the distribution of thousands of Dockerized applications
Enabling Alibaba to resell Docker’s commercial offerings in China, including Docker Datacenter and Commercially Supported Docker Engine
Creating “Docker for Alibaba Cloud”, a configuration of Docker for Alibaba’s cloud created by Docker

Agility is key to innovation. By partnering with Alibaba Cloud to deliver a locally hosted Docker Hub, developers will enjoy significantly faster image downloads and UI response times. Development teams can now leverage Docker Hub to integrate source code management, build and QA tools. This enables users to reduce their commit-to-deploy cycle times from days to minutes, often enabling them to ship applications more frequently than before.
We are excited to advance the user experience for developers throughout China and to help unlock the innovation and creativity that will help transform the economy. In addition, we’ve added Alibaba Cloud to Docker’s growing list of supported cloud environments where Docker Datacenter can be easily and quickly installed. Our collaboration further enables application portability without sacrificing the security, policy and control that come with Docker Datacenter, an integrated platform where both developers and IT ops teams can meet to collaborate. This is another critical step towards enabling ‘write-once, run-anywhere’ apps that can be deployed on-premises or in the cloud. Overall, there is a vast opportunity in this collaboration between Docker and Alibaba Cloud, as China’s cloud spending is expected to increase roughly 5X over the next three years.


The post Docker Announces Expansion To China Through Commercial Partnership with Alibaba Cloud appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Announcing DockerCon 2017

The Docker Team is excited to announce that the next DockerCon will be held in Austin, Texas from April 17-20. For anyone not in an event planning role, finding a venue is always an adventure, and finding a venue for a unique event such as DockerCon adds an extra layer of complexity. After inquiring about over 15 venues and visiting 3 cities, we are confident that we have chosen a great venue for DockerCon 2017 and the Docker community.
DockerCon US 2017: Austin
April 17-20, 2017
Between the lively tech community, amazing restaurants and culture, Austin will be a natural fit for DockerCon. A diverse range of companies such as Dell, Whole Foods Market, Rackspace, HomeAway and many more of the hottest IT startups call Austin home. We can’t wait to welcome back many returning DockerCon alumni as well as open the DockerCon doors to so many new attendees and companies in the Austin area.
One of the most exciting additions to the DockerCon program is an extra day of content! We reviewed every attendee survey from Seattle in June, debriefed with Docker Captains and others in the community, and came to the overwhelming conclusion that two days was not enough time to get the most value out of the jam-packed DockerCon agenda. In 2017, we will introduce a third day of content that will repeat the top-voted sessions, give more time to complete Hands-on Labs and allow more time for other learning opportunities that are in the works.
Let’s get this party started!
Save the dates:

Monday April 17: Paid training, afternoon workshops and evening welcome reception
Tuesday April 18: DockerCon Day 1, After Party
Wednesday April 19: DockerCon Day 2
Thursday April 20: DockerCon Day 3, a half day of repeat top sessions, Hands-on Labs and workshops

Pre-register now for early bird pricing and we’ll send you an additional $50 discount code once DockerCon registration launches.
 
Pre-register for DockerCon
 
Calling all speakers!
We’re excited to hear about all of the interesting ways you’re using Docker. We’re looking for a variety of talks: cool and unique use cases, Docker hack projects, advanced technical talks, or maybe you have a great talk on tech culture. Check out our sample CFP proposals for DockerCon for more information on what the program committee looks for when reviewing a proposal, our tips for getting a proposal accepted, and previous talks from DockerCon 2016. Our Call for Proposals will be open November 17, 2016 through January 7, 2017.
Are you interested in learning more about sponsorship opportunities at DockerCon? Please sign up here to be among the first to receive the sponsorship prospectus.
 
Sponsor DockerCon
 
So, by now you’ve read this entire blog post and are shouting, “What about DockerCon Europe?!” The truth is that we spent many months searching for an available venue and were unable to secure a site for this year. The reality is that the conference industry is incredibly competitive, and we need to lock in venues further in advance. For this reason, we are now working on bringing DockerCon back to Europe in 2017. We will update the community as soon as we have concrete details.
 
About DockerCon
DockerCon 2017 is a three-day, Docker-centric conference organized by Docker. This year’s US edition will take place in Austin, TX and continue to build on the success of previous events as it grows and reflects Docker’s established ecosystem and ever-growing community. DockerCon will feature topics and content covering all aspects of Docker and will be suitable for Developers, DevOps, Ops, System Administrators and C-level executives. You will have ample opportunities to connect and learn about how others are using Docker. We’re confident that no matter your level of expertise with Docker or your company size, you’ll meet and learn from other attendees who share the same use cases and have overcome the same challenges using Docker.


The post Announcing DockerCon 2017 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Helm Charts: making it simple to package and deploy common applications on Kubernetes

There are thousands of people and companies packaging their applications for deployment on Kubernetes. This usually involves crafting a few different Kubernetes resource definitions that configure the application runtime, as well as defining the mechanism that users and other apps leverage to communicate with the application. There are some very common applications that users regularly look for guidance on deploying, such as databases, CI tools, and content management systems. These types of applications are usually not ones that are developed and iterated on by end users; rather, their configuration is customized to fit a specific use case. Once the application is deployed, users can link it to their existing systems or leverage its functionality to solve their pain points.

For best practices on how these applications should be configured, users could look at the many resources available, such as the examples folder in the Kubernetes repository, the Kubernetes contrib repository, the Helm Charts repository, and the Bitnami Charts repository. While these different locations provided guidance, it was not always formalized or consistent, such that users could leverage similar installation procedures across different applications.

So what do you do when there are too many places for things to be found? (xkcd: Standards)

In this case, we’re not creating Yet Another Place for Applications, rather promoting an existing one as the canonical location. As part of the Special Interest Group Apps (SIG Apps) work for the Kubernetes 1.4 release, we began to provide a home for these Kubernetes-deployable applications that provides continuous releases of well documented and user friendly packages. These packages are being created as Helm Charts and can be installed using the Helm tool. Helm allows users to easily templatize their Kubernetes manifests and provide a set of configuration parameters that allows users to customize their deployment.
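As a minimal sketch of this parameterization (the parameter names and defaults here are illustrative, not taken from an actual chart), a chart exposes a values.yaml of user-tunable defaults:

```yaml
# values.yaml -- defaults a chart user can override at install time
replicaCount: 1
image: "nginx:1.11"
serviceType: LoadBalancer
```

A template in the chart (e.g. templates/deployment.yaml) would then reference these as {{ .Values.replicaCount }} and {{ .Values.image }}, so the same chart can be customized per deployment without editing the manifests themselves.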
Helm is the package manager (analogous to yum and apt) and Charts are packages (analogous to debs and rpms). The home for these Charts is the Kubernetes Charts repository, which provides continuous integration for pull requests, as well as automated releases of Charts in the master branch. There are two main folders where charts reside. The stable folder hosts those applications which meet minimum requirements, such as proper documentation and inclusion of only Beta or higher Kubernetes resources. The incubator folder provides a place for charts to be submitted and iterated on until they’re ready for promotion to stable, at which time they will automatically be pushed out to the default repository. For more information on the repository structure and requirements for being in stable, have a look at this section in the README.

The following applications are now available:

Stable repository: Drupal, Jenkins, MariaDB, MySQL, Redmine, Wordpress
Incubating repository: Consul, Elasticsearch, etcd, Grafana, MongoDB, Patroni, Prometheus, Spark, ZooKeeper

Example workflow for a Chart developer:

1. Create a chart.
2. Provide parameters via the values.yaml file, allowing users to customize their deployment. This can be seen as the API between chart developers and chart users.
3. Write a README to help describe the application and its parameterized values.
4. Once the application installs properly and the values customize the deployment appropriately, add a NOTES.txt file that is shown as soon as the user installs. This file generally points out the next steps for the user to connect to or use the application.
5. If the application requires persistent storage, add a mechanism to store the data such that pod restarts do not lose data. Most charts requiring this today use dynamic volume provisioning to abstract underlying storage details away from the user, allowing a single configuration to work against any Kubernetes installation.
6. Submit a Pull Request to the Kubernetes Charts repo.
Once tested and reviewed, the PR will be merged. Once merged to the master branch, the chart will be packaged and released to Helm’s default repository, where it is available for users to install.

Example workflow for a Chart user:

1. Install Helm.
2. Initialize Helm.
3. Search for a chart:

   $ helm search
   NAME              VERSION  DESCRIPTION
   stable/drupal     0.3.1    One of the most versatile open source content m...
   stable/jenkins    0.1.0    A Jenkins Helm chart for Kubernetes.
   stable/mariadb    0.4.0    Chart for MariaDB
   stable/mysql      0.1.0    Chart for MySQL
   stable/redmine    0.3.1    A flexible project management web application.
   stable/wordpress  0.3.0    Web publishing platform for building blogs and ...

4. Install the chart:

   $ helm install stable/jenkins

5. After the install, follow the NOTES printed by the chart:

   1. Get your 'admin' user password by running:
      printf $(printf '%o' `kubectl get secret --namespace default brawny-frog-jenkins -o jsonpath="{.data.jenkins-admin-password[*]}"`);echo
   2. Get the Jenkins URL to visit by running these commands in the same shell:
      **** NOTE: It may take a few minutes for the LoadBalancer IP to be available.        ****
      ****       You can watch the status by running 'kubectl get svc -w brawny-frog-jenkins' ****
      export SERVICE_IP=$(kubectl get svc --namespace default brawny-frog-jenkins -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      echo http://$SERVICE_IP:8080/login
   3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit here.

Conclusion

Now that you’ve seen workflows for both developers and users, we hope that you’ll join us in consolidating the breadth of application deployment knowledge into a more centralized place. Together we can raise the quality bar for both developers and users of Kubernetes applications. We’re always looking for feedback on how we can better our process. Additionally, we’re looking for contributions of new charts or updates to existing ones.
Join us in the following places to get engaged:

SIG Apps – Slack Channel
SIG Apps – Weekly Meeting
Submit a Kubernetes Charts Issue

A big thank you to the folks at Bitnami, Deis, Google and the other contributors who have helped get the Charts repository to where it is today. We still have a lot of work to do, but it’s been wonderful working together as a community to move this effort forward.

– Vic Iglesias, Cloud Solutions Architect, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for the latest updates
Quelle: kubernetes

Docker Weekly Roundup | October 2, 2016

 

This week, our readers enjoyed some big news, including the release of InfraKit, a toolkit for declarative infrastructure, a Windows 10 container guide, and a new open source project Image2Docker. As we begin a new week, let’s recap our top 5 most-read stories for the week of October 2, 2016:

InfraKit is a new declarative management toolkit for orchestrating infrastructure. InfraKit’s simple, pluggable components declare a desired infrastructure state, then actively monitor the infrastructure and automatically reconcile it with that state.

Windows Server Container guide is designed to get you set up to run Docker Windows containers on Windows 10 or in a Windows Server 2016 VM.

Docs Repo On GitHub is a consolidation of all Docker documentation into a single GitHub Pages-based repository. All documentation for Docker projects is now open sourced, making it easier than ever to contribute to and stage the public docs.

Image2Docker is a new tool for prototyping the conversion of Windows VMs into containers. The PowerShell module points at a virtual hard disk image, scans for common Windows components and suggests a Dockerfile.

Docker Compose Story by Ajeet Raina covers Compose, a tool for defining and running multi-container Docker applications. Applications are defined in a YAML file where all the options used in `docker run` are captured, allowing users to manage an application as a single entity rather than as individual containers.
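The Compose model described above can be sketched in a minimal docker-compose.yml; the service names and images here are illustrative, not from the original post:

```yaml
version: '2'
services:
  web:
    image: nginx:1.11        # same options you would pass to `docker run`
    ports:
      - "80:80"              # host:container port mapping
    depends_on:
      - db                   # bring up db before web
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example
```

With this file in place, `docker-compose up -d` starts both containers and `docker-compose down` removes them, treating the pair as a single application.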


The post Docker Weekly Roundup | October 2, 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Top 5 Docker Questions from Microsoft Ignite

Last week was busy for the team at Microsoft Ignite in Atlanta. With the exciting announcement about the next evolution of the Docker and Microsoft relationship and the availability of Docker for Windows Server 2016 workloads, the show floor, general session, keynotes and breakout sessions were all abuzz about Docker for Windows. Whether you attended or not, we want to make sure you didn’t miss a thing. Here are the key announcements from this year’s Microsoft Ignite:

Docker Doubles Container Market with Support for Windows Workloads
Availability of Docker For Windows Server 2016
Docker Commercially Supported Docker Engine available in Windows Server 2016

Cool @VisualStudio and @docker integration being demoed by @shanselman at auto creation of Dockerfiles & debug inside containers. pic.twitter.com/HVDHKmwRrL
— Marcus Robinson (@techdiction) September 26, 2016

Wow @Docker engine included with all Server 2016 deployments. MSIgnite
— Joe Kelly (@_JoeKelly_) September 26, 2016

 
Here are the top 5 questions heard in the Docker booth:

What are containers?

Container technology has been around for more than a decade, but as the leader in the containerization market, Docker has made the technology usable and accessible to all developers and sysadmins. Containers allow developers and IT pros to package an application into a standardized unit for software development, making it highly portable and able to run across any operating system. Each container contains a complete filesystem with everything needed to run: code, runtime, system tools, system libraries; essentially, anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment, without any changes to the underlying code. Docker containers were previously only available to the Linux community; with the announcement of Docker for Windows Server 2016, Docker containers are now available for Windows workloads, addressing 98% of enterprise workloads.
 

How is this different than App-V Application Virtualization?

Those in the Windows OS world are familiar with Microsoft App-V or ThinApp, and naturally there were questions comparing them to Docker containers. Application virtualization is used to package a full application with the relevant OS libraries into a single executable. Docker, by contrast, is a set of tooling used to build server-based applications; a single application could be comprised of one or hundreds of containers connected together. App-V is used for desktop applications and is not designed for server-based applications. The most common example is packaging browsers with extensions so they can access custom web apps; each App-V package can reside on a laptop with different extensions, plugins, etc. To learn more about application virtualization and Docker, read our blog: There’s Application Virtualization and There’s Docker
 

How do I get started with Docker for Windows Server?

Integrating Visual Studio Tools for Docker and Docker for Windows provides desktop development environments for building Dockerized Windows apps. Getting started is easy, and we have the tools you need in a few simple steps:

Pick your tool:

The latest Anniversary update for Windows 10 offers containerization support for the Windows 10 kernel.
To run Windows containers in production at scale, download a free evaluation version Windows Server 2016 and install it on bare metal or in a VM running on Hyper-V, VirtualBox or similar.

Install a Windows Docker Engine with the Docker for Windows public beta.
Run your first Windows Container in just a few steps with the instructions listed on the “Getting Started with Docker for Windows” webpage.
Create your own Dockerfile with our Image2Docker tool, a PowerShell module that points at a virtual hard disk image, scans for common Windows components and suggests a Dockerfile. Read the blog to learn more and get started.

For a complete list of instructions, read our blog post Build And Run Your First Docker Windows Server Container, and view Windows Server container base images and applications from Microsoft on Docker Hub.
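For illustration, a minimal Windows Dockerfile might look like the sketch below; the application folder and entry point are hypothetical, while the base image is Microsoft’s public windowsservercore image on Docker Hub:

```dockerfile
# Windows Server Core base image published by Microsoft
FROM microsoft/windowsservercore

# Copy a hypothetical application folder into the image
COPY app C:/app

# Launch the app's (hypothetical) entry point when the container starts
CMD ["cmd", "/S", "/C", "C:\\app\\run.bat"]
```

Building with `docker build -t my-windows-app .` and running with `docker run my-windows-app` works the same way as on Linux, which is the point of the shared Docker tooling.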
 

How do I manage containers?

Docker Datacenter is the integrated container orchestration and management platform for IT Pros. Today Docker Datacenter is available on Azure to manage Linux application environments. With the availability of Windows Server 2016 and Docker Engine, we are planning for a beta in Q4 2016 of Docker Datacenter management for Windows Server based applications.  Sign up here to be notified of the beta.
 

Where can I learn more?

There are lots of great resources and sessions to help you learn more. Whether you attended the conference or watched online, here’s a wrap-up of the top five sessions from Microsoft Ignite:

General Session with Scott Guthrie, EVP Cloud and Enterprise at Microsoft and Daryll Fogal CTO at Tyco

 

Keynote: “Reinvent IT infrastructure for business agility” with Jason Zander, CVP Microsoft Azure and Ben Golub, CEO of Docker

 

Breakout sessions:

Walk the path to containerization – transforming workloads into containers
Accelerate application delivery with Docker Containers and Windows Server 2016
Dive into the new world of Windows Server and Hyper-V Containers


Resources

Learn more about Docker on Windows Server
Sign up to be notified of GA and the Docker Datacenter for Windows Beta
Register for a webinar: Docker for Windows Server
Learn more about the Docker and Microsoft partnership

The post Top 5 Docker Questions from Microsoft Ignite appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

How To Dockerize Vendor Apps like Confluence

Docker Datacenter customer Shawn Bower of Cornell University recently shared how containerizing Confluence marked the start of Cornell’s Docker journey.
Through that project, the team demonstrated a 10X savings in application maintenance, reduced the time to build a disaster recovery plan from days to 30 minutes, and improved the security profile of their Confluence deployment. This change allowed the Cloudification team that Shawn leads to spend the majority of its time helping Cornellians use technology to innovate.
Since the original blog was posted, there have been a lot of requests for the pragmatic details of how Cornell actually did this project. In the post below, Shawn provides detailed instructions on how Confluence is containerized and how the Docker workflow is integrated with Puppet.

Written by Shawn Bower
As we started our journey to move Confluence to the cloud using Docker, we were emboldened by the following post from Atlassian. We use many of the Atlassian products and love how well integrated they are. In this post I will walk you through the process we used to get Confluence into a container and running.
First we needed to craft a Dockerfile. At Cornell we use image inheritance, which enables our automated patching and security scanning process. We start with the canonical ubuntu image: https://hub.docker.com/_/ubuntu/ and then build on defaults used here at Cornell. Our base image is publicly available on GitHub here: https://github.com/CU-CommunityApps/docker-base.
Let’s take a look at the Dockerfile.
FROM ubuntu:14.04

# File Author / Maintainer
MAINTAINER Shawn Bower <my email address>

# Install.
RUN \
  apt-get update && apt-get install --no-install-recommends -y \
    build-essential \
    curl \
    git \
    unzip \
    vim \
    wget \
    ruby \
    ruby-dev \
    clamav-daemon \
    openssh-client && \
  rm -rf /var/lib/apt/lists/*

RUN rm /etc/localtime
RUN ln -s /usr/share/zoneinfo/America/New_York /etc/localtime

# Clamav stuff
RUN freshclam -v && \
  mkdir /var/run/clamav && \
  chown clamav:clamav /var/run/clamav && \
  chmod 750 /var/run/clamav

COPY conf/clamd.conf /etc/clamav/clamd.conf

RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc && \
  gem install json_pure -v 1.8.1 && \
  gem install puppet -v 3.7.5 && \
  gem install librarian-puppet -v 2.1.0 && \
  gem install hiera-eyaml -v 2.1.0

# Set environment variables.
ENV HOME /root

# Define working directory.
WORKDIR /root

# Define default command.
CMD ["bash"]

At Cornell we use Puppet for configuration management, so we bake that directly into our base image. We do a few other things like setting the timezone and installing the clamav agent, as we have some applications that use it for virus scanning. We have an automated project in Jenkins that pulls the latest ubuntu:14.04 image from Docker Hub and then builds this base image every weekend. Once the base image is built, we tag it with 'latest' and a timestamp tag, and automatically push it to our local Docker Trusted Registry. This allows the brave to pull in patches continuously while allowing others to pin to a specific version until they are ready to migrate. From that image we create a base Java image which installs Oracle's JVM.
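The weekend rebuild-and-retag flow described above can be sketched as a small script. The registry hostname is a hypothetical placeholder, and the docker commands are echoed rather than executed so the sketch stands alone:

```shell
# Sketch of the weekend Jenkins job: pull the latest upstream image, rebuild
# the base image, tag it both 'latest' and with a timestamp, and push both.
# 'dtr.example.cornell.edu' is a made-up DTR hostname; 'echo' stands in for
# the real docker invocations.
REGISTRY="dtr.example.cornell.edu"
TS_TAG="$(date +%Y%m%d)"   # timestamp tag, e.g. 20161009

for cmd in \
  "docker pull ubuntu:14.04" \
  "docker build -t ${REGISTRY}/cs/base:${TS_TAG} ." \
  "docker tag ${REGISTRY}/cs/base:${TS_TAG} ${REGISTRY}/cs/base:latest" \
  "docker push ${REGISTRY}/cs/base:${TS_TAG}" \
  "docker push ${REGISTRY}/cs/base:latest"
do
  echo "$cmd"
done
```

The dual tagging is what lets consumers choose between continuous patching (`latest`) and pinning (the timestamp tag).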
The Dockerfile is available here and explained below.
# Pull base image from our Docker Trusted Registry.
FROM <DTR repo path>/cs/base

# Install Java.
RUN \
  apt-get update && \
  apt-get -y install software-properties-common && \
  add-apt-repository ppa:webupd8team/java -y && \
  apt-get update && \
  echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections && \
  apt-get install -y oracle-java8-installer && \
  apt-get install -y oracle-java8-set-default && \
  rm -rf /var/lib/apt/lists/*

# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle

# Define working directory.
WORKDIR /data

# Define default command.
CMD ["bash"]

The same automated patching process is followed for the Java image as with the base image. The Java image is automatically built after the base image and tagged accordingly, so there is a matching set of base and java8 images. Now that we have our Java image we can layer on Confluence. Our Confluence repository is private, but the important bits of the Dockerfile are below.
FROM <DTR repo path>/cs/java8

# Configuration variables.
ENV CONF_HOME     /var/local/atlassian/confluence
ENV CONF_INSTALL  /usr/local/atlassian/confluence
ENV CONF_VERSION  5.8.18

ARG environment=local

# Install Atlassian Confluence and helper tools and setup initial home
# directory structure.
RUN set -x \
  && apt-get update --quiet \
  && apt-get install --quiet --yes --no-install-recommends libtcnative-1 xmlstarlet \
  && apt-get clean \
  && mkdir -p                "${CONF_HOME}" \
  && chmod -R 700            "${CONF_HOME}" \
  && chown daemon:daemon     "${CONF_HOME}" \
  && mkdir -p                "${CONF_INSTALL}/conf" \
  && curl -Ls                "http://www.atlassian.com/software/confluence/downloads/binary/atlassian-confluence-${CONF_VERSION}.tar.gz" | tar -xz --directory "${CONF_INSTALL}" --strip-components=1 --no-same-owner \
  && chmod -R 700            "${CONF_INSTALL}/conf" \
  && chmod -R 700            "${CONF_INSTALL}/temp" \
  && chmod -R 700            "${CONF_INSTALL}/logs" \
  && chmod -R 700            "${CONF_INSTALL}/work" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/conf" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/temp" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/logs" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/work" \
  && echo -e                 "\nconfluence.home=$CONF_HOME" >> "${CONF_INSTALL}/confluence/WEB-INF/classes/confluence-init.properties" \
  && xmlstarlet              ed --inplace \
       --delete              "Server/@debug" \
       --delete              "Server/Service/Connector/@debug" \
       --delete              "Server/Service/Connector/@useURIValidationHack" \
       --delete              "Server/Service/Connector/@minProcessors" \
       --delete              "Server/Service/Connector/@maxProcessors" \
       --delete              "Server/Service/Engine/@debug" \
       --delete              "Server/Service/Engine/Host/@debug" \
       --delete              "Server/Service/Engine/Host/Context/@debug" \
       "${CONF_INSTALL}/conf/server.xml"

# bust cache
ADD version /version

# RUN Puppet
WORKDIR /
COPY Puppetfile /
COPY keys/ /keys

RUN mkdir -p /root/.ssh/ && \
  cp /keys/id_rsa /root/.ssh/id_rsa && \
  chmod 400 /root/.ssh/id_rsa && \
  touch /root/.ssh/known_hosts && \
  ssh-keyscan github.com >> /root/.ssh/known_hosts && \
  librarian-puppet install && \
  puppet apply --modulepath=/modules --hiera_config=/modules/confluence/hiera.yaml \
    --environment=${environment} -e "class { 'confluence::app': }" && \
  rm -rf /modules && \
  rm -rf /Puppetfile* && \
  rm -rf /root/.ssh && \
  rm -rf /keys

USER daemon:daemon

# Expose default HTTP connector port.
EXPOSE 8080

VOLUME ["/opt/atlassian/confluence/logs"]

# Set the default working directory as the installation directory.
WORKDIR /var/atlassian/confluence

# Run Atlassian Confluence as a foreground process by default.
CMD ["/opt/atlassian/confluence/bin/catalina.sh", "run"]

We bring down the install media from Atlassian, explode it into the install path, and do a bit of cleanup on some of the XML configs. We use the Docker build cache for that part of the process because it does not change often. After the Confluence installation we bust the cache by adding a version file which changes each time the build runs in Jenkins. This ensures that Puppet will run in the container and configure the environment. Puppet is used to lay down environment-specific (dev, test, prod, etc.) configuration, selected via a Docker build argument called 'environment.' This allows us to bake everything needed to run Confluence into the image so we can launch it on any machine with no extra configuration. Whether to store the configuration in the image or outside is a contested subject for sure, but our decision was to store all configurations directly in the image. We believe this ensures the highest level of portability.
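Baking one fully configured image per environment then comes down to passing a different value for the 'environment' build argument at build time. A minimal sketch (the image name is a hypothetical placeholder, and the docker command is echoed rather than executed):

```shell
# Build one image per environment. --build-arg overrides the Dockerfile's
# 'ARG environment=local' default, so Puppet applies the matching hiera data
# during the build. The registry/repo name is made up for illustration.
for env in dev test prod; do
  echo docker build --build-arg "environment=${env}" \
    -t "dtr.example.cornell.edu/cs/confluence:${env}" .
done
```

Each resulting image needs no runtime configuration, which is what makes them portable across hosts.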
Here are some general rules we follow with Docker:

Use base images that are a part of the automated patching
Follow Dockerfile best practices
Keep the base infrastructure in a Dockerfile, and environment specific information in Puppet
Build one process per container
Keep all components of the stack in one repository
If the stack has multiple components (e.g., Apache and Tomcat) they should live in the same repository
Use subdirectories for each component
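The last two rules above can be illustrated with a hypothetical repository layout, one subdirectory (and Dockerfile) per component of the stack:

```shell
# Hypothetical layout for a two-component stack kept in one repository:
# each component gets its own subdirectory with its own Dockerfile.
mkdir -p confluence-stack/apache confluence-stack/tomcat
touch confluence-stack/apache/Dockerfile
touch confluence-stack/tomcat/Dockerfile
find confluence-stack -type f | sort
```

Keeping the whole stack in one repository means one checkout, one version, and one CI job per application.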

I hope you enjoyed this post and that it gets you containerizing some vendor apps. This is just the beginning: we recently moved a legacy ColdFusion app into Docker, and almost anything can probably be containerized!

Tips on how to dockerize @atlassian @Confluence by @Cornell's @drizzt51Click To Tweet

More Resources

Try Docker Datacenter free for 30 days
Learn more about Docker Datacenter
Read the blog post – It all started with containerizing Confluence at Cornell
Watch the webinar featuring Shawn and Docker at Cornell

The post How To Dockerize Vendor Apps like Confluence appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Dynamic Provisioning and Storage Classes in Kubernetes

Storage is a critical part of running containers, and Kubernetes offers some powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. This feature was introduced as alpha in Kubernetes 1.2, and has been improved and promoted to beta in the latest release, 1.4. This release makes dynamic provisioning far more flexible and useful.

What's New?

The alpha version of dynamic provisioning only allowed a single, hard-coded provisioner to be used in a cluster at once. This meant that when Kubernetes determined storage needed to be dynamically provisioned, it always used the same volume plugin to do provisioning, even if multiple storage systems were available on the cluster. The provisioner to use was inferred based on the cloud environment – EBS for AWS, Persistent Disk for Google Cloud, Cinder for OpenStack, and vSphere Volumes on vSphere. Furthermore, the parameters used to provision new storage volumes were fixed: only the storage size was configurable. This meant that all dynamically provisioned volumes would be identical, except for their storage size, even if the storage system exposed other parameters (such as disk type) for configuration during provisioning.

Although the alpha version of the feature was limited in utility, it allowed us to "get some miles" on the idea, and helped determine the direction we wanted to take.

The beta version of dynamic provisioning, new in Kubernetes 1.4, introduces a new API object, StorageClass. Multiple StorageClass objects can be defined, each specifying a volume plugin (aka provisioner) to use to provision a volume and the set of parameters to pass to that provisioner when provisioning. This design allows cluster administrators to define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. It also ensures that end users don't have to worry about the complexity and nuances of how storage is provisioned, while still having the ability to select from multiple storage options.

How Do I Use It?

Below is an example of how a cluster administrator would expose two tiers of storage, and how a user would select and use one. For more details, see the reference and example docs.

Admin Configuration

The cluster admin defines and deploys two StorageClass objects to the Kubernetes cluster:

kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard

This creates a storage class called "slow" which will provision standard-disk-like Persistent Disks.

kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

This creates a storage class called "fast" which will provision SSD-like Persistent Disks.
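One of these classes can additionally be marked as the cluster-wide default, using the annotation described under Defaulting Behavior below. A sketch, assuming the administrator wants "fast" as the default:

```yaml
# Marking the "fast" class as the cluster default (beta annotation).
# Claims created without a storage-class annotation will then use this class.
kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: fast
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```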
User Request

Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. For the beta version of this feature, this is done via the volume.beta.kubernetes.io/storage-class annotation. The value of this annotation must match the name of a StorageClass configured by the administrator.

To select the "fast" storage class, for example, a user would create the following PersistentVolumeClaim:

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "fast"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "30Gi"
      }
    }
  }
}

This claim will result in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume will be destroyed.

Defaulting Behavior

Dynamic provisioning can be enabled for a cluster such that all claims are dynamically provisioned without a storage class annotation. This behavior is enabled by the cluster administrator by marking one StorageClass object as "default". A StorageClass can be marked as default by adding the storageclass.beta.kubernetes.io/is-default-class annotation to it. When a default StorageClass exists and a user creates a PersistentVolumeClaim without a storage-class annotation, the new DefaultStorageClass admission controller (also introduced in v1.4) automatically adds the class annotation pointing to the default storage class.

Can I Still Use the Alpha Version?

Kubernetes 1.4 maintains backwards compatibility with the alpha version of the dynamic provisioning feature to allow for a smoother transition to the beta version. The alpha behavior is triggered by the existence of the alpha dynamic provisioning annotation (volume.alpha.kubernetes.io/storage-class). Keep in mind that if the beta annotation (volume.beta.kubernetes.io/storage-class) is present, it takes precedence and triggers the beta behavior. Support for the alpha version is deprecated and will be removed in a future release.

What's Next?

Dynamic Provisioning and Storage Classes will continue to evolve and be refined in future releases. Below are some areas under consideration for further development.

Standard Cloud Provisioners

For deployment of Kubernetes to cloud providers, we are considering automatically creating a provisioner for the cloud's native storage system. This means that a standard deployment on AWS would result in a StorageClass that provisions EBS volumes, and a standard deployment on Google Cloud would result in a StorageClass that provisions GCE PDs. It is also being debated whether these provisioners should be marked as default, which would make dynamic provisioning the default behavior (no annotation required).

Out-of-Tree Provisioners

There has been ongoing discussion about whether Kubernetes storage plugins should live "in-tree" or "out-of-tree". While the details for how to implement out-of-tree plugins are still in the air, there is a proposal introducing a standardized way to implement out-of-tree dynamic provisioners.

How Do I Get Involved?

If you're interested in getting involved with the design and development of Kubernetes Storage, join the Kubernetes Storage Special-Interest-Group (SIG). We're rapidly growing and always welcome new contributors.

– Saad Ali, Software Engineer, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Quelle: kubernetes