Docker Presents at Inaugural Cloud Field Day

Thanks to everyone who joined us last Thursday. We were really excited to participate in the first Cloud Field Day event and to host it at Docker HQ in San Francisco. Watching the trend toward cloud and the changing dynamics of application development, Tech Field Day organizers Stephen Foskett and Tom Hollingsworth started Cloud Field Day to create a forum for companies to share and for the delegates to discuss. The delegates came from backgrounds in software development, networking, virtualization, storage, data and, of course, cloud. As always, the delegates asked a lot of questions, kicked off some great discussions, and even had some spirited debates both in the room and online, always with the end user in mind. We are looking forward to doing this again.

ICYMI: The videos and event details are now available online, and you can also follow the conversation with the delegates on Twitter.

containers are really about applications, not infrastructure @docker https://t.co/BAabGfwKIm pic.twitter.com/S8YrLDLd92
— Karen Lopez (@datachick) September 15, 2016


It’s staggering how far apart many traditional IT departments are from where the leading edge currently is… #CFD1
— Jason Nash (@TheJasonNash) September 15, 2016


There is NO way to run @docker swarm mode insecurely! TLS built in! Gotta like that! #CFD1
— Nigel Poulton (@nigelpoulton) September 15, 2016

The three livestreamed sessions have been recorded and are now available to view.
Session 1: What is Docker? featuring product manager Vivek Saraswat
In this session, Vivek explains container architecture, how containers differ from VMs, and how they can be applied to application environments. Bonus demo featuring an app with rotating cat GIFs.

Session 2: Docker Orchestration featuring architect Andrea Luzzardi
With Docker 1.12, orchestration is built directly into the Engine. As an optional feature, orchestration includes node clustering, container scheduling, application-level services, container-aware networking, security and much more.
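To give a concrete sense of what built-in orchestration looks like, here is a minimal sketch using the Docker 1.12 CLI (the service name and image are illustrative, not taken from the session):

# turn this Engine into a single-node swarm
docker swarm init
# schedule three replicas of a service across the swarm
docker service create --name web --replicas 3 nginx
# list services and their replica counts
docker service ls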

Session 3: Docker and Microsoft featuring product manager Michael Friis
Enterprises have a mix of Linux and Windows application workloads. In this session, Michael explains how Docker and Windows Server deliver Windows containers and other integrations to the native Microsoft developer and IT pro toolset.

And we are not finished yet! The Docker team will be participating in the upcoming Tech Field Day 12 in Silicon Valley on November 15-16. Check back on the Tech Field Day site for updated times and a link to the live stream.
See you online soon!
More resources:

Learn more about Docker for the Enterprise
Read the white paper: Docker for the Virtualization Admin
Docker 1.12 with built in orchestration
Learn more about Docker Datacenter

The post Docker Presents at Inaugural Cloud Field Day appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Live Debugging Java in Docker – Just in time for JavaOne!

Developing Java web applications often requires that they be deployable on multiple technology stacks. These typically include an application server and a database, but the components can vary from deployment to deployment. Building and managing multiple stacks in a development environment can be a time-consuming task, often requiring unique configurations for each stack.
Docker can simplify the process of building and maintaining development environments for Java web applications: application developers can create custom images on demand and use them for developing, testing and debugging applications. We have recently published a tutorial for building a Java web application using containers and three popular Java IDEs. Docker enables developers to debug their code as it runs in containers. The tutorial covers setting up a debug session with an application server in Docker using the IDEs that developers typically use, such as Eclipse, IntelliJ IDEA and NetBeans. Developers can build the application, change code, and set breakpoints while the application is running in the container. The tutorial uses a simple Spring MVC application to illustrate how to use containers when developing Java applications.
The tutorial is available on GitHub in our Docker Labs repository. It shows you how to:

Configure Eclipse, IntelliJ, and NetBeans
Set up the project
Debug your application live in the container
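If you want a feel for the shape of such a debug setup before diving into the tutorial, here is a minimal sketch of remote debugging against a containerized app server. The image, port and JPDA settings below are generic assumptions for illustration, not the tutorial’s exact configuration:

# start Tomcat with the JDWP debug agent listening on port 8000
docker run -d -p 8080:8080 -p 8000:8000 \
  -e JPDA_ADDRESS=8000 \
  tomcat:8.0 catalina.sh jpda run
# then point your IDE's remote-debug configuration at localhost:8000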

You can go to the tutorials, or follow along in these videos:

The tutorial uses common stack components, but Docker enables you to build development environments from components of different technology stacks. For most use cases, Docker provides a way to quickly create and deploy a consistent Java development environment.
Have any more tips or examples for using Docker with Java? Or other languages? Share them with the community by contributing to the Docker Labs repository.
Based in San Francisco? Join us this Wednesday, September 21st, at Docker HQ for a Docker for Java Developers meetup with Docker Captain Arun Gupta and Patrick Chanezon.
The post Live Debugging Java in Docker – Just in time for JavaOne! appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Weekly Roundup | September 11, 2016


As we arrive at the conclusion of another week, the team at Docker wanted to take a moment to reflect on a few of the top posts you might have missed, while also highlighting a few other Docker stories from around the web. Here’s the weekly roundup for the week of September 11, 2016:

Docker Partner Program – introducing the new tiered Docker Partner Program, designed to address the growing demand by enterprise companies to adopt Containers as a Service environments with Docker Datacenter.

Dockercast Episode 3 – in this podcast, Docker catches up with Nirmal Mehta at Booz Allen Hamilton. We discuss how large government organizations are modernizing their IT infrastructures and why these types of institutions seem to be early adopters of Docker.

Docker + Golang – a short collection of tips and tricks by Jérôme Petazzoni showing how Docker can be useful when working with Go code.

IoT Swarm with Docker Machine – the new swarm mode in Docker 1.12 makes it easy to build a Docker swarm and connect different ARM devices into an IoT cluster. Instructions on how to build your own by Docker Captain Dieter Reuter.

CentOS 7.2 ARM Docker Image – Docker Captain Ajeet Raina builds the first CentOS 7.2 ARM Docker image on a Raspberry Pi 3 under the Docker 1.12.1 release.


The post Docker Weekly Roundup | September 11, 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

HPE Docker Ready Servers Now Available – Get Docker Preinstalled On Your Favorite Hardware

It’s here! HPE Docker ready servers are now available. These servers are pre-configured, integrated and validated with the commercially supported Docker Engine out of the box, letting enterprises ease their adoption of Docker on a trusted hardware platform.
Announced in June, the Docker and Hewlett Packard Enterprise (HPE) partnership was named one of “The 10 Most Important Tech Partnerships In 2016 (So Far)” by CRN for bringing infrastructure-optimized Docker technology to the enterprise as the foundation of a modern application platform.
Integrated, Validated and Supported
Docker ready servers are available for HPE ProLiant, Cloudline, and Hyper Converged Systems. These servers come pre-installed with the commercially supported Docker Engine (CS Engine) and enterprise-class support direct from HPE, backed by Docker. Whether deploying new servers or facing a hardware refresh, enterprises looking to adopt containerization can benefit from a simplified and repeatable deployment option on hardware they trust.
HPE Docker ready servers accelerate businesses’ time to value by combining everything needed in a single server to scale and support Docker environments: the hardware and OS you already use, plus the Docker CS Engine. Docker CS Engine is a commercially supported container runtime with robust native tooling that builds and runs Docker containers on any host. Once up and running, these Docker hosts can be a destination for any new Dockerized distributed application or containerized legacy application.
In our partnership with HPE, Docker ready servers are fully supported and guaranteed with enterprise L1/L2 support from HPE, backed by Docker, and consulting services in alignment with HPE’s technology solutions roadmaps and SLAs, providing a single source of Docker support. Businesses can choose from a full complement of technical support services through HPE, including 1-year, 3-year, 9×5 and 24×7 options. In addition, HPE will provide technology assessments and design and implementation services for Docker (platform security, workload modernization consulting) from HPE Technology Services Consulting.
As teams scale their container environments and move from test/dev to production, they can upgrade to Docker Datacenter at any time, without friction. Docker Datacenter, our enterprise container management solution, provides end-to-end container security, policy and controls across the application lifecycle without sacrificing application agility or portability. Docker Datacenter helps enterprises transform to a hybrid IT environment, supporting bare metal, virtual and cloud deployment models, open APIs and interfaces, and the flexibility to support a wide variety of workflows.
Docker Ready Server Availability
The HPE Docker ready servers are available for purchase through any HPE reseller or Systems Integrator and directly through your HPE representative. To purchase, simply reach out to your trusted HPE business partner.
Included HPE server models and supported configurations
Currently, Docker ready servers are available for HPE ProLiant, Cloudline and Hyper Converged Systems, with additional x86 server lines becoming available later this year. The Linux operating systems on which CS Engine is available include Ubuntu, RHEL, SLES, CentOS and H Linux. Get more details on the exact version compatibility and interoperability here.
Business day and business critical levels of support are available, in one- or three-year terms, to align with the relevant application SLAs.


The post HPE Docker Ready Servers Now Available – Get Docker Preinstalled On Your Favorite Hardware appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Security through Community: Introducing the Vendor Security Alliance

Today Docker is proud to announce that we are a founding member of the Vendor Security Alliance (VSA), a coalition formed to help organizations streamline their vendor evaluation processes by establishing a standardized questionnaire for appraising a vendor’s security and compliance practices. The VSA was established to solve a fundamental problem: how can IT teams conform to their existing security practices when procuring and deploying third-party components and platforms?
The VSA solves this problem by developing a required set of security questions that allow vendors to demonstrate to their prospective customers that they are doing a good job with security and data handling. Good security is built on great technology paired with sound processes and policies. Until today, there was no consistent way to discern whether all these things were in place, and doing a proper security evaluation tends to be a hard, manual process: a large number of key questions come to mind when gauging how well a third-party company manages security.
As an example, these are the types of things that IT teams must be aware of when assessing a vendor’s security posture:

Do they securely handle sensitive customer data?
Do they have the ability to detect when attacks occur on their infrastructure?
Do they train their developers on secure coding best practices?
Do they follow industry best practices for configuring their systems?

Docker joins the Vendor Security Alliance’s founding team of security-conscious companies, including Uber, Dropbox, Palantir, Twitter, Square, Atlassian, GoDaddy and Airbnb. The founding team has worked together to produce a pragmatic and approachable questionnaire. The collective team draws from a wide variety of backgrounds and experiences, including mobile, enterprise and infrastructure companies, which has provided a unique set of perspectives and informed a strong common security lexicon. We expect this questionnaire to become the basis for all companies to understand their security posture, with tangible, actionable questions that will help improve software security across all industries. In service of that goal, we are releasing the questionnaire so that it is freely available to everyone. At the beginning of October, a copy of the questionnaire will be available at https://www.vendorsecurityalliance.org/.
As a founding member of the Vendor Security Alliance, Docker has taken an important step towards helping companies secure their processes and infrastructure. At Docker we talk a lot about helping organizations build secure infrastructure using Docker’s tools like Docker Content Trust and the Docker Engine’s runtime isolation, both of which were influenced by diligent feedback from our customers. But technology isn’t the whole equation. Assessing yourself against best practices and understanding how well your vendors manage their programs is an important step when it comes to building a security program at any company. Docker will also be using this questionnaire to assess our own vendors, while looking outward to see how it will help the industry with shared practices and consistent evaluation criteria.


The post Security through Community: Introducing the Vendor Security Alliance appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker + Golang = <3

This is a short collection of tips and tricks showing how Docker can be useful when working with Go code. For instance, I’ll show you how to compile Go code with different versions of the Go toolchain, how to cross-compile to a different platform (and test the result!), or how to produce really small container images.
The following article assumes that you have Docker installed on your system. It doesn’t have to be a recent version (we’re not going to use any fancy feature here).
Go without go
… And by that, we mean “Go without installing go”.
If you write Go code, or if you have even the slightest interest in the Go language, you certainly have the Go compiler and toolchain installed, so you might be wondering “what’s the point?”; but there are a few scenarios where you want to compile Go without installing Go.

You still have this old Go 1.2 on your machine (that you can’t or won’t upgrade), and you have to work on this codebase that requires a newer version of the toolchain.
You want to play with cross compilation features of Go 1.5 (for instance, to make sure that you can create OS X binaries from a Linux system).
You want to have multiple versions of Go side-by-side, but don’t want to completely litter your system.
You want to be 100% sure that your project and all its dependencies download, build, and run fine on a clean system.

If any of this is relevant to you, then let’s call Docker to the rescue!
Compiling a program in a container
When you have installed Go, you can do go get -v github.com/user/repo to download, build, and install a library. (The -v flag is just here for verbosity, you can remove it if you prefer your toolchain to be swift and silent!)
You can also do go get github.com/user/repo/... (yes, that’s three dots) to download, build, and install all the things in that repo (including libraries and binaries).
We can do that in a container!
Try this:
docker run golang go get -v github.com/golang/example/hello/...
This will pull the golang image (unless you have it already, in which case it will start right away) and create a container based on that image. In that container, go will download a little “hello world” example, build it, and install it. But it will install it in the container… So how do we run that program now?
Running our program in a container
One solution is to commit the container that we just built, i.e. “freeze” it into a new image:
docker commit $(docker ps -lq) awesomeness
Note: docker ps -lq outputs the ID (and only the ID!) of the last container that was executed. If you are the only user on your machine, and if you haven’t created another container since the previous command, that container should be the one in which we just built the “hello world” example.
Now, we can run our program in a container based on the image that we just built:
docker run awesomeness hello
The output should be Hello, Go examples!.
Bonus points
When creating the image with docker commit, you can use the --change flag to specify arbitrary Dockerfile commands. For instance, you could use a CMD or ENTRYPOINT command so that docker run awesomeness automatically executes hello.
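For instance, here is a sketch (assuming the /go/bin install path used by the golang image):

docker commit \
  --change 'ENTRYPOINT ["/go/bin/hello"]' \
  $(docker ps -lq) awesomeness

After that, a plain docker run awesomeness executes hello without any extra arguments.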
Running in a throwaway container
What if we don’t want to create an extra image just to run this Go program?
We got you covered:
docker run --rm golang sh -c \
  "go get github.com/golang/example/hello/... && exec hello"
Wait a minute, what are all those bells and whistles?

--rm tells the Docker CLI to automatically issue a docker rm command once the container exits. That way, we don’t leave anything behind.
We chain together the build step (go get) and the execution step (exec hello) using the shell logical operator &&. If you’re not a shell aficionado, && means “and”. It will run the first part (go get ...), and if (and only if!) that part is successful, it will run the second part (exec hello). If you wonder why this works: it behaves like a lazy AND operator, which needs to evaluate the right-hand side only if the left-hand side evaluates to true.
We pass our commands to sh -c, because if we were to simply do docker run golang "go get ... && hello", Docker would try to execute the program named go SPACE get SPACE etc., and that wouldn’t work. So instead, we start a shell and instruct the shell to execute the command sequence.
We use exec hello instead of hello: this will replace the current process (the shell that we started) with the hello program. This ensures that hello will be PID 1 in the container, instead of having the shell as PID 1 and hello as a child process. This is totally useless for this tiny example, but when we will run more useful programs, this will allow them to receive external signals properly, since external signals are delivered to PID 1 of the container. What kind of signal, you might be wondering? A good example is docker stop, which sends SIGTERM to PID 1 in the container.

Using a different version of Go
When you use the golang image, Docker expands that to golang:latest, which (as you might guess) will map to the latest version available on the Docker Hub.
If you want to use a specific version of Go, that’s very easy: specify that version as a tag after the image name.
For instance, to use Go 1.5, change the example above to replace golang with golang:1.5:
docker run --rm golang:1.5 sh -c \
  "go get github.com/golang/example/hello/... && exec hello"
You can see all the versions (and variants) available on the Golang image page on the Docker Hub.
Installing on our system
OK, so what if we want to run the compiled program on our system, instead of in a container?
We could copy the compiled binary out of the container. Note, however, that this will work only if our container architecture matches our host architecture; in other words, if we run Docker on Linux. (I’m leaving out people who might be running Windows Containers!)
The easiest way to get the binary out of the container is to map the $GOPATH/bin directory to a local directory. In the golang container, $GOPATH is /go. So we can do the following:
docker run -v /tmp/bin:/go/bin \
  golang go get github.com/golang/example/hello/...
/tmp/bin/hello
If you are on Linux, you should see the Hello, Go examples! message. But if you are, for instance, on a Mac, you will probably see:
-bash: /tmp/bin/hello: cannot execute binary file
What can we do about it?
Cross-compilation
Go 1.5 comes with outstanding out-of-the-box cross-compilation abilities, so if your container operating system and/or architecture doesn’t match your system’s, it’s no problem at all!
To enable cross-compilation, you need to set GOOS and/or GOARCH.
For instance, assuming that you are on a 64-bit Mac:
docker run -e GOOS=darwin -e GOARCH=amd64 -v /tmp/crosstest:/go/bin \
  golang go get github.com/golang/example/hello/...
The output of cross-compilation is not directly in $GOPATH/bin, but in $GOPATH/bin/$GOOS_$GOARCH. In other words, to run the program, you have to execute /tmp/crosstest/darwin_amd64/hello.
Installing straight to the $PATH
If you are on Linux, you can even install directly to your system bin directories:
docker run -v /usr/local/bin:/go/bin \
  golang go get github.com/golang/example/hello/...
However, on a Mac, trying to use /usr as a volume will not mount your Mac’s filesystem to the container. It will mount the /usr directory of the Moby VM (the small Linux VM hidden behind the Docker whale icon in your toolbar).
You can, however, use /tmp or something in your home directory, and then copy it from there.
Building lean images
The Go binaries that we produced with this technique are statically linked. This means that they embed all the code that they need to run, including all dependencies. This contrasts with dynamically linked programs, which don’t contain some basic libraries (like the “libc”) and use a system-wide copy which is resolved at run time.
This means that we can drop our Go compiled program in a container, without anything else, and it should work.
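You can verify this for yourself on a Linux host (assuming the binary is still in /tmp/bin from the earlier example):

# a static binary makes ldd report "not a dynamic executable"
ldd /tmp/bin/hello
# file likewise reports it as statically linked
file /tmp/bin/hello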
Let’s try this!
The scratch image
There is a special image in the Docker ecosystem: scratch. This is an empty image. It doesn’t need to be created or downloaded, since by definition, it is empty.
Let’s create a new, empty directory for our new Go lean image.
In this new directory, create the following Dockerfile:
FROM scratch
COPY ./hello /hello
ENTRYPOINT ["/hello"]
This means:

start from scratch (an empty image);
add the hello file to the root of the image;
define this hello program to be the default thing to execute when starting this container.
Then, produce our hello binary as follows:
docker run -v $(pwd):/go/bin --rm \
  golang go get github.com/golang/example/hello/...
Note: we don’t need to set GOOS and GOARCH here, precisely because we want a binary that will run in a Docker container, not on our host system. So leave those variables alone!
Then, we can build the image:
docker build -t hello .
And test it:
docker run hello
(This should display Hello, Go examples!.)
Last but not least, check the image’s size:
docker images hello
If we did everything right, this image should be about 2 MB. Not bad!
Building something without pushing to GitHub
Of course, if we had to push to GitHub each time we wanted to compile, we would waste a lot of time.
When you want to work on a piece of code and build it within a container, you can mount a local directory to /go in the golang container, so that the $GOPATH is persisted across invocations: docker run -v $HOME/go:/go golang ...
But you can also mount local directories to specific paths, to “override” some packages (the ones that you have edited locally). Here is a complete example:
# Adapt the two following environment variables if you are not running on a Mac
export GOOS=darwin GOARCH=amd64
mkdir go-and-docker-is-love
cd go-and-docker-is-love
git clone git://github.com/golang/example
cat example/hello/hello.go
sed -i .bak s/olleH/eyB/ example/hello/hello.go
docker run --rm \
  -v $(pwd)/example:/go/src/github.com/golang/example \
  -v $(pwd):/go/bin/${GOOS}_${GOARCH} \
  -e GOOS -e GOARCH \
  golang go get github.com/golang/example/hello/...
./hello
# Should display “Bye, Go examples!”
The special case of the net package and CGo
Before diving into real-world Go code, we have to confess something: we lied a little bit about the static binaries. If you are using CGo, or if you are using the net package, the Go linker will generate a dynamic binary. In the case of the net package (which a lot of useful Go programs out there use!), the main culprit is the DNS resolver. Most systems out there have a fancy, modular name resolution system (like the Name Service Switch) which relies on plugins that are, technically, dynamic libraries. By default, Go will try to use that; and to do so, it will produce dynamic binaries.
How do we work around that?
Re-using another distro’s libc
One solution is to use a base image that has the essential libraries needed by those Go programs to function. Almost any “regular” Linux distro based on the GNU libc will do the trick. So instead of FROM scratch, you would use FROM debian or FROM fedora, for instance. The resulting image will be much bigger now; but at least, the bigger bits will be shared with other images on your system.
Note: you cannot use Alpine in that case, since Alpine uses the musl libc instead of the GNU libc.
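As a sketch, the scratch-based Dockerfile from earlier needs only its base image swapped:

FROM debian
COPY ./hello /hello
ENTRYPOINT ["/hello"]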
Bring your own libc
Another solution is to surgically extract the files needed, and place them in your container with COPY. The resulting container will be small. However, this extraction process leaves the author with the uneasy impression of a really dirty job, and they would rather not go into more details.
If you want to see for yourself, look around ldd and the Name Service Switch plugins mentioned earlier.
Producing static binaries with netgo
We can also instruct Go to not use the system’s libc, and substitute Go’s netgo library, which comes with a native DNS resolver.
To use it, just add -tags netgo -installsuffix netgo to the go get options.

-tags netgo instructs the toolchain to use netgo.
-installsuffix netgo will make sure that the resulting libraries (if any) are placed in a different, non-default directory. This will avoid conflicts between code built with and without netgo, if you do multiple go get (or go build) invocations. If you build in containers like we have shown so far, this is not strictly necessary, since there will be no other Go code compiled in this container, ever; but it’s a good idea to get used to it, or at least know that this flag exists.
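Putting those flags together with the build-in-a-container recipe from earlier, a sketch looks like this:

docker run -v /tmp/bin:/go/bin --rm \
  golang go get -tags netgo -installsuffix netgo \
  github.com/golang/example/hello/...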

The special case of SSL certificates
There is one more thing that you have to worry about if your code has to validate SSL certificates; for instance if it will connect to external APIs over HTTPS. In that case, you need to put the root certificates in your container too, because Go won’t bundle those into your binary.
Installing the SSL certificates
There again, there are multiple options available, but the easiest one is to use a package from an existing distribution.
Alpine is a good candidate here because it’s so tiny. The following Dockerfile will give you a base image that is small, but has an up-to-date bundle of root certificates:
FROM alpine:3.4
RUN apk add --no-cache ca-certificates apache2-utils
Check it out; the resulting image is only 6 MB!
Note: the --no-cache option tells apk (the Alpine package manager) to get the list of available packages from Alpine’s distribution mirrors, without saving it to disk. You might have seen Dockerfiles doing something like apt-get update && apt-get install ... && rm -rf /var/cache/apt/*; this achieves something equivalent (i.e. not leaving package caches in the final image) with a single flag.
As an added bonus, putting your application in a container based on the Alpine image gives you access to a ton of really useful tools: now you can drop a shell into your container and poke around while it’s running, if you need to!
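For example, to poke around a running container named web (the name is illustrative):

docker exec -it web sh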
Wrapping it up
We saw how Docker can help us to compile Go code in a clean, isolated environment; how to use different versions of the Go toolchain; and how to cross-compile between different operating systems and platforms.
We also saw how Go can help us to build small, lean container images for Docker, and described a number of associated subtleties linked (no pun intended) to static libraries and network dependencies.
Beyond the fact that Go is a really good fit for a project like Docker, we hope we showed you how Go and Docker can benefit from each other and work really well together!
Acknowledgements
This was initially presented during the hack day at GopherCon 2016.
I would like to thank all the people who proofread this material and gave ideas and suggestions to make it better; including but not limited to:

Aaron Lehmann
Stephen Day
AJ Bowen

All mistakes and typos are my own; all the good stuff is theirs! ☺
The post Docker + Golang = <3 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Introducing the Docker Authorized Partner Program

Today the Docker team is excited to announce a new tiered Docker Partner Program to address the growing demand by companies to adopt Containers as a Service environments with Docker Datacenter. This enhanced program provides end-to-end support for a community of resellers, regional consulting partners, global systems integrators and federal systems integrators.
Since the launch of Docker over three years ago, there has been tremendous adoption of Docker container technology by the developer community to accelerate development and CI workflows. Companies of all sizes, from startups to the Fortune 500, in healthcare, financial services, mobile apps and more are leaning on Docker to transform their application pipelines by containerizing legacy and new microservices applications. As companies embark on their Docker journey, they look to their business partners to assist them in developing a business case, understanding the functionality, architecting use cases and deploying their environments.
New Program, Training and Resources
Containerization is the next catalyst for transformation in application infrastructure. The Docker Partner Program is designed to help partners build a successful “Docker practice” rooted in best practices and technical expertise, as an extension of their expertise in cloud technologies, Software Defined Datacenter and converged infrastructure.
The new tiered partner program gives partners a clear path from onboarding to established expertise by growing from the Professional to the Premier tier. As partners invest in their Docker practice with more accredited professionals, Docker in turn increases the program benefits to help accelerate the partner’s business.
A critical element of this program is the professional accreditations available for sales, presales, technical and consulting professionals. New self-paced and instructor-led courses form a comprehensive curriculum to ensure partner professionals can guide IT organizations successfully on their Docker journey. For consulting professionals, instructor-led courses with hands-on exercises ensure deep technical expertise in design, deployment and administration. In addition to training, the revamped Docker Partner Portal features a resource center with content, collateral and tools to help build awareness and demand and accelerate successful customer engagements.
Removing Friction with Distribution
Docker is also partnering with commercial distributor SYNNEX and our Federal distribution partners Immix, SYNNEX and Vizuri in North America to streamline the sales process and shorten customers’ time to value. Partner companies can work with distribution for access to Docker subscription packages while getting the technical and business development support needed to develop their practice. Our Federal distributors, which specialize in government solutions, bring Docker’s enterprise offerings to government agencies, states, municipalities and educational institutions.
From global, federal and regional systems integrators to value-added resellers, consulting providers and distribution partners, the Docker Partner Program is focused on developing a thriving community that can successfully guide companies from deploying their first container to rolling out a sophisticated enterprise Containers as a Service platform using Docker Datacenter.
Get Started Today

Sign up for the Partner Program
Log in to the Partner Portal and check out all the resources
Get trained: start your learning path and earn your accreditation
Become Authorized!

We look forward to partnering with you and accelerating your Docker journey!
More Resources

Read the press release
Read the white paper:  Modern App Architecture for the Enterprise
Learn more about Docker Datacenter

The post Introducing the Docker Authorized Partner Program appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

New Dockercast Episode with Docker Captain, Nirmal Mehta

In case you missed it, we recently launched Dockercast, the official Docker podcast, including all the DockerCon 2016 sessions available as podcast episodes.

In this podcast I catch up with Nirmal Mehta at Booz Allen Hamilton. Nirmal has been a big part of the Docker community and is also a Docker Captain.
Nirmal works with some large government organizations, and we discussed why these types of institutions seem to be early adopters of Docker. As most would answer, speed was an obvious driver; however, security was also an early driver. It turns out that, thanks to the tighter boundaries of Docker containers, some of these organizations felt the security opportunities were even better than with virtualization. We discuss these ideas as well as what it is like to be a Docker Captain.
You can find the latest Dockercast episodes on the iTunes Store or via the SoundCloud RSS feed.


The post New Dockercast Episode with Docker Captain, Nirmal Mehta appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker at Tech Field Day 2016

Save the date! This coming Thursday Docker is excited to host the delegates of Cloud Field Day at our headquarters for a deep dive into the Docker platform. Cloud Field Day is part of the Tech Field Day series of events that bring together technology companies and IT thought leaders to talk shop and share insights.
Cloud Field Day will be live and in person at Docker HQ, but anyone can join in by watching the live stream. Docker will be featured at 1pm on Thursday, September 15th. Join us by visiting the Cloud Field Day event page.
Cloud Field Day is just one in a series of Tech Field Day sessions coordinated by IT industry veterans Stephen Foskett and Tom Hollingsworth. Learn more about the whole Tech Field Day series here.
ICYMI: Our very own Mike Coleman spoke at Tech Field Day Express at VMworld. In this one-hour session, Mike walked a group of vExperts through an introduction to containers, what the new end-to-end application workflow looks like, and an overview of Docker 1.12 with built-in orchestration.
Intro to Docker

Build, Ship, Run with Docker

Docker 1.12 with built in Orchestration

See you online!

Join us for the Cloud Field Day live stream
Read the ebook: Docker for the Virtualization Admin
Learn more about Docker Datacenter
Try Docker Datacenter free for 30 days

The post Docker at Tech Field Day 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Creating a PostgreSQL Cluster using Helm

Editor’s note: Today’s guest post is by Jeff McCormick, a developer at Crunchy Data, showing how to deploy a PostgreSQL cluster using Helm, a Kubernetes package manager.

Crunchy Data supplies a set of open source PostgreSQL and PostgreSQL-related containers. The Crunchy PostgreSQL Container Suite includes containers that deploy, monitor, and administer the open source PostgreSQL database; for more details, view this GitHub repository.

In this post we’ll show you how to deploy a PostgreSQL cluster using Helm, a Kubernetes package manager. For reference, the Crunchy Helm Chart examples used within this post are located here, and the pre-built containers can be found on DockerHub at this location.

This example will create the following in your Kubernetes cluster:

postgres master service
postgres replica service
postgres 9.5 master database (pod)
postgres 9.5 replica database (replication controller)

This example creates a simple Postgres streaming replication deployment with a master (read-write) and a single asynchronous replica (read-only). You can scale up the number of replicas dynamically.

Contents
The example is made up of various Chart files as follows:

values.yaml – contains values which you can reference within the database templates, allowing you to specify in one place values like database passwords.
templates/master-pod.yaml – the postgres master database pod definition. This file causes a single postgres master pod to be created.
templates/master-service.yaml – the postgres master database has a service created to act as a proxy. This file causes a single service to be created to proxy calls to the master database.
templates/replica-rc.yaml – defines the postgres replica database. This file causes a replication controller to be created, which allows the postgres replica containers to be scaled up on demand.
templates/replica-service.yaml – causes the service proxy for the replica database container(s) to be created.

Installation
Install Helm according to its GitHub documentation, then install the examples as follows:

helm init
cd crunchy-containers/examples/kubehelm
helm install ./crunchy-postgres

Testing
After installing the Helm chart, you will see the following services:

kubectl get services
NAME              CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
crunchy-master    10.0.0.171   <none>        5432/TCP   1h
crunchy-replica   10.0.0.31    <none>        5432/TCP   1h
kubernetes        10.0.0.1     <none>        443/TCP    1h

It takes about a minute for the replica to begin replicating with the master. To test out replication, see if replication is underway with this command (enter “password” for the password when prompted):

psql -h crunchy-master -U postgres postgres -c 'table pg_stat_replication'

If you see a line returned from that query, it means the master is replicating to the slave. Try creating some data on the master:

psql -h crunchy-master -U postgres postgres -c 'create table foo (id int)'
psql -h crunchy-master -U postgres postgres -c 'insert into foo values (1)'

Then verify that the data is replicated to the slave:

psql -h crunchy-replica -U postgres postgres -c 'table foo'

You can scale up the number of read-only replicas by running the following Kubernetes command:

kubectl scale rc crunchy-replica --replicas=2

It takes 60 seconds for the new replica to start and begin replicating from the master.
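As a quick sanity check after scaling (reusing the replication query from above), each connected streaming replica shows up as one row in pg_stat_replication on the master:

kubectl scale rc crunchy-replica --replicas=2
# after about a minute, expect one row per connected replica
psql -h crunchy-master -U postgres postgres -c 'table pg_stat_replication'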
The Kubernetes Helm and Charts projects provide a streamlined way to package up complex applications and deploy them on a Kubernetes cluster. Deploying PostgreSQL clusters can sometimes prove challenging, but the task is greatly simplified using Helm and Charts.

–Jeff McCormick, Developer, Crunchy Data

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Quelle: kubernetes