Docker Online Meetup #42: Docker Captains Share Tips & Tricks for Using Docker 1.12

For this week’s Online Meetup, Docker Captains Ajeet Singh Raina, Viktor Farcic and Bret Fisher shared their tips and tricks for the built-in orchestration features in Docker 1.12.
Ajeet talked about the best ways to use Docker 1.12 Service Discovery and shared key takeaways. Viktor talked about best practices for setting up a Swarm cluster and integrating it with HAProxy. Bret concluded the meetup with a presentation on Docker 1.12 command options and aliases, including CLI aliases for quick container management, the shortest path to a secure, production-ready swarm, how to use CLI filters for easier management of larger swarms, and the Docker remote CLI security setup.
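To make Bret’s tips concrete, here is a minimal sketch of the kind of shell aliases and CLI filters he discussed. The alias names and filter values are illustrative examples, not his exact configuration:

    # Illustrative shell aliases for quick container management
    alias dps='docker ps'
    alias dpsa='docker ps -a'
    alias dimg='docker images'
    alias drmx='docker rm $(docker ps -q --filter status=exited)'   # remove stopped containers

    # CLI filters make larger swarms easier to manage, for example:
    docker ps --filter "name=web"            # containers whose name matches "web"
    docker node ls --filter "role=manager"   # swarm manager nodes only
    docker service ls --filter "name=api"    # services whose name matches "api"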
 
 

Best ways to use Docker 1.12 Service Discovery by Docker Captain Ajeet Raina

Scaling and clustering with Docker Swarm by Docker Captain Viktor Farcic
Docker cli Tips and Tricks by Docker Captain Bret Fisher

Want to learn more about Docker 1.12 and orchestration? Check out these resources:

Docker 1.12.1 on Raspberry Pi 3 in 5 minutes by Docker Captain Ajeet Singh Raina
Docker Docs: Understand Docker container network
Docker 1.12 Release Notes
Docker Blog: Docker 1.12: Now With Built-In Orchestration!
Scale a real microservice with Docker 1.12 Swarm Mode by Docker Captain Alex Ellis
Docker 1.12 orchestration built-in by Docker Captain Gianluca Arbezzano


The post Docker Online Meetup #42: Docker Captains Share Tips & Tricks for Using Docker 1.12 appeared first on Docker Blog.

Weekly Roundup | Docker

Here’s the buzz from this week we think you should know about! We shared a preview of Microsoft’s container monitoring, reviewed the Docker Engine security feature set, and delivered a quick tutorial for getting Docker 1.12.1 running on Raspberry Pi 3. As we begin a new week, let’s recap our top five most-read stories for the week of August 21, 2016:
 
 

Docker security: the Docker Engine has strong security defaults for all containerized applications.

1.12.1 on Raspberry Pi 3: five minute guide for getting Docker 1.12.1 running on Raspberry Pi 3 by Docker Captain Ajeet Singh Raina.

Securing the Enterprise: how Docker’s security features can be used to provide active and continuous security for a software supply chain.

Docker + NATS for Microservices: building a microservices control plane using NATS and Docker v1.12 by Wally Quevedo.

Container Monitoring: Microsoft previews open Docker container monitoring, aimed at users who want a simplified view of container usage and a way to diagnose issues whether containers are running in the cloud or on-premises, by Sam Dean.


The post Weekly Roundup | Docker appeared first on Docker Blog.

Docker Online Meetup #41: Deep Dive into Docker 1.12 Networking

For this week’s Online Meetup, Sr. Director, Networking at Docker, Madhu Venugopal, joined us to talk about Docker 1.12 Networking and answer questions.
Starting with Docker 1.12, Docker has added features to the core Docker Engine to make multi-host and multi-container orchestration simple to use and accessible to everyone. Docker 1.12 Networking plays a key role in enabling these orchestration features.
In this online meetup, we learned about all the new and exciting networking features introduced in Docker 1.12 (a short command sketch follows the list below):

Swarm-mode networking
Routing Mesh
Ingress and Internal Load-Balancing
Service Discovery
Encrypted Network Control-Plane and Data-Plane
Multi-host networking without external KV-Store
MACVLAN Driver
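As a quick illustration of several of these features working together, here is a minimal sketch run from a manager node of an existing swarm; the network and service names are placeholders:

    # Create an overlay network and attach a replicated service to it
    docker network create -d overlay demo-net
    docker service create --name web --network demo-net --replicas 3 -p 8080:80 nginx

    # The routing mesh publishes port 8080 on every node in the swarm,
    # and requests are load-balanced across the three replicas
    docker service ls
    docker service ps web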

 
Madhu got an amazing number of questions at the end of the online meetup, and because he did not have time to answer all of them, we’ve added the rest of the Q&A below:
Q: Will you address the DNS configuration in Docker? We have two apps created with docker compose and would like to enable communication and DNS resolution from containers in one of the apps to containers in the other app.
Check out the external networks feature in Docker Compose, described in the Docker docs, to get started. If that does not satisfy your requirement, please raise an issue in docker/docker.
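As a minimal sketch of the shared-network approach, you can attach containers from both applications to one user-defined network; the container and network names below are placeholders, and the ping test assumes the image includes ping:

    # Create a shared network, then attach containers from both applications to it
    docker network create shared-net
    docker network connect shared-net app1_web_1
    docker network connect shared-net app2_api_1

    # Containers on the same user-defined network can now resolve each other by name
    docker exec app1_web_1 ping -c 1 app2_api_1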
Q: What mechanism is used to register the different docker instances with each other so that they recognize a shared network between hosts, please?
Docker swarm mode uses Raft and gRPC to communicate between Docker instances. That’s how the nodes in the cluster exchange data and recognize a shared network. At the data plane, the overlay driver uses VXLAN tunnels to provide per-network multi-host connectivity and isolation.
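A minimal sketch of how that cluster membership is established from the CLI; the IP address and join token are placeholders printed by your own cluster:

    # On the first node (becomes a manager)
    docker swarm init --advertise-addr <MANAGER-IP>

    # On each additional node, join using the token printed by 'swarm init'
    docker swarm join --token <TOKEN> <MANAGER-IP>:2377

    # Back on the manager: the cluster membership that Raft and gRPC keep in sync
    docker node ls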
Q: Does it work with NSX?
This question is related to network plugins; the community has developed OVS and OVN plugins. We are not sure if NSX integration is feasible through that. Typically, vendor plugins are created and maintained by the vendor directly.
Q: Is there a way to see all records registered in Docker internal DNS?  Is it exposed via API so it can be queried?
The internal DNS is not exposed, but the network inspect and service inspect APIs can be used to gather this information.
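For example, reusing the hypothetical names from the earlier sketch:

    docker network inspect demo-net   # lists attached containers and their IP addresses
    docker service inspect web        # the "Endpoint" section includes the service's virtual IPs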
Q: Has swarm mode created dependency of docker-engine on iptables?
Docker Engine has been using iptables since 1.0 for the bridge driver. Swarm mode merely makes use of iptables to provide functionality like the routing mesh.
Q: Can I have only two nodes in a swarm, with both acting as managers and as worker nodes themselves?
Docker recommends an odd number of manager nodes, since Raft requires a majority to reach consensus, and an odd number lets you take full advantage of the fault-tolerance features of swarm mode. Please read through https://docs.docker.com/engine/swarm/raft/ for more information.
Q: Will making ports into cluster-wide resources limit the total number of services, whereas using public VIPs is expandable?
Yes. Docker does not control public VIPs, so they need to be managed external to the Docker cluster. However, only front-end services require port publishing, and only those services that require port publishing participate in the Routing Mesh. Back-end services do not reserve cluster-wide ports.
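A short sketch of the distinction (service and network names are placeholders):

    # Front-end service: publishes a port, so it joins the routing mesh
    # and reserves that port cluster-wide
    docker service create --name front --network demo-net -p 80:80 nginx

    # Back-end service: no published port, reachable only over the overlay
    # network, so it consumes no cluster-wide port
    docker service create --name back --network demo-net redis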
Q: Can I plumb more than one IP per container while only using one network?
At the moment, libnetwork supports one routable IP per endpoint (per network). But users can configure many more link-local IP addresses on the same endpoint. If you are interested in discussing this capability further, please open an enhancement request in docker/docker.
Q: Can you insert records into DNS to cause static IPs to be used?
Docker doesn’t expose the embedded DNS APIs externally. Users can provide an external DNS server using the --dns option, and custom name-lookup entries can be inserted in that external DNS server, which will then be used by the containers.
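For example (the DNS server address and hostname below are placeholders):

    docker run --rm --dns 10.0.0.53 alpine nslookup myservice.internal.example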
Q: Can you talk more about automatic key rotation for secure networks? How often does it occur and is the interval configurable? What process(es) are responsible for key rotation?  How are the keys circulated throughout the cluster?
Please read the Overlay Security Model in the Docker docs. Currently this is not configurable, but we are working on the configurability of this and other swarm mode features. Key rotation is handled entirely by the manager node process (swarmkit), and keys are distributed over the secured gRPC channel established between the managers and workers.
Q: Regarding front-end ports, is there a limitation on the number of port 80s you can listen on?
Yes. The best way to mitigate that is to run a global nginx, HAProxy or other reverse-proxy service and route to the backend services by host header.
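A hedged sketch of that pattern; the proxy’s host-header routing configuration is specific to nginx or HAProxy and is not shown here, and names are placeholders:

    # One reverse-proxy task per node owns the published port 80 and
    # forwards to back-end services based on the Host header
    docker service create --name proxy --mode global --network demo-net -p 80:80 nginx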
Have a question that wasn’t answered or a specific requirement? Check out the Docker Forums or open an issue on GitHub.


Want to learn more about Docker 1.12 and networking? Check out these resources:

Docker 1.12 Networking Model Overview by Docker Captain Ajeet Singh Raina
Docker Docs: Understand Docker container network
Docker 1.12 Release Notes
Docker Blog: Docker 1.12: Now With Built-In Orchestration!
Scale a real microservice with Docker 1.12 Swarm Mode by Docker Captain Alex Ellis
Docker 1.12 orchestration built-in by Docker Captain Gianluca Arbezzano

The post Docker Online Meetup #41: Deep Dive into Docker 1.12 Networking appeared first on Docker Blog.

Your Software is Safer in Docker Containers

Docker’s security philosophy is Secure by Default, meaning security should be inherent in the platform for all applications and not a separate solution that needs to be deployed, configured and integrated.
Today, Docker Engine supports all of the isolation features available in the Linux kernel. Not only that, but we’ve built a simple user experience by implementing default configurations that provide greater protection for applications running within Docker Engine, making strong security the default for all containerized applications while still leaving the controls with the admin to change configurations and policies as needed.
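For instance, those defaults can be tightened further per container with standard flags; a minimal sketch:

    # Drop all capabilities, mount the root filesystem read-only, block privilege
    # escalation via setuid binaries, and cap the number of processes:
    docker run --rm --cap-drop ALL --read-only \
      --security-opt no-new-privileges --pids-limit 100 \
      alpine echo "locked-down container"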
But don’t take our word for it.  Two independent groups have evaluated Docker Engine for you and recently released statements about the inherent security value of Docker.
Gartner analyst Joerg Fritsch recently published a new paper titled How to Secure Docker Containers in Operation, discussed in this blog post. In it, Fritsch states the following:
“Gartner asserts that applications deployed in containers are more secure than applications deployed on the bare OS” because even if a container is cracked “they greatly limit the damage of a successful compromise because applications and users are isolated on a per-container basis so that they cannot compromise other containers or the host OS”.
Additionally, NCC Group contrasted the security features and defaults of container platforms and published the findings in the paper “Understanding and Hardening Linux Containers.” Included is an examination of attack surfaces, threats, related hardening features, a contrast of different defaults and recommendations across different container platforms. A key takeaway from this examination is the recommendation that applications are more secure by running in some form of Linux container than without.
“Containers offer many overall advantages. From a security perspective, they create a method to reduce attack surfaces and isolate applications to only the required components, interfaces, libraries and network connections.”
“In this modern age, I believe that there is little excuse for not running a Linux application in some form of a Linux container, MAC or lightweight sandbox.”
– Aaron Grattafiori, NCC Group

The chart in the NCC Group report depicts the outcome of the security evaluation of three container platforms. Docker Engine was found to have a more comprehensive feature set with strong defaults. (Source: Understanding and Hardening Linux Containers)

The Docker security philosophy of “Secure by Default” spans across the concepts of secure platform, secure content and secure access to deliver a modern software supply chain for the enterprise that is fundamentally secure.  Built on a secure foundation with support for every Linux isolation feature, Docker Datacenter delivers additional features like application scanning, signing, role based access control (RBAC) and secure cluster configurations for complete lifecycle security. Leading enterprises like ADP trust Docker Datacenter to help harden the containers that process paychecks, manage benefits and store the most sensitive data for millions of employees across thousands of employers.


More Resources:

Read the Container Isolation White Paper
Learn how Docker secures your software supply chain
ADP hardens enterprise containers with Docker Datacenter
Try Docker Datacenter free for 30 days

The post Your Software is Safer in Docker Containers appeared first on Docker Blog.

Securing the Enterprise Software Supply Chain Using Docker

At Docker we have spent a lot of time discussing runtime security and isolation as a core part of the container architecture. However, that is just one aspect of the total software pipeline. Instead of a one-time flag or setting, we need to approach security as something that occurs at every stage of the application lifecycle. Organizations must apply security as a core part of the software supply chain where people, code and infrastructure are constantly moving, changing and interacting with each other.
If you consider a physical product like a phone, it’s not enough to think about the security of the end product. Beyond the decision of what kind of theft-resistant packaging to use, you might want to know where the materials are sourced from and how they are assembled, packaged and transported. Additionally, it is important to ensure that the phone is not tampered with or stolen along the way.

The software supply chain maps almost identically to the supply chain for a physical product. You have to be able to identify and trust the raw materials (code, dependencies, packages), assemble them together, ship them by sea, land, or air (network) to a store (repository) so the item (application) can be sold (deployed) to the end customer.
Securing the software supply chain is quite similar. You have to:

Identify all the stuff in your pipeline, from people and code to dependencies and infrastructure
Ensure a consistent and quality build process
Protect the product while in storage and transit
Guarantee and validate the final product at delivery against a bill of materials

In this post we will explain how Docker’s security features can be used to provide active and continuous security for a software supply chain.
Identity
The foundation of the entire pipeline is built on identity and access. You fundamentally need to know who has access to what assets and can run processes against them. The Docker architecture has a distinct identity concept that underpins the strategy for securing your software supply chain: cryptographic keys allow the publisher to sign images to ensure proof-of-origin, authenticity, and provenance for Docker images.
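As a brief sketch of what that looks like in practice with Docker Content Trust (the repository name below is a placeholder):

    # With content trust enabled, pushes are signed automatically;
    # the root and repository signing keys are generated on the first push
    export DOCKER_CONTENT_TRUST=1
    docker push myorg/myapp:1.0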
Consistent Builds: Good Input = Good Output
Establishing consistent builds allows you to create a repeatable process and get control of your application dependencies and components, making it easier to test for defects and vulnerabilities. When you have a clear understanding of your components, it becomes easier to identify the things that break or are anomalous.

To get consistent builds, you have to ensure you are adding good components:

Evaluate the quality of the dependency, make sure it is the most recent/compatible version and test it with your software
Authenticate that the component comes from a source you expect and was not corrupted or altered in transit
Pin the dependency ensuring subsequent rebuilds are consistent so it is easier to uncover if a defect is caused by a change in code or dependency
Build your image from a trusted, signed base image using Docker Content Trust (a short sketch follows this list)
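A minimal sketch of pinning and verifying a base image; the digest value comes from your own pull, not from this example:

    # Pull the base image and note its content digest
    docker pull ubuntu:16.04
    docker images --digests ubuntu

    # In your Dockerfile, pin the verified digest, e.g.:
    #   FROM ubuntu@sha256:<digest-from-the-previous-command>

    # With content trust enabled, only signed tags can be pulled
    DOCKER_CONTENT_TRUST=1 docker pull ubuntu:16.04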

Application Signing Seals Your Build
Application signing is the step that effectively “seals” the artifact from the build. By signing the images, you ensure that whoever verifies the signature on the receiving side (docker pull) establishes a secure chain with you (the publisher). This relationship assures that the images were not altered, added to, or deleted from while stored in a registry or during transit. Additionally, signing indicates that the publisher “approves” the image you have pulled as good.

Enabling Docker Content Trust on both build machines and the runtime environment sets a policy so that only signed images can be pulled and run on those Docker hosts.  Signed images signal to others in the organization that the publisher (builder) declares the image to be good.
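Concretely, enabling that policy is a one-line setting on each host; a sketch (the repository name is a placeholder):

    # Require signed images on build machines and production Docker hosts
    export DOCKER_CONTENT_TRUST=1

    # Pulling a signed tag succeeds; pulling an unsigned tag is rejected
    # instead of silently running untrusted content
    docker pull myorg/myapp:1.0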
Security Scanning and Gating
Your CI system and developers verify that your build artifact works with the enumerated dependencies and that operations on your application behave as expected on both the success and failure paths, but have they vetted the dependencies for vulnerabilities? Have they vetted subcomponents of the dependencies or bundled system libraries for dependencies? Do they know the licenses for their dependencies? This kind of vetting is almost never done on a regular basis, if at all, since it is a huge overhead on top of already delivering bugfixes and features.

Docker Security Scanning assists in automating the vetting process by scanning the image layers. Because this happens as the image is pushed to the repo, it acts as a last check or final gate before images are deployed into production. Currently available in Docker Cloud and coming soon to Docker Datacenter, Security Scanning creates a Bill of Materials of all of the image’s layers, including packages and versions. This Bill of Materials is used to continuously monitor against a variety of CVE databases. This ensures that scanning happens more than once and notifies the system admin or application developer when a new vulnerability is reported for an application package that is in use.
Threshold Signing & Tying It All Together
One of the strongest security guarantees that comes from signing with Docker Content Trust is the ability to have multiple signers participate in the signing process for a container. To understand this, imagine a simple CI process that moves a container image through the following steps:

Automated CI
Docker Security Scanning
Promotion to Staging
Promotion to Production

This simple four-step process can add a signature after each stage has been completed and verify that every stage of the CI/CD process has been followed.

Image passes CI? Add a signature!
Docker Security Scanning says the image is free of vulnerabilities? Add a signature!
Build successfully works in staging? Add a signature!
Verify the image against all 3 signatures and deploy to production

Now before a build can be deployed to the production cluster, it can be cryptographically verified that each stage of the CI/CD process has signed off on an image.
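A simplified promotion sketch of this gating idea, using content trust to sign the image as it clears each stage. This approximates the flow with per-stage repositories rather than the true multi-signature delegation setup described above (which uses separate Notary delegation keys); all names below are placeholders:

    export DOCKER_CONTENT_TRUST=1

    # Re-tag and sign the candidate image as it clears each gate
    docker tag myapp:build-42 registry.example.com/staging/myapp:build-42
    docker push registry.example.com/staging/myapp:build-42

    docker tag myapp:build-42 registry.example.com/prod/myapp:build-42
    docker push registry.example.com/prod/myapp:build-42

    # Production hosts, also running with content trust enabled, only
    # pull images that were signed into the prod repository
    docker pull registry.example.com/prod/myapp:build-42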
Conclusion
The Docker platform provides enterprises the ability to layer in security at each step of the software lifecycle. From establishing trust with their users, to the infrastructure and code, our model gives both freedom and control to the developer and IT teams. From building secure base images to scanning every image to signing every layer, each feature allows IT to layer in a level of trust and guarantee into the application. As applications move through their lifecycle, their security profile is actively managed, updated and gated before they are finally deployed.


More Resources:

Read the Container Isolation White Paper
ADP hardens enterprise containers with Docker Datacenter
Try Docker Datacenter free for 30 days
Watch this talk from DockerCon 2016

The post Securing the Enterprise Software Supply Chain Using Docker appeared first on Docker Blog.

Docker Weekly Roundup

 
This week, we announced the launch of the Docker Scholarship Program, got to know our featured Docker Captains, and aired the first episode of Dockercast. As we begin a new week, let’s recap our top 5 most-read stories for the week of August 14, 2016:

 

Docker Scholarship Program: Docker announced the launch of a new scholarship program, in partnership with Reactor Core, a network of coding schools. The application period is open and interested applicants can apply here.
Docker Captains: meet and greet with our three selected August Captains. Learn how they got started, what they love most about Docker, and why Docker.
Dockercast Episode 1: this podcast episode guest stars Ilan Rabinovitch, Director of Technical Community at Datadog, and discusses Monitoring-as-a-Service, Docker metadata and Docker container monitoring.
Docker on Raspberry Pi: an informative guide to getting started with Docker on Raspberry Pi by Docker Captain Alex Ellis
PowerShell with Docker: an introduction to building your own custom Docker container image that runs PowerShell natively on Ubuntu Linux by Larry Larsen & Docker Captain Trevor Sullivan at Channel 9


 
The post Docker Weekly Roundup appeared first on Docker Blog.

Your Docker Agenda for LinuxCon North America

Hey Dockers! We’re excited to be back at LinuxCon this year in Toronto and hope you are, too! We’ve got a round-up of many of our awesome speakers, as well as a booth. Come visit us in between the sessions at our booth inside “The Hub”. You may even be able to score yourself some Docker swag.
 

Monday:
11:45am - Curious about the Cloud Native Computing Foundation, Open Container Initiative, Cloud Foundry Foundation and their role in the cloud ecosystem? Docker’s Stephen Walli joins other panelists to deliver So CFF, CNCF, and OCI Walk into a Room (or ‘Demystifying the Confusion: CFF, CNCF, OCI’).
3:00pm - Docker Captain Phil Estes will describe and demonstrate the use of the new schema format’s capabilities for multiple platform-specific image references in his More than x86_64: Docker Images for Multi-Platform session.
4:20pm - Join Docker’s Mike Coleman for Containers, Physical, and Virtual, Oh My!, with insight on what businesses need to consider as they decide how and where to run their Docker containers.
 
Tuesday:
2:00pm - Docker Captain Phil Estes is back with Runc: The Little (Container) Engine that Could, where he will 1) give an overview of runc, 2) explain how to take existing Docker containers and migrate them to runc bundles, and 3) demonstrate how modern container isolation features can be exploited via runc container configuration.
2:00pm - Docker’s Amir Chaudhry will explain Unikernels: When you Should and When you Shouldn’t to help you weigh the pros and cons of using unikernels and decide when it may be appropriate to consider a library OS for your next project.
 
Wednesday:
10:55am - Mike Goelzer and Victor Vieux from Docker’s core team will walk the audience through the new orchestration features added to Docker this summer: secure clustering, declarative service specification, load balancing, service discovery and more in their session From 1 to N Docker Hosts: Getting Started with Docker Clustering.
11:55am - Docker Captain Kendrick Coleman will talk about Highly Available & Distributed Containers. Learn how to deploy stateless and stateful services, all completely load balanced, in a Docker 1.12 swarm cluster.
2:15pm - Docker’s Paul Novarese will dive into user namespace and seccomp support in Docker Engine, covering new features that respectively allow users to run containers without elevated privileges and provide a method of containment for containers.
4:35pm - Docker’s Riyaz Faizullabhoy will deliver When The Going Gets Tough, Get TUF Going! The Update Framework (TUF) helps developers secure new or existing software update systems. Join this session to learn about the attacks that TUF protects against and how it actually does so in a usable manner.
 
Thursday:
9:00am - In this all-day tutorial, Jerome Petazzoni will teach attendees how to Orchestrate Containers in Production at Scale with Docker Swarm.
In addition to our Docker talks, we have two amazing Docker Toronto meetups lined up just for you. Check them out:
On August 23rd, we’re joining together with Toronto NATS Cloud Native and IoT Group at Lighthouse Labs to feature Diogo Monteiro on “Implementing Microservices with NATS” and our own Riyaz Faizullabhoy on “Docker Security and the Update Framework (TUF)”.
Come August 24th we’ll be at the Mozilla Community Space. Gou Rao, CTO and co-founder of Portworx will be touching on “Radically Simple Storage for Docker”, while Drew Erny from Docker will discuss “High Availability using Docker Swarm”.


The post Your Docker Agenda for LinuxCon North America appeared first on Docker Blog.

5 Minutes with the Docker Captains

Captain is a distinction that Docker awards to select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. Captains are Docker ambassadors (not Docker employees), and their genuine love of all things Docker has a huge impact on the Docker community. Whether they are blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events, they make Docker’s mission of democratizing technology possible. Whether you are new to Docker or have been a part of the community for a while, please don’t hesitate to reach out to Docker Captains with your challenges, questions, speaking requests and more.

This week we are highlighting 3 of our outstanding Captains who made August one filled with Docker learnings and events. Read on to learn more about how they got started, what they love most about Docker, and why Docker.
While Docker does not accept applications for the Captains program, we are always on the lookout for additional leaders who inspire and educate the Docker community. If you are interested in becoming a Docker Captain, we need to know how you are giving back. Sign up for community.docker.com, share your Docker-related activities on social media, get involved in a local meetup as a speaker or organizer, and continue to share your knowledge of Docker in your community.
 
Brian Christner
 
Brian Christner is a Cloud Advocate for Swisscom, a Switzerland-based telecom, where they are busy deploying a large Docker infrastructure. Brian is passionate about Linux, Docker or anything with a .IO domain name and regularly contributes Docker articles and GitHub projects.
 
How has Docker impacted what you do on a daily basis?
3 years ago Docker was still a relatively new concept to my coworkers and customers. Today, I would say that over 50% of the meetings I attend are about Docker, containers or technologies surrounding the Docker ecosystem. We recently integrated Docker image support into our Application Cloud which was a huge success. Docker continues to power our Services platform for Application Cloud where we are busy adding more services all the time like MongoDB, Redis, RabbitMQ and ELK as a service.
As a Docker Captain, how do you share your learnings with the community?
I keep quite busy building new Docker projects, researching, presenting at meetups and publishing articles to https://www.brianchristner.io. I’m also one of the maintainers of the Awesome Docker List which is a collection of Docker resources and projects.  If you have a good project or resource, please submit it so the community can benefit. I also contribute regularly to https://www.reddit.com/r/docker
Are you working on any fun projects?
Currently I’m building a Docker Swarm version of https://github.com/vegasbrianc/prometheus
Who are you when you’re not online?
When I’m not online you can find me in the Swiss Alps hiking, mountain biking or skiing with my wife and son.
 
Viktor Farcic
 
Viktor Farcic is a Senior Consultant at CloudBees. His big passions are Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD). He wrote The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices and the Test-Driven Java Development books. His random thoughts and tutorials can be found on his blog TechnologyConversations.com.
 
 
How has Docker impacted what you do on a daily basis?
Almost everything I do today involves Docker one way or another. The code I wrote is compiled through containers (since I bought my last laptop, I do not even have most of my build tools installed). Tests I run are inside containers. Services and applications are packaged and deployed as containers. Servers I used for development and testing are substituted with containers running on my laptop. The list can go on and on. In my case, Docker is everywhere.
What is a common technology question you’re asked and the high-level explanation?
How do we put things into containers without changing anything else? My answer is always the same. Docker is not only a tool but a new way to approach many different aspects of software development. If we are to leverage Docker’s full potential, many things need to change: architecture, team structure, processes, and so on.
Share a random story with us.
When I was young, I almost became an archaeologist. Being in the same profession as Indiana Jones was a much better way to attract girls than being a geek. Eventually, my geeky side won and I went back to computers.
If you could switch your job with anyone else, whose job would you want?
It would be Jérôme Petazzoni. He looks like someone who truly enjoys his work (apart from being great at it).
 
Chanwit Kaewkasi
 
Chanwit is an Asst. Professor at  Suranaree University of Technology and a Docker Swarm Maintainer. Chanwit ported Swarm to Windows and developed a number of Swarm features in the early (v0.1) days. He serves as a Technical Cloud Adviser to many companies in Thailand, where they have been setting up Swarm clusters for their production environments.
 
How has Docker impacted what you do on a daily basis?
I’m teaching and co-running a research laboratory at Suranaree University of Technology (SUT) in Thailand. Basically, Docker is the major part of our Large Scale Software Engineering research ecosystem there. We use Docker as the infrastructure layer of every system we build, ranging from low-power storage clusters to bare-metal computing clouds and upgradable IoT devices at scale.
To make progress in our research, we need to understand how Docker and its clustering system work. This resulted in the recent 2,000-node crowd-sourced Docker cluster project, SwarmZilla (formerly known as Swarm2K), in July.
As a Docker Captain, how do you share that learning with the community?
Together with members of the Docker community, we did scaling tests on the July Swarm2K cluster and provided feedback to the Docker Engineering team so they could use the data collected from the experiments to improve Docker Engine. I blogged about Docker and the Swarm2K project and other things at http://medium.com/@chanwit.
 
The post 5 Minutes with the Docker Captains appeared first on Docker Blog.

Docker Weekly | Roundup

This week, we’re taking a look at how to quickly create a swarm cluster, set up a mail forwarder on Docker, and better understand the new Docker 1.12.0 load-balancing feature. As we begin a new week, let’s recap our top 5 most-read stories for the week of August 7, 2016:

1. Docker Cheat Sheet: a quick reference guide on how to initialize swarm mode, build an image from a Dockerfile, and pull an image from a registry (the basic commands are sketched after this list).
2. cURL with HTTP2 Support: build a Dockerfile to create a minimal, Alpine Linux-based image with support for HTTP2. Emphasis on keeping the generated image small and customizing curl by Nathan LeClaire.
3. Distributed Application Bundles: tutorial on how to create a demo swarm cluster composed of Docker machines and deploy a service using a dab file by Viktor Farcic.
4. Setting up Mail Forwarder: create email addresses for your domain, provide address for the mails forwarded, and pass information to the Docker container via environment variables by Brian Christner.
5. Load-Balancing Feature: in-depth overview of what’s new in Docker 1.12.0 load-balancing feature by Ajeet Singh Raina
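As a taste of what that cheat sheet covers, a minimal sketch of those three commands (image names are placeholders):

    # Initialize swarm mode on the current node
    docker swarm init

    # Build an image from a Dockerfile in the current directory
    docker build -t myorg/myapp:latest .

    # Pull an image from a registry
    docker pull alpine:3.4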



Weekly Roundup: Top 5 Most Popular Posts

This week, our readers enjoyed some big news, including the great milestone of making Docker 1.12, Docker for Mac and Docker for Windows generally available for production environments, answers to the ten most often asked Docker questions and more. As we begin a new week, let’s recap our top 5 most-read stories for the week of July 24, 2016:

1. Docker 1.12 Goes GA: Docker 1.12 adds the largest and most sophisticated set of features into a single release since the beginning of the Docker project.
2. Docker for Mac and Windows: Native development environment using hypervisors built into each operating system. (No more VirtualBox!)
3. Docker Questions: The ten most common Docker questions (and answers) asked by IT Admins.
4. Function as a Service: The Function-as-a-Service model and how to generate a function from an image on Docker hub via Chanwit Kaewkasi
5. 12 Factor Method: Using the twelve factors application to Dockerize Apps via Rafael Gomes

