User-guided caching in Docker for Mac

Recent Docker releases (17.04 CE Edge onwards) bring significant performance improvements to bind-mounted directories on macOS. (Docker users on the stable channel will see the improvements in the forthcoming 17.06 release.) Commands for bind-mounting directories have new options to selectively enable caching.
Containers that perform large numbers of read operations in mounted directories are the main beneficiaries. Here’s how the improvements show up in a few tools and applications in common use among Docker for Mac users: go list is 2.5× faster, symfony is 2.7× faster, and rake is 3.5× faster, as the following graphs illustrate:
go list (2.5× speedup)

go list ./... in the moby/moby repository
symfony (2.7× speedup)

curl of the main page of the Symfony demo app
rake (3.5× speedup)

rake -T in @hirowatari’s benchmark
For more details about how and when to enable caching, and what’s going on under the hood, read on.
Basics of bind-mounting
A defining characteristic of containers is isolation: by default, many parts of the execution environment of a container are isolated both from other containers and from the host system. In the filesystem, isolation shows up as layering: the filesystem of a running container consists of a series of incremental layers, topped by a container-specific read/write layer that keeps changes made within the container concealed from the outside world.
Isolation as a default encourages careful thinking about the best way to bypass isolation in order to share data with a container. For data-in-motion, Docker offers a variety of ways to connect containers via the network. For data-at-rest, Docker Volumes offer a flexible mechanism to share data between containers, and with the host.
The simplest and most common way to use volumes is to bind-mount a host directory when starting a container — that is, to make the directory available at a specified point in the container’s filesystem. For example, the following command runs the alpine image, exposing the host directory /Users/yallop/project within the container as /project:
docker run -v /Users/yallop/project:/project alpine command
In this example, modifications to files under /project in the container appear as modifications to the corresponding files under /Users/yallop/project on the host. Similarly, modifications to files under /Users/yallop/project on the host appear as modifications to files under /project in the container.
There are many use cases for bind mounting. For example, you might

develop software using an editor on your host, running development tools in a container        
run a periodic job in a container, storing the output in a host directory
cache large data assets on the host for processing in a container

Bind mounts on Linux
Newcomers to Docker are sometimes surprised to discover that the performance overhead of containers is often close to negligible and, in many cases, significantly lower than that of other forms of virtualization.
On Linux, bind-mounting a directory, like many Docker features, simply selectively exposes host resources directly to a container. Consequently, access to bind mounts carries little-to-no overhead compared to filesystem access in a regular process.
Bind mounts on Docker for Mac
The Linux kernel makes container-style isolation efficient, but running containers on Docker editions for non-Linux operating systems such as macOS involves several additional moving parts that carry additional overhead.
Docker containers run on top of a Linux kernel, and so the Docker for Mac container runtime system runs a minimal Linux instance using the HyperKit framework. Containers running on top of the Linux system cannot directly access macOS filesystem or networking resources, and so Docker for Mac includes libraries that expose those resources in a way that the Docker engine can consume.
Access to filesystem resources is provided by a separate non-privileged macOS process (osxfs) that communicates with a daemon (“transfused”) running on the virtualized Linux. A Linux system call such as open or read that accesses bind-mounted files in a container must be

turned into a FUSE message in the Linux VFS
proxied over a virtio socket by transfused
forwarded onto a UNIX domain socket by HyperKit
deserialized, dispatched and executed as a macOS system call by osxfs

The entire process then takes place in reverse to return the result of the macOS system call to the container.
Each step in the process is fairly efficient, making the total round trip time around 100 microseconds. However, some software, written under the usually correct assumption that system calls are instantaneous, can perform tens of thousands of system calls for each user-facing operation. Even a comparatively low overhead can become irksome when scaled up by four orders of magnitude. Consequently, although syscall latency has been reduced several times since the initial release of Docker for Mac, and although a few opportunities for further reducing latency remain, optimizing latency alone will not completely address bind mount performance for all applications.
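To get a feel for how per-syscall latency compounds, here is a small measurement sketch. It is not from the original post and its numbers are illustrative; it simply times repeated stat() calls against a path. Running it once against a native directory and once against a bind-mounted path inside a container shows the gap that caching aims to close:

```python
import os
import tempfile
import time

def mean_stat_latency(path, iterations=10000):
    """Estimate per-call stat() latency by timing a tight loop."""
    start = time.perf_counter()
    for _ in range(iterations):
        os.stat(path)
    return (time.perf_counter() - start) / iterations

with tempfile.NamedTemporaryFile() as f:
    per_call = mean_stat_latency(f.name)
    # At ~100 microseconds per FUSE round trip, 50,000 calls add
    # five seconds to a single user-facing operation.
    print(f"mean stat latency: {per_call * 1e6:.1f} microseconds")
```

On a native filesystem the per-call figure is typically well under a microsecond, which is why the same software feels instantaneous on Linux.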
File sharing design constraints under Docker for Mac
The design described above arises from a number of constraints, which in turn arise from the high-level design goals of Docker for Mac: it should closely match the Linux execution environment, require minimal configuration, and involve as little privileged system access as possible.
Three constraints in particular underlie the design of Docker for Mac file sharing.
The first constraint is consistency: a running container should always have the same view of a bind-mounted directory as the host system. On Linux consistency comes for free, since bind-mounting directly exposes a directory to a container. On macOS maintaining consistency is not free: changes must be synchronously propagated between container and host.
The second constraint is event propagation: several common workflows rely on containers receiving inotify events when files change on the host, or on the host receiving events when the container makes changes. Again, event propagation is automatic and free on Linux, but Docker for Mac must perform additional work to ensure that events are propagated promptly and reliably.
The third constraint concerns the interface: bind mounting on Docker for Mac should support both the concise -v syntax and the more elaborate interfaces for bind mounting on Linux.
These constraints rule out a number of alternative solutions. Using rsync to copy files into a container provides fast access, but does not support consistency. Mounting directories into containers using NFS works well for some use cases, but does not support event propagation. Reverse-mounting container directories onto the host might provide good performance for some workloads, but would require a very different interface.
User-guided caching
The design constraints above describe useful defaults. In particular, a system that was not consistent by default would behave in ways that were unpredictable and surprising, especially for casual users, for users used to the Linux implementation, and for software invoking docker on the host.
However, not all applications need the guarantees which arise for free from the Linux implementation. In particular, although the Linux implementation guarantees that the container and host have consistent views at all times, temporary inconsistency between container and host is sometimes acceptable. Allowing temporary inconsistency makes it possible to cache filesystem state, avoiding unnecessary communication between the container and macOS, and increasing performance.
Different applications require different levels of consistency. Full consistency is sometimes essential, and remains the default. However, to support cases where temporary inconsistency is an acceptable price to pay for improved performance, Docker 17.04 CE Edge includes new flags for the -v option:

consistent: Full consistency. The container runtime and the host maintain an identical view of the mount at all times. This is the default, as described above.
cached: The host’s view of the mount is authoritative. There may be delays before updates made on the host are visible within a container.

For example, to enable cached mode for the bind-mounted directory above, you might write
docker run -v /Users/yallop/project:/project:cached alpine command
Caching is enabled on a per-mount basis, so you can mount each directory in a different mode:
docker run -v /Users/yallop/project:/project:cached
-v /host/another-path:/mount/another-point:consistent
alpine command
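If you use Docker Compose, the same per-mount suffixes can be written in the short volume syntax of a Compose file. The following fragment is a hypothetical sketch mirroring the command above (the service name and command are illustrative, and the suffixes require a Docker for Mac release that includes this feature):

```yaml
version: '2'
services:
  app:
    image: alpine
    command: command   # placeholder, as in the docker run example
    volumes:
      # Host's view is authoritative; the container may briefly lag behind.
      - /Users/yallop/project:/project:cached
      # Full consistency (the default) for the second mount.
      - /host/another-path:/mount/another-point:consistent
```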
The osxfs documentation has more details about the guarantees provided by consistent and cached. On Linux, where full consistency comes for free, cached behaves identically to consistent.
Feedback
We have seen significant improvements in the performance of several common applications when directories are mounted in the new cached mode.
For the moment, read-heavy workloads will benefit most from caching. Improvements in the performance of write-heavy workloads, including a popular dd-based benchmark, are under development.
Test cases involving real world applications are a big help in guiding Docker for Mac development. So, if you have field reports or other comments about file sharing performance, we’d love to hear from you.
You can get in touch via the issue tracker. The osxfs documentation outlines the details to provide when reporting a performance issue.


Learn More:

Get started with Docker for Mac
Check out the detailed osxfs documentation
Send us your file sharing use cases (see What you can do in the osxfs documentation)

The post User-guided caching in Docker for Mac appeared first on Docker Blog.

Docker Enterprise Edition Lights a New Spark of Innovation within MetLife

MetLife, the global provider of insurance, annuities, and employee benefit programs, will be celebrating its 150th birthday next year. Survival and success in their space depend on being agile and able to respond to changing market requirements. During the Day 2 General Session at DockerCon 2017, MetLife shared how they’re inspiring new innovation in their organization with Docker Enterprise Edition (EE).
Information Management is Core to MetLife
MetLife offers auto, home, dental, life, disability, vision, and health insurance to over 100 million customers across 50 countries. Their business relies on information about policyholders, risk assessments, financial and market data, and more. Aaron Ades, AVP of Solutions Engineering at MetLife, notes that they’ve been in the information management business for 150 years and have accumulated over 400 systems of record; some apps are over 30 years old.
The challenge for MetLife is that they still have a lot of legacy technology that they must work with. Aaron shared that there is still code running today that was first written in 1982, but they still need to deliver a modern experience on top of those legacy systems.
To hear more about how MetLife is staying ahead of their competition using Docker, watch Aaron’s presentation from the Day 2 general session.

Wrapping Legacy Apps with Docker EE
A key realization for MetLife was that wrapping containerized microservices around legacy apps would make them easier to adapt and improve. As Aaron put it, breaking up the business logic into smaller pieces “makes you more nimble…it makes your systems of record much easier to deal with.” For MetLife, that also means the application becomes more portable. Once containerized, MetLife has the flexibility to host the services in their own datacenter or in the cloud.
With Docker EE, MetLife found a secure container platform that could deliver both objectives around containerization and management across a hybrid cloud. Docker provided them a commercially backed end-to-end container management solution. It was easy for the DevOps team to install and manage and it allowed them to go from concept to production in only 5 months.
Results and Benefits
With Docker in place, MetLife has shipped a new modern UI that allows customers and agents to have a holistic view of their relationship with the company. It can be viewed on a phone, laptop or other mobile device, but it taps into data written in different decades in different languages, running on different systems, successfully integrating the old with the new.

Aaron and his team have also seen significant operational improvements including:

The ability to scale quickly by leveraging Microsoft Azure to handle the 25x increase in traffic during annual open enrollment periods
Increased resource utilization with up to 70% consolidation of their VMs
More automation through orchestration allowing them to easily scale up a service or deal with VM/hardware failures

Adopting Docker EE has, more importantly, sparked a new wave of innovation at MetLife. As Aaron closed out his presentation, he recognized the “Mod Squad”, the team that led the Docker project, credited with “changing the culture of the company” as “antibodies to the status quo”. He believes Docker has been transformative for his company and has ignited a spark across multiple businesses at MetLife. In the end, he believes this will allow them to be more agile and nimble, which is a core competitive advantage.
Watch Tim Tyler’s presentation below to learn more about how Metlife uses Docker Enterprise Edition.

Next Steps

Watch the entire Day 2 General Session from DockerCon 2017
View all the recorded sessions from DockerCon 2017
Learn more about Docker Enterprise Edition 


Introducing Docker’s new CEO

Docker has celebrated a number of important milestones lately. March 20th was the fourth anniversary of the launch of the Docker project at PyCon in 2013. April 10th was the fourth anniversary of the day that I joined Solomon and a team of 14 other believers to help build this remarkable company. And, on April 18th, we brought the community, customers, and partners together in Austin for the fourth US-based DockerCon.

March 20th, 2013

Docker Team in 2013
DockerCon was a great opportunity to reflect on the progress we’ve seen in the past four years. Docker the company has grown from 15 to over 330 talented individuals. The number of contributors to Docker has grown from 10 to over 3300. Docker is used by millions of developers and is running on millions of servers. There are now over 900k dockerized apps that have been downloaded over 13 billion times. Docker is being used to cure diseases, to keep planes in the air, to keep soldiers safe from landmines, to power the world’s largest financial networks and institutions, to process billions in transactions, to help create new companies, and to help revitalize existing companies. Docker has rapidly scaled revenues, building a sustainable and exciting subscription business in conjunction with tens of thousands of small and mid-sized businesses and over 400 G2000 customers like ADP, the Department of Defense, GE, Goldman Sachs, Merck, MetLife, and Visa. And, we’ve created enduring partnerships with the likes of Accenture, Alibaba, Avanade, AWS, Booz Allen, Cisco, Google, HPE, IBM, Microsoft, Oracle, and more. We’ve built the foundations for a lasting and sustainable business.
 

I am incredibly humbled and honored to have been a part of this journey. So, it is naturally with some mixed emotions that I share the news that Steve Singh will become Docker’s new CEO (he is currently chairman of the board),  and that I will be moving to a role on the board of directors. While there is always some uncertainty about changing roles, I am 100% certain that Steve is the right person for Docker.
 
He is an incredible individual and leader, with a broad base of experience that includes founding and growing one of the most successful enterprise companies, Concur. Over the course of two decades, Steve took Concur from startup to public company, successfully navigated the shift from software to SaaS, and built a world-class team, customer base, and business model, forging an organization of lasting value. More recently, he has demonstrated his considerable expertise as a member of the SAP Executive Board, serving as President of SAP Business Networks and running SAP’s largest cloud businesses. As I worked with Steve, first as Docker’s chairman and then as we worked together on the transition, I also saw his incredible qualities as a human being, and heard from the many people who have had the pleasure to work with Steve at Concur and SAP (the average tenure at Concur was over twelve years). Docker has the potential to become not only one of the most enduring technology companies, but also a transformational platform, technology, and movement; I can’t think of a better or more qualified individual to lead us to that future than Steve.
I’d like to end this note with a few personal thoughts.

My first “Silicon Valley” Job, c. 1982  
I started my “career” in Silicon Valley cutting apricots in an orchard a stone’s throw from what was to become Apple headquarters. Since then, I’ve had the privilege of working for six startups, three as CEO. Two went public (Avid and Verisign), two became part of larger companies (Gluster and Plaxo), and one was unceremoniously shut down by government officials (a “business school” in Uzbekistan). I think Docker has the potential to far exceed all of them.

Of the six startups I’ve worked at or led, Docker stands out (and certainly has the best logo!)
But, all of my startup experiences have been remarkable,  and have taught me that my heart lies in startups. I’ve also learned that great companies are bigger than any individual person, and that a person’s life is defined by much more than any one company. Finally, I’ve seen that different leaders with different skills are needed at different stages in a company’s history. Steve is the right leader for Docker now, and I am confident that his leadership will enable Docker to fulfill its incredible potential.
I’d like to end by thanking a number of people. I’d like to thank Solomon for his vision, passion, brilliance, and willingness to share this journey with me. I’d like to thank our board and investors, our many partners, and especially Docker’s customers for their faith in–and support of–a young company and team with outsized dreams. I’d like to thank the incredibly creative and talented people who are using Docker for everything from finding cures for cancer to searching for earth-like planets to developing new sources of clean energy. Your enthusiasm and passion have made the past four years not only fun, but deeply meaningful. I’d like to thank my incredible family, my wife and three amazing sons, for your support, love, patience, and encouragement through the long days, nights, and weekends.
Finally, I’d like to thank the incredible team at Docker. I’d like to thank the early employees (like Sam Alba, Eric Bardin, Victor Vieux, Yannis Peyret, Ken Cochrane, and Jerome Petazzoni, who met me on my first day) and people like Nick Stinemates and Michael Crosby, who were among the first people I helped to bring into the company, and who have all continued to contribute so much to Docker. I’d like to thank entrepreneurs like Aanand, Anil, Ben, Daniel, Diogo, Evan, Jeff Julien, Madhu, Nathan, and Patrick who were willing to join their dreams and companies with ours. I’d like to thank the incredible members of the Docker executive team who I haven’t already mentioned: the irrepressible Roger Egan, the multi-talented Marianna Tessel, the hard-driving Scott Johnston, the passionate customer advocate Iain Gray, the wildly creative and insightful David Messina, and the unflappable Mike Gupta. Thank you for not only joining us at Docker, but for attracting and building such amazing teams of your own. And, thank you all for persevering through the difficult times. Thank you to Faith Kinyua for keeping me sane and organized. I could probably go on for at least another 300 names. But, to all of you: you are one of the most passionate, skilled, diverse, and amazing groups I’ve ever known. You have done such inspiring and impossible things already, and I will be cheering from the sidelines as you take Docker to even greater heights.

Docker Team Today

DockerCon 2017 Online meetup Recap

Weren’t able to attend DockerCon 2017, or looking for a refresher? Check out the recording and slides from the DockerCon 2017 Online Meetup, in which Patrick Chanezon and Betty Junod recap all the announcements and highlights from DockerCon.

Watch the General Session Talks
The videos and slides from the general sessions on day 1 and day 2, as well as the top rated sessions, are already available. The rest of the DockerCon slides and videos will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.
Learn more about the Moby Project
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. Learn more here.

Learn More about LinuxKit
LinuxKit is a toolkit for building secure, portable and lean operating systems for containers. Read more about LinuxKit.

Learn More about the Modernize Traditional Applications Program:
The Modernize Traditional Applications (MTA) Program aims to help enterprises make their existing legacy apps more secure, more efficient and portable to hybrid cloud infrastructure. Read more about the Modernize Traditional Apps Program.



Visa Inc. Gains Speed and Operational Efficiency with Docker Enterprise Edition

DockerCon 2017 was an opportunity to hear from customers across multiple industries and segments about how they are leveraging technology to accelerate their business. In the keynote on Day 2 and also in a breakout session that afternoon, Visa shared how Docker Enterprise Edition is empowering them on their mission to make global economies safer by digitizing currency and making electronic payments available to everyone, everywhere.
 
Visa is the world’s largest retail electronic payment network, handling 130 billion transactions a year and processing $5.8 trillion annually. Swamy Kocherlakota, Global Head of Infrastructure and Operations, shared that Visa got here by expanding their global footprint, which has put pressure on his organization, whose headcount has remained mostly flat during that time. Since going into production with their Docker Containers-as-a-Service architecture 6 months ago, Mr. Kocherlakota has seen a 10x increase in scalability, ensuring that his organization will be able to support their overall mission and growth objectives well into the future.
Global Growth Fuels Need for A New Operating Model
In aligning his organization to the company mission, Swamy decided to focus on two primary metrics: Speed and Efficiency.

Speed is tied to developer onboarding and developer productivity. Visa wants new developers to be able to deploy code on their first day. That means giving them tools they are familiar with and getting out of their way. It also means providing developers access to infrastructure whenever and wherever they need it.

Efficiency is tied to Visa’s ability to maximize utilization of their existing datacenter footprint while also reducing the time the team spends on patching and refreshing hardware. Optimizing their efficiency also frees up both headcount and datacenter resources to support their global growth initiatives.

While considering how they could support these objectives, Visa also has to meet the high bar on security and availability that underpins everything they do. Some of the core systems at Visa have had zero downtime over a span of 20 years!
Modernizing with Docker Enterprise Edition
After investigating different technologies and vendors who could help them achieve both speed and efficiency objectives, Visa chose Docker Enterprise Edition (Docker EE) to help them move towards a microservices application model while also modernizing their data center operations.
Visa was looking for an enterprise-ready solution and appreciated the integrated approach of the Docker EE stack which includes scheduling, service registry, service discovery, container networking, and a centralized management control plane. Docker EE allows them to manage multiple development, QA, and staging environments, gain visibility across their container environment, and retain full control over role-based access.
Visa chose two key applications to begin their Docker journey – a core transaction processing application and a risk decision system. These were legacy monolithic applications which they began to containerize into services. Those two applications are now running in production on Docker EE across multiple regions and handling 100,000 transactions per day. They consist of 100 separate containers and have the ability to instantly scale to 800 when transactions peak.
To learn more about Visa’s application architecture, watch the breakout Docker Networking in Production at Visa below:

Results and Benefits

With Docker EE now in production, Visa is seeing improvements in a number of ways:

Provisioning time: Visa can now provision in seconds rather than days even while more application teams join the effort. They can also deliver just-in-time infrastructure across multiple datacenters around the world with a standardized format that works across their diverse set of applications.
Patching & maintenance: With Docker, Visa can simply redeploy an application with a new image. This also allows Visa to respond quickly to new threats as they can deploy patches across their entire environment at one time.
Tech Refresh: Once applications are containerized with Docker, developers do not have to worry about the underlying infrastructure; the infrastructure is invisible.
Multi-tenancy: Docker containers provide both space and time division multiplexing by allowing Visa to provision and deprovision microservices quickly as needed. This allows them to strategically place new services into the available infrastructure, which has allowed the team to support 10x the scale they could previously.

To hear more about how Visa was able to gain 10x scalability for their application with Docker, watch Swamy’s presentation from the Day 2 general session below:

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship and run business critical applications in production at scale. Docker EE is integrated, certified and supported to provide enterprises like Visa with the most secure container platform in the industry to modernize all applications.
Next Steps

Watch the entire Day 2 General Session from DockerCon 2017
View all the recorded sessions from DockerCon 2017
Learn more about Docker Enterprise Edition


The post Visa Inc. Gains Speed and Operational Efficiency with Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Visa Inc. Gains Speed and Operational Efficiency with Docker Enterprise Edition

2017 was an opportunity to hear from customers across multiple industries and segments on how they are leveraging technology to accelerate their business. In the keynote on Day 2 and also a breakout session that afternoon, Visa shared how Docker Enterprise Edition is empowering them on their mission is to make global economies safer by digitizing currency and making electronic payments available to everyone, everywhere.
 
Visa is the world’s largest retail electronic payment network that handles 130 billion transactions a year, processing $5.8 trillion annually. Swamy Kocherlakota, Global Head of Infrastructure and Operations, shared that Visa got here by expanding their global footprint which has put pressure on his organization which has remained mostly flat in headcount during that time. Since going into production with their Docker Containers-as-a-Service architecture 6 months ago, Mr. Kocherlakota has seen a 10x increase in scalability, ensuring that his organization will be able to support their overall mission and growth objectives well into the future.
Global Growth Fuels Need for A New Operating Model
In aligning his organization to the company mission, Swamy decided to focus on two primary metrics: Speed and Efficiency.

Speed is tied to developer on boarding and developer productivity. Visa wants new developers to be able to deploy code on their first day. That means giving them tools they are familiar with and getting out of their way. It also means providing developers access to infrastructure whenever and wherever they need it.

Efficiency is tied to Visa’s ability to maximize utilization of their existing datacenter footprint while also reducing the time the team spends on patching and refreshing hardware. Optimizing their efficiency also frees up both headcount and datacenter resources to support their global growth initiatives.

While considering how they could support these objectives, Visa also has to meet the high bar on security and availability that underpins everything they do. Some of the core systems at Visa have had zero downtime over a span of 20 years!
Modernizing with Docker Enterprise Edition
After investigating different technologies and vendors who could help them achieve both speed and efficiency objectives, Visa chose Docker Enterprise Edition (Docker EE) to help them move towards a microservices application model while also modernizing their data center operations.
Visa was looking for an enterprise-ready solution and appreciated the integrated approach of the Docker EE stack which includes scheduling, service registry, service discovery, container networking, and a centralized management control plane. Docker EE allows them to manage multiple development, QA, and staging environments, gain visibility across their container environment, and retain full control over role-based access.
Visa chose two key applications to begin their Docker journey – a core transaction processing application and a risk decision system. These were legacy monolithic applications which they began to containerize into services. Those two applications are now running in production on Docker EE across multiple regions and handling 100,000 transactions per day. They consist of 100 separate containers and have the ability to instantly scale to 800 when transactions peak.
To learn more about Visa’s application architecture, watch the breakout Docker Networking in Production at Visa below:

Results and Benefits

With Docker EE now in production, Visa is seeing improvements in a number of ways:

Provisioning time: Visa can now provision in seconds rather than days even while more application teams join the effort. They can also deliver just-in-time infrastructure across multiple datacenters around the world with a standardized format that works across their diverse set of applications.
Patching & maintenance: With Docker, Visa can simply redeploy an application with a new image. This also allows Visa to respond quickly to new threats as they can deploy patches across their entire environment at one time.
Tech Refresh: Once applications are containerized with Docker, developers do not have to worry about the underlying infrastructure; the infrastructure is invisible.
Multi-tenancy: Docker containers provide both space- and time-division multiplexing by allowing Visa to provision and deprovision microservices quickly as needed. This lets them strategically place new services into the available infrastructure, which has allowed the team to support 10x the scale they could previously.

To hear more about how Visa was able to gain 10x scalability for their application with Docker, watch Swamy’s presentation from the Day 2 general session below:

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship and run business critical applications in production at scale. Docker EE is integrated, certified and supported to provide enterprises like Visa with the most secure container platform in the industry to modernize all applications.
Next Steps

Watch the entire Day 2 General Session from DockerCon 2017
View all the recorded sessions from DockerCon 2017
Learn more about Docker Enterprise Edition


The post Visa Inc. Gains Speed and Operational Efficiency with Docker Enterprise Edition appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

DockerCon 2017: The Top Rated Sessions

After the general session videos from Day 1 and Day 2 yesterday, we’re happy to share with you the video recordings of the sessions rated highest by DockerCon attendees. All the slides will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.

Cilium: Network and Application Security with BPF and XDP by Thomas Graf

Docker?!? But I am a Sysadmin by Mike Coleman

Creating Effective Images by Abby Fuller

Taking Docker from Local to Production at Intuit by JanJaap Lahpor and Harish Jayakumar

Container Performance Analysis by Brendan Gregg

Secure Substrate: Least Privilege Container Deployment by Diogo Mónica and Riyaz Faizullabhoy

Escape from VMs with Image2Docker by Elton Stoneman and Jeff Nickoloff

What Have Namespaces Done for You Lately? by Liz Rice


The post DockerCon 2017: The Top Rated Sessions appeared first on Docker Blog.

DockerCon 2017 Day 2 Highlights

Following the general session highlights from Day 1, we’re happy to share with you the video recording from general session day 2. All the slides will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.

Here’s what we covered during the day 2 general session:

14:00 Docker Enterprise Edition at Visa
30:00 Securing the Software supply chain
65:00 Oracle applications now available on Docker Store
75:00 Modernize your Traditional Apps Program with Docker

Docker Enterprise Edition at Visa
Ben started off his DockerCon Day 2 keynote with key facts and figures around Docker commercial adoption. To illustrate his points, Ben invited on stage Swamy Kocherlakota, Global Head of Infrastructure and Operations at Visa, to talk about their journey adopting Docker Enterprise Edition to run their critical applications at scale in a very diverse environment.
Securing the Software supply chain
During the day 2 keynote, Lily and Vivek reprise their 2016 roles of dedicated burners, finally returning from Burning Man to get back to their jobs of enterprise dev and ops. Ben returns as the clueless business guy and decides to add value by hiring a contractor, who also went to Burning Man and pushed code from there. Company policy says that developers push code to dev repos; if images pass certain criteria, they are promoted to prod repos, from which they can be deployed. The code written from Burning Man was laden with vulnerabilities and failed the promotion step. Luckily, Lily was able to clean up the image and pass it through promotions. Vivek then deployed the full stack, which consisted of a Linux frontend and an MS SQL database on Windows. Using Docker, he was able to deploy this stack on a hybrid Linux/Windows cluster with just one click, bringing up the Enterprise Art Store and showing off some great enterprise art.
Oracle applications now available on Docker Store
Ben later asks Lily and Vivek to deploy the software from a 90s ecommerce company he acquired within an hour, and to do it in the container thingies. The app consists of a LAMP stack VM and an Oracle DB. Vivek is skeptical that this can be containerized within an hour, but Lily is convinced Docker’s tools can handle it, so they make a bet of $20. Lily uses Image2Docker to convert the LAMP stack VM into a container without any code change. And as it turns out, Oracle has recently collaborated with Docker to containerize many of their apps, including Oracle DB! The database is now an official product on the Docker Store, and can be found here. Using the converted VM and Oracle DB from the store, Vivek was able to deploy the 90s app within the 1 hour time limit. Read more about Oracle database and developer tools now available on Docker Store.
Modernize your Traditional Apps Program with Docker
Finally, Ben announced the Modernize Traditional Applications (MTA) Program to help enterprises make their existing legacy apps more secure, more efficient and portable to hybrid cloud infrastructure. Collaboratively developed and brought to market with partners Avanade, Cisco, HPE, and Microsoft, the MTA Program consists of consulting services, Docker Enterprise Edition, and hybrid cloud infrastructure from partners to modernize existing .NET Windows or Java Linux applications in five days or less. Designed for IT operations teams, the MTA Program modernizes existing legacy applications without modifying source code or re-architecting the application. Read more about the Modernize Traditional Apps Program.


Learn More about the general sessions announcements:

Learn more about the Modernize Traditional Apps Program
Sign up for the Modernize Traditional Apps Kit
Sign up for the DockerCon 2017 Recap Online Meetup
Register for DockerCon Europe 2017

The post DockerCon 2017 Day 2 Highlights appeared first on Docker Blog.

DockerCon 2017 Day 1 Highlights

What an incredible DockerCon 2017 we had last week. A big thank you to all of the 150+ confirmed speakers, 100+ sponsors and over 5,500 attendees for contributing to the success of these amazing 3 days in Austin. You’ll find below the videos and slides from general session day 1. All the slides will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.

Here’s what we covered during the day 1 general session:

17:00 Developer Workflow improvements and demo
37:00 Secure Orchestration and demo
59:00 Introducing LinuxKit: a toolkit for building secure, lean and portable Linux subsystems
1:15 Introducing the Moby Project: a new open source project to advance the software containerization movement

Development Workflow Improvements
Solomon’s keynote started by introducing new Docker features to improve the development workflows of Docker users: multi-stage builds and desktop-to-cloud integration. With multi-stage builds you can now easily separate your build-time and runtime container images, allowing development teams to ship minimal and efficient images. It’s time to say goodbye to those custom and non-portable build scripts! With desktop-to-cloud you can easily connect to a remote swarm cluster using your Docker ID for authentication, without having to worry about maintaining a complex public key infrastructure, nor requiring developers to get ssh access to the hosts themselves. Desktop-to-cloud is the fastest way for development teams to collaborate on shared pre-production environments.
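As a sketch of the multi-stage build pattern described above (the image names, paths and module name here are illustrative, and the feature requires Docker 17.05 or later): a first stage compiles a Go binary using the full toolchain image, and the final stage copies only that binary into a minimal base image, so the shipped image contains no compiler or sources.

```dockerfile
# Build stage: full Go toolchain, used only at build time
FROM golang:1.8 AS build
WORKDIR /go/src/example.com/hello
COPY . .
RUN CGO_ENABLED=0 go build -o /hello .

# Final stage: ship only the compiled binary
FROM alpine:3.5
COPY --from=build /hello /usr/local/bin/hello
CMD ["hello"]
```

A single `docker build .` runs both stages; everything from the first stage except the files explicitly copied with `COPY --from=build` is discarded from the final image.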
Secure orchestration
In his presentation, Diogo Mónica talks about SwarmKit and how to take the security of orchestration to the next level with secure node introduction, cryptographic node identity, mutual TLS between all nodes, cluster segmentation, encrypted networks and secure secret distribution. Watch the video to see a demo of this secure orchestration layer in action within an enterprise.
LinuxKit
Solomon then introduced a new component bringing Linux container functionality to new and varied platforms, from IoT to mainframes. This component called LinuxKit includes the tooling to allow building custom Linux subsystems that only include exactly the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. All components can be substituted with ones that match specific needs. It is a kit, very much in the Docker philosophy of batteries included but swappable. Read more about LinuxKit.
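A LinuxKit build is driven by a small YAML file that lists the kernel, the init binaries, and the service containers to include, which is what makes each piece swappable. The sketch below shows the general shape of such a file; the image tags are illustrative (real files pin exact versions or digests, which change between releases):

```yaml
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  # binaries unpacked into the root filesystem
  - linuxkit/init:latest
  - linuxkit/runc:latest
onboot:
  # one-shot containers, run in order at boot
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
services:
  # long-running system services, each in its own container
  - name: getty
    image: linuxkit/getty:latest
```

Swapping a component means editing one entry in this file and rebuilding; anything not listed simply isn't in the resulting image.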

Moby Project
Finally, Solomon announced the Moby Project, a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas. Read more about the Moby Project. 
Docker users, please refer to Moby and Docker to clarify the relationship between the projects. Docker maintainers and contributors, please check out Transitioning to Moby for more details.


Learn More about the general sessions announcements:

Read more about LinuxKit.
Read more about the Moby Project
Sign up for the DockerCon 2017 Recap Online Meetup
Register for DockerCon Europe 2017

The post DockerCon 2017 Day 1 Highlights appeared first on Docker Blog.

Join Us for the Docker Federal Summit on May 2nd 2017

Docker is excited to announce that we are returning to the Newseum in Washington, DC on May 2nd to host the second annual Docker Federal Summit, a one-day event packed with breakout sessions, discussions, hands-on labs and technology deep dives from Docker and our ecosystem partners.
 
This event is designed for federal agency developers and IT ops personnel looking to learn more about how to approach Docker containers, cloud and devops to accelerate agency IT initiatives to support critical civilian and defense missions.

Government agency registration: Complimentary
Industry and ecosystem partner registration: $100

Register to save your seat.
Featured Speakers and Sessions
Technology leaders from agencies like GSA, USCIS and JIDO will share their experiences in deploying containers to production and provide pragmatic guidance on how to approach this change, from technology to process and culture.
 
View the full agenda here.
Featured Breakout Sessions: A wide variety of breakout sessions feature technical deep dives, demonstrations and discussions around compliance and security.

Accelerating app development in the cloud
The latest initiatives around compliance for IT projects
How to scale and secure applications requiring FIPS
Docker technology deep dive from orchestration, security, management and more
Demos and deep dives from Sysdig, ETA, Cloudera and IBM

Learning Lab: Get hands on experience with Docker technology in the Learning Lab. Bring your own laptop with SSH and RDP installed to access the tutorials. Featured sessions include Docker Orchestration, Deploying and managing apps with Docker Enterprise Edition and Modernizing .NET Apps.
Technology Mall: The expo floor features event sponsors including technology and systems integrator partners that extend the Docker platform and accelerate the time to market to establish a Containers as a Service framework in your agency. A special thank you to our sponsors including: Booz Allen Hamilton, HPE, Microsoft, Cloudera, ETA, IBM, Sysdig, Aqua Security, Boxboat, IntegrityOne Partners, Twistlock and Immix Group. 
See you in Washington DC for an exciting day of sharing, learning and connecting next week.
Register and See You Next Week:

Learn More about the Federal Summit agenda and register.
Learn more about Docker in Government
Try Docker Enterprise Edition for free

 
 
The post Join Us for the Docker Federal Summit on May 2nd 2017 appeared first on Docker Blog.