Kargo Ansible Playbooks foster Collaborative Kubernetes Ops

Today’s guest post is by Rob Hirschfeld, co-founder of the open infrastructure automation project Digital Rebar and co-chair of SIG Cluster Ops.

Why Kargo?

Making Kubernetes operationally strong is a widely held priority, and I track many deployment efforts around the project. The incubated Kargo project is of particular interest to me because it uses the popular Ansible toolset to build robust, upgradable clusters on both cloud and physical targets. I believe using tools familiar to operators grows our community.

We’re excited to see the breadth of platforms enabled by Kargo and how well it handles a wide range of options, like integrating Ceph for StatefulSet persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the OpenStack Helm charts (demo video).

By working with the upstream source instead of creating different install scripts, we get the benefits of a larger community. This requires some extra development effort; however, we believe helping share operational practices makes the whole community stronger. That was also the motivation behind SIG Cluster Ops.

With Kargo delivering robust installs, we can focus on broader operational concerns. For example, we can now drive parallel deployments, so it’s possible to fully exercise the options enabled by Kargo simultaneously for development and testing. That’s helpful for build-test-destroy coordination of Kubernetes installs on CentOS, Red Hat and Ubuntu as part of an automation pipeline. We can also set up a full classroom environment from a single command using Digital Rebar’s providers, tenants and cluster definition JSON.

Let’s explore the classroom example. First, we define a student cluster in JSON like the snippet below:

{
  "attribs": {
    "k8s-version": "v1.6.0",
    "k8s-kube_network_plugin": "calico",
    "k8s-docker_version": "1.12"
  },
  "name": "cluster01",
  "tenant": "cluster01",
  "public_keys": {
    "cluster01": "ssh-rsa AAAAB….. user@example.com"
  },
  "provider": {
    "name": "google-provider"
  },
  "nodes": [
    { "roles": [ "etcd", "k8s-addons", "k8s-master" ], "count": 1 },
    { "roles": [ "k8s-worker" ], "count": 3 }
  ]
}

Then we run the Digital Rebar workloads multideploy.sh reference script, which inspects the deployment files to pull out key information. Basically, it automates the following steps:

rebar provider create {"name": "google-provider", [secret stuff]}
rebar tenants create {"name": "cluster01"}
rebar deployments create [contents from cluster01 file]

The deployments create command will automatically request nodes from the provider. Since we’re using tenants and SSH key additions, each student only gets access to their own cluster. When we’re done, adding the --destroy flag will reverse the process for the nodes and deployments but leave the providers and tenants.

We are invested in operational scripts like this example using Kargo and Digital Rebar because if we cannot manage variation in a consistent way, then we’re doomed to operational fragmentation. I am excited to see, and be part of, the community’s progress towards enterprise-ready Kubernetes operations on both cloud and on-premises infrastructure: reasonable patterns are emerging with sharable, reusable automation. I strongly recommend watching (or better, collaborating in) these efforts if you are deploying Kubernetes, even at experimental scale. Being part of the community requires more upfront effort but returns dividends as you get the benefits of shared experience and improvement.

When deploying at scale, how do you set up a system to be both repeatable and multi-platform without compromising scale or security? With Kargo and Digital Rebar as a repeatable base, extensions get much faster and easier. Even better, using upstream directly allows improvements to be quickly cycled back into upstream.
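To make the classroom automation concrete, here is a minimal illustrative sketch in Python of the data plumbing that multideploy.sh performs. The helper below is hypothetical (it is not part of Digital Rebar), and the provider credentials are deliberately elided:

```python
import json

# A trimmed version of the student cluster definition above.
cluster = json.loads("""
{
  "name": "cluster01",
  "tenant": "cluster01",
  "provider": {"name": "google-provider"},
  "nodes": [
    {"roles": ["etcd", "k8s-addons", "k8s-master"], "count": 1},
    {"roles": ["k8s-worker"], "count": 3}
  ]
}
""")

def rebar_commands(deployment):
    """Derive the rebar CLI calls that the script issues for one student cluster."""
    provider = deployment["provider"]["name"]
    # Provider credentials ("secret stuff") are deliberately elided here.
    yield 'rebar provider create {"name": "%s", ...}' % provider
    yield 'rebar tenants create {"name": "%s"}' % deployment["tenant"]
    yield "rebar deployments create %s" % json.dumps(deployment)

for cmd in rebar_commands(cluster):
    print(cmd)
```

Because the deployments create call carries the node roles and counts, a single JSON file per student is enough to stand up one isolated cluster.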
Using upstream directly also means we’re closer to building a community focused on the operational side of Kubernetes with an SRE mindset. If this is interesting, please engage with us in the Cluster Ops SIG, Kargo or Digital Rebar communities.

— Rob Hirschfeld, co-founder of RackN and co-chair of the Cluster Ops SIG

Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Source: kubernetes

Dancing at the Lip of a Volcano: The Kubernetes Security Process – Explained

Editor’s note: Today’s post is by Jess Frazelle of Google and Brandon Philips of CoreOS about the Kubernetes security disclosure and response policy.

Software running on servers underpins ever growing amounts of the world’s commerce, communications, and physical infrastructure. Nearly all of these systems are connected to the internet, which means vital security updates must be applied rapidly. As software developers and IT professionals, we often find ourselves dancing on the edge of a volcano: we may either fall into magma-induced oblivion from a security vulnerability exploited before we can fix it, or we may slide off the side of the mountain because of an inadequate process to address security vulnerabilities.

The Kubernetes community believes that we can help teams restore their footing on this volcano with a foundation built on Kubernetes. And the bedrock of this foundation requires a process for quickly acknowledging, patching, and releasing security updates to an ever growing community of Kubernetes users.

With over 1,200 contributors and over a million lines of code, each release of Kubernetes is a massive undertaking staffed by brave volunteer release managers. These normal releases are fully transparent and the process happens in public. However, security releases must be handled differently to keep potential attackers in the dark until a fix is made available to users.

We drew inspiration from other open source projects in order to create the Kubernetes security release process. Unlike a regularly scheduled release, a security release must be delivered on an accelerated schedule, and we created the Product Security Team to handle this process. This team quickly selects a lead to coordinate work and manage communication with the persons that disclosed the vulnerability and the Kubernetes community.
The security release process also documents ways to measure vulnerability severity using the Common Vulnerability Scoring System (CVSS) Version 3.0 Calculator. This calculation helps inform decisions on release cadence in the face of holidays or limited developer bandwidth. By making severity criteria transparent we are able to better set expectations and hit critical timelines during an incident, where we strive to:

Respond to the person or team who reported the vulnerability, and staff a development team responsible for a fix, within 24 hours
Disclose a forthcoming fix to users within 7 days of disclosure
Provide advance notice to vendors within 14 days of disclosure
Release a fix within 21 days of disclosure

As we continue to harden Kubernetes, the security release process will help ensure that Kubernetes remains a secure platform for internet scale computing. If you are interested in learning more about the security release process, please watch the presentation from KubeCon Europe 2017 on YouTube and follow along with the slides. If you are interested in learning more about authentication and authorization in Kubernetes, along with the Kubernetes cluster security model, consider joining Kubernetes SIG Auth.

We also hope to see you at security related presentations and panels at the next Kubernetes community event: CoreOS Fest 2017 in San Francisco on May 31 and June 1. As a thank you to the Kubernetes community, a special 25 percent discount to CoreOS Fest is available using the code k8s25code, or via this special 25 percent off link, to register today for CoreOS Fest 2017.

– Brandon Philips of CoreOS and Jess Frazelle of Google

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Get involved with the Kubernetes project on GitHub
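The disclosure timeline above is simple date arithmetic. As an illustrative sketch (the function name and the example date are ours, not part of the process documentation), the deadlines the team works toward can be computed like this:

```python
from datetime import date, timedelta

def release_deadlines(disclosed_on):
    """Target dates in the Kubernetes security release process, counted
    from the day a vulnerability is privately disclosed."""
    return {
        "respond_and_staff_fix_team": disclosed_on + timedelta(days=1),  # within 24 hours
        "disclose_forthcoming_fix": disclosed_on + timedelta(days=7),
        "notify_vendors": disclosed_on + timedelta(days=14),
        "release_fix": disclosed_on + timedelta(days=21),
    }

# Hypothetical example: a report received on May 1, 2017.
for step, due in release_deadlines(date(2017, 5, 1)).items():
    print(step, due.isoformat())
```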

Developing a Spring Boot app on Docker: The AtSea Demo App

This is the first of a series of blog posts that demonstrates using Docker to develop a typical web application and deploying it in production. For DockerCon 2017, we wanted to build a new demo application that would demonstrate the flexibility of using Docker in development as well as showcase the features of Docker in a production environment. The result was the AtSea Shop, a storefront application that can be deployed on different operating systems and can be customized to both your enterprise development and operational environment.

A Hybrid Architecture
The team decided on a few ground rules. First, we wanted to use modern components commonly used in enterprise applications. We decided to build a Java application using the Spring Boot framework; the web client is a JavaScript application written using React. Second, the application should be able to use any relational database and should be deployable in a Linux or Windows environment or cluster. Finally, the team wanted to show the process from development to deployment, including building the application, implementing security, and deploying the application.

The application combines a typical Java n-tier architecture that uses Spring Boot’s web MVC framework for the REST API and Spring Data to manage database operations. We chose PostgreSQL for the database, but the application can use any database defined by Spring Data. The storefront client was developed separately using React, and added to the AtSea jar file. Finally, we used a bash script to simulate a payment gateway that uses secrets to authorize transactions.
Although the application is deployed in an n-tier configuration, with the JavaScript client included in the application jar, each of these components could be deployed separately in a microservice architecture.
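One design note worth showing: Docker surfaces each secret granted to a service as a read-only file under /run/secrets/ inside the container. A hedged sketch of how the simulated payment gateway might read its token follows; the secret name payment_token and the helper functions are assumptions for illustration, not the AtSea code:

```python
from pathlib import Path

# Docker mounts each secret granted to a service as a file under
# /run/secrets/<secret-name> inside the container.
DEFAULT_SECRETS_DIR = Path("/run/secrets")

def read_secret(name, secrets_dir=DEFAULT_SECRETS_DIR):
    """Return the secret's value, or None if it was not granted to this service."""
    path = secrets_dir / name
    if not path.is_file():
        return None
    return path.read_text().strip()

def authorize(token, secrets_dir=DEFAULT_SECRETS_DIR):
    """Stand-in payment-gateway check: accept only requests bearing the shared token."""
    expected = read_secret("payment_token", secrets_dir)
    return expected is not None and token == expected
```

In a swarm, the secret would be created once (for example with `docker secret create payment_token -`) and granted to the gateway service at deploy time, so the token never appears in the image or environment.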
Developing and Deploying with Docker
Developing an application with a distributed team can be challenging. Docker provides significant advantages when developing an application by:

enabling migration to microservices
establishing a consistent deployment environment
letting developers use familiar tools and IDEs
allowing for rapid implementation and testing of ideas
simplifying the process of deploying to production
making it easy to develop polyglot applications with multiple programming languages
building in security tools
enabling quick deployment of your application

You can see the code for the AtSea app in our new Docker Samples organization on GitHub, where we share our sample applications.
In the following articles, we’ll go into depth on the following topics:

developing with Eclipse and Docker
using multistage builds to create containers
implementing container security using secrets
deploying the application to a cluster
running the application in Windows containers

While you’re waiting, check out these developer resources and videos from DockerCon 2017.

AtSea Shop demo
Docker Labs
Developer Tools
Java development using Docker
DockerCon videos
Docker for Java Developers
The Rise of Cloud Development with Docker & Eclipse Che
All the New Goodness of Docker Compose
Docker for Devs


The post Developing a Spring Boot app on Docker: The AtSea Demo App appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker at Microsoft Build 2017

Build is Microsoft’s premier developer event, run annually. This year Docker, Inc. and containers were everywhere, starting with a dedicated container pre-day, then with constant traffic to the Docker booth, and many shared container success stories.

Container Fest Pre-Day
Build is usually a three-day event, but this year saw the very first pre-day – run jointly by Docker and Microsoft. “Container Fest” was a whole-day event focused on containers and Docker, running on Windows and Linux, on-premises and in Azure.
There were 12 sessions throughout the day, presented by engineers and architects from Microsoft and Docker, Inc. They covered everything from the internals of Docker on Windows Server, through modernizing .NET Framework apps with Docker, to the options for running Docker containers on Azure.
A popular first step for modernizing traditional Windows applications is to use Image2Docker, which we demonstrated at the event. Image2Docker can extract existing applications from Windows machines into Dockerfiles, so you can automate the conversion of your app landscape to Docker. You can see Image2Docker in action from our session at DockerCon:

Over 300 people were at the Container Fest pre-day, and when the sessions had finished, they stayed on to run through the Hands-On Labs from DockerCon. Just like at DockerCon, we provisioned virtual machines in Azure for attendees to use for working through the labs, and the experts were available for help and advice.
The DockerCon 2017 labs cover a range of topics, including orchestration and networking, Docker Enterprise Edition and Docker Cloud, and running Docker containers on Windows. The labs are open source on GitHub now, as part of the main Docker labs repo. If you’re looking to get started with Docker on Windows, these labs give you a great roadmap:

Windows 101 – learn the basics of Docker and Windows containers
Modernize .NET Apps, for Ops – see how to package an ASP.NET app as a Docker image
Modernize .NET Apps, for Devs – modernize an ASP.NET app by breaking features out into Docker containers
SQL Server – learn how to run SQL Server in Docker containers and package up a custom database schema into a Docker image

Partner Hub
Hundreds of attendees dropped into the Docker booth in the MS Build Conference Hub expo area to ask for help and advice, tell us about their Docker journey, or just to say Hi. The level of Docker experience was everything from complete beginners to folks running production workloads on Docker Enterprise Edition.
We had some videos running on loop, which people found very useful – and these are on YouTube, so you can check them out yourself. To start, there’s the Docker on Windows 101, which introduces you to how containers work on Windows:

And for the journey into production, we have a tour around Docker Datacenter, the Containers-as-a-Service (CaaS) platform available with Docker Enterprise Edition Standard and Advanced.

The crack team from Docker were kept busy through the whole event, had a great time, and are thoroughly looking forward to next year.
Learn More:

Try out the DockerCon 2017 Hands-On Labs for yourself
Get the Modernize Traditional Apps kit to plan your MTA program with Docker
Scott Guthrie from Microsoft is on a European tour – Docker will be joining in Amsterdam, London and Dublin
Try out Image2Docker for Windows and Image2Docker for Linux
Learn more about Docker and Microsoft together


The post Docker at Microsoft Build 2017 appeared first on Docker Blog.

The Latest Docker Certified Container and Plugins for March and April 2017

The Docker Certification Program provides a way for technology partners to validate and certify their software or plugin as a container for use on the Docker Enterprise Edition platform. Since the initial launch of the program in March, more Containers and Plugins have been certified and are available for download.
 
Certified Containers and Plugins are technologies that are built as Docker containers following best practices, tested and validated against the Docker Enterprise Edition platform and APIs, checked against security requirements, reviewed by Docker partner engineering, and cooperatively supported by both Docker and the partner. Docker Enterprise Edition and Certified Technology provide assurance and support to businesses for their critical application infrastructure.
Check out the latest Docker Certified technologies on the Docker Store:

Dynatrace provides monitoring of Docker applications and Docker clusters out of the box.
{code} by Dell EMC certified a number of REX-ray volume plugins for the following: REX-Ray for AWS EFS, REX-Ray for AWS EBS, REX-Ray for S3FS, REX-Ray for Isilon, REX-Ray for GCE and REX-Ray for ScaleIO.
HPE OpsBridge Agent provides monitoring of Docker applications with HPE Operations Bridge.
CoScale Agent provides a lightweight solution for monitoring the performance of your Docker containers and microservices in production.
NexentaEdge Docker NFS Volume Plug-In for the Nexenta Scale-Out High Performance Multi-Service Solution with Cluster-Wide Deduplication and Compression.
Oracle: As announced at DockerCon, many Oracle products are now available on Docker Store including Oracle Coherence, Oracle WebLogic Server, Oracle Java 8 SE (Server JRE), and Oracle Instant Client.
VMware vSphere Volume Service for Docker enables you to run stateful container applications on VMware vSphere.
Weaveworks Network Plugin provides simple, resilient multi-host Docker networking.

Visit the Docker Store regularly to browse and download the latest Certified Containers and Plugins. Interested in publishing? Sign up here to start posting to the Docker Store.
 


Continue your Docker journey with these helpful links:

Try Docker Enterprise Edition for free
Browse the Docker Store for Certified Containers and Certified Plugins
Sign up to become a Docker Store Publisher

The post The Latest Docker Certified Container and Plugins for March and April 2017 appeared first on Docker Blog.

DockerCon Hands-on Labs now online

One of the more popular activities at DockerCon is our Hands-on Labs, where you can learn to use the Docker tools you see announced on stage or talked about in the breakout sessions. This year we had eight labs for people to work through, ranging from 20 minutes to an hour in length.

We’ve now moved these labs into the Docker Labs repo so that everyone can use them. The Docker Labs repo is where we put a bunch of learning content for people who want to learn Docker, from beginner tutorials to advanced security and networking labs.
Here are the new labs:
Continuous Integration With Docker Cloud
In this lab, you will learn how to configure a continuous integration (CI) pipeline for a web application using Docker Cloud’s automated build features.
Docker Swarm Orchestration Beginner and Advanced
In this lab, you will play around with the container orchestration features of Docker. You will deploy a simple application to a single host and learn how that works. Then, you will configure Docker Swarm Mode, and learn to deploy the same simple application across multiple hosts. You will then see how to scale the application and move the workload across different hosts easily.
Securing Apps with Docker EE Advanced / Docker Trusted Registry
In this lab, you will integrate Docker EE Advanced into your development pipeline. You will build your application from a Dockerfile and push your image to the Docker Trusted Registry (DTR). DTR will scan your image for vulnerabilities so they can be fixed before your application is deployed.
Docker Networking
In this lab you will learn about key Docker networking concepts. You will get your hands dirty by going through examples of a few basic networking concepts, learn about bridge and overlay networking, and finally learn about the Swarm routing mesh.
Windows Docker Containers 101
Docker runs natively on Windows 10 and Windows Server 2016. In this lab you’ll learn how to package Windows applications as Docker images and run them as Docker containers. You’ll learn how to create a cluster of Docker servers in swarm mode, and deploy an application as a highly-available service.
Modernize .NET Apps – for Devs
You can run full .NET Framework apps in Docker using the Windows Server Core base image from Microsoft. That image is a headless version of Windows Server 2016, so it has no UI but it has all the other roles and features available. Building on top of that there are also Microsoft images for IIS and ASP.NET, which are already configured to run ASP.NET and ASP.NET 3.5 apps in IIS.
This lab steps through porting an ASP.NET WebForms app to run in a Docker container on Windows Server 2016. With the app running in Docker, you can easily modernize it – and in the lab you’ll add new features quickly and safely by making use of the Docker platform.
Modernize .NET Apps – for Ops
You’ll already have a process for deploying ASP.NET apps, but it probably involves a lot of manual steps. Work like copying application content between servers, running interactive setup programs, modifying configuration items and manual smoke tests all add time and risk to deployments.
In Docker, the process of packaging applications is completely automated, and the platform supports automatic update and rollback for application deployments. You can build Docker images from your existing application artifacts, and run ASP.NET apps in containers without going back to source code.
This lab is aimed at ops and system admins. It steps through packaging an ASP.NET WebForms app to run in a Docker container on Windows 10 or Windows Server 2016. It starts with an MSI and ends by showing you how to run and update the application as a highly-available service on Docker swarm.
So check out these labs, or head on over to the Docker Labs repo and check out the other great content we have there. And if that doesn’t satisfy your desire for hands-on learning, come to DockerCon Europe in October, where we’ll have yet more labs for you to try out the very latest in Docker tech.


More Resources

Check out the Docker Labs repo for this and many more tutorials
Register for an upcoming Docker Webinar
Attend an upcoming Docker event near you

The post DockerCon Hands-on Labs now online appeared first on Docker Blog.

Mentorship in the Docker Community: How you can get involved

Mentorship is an important part of the Docker community. During past global event series, like Docker Birthday #3 and last year’s Mentor Week, advanced users attended their local events and helped attendees work through training materials. As interest in mentorship continues to grow, we’re excited to expand our programs and provide more opportunities for the community to get involved.

New this year at DockerCon, we organized a Mentor Summit for attendees to learn the ins and outs of being an awesome mentor both in industry and in the Docker Community. Check out the talks below and learn how you can get involved.
Anna Ossowski – How to Mentor and Be a Great One

View Anna’s slides here.
Sebastiaan van Stijn – How To Contribute to Open Source

Jérôme Petazzoni – A DockerCon 2017 Recap: give a talk in your local community

Are you an advanced Docker user? Join the Docker Mentor Group!
With over 280 Docker Meetup groups worldwide, the Docker online Community Group + Slack, and other programs, there is always an opportunity for collaboration and knowledge sharing. Mentors should have experience working with Docker Engine, Docker Networking, Docker Hub, Docker Machine, Docker Orchestration and Docker Compose.
Sign up as a mentor!
Learn about Mentorship in the Docker Community:

Join the online Community Mentor Group
Watch the DockerCon Recap Online Meetup recording
Review Docker Meetup Content (DMC) DockerCon 2017 Highlights!


The post Mentorship in the Docker Community: How you can get involved appeared first on Docker Blog.

DockerCon Europe Registration and Call for Proposals are OPEN

DockerCon 2017 in Austin was amazing! We are still on a high from the energy and excitement created when 5,500 members of the Docker community are in one place. Containers are everywhere, and the learning, inspiration and networking that those four days bring is unrivaled. We welcomed amazing speakers, made tons of meaningful connections and are already geared up to do it again for DockerCon Europe: October 16 – 19 in Copenhagen! Early Bird registration is now open – hurry and get your tickets before they sell out.
Register for DockerCon Europe!
 

In addition, today we opened the DockerCon Copenhagen Call for Papers. We hope that you were inspired by the Moby Project and LinuxKit announcements and are looking forward to your submissions on the following:
Using Docker
Has Docker technology made you better at what you do? Is Docker an integral part of your company’s tech stack? Do you use Docker to do big things?
By giving concrete, first-hand examples, tell us about your Docker usage, share your challenges and what you learned along the way, and inspire us on how to use Docker to accomplish real tasks. When attendees leave your session, they should understand how to apply your takeaways to their use case.
Deep Dives
Share your code and demo heavy deep-dive sessions on what you have been able to transform with your use of the Docker stack. Entice your audience by going deeply technical and teach them how to do something they haven’t done.
Moby and Friends
Do you have a cool Moby use case? Are you using LinuxKit or Docker plumbing in your projects? Share the ways you are using Docker’s open source components to build technology that solves real problems.
Cool Hacks
Show us your cool hacks and wow us with the interesting ways you are using Moby or Docker plumbing in your projects, or share how you are pushing the boundaries of Docker. DockerCon Austin 2017’s cool hacks included Play with Docker and FaaS.
The deadline for submissions is June 13th at 11:59 p.m. PST.
Submit a talk
So, what happens next?
After a proposal is submitted, it will be reviewed initially for content and format. Once past the initial review, a committee of reviewers from Docker and the industry will read the proposals and select the best ones. There are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.
We’re looking forward to reading your proposals!
Learn more about DockerCon: 

Check out the DockerCon 2017 website to learn about speakers and sponsors
Watch the DockerCon 2017 videos
Sign up to receive DockerCon News


The post DockerCon Europe Registration and Call for Proposals are OPEN appeared first on Docker Blog.

Docker Enterprise Edition Brings New Life Back to Legacy Apps at Northern Trust

Many organizations understand the value of building modern 12-factor applications with microservices. However, 90+% of applications running today are still traditional, monolithic apps. That is also the case for Northern Trust – a 128-year-old financial services company headquartered in Chicago, Illinois. At DockerCon 2017, Rob Tanner, Division Manager for Enterprise Middleware at Northern Trust, shared how they are using Docker Enterprise Edition (EE) to modernize their traditional applications to make them faster, safer, and more performant.
Bringing Agility and Security to Traditional Apps
Founded in 1889, Northern Trust is a global leader in asset servicing, asset management, and banking for personal and institutional clients. Their clients expect best-of-breed services and experiences from Northern Trust and Rob’s team plays a large role in delivering that. While their development teams are focused on microservices apps for greenfield projects, Rob is responsible for over 400 existing WebLogic, Tomcat, and .NET applications. Docker EE became the obvious choice to modernize these traditional apps and manage their incredibly diverse environment with a single solution.
Containerizing traditional applications with Docker EE gives Northern Trust a better way to manage them and some immediate benefits:

Improved security: As a financial institution, Northern Trust treats security as a top priority. Containerizing traditional applications helps improve their underlying security posture in a few ways:

Security scanning – Northern Trust is leveraging image scanning to discover vulnerabilities within their existing apps. Some vulnerabilities had previously gone undetected, but with the binary-level scan, the team is automatically alerted to new issues and can address and resolve them immediately.
Smaller attack surface – With Docker, Northern Trust can reduce the attack surface of their application by only allowing the required access, syscalls and processes needed to run the application.
Faster updates – With the ability to rapidly deploy new containers, Northern Trust no longer patches applications in place, but quickly deploys a new container with the updates and fixes and removes the previous one.

Improved infrastructure efficiency: Instead of managing unique infrastructure stacks for each application, each with its own challenging dependencies, Docker allows Northern Trust to treat all infrastructure as a heterogeneous pool of resources. Dependencies are packaged into the container with the app, removing them from the infrastructure problem. This makes the application portable, so that Northern Trust is free to explore a hybrid cloud strategy.

Impact and Results
With Docker EE in place, Northern Trust is seeing immediate improvements in the way they do software development. It used to take 30 days to provision infrastructure for new projects. With Docker EE, they experience a 4x improvement in deployment time and it now only takes 7 days. Northern Trust is also seeing 2x improvement in infrastructure utilization, getting more out of their available capacity than before.

By simplifying infrastructure management and making applications more portable, Docker EE is improving the quality of their traditional apps. This enables both their developers and operations team to be more responsive and ultimately, Northern Trust is able to stay a leader in their market by delivering the services that their clients are asking for.
To learn more about how Northern Trust is modernizing their traditional applications, watch Rob’s breakout session at DockerCon 2017 with Rohit Tatachar, Sr. Program Manager at Microsoft, and Solutions Architect Brandon Royal:

Next Steps

View all the recorded sessions from DockerCon 2017
Learn more about modernizing traditional apps with Docker EE
Sign up for the Modernize Traditional Apps kit


The post Docker Enterprise Edition Brings New Life Back to Legacy Apps at Northern Trust appeared first on Docker Blog.

DockerCon 2017: all the session videos are now live!

We’re happy to announce that all the breakout session video recordings from DockerCon 2017 are now available online! A special shoutout to all the amazing speakers for making their sessions informative and insightful. All the videos are published on the Docker YouTube channel, and the presentation slides are available from the Docker SlideShare account.
Here are the links to the playlists of each track:  
Use Case Track
Use case talks are about practical applications of Docker and are heavy on technical detail and implementation advice. Topics covered during this track included high availability and parallel usage in the gaming industry, cloud scale for e-commerce giants, and security compliance and legacy system protocols in financial and health care institutions.

Black Belt Track
Black Belt talks were deeply technical sessions presented by Docker experts. These sessions are code and demo heavy and light on the slides. From container internals to advanced container orchestration, security and networking, this track is a delight for the container connoisseurs in the room.

Docker Deep Dive
This track focuses on the technical details associated with the different components of the Docker platform: advanced orchestration, networking, security, storage, management and plug-ins. The Docker engineering leads walk you through the best way to build, ship and run distributed applications with Docker as well as give you a hint at what’s on their roadmaps.

Using Docker Track
This track is for everyone who’s getting started with Docker or wants to better implement Docker in their workflow. Whether you’re a .NET, Java or NodeJS developer looking to modernize your applications, or an IT pro who wants to learn about Docker orchestration and application troubleshooting, this track has specific sessions to get you up to speed with Docker.

Wildcard Track
Wildcard talks are all about informing, inspiring and delighting attendees with what they can do with containers and other related technologies: culture, community, tech trends, business or innovation talks – anything that’s Docker related.

Community Theater Track
Community Theater sessions feature cool Docker hacks and lightning talks by various community members on a range of topics, like Community Cool Hacks, Docker and serverless, and Docker on Raspberry Pi.


The post DockerCon 2017: all the session videos are now live! appeared first on Docker Blog.