Docker 101: Introduction to Docker webinar recap

Docker is standardizing the way applications are packaged, making it easier for developers to code and build apps on their laptops or workstations, and for IT to manage, secure, and deploy them onto a variety of infrastructure platforms.
In last week’s webinar, Docker 101: An Introduction to Docker, we went from describing what a container is, all the way to what a production deployment of Docker looks like, including how large enterprise organizations and world-class universities are leveraging Docker Enterprise Edition (EE)  to modernize their legacy applications and accelerate public cloud adoption.
If you missed the webinar, you can watch the recording here.

We ran out of time to go through everyone’s questions, so here are some of the top questions from the webinar:
Q: How does Docker get access to platform resources, such as I/O, networking, etc.? Is it a type of hypervisor?
A: Docker EE is not a type of hypervisor. Hypervisors create virtual hardware: they make one server appear to be many servers, but generally know little or nothing about the applications running inside them. Containers are the opposite: they make one OS or one application server appear to be many isolated instances. A container explicitly must know the OS and application stack, but the hardware underneath is less important to it. On Linux operating systems, the Docker Engine is a daemon installed directly on the host operating system; it uses kernel features to isolate and segregate the processes of the different containers running on that operating system. Platform resources are accessed through the host operating system, and each container gets isolated access to those resources through segregated namespaces and control groups (cgroups). cgroups allow Docker to share available hardware resources among containers and optionally enforce limits and constraints. You can read more about this here.
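As a concrete illustration of cgroup-based limits, the Docker CLI exposes them as flags on docker run. A minimal sketch (the image and limit values here are arbitrary examples, not from the webinar):

# Cap the container at 512 MB of memory and half a CPU core;
# the kernel's cgroups enforce these limits on the container's processes.
docker run --rm --memory=512m --cpus=0.5 alpine sh -c 'echo hello from a resource-limited container'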
Q: Are containers secure since they run on the same OS?
A: Yes. cgroups, namespaces, seccomp profiles, and the "secure by default" approach of Docker all contribute to the security of containers. Separate namespaces protect the processes running within a container: a process in one container cannot see, much less affect, processes running in another container or in the host system. cgroups help ensure that each container gets its fair share of memory, CPU, and disk I/O and, more importantly, that a single container cannot bring the system down by exhausting one of those resources. Docker is also designed to limit the root access of containers by default, so even if an intruder manages to escalate to root within a container, it is much harder to do serious damage or to escalate to the host. These are just some of the many ways Docker is designed to be secure by default. Read more about Docker security and security features here.
Docker Enterprise Edition includes additional advanced security options, including role-based access control (RBAC), image signing to validate image integrity, secrets management, and image scanning to protect against known vulnerabilities. These capabilities add a layer of security across the entire software supply chain, from the developer's laptop to production.
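As a hedged illustration of this defense-in-depth posture, the standard Docker CLI also lets you tighten an individual container at run time (the user ID and image below are illustrative):

# Run as an unprivileged user, with a read-only root filesystem,
# and drop all Linux capabilities the process does not need.
docker run --rm --user 1000:1000 --read-only --cap-drop ALL alpine id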
Q: Can a Docker image created under one OS (e.g., Windows) be used to run on a different operating system (e.g., Red Hat 7.x)?
A: Unlike VMs, Docker containers share the OS kernel of the underlying host, so a container can move from one Linux OS to another but not from Windows to Linux. You cannot run a Windows .NET app natively on a Linux machine, but you can run a RHEL-based container on a SUSE-based host, because both leverage the same Linux kernel.
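You can verify the shared-kernel behavior yourself: a container reports the host's kernel release, not its own. A quick sketch using any small Linux image:

# Both commands print the same kernel release, because the container
# shares the host's kernel rather than booting one of its own.
uname -r
docker run --rm alpine uname -r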
Q: Is there another advantage other than DevOps for implementing Docker in enterprise IT infrastructure?
A: Yes! Docker addresses many different IT challenges and aligns well with major IT initiatives, including hybrid/multi-cloud, data center modernization, and app modernization. Legacy applications are difficult and expensive to maintain: they can grow fragile and insecure through years of neglect, while maintaining them consumes a large portion of the overall IT budget. By containerizing these traditional applications, IT organizations save time and money and make the applications more nimble. For example:

Cloud portability: Containerized applications can be deployed across different certified platforms without requiring code changes.
Easier application deployment and maintenance: Containers are based on images, which are defined in Dockerfiles (a minimal example follows this list). This makes an application's dependencies explicit, so the application is easier to move between dev, test, QA, and production environments and easier to update and maintain when needed. 62% of customers with Docker EE see a reduction in their mean time to resolution (MTTR).
Cost savings: Moving to containers increases overall utilization of available resources, which means customers often see up to 75% improved consolidation of virtual machines or CPU utilization. That frees up more budget to spend on innovation.

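To make the Dockerfile point above concrete, here is a minimal sketch; the base image, file names, and port are illustrative rather than taken from the webinar:

# Minimal Dockerfile: base image, dependencies, app code, start command.
FROM python:3-alpine
# Bake the dependencies into the image so every environment runs the same bits.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# Document the port the app listens on, then define the startup command.
EXPOSE 8000
CMD ["python", "app.py"]

Building this with docker build -t myapp . produces an image that runs identically in every environment, because its dependencies travel inside the image rather than being installed per machine.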
To learn more about how IT can benefit from modernizing traditional applications with Docker, check out www.docker.com/MTA.
Q: Can you explain more about how Docker EE can be used to convert apps to microservices?
A: Replacing an existing application outright with a microservices architecture is often a large undertaking that requires significant investment in application development, and sometimes it is impossible because the application depends on systems of record that cannot be replaced. What we see many companies do instead is containerize the entire traditional application as a starting point, then peel away pieces of the application and convert those to microservices rather than taking on the whole application at once. This lets an organization modernize components like the web interface without a complete re-architecture, giving the application a modern interface while it still accesses legacy data.
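One common shape for this peel-away pattern is a Compose file that runs the containerized legacy application alongside a newly extracted service. A sketch with hypothetical image and service names:

# docker-compose.yml: containerized monolith plus one extracted microservice.
version: "3"
services:
  legacy-app:
    # The traditional application, containerized as-is.
    image: registry.example.com/legacy-app:1.0
    ports:
      - "8080:8080"
  web-ui:
    # The modernized front end, peeled off as its own service;
    # it still calls into the legacy back end for data.
    image: registry.example.com/web-ui:1.0
    ports:
      - "80:80"
    depends_on:
      - legacy-app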
Q: Are there any tools that will help us manage private/corporate images? Can we host our own image repository in-house rather than using the cloud?
A: Yes! Docker Trusted Registry (DTR) is a private registry included in Docker Enterprise Edition Standard and Advanced. DTR also provides advanced capabilities around security (e.g., image signing, image scanning) and access controls (e.g., LDAP/AD integration, RBAC). It is intended to be a private registry that you install either in your data center or in your virtual private cloud environment.
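Day-to-day use looks like any other registry: tag an image with your DTR hostname and push. A sketch (the hostname and repository below are placeholders for your own):

# Tag a local image for the in-house registry, authenticate, and push.
docker tag myapp:1.0 dtr.example.com/engineering/myapp:1.0
docker login dtr.example.com
docker push dtr.example.com/engineering/myapp:1.0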
Q: Is there any way to access the host OS file system(s)? I want to put my security scan software in a Docker container but scan the host file system.
A: The best way to do this is to mount the host directory as a volume in the container with "-v /:/root_fs", so that the file system and directories are shared and visible in both places. More information about storage volumes, mounting shared volumes, backup, and more is available here.
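Putting that together, a sketch of the scanner scenario (the image name is hypothetical; mounting the host filesystem read-only is a sensible precaution for a scan-only tool):

# Mount the host's root filesystem read-only at /root_fs inside the
# container, so the scanner can read host files but never modify them.
docker run --rm -v /:/root_fs:ro my-scanner-image scan /root_fs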


Next Steps:

If you’re an IT professional, join our multi-part learning series: IT Starts with Docker
If you’re a developer, check out the Docker Playground 
Learn more about Docker Enterprise Edition or try the new hosted demo environment
Explore and register for other upcoming webinars or join a local Meetup
