Docker Enterprise: The First DISA STIG’ed Container Platform!

Docker Enterprise was built to be secure by default. When you build a secure-by-default platform, you need to consider security validation and governmental use. Docker Enterprise has become the first container platform to complete the Security Technical Implementation Guide (STIG) certification process. Thanks to the Defense Information Systems Agency (DISA) for its support and sponsorship. Being the first container platform to complete the STIG process through DISA means a great deal to the entire Docker team.
The STIG took months of work writing and validating the controls. What does it really mean? Having a STIG allows government agencies to ensure they are running Docker Enterprise in the most secure manner. The STIG also provides validation for the private sector. One of the great concepts in any compliance framework, like STIGs, is the idea of inherited controls. Adopting a STIG recommendation helps improve an organization’s security posture. Here is a great blurb from DISA’s site:

The Security Technical Implementation Guides (STIGs) are the configuration standards for DOD IA and IA-enabled devices/systems. Since 1998, DISA has played a critical role enhancing the security posture of DoD’s security systems by providing the Security Technical Implementation Guides (STIGs). The STIGs contain technical guidance to “lock down” information systems/software that might otherwise be vulnerable to a malicious computer attack.

This GCN article also makes a good point about using the STIG as a security baseline:

If you look at any best practice guidance, regulation or standards around effective IT security out on the market today, you will see that it advises organizations to ensure their computing systems are configured as securely as possible and monitored for changes.

What STIG Means for Docker’s Customers
So what’s in the STIG? STIGs are formatted in XML and require the STIG Viewer to read. The STIG Viewer is a custom GUI written in Java (see DISA’s page on STIG Viewing Tools for more). Specifically, you can find the latest DISA STIG Viewer here.

The Docker Enterprise STIG can be found here: Docker Enterprise 2.x Linux/UNIX STIG – Ver 1 Rel 1 (you will need to unzip it). Although the current STIG calls out Docker Enterprise 2.x, it absolutely applies to Docker Enterprise 3.x!

Let’s dig into the STIG itself. There is some good information about the STIG and DISA’s authority in the Overview PDF.
The STIG itself contains only 100 controls. For the uninitiated, a control is a configuration item that needs to be checked and possibly changed. This is the real meat and potatoes for system administrators.
Here is the breakdown:

Category    Controls
CAT 1       23
CAT 2       72
CAT 3       5
Total       100

CAT 1 controls are the most important controls to pay attention to. As you can see, there are only 23 CAT 1 controls, and the bulk of those are “what not to do” controls — checks to ensure an undesirable situation is not occurring. With only 100 total controls, there is not a lot of work to do to harden Docker Enterprise.
The STIG will be updated as often as needed. We want to ensure that all our customers and partners have access to the latest security information around Docker Enterprise.
Why STIG Matters to Docker
We are thankful to our sponsors within DISA that paved the way for us to be accepted into the STIG process and complete it. The primary goal of the Docker Public Sector team is to provide technology that serves those who serve our country. Completing the STIG process was a big step for us in gaining a level of trust necessary to fulfill that goal. 
We have always felt that new technology like Docker is tangibly valuable to production enterprise and mission environments only if we do our due diligence with security through the certifications and evaluations that are required for our technology to be approved and used safely in real world environments.


To learn more about why Docker Enterprise is secure by default:

Take a look at our Security Reference Architecture.
Read about our solutions for Government.

Watch the Docker Enterprise 3.0 webinar series to see a demo of built-in features that are designed to provide enterprise-grade security without slowing down your organization.
Chris Cyrus, Director of Enterprise Sales at Docker, also contributed to this blog post.
The post Docker Enterprise: The First DISA STIG’ed Container Platform! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Women in Tech Week Profile: Anusha Ragunathan

It’s Women in Tech Week, and we want to take the opportunity to celebrate some of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.

Anusha Ragunathan is a Software Engineer at Docker. You can follow her on Twitter @AnushaRagunatha.

What is your job?
Software Engineer. I build systems, write and review code, test and analyze software. I’ve always worked on infrastructure software both at and before Docker. I participate in Moby and Kubernetes OSS projects. My current work is on persistent storage for Kubernetes workloads and integrating it with Docker’s Universal Control Plane. I also enjoy speaking at technical conferences and writing blogs about my work. 
How long have you worked at Docker?
 4 years and 1 month!

Is your current role one that you always intended on your career path? 
Yes, I’ve always been on this path. In high school, we had the option to take biological sciences or computer science (CS). I chose CS, and that has been my path ever since. I earned both my bachelor’s and master’s degrees in CS.

What is your advice for someone entering the field?
If you love problem solving and enjoy learning a new system, building on top of it, observing it, reverse engineering it – if any of this excites you – give software engineering a shot. It’s not just about writing code. It’s not just about learning new languages, frameworks and tools; it’s about having a holistic view of how things work together. Pursue a career in engineering if you have this inclination. And be brave! It can sound scary, but it’s not. It’s a set of machines that you need to understand. The more you do it, the better you get at it.

Tell us about a favorite moment or memory at Docker or from your career? 
All the DockerCons! I get a huge high attending and watching the keynotes and announcements. 
My first year at Docker was also memorable. I didn’t come from an Open Source background. I have such fond memories of how welcoming everyone was at Docker. Arnaud Porterie, Solomon Hykes and many of the engineers who I still work with, Sebastiaan van Stijn, Tonis Tiigi, Tibor Vass, Michael Crosby were very welcoming and very eager to help – there was a great camaraderie.

What are you working on right now that you are excited about?
Kubernetes. It is a project that has a lot going on, both in community contributions and in the feature set it carries every release. It’s a great project to be involved with, especially given its wide adoption. According to https://octoverse.github.com/projects.html, it’s in the top 10 OSS projects. It can be chaotic as well, but if you have a focus and a few specific projects to follow, it’s a great community to learn from and contribute to.

What do you do to get “unstuck” on a really difficult problem/design/bug?
Whether it’s a technical, team or business problem, the first thing I always tell myself or advise colleagues is simply “do not panic.” Then methodically, I try to see if any of my prior knowledge can be applied to this new problem and if I can connect the dots.
If that doesn’t work, then you try new things. If it’s a technical problem, you can look things up, or try to understand the system more by focusing on observability; it’s very measurable, you just have to be methodical.
If it is a team problem or a customer issue, I tend to communicate. You don’t need to over-communicate, but it’s always good to keep people in the loop. Offer a solution as part of your communication; don’t go into the conversation only with the problem. If you give your audience something to work with, then it’s much easier to work towards a solution.
Lastly, leave your ego aside. Ask for help, and reciprocate when others ask for help from you.

What is your superpower?
Authenticity.

What is your definition of success?
It’s not a traditional view of success. To me, success is being able to contribute to something and feel good about it in the end – build something, get features working, and see your work deployed, adopted and used by customers. To me, success is about working on something you think is significant and that eventually sees the light of day.

What are you passionate about?
At work, distributed systems and all the nuts and bolts that go with it. I love automation, observability and debugging challenging problems – I get a kick out of that.
At home, my family. Spending time with both of my girls.
Icebreaker Round
When we interviewed Anusha, we also did a quick icebreaker round. Her answers were great so we’ve included them here!

Icons and graphics sourced from Lineal, Darius Dan and Smashicon on Flaticon.

We’ll feature profiles of some more of the amazing women who work at Docker throughout the week!



Top Questions Answered: Docker and Kubernetes? I Thought You Were Competitors!

Last week, we covered some of the questions about container infrastructure from our recent webinar “Demystifying VMs, Containers, and Kubernetes in the Hybrid Cloud Era.” This week, we’ll tackle the questions about Kubernetes, Docker and the software supply chain. One common misperception we heard in the webinar is that Docker and Kubernetes are competitors. In fact, Kubernetes is better with Docker. And Docker is better with Kubernetes.
Docker And Kubernetes? I thought you were competitors?
We hear questions along this line all the time. Here are some quick answers:
Can I use Kubernetes with Docker?

Yes, they go together. You need a container runtime like Docker Engine (based on open source containerd) to start and stop containers on a host.
When you have a bunch of containers running across a bunch of hosts, you need an orchestrator to manage things like: Where will the next container start? How do you make a container highly available? How do you control which containers can communicate with other containers? That’s where an orchestrator such as Kubernetes comes in.
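As a quick illustrative sketch (the image and resource names here are arbitrary, not from the webinar), the runtime handles a single container while the orchestrator declaratively manages replicas across hosts:

```shell
# The container runtime starts and stops a single container on one host:
docker run -d --name web nginx:alpine

# An orchestrator such as Kubernetes decides where replicas run,
# and replaces them automatically if a node fails:
kubectl create deployment web --image=nginx:alpine
kubectl scale deployment web --replicas=3
```

Running these requires a Docker Engine and a Kubernetes cluster, respectively; they are here only to show the division of labor between the two layers.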

Comparing traditional, virtualized, containerized and Kubernetes deployment architectures.

The container runtime and the orchestrator are the two core atomic units that go together. You could just install Kubernetes and Docker Engines and have something that works, but enterprise organizations need security, monitoring and logging, enterprise storage and networking, and much more. 
This is where a container platform like Docker Enterprise comes in: Docker Enterprise is the easiest and fastest way to use containers and Kubernetes at scale and delivers the fastest time to production for modern applications, securely running them from hybrid cloud to the edge. It also ships with a CNCF-conformant version of Kubernetes!

Docker Enterprise adds important capabilities to Kubernetes and makes it easier to use.
Does Kubernetes replace Docker Swarm?

No, they can be used together. Kubernetes and Docker Swarm are both orchestrators, so they have the same end goals. New users find it much easier to understand Docker Swarm. However, Kubernetes has evolved to add a lot of functionality.
The good news: you can use both side-by-side in Docker Enterprise. Enable your developers and operators to decide which route they want to go: you don’t have to tie yourself to one decision or the other.

You can learn more about this choice from the on-demand webinar, Swarm vs. Kubernetes, Presented by BoxBoat.
Container images, security, and CI/CD
A Secure Software Supply Chain is another big part of what makes up a container platform. After all, Docker Engine and Kubernetes wouldn’t have anything to do without container images! This was clearly another area with lots of interest.
We demonstrated vulnerability scanning and some questions came up about where scanning fits in the development lifecycle.
Do you scan containers in development? Test? QA? Production?
Vulnerability scanning is a feature of Docker Trusted Registry (DTR), which is part of Docker Enterprise. Scanning can happen any time a new image is pushed to the registry, when the vulnerability database is updated, or on demand. 

The answer is “Always be scanning.” Scanning can occur in both development and production. You want to scan the images when they are pushed to the registry and then continue to check them against the latest vulnerability discoveries. Vulnerabilities have a way of getting discovered months or even years after libraries are released. If you only scan the container when it’s originally created, you risk missing these new vulnerabilities in your existing applications.
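For example (the registry address and repository below are hypothetical), with on-push scanning enabled in DTR, simply pushing an image triggers a scan:

```shell
# Tag the locally built image for the trusted registry, then push it;
# DTR scans the image on arrival and re-checks it as the vulnerability
# database is updated
docker tag myapp:1.0 dtr.example.com/dev/myapp:1.0
docker push dtr.example.com/dev/myapp:1.0
```

These commands assume a running DTR instance you are logged into; the workflow is the same for development and production repositories.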
The Application Designer in Docker Desktop Enterprise (our locally installed tool for developers) has pre-configured Templates. In reality, those Templates are container images that “live” in DTR. With Desktop Enterprise, developers automatically pull scanned images to their machines. In addition, the templates can be customized to your organization’s particular standards and approved frameworks. You can build-in your coding standards, plug-ins or other artifacts directly into the templates as well.
We automatically check existing images against the vulnerability database whenever it is updated. On the production side, you might have long-running containers in your environment. Those get scanned and we surface the results in the Universal Control Plane. From there, you can easily tell which of your running applications are affected and choose how to address the issues.

Getting Started
A lot of people had questions about how to get started with Kubernetes. Fortunately, they’re much easier questions to answer and many of the resources are free!

If you want to get started with Docker and Kubernetes on your own and you have a Windows 10 (with Hyper-V features) or macOS machine, go get Docker Desktop. It’s free, incredibly easy to set up, and you’ll have both Docker and Kubernetes ready to go.
If you have a Linux workstation you can get Docker Engine for free. Make sure you’re getting the actual Docker Engine and not a forked version by following the instructions here.
If you can’t install software on your machine or you don’t have an OS that meets the requirements, you can use Play with Docker and Play with Kubernetes for free. Both provide access to nodes directly from your browser. They also have lab content to guide you through introductory exercises.
Read the excellent blog series from Bill Mills on our training team about designing your first application in Kubernetes.



Don’t Miss Docker’s Hands-on Workshop at Arm TechCon 2019

Photo by Zan Ilic on Unsplash
Momentum is building for edge computing
With the rise of the Internet of Things (IoT) combined with the global rollout of 5G (fifth-generation wireless network technology), a perfect storm is brewing that will bring higher speeds, extremely low latency, and greater network capacity, delivering on the hype of IoT connectivity.
And industry experts are bullish on the future. For example, Arpit Joshipura, The Linux Foundation’s general manager of networking, predicts edge computing will overtake cloud computing by 2025. According to Santhosh Rao, senior research director at Gartner, around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud today. He predicts this will reach 75% by 2025.
Back in April 2019, Docker and Arm announced a strategic partnership enabling cloud developers to build applications for cloud, edge, and IoT environments seamlessly on the Arm® architecture. We carried the momentum with Arm from that announcement into DockerCon 2019 in our joint Techtalk, where we showcased cloud native development on Arm and how multi-architecture containers with Docker can be used to accelerate Arm development.  
A hands-on workshop for Arm developers
As part of our strategic partnership, Docker will be participating at Arm TechCon, scheduled for October 8-10, 2019 at the San Jose Convention Center. Docker is excited to announce that we will be delivering a hands-on workshop exclusive to All-Access Pass holders in attendance:
Software Development for the Arm Architecture with Docker
October 8, 2019: 3:30 PM to 5:30 PM
This workshop builds on previously announced multi-architecture builds to introduce new tools and concepts to accelerate development and streamline deployments of Artificial Intelligence (AI) and Machine Learning (ML) workloads on Arm edge and IoT devices.
The workshop will provide a hands-on session with Docker Desktop on Windows or Mac, Amazon Web Services (AWS) A1 instances, and embedded Linux. The session will cover the latest Docker features to build, share, and run multi-architecture images with transparent support for Arm. Attendees will be provided with a 60-day trial license of Docker Desktop Enterprise including multiple examples and demos as well as the latest Raspberry Pi 4 development board to keep.
Our strategic partnership benefits both Arm developers and the Docker community by bringing together a common development platform where users can access Arm technology in the cloud, on the desktop, and with embedded and IoT devices. Arm and Docker are educating software developers on the benefits and simplicity Docker brings to the development process. We hope to see you at TechCon and your participation at the hands-on workshop.
Read Our Guide to Arm TechCon
To learn more about how Docker can help bring life to your edge and IoT ideas:

Come see us at the Docker booth in the Infrastructure Zone at Arm TechCon 2019
Read about Building Multi-Arch Images for Arm and x86 with Docker Desktop
Watch the DockerCon session on Developing Containers for Arm



How Carnival Corporation Creates Customized Guest Experiences with Docker Enterprise

Regal Princess cruise ship. Photo by Jamie Morrison on Unsplash
When you get on a cruise ship or go to a major resort, there’s a lot happening behind the scenes. Thousands of people work to create amazing, memorable experiences, often out of sight. And increasingly, technology helps them make those experiences even better.
We sat down recently with Todd Heard, VP of Infrastructure at Carnival Corporation, to find out how technology like Docker helps them create memorable experiences for their guests. Todd and some of his colleagues worked at Disney in the past, so they know a thing or two about memorable experiences.
Here’s what he told us. You can also catch the highlights in this 2 minute video:

On Carnival’s Mission
Our goal at Carnival Corporation is to provide a very personalized, seamless, and customized experience for each and every guest on their vacation. Our people and technology investments are what make that possible. But we also need to keep up with changes in the industry and people’s lifestyles.
On Technology in the Travel Industry and Customized Guest Experiences
One of the ironies in the travel industry is that everybody talks about technology, but the technology should be invisible to the guest. Travel industry players use technology to differentiate their capabilities and products and services, but essentially guests want more engagement with their travel provider. Technology helps us build and customize memorable experiences for each guest, but the technology itself isn’t the experience.
In the travel and hospitality industry, digital transformation is all about the guest experience. And now you’ll see more companies investing in more innovation that drive more personalized interactions and experiences for the guests. For us at Carnival, digital transformation means that the more that a guest interacts and participates in a particular brand or experience, the more personalized it becomes because it’s always adapting and adjusting and anticipating those guests needs.
Carnival Corporation’s MedallionClass Experience
We are rolling out the MedallionClass experience which is powered by a wearable OceanMedallion that gives the guests more ways to engage with us and receive personalized service on demand at any time on the ship.
For example, traditionally you think on a cruise ship if you’re sitting by the swimming pool you have people walking by you and you can order a drink. But what if you’re somewhere else? Perhaps you’re sharing an intimate moment with someone at the back of the ship and you want to order a bottle of champagne. The Medallion allows the crew to find your location and deliver it to you.
We also have gamification built into the platform, “find your shipmates,” room entry and dozens of other services tied to MedallionClass. All of that allows us to provide very personalized interactions with our guests.
The Technology Behind MedallionClass
The MedallionClass program is based on a microservices architecture with over 300 services deployed. All of these services work together to create very distinct experiences for our guests. Each ship has about 120 containers running across two shipboard data centers. That gives us high availability in a very small footprint to make sure our guest services are always available.
As part of MedallionClass, we use a hybrid cloud based architecture where we use Docker Enterprise in all of our lower environments and also deploy it on the ships to run our services that provide these experiences for our guests. 
How Docker Makes MedallionClass Possible
Docker plays a very critical role in our digital transformation with the MedallionClass. Cruise ships are essentially mobile cities that provide everything that a guest could need or want, including food and beverage, recreation, accommodations, transportation, telecommunications, gaming and more. And what Docker gives us is the unique ability to control the entire guest experience by connecting all these elements together.
Docker lets us be very agile so our developers are more productive. As we develop and create highly innovative new personalized guest experiences we can get those to market quickly for our guests.
Docker Enterprise has also allowed us to reduce our overall infrastructure footprint on the ship, and it helps drive standardization as we continue to rollout MedallionClass experiences to our fleet of ships.
To learn more about how Docker Enterprise can help you create memorable customer experiences:

Read the Digital Transformation Imperative eBook
Download the Definitive Guide to Container Platforms



Top Questions: Containers and VMs Together

We had a great turnout to our recent webinar “Demystifying VMs, Containers, and Kubernetes in the Hybrid Cloud Era” and tons of questions came in via the chat — so many that we weren’t able to answer all of them in real-time or in the Q&A at the end. We’ll cover the answers to the top questions in two posts (yes, there were a lot of questions!).
First up, we’ll take a look at IT infrastructure and operations topics, including whether you should deploy containers in VMs or make the leap to containers on bare metal. 
VMs or Containers?

Among the top questions was whether users should run a container platform on bare metal or on top of their virtual infrastructure — not surprising, given the webinar topic.

A Key Principle: one driver for containerization is to abstract applications and their dependencies away from the underlying infrastructure. It’s our experience that developers don’t often care about the underlying infrastructure (or at least they’d prefer not to). Docker and Kubernetes are infrastructure agnostic. We have no real preference.
The goal – yours and ours: provide a platform that developers love to use, AND provide the operational and security tools required to keep your applications running in production and maintain the platform.
So VMs or containers?  It depends. If all of your operational expertise and tooling is built around virtualization, you might not want to change that right out of the gate when you deploy a container platform. On the other hand, if cost reduction or performance overhead is more important to you, maybe you’ll decide you don’t want to pay for a hypervisor anymore. The good news — when you containerize applications, you can likely reduce the number of VMs and underlying servers by at least 30 or 40 percent.

Whatever you do, avoid making a container platform decision that’s driven purely by your infrastructure of today. If developers feel like they don’t have flexibility, they quickly adopt their own tools, creating a second wave of shadow IT. 
Containers and Networking
We didn’t go very deep on networking or storage in the webinar because they are topics that can easily fill multiple webinars on their own, but there were a few common questions.
How do you connect multiple containers together and expose the applications to external users and services?

For simple applications, you can use the networking tools that are built right into both the Docker Engine and Kubernetes. 
If you’re running a container and want to map an external port to an internal port, you can add a simple parameter to the command to open communications: docker run --publish <external port>:<internal port> <image name>
Both Swarm and Kubernetes allow two containers to communicate with each other while keeping that connection hidden from external traffic. You can do that very simply on a Docker Engine or Swarm cluster using Docker Compose to define your services and their networks. In Kubernetes, you of course have similar capabilities. 
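A minimal Compose sketch of this pattern (the service, image, and network names below are made up for illustration): only the frontend publishes a port, while both services share a private network:

```yaml
version: "3.7"
services:
  frontend:
    image: myorg/frontend:latest   # hypothetical image name
    ports:
      - "8080:80"                  # only the frontend is exposed to external traffic
    networks:
      - backend-net
  api:
    image: myorg/api:latest        # hypothetical; no ports published, so the API
    networks:                      # is reachable only on the private network
      - backend-net
networks:
  backend-net:
```

With this file, the frontend can reach the API by its service name (api) over backend-net, while external clients can only reach port 8080.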

What are Calico and Tigera, and how do they fit in to the networking design? What about NSX or other networking solutions?

For more advanced applications or those running in production, you might want additional features and capabilities beyond what the built-in networking drivers support. When you have hundreds or thousands of containers, you’ll need a better way to handle routing, discovery, security and other network concerns at scale. The Kubernetes community supports more advanced networking plugins through a standardized Container Networking Interface (CNI). CNI plugins provide enhanced capabilities you won’t find in the default network drivers.
Project Calico, by Tigera, is one of the most common open source CNI plugins you will find in Kubernetes, and it works for both Linux and Windows containers. Calico is maintained by Tigera, which works closely with the Kubernetes community to define and contribute to the CNI standard. Docker Enterprise includes Calico as our “batteries included, but swappable” CNI plugin.
For enterprises that need even greater security and management capabilities, including auditing, reporting, greater scale, and integration with service meshes, you might look to a commercial product like Tigera Secure.
CNI ensures a standard networking interface that can support different ecosystem solutions. If you’re a VMware customer and already invested in NSX, you might go that route instead.

Sizing and Optimizing Infrastructure for Containers
We received quite a few questions about how to size a design for Docker & Kubernetes. They boiled down to two main questions:
What’s the maximum number of containers you can run on a single Docker host? 

The answer: it depends. Remember, containers are just processes that consume RAM and CPU directly from the host. Since there’s no hypervisor layer or additional OS between the application and the host, a host should be able to run at least as many processes, now packaged as containers, as it did prior to containerization. In fact, most organizations end up running around 40% more work on a host because multiple applications can share the same base OS, and because many VMs are over-provisioned.

How do I size my environment for Docker/Kubernetes?

As you start thinking about running containers in production, you should look at bringing in expertise to help guide you through this exercise. Similar to the previous answer, on average we see about a 40% reduction in the number of VMs, but that average is across a broad set of applications, and you’ll want to learn how to estimate as you go forward and add more applications. We’ve seen customers do it on their own, but it takes time and you end up learning a lot from your own mistakes before you get it right. You can greatly accelerate your path with a little help.
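As a rough back-of-the-envelope sketch (the function and numbers here are illustrative; the ~40% figure is the average mentioned above, not a guarantee for your workloads):

```python
def estimate_vm_count(current_vms: int, reduction: float = 0.40) -> int:
    """Estimate how many VMs remain after containerization,
    assuming the ~40% average consolidation discussed above."""
    return max(1, round(current_vms * (1 - reduction)))

# A fleet of 100 VMs might shrink to roughly 60 after containerization
print(estimate_vm_count(100))  # 60
```

Treat the result as a starting point for capacity planning, then refine the reduction factor as you measure your own applications.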

Getting Started
We saved this one for last because this is by far the area with the most questions. Fortunately, they’re much easier questions to answer and many of the resources are free!
Where can I learn more about optimizing and managing containers?

If you want to know more about the Docker Trusted Registry, Docker Kubernetes Service and the Universal Control Plane then you can get a free hosted trial of Docker Enterprise. Again, an introductory walkthrough is provided.
Want classes and training with an instructor? We have that, too. There are classes for operators and developers; and classes for Kubernetes and Security. 

Next week, we’ll cover questions about Kubernetes and Docker together, software pipelines and trusted content.



Designing Your First Application in Kubernetes, Part 3: Communicating via Services

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In this post, I’ll explain how to configure networking services in Kubernetes to allow pods to communicate reliably with each other.
Setting up Communication via Services 
At this point, we’ve deployed our workloads as pods managed by controllers, but there’s no reliable, practical way for pods to communicate with each other, nor is there any way for us to reach a network-facing pod from outside the cluster. The Kubernetes networking model says that any pod can reach any other pod at the target pod’s IP by default, but discovering those IPs, and maintaining that list by hand while pods are potentially being rescheduled and getting entirely new IPs, would be a lot of tedious, fragile work.
Instead, we need to think about Kubernetes services when we’re ready to start building the networking part of our application. Kubernetes services provide reliable, simple networking endpoints for routing traffic to pods via the fixed metadata defined in the controller that created them, rather than via unreliable pod IPs. For simple applications, two services cover most use cases: clusterIP and nodePort services. This brings us to another decision point:
Decision #3: What kind of services should route to each controller? 
For simple use cases, you’ll choose either clusterIP or nodePort services. The simplest way to decide between them is to determine whether the target pods are meant to be reachable from outside the cluster or not. In our example application, our web frontend should be reachable externally so users can access our web app.
In this case, we’d create a nodePort service, which would route traffic sent to a particular port on any host in your Kubernetes cluster onto our frontend pods (Swarm fans: this is functionally identical to the L4 mesh net).
A Kubernetes nodePort service allows external traffic to be routed to the pods.
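As an illustrative sketch (the names, labels, and ports here are assumptions, not taken from the series’ example code), a nodePort service for the web frontend might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend        # must match the labels on the pods created by the frontend controller
  ports:
    - port: 80           # port exposed inside the cluster
      targetPort: 8080   # port the frontend containers actually listen on
      nodePort: 30080    # port opened on every cluster node (default allowed range: 30000-32767)
```

Traffic sent to port 30080 on any node in the cluster is then forwarded to one of the frontend pods.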
For our private API + database pods, we may only want them to be reachable from inside our cluster for security and traffic control purposes. In this case, a clusterIP service is most appropriate. The clusterIP service will provide an IP and port which only other containers in the cluster may send traffic to, and have it forwarded onto the backend pods.
A Kubernetes clusterIP service only accepts traffic from within the cluster.
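A matching clusterIP sketch for the private API tier (again, names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP        # the default service type; this line may be omitted
  selector:
    app: api             # matches the labels on the API pods
  ports:
    - port: 8080         # other pods in the cluster can reach the API at api:8080
      targetPort: 8080   # port the API containers listen on
```

Because no port is opened on the nodes, the API is reachable only from other pods inside the cluster.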
Checkpoint #3: Write some yaml and verify routing
Write some Kubernetes yaml to describe the services you choose for your application and make sure traffic gets routed as you expect.
Advanced Topics
The simple routing and service discovery above will get pods talking to other pods and allow some simple ingress traffic, but there are many more advanced patterns you’ll want to learn for future applications:

Headless Services can be used to discover and route to specific pods; you’ll use them for stateful pods declared by a statefulSet controller.
Kubernetes Ingress and IngressController objects provide managed proxies for doing routing at layer 7 and implementing patterns like sticky sessions and path-based routing.
ReadinessProbes work exactly like the healthchecks mentioned above, but instead of managing the health of containers and pods, they monitor and respond to their readiness to accept network traffic.
NetworkPolicies allow for the segmentation of the normally flat and open Kubernetes network, allowing you to define what ingress and egress communication is allowed for a pod, preventing access from or to an unauthorized endpoint.
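To illustrate the last item, here is a hedged NetworkPolicy sketch (all labels and ports are invented) that restricts ingress to the database pods so that only the API pods may connect:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
spec:
  podSelector:
    matchLabels:
      app: database        # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api     # only pods labeled app=api may send traffic
      ports:
        - port: 5432       # e.g. the PostgreSQL port
```

Note that NetworkPolicies are only enforced if your cluster’s network plugin supports them.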

You can continue reading about Kubernetes application configuration in part 4.
For additional information on these topics, have a look at the Kubernetes documentation:

Kubernetes Services
Kubernetes Cluster Networking

You can also check out Play with Kubernetes, powered by Docker.
We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First Application in Kubernetes, Part 3: Communicating via Services appeared first on Docker Blog.

At the Grace Hopper Celebration, Learn Why Developers Love Docker

Lisa Dethmers-Pope and Amn Rahman at Docker also contributed to this blog post.
Docker hosted a Women’s Summit at DockerCon 2019.
As a Technical Recruiter at Docker, I am excited to be a part of the Grace Hopper Celebration. It is a marvelous opportunity to speak with many talented women in tech and to continue pursuing one of Docker’s most valued ambitions: further diversifying our team. The Docker team will be on the show floor at the Grace Hopper Celebration, the world’s largest gathering of women technologists, the week of October 1st in Orlando, Florida.
Our Vice President of Human Resources and our Senior Director of Product Management, along with representatives from our Talent Acquisition and Engineering teams, will be there to connect with attendees. We will be showing how to easily build, run, and share applications using the Docker platform, and talking about what it’s like to work in tech today.
Supporting Women in Tech
While we’ve made strides in diversity within tech, the 2019 Stack Overflow Developer Survey shows we have work to do. According to the survey, only 7.5 percent of professional developers are women worldwide (it’s 11 percent of all developers in the U.S.).
That’s why Docker hosts Women in Tech events at our own conferences, and we’re pleased to participate in the Grace Hopper Celebration this year. It’s a place for women technologists to learn, network, and connect with a like-minded community. The conference offers attendees several opportunities to advance their professional development, find and provide mentorship, and further develop their leadership skills.
Last year’s celebration hosted over 20,000 attendees from 78 countries as well as thousands of listeners over livestream. We are thrilled to be involved with the conference and show our support for an organization making such a powerful impact.
Creating and Fostering Connections
Two million developers already use Docker regularly today. We have over 240 regional user groups, and a presence in 80 countries. Diversity and inclusion are a key part of our community, and we’ll continue building on that as we grow.
We are seeking forward-thinking individuals to join our team who have diverse experiences and are passionate about bringing technology that transforms lives, industries, and the world to life.
Whether you’re a curious explorer, a Docker newbie, or a super-powered Docker ninja, you should come join us at the Docker booth to learn more about how you can get the most benefit out of the platform!
If you’re a data scientist, a developer, or just bouncing from one coding assignment to the next, come and learn how you can start using Docker almost immediately! Apart from being introduced to cool Docker lingo, you’ll learn how to quickly launch a Docker environment, spin up an app on your machine, and share it with the rest of the world via Docker Hub.
We look forward to collaborating and connecting with you. Come visit us at the technology showcase booth 359, 3648!
In the meantime, if you’d like to dive into diversity in tech, these three DockerCon sessions are a great starting point:

A Transformation of Attitude: Why Mentors Matter
Diversity is not Only about Ethnicity and Gender
How Intentional Diversity Creates Thought Leadership


The post At the Grace Hopper Celebration, Learn Why Developers Love Docker appeared first on Docker Blog.

Designing Your First Application in Kubernetes, Part 4: Configuration

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In part 3, I explained how to configure networking services in Kubernetes to allow pods to communicate reliably with each other. In this installment, I’ll explain how to identify and manage the environment-specific configurations expected by your application to ensure its portability between environments.

Factoring out Configuration
One of the core design principles of any containerized app must be portability. We absolutely do not want to reengineer our containers or even the controllers that manage them for every environment. One very common reason why an application may work in one place but not another is problems with the environment-specific configuration expected by that app.
A well-designed application should treat configuration like an independent object, separate from the containers themselves, that’s provisioned to them at runtime. That way, when you move your app from one environment to another, you don’t need to rewrite any of your containers or controllers; you simply provide a configuration object appropriate to this new environment, leaving everything else untouched.
When we design applications, we need to identify what configurations we want to make pluggable in this way. Typically, these will be environment variables or config files that change from environment to environment, such as access tokens for different services used in staging versus production or different port configurations.
Decision #4: What application configurations will need to change from environment to environment?
From our web app example, a typical set of configs would include the access credentials for our database and API (of course, you’d never use the same ones for development and production environments), or a proxy config file if we chose to include a containerized proxy in front of our web frontend.
Once we’ve identified the configs in our application that should be pluggable, we can enable the behavior we want by using Kubernetes’ system of volumes and configMaps.
In Kubernetes, a volume can be thought of as a filesystem fragment. Volumes are provisioned to a pod and owned by that pod. The file contents of a volume can be mounted into any filesystem path we like in the pod’s containers.
I like to think of the volume declaration as the interface between the environment-specific config object and the portable, universal application definition. Your volume declaration will contain the instructions to map a set of external configs onto the appropriate places in your containers.
ConfigMaps contain the actual contents you’re going to use to populate a pod’s volumes or environment variables. They contain key-value pairs describing either files and file contents, or environment variables and their values. ConfigMaps typically differ from environment to environment. For example, you will probably have one configMap for your development environment and another for production, with the correct variables and config files for each environment.

The configMap and Volume interact to provide configuration for containers.
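As a minimal sketch of that interaction (every name, image, and value below is invented for illustration), a configMap and a pod that mounts it might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config-dev        # swap in webapp-config-prod for production
data:
  app.properties: |              # a file to be projected into the container
    log.level=debug
    api.url=http://api:8080
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: example/webapp:1.0   # hypothetical image
      volumeMounts:
        - name: config            # mounts the volume defined below
          mountPath: /etc/webapp  # app.properties appears at /etc/webapp/app.properties
  volumes:
    - name: config
      configMap:
        name: webapp-config-dev   # the only environment-specific reference
```

Moving this pod to production means swapping a single configMap name; the container and mount definitions stay untouched.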

Checkpoint #4: Create a configMap appropriate to each environment. 
Your development environment’s configMap objects should capture the environment-specific configuration you identified above, with values appropriate for your development environment. Be sure to include a volume in your pod definitions that uses that configMap to populate the appropriate config files in your containers as necessary. Once you have the above set up for your development environment, it’s simple to create a new configMap object for each downstream environment and swap it in, leaving the rest of your application unchanged.
Advanced Topics
Basic configMaps are a powerful tool for modularizing configuration, but some situations require a slightly different approach.

Secrets in Kubernetes are like configMaps in that they package up a bunch of files or key/value pairs to be provisioned to a pod. However, secrets offer added security guarantees around encryption and data management. They are the more appropriate choice for any sensitive information, like passwords, access tokens or other key-like objects.
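A minimal Secret sketch (names and values invented; note that stored Secret values are base64-encoded, which is not encryption on its own):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                # stringData accepts plain text; Kubernetes encodes it on write
  DB_USER: webapp
  DB_PASSWORD: change-me   # placeholder; never commit real credentials to source control
```

Pods consume a Secret the same way they consume a configMap, via environment variables or a mounted volume.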

From here, we wrap up the series with a post about storage configuration for Kubernetes applications.
To learn more about configuring Kubernetes and related topics: 

Check out Play with Kubernetes, powered by Docker.
Read the Kubernetes documentation on Volumes. 
Read the Kubernetes documentation on ConfigMaps.

We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First Application in Kubernetes, Part 4: Configuration appeared first on Docker Blog.

Designing Your First Application in Kubernetes, Part 5: Provisioning Storage

In this blog series on Kubernetes, we’ve already covered:

The basic setup for building applications in Kubernetes
How to set up processes using pods and controllers
Configuring Kubernetes networking services to allow pods to communicate reliably
How to identify and manage the environment-specific configurations to make applications portable between environments

In this series’ final installment, I’ll explain how to provision storage to a Kubernetes application. 

Step 4: Provisioning Storage
The final component we want to think about when we build applications for Kubernetes is storage. Remember, a container’s filesystem is transient, and any data kept there is at risk of being deleted along with your container if that container ever exits or is rescheduled. If we want to guarantee that data lives beyond the short lifecycle of a container, we must write it out to external storage.
Any container that generates or collects valuable data should be pushing that data out to stable external storage. In our web app example, the database tier should be pushing its on-disk contents out to external storage so they can survive a catastrophic failure of our database pods.
Similarly, any container that requires the provisioning of a lot of data should be getting that data from an external storage location. We can even leverage external storage to push stateful information out of our containers, making them stateless and therefore easier to schedule and route to.
Decision #5: What data does your application gather or use that should live longer than the lifecycle of a pod?
The full Kubernetes storage model has a number of moving parts:
The Kubernetes storage model.

Container Storage Interface (CSI) Plugins can be thought of as the driver for your external storage.
StorageClass objects take a CSI driver and add some metadata that typically configures how storage on that backend will be treated.
PersistentVolume (PV) objects represent an actual bucket of storage, as parameterized by a StorageClass.
PersistentVolumeClaim (PVC) objects allow a pod to ask for a PersistentVolume to be provisioned to it.
Finally, we met Volumes earlier in this series. In the case of storage, we can populate a volume with the contents of the external storage captured by a PV and requested by a PVC, provision that volume to a pod and finally mount its contents into a container in that pod.
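To make the “storage frontend” concrete, here is a hedged sketch of a PVC and the pod volume that consumes it (the storage class name, sizes, and image are assumptions that would vary by cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce            # mountable read-write by a single node
  storageClassName: standard   # hypothetical; depends on your cluster's CSI setup
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  containers:
    - name: postgres
      image: postgres:12       # example database image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data     # binds the pod to the claim above
```

The pod definition never names a CSI plugin or PV directly, which is exactly what lets the storage backend be swapped between environments.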

Managing all these components can be cumbersome during development, but as in our discussion of configuration, Kubernetes volumes provide a convenient abstraction by defining how and where to mount external storage into your containers. They form the start of what I like to think of as the “storage frontend” in Kubernetes—these are the components most closely integrated with your pods and which won’t change from environment to environment.
All those other components, from the CSI driver all the way through the PVC, which I like to think of as the “storage backend”, can be torn out and replaced as you move between environments without affecting your code, containers, or the controller definitions that deploy them.
Note that on a single-node cluster (like the one created for you by Docker Desktop on your development machine), you can create hostPath-backed PersistentVolumes, which will provision persistent storage from your local disk without setting up any CSI plugins or special storage classes. This is an easy way to get started developing your application without getting bogged down in the diagram above, effectively deferring the decision and setup of CSI plugins and storageClasses until you’re ready to move off of your dev machine and into a larger cluster.
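Such a hostPath-backed PV might be sketched like this (the path and capacity are purely illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/dev-pv   # a directory on the node's local disk; fine for development only
```

A PVC requesting 1Gi or less with a matching access mode can then bind to this PV on your development machine.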
Advanced Topics
The simple hostPath PVs mentioned above are appropriate for early development and proof-of-principle work, but they will need to be replaced with more powerful storage solutions before you get to production. This will require you to look into the ‘backend’ components of Kubernetes’ storage solution, namely StorageClasses and CSI plugins:

 StorageClasses
 Container Storage Interface plugins

The Future
In this series, I’ve walked you through the basic Kubernetes tooling you’ll need to containerize a wide variety of applications, and provided you with next-step pointers on where to look for more advanced information. Try working through the stages of containerizing workloads, networking them together, modularizing their config, and provisioning them with storage to get fluent with the ideas above.
Kubernetes provides powerful solutions for all four of these areas, and a well-built app will leverage all four of them. If you’d like more guidance and technical details on how to operationalize these ideas, you can explore the Docker Training team’s workshop offerings, and check back for new Training content landing regularly.
After mastering the basics of building a Kubernetes application, ask yourself, “How well does this application fit the values of portability, scalability and shareability we started with?” Containers themselves are engineered to easily move between clusters and users, but what about the entire application you just built? How can we move that around while still preserving its integrity and not invalidating any unit and integration testing you’ll perform on it?
Docker App sets out to solve that problem by packaging applications in an integrated bundle that can be moved around as easily as a single image. Stay tuned to this blog and Docker Training for more guidance on how to use this emerging format to share your Kubernetes applications seamlessly.
To learn more about Kubernetes storage and Kubernetes in general:

Read the Kubernetes documentation on PersistentVolumes and PersistentVolumeClaims.
Find out more about running Kubernetes on Docker Enterprise and Docker Desktop.
Check out Play with Kubernetes, powered by Docker.

We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First Application in Kubernetes, Part 5: Provisioning Storage appeared first on Docker Blog.