Level Up Security with Scoped Access Tokens

Scoped tokens are here!

Scopes give you more fine-grained control over what access your tokens have to your content and other public content on Docker Hub!

It’s been a while since we first introduced tokens in Docker Hub (back in 2019!), and we are excited to say that accounts on a Pro or Team plan can now apply scopes to their Personal Access Tokens (PATs) as a way to authenticate with Docker Hub.

Access tokens can be used as a substitute for your password on Docker Hub, and adding scopes to these tokens gives you more fine-grained control over what access the machine that logs in with them has. This is great for setting up things like service accounts in CI systems, registry mirrors, or even your local machine, making sure you are not giving away too much access.

PATs are an alternative to using passwords for authentication to Docker Hub (https://hub.docker.com/) when using the Docker command line:

docker login --username <username>

When prompted for your password, you can simply provide a token instead. The other advantages of tokens are that you can create and manage multiple tokens at once, see when they were last used and, if things look wrong, revoke a token’s access. This, together with our API support, makes it easy to manage the rotation of your tokens and improve the security of your supply chain.
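For example, a minimal sketch of a non-interactive login that reads a PAT from an environment variable (the HUB_TOKEN variable name is just an illustration):

$ export HUB_TOKEN=<your-personal-access-token>
$ echo "$HUB_TOKEN" | docker login --username <username> --password-stdin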

Create and Manage Personal Access Tokens in Docker Hub 

Personal access tokens are created and managed in your Account Settings.

Then head to security: 

From here, you can:

Create new access tokens
Modify existing tokens
Delete access tokens

The other way you can manage your tokens is through the Hub APIs. We have Swagger docs for our APIs and the new docs for scoped tokens can be found here:

http://docs.docker.com/docker-hub/api/latest/#tag/access-tokens
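As a rough sketch of what token management through the API can look like, you might create a read-only token with a call along these lines; note that the endpoint path, field names and scope strings here are assumptions based on the docs linked above, and $HUB_JWT stands in for a bearer token you have already obtained:

$ curl -s -X POST https://hub.docker.com/v2/access-tokens \
    -H "Authorization: Bearer $HUB_JWT" \
    -H "Content-Type: application/json" \
    -d '{"token_label": "ci-read-only", "scopes": ["repo:read"]}'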

Scopes available 

When you are creating a token, Pro and Team plan members will now have access to four scopes:

Read, write, delete: This scope allows you to read, write and delete all of the repos that you have access to. (It does not allow you to modify account settings, as password authentication would.)

Read, write: This scope is for read/write access within repos you have access to (all the public content on Hub and your private content). This is the sort of scope to use within a CI system that is also pushing to a repo.

Read only: This scope is read-only for all repos you have access to. It’s great in production, where a system only needs to pull content from your repos to run it.

Public repo read only: This scope is for reading public content only, so nothing from your or your team’s repos. This is great when you want to set up a system that just pulls, say, Docker Official Images or Verified content from Docker Hub.

These scopes are available to Pro accounts (which get 5 tokens) and Team accounts (which give each team member unlimited tokens). Free users can continue to use their single read, write, delete token and revoke/reissue it as they need.

Scoped access tokens, available for Pro and Team plans, level up the security of Docker users’ supply chains by giving you finer control over how you authenticate to Docker Hub. We are excited for you to try scoped tokens out and start giving us your feedback.

Want to learn more about Docker Scoped Tokens? Make sure to follow us on Twitter: @Docker. We’ll be hosting a live Twitter Spaces event on Thursday, Jul 22, 2021 from 8:30 – 9:00 am PST, where you’ll hear from Docker engineers, product managers and a Docker Captain!

If you have feedback or other ideas, remember to add them to our public roadmap. We are always interested in what you would like us to build next!

SAVE THE DATE: Next Community All Hands on September 16th!

We’re excited to announce that our next Community All Hands will take place in exactly two months, on Thursday, September 16th, 2021 @ 8am PST / 5pm CET. The event is a unique opportunity for Docker staff and the broader Docker community to come together for live company updates, product updates, demos, community updates, contributor shout-outs and, of course, a live Q&A.

Based on the great feedback we received from our last all-hands, we’ll stick to a similar format as last time: 

The first hour will focus on company and product updates, with presentations from Docker executive staff including Scott Johnston (CEO @ Docker), Justin Cormack (CTO @ Docker) and Jean-Laurent de Morlhon (VP of Engineering @ Docker). The following two hours will focus on community-led breakout sessions with live demos and workshops in different languages and around specific thematic areas.

As for the virtual event platform we’ll be using, we’re thrilled to be collaborating with Tulu.la again to leverage their unique interface, integrated chat features and rock-solid multi-casting capability. They’ve made incredible improvements to their platform since we last collaborated, and we’re really keen to try out a bunch of new features they’ve shipped that will provide an even more interactive virtual experience for attendees.

Docker Community All-Hands events are all about bringing the community together for three hours of sharing, learning and networking in a very informal, welcoming environment, and this one is going to be our best yet! Stay tuned for more updates on speakers, talks, demos and more as we get closer to the date. In the meantime, make sure to register for the event here!


Docker Captain Take 5 – Lucas Santos

Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Lucas Santos who has been a Docker Captain since 2021. He is a Cloud Advocate at Microsoft and is based in São Paulo, Brazil.

How/when did you first discover Docker?

My first contact with Docker was in 2015, when I worked at a logistics company. Docker made it very easy to deploy applications in customers’ infrastructure. Every customer had a different need and a different architecture that needed to be taken into account. Unfortunately, I had to leave the company before I could make this real, so I took that knowledge to my next company, where we deployed over 100 microservices using Docker images and Docker infrastructure.

What is your favorite Docker command?

I would say it’s “pull”. Because it makes it look as if images are incredibly simple things. However, there’s a whole system behind image pulling and, despite being simple to understand, a simple image pull contains a lot of steps and a lot of aggregated knowledge about containers, filesystems and so much more. And this is all transparent to the user as if it’s magic.

What is your top tip you think other people don’t know for working with Docker?

Some people, especially those who are not familiar with containers, think Docker is just a fancy word for a VM. My top tip for everyone who is working with Docker as a fancy VM: don’t. Docker containers can do so much more than just run simple processes and act as a simple VM. There’s so much we can do using containers and Docker images; it’s an endless world.

What’s the coolest Docker demo you have done/seen ?

I don’t think I have a favorite, but one that really stuck with me all these years was one of the first demos I saw with Docker, back in 2016 or 2017. I don’t remember who the speaker was or where I was, but it stuck with me because it was the first time I saw someone using CI with Docker. In this demo, the speaker not only created images on demand using a Docker container, but also spun up several other containers, one for each part of the pipeline. I had never seen anything like that before.

What have you worked on in the past 6 months that you’re particularly proud of?

I’m proud of my work on my blog, and even prouder of my work on the KEDA HTTP Add-On (https://github.com/kedacore/http-add-on) team. We’ve developed a way to scale applications in a Kubernetes cluster using KEDA native scalers. One of the things I’m proudest of is the DockerCon community room for the Brazilian community; we had amazing engagement, and it was one of the most amazing events I’ve ever helped to organize.

What do you anticipate will be Docker’s biggest announcement this year?

This is a tricky question. I really don’t know what to hope for, technology moves so fast that I literally hope for anything.

What do you think is going to be Docker’s biggest challenge this year?

I think one of the biggest challenges Docker is going to face this year is to reinvent itself and reposition the company in the eyes of the developers.

What are some personal goals for the next year with respect to the Docker community ?

One of my main goals is to close the gap between people who are still beginners with containers and those who are experts, because there is too little documentation about it. Along with that, I plan to make the Brazilian community more aware of container technologies. I can say that my main goal this year is to make everyone understand, deep down, what a Docker container is.

What talk would you most love to see at DockerCon 2021?

I’d love to see more Docker integration with the cloud and new ways to use containers in the cloud.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

I think one of the technologies that I’m most excited about is the future of containers. They evolve so fast that I’m anxious to see what it’ll hold next. Especially in the security field, where I feel there are a lot of things we are yet to see.

Rapid fire questions…

What new skill have you mastered during the pandemic?

Patience, probably. I started to learn IoT, Electronics, Photography, Music and a lot of other things.

Cats or Dogs?

Cats

Salty, sour or sweet?

Salty

Beach or mountains?

Mountains

Your most often used emoji?

 

Docker for Node.js Developers: 5 Things You Need to Know Not to Fail Your Security

Guest post by Liran Tal, Snyk Director of Developer Advocacy 

Docker container images have now been downloaded more than 318 billion times. With millions of applications available on Docker Hub, container-based applications are popular and make for an easy way to consume and publish applications.

That being said, the naive way of building your own Docker Node.js web applications may come with many security risks. So, how do we make security an essential part of Docker for Node.js developers?

Many articles on this topic have been written, yet sadly, without thoughtful consideration of security and production best practices for building Node.js Docker images. This is the focus of my article here and the demos I shared on this recent Docker Build show with Peter McKee. 

Before we jump into the gist of Docker for Node.js and building Docker images, let’s have a look at some frequently asked questions on the topic.

How do I dockerize Node.js applications?

Running your Node.js application in a Docker container can be as simple as copying over the project’s directory and installing all the required npm packages, but there are many security and production related concerns that you might miss. These production-grade tips are laid out in the following guide on containerizing Node.js web applications with Docker, which covers everything from choosing the right Docker base image and using multi-stage builds, to managing secrets safely and properly enabling production-related framework configuration.

This article focuses on the information you need to better understand the impact of choosing the right Node.js Docker base image for your web application and will help you find the most secure Docker image available for your application.  

How is Docker helpful for Node.js developers?

Packaging your Node.js application in a container allows you to bundle your complete application, including the runtime, configuration and OS-level dependencies, and everything required for your web application to run across different platforms and CPU architectures. These bundles are deployable artifacts called container images: software-based bundles that enable easily reproducible builds and give Node.js developers a way to run the same project or product in all environments.

Finally, Docker containers allow you to experiment more easily with new platform releases or other changes without requiring special permissions, or setting up a dedicated environment to run a project.

1. Choose the right Node.js Docker base image for your application

When creating a Docker image for a Node.js project, we build our own application image based on another Docker image, which we pull from Docker Hub. This is what we refer to as the base image. The base image is the building block of the new Docker image you are about to build for your Node.js application.

The selection of a base image is critical because it significantly impacts everything from Docker image build speed to the security and performance of your web application. This is so critical that Docker and Snyk co-wrote this practical guide focused on container image security for developer teams.

It’s quite possible that you are choosing a full-fledged operating system image based on Debian or Ubuntu, because it enables you to utilize all the tooling and libraries available in these images. However, this comes at a price. When a base image has a security vulnerability, you will inherit it in your newly created image. Why would you want to start off on bad terms by defaulting to a big base image that contains many vulnerabilities?

When we look at the base images, many of the security vulnerabilities belong to the Operating System (OS) layer the base image uses. Snyk’s 2019 research, Shifting Docker Security Left, showed that the vulnerabilities brought in by the OS layer can vary largely depending on the flavor you choose.
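As a rough sketch of what this can look like in practice (the image tags, app layout and entrypoint are illustrative, not a prescription), pinning a slim, well-defined base image and using a multi-stage build keeps the final image small and reduces the OS surface you inherit:

# build stage: install production dependencies on a full-featured image
FROM node:14-buster AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --only=production

# runtime stage: copy only what is needed onto a slim base image
FROM node:14-buster-slim
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY . .
USER node
CMD ["node", "server.js"]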

2. Scan your Node.js Docker image during development

Creating a Docker image based on other images, as well as rebuilding them can potentially introduce new vulnerabilities, but there’s a way for you to be on top of it.

Treat the Docker image build process just like any other development related activity. Just as you test the code you write, you should test the Docker images you build. 

These tests include static file checks—also known as linters—to ensure you’re avoiding security pitfalls and other bad patterns in your Dockerfile. We’ve outlined some of these in our Docker image security best practices. If you’re a Node.js application developer you’ll also want to read through this step-by-step 10 best practices to containerize Node.js web applications with Docker.
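One way to add such a check, for example, is the hadolint Dockerfile linter, which you can run through Docker itself; a minimal sketch (hadolint is not mentioned in the guides above, it is just one common choice):

$ docker run --rm -i hadolint/hadolint < Dockerfile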

Connecting your git repositories to Snyk is also an excellent choice. Snyk supports native integrations with GitHub, GitLab, Bitbucket and Azure Repos. Having a git integration means that we can scan your pull requests and annotate them with security information if we find security vulnerabilities. This allows you to put gates in place and deny merging a pull request if it introduces new security vulnerabilities.

If you need more flexibility for your Continuous Integration (CI), or a closely integrated developer experience, meet the Snyk CLI.

The CLI allows you to easily test your Docker container image. Let’s say you’re building a Docker image locally and tagged it as nodejs:notification-v99.9—we test it as follows:

Install the Snyk CLI:

$ npm install -g snyk

Then let the Snyk CLI automatically grab an API token for you with:

$ snyk auth

Scan the local base image:

$ snyk container test nodejs:notification-v99.9

Test results are then printed to the screen, along with information about the CVE and the path that introduces the vulnerability, so you know which OS dependency is responsible for it.

Following is an example output for testing Docker base image node:15:

✗ High severity vulnerability found in binutils
Description: Out-of-Bounds
Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404153
Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
Introduced by your base image (node:15)

✗ High severity vulnerability found in binutils
Description: Integer Overflow or Wraparound
Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404253
Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
Introduced by your base image (node:15)

Organization: snyk-demo-567
Package manager: deb
Target file: Dockerfile
Project name: docker-image|node
Docker image: node:15
Platform: linux/amd64
Base image: node:15
Licenses: enabled

Tested 412 dependencies for known issues, found 554 issues.

Base Image Vulnerabilities Severity
node:15 554 56 high, 63 medium, 435 low

Recommendations for base image upgrade:

Alternative image types
Base Image Vulnerabilities Severity
node:current-buster-slim 53 10 high, 4 medium, 39 low
node:15.5-slim 72 18 high, 7 medium, 47 low
node:current-buster 304 33 high, 43 medium, 228 low

3. Fix your Node.js runtime vulnerabilities in your Docker images

An often overlooked detail, when managing the risk of Docker container images, is the application runtime itself. Whether you’re practicing Docker for Java, or you’re running Docker for Node.js web applications, the Node.js application runtime itself may be vulnerable.

You should be aware of and follow Node.js security releases and the Node.js security policy. Instead of manually keeping up with these, take advantage of Snyk to also find Node.js security vulnerabilities.

To give you more context on security vulnerabilities across the different Node.js base image tags, I scanned some of them with the Snyk CLI and plotted the results in the following logarithmic scale chart:

You can see that:

The default node base image tag, also tagged as node:latest, bundles more than 500 security vulnerabilities, but also introduces 2 security vulnerabilities in the Node.js runtime itself. That should worry you if you’re currently running a Node.js 15 version in production and you didn’t patch or fix it.

The node:alpine base image tag might not be bundling vulnerable OS dependencies in the base image—this is why it’s missing a blue bar—but it still has a vulnerable version of the latest Node.js runtime (version 15).

If you’re running an unsupported version of Node.js—for example, Node.js 10—it is vulnerable and you can see that it is not receiving any security updates.

If you were to choose Node.js version 15, the latest version released at the time of writing this article, you would actually be exposing yourself not only to 561 security vulnerabilities within this container, but also to two security vulnerabilities in the Node.js runtime itself.

We can see the Docker scan test results found in this public image testing URL: https://snyk.io/test/docker/node:15.5.0. You’re welcome to test other Node.js base image tags that you’re using with this public and free Docker scanning service: https://snyk.io/test.

Security is now an integral part of the Docker workflow, with Snyk powering container scanning in Docker Hub and Docker Desktop. In fact, if you’re using Docker as a development platform, you should review our Snyk and Docker Vulnerability Cheatsheet.

If you already have a Docker user account, you can use it to connect to Snyk and quickly import your Docker Hub repositories with up to 200 free scans per month. 

4. Monitor your deployed Docker images for your Node.js applications

Once you have Docker images built, you’re probably pushing them to a Docker registry that keeps track of the images, so that these can be deployed and spun up as a functional container application.

Why should we monitor Docker base images?

If you’re practicing all of the security guidelines we covered so far with scanning and fixing base images, that’s great. However, keep in mind that new security vulnerabilities get discovered all the time. If you have 78 security vulnerabilities in your image now, that doesn’t mean you won’t have 100 tomorrow morning when new CVEs are reported and impact your running containers in production. That’s why monitoring your registry of container images—those that you’re using to deploy containers—is crucial to ensure you will find out about security issues soon and can remediate them.
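For example, a minimal sketch of putting an image under continuous monitoring with the Snyk CLI (the image name is the illustrative one used earlier), so that newly disclosed CVEs show up for you instead of only appearing at scan time:

$ snyk container monitor nodejs:notification-v99.9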

If you’re using a paid Docker Hub registry for your images, you might have already seen the integrated Docker security scanning by Snyk in Docker Hub. 

You can also integrate with many Docker image registries from the Snyk app directly. For example, you can import images from Docker Hub, ACR, ECR, GCR, or Artifactory and then Snyk will scan these regularly for you and alert you via Slack or email about any security issues found:

5. Follow security guidelines and production-grade recommendations for a secure and optimal Node.js Docker image

Congratulations for keeping up with all the security guidelines so far!

To wrap up, if you want to dive deep into security best practices for building optimal Docker images for Node.js and Java applications, check out these resources:

10 Docker Security Best Practices – detailed security practices that you should follow when building Docker base images and when pulling them too, as it also introduces the reader to Docker content trust.

Are you a Java developer? You’ll find this resource valuable: Docker for Java developers: 5 things you need to know not to fail your security.

10 best practices to containerize Node.js web applications with Docker – If you’re a Node.js developer you are going to love this step-by-step walkthrough, showing you how to build secure and performant Docker base images for your Node.js applications.

Start testing and fixing your container images with Snyk and your Docker ID.

Video: Docker Build: Simplify Cloud-native Development with Docker & DAPR

Docker’s Peter McKee hosts serverless wizard and open sorcerer Yaron Schneider for a high-octane tour of DAPR (as in Distributed Application Runtime) and how it leverages Docker. A principal software engineer at Microsoft in Redmond, Washington, Schneider co-founded the DAPR project to make developers’ lives easier when writing distributed applications.

DAPR, which Schneider defines as “essentially a sidecar with APIs,” helps developers build event-driven, resilient distributed applications on-premises, in the cloud, or on an edge device. Lightweight and adaptable, it tries to target any type of language or framework, and can help you tackle a host of challenges that come with building microservices while keeping your code platform agnostic.

How do I reliably handle state in a distributed environment? How do I deal with network failures when I have a lot of distributed components in my ecosystem? How do I fetch secrets securely? How do I scope down my applications? Tag along as Schneider, who also co-founded KEDA (Kubernetes-based event-driven autoscaling), demos how to “dapr-ize” applications while he and McKee discuss their favorite languages and Schneider’s collection of rare Funko Pops!

Watch the video (Duration 0:50:36):


DockerCon 2021: Women in Tech Panel

Go code, learn along the way and don’t be afraid to make mistakes. That was the inspirational message from an impressive all-women group of panelists at this year’s DockerCon Live 2021.

The advice — one of many gems shared in a Women in Tech live panel at the May 27 event — was aimed at girls and women who are thinking of becoming software developers but are hesitant. The panelists themselves were seasoned developers and Docker Captains hailing from different fields within the technology and developer industry. They included:

Julie Lerman, Docker Captain and Software Coach
Melissa McKay, Docker Captain and Developer Advocate, JFrog
Jocelyn Matthews, Community Manager, Storj
Dieu Cao, Product Manager, Docker
Hema Ganapathy, Product Marketing, Docker

The panelists dove into the topic of Docker containers and how they make software development much more efficient, as well as use cases and applications for Docker containers. The women spoke of using SQL Server for Linux in a container, using Docker Desktop to see clusters without having to use the CLI, and using Docker for automated tests launched in a container.

But it wasn’t all tech talk. The panelists also led a fascinating, interactive conversation on what it’s like to be a woman in a male-dominated industry and how those experiences shaped them during their careers. Each panelist shared her passion for coding and how that influenced her education and career choices. Some got hooked on coding after their first programming class, while others taught themselves to code at home. Whichever path they took, the women were united in their love of using Docker containers.

The panelists also explored what constitutes a positive developer experience. All agreed that a helpful, thriving community, like Docker’s development community, is a huge part of any developer’s success.

The panel was a reflection of Docker’s commitment to fully embracing diversity and supporting developers from all backgrounds. But don’t worry if you missed it. You can still watch it on demand for great insights about Women in Tech using Docker technology.

Improved Volume Management, Docker Dev Environments and more in Desktop 3.5

Docker Desktop 3.5 is here and we can’t wait for you to try it!

We’ve introduced some exciting new features including improvements to the Volume Management interface, a tech preview of Docker Dev Environments, and enhancements to Compose V2.

Easily Manage Files in your Volumes

Volumes can quickly take up local disk storage and without an easy way to see which ones are being used or their contents, it can be hard to free up space. This is why in the release of Docker Desktop 3.5 we’ve made it even easier for Pro and Team users to explore the directories and files inside of a volume. We’ve added in the modified date, kind, and size of files so that you can quickly identify what is taking up all that space and decide if you can part with it.

Once you’ve identified a file or directory inside a volume you no longer need, you can remove them straight from the Dashboard to free up space. We’ve also introduced a way to download files locally using “Save As” so that you can easily back up files before removing them.

We’re continuing to add more to volume management like the ability to share your volumes with your colleagues. Have ideas on how we might make managing volumes easier? We’d love you to help us prioritize by adding your use cases on our public roadmap. 

Docker Dev Environments

In 3.5 we released a technical preview of Docker Dev Environments. Check out our blog to learn more about why we built this and how it works.

Docker Compose V2 Beta Rollout Continues

We’re continuing to roll out the beta of Docker Compose V2, which allows you to seamlessly run the compose command in the Docker CLI. We are working towards launching Compose v2 as a drop-in replacement for docker-compose, so that no changes are required in your code to use this new functionality. We have also introduced the following new features:

Added support for container links and external links to facilitate communication between containers.

Introduced the docker compose logs --since and --until options, enabling you to search logs by date (see the example below).

`docker compose config --profiles` now lists all defined profiles, so you can see which additional services are defined in a single docker-compose.yml file. Profiles allow you to adjust the Compose application model for various usages and environments by selectively enabling services.
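A quick sketch of what these new options look like in use (the service name and timestamp are illustrative, not taken from the release notes):

$ docker compose logs --since 2021-06-28T00:00:00 web
$ docker compose config --profiles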

You can test this new functionality by running the docker compose command, dropping the - in docker-compose. We are continuing to roll this out gradually; 31% of Compose users are already using this beta version. You’ll be notified if you are using the new docker compose. You can opt in to running Compose V2 with docker-compose by running the docker-compose enable-v2 command or by updating your Docker Desktop’s Experimental Features settings.

If you run into any issues using Compose V2, simply run the docker-compose disable-v2 command, or turn it off using Docker Desktop’s Experimental Features. Let us know your feedback on the new ‘compose’ command by creating an issue in the Compose-CLI GitHub repository.

Warning for Images incompatible with Apple Silicon Machines

Docker Dashboard will now warn you if an image you are using does not match your architecture on Apple Silicon. If you are using Desktop on Apple Silicon and an amd64 image is run under qemu emulation, it may have poor performance or potentially crash. While we are promoting the usage of multi-architecture images, we want to make sure you are aware when an image you are using is running under emulation because it does not match your machine’s native architecture. If this is the case, a warning will appear on the Containers / Apps page.

Less Disruptive Requests for Feedback

And finally, we’ve heard your feedback on how we ask you for your feedback. We’ve changed the way that the feedback form works so that it won’t pop up while you’re in the middle of working. When it’s time, the feedback form will only show up if you click on the whale menu. We do appreciate the time you spend to rate Docker Desktop. Your input helps us make changes like this! 

See the full release notes for Docker Desktop for Mac and Docker Desktop for Windows for the complete set of changes in Docker Desktop 3.5. 

We can’t wait for you to try Volume Management and the preview of Dev Environments! To get started simply download or update to Docker Desktop 3.5. To start collaborating with your teammates on your dev environments and digging into the contents of your volumes, upgrade to a Pro or Team subscription today!

From Compose to Kubernetes with Okteto

Today we’re featuring a blog from Pablo Chico de Guzmán at Okteto, who writes about how developers’ love of Docker Compose inspired Okteto to create Okteto Stacks, a fully compatible Kubernetes backend for Docker Compose.

It has been almost 7 years since the Docker Compose v1.0.0 release went live. Since that time, Docker Compose has become the dominant tool for local development environments. You run one command and your local development environment is up and running. And it works the same way on any OS, and for any application.

At the same time, Kubernetes has grown to become the dominant platform to deploy containers in production. Kubernetes lets you run containers on multiple hosts for fault tolerance, monitors the health of your applications, and optimizes your infrastructure resources. There is a rich ecosystem around it, and all major providers have native support for Kubernetes: GKE, AKS, EKS, Openshift…

We’ve interacted with thousands of developers as we build Okteto (a cloud-native platform for developers). And we kept hearing the same complaint: there’s a very steep learning curve when you go from Docker Compose to Kubernetes. At least, that was the case until today. We are happy to announce that you can now run your Docker Compose files in Kubernetes with Okteto!

Why developers need Docker Compose in Kubernetes

Developers love Docker Compose, and they love it for good reasons. A Docker Compose file for five microservices might be around 30 lines of yaml, but the same application in Kubernetes would be 500+ lines of yaml and about 10-15 different files. Also, the Docker Compose CLI rebuilds and redeploys containers when needed. In Kubernetes, you need additional tools to build your images, tag them, push them to a Docker Registry, update your Kubernetes manifests, and redeploy them. It’s too much friction for something that’s wholly abstracted away by Docker Compose.
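To make that concrete, here is a trimmed-down sketch of what such a Compose file can look like (service names and images are illustrative); the Kubernetes equivalent would need Deployments, Services, PersistentVolumeClaims and Ingress resources spread across many files:

services:
  vote:
    build: vote
    ports:
      - "8080:8080"
  redis:
    image: redis
    volumes:
      - redis:/data
volumes:
  redis: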

But there are some use cases where running your Docker Compose files locally presents some challenges. For example, you might need to run dozens of microservices that exhausts your local CPU/Memory resources, you might need access to GPUs to develop a ML application, or you might want to integrate with a service deployed in a remote Kubernetes cluster. For these scenarios, running Docker Compose in Kubernetes is the perfect solution. This way, developers get access to on demand CPU/Memory/GPU resources, direct access to other services running in the cluster, and more realistic end-to-end integration with the cluster configuration (ingress controllers, SSL termination, monitoring tools, secret manager tools…), while still using the application definition format they know and love.

Docker Compose Specification to the rescue

Luckily, the Docker Compose Specification was open-sourced in 2020. This allowed us to implement Okteto Stacks, a fully compatible Kubernetes backend for Docker Compose. Okteto Stacks are unique with respect to other Kubernetes backend implementations of the Docker Compose Specification because they provide:

In-cluster builds for better performance and caching behavior.
Ingress Controller integration and SSL termination for public ports.
Bidirectional synchronization between your local filesystem and your containers in Kubernetes.

Okteto’s bidirectional synchronization is pretty handy: it reloads your application on the cluster while you edit your code locally. It’s equivalent to mounting your code inside a container using Docker Compose host volumes, but for containers running in a remote cluster.

How to get started

Okteto Stacks are compatible with any Kubernetes cluster (you will need to install the Okteto CLI and a cluster-side Kubernetes application). But the easiest way to get started with Okteto Stacks is Okteto Cloud, the SaaS version of our cloud-native development platform.

To show the possibilities of Okteto Stacks, let’s deploy the famous Voting App. My team @Tutum developed the Voting App for the DockerCon keynote (EU 2015) to showcase the power of Tutum (later acquired by Docker that year). The demo gods were appeased with an offering of grapes that day. And I hope they are appeased again as you follow this tutorial:

First, install the Okteto CLI if you haven’t done it yet.

Next, configure access to your Okteto Cloud namespace. To do that, execute the following command:

$ okteto namespace

Authentication required. Do you want to log into Okteto? [y/n]: y
What is the URL of your Okteto instance? [https://cloud.okteto.com]:
Authentication will continue in your default browser
✓ Logged in as cindy
✓ Updated context 'cloud_okteto_com' in '/Users/cindy/.kube/config'

Get a local version of the Voting App by executing the following commands:

$ git clone https://github.com/okteto/compose-getting-started
$ cd compose-getting-started

Execute the following command to deploy the Voting App:

$ okteto stack deploy --wait

✓ Created volume 'redis'
✓ Deployed service 'vote'
✓ Deployed service 'redis'
✓ Stack 'compose-getting-started' successfully deployed

The deploy command will create the necessary deployments, services, persistent volumes, and ingress rules needed to run the Voting App. Go to the Okteto Cloud dashboard and you will get the URL of the application.

Now that the Voting App is running, let’s make a small change to show you the full development workflow.

Instead of our pet, let’s ask everyone to vote on our favorite lunch item. Open the “vote/app.py” file in your IDE and modify lines 16-17. Save your changes.

def getOptions():
    option_a = "Tacos"
    option_b = "Burritos"

Once you’re happy with your changes, execute the following command:

$ okteto up

✓ Images successfully pulled
✓ Files synchronized

Namespace: cindy
Name: vote

* Serving Flask app 'app' (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://10.8.4.205:8080/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 139-182-328

Check the URL of your application again. Your code changes were instantly applied. No commit, build, or push required. And from this moment, any changes done from your IDE will be immediately applied to your application!

That’s all!

Go to the Okteto Stacks docs to learn more about our Docker Compose Kubernetes backend. We’re just starting, so we’d love to hear your thoughts on this.

Happy coding!

Secure Software Supply Chain Best Practices

Last month, the Cloud Native Computing Foundation (CNCF) Security Technical Advisory Group published a detailed document about Software Supply Chain Best Practices. You can get the full document from their GitHub repo. This was the result of months of work from a large team, with special thanks to Jonathan Meadows and Emily Fox. As one of the CNCF reviewers I had the pleasure of reading several iterations and seeing it take shape and improve over time.

Supply chain security has gone from a niche concern to something that makes headlines, in particular after the SolarWinds “Sunburst” attack last year. Last week it was an important part of United States President Joe Biden’s Executive Order on Cybersecurity. So what is it? Every time you use software that you didn’t write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you thought it was, and that it is trustworthy, not hostile. Usually both of these things are true, but when they go wrong, as when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious. As people have hardened their production environments, attacking software as it is written, assembled, built or tested, before production, has become an easier route.

The CNCF Security paper started after discussions I had with Jonathan about what work needs to be done to make secure supply chains easier and more widely adopted. The paper does a really good job in explaining the four key principles:

First, every step in a supply chain should be “trustworthy” as a result of a combination of cryptographic attestation and verification.

Second, automation is critical to supply chain security. Automating as much of the software supply chain as possible can significantly reduce the possibility of human error and configuration drift.

Third, the build environments used in a supply chain should be clearly defined, with limited scope.

Fourth, all entities operating in the supply chain environment must be required to mutually authenticate using hardened authentication mechanisms with regular key rotation.

In simpler language, this means that you need to be able to securely trace all the code you are using, which exact versions you are using and where they came from, in an automated way so that there are no errors. Your build environments should be minimal, secure and well defined, i.e. containerised. And you should be making sure everything is authenticated securely.
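One small, concrete step toward this kind of traceability, offered here as an illustration rather than a recommendation from the paper, is pinning base images by digest in your Dockerfiles so that every build pulls exactly the content you verified:

# Pin the base image by digest, not just a mutable tag (the digest here is a placeholder)
FROM node:16@sha256:<digest-you-have-verified>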

The majority of people do not meet all these criteria, making exact traceability difficult. The report has strong recommendations for environments that are more sensitive, such as those dealing with payments and other sensitive areas. Over time these requirements will become much more widely used because the risks are serious for everyone.

At Docker we believe in the importance of a secure software supply chain and we are going to bring you simple tools that improve your security. We already set the standard with Docker Official Images. They are the most widely trusted images that developers and development teams use as a secure basis for their application builds. Additionally, we have CVE scanning in conjunction with Snyk, which helps identify the many risks in the software supply chain. We are currently working with the CNCF, Amazon and Microsoft on the Notary v2 project to update container signing, which we will ship in a few months. This is a revamp of Notary v1 and Docker Content Trust that makes signatures portable between registries and will improve usability, and it has broad industry consensus. We have more plans to improve security for developers and would love your feedback and ideas in our roadmap repository.
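As a small illustration of the existing signing tooling mentioned above, Docker Content Trust can be enabled per shell session so that image pulls only succeed when a signed tag is available (the image tag is just an example):

$ export DOCKER_CONTENT_TRUST=1
$ docker pull node:16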

Tech Preview: Docker Dev Environments

A couple of weeks ago at DockerCon we showed off a new feature that we are building – Docker Dev Environments. Today we are excited to announce the release of the Technical Preview of Dev Environments as part of Docker Desktop 3.5.

At Docker we have been looking at how teams collaborate on projects using Git. We know that Git is a powerful tool for version control of source code, but it doesn’t solve a lot of the challenges that exist when developers try to collaborate. Developers still suffer from ‘it works on my machine’ when they are trying to work together on changes, as dependencies can differ. Developers may also need to move between Git branches to achieve this and often don’t bother, simply looking at code in the browser rather than running it. This means they lack the context and the tools needed to really validate that the code is good, and all of this collaboration happens right at the end of the creation process.

To address this, we are excited to release our preview of Dev Environments. With Dev Environments developers can now easily set up repeatable and reproducible development environments by keeping the environment details versioned in their SCM along with their code. Once a developer is working in a Development Environment, they can share their work-in-progress  code and dependencies in one click via the Docker Hub. They can then switch between their developer environments or their teammates’ environments, moving between branches to look at work-in-progress  changes without moving off their current Git branch. This makes reviewing PRs as simple as opening a new environment. Dev Environments use tools built into code editors that allow Docker to access code mounted into a container rather than on the developer’s local host. This isolates the tools, files and running services on the developer’s machine allowing multiple versions of them to exist side by side, also improving file system performance!  And we have built this experience on top of Compose rather than adding another manifest for developers to worry about or look after. 

With this preview we provide you with the ability to get started with a Dev Environment locally, either by using our one-click creation process or by providing a Compose file as part of a .docker folder. This will then allow you to run a Dev Environment on your local machine and give you access to your git credentials inside it. With Compose you will be able to use the other services related to your Dev Environment, allowing you to develop in the full context of the application. We have the first part of the sharing experience for team members as well, allowing you to share a Dev Environment with your team so they can see your code changes and dependencies together in just one click.
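As a rough sketch of what the Compose-file route could look like (the folder layout, file name and service definition here are assumptions for illustration; check the Dev Environments preview docs for the exact conventions):

# .docker/docker-compose.yaml (illustrative)
services:
  app:
    build: ..
    ports:
      - "8080:8080"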

There are some areas of the first release that we are going to be improving  in the coming  weeks as we build the experience out to make it even easier to use. 

When it comes to working with your team, we will be improving  this to make it easier to send someone your work-in-progress changes. Rather than having to create a unique name for your changes each time, we will let you instead share this with one click – keeping everything synced automatically via Docker Hub for your team. This means your team can see your shared Dev Environment in their UI as soon as you share it. They will also be able to swap out the existing services in their Compose stacks for the one you have shared, moving seamlessly between them. 

We know that developers love Compose and that we can leverage Compose features to make it easier to set up your Dev Environments (things like profiles, setting a GOPATH, defining debug ports, supporting mounts, etc.). We will be extending what we have in Compose over the coming weeks; if there are particular features you think we should support, please let us know!

We will also be looking at other areas of the experience like support for other IDEs, new creation flows and better ways to set up new Dev Environments. 

Lastly we will be looking at all the feedback you as a community give us on other areas we need to improve! If you have feedback on these items or have other areas you think we should be focusing on ready for our GA release, please let us know as part of our feedback repo.

We are really excited about the preview of Dev Environments! If you want to check them out simply download or upgrade Docker Desktop 3.5 and check out the new preview tab. To get started sharing Dev Environments with your team and moving your feedback process back into development rather than at the time of review, upgrade to one of Docker’s team plans today.