Docker Joins the AWS ISV Accelerate Program

Docker, a long-standing partner of Amazon Web Services (AWS), has offered products on the AWS Marketplace for years. After significant growth in Docker orders through the AWS Marketplace, Docker achieved elevated status at the beginning of 2024 by joining the exclusive tier of ISVs within the AWS ISV Accelerate Program.

This strategic move opens a new chapter for Docker and our customers, who will benefit from an integrated go-to-market approach between Docker and AWS.

Advantages of embracing the AWS ISV Accelerate Program include:

Strategic collaboration

The program fosters a deeper collaboration between Docker and AWS, creating a unique platform for both parties to work closely with AWS experts. This collaboration is pivotal for optimizing Docker’s solutions for AWS services, ensuring smooth deployment and an enhanced onboarding experience for customers.

Market expansion

Leveraging existing agreements with AWS, customers can easily procure Docker’s offerings, capitalizing on benefits such as the AWS Enterprise Discount Program (EDP) and simplified procurement processes. With both go-to-market teams working closely together, customers will benefit from more tailored solutions.

Technical enablement

The AWS ISV Accelerate Program provides Docker with a wealth of technical resources and guidance, empowering us to enhance product offerings tailored for AWS customers. From comprehensive training on AWS services to best practices and meticulous architectural reviews, Docker is well-equipped to ensure our solutions are compatible with and optimized for the cloud.

Conclusion

Docker’s entry into the AWS ISV Accelerate Program marks a significant milestone in our collaborative journey with AWS. The program’s robust support structure, coupled with AWS’s unparalleled global reach, propels Docker into a position where we are accelerating our growth and actively supporting our customers in delivering an exceptional developer experience.

If you are a customer eager to explore the benefits of this dynamic partnership, we encourage you to reach out to your dedicated Docker Account Executive or contact sales. Unleash the power of this collaboration and deliver great developer experiences to your teams.

Learn more

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Security Advisory: Multiple Vulnerabilities in runc, BuildKit, and Moby

We at Docker prioritize the security and integrity of our software and the trust of our users. Security researchers at Snyk Labs recently identified and reported four security vulnerabilities in the container ecosystem. One of the vulnerabilities, CVE-2024-21626, concerns the runc container runtime, and the other three affect BuildKit (CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653). We want to assure our community that our team, in collaboration with the reporters and open source maintainers, has been diligently working on coordinating and implementing necessary remediations.

We are committed to maintaining the highest security standards. We will publish patched versions of runc, BuildKit, and Moby on January 31 and release an update for Docker Desktop on February 1 to address these vulnerabilities.  Additionally, our latest Moby and BuildKit releases will include fixes for CVE-2024-23650 and CVE-2024-24557, discovered respectively by an independent researcher and through Docker’s internal research initiatives.

Versions impacted:

runc: <= 1.1.11

BuildKit: <= 0.12.4

Moby (Docker Engine): <= v25.0.1 and <= v24.0.8

Docker Desktop: <= 4.27.0

These vulnerabilities can only be exploited if a user actively engages with malicious content by incorporating it into the build process or running a container from a suspect image (particularly relevant for the CVE-2024-21626 container escape vulnerability). Potential impacts include unauthorized access to the host filesystem, compromising the integrity of the build cache, and, in the case of CVE-2024-21626, a scenario that could lead to full container escape. 

We strongly urge all customers to prioritize security by applying the available updates as soon as they are released. Timely application of these updates is the most effective measure to safeguard your systems against these vulnerabilities and maintain a secure and reliable Docker environment.

What should I do if I’m on an affected version?

If you are using affected versions of runc, BuildKit, Moby, or Docker Desktop, make sure to update to the latest versions as soon as patched versions become available:

Patched versions:

runc: >= 1.1.12

BuildKit: >= 0.12.5

Moby (Docker Engine): >= v25.0.2 and >= v24.0.9

Docker Desktop: >= 4.27.1
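
To confirm which versions you are running before and after updating, you can check the relevant components from the command line. A minimal sketch; the exact output fields vary by platform and installation method, and for Docker Desktop users the Desktop version itself is the number that matters:

docker version                        # Docker Engine (Moby) and client versions
runc --version                        # runc version, when runc is on the host PATH
docker buildx inspect --bootstrap     # active builder details, including the BuildKit version on most drivers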

If you are unable to update to an unaffected version promptly after it is released, follow these best practices to mitigate risk: 

Only use trusted Docker images (such as Docker Official Images).

Don’t build Docker images from untrusted sources or untrusted Dockerfiles.

If you are a Docker Business customer using Docker Desktop and unable to update to v4.27.1 immediately after it’s released, make sure to enable Hardened Docker Desktop features such as:

Enhanced Container Isolation, which mitigates the impact of CVE-2024-21626 in the case of running containers from malicious images.

Image Access Management, and Registry Access Management, which give organizations control over which images and repositories their users can access.

For CVE-2024-23650, CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653, avoid using a BuildKit frontend from an untrusted source. A frontend image is usually specified as the # syntax line in your Dockerfile, or with the --frontend flag when using the buildctl build command.
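
For example, keeping the syntax directive pointed at the official Dockerfile frontend (optionally pinned to a specific version or digest) avoids pulling a frontend from an unknown source. A minimal sketch:

# syntax=docker/dockerfile:1
FROM alpine:3.19
RUN echo "built with the official Dockerfile frontend"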

To mitigate CVE-2024-24557, make sure to either use BuildKit or disable caching when building images. From the CLI this can be done via the DOCKER_BUILDKIT=1 environment variable (default for Moby >= v23.0 if the buildx plugin is installed) or the --no-cache flag. If you are using the HTTP API directly or through a client, the same can be done by setting nocache to true or version to 2 for the /build API endpoint.
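
For example, either of the following build invocations avoids the vulnerable classic-builder cache path; a minimal sketch with a placeholder image tag:

# Build with BuildKit enabled (already the default on recent Docker Engine versions)
DOCKER_BUILDKIT=1 docker build -t myapp:latest .

# Or stay on the classic builder but disable caching entirely
DOCKER_BUILDKIT=0 docker build --no-cache -t myapp:latest .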

Technical details and impact

CVE-2024-21626 (High)

In runc v1.1.11 and earlier, due to certain leaked file descriptors, an attacker can gain access to the host filesystem by causing a newly-spawned container process (from runc exec) to have a working directory in the host filesystem namespace, or by tricking a user to run a malicious image and allow a container process to gain access to the host filesystem through runc run. The attacks can also be adapted to overwrite semi-arbitrary host binaries, allowing for complete container escapes. Note that when using higher-level runtimes (such as Docker or Kubernetes), this vulnerability can be exploited by running a malicious container image without additional configuration or by passing specific workdir options when starting a container. The vulnerability can also be exploited from within Dockerfiles in the case of Docker.

The issue has been fixed in runc v1.1.12.

CVE-2024-23651 (High)

In BuildKit <= v0.12.4, two malicious build steps running in parallel sharing the same cache mounts with subpaths could cause a race condition, leading to files from the host system being accessible to the build container. This will only occur if a user is trying to build a Dockerfile of a malicious project.

The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23652 (High)

In BuildKit <= v0.12.4, a malicious BuildKit frontend or Dockerfile using RUN –mount could trick the feature that removes empty files created for the mountpoints into removing a file outside the container from the host system. This will only occur if a user is using a malicious Dockerfile.

The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23653 (High)

In addition to running containers as build steps, BuildKit also provides APIs for running interactive containers based on built images. In BuildKit <= v0.12.4, it is possible to use these APIs to ask BuildKit to run a container with elevated privileges. Normally, running such containers is only allowed if special security.insecure entitlement is enabled both by buildkitd configuration and allowed by the user initializing the build request.

The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23650 (Medium)

In BuildKit <= v0.12.4, a malicious BuildKit client or frontend could craft a request that could lead to BuildKit daemon crashing with a panic.

The issue will be fixed in BuildKit v0.12.5.

CVE-2024-24557 (Medium)

In Moby <= v25.0.1 and <= v24.0.8, the classic builder cache system is prone to cache poisoning if the image is built FROM scratch. Additionally, changes to some instructions (most importantly HEALTHCHECK and ONBUILD) would not cause a cache miss. An attacker with knowledge of the Dockerfile someone is using could poison their cache by making them pull a specially crafted image that would be considered a valid cache candidate for some build steps.

The issue will be fixed in Moby >= v25.0.2 and >= v24.0.9.

How are Docker products affected? 

The following Docker products are affected. No other products are affected by these vulnerabilities.

Docker Desktop

Docker Desktop v4.27.0 and earlier are affected. Docker Desktop v4.27.1 will be released on February 1 and includes patched runc, BuildKit, and dockerd binaries. In addition to updating to this new version, we encourage all Docker users to exercise diligence with Docker images and Dockerfiles and ensure you only use trusted content in your builds.

Docker Build Cloud

Any new Docker Build Cloud builder instances will be provisioned with the latest Docker Engine and BuildKit versions after fixes are released and will, therefore, be unaffected by these CVEs. Docker will also be rolling out gradual updates to any existing builder instances.

Security at Docker

At Docker, we know that part of being developer-obsessed is providing secure software to developers. We appreciate the responsible disclosure of these vulnerabilities. If you’re aware of potential security vulnerabilities in any Docker product, report them to security@docker.com. For more information on Docker’s security practices, see our website.

Advisory links

runc

CVE-2024-21626

BuildKit

CVE-2024-23650

CVE-2024-23651

CVE-2024-23652

CVE-2024-23653

Moby

CVE-2024-24557

Source: https://blog.docker.com/feed/

EJBCA and Docker — Streamlining PKI Management and TLS Certificate Issuance

This post was contributed by Keyfactor.

Docker has revolutionized how we develop and deploy modern applications, making it easier and more efficient for developers to create and manage containerized applications.

If you’re in the world of enterprise-level security, public key infrastructure (PKI), and certificate management, you might already be familiar with EJBCA, an open source tool for implementing PKIs. In this blog post, we will explore how to deploy EJBCA as a Docker container, making your infrastructure setup more modern, efficient, and flexible for your security and certificate management needs. 

Why deploy EJBCA as a Docker container?

EJBCA is a robust PKI and certificate management solution, but sometimes setting up and managing it can be challenging, especially if you need to deploy it from source. Deploying EJBCA as a Docker container can simplify the process and offer various benefits, including:

Portability — Docker containers are lightweight and portable, containing all the software needed to run an application. Once you have an EJBCA container image, you can run it on any system that supports Docker, ensuring consistency across environments.

Easy scaling — Containers make it straightforward to scale your EJBCA instance. You can spin up multiple containers with ease, and orchestration tools like Kubernetes can manage the scaling for you.

Simplified deployment — With EJBCA in a Docker container, you can deploy and upgrade it quickly without worrying about complex installation procedures or dependencies such as Java, database drivers, the WildFly application server, and the operating system. A from-source installation of EJBCA requires all of these components, whereas the container ships with these critical dependencies already installed and configured.

Advantages of open source PKI and EJBCA

When it comes to implementing a PKI solution, EJBCA’s open source nature provides distinct advantages over other software tools or utilities. Tools such as OpenSSL may serve well for testing, but they often prove inadequate for production. A Microsoft PKI or other PKI service tailored to specific use cases can be robust but often limited in flexibility, scalability, interoperability, and compliance.

EJBCA is one of the most used open source PKIs in the world. It can be built from source using the code from GitHub or be deployed as a Docker container. Here are advantages that you can expect from EJBCA:

Comprehensive feature set — EJBCA offers a comprehensive feature set for certificate management, including certificate issuance, revocation, and key management for many use cases. You can run hundreds of CAs in one single installation. This is effective compared to, for example, Microsoft ADCS, which can run only one CA per server installation. One installation of EJBCA can also support multiple use cases.

Robust certificate authority — EJBCA functions as a full-fledged certificate authority (CA), registration authority, and validation authority, including support for both online certificate status protocol (OCSP) and certificate revocation lists (CRLs), essential for being able to support a serious PKI. 

Scalability and automation — In production scenarios, scalability is critical when EJBCA is under load and more instances are needed to serve PKI operations. EJBCA can be easily scaled using Docker orchestration tools, Helm charts, and by leveraging EJBCA open source Ansible playbooks, ensuring that your PKI infrastructure can handle the demands of your organization. 

User management and role-based access control — EJBCA offers user management and role-based access control, allowing you to define who can perform specific tasks within your PKI. 

Active community and support — EJBCA benefits from an active open source community and professional support options for the EJBCA Enterprise editions, ensuring you can find the right assistance when needed. EJBCA Enterprise edition is available as software and hardware appliances, Cloud AWS and Azure Marketplace options, and SaaS.

Compliance and auditing — EJBCA is designed with compliance and auditing in mind, helping you meet regulatory requirements and maintain a robust and signed audit trail. For example, you can enforce certificate policy for each CA to prevent the CA from signing any type of certificate signing request (CSR) that is submitted.

Getting started

Let’s walk through the process of deploying EJBCA as a Docker container. You can learn more through our introductory video on YouTube.

Step 1: Install Docker

You must have Docker installed on your system. 

Step 2: Pull the EJBCA Docker image

EJBCA provides an official Docker image, making it easy to get started. You can pull the image using the following command:

docker pull keyfactor/ejbca-ce:latest

Step 3: Run EJBCA container

Now that you have the EJBCA image, you can run it as a container:

docker run -d --rm --name ejbca-node1 -p 80:8080 -p 443:8443 -h "127.0.0.1" --memory="2048m" --memory-swap="2048m" --cpus="2" keyfactor/ejbca-ce:latest

This command will start the EJBCA container in the background, and it will be accessible at https://localhost:443/ejbca/adminweb.

Step 4: Access the EJBCA web console

Open your web browser and navigate to https://localhost/ejbca/adminweb to access the EJBCA web console.

Custom installation configuration

If you need to customize your EJBCA instance, you can mount a configuration file or use an external database with the container. This step allows you to tailor the PKI to your specific needs.
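
As a rough sketch, a customized launch might mount a persistent volume and point the container at an external database. The host path, mount point, and environment variable names below are illustrative assumptions; check the EJBCA container documentation for the options your image version actually supports:

# Host path, mount point, and variable names are assumptions for illustration only
docker run -d --name ejbca-custom -p 443:8443 \
  -v /opt/ejbca-data:/mnt/persistent \
  -e DATABASE_JDBC_URL="jdbc:mariadb://db.example.internal:3306/ejbca" \
  -e DATABASE_USER="ejbca" \
  -e DATABASE_PASSWORD="changeit" \
  keyfactor/ejbca-ce:latest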

Issuing a TLS certificate as a PKI admin  

Private TLS certificates play a crucial role in authenticating users and devices within closed network environments such as enterprise networks and business applications. When public trust isn’t necessary, opting for private TLS certificates is the most cost-efficient and convenient option. Yet it’s crucial to take the setup seriously: the PKI software setup and certificate issuance process hold significance even in private trust environments.

You can generate TLS client or server certificates easily by following our best practices video tutorials. EJBCA allows you to initiate on a small scale and expand as your use case evolves. This series commences with a guide on setting up EJBCA as a Docker container. Read more and find additional options for how to issue your TLS certificates with EJBCA on the website.

Conclusion

Deploying EJBCA as a Docker container simplifies the management of your PKI setup. It provides portability, isolation, and scalability, making it easier to handle security and certificate management. Whether you are a security professional or a developer working on PKI solutions, using Docker to run EJBCA can streamline your workflow and enhance your security practices.

In this blog post, we’ve covered the basics of setting up EJBCA as a Docker container and explained how a PKI admin can configure the software to issue TLS certificates. We encourage you to explore the EJBCA documentation and tutorial videos for more advanced configurations and guidance on issuing certificates for your products or workloads. With the power of Docker and EJBCA, you can take control of your certificate authority and PKI efficiently and securely.

Now, go ahead and secure your digital world with EJBCA and Docker! If you have any questions or want to share your experiences, connect with us on the Keyfactor discussions page.

Learn more

Check out EJBCA CE on Docker Hub.

Visit the Open source EJBCA PKI product page.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Announcing Docker Scout Software Supply Chain Solution for Open Source Projects

As we announced at DockerCon, we’re now providing a free Docker Scout Team subscription to all Docker-Sponsored Open Source (DSOS) program participants. 

If your open source project participates in the DSOS program, you can start using Docker Scout today. If your open source project is not in the Docker-Sponsored Open Source program, you can check the requirements and apply.

For other customers, Docker Scout is already generally available. Refer to the Docker Scout product page to learn more.

Why use Docker Scout?

Docker Scout is a software supply chain solution designed to make it easier for developers to identify and fix supply chain issues before they hit production. 

To do this, Docker Scout:

Gives developers a centralized view of the tools they already use to see all the critical information they need across the software supply chain 

Makes clear recommendations on how to address those issues, including for security issues and opportunities to improve reliability efforts

Provides automation that highlights new defects, failures, or issues

Docker Scout allows you to prevent and address flaws where they start. By identifying issues earlier in the software development lifecycle and displaying information in Docker Desktop and the command line, Docker Scout reduces interruptions and rework.

Supply chain security is a big focus in software development, with attention from enterprises and governments. Software is complex, and when security, reliability, and stability issues arise, they’re often the result of an upstream library. So developers don’t just need to address issues in the software they write but also in the software their software uses.

These concerns apply just as much to open source projects as proprietary software. But the focus on improving the software supply chain results in an unfunded mandate for open source developers. A research study by the Linux Foundation found that almost 25% of respondents said the cost of security gaps was “high” or “very high.” Most open source projects don’t have the budget to address these gaps. With Docker Scout, we can reduce the burden on open source projects.

Conclusion

At Docker, we understand the importance of helping open source communities improve their software supply chain. We see this as a mutually beneficial relationship with the open source community. A well-managed supply chain doesn’t just help the projects that produce open source software; it helps downstream consumers through to the end user.

For more information, refer to the Docker Scout documentation.  

Learn more

Join our “Improving Software Supply Chain Security for Open Source Projects” webinar on Wednesday, February 7, 2024 at 1 PM Eastern (1700 UTC). Watch on LinkedIn or on the Riverside streaming platform.

Try Docker Scout.

Looking to get up and running? Use our Quickstart guide.

Vote on what’s next! Check out the Docker Scout public roadmap.

Have questions? The Docker community is here to help.

Not a part of DSOS? Apply now.

Source: https://blog.docker.com/feed/

Introducing Docker Build Cloud: A New Solution to Speed Up Build Times and Improve Developer Productivity

Developers face a growing predicament: long wait times for builds to complete. In fact, the average build time increased by 15.9% between 2020 and 2021, according to a survey by Incredibuild. On average, developers lose around one hour each day, the study says, and this delay is steadily rising year after year. The impact on productivity and developer experience can cost even a small organization $420,000 per year (based on an annualized salary of $140K and a team of 25 developers).

We’re developers, too, so we explored ways to speed up build time without sacrificing the local development experience we love. That’s how Docker Build Cloud was born. 

“The new products we announced meet development teams where they are with ‘just enough cloud,’ seamlessly blurring the boundaries between local and remote development. In doing so, we’re enabling these teams to accelerate their delivery of secure applications critical to their businesses.” — Giri Sreenivas, Chief Product Officer, Docker

Temporary fixes don’t last 

Developers and their employers have attempted to address resource constraints that slow build times and drain productivity in a few common ways, the Incredibuild study says, including upgrading hardware (cited by 44% of respondents) and reducing codebase size (acknowledged by 33%). Although these tactics offer a temporary boost, they’re far from sustainable. A better solution is required — one that speeds up build times and improves team collaboration by eliminating redundant builds. 

Build up to 39x faster with Docker Build Cloud 

In the constantly evolving landscape of software development, two investment areas continue to trend upwards: moving development to the cloud and accelerating builds. The concept behind Docker Build Cloud is simple. By leveraging cloud compute and cache, developers can build faster and improve collaboration with their team. 

Building in the cloud speeds up build times because the cloud provides access to faster compute resources than are available on a developer’s local machine, and this approach provides more consistency between developers who may have newer or older machines. 

Shared cache speeds up build times because when one team member initiates a build, the cached results become instantly accessible to others, thereby eliminating unnecessary builds and speeding up the development cycle. No more waiting around for each build to complete independently. 

The combination of building in the cloud and shared cache helps developers save time and improve productivity. Developers can get back to coding on parallel tasks while builds complete, and they get the results of builds back faster to incorporate into their work.  

For example, one of our technology customers who develops enterprise collaboration software was able to reduce their build time from an average duration of 15-20 minutes to less than 2 minutes using Docker Build Cloud. 

Multi-architecture builds made effortless

Today, developers who need to create applications for both Intel (AMD64) and Apple Silicon/AWS Graviton (Arm64) chipsets must have multiple native builders or configure slow emulators to successfully build for their deployment targets. Docker Build Cloud offers native support for multi-architecture builds, eliminating the need for setting up and maintaining multiple native builders. This support removes the challenges associated with emulation, further improving build efficiency.
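
For context, a local multi-architecture build today typically relies on Buildx with QEMU emulation for the non-native platform; with Docker Build Cloud, the same command can run on native AMD64 and Arm64 cloud builders instead. A minimal sketch, where the image name is a placeholder:

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .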

An e-commerce customer simplified their CI toolchain. Prior to Docker Build Cloud, they were leveraging GitHub Actions, GitLab runners, plus a custom GitLab runner to handle ARM architecture. Due to Build Cloud’s dual AMD and ARM builders, our customer reduced their complexity and sped up their pipelines. 

Seamless integration with the tools you love

Developer tools should enhance developer experience, not add new points of friction. We’ve designed Docker Build Cloud to be easy to set up wherever you run your builds without requiring a massive lift-and-shift effort. Docker Build Cloud also works well with Docker Compose, GitHub Actions, and other CI solutions. This means you can seamlessly incorporate Docker Build Cloud into your existing development tools and services and immediately start reaping the benefits of enhanced speed and efficiency.

Try Docker Build Cloud today

Docker Build Cloud will enable a faster, more efficient Docker image-building process. Whether you’re a seasoned developer or just getting started, embrace the power of the cloud in your local development environment. Welcome to the future of Docker image builds with Docker Build Cloud. 

Try Docker Build Cloud now. 

Learn more

See the video demo Introducing Docker Build Cloud.

Watch the webinar Docker Build Cloud: Reclaim Your Dev Time with Fast, Multi-Architecture Builds.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

How to Use OpenPubkey to SSH Without SSH Keys

This post was contributed by BastionZero.

What if you could SSH without having to worry about SSH keys? Without the need to worry about SSH keys getting lost, stolen, shared, rotated, or forgotten? In this article, we’ll walk you through how to SSH to your remote Docker setups with just your email account or Single Sign-On (SSO). Find instructions for setting up OpenPubkey SSH in our documentation.

What’s wrong with SSH?

We love SSH and use it all the time, but don’t often stop to count how many keys we’ve accumulated over the years. As of writing this, I have eight. I can tell you what five of them are for, I definitely shouldn’t have at least two of them, and I’m pretty sure of the swift firing that would happen if I lost at least one other. What on earth is “is_key.pem”? I have no idea, and it sounds like I didn’t know when I made it.

There’s rarely an SSH key that’s actually harmless, even if you’re only using it to access or debug remote Docker setups. Test environments get cryptojacked and proxyjacked frequently, and entire swaths of the internet are dedicated to SSH hacking. 

When was the last time you patched sshd? The tool is ubiquitous yet so rarely updated that those threats are not going away anytime soon. Managing keys is a hassle that is bound to lead to compromise, and simple mistakes can lead to horrible outcomes. Even GitHub exposed their SSH private key in a public repository last year. 

So, what can we do? How can we do better? And is it free? Yes, yes, and yes. 

Now, there’s a new way to use SSH with OpenPubkey. Instead of juggling SSH keys, OpenPubkey SSH (OPK SSH) allows you to use your regular email account or SSO to log in and securely connect to an SSH server with a quick, one-time setup. No more guessing which keys get you fired, and no cursing your past self for poor naming conventions. No keys.

OpenPubkey SSH is the first fully developed use case for OpenPubkey, an open source project led by BastionZero, Docker, and The Linux Foundation. It will continue to grow and improve as we enhance its features and adapt it to meet evolving user needs and security challenges. Read on to learn what OpenPubkey is and how it works.

Getting started with OpenPubkey SSH 

Currently, OPK SSH only supports logging in via Google. If there is a particular provider you’d prefer, come visit us on GitHub or learn more in the Getting involved section below.

OpenPubkey SSH is being offered as part of BastionZero’s zero-trust command-line utility: the zli. Instructions for installing the zli can be found in the BastionZero documentation.

After installing the zli, you’ll need to:

Configure your SSH server (<1 minute)

Log in with Google (<1 minute)

Test your configuration

Use OPK SSH for Docker remote access

Manage users

Configure your SSH server

The first step is to configure your SSH server. For your first-time setup, we assume you have a Google account and at least sudoer access to the SSH server you’re trying to set up.

zli configure opk <your Google email> <user>@<hostname>

Log in with Google

Then, you need to log in. This will open a browser window so you can authenticate with Google:

zli login --opk

Test your configuration

Now, you can use SSH using OPK. To test that everything configured correctly and access is working via OPK SSH, you can run the following command:

ssh -F /dev/null -o IdentityFile=~/.ssh/id_ecdsa -o IdentitiesOnly=yes user@server_ip

Because we save our certificate at a default location, SSH will always use it to authenticate. So, it is not necessary to specify the IdentityFile after removing your existing SSH keys.

Use OPK SSH for Docker remote access

If you’re already using SSH with Docker then you’re all set, you get to keep your existing remote Docker setup with no need to do anything else. Otherwise, you can set your local Docker client to connect to a remote Docker instance by doing one of the following:

# Set an environment variable
$ export DOCKER_HOST=ssh://user@server-ip

# Or, create a new context
$ docker context create ssh-box --docker "host=ssh://user@server-ip"

Then you can use Docker as usual, and it will use SSH under the hood to connect to your remote Docker instance.

Manage users

Now that you’ve set it up for one user, let’s discuss how to configure it for many. OPK SSH means that you don’t have to coordinate with users to give them access. Who you choose to allow access to your server is specified in an easy-to-read YAML policy file that might look like this:

$ cat policy.yaml
users:
  - email: alice@acme.co
    principals:
      - root
      - luffy
  - email: bob@co.acme
    principals:
      - luffy

Note that principals is SSH-speak for the users you’re allowed to SSH in as.

If you’re flying solo or in a small group, then you’ll likely never have to deal with this file directly; our zli configuration command takes care of this for you. However, larger groups may be more interested in how this works at scale, and we’ve got answers for you. To discuss how OPK SSH can specifically fit your needs, reach out to us at BastionZero. For any issues or troubleshooting questions during the process, visit our guide.

How it works

Docker already lets you use SSH to execute Docker commands on remote containers by specifying a different host either as an environment variable or as part of a context.

# Set an environment variable
$ export DOCKER_HOST=ssh://user@server-ip

# Or, create a new context
$ docker context create ssh-box --docker "host=ssh://user@server-ip"

For OPK SSH, you don’t need to change any of that. Docker is using your pre-configured SSH under the hood for you. OpenPubkey is a different configuration that’s more secure yet completely compatible with Docker or any other access use case that relies on SSH (Figure 1).

Figure 1: Accessing a container using OpenPubkey SSH.

OpenPubkey slides in nicely with how SSH is already designed. We only use integration mechanisms that are well-used and widely deployed. First, we use SSH certificates instead of SSH keys, and second, we use the AuthorizedKeysCommand to invoke the OpenPubkey verifier program. This is all taken care of for you by our zli configure command.

$ cat /etc/ssh/sshd_config

AuthorizedKeysCommand /etc/opk/opk-ssh verify %u %k %t
AuthorizedKeysCommandUser root

SSH certificates remove the need for any keys. Instead of using them as you would in a traditional certificate ecosystem such as X.509, our goal is to embed a special token in them that we can verify on the server. That’s where the AuthorizedKeysCommand comes in.

The AuthorizedKeysCommand allows users to have their access evaluated by a program instead of by comparing it against preconfigured, public keys in an authorized_keys file. Once you’ve configured your sshd to use our OPK verifier, it can grant or deny access for all OPK-generated SSH certificates you give it going forward.

What is OpenPubkey?

OpenPubkey isn’t just about SSH; it is so much more. Docker is using it to sign Docker Official Images and BastionZero is using it for zero-trust infrastructure access. OpenPubkey is a joint effort between the Linux Foundation, BastionZero, and Docker. It is an open source project built on top of OpenID Connect (OIDC) that adds new functionality without impacting any of the old. 

OIDC is a protocol that lets you log into websites or applications using your personal (or work) email accounts. When you log in, you’re actually generating an identity token (ID token) that’s only for the specific application and that attests to the fact that you’re you. It also includes some handy personal information — essentially whatever you’ve given that application permission to request. 

Basically, OpenPubkey adds a temporary public key to your ID token so that you can sign messages. Because it’s attested to by trusted identity providers like Google, Microsoft, Okta, etc., anyone can verify it anywhere, at any time.

But OpenPubkey isn’t just about adding a public key to your ID token; it’s also about how you use it. One issue with vanilla OIDC is that any application that respects that token assumes you are you. With OpenPubkey, proving that you’re you isn’t just about presenting a public token, but also a single-use, signed message. So, the only way to impersonate you is to steal your public token and a private secret that never leaves your machine.  

Getting involved

There are plenty of ways to get involved. We’re building a passionate and engaged community. We discuss things at both a high level for those who like to architect and at a fun, gritty, technical level for those who like to be a different kind of architect. Come to hang out; we appreciate the support in whatever capacity you can provide.

If you’d like to get involved, visit our OpenPubkey repo. And if you’re ready to try OPK SSH to SSH without SSH keys, refer to our documentation’s comprehensive guide.

Learn more

Watch the on-demand webinar How to use OpenPubkey to SSH without SSH keys.

Read How to Use OpenPubkey with GitHub Actions Workloads.

Read Signing Docker Official Images Using OpenPubkey.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

How to Enhance Application Security Posture with Docker Scout Policies

With the proliferation of open source components, integrity and reliability within the software supply chain are paramount. This article explores how Docker Scout policies serve as a catalyst, fostering collaboration between development and security teams to define and achieve an ideal application security posture for organizations. Let’s dive into the capabilities that make Docker Scout an indispensable asset in the pursuit of improved security.

Step 1: Use Docker Scout policies for SecOps efficiency

Docker Scout dashboards become a security team’s trusted companion, providing a seamless and intuitive interface to utilize out-of-the-box policies. These policies offer a rapid comparison between the ideal and current states of application security, effectively highlighting areas requiring attention. To give security teams a head start, these out-of-the-box policies come with default configurations that can be updated to reflect internal requirements and standards.  

Step 2: Gauge the impact of security policies

Docker Scout dashboards are more than visual aids; they are powerful tools for understanding an organization’s current application security posture. Offering an overall summary and compliance status checks against defined standards enables security teams to gauge the impact of security policies. For example, the critical CVE policy showcases the percentage of images with no critical CVEs (Figure 1).

Figure 1: Docker Scout policies showing conformance percentages.

Step 3: Drill down for actionable insights

Docker Scout dashboards offer an intuitive approach to analyzing information and gaining deeper insights. For example, selecting View details on any of the policies provides comprehensive information about nonconforming images. Moreover, it precisely indicates the location of vulnerabilities within an image. This user-friendly feature ensures that teams can identify problematic images with just a few clicks and understand the right next steps to initiate effective remediation (Figure 2).

Figure 2: Detailed view provides the non-conformant images and associated vulnerabilities.

Step 4: Use Docker Scout CLI at the point of development for quick feedback

Docker Scout becomes an integral part of developers’ workflows, allowing them to work seamlessly with their preferred tools, such as the CLI. For example, developers can run a simple docker scout policy command in the CLI to receive instant feedback on image compliance with company policies. This integration significantly reduces feedback loops, saving valuable time and boosting developer productivity (Figure 3).

Figure 3: Output of scout policy command showing conformance status at the developer workstation.
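
A minimal sketch of that check; the image and organization names are placeholders, and flag names can vary between Docker Scout CLI releases:

docker scout policy --org myorg myorg/myapp:latest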

Step 5: Get recommendations for seamless issue resolution

Docker Scout goes beyond merely identifying issues; it provides actionable recommendations for developers. For example, running the docker scout recommendations command offers easy-to-understand next steps (Figure 4). Developers can now swiftly address issues, such as updating a base image, without needing to scour the web for solutions. Docker Scout simplifies the process, allowing developers to jump into their preferred workflows with confidence.

Figure 4: Output of Docker Scout recommendations command showing the next best action for the developer to remediate the issues.
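
A minimal sketch of the command, with a placeholder image name:

docker scout recommendations myorg/myapp:latest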

Conclusion 

Docker Scout is more than a security product — it’s a business enabler. Docker Scout’s integrated solutions enhance developer productivity and empower cross-functional teams to confidently deliver secure applications to production faster. By seamlessly bringing together the development and security teams, Docker Scout policies become a driving force in achieving a secure and streamlined software development lifecycle. Elevate your security efforts with Docker Scout policies and unlock collaborative efficiency.

Get started with Docker Scout

Get started with Docker Scout policies at scout.docker.com.

Read Achieve Security and Compliance Goals with Policy Guardrails in Docker Scout.

Visit the Docker Scout product page.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

8 Top Docker Tips & Tricks for 2024

This post was contributed by Docker Captain Vladimir Mikhalev.

Happy New Year, Docker Fans! I hope your 2024 is off to a great start. Whether you’re a Docker expert or new to the Docker community, you may be wondering about the best ways to optimize or get started quicker on Docker. As a Docker Captain and a Senior DevOps Engineer, I’ve been using Docker for more than six years, and I’m looking forward to some thrilling updates in 2024!  

In this post, I’m excited to share my top 8 tips and tricks for Docker that I’ve gathered through real-world experience and insider knowledge.

Supercharge productivity with Docker

1. Enable VirtioFS for faster file sharing on Macs.

Remember the days of sluggish file sharing in Docker on Mac? We’d be wrestling with heavy file I/O operations, watching the clock as each sync dragged on. It wasn’t just a test of patience; it was a real bottleneck in our workflow.

But here’s the good news: With Docker Desktop for Mac 4.6, that’s history. Just head over to Settings > General and select VirtioFS.

Figure 1: Select VirtioFS under Settings > General.

The performance leap is something you have to experience to believe. Everything feels snappier, whether building, running, or updating containerized apps. It’s a breath of fresh air for those of us in fast-paced dev environments where every second counts.

This upgrade has been a massive win for productivity, and it’s just one of the many reasons I’m excited about Docker’s direction in 2024. These kinds of improvements make Docker not just a tool but a powerful ally in our development arsenal.

2. Strategically layer to optimize the Docker Build cache.

Let’s talk about Dockerfile efficiency – something I’ve wrestled with more times than I can count. Back in the day, Docker builds could feel like a slow dance. You make a small change in your code, and wait for what feels like an eternity for the build to complete. It was a frequent frustration, especially when you’re iterating rapidly and need to test a small change. The problem? Our Dockerfiles weren’t optimized for efficient caching, leading to unnecessary rebuilds and time wasted.

Here’s a trick I learned: Strategic layering in your Dockerfile can turn the tide. Place those instructions that don’t change often, like installing dependencies, right at the top. Then, put your COPY or ADD commands for your application code lower down. 

This structure is a game-changer. It means Docker can reuse cached layers for the top parts of your Dockerfile, and you’re only rebuilding what’s actually changed. The result? Your build times get slashed, and you spend more time coding and less time waiting.

Another lifesaver is using RUN --mount=type=cache when installing packages. This little gem keeps your package cache intact between builds. No more re-downloading the entire internet every time you build your image. It’s especially handy when you’re working with large dependencies. Implement this, and watch your build efficiency go through the roof.

To give you a better idea, here’s how you might apply these principles in a Dockerfile for a Node.js application:

# syntax=docker/dockerfile:1

# Use an official Node base image
FROM node:14
WORKDIR /app

# Install dependencies first to leverage the Docker cache
COPY package.json package-lock.json ./

# Use a cache mount for npm install, so unchanged packages aren’t downloaded every time
RUN --mount=type=cache,target=/root/.npm \
    npm install

# Copy the rest of your app's source code
COPY . .

# Your app's start command
CMD ["npm", "start"]

This example Dockerfile demonstrates the strategic layering and RUN cache usage in action, showcasing how these practices can significantly optimize your Docker builds.

Adopting these practices transformed my Docker experience. No more watching the spinner while Docker rebuilds the world. Instead, it’s quick iterations, fast feedback, and more productivity. And honestly, that’s the kind of efficiency we live for in our line of work.

3. Avoid the bloat to keep builds efficient. 

In the earlier days of Docker, the sheer size of our builds often tripped me up. It was like packing your entire house for a weekend trip. I’d end up sending tons of unnecessary files to the Docker daemon, resulting in bloated build contexts and painfully slow build times. Not exactly ideal when you’re trying to keep things lean and agile.

The key? Getting smarter with what to include in the build context. In your .dockerignore, specify only the essentials – leave out anything that doesn’t contribute to your final image. This approach is like packing a well-organized suitcase and bringing only what you need. The benefit is twofold: You speed up the build process and reduce resource consumption by sending less data to the Docker daemon. It’s a straightforward yet powerful tweak that has saved us countless hours.
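
As an illustrative sketch, a .dockerignore for a typical Node.js project might exclude the directories that never belong in the build context (entries are examples, not a definitive list):

node_modules
.git
dist
coverage
*.log
.env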

Another game-changer has been adopting multi-stage builds in our Dockerfiles. Imagine building a complex app and having to include all the build tools and dependencies in your final image. It’s like taking the construction crew with you after building your house. Instead, with multi-stage builds, you compile and build everything in an initial stage, and then, in a separate stage, you copy over just the necessary artifacts. This results in a much leaner, more efficient final image. It’s not only good practice for keeping image sizes down, but it also means quicker deployments and reduced storage costs.
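
Here’s a minimal multi-stage sketch for a Node.js app, continuing the earlier example; the build script and entry point are placeholders for whatever your project actually uses:

# syntax=docker/dockerfile:1

# Stage 1: build with the full toolchain
FROM node:14 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# Placeholder build step; substitute your project's real build command
RUN npm run build

# Stage 2: ship only the runtime artifacts
FROM node:14-slim
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production
COPY --from=build /app/dist ./dist
# Placeholder entry point
CMD ["node", "dist/index.js"]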

Implementing these methods transformed how we handle Docker builds. Your builds are faster, your deployments are smoother, and your entire workflow just feels more streamlined.

4. Kickstart your projects with Docker Init.

Remember the old days when starting a new Docker project felt like navigating a maze? We’d often find ourselves fumbling through the initial setup – creating a Dockerfile, figuring out what to include in .dockerignore, setting up compose.yaml, and so on. 

For Docker newbies, this was daunting. Even for seasoned pros, it was a repetitive chore that ate into valuable time. Each new project was like reinventing the wheel; frankly, we had more important things to focus on, like actual coding.

Enter Docker Init. This feature has been a lifesaver for streamlining project setups. It’s like having a personal assistant to handle the groundwork of a new Docker project. 

Just run docker init, and voilà, it sets up the essential scaffolding for your project. You get a .dockerignore to keep unwanted files out, a Dockerfile tailored to your project’s needs, a compose.yaml for managing multi-container setups and even a README.Docker.md for documentation. 

The best part? It’s customizable. For instance, if you’re working on a Node.js app, Docker Init won’t just give you a generic Dockerfile; it’ll tailor it to fit the Node environment and dependencies. This means less tweaking and more doing. It’s not just about saving time; it’s about starting off on the right foot — no more guesswork or boilerplate code. You’re set up for success right from the get-go.
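
A minimal sketch of the flow; the prompts vary by Docker Desktop version, and the generated files land next to your existing project files:

docker init
# ...answer the interactive prompts (application platform, version, ports, start command)...
ls -a
# .dockerignore  Dockerfile  compose.yaml  README.Docker.md  <your project files>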

Docker Init has changed the game for us. What used to be a tedious start to every project is now a smooth, streamlined process. It’s like having a launchpad for your Docker projects, ready to take you straight into the heart of development without the initial hassle.

5. Proactively find and fix software vulnerabilities with Docker Scout.

In our constant quest for robust and secure applications, we’ve often encountered a common snag in the DevOps world – keeping a vigilant eye on vulnerabilities across multiple repositories. It’s like trying to keep track of a dozen moving targets simultaneously. Pre-Docker Scout days, this was a cumbersome task, often leading to oversights and last-minute scrambles to address security gaps.

But here’s where Docker Scout shines, and it’s not just about its powerful ability to detect vulnerabilities. Docker Scout provides a comprehensive, eagle-eyed watch over our entire repository landscape. Since we’ve made Docker Scout an integral part of our workflow, we have increased confidence across our teams and stages that we’re delivering a secure final product.

We started by setting up Docker Scout across all our repositories. (Check out the Docker quickstart guide.) It’s like deploying a network of sentinels, each tasked with keeping a watchful eye on a specific territory. The setup process was straightforward, and once in place, Scout began providing ongoing visibility into the security status of our repositories.
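
Once that setup is in place, a quick look at any image’s security status is one command away; a minimal sketch with a placeholder image name:

# High-level summary of vulnerabilities and base-image status
docker scout quickview myorg/myapp:latest

# Full list of CVEs found in the image
docker scout cves myorg/myapp:latest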

What I particularly appreciate about Docker Scout is its ongoing visibility feature. It’s like having a dashboard that constantly updates with the latest security intel. We’re not just talking about identifying vulnerabilities; we’re talking about a tool that gives us real-time insights, keeping us informed and ready to act.

And when Docker Scout flags an issue, it doesn’t just leave us hanging with a problem. It guides us through the remediation process. This aspect has been a game-changer. It’s like having an expert by your side, suggesting the best course of action, whether it’s updating a package or reconfiguring a setting. Having that level of guidance is empowering and transforms how we approach security from reactive to proactive.

Integrating Docker Scout in this expansive manner has revolutionized our approach to securing our software supply chain. It’s no longer a check-box activity; it’s an integral part of our DevOps culture. The peace of mind that comes from knowing you have a comprehensive security net over your entire application landscape? Priceless.

Incorporating Docker Scout this way has enhanced our security posture and fundamentally shifted our approach, making a secure software supply chain a seamlessly integrated aspect of our development lifecycle.

Try Docker Scout for yourself.

6. Accelerate your development with Docker Build Cloud.

Imagine you’re working on a Docker project, and each build feels like a long road trip in heavy traffic. Traditional local Docker builds, particularly for substantial projects, can be frustratingly slow and resource-intensive. You’re there, watching the progress bar crawl while your machine groans under the load. It’s like trying to run a race with weights tied to your feet. And let’s not forget the uneven playing field – developers with high-end machines breeze through builds while others with modest setups endure a sluggish pace. This disparity often leads to the infamous “works on my machine” syndrome, creating a rift in the development process.

Enter Docker Build Cloud, a game-changer that’s like swapping out your heavy backpack for a jetpack. By offloading the build process to the cloud, Docker Build Cloud provides a consistent, high-speed build environment for all developers, regardless of their local hardware. It’s the equivalent of giving every developer in your team a top-of-the-line workstation for building their Docker images.
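
Getting onto a cloud builder is a short sequence of CLI steps. A minimal sketch, where the organization and builder names are placeholders; check the Docker Build Cloud documentation for the exact commands available to your account:

# Create (or connect to) a cloud builder for your organization
docker buildx create --driver cloud myorg/default

# Run a build on the cloud builder instead of the local one
docker build --builder cloud-myorg-default -t myapp:latest .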

Optimizing your Dockerfiles for cloud-based builds is key to harnessing the full potential of Docker Build Cloud. Structuring Dockerfile commands for maximum layer caching efficiency and minimizing the build context size are crucial steps. It’s about arranging your Dockerfile instructions to leverage shared caches and parallel build capabilities, akin to streamlining your development process for maximum efficiency. I recall a time when reorganizing our Dockerfile structure reduced the build time of a significant project by half, transforming a cumbersome process into a swift and efficient one.

Monitoring build times and cache usage is equally crucial. By keeping a close eye on these aspects, you can pinpoint any inefficiencies or bottlenecks, allowing for timely tweaks and adjustments. During one of our high-traffic periods, we noticed a spike in build times. By analyzing cache usage and build patterns, we identified a misconfigured step in our Dockerfile, which, once resolved, brought our build times back to optimal levels.

Embracing Docker Build Cloud marks a significant shift in your development workflow. It’s not just about speeding up builds; it’s about creating a harmonious and efficient development environment. Implementing multi-stage builds and regularly updating base images have further streamlined our processes, ensuring that our builds are not only fast but also secure and up-to-date.

Your team can now enjoy quick iterations and efficient resource utilization, elevating productivity to new heights. Docker Build Cloud transforms the building process from a chore into an experience marked by speed and efficiency, ensuring that your projects are not just built but crafted swiftly and seamlessly in a state-of-the-art cloud environment. This shift to Docker Build Cloud is more than an upgrade; it’s a new way of thinking about Docker builds, aligning perfectly with the agility and dynamism needed in modern software development.

7. Resolve code issues faster with Docker Debug.

Troubleshooting sometimes feels like trying to solve a puzzle with missing pieces. You’ve likely been there – a bug shows up, and you’re diving deep into logs and configurations, trying to replicate the issue. It’s a bit like detective work, where every clue matters, but you’re not quite sure where the next clue is. This process can be time-consuming and, frankly, a bit of a headache, especially when the issues are elusive or environment-specific.

But here’s where Docker Debug steps in and changes the game. It’s like being handed a magnifying glass and a detailed map when you’re in the midst of a complicated treasure hunt. Docker Debug is an upgrade to Docker Build that brings a suite of troubleshooting tools to your fingertips. It’s designed to make the debugging process less of a trial-and-error journey and more of a straight path to solutions.

Integrating Docker Debug into your regular debugging process is like adding a new set of high-tech tools to your toolkit. You get features for both local and remote debugging, which are invaluable when you’re dealing with issues that are hard to pin down. For instance, the ability to view logs in real-time or execute commands within containers is like having a direct line to what’s happening inside your Docker environment. This direct access means you can see exactly what’s going wrong and where rather than making educated guesses.
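
For example, attaching a tool-equipped debug shell to a running container is a single command; a minimal sketch where the container name is a placeholder, and docker debug requires a recent Docker Desktop release:

# Opens a shell with debugging tools inside the container,
# even if the image itself ships no shell or utilities
docker debug my-running-container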

Using Docker Debug helps you replicate and diagnose issues in environments that mimic both local and production settings. This versatility is crucial because a bug that pops up in a production environment might not always show in a local one and vice versa. It’s akin to having the ability to test your car on both race tracks and city roads – you get a complete picture of performance across different conditions.

Implementing structured logging in your applications, for instance, turns your logs into a coherent story, making it easier for Docker Debug to guide you to the heart of the problem. Regularly performing health checks on your containers using Docker Debug’s tools is akin to having a routine check-up, ensuring everything runs smoothly.

When you face a network issue or a memory leak, Docker Debug becomes your go-to tool. It allows you to replicate the exact environment and dive deep into the container, inspecting processes, network connections, or even running a debugger on the application process. It’s like having a surgical tool to dissect and understand your application’s behavior under various conditions.

The natural beauty of Docker Debug lies in its ability to lead to quicker resolutions of complex issues. You’re not just looking at surface-level symptoms; you’re able to dive deep and understand the root causes. It’s essentially giving you an X-ray vision for your Docker projects. No more prolonged downtime or lengthy bug hunts; with Docker Debug, you’re equipped to identify, understand, and resolve issues with precision and speed.

In essence, incorporating Docker Debug into your workflow is more than just an upgrade; it’s a transformative step towards more efficient, effective, and less stressful troubleshooting. It’s about turning what used to be a daunting task into a more manageable, even straightforward, part of your development process. With Docker Debug, you’re not only fixing issues faster, but you’re also gaining insights that can prevent these issues from happening in the first place. It’s a strategic move that elevates your Docker game, ensuring your projects are functional, robust, and resilient.

8. Test against real instances with Testcontainers.

Testing in the world of Docker can often feel like navigating through a dense forest with just a compass. You’re trying your best to simulate real-world conditions, but there’s always that feeling that something’s missing. It’s like preparing for a marathon on a treadmill – useful, but not quite the same as hitting the pavement.

Enter Testcontainers, a lifesaver that’s turned our testing approach on its head, especially with Docker’s acquisition of AtomicJar. Imagine having the ability to spin up real databases, message brokers, or any other service your app interacts with, all within your test suite. It’s like suddenly having access to a full-scale rehearsal studio instead of practicing in your garage.

Testcontainers allow us to bring production-like environments right into our automated tests. We’re talking about spinning up a PostgreSQL container for database tests or RabbitMQ for messaging. This shift has been monumental – we’re now testing under conditions that closely mirror what we’ll encounter in production.
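As a hedged illustration of that pattern, here is a minimal sketch using the Python flavor of Testcontainers together with SQLAlchemy; the image tag and the trivial assertion are placeholders for whatever your real tests exercise.

# Requires: pip install "testcontainers[postgres]" sqlalchemy
from sqlalchemy import create_engine, text
from testcontainers.postgres import PostgresContainer


def test_against_real_postgres():
    # Spin up a throwaway PostgreSQL instance that lives only for this test.
    with PostgresContainer("postgres:16") as postgres:
        engine = create_engine(postgres.get_connection_url())
        with engine.connect() as conn:
            version = conn.execute(text("SELECT version()")).scalar()
            assert "PostgreSQL" in version

Pin the image tag to the same version you run in production, and the test exercises the database you will actually face rather than an in-memory stand-in.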

We’ve seamlessly integrated Testcontainers into our CI/CD pipeline. This means every build is tested against real instances, ensuring that the tests passing on a developer’s machine will pass in production, too. It’s akin to having an all-weather test track available any time we need it.

Let me paint a picture with a real scenario we faced. We had this intermittent issue where everything worked fine in development but fell apart in production. Sound familiar? We set up Testcontainers with the same version of the database as in production, and suddenly, the problem was reproducible. And diagnosable. And fixable. It was the kind of turning point that transforms night-long debugging sessions into quick fixes.

Embracing Testcontainers is more than just adopting a new tool; it’s a paradigm shift in how we do testing. It ensures that our tests are not just passing but passing in a way that gives us confidence about how our applications will behave in the real world.

So, my fellow Docker aficionados, if you haven’t already, dive into the world of Testcontainers. It’s not just about making your tests more reliable; it’s about making your entire development lifecycle more predictable, efficient, and aligned with the realities of production environments. It’s one of those tools that, once you start using, you’ll wonder how you ever managed without it.

Get started with Testcontainers and see what you think.

Conclusion

These are the top tips and tricks that have revolutionized the way my team and I use Docker. Whether you’re just starting out or you’ve been in the Docker game for a while, I hope these insights help you as much as they’ve helped us. 

If you’re the kind of developer who wants to be the first to hear about new features and help improve the Docker experience, sign up to be part of the Developer Preview Program.  You can also join the community Slack, where you can chat with other Docker developers and share your own tips and tricks!

We wish you a happy 2024! Keep experimenting, and happy Dockerizing!

Learn more

Check out the Docker quickstart guide.

Read Docker Init: Initialize Dockerfiles and Compose files with a single CLI command.

Try Docker Scout.

Learn more about Docker Build Cloud.

Get started with Testcontainers.

Sign up for the Developer Preview Program.

Join the Docker community on Slack.

Source: https://blog.docker.com/feed/

How to Use OpenPubkey with GitHub Actions Workloads

This post was contributed by Ethan Heilman, CTO at BastionZero.

OpenPubkey is the web’s new technology for adding public keys to standard single sign-on (SSO) interactions with identity providers that speak OpenID Connect (OIDC). OpenPubkey works by essentially turning an identity provider into a certificate authority (CA), which is a trusted entity that issues certificates that cryptographically bind an identity with a cryptographic public key. With OpenPubkey, any OIDC-speaking identity provider can bind public keys to identities today.

OpenPubkey is newly open-sourced through a collaboration of BastionZero, Docker, and the Linux Foundation. We’d love for you to try it out, contribute, and build your own use cases on it. You can check out the OpenPubkey repository on GitHub.

In this article, our goal is to show you how to use OpenPubkey to bind public keys to workload identities. We’ll concentrate on GitHub Actions workloads, because this is what is currently supported by the OpenPubkey open source project. We’ll also briefly cover how Docker is using OpenPubkey with GitHub Actions to sign Docker Official Images and improve supply chain security. 

What’s an ID token?

Before we start, let’s review the OpenID Connect protocol. Identity providers that speak OIDC are usually called OpenID Providers, but we will just call them OPs in this article. 

OIDC has an important artifact called an ID token. A user obtains an ID token after they complete their single sign-on to their OP. They can then present the ID token to a third-party service to prove that they have properly been authenticated by their OP.  

The ID token includes the user’s identity (such as their email address) and is cryptographically signed by the OP. The third-party service can validate the ID token by querying the JSON Web Key Set (JWKS) endpoint hosted by the OP, obtaining the OP’s public key, and then using that public key to validate the signature on the ID token.
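As a rough sketch of that validation flow (assuming Python and the PyJWT library, neither of which is prescribed by OIDC itself), a third-party service might do something like this:

# Requires: pip install "pyjwt[crypto]"
import jwt
from jwt import PyJWKClient


def verify_id_token(id_token: str, jwks_url: str, issuer: str, audience: str) -> dict:
    # Fetch the OP public key whose key ID matches the token's header.
    signing_key = PyJWKClient(jwks_url).get_signing_key_from_jwt(id_token)
    # Check the signature and the standard claims (expiry, issuer, audience).
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=issuer,
        audience=audience,
    )

The returned dictionary contains the verified claims, such as the user’s email address.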

How do GitHub Actions obtain ID tokens? 

So far, we’ve been talking about human identities (such as email addresses) and how they are used with ID tokens. But our focus in this article is on workload identities. It turns out that GitHub has a nice way to assign ID tokens to GitHub Actions workloads.

Here’s how it works. GitHub runs an OpenID Provider. When a new GitHub Action is spun up, GitHub first assigns it a fresh API key and secret. The GitHub Action can then use its API key and secret to authenticate to GitHub’s OP. GitHub’s OP can validate this API key and secret (because it knows that it was assigned to the new GitHub Action) and then provide the GitHub Action with an OIDC ID token. This GitHub Action can now use this ID token to identify itself to third-party services.
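For illustration, here is a minimal sketch of that exchange from inside a job, using the request URL and bearer token that the GitHub Actions runner exposes as environment variables when the workflow grants the id-token: write permission; treat it as a sketch of GitHub’s documented OIDC flow rather than production code.

import json
import os
import urllib.parse
import urllib.request


def fetch_actions_id_token(audience: str) -> str:
    # The runner injects these two variables when id-token: write is granted.
    request_url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
    request_token = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
    url = request_url + "&audience=" + urllib.parse.quote(audience)
    req = urllib.request.Request(
        url, headers={"Authorization": "bearer " + request_token}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["value"]  # the signed ID token (a JWT)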

When interacting with GitHub’s OP, Docker uses the job_workflow_ref claim in the ID token as the workflow’s “identity.” This claim identifies the location of the file the GitHub Action is built from, which lets a verifier pinpoint the file that generated the workflow and, from there, check the validity of the workflow itself. Here’s an example of how the claim could be set:

job_workflow_ref = octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main

Other claims in the ID tokens issued by GitHub’s OP can be useful in other use cases. For example, the actor and actor_id claims identify the person who kicked off the GitHub Action. This could be useful for checking that the workload was kicked off by a specific person. (It’s less useful when the workload was started by an automated process.)

GitHub’s OP supports many other useful fields in the ID token. You can learn more about them in the GitHub OIDC documentation.

Creating a PK token for workloads

Now that we’ve seen how to identify workloads using GitHub’s OP, we will see how to bind that workload identity to its public key with OpenPubkey. OpenPubkey does this with a cryptographic object called the PK token.

To understand how this process works, let’s go back and look at how GitHub’s OP implements the OIDC protocol. The ID tokens generated by GitHub’s OP have a field called audience. Importantly, the audience field is chosen by the OIDC client that requests the ID token. When GitHub’s OP creates the ID token, it includes the audience along with the other fields (like job_workflow_ref and actor) that the OP signs when it creates the ID token.

So, in OpenPubkey, the GitHub Action workload runs an OpenPubkey client that first generates a new public-private key pair. Then, when the workload authenticates to GitHub’s OP with OIDC, it sets the audience field equal to the cryptographic hash of the workload’s public key along with some random noise. 

Now, the ID token contains the GitHub OP’s signature on the workload’s identity (the job_workflow_ref field and other relevant fields) and on the hash of the workload’s public key. This is most of what we need to have GitHub’s OP bind the workload’s identity and public key.
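As an illustrative sketch only (the real serialization, hash, and key type are defined by the OpenPubkey implementation, and Ed25519 here is just an assumption), the client-side derivation looks roughly like this in Python:

# Requires: pip install cryptography
import base64
import hashlib
import os

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# 1. The workload generates a fresh key pair.
private_key = ed25519.Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# 2. It draws random noise and hashes it together with the public key.
noise = os.urandom(32)
digest = hashlib.sha256(public_bytes + noise).digest()

# 3. That hash becomes the audience value sent to GitHub's OP.
audience = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()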

In fact, the PK token is a JSON Web Signature (JWS), which roughly consists of:

The ID token, including the audience field, which contains a hash of the workload’s public key.

The workload’s public key.

The random noise used to compute the hash of the workload’s public key.

A signature, under the workload’s public key, of all the information in the PK token. (This signature acts as a cryptographic proof that the user has access to the user-held secret signing key that is certified in the PK token.)

The PK token can then be presented to any OpenPubkey verifier, which uses OIDC to obtain the GitHub OP’s public key from its JWKS endpoint. The verifier then verifies the ID token using the GitHub OP’s public key and verifies the remaining fields in the PK token using the workload’s public key. Now the verifier knows the public key of the workload (as identified by its job_workflow_ref or other fields in the ID token) and can use this public key for whatever cryptography it wants to do.

Can you use ephemeral keys with OpenPubkey?

Yes! An ephemeral key is a key that is only used once. Ephemeral keys are nice because there is no need to store the private key anywhere, which improves security and reduces operational overhead.

Here’s how to do this with OpenPubkey. You choose a public-private key pair, authenticate to the OP to obtain a PK token for the public key, sign your object using the private key, and finally throw away the private key.   
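Here is a minimal sketch of that sign-and-discard pattern in Python, assuming the cryptography package and an Ed25519 key; obtaining the PK token itself is elided:

# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ed25519

artifact = b"contents of the object to sign"

# Generate an ephemeral key pair, sign once, then drop the private key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(artifact)
del private_key  # nothing left to store, leak, or rotate

# Anyone holding the public key (certified by the PK token) can verify.
public_key.verify(signature, artifact)  # raises InvalidSignature on mismatch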

One-time-use PK token 

We can take this a step further and ensure the PK token may only be associated with a single signed object. Here’s how it works. To start, we take a hash of the object to be signed. Then, when the workload authenticates to GitHub’s OP, it sets the audience claim equal to the cryptographic hash of the following items:

The public key 

The hash of the object to be signed

Some random noise

Finally, the OpenPubkey verifier obtains the signed object and its one-time-use PK token, and then validates the PK token by additionally checking that the hash of the signed object is included in the audience claim. Now you have a one-time-use PK token. You can learn more about this feature of OpenPubkey in the repo.
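Compared with the earlier audience sketch, the only change is that the hash input also commits to the object being signed. Again, this is an illustration in Python rather than the exact encoding OpenPubkey uses:

import hashlib
import os

public_bytes = b"raw bytes of the workload's public key"  # placeholder
object_to_sign = b"contents of the artifact"              # placeholder

object_hash = hashlib.sha256(object_to_sign).digest()
noise = os.urandom(32)

# The audience now binds both the key and the specific object to be signed.
one_time_audience = hashlib.sha256(public_bytes + object_hash + noise).hexdigest()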

How will Docker use OpenPubkey to sign Docker Official Images?

Docker will be using OpenPubkey with GitHub Actions workloads to sign Docker Official Images. Every Docker Official Image will be created using a GitHub Action workload. The workload creates a fresh ephemeral public-private key pair, obtains the PK token for the public key via OpenPubkey, and finally signs the image using the private key.  

The private key is then deleted, and the image, its signature, and the PK token will be made available on the Docker Hub container registry. This approach is nice because it doesn’t require the signer to maintain or store the private key.

Docker’s container signing use case also relies heavily on The Update Framework (TUF), another Linux Foundation open source project. Read “Signing Docker Official Images Using OpenPubkey” for more details on how it all works.

What else can you do with OpenPubkey and GitHub Actions workloads?

Check out the following ideas on how to put OpenPubkey and GitHub Actions to work for you.

Signing private artifacts with a one-time key 

Consider signing artifacts that will be stored in a private repository. You can use OpenPubkey if you want to have a GitHub Action cryptographically sign an artifact using a one-time-use key. A nice thing about this approach is that it doesn’t require you to expose information in a public repository or transparency log. Instead, you only need to post the artifact, its signature, and its PK token in the private repository. This capability is useful for private code repositories or internal build systems where you don’t want to reveal to the world what is being built, by whom, when, or how frequently. 

If relevant, you could also consider using the actor and actor_id claims to bind the human who builds a particular artifact to the signed artifact itself. 

Authenticating workload-to-workload communication

Suppose you want one workload (call it Bob) to process an artifact created by another workload (call it Alice). If the Alice workload is a GitHub Action, the artifact it creates could be signed using OpenPubkey and passed on to the Bob workload, which uses an OpenPubkey verifier to verify it using the GitHub OP’s public key (which it would obtain from the GitHub OP’s JWKS URL). This approach might be useful in a multi-stage CI/CD process.

And other things, too! 

These are just strawman ideas. The whole point of this post is for you to try out OpenPubkey, contribute, and build your own use cases on it.

Other technical issues we need to think about

Before we wrap up, we need to discuss a few technical questions.

Aren’t ID tokens supposed to remain private?

You might worry about applications of OpenPubkey where the ID token is broadly exposed to the public inside the PK token. For example, in the Docker Official Images signing use case, the PK tokens are made available to the public in the Docker Hub container registry. If the ID token is broadly exposed to the public, there is a risk that the ID token could be replayed and used for unauthorized access to other services.

For this reason, we have a slightly different PK token for applications where the PK token is made broadly available to the public. 

For those applications, OpenPubkey strips the OP’s signature from the ID token before including it in the PK token. The OP’s signature is replaced with a Guillou-Quisquater (GQ) non-interactive proof-of-knowledge for an RSA signature (which is also known as a “GQ signature”). Now, the ID token cannot be replayed against other services, because the OP’s signature is removed. An ID token without a signature is useless.  

So, in applications where the PK token must be broadly exposed to the public, the PK token is a JSON Web Signature, which consists of:

The ID token excluding the OP’s signature

A GQ signature on the ID token

The user’s public key

The random noise used to compute the hash of the user’s public key

A signature, under the user’s public key, of all the information in the PK token 

The GQ signature allows the client to prove that the ID token was validly signed by the identity provider (IdP), without revealing the OP’s signature. The OpenPubkey client generates the GQ signature to cryptographically prove that it knows the OP’s signature on the ID token, while still keeping the OP’s signature secret. GQ signatures only work with RSA, but this is fine because every OpenID Connect provider is required to support RSA.

Because GQ signatures are larger and slower than regular signatures, we recommend using them only for use cases where the PK token must be made broadly available to the public. BastionZero’s infrastructure access use case does not use GQ signatures because it does not require the PK token to be made public. Instead, the user only exposes their PK token to the target (e.g., server, container, cluster, database) that they want to access; this is the same way an ID token is usually exposed with OpenID Connect.

GQ signatures might not be necessary when authenticating workload-to-workload communications; if the Alice workload passes the signed artifact and its PK token to the Bob workload only, there is less of a concern that the PK token is broadly available to the public.

What happens when the OP rotates its OpenID Connect key? 

OPs have OpenID Connect signing keys that change over time (e.g., every two weeks). What happens if we need to use a PK token after the OP rotates its OpenID Connect key? 

For some use cases, the lifetime of a PK token is typically short. With BastionZero’s infrastructure access use case, for instance, a PK token will not be used for longer than 24 hours. In this use case, these timing problems are solved by (1) having the user re-authenticate to the IdP and create a new PK token whenever the IdP rotates its key, and (2) having the OpenPubkey verifier check that the client also has a valid OIDC refresh token along with the PK token whenever the ID token expires.

For other use cases, the PK token has a long life, so we do need to worry about the OP rotating its OpenID Connect keys. With Docker’s container signing use case, this problem is solved by having TUF additionally store a historical log of the OP’s signing keys. Anyone can keep a historical log of the OP public keys for use after they expire. In fact, we envision a future where OPs might keep this historical log themselves. 

That’s it for now! You can check out the OpenPubkey repo on GitHub. We’d love for you to join the project, contribute, and identify other use cases where OpenPubkey might be useful.

Learn more

Read “Signing Docker Official Images Using OpenPubkey.”

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker 2023: Milestones, Updates, and What’s Next

We’ve had an exciting year at Docker, with loads of product news and announcements. Don’t worry if you couldn’t keep up with the pace of our news and product releases. We’ve rounded up highlights from 2023 and look ahead to how we plan to stay the #1 most-used developer tool as we roll into 2024.

Docker milestones & performance improvements

Docker Desktop updates

We’ve been hard at work enhancing Docker Desktop this year. Among the notable highlights:

Docker Desktop 4.26: Rosetta, PHP Init, Builds View GA, Admin Enhancements, and Docker Desktop Image for Microsoft Dev Box 

Docker Desktop 4.25: Enhancements to Docker Desktop on Windows, Rosetta for Linux GA, and New Docker Scout Image Analysis Settings

Docker Desktop 4.24: Compose Watch, Resource Saver, and Docker Engine

Docker Desktop 4.23: Updates to Docker Init, New Configuration Integrity Check, Quick Search Improvements, Performance Enhancements, and More

Docker Desktop 4.22: Resource Saver, Compose ‘include’, and Enhanced RBAC Functionality

Docker Desktop 4.21: Support for New Wasm Runtimes, Docker Init Support for Rust, Docker Scout Dashboard Enhancements, Builds View (Beta), and More

Docker Desktop 4.20: Docker Engine and CLI Updated to Moby 24.0

Docker Desktop 4.19: Compose v2, New Language Support, Moby Project Update

Docker Desktop 4.18: Docker Scout Updates, Container File Explorer GA

Docker Desktop 4.17: New Functionality for a Better Development Experience

Docker Desktop 4.16: Better Performance and Docker Extensions GA

Performance milestones

Read “Docker’s Journey Toward Enabling Lightning-Fast Developer Innovation: Unveiling Performance Milestones” to learn about:

75% startup time speed improvements

85x improvement in upload speed

650% improvement in image download speeds

71% reduction in build time

Resource Saver mode saves 38,500 CPU hours daily

Download the latest Docker Desktop release to take advantage of the performance improvements.

Simplifying software supply chain management

We’ve simplified software supply chain management for developers with Docker Scout. Docker Scout policies enable teams to identify, prioritize, and fix their software quality issues at the point of creation to meet their organization’s reliability and security standards while accelerating the speed of execution and innovation. 

Learn how to achieve security and compliance goals with policy guardrails in Docker Scout. Visit the Docker Scout product page to learn more.

20 new Docker extensions

Twenty new Docker extensions were added to the Docker extension marketplace in 2023. We highlighted a few extensions on the Docker blog, including Kubescape, NebulaGraph, Gefyra, LocalStack, and Grafana. Explore Docker Hub to discover more extensions, and use the Docker Extensions SDK to create and share your own.

New Docker features 

We also announced:

Docker Build Early Access Program: Docker Build speeds up builds by as much as 39 times by automatically taking advantage of large, on-demand cloud-based servers and team-wide build caching. 

Resource Saver GA release: With Resource Saver, Docker Desktop uses minimal system resources when it’s idle, letting you save battery life on your laptop and improve your multitasking experience. 

Docker Compose Watch GA release: Watch, available in Compose 2.22 and later, lets you automatically update and preview your running Compose services as you edit and save your code. 

Docker Init Beta release: docker init, a new command-line interface (CLI) command introduced as a beta feature, simplifies the process of adding Docker to a project.  

Docker+Wasm Technical Preview 2: The new Technical Preview of Docker+Wasm offers three new runtimes: spin from Fermyon, slight from Deislabs, and wasmtime from Bytecode Alliance.

Builds view GA release: The new Builds view in Docker Desktop provides detailed insight into your build performance and usage. Get a live view of your builds as they run, explore previous build performance, and dive deep into errors and cache issues.

Signing Docker Official Images Using OpenPubkey: We announced our intention to use OpenPubkey, a project jointly developed by BastionZero and Docker and recently open-sourced and donated to the Linux Foundation, as part of our signing solution for Docker Official Images (DOI).

All things AI/ML

2023 will be known as the year of AI/ML. For 2024, our investments in AI promise to bring new services and functionality to Docker customers. Recent announcements include:

GenAI Stack: At DockerCon, with partners Neo4j, LangChain, and Ollama, we announced a new GenAI Stack. We have brought together the top technologies in the generative artificial intelligence (GenAI) space to build a solution that allows developers to deploy a full GenAI stack with only a few clicks. 

Docker AI: Docker’s first AI-powered product promises to boost developer productivity by tapping into the universe of Docker developer wisdom to provide context-specific, automated guidance to developers as they work. Sign up for early access now.

Docker & Hugging Face partnership: We announced a new partnership with Hugging Face to democratize AI and make it accessible to all software engineers. To dive in, read our blog post, “Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces.”

Announcing the Docker AI/ML Hackathon 2023 Winners: The 2023 Docker AI/ML Hackathon encouraged participants to build solutions that were innovative, applicable in real life, used Docker technology, and had an impact on developer productivity. Read the announcement blog post to learn about the winning AI/ML projects.

Also check out our blog post “Why Are There More Than 100 Million Pull Requests for AI/ML Images on Docker Hub?” to learn how Docker is providing a powerful tool for AI/ML development.

Expanding developer experiences

AtomicJar joins Docker

In December, we were excited to welcome AtomicJar, the makers of Testcontainers, to the Docker family. “Docker already accelerates the ‘inner loop’ app development steps — build, verify (through Docker Scout), run, debug, and share — and now, with AtomicJar and Testcontainers, we’re adding ‘test,’” explains Docker CEO Scott Johnston. As a result, developers using Docker will be able to deliver quality applications faster and with less effort. Read our announcement blog post and FAQ to learn more about AtomicJar and Testcontainers.

Mutagen joins Docker

In June, we announced the acquisition of Mutagen, the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. The Mutagen File Sync feature of Docker Desktop takes file sharing to new heights with up to a 16.5x improvement in performance. To try it and help influence Docker’s future, sign up for the Docker Desktop Preview Program.

Microsoft Dev Box and Docker Desktop

We announced our partnership with the Microsoft Dev Box team to bring additional benefits to developer onboarding, environment set-up, security, and administration with Docker Desktop. You can navigate to the Azure Marketplace to download the Docker Desktop-Dev Box compatible image and start developing in the cloud with a native experience. Additionally, this image can be activated with your current subscription, or you can buy a Docker Business subscription directly on Azure Marketplace.

Docker and Snowflake collaboration

At Snowflake BUILD, we announced Docker Desktop with Snowpark Container Services (private preview). Watch the session to learn more about accelerating deployments of data workloads with Docker and Snowpark. 

Docker in action

Customer highlights from 2023 include:

How Tabcorp Accelerated Delivery Velocity with Docker

How Docker Transforms Software Delivery and Empowers Developers at The Warehouse Group

How IKEA Retail Standardizes Docker Images for Efficient Machine Learning Model Deployment

How JW Player Secured 300 Repos in an Hour with Docker Scout

How Kinsta Improved the End-to-End Development Experience by Dockerizing Every Step of the Production Cycle

What’s next

In October at DockerCon, Docker and Udemy announced a partnership to offer developers accessible learning paths to further their Docker education. Read the announcement blog post to learn more about what we’ve planned.

Want to dive deeper into Docker? DockerCon videos are available now on YouTube. 

Do your New Year goals include expanding your Docker expertise? Watch the on-demand webinar Docker Fundamentals: Get the Most Out of Docker.

Check out our public roadmap to help steer the future of Docker.

Thank you to our community of developers, Docker Captains and Community Leaders, customers, and partners! We look forward to our continued work building our future together in the New Year. 

Learn more

Watch A Look Back at Docker in 2023 on-demand. 

Check out past and upcoming Docker events.

Get the latest release of Docker Desktop.

Read highlights from DockerCon 2023.

Watch DockerCon 2023 sessions.

Try Docker Scout.

Explore Docker Extensions.

Get started with Testcontainers.

Watch the on-demand webinar Docker Fundamentals: Get the Most Out of Docker.

Have questions? The Docker community is here to help.

New to Docker? Get started.

See the Docker public roadmap.

Source: https://blog.docker.com/feed/