Building Trusted Content with GitHub Actions

As part of our continued efforts to improve the security of the software supply chain and increase trust in the container images developers create and use every day, Docker has begun migrating its Docker Official Images (DOI) builds to the GitHub Actions platform. Leveraging the GitHub Actions hosted, ephemeral build platform enables the creation of secure, verifiable images with provenance and SBOM attestations signed using OpenPubkey and the GitHub Actions OIDC provider.

DOI currently supports up to nine architectures across a wide variety of images, more than any other image collection. As we increase the trust in the DOI catalog, we will spread the work over three phases. In the first phase, only Linux/AMD64 and Linux/386 images will be built on GitHub Actions. In the second phase, we eagerly anticipate the availability of GitHub Actions Arm-based hosted runners next year, which will let us add support for additional Arm architectures. In the final phase, we will investigate using GitHub Actions self-hosted runners to cover the image architectures that hosted runners do not support.

In addition to using GitHub Actions, the new DOI signing approach requires establishing a root of trust that identifies who should be signing Docker Official Images. We are working with various relevant communities — for example, the Open Source Security Foundation (OpenSSF, a Linux Foundation project), the CNCF TUF (The Update Framework) and in-toto projects, and the OCI technical community — to establish and distribute this trust root using TUF.

To ensure smooth and rapid developer adoption, we will integrate DOI TUF+OpenPubkey signing and verification into the container toolchain. These pluggable integrations will enable developers to seamlessly verify DOI signatures, ensuring the integrity and origin of these fundamental artifacts. Soon, verifying your DOI base image signatures will be integrated into the “Build and push Docker images” GitHub Action for a more streamlined workflow.

What’s next

Looking forward, Docker will continue to develop and extend the TUF+OpenPubkey signing approach to make it more widely useful, enhancing and simplifying trust bootstrapping, signing, and verification. As a next step, we plan to work with Docker Verified Publishers (DVP) and Docker-Sponsored Open Source (DSOS) to expand signing support to additional Docker Trusted Content. Additionally, plans are in place to offer an integration of Docker Hub with GitHub Actions OIDC, allowing developers to push OCI images directly to Docker Hub using their GitHub Actions OIDC identities.

Learn more

OpenPubkey FAQ 

Signing Docker Official Images Using OpenPubkey 

Docker Official Image Signing based on OpenPubkey and TUF

Docker Desktop 4.25: Enhancements to Docker Desktop on Windows, Rosetta for Linux GA, and New Docker Scout Image Analysis Settings

We’re excited to share Docker Desktop’s latest advancements, which promise to elevate your experience, enhance productivity, and increase speed. The Docker Desktop 4.25 release marks the general availability (GA) of Rosetta for Linux, a feature that further boosts the speed and productivity Docker Desktop brings. We’ve also optimized the installation experience on Windows and simplified Docker Scout image analysis settings in this release. Get ready for near-native emulation, seamless updates, and effortless image analysis control. Let’s dive into some of the newest features in Docker Desktop.

Enhanced productivity and speed with Rosetta for Linux GA

We’re thrilled to announce the general availability of Rosetta for Linux, a game-changing Docker Desktop feature that significantly boosts performance and productivity. Here’s what you need to know:

Rosetta for Linux GA: Docker now supports running x86-64 (Intel) binaries on Apple silicon with Rosetta 2. It’s no longer an experimental feature but a seamlessly integrated component of Docker Desktop.

Near-native emulation: The x86_64 emulation performance is now nearly on par with native execution, all thanks to Rosetta 2. This means you can expect near-native speed when running your applications.

Easy activation: Enabling Rosetta for Linux is a breeze. Simply navigate to Docker Desktop Settings > General and toggle it on to take advantage of the enhanced performance.

System requirements: Rosetta for Linux is available on macOS version 13.0 and above, specifically for Apple silicon devices. Notably, it’s enabled by default on macOS 14.1 and newer, making it even more accessible.

Figure 1: Docker Desktop 4.25 user settings displaying the new option to turn on Rosetta on Apple silicon.

Customers who used Rosetta for Linux during its beta reported remarkable improvements, particularly when compared to the alternatives. Here are some real-world examples:

Database operations: SQL queries run significantly faster. Tasks such as creating databases, running queries, and making updates show performance gains ranging from 4% to as high as 91%.

Development efficiency: Customers have reported substantial improvements in their development workflows. Tasks like installing dependencies and building projects are considerably faster, translating to more productive development cycles.

Compatibility: For projects that must target Linux/AMD64 because of binary compatibility requirements, Rosetta for Linux ensures a smooth and efficient development process.

With Rosetta for Linux in Docker Desktop, users can look forward to a significant performance boost and increased efficiency.

Enhanced Docker Desktop installation experience on Windows

At Docker, we’re committed to delivering a seamless and efficient Docker Desktop experience for Windows users, irrespective of local settings or privileges. We understand that keeping your WSL (Windows Subsystem for Linux) up to date is crucial for a seamless Docker Desktop experience. With this in mind, we’re pleased to announce a new feature in Docker Desktop that detects the version of WSL during installation and offers automated updates.

When an outdated version of WSL is detected, you now have two convenient options:

Automatic update (default): Allow Docker Desktop to handle the WSL update process seamlessly, ensuring your environment is always up to date without any manual intervention.

Manual update: If you have specific requirements or prefer to manage your WSL updates manually, you can choose to update WSL outside of Docker Desktop. This flexibility allows you to make custom kernel installations and maintain full control over your development environment.

With these enhancements, Docker Desktop on Windows becomes more user-friendly, reliable, and adaptable to your unique needs.

Figure 2: Prompt displaying two new options to finish the installation of Docker Desktop.

Improved Docker Desktop compatibility with Windows 

Docker Desktop’s recent update also raises the minimum supported Windows version to build 19044 (Windows 10 version 21H2). This update isn’t just about staying in sync with Microsoft’s supported operating systems; it’s about providing a seamless Docker Desktop installation experience. By raising the minimum version, we aim to prevent issues tied to older Windows versions and reduce installation failures.

Figure 3: Alert regarding the installed version of Windows being incompatible with the version of Docker Desktop being installed.

To ensure all Windows users can harness the latest Docker Desktop features and functionality, we’ve implemented a clear prompt asking users on Windows builds below 19044 to upgrade.

New Docker Scout settings management in Docker Desktop 4.25

Docker Desktop 4.25 introduces an easier way to manage Docker Scout image analysis. Users can now control Docker Scout image indexing from the Docker Desktop general settings panel with a user-friendly toggle to enable or disable the analysis of local images.

Administrators can fine-tune access with customized user policies, ensuring precise control of Docker Scout image analysis within their organizations. By specifying an organizational setting in admin-settings.json, administrators can control the Docker Scout image analysis feature for their developers. This enhancement is the first of many to ensure that both user and administrator experiences support personalization.

Figure 4: Docker Desktop 4.25 user settings displaying the new option to turn Scout SBOM indexing on or off at the user settings level. For organizations with administrative controls, this feature can be restricted according to company policy.

Conclusion

The 4.25 release is all about enhancing your Docker Desktop experience. Rosetta for Linux provides remarkable speed and efficiency, optimized installation on Windows ensures seamless updates, and Docker Scout image analysis settings are easier to manage.

Update to Docker Desktop 4.25 to empower every user and team to keep improving productivity and efficiency when developing innovative applications. Have feedback? Leave it on our public GitHub roadmap, and let us know what else you’d like to see in upcoming releases.

Learn more

Read the Docker Desktop Release Notes.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Highlights from DockerCon 2023: New Docker Local, Cloud, and AI/ML Innovations

DockerCon 2023 was held October 4-5 in Los Angeles, California, as a hybrid event and the first with in-person attendance since 2019. Both days’ keynotes were packed with Docker announcements and demos, and customers and partners joined us on stage.

In this post, we round up highlights from DockerCon 2023. Event videos are available on-demand now on the DockerCon site and will be added to YouTube in the coming weeks.

Docker CEO Scott Johnston kicked off DockerCon 2023, celebrating 10 years of Docker. 

“In the last 10 years, we’ve grown to more than 15 million developers, across the globe, and you all collectively — just on Docker Hub alone — have created and shared more than 15 million repos across open source, commercial, and many other communities,” he said. “Now, with your code, with your apps, with your Dockerfiles, your Docker Compose files, tweets, blog posts, YouTube videos, and introducing your colleagues to Docker, you have made it clear that Docker is the way to build, share, and run any application, anywhere.” 

The first-day keynote included product announcements to accelerate the delivery of secure apps. 

“With these products, Docker is clearly making ‘shift-left’ the new standard in developer experience,” writes Zevi Reinitz for Livecycle. “Each of these new tools aims to achieve a singular goal for developers everywhere: combine the responsiveness and convenience of local development with the on-demand resources, connectedness, and collaboration possibilities of the cloud. This combination enables developers to do their best work much earlier in the SDLC than they ever imagined possible.”

The second-day keynote, hosted by Docker CTO Justin Cormack, focused on innovations in artificial intelligence (AI). 

“The critical importance of Docker to the modern development ecosystem cannot be overstated, and the new AI efforts could have a big impact on GenAI development efforts,” writes Sean Michael Kerner in VentureBeat.

Here’s a roundup of the news and announcement buzz at DockerCon:

Docker Desktop 4.24: Improving the developer experience

Prior to the event kickoff, Docker announced the release of Docker Desktop 4.24. This release brings the Docker Compose Watch GA release, a tool to improve the inner loop of application development. Docker Compose Watch enables devs to instantly see the impact of their code changes without manually triggering image builds. Read the Docker Compose Watch GA announcement to learn more. 

Docker Desktop 4.24 also includes the GA release of the Resource Saver performance enhancement feature. This new feature automatically detects when Docker Desktop is not running containers and reduces its memory footprint by 10x, freeing up valuable resources on developers’ machines for other tasks and minimizing the risk of lag when navigating across different applications. 

And with this latest Docker Desktop release, developers can view and manage Docker Engine state directly from the Docker Dashboard, minimizing clicks. Learn more in the Docker Desktop 4.24 announcement.

New local + cloud products

In the first-day keynote, we announced the Docker Scout GA release, next-generation Docker Build, and Docker Debug. The new products bring the power of the cloud to a development team’s “inner-loop” code-build-test-debug process.

The Docker Scout GA release enables developers to evaluate container images against a set of out-of-the-box policies. Scout’s new capabilities strengthen its position as integral to the software supply chain. Read the Docker Scout announcement to learn more. 

Development teams can waste as much as an hour per day per team member waiting for their image builds to finish. To address this, next-generation Docker Build speeds up builds by as much as 39 times by automatically taking advantage of large, on-demand cloud-based servers and team-wide build caching.

Developers can spend as much as 60% of their time debugging their applications. But much of that time is taken by sorting and configuring tools and set-up instead of debugging. Docker Debug provides a language-independent, integrated toolbox for debugging local and remote containerized apps — even when the container fails to launch — enabling developers to find and solve problems faster.

The Mutagen File Sync feature of Docker Desktop takes file sharing to new heights with up to a 16.5x improvement in performance. To give it a try and help influence the future of Docker, sign up for the Docker Desktop Preview Program.

Udemy + Docker

Docker and Udemy announced a partnership to offer developers accessible learning paths to further their Docker education. Read the announcement blog post to learn more.

AI/ML announcements

Docker AI

Docker AI, Docker’s first AI-powered product, boosts developer productivity by generating guidance that follows best practices and helps developers select up-to-date, secure images for their applications. Read the press release and “Docker dives into AI to help developers build GenAI apps” on VentureBeat to learn more.

You can sign up for early access to Docker AI now.

New GenAI stack

Docker and partners Neo4j, LangChain, and Ollama launched a new GenAI Stack designed to enable developers to deploy a full GenAI stack in a few clicks. Read the blog post and press release to learn how the GenAI Stack simplifies AI/ML integration. 

The GenAI Stack is available in early access now and is accessible from the Docker Desktop Learning Center or on GitHub. 

OpenPubkey

During DockerCon, we announced our intention to use OpenPubkey, a project jointly developed by BastionZero and Docker and recently open-sourced and donated to The Linux Foundation, as part of our Docker Official Images signing solution. Read our blog post to learn about signing Docker Official Images using OpenPubkey.

Hackathon kicks off

A Docker AI/ML Hackathon kicked off the week of DockerCon. The Docker AI/ML Hackathon is open from October 3 – November 7, 2023. Winning projects receive prizes, including Docker swag and up to US$10,000.

Register for the Docker AI/ML Hackathon to participate and to be notified of event activities.

Videos now online

Thank you to the DockerCon attendees, speakers, and sponsors for making the 2023 hybrid event  a huge success! And thank you to Docker partners, customers, Docker Captains, and our community for helping make this year happen. 

Visit DockerCon.com for on-demand videos from the event, and subscribe to the Docker YouTube channel to be notified as videos are uploaded.

Learn more

DockerCon 2023 videos on-demand

Docker YouTube channel

DockerCon archives on YouTube: 2020, 2021, 2022

Docker Desktop 4.24: Compose Watch, Resource Saver, and Docker Engine

Announcing Docker Compose Watch GA Release

Docker’s Journey Toward Enabling Lightning-Fast Developer Innovation: Unveiling Performance Milestones

What is Resource Saver Mode in Docker Desktop and what problem does it solve?, by Ajeet Raina on Collabnix

Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain

Announcing Udemy + Docker Partnership

Docker dives into AI to help developers build GenAI apps, by Sean Michael Kerner on VentureBeat

Docker Announces Docker AI, Boosting Developer Productivity Through Context-Specific, Automated Guidance

Docker Announces New Local + Cloud Products to Accelerate Delivery of Secure Apps

Docker with Neo4j, LangChain, and Ollama Launches New GenAI Stack for Developers

Signing Docker Official Images Using OpenPubkey

Announcing Docker AI/ML Hackathon 

Register for the Docker AI/ML Hackathon

Docker State of Application Development Survey 2023: Share Your Thoughts on Development

Welcome to the second annual Docker State of Application Development survey!

Please help us better understand and serve the developer community with just 20 minutes of your time. We want to know where developers are focused, what they’re working on, and what is most important to them. Your participation and input will help us build the best products and experiences for you.

For example, in Docker’s 2022 State of Application Development Survey, we found that the task for which Docker users most often refer to support/documentation was creating a Dockerfile (reported by 60% of respondents). Among other improvements, this finding helped spur the innovation of Docker AI.

We also found that 59% of respondents use Udemy for online courses and certifications, so we have partnered with Udemy to make learning and using Docker the best and most streamlined experience possible.

Take the Docker State of Application Development survey now!

By participating in the survey, you will be entered into a raffle for a chance to win* one of the following prizes:

1 laptop computer (an Apple M2 16”)

3 Lego kits: Choose from Ferrari™ Daytona SP3, the HulkBuster™, or The Lord of the Rings: Rivendell

2 game consoles: Choose from a PlayStation 5, Xbox Series X, or Nintendo Switch OLED

2 $300 Amazon.com gift cards 

20 Docker swag sets 

The survey is open from October 20, 2023 (7AM PST) to November 20, 2023 (11:59PM PST). 

We’ll choose the winners randomly from those who complete the survey with meaningful answers. Winners will be notified via email on December 11, 2023.

The Docker State of Application Development survey only takes about 20 minutes to complete. We appreciate every contribution and opinion. Your voice counts!

*Docker State of Application Development Promotion Official Rules.

Signing Docker Official Images Using OpenPubkey

At DockerCon 2023, we announced our intention to use OpenPubkey, a project jointly developed by BastionZero and Docker and recently open-sourced and donated to the Linux Foundation, as part of our signing solution for Docker Official Images (DOI). We provided a detailed description of our signing approach in the DockerCon talk “Building the Software Supply Chain on Docker Official Images.” 

In this post, we walk you through the updated DOI signing strategy. We start with how basic container image signing works and gradually build up to what is currently a common image signing flow, which involves public/private key pairs, certificate authorities, The Update Framework (TUF), timestamp logs, transparency logs, and identity verification using OpenID Connect (OIDC).

After describing these mechanics, we show how OpenPubkey, with a few recent enhancements included, can be leveraged to smooth the flow and decrease the number of third-party entities the verifier is required to trust.

Hopefully, this incremental narrative will be useful to those new to software artifact signing and those just looking for how this proposal differs from current approaches. As always, Docker is committed to improving the developer experience, increasing the time developers spend on adding value, and decreasing the amount of time they spend on toil.

The approach described in this post aims to allow Docker users to improve the security of their software supply chain by making it easier to verify the integrity and origin of the DOI images they use every day.

Signing container images

An entity can prove that it built a container image by creating a digital signature and adding it to the image. This process is called signing. To sign an image, the entity can create a public/private key pair. The private key must be kept secret, and the public key can be shared publicly.

When an image is signed, a signature is produced using the private key and the digest of the image. Anyone with the public key can then validate that the signature was created by someone who has the private key (Figure 1).

Figure 1: An image is signed using a private key, resulting in a signed image. As a next step, the image’s signature is verified using the corresponding public key to confirm its authenticity.
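To make this concrete, here is a minimal Python sketch of the sign-and-verify flow using an Ed25519 key pair from the cryptography package. It is an illustration only, not the DOI signing tooling, and the digest value is a made-up placeholder.

# Minimal signing/verification sketch using the "cryptography" package.
# The digest below is a placeholder, not a real image digest.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

image_digest = b"sha256:<placeholder>"      # digest of the image being signed
private_key = Ed25519PrivateKey.generate()  # kept secret by the signer
public_key = private_key.public_key()       # shared with verifiers
signature = private_key.sign(image_digest)  # the signing step
try:
    public_key.verify(signature, image_digest)  # the verification step
    print("signature is valid")
except InvalidSignature:
    print("signature is NOT valid")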

Let’s walk through how container images can be signed, starting with a naive approach, building up to the current status quo in image signing, and ending with Docker’s proposed solution. We’ll use signing Docker Official Images (DOI) as part of the DOI build process as our example since that is the use case for which this solution has been designed.

In the diagrams throughout this post, we’ll use colored seals to represent signatures. The color of the seal matches the color of the private key it was signed with (Figure 2).

Figure 2: Two distinct private keys, labeled 1234 (red) and 5678 (yellow), generate corresponding unique signatures.

Note that all the verifier knows after verifying an image signature with a public key is that the image was signed with the private key associated with the public key. To trust the image, the verifier must verify the signature and the identity of the key pair owner (Figure 3).

Figure 3: DOI builder pushing a signed image to the registry and verifier pulling the same image. At this point, the verifier only knows what key signed the image, but not who controls the key.

Identity and certificates

How do you verify the owner of a public/private key pair? That is the purpose of a certificate, a simple data structure including a public key and a name. The certificate binds the name, known as the subject, to the public key. This data structure is normally signed by a Certificate Authority (CA), known as the issuer of the certificate. 

Certificates can be distributed alongside signatures that were made with the corresponding key. This means that consumers of images don’t need to verify the owner of every public key used to sign any image. They can instead rely on a much smaller set of CA certificates. This is analogous to the way web browsers have a set of a few dozen root CA certificates to establish trust with a myriad of websites using HTTPS.

Going back to the example of DOI signing, if we distribute a certificate binding the 1234 public key with the Docker Official Images (DOI) builder name, anybody can verify that an image signed by the 1234 private key was signed by the DOI builder, as long as they trust the CA that issued the certificate (Figure 4).

Figure 4: DOI builder provides proof of identity to a Certificate Authority (CA), which provides a certificate back. DOI builder pushes a signed image and certificate to the registry. The verifier is able to verify the signed image and that image was created by DOI builder.

Trust policy

Certificates solve the problem of which public keys belong to which entities, but how do we know which entity was supposed to sign an image? For this, we need a trust policy: signed metadata detailing which entities are allowed to sign an image. For Docker Official Images, the trust policy will state that our DOI build servers must sign the images.

We need to ensure that trust policy is updated in a secure way, because if a malicious party can change a policy, then they can trick clients into believing the malicious party’s keys are allowed to sign images they otherwise should not be allowed to sign. To ensure secure trust policy updates, we will use The Update Framework (TUF) (specification), a mechanism for securely distributing updates to arbitrary files.

A TUF repository uses a hierarchy of keys to sign manifests of files in a repository. File indexes, called manifests, are signed with keys that are kept online to enable automation, and the online signing keys are signed with offline root keys. This enables the repository to be recovered in case of online key compromise.

A client that wants to download an update to a file in a TUF repository must first retrieve the latest copy of the signed manifests and make sure the signatures on the manifests are verified. Then they can retrieve the actual files.
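As a rough illustration of that client flow, the sketch below uses the python-tuf ngclient API; this library choice, the URLs, and the target file name are all assumptions for illustration, not the actual DOI distribution setup. The client must already hold a trusted copy of the TUF root metadata in its metadata directory.

# Sketch of a TUF client update using python-tuf's ngclient API.
# Assumes a trusted root.json already exists in metadata_dir.
from tuf.ngclient import Updater

updater = Updater(
    metadata_dir="/tmp/tuf-metadata",                   # local cache of signed manifests
    metadata_base_url="https://example.com/metadata/",  # placeholder manifest location
    target_base_url="https://example.com/targets/",     # placeholder file location
    target_dir="/tmp/tuf-targets",
)
updater.refresh()                                   # fetch and verify the latest manifests
info = updater.get_targetinfo("trust-policy.json")  # look up the file in the verified manifests
if info is not None:
    path = updater.download_target(info)            # download and verify the file itself
    print("verified file written to", path)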

Once a TUF repository has been created, it can be distributed by any means we choose, even if the distribution mechanism is not trusted. We will distribute it using the Docker Hub registry (Figure 5).

Figure 5: TUF repository provides a Trust Policy that says the image should be signed by DOI builder. DOI builder provides proof of identity to a Certificate Authority (CA), which provides a certificate back. DOI builder pushes signed image, certificate from the CA, and TUF policy to the registry. The verifier is able to verify the signed image and that the image was created by the identity defined in the Trust Policy.

Certificate expiry and timestamping

In the preceding section, we described a certificate as simply a binding from an identity to a public key. In reality, certificates do contain some additional data. One important detail is the expiry time. Usually, certificates should not be trusted after their expiry time. Signatures on images (as in Figure 5) will only be valid until the attached certificate’s expiry time. A limited life span for a signature isn’t desirable because we want images to be long-lasting (longer-lasting than a certificate).

This problem can be solved by using a Timestamp Authority (TSA). A TSA will receive some data, bundle the data with the current time, and sign the bundle before returning it. Using a TSA allows anybody who trusts the TSA to verify that the data existed at the bundled time.

We can send the signature to a TSA and have it bundle the current timestamp with the signature. Then, we can use the bundled timestamp as the ‘current time’ when verifying the certificate. The timestamp proves that the certificate had not expired at the time the signature was created. The TSA’s certificate will also expire, at which point all of the signed timestamps it has created will also expire. TSA certificates typically last a long time (10+ years) (Figure 6).

Figure 6: DOI builder provides proof of identity to a Certificate Authority (CA), which provides a certificate back. DOI builder sends the image signature to the Timestamping Authority (TSA), which provides a signed bundle with the signature and the current time. DOI builder pushes the signed image, certificate from CA, and the bundle signed by the TSA to the registry. The verifier is able to verify the signed image and that the image was created by DOI builder at a specific time.
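The sketch below shows the core of that verification-time check: instead of the verifier’s current clock, the TSA-bundled timestamp is tested against the certificate’s validity window. Parsing the certificate and verifying the TSA’s own signature are deliberately elided, and the dates are invented for illustration.

# Sketch: use the TSA-signed timestamp instead of "now" when checking
# whether the signing certificate was valid. Real code would first verify
# the TSA's signature over the (signature, timestamp) bundle.
from datetime import datetime, timezone

def cert_valid_at(not_before, not_after, signed_time):
    # True if the certificate was within its validity window at signed_time.
    return not_before <= signed_time <= not_after

not_before = datetime(2023, 10, 1, tzinfo=timezone.utc)        # example validity window
not_after = datetime(2023, 10, 8, tzinfo=timezone.utc)         # short-lived certificate
tsa_time = datetime(2023, 10, 3, 12, 0, tzinfo=timezone.utc)   # time bundled by the TSA
print(cert_valid_at(not_before, not_after, tsa_time))          # True: valid when signed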

OpenID Connect

Thus far, we’ve ignored how the CA verifies the signer’s identity (the “proof of ID” box in the preceding diagrams). How this verification works depends on the CA, but one approach is to outsource this verification to a third party using OpenID Connect (OIDC).

We won’t describe the entire OIDC flow, but the primary steps are:

The signer authenticates with the OIDC provider (e.g., Google, GitHub, or Microsoft).

The OIDC provider issues an ID token, which is a signed token that the signer can use to prove their identity.

The ID token includes an audience, which specifies the intended party that should use the ID token to verify the identity of the signer. The intended audience will be the Certificate Authority. The ID token must be rejected by any other audience.

The CA must trust the OIDC provider and understand how to verify the ID token’s audience claim.

OIDC ID tokens are signed using the OIDC provider’s private key. The corresponding public key is distributed from a discoverable HTTP endpoint hosted by the OIDC provider.
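For example, the audience check a CA performs on an incoming ID token looks roughly like the following sketch, using the PyJWT library. The audience string and the way the provider’s public key is obtained are illustrative placeholders, not the configuration of any real CA.

# Sketch of ID token verification at the CA, using PyJWT.
import jwt  # PyJWT

def verify_id_token(id_token, provider_public_key):
    # jwt.decode checks the OIDC provider's signature, the expiry, and the
    # audience claim; it raises jwt.InvalidAudienceError if "aud" is not ours.
    return jwt.decode(
        id_token,
        key=provider_public_key,
        algorithms=["RS256"],
        audience="https://ca.example.com",  # reject tokens intended for anyone else
    )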

Signed DOI will be built using GitHub Actions, and GitHub Actions can automatically authenticate build processes with the GitHub Actions OIDC provider, making ID tokens available to build processes (Figure 7).

Figure 7: Using OIDC, DOI builder verifies its identity to GitHub Actions, which provides a token the DOI builder sends to the CA to verify its identity. The CA verifies the token with GitHub Actions and provides a certificate back to the DOI builder.

Key compromise

We mentioned at the start of this post that the private keys must be kept private for the system to remain secure. If the signer’s private key becomes compromised, a malicious party can create signatures that can be verified as being signed by the signer.

Let’s walk through a few ways to mitigate the risk of these keys becoming compromised.

Ephemeral keys

A nice way to reduce the risk of compromise of private keys is to not store them anywhere. Key pairs can be generated in memory, used once, and then the private key can be discarded. This means that certificates are also single-use, and a new certificate must be requested from the CA every time a signature is created.

Transparency logging

Ephemeral keys work well for the signing keys themselves, but there are other things that can be compromised:

The CA’s private key (practically, this cannot be ephemeral)

The OIDC provider’s private key (practically, this cannot be ephemeral)

The OIDC account credentials

These keys/credentials must be kept private, but in case of an accidental compromise, we need to have a way to detect misuse. In this situation, a transparency log (TL) can help.

A transparency log is an append-only tamperproof data store. When data is written to the log, a signed receipt is returned by the operator of the log, which can be used as proof that it is contained in the log. The log can also be monitored to check for suspicious activity.

We can use a transparency log to store all signatures and bundle the TL receipt with the signature. We can only accept a signature as valid if the signature is bundled with a valid TL receipt. Because a signature will only be valid if an entry is in the TL, any malicious party creating fake signatures will also have to publish an entry to the TL. The TL can be monitored by the signer, who can sound the alarm if they notice any signatures in the log they didn’t create (Figure 8). The log can also be monitored by concerned third parties to check for any signatures that don’t look right (Figure 9).

We can also use a transparency log to store certificates issued by the CA. A certificate will only be valid if it comes with a TL receipt. This is also how TLS certificates work — they will only be trusted by browsers if they have an attached TL receipt.

The TL receipts also contain a timestamp, so a TL can completely replace the role of the TSA while also providing extra functionality.

Figure 8: DOI builder sends the signed image and certificate from CA to the Transparency Log (TL), which appends the signature to the TL and returns a receipt for the current time. The monitor is able to observe that the signature was made by the DOI builder at a specific time.

Figure 9: Example of a malicious party signing an image using a fake certificate they received from the CA using hacked OIDC credentials. Monitor is able to discern something is not quite right.

Similar attacks with a stolen private key and a legitimate certificate are also detectable in this way.

A summary of the signing status quo

Everything up to this point describes the status quo in artifact signing. Let’s pull together all of the components described so far to recap (Figure 10). These are:

OIDC provider, to verify the identity of some entity

Certificate authority, to issue certificates binding the identity to a public key

Signer, to sign an image with the corresponding private key

Transparency log (TL), to store signatures and return signed timestamped receipts

TUF repository, to distribute trust policy

Transparency log monitors, to detect malicious behavior

Registry, to store all of the artifacts

Client, to verify signatures on images

Figure 10: Building on all the previous figures, using OIDC the DOI builder identifies itself to GitHub Actions, which provides a token the DOI builder sends to the CA to verify its identity. The CA verifies the token with GitHub Actions and provides a certificate back to the DOI builder. DOI builder sends the signed image and certificate from CA to the Transparency Log (TL), which appends the signature to the TL and returns a receipt for the current time. DOI builder pushes the signed image, the certificate from the CA, and the TL receipt to the registry. The verifier is able to verify the signed image and that the image was created by the identity consistent with trust policy at a specific time. The monitor is able to observe that the signature was made by the DOI builder at a specific time.

The client verifying a signature needs to trust:

The CA

The TL

The OIDC provider (transitively, they need to trust that the CA verifies ID tokens from the OIDC provider correctly)

The signers of the TUF repository

That is a lot of entities to trust. If any of them is compromised or acts maliciously, the security of the whole system is undermined. Even if such a compromise can be detected by monitoring the transparency log, remediation can be difficult. Removing any of these points of trust without compromising the overall security of the solution would be an improvement.

Docker’s proposed signing solution

Before a CA issues a certificate, it needs to verify control of the private key and control of the identity. In Figure 10, the CA outsources the identity verification to an OIDC provider. We can already use the OIDC provider to verify the identity, but can we use it to verify control of the private key? It turns out that we can.

OpenPubkey is a protocol for binding OIDC identities to public keys. Full details of how it works can be found in the OpenPubkey paper, but below is a simplified explanation. 

OIDC recommends that a unique random number be sent as part of the request to the OIDC provider. This number is called a nonce.

If the nonce is sent, the OIDC provider must return it in the signed JWT (JSON Web Token) called an ID token. We can use this to our advantage by constructing the nonce as a hash of the signer’s public key and some random noise (as the nonce still has to be random). The signer can then bundle the ID token from the OIDC provider with the public key and the random noise and sign the bundle with its private key.

The resulting token (called a PK token) proves control of the OIDC identity and control of the private key at a specific time, as long as a verifier trusts the OIDC provider. In other words, the PK token fulfills the same role as the certificate provided by the CA in all the signing flows up to this point, but does not require trust in a CA. This token can be distributed alongside signatures in the same way as a certificate.
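A simplified sketch of that nonce construction and signing step follows. It ignores the enhancements described later in this post, and the request_id_token function is a stand-in for the real OIDC request, not an actual API.

# Simplified OpenPubkey-style flow: commit the ephemeral public key into the
# OIDC nonce, then sign the bundle with the matching private key.
import hashlib, json, os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def request_id_token(nonce):
    # Placeholder: a real OIDC provider would return a signed JWT whose
    # "nonce" claim equals the value we sent.
    return "<signed-id-token-containing-" + nonce + ">"

signing_key = Ed25519PrivateKey.generate()  # ephemeral key pair
public_key_bytes = signing_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
noise = os.urandom(32)                      # keeps the nonce unpredictable
nonce = hashlib.sha256(public_key_bytes + noise).hexdigest()
id_token = request_id_token(nonce)
bundle = json.dumps({
    "id_token": id_token,
    "public_key": public_key_bytes.hex(),
    "noise": noise.hex(),
}).encode()
pk_token_signature = signing_key.sign(bundle)  # proves control of the private key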

OIDC ID tokens, however, are designed to be verified and discarded in a short timeframe. The public keys for verifying the tokens are available from an API endpoint hosted by the OIDC provider. These keys are rotated frequently (every few weeks or months), and there is currently no way to verify a token signed by a key that is no longer valid. Therefore, a log of historic keys will need to be used to verify PK tokens that were signed with OIDC provider keys that have been rotated out. This log is an additional point of trust for a verifier, so it may seem we’ve removed one point of trust (the CA) and replaced it with another (the log of public keys). For DOI, we have already added another point of trust with the TUF repository used to distribute trust policy. We can also use this TUF repository to distribute the log of public keys.

Figure 11: Using OIDC the DOI builder identifies itself to GitHub Actions, which provides an ID token that binds the OIDC identity to the public key. DOI builder sends the signed image and PK token to the Transparency Log (TL), which appends the signature and returns a receipt for the current time. DOI builder pushes the signed image, the PK token, and the TL receipt to the registry. The verifier is able to verify the signed image and that the image was created by the identity consistent with trust policy at a specific time. The monitor is able to observe that the signature was made by the DOI builder at a specific time.

OpenPubkey enhancements

As originally formulated, OpenPubkey was not designed to support code signing workflows as we’ve described. As a result, the implementation described here has a few drawbacks. In the following, we discuss each drawback and its associated solution.

OIDC ID tokens are bearer auth tokens

An OIDC ID token is a JWT signed by the OIDC provider that allows the bearer of the token to authenticate as the subject of the token. Because we will be publishing these tokens publicly, a malicious party could take a valid ID token from the registry and present it to a service to impersonate the subject of the ID token.

In theory, this should not be a problem because, according to the OIDC spec, any consumer must check the audience in the ID token before trusting the token (i.e., if the token is presented to Service Foo, Service Foo must check that the token was intended for Service Foo by checking the audience claim). However, there have been issues with OIDC client libraries not making this check.

To solve this issue, we can remove the OIDC provider’s signature from the ID token and replace it with a Guillou-Quisquater (GQ) signature. This GQ signature allows us to prove that we had the OIDC provider’s signature without sharing the signed token, and this proof can be verified using the OIDC provider’s public key and the rest of the ID token. More information on GQ signatures can be found in the original paper and in the OpenPubkey reference implementation. We’ve used a similar approach to one discussed in a paper by Zachary Newman.

OIDC ID tokens can contain personal information

For the case where OIDC ID tokens from CI systems such as GitHub Actions are used, it is unlikely that there is any personal information that could be leaked in the token. For example, the full data made available in a GitHub Actions OIDC ID token is documented on GitHub.

Some of this data, such as the repository name and the Git commit digest, are already included in the unsigned provenance attestations that the Docker build process generates. ID tokens representing human identities may include more personal data, but arguably, this is also the kind of data consumers may wish to verify as part of trust policy.

Key compromise

If the signer’s private key is compromised (admittedly unlikely as this is an ephemeral key), it is trivial for an attacker to sign any images and combine the signatures with the public PK token. As mentioned previously, the transparency log can help detect this kind of compromise, but we can go further and prevent it in the first place.

In the original OpenPubkey flow, we create the nonce from the signer’s public key and random noise, then use the corresponding private key to sign the image. If, however, we also include the hash of the image in the nonce, then the image, which we have already signed, is in effect also signed by the OIDC provider. This means the PK token becomes a one-use token that cannot be replayed to sign other images. Thus, compromising the ephemeral private key is no longer useful to an attacker.
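Continuing the earlier sketch, the only change is what goes into the nonce; the values are placeholders as before.

# Enhanced nonce: also commit to the image digest, so the resulting PK token
# is tied to this one image and cannot be replayed to sign anything else.
import hashlib, os

public_key_bytes = b"<ephemeral public key bytes>"  # as in the earlier sketch
image_digest = b"sha256:<placeholder>"              # digest of the image being signed
noise = os.urandom(32)
nonce = hashlib.sha256(public_key_bytes + noise + image_digest).hexdigest()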

OpenPubkey uses the nonce claim in the ID token

The full OIDC flow isn’t available on GitHub Actions. Instead, a simple HTTP endpoint is provided where a build process can request an ID token with an optional audience (aud) claim. We need to get the OIDC provider to sign some arbitrary data during authentication. We can do this by sending some data to the OIDC provider which will end up in one of the ID token claims, as long as we’re not preventing the claim’s intended use. Because GitHub Actions allows us to set the aud claim to an arbitrary value, we can use it for this purpose.

What’s next?

Docker aims to enable the broader open source community to improve security across the entire software supply chain. We feel strongly that good security requires good, easy-to-use tooling. Or, as Founder and CEO of Bounce Security Avi Douglen more eloquently put it, “Security at the expense of usability comes at the expense of security.” 

The approach explained in this post aims to make signing container images as easy as possible without sacrificing security and trust. By simplifying the overall approach and eliminating complicated infrastructure requirements, our goal is to foster widespread adoption of container signing, in the same way we enabled the widespread adoption of Linux containers a decade ago. 

Open source community and cryptography practitioners: Let us know what you think of this approach to signing. You can review the preliminary implementation across the various repositories in the OpenPubkey GitHub organization. Feel free to open issues in the various repositories or join the discussion in the OpenSSF community. 

We look forward to hearing your feedback and working together to improve the security of the software supply chain!

Learn more

Questions about DOI signing? Check out the DOI signing FAQ.

Use Docker Scout to improve your software supply chain security.

Implementation questions? Check out the code in the OpenPubkey GitHub organization.

Questions about OpenPubkey? See the OpenPubkey FAQ.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Stick Figures image library by Youri Tjang.

Getting Started with JupyterLab as a Docker Extension

This post was written in collaboration with Marcelo Ochoa, the author of the Jupyter Notebook Docker Extension.

JupyterLab is a web-based interactive development environment (IDE) that allows users to create and share documents that contain live code, equations, visualizations, and narrative text. It is the latest evolution of the popular Jupyter Notebook and offers several advantages over its predecessor, including:

A more flexible and extensible user interface: JupyterLab allows users to configure and arrange their workspace to best suit their needs. It also supports a growing ecosystem of extensions that can be used to add new features and functionality.

Support for multiple programming languages: JupyterLab is not just for Python anymore! It can now be used to run code in various programming languages, including R, Julia, and JavaScript.

A more powerful editor: JupyterLab’s built-in editor includes features such as code completion, syntax highlighting, and debugging, which make it easier to write and edit code.

Support for collaboration: JupyterLab makes collaborating with others on projects easy. Documents can be shared and edited in real-time, and users can chat with each other while they work.

This article provides an overview of the JupyterLab architecture and shows how to get started using JupyterLab as a Docker extension.

Uses for JupyterLab

JupyterLab is used by a wide range of people, including data scientists, scientific computing researchers, computational journalists, and machine learning engineers. It is a powerful interactive computing and data science tool and is becoming increasingly popular as an IDE.

Here are specific examples of how JupyterLab can be used:

Data science: JupyterLab can explore data, build and train machine learning models, and create visualizations.

Scientific computing: JupyterLab can perform numerical simulations, solve differential equations, and analyze data.

Computational journalism: JupyterLab can scrape data from the web, clean and prepare data for analysis, and create interactive data visualizations.

Machine learning: JupyterLab can develop and train machine learning models, evaluate model performance, and deploy models to production.

JupyterLab can help solve problems in the following ways:

JupyterLab provides a unified environment for developing and running code, exploring data, and creating visualizations. This can save users time and effort; they do not have to switch between different tools for different tasks.

JupyterLab makes it easy to share and collaborate on projects. Documents can be shared and edited in real-time, and users can chat with each other while they work. This can be helpful for teams working on complex projects.

JupyterLab is extensible. This means users can add new features and functionality to the environment using extensions, making JupyterLab a flexible tool that can be used for a wide range of tasks.

Project Jupyter’s tools are available for installation via the Python Package Index, the leading repository of software created for the Python programming language, but you can also get the JupyterLab environment up and running using Docker Desktop on Linux, Mac, or Windows.

Figure 1: JupyterLab is a powerful web-based IDE for data science

Architecture of JupyterLab

JupyterLab follows a client-server architecture (Figure 2) where the client, implemented in TypeScript and React, operates within the user’s web browser. It leverages the Webpack module bundler to package its code into a single JavaScript file and communicates with the server via WebSockets. On the other hand, the server is a Python application that utilizes the Tornado web framework to serve the client and manage various functionalities, including kernels, file management, authentication, and authorization. Kernels, responsible for executing code entered in the JupyterLab client, can be written in any programming language, although Python is commonly used.

The client and server exchange data and commands through the WebSockets protocol. The client sends requests to the server, such as code execution or notebook loading, while the server responds to these requests and returns data to the client.

Kernels are distinct processes managed by the JupyterLab server, allowing them to execute code and send results — including text, images, and plots — to the client. Moreover, JupyterLab’s flexibility and extensibility are evident through its support for extensions, enabling users to introduce new features and functionalities, such as custom kernels, file viewers, and editor plugins, to enhance their JupyterLab experience.

Figure 2: JupyterLab architecture.

JupyterLab is highly extensible. Extensions can be used to add new features and functionality to the client and server. For example, extensions can be used to add new kernels, new file viewers, and new editor plugins.

Examples of JupyterLab extensions include:

The ipywidgets extension adds support for interactive widgets to JupyterLab notebooks.

The nbextensions package provides a collection of extensions for the JupyterLab notebook.

The jupyterlab-server package provides extensions for the JupyterLab server.

JupyterLab’s extensible architecture makes it a powerful tool that can be used to create custom development environments tailored to users’ specific needs.

Why run JupyterLab as a Docker extension?

Running JupyterLab as a Docker extension offers a streamlined experience to users already familiar with Docker Desktop, simplifying the deployment and management of the JupyterLab notebook.

Docker provides an ideal environment to bundle, ship, and run JupyterLab in a lightweight, isolated setup. This encapsulation promotes consistent performance across different systems and simplifies the setup process.

Moreover, Docker Desktop is the only prerequisite to running JupyterLab as an extension. Once you have Docker installed, you can easily set up and start using JupyterLab, eliminating the need for additional software installations or complex configuration steps.

Getting started

Getting started with the Docker Desktop Extension is a straightforward process that allows developers to leverage the benefits of unified development. The extension can easily be integrated into existing workflows, offering a familiar interface within Docker. This seamless integration streamlines the setup process, allowing developers to dive into their projects without extensive configuration.

The following key components are essential to completing this walkthrough:

Docker Desktop

Working with JupyterLab as a Docker extension begins with opening Docker Desktop. Here are the steps to follow (Figure 3):

Choose Extensions in the left sidebar.

Switch to the Browse tab.

In the Categories drop-down, select Utility Tools.

Find Jupyter Notebook and then select Install.

Figure 3: Installing JupyterLab with the Docker Desktop.

A JupyterLab welcome page will be shown (Figure 4).

Figure 4: JupyterLab welcome page.

Adding extra kernels

If you need to work with languages other than Python 3 (the default), you can complete a post-installation step. For example, to add the iJava kernel, launch a terminal and execute the following:

~ % docker exec -ti --user root jupyter_embedded_dd_vm /bin/sh -c "curl -s https://raw.githubusercontent.com/marcelo-ochoa/jupyter-docker-extension/main/addJava.sh | bash"

Figure 5 shows the install process output of the iJava kernel package.

Figure 5: Capture of iJava kernel installation process.

Next, close your extension tab or Docker Desktop, then reopen, and the new kernel and language support will be enabled (Figure 6).

Figure 6: New kernel and language support enabled.

Getting started with JupyterLab

You can begin using JupyterLab notebooks in many ways; for example, you can choose the language at the welcome page and start testing your code. Or, you can upload a file to the extension using the up arrow icon found at the upper left (Figure 7).

Figure 7: Sample JupyterLab iPython notebook.

Import a new notebook from local storage (Figures 8 and 9).

Figure 8: Upload dialog from disk.

Figure 9: Uploaded notebook.

Loading JupyterLab notebook from URL

If you want to import a notebook directly from the internet, you can use the File > Open URL option (Figure 10), shown here with an example notebook containing Java samples.

Figure 10: Load notebook from URL.

The result of uploading a notebook from a URL is shown in Figure 11.

Figure 11: Uploaded notebook from URL.

Download a notebook to your personal folder

Just like uploading a notebook, the download operation is straightforward. Select your file name and choose the Download option (Figure 12).

Figure 12: Download to local disk option menu.

A download destination option is also shown (Figure 13).

Figure 13: Select local directory for downloading destination.

A note about persistent storage

The JupyterLab extension has a persistent volume for the /home/jovyan directory, which is the default directory of the JupyterLab environment. The contents of this directory will survive extension shutdown, Docker Desktop restart, and JupyterLab Extension upgrade. However, if you uninstall the extension, all this content will be discarded. Back up important data first.

Change the core image

This Docker extension uses a Docker image — jupyter/scipy-notebook:lab-4.0.6 (ubuntu 22.04) —  but you can choose one of the following available versions (Figure 14).

Figure 14: JupyterLab core image options.

To change the extension image, you can follow these steps:

Uninstall the extension.

Install again, but do not open until the next step is done.

Edit the associated docker-compose.yml file of the extension. For example, on macOS, the file can be found at: Library/Containers/com.docker.docker/Data/extensions/mochoa_jupyter-docker-extension/vm/docker-compose.yml

Change the image name from jupyter/scipy-notebook:ubuntu-22.04 to jupyter/r-notebook:ubuntu-22.04.

Open the extension.

On Linux, the docker-compose.yml file can be found at: .docker/desktop/extensions/mochoa_jupyter-docker-extension/vm/docker-compose.yml

Using JupyterLab with other extensions

To use the JupyterLab extension with other extensions, such as the MemGraph database (Figure 15), typical examples require only a minimal change to the host connection option. A sample notebook usually refers to a MemGraph host running on localhost. Because JupyterLab is another extension hosted in a different Docker stack, you have to replace localhost with host.docker.internal, which refers to the external IP of the other extension. Here is an example:

URI = "bolt://localhost:7687"

needs to be replaced by:

URI = "bolt://host.docker.internal:7687"

Figure 15: Running notebook connecting to MemGraph extension.
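A minimal notebook cell performing this connection might look like the following sketch. It uses the Neo4j Python driver, which speaks the Bolt protocol that MemGraph also accepts; the empty credentials and the trivial query are illustrative only.

# Minimal notebook cell: connect from the JupyterLab extension to another
# extension over Bolt. Credentials and the query are illustrative.
from neo4j import GraphDatabase

URI = "bolt://host.docker.internal:7687"  # not "localhost" from inside the extension
driver = GraphDatabase.driver(URI, auth=("", ""))
with driver.session() as session:
    result = session.run("RETURN 1 AS ok")
    print(result.single()["ok"])
driver.close()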

Conclusion

The JupyterLab Docker extension is a ready-to-run Docker stack containing Jupyter applications and interactive computing tools using a personal Jupyter server with the JupyterLab frontend.

Through the integration of Docker, setting up and using JupyterLab is remarkably straightforward, further expanding its appeal to experienced and novice users alike. 

The following video provides a good introduction with a complete walk-through of JupyterLab notebooks.

Learn more

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Security Advisory: High Severity Curl Vulnerability

The maintainers of curl, the popular command-line tool and library for transferring data with URLs, will release curl 8.4.0 on October 11, 2023. This version will include a fix for two common vulnerabilities and exposures (CVEs), one of which the curl maintainers rate as “HIGH” severity and described as “probably the worst curl security flaw in a long time.” 

The CVE IDs are: 

CVE-2023-38545: severity HIGH (affects both libcurl and the curl tool)

CVE-2023-38546: severity LOW (affects libcurl only, not the tool)

Specific details of the exploit have yet to be published; we expect them to be released when the version update becomes available.

We will continue to update this blog post as more information becomes available.

In the meantime, you can prepare ahead of exploitability details being released on October 11 by using Docker Scout to check whether you’re using the curl library as a dependency in any of the container images in your organization.

Am I vulnerable?

We anticipate that any version of curl prior to 8.4.0 will be affected by these CVEs, so now is a good time to build up a list of what you’ll need to update on October 11, 2023.

Having a dependency on curl won’t necessarily mean the exploit will be possible for your application. When more details are published, Docker Scout will surface specifics about the exploitability of this vulnerability. The first step is to understand whether your images have a dependency on curl.  

Quickest way to assess all images 

The quickest way to assess all images is to enable Docker Scout for your container registry. 

Step 1: Enable Docker Scout

Docker Scout currently supports Docker Hub, JFrog Artifactory, and AWS Elastic Container Registry. Instructions for integrating Docker Scout with these container registries:

Integrating Docker Scout with Docker Hub

Integrating Docker Scout with JFrog Artifactory

Integrating Docker Scout with AWS Elastic Container Registry

Note: If your container registry isn’t supported right now, you’ll need to use the local evaluation method via the CLI, described later.

Step 2: Select the repositories you want to analyze and kick off an analysis

Docker Scout analyzes all local images by default, but to analyze images in remote repositories, you need to enable Docker Scout image analysis. You can do this from Docker Hub, the Docker Scout Dashboard, and CLI. Find out how in the overview guide.

Sign in to your Docker account with the docker login command or use the Sign in button in Docker Desktop.

Use the Docker CLI docker scout repo enable command to enable analysis on an existing repository:

$ docker scout repo enable --org <org-name> <org-name>/scout-demo
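
For example, with a hypothetical organization named acme and a repository named acme/web-api, the command looks like this:

$ docker scout repo enable --org acme acme/web-api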

Step 3: Visit scout.docker.com 

On the scout.docker.com homepage, find the policy card called No vulnerable version of curl and select View details (Figure 1). 

Figure 1: Docker Scout dashboard with the policy card that will help identify if and where the vulnerable version of curl exists.

The resulting list contains all the images that violate this policy — that is, they contain a version of curl that is likely to be susceptible to the HIGH severity CVE (CVE-2023-38545) listed above.

Figure 2: Docker Scout showing list of images that violate the policy by containing affected versions of the curl library.

Alternative CLI method

An alternative method is to use the Docker Scout CLI to analyze and evaluate local container images.

You can use the docker scout policy command to evaluate images against Docker Scout’s built-in policies on the command line, including the No vulnerable version of curl policy.

docker scout policy [IMAGE] --org [ORG]

Figure 3: Docker Scout showing the results of running the Docker Scout command to evaluate a container image against the ‘No vulnerable version of curl’ policy.

If you’d rather understand all the CVEs identified in an individual container image, you can run the following command. This method doesn’t require you to enable Docker Scout in your container registry but will take a little longer if you have a large number of images to analyze. 

docker scout cves [OPTIONS] [IMAGE|DIRECTORY|ARCHIVE]
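
For example, to check a single image and narrow the report to curl-related findings, a command along these lines should work, assuming your Docker Scout CLI version supports the --only-package filter (the image name below is a placeholder):

docker scout cves --only-package curl myorg/my-app:latest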

Learn more

Follow direct updates from the maintainer of curl project via the GitHub issue.

Learn more about Docker Scout at docs.docker.com/scout.

Read Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain.

Quelle: https://blog.docker.com/feed/

Introducing a New GenAI Stack: Streamlined AI/ML Integration Made Easy

At DockerCon 2023, with partners Neo4j, LangChain, and Ollama, we announced a new GenAI Stack. We have brought together the top technologies in the generative artificial intelligence (GenAI) space to build a solution that allows developers to deploy a full GenAI stack with only a few clicks.

Here’s what’s included in the new GenAI Stack:

1. Preconfigured LLMs: We provide preconfigured Large Language Models (LLMs), such as Llama2, GPT-3.5, and GPT-4, to jumpstart your AI projects.

2. Ollama management: Ollama simplifies the local management of open source LLMs, making your AI development process smoother.

3. Neo4j as the default database: Neo4j serves as the default database, offering graph and native vector search capabilities. This helps uncover data patterns and relationships, ultimately enhancing the speed and accuracy of AI/ML models. Neo4j also serves as a long-term memory for these models.

4. Neo4j knowledge graphs: Neo4j knowledge graphs ground LLMs for more precise GenAI predictions and outcomes.

5. LangChain orchestration: LangChain facilitates communication between the LLM, your application, and the database, along with a robust vector index. LangChain serves as a framework for developing applications powered by LLMs, and includes LangSmith, an exciting new way to debug, test, evaluate, and monitor your LLM applications. (See the sketch after this list for how these pieces fit together.)

6. Comprehensive support: To support your GenAI journey, we provide a range of helpful tools, code templates, how-to guides, and GenAI best practices. These resources ensure you have the guidance you need.
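
To give a sense of how these components fit together, here is a minimal Python sketch using the LangChain packages available at the time of writing. It assumes Ollama is serving the llama2 model on localhost:11434 and Neo4j is reachable at bolt://localhost:7687 with placeholder credentials; it is an illustration, not the stack’s actual application code:

from langchain.llms import Ollama
from langchain.graphs import Neo4jGraph

# Ollama manages the local open source LLM (llama2 here)
llm = Ollama(base_url="http://localhost:11434", model="llama2")

# Neo4j acts as the graph database and long-term memory for the model
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

# Ask the LLM a question and print the answer
print(llm("In one sentence, what is a knowledge graph?"))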

Figure 1: The GenAI Stack guide and access to the GenAI Stack components.

Conclusion

The GenAI Stack simplifies AI/ML integration, making it accessible to developers. Docker’s commitment to fostering innovation and collaboration means we’re excited to see the practical applications and solutions that will emerge from this ecosystem. Join us as we make AI/ML more accessible and straightforward for developers everywhere.

The GenAI Stack is available in Early Access now and is accessible from the Docker Desktop Learning Center or on GitHub. 

Participate in our Docker AI/ML Hackathon to show off your most creative AI/ML solutions built on Docker. Read our blog post “Announcing Docker AI/ML Hackathon” to learn more.

At DockerCon 2023, Docker also announced its first AI-powered product, Docker AI. Sign up now for early access to Docker AI. 

Learn more

Find the GenAI Stack on GitHub.

Join the Docker AI/ML Hackathon.

Sign up now for early access to Docker AI. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Quelle: https://blog.docker.com/feed/

Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain

We are excited to announce that Docker Scout General Availability (GA) now allows developers to continuously evaluate container images against a set of out-of-the-box policies, aligned with software supply chain best practices. These new capabilities also include a full suite of integrations enabling you to attain visibility from development into production. These updates strengthen Docker Scout’s position as integral to the software supply chain. 

For developers building modern cloud-native applications, Docker Scout provides actionable insights in real-time to make it simple to secure and manage their software supply chain end-to-end. Docker Scout meets developers in the Docker tools they use daily, and provides the richest library of open source trusted content, policy evaluation, and automated remediation in real-time across the entire software supply chain. From the first base image pulled all the way to git commit, CI pipeline, and deployed workloads in production, Docker Scout leverages its large open ecosystem and the container runtime as vehicles for insights developers can easily act upon.

With Docker Scout operating as the system of record for the software supply chain, developers have access to real-time insights, identification of anomalies in their applications, and automated recommendations to improve application integrity, in tandem with Docker Desktop, Docker CLI, Docker Hub, and the full suite of Docker products. Docker Scout is designed to see the bigger picture and address challenges that are layered into the software supply chain.

Software supply chain

Through many in-depth conversations with our customers, we uncovered a clear trend: Developers are increasingly careful not to consume content they don’t trust or that falls outside their organization’s accepted policies. To address this, Docker provides Docker Official Images, a curated set of Docker repositories hosted on Docker Hub. These images provide essential base repositories that serve as the starting point for most users. Docker and Docker Scout will continue to provide additional forms of software supply chain metadata for the Docker Official Image catalog in the coming months.

Trusted content is the foundation of secure software applications. A key aspect of this foundation is Docker Hub, the largest and most-used source of secure software artifacts, which includes Docker Official Images, Docker Verified Publishers, and Docker-Sponsored Open Source trusted content. Docker Scout policies leverage this metadata to track the life cycle of images, generate unique insights for developers, and help customers automate the enhancement of their software supply chain objectives — from inner loop to production.

Ensuring the reliability of applications requires constant vigilance and in-depth software design considerations, from selecting dependencies in a manifest such as a JSON file, to running compilations and associated tests, to ensuring the safety of every image built. While some monitoring platforms focus solely on policies for container images currently running in production, the new Docker Scout GA release recognizes this is not sufficient to oversee the entire software supply chain, since that approach occurs too late in the development process.

Docker Scout offers a seamless set of actionable insights and suggested workflows that meet developers where they build and monitor today. These insights and workflows are particularly helpful for developers building Docker container images based on Docker Trusted Content (Docker Official Images, Docker-Sponsored Open Source, Docker Verified Publishers), and for many data sources beyond that, including data sourced from integrations with JFrog Artifactory, Amazon ECR, Sysdig Runtime Monitoring, GitHub Actions, GitLab, and CircleCI (Figure 1).

Figure 1: Integrate Docker Scout with tools you already use.

Policy evaluation

Current policy solutions support only a binary outcome, where artifacts are either allowed or denied based on oversimplified results from policy analysis. This pass/fail approach and the corresponding gatekeeping is often too rigid and does not consider nuanced situations or intermediate states, leading to day-to-day friction around implementing traditional policy enforcement. Docker Scout now goes deeper, not only by indicating more subtle deviations from policy, but also by actively suggesting upgrade and remediation paths to bring you back within policy guidelines, reducing mean time to remediation (MTTR) in the process.

Additionally, Docker Scout policy evaluation boosts productivity: you no longer have to wait for CI/CD to finish to know whether you need to upgrade dependencies. Evaluating policies with Docker Scout prevents last-minute security blockers in CI/CD that can impact release dates, which is one of the primary reasons teams tend not to adopt policy evaluation solutions.

Policy can take many forms: specifying which repositories or sources it is safe to pull components or libraries from, requiring secure and verified build processes, and continuously monitoring for issues after deployment. Docker Scout is now equipped to expand the types of policies that can be implemented within these wider sets of definitions (Figures 2 and 3).

Figure 2: Docker Scout dashboard showing policy violations with fixes.

Figure 3: Overview of the security status of images across all Docker-enabled repos over time.

Policy for maximal coverage

Our vision for policy is to allow developers to define what’s most developer-friendly and secure for their needs and their environments. While there’s a common thread across some industries, every enterprise has nuances. Whether setting security policies or aligning to software development life cycle tooling best practices, Docker Scout’s goal is to continuously evaluate container images, assisting teams to incrementally improve security posture within their cloud infrastructure.

These new capabilities include built-in, out-of-the-box policies to maintain up-to-date base images, track associated vulnerabilities, and monitor for relevant in-scope licenses. Evaluation results show users the status of each policy for each image. Users get an aggregated view of their policy results so they know what to focus on, as well as the ability to drill into those results in a repo list view to understand what has changed within a given set of policies (Figure 4).

Figure 4: View from the CLI.

What’s next?

Docker Scout is designed to be with you in every step of improving developer workflows — from helping developers understand which actions to take to improve code reliability and bring it back in line with policy, to ensuring optimal code performance.

With the next phase on the horizon, we’re gearing up to deliver more value to our customers and always welcome feedback through our Docker Scout Design Partner Program. For now, the Docker Scout team is excited to get the latest solutions into our customers’ hands, ensuring safety, efficiency, and quality in a rapidly evolving ecosystem within the software supply chain.

Developers in our Docker-Sponsored Open Source (DSOS) Program will soon be able to access our Docker Scout Team plan, which includes unlimited local image analysis, as well as up to 100 repos for remote images, SDLC integrations, security posture reporting, and policy evaluation. Once Docker Scout is enabled for the DSOS program in late 2023, DSOS members can enable it on up to 100 repositories within their DSOS-approved namespace.

To learn more about Docker Scout, visit the Docker Scout product page.

Learn more

Try Docker Scout.

Looking to get up and running? Use our Quickstart guide.

Vote on what’s next! Check out the Docker Scout public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Quelle: https://blog.docker.com/feed/

Announcing Udemy + Docker Partnership

Docker and Udemy announced a new partnership at DockerCon to give developers a clear, defined, accessible path for learning how to use Docker, best practices, advanced concepts, and everything in between. As the #1 rated online course platform (as ranked by Stack Overflow), Udemy will be the first to house Docker-accredited content and customized learning paths to provide developers with the latest training materials on how to best use Docker tools. Launching in early 2024, the platform will provide learning paths that award Docker badges to those who successfully complete the path, enabling them to showcase their proficiency with the material.

Udemy instructors are vetted, experienced, and have in-depth knowledge of Docker and the Docker suite of development tools, services, trusted content, and automations. As a fully online education portal, Udemy offers classes that are accessible, inclusive, and attainable for a broad range of developers. This Docker + Udemy partnership will establish a key destination for developers and hobbyists who want to further their Docker education. Together, Docker and Udemy will enhance their communities with shared standards, education paths, and credibility.

This partnership will bring Docker educational content together into easy-to-navigate course paths and award badges for completion of verified courses. The platform aims to gather verified, best-in-class materials in one place, creating a streamlined source for developers to gain knowledge, earn badges, and stay current on the latest content.

These courses and their curricula will be vetted by Udemy instructional designers, Docker experts from the Docker community, and Docker employees. In the true spirit of open source, the curricula by which we vet and certify courses will be made publicly available for all content creators to use.  

In the coming months, we will invite members of the Docker community who are experienced instructors and content creators to apply to be a Udemy instructor and create Docker courses on Udemy, or bring their existing content into our learning paths. We are thrilled to be able to bring our community into this endeavor and to amplify visibility for the community.

Stay tuned for more details on this partnership soon. To get started today and gain access to Udemy’s collection of more than 350 Docker courses, developers can visit: https://www.udemy.com/topic/docker/.

Learn more

Check out Udemy’s collection of Docker courses.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Quelle: https://blog.docker.com/feed/