Improved Volume Management, Docker Dev Environments and more in Desktop 3.5

Docker Desktop 3.5 is here and we can’t wait for you to try it!

We’ve introduced some exciting new features including improvements to the Volume Management interface, a tech preview of Docker Dev Environments, and enhancements to Compose V2.

Easily Manage Files in your Volumes

Volumes can quickly take up local disk space, and without an easy way to see which ones are in use or what they contain, it can be hard to free up space. This is why in the release of Docker Desktop 3.5 we’ve made it even easier for Pro and Team users to explore the directories and files inside of a volume. We’ve added the modified date, kind, and size of files so that you can quickly identify what is taking up all that space and decide if you can part with it.

Once you’ve identified a file or directory inside a volume that you no longer need, you can remove it straight from the Dashboard to free up space. We’ve also introduced a way to download files locally using “Save As” so that you can easily back up files before removing them.
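Outside the Dashboard, a common command-line pattern for backing up an entire volume is to archive its contents through a throwaway container. This is a general Docker technique rather than part of the new Dashboard feature, and the volume name here is a placeholder:

```shell
# Mount the volume read-only into a temporary Alpine container and
# write a compressed archive of its contents to the current directory.
docker run --rm \
  -v myvolume:/volume-data:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/myvolume-backup.tar.gz -C /volume-data .
```

Restoring is the same pattern in reverse: mount an empty volume read-write and extract the archive into it.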

We’re continuing to add more to volume management like the ability to share your volumes with your colleagues. Have ideas on how we might make managing volumes easier? We’d love you to help us prioritize by adding your use cases on our public roadmap. 

Docker Dev Environments

In 3.5 we released a technical preview of Docker Dev Environments. Check out our blog to learn more about why we built this and how it works.

Docker Compose V2 Beta Rollout Continues

We’re continuing to roll out the beta of Docker Compose V2, which allows you to seamlessly run the compose command in the Docker CLI. We are working towards launching Compose v2 as a drop-in replacement for docker-compose, so that no changes are required in your code to use this new functionality. We have also introduced the following new features:

Added support for container links and external links to facilitate communication between containers.

Introduced the docker compose logs --since and --until options, enabling you to search logs by date.

`docker compose config --profiles` now lists all defined profiles so you can see which additional services are defined in a single docker-compose.yml file. Profiles allow you to adjust the Compose application model for various usages and environments by selectively enabling services.
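For example, with a hypothetical service named web, the new log filters and profile listing can be exercised like this:

```shell
# Show logs for the "web" service within a time window
# (--since/--until accept timestamps or relative durations like 42m)
docker compose logs --since 2021-06-01T00:00:00Z --until 2021-06-02T00:00:00Z web

# List all profiles defined in the Compose file
docker compose config --profiles
```
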

You can test this new functionality by running the docker compose command, dropping the `-` in docker-compose. We are continuing to roll this out gradually; 31% of compose users are already using this beta version. You’ll be notified if you are using the new docker compose. You can opt in to run Compose v2 with docker-compose by running the docker-compose enable-v2 command or by updating your Docker Desktop’s Experimental Features settings.

If you run into any issues using Compose V2, simply run the docker-compose disable-v2 command, or turn it off using Docker Desktop’s Experimental Features. Let us know your feedback on the new ‘compose’ command by creating an issue in the Compose-CLI GitHub repository.

Warning for Images incompatible with Apple Silicon Machines

Docker Dashboard will now warn you if an image you are using does not match your machine’s architecture on Apple Silicon. If you are using Docker Desktop on Apple Silicon and run an amd64 image under qemu emulation, it may perform poorly or even crash. While we are promoting the use of multi-architecture images, we want to make sure you are aware when an image you are using is running under emulation because it does not match your machine’s native architecture. If this is the case, a warning will appear on the Containers / Apps page.
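If you want to check from the command line whether an image matches your machine, the standard Docker CLI can help (the image name here is a placeholder):

```shell
# Print the OS and architecture a local image was built for
docker image inspect --format '{{.Os}}/{{.Architecture}}' myimage

# Explicitly request a native arm64 variant when pulling a multi-arch image
docker pull --platform linux/arm64 myimage
```
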

Less Disruptive Requests for Feedback

And finally, we’ve heard your feedback on how we ask you for your feedback. We’ve changed the way that the feedback form works so that it won’t pop up while you’re in the middle of working. When it’s time, the feedback form will only show up if you click on the whale menu. We do appreciate the time you spend to rate Docker Desktop. Your input helps us make changes like this! 

See the full release notes for Docker Desktop for Mac and Docker Desktop for Windows for the complete set of changes in Docker Desktop 3.5. 

We can’t wait for you to try Volume Management and the preview of Dev Environments! To get started simply download or update to Docker Desktop 3.5. To start collaborating with your teammates on your dev environments and digging into the contents of your volumes, upgrade to a Pro or Team subscription today!
The post Improved Volume Management, Docker Dev Environments and more in Desktop 3.5 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

From Compose to Kubernetes with Okteto

Today we’re featuring a blog from Pablo Chico de Guzmán at Okteto, who writes about how the developers’ love of Docker Compose inspired Okteto to create Okteto Stacks, a fully compatible Kubernetes backend for Docker Compose.

It has been almost 7 years since the Docker Compose v1.0.0 release went live. Since that time, Docker Compose has become the dominant tool for local development environments. You run one command and your local development environment is up and running. And it works the same way on any OS, and for any application.

At the same time, Kubernetes has grown to become the dominant platform to deploy containers in production. Kubernetes lets you run containers on multiple hosts for fault tolerance, monitors the health of your applications, and optimizes your infrastructure resources. There is a rich ecosystem around it, and all major providers have native support for Kubernetes: GKE, AKS, EKS, Openshift…

We’ve interacted with thousands of developers as we build Okteto (a cloud-native platform for developers). And we kept hearing the same complaint: there’s a very steep learning curve when you go from Docker Compose to Kubernetes. At least, that was the case until today. We are happy to announce that you can now run your Docker Compose files in Kubernetes with Okteto!

Why developers need Docker Compose in Kubernetes

Developers love Docker Compose, and they love it for good reasons. A Docker Compose file for five microservices might be around 30 lines of yaml, but the same application in Kubernetes would be 500+ lines of yaml and about 10-15 different files. Also, the Docker Compose CLI rebuilds and redeploys containers when needed. In Kubernetes, you need additional tools to build your images, tag them, push them to a Docker Registry, update your Kubernetes manifests, and redeploy them. It’s too much friction for something that’s wholly abstracted away by Docker Compose.
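To make the size difference concrete, here is a minimal sketch of a Compose file for a two-service app (the service names, ports and images are illustrative); the equivalent Kubernetes setup would typically need a separate Deployment and Service manifest per component:

```shell
# Write a minimal two-service Compose file to the current directory
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8080:8080"
  redis:
    image: redis:6-alpine
EOF
```

A single `docker compose up` then builds, creates and starts everything this file describes.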

But there are some use cases where running your Docker Compose files locally presents some challenges. For example, you might need to run dozens of microservices that exhausts your local CPU/Memory resources, you might need access to GPUs to develop a ML application, or you might want to integrate with a service deployed in a remote Kubernetes cluster. For these scenarios, running Docker Compose in Kubernetes is the perfect solution. This way, developers get access to on demand CPU/Memory/GPU resources, direct access to other services running in the cluster, and more realistic end-to-end integration with the cluster configuration (ingress controllers, SSL termination, monitoring tools, secret manager tools…), while still using the application definition format they know and love.

Docker Compose Specification to the rescue

Luckily, the Docker Compose Specification was open-sourced in 2020. This allowed us to implement Okteto Stacks, a fully compatible Kubernetes backend for Docker Compose. Okteto Stacks are unique with respect to other Kubernetes backend implementations of the Docker Compose Specification because they provide:

In-cluster builds for better performance and caching behavior.

Ingress Controller integration and SSL termination for public ports.

Bidirectional synchronization between your local filesystem and your containers in Kubernetes.

Okteto’s bidirectional synchronization is pretty handy: it reloads your application on the cluster while you edit your code locally. It’s equivalent to mounting your code inside a container using Docker Compose host volumes, but for containers running in a remote cluster.

How to get started

Okteto Stacks are compatible with any Kubernetes cluster (you will need to install the Okteto CLI and a cluster-side Kubernetes application). But the easiest way to get started with Okteto Stacks is Okteto Cloud, the SaaS version of our cloud-native development platform.

To show the possibilities of Okteto Stacks, let’s deploy the famous Voting App. My team @Tutum developed the Voting App for the DockerCon keynote (EU 2015) to showcase the power of Tutum (later acquired by Docker that year). The demo gods were appeased with an offering of grapes that day. And I hope they are appeased again as you follow this tutorial:

First, install the Okteto CLI if you haven’t done it yet.

Next, configure access to your Okteto Cloud namespace. To do that, execute the following command:

$ okteto namespace

Authentication required. Do you want to log into Okteto? [y/n]: y
What is the URL of your Okteto instance? [https://cloud.okteto.com]:
Authentication will continue in your default browser
✓ Logged in as cindy
✓ Updated context 'cloud_okteto_com' in '/Users/cindy/.kube/config'

Get a local version of the Voting App by executing the following commands:

$ git clone https://github.com/okteto/compose-getting-started
$ cd compose-getting-started

Execute the following command to deploy the Voting App:

$ okteto stack deploy --wait

✓ Created volume ‘redis’
✓ Deployed service ‘vote’
✓ Deployed service ‘redis’
✓ Stack ‘compose-getting-started’ successfully deployed

The deploy command will create the necessary deployments, services, persistent volumes, and ingress rules needed to run the Voting App. Go to the Okteto Cloud dashboard and you will get the URL of the application.

Now that the Voting App is running, let’s make a small change to show you the full development workflow.

Instead of our pet, let’s ask everyone to vote on our favorite lunch item. Open the “vote/app.py” file in your IDE and modify lines 16-17. Save your changes.

def getOptions():
    option_a = "Tacos"
    option_b = "Burritos"

Once you’re happy with your changes, execute the following command:

$ okteto up

✓ Images successfully pulled
✓ Files synchronized

Namespace: cindy
Name: vote

* Serving Flask app ‘app’ (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://10.8.4.205:8080/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 139-182-328

Check the URL of your application again. Your code changes were instantly applied. No commit, build, or push required. And from this moment, any changes done from your IDE will be immediately applied to your application!

That’s all!

Go to the Okteto Stacks docs to learn more about our Docker Compose Kubernetes backend. We’re just starting, so we’d love to hear your thoughts on this.

Happy coding!
The post From Compose to Kubernetes with Okteto appeared first on Docker Blog.

Secure Software Supply Chain Best Practices

Last month, the Cloud Native Computing Foundation (CNCF) Security Technical Advisory Group published a detailed document about Software Supply Chain Best Practices. You can get the full document from their GitHub repo. This was the result of months of work from a large team, with special thanks to Jonathan Meadows and Emily Fox. As one of the CNCF reviewers I had the pleasure of reading several iterations and seeing it take shape and improve over time.

Supply chain security has gone from a niche concern to something that makes headlines, in particular after the SolarWinds “Sunburst” attack last year. Last week it was an important part of United States President Joe Biden’s Executive Order on Cybersecurity. So what is it? Every time you use software that you didn’t write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you thought it is, and that it is trustworthy, not hostile. Usually both these things are true, but when they go wrong, like when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious. As people have hardened their production environments, attacking software as it is written, assembled, built or tested, before production, has become an easier route.

The CNCF Security paper started after discussions I had with Jonathan about what work needs to be done to make secure supply chains easier and more widely adopted. The paper does a really good job in explaining the four key principles:

First, every step in a supply chain should be “trustworthy” as a result of a combination of cryptographic attestation and verification.

Second, automation is critical to supply chain security. Automating as much of the software supply chain as possible can significantly reduce the possibility of human error and configuration drift.

Third, the build environments used in a supply chain should be clearly defined, with limited scope.

Fourth, all entities operating in the supply chain environment must be required to mutually authenticate using hardened authentication mechanisms with regular key rotation.

In simpler language, this means that you need to be able to securely trace all the code you are using, which exact versions you are using, where they came from, and in an automated way so that there are no errors. Your build environments should be minimal, secure and well defined, i.e. containerised. And you should be making sure everything is authenticated securely.
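One concrete practice that supports this kind of traceability is pinning images by their immutable content digest and enabling signature verification on pulls. This is a hedged sketch of standard Docker techniques, not a recommendation from the CNCF paper itself; the image tag is illustrative:

```shell
# Resolve the immutable content digest of an image you depend on
docker pull alpine:3.13
docker inspect --format '{{index .RepoDigests 0}}' alpine:3.13

# In a Dockerfile, pin the base image by digest instead of a mutable tag:
#   FROM alpine@sha256:<digest-from-the-step-above>

# Require signed images when pulling (Docker Content Trust)
export DOCKER_CONTENT_TRUST=1
docker pull alpine:3.13
```
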

The majority of people do not meet all these criteria, making exact traceability difficult. The report has strong recommendations for environments that are more sensitive, such as those dealing with payments and other sensitive areas. Over time these requirements will become much more widely used because the risks are serious for everyone.

At Docker we believe in the importance of a secure software supply chain and we are going to bring you simple tools that improve your security. We already set the standard with Docker Official Images. They are the most widely trusted images that developers and development teams use as a secure basis for their application builds. Additionally, we have CVE scanning in conjunction with Snyk, which helps identify the many risks in the software supply chain. We are currently working with the CNCF, Amazon and Microsoft on the Notary v2 project to update container signing, which we will ship in a few months. This is a revamp of Notary v1 and Docker Content Trust that makes signatures portable between registries, improves usability, and has broad industry consensus. We have more plans to improve security for developers and would love your feedback and ideas in our roadmap repository.
The post Secure Software Supply Chain Best Practices appeared first on Docker Blog.

Tech Preview: Docker Dev Environments

A couple of weeks ago at DockerCon we showed off a new feature that we are building: Docker Dev Environments. Today we are excited to announce the release of the Technical Preview of Dev Environments as part of Docker Desktop 3.5.

At Docker we have been looking at how teams collaborate on projects using Git. We know that Git is a powerful tool for version control of source code, but it doesn’t solve many of the challenges that arise when developers try to collaborate. Developers still suffer from ‘it works on my machine’ when they try to work together on changes, because dependencies can differ. Reviewing a change may also require switching Git branches, and developers often don’t bother, simply reading the code in the browser rather than running it. This means they lack the context and the tools needed to really validate that the code is good, and this collaboration all happens right at the end of the creation process.

To address this, we are excited to release our preview of Dev Environments. With Dev Environments developers can now easily set up repeatable and reproducible development environments by keeping the environment details versioned in their SCM along with their code. Once a developer is working in a Dev Environment, they can share their work-in-progress code and dependencies in one click via Docker Hub. They can then switch between their own environments or their teammates’ environments, moving between branches to look at work-in-progress changes without moving off their current Git branch. This makes reviewing PRs as simple as opening a new environment. Dev Environments use tools built into code editors that allow Docker to access code mounted into a container rather than on the developer’s local host. This isolates the tools, files and running services on the developer’s machine, allowing multiple versions of them to exist side by side, and also improves file system performance! And we have built this experience on top of Compose rather than adding another manifest for developers to worry about or look after.

With this preview we provide you with the ability to get started with a Dev Environment locally, either by using our one-click creation process or by providing a Compose file as part of a .docker folder. This will then allow you to run a Dev Environment on your local machine and give you access to your Git credentials inside it. With Compose you will be able to use the other services related to your Dev Environment, allowing you to develop in the full context of the application. We have also built the first part of the sharing experience for team members, allowing you to share a Dev Environment with your team so they can see your code changes and dependencies together in just one click.

There are some areas of the first release that we are going to be improving in the coming weeks as we build the experience out to make it even easier to use.

When it comes to working with your team, we will be improving this to make it easier to send someone your work-in-progress changes. Rather than having to create a unique name for your changes each time, we will let you instead share this with one click, keeping everything synced automatically via Docker Hub for your team. This means your team can see your shared Dev Environment in their UI as soon as you share it. They will also be able to swap out the existing services in their Compose stacks for the one you have shared, moving seamlessly between them.

We know that developers love Compose and that we can leverage Compose features to make it easier to set up your Dev Environments (things like profiles, setting a GOPATH, defining debug ports, supporting mounts, etc.). We will be extending what we have in Compose over the coming weeks. If there are particular features you think we should support, please let us know!

We will also be looking at other areas of the experience like support for other IDEs, new creation flows and better ways to set up new Dev Environments. 

Lastly, we will be looking at all the feedback you as a community give us on other areas we need to improve! If you have feedback on these items, or there are other areas you think we should be focusing on ready for our GA release, please let us know as part of our feedback repo.

We are really excited about the preview of Dev Environments! If you want to check them out simply download or upgrade Docker Desktop 3.5 and check out the new preview tab. To get started sharing Dev Environments with your team and moving your feedback process back into development rather than at the time of review, upgrade to one of Docker’s team plans today.
The post Tech Preview: Docker Dev Environments appeared first on Docker Blog.

DockerCon LIVE 2021 Recapped: Top 5 Sessions

You came, you participated, you learned. You helped us pull off another DockerCon — and, my fellow developers, it was good. How good? About 80,000 folks registered for the May 27 virtual event — on a par with last year.

We threw a lot at you, from demos and product announcements to company updates and more — all of it focused on modern application delivery in a cloud-native world. But some clear favorites emerged. Here’s a rundown of the top 5 sessions, which zeroed in on some of the everyday issues and challenges facing our developer community.

#1. How Much Kubernetes Do I Need to Learn?

Kubernetes isn’t simple and the learning curve is steep, but the upside to mastering this powerful and flexible system is huge. So it’s natural for developers to ask how much Kubernetes is “just enough” to get productive. Clearly, many of you shared that question, making this the Número Uno session of DockerCon LIVE 2021. Docker Captain Elton Stoneman, a consultant and trainer at Sixeyed Consulting, walks you through the Kubernetes platform, clarifying core concepts around services, deployments, replica sets, pods, config maps and secrets, and sharing demos to show how they all work together. He also shows how simple and complex apps are defined as Kubernetes manifests, and clarifies the line between dev and ops.

#2. A Pragmatic Tour of Docker Filesystems

Mutagen founder Jacob Howard takes on the heroic task of dispelling the mists of confusion that developers often encounter when starting out with containerized development. Sure, container filesystems can seem like an impenetrable mess, but Jacob carefully makes the case for why the relationship between file systems and containers actually makes a lot of sense, even to non-developers. He also provides a pragmatic guide to container filesystem concepts, options and performance that can serve as a rule of thumb for selecting the right solution(s) for your use case.

#3. Top Dockerfile Security Best Practices

In this webinar, Alvaro Iradier Muro, an integrations engineer at Sysdig, goes deep on Dockerfile best practices for image builds to help you prevent security issues and optimize containerized applications. He shows you straightforward ways to avoid unnecessary privileges, reduce the attack surface with multistage builds, prevent confidential data leaks, detect bad practices and more, including how to go beyond image building to harden container security at runtime. It all comes down to building well-crafted Dockerfiles, and Alvaro shows how to do so by removing known risks in advance, so you can reduce security management and operational overhead.

#4. Databases on Containers

Only in the last few years has running high-performance stateful applications inside containers become a reality — a shift made possible by the rise of Kubernetes and performance improvements in Docker. Denis Souza Rosa, a developer advocate at Couchbase, answers many of the common questions that arise in connection with this new normal: Why should I run these applications inside containers in the first place? What are the challenges? Is it production ready? In this demo, Denis deploys a database and operator, fails some nodes, and shows how to scale up and down with almost no manual intervention using state-of-the-art technology.

#5. A Day in the Life of a Developer: Moving Code from Development to Production Without Losing Control

Learn how to take control of your development process in ways you never thought possible with Nick Chase, director of technical marketing and developer relations at Mirantis. Nick zeroes in on how only a true software development pipeline can prevent serious problems such as security holes, configuration errors, and business issues such as executive approval for promotion of changes. Along the way, he covers what a complete software supply chain looks like, common “weak links” and how to strengthen them, how to integrate your workflow as a developer, and what to do when business concerns affect the pipeline.

If you missed these popular sessions last month, now’s your chance to catch them. Or maybe you just want to see them again. Either way, check out the recordings. They’re informative, practical and free!

We have a complete container solution for you – no matter who you are and where you are on your containerization journey. Get started with Docker today here.
The post DockerCon LIVE 2021 Recapped: Top 5 Sessions appeared first on Docker Blog.

SLOs should be easy, say hi to Sloth

itnext.io – As in other areas, in the technology world, every year there are some buzz words that are being said more than others. Some examples: I’m sure you have been hearing service level objectives lately…
Quelle: news.kubernauts.io

Kustomize explained; an MLOps Use Case

towardsdatascience.com – Kustomize is a tool to customize YAML files like Kubernetes (K8s) manifests, template free. Meanwhile, it became a built-in kubectl operation to apply K8s object definitions from YAML files stored in…

Litestream

litestream.io – Litestream is an open-source, real-time streaming replication tool that lets you safely run SQLite applications on a single node.

Volume Management, Compose v2, Skipping Updates, and more in Docker Desktop 3.4

We are excited to announce the release of Docker Desktop 3.4.

This release includes several improvements to Docker Desktop, including our new Volume Management interface, the Compose v2 roll-out, and changes to how you skip an update to Docker Desktop, based on your feedback.

Volume Management

Have you wanted a way to more easily manage and explore your volumes?

In this release we’re introducing a new capability in Docker Desktop that helps you create and delete volumes from Desktop’s Dashboard, as well as see which ones are in use.

For developers with Pro and Team Docker subscriptions, we’ll be bringing a richer experience to managing your volumes. 

You’ll be able to explore the contents of the volumes so that you can more easily get an understanding of what’s taking up space within the volume.

You’ll also be able to easily see which specific containers are using any particular volume.

We’re also looking to add additional capabilities in the future, such as being able to easily download files from the volume, read-only view for text files, and more. We’d love to hear more about what you’d like to see us prioritize and focus on in improving the way you can manage your volumes.  Please chime in with your use cases on our public roadmap if this is an area you’d like us to continue focusing on improving.
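For reference, the Docker CLI has long exposed the basic volume operations that the Dashboard now surfaces; the volume name below is a placeholder:

```shell
# List all volumes, then only the dangling (unused) ones
docker volume ls
docker volume ls --filter dangling=true

# Inspect a volume's driver and host mountpoint
docker volume inspect myvolume

# Remove a single volume, or all unused volumes at once
docker volume rm myvolume
docker volume prune
```
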

Compose V2 Roll out begins

We are very excited to launch the beta of Compose V2, which supports the compose command as part of the Docker CLI, and which we have affectionately promoted into the ‘first-class citizen in the Docker CLI’. Compose V2 seamlessly integrates the compose functions into the Docker CLI, while still supporting most of the previous docker-compose features and flags. Compose V2 includes two new options:

docker compose ls, to list all your compose apps

docker compose cp, to copy files/folders between your service container and your local filesystem
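Assuming a running Compose application with a service named web (the service name and file paths here are placeholders), the two options look like this:

```shell
# List running Compose applications
docker compose ls

# Copy a file out of the "web" service container to the local filesystem
docker compose cp web:/app/logs/error.log ./error.log

# ...and copy a local file into the service container
docker compose cp ./config.yml web:/app/config.yml
```
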

The simplest way to test this new functionality is to run the docker compose command, instead of docker-compose, and see what happens.  

10% of compose users are already using docker compose, and we are hearing all sorts of good things.

But we want to make it even simpler, and launch Compose v2 as a drop-in replacement, so that you do not need to change any of your scripts, to take advantage of this new functionality.  

Beginning with Docker Desktop 3.4, you will be able to explicitly opt in to run Compose v2 with docker-compose, by running the docker-compose enable-v2 command. Or you can opt into Compose v2 by updating your Docker Desktop’s Experimental Features settings.

With the release of 3.4, we’ll also start to change the docker-compose command to run as Compose V2, without the explicit opt-in.  We’ll roll this out gradually, to a small percentage of users at a time.  If we upgrade your docker-compose, we will notify you that you are running the compose upgrade.  

If you do run into any issues using Compose V2, simply run the docker-compose disable-v2 command, or use Docker Desktop’s Experimental Features, to revert to the initial docker-compose functionality. And please help us resolve your problems by submitting an issue here.

Skipping Docker Desktop Updates

We’ve heard your feedback regarding how the “Skip this update” behavior introduced in Docker Desktop 3.3 was confusing and missed the mark.  

It was trying to provide additional flexibility for Pro/Team users who needed to stay on an older version of Docker Desktop by allowing them to dismiss additional reminders about a particular update. 

There were many folks who took this to mean that you needed to be a Pro/Team subscriber to not have to update their version of Docker Desktop, which was not the case.

Based on your feedback, in Docker Desktop 3.4, we will be removing the requirement to be a Pro/Team subscriber to skip reminder notifications about individual Docker Desktop releases. 

To summarize what the experience will be like once you’ve upgraded to Docker Desktop 3.4:

When a new update becomes available, the whale icon will change to indicate that there’s an update available and you’ll be able to choose when to download and install the update.

Two weeks after an update first becomes available, a reminder notification, like below, will appear.

If you click on “Skip this update”, you won’t get any additional reminders for this particular update.

If you click on “Snooze” or dismiss the dialog, you’ll get a reminder to update on the following day.

For developers in larger organizations, who don’t have administrative access to install updates to Docker Desktop, or are only allowed to upgrade to IT-approved versions, there continues to be an option in the Settings menu to opt out of notifications altogether for Docker Desktop updates if your Docker ID is part of a Team subscription.

It’s your positive feedback that helps us continue to improve the Docker experience. We truly appreciate it. Please keep that feedback coming by raising tickets on our Public Roadmap.

See the release notes for Docker Desktop for Mac and Docker Desktop for Windows for the complete set of changes in Docker Desktop 3.4.

Interested in learning more about what else is included with a Pro or Team subscription? Check out our pricing page for a detailed breakdown.
The post Volume Management, Compose v2, Skipping Updates, and more in Docker Desktop 3.4 appeared first on Docker Blog.

Bringing “docker scan” to Linux

At the end of last year we launched vulnerability scanning options as part of the Docker platform. We worked together with our partner Snyk to include security testing options along multiple points of your inner loop.  We incorporated scanning options into the Hub, so that you can configure your repositories to automatically scan all the pushed images. We also added a scanning command to the Docker CLI on Docker Desktop for Mac and Windows, so that you can run vulnerability scans for images on your local machine. The earlier in your development that you find these vulnerabilities, the easier and cheaper it is to fix them.  Vulnerability scan results also provide remediation guidance on things that you can do to remove the reported vulnerabilities. Some of the examples of remediation include recommendations for alternative base images with lower vulnerability counts, or package upgrades that have already resolved the specified vulnerabilities.  

We are now making another update in our security journey by bringing “docker scan” to the Docker CLI on Linux. The experience of scanning on Linux is identical to what we have already launched for the Desktop CLI, with scanning support for linux/amd64 (x86-64) Docker images. The CLI command is the same docker scan, supporting all of the same flags. These flags include the options to add Dockerfiles with images submitted for scanning and to specify the minimum severity level for the reported vulnerabilities.
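For example (the image name is a placeholder):

```shell
# Scan a local image for known vulnerabilities
docker scan myimage

# Include the Dockerfile for more precise base-image remediation advice
docker scan --file Dockerfile myimage

# Only report vulnerabilities of the given severity or higher (low|medium|high)
docker scan --severity high myimage
```
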

Information about the docker scan command, with all the details about the supported flags, is provided in the Vulnerability Scanning for Docker Local Images section in the Docker documentation. Vulnerability reports are also the same, listing for each vulnerability, information about severity levels, the image layers where vulnerabilities are manifested, the exploit maturity and remediation suggestions.  

The major difference with scanning on Linux is that instead of upgrading your Docker Desktop, you will need to install or upgrade your Docker Engine. Directions for installing the Engine are provided in the Install Docker Engine section of Docker documentation, with instructions for several different distros, including CentOS, Debian, Fedora and Ubuntu. And because this is Linux, we have open sourced the scanning CLI plugin… Go ahead, give it a try, or take a look at this page for other Docker open source projects that may help you to build, share and run your applications.

If you want to learn more about application vulnerabilities, and you missed DockerCon 21, you can go here for a recording of the DockerCon LIVE panel on Security, or watch a great session called ‘My Container Image Has 500 Vulnerabilities. Now What?’. Or, look for any other DockerCon recording… There were all sorts of great sessions on things that you can do to build, share and run your applications. Or, for more information about the Docker partnership with Snyk, and plans for future partnership collaborations, please check out this blog post by Snyk’s Sarah Conway.

The post Bringing “docker scan” to Linux appeared first on Docker Blog.