Deploying Web Applications Quicker and Easier with Caddy 2

Follow along as we dissect an awesome method for accelerating your web application server deployment. Huge thanks to Chris Maki for his help in putting this guide together!
 
Today’s web applications — and especially enterprise variants — have numerous moving parts that need continual upkeep. These tasks can easily overwhelm developers who already have enough on their plates. You might have to grapple with:

TLS certificate management and termination
Web server setup and maintenance
Application firewalls and proxies
API gateway integration
Caching
Logging

 
Consequently, the task of building, deploying, and maintaining a web application has become that much more complicated. Developers must worry about performance, access delegation, and security.
Lengthy development times reflect this. It takes 4.5 months on average to build a complete web app — while creating a viable backend can take two to three months. There’s massive room for improvement. Web developers need simpler solutions that dramatically reduce deployment time.
First, we’ll briefly highlight a common deployment pipeline. Second, we’ll explore an easier way to get things up and running with minimal maintenance. Let’s jump in.
Cons of Server Technologies Like NGINX
There are many ways to deploy a web application backend. NGINX and Apache collectively dominate upwards of 60% of the web server market. While these options are tried and true, they're not 100% ready to use out of the box.
Let's consider one feature that's becoming indispensable across web applications: HTTPS. If you're accustomed to NGINX web servers, you might know that these servers are HTTP-enabled right away. However, if SSL is required for you and your users, you'll have to manually configure your servers to support it. Here's what that process looks like in NGINX's documentation:

server {
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.crt;
    ssl_certificate_key www.example.com.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
}

 
Additionally, NGINX shares that “the ssl parameter must be enabled on listening sockets in the server block, and the locations of the server certificate and private key files should be specified.”
You’re tasked with these core considerations and determining best practices through your configurations. Additionally, you must properly optimize your web server to handle these HTTPS connections — without introducing new bottlenecks based on user activity, resource consumption, and timeouts. Your configuration files slowly grow with each customization:

worker_processes auto;

http {
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen              443 ssl;
        server_name         www.example.com;
        keepalive_timeout   70;

        ssl_certificate     www.example.com.crt;
        ssl_certificate_key www.example.com.key;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;
    }
}

 
It’s worth remembering this while creating your web server. Precise, fine-grained control over server behavior is highly useful but has some drawbacks. Configurations can be time-consuming. They may also be susceptible to conflicts introduced elsewhere in your codebase. On the certificate side, you’ll often have to create complex SSL certificate chains to ensure maximum cross-browser compatibility:

$ openssl s_client -connect www.godaddy.com:443

Certificate chain
0 s:/C=US/ST=Arizona/L=Scottsdale/1.3.6.1.4.1.311.60.2.1.3=US
/1.3.6.1.4.1.311.60.2.1.2=AZ/O=GoDaddy.com, Inc
/OU=MIS Department/CN=www.GoDaddy.com
/serialNumber=0796928-7/2.5.4.15=V1.0, Clause 5.(b)
i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
/OU=http://certificates.godaddy.com/repository
/CN=Go Daddy Secure Certification Authority
/serialNumber=07969287
1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
/OU=http://certificates.godaddy.com/repository
/CN=Go Daddy Secure Certification Authority
/serialNumber=07969287
i:/C=US/O=The Go Daddy Group, Inc.
/OU=Go Daddy Class 2 Certification Authority
2 s:/C=US/O=The Go Daddy Group, Inc.
/OU=Go Daddy Class 2 Certification Authority
i:/L=ValiCert Validation Network/O=ValiCert, Inc.
/OU=ValiCert Class 2 Policy Validation Authority
/CN=http://www.valicert.com//emailAddress=info@valicert.com

 
Spinning up your server therefore takes multiple steps. You must also take these tasks and scale them to include proxying, caching, logging, and API gateway setup. The case is similar for module-based web servers like Apache.
We don’t say this to knock either web server (in fact, we maintain official images for both Apache and NGINX). However — as a backend developer — you’ll want to weigh those development-and-deployment efforts vs. their collective benefits. That’s where a plug-and-play solution comes in handy. This is perfect for those without deep backend experience or those strapped for time. 
Pros of Using Caddy 2
Written in Go, Caddy 2 (which we’ll simply call “Caddy” throughout this guide) acts as an enterprise-grade web server with automatic HTTPS. Other servers don’t share this same feature out of the box.
Additionally, Caddy 2 offers some pretty cool benefits:

Automatic TLS certificate renewals
OCSP stapling, to speed up SSL handshakes through request consolidation
Static file serving for scripts, CSS, and images that enrich web applications
Reverse proxying for scalability, load balancing, health checking, and circuit breaking
Kubernetes (K8s) ingress, which interfaces with K8s’ Go client and SharedInformers
Simplified logging, caching, API gateway, and firewall support
Configurable, shared WebSocket and proxy support with HTTP/2

 
This is not an exhaustive list, by any means. However, these baked-in features alone can cut deployment time significantly. We’ll explore how Caddy 2 works, and how it makes server setup more enjoyable. Let’s hop in.
Here’s How Caddy Works
One Caddy advantage is its approachability for developers of all skill levels. It also plays nicely with external configurations.
Have an existing web server configuration that you want to migrate over to Caddy? No problem. Native adapters convert NGINX, TOML, Caddyfiles, and other formats into Caddy’s JSON format. You won’t have to reinvent the wheel or start fresh to deploy Caddy in a containerized fashion.
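As a quick sketch, adapting a config is a single CLI command. The caddy adapt subcommand handles Caddyfiles out of the box, while NGINX configs require building Caddy with the nginx-adapter module shown later in this guide:

$ caddy adapt --config ./Caddyfile --pretty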
Here’s how your Caddyfile structure might appear:

example.com

# Templates give static sites some dynamic features
templates

# Compress responses according to Accept-Encoding headers
encode gzip zstd

# Make HTML file extension optional
try_files {path}.html {path}

# Send API requests to backend
reverse_proxy /api/* localhost:9005

# Serve everything else from the file system
file_server

 
Note that this Caddyfile is even considered complex by Caddy’s own standards (since you’re defining extra functionality). What if you want to run WordPress with fully-managed HTTPS? The following Caddyfile enables this:

example.com

root * /var/www/wordpress
php_fastcgi unix//run/php/php-version-fpm.sock
file_server

 
That’s it! Basic files can produce some impressive results. Each passes essential instructions to your server and tells it how to run. Caddy’s reference documentation is also extremely helpful should you want to explore further.
Quick Commands
Before you tackle your image, it’s worth knowing that Caddy has shared some rapid, one-line commands for basic tasks. You’ll enter these through the Caddy CLI:
 
To create a quick, local file server:
$ caddy file-server
To create a public file server over HTTPS:
$ caddy file-server --domain yoursampleapp.com
To perform an HTTPS reverse proxy:
$ caddy reverse-proxy --from example.com --to localhost:9000
To run a Caddyfile-backed server in an existing working directory:
$ caddy run
 
Caddy notes that these commands are tested and approved for production deployments. They’re safe, easy, and reliable. Want to learn many more useful commands and subcommands? Check out Caddy’s CLI documentation to streamline your workflows.
Leveraging the Caddy Container Image
Caddy bills itself as a web-server solution with fewer moving parts, and therefore fewer management demands. Its static binary compiles for any platform. Since Caddy has no dependencies, it can run practically anywhere — and within containers. This makes it a perfect match for Docker, which is why developers have downloaded Caddy’s Docker Official Image over 100 million times. You can find our Caddy image on Docker Hub.
We’ll discuss how to use this image to deploy your web applications faster, and share some supporting Docker mechanisms that can help streamline those processes. You don’t even need libc.
Prerequisites

A free Docker Hub account – for complete push/pull registry access to our Caddy Official Image
The latest version of Docker Desktop, tailored to your OS and CPU (macOS users should choose between the Apple silicon and Intel builds)
Your preferred IDE, though VSCode is popular

Docker Desktop unlocks greater productivity around container, image, and server management. Although the CLI remains available, we’re leveraging the GUI while running important tasks.
Pull Your Caddy Image
Getting started with Docker’s official Caddy image is easy. Enter the docker pull caddy:latest command to download the image locally on your machine. You can also specify a Caddy image version using a number of available tags.
Entering docker pull caddy:latest grabs Caddy’s current version, while docker pull caddy:2.5.1 requests an image running Caddy v2.5.1 (for example).
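In practice, both pulls are one-liners:

$ docker pull caddy:latest
$ docker pull caddy:2.5.1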
Using :latest is okay during testing but not always recommended in production. Ideally, you’ll aim to thoroughly test and vet any image from a security standpoint before using it. Pulling caddy:latest (or any image tagged similarly) may invite unwanted gremlins into your deployment.
Since Docker automatically grabs the newest release, it’s harder to pre-inspect that image or notice that changes have been made. However, :latest images can include the most up-to-date vulnerability and bug fixes. Image release notes are helpful here.
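If you need certainty about exactly which build you’re running, one option is to pin the image by digest instead of by tag. The digest below is a placeholder, not a real value:

$ docker pull caddy@sha256:<digest>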
You can confirm your successful image pull within Docker Desktop:
After pulling caddy:latest, you can enter the docker image ls command to view its size. Caddy is even smaller than both NGINX and Apache’s httpd images:
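Your output should resemble this sketch (columns trimmed for brevity; the NGINX and httpd sizes are illustrative and vary by tag and platform):

$ docker image ls
REPOSITORY   TAG      SIZE
caddy        latest   43.1MB
nginx        latest   ~142MB
httpd        latest   ~145MB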
Need to inspect the contents of your Caddy image? Docker Desktop lets you summon a Docker Software Bill of Materials (SBOM) — which lists all packages contained within your designated image. Simply enter the docker sbom caddy:latest command (or use any container image:tag):
You’ll see that your 43.1MB caddy:latest image contains 129 packages. These help Caddy run effectively cross-platform.
Run Your Caddy Image
Your image is ready to go! Next, run your image to confirm that Caddy is working properly with the following command:
docker run --rm -d -p 8080:80 --name web caddy
 
Each portion of this command serves a purpose:

--rm removes the container when it exits or when the daemon exits (whichever comes first)
-d runs your Caddy container in the background
-p designates port connections between host and container
--name lets you assign a non-random name to your container
web is the argument to the --name flag, and is the descriptive name of your running container
caddy is your image’s name, which you pulled earlier

 
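As a quick sanity check (assuming the flags above), you can confirm the container is up before opening the browser:

$ docker ps --filter name=web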
Since you’ve established a port connection, navigate to localhost:8080 in your browser. Caddy should display a webpage:
You’ll see some additional instructions for setting up Caddy 2. Scrolling further down the page also reveals more details, plus some handy troubleshooting tips.
Time to Customize
These next steps will help you customize your web application and map it into the container. The Caddy image stores your critical static website files under the /usr/share/caddy path.
Here’s where you’ll store any CSS or HTML — or anything residing within your index.html file. Additionally, this becomes your root directory once your Caddyfile is complete.
Locate your Caddyfile at /etc/caddy/Caddyfile to make any important configuration changes. Caddy’s documentation provides a fantastic breakdown of what’s possible, while the Caddyfile Tutorial guides you step-by-step.
The steps mentioned above describe resources that’ll exist within your Caddy container, not those on your local machine. Because of this, it’s often necessary to reload Caddy to apply any configuration changes. On a traditional host installation, you’d enter systemctl reload caddy in your CLI and visit your site as confirmation. Running your Caddy server within a Docker container requires just one extra shell command:
$ docker exec -w /etc/caddy web caddy reload
 
This gracefully reloads your content without a complete restart. That process is otherwise time consuming and disruptive.
Next, you can serve your updated index.html file by entering the following commands. Create a working directory and run them from that directory:

$ mkdir docker-caddy
$ cd docker-caddy
$ echo "hello world" > index.html
$ docker run -d -p 8080:80 \
    -v $PWD/index.html:/usr/share/caddy/index.html \
    -v caddy_data:/data \
    caddy

$ curl http://localhost:8080/
hello world

 
Want to quickly push new configurations to your Caddy server? Caddy provides a RESTful JSON API that lets you POST sets of changes with minimal effort. This isn’t necessary for our tutorial, though you may find this config structure useful in the future:

POST /config/

{
  "apps": {
    "http": {
      "servers": {
        "example": {
          "listen": ["127.0.0.1:2080"],
          "routes": [{
            "@id": "demo",
            "handle": [{
              "handler": "file_server",
              "browse": {}
            }]
          }]
        }
      }
    }
  }
}

 
No matter which route you take, any changes made through Caddy’s API are persisted on disk and continually usable after restarts. Caddy’s API docs explain this and more in greater detail.
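As a sketch of how you might send that payload: Caddy’s admin API listens on localhost:2019 by default, and config.json here is a hypothetical file containing the body above:

$ curl -X POST "http://localhost:2019/config/" \
    -H "Content-Type: application/json" \
    -d @config.json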
Building Your Caddy-Based Dockerfile
Finally, half the magic of using Caddy and Docker rests with the Dockerfile. You don’t always want to mount your files within a container. Instead, it’s often better to COPY files or directories from a remote source and base your image on them:

FROM caddy:<version>

COPY Caddyfile /etc/caddy/Caddyfile
COPY site /srv

 
This copies your Caddyfile into the image at an absolute path. You can even add modules to your Caddy Dockerfile to extend your server’s functionality. Caddy’s :builder image streamlines this process significantly. Just note that this variant is much larger than the standard Caddy image. To reduce bloat, the final FROM stage starts from the slim base image and copies in only the compiled binary:

FROM caddy:builder AS builder

RUN xcaddy build \
    --with github.com/caddyserver/nginx-adapter \
    --with github.com/hairyhenderson/caddy-teapot-module@v0.0.3-0

FROM caddy:<version>

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

 
You’re also welcome to add any of Caddy’s available modules, which are free to download. Congratulations! You now have all the ingredients needed to deploy a functional Caddy 2 web application.
Quick Notes on Docker Compose
Though not required, you can use docker compose to run your Caddy-based stack. As a side note, Docker Compose V2 is also written in Go. You’d store this Compose content within a docker-compose.yml file, which looks like this:

version: "3.7"

services:
  caddy:
    image: caddy:<version>
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:

 
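From the directory containing that file (assuming your site lives in ./site, per the volume mapping above), a single command brings the stack up:

$ docker compose up -d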
Stopping and Reviving Your Containers
Want to stop your container? Simply enter docker stop web. Docker gives the container 10 seconds to shut down gracefully; if the process stalls beyond that window, Docker kills the container immediately. You can customize and extend this timeout with the following command:
docker stop --time=30 web
 
However, doing this is easier using Docker Desktop — and specifically the Docker Dashboard. In the sidebar, navigate to the Containers pane. Next, locate your Caddy server container titled “web” in the list, hover over it, and click the square Stop icon. This performs the same task from our first command above:
You’ll also see options to Open in Browser, launch the container CLI, Restart, or Delete.
Stop your Caddy container by accident, or want to spin up another? Navigate to the Images pane, locate caddy in the list, hover over it, and click the Run button that appears. Once the modal appears, expand the Optional Settings section. Next, do the following:

Name your new container something meaningful
Designate your local host port as 8080 and map it to container port 80
Mount any volumes you want, though this isn’t necessary for this demo
Click the Run button to confirm

Finally, revisit the Containers pane, locate your new container, and click Open in Browser. You should see the very same webpage that your Caddy server rendered earlier, if things are working properly.
Conclusion
Despite the popularity of Apache and NGINX, developers have other server options at their disposal. Alternatives like Caddy 2 can save time and effort, and make the deployment process much smoother. Fewer dependencies, less required configuration, and tight Docker integration help simplify each management step. As a bonus, you’re also supporting an open-source project.
Are you currently using Caddy 1 and want to upgrade to Caddy 2? Follow along with Caddy’s official documentation to understand the process.
Source: https://blog.docker.com/feed/

6 Development Tool Features that Backend Developers Need

Application development is seemingly getting more and more complex every day. While those complexities can lead to better end-user experiences, they can also hamper your overall productivity. Without a doubt, effective tooling helps you tackle these challenges much more easily. Good tools are user-friendly, work cross-platform, and have multi-language support.
Additionally, the best tools are actually enjoyable to use while keeping you seamlessly engaged. This is what makes integrated platforms so valuable to application development — but, how do you recognize a good solution when you see it?
Here’s what you should look for while evaluating tools and platforms.
1) Collaborative and User-Friendly Environments
Whether you’re working within one unified workspace or many, fostering strong collaboration is essential. Development projects have many stakeholders. They also — for individual features and especially entire products — rely on contributions from numerous team members.
This is where development environments come in handy. Environments help teams transition cohesively between tasks, projects, and milestones along their roadmaps. They also help developers manage application source code end-to-end. Your search for an ideal development environment should include those that bundle development, staging, and production servers. Additionally, look for all-encompassing environments that can reduce your overall tooling footprint. Prioritize the following environment features:

Task automation 
Writing
Testing
Debugging
Patching
Updating
Code compiling
Code merging
Viewing of in-progress code and peer review

 
Each snippet of application code is vital, which makes sharing so helpful. Additionally, environments that can support multiple repositories like GitHub, GitLab, or Bitbucket help teams code more effectively. You might also leverage different technologies with their own requirements. Wouldn’t it be great if numerous functions could be integrated within single solutions (like an IDE, for example)?
Overall, you’ll want an environment that helps you tackle your day-to-day tasks. It should welcome all technical stakeholders. It should also be something that you enjoy using on a daily basis. Great tools reduce friction — and the idea that environments can bring a smile to your face isn’t so far-fetched. Remember to first create a “needs” checklist and evaluate your workflows before committing (no pun intended).
2) Kubernetes Compatibility
What makes Kubernetes (K8s) so important? There are currently over 3.9 million K8s developers worldwide. This adoption is growing in lockstep with microservices and containerization. It’s not surprising that Kubernetes is the world’s leading container-orchestration platform.
In fact, 67% of respondents from last year’s Cloud Native Computing Foundation (CNCF) survey use K8s in production. Globally, 31% of backend developers also leverage K8s, so there’s a pretty good chance you do, too. If you need Kubernetes integration, you’ll want tools or platforms with it baked in.
3) Wide-Ranging OS and Hardware Support
It seems obvious, but any development tool must be compatible with your machine to be useful. You’ll need to choose the correct download package for your desktop OS — whether you’re running Windows, macOS, or a Linux distro.
From a collaborative standpoint, any application that supports all major operating systems is much more valuable. That flexibility means that developers with varied workstation setups can use it — and it even helps drive adoption. It’s much easier to lobby for your favorite developer tool when everyone can use it. Plus, working alongside other developers on one platform is much easier than building a “bridge” between two radically different, OS-specific tools.
OS aside, many apps have binaries tailored to specific hardware components — like Apple’s M-series Arm processors. Alternative packages may favor x86 architectures. These customized installs can deliver performance optimizations that you wouldn’t get from a universal install. They’re also integral to granting applications low-level permissions and system access needed to work effectively.
4) Multi-Language Support
Before containers, you had to install your desired language or framework locally on your development machine. This often entailed proprietary package managers unique to each language. Unfortunately, failed version upgrades could leave a particular language unusable and hard to remove. This can bloat any OS over time — and eventually require a fresh install.
Today, there are hundreds of programming languages of varied popularity. No tool can support all of them, but the right tool should support the heaviest hitters — like JavaScript, Python, Java, C, C++, C#, PHP, and others. Even five years ago, many developers reported using seven languages on average per project.
The most important language is the one you’re currently using. And that can change quickly! Consider that Swift and Rust are highly-desired languages, while Kotlin is also gaining steam. Your tools should let you grow and expand your development skill set, instead of being restrictive. Development is often an exploratory process, after all.
5) Easy Debugging
Writing functional code is challenging, but writing “clean” code can be even harder. Even the best developers make mistakes, and those little gremlins become increasingly harder to uncover as source code expands. Software bugs can take a massive amount of time to pinpoint (and fix) as development progresses — consuming up to 50% of a developer’s total bandwidth.
You also have to keep your users in mind, as coding issues can adversely affect your core services. Unfortunately, a missed bug can cause outages and impact people in very real ways. It’s therefore important to remember that bugs can have serious consequences, which makes debugging so critical for all users.
The right tooling should streamline this process. Shift-left testing and real-time linting are good approaches for catching pre-production issues. Devs are doing a better job here overall, but there’s still room for improvement. For example, a developer typically creates 70 bugs per 1,000 lines of code, and 15 of those reach end users in production.
Since fixing a bug can take 30 times longer than writing a line of code, automation, collaboration, and targeted debugging features are crucial. You can pour every bit of debugging time you get back into other development tasks. Then, you can focus on shipping new features and improving user experiences.
6) Seamless Tech-Stack Integration
In this bring-your-own-stack (BYOS) world, it’s easy to encounter fragmentation while using platforms and applications in a piecemeal manner. These approaches often require workarounds or, at best, make you hop between windows to get work done.
In that sense, integration doesn’t just mean that one tool or technology meshes well with another. Complete integration describes a scenario where multi-platform functionality is possible through one single interface.
Bouncing between apps for even mundane tasks is frustrating. That’s why API-driven services platforms have become so popular — and why major vendors have opened up their ecosystems to partners or community developers. Your platform should boost your productivity and welcome external integrations with open arms. Similarly, it should fit into your internal and production environments.
This is important if you’re an indie developer or open-source contributor. However, it’s become essential for those working within larger companies — which deploy upwards of 187 applications on average. Consequently, you’ll want your tools to play nicely with numerous technologies.
Know Your Requirements
Building a needs-based tooling strategy is key. Whether you’re a lone developer or a member of a larger development team, take the time to sit down and assess your current situation. What’s working? Where could your tooling strategy improve? Gather feedback and apply those lessons while making decisions.
We’ve heard plenty of feedback ourselves, and have considered the needs of backend developers while developing Docker Desktop. Our goal is to make it easy for you and your team to build, share, and run your applications — and we’re always trying to make Desktop better for you. Make sure to check out our public roadmap and share your suggestions for features you’d love to see most.
Docker Desktop (for Windows, macOS, and now Linux) bundles together local K8s support, the Docker CLI, Docker Extensions (Beta), container management, Dev Environments (Preview), and more to keep you productive. Check out our documentation to learn how Docker Desktop makes development more enjoyable.
Source: https://blog.docker.com/feed/

New Extensions and Container Interface Enhancements in Docker Desktop 4.9

We’re excited to announce the launch of Docker Desktop 4.9, and we’re looking forward to you trying it! Here’s what’s coming in our latest launch. 
Manage your containers easily with table view and bulk remove
We’ve made a few user experience enhancements to the containers tab based on all of your awesome feedback. We’re excited to announce that on the Containers tab you can now:

Easily clean up multiple containers simultaneously with multi-select checkboxes

Resize columns to view your complete container and image names. Just drag the dividers in the table header.
Quickly copy your image id and easily paste it elsewhere, like in your terminal


Easily sort columns using header attributes. You can also hide columns to create more space and reduce clutter.

Navigate away from your tab and back to it without disturbing your sorting or search results. These now remain in place thanks to state persistence!

 
Are you looking for more easy ways to tidy up your resources? Check out our Disk Usage extension in the Extensions Marketplace!
New extensions are joining the lineup
We’re happy to announce the addition of two new extensions to the Extensions Marketplace:

Lens Kubernetes – helps you access and take control of your Kubernetes clusters. Read more about it on the Mirantis website.
Slim.ai – gives you the power to create secure containers faster. Dive deeply into the construction of your images and discover what’s in your containers. Read more about this exciting extension here.

 
Information at a glance with the Docker Dashboard Footer
We’ve revamped the Docker Dashboard Footer to readily display key information about Docker Desktop’s current state, including:

Virtual Machine memory usage
Virtual Machine CPU usage
Docker Desktop version
A quick way to view the tip of the week

 
What else can we improve?
Please share your feedback and suggestions for what we should work on next! You can upvote, comment, or submit new ideas to our public roadmap. To learn more about the contents of Docker Desktop 4.9, check out our release notes.
Looking to get started as a new Docker Desktop user? View our Get Started page and read through our Docker Desktop Overview. 
 
Source: https://blog.docker.com/feed/

Automating Your Containers’ Security Scanning

Application development is complex. Teams must juggle numerous processes, gather all dependencies, and package everything together into a performant application. On top of this, each containerized application must satisfy strict security requirements, to protect your users and systems against intrusion.
When it comes to maintaining container security, software bugs are inevitable. They’re often harmless glitches, but some can pose serious security risks — letting bad actors access your systems. Whenever someone discovers a vulnerability within commercial or open-source software, they must register it within the Common Vulnerabilities and Exposures (CVE) database. There are currently more than 170,000 recorded CVEs, and engineers discovered 2,083 new ones in March 2022 alone.
In this article, we’ll explore how vulnerabilities impact containers and how using images from trusted sources helps. We’ll then discuss how to use Docker’s native Snyk integration to secure your software supply chain.
The State of Software Security
Developers have increasingly turned to third-party code and applications while building their services. Unfortunately, using third-party software may also expose your code to risk. It’s absolutely essential that you leverage trusted images and secure your containers through ongoing vigilance.
This is why image scanning is so critical — not just early on in development, but throughout an application’s life. Thankfully, Docker customers have access to continuous security scanning that’s integrated into their workflows via Snyk — so you can find and fix vulnerabilities more easily. Whether you’re running conventional containers or Kubernetes applications, our native Snyk integration is valuable throughout the software development lifecycle.
How Vulnerabilities Affect Containers
A Docker container starts with a base image, typically from an online source. Teams then add layers to incorporate the functionality they need. These layers might just be simple commands that perform actions like creating a folder, yet they often pull in additional packages.
Here’s a basic Dockerfile example called good-nodejs:

FROM node:lts-alpine3.15
WORKDIR /workdir
RUN npm i express-openid-connect

 
This file has three layers:

FROM: This instruction initializes a new build stage and prepares the base image for upcoming instructions. Any valid Dockerfile starts with a FROM expression. Our example above uses an official Node.js image built atop Alpine Linux. This image contains everything needed to get up and running with Node. It has multiple layers, which you can see from the size of its Dockerfile. Each layer — from the Alpine Linux OS layer and those that make up Alpine — is a potential vulnerability source.
WORKDIR: This layer sets a working directory. Risks here are minimal or non-existent, since WORKDIR doesn’t introduce any new, external software packages.
RUN: This instructional layer installs another third-party package. It may introduce additional vulnerabilities via the package code, its dependencies, and any other requisite packages.

 
These layers have a knack for concealing vulnerabilities deep inside an image, where they’re inconspicuous. You may need to perform extensive penetration testing to find them.
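As a quick illustration, listing an image’s layers is an easy way to see where packages enter the picture. This sketch assumes you’ve built the Dockerfile above with the tag good-nodejs:

$ docker build -t good-nodejs .
$ docker history good-nodejs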
Using Trusted Sources
Trusted images that follow image best practices are your most powerful allies when securing your supply chain. Docker Hub provides access to Docker Official Images and Verified Publisher images — denoted by the color-coded badges displayed prominently beside their names.
Docker’s internal teams curate Docker Official Images. We frequently update, scan, and patch these images to galvanize security. Every essential operating system, programming language, middleware, and database is represented.
Docker’s external partners supply Docker Verified Publisher images. When you use these images, you know that they’re sourced authentically and actively maintained. Our program helps ensure that these components are trustworthy. You’ll also find resources from Snyk, similar to those above:
Developers don’t need to have an advanced security background or read CVE reports to fix container issues. This partnership gives Docker developers access to the industry’s most comprehensive, timely, and accurate database of open source and container-vulnerability intelligence.
Our security coverage goes far beyond CVE vulnerabilities and other public databases. Snyk’s in-house research team maintains its database and combines public sources, community contributions, proprietary research, and machine learning to continuously adapt to dynamic security threats.
Identifying Vulnerabilities
Trusted images are great starting points for development, yet they may not be fully functional. You may choose to leverage community-sourced images or those from outside developers instead. Luckily, you can use Docker Hub’s Snyk integration to detect any threats hidden within any images and code. More importantly, our Snyk integration also arms developers with base image fix recommendations and identifies any Dockerfile lines that introduce vulnerabilities.
Automated vulnerability scanning can detect CVEs that find their way into your container images. It’s an essential tool for securing your software supply chain — acting as a front-line defense mechanism as you integrate third-party code into their projects.
This scan works by examining all packages and dependencies defined in your Dockerfile and checking them against a list of recorded vulnerabilities.
You can enable a repository’s vulnerability scanning in its respective Settings tab.
With scanning enabled, Snyk will automatically analyze any new tags pushed to the repository (like a specific image version or variant).
Consider our basic Dockerfile from earlier. To demonstrate how image scanning works, you can pull an older version of your base image (with known vulnerabilities), and do the same for your npm package:

FROM node:15.9.0-alpine3.13
WORKDIR /workdir
RUN npm i express-openid-connect@2.7.1

 
We can test Snyk’s functionality from here. If you build and push this code to your Docker Hub repository — with the test tag bad-nodejs (alongside good-nodejs from earlier) — you’ll see that Snyk has automatically scanned it. This scan has found 22 high-severity and eight medium-severity vulnerabilities:
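That build-and-push step might look like the following sketch, where yourrepo/snyk-demo stands in as a hypothetical Docker Hub repository:

$ docker build -t yourrepo/snyk-demo:bad-nodejs .
$ docker push yourrepo/snyk-demo:bad-nodejs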
You can then dive into the bad-nodejs results to get a breakdown of all vulnerabilities discovered, showing:

Severity
Priority score
CVE number
Package introducing the issue
Package versions with the bug and the fix

When you drill further into a vulnerability, Snyk presents information within a tree-like structure. You can see which package is responsible for introducing the vulnerability. In the example below, apk-tools is importing zlib, which contains an out-of-bounds write vulnerability:
Enabling Continuous Monitoring
Using older image versions to replicate legacy systems is a common practice; it ensures that your applications are backwards compatible. Your workflows might also prioritize stability over new features. Finally, licensing new software versions can be cost-prohibitive for your company, therefore keeping you on an older version.
Building an image with bleeding-edge technologies — and deploying it into production — does introduce some risk. If product development stagnates for an extended period, there’s a high chance that someone will find vulnerabilities in public releases. You wouldn’t know about these vulnerabilities before deployment, since you chose the newest versions.
Using Snyk’s technology, Docker Hub mitigates these risks by periodically re-scanning all repository images. Your Docker Hub subscription grants you Docker Desktop as a local UI. This lets you view recent scan results for images across your organization.
Fixing the Container Image via a Web UI
When you push your Dockerfiles to publicly-accessible source control — like GitHub, GitLab, and Bitbucket — you can integrate the code with a free Snyk account and get detailed remediation recommendations. The web UI prompts you to automatically fix vulnerabilities with a pull request into the source code.
Fixing the Container Image via the Command Line
Docker Desktop also provides powerful CLI scanning locally. This alternative method lets Snyk examine your Dockerfile and provide detailed recommendations based on its findings. It’s also an essential tool if you’ve embraced a shift-left testing philosophy.
When you scan that aforementioned bad-nodejs image via the command line, you’ll uncover the same vulnerabilities found within Docker Hub:

✗ Critical severity vulnerability found in apk-tools/apk-tools
  Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-ALPINE313-APKTOOLS-1533754
  Introduced through: apk-tools/apk-tools@2.12.1-r0
  From: apk-tools/apk-tools@2.12.1-r0
  Image layer: Introduced by your base image (node:15.9-alpine3.13)
  Fixed in: 2.12.6-r0

 
This output shows how the vulnerability was introduced and links to Snyk, where more information is available.
By passing your Dockerfile to the CLI scan, you’ll receive the same upgrade recommendations for your base image as you did earlier.
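As a sketch, that CLI scan (using Docker Desktop’s Snyk-powered scan command, with the hypothetical repository from earlier) looks like this:

$ docker scan --file Dockerfile yourrepo/snyk-demo:bad-nodejs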
You’ll find another vulnerability if you scroll further down. Your added npm package introduced this one, since you intentionally grabbed an older version. Crucially, Snyk tells you how to fix it.
Next Steps
Security discussions can be both headache-inducing and complex in a traditional development environment. Process fragmentation impacts security from one team to the next, and the onus is often on leaders to form a cohesive strategy. However, you should never have to wonder if your teams are building, running, and sharing secure applications.
Accordingly, automated vulnerability scanning helps your organization secure its software supply chain. Docker’s native Snyk integration provides broad oversight of your organization’s image security — detecting vulnerabilities inside dependency layers. Our Docker Extension for Snyk helps you better follow development best practices, while also meeting your compliance requirements. Learn more about getting started with Snyk scanning, and our Docker Extension for Snyk, here.
The integration reduces the time and effort needed to boost security. Your development teams can instead spend their time improving your services. To learn more about Docker’s vulnerability scanning integrations — and how to start securing your images —  browse our documentation.
Source: https://blog.docker.com/feed/

Kubernetes Best Practices

spacelift.io – As the most popular container orchestration system, K8s is the de facto standard for the modern cloud engineer to get to grips with. K8s is a notoriously complex system to use and maintain, so…
Source: news.kubernauts.io