Hot Off the Press: New WordPress.com Themes for March 2024

The WordPress.com team is always working on new design ideas to bring your website to life. Check out the latest themes in our library, including great options for small businesses, sports fans, nostalgic bloggers, and more.

All WordPress.com Themes

Feelin’ Good

Feelin’ Good is a vibrant (to say the least!) blog theme with a bold vaporwave aesthetic. Its nostalgic atmosphere pays homage to the daring, over-the-top visual art and advertisements of the ’80s and early ’90s. We’ve combined a lot of elements that shouldn’t work together, but do. If you’re looking for a dynamic, attention-grabbing, eye-popping visual feast of a theme, try Feelin’ Good.

Click here to view a demo of this theme.

Low Fi

Low Fi is a simple blog theme featuring a narrow column layout that’s optimized for seamless browsing on mobile devices. With six style variations, you’re sure to find a palette you’re drawn to. Taking inspiration from the lo-fi beats music scene, the theme’s design cues, such as the square header image, offer a nod to album artwork.

The overall aesthetic is deliberately understated, with each element—from the muted color schemes to the textured background—crafted to evoke a sense of nostalgia and warmth.

Click here to view a demo of this theme.

Cakely

Cakely is the ultimate WordPress theme designed specifically for passionate bakers, cake enthusiasts, and dessert lovers. Tailored for small businesses aiming to shine in the world of sweets, Cakely effortlessly combines style and functionality to showcase mouthwatering creations. Its vibrant pink color scheme exudes joy while maintaining a classy, clean layout with easy navigation. This theme ultimately strikes the perfect balance between professionalism and playfulness, making it an ideal choice for showcasing your delicious masterpieces.

Click here to view a demo of this theme.

Treehouse

Treehouse is a carefree, fun, and friendly theme ideal for Woo stores selling children’s products. With its unlimited customization options, Treehouse enables you to set up an online shop with just a few clicks. Utilizing a soft color palette, playful design details, and simplified layouts, your site will attract a wide range of customers, from young parents to over-the-moon grandparents. This theme is fully responsive and cross-browser compatible.

Click here to view a demo of this theme.

Infield

Major League Baseball’s 2024 season kicks off on Thursday, March 28. What better way to show your home team the love it deserves than with a baseball-themed fan site! With a somewhat old-school layout, this theme evokes some of the classic sports sites of the ’90s, back before fantasy leagues took over. The header and accent colors are customizable, ensuring that your favorite crew is properly saluted.

Click here to view a demo of this theme.

To install any of the above themes, click the name of the theme you like, which brings you right to the installation page. Then click the “Activate this design” button. You can also click “Open live demo,” which brings up a clickable, scrollable version of the theme for you to preview.

Premium themes are available to use at no extra charge for customers on the Explorer plan or above. Partner themes are third-party products that can be purchased for $79/year each.

You can explore all of our themes by navigating to the “Themes” page, which is found under “Appearance” in the left-side menu of your WordPress.com dashboard. Or you can click below:

All WordPress.com Themes

Source: RedHat Stack

Amazon SES now offers guided onboarding for fully authenticated sending

Today, Amazon Simple Email Service (SES) is releasing an update to its guided onboarding wizard to help customers meet the authentication requirements introduced by Gmail and Yahoo Mail in 2024. The SES onboarding wizard now walks customers through the steps of verifying their sending identities, setting up custom MAIL FROM domains, and publishing a DMARC record if they don't already have one. This makes it even easier for customers to start sending authenticated email with SES.
Source: aws.amazon.com

Amazon Managed Service for Apache Flink adds support for Apache Flink 1.18

Amazon Managed Service for Apache Flink now supports Apache Flink 1.18. This new version includes improvements to connectors such as Amazon OpenSearch, Amazon DynamoDB, and MongoDB, as well as improved watermark alignment and better query performance. For a complete list of supported features, improvements, and bug fixes, see our documentation for Amazon Managed Service for Apache Flink. You can use in-place version upgrades for Apache Flink to adopt the Apache Flink 1.18 runtime and upgrade your existing applications simply and quickly.
Source: aws.amazon.com

Is Your Container Image Really Distroless?

Containerization helped drastically improve the security of applications by giving engineers greater control over their applications' runtime environment. However, a significant time investment is required to maintain the security posture of those applications, given the daily discovery of new vulnerabilities as well as regular releases of languages and frameworks.

The concept of “distroless” images offers the promise of greatly reducing the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the amount of time teams spend remediating vulnerabilities, allowing them to focus only on the software they are using. 

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.

What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). These distributions, like most Linux distros, take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge that all Linux distributions must face involves the usability/security dilemma.

On its own, the Linux kernel is not very usable, so many utility commands are included in distributions to cover a large array of use cases. Having the right utilities included in the distribution without having to install additional packages greatly improves a distro’s usability. The downside of this increase in usability, however, is an increased attack surface area to keep up to date. 

A Linux distro must strike a balance between these two elements, and different distros have different approaches to doing so. A key aspect to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not emphasize usability. What it means is that the distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can now start from a full-featured build image with all the necessary components installed, perform the necessary build step, and then copy only the result of those steps to a more minimal or even an empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are also cacheable, which can considerably reduce build time. 

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

FROM golang:1.21.5-alpine AS build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app

FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for mopy, a BuildKit frontend by Julian Goede for building Python application images.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to new tools for creating container images like multi-stage builds and BuildKit, it is now a lot more practical to create images that only contain the required software and its runtime dependencies. 

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does, including wget, and which can leave containers vulnerable to living-off-the-land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization. 

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time so they need to be configured at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization using tools available from most container orchestrators: init containers.

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple seconds), and it typically doesn’t need to be exposed to the internet. Much like multi-stage builds allow developers to separate the build-time dependencies from the runtime dependencies, init containers allow developers to separate initialization dependencies from the execution dependencies. 

The concept of an init container may be familiar if you are using relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
    - name: postgress
      image: laurentgoderre689/postgres-distroless
      securityContext:
        runAsUser: 70
        runAsGroup: 70
      volumeMounts:
        - name: db
          mountPath: /var/lib/postgresql/data/
  initContainers:
    - name: init-postgress
      image: postgres:alpine3.18
      env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kubecon-postgress-admin-pwd
              key: password
      command: ['docker-ensure-initdb.sh']
      volumeMounts:
        - name: db
          mountPath: /var/lib/postgresql/data/
  volumes:
    - name: db
      emptyDir: {}

---

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

---
> docker-compose up
[+] Running 4/0
✔ Network compose_default Created
✔ Volume "compose_pgdata" Created
✔ Container compose-db-init-1 Created
✔ Container compose-db-1 Created
Attaching to db-1, db-init-1
db-init-1 | The files belonging to this database system will be owned by user "postgres".
db-init-1 | This user must also own the server process.
db-init-1 |
db-init-1 | The database cluster will be initialized with locale "en_US.utf8".
db-init-1 | The default database encoding has accordingly been set to "UTF8".
db-init-1 | The default text search configuration will be set to "english".
db-init-1 | […]
db-init-1 exited with code 0
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-1 | 2024-02-23 14:59:33.194 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1 | 2024-02-23 14:59:33.196 UTC [9] LOG: database system was shut down at 2024-02-23 14:59:32 UTC
db-1 | 2024-02-23 14:59:33.198 UTC [1] LOG: database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images. 

Conclusion

This article explained how Docker build tools allow for the separation of build-time dependencies from run-time dependencies to create “distroless” images. It also showed how init containers allow developers to separate the logic needed to configure a runtime environment from the environment itself, resulting in a more secure container. This approach helps teams focus their efforts on the software they use and find a better balance between security and usability.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

containerd vs. Docker: Understanding Their Relationship and How They Work Together

During the past decade, containers have revolutionized software development by introducing higher levels of consistency and scalability. Now, developers can spend far less time wrestling with dependency management, environment consistency, and collaborative workflows.

When developers explore containerization, they might learn about container internals, architecture, and how everything fits together. And, eventually, they may find themselves wondering about the differences between containerd and Docker and how they relate to one another.

In this blog post, we’ll explain what containerd is, how Docker and containerd work together, and how their combined strengths can improve developer experience.

What’s a container?

Before diving into what containerd is, let's briefly review what containers are. Simply put, containers are processes with added isolation and resource management. Rather than running a full virtualized operating system, containers share the host's kernel while getting an isolated view of the system and controlled access to host resources.

Containers also use operating system kernel features. They use namespaces to provide isolation and cgroups to limit and monitor resources like CPU, memory, and network bandwidth. As you can imagine, container internals are complex, and not everyone has the time or energy to become an expert in the low-level bits. This is where container runtimes, like containerd, can help.
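To make those kernel features a bit more concrete, here is a minimal sketch in Go (Linux-only, run as root) that starts a shell in new UTS, PID, and mount namespaces. It is nowhere near a real runtime; containerd and runc additionally wire up cgroups, user namespaces, the root filesystem, networking, and much more.

// isolate.go: a toy demonstration of Linux namespace isolation.
// Build and run on Linux as root, e.g. `sudo go run isolate.go`.
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch an interactive shell as the "contained" process.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Ask the kernel for new UTS (hostname), PID, and mount namespaces.
	// Real runtimes add cgroup limits, user namespaces, and more.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}

	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

Inside that shell, changing the hostname no longer affects the host; this is the same family of primitives container runtimes build on.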

What’s containerd?

In short, containerd is a runtime built to run containers. This open source tool builds on top of operating system kernel features and improves container management with an abstraction layer, which manages namespaces, cgroups, union file systems, networking capabilities, and more. This way, developers don’t have to handle the complexities directly. 

In March 2017, Docker pulled its core container runtime into a standalone project called containerd and donated it to the Cloud Native Computing Foundation (CNCF).  By February 2019, containerd had reached the Graduated maturity level within the CNCF, representing its significant development, adoption, and community support. Today, developers recognize containerd as an industry-standard container runtime known for its scalability, performance, and stability.

Containerd is a high-level container runtime with many use cases. It’s perfect for handling container workloads across small-scale deployments, but it’s also well-suited for large, enterprise-level environments (including Kubernetes). 

A key component of containerd’s robustness is its default use of Open Container Initiative (OCI)-compliant runtimes. By using runtimes such as runc (a lower-level container runtime), containerd ensures standardization and interoperability in containerized environments. It also efficiently deals with core operations in the container life cycle, including creating, starting, and stopping containers.

How is containerd related to Docker?

But how is containerd related to Docker? To answer this, let’s take a high-level look at Docker’s architecture (Figure 1). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements.

How Docker interacts with containerd

To better understand this interaction, let’s talk about what happens when you run the docker run command:

1. After you press Enter, the Docker CLI will send the run command and any command-line arguments to the Docker daemon (dockerd) via a REST API call.

2. dockerd will parse and validate the request, and then it will check that things like container images are available locally. If they're not, it will pull the image from the specified registry.

3. Once the image is ready, dockerd will shift control to containerd to create the container from the image.

4. Next, containerd will set up the container environment. This process includes tasks such as setting up the container file system, networking interfaces, and other isolation features.

5. containerd will then delegate running the container to runc using a shim process. This will create and start the container.

6. Finally, once the container is running, containerd will monitor the container status and manage the lifecycle accordingly.
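To make the hand-off concrete, here is a rough sketch of driving containerd directly with its Go client (github.com/containerd/containerd, 1.x API), loosely mirroring steps 2 through 6 above. The namespace, container name, and image are arbitrary choices for illustration, and error handling is kept minimal; this is not how dockerd is implemented line for line, but it exercises the same containerd API surface that dockerd uses on your behalf.

// containerd_example.go: a sketch of talking to containerd directly via its
// Go client. Assumes a local containerd daemon and access to its socket.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to containerd's gRPC socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd scopes images, containers, and tasks to a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull and unpack the image (compare steps 2 and 3).
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container: a snapshot of the image plus an OCI runtime spec (step 4).
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("echo", "hello")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Create and start the task; containerd hands execution to runc via a shim (step 5).
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // monitor the lifecycle (step 6)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	log.Printf("container exited with code %d", status.ExitCode())
}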

Docker and containerd: Better together 

Docker has played a key role in the creation and adoption of containerd, from its inception to its donation to the CNCF and beyond. This involvement helped standardize container runtimes and bolster the open source community’s involvement in containerd’s development. Docker continues to support the evolution of the open source container ecosystem by continuously maintaining and evolving containerd.

Containerd specializes in the core functionality of running containers. It’s a great choice for developers needing access to lower-level container internals and other advanced features. Docker builds on containerd to create a cohesive developer experience and comprehensive toolchain for building, running, testing, verifying, and sharing containers.

Build + Run

In development environments, tools like Docker Desktop, the Docker CLI, and Docker Compose allow developers to easily define, build, and run single- or multi-container environments, and they integrate seamlessly with developers' favorite editors and IDEs, and even with their CI/CD pipelines. 

Test

One of the largest developer experience pain points involves testing and environment consistency. With Testcontainers, developers don’t have to worry about reproducibility across environments (for example, dev, staging, testing, and production). Testcontainers also allows developers to use containers for isolated dependency management, parallel testing, and simplified CI/CD integration.
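As a hedged illustration of what that looks like in practice, the following sketch uses the testcontainers-go module (github.com/testcontainers/testcontainers-go) to start a throwaway PostgreSQL container for a single test; the image tag, password, and connection handling are illustrative assumptions rather than a prescribed setup.

// db_test.go: a sketch of an integration test that provisions its own
// PostgreSQL instance with testcontainers-go and tears it down afterwards.
package db_test

import (
	"context"
	"fmt"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithPostgres(t *testing.T) {
	ctx := context.Background()

	// Describe the container and how to tell when it is ready.
	req := testcontainers.ContainerRequest{
		Image:        "postgres:16-alpine",
		Env:          map[string]string{"POSTGRES_PASSWORD": "example"},
		ExposedPorts: []string{"5432/tcp"},
		WaitingFor:   wait.ForListeningPort("5432/tcp"),
	}

	pg, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { _ = pg.Terminate(ctx) })

	host, err := pg.Host(ctx)
	if err != nil {
		t.Fatal(err)
	}
	port, err := pg.MappedPort(ctx, "5432")
	if err != nil {
		t.Fatal(err)
	}

	dsn := fmt.Sprintf("postgres://postgres:example@%s:%s/postgres", host, port.Port())
	t.Logf("database available at %s", dsn)
	// ...run migrations and exercise the code under test against dsn here...
}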

Verify

By analyzing your container images and creating a software bill of materials (SBOM), Docker Scout works with Docker Desktop, Docker Hub, or Docker CLI to help organizations shift left. It also empowers developers to find and fix software vulnerabilities in container images, ensuring a secure software supply chain.

Share

Docker Registry serves as a store for developers to push container images to a shared repository securely. This functionality streamlines image sharing, making maintaining consistency and efficiency in development and deployment workflows easier. 

With Docker building on top of containerd, the entire software development lifecycle benefits, from the inner loop and testing all the way through secure deployment to production.

Wrapping up

In this article, we discussed the relationship between Docker and containerd. We showed how containers, as isolated processes, leverage operating system features to provide efficient and scalable development and deployment solutions. We also described what containerd is and explained how Docker leverages containerd in its stack. 

Docker builds upon containerd to enhance the developer experience, offering a comprehensive suite of tools for the entire development lifecycle across building, running, verifying, sharing, and testing containers. 

Start your next projects with containerd and other container components by checking out Docker’s open source projects and most popular open source tools. 

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/