How to Setup Ansible AWX on a Kubernetes EKS Cluster
blog.devops.dev – Learn how to use Ansible AWX, a GUI implementation of Ansible, and deploy it on AWS EKS. Deploy a dummy playbook and trigger it with REST API calls.
Source: news.kubernauts.io
Observability is the practice of understanding the internal state of a system from its output. It’s based on a trio of key indicators: logs, metrics, and traces. Because metrics and traces are numerical, it’s easy to visualize that data through graphics. Logs are unfortunately text-heavy and relatively difficult to visualize or observe.
No matter the data type and its underlying nature, actionable log data helps you solve problems and make smarter business decisions. And that’s where Parseable comes in.
Introducing Parseable
The SaaS observability ecosystem is thriving, but there’s little to no movement in open source, developer-friendly observability platforms. That’s what we’re looking to address with Parseable.
Parseable is an open source, developer-centric platform created to ingest and query log data. It’s designed to be efficient, easy to use, and highly flexible. To achieve this, Parseable uses a cloud-native, containerized architectural approach to create a simple and dependency-free platform.
Specifically, Parseable uses Apache Arrow and Parquet under the hood to efficiently store log data and query at blazingly fast speeds. It uses S3 or other compatible storage platforms to support seamless storage while remaining stateless.
What’s unique about Parseable?
Here are some exciting features that set Parseable apart from other observability platforms:
It maintains a SQL-compatible API for querying log data.
The Parquet open data format enables complete data ownership and wide-ranging possibilities for data analysis.
The single binary and container-based deployment model (including UI) helps you deploy in minutes — if not seconds.
Its indexing-free design rivals the performance of indexed systems while offering lower CPU usage and less storage overhead.
It’s written in Rust with low latency and high throughput.
How does Parseable work?
Parseable exposes HTTP REST API endpoints. This lets you ingest, query, and manage your log streams on the Parseable server. There are three major API categories:
Log stream creation, ingestion, and management
Log stream query and search
Overall health status
API reference information and examples are available in Parseable’s public workspace on Postman.
Parseable is compatible with standard logging agents like Fluent Bit, Logstash, Vector, and syslog via their HTTP outputs. It also offers a built-in, intuitive GUI for log query and analysis.
Why use the Parseable Docker extension?
Docker Extensions help you build and integrate software applications into your daily workflows. With the Parseable extension, we aim to provide a simple, one-click approach for deploying Parseable.
Once the extension is installed and running, you’ll have a running Parseable server that can ingest logs from any logging agents or directly from your application. You’ll also have access to the Parseable UI.
Overall, the Parseable extension brings richer log observability to development platforms.
Getting started with Parseable
Prerequisites
A Docker Desktop installation
MinIO Object Storage (or S3 if available)
Credentials for read-write access to object storage
An object storage bucket
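If you don’t already have object storage handy, you can spin up a local MinIO container first. Here’s a minimal sketch; the credentials and ports are illustrative defaults, so adjust them for your setup:

docker run -d -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"

Once MinIO is running, create a bucket and note the access and secret keys for the configuration steps below.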
Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box:
Installing the Parseable Docker extension
While we’re working to bring Parseable to the Extensions Marketplace, you’ll currently need to download it via the CLI. Launch your terminal and run the following command to clone the GitHub repository and install the Parseable Extension:
git clone https://github.com/parseablehq/parseable-docker-extension
cd parseable-docker-extension
make install-extension
The Parseable extension will appear in the Docker Dashboard’s left sidebar, under the Extensions heading.
Using Parseable
Parseable requires you to enter the following configuration settings and environment variables during the initial setup:
Local Port Number (the port number you want Parseable listening on)
Local Storage Path (the path within the container where Parseable stages data)
Local Volume Path (the path where your local storage path is mounted)
S3/MinIO URL
S3/MinIO Bucket Name
S3/MinIO Access Key
S3/MinIO Secret Key
S3/MinIO Region
Click “Deploy” after you’ve entered all required configuration details.
You should see the URL http://localhost:8000 within the extension window:
Next, Docker Desktop will redirect you to the Parseable login page in your browser. Your credentials are identical to what you provided in the Login Credentials section (default username/password: parseable, parseable):
After logging in, you’ll see the logs page with the option to select a log stream. If you used the default MinIO bucket embedded in the Extensions UI, some demo data is already present. Alternatively, if you’re using your own S3-compatible bucket, use the Parseable API to create a log stream and send logs to the log stream.
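For example, using the default credentials, creating a stream, ingesting an event, and querying it back could look roughly like the following. This is a sketch based on Parseable’s documented REST API; the stream name is a placeholder, and the exact paths and query fields should be verified against the Postman workspace mentioned above:

curl -u parseable:parseable -X PUT http://localhost:8000/api/v1/logstream/demo
curl -u parseable:parseable -X POST http://localhost:8000/api/v1/logstream/demo \
  -H 'Content-Type: application/json' \
  -d '[{"level": "info", "message": "hello from curl"}]'
curl -u parseable:parseable -X POST http://localhost:8000/api/v1/query \
  -H 'Content-Type: application/json' \
  -d '{"query": "select * from demo", "startTime": "2022-11-01T00:00:00+00:00", "endTime": "2022-11-30T00:00:00+00:00"}'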
Once you’re done, you can choose a log stream and the time range for which you want the logs. You can even add filters and search fields:
Parseable currently supports data filtering by label, metadata, and specific column values. For example, you can choose a column and specify an operator or value for the column. Only the log data rows matching this filter will be shown. We’re working on improving this with support for multiple-column data types.
This entire process takes about a minute. To see it in action, check out this quick walkthrough video:
Try Parseable today!
In this post, we quickly showcased Parseable and its key features. You also learned how to locally run it with a single click using the extension. Finally, we explored how to ingest logs to your running Parseable instance and query those logs via the Parseable UI.
But you can test drive Parseable for yourself today! Follow our CLI workflow to install this extension directly. Plus, keep an eye out for Parseable’s launch on the Extensions Marketplace — it’s coming soon!
To learn more, join the Parseable community on Slack and help us spread the word by adding a star to the repo.
We really hope you enjoyed this article and this new approach to log data ingestion and query. Docker Extensions makes this single-click approach possible.
Contribute to the Parseable Docker extension
We’re committed to making Parseable more powerful for our developers and users — and we need help! We’re actively looking for contributors to the Parseable Docker extension project.
The current code is simple and easy to get started with, and we’re always around to give potential contributors a hand. This can be a great first project, so please feel free to share your ideas.
Source: https://blog.docker.com/feed/
This post was co-written by Kris Rivera, Principal Software Engineer at Rapid7.
Rapid7 is a Boston-based provider of security analytics and automation solutions enabling organizations to implement an active approach to cybersecurity. Over 10,000 customers rely on Rapid7 technology, services, and research to improve security outcomes and securely advance their organizations.
The security space is constantly changing, with new threats arising every day. To meet their customers’ needs, Rapid7 focuses on increasing the reliability and velocity of software builds while also maintaining their quality.
That’s why Rapid7 turned to Docker. Their teams use Docker to help development, support the sales pipeline, provide testing environments, and deploy to production in an automated, reliable way.
By using Docker, Rapid7 transformed their onboarding process by automating manual processes. Setting up a new development environment now takes minutes instead of days. Their developers can produce faster builds that enable regular code releases to support changing requirements.
Automating the onboarding process
When developers first joined Rapid7, they were met with a static, manual process that was time-consuming and error-prone. Configuring a development environment isn’t exciting for most developers. They want to spend most of their time creating! And setting up the environment is the least glamorous part of the process.
Docker helped automate this cumbersome process. Using Docker, Rapid7 could create containerized systems that were preconfigured with the right OS and developer tools. Docker Compose enabled multiple containers to communicate with each other, and it had the hooks needed to incorporate custom scripting and debugging tools.
Once the onboarding setup was configured through Docker, the process was simple for other developers to replicate. What once took multiple days now takes minutes.
Expanding containers into production
The Rapid7 team streamlined the setup of the development environment by using a Dockerfile. This helped them create an image with every required dependency and software package.
But they didn’t stop there. As this single Docker image evolved into a more complex system, they realized that they’d need more Docker images and container orchestration. That’s when they integrated Docker Compose into the setup.
Docker Compose simplified Docker image builds for each of Rapid7’s environments. It also encouraged a high level of service separation that split out different initialization steps into separate bounded contexts. Plus, they could leverage Docker Compose for inter-container communication, private networks, Docker volumes, defining environment variables with anchors, and linking containers for communication and aliasing.
This was a real game changer for Rapid7, because Docker Compose truly gave them unprecedented flexibility. Teams then added scripting to orchestrate communication between containers when a trigger event occurs (like when a service has completed).
Using Docker, Docker Compose, and scripting, Rapid7 was able to create a solution for the development team that could reliably replicate a complete development environment. To optimize the initialization, Rapid7 wanted to decrease the startup times beyond what Docker enables out of the box.
Optimizing build times even further
After you create Docker base images, the bottom layers rarely have to change. Essentially, that initial build is a one-time cost. Even if the images change, the cached layers make it a breeze to get through the build quickly. However, you do have to reinstall all software dependencies from scratch, a cost you pay on every Docker image update.
Committing the installed software dependencies back to the base image turns that step into a simple, incremental, and often skippable stage. The resulting Docker image is always usable in development and production, straight from the development computer.
All of these efficiencies together streamlined an already fast 15-minute process down to 5 minutes — making it easy for developers to get productive faster.
How to build it for yourself
Check out code examples and explanations about how to replicate this setup for yourself. We’ll now tackle the key steps you’ll need to follow to get started.
Downloading Docker
Download and install the latest version of Docker so you can use Docker-in-Docker. Docker-in-Docker lets a container have Docker installed within it, so that container can run other containers or pull images.
To enable Docker-in-Docker, apt install the docker.io package as one of the first commands in your Dockerfile. Once the container is configured, mount the Docker socket from the host installation:
# Dockerfile
FROM ubuntu:20.04

# Install dependencies
RUN apt update && \
    apt install -y docker.io
Next, build your Docker image by running the following command in your CLI or shell script file:
docker build -t <docker-image-name> .
Then, start your Docker container with the following command:
docker run -v /var/run/docker.sock:/var/run/docker.sock -ti <docker-image-name>
Using a Docker commit script
Committing layered changes to your base image is what drives the core of the Dev Environments in Docker. Docker fetches the container ID based on the service name, and the changes you make to the running container are committed to the desired image.
Because the host Docker socket is mounted into the container when executing the docker commit command, the container will apply the change to the base image located in the host Docker installation.
#!/bin/bash

# Set these to your Compose service name and the image tag to update
SERVICE=
IMAGE=

# Commit changes to the image
CONTAINER_ID=$(docker ps -aqf "name=${SERVICE}")
if [ -n "$CONTAINER_ID" ]; then
    echo "--- Committing changes from $SERVICE to $IMAGE ---"
    docker commit $CONTAINER_ID $IMAGE
fi
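As a usage sketch: set SERVICE to the Compose service name (for example, webserver) and IMAGE to the tag that service was started from (for example, image-with-docker:latest). Running the script after installing new dependencies inside the running container bakes them into the base image, so the next startup skips the reinstall. The example values here are illustrative, not from Rapid7’s actual setup.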
Updating your environment
When you mount your source code into the container, append the :z option, which tells Docker that the content will be shared between containers; mounting the source code without it is insufficient.
You’ll also have to mount the host machine’s Docker socket into the container. This lets any Docker operations performed inside the container actually modify the host Docker images and installation. Without this, changes made in the container only persist until the container is stopped and removed.
Add the following code into your Docker Compose file:
# docker-compose.yaml
services:
  service-name:
    image: image-with-docker:latest
    volumes:
      - /host/code/path:/container/code/path:z
      - /var/run/docker.sock:/var/run/docker.sock
Orchestrating components
Once Docker Compose has the appropriate services configured, you can start your environment in two different ways: bring everything up with the docker-compose up command, or start an individual service (along with its linked services) using the following command:
docker compose start webserver
The main container references the linked service via the linked names. This makes it very easy to override any environment variables with the provided names. Check out the YAML file below:
services:
  webserver:
    # specify your preferred image and settings here
  mysql:
    ports:
      - '3306:3306'
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/data

volumes:
  dbdata:
  redisdata:
Notes: For each service, you’ll want to choose and specify your preferred Docker Official Image version. Additionally, the MySQL Docker Official Image comes with important environment variables defaulted in — though you can specify them here as needed.
Managing separate services
Starting a small part of the stack can also be useful if a developer only needs that specific piece. For example, if we just wanted to start the MySQL service, we’d run the following command:
docker compose start mysql
We can stop this service just as easily with the following command:
docker compose stop mysql
Configuring your environment
Mounting volumes into the database services lets your containers persist changes to their respective databases while the database containers themselves remain ephemeral.
In the main entry point and script orchestrator, pass a -p flag to ./start.sh to set the PROD_BUILD environment variable. The entry point reads this variable and builds either a production or a development version of the development environment.
First, here’s how that script looks:
# start.sh
while [ "$1" != "" ]; do
    case $1 in
        -p | --prod) PROD_BUILD="true";;
    esac
    shift
done
Second, here’s a sample shell script:
export PROD_BUILD=$PROD_BUILD
Third, here’s your sample Docker Compose file:
# docker-compose.yaml
services:
  build-frontend:
    entrypoint:
      - bash
      - -c
      - '[[ "$PROD_BUILD" == "true" ]] && make fe-prod || make fe-dev'
Note: Don’t forget to add your preferred image under build-frontend if you’re aiming to make a fully functional Docker Compose file.
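Tying the three snippets together: running ./start.sh -p (or ./start.sh --prod) sets PROD_BUILD=true, the wrapper script exports it, and Docker Compose interpolates it into the build-frontend entrypoint, which then runs make fe-prod instead of make fe-dev.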
What if we need to troubleshoot any issues that arise? Debugging inside a running container only requires the appropriate debugging library in the mounted source code and an open port to mount the debugger. Here’s our YAML file:
# docker-compose.yaml
services:
  webserver:
    ports:
      - '5678:5678'
    links:
      - mysql
      - redis
    entrypoint:
      - bash
      - -c
      - ./start-webserver.sh
Note: Like in our previous examples, don’t forget to specify an image underneath webserver when creating a functional Docker Compose file.
In your editor of choice, provide a launch configuration to attach the debugger using the specified port. Once the container is running, run the configuration and the debugger will be attached:
#launch-setting.json
{
"configurations" : [
{
"name": "Python: Remote Attach",
"type": "python",
"request": "attach",
"port": 5678,
"host": "localhost",
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "."
}
]
}
]
}
Confirming that everything works
Once the full stack is running, it’s easy to access the main entry point web server via a browser on the defined webserver port.
The docker ps command will show your running containers. Docker is managing communication between containers.
The entire software service is now running completely in Docker. All the code lives on the host computer and is mounted into the Docker container. The development environment is now completely portable using only Docker.
Remembering important tradeoffs
This approach has some limitations. First, running your developer environment in Docker will incur additional resource overhead. Docker has to run and requires extra computing resources as a result. Also, including multiple stages will require scripting as a final orchestration layer to enable communication between containers.
Wrapping up
Rapid7’s development team uses Docker to quickly create their development environments. They use the Docker CLI, Docker Desktop, Docker Compose, and shell scripts to create a uniquely robust, Docker-friendly environment. They can use this to spin up any part of their development environment.
The setup also helps Rapid7 compile frontend assets, start cache and database servers, run the backend service with different parameters, or start the entire application stack. Using a “Docker-in-Docker” approach of mounting the Docker socket within running containers makes this possible. Docker’s ability to commit layers to the base image after dependencies are either updated or installed is also key.
The shell scripts will export the required environment variables and then run specific processes in a specific order. Finally, Docker Compose makes sure that the appropriate service containers and dependencies are running.
Achieving future development goals
Relying on the Docker tool chain has been truly beneficial for Rapid7, since this has helped them create a consistent environment compatible with any part of their application stack. This integration has helped Rapid7 do the following:
Deploy extremely reliable software to advanced customer environments
Analyze code before merging it in development
Deliver much more stable code
Simplify onboarding
Form an extremely flexible and configurable development environment
By using Docker, Rapid7 is continuously refining its processes to push past the boundaries of what’s possible. Their next goal is to deliver production-grade stable builds on a daily basis, and they’re confident that Docker can help them get there.
Source: https://blog.docker.com/feed/
sysdig.com – Kubernetes 1.25 brings 40 enhancements, Pod Security Admission replacing PSPs, checkpoints for forensic analysis… Discover more!
Source: news.kubernauts.io
This post was co-written by Joyce Lin, Head of Developer Relations at Postman.
Over 20 million developers use the Postman API platform, and its Collections feature is a standout within the community. At its core, a collection is a group of API calls.
While not all collections evolve into anything more complex, many are foundational building blocks for Postman’s more advanced features. For example, a collection can contain API tests and documentation, inform mock servers, or represent a sequence of API calls.
An example of a Postman collection containing API tests within the Postman app.
Storing API requests in a collection lets users explore, run, and share their work with others. We’ll explain why that matters and how you can start using Postman’s Newman Docker extension.
Why run a Postman collection in Docker Desktop?
The Newman extension in Docker Desktop displays collection run results.
Since a collection is a sequence of API calls, it can represent any API workflow imaginable. For example, here are some use cases for running collections during development:
Automation – Automate API testing to run tests locally
Status checks – Run collections to assess the current status and health of your API
Debugging – Log test results and filter by test failures to debug unexpected API behavior
Execution – Run collections to execute an API workflow against different environment configurations
For each use case, you may want to run collections in different scenarios. Here are some scenarios involving API test automation:
Testing locally during development
Testing as part of a CI/CD pipeline
Testing based on an event trigger
Health checking on a predetermined schedule
And you can run collections in several ways. One method leverages Newman — Postman’s open-source library — with Docker. You can use Newman from your command line or with functions, scripts, and containerized applications. You can even run your collection from Docker Desktop!
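For instance, outside of Docker Desktop, you can run a collection straight from Postman’s Newman image. Here’s a sketch, where the collection ID and API key are placeholders:

docker run -t postman/newman run "https://api.getpostman.com/collections/<collection-id>?apikey=<your-postman-api-key>"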
Getting started with Newman in Docker Desktop
The Postman Docker Extension uses Postman’s Newman image to run a collection and display the results. In this section, we’ll test drive the extension and run our first collection.
Setting up
Install the latest version of Docker Desktop. Install the Newman extension for Docker Desktop.
Sign up for a free Postman account and generate an API key. This will let you access your Postman data like collections and environments.
Log into your Postman account and create a Postman collection. If you don’t have a Postman collection yet, you can fork this sample collection to your own workspace. Afterwards, this forked collection will appear as your own collection.
Running a Postman collection
1. Enter your Postman API key and click “Get Postman Collections.”
2. Choose which collection you want to run.
3. (Optional) Select an environment to run alongside your collection. In a Postman environment, you can define different server configurations and credentials corresponding to each server environment.
4. Click “Run Collection” and review the results of your API calls. You can filter by failed tests and drill down into the details. Here’s how everything works, going step by step:
5. Repeat this process with other collections and environments as needed.
Contribute to this extension or make your own
This extension is an open source, community project, so feel free to contribute your ideas. Or you can fork it and make it your own. Give Newman a try by visiting Docker Hub and opening the extension within Docker Desktop. You can also install Newman directly within Docker Desktop’s Extensions Marketplace.
Want to experiment even further? You can bring your own ideas to life via our Extensions SDK GitHub page. Here you’ll find useful code samples to kickstart your next project.
Special thanks from Joyce to Postman for supporting open source projects like Newman and empowering the community to build integrations, and to Software Development Engineer in Test (SDET) Danny Dainton for his UI work around collection run results.
Source: https://blog.docker.com/feed/
itnext.io – Let’s take a look at new CNCF projects that you should watch in 2023
Source: news.kubernauts.io
Docker Desktop 4.14 brings new functionality directly into your workstations, specifically focused on providing better visibility into your containers’ productivity and security. Read more below!
Visualize your resource usage
Have you ever wanted an easier way to see which containers or Docker Compose projects consume the most resources, like CPU, memory, network, or disk I/O? The new Resource Usage extension displays all of this information right in Docker Desktop.
The extension displays a table view that shows CPU, memory, disk, and network I/O for all containers and aggregates them by Docker Compose project. You can start, stop, and restart containers or view container logs — all from the same place!
You can also visualize how these resources evolve over time:
Resource Usage is available on Docker Hub and on the Docker Desktop Extensions Marketplace. Try it out and let us know what you think!
Examine images for package vulnerabilities
Need to know if the package dependencies in your images (or the base images you build on) contain vulnerabilities? Over the coming weeks, Docker Desktop will roll out an enhanced image detail view to help you understand if dependencies are introducing vulnerabilities into your image — and where they’re introduced:
Inspect any image and see what you find. Don’t forget to join our Docker Community Slack and visit the #extensions channel to share your feedback directly with us!
Regenerate the original run command of a Docker container
If you need to share docker container run details with a collaborator, or you just need to modify some parameters and run it again, here’s a useful quality-of-life update. The “Copy docker run” option lets you easily retrieve the original run command (plus its parameters and details) to help you quickly uncover exactly which environment variables are being used:
Select the three-dot actions icon beside any listed container, choose “Copy docker run” to copy it, then paste and modify it anywhere!
Stay tuned for more!
We’re always looking for new ways to make it simpler and faster for you to understand what’s going on with your containers and dev environments. Check out our public roadmap to see what’s in store and share what other visibility features you’d like to see.
And be sure to check out the release notes for a full list of everything new in Docker Desktop 4.14!
Source: https://blog.docker.com/feed/
There’s no doubt that WebAssembly (AKA Wasm) is having a moment on the development stage. And while it may seem like a flash in the pan to some, we believe Wasm has a key role in continued containerized development. Docker and Wasm can be complementary technologies.
In the past, we’ve explored how Docker could successfully run Wasm modules alongside Linux or Windows containers. Nearly five months later, we’ve taken another big step forward with the Docker+Wasm Technical Preview. Developers need exceptional performance, portability, and runtime isolation more than ever before.
Chris Crone, Director of Engineering at Docker, and Michael Yuan, CEO and Founder of Second State, addressed these sticking points at the CNCF’s Wasm Day 2022. Here’s their full session, but feel free to stick around for our condensed breakdown:
You don’t need to learn new processes to develop successfully with Docker and Wasm. Popular Docker CLI commands can tackle this for you. Docker can even manage the WebAssembly runtime thanks to our collaboration with WasmEdge. We’ll dive into why we’re handling this new project and the technical mechanisms that make it possible.
Why WebAssembly and Docker?
How workloads and code are isolated has a major impact on how quickly we can deliver software to users. Chris highlights this by explaining how developers value:
Easy reuse of components and defined interfaces across projects that help build value quicker
Maximization of shared compute resources while maintaining safe, sturdy boundaries between workloads — lowering the cost of application delivery
Seamless application delivery to users, in seconds, through convenient packaging mechanisms like container images so users see value quicker
We know that workload isolation plays a role in these things, yet there are numerous ways to achieve it — like air gapping, hardware virtualization, stack virtualization (Wasm or JVM), containerization, and so on. Since each has unique advantages and disadvantages, choosing the best solution can be tricky.
Finding the right tools can also be enormously difficult. The CNCF tooling landscape alone is saturated, and while we’re thankful these tools exist, the variety is overwhelming for many developers.
Chris believes that specialized tooling can conquer the task at hand. It’s also our responsibility at Docker to guide these tooling decisions. This builds upon our continued mission to help developers build, share, and run their applications as quickly as possible.
That’s where WasmEdge — and Michael Yuan — come in.
Exciting opportunities with Docker and WasmEdge
Michael showed there’s some overlap between container and WebAssembly use cases. For example, developers from both camps might want to ship microservice applications. Wasm can enable quicker startup times and code-level security, which are beneficial in many cases.
However, WebAssembly doesn’t fit every use case due to threading, garbage collection, and binary packaging limitations. Running applications with Wasm also currently requires extra tooling.
WasmEdge in action: TensorFlow interface
Michael then kicked off a TensorFlow ML application demo to show what WasmEdge can do. This application wouldn’t work with other WASI-compatible runtimes:
A few things made this demo possible:
Rust: a safe and fast programming language with first-class support for the Wasm compiling target.
Tokio: a popular asynchronous runtime that can handle multiple, parallel HTTP requests without multithreading.
WasmEdge’s TensorFlow: a plug-in compatible with the WASI-NN spec. Besides TensorFlow, PyTorch and OpenVINO are also supported in WasmEdge.
Note: Tokio and TensorFlow support are WasmEdge features that aren’t available on other WASI-compliant runtimes.
With Rust’s cargo build command, we can compile the program into a Wasm module using the wasm32-wasi target platform. The WasmEdge runtime can execute the resulting .wasm file. Once the application is running, we can perform HTTP queries to run some pretty cool image recognition tasks.
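Concretely, that build-and-run loop looks roughly like this; the module name is a placeholder:

rustup target add wasm32-wasi
cargo build --target wasm32-wasi --release
wasmedge target/wasm32-wasi/release/<module>.wasm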
This demo illustrates the draw of WasmEdge as a WASI-compatible runtime. According to its maintainers, “WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.”
Making Wasm accessible with Docker
Docker has two magical features. First, Docker and containers work on any machine and anywhere in production. Second, Docker makes it easy to build, share, and reuse components from any project. Container images and other OCI artifacts are easy to consume (and share). Isolation is baked in. Millions of developers are also very familiar with many Docker workflows like docker compose up.
Chris described how standardization and open ecosystems make Docker and container tooling available everywhere. The OCI specifications are crucial here and let us form new solutions that’ll work for nearly anyone and any supported technology (like Wasm).
On the other hand, setting up cross-platform Wasm developer environments is tricky. You also have to learn new tools and workflows — hampering productivity while creating frustration. We believe we can help developers overcome these challenges, and we’re excited to leverage our own platform to make Wasm more accessible.
Demoing Docker+WasmEdge
How does Wasm support look in practice? Chris fired up a demo using a preview of Docker Desktop, complete with WASI support. He created a Docker Compose file with three services:
A frontend static JavaScript client using the NGINX Docker Official Image
A Rust server compiled to wasi/wasm32
A MariaDB database
That Rust server runs as a Wasm module, while the NGINX and MariaDB servers run in Linux containers. Chris built this Rust server using a Dockerfile that compiled from his local platform to a wasm32-wasi target. He also ran WasmEdge’s own AOT compiler to optimize the built Wasm module. However, this step is optional, and optimized modules require the WasmEdge runtime.
We’ll leave the nitty gritty to Chris (see 19:43 for the demo) for now. However, know that you can run a Compose build and come away with a wasi/wasm32 platform image. Running docker compose up launches your application which you can then interact with through your Web browser. This is one way to seamlessly run containers and Wasm side by side.
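For reference, the technical preview selects the Wasm runtime and platform with flags on an otherwise ordinary docker run command. A sketch, where the image name is a placeholder:

docker run -dp 8080:8080 \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm32 \
  <your-wasm-image>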
From the Docker CLI, you’ll see the Wasm microservice is less than 2MB. It contains a high-performance HTTP server and a MySQL database client. The NGINX and MariaDB servers are 10MB and 120MB, respectively. By comparison, the same Rust microservice would be tens of megabytes if you built it into a Linux binary and ran it in a Linux container. This underscores how lightweight Wasm images are.
Since the output is an OCI image, you can store or share it using an OCI-compliant registry like Docker Hub. You don’t have to learn complex new workflows. And while Chris and Michael centered on WasmEdge, Docker should support any WASI runtime.
The approach is interoperable with containers and has early support within Docker Desktop. Although Wasm might initially seem unfamiliar, integration with the Docker ecosystem immediately levels that learning curve.
The future of Docker and Wasm
As Chris mentioned, we’re invested in making Docker and Wasm work well together. Our recent Docker+Wasm Technical Preview is a major step towards boosting interoperability. However, we’re also thrilled to explore how Docker tooling can improve the lives of Wasm-hungry developers — no matter their goals.
Docker wants to get involved with the Wasm community to better understand how developers like you are building your WebAssembly applications. Your use cases and obstacles matter. By sharing our experiences with the container ecosystem with the community, we hope to accelerate Wasm’s growth and help you tackle that next big project.
Get started and learn more
Want to test run Docker and Wasm? Check out Chris’ GitHub page for links to special Wasm-compatible Docker Desktop builds, demo repos, and more. We’d also love to hear your feedback as we continue bolstering Docker+Wasm support!
Finally, don’t miss the chance to learn more about WebAssembly and microservices — alongside experts and fellow developers — at an upcoming meetup.
Quelle: https://blog.docker.com/feed/
Go (or Golang) is one of the most loved and wanted programming languages, according to Stack Overflow’s 2022 Developer Survey. Thanks to its smaller binary sizes vs. many other languages, developers often use Go for containerized application development.
Mohammad Quanit explored the connection between Docker and Go during his Community All-Hands session. Mohammad shared how to Dockerize a basic Go application while exploring each core component involved in the process:
Follow along as we dive into these containerization steps. We’ll explore using a Go application with an HTTP web server — plus key best practices, optimization tips, and ways to bolster security.
Go application components
Creating a full-fledged Go application requires you to create some Go-specific components. These are essential to many Go projects, and the containerization process relies equally heavily on them. Let’s take a closer look at those now.
Using main.go and go.mod
Mohammad mainly highlights the main.go file since you can’t run an app without executable code. In Mohammad’s case, he created a simple web server with two unique routes: one that prints a greeting using Go’s fmt package, and one that returns the current time.
What’s nice about Mohammad’s example is that it isn’t too lengthy or complex. You can emulate this while creating your own web server or use it as a stepping stone for more customization.
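Here’s a minimal sketch of such a server with the two routes described above; the handler names are illustrative:

// main.go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// helloHandler prints a greeting for requests to "/"
func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello World")
}

// timeHandler returns the current time for requests to "/time"
func timeHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, time.Now().Format(time.RFC1123))
}

func main() {
	http.HandleFunc("/", helloHandler)
	http.HandleFunc("/time", timeHandler)
	// The examples below assume the server listens on port 8081
	log.Fatal(http.ListenAndServe(":8081", nil))
}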
Note: You might also use a package main in place of a main.go file. You don’t explicitly need main.go specified for a web server — since you can name the file anything you want — but you do need a func main () defined within your code. This exists in our sample above.
We always recommend confirming that your code works as expected. Enter the command go run main.go to spin up your application. You can alternatively replace main.go with your file’s specific name. Then, open your browser and visit http://localhost:8081 to view your “Hello World” message or equivalent. Since we have two routes, navigating to http://localhost:8081/time displays the current time thanks to Mohammad’s second function.
Next, we have the go.mod file. You’ll use this as the root file for your Go packages, the module path for imports, and for dependency requirements. Go modules also help you choose a directory for your project code.
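A minimal go.mod for a dependency-free server like the one above might contain just the module path and Go version; the module path here is illustrative:

module github.com/example/go-docker

go 1.19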
With these two pieces in place, you’re ready to create your Dockerfile!
Creating your Dockerfile
Building and deploying your Dockerized Go application means starting with a software image. While you can pull this directly from Docker Hub (using the CLI), beginning with a Dockerfile gives you more configuration flexibility.
You can create this file within your favorite editor, like VS Code. We recommend VS Code since it supports the official Docker extension. This extension supports debugging, autocompletion, and easy project file navigation.
Choosing a base image and including your application code is pretty straightforward. Since Mohammad is using Go, he kicked off his Dockerfile by specifying the golang Docker Official Image as a parent image. Docker will build your final container image from this.
You can choose whatever version you’d like, but a pinned version like golang:1.19.2-bullseye is both stable and slim. Newer image versions like these are also safe from October 2022’s Text4Shell vulnerability.
You’ll also need to do the following within your Dockerfile:
Include an app directory for your source code
Copy everything from the root directory into your app directory
Copy your Go files into your app directory and install dependencies
Build your app with configuration
Tell your Docker container to listen on a certain port at runtime
Define an executable command that runs once your container starts
With these points in mind, here’s how Mohammad structured his basic Dockerfile:
# Specifies a parent image
FROM golang:1.19.2-bullseye
# Creates an app directory to hold your app’s source code
WORKDIR /app
# Copies everything from your root directory into /app
COPY . .
# Installs Go dependencies
RUN go mod download
# Builds your app with optional configuration
RUN go build -o /godocker
# Tells Docker which network port your container listens on
EXPOSE 8080
# Specifies the executable command that runs when the container starts
CMD [ "/godocker" ]
From here, you can run a quick CLI command to build your image from this file:
docker build --rm -t [YOUR IMAGE NAME]:alpha .
This creates an image while removing any intermediate containers created with each image layer (or step) throughout the build process. You’re also tagging your image with a name for easier reference later on.
Confirm that Docker built your image successfully by running the docker image ls command:
If you’ve already pulled or built images in the past and kept them, they’ll also appear in your CLI output. However, you can see Mohammad’s go-docker image listed at the top since it’s the most recent.
Making changes for production workloads
What if you want to account for code or dependency changes that’ll inevitably occur with a production Go application? You’ll need to tweak your original Dockerfile and add some instructions, according to Mohammad, so that changes are visible and the build process succeeds:
FROM golang:1.19.2-bullseye
WORKDIR /app
# Effectively tracks changes within your go.mod file
COPY go.mod .
RUN go mod download
# Copies your source code into the app directory
COPY main.go .
RUN go build -o /godocker
EXPOSE 8080
CMD [ "/godocker" ]
After making those changes, you’ll want to run the same docker build and docker image ls commands. Now, it’s time to run your new image! Enter the following command to start a container from your image:
docker run -d -p 8080:8081 --name go-docker-app [YOUR IMAGE NAME]:alpha
Confirm that this worked by entering the docker ps command, which generates a list of your containers. If you have Docker Desktop installed, you can also visit the Containers tab from the Docker Dashboard and locate your new container in the list. This also applies to your image builds — instead using the Images tab.
Congratulations! By tracing Mohammad’s steps, you’ve successfully containerized a functioning Go application.
Best practices and optimizations
While our Go application gets the job done, Mohammad’s final image is pretty large at 913MB. The client (or end user) shouldn’t have to download such a hefty file.
Mohammad recommends using a multi-stage build to only copy forward the components you need between image layers. Although you start with a golang image as the builder, defining a second build stage and choosing a slim alternative like alpine helps reduce image size. You can watch his step-by-step approach to tackling this.
This is beneficial and common across numerous use cases. However, you can take things a step further by using FROM scratch in your multi-stage builds. This empty image is the smallest we offer and accepts static binaries as executables, making it perfect for Go application development.
You can learn more about our scratch image on Docker Hub. Despite being listed on Hub, you can’t pull scratch; you can only reference it directly in your Dockerfile.
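Putting those ideas together, a multi-stage Dockerfile along these lines builds a static binary in the golang image and ships only that binary. This is a sketch following the approach described, not Mohammad’s exact file:

# Stage 1: build a statically linked binary
FROM golang:1.19.2-bullseye AS builder
WORKDIR /app
COPY go.mod ./
RUN go mod download
COPY main.go ./
# CGO_ENABLED=0 produces a static binary that can run in scratch
RUN CGO_ENABLED=0 go build -o /godocker

# Stage 2: start from the empty scratch image and copy in the binary
FROM scratch
COPY --from=builder /godocker /godocker
EXPOSE 8080
CMD ["/godocker"]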
Develop your Go application today
Mohammad Quanit outlined some user-friendly development workflows that can benefit both newer and experienced Go users. By following his steps and best practices, it’s possible to create cross-platform Go apps that are slim and performant. Docker and Go inherently mesh well together, and we also encourage you to explore what’s possible through containerization.
Want to learn more?
Check out our Go language-specific guide.
Read about the golang Docker Official Image.
See Go in action alongside other technologies in our Awesome Compose repo.
Dig deeper into Dockerfile fundamentals and best practices.
Understand how to use Go-based server technologies like Caddy 2.
Source: https://blog.docker.com/feed/
medium.com – Learn about its features to simplify managing your cluster
Source: news.kubernauts.io