Helm — Advanced Commands

We can use the --generate-name flag to auto-generate the release name during a Helm chart installation. Before creating Kubernetes objects with a Helm chart, we can use the --dry-run flag to…
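
For illustration, here's a minimal sketch combining both flags (the bitnami/nginx chart is just an example):

# Render the chart's manifests without installing anything, under an auto-generated release name
helm install bitnami/nginx --generate-name --dry-run
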
Source: news.kubernauts.io

Enable Cloud-Native Log Observability With Parseable

Observability is the practice of understanding the internal state of a system from its output. It’s based on a trio of key indicators: logs, metrics, and traces. Because metrics and traces are numerical, it’s easy to visualize that data through graphics. Logs, unfortunately, are text-heavy and relatively difficult to visualize or observe.

No matter the data type and its underlying nature, actionable log data helps you solve problems and make smarter business decisions. And that’s where Parseable comes in.

Introducing Parseable

The SaaS observability ecosystem is thriving, but there’s little to no movement in open source, developer-friendly observability platforms. That’s what we’re looking to address with Parseable. 

Parseable is an open source, developer-centric platform created to ingest and query log data. It’s designed to be efficient, easy to use, and highly flexible. To achieve this, Parseable uses a cloud-native, containerized architectural approach to create a simple and dependency-free platform. 

Specifically, Parseable uses Apache Arrow and Parquet under the hood to efficiently store log data and query at blazingly fast speeds. It uses S3 or other compatible storage platforms to support seamless storage while remaining stateless.

What’s unique about Parseable?

Here are some exciting features that set Parseable apart from other observability platforms:

It maintains a SQL-compatible API for querying log data.

The Parquet open data format enables complete data ownership and wide-ranging possibilities for data analysis.

The single binary and container-based deployment model (including UI) helps you deploy in minutes — if not seconds. 

Its indexing-free design rivals the performance of indexed systems while offering lower CPU usage and less storage overhead. 

It’s written in Rust with low latency and high throughput.

How does Parseable work?

Parseable exposes HTTP REST API endpoints. This lets you ingest, query, and manage your log streams on the Parseable server. There are three major API categories:

Log stream creation, ingestion, and management

Log stream query and search

Overall health status
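
For a flavor of what those categories look like in practice, here's an illustrative sketch using curl. The endpoint paths, the demo stream name, and the parseable/parseable credentials are assumptions for illustration; the Postman workspace mentioned below is the authoritative reference:

# Create a log stream
curl -u parseable:parseable -X PUT http://localhost:8000/api/v1/logstream/demo

# Ingest a JSON log event into the stream
curl -u parseable:parseable -X POST http://localhost:8000/api/v1/logstream/demo \
  -H 'Content-Type: application/json' \
  -d '[{"level":"error","message":"connection refused"}]'

# Query the stream with SQL over a time range
curl -u parseable:parseable -X POST http://localhost:8000/api/v1/query \
  -H 'Content-Type: application/json' \
  -d '{"query":"SELECT * FROM demo LIMIT 10","startTime":"2022-10-01T00:00:00Z","endTime":"2022-10-02T00:00:00Z"}'

# Check overall health
curl http://localhost:8000/api/v1/liveness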

API reference information and examples are available in the Parseable public workspace on Postman.

Parseable is compatible with standard logging agents like FluentBit, Logstash, Vector, syslog, and others via their HTTP output plugins. It also offers a built-in, intuitive GUI for log query and analysis.

Why use the Parseable Docker extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the Parseable extension, we aim to provide a simple, one-click approach for deploying Parseable. 

Once the extension is installed and running, you’ll have a running Parseable server that can ingest logs from any logging agents or directly from your application. You’ll also have access to the Parseable UI.

Overall, the Parseable extension brings richer log observability to development platforms.

Getting started with Parseable

Prerequisites

A Docker Desktop installation

MinIO Object Storage (or S3 if available)

Credentials for read-write access to object storage

An object storage bucket
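
If you don't have an S3 bucket handy, a local MinIO container works for testing. A minimal sketch (the credentials are illustrative, and you'd still create a bucket afterwards via the MinIO console or the mc client):

# Run a local MinIO server with example root credentials
docker run -d --name minio -p 9000:9000 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data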

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box:

Installing the Parseable Docker extension

While we’re working to bring Parseable to the Extensions Marketplace, you’ll currently need to download it via the CLI. Launch your terminal and run the following command to clone the GitHub repository and install the Parseable Extension:

git clone https://github.com/parseablehq/parseable-docker-extension
cd parseable-docker-extension
make install-extension

The Parseable extension will appear in the Docker Dashboard’s left sidebar, under the Extensions heading.

Using Parseable

Parseable requires you to enter the following configuration settings and environment variables during the initial setup:

Local Port Number (the port number you want Parseable listening on)

Local Storage Path (the path within the container where Parseable stages data)

Local Volume Path (the path where your local storage path is mounted)

S3/MinIO URL

S3/MinIO Bucket Name

S3/MinIO Access Key

S3/MinIO Secret Key

S3/MinIO Region

Click “Deploy” after you’ve entered all required configuration details.

You should see the URL http://localhost:8000 within the extension window:

Next, Docker Desktop will open the Parseable login page in your browser. Your credentials are the ones you provided in the Login Credentials section (default username/password: parseable/parseable):

After logging in, you’ll see the logs page with the option to select a log stream. If you used the default MinIO bucket embedded in the Extensions UI, some demo data is already present. Alternatively, if you’re using your own S3-compatible bucket, use the Parseable API to create a log stream and send logs to the log stream.

Once you’re done, you can choose a log stream and the time range for which you want the logs. You can even add filters and search fields:

Parseable currently supports data filtering by label, metadata, and specific column values. For example, you can choose a column and specify an operator or value for the column. Only the log data rows matching this filter will be shown. We’re working on improving this with support for multiple-column data types.

This entire process takes about a minute. To see it in action, check out this quick walkthrough video:

Try Parseable today!

In this post, we quickly showcased Parseable and its key features. You also learned how to locally run it with a single click using the extension. Finally, we explored how to ingest logs to your running Parseable instance and query those logs via the Parseable UI. 

You can test drive Parseable for yourself today! Follow our CLI workflow to install this extension directly. Plus, keep an eye out for Parseable’s launch on the Extensions Marketplace; it’s coming soon!

To learn more, join the Parseable community on Slack and help us spread the word by adding a star to the repo.

We really hope you enjoyed this article and this new approach to log data ingestion and query. Docker Extensions makes this single-click approach possible.

Contribute to the Parseable Docker extension

We’re committed to making Parseable more powerful for our developers and users — and we need help! We’re actively looking for contributors to the Parseable Docker extension project. 

The current code is simple and easy to get started with, and we’re always around to give potential contributors a hand. This can be a great first project, so please feel free to share your ideas.
Source: https://blog.docker.com/feed/

How Rapid7 Reduced Setup Time From Days to Minutes With Docker

This post was co-written by Kris Rivera, Principal Software Engineer at Rapid7.

Rapid7 is a Boston-based provider of security analytics and automation solutions enabling organizations to implement an active approach to cybersecurity. Over 10,000 customers rely on Rapid7 technology, services, and research to improve security outcomes and securely advance their organizations.

The security space is constantly changing, with new threats arising every day. To meet their customers’ needs, Rapid7 focuses on increasing the reliability and velocity of software builds while also maintaining their quality.

That’s why Rapid7 turned to Docker. Their teams use Docker to help development, support the sales pipeline, provide testing environments, and deploy to production in an automated, reliable way. 

By using Docker, Rapid7 transformed their onboarding process by automating manual processes. Setting up a new development environment now takes minutes instead of days. Their developers can produce faster builds that enable regular code releases to support changing requirements.

Automating the onboarding process

When developers first joined Rapid7, they were met with a static, manual process that was time-consuming and error-prone. Configuring a development environment isn’t exciting for most developers. They want to spend most of their time creating! And setting up the environment is the least glamorous part of the process.

Docker helped automate this cumbersome process. Using Docker, Rapid7 could create containerized systems that were preconfigured with the right OS and developer tools. Docker Compose enabled multiple containers to communicate with each other, and it had the hooks needed to incorporate custom scripting and debugging tools.

Once the onboarding setup was configured through Docker, the process was simple for other developers to replicate. What once took multiple days now takes minutes.

Expanding containers into production

The Rapid7 team streamlined the setup of the development environment by using a Dockerfile. This helped them create an image with every required dependency and software package.

But they didn’t stop there. As this single Docker image evolved into a more complex system, they realized that they’d need more Docker images and container orchestration. That’s when they integrated Docker Compose into the setup.

Docker Compose simplified Docker image builds for each of Rapid7’s environments. It also encouraged a high level of service separation that split out different initialization steps into separate bounded contexts. Plus, they could leverage Docker Compose for inter-container communication, private networks, Docker volumes, defining environment variables with anchors, and linking containers for communication and aliasing.
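
As one example of that flexibility, YAML anchors let a set of environment variables be defined once and reused across services. Here's a sketch; the x-common-env block and variable names are illustrative, not Rapid7's actual configuration:

# docker-compose.yaml
x-common-env: &common-env
  DB_HOST: mysql
  REDIS_HOST: redis

services:
  webserver:
    image: image-with-docker:latest
    environment: *common-env
  worker:
    image: image-with-docker:latest
    environment: *common-env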

This was a real game changer for Rapid7, because Docker Compose truly gave them unprecedented flexibility. Teams then added scripting to orchestrate communication between containers when a trigger event occurs (like when a service has completed).

Using Docker, Docker Compose, and scripting, Rapid7 was able to create a solution for the development team that could reliably replicate a complete development environment. To optimize the initialization, Rapid7 wanted to decrease the startup times beyond what Docker enables out of the box.

Optimizing build times even further

After creating Docker base images, the bottom layers rarely have to change. Essentially, that initial build is a one-time cost. Even if the images change, the cached layers make it a breeze to get through that process quickly. However, you do have to reinstall all software dependencies from scratch, which is a one-time cost per Docker image update.

Committing the installed software dependencies back to the base image allows for a simple, incremental, and often skippable stage. The Docker image is always usable in development and production, all on the development computer.

All of these efficiencies together streamlined an already fast 15-minute process down to 5 minutes, making it easy for developers to get productive faster.

How to build it for yourself

Check out code examples and explanations about how to replicate this setup for yourself. We’ll now tackle the key steps you’ll need to follow to get started.

Downloading Docker

Download and install the latest version of Docker to be able to perform Docker-in-Docker. Docker-in-Docker lets your Docker environment have Docker installed within a container, so that container can run other containers or pull images.

To enable Docker-in-Docker, you can apt install the docker.io package as one of your first commands in your Dockerfile. Once the container is configured, mount the Docker socket from the host installation:

# Dockerfile
FROM ubuntu:20.04

# Install dependencies
RUN apt update && \
    apt install -y docker.io

Next, build your Docker image by running the following command in your CLI or shell script file:

docker build -t <docker-image-name> .

Then, start your Docker container with the following command:

docker run -v /var/run/docker.sock:/var/run/docker.sock -ti <docker-image-name>

Using a Docker commit script

Committing layered changes to your base image is what drives the core of the Dev Environments in Docker. Docker fetches the container ID based on the service name, and the changes you make to the running container are committed to the desired image. 

Because the host Docker socket is mounted into the container when executing the docker commit command, the container will apply the change to the base image located in the host Docker installation.

#!/bin/bash

SERVICE=
IMAGE=

# Commit changes to image
CONTAINER_ID=$(docker ps -aqf "name=${SERVICE}")

if [ ! -z "$CONTAINER_ID" ]; then
  echo "--- Committing changes from $SERVICE to $IMAGE ---"
  docker commit $CONTAINER_ID $IMAGE
fi

Updating your environment

Mount the Docker socket from the host installation. Mounting the source code alone is insufficient without the :z property, which tells Docker that the content will be shared between containers.

You’ll have to mount the host machine’s Docker socket into the container. This lets any Docker operations performed inside the container actually modify the host Docker images and installation. Without this, changes made in the container are only going to persist in the container until it’s stopped and removed.

Add the following code into your Docker Compose file:

# docker-compose.yaml

services:
  service-name:
    image: image-with-docker:latest
    volumes:
      - /host/code/path:/container/code/path:z
      - /var/run/docker.sock:/var/run/docker.sock

Orchestrating components

Once Docker Compose has the appropriate services configured, you can start your environment in two different ways: run docker compose up to bring everything up, or start an individual service (along with its linked services) with the following command:

docker compose start webserver

The main container references the linked service via the linked names. This makes it very easy to override any environment variables with the provided names. Check out the YAML file below:

services:
  webserver:
  mysql:
    ports:
      - '3306:3306'
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    ports:
      - 6379:6379
    volumes:
      - redisdata:/data

volumes:
  dbdata:
  redisdata:

Note: For each service, you’ll want to choose and specify your preferred Docker Official Image version. Additionally, the MySQL Docker Official Image comes with important environment variables defaulted in, though you can specify them here as needed.

Managing separate services

Starting a small part of the stack can also be useful if a developer only needs that specific piece. For example, if we just wanted to start the MySQL service, we’d run the following command:

docker compose start mysql

We can stop this service just as easily with the following command:

docker compose stop mysql

Configuring your environment

Mounting volumes into the database services lets your containers apply the change to their respective databases while letting those databases remain as ephemeral containers.

In the main entry point and script orchestrator, provide a -p flag to ./start.sh to set the PROD_BUILD environment variable. The build reads the variable inside the entry point and optionally builds a production or development version of the development environment.

First, here’s how that script looks:

# start.sh

while [ "$1" != "" ];
do
  case $1 in
    -p | --prod) PROD_BUILD="true";;
  esac
  shift
done

Second, here’s a sample shell script:

export PROD_BUILD=$PROD_BUILD

Third, here’s your sample Docker Compose file:

# docker-compose.yaml

services:
  build-frontend:
    entrypoint:
      - bash
      - -c
      - '[[ "$PROD_BUILD" == "true" ]] && make fe-prod || make fe-dev'

Note: Don’t forget to add your preferred image under build-frontend if you’re aiming to make a fully functional Docker Compose file.

What if we need to troubleshoot any issues that arise? Debugging inside a running container only requires the appropriate debugging library in the mounted source code and an open port for the debugger to attach to. Here’s our YAML file:

# docker-compose.yaml

services:
  webserver:
    ports:
      - '5678:5678'
    links:
      - mysql
      - redis
    entrypoint:
      - bash
      - -c
      - ./start-webserver.sh

Note: Like in our previous examples, don’t forget to specify an image underneath webserver when creating a functional Docker Compose file.

In your editor of choice, provide a launch configuration to attach the debugger using the specified port. Once the container is running, run the configuration and the debugger will be attached:

# launch-setting.json
{
  "configurations": [
    {
      "name": "Python: Remote Attach",
      "type": "python",
      "request": "attach",
      "port": 5678,
      "host": "localhost",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "."
        }
      ]
    }
  ]
}
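
The post doesn't name the debugging library in use; assuming debugpy (which matches the "Python: Remote Attach" configuration above), the container's start script could launch the web server under the debugger on the exposed port. The app.py entry point is a placeholder:

# Inside the container: wait for the editor to attach before starting the server
pip install debugpy
python -m debugpy --listen 0.0.0.0:5678 --wait-for-client app.py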

Confirming that everything works

Once the full stack is running, it’s easy to access the main entry point web server via a browser on the defined webserver port.

The docker ps command will show your running containers, and Docker manages the communication between them.

The entire software service is now running completely in Docker. All the code lives on the host computer and is mounted into the Docker container. The development environment is now completely portable using only Docker.

Remembering important tradeoffs

This approach has some limitations. First, running your developer environment in Docker will incur additional resource overhead. Docker has to run and requires extra computing resources as a result. Also, including multiple stages will require scripting as a final orchestration layer to enable communication between containers.

Wrapping up

Rapid7’s development team uses Docker to quickly create their development environments. They use the Docker CLI, Docker Desktop, Docker Compose, and shell scripts to create a unique and robust Docker-friendly environment. They can use this to spin up any part of their development environment.

The setup also helps Rapid7 compile frontend assets, start cache and database servers, run the backend service with different parameters, or start the entire application stack. Using a “Docker-in-Docker” approach of mounting the Docker socket within running containers makes this possible. Docker’s ability to commit layers to the base image after dependencies are either updated or installed is also key. 

The shell scripts will export the required environment variables and then run specific processes in a specific order. Finally, Docker Compose makes sure that the appropriate service containers and dependencies are running.

Achieving future development goals

Relying on the Docker tool chain has been truly beneficial for Rapid7, since this has helped them create a consistent environment compatible with any part of their application stack. This integration has helped Rapid7 do the following: 

Deploy extremely reliable software to advanced customer environments

Analyze code before merging it in development

Deliver much more stable code

Simplify onboarding 

Form an extremely flexible and configurable development environment

By using Docker, Rapid7 is continuously refining its processes to push past the boundaries of what’s possible. Their next goal is to deliver production-grade stable builds on a daily basis, and they’re confident that Docker can help them get there.
Source: https://blog.docker.com/feed/

Automate API Tests and Debug in Docker With Postman’s Newman Extension

This post was co-written by Joyce Lin, Head of Developer Relations at Postman.

Over 20 million developers use the Postman API platform, and its Collections feature is a standout within the community. At its core, a collection is a group of API calls. 

While not all collections evolve into anything more complex, many are foundational building blocks for Postman’s more advanced features. For example, a collection can contain API tests and documentation, inform mock servers, or represent a sequence of API calls.

An example of a Postman collection containing API tests within the Postman app.

Storing API requests in a collection lets users explore, run, and share their work with others. We’ll explain why that matters and how you can start using Postman’s Newman Docker extension.

Why run a Postman collection in Docker Desktop?

The Newman extension in Docker Desktop displays collection run results.

Since a collection is a sequence of API calls, it can represent any API workflow imaginable. For example, here are some use cases for running collections during development: 

Automation – Automate API testing to run tests locally

Status checks – Run collections to assess the current status and health of your API

Debugging – Log test results and filter by test failures to debug unexpected API behavior

Execution – Run collections to execute an API workflow against different environment configurations

For each use case, you may want to run collections in different scenarios. Here are some scenarios involving API test automation: 

Testing locally during development

Testing as part of a CI/CD pipeline

Testing based on an event trigger

Health checking on a predetermined schedule

And you can run collections in several ways. One method leverages Newman — Postman’s open-source library — with Docker. You can use Newman from your command line or with functions, scripts, and containerized applications. You can even run your collection from Docker Desktop!
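
For instance, with the postman/newman image from Docker Hub, a command-line run might look like this sketch, where the collection ID and Postman API key are placeholders:

# Fetch the collection via the Postman API and run it with Newman
docker run -t postman/newman run \
  "https://api.getpostman.com/collections/<collection-id>?apikey=<postman-api-key>"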

Getting started with Newman in Docker Desktop

The Postman Docker Extension uses Postman’s Newman image to run a collection and display the results. In this section, we’ll test drive the extension and run our first collection.

Setting up

Install the latest version of Docker Desktop. Install the Newman extension for Docker Desktop.

Sign up for a free Postman account and generate an API key. This will let you access your Postman data like collections and environments.

Log into your Postman account and create a Postman collection. If you don’t have a Postman collection yet, you can fork this sample collection to your own workspace. Afterwards, this forked collection will appear as your own collection.

Running a Postman collection

1. Enter your Postman API key and click “Get Postman Collections.”

2. Choose which collection you want to run.

3. (Optional) Select an environment to run alongside your collection. In a Postman environment, you can define different server configurations and credentials corresponding to each server environment.

4. Click “Run Collection” and review the results of your API calls. You can filter by failed tests and drill down into the details. Here’s how everything works, going step by step:

5. Repeat this process with other collections and environments as needed.

Contribute to this extension or make your own

This extension is an open-source community project, so feel free to contribute your ideas. Or you can fork it and make it your own. Give Newman a try by visiting Docker Hub and opening the extension within Docker Desktop. You can also install Newman directly within Docker Desktop’s Extensions Marketplace.

Want to experiment even further? You can bring your own ideas to life via our Extensions SDK GitHub page. Here you’ll find useful code samples to kickstart your next project. 

Special thanks from Joyce to Postman for supporting open-source projects like Newman, empowering the community to build integrations, and to Software Development Engineer in Test (SDET) Danny Dainton for his UI work around collection run results.
Source: https://blog.docker.com/feed/