Kubernetes Complete Reference

blog.devops.dev – Kubernetes is a powerful orchestrator that eases deployment and automatically manages your applications on a set of machines, called a cluster. The aim of this article is to explain the most used…
Source: news.kubernauts.io

December 2022 Newsletter

New in Docker Desktop 4.15: Improving Usability and Performance for Easier Builds
Docker Desktop 4.15 is here! And it’s packed with usability upgrades to help you find the images you want, manage your containers, discover vulnerabilities, and more.

Learn More

News you can use and monthly highlights:

Find and Fix Vulnerabilities Faster Now that Docker’s a CNA — by Kat Yi, Docker Sr. Security Engineer
How to Monitor Container Memory and CPU Usage in Docker Desktop — by Ivan Curkovic, Docker Engineering Manager
December Extensions Roundup: Improving Visibility for Your APIs and Images — by Amy Bass, Docker Group Product Manager
Configure, Manage, and Simplify Your Observability Data Pipelines with the Calyptia Core Docker Extension — by Ajeet Raina, Docker DevRel, & Eduardo Silva, Founder and CEO of Calyptia

Container Tools, Tips, and Tricks – Issue #2
Debugging is a fact of developer life, and Ivan Velichko is here to help make it go a little smoother. Check out his advice for debugging containers faster.

Learn More

The latest tips and tricks from the community:

Ruby on Rails Docker for local development environment — by Snyk 
Docker Made Easy Part #0 — Build your first Node JS Docker App — by Abdurrachman — mpj 
How to set up a Rails development environment with Docker — by Simon Chiu, Code with Rails
Traefik, Docker and dnsmasq to simplify container networking — by David Worms, Adaltas
Using a Random Forest Model for Fraud Detection in Confidential Computing — by Ellie Kloberdanz, Senior Data Scientist at Cape Privacy

See more great content from Docker and the community

Read Now

Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts sent directly to your inbox once a month.

Source: https://blog.docker.com/feed/

December Extensions Roundup: Improving Visibility for Your APIs and Images

It’s time for the holidays, and we’ve got some exciting new Docker Extensions to share with you! Docker extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s take a look at two exciting new extensions from December.

And if you’d like to see everything available, check out our full Extensions Marketplace!

Move faster with API endpoints with Akita

Are you working on a new service or shipping lots of changes? Do you have a handle on which of your API endpoints might be slow or which ones are throwing errors? With the Akita API extension for Docker Desktop, you can find this out in a few minutes.

The Akita API Docker extension makes it easy to try out Akita without additional work. With Akita, you can:

See your API endpoints.

See slow endpoints and endpoints with errors.

Automatically monitor your endpoints.

The Akita API extension is in beta. To join Akita’s beta, sign up here. 

Get more visibility with Dive-In

There are many advantages to keeping your container sizes small. Often, that starts with keeping your Docker image small as well. But it can sometimes be hard to know where to start or which layers can be reduced. With the Dive-In Docker extension, you can explore your Docker image and its layer contents, then discover ways to shrink the size of your Docker/OCI image.

With the Dive-In extension, you can:

View the total size of your image.

Identify the inefficient bytes.

See an efficiency score.

Identify the largest files in your image.

View the size of each layer in your image.

Dive-In is an open-source extension. Feel free to contribute or raise issues at https://github.com/prakhar1989/dive-in.

Building extensions? We’d love to hear from you!

Adding new extensions to the Extensions Marketplace is really exciting and we’d love to see more from our partners and the community. If you’re currently working on an extension or have built one in the past, we’d love to hear from you! And you can help us improve the developer experience for our Extensions SDK by taking this short survey.

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more info on extensions:

Install Docker Desktop for Mac, Windows, or Linux to try extensions yourself.

Visit our Extensions Marketplace to see all of our extensions.

Build your own extension with our Extensions SDK.
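
If you’re ready to start building, the Extensions SDK’s CLI can scaffold a new extension for you. As a quick sketch (the extension name here is a placeholder):

docker extension init my-extension

This generates a starter project you can build and install locally while you develop.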

Source: https://blog.docker.com/feed/

Reduce Your Image Size with the Dive-In Docker Extension

This guest post is written by Prakhar Srivastav, Senior Software Engineer at Google.

Anyone who’s built their own containers, either for local development or for cloud deployment, knows the advantages of keeping container sizes small. In most cases, keeping the container image size small translates to real dollars saved by reducing bandwidth and storage costs on the cloud. In addition, smaller images ensure faster transfer and deployments when using them in a CI/CD server.

However, even for experienced Docker users, it can be hard to understand how to reduce the sizes of their containers. The Docker CLI can be very helpful for this, but it can be intimidating to figure out where to start. That’s where Dive comes in.

What is Dive?

Dive is an open-source tool for exploring a Docker image and its layer contents, then discovering ways to shrink the size of your Docker/OCI image.

At a high level, it works by analyzing the layers of a Docker image. Every layer you add takes up more space in the image; put another way, each instruction in the Dockerfile (such as a separate RUN instruction) adds a new layer to your image.
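
To make this concrete, here’s a hypothetical Dockerfile where each instruction produces a layer, and where bytes can quietly go to waste:

FROM alpine:3.17                # base image layers
COPY big-dataset.csv /data/     # new layer: adds the file's full size
RUN gzip /data/big-dataset.csv  # new layer: the original bytes still live in the previous layer

Even though the final filesystem contains only the compressed file, the uncompressed bytes remain in the earlier layer; this is exactly the kind of waste Dive surfaces. You can also see per-layer sizes with docker history <image>.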

Dive takes this information and does the following:

Breaks down the image contents in the Docker image layer by layer.

Shows the contents of each layer in detail.

Shows the total size of the image.

Shows how much space was potentially wasted.

Shows the efficiency score of the image.

While Dive is awesome and extremely helpful, it’s a command-line tool that uses a TUI (terminal UI) to display the analysis, which can feel limiting and hard to use for some users.
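
For comparison, the terminal workflow looks roughly like this (assuming Dive is installed locally; my-app:latest is a placeholder image name):

dive my-app:latest            # opens the TUI to browse layers and their contents
CI=true dive my-app:latest    # non-interactive mode: prints the analysis and a pass/fail result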

Wouldn’t it be cool to show all this useful data from Dive in an easy-to-use UI? Enter Dive-In, a new Docker Extension that integrates Dive into Docker Desktop!

Prerequisites

You’ll need to download Docker Desktop 4.8 or later before getting started. Make sure to choose the correct version for your OS and then install it.

Next, hop into Docker Desktop and confirm that the Docker Extensions feature is enabled: click the Settings gear, open the Extensions tab, and check the “Enable Docker Extensions” box.

Dive-In: A Docker Extension for Dive

Dive-In is a Docker extension that’s built on top of Dive so Docker users can explore their containers directly from Docker Desktop.

To get started, search for Dive-In in the Extensions Marketplace, then install it.

Alternatively, you can also run:

docker extension install prakhar1989/dive-in

When you first access Dive-In, it’ll take a few seconds to pull the Dive image from Docker Hub. Once it does, it should show a grid of all the images that you can analyze.

Note: Currently, Dive-In doesn’t show dangling images (images whose repo tag is “<none>”). This keeps the grid uncluttered and as actionable as possible.
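
If you want to inspect or clean up those dangling images yourself, the standard Docker CLI covers it:

docker images --filter dangling=true   # list images tagged <none>
docker image prune                     # remove dangling images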

To analyze an image, click the Analyze button, which calls Dive behind the scenes to gather the data. Depending on the size of the image, this can take some time. When it’s done, it’ll present the results.

At the top, Dive-In shows three key metrics for the image, which give a high-level view of how efficient the image is. The lower the efficiency score, the more room for improvement.

Below the key metrics, it shows a table of the largest files in the image, which can be a good starting point for reducing the size.

Finally, as you scroll down, it shows all the layers along with the size of each, which is extremely helpful for seeing which layer contributes the most to the final size.

And that’s it! 

Conclusion

The Dive-In Docker Extension helps you explore a Docker image and discover ways to shrink the size. It’s built on top of Dive, a popular open-source tool. Use Dive-In to gain insights into your container right from Docker Desktop!

Try it out for yourself and let me know what you think. Pull requests are also welcome!

About the Author

Prakhar Srivastav is a senior software engineer at Google where he works on Firebase to make app development easier for developers. When he’s not staring at Vim, he can be found playing guitar or exploring the outdoors.
Source: https://blog.docker.com/feed/

Configure, Manage, and Simplify Your Observability Data Pipelines with the Calyptia Core Docker Extension

This post was co-written with Eduardo Silva, Founder and CEO of Calyptia.

Applications produce a lot of observability data. And it can be a constant struggle to source, ingest, filter, and output that data to different systems. Managing these observability data pipelines is essential for being able to leverage your data and quickly gain actionable insights.

In cloud and containerized environments, Fluent Bit is a popular choice for marshaling data across cloud-native environments. A super fast, lightweight, and highly scalable logging and metrics processor and forwarder, it recently reached three billion downloads.

Calyptia Core, from the creators of Fluent Bit, further simplifies the data collection process with a powerful processing engine. Calyptia Core lets you create custom observability data pipelines and take control of your data.

And with the new Calyptia Core Docker Extension, you can build and manage observability pipelines within Docker Desktop. Let’s take a look at how it works!

What is Calyptia Core?

Calyptia Core plugs into your existing observability and security infrastructure to help you process large amounts of logs, metrics, security, and event data. With Calyptia Core, you can:

Connect common sources to the major destinations (e.g. Splunk, Datadog, Elasticsearch, etc.)

Process 100k events per second per replica with efficient routing.

Automatically collect data from Kubernetes and its various flavors (GKE, EKS, AKS, OpenShift, Tanzu, etc.).

Build reliability into your data pipeline at scale to debug data issues.

Why Calyptia Core?

Observability as a concept is common in the day-to-day life of engineers. But the different data standards, data schemas, storage backends, and dev stacks contribute to tool fatigue, resulting in lower developer productivity and increased total cost of ownership.  

Calyptia Core aims to simplify the process of building an observability pipeline. You can also augment the streaming observability data to add custom markers and discard or mask unneeded fields.  

Why run Calyptia Core as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Calyptia Core as a Docker Extension, you now have an easier, faster way to deploy Calyptia Core.

Once the extension is installed and started, you’ll have a running Calyptia Core. This lets you easily define and manage your observability pipelines and concentrate on what matters most — discovering actionable insights from the data.

Getting started with Calyptia Core

Calyptia Core is available in the Docker Extensions Marketplace. In the tutorial below, we’ll install Calyptia Core in Docker Desktop, build a data pipeline with mock data, and visualize it with Vivo.

Initial setup

Make sure you’ve installed the latest version of Docker Desktop (v4.8 or later). You’ll also need to enable Kubernetes under the Preferences tab; this starts a single-node Kubernetes cluster when Docker Desktop starts.
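
Once Kubernetes is enabled, you can confirm the cluster is up from a terminal (assuming kubectl is installed; Docker Desktop registers a docker-desktop context):

kubectl config use-context docker-desktop
kubectl get nodes   # should show a single node in Ready status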

Installing the Calyptia Core Docker Extension

Step 1

Open Docker Desktop and click “Add Extensions” under Extensions to go to the Docker Extension Marketplace.

Step 2

Install the Calyptia Core Docker Extension.

By clicking on the details, you can see what containers or binaries are pulled during installation.

Step 3

Once the extension is installed, you’re ready to deploy Calyptia Core! Select “Deploy Core” and you’ll be asked to login and authenticate the token for the Docker Extension.

In your browser, you’ll see a message from https://core.calyptia.com/ asking to confirm the device.

Step 4

After confirming, Calyptia Core will be deployed. You can now select “Manage Core” to build, configure, and manage your data pipelines.

You’ll be taken to core.calyptia.com, where you can build your custom observability data pipelines from a host of source and destination connectors.

Step 5

In this tutorial, let’s create a new pipeline and set docker-extension as the name.

Add “Mock Data” as a source and “Vivo” as the destination.

NOTE: Vivo is a real-time data viewer embedded in the Calyptia Core Docker Extension. You can make changes to your data pipelines, like adding new fields or connectors, and view the streaming observability data from Vivo in the Docker Extension.

Step 6

Hit “Save & Deploy” to create the pipeline in the Docker Desktop environment.

With the Vivo Live Data Viewer, you can view the data without leaving Docker Desktop.

Conclusion

The Calyptia Core Docker Extension makes it simple to manage and deploy observability pipelines without leaving the Docker Desktop developer environment. And that’s just the beginning. You can also use automated logging in Calyptia Core for automated data collection from your Kubernetes pods, and apply metadata-based processing rules before the data is delivered to its chosen destination.

Give the Calyptia Core Docker Extension a try, and let us know what you think at hello@calyptia.com.
Source: https://blog.docker.com/feed/

Implement User Authentication Into Your Web Application Using SuperTokens

This article was co-authored by Advait Ruia, CEO at SuperTokens.

Authentication directly affects the UX, dev experience, and security of any app. Authentication solutions ensure that sensitive user data is protected and only owners of this data have access to it. Although authentication is a vital part of web services, building it correctly can be time-consuming and expensive. For a personal project, a simple email/password solution can be built in a day, but the security and reliability requirements of production-ready applications add additional complexities. 

While there are a lot of resources available online, it takes time to go through all the content for every aspect of authentication (and even if you do, you may miss important information). And it takes even more effort to make sure your application is up to date with security best practices. If you’re going to move quickly while still meeting high standards, you need a solution that has the right level of abstraction, gives you maximum control, is secure, and is simple to use — just like if you build it from scratch, but without spending the time to learn, build, and maintain it. 

Meet SuperTokens

SuperTokens is an open-source authentication solution. It provides an end-to-end solution to easily implement the following features:

Support for popular login methods:

Email/password

Passwordless (OTP or magic link based)

Social login through OAuth 2.0

Role-based access control

Session management

User management

Option to self-host the SuperTokens core or use the managed service

SDKs are available for all popular languages and front-end frameworks, such as Node.js, React.js, React Native, Vanilla JS, and more.

The architecture of SuperTokens

SuperTokens’ architecture is optimized to add secure authentication for your users without compromising on user and developer experience. It consists of three building blocks:

Frontend SDK: The frontend SDK is responsible for rendering the login UI, managing authentication flows, and managing user sessions. There are SDKs for Vanilla JS (Vue / Angular / JS), ReactJS, and React Native.

Backend SDK: The backend SDK provides APIs for sign-up, sign-in, sign-out, session refreshing, etc. Your frontend will talk to these APIs, which are exposed on the same domain as your application’s APIs. Available SDKs: Node.js, Python, and Golang.

SuperTokens Core: The HTTP service for the core authentication logic and database operations. It’s responsible for interfacing with the database and is queried by the backend SDK for any operation that requires it.

Architecture diagram of a self-hosted core.

To learn more about the SuperTokens architecture, watch this video.

What’s unique about SuperTokens?

Here are some features that set SuperTokens apart from other user-authentication solutions:

SuperTokens is easy to set up and offers quick start guides specific to your use case.

It’s open source, which means you can self-host the SuperTokens core and have control over user data. When you self-host the SuperTokens core, there are no usage limits — it can be used for free, forever.

It has low vendor lock-in since users have complete control over how SuperTokens works and where their data is stored.

The frontend of SuperTokens is highly customizable. The authentication UI and authentication flows can be adapted to your use case, and the SuperTokens frontend SDK also offers helper functions for users who want to build their own custom UI.

SuperTokens integrates natively into your frontend and API layer. This means you have complete control over authentication flows. Through overrides, you can add analytics, add custom logic, or completely change authentication flows to fit your use case.
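
As a rough sketch of what an override looks like in the Node.js SDK (using the EmailPassword recipe; trackSignIn is a hypothetical analytics hook, not part of SuperTokens):

EmailPassword.init({
  override: {
    apis: (originalImplementation) => ({
      ...originalImplementation,
      // wrap the default sign-in API to add custom logic
      signInPOST: async (input) => {
        const response = await originalImplementation.signInPOST(input);
        if (response.status === "OK") {
          trackSignIn(response.user.id); // hypothetical analytics call
        }
        return response;
      },
    }),
  },
});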

Why run SuperTokens in Docker Desktop?

Docker Extensions help you build and integrate software applications into your daily workflows. With the SuperTokens extension, you get a simple way to quickly deploy SuperTokens.

Once the extension is installed and started, you’ll have a running SuperTokens core application. The extension allows you to connect to your preferred database, set environment variables, and get your core connected to your backend.

The SuperTokens extension speeds up the process of getting started with SuperTokens and, over time, we hope to make it the best place to manage the SuperTokens core.

Getting started with SuperTokens 

Step 1: Pick your authentication method

Your first step is picking the authentication strategy, or recipe, you want to implement in your applications:

Email Password

Social Login & Enterprise SSO

Passwordless (with SMS or email)

You can find user guides for all supported recipes here.

Step 2: Integrate with the SuperTokens Frontend and Backend SDKs.

After picking your recipe, you can start integrating the SuperTokens frontend and backend SDKs into your tech stack.

For example, if you want both email password and social authentication methods in your application, you can use this guide to initialize SuperTokens in your frontend and backend.

Step 3: Connect to the SuperTokens Core

The final step is setting up the SuperTokens core. SuperTokens offers a managed service to get started quickly, but today we’re going to take a look at how you can self-host and manage the SuperTokens core using the SuperTokens Docker extension.

Running the SuperTokens core from Docker Desktop

Prerequisites: Docker Desktop 4.8 or later

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled: go to Settings > Extensions and check the “Enable Docker Extensions” box.

Setting up the extension

Step 1: Clone the SuperTokens extension

Run this command to clone the extension:

git clone git@github.com:supertokens/supertokens-docker-extension.git

Step 2: Follow the instructions in the README.md to set up the SuperTokens Extension

Build the extension:

make build-extension

Add the extension to Docker Desktop:

docker extension install supertokens/supertokens-docker-extension:latest

Once the extension is added to Docker Desktop, you can run the SuperTokens core.

Step 3: Select which database you want to use to persist user data.

SuperTokens currently supports MySQL and PostgreSQL. Choose which Docker image to load.

Step 4: Add your database connection URI

You’ll need to create a database SuperTokens can write to. Follow this guide to see how to do this. If you don’t provide a connection URI, SuperTokens will run with an in-memory database.

In addition to the connection URI, you can add environment variables to the Docker container to customize the core.
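
For example, a PostgreSQL connection URI typically takes this form (credentials and database name are placeholders):

postgresql://supertokens_user:secret@localhost:5432/supertokens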

Step 5: Run the Docker container

Select “Start docker container” to start the SuperTokens core on port 3567. You can then query “http://localhost:3567” to check that the core is running successfully.
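
For example (SuperTokens exposes a /hello health-check route):

curl http://localhost:3567/hello
# a running core responds with "Hello"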

Step 6: Update the connection URI in your backend to “http://localhost:3567”
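
A minimal sketch of that change (the appInfo values below are placeholders for your own app’s settings):

import supertokens from "supertokens-node";

supertokens.init({
  framework: "express",
  supertokens: {
    connectionURI: "http://localhost:3567", // the locally running core
  },
  appInfo: {
    appName: "my-app",                      // placeholder
    apiDomain: "http://localhost:3000",     // placeholder
    websiteDomain: "http://localhost:3000", // placeholder
  },
  recipeList: [
    // your chosen recipes, e.g. EmailPassword.init(), Session.init()
  ],
});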

(Note: This example code snippet is for Node.js, but if you’re using Python or Golang, a similar change should be made. You can find the guide on how to do that here.)

Now that you’ve set up your core and connected it to your backend, your application should be up and ready to authenticate users!

Try SuperTokens for yourself!

To learn more about SuperTokens, you can visit our website or join our Discord community.

We’re committed to making SuperTokens a more powerful user-authentication solution for our developers and users — and we need help! We’re actively looking for contributors to the SuperTokens Docker extension project. The current code is simple and easy to get started with, and we’re always around to give potential contributors a hand.

If you like SuperTokens, you can help us spread the word by adding a star to the repo.
Source: https://blog.docker.com/feed/