DockerCon Workshops: What to expect

DockerCon 2023 will be held October 4-5 in Los Angeles, and we are working hard to make it an incredible experience for everyone. The program is now online so you can plan your experience by day, time, and theme, including AI and Machine Learning, Web Application / Web Development, Building and Deploying Applications, Secure Software Delivery, and Open Source. This year we’re offering talks, workshops, and panel discussions, plus the usual vibrant DIY hallway track. We can’t wait to see you there — whether you’re joining virtually or in person. (If this will be your first DockerCon experience, read “4 Reasons I’m Excited to Attend DockerCon 2023” to see why I can’t wait to go to LA next month.)

On October 3, we will have several DockerCon hands-on workshops, organized by fantastic presenters, covering a variety of topics. If you’re joining in person, the workshops are included in the price of your admission. Just don’t forget to register for the workshop you’d like to attend! 

If you’ll be attending DockerCon virtually, the Getting Started with Docker workshop is free, and the other workshops cost US$150. This is a fantastic opportunity to use any learning and development allocations you might have!

What are the workshops and what will they cover? I’m glad you asked! Let’s dive in.

Getting Started with Docker

Are you new to Docker? Are you overwhelmed with everything there is to learn? Are you unsure why you should learn about containers or what their benefits are? Or do you just want to help your team be more productive? If so, this workshop is for you!

This workshop will walk through the basics of containers and images, including answering the question: Why should I even care about containers? You’ll then learn how to run containers, create your own images, and set up your own development environments to enable the success of your team. The workshop will close out by introducing Docker Compose, which makes it even easier to share your dev environment — devs will only need to run git clone and docker compose up and then focus on their code.
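For illustration, a minimal compose.yaml for such a shared dev environment might look like the following sketch (the service name, port, and paths are placeholders, not the workshop's actual files):

services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - ./:/app

With a file like this checked into the repository, docker compose up is all a teammate needs to run after cloning.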

After this session, you’ll have the basic knowledge to help your team be more productive with containerized dev environments. It’ll also be a fantastic primer to help you get the most out of the rest of DockerCon.

Docker for Machine Learning / AI / Data Science

The AI/ML space has exploded with activity and excitement over the past year. There are many great tools available, but keeping up with everything can be hard. If you want to get caught up and started with AI/ML, this is the workshop for you.

This workshop is being provided through a close collaboration between OctoML and Docker. You’ll start with a crash course on the latest developments in generative LLMs and image generation models, after which you’ll learn about fine-tuning your own model. You’ll then take that knowledge and create a multi-modal containerized application using Python and Docker. The workshop wraps up with a fireside chat and Q&A with the presenters and speakers, allowing you to dive in deep!

After this workshop, you’ll have a better understanding of the recent advancements in the AI/ML space and have successfully created your own AI/ML-supported application.

Secure Development with Docker

Modern applications are composed of many libraries and components from various sources being built and deployed on various systems, making it difficult for developers, platform teams, and security professionals to know what software is running and whether it is secure. Issues may arise from your own code, its dependencies, base images, and many other sources — and new vulnerabilities are being discovered all the time! If you want to secure your software supply chain, this is the workshop for you.

In this workshop, you’ll start off by learning about and remediating several common attacks against your software supply chain. From there, you’ll dive deeply into securing the software supply chain, taking a comprehensive view of the problem and possible solutions. With this knowledge, you’ll learn how Docker Scout helps you understand what’s in your images, how those images are constructed, and what’s running where, and how it provides actionable feedback early in the process so concerns are eliminated before they become problems.

After this session, you’ll know how to take these learnings back to your organization so your team:

Understands and can verify how its applications are built

Quickly and easily identifies problems in its software supply chain and remediates them

Uses policies to encourage best practices across your organization without blocking fixes from getting to production

Provides visibility into the security stance of its software to others within your organization

Docker for Web Development

Are you a web developer who isn’t quite sure why you should use Docker in your development environment? Or are you just not quite sure how to get started? If so, this workshop is specifically for you.

The Docker for Web Development workshop is being presented by Timo Stark, a Docker Captain and Principal Engineer at NGINX. This hands-on workshop starts with an overview of Docker Desktop, ensuring everyone understands the basics of containers and images. With that foundational knowledge, you’ll spend the remainder of the workshop building an application, containerizing it along the way, using a combination of NodeJS and PHP backends and a React frontend. You’ll learn how to connect multiple services together and build a development environment that will require no installation and configuration (beyond Docker Desktop), helping speed up productivity and ensuring reliable environments across your development team.

After this session, you’ll have a strong understanding of how Docker can be used to speed up your web development stack and how you can help enable your team to be more productive and have more consistent environments.

Register now

DockerCon is coming up quickly! We’d love to see you in person, but you’re welcome to join us virtually as well. Visit the DockerCon site to register for the conference, see the program, and register for workshops now.

Learn more

Register for DockerCon

Register for DockerCon workshops

See the DockerCon program

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Desktop 4.23: New Configuration Integrity Check, Plus Updates to Docker Init, Compose, Watch, Quick Search, and More

Docker Desktop 4.23 is now available and includes numerous enhancements, including ASP.NET support in Docker Init, a Configuration Integrity Check that alerts on any configuration changes requiring attention, and cross-domain identity management. This release also improves Quick Search, allowing you to search across containers, apps, Docker Hub, Docs, and volumes, and to perform quick actions (start/stop/delete). VirtioFS is now the default on macOS, providing performance gains for Mac users. With the Docker Desktop 4.23 release, Mac users will also see increased network performance when using traditional network connections.

In this post, we dive into what’s new and updated in the latest release of Docker Desktop.

ASP.NET with Docker Init

We are excited to announce added support for ASP.NET. Whether you’re new to Docker or a seasoned pro, Docker Init now streamlines Dockerization for your ASP.NET projects. With a simple docker init command in your project folder and the latest Docker Desktop version, watch as Docker Init generates tailored Dockerfiles, Compose files, and .dockerignore files.

Figure 1: Docker Init showing available languages, now including ASP.NET.

Specific to ASP.NET, we also improved support and documentation for multi-arch builds. This advancement will help users manage sharing their images across different CPU architectures and streamline deployments on cloud providers such as Azure, AWS, and GCP. Create images that fit various architectures, boosting flexibility and efficiency in cloud deployments.
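As a rough sketch, building and pushing a multi-architecture image with Buildx might look like this (the image name is a placeholder):

docker buildx build --platform linux/amd64,linux/arm64 -t <registry>/<app>:latest --push .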

Get started by ensuring you have the latest Docker Desktop version. Then, execute docker init in your project directory through the command line. Let Docker Init handle the heavy lifting, allowing you to concentrate on your core task — crafting outstanding applications!

Configuration Integrity Check

Ensure Docker Desktop runs smoothly with our new Configuration Integrity Check. Developers often run multiple local applications and tools, and these sometimes make changes to Docker Desktop’s configuration. This update automatically detects such changes and alerts you to them, so a single click re-establishes the expected Docker Desktop configuration and ensures uninterrupted development.

Alerts to any configuration issues will be displayed in the whale menu.

Cross-domain identity management 

User access management for Docker just got more powerful. SCIM helps auto-provision or de-provision users, and group role mapping is now supported so you can organize your teams and their access accordingly: 

You can assign roles to members in your organization in the IdP. To set up a role, use the optional user-level attributes for the person to whom you want to assign the role.

You can also set an organization and team to override the default provisioning values set by the SSO connection.

The supported optional user-level attributes are described in the Docker SCIM documentation.

Figure 2: The logic used for defining user access control.

Improvements to Quick Search 

Empowering developers with seamless access to essential resources whenever they’re needed, our revamped Quick Search feature has received significant upgrades. Now, users can effortlessly locate:

Containers and Compose apps: Easily pinpoint any container or Compose app residing on your local system. Additionally, gain quick access to environment variables and perform essential actions such as starting, stopping, or deleting them.

Docker Hub images: Whether it’s public Docker Hub images, local ones, or those from remote repositories, Quick Search will provide fast and relevant results.

Extensions: Discover more about specific extensions and streamline installation with a single click.

Volumes: Effortlessly navigate through your volumes and gain insights into the associated containers.

Documentation: Instantly access invaluable assistance from Docker’s official documentation, all directly from within Docker Desktop.

Enhanced Quick Search is your ultimate tool for resource discovery and management, offering unmatched convenience for developers.

Figure 3: Search results with updated Quick Search within Docker Desktop 4.23.

Standardizing higher performance file sharing with VirtioFS for Mac users

Docker Desktop 4.23 now defaults to utilizing VirtioFS on macOS 12.5+ as the standard to deliver substantial performance gains when sharing files with containers through docker run -v or bind mounts in Compose YAML. VirtioFS minimizes file transfer overhead by allowing containers to access files without volume mounts or network shares.
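Any existing bind mount benefits automatically. For example, a command like the following shares the current directory with a container and, on macOS 12.5+, now goes through VirtioFS by default:

docker run --rm -v "$(pwd)":/data alpine ls /data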

Figure 4: Setting VirtioFS to default in the Docker Desktop General settings.

Skipping network file transfer protocols also leads to faster file transfers. We measured performance improvements that decreased file transfer time from 7:13.21 to 1:04.40 — an 85.15% reduction. Note that the measured improvement depends on the size of the file, what other apps are running, and the hardware of the computer.

Figure 5: Tables showing the transfer speeds of a 10GB file over three runs, before and after using VirtioFS.

Docker Desktop network speed improvements for Mac users

Docker Desktop 4.23 comes with improved networking performance for Mac users. Now, when a container requires a traditional network connection, users will experience increased network performance in these ways:

Accessing exposed ports ~10x faster

Transmission Control Protocol (TCP) throughput ~1.5x–2x faster

No user action is required to experience these benefits — all Mac users who update to 4.23 will automatically see faster networking!

Conclusion

Upgrade now to explore what’s new in the 4.23 release of Docker Desktop. Do you have feedback? Leave feedback on our public GitHub roadmap, and let us know what else you’d like to see in upcoming releases.

Learn more

Read the Docker Desktop Release Notes.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Hub Registry IPv6 Support Now Generally Available

As the world becomes increasingly interconnected, it’s essential for the internet to support the growing number of devices and users. That’s where IPv6 comes in.

What is IPv6, and what does it have to do with Docker? 

IPv6 is the latest version of the Internet Protocol, the system that enables devices to communicate with each other over the internet. It’s designed to address the limitations of the previous version, IPv4, which is running out of available addresses. 

As Docker supports more customers, we need to support more use cases, like IPv6-only networks. Today, we are pleased to announce the general availability of IPv6 support for the Docker Hub Registry, Docker Docs, and Docker Scout endpoints.

Why are we adopting IPv6? 

We have heard from the community that you need IPv6 support for Docker software as a service (SaaS) endpoints to work efficiently and effectively. In the past, IPv6-only networks required extra tooling to interact with some of Docker’s SaaS resources. This is no longer the case. Now you can get rid of your NAT64 gateway and just docker pull.

What does this mean for my workflows? 

This is my favorite part… nothing! 🥳 During our beta testing of IPv6, we introduced new endpoints for accessing the Docker Hub Registry. Those were only for the beta testing and are no longer needed. Now, whether you are on an IPv6-only network, a dual-stack network, or an IPv4-only network, these commands will work.

To begin, log in to the Docker Hub:

docker login

Then pull whatever image you need:

docker pull alpine

How will Docker Hub download rate limits work?

If you use authentication when pulling container images from the Docker Hub Registry, nothing changes. Our servers will properly attach rate limit data to the authenticated user ID in the HTTP request. 

If you do not authenticate your docker pull commands by running docker login first, then we’ll need to rate limit the request based on the IP address. For IPv4 addresses, this is done on a per-IP basis. 

For IPv6 addresses, this becomes a harder problem because IPv6 has a much larger IP address range available to customers. Therefore, to accommodate the larger IP address range, we will rate limit against the first 64 bits in the IPv6 address. You can see an example of what our servers use as the source by looking at the docker-ratelimit-source header returned in the following HTTP response:

$ curl https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest -I -XGET -6
HTTP/1.1 401 Unauthorized
content-type: application/json
docker-distribution-api-version: registry/2.0
www-authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="repository:ratelimitpreview/test:pull"
date: Wed, 28 Jun 2023 01:06:44 GMT
content-length: 164
strict-transport-security: max-age=31536000
docker-ratelimit-source: 2601:245:c100:a71::

How can I verify that IPv6 is being used? 

While browsing the Docker Docs or pulling a Docker container image, you can use network monitoring software like tcpdump to monitor the traffic.

Let’s say that you want to verify the network you use for pulling a container image from the Docker Hub Registry.

First, in your favorite terminal, start a tcpdump capture. This command will produce log records of all of the network connections between your local machine and the Docker Hub servers:

sudo tcpdump host registry-1.docker.io -vv

In another terminal window, pull a container image from Docker Hub:

docker pull registry-1.docker.io/library/alpine:latest

You should see output that looks like this:

🚀 sudo tcpdump host registry-1.docker.io -vv
tcpdump: data link type PKTAP
tcpdump: listening on pktap, link-type PKTAP (Apple DLT_PKTAP), snapshot length 524288 bytes
15:42:16.740577 IP6 (flowlabel 0xa0800, hlim 64, next-header TCP (6) payload length: 44) 2601:245:c100:a71:8454:86d0:52f1:d46f.62630 > 2600:1f18:2148:bc02:cfd8:db68:ea1f:277c.https: Flags [S], cksum 0xb80b (correct), seq 2539670618, win 65535, options [mss 1440,nop,wscale 6,nop,nop,TS val 4154959809 ecr 0,sackOK,eol], length 0
15:42:16.774831 IP6 (class 0x20, hlim 229, next-header TCP (6) payload length: 40) 2600:1f18:2148:bc02:cfd8:db68:ea1f:277c.https > 2601:245:c100:a71:8454:86d0:52f1:d46f.62630: Flags [S.], cksum 0x6b60 (correct), seq 4264170311, ack 2539670619, win 26847, options [mss 1440,sackOK,TS val 2058512533 ecr 4154959809,nop,wscale 12], length 0

When you look at the second field of each line, it will say IP6, denoting that IPv6 is being used. Additionally, the IP addresses you see in the output are in IPv6 format instead of IPv4 format. The quick way to tell: if an IP address has colons (:) in it, it is IPv6; if it only has periods (.), it is IPv4. 🎉

The future

We are excited to be able to provide full dual-stack network capabilities to Docker Hub Registry, Docker Docs, and Docker Scout endpoints. We believe that dual-stack capabilities offer an important performance and reliability benefit to our customers. We intend to provide dual-stack network support for new endpoints as part of our commitment to delivering the best possible experience for our users. 

If you have the ability to control your local network, turn on IPv6 and see Docker Hub Registry, Docker Docs, and Docker Scout endpoints continue to work. If you have access to an IPv6-only network, try docker pull or take a look at our docs pages — they will all continue to work as they did before. 

We look forward to hearing feedback from our community through our hub-feedback GitHub issue tracker.

Learn more

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Accelerating Machine Learning with TensorFlow.js: Using Pretrained Models and Docker

In the rapidly evolving era of machine learning (ML) and artificial intelligence (AI), TensorFlow has emerged as a leading framework for developing and implementing sophisticated models. With the introduction of TensorFlow.js, TensorFlow’s capability is boosted for JavaScript developers. 

TensorFlow.js is a JavaScript machine learning toolkit that facilitates the creation of ML models and their immediate use in browsers or Node.js apps. TensorFlow.js has expanded the capability of TensorFlow into the realm of actual web development. A remarkable aspect of TensorFlow.js is its ability to utilize pretrained models, which opens up a wide range of possibilities for developers. 

In this article, we will explore the concept of pretrained models in TensorFlow.js and Docker and delve into the potential applications and benefits. 

Understanding pretrained models

Pretrained models are a powerful tool for developers because they allow you to use ML models without having to train them yourself. This approach can save a lot of time and effort, and it can also be more accurate than training your own model from scratch.

A pretrained model is an ML model that has already been trained on a large volume of data. Because these models have learned complex patterns and representations, they are highly effective and precise at the specific tasks they were trained for. By using them, developers avoid having to train a model from scratch, saving substantial time and computing resources.

Types of pretrained models available

There is a wide range of potential applications for pretrained models in TensorFlow.js. 

For example, developers could use them to:

Build image classification models that can identify objects in images.

Build natural language processing (NLP) models that can understand and respond to text.

Build speech recognition models that can transcribe audio into text.

Build recommendation systems that can suggest products or services to users.

TensorFlow.js and pretrained models

Developers can easily include pretrained models in their web applications using TensorFlow.js. With TensorFlow.js, you can benefit from robust machine learning algorithms without needing to be an expert in model deployment or training. The library offers a wide variety of pretrained models, including those for audio analysis, picture identification, natural language processing, and more (Figure 1).

Figure 1: Available pretrained model types for TensorFlow.

How does it work?

The module allows for the direct loading of models in TensorFlow SavedModel or Keras Model formats. Once the model has been loaded, developers can use its features by invoking certain methods made available by the model API. Figure 2 shows the steps involved for training, distribution, and deployment.

Figure 2: TensorFlow.js model API for a pretrained image classification model.
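As a hedged illustration of the loading step, TensorFlow.js can load a converted model directly from a URL; the URL and input shape below are placeholders, not a specific published model:

import * as tf from '@tensorflow/tfjs';

// Load a model converted to the TF.js graph-model format (hypothetical URL).
const model = await tf.loadGraphModel('https://example.com/model/model.json');

// Run inference on an input tensor shaped for the model.
const prediction = model.predict(tf.zeros([1, 224, 224, 3]));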

Training

The training section shows the steps involved in training a machine learning model. The first step is to collect data. This data is then preprocessed, which means that it is cleaned and prepared for training. The data is then fed to a machine learning algorithm, which trains the model.

Preprocess data: This is the process of cleaning and preparing data for training. This includes tasks such as removing noise, correcting errors, and normalizing the data.

TF Hub: TF Hub is a repository of pretrained ML models. These models can be used to speed up the training process or to improve the accuracy of a model.

tf.keras: tf.keras is a high-level API for building and training machine learning models. It is built on top of TensorFlow, which is a low-level library for machine learning.

Estimator: An estimator is a type of model in TensorFlow. It is a simplified way to build and train ML models.

Distribution

Distribution is the process of making a machine learning model available to users. This can be done by packaging the model in a format that can be easily shared, or by deploying the model to a production environment.

The distribution section shows the steps involved in distributing a machine learning model. The first step is to package the model. This means that the model is converted into a format that can be easily shared. The model is then distributed to users, who can then use it to make predictions.

Deployment

The deployment section shows the steps involved in deploying a machine learning model. The first step is to choose a framework. A framework is a set of tools that makes it easier to build and deploy machine learning models. The model is then converted into a format that can be used by the framework. The model is then deployed to a production environment, where it can be used to make predictions.

Benefits of using pretrained models

There are several pretrained models available in TensorFlow.js that can be utilized immediately in any project and offer the following notable advantages:

Savings in time and resources: Building an ML model from scratch might take a lot of time and resources. Developers can skip this phase and use a model that has already learned from lengthy training by using pretrained models. The time and resources needed to implement machine learning solutions are significantly decreased as a result.

State-of-the-art performance: Pretrained models are typically trained on huge datasets and refined by specialists, producing models that give state-of-the-art performance across a range of applications. Developers can benefit from these models’ high accuracy and reliability by incorporating them into TensorFlow.js, even if they lack a deep understanding of machine learning.

Accessibility: TensorFlow.js makes pretrained models powerful for web developers, allowing them to quickly and easily integrate cutting-edge machine learning capabilities into their projects. This accessibility creates new opportunities for developing cutting-edge web-based solutions that make use of machine learning’s capabilities.

Transfer learning: Pretrained models can also serve as the foundation for your own models. Using a smaller, domain-specific dataset, developers can further train a pretrained model. Transfer learning enables models to swiftly adapt to particular use cases, making this method very helpful when data is scarce.
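As a minimal transfer-learning sketch with the Layers API, assuming a hypothetical base model URL and xs/ys as your own domain-specific training tensors:

import * as tf from '@tensorflow/tfjs';

// Hypothetical Keras-format base model; xs and ys are your own tensors.
const base = await tf.loadLayersModel('https://example.com/base/model.json');

// Freeze every layer except the final classification head.
base.layers.slice(0, -1).forEach((layer) => layer.trainable = false);

// Retrain only the head on the smaller, domain-specific dataset.
base.compile({optimizer: 'adam', loss: 'categoricalCrossentropy'});
await base.fit(xs, ys, {epochs: 5});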

Why is containerizing TensorFlow.js important?

Containerizing TensorFlow.js brings several important benefits to the development and deployment process of machine learning applications. Here are five key reasons why containerizing TensorFlow.js is important:

Docker provides a consistent and portable environment for running applications. By containerizing TensorFlow.js, you can package the application, its dependencies, and runtime environment into a self-contained unit. This approach allows you to deploy the containerized TensorFlow.js application across different environments, such as development machines, staging servers, and production clusters, with minimal configuration or compatibility issues.

Docker simplifies the management of dependencies for TensorFlow.js. By encapsulating all the required libraries, packages, and configurations within the container, you can avoid conflicts with other system dependencies and ensure that the application has access to the specific versions of libraries it needs. This containerization eliminates the need for manual installation and configuration of dependencies on different systems, making the deployment process more streamlined and reliable.

Docker ensures the reproducibility of your TensorFlow.js application’s environment. By defining the exact dependencies, libraries, and configurations within the container, you can guarantee that the application will run consistently across different deployments.

Docker enables seamless scalability of the TensorFlow.js application. With containers, you can easily replicate and distribute instances of the application across multiple nodes or servers, allowing you to handle high volumes of user requests.

Docker provides isolation between the application and the host system and between different containers running on the same host. This isolation ensures that the application’s dependencies and runtime environment do not interfere with the host system or other applications. It also allows for easy management of dependencies and versioning, preventing conflicts and ensuring a clean and isolated environment in which the application can operate.

Building a fully functional ML face-detection demo app

By combining the power of TensorFlow.js and Docker, developers can create a fully functional machine learning (ML) face-detection demo app. Once the app is deployed, the TensorFlow.js model can recognize faces in real time using the camera. With a minor code change, developers can also build an app that allows users to upload images or videos for detection.

In this tutorial, you’ll learn how to build a fully functional face-detection demo app using TensorFlow.js and Docker. Figure 3 shows the file system architecture for this setup. Let’s get started.

Prerequisite

The following key components are essential to complete this walkthrough:

Docker Desktop 

Figure 3: File system architecture for Docker Compose development setup.

Deploying an ML face-detection app is a simple process involving the following steps:

Clone the repository. 

Set up the required configuration files. 

Initialize TensorFlow.js.

Train and run the model. 

Bring up the face-detection app. 

We’ll explain each of these steps below.

Quick demo

If you’re in a hurry, you can bring up the complete app by running the following command:

docker run -p 1234:1234 harshmanvar/face-detection-tensorjs:slim-v1

Then open http://localhost:1234 in your browser.

Figure 4: URL opened in a browser.

Getting started

Cloning the project

To get started, clone the repository by running the following command:

git clone https://github.com/dockersamples/face-detection-tensorjs

We are utilizing the MediaPipe Face Detector demo for this demonstration. You first create a detector by choosing one of the models from SupportedModels, including MediaPipeFaceDetector.

For example:

const model = faceDetection.SupportedModels.MediaPipeFaceDetector;
const detectorConfig = {
  runtime: 'mediapipe', // or 'tfjs'
};
const detector = await faceDetection.createDetector(model, detectorConfig);

Then you can use the detector to detect faces:

const faces = await detector.estimateFaces(image);

File: index.html:

<!DOCTYPE html>
<html>

<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1.0, user-scalable=no">
<style>
body {
margin: 0;
}

#stats {
position: relative;
width: 100%;
height: 80px;
}

#main {
position: relative;
margin: 0;
}

#canvas-wrapper {
position: relative;
}
</style>
</head>

<body>
<div id="stats"></div>
<div id="main">
<div class="container">
<div class="canvas-wrapper">
<canvas id="output"></canvas>
<video id="video" playsinline style="
-webkit-transform: scaleX(-1);
transform: scaleX(-1);
visibility: hidden;
width: auto;
height: auto;
">
</video>
</div>
</div>
</div>
</body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/dat-gui/0.7.6/dat.gui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/stats.js/r16/Stats.min.js"></script>
<script src="src/index.js"></script>

</html>

The web application’s principal entry point is the index.html file. It includes the video element needed to display the real-time video stream from the user’s webcam and the basic HTML page structure. The relevant JavaScript scripts for the facial detection capabilities are also imported.

File: src/index.js:

import '@tensorflow/tfjs-backend-webgl';
import '@tensorflow/tfjs-backend-webgpu';

import * as tfjsWasm from '@tensorflow/tfjs-backend-wasm';

tfjsWasm.setWasmPaths(
`https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@${
tfjsWasm.version_wasm}/dist/`);

import * as faceDetection from '@tensorflow-models/face-detection';

import {Camera} from './camera';
import {setupDatGui} from './option_panel';
import {STATE, createDetector} from './shared/params';
import {setupStats} from './shared/stats_panel';
import {setBackendAndEnvFlags} from './shared/util';

let detector, camera, stats;
let startInferenceTime, numInferences = 0;
let inferenceTimeSum = 0, lastPanelUpdate = 0;
let rafId;

async function checkGuiUpdate() {
if (STATE.isTargetFPSChanged || STATE.isSizeOptionChanged) {
camera = await Camera.setupCamera(STATE.camera);
STATE.isTargetFPSChanged = false;
STATE.isSizeOptionChanged = false;
}

if (STATE.isModelChanged || STATE.isFlagChanged || STATE.isBackendChanged) {
STATE.isModelChanged = true;

window.cancelAnimationFrame(rafId);

if (detector != null) {
detector.dispose();
}

if (STATE.isFlagChanged || STATE.isBackendChanged) {
await setBackendAndEnvFlags(STATE.flags, STATE.backend);
}

try {
detector = await createDetector(STATE.model);
} catch (error) {
detector = null;
alert(error);
}

STATE.isFlagChanged = false;
STATE.isBackendChanged = false;
STATE.isModelChanged = false;
}
}

function beginEstimateFaceStats() {
startInferenceTime = (performance || Date).now();
}

function endEstimateFaceStats() {
const endInferenceTime = (performance || Date).now();
inferenceTimeSum += endInferenceTime - startInferenceTime;
++numInferences;

const panelUpdateMilliseconds = 1000;
if (endInferenceTime - lastPanelUpdate >= panelUpdateMilliseconds) {
const averageInferenceTime = inferenceTimeSum / numInferences;
inferenceTimeSum = 0;
numInferences = 0;
stats.customFpsPanel.update(
1000.0 / averageInferenceTime, 120);
lastPanelUpdate = endInferenceTime;
}
}

async function renderResult() {
if (camera.video.readyState < 2) {
await new Promise((resolve) => {
camera.video.onloadeddata = () => {
resolve(video);
};
});
}

let faces = null;

if (detector != null) {

beginEstimateFaceStats();

try {
faces =
await detector.estimateFaces(camera.video, {flipHorizontal: false});
} catch (error) {
detector.dispose();
detector = null;
alert(error);
}

endEstimateFaceStats();
}

camera.drawCtx();
if (faces && faces.length > 0 && !STATE.isModelChanged) {
camera.drawResults(
faces, STATE.modelConfig.boundingBox, STATE.modelConfig.keypoints);
}
}

async function renderPrediction() {
await checkGuiUpdate();

if (!STATE.isModelChanged) {
await renderResult();
}

rafId = requestAnimationFrame(renderPrediction);
};

async function app() {
const urlParams = new URLSearchParams(window.location.search);

await setupDatGui(urlParams);

stats = setupStats();

camera = await Camera.setupCamera(STATE.camera);

await setBackendAndEnvFlags(STATE.flags, STATE.backend);

detector = await createDetector();

renderPrediction();
};

app();

This JavaScript file implements the facial detection logic. It loads TensorFlow.js, enabling real-time face detection on the video stream using the pretrained face detection model. The file manages access to the camera, processes the video frames, and draws bounding boxes around the faces recognized in the video feed.

File: src/camera.js:

import {VIDEO_SIZE} from './shared/params';
import {drawResults, isMobile} from './shared/util';

export class Camera {
constructor() {
this.video = document.getElementById('video');
this.canvas = document.getElementById('output');
this.ctx = this.canvas.getContext('2d');
}

static async setupCamera(cameraParam) {
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
throw new Error(
'Browser API navigator.mediaDevices.getUserMedia not available');
}

const {targetFPS, sizeOption} = cameraParam;
const $size = VIDEO_SIZE[sizeOption];
const videoConfig = {
'audio': false,
'video': {
facingMode: 'user',
width: isMobile() ? VIDEO_SIZE['360 X 270'].width : $size.width,
height: isMobile() ? VIDEO_SIZE['360 X 270'].height : $size.height,
frameRate: {
ideal: targetFPS,
},
},
};

const stream = await navigator.mediaDevices.getUserMedia(videoConfig);

const camera = new Camera();
camera.video.srcObject = stream;

await new Promise((resolve) => {
camera.video.onloadedmetadata = () => {
resolve(video);
};
});

camera.video.play();

const videoWidth = camera.video.videoWidth;
const videoHeight = camera.video.videoHeight;
// Must set below two lines, otherwise video element doesn't show.
camera.video.width = videoWidth;
camera.video.height = videoHeight;

camera.canvas.width = videoWidth;
camera.canvas.height = videoHeight;
const canvasContainer = document.querySelector('.canvas-wrapper');
canvasContainer.style = `width: ${videoWidth}px; height: ${videoHeight}px`;
camera.ctx.translate(camera.video.videoWidth, 0);
camera.ctx.scale(-1, 1);

return camera;
}

drawCtx() {
this.ctx.drawImage(
this.video, 0, 0, this.video.videoWidth, this.video.videoHeight);
}

drawResults(faces, boundingBox, keypoints) {
drawResults(this.ctx, faces, boundingBox, keypoints);
}
}

The configuration for the camera’s width, audio, and other setup-related items is managed in camera.js.

File: .babelrc:

The .babelrc file is used to configure Babel, a JavaScript compiler, specifying presets and plugins that define the transformations to be applied during code transpilation.

File: src/shared:

shared % tree
.
├── option_panel.js
├── params.js
├── stats_panel.js
└── util.js

1 directory, 4 files

The files in the src/shared folder provide the parameters, utility functions, and UI panels needed to run the demo: accessing the camera, performing checks, and managing parameter values.

Defining services using a Compose file

Here’s how our services appear within a Docker Compose file:

services:
  tensorjs:
    # build: .
    image: harshmanvar/face-detection-tensorjs:v2
    ports:
      - 1234:1234
    volumes:
      - ./:/app
      - /app/node_modules
    command: watch

Your sample application has the following parts:

The tensorjs service is based on the harshmanvar/face-detection-tensorjs:v2 image.

This image contains the necessary dependencies and code to run a face detection system using TensorFlow.js.

It exposes port 1234 so you can reach the TensorFlow.js application in the browser.

The volume ./:/app sets up a volume mount, linking the current directory (represented by ./) on the host machine to the /app directory within the container. This allows you to share files and code between your host machine and the container.

The watch command specifies the command to run within the container. In this case, it runs the project’s watch script, which rebuilds the application whenever the source code changes.

Building the image

It’s time to build the development image and install the dependencies to launch the face-detection model.

docker build -t tensor-development:v1 .

Running the container

docker run -p 1234:1234 -v $(pwd):/app -v /app/node_modules tensor-development:v1 watch

Bringing up the container services

You can launch the application by running the following command:

docker compose up -d

Then, use the docker compose ps command to confirm that your stack is running correctly. Your terminal will produce the following output:

docker compose ps
NAME         IMAGE         COMMAND        SERVICE    STATUS          PORTS
tensorflow   tensorjs:v2   "yarn watch"   tensorjs   Up 48 seconds   0.0.0.0:1234->1234/tcp

Viewing the containers via Docker Dashboard

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application (Figure 5) container.

Figure 5: Viewing containers in the Docker Dashboard.

Conclusion

Well done! You have learned how to use a pretrained machine learning model with JavaScript in a web application, all thanks to TensorFlow.js. In this article, we demonstrated how Docker Compose lets you quickly create and deploy a fully functional ML face-detection demo app with just one YAML file.

With this newfound expertise, you can now take this guide as a foundation to build even more sophisticated applications with just a few additional steps. The possibilities are endless, and your ML journey has just begun!

Learn more

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Scout Demo and Q&A

If you missed our webinar “Docker Scout: Live Demo, Insights, and Q&A” — or if you want to watch it again — it’s available on-demand. The audience had more questions than we had time to answer, so we’ve included additional Q&A below.

Many developers — and their employers — are concerned with securing their software supply chain. But what does that mean? Senior Developer Relations Manager Michael Irwin uses a coffee analogy (even though he doesn’t drink coffee himself!). To brew the best cup of coffee, you need many things: clean water, high-quality beans, and good equipment. For the beans and the water, you want assurances that they meet your standards. You might look for beans that have independent certification of their provenance and processing to make sure they are produced sustainably and ethically, for example.

The same concepts apply to producing software. You want to start with trusted content. Using images from Docker Official Images, Docker Verified Publishers, and Docker-Sponsored Open Source lets you know you’re building on a reliable, up-to-date foundation. From those images and your layered software libraries, Docker can build a software bill of materials (SBOM) that you can present to your customers to show exactly what went into making your application. And with Docker Scout, you can automatically check for known vulnerabilities, which helps you find and fix security issues before they reach your customers.

During the webinar, Senior Principal Software Engineer Christian Dupuis demonstrated using Docker Scout. He highlighted how Docker Scout utilizes SBOM and provenance attestation produced by BuildKit. He also showed Docker Scout indicating vulnerabilities by severity. Docker Scout doesn’t stop at showing vulnerabilities, it lets you know where the vulnerability is added to the image and provides suggestions for remediation.
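If you want to try a similar workflow yourself, the Docker Scout CLI offers commands along these lines (the image name is a placeholder):

docker scout quickview <org>/<image>:latest
docker scout cves <org>/<image>:latest
docker scout recommendations <org>/<image>:latest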

The audience asked great questions during the live Q&A. Since we weren’t able to answer them all during the webinar, we want to take a moment to address them now.

Webinar Q&A

What source does Docker Scout use to determine the CVEs?

Docker Scout gets vulnerability data from approximately 20 advisory sources. This includes Linux distributions and code repository platforms like Debian, Ubuntu, GitHub, GitLab, and other trustworthy providers of advisory metadata.

We constantly cross-reference the SBOM information stored in the Docker Scout system-of-record with advisory data. New vulnerability information is immediately reflected on Docker Desktop, in the Docker Scout CLI, and on scout.docker.com.

How much does Docker Scout cost?

Docker Scout has several different price tiers. You can start for free with up to 3 image repositories; if you need more, we also offer paid plans. The Docker Scout product page has a full comparison to help you pick the right option.

How do I add Docker Scout to my CI pipeline?

The documentation on Docker Scout has a dedicated section on CI integrations. 
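As one hedged example, a GitHub Actions step using the docker/scout-action might look like the following (the inputs shown are assumptions based on that action's repository, not an excerpt from the official docs):

- name: Analyze image for CVEs
  uses: docker/scout-action@v1
  with:
    command: cves
    image: myorg/myapp:latest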

How can I contribute?

There are several ways you can engage with the product team behind Docker Scout and influence the roadmap:

For feedback and issues about Docker Scout in the CLI, CI, Docker Desktop or scout.docker.com, open issues at https://github.com/docker/scout-cli.

Learn more about the Docker Scout Design Partner Program.

What platforms are supported?

Docker Scout works on all supported operating systems. You can use Docker Scout in Docker Desktop version 4.17 or later or log in to scout.docker.com to see information across all of your Docker Hub images. Make sure you keep your Docker Desktop version up to date — we’re adding new features and capabilities in every release.

We also provide a Docker Scout CLI plugin. You can find instructions in the scout-cli GitHub repository.

How do I export a list of vulnerabilities?

You can use the Docker Scout CLI to export vulnerabilities into a SARIF file for further processing or export. You can read more about this in the Docker Scout documentation.
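For example, a hedged sketch of exporting results for an image to a SARIF file with the CLI (the image and file names are placeholders):

docker scout cves --format sarif --output scout-report.sarif myorg/myapp:latest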

How does Docker Scout help if I’m already using scanning tools?

Docker Scout builds upon a system of record for the entire software development life cycle, so you can integrate it with other tools you use in your software delivery process. Talk to us to learn more. 

Get started with Docker Scout

Developers want speed, security, and choice. Docker Scout helps improve developer efficiency and software security by detecting known vulnerabilities early. While it offers remediation suggestions, developers still have the choice in determining the best approach to addressing vulnerabilities. Get started today to see how Docker Scout helps you secure your software supply chain.

Learn more

Watch our webinar “Docker Scout: Live Demo, Insights, and Q&A”.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help. 

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Unleash Docker Desktop 4.22: The Featherweight Heavy-Hitter for Supercharged Rapid Development

Docker is committed to delivering the most efficient and high-performing container development toolset in the market, so we continue to advance our technology to exceed customer expectations. With the latest version 4.22 release, Docker Desktop has undergone significant optimizations, making it streamlined, lightweight, and faster than ever. Not only does this new version offer enhanced performance, but it also contributes to a more environmentally friendly approach, saving energy and reducing resource consumption on local machines:

Network speeds from 3.5 Gbit/sec to 19 Gbit/sec — a 443% improvement

Filesystem improvements that yield 60% faster builds

Enhanced active memory usage from 4GB to 2GB — a 2x improvement

Resource Saver mode — automatically reduces CPU and memory utilization by 10x

Discover how our commitment to delivering exceptional experiences for our developer community and customers shines through in the latest updates to Docker Desktop.

4.19: Networking stack — turbocharging container connectivity

The Docker Desktop 4.19 release brought a substantial boost to Docker Desktop’s networking stack, the technology used by containers to access the internet. This upgrade significantly enhances networking performance, which is particularly beneficial for tasks like docker builds, which often involve downloading and installing numerous packages. 

Benchmark tests using iperf3 on a first-generation M1 Mac Mini demonstrated remarkable progress. The previous unoptimized network stack managed around 3.5 Gbit/sec, whereas the current default networking stack in 4.19+ achieves an impressive 19 Gbit/sec on the same machine. This optimization translates to faster build times and smoother container operations.

4.21: Optimized CPU, memory, and filesharing performance

Docker Desktop 4.21 introduced the first version of what is now a game-changing feature: Resource Saver. This intelligent mode detects when Docker Desktop is not running containers and automatically reduces CPU consumption, ensuring that developers can keep the application running in the background without compromising battery life or dealing with noisy laptop fans. Across all Docker Desktop users on 4.21, this innovative feature has saved up to 38,500 CPU hours every day, making it a true productivity booster.

Furthermore, Docker Desktop 4.21 significantly enhanced its active memory usage, slashing it from approximately 4GB to around 2GB — a remarkable 2x advancement. This empowers developers to seamlessly juggle multiple applications alongside Docker Desktop, resulting in an elevated and smoother user experience.

Additionally, Docker Desktop now utilizes VirtioFS on macOS 12.5+ to deliver substantial performance gains when sharing files with containers through docker run -v. Notably, the time needed for a clean (non-incremental) build of redis/redis checked out on the host has been reduced by more than half over recent releases, resulting in ~60% faster builds and further solidifying Docker Desktop’s reputation as an indispensable development tool.

4.22: Heightened efficiency — dramatically reducing memory utilization when idle

Now, with the release of Docker Desktop 4.22, we’re excited to announce that  Docker Desktop’s newest performance enhancement feature, Resource Saver, supports automatic low memory mode for Mac, Windows, and Linux. This addition detects when Docker Desktop is not running containers and dramatically reduces its memory footprint by 10x, freeing up valuable resources on developers’ machines for other tasks and minimizing the risk of lag when navigating across different applications. Memory allocation can now be quick and efficient, resulting in a seamless and performant development experience.

But don’t just take it from us. In “What is Resource Saver Mode in Docker Desktop and what problem does it solve?” Ajeet Raina explains how the new Resource Saver feature optimizes efficiency, enhances performance, and simplifies the development workflow.

Conclusion

Docker Desktop continues to evolve. The latest enhancements in version 4.22, combined with the resource-saving features introduced in 4.21 and 4.19, have made Docker Desktop a lighter, faster, and more environmentally friendly solution for developers. 

By optimizing resource usage and maximizing performance, Docker Desktop enables developers to build and release applications faster while being conscious of their environmental impact. As Docker continues to innovate and fine-tune its offerings, developers can expect even greater strides toward a more efficient and productive development experience.

Download or update to the newest version of Docker Desktop today to start saving time and take advantage of these new advancements.

Learn more

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

5 Benefits of a Container-First Approach to Software Development

Cargo containers completely transformed the shipping industry and enabled the global commerce experience of today. Similarly, software containers simplify application development and deployment, which helps enable the cloud-native software architecture that powers the modern technology we all rely on. Although you can get benefits from containerizing your applications after the fact, you get the most value when you take a container-first approach. 

In this blog post, we discuss insights from Cracking the Code: Effectively Managing All of Those Applications, highlighting five benefits of embracing a container-first approach to software development.

Consistent and reliable software performance

Inconsistency can be a major roadblock to progress. The all too familiar frustration of “it works on my machine” can cause software delivery delays and hinders collaboration. But with containers comes standardization. This ensures that software will perform consistently across the entire development process, regardless of the underlying environment.

Developers and infrastructure engineers also save a lot of time and cognitive energy on configuring and maintaining their environments and workstations. Containers have a small resource footprint, which means your infrastructure can do more with less. And, because each container includes the exact versions of software it needs, you don’t have to worry about conflicting dependencies.

Fewer bugs

Bugs are the bane of every software developer’s existence. However, a container-first approach provides environmental parity. This means that the development, staging, and production environments remain consistent, reducing the likelihood of encountering bugs caused by disparities in underlying conditions. With containers, businesses can significantly reduce debugging time and enhance the overall quality of their software, leading to higher user satisfaction and a stronger competitive edge.

Faster developer onboarding

The learning curve for new developers can often be steep, especially when dealing with complex software environments. Containers revolutionize developer onboarding by providing a replica of the exact environment in which an application will be tested and executed. This is irrespective of the developer’s local operating system or installed libraries. With containers, developers can hit the ground running, accelerating their productivity and contributing to the project’s success from day one.

A more secure supply chain

The Consortium for Information & Software Quality estimates that poor software quality has cost the United States economy $2.41 trillion. Two of the top causes are criminals exploiting vulnerabilities and supply chain problems with third-party software. Containers can help.

Because the Dockerfile is a recipe for creating the container, you can use it to produce a software bill of materials (SBOM). This makes clear what dependencies — including the specific version — go into building the container. Cryptographically signed SBOMs let you verify the provenance of your dependencies, so you can be sure that the upstream library you’re using is the actual one produced by the project.
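As a sketch, BuildKit can attach an SBOM attestation when you build and push an image (the image name is a placeholder):

docker buildx build --sbom=true -t myorg/myapp:latest --push .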

Using the SBOM, you can also monitor your fleet of containers for known vulnerabilities. When a new vulnerability is discovered, you can quickly tell which of your containers are affected, which makes the response quicker. Containers also provide isolation, micro-segmentation, and other zero-trust techniques, which reduce your attack surface and limit the impact of exploited vulnerabilities.

Improved productivity for faster time-to-market

The standardization, consistency, and security containers bring directly impact software delivery time. With fewer issues to deal with (bugs, compatibility issues, maintenance, etc.), developers can focus on more meaningful tasks and ultimately deliver solutions to customers faster. All of this helps development teams work more efficiently, collaborate effectively, and deliver higher-quality software.

Learn more

Dive deeper into the world of containers and the benefits of adopting a container-first model in your software development by downloading the full white paper, Cracking the Code: Effectively Managing All of Those Applications.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Desktop 4.22: Resource Saver, Compose ‘include’, and Enhanced RBAC Functionality

Docker Desktop 4.22 is now available and includes a new Resource Saver feature that massively reduces idle memory and CPU utilization to ensure efficient use of your machine’s resources. Docker Compose include allows splitting complex Compose projects into subprojects to make it easier to modularize complex applications into sub-Compose files. Role-based access control (RBAC) has also been enhanced with the addition of an Editor role to allow admins to delegate repository management tasks.

Resource Saver 

In 4.22, we added a new Resource Saver feature for Mac and Windows that detects when Docker Desktop has been idle, with no active containers, for 30 seconds and then massively reduces its memory and CPU footprint (WSL has CPU optimizations only at this stage). This optimizes Docker Desktop for your system and helps free up resources on your machine for other tasks. When a container needs resources, they’re quickly allocated on demand.

To see this feature in action, start Docker Desktop and leave it idle for 30 seconds with no containers running. A leaf will appear over the whale icon in your Docker Desktop menu and the sidebar of the Docker Desktop dashboard, indicating that Resource Saver mode is activated.

Figure 1: The Docker Desktop menu and the macOS menu bar show Resource Saver mode running.

Previously, Docker Desktop introduced some CPU optimizations of Resource Saver, which, at the time of writing, are already saving up to a staggering 38,500 CPU hours every single day across all Docker Desktop users.

Split complex Compose projects into multiple subprojects with ‘include’

If you’re working with complex applications, use the new include section in your Compose file to split your project into manageable subprojects. Compared to merging files with CLI flags or using extends to share common attributes of a single service from another file, include loads external Compose files as self-contained building blocks, making it easier to collaborate on services across teams and share common dependency configurations within your organization.

For more on how you can try out this feature, read “Improve Docker Compose Modularity with `include`” or refer to the documentation.

Figure 2: A compose.yaml file that is using the new ‘include’ section to define subprojects.
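A minimal sketch of the new section, with hypothetical file paths:

include:
  - ../shared-infra/compose.yaml   # defines a db service, for example

services:
  web:
    build: .
    depends_on:
      - db   # provided by the included Compose file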

Editor role available for organizations

With the addition of the Editor role, admins can provision users to manage repositories without full administrator privileges. Users assigned to the Editor role can:

Create public and private repositories

Pull, push, view, edit, and delete a repository

Update repository description

Assign team permissions to repos

Update scanning settings

Delete tags

Add webhooks

Change repo visibility settings

For further details on roles and permissions, refer to the documentation. 

Organization owners can assign the Editor role to a member of their organization in either Docker Hub or Docker Admin.

Figure 3: The Editor role functionality in Docker Hub.

Conclusion

Upgrade now to explore what’s new in the 4.22 release of Docker Desktop. Do you have feedback? Leave feedback on our public GitHub roadmap and let us know what else you’d like to see in upcoming releases.

Learn more

Read Improve Docker Compose Modularity with include.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Sentiment Analysis and Insights on Cryptocurrencies Using Docker and Containerized AI/ML Models

It’s always exciting to learn about and share the ways the Docker community leverages Docker to drive innovation. We are learning about many interesting AI/ML solutions that use Docker to accelerate development and simplify deployment.

In this blog post, Saul Martin shares how Prometeo.ai leverages Docker to deploy and manage the machine learning models behind its cryptocurrency sentiment analysis, letting its developers focus on innovation rather than infrastructure and deployment management while delivering insights for traders.

The digital assets market, which is famously volatile and swift, necessitates tools that can keep up with its speed and provide real-time insights. At the forefront of these tools is Prometeo.ai, which has harnessed the power of Docker to build a sophisticated, high-frequency sentiment analysis platform. This tool sifts through the torrent of emotions that drive the cryptocurrency market, providing real-time sentiments of the top 100 assets, which is a valuable resource for hedge funds and financial institutions.

By leveraging Docker’s containerization capabilities, Prometeo.ai can deploy and manage complex machine learning models with ease, making it an example of modern, robust, scalable architecture.

In this blog post, we will delve into how Prometeo.ai is utilizing Docker for its sentiment analysis tool, highlighting the critical aspects of its data collection, machine learning model implementations, storage, and deployment processes. This exploration will give you a clear understanding of how Docker can transform machine learning application deployment, presenting a case study in the form of Prometeo.ai.

Data collection and processing: High-frequency sentiment analysis with Docker

Prometeo.ai’s comprehensive sentiment analysis capability hinges on an agile, near real-time data collection and processing infrastructure. This framework captures, enriches, and publishes an extensive range of sentiment data from diverse platforms:

Stream/data access: Platform-specific data pipelines, each hosted in its own siloed Docker container, harvest cryptocurrency-related discussions in real time.

Tokenization and sentiment analysis: The harvested data undergoes tokenization, transforming each content piece into a format suitable for analysis. An internal Sentiment Analysis API further enriches this tokenized data, inferring sentiment attributes from the raw information.

Publishing: Enriched sentiment data is published within one minute of collection, facilitating near real-time insights for users. During periods of content unavailability from a data source, the system generates and publishes an empty dictionary.

All these operations transpire within Docker containers, guaranteeing the necessary scalability, isolation, and resource efficiency to manage high-frequency data operations.
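While the post doesn’t share Prometeo.ai’s actual configuration, a Compose file for this kind of siloed pipeline might look roughly like the following sketch (all service names and paths are hypothetical):

services:
  collector-platform-a:        # one collector container per source platform
    build: ./collectors/platform-a
    environment:
      - SENTIMENT_API_URL=http://sentiment-api:8000
  sentiment-api:               # internal API that enriches tokenized content with sentiment
    build: ./sentiment-api
  publisher:                   # publishes enriched data within a minute of collection
    build: ./publisher
    depends_on:
      - sentiment-api

Running each stage in its own container is what provides the isolation and per-stage scaling described above.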

For efficient data storage, Prometeo.ai relies on:

NoSQL database: DynamoDB is used for storing minute-level sentiment aggregations. The primary key is defined to allow quick access to data via time-range queries (a sketch of such a key schema appears after this list). These aggregations are critical for providing real-time insights to users and for hourly and daily aggregation calculations.

Object storage: For model retraining and data backup purposes, the raw data, including raw content, is exported in batches and stored in Amazon S3 buckets. This robust storage mechanism ensures data durability and aids in maintaining data integrity.

Relational database: Metadata related to different assets, including links, tickers, IDs, descriptions, and others, are hosted in PostgreSQL. This provides a relational structure for asset metadata and promotes accessible, structured access when required.
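As a hypothetical sketch of the DynamoDB design described above (the table and attribute names are illustrative, not Prometeo.ai’s actual schema), a partition key on the asset plus a sort key on the minute timestamp enables efficient time-range queries:

aws dynamodb create-table \
  --table-name sentiment-minute-aggregations \
  --attribute-definitions AttributeName=asset_id,AttributeType=S AttributeName=minute_ts,AttributeType=N \
  --key-schema AttributeName=asset_id,KeyType=HASH AttributeName=minute_ts,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST

With this shape, a Query with a key condition such as asset_id = :a AND minute_ts BETWEEN :start AND :end retrieves one asset’s aggregations for any time window without a full table scan.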

NLP models

Prometeo.ai makes use of two Bidirectional Encoder Representations from Transformers (BERT) models, both of which operate within a Docker environment for natural language processing (NLP). The following models run multi-label classification pipelines that have been fine-tuned on an in-house dataset of 50k manually labeled tweets.

proSENT model: This model specializes in identifying 28 unique emotional sentiments. It owes its comprehensive language understanding to training on a corpus of more than 1.5 million unique cryptocurrency-related social media posts.

proCRYPT model: This model is fine-tuned for crypto sentiment analysis, classifying sentiments as bullish, bearish, or neutral.

The deployment architecture encapsulates both models within a single Docker container alongside a FastAPI server. This internal API acts as the conduit for running inferences.

To ensure a seamless and efficient build process, Hugging Face’s model hub is used to store the models. The models and their binary files are retrieved directly from Hugging Face during the Docker container’s build phase rather than being stored in the source repository, which keeps the build lean, optimizes the build process, and contributes to the overall operational efficiency of the application.
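A minimal Dockerfile sketch of this pattern might look like the following (the model ID, file layout, and server entrypoint are hypothetical; the CUDA base image matches the one shown later in this post):

FROM nvidia/cuda:12.1.0-base-ubuntu20.04

RUN apt-get update && apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install torch transformers fastapi uvicorn

# Pull the model weights from Hugging Face during the build so containers
# start serving without a network fetch.
RUN python3 -c "from transformers import AutoTokenizer, AutoModelForSequenceClassification; \
    AutoTokenizer.from_pretrained('prometeo/procrypt'); \
    AutoModelForSequenceClassification.from_pretrained('prometeo/procrypt')"

COPY app/ /app
WORKDIR /app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]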

Deployment

Prometeo.ai’s deployment pipeline is composed of GitHub Actions, AWS CodeDeploy, and accelerated computing instances. This pipeline forms a coherent system for efficiently handling application updates and resource allocation:

GitHub Actions: The pipeline begins with GitHub Actions workflows that automatically trigger a fresh deployment whenever changes are pushed to the production branch (a sketch of such a workflow appears after this list). This ensures the application continually operates on the most recent, vetted code version.

AWS CodeDeploy: The next phase involves AWS CodeDeploy, which is triggered once GitHub Actions has successfully built and pushed the Docker image to Amazon Elastic Container Registry (ECR). CodeDeploy automatically deploys the updated Docker image to the GPU-optimized instances, ensuring smooth rollouts and establishing a reliable rollback path if necessary.

Accelerated computing: Prometeo leverages NVIDIA Tesla GPUs for the computational prowess needed for executing complex BERT models. These GPU-optimized instances are tailored for NVIDIA-CUDA Docker image compatibility, thereby facilitating GPU acceleration, which significantly expedites the processing and analysis stages.
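As an illustrative sketch of the first two stages (the repository layout, image name, and AWS details are hypothetical, and the CodeDeploy application is assumed to be configured separately):

# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [production]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - uses: aws-actions/amazon-ecr-login@v1
        id: ecr
      - name: Build and push the Docker image to ECR
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/sentiment-api:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/sentiment-api:${{ github.sha }}

Once the image lands in ECR, CodeDeploy takes over and rolls it out to the GPU-optimized instances.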

Below is a snippet demonstrating the configuration to exploit the GPU capabilities of the instances:

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          capabilities: [gpu]

Please note that the CUDA version in your base image must match the CUDA version reported on your host by running nvidia-smi:

FROM nvidia/cuda:12.1.0-base-ubuntu20.04

To maintain optimal performance under fluctuating load conditions, an autoscaling mechanism is incorporated. This solution perpetually monitors CPU utilization, dynamically scaling the number of instances up or down as dictated by the load. This ensures that the application always has access to the appropriate resources for efficient execution.
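A target-tracking scaling policy on an Auto Scaling group is one common way to implement this on AWS (the group name and target value below are hypothetical, not Prometeo.ai’s actual settings):

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name inference-asg \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 60.0}'

With a policy like this, AWS adds instances when average CPU utilization rises above the target and removes them when it falls below.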

Conclusion

By harnessing Docker’s containerization capabilities and compatibility with NVIDIA-CUDA images, Prometeo.ai successfully manages intensive, real-time emotion analysis in the digital assets domain. Docker’s role in this strategy is pivotal, providing an environment that enables resource optimization and seamless integration with other services.

Prometeo.ai’s implementation demonstrates Docker’s potential to handle sophisticated computational tasks. The orchestration of Docker with GPU-optimized instances and cloud-based services exhibits a scalable and efficient infrastructure for high-frequency, near-real-time data analysis.

Do you have an interesting use case or story about Docker in your AI/ML workflow? We would love to hear from you and maybe even share your story.

Learn more

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Container Security and Why It Matters

Are you thinking about container security? Maybe you are on a security team trying to manage rogue cloud resources. Perhaps you work in the DevOps space and know what container security is, but you want to figure out how to decrease the pain around security triaging your containers for everyone involved. 

In this post, we’ll look at security for containers in a scalable environment, how deployment to that environment can affect your rollout of container security, and how Docker can help.

What is container security?

Container security means knowing that a container image you run in your environment includes only the libraries, base image, and any custom bits you declare in your Dockerfile, and not malware or known vulnerabilities. (We’d also love to say no zero days, but such is the nature of the beast.)

You want to know that those libraries used to build your image and any base image behind it come from sources you expect — open source or otherwise — and are free from critical vulnerabilities, malware, and other surprises. 

The base image is usually a common image (for example, Alpine Linux, Ubuntu, or BusyBox) that serves as a building block upon which other companies add their own image layers. Think of an image layer as a step in the install process: whenever you take a base image and add new libraries or installation steps on top of it, you are essentially creating a new image.
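For example, in this minimal (hypothetical) Dockerfile, each instruction adds a layer on top of the Alpine base image, producing a new image:

# The base image layer.
FROM alpine:3.18
# A new layer: a library installed on top of the base.
RUN apk add --no-cache curl
# A new layer: your own custom bits.
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]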

We’ve talked about the most immediate piece of container security, the image layers, but how is the image built and what is the source of those image layers?

Container image provenance

Here’s where container security gets tricky: the image build and source tracking process. You want assurances that your images, libraries, and any base images you depend on contain what you expect them to and not anything nefarious. So you should care about image provenance: where an image gets built, who builds it, and where it gets stored. 

You should pay attention to any infrastructure or automation used to build your images, which typically means continuous integration (CI) tooling such as GitHub Actions, AWS CodeBuild, or CircleCI. You need to ensure any workloads running your image builds are on build environments with minimal access and potential attack surfaces. Consider who has access to your GitHub Actions runners, for example. Do you need to create a VPN connection from your runner to your cloud account? If so, what are the security protections on that VPN connection? Consider the confidentiality and integrity of your image pipeline carefully.

To put it more directly: Managing container provenance in cloud workloads can make deployments easier, but it can also make it easier to deploy malware at scale if you aren’t careful. The nature of the cloud is that it adds complexity, not necessarily security.

Software Bill of Materials (SBOM) attestations can also help ensure that only what you want is inside your images. By viewing an SBOM attestation, you can review a list of all the libraries and dependencies used to build your image and verify that their versions and contents match what you expect. The Docker CLI provides this with the docker sbom command, and Docker BuildKit can generate SBOM attestations in versions 0.11 and newer.
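For example (the image names here are illustrative):

docker sbom myorg/app:1.0                            # list the packages discovered in an image
docker buildx build --sbom=true -t myorg/app:1.0 .   # attach an SBOM attestation at build time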

Other considerations with SBOM attestations include trust in the attestation provider and protection from man-in-the-middle attacks, such as libraries being replaced in the image. Docker is working on signed SBOM attestations for images to provide stronger assurances and strengthen this part of image security.

You also want to consider running software composition analysis (SCA) against your images to ensure open source tooling and licenses are as expected. Docker Official Images, for example, carry a certified seal of provenance, which provides assurance around any base image you might be using.

Vulnerability and malware scanning

And what about potential CVEs and malware? How do you scan your images at scale for those issues? 

A number of static scanning tools are available for CVE scanning, and some provide dynamic malware scanning. When researching tools in this space, consider what you use for your image repository, such as Docker Hub, Amazon Elastic Container Registry (ECR), Artifact Registry, or an on-premises/in-colocation option like Nexus. Depending on the dynamics and security controls you have in place on your registry, one tooling option might make more sense than another. For example, AWS ECR offers some static vulnerability scanning out of the box. Some other options bundle software composition analysis (SCA) scanning of images as well. 

The trick is to find a tool with the right signal-to-noise mix for your team. For example, you might want static scanning but minimal false positives and the ability to create exclusions. 

As with any static vulnerability scanning tool, the Common Vulnerability Scoring System (CVSS) score of a vulnerability is just a starting point. Only you and your team can determine the exploitability, possible risks, and attack surface of a particular vulnerability and whether those factors outweigh the potential effects of upgrading or changing an image deployed at scale in your environment.

In other words, a scanning tool might find some high or critical (per CVSS scoring) vulnerabilities in some of your images. Still, those vulnerabilities might not be exploitable because the affected images are only used internally inside a virtual private cloud (VPC) in your environment with no external access. But you’d want to ensure that the image stays internal and isn’t used in production, so guardrails, monitoring, and gating that keep the image in internal workloads only are a must.

Finally, imagine an image that is pervasive and used across all your workloads. The effort to upgrade that image might take several sprint cycles for your engineering teams to safely deploy and require service downtime as you unravel the library dependencies. Regarding vulnerability rating for the two examples — an internal-only image and a pervasive image that is difficult to upgrade — you might want to lower the priority of the vulnerability in the former and slowly track progress toward remediating the latter. 

Docker’s Security Team is intimately familiar with two of the biggest blockers security teams face: time and resources. Your team might not be able to triage and remediate all vulnerabilities across production, development, and staging environments, especially if your team is just starting its journey with container security. So start with what you can and must do something about: production images.

Production vs. non-production

Only container images that have gone through appropriate approval and automation workflows should be deployed in production. Like any mature CI/CD workflow, this means thorough testing in non-production environments, scanning before release to production, and monitoring and guardrails around images that are already live in production with things like cloud resource tagging, version control, and appropriate role-based access control around who can approve an image’s deployment to production. 

At its root, this means that security teams that have not previously dipped their feet into the infrastructure or DevOps team’s ocean of work in your company’s cloud accounts should start doing so. Just as DevOps culture has caused a shift for developers in handling infrastructure, scaling, and service decisions in the cloud, the same shift is happening in the security community with DevSecOps culture and Security Engineering. Container security resides in the middle of this intersection.

Not only does your tool choice matter in terms of best fit for your environment’s landscape, but your ability to collaborate with your infrastructure, engineering, and DevOps teams matters even more for this work. To reiterate: to get a good handle on gating production deployments, with solid automation and monitoring tied to those deployments and resources, security teams must familiarize themselves with this space and get comfortable in this intersection. Good tooling can make all the difference in fostering that culture of collaboration, especially for a security team new to this space.

Container security tools: What to look for

Like any well-thought-out tool selection, sometimes what matters most is not the number of bells and whistles a tool offers but the tool’s fit to your organization’s needs and gaps.

Avoid container security tools that promise to be the silver bullet. Instead, think of tools that will help your team conquer small challenges today and work to build on goals for the larger challenges down the road. (Security folks know that any tool on the market promising to be a silver bullet is just selling something and isn’t a reality with the ever-changing threat landscape.)

In short, look for container security tools that enable your workflow, build trust, and facilitate cross-team collaboration from Engineering to Security to DevOps, not tools that become a landscape of noise and overwhelming visuals for your engineers. And here’s where Docker Scout can help.

Docker Scout

Docker engineers have been working on a new product to help increase container security: Docker Scout. Scout gives you the list of discovered vulnerabilities in your container images and offers guidance for remediation in an iterative small-improvements style. You can compare your scores from one deployment to the next and show improvement to create a sense of accomplishment for your teams, not an overwhelming bombardment of vulnerabilities and risk that seems insurmountable.

Docker Scout lets you set target goals for your images and markers for iterative improvement. You can define different goals for production images versus development or staging images so that each environment gets the level of security it needs.
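For example, assuming hypothetical image names, a typical command-line loop with Docker Scout might look like this:

docker scout quickview myorg/app:1.0                     # at-a-glance vulnerability summary
docker scout cves myorg/app:1.0                          # detailed list of discovered CVEs
docker scout compare myorg/app:1.1 --to myorg/app:1.0    # show how a new build improves on the last

The compare step is what supports the iterative, small-improvements style described above: each deployment can be measured against the previous one.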

Conclusion

As with most security problems, there is no silver bullet with container security. The technical, operational, and organizational moving pieces that go into protecting your company’s container images often reside at the boundaries between teams, functions, and responsibilities. This adds complexity to an already complex problem. Rather than further adding to the burdens created by this complexity, you should look for tools that enable your teams to work together and reach a deeper understanding of where goals, risks, and priorities overlap and coexist.

Even more importantly, look for container security solutions that are clear about what they can offer you and extend help in areas where they do not have offerings. 

Whether you are a security team member new to the ocean of DevOps and container security or have been navigating these security waters for a while, Docker is here to help support you and get you to more stable waters. We are beside you in this ocean, trying to make the space better for ourselves, our customers, and developers who use Docker all over the world.

Learn more

Get the latest release of Docker Desktop.

Try Docker Scout.

Learn about Docker Security.

Generate the SBOM for Docker images.

Learn about SBOM attestations.

Check out Docker Official Images.

Visit Docker Hub.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.
