Docker and Hugging Face Partner to Democratize AI

Today, Hugging Face and Docker are announcing a new partnership to democratize AI and make it accessible to all software engineers. Hugging Face is the most used open platform for AI, where the machine learning (ML) community has shared more than 150,000 models, 25,000 datasets, and 30,000 ML apps, including Stable Diffusion, Bloom, GPT-J, and open source ChatGPT alternatives. These apps enable the community to explore models, replicate results, and lower the barrier to entry for ML: anyone with a browser can interact with the models.

Docker is the leading toolset for easy software deployment, from infrastructure to applications, and the leading platform for software team collaboration.

Docker and Hugging Face are partnering so you can launch and deploy complex ML apps in minutes. With the recent support for Docker on Hugging Face Spaces, anyone can create custom apps by simply writing a Dockerfile. What’s great about Spaces is that once you’ve got your app running, you can easily share it with anyone worldwide! 🌍 Spaces provides an unparalleled level of flexibility and enables users to build ML demos with their preferred tools, from MLOps tools and FastAPI to Go endpoints and Phoenix apps.

Spaces also comes with pre-defined templates of popular open source projects for members who want to get their end-to-end project into production in a matter of seconds with just a few clicks.

Spaces enable easy deployment of ML apps in all environments, not just on Hugging Face. With “Run with Docker,” millions of software engineers can access more than 30,000 machine learning apps and run them locally or in their preferred environment.

“At Hugging Face, we’ve worked on making AI more accessible and more reproducible for the past six years,” says Clem Delangue, CEO of Hugging Face. “Step 1 was to let people share models and datasets, which are the basic building blocks of AI. Step 2 was to let people build online demos for new ML techniques. Through our partnership with Docker Inc., we make great progress towards Step 3, which is to let anyone run those state-of-the-art AI models locally in a matter of minutes.”

You can also discover popular Spaces in the Docker Hub and run them locally with just a couple of commands.
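
For example, once you’ve found a Space image on Docker Hub, pulling and running it might look like this (the image name is a placeholder; copy the real one from its Docker Hub page):

# Placeholder image name; find actual names on Docker Hub
$ docker pull <namespace>/<space-name>
# 7860 is the default port Spaces apps listen on
$ docker run -p 7860:7860 <namespace>/<space-name>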

To get started, read Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces. Or try Hugging Face Spaces now.
Source: https://blog.docker.com/feed/

Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces

The Hugging Face Hub is a platform that enables collaborative open source machine learning (ML). The hub works as a central place where users can explore, experiment, collaborate, and build technology with machine learning. On the hub, you can find more than 140,000 models, 50,000 ML apps (called Spaces), and 20,000 datasets shared by the community.

Using Spaces makes it easy to create and deploy ML-powered applications and demos in minutes. Recently, the Hugging Face team added support for Docker Spaces, enabling users to create any custom app they want by simply writing a Dockerfile.

Another great thing about Spaces is that once you have your app running, you can easily share it with anyone around the world. 🌍

This guide will step through the basics of creating a Docker Space, configuring it, and deploying code to it. We’ll show how to build a basic FastAPI app for text generation that demos the google/flan-t5-small model, which generates text from a given input. Models like this are used to power text completion in all sorts of apps. (You can check out a completed version of the app at Hugging Face.)

Prerequisites

To follow along with the steps presented in this article, you’ll need to be signed in to the Hugging Face Hub — you can sign up for free if you don’t have an account already.

Create a new Docker Space 🐳

To get started, create a new Space as shown in Figure 1.

Figure 1: Create a new Space.

Next, choose any name you prefer for your project, select a license, and select Docker as the software development kit (SDK), as shown in Figure 2.

Spaces provides pre-built Docker templates, such as Argilla and Livebook, that let you quickly start ML projects using open source tools. Choosing the “Blank” option means you’ll write your Dockerfile manually. Don’t worry, though; we’ll provide a Dockerfile to copy and paste later. 😅

Figure 2: Adding details for the new Space.

When you finish filling out the form and click the Create Space button, a new repository will be created in your Spaces account and associated with your new Space.

Note: If you’re new to the Hugging Face Hub 🤗, check out Getting Started with Repositories for a nice primer on repositories on the hub.

Writing the app

Ok, now that you have an empty space repository, it’s time to write some code. 😎

The sample app will consist of the following three files:

requirements.txt — Lists the dependencies of a Python project or application

app.py — A Python script where we will write our FastAPI app

Dockerfile — Sets up our environment, installs requirements.txt, then launches app.py

To follow along, create each file shown below via the web interface. To do that, navigate to your Space’s Files and versions tab, then choose Add file → Create a new file (Figure 3). Note that, if you prefer, you can also use Git; a sketch of that workflow appears after the next paragraph.

Figure 3: Creating new files.

Make sure that you name each file exactly as we have done here. Then, copy the contents of each file from the listings below and paste them into the corresponding file in the editor. After you have created and populated all the necessary files, commit each new file to your repository by clicking the Commit new file to main button.
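
If you’d rather work with Git, here’s a minimal sketch of that workflow, assuming the standard Spaces repository URL pattern (fill in the placeholders with your own username and Space name):

$ git clone https://huggingface.co/spaces/<your-username>/<your-space-name>
$ cd <your-space-name>
# Create requirements.txt, app.py, and Dockerfile locally, then:
$ git add requirements.txt app.py Dockerfile
$ git commit -m "Add FastAPI text-generation app"
$ git push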

Listing the Python dependencies 

It’s time to list all the Python packages and their specific versions required for the project to function properly. The contents of a requirements.txt file typically include the name of each package and its version, which can be specified in a variety of formats such as exact versions, version ranges, or compatible releases (examples follow the listing below). The file lists FastAPI, requests, and uvicorn for the API, along with sentencepiece, torch, and transformers for the text-generation model.

fastapi==0.74.*
requests==2.27.*
uvicorn[standard]==0.17.*
sentencepiece==0.1.*
torch==1.11.*
transformers==4.*
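
For reference, the common pip specifier forms look like this (the package name is just a placeholder):

example-package==1.2.3      # exact version
example-package==0.17.*     # wildcard patch releases, as used above
example-package>=1.2,<2.0   # version range
example-package~=1.4.2      # compatible release (>=1.4.2, <1.5)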

Defining the FastAPI web application

The following code defines a FastAPI web application that uses the transformers library to generate text based on user input. The app itself is a simple single-endpoint API. The /generate endpoint takes in text and uses a transformers pipeline to generate a completion, which it then returns as a response.

To give folks something to see, we reroute FastAPI’s interactive Swagger docs from the default /docs endpoint to the root of the app. This way, when someone visits your Space, they can play with it without having to write any code.

from fastapi import FastAPI
from transformers import pipeline

# Create a new FastAPI app instance, serving the interactive Swagger
# docs at the root of the app instead of the default /docs endpoint
app = FastAPI(docs_url="/")

# Initialize the text generation pipeline
# This function will be able to generate text
# given an input.
pipe = pipeline("text2text-generation", model="google/flan-t5-small")

# Define a function to handle the GET request at `/generate`
# The generate() function is defined as a FastAPI route that takes a
# string parameter called text. The function generates text based on
# the input using the pipeline() object, and returns a JSON response
# containing the generated text under the key "output"
@app.get("/generate")
def generate(text: str):
    """
    Using the text2text-generation pipeline from `transformers`, generate text
    from the given input text. The model used is `google/flan-t5-small`, which
    can be found [here](https://huggingface.co/google/flan-t5-small).
    """
    # Use the pipeline to generate text from the given input text
    output = pipe(text)

    # Return the generated text in a JSON response
    return {"output": output[0]["generated_text"]}

Writing the Dockerfile

In this section, we will write a Dockerfile that sets up a Python 3.9 environment, installs the packages listed in requirements.txt, and starts a FastAPI app on port 7860.

Let’s go through this process step by step:

FROM python:3.9

The preceding line specifies that we’re going to use the official Python 3.9 Docker image as the base image for our container. This image is available on Docker Hub and contains all the necessary files to run Python 3.9.

WORKDIR /code

This line sets the working directory inside the container to /code. This is where we’ll copy our application code and dependencies later on.

COPY ./requirements.txt /code/requirements.txt

The preceding line copies the requirements.txt file from our local directory to the /code directory inside the container. This file lists the Python packages that our application depends on.

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

This line uses pip to install the packages listed in requirements.txt. The --no-cache-dir flag tells pip not to use any cached packages, the --upgrade flag tells pip to upgrade any already-installed packages if newer versions are available, and the -r flag specifies the requirements file to use.

RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

These lines create a new user named user with a user ID of 1000, switch to that user, and then set the home directory to /home/user. The ENV command sets the HOME and PATH environment variables. PATH is modified to include the .local/bin directory in the user’s home directory so that any binaries installed by pip are available on the command line. Refer to the documentation to learn more about user permissions.

WORKDIR $HOME/app

This line sets the working directory inside the container to $HOME/app, which is /home/user/app.

COPY --chown=user . $HOME/app

The preceding line copies the contents of our local directory into the /home/user/app directory inside the container, setting the owner of the files to the user that we created earlier.

CMD ["uvicorn", "app:app", "–host", "0.0.0.0", "–port", "7860"]

This line specifies the command to run when the container starts. It starts the FastAPI app using uvicorn and listens on port 7860. The --host flag specifies that the app should listen on all available network interfaces, and the app:app argument tells uvicorn to look for the app object in the app module in our code.

Here’s the complete Dockerfile:

# Use the official Python 3.9 image
FROM python:3.9

# Set the working directory to /code
WORKDIR /code

# Copy the current directory contents into the container at /code
COPY ./requirements.txt /code/requirements.txt

# Install requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# Set up a new user named "user" with user ID 1000
RUN useradd -m -u 1000 user
# Switch to the "user" user
USER user
# Set home to the user’s home directory
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

# Set the working directory to the user’s home directory
WORKDIR $HOME/app

# Copy the current directory contents into the container at $HOME/app setting the owner to the user
COPY --chown=user . $HOME/app

# Start the FastAPI app on port 7860, the default port expected by Spaces
CMD ["uvicorn", "app:app", "–host", "0.0.0.0", "–port", "7860"]

Once you commit this file, your space will switch to Building, and you should see the container’s build logs pop up so you can monitor its status. 👀

If you want to double-check the files, you can find all the files at our app Space.

Note: For a more basic introduction on using Docker with FastAPI, you can refer to the official guide from the FastAPI docs.

Using the app 🚀

If all goes well, your space should switch to Running once it’s done building, and the Swagger docs generated by FastAPI should appear in the App tab. Because these docs are interactive, you can try out the endpoint by expanding the details of the /generate endpoint and clicking Try it out! (Figure 4).

Figure 4: Trying out the app.

Conclusion

This article covered the basics of creating a Docker Space and building and configuring a basic FastAPI app for text generation using the google/flan-t5-small model. You can use this guide as a starting point to build more complex and exciting applications that leverage the power of machine learning.

If you’re interested in learning more about Docker templates and seeing curated examples, check out the Docker Examples page. There you’ll find a variety of templates to use as a starting point for your own projects, as well as tips and tricks for getting the most out of Docker templates. Happy coding!
Source: https://blog.docker.com/feed/

Announcing Docker+Wasm Technical Preview 2

We recently announced the first Technical Preview of Docker+Wasm, a special build that makes it possible to run Wasm containers with Docker using the WasmEdge runtime. Starting with Docker Desktop 4.15, everyone can try out these features by activating the experimental containerd image store.

We didn’t want to stop there, however. Since October, we’ve been working with our partners to make running Wasm workloads with Docker easier and to support more runtimes.

Now we are excited to announce a new Technical Preview of Docker+Wasm with the following three new runtimes:

spin from Fermyon

slight from Deislabs

wasmtime from Bytecode Alliance

All of these runtimes, including WasmEdge, use the runwasi library.

What is runwasi?

Runwasi is a multi-company effort to build a Rust library that makes it easier to write containerd shims for Wasm workloads. Last December, the runwasi project was donated and moved to the Cloud Native Computing Foundation’s containerd organization on GitHub.

With a lot of work from people at Microsoft, Second State, Docker, and others, runwasi now has enough functionality to run Wasm containers with Docker or in a Kubernetes cluster. We still have a lot of work to do, but there’s enough in place for people to start testing.

If you would like to chat with us or other runwasi maintainers, join us on the CNCF’s #runwasi channel.

Get the update

Ready to dive in and try it for yourself? Great! Before you do, understand that this is a technical preview build of Docker Desktop, so things might not work as expected. Be sure to back up your containers and images before proceeding.

Download and install the appropriate version for your system, then activate the containerd image store (Settings > Features in development > Use containerd for pulling and storing images), and you’ll be ready to go.

Figure 1: Docker Desktop beta features in development.

Mac (Intel)

Mac (Arm)

Linux (deb, Intel)

Linux (deb, Arm)

Linux (rpm, Intel)

Linux (Arch)

Windows

Let’s take Wasm for a spin 

The WasmEdge runtime is still present in Docker Desktop, so you can run: 

$ docker run --rm --runtime=io.containerd.wasmedge.v1 \
    --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

You can even run the same image with the wasmtime runtime:

$ docker run --rm --runtime=io.containerd.wasmtime.v1 \
    --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

In the next example, we will deploy a Wasm workload to Docker Desktop’s Kubernetes cluster using the slight runtime. To begin, make sure to activate Kubernetes in Docker Desktop’s settings, then create an example.yaml file:

cat > example.yaml <<EOT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-slight
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-slight
  template:
    metadata:
      labels:
        app: wasm-slight
    spec:
      runtimeClassName: wasmtime-slight-v1
      containers:
        - name: hello-slight
          image: dockersamples/slight-rust-hello:latest
          command: ["/"]
          resources:
            requests:
              cpu: 10m
              memory: 10Mi
            limits:
              cpu: 500m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-slight
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: wasm-slight
EOT

Note the runtimeClassName; Kubernetes will use this to select the right runtime for your application.

You can now run:

$ kubectl apply -f example.yaml
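
While the image downloads and the pod starts up, you can watch its progress:

$ kubectl get pods -w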

Once Kubernetes has downloaded the image and started the container, you should be able to curl it:

$ curl localhost/hello
hello

You now have a Wasm container running locally in Kubernetes. How exciting! 🎉

Note: You can take this same YAML file and run it in AKS.

Now let’s see how we can use this to run Bartholomew. Bartholomew is a micro-CMS made by Fermyon that works with the spin runtime. You’ll need to clone this repository; it’s a slightly modified Bartholomew template. 

The repository already contains a Dockerfile that you can use to build the Wasm container:

FROM scratch
COPY . .
ENTRYPOINT [ "/modules/bartholomew.wasm" ]

The Dockerfile copies the entire contents of the repository into the image and defines the built bartholomew.wasm module as the entry point of the image.

$ cd docker-wasm-bartholomew
$ docker build -t my-cms .
[+] Building 0.0s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 147B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.84kB 0.0s
=> CACHED [1/1] COPY . . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => exporting manifest sha256:cf85929e5a30bea9d436d447e6f2f2e 0.0s
=> => exporting config sha256:0ce059f2fe907a91a671f37641f4c5d73 0.0s
=> => naming to docker.io/library/my-cms:latest 0.0s
=> => unpacking to docker.io/library/my-cms:latest 0.0s

You are now ready to run your first WebAssembly micro-CMS:

$ docker run --runtime=io.containerd.spin.v1 -p 3000:80 my-cms

If you go to http://localhost:3000, you should be able to see the Bartholomew landing page (Figure 2).

Figure 2: Bartholomew landing page.

We’d love your feedback

All of this work is fresh from the oven and relies on the containerd image store in Docker, which is an experimental feature we’ve been working on for almost a year now. The good news is that we already see how this hard work can benefit everyone by adding more features to Docker. We’re still working on it, so let us know what you need. 

If you want to help us shape the future of WebAssembly with Docker, try it out, let us know what you think, and leave feedback on our public roadmap.

Source: https://blog.docker.com/feed/

Scaling Kubernetes to 7,500 nodes

openai.com – We’ve scaled Kubernetes clusters to 7,500 nodes, producing a scalable infrastructure for large models like GPT-3, CLIP, and DALL·E, but also for rapid small-scale iterative research such as Scaling L…
Source: news.kubernauts.io

We apologize. We did a terrible job announcing the end of Docker Free Teams.

We apologize for how we communicated and executed sunsetting Docker “Free Team” subscriptions, which alarmed the open source community.

For those of you catching up, we recently emailed accounts that are members of Free Team organizations to let them know that they will lose features unless they move to one of our supported free or paid offerings. This impacted less than 2% of our users. Note that this change does not affect Docker Personal, Docker Pro, Docker Team, or Docker Business accounts, Docker-Sponsored Open Source members, Docker Verified Publishers, or Docker Official Images.

The Docker Free Team subscription was deprecated in part because it was poorly targeted. In particular, it didn’t serve the open source audience as well as our recently updated Docker-Sponsored Open Source program, the latter offering benefits that exceed those of the deprecated Free Team plan.

We’d also like to clarify that public images will only be removed from Docker Hub if their maintainer decides to delete them. We’re sorry that our initial communications failed to make this clear.

We apologize again for the poor communication and execution surrounding this deprecation and promise to continue to listen to community feedback. 

How can I see if I’m affected?

Please consult the Organizations page of your Docker account; any affected organizations are labeled “Docker Free Team” in the “Subscription” column. Less than 2% of Docker users have a Free Team organization on their account.

Even if some of your organizations are affected, your individual Docker account (or other organizations) will not be affected by this change.

This change does NOT affect subscriptions such as Docker Personal, Docker Pro, Docker Team (paid), or Docker Business. 

Will open source images I rely on get deleted?

Not by Docker. Public images will only disappear if the maintainer of the image decides to proactively delete it from Docker Hub. If the maintainer takes no action, we will continue to distribute their public images. (Of course, if the maintainer migrates to the Docker-Sponsored Open Source program or to a paid Docker subscription, we will also continue to serve their public images.)

What if I run an open source project?  

Docker continues to offer a specific Docker-Sponsored Open Source (DSOS) program for open source projects, and it is not affected by the sunsetting of Free Team organizations. For new users interested in joining DSOS from a previous Free Team organization, we will defer any organization suspension or deletion while the DSOS application is under review. This will give organizations at least 30 days before we suspend them if the application is ultimately rejected. We are listening to feedback and may offer additional programs or plans based on your input. 

We encourage all open source projects to apply, even if you were not accepted on a prior application, as we updated the program and expanded eligibility in September 2022. We have assigned more staff to promptly review all applications.

What if I publish a Docker Official Image (DOI) or if I am a Docker Verified Publisher (DVP)?

Repositories in the DOI or DVP programs are not affected by this change.

How do I maintain access to private repositories? 

Private repositories for organizations are a feature of paid Docker subscriptions. If you’re currently using private repos in a legacy Free Team organization, those repos will be suspended as of April 13, 2023. However, you can choose from several subscription tiers that allow you to continue using private repos. Visit our pricing page to learn more.

Can someone else “squat” my namespace?

No. Even if your organization is suspended, deleted, or you choose to leave Docker voluntarily, your organization’s namespace will not be released, so other users cannot “squat” your images.

Can I migrate to a Personal account?

You can migrate from a Free Team organization to a Personal account by opening a support ticket. No action will be taken against your account while your ticket is being processed.

I use a repository outside of Docker. Can I export the data from my account? 

Yes. At any point before April 13, 2023, you may pull images from your private repositories on the Docker registry and push those images to another registry of your choosing. 
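
As a sketch, migrating one image looks like this (the repository names and target registry below are placeholders):

$ docker pull <org>/<repo>:<tag>
$ docker tag <org>/<repo>:<tag> registry.example.com/<org>/<repo>:<tag>
$ docker push registry.example.com/<org>/<repo>:<tag>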

How much does a Docker subscription cost?  

We offer three paid subscription tiers, starting with the Docker Pro plan. Visit our pricing page for more information. 

What are the benefits of a paid Docker subscription?   

Docker Pro is ideal for individual developers looking to accelerate productivity.

Docker Team is ideal for small teams looking to collaborate productively. 

Docker Business is ideal for businesses looking for centralized management and advanced security capabilities. Visit our pricing page to learn more.  

How do I upgrade to a paid Docker subscription?   

Sign in to your account at docker.com. 

Select Upgrade in the banner. 

Select the paid subscription tier you’d like to upgrade to and number of seats. 

Proceed to payment. 

Will my images automatically transfer to my paid subscription?   

Yes. Once you upgrade to a Docker paid subscription, your account and all associated configurations, images, and repositories remain 100% intact under the same name and settings. 
Source: https://blog.docker.com/feed/