Ad: Prime Day 2024 – get Tefal by Jamie Oliver pans
As part of Prime Day 2024, the best-selling frying pans from Tefal and Jamie Oliver are available at a top price on Amazon. (Prime Day, Tech/Hardware)
Source: Golem
In the podcast, we talk with a historian of science about the beginnings of AI research in Germany. (Besser Wissen, AI)
Source: Golem
The Chinese company Mingyang builds some of the largest wind turbines in the world. OceanX is a special wind turbine for offshore use. (Wind power, Technology)
Source: Golem
This ongoing Docker Labs GenAI series will explore the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing things as open source so you can play, explore, and hack with us, too.
Can an AI assistant help configure your project’s Git hooks?
Git hooks can save manual work during repetitive tasks, such as authoring commit messages or running checks before committing an update. But they can also be hard to configure, and are project dependent. Can generative AI make Git hooks easier to configure and use?
Simple prompts
From a high level, the basic prompt is simple:
How do I set up git hooks?
Although this includes no details about the actual project, the response from many foundation models is still useful. If you run this prompt in ChatGPT, you’ll see that the response contains details about how to use the .git/hooks folder, hints about authoring hook scripts, and even practical next steps for what you’ll need to learn about next. However, the advice is general. It has not been grounded by your project.
Project context
Your project itself is an important source of information for an assistant. Let’s start by providing information about types of source code in a project. Fortunately, there are plenty of existing tools for extracting project context, and these tools are often already available in Docker containers.
For example, here’s an image that will analyze any Git repository and return a list of languages being used. Let’s update our prompt with this new context.
How do I set up git hooks?
{{# linguist }}
This project contains code from the language {{ language }} so if you have any
recommendations pertaining to {{ language }}, please include them.
{{/linguist}}
In this example, we use Mustache templates to bind the output of our “linguist” analysis into this prompt.
The response from an LLM-powered assistant will change dramatically. Armed with specific advice about what kinds of files might be changed, the LLM will generate sample scripts and make suggestions about specific tools that might be useful for the kinds of code developed in this project. It might even be possible to cut and paste code out of the response to try setting up hooks yourself.
The pattern is quite simple. We already have tools to analyze projects, so let’s plug these in locally and give the LLM more context to make better suggestions (Figure 1).
Figure 1: Adding tools to provide context for LLM.
Expertise
Generative AI also offers new opportunities for experts to contribute knowledge that AI assistants can leverage to become even more useful. For example, we have learned that pre-commit can be helpful to organize the set of tools used to implement Git hooks.
To represent this learning, we add this prompt:
When configuring git hooks, our organization uses a tool called
[pre-commit](https://github.com/pre-commit/pre-commit).
There is also a base configuration that we have found useful across projects, so we add that to our assistant’s knowledge base as well.
If a user wants to configure git hooks, use this template which will need to be written to pre-commit-config.yaml
in the root of the user's project.
Start with the following code block:
```yaml
repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
    -   id: check-yaml
    -   id: trailing-whitespace
    -   id: check-merge-conflict
-   repo: https://github.com/jorisroovers/gitlint
    rev: main
    hooks:
    -   id: gitlint
-   repo: local
    hooks:
```
Finally, as we learn about new tools that are useful for certain projects, we describe this information. For example, as an expert, I might want to suggest that teams using Golang include a particular linting tool in the Git hooks configuration.
If we detect `Go` in the project, add the following hook to the hooks entry in the `local` repo entry.
```yaml
-   id: golangci-lint
    name: golangci-lint
    entry: golangci/golangci-lint
    files: "\.go$"
```
With these additions, the response from our assistant becomes precise. We have found that our assistant can now write hooks scripts and write complete YAML configuration files that are project-specific and ready to copy directly into a project.
Somewhat surprisingly, the assistant can now also recommend tools that were not mentioned explicitly in our prompts but that use the same configuration syntax established for other tools. Guided by our examples, the LLM suggests new tools while still configuring them within the framework and syntax we provided, effectively extending the assistant’s capabilities on its own.
Most importantly, the response from the assistant is now not only actionable to the developer, saving them time, but it is also specific enough that we could pass the response to a simple agent to take the action automatically.
Adding tools
For this example, the only tool we really need is a file-writer. The change to our prompt is to add one instruction to go ahead and write the configuration into the project.
Write the final yaml content to our project at the path pre-commit-config.yaml. Write both the `pre-commit` and `commit-message` scripts to `.git/hooks` and make them executable.
Besides the prompt, there is another crucial step that we are skipping at this point. The assistant must be told that it is capable of writing content into files. However, this is really just a registration step.
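As a concrete illustration, that registration might look like the following tool definition in the OpenAI function-calling format. The description strings and schema details here are assumptions for the sketch; only the write_file name and its path/content arguments are taken from the actual response shown below.

```json
{
  "type": "function",
  "function": {
    "name": "write_file",
    "description": "Write content to a file in the user's project",
    "parameters": {
      "type": "object",
      "properties": {
        "path": {
          "type": "string",
          "description": "File path relative to the project root"
        },
        "content": {
          "type": "string",
          "description": "Full text content to write"
        }
      },
      "required": ["path", "content"]
    }
  }
}
```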
The important thing is that we can give our agent the tools it needs to perform tasks. In doing so, the response from the LLM undergoes a transition. Instead of text output, the LLM responds with instructions for our agent. If we’re using OpenAI function calling, we’ll see a request that looks something like the following JSON. It’s not meant to be read by us, of course; it’s an instruction to the agent that knows how to update your project for you.
{
  "id": "call_4LCo0CQqCHCGGZea3qlaTg5h",
  "type": "function",
  "function": {
    "name": "write_file",
    "arguments": "{\n  \"path\": \"pre-commit-config.yaml\",\n  \"content\": \"repos:\n-   repo: https://github.com/pre-commit/pre-commit-hooks\n    rev: v2.3.0\n    hooks:\n    -   id: check-yaml\n    -   id: trailing-whitespace\n    -   id: check-merge-conflict\n-   repo: https://github.com/jorisroovers/gitlint\n    rev: main\n    hooks:\n    -   id: gitlint\n-   repo: local\n    hooks:\n    -   id: markdownlint\n        name: markdown linter\n        entry: markdownlint/markdownlint\n        files: \"\.md$\"\n    -   id: python-black\n        name: python black formatter\n        entry: black\n        files: \"\.py$\"\"\n}"
  }
}
A more sophisticated version of the file-writer function might communicate with an editor agent capable of presenting recommended file changes to a developer using native IDE concepts, like editor quick-fixes and hints. In other words, tools can help generative AI to meet developers where they are. And the answer to the question:
How do I set up git hooks?
becomes, “Let me just show you.”
Docker as tool engine
The tools mentioned in the previous sections have all been delivered as Docker containers. One goal of this work has been to verify that an assistant can bootstrap itself starting from a Docker-only environment. Docker is important here because it has been critical in smoothing over many of the system/environment gaps that LLMs struggle with.
We have observed that a significant barrier to activating even simple local assistants is the complexity of managing a safe and reliable environment for running these tools. Therefore, we are constraining ourselves to use only tools that can be lazily pulled from public registries.
For AI assistants to transform how we consume tools, we believe that both tool distribution and knowledge distribution are key factors. In the above example, we can see how LLM responses can be transformed by tools from unactionable and vague to hyper-project-focused and actionable. The difference is tools.
To follow along with this effort, check out the GitHub repository for this project.
Learn more
Subscribe to the Docker Newsletter.
Read the Docker Labs GenAI series.
Get the latest release of Docker Desktop.
Vote on what’s next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.
Source: https://blog.docker.com/feed/
Docker’s flexibility and robustness as a containerization tool come with a complexity that can be daunting. Multiple methods are available to accomplish similar tasks, and users must understand the pros and cons of the available options to choose the best approach for their projects.
One confusing area concerns the RUN, CMD, and ENTRYPOINT Dockerfile instructions. In this article, we will discuss the differences between these instructions and describe use cases for each.
RUN
The RUN instruction is used in Dockerfiles to execute commands that build and configure the Docker image. These commands are executed during the image build process, and each RUN instruction creates a new layer in the Docker image. For example, if you create an image that requires specific software or libraries installed, you would use RUN to execute the necessary installation commands.
The following example shows how to instruct the Docker build process to update the apt cache and install Apache during an image build:
RUN apt update && apt -y install apache2
RUN instructions should be used judiciously to keep the image layers to a minimum, combining related commands into a single RUN instruction where possible to reduce image size.
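For example, a common pattern is to chain the update, install, and cleanup steps in one RUN instruction so that the apt cache never persists in any layer (the package choice here is just illustrative):

```dockerfile
# Single layer: update, install, and clean up together,
# so the apt package lists are never baked into the image
RUN apt update && \
    apt -y install apache2 && \
    rm -rf /var/lib/apt/lists/*
```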
CMD
The CMD instruction specifies the default command to run when a container is started from the Docker image. If no command is specified during the container startup (i.e., in the docker run command), this default is used. CMD can be overridden by supplying command-line arguments to docker run.
CMD is useful for setting default commands and easily overridden parameters. It is often used in images as a way of defining default run parameters and can be overridden from the command line when the container is run.
For example, by default, you might want a web server to start, but users could override this to run a shell instead:
CMD ["apache2ctl", "-DFOREGROUND"]
Users can start the container with docker run -it <image> /bin/bash to get a Bash shell instead of starting Apache.
ENTRYPOINT
The ENTRYPOINT instruction sets the default executable for the container. It is similar to CMD, but unlike CMD it is not overridden by the command-line arguments passed to docker run. Instead, any command-line arguments are appended to the ENTRYPOINT command.
Note: Use ENTRYPOINT when you need your container to always run the same base command and you want to allow users to append additional arguments at the end.
ENTRYPOINT is particularly useful for turning a container into a standalone executable. For example, suppose you are packaging a custom script that requires arguments (e.g., “my_script extra_args”). In that case, you can use ENTRYPOINT to always run the script process (“my_script”) and then allow the image users to specify the “extra_args” on the docker run command line. You can do the following:
ENTRYPOINT ["my_script"]
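A minimal sketch of that setup, assuming my_script is an executable copied into the image:

```dockerfile
FROM ubuntu:20.04
# Hypothetical script; any executable works the same way
COPY my_script /usr/local/bin/my_script
ENTRYPOINT ["/usr/local/bin/my_script"]
```

With this image, docker run myimage extra_args runs my_script extra_args inside the container.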
Combining CMD and ENTRYPOINT
The CMD instruction can be used to provide default arguments to an ENTRYPOINT if it is specified in the exec form. This setup allows the entry point to be the main executable and CMD to specify additional arguments that can be overridden by the user.
For example, you might have a container that runs a Python application where you always want to use the same application file but allow users to specify different command-line arguments:
ENTRYPOINT ["python", "/app/my_script.py"]
CMD ["--default-arg"]
Running docker run myimage --user-arg executes python /app/my_script.py --user-arg.
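Put together as a minimal (hypothetical) Dockerfile, the interaction looks like this:

```dockerfile
FROM python:3.12-slim
COPY my_script.py /app/my_script.py
# Always run the same application file...
ENTRYPOINT ["python", "/app/my_script.py"]
# ...with this argument unless the user supplies their own
CMD ["--default-arg"]
```

Running docker run myimage keeps the default; any arguments after the image name replace only the CMD portion, leaving the ENTRYPOINT intact.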
The following table provides an overview of these commands and use cases.
Command description and use cases
| Command | Description | Use Case |
|---|---|---|
| CMD | Defines the default executable of a Docker image. It can be overridden by docker run arguments. | Utility images that allow users to pass different executables and arguments on the command line. |
| ENTRYPOINT | Defines the default executable. It can be overridden only by the `--entrypoint` docker run argument. | Images built for a specific purpose where overriding the default executable is not desired. |
| RUN | Executes commands to build layers. | Building an image. |
What is PID 1 and why does it matter?
In the context of Unix and Unix-like systems, including Docker containers, PID 1 refers to the first process started during system boot. All other processes are then started by PID 1, which in the process tree model is the parent of every process in the system.
In Docker containers, the process that runs as PID 1 is crucial, because it is responsible for managing all other processes inside the container. Additionally, PID 1 is the process that reviews and handles signals from the Docker host. For example, a SIGTERM into the container will be caught and processed by PID 1, and the container should gracefully shut down.
When commands are executed in Docker using the shell form, a shell process (/bin/sh -c) typically becomes PID 1. The shell, however, does not properly forward these signals to its child processes, potentially leading to unclean shutdowns of the container. In contrast, when using the exec form, the command runs directly as PID 1 without involving a shell, which allows it to receive and handle signals directly.
This behavior ensures that the container can gracefully stop, restart, or handle interruptions, making the exec form preferable for applications that require robust and responsive signal handling.
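To make the difference concrete, compare these two variants of the Apache example from above (shown side by side for comparison, not as one Dockerfile):

```dockerfile
# Shell form: /bin/sh -c becomes PID 1, and on `docker stop`
# the SIGTERM reaches the shell rather than apache2ctl
CMD apache2ctl -DFOREGROUND

# Exec form: apache2ctl itself is PID 1 and receives
# SIGTERM directly, allowing a graceful shutdown
CMD ["apache2ctl", "-DFOREGROUND"]
```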
Shell and exec forms
In the previous examples, we used two ways to pass arguments to the RUN, CMD, and ENTRYPOINT instructions. These are referred to as shell form and exec form.
Note: The key visual difference is that the exec form is passed as a comma-delimited array of commands and arguments with one argument/command per element. Conversely, shell form is expressed as a string combining commands and arguments.
Each form has implications for executing commands within containers, influencing everything from signal handling to environment variable expansion. The following table provides a quick reference guide for the different forms.
Shell and exec form reference
| Form | Description | Example |
|---|---|---|
| Shell form | Takes the form of `<INSTRUCTION> <COMMAND>`. | `CMD echo TEST` or `ENTRYPOINT echo TEST` |
| Exec form | Takes the form of `<INSTRUCTION> ["EXECUTABLE", "PARAMETER"]`. | `CMD ["echo", "TEST"]` or `ENTRYPOINT ["echo", "TEST"]` |
In the shell form, the command is run in a subshell, typically /bin/sh -c on Linux systems. This form is useful because it allows shell processing (like variable expansion, wildcards, etc.), making it more flexible for certain types of commands (see this shell scripting article for examples of shell processing). However, it also means that the process running your command isn’t the container’s PID 1, which can lead to issues with signal handling because signals sent by Docker (like SIGTERM for graceful shutdowns) are received by the shell rather than the intended process.
The exec form does not invoke a command shell. This means the command you specify is executed directly as the container’s PID 1, which is important for correctly handling signals sent to the container. Additionally, this form does not perform shell expansions, so it’s more secure and predictable, especially for specifying arguments or commands from external sources.
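A quick illustration of the expansion difference:

```dockerfile
# Shell form: the shell expands $HOME before echo runs
CMD echo "Home is $HOME"

# Exec form: no shell is involved, so the literal text "$HOME"
# would be printed; to get expansion, invoke a shell explicitly
CMD ["/bin/sh", "-c", "echo Home is $HOME"]
```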
Putting it all together
To illustrate the practical application and nuances of Docker’s RUN, CMD, and ENTRYPOINT instructions, along with the choice between shell and exec forms, let’s review some examples. These examples demonstrate how each instruction can be utilized effectively in real-world Dockerfile scenarios, highlighting the differences between shell and exec forms.
Through these examples, you’ll better understand when and how to use each directive to tailor container behavior precisely to your needs, ensuring proper configuration, security, and performance of your Docker containers. This hands-on approach will help consolidate the theoretical knowledge we’ve discussed into actionable insights that can be directly applied to your Docker projects.
RUN instruction
For RUN, used during the Docker build process to install packages or modify files, choosing between shell and exec form can depend on the need for shell processing. The shell form is necessary for commands that require shell functionality, such as pipelines or file globbing. However, the exec form is preferable for straightforward commands without shell features, as it reduces complexity and potential errors.
# Shell form, useful for complex scripting
RUN apt-get update && apt-get install -y nginx
# Exec form, for direct command execution
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "nginx"]
CMD and ENTRYPOINT
These instructions control container runtime behavior. Using exec form with ENTRYPOINT ensures that the container’s main application handles signals directly, which is crucial for proper startup and shutdown behavior. CMD can provide default parameters to an ENTRYPOINT defined in exec form, offering flexibility and robust signal handling.
# ENTRYPOINT with exec form for direct process control
ENTRYPOINT ["httpd"]
# CMD provides default parameters, can be overridden at runtime
CMD ["-D", "FOREGROUND"]
Signal handling and flexibility
Using ENTRYPOINT in exec form and CMD to specify parameters ensures that Docker containers can handle operating system signals gracefully, respond to user inputs dynamically, and maintain secure and predictable operations.
This setup is particularly beneficial for containers that run critical applications needing reliable shutdown and configuration behaviors. The following table shows key differences between the forms.
Key differences between shell and exec
| | Shell Form | Exec Form |
|---|---|---|
| Form | Commands without `[]` brackets. Run by the container's shell, e.g., `/bin/sh -c`. | Commands with `[]` brackets. Run directly, not through a shell. |
| Variable Substitution | Inherits environment variables from the shell, such as `$HOME` and `$PATH`. | Does not inherit shell environment variables but behaves the same for `ENV` instruction variables. |
| Shell Features | Supports sub-commands, piping output, chaining commands, I/O redirection, etc. | Does not support shell features. |
| Signal Trapping & Forwarding | Most shells do not forward process signals to child processes. | Directly traps and forwards signals like `SIGINT`. |
| Usage with ENTRYPOINT | Can cause issues with signal forwarding. | Recommended due to better signal handling. |
| CMD as ENTRYPOINT Parameters | Not possible with the shell form. | If the first item in the array is not a command, all items are used as parameters for the ENTRYPOINT. |
Figure 1 provides a decision tree for using RUN, CMD, and ENTRYPOINT in building a Dockerfile.
Figure 1: Decision tree — RUN, CMD, ENTRYPOINT.
Figure 2 shows a decision tree to help determine when to use exec form or shell form.
Figure 2: Decision tree — exec vs. shell form.
Examples
The following section will walk through the high-level differences between CMD and ENTRYPOINT. In these examples, the RUN command is not included, given that the only decision to make there is easily handled by reviewing the two different formats.
Test Dockerfile
# syntax=docker/dockerfile:1.3-labs
# Use syntax version 1.3-labs (the directive above must be the
# first line of the Dockerfile to take effect)
# Use the Ubuntu 20.04 image as the base image
FROM ubuntu:20.04
# Run the following commands inside the container:
# 1. Update the package lists for upgrades and new package installations
# 2. Install the apache2-utils package (which includes the 'ab' tool)
# 3. Remove the package lists to reduce the image size
#
# This is all run in a HEREDOC; see
# https://www.docker.com/blog/introduction-to-heredocs-in-dockerfiles/
# for more details.
#
RUN <<EOF
apt-get update;
apt-get install -y apache2-utils;
rm -rf /var/lib/apt/lists/*;
EOF
# Set the default command
CMD ab
First build
We will build this image and tag it as ab.
$ docker build -t ab .
[+] Building 7.0s (6/6) FINISHED docker:desktop-linux
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 730B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:20.04 0.4s
=> CACHED [1/2] FROM docker.io/library/ubuntu:20.04@sha256:33a5cc25d22c45900796a1aca487ad7a7cb09f09ea00b779e 0.0s
=> [2/2] RUN <<EOF (apt-get update;…) 6.5s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:99ca34fac6a38b79aefd859540f88e309ca759aad0d7ad066c4931356881e518 0.0s
=> => naming to docker.io/library/ab
Run with CMD ab
Without any arguments, we get a usage block as expected.
$ docker run ab
ab: wrong number of arguments
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
-n requests Number of requests to perform
-c concurrency Number of multiple requests to make at a time
-t timelimit Seconds to max. to spend on benchmarking
This implies -n 50000
-s timeout Seconds to max. wait for each response
Default is 30 seconds
<-- SNIP -->
However, if I run ab and include a URL to test, I initially get an error:
$ docker run --rm ab https://jayschmidt.us
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "https://jayschmidt.us": stat https://jayschmidt.us: no such file or directory: unknown.
The issue here is that the string supplied on the command line (https://jayschmidt.us) overrides the CMD instruction, and since it is not a valid command, an error is thrown. So, we need to specify the command to run:
$ docker run --rm ab ab https://jayschmidt.us/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking jayschmidt.us (be patient)…..done
Server Software: nginx
Server Hostname: jayschmidt.us
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-ECDSA-AES256-GCM-SHA384,256,256
Server Temp Key: X25519 253 bits
TLS Server Name: jayschmidt.us
Document Path: /
Document Length: 12992 bytes
Concurrency Level: 1
Time taken for tests: 0.132 seconds
Complete requests: 1
Failed requests: 0
Total transferred: 13236 bytes
HTML transferred: 12992 bytes
Requests per second: 7.56 [#/sec] (mean)
Time per request: 132.270 [ms] (mean)
Time per request: 132.270 [ms] (mean, across all concurrent requests)
Transfer rate: 97.72 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 90 90 0.0 90 90
Processing: 43 43 0.0 43 43
Waiting: 43 43 0.0 43 43
Total: 132 132 0.0 132 132
Run with ENTRYPOINT
In this run, we remove the CMD ab instruction from the Dockerfile, replace it with ENTRYPOINT ["ab"], and then rebuild the image.
This is similar to, but different from, the CMD instruction: when you use ENTRYPOINT, you cannot override the command unless you pass the --entrypoint flag to the docker run command. Instead, any arguments passed to docker run are treated as arguments to the ENTRYPOINT.
$ docker run --rm ab "https://jayschmidt.us/"
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking jayschmidt.us (be patient)…..done
Server Software: nginx
Server Hostname: jayschmidt.us
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-ECDSA-AES256-GCM-SHA384,256,256
Server Temp Key: X25519 253 bits
TLS Server Name: jayschmidt.us
Document Path: /
Document Length: 12992 bytes
Concurrency Level: 1
Time taken for tests: 0.122 seconds
Complete requests: 1
Failed requests: 0
Total transferred: 13236 bytes
HTML transferred: 12992 bytes
Requests per second: 8.22 [#/sec] (mean)
Time per request: 121.709 [ms] (mean)
Time per request: 121.709 [ms] (mean, across all concurrent requests)
Transfer rate: 106.20 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 91 91 0.0 91 91
Processing: 31 31 0.0 31 31
Waiting: 31 31 0.0 31 31
Total: 122 122 0.0 122 122
What about syntax?
In the example above, we use ENTRYPOINT ["ab"] syntax to wrap the command we want to run in square brackets and quotes. However, it is possible to specify ENTRYPOINT ab (without quotes or brackets).
Let’s see what happens when we try that.
$ docker run --rm ab "https://jayschmidt.us/"
ab: wrong number of arguments
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
-n requests Number of requests to perform
-c concurrency Number of multiple requests to make at a time
-t timelimit Seconds to max. to spend on benchmarking
This implies -n 50000
-s timeout Seconds to max. wait for each response
Default is 30 seconds
<-- SNIP -->
Your first thought will likely be to re-run the docker run command as we did for CMD ab above, supplying both the executable and the argument:
$ docker run --rm ab ab "https://jayschmidt.us/"
ab: wrong number of arguments
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
-n requests Number of requests to perform
-c concurrency Number of multiple requests to make at a time
-t timelimit Seconds to max. to spend on benchmarking
This implies -n 50000
-s timeout Seconds to max. wait for each response
Default is 30 seconds
<-- SNIP -->
This is because a shell-form ENTRYPOINT runs the command under /bin/sh -c, which silently ignores any arguments supplied to docker run, and ENTRYPOINT itself can only be overridden by explicitly adding the --entrypoint argument to the docker run command. The takeaway is to always use ENTRYPOINT, in exec form, when you want to force the use of a given executable in the container when it is run.
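For completeness, an explicit override with the ab image from this example looks like the following; everything after the image name is passed as arguments to the new entrypoint (output omitted):
$ docker run --rm --entrypoint /bin/sh ab -c "ab -V"
Here /bin/sh replaces ab as the container's entrypoint, and -c "ab -V" is handed to that shell as its arguments.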
Wrapping up: Key takeaways and best practices
The decision-making process involving the use of RUN, CMD, and ENTRYPOINT, along with the choice between shell and exec forms, showcases Docker’s intricate nature. Each command serves a distinct purpose in the Docker ecosystem, impacting how containers are built, operate, and interact with their environments.
By selecting the right command and form for each specific scenario, developers can construct Docker images that are more reliable, secure, and optimized for efficiency. This level of understanding and application of Docker’s commands and their formats is crucial for fully harnessing Docker’s capabilities. Implementing these best practices ensures that applications deployed in Docker containers achieve maximum performance across various settings, enhancing development workflows and production deployments.
Learn more
CMD
ENTRYPOINT
RUN
Stay updated on the latest Docker news! Subscribe to the Docker Newsletter.
Get the latest release of Docker Desktop.
New to Docker? Get started.
Have questions? The Docker community is here to help.
Source: https://blog.docker.com/feed/
Strix Point, aka Ryzen AI 300, gets the fastest graphics unit for notebooks so far. A comparison with the Arc 140V is not entirely fair, however. (APU, Processor)
Source: Golem
Jennifer and Björn Pankratz comment on the shutdown of Piranha Bytes, the rights to Gothic, and their plans for the start-up Pithead Studio. (Piranha Bytes, Risen)
Source: Golem
Water is precious, on Dune and in space alike. Scientists have taken inspiration from Fremen clothing for a new spacesuit. (Spaceflight, Mars)
Source: Golem
AMD has explained numerous details of its new processor architecture. New tuning functions and fast memory profiles are also said to be included. (Processor, AMD)
Source: Golem
Now the Telekom subsidiary is also launching an annual package. Those who book the annual plan early get more data volume at a reduced fee. (Congstar, Telekom)
Source: Golem