diff --git a/content/get-started/workshop/07_multi_container.md b/content/get-started/workshop/07_multi_container.md index aab577adeb3..e08b970bbd7 100644 --- a/content/get-started/workshop/07_multi_container.md +++ b/content/get-started/workshop/07_multi_container.md @@ -192,7 +192,7 @@ The todo app supports the setting of a few environment variables to specify MySQ > > While using env vars to set connection settings is generally accepted for development, it's highly discouraged > when running applications in production. Diogo Monica, a former lead of security at Docker, -> [wrote a fantastic blog post](https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/) +> [wrote a fantastic blog post](https://blog.diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/) > explaining why. > > A more secure mechanism is to use the secret support provided by your container orchestration framework. In most cases, diff --git a/content/manuals/ai/compose/model-runner.md b/content/manuals/ai/compose/model-runner.md deleted file mode 100644 index bf82c7fbfeb..00000000000 --- a/content/manuals/ai/compose/model-runner.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -title: Use Docker Model Runner -description: Learn how to integrate Docker Model Runner with Docker Compose to build AI-powered applications -keywords: compose, docker compose, model runner, ai, llm, artificial intelligence, machine learning -weight: 20 -aliases: - - /compose/how-tos/model-runner/ -params: - sidebar: - badge: - color: green - text: New ---- - -{{< summary-bar feature_name="Compose model runner" >}} - -Docker Model Runner can be integrated with Docker Compose to run AI models as part of your multi-container applications. -This lets you define and run AI-powered applications alongside your other services. - -## Prerequisites - -- Docker Compose v2.38 or later -- Docker Desktop 4.43 or later -- Docker Desktop for Mac with Apple Silicon or Docker Desktop for Windows with NVIDIA GPU -- [Docker Model Runner enabled in Docker Desktop](/manuals/ai/model-runner.md#enable-docker-model-runner) - -## Use `models` definition - -The [`models` top-level element](/manuals/ai/compose/models-and-compose.md) in the Compose file lets you define AI models to be used by your application. -Compose can then use Docker Model Runner as the model runtime. - -The following example shows how to provide the minimal configuration to use a model within your Compose application: - -```yaml -services: - my-chat-app: - image: my-chat-app - models: - - smollm2 - -models: - smollm2: - model: ai/smollm2 -``` - -### How it works - -During the `docker compose up` process, Docker Model Runner automatically pulls and runs the specified model. -It also sends Compose the model tag name and the URL to access the model runner. - -This information is then passed to services which declare a dependency on the model provider. -In the example above, the `my-chat-app` service receives 2 environment variables prefixed by the service name: -- `SMOLLM2_ENDPOINT` with the URL to access the model -- `SMOLLM2_MODEL` with the model name - -This lets the `my-chat-app` service to interact with the model and use it for its own purposes. 
-
-### Customizing environment variables
-
-You can customize the environment variable names which will be passed to your service container using the long syntax:
-
-```yaml
-services:
-  my-chat-app:
-    image: my-chat-app
-    models:
-      smollm2:
-        endpoint_var: AI_MODEL_URL
-        model_var: AI_MODEL_NAME
-
-models:
-  smollm2:
-    model: ai/smollm2
-```
-
-With this configuration, your `my-chat-app` service will receive:
-- `AI_MODEL_URL` with the URL to access the model
-- `AI_MODEL_NAME` with the model name
-
-This allows you to use more descriptive variable names that match your application's expectations.
-
-
-## Alternative configuration with Provider services
-
-> [!TIP]
->
-> Use the []`models` top-level element](#use-models-definition) instead.
-
-Compose introduced a new service type called `provider` that allows you to declare platform capabilities required by your application. For AI models, you can use the `model` type to declare model dependencies.
-
-Here's an example of how to define a model provider:
-
-```yaml
-services:
-  chat:
-    image: my-chat-app
-    depends_on:
-      - ai_runner
-
-  ai_runner:
-    provider:
-      type: model
-      options:
-        model: ai/smollm2
-```
-
-Notice the dedicated `provider` attribute in the `ai_runner` service.
-This attribute specifies that the service is a model provider and lets you define options such as the name of the model to be used.
-
-There is also a `depends_on` attribute in the `my-chat-app` service.
-This attribute specifies that the `my-chat-app` service depends on the `ai_runner` service.
-This means that the `ai_runner` service will be started before the `my-chat-app` service to allow injection of model information to the `my-chat-app` service.
-
-## Reference
-
-- [Docker Model Runner documentation](/manuals/ai/model-runner.md)
diff --git a/content/manuals/ai/compose/models-and-compose.md b/content/manuals/ai/compose/models-and-compose.md
index f657715c1ea..e7863b8179a 100644
--- a/content/manuals/ai/compose/models-and-compose.md
+++ b/content/manuals/ai/compose/models-and-compose.md
@@ -3,6 +3,9 @@ title: Define AI Models in Docker Compose applications
 linkTitle: Use AI models in Compose
 description: Learn how to define and use AI models in Docker Compose applications using the models top-level element
 keywords: compose, docker compose, models, ai, machine learning, cloud providers, specification
+aliases:
+  - /compose/how-tos/model-runner/
+  - /ai/compose/model-runner/
 weight: 10
 params:
   sidebar:
@@ -18,11 +21,12 @@ Compose lets you define AI models as core components of your application, so you
 ## Prerequisites
 
 - Docker Compose v2.38 or later
-- A platform that supports Compose models such as Docker Model Runner or compatible cloud providers
+- A platform that supports Compose models such as Docker Model Runner (DMR) or compatible cloud providers.
+  If you are using DMR, see the [requirements](/manuals/ai/model-runner/_index.md#requirements).
 
 ## What are Compose models?
 
-Compose `models` are a standardized way to define AI model dependencies in your application. By using the []`models` top-level element](/reference/compose-file/models.md) in your Compose file, you can:
+Compose `models` are a standardized way to define AI model dependencies in your application. 
By using the [`models` top-level element](/reference/compose-file/models.md) in your Compose file, you can: - Declare which AI models your application needs - Specify model configurations and requirements @@ -66,8 +70,15 @@ models: Common configuration options include: - `model` (required): The OCI artifact identifier for the model. This is what Compose pulls and runs via the model runner. - `context_size`: Defines the maximum token context size for the model. + + > [!NOTE] + > Each model has its own maximum context size. When increasing the context length, + > consider your hardware constraints. In general, try to keep context size + > as small as feasible for your specific needs. + - `runtime_flags`: A list of raw command-line flags passed to the inference engine when the model is started. -- Platform-specific options may also be available via extensions attributes `x-*` + For example, if you use llama.cpp, you can pass any of [the available parameters](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md). +- Platform-specific options may also be available via extension attributes `x-*` ## Service model binding @@ -131,18 +142,23 @@ One of the key benefits of using Compose models is portability across different ### Docker Model Runner -When Docker Model Runner is enabled: +When [Docker Model Runner is enabled](/manuals/ai/model-runner/_index.md): ```yaml services: chat-app: image: my-chat-app models: - - llm + llm: + endpoint_var: AI_MODEL_URL + model_var: AI_MODEL_NAME models: llm: model: ai/smollm2 + context_size: 4096 + runtime_flags: + - "--no-prefill-assistant" ``` Docker Model Runner will: @@ -150,6 +166,34 @@ Docker Model Runner will: - Provide endpoint URLs for accessing the model - Inject environment variables into the service +#### Alternative configuration with provider services + +> [!TIP] +> +> This approach is deprecated. Use the [`models` top-level element](#basic-model-definition) instead. + +You can also use the `provider` service type, which allows you to declare platform capabilities required by your application. +For AI models, you can use the `model` type to declare model dependencies. + +To define a model provider: + +```yaml +services: + chat: + image: my-chat-app + depends_on: + - ai_runner + + ai_runner: + provider: + type: model + options: + model: ai/smollm2 + context-size: 1024 + runtime-flags: "--no-prefill-assistant" +``` + + ### Cloud providers The same Compose file can run on cloud providers that support Compose models: @@ -181,4 +225,4 @@ Cloud providers might: - [`models` top-level element](/reference/compose-file/models.md) - [`models` attribute](/reference/compose-file/services.md#models) - [Docker Model Runner documentation](/manuals/ai/model-runner.md) -- [Compose Model Runner documentation](/manuals/ai/compose/model-runner.md) \ No newline at end of file +- [Compose Model Runner documentation](/manuals/ai/compose/models-and-compose.md) diff --git a/content/manuals/ai/model-runner/_index.md b/content/manuals/ai/model-runner/_index.md index a8856783577..8549f3d6127 100644 --- a/content/manuals/ai/model-runner/_index.md +++ b/content/manuals/ai/model-runner/_index.md @@ -84,7 +84,7 @@ Models are pulled from Docker Hub the first time they're used and stored locally > Using Testcontainers or Docker Compose? 
 > [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/)
 > and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and
-> [Docker Compose](/manuals/ai/compose/model-runner.md) now support Docker Model Runner.
+> [Docker Compose](/manuals/ai/compose/models-and-compose.md) now support Docker Model Runner.
 
 ## Enable Docker Model Runner
 
diff --git a/content/manuals/build/buildkit/dockerfile-release-notes.md b/content/manuals/build/buildkit/dockerfile-release-notes.md
index 0877b592c5a..136135006e6 100644
--- a/content/manuals/build/buildkit/dockerfile-release-notes.md
+++ b/content/manuals/build/buildkit/dockerfile-release-notes.md
@@ -13,6 +13,20 @@ issues, and bug fixes in [Dockerfile reference](/reference/dockerfile.md).
 
 For usage, see the [Dockerfile frontend syntax](frontend.md) page.
 
+## 1.17.0
+
+{{< release-date date="2025-06-17" >}}
+
+The full release notes for this release are available
+[on GitHub](https://github.com/moby/buildkit/releases/tag/dockerfile%2F1.17.0).
+
+```dockerfile
+# syntax=docker/dockerfile:1.17.0
+```
+
+* Add `ADD --unpack=bool` to control whether archives from a URL path are unpacked. The default is to detect unpack behavior based on the source path, as in previous versions. [moby/buildkit#5991](https://github.com/moby/buildkit/pull/5991)
+* Add support for `ADD --chown` when unpacking archives, similar to when copying regular files. [moby/buildkit#5987](https://github.com/moby/buildkit/pull/5987)
+
 ## 1.16.0
 
 {{< release-date date="2025-05-22" >}}
diff --git a/content/manuals/build/release-notes.md b/content/manuals/build/release-notes.md
index cf2d98245c6..aeae1f67267 100644
--- a/content/manuals/build/release-notes.md
+++ b/content/manuals/build/release-notes.md
@@ -10,6 +10,31 @@ toc_max: 2
 This page contains information about the new features, improvements, and bug
 fixes in [Docker Buildx](https://github.com/docker/buildx).
 
+## 0.25.0
+
+{{< release-date date="2025-06-17" >}}
+
+The full release notes for this release are available
+[on GitHub](https://github.com/docker/buildx/releases/tag/v0.25.0).
+
+### New
+
+- Bake now supports defining `extra-hosts`. [docker/buildx#3234](https://github.com/docker/buildx/pull/3234)
+
+### Enhancements
+
+- Add support for bearer token auth. [docker/buildx#3233](https://github.com/docker/buildx/pull/3233)
+- Add custom exit codes for internal, resource, and canceled errors in commands. [docker/buildx#3214](https://github.com/docker/buildx/pull/3214)
+- Show variable type when using `--list=variables` with Bake. [docker/buildx#3207](https://github.com/docker/buildx/pull/3207)
+- Consider typed, value-less variables to have a `null` value in Bake. [docker/buildx#3198](https://github.com/docker/buildx/pull/3198)
+- Add support for multiple IPs in extra hosts configuration. [docker/buildx#3244](https://github.com/docker/buildx/pull/3244)
+- Add support for updated SLSA V1 provenance in `buildx history` commands. [docker/buildx#3245](https://github.com/docker/buildx/pull/3245)
+- Add support for `RegistryToken` configuration in imagetools commands. [docker/buildx#3233](https://github.com/docker/buildx/pull/3233)
+
+### Bug fixes
+
+- Fix `keep-storage` flag deprecation notice for `prune` command. 
[docker/buildx#3216](https://github.com/docker/buildx/pull/3216) + ## 0.24.0 {{< release-date date="2025-05-21" >}} diff --git a/content/manuals/desktop/release-notes.md b/content/manuals/desktop/release-notes.md index 3598b0fecc9..d3f085e49c5 100644 --- a/content/manuals/desktop/release-notes.md +++ b/content/manuals/desktop/release-notes.md @@ -76,10 +76,6 @@ For more frequently asked questions, see the [FAQs](/manuals/desktop/troubleshoo - Return an explicit error to a Docker API / `docker` CLI command if Docker Desktop has been manually paused. - Fixed an issue where unknown keys in Admin and Cloud settings caused a failure. -#### For Linux - -- Bump `virtiofsd` to `1.13.1`. - #### For Mac - Removed `eBPF` which blocked `io_uring`. To enable `io_uring` in a container, use `--security-opt seccomp=unconfined`. Fixes [docker/for-mac#7707](https://github.com/docker/for-mac/issues/7707). @@ -240,7 +236,7 @@ For more frequently asked questions, see the [FAQs](/manuals/desktop/troubleshoo - Docker Model Runner is now available on x86 Windows machines with NVIDIA GPUs. - You can now [push models](/manuals/ai/model-runner.md#push-a-model-to-docker-hub) to Docker Hub with Docker Model Runner. - Added support for Docker Model Runner's model management and chat interface in Docker Desktop for Mac and Windows (on hardware supporting Docker Model Runner). Users can now view, interact with, and manage local AI models through a new dedicated interface. -- [Docker Compose](/manuals/ai/compose/model-runner.md) and Testcontainers [Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/) now support Docker Model Runner. +- [Docker Compose](/manuals/ai/compose/models-and-compose.md) and Testcontainers [Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/) now support Docker Model Runner. - Introducing Docker Desktop in the [Microsoft App Store](https://apps.microsoft.com/detail/xp8cbj40xlbwkx?hl=en-GB&gl=GB). ### Upgrades