publish updates from main #23007


Merged 5 commits on Jul 7, 2025
2 changes: 1 addition & 1 deletion content/get-started/workshop/07_multi_container.md
@@ -192,7 +192,7 @@ The todo app supports the setting of a few environment variables to specify MySQ
>
> While using env vars to set connection settings is generally accepted for development, it's highly discouraged
> when running applications in production. Diogo Monica, a former lead of security at Docker,
> [wrote a fantastic blog post](https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/)
> [wrote a fantastic blog post](https://blog.diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/)
> explaining why.
>
> A more secure mechanism is to use the secret support provided by your container orchestration framework. In most cases,
116 changes: 0 additions & 116 deletions content/manuals/ai/compose/model-runner.md

This file was deleted.

56 changes: 50 additions & 6 deletions content/manuals/ai/compose/models-and-compose.md
@@ -3,6 +3,9 @@ title: Define AI Models in Docker Compose applications
linkTitle: Use AI models in Compose
description: Learn how to define and use AI models in Docker Compose applications using the models top-level element
keywords: compose, docker compose, models, ai, machine learning, cloud providers, specification
aliases:
- /compose/how-tos/model-runner/
- /ai/compose/model-runner/
weight: 10
params:
sidebar:
@@ -18,11 +21,12 @@ Compose lets you define AI models as core components of your application, so you
## Prerequisites

- Docker Compose v2.38 or later
- A platform that supports Compose models such as Docker Model Runner or compatible cloud providers
- A platform that supports Compose models such as Docker Model Runner (DMR) or compatible cloud providers.
If you are using DMR, see the [requirements](/manuals/ai/model-runner/_index.md#requirements).

## What are Compose models?

Compose `models` are a standardized way to define AI model dependencies in your application. By using the []`models` top-level element](/reference/compose-file/models.md) in your Compose file, you can:
Compose `models` are a standardized way to define AI model dependencies in your application. By using the [`models` top-level element](/reference/compose-file/models.md) in your Compose file, you can:

- Declare which AI models your application needs
- Specify model configurations and requirements
@@ -66,8 +70,15 @@ models:
Common configuration options include:
- `model` (required): The OCI artifact identifier for the model. This is what Compose pulls and runs via the model runner.
- `context_size`: Defines the maximum token context size for the model.

> [!NOTE]
> Each model has its own maximum context size. When increasing the context length,
> consider your hardware constraints. In general, try to keep context size
> as small as feasible for your specific needs.

- `runtime_flags`: A list of raw command-line flags passed to the inference engine when the model is started.
- Platform-specific options may also be available via extensions attributes `x-*`
For example, if you use llama.cpp, you can pass any of [the available parameters](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md).
- Platform-specific options may also be available via extension attributes `x-*`
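Taken together, these options might look like the following sketch (the `x-my-platform-option` key is a hypothetical extension attribute, not a real option on any particular platform):

```yaml
models:
  llm:
    # Required: the OCI artifact Compose pulls and runs via the model runner
    model: ai/smollm2
    # Optional: cap the token context window (keep it as small as feasible)
    context_size: 4096
    # Optional: raw flags forwarded to the inference engine (llama.cpp here)
    runtime_flags:
      - "--no-prefill-assistant"
    # Hypothetical platform-specific extension attribute
    x-my-platform-option: example-value
```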

## Service model binding

@@ -131,25 +142,58 @@ One of the key benefits of using Compose models is portability across different

### Docker Model Runner

When Docker Model Runner is enabled:
When [Docker Model Runner is enabled](/manuals/ai/model-runner/_index.md):

```yaml
services:
chat-app:
image: my-chat-app
models:
- llm
llm:
endpoint_var: AI_MODEL_URL
model_var: AI_MODEL_NAME

models:
llm:
model: ai/smollm2
context_size: 4096
runtime_flags:
- "--no-prefill-assistant"
```

Docker Model Runner will:
- Pull and run the specified model locally
- Provide endpoint URLs for accessing the model
- Inject environment variables into the service
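As a sketch of how a service could consume those injected variables, assuming the model runner exposes an OpenAI-compatible `/chat/completions` endpoint: the variable names match the `endpoint_var`/`model_var` binding above, the fallback URL is a placeholder, and the actual request is left commented out since it needs a running model runner.

```python
import json
import os
import urllib.request

# Read the variables Compose injects via endpoint_var / model_var
# (the defaults here are placeholders for local experimentation).
base_url = os.environ.get("AI_MODEL_URL", "http://localhost:12434/engines/v1")
model = os.environ.get("AI_MODEL_NAME", "ai/smollm2")

# Build an OpenAI-style chat completion request
payload = {
    "model": model,
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    base_url.rstrip("/") + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # requires a running model runner
```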

#### Alternative configuration with provider services

> [!TIP]
>
> This approach is deprecated. Use the [`models` top-level element](#basic-model-definition) instead.

You can also use the `provider` service type, which allows you to declare platform capabilities required by your application.
For AI models, you can use the `model` type to declare model dependencies.

To define a model provider:

```yaml
services:
chat:
image: my-chat-app
depends_on:
- ai_runner

ai_runner:
provider:
type: model
options:
model: ai/smollm2
context-size: 1024
runtime-flags: "--no-prefill-assistant"
```


### Cloud providers

The same Compose file can run on cloud providers that support Compose models:
@@ -181,4 +225,4 @@ Cloud providers might:
- [`models` top-level element](/reference/compose-file/models.md)
- [`models` attribute](/reference/compose-file/services.md#models)
- [Docker Model Runner documentation](/manuals/ai/model-runner.md)
- [Compose Model Runner documentation](/manuals/ai/compose/model-runner.md)
- [Compose Model Runner documentation](/manuals/ai/compose/models-and-compose.md)
2 changes: 1 addition & 1 deletion content/manuals/ai/model-runner/_index.md
@@ -84,7 +84,7 @@ Models are pulled from Docker Hub the first time they're used and stored locally
> Using Testcontainers or Docker Compose?
> [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/)
> and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and
> [Docker Compose](/manuals/ai/compose/model-runner.md) now support Docker Model Runner.
> [Docker Compose](/manuals/ai/compose/models-and-compose.md) now support Docker Model Runner.

## Enable Docker Model Runner

14 changes: 14 additions & 0 deletions content/manuals/build/buildkit/dockerfile-release-notes.md
@@ -13,6 +13,20 @@ issues, and bug fixes in [Dockerfile reference](/reference/dockerfile.md).

For usage, see the [Dockerfile frontend syntax](frontend.md) page.

## 1.17.0

{{< release-date date="2025-06-17" >}}

The full release notes for this release are available
[on GitHub](https://github.com/moby/buildkit/releases/tag/dockerfile%2F1.17.0).

```dockerfile
# syntax=docker/dockerfile:1.17.0
```

* Add `ADD --unpack=bool` to control whether archives from a URL path are unpacked. The default is to detect unpack behavior based on the source path, as in previous versions. [moby/buildkit#5991](https://github.com/moby/buildkit/pull/5991)
* Add support for `ADD --chown` when unpacking archives, similar to when copying regular files. [moby/buildkit#5987](https://github.com/moby/buildkit/pull/5987)
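Combined, the two new flags might be used like this sketch (the URL and ownership values are placeholders):

```dockerfile
# syntax=docker/dockerfile:1.17.0
FROM alpine
# Explicitly unpack the remote archive and set ownership on the extracted files
ADD --unpack=true --chown=1000:1000 https://example.com/app.tar.gz /opt/app/
```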

## 1.16.0

{{< release-date date="2025-05-22" >}}
25 changes: 25 additions & 0 deletions content/manuals/build/release-notes.md
@@ -10,6 +10,31 @@ toc_max: 2
This page contains information about the new features, improvements, and bug
fixes in [Docker Buildx](https://github.com/docker/buildx).

## 0.25.0

{{< release-date date="2025-06-17" >}}

The full release notes for this release are available
[on GitHub](https://github.com/docker/buildx/releases/tag/v0.25.0).

### New

- Bake now supports defining `extra-hosts`. [docker/buildx#3234](https://github.com/docker/buildx/pull/3234)

### Enhancements

- Add support for bearer token auth. [docker/buildx#3233](https://github.com/docker/buildx/pull/3233)
- Add custom exit codes for internal, resource, and canceled errors in commands. [docker/buildx#3214](https://github.com/docker/buildx/pull/3214)
- Show variable type when using `--list=variables` with Bake. [docker/buildx#3207](https://github.com/docker/buildx/pull/3207)
- Consider typed, value-less variables to have `null` value in Bake. [docker/buildx#3198](https://github.com/docker/buildx/pull/3198)
- Add support for multiple IPs in extra hosts configuration. [docker/buildx#3244](https://github.com/docker/buildx/pull/3244)
- Support for updated SLSA V1 provenance in `buildx history` commands. [docker/buildx#3245](https://github.com/docker/buildx/pull/3245)
- Add support for `RegistryToken` configuration in imagetools commands. [docker/buildx#3233](https://github.com/docker/buildx/pull/3233)

### Bug fixes

- Fix `keep-storage` flag deprecation notice for `prune` command. [docker/buildx#3216](https://github.com/docker/buildx/pull/3216)

## 0.24.0

{{< release-date date="2025-05-21" >}}
6 changes: 1 addition & 5 deletions content/manuals/desktop/release-notes.md
@@ -76,10 +76,6 @@ For more frequently asked questions, see the [FAQs](/manuals/desktop/troubleshoo
- Return an explicit error to a Docker API / `docker` CLI command if Docker Desktop has been manually paused.
- Fixed an issue where unknown keys in Admin and Cloud settings caused a failure.

#### For Linux

- Bump `virtiofsd` to `1.13.1`.

#### For Mac

- Removed `eBPF` which blocked `io_uring`. To enable `io_uring` in a container, use `--security-opt seccomp=unconfined`. Fixes [docker/for-mac#7707](https://github.com/docker/for-mac/issues/7707).
@@ -240,7 +236,7 @@ For more frequently asked questions, see the [FAQs](/manuals/desktop/troubleshoo
- Docker Model Runner is now available on x86 Windows machines with NVIDIA GPUs.
- You can now [push models](/manuals/ai/model-runner.md#push-a-model-to-docker-hub) to Docker Hub with Docker Model Runner.
- Added support for Docker Model Runner's model management and chat interface in Docker Desktop for Mac and Windows (on hardware supporting Docker Model Runner). Users can now view, interact with, and manage local AI models through a new dedicated interface.
- [Docker Compose](/manuals/ai/compose/model-runner.md) and Testcontainers [Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/) now support Docker Model Runner.
- [Docker Compose](/manuals/ai/compose/models-and-compose.md) and Testcontainers [Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/) now support Docker Model Runner.
- Introducing Docker Desktop in the [Microsoft App Store](https://apps.microsoft.com/detail/xp8cbj40xlbwkx?hl=en-GB&gl=GB).

### Upgrades