Merged
7 changes: 4 additions & 3 deletions mkdocs.yml
@@ -56,6 +56,7 @@ nav:
Design Principles: concepts/design-principles.md
Conformance: concepts/conformance.md
Roles and Personas: concepts/roles-and-personas.md
Priority and Capacity: concepts/priority-and-capacity.md
- Implementations:
- Gateways: implementations/gateways.md
- Model Servers: implementations/model-servers.md
@@ -65,13 +66,12 @@
- Getting started: guides/index.md
- Use Cases:
- Serve Multiple GenAI models: guides/serve-multiple-genai-models.md
- Serve Multiple LoRA adapters: guides/serve-multiple-lora-adapters.md
- Rollout:
- Adapter Rollout: guides/adapter-rollout.md
- InferencePool Rollout: guides/inferencepool-rollout.md
- Metrics and Observability: guides/metrics-and-observability.md
- Configuration Guide:
- Configuring the plugins via configuration files or text: guides/epp-configuration/config-text.md
- Configuring the plugins via configuration YAML file: guides/epp-configuration/config-text.md
- Prefix Cache Aware Plugin: guides/epp-configuration/prefix-aware.md
- Troubleshooting Guide: guides/troubleshooting.md
- Implementer Guides:
@@ -82,9 +82,10 @@
- Regression Testing: performance/regression-testing/index.md
- Reference:
- API Reference: reference/spec.md
- Alpha API Reference: reference/x-spec.md
- API Types:
- InferencePool: api-types/inferencepool.md
- InferenceModel: api-types/inferencemodel.md
- InferenceObjective: api-types/inferenceobjective.md
- Enhancements:
- Overview: gieps/overview.md
- Contributing:
19 changes: 0 additions & 19 deletions site-src/api-types/inferencemodel.md

This file was deleted.

14 changes: 14 additions & 0 deletions site-src/api-types/inferenceobjective.md
@@ -0,0 +1,14 @@
# Inference Objective

??? example "Alpha since v1.0.0"

The `InferenceObjective` resource is alpha and may have breaking changes in
future releases of the API.

## Background

The **InferenceObjective** API defines a set of serving objectives for the specific request it is associated with. This CRD currently houses only `Priority`, but will be expanded to include fields such as SLO attainment.
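
For illustration, a minimal `InferenceObjective` might look like the sketch below. This is a hypothetical example: the resource name and priority value are illustrative, and the exact group/version and field names should be verified against the [alpha spec](/reference/x-spec/#inferenceobjective).

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceObjective
metadata:
  name: food-review-objective   # hypothetical name
spec:
  # Higher numbers indicate higher priority; unset defaults to 0.
  priority: 10
  # The InferencePool this objective applies to.
  poolRef:
    name: vllm-llama3-8b-instruct
```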

## Spec

The full spec of the InferenceObjective is defined [here](/reference/x-spec/#inferenceobjective).
Contributor

I guess InferenceModel here should be renamed to InferenceObjective.

5 changes: 2 additions & 3 deletions site-src/api-types/inferencepool.md
@@ -1,9 +1,8 @@
# Inference Pool

??? example "Alpha since v0.1.0"
??? success example "GA since v1.0.0"

The `InferencePool` resource is alpha and may have breaking changes in
future releases of the API.
The `InferencePool` resource has been graduated to v1 and is considered stable.

## Background

4 changes: 2 additions & 2 deletions site-src/concepts/api-overview.md
@@ -23,6 +23,6 @@ each aligning with a specific user persona in the Generative AI serving workflow

InferencePool represents a set of Inference-focused Pods and an extension that will be used to route to them. Within the broader Gateway API resource model, this resource is considered a "backend". In practice, that means that you'd replace a Kubernetes Service with an InferencePool. This resource has some similarities to Service (a way to select Pods and specify a port), but has some unique capabilities. With InferencePool, you can configure a routing extension as well as inference-specific routing optimizations. For more information on this resource, refer to our [InferencePool documentation](/api-types/inferencepool) or go directly to the [InferencePool spec](/reference/spec/#inferencepool).
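
For orientation, the sketch below shows roughly how an InferencePool is shaped: a selector for the model server Pods, the port to target, and a reference to the endpoint picker extension. The field names here are assumptions for illustration only; consult the [InferencePool spec](/reference/spec/#inferencepool) for the authoritative schema.

```yaml
apiVersion: inference.networking.k8s.io/v1
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  # Selects the model server Pods, similar to a Service selector.
  selector:
    matchLabels:
      app: vllm-llama3-8b-instruct
  # Port the model servers listen on.
  targetPorts:
  - number: 8000
  # The Endpoint Picker (EPP) extension used to pick an endpoint per request.
  endpointPickerRef:
    name: vllm-llama3-8b-instruct-epp
```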

### InferenceModel
### InferenceObjective

An InferenceModel represents a model or adapter, and configuration associated with that model. This resource enables you to configure the relative criticality of a model, and allows you to seamlessly translate the requested model name to one or more backend model names. Multiple InferenceModels can be attached to an InferencePool. For more information on this resource, refer to our [InferenceModel documentation](/api-types/inferencemodel) or go directly to the [InferenceModel spec](/reference/spec/#inferencemodel).
An InferenceObjective represents the objectives of a specific request. Each request is associated with a single InferenceObjective, and requests with different InferenceObjectives can be served by the same InferencePool. For more information on this resource, refer to our [InferenceObjective documentation](/api-types/inferenceobjective) or go directly to the [InferenceObjective spec](/reference/x-spec/#inferenceobjective).
17 changes: 17 additions & 0 deletions site-src/concepts/priority-and-capacity.md
@@ -0,0 +1,17 @@
# Priority and Capacity

The InferenceObjective defines `Priority`, which describes how requests interact with one another. Priority naturally interacts with total pool capacity, and understanding and configuring these behaviors properly is important for allowing a pool to handle requests of different priorities.

## Priority (in flow control)

It should be noted that priority is currently only used in [Capacity](#capacity); the description below explains how Priority will be consumed in the `Flow Control` model.

Priority is a simple stack rank: the higher the number, the higher the priority. If no priority is specified for a request, the default value is zero. Requests of higher priority are _always_ selected first when requests are queued. Requests of equal priority currently operate on a first-come, first-served (FCFS) basis.
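
As a concrete (hypothetical) sketch, the two objectives below give chat traffic a high priority and batch traffic a negative one. Resource names and values are illustrative, and field names follow the alpha `InferenceObjective` API.

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceObjective
metadata:
  name: critical-chat            # hypothetical
spec:
  priority: 10                   # higher value = higher priority
  poolRef:
    name: vllm-llama3-8b-instruct
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceObjective
metadata:
  name: batch-summarization      # hypothetical
spec:
  priority: -1                   # negative priority: shed first under saturation
  poolRef:
    name: vllm-llama3-8b-instruct
```

Once Flow Control is in place, requests associated with `critical-chat` will be selected ahead of those associated with `batch-summarization`; today, as described under [Capacity](#capacity), the negative-priority requests are the first to be rejected when the pool is saturated.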

## Capacity

The current capacity model uses configurable [thresholds](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/35b14a10a9830d1a9e3850913539066ebc8fb317/pkg/epp/saturationdetector/saturationdetector.go#L49) to determine whether the entire pool is saturated. The calculation simply iterates through each endpoint in the pool; if all endpoints are above all thresholds, the pool is considered `saturated`. In the event of saturation, all requests with a negative priority are rejected, and other requests are scheduled and queued on the model servers.

## Future work

The Flow Control system is nearing completion and will add more nuance to the Priority and Capacity model: proper priority enforcement, more precise capacity tracking, queuing at the Inference Gateway level, and more. This documentation will be updated when the Flow Control implementation is complete.
2 changes: 1 addition & 1 deletion site-src/concepts/roles-and-personas.md
@@ -17,7 +17,7 @@ The Inference Platform Admin creates and manages the infrastructure necessary to

An Inference Workload Owner persona owns and manages one or many Generative AI Workloads (LLM focused *currently*). This includes:

- Defining criticality
- Defining priority
- Managing fine-tunes
- LoRA Adapters
- System Prompts
53 changes: 3 additions & 50 deletions site-src/guides/adapter-rollout.md
@@ -3,7 +3,6 @@
The goal of this guide is to show you how to perform incremental roll out operations,
which gradually deploy new versions of your inference infrastructure.
You can update LoRA adapters and Inference Pool with minimal service disruption.
This page also provides guidance on traffic splitting and rollbacks to help ensure reliable deployments for LoRA adapters rollout.

LoRA adapter rollouts let you deploy new versions of LoRA adapters in phases,
without altering the underlying base model or infrastructure.
@@ -49,36 +48,7 @@ data:

The new adapter version is applied to the model servers live, without requiring a restart.


### Direct traffic to the new adapter version

Modify the InferenceModel to configure a canary rollout with traffic splitting. In this example, 10% of traffic for food-review model will be sent to the new ***food-review-2*** adapter.


```bash
kubectl edit inferencemodel food-review
```

Change the targetModels list in InferenceModel to match the following:


```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
name: food-review
spec:
criticality: 1
poolRef:
name: vllm-llama3-8b-instruct
targetModels:
- name: food-review-1
weight: 90
- name: food-review-2
weight: 10
```

The above configuration means one in every ten requests should be sent to the new version. Try it out:
Try it out:

1. Get the gateway IP:
```bash
Expand All @@ -88,7 +58,7 @@ IP=$(kubectl get gateway/inference-gateway -o jsonpath='{.status.addresses[0].va
2. Send a few requests as follows:
```bash
curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
"model": "food-review",
"model": "food-review-2",
"prompt": "Write as if you were a critic: San Francisco",
"max_tokens": 100,
"temperature": 0
Expand All @@ -97,23 +67,6 @@ curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{

### Finish the rollout


Modify the InferenceModel to direct 100% of the traffic to the latest version of the adapter.

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
name: food-review
spec:
criticality: 1
poolRef:
name: vllm-llama3-8b-instruct
targetModels:
- name: food-review-2
weight: 100
```

Unload the older versions from the servers by updating the LoRA syncer ConfigMap to list the older version under the `ensureNotExist` list:

```yaml
Expand All @@ -137,5 +90,5 @@ data:
source: Kawon/llama3.1-food-finetune_v14_r8
```

With this, all requests should be served by the new adapter version.
With this, the new adapter version should be available for all incoming requests.

24 changes: 9 additions & 15 deletions site-src/guides/epp-configuration/config-text.md
@@ -1,17 +1,14 @@
# Configuring Plugins via text
# Configuring Plugins via YAML

The set of lifecycle hooks (plugins) that are used by the Inference Gateway (IGW) is determined by how
it is configured. The IGW can be configured in several ways, either by code or via text.
it is configured. The IGW is primarily configured via a configuration file.

If configured by code either a set of predetermined environment variables must be used or one must
fork the IGW and change code.

A simpler way to congigure the IGW is to use a text based configuration. This text is in YAML format
and can either be in a file or specified in-line as a parameter. The configuration defines the set of
The YAML file can either be specified as a path to a file or in-line as a parameter. The configuration defines the set of
plugins to be instantiated along with their parameters. Each plugin can also be given a name, enabling
the same plugin type to be instantiated multiple times, if needed.
the same plugin type to be instantiated multiple times, if needed (such as when configuring multiple scheduling profiles).

Also defined is a set of SchedulingProfiles, which determine the set of plugins to be used when scheduling a request. If one is not defailed, a default one names `default` will be added and will reference all of the
Also defined is a set of SchedulingProfiles, which determine the set of plugins to be used when scheduling a request.
If no scheduling profile is specified, a default profile named `default` will be added and will reference all of the
instantiated plugins.

The set of plugins instantiated can include a Profile Handler, which determines which SchedulingProfiles
@@ -22,12 +19,9 @@ In addition, the set of instantiated plugins can also include a picker, which ch
the request is scheduled after filtering and scoring. If one is not referenced in a SchedulingProfile, an
instance of `MaxScorePicker` will be added to the SchedulingProfile in question.

It should be noted that while the configuration text looks like a Kubernetes Custom Resource, it is
**NOT** a Kubernetes Custom Resource. Kubernetes infrastructure is used to load the configuration
text and in the future will also help in versioning the text.

It should also be noted that even when the configuration text is loaded from a file, it is loaded at
the Endpoint-Picker's (EPP) startup and changes to the file at runtime are ignored.
***NOTE***: While the configuration text looks like a Kubernetes CRD, it is
**NOT** a Kubernetes CRD. Specifically, the config is not reconciled; it is only read at startup.
This behavior is intentional, as changing the scheduling config without redeploying the EPP is not supported.

The configuration text has the following form:
```yaml
4 changes: 2 additions & 2 deletions site-src/guides/implementers.md
@@ -157,8 +157,8 @@ An example of a similar approach is Kuadrant’s [WASM Shim](https://github.com/
Here are some tips for testing your controller end-to-end:

- **Focus on Key Scenarios**: Add common scenarios like creating, updating, and deleting InferencePool resources, as well as different routing rules that target InferencePool backends.
- **Verify Routing Behaviors**: Design more complex routing scenarios and verify that requests are correctly routed to the appropriate model server pods within the InferencePool based on the InferenceModel configuration.
- **Test Error Handling**: Verify that the controller correctly handles scenarios like unsupported model names or resource constraints (if criticality-based shedding is implemented). Test with state transitions (such as constant requests while Pods behind EPP are being replaced and Pods behind InferencePool are being replaced) to ensure that the system is resilient to failures and can automatically recover by redirecting traffic to healthy Pods.
- **Verify Routing Behaviors**: Design more complex routing scenarios and verify that requests are correctly routed to the appropriate model server pods within the InferencePool.
- **Test Error Handling**: Verify that the controller correctly handles scenarios like unsupported model names or resource constraints (if priority-based shedding is implemented). Test with state transitions (such as constant requests while Pods behind EPP are being replaced and Pods behind InferencePool are being replaced) to ensure that the system is resilient to failures and can automatically recover by redirecting traffic to healthy Pods.
- **Using Reference EPP Implementation + Echoserver**: You can use the [reference EPP implementation](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp) for testing your controller end-to-end. Instead of a full-fledged model server, a simple mock server (like the [echoserver](https://github.com/kubernetes-sigs/ingress-controller-conformance/tree/master/images/echoserver)) can be very useful for verifying routing to ensure the correct pod received the request.
- **Performance Test**: Run end-to-end [benchmarks](https://gateway-api-inference-extension.sigs.k8s.io/performance/benchmark/) to make sure that your inference gateway can achieve the latency target that is desired.

2 changes: 1 addition & 1 deletion site-src/guides/index.md
@@ -349,7 +349,7 @@ Tooling:
The following instructions assume you would like to cleanup ALL resources that were created in this quickstart guide.
Please be careful not to delete resources you'd like to keep.

1. Uninstall the InferencePool, InferenceModel, and model server resources
1. Uninstall the InferencePool, InferenceObjective, and model server resources

```bash
helm uninstall vllm-llama3-8b-instruct