`site-src/concepts/api-overview.md` (2 additions, 2 deletions):

```diff
@@ -23,6 +23,6 @@ each aligning with a specific user persona in the Generative AI serving workflow
 
 InferencePool represents a set of Inference-focused Pods and an extension that will be used to route to them. Within the broader Gateway API resource model, this resource is considered a "backend". In practice, that means that you'd replace a Kubernetes Service with an InferencePool. This resource has some similarities to Service (a way to select Pods and specify a port), but has some unique capabilities. With InferencePool, you can configure a routing extension as well as inference-specific routing optimizations. For more information on this resource, refer to our [InferencePool documentation](/api-types/inferencepool) or go directly to the [InferencePool spec](/reference/spec/#inferencepool).
 
-### InferenceModel
+### InferenceObjective
 
-An InferenceModel represents a model or adapter, and configuration associated with that model. This resource enables you to configure the relative criticality of a model, and allows you to seamlessly translate the requested model name to one or more backend model names. Multiple InferenceModels can be attached to an InferencePool. For more information on this resource, refer to our [InferenceModel documentation](/api-types/inferencemodel) or go directly to the [InferenceModel spec](/reference/spec/#inferencemodel).
+An InferenceObjective represents the objectives of a specific request. A single InferenceObjective is associated with a request, and multiple requests with different InferenceObjectives can be attached to an InferencePool. For more information on this resource, refer to our [InferenceObjective documentation](/api-types/inferenceobjective) or go directly to the [InferenceObjective spec](/reference/spec/#inferenceobjective).
```
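For orientation, here is a hedged sketch of how the two resources in this diff relate: an InferencePool selects the model-server Pods and names its endpoint-picker extension, and an InferenceObjective attaches request-level objectives to that pool. The API group/version, resource names, and field names below are assumptions based on the v1alpha2 API and may differ from the specs linked above; treat this as illustrative, not authoritative.

```yaml
# Hypothetical sketch (v1alpha2 fields assumed; consult the linked specs).
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: my-llm-pool            # illustrative name
spec:
  selector:
    app: my-llm-server         # selects the inference-focused Pods
  targetPortNumber: 8000       # port the model servers listen on
  extensionRef:
    name: my-llm-pool-epp      # the endpoint-picker (routing extension)
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceObjective
metadata:
  name: high-priority-requests # illustrative name
spec:
  priority: 10                 # relative priority of matching requests (assumed field)
  poolRef:
    name: my-llm-pool          # attaches to the InferencePool above
```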
`site-src/guides/adapter-rollout.md` (3 additions, 50 deletions):

````diff
@@ -3,7 +3,6 @@
 The goal of this guide is to show you how to perform incremental roll out operations,
 which gradually deploy new versions of your inference infrastructure.
 You can update LoRA adapters and Inference Pool with minimal service disruption.
-This page also provides guidance on traffic splitting and rollbacks to help ensure reliable deployments for LoRA adapters rollout.
 
 LoRA adapter rollouts let you deploy new versions of LoRA adapters in phases,
 without altering the underlying base model or infrastructure.
@@ -49,36 +48,7 @@ data:
 
 The new adapter version is applied to the model servers live, without requiring a restart.
 
-
-### Direct traffic to the new adapter version
-
-Modify the InferenceModel to configure a canary rollout with traffic splitting. In this example, 10% of traffic for food-review model will be sent to the new ***food-review-2*** adapter.
-
-
-```bash
-kubectl edit inferencemodel food-review
-```
-
-Change the targetModels list in InferenceModel to match the following:
````
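The hunk above is truncated before the removed YAML snippet, but based on the 90/10 canary split it describes, the weighted `targetModels` list likely resembled the sketch below. The adapter name `food-review-1` and the surrounding field layout are assumptions for illustration:

```yaml
# Hypothetical reconstruction of the removed canary configuration.
spec:
  targetModels:
  - name: food-review-1   # existing adapter keeps 90% of traffic (assumed name)
    weight: 90
  - name: food-review-2   # new adapter receives the 10% canary share
    weight: 10
```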
`site-src/guides/epp-configuration/config-text.md` (9 additions, 15 deletions):

```diff
@@ -1,17 +1,14 @@
-# Configuring Plugins via text
+# Configuring Plugins via YAML
 
 The set of lifecycle hooks (plugins) that are used by the Inference Gateway (IGW) is determined by how
-it is configured. The IGW can be configured in several ways, either by code or via text.
+it is configured. The IGW is primarily configured via a configuration file.
 
-If configured by code either a set of predetermined environment variables must be used or one must
-fork the IGW and change code.
-
-A simpler way to congigure the IGW is to use a text based configuration. This text is in YAML format
-and can either be in a file or specified in-line as a parameter. The configuration defines the set of
+The YAML file can either be specified as a path to a file or in-line as a parameter. The configuration defines the set of
 plugins to be instantiated along with their parameters. Each plugin can also be given a name, enabling
-the same plugin type to be instantiated multiple times, if needed.
+the same plugin type to be instantiated multiple times, if needed (such as when configuring multiple scheduling profiles).
 
-Also defined is a set of SchedulingProfiles, which determine the set of plugins to be used when scheduling a request. If one is not defailed, a default one names `default` will be added and will reference all of the
+Also defined is a set of SchedulingProfiles, which determine the set of plugins to be used when scheduling a request.
+If no scheduling profile is specified, a default profile, named `default`, will be added and will reference all of the
 instantiated plugins.
 
 The set of plugins instantiated can include a Profile Handler, which determines which SchedulingProfiles
@@ -22,12 +19,9 @@ In addition, the set of instantiated plugins can also include a picker, which ch
 the request is scheduled after filtering and scoring. If one is not referenced in a SchedulingProfile, an
 instance of `MaxScorePicker` will be added to the SchedulingProfile in question.
 
-It should be noted that while the configuration text looks like a Kubernetes Custom Resource, it is
-**NOT** a Kubernetes Custom Resource. Kubernetes infrastructure is used to load the configuration
-text and in the future will also help in versioning the text.
-
-It should also be noted that even when the configuration text is loaded from a file, it is loaded at
-the Endpoint-Picker's (EPP) startup and changes to the file at runtime are ignored.
+***NOTE***: While the configuration text looks like a Kubernetes CRD, it is
+**NOT** a Kubernetes CRD. Specifically, the config is not reconciled upon, and is only read on startup.
+This behavior is intentional, as augmenting the scheduling config without redeploying the EPP is not supported.
```
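As a concrete illustration of the configuration this page describes, here is a hedged sketch of such a YAML file. The `kind`, plugin type names, and weights are assumptions for illustration; check the EPP documentation for the exact plugin names and parameters your release supports:

```yaml
# Hypothetical EPP plugin configuration (names and fields are illustrative).
apiVersion: inference.networking.x-k8s.io/v1alpha1
kind: EndpointPickerConfig
plugins:
- type: single-profile-handler   # a Profile Handler choosing which profile to run
- type: queue-scorer             # scores endpoints by queue depth (assumed name)
- type: kv-cache-scorer          # scores endpoints by KV-cache utilization (assumed name)
- type: max-score-picker         # picks the highest-scoring endpoint
schedulingProfiles:
- name: default                  # used when no other profile is selected
  plugins:
  - pluginRef: queue-scorer
    weight: 1
  - pluginRef: kv-cache-scorer
    weight: 2
  - pluginRef: max-score-picker
```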
`site-src/guides/implementers.md` (2 additions, 2 deletions):

```diff
@@ -157,8 +157,8 @@ An example of a similar approach is Kuadrant’s [WASM Shim](https://github.com/
 Here are some tips for testing your controller end-to-end:
 
 - **Focus on Key Scenarios**: Add common scenarios like creating, updating, and deleting InferencePool resources, as well as different routing rules that target InferencePool backends.
-- **Verify Routing Behaviors**: Design more complex routing scenarios and verify that requests are correctly routed to the appropriate model server pods within the InferencePool based on the InferenceModel configuration.
-- **Test Error Handling**: Verify that the controller correctly handles scenarios like unsupported model names or resource constraints (if criticality-based shedding is implemented). Test with state transitions (such as constant requests while Pods behind EPP are being replaced and Pods behind InferencePool are being replaced) to ensure that the system is resilient to failures and can automatically recover by redirecting traffic to healthy Pods.
+- **Verify Routing Behaviors**: Design more complex routing scenarios and verify that requests are correctly routed to the appropriate model server pods within the InferencePool.
+- **Test Error Handling**: Verify that the controller correctly handles scenarios like unsupported model names or resource constraints (if priority-based shedding is implemented). Test with state transitions (such as constant requests while Pods behind EPP are being replaced and Pods behind InferencePool are being replaced) to ensure that the system is resilient to failures and can automatically recover by redirecting traffic to healthy Pods.
 - **Using Reference EPP Implementation + Echoserver**: You can use the [reference EPP implementation](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp) for testing your controller end-to-end. Instead of a full-fledged model server, a simple mock server (like the [echoserver](https://github.com/kubernetes-sigs/ingress-controller-conformance/tree/master/images/echoserver)) can be very useful for verifying routing to ensure the correct pod received the request.
 - **Performance Test**: Run end-to-end [benchmarks](https://gateway-api-inference-extension.sigs.k8s.io/performance/benchmark/) to make sure that your inference gateway can achieve the latency target that is desired.
```
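To make the routing-verification tip concrete, here is a hedged sketch of a smoke test against a gateway fronting an InferencePool of echoserver Pods. The gateway name, route path, and model name are assumptions; with echoserver as the backend, the response echoes enough request detail to confirm which Pod was picked:

```bash
# Hypothetical smoke test (gateway name, path, and model name are illustrative).
GATEWAY_IP=$(kubectl get gateway inference-gateway \
  -o jsonpath='{.status.addresses[0].value}')

# Send an OpenAI-style request through the gateway; the echoserver reply
# identifies the serving Pod, confirming the EPP's routing decision.
curl -s "http://${GATEWAY_IP}/v1/completions" \
  -H 'Content-Type: application/json' \
  -d '{"model": "food-review", "prompt": "hello", "max_tokens": 5}'
```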