[ML] Flag updates from Inference #131725
Changes from 6 commits
```diff
@@ -27,11 +27,17 @@
 import java.io.IOException;
 import java.util.Objects;

+import static org.elasticsearch.TransportVersions.INFERENCE_UPDATE_ML;
 import static org.elasticsearch.xpack.core.ml.action.StartTrainedModelDeploymentAction.Request.ADAPTIVE_ALLOCATIONS;
 import static org.elasticsearch.xpack.core.ml.action.StartTrainedModelDeploymentAction.Request.MODEL_ID;
 import static org.elasticsearch.xpack.core.ml.action.StartTrainedModelDeploymentAction.Request.NUMBER_OF_ALLOCATIONS;

 public class UpdateTrainedModelDeploymentAction extends ActionType<CreateTrainedModelAssignmentAction.Response> {
+    public enum Source {
+        API,
+        ADAPTIVE_ALLOCATIONS,
+        INFERENCE
```

Suggested change (marked outdated):

```diff
-        INFERENCE
+        INFERENCE_API
```
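The review thread below turns on how `isInternal()` behaves for the new `Source` enum. As a minimal standalone sketch (the placement of the helper on the enum and the comments are assumptions, not the actual PR code), the old boolean maps onto the enum like this:

```java
// Hypothetical sketch: the new Source enum replacing the old isInternal
// boolean. Only Source/API/ADAPTIVE_ALLOCATIONS/INFERENCE come from the PR;
// the rest is illustrative.
public class SourceSketch {
    public enum Source {
        API,                  // user-initiated update via the update API
        ADAPTIVE_ALLOCATIONS, // adaptive allocations autoscaler
        INFERENCE;            // update originating from the Inference code

        // Mirrors the old boolean: both autoscaler and inference updates
        // were previously flagged as internal.
        public boolean isInternal() {
            return this == ADAPTIVE_ALLOCATIONS || this == INFERENCE;
        }
    }

    public static void main(String[] args) {
        System.out.println(Source.API.isInternal());                  // false
        System.out.println(Source.ADAPTIVE_ALLOCATIONS.isInternal()); // true
        System.out.println(Source.INFERENCE.isInternal());            // true
    }
}
```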
Can you confirm that we do want `Source.INFERENCE` here for all the usages of `isInternal()` below?
Confirmed! Yeah, the inference update code previously set `isInternal` to true (back when the boolean existed).
Do we need to determine whether `source == Source.ADAPTIVE_ALLOCATIONS` here, since this will return true for `Source.INFERENCE` as well?
Previously, we set the boolean to true if the source was either the inference update API or the adaptive allocations autoscaler; `out.writeBoolean(isInternal())` preserves this logic (I think). It means the stream reader will treat an inference API call as an adaptive allocations call, but that only affects serverless, which is only a mixed cluster during a rolling update.
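The lossy round-trip described above can be sketched as follows. This is an illustration only, assuming a simplified wire format with plain `DataOutputStream`/`DataInputStream` rather than Elasticsearch's actual `StreamOutput`/`StreamInput`; the class and method names are made up:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of the backwards-compatible serialization discussed
// above: an older transport version only knew a single isInternal boolean,
// so INFERENCE and ADAPTIVE_ALLOCATIONS collapse to the same value on the
// wire and cannot be told apart by the reader.
public class WireSketch {
    enum Source { API, ADAPTIVE_ALLOCATIONS, INFERENCE }

    static boolean isInternal(Source s) {
        return s != Source.API;
    }

    // Old-format writer: collapses the source to one boolean.
    static void writeOldFormat(DataOutputStream out, Source s) throws IOException {
        out.writeBoolean(isInternal(s));
    }

    // Old-format reader: reconstructs the closest source it can express,
    // so an inference update is indistinguishable from an autoscaler update.
    static Source readOldFormat(DataInputStream in) throws IOException {
        return in.readBoolean() ? Source.ADAPTIVE_ALLOCATIONS : Source.API;
    }

    static Source roundTrip(Source s) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeOldFormat(new DataOutputStream(buf), s);
        return readOldFormat(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip(Source.API));                  // API
        System.out.println(roundTrip(Source.ADAPTIVE_ALLOCATIONS)); // ADAPTIVE_ALLOCATIONS
        System.out.println(roundTrip(Source.INFERENCE));            // ADAPTIVE_ALLOCATIONS (lossy)
    }
}
```

The `INFERENCE` case is the one the thread flags: it survives the round trip only as "internal", which an old reader can only express as `ADAPTIVE_ALLOCATIONS`.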
I'm missing a bit of context: why do we need to distinguish between these cases?
Is there a corresponding Serverless PR?
Yeah, let me ping you with the internal documentation.
We need to allow updates to `num_allocations` in serverless that originate from the `AdaptiveAllocationsScalerService` (`ADAPTIVE_ALLOCATIONS`), but we want to disallow updates from users (`API` and `INFERENCE`). The only alternative I thought of was refactoring `AdaptiveAllocationsScalerService` to update directly rather than through the API, but that felt more intrusive.
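The serverless gating described above could look something like the following sketch. The method name, the standalone enum, and the gating shape are all assumptions for illustration, not the serverless code:

```java
// Hypothetical sketch of the serverless policy discussed above: only the
// adaptive allocations autoscaler may change num_allocations; updates from
// user-facing sources (API, INFERENCE) are rejected.
public class ServerlessGateSketch {
    enum Source { API, ADAPTIVE_ALLOCATIONS, INFERENCE }

    // Returns whether a num_allocations update from this source is allowed
    // in serverless. Name and signature are made up for this sketch.
    static boolean allowNumAllocationsUpdate(Source source) {
        return source == Source.ADAPTIVE_ALLOCATIONS;
    }

    public static void main(String[] args) {
        System.out.println(allowNumAllocationsUpdate(Source.ADAPTIVE_ALLOCATIONS)); // true
        System.out.println(allowNumAllocationsUpdate(Source.API));                  // false
        System.out.println(allowNumAllocationsUpdate(Source.INFERENCE));            // false
    }
}
```

This is why the three-valued enum is needed at all: a single `isInternal` boolean groups `INFERENCE` with `ADAPTIVE_ALLOCATIONS`, but the serverless policy treats them differently.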