refact model runner v1 #2417
Conversation
Signed-off-by: weiguihua2 <[email protected]>
Code Review
This pull request introduces a significant and well-executed refactoring of the model runner and attention metadata generation logic. The core of the change is the introduction of the AscendCommonAttentionMetadata and TorchairCommonAttentionMetadata dataclasses, which centralize the information needed for attention operations. This decouples the attention metadata builders from the main model runner, leading to cleaner code with reduced dependencies and improved modularity. The changes are consistently applied across multiple files, including attention backends and proposers. The refactoring also simplifies some methods and removes redundant code. Overall, this is a high-quality pull request that improves the codebase's structure and maintainability. I have not found any high or critical issues in the proposed changes.
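For illustration, here is a minimal sketch of what such a centralized attention-metadata dataclass and its consumer might look like; the field and function names are assumptions for this example and may differ from the actual definitions in vllm-ascend.

```python
# Hypothetical sketch; the real AscendCommonAttentionMetadata may define
# different fields and types.
from dataclasses import dataclass

import torch


@dataclass
class AscendCommonAttentionMetadata:
    """Per-batch inputs an attention metadata builder needs, gathered in one
    place so builders no longer reach into the model runner directly."""
    query_start_loc: torch.Tensor     # prefix sums of query lengths per request
    seq_lens: torch.Tensor            # total sequence length per request
    num_reqs: int                     # number of requests in the batch
    num_actual_tokens: int            # scheduled tokens, excluding padding
    max_query_len: int                # longest query in the batch
    block_table_tensor: torch.Tensor  # KV-cache block table
    slot_mapping: torch.Tensor        # token -> KV-cache slot indices


def build_attn_metadata(common: AscendCommonAttentionMetadata):
    # A builder now consumes only this dataclass instead of the model runner
    # itself, which is what decouples the two components.
    ...
```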
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
This pull request has conflicts; please resolve them before we can evaluate the pull request.
What this PR does / why we need it?
Remove the torchair logic from model_runner_v1 and move it into torchair_model_runner (see the sketch below).
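Conceptually, the split can be pictured as a torchair-specific subclass that overrides only the hooks that differ from the generic runner; the class and method names below are illustrative and need not match the exact ones in this PR.

```python
# Illustrative only: hypothetical names, not the PR's actual classes.
class NPUModelRunner:
    """Generic model runner; contains no torchair-specific branches."""

    def _build_attention_metadata(self, common_metadata):
        return self.attn_metadata_builder.build(common_metadata)


class NPUTorchairModelRunner(NPUModelRunner):
    """Torchair-specific runner; keeps graph-mode handling out of the base."""

    def _build_attention_metadata(self, common_metadata):
        # e.g. pad inputs to a captured graph batch size before delegating
        # to the base implementation.
        return super()._build_attention_metadata(common_metadata)
```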
Does this PR introduce any user-facing change?
No
How was this patch tested?