[Bugfix] Fix broken CI #2415
Conversation
Signed-off-by: wangli <[email protected]>
```diff
@@ -235,7 +235,9 @@ def load_model(self) -> None:
         self.model_runner.load_model()

     def compile_or_warm_up_model(self) -> None:
+        warmup_sizes = self.vllm_config.compilation_config.compile_sizes.copy()
+        # Note: need to adapt the graph mode
```
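For context, a minimal sketch of what warming up over the configured compile sizes could look like. The helper names (`WorkerSketch`, `run_dummy_batch`) are assumptions for illustration, not real vLLM APIs; only `compile_sizes` mirrors the config field used in the diff:

```python
# A minimal sketch (assumed helper names, not real vLLM APIs) of
# warming up the model at each configured compile size.
from typing import List


class WorkerSketch:
    def __init__(self, compile_sizes: List[int], model_runner) -> None:
        self.compile_sizes = compile_sizes
        self.model_runner = model_runner

    def compile_or_warm_up_model(self) -> None:
        # Copy so consuming the list does not mutate the shared config.
        warmup_sizes = self.compile_sizes.copy()
        # Run one dummy batch per size, largest first, so peak memory is
        # allocated up front; `run_dummy_batch` is an assumed helper.
        for size in sorted(warmup_sizes, reverse=True):
            self.model_runner.run_dummy_batch(size)
```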
@MengqingCao: the graph mode needs a note here.
Code Review

This pull request refactors the codebase to use the upstream InputBatch from vLLM, removing the custom npu_input_batch.py. This is a good step towards better maintainability and alignment with the main project; the changes mostly involve adapting to the upstream API. I've found one critical issue where an attribute might not be initialized, which could lead to a runtime crash. Please see my detailed comment.
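As an illustration only (hypothetical names, not code from this PR), the class of bug the review warns about is an attribute assigned on only one branch of `__init__`:

```python
# Hypothetical illustration of the flagged bug class, not code from
# this PR: an attribute assigned on only one branch of __init__.
class Runner:
    def __init__(self, use_graph_mode: bool) -> None:
        if use_graph_mode:
            self.graph_runner = "graph"  # only set on this branch!
        # Safer pattern: initialize unconditionally so later reads
        # can never raise AttributeError:
        #     self.graph_runner = "graph" if use_graph_mode else None

    def execute(self) -> str:
        # Raises AttributeError when constructed with use_graph_mode=False.
        return self.graph_runner
```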
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
This pull request has conflicts; please resolve them before we can evaluate the pull request.
What this PR does / why we need it?
Fixes the CI breakage caused by a recent vLLM commit.

Note that I removed npu_input_batch and now use vLLM's original input-batch construction class directly, which makes it easier to stay in sync with upstream. Note: this patch fixes the eager-mode scenario; the graph mode still needs more work (see the sketch below).
Does this PR introduce any user-facing change?
How was this patch tested?
Tested it locally: