
Conversation

@Gaohan123
Collaborator

Purpose

This PR adds basic streaming output for both offline and online inference. For text, output can be streamed token by token or in fixed-size chunks. Streaming audio output depends on a later feature, the chunked async pipeline across multiple stages.
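
For online serving, streamed text could be consumed through the OpenAI-compatible endpoint. A minimal sketch, assuming vllm-omni serves `/v1/chat/completions`; the base URL, port, model name, and prompt below are placeholders, not confirmed by this PR:

```python
# Minimal sketch of consuming the online text stream; assumes an
# OpenAI-compatible server. base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="Qwen/Qwen3-Omni",  # placeholder model name
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,  # request incremental output (token-by-token or chunked)
)
for chunk in stream:
    # Each chunk carries an incremental text delta; guard against empty frames.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```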

Test Plan

Test Result



@Gaohan123 changed the title from "[Feature][WIP] Base version of supporting streaming output" to "[Feature][WIP] Basic version of supporting streaming output" on Dec 19, 2025
Signed-off-by: Gaohan123 <[email protected]>
@hu-sp

hu-sp commented Dec 22, 2025

@Gaohan123 @hsliuustc0106 Will the 1230 release (#165) include support for returning audio data when `stream: true` is enabled (#373)?
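
(For reference, a request with `stream: true` set might look like the sketch below, assuming an OpenAI-compatible chat-completions endpoint; the port, model name, and payload are placeholders:)

```python
import requests

# Hypothetical raw streaming request; the server would emit SSE lines of the
# form "data: {...}" and terminate with "data: [DONE]".
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "Qwen/Qwen3-Omni",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
)
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))
```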

@wonjerry

This feature is really needed! Even just streaming for text output would be great.

Signed-off-by: Gaohan123 <[email protected]>
@hsliuustc0106
Collaborator

For this feature, do we support text streaming in the first stage and leave audio streaming for a follow-up PR?



Development

Successfully merging this pull request may close these issues.

[Bug]: [Streaming] Qwen3-Omni streaming fails: 'OmniRequestOutput' object has no attribute 'prompt_token_ids'
