[Model] Add Stable Audio Open support for text-to-audio generation #331
Conversation
Signed-off-by: linyueqian <[email protected]>
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
num_inference_steps = req.num_inference_steps or num_inference_steps
guidance_scale = req.guidance_scale if req.guidance_scale > 1.0 else guidance_scale
```
Honor guidance_scale ≤ 1.0 from requests
guidance_scale from the request is only applied when it exceeds 1.0; otherwise the pipeline falls back to the default argument (7.0). This prevents callers from disabling classifier-free guidance or using a lower scale (e.g., requesting 0 or 1 via Omni.generate): the model always runs CFG at scale 7 regardless of what was requested, making unconditional or low-guidance Stable Audio generation impossible.
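A minimal sketch of one way to address this: fall back to defaults only when a request field is unset (`None`), rather than comparing against a threshold. The helper name and the default step count below are hypothetical, not taken from the PR; only the 7.0 default guidance scale comes from the review comment above.

```python
# Hypothetical defaults; 7.0 matches the fallback noted in the review,
# the step count is illustrative only.
DEFAULT_NUM_INFERENCE_STEPS = 100
DEFAULT_GUIDANCE_SCALE = 7.0

def resolve_sampling_params(req_num_inference_steps, req_guidance_scale):
    """Use the default only when a field was not provided at all, so an
    explicit 0.0 or 1.0 guidance scale (which disables or minimizes
    classifier-free guidance) is passed through to the pipeline."""
    num_inference_steps = (
        DEFAULT_NUM_INFERENCE_STEPS
        if req_num_inference_steps is None
        else req_num_inference_steps
    )
    guidance_scale = (
        DEFAULT_GUIDANCE_SCALE
        if req_guidance_scale is None
        else req_guidance_scale
    )
    return num_inference_steps, guidance_scale
```

Unlike the `or` / `> 1.0` checks in the snippet above, this treats falsy-but-valid values (0 steps would still be rejected by the pipeline itself, and 0.0 or 1.0 guidance) as intentional caller choices.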
add test ci
@hsliuustc0106 For online serving of Stable Audio, is a simple FastAPI wrapper around OmniDiffusion the right approach, or is there existing/planned infrastructure I should use?
@david6666666 I notice you are working on the video online serving; is it possible to support audio output as well? #437
I think we need to open another PR to enable online audio serving, after this text-to-audio model is supported.
Purpose
Add support for Stable Audio Open (stabilityai/stable-audio-open-1.0) for text-to-audio generation in vLLM-Omni.
Test Plan
python examples/offline_inference/text_to_audio/text_to_audio.py --model stabilityai/stable-audio-open-1.0 --prompt "The sound of a dog barking" --output dog_barking.wav
Test Result
dog_barking.wav
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.