Core Maintainer@vLLM-Omni,
Research Scientist@Huawei,
Ph.D. in Statistics and Operations Research@UNC
- Huawei
- Shanghai
- 21:09 (UTC -12:00)
Pinned
- vllm-omni (Public, forked from vllm-project/vllm-omni)
  A high-throughput and memory-efficient inference and serving engine for omni-modality models
  Python · 1
- vllm-omni-cookbook (Public)
  A practical guide to vLLM-Omni with recipes, examples, and best practices for omni-modality inference and serving.
  Python
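
As a minimal sketch of what a cookbook recipe might look like, the snippet below uses the core vLLM offline-inference API that vllm-omni forks. The model name is a placeholder, and vllm-omni's omni-modality entry points may differ, so treat this as an assumption rather than the project's documented usage.

    # Minimal vLLM offline-inference sketch (assumed to carry over to vllm-omni).
    from vllm import LLM, SamplingParams

    # Placeholder model name; substitute whatever checkpoint you actually serve.
    llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # generate() batches prompts and returns one RequestOutput per prompt.
    outputs = llm.generate(["What is an omni-modality model?"], params)
    print(outputs[0].outputs[0].text)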