
Pinned

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal offline-inference sketch follows this pinned list)

    Python · 59.4k stars · 10.5k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2k stars · 246 forks

  3. recipes (Public)

    Common recipes to run vLLM

    146 stars · 49 forks
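
A minimal sketch of running the pinned vllm engine for offline batch inference, assuming vLLM is installed locally (for example via pip install vllm) and a GPU is available; facebook/opt-125m is used purely as a small illustrative model, not a recommendation.

```python
# Minimal offline batch inference with vLLM's Python API (a sketch under the
# assumptions above, not a version-pinned recipe).
from vllm import LLM, SamplingParams

prompts = [
    "Explain paged attention in one sentence.",
    "What does a serving engine do?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM() loads the model and reserves KV-cache memory up front.
llm = LLM(model="facebook/opt-125m")  # illustrative small model

# generate() runs continuous batching over all prompts and returns one
# RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```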

Repositories

Showing 10 of 24 repositories
  • semantic-router (Public)

    Intelligent Mixture-of-Models Router for Efficient LLM Inference

    Go · 1,598 stars · Apache-2.0 license · 178 forks · 74 open issues (16 need help) · 20 open PRs · Updated Oct 3, 2025
  • vllm-spyre (Public)

    Community maintained hardware plugin for vLLM on Spyre

    Python · 35 stars · Apache-2.0 license · 24 forks · 7 open issues · 21 open PRs · Updated Oct 3, 2025
  • vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs; a sketch of querying its OpenAI-compatible server follows this list

    Python · 59,391 stars · Apache-2.0 license · 10,506 forks · 1,869 open issues (31 need help) · 1,161 open PRs · Updated Oct 3, 2025
  • production-stack (Public)

    vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization

    Python · 1,824 stars · Apache-2.0 license · 297 forks · 83 open issues (3 need help) · 54 open PRs · Updated Oct 3, 2025
  • guidellm (Public)

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 606 stars · Apache-2.0 license · 86 forks · 85 open issues (5 need help) · 30 open PRs · Updated Oct 3, 2025
  • aibrix (Public)

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Go · 4,279 stars · Apache-2.0 license · 467 forks · 219 open issues (19 need help) · 23 open PRs · Updated Oct 3, 2025
  • speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 51 stars · Apache-2.0 license · 9 forks · 3 open issues (2 need help) · 16 open PRs · Updated Oct 3, 2025
  • vllm-gaudi (Public)

    Community maintained hardware plugin for vLLM on Intel Gaudi

    Python · 11 stars · 46 forks · 0 open issues · 41 open PRs · Updated Oct 3, 2025
  • llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2,035 stars · Apache-2.0 license · 246 forks · 59 open issues (13 need help) · 37 open PRs · Updated Oct 3, 2025
  • flash-attention (Public, forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 95 stars · BSD-3-Clause license · 2,050 forks · 0 open issues · 17 open PRs · Updated Oct 2, 2025
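
Several of the repositories above (production-stack, guidellm, aibrix, semantic-router) operate against a running vLLM server. The sketch below shows the basic workflow they build on, assuming a server was started separately with vllm serve; the model name, host, and port are illustrative defaults, not prescriptions.

```python
# Hedged sketch: querying a vLLM OpenAI-compatible server with the standard
# openai client. Assumes a server was started separately, e.g.:
#   vllm serve Qwen/Qwen2.5-1.5B-Instruct
# and is listening on the default port 8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # any key is accepted unless --api-key is set
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",   # must match the model the server loaded
    messages=[{"role": "user", "content": "Summarize what vLLM does."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```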