Pinned repositories

  1. vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 55.3k stars · 9.4k forks

  2. llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1.8k stars · 203 forks

  3. recipes

    Common recipes to run vLLM

    98 stars · 17 forks

Repositories

Showing 10 of 20 repositories
  • vllm-spyre

    Community-maintained hardware plugin for vLLM on Spyre

    Python · 30 stars · Apache-2.0 · 21 forks · 8 open issues · 12 open PRs · Updated Aug 15, 2025
  • llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1,783 stars · Apache-2.0 · 203 forks · 42 open issues (7 need help) · 34 open PRs · Updated Aug 16, 2025
  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 55,329 stars · Apache-2.0 · 9,402 forks · 1,818 open issues (17 need help) · 977 open PRs · Updated Aug 16, 2025
  • production-stack

    vLLM’s reference system for Kubernetes-native, cluster-wide deployment with community-driven performance optimization

    Python · 1,665 stars · Apache-2.0 · 259 forks · 67 open issues (3 need help) · 44 open PRs · Updated Aug 15, 2025
  • guidellm

    Evaluate and enhance your LLM deployments for real-world inference needs

    Python · 489 stars · Apache-2.0 · 67 forks · 62 open issues (5 need help) · 20 open PRs · Updated Aug 15, 2025
  • aibrix

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Go · 4,031 stars · Apache-2.0 · 426 forks · 209 open issues (21 need help) · 19 open PRs · Updated Aug 15, 2025
  • vllm-ascend

    Community-maintained hardware plugin for vLLM on Ascend

    Python · 1,006 stars · Apache-2.0 · 337 forks · 318 open issues (5 need help) · 167 open PRs · Updated Aug 15, 2025
  • flash-attention (forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 87 stars · BSD-3-Clause · 1,907 forks · 0 open issues · 11 open PRs · Updated Aug 15, 2025
  • vllm-gaudi

    Community-maintained hardware plugin for vLLM on Intel Gaudi

    Python · 8 stars · 17 forks · 0 open issues · 19 open PRs · Updated Aug 14, 2025
  • vllm-openvino
    Python · 21 stars · Apache-2.0 · 7 forks · 4 open issues · 1 open PR · Updated Aug 14, 2025