
Pinned Repositories

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 73.3k stars · 14.4k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.9k stars · 438 forks

  3. recipes (Public)

    Common recipes to run vLLM

    Jupyter Notebook · 498 stars · 168 forks

  4. speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 276 stars · 54 forks

  5. semantic-router (Public)

    System-level intelligent router for Mixture-of-Models serving across cloud, data center, and edge

    Go · 3.4k stars · 574 forks

Repositories

Showing 10 of 34 repositories
  • speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 276 stars · Apache-2.0 · 54 forks · 23 open issues (5 need help) · 20 open PRs · Updated Mar 16, 2026
  • vllm-gaudi (Public)

    Community maintained hardware plugin for vLLM on Intel Gaudi

    Python · 30 stars · Apache-2.0 · 113 forks · 1 open issue · 65 open PRs · Updated Mar 16, 2026
  • flash-attention (Public, forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 117 stars · BSD-3-Clause · 2,539 forks · 0 open issues · 20 open PRs · Updated Mar 16, 2026
  • vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 73,314 stars · Apache-2.0 · 14,422 forks · 1,707 open issues (46 need help) · 2,011 open PRs · Updated Mar 16, 2026
  • llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2,878 stars · Apache-2.0 · 438 forks · 69 open issues (12 need help) · 49 open PRs · Updated Mar 16, 2026
  • vllm-spyre (Public)

    Community maintained hardware plugin for vLLM on Spyre

    Python · 47 stars · Apache-2.0 · 46 forks · 60 open issues (5 need help) · 11 open PRs · Updated Mar 16, 2026
  • vllm-omni (Public)

    A framework for efficient inference with omni-modality models

    Python · 3,198 stars · Apache-2.0 · 537 forks · 290 open issues (56 need help) · 185 open PRs · Updated Mar 16, 2026
  • compressed-tensors (Public)

    A safetensors extension to efficiently store sparse quantized tensors on disk

    Python · 263 stars · Apache-2.0 · 66 forks · 5 open issues (1 needs help) · 21 open PRs · Updated Mar 16, 2026
  • tpu-inference (Public)

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 259 stars · Apache-2.0 · 121 forks · 48 open issues (3 need help) · 156 open PRs · Updated Mar 16, 2026
  • guidellm (Public)

    Evaluate and enhance your LLM deployments for real-world inference needs

    Python · 919 stars · Apache-2.0 · 138 forks · 63 open issues · 24 open PRs · Updated Mar 16, 2026