vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
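As a rough illustration of what "inference and serving engine" means in practice, a minimal offline-inference sketch with vLLM's Python API might look like the following; the model name and sampling settings are arbitrary placeholders, not anything this page prescribes:

```python
from vllm import LLM, SamplingParams

# Load a (small, illustrative) model; any Hugging Face model ID could go here.
llm = LLM(model="facebook/opt-125m")

# Sampling settings are example values only.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batch generation over a list of prompts.
outputs = llm.generate(["The capital of France is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```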
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [DO NOT REVIEW] Scaled mm fine grained dispatch study
- [Bugfix] Fix GLM45 reasoning token counting
- [ROCm] Add AITER RoPE+KV cache and dual RMSNorm fusion for MLA
- [Bug]: DeepSeek V4 hangs when input length exceeds 64k tokens (vLLM deepseekv4-cu129 image)
- [RFC] Draft Integration: b12x Blackwell SM120 MoE Dispatcher (#40882)
- [ROCm][CI] Upgrade ROCm quantized MoE coverage
- [FP8] Add opt-in ParallelLMHead dispatch to Fp8Config
- [Quantization] Per-shard FP8 scaling for MergedColumnParallelLinear
- [Bugfix][V1] KV cache: handle None block_size for Mamba/SSM groups
- [Kernel] Tune default fp8 block-scaled Triton config for M<=8 decode
- Docs
- Python not yet supported