vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
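Since the repo is an inference and serving engine, a minimal offline-inference sketch of vLLM's Python API may help newcomers orient themselves before triaging issues. The model name below is an assumption; substitute any model vLLM supports.

```python
# Minimal offline-inference sketch using vLLM's Python API (assumed model name).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # loads weights and allocates the KV cache
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```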
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
18 Subscribers
Help out
- Issues
- [Bug]: record_metadata_for_reloading causes ~3x host memory regression during torch.compile on XLA backends
- [Bug]: SM 7.5 extreme slowness, hangs indefinitely on T4 (vllm 0.17.0 with Qwen3.5-27B)
- [Bugfix] Fix DP wave race condition re-arming engine while paused
- [Bugfix] Fix FP8 MLA CUDAGraph stale tile scheduler metadata
- [Bugfix] Fix FlashMLA sparse accuracy with topk_length and zero-init padding
- [Bug]: Does vllm support deploying glm-5 on A800 or A100, or are there any plans to support it?
- [Bugfix] Fix FP8 online quantization premature trigger with TP sharded weights
- [Bugfix] Fix off-by-one in multimodal prefix cache hash boundary check
- [Performance]: W4A16+eagle3 not better than fp8+eagle3 with Qwen2.5-14B
- [Bugfix] Respect scale_attn_weights config flag in GPTBigCode
- Docs
- Python not yet supported