vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
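For context, vLLM is used from Python via its offline inference API. The sketch below is a minimal, illustrative example following the project's quickstart pattern; the model name is chosen only for illustration and any supported Hugging Face causal LM id could be used instead.

```python
# Minimal offline-inference sketch with vLLM's Python API (illustrative).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# LLM() loads the model and sets up the engine, including its paged KV cache.
# "facebook/opt-125m" is only an example model id.
llm = LLM(model="facebook/opt-125m")

# generate() runs the prompts through the engine and returns one result per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```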
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Spec Decode] Fix Gemma4 DFlash batched verification
  - fix: remove unused norm for dpskv4
  - [Bug]: Latest Nightly build with TurboQuant KV cache crashes on large chunked continuation prefill after workspace lock ( testing PR #39931 implementing TQ on Hybrid Attention Models e.g Qwen3.5-9B)
  - [Roadmap] 2026 Q2 vLLM × RL Roadmap
  - [vLLM IR] Nits
  - [Core][Multimodal] Test + doc direct multimodal engine inputs
  - [ROCm][CI] Fix ROCm LoRA Transformers fallback with full CUDA graphs
  - [CI Failure]: mi300_1: DeepSeek V2-Lite Prefetch Offload Accuracy (H100-MI300)
  - [Bug]: cuda graph capture hipErrorCapturedEvent crash on AMD ROCM when LoRA is enabled
  - [New Model]: minicpm-sala
- Docs
  - Python not yet supported