vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
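As a quick illustration of the engine in use, here is a minimal offline-inference sketch with vLLM's documented Python API; the model name and sampling settings are placeholders, not recommendations:

```python
# Minimal vLLM offline-inference sketch (model choice is illustrative).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Loading the model also allocates the paged KV cache that gives vLLM
# its memory efficiency.
llm = LLM(model="facebook/opt-125m")

# generate() schedules the prompts with continuous batching and returns
# one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```

The same engine can also be run as an OpenAI-compatible HTTP server via the `vllm serve <model>` entry point, which is the "serving" half of the description.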
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Help out
- Issues
  - [CPU] Move KV cache FP32 buffer out of MLA decode hot loop
  - [Bug]: The premature implementation of structured generation constraints in qwen3 led to a disastrous decline in model capabilities.
  - [RFC]: PR de-dup/Similarity-Check CI workflow?
  - Feat: add support for PP and MTP
  - Add validation script and coverage analysis for FusedMoE tunin…
  - [Model] Use AutoWeightsLoader for QWen
  - [Performance] Optimize MoE prefill for GLM-4.7-FP8 on H200
  - [Core] Move EAGLE drop from KV cache managers to coordinators
  - [MTP][Runtime] Reuse draft attention metadata across draft steps
  - [SM120][MLA] Fix FlashInfer MLA DCP/MTP decode path
- Docs
  - Python not yet supported