vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
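For maintainers, adding the badge typically means dropping a snippet into the project README; the image URL below follows CodeTriage's usual badge pattern and is an assumption, not taken from this page:

```markdown
<!-- CodeTriage badge for vllm; URL pattern assumed from CodeTriage's standard badge format -->
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```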
Help out
- Issues
    - Fix: Clone NVFP4 MoE weights on SM121 to prevent Marlin kernel NaN
    - [Test] Add unit tests for GDN fused recurrent kernel
    - fix(qwen3.5): prevent false gate_proj match from dropping MoE router gate weights
    - [mla] Support fused FP8/NVFP4 output quantization in MLA attention (#35792)
    - [Bugfix] Allow concurrency and memory_limit for runai_streamer_sharded
    - docs+tests: consolidate doc fixes and test assertion
    - fix: improve token_ids_cpu swap to copy only valid indices
    - [Feat] Add vllm eval CLI subcommand integrating lm_eval accuracy and perf benchmarking
    - [Bugfix] Fix reasoning token routing with tool parsers: prompt false positive and transition-batch loss
    - [torch.compile] Remove attention layer name from unified_kv_cache_update
- Docs
    - Python not yet supported