vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Help out
- Issues
- [Bug]: [FP8][MoE] 'FLASHINFER_CUTLASS' is auto-selected as the MoE backend instead of 'DEEPGEMM' on Hopper
- [Feature]: Add MLA attention backend for Turing
- [Bugfix] Fix benchmark_moe.py inplace assertion with torch >= 2.9
- [UX nit] Fix non-default api_server_count message
- [Bugfix] Relax TRTLLM KV cache contiguity assertion for cross-layer layout
- [DRAFT][Feature] Implement online data capture/generation for eagle3
- [CPU][Distributed] Fix: enable _CPUSHMDistributed only when TP/PP ranks share the same SHM group name
- [Draft][ROCm] ROCm 7.2 as base
- [Feature] Decode Context Parallel support for GPU model runner v2
- [CI][AMD][BugFix] Use torch.testing.assert_close instead of assert torch.allclose in test_rocm_skinny_gemms.py (see the sketch after this list)
- Docs (Python not yet supported)
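
The last CI issue above proposes replacing bare `assert torch.allclose(...)` checks with `torch.testing.assert_close` in test_rocm_skinny_gemms.py. A minimal sketch of that substitution is below; the tensor shapes, tolerances, and test name are illustrative only and are not taken from the actual test file:

```python
import torch

def test_skinny_gemm_matches_reference():
    # Hypothetical shapes; the real skinny-GEMM test uses different inputs.
    a = torch.randn(4, 4096)
    b = torch.randn(4096, 16)

    reference = a @ b
    actual = a @ b  # stand-in for the kernel output under test

    # Before: a bare boolean assert, which only reports pass/fail.
    # assert torch.allclose(actual, reference, rtol=1e-2, atol=1e-2)

    # After: assert_close raises with the max absolute/relative error and
    # the number of mismatched elements, which is far more useful in CI logs.
    torch.testing.assert_close(actual, reference, rtol=1e-2, atol=1e-2)
```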