vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
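For context, vLLM is used through a simple offline-inference Python API. A minimal sketch is shown below; the model name, prompts, and sampling settings are illustrative placeholders, not taken from this page.

```python
# Minimal offline-inference sketch with vLLM's Python API.
# Model name and prompts are illustrative placeholders.
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model once; vLLM batches prompts for high-throughput decoding.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```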
Help out
- Issues
- [ROCm][CI/Build] Fix the pytest hook to properly print out the summary
- [ROCm] Optimize redundant d2d copy of MoE
- [Bugfix] too many values to unpack in dispatch_cpu_unquantized_gemm
- [XPU] Enable async TP support for XPU
- [XPU] Enable sequence parallel support for XPU
- [ROCm] Fix aiter persistent mode mla with q/o nhead<16 for kimi-k2.5 tp8
- Bugfix/multi node dp tcp placement
- [Bug]: MLA + FP8 KV cache + CUDA Graph causes random NaN in decode phase
- [BugFix] Fix OOB read in CUTLASS grouped GEMM with epilogue
- Fix Whisper online benchmarking with profiling #38586
- Docs
- Python not yet supported