vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
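For context, vLLM's typical offline entry point is its `LLM` class. A minimal sketch follows; the model id, prompts, and sampling values are illustrative placeholders, not taken from this page:

```python
from vllm import LLM, SamplingParams

# Illustrative prompts and sampling settings (placeholders).
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Any Hugging Face-compatible model id works; opt-125m is just a small example.
llm = LLM(model="facebook/opt-125m")

# generate() runs batched inference and returns one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```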
Issues
- [Bug]: Whisper online benchmark with profiling error: TypeError: multi_modal_content must be a dict containing 'audio'
- [Bug]: RCCL RDNA3 gfx1100 TP2 ROCm at startup
- [Feature]: Better base64 to torch tensor (Fixes #26781)
- [Bug]: CUDA error: an illegal memory access was encountered when deploying Qwen3.5-35B-A3B-FP8 on A100
- [Feature] Implement Mean Pool Connector to return the mean-pooled vector over prompt tokens in the response (see the sketch after this list)
- [Bugfix] Fix "too many values to unpack" in dispatch_cpu_unquantized_gemm
- [XPU] Enable async TP support for XPU
- [XPU] Enable sequence parallel support for XPU
- Fix Whisper online benchmarking with profiling #38586
- [Kernel fusion] QK Norm + RoPE + Cache + Quant
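On the Mean Pool Connector item above: mean pooling reduces the per-token hidden states of a prompt to a single fixed-size vector by averaging over the token axis. A minimal sketch of that reduction; the shapes and names here are hypothetical and do not reflect vLLM's internal API:

```python
import torch

# Hypothetical hidden states for a 7-token prompt with hidden size 4096.
prompt_hidden_states = torch.randn(7, 4096)

# Mean pool over the token dimension: (num_tokens, hidden) -> (hidden,).
mean_pooled = prompt_hidden_states.mean(dim=0)

assert mean_pooled.shape == (4096,)
```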