vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
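As a quick orientation to what the project does, below is a minimal sketch of offline batched inference with vLLM's Python API. The model name and sampling parameters are illustrative placeholders, not a recommendation from the project.

```python
from vllm import LLM, SamplingParams

# Example prompts; any list of strings works.
prompts = [
    "Hello, my name is",
    "The capital of France is",
]

# Illustrative sampling settings.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Load a small model (placeholder choice) and run batched generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```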
Help out
- Issues
- [Bug]: vLLM not working with qwen3.5 27B
- [KV Cache] Use a contiguous buffer for all layers
- [Bugfix] Fix hybrid Attention+Mamba models failing when hybrid KV cache manager is disabled
- fix: correct timestamp drift in speech-to-text for audio > 30s
- fix(lora): add bounds checking for TP configurations
- fix: release VideoCapture resources and guard div-by-zero in video utils
- fix: use byte count for realtime WebSocket audio size validation
- [Bug]: Qwen3.5-9B (BF16/AWQ) Illegal Memory Access in vLLM v0.17.0 (WSL2/RTX3090 Ti)
- [Bugfix][Frontend] Do not persist load_inplace on stored LoRA requests
- [Bugfix] Fix non-streaming/streaming inconsistency for Qwen3 reasoning when enable_thinking is not set