vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
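For context, a minimal sketch of how vLLM's offline generation API is typically used; the model name, prompts, and sampling settings below are illustrative placeholders, not part of this listing.

```python
# Minimal offline-inference sketch using vLLM's Python API.
# Model name and prompts are illustrative placeholders.
from vllm import LLM, SamplingParams

prompts = ["Explain paged attention in one sentence."]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM() loads the model and allocates the paged KV cache.
llm = LLM(model="facebook/opt-125m")

# generate() runs batched inference over all prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```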
Issues
- [Bug]: Gemma-4 fails when forcing FLASHINFER attention backend on Blackwell SM120 (head_size not supported)
- [Bugfix] Fix incorrect apply_interleaved_rope in mrope under torch.compile
- [ROCm][Perf] Support N=5 in wvSplitK skinny GEMM kernels for speculative decoding
- [Bug]: Max token length incorrect when using the /nothink tag on Qwen3.5-4B
- [Bug]: The KV cache size log is wrong for Qwen3.5
- [Bug]: RoutedExpertsCapturer host buffer undersized for hybrid models with multiple KV cache groups
- [Bug]: ValueError: Gemma4ForConditionalGeneration does not support LoRA yet.
- [Feature]: Prefix caching completely ineffective for Mamba-hybrid models (Qwen3.5) when prompt < block_size (528 tokens)
- [BugFix] Correct OTEL span start time for Dynamo compilation
- [Bug]: Scheduling deadlock in _mamba_block_aligned_split with multiple large multimodal inputs on hybrid Mamba models