vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
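As a quick illustration of what the engine does, here is a minimal offline-inference sketch against vLLM's Python API (the model id, prompt, and sampling values are illustrative placeholders, not recommendations):

```python
# Minimal offline-inference sketch using vLLM's public Python API.
# Model id and sampling values below are illustrative placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face causal LM id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```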
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- feat[vLLM × v5]: Add audio support for the Transformers backend
- [Refactor][MLA]: Expose mla to torch.compile
- [Core][Scheduler] Simulate reclaimable KV cache blocks before preempting running requests
- [Perf] Skip decode for generative scoring with max_tokens=0
- [Attention] Add HPC attention backend for improved performance on SM90 GPUs
- [Bug]: Gemma 4 31B FP8_BLOCK checkpoint produces garbage repetitive output — logit saturation at softcap wall due to absorbed activation scales being double-applied (see the sketch after this list)
- [Bugfix] Support Qwen3.5 Text Only Variant (Qwen3_5ForCausalLM)
- [Bug]: Crash on Transcription (size for tensor a must match the size of tensor b) with reproduce
- [ROCm] Fix UnboundLocalError for prefix_scheduler_metadata in Triton attention (see the sketch after this list)
- [Bug] Embedding/pooling models crash on B200 (SM 10.0) — encoder attention hardcodes FA2 which lacks SM100 support
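The softcap-saturation failure mode named in the Gemma FP8_BLOCK issue above can be reproduced in miniature. With Gemma-style logit softcapping, `logits = cap * tanh(logits / cap)`; if an activation scale is absorbed and then applied a second time, the inflated logits all land on the flat part of tanh and pile up at the cap, so the top logits become near-indistinguishable. A self-contained sketch (the cap value and scale factor are illustrative, not taken from the actual checkpoint):

```python
# Toy demonstration of logit saturation at a tanh softcap wall when a
# scale factor is mistakenly applied twice. Values are illustrative only.
import torch

cap = 30.0    # Gemma-style final-logit softcap
scale = 4.0   # stand-in for an absorbed activation scale

raw = torch.tensor([2.0, 5.0, 9.0, 1.0])

correct = cap * torch.tanh((raw * scale) / cap)        # scale applied once
buggy = cap * torch.tanh((raw * scale * scale) / cap)  # scale applied twice

print(correct)  # logits stay spread out -> sampling behaves normally
print(buggy)    # large logits crowd against the cap -> degenerate top
                # logits, which shows up as garbage / repetitive decoding
```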
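The ROCm/Triton fix above addresses a classic Python pitfall: a variable assigned only inside a conditional branch and read afterwards raises UnboundLocalError on any path that skips the branch. A minimal sketch of the bug shape and the usual fix (the function and variable names are hypothetical, loosely echoing the issue title, not vLLM's actual code):

```python
# Hypothetical reduction of the UnboundLocalError bug shape: the name is
# only bound when the branch is taken, so the other path crashes on read.
def build_metadata_buggy(use_prefix_scheduler: bool):
    if use_prefix_scheduler:
        prefix_scheduler_metadata = {"kind": "prefix"}
    # UnboundLocalError here when use_prefix_scheduler is False:
    return prefix_scheduler_metadata

# Usual fix: bind a default before the branch so every path defines the name.
def build_metadata_fixed(use_prefix_scheduler: bool):
    prefix_scheduler_metadata = None
    if use_prefix_scheduler:
        prefix_scheduler_metadata = {"kind": "prefix"}
    return prefix_scheduler_metadata

print(build_metadata_fixed(False))  # None, instead of raising
```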