vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
- Issues
- Qwen-3.5 9B often producing repetitive/garbled output with Intel Backend
- [Bug]: Gemma 4 MoE (26B-A4B) crashes with `--data-parallel-size > 1` — AssertionError in cuda_communicator all_gather
- Fix async spec decode TOCTOU race and underflow on aborted requests
- [Bug][MoE] DeepEP HT hardcodes per_act_token_quant=False, causing crash/accuracy loss
- [Performance]: Qwen 3.5 27B Prefix Caching
- [ROCm] Support unlimited sequence lengths via multi-pass reduction
- [Perf]: ~23% output throughput regression on Qwen3.5-397B NVFP4 decode (8×B200) over the last 10 days
- [MoE][Fix] Fix DeepEP HT hardcoded per_act_token_quant=False
- Update MusicFlamingo and add AudioFlamingoNext
- fix(attention): fix high head dim model(Gemma4) support on limited shared memory