vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Help out
- Issues
- [Bug]: Gemma 4 MoE (26B-A4B) — runtime MXFP4 quantization crashes during weight loading in fused MoE layer
- [Bug]: torch.distributed.DistNetworkError: The server socket has failed to listen on any local network address. port: 29500, useIpv6: false, code: -98, name: EADDRINUSE, message: address already in use
- [Perf] Remove per-step KV offload touch, touch once at request_finished
- Qwen-3.5 9B often producing repetitive/garbled output with Intel Backend
- [Bug]: Gemma 4 MoE (26B-A4B) crashes with `--data-parallel-size > 1` — AssertionError in cuda_communicator all_gather
- Fix async spec decode TOCTOU race and underflow on aborted requests
- [Bug][MoE] DeepEP HT hardcodes per_act_token_quant=False, causing crash/accuracy loss
- [Performance]: Qwen 3.5 27B Prefix Caching
- [ROCm] Support unlimited sequence lengths via multi-pass reduction
- [Perf]: ~23% output throughput regression on Qwen3.5-397B NVFP4 decode (8×B200) over the last 10 days
- Docs
- Python not yet supported