vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
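For context on what the project does, here is a minimal sketch of vLLM's offline inference API; the model name and sampling values are illustrative, not tied to any issue below:

```python
from vllm import LLM, SamplingParams

# Load a model into the engine (model name is illustrative).
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, max_tokens=64)

# Batched generation; vLLM schedules requests for high throughput.
for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```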
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported · 19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix] Zero-init NVFP4 padding scales to prevent NaN contamination
- [Bug] V1 engine hangs on encoder cache profiling on AMD gfx1151 (MIOpen missing solver DB)
- jais: only enable ALiBi when position_embedding_type == "alibi"
- [release 2.11] Update torch 211 - debug
- Fix DDE in group_broadcast for unbacked SymInts under torch.compile
- [Bug]: After starting qwen3.5-35B, the process is repeatedly killed: vllm.v1.engine exceptions EngineDeadError: EngineCore encountered an issue
- [P/D] let toy proxy handle Responses/Messages API
- [Performance] Remove unnecessary zero-fill of MLA decode output tensor in Aiter backend
- [Bug]: 0.17.1 - vllm serve deepseek-ai/DeepSeek-OCR-2 on H100 crashes during Capturing CUDA graphs (decode, FULL)
- [Feat][RL] IPC weight sync optimizations: multigpu support and chunked packed tensors
- Docs: Python not yet supported