vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
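For context on what the project does, here is a minimal sketch of offline inference with vLLM's Python API; the model name is illustrative and any supported Hugging Face model can be substituted:

```python
# Minimal offline-inference sketch using vLLM's LLM class.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# Load the model (name here is only an example) and run batched generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```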
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
21 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix] Record request stats when request is aborted by client
- [Frontend][CLI] Add --enable-dashboard for vLLM Web UI
- [Bug]: VLLM v0.10.0 failed to deploy the qwen3-30b-moe model. The error is AttributeError: '_OpNamespace' '_moe_C' object has no attribute 'topk_softmax'.
- [Bugfix] Make unspecified --host bind to dual stack
- [Performance]: Custom fused kernel tracking
- [ROCm] Use aiter.topk_sigmoid in llama4
- [MLA] Expose prefill/decode paths to torch.compile
- [BugFix] fixing stream interval > 1 will cause tool call bug
- [P/D][Nixl] Support pipeline parallel for P/D
- Fix merge conflict.
- Docs
- Python not yet supported