vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
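To make the one-line description concrete, here is a minimal sketch of offline batched inference with vLLM's Python API; the model name and sampling settings are illustrative assumptions, not part of this page.

```python
# Minimal sketch: offline batched inference with vLLM.
# The model name below is only an example placeholder.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# Load the model and run batched generation; vLLM manages the KV cache
# with PagedAttention for memory-efficient, high-throughput serving.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```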
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix][ROCm] Fix FP8 per-tensor scale rank mismatch causing Inductor assertion failure
- [CI Failure]: mi355_1: Quantization
- [CVE Backport] Handle `trust_remote_code` for transformers backend (releases/v0.12.0)
- [Bugfix][DeepSeek V4] Enable cross-node TP=16 FP8 serving
- [ROCm][CI] Fix NIXL spec-decode acceptance startup and diagnostics
- Avoid redundant AITER MoE output copies
- [Refactor] Extract shared helpers from MXFP4 MoE backend selectors
- [CI] Split B200 LM Eval Small Models suite by GPU count
- [CI Failure]: mi355_2: GPQA Eval (GPT-OSS) (2xB200-2xMI355)
- [EPD-Disaggregation] Add CPU-GPU encoding transfers in new EC connector.
- Docs
- Python not yet supported