vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
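For orientation before triaging, here is a minimal sketch of offline batch inference with vLLM's Python API; the model name, prompts, and sampling settings are illustrative placeholders, not project defaults.

```python
# Minimal offline batch-inference sketch with vLLM (placeholder model and prompts).
from vllm import LLM, SamplingParams

prompts = [
    "Explain what PagedAttention does in one sentence.",
    "Write a haiku about GPUs.",
]
# Sampling settings are illustrative; tune temperature/top_p/max_tokens as needed.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# Loads the model weights and allocates the KV cache on the available GPUs.
llm = LLM(model="facebook/opt-125m")

# The engine batches and schedules the prompts for high-throughput generation.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```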
Help out
- Issues
- [Usage]: Gemma4 on Turing GPUs (SM 7.5): all attention backends hit shared memory limits
- [Bug][ROCm] GLM-5 MXFP4 sparse MLA decode crash on MI355x
- [Bugfix] Fix piecewise backend to support torch.cond
- [Bug]: Mistral Small 4 (119B MoE) fails to start on ROCm MI325X - two blocking issues
- [R3] Add routed experts to openai entrypoint
- fix(kv_cache): sync `_prob_scale_float` and `q_scale` fallback overwrite
- [Bug]: Gemma4-31B freezes on multiple RTX6000 PRO during loading
- Request for attribution: Multi-ISA CPU dispatcher work (PR #35466)
- fix(gptq): auto-detect v1/v2 zero-point format from actual weights
- [BUG] Fix PP for R3
- Docs
- Python not yet supported