vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
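To give a feel for the project being triaged, here is a minimal sketch of vLLM's offline-inference Python API. The model name is only a placeholder assumption; any Hugging Face checkpoint supported by vLLM works.

```python
# Minimal offline-inference sketch using vLLM's Python API.
# "facebook/opt-125m" is a small placeholder checkpoint, not a recommendation.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "vLLM is a library for",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM() loads the model weights and allocates the paged KV cache.
llm = LLM(model="facebook/opt-125m")

# generate() batches all prompts together and returns one result per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

The same engine also backs vLLM's OpenAI-compatible API server, which is what several of the issues below refer to.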
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- Fix: API server wait timeout on slow storage nodes
- [Bug]: RuntimeError: Int8 not supported for this architecture
- [Bug]: vLLM 0.11.0 with Gemma3-AWQ is totally broken at startup (not possible to start AWQ of gemma3-27b-awq)
- [Usage]: what's the right way to run an embedding model in vLLM 0.11.0?
- [Bug]: MiniMax tool parsing errors
- [Bug]: vLLM 0.11.1 runs into an OOM error when loading LoRA adapters
- Removed unused env var VLLM_FLASHINFER_ALLREDUCE_FUSION_THRESHOLDS_MB
- [Bug]: Qwen3 Omni thinking unstable output
- [Usage]: vLLM + InternVL local inference: image preprocessing / request enqueueing becomes a bottleneck even with more CPU cores; how to accelerate?
- [Usage]: Starting qwen3 vl is extremely slow while sglang starts quickly; what could be the cause?
- Docs
- Python not yet supported