vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
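For orientation, vLLM's offline Python API centers on an `LLM` object paired with `SamplingParams`. Below is a minimal sketch of that usage; the model name and sampling values are arbitrary placeholder choices, not taken from any issue listed on this page.

```python
# Minimal offline-inference sketch using vLLM's Python API.
# Model id and sampling values are illustrative placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```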
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: vLLM 0.11.1 runs into an OOM error when loading LoRA adapters.
- Removed unused env var VLLM_FLASHINFER_ALLREDUCE_FUSION_THRESHOLDS_MB
- [Bug]: Qwen3 Omni thinking mode produces unstable output
- [Usage]: Starting qwen3 vl is extremely slow while sglang starts quickly; what could be the cause?
- fr-spec
- [Spec Decode] Remove input_fits_in_drafter
- [Installation]: How to install vLLM on a Dell Pro Max GB10
- Remove duplicate fake registration implementations for gptq_marlin_repack and awq_marlin_repack operations.
- [Bug]: Ministral 3 - streaming tool call not working
- [Bug]: `pplx-kernels` fails to load in vLLM container
- Docs
- Python not yet supported