vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
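For context on what the engine does, here is a minimal offline-inference sketch using vLLM's documented Python API; the model name, prompt, and sampling settings are illustrative placeholders, not taken from this page.

```python
from vllm import LLM, SamplingParams

# Illustrative prompt and sampling settings (assumed values, not from this page)
prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small model for demonstration; any Hugging Face model ID supported by vLLM works
llm = LLM(model="facebook/opt-125m")

# Generate completions for the prompts in a single batched call
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```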
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
18 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Doc]: Does the --rope-scaling parameter take effect, given that vLLM supports YaRN?
- [Bug]: NaNs in vLLM using DeepSeek-R1-0528-NVFP4-v2 and FlashInfer MLA
- [Bug]: EBNF grammar not strictly enforced when n > 1 in parallel generation
- GPT-OSS structured output + reasoning grinds to a halt at long context
- [Bug]: "none" reasoning effort doesn't do what it says it does (and may break output)
- Making spec decode testing nightly
- [Bug]: `flashinfer_cutedsl` incompatible with all cross-node EP backends on GB200 NVL72
- [Bug]: v0.18.0 fails to run pipeline parallel across nodes
- [Bug]: Editing `values.labels` in `chart-helm` breaks Service selector and leaves Endpoints empty
- Revert "[Async][Spec Decoding] Zero-bubble async scheduling + spec decoding" (#32951)
- Docs: Python not yet supported