vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
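As a quick orientation for new triagers, here is a minimal offline-inference sketch using vLLM's public Python entry points (`LLM`, `SamplingParams`, `generate`); the model name is only a placeholder, and exact parameter defaults may differ between vLLM versions.

```python
from vllm import LLM, SamplingParams

# Placeholder model; swap in any Hugging Face model supported by vLLM.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation (temperature, nucleus sampling, length cap).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["Hello, my name is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput holds the prompt and one or more generated completions.
    print(output.outputs[0].text)
```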
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
14 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Feature] Support multi-stream parallelism for the q, kv norm calculations in the Qwen3 model.
- [RFC]: To Inductor partition or to not Inductor partition (by default in v0.11.1)
- feat(v1): Implement pinned prefix caching
- [EPLB] Expert histogram logging
- [Bug]: Qwen3-VL-Thinking / Qwen3-Thinking-2507 reasoning parser doesn't account for <think> (start token) appended to input / chat template
- Add process pool support for tokenizer
- [Kernel] Re-enable mrope triton kernel for CUDA/ROCM platform by default
- Fix decoding server's logprobs handling in Prefill/Decode disaggregation mode
- Clarify V0→V1 error; keep SamplingParams importable when VLLM_USE_V1=0
- Fix issue #27486 double bos token
- Docs
- Python not yet supported