vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
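For context on what the project does, the sketch below shows vLLM's offline inference quickstart in rough form (a minimal example, assuming the standard `LLM`/`SamplingParams` entry point; the model name and prompts are illustrative, not taken from this page):

```python
# Minimal offline-inference sketch with vLLM (illustrative model and prompts).
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "Large language models are",
]

# SamplingParams controls decoding (temperature, nucleus sampling, output length).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM loads the model once and batches requests internally for throughput.
llm = LLM(model="facebook/opt-125m")

# generate() returns one RequestOutput per prompt, in order.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

For serving rather than offline batching, the same engine is exposed through vLLM's OpenAI-compatible HTTP server, which is the mode most of the issues listed below refer to.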
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
14 Subscribers
Help out
- Issues
  - feat(v1): Implement pinned prefix caching
  - [EPLB] Expert histogram logging
  - [Bug]: Qwen3-VL-Thinking / Qwen3-Thinking-2507 reasoning parser doesn't account for <think> (start token) appended to input / chat template
  - Add process pool support for tokenizer
  - [Kernel] Re-enable mrope triton kernel for CUDA/ROCm platform by default
  - Fix decoding server's logprobs handling in Prefill/Decode disaggregation mode
  - Clarify V0→V1 error; keep SamplingParams importable when VLLM_USE_V1=0
  - Fix issue #27486 double bos token
  - Enhance benchmark_moe.py compatibility across vLLM versions
  - [Bug]: nccl can't use NET plugin Socket after update to torch 2.9
- Docs
  - Python not yet supported