vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
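As a quick orientation before triaging, here is a minimal sketch of offline inference with vLLM's Python API; the model name and sampling settings are placeholders, not part of this listing.

```python
from vllm import LLM, SamplingParams

# Load a HuggingFace-compatible model (example name; swap in any supported model).
llm = LLM(model="facebook/opt-125m")

# Configure sampling and generate completions for a batch of prompts.
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Hello, my name is"], params)

for out in outputs:
    print(out.outputs[0].text)
```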
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
18 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Feature]: FA4 Attention Sinks
- [Bug]: delta_text and delta_token_ids get out of sync when stop sequences are used.
- [Feat][Executor] Introduce RayExecutorV2
- [Bug]: GPU failure during repeated model loading when using --enable-prefix-caching with KV transfer (LMCacheConnectorV1)
- [Usage]: 'LLMEngine' object has no attribute 'collective_rpc'
- [Bug]: The arguments invoked by the tool in the GLM-5 streaming output cannot be parsed into the JSON format.
- [Bug]: Why does setting `--pipeline-parallel-size > 1` result in an OOM error, but `--tensor-parallel-size > 1` does not?
- [Bug]: SM120 / RTX 5090 source build still registers unsupported FlashMLA / FA targets and uses non-SM120 Marlin defaults.
- [Core] Skip np.repeat in _prepare_inputs when all requests are decodes
- [Bug]: Gibberish output and collapsing generation throughput with Qwen3.5-35B-A3B-FP8 and speculative decoding enabled
- Docs
- Python not yet supported