vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
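As a quick illustration of what the engine does, here is a minimal offline-inference sketch using vLLM's Python API (the model name is only a placeholder; any model vLLM supports will work):

```python
# Minimal vLLM offline-inference sketch.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# Loads the model weights and allocates the KV cache on available GPUs.
llm = LLM(model="facebook/opt-125m")  # placeholder model

# Runs batched generation over all prompts in a single call.
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```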
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [CI Failure]: distributed-tests-h200
  - [Feature]: fused GEMM + collectives helion kernel
  - [Bug]: memory leak (nvidia v100, mineru, dp*8)
  - Feature/nvfp4 universal fallback emulation
  - feat(kernel): patch fused_gdn_gating
  - [ROCm][Docker] Add gfx1103 support to Docker builds
  - [misc] allow overriding the TAG variable in auto_tune.sh
  - [Bug]: [P/D] multi-connector cannot be used together with P2pNcclConnector that uses a put mode (push kv cache from P node to D node)
  - [Bug]: UnpicklingError during concurrent model compilation on multiple GPUs
  - ValueError: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet.
- Docs
  - Python not yet supported