vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
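For orientation, here is a minimal sketch of offline inference with vLLM, using the library's standard `LLM` and `SamplingParams` entry points; the model name is illustrative and any supported Hugging Face model id works.

```python
from vllm import LLM, SamplingParams

# Prompts and decoding settings for a small smoke test.
prompts = ["What makes LLM serving high-throughput?"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Illustrative model; weights are downloaded on first run.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts through the engine and returns
# one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```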
26 Subscribers
Help out
- Issues
  - [Perf][3/n] Eliminate GPU<->CPU syncs in attention impls
  - Revert "Fix Cohere ASR after HF upgrade" (#40582)
  - [Kernel][AMD] Optimize GatedDeltaNet FLA prefill kernels on MI300X
  - [Feature]: MoE Active Expert Management --moe-gpu-prefetch <num>
  - [Bug]: Gemma4-31B-it deployed on vLLM cannot process images in tool message
  - [CMake] Move _C_stable_libtorch and _rocm_C builds to separate files (#9129)
  - fix(frontend): Add multimodal placeholders to Gemma4 tool message template
  - [codex] Guard Qwen-VL fp8_e5m2 default KV scales
  - [compressed-tensors] Asymmetric support for MoE WNA16 marlin
  - [ROCm][Quantization][3/N] Refactor quark_moe w4a4 w/ oracle
- Docs
  - Python not yet supported