vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
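For context on what the engine does, here is a minimal sketch of batched offline inference, assuming vLLM's standard Python API (`LLM`, `SamplingParams`); the model name is an arbitrary small model chosen purely for illustration:

```python
from vllm import LLM, SamplingParams

# Prompts to complete; the model below is only an illustrative choice.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model and run batched, high-throughput generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```

The same models can also be served over an OpenAI-compatible HTTP endpoint with the `vllm serve` command.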
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Docs triage for Python is not yet supported.
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Bugfix] Fix pipeline load imbalance in scheduler
  - [Intel-GPU]: Using docker image at intel/vllm:0.17.0-xpu -> RuntimeError: PyTorch was compiled without CUDA support
  - Log warning for scheduled token mismatch
  - [Bugfix] Fix reasoning parser disabling structured output when enable_thinking=false
  - [CPU] Enable Granite 4 / Mamba models on CPU backend
  - [Feature] Extend Gemma4 tool parser to support XML-style <tool_call> format
  - Clean up OMP and NUMA topology detection
  - [EPLB] Fix balancedness metric computation and add verbose reporting
  - [ROCm][Quantization][2/N] Refactor quark_moe w4a8 w/ oracle
  - Revert "[Quantization] Add FlashInfer CuteDSL batched experts backend for NVFP4 MoE" (#38251)
- Docs
  - Python not yet supported