vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Python not yet supported. 24 Subscribers
Help out
- Issues
- Refactor WNA16 MoE backend selection into oracle module
- [MoE] Migrate W4A8 CT to Oracle Structure
- [Bugfix] Add 501 response to STT OpenAPI schema
- [Bugfix] Fix pipeline load imbalance in scheduler
- [Intel-GPU]: Using docker image at intel/vllm:0.17.0-xpu -> RuntimeError: PyTorch was compiled without CUDA support
- Log warning for scheduled token mismatch
- [Bugfix] Fix reasoning parser disabling structured output when enable_thinking=false
- [CPU] Enable Granite 4 / Mamba models on CPU backend
- [Feature] Extend Gemma4 tool parser to support XML-style <tool_call> format
- Clean up OMP and NUMA topology detection
- Docs