vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
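For context on what the project does, here is a minimal sketch of offline inference with vLLM's Python API; the model id and prompt are illustrative placeholders, not part of this listing:

```python
# Minimal offline-inference sketch using vllm's public LLM API.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Loads the model weights and allocates the paged KV cache.
llm = LLM(model="facebook/opt-125m")

# Batches and schedules the prompts through the engine.
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```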
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
Issues
- fix: dereference $ref in tool schemas before passing to chat templates
- [ROCm] Expanded sparse MLA support
- Fix Gemma4 NVFP4 expert scale suffix mapping
- [Bugfix][Reasoning] Fix reasoning tokens dropped in streaming with async-scheduling and tool calls
- [Mamba] Add tuning script and config files for selective_state_update…
- [ROCm] Add SWIGLUSTEP activation support to AITER fused MoE
- [BugFix][Attention] Fix NaN in Triton merge_attn_states when both LSEs are -inf
- Guard GraphCaptureOutput override for torch compatibility
- Fix EAGLE prefix caching for hybrid KV cache
- Refactor AWQ-Marlin MoE to use modular kernel oracle