vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
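As a quick illustration of what the engine does, here is a minimal offline-inference sketch using vLLM's documented Python API (the model name is only an example; any supported Hugging Face model works):

```python
# Minimal vLLM offline inference: load a model into the engine,
# then generate completions for a batch of prompts.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example model; swap in your own
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)  # first sampled completion per prompt
```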
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Help out
- Issues
  - [ROCm] Cap Triton paged attention block size to fix ROCm shared memory OOM
  - [Core] Make ModelRunnerOutput.num_nans_in_logits an np.ndarray rather than a Python dict; fix a bug in counting NaNs during speculative decoding in mrv1 and mrv2
  - [RFC] Context-Aware KV-Cache Retention API (Prioritized Evictions)
  - Fix Responses JSON schema alias serialization
  - [ROCm][Perf] Add optimized MoE configs for Kimi K2.5 TP=4
  - Add audio extraction at init + automatic audio detection
  - [Build] Add SM121 (DGX Spark / GB10) to published build targets
  - [MoE] Filter FP8/MXFP4 MoE backend candidates by platform
  - feat(serving): DNS-AID SVCB endpoint registration
  - [Frontend] Skip stop in reasoning content
- Docs
  - Python not yet supported