vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
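For context, here is a minimal sketch of what "inference and serving engine" means in practice, using vLLM's offline Python API. It assumes the vllm package is installed; the model name and prompts are purely illustrative.

    from vllm import LLM, SamplingParams

    # Load a model (name is illustrative) and set decoding parameters.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # Generate completions for a batch of prompts in one call;
    # vLLM schedules and batches the requests for high throughput.
    outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)
    for out in outputs:
        print(out.outputs[0].text)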
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're feeling ambitious, receive undocumented methods or classes instead and supercharge your commit history by documenting them.
Python not yet supported
19 Subscribers
Help out
- Issues
- feat(responses): stateless multi-turn via encrypted_content state carrier (RFC #26934)
- feat(responses): pluggable ResponseStore abstraction
- fix(cmake): support distro PyTorch without vendored libgomp
- refactor(metrics): consolidate histogram bucket definitions into buck…
- [Bugfix] Add regression test for allreduce RMS fusion with PP
- [Bugfix] Qwen3.5-397B-A17B model loading with transformer=5.2
- [Core] Skip inputs_embeds buffer allocation for text-only models
- [BUGFIX]fix cuda memory stat by reserved memory
- [Bugfix] Fix DeepSeek V3.2 tool parser
- [Performance] Fuse RoPE + KV cache update for MLA backends
- Docs
- Python not yet supported