vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
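For context before diving into triage, here is a minimal sketch of vLLM's offline inference API. The model name is an arbitrary illustrative choice, not a project default.

```python
# Minimal offline-inference sketch using vLLM's Python API.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# LLM() loads the model and allocates the KV cache up front.
llm = LLM(model="facebook/opt-125m")  # example model for illustration

# generate() batches prompts through vLLM's continuous-batching scheduler.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```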
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
26 Subscribers
Help out
- Issues
- Adjust the initialization order of self.k_norm to prevent vllm from failing to load models after gemma4 SFT
- [compile] Add nested_compile_regions for faster compilation
- Fix prefix cache block visibility lifecycle
- [Perf][1/n] Eliminate various GPU<->CPU syncs
- [V1][FT] Decoupled fault-tolerance framework: hooks + registry + supervisor + pluggable recovery plans
- [Bugfix] Detect MTP truncation at reasoning-to-tool-call boundary
- Add return type annotations to 20 methods across vllm
- Add reasoning for Responses API V1
- Add Medusa speculative decoding e2e test
- [Bugfix][Attention][TurboQuant] Pad head_dim to power-of-2 for WHT
- Docs
- Python not yet supported