vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
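The usual entry point is vLLM's offline `LLM` class plus `SamplingParams`; below is a minimal sketch of that API (the model checkpoint is only an example, substitute whichever model you actually serve).

```python
# Minimal offline-inference sketch with vLLM's Python API.
# The checkpoint name is just an example; any supported HF model works.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")          # loads the model and allocates the KV cache
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```

For online serving, the project also ships an OpenAI-compatible HTTP server, started with `vllm serve <model>`.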
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
15 Subscribers
Help out
- Issues
- [Bugfix] EPLB - Mistral 3 Large
- [Attention] Abstract the MLA prefill backends
- Introduce InferenceProfile as execution-intent metadata
- [Doc/Fix] Add Docker Compose guide and fix doc-build hook
- Feature/silu block quant fusion v1
- [Bug][ROCm]: Prefix caching produces different output on first request (cache miss) vs subsequent requests (cache hit)
- initial commit
- [bugfix] Solve the accuracy issue of deepseek ocr2
- Triton MLA perf fixes
- [Bug]: Prefix caching ignores visual input, causing incorrect multimodal outputs under concurrency
- Docs: Python not yet supported