vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
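For context on what the project does, here is a minimal sketch of offline batch inference with vLLM's Python API; the model name is only an illustrative example, and any Hugging Face model vLLM supports would work the same way.

```python
# Minimal sketch: offline batch inference with vLLM.
# "facebook/opt-125m" is an illustrative choice, not a project default.
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Loads the model weights and allocates the paged KV cache.
llm = LLM(model="facebook/opt-125m")

# Generates completions for the whole batch in one call.
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```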
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
18 Subscribers
Help out
- Issues
- Upgrade to Transformers v5
- [Bug]: _C_stable_libtorch fails to build: const& references violate stable ABI trivially_copyable requirement
- [Installation]: torch 2.11 is not supported
- [ROCm][CI] Add K8s-hardened Python CI runner with JUnit exit-code fix, GPU lifecycle, and LFU cache
- [Perf] Fix DBO overlap: capture DeepEP event before yield
- [kv_offload+HMA][8/N]: Support multi-group worker transfer
- [ROCm][Test] Add hybrid block size and RDNA4 backend selection tests
- [Logging] Add JIT compilation progress log for FlashInfer
- [Logging] Improve DCP/PCP/MTP error messages with actionable guidance
- [Bug]: When benchmarking with the Sonnet dataset, very small input lengths cause abnormally high CPU usage with no error logs, so the benchmark cannot run
- Docs: Python not yet supported