vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
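For context on what the project does, here is a minimal sketch of offline batch inference with vLLM's Python API; the model name, prompts, and sampling settings are placeholders chosen only for illustration.

```python
from vllm import LLM, SamplingParams

# Load a model for offline inference.
# "facebook/opt-125m" is only a small placeholder model; substitute any
# model your hardware can actually serve.
llm = LLM(model="facebook/opt-125m")

# Example sampling settings (placeholders, not recommended defaults).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is", "vLLM is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result carries the original prompt and the generated completions.
    print(output.prompt, "->", output.outputs[0].text)
```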
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [RFC]: Per-iteration forward pass metrics with accurate engine-level timing
  - fix hang with pause and collectives
  - only patch runtime_env for torch >= 2.10
  - [Bugfix] Fix MLA kv_b_proj activation dtype with Marlin FP8
  - [ROCm][Quantization][1/N] Refactor quark_moe w_mxfp4 w/ oracle
  - [vLLM IR] 4/N Compile native implementation
  - [Bug]: Qwen3ReasoningParser leaks </think> into content when streaming with `stop` sequences (Related to #17468)
  - [Feature]: Does P2pNcclConnector support PD separation for the GLM5 model with DSA? Testing on the 0.15.1 branch has failed.
  - [CPU] Fix chained comparison static_assert for Clang 21+
  - [Usage]: Does vLLM support online inference for qwen3_asr_forced_aligner now? I only found an offline example.
- Docs
  - Python not yet supported