vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
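For context, a minimal sketch of what the engine is used for, assuming vLLM's standard offline `LLM`/`SamplingParams` Python API; the model name and sampling settings below are illustrative and not taken from this page:

```python
# Minimal offline-inference sketch with vLLM (illustrative model and settings).
from vllm import LLM, SamplingParams

# Load any Hugging Face model supported by vLLM; "facebook/opt-125m" is just a small example.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```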
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
18 Subscribers
Add a CodeTriage badge to vllm
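As a sketch, the badge is typically added to the repository README with a Markdown snippet like the one below; the exact image URL pattern is an assumption based on CodeTriage's usual badge format, not taken from this page:

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```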
Help out
- Issues
- TMA Support for Fully-Sharded LoRA MoE + Tuned Config Control
- Bridge pad_token_id from model config to tokenizer (#36429)
- [Bug]: Responses API streaming emits tool call XML as `response.output_text.delta` instead of `response.function_call_arguments.delta` for non-harmony models
- [Bugfix] Add regression test for allreduce RMS fusion with PP
- feat(attention): extract KV-cache update from FlashAttentionDiffKV ba…
- [Bug]: vllm 0.17.0 crashed unexpectedly while serving the Qwen3.5 397b-fp8 model
- [Bug]: MTP in the Qwen3.5-MoE model breaks after upgrading to 0.17.0: inference with MTP fails with CudaError: an illegal memory access was encountered
- Patch for vLLM + FlashAttention4 + torch for GRPO colocated training
- Remove instance ID initialization logic
- [DO NOT REVIEW] Add versioned Helion kernel support with CI policy enforcement
- Docs
- Python not yet supported