vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
13 Subscribers
Help out
- Issues
- [Bug]: splitting_ops can be updated after it gets included in the compile cache key
- [Feature]: Improve vLLM CUDA Memory Utilization and Estimation
- [Bug]: whitespace_pattern not doing anything
- [Bug]: KV Cache Quantization not working on v1 (rtx3090) "type fp8e4nv not supported in this architecture"
- [Bug]: `PPLXAll2AllManager` fails to init on pplx-kernels latest
- [Refactor][MLA]: Lift prefill/decode split into compiled region
- [Bug]: v0.11.0 new default VLLM_ALLREDUCE_USE_SYMM_MEM=1 prevents tensor-parallel on gpt-oss-120b
- [Kernel] Make moe_forward and moe_forward_shared into inplace ops
- Pad
- [Usage]: How to enable MTP when using Qwen3-Next in local infer ( not vllm serve)
- Docs
- Python not yet supported