vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
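For readers new to the project, here is a minimal offline-inference sketch using vLLM's Python API; the model id and sampling settings are placeholders, not project recommendations:

```python
from vllm import LLM, SamplingParams

# Example model id; substitute any Hugging Face causal-LM checkpoint you have access to.
llm = LLM(model="facebook/opt-125m")

# Illustrative sampling settings.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```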
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported · 13 Subscribers
Help out
- Issues
- [Bug]: v0.11.0 New default VLLM_ALLREDUCE_USE_SYMM_MEM=1 prevent tensor-parallel on gpt-oss-120b
- [Kernel] Make moe_forward and moe_forward_shared into inplace ops
- Pad
- [Usage]: How to enable MTP when using Qwen3-Next in local infer ( not vllm serve)
- [Bug]: AsyncHttpClient incorrectly decodes URLs via aiohttp, breaking signed URLs (e.g., S3)
- [Bug]: KV cache can't be quantized for Qwen3-Next
- [Usage]: how to use vllm on CUDA 12.9
- [Bug]: Token id 5279552648203111001 is out of vocabulary
- [Usage]: MCP-USE with VLLM gpt-oss:20b via ChatOpenAI
- [DeepSeekV3.2] Fix Loading BF16 weights
- Docs
- Python not yet supported