vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
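For context on what the engine does, here is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling settings are illustrative assumptions, not taken from this page:

```python
# Minimal vLLM offline-inference sketch (model choice is illustrative).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# LLM loads the model weights and manages KV-cache memory.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts for high-throughput inference.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```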
Help out
- Issues
- [ROCm][MLA] Enable MLA persistent kernel with fp8 and bf16 support
- [Bug]: Qwen3-Next FP8 error combining --tensor-parallel-size and --pipeline-parallel-size using MTP
- [Bug]: Extremely slow FA3 on Hopper for CUDA 13.0
- [Bug]: openai/gpt-oss-120b can't run on H100
- feat: [DRAFT Ignore for now] Add Omnivinci model + subfolder HF config/tokenizer support
- [Bug]: [DCP] Decode Context Parallel (DCP) failed to run on H200 GPU
- [RFC]: Elastic Attn-FFN Disaggregation
- [Bug]: Multi-node mode with pplx backend fails to run on AWS EFA
- [Usage]: how to use the --quantization option of `vllm serve`? (a sketch follows this list)
- [Bug]: Streaming tool call randomly failed when using gpt-oss-120b/20b
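Regarding the --quantization usage question above, a hedged sketch of how the option is typically passed; the AWQ checkpoint name here is an assumption for illustration, not taken from the issue:

```python
# Hedged sketch for the --quantization question above.
# Assumed CLI form: vllm serve TheBloke/Llama-2-7B-AWQ --quantization awq
# The Python API exposes the same knob via the `quantization` argument:
from vllm import LLM

llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",  # assumed AWQ-quantized checkpoint, for illustration
    quantization="awq",               # tells vLLM which quantization scheme to load
)
```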