vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
18 Subscribers
Help out
- Issues
  - [Question]: How to enable the FlashAttention 4 backend for NVIDIA PRO 6000 (Blackwell) with MiniMax-2.5-230B
  - [Bug]: Kimi-K2.5 outputs only '!!!!!!!!!!' in reasoning field, content is always null
  - [Bug]: LMCache does not work with vLLM 0.17.0 (Qwen3Next)
  - [Bug]: Qwen3.5 crashes under DP=8
  - [RFC][NixlConnector]: Add support for hybrid SSM-FA models
  - [Bug]: vLLM 0.17.0 failed to serve Qwen3-30B-A3B-Instruct-2507 after adding `--enable_lora` (see the sketch after this list)
  - [Bug]: CUBLAS_STATUS_INVALID_VALUE on Qwen3.5-122B-A10B-FP8 during profile run
  - [Bug]: TP=2 DP=2 Broken for Qwen3-Next W4A16
  - [Test] Add basic unit tests for `split_graph`
  - [AMD] Fix to run MLA with kv cache dtype = fp8
- Docs
  - Python not yet supported
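For context on the `--enable_lora` issue above: this is a minimal sketch of LoRA-enabled generation with vLLM's offline Python API, not a reproduction of the reported bug. The base model is the one named in the issue title (and is far too large for most single GPUs); the adapter name and path are hypothetical placeholders.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Load the base model with LoRA support turned on; enable_lora is the engine
# argument behind the CLI flag written as `--enable_lora` in the issue title.
llm = LLM(model="Qwen/Qwen3-30B-A3B-Instruct-2507", enable_lora=True)

sampling_params = SamplingParams(temperature=0.0, max_tokens=64)

# Hypothetical adapter: the name "my_adapter" and the local path are
# placeholders, not details taken from the issue.
lora_request = LoRARequest("my_adapter", 1, "/path/to/lora_adapter")

outputs = llm.generate(
    ["Give me a one-line summary of PagedAttention."],
    sampling_params,
    lora_request=lora_request,
)
print(outputs[0].outputs[0].text)
```

The online-serving equivalent would be launching `vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 --enable-lora`, which is the code path the bug report exercises.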