vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
13 Subscribers
Help out
- Issues
- [Bug]: openai/gpt-oss-120b can't run on H100
- feat: [DRAFT, ignore for now] Add Omnivinci model + subfolder HF config/tokenizer support
- [Bug]: Decode Context Parallel (DCP) fails to run on H200 GPUs
- [RFC]: Elastic Attn-FFN Disaggregation
- [Bug]: Multi-node mode with pplx backend fails to run on AWS EFA
- [Usage]: How to use the `--quantization` option of `vllm serve`? (see the usage sketch at the end of this page)
- [Bug]: Streaming tool call randomly failed when using gpt-oss-120b/20b
- [Usage]: How to use `vllm bench serve` to benchmark remotely deployed vLLM models (fails when expert parallelism is enabled); see the second sketch at the end of this page
- [Usage]: Qwen3-32B on RTX PRO 6000 (55 s first-token latency and 15 tok/s)
- [Bug]: Inductor fails to fuse pointwise ops with sequence parallelism + async TP
- Docs
- Python not yet supported
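The `--quantization` question in the issue list above comes up often, so here is a minimal sketch of the flag in use, assuming an AWQ-quantized checkpoint (the model ID is a placeholder, and the set of accepted method names depends on your vLLM version and hardware):

```bash
# Sketch only: serve a pre-quantized AWQ checkpoint with the OpenAI-compatible server.
# "TheBloke/Llama-2-7B-AWQ" is an illustrative model ID; swap in any checkpoint
# whose quantization format matches the --quantization value.
vllm serve TheBloke/Llama-2-7B-AWQ \
    --quantization awq \
    --port 8000
```

In recent vLLM versions the quantization method is usually inferred from the checkpoint's own config, so `--quantization` acts mostly as an override or sanity check rather than quantizing the weights on the fly.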
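For the remote-benchmarking question, the usual approach is to point the benchmark client at an already-running server rather than letting it spawn one locally. A sketch assuming the `vllm bench serve` subcommand and a reachable endpoint (REMOTE_HOST, the model ID, and the exact flag set are assumptions; older releases shipped the same logic as the benchmarks/benchmark_serving.py script):

```bash
# Sketch only: benchmark a remotely deployed vLLM server.
# REMOTE_HOST is a placeholder for the machine serving the model.
vllm bench serve \
    --base-url http://REMOTE_HOST:8000 \
    --model openai/gpt-oss-120b \
    --dataset-name random \
    --num-prompts 200
```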