vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
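For orientation before triaging, here is a minimal sketch of vLLM's offline inference entry point, assuming the `LLM` and `SamplingParams` classes described in the project's quickstart; the model name is just an example choice, not something prescribed by this page.

```python
# Minimal offline inference sketch with vLLM (illustrative only).
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Any Hugging Face model supported by vLLM can be passed here; opt-125m is an example.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```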
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
21 Subscribers
Help out
- Issues
- [Tracking Issue][Performance]: Speculative decoding performance/QoL improvements
- [Bugfix][PD] correct prefill instance removal bug in examples/disagg_proxy_demo.py
- [ROCm][fusion] enable ROCm rms_norm pattern matching in qk_norm_rope fusion
- [Bugfix] Handle layer name inconsistencies in pipeline parallel training
- Support Deepseekv32 chat
- [Bug]: MXFP4 models still fall back to the Marlin kernel for RTX PRO 6000 (Blackwell SM120)
- [Bug]: Inference of Qwen3-VL-235B failed
- [Core] Allow users to modify the scheduler configuration online in dev mode.
- [Bug]: Docker image v0.12.0 Fail to serve via Docker image
- [Feature][Observability] Fine-grained model runner timing metrics
- Docs
- Python not yet supported