vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
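As context for the issue list below, here is a minimal sketch of vLLM's offline inference API, following the pattern from the project's quickstart (the model name and prompts are placeholders):

```python
from vllm import LLM, SamplingParams

# Load a model; vLLM manages batching and paged KV-cache memory internally.
llm = LLM(model="facebook/opt-125m")  # placeholder model from the quickstart

sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() takes a batch of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```

The same engine also powers online serving through an OpenAI-compatible HTTP server, started with `vllm serve <model>`.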
Issues
- [Usage]: When is the next release going to be?
- [CI Failure]: Should test_cumem.py use spawn or fork with CUDA? (see the sketch after this list)
- [Tracking Issue][Performance]: (G)B200/300 performance improvements
- [CI] Notification mechanism for failing nightly jobs
- [Bug]: EAGLE3 uses the quantized model loader by default
- [Bug]: Llama 4 Scout on 2 x B200 errors during FlashInfer attention metadata build
- [Bug]: Stride mismatch when using torch.compile on graphs with splitting_ops and non-standard tensor dimensions
- [Bug]: Cannot use a Qwen3-Next AutoRound quantized model with 0.11.1
- [Installation]: Failed building wheel for vllm
- [Test] Fix pytest termination with @create_new_process_for_each_test("fork")
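Two items above (the test_cumem.py spawn-vs-fork question and the @create_new_process_for_each_test fix) revolve around running each CUDA test in a fresh process. Here is a minimal sketch of that pattern with a hypothetical run_in_new_process decorator, standing in for vLLM's actual test helper: a forked child inherits the parent's CUDA context, which CUDA does not support, so GPU tests generally need "spawn".

```python
import importlib
import multiprocessing
from functools import wraps

def _call_original(module_name: str, qualname: str) -> None:
    # Child-process entry point: re-import the test module, walk to the
    # decorated function, and call the undecorated original that
    # functools.wraps exposed as __wrapped__.
    obj = importlib.import_module(module_name)
    for part in qualname.split("."):
        obj = getattr(obj, part)
    obj.__wrapped__()

def run_in_new_process(method: str = "spawn"):
    # "spawn" starts a clean interpreter, so CUDA initializes safely even if
    # the parent already holds a CUDA context; "fork" is faster, but the
    # child inherits CUDA state that cannot survive fork().
    def decorator(fn):
        @wraps(fn)
        def wrapper():
            ctx = multiprocessing.get_context(method)
            # Pass only strings so the target is picklable under "spawn".
            proc = ctx.Process(
                target=_call_original, args=(fn.__module__, fn.__qualname__)
            )
            proc.start()
            proc.join()
            assert proc.exitcode == 0, (
                f"{fn.__name__} failed in the child (exit code {proc.exitcode})"
            )
        return wrapper
    return decorator

@run_in_new_process("spawn")
def test_cuda_is_usable():
    import torch  # imported in the child so the parent never touches CUDA
    assert torch.cuda.is_available()
```

One design note: the parent always joins the child and asserts on its exit code, so a failing child cannot silently outlive the test run.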