vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
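For context on the project itself, vLLM exposes a simple offline inference API. The snippet below is a minimal sketch of that usage, assuming vLLM is installed; the model name and sampling settings are only illustrative and are not taken from this page.

```python
# Minimal sketch of offline inference with vLLM.
# Assumptions: vllm is installed; "facebook/opt-125m" is just an example model.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")              # load a model into the engine
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)                    # generated completion text
```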
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
21 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Installation]: getting vLLM installed with a free-threaded Python interpreter (3.14t)
- Report request count before removing it due to target output len 1
- [Tracking Issue][Performance]: (G)B200/300 performance improvements
- [CI Failure]: [AMD] Nixl PD tests
- [Feature]: Selective Token Logprobs Tracking
- [BugFix] Fix eagle async scheduling cpu race
- [RFC]: Resettle examples.
- [DOC][ROCm]: Add attention backend guide
- [CI Failure]: torch.compile caches are reused across unit tests.
- [P/D][Metrics] Consider combined/summed metrics (e.g. ttft and e2e_request_latency) for prefill and decode instances
- Docs: Python not yet supported