vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
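For context, vLLM exposes a Python API for offline batch inference alongside its serving engine. Below is a minimal sketch of that API; the model name and sampling values are illustrative choices, not taken from this page, and assume vllm is installed and the checkpoint is downloadable from the Hugging Face Hub.

```python
# Minimal offline-inference sketch with vLLM (illustrative, not from this page).
# Assumptions: `pip install vllm` has run and the small example model
# "facebook/opt-125m" can be fetched from the Hugging Face Hub.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "An LLM serving engine should be",
]
# Sampling settings are illustrative; SamplingParams is the same class
# touched by the bad_words bugfix listed under Issues below.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # loads weights and allocates the KV cache
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"{output.prompt!r} -> {output.outputs[0].text!r}")
```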
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
19 Subscribers
Help out
- Issues
- [Bugfix] Get actual kernel_block_size_alignment from backend
- [Bug]: vllm bench: "Peak output token throughput" is less than "Output token throughput"
- [Bugfix] Fix SamplingParams bad_words tokenizer conversion for space-prefixed tokens
- [Bugfix] Allow tensorizer load format for S3/GCS/Azure object storage
- [Feature] Add OCI Image Annotations to container images
- fix: remove ambiguous KV cache layout assertion for Mamba hybrid models
- fix(bench): compute peak output throughput from token-volume decode windows
- [Model Runner V2] Add full cuda graph support for eagle prefill
- [compile] Cache InductorPass uuid
- [Bugfix] Fix Step3 pipeline parallel KeyError for residual tensor
- Docs (Python not yet supported)