vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
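For context on what the project does, here is a minimal offline-inference sketch using vLLM's Python API; the model name facebook/opt-125m and the sampling settings are illustrative assumptions, not something taken from this page.

```python
# Minimal offline-inference sketch with vLLM's Python API.
# Model name and sampling values are illustrative choices only.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model and run batched generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt!r}")
    print(f"Generated: {output.outputs[0].text!r}")
```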
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
2 Subscribers
Help out
- Issues
- [Bug]: Port binding keeps failing due to unnecessary code
- [Misc]: Repeat the sample sonnet.txt contents to accommodate large seq lengths in benchmarking
- [Misc]: Output state configuration of vision encoder in VLM
- [Bug]: AttributeError: '_OpNamespace' '_C_cache_ops' object has no attribute 'reshape_and_cache'
- [RFC]: Adopt mergify for auto-labeling PRs
- [Bug]: 500 Internal Server Error when calling v1/completions and v1/chat/completions with vllm/vllm-openai:v0.6.2 on K8s
- [Feature]: Enabling MSS for larger numbers of sequences (>256)
- [WIP] Prototyping re-arch
- [Usage]: due to large max_mm_tokens, the number of images that multimodal models can support is underestimated
- [Bug]: quantization does not work with dummy weight format
- Docs
- Python not yet supported