vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
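For orientation, here is a minimal offline-inference sketch using vLLM's documented quickstart API (`LLM` and `SamplingParams`); the model id and prompt are illustrative, not anything prescribed by this page:

```python
# Minimal offline inference with vLLM's quickstart API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # illustrative Hugging Face model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt.
for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```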
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Help out
- Issues
  - [Feature]: Prevent overallocation of kv-cache
  - [Feature]: support pixel_values_videos input for VLM
  - feat: add max tokens per doc in rerank request
  - [Bugfix] Fix SP compilation shape mismatch errors for multimodal models and prompt embeds
  - [Bug]: AttributeError: 'Step3VLProcessor' object has no attribute '_get_num_multimodal_tokens'
  - [Feature][Performance][Speculative Decoding]: Support Full CUDA Graph for the drafter
  - Mamba multistream
  - [Bug]: Different embeddings produced by LLM and AsyncLLM
  - [TESTS] Unit tests for GDN attn
  - [Usage]: How to set structured_output using grammar (see the sketch after this list)
- Docs
  - Python not yet supported
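The [Usage] issue above asks about grammar-constrained structured output. A hedged sketch of one way to do this: the `GuidedDecodingParams` name follows older vLLM releases (newer ones expose an equivalent structured-outputs parameter), and the model id, prompt, and grammar dialect are illustrative assumptions, not taken from this page:

```python
# Hedged sketch: grammar-constrained decoding via vLLM's guided-decoding
# parameters. Check your installed version; newer releases rename this API.
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

# Tiny EBNF-style grammar; the exact dialect accepted depends on the
# configured guided-decoding backend.
grammar = """
root ::= "yes" | "no"
"""

llm = LLM(model="facebook/opt-125m")  # illustrative model id
params = SamplingParams(
    max_tokens=8,
    guided_decoding=GuidedDecodingParams(grammar=grammar),
)
print(llm.generate(["Is water wet? Answer:"], params)[0].outputs[0].text)
```

The grammar constrains sampling so the model can only emit strings derivable from `root`, which is what makes the output "structured" rather than free-form.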