vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
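For context on what the engine does, here is a minimal sketch of offline batch inference using vLLM's public Python API (`LLM` and `SamplingParams`, per the project's quickstart); the model name is illustrative, not prescribed by this page:

```python
# Minimal offline-inference sketch with vLLM's Python API.
# The model name is illustrative; any supported Hugging Face causal LM works.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # loads weights and allocates the KV cache
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result carries the original prompt and one or more completions.
    print(output.prompt, output.outputs[0].text)
```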
Help out
- Issues
- [Performance]: Performance degradation from 0.13.0 to 0.14.0rc2
- [Bug]: Error in inspecting model architecture 'Gemma3ForConditionalGeneration'
- [Bug]: SamplingParams bad_words to _bad_words_token_ids
- [New Model]: Complexity (Pacific-Prime) - INL Dynamics + Token-Routed MLP
- [Bug]: "Fatal Python error: none_dealloc" after 4 days deployment
- [BugFix] Fix bad_words token conversion for tokenizers with different space encodings
- [Bug]: LMCache CPU KV offload causes decode speed degradation
- [RFC]: More robust model accuracy testing with configurable and tiered coverage
- [LoRA] Update LoRA expand kernel block_n calculation
- [Feature]: Will the 0.14.0 release ship a wheel that supports Python 3.12?