vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
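For orientation, a minimal offline-inference sketch using vLLM's Python API; the model name, prompts, and sampling settings below are illustrative, not tied to any issue listed on this page:

```python
from vllm import LLM, SamplingParams

# Illustrative model; any Hugging Face model supported by vLLM can be used.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput holds one or more completions; print the first.
    print(output.outputs[0].text)
```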
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Help out
- Issues
  - [Bug]: sm110: torch.AcceleratorError: CUDA error: an illegal instruction was encountered
  - [Bug]: v0.17.0-aarch64 onwards will run out of CUDA memory for gpt-oss-120b on GH200 144GB
  - [Frontend] Fix Hermes streaming for parameterless tool args
  - [Bugfix] Fix Responses API harmony streaming: token splitting, missing done events, nested sequence_number
  - [Feature]: Expose stable request completion hook in streaming serving paths
  - [Model] Support Qwen1 use_logn_attn and use_dynamic_ntk
  - Update FP8 MoE backend selection for B200 (Blackwell)
  - Improve CPU platform detection fallback for source checkouts
  - [RFC]: Opt-in Media URL Cache for `MediaConnector`
  - [Core] Use contiguous arrays for request token histories
- Docs
  - Python not yet supported