vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
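For orientation before triaging, here is a minimal sketch of vLLM's offline-inference Python API. The model name and sampling values are illustrative assumptions, not taken from this page:

```python
# Minimal vLLM offline inference sketch (assumes `pip install vllm` and a GPU).
from vllm import LLM, SamplingParams

# Illustrative model and sampling settings; swap in any supported model.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput carries the original prompt and its completions.
    print(output.prompt, output.outputs[0].text)
```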
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
14 Subscribers
Add a CodeTriage badge to vllm
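A README badge embed typically follows CodeTriage's usual markdown pattern; the exact `badges/users.svg` URL scheme below is an assumption based on that convention:

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```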
Help out
- Issues
  - [Bug]: Qwen3-32B with MTP, run failed.
  - add triton ops fused_qkvzba_split_reshape_cat for qwen3_next
  - blackwell
  - [Model] use maybe_all_reduce_tensor_model_parallel
  - [Feature] Add logprobs support for Whisper transcription API
  - [Feat][PP] support async send for PP
  - [Core] ModelConfig use architecture rather than architectures
  - [Bug]: Unable to serve Qwen3-8B-FP8 with moriio kv connector
  - [Bugfix] anthropic: support incoming streaming DeltaMessage with combined content and tool_calls
  - [feat] add num preempted output
- Docs
  - Python not yet supported