vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
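For quick context on what the engine does, here is a minimal offline-inference sketch using vLLM's documented Python API; the model name, prompt, and sampling values are illustrative placeholders, not recommendations:

```python
from vllm import LLM, SamplingParams

# Load a model into the engine. facebook/opt-125m is a small checkpoint
# commonly used for smoke tests; swap in any supported model.
llm = LLM(model="facebook/opt-125m")

# Illustrative sampling settings.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["What does a high-throughput LLM serving engine do?"], params)
for out in outputs:
    print(out.outputs[0].text)
```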
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- Refactor/understanding prepare inputs padded
- [Model] Support FP8 Mamba SSM Cache
- Mcp commentary channel bug
- [Refactor][MLA]: Independently pass q_nope & q_rope
- [DO NOT MERGE] Experiments related to MoE kernels
- [Compressed Tensors] Remove parameter conversion for sparse24
- some-code-change
- [Bug]: vLLM fails to perform CPU-only-head inference in Kubernetes + Ray cluster environment
- [Kernel] add cuda kernel of causal_conv1d for qwen3-next
- [ROCm] [AITER] Add block scaled bpreshuffle gemm
- Docs
- Python not yet supported