vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
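For context, here is a minimal sketch of vLLM's offline batched inference API. The model name is an illustrative placeholder; any Hugging Face causal LM supported by vLLM can be substituted.

```python
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "In one sentence, explain continuous batching:",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM() loads the model and allocates the paged KV cache up front.
llm = LLM(model="facebook/opt-125m")  # example model, not a recommendation

# generate() schedules all prompts together, batching requests across
# decoding steps for throughput.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```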
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: Can speculative decoding run with PP > 2?
- [Bug]: Streaming output issue in vLLM v0.10.2
- [Bug]: vllm + ray: error when running vllm serve DeepSeek-V3.2V-AWQ
- [Bug]: Failed to capture CUDA kernel data and GPU memory data when running vllm with tensor_parallel_size=1
- [Feature]: vLLM should apply to the Docker Open Source Program to remove image pull limits
- [Feature]: Support per-layer MLP sizes for Qwen2.5 ModelOpt/GradNAS pruned checkpoints
- [Usage]: RuntimeError when running Qwen2.5-VL-7B-Instruct with vllm: Potential version incompatibility
- [Feature]: Need a scheduler solution that processes prefill with high priority
- [Usage]: Does vLLM support loading a LoRA adapter and DeepSeek-V3.1-Terminus at the same time?
- [Bug]: Potential improvements and fixes for sleep/wake_up API
- Docs
- Python not yet supported