vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
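For context, a minimal offline-inference sketch using vLLM's Python API; the model name, prompt, and sampling values below are illustrative placeholders, not taken from this page.

```python
# Minimal offline-inference sketch with vLLM's Python API.
# Model ID, prompt, and sampling values are illustrative placeholders.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model ID

# generate() batches the prompts and returns one RequestOutput per prompt
for output in llm.generate(prompts, sampling_params):
    print(output.prompt, "->", output.outputs[0].text)
```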
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history by documenting them.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
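The badge embed is typically a Markdown snippet along these lines; the exact image URL pattern is an assumption based on CodeTriage's usual badge format, not confirmed by this page.

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```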
Help out
- Issues
- [Bug]: minimax_m2_tool_parser does not stream tool call args
- [ROCm] Improve failed device detection diagnostics
- Dynamic FP8 on Blackwell B200 with LoRA-merged model produces non-deterministic degenerate output
- [Bug]: Online FP8 quantization drops bias weights, which breaks Qwen2 and other models with bias=True
- Bump actions/github-script from 8.0.0 to 9.0.0
- [CPU] Skip K conversion in MLA decode on AVX512_BF16
- [Performance]: Qwen3.5 with mtp is slower than without
- [Bug]: Gemma4 multimodal crashes with "pixel_values contains inconsistent shapes" when concurrent image requests have different resolutions
- [BUGS] vLLM V1 Engine Hangs After Weight Loading on Blackwell (sm_121) Multi-Node Ray Setup (TP=2)
- [RFC] Replace routing replay with CUDA-graph-compatible device cache approach
- Docs
- Python not yet supported