vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
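For context on what the engine does, here is a minimal sketch of offline batch inference with vLLM's Python API; the model name facebook/opt-125m is only an illustrative choice, and any Hugging Face causal LM that vLLM supports would work in its place:

```python
# Minimal offline-inference sketch (illustrative; the model choice is an assumption).
from vllm import LLM, SamplingParams

# Load a small model; vLLM manages KV-cache memory efficiently via PagedAttention.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "The capital of France is",
    "vLLM is an inference engine that",
]

# Generate completions for the whole batch in a single call.
outputs = llm.generate(prompts, sampling_params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```

The same engine also backs an OpenAI-compatible HTTP server (started with `vllm serve <model>`), which is the "serving" half of the description above.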
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [compile] fuse rope and cache insertion for mla
- [CI][ROCm] Add Qwen3.5-35B-A3B-MXFP4 model eval into CI
- [ROCm] Enable dual-stream MoE shared experts and GLM-5 MXFP4 Quark support
- add vLLM-side LMCache EC connector entrypoint
- Fix Marlin repack PTX incompatibility on H100/H200 (CUDA 12.8)
- [Bug]: parity with CUDA & parity with rocm sglang: vLLM router doesn't currently support MoRI kvcache connector
- [ROCm][perf] Use workspace manager for sparse indexer allocations
- [Bugfix] Fix TypeError in response_input_to_harmony when assistant content is None
- Zufang/ct mxfp8
- Add ibm-granite/granite-vision-3.3-2b to supported models documentation
- Docs
- Python not yet supported