vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
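For context, a minimal offline-inference sketch using vLLM's documented Python API (the model name and prompts below are illustrative placeholders, not part of this page):

```python
# Minimal vLLM offline-inference sketch.
# Assumes vLLM is installed (`pip install vllm`) and a CUDA-capable GPU is available.
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Any Hugging Face-compatible model works here; opt-125m is just a small example.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts and returns one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```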
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Help out
- Issues
  - SiLU_mul blockwise quantized fp8 kernel in Helion
  - [Bugfix] Fix int64 expert IDs in routing simulator crashing flashinfer all2all
  - [Transformers v5] HCXVisionForCausalLM
  - feat(nixl,dcp): Supports DCP for PD disaggregation with nixl connector and MLA backends
  - [CI] Revamp translation validation tests: parametrize ROCm backends, add seed, relax semantic assertions
  - [Docs] Add vLLM CI overview documentation for contributors
  - Nvfp4 cutedsl moe
  - Enable building MoRI with AMD AINIC stack
  - [Bugfix][Tool Parser] Fix Kimi-K2 streaming regex to handle leading newline before tool call ID
  - [Bugfix] Fix limit_mm_per_prompt being ignored for encoder cache profiling
- Docs
  - Python not yet supported