vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
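As context for the issues below, vLLM exposes a Python API for offline batch inference. The following is a minimal sketch based on the project's quickstart; the model ID and sampling values are illustrative placeholders, not recommendations.

```python
from vllm import LLM, SamplingParams

# Illustrative prompts and sampling settings (assumed values, not project defaults).
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small model; any Hugging Face model ID supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts and returns one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r} -> Completion: {output.outputs[0].text!r}")
```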
Issues
- Fixed import error of a test
- [MoE Refactor] Refactor ZeroExpertFusedMoE into new framework
- [WIP] Full CI test run with Model Runner V2
- [CUDA Graph] Enhance CUDA graph input address debugging
- [SP] Add opt-in ragged sequence parallelism path via VLLM_ENABLE_SP_RAGGED
- [Perf] Eliminate duplicate bitmatrix metadata computation in gpt oss …
- feat(spec_decode): remove unpadded drafter batch mode
- [torch.compile] Move torch.Size producers to consumer subgraph in split_graph
- fix: correct max_loras grid size in fused_moe_lora kernels
- [RFC]: `vllm bench eval` for Unified Accuracy + Performance Evaluation