vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
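As a quick illustration of what the engine does, here is a minimal offline-inference sketch using vLLM's Python API; the prompts, model name, and sampling settings are arbitrary examples chosen for the sketch, not project defaults.

```python
from vllm import LLM, SamplingParams

# Example prompts and sampling settings (arbitrary values for illustration).
prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small example model; any Hugging Face-compatible model ID works here.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batched call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```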
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Bugfix] Fix Triton MLA decode when Lk > Lv (split c_kv / k_pe cache)
  - [RFC][vLLM IR]: Batch Invariance Dispatching in vLLM IR
  - [ROCm][DSv3.2] Adopt new paged-MQA-logits API + cached logits buffer with defensive padding
  - [Bugfix][Distributed] Tear down stale NCCL group on reinit in NCCLWeightTransferEngine
  - [Bugfix] Fix Triton compile crash in penalties kernel on sm_89
  - [MM][Perf][CG] Support ViT full cudagraphs for mllama4
  - [vLLM IR] Propagate IR op name and provider to profiler annotations
  - [ROCm][Kimi-Linear] Wire FlyDSL gated delta rule decode kernel for KimiDeltaAttention
  - Make v1 KV cache initialization messages device-neutral
  - [Bugfix] Fix layerwise weight reload: VllmConfig context + kernel-tensor copy
- Docs
  - Python not yet supported