vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
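The description above refers to vLLM's offline inference API. As a minimal, hedged sketch of that usage (the model id below is illustrative; any Hugging Face-compatible model works):

```python
from vllm import LLM, SamplingParams

# Load a model into the engine; "facebook/opt-125m" is just an example id.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, max_tokens=64)

# Batched generation over a list of prompts.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```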
- Issues
- [torch.compile] reuse aot_compile_hash_factors in VllmBackend
- feat: mla prefill merge state fuse static fp8
- [Fusion][Kernel] Add register-based cos/sin variant for NTokenHeads k…
- elastic_ep: stage/commit MoE prepare/finalize on reconfigure
- fix(gemma4): remap compressed-tensors AWQ MoE keys in _weight_iterator
- [Bugfix][Model] Qwen3-VL-MoE NVFP4 (ModelOpt) per-expert weight loading
- [Core] Avoid using extra thread in `UniProcExecutor`
- Bugfix: fix RMSNormGated input_guard torch.compile dynamo tracing on CUDA
- [Bugfix] Run FlashInfer autotuning before KV cache allocation
- [Spec Decode] Add Sliding Window Attention support to DFlash drafter