vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
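For context, vLLM exposes a Python API for offline batch inference alongside its OpenAI-compatible server. Below is a minimal sketch assuming the standard `LLM` and `SamplingParams` entry points; the model id and sampling values are illustrative placeholders, not project defaults.

```python
# Minimal offline-inference sketch with vLLM's Python API.
# The model id and sampling settings are placeholder choices.
from vllm import LLM, SamplingParams

prompts = ["Explain paged attention in one sentence."]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model id
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```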
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- Optimize popleft_n free-list traversal for KV cache blocks
- [WIP][Kernel] Add Helion kernel for static_scaled_fp8_quant
- [Installation]: Can't install vllm 0.15.0 on Windows & Python 3.12
- [W8A8 Block Linear Refactor][3/N] Remove W8A8Fp8BlockLinearOp and adopt Fp8 block linear kernel selections.
- [W8A8 Block Linear Refactor][2/N] Make Fp8 block linear Op use kernel abstraction.
- [Fix] [CPU Backend] : Prepack weights for w8a8 oneDNN matmul
- [Bugfix] Fix illegal memory access in AWQ-Marlin with CUDA graphs (Fixes #32834)
- [CMake] Switch vllm-flash-attn to ExternalProject for separate scope (#9129)
- Bug: CPU KV cache offloading fails for blocks formed during decode
- [Bug]: OpenAI-compatible Embeddings API intermittently crashes with multimodal cache assertion (`Expected a cached item for mm_hash`) on Qwen3-VL-Embedding-8B
- Docs
- Python not yet supported