vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [W8A8 Block Linear Refactor][2/N] Make Fp8 block linear Op use kernel abstraction.
- [Bugfix] Fix illegal memory access in AWQ-Marlin with CUDA graphs (Fixes #32834)
- [CMake] Switch vllm-flash-attn to ExternalProject for separate scope (#9129)
- Bug: CPU KV cache offloading fails for blocks formed during decode
- [Bug]: OpenAI-compatible Embeddings API intermittently crashes with multimodal cache assertion (`Expected a cached item for mm_hash`) on Qwen3-VL-Embedding-8B
- [Bug]: Qwen3-Coder-Next fails with Triton allocator error on DGX Spark cluster (GB10, sm121)
- fix: Qwen3ReasoningParser - handle prompt prefix format for Thinking models
- Add FlashAttention v2.8.3 scaling benchmark on Mistral-7B (H100)
- Waller Operator: Constant 14ms attention latency across 512-524K tokens (24.5x faster than FlashAttention at 32K)
- [RFC]: [compile] Rollout strategy for AOT Compilation.
- Docs
- Python not yet supported