vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
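CodeTriage badges are embedded in a project's README as a markdown image link. The snippet below is a sketch only, assuming the common `codetriage.com/<org>/<repo>/badges/users.svg` URL pattern; check the badge link CodeTriage generates for the repo before using it.

```markdown
<!-- Hypothetical example: CodeTriage helper-count badge for vllm,
     assuming the standard codetriage.com badge URL pattern -->
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```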
Help out
- Issues
- [Bug]: Qwen3-Coder-Next fails with Triton allocator error on DGX Spark cluster (GB10, sm121)
- fix: Qwen3ReasoningParser - handle prompt prefix format for Thinking models
- Add FlashAttention v2.8.3 scaling benchmark on Mistral-7B (H100)
- Waller Operator: Constant 14ms attention latency across 512-524K tokens (24.5x faster than FlashAttention at 32K)
- [RFC]: Expose RequestOutput hook for programmatic use of Serving layer
- ovis 2.5 - vlm compilation fixes
- [Bug]: mistral3 offline multimodal inference example failing with prompt placeholder error
- [Bug][Docker]: Issues with 0.15.0 and newer docker image when running Qwen3-Next with VLLM_BLOCKSCALE_FP8_GEMM_FLASHINFER=1
- enabling torch.compile on phi3v
- [Bugfix] Lazy tokenizer init to prevent semaphore leak in multiprocess mode
- Docs: Python not yet supported