vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
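For context on what the project does, here is a minimal offline-inference sketch using vLLM's Python API; the model id and prompt below are only illustrative placeholders, and any Hugging Face-compatible model can be substituted.

```python
from vllm import LLM, SamplingParams

# Load a model for offline batched inference.
# "facebook/opt-125m" is just an example id, not a project recommendation.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts.
prompts = ["The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

# Each RequestOutput holds the prompt and its generated candidates.
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```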
Python not yet supported
19 Subscribers
Help out
- Issues
- [Bug]: When launching vllm with swift rollout, the inference output is garbled
- [Bug]: When use_audio_in_video is enabled in qwen3-omni, the output may be empty or repetitive.
- [Bugfix] Revert "Zero-init MLA attention output buffers to prevent NaN from CUDA graph padding"
- [FlashLinearAttention] reduce recompilations by removing unused triton kernel inputs
- [Hybrid] Simplify accepted token counting in spec decode for hybrid models
- [Bug]: glm 4.7 fp8 crashes (Worker_TP3 pid=457501) ERROR 03-27 17:11:15 [multiproc_executor.py:852] AttributeError: '_OpNamespace' '_C' object has no attribute 'per_token_group_fp8_quant'
- Upgrade to Transformers v5
- [Bug]: _C_stable_libtorch fails to build: const& references violate stable ABI trivially_copyable requirement
- [Installation]: torch 2.11 is not supported
- [ROCm][CI] Add K8s-hardened Python CI runner with JUnit exit-code fix, GPU lifecycle, and LFU cache
- Docs
- Python not yet supported