vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
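For orientation, a minimal sketch of vLLM's offline inference entry point (the model name is an arbitrary example; serving via the `vllm serve` CLI is the other common path):

```python
# Minimal offline-inference sketch, assuming a local GPU and
# "facebook/opt-125m" as a stand-in Hugging Face model.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # load weights, allocate KV cache
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batch of one prompt; vLLM schedules and batches requests internally.
for out in llm.generate(["Hello, my name is"], params):
    print(out.outputs[0].text)
```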
26 Subscribers
Issues
- [Bug]: Incompatible dimension when using Mistral Small 4
- [Bug]: v0.19.1 failed to load AWQ 4bit quantization of Gemma 4 26B-A4B
- [Bug]: Gemma 4 (31B/26B-A4B) vision outputs only <pad> under fp16 — vision_tower standardize overflows
- [Bug]: DeepSeek-V3.2 DSA MFU bug
- [Bug]: Mistral3 text-only startup fails when text_config.architectures is None
- Fix Gemma 4 + BitsAndBytes startup failure reported in #38884
- [Doc]: Jetson Orin + vLLM Qwen3-0.6B quantized models – GPU active but no speedup, need optimization tips
- [Startup] Import hygiene for api_server hot path
- [Installation]: Are prebuilt vLLM wheels available for CUDA 12.6 + Python 3.12? I need a local install in developer mode, but the published wheels are all CUDA 13 builds, which don't match my machine's CUDA 12.6.
- [Bug]: Qwen3.5-397B-A17B-NVFP4 engine hangs (Running≥1, 0 tok/s) under high concurrency on Blackwell GPUs
Docs
- Python not yet supported