vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
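For context on what the engine provides, here is a minimal offline-inference sketch against vLLM's Python API; the model name and sampling settings are placeholders, not anything prescribed by this page.

```python
# Minimal sketch of vLLM's offline-inference API.
# Model name and sampling values are illustrative placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # load a small model into the engine
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches the prompts and returns one RequestOutput per prompt
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)
```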
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
18 Subscribers
Help out
- Issues
- Dflash integration
- [Bug]: Qwen 3.5 4B fail on first request on Intel XPU (Arc Graphics B580)
- [Bugfix][ROCm] Memory access fault fix for full graph capture for triton-attn - Option 2
- fix(lora): fix variable shadowing in get_supported_lora_modules
- [Feature] Add energy consumption metrics to benchmark suite
- [RFC]: vLLM IR Out-of-Tree (OOT) Kernel Registration
- [Feature]: Add support for token_adapter.trainable_tokens_delta LoRA weight
- [Feature]: Built-in debug tensor dump for intermediate activations
- [Bug]: record_metadata_for_reloading causes ~3x host memory regression during torch.compile on XLA backends
- [Bug]: SM 7.5 extreme slowness hangs indefinitely on T4 (vllm 0.17.0 with Qwen3.5-27B)
- Docs: Python not yet supported