vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
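For context, a minimal offline-inference sketch using vLLM's Python `LLM` API; the model name is an arbitrary example, not something taken from this page:

```python
from vllm import LLM, SamplingParams

# Example model; any Hugging Face model ID supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation; vLLM schedules requests for high throughput.
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)
```

For serving rather than offline batch inference, the same engine can be started as an OpenAI-compatible HTTP server with `vllm serve <model>`.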
- Issues
- fix(lora): fix IndexError and GQA tensor size mismatch in QKV LoRA la…
- [Bug]: unknown error trying to run vllm v0.17.0 with ROCm on Radeon 8060S (gfx1151)
- [Bugfix] Download mmproj GGUF files for multimodal models
- [Bugfix] Fall back to TORCH_SDPA for encoder attention on SM<80 GPUs
- [Perf][GDN] Eliminate GPU-CPU synchronization in GDNAttentionMetadataBuilder.build()
- [CORE][V1] fix: alive-but-hung EngineCore not being detected by `/health` endpoint.
- [Docs] Add documentation for vllm launch render command
- [Bug]: Accuracy Issue with FlashMLA Sparse on DeepSeek V3.2
- refactor(envs): introduce typed Envs class with lazy __getattr__ and attribute docstrings
- [WIP] [Hybrid][GDN] Enable prefix caching 'all' mode for Qwen3.5/Qwen3Next