vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
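For context, a minimal offline-inference sketch with vLLM's Python API is shown below; the model name and sampling settings are illustrative assumptions, not something taken from this page.

```python
# Minimal sketch of offline inference with vLLM (assumes `pip install vllm`
# and a GPU with enough memory; "facebook/opt-125m" is only an illustrative
# model choice, not one referenced on this page).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")              # load the model into the engine
outputs = llm.generate(prompts, sampling_params)  # batched generation over all prompts

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```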
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
18 Subscribers
Add a CodeTriage badge to vllm
Help out
Issues
- [Bug] _warmup_prefill_kernels in qwen3_next.py leaks ~3.4 GiB GPU memory despite empty_cache()
- [Model][Quantization] Add GGUF support for MiniMax-M2.1
- [RFC]: [KV Connector]: Support KV push from Prefill to Decode node using Nixl Connector
- [CI] Add persistent cache mounts and fix test download paths
- sched/v1: use SRTF tiebreaker for preemption victim selection
- [ROCm][CI] Optimize ROCm Docker build: registry cache, DeepEP, and ci-bake script
- [BugFix] Handle pre-sharded TP MoE expert weights in Grok loader
- [Feat][Executor] Introduce RayExecutorV2
- [Bug]: CUDA illegal memory access on GPTQ Marlin
- Move test dependencies from inline installs to Docker image
Docs
Python not yet supported