vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
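For context, here is a minimal sketch of offline inference with vLLM's Python API; the model name and sampling settings are illustrative, not taken from this page:

```python
from vllm import LLM, SamplingParams

# Illustrative model and settings; any Hugging Face model vLLM supports will work.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["The capital of France is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```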
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
15 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Quark] Support online block-diagonal rotations in dense GEMM layers
- [Misc] Add VLLM Config to Prometheus Logger
- [Bugfix] Fix Llama 4 FP8 failure with FlashInfer on B200 (Nullptr crash)
- [Bugfix][Hardware][AMD] Add LoRA guard to unquantized MoE backend selection
- Add TUI Monitor: Real-time Terminal Dashboard for vLLM Metrics
- Add XPU MLA Sparse backend for DeepSeek v3.2
- [Feature]: Online BF16->FP8 (and possibly FP4) quantization in `load_weights(...)`, so weights can be reloaded not only from a disk checkpoint but also from RAM or VRAM; useful for weight reloading in GRPO rollout workloads, faster generation of long reasoning rollouts, and lower peak VRAM
- Qwen optimize
- [Kernel] Integrate IBM/Applied-AI fused moe kernels
- [Bug]: 100% cpu usage on 3 cores on every node when using ray distributed pipeline parallel
- Docs (Python not yet supported)