vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
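For orientation, a minimal sketch of offline inference with vLLM's Python API (the `LLM` and `SamplingParams` classes are part of vLLM; the model name is just an illustrative choice):

```python
from vllm import LLM, SamplingParams

# Load a small example model; any Hugging Face model ID supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Generate completions for a batch of prompts in one call.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

The same engine backs the `vllm serve` command, which exposes an OpenAI-compatible HTTP API for online serving.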
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
21 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Qwen3-Omni] Prefer CUDA for faster Whisper audio feature extraction
- [Usage]: Why does `vllm serve` keep filling up my system disk when loading a model from a network mount?
- [Model] Enable LoRA support for tower and connector in Mistral and Voxtral
- [5/n] Migrate non-cutlass part of csrc/quantization/w8a8 to libtorch stable ABI
- [Frontend] `finish_reason` must be `tool_call` whenever a tool is called
- [Perf] Optimize Context Parallel by disabling NCCL_GRAPH_MIXING_SUPPORT
- [Bug]: Kimi-K2-Thinking cannot work on H20-3e
- Optimize fused MoE LoRA intermediate buffers and Triton indexing
- Initial structural_tag support for tool calling
- [Feature]: NVFP4 KV Cache Support
- Docs
- Python not yet supported