vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
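
As a quick illustration of the library's offline inference API (a minimal sketch; the model name and sampling settings below are placeholders chosen for the example, not project defaults):

```python
from vllm import LLM, SamplingParams

# Load a model once; vLLM manages KV-cache memory internally.
# Any Hugging Face model id works; this one is purely illustrative.
llm = LLM(model="facebook/opt-125m")

# Sampling settings are assumptions chosen for the example.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation: vLLM schedules all prompts together for throughput.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```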
Issues
- [CORE][V1] fix: alive-but-hung EngineCore not being detected by `/health` endpoint.
- [Docs] Add documentation for vllm launch render command
- [Bug]: Accuracy Issue with FlashMLA Sparse on DeepSeek V3.2
- refactor(envs): introduce typed Envs class with lazy __getattr__ and attribute docstrings
- [WIP] [Hybrid][GDN] Enable prefix caching 'all' mode for Qwen3.5/Qwen3Next
- [Observability] Add scheduler preemption metrics
- [Bug]: Unable to run Qwen3.5 on RTX5090
- [Bug]: LoRA on Qwen-3.5-2B fails to run
- Replace OMP initialization
- Add VLLM_USE_MONITORX to use more efficient busy polling
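
The first issue above concerns the server's `/health` endpoint. For context, a minimal liveness probe against a running `vllm serve` instance might look like the sketch below (host, port, and timeout are assumptions):

```python
import requests

# /health returns HTTP 200 while the engine reports itself responsive.
# The issue above is about the opposite failure mode: an EngineCore that
# is alive but hung and therefore still passes this check.
resp = requests.get("http://localhost:8000/health", timeout=5)
print("healthy" if resp.status_code == 200 else f"unhealthy: {resp.status_code}")
```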