vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're feeling ambitious, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
13 Subscribers
Help out
- Issues
- [RFC] Clarifying vLLM Shutdown Semantics
- [Bug]: [Spec Decode] Spec decoding is not disabled at/after configured batch size
- [Bug]: Engine Fails when running Qwen3-Next with no traceback
- [Feature]: Allow increasing the flashinfer workspace buffer size
- [Feature]: Tracking Whisper feature requests
- [Feat][EPLB] Enable Round-robin expert placement strategy while eplb is enabled.
- [Usage]: Is it safe to enable TorchInductor remote cache (Redis) in vLLM?
- [Bug]: `vllm serve --help` still spends time on CUDA init
- Change FLASHINFER_WORKSPACE_BUFFER_SIZE to be configurable by envvar
- Add context parallelism configurations and parallel group
- Docs
- Python not yet supported