vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
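For context, offline inference with vLLM is typically driven through its `LLM` and `SamplingParams` classes. The snippet below is a minimal sketch; the model name and sampling settings are illustrative choices, not anything prescribed by this page.

```python
from vllm import LLM, SamplingParams

# Illustrative prompt and sampling settings (not from this page).
prompts = ["Explain what vLLM is in one sentence."]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# A small model is assumed here purely to keep the example light.
llm = LLM(model="facebook/opt-125m")

# generate() returns one result per prompt; each holds the generated completions.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```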
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported · 19 Subscribers
Add a CodeTriage badge to vllm
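The badge is ordinarily a Markdown image link added to the repo's README. The snippet below is a sketch that assumes CodeTriage's usual badge URL pattern; the exact path may differ.

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```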
Help out
- Issues
- feat(whisper): add decoder prefix and custom task tokens for transcription API
- feat(metrics): add configurable Prometheus histogram buckets via CLI flags
- [Feature]: Reasoning output for offline inference
- [Bug]: GLM-5 FP8 on H200 CUDA OOM in sparse_attn_indexer at High Concurrency
- [Bug]: EngineCore exits immediately after startup when vLLM CPU is launched from multiprocessing.Process on macOS
- [Draft] Support model Qwen3_5/Qwen3_5_moe on NPU platform
- [Installation]: unrecognized arguments: --omni
- [Refactor][KVConnector]: Move KV Cache Events into KVConnectorWorkerMetadata
- [WIP][Quantization] add humming quantization kernel
- [Bug]: AR+rms broken for TP=2 DP=2
- Docs
- Python not yet supported