vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Add a CodeTriage badge to vllm
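A CodeTriage badge is an ordinary Markdown image link. A snippet in CodeTriage's usual pattern would look like the one below; the exact URL pattern is assumed from CodeTriage's convention, so verify it on the badge page before committing:

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```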
Help out
- Issues
- fix: route voyage qwen3 bidirectional models to correct vLLM class
- [EPLB] Simplify move_to_buffer() by decomposing into helper functions
- [Refactor][vLLM IR]: Replace hardcoded IrOpPriorityConfig fields with dynamic priorities dict
- [RFC]: Add unified Pod Snapshot API to support automatic cloud provider checkpoints
- Fix FP8 KV wake-up for nested KV cache containers
- [Bug]: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet
- [Bug]: `collect_env.py` crashes on non-Linux platforms (macOS/Windows) due to unconditional assert in `get_pkg_version` (see the guard sketch at the end of this page)
- [Bug] Cohere2ForCausalLM fails to load ModelOpt NVFP4 quantized models
- [Bugfix] Stop Harmony stream parsing after parser errors
- [ROCm/MI325X] DeepSeek-V4-Flash: NotImplementedError: mul_cuda not implemented for Float8_e8m0fnu in normalize_e4m3fn_to_e4m3fnuz
- Docs (Python not yet supported)
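For context on the `collect_env.py` issue above: the crash comes from asserting that a package-version lookup succeeded, which fails on platforms where the package is never installed. Below is a minimal sketch of the guard pattern such a fix might take. The helper name `get_pkg_version` comes from the issue title; the body here is an assumption for illustration, not vLLM's actual implementation.

```python
# Hypothetical sketch: degrade gracefully when a package is absent instead of
# asserting, so environment reports still work on macOS/Windows.
from importlib.metadata import PackageNotFoundError, version


def get_pkg_version(pkg_name: str) -> str:
    """Return the installed version of `pkg_name`, or 'N/A' if it is missing."""
    try:
        return version(pkg_name)
    except PackageNotFoundError:
        # Linux-only packages are never installed on other platforms;
        # returning a sentinel keeps the report usable instead of crashing.
        return "N/A"


if __name__ == "__main__":
    print(get_pkg_version("torch"))     # e.g. "2.5.1" if installed
    print(get_pkg_version("rocm-smi"))  # "N/A" on most non-Linux machines
```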