vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
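For orientation, here is a minimal sketch of offline batch inference with vLLM's Python API. The model name is only an example (any supported Hugging Face causal LM works), and the sampling values are arbitrary.

```python
# Minimal vLLM offline-inference sketch (example model and sampling values).
from vllm import LLM, SamplingParams

# Load a model; vLLM manages KV-cache memory efficiently via PagedAttention.
llm = LLM(model="facebook/opt-125m")

# Sampling parameters applied to every prompt in the batch.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in a single call.
prompts = ["Hello, my name is", "The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```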
Issues
- lora: add EP support for FusedMoEWithLoRA
- Tokenizer error with Huihui-Qwen3.5-35B-A3B-Claude-4.6-Opus-abliterated model - TokenizersBackend not found
- [ROCm]: gpt-oss fusion/padding fixes
- [MoE][GPT-OSS] Add L40S/SM89 Marlin block-size policy
- feat(quantization): fix W4A8-INT activation quantization and int4 support in Marlin kernel
- [ROCm] Enable VLLM triton FP8 moe for gfx1201, tuned for Qwen3-30B-A3B-FP8 tp=2 and Qwen/Qwen3.5-35B-A3B-FP8 tp=2
- [Bugfix] fix: normalize layer names for kv cache group to prevent KeyError in
- [CI Failure]: LM Eval Large Models (H200)
- [Doc] Update example docs to include Nemotron Super v3 and Nano 4B
- [Bugfix] Fix Qwen 3.5 GGUF loading: add model type mapping and vision config d…