vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
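To make the one-line description concrete, here is a minimal offline-inference sketch using vLLM's documented Python API; the model name and sampling values are illustrative, not prescribed by this page:

```python
# Minimal offline inference with vLLM's Python API.
from vllm import LLM, SamplingParams

# Any Hugging Face model supported by vLLM works here; opt-125m is just a small example.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The future of LLM serving is"], params)
for out in outputs:
    print(out.outputs[0].text)
```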
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
26 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- Migrate gpt-oss-20b MoE backend selection from env var to model kwarg
- [Frontend][Bugfix] Abort ASR engine requests on cancellation
- [Bugfix][KV Transfer][NIXL] Notify P node on pre-admission rejection to free stranded KV blocks
- [Attention] Remove unused slot mapping from TreeAttention metadata
- [Bugfix][CI][Hardware][AMD] Fix various e4m3fn -> e4m3fnuz normalization issues
- [MoE Refactor] Add sequence parallel tests to test_moe_layer.py
- Fix Dynamic NTK RoPE scaling formula (background sketch at the end of this page)
- [ci] Add arm64 ci image
- [Bug]: v0.20 latency and throughput regression on MoE models
- [Feat] dnnl build for AVX2 W8A8 Int8
- Docs
- Python not yet supported
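For background on the "Fix Dynamic NTK RoPE scaling formula" issue above: Dynamic NTK-aware RoPE scaling rescales the rotary base once the running sequence length exceeds the model's trained context, then recomputes the inverse frequencies. Below is a hedged sketch of the commonly used community formula (as popularized in Hugging Face transformers-style implementations); it is not necessarily the exact fix in the issue, and the parameter values are illustrative:

```python
import numpy as np

def dynamic_ntk_inv_freq(seq_len: int, dim: int, base: float = 10000.0,
                         max_pos: int = 4096, scale: float = 2.0) -> np.ndarray:
    """Inverse RoPE frequencies with dynamic NTK-aware base rescaling.

    Sketch of the widely used community formula; defaults are illustrative
    and not taken from vLLM's code.
    """
    if seq_len > max_pos:
        # Grow the base as the context exceeds the trained length.
        base = base * ((scale * seq_len / max_pos) - (scale - 1)) ** (dim / (dim - 2))
    # Standard RoPE inverse frequencies: base^(-2i/dim) for i = 0 .. dim/2 - 1.
    return 1.0 / base ** (np.arange(0, dim, 2, dtype=np.float64) / dim)
```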