vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
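For context on what the project does, below is a minimal offline-inference sketch using vLLM's Python API. It is illustrative only: the model name is an example, and it assumes vLLM is installed (`pip install vllm`) on a supported accelerator.

```python
from vllm import LLM, SamplingParams

# Load a small example model into the vLLM engine (model choice is illustrative).
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Run batched offline inference over a list of prompts.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)

for output in outputs:
    # Each RequestOutput holds the prompt and its generated completions.
    print(output.prompt, "->", output.outputs[0].text)
```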
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
25 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: LMCache MP fallback adapter rejects cache_salt/cache_salts kwargs after #39837
- [Bug][Tracking Issue]: NaNs in CUDA Graph padding regions corrupt activations in some per-token kernels
- [Feature] Support passing configuration to custom attention backends
- [Tracking issue]: TurboQuant/HIGGS Attention follow-ups
- [Bugfix] Fix TurboQuant KV cache index-out-of-bounds in Triton decode kernel
- [Bug]: vLLM fails to start on RDNA 4 (gfx1201) inside containers — amdsmi, circular import, and torch.cuda.device_count() all broken
- [Bug]: Gemma4MultimodalEmbedder normalization order different from Transformers, causing bad audio inference
- fix(gemma4): use weightless k_norm for KV-shared layers (#1)
- [Feature]: Support n_positions config field for nomic_bert models to enable inference beyond max_position_embeddings
- Revert "[Misc] `toy_proxy_server` handle min_tokens" (#39706)
- Docs
- Python not yet supported