vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
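
For context, the engine exposes a simple offline inference API in Python. Below is a minimal sketch, assuming a CUDA-capable GPU; facebook/opt-125m is used purely as an illustrative placeholder model:

```python
# Minimal offline-inference sketch with vLLM's Python API.
# Assumes a local GPU; the model name is a placeholder.
from vllm import LLM, SamplingParams

# Load the model; vLLM batches requests and manages KV-cache
# memory internally for high throughput.
llm = LLM(model="facebook/opt-125m")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```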
- Issues
- [Bug]: vLLM v0.14.0: GGUF model with architecture minimax-m2 is not supported yet
- [LoRA] Support LoRA for Embedding and LMHead in Qwen2/3 family
- [Core] Improve MetaTensorMode to intercept more tensor factory operations
- [Bug]: pplx all2all backend hangs during model warmup on A6000 GPUs
- [Usage]: gguf gemma3-4b produces different output from llama_cpp
- [Feature]: Environment Variable to Control Triton Autotuning
- [CI Failure]: Entrypoints Integration Test (Responses API) fails with a GPU utilization ValueError
- [Bug]: MTP does not mask embedding on position 0
- [Usage]: What happens if the offloaded KV cache size is larger than the configured max_local_cpu_size?
- [Bug]: Latency spikes at input_len=1024 with batch_size=16 (TP1 & TP2)