vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
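As a quick illustration of what the engine does (a minimal sketch of vLLM's offline inference API; the model name and sampling settings below are arbitrary examples, not part of this page):

```python
# Minimal offline inference sketch with vLLM.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# Any Hugging Face model supported by vLLM can be passed here;
# "facebook/opt-125m" is just a small illustrative choice.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```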
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported · 13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: AttributeError: module 'vllm.distributed.parallel_state' has no attribute 'get_tensor_model_parallel_group'. Did you mean: 'get_tensor_model_parallel_rank'?
- [Feature]: per head or per channel fp8 kvcache support?
- [Bug]: gpt-oss-20B token ids out of range
- [Bug]: same `max_seq_len` of flashinfer trtllm decode and prefill.
- [Usage]: v1 SharedStorageConnector and PyNcclConnector execute the model again for the input prompt
- [Refactor] Simplify FusedMoEParallelConfig.make() logic and remove redundant assert
- [Usage]: Running Qwen3-VL-235B-A22B-Instruct-AWQ on two A100-80G GPUs results in an error
- [Bug]: MoE config not found Tesla_T4.json
- [Feature]: Benchmark Scalability Optimization
- [Usage]: Does EPLB support CompressedTensorsWNA16MarlinMoEMethod in v0.12.0 or later versions?
- Docs: Python not yet supported