vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Usage]: how to pass param logits_processors in AsyncEngineArgs?
- [Bug]: Behavior change 0.11.2 vs 0.12 (and up)
- [Feature]: Sort blocks by block_id in FreeKVCacheBlockQueue.append_n to enable contiguous allocation
- [Misc] Remove redundant all reduce in qkv split for ViTs
- [Bug]: "No tokenizer file found in directory" is seen when serve model from local directory after upgrading vllm from 0.11.2 to 0.12
- [Bug]: DeepSeek on B300 reports `invalid numeric default value` error
- [RFC]: Why custom_mask is not exposed on FlashInfer to get more flexible use case?
- [Feature]: MoE Layer
- [Bug][ModelOpt]: Llama4 DP/EP FlashInfer Cutlass Is Broken
- [RFC]: Decouple page_size_bytes calculation in AttentionSpec for TPU/RPA Compatibility
- Docs
- Python not yet supported