vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
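For context, a minimal sketch of how the engine is typically driven from Python, using vLLM's offline `LLM` and `SamplingParams` API. The model name and sampling settings below are illustrative assumptions, not taken from this page.

```python
# Minimal offline-inference sketch with vLLM (model and settings are placeholders).
from vllm import LLM, SamplingParams

prompts = ["What is paged attention?"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Load a model and run batched generation; vLLM manages KV-cache paging
# and continuous batching internally.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```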
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported · 13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: Llama-3.2-1b-instruct prepends an extra BOS token
- [Bug]: NaNs in MLA with chunked prefill
- [Bug]: Qwen3 MoE support on GH200
- [Usage]: `add_vision_id` ignored for Qwen 2.5-VL-32B-Instruct
- [Bug]: Value error, Found conflicts between 'rope_type=default' (modern field) and 'type=mrope'
- [Bug]: vLLM 0.10.1 can deploy a DeepSeek-70B model with tp=2 and max-model-len 20000 on a machine with two NVIDIA A800 (80 GiB) GPUs, but vLLM 0.11.0 fails
- [Bug]: Compile Integration should reuse for identical code
- [Bug]: SamplingParams.truncate_prompt_tokens has no effect in LLM.chat (see the sketch after this list)
- [Bug]: Qwen3-VL-235B-A22B-Instruct stuck with assert placeholder < len(self._out_of_band_tensors)
- [Bug]: Potential out-of-bounds access in paged_attention_v1.cu and paged_attention_v2.cu
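As a point of reference for the `SamplingParams.truncate_prompt_tokens` report above, here is a hedged reproduction sketch. The model name and prompt are placeholders, and the expected-versus-actual behavior noted in the comments reflects the reporter's claim, not confirmed behavior.

```python
# Reproduction sketch for the truncate_prompt_tokens report (placeholders throughout).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=16, truncate_prompt_tokens=8)

# A deliberately long user message, so truncation would be observable.
messages = [{"role": "user", "content": "word " * 200}]

# Per the report, the truncation limit is ignored when going through
# LLM.chat, whereas the reporter expected the prompt to be truncated
# to the last 8 tokens before generation.
outputs = llm.chat(messages, sampling_params=params)
print(outputs[0].outputs[0].text)
```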
- Docs
- Python not yet supported