vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
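For context, the engine exposes a simple offline-inference API. Below is a minimal sketch based on vLLM's quickstart; the model name and sampling values are placeholders, not recommendations:

```python
# Minimal offline inference with vLLM's Python API.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Downloads the model from the Hugging Face Hub on first use.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```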
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
13 Subscribers
Add a CodeTriage badge to vllm
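A CodeTriage badge is a standard markdown image link added to the repo's README. The snippet below assumes CodeTriage's usual badge URL scheme for this repo:

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```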
Help out
- Issues
- [CI/Build] amd-ci-fix-kernels-attn
- [Feature]: CUDA 12.6 prebuilt wheel for vLLM v0.11
- [Usage]: deepseek-ocr: the output token count is too low and unstable.
- [Bug]: During the vLLM 0.10.1 v1 benchmark test, only about 100 out of 1000 requests were processed before the engine got stuck.
- [Bug]: RotaryEmbedding forward_native cannot match as expected for QKNormRoPEFusionPass
- [Doc]: Any detailed documentation about how to load_weights in customized vllm model?
- [wip] Fix prime rl test
- [Feature][UX]: vLLM Kernel Configuration
- [Bug]: ZeroDivisionError caused by dividing by pbar.format_dict["elapsed"] in LLM._run_engine() when use_tqdm=True (see the guard sketch after this list)
- [Bug]: llama 4 + fp4 is broken
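The ZeroDivisionError issue above stems from tqdm reporting zero elapsed time when a rate is computed immediately after the bar starts. A sketch of the defensive pattern, assuming the fix is simply to check `elapsed` before dividing (the actual vLLM patch may differ):

```python
# Guard against elapsed == 0 when deriving a throughput from tqdm state.
from tqdm import tqdm

pbar = tqdm(total=1000)
pbar.update(42)

elapsed = pbar.format_dict["elapsed"]  # seconds since the bar was created
# Avoid dividing by zero when no measurable time has passed yet.
rate = pbar.n / elapsed if elapsed else 0.0
print(f"processed {pbar.n} requests at ~{rate:.1f} req/s")
pbar.close()
```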
- Docs
- Python not yet supported