vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
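For context on what the project does, here is a minimal offline-inference sketch using vLLM's Python API. The model name is only an example; any Hugging Face causal LM that vLLM supports would work.

```python
# Minimal vLLM offline inference sketch.
# "facebook/opt-125m" is an example model, not a project requirement.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

# LLM() loads the model and allocates the paged KV cache.
llm = LLM(model="facebook/opt-125m")

# generate() batches prompts via continuous batching for high throughput.
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```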
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported · 19 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: Serving GLM-5 online with dp enabled raises RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered
- [Bugfix] Fix MLA weight access crash for quantized layers (NVFP4/INT4)
- [Feature] TRITON_MLA: support FP8 KV cache (needed for SM12.0 / Blackwell)
- [Bug]: vllm: error: unrecognized arguments: --task embedding
- [Bugfix] Fix TypeError in benchmarks/benchmark_prefix_caching.py with --sort
- [Bug]: invalid argument at cumem_allocator.cpp:119
- [Bug]: MI355 MiniMax M2.1 arch MXFP4 ROCm AITER TP4 error
- [Bug]: HIP build in Docker: offload-arch stderr contaminates compiler flags via cmake/utils.cmake and CMAKE_HIP_FLAGS
- [Feature]: AMD MXFP4 MiniMax M2.5 Checkpoint
- [Misc] Replace bare AssertionError with specific exception types
- Docs (Python not yet supported)