vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
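
vLLM exposes a Python API for offline batched inference. The following is a minimal sketch, not taken from this page; the model id facebook/opt-125m is only a placeholder, and any Hugging Face checkpoint supported by vLLM would work:

```python
# Minimal offline-inference sketch with vLLM (placeholder model id).
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM loads the model weights and manages KV-cache memory for serving.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts together for high-throughput decoding.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```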
Issues
- fix(tool_parsers): enable type conversion for Seed OSS tool parser streaming mode
- [ROCM] add 3d triton kernel for non-standard block size support under rocm_attn
- [Logging][Bugfix] fix scheduler stats logging
- docs: add version requirement note for --profiler-config flag
- [Bug]: Llama4 FP8 failure with Flashinfer on B200
- [Bugfix] Fix Hermes tool parser dropping empty arguments for parameterless tools
- [Parsers] Pangu Reasoning parser and Tool parser
- [RFC]: About design of QuantKey
- [Bug]: Serve with LoRA error "ValueError: base_model.model.lm_head.base_layer.weight is unsupported LoRA weight"
- [CI] Test NIXL+Offloading connector