vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
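For orientation, here is a minimal sketch of offline inference with vLLM's Python API, following the project's quickstart; the prompt, sampling settings, and model id are illustrative, not prescriptive.

```python
# Minimal offline-inference sketch using vLLM's public Python API.
# The model id is an example; any Hugging Face-compatible model works.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # downloads the model on first run
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each output carries the prompt and one or more generated completions.
    print(output.prompt, output.outputs[0].text)
```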
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Docs triage: Python not yet supported
19 Subscribers
Help out
Issues
- fix(tool_parsers): remove kimi k2 8k section char limit that truncates large tool call arguments
- dcp prefill -> non-dcp decode prototype
- [Core] Support structured outputs for beam search
- feat(model): add embed_sparse task for BGE-M3 server-side sparse aggr…
- [Feature] Add per-request attention capture to the OpenAI-compatible API
- RuntimeError: Already borrowed in Hermes tool parser under concurrent load
- [CI Failure]: V1 e2e + engine : Cannot re-initialize CUDA in forked subprocess
- [CI] Remove DBO xfail on Blackwell
- [CI Failure]: Intel HPU Test - examples/offline_inference/basic/generate.py
- [CI Failure]: V1 Others : test_custom_logitsprocs[CustomLogitprocSource.LOGITPROC_SOURCE_ENTRYPOINT]
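Several of the issues above involve vLLM's OpenAI-compatible server (tool parsing, per-request attention capture). For context, this is a minimal sketch of a client call against such a server, assuming it was started locally with `vllm serve <model>` on the default port; the base URL, API key, and model id are assumptions for illustration.

```python
# Minimal client sketch against a locally running vLLM OpenAI-compatible
# server (e.g. started with `vllm serve facebook/opt-125m`).
# The base_url, api_key, and model values are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="facebook/opt-125m",
    prompt="The capital of France is",
    max_tokens=16,
)
print(completion.choices[0].text)
```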