vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
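For context before diving into the issues below, vLLM's offline inference API is small: construct an `LLM`, define `SamplingParams`, and call `generate`. A minimal sketch in the style of the project's quickstart (the model ID is just an example; any supported Hugging Face model works):

```python
from vllm import LLM, SamplingParams

# Example prompts; any list of strings works.
prompts = [
    "Hello, my name is",
    "The future of AI is",
]

# Sampling configuration for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Load a model (example model ID; swap in any supported model).
llm = LLM(model="facebook/opt-125m")

# Batched generation; vLLM handles scheduling and KV-cache memory internally.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```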
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
26 Subscribers
Help out
Issues
- [Bugfix] Fix inverted condition causing thinking_token_budget to be silently ignored
- [Bug]: Pipeline Parallelism scheduler does not split sequences into pipeline micro-batches
- Keep first/last n tokens in high precision for nvfp4 KV cache
- [RFC]: Long-context-optimized Pipeline Parallelism, CPP + Async P2P + Dynamic Chunking
- [Bug]: DeepEP MoE all-to-all backend integration is unusable on Blackwell (SM103 / GB300)
- [DSV4] Add PP support for deepseek-v4
- [Bug]: SimpleCPUOffloadScheduler misses the final full block when a request finishes in the same scheduler step
- [Feature]: [IR] mm_encoder_attn migration on hold pending FlashInfer workspace support
- [Bug]: Kimi 2.6 + Kimi K2 tool parser passes malformed JSON in tool-call arguments to the client without validation
- [Bug]: TurboQuant _continuation_prefill workspace allocation fails at long context — v0.20.0 regression
Docs
- Python not yet supported