vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
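For context, a minimal sketch of offline inference with vLLM's Python API; this assumes the `vllm` package is installed, and the model name and sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# Load a model; vLLM batches requests and manages KV-cache memory
# (PagedAttention) for high-throughput generation.
llm = LLM(model="facebook/opt-125m")  # illustrative model choice

# Illustrative sampling settings.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["Hello, my name is", "The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

For online serving, the same engine can be exposed through an OpenAI-compatible HTTP server (e.g. `vllm serve <model>`).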
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Docs triage is not yet supported for Python.
13 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix] Handle missing config.json in speculator probe for GGUF models
- [P/D] p2p_nccl: implement async KV loading for decode stage
- [Bug]: Prefix Cache Corruption with LoRA with the same name but different id
- [Feature]: Could logs be output in a given format?
- Add positional embedding and kv_cache fusion for llama and gpt-oss
- [Do not merge][Async] Asynchronous DP coordination
- [Doc] Add warning regarding GPU profiling limitations on WSL2
- [BugFix] Fix beam search parent mapping for variable logprobs
- Fix documentation for torchrun_example.py
- [Bug]: Wrong Generation Under High Concurrency When Using KVCache CPU Offload (vLLM 0.13.0)