vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
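For context on what the project does, here is a minimal sketch of vLLM's offline generation API. It assumes vllm is installed and uses facebook/opt-125m purely as a small example model; model choice and sampling settings are illustrative, not part of this page.

```python
from vllm import LLM, SamplingParams

# Load a model into the vLLM engine (example model; any supported HF model works).
llm = LLM(model="facebook/opt-125m")

# Sampling settings are illustrative defaults, not recommendations.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```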
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
24 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bugfix] Add deepseek_v32 to Quark dynamic MXFP4 model type check
- [CI Failure]: Kernels FusedMoE Layer Test (2 H100s) is flaky
- [RFC]: Enable prompt_embeds content parts in Chat Completions API
- [compile] Add FlashInfer FP8 async TP fusion and preserve allreduce fusion ordering #27893
- fix(minimax_m2): avoid KeyError on split q/k/v NVFP4 weight scales
- [Tracing] Extend OpenTelemetry instrumentation to remaining HTTP route handlers
- [Bug]: spec decode tests fail on nightly b200 job
- [Feature]: Support sparse in-place weight updates in weight transfer API
- [Doc] Add pip equivalent for CUDA-specific wheel installation
- [torch.compile] E2E correctness testing for fusions
- Docs
- Python not yet supported