tvm
https://github.com/apache/tvm
Python
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Python not yet supported
3 Subscribers
Help out
- Issues
- [Bug] Output mismatch with ONNXRuntime for valid model using GELU + Interpolate + InstanceNorm2d
- Added support for logsigmoid op
- [Bug] Segfault in TVM when building TIR module with pragma_unroll_explicit annotations
- [Bug] [RISC-V RVV] Performance Issue: log operator slower on RVV
- [Bug] [FRONTEND][ONNX] Error converting operator ConvTranspose: InternalError: In Op(relax.add), the first input shape at dim 1 is T.int64(16) and the second input shape at dim 1 is T.int64(32), which are not broadcastable.
- [Bug] Segfault in `tvm.compile` (Relax→TIR, CUDA target) inside `tir::transform::InjectPTXLDG32` / `PTXRewriter::VisitStmt_(BufferStore)` when compiling `torch.export` model returning `(tril, triu)` tuple
- [Bug] How to migrate from te.create_schedule and auto_scheduler to TVM v0.20?
- [Bug] When compiling with TVM's Relax, the output remains the same regardless of the optimization level set.
- Update scan.py to fix Pascal error
- [Tracking Issue] Need support for GQA Attention in Relax
- Docs
- Python not yet supported