tvm
https://github.com/apache/tvm
Python
Open deep learning compiler stack for CPU, GPU and specialized accelerators
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to tvm
Help out
- Issues
- [Bug] [FRONTEND][ONNX] Error converting operator ConvTranspose: InternalError: In Op(relax.add), the first input shape at dim 1 is T.int64(16) and the second input shape at dim 1 is T.int64(32), which are not broadcastable. (The broadcast rule involved is sketched after this list.)
- [Bug] Segfault in `tvm.compile` (Relax→TIR, CUDA target) inside `tir::transform::InjectPTXLDG32` / `PTXRewriter::VisitStmt_(BufferStore)` when compiling `torch.export` model returning `(tril, triu)` tuple
- [Bug] How to migrate from te.create_schedule and auto_scheduler to TVM v0.20 (a migration sketch follows this list)
- [Bug] When compiling using TVM's Relax, the output remains the same regardless of the optimization level set.
- Update scan.py to fix pascal error
- [Tracking Issue] Need support for GQA Attention in Relax
- [Build] Track upstream apache/tvm-ffi
- [TIR] Update symbolic index term order in loop fusion
- [CI Problem] lint check has a bug
- [Bug] Install on H100 problem about cutlass_fpA_intB_gemm
- Docs: Python not yet supported
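For context on the ConvTranspose issue above: the InternalError comes from Relax's NumPy-style broadcast rule, which requires that at each dimension the two sizes either match or one of them is 1. A minimal sketch of that rule, using plain NumPy with shapes borrowed from the issue title (illustrative only, not a reproduction of the frontend bug):

```python
import numpy as np

# NumPy-style broadcast rule: compare shapes dimension by dimension;
# the sizes must be equal or one of them must be 1.
a = np.zeros((1, 16, 8, 8))
b = np.zeros((1, 1, 8, 8))
print((a + b).shape)   # (1, 16, 8, 8): 16 vs 1 at dim 1 broadcasts fine

c = np.zeros((1, 32, 8, 8))
try:
    _ = a + c          # 16 vs 32 at dim 1: neither is 1, so not broadcastable
except ValueError as err:
    print(err)
```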
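On the te.create_schedule / auto_scheduler migration question: in current TVM the TE schedule API has been superseded by TensorIR scheduling, and auto_scheduler by meta_schedule. A rough sketch of the TensorIR path, assuming the te.create_prim_func / tir.Schedule / tvm.build entry points of recent releases (names may differ slightly in v0.20, so treat this as a starting point rather than the canonical migration):

```python
import tvm
from tvm import te

# Describe the computation with TE as before.
n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Instead of te.create_schedule, lower the TE compute to a TensorIR PrimFunc
# and schedule it with tir.Schedule.
mod = tvm.IRModule({"main": te.create_prim_func([A, B, C])})
sch = tvm.tir.Schedule(mod)
(i,) = sch.get_loops(sch.get_block("C"))
sch.parallel(i)

# Build the scheduled module for the CPU.
lib = tvm.build(sch.mod, target="llvm")
```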