tvm
https://github.com/apache/tvm
Python
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to tvm
Help out
- Issues
- [Bug] terminate called after throwing an instance of 'tvm::runtime::InternalError'
- [Relax] Expose BlockBuilder's Analyzer instance in Python
- [Bug] [Relax] Build fails when applying `dlight.gpu.GeneralReduction` to `R.nn.group_norm` with dynamic shapes and `R.reshape`
- [Bug] Check failed: (::tvm::runtime::IsContiguous(tensor->dl_tensor)) is false: DLManagedTensor must be contiguous.
- [Bug] InternalError: Check failed: (it != slot_map_.end()) is false: Var m is not defined in the function but is referenced by m * n during VM Shape Lowering
- [Bug] Inconsistent module structure and InternalError: Check failed: (!require_value_computed) is false: PrimExpr m is not computed
- [Bug] TVMError: unknown intrinsic Op(tir.atan) during relax.build with custom atan TIR function
- [Bug] InternalError: Check failed: (!block_stack_.empty()) is false in StaticPlanBlockMemory with Dataflow
- [Bug] 'tvm.relax.op.nn' has no attribute 'attention_bias'
- [Bug] InternalError "Check failed: indices.size() == 1 (2 vs. 1): CodeGenLLVM requires all buffers to be flat 1-d buffers"
- Docs
- Python not yet supported