deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
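For orientation, here is a minimal sketch of typical DeepSpeed usage: a PyTorch model is wrapped with deepspeed.initialize and driven by a JSON-style config. The toy model, batch size, and the ZeRO-2 + CPU offload settings (which several issues listed below touch on) are illustrative assumptions, not taken from this page, and such a script is normally started with the `deepspeed` launcher.

```python
# Minimal, illustrative sketch of DeepSpeed usage (values are assumptions).
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # any torch.nn.Module

ds_config = {
    "train_batch_size": 32,
    "zero_optimization": {
        "stage": 2,                               # ZeRO-2: partition optimizer states and gradients
        "offload_optimizer": {"device": "cpu"},   # CPU offload of optimizer states
        "overlap_comm": True,                     # overlap gradient communication with compute
    },
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "bf16": {"enabled": True},
}

# Returns an engine that handles distributed training, ZeRO sharding, and offload.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# Training step: the engine owns backward and optimizer stepping.
# loss = model_engine(inputs)
# model_engine.backward(loss)
# model_engine.step()
```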
Help out
- Issues
- Please publish the latest Windows WHL; the last one is too old
- Does AutoTP support multimodal training?
- RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
- [BUG] Exception raised at the end of training with deepcompile enabled
- [BUG] ZeRO-2 + CPU Offload + overlap_comm=true: the IPG (Independent Partition Gradient) buckets are never populated
- [BUG] The same program runs fine with v0.17.5 but fails with v0.17.6 under the ZeRO-2 configuration
- [BUG] deepspeed/ops/transformer/inference/triton/matmul_ext.py -> df: /root/.triton/autotune: No such file or directory
- [REQUEST] Muon Optimizer - Different LR for Different Groups
- [BUG][Deepcompile] reduce_grad returns undefined tensor -> Inductor compilation fails (expected a proper tensor but got None)
- Tracking excessive CPU memory usage in ZeRO-2 CPU offload
- Docs
- Python not yet supported