deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
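In practice, training with DeepSpeed means wrapping an existing PyTorch model with deepspeed.initialize and driving the loop through the returned engine. A minimal sketch, assuming a run launched with the deepspeed launcher; the model and config values below are illustrative, not recommendations:

```python
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real model

# Illustrative config: bf16 mixed precision with ZeRO stage 2.
ds_config = {
    "train_batch_size": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model and returns an engine that
# handles data parallelism, ZeRO partitioning, and mixed-precision
# bookkeeping; it builds the optimizer from the config.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for step in range(10):
    x = torch.randn(8, 1024, device=engine.device, dtype=torch.bfloat16)
    loss = engine(x).float().pow(2).mean()
    engine.backward(loss)  # gradient scaling/accumulation per config
    engine.step()          # optimizer step (and lr scheduler, if any)
```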
Issues
- [BUG] deepspeed/ops/transformer/inference/triton/matmul_ext.py -> df: /root/.triton/autotune: No such file or directory
- [REQUEST] Muon Optimizer - Different LR for Different Groups (see the first sketch after this list)
- [BUG][Deepcompile] reduce_grad returns undefined tensor -> Inductor compilation fails (expected a proper tensor but got None)
- Tracking excessive CPU memory usage in ZeRO-2 (z2) CPU offload
- ignoring *.cuh prevents multi_tensor_apply.cuh from being pushed
- [BUG]
- [BUG] MoE router parameters are forced to bf16 under DeepSpeed bf16, causing dtype mismatch in fp32 routing logic (see the second sketch after this list)
- feat: add parameter-level precision control for BF16 training
- `fp_quantizer` ops bug
- Why does AutoTP apply only LinearAllreduce (RowParallel)? What about ColumnParallel and ParallelEmbedding?
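Two of the items above benefit from a concrete illustration. First, the Muon optimizer request: "different LR for different groups" refers to the standard PyTorch parameter-group mechanism, sketched here with torch.optim.AdamW since Muon itself is a third-party optimizer; the group split is illustrative only:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Linear(64, 10))

# Each dict is a parameter group with its own learning rate; the
# request asks for the same capability in a Muon integration.
optimizer = torch.optim.AdamW(
    [
        {"params": model[0].parameters(), "lr": 1e-3},  # "backbone" group
        {"params": model[1].parameters(), "lr": 1e-4},  # "head" group
    ],
    weight_decay=0.01,
)
```

Second, the MoE router dtype mismatch: under bf16 training the router ("gate") weights are downcast with everything else, while the routing logic expects fp32. A minimal workaround sketch, assuming parameter names containing "gate" identify the router and that ZeRO-3 parameter partitioning is not in use (both are assumptions, not DeepSpeed API):

```python
import torch

def upcast_router_params(model: torch.nn.Module, pattern: str = "gate") -> None:
    """Cast matching bf16 parameters back to fp32 in place.

    The name pattern is an assumption about the model's parameter
    naming; naive in-place casts do not apply under ZeRO-3, where
    parameters are partitioned across ranks.
    """
    for name, param in model.named_parameters():
        if pattern in name and param.dtype == torch.bfloat16:
            param.data = param.data.float()

# The routing computation must then also run in fp32, e.g.:
#   logits = router(hidden.float())
```

The related feature request above ("parameter-level precision control for BF16 training") would make this configurable instead of hand-patched.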