deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
- Issues
- [BUG] clip_grad_norm for zero_optimization mode is not working
- [BUG] NCCL operation timeout when training with deepspeed_zero3_offload or deepspeed_zero3 on RTX 4090
- Demos showing how to configure offloading tensors to an NVMe device
- [BUG] max_grad_norm has no effect
- Training ops kernels: Speeding up the Llama-based MoE architectures
- `ZeRO-3 + MP8` Universal Checkpoint
- [BUG] Zero3 for torch.compile with compiled_autograd when running LayerNorm
- [BUG] Issue with ZeRO optimization for Llama-2-7b fine-tuning on Intel GPUs
- [BUG] Pipeline parallelism + fp16 + MoE isn't working
- [REQUEST] Some questions about deepspeed sequence parallel
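Several of the issues above concern `zero_optimization` settings (gradient clipping, ZeRO-3 offload, NVMe offload). For orientation, here is a minimal sketch of a DeepSpeed JSON config enabling ZeRO stage 3 with NVMe offload and gradient clipping; the `nvme_path` value and the batch size are placeholder assumptions, and the `aio` tuning values are illustrative defaults, not recommendations.

```json
{
  "train_batch_size": 8,
  "gradient_clipping": 1.0,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_param": {
      "device": "nvme",
      "nvme_path": "/local_nvme",
      "pin_memory": true
    },
    "offload_optimizer": {
      "device": "nvme",
      "nvme_path": "/local_nvme"
    }
  },
  "aio": {
    "block_size": 1048576,
    "queue_depth": 8,
    "single_submit": false,
    "overlap_events": true
  }
}
```

Passed to `deepspeed` via `--deepspeed_config`, this routes parameter and optimizer state to the NVMe path under ZeRO-3; whether it resolves the specific bugs listed above depends on the issue in question.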