deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
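For readers unfamiliar with the library, here is a minimal sketch of typical usage: `deepspeed.initialize` wraps a PyTorch model and returns a training engine. The model and config values below are illustrative placeholders, not taken from this repository.

```python
import torch
import deepspeed

# Placeholder model; any torch.nn.Module works here.
model = torch.nn.Linear(1024, 1024)

# Illustrative config; values are assumptions, not recommendations.
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler);
# the last two are None when no training data or scheduler is passed.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```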
Issues
- [BUG] Zenflow_stage3 - RuntimeError: narrow() cannot be applied to a 0-dim tensor.
- Let allgather and alltoall execute in parallel when both attention and MoE use TP
- Gradient of the loss w.r.t. sharded parameters
- [BUG] Incompatibility between DeepSpeed AutoTP and BLOOM in training of Hugging Face models
- [BUG] `Assert Error: assert buffer.grad is not None` & `RuntimeError: element 1 of tensors does not require grad and does not have a grad_fn` During pipeline parallelism
- [BUG] Install to Windows - fatal error LNK1181
- [BUG] Installing DeepSpeed on an NPU machine reports an error during verification
- [REQUEST] I want to use a CPU-based distributed approach to train a small recommendation model. Is there a demo available to refer to?
- [BUG] AutoTP training runs into missing gradient error
- [BUG] Memory is sufficient for training with ZeRO-3, but OOM occurs after enabling DeepCompile (see the config sketch below)
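Several of the issues above involve ZeRO Stage 3 (full parameter, gradient, and optimizer-state sharding). For context, here is a minimal sketch of a ZeRO-3 configuration that could be passed as the `config` argument to `deepspeed.initialize`; the batch size and CPU-offload settings are illustrative assumptions, not project recommendations.

```python
# Illustrative ZeRO Stage 3 config (all values are assumptions).
ds_config = {
    "train_batch_size": 16,
    "zero_optimization": {
        "stage": 3,  # shard parameters, gradients, and optimizer state
        "offload_param": {"device": "cpu"},      # optionally offload parameters to CPU
        "offload_optimizer": {"device": "cpu"},  # optionally offload optimizer state to CPU
    },
    "bf16": {"enabled": True},
}
```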