deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
- Issues
- [BUG] how to achieve hybrid data and pipeline parallelism?
- Functorch support: RuntimeError: In order to use an autograd.Function with functorch transforms
- [BUG] Ulysses DistributedAttention silently produces incorrect output when #GPUs does not divide global sequence length
- How to properly use tensor_parallel while applying also Zero Stage 3
- [BUG] FlopsProfiler accumulates metrics when called multiple times
- Some questions about gradient accumulation
- [BUG] Resolving OOM issues in concurrent distributed inference of a 111B teacher model and distributed training of an 8B student model on multi-node H200 GPUs
- [QUESTION/HELP] ZeRO-3: how to get the weights that participate in the loss
- [BUG] UlyssesSPDataLoaderAdapter returns duplicate data
- [BUG] CUDA failure 700 when using deepcompile with ZeRO stage 3
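
Several of the issues above touch on ZeRO stage 3 configuration (OOM during multi-node runs, combining it with tensor parallelism, gradient accumulation). For orientation, a minimal DeepSpeed JSON config enabling ZeRO stage 3 with CPU offload might look like the sketch below; the specific values are illustrative and should be checked against the DeepSpeed configuration docs for your version.

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 4,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu" },
    "offload_param": { "device": "cpu" }
  }
}
```

A config like this is typically passed to `deepspeed.initialize()` via the `config` argument or the `--deepspeed_config` launcher flag.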