DeepSpeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
- Issues
- Multiple runs on the same machine: pressing Ctrl+C kills all runs
- Question about using Autotuner with ZeRO and tensor parallelism
- DeepSpeed setup for requiring grads on the input (explainability) without a huge memory increase across all GPUs
- [BUG] DeepSpeed inference for Llama 3.1 70B on 2 nodes, each node with 2 GPUs
- AssertionError: `no_sync` context manager is incompatible with gradient partitioning logic of ZeRO stage 3
- [BUG] [Fix-Suggested] KeyError in stage_1_and_2.py Due to Optimizer-Model Parameter Mismatch
- [BUG] [Fix-Suggested] Checkpoint Inconsistency When Freezing Model Parameters Before `deepspeed.initialize`
- [BUG] clip_grad_norm for zero_optimization mode is not working
- [BUG] NCCL operation timeout when training with deepspeed_zero3_offload or deepspeed_zero3 on RTX 4090
- Demos of how to configure offloading tensors to an NVMe device
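The NVMe-offload question in the list above is addressed through DeepSpeed's `zero_optimization` config section. A minimal sketch, assuming ZeRO stage 3 and a locally mounted NVMe drive (the `nvme_path` value is a placeholder for your own mount point):

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_param": {
      "device": "nvme",
      "nvme_path": "/local_nvme"
    },
    "offload_optimizer": {
      "device": "nvme",
      "nvme_path": "/local_nvme"
    }
  }
}
```

Passing a config like this to `deepspeed.initialize` moves parameter and optimizer state to NVMe instead of keeping it in GPU or CPU memory; offload to NVMe requires ZeRO stage 3, whereas `"device": "cpu"` also works at stage 2 for the optimizer.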