pytorch-lightning
https://github.com/pytorchlightning/pytorch-lightning
Python
The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
9 Subscribers
Add a CodeTriage badge to pytorch-lightning
Help out
- Issues
- Allow nested `batch_arg_name` in `BatchSizeFinder`/`Tuner.scale_batch_size()`
- Validation stuck when trainers have different data size
- `validation_epoch_end` is still mentioned in the documentation for version >= 2.0.0, although it has been removed from the code
- MLFlow ResponseError('too many 500 error responses') when trying to log to a deleted experiment
- Error when logging to MLFlow deleted experiment
- Need a strategy to gather and compute validation loss on the whole validation dataset
- Error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory
- MisconfigurationException use cases
- Change `optimizer` or `lr_scheduler` in resuming training without removing the `global_step` information
- Tensorboard Logger is flushed on every step
- Docs