# Lightning v2.5.3

Notable changes in this release

## PyTorch Lightning

### Changed
- Added `save_on_exception` option to `ModelCheckpoint` callback (#20916)
- Allow `dataloader_idx_` in log names when `add_dataloader_idx=False` (#20987)
- Allow returning `ONNXProgram` when calling `to_onnx(dynamo=True)` (#20811)
- Extended support for general mappings being returned from `training_step` when using manual optimization (#21011)
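The new `save_on_exception` option tells the checkpoint callback to write a checkpoint when training is interrupted by an exception, so progress is not lost. A minimal, dependency-free sketch of that idea (the `fit`, `save_checkpoint`, and `flaky_step` names below are hypothetical stand-ins for illustration, not Lightning APIs):

```python
import json
import os
import tempfile

def save_checkpoint(state, path):
    # Hypothetical stand-in for the callback's checkpoint writer.
    with open(path, "w") as f:
        json.dump(state, f)

def fit(state, step_fn, ckpt_path, save_on_exception=True):
    # Mirrors the save-on-exception behavior: if the training loop
    # raises, a checkpoint is written before the error propagates.
    try:
        for step in range(state["max_steps"]):
            step_fn(state, step)
    except Exception:
        if save_on_exception:
            save_checkpoint(state, ckpt_path)
        raise

state = {"max_steps": 10, "completed": 0}

def flaky_step(state, step):
    # Simulated failure partway through training.
    if step == 3:
        raise RuntimeError("simulated failure")
    state["completed"] = step + 1

ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
try:
    fit(state, flaky_step, ckpt_path)
except RuntimeError:
    pass  # the checkpoint was written before the exception propagated
```

The point of the design is that the checkpoint reflects the last fully completed step, so a resumed run does not repeat or skip work.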
### Fixed
- Fixed allowing the trainer to accept a `CUDAAccelerator` instance as accelerator with the FSDP strategy (#20964)
- Fixed progress bar console clearing for Rich 14.1+ (#21016)
- Fixed `AdvancedProfiler` to handle nested profiling actions for Python 3.12+ (#20809)
- Fixed `rich` progress bar error when resuming training (#21000)
- Fixed double iteration bug when resuming from a checkpoint (#20775)
- Fixed support for more dtypes in `ModelSummary` (#21034)
- Fixed metrics in `RichProgressBar` being updated according to the user-provided `refresh_rate` (#21032)
- Fixed `save_last` behavior in the absence of validation (#20960)
- Fixed integration between `LearningRateFinder` and `EarlyStopping` (#21056)
- Fixed gradient calculation in `lr_finder` for `mode="exponential"` (#21055)
- Fixed `save_hyperparameters` crashing with `dataclasses` using `init=False` fields (#21051)
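The `save_hyperparameters` fix (#21051) concerns dataclasses with `init=False` fields, which are not `__init__` parameters and so cannot be captured from the constructor signature. A plain-Python sketch of that distinction (the `capture_init_args` helper is a hypothetical stand-in, not the Lightning implementation):

```python
import dataclasses
import inspect

@dataclasses.dataclass
class Config:
    lr: float = 1e-3
    # init=False fields are not __init__ parameters, so naive
    # signature-based hyperparameter capture must skip them.
    steps_done: int = dataclasses.field(default=0, init=False)

def capture_init_args(obj):
    # Hypothetical stand-in: keep only fields that are actual
    # __init__ parameters, skipping init=False fields.
    params = inspect.signature(type(obj)).parameters
    return {
        f.name: getattr(obj, f.name)
        for f in dataclasses.fields(obj)
        if f.name in params
    }

cfg = Config(lr=0.01)
hparams = capture_init_args(cfg)  # {"lr": 0.01}; steps_done is skipped
```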
## Lightning Fabric

### Changed

### Fixed
Full commit list: 2.5.2 -> 2.5.3
## Contributors
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above, in no particular order:
@baskrahmer, @bhimrazy, @deependujha, @fnhirwa, @GdoongMathew, @jonathanking, @relativityhd, @rittik9, @SkafteNicki, @sudiptob2, @vsey, @YgLK
Thank you ❤️ and we hope you'll keep them coming!