Description
It would be nice to have integration with basic torch and lightning tuning workflows, to allow AutoML-style tuning.
This would require a BaseExperiment descendant class, TorchExperiment, which takes a DataLoader, a LightningModule, and a Trainer, performs a training run, and returns the validation loss/score. See also the extension template: https://github.com/SimonBlanke/Hyperactive/blob/main/extension_templates/experiments.py
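A minimal sketch of what such a class could look like. The `BaseExperiment` interface, the `_score` hook, and the constructor signature here are assumptions based on the extension template, not the actual Hyperactive API; plain-Python stand-ins are used in place of real torch/lightning objects so the snippet is self-contained:

```python
class BaseExperiment:
    """Stand-in for Hyperactive's BaseExperiment (assumed interface)."""

    def score(self, params):
        return self._score(params)


class TorchExperiment(BaseExperiment):
    """Sketch: trains a LightningModule with the trial's hyperparameters
    and returns the validation loss as the score.

    module_cls: a LightningModule subclass, re-instantiated per trial
    loader: a DataLoader providing the training/validation data
    trainer_factory: a zero-argument callable returning a fresh Trainer
    """

    def __init__(self, module_cls, loader, trainer_factory):
        self.module_cls = module_cls
        self.loader = loader
        self.trainer_factory = trainer_factory

    def _score(self, params):
        # Re-initialize the model with this trial's parameters and use a
        # fresh Trainer, since lightning Trainers are not reusable across runs.
        module = self.module_cls(**params)
        trainer = self.trainer_factory()
        trainer.fit(module, self.loader)
        # Assumption: the module logs "val_loss" during fit, so it appears
        # in trainer.callback_metrics afterwards.
        return trainer.callback_metrics["val_loss"].item()
```

A fresh Trainer per trial avoids state leaking between runs; whether the score should be minimized or maximized would need to follow whatever convention BaseExperiment defines.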
Since tuning is not API-preserving in torch/lightning, I would suggest an additional function that produces the parameters for a tuned network, or the tuned network initialized with those parameters, e.g. tune_lightning(optimizer: BaseOptimizer, loader, module, trainer). Or is it clear enough to construct the optimizer and call solve?
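One possible shape for such a convenience function. The name `tune_lightning`, the way the experiment is attached to the optimizer, and the return value of `solve` are all assumptions for the sake of discussion; a small stand-in for the proposed TorchExperiment is included so the snippet runs on its own:

```python
class TorchExperiment:
    """Stand-in for the proposed experiment class: wraps one training run
    and scores a parameter dict by validation loss."""

    def __init__(self, module_cls, loader, trainer_factory):
        self.module_cls = module_cls
        self.loader = loader
        self.trainer_factory = trainer_factory

    def score(self, params):
        module = self.module_cls(**params)
        trainer = self.trainer_factory()
        trainer.fit(module, self.loader)
        return trainer.callback_metrics["val_loss"].item()


def tune_lightning(optimizer, loader, module_cls, trainer_factory):
    """Hypothetical helper: run the tuning loop, then return a module
    initialized (but not trained) with the best parameters found."""
    # Assumption: the optimizer accepts an experiment attribute and
    # solve() returns the best parameter dict.
    optimizer.experiment = TorchExperiment(module_cls, loader, trainer_factory)
    best_params = optimizer.solve()
    return module_cls(**best_params)
```

Returning an initialized-but-untrained module keeps the helper cheap; a variant could instead refit once more with the best parameters and return the trained module, or simply return `best_params` and leave construction to the caller.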
Comments on the design are appreciated.