@ZhenHuangLab

I found that e2tomoseg_convnet.py could be improved by using model.fit instead of the current per-batch training loop: training becomes roughly six times faster.
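For reference, here is a minimal sketch of the change. The data, layers, and mse loss below are toy 2D stand-ins for the real 3D network and custom loss that e2tomoseg_convnet.py builds; only the train_on_batch-loop-to-model.fit swap is the point.

import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the trainset and network
# (392 batches of 16, matching the log below).
x_train = np.random.rand(392 * 16, 64, 64, 1).astype("float32")
y_train = np.random.rand(392 * 16, 64, 64, 1).astype("float32")
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(40, 15, padding="same", activation="relu",
                           input_shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(1, 15, padding="same"),
])
# "mse" is a placeholder; the real script uses its own loss.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")

# Current approach: a Python loop calling train_on_batch once per batch.
# for it in range(niter):
#     for i in range(0, len(x_train), 16):
#         cost = model.train_on_batch(x_train[i:i+16], y_train[i:i+16])

# Proposed approach: let Keras drive the loop; it batches, shuffles, and
# runs the compiled train step without per-batch Python overhead.
model.fit(x_train, y_train, batch_size=16, epochs=100, shuffle=True)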

For example, when I ran the following command:

time e2tomoseg_convnet_test.py --trainset=particles/GCB_001_bin6_SIRT_preproc__good_2_trainset.hdf --nettag=convnet_iter100 --learnrate=0.0001 --niter=100 --ncopy=1 --batch=16 --nkernel=40,40,1 --ksize=15,15,15 --poolsz=2,1,1 --trainout --training --device=gpu

I got the following output:

...
iteration 97, cost -5.999
iteration 98, cost -6.013
iteration 99, cost -6.013
Writting network output of training set to neuralnets/trainout_nnet_save__convnet_iter100.hdf...
Saving the trained net to neuralnets/nnet_save__convnet_iter100.hdf...
Done
Total time: 2589.5 s

real	43m12.468s
user	34m6.747s
sys	12m38.237s

With model.fit, training is much faster, and I get almost the same results (loss, trained network, etc.):

Epoch 97/100
392/392 [==============================] - 4s 10ms/step - loss: -6.0035
Epoch 98/100
392/392 [==============================] - 4s 10ms/step - loss: -6.0049
Epoch 99/100
392/392 [==============================] - 4s 10ms/step - loss: -5.9938
Epoch 100/100
392/392 [==============================] - 4s 10ms/step - loss: -6.0119
Writting network output of training set to neuralnets/trainout_nnet_save__convnet_iter100.hdf...
Saving the trained net to neuralnets/nnet_save__convnet_iter100.hdf...
Done
Total time: 450.5 s

real	7m32.213s
user	5m22.877s
sys	1m22.681s

I can also pass a learning-rate scheduler through the callbacks parameter of model.fit() to further improve training; a sketch follows below. So I think it is better to use model.fit instead of a loop over model.train_on_batch.
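For instance, continuing the sketch above, a schedule could be attached like this (the halve-every-20-epochs factor is just an illustrative choice, not anything in the script):

import tensorflow as tf

# Illustrative schedule: halve the learning rate every 20 epochs.
def schedule(epoch, lr):
    return lr * 0.5 if epoch > 0 and epoch % 20 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)
model.fit(x_train, y_train, batch_size=16, epochs=100,
          callbacks=[lr_callback])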

