We proudly present our newest product, a well-defined extension for TensorFlow-Keras users!
It is not fully available yet; the missing features will be implemented in future releases.
Here is the current progress of this semi-finished product:
- optimizers:
  - Manually switched optimizers (`Adam2SGD` and `NAdam2NSGD`).
  - Automatically switched optimizer (`SWATS`).
  - Advanced adaptive optimizers (`Adabound`, `Nadabound` and `MNadam`, supporting amsgrad).
  - Wrapped default optimizers.
- layers:
  - Ghost layer (used to construct a trainable input layer; see the sketch after this overview).
  - Tied dense layer for the symmetric autoencoder.
  - Extended dropout and noise layers.
  - Extended activation layers.
  - Extended normalization layers.
  - Group convolutional layers.
  - Modern convolutional layers (support group convolution).
  - Modern transposed convolutional layers (support group convolution).
  - Tied (trivial) transposed convolutional layers for the symmetric autoencoder.
  - Residual layers (or blocks) and their transposed versions.
  - ResNeXt layers (or blocks) and their transposed versions.
  - Inception-v4 layers (or blocks) and their transposed versions.
  - InceptionRes-v2 layers (or blocks) and their transposed versions.
  - InceptionPlus layers (or blocks) and their transposed versions.
  - External interface for using generic Python functions.
  - Dropout method options for all available modern layers.
- data:
  - Basic h5py (HDF5) IO handles.
  - Basic SQLite IO handles.
  - Basic Bcolz IO handles.
  - Basic CSV IO handles.
  - Basic JSON IO handles.
  - Data parsing utilities.
- estimators:
  - VGG16
  - U-Net
  - ResNet
- functions:
  - (loss): Lovasz loss for IoU
  - (loss): Linear interpolated loss for IoU
  - (metrics): signal-to-noise ratio (SNR and PSNR)
  - (metrics): Pearson correlation coefficient
  - (metrics): IoU / Jaccard index
- utilities:
  - Revised save and load model functions.
  - Beholder plug-in callback.
  - Revised ModelCheckpoint callback.
  - LossWeightsScheduler callback (for changing the loss weights during training).
  - OptimizerSwitcher callback (for using manually switched optimizers).
  - ModelWeightsReducer callback (parameter decay strategy, including L1 decay and L2 decay).
  - Extended data visualization tools.
  - Tensorboard log file parser.
Check the `demos` branch to learn more details.
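As a taste of how these layers fit together, here is a minimal sketch of the tied-dense idea behind the symmetric autoencoder mentioned in the layer list above. The `TiedDense` class below is a hypothetical stand-in written from scratch; the actual `DenseTied` layer may take different constructor arguments.

```python
import tensorflow as tf

class TiedDense(tf.keras.layers.Layer):
    """Hypothetical stand-in for the tied dense idea: the decoder reuses
    the transposed kernel of a reference Dense encoder, so both halves
    of the autoencoder share a single weight matrix."""
    def __init__(self, reference, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.reference = reference
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        # Only the bias is a new trainable weight; the kernel is tied.
        units = self.reference.kernel.shape[0]
        self.bias = self.add_weight('bias', shape=(units,),
                                    initializer='zeros')
        super().build(input_shape)

    def call(self, inputs):
        # y = x W^T + b, where W is the encoder kernel.
        outputs = tf.matmul(inputs, self.reference.kernel, transpose_b=True)
        return self.activation(outputs + self.bias)

encoder = tf.keras.layers.Dense(32, activation='relu')
x = tf.keras.Input(shape=(128,))
code = encoder(x)                                  # builds the encoder kernel
recon = TiedDense(encoder, activation='sigmoid')(code)
autoencoder = tf.keras.Model(x, recon)
```

Tying the decoder kernel to the transposed encoder kernel halves the parameter count and keeps the two halves symmetric by construction.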
- Finish `H5Converter` in `.data`.
- Fix some bugs and add features in `.utilities.draw`.
- Add `webfiles.zip` for `.utilities.tboard`.
- Fix a small bug in `.utilities`.
- Enhance `save_model`/`load_model` to support storing/recovering customized loss/metric classes.
- Finish the submodule `.utilities.draw`, which provides extended visualizations.
- Finish the submodule `.utilities.tboard`, which provides extended tensorboard interfaces.
- Fix some bugs.
- Let `.save_model` support compression.
- Revise the optional arguments for `RestrictSub` in `.layers`.
- Fix a bug for `H5GCombiner` in `.data` when adding more parsers.
- Finish `H5VGParser` in `.data`; this parser is used for splitting a validation set from a dataset.
- Finish `ExpandDims` in `.layers`; it is a layer version of `tf.expand_dims`.
- Enable `ModelCheckpoint` in `.utilities.callbacks` to support the option for not saving the optimizer.
- Fix a bug for serializing `Ghost` in `.layers`.
- Finish activation layers in `.layers`, including `Slice`, `Restrict` and `RestrictSub`.
- Let `.save_model`/`.load_model` support storing/recovering variable loss weights.
- Finish `LossWeightsScheduler` in `.utilities.callbacks`.
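The exact `LossWeightsScheduler` signature is not shown in these notes, so the following is only a sketch of the underlying technique: compile the model with backend-variable loss weights and update them from a callback. The two-output setup, the names `w_recon`/`w_cls` and the warm-up schedule are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

# Backend-variable loss weights; the names and the two-output setup are
# illustrative assumptions, not the real LossWeightsScheduler interface.
w_recon = K.variable(1.0)
w_cls = K.variable(0.0)

class SimpleLossWeightsScheduler(tf.keras.callbacks.Callback):
    """Update the variable loss weights at the start of every epoch."""
    def __init__(self, schedule):
        super().__init__()
        self.schedule = schedule  # maps epoch -> (recon weight, cls weight)

    def on_epoch_begin(self, epoch, logs=None):
        recon, cls = self.schedule(epoch)
        K.set_value(w_recon, recon)
        K.set_value(w_cls, cls)

# Sketched usage: compile with the variables, then let the callback ramp
# up the classification term over the first 10 epochs.
# model.compile(optimizer='adam',
#               loss={'recon': 'mse', 'cls': 'binary_crossentropy'},
#               loss_weights={'recon': w_recon, 'cls': w_cls})
# model.fit(..., callbacks=[SimpleLossWeightsScheduler(
#     lambda epoch: (1.0, min(1.0, epoch / 10.0)))])
```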
- Enable the `H5SupSaver` in `.data` to add more data to an existing file.
- Enable the `H5SupSaver` in `.data` to expand a dataset if data is dumped in series.
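`H5SupSaver`'s own interface is not documented here, so below is a minimal h5py sketch of the underlying technique: a resizable dataset that grows every time a chunk is dumped in series, opened in append mode so that later runs can add more data to an existing file. All file and dataset names are placeholders.

```python
import h5py
import numpy as np

def dump_in_series(filename, name, chunks):
    """Append chunks (an iterator of arrays) to a resizable HDF5 dataset,
    creating it on first use; 'a' mode lets later runs add more data."""
    with h5py.File(filename, 'a') as f:
        if name in f:
            dset = f[name]
        else:
            first = next(chunks)
            dset = f.create_dataset(name, data=first,
                                    maxshape=(None,) + first.shape[1:],
                                    chunks=True)
        for chunk in chunks:
            n = dset.shape[0]
            dset.resize(n + chunk.shape[0], axis=0)  # grow along axis 0
            dset[n:] = chunk

dump_in_series('train.h5', 'X', (np.random.rand(16, 4) for _ in range(3)))
```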
- Finish `MNadam`, `Adabound` and `Nadabound` in `.optimizers`.
- Slightly change `.optimizers.mixture`.
- Change the quick interface in `.optimizers`.
- Finish the demo version of `SWATS` in `.optimizers`; it needs further tests.
- Fix a small bug in `.load_model`.
- Change the warning backend to the tensorflow version.
- Finish `ModelWeightsReducer` in `.utilities.callbacks`.
- Finish `Ghost` in `.layers`.
- Fix small bugs.
- Fix the bugs of the manually switched optimizers in `.optimizers`. Now they need to be used with a callback, or the phase needs to be switched by `switch()`.
- Add a plain momentum SGD optimizer to the fast interface in `.optimizers`.
- Finish `OptimizerSwitcher` in `.utilities.callbacks`. It is used to control the phase of the manually switched optimizers.
- Improve the efficiency of `Adam2SGD` and `NAdam2NSGD` in `.optimizers`.
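A hypothetical usage sketch of the two switching styles described above. Only the class names (`Adam2SGD`, `OptimizerSwitcher`), the module paths and the `switch()` call come from these notes; the package import name `mdnt`, the constructor arguments and the training setup are assumptions.

```python
# Hypothetical usage sketch: `model`, `x_train` and `y_train` are assumed
# to exist already, and every argument shown here is a guess.
import mdnt

model.compile(optimizer=mdnt.optimizers.Adam2SGD(amsgrad=True),
              loss='categorical_crossentropy')

# Style 1: let the callback decide when to leave the adaptive phase.
model.fit(x_train, y_train, epochs=100,
          callbacks=[mdnt.utilities.callbacks.OptimizerSwitcher(...)])

# Style 2: switch the phase manually between two fit() calls.
model.fit(x_train, y_train, epochs=50)
model.optimizer.switch()  # hand over from the adaptive phase to SGD
model.fit(x_train, y_train, epochs=50)
```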
- Finish the manually switched optimizers in `.optimizers`: `Adam2SGD` and `NAdam2NSGD`. Both of them support the amsgrad mode.
- Adjust the fast interface `.optimizers.optimizer`. Now it supports 2 more tensorflow-based optimizers, and the default momentum of the Nesterov SGD optimizer is changed to 0.9.
- Fix some bugs in `.layers.conv` and `.layers.unit`.
- Remove the normalization layer from all projection branches in `.layers.residual` and `.layers.inception`.
- Support the totally new `save_model` and `load_model` APIs in `.utilities`.
- Finish `ModelCheckpoint` in `.utilities.callbacks`.
- Finish `losses.linear_jaccard_index`, `losses.lovasz_jaccard_loss`, `metrics.signal_to_noise`, `metrics.correlation` and `metrics.jaccard_index` in `.functions` (they may require tests in the future).
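For reference, here is a minimal sketch of the linear (soft) Jaccard index that these functions revolve around, assuming predictions are probabilities in [0, 1]; the actual `.functions` implementations (shape handling, weighting, the Lovasz extension) may differ.

```python
import tensorflow as tf

def linear_jaccard_index(y_true, y_pred, eps=1e-7):
    """Soft IoU: the logical AND/OR of hard masks is linearly
    interpolated by elementwise products/sums of probabilities."""
    axes = tf.range(1, tf.rank(y_pred))      # reduce all but the batch axis
    inter = tf.reduce_sum(y_true * y_pred, axis=axes)
    union = tf.reduce_sum(y_true + y_pred, axis=axes) - inter
    return tf.reduce_mean((inter + eps) / (union + eps))

def linear_jaccard_loss(y_true, y_pred):
    # IoU is a score in [0, 1], so the loss is its complement.
    return 1.0 - linear_jaccard_index(y_true, y_pred)
```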
- Add dropout options to all advanced blocks (including residual, ResNeXt, inception, incept-res and incept-plus).
- Strengthen compatibility.
- Fix minor bugs for spatial dropout in 0.50-b.
- Thanks to GOD! `.layers` has been finished, although it may require modification in the future.
- Fix a bug in the channel_first mode implementation for `AConv` in `.layers`.
- Finish `InstanceGaussianNoise` in `.layers`.
- Prepare the test for adding dropout to residual layers in `.layers`.
- Finish `Conv1DTied`, `Conv2DTied` and `Conv3DTied` in `.layers`.
- Switch back to the 0.48 version of the `.layers.DenseTied` APIs, because tests show that the modification in 0.48-b causes bugs.
- Test replacing the `.layers.DenseTied` APIs with an implementation like `tf.keras.layers.Wrapper`.
- Finish `Inceptplus1D`, `Inceptplus2D`, `Inceptplus3D`, `Inceptplus1DTranspose`, `Inceptplus2DTranspose` and `Inceptplus3DTranspose` in `.layers`.
- Minor changes to docstrings and default settings in `.layers.inception`.
- Enable `ResNeXt` to estimate the latent group and local filter number.
- Make a failed attempt at implementing quick group convolution; testing results show that using `tf.nn.depthwise_conv2d` to replace multiple `convND` ops would make the computation even slower.
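For context, this is a minimal sketch of what group convolution reduces to when built from multiple `convND` ops, the approach that the note above compares against `tf.nn.depthwise_conv2d`. The helper `group_conv2d` is illustrative, not the library's API.

```python
import tensorflow as tf

def group_conv2d(inputs, filters, kernel_size, groups):
    """Group convolution as multiple Conv2D ops: split the channels into
    `groups` slices, convolve each slice independently, concatenate."""
    splits = tf.split(inputs, groups, axis=-1)
    outputs = [tf.keras.layers.Conv2D(filters // groups, kernel_size,
                                      padding='same')(s) for s in splits]
    return tf.concat(outputs, axis=-1)

x = tf.keras.Input(shape=(32, 32, 64))
y = group_conv2d(x, filters=64, kernel_size=3, groups=8)  # -> (32, 32, 64)
```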
- Enable modern convolutional layers to work with group convolution.
- Reduce the memory consumption of network construction when using ResNeXt layers, to avoid out-of-memory (OOM) problems.
- Fix a minor bug for group convolution.
- Finish `GroupConv1D`, `GroupConv2D` and `GroupConv3D` in `.layers`.
- Fix the bugs in channel detection for residual and inception layers.
- Finish `Resnext1D`, `Resnext2D`, `Resnext3D`, `Resnext1DTranspose`, `Resnext2DTranspose` and `Resnext3DTranspose` in `.layers`.
- Fix the repeated-bias problem in inception-residual layers.
- Finish `Inceptres1D`, `Inceptres2D`, `Inceptres3D`, `Inceptres1DTranspose`, `Inceptres2DTranspose` and `Inceptres3DTranspose` in `.layers`.
- Fix some bugs and revise docstrings for `.layers.residual` and `.layers.inception`.
- Finish `Inception1D`, `Inception2D`, `Inception3D`, `Inception1DTranspose`, `Inception2DTranspose` and `Inception3DTranspose` in `.layers`.
- Finish `Residual1D`, `Residual2D`, `Residual3D`, `Residual1DTranspose`, `Residual2DTranspose` and `Residual3DTranspose` in `.layers`.
- Fix the padding bug for transposed dilated convolutional layers.
- Add a new option, `output_mshape`, to help transposed convolutional layers control the desired output shape.
- Finish `PyExternal` in `.layers`.
- Finish `H5GCombiner` in `.data`.
- Use `keras.Sequence()` to redefine `H5GParser` and `H5HGParser`.
- Add a compatibility check.
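A minimal sketch of what a `keras.Sequence()`-based parser looks like, assuming a flat layout with one input and one label dataset; the real `H5GParser`/`H5HGParser` are more general.

```python
import h5py
import numpy as np
import tensorflow as tf

class SimpleH5Sequence(tf.keras.utils.Sequence):
    """Each __getitem__ reads one batch from disk, so the whole dataset
    never has to sit in memory."""
    def __init__(self, filename, x_key, y_key, batch_size=32):
        self.f = h5py.File(filename, 'r')
        self.x, self.y = self.f[x_key], self.f[y_key]
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(self.x.shape[0] / self.batch_size))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[batch], self.y[batch]

# model.fit(SimpleH5Sequence('train.h5', 'X', 'Y'), epochs=10)
```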
- Adjust the `.data.h5py` module to make it more generalized.
- Finish `H5HGParser`, `H5SupSaver` and `H5GParser` in `.data`.
- Finish `DenseTied`, `InstanceNormalization`, `GroupNormalization`, `AConv1D`, `AConv2D`, `AConv3D`, `AConv1DTranspose`, `AConv2DTranspose` and `AConv3DTranspose` in `.layers`.
- Create this project.