neural_de.external.maxim_tf.maxim.blocks package
Submodules
neural_de.external.maxim_tf.maxim.blocks.attentions module
Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py
- neural_de.external.maxim_tf.maxim.blocks.attentions.CALayer(num_channels, reduction=4, use_bias=True, name='channel_attention')
Squeeze-and-excitation block for channel attention.
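To make the squeeze-and-excitation idea concrete, here is a minimal pure-Python sketch of the computation, assuming the usual SE semantics (global average pool, bottleneck MLP, sigmoid gate). Plain lists stand in for tensors, and `ca_layer`, `w1`, and `w2` are illustrative names, not the library's API:

```python
import math

def ca_layer(x, w1, w2):
    """Sketch of squeeze-and-excitation channel attention (assumed semantics).

    x: list of per-channel 2-D feature maps, shape [C][H][W].
    w1: [C // reduction][C] weights of the squeeze MLP; w2: [C][C // reduction].
    """
    # Squeeze: global average pool each channel down to a single scalar.
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in x]
    # Excite: bottleneck MLP (ReLU, then sigmoid) yields one gate per channel.
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Scale: reweight every channel of the input by its gate.
    return [[[v * g for v in row] for row in ch] for ch, g in zip(x, gates)]
```

The real `CALayer` implements the same pattern with 1x1 convolutions on batched tensors; the `reduction` argument controls the bottleneck width of the excite MLP.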
- neural_de.external.maxim_tf.maxim.blocks.attentions.RCAB(num_channels, reduction=4, lrelu_slope=0.2, use_bias=True, name='residual_ca')
Residual channel attention block. Contains LayerNorm, Conv, leaky ReLU, Conv, and an SE layer.
- neural_de.external.maxim_tf.maxim.blocks.attentions.RDCAB(num_channels, reduction=16, use_bias=True, dropout_rate=0.0, name='rdcab')
Residual dense channel attention block. Used in the bottleneck blocks.
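The data flow of such a residual channel-attention block can be sketched as follows. This is a shape-agnostic outline under the stated layer order (LN, Conv, leaky ReLU, Conv, SE layer, residual add); each callable stands in for the real Keras layer, and `rcab` is an illustrative name:

```python
def rcab(x, layer_norm, conv1, conv2, se_layer, lrelu_slope=0.2):
    """Sketch of the residual channel-attention block's data flow (assumed).

    Every argument after `x` is a callable standing in for the real layer;
    `x` is a flat list of floats for illustration.
    """
    def lrelu(t):
        # Leaky ReLU: negative values are scaled by lrelu_slope, not zeroed.
        return [v if v >= 0 else lrelu_slope * v for v in t]
    y = conv2(lrelu(conv1(layer_norm(x))))  # LN -> Conv -> leaky ReLU -> Conv
    y = se_layer(y)                         # channel-attention reweighting
    return [a + b for a, b in zip(x, y)]    # residual connection
```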
- neural_de.external.maxim_tf.maxim.blocks.attentions.SAM(num_channels, output_channels=3, use_bias=True, name='sam')
Supervised attention module for multi-stage training.
Introduced by MPRNet [CVPR2021]: https://github.com/swz30/MPRNet
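Following the MPRNet design the linked repository describes, the module produces a restored image for per-stage supervision and uses it to gate the features passed to the next stage. A heavily simplified sketch (flat lists instead of tensors; `sam`, `conv1`..`conv3` are illustrative stand-ins for the real convolutions):

```python
import math

def sam(features, image, conv1, conv2, conv3):
    """Sketch of a supervised attention module's flow, after MPRNet (assumed)."""
    # Restore: predict a residual image and add it to the stage input image.
    restored = [a + b for a, b in zip(conv2(features), image)]
    # Attend: the restored image drives a sigmoid attention map.
    gates = [1.0 / (1.0 + math.exp(-v)) for v in conv3(restored)]
    attended = [f * g for f, g in zip(conv1(features), gates)]
    # Residual: attended features are added back before the next stage.
    return [f + a for f, a in zip(features, attended)], restored
```

Returning both outputs lets a multi-stage model apply a supervision loss to `restored` at every stage while forwarding the gated features.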
neural_de.external.maxim_tf.maxim.blocks.block_gating module
Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py
neural_de.external.maxim_tf.maxim.blocks.bottleneck module
Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py
- neural_de.external.maxim_tf.maxim.blocks.bottleneck.BottleneckBlock(features, block_size, grid_size, num_groups=1, block_gmlp_factor=2, grid_gmlp_factor=2, input_proj_factor=2, channels_reduction=4, dropout_rate=0.0, use_bias=True, name='bottleneck_block')
The bottleneck block, consisting of multi-axis gMLP blocks and an RDCAB.
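The composition can be outlined as a per-group sequence of the multi-axis gMLP blocks followed by an RDCAB, with a residual connection. This is an assumed structure inferred from the signature (`num_groups`) and description, with callables standing in for the real sub-blocks:

```python
def bottleneck_block(x, grid_gmlp, block_gmlp, rdcab, num_groups=1):
    """Assumed sketch of the bottleneck's composition: each group applies the
    grid and block gMLP layers, then an RDCAB, with a residual add."""
    for _ in range(num_groups):
        y = block_gmlp(grid_gmlp(x))            # multi-axis gMLP mixing
        x = [a + b for a, b in zip(x, rdcab(y))]  # channel attention + residual
    return x
```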
neural_de.external.maxim_tf.maxim.blocks.grid_gating module
Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py
neural_de.external.maxim_tf.maxim.blocks.misc_gating module
Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py
- neural_de.external.maxim_tf.maxim.blocks.misc_gating.CrossGatingBlock(features, block_size, grid_size, dropout_rate=0.0, input_proj_factor=2, upsample_y=True, use_bias=True, name='cross_gating')
Cross-gating MLP block.
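The essence of cross gating is that two feature streams modulate each other: each stream is multiplied by a gate computed from the other. The sketch below assumes sigmoid gating for illustration (the actual block derives its gating weights through its MLP projections); `cross_gating`, `proj_x`, and `proj_y` are illustrative names:

```python
import math

def cross_gating(x, y, proj_x, proj_y):
    """Assumed sketch of cross gating between two feature streams: each
    stream is scaled by a gate computed from the *other* stream."""
    gate_from_x = [1.0 / (1.0 + math.exp(-v)) for v in proj_x(x)]
    gate_from_y = [1.0 / (1.0 + math.exp(-v)) for v in proj_y(y)]
    x_out = [a * g for a, g in zip(x, gate_from_y)]  # x gated by y
    y_out = [a * g for a, g in zip(y, gate_from_x)]  # y gated by x
    return x_out, y_out
```

In MAXIM this lets, e.g., skip connections and decoder features exchange information; the `upsample_y` argument suggests the second stream is upsampled to match the first before gating.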
neural_de.external.maxim_tf.maxim.blocks.others module
Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py
neural_de.external.maxim_tf.maxim.blocks.unet module
Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py
- neural_de.external.maxim_tf.maxim.blocks.unet.UNetDecoderBlock(num_channels, block_size, grid_size, num_groups=1, lrelu_slope=0.2, block_gmlp_factor=2, grid_gmlp_factor=2, input_proj_factor=2, channels_reduction=4, dropout_rate=0.0, downsample=True, use_global_mlp=True, use_bias=True, name='unet_decoder')
Decoder block in MAXIM.
- neural_de.external.maxim_tf.maxim.blocks.unet.UNetEncoderBlock(num_channels, block_size, grid_size, num_groups=1, lrelu_slope=0.2, block_gmlp_factor=2, grid_gmlp_factor=2, input_proj_factor=2, channels_reduction=4, dropout_rate=0.0, downsample=True, use_global_mlp=True, use_bias=True, use_cross_gating=False, name='unet_encoder')
Encoder block in MAXIM.