neural_de.external.maxim_tf.maxim.blocks package

Submodules

neural_de.external.maxim_tf.maxim.blocks.attentions module

Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py

neural_de.external.maxim_tf.maxim.blocks.attentions.CALayer(num_channels, reduction=4, use_bias=True, name='channel_attention')[source]

Squeeze-and-excitation block for channel attention.

ref: https://arxiv.org/abs/1709.01507
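Example (a minimal usage sketch; assumes the maxim_tf convention that each block is a function returning an apply(x) callable over NHWC tensors):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.attentions import CALayer

    x = tf.random.normal((1, 32, 32, 64))        # NHWC feature map
    ca = CALayer(num_channels=64, reduction=4)   # squeeze: 64 -> 16 -> 64
    y = ca(x)                                    # channels reweighted, shape kept
    assert y.shape == x.shape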

neural_de.external.maxim_tf.maxim.blocks.attentions.RCAB(num_channels, reduction=4, lrelu_slope=0.2, use_bias=True, name='residual_ca')[source]

Residual channel attention block. Contains LayerNorm, Conv, LeakyReLU, Conv, and SELayer.

neural_de.external.maxim_tf.maxim.blocks.attentions.RDCAB(num_channels, reduction=16, use_bias=True, dropout_rate=0.0, name='rdcab')[source]

Residual dense channel attention block. Used in bottleneck blocks.

neural_de.external.maxim_tf.maxim.blocks.attentions.SAM(num_channels, output_channels=3, use_bias=True, name='sam')[source]

Supervised attention module for multi-stage training.

Introduced by MPRNet [CVPR 2021]: https://github.com/swz30/MPRNet
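Example for the remaining attention blocks (same closure assumption; SAM's two-input, two-output call convention, taking the stage features plus the stage input image and returning propagated features plus a restored image, is an assumption carried over from the MPRNet design):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.attentions import RCAB, RDCAB, SAM

    feats = tf.random.normal((1, 64, 64, 32))    # stage features (NHWC)
    image = tf.random.normal((1, 64, 64, 3))     # stage input image

    feats = RCAB(num_channels=32)(feats)         # residual channel attention
    feats = RDCAB(num_channels=32)(feats)        # dense variant (bottlenecks)
    # Assumed signature: apply(x, x_image) -> (features, restored_image)
    sam_feats, restored = SAM(num_channels=32, output_channels=3)(feats, image)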

neural_de.external.maxim_tf.maxim.blocks.block_gating module

Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py

neural_de.external.maxim_tf.maxim.blocks.block_gating.BlockGatingUnit(use_bias=True, name='block_gating_unit')[source]

A SpatialGatingUnit as defined in the gMLP paper.

The 'spatial' dimension is defined as the second to last. To apply the unit along other dimensions, transpose (swapaxes) them into that position first.

neural_de.external.maxim_tf.maxim.blocks.block_gating.BlockGmlpLayer(block_size, use_bias=True, factor=2, dropout_rate=0.0, name='block_gmlp')[source]

Block gMLP layer that performs local mixing of tokens.
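Example (a sketch; assumptions: the layer returns an apply(x) callable, block_size is an (h, w) tuple, and the spatial dims must be divisible by it):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.block_gating import BlockGmlpLayer

    x = tf.random.normal((1, 16, 16, 32))
    # Tokens are mixed only within non-overlapping 4x4 windows.
    y = BlockGmlpLayer(block_size=(4, 4), factor=2)(x)
    assert y.shape == x.shape                    # residual mixer keeps the shape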

neural_de.external.maxim_tf.maxim.blocks.bottleneck module

Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py

neural_de.external.maxim_tf.maxim.blocks.bottleneck.BottleneckBlock(features, block_size, grid_size, num_groups=1, block_gmlp_factor=2, grid_gmlp_factor=2, input_proj_factor=2, channels_reduction=4, dropout_rate=0.0, use_bias=True, name='bottleneck_block')[source]

The bottleneck block, consisting of a multi-axis gMLP block and an RDCAB.
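Example (same closure assumption; features is taken to be the output channel width and num_groups the number of stacked gMLP + RDCAB units):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.bottleneck import BottleneckBlock

    x = tf.random.normal((1, 16, 16, 64))        # lowest-resolution features
    y = BottleneckBlock(features=64, block_size=(8, 8), grid_size=(8, 8),
                        num_groups=2)(x)
    assert y.shape == x.shape                    # holds when features == input channels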

neural_de.external.maxim_tf.maxim.blocks.grid_gating module

Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py

neural_de.external.maxim_tf.maxim.blocks.grid_gating.GridGatingUnit(use_bias=True, name='grid_gating_unit')[source]

A SpatialGatingUnit as defined in the gMLP paper.

The 'spatial' dimension is defined as the second to last. To apply the unit along other dimensions, transpose (swapaxes) them into that position first.

neural_de.external.maxim_tf.maxim.blocks.grid_gating.GridGmlpLayer(grid_size, use_bias=True, factor=2, dropout_rate=0.0, name='grid_gmlp')[source]

Grid gMLP layer that performs global mixing of tokens.
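Example, the global counterpart of BlockGmlpLayer (same assumptions: apply(x) closure, spatial dims divisible by grid_size):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.grid_gating import GridGmlpLayer

    x = tf.random.normal((1, 16, 16, 32))
    # A 4x4 grid mixes tokens that are spread evenly across the whole map.
    y = GridGmlpLayer(grid_size=(4, 4), factor=2)(x)
    assert y.shape == x.shape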

neural_de.external.maxim_tf.maxim.blocks.misc_gating module

Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py

neural_de.external.maxim_tf.maxim.blocks.misc_gating.CrossGatingBlock(features, block_size, grid_size, dropout_rate=0.0, input_proj_factor=2, upsample_y=True, use_bias=True, name='cross_gating')[source]

Cross-gating MLP block.
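Example (a sketch; the two-stream call convention, apply(x, y) returning both gated streams, and the role of upsample_y, transposed-conv upsampling of y before gating, are assumptions from the MAXIM design):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.misc_gating import CrossGatingBlock

    x = tf.random.normal((1, 16, 16, 32))        # e.g. encoder features
    y = tf.random.normal((1, 16, 16, 32))        # e.g. features from another stage
    # upsample_y=False keeps both streams at the same resolution.
    x_gated, y_gated = CrossGatingBlock(features=32, block_size=(8, 8),
                                        grid_size=(8, 8), upsample_y=False)(x, y)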

neural_de.external.maxim_tf.maxim.blocks.misc_gating.GetSpatialGatingWeights(features, block_size, grid_size, input_proj_factor=2, dropout_rate=0.0, use_bias=True, name='spatial_gating')[source]

Get gating weights for cross-gating MLP block.

neural_de.external.maxim_tf.maxim.blocks.misc_gating.ResidualSplitHeadMultiAxisGmlpLayer(block_size, grid_size, block_gmlp_factor=2, grid_gmlp_factor=2, input_proj_factor=2, use_bias=True, dropout_rate=0.0, name='residual_split_head_maxim')[source]

The multi-axis gated MLP block.
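Example of the core MAXIM mixer: channels are split between a grid-gMLP head (global mixing) and a block-gMLP head (local mixing), then merged with a residual connection. Same closure assumption:

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.misc_gating import (
        ResidualSplitHeadMultiAxisGmlpLayer,
    )

    x = tf.random.normal((1, 16, 16, 32))
    y = ResidualSplitHeadMultiAxisGmlpLayer(block_size=(4, 4),
                                            grid_size=(4, 4))(x)
    assert y.shape == x.shape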

neural_de.external.maxim_tf.maxim.blocks.others module

Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py

neural_de.external.maxim_tf.maxim.blocks.others.MlpBlock(mlp_dim, dropout_rate=0.0, use_bias=True, name='mlp_block')[source]

A 1-hidden-layer MLP block, applied over the last dimension.
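Example (a sketch; assumes the hidden width is mlp_dim and the output is projected back to the input channel count):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.others import MlpBlock

    x = tf.random.normal((1, 16, 16, 32))
    y = MlpBlock(mlp_dim=64)(x)                  # 32 -> 64 hidden -> 32
    assert y.shape == x.shape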

neural_de.external.maxim_tf.maxim.blocks.others.UpSampleRatio(num_channels, ratio, use_bias=True, name='upsample')[source]

Upsample features given a ratio > 0.
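Example (a sketch; that the block resizes the spatial dims by ratio and projects the channels to num_channels with a 1x1 convolution is an assumption about its internals):

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.others import UpSampleRatio

    x = tf.random.normal((1, 16, 16, 32))
    y = UpSampleRatio(num_channels=16, ratio=2)(x)   # expected: (1, 32, 32, 16)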

neural_de.external.maxim_tf.maxim.blocks.unet module

Blocks based on https://github.com/google-research/maxim/blob/main/maxim/models/maxim.py

neural_de.external.maxim_tf.maxim.blocks.unet.UNetDecoderBlock(num_channels, block_size, grid_size, num_groups=1, lrelu_slope=0.2, block_gmlp_factor=2, grid_gmlp_factor=2, input_proj_factor=2, channels_reduction=4, dropout_rate=0.0, downsample=True, use_global_mlp=True, use_bias=True, name='unet_decoder')[source]

Decoder block in MAXIM.

neural_de.external.maxim_tf.maxim.blocks.unet.UNetEncoderBlock(num_channels, block_size, grid_size, num_groups=1, lrelu_slope=0.2, block_gmlp_factor=2, grid_gmlp_factor=2, input_proj_factor=2, channels_reduction=4, dropout_rate=0.0, downsample=True, use_global_mlp=True, use_bias=True, use_cross_gating=False, name='unet_encoder')[source]

Encoder block in MAXIM.
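Example wiring the two blocks together; the return conventions (the encoder returning the downsampled tensor plus the pre-downsampling skip when downsample=True, and the decoder upsampling its input and fusing the skip passed as the second argument) are assumptions from the MAXIM U-Net design:

    import tensorflow as tf
    from neural_de.external.maxim_tf.maxim.blocks.unet import (
        UNetDecoderBlock, UNetEncoderBlock,
    )

    x = tf.random.normal((1, 32, 32, 32))
    # Assumed: returns (downsampled features, skip at input resolution).
    down, skip = UNetEncoderBlock(num_channels=32, block_size=(8, 8),
                                  grid_size=(8, 8), downsample=True)(x)
    # Assumed: upsamples `down` back to the skip's resolution and fuses the skip.
    out = UNetDecoderBlock(num_channels=32, block_size=(8, 8),
                           grid_size=(8, 8))(down, skip)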

Module contents