neural_de.external.derain package
Submodules
neural_de.external.derain.blocks module
2D convolution class
- class neural_de.external.derain.blocks.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, activation_func=LeakyReLU(negative_slope=0.1, inplace=True), norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>, use_bias=False, padding_type='reflect')[source]
Bases: Module

2D convolution class.

Args:
    in_channels : int - Number of input channels
    out_channels : int - Number of output channels
    kernel_size : int - Size of kernel
    stride : int - Stride of convolution
    activation_func : func - Activation function after convolution
    norm_layer : functools.partial - Normalization layer
    use_bias : bool - If set, then use bias
    padding_type : str - The name of padding layer: reflect | replicate | zero
- forward(x)[source]
Defines the computation performed at every call.

Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
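Example (a minimal usage sketch; the tensor shapes, and the assumption that the default stride of 1 with reflect padding preserves spatial size, are illustrative and not taken from the source):

    import torch
    from neural_de.external.derain.blocks import Conv2d

    # Map a 3-channel RGB batch to 16 feature channels using the defaults
    # (3x3 kernel, stride 1, LeakyReLU activation, BatchNorm2d).
    conv = Conv2d(in_channels=3, out_channels=16)
    x = torch.randn(1, 3, 64, 64)
    y = conv(x)  # assumed shape: (1, 16, 64, 64)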
- class neural_de.external.derain.blocks.DecoderBlock(in_channels, skip_channels, out_channels, activation_func=LeakyReLU(negative_slope=0.1, inplace=True), norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>, use_bias=False, padding_type='reflect', upsample_mode='transpose')[source]
Bases: Module

Decoder block with skip connections.

Args:
    in_channels : int - number of input channels
    skip_channels : int - number of skip connection channels
    out_channels : int - number of output channels
    activation_func : func - activation function after convolution
    norm_layer : functools.partial - normalization layer
    use_bias : bool - if set, then use bias
    padding_type : str - the name of padding layer: reflect | replicate | zero
    upsample_mode : str - the mode for interpolation: transpose | bilinear | nearest
- forward(x, skip=None)[source]
Defines the computation performed at every call.

Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
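Example (a minimal sketch; the shapes, and the assumption that the default transpose mode upsamples by a factor of 2 so the skip tensor is twice the input resolution, are illustrative):

    import torch
    from neural_de.external.derain.blocks import DecoderBlock

    # Fuse low-resolution decoder features with a higher-resolution
    # encoder skip connection.
    block = DecoderBlock(in_channels=64, skip_channels=32, out_channels=32)
    x = torch.randn(1, 64, 16, 16)     # decoder features
    skip = torch.randn(1, 32, 32, 32)  # encoder skip features
    y = block(x, skip=skip)            # assumed shape: (1, 32, 32, 32)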
- class neural_de.external.derain.blocks.DeformableConv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)[source]
Bases: Module

2D deformable convolution class.

Args:
    in_channels : int - number of input channels
    out_channels : int - number of output channels
    kernel_size : int - size of kernel
    stride : int - stride of convolution
    padding : int - padding
    bias : bool - if set, then use bias
- forward(x)[source]
Defines the computation performed at every call.

Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
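Example (a minimal sketch; it assumes the layer behaves as a drop-in replacement for a regular convolution, with the sampling offsets predicted internally from the input):

    import torch
    from neural_de.external.derain.blocks import DeformableConv2d

    # 3x3 deformable convolution; with the default stride=1 and padding=1
    # the spatial size is assumed to be preserved.
    dconv = DeformableConv2d(in_channels=16, out_channels=16)
    x = torch.randn(1, 16, 32, 32)
    y = dconv(x)  # assumed shape: (1, 16, 32, 32)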
- class neural_de.external.derain.blocks.DeformableResnetBlock(dim, padding_type, norm_layer, use_dropout, use_bias, activation_func)[source]
Bases: Module

Define a ResNet block with deformable convolutions.
- build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias, activation_func)[source]
Construct a convolutional block.

Args:
    dim : int - number of channels
    padding_type : str - the name of padding layer: reflect | replicate | zero
    norm_layer : functools.partial - normalization layer
    use_dropout : bool - if set, then use dropout
    use_bias : bool - if set, then use bias
    activation_func : func - activation function

Returns: a conv block (with a conv layer, a normalization layer, and a non-linearity layer).
- training: bool
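Example (a minimal sketch; the constructor arguments mirror the defaults documented elsewhere in this module, and it is assumed the block exposes the usual forward(x) and preserves the input shape, as residual blocks do):

    import torch
    import torch.nn as nn
    from neural_de.external.derain.blocks import DeformableResnetBlock

    # Residual stage operating on 64-channel feature maps.
    block = DeformableResnetBlock(
        dim=64,
        padding_type="reflect",
        norm_layer=nn.BatchNorm2d,
        use_dropout=False,
        use_bias=False,
        activation_func=nn.LeakyReLU(0.1, inplace=True),
    )
    x = torch.randn(1, 64, 32, 32)
    y = block(x)  # assumed shape: (1, 64, 32, 32)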
- class neural_de.external.derain.blocks.ResNetModified(input_nc, output_nc, ngf=64, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>, activation_func=LeakyReLU(negative_slope=0.1, inplace=True), use_dropout=False, n_blocks=6, padding_type='reflect', upsample_mode='bilinear')[source]
Bases: Module

ResNet-based generator that consists of deformable ResNet blocks.
- training: bool
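Example (a minimal sketch; an RGB-to-RGB configuration is assumed, and the exact return signature of the forward pass should be checked against the source):

    import torch
    from neural_de.external.derain.blocks import ResNetModified

    # Image-to-image generator: 3 input channels, 3 output channels,
    # 64 base filters, 6 deformable ResNet blocks (the documented defaults).
    net = ResNetModified(input_nc=3, output_nc=3)
    x = torch.randn(1, 3, 128, 128)
    y = net(x)  # assumed to keep the input resolution: (1, 3, 128, 128)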
- class neural_de.external.derain.blocks.UpConv2d(in_channels, out_channels, kernel_size=3, activation_func=LeakyReLU(negative_slope=0.1, inplace=True), norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>, use_bias=False, padding_type='reflect', interpolate_mode='bilinear')[source]
Bases: Module

Up-convolution (upsample + convolution) block class.

Args:
    in_channels : int - number of input channels
    out_channels : int - number of output channels
    kernel_size : int - size of kernel (k x k)
    activation_func : func - activation function after convolution
    norm_layer : functools.partial - normalization layer
    use_bias : bool - if set, then use bias
    padding_type : str - the name of padding layer: reflect | replicate | zero
    interpolate_mode : str - the mode for interpolation: bilinear | nearest
- forward(x)[source]
Defines the computation performed at every call.

Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
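Example (a minimal sketch; the upsampling factor of 2 is an assumption, not stated in the signature):

    import torch
    from neural_de.external.derain.blocks import UpConv2d

    # Bilinear upsample followed by a 3x3 convolution (the defaults).
    up = UpConv2d(in_channels=32, out_channels=16)
    x = torch.randn(1, 32, 16, 16)
    y = up(x)  # assumed shape: (1, 16, 32, 32)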
Module contents
Code adapted from the GT-Rain implementation: https://github.com/UCLA-VMG/GT-RAIN.