robustML.advertrain package
Subpackages
- robustML.advertrain.dependencies package
- Subpackages
- Submodules
- robustML.advertrain.dependencies.autoattack module
APGDAttack
APGDAttack.model
APGDAttack.n_iter
APGDAttack.norm
APGDAttack.n_restarts
APGDAttack.eps
APGDAttack.seed
APGDAttack.loss
APGDAttack.eot_iter
APGDAttack.rho
APGDAttack.topk
APGDAttack.verbose
APGDAttack.device
APGDAttack.use_largereps
APGDAttack.is_tf_model
APGDAttack.init_hyperparam()
APGDAttack.check_oscillation()
APGDAttack.check_shape()
APGDAttack.normalize()
APGDAttack.lp_norm()
APGDAttack.dlr_loss()
APGDAttack.attack_single_run()
APGDAttack.perturb()
APGDAttack.decr_eps_pgd()
L0_norm()
L1_norm()
L1_projection()
L2_norm()
- robustML.advertrain.dependencies.dropblock module
- robustML.advertrain.dependencies.fire module
- robustML.advertrain.dependencies.trades module
- Module contents
- robustML.advertrain.training package
Submodules
robustML.advertrain.constants module
robustML.advertrain.metrics module
- class robustML.advertrain.metrics.Metrics[source]
Bases: object
Class to track performance metrics for binary classification tasks.
This class tracks true positives, true negatives, false positives, false negatives, and cumulative loss across batches. It calculates metrics like accuracy, precision, recall, and F1-score.
- display(title: str) → None [source]
Display the calculated metrics with a title.
- Parameters:
title (str) – The title for the metrics display.
- display_table(title: str) → None [source]
Display the metrics in a tabular format with a title.
- Parameters:
title (str) – The title for the table.
- get_metrics() → tuple [source]
Calculate and return key performance metrics.
- Returns:
Tuple containing accuracy, loss, precision, recall, and F1-score.
- Return type:
tuple
- load_metrics(checkpoint: str) → Dict[str, Any] [source]
Load metrics from a JSON file located at <checkpoint>/metrics.json.
This function reads the metrics.json file from the specified checkpoint directory and returns the contents as a dictionary.
- Parameters:
checkpoint (str) – The directory path from which the metrics.json file will be loaded.
- Returns:
A dictionary containing the loaded metrics.
- Return type:
Dict[str, Any]
- save_metrics(metrics: Dict[str, Any], checkpoint: str) → None [source]
Save metrics in a JSON file located at <checkpoint>/metrics.json.
This function serializes the provided metrics dictionary into JSON format and writes it to a file named metrics.json in the specified checkpoint directory.
- Parameters:
metrics (Dict[str, Any]) – A dictionary containing metric names as keys and their corresponding values.
checkpoint (str) – The directory path where the metrics.json file will be saved.
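A minimal usage sketch for Metrics, assuming a no-argument constructor and that per-batch statistics have already been accumulated (the accumulation method and the checkpoint path below are not part of this listing and are illustrative only):

    from robustML.advertrain.metrics import Metrics

    metrics = Metrics()  # assumed no-argument constructor
    # ... accumulate per-batch TP/TN/FP/FN counts and loss here; the
    # accumulation method is not documented in this section ...

    accuracy, loss, precision, recall, f1 = metrics.get_metrics()
    metrics.display_table("Validation")

    # Persist the values as <checkpoint>/metrics.json and reload them later
    # (the checkpoint path is illustrative).
    values = {"accuracy": accuracy, "loss": loss, "precision": precision,
              "recall": recall, "f1": f1}
    metrics.save_metrics(values, "checkpoints/epoch_10")
    restored = metrics.load_metrics("checkpoints/epoch_10")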
robustML.advertrain.models module
- class robustML.advertrain.models.ConvNet(device: device, p: float = 0.2)[source]
Bases: Module
Convolutional Neural Network with dropout layers, designed for processing images of size 64x128.
This network includes a normalization layer, several convolutional layers with ReLU activation and max pooling, followed by fully connected layers with dropout for regularization. It is suited for tasks like image classification where dropout can help reduce overfitting.
- conv1, conv2_1, conv3_1, conv4_1
Convolutional layers for feature extraction.
- Type:
nn.Conv2d
- pooling
Max pooling layer to reduce spatial dimensions.
- Type:
nn.MaxPool2d
- activation
Activation function.
- Type:
nn.ReLU
- dropout
Dropout layer for regularization.
- Type:
nn.Dropout
- linear1, linear2, linear3
Fully connected layers for classification.
- Type:
nn.Linear
- forward(x: Tensor) → Tensor [source]
Defines the forward pass of the ConvNet.
The input tensor is processed through normalization, convolutional layers, pooling layers, dropout layers, and fully connected layers sequentially to produce the output tensor.
- Parameters:
x (Tensor) – Input tensor of shape (batch_size, 3, 64, 128).
- Returns:
Output tensor after processing through the network.
- Return type:
Tensor
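A minimal forward-pass sketch for ConvNet, assuming a standard PyTorch workflow; the batch size and device choice are illustrative:

    import torch

    from robustML.advertrain.models import ConvNet

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = ConvNet(device=device, p=0.2).to(device)
    model.eval()

    x = torch.randn(8, 3, 64, 128, device=device)  # (batch_size, 3, 64, 128)
    with torch.no_grad():
        logits = model(x)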
- class robustML.advertrain.models.ConvNetDropblock(device: device, p: float = 0.2, drop_prob: float = 0.0, n_steps: int = 10)[source]
Bases: Module
Convolutional Neural Network with DropBlock regularization, designed for processing images of size 64x128.
This network includes a normalization layer, several convolutional layers with ReLU activation and max pooling, followed by fully connected layers with dropout and DropBlock for regularization. It is suited for tasks like image classification where advanced regularization techniques can help reduce overfitting.
- conv1, conv2_1, conv3_1, conv4_1
Convolutional layers for feature extraction.
- Type:
nn.Conv2d
- pooling
Max pooling layer to reduce spatial dimensions.
- Type:
nn.MaxPool2d
- activation
Activation function.
- Type:
nn.ReLU
- dropout
Dropout layer for regularization.
- Type:
nn.Dropout
- dropblock
DropBlock layer for structured dropout.
- Type:
- linear1, linear2, linear3
Fully connected layers for classification.
- Type:
nn.Linear
- forward(x: Tensor) → Tensor [source]
Defines the forward pass of the ConvNetDropblock.
The input tensor is processed through normalization, convolutional layers, pooling layers, DropBlock layers, dropout layers, and fully connected layers sequentially to produce the output tensor.
- Parameters:
x (Tensor) – Input tensor of shape (batch_size, 3, 64, 128).
- Returns:
Output tensor after processing through the network.
- Return type:
Tensor
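A similar sketch for ConvNetDropblock; the drop_prob and n_steps values are illustrative, and the input size is assumed to match ConvNet's 64x128 images:

    import torch

    from robustML.advertrain.models import ConvNetDropblock

    device = torch.device("cpu")
    model = ConvNetDropblock(device=device, p=0.2, drop_prob=0.1, n_steps=10)
    model.train()  # dropout/DropBlock regularization is typically active only in training mode

    x = torch.randn(4, 3, 64, 128, device=device)  # assumed (batch_size, 3, 64, 128)
    out = model(x)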
- class robustML.advertrain.models.Normalize(mean: Tensor, std: Tensor, device: device)[source]
Bases: Module
- forward(x: Tensor) → Tensor [source]
Normalize the input tensor.
Applies the normalization operation on the input tensor using the mean and standard deviation provided during initialization.
- Parameters:
x (Tensor) – The input tensor to be normalized.
- Returns:
The normalized tensor.
- Return type:
Tensor
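A standalone usage sketch for Normalize; the per-channel mean and std values below are illustrative statistics, not values mandated by the package:

    import torch

    from robustML.advertrain.models import Normalize

    device = torch.device("cpu")
    mean = torch.tensor([0.485, 0.456, 0.406])  # illustrative per-channel mean
    std = torch.tensor([0.229, 0.224, 0.225])   # illustrative per-channel std

    norm = Normalize(mean=mean, std=std, device=device)
    x = torch.rand(4, 3, 64, 128)
    x_norm = norm(x)  # assumed standard (x - mean) / std normalization per channel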
- class robustML.advertrain.models.ResNet(device: device, p: float = 0.2)[source]
Bases: Module
A custom implementation of a Residual Network (ResNet) for processing images.
This network consists of multiple convolutional layers, each followed by batch normalization, and some layers include dropout for regularization. The network uses skip connections similar to a ResNet architecture, adding the output of one layer to another layer.
- forward(inp: Tensor) → Tensor [source]
Defines the forward pass of the ResNet.
The input tensor is processed through a series of convolutional layers with skip connections, batch normalization, and dropout, followed by fully connected layers to produce the output tensor.
- Parameters:
inp (Tensor) – Input tensor of appropriate shape, typically matching the input size of the first convolutional layer.
- Returns:
Output tensor after processing through the network.
- Return type:
Tensor
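A minimal forward-pass sketch for the custom ResNet; the input shape is assumed to match the 64x128 images used by the other models in this module:

    import torch

    from robustML.advertrain.models import ResNet

    device = torch.device("cpu")
    model = ResNet(device=device, p=0.2)

    inp = torch.randn(4, 3, 64, 128, device=device)  # assumed input shape
    out = model(inp)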
- class robustML.advertrain.models.ResNetDropblock(device: device, p: float = 0.2, drop_prob: float = 0.0)[source]
Bases: Module
A custom implementation of a Residual Network (ResNet) for processing images.
This network consists of multiple convolutional layers, each followed by batch normalization, and some layers include dropout for regularization. The network uses skip connections similar to a ResNet architecture, adding the output of one layer to another layer.
- forward(inp: Tensor) → Tensor [source]
Defines the forward pass of the ResNet.
The input tensor is processed through a series of convolutional layers with skip connections, batch normalization, and dropout, followed by fully connected layers to produce the output tensor.
- Parameters:
inp (Tensor) – Input tensor of appropriate shape, typically matching the input size of the first convolutional layer.
- Returns:
Output tensor after processing through the network.
- Return type:
Tensor
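ResNetDropblock follows the same pattern; the drop_prob value and input shape below are illustrative assumptions:

    import torch

    from robustML.advertrain.models import ResNetDropblock

    device = torch.device("cpu")
    model = ResNetDropblock(device=device, p=0.2, drop_prob=0.1)

    inp = torch.randn(4, 3, 64, 128, device=device)  # assumed input shape
    out = model(inp)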
robustML.advertrain.transforms module
- class robustML.advertrain.transforms.DataTransformations(train_prob: float = 0.5)[source]
Bases: object
Class to create and return training and test data transformations.
This class encapsulates the creation of data transformations used in training and testing. It provides methods to get composed series of transformations for both scenarios.
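A construction sketch only; the accessor method names in the comments are hypothetical placeholders, since this listing documents the constructor but not the class's methods:

    from robustML.advertrain.transforms import DataTransformations

    transforms = DataTransformations(train_prob=0.5)
    # The accessor names below are hypothetical placeholders; use the methods
    # actually exposed by DataTransformations to obtain the composed pipelines.
    # train_tf = transforms.get_train_transforms()
    # test_tf = transforms.get_test_transforms()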