robustML.advertrain.dependencies package

Submodules

robustML.advertrain.dependencies.autoattack module

Taken from https://github.com/fra31/auto-attack

MIT License

class robustML.advertrain.dependencies.autoattack.APGDAttack(predict: Callable, n_iter: int = 100, norm: str = 'Linf', n_restarts: int = 1, eps: float | None = None, seed: int = 0, loss: str = 'ce', eot_iter: int = 1, rho: float = 0.75, topk: float | None = None, verbose: bool = False, device: device | None = None, use_largereps: bool = False, is_tf_model: bool = False)[source]

Bases: object

Implements the Auto-PGD (Auto Projected Gradient Descent) attack method.

Attributes:
  • model (Callable) – A function representing the forward pass of the model to be attacked.

  • n_iter (int) – Number of iterations for the attack.

  • norm (str) – The type of norm for the attack ('Linf', 'L2', 'L1').

  • n_restarts (int) – Number of random restarts for the attack.

  • eps (float) – The maximum perturbation amount allowed.

  • seed (int) – Random seed for reproducibility.

  • loss (str) – Type of loss function to use ('ce' for cross-entropy, 'dlr' for the DLR loss).

  • eot_iter (int) – Number of iterations for Expectation over Transformation (EOT).

  • rho (float) – Parameter for adjusting the step size.

  • topk (Optional[float]) – Parameter controlling the sparsity of the attack.

  • verbose (bool) – If True, prints verbose output during the attack.

  • device (Optional[torch.device]) – The device on which to perform computations.

  • use_largereps (bool) – If True, uses larger epsilon values in the initial iterations.

  • is_tf_model (bool) – If True, indicates the model is a TensorFlow model.

Methods:

init_hyperparam(x)[source]

Initializes hyperparameters based on the input data.

check_oscillation(...)[source]

Checks for oscillation in the optimization process.

check_shape(x)[source]

Ensures the input has the expected shape.

normalize(x)[source]

Normalizes the input tensor.

lp_norm(x)[source]

Computes the Lp norm of the input.

dlr_loss(x, y)[source]

Computes the Difference of Logits Ratio (DLR) loss.

attack_single_run(x, y, x_init=None)[source]

Performs a single run of the attack.

perturb(x, y=None, best_loss=False, x_init=None)[source]

Generates adversarial examples for the given inputs.

decr_eps_pgd(x, y, epss, iters, use_rs=True)[source]

Performs PGD with decreasing epsilon values.

attack_single_run(x: Tensor, y: Tensor, x_init: Tensor | None = None) → Tuple[Tensor, Tensor, Tensor, Tensor][source]

Performs a single run of the attack.

Parameters:
  • x (torch.Tensor) – The input data (clean images).

  • y (torch.Tensor) – The target labels.

  • x_init (Optional[torch.Tensor]) – Initial starting point for the attack.

Returns:

A tuple containing the best perturbed inputs, the accuracy tensor, the loss tensor, and the best adversarial examples found.

Return type:

Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]

check_oscillation(x: Tensor, j: int, k: int, y5: Tensor, k3: float = 0.75) → Tensor[source]

Checks for oscillation in the optimization process to adjust step sizes.

Parameters:
  • x (torch.Tensor) – The input tensor.

  • j (int) – Current iteration index.

  • k (int) – The number of steps to look back for oscillation.

  • y5 (torch.Tensor) – The tensor of losses.

  • k3 (float, optional) – Threshold parameter for oscillation. Defaults to 0.75.

Returns:

Tensor indicating if oscillation is detected.

Return type:

torch.Tensor

check_shape(x: Tensor) → Tensor[source]

Ensures the input tensor has the correct shape.

Parameters:

x (torch.Tensor) – The input tensor.

Returns:

The reshaped tensor.

Return type:

torch.Tensor

decr_eps_pgd(x: Tensor, y: Tensor, epss: list, iters: list, use_rs: bool = True) → Tuple[Tensor, Tensor, Tensor, Tensor][source]

Performs PGD with decreasing epsilon values.

Parameters:
  • x (torch.Tensor) – The input data.

  • y (torch.Tensor) – The target labels.

  • epss (list) – List of epsilon values to use in the attack.

  • iters (list) – List of iteration counts corresponding to each epsilon value.

  • use_rs (bool, optional) – If True, uses random start. Defaults to True.

Returns:

A tuple containing the final perturbed inputs, the accuracy tensor, the loss tensor, and the best adversarial examples found.

Return type:

Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]

dlr_loss(x: Tensor, y: Tensor) → Tensor[source]

Computes the Difference of Logits Ratio (DLR) loss.

Parameters:
  • x (torch.Tensor) – The logits from the model.

  • y (torch.Tensor) – The target labels.

Returns:

The computed DLR loss.

Return type:

torch.Tensor
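For reference, the DLR loss of Croce & Hein normalizes the margin between the correct-class logit and the best other logit by the gap between the largest and third-largest logits, which makes it scale-invariant. A minimal standalone sketch follows; it is a hypothetical re-implementation for illustration, not this class's exact code:

   import torch

   def dlr_loss_sketch(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
       # logits: (batch, num_classes) with num_classes >= 3; y: (batch,) labels.
       sorted_logits, sorted_idx = logits.sort(dim=1, descending=True)
       z_y = logits[torch.arange(logits.shape[0]), y]
       # Best logit among the wrong classes.
       top_is_y = (sorted_idx[:, 0] == y).float()
       z_other = sorted_logits[:, 1] * top_is_y + sorted_logits[:, 0] * (1.0 - top_is_y)
       # Negative margin, normalized by the first-to-third logit gap.
       return -(z_y - z_other) / (sorted_logits[:, 0] - sorted_logits[:, 2] + 1e-12)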

init_hyperparam(x: Tensor) → None[source]

Initializes various hyperparameters based on the input data.

Parameters:

x (torch.Tensor) – The input data.

lp_norm(x: Tensor) → Tensor[source]

Computes the Lp norm of the input tensor.

Parameters:

x (torch.Tensor) – The input tensor.

Returns:

The computed Lp norm of the input tensor.

Return type:

torch.Tensor

normalize(x: Tensor) → Tensor[source]

Normalizes the input tensor based on the specified norm type.

Parameters:

x (torch.Tensor) – The input tensor to be normalized.

Returns:

The normalized tensor.

Return type:

torch.Tensor

perturb(x: Tensor, y: Tensor | None = None, best_loss: bool = False, x_init: Tensor | None = None) → Tensor[source]

Generates adversarial examples for the given inputs.

Parameters:
  • x (torch.Tensor) – Clean images.

  • y (Optional[torch.Tensor]) – Clean labels. If None, predicted labels are used.

  • best_loss (bool, optional) – If True, returns points with highest loss. Defaults to False.

  • x_init (Optional[torch.Tensor]) – Initial starting point for the attack.

Returns:

Adversarial examples.

Return type:

torch.Tensor
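A hedged usage sketch for the class as a whole; the toy model, the random data, and every hyperparameter value below are illustrative placeholders, not defaults prescribed by this module:

   import torch
   import torch.nn as nn
   from robustML.advertrain.dependencies.autoattack import APGDAttack

   # Toy stand-in model and data; substitute a real trained network.
   model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
   images = torch.rand(8, 3, 32, 32)       # inputs assumed to lie in [0, 1]
   labels = torch.randint(0, 10, (8,))

   attack = APGDAttack(
       predict=model,
       n_iter=100,
       norm="Linf",
       eps=8 / 255,                         # illustrative L-inf budget
       n_restarts=1,
       loss="ce",
       device=torch.device("cpu"),
   )
   x_adv = attack.perturb(images, labels)   # same shape as images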

robustML.advertrain.dependencies.autoattack.L0_norm(x: Tensor) → Tensor[source]

Calculate the L0 norm of a tensor.

Parameters:

x (torch.Tensor) – Input tensor.

Returns:

The L0 norm of the input tensor.

Return type:

torch.Tensor

robustML.advertrain.dependencies.autoattack.L1_norm(x: Tensor, keepdim: bool = False) → Tensor[source]

Calculate the L1 norm of a tensor.

Parameters:
  • x (torch.Tensor) – Input tensor.

  • keepdim (bool, optional) – Whether to keep the dimensions or not. Defaults to False.

Returns:

The L1 norm of the input tensor.

Return type:

torch.Tensor

robustML.advertrain.dependencies.autoattack.L1_projection(x2: Tensor, y2: Tensor, eps1: float) → Tensor[source]

Project a point onto an L1 ball.

Parameters:
  • x2 (torch.Tensor) – Center of the L1 ball (bs x input_dim).

  • y2 (torch.Tensor) – Current perturbation (x2 + y2 is the point to be projected).

  • eps1 (float) – Radius of the L1 ball.

Returns:

Delta such that ||y2 + delta||_1 <= eps1 and 0 <= x2 + y2 + delta <= 1.

Return type:

torch.Tensor
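A short usage sketch with illustrative shapes and budget: given clean points x2 and a tentative perturbation y2, the returned delta corrects y2 so that both the L1 budget and the [0, 1] box constraint hold:

   import torch
   from robustML.advertrain.dependencies.autoattack import L1_projection

   x2 = torch.rand(4, 3, 32, 32)        # clean points in [0, 1]
   y2 = 0.1 * torch.randn_like(x2)      # tentative perturbation
   delta = L1_projection(x2, y2, eps1=10.0)
   x_proj = x2 + y2 + delta             # ||y2 + delta||_1 <= 10, x_proj in [0, 1]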

robustML.advertrain.dependencies.autoattack.L2_norm(x: Tensor, keepdim: bool = False) → Tensor[source]

Calculate the L2 norm of a tensor.

Parameters:
  • x (torch.Tensor) – Input tensor.

  • keepdim (bool, optional) – Whether to keep the dimensions or not. Defaults to False.

Returns:

The L2 norm of the input tensor.

Return type:

torch.Tensor
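For orientation, these helpers compute the usual vector norms per sample over the flattened non-batch dimensions; a minimal sketch of the equivalent computations, assuming that reduction convention:

   import torch

   x = torch.randn(8, 3, 32, 32)
   flat = x.reshape(x.shape[0], -1)       # flatten all non-batch dimensions

   l0 = (flat != 0).float().sum(dim=1)    # count of nonzero entries per sample
   l1 = flat.abs().sum(dim=1)             # sum of absolute values
   l2 = (flat ** 2).sum(dim=1).sqrt()     # Euclidean norm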

robustML.advertrain.dependencies.dropblock module

Taken from https://github.com/rwightman/pytorch-image-models

MIT License

class robustML.advertrain.dependencies.dropblock.DropBlock2d(drop_prob=0.1, block_size=7, gamma_scale=1.0, with_noise=False, inplace=False, batchwise=False, fast=True)[source]

Bases: Module

DropBlock (see https://arxiv.org/pdf/1810.12890.pdf).

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

robustML.advertrain.dependencies.dropblock.drop_block_2d(x, drop_prob: float = 0.1, block_size: int = 7, gamma_scale: float = 1.0, with_noise: bool = False, inplace: bool = False, batchwise: bool = False)[source]

DropBlock (see https://arxiv.org/pdf/1810.12890.pdf) with an experimental Gaussian noise option. This layer has been tested on a few training runs with success, but it needs further validation and possibly optimization for lower runtime impact.

robustML.advertrain.dependencies.dropblock.drop_block_fast_2d(x: Tensor, drop_prob: float = 0.1, block_size: int = 7, gamma_scale: float = 1.0, with_noise: bool = False, inplace: bool = False, batchwise: bool = False)[source]

DropBlock (see https://arxiv.org/pdf/1810.12890.pdf) with an experimental Gaussian noise option. Simplified from drop_block_2d above, without concern for a valid block mask at the edges.
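A minimal usage sketch for the module form; the probability and block size below simply repeat the defaults for illustration:

   import torch
   from robustML.advertrain.dependencies.dropblock import DropBlock2d

   block = DropBlock2d(drop_prob=0.1, block_size=7)
   block.train()                        # like Dropout, typically a no-op in eval mode
   feats = torch.randn(8, 64, 32, 32)   # (B, C, H, W) feature map
   out = block(feats)                   # same shape, with contiguous blocks zeroed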

robustML.advertrain.dependencies.fire module

Taken from https://github.com/MarinePICOT/Adversarial-Robustness-via-Fisher-Rao-Regularization

Robust training losses. Based on code from https://github.com/MarinePICOT/Adversarial-Robustness-via-Fisher-Rao-Regularization/blob/main/src/losses.py

robustML.advertrain.dependencies.fire.entropy_loss(unlabeled_logits: Tensor) → Tensor[source]

Calculate the entropy loss for a batch of unlabeled data.

Parameters:
  • unlabeled_logits (torch.Tensor) – A tensor of logits from the model's output, of shape (batch_size, num_classes).

Returns:

The mean entropy loss across the batch.

Return type:

torch.Tensor
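The quantity computed is the mean Shannon entropy of the predicted class distribution over the batch; a standalone sketch, assuming logits of shape (batch_size, num_classes):

   import torch
   import torch.nn.functional as F

   def entropy_loss_sketch(unlabeled_logits: torch.Tensor) -> torch.Tensor:
       # H(p) = -sum_c p_c * log(p_c), averaged over the batch.
       log_p = F.log_softmax(unlabeled_logits, dim=1)
       return -(log_p.exp() * log_p).sum(dim=1).mean()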

robustML.advertrain.dependencies.fire.fire_loss(model: Module, x_natural: Tensor, y: Tensor, optimizer: Optimizer, epoch: int, device: device, step_size: float = 0.003, epsilon: float = 0.001, perturb_steps: int = 10, beta: float = 1.0, adversarial: bool = True, distance: str = 'Linf', entropy_weight: float = 0, pretrain: int = 0) → tuple[Tensor, Tensor, Tensor, Tensor][source]

This function calculates the FIRE (Fisher-Rao Regularization) loss, which combines a natural loss, a robust loss, and an entropy loss for unlabeled data. It is used for adversarial training and stability training of neural networks.

Parameters:
  • model (torch.nn.Module) – The neural network model to be trained.

  • x_natural (torch.Tensor) – Input tensor of natural (non-adversarial) images.

  • y (torch.Tensor) – Tensor of labels. Unlabeled data should have label -1.

  • optimizer (torch.optim.Optimizer) – The optimizer used for training.

  • epoch (int) – Current training epoch.

  • device (torch.device) – The device on which to perform calculations.

  • step_size (float) – Step size for adversarial example generation.

  • epsilon (float) – Perturbation size for adversarial example generation.

  • perturb_steps (int) – Number of steps for adversarial example generation.

  • beta (float) – Weight for the robust loss in the overall loss calculation.

  • adversarial (bool) – Flag to enable/disable adversarial training.

  • distance (str) – Type of distance metric for adversarial example generation ('Linf' or 'L2').

  • entropy_weight (float) – Weight for the entropy loss in the overall loss calculation.

  • pretrain (int) – Number of pretraining epochs.

Returns:

A tuple containing the total loss, natural loss, robust loss, and entropy loss for unlabeled data.

Return type:

tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
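A hedged sketch of one training step built around fire_loss; the toy model and every hyperparameter value here are illustrative placeholders, not recommended settings:

   import torch
   import torch.nn as nn
   from robustML.advertrain.dependencies.fire import fire_loss

   model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
   optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
   device = torch.device("cpu")
   x = torch.rand(8, 3, 32, 32)
   y = torch.randint(0, 10, (8,))       # use -1 for unlabeled samples

   loss, nat_loss, rob_loss, ent_loss = fire_loss(
       model, x, y, optimizer, epoch=1, device=device,
       step_size=0.003, epsilon=0.031, perturb_steps=10,
       beta=6.0, adversarial=True, distance="Linf",
   )
   optimizer.zero_grad()
   loss.backward()
   optimizer.step()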

robustML.advertrain.dependencies.fire.noise_loss(model: Module, x_natural: Tensor, y: Tensor, epsilon: float = 0.25, clamp_x: bool = True) → Tensor[source]

This function augments the input data with random noise and computes the loss based on the model's predictions for the noisy data.

Parameters:
  • model (torch.nn.Module) – The neural network model.

  • x_natural (torch.Tensor) – The original (clean) input data.

  • y (torch.Tensor) – The labels corresponding to the input data.

  • epsilon (float, optional) – The magnitude of the noise to be added to the input data. Defaults to 0.25.

  • clamp_x (bool, optional) – If True, the noisy data is clamped to the range [0.0, 1.0]. Defaults to True.

Returns:

The computed loss based on the model's predictions for the noisy data.

Return type:

torch.Tensor
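Conceptually this is noise augmentation followed by an ordinary classification loss; a minimal sketch of the idea, assuming Gaussian noise and cross-entropy (the actual function may differ in these details):

   import torch
   import torch.nn.functional as F

   def noise_loss_sketch(model, x, y, epsilon=0.25, clamp_x=True):
       # Perturb the inputs with random noise of magnitude epsilon.
       x_noisy = x + epsilon * torch.randn_like(x)
       if clamp_x:
           x_noisy = x_noisy.clamp(0.0, 1.0)
       return F.cross_entropy(model(x_noisy), y)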

robustML.advertrain.dependencies.trades module

Taken from https://github.com/yaodongyu/TRADES

MIT License

robustML.advertrain.dependencies.trades.l2_norm(x: Tensor) → Tensor[source]

Compute the L2 norm of a tensor.

Parameters:

x (torch.Tensor) – The input tensor.

Returns:

The L2 norm of the input tensor.

Return type:

torch.Tensor

robustML.advertrain.dependencies.trades.squared_l2_norm(x: Tensor) → Tensor[source]

Compute the squared L2 norm of a tensor.

Parameters:

x (torch.Tensor) – The input tensor.

Returns:

The squared L2 norm of the flattened input tensor.

Return type:

torch.Tensor

robustML.advertrain.dependencies.trades.trades_loss(model: Module, x_natural: Tensor, y: Tensor, optimizer: Optimizer, step_size: float = 0.003, epsilon: float = 0.031, perturb_steps: int = 10, beta: float = 1.0, distance: str = 'l_inf', device: device | None = None) → Tensor[source]

Calculate the TRADES loss for training robust models.

Parameters:
  • model (nn.Module) – The neural network model.

  • x_natural (torch.Tensor) – Natural (clean) inputs.

  • y (torch.Tensor) – Target outputs.

  • optimizer (torch.optim.Optimizer) – Optimizer for the model.

  • step_size (float, optional) – Step size for perturbation. Defaults to 0.003.

  • epsilon (float, optional) – Perturbation limit. Defaults to 0.031.

  • perturb_steps (int, optional) – Number of perturbation steps. Defaults to 10.

  • beta (float, optional) – Regularization parameter for TRADES. Defaults to 1.0.

  • distance (str, optional) – Norm for perturbation ('l_inf' or 'l_2'). Defaults to 'l_inf'.

  • device (torch.device, optional) – The device to use (e.g., 'cuda' or 'cpu').

Returns:

The TRADES loss.

Return type:

torch.Tensor
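A hedged end-to-end sketch of one TRADES training step; the toy model and hyperparameters (beta=6.0 is a common choice from the TRADES paper, not a requirement) are placeholders:

   import torch
   import torch.nn as nn
   from robustML.advertrain.dependencies.trades import trades_loss

   model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
   optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
   x = torch.rand(8, 3, 32, 32)
   y = torch.randint(0, 10, (8,))

   loss = trades_loss(
       model, x, y, optimizer,
       step_size=0.003, epsilon=0.031, perturb_steps=10,
       beta=6.0, distance="l_inf", device=torch.device("cpu"),
   )
   optimizer.zero_grad()
   loss.backward()
   optimizer.step()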
