# 🛡 torchattack - A set of adversarial attacks in PyTorch

Install from GitHub source:

```shell
python -m pip install git+https://github.com/spencerwooo/torchattack
```

Install from Gitee mirror:

```shell
python -m pip install git+https://gitee.com/spencerwoo/torchattack
```

## Usage

```python
import torch
from torchattack import FGSM, MIFGSM
from torchattack.eval import AttackModel

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load a model
model = AttackModel.from_pretrained(model_name='resnet50', device=device)
transform, normalize = model.transform, model.normalize

# Initialize an attack
attack = FGSM(model, normalize, device)

# Initialize an attack with extra params
attack = MIFGSM(model, normalize, device, eps=0.03, steps=10, decay=1.0)
```

Check out `torchattack.eval.run_attack` for a simple example.
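For intuition on what the `FGSM` class above does under the hood, the attack is a single signed-gradient step, $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x L)$, clipped to the valid image range. Below is a minimal NumPy sketch of that update, not the library's implementation - the gradient here comes from a toy linear loss, whereas torchattack obtains it from the model's backward pass:

```python
import numpy as np

def fgsm_step(x, grad, eps=0.03):
    """Single FGSM update: move x by eps in the sign direction of the loss
    gradient, then clip back into the valid image range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy stand-in for a model's backward pass: loss L(x) = w . x, so grad = w.
x = np.array([0.5, 0.5, 0.5])
w = np.array([1.0, -2.0, 0.0])
x_adv = fgsm_step(x, w, eps=0.03)
```

The perturbation is bounded in $\ell_\infty$ norm by `eps` by construction, which is why FGSM appears under the $\ell_\infty$ column in the table below.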

## Attacks

Gradient-based attacks:

| Name | $\ell_p$ | Paper | torchattack class |
| :--- | :--- | :--- | :--- |
| FGSM | $\ell_\infty$ | Explaining and Harnessing Adversarial Examples | `torchattack.FGSM` |
| PGD | $\ell_\infty$ | Towards Deep Learning Models Resistant to Adversarial Attacks | `torchattack.PGD` |
| PGD (L2) | $\ell_2$ | Towards Deep Learning Models Resistant to Adversarial Attacks | `torchattack.PGDL2` |
| MI-FGSM | $\ell_\infty$ | Boosting Adversarial Attacks with Momentum | `torchattack.MIFGSM` |
| DI-FGSM | $\ell_\infty$ | Improving Transferability of Adversarial Examples with Input Diversity | `torchattack.DIFGSM` |
| TI-FGSM | $\ell_\infty$ | Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks | `torchattack.TIFGSM` |
| NI-FGSM | $\ell_\infty$ | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks | `torchattack.NIFGSM` |
| SI-NI-FGSM | $\ell_\infty$ | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks | `torchattack.SINIFGSM` |
| VMI-FGSM | $\ell_\infty$ | Enhancing the Transferability of Adversarial Attacks through Variance Tuning | `torchattack.VMIFGSM` |
| VNI-FGSM | $\ell_\infty$ | Enhancing the Transferability of Adversarial Attacks through Variance Tuning | `torchattack.VNIFGSM` |
| Admix | $\ell_\infty$ | Admix: Enhancing the Transferability of Adversarial Attacks | `torchattack.Admix` |
| FIA | $\ell_\infty$ | Feature Importance-aware Transferable Adversarial Attacks | `torchattack.FIA` |
| PNA-PatchOut | $\ell_\infty$ | Towards Transferable Adversarial Attacks on Vision Transformers | `torchattack.PNAPatchOut` |
| TGR | $\ell_\infty$ | Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization | `torchattack.TGR` |
| DeCoWA | $\ell_\infty$ | Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping | `torchattack.DeCoWA` |
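Many of the attacks above extend MI-FGSM's core idea: accumulate an $\ell_1$-normalised gradient into a momentum buffer and step in its sign direction. A minimal NumPy sketch of one such iteration (an illustration with a hand-supplied gradient, not the library's implementation, which also projects iterates back into the $\epsilon$-ball around the original input):

```python
import numpy as np

def mifgsm_update(x, grad, g, alpha=0.003, decay=1.0):
    """One MI-FGSM iteration: fold the L1-normalised gradient into the
    momentum buffer g, then take a signed step of size alpha.
    (The full attack additionally projects x back into the eps-ball.)"""
    g = decay * g + grad / (np.abs(grad).sum() + 1e-12)
    x_adv = np.clip(x + alpha * np.sign(g), 0.0, 1.0)
    return x_adv, g

# One step with a hand-supplied gradient and a zero-initialised buffer.
x = np.array([0.5, 0.5])
g = np.zeros_like(x)
grad = np.array([1.0, -1.0])
x_adv, g = mifgsm_update(x, grad, g, alpha=0.003)
```

The momentum buffer `g` smooths gradient directions across iterations, which is what improves transferability over plain iterative FGSM.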

Others:

| Name | $\ell_p$ | Paper | torchattack class |
| :--- | :--- | :--- | :--- |
| DeepFool | $\ell_2$ | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks | `torchattack.DeepFool` |
| GeoDA | $\ell_\infty$, $\ell_2$ | GeoDA: A Geometric Framework for Black-box Adversarial Attacks | `torchattack.GeoDA` |
| SSP | $\ell_\infty$ | A Self-supervised Approach for Adversarial Robustness | `torchattack.SSP` |
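Unlike the signed-gradient family, DeepFool seeks the *minimal* $\ell_2$ perturbation that crosses the decision boundary. In the simplest case of a binary affine classifier $f(x) = w \cdot x + b$, that closed-form step is $r = -\frac{f(x)}{\lVert w \rVert^2} w$. A toy NumPy sketch of just this case (the library's version handles multiclass models via iterative linearisation and adds a small overshoot):

```python
import numpy as np

def deepfool_binary_step(x, f_x, w):
    """Minimal L2 perturbation moving x onto the linearised decision
    boundary of a binary affine classifier f(x) = w . x + b."""
    r = -f_x / np.dot(w, w) * w
    return x + r

# Toy affine classifier: f(x) = w . x + b with b = 0.
w = np.array([2.0, 0.0])
b = 0.0
f = lambda z: np.dot(w, z) + b

x = np.array([1.0, 0.0])
x_adv = deepfool_binary_step(x, f(x), w)
```

After the step, `f(x_adv)` is zero, i.e. the point lies exactly on the boundary; in practice a small overshoot factor pushes it just across.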

## Development

```shell
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install deps with dev extras
python -m pip install -r requirements.txt
python -m pip install -e '.[dev]'
```

## License

MIT

## Related