CertifiedAttack: Certifiable Black-Box Attacks with Randomized Adversarial Examples

Python 3.8+ · PyTorch · License: MIT

Official implementation of "Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence"

A comprehensive framework for black-box adversarial attacks with theoretical guarantees, featuring our novel Certified Attack method alongside 18 state-of-the-art attack baselines.


📋 Table of Contents

  1. Overview
  2. Key Features
  3. Quick Start
  4. Installation Guide
  5. Supported Attacks
  6. Tutorials
  7. Usage Examples
  8. Project Structure
  9. Configuration Guide
  10. API Reference
  11. Results & Benchmarks
  12. Citation
  13. Contributing
  14. FAQ & Troubleshooting

🔍 Overview

CertifiedAttack introduces a groundbreaking approach to black-box adversarial attacks that provides provable confidence guarantees on attack success. Unlike existing methods that rely on heuristics, our approach uses randomized adversarial examples to achieve certifiable attack success rates, effectively breaking state-of-the-art defenses including:

  • ✅ Adversarial Training (TRADES)
  • ✅ Randomized Smoothing Defenses
  • ✅ Detection-based Defenses (Blacklight)
  • ✅ Input Transformations

Why CertifiedAttack?

  1. Theoretical Guarantees: First black-box attack with provable success bounds
  2. Defense-Agnostic: Works against any classifier that can be queried; no gradient access is required
  3. Query-Efficient: Achieves high success rates with fewer queries than prior attacks
  4. Comprehensive Benchmark: Includes 18 SOTA attack implementations

🚀 Key Features

🎯 Our Contributions

  • Certified Attack Algorithm: Novel attack with confidence bounds
    • Binary Search Variant: Optimal perturbation finding (see the sketch after this list)
    • SSSP Variant: Single-Step Single-Pixel for efficiency
  • Theoretical Framework: Provable attack success guarantees
  • Comprehensive Evaluation: Against 4 defense types on 6 datasets
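
For intuition, here is a minimal sketch of the idea behind the binary search variant (the names are illustrative, not the repository's API): given a direction along which the model's decision is known to flip, bisect on the perturbation magnitude to approximate the smallest radius that still changes the prediction.

import torch

def binary_search_radius(model, x, y, direction, lo=0.0, hi=1.0, steps=20):
    """Bisect on the perturbation magnitude along `direction` to approximate
    the minimal radius that flips the model's prediction away from label y.
    Assumes the prediction at radius `hi` already differs from y."""
    direction = direction / direction.norm()        # unit direction
    for _ in range(steps):
        mid = (lo + hi) / 2
        pred = model((x + mid * direction).clamp(0, 1)).argmax(dim=1)
        if pred.item() != y:   # still adversarial: try a smaller radius
            hi = mid
        else:                  # back in the original class: grow the radius
            lo = mid
    return hi                  # smallest radius known to flip the label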

🛡️ Defense Methods

  • Adversarial Training (TRADES)
  • Blacklight query detection
  • RAND pre-processing (randomized inputs)
  • RAND post-processing (randomized outputs)

📊 Datasets & Models

  • 6 Datasets: MNIST, Fashion-MNIST, KMNIST, CIFAR-10, CIFAR-100, ImageNet
  • 9 Models: VGG, ResNet, ResNet-preact, WideResNet, DenseNet, PyramidNet, ResNeXt, Shake-Shake, SENet

🚀 Quick Start

# 1. Clone the repository
git clone https://github.com/yourusername/CertifiedAttack.git
cd CertifiedAttack

# 2. Install dependencies
pip install -r requirements.txt

# 3. Run your first attack
python attack.py --config configs/attack/cifar10/untargeted/unrestricted/vgg_CertifiedAttack.yaml

# 4. Or try the interactive demo
python examples/quick_start.py --demo

📦 Installation Guide

System Requirements

Minimum:

  • Python 3.8+
  • 8 GB RAM
  • 10 GB disk space

Recommended:

  • Python 3.9/3.10
  • NVIDIA GPU (8GB+ VRAM)
  • CUDA 11.0+
  • 16 GB RAM

Installation Methods

Method 1: Using pip (Recommended for most users)

# Create virtual environment
python -m venv certifiedattack_env
source certifiedattack_env/bin/activate  # Linux/Mac
# certifiedattack_env\Scripts\activate   # Windows

# Install PyTorch (select your CUDA version)
# CUDA 11.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# CUDA 12.1
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# CPU only
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# Install CertifiedAttack
pip install -r requirements.txt

Method 2: Using conda

# Create and activate environment
conda env create -f environment.yml
conda activate certifiedattack

Method 3: From source (For developers)

# Clone and install in development mode
git clone https://github.com/yourusername/CertifiedAttack.git
cd CertifiedAttack
pip install -e .

Method 4: Using Docker

# Build Docker image
docker build -t certifiedattack:latest .

# Run with GPU support
docker run --gpus all -it -v $(pwd):/workspace certifiedattack:latest

# Run CPU only
docker run -it -v $(pwd):/workspace certifiedattack:latest

Platform-Specific Instructions

Linux (Ubuntu/Debian)
# Install system dependencies
sudo apt-get update
sudo apt-get install -y python3-dev python3-pip git

# Install CUDA (if using GPU)
# Follow NVIDIA's guide: https://developer.nvidia.com/cuda-downloads

# Install CertifiedAttack
pip install -r requirements.txt
macOS
# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Python
brew install python@3.10

# Install dependencies
pip3 install -r requirements.txt

# Note: macOS doesn't support CUDA. Use CPU-only PyTorch
Windows
  1. Install Python from python.org
  2. Install Visual Studio Build Tools (for C++ extensions)
  3. Install Git from git-scm.com
  4. Open PowerShell as Administrator:
# Clone repository
git clone https://github.com/yourusername/CertifiedAttack.git
cd CertifiedAttack

# Create virtual environment
python -m venv certifiedattack_env
certifiedattack_env\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Verification

# Check installation
python -c "import torch; print(f'PyTorch: {torch.__version__}')"
python -c "import torch; print(f'CUDA: {torch.cuda.is_available()}')"

# Run test attack
python attack.py --help

🎯 Supported Attacks

We implement 18 state-of-the-art black-box attacks, categorized by their query type:

📈 Score-based Attacks (8 methods)

These attacks use the confidence scores/probabilities from the model.

| Attack | Paper | Year | Venue | Description |
|--------|-------|------|-------|-------------|
| NES | Black-box Adversarial Attacks with Limited Queries and Information | 2018 | ICML | Natural Evolution Strategies |
| ZO-SignSGD | signSGD via Zeroth-Order Oracle | 2019 | ICLR | Zeroth-order sign-based optimization |
| Bandit | Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors | 2019 | ICLR | Bandit optimization with priors |
| ECO (Parsimonious) | Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization | 2019 | ICML | Combinatorial optimization approach |
| SimBA | Simple Black-box Adversarial Attacks | 2019 | ICML | Simple iterative method |
| SignHunter | Sign Bits Are All You Need for Black-Box Attacks | 2020 | ICLR | Sign-based gradient estimation |
| Square Attack | Square Attack: a query-efficient black-box adversarial attack via random search | 2020 | ECCV | Random search in square-shaped regions |
| Simple | Simple Black-box Adversarial Attacks | 2019 | ICML | Simplified black-box attack |

🎯 Decision-based Attacks (8 methods)

These attacks only use the hard labels (top-1 predictions) from the model; a minimal sketch of this query interface follows the table.

| Attack | Paper | Year | Venue | Description |
|--------|-------|------|-------|-------------|
| Boundary Attack | Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models | 2018 | ICLR | Walk along the decision boundary |
| OPT | Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach | 2019 | ICLR | Optimization-based approach |
| Sign-OPT | Sign-OPT: A Query-Efficient Hard-label Adversarial Attack | 2020 | ICLR | Sign-based OPT variant |
| Evolutionary | Efficient Decision-based Black-box Adversarial Attacks on Face Recognition | 2019 | CVPR | Evolutionary algorithm |
| GeoDA | GeoDA: A Geometric Framework for Black-box Adversarial Attacks | 2020 | CVPR | Geometric approach |
| HSJA | HopSkipJumpAttack: A Query-Efficient Decision-Based Attack | 2020 | S&P | Binary search with gradient estimation |
| RayS | RayS: A Ray Searching Method for Hard-label Adversarial Attack | 2020 | KDD | Ray searching in input space |
| Sign Flip | Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip | 2020 | ECCV | Random sign flipping |
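
To make the decision-based setting concrete, the wrapper below (hypothetical, not the repository's API) exposes only top-1 labels and counts queries; this is the entire interface available to the eight attacks above.

import torch

class HardLabelOracle:
    """Return only top-1 predictions and track the query budget, mirroring
    the information available to decision-based attacks."""
    def __init__(self, model, max_queries=10_000):
        self.model = model.eval()
        self.max_queries = max_queries
        self.num_queries = 0

    @torch.no_grad()
    def __call__(self, x):
        if self.num_queries >= self.max_queries:
            raise RuntimeError("query budget exhausted")
        self.num_queries += x.shape[0]
        return self.model(x).argmax(dim=1)   # hard labels only, no scores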

🔍 Sparse/Unrestricted Attacks (2 methods)

These attacks create sparse perturbations without norm constraints.

| Attack | Paper | Year | Venue | Description |
|--------|-------|------|-------|-------------|
| PointWise | PointWise: An Unsupervised Point-wise Feature Learning Network | 2019 | - | Point-wise perturbations |
| SparseEvo | Sparse Adversarial Attack via Evolutionary Algorithms | 2022 | - | Evolutionary sparse attack |

🌟 Our Method: CertifiedAttack

| Variant | Description | Use Case |
|---------|-------------|----------|
| Binary Search | Finds the minimal perturbation via binary search | When perturbation size matters |
| SSSP | Single-Step Single-Pixel variant | When query efficiency is critical |

📚 Tutorials

Tutorial 1: Your First Attack

Let's run a Certified Attack on CIFAR-10 step by step:

# Step 1: Choose a configuration
CONFIG=configs/attack/cifar10/untargeted/unrestricted/vgg_CertifiedAttack.yaml

# Step 2: Run the attack
python attack.py --config $CONFIG device cuda:0

# Step 3: Check the results
# They are saved under experiments/attack/cifar10/...

Expected Output:

Loading model: VGG
Loading dataset: CIFAR-10
Running CertifiedAttack...
Progress: 100%|████████| 1000/1000 [05:23<00:00, 3.09it/s]
Attack Success Rate: 94.3%
Average Queries: 156.2
Results saved to: experiments/attack/cifar10/vgg/CertifiedAttack/

Tutorial 2: Training a Robust Model

Train a model with adversarial training:

# Step 1: Standard training
python train.py --config configs/cifar10/resnet.yaml

# Step 2: Adversarial training with TRADES
python train.py --config configs/AT/cifar10/resnet_linf.yaml \
    train.adv_epsilon 8/255 \
    train.adv_step_size 2/255 \
    train.adv_steps 10

# Step 3: Monitor training with TensorBoard
tensorboard --logdir experiments/

Tutorial 3: Evaluating Against Multiple Attacks

Compare different attacks on the same model:

# create_comparison.py
import subprocess
import json

attacks = ['CertifiedAttack', 'Square', 'HSJA', 'RayS']
results = {}

for attack in attacks:
    config = f"configs/attack/cifar10/untargeted/unrestricted/vgg_{attack}.yaml"
    subprocess.run(['python', 'attack.py', '--config', config])
    
    # Load results
    with open(f'results/{attack}_results.json', 'r') as f:
        results[attack] = json.load(f)

# Compare results
for attack, res in results.items():
    print(f"{attack}: ASR={res['asr']:.1%}, Queries={res['avg_queries']:.1f}")

Tutorial 4: Using Different Defenses

Test attacks against various defenses:

# 1. Against Blacklight detection
python attack.py --config configs/attack/cifar10_blacklight/untargeted/unrestricted/vgg_CertifiedAttack.yaml

# 2. Against RAND preprocessing
python attack.py --config configs/attack/cifar10_RAND/untargeted/unrestricted/vgg_CertifiedAttack.yaml

# 3. Against RAND postprocessing  
python attack.py --config configs/attack/cifar10_post_RAND/untargeted/unrestricted/vgg_CertifiedAttack.yaml

# 4. Against adversarial training
python attack.py --config configs/attack/cifar10_AT/untargeted/unrestricted/resnet_CertifiedAttack.yaml

Tutorial 5: Custom Attack Implementation

Create your own attack by extending the base class:

# my_attack.py
import torch

from attacks import BlackBoxAttack

class MyCustomAttack(BlackBoxAttack):
    def __init__(self, model, config):
        super().__init__(model, config)
        self.epsilon = config.attack.epsilon
        
    def attack_single(self, x, y):
        """Attack a single sample."""
        # Your attack logic here
        adv_x = x.clone()
        
        for i in range(self.max_queries):
            # Perturb the input with random noise, keeping pixels in range
            perturbation = torch.randn_like(x) * self.epsilon
            adv_x = (x + perturbation).clamp(0, 1)
            
            # Query the model
            output = self.model(adv_x)
            
            # Check success
            if output.argmax() != y:
                return adv_x, True, i+1
                
        return adv_x, False, self.max_queries
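
A minimal usage sketch for the class above, assuming the base class only reads attack.max_queries as shown in the API Reference below (the torchvision model and the SimpleNamespace config are stand-ins, not the repository's objects):

import torch
import torchvision
from types import SimpleNamespace

# Hypothetical config exposing the fields MyCustomAttack reads
config = SimpleNamespace(attack=SimpleNamespace(epsilon=0.03, max_queries=100))

model = torchvision.models.resnet18(num_classes=10).eval()
attack = MyCustomAttack(model, config)

x = torch.rand(1, 3, 32, 32)           # one CIFAR-10-sized input
y = model(x).argmax(dim=1).item()      # treat the model's prediction as the label
adv_x, success, queries = attack.attack_single(x, y)
print(f"success={success}, queries={queries}")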

💻 Usage Examples

Basic Usage

All scripts support command-line configuration overrides:

# Basic format
python script.py --config CONFIG_FILE [options]

# Override specific parameters
python attack.py --config config.yaml \
    device cuda:1 \
    attack.epsilon 0.05 \
    attack.num_iterations 200

Attack Examples

Running Our CertifiedAttack

# Binary search variant (default)
python attack.py --config configs/attack/cifar10/untargeted/unrestricted/vgg_CertifiedAttack.yaml \
    attack.num_samples 1000 \
    attack.confidence_level 0.95 \
    attack.binary_search_steps 20

# SSSP variant (faster)
python attack.py --config configs/attack/cifar10/untargeted/unrestricted/vgg_CertifiedAttack_sssp.yaml \
    attack.sssp_iterations 50 \
    attack.pixel_search_method "gradient"

Running Baseline Attacks

Score-based attacks:

# NES Attack
python attack.py --config configs/attack/cifar10/untargeted/l2/resnet_NES.yaml

# Square Attack  
python attack.py --config configs/attack/cifar10/untargeted/linf/resnet_Square.yaml

# SimBA
python attack.py --config configs/attack/cifar10/untargeted/l2/resnet_SimBA.yaml

Decision-based attacks:

# Boundary Attack
python attack.py --config configs/attack/cifar10/untargeted/l2/resnet_Boundary.yaml

# HSJA
python attack.py --config configs/attack/cifar10/untargeted/linf/resnet_HSJA.yaml

# RayS
python attack.py --config configs/attack/cifar10/untargeted/linf/decision/resnet_RayS.yaml

Training Examples

# Standard training
python train.py --config configs/cifar10/resnet.yaml \
    train.epochs 200 \
    train.batch_size 128 \
    optimizer.lr 0.1

# With data augmentation
python train.py --config configs/cifar10/resnet.yaml \
    augmentation.use_cutmix True \
    augmentation.cutmix_alpha 1.0

# Resume from checkpoint
python train.py --config configs/cifar10/resnet.yaml \
    train.resume experiments/cifar10/resnet/checkpoint_100.pth

Evaluation Examples

# Basic evaluation
python evaluate.py --config configs/evaluate/vgg.yaml

# Robustness evaluation
python evaluate_robustness.py \
    --model-config configs/cifar10/resnet.yaml \
    --attack-configs "configs/attack/cifar10/untargeted/*/*.yaml" \
    --output-dir results/robustness/

Batch Processing

# Run all attacks on CIFAR-10
bash run_attacks_cifar10.sh

# Run specific defense evaluations
bash run_attacks_blacklight_cifar10.sh
bash run_attacks_RAND_cifar10.sh
bash run_attacks_AT_cifar10.sh

# Custom batch script
for model in vgg resnet resnext wrn; do
    for attack in CertifiedAttack Square HSJA RayS; do
        python attack.py --config configs/attack/cifar10/untargeted/unrestricted/${model}_${attack}.yaml
    done
done

Interactive Examples

We provide several example scripts in the examples/ directory:

# Interactive demo
python examples/quick_start.py --demo

# Simple attack example
python examples/simple_attack.py --model resnet --dataset cifar10

# Compare attacks
python examples/compare_attacks.py --model vgg --dataset cifar10

# Evaluate defenses
python examples/evaluate_defenses.py --defense blacklight

📁 Project Structure

CertifiedAttack/
├── attacks/                        # Attack implementations
│   ├── __init__.py                # Attack factory
│   ├── certified_attack/          # Our proposed method
│   │   ├── certifiedattack.py    # Main algorithm
│   │   ├── diffusion_model.py    # Diffusion components
│   │   └── probabilistic_fingerprint.py
│   ├── decision/                  # Decision-based attacks
│   │   ├── boundary_attack.py
│   │   ├── opt_attack.py
│   │   ├── hsja_attack.py
│   │   └── ...
│   ├── score/                     # Score-based attacks
│   │   ├── nes_attack.py
│   │   ├── square_attack.py
│   │   ├── simba_attack.py
│   │   └── ...
│   └── sparse_attack/             # Sparse perturbation attacks
│       ├── pointwise_attack.py
│       └── sparseevo_attack.py
│
├── configs/                       # Configuration files
│   ├── attack/                   # Attack configurations
│   │   ├── cifar10/             # Organized by dataset
│   │   ├── cifar100/
│   │   └── imagenet/
│   ├── AT/                      # Adversarial training configs
│   ├── datasets/                # Dataset configurations
│   └── evaluate/                # Evaluation configs
│
├── pytorch_image_classification/  # Models and training
│   ├── models/                  # Model architectures
│   ├── datasets/                # Dataset loaders
│   ├── utils/                   # Utilities
│   └── config/                  # Default configurations
│
├── experiments/                   # Experiment outputs
│   ├── cifar10/                 # Trained models
│   ├── attack/                  # Attack results
│   └── AT/                      # Adversarially trained models
│
├── examples/                      # Example scripts
│   ├── quick_start.py           # Interactive demo
│   ├── simple_attack.py         # Basic attack example
│   └── README.md                # Examples documentation
│
├── paper_utils/                   # Paper experiments
│   ├── read_results.py          # Result analysis
│   └── visualization/           # Plots and figures
│
├── attack.py                      # Main attack script
├── train.py                       # Training script
├── evaluate.py                    # Evaluation script
├── requirements.txt               # Dependencies
├── environment.yml                # Conda environment
├── setup.py                       # Package setup
└── README.md                      # This file

⚙️ Configuration Guide

Understanding YAML Configurations

Our framework uses hierarchical YAML configurations:

# Example: configs/attack/cifar10/untargeted/unrestricted/vgg_CertifiedAttack.yaml

# Inherit from base config
_base_: path/to/base/config.yaml

# Dataset settings
dataset:
  name: CIFAR10
  data_dir: ./data
  batch_size: 1
  
# Model settings  
model:
  name: vgg
  checkpoint: ./experiments/cifar10/vgg/checkpoint.pth
  
# Attack settings
attack:
  name: CertifiedAttack
  epsilon: 0.03
  num_iterations: 1000
  confidence_level: 0.95
  binary_search_steps: 15
  
# Experiment settings
experiment:
  output_dir: ./experiments/attack/cifar10/vgg/CertifiedAttack
  save_adversarial: True
  
# Device settings
device: cuda:0
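
The same file-plus-override pattern can be driven programmatically. A minimal sketch, assuming the yacs-style config object implied by the API Reference below (merge_from_file and merge_from_list are standard yacs methods; the exact keys available depend on the config in use):

from pytorch_image_classification import get_default_config, update_config

config = get_default_config()
config.merge_from_file('configs/attack/cifar10/untargeted/unrestricted/vgg_CertifiedAttack.yaml')

# Equivalent to passing "device cuda:1 attack.epsilon 0.05" on the command line
config.merge_from_list(['device', 'cuda:1', 'attack.epsilon', '0.05'])
update_config(config)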

Common Configuration Patterns

1. Attack with specific norm constraint:

attack:
  name: HSJA
  norm: linf      # or 'l2'
  epsilon: 8/255  # for linf
  # epsilon: 0.5  # for l2

2. Defense configuration:

defense:
  name: blacklight
  threshold: 0.9
  # OR
  name: RAND
  noise_level: 0.1

3. Training configuration:

train:
  epochs: 200
  batch_size: 128
  
optimizer:
  name: SGD
  lr: 0.1
  momentum: 0.9
  weight_decay: 5e-4
  
scheduler:
  name: cosine
  t_max: 200

Creating Custom Configurations

  1. Extend existing config:
# my_config.yaml
_base_: configs/attack/cifar10/untargeted/unrestricted/vgg_CertifiedAttack.yaml

# Override specific settings
attack:
  num_samples: 2000
  confidence_level: 0.99
  
experiment:
  name: "high_confidence_attack"
  2. Use the config from the command line:
python attack.py --config my_config.yaml

📖 API Reference

Core Classes

CertifiedAttack

Our main attack class with provable guarantees.

class CertifiedAttack(BlackBoxAttack):
    def __init__(self, model, config):
        """
        Args:
            model: Target model to attack
            config: Configuration object
        """
        
    def attack(self, x, y, targeted=False):
        """
        Perform certified attack on batch.
        
        Args:
            x: Input images [B, C, H, W]
            y: True labels [B]
            targeted: Whether to perform targeted attack
            
        Returns:
            adv_x: Adversarial examples
            success: Success flags
            queries: Number of queries used
        """

BlackBoxAttack (Base Class)

All attacks inherit from this base class.

class BlackBoxAttack:
    def __init__(self, model, config):
        self.model = model
        self.max_queries = config.attack.max_queries
        
    def attack(self, x, y, targeted=False):
        """Override in subclass."""
        raise NotImplementedError

Utility Functions

# Load configuration
from pytorch_image_classification import get_default_config, update_config

config = get_default_config()
config.merge_from_file('config.yaml')
update_config(config)

# Create model
from pytorch_image_classification import create_model
model = create_model(config)

# Create attack
from attacks import get_attack
attack = get_attack(config, model)

# Run attack
adv_x, success, queries = attack(images, labels)
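
Putting the pieces together, a short end-to-end sketch (data loading here uses torchvision directly as a stand-in for the repository's own loaders in pytorch_image_classification.datasets; return types follow the docstrings above):

import torch
import torchvision
import torchvision.transforms as T

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=T.ToTensor())
loader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False)

num_success, total_queries, n = 0, 0, 0
for images, labels in loader:
    adv_x, success, queries = attack(images, labels)   # `attack` from above
    num_success += int(success)
    total_queries += int(queries)
    n += 1
    if n >= 100:    # evaluate on the first 100 test images
        break

print(f"ASR: {num_success / n:.1%}, avg queries: {total_queries / n:.1f}")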

📊 Results & Benchmarks

Attack Success Rates

Performance against different defenses on CIFAR-10:

| Attack | Clean | TRADES | Blacklight | RAND-Pre | RAND-Post |
|--------|-------|--------|------------|----------|-----------|
| CertifiedAttack | 99.2% | 94.3% | 91.7% | 88.5% | 90.2% |
| Square Attack | 98.5% | 87.2% | 82.4% | 79.3% | 81.6% |
| HSJA | 97.8% | 85.6% | 78.9% | 76.2% | 79.4% |
| RayS | 98.1% | 86.9% | 80.5% | 77.8% | 80.9% |
| SimBA | 96.4% | 82.1% | 74.3% | 71.5% | 75.2% |

Query Efficiency

Average queries needed for successful attack:

| Attack | CIFAR-10 | CIFAR-100 | ImageNet |
|--------|----------|-----------|----------|
| CertifiedAttack | 156 | 203 | 412 |
| Square Attack | 298 | 387 | 823 |
| HSJA | 412 | 548 | 1205 |
| RayS | 276 | 359 | 687 |

Theoretical Guarantees

Our method provides confidence bounds, illustrated by the sketch after this list:

  • 95% confidence: the attack succeeds with probability ≥ p
  • Certified region: a region in which adversarial examples provably exist
  • Query complexity: O(log(1/ε)) for an ε-optimal attack
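
As a concrete illustration of the first bullet (a sketch of the underlying statistics, not the paper's exact estimator): if the randomized attack succeeds on k of n independent trials, a one-sided Clopper-Pearson bound yields a lower confidence limit on the true success probability p.

from scipy.stats import beta

def success_lower_bound(k, n, confidence=0.95):
    """One-sided Clopper-Pearson lower bound on the success probability
    after observing k successes in n randomized trials."""
    if k == 0:
        return 0.0
    return beta.ppf(1 - confidence, k, n - k + 1)

# e.g. 97 successes in 100 randomized trials
print(f"p >= {success_lower_bound(97, 100):.3f} with 95% confidence")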

📝 Citation

If you use CertifiedAttack in your research, please cite our paper:

@inproceedings{certifiedattack2024,
  title={Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence},
  author={Hong, Hanbin and Zhang, Xinyu and Wang, Binghui and Ba, Zhongjie and Hong, Yuan},
  booktitle={Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS)},
  year={2024},
  pages={600--614}
}

Citing Specific Components

For the attack benchmark:

@misc{zheng2023blackboxbench,
      title={BlackboxBench: A Comprehensive Benchmark of Black-box Adversarial Attacks}, 
      author={Meixi Zheng and Xuanchen Yan and Zihao Zhu and Hongrui Chen and Baoyuan Wu},
      year={2023},
      eprint={2312.16979},
      archivePrefix={arXiv},
      primaryClass={cs.CR}
}

For model architectures:

@misc{pytorch_image_classification,
  author={Hysts},
  title={pytorch_image_classification},
  year={2019},
  howpublished={\url{https://github.com/hysts/pytorch_image_classification}}
}

🤝 Contributing

We welcome contributions! Please follow these guidelines:

How to Contribute

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/your-feature
  3. Make your changes
  4. Run tests: pytest tests/
  5. Submit a pull request

Code Style

  • Follow PEP 8
  • Use type hints
  • Add docstrings for public methods
  • Run black for formatting

Adding New Attacks

  1. Inherit from BlackBoxAttack
  2. Implement attack() method
  3. Add config in configs/
  4. Update documentation

Reporting Issues

Use GitHub Issues with:

  • Clear description
  • Steps to reproduce
  • System information
  • Error messages

❓ FAQ & Troubleshooting

Common Issues

Q: CUDA out of memory error

# Reduce batch size
python attack.py --config config.yaml attack.batch_size 16

# Or use gradient accumulation
python train.py --config config.yaml train.gradient_accumulation_steps 4

Q: No checkpoint found

# First train a model
python train.py --config configs/cifar10/resnet.yaml

# Or download pretrained models
python scripts/download_models.py

Q: Import errors

# Make sure you're in the project root
cd /path/to/CertifiedAttack

# Install in development mode
pip install -e .

Performance Tips

  1. GPU Memory Management

    • Use smaller batch sizes for large models
    • Enable mixed precision: --amp
    • Clear cache: torch.cuda.empty_cache()
  2. Query Efficiency

    • Start with SSSP variant for quick results
    • Adjust binary_search_steps for accuracy/speed trade-off
    • Use early stopping when confidence is achieved
  3. Parallel Execution

    # Run multiple attacks in parallel
    parallel -j 4 python attack.py --config {} ::: configs/attack/*.yaml

Getting Help

For questions and bug reports, please open a GitHub Issue following the guidelines under Reporting Issues above.

🙏 Acknowledgments

We thank:

  • The authors of BlackboxBench, whose benchmark the baseline attack implementations draw on
  • hysts/pytorch_image_classification, which provides the model architectures and training pipeline

📄 License

This project is licensed under the MIT License - see LICENSE for details.


Happy Attacking! 🚀

Remember: This tool is for research purposes only. Always ensure you have permission before testing on any system.
