This repository contains the source code for APT (Adversarial Pivotal Tuning).
Peter Ebert Christensen¹, Vésteinn Snæbjarnarson¹, Andrea Dittadi², Serge Belongie¹, Sagie Benaim³

¹University of Copenhagen, ²Helmholtz AI, ³Hebrew University
The robustness of image classifiers is essential to their deployment in the real world. The ability to assess this resilience to manipulations or deviations from the training data is thus crucial. These modifications have traditionally consisted of minimal changes that still manage to fool classifiers, and modern approaches are increasingly robust to them. Semantic manipulations that modify elements of an image in meaningful ways have thus gained traction for this purpose. However, they have primarily been limited to style, color, or attribute changes. While expressive, these manipulations do not make use of the full capabilities of a pretrained generative model. In this work, we aim to bridge this gap. We show how a pretrained image generator can be used to semantically manipulate images in a detailed, diverse, and photorealistic way while still preserving the class of the original image. Inspired by recent GAN-based image inversion methods, we propose a method called Adversarial Pivotal Tuning (APT). Given an image, APT first finds a pivot latent space input that reconstructs the image using a pretrained generator. It then adjusts the generator's weights to create small yet semantic manipulations in order to fool a pretrained classifier. APT preserves the full expressive editing capabilities of the generative model. We demonstrate that APT is capable of a wide range of class-preserving semantic image manipulations that fool a variety of pretrained classifiers. Finally, we show that classifiers that are robust to other benchmarks are not robust to APT manipulations and suggest a method to improve them.
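For orientation, the sketch below shows the two APT stages described in the abstract: invert the image to a pivot latent with the generator frozen, then fine-tune the generator's weights around that pivot with an adversarial objective. Everything here is a conceptual placeholder (latent initialization, loss weights, learning rates, and the `G.synthesis` call are assumptions), not the repository's actual code; see `APT.py` for the real implementation.

```python
import copy

import torch
import torch.nn.functional as F


def adversarial_pivotal_tuning(G, classifier, image, label,
                               inv_steps=1000, pti_steps=350):
    """Conceptual two-stage APT loop. See APT.py for the actual implementation."""
    device = image.device

    # --- Stage 1: pivot inversion (generator frozen, optimize the latent) ---
    # Hypothetical latent shape/initialization; a real run would start from the mean latent.
    w = torch.zeros(1, G.num_ws, G.w_dim, device=device, requires_grad=True)
    opt_w = torch.optim.Adam([w], lr=0.01)
    for _ in range(inv_steps):
        recon = G.synthesis(w)              # hypothetical synthesis call
        loss = F.mse_loss(recon, image)     # plus a perceptual (LPIPS) term in practice
        opt_w.zero_grad()
        loss.backward()
        opt_w.step()
    w_pivot = w.detach()

    # --- Stage 2: pivotal tuning with an adversarial objective ---
    G_tuned = copy.deepcopy(G)
    opt_g = torch.optim.Adam(G_tuned.parameters(), lr=3e-4)
    for _ in range(pti_steps):
        fake = G_tuned.synthesis(w_pivot)
        logits = classifier(fake)
        adv_loss = -F.cross_entropy(logits, label)  # push away from the true class
        rec_loss = F.mse_loss(fake, image)          # stay close to the original image
        opt_g.zero_grad()
        (adv_loss + rec_loss).backward()
        opt_g.step()
    return G_tuned, w_pivot
```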
- 64-bit Python 3.8 and PyTorch 1.9.0 (or later). See https://pytorch.org for PyTorch install instructions.
- CUDA toolkit 11.1 or later.
- GCC 7 or later. The recommended GCC version depends on your CUDA version; see, for example, the CUDA 11.4 system requirements.
- If you run into problems when setting up the custom CUDA kernels, see the Troubleshooting docs of the original StyleGAN3 repo and the following issue: autonomousvision/stylegan-xl#23.
- Windows users struggling to install the environment might find autonomousvision/stylegan-xl#10 helpful.
Use the following commands with Miniconda3 to create and activate your APT Python environment:
```bash
conda env create -f environment.yml
conda activate APT
```
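After activating the environment, a quick sanity check (our suggestion, not part of the repo) confirms the requirements above are met:

```python
import torch

print(torch.__version__)          # expect 1.9.0 or later
print(torch.version.cuda)         # expect 11.1 or later
print(torch.cuda.is_available())  # expect True
```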
We modify the PTI inversion script from StyleGAN-XL into APT (see `APT.py`).
You will need to download the weights of a classifier. For your reference, we used the PRIME-ResNet50 weights.
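For instance, a downloaded checkpoint can be loaded into a standard torchvision ResNet-50 roughly as below; the filename and state-dict layout are assumptions, so adjust them to the actual PRIME release:

```python
import torch
from torchvision.models import resnet50

classifier = resnet50()
state = torch.load("prime_resnet50.pth", map_location="cpu")  # hypothetical filename
classifier.load_state_dict(state)  # the released checkpoint may nest weights under a key
classifier.eval()
```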
Similarly, you will need to obtain the ImageNet dataset.
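Once downloaded, `torchvision` can wrap the dataset directly; the root path below is a placeholder:

```python
from torchvision import datasets, transforms

# Assumes the ImageNet archives are already downloaded and extracted under this root.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])
val_set = datasets.ImageNet("/path/to/imagenet", split="val", transform=transform)
```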
To generate fooling examples for all 1000 ImageNet classes, run:
```bash
python3 APT.py --outdir="" --target "" --inv-steps 1000 --run-pti --pti-steps 350 --startidx 0 --endidx 1000 --device cuda --network="https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet256.pkl"
```
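In this command, `--inv-steps` controls the pivot-inversion stage and `--pti-steps` the pivotal-tuning stage, while `--startidx`/`--endidx` select the range of ImageNet class indices to process (0 to 1000 covers all classes).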
To generate fooling examples using another classifier, pass the classifier object via the `--perceptor` argument:

```bash
python3 APT.py --perceptor=perceptor_object
```

Alternatively, check out the notebook in the `notebook` folder to see how to pass your custom classifier to the function; a minimal sketch of such a classifier object follows below.
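As a rough sketch (the exact interface APT expects is demonstrated in the notebook), any `torch.nn.Module` that maps an image batch to class logits can play the role of the perceptor; the model choice below is illustrative only:

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

# Any module mapping an image batch to class logits can serve as the perceptor.
# (Requires torchvision >= 0.13 for the weights API.)
perceptor = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval()

# Sanity check: expect a [1, 1000] logit tensor for a single 224x224 image.
with torch.no_grad():
    logits = perceptor(torch.randn(1, 3, 224, 224))
print(logits.shape)
```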
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Pretrained StyleGAN-XL models can be found below (pass the URL as `PATH_TO_NETWORK_PKL`):
| Dataset | Resolution | FID | PATH |
|---|---|---|---|
| ImageNet | 16² | 0.73 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet16.pkl |
| ImageNet | 32² | 1.11 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet32.pkl |
| ImageNet | 64² | 1.52 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet64.pkl |
| ImageNet | 128² | 1.77 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet128.pkl |
| ImageNet | 256² | 2.26 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet256.pkl |
| ImageNet | 512² | 2.42 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet512.pkl |
| ImageNet | 1024² | 2.51 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet1024.pkl |
| CIFAR10 | 32² | 1.85 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/cifar10.pkl |
| FFHQ | 256² | 2.19 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/ffhq256.pkl |
| FFHQ | 512² | 2.23 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/ffhq512.pkl |
| FFHQ | 1024² | 2.02 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/ffhq1024.pkl |
| Pokemon | 256² | 23.97 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/pokemon256.pkl |
| Pokemon | 512² | 23.82 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/pokemon512.pkl |
| Pokemon | 1024² | 25.47 | https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/pokemon1024.pkl |
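These checkpoints use the StyleGAN3/StyleGAN-XL pickle format, so loading one typically looks like the sketch below (using the `dnnlib` and `legacy` modules that ship with those codebases):

```python
import torch

import dnnlib
import legacy  # ships with the StyleGAN3 / StyleGAN-XL codebase

network_pkl = "https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet256.pkl"
device = torch.device("cuda")

# Download (or open locally) and unpickle the exponential-moving-average generator.
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)["G_ema"].to(device)
```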
If you find Adversarial Pivotal Tuning useful for your work, please cite:
```bibtex
@misc{christensen2022apt,
  doi       = {10.48550/ARXIV.2211.09782},
  url       = {https://arxiv.org/abs/2211.09782},
  author    = {Christensen, Peter Ebert and Snæbjarnarson, Vésteinn and Dittadi, Andrea and Belongie, Serge and Benaim, Sagie},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), Cryptography and Security (cs.CR), Machine Learning (cs.LG), FOS: Computer and information sciences},
  title     = {Assessing Neural Network Robustness via Adversarial Pivotal Tuning},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
