This repository contains the official implementation of the NeurIPS 2025 paper "Understanding and Improving Adversarial Robustness of Neural Probabilistic Circuits".
To create the required environment, run:
conda env create -f environment.yml
Then activate it with:
conda activate rnpc
Download the datasets from Google Drive – RNPC datasets and save them in the root directory of this repository.
Download the Adversarial-Attacks-PyTorch repository and save it under ./visat-models.
Some modifications are required to adapt the attacks for the attribute recognition model, which outputs predictions for multiple attributes. Specifically, replace the files pgd.py, pgdl2.py, and cwbs.py under adversarial-attacks-pytorch/torchattacks/attacks with the ones provided in our repository.
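The core change in these files is that the attack objective must aggregate the loss over all attribute heads instead of computing it for a single label. The snippet below is only an illustrative sketch of that idea in plain PyTorch, assuming the model returns one logit tensor per attribute and that labels have shape (batch, num_attributes); it is not the exact code shipped in the replacement files.

```python
import torch
import torch.nn.functional as F

def pgd_multi_attribute(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Illustrative PGD step for a model whose forward pass returns a list of
    per-attribute logit tensors. `labels` has shape (batch, num_attributes)."""
    images = images.clone().detach()
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        outputs = model(adv)  # list of logit tensors, one per attribute
        # Sum the cross-entropy losses over all attribute heads.
        loss = sum(F.cross_entropy(out, labels[:, i]) for i, out in enumerate(outputs))
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the clean images and the valid pixel range.
        adv = torch.clamp(images + torch.clamp(adv - images, -eps, eps), 0, 1).detach()
    return adv
```

The same loss-aggregation idea carries over to the L2 and CW variants of the attack.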
cd visat-models/scripts
To train the attribute recognition model, run:
./train_attr.bash
To train the probabilistic circuit, first use the repository of the LearnSPN paper to generate the structure of the probabilistic circuit, which is saved as a .txt file. Then run:
./train_spn.bash
To train the baseline models, run:
./train_reference.bash
You can also download the checkpoints of these models from Google Drive – RNPC_checkpoints.
To evaluate the benign & adversarial performance of different models on various datasets, run:
cd visat-models/scripts
./mnist_dim_3_min_3_noise_1.bash
./mnist_dim_5_min_5_noise_1.bash
./celeba_dim_8_min_4.bash
./gtsrbsub.bash
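Each script reports accuracy on both clean and attacked inputs. The loop below sketches that comparison in plain PyTorch under simplifying assumptions (a generic `attack` callable and single-label accuracy); the actual scripts additionally handle the multi-attribute outputs and dataset-specific settings.

```python
import torch

@torch.no_grad()
def _num_correct(logits, labels):
    # Count correctly classified samples in a batch.
    return (logits.argmax(dim=1) == labels).float().sum().item()

def evaluate(model, loader, attack, device="cuda"):
    """Compare accuracy on clean inputs vs. adversarial inputs crafted by `attack`."""
    model.eval()
    clean_correct = adv_correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        # Benign performance on unperturbed inputs.
        clean_correct += _num_correct(model(images), labels)
        # Adversarial performance: the attack needs gradients, so it runs outside no_grad.
        adv_images = attack(images, labels)
        adv_correct += _num_correct(model(adv_images), labels)
        total += labels.size(0)
    return clean_correct / total, adv_correct / total
```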
This code builds upon the repository of the paper "Neural Probabilistic Circuits: Enabling Compositional and Interpretable Predictions through Logical Reasoning".
