We propose learning a semantic-aware implicit representation (SAIR), i.e., we make the implicit representation of each pixel rely on both its appearance and its semantic information (e.g., which object the pixel belongs to). This work is published at ECCV 2024.
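As a minimal illustration (not the repo's exact architecture; every dimension and module below is an assumption), the idea is that the latent vector fed to the implicit decoder concatenates an appearance feature with a semantic feature for each query pixel:

```python
# Illustrative sketch of a semantic-aware implicit representation:
# the implicit decoder predicts RGB from the query coordinate plus BOTH an
# appearance feature and a semantic feature. All sizes are assumptions.
import torch
import torch.nn as nn

class SemanticAwareINR(nn.Module):
    def __init__(self, app_dim=64, sem_dim=32, hidden=256):
        super().__init__()
        # MLP decoder: (x, y) coordinate + appearance + semantics -> RGB
        self.decoder = nn.Sequential(
            nn.Linear(2 + app_dim + sem_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords, app_feat, sem_feat):
        # coords:   (B, N, 2) query coordinates in [-1, 1]
        # app_feat: (B, N, app_dim) appearance features at the queries
        # sem_feat: (B, N, sem_dim) semantic features (e.g., from a
        #           segmentation encoder) sampled at the same queries
        z = torch.cat([coords, app_feat, sem_feat], dim=-1)
        return self.decoder(z)  # (B, N, 3) RGB predictions

if __name__ == "__main__":
    model = SemanticAwareINR()
    rgb = model(torch.rand(1, 1024, 2) * 2 - 1,
                torch.randn(1, 1024, 64),
                torch.randn(1, 1024, 32))
    print(rgb.shape)  # torch.Size([1, 1024, 3])
```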
- Python 3
- PyTorch 1.6.0
- TensorboardX
- yaml, numpy, tqdm, imageio
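If you prefer a one-line install (the pip package names below are assumptions for the libraries listed above; only PyTorch's version is pinned):

```bash
pip install torch==1.6.0 tensorboardX pyyaml numpy tqdm imageio
```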
Run `mkdir load` to create a folder for the dataset directories.
- celebAHQ: `mkdir load/celebAHQ` and `cp scripts/resize.py load/celebAHQ/`, then `cd load/celebAHQ/`. Download and `unzip` `data1024x1024.zip` from the Google Drive link (provided by this repo). Run `python resize.py` to get the image folders `256/`, `128/`, `64/`, `32/`. Download `split.json`. The full sequence is shown below.
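The steps above as one shell sequence (the `data1024x1024.zip` and `split.json` downloads from Google Drive are manual, so they appear as a comment):

```bash
mkdir -p load/celebAHQ
cp scripts/resize.py load/celebAHQ/
cd load/celebAHQ/
# manually download data1024x1024.zip and split.json from the Google Drive link
unzip data1024x1024.zip
python resize.py  # produces 256/, 128/, 64/, 32/
```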
0. Preliminaries
- For `train_liif.py` or `test.py`, use `--gpu [GPU]` to specify the GPUs (e.g. `--gpu 0` or `--gpu 0,1`).
- For `train_liif.py`, by default, the save folder is at `save/_[CONFIG_NAME]`. We can use `--name` to specify a name if needed; see the example after this list.
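For example, a training run that combines both flags (the run name is illustrative):

```bash
python train_liif.py --config configs/train-celebAHQ/[CONFIG_NAME].yaml \
    --gpu 0,1 --name my-celebAHQ-run
```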
1. celebAHQ experiments
Train: `python train_liif.py --config configs/train-celebAHQ/[CONFIG_NAME].yaml`.
Test: `python test.py --config configs/test/test-celebAHQ-32-256.yaml --model [MODEL_PATH]` (or `test-celebAHQ-64-128.yaml` for the other task). We use `epoch-best.pth` in the corresponding save folder.
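For example, testing with the default save-folder layout from the preliminaries (`[CONFIG_NAME]` stands for whichever training config was used):

```bash
python test.py --config configs/test/test-celebAHQ-32-256.yaml \
    --model save/_[CONFIG_NAME]/epoch-best.pth --gpu 0
```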
@inproceedings{zhang2025sair,
title={{SAIR}: Learning Semantic-aware Implicit Representation},
author={Zhang, Canyu and Li, Xiaoguang and Guo, Qing and Wang, Song},
booktitle={European Conference on Computer Vision},
pages={319--335},
year={2025},
organization={Springer}
}