Spatiotemporal-aware Neural Fields for Dynamic CT Reconstruction

Spatiotemporal-aware Neural Fields for Dynamic CT Reconstruction [Paper] [Project Page].

Qingyang Zhou, Yunfan Ye, Zhiping Cai.

demo

Abstract

We propose a dynamic Computed Tomography (CT) reconstruction framework called STNF4D (SpatioTemporal-aware Neural Fields). First, we represent the 4D scene using four orthogonal volumes and compress these volumes into more compact hash grids. Compared to the plane decomposition method, this approach enhances the model's capacity while keeping the representation compact and efficient. However, in densely predicted high-resolution dynamic CT scenes, the lack of constraints and hash conflicts in the hash grid features lead to obvious dot-like artifacts and blurring in the reconstructed images. To address these issues, we propose the Spatiotemporal Transformer (ST-Former), which guides the model in selecting and optimizing features by sensing the spatiotemporal information in different hash grids, significantly improving the quality of reconstructed images. We conducted experiments on medical and industrial datasets covering various motion types, sampling modes, and reconstruction resolutions. Experimental results show that our method outperforms the second-best by 5.99 dB and 4.27 dB in medical and industrial scenes, respectively.
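The hash-grid compression described above follows the multiresolution spatial-hashing idea popularized by Instant-NGP: dense grid vertices are mapped into a fixed-size feature table via a spatial hash, trading a small risk of hash collisions for a much more compact representation. The toy sketch below illustrates that lookup at a single resolution; all sizes, names, and the nearest-vertex (non-interpolated) lookup are illustrative assumptions, not the STNF4D implementation.

```python
# Toy sketch of a hash-grid feature lookup (Instant-NGP style).
# Illustrative only; not the STNF4D code.
import numpy as np

# Per-dimension primes commonly used for spatial hashing.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_index(coords, table_size):
    """Hash integer grid coordinates into indices of a fixed-size table."""
    coords = coords.astype(np.uint64)
    h = np.zeros(coords.shape[0], dtype=np.uint64)
    for d in range(coords.shape[1]):
        h ^= coords[:, d] * PRIMES[d]  # uint64 arithmetic wraps on overflow
    return (h % np.uint64(table_size)).astype(np.int64)

def lookup(points, table, resolution):
    """Nearest-vertex feature lookup for points in [0,1)^3 (interpolation omitted)."""
    grid = np.floor(points * resolution).astype(np.int64)
    idx = hash_index(grid, table.shape[0])
    return table[idx]

rng = np.random.default_rng(0)
table = rng.standard_normal((2**14, 2)).astype(np.float32)  # 2^14 entries, 2 features each
pts = rng.random((5, 3))
feats = lookup(pts, table, resolution=64)
print(feats.shape)  # (5, 2)
```

Because the table is much smaller than a dense 64^3 grid, distinct vertices can collide; in the paper this is one motivation for the ST-Former, which helps the model disambiguate conflicting features.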

Installation

# Create environment
conda create -n stnf4d python=3.9
conda activate stnf4d

# Install pytorch (hash encoder requires CUDA v11.3)
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

# Install other packages
pip install -r requirements.txt

In our tests, newer versions of PyTorch also work.

We suggest installing the TIGRE toolbox (version 2.3) to run traditional CT reconstruction methods and to synthesize your own CT data if you plan to do so. Please note that TIGRE v2.5 may get stuck when the CT volume is large.

# Download TIGRE
wget https://github.com/CERN/TIGRE/archive/refs/tags/v2.3.zip
unzip v2.3.zip
rm v2.3.zip

# Install TIGRE
cd TIGRE-2.3/Python/
python setup.py develop

Prepare Dataset

Download the 4D CT datasets from Google Drive or Baidu disk, then put them into the ./data folder as:

  |--data
      |--XCAT.pickle
      |--100_HM.pickle
      |--101_HM.pickle
      |--102_HM.pickle
      |--103_HM.pickle
      |--S01_004_256_60.pickle
      |--S02_005_256_60.pickle
      |--S04_009_256_60.pickle
      |--S08_700_256_60.pickle
      |--S12_021_256_60.pickle
      
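If you want to inspect what a downloaded .pickle file contains before training, a small helper like the hedged sketch below can print its top-level structure. The schema of the real files is defined by the repo's data-loading code; the dummy file and its keys here are purely illustrative.

```python
# Hedged sketch: inspect a dataset .pickle without assuming its schema.
# Demonstrated on a small dummy file; point `path` at e.g. ./data/XCAT.pickle.
import pickle
import numpy as np

def describe_pickle(path):
    """Return top-level keys mapped to array shapes (or type names) of a pickled dict."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    if isinstance(data, dict):
        return {k: getattr(v, "shape", type(v).__name__) for k, v in data.items()}
    return type(data).__name__

# Dummy example; the real files follow the repo's own schema.
dummy = {"image": np.zeros((8, 8, 8), dtype=np.float32), "numTrain": 60}
with open("dummy.pickle", "wb") as f:
    pickle.dump(dummy, f)
print(describe_pickle("dummy.pickle"))  # {'image': (8, 8, 8), 'numTrain': 'int'}
```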

Training

Run train.py to train the model. You can specify the configuration file using the --config argument.

Experiment settings are stored in the ./config folder.

For example, to train STNF4D on the XCAT dataset:

python train.py --config ./config/XCAT.yaml

Visualization/Evaluation

To visualize and evaluate the reconstructed 4D CT, run reconstruction.py.

For example, to visualize and evaluate the XCAT dataset:

python reconstruction.py --config ./config/XCAT.yaml

A dynamic GIF can be generated with makegif.py:

python makegif.py --expname XCAT --slicenum 109 --phasenum 10
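Conceptually, this step stacks one reconstructed slice across all phases into an animated GIF. The minimal sketch below shows that idea with Pillow; the function name, paths, and synthetic frames are illustrative assumptions, not the actual makegif.py logic.

```python
# Hedged sketch: turn a list of per-phase 2D slices into an animated GIF.
# Illustrative only; see makegif.py for the repo's actual implementation.
import numpy as np
from PIL import Image

def slices_to_gif(frames, out_path, duration_ms=100):
    """frames: list of 2D uint8 arrays, one per phase."""
    imgs = [Image.fromarray(f) for f in frames]
    imgs[0].save(out_path, save_all=True, append_images=imgs[1:],
                 duration=duration_ms, loop=0)

# e.g. 10 phases of a synthetic 64x64 slice with varying intensity
frames = [np.full((64, 64), 25 * p, dtype=np.uint8) for p in range(10)]
slices_to_gif(frames, "slice109.gif")
```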

Generate your own data

If you want to try more medical datasets, follow these instructions to create your own dataset.

If you want to try more industrial datasets, follow these instructions.

Citation

@InProceedings{Zhou_2025_AAAI,
  title={Spatiotemporal-Aware Neural Fields for Dynamic CT Reconstruction},
  author={Zhou, Qingyang and Ye, Yunfan and Cai, Zhiping},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={10},
  pages={10834--10842},
  year={2025}
}

Acknowledgement

torch-ngp

NAF

SAX-NeRF

TIGRE toolbox
