
[SIGGRAPH ASIA 2025] Hierarchical Neural Semantic Representation for 3D Semantic Correspondence

Keyu Du¹, Jingyu Hu²†, Haipeng Li¹, Hao Xu², Haibin Huang³, Chi-Wing Fu², Shuaicheng Liu¹

¹University of Electronic Science and Technology of China (UESTC) ²The Chinese University of Hong Kong (CUHK) ³TeleAI

† Corresponding author

This is the official implementation of our SIGGRAPH ASIA 2025 paper, Hierarchical Neural Semantic Representation for 3D Semantic Correspondence.

💡 Introduction

This paper presents a new approach to estimating accurate and robust 3D semantic correspondences using a hierarchical neural semantic representation.

Our work has three key contributions:

  1. Hierarchical Neural Semantic Representation (HNSR): We design a representation that consists of a global semantic feature to capture high-level structure and multi-resolution local geometric features to preserve fine details, by carefully harnessing 3D priors from pre-trained 3D generative models.

  2. Progressive Global-to-Local Matching Strategy: We design a strategy that establishes coarse semantic correspondence using the global semantic feature, then iteratively refines it with local geometric features, yielding accurate and semantically-consistent mappings (see the sketch after this list).

  3. Training-Free Framework: Our framework is training-free and broadly compatible with various pre-trained 3D generative backbones, demonstrating strong generalization across diverse shape categories.
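
Below is a minimal, hypothetical sketch of the global-to-local matching idea. The numpy arrays, the cosine-similarity nearest-neighbor matcher, and the k-nearest-neighbor candidate narrowing are our simplifications for illustration, not the paper's exact algorithm; per-point features are assumed to be already extracted.

    # A minimal sketch of progressive global-to-local matching (illustrative,
    # not the paper's implementation). Inputs are assumed pre-extracted:
    # one global semantic feature per point, plus coarse-to-fine local features.
    import numpy as np

    def nn_match(src_feat, tgt_feat, cand=None):
        """Cosine-similarity nearest neighbor; optionally restricted to
        per-source candidate target indices (cand: (N_src, k) int array)."""
        src = src_feat / np.linalg.norm(src_feat, axis=1, keepdims=True)
        tgt = tgt_feat / np.linalg.norm(tgt_feat, axis=1, keepdims=True)
        sim = src @ tgt.T                                  # (N_src, N_tgt)
        if cand is not None:                               # mask non-candidates
            masked = np.full_like(sim, -np.inf)
            rows = np.arange(sim.shape[0])[:, None]
            masked[rows, cand] = sim[rows, cand]
            sim = masked
        return sim.argmax(axis=1)                          # best target per source

    def global_to_local_match(src_glob, tgt_glob, src_locals, tgt_locals, tgt_xyz, k=32):
        """Coarse match with global semantics, then refine with local geometry."""
        match = nn_match(src_glob, tgt_glob)               # coarse correspondence
        for src_loc, tgt_loc in zip(src_locals, tgt_locals):
            # Restrict each source point's search to the k target points nearest
            # (in 3D) to its current match, then re-match with finer features.
            d = np.linalg.norm(tgt_xyz[match][:, None] - tgt_xyz[None], axis=-1)
            cand = d.argsort(axis=1)[:, :k]
            match = nn_match(src_loc, tgt_loc, cand=cand)
        return match

The point of the sketch: the global feature pins down the semantic region, and each finer local feature only re-ranks targets near the current match.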

Our method also supports various applications, such as shape co-segmentation, keypoint matching, and texture transfer, and generalizes well to structurally diverse shapes, with promising results even in cross-category scenarios. Both qualitative and quantitative evaluations show that our method outperforms previous state-of-the-art techniques.

Teaser Image

📌 To-Do List

  • Release NSR-PyTorch code with pretrained model, environment setup guide, dataset preparation, and batch testing scripts
  • Batch testing script for Objaverse dense matching
  • Batch testing script for ShapeNet co-segmentation
  • Single testing script for source-to-target semantic matching
  • Demo code for other backbones: NSR_LAS_PyTorch.
  • Demo code for other backbones: NSR_SDF_PyTorch.
  • Texture transfer testing script
  • Keypoint matching testing script for SHREC'19 & SHREC'20

This repository is still under active development.
If you have any questions or issues, feel free to contact me by email.

🚀 Installation

Prerequisites

  • System: The code is currently tested only on Linux.
  • Hardware: An NVIDIA GPU with at least 24GB of memory is required. The code has been verified on a Tesla P40.
  • Software:
    • The CUDA Toolkit is needed to compile certain submodules. The code has been tested with CUDA versions 11.8 and 12.2.
    • Conda is recommended for managing dependencies.
    • Python version 3.8 or higher is required.

Installation Steps

  1. Clone the repository:

    git clone --recurse-submodules https://github.com/mapledky/NSR_PyTorch.git
    cd NSR_PyTorch
  2. Install the dependencies:

    Important notes before running the installation command:

    • By adding --new-env, a new conda environment named nsr will be created. If you want to use an existing conda environment, please remove this flag.
    • By default, the nsr environment will use PyTorch 2.4.0 with CUDA 11.8. If you want to use a different version of CUDA (e.g., if you have CUDA Toolkit 12.2 installed and do not want to install another 11.8 version for submodule compilation), you can remove the --new-env flag and manually install the required dependencies. Refer to PyTorch for the installation command.
    • If you have multiple CUDA Toolkit versions installed, PATH should be set to the correct version before running the command. For example, if you have CUDA Toolkit 11.8 and 12.2 installed, you should run export PATH=/usr/local/cuda-11.8/bin:$PATH before running the command.
    • By default, the code uses the flash-attn backend for attention. For GPUs that do not support flash-attn (e.g., NVIDIA V100), you can remove the --flash-attn flag to install xformers only and set the ATTN_BACKEND environment variable to xformers before running the code (see the example after the options list below).
    • The installation may take a while due to the large number of dependencies. Please be patient. If you encounter any issues, you can try to install the dependencies one by one, specifying one flag at a time.

    Create a new conda environment named nsr and install the dependencies:

    . ./setup.sh --new-env --basic --xformers --flash-attn --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast

    Detailed usage of setup.sh:

    . ./setup.sh --help

    Available options:

    Usage: setup.sh [OPTIONS]
    Options:
        -h, --help              Display this help message
        --new-env               Create a new conda environment
        --basic                 Install basic dependencies
        --train                 Install training dependencies
        --xformers              Install xformers
        --flash-attn            Install flash-attn
        --diffoctreerast        Install diffoctreerast
        --spconv                Install spconv
        --mipgaussian           Install mip-splatting
        --kaolin                Install kaolin
        --nvdiffrast            Install nvdiffrast
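
For example, if your GPU lacks flash-attn support, the attention backend can be selected at run time by setting the environment variable before the TRELLIS modules are imported. A minimal sketch; the trellis import path follows the upstream microsoft/TRELLIS repository:

    import os
    os.environ["ATTN_BACKEND"] = "xformers"   # fall back from flash-attn to xformers

    # Import TRELLIS modules only after the backend is set, so the
    # choice takes effect (import path as in upstream microsoft/TRELLIS).
    from trellis.pipelines import TrellisImageTo3DPipeline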
    

🤖 Pretrained Models

We provide the following pretrained model, following the official microsoft/TRELLIS release:

| Model | Description | #Params | Download |
| --- | --- | --- | --- |
| TRELLIS-image-large | Large image-to-3D model | 1.2B | Download |

Note: All VAEs are included in the TRELLIS-image-large model repository.

The models are hosted on Hugging Face. You can directly load the models with their repository names in the code:

TrellisImageTo3DPipeline.from_pretrained("microsoft/TRELLIS-image-large")

If you prefer loading the model from local storage, you can download the model files from the links above and load the model with the folder path (folder structure should be maintained):

TrellisImageTo3DPipeline.from_pretrained("pretrained_models/TRELLIS-image-large")
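
If you want to script the download itself, one option is huggingface_hub's snapshot_download, which mirrors the repo into a local folder while preserving its structure. A sketch; the local folder name simply follows the example above:

    from huggingface_hub import snapshot_download
    from trellis.pipelines import TrellisImageTo3DPipeline  # import path from upstream TRELLIS

    # Mirror the Hugging Face repo to disk, then load the pipeline locally.
    local_dir = snapshot_download(
        repo_id="microsoft/TRELLIS-image-large",
        local_dir="pretrained_models/TRELLIS-image-large",
    )
    pipeline = TrellisImageTo3DPipeline.from_pretrained(local_dir)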

📚 Dataset

We provide multiple datasets and their download and preprocessing methods.

Objaverse(XL)

Objaverse(XL) is a large-scale dataset containing diverse 3D assets. You can download the dataset from the official Objaverse(XL) website. Put the downloaded dataset into the dataset directory.

Use the following command to batch-render the data. You should pre-download Blender and set the Blender path in render_objaverse.py. Arrange your testing cases in the same directory and set down_list to your testing directory.

python dataset_toolkits/render_objaverse.py

ShapeNetCore

ShapeNetCore is a subset of the full ShapeNet dataset containing single, clean 3D models with manually verified categories. You can download the dataset from the official ShapeNetCore website. Put the downloaded dataset into the dataset directory.

Use the following command to batch-render the data. You should pre-download Blender and set the Blender path in render_shapenet.py. List your testing cases in a txt file and set filter_file to that file.

python dataset_toolkits/render_shapenet.py

Use the following command to generate ground truth for ShapeNetCore. You should pre-download PartNet and set the dataset path in process_shapenet_gt.py. List your testing cases in a txt file and set filter_file to that file.

python dataset_toolkits/process_shapenet_gt.py

📖 Usage

Objaverse(XL)

The batch testing script objaverse_dense demonstrates how to use NSR to compute dense matching between Objaverse(XL) 3D models.

Use the following command to perform dense matching between source and target models.
First, download the TRELLIS checkpoint, update the checkpoint root directory, and set test_file and test_id to your source model and directory.
The encoder weights are included in the TRELLIS checkpoint and can be loaded with:

encoder = models.from_pretrained("pretrained_models/TRELLIS-image-large/ckpts/ss_enc_conv3d_16l8_fp16").eval().cuda()

The Python script lets you freely adjust the timestep and layer hyperparameters. You can set extract_t and extract_l to None to batch-test all combinations.
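
As a purely hypothetical illustration of these knobs (the actual names, types, and defaults live inside objaverse_dense.py), the hyperparameters are plain variables in the script:

    # Hypothetical values for the knobs described above; check
    # objaverse_dense.py for the real defaults and value ranges.
    extract_t = 0.6   # diffusion timestep at which features are extracted; None sweeps all
    extract_l = 8     # backbone layer from which features are taken; None sweeps all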

Keypoint matching results can be found in the output directory and are visualized as colored .ply files.

python objaverse_cor/objaverse_dense.py

ShapeNetCore

The batch testing script shapenet_part demonstrates how to use NSR to compute co-segmentation between ShapeNetCore 3D models.

Use the following command to perform co-segmentation between source and target models. Note that you should pre-download the TRELLIS checkpoint, update the checkpoint root directory in the file, and set test_file and test_id to your source model and directory. Unlike dense matching, you must also provide a part segmentation of the source vertices as a .json file and point test_label_path to it (a hypothetical example follows).
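
The README does not pin down the label-file schema, so the snippet below is purely hypothetical: it only illustrates the kind of per-vertex part labels the script reads from test_label_path; adapt it to the actual format expected by shapenet_part.py.

    import json

    # Hypothetical: one integer part id per source vertex (the real schema
    # is defined by shapenet_part.py).
    part_labels = [0, 0, 0, 1, 1, 2, 2, 2]   # e.g., 0 = seat, 1 = back, 2 = legs
    with open("source_parts.json", "w") as f:
        json.dump(part_labels, f)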

The Python script lets you freely adjust the timestep and layer hyperparameters. You can set extract_t and extract_l to None to batch-test all combinations.

Co-segmentation results can be found in the output directory and are visualized as colored .ply files.

python shapenet_part/shapenet_part.py

Single Semantic Matching

The script semantic_match demonstrates how to use NSR for semantic matching between a single source-target pair.

Run the following command to perform semantic matching between the source and target models. Note that you should pre-download the TRELLIS checkpoint, update the checkpoint root directory in the file, and set the source and target model paths. Specify the source semantic part to transfer via point_choose.

Semantic matching results can be found in the output directory and are visualized as colored .ply files.

python semantic_match/semantic_s2t.py

🔧 Additional Applications

Once you obtain the dense correspondence results, you can leverage these mappings to create fine-grained correspondences from source to target models. This enables various downstream applications:

  • Keypoint Matching: Establish precise keypoint correspondences between 3D models
  • Texture Transfer: Transfer textures between models using dense or part-based mappings (see the sketch after this list)
  • Shape Co-segmentation: Perform semantic segmentation across different 3D shapes
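
As a small, hypothetical illustration of how a dense match can drive such applications (the function and array names are ours, not the repository's API), per-vertex colors can be pulled across the correspondence like this:

    import numpy as np

    def transfer_colors(match, src_colors, n_tgt):
        """Push per-vertex colors from source to target along a dense match.

        match:      (N_src,) int array, match[i] = target vertex of source vertex i
        src_colors: (N_src, 3) RGB colors on the source mesh
        n_tgt:      number of target vertices
        """
        tgt_colors = np.zeros((n_tgt, 3))
        counts = np.zeros(n_tgt)
        np.add.at(tgt_colors, match, src_colors)   # accumulate colors per target vertex
        np.add.at(counts, match, 1)                # how many sources hit each target
        hit = counts > 0
        tgt_colors[hit] /= counts[hit][:, None]    # average where several sources map
        return tgt_colors, hit                     # unmatched vertices left for infill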

📬 Feedback

If you have any questions, please open an issue or contact [email protected].

📜 Citation

If you find this work helpful, please consider citing our paper:

@inproceedings{du2025hnsr,
    title     = {Hierarchical Neural Semantic Representation for 3D Semantic Correspondence},
    author    = {Keyu Du and Jingyu Hu and Haipeng Li and Hao Xu and Haibin Huang and Chi-Wing Fu and Shuaicheng Liu},
    booktitle = {Proceedings of the SIGGRAPH Asia 2025 Conference Papers},
    publisher = {Association for Computing Machinery},
    year      = {2025},
    isbn      = {9798400721373},
    doi       = {10.1145/3757377.3763921},
    url       = {https://doi.org/10.1145/3757377.3763921}
}

🌟 Acknowledgements
