
CogNav: Cognitive Process Modeling for Object Goal Navigation with LLMs

This is the official implementation of our ICCV 2025 paper "CogNav: Cognitive Process Modeling for Object Goal Navigation with LLMs".

💡 Demo

Scene 1: *(demo GIF)*

Scene 2: *(demo GIF)*

You can also find more detailed demos at our Project Page.

💡 Method Overview

*(method overview figure)*

💡 Installation

The code has been tested only with Python 3.8 on Ubuntu 22.04.

1. Installing Dependencies

git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim
git checkout tags/challenge-2022
pip install -r requirements.txt
python setup.py install --headless

git clone https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
git checkout tags/challenge-2022
pip install -e .
  • Install PyTorch according to your system configuration. The code has been tested with PyTorch v2.3.1 and CUDA toolkit v11.8. If you are using conda:
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=11.8 -c pytorch -c nvidia  # (Linux with GPU)
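After installing the dependencies, it can help to confirm that the expected packages are importable before moving on. This is a minimal stdlib-only sketch (not part of the original instructions); the package names checked are assumptions based on the install steps above:

```python
import importlib.util
import sys

def check_env(pkgs=("torch", "torchvision", "habitat_sim")):
    """Report the running Python version and whether each expected
    package can be found on the import path (without importing it)."""
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
    return {name: importlib.util.find_spec(name) is not None for name in pkgs}

if __name__ == "__main__":
    for name, ok in check_env().items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```

If any package reports `MISSING`, revisit the corresponding install step before proceeding.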

2. Download HM3D datasets:

Download the HM3D (Habitat-Matterport 3D) dataset using the Habitat download utility:

python -m habitat_sim.utils.datasets_download --username <api-token-id> --password <api-token-secret> --uids hm3d_minival

Setup

Clone the repository and install other requirements:

git clone https://github.com/yhanCao/CogNav_ObjNav
cd CogNav_ObjNav/
pip install -r requirements.txt

Setting up datasets

The code requires the datasets to be placed in a data folder with the following layout (same as habitat-lab):

CogNav_ObjNav/
  data/
    scene_datasets/
    matterport_category_mappings.tsv
    object_norm_inv_perplexity.npy
    versioned_data/
    objectgoal_hm3d/
      train/
      val/
      val_mini/
For evaluation:

To evaluate the pre-trained model on a single scene:

python3 main.py -d Results/ --skip_times 0 --scenes '5cdEh9F2hJL'

For batch evaluation over multiple scenes:

bash run.sh
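The contents of run.sh are not shown here, but the single-scene invocation above generalizes naturally to a loop. A hypothetical Python wrapper that builds the same command line for each scene; the flag names and the scene ID come from the example command above, and `run_scenes` is an illustrative helper, not part of the repository:

```python
import subprocess

def build_cmd(scene_id, out_dir="Results/", skip_times=0):
    """Build the main.py invocation shown above for one scene."""
    return [
        "python3", "main.py",
        "-d", out_dir,
        "--skip_times", str(skip_times),
        "--scenes", scene_id,
    ]

def run_scenes(scene_ids):
    """Evaluate each scene sequentially (one main.py process per scene)."""
    for sid in scene_ids:
        subprocess.run(build_cmd(sid), check=True)

if __name__ == "__main__":
    run_scenes(["5cdEh9F2hJL"])  # example scene from the command above
```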

Citation

@article{cao2024cognav,
  title={CogNav: Cognitive Process Modeling for Object Goal Navigation with LLMs},
  author={Cao, Yihan and Zhang, Jiazhao and Yu, Zhinan and Liu, Shuzhen and Qin, Zheng and Zou, Qin and Du, Bo and Xu, Kai},
  journal={arXiv preprint arXiv:2412.10439},
  year={2024}
}
