Jiaxin Li, Weiqi Huang, Zan Wang, Wei Liang, Huijun Di, Feng Liu
Welcome to FloNa! This repository is the official implementation of paper "FloNa: Floor Plan Guided Embodied Visual Navigation".
a. Create a new conda environment:
git clone https://github.com/GauleeJX/flodiff.git
cd flodiff
conda env create -n flodiff -f environment.yaml
conda activate flodiff
pip install -e .
b. Install the diffusion_policy:
cd /path/to/flodiff
git clone https://github.com/real-stanford/diffusion_policy.git
pip install -e diffusion_policy/
a. Download our pre-collected dataset:
We collected navigation episodes on Gibson static scenes for training and evaluation. The structure of the episode data is organized as follows:
|--<train>
| |--<scene_0>
| |-----floor_plan.png # the floor plan of the scene
| |-----<traj0> # one episode in the scene
| |----------traj_0.npy # a .npy file saving the ground-truth 2D position and 2D orientation for each frame
| |----------traj_0.txt # a .txt file saving the ground-truth 2D position and 2D orientation for each frame
| |----------traj_0.png # a .png file showing the trajectory on the floor plan
| |----------obs_0.png # frame 0
| |----------obs_1.png # frame 1
| |----------...
| |-----<traj1>
| |----------traj_1.npy
| |----------traj_1.txt
| |----------traj_1.png
| |----------obs_0.png
| |----------obs_1.png
| |----------...
| |-----...
| |--<scene_1>
| |-----...
|--<test>
| |...
You can download it from BaiduDisk. Please note that the dataset is approximately 500 GB, so make sure you have sufficient disk space.
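To sanity-check a downloaded episode, the per-frame pose file can be inspected with NumPy. The snippet below is a minimal sketch that builds a synthetic stand-in for a `traj_0.npy` file; the assumed column layout (one row per frame, columns `[x, y, theta]`) is an illustration, not the dataset's documented format.

```python
import numpy as np

# Hypothetical stand-in for a traj_0.npy episode file. The real file stores
# ground-truth 2D position and 2D orientation per frame; the exact column
# layout assumed here ([x, y, theta]) may differ in the released dataset.
num_frames = 5
traj = np.stack([
    np.linspace(0.0, 2.0, num_frames),  # x position per frame
    np.linspace(0.0, 1.0, num_frames),  # y position per frame
    np.zeros(num_frames),               # orientation per frame (radians)
], axis=1)
np.save("traj_0.npy", traj)

# Loading mirrors what an episode dataloader would do.
loaded = np.load("traj_0.npy")
print(loaded.shape)  # → (5, 3): one row per frame
```

Checking `loaded.shape[0]` against the number of `obs_*.png` frames in the same episode directory is a quick way to verify an episode unpacked correctly.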
b. Unpack this tar archive:
cat /path/to/dataset/dataset.tar_* > /path/to/dataset/dataset.tar
tar -xvf dataset.tar -C /path/to/flodiff/datasets
mkdir /path/to/flodiff/datasets/trav_maps
unzip trav_maps.zip -d /path/to/flodiff/datasets/trav_maps
To train your own model, simply run the following command:
cd /path/to/flodiff/training
python train.py
The model is saved to /training/log/flona by default. If you need to change the save location, modify the flona.yaml configuration file.
Note: This code is designed for training on a single GPU. If you wish to use multiple GPUs, you'll need to modify the code accordingly.
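One common way to adapt a single-GPU PyTorch training script to multiple GPUs is to wrap the model in `torch.nn.DataParallel`. The sketch below uses a hypothetical stand-in model, not the actual FloNa network from train.py, and simply shows the wrapping pattern; `DistributedDataParallel` is the more scalable alternative but requires launcher changes.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the navigation policy network defined in train.py.
model = nn.Linear(16, 4)

# Wrap for multi-GPU data parallelism only when more than one GPU is visible;
# on a single GPU or CPU the model is used as-is.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()
elif torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(8, 16)  # a dummy batch of 8 samples
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)
print(tuple(out.shape))  # → (8, 4)
```

With `DataParallel`, each batch is split across the visible GPUs and the outputs are gathered back on the default device, so the rest of the training loop needs no changes.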
a. Install iGibson:
cd /path/to/flodiff
git clone https://github.com/StanfordVL/iGibson --recursive
cd iGibson
pip install -e . # This step takes about 4 minutes
b. Download scenes:
Please refer to this link to download the Gibson static scenes. We recommend saving the downloaded scenes in the following directory: /path/to/flodiff/iGibson/igibson/data/g_dataset.
c. Testing:
mkdir /path/to/flodiff/iGibson/igibson/scripts
mv /path/to/flodiff/test_flona_gtpos.py /path/to/flodiff/iGibson/igibson/scripts/
cd /path/to/flodiff/iGibson
python -m igibson.scripts.test_flona_gtpos
For evaluating results, run:
python /path/to/flodiff/results/evaluate.py
If you find our work helpful, please cite:
@inproceedings{li2025flona,
title={FloNa: Floor Plan Guided Embodied Visual Navigation},
author={Li, Jiaxin and Huang, Weiqi and Wang, Zan and Liang, Wei and Di, Huijun and Liu, Feng},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2025}
}
This project is licensed under the MIT License.
