Zeshun Li<sup>1*</sup> · Fuhao Li<sup>1*</sup> · Wanting Zhang<sup>1*</sup>
Zijie Zheng · Xueping Liu<sup>1</sup> · Tao Zhang<sup>2</sup> · Yongjin Liu<sup>1</sup> · Long Zeng<sup>1†</sup>
<sup>1</sup>THU <sup>2</sup>PUDU Tech
<sup>*</sup>equal contribution <sup>†</sup>corresponding author
THUD++ comprises three primary components: an RGB-D dataset, a pedestrian trajectory dataset, and a robot navigation emulator. By sharing this dataset, we aim to accelerate the development and testing of mobile robot algorithms and contribute to real-world robotic applications.
For details, please refer to THUD_Dataset_Overview.
```
git clone https://github.com/jackyzengl/THUD-plus-plus.git
conda create -n thud++ python=3.8
conda activate thud++
pip install -r requirements.txt
```

The trajectory dataset is constructed in the same way as ETH/UCY. Each row in the dataset is recorded as frameID, pedID, x, y. The data file for each scene is named after the scene and its world-coordinate range.
```
Dataset
├── eth
│ ├── train
│ ├── val
│ ├── test_eth
│ ├── test_gym
│ ├── test_hotel
│ ├── test_office
│ ├── test_supermarket
├── gym
│ ├── gym_x[-7,7]_y[-14.4,14.8].txt
├── office
│ ├── office_x[-43.6,-35.5]_y[0.25,17].txt
├── supermarket
│ ├── supermarket_x[-26,-3]_y[-8,8].txt
```
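A scene file in this format can be loaded with a few lines of Python. The helper below is our illustration (the function names are not part of the repository); it groups rows by pedestrian ID, assuming whitespace- or comma-delimited `frameID, pedID, x, y` rows as described above.

```python
from collections import defaultdict

def parse_trajectories(lines):
    """Group ETH/UCY-style rows (frameID, pedID, x, y) by pedestrian ID.

    Returns {pedID: [(frameID, x, y), ...]}, each track sorted by frame.
    """
    tracks = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # Rows may be tab-, space-, or comma-separated.
        frame, ped, x, y = line.replace(",", " ").split()
        tracks[int(float(ped))].append((int(float(frame)), float(x), float(y)))
    for ped in tracks:
        tracks[ped].sort()  # order each track by frame
    return dict(tracks)

def load_trajectories(path):
    """Convenience wrapper: parse one scene file from disk."""
    with open(path) as f:
        return parse_trajectories(f)
```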
```
cd traj_pred/tools/FLA/sgan/scripts && python evaluate_model.py
cd traj_pred/tools/FLA/pecnet/scripts && python test_pretrained_model.py
cd traj_pred/tools/FLA/stgcnn && python test.py
```

- Install Unity (recommended version ≥ 2022.3.42f1).
- Download the emulator Unity project from Google Drive and open it in Unity.
- Construct customized indoor scenes according to your experimental requirements (rooms, obstacles, pedestrians, robots, etc.).
- Configure the number of dynamic pedestrians and robots in the scene, and bind the robot controller to communicate with your navigation algorithm via TCP (see below).
```
cd navigation
conda env create -f environment.yml
cd Python-RVO2
python setup.py build
python setup.py install
```

We use ORCA as an example to illustrate how the navigation algorithm communicates with Unity.
- In Python, the script `navigation/tcp_server.py` starts a TCP server (default `HOST=0.0.0.0`, `PORT=11311`) and loads the ORCA policy:
  - Unity sends the robot state and pedestrian states at each simulation step.
  - The server computes the robot velocity and sends `(vx, vy)` back to Unity.
- Run the TCP server:

  ```
  cd navigation
  python tcp_server.py
  ```

- (Optional) If the navigation algorithm and Unity run on different machines, set up SSH port forwarding so that Unity can reach the remote Python server. For example:

  ```
  ssh -L 11311:localhost:11311 root@<remote_ip> -p <ssh_port>
  ```

- In Unity, modify the IP address and port in `Emulator_UnityProject/Assets/Scripts/Robot/RobotManager.cs` so that they match the settings in `tcp_server.py`.
The data format sent from Unity to Python is:
```
robot_pos_x,robot_pos_y & robot_vel_x,robot_vel_y & robot_target_x,robot_target_y &
people1_pos_x,people1_pos_y & people1_vel_x,people1_vel_y & people2_pos_x,people2_pos_y & people2_vel_x,people2_vel_y & ...
```
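A message in this format can be decoded as a minimal sketch like the one below (the function name `parse_unity_message` is ours, not from the repository). It assumes the three robot fields come first, followed by alternating pedestrian position/velocity pairs, all joined by `&`:

```python
def parse_unity_message(msg):
    """Decode 'x,y & x,y & ...' into robot state and pedestrian states.

    Field order (per the format above): robot position, robot velocity,
    robot target, then alternating (position, velocity) per pedestrian.
    """
    fields = [f.strip() for f in msg.split("&") if f.strip()]
    pairs = [tuple(float(v) for v in f.split(",")) for f in fields]
    robot = {"pos": pairs[0], "vel": pairs[1], "target": pairs[2]}
    peds = [{"pos": pairs[i], "vel": pairs[i + 1]}
            for i in range(3, len(pairs), 2)]
    return robot, peds
```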
- In Unity, select the scene to be tested, open the "ROS" tab in the menu, and click "Start Connection" to start communication with the navigation algorithm. Press the Tab key to show or hide the control menu.
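The server side of this exchange can be sketched with Python's standard `socket` module. This is a stand-in for illustration only, not the repository's `tcp_server.py`: it accepts one Unity connection and answers each state message with whatever velocity string the supplied `handle` callback (our placeholder for the navigation policy) returns.

```python
import socket
import threading

def serve_once(handle, host="127.0.0.1"):
    """Serve a single client on an ephemeral port; return the bound port.

    For each message received, reply with handle(message). `handle` stands
    in for the navigation policy that maps a state string to "vx,vy".
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def loop():
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:     # client disconnected
                    break
                conn.sendall(handle(data.decode()).encode())
        srv.close()

    threading.Thread(target=loop, daemon=True).start()
    return port
```

In the real setup the same loop would run on `HOST=0.0.0.0`, `PORT=11311` with ORCA as the handler.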
By modifying the policy in tcp_server.py (e.g., replacing ORCA with other policies or RL models), you can plug in and evaluate different navigation algorithms in the same emulator.
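As a sketch of that plug-in point, a replacement policy only needs to map the decoded state to a velocity command. The toy goal-seeking policy below is our illustration of the expected interface, not the shipped ORCA implementation; it assumes the robot position, target, and pedestrian list have already been parsed from the message.

```python
import math

def goto_goal_policy(robot_pos, robot_target, pedestrians, max_speed=1.0):
    """Toy policy: head straight for the target, capped at max_speed.

    Returns the (vx, vy) command the TCP server would send back to Unity.
    A real replacement (ORCA, an RL model, ...) would also use `pedestrians`
    for collision avoidance.
    """
    dx = robot_target[0] - robot_pos[0]
    dy = robot_target[1] - robot_pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return (0.0, 0.0)  # already at the goal
    scale = min(max_speed, dist) / dist  # slow down near the goal
    return (dx * scale, dy * scale)
```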
If you find this project useful, please consider citing:
@article{li2024thud++,
title={THUD++: Large-Scale Dynamic Indoor Scene Dataset and Benchmark for Mobile Robots},
author={Li, Zeshun and Li, Fuhao and Zhang, Wanting and Zheng, Zijie and Liu, Xueping and Liu, Yongjin and Zeng, Long},
journal={arXiv preprint arXiv:2412.08096},
year={2024}
}
@inproceedings{zhengdemonstrating,
title={Demonstrating DVS: Dynamic Virtual-Real Simulation Platform for Mobile Robotic Tasks},
author={Zheng, Zijie and Li, Zeshun and Wang, Yunpeng and Xie, Qinghongbing and Zeng, Long},
booktitle={Robotics: Science and Systems},
year={2025}
}
@inproceedings{tang2024mobile,
title={Mobile robot oriented large-scale indoor dataset for dynamic scene understanding},
author={Tang, Yi-Fan and Tai, Cong and Chen, Fang-Xing and Zhang, Wan-Ting and Zhang, Tao and Liu, Xue-Ping and Liu, Yong-Jin and Zeng, Long},
booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
pages={613--620},
year={2024},
organization={IEEE}
}