ManiUniCon is a comprehensive, multi-process robotics control framework designed for robotic manipulation tasks. It provides a unified interface for controlling various robot arms, integrating sensors, and executing policies in real-time.
- Modular Design: Plug-and-play architecture for seamless integration of new robot arms and algorithms
- Multi-Robot Support: Compatible with UR5, XArm6, Franka FR3/Panda, and easily extensible to other robotic arms
- One-Click Switching: Effortlessly transition between data collection and policy deployment modes
- Real-time Control: High-frequency control loops with shared memory architecture
- Algorithm Agnostic: Easy integration of new learning algorithms through standardized interfaces
- Sensor Integration: Support for RealSense cameras and other sensor modalities
- Visualization: 3D visualization support with Meshcat
- Flexible Configuration: Hydra-based configuration management for quick experiment setup
- Safety Features: Emergency stop, error handling, and reset functionality
- UR5: Universal Robots UR5 with RTDE interface
- XArm6: UFACTORY XArm6 collaborative robot
- FR3: Franka Research 3 robot with franky-control
- FRPanda: Franka Emika Panda robot with deoxys_control (note: the Panda is no longer supported and does not work with the latest libfranka)
- RealSense Cameras: Intel RealSense depth cameras
- Zed Cameras [WIP, not tested]: Zed series depth cameras
- Meta Quest Controllers: Meta Quest VR controllers for intuitive 6-DOF manipulation
- SpaceMouse: 3Dconnexion SpaceMouse for precise position and orientation control
- Keyboard: Basic keyboard control for simple teleoperation tasks
ManiUniCon uses a multi-process architecture with shared memory for efficient real-time communication between the sensor, policy, and robot-control processes.
- Python 3.10+
- CUDA-capable GPU (optional, for policy inference)
- Robot hardware or simulation environment
- Clone the repository:

```bash
git clone https://github.com/Universal-Control/ManiUniCon.git
cd ManiUniCon
git submodule update --init --recursive
```

- Create a conda environment:

```bash
conda create -n unicon python=3.10
conda activate unicon
```

- Install the package:

```bash
pip install -e .
pip install -e ./third_party/oculus_reader
```

For UR5 robot:

```bash
pip install -e '.[ur5]'
```

For XArm6 robot:

```bash
pip install -e '.[xarm]'
```

For FR3 robot with franky-control:

```bash
pip install -e '.[franky_fr3_franky]'
```

For FRPanda robot with deoxys_control, follow the installation guidance of deoxys_control on both the server and the client side.

For RealSense cameras:

```bash
pip install -e '.[realsense]'
```

For the Gello device, set up Gello following the instructions in its repository.
- Configure your setup by editing the configuration files in `configs/`:
  - `configs/robot/` - robot-specific configurations
  - `configs/policy/` - policy configurations
  - `configs/sensors/` - sensor configurations
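As a sketch of how these groups compose under Hydra (the group names come from the directories above, and `ur5`, `keyboard`, and `realsense_ur` appear in the run commands in this README, but the exact `defaults` list here is hypothetical):

```yaml
# Hypothetical top-level Hydra config; option names are illustrative only.
defaults:
  - robot: ur5
  - policy: keyboard
  - sensors: realsense_ur
```

Any of these entries can then be overridden from the command line, e.g. `robot=xarm6`.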
- Run the system with default configs:

```bash
python main.py
```

- Control the robot:
  - Press `h` to reset the robot to home position
- The system will automatically start all processes (sensors, policy, robot control)
Collect demonstration data for training:

- Start data collection with teleoperation:

```bash
# Using Quest controllers
python main.py robot=ur5 policy=quest policy.record_dir=./data/ur5_recording

# Using keyboard control
python main.py robot=ur5 policy=keyboard policy.record_dir=./data/ur5_recording

# Using SpaceMouse
python main.py robot=ur5 policy=spacemouse policy.record_dir=./data/ur5_recording
```

- Process collected data:

```bash
# Merge multiple episodes into a single zarr file
python tools/process_demo_data.py ./data/ur5_recording
```

Other example launches:

```bash
# XArm6 with RealSense cameras and SpaceMouse teleoperation
python main.py robot=xarm6 sensors=realsense_xarm policy=spacemouse

# UR5 with RealSense cameras and the ppt_rgb_simple policy
python main.py robot=ur5 sensors=realsense_ur policy=ppt_rgb_simple
```

Repository layout:

```
ManiUniCon/
├── maniunicon/          # Main package
│   ├── core/            # Core robot and control logic
│   ├── robot_interface/ # Robot-specific interfaces
│   ├── policies/        # Policy implementations
│   ├── sensors/         # Sensor interfaces
│   ├── customize/       # Custom wrappers and models
│   └── utils/           # Utility functions
├── configs/             # Configuration files
│   ├── robot/           # Robot configurations
│   ├── policy/          # Policy configurations
│   └── sensors/         # Sensor configurations
├── assets/              # Robot models and assets
├── tools/               # Utility scripts
└── third_party/         # Third-party packages
```
- Create a new robot interface in `maniunicon/robot_interface/` that inherits from the `RobotInterface` base class:

```python
from typing import Tuple

import numpy as np

from maniunicon.robot_interface.base import RobotInterface
from maniunicon.utils.shared_memory.shared_storage import RobotAction, RobotState


class MyRobotInterface(RobotInterface):
    def connect(self) -> bool:
        # Connect to the robot hardware
        pass

    def disconnect(self) -> bool:
        # Disconnect from the robot hardware
        pass

    def get_state(self) -> RobotState:
        # Get the current state of the robot
        pass

    def send_action(self, action: RobotAction) -> bool:
        # Send a control action to the robot
        pass

    def reset_to_init(self) -> bool:
        # Reset the robot to the initial configuration
        pass

    def forward_kinematics(
        self, joint_positions: np.ndarray
    ) -> Tuple[np.ndarray, np.ndarray]:
        # Compute forward kinematics (returns position and orientation)
        pass

    def inverse_kinematics(
        self,
        target_position: np.ndarray,
        target_orientation: np.ndarray,
        current_q: np.ndarray,
    ) -> np.ndarray:
        # Compute inverse kinematics
        pass

    def is_connected(self) -> bool:
        # Check if the robot is connected
        pass

    def is_error(self) -> bool:
        # Check if the robot is in an error state
        pass

    def clear_error(self) -> bool:
        # Clear any error state
        pass

    def stop(self) -> bool:
        # Emergency stop the robot
        pass
```

- Add configuration in `configs/robot/my_robot.yaml`
- Register the robot in the configuration defaults
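To make the `forward_kinematics` contract concrete, here is a toy stand-alone example (not part of ManiUniCon) for a planar 2-link arm with made-up link lengths, returning a (position, orientation) pair as the signature above suggests:

```python
import numpy as np

# Toy planar 2-link arm; link lengths are illustrative values.
LINK_1, LINK_2 = 0.4, 0.3  # meters


def forward_kinematics(q):
    """Return (position, orientation) for joint angles q = [q0, q1]."""
    x = LINK_1 * np.cos(q[0]) + LINK_2 * np.cos(q[0] + q[1])
    y = LINK_1 * np.sin(q[0]) + LINK_2 * np.sin(q[0] + q[1])
    yaw = q[0] + q[1]  # end-effector heading in the plane
    position = np.array([x, y, 0.0])
    orientation = np.array([0.0, 0.0, yaw])  # roll, pitch, yaw
    return position, orientation


pos, orn = forward_kinematics(np.array([0.0, 0.0]))
print(pos)  # fully extended along x: position is (0.7, 0, 0)
```

A real interface would delegate to the robot vendor's kinematics library or a model loaded from the URDF in `assets/`.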
For PyTorch-based policies, we recommend using the existing `TorchModelPolicy` class:

- Create custom wrappers in `maniunicon/customize/`:
  - Observation wrapper in `obs_wrapper/`: preprocesses sensor data for your model
  - Action wrapper in `act_wrapper/`: post-processes model outputs into robot actions

```python
# Example: maniunicon/customize/obs_wrapper/my_obs_wrapper.py
class MyObsWrapper:
    def __init__(self, shared_storage, device, **kwargs):
        self.shared_storage = shared_storage
        self.device = device

    def __call__(self, state, camera):
        # Process state and camera data into model input
        return obs_tensor


# Example: maniunicon/customize/act_wrapper/my_act_wrapper.py
class MyActWrapper:
    def __init__(self, **kwargs):
        pass

    def __call__(self, model_output, timestamp, start_timestamp, **kwargs):
        # Convert model output to robot actions
        return actions
```

- Configure your policy in `configs/policy/my_policy.yaml`:

```yaml
_target_: maniunicon.policies.torch_model.TorchModelPolicy
model:
  _target_: path.to.your.model
obs_wrapper:
  _target_: maniunicon.customize.obs_wrapper.my_obs_wrapper.MyObsWrapper
act_wrapper:
  _target_: maniunicon.customize.act_wrapper.my_act_wrapper.MyActWrapper
```

Note: only create a new policy class if `TorchModelPolicy` cannot meet your specific needs (e.g., non-PyTorch models, custom control loops):

```python
import multiprocessing as mp


# Custom policy class (only if needed)
class MyCustomPolicy(mp.Process):
    def __init__(self, shared_storage, reset_event, **kwargs):
        super().__init__()
        self.shared_storage = shared_storage
        self.reset_event = reset_event

    def run(self):
        # Implement custom policy logic
        pass
```

- Add configuration in `configs/policy/my_policy.yaml`
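As a filled-in sketch of the `__call__(state, camera)` contract above (the dictionary keys `joint_positions` and `color` are assumptions for illustration, not ManiUniCon's actual schema, and numpy stands in for torch to keep it self-contained):

```python
import numpy as np


class FlattenObsWrapper:
    """Hypothetical obs wrapper: concatenates joint positions with a
    normalized, flattened RGB image into one feature vector."""

    def __call__(self, state, camera):
        # NOTE: field names below are illustrative assumptions.
        joints = np.asarray(state["joint_positions"], dtype=np.float32)
        image = np.asarray(camera["color"], dtype=np.float32) / 255.0
        return np.concatenate([joints, image.ravel()])


wrapper = FlattenObsWrapper()
obs = wrapper(
    {"joint_positions": [0.0] * 6},                 # 6 joint angles
    {"color": np.zeros((2, 2, 3), dtype=np.uint8)}, # tiny dummy image
)
print(obs.shape)  # 6 joints + 2*2*3 pixels = (18,)
```

A real wrapper would read from the shared storage passed to `__init__`, resize the camera frames, and return a torch tensor on `self.device`.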
The `tools/` directory contains utility scripts for:
- Camera calibration: `calibration/` - tools for camera-robot calibration
- Data processing: `process_demo_data.py` - process demonstration data
- Recording: `replay_record_data.py` - replay and record robot data
- Visualization: `save_zarr_video.py` - save recorded data as video
- Hardware setup: `list_realsense_cameras.py` - list available RealSense cameras
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
For additional information and support:
- GitHub Issues - Report bugs or request features
- Discussions - Ask questions and share ideas
- Always ensure proper robot workspace setup
- Test in simulation before running on real hardware
- Keep emergency stop accessible
- Follow robot manufacturer's safety guidelines
- Monitor robot behavior during operation
This project is licensed under the MIT License - see the LICENSE file for details.
If you use ManiUniCon in your research, please cite:
```bibtex
@software{maniunicon2025,
  title={ManiUniCon: A Unified Control Interface for Robotic Manipulation},
  author={Zhu, Zhengbang and Liu, Minghuan and Han, Xiaoshen and Zhang, Zhengshen},
  year={2025},
  url={https://github.com/Universal-Control/ManiUniCon}
}
```

For more information or support, please open an issue on GitHub.