
Re3Sim: Generating High-Fidelity Simulation Data via 3D-Photorealistic Real-to-Sim for Robotic Manipulation

arXiv | Website | HF Data | Python 3.10

Xiaoshen Han   Minghuan Liu^   Yilun Chen^†   Junqiu Yu   Xiaoyang Lyu   Yang Tian  
Bolun Wang   Weinan Zhang   Jiangmiao Pang†  

^Project lead  †Corresponding author


Shanghai Jiao Tong University   Shanghai AI Lab  The University of Hong Kong


πŸ“‹ Contents

🏠 Real-to-Sim

Installation

Install the Simulator Environment

We provide a Dockerfile for setting up the simulator environment. Build the image:

docker build -t re3sim:1.0.0 .

After the build completes, run the container:

docker run --name re3sim --entrypoint bash -itd --runtime=nvidia --gpus='"device=0"' -e "ACCEPT_EULA=Y" --rm --network=bridge --shm-size="32g" -e "PRIVACY_CONSENT=Y" \
    -v /path/to/resources:/root/resources:rw \
    re3sim:1.0.0

Install IsaacLab v1.1.0

# in docker
cd /root/IsaacLab
./isaaclab.sh --install none

Install CUDA 11.8 in the Docker container. Then, install diff-gaussian-rasterization and simple-knn:

./cuda_11.8.0_520.61.05_linux.run --silent --toolkit
pip install src/gaussian_splatting/submodules/diff-gaussian-rasterization/
pip install src/gaussian_splatting/submodules/simple-knn/

Install OpenMVS

To reconstruct the scene geometry, install OpenMVS inside the Docker container by following the instructions in the OpenMVS Wiki, and add its binaries to the PATH.

After that, we recommend saving the Docker image with docker commit.

The code inside Docker may run into network issues. You can either set up the environment on your local machine by following this instruction, or uncomment this line in the yaml files:

franka_usd_path: /path/to/Re3Sim/re3sim/Collected_franka/franka.usd

Getting Started

Real-to-Sim in Predefined Scene

We provide the necessary resources here. Download them and place them according to the following layout:

# in docker
/isaac-sim/
    - src
        - data/
            - align/
            - gs-data/
            - items/
            - urdfs/
            - usd/
  • Collect data in the simulator (pick-and-place task):
# in /isaac-sim/src
python standalone/clean_example/pick_into_basket_lmdb.py
  • Visualize the Data

You can use utils/checker/check_lmdb_data_by_vis.ipynb to visualize the data.
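
If you prefer a plain script over the notebook, the sketch below shows one way to peek into an episode LMDB. It is not the repo's checker: the key layout and value encoding (assumed here to be JPEG/PNG-encoded camera frames) are assumptions, so adapt it to whatever check_lmdb_data_by_vis.ipynb actually reads.

# Minimal sketch (not the repo's checker): list keys in an episode LMDB and
# try to decode values as encoded images. Key names and encodings are assumptions.
import lmdb
import numpy as np
import cv2

def peek_lmdb(path: str, max_keys: int = 20) -> None:
    env = lmdb.open(path, readonly=True, lock=False)
    with env.begin() as txn:
        for i, (key, value) in enumerate(txn.cursor()):
            print(key.decode(errors="replace"), len(value), "bytes")
            img = cv2.imdecode(np.frombuffer(value, dtype=np.uint8), cv2.IMREAD_COLOR)
            if img is not None:  # value happened to be an encoded image
                cv2.imwrite(f"frame_{i}.png", img)
            if i + 1 >= max_keys:
                break
    env.close()

peek_lmdb("/isaac-sim/src/data/example_episode_lmdb")  # hypothetical path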

Real-to-Sim in Customized Scene

  1. Prepare the data:
  • Place the images in the folder /path/to/input/images.
  • Place the image for alignment in /path/to/input/align_image.png.
  2. Reconstruct the scene:
# in docker
python reconstrct.py -i /path/to/input

The scene will be reconstructed automatically.

  3. Calibrate and align the scene (a minimal sketch of the underlying hand-eye solve appears after this list):
  • Run real-deployment/calibration/hand_in_eye_shooting.ipynb.
python real-deployment/calibration/hand_in_eye_calib.py --data_root /path/to/calibrate_folder
python real-deployment/utils/get_marker2base_aruco.py --data_root /path/to/calibrate_folder
  4. The file configs/pick_and_place/example.yaml shows how to configure the required paths.

  5. Replace the config path in src/standalone/clean_example/pick_into_basket_lmdb.py to begin collecting data in the simulator:

python src/standalone/clean_example/pick_into_basket_lmdb.py
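
For reference, the calibration step above boils down to a standard eye-in-hand solve. The sketch below is not hand_in_eye_calib.py; it only illustrates the underlying computation with OpenCV's calibrateHandEye, assuming you already have per-shot gripper-to-base poses and target-to-camera poses (e.g. from an ArUco marker).

# Illustrative eye-in-hand solve with OpenCV, not the repo's calibration script.
# Inputs (assumed): lists of rotations/translations of the gripper in the base frame
# and of the calibration target in the camera frame, one pair per shot.
import numpy as np
import cv2

def solve_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    T = np.eye(4)                     # 4x4 camera-to-gripper transform
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T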

πŸ€– Policy Training

The GitHub repository act-plus-plus contains our modified ACT code.

Env Setup

  1. Create the conda env from conda_env.yaml.
  2. Install torch.
  3. Install the other dependencies in requirements.txt.
  4. Install detr:
cd detr
pip install -e .

Tutorial

  1. Put the data inside data/5_items_lmdb and uncompress them. The file structure should look like this:
    β”œβ”€β”€ data
        └── 5_items_lmdb 
            β”œβ”€β”€ random_position_1021_1_eggplant_low_res_continuous_better_controller2_multi_item_filtered_30_lmdb
                └── ...
            β”œβ”€β”€ random_position_1021_1_eggplant_low_res_continuous_better_controller2_multi_item_filtered_45_lmdb
                └── ... 
            └── ...
  2. Check the data and remove broken files (optional):
python ../re3sim/utils/check_lmdb.py <path to the data>  # use the --fast flag for a partial check
  3. Process the data to get the ACT dataset:
# run the command in the act project root dir **/act-plus-plus/
python process_data.py --source-path /path/to/source --output-path /path/to/act_dataset 
  4. Start training:
# Single machine, 8 GPUs
torchrun --nproc_per_node=8 --master_port=12314 imitate_episodes_cosine.py --config-path=conf --config-name=<config name> hydra.job.chdir=True params.num_epochs=24 params.seed=100
# Multi-machine, multi-GPU
# First machine
torchrun --nproc_per_node=8 --node_rank=0 --nnodes=2 --master_addr=<master ip> --master_port=12314 imitate_episodes_cosine.py --config-path=conf --config-name=<config name> hydra.job.chdir=True params.num_epochs=24 params.seed=100

# Second machine
torchrun --nproc_per_node=8 --node_rank=1 --nnodes=2 --master_addr=<master ip> --master_port=12314 imitate_episodes_cosine.py --config-path=conf --config-name=<config name> hydra.job.chdir=True params.num_epochs=24 params.seed=100

Example

We provide the data for the pick-a-bottle task here and show how to train the policy on it. Please download the dataset and place it in policies/act_plus_plus/lmdb_data.

# in policies/act_plus_plus
export PYTHONPATH=$PYTHONPATH:$(pwd)
python process_data.py --source-path lmdb_data/pick_one --output-path data/pick_one --file-name _keys.json
# The configs are in `constants.py` and `conf/pick_one_into_basket.yaml`
torchrun --nproc_per_node=8 --master_port=12314 imitate_episodes_cosine.py --config-path=conf --config-name=pick_one_into_basket hydra.job.chdir=True params.num_epochs=24 params.seed=100

Note: The data paths are jointly indexed through constants.py and conf/*.yaml. For custom datasets, add a task to constants.py (a new entry in the SIM_TASK_CONFIGS dictionary) and create a yaml file whose params.task_name matches that key.
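
As a sketch, a new task entry might look like the following. The field names are borrowed from the upstream ACT constants.py and are assumptions here; mirror an existing entry in this fork rather than copying this verbatim.

# constants.py -- hypothetical new entry; check an existing SIM_TASK_CONFIGS entry
# for the exact fields this fork expects.
SIM_TASK_CONFIGS["my_pick_task"] = {
    "dataset_dir": "data/my_pick_task",       # output of process_data.py
    "num_episodes": 100,
    "episode_len": 400,
    "camera_names": ["camera_0", "camera_1"],
}

The matching conf/my_pick_task.yaml would then set params.task_name to my_pick_task.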

To evaluate the policy in simulation, please refer to eval.md.

πŸ“ TODO List

  • Code formatting.
  • More real-to-sim-to-real tasks.
  • A user-friendly GUI.
  • Unified rendering implementation and articulation reconstruction.

πŸ”— Citation

If you find our work helpful, please cite:

@article{han2025re3sim,
  title={Re$^3$Sim: Generating High-Fidelity Simulation Data via 3D-Photorealistic Real-to-Sim for Robotic Manipulation},
  author={Han, Xiaoshen and Liu, Minghuan and Chen, Yilun and Yu, Junqiu and Lyu, Xiaoyang and Tian, Yang and Wang, Bolun and Zhang, Weinan and Pang, Jiangmiao},
  journal={arXiv preprint arXiv:2502.08645},
  year={2025}
}

πŸ“„ License

The work is licensed under Creative Commons Attribution-NonCommercial 4.0 International.

πŸ‘ Acknowledgements

  • Gaussian-splatting: We use 3D Gaussian splatting as the rendering engine.
  • Act-plus-plus: We modified the ACT model based on this code.
  • Franka_grasp_baseline: We borrowed the hand-eye calibration implementation from this codebase.
  • IsaacLab: We used the script from this library to convert OBJ to USD.
