CtRNet-X: Camera-to-Robot Pose Estimation in Real-world Conditions Using a Single Camera

Jingpei Lu*, Zekai Liang*, Tristin Xie, Florian Richter, Shan Lin, Sainan Liu, Michael C. Yip

University of California, San Diego

ICRA 2025

[arXiv] [Project page]

Highlight

CtRNet-X is a novel framework capable of estimating the robot pose even when the robot manipulator is only partially visible. Our approach leverages Vision-Language Models for fine-grained detection of robot components and integrates them into a keypoint-based pose estimation network, enabling more robust performance under varied operational conditions.

Dependencies

We recommend setting up the environment using Anaconda. The code is developed and tested on Ubuntu 22.04.

See environment.yml for more details.
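
For example, the environment can typically be created and activated as follows (the environment name below is a placeholder; use the name defined in environment.yml):

conda env create -f environment.yml
conda activate ctrnet-x  # placeholder name; check environment.yml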

Dataset

  1. DREAM dataset
  2. Panda arm dataset with ground truth calibration info
  3. DROID sequences for evaluation

Weights

Weights for fine-tuned CLIP model

Weights for Camera-to-Robot estimation

Quick Start

Run inference on DROID raw data:

python inference_DROID_raw_file.py

Run inference on the Panda dataset with ground-truth camera info:

python inference_panda_dataset.py
  • Optional argument: confidence_threshold (see the example below)
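
For example, assuming the optional argument is passed as a command-line flag (the value below is illustrative, not a recommended setting):

python inference_panda_dataset.py --confidence_threshold 0.8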

Working with RGBD input

A variation of CtRNet-X can integrate depth maps from an RGB-D camera during inference by comparing the measured depth to the depth rendered by the differentiable renderer. Here we use DROID as an example.

(Figure: raw vs. rendered depth)

Use depth input to refine estimation:

python inference_video_depth.py

We use the Huber loss with delta = 0.1; feel free to try your own depth data with different losses!
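
As a minimal sketch of this depth comparison (PyTorch, with hypothetical tensor names; the actual script may organize this differently):

import torch

# measured_depth: depth map from the RGB-D camera (H x W tensor)
# rendered_depth: depth rendered by the differentiable renderer at the current pose estimate (H x W tensor)
# valid: boolean mask selecting pixels with valid depth on the rendered robot region
huber = torch.nn.HuberLoss(delta=0.1)
depth_loss = huber(rendered_depth[valid], measured_depth[valid])
# backpropagating depth_loss refines the camera-to-robot pose estimate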

Citation

@article{lu2024ctrnet,
  title={CtRNet-X: Camera-to-Robot Pose Estimation in Real-world Conditions Using a Single Camera},
  author={Lu, Jingpei and Liang, Zekai and Xie, Tristin and Richter, Florian and Lin, Shan and Liu, Sainan and Yip, Michael C},
  journal={arXiv preprint arXiv:2409.10441},
  year={2024}
}