
Stereo3DMOT

Stereo3DMOT: Stereo Vision Based 3D Multi-Object Tracking with Multimodal ReID, Chen Mao

This repository contains the official PyTorch implementation of our PRCV 2023 paper "Stereo3DMOT: Stereo Vision Based 3D Multi-Object Tracking with Multimodal ReID" by Chen Mao et al. If you find our paper or code useful, please cite:

@inproceedings{mao2023stereo3dmot,
  title={Stereo3DMOT: Stereo Vision Based 3D Multi-object Tracking with Multimodal ReID},
  author={Mao, Chen and Tan, Chong and Liu, Hong and Hu, Jingqi and Zheng, Min},
  booktitle={Chinese Conference on Pattern Recognition and Computer Vision (PRCV)},
  pages={495--507},
  year={2023},
  organization={Springer}
}

Introduction

3D Multi-Object Tracking (MOT) is a key component in numerous applications, such as autonomous driving and intelligent robotics, playing a crucial role in the perception and decision-making processes of intelligent systems. In this paper, we propose a 3D MOT system based on a cost-effective stereo camera pair, which includes a 3D multimodal re-identification (ReID) model capable of multi-task learning. The ReID model obtains the multimodal features of objects, including RGB and point cloud information. We design data association and trajectory management algorithms. The data association computes an affinity matrix from the object feature embeddings and motion information, while the trajectory management controls the lifecycle of the trajectories. In addition, we create a ReID dataset based on the KITTI Tracking dataset, used for training and validating ReID models. Results demonstrate that our method can achieve accurate object tracking solely with a stereo camera pair, maintaining high reliability even in cases of occlusion and missed detections. Experimental evaluation shows that our approach achieves competitive results on the KITTI MOT leaderboard.
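To illustrate the data association step described above, the sketch below combines ReID embedding similarity with a simple motion term into a single affinity matrix and solves the assignment with the Hungarian algorithm. This is a minimal sketch, not code from this repository: the Gaussian motion term, the mixing weight `alpha`, and the gating `threshold` are assumptions made for illustration.

```python
# Minimal sketch of appearance + motion data association (not the authors'
# exact formulation). Assumes L2-normalizable ReID embeddings and a Gaussian
# affinity over 3D center distance; alpha/sigma/threshold are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

def affinity_matrix(track_embs, det_embs, track_centers, det_centers,
                    alpha=0.5, sigma=2.0):
    """Combine ReID cosine similarity with a simple motion term."""
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    appearance = t @ d.T                                    # cosine similarity
    dist = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :],
                          axis=-1)
    motion = np.exp(-(dist ** 2) / (2 * sigma ** 2))        # closer => higher
    return alpha * appearance + (1 - alpha) * motion

def associate(affinity, threshold=0.3):
    """Hungarian matching on the negated affinity; drop weak pairs."""
    rows, cols = linear_sum_assignment(-affinity)
    return [(r, c) for r, c in zip(rows, cols) if affinity[r, c] >= threshold]
```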

Benchmarking

We provide instructions for inference, evaluation, and visualization to reproduce our method's performance on the supported KITTI dataset for benchmarking purposes.
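KITTI tracking labels and results use a fixed whitespace-separated text format (one file per sequence, one object per line). The loader below is a minimal sketch for reading such files prior to evaluation or visualization; it follows the public KITTI tracking layout and is not taken from this repository, and the example path is a placeholder.

```python
# Minimal sketch of reading KITTI-tracking-style files (standard public
# format, not code from this repository).
from collections import defaultdict

KITTI_FIELDS = ["frame", "track_id", "type", "truncated", "occluded", "alpha",
                "bbox_left", "bbox_top", "bbox_right", "bbox_bottom",
                "height", "width", "length", "x", "y", "z", "rotation_y",
                "score"]  # score is present in result files, absent in labels

def load_kitti_tracks(path):
    """Group objects by frame; each entry is a dict of the fields above."""
    per_frame = defaultdict(list)
    with open(path) as f:
        for line in f:
            parts = line.split()
            obj = {"frame": int(parts[0]), "track_id": int(parts[1]),
                   "type": parts[2]}
            obj.update({k: float(v) for k, v in zip(KITTI_FIELDS[3:], parts[3:])})
            per_frame[obj["frame"]].append(obj)
    return per_frame

# Example usage (path is a placeholder):
# tracks = load_kitti_tracks("results/0000.txt")
```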

Issues

If you have any questions regarding our code or paper, feel free to open an issue or send an email to [email protected].

Acknowledgements

Parts of this repo are inspired by the following repositories:
