The pedestrian re-identification (ReID) module is built on the torchreid implementation to train our model, alongside more classical approaches based on a Bag-of-Words (BOW) model and color histograms.
- To prepare the environment for torchreid:
cd core/reid/deep-person-reid/
pip install -r requirements.txt
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
python setup.py develop
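To sanity-check the installation, the sketch below builds a stock torchreid backbone and runs a dummy forward pass; the model name, input size, and class count are illustrative and not tied to our training configuration.

```python
# Minimal installation check (a sketch, not part of the official scripts).
import torch
import torchreid

# Build a standard ReID backbone; num_classes is arbitrary for this check.
model = torchreid.models.build_model(
    name="resnet50",
    num_classes=100,
    pretrained=False,
)
model.eval()

# Dummy batch: 4 RGB crops of size 256x128 (a typical person-ReID resolution).
with torch.no_grad():
    features = model(torch.randn(4, 3, 256, 128))

# In eval mode the model returns feature embeddings,
# typically torch.Size([4, 2048]) for resnet50.
print(features.shape)
```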
- To run training and testing on the MOTSynth and MOT17 datasets, respectively:
./scripts/reid/training.sh
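The training script drives the standard torchreid pipeline (data manager, model, optimizer, engine). A minimal sketch of that pipeline is shown below; the dataset keys motsynth and mot17, the data root, and the hyperparameters are assumptions for illustration and may differ from what the script actually configures.

```python
import torch
import torchreid

# Data manager: 'motsynth' / 'mot17' are placeholders for however the custom
# datasets are registered in this repository (names may differ).
datamanager = torchreid.data.ImageDataManager(
    root="path/to/data",
    sources="motsynth",
    targets="mot17",
    height=256,
    width=128,
    batch_size_train=32,
    batch_size_test=100,
)

# Standard ReID backbone trained with a softmax (ID) loss.
model = torchreid.models.build_model(
    name="resnet50",
    num_classes=datamanager.num_train_pids,
    loss="softmax",
    pretrained=True,
)
if torch.cuda.is_available():
    model = model.cuda()

optimizer = torchreid.optim.build_optimizer(model, optim="adam", lr=0.0003)
scheduler = torchreid.optim.build_lr_scheduler(
    optimizer, lr_scheduler="single_step", stepsize=20
)

engine = torchreid.engine.ImageSoftmaxEngine(
    datamanager, model, optimizer=optimizer, scheduler=scheduler, label_smooth=True
)

# engine.run() also initializes the TensorBoard SummaryWriter (see below).
engine.run(
    save_dir="log/resnet50",
    max_epoch=60,
    eval_freq=10,
    print_freq=10,
    test_only=False,
)
```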
The SummaryWriter() for TensorBoard is initialized automatically in engine.run() when you train your model, so no extra setup is needed. After training finishes, the tfevents file is saved in save_dir. Then run in your terminal:
pip install tensorflow tensorboard
tensorboard --logdir=your_save_dir
Access TensorBoard by visiting http://localhost:6006/ in a web browser. See the PyTorch TensorBoard documentation for further information.
You can also see the results of our experiments here, obtained with different distance metrics: Euclidean, cosine, and non-normalized Euclidean.
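For reference, these metrics differ only in how the query–gallery distance matrix is computed. A sketch using torchreid.metrics.compute_distance_matrix on placeholder feature tensors:

```python
import torch
from torchreid import metrics

# Placeholder query/gallery embeddings (in practice, extracted by the trained model).
qf = torch.randn(10, 2048)   # 10 query features
gf = torch.randn(50, 2048)   # 50 gallery features

# Plain (non-normalized) Euclidean distance.
dist_euclidean = metrics.compute_distance_matrix(qf, gf, metric="euclidean")

# Cosine distance.
dist_cosine = metrics.compute_distance_matrix(qf, gf, metric="cosine")

# Normalized Euclidean: L2-normalize the features first, then compute Euclidean.
qf_n = torch.nn.functional.normalize(qf, p=2, dim=1)
gf_n = torch.nn.functional.normalize(gf, p=2, dim=1)
dist_norm_euclidean = metrics.compute_distance_matrix(qf_n, gf_n, metric="euclidean")
```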
- To prepare training descriptors for BOW dictionary generation:
./scripts/reid_cv/compute_sift.sh
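Conceptually, this step extracts SIFT descriptors from the training crops and clusters them with k-means into a visual vocabulary. A minimal OpenCV sketch of the idea, with placeholder paths and an illustrative dictionary size:

```python
import glob
import cv2

DICT_SIZE = 256  # number of visual words (illustrative value)

sift = cv2.SIFT_create()
bow_trainer = cv2.BOWKMeansTrainer(DICT_SIZE)

# Collect SIFT descriptors from the training crops (path is a placeholder).
for path in glob.glob("data/train_crops/*.jpg"):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(img, None)
    if descriptors is not None:
        bow_trainer.add(descriptors)

# k-means clustering of all descriptors yields the BOW dictionary.
vocabulary = bow_trainer.cluster()  # shape: (DICT_SIZE, 128), float32

fs = cv2.FileStorage("bow_dictionary.yml", cv2.FILE_STORAGE_WRITE)
fs.write("vocabulary", vocabulary)
fs.release()
```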
- To retrieve similar images:
./scripts/reid_cv/reid_sift.sh
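Retrieval then encodes each crop as a BOW histogram over that vocabulary and ranks gallery crops by their distance to the query; color histograms can be compared analogously (e.g. with cv2.compareHist) and fused into the score. A minimal OpenCV sketch with placeholder paths:

```python
import glob
import cv2
import numpy as np

# Load the vocabulary produced in the previous step (placeholder file name).
fs = cv2.FileStorage("bow_dictionary.yml", cv2.FILE_STORAGE_READ)
vocabulary = fs.getNode("vocabulary").mat()
fs.release()

sift = cv2.SIFT_create()
bow_extractor = cv2.BOWImgDescriptorExtractor(sift, cv2.BFMatcher(cv2.NORM_L2))
bow_extractor.setVocabulary(vocabulary)

def bow_histogram(path):
    """Encode an image crop as a BOW histogram of visual words."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints = sift.detect(img, None)
    hist = bow_extractor.compute(img, keypoints)
    return hist[0] if hist is not None else None

query = bow_histogram("data/query/query.jpg")

# Rank gallery crops by Euclidean distance between BOW histograms
# (smaller distance = more similar).
ranking = []
for path in glob.glob("data/gallery/*.jpg"):
    gallery = bow_histogram(path)
    if query is None or gallery is None:
        continue
    ranking.append((float(np.linalg.norm(query - gallery)), path))

for dist, path in sorted(ranking)[:5]:
    print(f"{dist:.4f}  {path}")
```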