Commit c860990: upload

Matteo Dunnhofer authored and Matteo Dunnhofer committed Dec 2, 2022
1 parent 1e34081 commit c860990

Showing 1,592 changed files with 392,243 additions and 0 deletions.
CoCoLoT_Tracker.py (492 additions, 0 deletions)

Large diffs are not rendered by default.

README.md (72 additions, 0 deletions)
### CoCoLoT: Combining Complementary Trackers in Long-Term Visual Tracking

🏆 Winner of the [Visual Object Tracking VOT2021 Long-term Challenge](https://openaccess.thecvf.com/content/ICCV2021W/VOT/papers/Kristan_The_Ninth_Visual_Object_Tracking_VOT2021_Challenge_Results_ICCVW_2021_paper.pdf) (submitted as mlpLT)

Matteo Dunnhofer, Kristian Simonato, Christian Micheloni

Machine Learning and Perception Lab
Department of Mathematics, Computer Science and Physics
University of Udine
Udine, Italy

##### Hardware and OS specifications
+ CPU: Intel Xeon E5-2690 v4 @ 2.60GHz
+ GPU: NVIDIA TITAN V
+ RAM: 320 GB
+ OS: Ubuntu 20.04

#### VOT-LT test instructions
To run the VOT Challenge Long-term experiments, please follow these instructions (a consolidated script is sketched after the list):

+ Clone the repository ``git clone https://github.com/matteo-dunnhofer/CoCoLoT``

+ Download the pre-trained weight files ``STARKST_ep0050.pth.tar`` and ``super_dimp.pth.tar`` from [here](https://drive.google.com/drive/folders/1W_ePPy5HoLgcGbUE5Gh_0HZ034szvSiQ?usp=sharing) and put them in the folder ``CoCoLoT/ckpt/``

+ Move to the submission source folder ``cd CoCoLoT``

+ Create the Anaconda environment ``conda env create -f environment.yml``

+ Activate the environment ``conda activate CoCoLoT``

+ Install ninja-build ``sudo apt-get install ninja-build``

+ Edit the variable ``base_path`` in the file ``vot_path.py`` by providing the full path to the location where the submission folder is stored,
and do the same in the file ``trackers.ini`` by substituting the ``[full-path-to-CoCoLoT]`` placeholders on lines 9 and 13

+ Run ``python compile_pytracking.py``

+ Run the experiments with ``vot evaluate CoCoLoT``

+ Run the analysis with ``vot analysis CoCoLoT``
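
For convenience, the steps above can be collected into a single shell session. This is a sketch that mirrors the instructions: the checkpoint download from Google Drive stays manual, and the ``sed`` edits assume the default assignment form in ``vot_path.py`` and the literal ``[full-path-to-CoCoLoT]`` placeholders in ``trackers.ini``.

```
git clone https://github.com/matteo-dunnhofer/CoCoLoT
cd CoCoLoT

# place the manually downloaded STARKST_ep0050.pth.tar and super_dimp.pth.tar here
mkdir -p ckpt

conda env create -f environment.yml
conda activate CoCoLoT
sudo apt-get install ninja-build

# point vot_path.py and trackers.ini at this folder (illustrative edits)
COCOLOT_PATH=$(pwd)
sed -i "s|^base_path = .*|base_path = '${COCOLOT_PATH}'|" vot_path.py
sed -i "s|\[full-path-to-CoCoLoT\]|${COCOLOT_PATH}|g" trackers.ini

python compile_pytracking.py
vot evaluate CoCoLoT
vot analysis CoCoLoT
```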

#### If you fail to run our tracker, please write to ``[email protected]``

#### An improved version of CoCoLoT exploiting Stark and KeepTrack can be downloaded [here](http://data.votchallenge.net/vot2022/trackers/CoCoLoT-code-2022-04-28T08_08_35.527492.zip).
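
The archive can also be fetched and unpacked from the command line (assuming ``wget`` and ``unzip`` are available):

```
wget http://data.votchallenge.net/vot2022/trackers/CoCoLoT-code-2022-04-28T08_08_35.527492.zip
unzip CoCoLoT-code-2022-04-28T08_08_35.527492.zip
```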

#### References

If you find this code useful, please cite:

```
@INPROCEEDINGS{9956082,
  author={Dunnhofer, Matteo and Micheloni, Christian},
  booktitle={2022 26th International Conference on Pattern Recognition (ICPR)},
  title={CoCoLoT: Combining Complementary Trackers in Long-Term Visual Tracking},
  year={2022},
  pages={5132-5139},
  doi={10.1109/ICPR56361.2022.9956082}
}

@article{Dunnhofer2022imavis,
  author={Dunnhofer, Matteo and Simonato, Kristian and Micheloni, Christian},
  title={Combining complementary trackers for enhanced long-term visual object tracking},
  journal={Image and Vision Computing},
  volume={122},
  pages={104448},
  year={2022},
  doi={10.1016/j.imavis.2022.104448}
}
```

The code presented here is built upon the following repositories:
+ [pytracking](https://github.com/visionml/pytracking)
+ [Stark](https://github.com/researchmm/Stark)
+ [LTMU](https://github.com/Daikenan/LTMU)
Stark/MODEL_ZOO.md (123 additions, 0 deletions)
# STARK Model Zoo

Here we provide the performance of the STARK trackers on multiple tracking benchmarks, together with the corresponding raw results.
The model weights and the corresponding training logs are available via the links below.

## Tracking
### Models & Logs

<table>
<tr>
<th>Model</th>
<th>LaSOT<br>AUC (%)</th>
<th>GOT-10k<br>AO (%)</th>
<th>TrackingNet<br>AUC (%)</th>
<th>VOT2020<br>EAO</th>
<th>VOT2020-LT<br>F-score (%)</th>
<th>Models</th>
<th>Logs</th>
<th>Logs(GOT10K)</th>
</tr>
<tr>
<td>STARK-S50</td>
<td>65.8</td>
<td>67.2</td>
<td>80.3</td>
<td>0.462</td>
<td>-</td>
<td><a href="https://drive.google.com/drive/folders/1144cEuF_yn9UwTfrSVl5wmaMK3F92q42?usp=sharing">model</a></td>
<td><a href="https://drive.google.com/file/d/1_YI0CX52vg8zN6hWsYK22_78FXPiukdv/view?usp=sharing">log</a></td>
<td><a href="https://drive.google.com/file/d/1xLUeV9I9tejT4eYd1mYpeB_AsndiaJNI/view?usp=sharing">log</a></td>
</tr>
<tr>
<td>STARK-ST50</td>
<td>66.4</td>
<td>68.0</td>
<td>81.3</td>
<td>0.505</td>
<td>70.2</td>
<td><a href="https://drive.google.com/drive/folders/1fSgll53ZnVKeUn22W37Nijk-b9LGhMdN?usp=sharing">model</a></td>
<td><a href="https://drive.google.com/drive/folders/1RcPoBxI1_E6U9s5Y6BEhQH_ov-sT7SJM?usp=sharing">log</a></td>
<td><a href="https://drive.google.com/drive/folders/13guPF1MUOaRa09_4y_K9do9yhQsC_y_y?usp=sharing">log</a></td>
</tr>
<tr>
<td>STARK-ST101</td>
<td>67.1</td>
<td>68.8</td>
<td>82.0</td>
<td>0.497</td>
<td>70.1</td>
<td><a href="https://drive.google.com/drive/folders/1fSgll53ZnVKeUn22W37Nijk-b9LGhMdN?usp=sharing">model</a></td>
<td><a href="https://drive.google.com/drive/folders/1nTDRfG0K0w2XiP5RDrYJXhotUYQJBNoY?usp=sharing">log</a></td>
<td><a href="https://drive.google.com/drive/folders/1PR6PRdARHFKBDSjoqeO7qxx9y87AZWSD?usp=sharing">log</a></td>
</tr>


</table>

The downloaded checkpoints should be organized in the following structure:
```
${STARK_ROOT}
-- checkpoints
-- train
-- stark_s
-- baseline
STARKS_ep0500.pth.tar
-- baseline_got10k_only
STARKS_ep0500.pth.tar
-- stark_st2
-- baseline
STARKST_ep0050.pth.tar
-- baseline_got10k_only
STARKST_ep0050.pth.tar
-- baseline_R101
STARKST_ep0050.pth.tar
-- baseline_R101_got10k_only
STARKST_ep0050.pth.tar
```
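
One way to create this layout before copying the checkpoints in (a sketch, run from ``${STARK_ROOT}``):

```
mkdir -p checkpoints/train/stark_s/baseline \
         checkpoints/train/stark_s/baseline_got10k_only \
         checkpoints/train/stark_st2/baseline \
         checkpoints/train/stark_st2/baseline_got10k_only \
         checkpoints/train/stark_st2/baseline_R101 \
         checkpoints/train/stark_st2/baseline_R101_got10k_only
# then move each downloaded *.pth.tar into the matching folder
```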
### Raw Results
The raw results are in the format [top_left_x, top_left_y, width, height]. Raw results for GOT-10K and TrackingNet can be
submitted directly to the corresponding evaluation servers. The folder ```test/tracking_results/``` contains raw results
for the LaSOT dataset, which should be organized in the following structure:
```
${STARK_ROOT}
-- test
-- tracking_results
-- stark_s
-- baseline
airplane-1.txt
airplane-13.txt
...
-- stark_st2
-- baseline
airplane-1.txt
airplane-13.txt
...
-- baseline_R101
airplane-1.txt
airplane-13.txt
...
```
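
Each result file holds one box per frame in the [top_left_x, top_left_y, width, height] format above. A minimal reader might look like the sketch below, which assumes the values on each line are comma-, tab-, or space-separated (the common pytracking output formats); the example file path is hypothetical:

```
import re

def load_raw_results(path):
    """Read one [x, y, w, h] box per line from a raw result file."""
    boxes = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # split on commas, tabs, or runs of spaces
            x, y, w, h = map(float, re.split(r"[,\s]+", line))
            boxes.append((x, y, w, h))
    return boxes

boxes = load_raw_results("test/tracking_results/stark_st2/baseline/airplane-1.txt")
```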
The raw results of VOT2020 and VOT2020-LT should be organized in the following structure:
```
${STARK_ROOT}
-- external
-- vot20
-- stark_s50
-- results
-- stark_s50_ar
-- results
-- stark_st50
-- results
-- stark_st50_ar
-- results
-- stark_st101
-- results
-- stark_st101_ar
-- results
-- vot20_lt
-- stark_st50
-- results
-- stark_st101
-- results
```
Stark/README.md (139 additions, 0 deletions)
# STARK
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/learning-spatio-temporal-transformer-for/visual-object-tracking-on-lasot)](https://paperswithcode.com/sota/visual-object-tracking-on-lasot?p=learning-spatio-temporal-transformer-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/learning-spatio-temporal-transformer-for/visual-object-tracking-on-got-10k)](https://paperswithcode.com/sota/visual-object-tracking-on-got-10k?p=learning-spatio-temporal-transformer-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/learning-spatio-temporal-transformer-for/visual-object-tracking-on-trackingnet)](https://paperswithcode.com/sota/visual-object-tracking-on-trackingnet?p=learning-spatio-temporal-transformer-for)

The official implementation of the paper [**Learning Spatio-Temporal Transformer for Visual Tracking**](https://arxiv.org/abs/2103.17154)

Hiring research interns for visual transformer projects: [email protected]


![STARK_Framework](tracking/Framework.png)
## Highlights
### End-to-End, Post-processing Free

STARK is an **end-to-end** tracking approach that directly predicts a single accurate bounding box as the tracking result.
Moreover, STARK does not use any hyperparameter-sensitive post-processing, which leads to stable performance.

### Real-Time Speed
STARK-ST50 and STARK-ST101 run at **40 FPS** and **30 FPS**, respectively, on a Tesla V100 GPU.

### Strong Performance
| Tracker | LaSOT (AUC)| GOT-10K (AO)| TrackingNet (AUC)|
|---|---|---|---|
|**STARK**|**67.1**|**68.8**|**82.0**|
|TransT|64.9|67.1|81.4|
|TrDiMP|63.7|67.1|78.4|
|Siam R-CNN|64.8|64.9|81.2|

### Purely PyTorch-based Code

STARK is implemented purely in PyTorch.

## Install the environment
**Option 1**: Use Anaconda
```
conda create -n stark python=3.6
conda activate stark
bash install.sh
```
**Option 2**: Use the Docker image

We provide a complete Docker image [here](https://hub.docker.com/repository/docker/alphabin/stark).
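
To pull it (assuming the default ``latest`` tag is published on Docker Hub):

```
docker pull alphabin/stark
```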

## Data Preparation
Put the tracking datasets in ./data. It should look like:
```
${STARK_ROOT}
-- data
-- lasot
|-- airplane
|-- basketball
|-- bear
...
-- got10k
|-- test
|-- train
|-- val
-- coco
|-- annotations
|-- images
-- trackingnet
|-- TRAIN_0
|-- TRAIN_1
...
|-- TRAIN_11
|-- TEST
```
Run the following command to set the paths for this project:
```
python tracking/create_default_local_file.py --workspace_dir . --data_dir ./data --save_dir .
```
After running this command, you can also modify the paths by editing these two files:
```
lib/train/admin/local.py # paths about training
lib/test/evaluation/local.py # paths about testing
```

## Train STARK
Training with multiple GPUs using DDP
```
# STARK-S50
python tracking/train.py --script stark_s --config baseline --save_dir . --mode multiple --nproc_per_node 8 # STARK-S50
# STARK-ST50
python tracking/train.py --script stark_st1 --config baseline --save_dir . --mode multiple --nproc_per_node 8 # STARK-ST50 Stage1
python tracking/train.py --script stark_st2 --config baseline --save_dir . --mode multiple --nproc_per_node 8 --script_prv stark_st1 --config_prv baseline # STARK-ST50 Stage2
# STARK-ST101
python tracking/train.py --script stark_st1 --config baseline_R101 --save_dir . --mode multiple --nproc_per_node 8 # STARK-ST101 Stage1
python tracking/train.py --script stark_st2 --config baseline_R101 --save_dir . --mode multiple --nproc_per_node 8 --script_prv stark_st1 --config_prv baseline_R101 # STARK-ST101 Stage2
```
(Optional) Debug training with a single GPU:
```
python tracking/train.py --script stark_s --config baseline --save_dir . --mode single
```
## Test and evaluate STARK on benchmarks

- LaSOT
```
python tracking/test.py stark_st baseline --dataset lasot --threads 32
python tracking/analysis_results.py # need to modify tracker configs and names
```
- GOT10K-test
```
python tracking/test.py stark_st baseline_got10k_only --dataset got10k_test --threads 32
python lib/test/utils/transform_got10k.py --tracker_name stark_st --cfg_name baseline_got10k_only
```
- TrackingNet
```
python tracking/test.py stark_st baseline --dataset trackingnet --threads 32
python lib/test/utils/transform_trackingnet.py --tracker_name stark_st --cfg_name baseline
```
- VOT2020
Before evaluating "STARK+AR" on VOT2020, please install the extra packages by following [external/AR/README.md](external/AR/README.md)
```
cd external/vot20/<workspace_dir>
export PYTHONPATH=<path to the stark project>:$PYTHONPATH
bash exp.sh
```
- VOT2020-LT
```
cd external/vot20_lt/<workspace_dir>
export PYTHONPATH=<path to the stark project>:$PYTHONPATH
bash exp.sh
```
## Test FLOPs, Params, and Speed
```
# Profiling STARK-S50 model
python tracking/profile_model.py --script stark_s --config baseline
# Profiling STARK-ST50 model
python tracking/profile_model.py --script stark_st2 --config baseline
# Profiling STARK-ST101 model
python tracking/profile_model.py --script stark_st2 --config baseline_R101
```

## Model Zoo
The trained models, the training logs, and the raw tracking results are provided in the [model zoo](MODEL_ZOO.md).

## Acknowledgments
* Thanks to the great [PyTracking](https://github.com/visionml/pytracking) library, which helped us implement our ideas quickly.
* We use the DETR implementation from the official repo [https://github.com/facebookresearch/detr](https://github.com/facebookresearch/detr).