To create the environment: python3 -m venv esa
To activate the environment: source esa/bin/activate
To install packages: pip install -r requirements_ok.txt
We provide a dataloader for the KITTI dataset in dataset_invariance.py. The dataloader yields samples of (past, future) trajectories, each paired with a semantic map of the surrounding scene.
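The sample layout can be sketched as below. This is illustrative only: the trajectory lengths follow the --past_len/--future_len defaults (20 and 40 steps), while the map size (180x180) and the field names are placeholder assumptions; the actual samples produced by dataset_invariance.py may differ.

```python
import numpy as np

# Hypothetical sample layout, assuming 2D (x, y) coordinates and the
# default past/future lengths; map size 180x180 is an arbitrary placeholder.
past = np.zeros((20, 2), dtype=np.float32)    # observed trajectory
future = np.zeros((40, 2), dtype=np.float32)  # ground-truth continuation
scene = np.zeros((180, 180), dtype=np.uint8)  # semantic map of the scene

sample = {"past": past, "future": future, "scene": scene}
```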
To train MANTRA with the ESA controller, we start from a pretrained MANTRA. The model can be trained with the train_mantra_tran.py script, which calls trainer_mantra_tran.py. Training can be monitored with TensorBoard; logs are stored in the runs/runs_tran folder. Pretrained models are available in the pretrained_models folder.
python train_tran.py
python test.py --preds 5 --withIRM True/False --saved_memory True/False
test.py calls evaluate_MemNet.py. This script computes metrics on the KITTI dataset using a trained model: Average Displacement Error (ADE) and Final Displacement Error (FDE, also referred to as Error@K or Horizon Error).
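The two metrics can be sketched as follows: ADE averages the per-timestep L2 error over the whole predicted future, while FDE takes the error at the final timestep only. This is a minimal standalone sketch, not the repository's evaluation code (see evaluate_MemNet.py for that).

```python
import numpy as np

def ade_fde(pred, gt):
    """ADE and FDE for a single predicted trajectory.

    pred, gt: arrays of shape (T, 2) holding (x, y) coordinates.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)  # L2 error at each timestep
    return dists.mean(), dists[-1]

pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
ade, fde = ade_fde(pred, gt)  # errors per step: 0, 1, 2 -> ADE 1.0, FDE 2.0
```

With multiple predictions per sample (the --preds flag), multimodal evaluation typically reports the best (minimum) ADE/FDE over the K predictions.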
--cuda Enable/Disable GPU device (default=True).
--batch_size Number of samples that will be fed to MANTRA in one iteration (default=32).
--past_len Past length (default=20).
--future_len Future length (default=40).
--preds Number of predictions generated by MANTRA model (default=5).
--model Path of pretrained model for the evaluation (default='pretrained_models/MANTRA/model_MANTRA')
--visualize_dataset The system saves all dataset examples (in *folder_test/dataset_train* and *folder_test/dataset_test*).
--saved_memory Selects which memories are used in the evaluation.
If True, memories are loaded from the 'memories_path' folder.
If False, new memories are generated: the past-future pairs to store are chosen by the model's writing controller.
--memories_path Path to stored memories; used only if the saved_memory flag is True.
--withIRM Whether predictions are generated with the Iterative Refinement Module.
--saveImages The system saves dataset examples with the predictions generated by MANTRA in the test folder.
If None, no qualitative examples are saved, only quantitative results.
If 'All', all examples are saved.
If 'Subset', only the examples defined in index_qualitative.py (hand-picked, most significant samples) are saved.
(default=None)
--dataset_file Name of the json file containing the dataset (default='kitti_dataset.json')
--model_classic_flag Flag to choose between classic MANTRA and MANTRA + ESA (default=False)
--info Name of the evaluation. It will be used as the name of the test folder (default='')
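A parser for the flags above might be declared as in the sketch below. This is a hypothetical reconstruction following the documented names and defaults, not the actual parser in test.py; defaults for --saved_memory, --withIRM, and --memories_path are not stated in the list and are assumed here.

```python
import argparse

def str2bool(v):
    """Interpret 'True'/'False' command-line strings as booleans."""
    return str(v).lower() in ("true", "1", "yes")

# Hypothetical parser mirroring the documented flags; the real test.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--cuda", type=str2bool, default=True)
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--past_len", type=int, default=20)
parser.add_argument("--future_len", type=int, default=40)
parser.add_argument("--preds", type=int, default=5)
parser.add_argument("--model", default="pretrained_models/MANTRA/model_MANTRA")
parser.add_argument("--saved_memory", type=str2bool, default=True)   # assumed default
parser.add_argument("--memories_path", default="")                   # assumed default
parser.add_argument("--withIRM", type=str2bool, default=False)       # assumed default
parser.add_argument("--saveImages", default=None)
parser.add_argument("--dataset_file", default="kitti_dataset.json")
parser.add_argument("--model_classic_flag", type=str2bool, default=False)
parser.add_argument("--info", default="")

# Parse the example invocation shown above.
args = parser.parse_args(["--preds", "5", "--withIRM", "True"])
```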
If you use our code or find it useful in your research, please cite the following paper:
@inproceedings{cvpr_2020,
  author    = {Marchetti, Francesco and Becattini, Federico and Seidenari, Lorenzo and Del Bimbo, Alberto},
  booktitle = {International Conference on Computer Vision and Pattern Recognition (CVPR)},
  publisher = {IEEE},
  title     = {MANTRA: Memory Augmented Networks for Multiple Trajectory Prediction},
  year      = {2020}
}

@article{Geiger2013IJRR,
  author  = {Andreas Geiger and Philip Lenz and Christoph Stiller and Raquel Urtasun},
  title   = {Vision meets Robotics: The KITTI Dataset},
  journal = {International Journal of Robotics Research (IJRR)},
  year    = {2013}
}
This source code is shared under the CC-BY-NC-SA license; please refer to the LICENSE file for more information.
This source code is shared only for R&D or for evaluation of this model on a user's database.
Any commercial use is strictly forbidden.
For any use with a commercial goal, please contact contact_cs or bendahan