
# Self-Supervised Learning on Small In-Domain Datasets Can Overcome Supervised Learning in Remote Sensing

This repository contains the official implementation of the paper *Self-Supervised Learning on Small In-Domain Datasets Can Overcome Supervised Learning in Remote Sensing*, published in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.


If you use this code or find our work useful, please consider citing our paper:

```bibtex
@article{sanchez2024ssl,
    title={Self-supervised learning on small in-domain datasets can overcome supervised learning in remote sensing},
    author={Sanchez-Fernandez, Andres J and Moreno-Alvarez, Sergio and Rico-Gallego, Juan A and Tabik, Siham},
    journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
    year={2024},
    publisher={IEEE}
}
```

The paper was authored by Andres J. Sanchez-Fernandez (University of Extremadura), Sergio Moreno-Álvarez (National University of Distance Education), Juan A. Rico-Gallego (CénitS-COMPUTAEX), and Siham Tabik (University of Granada).

## Table of contents

- [Getting started](#getting-started)
  - [Prerequisites](#prerequisites)
  - [Installation](#installation)
- [Usage](#usage)
  - [Pretraining SSL models](#pretraining-ssl-models)
  - [Fine-tuning on downstream tasks](#fine-tuning-on-downstream-tasks)
- [License](#license)

## Getting started

### Prerequisites

The Anaconda distribution is recommended. You can install it by following the official installation guide.

Check if Anaconda is installed:

```shell
conda -V
```

### Installation

The `environment.yml` file lists all the packages needed to use this project. You can create your own environment from the file provided as follows:

```shell
conda env create -f environment.yml
```

Check that the new environment has been installed correctly:

```shell
conda env list
```

## Usage

Activate the conda environment:

```shell
conda activate ssl-os-rs
```

Now you can run any Python script.

### Pretraining SSL models

Four SSL models are considered: Barlow Twins, MoCo v2, SimCLR, and SimSiam, each with a ResNet18 backbone. Pretraining on the Sentinel2GlobalLULC pure-pixels dataset is launched via a Slurm job script:

```shell
./ssl_pretraining_slurm_launch_loop.sh <option>
```

The `<option>` argument selects one of four experiment types: RayTune, DDP, Imbalanced (default), or Balanced.

If RayTune is selected, the script generates a CSV file named `ray_tune_<backbone>_<model>.csv` containing the best configurations sorted by lowest training loss. This file must be placed in `./input/best_configs/` so that the other experiment types start from the pseudo-optimal hyperparameters found.
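As a sketch of that step (the file name below is illustrative, following the `ray_tune_<backbone>_<model>.csv` pattern):

```shell
# Stand-in for the CSV that the RayTune experiment would generate;
# the real file is produced by the pretraining script.
touch ray_tune_resnet18_BarlowTwins.csv

# Place it where the other experiment types look for the best configurations.
mkdir -p ./input/best_configs
cp ray_tune_resnet18_BarlowTwins.csv ./input/best_configs/
```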

### Fine-tuning on downstream tasks

```shell
sbatch finetuning_slurm.sh
```

This script runs `finetuning_run_localhost.sh` for the target downstream tasks: LULC fraction estimation and scene classification. It should be configured with only one or two `train_rates` values at a time to launch several Slurm jobs. The Python script accepts the desired number of samples per class as input. Upon completion of the jobs, several CSV files (one per seed) are generated inside the output folder. Their per-seed results can then be averaged with:

```shell
python3 sc_1_compute_mean_std_from_csv.py -i <parent_folder_of_the_csv_files> -o <desired_output_folder>
```

where `<parent_folder_of_the_csv_files>` should target the `multiclass/` and then the `multilabel/` folder, following the structure below:

```
csv_results/
├── multiclass/
│   ├── multiclass_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random_s=05_lr=0.001_m=0.9_wd=0.0_do=None.csv
│   ├── multiclass_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random_s=42_lr=0.001_m=0.9_wd=0.0_do=None.csv
│   ├── ...
├── multilabel/
│   ├── multilabel_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random_s=05_lr=0.01_m=0.9_wd=1e-05_do=None.csv
│   ├── multilabel_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random_s=42_lr=0.01_m=0.9_wd=1e-05_do=None.csv
│   ├── ...
```
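For example, the averaging script can be run once per task folder. The loop below only prints the commands as a dry run (paths are illustrative; drop the `echo` to execute them):

```shell
# Dry run: print one averaging command per task folder.
for task in multiclass multilabel; do
    echo python3 sc_1_compute_mean_std_from_csv.py \
        -i "csv_results/${task}/" -o "avg_csv_files/${task}/"
done
```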
The output files generated by the previous script should be manually arranged in folders as follows:
```
both_mean_std_csv_files/
├── multiclass/
│   ├── 001p/
│      ├── pp_mean_multiclass_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random.csv
│      ├── ...
│      ├── pp_std_multiclass_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random.csv
│      ├── ...
├── multilabel/
│   ├── 001p/
│      ├── pp_mean_multilabel_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random.csv
│      ├── ...
│      ├── pp_std_multilabel_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random.csv
│      ├── ...
```
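A minimal shell sketch of that arrangement (the files are created here only as stand-ins for the `pp_mean_*`/`pp_std_*` outputs of the previous step; `001p` presumably encodes the 0.010 training ratio):

```shell
# Create the expected layout.
mkdir -p both_mean_std_csv_files/multiclass/001p both_mean_std_csv_files/multilabel/001p

# Stand-ins for the files produced by the previous step.
touch "pp_mean_multiclass_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random.csv" \
      "pp_std_multiclass_tr=0.010_resnet18_BarlowTwins_bd=False_tl=FT_iw=random.csv"

# Move each task's files into its training-ratio folder.
mv pp_*multiclass*.csv both_mean_std_csv_files/multiclass/001p/
```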
The generated CSV files can be plotted with `sc_2_plot_final_graphs_v2.py`. This script takes the parent folder (`multiclass/` or `multilabel/`) as input and requires adjusting the hard-coded `x` variable to the number of training percentages available. It searches for the best results obtained on the validation dataset, builds a new dataframe with the final results used for the graphs, and generates line graphs of training ratio versus the desired final metric:
```shell
# Line graphs (sc_2), one per task folder:
python3 scripts/sc_2_plot_final_graphs_v2.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/02_avg_csv_files/multiclass/ -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -m f1_macro -sf pdf
python3 scripts/sc_2_plot_final_graphs_v2.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/02_avg_csv_files/multilabel/ -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -m rmse -sf pdf

# Bar graphs (sc_3), one per task and reference initialization (-r):
python3 scripts/sc_3_plot_final_bar_graphs_v2.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multiclass_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -r Random -sf pdf
python3 scripts/sc_3_plot_final_bar_graphs_v2.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multiclass_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -r ImageNet -sf pdf
python3 scripts/sc_3_plot_final_bar_graphs_v2.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multilabel_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -r Random -sf pdf
python3 scripts/sc_3_plot_final_bar_graphs_v2.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multilabel_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -r ImageNet -sf pdf

# LaTeX tables of the best results (sc_4):
python3 scripts/sc_4_generate_latex_tables.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multiclass_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -v
python3 scripts/sc_4_generate_latex_tables.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multilabel_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -v

# Discussion bar graphs (sc_5), using the ImageNet reference:
python3 scripts/sc_5_plot_discussion_bar_graphs.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multiclass_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -r ImageNet -v -sf pdf
python3 scripts/sc_5_plot_discussion_bar_graphs.py -i ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/exp_multilabel_best_results_means.csv -o ~/Documents/Experiments/SSL-BSU/02_v3_R1_Fine-tuning_new_results_val_test/03_dfs_final_results/ -r ImageNet -v -sf pdf
```

## License

CC BY 4.0

This work is licensed under a Creative Commons Attribution 4.0 International License. See the LICENSE file for details.
