DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET

Official PyTorch implementation of the paper 💎 DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET, accepted at WACV 2025.

Preprint: https://arxiv.org/abs/2410.23219

Installation

  1. Create environment: conda env create -n diamond --file requirements.yaml
  2. Activate environment: conda activate diamond
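To verify that the environment resolved correctly (a quick sanity check, not an official setup step), you can print the installed PyTorch version and GPU visibility:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"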

Data

We used data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Japanese Alzheimer's Disease Neuroimaging Initiative (J-ADNI). Since we are not allowed to share our data, you will need to process the data yourself. Data for training, validation, and testing should be stored in separate HDF5 files, using the following hierarchical format:

  1. First level: a unique identifier, e.g. the image ID.
  2. Second level: each first-level entry always contains the following:
    1. A group named MRI/T1, containing the T1-weighted 3D MRI data.
    2. A group named PET/FDG, containing the 3D FDG PET data.
    3. A string attribute DX containing the diagnosis label: CN, Dementia/AD, FTD, or MCI, if available.
    4. A scalar attribute RID with the patient ID, if available.
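A minimal sketch of writing one scan in this layout is shown below, assuming h5py and preprocessed NumPy volumes. The file name, image ID, label, and array shapes are placeholders, and whether MRI/T1 and PET/FDG hold the volumes directly or nest a further dataset should be confirmed against the data-loading code in src.

```python
import h5py
import numpy as np

# Placeholder volumes; replace with your preprocessed scans (shapes are illustrative).
t1 = np.zeros((128, 128, 128), dtype=np.float32)   # T1-weighted MRI
fdg = np.zeros((128, 128, 128), dtype=np.float32)  # FDG PET

with h5py.File("train.h5", "w") as f:
    # First level: one group per scan, keyed by a unique image ID (hypothetical here).
    g = f.create_group("I123456")
    # Second level: MRI/T1 and PET/FDG entries holding the image data.
    g.create_dataset("MRI/T1", data=t1)
    g.create_dataset("PET/FDG", data=fdg)
    # Diagnosis label and patient ID stored as attributes, if available.
    g.attrs["DX"] = "CN"
    g.attrs["RID"] = 123456
```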

Usage

The package is built on PyTorch. To train and test DiaMond, run the src/train.py script. The configuration file with the command-line arguments is stored in config/config.yaml; an illustrative sketch follows the argument list below. The essential command-line arguments are:

  • --dataset_path: Path to the HDF5 file containing the train, validation, or test data split.
  • --img_size: Size of the input scan.
  • --test: Set to True for model evaluation.
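As a rough illustration only, a minimal configuration covering these arguments might look like the following; the key names mirror the flags above, but the values are placeholders rather than the shipped defaults, so consult config/config.yaml for the real contents.

```yaml
# Hypothetical excerpt; see config/config.yaml for the actual keys and defaults.
dataset_path: /path/to/data/train.h5   # HDF5 split to load
img_size: 128                          # size of the input scan
test: false                            # set to true for evaluation
```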

After specifying the config file, start training or evaluation with:

python src/train.py
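The flags listed above can also be passed explicitly; the exact syntax should be confirmed against the argument parser in src/train.py, and the paths and image size below are placeholders:

python src/train.py --dataset_path /path/to/train.h5 --img_size 128

python src/train.py --dataset_path /path/to/test.h5 --img_size 128 --test True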

Contacts

For any questions, please contact: Yitong Li ([email protected])

If you find this repository useful, please consider giving a star 🌟 and citing the paper:

@misc{li2024diamond,
    title={DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET},
    author={Li, Yitong and Ghahremani, Morteza and Wally, Youssef and Wachinger, Christian},
    eprint={2410.23219},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    year={2024},
    url={https://arxiv.org/abs/2410.23219},
}

WACV 2025 proceedings:

Coming soon