S&D Messenger: Exchanging Semantic and Domain Knowledge for Generic Semi-Supervised Medical Image Segmentation

🚀 Updates & Todo List

‼️ IMPORTANT: This is not the final version; several items in the list below are still pending. We will release the final version as soon as possible. Sorry for any inconvenience this may cause.

  • Create the repository and the ReadMe Template
  • Release the training and testing codes for S&D-Messenger
  • Release the pre-processed datasets (Synapse, MMWHS, LASeg, M&Ms)
  • Release the model weights and the logs for 20% Synapse dataset
  • Release the model weights and the logs for 40% Synapse dataset
  • Release the model weights and the logs for MMWHS (CT2MRI, MRI2CT)
  • Release the model weights and the logs for LASeg (5%, 10%)
  • Release the model weights and the logs for M&Ms (Domain A, Domain B, Domain C, Domain D)

⭐ Introduction

[Framework overview figure]

Semi-supervised medical image segmentation (SSMIS) has emerged as a promising solution to the time-consuming manual labeling required in the medical field. However, practical datasets often contain domain variations, which give rise to derivative scenarios such as Semi-MDG and UMDA. We aim to develop a generic framework that masters all three tasks. We observe a critical challenge shared across the three scenarios: the explicit semantic knowledge needed for segmentation accuracy and the rich domain knowledge needed for generalizability exist exclusively in the labeled set and the unlabeled set, respectively. This discrepancy prevents existing methods from effectively comprehending both types of knowledge under semi-supervised settings. To tackle this challenge, we develop a Semantic & Domain Knowledge Messenger (S&D Messenger) that facilitates direct knowledge delivery between the labeled and unlabeled sets, allowing the model to comprehend both types of knowledge in each individual learning flow. Equipped with our S&D Messenger, a naive pseudo-labeling method achieves substantial improvements on ten benchmark datasets for SSMIS (+7.5%), UMDA (+5.6%), and Semi-MDG (+1.14%), compared with SOTA methods designed for the specific tasks.

🔨 Installation

  • The main Python libraries of our experimental environment are listed in requirements.txt. You can install S&D-Messenger as follows:
git clone https://github.com/xmed-lab/SD-Messenger.git
cd SD-Messenger
conda create -n SDMessenger
conda activate SDMessenger
pip install -r ./requirements.txt
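
Optionally, verify that the environment is working. This is a minimal sanity check, assuming PyTorch is among the packages listed in requirements.txt:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"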

💻 Prepare Dataset

Download the pre-processed datasets and splits from the following links:

| Dataset | Paper | Processed Dataset | Splits                     |
| Synapse | Link  | Link (Png)        | 20%, 40%                   |
| MMWHS   | Link  | Link (NdArray)    | CT2MRI, MRI2CT             |
| M&Ms    | Link  | Link (Png)        | Do. A, Do. B, Do. C, Do. D |
| LASeg   | Link  | Link (NdArray)    | 5%, 10%                    |
  • Place the split files in ./splits
  • Unzip the dataset files:
unzip synapse.zip
unzip MMWHS.zip
unzip MNMS.zip
unzip la_seg.zip
  • Specify the dataset root path in the config files (a hypothetical layout is sketched below)
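
As a hypothetical example (the /data/SD_Messenger root below is illustrative, and the exact config field may differ), you could extract all archives into a single data root and point the config files at the resulting folders:
mkdir -p /data/SD_Messenger
unzip synapse.zip -d /data/SD_Messenger/synapse    # repeat for MMWHS.zip, MNMS.zip, la_seg.zip
# then set the dataset root in the config files to the matching folders under /data/SD_Messenger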

🔑 Train and Evaluate S&D-Messenger

  • Train S&D-Messenger on the Synapse, LASeg, MMWHS, and M&Ms datasets:
    • Place the dataset and split files in the corresponding folder paths
    • The model of S&D-Messenger is defined in ./SD_Messenger/model, with a backbone and an SDMessengerTransformer
    • You can train S&D-Messenger on the different datasets as follows:
cd ./SD_Messenger

# Synapse Dataset
bash scripts/train_synapse_1_5.sh ${GPU_NUM} ${PORT}    # 20% labeled
bash scripts/train_synapse_2_5.sh ${GPU_NUM} ${PORT}    # 40% labeled

# LASeg Dataset
bash scripts/train_laseg.sh ${GPU_NUM} ${PORT}
# MMWHS Dataset
bash scripts/train_mmwhs.sh ${GPU_NUM} ${PORT}
# M&Ms Dataset
bash scripts/train_mm.sh ${GPU_NUM} ${PORT}
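
For example, to launch training on the 20%-labeled Synapse split (the GPU count and port below are illustrative; use your available number of GPUs and any free port):
bash scripts/train_synapse_1_5.sh 4 29500    # 4 GPUs, master port 29500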
  • Evaluate S&D-Messenger on the Synapse, LASeg, MMWHS, and M&Ms datasets:
# Synapse Dataset
bash scripts/test_synapse_1_5.sh ${GPU_NUM} ${CHECKPOINT_PATH}    # 20% labeled
bash scripts/test_synapse_2_5.sh ${GPU_NUM} ${CHECKPOINT_PATH}    # 40% labeled

# LASeg Dataset
bash scripts/test_laseg.sh ${GPU_NUM} ${CHECKPOINT_PATH}
# MMWHS Dataset
bash scripts/test_mmwhs.sh ${GPU_NUM} ${CHECKPOINT_PATH}
# M&Ms Dataset
bash scripts/test_mm.sh ${GPU_NUM} ${CHECKPOINT_PATH}
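
For example, to evaluate the 20%-labeled Synapse model with a self-trained or downloaded checkpoint (the checkpoint path below is a placeholder; point it to your actual weight file):
bash scripts/test_synapse_1_5.sh 1 ./checkpoints/synapse_1_5_best.pth    # single GPU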

📘 Model Weights and Logs

To improve the reproducibility of the results, we have shared the trained model weights and training logs for your reference.

| Dataset | Split  | DICE (%) | Model Weights | Training Log |
| Synapse | 20%    | 68.38    | weight        | log          |
| Synapse | 40%    | 71.53    | weight        | log          |
| MMWHS   | CT2MRI | 77.0     | weight        | log          |
| MMWHS   | MRI2CT | 91.4     | weight        | log          |
| M&Ms    | Do. A  | 86.50    | weight        | log          |
| M&Ms    | Do. B  | 87.58    | weight        | log          |
| M&Ms    | Do. C  | 88.30    | weight        | log          |
| M&Ms    | Do. D  | 89.76    | weight        | log          |
| LASeg   | 5%     | 90.21    | weight        | log          |
| LASeg   | 10%    | 91.46    | weight        | log          |
ℹ️ Please note that the released model weights may perform slightly better than the results reported in the paper, as the reported numbers reflect the average performance across five experimental runs for each setting.

🎯 Results

[Result figures: fig_1, fig_2, fig_3]

📚 Citation

If you find our paper helpful, please consider citing it in your publications.

@article{zhang2024s,
  title={S\&D Messenger: Exchanging Semantic and Domain Knowledge for Generic Semi-Supervised Medical Image Segmentation},
  author={Zhang, Qixiang and Wang, Haonan and Li, Xiaomeng},
  journal={arXiv preprint arXiv:2407.07763},
  year={2024}
}

🍻 Acknowledgements

We sincerely appreciate these three highly valuable repositories: UniMatch, SegFormer, and A&D.
