
Poll: Which combination of backbone and architecture provides you with the best results? #205

JordanMakesMaps opened this issue Sep 14, 2019 · 5 comments


@JordanMakesMaps

As indicated in the title, I'm just curious to see which combination of backbones and architecture gives the best results for the dataset you're working with.

I was using ResNet18 with U-Net on images of 1024×1024 pixels and recently switched to efficientnetb3, and the results were substantially better.

Anyone else have some winning combinations they'd recommend?

@theroggy

theroggy commented Oct 1, 2019

I had the best results with inceptionresnetv2 + unet.

My tests at the time included some other backbones not supported by this project, but didn't include the efficientnet backbones if I recall correctly, so I'd be interested in your experience if you compare them with inceptionresnetv2.

@sfarahaninia73

> I had the best results with inceptionresnetv2 + unet.
>
> My tests at the time did include some other backbones not supported by this project, but didn't include efficientnetb if I recall correctly, so would be interested in your experiences if you would compare it with inceptionresnetv2.

Hi, how can I calculate dice, recall, and precision for my model?

@JordanMakesMaps
Author

@sfarahaninia73 there are already functions that were created by @qubvel that will do this for you during the training process:

```python
import segmentation_models as sm
import matplotlib.pyplot as plt
from keras.optimizers import Adam
from segmentation_models.losses import categorical_focal_dice_loss
from segmentation_models.metrics import precision, recall, iou_score, f1_score, f2_score

BACKBONE = 'efficientnetb3'
preprocess_input = sm.get_preprocessing(BACKBONE)

# size = input tile dimension, nb_classes = number of segmentation classes
model = sm.Unet(backbone_name=BACKBONE,
                input_shape=(size, size, 3),
                classes=nb_classes,
                activation='softmax',
                encoder_weights='imagenet',
                encoder_freeze=True)

model.compile(optimizer=Adam(lr=0.001),
              loss=categorical_focal_dice_loss,
              metrics=[precision, recall, iou_score, f1_score, f2_score])

model.summary()

# ... train here, e.g. history = model.fit(...)

# to view a metric across all epochs, plot it
plt.figure(figsize=(10, 5))
plt.plot(history.history["precision"], label="precision")
plt.plot(history.history["val_precision"], label="val_precision")
plt.title("Training Precision")
plt.xlabel("Epoch #")
plt.ylabel("Precision")
plt.legend(loc="upper right")
plt.show()

plt.figure(figsize=(10, 5))
plt.plot(history.history["recall"], label="recall")
plt.plot(history.history["val_recall"], label="val_recall")
plt.title("Training Recall")
plt.xlabel("Epoch #")
plt.ylabel("Recall")
plt.legend(loc="upper right")
plt.show()
```
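If you want the same numbers outside of training (e.g. on saved prediction masks), the arithmetic is straightforward. Here's a minimal NumPy sketch for binary masks (the helper name and the 0.5 threshold are illustrative assumptions, not part of segmentation_models):

```python
import numpy as np

def mask_metrics(y_true, y_pred, threshold=0.5, eps=1e-7):
    """Compute precision, recall, and dice for a pair of binary masks."""
    t = (np.asarray(y_true, dtype=float) > threshold).astype(float)
    p = (np.asarray(y_pred, dtype=float) > threshold).astype(float)
    tp = np.sum(t * p)            # true positives
    fp = np.sum((1 - t) * p)      # false positives
    fn = np.sum(t * (1 - p))      # false negatives
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)  # dice == f1 for binary masks
    return precision, recall, dice

# toy example: 2x2 ground-truth vs. predicted mask
pr, rc, dc = mask_metrics([[1, 0], [1, 1]], [[1, 0], [0, 1]])
```

For multi-class output you'd apply this per channel and average, which is essentially what the built-in metrics do during training.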

@sfarahaninia73

sfarahaninia73 commented Oct 31, 2019 via email

@tonyboston-au

My best results were with U-Net and a ResNet50 encoder, using total_loss = dice_loss + (1 * focal_loss) for multi-class segmentation of remote sensing images.
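For anyone curious what that combined loss actually computes, here is a minimal NumPy sketch of the binary case (the function names, gamma = 2.0, and the toy masks are illustrative assumptions; segmentation_models ships ready-made dice and focal losses you'd use in practice):

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    # 1 - dice coefficient: penalizes poor mask overlap
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    # mean binary focal loss: down-weights easy, well-classified pixels
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    pt = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.8, 0.2, 0.1])

# the combined objective from the comment above
total_loss = dice_loss(y_true, y_pred) + 1.0 * focal_loss(y_true, y_pred)
```

The dice term drives overlap while the focal term keeps hard pixels from being swamped by easy background, which is why the sum tends to work well for imbalanced multi-class masks.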
