Conversation
PEP8 and tests pass on my system for TensorFlow and Python 3.5. They fail on Travis due to existing errors in keras-contrib, not my code, so this can probably be merged. (keras_contrib/layers/recurrent.py:12: ImportError: cannot import name '_time_distributed_dense')
Thanks for your PR! There are a few things to do:
Alright, sounds good, I'll try to do that in the next few weeks.
I also forgot one important one:
Almost LGTM, just a filename fix
examples/pascal_voc_jaccard_loss.py
Outdated
@@ -0,0 +1,84 @@
'''
Trains a DenseNet-40-12 model on the CIFAR-10 Dataset.
shouldn't this file be called something like cifar10_jaccard_loss.py?
Oops, I need to change that docstring; it has to use Pascal VOC because it's showing how jaccard loss is good for segmentation.
Conflicts: examples/pascal_voc_jaccard_loss.py
I'm glad you've actually made a full pascal_voc example! We've needed one for quite a while now. Unfortunately, 70% accuracy is fairly low performance on pascal_voc. The best way to make that clearer is a different evaluation metric: intersection over union, also known as IoU. An implementation is available at https://github.com/aurora95/Keras-FCN/blob/master/evaluate.py.
I had problems of this sort before, which are detailed in #63.
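For anyone who wants to try the suggested metric locally, here is a minimal NumPy sketch of mean IoU over integer label maps. The function name and signature are mine, not the API of Keras-FCN's evaluate.py:

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union for integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        intersection = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union > 0:  # skip classes absent from both prediction and target
            ious.append(intersection / union)
    return float(np.mean(ious))
```

Unlike pixel accuracy, this averages over classes, so a model that ignores a rare class is penalized even if most pixels are background.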
examples/pascal_voc_jaccard_loss.py
Outdated
seed=seed)
mask_generator = image_datagen.flow_from_directory(
    pascal_folder,
    target_size=(img_rows, img_cols),
Resizing the labels this way will change the label values, and training won't work correctly. Consider merging SegDataGenerator from Keras-FCN. A usage example is available in Keras-FCN's train.py.
Oh shit, you're right. Thanks for catching that. I'll do that or a basic decimation (I only have 2GB gpu).
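As a reference for the "basic decimation" option mentioned above, here is a sketch of nearest-neighbor label resizing in NumPy. Because it only ever selects existing pixels, it can never blend two class ids into a meaningless third value the way bilinear resizing does (the helper name is mine, not from the PR):

```python
import numpy as np

def resize_labels_nearest(labels, out_h, out_w):
    """Resize an integer label map by index selection only (no interpolation)."""
    in_h, in_w = labels.shape
    rows = np.arange(out_h) * in_h // out_h  # nearest source row per output row
    cols = np.arange(out_w) * in_w // out_w  # nearest source column per output column
    return labels[rows[:, None], cols]
```

Every value in the output is guaranteed to be one of the original class ids, which is the property the review comment is asking for.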
Great stuff! I'll give it a test.
…u/keras-contrib into segmentation-data-generator
The augmentation works but it isn't converging on Pascal VOC data, probably due to this issue. Thanks for working with me on this, but I'm going to close this PR since I don't have spare time for the amount of work you've asked me to do. Making an example script, fixing everything associated with it, training, documentation, and a unit test is going to take more time than it's worth for a simple loss function, I'm afraid. Anyone else is welcome to continue this, and anyone who wants to use the function can refer to this gist.
@wassname Sorry! I didn't mean to imply that the performance on Pascal VOC would hold up this PR, I was just explaining what was going on. Let me give it one last look and this can probably be merged. Do you have the changes you tested with SegDataGenerator somewhere? I'd like to take a look. Also, I don't know how others feel about it, but I'd love to have that loss visualization included. There should be more examples like that which help people understand the functions.
@wassname Just a tiny PEP8 issue, and if you wouldn't mind adding a note about the training performance.
Add jaccard distance loss
Ok great! I've pushed the version that uses SegDataGenerator. I'll check it for PEP8 and make that note in the next few days.
@wassname I made a separate PR #201 which separates the SegDataGenerator stuff out and includes the PEP8 fix; it just took a moment, and I improved the doc wording a bit. If #201 passes Travis I'll merge it. When I have a chance I'll also finish up the SegDataGenerator PR and probably use your example as a starting point, but that will be a few days. Thanks!
I'll close this again in favor of #201 to de-clutter.
Sounds good, thanks!
Hey, I was interested in trying out jaccard loss for my problems, but I'm not sure if the implementation is correct. The absolute symbol in the jaccard loss is the cardinality of the set, not the absolute value. Can anyone explain the math behind why the absolute value is used? Am I missing something here?
@HasnainRaz I can't revisit this in detail at the moment myself, so take my comment with a grain of salt, but I believe the abs isn't a problem for certain problem types. If you have a more general implementation, could you consider submitting a PR with a test that fails with the current code but passes with your fix?
Adds jaccard distance loss.
Why do we want this? It's better than categorical cross-entropy when you have unbalanced data within a sample, e.g. for pixel-wise segmentation. For this reason it (and the similar dice loss) has been used in Kaggle competitions with great results.
I modified this version to converge from greater than zero to zero because it looks nicer and matches other losses. I've also smoothed it to have a smooth gradient even as it approaches zero loss (otherwise it may stop short of max accuracy like dice loss).
Why not dice loss? I have found this to be similar in application but superior in results to dice loss (#121), because dice loss has a vanishing gradient as it approaches zero loss. So if you merge this you might want to consider closing #121 without merging.
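To make the description above concrete, here is a NumPy sketch of the smoothed jaccard distance as described (converges from above zero down to zero, with a smoothing term so the gradient does not vanish near zero loss). The merged Keras version operates on backend tensors; this mirrors the same arithmetic and is not the merged code:

```python
import numpy as np

def jaccard_distance(y_true, y_pred, smooth=100.0):
    """Smoothed jaccard distance: 0 for a perfect match, larger for mismatch.

    The `smooth` term keeps the denominator (and gradient) well behaved as the
    loss approaches zero; rescaling by `smooth` keeps values in a usable range.
    """
    intersection = np.sum(np.abs(y_true * y_pred), axis=-1)
    union = np.sum(np.abs(y_true) + np.abs(y_pred), axis=-1) - intersection
    jac = (intersection + smooth) / (union + smooth)
    return (1.0 - jac) * smooth
```

For one-hot or probability-valued inputs the `abs` calls are no-ops, which relates to the cardinality question raised above: on non-negative inputs the elementwise product is the standard soft-set intersection.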