
How to generate multi class segmentation image from trained model? #185

ksriharshitha opened this issue Aug 22, 2019 · 2 comments

I have trained the model with 8 classes. But during prediction, I'm not able to plot the ground truth and predictions.


JordanMakesMaps commented Aug 22, 2019

"I'm not able to plot the ground truth and predictions."

What do you mean by plot? Literally showing the image using matplotlib? The predictions should be coming out with shape (batch size, height, width, number of classes), so to visualize a prediction you need something like numpy.argmax(prediction, axis=-1) to collapse the 8 class channels into a single-channel class map (axis=2 also works once you've stripped the batch dimension). If you used to_categorical() then the ground truth will need to be treated the same way.
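A minimal sketch of that argmax step, using random arrays as stand-ins for a real model's predict() output and a to_categorical() ground truth (the shapes and variable names here are illustrative, not from the thread):

```python
import numpy as np

num_classes = 8
# Stand-in for model.predict(x): (batch, height, width, classes)
pred = np.random.rand(1, 4, 4, num_classes)
# Stand-in for a to_categorical() mask: (height, width, classes)
gt = np.eye(num_classes)[np.random.randint(0, num_classes, (4, 4))]

pred_labels = np.argmax(pred[0], axis=-1)   # (height, width) integer class map
gt_labels = np.argmax(gt, axis=-1)          # undo to_categorical() the same way
print(pred_labels.shape, gt_labels.shape)
```

Both single-channel arrays can then be passed straight to something like plt.imshow(pred_labels, cmap='tab10') for a side-by-side comparison.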


JonnoFTW commented Oct 17, 2019

If you have one-hot encoded masks in shape (rows, cols, classes), you can make an image with this function:

import numpy as np

def mask2img(mask):
    palette = {
        0: (0, 0, 0),
        1: (255, 0, 0),
        2: (0, 255, 0),
        3: (0, 0, 255),
        4: (0, 255, 255),
        # ...add one entry per class, up to your number of classes
    }
    rows = mask.shape[0]
    cols = mask.shape[1]
    image = np.zeros((rows, cols, 3), dtype=np.uint8)
    for j in range(rows):
        for i in range(cols):
            image[j, i] = palette[np.argmax(mask[j, i])]
    return image
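For larger images the per-pixel Python loop gets slow; the same mapping can be done with one argmax plus a NumPy lookup table. A sketch under the assumption of a palette dict like the one above (the function name and the tiny 3-class example are illustrative):

```python
import numpy as np

def mask2img_fast(mask, palette):
    # Build a (num_classes, 3) lookup table from the palette dict,
    # then index it with the per-pixel argmax in one vectorized step.
    lut = np.zeros((max(palette) + 1, 3), dtype=np.uint8)
    for cls, rgb in palette.items():
        lut[cls] = rgb
    return lut[np.argmax(mask, axis=-1)]

# Tiny usage example with a 3-class palette
palette = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 255, 0)}
mask = np.eye(3)[np.array([[0, 1], [2, 1]])]   # (2, 2, 3) one-hot mask
img = mask2img_fast(mask, palette)             # (2, 2, 3) uint8 RGB image
```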
