The IMValidator isn't normalized, and what counts as a "good" score depends on the number of classes in your dataset. Also, depending on the dataset, it may correlate poorly with accuracy. You could try another validator, but most of them aren't normalized either, so they won't have an interpretable range like [0, 1]. In my opinion, the biggest challenge in the field right now is creating a validator that correlates well with accuracy across algorithms and datasets. Once that is found, creating a normalizer for it wouldn't be too difficult. Unfortunately, most validators are poorly correlated with accuracy. Despite there being hundreds of papers on unsupervised domain adaptation, this…
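To see why the score depends on the number of classes, here's a minimal NumPy sketch of the information-maximization score (diversity of the mean prediction minus the average per-sample entropy), which is the quantity IMValidator is based on. The `im_score_normalized` function is a hypothetical normalizer, not part of the library: it divides by log(C), the maximum possible value of each term, so scores from datasets with different class counts land on a comparable scale.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p, axis=-1):
    # Shannon entropy in nats; small epsilon avoids log(0).
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def im_score(logits):
    """Information maximization score: diversity minus average confidence entropy.

    Ranges over [-log(C), log(C)] for C classes, so the raw value is not
    comparable across datasets with different numbers of classes.
    """
    p = softmax(logits)
    diversity = entropy(p.mean(axis=0))       # entropy of the mean prediction
    avg_entropy = entropy(p, axis=1).mean()   # mean per-sample entropy
    return diversity - avg_entropy

def im_score_normalized(logits):
    # Hypothetical normalization: divide by log(C), the max of each term.
    num_classes = logits.shape[1]
    return im_score(logits) / np.log(num_classes)
```

For example, perfectly confident predictions spread evenly over C classes (e.g. `logits = 50 * np.eye(C)`) score log(C) raw, so the "best possible" raw score is 1.61 for 5 classes but 2.30 for 10, while the normalized version is 1.0 in both cases. This only rescales the range, though; it doesn't fix the weak correlation with accuracy discussed above.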
