Could a new type of loss be introduced for classes? #10
I was experimenting with this and I have a related question.
Here we don't get exact logprobs: a constant is added to each log-probability based on the density of the distribution (`+ np.log(orient_bins / (2 * np.pi))`), so the probabilities won't sum to one. What is the purpose of this? Better visualization, or numerical stability? Thanks in advance!
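A minimal numpy sketch of what that constant does, under the assumption that the offset converts per-bin probabilities into log *densities* over [0, 2π) (all variable names here are illustrative, not from the repo):

```python
import numpy as np

# Assumed setup: `probs` are discrete probabilities over `orient_bins`
# yaw bins. Adding log(orient_bins / (2*pi)) divides each probability by
# the bin width, turning it into a density over [0, 2*pi).
orient_bins = 8
bin_width = 2 * np.pi / orient_bins

probs = np.random.default_rng(0).random(orient_bins)
probs /= probs.sum()  # discrete probabilities: sum to 1

log_density = np.log(probs) + np.log(orient_bins / (2 * np.pi))
density = np.exp(log_density)

print(density.sum())                # != 1 in general (it's a density, not a mass)
print((density * bin_width).sum())  # integrates to 1 over [0, 2*pi)
```

So if this reading is right, the values no longer sum to one by construction, but they integrate to one over the angle range, which makes them comparable across different bin counts.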
Theoretically these logprobs can be backpropagated. But I don't think information extracted from the pose distribution can help improve the classification.
I understand. But theoretically, wouldn't the yaw angle distribution, evaluated at some density from 0 to 2π, look roughly like the logprobs here?
I don't understand this. Without
In EPro-PnP-Det_v2, if we want to improve the classification performance, could a new type of loss theoretically be introduced with the help of the deformable correspondence head?
I was thinking about how the yaw angle distribution corresponds to different classes. During the AMIS algorithm we could take the generated rotation distribution and evaluate it from 0 to 2π at some density. Then we could feed this distribution to a simple network that classifies based on yaw angle. Maybe this isn't suitable for all classes, but it might be useful for training a binary classifier for pedestrians and cones (which can be confused by classifiers based purely on image inputs), and adding its scores to the corresponding ones in the FCOS detection head with some weighting.
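The fusion step being proposed could be sketched roughly like this in numpy. Everything here is hypothetical (the one-layer classifier, `alpha`, the score names); it only illustrates the shape of the idea, not anything in the repo:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical input: the pose distribution evaluated at n_eval angles
# in [0, 2*pi), normalized to act as a feature vector.
n_eval = 32
yaw_density = rng.random(n_eval)
yaw_density /= yaw_density.sum()

# Hypothetical one-layer binary classifier over the yaw distribution
# (e.g. pedestrian vs. cone); in practice this would be a small trained net.
w = rng.standard_normal(n_eval)
b = 0.0
yaw_score = sigmoid(yaw_density @ w + b)

# Blend into the detection head's class score with some weighting `alpha`.
fcos_score = 0.6   # stand-in for the FCOS classification score
alpha = 0.2
fused_score = (1 - alpha) * fcos_score + alpha * yaw_score
```

With a convex blend like this the fused score stays in [0, 1]; the open question is whether the yaw distribution carries enough class-discriminative signal to make `yaw_score` informative.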
Or could we just use these orient logprobs for this purpose?
EPro-PnP-v2/EPro-PnP-Det_v2/epropnp_det/models/dense_heads/deform_pnp_head.py
Lines 563 to 574 in 85215de
This is just an idea, and my question is: could this theoretically work? Can it be backpropagated at all?
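On the backprop question, one thing worth noting: if the logprobs come from something softmax-like plus the constant density offset, the constant contributes nothing to the gradient. A small numpy finite-difference check (plain numpy instead of autograd, purely illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def nll(z, offset):
    # NLL of bin 0 under log-probs shifted by a constant `offset`,
    # mimicking `log_probs + np.log(orient_bins / (2*pi))`.
    log_probs = np.log(softmax(z)) + offset
    return -log_probs[0]

orient_bins = 8
logits = np.random.default_rng(1).standard_normal(orient_bins)
offset = np.log(orient_bins / (2 * np.pi))

# Central finite differences of the loss w.r.t. each logit,
# with and without the constant offset.
eps = 1e-6
g_with, g_without = [], []
for i in range(orient_bins):
    d = np.zeros(orient_bins)
    d[i] = eps
    g_with.append((nll(logits + d, offset) - nll(logits - d, offset)) / (2 * eps))
    g_without.append((nll(logits + d, 0.0) - nll(logits - d, 0.0)) / (2 * eps))

print(np.allclose(g_with, g_without))  # True: the constant shift cancels
```

So gradients would flow through such logprobs exactly as through ordinary log-softmax outputs; whether that gradient signal actually helps classification is the separate, empirical question.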
Thanks in advance for the answer, and for the previous ones too, they've been very useful.