Number of stacked convolutions #2325
Unanswered
Whichislove asked this question in Q&A
Hi there,
I have noticed that the original FCOS3D paper says the prediction head consists of 4 stacked convolutions plus a final classification layer. However, in the config file from the master branch I see 2 convolutional layers (cls_convs), 1 convolutional layer (conv_cls_prev), and 1 final layer. Was the architecture modified after publication, or is there another reason for the difference?
I have a similar question about the regression head: 2 convolutional layers (reg_convs), 1 convolutional layer (conv_reg_prev), and 1 final layer.
Also, in the config file the number of stacked convolutions is set to 2, not 4 (see the abridged snippet below).
My guess is that the first 2 convolutional layers are those stacked convolutions, since in the source code it is the stacked_convs value from the config file that sets them to 2, and that conv_cls_prev and conv_reg_prev are simply not shown in the diagram in the original paper. Am I right? If so, was there a reason to change it from 4 to 2?
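For reference, here is roughly how I read the relevant part of the head config. Apart from stacked_convs=2, the field values are my own placeholders rather than an exact copy of the file, so please check the actual config in the repo:

```python
# Abridged sketch of the bbox_head config (illustrative, not copied verbatim
# from the repo; stacked_convs=2 is the value I am asking about).
bbox_head = dict(
    type='FCOSMono3DHead',
    in_channels=256,
    feat_channels=256,
    stacked_convs=2,  # 2 stacked 3x3 convs per branch; the paper figure shows 4
    # ... remaining fields (losses, strides, group settings, etc.) omitted
)
```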
Replies: 1 comment
This is just a modification based on empirical experience. Using two stacked layers achieves performance similar to four layers, and the memory saved is more effectively spent on disentangled small heads for the different targets, because these targets are more distinct in monocular 3D detection than in 2D detection (this corresponds to the disentangled heads in the original paper). You can also try setting the number of stacked layers to 4; typically it does not further boost the final performance.
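To make the structure concrete, below is a minimal PyTorch sketch of this design: 2 stacked convs per branch followed by small disentangled prediction layers. It is not the actual FCOSMono3DHead implementation; the target names, channel sizes, and the single conv_*_prev layers are only illustrative.

```python
# Minimal sketch of a mono3D head with 2 stacked convs and small
# disentangled target branches. Illustrative only; not the mmdet3d code.
import torch
import torch.nn as nn


class ToyMono3DHead(nn.Module):
    def __init__(self, in_channels=256, feat_channels=256, num_classes=10):
        super().__init__()
        # Shared stacked convolutions (stacked_convs=2 in the released config).
        self.cls_convs = self._stack(in_channels, feat_channels, 2)
        self.reg_convs = self._stack(in_channels, feat_channels, 2)
        # Lightweight conv in front of each final prediction layer,
        # analogous to conv_cls_prev / conv_reg_prev.
        self.conv_cls_prev = self._stack(feat_channels, feat_channels, 1)
        self.conv_reg_prev = self._stack(feat_channels, feat_channels, 1)
        self.conv_cls = nn.Conv2d(feat_channels, num_classes, 1)
        # Small disentangled heads, one per 3D regression target.
        self.reg_branches = nn.ModuleDict({
            'offset': nn.Conv2d(feat_channels, 2, 1),
            'depth': nn.Conv2d(feat_channels, 1, 1),
            'size': nn.Conv2d(feat_channels, 3, 1),
            'rot': nn.Conv2d(feat_channels, 1, 1),
        })

    @staticmethod
    def _stack(in_ch, out_ch, num_stacked):
        layers = []
        for i in range(num_stacked):
            layers += [
                nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                nn.GroupNorm(32, out_ch),
                nn.ReLU(inplace=True),
            ]
        return nn.Sequential(*layers)

    def forward(self, feat):
        cls_feat = self.conv_cls_prev(self.cls_convs(feat))
        reg_feat = self.conv_reg_prev(self.reg_convs(feat))
        cls_score = self.conv_cls(cls_feat)
        bbox_preds = {name: branch(reg_feat) for name, branch in self.reg_branches.items()}
        return cls_score, bbox_preds


# Quick shape check on a single FPN level.
head = ToyMono3DHead()
cls_score, bbox_preds = head(torch.randn(1, 256, 58, 100))
print(cls_score.shape, {k: v.shape for k, v in bbox_preds.items()})
```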