difference between two types of depth #2
So, as you said, we could use the two context-view depths and the rendered depth to compute the depth loss. But in this setting, we also utilize the target depth, which is acquired only from the rendered Gaussians.
For the first question, the output of the pose network is the w2c camera transform from input1 to input2, so for the Gaussian adapter we use pose_rev to get the c2w transform (from the target to each view, because we assume the target view has identity camera parameters).
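As a minimal sketch of the w2c-to-c2w reversal described above (the function name here is hypothetical, not the repo's `pose_rev`; this assumes 4x4 homogeneous pose matrices):

```python
import numpy as np

def invert_pose(w2c: np.ndarray) -> np.ndarray:
    """Invert a 4x4 world-to-camera pose to camera-to-world.

    Uses the closed form [R | t]^-1 = [R^T | -R^T t] instead of a
    general matrix inverse, which is cheaper and numerically stabler.
    """
    R = w2c[:3, :3]
    t = w2c[:3, 3]
    c2w = np.eye(4)
    c2w[:3, :3] = R.T
    c2w[:3, 3] = -R.T @ t
    return c2w

# Example: a pose rotated about z and translated, inverted back.
theta = 0.3
w2c = np.eye(4)
w2c[:3, :3] = [[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]]
w2c[:3, 3] = [1.0, 2.0, 3.0]
c2w = invert_pose(w2c)
```

Composing the two should give the identity, i.e. `c2w @ w2c` is (numerically) `np.eye(4)`.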
For the second one, that loss enforces geometric consistency; for more detail, please refer to the paper.
The paper link cannot be opened; could you provide the paper title?
Sorry about that. The title of the paper is "Unsupervised Scale-consistent Depth Learning from Video".
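For readers without access to the paper, the geometric consistency term in "Unsupervised Scale-consistent Depth Learning from Video" (SC-Depth) compares a depth map warped from one frame with the depth map predicted in the other frame, normalized so it is bounded. A simplified per-pixel sketch, assuming the warped and interpolated depth maps are already computed and aligned (the warping itself is omitted here):

```python
import numpy as np

def geometry_consistency(d_warped: np.ndarray,
                         d_interp: np.ndarray,
                         eps: float = 1e-6) -> float:
    """SC-Depth-style scale-consistency term:
    mean of |D_ab - D'_a| / (D_ab + D'_a), each pixel bounded in [0, 1).

    d_warped: depth of frame b projected into frame a
    d_interp: depth map of frame a sampled at the projected pixel locations
    """
    diff = np.abs(d_warped - d_interp) / (d_warped + d_interp + eps)
    return float(diff.mean())
```

Because the difference is divided by the sum, the term penalizes relative (scale) disagreement rather than absolute depth error, which is what makes the learned depths scale-consistent across frames.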
The z value of the Gaussian mean is the rendered depth, but the structure-from-motion (SfM) estimated depth looks like the ray depth, i.e. the length of the Gaussian mean vector. Is that right?
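The distinction raised here is between z-depth (the z component of a point in camera space, which is what a rasterizer typically renders) and ray depth (the Euclidean distance along the viewing ray). They coincide only at the principal point. A small sketch of the conversion, assuming a standard 3x3 pinhole intrinsic matrix (this helper is illustrative, not from the repo):

```python
import numpy as np

def z_to_ray_depth(z_depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Convert a z-depth map to ray depth (distance along the viewing ray).

    For pixel (u, v) with ray direction d = K^-1 [u, v, 1]^T (whose z
    component is 1), the 3D point is z * d, so ray depth = z * ||d||.

    z_depth: (H, W) array of z-depths
    K: 3x3 pinhole intrinsic matrix
    """
    H, W = z_depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T  # per-pixel ray direction, z = 1
    return z_depth * np.linalg.norm(rays, axis=-1)

# Example: unit z-depth everywhere; ray depth equals 1 only at the
# principal point (cx, cy) and grows toward the image corners.
K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0,   0.0, 1.0]])
ray = z_to_ray_depth(np.ones((5, 5)), K)
```

Comparing the two depth types without this conversion would introduce a spatially varying bias that is largest away from the image center.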
I see in the code that the depth loss is computed between the Gaussian target-view depth and the SfM projected depth. Why not just use the two context-view depths and the Gaussian rendered depth to compute the depth loss?