(Again) How did you arrive at the normalization constants? #23
Comments
Hi @manolotis, thanks for pointing this out! I have similar questions. By the way, how do you calculate the constants for target/future/xy? I do not quite understand why we need this constant.
I calculate the constants for target/future/xy in a similar fashion as for target/history/xy: basically by aggregating the mean and std of the future coordinates (of course without counting the invalid timestamps). Generally it is beneficial to normalize the outputs if they have a significantly different magnitude. Here I suppose it helps given the difference between the resulting future x and y (since the inputs are processed such that the target always faces the positive x direction at prediction time). I experimented with not normalizing the inputs as suggested by #3 (comment), but I always ran into issues with infinite values in the first epoch, and when changing the …
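The aggregation over valid timestamps looks roughly like this (a minimal NumPy sketch; the array shapes and function name are illustrative, not from the repo):

```python
import numpy as np

# Hypothetical sketch: per-coordinate mean/std of future xy over valid timestamps only.
# `futures` has shape (num_agents, num_timestamps, 2);
# `valid` is a boolean mask of shape (num_agents, num_timestamps).
def masked_mean_std(futures, valid):
    coords = futures[valid]      # keep only valid (agent, timestamp) pairs, shape (N, 2)
    mean = coords.mean(axis=0)   # mean over x and y separately
    std = coords.std(axis=0)     # standard deviation over x and y separately
    return mean, std
```

Masking before aggregating matters: padded/invalid timestamps are usually zeros and would drag the mean toward the origin and inflate the std.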
Thanks for your explanation! I ran into the same issue when setting the …
First of all, thank you for making this implementation publicly available. I find your code really elegant.
I have some questions regarding how you arrived at the normalization constants. I already saw a very similar issue (#1), but it did not completely clarify what I am wondering.
I see that normalization is now performed in `model/data.py`, with these means and standard deviations: waymo-motion-prediction-challenge-2022-multipath-plus-plus/code/model/data.py, lines 30 to 48 at commit 4636641.
And it is applied to the predicted coordinates during training with these constants (if specified): waymo-motion-prediction-challenge-2022-multipath-plus-plus/code/train.py, lines 83 to 85 at commit 4636641.
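For reference, applying and inverting such a normalization amounts to the following (a generic sketch with placeholder constants, not the repo's actual values, which live in `code/model/data.py`):

```python
import numpy as np

# Placeholder constants for illustration only; the real values come from the dataset.
FUTURE_XY_MEAN = np.array([10.0, 0.0])  # assumed per-coordinate mean for future x, y
FUTURE_XY_STD = np.array([20.0, 5.0])   # assumed per-coordinate std for future x, y

def normalize_future(xy):
    """Map world-frame future coordinates to zero-mean, unit-std space."""
    return (xy - FUTURE_XY_MEAN) / FUTURE_XY_STD

def denormalize_future(xy_norm):
    """Invert the normalization so predictions are back in world-frame units."""
    return xy_norm * FUTURE_XY_STD + FUTURE_XY_MEAN
```

If the targets are normalized during training, the model's raw outputs must be passed through the inverse transform before computing displacement metrics in meters.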
For a different model I am developing, I tried to calculate similar constants (mainly for various features of target/history and target/future), and I arrive at considerably different values. For some features I get fairly similar values, but for others mine are much higher (by a factor of 10-100, especially noticeable in the coordinates).
My approach to computing these values was to first prerender the dataset using `MultiPathPPRenderer` without normalization, keeping only the interesting agents. I then traversed all the prerendered scenarios from the training split and computed the mean and standard deviation of each feature for the target agent. How come I am getting such different values? Could you elaborate on which part of the data you used to compute these constants? In particular: (1) did you use a subset of agents (e.g. only interesting, or fully observed)? (2) did you use a subset of the scenarios? Thank you in advance!
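The traversal described above can be sketched with a running accumulator, so the whole split never has to fit in memory (a minimal sketch; the class and its interface are illustrative, not from either codebase):

```python
import numpy as np

class RunningStats:
    """Accumulate per-feature mean and std across many prerendered scenario files."""

    def __init__(self, num_features):
        self.n = 0                          # total number of valid samples seen
        self.s = np.zeros(num_features)     # running sum per feature
        self.s2 = np.zeros(num_features)    # running sum of squares per feature

    def update(self, values):
        """values: (num_samples, num_features) array of valid target-agent features."""
        self.n += values.shape[0]
        self.s += values.sum(axis=0)
        self.s2 += (values ** 2).sum(axis=0)

    def finalize(self):
        mean = self.s / self.n
        std = np.sqrt(self.s2 / self.n - mean ** 2)  # population std (ddof=0)
        return mean, std
```

One scenario's features go in per `update` call; `finalize` then matches what a single pass over the concatenated data would give. Note that this sum-of-squares form can lose precision for features with a large mean relative to their spread, which is one possible source of discrepancies between two independent computations of the constants.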