
I encountered an issue in vis_std.py #32

Open
DrinkLego opened this issue Nov 18, 2024 · 13 comments

Comments

@DrinkLego

When I execute the following command, the program shows an error:
python vis_std.py --version trainval --dataroot ../nuscenes --split val --trj_pred HiVT --map MapTR \
  --trj_data ../trj_data/maptr/val/data --base_results /home/MapUncertaintyPrediction/HiVT_modified/maptr_bas_prediction_result.pkl \
  --unc_results /home/MapUncertaintyPrediction/HiVT_modified/maptr_unc_prediction_result.pkl \
  --boxes /home/MapUncertaintyPrediction/bbox.pkl --save_path /home/MapUncertaintyPrediction/results

Error: `IndexError: too many indices for array: array is 3-dimensional, but 4 were indexed`

I also referred to #14, but the problem is still not resolved. Could you provide some help? I would like to know whether these three parameters, **global_y_hat_agent, pi_agent, seq_id**, are correct, and why they are not being referenced.


@alfredgu001324
Owner

Hi, thanks for reaching out! These three parameters are not referenced because they are not used anywhere; the important part of this visualization function is the pickle saving.

Can you please check the shape of the array and let me know what it is?

@eeluo

eeluo commented Dec 11, 2024

> Hi, thanks for reaching out! These 3 parameters are not referenced because they are not used anywhere, the main, important process is the pickle saving in this visualization function.
>
> Can you maybe please check the shape of the array and let me know what that is?

Hi, I have encountered the same error. I would like to ask: in the final mp4 or gif visualization, do agents other than the ego vehicle also have predicted trajectories?

@alfredgu001324
Owner

Yes, the other agents would also have predicted trajectories. HiVT is a multi-agent prediction model.

@eeluo

eeluo commented Dec 12, 2024

> Yes, the other agents would also have predicted trajectories. HiVT is a multi-agent prediction model.

When I run vis_std.py, an error occurs at line 62 (`translated = hivt_trj[:,:,:,:2] - translation`): `IndexError: too many indices for array: array is 3-dimensional, but 4 were indexed`. I printed the shape of hivt_trj, and it is (6, 30, 4). I also see that several people are experiencing the same problem, e.g. #14. What can I do to solve this error?
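The mismatch can be reproduced in isolation. A minimal sketch (not the actual vis_std.py code), with the shapes taken from this thread:

```python
import numpy as np

hivt_trj = np.zeros((6, 30, 4))   # the 3-D shape reported above
translation = np.zeros(2)

try:
    translated = hivt_trj[:, :, :, :2] - translation  # 4 indices into a 3-D array
    raised = False
except IndexError:
    raised = True  # "too many indices for array: array is 3-dimensional, ..."

# vis_std.py expects a 4-D array of shape (num_agents, modes, timesteps, 4):
hivt_trj_4d = np.zeros((32, 6, 30, 4))
translated = hivt_trj_4d[:, :, :, :2] - translation

print(raised, translated.shape)  # True (32, 6, 30, 2)
```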

@alfredgu001324
Owner

Can you backtrack into hivt.py's visualization function and see what the input is when doing model inference? In `y_hat_agent = y_hat[data['av_index'], :, :, :2]`, `data['av_index']` basically specifies the indices of all the vehicles in the scene. Can you check the shape of y_hat first? If y_hat itself is (6, 30, 4), then there may be a problem during data preprocessing, and you need to keep backtracking.

@eeluo

eeluo commented Dec 13, 2024

> Can you maybe backtrack into hivt.py's visualization function, and see what is the input when doing model inference? y_hat_agent = y_hat[data['av_index'], :, :, :2] the data['av_index'] here basically specifies the indices of all the vehicles in the scene. Can you maybe check what is the shape of y_hat first? If y_hat itself is (6, 30, 4) then there might be some problems during data preprocessing. And you need to keep backtracking.

I printed the shapes of all three variables, y_hat / y_hat_agent / y_hat_agent_uncertainty:
y_hat: (709, 6, 30, 4)
y_hat_agent: (32, 6, 30, 2)
y_hat_agent_uncertainty: (32, 6, 30, 2)
At this point they should all still have the correct shape, right? But when saving the .pkl file (line 207 or line 211 of hivt.py), the shape of y_hat_agent[i] and y_hat_agent_uncertainty[i] is (6, 30, 2), and after torch.cat it is (6, 30, 4). So the shape of hivt_trj is (6, 30, 4) when I visualize it in vis_std.py.
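The concatenation described above can be sketched as follows, with np.concatenate standing in for torch.cat and the shapes taken from this thread:

```python
import numpy as np

# One agent's predictions: 6 modes, 30 timesteps, 2 coords / 2 uncertainties.
y_hat_agent_i = np.zeros((6, 30, 2))              # predicted coordinates
y_hat_agent_uncertainty_i = np.zeros((6, 30, 2))  # per-coordinate uncertainty

# Concatenating along the last axis yields the 3-D per-agent array that
# vis_std.py later rejects, because it indexes 4 dimensions.
combined = np.concatenate([y_hat_agent_i, y_hat_agent_uncertainty_i], axis=-1)
print(combined.shape)  # (6, 30, 4)
```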

@alfredgu001324
Owner

Hmm, I see, they indeed all have the correct shape. Could it be that `len(data['seq_id'])` is 1, so only one trajectory is saved?

@alfredgu001324
Owner

Ahh, never mind, I think I know the reason. This visualization script was likely created after the rebuttal, where the reviewers asked us to show trajectories of every agent, but the pickle saving here might only save the ego trajectories. There is a bit of a version mismatch (sorry). Can you try saving all the agent trajectories and running the viz script again?

@eeluo

eeluo commented Dec 16, 2024

> Ahhh nvm, I think I might know the reason. This visualization script might be created after the rebuttal, where the reviewers ask us to show trajectories of every agent. But the pickle saving here might only save the ego trajectories. There is a bit of version mismatch (sorry). Can you maybe try saving all the agent trajectories and try the viz script again?

How can I save all the agent trajectories?

@alfredgu001324
Owner

alfredgu001324 commented Dec 16, 2024

So basically:

  1. Instead of selecting just the ego trajectories, you need to select all the trajectories by y_hat_agent = y_hat[:, :, :, :2].

  2. Then, after all the transformations are done, during the pickle saving process here, you need to use data['av_index'] to figure out how many trajectories are in each scene and save them accordingly. For example, if data['av_index'] is [0, 42, 76], then trajectories from index 0 to 41 belong to the first scene, trajectories from index 42 to 75 to the second scene, etc. I assume each batch contains 32 scenarios, if I remember correctly, so there are 32 sets of trajectories. And you can still use the same for loop to store them accordingly.

Please give it a try and let me know if I am wrong. Sorry it's been a while so my memory might not be exactly correct.
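The per-scene slicing described above might look like the following sketch (NumPy stands in for torch; the shapes and the av_index values are the illustrative ones from this thread, and av_index is assumed to hold the first agent index of each scene):

```python
import numpy as np

# Hypothetical batch: 709 agents across all scenes, 6 modes, 30 timesteps,
# 2 coordinates + 2 uncertainty values per point.
y_hat = np.zeros((709, 6, 30, 4))

# Illustrative av_index from the comment above.
av_index = [0, 42, 76]

# Append the total agent count so the last scene also gets a slice.
bounds = av_index + [y_hat.shape[0]]
scenes = [y_hat[bounds[i]:bounds[i + 1]] for i in range(len(av_index))]

print([s.shape[0] for s in scenes])  # [42, 34, 633]
```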

@eeluo

eeluo commented Dec 25, 2024

> So basically:
>
> 1. Instead of selecting just the ego trajectories, you need to select all the trajectories by y_hat_agent = y_hat[:, :, :, :2].
> 2. Then, after all the transformations done, during the pickle saving process at here, you need to use the data['av_index'] to figure out how many trajectories are in each scene and saving them accordingly. For example, if data['av_index'] is [0, 42, 76], then trajectories from index 0 to 41 should belong to the first scene, index 42 to 75 should belong to the second scene etc. I assume each batch should contain 32 scenarios if I remember correctly? So there are 32 sets of trajectories. And you can still use the same for loop to store them accordingly.
>
> Please give it a try and let me know if I am wrong. Sorry it's been a while so my memory might not be exactly correct.

I apologize for not getting back to you until now, I've changed some code based on your suggestions:

Line164: y_hat_agent = y_hat[:, :, :, :2]
Line165: y_hat_agent_uncertainty = y_hat[:, :, :, 2:4]
Line207: predict_data[data['seq_id'][i].item()] = torch.cat([y_hat_agent[data['av_index'][i]:data['av_index'][i+1]], y_hat_agent_uncertainty[data['av_index'][i]:data['av_index'][i+1]]], dim=-1).cpu().numpy()

Although the dimensions of hivt_trj are correct now, the visualized mp4 has no predicted trajectories at all, not even for the ego vehicle. I don't know whether my modification matches your suggestion; if not, please let me know.

@alfredgu001324
Owner

Uhmm, I am not quite sure about Line 207. But for debugging:

  1. Instead of outputting the mp4 directly, can you put some breakpoints during the frame generation and print out the trajectory coordinates before and after the transformation?

  2. Another thing you can try is to directly plot the trajectories (with some basic plt) to see what they look like, i.e. whether their shape and tendency align with the map. If the tendency seems correct, then there may be a problem in the coordinate transformation.

  3. If, after checking the above, you find that the tendency of the trajectories looks correct, also check whether self.rotate in hivt.py is True or False.

Hope this helps!
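For point 2, a coordinate transform can be sanity-checked in isolation. A minimal sketch of a rotation-plus-translation check; theta, the translation, and the trajectory are all illustrative values, not taken from hivt.py:

```python
import numpy as np

theta = np.pi / 2  # 90-degree rotation, for an easy-to-verify result
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
translation = np.array([10.0, 5.0])

traj_local = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # (T, 2)
traj_global = traj_local @ rot.T + translation

# A point one unit ahead in local coordinates should land one unit "up"
# from the translated origin:
print(traj_global[1])  # approximately [10. 6.]
```

Plotting traj_local and traj_global side by side against the map then shows whether the tendency survives the transform.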

@alfredgu001324
Owner

I uploaded a sample here: https://drive.google.com/file/d/1mVkCBUQ37mHeWL3AShkj4ILioA2wBeJz/view?usp=drive_link

It contains the results from HiVT + MapTR on the full val dataset. You can also find the results from mini val in the same folder. Can you try comparing these against your results to see whether the trajectories are in the same coordinate system?
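For the comparison, the result pickles can be loaded with the standard pickle module. The layout sketched here, {seq_id: array of shape (num_agents, modes, timesteps, 4)}, is an assumption based on this thread, and the file name is illustrative:

```python
import os
import pickle
import tempfile

import numpy as np

# Write a small stand-in prediction dict, then load it back the same way
# you would load the reference pickle from the Drive link for comparison.
sample = {101: np.zeros((3, 6, 30, 4))}

path = os.path.join(tempfile.mkdtemp(), "prediction_result.pkl")
with open(path, "wb") as f:
    pickle.dump(sample, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded[101].shape)  # (3, 6, 30, 4)
```

Printing a few trajectory points from both files for the same seq_id should reveal whether the coordinate systems match.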
