Hi,

Thank you for the incredible work on TDMPC!

I'm implementing it on my own task with the Unitree Go1 robot, and I have some questions regarding the observations.
In the image above:
The left shows the stand task trained without privileged information (i.e., the base-link velocity and the robot's height).
The right shows the same task with these additional privileged observations.
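For concreteness, the two observation sets I am comparing look roughly like the sketch below (field names and dimensions are illustrative, not taken from the TDMPC code):

```python
import numpy as np

# Illustrative layout only -- the exact fields and dimensions are assumptions.
# Proprioception is what the real Go1 can measure onboard; the privileged part
# needs a simulator or a mocap system.

def proprioceptive_obs(joint_pos, joint_vel, imu_quat, imu_gyro, last_action):
    """Observation available both in simulation and on the real robot:
    12 joint positions, 12 joint velocities, base orientation (quaternion),
    base angular velocity from the IMU, and the previous action."""
    return np.concatenate([joint_pos, joint_vel, imu_quat, imu_gyro, last_action])

def privileged_obs(proprio, base_lin_vel, base_height):
    """Proprioception plus the privileged signals: base-link linear velocity
    and base height, which are only available in simulation or from mocap."""
    return np.concatenate([proprio, base_lin_vel, np.atleast_1d(base_height)])
```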
From my experiments, it’s significantly harder for the agent to learn without the privileged information, as shown in the image: after 14 hours of training, the agent without privileged information still struggles, whereas the agent with privileged information stands reliably after just 40 minutes.
This leads to my question:
Is it inherently too challenging to train TDMPC when the value (critic) network only sees a latent state inferred from proprioceptive data? A motion-capture system may be available to compute rewards during training, but in my case the trained policy would only have access to proprioceptive data once deployed on the real robot.
I’m considering a teacher-student framework:
The teacher loop is trained first with full access to privileged information to refine the latent states.
The student loop then learns to "imitate" the latent states using only proprioceptive data.
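Concretely, the distillation step I have in mind would look roughly like the sketch below (PyTorch; `teacher_encoder`, `student_encoder`, and the batch layout are placeholders, not TDMPC's actual API):

```python
import torch
import torch.nn.functional as F

def distill_step(student_encoder, teacher_encoder, batch, optimizer):
    """One student update: regress the student's latent state onto the frozen
    teacher's latent state, using only proprioceptive inputs for the student."""
    proprio, privileged = batch["proprio"], batch["privileged"]

    with torch.no_grad():  # teacher was trained with privileged obs and is kept frozen
        z_teacher = teacher_encoder(torch.cat([proprio, privileged], dim=-1))

    z_student = student_encoder(proprio)      # student sees proprioception only
    loss = F.mse_loss(z_student, z_teacher)   # imitate the teacher's latent

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At deployment, the student encoder would replace the teacher, and planning and value estimation would run on the latent computed from proprioception alone.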
Do you think such an approach would help?
Looking forward to your insights!
Best regards,
Ruochen Li