First of all, thank you for building this amazing framework on top of MoveIt for combined task and motion planning.
I'm trying to create a custom stage that accepts grasp poses provided by a human at runtime, for example via an HTC Vive controller or simply the orange virtual arm in the MotionPlanningPlugin. The stage should continuously listen for new grasp poses, and MTC should keep planning additional trajectories whenever one arrives. A grasp pose can be generated by the human at any time, so the maximum number of poses is unknown at compile time.
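Roughly, here is what I have in mind so far: a generator-style stage that queues poses from a ROS topic and spawns one interface state per pose. This is only a minimal, untested sketch; the class name, the `/human_grasp_poses` topic, the `target_pose` property name, and the way the planning scene is injected are all my own placeholders, not anything from the MTC API beyond the `Generator` base class itself.

```cpp
#include <deque>
#include <mutex>

#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/task_constructor/stage.h>
#include <moveit/task_constructor/storage.h>

namespace mtc = moveit::task_constructor;

// Hypothetical stage: queues human-provided grasp poses and spawns a new
// InterfaceState for each one so that downstream stages can plan for it.
class HumanGraspPoseProvider : public mtc::Generator {
public:
  explicit HumanGraspPoseProvider(const std::string& name = "human grasp poses")
      : mtc::Generator(name) {
    // Topic name is an assumption; any human input source could feed this queue.
    sub_ = nh_.subscribe("/human_grasp_poses", 10,
                         &HumanGraspPoseProvider::poseCallback, this);
  }

  // How the planning scene is obtained (task property, monitored stage, ...)
  // is left open here; this setter is just a placeholder.
  void setPlanningScene(const planning_scene::PlanningSceneConstPtr& scene) { scene_ = scene; }

  bool canCompute() const override {
    std::lock_guard<std::mutex> lock(mutex_);
    return !pending_poses_.empty();
  }

  void compute() override {
    geometry_msgs::PoseStamped pose;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      if (pending_poses_.empty())
        return;
      pose = pending_poses_.front();
      pending_poses_.pop_front();
    }
    // Spawn one interface state per received grasp pose; downstream stages
    // (IK, approach, lift, ...) would then plan for this pose.
    mtc::InterfaceState state(scene_);
    state.properties().set("target_pose", pose);
    spawn(std::move(state), 0.0);
  }

private:
  void poseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg) {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_poses_.push_back(*msg);
  }

  ros::NodeHandle nh_;
  ros::Subscriber sub_;
  planning_scene::PlanningSceneConstPtr scene_;
  mutable std::mutex mutex_;
  std::deque<geometry_msgs::PoseStamped> pending_poses_;
};
```

My understanding is that the task keeps calling `canCompute()`/`compute()` while planning is running, so poses arriving during planning would spawn additional solutions; what I'm unsure about is how to keep the stage (and the whole task) alive indefinitely once every stage momentarily reports that there is nothing left to compute.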
Could you offer some advice on the best way to build such a custom stage?