**[URGENT]** What is the result of the 3dof localization? #31
Hello,

The links below contain the drl configuration for your bag file, along with an rviz config file and an associated video showing drl working with your sensor data.
In rviz, the 3 DoF pose output of drl is shown as a TF frame [map_drl -> base_link] and as a PoseStamped. When using drl you should at least configure:

The drl_configs.yaml is only for documentation; you should not edit it. For comparing the drl poses with the ground truth, you can use my package robot_localization_tools and inspect the /localization_error topic. The accuracy of drl depends highly on:

For accuracy tests, check my dissertation and papers available in the links at:

For running drl with your bag file:
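For reference, a rough sketch of the terminal steps (one command per terminal), reusing the file names that appear later in this thread; the launch file and rviz config from the attached zip may differ on your side:

```bash
roscore
rosparam set use_sim_time true   # required because the bag is played with --clock
rosrun map_server map_server map_default.yaml
roslaunch dynamic_robot_localization dynamic_robot_localization_system.launch
rosbag play --clock expt6.bag
rviz   # load the attached rviz config to see the [map_drl -> base_link] TF and the PoseStamped
```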
Have a nice day,
@carlosmccosta Thanks a ton for the detailed reply. I was able to run it and get the same output as you showed in the video.

2. Could you please let me know what the problem is?
Hello,

The warning is informing you that an alignment of the sensor data with the map was discarded because the percentage of outliers was above the specified threshold. You can find more information in the sections about outlier detection and transformation validators. These modules are used to validate the point cloud alignment and can be configured according to your requirements (for instance, the maximum allowed percentage of outliers):
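As an illustration only, such thresholds could look like the sketch below; the parameter names and nesting are assumptions on my part, so confirm them in drl_configs.yaml rather than copying this verbatim:

```yaml
# Hypothetical sketch -- check the real parameter names in drl_configs.yaml
transformation_validators:
  max_outliers_percentage: 0.7       # discard alignments with more than 70% outliers
  max_root_mean_square_error: 0.05   # discard alignments with a high residual error (meters)
```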
You need to compile the robot_localization_tools package inside your catkin workspace. I started the bag file at 35 seconds because at 0 seconds there was no sensor data from the lidar. At 35 seconds the lidar started seeing a corner in the warehouse, allowing drl to start the pose tracking. If you start the bag at a different time, you need to update the initial pose in the launch file or use the rviz 2D pose estimate.
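Assuming a workspace at ~/catkin_ws, the build and replay steps would be roughly:

```bash
# Build robot_localization_tools inside the catkin workspace
cd ~/catkin_ws/src
git clone https://github.com/carlosmccosta/robot_localization_tools.git
cd ~/catkin_ws
catkin_make
source devel/setup.bash

# Replay the bag starting at 35 seconds, when the lidar first sees a corner
rosbag play --clock -s 35 expt6.bag
```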
Have a nice day,
Thanks for the detailed reply.
I did this, and when I do rostopic list I get the topics in the list, but rostopic echo /localization_error still does not give any output. Also, I have some cases where the robot is navigating in an indoor environment with no / very few features, so the laser scan does not detect anything (it does not fall on any object, wall, etc.). In this case, will drl still be able to localize the robot accurately?
Hello,

After compiling the robot_localization_tools package you should have this message available:
You should use the robot_localization_error.launch that I attached in the zip file, since it has the frame_id configurations for your use case.
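Roughly (the extraction path is just an example; adjust it to wherever you unpacked the zip):

```bash
# Launch the error computation with the frame_id configuration from the zip,
# then inspect the published error
roslaunch ~/Downloads/drl_files/robot_localization_error.launch
rostopic echo /localization_error
```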
drl updates the tracking pose when it aligns the lidar data with the map.

Have a nice day,
Thanks a lot for the help. I am now able to compute the error and even visualize the output in rviz as required. The only problem is that, in cases where the laser scan does not fall on any object, there is no tracking of the pose, as you mentioned. Also, sometimes the error gets as bad as 50 m; maybe this is due to the repetitive nature of the world I use. I have attached a screenshot below. After this, when the robot reaches the next portion of the world, the ground truth is on one side of the warehouse (blue) and the estimate is on the complete opposite side.

But I think the main problem is that there is no object in the scan data, right? And the world is also not very unique? Then maybe drl is very accurate with good features and a unique environment, since it uses feature matching. Thanks a lot for the help; any suggestion is appreciated. Thank you
Hello,

Due to the repetitive nature of warehouses, feature matching will have many possible matches / initial poses. This way, drl will only use the iterative closest point (ICP) algorithm for tracking the pose, which requires that the initial pose be given in one of these ways (for example, set in the launch file or given through rviz's 2D Pose Estimate, as in the sketch below):
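rviz's 2D Pose Estimate publishes a geometry_msgs/PoseWithCovarianceStamped on /initialpose; a manual equivalent from the command line is sketched below (whether drl listens on that exact topic is an assumption, check it with rostopic info /initialpose):

```bash
# Publish an initial pose manually (equivalent to rviz's "2D Pose Estimate").
# The /initialpose topic is the rviz default; confirm drl subscribes to it.
rostopic pub --once /initialpose geometry_msgs/PoseWithCovarianceStamped \
  '{ header: { frame_id: "map" },
     pose: { pose: { position: { x: 1.0, y: 2.0, z: 0.0 },
                     orientation: { z: 0.0, w: 1.0 } } } }'
```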
For computing a 3 DoF pose, the ambient point cloud must have at least 2 points, but I suggest having at least 10 points, because 2 points might be associated with sensor noise. drl can only estimate the pose when it has sensor / lidar data. Also, drl can use several lidars by merging their data into a single point cloud using the laserscan_to_pointcloud package. drl uses odometry (retrieved from TF). You can also increase the values that control when the tracking is considered lost.

Have a nice day,
Thanks for the quick and detailed reply.
Since I cannot increase the lidar range, maybe I can use an rgbd camera, right?

Since an rgbd camera has a larger field of view, it will have a point cloud for those places where the laser cannot really detect the features. Will this track the pose in areas like the one indicated by the circle shown before? Thank you
Hello,

For using rgbd data, the tracking needs to be 6 DoF (x, y, z, roll, pitch, yaw) and you need to have a 3D map / point cloud of the environment. 6 DoF tracking is much less robust than 3 DoF (x, y, yaw) due to the extra 3 degrees of freedom and is also much more computationally intensive.

Have a nice day,
Thanks a lot for your help. Maybe I can try increasing the range as you suggested. I just had a small quick question: if not, then by any chance do you have suggestions for indoor mobile 2D localization with high accuracy, maybe based on using multiple sensors? Thank you
Hello,

For accurate and robust 3 DoF localization, long range 360º lidar sensing is the most used solution. Examples of both approaches are in the photos in dynamic_robot_localization_tests, namely the guardian and jarvis platforms.

Have a nice day,
Hello @carlosmccosta, thank you for the open-source package.

I checked out some of the issues and the readme, but I am not very clear about the 3 DoF localization.

I had a few questions:

I have a bag file attached here, and I changed the launch file to the attached one, and also the yaml file; then I run the following commands:
1. roscore
2. roslaunch dynamic_robot_localization dynamic_robot_localization_system.launch
3. rosrun map_server map_server map_default.yaml
4. rosbag play --clock expt6.bag
5. rviz
I have attached the map yaml and pgm here. I am clueless as to what the output should be, in the sense that I have the ground truth pose of the bot, so which topic should I compare it to (when I run the above commands), so that I know how good the 3 DoF localization is?
What should be the expected rviz output?
Let me know if any more information is required.
Any help / suggestion is appreciated. Waiting for your early reply.
Thank you