GTRI based Sonar Notes and Description
Summary: The GTRI-based Sonar is similar to the ROS Melodic GPU Ray plugin presented at NPS GPU Ray-based FLS. Here we describe its port to ROS Melodic with Gazebo 9 and show its usage.
The GTRI-based Sonar model at https://github.com/Field-Robotics-Lab/gtri_based_sonar is forked from the SyllogismRXS Sonar model at https://github.com/SyllogismRXS/syllo-gazebo; ref. "A Computationally-Efficient 2D Imaging Sonar Model for Underwater Robotics Simulations in Gazebo", Georgia Tech. This fork has been modified as follows:
- It has been ported to work with ROS Melodic and Gazebo 9.
- The algorithm for converting the point cloud from the Gazebo Ray module to a Gazebo camera image has been merged into the Gazebo module. The image is no longer calculated and advertised on a separate ROS Node.
- Modules not related to modeling Sonar have been removed in order to simplify the repository.
See https://github.com/Field-Robotics-Lab/DAVE/wiki/Sonar-Models#gtri for usage and example screenshot.
Here is the xacro macro for instantiating a named Sonar instance onto a parent link:
- Name: `gtri_based_fls`
- Parameters: `namespace`, `suffix`, `parent_link`, `topic`, `*origin`
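A minimal usage sketch of the macro is shown below. The macro name and parameter names come from the list above; the parameter values and the enclosing robot description are illustrative only, so check the repository's xacro file for the exact interface. The `*origin` parameter is a block parameter, satisfied by the nested `<origin>` element.

```xml
<!-- Illustrative only: instantiate the gtri_based_fls macro on a vehicle link.
     Values are examples, not the repository's defaults. -->
<xacro:gtri_based_fls namespace="rexrov" suffix="_fls"
                      parent_link="base_link" topic="sonar">
  <origin xyz="1.0 0 -0.5" rpy="0 0 0"/>
</xacro:gtri_based_fls>
```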
Here are the topics published by this Sonar model:
| Message type | Suggested topic name | Description |
|---|---|---|
| `sensor_msgs::PointCloud` | `sonar_cloud` | The point cloud calculated from Gazebo's `RaySensor` interface. |
| `sensor_msgs::Image` | `sonar_image` | The Sonar image calculated from the point cloud. |
| `sensor_msgs::CameraInfo` | `sonar_image_camera_info` | The height, width, and timestamp of the currently available Sonar image. |
PointCloud consists of a header, an array of xyz points, and a matching array of channel values; see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/PointCloud.html:
- std_msgs/Header header
  - uint32 seq
  - time stamp
  - string frame_id
- geometry_msgs/Point32[] points
  - float32 x
  - float32 y
  - float32 z
- sensor_msgs/ChannelFloat32[] channels
  - string name
  - float32[] values
Image consists of a header, height, width, encoding information, and uint8[] data; see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/Image.html:
- std_msgs/Header header
  - uint32 seq
  - time stamp
  - string frame_id
- uint32 height
- uint32 width
- string encoding
- uint8 is_bigendian
- uint32 step
- uint8[] data
CameraInfo contains metadata about the camera image; see http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CameraInfo.html.
The implementation is divided into two parts, 1) the point cloud and 2) the camera image:
- The point cloud is created by averaging range and retro values from `physics::MultiRayShape`; see gazebo_ros_block_laser.cpp.
- The point cloud is transformed to the location of the sensor, randomization is applied to the x values, and the image is created, blurred, colored, clipped, and rotated.
The point cloud is calculated from information available in Gazebo's RaySensor interface, https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1sensors_1_1RaySensor.html, see file ImagingSonar.cc at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/master/gtri_based_fls_gazebo/src/ImagingSonar.cc.
This point cloud is identical to the one produced by the gazebo_ros_block_laser ROS plugin, except that the ROS plugin has an optional parameter for adding Gaussian noise.
Here is the basic flow:
Function `load` reads the requested topic names and configures the Sonar so that it does not run unless there are topic listeners. It also registers with Gazebo's `RaySensor` so that `OnNewLaserScans` is called back from Gazebo's `MultiRayShape` class to process scans at runtime.
- Function `OnNewLaserScans` takes a timestamp and, if not running behind, calls `PutLaserData`.
- Using angle, ray, and range information in `RaySensor`, function `PutLaserData` performs nested iterations over the 2-dimensional array of `VerticalRangeCount` by `RangeCount`. For each cell, it computes a cloud datapoint using math on range (length of ray) and retro (intensity of ray) values and pushes the xyz point onto the `points` array:

```
for j in vertical range count:
    for i in range count:
        Calculate 4 index values for the 4 corners given (i, j).
        Obtain average range and intensity values given the 4 index values.
        Calculate xyz from (i, j) through (pAngle polar, yAngle azimuthal) to xyz.
        Record xyz and intensity into the point cloud:
            xyz into points[], intensity into values[].
```

Key interfaces in this loop are `GetRange` and `GetRetro` from `physics::MultiRayShape`, https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1physics_1_1MultiRayShape.html:

```
double GetRange(unsigned int _index)
double GetRetro(unsigned int _index)
```
To work with the `RaySensor`, we need to understand the Gazebo `RaySensor` class (https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1sensors_1_1RaySensor.html#a47aa298aceeb6084459df00a0d610ba6) and how to configure the shape of the ray using the `MultiRayShape` class (https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1physics_1_1MultiRayShape.html). The ray shape parameters are adjustable using SDF tags (http://sdformat.org/spec?ver=1.6&elem=sensor#sensor_ray). For discussion see https://answers.gazebosim.org//question/20930/where-is-the-lasershape-in-the-ray-sensor-defined/.
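A minimal SDF fragment for shaping the ray fan might look like the following. All values here are examples chosen for illustration, not the repository's defaults; the tag names follow the sensor_ray SDF spec linked above.

```xml
<!-- Illustrative SDF ray-sensor fragment; see the sensor_ray spec for tags. -->
<sensor name="fls_sonar" type="ray">
  <update_rate>10</update_rate>
  <ray>
    <scan>
      <horizontal>
        <samples>128</samples>
        <min_angle>-0.6</min_angle>
        <max_angle>0.6</max_angle>
      </horizontal>
      <vertical>
        <samples>10</samples>
        <min_angle>-0.2</min_angle>
        <max_angle>0.2</max_angle>
      </vertical>
    </scan>
    <range>
      <min>0.5</min>
      <max>30.0</max>
    </range>
  </ray>
</sensor>
```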
The Sonar image is calculated from the point cloud, see file cloud_to_image.cpp at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/master/gtri_based_fls_gazebo/src/cloud_to_image.cpp. File ColorMaps.cpp maps the Sonar image to its rendered colors.
Function init_cloud_to_image records the angle the image is calculated across and the Sonar's topic name for use during runtime.
Function publish_cloud_to_image projects the point cloud onto a camera image and then publishes it. The processing steps are:
- The point cloud is transformed to the location of the Sonar sensor using a `tf::TransformListener` transform.
- The x value of each point is randomized by adding Gaussian noise with a mean of 0 and a standard deviation of 0.01.
- A blank `cv::Mat` image `img` is created, size 600x600, holding 8-bit unsigned integers.
- Each point in the point cloud is cast onto image `img` by drawing a circle of radius 1 whose color is the retro value from 0 to 255.
- A median blur is applied over `img` using a kernel size of 3 (each pixel is replaced by the median of the 3x3 square around it).
- A new image `img_color` is created from `img`, with the same size but holding three 8-bit unsigned channels for Blue-Green-Red, mapping each retro value from 0 to 255 to a color via the coloring algorithm `getColor_matlab` defined in file colorMaps.cpp.
- The edges are filled black.
- The image is rotated 180 degrees using `cv::warpAffine`.
- The ROS `sensor_msgs/Image` message is created from the `img_color` image using `cv_bridge::CvImage`.
- The message is published.
Note: the https://github.com/IvisionLab/sonar-simulation repository requires Ruby and Qt4; its installation fails with a syntax error in bootstrap.sh.