Use realsense d435 #15
Comments
I don't see the problem with the information provided. rqt_graph is OK and darknet_3d.yaml too. Have you changed anything in darknet_ros_3d besides the YAML file?
I found the following message in the middle of the terminal:
My guess is that I need to modify the Darknet3D.cpp code for the RealSense, but I don't know how to fix it.
You should not need to change the Darknet3D.cpp code. Can you tell me which ROS package you are using as the RealSense driver, so I can see the messages, please?
I've seen in the realsense-ros documentation that if you want the point cloud to be published, you have to run rs_camera.launch with a specific parameter. I don't know if you did this. Probably, this is the mistake. darknet_ros works fine because it doesn't use the point cloud, but darknet_ros_3d crashes because it needs it.
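As a sketch of the two options (assuming the realsense2_camera ROS1 driver; the parameter referred to is most likely the pointcloud filter):

```shell
# Option 1: rs_camera.launch with the pointcloud filter enabled.
roslaunch realsense2_camera rs_camera.launch filters:=pointcloud

# Option 2: rs_rgbd.launch, which runs rgbd_launch and publishes the
# registered cloud on /camera/depth_registered/points, the topic that
# darknet_ros_3d's default config expects.
roslaunch realsense2_camera rs_rgbd.launch
```

Option 2 requires the rgbd_launch package to be installed alongside the driver.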
Hi @SeungRyeol, I don't have a D435 right now. I have one in my office, but I can't access it until next week because of the holidays. Meanwhile, you could check (using rviz), in this order:
Check darknet_ros/config/ros.yaml in case something went wrong. If everything is OK up to this point, the problem is in … In your conf, I have seen … Tell me if you found the error. In any case, I will try next week to reproduce your problem with the same camera. Best |
Running the RealSense camera's launch with rs_rgbd.launch instead of rs_camera.launch solved the problem. Thank you very much. @fmrico @fgonzalezr1998 |
Hi @SeungRyeol. Yes, I am testing it right now with … I am happy to see that you made it work!! Enjoy darknet_ros_3d!!! |
Hi all! When I have a look at the bounding_boxes in the rqt topic monitor, I'm wondering why the empty 3D bounding_boxes need double the bandwidth of the 2D bounding_boxes... is that normal? I started with ROS two weeks ago, so I'm not very familiar with it. Can someone please help me find my problem? That would be great! Here are some screenshots of my configuration. If there is something missing, please tell me! THANKS A LOT! Greetings, Hannes |
Hi @hannes56a, the rqt_graph seems correct. If you are using an Intel RealSense, make sure that you are launching … One thing you can try is launching a … Regarding the bandwidth: darknet_ros's bounding boxes are published at approximately 1 Hz, while darknet_ros_3d's bounding boxes are published at 10 Hz and contain more data, so even though I have never measured the bit rate, I don't think a higher bit rate is abnormal. |
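To compare the two topics yourself, `rostopic hz` and `rostopic bw` report the publish rate and bandwidth (topic names here are assumed from the default darknet_ros and darknet_ros_3d configs):

```shell
# Publish rate of the 2D vs. 3D bounding box topics:
rostopic hz /darknet_ros/bounding_boxes
rostopic hz /darknet_ros_3d/bounding_boxes

# Bandwidth used by each topic:
rostopic bw /darknet_ros/bounding_boxes
rostopic bw /darknet_ros_3d/bounding_boxes
```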
Hi @fgonzalezr1998, you are the man! Your hint about the detection objects was the solution. I always tested the 3D bounding_boxes with the tvmonitor, and this object was only in ros.yaml, not in darknet_3d.yaml. Now I have added it and it works fine. Thank you! |
Thanks @fgonzalezr1998 for the hints, they were very useful. I encountered a strange issue here: darknet_ros_3d works one time and fails the next ten. Could you help with that? I'm running the system exactly as you explained, and I tried to publish some dummy 3D bounding box messages but didn't succeed. |
Hello @mhaboali. First of all, what I would do is run the Intel RealSense driver (using rs_rgbd.launch) and check with rviz whether the depth_registered points look right.
Hi @fgonzalezr1998, thanks so much for your clear steps, they were very useful for me. Now it works :D. The issue came when I used darknet_ros_3d_gdb.launch. I have one more question, and please let me know if I have to open a new issue for it: how can I track those 3D boxes? Your help is highly appreciated! |
Yes @mhaboali, please open a new issue and I will answer you there |
Hello @fgonzalezr1998, thanks to your code I could get a few steps closer to my goal. I'm trying to run darknet_ros_3d with a stereo camera (oCam-1CGN) made by Withrobot, and I really want to get 3D coordinate values like xmin, ymin, and so on. However, there are no 3D boxes and no distance info either. I tried to change some code to suit my camera path, but I think it is not perfect. Could you tell me what I'm doing wrong, or missing? I'll upload some pics such as my rqt_graph, ROS yaml, and so on. THANK YOU VERY MUCH -Myoungjin |
Hi @MyongjinKim, I've seen that you have two different topics that publish sensor_msgs/PointCloud2. Try using the stereo/point_cloud topic. NOTE: You don't need to change parameters in the code because they are read from the YAML file. You only need to change darknet_3d.yaml and the launch file to change the RGB image topic, as you did. Also, try to visualize the point cloud with rviz. You can launch rviz by typing … It may be useful to use the following commands for tracking: … IMPORTANT!! You must launch darknet_ros_3d using … I'll be waiting for your feedback! ;) |
@MyongjinKim exactly! It means that there is no tf between both frames. Without a tf between the point cloud frame and the working frame, darknet_ros_3d cannot calculate 3D coordinates. You can see all available frames by adding TF in rviz. If there is some TF whose axes are the same as camera_link's (X axis aiming to the front, Y axis aiming to the left, and Z axis aiming to the top), you can use it by changing the working frame in darknet_3d.yaml. In this case, the 3D coordinates will be calculated with respect to this new frame. However, if there is no TF whose axes have these characteristics, you can publish the transform between the two frames yourself using static_transform_publisher, where you indicate the translation and rotation between both frames. If you are going to use this camera more often, I recommend you make your own custom package with a launcher that includes static_transform_publisher and the darknet_ros_3d launcher. If you have any other doubt, don't hesitate to ask. |
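A minimal sketch of such a custom launcher (frame names, offsets, and the darknet_ros_3d launch path are placeholders you must adapt; the `tf` static_transform_publisher takes x y z yaw pitch roll, the parent and child frames, and a republish period in ms):

```xml
<launch>
  <!-- Hypothetical transform: camera mounted 0.5 m above base_link,
       no rotation. Args: x y z yaw pitch roll parent child period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="camera_tf"
        args="0 0 0.5 0 0 0 base_link camera_link 100" />

  <!-- Assumed location of the darknet_ros_3d launcher; check your install. -->
  <include file="$(find darknet_ros_3d)/launch/darknet_ros_3d.launch" />
</launch>
```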
Oh, finally I can catch the distance value!!!! |
darknet_ros_3d takes the 2D bounding boxes as input and returns 3D bounding boxes. It means that each detected object will have an imaginary box around it. Xmin is the nearest X coordinate and Xmax the farthest X coordinate; the same goes for Y and Z. For a better understanding, you can visualize these 3D bounding boxes by adding a MarkerArray in rviz and subscribing to … When you look at the output 3D bounding boxes, what you see is an array of objects where each object has its 3D coordinates, its class name, and its detection probability. I hope my explanation was useful @MyongjinKim |
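A sketch of consuming that array. The field layout below imitates the gb_visual_detection_3d_msgs BoundingBox3d message as described above (per-object class, probability, and min/max coordinates); the stand-in dataclass and `summarize` helper are hypothetical, written as plain Python so they run without ROS:

```python
from dataclasses import dataclass

# Stand-in for one element of the 3D bounding boxes array
# (assumed layout; the real message names the class field "Class").
@dataclass
class BoundingBox3d:
    object_name: str
    probability: float
    xmin: float; xmax: float
    ymin: float; ymax: float
    zmin: float; zmax: float

def summarize(box: BoundingBox3d) -> str:
    """One-line summary: class, confidence, and the box center point."""
    cx = (box.xmin + box.xmax) / 2.0
    cy = (box.ymin + box.ymax) / 2.0
    cz = (box.zmin + box.zmax) / 2.0
    return (f"{box.object_name} ({box.probability:.0%}) "
            f"center=({cx:.2f}, {cy:.2f}, {cz:.2f})")
```

In a real node you would call `summarize()` on each element of the message's box array inside a rospy subscriber callback on /darknet_ros_3d/bounding_boxes.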
Thank you for your detailed explanation.
Could you tell me what I should fix? I'm sorry to keep asking you about the little things. Thank you so much @fgonzalezr1998. -Myoungjin |
If you want to use other weights, you can choose among all the weights available on the darknet_ros GitHub page, or you can re-train the neural network. This is something that I did some time ago, and I recommend you follow these steps. @MyongjinKim |
Hi @fgonzalezr1998, it's Myoungjin. Thanks a lot. -Myoungjin |
First of all, the 3D bounding boxes provide the min and max coordinates on each axis. So, if you want to get the Euclidean distance between the camera and the detected object, it is a good idea to take the center of the object. That way, the coordinates of the point would be X=(Xmin+Xmax)/2.0, Y=(Ymin+Ymax)/2.0, and Z=(Zmin+Zmax)/2.0. Once this point is calculated, you know that the object's distance vector is V = (X, Y, Z), so you only need to compute the magnitude of this vector: D = sqrt(X^2 + Y^2 + Z^2). On the other hand, if you only want to know how far away the object is along the X axis, you only have to do: D = (Xmin+Xmax) / 2.0. I want to make clear that the 3D bounding boxes give you the min and max distances on the three coordinates where the object is located. With this information, you know everything you could need, because you not only know the Euclidean distance; you can also compute the volume of the object and its position in 3D space. @MyongjinKim I hope I solved your doubts |
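The formulas above, written out as a small helper (function names are illustrative):

```python
from math import sqrt

def center(bmin: float, bmax: float) -> float:
    """Midpoint of the object along one axis of the 3D bounding box."""
    return (bmin + bmax) / 2.0

def euclidean_distance(xmin, xmax, ymin, ymax, zmin, zmax):
    """Distance from the camera to the center of the detected object:
    D = sqrt(X^2 + Y^2 + Z^2) with X, Y, Z the per-axis midpoints."""
    x = center(xmin, xmax)
    y = center(ymin, ymax)
    z = center(zmin, zmax)
    return sqrt(x**2 + y**2 + z**2)

def x_distance(xmin, xmax):
    """How far away the object is along the X axis only."""
    return center(xmin, xmax)
```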
Hello @fgonzalezr1998 it's been a long time |
@MyongjinKim The repository with all the code I wrote is private, but I have invited you so you can see it. It probably won't compile, but that is normal because it depends on Dialogflow and it uses an old version of darknet_ros_3d. That happened because I stopped its development when the pandemic began. However, it can serve you as a guide. The Jetson was running the ROS master, kobuki_node (the Kobuki driver), the navigation packages (amcl, move_base, etc.), and the darknet_ros neural network, and we monitored all the data from my laptop using rviz. If you want to see a video, tell me and I can upload it to YouTube and give you the URL. |
@fgonzalezr1998 |
@fgonzalezr1998 Yeaaaah! I really wanna watch it!! |
@fgonzalezr1998 Me too!! ^_^ I am also doing a project with a Jetson Nano and a RealSense D435. Could you also give me access to your private repository? Thanks so much!! |
@MyongjinKim @jameslele Here you can see the Turtlebot2 navigating and detecting objects with the neural network, using the Nvidia Jetson Nano mounted on the Turtlebot. It uses ROS Melodic. The only difference is that the Intel RealSense was not used; it used the Orbbec Astra camera. But if you want to use the RealSense camera, you only have to install its driver on the Jetson and launch it instead of the Orbbec one.
@fgonzalezr1998 Hi, thanks very much for your reply. It helps me a lot. I am so sorry for taking so long to reply. I still have not gotten the invitation yet. Could you send it again? |
Sorry @jameslele, here is the invitation |
Oh no, it has expired.(。•́︿•̀。) |
@fgonzalezr1998 @jameslele Hi all. I hope you guys are doing well |
@fgonzalezr1998 I have already accepted. Thanks so much!! |
I need to detect custom objects using darknet and want to get 3d coordinates. I retrained the model and obtained the weights. Can someone please guide me regarding the next steps @MyongjinKim ? Thanks for the help. |
Hi @hsaleem1. If you have darknet_ros working successfully with your own weights, you only have to define which objects you want darknet_ros_3d to detect by editing config/darknet3d.yaml, and then launch it. You must also configure properly the topics for the camera RGB image, point cloud, etc. |
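A sketch of that edit. The key name is recalled from darknet_ros_3d's config, so verify it against your installed copy; the class strings are placeholders that must match the names used when retraining darknet_ros:

```yaml
# Hypothetical fragment of config/darknet3d.yaml.
# List only the retrained classes you want 3D boxes for.
interested_classes: ["my_object_a", "my_object_b"]
```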
@fgonzalezr1998, thanks for your help. Its done now. |
@fgonzalezr1998 thanks for your code. First, I checked the following:
After that, I faced the problem below, but I cannot get /tf from …
@fgonzalezr1998 Sorry, sir |
@fgonzalezr1998 Also, if the camera detects a person, does Zmax mean the distance between the person's back and the camera? I saw that the distance in the Z coordinate is (Zmax+Zmin)/2, but the real distance is closer to (Zmax-Zmin) in my case. |
@Jaelly The X axis aims to the front, Y aims to the right, and Z aims up. So Z can be negative if the camera is placed higher than the object. Likewise, Y can be negative if the camera is placed to the right of the object. About the distance at which the object is placed (from the camera): Xmin represents the nearest point. |
@fgonzalezr1998 Are there any tips to improve the FPS with a Xavier NX and a RealSense camera using rs_rgbd? I set up the Xavier NX and another PC on the same network, and I use image_transport to subscribe to the compressed image in darknet_ros_3d. |
Hi @Jaelly. Perhaps the problem is that darknet_ros is not using your GPU; in that case, it may be related to CUDA. Two cases may happen:
Please carry out these checks and report the results here |
I think it is because of case 2, so sending the image frames is already slow. Are there any tips to improve the FPS without changing the NX board? :'( |
Are you consuming the point cloud inside the NX, or on another PC over a WiFi or Ethernet connection? If you are receiving the point cloud on another computer, you can't avoid the bandwidth limitations. However, if you are not sending the point cloud over the network, I don't know what is happening. Try running the `rostopic hz` command inside the NX (over ssh, for example), because if you are running rqt on the other computer to which the NX is connected, it's normal to see low FPS. @Jaelly |
@fmrico @fgonzalezr1998 @jginesclavero this is the output
The only issue I can think of after all the debugging is that the … Thanks! |
Hi @TapleFlib, you have to clone the repositories that contain darknet_ros_msgs and gb_visual_detection_3d_msgs into your workspace. They are in different GitHub repos. |
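A sketch of the workspace setup. The repository URLs and the `~/catkin_ws` path are my assumptions about the upstream locations, so verify them before cloning:

```shell
# Clone the message packages into the catkin workspace and rebuild.
cd ~/catkin_ws/src
git clone https://github.com/leggedrobotics/darknet_ros.git              # provides darknet_ros_msgs
git clone https://github.com/IntelligentRoboticsLabs/gb_visual_detection_3d_msgs.git
cd ~/catkin_ws
catkin_make
```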
Thank you! |
@fgonzalezr1998 Hello, do you know where I can find the example code for adding a detected object as a tf and sending it to a robotic arm? https://github.com/introlab/find-object/blob/master/src/ros/tf_example_node.cpp is not available.
I am using a RealSense D435, so I changed the "darknet_3d.yaml" file as follows.
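For reference, a darknet_3d.yaml configured for a D435 typically looks roughly like this (key names and topic paths are recalled from the darknet_ros_3d repo and the realsense driver, so treat them as assumptions to verify against your copies):

```yaml
# Hypothetical darknet_3d.yaml for a D435; verify each key and topic.
darknet_ros_topic: /darknet_ros/bounding_boxes
output_bbx3d_topic: /darknet_ros_3d/bounding_boxes
point_cloud_topic: /camera/depth_registered/points   # published by rs_rgbd.launch
working_frame: camera_link
interested_classes: ["person"]
```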
Here is the launch file.
This is the node graph(Nodes/Topics (all)).
darknet works fine, but the /darknet_ros_3d/bounding_boxes topic does not have any information.
How can I solve this problem?