Unified codebase for Bilkent University 2020-2021 EEE 493/4 Industrial Design Project Group B4
- Acknowledgement
- Aim
- Project Structure
- Software Requirements for Main Microcontroller
- Software Requirements for Peripheral Devices
- Hardware Requirements
- Usage of Modules
- Simple Usage
- This project was realized in cooperation with ROKETSAN.
- 🇬🇧 This project was supported by TUBITAK under the 2209-B program.
🇹🇷 Bu çalışma 2209-B programı kapsamında TÜBİTAK tarafından desteklenmiştir.
In this project, autonomous mobile robots with heterogeneous hardware configurations are developed for collective task execution without communication between the robots or global mapping systems. Collective tasks, such as converging on a predefined target, are executed with three robots with different hardware configurations. Without communication between robots (e.g., RF, LoRa) and without global localization systems (e.g., GPS), the robots make use of both visual data from cameras and distance data from LiDAR to build local maps of their environment. The visual and distance data are further augmented with object identification and localization so that the robots can identify obstacles, peer robots, and objectives in the environment. The local map of the environment is then used by a custom obstacle avoidance algorithm to perform collective task execution. The robots' collective ability to execute common objectives is evaluated using five different scenarios:
- Target Identification and Transition with One Robot,
- Pioneer Robot Following,
- Target Identification and Locating with Three Robots,
- Target Identification and Transition with Three Robots While Avoiding Obstacles,
- Hostile Target Detection and Avoiding.
These scenarios test the robots' ability to identify a predefined target and converge on it using a local map of the environment. More challenging problems, such as path finding in an environment with obstacles, are solved using a custom grid-based obstacle avoidance algorithm. Developed systems such as object identification and path finding are required to run in parallel. For process orchestration and lightweight message passing, Redis is used for its low memory footprint and low communication overhead. More detailed information, such as the equipment list, search algorithms, and target and peer detection preferences, is provided in docs/report.pdf.
The project consists of five pre-defined scenarios. The actions we take in light of the same information vary between these scenarios, but we need to obtain the camera and LiDAR feeds and stream them for each scenario. In this context, we run the `detector` and `lidar` modules along with the ESP32 serial communication script in the background. These modules continuously fulfill their tasks and write the necessary information to the `redis-server` running on the device. The running motor controller script for the selected scenario then takes action according to the information on the `redis-server`.
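The pattern is plain key-value message passing over Redis. As a minimal sketch (the key name and payload format below are illustrative assumptions, not the project's actual schema), a producer module writes its latest reading while a scenario script polls it:

```python
import redis

# Connect to the local redis-server (default host and port).
r = redis.Redis(host="localhost", port=6379)

# A producer module (e.g., the detector) publishes its latest result.
# NOTE: "target_center" is a hypothetical key name for illustration.
r.set("target_center", "320,240")

# A scenario script polls the same key and acts on it.
raw = r.get("target_center")
if raw is not None:
    x, y = (int(v) for v in raw.decode().split(","))
    # ... drive the motors toward (x, y) ...
```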
Redis, a lightweight database system, must be installed in order to handle inter-process communication. The `OpenCV` library is also a requirement of the project. On ARM-based systems, `OpenCV` must be compiled from source; hence, this dependency is not included in the `Pipfile` or `requirements.txt` for the sake of generality. We therefore recommend the following steps for `x86` and `ARM` architectures, respectively.
For `x86`:

- Install Redis for your system.
- Obtain the appropriate version of Python.
- We recommend using a virtual environment with the help of `pipenv`. Install `pipenv` using `$ pip install --user pipenv`.
- Clone this project and `cd` into it.
- Create a virtual environment with the command `$ pipenv shell`.
- Install the necessary packages with the command `$ pipenv install`.
- Additionally, install `OpenCV` with the command `$ pipenv install opencv-contrib-python`.

For `ARM`:

- Download and build `OpenCV` from source with the extra modules.
- Install Redis for your system.
- Obtain the appropriate version of Python.
- Clone this project and `cd` into it.
- Install the necessary packages with the command `$ pip install -r requirements.txt`.
The main controller of the system communicates with several devices to accomplish the necessary tasks. Two of these devices, the `Arduino Nano` and the `ESP32`, must be individually programmed so that they can perform their tasks. For this project we utilized PlatformIO to program these devices, but any alternative is plausible. Program the `Arduino Nano` with `motor_controller/serial_pwm_driver.ino` and the `ESP32` with `esp32/src/main.cpp`. Detailed information for the `ESP32` is present in its own README.
For the entirety of the program, a camera must be connected to the main controller for detection and tracking of both the target and the peers. The `RPLIDAR` must be connected to the main controller with a data cable and must be powered appropriately for obstacle avoidance. An `Arduino Nano` must be connected to the main controller for the serial communication of motor control. The DC motors must be connected to the `Arduino Nano` for logic operations, and they must be connected to an appropriate power source. Finally, the main controller must be connected to the `ESP32` for the live camera and LiDAR feeds and for switching between multiple scenarios.
- Note that the provided code recognizes these devices by identifying them from their `(VID, PID)` pairs. For this project the utilized components have unique pairs, so this did not create a problem. For some `Arduino` clones, these pairs can have the same values as the `ESP32`, which can cause problems with device identification.
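As a hedged sketch of this identification scheme using pyserial's port listing (the `(VID, PID)` values below are placeholders, not the project's actual IDs):

```python
from serial.tools import list_ports

# Placeholder (VID, PID) pair; substitute the values of your own board.
ARDUINO_NANO = (0x1A86, 0x7523)

def find_port(vid_pid):
    """Return the device path of the first port matching (VID, PID), or None."""
    for port in list_ports.comports():
        if (port.vid, port.pid) == vid_pid:
            return port.device
    return None

print(find_port(ARDUINO_NANO))  # e.g., "/dev/ttyUSB0", or None if not found
```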
In order to use the modules, the `redis-server` must be running in the background. If Redis is successfully installed on your system, running the following command in your terminal starts the on-device server.
```
$ redis-server
```
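If you want to verify that the server is reachable, `redis-cli` ships with Redis and can ping it:

```
$ redis-cli ping
PONG
```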
There are two main modules in the project that can be run: `detector` and `lidar`. These modules can be run separately, although the `redis-server` must be running in the background. The specifications of these modules are described below.
The `detector` module takes the device number as an argument. If multiple cameras are present in the system, you can extend the program by running the `detector` module several times with different device IDs. The first and default device ID is `0`. The optional `--frame-size` argument takes dimensions separated by an `x` in the form `WIDTHxHEIGHT`, e.g., `640x480`, which adjusts the proportions of the camera feed. The camera's default dimensions are used if nothing is passed for this option. The `--serial` option enables the `detector` module to write the current frame to the `redis-server`, so that the ESP32 serial communication script can forward it to the `ESP32`. Then there are the `--display` and `--quiet` options. By default, the module prints the center point of the target and the current FPS value in the terminal, since during the scenarios there is no need to waste resources displaying the camera feed on the device, where we should only interact with the robots through the `ESP32`. For debugging purposes, the module provides the `--display` option to show the real-time camera feed on the device. Finally, the `--quiet` option suppresses the default output in the terminal.
```
Usage: python -m detector [OPTIONS] DEVICE

Options:
  --frame-size TEXT
  -d, --display
  -s, --serial
  -q, --quiet
  --help             Show this message and exit.
```
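For example, a typical debugging run on the default camera with a reduced frame size would be:

```
$ python -m detector 0 --frame-size 640x480 --display
```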
There are three commands for the `lidar` module: `detect`, `display`, and `stop`.
```
Usage: python -m lidar [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  detect
  display
  stop
```
The first of these is the `detect` command. This command takes multiple arguments in the form of `LIDAR_RANGES`, i.e., `NAME:START-STOP`, e.g., `front:340-20`. Since the LiDAR can scan 360 degrees, each argument pairs a key name with the angle interval, taken modulo 360, that corresponds to it. For each given key, the LiDAR filter we implemented provides the minimum distance at which it detects an obstacle.
```
Usage: python -m lidar detect [OPTIONS] [LIDAR_RANGES]...

Options:
  --port TEXT
  --help       Show this message and exit.
```
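For example, `$ python -m lidar detect front:340-20 rear:160-200` would track two named sectors (the sector names here are illustrative). A minimal sketch of the wraparound filtering idea, assuming scan points arrive as `(angle, distance)` pairs, might look like this; it is not the project's actual implementation:

```python
def in_range(angle, start, stop):
    """Check membership in a sector modulo 360; e.g., 340-20 wraps past 0."""
    angle, start, stop = angle % 360, start % 360, stop % 360
    if start <= stop:
        return start <= angle <= stop
    return angle >= start or angle <= stop  # wraparound sector

def min_distance(scan, start, stop):
    """Minimum detected distance among points falling inside the sector."""
    hits = [d for a, d in scan if in_range(a, start, stop)]
    return min(hits) if hits else None

# Example: a few (angle, distance) samples from one revolution.
scan = [(350.0, 1200.0), (10.0, 800.0), (90.0, 3000.0)]
print(min_distance(scan, 340, 20))  # 800.0 -- the closest obstacle in "front"
```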
With the `display` command, you can display the live feed of a key provided to the running `detect` command. Since the `detect` command constantly updates the distances corresponding to the provided keys, the `display` command lets you observe any of the keys, as shown below.
```
Usage: python -m lidar display [OPTIONS] NAME

Options:
  --help  Show this message and exit.
```
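For example, to watch the `front` key from the earlier `detect` example:

```
$ python -m lidar display front
```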
Finally, the `stop` command is an external fail-safe, added for `ignition.py`, that stops the motor of the `RPLIDAR` and gracefully shuts down the LiDAR.
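The command can also be issued manually:

```
$ python -m lidar stop
```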
We provide a script, `ignition.py`, to initiate every helper program; the user can then set the desired scenario with the scenario changer script if device access is present, or from the client service that can be accessed by connecting to the host on the `ESP32`. `ignition.py` starts the `detector` and `lidar` modules as separate processes, along with the ESP32 serial communication script, in the background. If a scenario selection is detected, `ignition.py` kills the ongoing scenario script's process, if there is one, and creates a new process with the selected scenario's script. When all the requirements are satisfied and all the hardware connections are correct, the user can simply start the whole project with the following command.
```
$ python ignition.py
```
This script starts the program with the default values of the modules and starts streaming to the `ESP32`.
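For reference, a simplified sketch of this orchestration pattern follows; the Redis key, scenario script names, and lidar ranges are illustrative assumptions, not the script's actual values:

```python
import subprocess
import time

import redis

r = redis.Redis()

# Start the helper programs as background processes.
helpers = [
    subprocess.Popen(["python", "-m", "detector", "0", "--serial"]),
    subprocess.Popen(["python", "-m", "lidar", "detect", "front:340-20"]),
]

scenario = None  # the currently running scenario process, if any
try:
    while True:
        choice = r.get("scenario")  # hypothetical key set by the scenario changer
        if choice is not None:
            if scenario is not None:
                scenario.kill()  # stop the ongoing scenario first
            # hypothetical naming scheme for the scenario scripts
            scenario = subprocess.Popen(["python", f"scenario_{choice.decode()}.py"])
            r.delete("scenario")
        time.sleep(0.5)
finally:
    for proc in helpers:
        proc.kill()
```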