FathomNet/MBARI-midwater-supercategory-detector
You only need to create your Conda environment once - proceed to the "Run Model" step if you have already done this
In a terminal window, navigate into the folder where you want to create the virtual environment:
$ module use /g/data/hh5/public/modules
$ module load conda/analysis3
$ python3 -m venv NAME_OF_ENVIRONMENT --system-site-packages
$ source NAME_OF_ENVIRONMENT/bin/activate
Install any missing libraries:
(NAME_OF_ENVIRONMENT) $ pip install ultralytics
In a terminal window, clone YOLOv5 and install its requirements:
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5/
$ pip install -r requirements.txt
$ cd ../NAME_OF_ENVIRONMENT
$ python
>>> import torch
>>> model = torch.hub.load("ultralytics/yolov5", "yolov5s")
Exit the Python interpreter to return to the terminal window:
>>> exit()
Download the model's weights file (best.pt) and note its path for the detection step below
Upload the imagery you want to run the model over and note its path for the detection command below
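Before launching the detector, it can be worth confirming the uploaded folder actually contains imagery. A minimal sketch (the extension list is an assumption based on common formats YOLOv5 accepts, not an exhaustive list):

```python
from pathlib import Path

# Extensions assumed to be readable by yolov5's detect.py (assumption)
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".tif", ".tiff"}

def count_images(folder):
    """Count files under `folder` (recursively) with a recognised image extension."""
    return sum(1 for p in Path(folder).rglob("*")
               if p.suffix.lower() in IMAGE_EXTS)

# usage: print(count_images("/path/to/images-or-video"))
```

A count of zero before a long batch run usually means the wrong path was passed.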
In a terminal window:
$ cd NAME_OF_ENVIRONMENT
$ source bin/activate
$ cd ../yolov5/
$ python detect.py --weights /path/to/best.pt --source /path/to/images-or-video --save-txt --save-csv --save-crop
Results are written to a new folder under runs/detect/ for each run (exp, exp2, exp3, ...); copy the latest one:
$ cd runs/detect/
$ rsync -ravzP ./exp /path/to/DESTINATION
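The --save-txt labels store one detection per line in YOLO's normalized format (class_id x_center y_center width height, all coordinates scaled to [0,1]). A sketch converting one line back to pixel coordinates for a known image size:

```python
def yolo_label_to_pixels(line, img_w, img_h):
    """Convert one YOLO-format label line (normalized cx cy w h)
    to (class_id, x_min, y_min, x_max, y_max) in pixels."""
    parts = line.split()
    cls = int(parts[0])
    # Extra trailing fields (e.g. confidence from --save-conf) are ignored
    cx, cy, w, h = (float(v) for v in parts[1:5])
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return cls, x_min, y_min, x_max, y_max

# usage:
# yolo_label_to_pixels("3 0.5 0.5 0.25 0.5", 1920, 1080)
# → (3, 720.0, 270.0, 1200.0, 810.0)
```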
The results folder contains:
- Original image with bounding box predictions
- Cropped bounding box of predicted classes for each image
- .txt file with all the bounding box information for each image
- .csv file with predictions and confidence levels for each image in the processed batch
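For a quick summary of a processed batch, the predictions CSV can be tallied per class. A sketch assuming one row per detection with a header row; the column name "Prediction" is an assumption based on a typical yolov5 --save-csv file, so check your run's predictions.csv header and adjust:

```python
import csv
from collections import Counter

def tally_predictions(csv_path, class_col="Prediction"):
    """Count detections per predicted class in a --save-csv output file.
    `class_col` is an assumed header name; adjust to match your file."""
    with open(csv_path, newline="") as f:
        return Counter(row[class_col] for row in csv.DictReader(f))

# usage: print(tally_predictions("runs/detect/exp/predictions.csv"))
```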