Hello, and thanks a lot for sharing this code; it is really helpful. I have tried your example on the yymnist dataset you provided, but now I want to train on a dataset I created myself. I looked at the files that were generated when I ran make_data.py with the example from your README, and this was the directory structure:
├── test
│ ├── 000001.jpg
│ ├── 000002.jpg
│ ├── 000003.jpg
. .
. .
. .
│ ├── 000197.jpg
│ ├── 000198.jpg
│ ├── 000199.jpg
│ └── 000200.jpg
├── train
│ ├── 000001.jpg
│ ├── 000002.jpg
│ ├── 000003.jpg
│ ├── 000004.jpg
. .
. .
. .
│ ├── 000997.jpg
│ ├── 000998.jpg
│ ├── 000999.jpg
│ └── 001000.jpg
├── yymnist_test.txt
└── yymnist_train.txt
So far, this is very similar to what I have:
dataset
├── train
│ ├── img-1.jpg
│ ├── img-1.txt
│ ├── img-2.jpg
│ ├── img-2.txt
. .
. .
. .
│ ├── img-999.jpg
│ ├── img-999.txt
│ ├── img-1000.jpg
│ ├── img-1000.txt
├── test
│ ├── img-1.jpg
│ ├── img-1.txt
│ ├── img-2.jpg
│ ├── img-2.txt
. .
. .
. .
│ ├── img-199.jpg
│ ├── img-199.txt
│ ├── img-200.jpg
│ ├── img-200.txt
├── test.txt
├── train.txt
└── classes.names
I want to use transfer learning, starting from the original yolov3.weights file and fine-tuning from there. I am using labelImg as my annotation tool, and I would like to know how to convert this structure into one your code accepts. Each image has a corresponding text file containing its annotations in the format labelImg exports: <class-id> <x-center> <y-center> <width> <height>
where all coordinates are normalised to [0, 1]. Your file (yymnist_train.txt), however, contains lines like this: dataset/train/000001.jpg 136,361,150,375,3 328,125,412,209,4 244,25,328,109,6
which look like absolute pixel coordinates in the form xmin,ymin,xmax,ymax,class-id.
Can you please help me convert my annotated dataset into the format that is accepted in your code?
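For reference, I imagine the conversion would look roughly like this. This is only a sketch based on my understanding of the two formats: it assumes each labelImg .txt file holds one `<class> <x-center> <y-center> <width> <height>` line per box (all normalised), and that the image width/height are known (in practice I would read them from each image, e.g. with PIL's `Image.open(path).size`):

```python
def labelimg_to_line(image_path, img_w, img_h, ann_lines):
    """Convert one image's labelImg (YOLO-style) annotations into a single
    'image_path xmin,ymin,xmax,ymax,class' line.

    ann_lines: lines from the image's .txt file, each of the form
               '<class> <x_center> <y_center> <width> <height>' (normalised).
    img_w, img_h: the image's pixel width and height.
    """
    boxes = []
    for line in ann_lines:
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        cls = int(parts[0])
        xc, yc, w, h = map(float, parts[1:5])
        # Convert normalised centre/size to absolute corner coordinates.
        xmin = int((xc - w / 2) * img_w)
        ymin = int((yc - h / 2) * img_h)
        xmax = int((xc + w / 2) * img_w)
        ymax = int((yc + h / 2) * img_h)
        boxes.append(f"{xmin},{ymin},{xmax},{ymax},{cls}")
    return " ".join([image_path] + boxes)


# Hypothetical usage: a box of class 3 centred at (0.5, 0.5), 20% wide/tall,
# in a 100x200 image.
print(labelimg_to_line("dataset/train/img-1.jpg", 100, 200,
                       ["3 0.5 0.5 0.2 0.2"]))
```

Looping this over every .jpg/.txt pair in my train and test folders and writing the lines out should then produce files shaped like yymnist_train.txt, if I have understood your format correctly.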
Many thanks in advance!