The project provides a service that evaluates how fashionable the outfit in an image is. The fashion score is computed for 20 different styles.
To try the app yourself, check the Releases section!
- FastAPI server and Android app implementations;
- PyTorch Lightning model to evaluate an image's fashion score;
- `img_fashion_styles` dataset gathered from Pinterest;
- ...and, of course, a developed infrastructure to reproduce our results and conduct further experiments.
The repo is organised in the following way:
- `core` — the main directory with everything related to experimenting with the model and gathering the dataset;
  - `data` — the directory that keeps the dataset; initially it contains only the compressed `img_fashion_styles.7z`, which is extracted into `img_fashion_styles_extracted` by `FashionStylesDataModule` (see the sketch after this list);
  - `notebooks` — contains notebooks with several preliminary experiments and an example model training pipeline;
  - `src` — contains all the machine learning code;
    - `data` — dedicated to gathering, preparing, and compressing the raw dataset (the ready-to-use compressed version is already located in the `data` directory);
    - `models` — every piece of code related to the training pipeline;
    - `server` — the implementation of the FastAPI server; to run it properly, you'll probably need to refer to `README_SERVER.md`;
    - `utils` — finally, just a bunch of utility modules used throughout the project;
- `androidApp` — the implementation of the Android app.
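For illustration, here is a minimal sketch of how `FashionStylesDataModule` could be driven; the import path and constructor arguments are assumptions, not the repo's verified API:

```python
# Minimal sketch: the import path and constructor arguments are assumptions,
# not the repo's verified API.
from data.fashion_styles_data_module import FashionStylesDataModule

dm = FashionStylesDataModule(data_dir="/abs/path/to/core/data", batch_size=32)
dm.prepare_data()       # extracts img_fashion_styles.7z into img_fashion_styles_extracted
dm.setup(stage="fit")   # standard LightningDataModule hook
train_loader = dm.train_dataloader()
```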
TL;DR: you can skip reading and just jump to the Jupyter notebook to check an example of the complete model training pipeline! But a more detailed guide is presented below.
Training the model is pretty straightforward; luckily, we made it simple! First, go to the `core/src` directory; all the following commands should be executed from there.

```bash
cd core/src
```
Since Wandb is used for logging the model, you should first log in with your credentials. If you use your own Wandb project, don't forget to update its name.

```bash
wandb login
```
Then set several environment variables to the corresponding paths; don't forget to use absolute paths.

```bash
export DATA_DIR=.../core/data           # directory to extract the dataset
export ARTIFACTS_DIR=.../core/artifacts # directory to store checkpoints and wandb logs
```
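Presumably the training code reads these via `os.environ`, roughly like this (a sketch, not the repo's exact code):

```python
import os

# Both paths must be absolute, as noted above.
data_dir = os.environ["DATA_DIR"]            # where the dataset is extracted
artifacts_dir = os.environ["ARTIFACTS_DIR"]  # where checkpoints and wandb logs go
```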
Train the model for the specified number of epochs. On a Google Colab GPU machine, one epoch takes approximately 1.5 minutes.

```bash
python -m models.train --num_epochs=100
```
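Under the hood, the training module presumably wires a PyTorch Lightning `Trainer` to a `WandbLogger`, roughly like this sketch (the import paths, class names, and the `fashion-styles` project name are assumptions):

```python
import os

import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

# Assumed import paths and class names, for illustration only.
from data.fashion_styles_data_module import FashionStylesDataModule
from models.fashion_styles_model import FashionStylesModel

logger = WandbLogger(project="fashion-styles",  # placeholder project name
                     save_dir=os.environ["ARTIFACTS_DIR"])
trainer = pl.Trainer(max_epochs=100, logger=logger,
                     default_root_dir=os.environ["ARTIFACTS_DIR"])
trainer.fit(FashionStylesModel(),
            datamodule=FashionStylesDataModule(data_dir=os.environ["DATA_DIR"]))
```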
Wandb will output the generated run id (for example, `sg3yeobh`); don't forget to save it and pass it to the testing module later, so the test execution is logged in the same Wandb run.
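Passing the same id lets the test module attach to the existing run; with the `wandb` Python API that resumption looks roughly like this (the project name and metric name are placeholders):

```python
import wandb

# Resume the run created during training so test metrics land in the same place.
run = wandb.init(project="fashion-styles",  # placeholder project name
                 id="sg3yeobh", resume="must")
run.log({"test_metric": 0.0})  # placeholder metric name and value
run.finish()
```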
Once the training is finished, the time has come to test the model!

```bash
python -m models.test --run_id sg3yeobh
```
Finally, to use the trained model, you can run the `predict` module or call its functions directly from Python code.

```bash
python -m models.predict --image_path img_fashion_styles_extracted/gothic/women-490-65.jpg --ckpt_path checkpoints/model.ckpt
```
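A hypothetical sketch of the direct-call path; the model class, its import path, and the preprocessing below are assumptions, so check `models/predict.py` for the actual API:

```python
# Hypothetical sketch: FashionStylesModel, its import path, and the
# preprocessing are assumptions; see models/predict.py for the real API.
import torch
from PIL import Image
from torchvision import transforms

from models.fashion_styles_model import FashionStylesModel  # assumed path

model = FashionStylesModel.load_from_checkpoint("checkpoints/model.ckpt")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed input size
    transforms.ToTensor(),
])

image = Image.open("img_fashion_styles_extracted/gothic/women-490-65.jpg").convert("RGB")
with torch.no_grad():
    scores = model(preprocess(image).unsqueeze(0))  # one score per each of the 20 styles
print(scores)
```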
See details in `README_SERVER.md`.
This section is in progress. We tried to make the code as readable as possible, so we hope that a passionate reader will be able to go through it happily ;-)