fashion.pulse


Stylish Introduction ✨✨✨

The project provides a service that evaluates how fashionable the outfit in an image is. The fashion score is given for 20 different styles.


To try the app yourself, check the Releases section!

This repo includes

  • a FastAPI server and an Android app implementation;
  • a PyTorch Lightning model that scores the images;
  • the img_fashion_styles dataset gathered from Pinterest;

...and, of course, the infrastructure we developed to reproduce our results and conduct further experiments.


Detailed repo structure 🔍

The repo is organised in the following way:

  • core — the main directory with everything related to experimenting with the model and gathering the dataset;
    • data — the directory that holds the dataset; initially it contains only the compressed img_fashion_styles.7z, which FashionStylesDataModule extracts to img_fashion_styles_extracted (see the sketch after this list);
    • notebooks — contains notebooks with several preliminary experiments and an example model training pipeline;
    • src — contains all the machine learning code;
      • data — dedicated to gathering, preparing and compressing the raw dataset (the ready-to-use compressed version is already located in the data directory);
      • models — every piece of code related to the training pipeline lives there;
      • server — contains the implementation of the FastAPI server; to run it properly, refer to README_SERVER.md;
      • utils — finally, just a bunch of utility modules used throughout the project;
  • androidApp — the implementation of the Android app.
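
For orientation, here is a minimal sketch of how the data module might be driven from Python. It is an assumption-heavy illustration: the import path and constructor arguments of FashionStylesDataModule are hypothetical, so check core/src for the real signatures.

# Hypothetical usage sketch -- the import path and arguments are assumptions.
import os

from data.datamodule import FashionStylesDataModule  # assumed module path

# DATA_DIR should point to core/data (see the environment variables below).
dm = FashionStylesDataModule(data_dir=os.environ["DATA_DIR"])

# prepare_data() is the usual PyTorch Lightning hook for one-time work,
# such as extracting img_fashion_styles.7z into img_fashion_styles_extracted.
dm.prepare_data()
dm.setup("fit")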

Train model & deploy server 🚂

Train the fanciest model

TL;DR: you can skip reading and just jump to the Jupyter notebook for an example of the complete model training pipeline! A more detailed guide is presented below.

Training the model is pretty straightforward; luckily, we made it simple! First, go to the core/src directory; all the following commands should be executed from there.

cd core/src

Since Wandb is used for logging, you should first log in to it with your credentials. If you use your own Wandb project, don't forget to update its name.

wandb login
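
The project name is presumably set where the training code constructs its logger. Assuming the repo uses PyTorch Lightning's WandbLogger (the project name below is a placeholder, not necessarily the repo's actual one), the relevant call looks like this:

from pytorch_lightning.loggers import WandbLogger

# Sketch only: the real logger setup lives somewhere under core/src/models.
# If you use your own Wandb project, change the project name here.
logger = WandbLogger(project="fashion-pulse")  # placeholder project name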

Then set several environment variables to the corresponding paths; don't forget to use absolute paths.

export DATA_DIR=.../core/data # directory to extract the dataset
export ARTIFACTS_DIR=.../core/artifacts # directory to store checkpoints and wandb logs
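
Internally, the code presumably picks these paths up via os.environ; a minimal sketch of the assumed pattern, including a check that the paths are absolute:

import os
from pathlib import Path

# Assumed pattern: read the two variables exported above.
data_dir = Path(os.environ["DATA_DIR"])            # the dataset is extracted here
artifacts_dir = Path(os.environ["ARTIFACTS_DIR"])  # checkpoints and wandb logs

# Relative paths break once a module changes the working directory.
assert data_dir.is_absolute() and artifacts_dir.is_absolute(), "use absolute paths"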

Train the model for the specified number of epochs. On Google Colab with a GPU machine, one epoch takes approximately 1.5 minutes.

python -m models.train --num_epochs=100

Wandb will output the generated run id (for example, sg3yeobh). Don't forget to save it and pass it to the testing module later, so the test execution is logged in the same Wandb run.

Once the training is finished, the time has come to test the model!

python -m models.test --run_id sg3yeobh

Finally, to use the trained model, you can run the predict module or call its functions directly from Python code.

python -m models.predict --image_path img_fashion_styles_extracted/gothic/women-490-65.jpg --ckpt_path checkpoints/model.ckpt
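
If you prefer calling the model from Python instead of the CLI, the pattern presumably looks like the sketch below. The function name predict_image and its signature are assumptions made for illustration; check models/predict.py for the actual API.

# Hypothetical API sketch -- the real names in models/predict.py may differ.
from models.predict import predict_image  # assumed function name

scores = predict_image(
    image_path="img_fashion_styles_extracted/gothic/women-490-65.jpg",
    ckpt_path="checkpoints/model.ckpt",
)
print(scores)  # e.g. one score for each of the 20 styles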

Deploy the most reliable server

See details in README_SERVER.md.

Infrastructure methods overview ⚙️

This section is in progress. We tried to make the code as readable as possible, so we hope a passionate reader will be able to go through it happily ;-)