
[EMNLP'23] ViPE: Visualise Pretty-much Everything

ViPE: Visualise Pretty-much Everything

Project Under Construction

Note: This repository is currently under construction and not yet completed. It is a work in progress, and changes are being made regularly.

Please feel free to check back later for updates or follow/watch this repository to receive notifications when it's ready for use.

Thank you for your patience!

TODO:

  1. Correct all the links.
  2. Add video generation website.
  3. Add Hugging Face page if available.
  4. Refactor chatgpt, chatgpt_run and genius into one folder called lyric_canvas?
  5. Remove .sh files.

🗄 Code Structure

```
├── vipe
│   ├── chatgpt-run                   <- build your own LLM-powered dataset
│   ├── datasets                      <- path to all relevant datasets to reproduce ViPE results
│   ├── genius                        <- interface to the Genius API
│   ├── README.md
│   └── output                        <- folder that stores models and logs
```

💾 Downloads

TODO:

  1. Path to the retrieval files. All 4 pickle files and the images for train and eval. Upload to cloud.

HAIVMet

We compare ViPE against human annotators on understanding and visualising figurative speech, using the VisualMetaphors (HAIVMet) dataset. To download the dataset, please follow their instructions.

The datasets folder should have the following structure:

```
├── datasets
│   ├── HAIVMet
│   │   ├── ad_slogans.zip
│   │   ├── bizzoni.zip
│   │   ├── copoet.zip
│   │   ├── figqa.zip
│   │   ├── flute.zip
│   │   └── tsvetkov.zip
│   └── retrieval
│       ├── chatgpt
│       ├── haivmet
│       ├── vipe
│       ├── metaphor_id.pickle
│       ├── prompt_dict_chatgpt.pickle
│       ├── prompt_dict_haivmet.pickle
│       └── prompt_dict_vipe.pickle
```
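As a quick sanity check before running the evaluation, the expected layout can be verified with a short script. This is a minimal sketch; `missing_entries` is a hypothetical helper, not part of the repository:

```python
from pathlib import Path

# Expected datasets/ layout, mirroring the tree above.
EXPECTED = {
    "HAIVMet": ["ad_slogans.zip", "bizzoni.zip", "copoet.zip",
                "figqa.zip", "flute.zip", "tsvetkov.zip"],
    "retrieval": ["chatgpt", "haivmet", "vipe", "metaphor_id.pickle",
                  "prompt_dict_chatgpt.pickle", "prompt_dict_haivmet.pickle",
                  "prompt_dict_vipe.pickle"],
}

def missing_entries(root):
    """Return the expected dataset entries that are absent under root."""
    root = Path(root)
    return [f"{sub}/{name}"
            for sub, names in EXPECTED.items()
            for name in names
            if not (root / sub / name).exists()]
```

Calling `missing_entries("datasets")` should return an empty list once everything is downloaded and placed correctly.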

Evaluation

Image-Text Retrieval

To generate datasets for the respective models, run the following:

```
python3 evaluation/retrieval/create_dataset.py --model <haivmet/vipe/chatgpt> \
    --dataset <ad_slogans/bizzoni/copoet/figqa/flute/tsvetkov> \
    --savedir <path/to/store/datasets/> \
    --img_size <image resolution> --num_images <number of images per prompt> \
    --checkpoint <path/to/vipe/checkpoint/if/using/vipe>
```
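Reproducing the retrieval experiments means running this command for every model/dataset pair. A small driver script can enumerate the combinations; this is an illustrative sketch (the default values for `img_size` and `num_images` are assumptions, not the paper's settings):

```python
from itertools import product

# Model and dataset names taken from the command-line options above.
MODELS = ["haivmet", "vipe", "chatgpt"]
DATASETS = ["ad_slogans", "bizzoni", "copoet", "figqa", "flute", "tsvetkov"]

def build_commands(savedir, img_size=512, num_images=1, checkpoint=None):
    """Return one create_dataset.py argv list per (model, dataset) pair."""
    cmds = []
    for model, dataset in product(MODELS, DATASETS):
        cmd = ["python3", "evaluation/retrieval/create_dataset.py",
               "--model", model, "--dataset", dataset,
               "--savedir", savedir,
               "--img_size", str(img_size),
               "--num_images", str(num_images)]
        # The checkpoint flag is only needed when generating with ViPE.
        if model == "vipe" and checkpoint:
            cmd += ["--checkpoint", checkpoint]
        cmds.append(cmd)
    return cmds
```

Each argv list can then be passed to `subprocess.run` to launch the corresponding dataset-generation job.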

We conduct rigorous image-text retrieval using BLIP as the benchmark model.

```
python3 evaluation/retrieval/evaluation.py --dataset <haivmet/vipe/chatgpt> \
    --output_dir <path/to/store/checkpoints> --id_type <metaphor/prompt>
```
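The standard metric in this setting is Recall@k over a similarity matrix of image and text embeddings. The following is an illustrative sketch of the metric itself, not the repository's evaluation code:

```python
import numpy as np

def recall_at_k(scores, k):
    """Recall@k for a square image-text similarity matrix.

    scores[i, j] is the similarity of image i and text j; the ground-truth
    matching pair sits on the diagonal.
    """
    # Rank candidate texts for each image by descending similarity.
    ranks = np.argsort(-scores, axis=1)
    # A hit means the matching text appears among the top-k candidates.
    hits = (ranks[:, :k] == np.arange(len(scores))[:, None]).any(axis=1)
    return hits.mean()
```

With a perfectly diagonal similarity matrix, `recall_at_k(scores, 1)` is 1.0; mismatched pairs lower the score.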

📹 Music Video Generation

This implements the music video generation strategy used in our paper. For an updated version, please refer to ViPE-Videos.

```
python3 ./t2v/create_video.py --img_size 100 --outdir ./results/vids/finalise/ --fps 2
```
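At `--fps 2`, each frame is shown for half a second, so lyric-line timestamps map directly onto frame indices. A minimal sketch of that bookkeeping (these helpers are hypothetical, not part of `create_video.py`):

```python
import math

def frames_for_lines(line_times, fps=2):
    """Map lyric-line timestamps (in seconds) to video frame indices,
    assuming a constant frame rate as in the command above."""
    return [round(t * fps) for t in line_times]

def total_frames(duration_s, fps=2):
    """Number of frames needed to cover a clip of duration_s seconds."""
    return math.ceil(duration_s * fps)
```

For example, a 3-minute song at 2 fps needs `total_frames(180) == 360` frames.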

📑 Citation

If you found this repository useful, please consider citing:

```
@inproceedings{shahmohammadi2023vipe,
    title     = "ViPE: Visualise Pretty-much Everything",
    author    = "Hassan Shahmohammadi and Adhiraj Ghosh and Hendrik P. A. Lensch",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month     = dec,
    year      = "2023",
    address   = "Singapore",
    publisher = "Association for Computational Linguistics",
    url       = "https://arxiv.org/abs/2310.10543",
    eprint    = "2310.10543",
    archivePrefix = "arXiv",
    primaryClass  = "cs.CL"
}
```

👨‍🏫 Acknowledgements

Parts of our research build on the implementations of the following projects:
