This repository contains an implementation of text-to-image synthesis using CLIP and VQGAN. The project demonstrates how to generate images from textual descriptions by combining OpenAI's CLIP model with a VQGAN image generator.
Text_To_Image.ipynb: the Google Colab notebook containing the full implementation of the text-to-image synthesis. The notebook walks through the setup, training, and generation process using CLIP and VQGAN.
To run this notebook, you will need the following dependencies:
- Python 3.x
- PyTorch
- CLIP
- VQGAN
- Google Colab (recommended for ease of use)
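The notebook installs its own dependencies in Colab; for a local run, one plausible setup looks like the sketch below. The exact package sources are assumptions (CLIP is commonly installed from OpenAI's repository, and VQGAN from the CompVis taming-transformers project); the notebook may pin different versions.

```shell
# Install PyTorch and torchvision (pick the build matching your CUDA setup; see pytorch.org).
pip install torch torchvision

# CLIP is not on PyPI under this name; it is typically installed from OpenAI's repo.
pip install git+https://github.com/openai/CLIP.git

# VQGAN commonly comes from the taming-transformers project, installed from a clone.
git clone https://github.com/CompVis/taming-transformers.git
pip install -e ./taming-transformers
```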
- Clone the repository:

  git clone https://github.com/Daimon5/Text_To_Image.git
- Open the notebook in Google Colab by uploading it or using the GitHub link directly.
- Text to Image Synthesis:
  - Follow the notebook to understand how to generate images from text descriptions.
  - Input your desired text prompts.
  - Generate and visualize the corresponding images.
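At its core, CLIP-guided VQGAN generation optimizes a latent vector so that the decoded image's CLIP embedding aligns with the text prompt's embedding. The toy sketch below illustrates that optimization loop with stand-in modules (a random linear "decoder" and a random target embedding) instead of the real CLIP and VQGAN models, which are too heavy for a quick example; every name here is a placeholder, not the repository's actual code.

```python
import torch

torch.manual_seed(0)
embed_dim = 64

# Stand-in for the CLIP text embedding of the prompt (a fixed unit vector).
text_emb = torch.randn(embed_dim)
text_emb = text_emb / text_emb.norm()

# Stand-in for "VQGAN decode + CLIP image encode": a frozen linear map
# from a small latent to the embedding space.
decoder = torch.nn.Linear(16, embed_dim)
for p in decoder.parameters():
    p.requires_grad_(False)

# As in CLIP-guided VQGAN, only the latent is optimized.
z = torch.randn(16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)

def clip_loss(latent):
    img_emb = decoder(latent)
    img_emb = img_emb / img_emb.norm()
    # Negative cosine similarity: lower means better alignment with the prompt.
    return -(img_emb @ text_emb)

start = clip_loss(z).item()
for _ in range(200):
    opt.zero_grad()
    loss = clip_loss(z)
    loss.backward()
    opt.step()
end = clip_loss(z).item()
# The loss after optimization should be lower than at the start.
print(start, end)
```

In the real notebook the decoder is the pretrained VQGAN and the similarity is computed by CLIP's image and text encoders, but the structure of the loop is the same: encode, score against the prompt, backpropagate into the latent.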
This project is licensed under the MIT License - see the LICENSE file for details.