Faceoff is a comprehensive pipeline for deepfake generation and detection; you can use it for either task. The project is divided into four main parts: generation, generation evaluation, detection, and a prototype system (server and UI). You can run the project directly or find the repository on GitHub: https://github.com/girotte-tao/FaceOff.
This project pulls in multiple external projects as submodules, each with its own package dependencies, so no single environment can run everything. Set up a separate environment for each component you use.
After cloning the repository from GitHub, initialize the submodules to use VFD or ICT:

```bash
git submodule update --init --recursive
```
For detailed instructions on generation, refer to `generation/README.md`. This feature leverages some code from faceswap.
Generation evaluation uses the LLaVA vision-language model and depends on Ollama. Install the Python client:

```bash
pip install ollama
```
Then pull and run the model in a terminal:

```bash
ollama run llava:7b
```
Place the videos to be evaluated in `FaceOff/evaluation/ollama/videos` and run:

```bash
python FaceOff/evaluation/ollama/llava_evaluation.py
```
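Under the hood, the evaluation script queries the served model through the Ollama Python client. Here is a minimal sketch of what such a call can look like; the frame selection, prompt, and file name are illustrative assumptions, not the actual logic of `llava_evaluation.py`:

```python
import cv2      # pip install opencv-python
import ollama

def describe_middle_frame(video_path: str) -> str:
    """Grab the middle frame of a video and ask LLaVA to assess it."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(cap.get(cv2.CAP_PROP_FRAME_COUNT) // 2))
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"could not read a frame from {video_path}")
    # Encode the frame as JPEG bytes; the Ollama client accepts raw image bytes.
    _, buf = cv2.imencode(".jpg", frame)
    response = ollama.chat(
        model="llava:7b",
        messages=[{
            "role": "user",
            "content": "Describe this face. Does it look natural or manipulated?",
            "images": [buf.tobytes()],
        }],
    )
    return response["message"]["content"]

print(describe_middle_frame("FaceOff/evaluation/ollama/videos/sample.mp4"))  # hypothetical file
```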
A `results.jsonl` file will be generated in the `FaceOff/evaluation/ollama` directory. To get insights from it, run:

```bash
python FaceOff/evaluation/ollama/get_insights.py
```

This generates a `statistics_llava_evaluation_generated_deepfake.txt` file in the same directory.
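If you want to inspect the raw results yourself, here is a hedged sketch of reading the JSONL output; the `video` and `response` field names are assumptions about the schema, not guaranteed by the script:

```python
import json
from pathlib import Path

results_path = Path("FaceOff/evaluation/ollama/results.jsonl")

# JSONL: one JSON object per line.
records = [json.loads(line)
           for line in results_path.read_text().splitlines()
           if line.strip()]

print(f"Evaluated {len(records)} videos")
for rec in records[:3]:
    # Assumed schema: video name plus LLaVA's free-text answer.
    print(rec.get("video"), "->", str(rec.get("response"))[:80])
```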
Three models are available for detection. Install the corresponding environment for the model you choose, creating it from the provided environment file:

```bash
conda env create -f environment.yml
```
Subsequent operations must be performed in the corresponding model environment.
To train the Faceoff model, download the dataset from here and place it in `FaceOff/detection/model/FaceoffModel/data/FakeAVCeleb_v1.2/FakeAVCeleb`.
Run the training script:

```bash
bash FaceOff/detection/model/FaceoffModel/run_train.sh
```
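Before launching training, it can help to sanity-check that the dataset landed where the script expects it. A minimal sketch, assuming only the path given above (the category folders it prints depend on your copy of FakeAVCeleb):

```python
from pathlib import Path

DATA_ROOT = Path("FaceOff/detection/model/FaceoffModel/data/FakeAVCeleb_v1.2/FakeAVCeleb")

if not DATA_ROOT.is_dir():
    raise SystemExit(f"Dataset not found at {DATA_ROOT}; see the download step above.")

# Count videos per top-level category (e.g. FakeVideo-FakeAudio in the
# usual FakeAVCeleb layout; adjust the glob if your copy differs).
for sub in sorted(p for p in DATA_ROOT.iterdir() if p.is_dir()):
    n = sum(1 for _ in sub.rglob("*.mp4"))
    print(f"{sub.name}: {n} videos")
```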
For the VFD model, refer to the official README at `FaceOff/detection/model/VFD/README.md`.
- Download the pretrained model from this link and put it in `./checkpoints/VFD`.
- Download the sample dataset from this link and unzip it to `./Dataset/FakeAVCeleb`.
- Run the test script:

  ```bash
  python test_DF.py --dataroot ./Dataset/FakeAVCeleb --dataset_mode DFDC --model DFD --no_flip --checkpoints_dir ./checkpoints --name VFD
  ```
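If you need to drive this test from Python rather than the shell (for example from the adapter described later), a hedged sketch of wrapping the same invocation with `subprocess`:

```python
import subprocess

# Mirrors the CLI invocation above; run from the VFD model directory.
cmd = [
    "python", "test_DF.py",
    "--dataroot", "./Dataset/FakeAVCeleb",
    "--dataset_mode", "DFDC",
    "--model", "DFD",
    "--no_flip",
    "--checkpoints_dir", "./checkpoints",
    "--name", "VFD",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```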
For the ICT model, refer to the official README at `FaceOff/detection/model/ICT/README.md`.
- Download the pretrained ICT Base and move it to `PRETRAIN/ICT_BASE`. For ICT-Reference, download the already built reference set and move it to `PRETRAIN/ICT_BASE`.
- Extract faces from the videos and align them (a generic sketch of this step follows this list):

  ```bash
  python -u preprosee.py
  ```

  This is a simple example; modify the input/output paths for different datasets.
- Run the test script:

  ```bash
  bash ict_test.sh
  ```

  - `--name`: pretrained model name
  - `--aug_test`: test robustness toward different image augmentations
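The preprocessing step above amounts to detecting and cropping faces frame by frame. As a generic illustration only (ICT's `preprosee.py` uses its own detector and landmark alignment), here is a sketch with OpenCV's bundled Haar cascade:

```python
import cv2
from pathlib import Path

# Generic face cropping; ICT's real pipeline additionally aligns faces to landmarks.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(video_path: str, out_dir: str, every_n: int = 30) -> None:
    """Save a cropped face image for every n-th frame of the video."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                cv2.imwrite(f"{out_dir}/face_{saved:05d}.png", frame[y:y + h, x:x + w])
                saved += 1
        idx += 1
    cap.release()

extract_faces("input.mp4", "faces_out")  # illustrative paths
```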
To use the adapter for inference, make sure to switch to the corresponding model environment and update the video path, then run:

```bash
python FaceOff/evaluation/adapter/modelAdapter.py
```
To perform an overall detection evaluation using the adapter, run:

```bash
python FaceOff/evaluation/adapter/modelAdapterEval.py
```
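Conceptually, the adapter hides each detector behind one interface so callers do not care which model runs underneath. A hedged sketch of that pattern (class and method names are illustrative, not the actual `modelAdapter.py` API):

```python
from abc import ABC, abstractmethod

class DetectorAdapter(ABC):
    """Common interface each detection model is wrapped behind."""

    @abstractmethod
    def predict(self, video_path: str) -> float:
        """Return a fake-probability score in [0, 1] for the video."""

class VFDAdapter(DetectorAdapter):
    def predict(self, video_path: str) -> float:
        raise NotImplementedError  # would invoke VFD's test pipeline in its own environment

class ICTAdapter(DetectorAdapter):
    def predict(self, video_path: str) -> float:
        raise NotImplementedError  # would run ICT preprocessing + inference

def detect(adapter: DetectorAdapter, video_path: str) -> str:
    return "fake" if adapter.predict(video_path) >= 0.5 else "real"
```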
The backend uses Python Flask:

```bash
pip install flask==1.1.2
python FaceOff/server/app.py
```
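As a minimal sketch of what such a backend endpoint can look like (the route, form field, and response shape are assumptions, not the actual `app.py`):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/detect", methods=["POST"])  # hypothetical route
def detect():
    video = request.files.get("video")
    if video is None:
        return jsonify(error="no video uploaded"), 400
    video.save("/tmp/upload.mp4")
    # A real handler would pass the file to the model adapter here.
    return jsonify(result="real", score=0.12)  # placeholder response

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```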
The frontend uses React. Install Node.js, npm, and Yarn:
- Node.js: v22.4.1
- npm: v10.8.1
- Yarn: v1.22.21
Navigate to the UI directory and start the frontend:

```bash
cd FaceOff/ui
yarn install
yarn start
```
For any questions, please contact [email protected].