This project implements a machine learning model designed to detect and classify misogynistic, abusive, and harmful comments directed at women on social media platforms. The system flags inappropriate content in real time, enabling further moderation actions. Built with traditional machine learning techniques, the model focuses on accurate classification to support safer online environments.
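To give a flavour of how a traditional ML text classifier works, here is a minimal, self-contained sketch of a bag-of-words Naive Bayes classifier. This is an illustration only: the class name, the toy training data, and the choice of Naive Bayes are assumptions for the example, not the repository's actual features or model.

```python
import math
from collections import Counter, defaultdict


class NaiveBayesText:
    """Toy bag-of-words Naive Bayes classifier (illustration only)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                      # Laplace smoothing strength
        self.word_counts = defaultdict(Counter) # per-class word frequencies
        self.class_counts = Counter()           # documents seen per class
        self.vocab = set()

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            self.class_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior for the class...
            score = math.log(self.class_counts[label] / total_docs)
            # ...plus smoothed log likelihood of each word.
            denom = sum(self.word_counts[label].values()) + self.alpha * len(self.vocab)
            for word in text.lower().split():
                num = self.word_counts[label][word] + self.alpha
                score += math.log(num / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

In practice you would train it on labelled comments and query it with new text, e.g. `clf.fit(comments, labels)` followed by `clf.predict("some new comment")`. The real project likely uses richer features (such as TF-IDF) and a tuned model, but the fit/predict shape is the same.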
To get this project up and running, you’ll need Python installed on your system. We’ll also be using a virtual environment to keep all the dependencies organized (trust me, it’s way cleaner that way!). Just follow the steps below to get everything set up and ready for action.
Follow these steps to get the project running locally on your machine:
First, clone the project to your local machine with this command:

```shell
git clone https://github.com/your-username/your-repo.git
```

Next, head into the project directory:
```shell
cd your-repo
```

Create a fresh virtual environment to keep your project dependencies clean and isolated:
```shell
python -m venv venv
```

Now, activate the virtual environment to make sure all the dependencies get installed in the right place:
- On Windows:

  ```shell
  venv\Scripts\activate
  ```

- On macOS/Linux:

  ```shell
  source venv/bin/activate
  ```
With your virtual environment up and running, install all the necessary dependencies from the requirements.txt file:

```shell
pip install -r requirements.txt
```

Once everything is set up, you're ready to run the model:
- To train the model and understand how it works, run this command:

  ```shell
  python train_model.py
  ```

- To test the model and check out its performance, execute:

  ```shell
  python test_model.py
  ```
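Checking performance usually means more than raw accuracy: for abuse detection, precision and recall on the abusive class matter most. As a hypothetical illustration (this function is not taken from the repository's test_model.py), those metrics can be computed like this:

```python
def precision_recall_f1(y_true, y_pred, positive="abusive"):
    """Compute precision, recall, and F1 for one positive class.

    Illustration only; the repository's own evaluation code may differ.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many flags were correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # how much abuse was caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

High recall keeps harmful comments from slipping through, while high precision avoids flagging harmless posts, so reporting both gives a fuller picture than accuracy alone.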
Muslimah Sarumi | University of Bolton | 2025