This project is a TypeScript application that uses the OpenAI Moderation API for image classification. It's designed to detect and handle NSFW (Not Safe For Work) images uploaded to an S3 bucket.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
- Node.js or Bun
- npm (or Bun)
- TypeScript
- Clone the repository
git clone https://github.com/GalaxyBotTeam/Image-Classifier.git
- Navigate to the project directory
cd Image-Classifier
- Install the dependencies
npm install
# OR using Bun
bun install
This project consists of two main components: a web server that accepts classification requests (described below), and `ImageClassificationHandler.ts`, which handles the image classification process. The handler fetches an image from the S3 bucket, submits it to the OpenAI Moderation API, and validates the classification results.
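For orientation, here is a minimal sketch of that flow, assuming the official `openai` and `@aws-sdk/client-s3` packages; the actual handler in `ImageClassificationHandler.ts` may be structured differently:

```typescript
// Sketch of the classification flow (illustration only, not the project's actual code):
// download the object from S3, send it to the OpenAI Moderation API, return the result.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import OpenAI from "openai";

const s3 = new S3Client({ /* endpoint and credentials from config.json */ });
const openai = new OpenAI({ /* apiKey from config.json */ });

async function classifyImage(bucket: string, key: string) {
  // Download the image and encode it as a data URL (MIME type assumed here)
  const object = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  const bytes = await object.Body!.transformToByteArray();
  const dataUrl = `data:image/png;base64,${Buffer.from(bytes).toString("base64")}`;

  // The omni-moderation models accept image inputs
  const moderation = await openai.moderations.create({
    model: "omni-moderation-latest",
    input: [{ type: "image_url", image_url: { url: dataUrl } }],
  });

  // results[0] contains flagged, categories and category_scores
  return moderation.results[0];
}
```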
The application needs a configuration file to run. Create a `config.json` file in the root directory of the project; its expected content is defined in the `config-example.json` file.

We use MinIO as an S3-compatible object storage server. You can use any S3-compatible object storage server by changing the `s3` configuration in the `config.json` file.
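For reference, the configuration can be thought of as having roughly the shape below; the field names are assumptions for illustration only, and the authoritative list lives in `config-example.json`:

```typescript
// Hypothetical shape of config.json (illustration only — check config-example.json
// for the real field names).
interface Config {
  openai: {
    apiKey: string;    // key used for the OpenAI Moderation API (assumed field name)
  };
  s3: {
    endpoint: string;  // MinIO or any S3-compatible endpoint (assumed)
    accessKey: string;
    secretKey: string;
    bucket: string;    // bucket the classifier reads images from (assumed)
  };
}
```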
Using npm:

Build the TypeScript files:
npm run build
Start the application:
npm start
Development mode (with auto-reloading):
npm run dev
Using Bun:

Build the TypeScript files:
bun run build
Start the application:
bun start
Development mode (with auto-reloading):
bun run dev
The application provides a web server that listens for POST requests on the `/api/v1/classifyImage` endpoint. To classify an image, send a POST request to that endpoint with the following payload (a sample request in TypeScript follows the payload):
{
  "key": "your-s3-key", // The key of the image in the S3 bucket. The bucket is defined in the config file
  "deleteOnClassification": false, // Boolean: should the image automatically be deleted if NSFW content has been detected?
  "metadata": {
    "userID": "userID", // User ID for the Discord log
    "guildID": "guildID" // Guild ID for the Discord log
  }
}
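As an example, a client could send the request like this (the base URL is an assumption; use the host and port of your deployment, and the IDs below are placeholders):

```typescript
// Example request to the classification endpoint (base URL and IDs are placeholders).
const response = await fetch("http://localhost:3000/api/v1/classifyImage", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    key: "uploads/example.png",      // hypothetical S3 key
    deleteOnClassification: true,    // delete the object if it gets flagged
    metadata: {
      userID: "123456789012345678",  // Discord user ID for the log (placeholder)
      guildID: "987654321098765432", // Discord guild ID for the log (placeholder)
    },
  }),
});

const result = await response.json();
console.log(result.flagged, result.deletedImage);
```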
The endpoint returns a JSON object with the classification results. The `flagged` field indicates the overall classification result, while the `categories` and `scores` fields contain the per-category flags and probabilities. In the example below, the image is classified as `sexual` with a probability of 0.9849382780471674. The `deletedImage` field is `true` if the image has been deleted from the S3 bucket.
{
  "flagged": true,
  "categories": {
    "harassment": false,
    "harassment/threatening": false,
    "sexual": true,
    "hate": false,
    "hate/threatening": false,
    "illicit": false,
    "illicit/violent": false,
    "self-harm/intent": false,
    "self-harm/instructions": false,
    "self-harm": false,
    "sexual/minors": false,
    "violence": false,
    "violence/graphic": false
  },
  "scores": {
    "harassment": 0,
    "harassment/threatening": 0,
    "sexual": 0.9849382780471674,
    "hate": 0,
    "hate/threatening": 0,
    "illicit": 0,
    "illicit/violent": 0,
    "self-harm/intent": 0.0002785803623326577,
    "self-harm/instructions": 0.0002318797868593433,
    "self-harm": 0.004690984132226771,
    "sexual/minors": 0,
    "violence": 0.12459626487974881,
    "violence/graphic": 0.002384378805524485
  },
  "deletedImage": true
}
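If you consume the endpoint from TypeScript, a response type along these lines matches the sample above (this interface is not part of the project, just a convenience for clients):

```typescript
// Response shape derived from the sample response above.
interface ClassificationResult {
  flagged: boolean;                     // true if any category was flagged
  categories: Record<string, boolean>;  // per-category flags, e.g. "sexual", "violence"
  scores: Record<string, number>;       // per-category probabilities between 0 and 1
  deletedImage: boolean;                // true if the image was removed from the S3 bucket
}
```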
This project is licensed under the MIT License - see the `LICENSE.md` file for details.