# ASL Translator

This project is a real-time American Sign Language (ASL) translator that uses a webcam to detect and classify hand gestures into the corresponding letters of the alphabet. The system applies computer vision and machine learning to identify hand landmarks and make predictions with a trained model.
## Table of Contents

- Demo
- Features
- Installation
- Usage
- Project Structure
- Training the Model
- Real-Time Prediction
- Contributing
- License
## Demo

Click on the image to watch the demo video.
## Features

- **Real-Time Hand Gesture Recognition:** Uses a webcam to capture hand movements and predict the corresponding ASL letters.
- **Machine Learning Model:** Uses a Random Forest classifier trained on hand landmark data.
- **Simple and User-Friendly:** Easy to set up and use for educational or demonstration purposes.
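The classifier works on hand landmark coordinates rather than raw pixels. As an illustration, here is a minimal sketch of turning landmarks into a fixed-length feature vector, assuming MediaPipe-style hands (21 points with normalized x/y coordinates). The min-subtraction normalization shown is a common choice for making features translation-invariant, not necessarily the exact scheme this repo uses.

```python
# Sketch: turn 21 hand landmarks into a feature vector for the classifier.
# Assumes MediaPipe-style landmarks: 21 (x, y) pairs, each in [0, 1].
# Subtracting the minimum x and y is one common way to make the features
# translation-invariant; the repo's dataset_creator.py may differ.

def landmarks_to_features(landmarks):
    """Flatten 21 (x, y) landmark pairs into a 42-element feature vector,
    shifted so the smallest x and y become 0."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    min_x, min_y = min(xs), min(ys)
    features = []
    for x, y in landmarks:
        features.append(x - min_x)
        features.append(y - min_y)
    return features

# Example with dummy landmarks:
dummy = [(0.1 + 0.01 * i, 0.2 + 0.02 * i) for i in range(21)]
vec = landmarks_to_features(dummy)
print(len(vec))  # 42 features: 2 per landmark
```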
## Installation

1. **Clone the Repository:**

   ```bash
   git clone https://github.com/your-username/ASL-Translator.git
   cd ASL-Translator
   ```

2. **Install Dependencies:**

   Install the required packages using pip:

   ```bash
   pip install -r requirements.txt
   ```

3. **Additional Requirements:**

   - Ensure you have a working webcam.
   - Python 3.10 or higher is recommended.
## Usage

1. **Run the Real-Time ASL Translator:**

   Navigate to the project directory and run the `realtime_predictor.py` script:

   ```bash
   python src/realtime_predictor.py
   ```

2. **Instructions:**

   - Make sure your webcam is connected.
   - The program will start capturing video. Show a hand gesture in front of the camera.
   - The corresponding ASL letter will be displayed on the screen.
   - Press `E` to exit the program.
## Project Structure

```
.
├── data/                      # Image data for training
├── model/                     # Saved trained models
│   └── model.p                # Pre-trained model file
├── src/
│   ├── dataset_creator.py     # Creates and saves the dataset
│   ├── model_trainer.py       # Trains the model
│   └── realtime_predictor.py  # Real-time prediction
├── requirements.txt           # Project dependencies
└── README.md                  # Project documentation
```
## Training the Model

To train the model on your own dataset:

1. **Organize Data:**

   Place your hand gesture images in the `data` directory, organized into subdirectories where each subdirectory name corresponds to the class label (e.g., `0`, `1`, `2`, ..., `25`).

2. **Generate Dataset:**

   Run the `dataset_creator.py` script to create the dataset:

   ```bash
   python src/dataset_creator.py
   ```

3. **Train the Model:**

   Run the `model_trainer.py` script to train the model:

   ```bash
   python src/model_trainer.py
   ```

   The trained model will be saved in the `model` directory as `model.p`.
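The training step presumably fits scikit-learn's `RandomForestClassifier` on the landmark feature vectors and pickles the result as `model.p`. The snippet below is a minimal sketch of that flow, with synthetic data standing in for the real dataset (the feature size of 42 assumes 21 (x, y) landmarks); it is not the repo's actual `model_trainer.py`.

```python
# Sketch of the training flow: fit a Random Forest on landmark feature
# vectors and pickle the result. Synthetic data stands in for the real
# dataset; 42 features assumes 21 (x, y) hand landmarks.
import pickle
import random

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

random.seed(0)

# Fake dataset: 26 classes (letters A-Z), a few samples per class,
# each class clustered around a distinct base value so it is separable.
X, y = [], []
for label in range(26):
    base = label / 26.0
    for _ in range(10):
        X.append([base + random.uniform(-0.01, 0.01) for _ in range(42)])
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# The repo's layout suggests the model is persisted as model/model.p;
# here we just round-trip it through pickle in memory.
restored = pickle.loads(pickle.dumps(model))
```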
## Real-Time Prediction

To use the real-time prediction functionality:

1. Ensure your webcam is connected.

2. Run the `realtime_predictor.py` script:

   ```bash
   python src/realtime_predictor.py
   ```

3. The system will start capturing video, detecting hand gestures, and displaying predictions in real time.
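The letter shown on screen presumably comes from mapping the classifier's integer class (0-25, matching the `data` subdirectory names) back to a letter. A minimal sketch of that mapping, assuming the classes are ordered A-Z:

```python
# Sketch: map the classifier's integer class (0-25) to an ASL letter,
# assuming the data/ subdirectories 0..25 correspond to A..Z in order.
LABELS = {i: chr(ord("A") + i) for i in range(26)}

def class_to_letter(class_index):
    """Return the display letter for a predicted class index."""
    return LABELS[class_index]

print(class_to_letter(0))   # A
print(class_to_letter(25))  # Z
```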
## Contributing

Contributions are welcome! If you'd like to contribute, please fork the repository and create a pull request with your changes. For major changes, please open an issue first to discuss what you would like to change.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
