People with special needs face a variety of challenges and barriers that isolate them from their surroundings. Several assistive technologies have been developed to reduce many of these barriers and to simplify communication between people with special needs and their environment. However, few frameworks support the Arabic region, either because of a lack of resources or because of the complexity of the Arabic language. The main goal of this work is to present a mobile-based framework that helps Arabic deaf people communicate 'on the go' with virtually anyone, without the need for specific devices or support from other people.
The framework utilizes the power of cloud computing for the complex processing of Arabic text and video. The video processing produces Arabic text showing the corresponding Standard Arabic on the deaf person's mobile handset.
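The mobile client in this design only captures a clip and hands it to the cloud for processing. A minimal sketch of that client step is below; the endpoint URL, payload fields, and function names are illustrative assumptions, not the framework's actual API.

```python
import base64
import json

# Placeholder endpoint for the cloud-processing service (an assumption,
# not the framework's real URL).
UPLOAD_ENDPOINT = "https://example.com/api/v1/translate"

def build_upload_payload(video_bytes: bytes, user_id: str, lang: str = "ar") -> str:
    """Encode a captured video clip as base64 and wrap it in a JSON body
    ready to POST to the cloud service, which would return the Standard
    Arabic text for display on the handset."""
    payload = {
        "user_id": user_id,
        "target_language": lang,  # Standard Arabic text output
        "video": base64.b64encode(video_bytes).decode("ascii"),
    }
    return json.dumps(payload)

# Example: a dummy 4-byte "clip" standing in for real camera data.
body = build_upload_payload(b"\x00\x01\x02\x03", user_id="deaf-user-1")
```

Base64 keeps the binary video safe inside a JSON body; a production client would stream the clip instead of buffering it whole.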
$ python3 simple_test.py
- NOTE: Download the facial landmarks model and put it in the landmarks folder.
$ python3 ASL_detection_landmark.py
$ python3 hand_detection_tracking.py
- Description of this data
- A new dataset of 54,049 images of ArSL alphabets performed by more than 40 people, covering 32 standard Arabic signs and alphabets. The number of images per class varies from one class to another. A sample image of all Arabic Sign Language signs is also attached. The CSV file maps each image file name to the label of the corresponding Arabic Sign Language sign.
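Reading that CSV into a file-name-to-label mapping is a one-liner with the standard library. A small sketch follows; the column names (`File_Name`, `Label`) are assumptions about the real file's header, so adjust them to match the actual CSV.

```python
import csv
import io

# Tiny inline stand-in for the real label CSV (column names assumed).
SAMPLE_CSV = """File_Name,Label
alef_001.jpg,alef
beh_004.jpg,beh
seen_010.jpg,seen
"""

def load_labels(csv_text: str) -> dict:
    """Map each image file name to its Arabic sign label."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["File_Name"]: row["Label"] for row in reader}

labels = load_labels(SAMPLE_CSV)
```

With the mapping in hand, each image can be paired with its class at training time without relying on the directory layout.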
Note: Google Colab
- Description of this data
Build simple Machine Learning model with SignsWorld Atlas.
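A minimal sketch of the "simple Machine Learning model" step, using scikit-learn. Synthetic class-separated vectors stand in for the actual SignsWorld Atlas images here (flattened grayscale vectors and the k-NN choice are assumptions, not the repo's exact pipeline).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for SignsWorld Atlas: 200 flattened 8x8 "images",
# 4 sign classes, with class-dependent means so the task is learnable.
rng = np.random.default_rng(0)
n_per_class, n_classes, dim = 50, 4, 64
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Hold out a stratified test split, then fit a k-nearest-neighbours model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Replacing the synthetic arrays with flattened SignsWorld Atlas images (and the labels from the CSV) gives a working baseline before moving to deep models.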
Build simple Deep Learning model with SignsWorld Atlas.
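For the "simple Deep Learning model" step, the sketch below trains a one-hidden-layer network with plain NumPy gradient descent on synthetic data. This is only an illustration of the idea; the actual notebook presumably uses a proper deep-learning framework and the real SignsWorld Atlas images.

```python
import numpy as np

# Synthetic task: the label is the argmax of the first three features,
# standing in for image classes. All sizes are illustrative assumptions.
rng = np.random.default_rng(1)
n, dim, hidden, classes = 300, 16, 32, 3
X = rng.normal(size=(n, dim))
y = np.argmax(X[:, :3], axis=1)
Y = np.eye(classes)[y]                       # one-hot targets

W1 = rng.normal(scale=0.1, size=(dim, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, classes)); b2 = np.zeros(classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 1.0
for _ in range(500):
    h = np.maximum(0, X @ W1 + b1)           # ReLU hidden layer
    p = softmax(h @ W2 + b2)                 # class probabilities
    # Backpropagate the mean cross-entropy loss.
    g2 = (p - Y) / n
    gW2, gb2 = h.T @ g2, g2.sum(0)
    g1 = (g2 @ W2.T) * (h > 0)
    gW1, gb1 = X.T @ g1, g1.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.argmax(softmax(np.maximum(0, X @ W1 + b1) @ W2 + b2), axis=1)
train_accuracy = (pred == y).mean()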
- Sign language recognition using scikit-learn: an introduction to sign language recognition with ML and how it works.
- Sign Language Recognition Datasets.
- ArASL.
- SignsWorld Atlas benchmark
- Hand detection
- Hand tracking
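The simplest hand-detection idea the topics above point at can be sketched as skin-color thresholding followed by a bounding box. Real pipelines (such as the `hand_detection_tracking.py` script) typically work in HSV/YCrCb color spaces with contour filtering; the RGB thresholds below are common rule-of-thumb values used purely for illustration.

```python
import numpy as np

def skin_mask(img: np.ndarray) -> np.ndarray:
    """Boolean mask of roughly skin-colored pixels in an RGB uint8 image
    (thresholds are illustrative, not tuned for any dataset)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

def bounding_box(mask: np.ndarray):
    """Return (top, left, bottom, right) of the True region, or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

# Synthetic 100x100 frame: dark background with a skin-toned patch
# standing in for a hand.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:60, 40:70] = (200, 140, 110)        # approximate skin tone
box = bounding_box(skin_mask(frame))
```

Tracking then amounts to re-detecting (or following) that box frame to frame; color thresholding is fragile under lighting changes, which is why learned detectors are preferred in practice.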