Project Name
Accessible Sign Language Interpreter
Project Description
Millions of people with hearing impairments face daily communication barriers, especially in real-time conversations, education, and public services. Existing solutions like human interpreters are costly and not always available, while most AI tools are not optimized for accessibility.
This project aims to build an AI-powered sign language interpreter that provides real-time interpretation between sign language and text/audio. The solution will run on a web chat/video platform, with future enhancements such as an AR caption overlay for live calls, keeping it both inclusive and affordable.
Project Resources
- [Sign Language Datasets (e.g., RWTH-PHOENIX-Weather, ASLLVD)](https://www.phoenixs2.sign-lang.uni-hamburg.de/phoenix/)
- [TensorFlow.js / PyTorch for gesture recognition](https://www.tensorflow.org/js) (see the recognition sketch after this list)
- [Accessible Web Guidelines (W3C)](https://www.w3.org/WAI/)
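
As a starting point for the gesture-recognition resource above, here is a minimal TensorFlow.js sketch. It assumes the `@tensorflow-models/hand-pose-detection` package with the MediaPipe Hands model; the function name and the idea of classifying gestures from keypoints are illustrative assumptions, not a fixed design.

```ts
// Minimal sketch: extract hand keypoints from a video element with TensorFlow.js.
// Assumes @tensorflow/tfjs and @tensorflow-models/hand-pose-detection are installed;
// gesture classification on top of the keypoints is left as a later step.
import '@tensorflow/tfjs';
import * as handPoseDetection from '@tensorflow-models/hand-pose-detection';

async function detectHands(video: HTMLVideoElement) {
  // In a real app the detector would be created once and reused per frame.
  const detector = await handPoseDetection.createDetector(
    handPoseDetection.SupportedModels.MediaPipeHands,
    { runtime: 'tfjs' } // run fully in the browser, no server round-trip
  );

  // Each detected hand carries 21 keypoints (x, y, optional name).
  const hands = await detector.estimateHands(video);
  return hands.map(h => h.keypoints);
}
```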
Required Tech Skills
- Machine Learning (Computer Vision, NLP)
- React / Next.js for frontend
- Node.js / Python for backend
- WebRTC for real-time video (a connection-setup sketch follows this list)
- AR frameworks (optional for MVP 2.0)
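
Since the MVP relies on browser video, the sketch below shows how a WebRTC peer connection could share the camera stream. It uses only standard browser APIs; the public STUN server and the `sendToSignalingServer` callback are assumptions standing in for whatever signaling channel the project chooses.

```ts
// Minimal sketch: capture the camera and offer it over WebRTC.
// How the offer/answer reaches the other peer is project-specific and only stubbed here.
async function startCall(
  sendToSignalingServer: (offer: RTCSessionDescriptionInit) => void
) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // assumption: public STUN only
  });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignalingServer(offer); // exchange SDP/ICE via the app's own channel

  return pc;
}
```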
📧 Project Owner Email
Track Type
- Paid
- Unpaid
Expected Deliverables
- MVP: Web-based interpreter that translates sign language gestures into text/audio in real time.
- API for integration into public service platforms (schools, clinics, customer support); a hypothetical endpoint sketch follows this list.
- Documentation and accessibility compliance checklist.
- Stretch Goal: Web-based AR captions overlay for live video calls.
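
To make the API deliverable concrete, here is one possible shape for an interpretation endpoint, sketched with Node.js/Express. The route name, payload fields, and `translateKeypoints` helper are hypothetical placeholders for whatever the final model interface looks like.

```ts
// Hypothetical sketch of the integration API: clients post recognized hand keypoint
// frames and receive the interpreted text. Express is assumed for the Node.js backend.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/api/interpret', (req, res) => {
  const { keypointFrames } = req.body as { keypointFrames: number[][][] };
  const text = translateKeypoints(keypointFrames); // placeholder for the trained model
  res.json({ text });
});

// Stub: the real implementation would run the gesture-to-text model.
function translateKeypoints(frames: number[][][]): string {
  return frames && frames.length > 0 ? '[interpreted text]' : '';
}

app.listen(3000, () => console.log('interpreter API listening on :3000'));
```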