SignLens is a deep learning project that classifies American Sign Language (ASL) gestures. The project includes scripts for data preprocessing, model training, and prediction. It was built as the final project for the Data Science bootcamp at Le Wagon.
This repository hosts the Streamlit frontend for the project: SignLens Streamlit Interface
Here is a preview:
SignLens.mp4
SignLens first converts input videos into JSON files of MediaPipe landmarks. These JSON files are then sent to an API hosting the landmark classification model, which returns the predicted sign. The development of both the model and the API is detailed here.
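As a rough sketch of that first hop, the snippet below serializes MediaPipe-style landmark coordinates into a JSON payload that could be sent to the API. The field names (`frames`, `x`/`y`/`z`) and the endpoint shown in the comment are illustrative assumptions, not the actual SignLens schema.

```python
import json

def landmarks_to_payload(frames):
    """Serialize per-frame landmarks into a JSON payload.

    `frames` is a list of frames, each a list of (x, y, z) landmark
    tuples as produced by MediaPipe. The field names used here are
    assumptions for illustration, not the real SignLens schema.
    """
    return json.dumps({
        "frames": [
            [{"x": x, "y": y, "z": z} for (x, y, z) in frame]
            for frame in frames
        ]
    })

# Example: one frame with two landmarks.
payload = landmarks_to_payload([[(0.1, 0.2, 0.0), (0.5, 0.6, 0.1)]])

# The payload could then be POSTed to the prediction API, e.g.:
# requests.post("https://<api-host>/predict", data=payload,
#               headers={"Content-Type": "application/json"})
```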
If you have any questions, comments, or feedback, feel free to reach out to us!
- Email: [email protected]
- GitHub: benoitfrisque