An AI-powered system that can interpret Indian (or any other) Sign Language. Train the model on any sign language dataset and it will be able to interpret it.
- Two separate datasets are created.
- Each dataset is divided into two parts: a training set and a testing set.
- The training portion of each dataset is used to train several machine learning models, and each model's accuracy is reported.
- The webcam feed is passed to the Mediapipe API, and its output is used to decide whether the user is signing with one hand or two.
- The landmark data is sent to the corresponding trained kernel SVM model, which returns the predicted letters; these are then displayed on the screen.
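The train/test split and kernel SVM training described above can be sketched as follows. This is a minimal illustration using scikit-learn: the synthetic dataset stands in for the real landmark features (42 features here is an assumption, e.g. 21 Mediapipe landmarks × 2 coordinates), and only one of the two SVMs is shown.

```python
# Hedged sketch: synthetic data replaces the collected gesture dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in for one gesture dataset: 4 classes, 42 landmark features (assumed shape).
X, y = make_classification(n_samples=500, n_features=42, n_classes=4,
                           n_informative=10, random_state=0)

# Split into the training set and the testing set, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Kernel SVM (RBF kernel) trained on the training portion.
model = SVC(kernel="rbf")
model.fit(X_train, y_train)

# Accuracy on the held-out testing portion.
acc = accuracy_score(y_test, model.predict(X_test))
print(f"Kernel SVM test accuracy: {acc:.2f}")
```

In the real pipeline this would be repeated once per dataset, producing one trained SVM for one-hand gestures and one for two-hand gestures.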
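The routing step, deciding which trained SVM receives a frame, can be sketched like this. Mediapipe's Hands solution exposes `results.multi_hand_landmarks`, which is `None` when no hand is visible or a list with one entry per detected hand; the model arguments below are hypothetical stand-ins for the two trained kernel SVMs.

```python
def select_model(multi_hand_landmarks, one_hand_model, two_hand_model):
    """Route a frame to the SVM trained for the detected hand count.

    multi_hand_landmarks mirrors Mediapipe's return value: None when no
    hand is detected, otherwise a list of per-hand landmark sets.
    """
    if not multi_hand_landmarks:
        return None  # no hand in frame, nothing to predict
    if len(multi_hand_landmarks) == 1:
        return one_hand_model
    return two_hand_model

# Example with placeholder landmark lists and placeholder model names:
print(select_model(["hand_1"], "svm_one_hand", "svm_two_hand"))
print(select_model(["hand_1", "hand_2"], "svm_one_hand", "svm_two_hand"))
```

The selected model's `predict` would then be called on the flattened landmark coordinates for that frame.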
- Set up the app locally

```shell
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
```
- Collect the dataset

```shell
python3 scripts/collect_data.py
```
- Train the model with the dataset

```shell
python3 scripts/train.py
```
- Run real-time prediction

```shell
python3 scripts/predict.py
```