We wanted to make it easy for 70 million deaf people across the world to be independent …
## Setup

* Use the command prompt to set up the environment using the `install_packages.txt` and `install_packages_gpu.txt` files.

`python -m pip install -r install_packages.txt`

This will install all the libraries required for the project.
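
For example, a complete setup from the command line might look like the sketch below; the virtual environment is optional and not part of the original instructions:

```bash
python -m venv venv                                  # optional: isolate dependencies
source venv/bin/activate                             # on Windows: venv\Scripts\activate
python -m pip install -r install_packages.txt        # CPU setup
# python -m pip install -r install_packages_gpu.txt  # GPU setup instead
```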
## Process

* Run `set_hand_histogram.py` to set the hand histogram for creating gestures (sketched after this list).
* Once you get a good histogram, save it in the code folder, or use the histogram we created, which can be found [here](https://github.com/harshbg/Sign-Language-Interpreter-using-Deep-Learning/blob/master/Code/hist).
* Add gestures and label them by running `create_gestures.py`, which captures them through the OpenCV webcam feed and stores them in a database. Alternatively, you can use the gestures created by us [here](https://github.com/harshbg/Sign-Language-Interpreter-using-Deep-Learning/tree/master/Code).
* Add variation to the captured gestures by flipping all the images with `Rotate_images.py` (sketched below).
* Run `load_images.py` to split all the captured gestures into training, validation, and test sets (sketched below).
* To view all the gestures, run `display_gestures.py`.
* Train the model using Keras by running `cnn_model_train.py` (sketched below).
* Run `final.py`. This will open the gesture recognition window, which uses your webcam to interpret the trained American Sign Language gestures (sketched below).
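
Below are a few minimal, illustrative sketches of the steps above. The scripts named in the list contain the project's actual code; any file names, sizes, or directory layouts introduced here are assumptions.

First, setting the hand histogram (the `set_hand_histogram.py` step). This sketch assumes the hand is placed inside a fixed box in the webcam frame, and it pickles the result to a file named `hist`, matching the histogram file linked above:

```python
import pickle
import cv2

cam = cv2.VideoCapture(0)
ret, frame = cam.read()
if ret:
    roi = frame[100:300, 100:300]                   # hand placed inside this box
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)      # histogram over hue and saturation
    hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    with open("hist", "wb") as f:                   # saved for later back-projection
        pickle.dump(hist, f)
cam.release()
```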
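
Next, the flipping step (`Rotate_images.py`). The `gestures/<label>/` directory layout and output naming are assumptions:

```python
import os
import cv2

for label in os.listdir("gestures"):
    folder = os.path.join("gestures", label)
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue                                 # skip non-image files
        flipped = cv2.flip(img, 1)                   # 1 = horizontal (mirror) flip
        cv2.imwrite(os.path.join(folder, "flipped_" + name), flipped)
```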
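
The split performed by `load_images.py` could look like this; the 80/10/10 ratio is an assumption, and `images` and `labels` are expected to be NumPy arrays:

```python
import numpy as np

def split_dataset(images, labels, seed=0):
    idx = np.random.default_rng(seed).permutation(len(images))
    images, labels = images[idx], labels[idx]        # shuffle both arrays in unison
    n_train = int(0.8 * len(images))                 # 80% training
    n_val = int(0.9 * len(images))                   # next 10% validation, rest test
    return ((images[:n_train], labels[:n_train]),
            (images[n_train:n_val], labels[n_train:n_val]),
            (images[n_val:], labels[n_val:]))
```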
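
A small Keras CNN in the spirit of `cnn_model_train.py`; the 50x50 grayscale input, layer sizes, and `num_classes` are assumptions:

```python
from tensorflow.keras import layers, models

def build_model(num_classes, input_shape=(50, 50, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),             # 50x50 grayscale gesture images
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                         # guard against overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```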
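
Finally, the kind of recognition loop `final.py` runs: segment the hand by back-projecting the saved histogram, then feed the thresholded region to the trained model. The model file name `cnn_model_keras2.h5`, the 50x50 input, and the raw class-index display are all hypothetical:

```python
import pickle
import cv2
import numpy as np
from tensorflow.keras.models import load_model

with open("hist", "rb") as f:
    hist = pickle.load(f)                            # histogram saved earlier
model = load_model("cnn_model_keras2.h5")            # hypothetical model file name

cam = cv2.VideoCapture(0)
while True:
    ret, frame = cam.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the stored histogram to segment skin-colored pixels
    mask = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, thresh = cv2.threshold(mask, 150, 255, cv2.THRESH_BINARY)
    roi = cv2.resize(thresh, (50, 50)).reshape(1, 50, 50, 1) / 255.0
    pred = int(np.argmax(model.predict(roi, verbose=0)))
    # Mapping the class index back to a letter is omitted here
    cv2.putText(frame, str(pred), (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("Gesture recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cam.release()
cv2.destroyAllWindows()
```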
## Code Examples

…

If you loved what you read here and feel like we can collaborate to produce some exciting stuff, or you just want to shoot a question, please feel free to connect with me on <a href="[email protected]" target="_blank">email</a>, <a href="http://bit.ly/2uOIUeo" target="_blank">LinkedIn</a>, or …