Android Things TensorFlow image classifier sample
=====================================

The Android Things TensorFlow image classifier sample app demonstrates how to capture an
image by pushing a button, run TensorFlow on the device to infer the top three labels for
the captured image, and then convert those labels into speech using text-to-speech.
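
As a hint of what the text-to-speech step can look like, here is a minimal sketch using
Android's standard `TextToSpeech` API. The activity name and the hard-coded utterance are
illustrative only; in the sample, the spoken text would come from the classifier's labels.

```java
import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;

public class SpeakResultsActivity extends Activity {
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tts = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) {
                    // Illustrative result; real labels come from the classifier.
                    speakResult("This looks like a samoyed");
                }
            }
        });
    }

    private void speakResult(String utterance) {
        // QUEUE_ADD appends to the playback queue; the last argument is an utterance id.
        tts.speak(utterance, TextToSpeech.QUEUE_ADD, null, "classification");
    }
}
```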

This project is based on the [TensorFlow Android Camera Demo TF_Classify app](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/),
where the TensorFlow training was done using the Google Inception model; the trained data set
is used to run inference and generate classification labels via the TensorFlow Android
Inference APIs.

This simplified sample app does not require native code or the NDK; it links to TensorFlow
via a Gradle dependency on the TensorFlow Android Inference library in the form of
an .aar library, which is included in the project here.
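
For orientation, the following is a minimal sketch of an inference call through the
`TensorFlowInferenceInterface` class that the library exposes. The model path, node names,
and tensor sizes below are illustrative assumptions; the real values are defined by the
Inception model assets that the build downloads.

```java
import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class ImageClassifier {
    // File name, node names, and sizes are illustrative; they depend on how
    // the Inception graph used by this sample was exported.
    private static final String MODEL_FILE = "file:///android_asset/tensorflow_inception_graph.pb";
    private static final String INPUT_NODE = "input";
    private static final String OUTPUT_NODE = "output";
    private static final int INPUT_SIZE = 224;    // input image width/height
    private static final int NUM_CLASSES = 1008;  // size of the output label vector

    private final TensorFlowInferenceInterface inference;

    public ImageClassifier(AssetManager assets) {
        inference = new TensorFlowInferenceInterface(assets, MODEL_FILE);
    }

    /** Runs one forward pass and returns the raw per-class scores. */
    public float[] classify(float[] pixels) {
        // Feed the preprocessed RGB pixels as a 1 x H x W x 3 tensor.
        inference.feed(INPUT_NODE, pixels, 1, INPUT_SIZE, INPUT_SIZE, 3);
        inference.run(new String[] {OUTPUT_NODE});
        float[] scores = new float[NUM_CLASSES];
        inference.fetch(OUTPUT_NODE, scores);
        return scores;
    }
}
```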

Pre-requisites
--------------

- Android Things compatible board, e.g. Raspberry Pi 3
- Android Things compatible camera (for example, the Raspberry Pi 3 camera module)
- Android Studio 2.2+
- "Google Repository" from the Android SDK Manager
- The following individual components:
    - 1 push button
    - 2 resistors
    - 1 LED light
    - 1 breadboard
    - 1 speaker or earphone set
    - jumper wires
- Optional: display, e.g. a TV

Schematics
----------

![Schematics]()

Setup and Build
===============

To set up, follow these steps below.

- Set up camera module
- Set up the project in Android Studio
- Inception model assets will be downloaded during the build step
- Connect push button to GPIO pin BCM21 (see schematics; a GPIO sketch follows this list)
- Connect LED light to GPIO pin BCM6 (see schematics)
- Connect speaker to audio jack (see schematics)
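
As referenced in the list above, here is a minimal sketch of the button and LED wiring on
the software side, using the Android Things Peripheral I/O API. The class name and the
callback body are illustrative; only the pin names (BCM21, BCM6) come from the steps above.

```java
import java.io.IOException;
import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.GpioCallback;
import com.google.android.things.pio.PeripheralManagerService;

public class ButtonLedSetup {
    private Gpio led;
    private Gpio button;

    public void setup() throws IOException {
        PeripheralManagerService pio = new PeripheralManagerService();

        // LED on BCM6: output, initially HIGH to signal "ready to take a picture".
        led = pio.openGpio("BCM6");
        led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_HIGH);

        // Button on BCM21: input, fire on the falling edge of a press.
        button = pio.openGpio("BCM21");
        button.setDirection(Gpio.DIRECTION_IN);
        button.setEdgeTriggerType(Gpio.EDGE_FALLING);
        button.registerGpioCallback(new GpioCallback() {
            @Override
            public boolean onGpioEdge(Gpio gpio) {
                try {
                    led.setValue(false); // LED OFF while capture and inference run
                } catch (IOException e) {
                    // ignored in this sketch
                }
                // Illustrative hook: trigger camera capture here.
                return true; // keep listening for future presses
            }
        });
    }
}
```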

Running
=======

To run the `app` module on an Android Things board:

1. Build the project within Android Studio and deploy to device via adb
2. Reboot the device to get all permissions granted; see [Known issues in release notes](https://developer.android.com/things/preview/releases.html#known_issues)
3. When the LED is ON, push the button to take a picture, e.g. of dogs or cats
4. Check the result: the LED stays OFF during inference so that a subsequent image is not taken inadvertently
   - See the generated labels for your image in the adb logcat output, e.g. `Result: samoyed`
   - If a display is available, e.g. via HDMI, see the generated labels with their respective confidence levels
   - If a speaker or earphones are connected, listen to the speech output of the generated labels