
Commit 4ebb525

Merge dev --> Jetpack 4.4 compatibility
2 parents: def27d7 + 757debd

6 files changed: +55 −19 lines

README.md (+35 −11)
@@ -17,21 +17,44 @@ If you want to use a USB camera instead of Raspi Camera set the boolean _isCSICa
 
 
 ## Dependencies
-cuda 10.0 + cudnn 7.5 <br> TensorRT 5.1.x <br> OpenCV 3.x <br>
+cuda 10.2 + cudnn 8.0 <br> TensorRT 7.x <br> OpenCV 4.1.1 <br>
 TensorFlow r1.14 (for Python to convert model from .pb to .uff)
 
+## Update
+This master branch now uses Jetpack 4.4, so the dependencies have changed slightly and TensorFlow is no longer preinstalled. There is therefore an extra installation step that takes a few minutes more than before. <br>
+If you would like to use an older version of Jetpack, the tag jp4.2.2 points to the older implementation.
+
 ## Installation
 #### 1. Install Cuda, CudNN, TensorRT, and TensorFlow for Python
 You can check [NVIDIA website](https://developer.nvidia.com/) for help.
 Installation procedures are very well documented.<br><br>**If you are
-using NVIDIA Jetson (Nano, TX1/2, Xavier) with Jetpack 4.2.2**, all needed packages
+using NVIDIA Jetson (Nano, TX1/2, Xavier) with Jetpack 4.4**, most needed packages
 should be installed if the Jetson was correctly flashed using SDK
-Manager, you will only need to install cmake and openblas:
+Manager or the SD card image; you will only need to install cmake, openblas and tensorflow:
 ```bash
-sudo apt-get install cmake libopenblas-dev
+sudo apt install cmake libopenblas-dev
+```
+#### 2. Install TensorFlow
+The following steps install TensorFlow for Jetpack 4.4, copied from the official [NVIDIA documentation](https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html). This assumes you don't need to install it in a virtual environment; if you do, please refer to the documentation linked above. If you are not installing this on a Jetson, please refer to the official TensorFlow documentation.
+
+```bash
+# Install system packages required by TensorFlow:
+sudo apt update
+sudo apt install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
+
+# Install and upgrade pip3
+sudo apt install python3-pip
+sudo pip3 install -U pip testresources setuptools
+
+# Install the Python package dependencies
+sudo pip3 install -U numpy==1.16.1 future==0.18.2 mock==3.0.5 h5py==2.10.0 keras_preprocessing==1.1.1 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11
+
+# Install TensorFlow using the pip3 command. This installs the latest version of TensorFlow compatible with JetPack 4.4.
+sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'
 ```
 
-#### 2. Prune and freeze TensorFlow model or get frozen model in the link
+
+#### 3. Prune and freeze TensorFlow model or get frozen model in the link
 The inputs to the original model are an input tensor consisting of a
 single or multiple faces and a phase train tensor telling all batch
 normalisation layers that model is not in train mode. Batch
@@ -43,7 +66,7 @@ to model where the phase train tensor has already been removed from the
 saved model
 [github.com/apollo-time/facenet/raw/master/model/resnet/facenet.pb](https://github.com/apollo-time/facenet/raw/master/model/resnet/facenet.pb)
 
-#### 3. Convert frozen protobuf (.pb) model to UFF
+#### 4. Convert frozen protobuf (.pb) model to UFF
 Use the convert-to-uff tool which is installed with tensorflow
 installation to convert the *.pb model to *.uff. The script will replace
 unsupported layers with custom layers implemented by
@@ -55,10 +78,7 @@ TRT_L2NORM_HELPER plugin.
 cd path/to/project
 python3 step01_pb_to_uff.py
 ```
-You should now have a facenet.uff (or similar) file which will be used
-as the input model to TensorRT. <br>
-The path to model is hardcoded, so please put the __facenet.uff__ in the
-[facenetModels](./facenetModels) directory.
+You should now have a facenet.uff file in the [facenetModels folder](./facenetModels) which will be used as the input model to TensorRT. <br>
 
 
 #### 4. Get mtCNN models
@@ -130,7 +150,11 @@ Performance on **NVIDIA Jetson AGX Xavier**:
 Please respect all licenses of OpenCV and the data the machine learning models (mtCNN and Google FaceNet)
 were trained on.
 
-
+## FAQ
+Sometimes the camera driver doesn't close properly, which means you will have to restart the __nvargus-daemon__:
+```bash
+sudo systemctl restart nvargus-daemon
+```
 
 ## Info
 Niclas Wesemann <br>

src/faceNet.cpp (+6 −6)
@@ -43,7 +43,7 @@ void FaceNetClassifier::createOrLoadEngine() {
         file.read(trtModelStream_.data(), size);
         file.close();
     }
-    std::cout << "size" << size;
+    // std::cout << "size" << size;
     IRuntime* runtime = createInferRuntime(m_gLogger);
     assert(runtime != nullptr);
     m_engine = runtime->deserializeCudaEngine(trtModelStream_.data(), size, nullptr);
@@ -125,7 +125,7 @@ void FaceNetClassifier::preprocessFaces() {
     // preprocess according to facenet training and flatten for input to runtime engine
     for (int i = 0; i < m_croppedFaces.size(); i++) {
        // mean and std
-        cv::cvtColor(m_croppedFaces[i].faceMat, m_croppedFaces[i].faceMat, CV_RGB2BGR);
+        cv::cvtColor(m_croppedFaces[i].faceMat, m_croppedFaces[i].faceMat, cv::COLOR_RGB2BGR);
         cv::Mat temp = m_croppedFaces[i].faceMat.reshape(1, m_croppedFaces[i].faceMat.rows * 3);
         cv::Mat mean3;
         cv::Mat stddev3;
@@ -256,10 +256,10 @@ void FaceNetClassifier::resetVariables() {
 }
 
 FaceNetClassifier::~FaceNetClassifier() {
-    // this leads to segfault if engine or context could not be created during class instantiation
-    this->m_engine->destroy();
-    this->m_context->destroy();
-    std::cout << "FaceNet was destructed" << std::endl;
+    // this leads to segfault
+    // this->m_engine->destroy();
+    // this->m_context->destroy();
+    // std::cout << "FaceNet was destructed" << std::endl;
 }
 
 

src/main.cpp (+3 −1)
@@ -71,7 +71,8 @@ int main()
     while (true) {
         videoStreamer.getFrame(frame);
         if (frame.empty()) {
-            std::cout << "Empty frame! Exiting..." << std::endl;
+            std::cout << "Empty frame! Exiting...\n Try restarting nvargus-daemon by "
+                         "doing: sudo systemctl restart nvargus-daemon" << std::endl;
             break;
         }
         auto startMTCNN = chrono::steady_clock::now();
@@ -111,6 +112,7 @@ int main()
     }
     auto globalTimeEnd = chrono::steady_clock::now();
     cv::destroyAllWindows();
+    videoStreamer.release();
     auto milliseconds = chrono::duration_cast<chrono::milliseconds>(globalTimeEnd-globalTimeStart).count();
     double seconds = double(milliseconds)/1000.;
     double fps = nbFrames/seconds;

src/videoStreamer.cpp (+8 −0)
@@ -65,3 +65,11 @@ std::string VideoStreamer::gstreamer_pipeline (int capture_width, int capture_he
     "/1 ! nvvidconv flip-method=" + std::to_string(flip_method) + " ! video/x-raw, width=(int)" + std::to_string(display_width) + ", height=(int)" +
     std::to_string(display_height) + ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
 }
+
+void VideoStreamer::release() {
+    m_capture->release();
+}
+
+VideoStreamer::~VideoStreamer() {
+
+}

src/videoStreamer.h (+2 −0)
@@ -18,11 +18,13 @@ class VideoStreamer {
 public:
     VideoStreamer(int nmbrDevice, int videoWidth, int videoHeight, int frameRate, bool isCSICam);
     VideoStreamer(std::string filename, int videoWidth, int videoHeight);
+    ~VideoStreamer();
     void setResolutionDevice(int width, int height);
     void setResoltionFile(int width, int height);
     void assertResolution();
     void getFrame(cv::Mat &frame);
     std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int frameRate, int flip_method=0);
+    void release();
 };
 
 #endif //VIDEO_INPUT_WRAPPER_VIDEOSTREAMER_H

step01_pb_to_uff.py (+1 −1)
@@ -8,7 +8,7 @@
 output_nodes = ["embeddings"]
 input_node = "input"
 pb_file = "./facenet.pb"
-uff_file = "./facenet.uff"
+uff_file = "./facenetModels/facenet.uff"
 # END USER DEFINED VALUES
 
 # read tensorflow graph
