I pulled the latest CUDA image from the Autoware official Docker image repository (tag: universe-devel-cuda-20240921-amd64), cloned the latest Autoware code (20240923), and used the following command to start the container and mount the code into it:
docker run -it --name 20240923_autoware --gpus all --net=host --privileged -v $HOME/autoware_ws/autoware_20240923:/autoware_20240923 -w /autoware_20240923 -e DISPLAY -e QT_X11_NO_MITSHM=1 -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all --device=/dev/dri:/dev/dri -v /tmp/.X11-unix:/tmp/.X11-unix e5a5bf46ce7a /bin/bash
The environment information inside Docker is as follows:
$nvidia-smi
$nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:17:15_PST_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0
$dpkg -l | grep cudnn
hi libcudnn8 8.9.5.29-1+cuda12.2 amd64 cuDNN runtime libraries
ii ros-humble-cudnn-cmake-module 0.0.1-3jammy.20240728.201041 amd64 Exports a CMake module to find cuDNN.
$dpkg -l | grep nvinfer
hi libnvinfer-plugin8 8.6.1.6-1+cuda12.0 amd64 TensorRT plugin libraries
hi libnvinfer8
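Note that these are the runtime packages; compiling against TensorRT/cuDNN also needs the header (-dev) packages. A quick way to check for them (a sketch, assuming the usual TensorRT 8.x / cuDNN 8.x apt package names such as libnvinfer-dev and libcudnn8-dev):
$dpkg -l | grep -E 'libnvinfer-dev|libcudnn8-dev'   # empty output means the header packages are not installed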
When I build, the packages related to TensorRT and cuDNN return the following:
$colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --continue
Starting >>> tensorrt_common
--- stderr: tensorrt_common
CMake Warning at CMakeLists.txt:19 (message):
cuda, cudnn, tensorrt libraries are not found
---
Finished <<< tensorrt_common [2.67s]
When I comment out the following part of CMakeLists.txt and build tensorrt_common again, I get the following error:
if(NOT (CUDAToolkit_FOUND AND CUDNN_FOUND AND TENSORRT_FOUND))
message(WARNING "cuda, cudnn, tensorrt libraries are not found")
return()
endif()
$colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to tensorrt_common
Starting >>> tensorrt_common
--- stderr: tensorrt_common
In file included from /autoware_test2409/src/universe/autoware.universe/common/tensorrt_common/src/simple_profiler.cpp:15:
/autoware_test2409/src/universe/autoware.universe/common/tensorrt_common/include/tensorrt_common/simple_profiler.hpp:18:10: fatal error: NvInfer.h: No such file or directory
18 | #include <NvInfer.h>
| ^~~~~~~~~~~
compilation terminated.
gmake[2]: *** [CMakeFiles/tensorrt_common.dir/build.make:90: CMakeFiles/tensorrt_common.dir/src/simple_profiler.cpp.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
In file included from /autoware_test2409/src/universe/autoware.universe/common/tensorrt_common/src/tensorrt_common.cpp:15:
/autoware_test2409/src/universe/autoware.universe/common/tensorrt_common/include/tensorrt_common/tensorrt_common.hpp:22:10: fatal error: NvInfer.h: No such file or directory
22 | #include <NvInfer.h>
| ^~~~~~~~~~~
compilation terminated.
gmake[2]: *** [CMakeFiles/tensorrt_common.dir/build.make:76: CMakeFiles/tensorrt_common.dir/src/tensorrt_common.cpp.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:137: CMakeFiles/tensorrt_common.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2
---
Failed <<< tensorrt_common [3.00s, exited with code 2]
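The fatal error suggests NvInfer.h is genuinely absent from the image, which would also explain why the original CMake guard printed the warning and returned early. A way to confirm this (a sketch, assuming an apt-based TensorRT install):
$dpkg -S NvInfer.h                                     # should report the package that owns the header, if any
$find /usr/include /usr/local -name NvInfer.h 2>/dev/null   # no output means the header is missing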
Expected behavior
All packages can be compiled successfully.
Actual behavior
All packages related to TensorRT and cuDNN fail to compile.
Steps to reproduce
none
Versions
OS: amd64
ROS 2: Humble
Autoware: 20240923
Possible causes
No response
Additional context
No response
In my previous experience, an older version of the image could not successfully compile a newer version of the code.
AFAIK there has been no core dependency change yet, so before colcon build just run rosdep install --from-paths src --ignore-src -y and it should be OK.
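i.e., inside the container, something like the following (a sketch; assumes rosdep has already been initialized and you are at the workspace root):
$source /opt/ros/humble/setup.bash
$rosdep update
$rosdep install --from-paths src --ignore-src -y
$colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release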