Installation guide for all software on the Adlink NEON smart-camera

Adlink password (for sudo): adlink
This installation guide is tested on: Jetpack 4.4 (R32.4.3), CUDA 10.2, Python 3.6, TensorRT 7.1.3.1

There are several libraries already installed on the Adlink NEON:
python2, python3, opencv, cuda, numpy

1) Check the versions of the installed packages (in the terminal):

  • cat /etc/nv_tegra_release (should print the version of Jetpack)
  • nvcc --version (should print the CUDA details)

2) Install Pytorch and Torchvision:
First, check which Pytorch version is compatible with the Jetpack/CUDA version. In this case it is Pytorch version 1.6 (the compatibility overview is maintained at this URL: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-7-0-now-available/72048)

Run the following commands in the terminal:

  • sudo apt install python3-pip
  • pip3 install wheel
  • wget https://nvidia.box.com/shared/static/9eptse6jyly1ggt9axbja2yrmj6pbarc.whl -O torch-1.6.0-cp36-cp36m-linux_aarch64.whl
  • sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
  • pip3 install Cython
  • pip3 install numpy torch-1.6.0-cp36-cp36m-linux_aarch64.whl
  • sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
  • git clone --branch v0.7.0 https://github.com/pytorch/vision torchvision
  • cd torchvision
  • export BUILD_VERSION=0.7.0
  • sudo python3 setup.py install (if this doesn't work: python3 setup.py install)
  • cd ../

3) Check if Pytorch and Torchvision are correctly installed:

  • python3
  • import torch
  • import torchvision
  • print(torch.version.cuda) (should print: 10.2)
  • print('CUDA available: ' + str(torch.cuda.is_available())) (should print: CUDA available: True)
  • torch.cuda.get_device_name(0) (should print: NVIDIA Tegra X2)
  • quit()
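
While still inside the same python3 session (i.e. before quit()), you can also verify that the Torchvision build from step 2 is picked up:

  • print(torchvision.__version__) (should print: 0.7.0)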

4) Install the software packages needed for YOLOv5:

  • pip3 install matplotlib
  • pip3 install pycocotools
  • pip3 install tqdm
  • pip3 install PyYAML==5.3.1
  • pip3 install pycuda
  • sudo apt-get install python3-tk
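
As a quick sanity check, you can confirm that these packages import correctly in a python3 session:

  • python3
  • import matplotlib, pycocotools, tqdm, yaml, pycuda
  • print(yaml.__version__) (should print: 5.3.1)
  • quit()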

5) Clone the optima/inference code:

6) Install the software packages needed for classification (EfficientNet-Pytorch):
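
If the classifier uses the standard EfficientNet-PyTorch package from PyPI (an assumption; the exact requirements may differ), it can be installed with:

  • pip3 install efficientnet_pytorch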

7) Install the SWIG software (needed for image acquisition):

  • download and extract swig-3.0.12: https://sourceforge.net/p/swig/news/2017/01/swig-3012-released/
  • cd swig-3.0.12
  • ./configure --prefix=/home/adlink/library/swigtool
  • sudo make
  • sudo make install
  • sudo gedit /etc/profile
  • copy-paste these two lines at the end of the file:
       export SWIG_PATH=/home/adlink/library/swigtool/bin
       export PATH=$SWIG_PATH:$PATH
  • source /etc/profile
  • sudo reboot
  • swig -version (should print: 3.0.12)

8) Install pypylon software (needed for image acquisition):
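
pypylon is Basler's official Python wrapper for the pylon SDK and is available from PyPI (on the Jetson you may need a pypylon version that provides an aarch64 wheel):

  • pip3 install pypylon

Afterwards, a minimal grab sketch like the one below can be used to verify that a connected Basler camera is reachable (the 5000 ms timeout is just an illustrative value):

    from pypylon import pylon

    # Connect to the first camera found by the pylon transport layer
    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()

    # Grab a single frame (5000 ms timeout) and print its dimensions
    camera.StartGrabbingMax(1)
    grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if grab.GrabSucceeded():
        print('Grabbed frame with shape:', grab.Array.shape)
    grab.Release()
    camera.Close()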

9) Install the GPS software:

  • pip3 install pyserial
  • pip3 install pynmea2
  • sudo usermod -a -G tty $USER
  • sudo usermod -a -G dialout $USER
  • sudo reboot
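
A minimal read-out sketch, assuming the GPS receiver shows up as a serial device (the device path /dev/ttyUSB0 and the 9600 baud rate are assumptions; adjust them to your receiver):

    import serial
    import pynmea2

    # Open the serial port of the GPS receiver
    with serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=1) as port:
        while True:
            line = port.readline().decode('ascii', errors='replace').strip()
            # GGA sentences contain the position fix
            if line.startswith(('$GPGGA', '$GNGGA')):
                msg = pynmea2.parse(line)
                print('lat: %s, lon: %s, alt: %s %s' %
                      (msg.latitude, msg.longitude, msg.altitude, msg.altitude_units))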

10) OPTIONAL: Install the TensorRT backend for ONNX (to significantly speed up the EfficientNet image classification):
The description below has been tested for TensorRT version 7.1.3.1. Be aware that this installation requires the presence of the SWIG software (step 7)!

  • dpkg -l | grep TensorRT (should print a big list with all TensorRT libraries, including versions)
  • sudo apt-get install python3-pip libprotoc-dev protobuf-compiler
  • pip3 install onnx==1.6.0
  • cd inference/onnx-tensorrt
  • sudo python3 setup.py install

If there is still an error, such as "unable to execute 'swig': No such file or directory", then you have to run the following commands:

  • swig -python -c++ -modern -builtin -o nv_onnx_parser_bindings_wrap.cpp nv_onnx_parser_bindings.i (should execute without any output or errors)
  • sudo python3 setup_2.py install
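
Once onnx-tensorrt is installed, a short script like the one below can be used to check that an ONNX model runs through the TensorRT backend (the model path and the 1x3x224x224 input shape are placeholders; adjust them to the exported EfficientNet model):

    import numpy as np
    import onnx
    import onnx_tensorrt.backend as backend

    # Load the exported ONNX model (placeholder path)
    model = onnx.load('/home/adlink/inference/weights/efficientnet.onnx')

    # Build a TensorRT engine for the model and run one dummy inference
    engine = backend.prepare(model, device='CUDA:0')
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    output = engine.run(dummy_input)[0]
    print('Output shape:', output.shape)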

11) OPTIONAL: Make desktop shortcuts. Save each of the entries below as a separate .desktop file on the desktop (e.g. detection.desktop) and mark it as executable:

[Desktop Entry]
Name=detection
Version=1
Icon=/home/adlink/inference/data/optima.jpg
Exec=python3 /home/adlink/inference/inference.py --cfg /home/adlink/inference/cfg/detection.yaml
Terminal=true
Type=Application

[Desktop Entry]
Name=classification
Version=1
Icon=/home/adlink/inference/data/optima.jpg
Exec=python3 /home/adlink/inference/inference.py --cfg /home/adlink/inference/cfg/classification.yaml
Terminal=true
Type=Application

[Desktop Entry]
Name=classification_patches
Version=1
Icon=/home/adlink/inference/data/optima.jpg
Exec=python3 /home/adlink/inference/inference.py --cfg /home/adlink/inference/cfg/classification_patches.yaml
Terminal=true
Type=Application

[Desktop Entry]
Name=keyboard
Version=1
Icon=/home/adlink/inference/data/keyboard.jpg
Exec=onboard
Terminal=false
Type=Application