Nvidia Jetson FAQ
- Initialization on Jetson devices takes a couple of minutes.
- Your JetPack version should match the version used by our software. We support both r32 and r35 (Orin).
Docker
Docker is an open source platform for creating, deploying, and running containers. Docker is included in JetPack, so running containers on Jetson is easy and does not require any installation.
What are the dependencies? How do I fix "Unknown runtime specified nvidia"?
- Add Nvidia to the APT sources. On some distributions, Nvidia packages are not available by default. Edit /etc/apt/sources.list.d/nvidia-l4t-apt-source.list and uncomment the lines if necessary.
- You can now install the JetPack libraries.
- If you are on a storage-constrained device, install only the required packages and then restart.
sudo apt install nvidia-docker2 cuda-libraries-10-2
sudo apt install cuda-compiler-10-2 graphsurgeon-tf nvidia-container-csv-cuda
sudo apt-get install nvidia-tensorrt nvidia-container-csv-tensorrt nvidia-container-csv-cudnn
- Another option is to install the JetPack meta-package (this will use more storage) and then restart.
sudo apt update
sudo apt install nvidia-jetpack
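If the "Unknown runtime specified nvidia" error persists after installing the packages and restarting, check that the nvidia runtime is registered with Docker. On a typical nvidia-docker2 install it is declared in /etc/docker/daemon.json; the check below is a rough sketch.
# Inspect the Docker daemon configuration (installed by nvidia-docker2)
cat /etc/docker/daemon.json
# It should contain an entry roughly like:
#   "runtimes": { "nvidia": { "path": "nvidia-container-runtime", "runtimeArgs": [] } }
# Restart Docker after changing this file
sudo systemctl restart docker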
How do I run Docker as a non-root user?
See the Docker post-installation steps. Note that your user must also have access to the GPU. If you see a relevant error message when starting Docker, try dropping the --user ... flag from your docker command.
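A minimal sketch of those post-installation steps, assuming the standard docker group and, for GPU access on Jetson, the video group:
# Allow the current user to run Docker without sudo
sudo usermod -aG docker $USER
# On Jetson, GPU access is typically granted through the video group
sudo usermod -aG video $USER
# Log out and back in for the group changes to take effect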
How do I fix "OSError: libcublas.so.10: cannot open shared object file: No such file or directory"?
If you are getting the error OSError: libcublas.so.10: cannot open shared object file: No such file or directory, it means the docker run command is missing the parameter --runtime nvidia. It could also be a user permissions problem (see above).
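For reference, a sketch of a run command with the flag in place (the image tag is just the example used later in this FAQ; other required options such as volumes and license configuration are omitted):
docker run --runtime nvidia --rm -it platerecognizer/alpr-stream:jetson-1.41.0-r35.1.0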
JetPack
NVIDIA JetPack SDK is the most comprehensive solution for building AI applications. It includes the latest OS images for Jetson products, along with libraries and APIs, samples, developer tools, and documentation.
I see an error at the model optimization step. What do I do?
If you see errors such as createInferBuilder_INTERNAL symbol not found, it means your JetPack version does not match the one used by the Docker image. Your options are:
- Upgrade JetPack (see below).
- Use a version of the software that matches your JetPack version (see below).
Which JetPack version am I using?
Install and run jetson-stats.
sudo apt install python3-pip
pip3 install -U jetson-stats
jetson_release -v
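Alternatively, without installing anything, the L4T release (which maps to a JetPack version) is recorded in /etc/nv_tegra_release:
cat /etc/nv_tegra_release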
How can I optimize RAM and storage usage?
To free up RAM:
- Boot in text mode:
sudo systemctl set-default multi-user.target
In text mode, just after logging in, the device will use around 300 MB of RAM.
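To switch back to the graphical desktop later (assuming a desktop environment is still installed), reset the default target:
sudo systemctl set-default graphical.target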
To free up around 1.5 GB of storage, run the following:
sudo apt purge thunderbird libreoffice* imagemagick chromium-* python3-pandas python3-numpy gnome-*
sudo apt autoremove -y
# If you do not need a desktop environment, do the following:
sudo apt remove ubuntu-desktop gdm3 unity gnome-icon-theme nvidia-l4t-graphics-demos
sudo apt autoremove -y
sudo apt clean
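To confirm how much space was reclaimed, check disk usage before and after the cleanup:
df -h /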
How do I upgrade JetPack?
- See Update Mechanism and Over-the-Air Update to update the Debian packages.
- Then upgrade JetPack (a rough sketch of the commands follows below).
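As a rough sketch, assuming the entries in /etc/apt/sources.list.d/nvidia-l4t-apt-source.list already point at the L4T release you want, the upgrade typically boils down to:
sudo apt update
sudo apt dist-upgrade
# Reinstall the meta-package to pull in the matching JetPack components
sudo apt install nvidia-jetpack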
Can I use a different JetPack version?
If you use an older JetPack version, you can install an image that matches your version. See all the available Stream tags and pick the one that matches your JetPack version.
For example, for JetPack 5.0.2 the L4T version is R35.1.0. You can look up the L4T version that corresponds to your JetPack version here: https://developer.nvidia.com/embedded/jetpack-archive
So for a JetPack 5.0.2 device you need to look for jetson-XXXXXX-r35.1.0 images, for example platerecognizer/alpr-stream:jetson-1.41.0-r35.1.0.
Keep in mind that older images, such as 1.41.0 in the example above, do not include new features and bug fixes. An alternative is to use our Raspberry Pi image for Stream and the Snapshot SDK, which works on any arm64 device regardless of JetPack or L4T version, although performance may be slightly slower.
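To fetch that matching image, pull the tag directly:
docker pull platerecognizer/alpr-stream:jetson-1.41.0-r35.1.0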
Hardware
How do I improve the inference speed?
- On devices with enough resources, use -e WORKERS=4 (for Snapshot only) to fully utilize the GPU. Also, make sure to use docker run --runtime nvidia ...
- The GPU frequency may not be properly configured. Use jetson_clocks to manage it. For optimal performance, set it to maximum.
- Configure the power mode using nvpmodel. For optimal performance, use nvpmodel --mode 0 (see the sketch after this list).
- Avoid running CPU-intensive programs while using our software. We recommend using jtop to view the resource usage.
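A short sketch of the clock and power-mode tuning mentioned above (the available nvpmodel modes vary between Jetson models):
# Select the maximum-performance power mode
sudo nvpmodel --mode 0
# Pin CPU/GPU clocks to the maximum for the current power mode
sudo jetson_clocks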
Why does Stream/Snapshot stop working or the performance degrade after some time?
Jetson devices can get hot under heavy use. If they are not properly cooled, the OS will automatically throttle the CPU/GPU. Make sure to use a fan.
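To check whether throttling is the cause, watch temperatures and clocks while the software is running, for example with jtop (mentioned above) or tegrastats:
sudo tegrastats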