An htop-like monitoring tool for NVIDIA GPUs

NVTOP

What is NVTOP?

Nvtop stands for NVidia TOP, an (h)top-like task monitor for NVIDIA GPUs. It can handle multiple GPUs and prints information about them in an htop-familiar way.

Because a picture is worth a thousand words:

NVTOP interface

Table of Contents

NVTOP Options and Interactive Commands

Interactive Setup Window

NVTOP has a built-in setup utility that lets you tailor the interface to your needs. Simply press F2 and select the options that suit you best.

NVTOP Setup Window

Saving Preferences

You can save the preferences set in the setup window by pressing F12. The preferences will be loaded the next time you run nvtop.

NVTOP Manual and Command line Options

NVTOP comes with a manpage!

man nvtop

For quick help on command-line arguments:

nvtop -h
nvtop --help

GPU Support

The NVML library does not support some of the queries for GPUs that predate the Kepler microarchitecture. Anything from the GeForce 600 and GeForce 800M series onward should work fine. For more information about supported GPUs, please take a look at the NVML documentation.

Build

Two libraries are required in order for NVTOP to display GPU information:

  • The NVIDIA Management Library (NVML) which comes with the GPU driver.
    • This queries the GPU for information.
  • The ncurses library driving the user interface.
    • This makes the screen look beautiful.
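Before starting the build, it can help to sanity-check that both libraries are discoverable. The snippet below is only a sketch; the exact library file names (libnvidia-ml.so from the driver, libncursesw.so from the ncurses package) vary between distributions:

```shell
# Pre-build sanity check (a sketch; exact library names vary by distro).
# NVML ships with the NVIDIA driver, ncursesw with the ncurses dev package.
for lib in libnvidia-ml.so libncursesw.so; do
    if ldconfig -p 2>/dev/null | grep -q "$lib"; then
        echo "found: $lib"
    else
        echo "missing: $lib"
    fi
done
```

If NVML shows up as missing here, CMake's find_package step will most likely fail in the same way.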

Distribution Specific Installation Process

Ubuntu / Debian

Ubuntu disco (19.04) / Debian buster (stable)

  • sudo apt install nvtop

Older

Fedora / RedHat / CentOS

  • NVIDIA drivers and CUDA, required for the NVML libraries (see RPM Fusion)
  • CMake, ncurses and git
    sudo dnf install cmake ncurses-devel git
  • NVTOP

OpenSUSE

Arch Linux

  • sudo pacman -S nvtop

Gentoo

  • sudo layman -a guru && sudo emerge -av nvtop

Docker

  • NVIDIA drivers (same as above)

  • nvidia-docker (See Container Toolkit Installation Guide)

  • git clone https://github.com/Syllo/nvtop.git && cd nvtop
    sudo docker build --tag nvtop .
    sudo docker run -it --rm --runtime=nvidia --gpus=all --pid=host nvtop

NVTOP Build

git clone https://github.com/Syllo/nvtop.git
mkdir -p nvtop/build && cd nvtop/build
cmake ..
make

# Install globally on the system
sudo make install

# Alternatively, install without privileges at a location of your choosing
# make DESTDIR="/your/install/path" install

If you use conda as your environment manager and encounter an error while building nvtop, try conda deactivate before invoking cmake.

The build system supports multiple build types (e.g. -DCMAKE_BUILD_TYPE=RelWithDebInfo):

  • Release: Binary without debug information
  • RelWithDebInfo: Binary with debug information
  • Debug: Compile with warning flags and address/undefined sanitizers enabled (for development purposes)

Troubleshoot

  • The plot looks bad:
    • Verify that you installed the wide character version of the NCurses library (libncursesw5-dev for Debian / Ubuntu), clean the build directory and restart the build process.
  • PuTTY: Tell PuTTY not to lie about its capabilities ($TERM) by setting the field Terminal-type string to putty in the menu Connection > Data > Terminal Details.
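Along the same lines, a quick check of the locale and terminal type often explains a badly rendered plot; treat this as a rough diagnostic, not an exhaustive one:

```shell
# Broken plots usually come from a non-UTF-8 locale or a terminal that
# misreports itself; these two values are the first things to check.
echo "charmap: $(locale charmap 2>/dev/null || echo unknown)"   # want UTF-8
echo "TERM:    ${TERM:-unset}"                                  # want your real terminal type
```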

License

Nvtop is licensed under the GPLv3 license or any later version. You will find a copy of the license in the COPYING file of the repository or at the GNU website <www.gnu.org/licenses/>.

Comments
  • Could NOT find NVML (missing: NVML_LIBRARIES) (found version "10")

    Under the latest JetPack version 4.2.1 for embedded devices like the Xavier or TX2, I'm seeing this issue when trying to run cmake:

    cmake .. -DNVML_RETRIEVE_HEADER_ONLINE=True
    

    I get the error:

    -- The C compiler identification is GNU 7.4.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    CMake Error at /usr/local/share/cmake-3.15/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
      Could NOT find NVML (missing: NVML_LIBRARIES) (found version "10")
    Call Stack (most recent call first):
      /usr/local/share/cmake-3.15/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
      cmake/modules/FindNVML.cmake:52 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
      CMakeLists.txt:31 (find_package)
    
    
    -- Configuring incomplete, errors occurred!
    See also "/home/nvidia/Documents/nvtop/build/CMakeFiles/CMakeOutput.log".
    
    

    It seems CMake detects the NVML version but not the library itself, and it also fails in the download part where you look for:

    https://raw.githubusercontent.com/NVIDIA/nvidia-settings/master/src/nvml.h
    

    Any insight is appreciated!

  • Segmentation fault (core dumped)

    Hi, I'm installing nvtop on Ubuntu 18.04.5 LTS following the build instructions in this repo. The build went smoothly and there were no warnings or errors.

    But when trying to launch nvtop, I got the error: Segmentation fault (core dumped)

    Here is my nvidia-smi output:

    Mon May 24 04:28:38 2021
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 465.19.01    Driver Version: 460.32.03    CUDA Version: 11.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
    | N/A   40C    P0    39W / 250W |   3025MiB / 16280MiB |     19%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    +-----------------------------------------------------------------------------+
    

    I also did some testing and found that the last working commit for me was 0ef51c99ee1598cfc7fd806f122fbaf315f81a26. From that point on I always get this error.

    Is there anything I can do to help resolve this? Thanks ;)

  • Possible to store sort order?

    Hi,

    thank you for your great library. It gives a very good overview. One question: is it possible to store the sorting order, similar to htop (which creates a small config file, usually at $HOME/.config/htop/htoprc, when exiting the application)? On restart, it then loads the setting and applies it.

    Is this already implemented and I didn't find it, or is it a kind of new feature?

    Best regards Sven

  • The GPU usage shows huge % usage

    Hi, found this issue today.

    The GPU usage jumps to a huge % number (screenshot below).

    My system: Ubuntu 18.04.5 LTS, nvtop version 1.2.0

    (screenshot omitted)

    Please let me know what additional info would be helpful.

    P.S. Thanks for the tool!

  • Linker error: undefined reference to `nvmlDeviceGetGraphicsRunningProcesses_v2'

    Running driver version 450.57 on Ubuntu 18.04.

    Running CMake succeeds, however make fails with the following output:

    [100%] Linking C executable nvtop
    CMakeFiles/nvtop.dir/extract_gpuinfo.c.o: In function `update_device_infos':
    extract_gpuinfo.c:(.text+0x584): undefined reference to `nvmlDeviceGetGraphicsRunningProcesses_v2'
    extract_gpuinfo.c:(.text+0x624): undefined reference to `nvmlDeviceGetComputeRunningProcesses_v2'
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    src/CMakeFiles/nvtop.dir/build.make:253: recipe for target 'src/nvtop' failed
    make[2]: *** [src/nvtop] Error 1
    CMakeFiles/Makefile2:117: recipe for target 'src/CMakeFiles/nvtop.dir/all' failed
    make[1]: *** [src/CMakeFiles/nvtop.dir/all] Error 2
    Makefile:129: recipe for target 'all' failed
    make: *** [all] Error 2
    

    Anyone had this error?

    I've verified that I can access the GPU using nvidia-smi, as well as running Tensorflow:

    >>> from tensorflow.python.client import device_lib
    >>> print(device_lib.list_local_devices())
    
  • Unbound memory allocation on system with six GPUs

    Hey there - thanks for this amazing monitoring tool! :bow:

    Here's an issue I'm hitting: when running on a 6-GPU system, nvtop allocates memory until it gets killed by the Linux OOM killer. It looks like there is an overflow somewhere leading to unbounded memory allocation (at a rate of multiple GBs per second).

    Another data point: this behavior stops happening when I run in a small terminal (e.g. 80x24) or in a tmux split pane, which indicates it has something to do with the live utilization plots.

    When running a debug build and sending a SIGHUP signal during the memory allocation, I get backtraces indicating draw_plots is the problem, e.g.

    (gdb) bt
    #0  0x00005555555c11c8 in nvtop_line_plot (win=0x0, num_data=3200171704, data=0x7ff9f9354800, min=0, max=100, num_plots=4, legend=0x7fffffffe000) at /home/djh/nvtop/src/plot.c:61
    #1  0x00005555555a99db in draw_plots (interface=0x6110000002c0) at /home/djh/nvtop/src/interface.c:1604
    #2  0x00005555555a9ce1 in draw_gpu_info_ncurses (dev_info=0x61a000000c80, interface=0x6110000002c0) at /home/djh/nvtop/src/interface.c:1625
    #3  0x0000555555594192 in main (argc=1, argv=0x7fffffffe4a8) at /home/djh/nvtop/src/nvtop.c:270
    

    Hope that helps, let me know if you need more information.

  • Wrong process ID inside docker

    I manually built nvtop in my Ubuntu 20.04 docker container because of the driver problem in the pinned issue. However, I found that the PID it shows is wrong, because I cannot find that PID using ps:

    (screenshots omitted: nvtop output and the PID lookup via ps -aux)
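    A plausible explanation, though only an assumption here, is a PID-namespace mismatch: NVML reports host PIDs, while ps inside the container sees its own namespace. That is also why the Docker instructions above pass --pid=host. A quick sketch to check which situation you are in:

```shell
# Compare this shell's PID namespace with that of PID 1; inside a container
# without --pid=host they differ, so host PIDs from NVML won't match ps output.
self_ns=$(readlink /proc/self/ns/pid)
init_ns=$(readlink /proc/1/ns/pid 2>/dev/null || echo unreadable)
if [ "$self_ns" = "$init_ns" ]; then
    echo "same PID namespace as PID 1"
else
    echo "different PID namespace (or /proc/1 not readable)"
fi
```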

  • AMDGPU support

    A picture is worth a thousand words: (screenshot omitted)

    This PR isn't fully ready for merge yet. I think I'd like some feedback and do some polishing first (hence the RFC). My AMDGPU is also an iGPU / APU which limits how much I can test myself.

    Most of the access is done using libdrm and kernel APIs. <drm/amdgpu_drm.h> is a kernel UAPI header, so I directly included it. For libdrm I'm using dlopen and re-declaring the constants to avoid a compile-time dependency, just like what the original code does with NVML.

    I also did some code refactoring to make my life easier, hope that is okay. ~~I really cannot deal with typedef structs~~

    CC #106; the i915 kernel changes don't seem to have been merged into mainline yet.

  • Failed to build from source because undefined reference to `reallocarray'

    Distributor ID: Ubuntu Description: Ubuntu 16.04.7 LTS Release: 16.04 Codename: xenial

    I tried to build nvtop from source but got these error messages

    CMakeFiles/nvtop.dir/extract_gpuinfo_amdgpu.c.o: In function `gpuinfo_amdgpu_get_running_processes':
    extract_gpuinfo_amdgpu.c:(.text+0x1232): undefined reference to `reallocarray'
    extract_gpuinfo_amdgpu.c:(.text+0x12d8): undefined reference to `reallocarray'
    collect2: error: ld returned 1 exit status
    src/CMakeFiles/nvtop.dir/build.make:290: recipe for target 'src/nvtop' failed
    make[2]: *** [src/nvtop] Error 1
    CMakeFiles/Makefile2:124: recipe for target 'src/CMakeFiles/nvtop.dir/all' failed
    make[1]: *** [src/CMakeFiles/nvtop.dir/all] Error 2
    Makefile:135: recipe for target 'all' failed
    make: *** [all] Error 2
    

    I searched for this reallocarray function; it is part of the standard C library (libc, -lc), so I don't think it can be missing. I therefore suspect a library-linking problem.
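    For reference, one rough way to verify whether the running libc actually exports the symbol (reallocarray only appeared in glibc 2.26, and Ubuntu 16.04 xenial ships glibc 2.23, so on that release it really is absent):

```shell
# Check whether libc exports reallocarray (added in glibc 2.26; Ubuntu
# 16.04's glibc 2.23 predates it, so the linker error is expected there).
libc=$(ldconfig -p 2>/dev/null | awk '/libc\.so\.6 /{print $NF; exit}')
if [ -n "$libc" ] && nm -D "$libc" 2>/dev/null | grep -q ' reallocarray'; then
    echo "reallocarray: present in $libc"
else
    echo "reallocarray: absent (or libc/nm not found)"
fi
```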

    Below is the Makefile generated by cmake:

    # CMAKE generated file: DO NOT EDIT!
    # Generated by "Unix Makefiles" Generator, CMake Version 3.23
    
    # Default target executed when no arguments are given to make.
    default_target: all
    .PHONY : default_target
    
    # Allow only one "make -f Makefile2" at a time, but pass parallelism.
    .NOTPARALLEL:
    
    #=============================================================================
    # Special targets provided by cmake.
    
    # Disable implicit rules so canonical targets will work.
    .SUFFIXES:
    
    # Disable VCS-based implicit rules.
    % : %,v
    
    # Disable VCS-based implicit rules.
    % : RCS/%
    
    # Disable VCS-based implicit rules.
    % : RCS/%,v
    
    # Disable VCS-based implicit rules.
    % : SCCS/s.%
    
    # Disable VCS-based implicit rules.
    % : s.%
    
    .SUFFIXES: .hpux_make_needs_suffix_list
    
    # Command-line flag to silence nested $(MAKE).
    $(VERBOSE)MAKESILENT = -s
    
    #Suppress display of executed commands.
    $(VERBOSE).SILENT:
    
    # A target that is always out of date.
    cmake_force:
    .PHONY : cmake_force
    
    #=============================================================================
    # Set environment variables for the build.
    
    # The shell in which to execute make rules.
    SHELL = /bin/sh
    
    # The CMake executable.
    CMAKE_COMMAND = /opt/cmake/bin/cmake
    
    # The command to remove a file.
    RM = /opt/cmake/bin/cmake -E rm -f
    
    # Escaping for special characters.
    EQUALS = =
    
    # The top-level source directory on which CMake was run.
    CMAKE_SOURCE_DIR = /home/amoschenyq/nvtop
    
    # The top-level build directory on which CMake was run.
    CMAKE_BINARY_DIR = /home/amoschenyq/nvtop/build
    
    #=============================================================================
    # Targets provided globally by CMake.
    
    # Special rule for the target edit_cache
    edit_cache:
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake cache editor..."
    	/opt/cmake/bin/ccmake -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR)
    .PHONY : edit_cache
    
    # Special rule for the target edit_cache
    edit_cache/fast: edit_cache
    .PHONY : edit_cache/fast
    
    # Special rule for the target rebuild_cache
    rebuild_cache:
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake to regenerate build system..."
    	/opt/cmake/bin/cmake --regenerate-during-build -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR)
    .PHONY : rebuild_cache
    
    # Special rule for the target rebuild_cache
    rebuild_cache/fast: rebuild_cache
    .PHONY : rebuild_cache/fast
    
    # Special rule for the target list_install_components
    list_install_components:
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Available install components are: \"Unspecified\""
    .PHONY : list_install_components
    
    # Special rule for the target list_install_components
    list_install_components/fast: list_install_components
    .PHONY : list_install_components/fast
    
    # Special rule for the target install
    install: preinstall
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Install the project..."
    	/opt/cmake/bin/cmake -P cmake_install.cmake
    .PHONY : install
    
    # Special rule for the target install
    install/fast: preinstall/fast
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Install the project..."
    	/opt/cmake/bin/cmake -P cmake_install.cmake
    .PHONY : install/fast
    
    # Special rule for the target install/local
    install/local: preinstall
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Installing only the local directory..."
    	/opt/cmake/bin/cmake -DCMAKE_INSTALL_LOCAL_ONLY=1 -P cmake_install.cmake
    .PHONY : install/local
    
    # Special rule for the target install/local
    install/local/fast: preinstall/fast
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Installing only the local directory..."
    	/opt/cmake/bin/cmake -DCMAKE_INSTALL_LOCAL_ONLY=1 -P cmake_install.cmake
    .PHONY : install/local/fast
    
    # Special rule for the target install/strip
    install/strip: preinstall
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Installing the project stripped..."
    	/opt/cmake/bin/cmake -DCMAKE_INSTALL_DO_STRIP=1 -P cmake_install.cmake
    .PHONY : install/strip
    
    # Special rule for the target install/strip
    install/strip/fast: preinstall/fast
    	@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Installing the project stripped..."
    	/opt/cmake/bin/cmake -DCMAKE_INSTALL_DO_STRIP=1 -P cmake_install.cmake
    .PHONY : install/strip/fast
    
    # The main all target
    all: cmake_check_build_system
    	$(CMAKE_COMMAND) -E cmake_progress_start /home/amoschenyq/nvtop/build/CMakeFiles /home/amoschenyq/nvtop/build//CMakeFiles/progress.marks
    	$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 all
    	$(CMAKE_COMMAND) -E cmake_progress_start /home/amoschenyq/nvtop/build/CMakeFiles 0
    .PHONY : all
    
    # The main clean target
    clean:
    	$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 clean
    .PHONY : clean
    
    # The main clean target
    clean/fast: clean
    .PHONY : clean/fast
    
    # Prepare targets for installation.
    preinstall: all
    	$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 preinstall
    .PHONY : preinstall
    
    # Prepare targets for installation.
    preinstall/fast:
    	$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 preinstall
    .PHONY : preinstall/fast
    
    # clear depends
    depend:
    	$(CMAKE_COMMAND) -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) --check-build-system CMakeFiles/Makefile.cmake 1
    .PHONY : depend
    
    #=============================================================================
    # Target rules for targets named uninstall
    
    # Build rule for target.
    uninstall: cmake_check_build_system
    	$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 uninstall
    .PHONY : uninstall
    
    # fast build rule for target.
    uninstall/fast:
    	$(MAKE) $(MAKESILENT) -f CMakeFiles/uninstall.dir/build.make CMakeFiles/uninstall.dir/build
    .PHONY : uninstall/fast
    
    #=============================================================================
    # Target rules for targets named nvtop
    
    # Build rule for target.
    nvtop: cmake_check_build_system
    	$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 nvtop
    .PHONY : nvtop
    
    # fast build rule for target.
    nvtop/fast:
    	$(MAKE) $(MAKESILENT) -f src/CMakeFiles/nvtop.dir/build.make src/CMakeFiles/nvtop.dir/build
    .PHONY : nvtop/fast
    
    # Help Target
    help:
    	@echo "The following are some of the valid targets for this Makefile:"
    	@echo "... all (the default if no target is provided)"
    	@echo "... clean"
    	@echo "... depend"
    	@echo "... edit_cache"
    	@echo "... install"
    	@echo "... install/local"
    	@echo "... install/strip"
    	@echo "... list_install_components"
    	@echo "... rebuild_cache"
    	@echo "... uninstall"
    	@echo "... nvtop"
    .PHONY : help
    
    
    
    #=============================================================================
    # Special targets to cleanup operation of make.
    
    # Special rule to run CMake to check the build system integrity.
    # No rule that depends on this can have commands that come from listfiles
    # because they might be regenerated.
    cmake_check_build_system:
    	$(CMAKE_COMMAND) -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) --check-build-system CMakeFiles/Makefile.cmake 0
    .PHONY : cmake_check_build_system
    
    
  • PID shown by nvtop differs from the one shown by htop

    Dear Syllo,

    I created a very simple Python script to allocate some GPU memory; the corresponding process appeared in nvtop (screenshot omitted). However, the PID I can see using htop differs from that one (screenshot omitted).

    Do you happen to have any idea? Please note that this may be a driver-related issue, as the output of nvidia-smi shows no processes at all. I tried to find some answers for this one as well, but the only thing I found is that it won't work in containers. However, I do not use any containers, and I don't have root access either, hence driver updates are not really possible.

    Thanks in advance for any advice, Daniel

  • show CPU utilization

    Hi,

    First, thanks for this great tool. It really simplifies keeping track of the running GPU processes on our servers. I would also like to see the CPU utilization. If I created a pull request, would you welcome this feature?

    Thank you, Leon

  • Missing process gpu, enc, dec, gpu mem usage when using 5.19 kernel version

    System: Kernel: 5.19.0-4-MANJARO arch: x86_64 bits: 64 Desktop: KDE Plasma v: 5.25.4 Distro: Manjaro Linux

    CPU: Info: quad core Intel Core i7-4790K

    Graphics: Device-1: AMD Navi 23 [Radeon RX 6600/6600 XT/6600M] driver: amdgpu v: kernel Display: x11 server: X.Org v: 21.1.4 with: Xwayland v: 22.1.3 driver: X: loaded: amdgpu unloaded: modesetting,radeon gpu: amdgpu resolution: 1: 1920x1080 2: 1920x1080 OpenGL: renderer: AMD Radeon RX 6600 XT (dimgrey_cavefish LLVM 14.0.6 DRM 3.47 5.19.0-4-MANJARO) v: 4.6 Mesa 22.1.4

    Since kernel version 5.19, as you can see, some data are no longer displayed (screenshot omitted).

    Seems to work as expected on kernel versions 5.15 and 5.18 (screenshot omitted).

  • Segmentation fault (core dumped) with CUDA 11.7

    Many thanks for providing nvtop. I just built nvtop from source on a freshly installed Xubuntu 18.04 with CUDA 11.7 on an NVIDIA GeForce RTX 2070 and encountered a segmentation fault. Yesterday's build of nvtop on a second Xubuntu 18 system with CUDA 11.6 and an NVIDIA GeForce RTX 3060 is running perfectly fine, as on other systems.

    Is it possible that the new version of CUDA is causing the segmentation fault, as described in #107 (https://github.com/Syllo/nvtop/issues/107)?

  • add gcc-c++ to Fedora instructions

    Making note that the gcc-c++ package is required for cmake to compile this application, even if the "Development Tools" group is installed on Fedora 36.

  • feature request: display time on x axis

    nvtop is great - I use it to monitor GPU utilization while running certain interactive tasks. It would be great if the x axis could display values for time, say in seconds. This would give a rough idea of how much time the GPU spent at a certain utilization and would make the graph more informative.

  • Missing <linux/kcmp.h> breaks build on CentOS 7

    Hello, and thank you for the great tool!

    Reporting a problem with building NVTOP on an HPC cluster here. CentOS 7.9, kernel 3.10.0-1160.62.1.el7.x86_64, CUDA 11.7. The cmake part goes fine, but the build dies near the end:

    [ 92%] Building C object src/CMakeFiles/nvtop.dir/extract_gpuinfo_amdgpu.c.o
    /home/lev/software/nvtop/2.0.2/nvtop/src/extract_gpuinfo_amdgpu.c:36:24: fatal error: linux/kcmp.h: No such file or directory
     #include <linux/kcmp.h>
                            ^
    compilation terminated.
    make[2]: *** [src/CMakeFiles/nvtop.dir/extract_gpuinfo_amdgpu.c.o] Error 1
    make[1]: *** [src/CMakeFiles/nvtop.dir/all] Error 2
    

    Builds fine on Rocky 8.4 and Ubuntu 18.04 with 4.* kernels, but not on older systems with kernel 3.*. Is there a way to avoid/bypass the dependency on this header?
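    A quick way to see whether the UAPI header is present at all; the search paths below are assumptions and depend on how kernel headers are installed on your system:

```shell
# Look for linux/kcmp.h in common header locations; on CentOS 7 with a
# 3.10 kernel it may simply not be shipped.
found=0
for d in /usr/include /usr/src/kernels/*/include/uapi; do
    if [ -f "$d/linux/kcmp.h" ]; then
        echo "found: $d/linux/kcmp.h"
        found=1
    fi
done
[ "$found" -eq 1 ] || echo "linux/kcmp.h not found in the checked locations"
```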
