[中文版]

Forward

Forward is a library for high-performance deep learning inference on NVIDIA GPUs. It provides a well-designed scheme that directly parses TensorFlow/PyTorch/Keras models into high-performance engines based on TensorRT. Compared to using the TensorRT API directly, Forward is easier to use and easier to extend. So far, Forward supports not only mainstream deep learning models in the CV, NLP, and recommendation fields, but also advanced models such as BERT, GAN, FaceSwap, and StyleTransfer.

Features

  • Utilizes the TensorRT API and customized operators for high-performance deep learning inference.
  • Supports not only mainstream deep learning models in the CV, NLP, and recommendation fields, but also advanced models such as BERT, GAN, FaceSwap, and StyleTransfer.
  • Supports FLOAT/HALF/INT8 inference modes.
  • Easy to use: directly load TensorFlow (.pb) / PyTorch (.pth) / Keras (.h5) models and run inference with TensorRT (see the sketch after this list).
  • Easy to extend: register customized layers as described in add_support_op.md.
  • Provides C++ and Python interfaces.
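
For a quick sense of the workflow, the sketch below builds and saves an engine through the Python interface for a TensorFlow model. It is a minimal sketch following the forward.TfBuilder API used in the examples further down this page; the model path and the input name 'x' are placeholders.

import numpy as np
import forward  # Forward's Python bindings (built with -DBUILD_PYTHON_LIB=ON)

# Build a TensorRT engine from a frozen TensorFlow model.
builder = forward.TfBuilder()
builder.set_mode('float32')  # infer mode: float32 / float16 / int8_calib / int8

# Dummy inputs are keyed by the model's input names (see the Netron note under More Usages).
dummy_inputs = {'x': np.ones((1, 299, 299, 3), dtype='float32')}
tf_engine = builder.build('model.pb', dummy_inputs)

# Save the engine so it can be reloaded later without rebuilding.
tf_engine.save('model.engine')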

Quick Start

Prerequisites

  • NVIDIA CUDA >= 10.0, cuDNN >= 7 (recommended version: CUDA 10.2)
  • TensorRT >= 7.0.0.11 (recommended version: TensorRT-7.2.1.6)
  • CMake >= 3.10.1
  • GCC >= 5.4.0, ld >= 2.26.1
  • (PyTorch) PyTorch == 1.3.1
  • (TensorFlow) TensorFlow == 1.15.0 (download TensorFlow 1.15.0 and unzip it to source/third_party/tensorflow/lib)
  • (Keras) HDF5

Build with CMake

Generate Makefiles or a VS project (Windows) and build. Forward can be built for different frameworks, such as Fwd-Torch, Fwd-Python-Torch, Fwd-Tf, Fwd-Python-Tf, Fwd-Keras, and Fwd-Python-Keras, controlled by CMake options. For example, Fwd-Python-Tf is built as below.

mkdir build
cd build

cmake ..  \
-DTensorRT_ROOT=/path/to/TensorRT \
-DENABLE_LOGGING=ON \
-DENABLE_PROFILING=ON \
-DENABLE_DYNAMIC_BATCH=ON \
-DBUILD_PYTHON_LIB=ON \
-DENABLE_TORCH=OFF \
-DENABLE_TENSORFLOW=ON \
-DENABLE_KERAS=OFF

make -j

CMake build arguments

  • TensorRT_ROOT [Required]: path to the TensorRT installation directory containing its libraries
  • For more CMake options, refer to CMake Options

Unit Test

After the project is built, unit_test can be used to verify that the build succeeded.

cd build/bin
./unit_test --gtest_filter=TestTfNodes.*

Use Forward-Cpp

Refer to Demo for using Forward-Cpp on Linux.

Use Forward-Python

Refer to Demo for using Forward-Python
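
As a complement to the demo, here is a minimal sketch of loading a previously saved engine and running inference through the Python interface, following the forward.TfEngine API used in the examples below; the engine path and input name are placeholders.

import numpy as np
import forward

# Load an engine that was built and saved earlier.
tf_engine = forward.TfEngine()
tf_engine.load('model.engine')

# Inputs are passed as a dict keyed by the model's input names.
inputs = {'x': np.ones((1, 299, 299, 3), dtype='float32')}
outputs = tf_engine.forward(inputs)
print(outputs)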

More Usages

Notice: the input names of a model can be viewed with model viewers such as Netron.
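
If a model viewer is not at hand, the input names of a frozen TensorFlow graph can also be listed programmatically. A minimal sketch, assuming TensorFlow 1.x and a hypothetical frozen graph file named model.pb:

import tensorflow as tf

# Read the frozen graph definition.
graph_def = tf.compat.v1.GraphDef()
with open('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Placeholder nodes are the graph inputs; their names are the keys
# expected in the dummy-input dict passed to the builder.
input_names = [n.name for n in graph_def.node if n.op == 'Placeholder']
print(input_names)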

FAQ

FAQ

Models & Operators

Models

Operators

Contribution

CONTRIBUTING

Contributors

Aster JIAN
Zexi YUAN
Ao LI
Paul LU
JettHu
Ryosuke1eep

Any form of contribution is welcome. The above contributors have been officially recognized by Tencent Open Source.

We warmly welcome developers to contribute to Tencent's open source projects, and we will acknowledge and reward them accordingly. Here we provide an official description of Tencent's open source contribution incentives. Specific contribution rules for each project are formulated by the project team, and developers can choose a suitable project and participate according to the corresponding rules. The Tencent Project Management Committee reports qualified contributors regularly, and awards are issued by the official contact.

License

Apache License v2.0

Comments
  • make error

    make error

    Describe the bug: the command executed was cmake .. -DTensorRT_ROOT=/home/soft/wp/TensorRT-8.2.0.6 -DENABLE_TORCH=ON -DENABLE_TORCH_PLUGIN=ON -DCMAKE_PREFIX_PATH=/home/soft/wp/libtorch

    Error log:

    -- Found Threads: TRUE
    -- Found CUDA: /usr/local/cuda (found version "11.1")
    -- CUDA_NVCC_FLAGS: -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
    -- Using the single-header code from /home/soft/wp/Forward/source/third_party/json/single_include/
    -- Found TensorRT: /home/soft/wp/TensorRT-8.2.0.6/lib/libnvinfer.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvinfer_plugin.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvonnxparser.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvparsers.so (found version "8.2.0")
    -- Found CUDA: /usr/local/cuda (found version "11.1")
    -- Caffe2: CUDA detected: 11.1
    -- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
    -- Caffe2: CUDA toolkit directory: /usr/local/cuda
    -- Caffe2: Header version is: 11.1
    -- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so
    -- Found cuDNN: v? (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
    CMake Error at /home/soft/wp/libtorch/share/cmake/Caffe2/public/cuda.cmake:174 (message):
      PyTorch requires cuDNN 7 and above.
    Call Stack (most recent call first):
      /home/soft/wp/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:88 (include)
      /home/soft/wp/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
      CMakeLists.txt:248 (find_package)

    -- Configuring incomplete, errors occurred!

    In fact, cuDNN is already installed. (screenshot)

    ls /usr/local/cuda/lib64 | grep cudnn (screenshot)

    Environment

    TensorRT Version: TensorRT-8.2.0.6 NVIDIA GPU: P8 NVIDIA Driver Version: 465.19.01 CUDA Version: 11.1 CUDNN Version: 8.2.1.32 Operating System: Ubuntu 16.04.2 LTS Python Version (if applicable): 3.7 Tensorflow Version (if applicable): 2.6.0 PyTorch Version (if applicable): 1.9.0+cu111


  • Error when building the VC project

    Error when building the VC project

    Build error: I first ran cmake following the instructions; some tests failed, but configuration completed.

    Selecting Windows SDK version  to target Windows 10.0.19042.
    CMake Deprecation Warning at CMakeLists.txt:28 (cmake_policy):
      The OLD behavior for policy CMP0074 will be removed from a future version
      of CMake.
    
      The cmake-policies(7) manual explains that the OLD behaviors of all
      policies are deprecated and that a policy should be set to OLD only under
      specific short-term circumstances.  Projects should be ported to the NEW
      behavior and not rely on setting a policy to OLD.
    
    
    Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2 (found version "10.2") 
    CUDA_NVCC_FLAGS:  -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
    Found TensorRT: D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvinfer.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvinfer_plugin.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvonnxparser.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvparsers.lib (found version "7.2.1") 
    Using the single-header code from D:/Projects/tencent_forward/Forward/source/third_party/json/single_include/
    Use HDF5 on third_party: D:/Projects/tencent_forward/Forward/source/third_party/hdf5
    Warnings Configuration: default:   /DWIN32 /D_WINDOWS /W3 :  /DWIN32 /D_WINDOWS /W3 /GR /EHsc /W3 /WX-
    Check for STD namespace
    Check for STD namespace - found
    Performing CXX Test OLD_HEADER_FILENAME - Failed
    Performing CXX Test HDF_NO_NAMESPACE - Failed
    Performing CXX Test HDF_NO_STD - Failed
    Performing CXX Test BOOL_NOTDEFINED - Failed
    Performing CXX Test NO_STATIC_CAST - Failed
    Performing CXX Test CXX_HAVE_OFFSETOF - Failed
    Configuring done
    Generating done
    

    Then the following errors were reported when building the projects:

    Severity  Code  Description  Project  File  Line
    Error  C2664  'void std::vector<fwd::NamedTensor,std::allocator<_Ty>>::push_back(const fwd::NamedTensor &)': cannot convert argument 1 from 'initializer list' to 'fwd::NamedTensor &&'  trt_engine  D:\Projects\tencent_forward\Forward\source\trt_engine\trt_engine\trt_buffer_manager.h  116
    Error  C1083  Cannot open source file: 'D:\Projects\tencent_forward\Forward\build\source\third_party\hdf5\H5Tinit.c': No such file or directory  hdf5  D:\Projects\tencent_forward\Forward\build\source\third_party\hdf5\src\c1  1
    Error  C2664  'nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer> nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::parse(nlohmann::detail::input_adapter &&,const std::function<bool (int,nlohmann::detail::parser<nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>>::parse_event_t,BasicJsonType &)>,const bool)': cannot convert argument 1 from 'std::string' to 'nlohmann::detail::input_adapter &&'  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  72
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  147
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'bool' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  166
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  175
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  205
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'const char [11]' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  206
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'const std::string' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  209
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'std::vector<std::vector<std::vector<std::string,std::allocator<_Ty>>,std::allocator<std::vector<_Ty,std::allocator<_Ty>>>>,std::allocator<std::vector<std::vector<_Ty,std::allocator<_Ty>>,std::allocator<std::vector<_Ty,std::allocator<_Ty>>>>>>' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp  213
    Error  C2666  'fwd::operator ==': 3 overloads have similar conversions  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.h  62  (reported 3 times)
    Error  C2664  'void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)': cannot convert argument 1 from 'initializer list' to 'fwd::TrtLayerOutput &&'  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp  157
    Error  C2679  binary '=': no operator found which takes a right-hand operand of type 'initializer list' (or there is no acceptable conversion)  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp  177
    Error  C2664  'void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)': cannot convert argument 1 from 'initializer list' to 'fwd::TrtLayerOutput &&'  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp  185
    Error  C2664  'void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)': cannot convert argument 1 from 'initializer list' to 'fwd::TrtLayerOutput &&'  fwd_keras  D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp  201
    Error  C4579  'nlohmann::detail::static_const<nlohmann::detail::from_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  2235  (reported 3 times)
    Error  C2131  expression did not evaluate to a constant  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  2235  (reported 3 times)
    Error  C4579  'nlohmann::detail::static_const<nlohmann::detail::to_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  2235  (reported 3 times)
    Error  C2780  'unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)': expects 2 arguments - 1 provided  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  2513  (reported many times)
    Error  C2893  Failed to specialize function template 'unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)'  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  2513  (reported many times)
    Error  C2783  'unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const': could not deduce template argument for '__formal'  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  2516  (similar C2783/C2893 errors are reported for the other basic_json::get overloads, repeated 3 times)
    Error  C2784  'const _Ty *std::begin(const std::valarray<_Ty> &)': could not deduce template argument for 'const std::valarray<_Ty> &' from 'add_rvalue_reference<const ContiguousContainer>::type'  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  4290  (similar C2784/C2893 errors are reported for the other std::begin overloads, repeated 3 times)
    Error  C2039  'iterator_category': is not a member of 'nlohmann::detail::iterator_traits<unknown-type,void>'  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  4290  (reported 3 times)
    Error  C2146  syntax error: missing '>' before identifier 'iterator_category'  fwd_keras  D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp  4290  (reported 3 times)
    

    In total, 3 projects failed to build.

    10>------ Rebuild All started: Project: ALL_BUILD, Configuration: Debug x64 ------
    10>  Building Custom Rule D:/Projects/tencent_forward/Forward/CMakeLists.txt
    ========== Rebuild All: 7 succeeded, 3 failed, 0 skipped ==========
    

    What is the cause of this, and how can it be fixed?

    Environment

    TensorRT Version: 7.2.1.6 NVIDIA GPU: RTX 2080 SUPER NVIDIA Driver Version: 441.22 CUDA Version: 10.2 CUDNN Version: 8.2.0.53 Operating System: Windows 10 Professional Python Version: 3.8.5

  • Help to support TensorFlow Slim "Flatten"

    Help to support the TensorFlow Slim "Flatten" pattern in TensorRT.

    There is a layer in TensorFlow Slim named "Flatten"; it consists of several TensorFlow operations, such as "Shape", "StridedSlice", and "Reshape". The test code is as follows:

    import numpy as np
    import os
    
    
    def create_tf_flatten(model_file):
        import tensorflow as tf
        import tf_slim as slim
        with tf.Session() as sess:
            x1 = tf.placeholder(shape=(None,299,299,3),dtype=tf.float32, name='x')
    
            op = slim.flatten(x1)
    
            sess.run(tf.global_variables_initializer())
            feed_dict = {x1: np.ones((1,299,299,3))}
            print(sess.run(op, feed_dict))
    
            graphdef = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['Flatten/flatten/Reshape'])
            tf.train.write_graph(graphdef, './', model_file, as_text=False)
            return feed_dict
    
    def forward_transfer(model_file, dummy_input):
        import forward
        # 1. Build the engine
        builder = forward.TfBuilder()
        infer_mode = 'float32'
    
        builder.set_mode(infer_mode)
        tf_engine = builder.build(model_file, dummy_input)
    
        # save engine
        engine_path = os.path.splitext(model_file)[0] + '.engine'
        tf_engine.save(engine_path)
    
    
    def test_forward(model_file, dummy_input):
        import forward
        engine_path = os.path.splitext(model_file)[0] + '.engine'
        # load saved engine
        tf_engine = forward.TfEngine()
        tf_engine.load(engine_path)
    
        inputs = dummy_input
        outputs = tf_engine.forward(inputs)
        print(outputs)
    
    
    model_file = 'tf_model.pb'
    create_tf_flatten(model_file)
    
    x = {'x':np.ones([1,299,299,3],dtype='float32')}
    
    forward_transfer(model_file, x)
    test_forward(model_file, x)
    

    Environment

    TensorRT Version: 7.1.3.4 NVIDIA GPU: T4 NVIDIA Driver Version: 410.104 CUDA Version: 10.2 CUDNN Version: 8.0 Operating System: ubuntu 18.04 Python Version (if applicable): 3.6.9 Tensorflow Version (if applicable): 1.15.0 PyTorch Version (if applicable): 1.7.0

    @aster2013 @yuanzexi

  • Segmentation fault (core dumped)

    Segmentation fault (core dumped)

    Describe the bug: After finishing the CMake build, I copied forward.cpython-36m-aarch64-linux-gnu.so to a new directory. When I run 'python test_forward.py' or 'import forward', an error appears: 'Segmentation fault (core dumped)'.

    Environment

    Device : Jetson Xavier NX System: Jetpack4.4 [L4T 32.4.4] TensorRT Version: 7.1.3 CUDA Version: 10.2.89 CUDNN Version: 8.0.0.180 Python Version (if applicable): 3.6.9 Tensorflow Version (if applicable): 1.15.2 Keras Version (if applicable): 2.1.5

    Relevant Files

    cmake successful (screenshot)

    error information (screenshot)

    some gdb information (screenshots)

    What causes this error? Looking forward to an answer!

  • reflectPad has two problems

    reflectPad has two problems

    Describe the bug: First, even though the plugin inherits from IPluginV2DynamicExt, it cannot be used with dynamic inputs, because getOutputDimensions() is implemented incorrectly. Second, the plugin's padding produces wrong output on some GPUs for unknown reasons; for example, the result is correct on a 2080 Ti but wrong on an A100.

    Environment

    TensorRT Version: 7.2.1 NVIDIA GPU: 2080TI & A100 NVIDIA Driver Version: CUDA Version: 11.0 CUDNN Version: Operating System: Python Version (if applicable): Tensorflow Version (if applicable): PyTorch Version (if applicable):


  • undefined reference to 'fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'

    undefined reference to 'fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'

    1. I have built Forward and obtained libtrt_engine.so and libfwd_torch.so.
    2. When I build demo/fwd_cpp, I get the error: [ 50%] Linking CXX executable test_fwd_engine CMakeFiles/test_fwd_engine.dir/test_fwd_engine.cpp.o: In function 'main': test_fwd_engine.cpp:(.text+0x1c8): undefined reference to 'fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)' collect2: error: ld returned 1 exit status CMakeFiles/test_fwd_engine.dir/build.make:110: recipe for target 'test_fwd_engine' failed make[2]: *** [test_fwd_engine] Error 1 CMakeFiles/Makefile2:96: recipe for target 'CMakeFiles/test_fwd_engine.dir/all' failed make[1]: *** [CMakeFiles/test_fwd_engine.dir/all] Error 2 Makefile:102: recipe for target 'all' failed make: *** [all] Error 2
  • free(): invalid pointer

    free(): invalid pointer

    Describe the bug: An error is reported at the end of the program: free(): invalid pointer.

    Environment

    TensorRT Version: 7.2.3.4 NVIDIA GPU: TITAN Xp NVIDIA Driver Version: 440.33.01 CUDA Version: 10.2 CUDNN Version: 8.0.4 Operating System: CentOS 8 Python Version (if applicable): 3.6.8 Tensorflow Version (if applicable): -- PyTorch Version (if applicable): 1.3.1 (libtorch cpu)

    Relevant Files

    To Reproduce: Build Forward with

    cmake -DENABLE_LOGGING=OFF \
        -DENABLE_PROFILING=OFF \
        -DENABLE_DYNAMIC_BATCH=OFF \
        -DENABLE_TORCH=ON \
        -DBUILD_PYTHON_LIB=OFF \
        -DPYTHON_EXECUTABLE=/usr/bin/python3 \
        -DENABLE_TENSORFLOW=OFF -DENABLE_KERAS=OFF \
        -DTORCH_CMAKE_PATH=/usr/local/lib/libtorch/share/cmake/Torch/ \
        ..

    Steps to reproduce the behavior:

    1. #include "fwd_torch/torch_engine/torch_engine.h" and #include "fwd_torch/torch_engine/torch_infer.h"
    2. Create a main function that prints std::cout << "Hello World" << std::endl; after cmake and make, run it.

    Expected behavior None

    Screenshots: (screenshot)

    Additional context: The problem occurs when Forward is built with CMake against libtorch. When Forward is built with the Python bindings instead, the problem does not come up.

  • TensorRT implementation of keras.layers.Embedding

    TensorRT implementation of keras.layers.Embedding

    Thanks for this project, it has helped a lot; already starred. I ran into a problem: I am implementing the keras.layers.Embedding layer in TensorRT, and I would like to ask whether TensorRT's built-in embLayerNormPlugin can be used to implement it. My Python and C++ source code is below. Python code: (screenshot). The Python result, which turns a length-32 array into a 32x128 tensor: (screenshot)

  • Cannot build the pb model (TensorFlow), got "tf_graph_parser.cpp(48): Creating input desc is failed."

    Cannot build the pb model (TensorFlow), got "tf_graph_parser.cpp(48): Creating input desc is failed."

    Describe the bug: I can convert my PyTorch model (a ResNet classification model) with Forward. But when I try to convert a TensorFlow pb model, I cannot build the saved pb model, even though forward.**.so was obtained successfully and import forward works. When I test the code "engine = builder.build('./test_tfmodel.pb', dummy_inputs)", the error "tf_graph_parser.cpp(48): Creating input desc is failed." appears, followed by "Aborted (core dumped)".

    Environment

    TensorRT Version: 7.2.1.6 NVIDIA GPU: GTX1080TI NVIDIA Driver Version: 450.80.02 CUDA Version: 11.0 CUDNN Version: 8.0.4 Operating System: 7.5 Python Version (if applicable): 3.6.13 Tensorflow Version (if applicable): tensorflow==1.15.0(cpu) PyTorch Version (if applicable): 1.7.1

    Relevant Files

    (screenshot) I am just testing a simple add model.

    To Reproduce Steps to reproduce the behavior:

    1. cmake .. \
        -DTensorRT_ROOT=/data/wind/TensorRT-7.2.1.6/ \
        -DENABLE_LOGGING=OFF \
        -DENABLE_PROFILING=OFF \
        -DENABLE_DYNAMIC_BATCH=OFF \
        -DBUILD_PYTHON_LIB=ON \
        -DPYTHON_EXECUTABLE=/root/anaconda3/envs/xyang/bin/python \
        -DENABLE_TORCH=OFF \
        -DENABLE_TENSORFLOW=ON \
        -DENABLE_KERAS=OFF

    2. make -j (screenshot). After this, import forward works without error.

    import numpy as np
    import forward
    
    # 1. Build the engine
    builder = forward.TfBuilder()
    
    # img = torch.randn(1, 784)
    img = np.ones([1,784], dtype='float32')
    dummy_inputs = {'inputs': img}
    infer_mode = 'float32'  #  float32 / float16 / int8_calib / int8
    
    builder.set_mode(infer_mode)
    engine = builder.build('./test_tfmodel.pb', dummy_inputs)
    

    Then the last line fails. (screenshots)

    Screenshots: When I run ./unit_test --gtest_filter=TestTfNodes.*, the following error, Aborted (core dumped), happens. But I can still import forward successfully. (screenshot)

  • Build error: cannot find the cublas_device library

    Build error: cannot find the cublas_device library

    Hello, during the build the linker reports that cublas_device cannot be found. Details are as follows:

    Environment

    TensorRT Version: 7.2.3.4 CUDA Version: 10.2 CUDNN Version: 7.4 Operating System: ubuntu18.04 Python Version (if applicable): 3.7 PyTorch Version (if applicable): 1.8

    Error message:

    [ 97%] Linking CXX shared library ../../bin/libfwd_torch.so
    [ 97%] Built target fwd_torch
    Scanning dependencies of target forward
    [ 98%] Building CXX object source/py_fwd/CMakeFiles/forward.dir/py_forward.cpp.o
    [100%] Linking CXX shared module ../../bin/forward.cpython-37m-x86_64-linux-gnu.so
    /usr/bin/x86_64-linux-gnu-ld: cannot find -lCUDA_cublas_device_LIBRARY-NOTFOUND
    collect2: error: ld returned 1 exit status
    source/py_fwd/CMakeFiles/forward.dir/build.make:117: recipe for target 'bin/forward.cpython-37m-x86_64-linux-gnu.so' failed
    make[2]: *** [bin/forward.cpython-37m-x86_64-linux-gnu.so] Error 1
    CMakeFiles/Makefile2:687: recipe for target 'source/py_fwd/CMakeFiles/forward.dir/all' failed
    make[1]: *** [source/py_fwd/CMakeFiles/forward.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed

    The cublas_device library cannot be found; as far as I know, this library was deprecated after CUDA 10.

  • make error

    make error

    I've done the 'cmake' part successfully. But when I run 'make', the build fails like this:

        [ 5%] Building NVCC (Device) object source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o
        In file included from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/block_discontinuity.cuh:37:0,
                         from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/block_histogram_sort.cuh:37,
                         from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/block_histogram.cuh:36,
                         from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/cub.cuh:38,
                         from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/common/bert_plugin_util.h:33,
                         from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/emb_layer_norm_plugin/emb_layer_norm_kernel.cu:36:
        /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:238:61: warning: missing terminating " character
        /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:282:2: error: #else without #if
        /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:284:2: error: #endif without #if
        [... the same missing-terminating-quote warnings and "#else/#endif without #if" errors repeat for util_ptx.cuh lines 294-406 ...]
        /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:475:1: error: unterminated comment
        In file included from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/common/bert_plugin_util.h:33:0,
                         from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/emb_layer_norm_plugin/emb_layer_norm_kernel.cu:36:
        /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/cub.cuh:54:10: fatal error: device/device_run_length_encode.cuh: No such file or directory
         #include "device/device_run_length_encode.cuh"
        compilation terminated.
        CMake Error at trt_engine_generated_emb_layer_norm_kernel.cu.o.cmake:220 (message):
          Error generating /home/agx/SCW/Forward-master/build/source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/./trt_engine_generated_emb_layer_norm_kernel.cu.o

        source/trt_engine/CMakeFiles/trt_engine.dir/build.make:1591: recipe for target 'source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o' failed
        make[2]: *** [source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o] Error 1
        CMakeFiles/Makefile2:556: recipe for target 'source/trt_engine/CMakeFiles/trt_engine.dir/all' failed
        make[1]: *** [source/trt_engine/CMakeFiles/trt_engine.dir/all] Error 2
        Makefile:90: recipe for target 'all' failed
        make: *** [all] Error 2

    Wonder what's wrong...
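
    The tail of the log is the actionable part: `cub.cuh` cannot find `device/device_run_length_encode.cuh`, and `util_ptx.cuh` itself triggers unterminated-string/comment diagnostics. That pattern usually points at an incomplete or corrupted `source/third_party/cub-1.8.0` checkout rather than at Forward's own sources. A minimal sanity check, sketched under that assumption (the commands and the remedy are illustrative, not taken from the original report):

        # 1) The header cub.cuh cannot find should exist at this path (taken from the error log):
        ls source/third_party/cub-1.8.0/cub/device/device_run_length_encode.cuh

        # 2) The file the compiler chokes on should contain ordinary inline-PTX strings;
        #    truncated or garbled text here indicates a bad checkout/download:
        sed -n '230,480p' source/third_party/cub-1.8.0/cub/util_ptx.cuh

        # 3) Assumed remedy: replace source/third_party/cub-1.8.0 with a freshly
        #    downloaded copy of cub 1.8.0 (e.g. from the NVIDIA/cub GitHub releases),
        #    then re-run cmake and make from a clean build directory.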
