Caffe2 is a lightweight, modular, and scalable deep learning framework.

Meta Archive
These projects have been archived and are generally unsupported, but are still available to view and use
  • Builder scripts for Docker containers

    This includes a build script for Docker containers to run builds and tests in as well as a build and test script that is run to build and test Caffe2 itself. These scripts are directly used by Jenkins.

  • cmake: python packages now install to the canonical directory

    Addresses issue #1676

    Now when make install is run, the caffe2 (and caffe) python modules will be installed into the correct site-packages directory (relative to the prefix) instead of directly in the prefix.
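    As a hedged sketch of what "canonical" means here: Python itself can report the site-packages path it expects under a given prefix ("/opt/caffe2" below is a placeholder), which is where the install step should place the modules.

    ```shell
    # Ask Python for the canonical site-packages directory under a hypothetical
    # prefix /opt/caffe2; "make install" should place the caffe2/ module there,
    # not directly in the prefix root.
    python3 -c "import sysconfig; print(sysconfig.get_path('purelib', vars={'base': '/opt/caffe2', 'platbase': '/opt/caffe2'}))"
    ```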

  • ../lib/libcaffe2.so: undefined reference to `google::protobuf::internal::AssignDescriptors(std::__cxx11::basic_string

    Building Caffe2 failed when following the "Custom Anaconda Install" instructions.

    1. conda create -n caffe2 && source activate caffe2
    2. conda install -y protobuf (3.4) or conda install -y -c conda-forge protobuf (3.5.1)
    3. git clone --recursive ...
    4. mkdir build && cd build
    5. cmake -DUSE_CUDA=ON -DUSE_LEVELDB=ON -DCMAKE_PREFIX_PATH=~/Prog/anaconda2/envs/caffe2 -DCMAKE_INSTALL_PREFIX=~/Prog/anaconda2/envs/caffe2 ..
    6. make install
    7. compile failed at [75%]: undefined reference to 'google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'

    System information

    • Operating system: Ubuntu 16.04
    • Compiler version: gcc 5.4.0
    • CMake version: cmake 3.5.1
    • CMake arguments: cmake -DUSE_CUDA=ON -DUSE_LEVELDB=ON -DCMAKE_PREFIX_PATH=~/Prog/anaconda2/envs/caffe2 -DCMAKE_INSTALL_PREFIX=~/Prog/anaconda2/envs/caffe2 ..
    • Relevant libraries/versions (e.g. CUDA): cuda 8.0

    CMake summary output

    ******** Summary ********
    <please paste summary output here>
    

    [ 75%] Building CXX object caffe2/CMakeFiles/reshape_op_gpu_test.dir/operators/reshape_op_gpu_test.cc.o
    [ 75%] Linking CXX executable ../bin/reshape_op_gpu_test
    CMakeFiles/reshape_op_gpu_test.dir/operators/reshape_op_gpu_test.cc.o: In function 'caffe2::ReshapeOpGPUTest_testReshapeWithScalar_Test::TestBody()':
    reshape_op_gpu_test.cc:(.text+0x1725): undefined reference to 'google::protobuf::internal::fixed_address_empty_string[abi:cxx11]'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::internal::WireFormatLite::WriteBytes(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)'
    ../lib/libcaffe2_gpu.so: undefined reference to 'google::protobuf::MessageLite::SerializeAsString[abi:cxx11]() const'
    ../lib/libcaffe2.so: undefined reference to 'google::SetUsageMessage(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::Message::DebugString[abi:cxx11]() const'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::MessageFactory::InternalRegisterGeneratedFile(char const*, void (*)(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&))'
    ../lib/libcaffe2_gpu.so: undefined reference to 'google::protobuf::Message::ShortDebugString[abi:cxx11]() const'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::internal::WireFormatLite::WriteStringMaybeAliased(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)'
    ../lib/libcaffe2_gpu.so: undefined reference to 'google::protobuf::internal::ParseNamedEnum(google::protobuf::EnumDescriptor const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, int*)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::internal::WireFormatLite::ReadBytes(google::protobuf::io::CodedInputStream*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*)'
    ../lib/libcaffe2_gpu.so: undefined reference to 'google::protobuf::MessageLite::ParseFromString(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::Message::GetTypeName[abi:cxx11]() const'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::internal::OnShutdownDestroyString(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const*)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::io::CodedOutputStream::WriteStringWithSizeToArray(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, unsigned char*)'
    ../lib/libcaffe2_gpu.so: undefined reference to 'google::FlagRegisterer::FlagRegisterer<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >(char const*, char const*, char const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::Message::InitializationErrorString[abi:cxx11]() const'
    ../lib/libcaffe2_gpu.so: undefined reference to 'google::base::CheckOpMessageBuilder::NewString[abi:cxx11]()'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::internal::WireFormatLite::WriteBytesMaybeAliased(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::internal::AssignDescriptors(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::internal::MigrationSchema const*, google::protobuf::Message const* const*, unsigned int const*, google::protobuf::MessageFactory*, google::protobuf::Metadata*, google::protobuf::EnumDescriptor const**, google::protobuf::ServiceDescriptor const**)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::internal::WireFormatLite::WriteString(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::TextFormat::ParseFromString(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::Message*)'
    ../lib/libcaffe2.so: undefined reference to 'google::protobuf::MessageLite::SerializeToString(std::__cxx11::basic_string<char, std::char_traits, std::allocator >*) const'
    collect2: error: ld returned 1 exit status
    caffe2/CMakeFiles/reshape_op_gpu_test.dir/build.make:126: recipe for target 'bin/reshape_op_gpu_test' failed
    make[2]: *** [bin/reshape_op_gpu_test] Error 1
    CMakeFiles/Makefile2:1341: recipe for target 'caffe2/CMakeFiles/reshape_op_gpu_test.dir/all' failed
    make[1]: *** [caffe2/CMakeFiles/reshape_op_gpu_test.dir/all] Error 2
    Makefile:138: recipe for target 'all' failed
    make: *** [all] Error 2
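    A likely cause of these undefined references is mixing protobuf installations (conda headers with a system libprotobuf, or vice versa, across the C++11 ABI boundary). One hedged workaround, assuming the conda env lives at ~/Prog/anaconda2/envs/caffe2 as in the repro steps, is to point CMake explicitly at a single protobuf:

    ```shell
    # Sketch, not a verified fix: force every protobuf component (headers,
    # library, protoc) to come from the same conda env so the ABI matches.
    CONDA_ENV=$HOME/Prog/anaconda2/envs/caffe2
    cmake -DUSE_CUDA=ON -DUSE_LEVELDB=ON \
          -DCMAKE_PREFIX_PATH="$CONDA_ENV" \
          -DCMAKE_INSTALL_PREFIX="$CONDA_ENV" \
          -DPROTOBUF_PROTOC_EXECUTABLE="$CONDA_ENV/bin/protoc" ..
    "$CONDA_ENV/bin/protoc" --version   # should match the version conda installed
    ```

    Running this from a clean build directory avoids stale CMake cache entries from the earlier configure.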

  • TravisCI Overhaul

    Uncached build: https://travis-ci.org/lukeyeager/caffe2/builds/239677224 Cached build: https://travis-ci.org/lukeyeager/caffe2/builds/239686725

    Improvements:

    • Parallel builds everywhere
    • All builds use CCache for quick build times (help from https://github.com/pytorch/pytorch/pull/614, https://github.com/ccache/ccache/pull/145)
    • Run ctests when available (continuation of https://github.com/caffe2/caffe2/pull/550)
    • Upgraded from cuDNN v5 to v6
    • Fixed MKL build (by updating pkg version)
    • Fixed android builds (https://github.com/caffe2/caffe2/commit/b6f905a67b8cdc301203c08d5a598bb1ed6d1873#commitcomment-22404119)

    Things that are broken:

    • ~~Building NNPACK fails with no discernible error message (currently disabled entirely)~~
    • ~~Android builds continue to fail with existing error:~~
    • ~~OSX builds time-out:~~

    Summary

    | Before | After | Changes |
    | --- | --- | --- |
    | COMPILER=g++ | linux | without CUDA |
    | COMPILER=g++-5 | linux-gcc5 | without CUDA |
    | COMPILER=g++ | linux-cuda | updated to cuDNN v6 |
    | BLAS=MKL | linux-mkl | updated pkg version |
    | BUILD_TARGET=android | linux-android | |
    | COMPILER=clang++ | osx | |
    | BUILD_TARGET=ios | osx-ios | |
    | BUILD_TARGET=android | osx-android | |
    | QUICKTEST | GONE | |
    | COMPILER=g++-4.8 | GONE | |
    | COMPILER=g++-4.9 | GONE | |

  • make_mnist_db doesn't generate db files

    When running the tutorial MNIST.ipynb, the function GenerateDB() runs fine and no error is reported, but it does not generate the files mnist-train-nchw-leveldb or mnist-test-nchw-leveldb.

    While calling make_mnist_db directly from command line, it reported an error:

      Caffe2 flag error: Cannot convert argument to bool: --db
      Note that if you are passing in a bool flag, you need to explicitly specify it, like --arg=True or --arg True. Otherwise, the next argument may be inadvertently used as the argument, causing the above error.
      Caffe2 flag: illegal argument: --channel_first

    Any thoughts how to fix it?

    I am using Caffe2 on Windows Server 2016 with Python 2.7. Running the tutorial Basics.ipynb works.
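    The error message itself suggests the fix: the flag parser treats a bare bool flag as consuming the next token. A hedged example invocation (the flag names come from the error output, and the file names are the usual MNIST tutorial names; both are assumptions to adjust for your setup):

    ```shell
    # Give the bool flag an explicit value so "--db" is not swallowed as its
    # argument.
    make_mnist_db --channel_first=True --db leveldb \
        --image_file ./train-images-idx3-ubyte \
        --label_file ./train-labels-idx1-ubyte \
        --output_file ./mnist-train-nchw-leveldb
    ```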

  • onnx_onnx_c2.proto:383:5: Expected "required", "optional", or "repeated".

    If this is a build issue, please fill out the template below.

    System information

    • Operating system: Ubuntu 14.04
    • Compiler version: GCC 4.8.4
    • CMake version: 3.11.2
    • CMake arguments: No args
    • Relevant libraries/versions : CUDA 8.0 CuDNN v6.0.21

    CMake summary output

    ******** Summary ********
    -- Does not need to define long separately.
    -- std::exception_ptr is supported.
    -- NUMA is not available
    -- Turning off deprecation warning due to glog.
    -- Current compiler supports avx2 extention. Will build perfkernels.
    -- Caffe2: Found protobuf with new-style protobuf targets.
    -- Caffe2 protobuf include directory: /usr/include
    -- The BLAS backend of choice:Eigen
    -- Could NOT find NNPACK (missing: NNPACK_INCLUDE_DIR NNPACK_LIBRARY PTHREADPOOL_LIBRARY CPUINFO_LIBRARY) 
    -- Brace yourself, we are building NNPACK
    -- Found PythonInterp: /usr/bin/python (found version "2.7.6") 
    -- Caffe2: Cannot find gflags automatically. Using legacy find.
    -- Caffe2: Found gflags  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
    -- Caffe2: Cannot find glog automatically. Using legacy find.
    -- Caffe2: Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
    -- git Version: v0.0.0
    -- Version: 0.0.0
    -- Performing Test HAVE_STD_REGEX
    -- Performing Test HAVE_STD_REGEX
    -- Performing Test HAVE_STD_REGEX -- compiled but failed to run
    -- Performing Test HAVE_GNU_POSIX_REGEX
    -- Performing Test HAVE_GNU_POSIX_REGEX
    -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
    -- Performing Test HAVE_POSIX_REGEX
    -- Performing Test HAVE_POSIX_REGEX
    -- Performing Test HAVE_POSIX_REGEX -- success
    -- Performing Test HAVE_STEADY_CLOCK
    -- Performing Test HAVE_STEADY_CLOCK
    -- Performing Test HAVE_STEADY_CLOCK -- success
    -- Found lmdb    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
    -- Found LevelDB (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libleveldb.so)
    -- Found Snappy  (include: /usr/include, library: /usr/lib/libsnappy.so)
    -- Could NOT find Numa (missing: Numa_INCLUDE_DIR Numa_LIBRARIES) 
    CMake Warning at cmake/Dependencies.cmake:205 (message):
      Not compiling with NUMA.  Suppress this warning with -DUSE_NUMA=OFF
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Found CUDA: /usr/local/cuda-8.0 (found suitable exact version "8.0") 
    -- OpenCV found (/usr/local/share/OpenCV)
    CMake Warning at cmake/Dependencies.cmake:270 (find_package):
      By not providing "FindEigen3.cmake" in CMAKE_MODULE_PATH this project has
      asked CMake to find a package configuration file provided by "Eigen3", but
      CMake did not find one.
    
      Could not find a package configuration file provided by "Eigen3" with any
      of the following names:
    
        Eigen3Config.cmake
        eigen3-config.cmake
    
      Add the installation prefix of "Eigen3" to CMAKE_PREFIX_PATH or set
      "Eigen3_DIR" to a directory containing one of the above files.  If "Eigen3"
      provides a separate development package or SDK, be sure it has been
      installed.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Did not find system Eigen. Using third party subdirectory.
    -- Found PythonInterp: /usr/bin/python (found suitable version "2.7.6", minimum required is "2.7") 
    -- NumPy ver. 1.14.0 found (include: /usr/local/lib/python2.7/dist-packages/numpy/core/include)
    -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
    -- MPI support found
    -- MPI compile flags: -pthread
    -- MPI include path: /usr/lib/openmpi/include/openmpi/usr/lib/openmpi/include
    -- MPI LINK flags path: -L/usr/lib/openmpi/lib -pthread
    -- MPI libraries: /usr/lib/libmpi_cxx.so/usr/lib/libmpi.so/usr/lib/x86_64-linux-gnu/libdl.so/usr/lib/x86_64-linux-gnu/libhwloc.so
    CMake Warning at cmake/Dependencies.cmake:324 (message):
      OpenMPI found, but it is not built with CUDA support.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Found CUDA: /usr/local/cuda-8.0 (found suitable version "8.0", minimum required is "7.0") 
    -- Caffe2: CUDA detected: 8.0
    -- Found cuDNN: v6.0.21  (include: /usr/local/cuda-8.0/include, library: /usr/local/cuda-8.0/lib64/libcudnn.so)
    -- Automatic GPU detection returned 6.1.
    -- Added CUDA NVCC flags for: sm_61
    -- Could NOT find NCCL (missing: NCCL_INCLUDE_DIRS NCCL_LIBRARIES) 
    -- Could NOT find CUB (missing: CUB_INCLUDE_DIR) 
    -- Could NOT find Gloo (missing: Gloo_INCLUDE_DIR Gloo_LIBRARY) 
    -- MPI include path: /usr/lib/openmpi/include/openmpi/usr/lib/openmpi/include
    -- MPI libraries: /usr/lib/libmpi_cxx.so/usr/lib/libmpi.so/usr/lib/x86_64-linux-gnu/libdl.so/usr/lib/x86_64-linux-gnu/libhwloc.so
    -- CUDA detected: 8.0
    -- Found libcuda: /usr/lib/x86_64-linux-gnu/libcuda.so
    -- Found libnvrtc: /usr/local/cuda-8.0/lib64/libnvrtc.so
    CMake Warning at cmake/Dependencies.cmake:457 (message):
      mobile opengl is only used in android or ios builds.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    CMake Warning at cmake/Dependencies.cmake:533 (message):
      Metal is only used in ios builds.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
    -- GCC 4.8.4: Adding gcc and gcc_s libs to link line
    -- Include NCCL operators
    -- Including image processing operators
    -- Excluding video processing operators due to no opencv
    -- Excluding mkl operators as we are not using mkl
    -- Include Observer library
    -- Using lib/python2.7/dist-packages as python relative installation path
    -- Automatically generating missing __init__.py files.
    -- 
    -- ******** Summary ********
    -- General:
    --   CMake version         : 3.11.20180308
    --   CMake command         : /usr/local/bin/cmake
    --   Git version           : v0.8.1-1319-g2900223
    --   System                : Linux
    --   C++ compiler          : /usr/bin/c++
    --   C++ compiler version  : 4.8.4
    --   Protobuf compiler     : /usr/bin/protoc
    --   Protobuf include path : /usr/include
    --   Protobuf libraries    : /usr/lib/x86_64-linux-gnu/libprotobuf.so;-pthread
    --   BLAS                  : Eigen
    --   CXX flags             :  -Wno-deprecated -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wno-invalid-partial-specialization
    --   Build type            : Release
    --   Compile definitions   : 
    -- 
    --   BUILD_BINARY          : ON
    --   BUILD_DOCS            : OFF
    --   BUILD_PYTHON          : ON
    --     Python version      : 2.7.6
    --     Python includes     : /usr/include/python2.7
    --   BUILD_SHARED_LIBS     : ON
    --   BUILD_TEST            : ON
    --   USE_ATEN              : OFF
    --   USE_ASAN              : OFF
    --   USE_CUDA              : ON
    --     CUDA version        : 8.0
    --     CuDNN version       : 6.0.21
    --     CUDA root directory : /usr/local/cuda-8.0
    --     CUDA library        : /usr/lib/x86_64-linux-gnu/libcuda.so
    --     CUDA NVRTC library  : /usr/local/cuda-8.0/lib64/libnvrtc.so
    --     CUDA runtime library: /usr/local/cuda-8.0/lib64/libcudart.so
    --     CUDA include path   : /usr/local/cuda-8.0/include
    --     NVCC executable     : /usr/local/cuda-8.0/bin/nvcc
    --     CUDA host compiler  : /usr/bin/cc
    --   USE_EIGEN_FOR_BLAS    : 1
    --   USE_FFMPEG            : OFF
    --   USE_GFLAGS            : ON
    --   USE_GLOG              : ON
    --   USE_GLOO              : ON
    --   USE_LEVELDB           : ON
    --     LevelDB version     : 1.15
    --     Snappy version      : 1.1.0
    --   USE_LITE_PROTO        : OFF
    --   USE_LMDB              : ON
    --     LMDB version        : 0.9.10
    --   USE_METAL             : OFF
    --   USE_MKL               : 
    --   USE_MOBILE_OPENGL     : OFF
    --   USE_MPI               : ON
    --   USE_NCCL              : ON
    --   USE_NERVANA_GPU       : OFF
    --   USE_NNPACK            : ON
    --   USE_OBSERVERS         : ON
    --   USE_OPENCV            : ON
    --     OpenCV version      : 3.3.0
    --   USE_OPENMP            : OFF
    --   USE_PROF              : OFF
    --   USE_REDIS             : OFF
    --   USE_ROCKSDB           : OFF
    --   USE_THREADS           : ON
    --   USE_ZMQ               : OFF
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/eslam/caffe2/build
    

    Now, when executing sudo make install, I get the following errors:

    [ 15%] Running C++ protocol buffer compiler on /home/eslam/caffe2/build/third_party/onnx/onnx/onnx_onnx_c2.proto
    onnx_onnx_c2.proto:383:5: Expected "required", "optional", or "repeated".
    onnx_onnx_c2.proto:383:17: Missing field number.
    onnx_onnx_c2.proto:402:3: Expected "required", "optional", or "repeated".
    onnx_onnx_c2.proto:402:15: Missing field number.
    make[2]: *** [third_party/onnx/onnx/onnx_onnx_c2.pb.cc] Error 1
    make[1]: *** [third_party/onnx/CMakeFiles/onnx_proto.dir/all] Error 2
    make: *** [all] Error 2
    
  • [Windows] Bug fixes for MSVC

    • file_store_handler.cc: mkdir only accepts one argument and requires inclusion of <direct.h>
    • math.h: macro workaround does not work for integerIsPowerOf2 when prefixed with math namespace.
    • GpuBitonicSort.cuh: use std::integral_constant since nvcc ignores constexpr with MSVC (fixes #997)
    • pool_op_cudnn.cu: undefine IN and OUT macros defined in minwindef.h
    • logging.cc: Prefix glog logging levels with name since MSVC cannot use the abbreviated macros
  • cmake: stop including files from the install directory

    Here is the buggy behavior which this change fixes:

    • On the first configure with CMake, a system-wide benchmark installation is not found, so we use the version in third_party/ (see here)

    • On installation, the benchmark sub-project installs its headers to CMAKE_INSTALL_PREFIX (see here)

    • On a rebuild, CMake searches the system again for a benchmark installation (see https://github.com/caffe2/caffe2/issues/916 for details on why the first search is not cached)

    • CMake includes CMAKE_INSTALL_PREFIX when searching the system (docs)

    • Voila, a "system" installation of benchmark is found at CMAKE_INSTALL_PREFIX

    • On a rebuild, -isystem $CMAKE_INSTALL_PREFIX/include is added to every build target (see here). e.g:

      cd /caffe2/build/caffe2/binaries && ccache /usr/bin/c++    -I/caffe2/build -isystem /caffe2/third_party/googletest/googletest/include -isystem /caffe2/install/include -isystem /usr/include/opencv -isystem /caffe2/third_party/eigen -isystem /usr/include/python2.7 -isystem /usr/lib/python2.7/dist-packages/numpy/core/include -isystem /caffe2/third_party/pybind11/include -isystem /usr/local/cuda/include -isystem /caffe2/third_party/cub -I/caffe2 -I/caffe2/build_host_protoc/include  -fopenmp -std=c++11 -O2 -fPIC -Wno-narrowing -O3 -DNDEBUG   -o CMakeFiles/split_db.dir/split_db.cc.o -c /caffe2/caffe2/binaries/split_db.cc
      

    This causes two issues:

    1. Since the headers and libraries at CMAKE_INSTALL_PREFIX have a later timestamp than the built files, an unnecessary rebuild is triggered
    2. Outdated headers from the install directory are used during compilation, which can lead to strange build errors (usually fixable by rm -rf'ing the install directory)
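    The second issue can be reproduced in miniature without Caffe2 at all. The sketch below (temporary paths and a hypothetical header, purely for illustration) shows a stale "installed" header silently shadowing the in-tree one when the install directory comes first on the include path, as it does in the buggy compile line above:

    ```shell
    # Minimal repro of install-dir shadowing: two copies of version.h, the
    # "installed" one stale, with the install dir searched first (as in the bug).
    set -e
    d=$(mktemp -d)
    mkdir -p "$d/src" "$d/install/include"
    echo '#define CAFFE2_DEMO_VERSION 1' > "$d/src/version.h"              # in-tree, current
    echo '#define CAFFE2_DEMO_VERSION 2' > "$d/install/include/version.h"  # installed, stale
    cat > "$d/main.c" <<'EOF'
    #include <stdio.h>
    #include "version.h"
    int main(void) { printf("%d\n", CAFFE2_DEMO_VERSION); return 0; }
    EOF
    # The install dir comes first on the include path, as in the buggy build:
    cc -isystem "$d/install/include" -isystem "$d/src" -o "$d/demo" "$d/main.c"
    "$d/demo"   # prints 2: the stale installed header won
    ```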

    Possible solutions:

    • Stop searching the system for an install of benchmark, and always use the version in third_party/
    • Cache the initial result of the system-wide search for benchmark, so we don't accidentally pick up the installed version later
    • Hack CMake to stop looking for headers and libraries in the installation directory

    This PR is an implementation of the first solution. Feel free to close this and fix the issue in another way if you like.

  • [wip] Fix public protobuf interface - wip

    This is an ongoing fix to the protobuf issue, mainly to address the following:

    (1) we have random protobuf fixes trying to patch an already flaky system (for example, dual install from anaconda and brew). This diff aims to basically use standard packages as much as possible (protobuf cmake config files, or FindProtoBuf.cmake) and then enforce the build script to explicitly set paths.

    (2) We need protobuf to be in the public interface of Caffe2. This PR adds it.

    (3) We will most likely need a protobuf diagnostic tool / script. TBD.

    Firing a PR so that we can launch build tests.

  • Check system dependencies first

    This PR changes the cmake of Caffe2 to look for system dependencies before resorting to the submodules in third-party. Only googletest should logically be in third-party, the other libraries should ideally be installed as system dependencies by the user. This PR adds system dependency checks for Gloo, CUB, pybind11, Eigen and benchmark, as these were missing from the cmake files.

    In addition it removes the execution of git submodule update --init in cmake. This seems like bad behavior to me, it should be up to the user to download submodules and manage the git repository.

  • mpi_test.cc.o: undefined reference to symbol '_ZN3MPI8Datatype4FreeEv

    Hi,

    I am getting an error while running make in Caffe2. This is what it says:

    /usr/bin/ld: CMakeFiles/mpi_test.dir/mpi/mpi_test.cc.o: undefined reference to symbol '_ZN3MPI8Datatype4FreeEv'
    /usr/lib/libmpi_cxx.so.1: error adding symbols: DSO missing from command line
    collect2: error: ld returned 1 exit status
    caffe2/CMakeFiles/mpi_test.dir/build.make:100: recipe for target 'bin/mpi_test' failed
    make[2]: *** [bin/mpi_test] Error 1
    CMakeFiles/Makefile2:2518: recipe for target 'caffe2/CMakeFiles/mpi_test.dir/all' failed
    make[1]: *** [caffe2/CMakeFiles/mpi_test.dir/all] Error 2
    Makefile:138: recipe for target 'all' failed
    make: *** [all] Error 2

    Any idea how I can fix it? Any help will be appreciated.

    Thanks!!
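    A hedged guess based on the "DSO missing from command line" message: libmpi_cxx.so provides the symbol but was not passed to the linker. One workaround is to hand CMake the MPI C++ libraries explicitly (the paths below are the ones from the error output; verify them on your system):

    ```shell
    # Sketch: re-run cmake from a clean build dir, telling FindMPI exactly
    # which libraries to put on the link line.
    cmake -DMPI_CXX_LIBRARIES="/usr/lib/libmpi_cxx.so;/usr/lib/libmpi.so" ..
    ```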

    System information

    • Operating system: Ubuntu 16.04
    • Compiler version: gcc 5.4.0 20160609
    • CMake version: 3.5.1
    • CMake arguments:
    • CUDA 9.1
    • CuDNN 7.0.5

    CMake summary output

    ******** Summary ********
    -- General:
    --   CMake version         : 3.5.1
    --   CMake command         : /usr/bin/cmake
    --   Git version           : v0.8.1-1240-g8f41717
    --   System                : Linux
    --   C++ compiler          : /usr/bin/c++
    --   C++ compiler version  : 5.4.0
    --   Protobuf compiler     : /usr/bin/protoc
    --   Protobuf include path : /usr/include
    --   Protobuf libraries    : optimized;/usr/lib/x86_64-linux-gnu/libprotobuf.so;debug;/usr/lib/x86_64-linux-gnu/libprotobuf.so;-pthread
    --   BLAS                  : Eigen
    --   CXX flags             : -std=c++11 -O2 -fPIC -Wno-narrowing -Wno-invalid-partial-specialization
    --   Build type            : Release
    --   Compile definitions   :
    --
    --   BUILD_BINARY          : ON
    --   BUILD_DOCS            : OFF
    --   BUILD_PYTHON          : ON
    --     Python version      : 2.7.12
    --     Python library      : /usr/lib/x86_64-linux-gnu/libpython2.7.so
    --   BUILD_SHARED_LIBS     : ON
    --   BUILD_TEST            : ON
    --   USE_ATEN              : OFF
    --   USE_ASAN              : OFF
    --   USE_CUDA              : OFF
    --   USE_EIGEN_FOR_BLAS    : 1
    --   USE_FFMPEG            : OFF
    --   USE_GFLAGS            : ON
    --   USE_GLOG              : ON
    --   USE_GLOO              : ON
    --   USE_LEVELDB           : ON
    --     LevelDB version     : 1.18
    --     Snappy version      : 1.1.3
    --   USE_LITE_PROTO        : OFF
    --   USE_LMDB              : ON
    --     LMDB version        : 0.9.17
    --   USE_METAL             : OFF
    --   USE_MKL               :
    --   USE_MOBILE_OPENGL     : OFF
    --   USE_MPI               : ON
    --   USE_NCCL              : OFF
    --   USE_NERVANA_GPU       : OFF
    --   USE_NNPACK            : ON
    --   USE_OBSERVERS         : ON
    --   USE_OPENCV            : ON
    --     OpenCV version      : 2.4.9.1
    --   USE_OPENMP            : OFF
    --   USE_PROF              : OFF
    --   USE_REDIS             : OFF
    --   USE_ROCKSDB           : OFF
    --   USE_THREADS           : ON
    --   USE_ZMQ               : OFF
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/ubuntu/caffe2/build

    
    
  • nvcc fatal : Unsupported gpu architecture 'compute_75'

    System information

    • Operating system: Ubuntu 16.04
    • CMake version: 3.11.0
    • Relevant libraries/versions (e.g. CUDA): CUDA 9.0, CuDNN 7.1.3

    I compiled Caffe2 from source.
    

    Build output:

    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/address.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/buffer.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/context.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/device.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/pair.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/unbound_buffer.cc.o
    [ 23%] Linking CXX static library ../../../lib/libgloo.a
    [ 23%] Built target gloo
    [ 23%] Building NVCC (Device) object third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o
    nvcc fatal : Unsupported gpu architecture 'compute_75'
    CMake Error at gloo_cuda_generated_nccl.cu.o.Release.cmake:215 (message):
      Error generating /home/lyl/pytorch/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/./gloo_cuda_generated_nccl.cu.o

    third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/build.make:77: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o' failed
    make[2]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o] Error 1
    CMakeFiles/Makefile2:951: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/all' failed
    make[1]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/all] Error 2
    Makefile:140: recipe for target 'all' failed
    make: *** [all] Error 2
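    The message means this nvcc (CUDA 9.0) does not know the Turing architecture (compute_75) that the build's GPU detection or default arch list requested. A hedged workaround, assuming the build goes through PyTorch's setup as the log's paths suggest, is to restrict the architecture list to ones CUDA 9.0 supports before configuring:

    ```shell
    # Sketch: limit CUDA architectures to ones the 9.0 toolkit understands
    # ("6.1" is an example for a Pascal GPU), then rebuild from a clean
    # build directory.
    export TORCH_CUDA_ARCH_LIST="6.1"
    rm -rf build && python setup.py install
    ```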

  • Caffe2 python Conv op can not specify engine

    I am testing Conv with the 'depthwise_3x3' engine in Caffe2. My Caffe2 is installed from source. I constructed a one-layer network containing only one group convolution layer, with input size (1,100,600,600), kernel size (100,1,3,3), and group=100. However, when I specify the engine as 'depthwise_3x3', the speed is the same as with the 'cudnn' engine (or '', or anything else). It seems that the 'engine=' argument has no effect.

  • new instance caffe2::Predictor gets stuck

    I'm running the AICamera demo, and I swapped the bundled pb model for a ShuffleNet pb model that I had already tested with Caffe2's Python API. However, when I run the ShuffleNet model in the AICamera Android app, it always gets stuck at _predictor = new caffe2::Predictor(_initNet, _predictNet); Can anyone help?

  • Trojan horse: Fuerboos.C!cl

    Hi there,

    when building the latest release:

    I get the following severe warning from Windows: Trojan:Win32/Fuerboos.C!cl, flagged on caffe-master\build\CMakeFiles\3.12.3\CompilerIdC\a.exe

    Does anybody else experience this?

  • Error argument in predict

    Hi guys, Caffe2 is new to me, and I get this error while trying to run an example from the tutorial website.

    Caffe2 has been moved to https://github.com/pytorch/pytorch . Please post your issue at https://github.com/pytorch/pytorch/issues and include [Caffe2] at the beginning of your issue title.

Deep Scalable Sparse Tensor Network Engine (DSSTNE) is an Amazon-developed library for building Deep Learning (DL) machine learning (ML) models

Amazon DSSTNE: Deep Scalable Sparse Tensor Network Engine DSSTNE (pronounced "Destiny") is an open source software library for training and deploying

Dec 30, 2022
Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

Apache MXNet (incubating) for Deep Learning Apache MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to m

Dec 31, 2022
A library for creating Artificial Neural Networks, for use in Machine Learning and Deep Learning algorithms.

iNeural A library for creating Artificial Neural Networks, for use in Machine Learning and Deep Learning algorithms. What is a Neural Network? Work on

Apr 5, 2022
header only, dependency-free deep learning framework in C++14

The project may be abandoned since the maintainer(s) are just looking to move on. In the case anyone is interested in continuing the project, let us k

Dec 31, 2022
Caffe: a fast open framework for deep learning.

Caffe Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berke

Dec 30, 2022
TFCC is a C++ deep learning inference framework.


Dec 23, 2022
KSAI Lite is a deep learning inference framework of kingsoft, based on tensorflow lite

KSAI Lite English | 简体中文 KSAI Lite is a lightweight, flexible, high-performance, and easily extensible deep learning inference framework built on top of TensorFlow Lite, targeting multiple hardware platforms including mobile, embedded, and server. KSAI Lite is already used in Kingsoft Office's internal business and is gradually supporting Kingsoft

Dec 27, 2022
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.

Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices.

Jan 3, 2023
Plaidml - PlaidML is a framework for making deep learning work everywhere.

A platform for making deep learning work everywhere. Documentation | Installation Instructions | Building PlaidML | Contributing | Troubleshooting | R

Jan 7, 2023
CubbyDNN - Deep learning framework using C++17 in a single header file

CubbyDNN CubbyDNN is C++17 implementation of deep learning. It is suitable for deep learning on limited computational resource, embedded systems and I

Aug 16, 2022
High performance, easy-to-use, and scalable machine learning (ML) package, including linear model (LR), factorization machines (FM), and field-aware factorization machines (FFM) for Python and CLI interface.

What is xLearn? xLearn is a high performance, easy-to-use, and scalable machine learning package that contains linear model (LR), factorization machin

Dec 23, 2022
A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.

Website | Documentation | Tutorials | Installation | Release Notes CatBoost is a machine learning method based on gradient boosting over decision tree

Dec 31, 2022
Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.

This is the Vowpal Wabbit fast online learning code. Why Vowpal Wabbit? Vowpal Wabbit is a machine learning system which pushes the frontier of machin

Dec 30, 2022
Deep Learning in C Programming Language. Provides an easy way to create and train ANNs.

cDNN is a Deep Learning Library written in C Programming Language. cDNN provides functions that can be used to create Artificial Neural Networks (ANN)

Dec 24, 2022
Triton - a language and compiler for writing highly efficient custom Deep-Learning primitives.

Triton - a language and compiler for writing highly efficient custom Deep-Learning primitives.

Dec 26, 2022
tutorial on how to train deep learning models with c++ and dlib.

Dlib Deep Learning tutorial on how to train deep learning models with c++ and dlib. usage git clone https://github.com/davisking/dlib.git mkdir build

Dec 21, 2021
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.

TensorRT Open Source Software This repository contains the Open Source Software (OSS) components of NVIDIA TensorRT. Included are the sources for Tens

Jan 4, 2023
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

CNTK Chat Windows build status Linux build status The Microsoft Cognitive Toolkit (https://cntk.ai) is a unified deep learning toolkit that describes

Dec 23, 2022