Edge ML Library - High-performance Compute Library for On-device Machine Learning Inference


中文介绍 (Chinese introduction)


Edge ML Library (EMLL) offers optimized basic routines, such as general matrix multiplication (GEMM) and quantization, to speed up machine learning (ML) inference on ARM-based devices. EMLL supports the fp32, fp16 and int8 data types. EMLL accelerates the on-device NMT, ASR and OCR engines of Youdao, Inc.

Features

Performance-Oriented Design

The matrix-multiplication routines are heavily optimized for the matrix shapes common in on-device ML tasks, including "skinny" ones. The matrix-multiplication kernels are tuned for specific CPUs, with a large portion written in inline assembly.

Here are benchmarks of SGEMM on two machines[1]:

[Benchmark charts: ARMv8-A Cortex-A35, 4 threads (test1); ARMv8-A Cortex-A53, 4 threads (test2)]

[1] The GEMM computed is C[MxN] = A[MxK] B[KxN]. For each test case, the better result of the all-row-major and all-column-major configurations is reported.

Facile Interface

Data and parameters are passed directly, without wrapper structures. Matrices and arrays are passed as a base address plus dimensions. GEMM parameters that are seldom used in on-device inference, such as the leading dimensions LDA, LDB and LDC, are excluded from the interface. There is no dependency on any third-party compute library.
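
For illustration, here is a minimal sketch of a single-precision GEMM call. The parameter order follows the API table in the EMLL APIs section below (matrix orders, matrix addresses, M, N, K, beta, number of threads); the exact prototype and header are assumptions here and should be checked against Usage_EN.md.

    #include <vector>

    /* Assumed prototype, inferred from the parameter table in the EMLL APIs
       section; verify against Usage_EN.md and the EMLL headers before use. */
    extern "C" int sgemm(int a_rowmajor, int b_rowmajor,
                         const float *A, const float *B, float *C,
                         int M, int N, int K, float beta, int num_threads);

    void fp32_gemm_example() {
        const int M = 8, N = 400, K = 100;           /* a typical "skinny" shape */
        std::vector<float> A(M * K), B(K * N), C(M * N);
        /* ... fill A and B ... */

        /* C[MxN] = A[MxK] * B[KxN]; both sources row-major, single thread */
        int status = sgemm(1, 1, A.data(), B.data(), C.data(),
                           M, N, K, 0.0f, 1);
        (void)status;                                /* non-zero indicates an error */
    }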

Extensibility

EMLL abstracts the core structure of CPU-based high-performance matrix-multiplication algorithms, as well as the bias and quantization functions, into general macros (see the files under include/common) that can be applied to a variety of processors. When developing for a new architecture, these macros save a great deal of coding work.

EMLL APIs

EMLL provides a series of C functions. See Usage_EN.md for details.

Type | Name | Parameters
Matrix Multiplication | data_type + "gemm" | matrix orders, addresses of matrices, M, N, K, beta, number of threads
Fully-connected Layer (fp32) | "fc" | addresses of src/weight/bias/output, dimensions M/K/N, orders of source matrices, (number of threads)
Quantization | "quantize_" + "symmetric"/"asymmetric" + input_type + output_type | input array, output array, (zero point), scale, size of array, input range
Requantization | "requantize_" + "symmetric"/"asymmetric" + "_XtoY" | input array, output array, (zero point), output scale, size of array, input range
Bias | "bias" + data_type | matrix to be biased, scalar bias applied to all elements, vector bias along the major direction, vector bias along the minor direction, dimensions of the matrix
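
To illustrate how these routines chain together for int8 inference, here is a hedged sketch of a symmetric quantized multiplication (quantize, then s8s32gemm, then dequantize). The call pattern mirrors the user code in the comments below; the prototypes written out here are assumptions and should be verified against Usage_EN.md.

    #include <cstdint>
    #include <vector>

    /* Assumed prototypes, inferred from the parameter table above and from the
       calls shown in the comments below; verify against the EMLL headers. */
    extern "C" {
    int quantize_symmetric_f32_s8(const float *input, int8_t *output, float *scale,
                                  int size, float range_min, float range_max);
    int s8s32gemm(int a_rowmajor, int b_rowmajor, const int8_t *A, const int8_t *B,
                  int32_t *C, int M, int N, int K, float beta, int num_threads);
    void dequantize_symmetric_f32_s32(const int32_t *input, float *output,
                                      float scale, int size);
    }

    void int8_gemm_example(const float *A, const float *B, float *C,
                           int M, int N, int K) {
        std::vector<int8_t>  a_q(M * K), b_q(K * N);
        std::vector<int32_t> c_q(M * N);
        float scale_a = 0.0f, scale_b = 0.0f;

        /* Quantize both fp32 sources to int8; the trailing (0, -1) arguments
           mirror the calls shown in the comments below. */
        quantize_symmetric_f32_s8(A, a_q.data(), &scale_a, M * K, 0, -1);
        quantize_symmetric_f32_s8(B, b_q.data(), &scale_b, K * N, 0, -1);

        /* int8 x int8 -> int32 GEMM; both sources row-major, single thread */
        int status = s8s32gemm(1, 1, a_q.data(), b_q.data(), c_q.data(),
                               M, N, K, 0.0f, 1);
        if (status == 0) {
            /* Scale the int32 accumulators back to fp32 */
            dequantize_symmetric_f32_s32(c_q.data(), C, scale_a * scale_b, M * N);
        }
    }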

Supported Architectures and Data Types

Target CPU | Matrix Multiplication | Bias | Quantization | Requantization
ARMv7a 32-bit | fp32 -> fp32, (u)int8 -> (u)int32 | fp32, int32 | fp32 -> (u)int8/(u)int16 | int32 -> (u)int8/(u)int16, int16 -> (u)int8
ARMv8a 64-bit | fp32 -> fp32, (u)int8 -> (u)int32, fp16 -> fp16 | fp32, fp16, int32 | fp32 -> (u)int8/(u)int16 | int32 -> (u)int8/(u)int16, int16 -> (u)int8

Supported OS: Linux & Android

Supported Compilers: GCC & Clang

Future Plan

Depending on business requirements, EMLL may support on-device GPUs and NPUs in the future, along with an expanded set of functions.

License

Apache 2.0

Reference

Eigen: https://eigen.tuxfamily.org

OpenBLAS: https://github.com/xianyi/OpenBLAS

Comments
  • How can I solve the following cross-compilation error when building EMLL?

    /EMLL/src/arm_neon/ARMCompareAndSwap.c:1:0: error: invalid feature modifier in '-march=armv8.2-a+dotprod+fp16'

    CMakeFiles/eml-armneon.dir/build.make:62: recipe for target 'CMakeFiles/eml-armneon.dir/src/arm_neon/ARMCompareAndSwap.c.o' failed
    make[2]: *** [CMakeFiles/eml-armneon.dir/src/arm_neon/ARMCompareAndSwap.c.o] Error 1
    CMakeFiles/Makefile2:109: recipe for target 'CMakeFiles/eml-armneon.dir/all' failed
    make[1]: *** [CMakeFiles/eml-armneon.dir/all] Error 2
    Makefile:129: recipe for target 'all' failed
    make: *** [all] Error 2

  • On the A53 platform, EMLL is slower than Arm Compute Library

    I tried the following pipeline on Android aarch64 (A53): quantize_symmetric_f32_s8, then s8s32gemm, requantize_symmetric_32to8, s8s32gemm, requantize_symmetric_32to8, ..., looping through several layers of s8s32gemm and requantize_symmetric_32to8, and finally dequantize_symmetric_f32_s32. The matrices are C[m x n] = A[m x k] x B[k x n], with shapes roughly m=8, k=100, n=400 and m=8, k=400, n=100. On the A53 it takes about twice as long as ACL. On the A76, it is a bit faster than ACL for most of the matrices, which is great. I can see that there are different code optimizations for the A35, A53 and A7x.

  • On the correspondence with GEMM in BLAS

    Thank you very much for sharing this work. When replacing OpenBLAS with EMLL, I found that the two libraries' parameters differ somewhat. Could you give some advice on how to migrate the parameters, or provide a simple README? That would also help promote EMLL.

    The main difference seems to be that EMLL has no Transpose option; the other parameters should have the same meaning?

    EMLL:
    a_rowmajor | storage order of source matrix A; non-zero means row-major
    b_rowmajor | storage order of source matrix B; non-zero means row-major
    A | address of source matrix A
    B | address of source matrix B
    C | address of output matrix C
    M | number of rows of matrix A
    N | number of columns of matrix B
    K | number of columns of A; must equal the number of rows of B
    beta | pre-multiplying factor applied to matrix C
    num_threads | number of threads available for parallel execution

    OpenBLAS:
    int an = a->dimSize[0]; int am = a->dimSize[1];
    int bn = b->dimSize[0]; int bm = b->dimSize[1];
    int cn = c->dimSize[0]; int cm = c->dimSize[1];
    GEMM(CblasRowMajor, CblasNoTrans, CblasNoTrans, cn, cm, am, alpha, (DTYPE*)a->data, am, (DTYPE*)b->data, bm, beta, (DTYPE*)c->data, cm)
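
    For reference, the row-major, no-transpose case maps as in the sketch below. It assumes EMLL's parameter order from the list above; because EMLL's GEMM has no alpha parameter, alpha has to be applied manually (for example by pre-scaling A) before the call.

      /* Assumed EMLL prototype (see the parameter list above). */
      extern "C" int sgemm(int a_rowmajor, int b_rowmajor,
                           const float *A, const float *B, float *C,
                           int M, int N, int K, float beta, int num_threads);

      /* Sketch of the equivalent of
         cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                     cn, cm, am, 1.0f, a, am, b, bm, beta, c, cm),
         i.e. C[cn x cm] = A[cn x am] * B[am x cm] + beta * C, with alpha == 1. */
      int emll_sgemm_rowmajor_notrans(const float *a, const float *b, float *c,
                                      int cn, int cm, int am,
                                      float beta, int num_threads) {
          /* M = cn (rows of C and A), N = cm (cols of C and B), K = am (cols of A) */
          return sgemm(1, 1, a, b, c, cn, cm, am, beta, num_threads);
      }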

  • Using EMLL in a multi-threaded process increases memory usage

    A process creates four threads: thread-1 runs EMLL's gemm (computed with 1 thread, i.e. the last gemm parameter is 1), while thread-2, thread-3 and thread-4 just sleep. This uses more memory than a process that creates only one thread: thread-1 running the same single-threaded gemm. The extra memory comes from EMLL: by my measurement, each additional thread adds about 768 KB. Is this related to GEMM_STATIC_BUFFER in CommonDriver.h, which allocates exactly 768 KB (1024 x 192 x 4 / 1024)? How can I solve this? Threads 2/3/4 do not need EMLL's gemm at all; how can I avoid this extra memory usage?

  • EMLL uses about 10 MB more memory than the OpenBLAS + RUY approach

    1. Background: I adapted EMLL to the ctranslator2 framework and ran a translation demo (int8), comparing the speed and memory usage of inference based on EMLL against OpenBLAS + RUY. EMLL version: https://github.com/netease-youdao/EMLL/issues/9#issuecomment-939259322, with the patch from https://github.com/netease-youdao/EMLL/issues/8#issuecomment-903630259 already applied.

    2. C++ binary demo: (1) EMLL matrix computation: memory 102 MB, latency about 380-500 ms; (2) OpenBLAS + RUY matrix computation: memory 107 MB, latency about 680-720 ms.

    3. Qt integration: Running the binary demo alone shows no problem in either speed or memory, but after integration into the Qt application, the EMLL-based build uses about 10 MB more memory than the OpenBLAS + RUY build, even though everything except the matrix-computation part (the rest of the program and the inputs) is identical.

    What could be causing this?

  • sgemm results are inconsistent with OpenBLAS, making the inference results far off

    As the title says: the results computed with EMLL's sgemm differ slightly from those of OpenBLAS's cblas_sgemm, but the difference is enough to make the model's inference output incorrect. However, when I use EMLL's dynamic quantization followed by s8s32gemm and a final dequantization, the inference results are correct. What is the difference between the two approaches? The code is as follows:

      enum QuantType
      {
        NO_QUANT = 0,
        SYMMETRIC,
        ASYMMETRIC
      };
    
    inline int emll_s8s32gemm(bool transpose_a, bool transpose_b,
                                dim_t m, dim_t n, dim_t k,
                                const int8_t *a,
                                const int8_t *b,
                                float beta,
                                int32_t *c)
      {
        int status;
        if (!transpose_a && !transpose_b)
        {
          status = s8s32gemm(0, 0, b, a, c, n, m, k, beta, 0);
        }
        else if (transpose_a && !transpose_b)
        {
          status = s8s32gemm(0, 1, b, a, c, n, m, k, beta, 0);
        }
        else if (!transpose_a && transpose_b)
        {
          status = s8s32gemm(1, 0, b, a, c, n, m, k, beta, 0);
        }
        else // transpose_a && transpose_b
        {
          status = s8s32gemm(1, 1, b, a, c, n, m, k, beta, 0);
        }
    
        return status;
      }
    
      inline int emll_u8u32gemm(bool transpose_a, bool transpose_b,
                                dim_t m, dim_t n, dim_t k,
                                const uint8_t *a,
                                const uint8_t *b,
                                float beta,
                                uint32_t *c)
      {
        int status;
        if (!transpose_a && !transpose_b)
        {
          status = u8u32gemm(0, 0, b, a, c, n, m, k, beta, 0);
        }
        else if (transpose_a && !transpose_b)
        {
          status = u8u32gemm(0, 1, b, a, c, n, m, k, beta, 0);
        }
        else if (!transpose_a && transpose_b)
        {
          status = u8u32gemm(1, 0, b, a, c, n, m, k, beta, 0);
        }
        else // transpose_a && transpose_b
        {
          status = u8u32gemm(1, 1, b, a, c, n, m, k, beta, 0);
        }
    
        return status;
      }
    
      int emll_sgemm(bool transpose_a, bool transpose_b,
                     dim_t m, dim_t n, dim_t k,
                     float alpha,
                     const float *a,
                     const float *b,
                     float beta,
                     float *c,
                     QuantType quant_type)
      {
        int status;
    
        float *a_f = nullptr;
        if (alpha != 1.0f)
        {
          a_f = static_cast<float *>(allocator.allocate(m * k * sizeof(float)));
          cpu::parallel_for(0, m * k, cpu::GRAIN_SIZE / 2, [&](dim_t begin, dim_t end) {
            for (dim_t i = begin; i < end; ++i)
            {
              a_f[i] = static_cast<float>(alpha * a[i]);
            }
          });
        }
    
        if (quant_type == QuantType::NO_QUANT) // This path gives incorrect results!!!
        {
          if (!transpose_a && !transpose_b)
          {
            // std::cout << "!!! !transpose_a && !transpose_b" << std::endl;
            if (a_f != nullptr)
            {
              status = sgemm(0, 0, b, a_f, c, n, m, k, beta, 0);
            }
            else
            {
              status = sgemm(0, 0, b, a, c, n, m, k, beta, 0);
            }
          }
          else if (transpose_a && !transpose_b)
          {
            // std::cout << "@@@ transpose_a && !transpose_b" << std::endl;
            if (a_f != nullptr)
            {
              status = sgemm(0, 1, b, a_f, c, n, m, k, beta, 0);
            }
            else
            {
              status = sgemm(0, 1, b, a, c, n, m, k, beta, 0);
            }
          }
          else if (!transpose_a && transpose_b)
          {
            // std::cout << "### !transpose_a && transpose_b" << std::endl;
            if (a_f != nullptr)
            {
              status = sgemm(1, 0, b, a_f, c, n, m, k, beta, 0);
            }
            else
            {
              status = sgemm(1, 0, b, a, c, n, m, k, beta, 0);
            }
          }
          else // transpose_a && transpose_b
          {
            // std::cout << "$$$ transpose_a && transpose_b" << std::endl;
            if (a_f != nullptr)
            {
              status = sgemm(1, 1, b, a_f, c, n, m, k, beta, 0);
            }
            else
            {
              status = sgemm(1, 1, b, a, c, n, m, k, beta, 0);
            }
          }
        }
        else if (quant_type == QuantType::SYMMETRIC)
        {
          int8_t *const a_s = static_cast<int8_t *>(allocator.allocate(m * k * sizeof(int8_t)));
          int8_t *const b_s = static_cast<int8_t *>(allocator.allocate(n * k * sizeof(int8_t)));
          int32_t *const c_qs = static_cast<int32_t *>(allocator.allocate(m * n * sizeof(int32_t)));
    
          float scale_a, scale_b;
    
          if (a_f != nullptr)
          {
            quantize_symmetric_f32_s8(a_f, a_s, &scale_a, m * k, 0, -1);
          }
          else
          {
            quantize_symmetric_f32_s8(a, a_s, &scale_a, m * k, 0, -1);
          }
    
          quantize_symmetric_f32_s8(b, b_s, &scale_b, n * k, 0, -1);
    
          status = emll_s8s32gemm(transpose_a, transpose_b,
                                  m, n, k, a_s, b_s, beta, c_qs);
          if (status != 0)
          {
            fprintf(stderr, "s8s32gemm returns error code %d\n", status);
          }
          else
          {
            dequantize_symmetric_f32_s32(c_qs, c, scale_a * scale_b, m * n);
          }
    
          allocator.free(a_s);
          allocator.free(b_s);
          allocator.free(c_qs);
        }
        else // ASYMMETRIC  
        {
          uint8_t *const a_u = static_cast<uint8_t *>(allocator.allocate(m * k * sizeof(uint8_t)));
          uint8_t *const b_u = static_cast<uint8_t *>(allocator.allocate(n * k * sizeof(uint8_t)));
          int32_t *const c_qu = static_cast<int32_t *>(allocator.allocate(m * n * sizeof(int32_t)));
    
          uint32_t *const a_sum = (uint32_t *)(allocator.allocate(m * sizeof(uint32_t)));
          uint32_t *const b_sum = (uint32_t *)(allocator.allocate(n * sizeof(uint32_t)));
    
          float scale_a, scale_b;
          uint8_t zero_point_a, zero_point_b;
    
          if (a_f != nullptr)
          {
            quantize_asymmetric_f32_u8(a_f, a_u, &zero_point_a, &scale_a, m * k, 0, -1);
          }
          else
          {
            quantize_asymmetric_f32_u8(a, a_u, &zero_point_a, &scale_a, m * k, 0, -1);
          }
    
          quantize_asymmetric_f32_u8(b, b_u, &zero_point_b, &scale_b, n * k, 0, -1);
    
          status = emll_u8u32gemm(transpose_a, transpose_b,
                                  m, n, k, a_u, b_u, beta, (uint32_t*)c_qu);
    
          if (status != 0)
          {
            fprintf(stderr, "u8u32gemm returns error code %d\n", status);
          }
          else
          {
            /* sum row/col of source matrices (along K dim) */
            u8u32_sum(a_u, (uint32_t*)(a_sum), m, k, 0);
            u8u32_sum(b_u, (uint32_t*)(b_sum), k, n, 1);
            /* bias the result of 8->32 bit GEMM */
            bias_int32_t(c_qu,
                         (int32_t)zero_point_a * (int32_t)zero_point_b * (int32_t)k,
                         (int32_t *)(a_sum), -(int32_t)zero_point_b,
                         (int32_t *)(b_sum), -(int32_t)zero_point_a, m, n);
            /* dequantize the result */
            /* dequant(input_addr, output_addr, scale, array_length) */
            dequantize_symmetric_f32_s32(c_qu, c, scale_a * scale_b, m * n);
          }
    
          allocator.free(a_u);
          allocator.free(b_u);
          allocator.free(c_qu);
          allocator.free(a_sum);
          allocator.free(b_sum);
        }
    
        if (a_f != nullptr)
        {
          allocator.free(a_f);
        }
    
        return status;
      }
    
    
    
  • How can a particular GEMM optimization be selectively enabled when building EMLL?

    According to the EMLL introduction, GEMM is optimized mainly through three techniques: blocking, rearrangement (packing), and assembly optimization. How can I selectively enable one or several of these techniques?

    I ask because in my own testing, the same data works fine on the device in a standalone demo, but after being integrated into the project it crashes (core dump) on the device. Analysis suggests the crash is most likely caused by insufficient memory, so I suspect that some of the three GEMM optimization techniques may be particularly memory-hungry.

  • Question about a build problem

    I ran into the following problem while building; how can it be resolved?

    /data/home/.../EMLL/src/arm_neon/ARMCpuType.c:1:0: error: unknown value ‘armv8.2-a+dotprod+fp16’ for -march /*****************************************************************************/

    /data/home/.../EMLL/src/arm_neon/ARMCompareAndSwap.c:1:0: error: unknown value ‘armv8.2-a+dotprod+fp16’ for -march /*****************************************************************************/

    CMakeFiles/eml-armneon.dir/build.make:89: recipe for target 'CMakeFiles/eml-armneon.dir/src/arm_neon/ARMCpuType.c.o' failed
    make[2]: *** [CMakeFiles/eml-armneon.dir/src/arm_neon/ARMCpuType.c.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    CMakeFiles/eml-armneon.dir/build.make:75: recipe for target 'CMakeFiles/eml-armneon.dir/src/arm_neon/ARMCompareAndSwap.c.o' failed
    make[2]: *** [CMakeFiles/eml-armneon.dir/src/arm_neon/ARMCompareAndSwap.c.o] Error 1
    CMakeFiles/Makefile2:88: recipe for target 'CMakeFiles/eml-armneon.dir/all' failed
    make[1]: *** [CMakeFiles/eml-armneon.dir/all] Error 2
    Makefile:135: recipe for target 'all' failed
    make: *** [all] Error 2
