VeriSilicon Tensor Interface Module for OpenVX

TIM-VX - Tensor Interface Module for OpenVX

TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on OpenVX-enabled ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, TensorFlow Lite, MLIR, TVM and more.

Main Features

  • Over 130 internal operators with rich format support for both quantized and floating-point data
  • Simplified binding API calls to create Tensors and Operations (see the sketch after this list)
  • Dynamic graph construction with shape inference support
  • Built-in custom layer extensions
  • A set of utility functions for debugging
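
For a first impression of the binding API, here is a minimal sketch that builds a graph with a single Relu operation. It is an illustrative, untested snippet based on the tim::vx C++ headers in this repository; the shape and sample data are arbitrary.

#include <vector>

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/ops/activations.h"
#include "tim/vx/tensor.h"

int main() {
  // A context owns the device; a graph is built on top of it.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Describe a 1x4 float32 input tensor and a matching output tensor.
  tim::vx::ShapeType shape({4, 1});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  // Bind a single Relu operation between the two tensors.
  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  (*relu).BindInput(input).BindOutput(output);

  // Compile once, then run as many times as needed.
  if (!graph->Compile()) return -1;

  std::vector<float> in_data = {-1.0f, 0.0f, 0.5f, 2.0f};
  std::vector<float> out_data(in_data.size());
  input->CopyDataToTensor(in_data.data(), in_data.size() * sizeof(float));
  if (!graph->Run()) return -1;
  output->CopyDataFromTensor(out_data.data());
  return 0;
}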

Roadmap

The TIM-VX roadmap will be updated here in the future.

Get started

Build and Run

TIM-VX uses the Bazel build system by default. Install Bazel first to get started.

TIM-VX needs to be compiled and linked against the VeriSilicon OpenVX SDK, which provides the related header files and pre-compiled libraries. A default linux-x86_64 SDK containing the PC simulation environment is provided. Platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX

bazel build libtim-vx.so

To run the LeNet sample

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

Get familiar with OpenVX spec

To develop with TIM-VX, you first need to get familiar with the OpenVX API and the OpenVX NN Extension API. Please head over to Khronos to read the specs.

Owner
VeriSilicon, INC.
A leading Silicon Platform as a Service company
Comments
  • Multiple downstream outputs bug

    Sorry for making duplicates. I'm not sure whether the bug is in vx-delegate or in TIM-VX. https://github.com/VeriSilicon/tflite-vx-delegate/issues/32

    Hi, I think I found a bug in the vx-delegate runtime.

    setup: A311D + Android 9 + TensorFlow Lite with vx-delegate

    Model: a detector that outputs multiple things: bounding boxes, landmarks, probability scores and feature vectors.

    Problem: the landmarks outputs are garbage. How the model produces landmarks:

    input image -> backbone -> FPN -> Conv layers that produce features (OUTPUT 1) -> Conv layers that produce landmarks (OUTPUT 2)

    So, if I have two outputs that are downstream one after another, the second output is not calculated and I get garbage. The problem only occurs with the INT8 graph; if I use the FP32 graph, it works fine with vx-delegate.

    On x86 with standard TFLite (and xnnpack) everything works fine with both INT8 and FP32 graphs.

    Update: being downstream is not important; even if the landmarks branch of the graph has only the landmarks as outputs, I get garbage. I don't know why, but the part of the graph with landmarks is not calculated on the NPU.

    What could be the problem? Thanks.

  • Is SpatialTransformer on rv1126 supported?

    I am trying to run the SpatialTransformer (MXNet) operator on an RV1126 using Tengine, but the status of the "SpatialTransformer" operator in TIM-VX is "InternalOnly", and there is no TIM-VX API implementation. So I can't find a way to add SpatialTransformer NPU support to Tengine. I am wondering:

    1. Is the SpatialTransformer (MXNet) operator supported on the RV1126 NPU?
    2. If it is supported, how can I add it to Tengine?
  • Segmentation fault

    Hi, I have an A311D board running Ubuntu (from Khadas). My end goal is to compile TFLite with vx-delegate support. What is the best way to do this?

    Also, I'm trying to compile TIM-VX (libtim-vx.so). The Bazel build seems to be broken, so I tried CMake. CMake works fine: it compiles and links all targets, but when I run the unit tests (under gdb) I get a segfault from *** _LoadStates() in libGAL.so. What causes this?

  • TVM RPC test failed with message: "PLS isn't existed"

    I don't know if it's the right place to ask questions about your TVM fork, but I cannot raise issues in that repo.

    I followed the guide from README.VSI.md to build TVM (on host, using x86_64_linux simulation drivers provided here) and TVM runtime (on target, using vendor-provided VIP NPU drivers), and ran the tests in test_vsi_npu, but I got these results:

    logs from TVM C++ RPC tool:

    VsiNpuModule::LoadFromBinary
    LoadFromBinary: nbg size = 593344
    LoadFromBinary: input size = 1
    LoadFromBinary: output size = 1
    VsiNpuModule : DeSerializeTensorSpec
    VsiNpuModule : DeSerializeTensorSpec2
    VsiNpuModule : DeSerializeTensorSpec
    VsiNpuModule : DeSerializeTensorSpec2
    [22:31:35] /home/nullko/Documents/tvm-vsi_npu/apps/cpp_rpc/rpc_env.cc:130: Load module from /home/ubuntu/workspace/rpc/model.so ...
    VsiNpuModule::GetFunction: _lookup_linked_param
    VsiNpuModule::GetFunction: return early
    VsiNpuModule::GetFunction: _lookup_linked_param
    VsiNpuModule::GetFunction: return early
    VsiNpuModule::GetFunction: _lookup_linked_param
    VsiNpuModule::GetFunction: return early
    VsiNpuModule::GetFunction: _lookup_linked_param
    VsiNpuModule::GetFunction: return early
    VsiNpuModule::GetFunction: tvmgen_default_vsi_npu_0
    [     1] PLS isn't existed
    E [compute_node:379]Create node[0] NBG fail
    Process Graph: 0 ms or 0 us
    

    It seems that TVM is able to compile the NBG on the host, but the target runtime cannot execute it. I wonder what caused the "PLS isn't existed" issue; is it because I didn't set some environment variables on the target platform?

    Or maybe your TVM fork is still under development and not ready to be used yet?

  • Handle tensor double free error on x86_64 simulator driver

    I am trying to use vsi_nn_AddTensorFromHandle to create a tensor from a buffer allocated by cv::Mat. Every time the program exits, a double free error is reported by OpenCV. It seems that the passed buffer is freed by the OpenVX driver (the driver is not supposed to free the buffer since it's a handle) when the context is deinitialized, so when OpenCV tries to free the same buffer, it causes a double free error.

    I only encountered this problem with the x86_64 simulator driver; when the program runs on the target device using the vendor-provided NPU driver, everything works well.

    Here is a short program to reproduce this error:

    #include <iostream>
    #include <opencv2/core.hpp>
    #include <vsi_nn_pub.h>
    
    int main(int argc, char* argv[]) {
        vsi_status err = VSI_SUCCESS;
    
        // Buffers owned by OpenCV; the driver must not free them.
        auto matIn = cv::Mat(4, 4, CV_32F);
        auto matOut = cv::Mat(4, 4, CV_32F);
    
        cv::randu(matIn, -1.0F, 1.0F);
        matOut.setTo(0.0F);
    
        // Create a context and a graph with up to 2 tensors and 1 node.
        auto context = vsi_nn_CreateContext();
        auto graph = vsi_nn_CreateGraph(context, 2, 1);
        vsi_nn_SetGraphInputs(graph, nullptr, 1);
        vsi_nn_SetGraphOutputs(graph, nullptr, 1);
    
        // 4x4 float32 tensor attributes, no quantization, not virtual.
        vsi_nn_tensor_attr_t attr = {};
        attr.dtype.fmt = VSI_NN_DIM_FMT_NCHW;
        attr.dim_num = 4;
        attr.size[0] = 4;
        attr.size[1] = 4;
        attr.size[2] = 1;
        attr.size[3] = 1;
        attr.dtype.vx_type = VSI_NN_TYPE_FLOAT32;
        attr.dtype.qnt_type = VSI_NN_QNT_TYPE_NONE;
        attr.is_const = 0;
        attr.vtl = 0;
    
        // Wrap the cv::Mat buffers as handle tensors (zero-copy).
        auto tensorIn = vsi_nn_AddTensorFromHandle(
            graph, VSI_NN_TENSOR_ID_AUTO, &attr, matIn.data);
        auto tensorOut = vsi_nn_AddTensorFromHandle(
            graph, VSI_NN_TENSOR_ID_AUTO, &attr, matOut.data);
    
        // Single ReLU node connecting the two handle tensors.
        auto nodeReLU = vsi_nn_AddNode(graph, VSI_NN_OP_RELU, 1, 1, nullptr);
        nodeReLU->uid = 100;
        nodeReLU->input.tensors[0] = tensorIn;
        nodeReLU->output.tensors[0] = tensorOut;
    
        graph->input.tensors[0] = tensorIn;
        graph->output.tensors[0] = tensorOut;
    
        err = vsi_nn_SetupGraph(graph, vx_false_e);
        err = vsi_nn_VerifyGraph(graph);
        err = vsi_nn_rnn_RunGraph(graph);
    
        std::cout << "[Input]\n" << matIn << std::endl;
        std::cout << "[Output]\n" << matOut << std::endl;
    
        // Releasing the graph/context should not free the handle buffers,
        // but with the x86_64 simulator driver they appear to be freed here.
        vsi_nn_ReleaseGraph(&graph);
        vsi_nn_ReleaseContext(&context);
    
        return err;
    }
    

    callstack:

    raise (raise:49)
    abort (abort:60)
    __libc_message (__libc_message:173)
    malloc_printerr (malloc_printerr:0)
    _int_free (_int_free:455)
    __libc_free (__libc_free:28)
    cv::StdMatAllocator::deallocate(cv::UMatData*) const (cv::StdMatAllocator::deallocate(cv::UMatData*) const:17)
    cv::Mat::~Mat() (cv::Mat::~Mat():26)
    main (/home/nullko/Documents/tim-vx/samples/ncc/test_handle_tensor.cpp:54)
    __libc_start_main (__libc_start_main:53)
    _start (_start:13)
    
  • How to use NNAPI for NPU?

    Hi, I've learned that it's possible to use TensorFlow Lite NNAPI on a VIM3 Android device.

    I have the /system/lib/libneuralnetworks.so file in my Android 9 OS. How can I make sure the NPU is used? I benchmarked my model and it seems the NPU is not used during TFLite 8-bit inference, because the speed is 10x slower and there is no difference between per-channel and per-tensor quantized models.

    Also, in dmesg after I run my benchmark:

    [ 4907.441064] type=1400 audit(1293888104.720:419): avc: denied { read } for pid=7157 
    comm="benchmar" path="/data/local/tmp/build/model/model.tflite" 
    dev="mmcblk0p20" ino=261748 scontext=u:r:hal_neuralnetworks_default:s0 tcontext=u:object_r:shell_data_file:s0 
    tclass=file permissive=1
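
    For reference, a hedged sketch of one way to check delegation from C++: apply tflite::StatefulNnApiDelegate and compare the execution plan size before and after. This assumes a standard TFLite build that includes the NNAPI delegate; the model path is a placeholder. If the NPU driver is picked up, most of the graph should collapse into a single delegated node.

    #include <cstdio>
    #include <memory>

    #include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
        // Placeholder model path; use your own 8-bit model here.
        auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
        if (!model) return 1;

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);

        size_t nodes_before = interpreter->execution_plan().size();

        // Apply the NNAPI delegate; with a working hal_neuralnetworks
        // driver, most nodes should be absorbed into one delegate node.
        tflite::StatefulNnApiDelegate nnapi_delegate;
        if (interpreter->ModifyGraphWithDelegate(&nnapi_delegate) != kTfLiteOk) {
            std::printf("NNAPI delegate could not be applied\n");
            return 1;
        }

        size_t nodes_after = interpreter->execution_plan().size();
        std::printf("nodes before: %zu, after NNAPI delegation: %zu\n",
                    nodes_before, nodes_after);
        // If nodes_after stays close to nodes_before, the NPU path is
        // not being used for most of the graph.
        return 0;
    }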
    
  • Does TIM-VX utilize the NPU of the VIM3 (A311D)?

    Hello, I'm going to use TIM-VX with the VIM3 development board. I have some questions because it's my first time dealing with applications related to ovxlib and the NPU. I'm going to use TIM-VX for research purposes; the goal of the study is utilizing the NPU effectively. I wonder:

    1. whether the NPU of the VIM3 can be used through TIM-VX,
    2. if it can be used, whether all layers defined in TIM-VX can run on the NPU,
    3. whether there is any way to check at runtime that the NPU is being utilized,
    4. and how much CPU TIM-VX itself uses.

    TIM-VX seems like a very interesting subject. Thank you for reading this!
  • Minimal tflite with vx-delegate example

    Hi, could you provide a minimal example of TFLite inference with vx-delegate applied? I'm not sure how to create and apply vx-delegate to a TFLite model. Thanks!
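
    Not an official sample, but a minimal sketch of loading vx-delegate through TFLite's external delegate mechanism looks roughly like this. It assumes libvx_delegate.so has already been built from the tflite-vx-delegate repository, and the model path is a placeholder.

    #include <memory>

    #include "tensorflow/lite/delegates/external/external_delegate.h"
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
        // Placeholder model path.
        auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
        if (!model) return 1;

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);

        // Load vx-delegate from its shared library as an external delegate.
        auto options = TfLiteExternalDelegateOptionsDefault("libvx_delegate.so");
        TfLiteDelegate* vx_delegate = TfLiteExternalDelegateCreate(&options);
        if (interpreter->ModifyGraphWithDelegate(vx_delegate) != kTfLiteOk) return 1;
        if (interpreter->AllocateTensors() != kTfLiteOk) return 1;

        // Fill the first input, run inference, then read the first output.
        float* in = interpreter->typed_input_tensor<float>(0);
        (void)in;   // write preprocessed data here
        if (interpreter->Invoke() != kTfLiteOk) return 1;
        float* out = interpreter->typed_output_tensor<float>(0);
        (void)out;  // post-process the result here

        // Destroy the interpreter before the delegate it is using.
        interpreter.reset();
        TfLiteExternalDelegateDelete(vx_delegate);
        return 0;
    }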

  • Is there a way to deal with graph verification errors?

    Hello, I am using TIM-VX with the TensorFlow Lite delegate and Tengine, and I encountered a graph verification failure. What can I try for this verification error, and what should I check? Thank you!

  • feat(tensor): support external buffer when creating input/output tensors

    Intent: Up to now, TIM-VX allocates the necessary tensor buffers in host memory, while vsi_nn_AddTensorFromHandle does accept a non-null data argument, so this PR enables a new usage: reusing externally allocated data buffers for input and output tensors.

    This PR extends and replaces #297. I've run a test using a yolov4-tiny-uint8 model (from Tengine) on both x86_64_linux (simulator) and aarch64 (hardware), and the test succeeded stably.

    API changes

    This PR adds the following public APIs:

    1. virtual bool Tensor::FlushCacheForHandle() = 0;
    2. virtual bool Tensor::InvalidateCacheForHandle() = 0;
    3. virtual void* Tensor::map(bool invalidate_cpu_cache = false) = 0;
    4. virtual void Tensor::unmap() = 0;
    5. virtual std::shared_ptr<Tensor> Graph::CreateIOTensor(const TensorSpec& spec, void* data = nullptr) = 0;

    It also adds:

    1. corresponding member functions in TensorImpl and GraphImpl classes
    2. TensorImpl::TensorImpl(Graph* graph, const TensorSpec& spec, void* data = nullptr);

    TensorImpl::data_ is also redefined from const void * to void *, to indicate that it may work as a cache area and be updated by Tensor::RefillCacheFromHandle and Tensor::map(true).
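
    To illustrate the intended flow, here is a hypothetical usage sketch based only on the signatures listed above. The shape, the Relu operation, and the idea of flushing CPU writes before Run and invalidating the cache afterwards are assumptions for illustration, not text from this PR.

    #include <vector>

    #include "tim/vx/context.h"
    #include "tim/vx/graph.h"
    #include "tim/vx/ops/activations.h"
    #include "tim/vx/tensor.h"

    int main() {
        auto context = tim::vx::Context::Create();
        auto graph = context->CreateGraph();

        // Externally owned buffers (e.g. cv::Mat::data in the double-free issue).
        std::vector<float> in_buf(16, -1.0f);
        std::vector<float> out_buf(16, 0.0f);

        tim::vx::ShapeType shape({4, 4});
        tim::vx::TensorSpec in_spec(tim::vx::DataType::FLOAT32, shape,
                                    tim::vx::TensorAttribute::INPUT);
        tim::vx::TensorSpec out_spec(tim::vx::DataType::FLOAT32, shape,
                                     tim::vx::TensorAttribute::OUTPUT);

        // Proposed API: create I/O tensors backed by the external buffers.
        auto input = graph->CreateIOTensor(in_spec, in_buf.data());
        auto output = graph->CreateIOTensor(out_spec, out_buf.data());

        auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
        (*relu).BindInput(input).BindOutput(output);
        if (!graph->Compile()) return -1;

        // Assumed usage: publish CPU writes before running, and pick up
        // device writes afterwards via the new cache-maintenance calls.
        input->FlushCacheForHandle();
        if (!graph->Run()) return -1;
        output->InvalidateCacheForHandle();

        // out_buf now holds the result without an extra copy.
        return 0;
    }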

  • [QST] custom op : Some question about kernel resource input data type

    Hi, I have some questions about custom op kernel resources.

    1. Can I use __global float4 *inputA or __global float *inputA instead of __read_only image2d_t inputA? If so, how should I get the data from inputA (inputA[0]?)
    2. Are the __read_only image3d_t inputA or __read_only image1d_t inputA data types supported?
  • RNN/LSTM/GRU Planned

    Can the following operators be completed in 22Q3? Is there a plan to implement BidirectionalSequenceGRU? (screenshot of the operator list omitted) This may affect my project progress. I look forward to your reply, thank you!

  • add uni/bidirectional_sequence_rnn && UT

    Issue:

    1. The result is wrong if the UT selects the Tanh activation.
    2. bidirectional_sequence_rnn_ext.cc is written to be compatible with ONNX. In the corresponding UT, the third row of bias_data cannot be obtained. I checked the code many times but found no problems.

    Please check the above two questions, thank you.
  • Can a release/remove tensor interface be supported in TIM-VX?

    There are many fp16 bias tensors when running my fp16 model, but TIM-VX/ovxlib only supports fp32 bias tensors. I want to create fp32 bias tensors and release/remove the fp16 bias tensors from the graph, but I cannot find a release/remove tensor interface in TIM-VX. I can find vsi_nn_Remove_Tensor in ovxlib, which consists of vsi_nn_ReleaseTensor/vsi_nn_MapRemove. Can the corresponding interface (vsi_nn_Remove_Tensor or vsi_nn_ReleaseTensor/vsi_nn_MapRemove) be exported in TIM-VX?

  • UINT8 model could not run on the A311D NPU!

    Hi,

    1. Can .pth or .pt models quantized by PyTorch be run directly on the RK3399 ARM CPU and the RK3399Pro NPU?
    2. If I use other formats of quantized models, such as ONNX, how can I run them on the RK3399 and RK3399Pro NPUs?
    3. What formats of quantized models does the RK3399Pro NPU support? Or does it mean that the model must be converted to the Rockchip framework before further quantization and finally deployed on the board?

    BR

  • [bug report] Some bugs or inconvenience in bidirectional_sequence_lstm and lstmunit_activation

    When I tried to use the bidirectional LSTM operator, I found some bugs and fixed some of them. I've created https://github.com/gdh1995/TIM-VX/commit/9ae1832eb37e0a01b544b69f916d6244805d5641 to show my patch; the ones I couldn't fix are also described in this issue.

    The fixed issues

    Below are the detailed problems in https://github.com/VeriSilicon/TIM-VX/blob/f8741b4704ab9caffd49f6fc15f72616962ba1d1/src/tim/vx/internal/src/ops/vsi_nn_op_bidirectional_sequence_lstm.c:

    1. when the passed inputs[BI_LSTM_BW_INPUT_H_STATE] is empty, it copies the shape from outputs[BI_LSTM_BW_OUTPUT_OUTPUT], but the latter may also be empty if only curr_param->merge_outputs is set
    2. it doesn't support empty c_state inputs, while the unidirectional LSTM does.
    3. when it creates new c_state outputs while iterating forwards, it uses the dtype from outputs[BI_LSTM_FW_OUTPUT_OUTPUT], but I think a better source would be inputs[BI_LSTM_FW_INPUT_C_STATE]
      1. the same applies while iterating backwards
    4. it sets lstm_ovxlib.internal_dtype, which should have been lstmunit_ovxlib.internal_dtype
    5. the order of tensors in lstmcell_reshape_output_tensors_bw is D'C'B'A' when the input words are A, B, C, D
      1. in PyTorch, the order is still A'B'C'D', so a merged output tensor is [ [A, A'], [B, B'], [C, C'], [D, D'] ]
      2. the above judgment was tested on torch==1.9.0+cu111 and Python 3.7 x64 for Windows.

    Remaining bugs

    When testing the operator, I noticed that some kernel functions of lstmunit_activation don't work under certain combinations of I/O data types. As far as I've found, they are:

    On the main branch of TIM-VX with the A311D-6.4.10.2 package (https://github.com/VeriSilicon/TIM-VX/releases/tag/v1.1.42), the evis kernel (src/tim/vx/internal/src/kernel/evis/lstmunit_activation_evis.c) never works:

    1. the kernel never writes its output tensors, so the tensor will contain old data from a previous graph run
    2. the failed kernels are:
      1. GEN_LSTMUNIT_STRUCT_ITEMS(0, 0, 0, 0, 0, F16, F16, F16, SIGMOID, S)
      2. GEN_LSTMUNIT_STRUCT_ITEMS(0, 0, 0, 0, 0, F16, U8, F16, SIGMOID, S)
    3. for example, after I created a graph with only one bidirectional LSTM layer using the cl kernel, ran it, and then re-created the graph using the evis kernel, the new result (for an input tensor of the same shape) was exactly the same as the cl kernel's
    4. meanwhile, the version in x86_64_linux works well

    The cl kernel (src/tim/vx/internal/src/kernel/evis/lstmunit_activation_cl.c) doesn't support F32toU8_F32_SIGMOID:

    1. the failed kernel is GEN_LSTMUNIT_STRUCT_ITEMS(0, 0, 0, 0, 0, F32, U8, F32, SIGMOID, S)
    2. the kernel only outputs very small numbers (all items in the uint8-quantized tensor are either 0 or 1)
    3. here's my test case:
      1. by default the unidirectional LSTM layer creates float16 tensors as outputs of the FullConnect nodes
      2. I want to quantize the weights and the LSTM input/output tensors to uint8, so the hidden state is also uint8-quantized
      3. so lstmunit_activation receives several float16 inputs and is expected to generate a uint8-quantized tensor
    4. both the version in A311D-6.4.10.2 and the x86_64_linux one give wrong outputs.

    Here's a summary of the two kernel bugs (uint8+fp16 means only the hidden states are uint8; activation inputs and c_states are still float16):

                        | simulator (x86_64_linux)    | A311D
                        | evis    cl        cpu       | evis       cl        cpu
    ---------------------------------------------------------------------------
    unidi | fp16        | ok      ok        ok        | not-write  ok        ok
          | uint8+fp16  | ok      too-small ok        | not-write  too-small ok
          | uint8+fp16  | ok      too-small ok        | not-write  too-small ok
    ---------------------------------------------------------------------------
    bidi  | fp16        | ok      ok        ok        | not-write  ok        ok
          | uint8+fp16  | ok      too-small ok        | not-write  too-small ok
    

    A311D + evis: never writes the tensor at all.

    A311D + cl:

    • fp16+fp16 (S_F32toF32_F32_SIGMOID): ok
    • uint8+fp16 (S_F32toU8_F32_SIGMOID): value error: all outputs are too small