
MediaPipe


Live ML anywhere

MediaPipe offers cross-platform, customizable ML solutions for live and streaming media.

• End-to-end acceleration: built-in fast ML inference and processing, accelerated even on common hardware.
• Build once, deploy anywhere: a unified solution works across Android, iOS, desktop/cloud, web, and IoT.
• Ready-to-use solutions: cutting-edge ML solutions demonstrating the full power of the framework.
• Free and open source: framework and solutions both under Apache 2.0, fully extensible and customizable.

ML solutions in MediaPipe

The following solutions are available; platform support varies per solution across Android, iOS, C++, Python, JS, and Coral:

• Face Detection
• Face Mesh
• Iris
• Hands
• Pose
• Holistic
• Hair Segmentation
• Object Detection
• Box Tracking
• Instant Motion Tracking
• Objectron
• KNIFT
• AutoFlip
• MediaSequence
• YouTube 8M

See also MediaPipe Models and Model Cards for ML models released in MediaPipe.

MediaPipe in Python

MediaPipe offers customizable Python solutions as a prebuilt Python package on PyPI, which can be installed simply with pip install mediapipe. It also provides tools for users to build their own solutions. Please see MediaPipe in Python for more info.
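
Each solution is exposed under mp.solutions. As a minimal sketch, here is Face Mesh run on a single image (the image path is a placeholder, not a file shipped with MediaPipe):

    # Assumed environment: pip install mediapipe opencv-python
    import cv2
    import mediapipe as mp

    mp_face_mesh = mp.solutions.face_mesh

    # Run Face Mesh on one image; "portrait.jpg" is a placeholder path.
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
        image = cv2.imread("portrait.jpg")
        results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            # 468 landmarks per face, each with normalized x/y and a relative z.
            print(len(results.multi_face_landmarks[0].landmark))

The same pattern, constructing a solution object and calling process() on an RGB frame, applies to Hands, Pose, and the other Python solutions.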

MediaPipe on the Web

MediaPipe on the Web is an effort to run the same ML solutions built for mobile and desktop also in web browsers. The official API is under construction, but the core technology has been proven effective. Please see MediaPipe on the Web in Google Developers Blog for details.

You can use the following links to load a demo in the MediaPipe Visualizer, then click the "Runner" icon in the top bar as shown below. The demos use your webcam video as input, which is processed locally in real time and never leaves your device.

[Screenshot: the "Runner" icon in the MediaPipe Visualizer top bar]

Getting started

Learn how to install MediaPipe and build example applications, and start exploring our ready-to-use solutions that you can further extend and customize.

The source code is hosted in the MediaPipe GitHub repository, and you can run code search using Google Open Source Code Search.

Publications

Videos

Events

Community

Alpha disclaimer

MediaPipe is currently in alpha at v0.7. We may still be making breaking API changes and expect to reach stable APIs by v1.0.

Contributing

We welcome contributions. Please follow these guidelines.

We use GitHub issues for tracking requests and bugs. Please post questions to Stack Overflow with the mediapipe tag.

Comments
  • How to render different face effect

    In the face effect module I can see 3D data such as glasses.pbtxt, facepaint.pngblob, and glasses.pngblob.

    I am trying to add a few more models to experiment with, but I couldn't find any documentation or information about the pngblob data. It seems that the 3D model is generated at runtime from glasses.pbtxt and glasses.pngblob is used as the texture. Can you please confirm whether that is right and explain how it happens?

    Question 1: Can you please provide any documentation of the pngblob datatype? How can I create a new 3D model (pngblob / binarypb) to render on the face? The most common formats for 3D model data are OBJ, FBX, etc. Is there any way to convert these formats to binarypb / pngblob?

    Question 2: It is mentioned in gl_animation_overlay_calculator.cc that .obj.uuu files can be created using the mentioned SimpleObjEncryptor, but I couldn't find it. Can you please specify where to find it?

    // ANIMATION_ASSET (String, required):
    //     Path of animation file to load and render. Should be generated by
    //     //java/com/google/android/apps/motionstills/SimpleObjEncryptor with
    //     --compressed_mode=true.  See comments and documentation there for more
    //     information on custom .obj.uuu file format.
    
  • Mediapipe CodePens don't run on iOS Safari

    Hello all,

    I have a project using MediaPipe Hands on iOS and I've been trying to update from the tfjs model to the new MediaPipe API, but even when I enable WebGL2 it still fails to work. I've made sure I'm asking for permission using navigator.getmedia properly. Wondering if anyone has any ideas on what's going wrong.

    Here's the codepen that I'm testing: https://codepen.io/aionkov/pen/MWjEqWa

    Here's the console:

    [Warning] I1223 11:05:16.032000 1 gl_context_webgl.cc:146] Successfully created a WebGL context with major version 3 and handle 3 (hands_solution_wasm_bin.js, line 9)
    [Warning] I1223 11:05:16.034000 1 gl_context.cc:340] GL version: 3.0 (OpenGL ES 3.0 (WebGL 2.0)) (hands_solution_wasm_bin.js, line 9)
    [Warning] W1223 11:05:16.034000 1 gl_context.cc:794] Drishti OpenGL error checking is disabled (hands_solution_wasm_bin.js, line 9)
    [Warning] E1223 11:05:16.711000 1 calculator_graph.cc:775] INTERNAL: CalculatorGraph::Run() failed in Run: (hands_solution_wasm_bin.js, line 9)
    [Warning] Calculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50 [type.googleapis.com/mediapipe.StatusList='\n\x84\x02\x08\r\x12\xff\x01\x43\x61lculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50'] (hands_solution_wasm_bin.js, line 9)
    [Warning] F1223 11:05:16.712000 1 solutions_wasm.embind.cc:585] Check failed: ::util::OkStatus() == (graph_->WaitUntilIdle()) (OK vs. INTERNAL: CalculatorGraph::Run() failed in Run: (hands_solution_wasm_bin.js, line 9)
    [Warning] Calculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50 [type.googleapis.com/mediapipe.StatusList='\n\x84\x02\x08\r\x12\xff\x01\x43\x61lculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50']) (hands_solution_wasm_bin.js, line 9)
    [Warning] *** Check failure stack trace: *** (hands_solution_wasm_bin.js, line 9)
    [Warning] undefined (hands_solution_wasm_bin.js, line 9)
    [Error] Unhandled Promise Rejection: RuntimeError: abort(undefined) at [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:67558 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:67737 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:41049 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:179948 [email protected][wasm code]

    .wasm-function[10471]@[wasm code] .wasm-function[10466]@[wasm code] .wasm-function[10461]@[wasm code] .wasm-function[10458]@[wasm code] .wasm-function[10474]@[wasm code] .wasm-function[515]@[wasm code] .wasm-function[502]@[wasm code]

    [email protected][wasm code] [native code] SolutionWasm$send https://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:33:352 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:10:295 https://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:11:90 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:22:322 [email protected][native code] (evaluating 'new WebAssembly.RuntimeError(what)') (anonymous function) (hands_solution_wasm_bin.js:9:41099) promiseReactionJob

  • ModuleNotFoundError: No module named 'mediapipe.python._framework_bindings' on Raspberry Pi 3

    Hello, I'm having problems using MediaPipe on my Raspberry Pi 3. Using "import mediapipe" gives no error; however, using "mp_drawing = mp.solutions.drawing_utils" (for example) gives the following error message: ModuleNotFoundError: No module named 'mediapipe.python._framework_bindings'

    My installation method was:

    1. sudo apt install ffmpeg python3-opencv python3-pip ;
    2. sudo apt install libxcb-shm0 libcdio-paranoia-dev libsdl2-2.0-0 libxv1 libtheora0 libva-drm2 libva-x11-2 libvdpau1 libharfbuzz0b libbluray2 libatlas-base-dev libhdf5-103 libgtk-3-0 libdc1394-22 libopenexr25 ;
    3. sudo pip3 uninstall mediapipe-rpi3 .

    I'm using a Raspberry Pi 3B with Debian Bullseye (32-bit); my Python version is 3.9.2 and the OpenCV version is 4.2.1. P.S. the file name is "maos.py".

    Does anyone know what might be causing this error? (I attached an image of the error as well for clarity.)
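
    A quick way to check whether the compiled bindings are present is to import them directly. A minimal diagnostic sketch (the module path is taken from the error message above):

    import mediapipe as mp
    # The pure-Python parts of the package can import fine even when the native
    # extension is missing, so import the bindings module explicitly:
    from mediapipe.python import _framework_bindings
    print("mediapipe bindings OK:", mp.__file__)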

  • Accessing landmarks, tracking multiple hands, and enabling depth on desktop

    Hello,

    I found out about Mediapipe after seeing Google's blog post regarding hand tracking. Currently, I am working on using Mediapipe to build a cross platform interface using gestures to control multiple systems. I am using the Desktop CPU example as a base for how to move forward, and I have successfully retrieved the hand landmarks. I just want to ensure that I am retrieving them in the most efficient and proper way.

    The process I use is as follows:

    1. Create a listener of class OutputStreamPoller which listens for the hand_landmarks output stream in the HandLandmark subgraph.
    2. If there is an available packet, load the packet into a variable of class mediapipe::Packet using the .Next() method of the OutputStreamPoller class.
    3. Use the .Get() method of the Packet class and load into another variable called hand_landmarks.
    4. Loop through the variable and retrieve the x, y, and z coordinates and place them into a vector for processing.

    Is this process correct or is there a better way to go about retrieving the coordinates of the hand landmarks?
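
    For reference, the prebuilt Python package wraps the same hand-tracking graph, so the equivalent retrieval there is a plain loop over the returned landmarks. A minimal sketch (not the desktop C++ path this question is about; webcam index 0 and max_num_hands=2 are illustrative assumptions):

    import cv2
    import mediapipe as mp

    with mp.solutions.hands.Hands(max_num_hands=2) as hands:
        cap = cv2.VideoCapture(0)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            coords = []
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # Collect x, y, z per landmark (step 4 above).
                    coords.extend((lm.x, lm.y, lm.z) for lm in hand.landmark)
            # ... process coords ...
        cap.release()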

    I have additional questions, but I am unsure if I should place them in a separate issue. I will ask them here but please let me know if I should open a separate issue.

    1. In the hand tracking examples, only a single hand is to be detected. How would I alter the build such that it can detect multiple hands (specifically 2)?
    2. How would I enable the desktop implementations of hand tracking such that they can capture depth (similar to how the android/ios 3D builds can output z coordinates)?
  • Temperature check on possible memory leak in Holistic JS solution

    Hello! I'm using the Holistic web solution in a WebRTC streaming application.

    We've occasionally seen Holistic processing crash in a way that seems indicative of a memory leak (crash screenshot attached).

    I'm currently unsure about whether the Holistic processing crash is the cause or a symptom of another issue. I've done some memory profiling, but haven't found a reliable way of reproducing it yet.

    Since I don't have too much visibility into the Mediapipe JS internals, I was just hoping to get a temperature check on whether the Mediapipe team thinks:

    1. This issue could be related to the Mediapipe internals / you've seen it before in other contexts
    2. This is definitely not a Mediapipe issue and likely a bug in my application logic

    Basically just trying to determine where to invest additional debugging / investigation efforts.

    FWIW I also came across this thread https://bugs.chromium.org/p/chromium/issues/detail?id=1174675 which indicates there could be some memory leak issues in Chromium that can affect use cases like WebRTC, but the rate of leakage described there seems too slow compared to what I'm perceiving in my application.

    Thanks in advance for your help! Sorry that I'm not able to provide any more details besides that single stack trace. Please let me know if you need any additional information and I'd be happy to circle back with it.

  • Hand tracking landmarks - Z value range

    I am failing to find any kind of documentation or example that would explain the exact definition/behavior of the estimated Z coordinates returned by the hand tracking graph.

    We're able to successfully extract the landmark data as X, Y, and Z coordinates. The X and Y coordinates are clearly normalized, but the Z coordinates appear to take values for which I have no reference (they are not normalized; they are sometimes negative, sometimes positive, and don't appear to adhere to any coherent scale). What is clear is that they are most likely relative to each other.

    Could somebody shine some light on the estimated Z coordinates - especially the scale they adhere to?

  • Based on opencv-4.5.1, a built aar crashes with: dlopen failed: cannot locate symbol "__subtf3"

    System information (Please provide as much relevant information as possible)

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4):
    • MediaPipe version: v0.8.6
    • Bazel version: 4.1.0
    • Solution (e.g. FaceMesh, Pose, Holistic): HANDTRACK aar
    • Programming Language and version (e.g. C++, Python, Java): Java

    Describe the expected behavior:

    My goal was to create an aar so I could use the hand tracking aar in Android Studio with Gradle.

    When I replace the .aar file in the libs folder (from the face_detection and multi_hand_tracking demo/example projects) with the one that I generated following the steps in https://google.github.io/mediapipe/getting_started/android_archive_library.html

    I get the following crash at System.loadLibrary("mediapipe_jni");

    java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "__subtf3" referenced by "/data/app/~~IXWF_H6noRNss6OBiB8kZQ==/com.my.mediapipe.apps.myapplication-Eds2VNkZzhvjhYw-zhjNaw==/lib/arm64/libmediapipe_jni.so"...

    So I switched to OpenCV 4, but I need opencv-4.5.1:

    sed -i -e 's:3.4.3/opencv-3.4.3:4.5.1/opencv-4.5.1:g' WORKSPACE
    sed -i -e 's:libopencv_java3:libopencv_java4:g' third_party/opencv_android.BUILD

    The opencv-4.5.1 I use is different from the opencv-4.0.1 in the example, but it still reports the same error! With nm -D libmediapipe_jni.so | grep subtf3 the symbol is still there.

  • AttributeError: module 'mediapipe' has no attribute 'solutions'

    Has anyone had this error when importing the mediapipe library?

    AttributeError: partially initialized module 'mediapipe' has no attribute 'solutions' (most likely due to a circular import)

    import cv2
    import mediapipe as mp
    mp_drawing = mp.solutions.drawing_utils
    mp_face_mesh = mp.solutions.face_mesh
    
  • How do you use MediaPipe on the web in your own web app?

    I don't need the visualizer; I just want to be able to run MediaPipe Hands with multi-hand support in my web app. From my understanding the code is compiled into wasm and then run from a web app. How would I include the Hands with multi-hand support app in my own web application?

  • Unable to load the hand detection model

    I am trying to test the given model in my sample Android application. When trying to load the model I face this issue:

    java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: Encountered unresolved custom op: Convolution2DTransposeBias.Node number 165 (Convolution2DTransposeBias) failed to prepare.

    Code: AssetFileDescriptor fileDescriptor = activity.getAssets().openFd("palm_detection.tflite");

  • How to support a YOLO model for object detection?

    I've trained a YOLO model for object detection and I want to integrate the model into MediaPipe. Is this supported or not? If integration is supported, how do I do it? Could you give me some advice? Thanks a lot.

  • [iOS] Linking issue with *.tflite files for ios_static_framework!

    System information:

    • MediaPipe v0.8.7.1 :
    • Bazel version: 5.2.0
    • XCode 14.0.1
    • iOS 16 (iPhone 7)

    Describe the problem:

    I have run into a problem building a static framework with MediaPipe.

    Working solution for dynamic version:

    apple_binary(
        name = "face_landmarks_detector_ios",
        deps = [
            ":face_landmarks_detector_ios_lib",
            "@opencv_framework_ios16//:OpencvFramework"
        ],
        data = [
            "//mediapipe/modules/face_detection:face_detection_short_range.tflite",
            "//mediapipe/modules/face_landmark:face_landmark.tflite",
        ],
        platform_type = "ios",
        binary_type = "dylib"
    )
    

    Not working solution for static framework:

    ios_static_framework(
        name = "face_landmarks_detector_ios_static",
        minimum_os_version = "16.0",
        platform_type = "ios",
        families = ["iphone"],
        deps = [
            ":face_landmarks_detector_ios_lib",
            "@opencv_framework_ios16//:OpencvFramework",
        ],
        resources = [
            "//mediapipe/modules/face_detection:face_detection_short_range.tflite",
            "//mediapipe/modules/face_landmark:face_landmark.tflite",
        ],
    )
    

    And here is the library, to show what is inside 'face_landmarks_detector_ios_lib':

    cc_library(
        name = "face_landmarks_detector_ios_lib",
        srcs = [
            "face_landmarks_detector.cc",
        ],
        hdrs = [
            "export.h",
            "face_landmarks_detector.h"
        ],
        copts = ["-std=c++17"],
        deps = [
            "//mediapipe/framework:calculator_framework",
            "//mediapipe/framework/port:parse_text_proto",
            "//mediapipe/graphs/face_mesh/subgraphs:face_renderer_gpu",
            "//mediapipe/calculators/core:flow_limiter_calculator",
            ":face_landmarks_detector_front_gpu",
            "//mediapipe/calculators/core:constant_side_packet_calculator",
            ":desktop_live_calculators",
        ] + select({
            "//mediapipe:ios_i386": [],
            "//mediapipe:ios_x86_64": [],
            "//conditions:default": [
                ":mobile_calculators",
                "//mediapipe/framework/formats:landmark_cc_proto",
            ],
        }),
        data = [
            "//mediapipe/modules/face_detection:face_detection_short_range.tflite",
            "//mediapipe/modules/face_landmark:face_landmark.tflite",
        ],
        defines = ["COMPILING_FACE_DETECTOR_API"],
        alwayslink = 1,
    )
    

    Problem:

    After linking it to the iOS app with Xcode I get issues related to the TensorFlow Lite models, specifically:

             "//mediapipe/modules/face_detection:face_detection_short_range.tflite",
            "//mediapipe/modules/face_landmark:face_landmark.tflite",
    

    Somehow ios_static_framework is ignoring these files, whereas the dynamic build works fine without any issues. The environment and settings of the project are the same for the dynamic and static libs.

    Here is the error text when the app is running on the device (iPhone 7, iOS 16):

    2023-01-08 13:28:55.216549+0200 app-api[13255:7401738] Error NOT_FOUND: ValidatedGraphConfig Initialization failed.
    No registered object with name: FaceLandmarksDetectorFrontCpu; Unable to find Calculator "FaceLandmarksDetectorFrontCpu"
    No registered object with name: FaceRendererCpu; Unable to find Calculator "FaceRendererCpu"
    

    The question is: how do I link these tflite files to the ios_static_framework?

  • How to run JavaScript solution with increased GPU priority?

    System information (Please provide as much relevant information as possible)

    • Have I written custom code (as opposed to using a stock example script provided in Mediapipe): Yes and no (issue exists for both)
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4): Windows 11
    • MediaPipe version: Latest
    • Bazel version: N/A
    • Solution (e.g. FaceMesh, Pose, Holistic): Hands
    • Programming Language and version ( e.g. C++, Python, Java): JavaScript

    Describe the expected behavior: MediaPipe inference in JavaScript slows down drastically when there are other 3D applications running in the foreground. Normally the tracking is ~24 FPS, but if I launch a resource-intensive 3D application/game (say Elden Ring), the tracking FPS drops drastically to 3~4 FPS. Switching the foreground application to the MediaPipe JavaScript solution immediately improves the FPS. This is on an i9-9900K + RTX 2080 Ti.

    Notably, the Python solution does not have this problem. Tracking FPS remains stable no matter what application is running in the foreground. I am using official demo code for both.

    Standalone code you may have used to try to get what you need: I have tried to build the MediaPipe JavaScript solution as an Electron app and manually raise its process priority in Task Manager (or through the SetPriorityClass Win32 API) to "Realtime". I have also tried to run the Electron app with administrator privileges. However, neither worked.

    If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/repo link /any notebook: The JS hands demo reproduces this.

    Other info / Complete Logs : N/A

  • mediapipe installation issue

    Please make sure that this is a build/installation issue and also refer to the troubleshooting documentation before raising any issues.

    System information (Please provide as much relevant information as possible)

    • OS Platform and Distribution (e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4):
    • Compiler version (e.g. gcc/g++ 8 /Apple clang version 12.0.0):
    • Programming Language and version ( e.g. C++ 14, Python 3.6, Java ):
    • Installed using virtualenv? pip? Conda? (if python):
    • MediaPipe version:
    • Bazel version:
    • XCode and Tulsi versions (if iOS):
    • Android SDK and NDK versions (if android):
    • Android AAR ( if android):
    • OpenCV version (if running on desktop):

    Describe the problem:

    Provide the exact sequence of commands / steps that you executed before running into the problem:

    Complete Logs: Include Complete Log information or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:

  • Added iOS Classification Result, Category and Helper Classes

    1. Added MPPClassificationResult, MPPCategory and their helpers
    2. Removed MPPClassifierOptions and helpers since we have decided to in-line them in text classifier options to maintain consistency with Java and Python
  • Face Occlusion Detection

    I want to detect occlusion coming in front of the face region in a video. It will take video frames as input and, for each frame, output whether the face in the current frame is occluded or not.
