Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) library for Python, R, Java, Scala, C++ and more. Runs on a single machine, as well as Hadoop, Spark, Dask, Flink and DataFlow.

eXtreme Gradient Boosting


Community | Documentation | Resources | Contributors | Release Notes

XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable. It implements machine learning algorithms under the Gradient Boosting framework. XGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solves many data science problems in a fast and accurate way. The same code runs on major distributed environments (Kubernetes, Hadoop, SGE, MPI, Dask) and can scale to problems with billions of examples.
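For intuition, the core idea of gradient boosting that the library implements can be sketched in a few lines of plain Python/NumPy. This is a toy squared-error version with decision stumps, purely illustrative and unrelated to XGBoost's actual implementation:

```python
import numpy as np

def boost(X, y, n_rounds=50, lr=0.1):
    """Toy gradient boosting for regression using decision stumps."""
    pred = np.full(len(y), y.mean())  # start from the base score
    stumps = []
    for _ in range(n_rounds):
        resid = y - pred  # negative gradient of squared-error loss
        best = None
        for j in range(X.shape[1]):  # exhaustive stump search
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if left.all() or (~left).all():
                    continue  # split must separate the data
                lmean, rmean = resid[left].mean(), resid[~left].mean()
                fit = np.where(left, lmean, rmean)
                err = ((resid - fit) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, j, t, lmean, rmean)
        _, j, t, lmean, rmean = best
        pred += lr * np.where(X[:, j] <= t, lmean, rmean)  # damped update
        stumps.append((j, t, lmean, rmean))
    return pred, stumps

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])
pred, stumps = boost(X, y)
```

Each round fits a weak learner to the current residuals and adds a damped version of it to the ensemble; XGBoost does this at scale with regularized trees and second-order gradient information.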

License

© Contributors, 2019. Licensed under the Apache-2.0 license.

Contribute to XGBoost

XGBoost has been developed and used by a group of active community members. Your help is very valuable in making the package better for everyone. Check out the Community Page.

Reference

  • Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
  • XGBoost originated as a research project at the University of Washington.

Sponsors

Become a sponsor and get a logo here. See details at Sponsoring the XGBoost Project. The funds are used to defray the cost of continuous integration and testing infrastructure (https://xgboost-ci.net).

Open Source Collective sponsors


Sponsors

[Become a sponsor]

NVIDIA

Backers

[Become a backer]

Other sponsors

The sponsors in this list are donating cloud hours in lieu of cash donations.

Amazon Web Services

Owner: Distributed (Deep) Machine Learning Community (A Community of Awesome Machine Learning Projects)

Comments
  • Predict error in R as of 1.1.1

    R version: 3.6.1 (Action of the Toes) xgboost version: 1.1.1.1

    This error is produced when calling predict on an xgboost model trained with a version prior to 1.0:

    Error: Error in predict.xgb.Booster(model, data) : [11:24:23] amalgamation/../src/learner.cc:506: Check failed: mparam_.num_feature != 0 (0 vs. 0) : 0 feature is supplied. Are you using raw Booster interface?

  • [jvm-packages] Scala implementation of the Rabit tracker.

    Motivation

    The Java implementation of RabitTracker in xgboost4j depends on the Python script tracker.py in dmlc-core to handle all socket connections and logging.

    The reliance on Python code has a few weaknesses:

    • It makes xgboost4j-spark and xgboost4j-flink, which use RabitTracker, more susceptible to random failures on worker nodes caused by Python version mismatches.
    • It makes tracker-related issues harder to debug.
    • Since the Python code handles all socket-connection logic, it is difficult to enforce timeouts, so the tracker may hang indefinitely if the workers fail to connect due to networking issues.

    To address the above issues, this PR introduces a pure Scala implementation of the RabitTracker that is interchangeable with the Java implementation at the interface level, but with the Python dependency completely removed.

    The implementation was tested in a Spark cluster running on YARN with up to 16 distributed workers. More thorough tests of this PR (local mode, more nodes, etc.) are still WIP.

    Implementation details

    The Scala version of RabitTracker replicates the functionality of the RabitTracker class in tracker.py: it handles incoming connections from the Rabit clients on the worker nodes, computes the link map and rank for each worker, and prints tracker logging information.

    The tracker handles connections in an asynchronous, non-blocking fashion using Akka, and properly resolves the inter-dependencies between worker connections.

    Timeouts

    The Scala RabitTracker implements timeout logic at multiple entry points.

    • RabitTracker.start() may time out if the tracker fails to bind to a socket address within a certain time limit.
    • RabitTracker.waitFor() may time out if at least one worker fails to connect to the tracker within a certain time limit. This prevents the tracker from hanging forever.
    • RabitTracker.waitFor() may also time out after a given maximum execution time limit.
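The timeout behavior described above can be illustrated with a small Python sketch using stdlib sockets. The names here are hypothetical and this is not the actual Scala/Akka implementation; it only shows why bounding the wait prevents an indefinite hang:

```python
import socket

def wait_for_worker(connect_timeout=0.2):
    """Accept one worker connection, or give up after connect_timeout seconds."""
    tracker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        tracker.bind(("127.0.0.1", 0))       # analogous to start(): bind may fail
        tracker.listen(1)
        tracker.settimeout(connect_timeout)  # bound the wait, as waitFor() does
        conn, _ = tracker.accept()
        conn.close()
        return "connected"
    except socket.timeout:
        return "timed out"                   # the tracker no longer hangs forever
    finally:
        tracker.close()

result = wait_for_worker()  # no worker ever connects in this demo
```

Without the settimeout() call, accept() would block forever if no worker reaches the tracker, which is exactly the failure mode the Scala tracker guards against.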

    Checklist

    The following tasks are to be completed:

    • [x] Add options to switch between Python-based tracker and Scala-based tracker in xgboost4j-spark and xgboost4j-flink.
    • [x] Refactoring of RabitTracker.scala to separate the components into different files.
    • [x] Unit tests for individual actors (using akka-testkit).
    • [x] Test with Rabit clients (Allreduce, checkpointing, simulated connection issues).
    • [x] Test in production.
  • Model produced in 1.0.0 cannot be loaded into 0.90

    Following the instructions here: https://xgboost.readthedocs.io/en/latest/R-package/xgboostPresentation.html

    > install.packages("drat", repos="https://cran.rstudio.com")
    trying URL 'https://cran.rstudio.com/bin/windows/contrib/3.6/drat_0.1.5.zip'
    Content type 'application/zip' length 87572 bytes (85 KB)
    downloaded 85 KB
    
    package ‘drat’ successfully unpacked and MD5 sums checked
    
    The downloaded binary packages are in
            C:\Users\lee\AppData\Local\Temp\RtmpiE0N3D\downloaded_packages
    > drat:::addRepo("dmlc")
    > install.packages("xgboost", repos="http://dmlc.ml/drat/", type = "source")
    Warning: unable to access index for repository http://dmlc.ml/drat/src/contrib:
      Line starting '<!DOCTYPE html> ...' is malformed!
    Warning message:
    package ‘xgboost’ is not available (for R version 3.6.0) 
    

    It also fails on R 3.6.2 with the same error.

    Note: I would much prefer to use the CRAN version. But models I train on linux and Mac and save using the saveRDS function don't predict on another system (windows), they just produce numeric(0). If anyone has any guidelines on how to save an XGBoost model for use on other computers, please let me know. I've tried xgb.save.raw and xgb.load - both produce numeric(0) as well. But on the computer I trained the model on (a month ago), readRDS in R works just fine. Absolutely baffling to me.

  • pip install failure

    [email protected]:/# pip install xgboost
    Downloading/unpacking xgboost
      Could not find a version that satisfies the requirement xgboost (from versions: 0.4a12, 0.4a13)
    Cleaning up...
    No distributions matching the version for xgboost
    Storing debug log for failure in /root/.pip/pip.log

    You can repeat in docker with: docker run -it --rm ubuntu:trusty

    apt-get update
    apt-get install python-pip
    pip install xgboost
    

    see this also:

    http://stackoverflow.com/questions/32258463/install-xgboost-under-python-failing

  • OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized.

    For bugs or installation issues, please provide the following information. The more information you provide, the more easily we will be able to offer help and advice.

    Environment info

    Operating System: Mac OSX Sierra 10.12.1

    Compiler:

    Package used (python):

    xgboost version used: xgboost 0.6a2

    If you are using python package, please provide

    1. The Python version and distribution: Python 2.7.12
    2. The command used to install xgboost (not installing from source): pip install xgboost

    Steps to reproduce

    1. Run the following:

       from xgboost import XGBClassifier
       import numpy as np
       import matplotlib.pyplot as plt

       x = np.array([[1, 2], [3, 4]])
       y = np.array([0, 1])
       clf = XGBClassifier(base_score=0.005)
       clf.fit(x, y)
       plt.hist(clf.feature_importances_)

    What have you tried?

    See the error message: "OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized. OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/."

    I tried:

    import os
    os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

    It does the job for me, but it is kind of ugly.


    I know this might not be a problem with xgboost itself, but I'm fairly sure it started after I upgraded xgboost via 'pip install xgboost'. I'm posting the issue here to see if anyone has had the same problem. I have very little knowledge of OpenMP. Please help!
    Thanks in advance!
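For what it's worth, the workaround above only takes effect if it runs before the first OpenMP-linked library is imported. A minimal sketch (the environment variable is the escape hatch named in Intel's own error message, and it remains unsafe as that message warns):

```python
import os

# Unsafe, unsupported workaround per Intel's own message: tolerate duplicate
# OpenMP runtimes. Must be set BEFORE importing xgboost, matplotlib, etc.,
# because the runtime check fires when the second OpenMP copy is loaded.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# import xgboost  # import only after the variable is set
```

The cleaner fix, as the message suggests, is to ensure only one OpenMP runtime is linked into the process.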

  • RMM integration plugin

    Fixes #5861.

    Depends on #5871. Will rebase after #5871 is merged.

    Depends on #5966. Will rebase after #5966 is merged.

    ~~Currently, the C++ tests are crashing with an out-of-memory error.~~ The OOM has been fixed.

  • [DISCUSSION] Adopting JSON-like format as next-generation model format

    As discussed in #3878 and #3886, we might want a more extensible format for saving XGBoost models.

    For now my plan is to use the JSONReader and JSONWriter implemented in dmlc-core to add experimental support for saving/loading models to and from JSON files. Since the related area of the code is quite messy and dangerous to change, I want to share my plan, and possibly an early PR, as soon as possible so that someone can point out my mistakes early (there will be mistakes) and we avoid duplicating work. :)
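To illustrate why a JSON-like format is attractive here, below is a toy round-trip of a tree node using Python's stdlib json module. The schema is made up purely for illustration and is not the format eventually adopted:

```python
import json

# A toy tree node with one split and two leaves (hypothetical schema).
tree = {
    "split_feature": 2,
    "split_value": 0.5,
    "children": [
        {"leaf_value": -0.3},
        {"leaf_value": 0.7},
    ],
    # New keys can be added later without breaking readers that ignore them.
    "version": 1,
}

blob = json.dumps(tree)      # serialize to text
restored = json.loads(blob)  # deserialize back to a dict
```

Unknown keys can simply be skipped by older readers, which is the kind of extensibility a fixed binary layout lacks.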

    @hcho3

  • XGBoost 0.90 Roadmap

    This thread is to keep track of all the good things that will be included in the 0.90 release. It will be updated as the planned release date (~~May 1, 2019~~ as soon as Spark 2.4.3 is out) approaches.

    • [x] XGBoost will no longer support Python 2.7, since it is reaching its end-of-life soon. This decision was reached in #4379.
    • [x] XGBoost4J-Spark will now require Spark 2.4+, as Spark 2.3 is reaching its end-of-life in a few months (#4377) (https://github.com/dmlc/xgboost/issues/4409)
    • [x] XGBoost4J now supports up to JDK 12 (#4351)
    • [x] Additional optimizations for gpu_hist (#4248, #4283)
    • [x] XGBoost as CMake target; C API example (#4323, #4333)
    • [x] GPU multi-class metrics (#4368)
    • [x] Scikit-learn-like random forest API (#4148)
    • [x] Bugfix: Fix GPU histogram allocation (#4347)
    • [x] [BLOCKING][jvm-packages] fix non-deterministic order within a partition (in the case of an upstream shuffle) on prediction https://github.com/dmlc/xgboost/pull/4388
    • [x] Roadmap: additional optimizations for hist on multi-core Intel CPUs (#4310)
    • [x] Roadmap: hardened Rabit; see RFC #4250
    • [x] Robust handling of missing values in XGBoost4J-Spark https://github.com/dmlc/xgboost/pull/4349
    • [x] External memory with GPU predictor (#4284, #4438)
    • [x] Use feature interaction constraints to narrow split search space (#4341)
    • [x] Re-vamp Continuous Integration pipeline; see RFC #4234
    • [x] Bugfix: AUC, AUCPR metrics should handle weights correctly for learning-to-rank task (#4216)
    • [x] Ignore comments in LIBSVM files (#4430)
    • [x] Bugfix: Fix AUCPR metric for ranking (#4436)
  • 1.5.0 Release Candidate

    Roadmap https://github.com/dmlc/xgboost/issues/6846 . Draft of release note: https://github.com/dmlc/xgboost/pull/7271 .

    We are about to release version 1.5.0 of XGBoost. In the next two weeks, we invite everyone to try out the release candidate (RC).

    Feedback period: until the end of October 13, 2021. No new features will be added to this release; only critical bug fixes will be included.

    @dmlc/xgboost-committer

    Available packages:

    • Python packages:
    pip install xgboost==1.5.0rc1
    
    • R packages: Linux x86_64: https://github.com/dmlc/xgboost/releases/download/v1.5.0rc1/xgboost_r_gpu_linux.tar.gz Windows x86_64: https://github.com/dmlc/xgboost/releases/download/v1.5.0rc1/xgboost_r_gpu_win64.tar.gz
    R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz
    
    • JVM packages
    Show instructions (Maven/SBT)

    Maven

    <dependencies>
      ...
      <dependency>
          <groupId>ml.dmlc</groupId>
          <artifactId>xgboost4j_2.12</artifactId>
          <version>1.5.0-RC1</version>
      </dependency>
      <dependency>
          <groupId>ml.dmlc</groupId>
          <artifactId>xgboost4j-spark_2.12</artifactId>
          <version>1.5.0-RC1</version>
      </dependency>
    </dependencies>
    
    <repositories>
      <repository>
        <id>XGBoost4J Release Repo</id>
        <name>XGBoost4J Release Repo</name>
        <url>https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/release/</url>
      </repository>
    </repositories>
    

    SBT

    libraryDependencies ++= Seq(
      "ml.dmlc" %% "xgboost4j" % "1.5.0-RC1",
      "ml.dmlc" %% "xgboost4j-spark" % "1.5.0-RC1"
    )
    resolvers += ("XGBoost4J Release Repo"
                  at "https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/release/")
    

    Starting from 1.2.0, XGBoost4J-Spark supports training with NVIDIA GPUs. To enable this capability, download artifacts suffixed with -gpu, as follows:

    Show instructions (Maven/SBT)

    Maven

    <dependencies>
      ...
      <dependency>
          <groupId>ml.dmlc</groupId>
          <artifactId>xgboost4j-gpu_2.12</artifactId>
          <version>1.5.0-RC1</version>
      </dependency>
      <dependency>
          <groupId>ml.dmlc</groupId>
          <artifactId>xgboost4j-spark-gpu_2.12</artifactId>
          <version>1.5.0-RC1</version>
      </dependency>
    </dependencies>
    
    <repositories>
      <repository>
        <id>XGBoost4J Release Repo</id>
        <name>XGBoost4J Release Repo</name>
        <url>https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/release/</url>
      </repository>
    </repositories>
    

    SBT

    libraryDependencies ++= Seq(
      "ml.dmlc" %% "xgboost4j-gpu" % "1.5.0-RC1",
      "ml.dmlc" %% "xgboost4j-spark-gpu" % "1.5.0-RC1"
    )
    resolvers += ("XGBoost4J Release Repo"
                  at "https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/release/")
    

    TO-DOs

    • [x] Release pip rc package.
    • [x] Test on R-hub.
    • [x] Release R rc package.
    • [x] Release jvm rc packages.

    PRs to be backported

    • [x] Fix gamma negative log likelihood (https://github.com/dmlc/xgboost/pull/7275)
    • [x] Fix verbose_eval in Python cv function. (https://github.com/dmlc/xgboost/pull/7291)
    • [x] Fix weighted samples in multi-class AUC (https://github.com/dmlc/xgboost/pull/7300)
    • [x] Fix prediction with categorical dataframe using sklearn interface. (https://github.com/dmlc/xgboost/pull/7306)
  • [DISCUSSION] Integration with PySpark

    I just noticed that there are some requests for integration with PySpark: http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html

    I have also received some emails from users discussing the same topic.

    I would like to initiate a discussion here on whether/when we shall start this work.

    @tqchen @terrytangyuan

  • [Roadmap] XGBoost 1.0.0 Roadmap

    @dmlc/xgboost-committer please add your items here by editing this post. Let's ensure that

    • each item is associated with a ticket

    • major design/refactoring work is associated with an RFC before the code is committed

    • blocking issues are marked as blocking

    • breaking changes are marked as breaking

    For other contributors who do not have permission to edit this post, please comment here about what you think should be in 1.0.0.

    I have created three new labels: 1.0.0, Blocking, Breaking

    • [x] Improve installation experience on Mac OSX (#4477)
    • [x] Remove old GPU objectives.
    • [x] Remove gpu_exact updater (deprecated) #4527
    • [x] Remove multi threaded multi gpu support (deprecated) #4531
    • [x] External memory for gpu and associated dmatrix refactoring #4357 #4354
    • [ ] Spark Checkpoint Performance Improvement (https://github.com/dmlc/xgboost/issues/3946)
    • [x] [BLOCKING] the sync mechanism in hist method in master branch is broken due to the inconsistent shape of tree in different workers (https://github.com/dmlc/xgboost/pull/4716, https://github.com/dmlc/xgboost/issues/4679)
    • [x] Per-node sync slows down distributed training with 'hist' (#4679)
    • [x] Regression tests including binary IO compatibility, output stability, performance regressions.
  • Update custom_metric_obj.rst

    In the code block given at line 291, in the softprob_obj method definition, the variable 'classes' is not defined. It should be defined as shown in the proposed changes.
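For context, a softprob-style custom objective has roughly the following shape. This is a hedged NumPy sketch, not the tutorial's exact code; the point is that the number of classes (kClasses below) must be defined explicitly, which is the gist of the fix:

```python
import numpy as np

kClasses = 3  # the fix: the class count must be defined, not left implicit

def softprob_obj(labels, raw_preds):
    """Gradient and Hessian of softmax cross-entropy, one row per sample."""
    shifted = raw_preds - raw_preds.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(shifted)
    p /= p.sum(axis=1, keepdims=True)            # softmax probabilities
    grad = p.copy()
    grad[np.arange(len(labels)), labels] -= 1.0  # p_k - 1 on the true class
    hess = 2.0 * p * (1.0 - p)                   # common diagonal approximation
    return grad, hess

labels = np.array([0, 2])
raw = np.zeros((2, kClasses))  # untrained margins: uniform probabilities
grad, hess = softprob_obj(labels, raw)
```

With zero margins every class gets probability 1/3, so the gradient on the true class is 1/3 - 1 = -2/3 and 1/3 elsewhere.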

  • [R] [CI] add more linting checks

    Proposes adding the following additional linting checks on R code in the project:

    • any_duplicated_linter()
      • encourages the use of anyDuplicated(x) over any(duplicated(x))
      • this built-in is faster in most cases, because it doesn't need to compute all duplicates
    • any_is_na_linter()
      • encourages the use of anyNA(x) over is.na(x)
      • this built-in is faster in most cases, because it can stop as soon as it finds an NA instead of checking every value in x
    • sprintf_linter()
      • detects mistakes in string formatting with sprintf()
    • unreachable_code_linter()
      • detects code that will never be executed, e.g. after an unconditional stop() or return()
    • vector_logic_linter()
      • warns against the common mistake of using vectorized logical operators like | and & (which return a vector of results) in conditions where a single logical value is expected; use || and && instead

    The other changes in this PR address the following errors raised by these linters:

    warning: [unreachable_code] Code and comments coming after a top-level return() or stop() should be removed.

    warning: [vector_logic] Conditional expressions require scalar logical operators (&& and ||)

    warning: [any_is_na] anyNA(x) is better than any(is.na(x))

    How I tested this

    Rscript ./tests/ci_build/lint_r.R $(pwd)
    
    R CMD INSTALL R-package/
    cd R-package/tests
    Rscript testthat.R
    

    Notes for Reviewers

    I chose this set of additional linters because they address efficiency and correctness, so they provide some user-facing benefit.

    I might propose adding more in the future (we use many more in LightGBM), but don't want to draw away too much of maintainers' attention here.

    Thanks for your time and consideration!

  • Hist Tree Method Discrepancies

    Hello. I am opening this issue on behalf of someone else. They are wondering whether anything on the roadmap would make it possible for the "hist" tree method to produce prediction results that do not vary with the number of workers used in a training job. From past issues and the code itself, this appears to be expected behavior, but they wanted further insight from a maintainer.

    Please let me know if any further specifics are needed for this inquiry.

  • Bump rapids-4-spark_2.12 from 21.08.0 to 22.12.0 in /jvm-packages

    Bumps rapids-4-spark_2.12 from 21.08.0 to 22.12.0.

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • [CI] cudaErrorInsufficientDriver returned from cudaGetDevice

    https://github.com/dmlc/xgboost/blob/c6a8754c62496e43452e6edf49eb0eb89ffcdc70/tests/cpp/helpers.cc#L632

    This line is currently failing in CI due to a driver issue. It is part of the GTest suite that uses RMM. Until the CI machine can be updated with the latest driver, we'll disable the GTest runs with RMM.

  • QuantileDMatrix no longer take libsvm file as an input?

    XGBoost 1.7.2

    dtrain = xgb.QuantileDMatrix('/Users/weitian/tmp/data.test')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 620, in inner_f
        return func(**kwargs)
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 1386, in __init__
        self._init(
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 1445, in _init
        it.reraise()
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 488, in reraise
        raise exc  # pylint: disable=raising-bad-type
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 469, in _handle_exception
        return fn()
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 534, in <lambda>
        return self._handle_exception(lambda: self.next(input_data), 0)
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/data.py", line 1172, in next
        input_data(**self.kwargs)
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 620, in inner_f
        return func(**kwargs)
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/core.py", line 519, in input_data
        new, cat_codes, feature_names, feature_types = _proxy_transform(
      File "/Users/weitian/opt/miniconda3/envs/turi/lib/python3.9/site-packages/xgboost/data.py", line 1206, in _proxy_transform
        raise TypeError("Value type is not supported for data iterator:" + str(type(data)))
    TypeError: Value type is not supported for data iterator: <class 'str'>
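As a workaround sketch, the LIBSVM text can be parsed into arrays first and the arrays handed to QuantileDMatrix. The stdlib-only parser below assumes a known feature count and 0-based indices; both are assumptions for illustration, not guarantees about the file in question:

```python
def parse_libsvm_line(line, n_features):
    """Parse one 'label idx:val idx:val ...' line into (label, dense row)."""
    parts = line.split()
    label = float(parts[0])
    row = [0.0] * n_features
    for tok in parts[1:]:
        idx, val = tok.split(":")
        row[int(idx)] = float(val)  # indices assumed 0-based here
    return label, row

label, row = parse_libsvm_line("1 0:0.5 3:2.0", n_features=4)
```

The resulting labels and rows can be stacked into arrays and passed to QuantileDMatrix(data, label=...), which accepts array-like input.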
