Shōgun

The SHOGUN machine learning toolbox


Unified and efficient Machine Learning since 1999.

Latest release: Release

Cite Shogun: DOI

Develop branch build status: Build status, codecov

Donate to Shogun via NumFOCUS: Powered by NumFOCUS

Buildbot: https://buildbot.shogun.ml

Interfaces


Shogun is implemented in C++ and offers automatically generated, unified interfaces to Python, Octave, Java/Scala, Ruby, C#, R, and Lua. We are currently working on adding more languages, including JavaScript, D, and Matlab.

Interface    Status
----------   ---------------------------------------------------
Python       mature (no known problems)
Octave       mature (no known problems)
Java/Scala   stable (no known problems)
Ruby         stable (no known problems)
C#           stable (no known problems)
R            beta (most examples work, static calls unavailable)
Perl         pre-alpha (work in progress quality)
JS           pre-alpha (work in progress quality)

See our website for examples in all languages.

Platforms


Shogun is supported under GNU/Linux, macOS, FreeBSD, and Windows.

Directory Contents


The following directories are found in the source distribution. Note that some folders are submodules that can be checked out with git submodule update --init.

  • src - source code, separated into C++ source and interfaces
  • doc - readmes (doc/readme, submodule), Jupyter notebooks, cookbook (API examples), licenses
  • examples - example files for all interfaces
  • data - data sets (submodule, required for examples)
  • tests - unit tests and continuous integration of interface examples
  • applications - applications of SHOGUN (outdated)
  • benchmarks - speed benchmarks
  • cmake - CMake build scripts

License


Shogun is distributed under the BSD 3-clause license, with optional GPL3 components. See doc/licenses for details.

Comments
  • Implement heterogeneous (GPU+CPU) dot product computation routines (Deep learning project)

    The dot product operation is one of the major building blocks of deep neural network architectures. The routine implemented in this task should be able to handle batch computation of dot products. For references, see Theano, CUDA, OpenCL, and ViennaCL. It is also worth implementing some tests to measure performance and memory behaviour.

    Please join the discussion before starting to work on any code. We expect to refine the task with further discussion.
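
    A minimal sketch of the batched interface this could take (plain C++/Eigen; the function name and Backend enum are hypothetical, and only the CPU path is shown):

    #include <Eigen/Dense>

    enum class Backend { CPU, GPU };  // hypothetical dispatch tag

    // Dot product of each row of A with the corresponding row of B.
    // A and B are (batch_size x dim); the result has batch_size entries.
    Eigen::VectorXd batch_dot(const Eigen::MatrixXd& A,
                              const Eigen::MatrixXd& B,
                              Backend backend = Backend::CPU)
    {
        // Only the CPU path is sketched; a GPU path would upload A and B
        // once and launch a single batched kernel (CUDA/OpenCL/ViennaCL).
        (void)backend;
        return (A.array() * B.array()).rowwise().sum();
    }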

  • #2068 Simple Gaussian Process Regression on Movielens.

    How do I commit data to shogun-data? Do I need to open a separate pull request on shogun-data?

    This is a simple example of using Gaussian Process Regression on MovieLens.

  • Add kmeans page to cookbook

    • There's no CLabels parameter in the CKMeans class, so I can't find a way to use apply_* or eval.evaluate to compare the test and training datasets.
    • That said, I don't see why CKMeans couldn't have CLabels - we could just label the clusters 1..N.
    • I thought about evaluating the clustering performance by computing the Euclidean distances between the centers of the training dataset and the test dataset (see the sketch below), but there's no handy method for that at the moment.
    • I didn't see the difference between the datasets fm_train_real.dat and classifier_binary__2d_linear_features_train.dat, but I think it doesn't really matter which one is used?
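
    A minimal sketch of the center-comparison idea from the third bullet (plain C++/Eigen; centers are matched greedily to their nearest counterpart, which is an assumption on my part):

    #include <Eigen/Dense>
    #include <algorithm>
    #include <limits>

    // Distance from each training-set center to its nearest test-set center.
    // Centers are stored one per column (dim x k).
    Eigen::VectorXd center_distances(const Eigen::MatrixXd& train_centers,
                                     const Eigen::MatrixXd& test_centers)
    {
        Eigen::VectorXd dists(train_centers.cols());
        for (int i = 0; i < train_centers.cols(); ++i)
        {
            double best = std::numeric_limits<double>::infinity();
            for (int j = 0; j < test_centers.cols(); ++j)
                best = std::min(best,
                    (train_centers.col(i) - test_centers.col(j)).norm());
            dists(i) = best;
        }
        return dists;
    }
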
  • Add meta example features-char-string

    A simple meta example for CStringFeatures.

    I would like to make changes and add an output file for an integration test, but I am not sure if the current outputs are enough for that. Currently, it stores "max_string_length", "number_of_strings" and "length_of_first_string". I don't think it is practical to check all the values of "strings".

    However, if you don't have a better idea, I could add eight variables that store the value of the first vector before and after the change to "test".

  • Refactor laplacian

    @karlnapf take a look at this. I will send the link for the notebook tomorrow.

    Note that the original implementation of LaplacianInferenceMethod in Shogun used log(lu.determinant()) to compute the log-determinant, which is not numerically stable. (In fact, this implementation does not follow the GPML code.)

    Maybe MatrixOperations.h will be merged into Math.h. However, I think in that case Math.h would need to include the Eigen3 header.

    Another issue is that I currently use MatrixXd and VectorXd to pass variables in MatrixOperations.h; maybe SGVector and SGMatrix would be better. (Should I use "SGVector &" or "SGVector"?) I do not know whether passing an SGVector to a function copies the elements of the SGVector.
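
    For the stability point above, a minimal sketch of the usual fix (Eigen; assumes the matrix is symmetric positive definite, as in the Laplace approximation):

    #include <Eigen/Dense>
    #include <cmath>

    // Naive: forms the full determinant first, which over/underflows double
    // precision long before the log-determinant itself is out of range.
    double log_det_naive(const Eigen::MatrixXd& A)
    {
        return std::log(A.lu().determinant());
    }

    // Stable: for an SPD matrix with Cholesky factor L, det(A) = prod(L_ii)^2,
    // so log(det(A)) = 2 * sum(log(L_ii)) and the raw determinant never appears.
    double log_det_stable(const Eigen::MatrixXd& A)
    {
        Eigen::MatrixXd L = Eigen::LLT<Eigen::MatrixXd>(A).matrixL();
        return 2.0 * L.diagonal().array().log().sum();
    }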

  • Implement an example of variational approximation for binary GP classification

    This task is for the Variational Learning for Recommendations with Big Data project: http://shogun-toolbox.org/page/Events/gsoc2014_ideas#variational_learning

    Our goal is to reproduce a simple example of variational approximation. We will use a GP prior with zero mean and a linear kernel, and generate synthetic data using a logit likelihood. We will then compute an approximate Gaussian posterior N(m,V) with the restriction that the diagonal of V is 1. Our goal is to find m and V. We will use the KL method of Kuss and Rasmussen, 2005.

    I have demo code in MATLAB here, and the hope is to reproduce it using Shogun: https://github.com/emtiyaz/VariationalApproxExample

    You need to do the following two main tasks: (1) write a function similar to ElogLik.m for the logit likelihood; (2) interface the optimization in example.m using Shogun's LBFGS implementation.

    Please let us know that you are working on it, and feel free to ask any questions to @karlnapf or me.
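
    A minimal sketch of task (1), an ElogLik-style expectation of the logit log-likelihood under a Gaussian posterior marginal, done here by plain Monte Carlo (the function name and the sampling approach are illustrative assumptions, not the method used in the MATLAB demo):

    #include <cmath>
    #include <random>

    // E_{f ~ N(m, v)}[ log sigmoid(y * f) ] for one data point with label
    // y in {-1, +1}, estimated with n Monte Carlo draws.
    double elog_logit(double y, double m, double v, int n = 10000)
    {
        std::mt19937 gen(0);
        std::normal_distribution<double> normal(m, std::sqrt(v));
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
        {
            // log sigmoid(y*f) = -softplus(-y*f), computed stably.
            double z = -y * normal(gen);
            sum -= z > 0 ? z + std::log1p(std::exp(-z))
                         : std::log1p(std::exp(z));
        }
        return sum / n;
    }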

  • cv::Mat to CDenseFeature conversion Factory and vice versa.

    I have made a factory which directly converts any cv::Mat object into any required type of CDenseFeatures, and CDenseFeatures<float64_t> into any required type of cv::Mat.
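
    A minimal sketch of one direction of such a conversion, copying a CV_64F cv::Mat into an Eigen matrix as a stand-in for the Shogun-side container (the real factory would construct CDenseFeatures instead):

    #include <opencv2/core.hpp>
    #include <Eigen/Dense>

    // Copy a single-channel CV_64F cv::Mat into an Eigen matrix; the reverse
    // direction just swaps the roles of source and destination.
    Eigen::MatrixXd mat_to_dense(const cv::Mat& m)
    {
        CV_Assert(m.type() == CV_64F);
        Eigen::MatrixXd out(m.rows, m.cols);
        for (int r = 0; r < m.rows; ++r)
            for (int c = 0; c < m.cols; ++c)
                out(r, c) = m.at<double>(r, c);
        return out;
    }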

  • Added Documentation regarding issue #1878

    Added a Python notebook named 'pca_notebook.ipynb' in doc/ipython-notebooks/pca. It implements PCA on toy data for 2D-to-1D and 3D-to-2D projection, and Eigenfaces for data compression and face recognition using the att_face dataset.
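
    For reference, a minimal sketch of the 2D-to-1D projection the notebook demonstrates (Eigen, not Shogun's PCA class):

    #include <Eigen/Dense>

    // Project centered 2-D points onto their top principal component.
    // X is (n x 2), one sample per row; returns the n 1-D coordinates.
    Eigen::VectorXd pca_2d_to_1d(Eigen::MatrixXd X)
    {
        X.rowwise() -= X.colwise().mean();  // center the data
        Eigen::Matrix2d cov = X.transpose() * X / double(X.rows() - 1);
        Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> eig(cov);
        // Eigenvalues come back sorted ascending, so the last column of the
        // eigenvector matrix is the direction of maximum variance.
        return X * eig.eigenvectors().col(1);
    }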

  • WIP Write Generalized Linear Machine class

    #5005 #5000 This is the basic framework for the Generalized Linear Machine class. This class is supposed to implement the following distributions: BINOMIAL, GAMMA, SOFTPLUS, PROBIT, POISSON.

    The code has been written with the PyGLMNet library as a reference. However, I have only written code for the Poisson distribution so far (a sketch of the Poisson gradient follows the TODO list below).

    THIS IS A WORK IN PROGRESS

    This PR is meant to open a discussion about the implementation of the GLM and to gather feedback on my code. @lgoetz @geektoni

    TODO

    • [x] Write code.
    • [x] Add basic test.
    • [x] Add gradient test.
    • [x] Link GitHub gists for generating data.
    • [x] Check why the SGObject test is failing.
    • [ ] Use FeatureDispatchCRTP.
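
    For reference, a minimal sketch of the Poisson piece mentioned above (Eigen, log link; a hypothetical helper, not the class's actual API):

    #include <Eigen/Dense>

    // Poisson GLM with log link: rate lambda_i = exp(x_i . w). The gradient
    // of the negative log-likelihood w.r.t. w is X^T (exp(Xw) - y), which is
    // the quantity a gradient-based fit of the POISSON case needs.
    Eigen::VectorXd poisson_nll_grad(const Eigen::MatrixXd& X,
                                     const Eigen::VectorXd& y,
                                     const Eigen::VectorXd& w)
    {
        Eigen::VectorXd lambda = (X * w).array().exp().matrix();
        return X.transpose() * (lambda - y);
    }
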
  • Added DEPRECATED versions of statistic and variance in streaming MMD

    DEPRECATED versions are available with

    • statistic type S_UNBIASED_DEPRECATED
    • null variance estimation method NO_PERMUTATION_DEPRECATED
    • null approximation method MMD1_GAUSSIAN_DEPRECATED
  • One issue about using Shogun's optimizers in target languages

    @karlnapf In the CInference class:

    virtual void register_minimizer(Minimizer* minimizer);
    

    In the Minimizer class:

    #ifndef MINIMIZER_H
    #define MINIMIZER_H
    #include <shogun/lib/config.h>
    namespace shogun
    {
    
    /** @brief The minimizer base class.
     *
     */
    class Minimizer
    {
    public: 
            /** Do minimization and get the optimal value 
             * 
             * @return optimal value
             */
            virtual float64_t minimize()=0;
    
            virtual ~Minimizer() {}
    };
    
    }
    #endif
    

    Note that

    • CInference is a sub-class of SGObject
    • LBFGSMinimizer class is a sub-class of Minimizer
    • CSingleLaplaceInferenceMethod is a sub-class of CInference
    • Minimizer is NOT a sub-class of CSGObject

    The following lines of C++ code work:

    CSingleLaplaceInferenceMethod* inf = new CSingleLaplaceInferenceMethod();
    LBFGSMinimizer* opt=new LBFGSMinimizer();
    inf->register_minimizer(opt);
    

    However, the following lines of Python code do not work:

    inf=SingleLaplaceInferenceMethod()
    opt = LBFGSMinimizer()
    inf.register_minimizer(opt)
    

    Error output:

    TypeError: in method 'Inference_register_minimizer', argument 2 of type 'Minimizer *'
    
  • Website Not Secure warning

    Using macOS Monterey and Chrome v104.0.5112.101.

    Website linked from GitHub: https://www.shogun.ml/

    [screenshot: browser "Not Secure" warning]

    Website linked from NumFOCUS: https://www.shogun-toolbox.org/

    [screenshot: browser "Not Secure" warning]

  • Examples are outdated

    I just tried to make the Newton example work against 6.1.4 under Windows. It does not even compile because of API changes all over the place. It would be great to have at least one C/C++ AI/DL package with working examples. Shogun was more or less my last hope, as TensorFlow's C API is incomplete.

  • Machine objects should return a reference to themselves

    Machine objects should return a reference to themselves (like in sklearn):

    auto machine = pipeline->over(std::make_shared<NormOne>())
                       ->composite()
                           ->over(std::make_shared<MulticlassLibLinear>())
                           ->over(std::make_shared<MulticlassOCAS>())
                       ->then(std::make_shared<MeanRule>());

    machine->train(train_feats, train_labels);
    auto pred = machine->apply_multiclass(test_feats);
    

    should be simply

    auto pred = pipeline->over(std::make_shared<NormOne>())
                    ->composite()
                        ->over(std::make_shared<MulticlassLibLinear>())
                        ->over(std::make_shared<MulticlassOCAS>())
                    ->then(std::make_shared<MeanRule>())
                    ->train(train_feats, train_labels)
                    ->apply_multiclass(test_feats);
    

    This should be a simple fix to the Machine::train signature, but it might break some code.
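
    A minimal sketch of what the signature change could look like (assuming shared_ptr-managed machines; names simplified, not the exact Shogun classes):

    #include <memory>

    class Machine : public std::enable_shared_from_this<Machine>
    {
    public:
        virtual ~Machine() = default;

        // Returning the machine itself (as sklearn's fit() does) enables the
        // ->train(...)->apply_multiclass(...) chaining shown above. Requires
        // the object to be owned by a std::shared_ptr, which the
        // make_shared-based usage above already guarantees.
        std::shared_ptr<Machine> train(/* features, labels */)
        {
            // ... existing training logic ...
            return shared_from_this();
        }
    };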

  • Error freeing memory LibSVM when exiting sample application

    I built shogun master on Windows 10 x64 with Visual Studio 2019. I built the sample classifier_minimal_svm; it works, but I get this error when exiting the application:

    Critical error detected c0000374
    classifier_minimal_svm.exe has triggered a breakpoint.
    
    Exception thrown at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted 
    (parameters: 0x00007FFC396427F0).
    Unhandled exception at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted (parameters: 0x00007FFC396427F0).
    

    This is the stack trace:

    ntdll.dll!00007ffc395db0b9()	Unknown
    ntdll.dll!00007ffc395db083()	Unknown
    ntdll.dll!00007ffc395e390e()	Unknown
    ntdll.dll!00007ffc395e3c1a()	Unknown
    ntdll.dll!00007ffc3957ecb1()	Unknown
    ntdll.dll!00007ffc3958ce62()	Unknown
    ucrtbase.dll!00007ffc357ec7eb()	Unknown
    classifier_minimal_svm.exe!shogun::sg_free(void * ptr) Line 186	C++
    classifier_minimal_svm.exe!shogun::sg_generic_free<int,0>(int * ptr) Line 124	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::free_data() Line 405	C++
    classifier_minimal_svm.exe!shogun::SGReferencedData::unref() Line 102	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::~SGVector<int>() Line 173	C++
    classifier_minimal_svm.exe!shogun::KernelMachine::~KernelMachine() Line 79	C++
    classifier_minimal_svm.exe!shogun::SVM::~SVM() Line 40	C++
    classifier_minimal_svm.exe!shogun::LibSVM::~LibSVM() Line 37	C++
    classifier_minimal_svm.exe!shogun::LibSVM::`scalar deleting destructor'(unsigned int)	C++
    classifier_minimal_svm.exe!std::_Destroy_in_place<shogun::LibSVM>(shogun::LibSVM & _Obj) Line 269	C++
    classifier_minimal_svm.exe!std::_Ref_count_obj2<shogun::LibSVM>::_Destroy() Line 1446	C++
    classifier_minimal_svm.exe!std::_Ref_count_base::_Decref() Line 542	C++
    classifier_minimal_svm.exe!std::_Ptr_base<shogun::LibSVM>::_Decref() Line 776	C++
    classifier_minimal_svm.exe!std::shared_ptr<shogun::LibSVM>::~shared_ptr<shogun::LibSVM>() Line 1034	C++
    classifier_minimal_svm.exe!main(int argc, char * * argv) Line 41	C++
    [Inline Frame] classifier_minimal_svm.exe!invoke_main() Line 78	C++
    classifier_minimal_svm.exe!__scrt_common_main_seh() Line 288	C++
    

    I see that in a previous release there was this line of code, since removed:

    // free up memory
    SG_UNREF(svm);
    
  • Make Machine class stateless

    @LiuYuHui's main GSoC project. The Machine class becomes stateless with respect to Features and Labels, which means that the user has to provide features and labels when fitting a Machine. This is essentially done by adding the notion of (Non)Parametric Machines.
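
    A minimal sketch of the stateless interface this describes (hypothetical signatures, not the exact API of the project):

    #include <memory>

    class Features;
    class Labels;

    // Stateless: train() receives the data directly instead of reading
    // members set beforehand via set_features()/set_labels(), so the
    // machine keeps no feature/label state between calls.
    class StatelessMachine
    {
    public:
        virtual ~StatelessMachine() = default;
        virtual void train(const std::shared_ptr<Features>& features,
                           const std::shared_ptr<Labels>& labels) = 0;
    };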

  • Remove SVM from interfaces

    Currently we expose SVM to the interfaces, but this is somewhat redundant as we already expose Machine (and SVM is a Machine-derived class). This was just a hack to make sure that only SVM types can be passed with put to objects expecting an SVM parameter. But now we can test at runtime whether the object being put can be cast to the expected type: https://github.com/shogun-toolbox/shogun/blob/f143eaf9fdd95f138335e49b3a26693847af5dac/src/shogun/kernel/string/HistogramWordStringKernel.cpp#L428-L429

    So every Machine which expects an SVM parameter (e.g. MKLClassification) should have this castability check and register the svm parameter as a Machine. This would avoid having to cast a machine to an SVM when passing it to things like MKLClassification: https://github.com/shogun-toolbox/shogun/blob/f143eaf9fdd95f138335e49b3a26693847af5dac/examples/undocumented/python/evaluation_cross_validation_mkl_weight_storage.py#L41-L42

    @karlnapf the only issue will be how the user gets access to compute_svm_primal_objective and compute_svm_dual_objective. Maybe SVM shouldn't be removed from SWIG, but classes like MKL should register SVM as a Machine? That way we avoid having to cast the Machine object when it is just passed as an argument in put.
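
    A minimal sketch of the runtime castability check described above (simplified stand-in classes; the real check lives at the linked HistogramWordStringKernel lines):

    #include <memory>
    #include <stdexcept>

    class Machine { public: virtual ~Machine() = default; };
    class SVM : public Machine { };

    // Accept any Machine, but verify at runtime that it is actually an SVM;
    // this is the castability check that put() can now perform itself.
    void set_svm_parameter(const std::shared_ptr<Machine>& machine)
    {
        auto svm = std::dynamic_pointer_cast<SVM>(machine);
        if (!svm)
            throw std::invalid_argument("expected an SVM-derived Machine");
        // ... register svm as the parameter ...
    }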