mlpackgo

a fast, flexible machine learning library

Home | Documentation | Doxygen | Community | Help | IRC Chat

This repository contains the Go bindings for mlpack. These bindings are auto-generated, and so this repository is not really maintained or monitored.

If you are looking for the documentation for the Go bindings, see the Documentation section below.

If you are having trouble or want to learn more, try looking in the main mlpack repository at https://github.com/mlpack/mlpack. Any issues with the Go bindings should be filed there.

Documentation

Overview

mlpack is a fast, flexible machine learning library, written in C++, that aims to provide fast, extensible implementations of cutting-edge machine learning algorithms. mlpack provides these algorithms as simple command-line programs, Go bindings, and C++ classes which can then be integrated into larger-scale machine learning solutions.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Adaboost

func Adaboost(param *AdaboostOptionalParam) (*mat.Dense, adaBoostModel, *mat.Dense, *mat.Dense)

This program implements the AdaBoost (or Adaptive Boosting) algorithm. The variant of AdaBoost implemented here is AdaBoost.MH. It uses a weak learner, either decision stumps or perceptrons, and over many iterations, creates a strong learner that is a weighted ensemble of weak learners. It runs iterations until the change in the weighted training error falls below a tolerance value.

For more information about the algorithm, see the paper "Improved Boosting Algorithms Using Confidence-Rated Predictions", by R.E. Schapire and Y. Singer.

This program allows training of an AdaBoost model, and then application of that model to a test dataset. To train a model, a dataset must be passed with the "Training" option. Labels can be given with the "Labels" option; if no labels are specified, the labels will be assumed to be the last column of the input dataset. Alternately, an AdaBoost model may be loaded with the "InputModel" option.

Once a model is trained or loaded, it may be used to provide class predictions for a given test dataset. A test dataset may be specified with the "Test" parameter. The predicted classes for each point in the test dataset are output to the "Predictions" output parameter. The AdaBoost model itself is output to the "OutputModel" output parameter.

Note: the following parameter is deprecated and will be removed in mlpack 4.0.0: "Output". Use "Predictions" instead of "Output".

For example, to run AdaBoost on an input dataset data with labels labels and perceptrons as the weak learner type, storing the trained model in model, one could use the following command:

// Initialize optional parameters for Adaboost().
param := mlpack.AdaboostOptions()
param.Training = data
param.Labels = labels
param.WeakLearner = "perceptron"

_, model, _, _ := mlpack.Adaboost(param)

Similarly, an already-trained model in model can be used to provide class predictions from test data test_data and store the output in predictions with the following command:

// Initialize optional parameters for Adaboost().
param := mlpack.AdaboostOptions()
param.InputModel = &model
param.Test = test_data

_, _, predictions, _ := mlpack.Adaboost(param)

Input parameters:

  • InputModel (adaBoostModel): Input AdaBoost model.
  • Iterations (int): The maximum number of boosting iterations to be run (0 will run until convergence.) Default value 1000.
  • Labels (mat.Dense): Labels for the training set.
  • Test (mat.Dense): Test dataset.
  • Tolerance (float64): The tolerance for change in values of the weighted error during training. Default value 1e-10.
  • Training (mat.Dense): Dataset for training AdaBoost.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
  • WeakLearner (string): The type of weak learner to use: 'decision_stump', or 'perceptron'. Default value 'decision_stump'.

Output parameters:

  • output (mat.Dense): Predicted labels for the test set.
  • outputModel (adaBoostModel): Output trained AdaBoost model.
  • predictions (mat.Dense): Predicted labels for the test set.
  • probabilities (mat.Dense): Predicted class probabilities for each point in the test set.

func ApproxKfn

func ApproxKfn(param *ApproxKfnOptionalParam) (*mat.Dense, *mat.Dense, approxkfnModel)

This program implements two strategies for furthest neighbor search. These strategies are:

  • The 'qdafn' algorithm from "Approximate Furthest Neighbor in High Dimensions" by R. Pagh, F. Silvestri, J. Sivertsen, and M. Skala, in Similarity Search and Applications 2015 (SISAP).
  • The 'DrusillaSelect' algorithm from "Fast approximate furthest neighbors with data-dependent candidate selection", by R.R. Curtin and A.B. Gardner, in Similarity Search and Applications 2016 (SISAP).

These two strategies give approximate results for the furthest neighbor search problem and can be used as fast replacements for other furthest neighbor techniques such as those found in the mlpack_kfn program. Note that typically, the 'ds' algorithm requires far fewer tables and projections than the 'qdafn' algorithm.

Specify a reference set (set to search in) with "Reference", specify a query set with "Query", and specify algorithm parameters with "NumTables" and "NumProjections" (or don't and defaults will be used). The algorithm to be used (either 'ds'---the default---or 'qdafn') may be specified with "Algorithm". Also specify the number of neighbors to search for with "K".

Note that for 'qdafn' in lower dimensions, "NumProjections" may need to be set to a high value in order to return results for each query point.

If no query set is specified, the reference set will be used as the query set.

The "OutputModel" output parameter may be used to store the built model, and

an input model may be loaded instead of specifying a reference set with the "InputModel" option.

Results for each query point can be stored with the "Neighbors" and "Distances" output parameters. Each row of these output matrices holds the k distances or neighbor indices for each query point.

For example, to find the 5 approximate furthest neighbors with reference_set as the reference set and query_set as the query set using DrusillaSelect, storing the furthest neighbor indices to neighbors and the furthest neighbor distances to distances, one could call

// Initialize optional parameters for ApproxKfn().
param := mlpack.ApproxKfnOptions()
param.Query = query_set
param.Reference = reference_set
param.K = 5
param.Algorithm = "ds"

distances, neighbors, _ := mlpack.ApproxKfn(param)

and to perform approximate all-furthest-neighbors search with k=1 on the set data storing only the furthest neighbor distances to distances, one could call

// Initialize optional parameters for ApproxKfn().
param := mlpack.ApproxKfnOptions()
param.Reference = reference_set
param.K = 1

distances, _, _ := mlpack.ApproxKfn(param)

A trained model can be re-used. If a model has been previously saved to model, then we may find 3 approximate furthest neighbors on a query set new_query_set using that model and store the furthest neighbor indices into neighbors by calling

// Initialize optional parameters for ApproxKfn().
param := mlpack.ApproxKfnOptions()
param.InputModel = &model
param.Query = new_query_set
param.K = 3

_, neighbors, _ := mlpack.ApproxKfn(param)

Input parameters:

  • Algorithm (string): Algorithm to use: 'ds' or 'qdafn'. Default value 'ds'.
  • CalculateError (bool): If set, calculate the average distance error for the first furthest neighbor only.
  • ExactDistances (mat.Dense): Matrix containing exact distances to furthest neighbors; this can be used to avoid explicit calculation when "CalculateError" is set.
  • InputModel (approxkfnModel): File containing input model.
  • K (int): Number of furthest neighbors to search for. Default value 0.
  • NumProjections (int): Number of projections to use in each hash table. Default value 5.
  • NumTables (int): Number of hash tables to use. Default value 5.
  • Query (mat.Dense): Matrix containing query points.
  • Reference (mat.Dense): Matrix containing the reference dataset.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • distances (mat.Dense): Matrix to save furthest neighbor distances to.
  • neighbors (mat.Dense): Matrix to save neighbor indices to.
  • outputModel (approxkfnModel): File to save output model to.

func BayesianLinearRegression

func BayesianLinearRegression(param *BayesianLinearRegressionOptionalParam) (bayesianLinearRegression, *mat.Dense, *mat.Dense)

An implementation of Bayesian linear regression, a probabilistic view of linear regression. The final solution is obtained by computing a posterior distribution from a Gaussian likelihood and a zero-mean Gaussian isotropic prior on the solution. Optimization is automatic and does not require cross-validation: the parameters are tuned by maximizing the evidence function (the marginal likelihood). This procedure incorporates Ockham's razor, penalizing overly complex solutions.

This program is able to train a Bayesian linear regression model or load a model from file, output regression predictions for a test set, and save the trained model to a file.

To train a BayesianLinearRegression model, the "Input" and "Responses" parameters must be given. The "Center" and "Scale" parameters control the centering and normalizing options. A trained model can be saved with the "OutputModel" output parameter. If no training is desired at all, a model can be passed via the "InputModel" parameter.

The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the "Test" parameter. Predicted responses to the test points can be saved with the "Predictions" output parameter. The corresponding standard deviations can be saved by specifying the "Stds" parameter.

For example, the following command trains a model on the data data and responses responses, with centering enabled and scaling disabled (so ordinary Bayesian linear regression is being solved), and then saves the model to blr_model:

// Initialize optional parameters for BayesianLinearRegression().
param := mlpack.BayesianLinearRegressionOptions()
param.Input = data
param.Responses = responses
param.Center = true
param.Scale = false

blr_model, _, _ := mlpack.BayesianLinearRegression(param)

The following command uses the blr_model to provide predicted responses for the data test and save those responses to test_predictions:

// Initialize optional parameters for BayesianLinearRegression().
param := mlpack.BayesianLinearRegressionOptions()
param.InputModel = &blr_model
param.Test = test

_, test_predictions, _ := mlpack.BayesianLinearRegression(param)

Because the estimator computes a predictive distribution instead of a simple point estimate, the "Stds" parameter allows one to save the prediction uncertainties:

// Initialize optional parameters for BayesianLinearRegression().
param := mlpack.BayesianLinearRegressionOptions()
param.InputModel = &blr_model
param.Test = test

_, test_predictions, stds := mlpack.BayesianLinearRegression(param)

Input parameters:

  • Center (bool): Center the data and fit the intercept if enabled.
  • Input (mat.Dense): Matrix of covariates (X).
  • InputModel (bayesianLinearRegression): Trained BayesianLinearRegression model to use.
  • Responses (mat.Dense): Matrix of responses/observations (y).
  • Scale (bool): Scale each feature by their standard deviations if enabled.
  • Test (mat.Dense): Matrix containing points to regress on (test points).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (bayesianLinearRegression): Output BayesianLinearRegression model.
  • predictions (mat.Dense): If "Test" is specified, this is where the predicted responses will be saved.
  • stds (mat.Dense): If specified, this is where the standard deviations of the predictive distribution will be saved.

func Cf

func Cf(param *CfOptionalParam) (*mat.Dense, cfModel)

This program performs collaborative filtering (CF) on the given dataset. Given a list of user, item and preferences (the "Training" parameter), the program will perform a matrix decomposition and then can perform a series of actions related to collaborative filtering. Alternately, the program can load an existing saved CF model with the "InputModel" parameter and then use that model to provide recommendations or predict values.

The input matrix should be a 3-column matrix of ratings, where each row contains a user index, an item index, and that user's rating of that item. Both the users and items should be numeric indices, not names. The indices are assumed to start from 0.
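For illustration, such a ratings matrix could be assembled directly with gonum's mat package and used for training (a minimal sketch; the ratings values are made up):

// Each row is one (user, item, rating) triple; user and item indices start from 0.
ratings := mat.NewDense(4, 3, []float64{
    0, 0, 5,
    0, 1, 3,
    1, 0, 4,
    2, 1, 2,
})

// Train a CF model on these ratings.
param := mlpack.CfOptions()
param.Training = ratings
_, model := mlpack.Cf(param)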

A set of query users for which recommendations can be generated may be specified with the "Query" parameter; alternately, recommendations may be generated for every user in the dataset by specifying the "AllUserRecommendations" parameter. In addition, the number of recommendations per user to generate can be specified with the "Recommendations" parameter, and the number of similar users (the size of the neighborhood) to be considered when generating recommendations can be specified with the "Neighborhood" parameter.

For performing the matrix decomposition, the following optimization algorithms can be specified via the "Algorithm" parameter:

  • 'RegSVD' -- Regularized SVD using an SGD optimizer
  • 'NMF' -- Non-negative matrix factorization with alternating least squares update rules
  • 'BatchSVD' -- SVD batch learning
  • 'SVDIncompleteIncremental' -- SVD incomplete incremental learning
  • 'SVDCompleteIncremental' -- SVD complete incremental learning
  • 'BiasSVD' -- Bias SVD using an SGD optimizer
  • 'SVDPP' -- SVD++ using an SGD optimizer

The following neighbor search algorithms can be specified via the "NeighborSearch" parameter:

  • 'cosine' -- Cosine Search Algorithm
  • 'euclidean' -- Euclidean Search Algorithm
  • 'pearson' -- Pearson Search Algorithm

The following weight interpolation algorithms can be specified via the "Interpolation" parameter:

  • 'average' -- Average Interpolation Algorithm
  • 'regression' -- Regression Interpolation Algorithm
  • 'similarity' -- Similarity Interpolation Algorithm

The following ranking normalization algorithms can be specified via the "Normalization" parameter:

  • 'none' -- No Normalization
  • 'item_mean' -- Item Mean Normalization
  • 'overall_mean' -- Overall Mean Normalization
  • 'user_mean' -- User Mean Normalization
  • 'z_score' -- Z-Score Normalization

A trained model may be saved with the "OutputModel" output parameter.

To train a CF model on a dataset training_set using NMF for decomposition and saving the trained model to model, one could call:

// Initialize optional parameters for Cf().
param := mlpack.CfOptions()
param.Training = training_set
param.Algorithm = "NMF"

_, model := mlpack.Cf(param)

Then, to use this model to generate recommendations for the list of users in the query set users, storing 5 recommendations in recommendations, one could call

// Initialize optional parameters for Cf().
param := mlpack.CfOptions()
param.InputModel = &model
param.Query = users
param.Recommendations = 5

recommendations, _ := mlpack.Cf(param)

Input parameters:

  • Algorithm (string): Algorithm used for matrix factorization. Default value 'NMF'.
  • AllUserRecommendations (bool): Generate recommendations for all users.
  • InputModel (cfModel): Trained CF model to load.
  • Interpolation (string): Algorithm used for weight interpolation. Default value 'average'.
  • IterationOnlyTermination (bool): Terminate only when the maximum number of iterations is reached.
  • MaxIterations (int): Maximum number of iterations. If set to zero, there is no limit on the number of iterations. Default value 1000.
  • MinResidue (float64): Residue required to terminate the factorization (lower values generally mean better fits). Default value 1e-05.
  • NeighborSearch (string): Algorithm used for neighbor search. Default value 'euclidean'.
  • Neighborhood (int): Size of the neighborhood of similar users to consider for each query user. Default value 5.
  • Normalization (string): Normalization performed on the ratings. Default value 'none'.
  • Query (mat.Dense): List of query users for which recommendations should be generated.
  • Rank (int): Rank of decomposed matrices (if 0, a heuristic is used to estimate the rank). Default value 0.
  • Recommendations (int): Number of recommendations to generate for each query user. Default value 5.
  • Seed (int): Set the random seed (0 uses std::time(NULL)). Default value 0.
  • Test (mat.Dense): Test set to calculate RMSE on.
  • Training (mat.Dense): Input dataset to perform CF on.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix that will store output recommendations.
  • outputModel (cfModel): Output for trained CF model.

func DataAndInfo

func DataAndInfo() *matrixWithInfo

A function used to initialize a matrixWithInfo tuple, which holds a data matrix together with per-dimension categorical flags.
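No example is shown for this function. A minimal usage sketch is below; it assumes the matrixWithInfo type exposes Data and Categoricals fields (check your version of the bindings if it differs) and feeds the result to DecisionTree(), which accepts a matrixWithInfo training set:

// Wrap a dataset whose third dimension is categorical (field names are an
// assumption; verify against your version of the bindings).
d := mlpack.DataAndInfo()
d.Data = data                               // *mat.Dense holding the points
d.Categoricals = []bool{false, false, true} // one flag per dimension

param := mlpack.DecisionTreeOptions()
param.Training = d
param.Labels = labels
tree, _, _ := mlpack.DecisionTree(param)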

func Dbscan

func Dbscan(input *mat.Dense, param *DbscanOptionalParam) (*mat.Dense, *mat.Dense)

This program implements the DBSCAN algorithm for clustering using accelerated tree-based range search. The type of tree that is used may be parameterized, or brute-force range search may also be used.

The input dataset to be clustered may be specified with the "Input" parameter; the radius of each range search may be specified with the "Epsilon" parameters, and the minimum number of points in a cluster may be specified with the "MinSize" parameter.

The "Assignments" and "Centroids" output parameters may be used to save the output of the clustering. "Assignments" contains the cluster assignments of each point, and "Centroids" contains the centroids of each cluster.

The range search may be controlled with the "TreeType", "SingleMode", and "Naive" parameters. "TreeType" can control the type of tree used for range search; this can take a variety of values: 'kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'. The "SingleMode" parameter will force single-tree search (as opposed to the default dual-tree search), and "Naive" will force brute-force range search.

An example usage to run DBSCAN on the dataset in input with a radius of 0.5 and a minimum cluster size of 5 is given below:

// Initialize optional parameters for Dbscan().
param := mlpack.DbscanOptions()
param.Epsilon = 0.5
param.MinSize = 5

assignments, centroids := mlpack.Dbscan(input, param)

Input parameters:

  • input (mat.Dense): Input dataset to cluster.
  • Epsilon (float64): Radius of each range search. Default value 1.
  • MinSize (int): Minimum number of points for a cluster. Default value 5.
  • Naive (bool): If set, brute-force range search (not tree-based) will be used.
  • SelectionType (string): If using point selection policy, the type of selection to use ('ordered', 'random'). Default value 'ordered'.
  • SingleMode (bool): If set, single-tree range search (not dual-tree) will be used.
  • TreeType (string): If using single-tree or dual-tree search, the type of tree to use ('kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'). Default value 'kd'.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • assignments (mat.Dense): Output matrix for assignments of each point.
  • centroids (mat.Dense): Matrix to save output centroids to.

func DecisionTree

func DecisionTree(param *DecisionTreeOptionalParam) (decisionTreeModel, *mat.Dense, *mat.Dense)

Train and evaluate using a decision tree. Given a dataset containing numeric or categorical features, and associated labels for each point in the dataset, this program can train a decision tree on that data.

The training set and associated labels are specified with the "Training" and "Labels" parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if "Labels" is not specified, the labels are assumed to be the last dimension of the training dataset.

When a model is trained, the "OutputModel" output parameter may be used to save the trained model. A model may be loaded for predictions with the "InputModel" parameter. The "InputModel" parameter may not be specified when the "Training" parameter is specified. The "MinimumLeafSize" parameter specifies the minimum number of training points that must fall into each leaf for it to be split. The "MinimumGainSplit" parameter specifies the minimum gain that is needed for the node to split. The "MaximumDepth" parameter specifies the maximum depth of the tree. If "PrintTrainingError" is specified, the training error will be printed.

Test data may be specified with the "Test" parameter, and if performance numbers are desired for that test set, labels may be specified with the "TestLabels" parameter. Predictions for each test point may be saved via the "Predictions" output parameter. Class probabilities for each prediction may be saved with the "Probabilities" output parameter.

For example, to train a decision tree with a minimum leaf size of 20 on the dataset contained in data with labels labels, saving the output model to tree and printing the training accuracy, one could call

// Initialize optional parameters for DecisionTree().
param := mlpack.DecisionTreeOptions()
param.Training = data
param.Labels = labels
param.MinimumLeafSize = 20
param.MinimumGainSplit = 0.001
param.PrintTrainingAccuracy = true

tree, _, _ := mlpack.DecisionTree(param)

Then, to use that model to classify points in test_set and print the test error given the labels test_labels using that model, while saving the predictions for each point to predictions, one could call

// Initialize optional parameters for DecisionTree().
param := mlpack.DecisionTreeOptions()
param.InputModel = &tree
param.Test = test_set
param.TestLabels = test_labels

_, predictions, _ := mlpack.DecisionTree(param)

Input parameters:

  • InputModel (decisionTreeModel): Pre-trained decision tree, to be used with test points.
  • Labels (mat.Dense): Training labels.
  • MaximumDepth (int): Maximum depth of the tree (0 means no limit). Default value 0.
  • MinimumGainSplit (float64): Minimum gain for node splitting. Default value 1e-07.
  • MinimumLeafSize (int): Minimum number of points in a leaf. Default value 20.
  • PrintTrainingAccuracy (bool): Print the training accuracy.
  • PrintTrainingError (bool): Print the training error (deprecated; will be removed in mlpack 4.0.0).
  • Test (matrixWithInfo): Testing dataset (may be categorical).
  • TestLabels (mat.Dense): Test point labels, if accuracy calculation is desired.
  • Training (matrixWithInfo): Training dataset (may be categorical).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
  • Weights (mat.Dense): The weights of the labels (one weight per training point).

Output parameters:

  • outputModel (decisionTreeModel): Output for trained decision tree.
  • predictions (mat.Dense): Class predictions for each test point.
  • probabilities (mat.Dense): Class probabilities for each test point.

func Det

func Det(param *DetOptionalParam) (dTree, string, string, *mat.Dense, *mat.Dense, *mat.Dense)

This program performs a number of functions related to Density Estimation Trees. The optimal Density Estimation Tree (DET) can be trained on a set of data (specified by "Training") using cross-validation (with number of folds specified with the "Folds" parameter). This trained density estimation tree may then be saved with the "OutputModel" output parameter.

The variable importances (that is, the feature importance values for each dimension) may be saved with the "Vi" output parameter, and the density estimates for each training point may be saved with the "TrainingSetEstimates" output parameter.

Enabling path printing for each node outputs the path from the root node to a leaf for each entry in the test set, or training set (if a test set is not provided). Strings like 'LRLRLR' (indicating that traversal went to the left child, then the right child, then the left child, and so forth) will be output. If 'lr-id' or 'id-lr' are given as the "PathFormat" parameter, then the ID (tag) of every node along the path will be printed after or before the L or R character indicating the direction of traversal, respectively.

This program also can provide density estimates for a set of test points, specified in the "Test" parameter. The density estimation tree used for this task will be the tree that was trained on the given training points, or a tree given as the parameter "InputModel". The density estimates for the test points may be saved using the "TestSetEstimates" output parameter.
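No example call is shown for Det(). A minimal training sketch, assuming a DetOptions() constructor by analogy with the other option constructors in this package:

// Train a density estimation tree on data, with 10-fold cross-validation
// for pruning, keeping only the trained tree.
param := mlpack.DetOptions()
param.Training = data
param.Folds = 10

tree, _, _, _, _, _ := mlpack.Det(param)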

Input parameters:

  • Folds (int): The number of folds of cross-validation to perform for the estimation (0 is LOOCV). Default value 10.
  • InputModel (dTree): Trained density estimation tree to load.
  • MaxLeafSize (int): The maximum size of a leaf in the unpruned, fully grown DET. Default value 10.
  • MinLeafSize (int): The minimum size of a leaf in the unpruned, fully grown DET. Default value 5.
  • PathFormat (string): The format of path printing: 'lr', 'id-lr', or 'lr-id'. Default value 'lr'.
  • SkipPruning (bool): Whether to bypass the pruning process and output the unpruned tree only.
  • Test (mat.Dense): A set of test points to estimate the density of.
  • Training (mat.Dense): The data set on which to build a density estimation tree.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (dTree): Output to save trained density estimation tree to.
  • tagCountersFile (string): The file to output the number of points that went to each leaf. Default value ''.
  • tagFile (string): The file to output the tags (and possibly paths) for each sample in the test set. Default value ''.
  • testSetEstimates (mat.Dense): The output estimates on the test set from the final optimally pruned tree.
  • trainingSetEstimates (mat.Dense): The output density estimates on the training set from the final optimally pruned tree.
  • vi (mat.Dense): The output variable importance values for each feature.

func DownloadFile

func DownloadFile(url string, filename string) error

DownloadFile() downloads the file from the given url and saves it to the given filename.
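A minimal usage sketch (the URL and filename here are illustrative, and the usual log import is assumed):

// Fetch a remote file and store it in the working directory.
err := mlpack.DownloadFile("https://example.com/data.csv", "data.csv")
if err != nil {
    log.Fatal(err)
}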

func Emst

func Emst(input *mat.Dense, param *EmstOptionalParam) *mat.Dense

This program can compute the Euclidean minimum spanning tree of a set of input points using the dual-tree Boruvka algorithm.

The set to calculate the minimum spanning tree of is specified with the "Input" parameter, and the output may be saved with the "Output" output parameter.

The "LeafSize" parameter controls the leaf size of the kd-tree that is used to calculate the minimum spanning tree, and if the "Naive" option is given, then brute-force search is used (this is typically much slower in low dimensions). The leaf size does not affect the results, but it may have some effect on the runtime of the algorithm.

For example, the minimum spanning tree of the input dataset data can be calculated with a leaf size of 20 and stored as spanning_tree using the following command:

// Initialize optional parameters for Emst().
param := mlpack.EmstOptions()
param.LeafSize = 20

spanning_tree := mlpack.Emst(data, param)

The output matrix is a three-column matrix, where each row indicates an edge. The first column corresponds to the lesser index of the edge; the second column corresponds to the greater index of the edge; and the third column corresponds to the distance between the two points.
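For instance, the edges of spanning_tree from the example above could be read back with gonum's mat API (the usual fmt import is assumed):

// Each row of the edge list is (lesser index, greater index, distance).
rows, _ := spanning_tree.Dims()
for i := 0; i < rows; i++ {
    u := int(spanning_tree.At(i, 0))
    v := int(spanning_tree.At(i, 1))
    fmt.Printf("edge (%d, %d) with length %f\n", u, v, spanning_tree.At(i, 2))
}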

Input parameters:

  • input (mat.Dense): Input data matrix.
  • LeafSize (int): Leaf size in the kd-tree. One-element leaves give the empirically best performance, but at the cost of greater memory requirements. Default value 1.
  • Naive (bool): Compute the MST using the O(n^2) naive algorithm.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Output data. Stored as an edge list.

func Fastmks

func Fastmks(param *FastmksOptionalParam) (*mat.Dense, *mat.Dense, fastmksModel)

This program will find the k maximum kernels of a set of points, using a query set and a reference set (which can optionally be the same set). More specifically, for each point in the query set, the k points in the reference set with maximum kernel evaluations are found. The kernel function used is specified with the "Kernel" parameter.

For example, the following command will calculate, for each point in the query set query, the five points in the reference set reference with maximum kernel evaluation using the linear kernel. The kernel evaluations may be saved with the kernels output parameter and the indices may be saved with the indices output parameter.

// Initialize optional parameters for Fastmks().
param := mlpack.FastmksOptions()
param.K = 5
param.Reference = reference
param.Query = query
param.Kernel = "linear"

indices, kernels, _ := mlpack.Fastmks(param)

The output matrices are organized such that row i and column j in the indices matrix corresponds to the index of the point in the reference set that has j'th largest kernel evaluation with the point in the query set with index i. Row i and column j in the kernels matrix corresponds to the kernel evaluation between those two points.
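Continuing the example above, the best match for the first query point could be read off like this (a sketch using gonum's mat API; the fmt import is assumed):

// Reference point with the largest kernel evaluation for query point 0,
// and the corresponding kernel value.
bestIdx := int(indices.At(0, 0))
bestVal := kernels.At(0, 0)
fmt.Printf("query 0: reference %d, kernel value %f\n", bestIdx, bestVal)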

This program performs FastMKS using a cover tree. The base used to build the cover tree can be specified with the "Base" parameter.

Input parameters:

  • Bandwidth (float64): Bandwidth (for Gaussian, Epanechnikov, and triangular kernels). Default value 1.
  • Base (float64): Base to use during cover tree construction. Default value 2.
  • Degree (float64): Degree of polynomial kernel. Default value 2.
  • InputModel (fastmksModel): Input FastMKS model to use.
  • K (int): Number of maximum kernels to find. Default value 0.
  • Kernel (string): Kernel type to use: 'linear', 'polynomial', 'cosine', 'gaussian', 'epanechnikov', 'triangular', 'hyptan'. Default value 'linear'.
  • Naive (bool): If true, O(n^2) naive mode is used for computation.
  • Offset (float64): Offset of kernel (for polynomial and hyptan kernels). Default value 0.
  • Query (mat.Dense): The query dataset.
  • Reference (mat.Dense): The reference dataset.
  • Scale (float64): Scale of kernel (for hyptan kernel). Default value 1.
  • Single (bool): If true, single-tree search is used (as opposed to dual-tree search).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • indices (mat.Dense): Output matrix of indices.
  • kernels (mat.Dense): Output matrix of kernels.
  • outputModel (fastmksModel): Output for FastMKS model.

func GmmGenerate

func GmmGenerate(inputModel *gmm, samples int, param *GmmGenerateOptionalParam) *mat.Dense

This program is able to generate samples from a pre-trained GMM (use gmm_train to train a GMM). The pre-trained GMM must be specified with the "InputModel" parameter. The number of samples to generate is specified by the "Samples" parameter. Output samples may be saved with the "Output" output parameter.

The following command can be used to generate 100 samples from the pre-trained GMM gmm and store those generated samples in samples:

// Initialize optional parameters for GmmGenerate().
param := mlpack.GmmGenerateOptions()

samples := mlpack.GmmGenerate(&gmm, 100, param)

Input parameters:

  • inputModel (gmm): Input GMM model to generate samples from.
  • samples (int): Number of samples to generate.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to save output samples in.

func GmmProbability

func GmmProbability(input *mat.Dense, inputModel *gmm, param *GmmProbabilityOptionalParam) *mat.Dense

This program calculates the probability that given points came from a given GMM (that is, P(X | gmm)). The GMM is specified with the "InputModel" parameter, and the points are specified with the "Input" parameter. The output probabilities may be saved via the "Output" output parameter.

So, for example, to calculate the probabilities of each point in points coming from the pre-trained GMM gmm, while storing those probabilities in probs, the following command could be used:

// Initialize optional parameters for GmmProbability().
param := mlpack.GmmProbabilityOptions()

probs := mlpack.GmmProbability(points, &gmm, param)

Input parameters:

  • input (mat.Dense): Input matrix to calculate probabilities of.
  • inputModel (gmm): Input GMM to use as model.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to store calculated probabilities in.

func GmmTrain

func GmmTrain(gaussians int, input *mat.Dense, param *GmmTrainOptionalParam) gmm

This program fits a parametric Gaussian mixture model (GMM) to data, using the EM algorithm to find the maximum-likelihood estimate. The model may be saved and reused by other mlpack GMM tools.

The input data to train on must be specified with the "Input" parameter, and the number of Gaussians in the model must be specified with the "Gaussians" parameter. Optionally, many trials with different random initializations may be run, and the result with highest log-likelihood on the training data will be taken. The number of trials to run is specified with the "Trials" parameter. By default, only one trial is run.

The tolerance for convergence and maximum number of iterations of the EM algorithm are specified with the "Tolerance" and "MaxIterations" parameters, respectively. The GMM may be initialized for training with another model, specified with the "InputModel" parameter. Otherwise, the model is initialized by running k-means on the data. The k-means clustering initialization can be controlled with the "KmeansMaxIterations", "RefinedStart", "Samplings", and "Percentage" parameters. If "RefinedStart" is specified, then the Bradley-Fayyad refined start initialization will be used. This can often lead to better clustering results.

The "DiagonalCovariance" flag will cause the learned covariances to be diagonal matrices. This significantly simplifies the model itself and causes training to be faster, but restricts the ability to fit more complex GMMs.

If GMM training fails with an error indicating that a covariance matrix could not be inverted, make sure that the "NoForcePositive" parameter is not specified. Alternately, adding a small amount of Gaussian noise (using the "Noise" parameter) to the entire dataset may help prevent Gaussians with zero variance in a particular dimension, which is usually the cause of non-invertible covariance matrices.

The "NoForcePositive" parameter, if set, will avoid the checks after each iteration of the EM algorithm which ensure that the covariance matrices are positive definite. Specifying the flag can cause faster runtime, but may also cause non-positive definite covariance matrices, which will cause the program to crash.

As an example, to train a 6-Gaussian GMM on the data in data with a maximum of 100 iterations of EM and 3 trials, saving the trained GMM to gmm, the following command can be used:

// Initialize optional parameters for GmmTrain().
param := mlpack.GmmTrainOptions()
param.Trials = 3
param.MaxIterations = 100

gmm := mlpack.GmmTrain(data, 6, param)

To re-train that GMM on another set of data data2, the following command may be used:

// Initialize optional parameters for GmmTrain().
param := mlpack.GmmTrainOptions()
param.InputModel = &gmm

new_gmm := mlpack.GmmTrain(data2, 6, param)

Input parameters:

  • gaussians (int): Number of Gaussians in the GMM.
  • input (mat.Dense): The training data on which the model will be fit.
  • DiagonalCovariance (bool): Force the covariance of the Gaussians to be diagonal. This can accelerate training time significantly.
  • InputModel (gmm): Initial input GMM model to start training with.
  • KmeansMaxIterations (int): Maximum number of iterations for the k-means algorithm (used to initialize EM). Default value 1000.
  • MaxIterations (int): Maximum number of iterations of EM algorithm (passing 0 will run until convergence). Default value 250.
  • NoForcePositive (bool): Do not force the covariance matrices to be positive definite.
  • Noise (float64): Variance of zero-mean Gaussian noise to add to data. Default value 0.
  • Percentage (float64): If using "RefinedStart", specify the percentage of the dataset used for each sampling (should be between 0.0 and 1.0). Default value 0.02.
  • RefinedStart (bool): During the initialization, use refined initial positions for k-means clustering (Bradley and Fayyad, 1998).
  • Samplings (int): If using "RefinedStart", specify the number of samplings used for initial points. Default value 100.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Tolerance (float64): Tolerance for convergence of EM. Default value 1e-10.
  • Trials (int): Number of trials to perform in training GMM. Default value 1.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (gmm): Output for trained GMM model.

func HmmGenerate

func HmmGenerate(length int, model *hmmModel, param *HmmGenerateOptionalParam) (*mat.Dense, *mat.Dense)

This utility takes an already-trained HMM, specified as the "Model" parameter, and generates a random observation sequence and hidden state sequence based on its parameters. The observation sequence may be saved with the "Output" output parameter, and the internal state sequence may be saved with the "State" output parameter.

The state to start the sequence in may be specified with the "StartState" parameter.

For example, to generate a sequence of length 150 from the HMM hmm and save the observation sequence to observations and the hidden state sequence to states, the following command may be used:

// Initialize optional parameters for HmmGenerate().
param := mlpack.HmmGenerateOptions()

observations, states := mlpack.HmmGenerate(150, &hmm, param)

Input parameters:

  • length (int): Length of sequence to generate.
  • model (hmmModel): Trained HMM to generate sequences with.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • StartState (int): Starting state of sequence. Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to save observation sequence to.
  • state (mat.Dense): Matrix to save hidden state sequence to.

func HmmLoglik

func HmmLoglik(input *mat.Dense, inputModel *hmmModel, param *HmmLoglikOptionalParam) float64

This utility takes an already-trained HMM, specified with the "InputModel" parameter, and evaluates the log-likelihood of a sequence of observations, given with the "Input" parameter. The computed log-likelihood is given as output.

For example, to compute the log-likelihood of the sequence seq with the pre-trained HMM hmm, the following command may be used:

// Initialize optional parameters for HmmLoglik().
param := mlpack.HmmLoglikOptions()

loglik := mlpack.HmmLoglik(seq, &hmm, param)

Input parameters:

  • input (mat.Dense): Matrix containing observations.
  • inputModel (hmmModel): Trained HMM to use.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • logLikelihood (float64): Log-likelihood of the sequence. Default value 0.

func HmmTrain

func HmmTrain(inputFile string, param *HmmTrainOptionalParam) hmmModel

This program allows a Hidden Markov Model to be trained on labeled or unlabeled data. It supports four types of HMMs: Discrete HMMs, Gaussian HMMs, GMM HMMs, or Diagonal GMM HMMs.

Either one input sequence can be specified (with "InputFile"), or a file containing a list of files with input sequences (when "InputFile" and "Batch" are used together). In addition, labels can be provided in the file specified by "LabelsFile"; if "Batch" is used, the file given to "LabelsFile" should contain a list of files of labels corresponding to the sequences in the file given to "InputFile".

The HMM is trained with the Baum-Welch algorithm if no labels are provided. The tolerance of the Baum-Welch algorithm can be set with the "Tolerance" option. By default, the transition matrix is randomly initialized and the emission distributions are initialized to fit the extent of the data.

Optionally, a pre-created HMM model can be used as a guess for the transition matrix and emission probabilities; this is specifiable with the "InputModel" parameter.
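No example call is shown for HmmTrain(). A minimal sketch for unlabeled Baum-Welch training, assuming a hypothetical observation file "obs.csv":

// Train a 3-state discrete HMM on a single observation sequence.
param := mlpack.HmmTrainOptions()
param.States = 3
param.Type = "discrete"

hmm := mlpack.HmmTrain("obs.csv", param)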

Input parameters:

  • inputFile (string): File containing input observations.
  • Batch (bool): If true, input_file (and if passed, labels_file) are expected to contain a list of files to use as input observation sequences (and label sequences).
  • Gaussians (int): Number of gaussians in each GMM (necessary when type is 'gmm'). Default value 0.
  • InputModel (hmmModel): Pre-existing HMM model to initialize training with.
  • LabelsFile (string): Optional file of hidden states, used for labeled training. Default value ''.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • States (int): Number of hidden states in HMM (necessary, unless model_file is specified). Default value 0.
  • Tolerance (float64): Tolerance of the Baum-Welch algorithm. Default value 1e-05.
  • Type (string): Type of HMM: discrete | gaussian | diag_gmm | gmm. Default value 'gaussian'.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (hmmModel): Output for trained HMM.

func HmmViterbi

func HmmViterbi(input *mat.Dense, inputModel *hmmModel, param *HmmViterbiOptionalParam) *mat.Dense

This utility takes an already-trained HMM, specified as "InputModel", and evaluates the most probable hidden state sequence of a given sequence of observations (specified as '"Input", using the Viterbi algorithm. The computed state sequence may be saved using the "Output" output parameter.

For example, to predict the state sequence of the observations obs using the HMM hmm, storing the predicted state sequence to states, the following command could be used:

// Initialize optional parameters for HmmViterbi().
param := mlpack.HmmViterbiOptions()

states := mlpack.HmmViterbi(obs, &hmm, param)

Input parameters:

  • input (mat.Dense): Matrix containing observations.
  • inputModel (hmmModel): Trained HMM to use.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to save the predicted state sequence to.

func HoeffdingTree

func HoeffdingTree(param *HoeffdingTreeOptionalParam) (hoeffdingTreeModel, *mat.Dense, *mat.Dense)

This program implements Hoeffding trees, a form of streaming decision tree suited best for large (or streaming) datasets. This program supports both categorical and numeric data. Given an input dataset, this program is able to train the tree with numerous training options, and save the model to a file. The program is also able to use a trained model or a model from file in order to predict classes for a given test set.

The training file and associated labels are specified with the "Training" and "Labels" parameters, respectively. Optionally, if "Labels" is not specified, the labels are assumed to be the last dimension of the training dataset.

The training may be performed in batch mode (like a typical decision tree algorithm) by specifying the "BatchMode" option, but this may not be the best option for large datasets.

When a model is trained, it may be saved via the "OutputModel" output parameter. A model may be loaded from file for further training or testing with the "InputModel" parameter.

Test data may be specified with the "Test" parameter, and if performance statistics are desired for that test set, labels may be specified with the "TestLabels" parameter. Predictions for each test point may be saved with the "Predictions" output parameter, and class probabilities for each prediction may be saved with the "Probabilities" output parameter.

For example, to train a Hoeffding tree with confidence 0.99 with data dataset, saving the trained tree to tree, the following command may be used:

// Initialize optional parameters for HoeffdingTree().
param := mlpack.HoeffdingTreeOptions()
param.Training = dataset
param.Confidence = 0.99

tree, _, _ := mlpack.HoeffdingTree(param)

Then, this tree may be used to make predictions on the test set test_set, saving the predictions into predictions and the class probabilities into class_probs with the following command:

// Initialize optional parameters for HoeffdingTree().
param := mlpack.HoeffdingTreeOptions()
param.InputModel = &tree
param.Test = test_set

_, predictions, class_probs := mlpack.HoeffdingTree(param)

Input parameters:

  • BatchMode (bool): If true, samples will be considered in batch instead of as a stream. This generally results in better trees but at the cost of memory usage and runtime.
  • Bins (int): If the 'domingos' split strategy is used, this specifies the number of bins for each numeric split. Default value 10.
  • Confidence (float64): Confidence before splitting (between 0 and 1). Default value 0.95.
  • InfoGain (bool): If set, information gain is used instead of Gini impurity for calculating Hoeffding bounds.
  • InputModel (hoeffdingTreeModel): Input trained Hoeffding tree model.
  • Labels (mat.Dense): Labels for training dataset.
  • MaxSamples (int): Maximum number of samples before splitting. Default value 5000.
  • MinSamples (int): Minimum number of samples before splitting. Default value 100.
  • NumericSplitStrategy (string): The splitting strategy to use for numeric features: 'domingos' or 'binary'. Default value 'binary'.
  • ObservationsBeforeBinning (int): If the 'domingos' split strategy is used, this specifies the number of samples observed before binning is performed. Default value 100.
  • Passes (int): Number of passes to take over the dataset. Default value 1.
  • Test (matrixWithInfo): Testing dataset (may be categorical).
  • TestLabels (mat.Dense): Labels of test data.
  • Training (matrixWithInfo): Training dataset (may be categorical).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (hoeffdingTreeModel): Output for trained Hoeffding tree model.
  • predictions (mat.Dense): Matrix to output label predictions for test data into.
  • probabilities (mat.Dense): In addition to predicting labels, provide prediction probabilities in this matrix.

func ImageConverter

func ImageConverter(input []string, param *ImageConverterOptionalParam) *mat.Dense

This utility takes an image or an array of images and loads them into a matrix. You can optionally specify the height ("Height"), width ("Width"), and number of channels ("Channels") of the images to be loaded; otherwise, these parameters will be automatically detected from the image. There are other options that can be specified, such as "Quality".

You can also provide a dataset and save its contents as images, using the "Dataset" and "Save" parameters.

An example of loading an image:

// Initialize optional parameters for ImageConverter().
param := mlpack.ImageConverterOptions()
param.Height = 256
param.Width = 256
param.Channels = 3

Y := mlpack.ImageConverter(X, param)

An example of saving an image:

// Initialize optional parameters for ImageConverter().
param := mlpack.ImageConverterOptions()
param.Height = 256
param.Width = 256
param.Channels = 3
param.Dataset = Y
param.Save = true

_ = mlpack.ImageConverter(X, param)

Input parameters:

  • input ([]string): Image filenames which have to be loaded/saved.
  • Channels (int): Number of channels in the image. Default value 0.
  • Dataset (mat.Dense): Input matrix to save as images.
  • Height (int): Height of the images. Default value 0.
  • Quality (int): Compression of the image if saved as jpg (0-100). Default value 90.
  • Save (bool): Save a dataset as images.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
  • Width (int): Width of the image. Default value 0.

Output parameters:

  • output (mat.Dense): Matrix to save image data to; only needed if you are specifying the 'Save' option.

func Kde

func Kde(param *KdeOptionalParam) (kdeModel, *mat.Dense)

This program performs a Kernel Density Estimation. KDE is a non-parametric way of estimating a probability density function. For each query point, the program will estimate its probability density by applying a kernel function to each reference point. The computational complexity of this is O(N^2), where there are N query points and N reference points; but this implementation typically sees better performance, as it uses an approximate dual- or single-tree algorithm for acceleration.

Dual- or single-tree optimization avoids many barely relevant calculations (as kernel function values decrease with distance), so it is an approximate computation. You can specify the maximum relative error tolerance for each query value with "RelError", as well as the maximum absolute error tolerance with the "AbsError" parameter. This program runs using a Euclidean metric. The kernel function can be selected using the "Kernel" option. You can also choose which type of tree to use for the dual-tree algorithm with "Tree". It is also possible to select whether to use the dual-tree or single-tree algorithm using the "Algorithm" option.

Monte Carlo estimations can be used to accelerate the KDE estimate when the Gaussian kernel is used. This provides a probabilistic guarantee on the error of the resulting KDE instead of an absolute guarantee. To enable Monte Carlo estimations, the "MonteCarlo" flag can be used, and the success probability can be set with the "McProbability" option. It is possible to set the initial sample size for the Monte Carlo estimation using "InitialSampleSize". This implementation will only consider a node as a candidate for Monte Carlo estimation if its number of descendant points is bigger than the initial sample size; this is controlled using a coefficient that multiplies the initial sample size, set with "McEntryCoef". To avoid using the same amount of computation an exact approach would take, this program recurses into the tree whenever a given fraction of the node's descendant points has already been computed. This fraction is set using "McBreakCoef".

For example, the following will run KDE using the data in ref_data for training and the data in qu_data as query data. It will apply an Epanechnikov kernel with a 0.2 bandwidth to each reference point and use a KD-Tree for the dual-tree optimization. The returned predictions will be within 5% of the real KDE value for each query point.

// Initialize optional parameters for Kde().
param := mlpack.KdeOptions()
param.Reference = ref_data
param.Query = qu_data
param.Bandwidth = 0.2
param.Kernel = "epanechnikov"
param.Tree = "kd-tree"
param.RelError = 0.05

_, out_data := mlpack.Kde(param)

The predicted density estimations will be stored in out_data. If no "Query" is provided, then KDE will be computed on the "Reference" dataset. It is possible to select either a reference dataset or an input model, but not both at the same time. If an input model is selected and parameter values are not set (e.g. "Bandwidth"), then default parameter values will be used.

In addition to the last program call, it is also possible to activate Monte Carlo estimations if a Gaussian kernel is used. This can provide faster results, but the KDE will only have a probabilistic guarantee of meeting the desired error bound (instead of an absolute guarantee). The following example will run KDE using a Monte Carlo estimation when possible. The results will be within 5% of the real KDE value with a 95% probability. The initial sample size for the Monte Carlo estimation will be 200 points, and a node will be a candidate for the estimation only when it contains 700 (i.e. 3.5*200) points. If a node contains 700 points and 420 (i.e. 0.6*700) have already been sampled, then the algorithm will recurse instead of continuing to sample.

// Initialize optional parameters for Kde().
param := mlpack.KdeOptions()
param.Reference = ref_data
param.Query = qu_data
param.Bandwidth = 0.2
param.Kernel = "gaussian"
param.Tree = "kd-tree"
param.RelError = 0.05
param.MonteCarlo = true
param.McProbability = 0.95
param.InitialSampleSize = 200
param.McEntryCoef = 3.5
param.McBreakCoef = 0.6

_, out_data := mlpack.Kde(param)

Input parameters:

  • AbsError (float64): Absolute error tolerance for the prediction. Default value 0.
  • Algorithm (string): Algorithm to use for the prediction ('dual-tree', 'single-tree'). Default value 'dual-tree'.
  • Bandwidth (float64): Bandwidth of the kernel. Default value 1.
  • InitialSampleSize (int): Initial sample size for Monte Carlo estimations. Default value 100.
  • InputModel (kdeModel): Contains pre-trained KDE model.
  • Kernel (string): Kernel to use for the prediction ('gaussian', 'epanechnikov', 'laplacian', 'spherical', 'triangular'). Default value 'gaussian'.
  • McBreakCoef (float64): Controls the fraction of a node's descendant points that may be sampled before the algorithm recurses instead of continuing to sample. Default value 0.4.
  • McEntryCoef (float64): Controls how much larger a node's number of descendant points must be than the initial sample size for the node to be a candidate for Monte Carlo estimation. Default value 3.
  • McProbability (float64): Probability of the estimation being bounded by relative error when using Monte Carlo estimations. Default value 0.95.
  • MonteCarlo (bool): Whether to use Monte Carlo estimations when possible.
  • Query (mat.Dense): Query dataset on which to evaluate the KDE.
  • Reference (mat.Dense): Input reference dataset to use for KDE.
  • RelError (float64): Relative error tolerance for the prediction. Default value 0.05.
  • Tree (string): Tree to use for the prediction ('kd-tree', 'ball-tree', 'cover-tree', 'octree', 'r-tree'). Default value 'kd-tree'.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (kdeModel): If specified, the KDE model will be saved here.
  • predictions (mat.Dense): Vector to store density predictions.

func KernelPca

func KernelPca(input *mat.Dense, kernel string, param *KernelPcaOptionalParam) *mat.Dense

This program performs Kernel Principal Components Analysis (KPCA) on the specified dataset with the specified kernel. This will transform the data onto the kernel principal components, and optionally reduce the dimensionality by ignoring the kernel principal components with the smallest eigenvalues.

For the case where a linear kernel is used, this reduces to regular PCA.

The kernels that are supported are listed below:

  • 'linear': the standard linear dot product (same as normal PCA): K(x, y) = x^T y

  • 'gaussian': a Gaussian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y || ^ 2) / (2 * (bandwidth ^ 2)))

  • 'polynomial': polynomial kernel; requires offset and degree: K(x, y) = (x^T y + offset) ^ degree

  • 'hyptan': hyperbolic tangent kernel; requires scale and offset: K(x, y) = tanh(scale * (x^T y) + offset)

  • 'laplacian': Laplacian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y ||) / bandwidth)

  • 'epanechnikov': Epanechnikov kernel; requires bandwidth: K(x, y) = max(0, 1 - || x - y ||^2 / bandwidth^2)

  • 'cosine': cosine distance: K(x, y) = 1 - (x^T y) / (|| x || * || y ||)

The parameters for each of the kernels should be specified with the options "Bandwidth", "KernelScale", "Offset", or "Degree" (or a combination of those parameters).

Optionally, the Nystroem method ("Using the Nystroem method to speed up kernel machines", 2001) can be used to calculate the kernel matrix by specifying the "NystroemMethod" parameter. This approach works by using a subset of the data as basis to reconstruct the kernel matrix; to specify the sampling scheme, the "Sampling" parameter is used. The sampling scheme for the Nystroem method can be chosen from the following list: 'kmeans', 'random', 'ordered'.

For example, the following command will perform KPCA on the dataset input using the Gaussian kernel, saving the transformed data to transformed:

// Initialize optional parameters for KernelPca().
param := mlpack.KernelPcaOptions()

transformed := mlpack.KernelPca(input, "gaussian", param)
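If the Nystroem method described above is desired, it can be combined with dimensionality reduction. The following is a hypothetical sketch using only parameters documented below (the bandwidth and dimensionality values are arbitrary):

// Initialize optional parameters for KernelPca().
param := mlpack.KernelPcaOptions()
param.Bandwidth = 1.0
param.NystroemMethod = true
param.Sampling = "random"
param.NewDimensionality = 2

transformed := mlpack.KernelPca(input, "gaussian", param)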

Input parameters:

  • input (mat.Dense): Input dataset to perform KPCA on.
  • kernel (string): The kernel to use; see the above documentation for the list of usable kernels.
  • Bandwidth (float64): Bandwidth, for 'gaussian' and 'laplacian' kernels. Default value 1.
  • Center (bool): If set, the transformed data will be centered about the origin.
  • Degree (float64): Degree of polynomial, for 'polynomial' kernel. Default value 1.
  • KernelScale (float64): Scale, for 'hyptan' kernel. Default value 1.
  • NewDimensionality (int): If not 0, reduce the dimensionality of the output dataset by ignoring the dimensions with the smallest eigenvalues. Default value 0.
  • NystroemMethod (bool): If set, the Nystroem method will be used.
  • Offset (float64): Offset, for 'hyptan' and 'polynomial' kernels. Default value 0.
  • Sampling (string): Sampling scheme to use for the Nystroem method: 'kmeans', 'random', 'ordered'. Default value 'kmeans'.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to save modified dataset to.

func Kfn

func Kfn(param *KfnOptionalParam) (*mat.Dense, *mat.Dense, kfnModel)

This program will calculate the k-furthest-neighbors of a set of points. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.

For example, the following will calculate the 5 furthest neighbors of each point in input and store the distances in distances and the neighbors in neighbors:

// Initialize optional parameters for Kfn().
param := mlpack.KfnOptions()
param.K = 5
param.Reference = input

distances, neighbors, _ := mlpack.Kfn(param)

The output matrices are organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th furthest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.
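For instance, the j'th furthest neighbor of query point i can be read back with the standard gonum accessors; a sketch (the indices i and j are arbitrary, and "fmt" is assumed to be imported):

i, j := 0, 2
idx := int(neighbors.At(i, j)) // index into the reference set
dist := distances.At(i, j)     // distance between point i and that neighbor
fmt.Printf("point %d: furthest neighbor %d at distance %g\n", i, idx, dist)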

Input parameters:

  • Algorithm (string): Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. Default value 'dual_tree'.
  • Epsilon (float64): If specified, will do approximate furthest neighbor search with given relative error. Must be in the range [0,1). Default value 0.
  • InputModel (kfnModel): Pre-trained kFN model.
  • K (int): Number of furthest neighbors to find. Default value 0.
  • LeafSize (int): Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, and octrees). Default value 20.
  • Percentage (float64): If specified, will do approximate furthest neighbor search. Must be in the range (0,1] (decimal form). Resultant neighbors will be at least (p*100) % of the distance as the true furthest neighbor. Default value 1.
  • Query (mat.Dense): Matrix containing query points (optional).
  • RandomBasis (bool): Before tree-building, project the data onto a random orthogonal basis.
  • Reference (mat.Dense): Matrix containing the reference dataset.
  • Seed (int): Random seed (if 0, std::time(NULL) is used). Default value 0.
  • TreeType (string): Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'oct'. Default value 'kd'.
  • TrueDistances (mat.Dense): Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified).
  • TrueNeighbors (mat.Dense): Matrix of true neighbors to compute the recall (it is printed when -v is specified).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • distances (mat.Dense): Matrix to output distances into.
  • neighbors (mat.Dense): Matrix to output neighbors into.
  • outputModel (kfnModel): If specified, the kFN model will be output here.

func Kmeans

func Kmeans(clusters int, input *mat.Dense, param *KmeansOptionalParam) (*mat.Dense, *mat.Dense)

This program performs K-Means clustering on the given dataset. It can return the learned cluster assignments, and the centroids of the clusters. Empty clusters are not allowed by default; when a cluster becomes empty, the point furthest from the centroid of the cluster with maximum variance is taken to fill that cluster.

Optionally, the strategy to choose initial centroids can be specified. The k-means++ algorithm can be used to choose initial centroids with the "KmeansPlusPlus" parameter. The Bradley and Fayyad approach ("Refining initial points for k-means clustering", 1998) can be used to select initial points by specifying the "RefinedStart" parameter. This approach works by taking random samplings of the dataset; to specify the number of samplings, the "Samplings" parameter is used, and to specify the percentage of the dataset to be used in each sample, the "Percentage" parameter is used (it should be a value between 0.0 and 1.0).

There are several options available for the algorithm used for each Lloyd iteration, specified with the "Algorithm" option. The standard O(kN) approach can be used ('naive'). Other options include the Pelleg-Moore tree-based algorithm ('pelleg-moore'), Elkan's triangle-inequality based algorithm ('elkan'), Hamerly's modification to Elkan's algorithm ('hamerly'), the dual-tree k-means algorithm ('dualtree'), and the dual-tree k-means algorithm using the cover tree ('dualtree-covertree').

The behavior for when an empty cluster is encountered can be modified with the "AllowEmptyClusters" option. When this option is specified and there is a cluster owning no points at the end of an iteration, that cluster's centroid will simply remain in its position from the previous iteration. If the "KillEmptyClusters" option is specified, then when a cluster owns no points at the end of an iteration, the cluster centroid is simply filled with DBL_MAX, killing it and effectively reducing k for the rest of the computation. Note that the default option when neither empty cluster option is specified can be time-consuming to calculate; therefore, specifying either of these parameters will often accelerate runtime.
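As a sketch, either policy can be enabled through the documented parameters; for example, to kill empty clusters while clustering data into 10 clusters (a hypothetical variation of the examples below):

// Initialize optional parameters for Kmeans().
param := mlpack.KmeansOptions()
param.KillEmptyClusters = true

centroids, assignments := mlpack.Kmeans(10, data, param)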

Initial clustering assignments may be specified using the "InitialCentroids" parameter, and the maximum number of iterations may be specified with the "MaxIterations" parameter.

As an example, to use Hamerly's algorithm to perform k-means clustering with k=10 on the dataset data, saving the centroids to centroids and the assignments for each point to assignments, the following command could be used:

// Initialize optional parameters for Kmeans().
param := mlpack.KmeansOptions()
param.Algorithm = "hamerly"

centroids, assignments := mlpack.Kmeans(10, data, param)

To run k-means on that same dataset with initial centroids specified in initial and a maximum of 500 iterations, storing the output centroids in final, the following command may be used:

// Initialize optional parameters for Kmeans().
param := mlpack.KmeansOptions()
param.InitialCentroids = initial
param.MaxIterations = 500

final, _ := mlpack.Kmeans(10, data, param)

Input parameters:

  • clusters (int): Number of clusters to find (0 autodetects from initial centroids).
  • input (mat.Dense): Input dataset to perform clustering on.
  • Algorithm (string): Algorithm to use for the Lloyd iteration ('naive', 'pelleg-moore', 'elkan', 'hamerly', 'dualtree', or 'dualtree-covertree'). Default value 'naive'.
  • AllowEmptyClusters (bool): Allow empty clusters to persist.
  • InPlace (bool): If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use in Python.)
  • InitialCentroids (mat.Dense): Start with the specified initial centroids.
  • KillEmptyClusters (bool): Remove empty clusters when they occur.
  • KmeansPlusPlus (bool): Use the k-means++ initialization strategy to choose initial points.
  • LabelsOnly (bool): Only output labels into output file.
  • MaxIterations (int): Maximum number of iterations before k-means terminates. Default value 1000.
  • Percentage (float64): Percentage of dataset to use for each refined start sampling (use when --refined_start is specified). Default value 0.02.
  • RefinedStart (bool): Use the refined initial point strategy by Bradley and Fayyad to choose initial points.
  • Samplings (int): Number of samplings to perform for refined start (use when --refined_start is specified). Default value 100.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • centroid (mat.Dense): If specified, the centroids of each cluster will be written to the given matrix.
  • output (mat.Dense): Matrix to store output labels or labeled data to.

func Knn

func Knn(param *KnnOptionalParam) (*mat.Dense, *mat.Dense, knnModel)

This program will calculate the k-nearest-neighbors of a set of points using kd-trees or cover trees (cover tree support is experimental and may be slow). You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.

For example, the following command will calculate the 5 nearest neighbors of each point in input and store the distances in distances and the neighbors in neighbors:

// Initialize optional parameters for Knn().
param := mlpack.KnnOptions()
param.K = 5
param.Reference = input

distances, neighbors, _ := mlpack.Knn(param)

The output is organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.

Input parameters:

  • Algorithm (string): Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. Default value 'dual_tree'.
  • Epsilon (float64): If specified, will do approximate nearest neighbor search with given relative error. Default value 0.
  • InputModel (knnModel): Pre-trained kNN model.
  • K (int): Number of nearest neighbors to find. Default value 0.
  • LeafSize (int): Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, spill trees, and octrees). Default value 20.
  • Query (mat.Dense): Matrix containing query points (optional).
  • RandomBasis (bool): Before tree-building, project the data onto a random orthogonal basis.
  • Reference (mat.Dense): Matrix containing the reference dataset.
  • Rho (float64): Balance threshold (only valid for spill trees). Default value 0.7.
  • Seed (int): Random seed (if 0, std::time(NULL) is used). Default value 0.
  • Tau (float64): Overlapping size (only valid for spill trees). Default value 0.
  • TreeType (string): Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'spill', 'oct'. Default value 'kd'.
  • TrueDistances (mat.Dense): Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified).
  • TrueNeighbors (mat.Dense): Matrix of true neighbors to compute the recall (it is printed when -v is specified).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • distances (mat.Dense): Matrix to output distances into.
  • neighbors (mat.Dense): Matrix to output neighbors into.
  • outputModel (knnModel): If specified, the kNN model will be output here.

func Krann

func Krann(param *KrannOptionalParam) (*mat.Dense, *mat.Dense, raModel)

This program will calculate the k rank-approximate-nearest-neighbors of a set of points. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set. You must specify the rank approximation (in %) (and optionally the success probability).

For example, the following will return 5 neighbors from the top 0.1% of the data (with probability 0.95) for each point in input and store the distances in distances and the neighbors in neighbors:

// Initialize optional parameters for Krann().
param := mlpack.KrannOptions()
param.Reference = input
param.K = 5
param.Tau = 0.1

distances, neighbors, _ := mlpack.Krann(param)

Note that tau must be set such that the number of points in the corresponding percentile of the data is greater than k. Thus, if we choose tau = 0.1 with a dataset of 1000 points and k = 5, then we are attempting to choose 5 nearest neighbors out of the closest 1 point -- this is invalid and the program will terminate with an error message.

The output matrices are organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.

Input parameters:

  • Alpha (float64): The desired success probability. Default value 0.95.
  • FirstLeafExact (bool): The flag to trigger sampling only after exactly exploring the first leaf.
  • InputModel (raModel): Pre-trained kNN model.
  • K (int): Number of nearest neighbors to find. Default value 0.
  • LeafSize (int): Leaf size for tree building (used for kd-trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, and octrees). Default value 20.
  • Naive (bool): If true, sampling will be done without using a tree.
  • Query (mat.Dense): Matrix containing query points (optional).
  • RandomBasis (bool): Before tree-building, project the data onto a random orthogonal basis.
  • Reference (mat.Dense): Matrix containing the reference dataset.
  • SampleAtLeaves (bool): The flag to trigger sampling at leaves.
  • Seed (int): Random seed (if 0, std::time(NULL) is used). Default value 0.
  • SingleMode (bool): If true, single-tree search is used (as opposed to dual-tree search).
  • SingleSampleLimit (int): The limit on the maximum number of samples (and hence the largest node you can approximate). Default value 20.
  • Tau (float64): The allowed rank-error in terms of the percentile of the data. Default value 5.
  • TreeType (string): Type of tree to use: 'kd', 'ub', 'cover', 'r', 'x', 'r-star', 'hilbert-r', 'r-plus', 'r-plus-plus', 'oct'. Default value 'kd'.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • distances (mat.Dense): Matrix to output distances into.
  • neighbors (mat.Dense): Matrix to output neighbors into.
  • outputModel (raModel): If specified, the kNN model will be output here.

func Lars

func Lars(param *LarsOptionalParam) (lars, *mat.Dense)

An implementation of LARS: Least Angle Regression (Stagewise/laSso). This is a stage-wise homotopy-based algorithm for L1-regularized linear regression (LASSO) and L1+L2-regularized linear regression (Elastic Net).

This program is able to train a LARS/LASSO/Elastic Net model or load a model from file, output regression predictions for a test set, and save the trained model to a file. The LARS algorithm is described in more detail below:

Let X be a matrix where each row is a point and each column is a dimension, and let y be a vector of targets.

The Elastic Net problem is to solve

min_beta 0.5 ||X * beta - y||_2^2 + lambda_1 ||beta||_1 + 0.5 lambda_2 ||beta||_2^2

If lambda1 > 0 and lambda2 = 0, the problem is the LASSO. If lambda1 > 0 and lambda2 > 0, the problem is the Elastic Net. If lambda1 = 0 and lambda2 > 0, the problem is ridge regression. If lambda1 = 0 and lambda2 = 0, the problem is unregularized linear regression.

For efficiency reasons, it is not recommended to use this algorithm with "Lambda1" = 0. In that case, use the 'linear_regression' program, which implements both unregularized linear regression and ridge regression.

To train a LARS/LASSO/Elastic Net model, the "Input" and "Responses" parameters must be given. The "Lambda1", "Lambda2", and "UseCholesky" parameters control the training options. A trained model can be saved with the "OutputModel" output parameter. If no training is desired at all, a model can be passed via the "InputModel" parameter.

The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the "Test" parameter. Predicted responses to the test points can be saved with the "OutputPredictions" output parameter.

For example, the following command trains a model on the data data and responses responses with lambda1 set to 0.4 and lambda2 set to 0 (so, LASSO is being solved), and then the model is saved to lasso_model:

// Initialize optional parameters for Lars().
param := mlpack.LarsOptions()
param.Input = data
param.Responses = responses
param.Lambda1 = 0.4
param.Lambda2 = 0

lasso_model, _ := mlpack.Lars(param)

The following command uses the lasso_model to provide predicted responses for the data test and save those responses to test_predictions:

// Initialize optional parameters for Lars().
param := mlpack.LarsOptions()
param.InputModel = &lasso_model
param.Test = test

_, test_predictions := mlpack.Lars(param)

Input parameters:

  • Input (mat.Dense): Matrix of covariates (X).
  • InputModel (lars): Trained LARS model to use.
  • Lambda1 (float64): Regularization parameter for l1-norm penalty. Default value 0.
  • Lambda2 (float64): Regularization parameter for l2-norm penalty. Default value 0.
  • Responses (mat.Dense): Matrix of responses/observations (y).
  • Test (mat.Dense): Matrix containing points to regress on (test points).
  • UseCholesky (bool): Use Cholesky decomposition during computation rather than explicitly computing the full Gram matrix.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (lars): Output LARS model.
  • outputPredictions (mat.Dense): If test data is specified, this matrix is where the predicted responses will be saved.

func LinearRegression

func LinearRegression(param *LinearRegressionOptionalParam) (linearRegression, *mat.Dense)

An implementation of simple linear regression and simple ridge regression using ordinary least squares. This solves the problem

y = X * b + e

where X (specified by "Training") and y (specified either as the last column of the input matrix "Training" or via the "TrainingResponses" parameter) are known and b is the desired variable. If the covariance matrix (X'X) is not invertible, or if the solution is overdetermined, then specify a Tikhonov regularization constant (with "Lambda") greater than 0, which will regularize the covariance matrix to make it invertible. The calculated b may be saved with the "OutputModel" output parameter.

Optionally, the calculated value of b is used to predict the responses for another matrix X' (specified by the "Test" parameter):

y' = X' * b

and the predicted responses y' may be saved with the "OutputPredictions" output parameter. This type of regression is related to least-angle regression, which mlpack implements as the 'lars' program.

For example, to run a linear regression on the dataset X with responses y, saving the trained model to lr_model, the following command could be used:

// Initialize optional parameters for LinearRegression().
param := mlpack.LinearRegressionOptions()
param.Training = X
param.TrainingResponses = y

lr_model, _ := mlpack.LinearRegression(param)

Then, to use lr_model to predict responses for a test set X_test, saving the predictions to X_test_responses, the following command could be used:

// Initialize optional parameters for LinearRegression().
param := mlpack.LinearRegressionOptions()
param.InputModel = &lr_model
param.Test = X_test

_, X_test_responses := mlpack.LinearRegression(param)

Input parameters:

  • InputModel (linearRegression): Existing LinearRegression model to use.
  • Lambda (float64): Tikhonov regularization for ridge regression. If 0, the method reduces to linear regression. Default value 0.
  • Test (mat.Dense): Matrix containing X' (test regressors).
  • Training (mat.Dense): Matrix containing training set X (regressors).
  • TrainingResponses (mat.Dense): Optional vector containing y (responses). If not given, the responses are assumed to be the last row of the input matrix.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (linearRegression): Output LinearRegression model.
  • outputPredictions (mat.Dense): If test data is specified, this matrix is where the predicted responses will be saved.

func LinearSvm

func LinearSvm(param *LinearSvmOptionalParam) (linearsvmModel, *mat.Dense, *mat.Dense)

An implementation of linear SVMs that uses either L-BFGS or parallel SGD (stochastic gradient descent) to train the model.

This program allows loading a linear SVM model (via the "InputModel" parameter) or training a linear SVM model given training data (specified with the "Training" parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the "Test" parameter) and the classification results may be saved with the "Predictions" output parameter. The trained linear SVM model may be saved using the "OutputModel" output parameter.

The training data, if specified, may have class labels as its last dimension. Alternately, the "Labels" parameter may be used to specify a separate vector of labels.

When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the "Lambda" option, and the number of classes can be manually specified with the "NumClasses" parameter. If an intercept term is not desired in the model, the "NoIntercept" parameter can be specified. The margin of difference between the correct class and other classes can be specified with the "Delta" option. The optimizer used to train the model can be specified with the "Optimizer" parameter. Available options are 'psgd' (parallel stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the "MaxIterations" parameter specifies the maximum number of allowed iterations, and the "Tolerance" parameter specifies the tolerance for convergence. For the parallel SGD optimizer, the "StepSize" parameter controls the step size taken at each iteration, and the maximum number of epochs is specified with "Epochs". If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.

Optionally, the model can be used to predict the labels for another matrix of data points, if "Test" is specified. The "Test" parameter can be specified without the "Training" parameter, so long as an existing linear SVM model is given with the "InputModel" parameter. The output predictions from the linear SVM model may be saved with the "Predictions" parameter.

As an example, to train a linear SVM on the data 'data' with labels 'labels' with L2 regularization of 0.1, saving the model to 'lsvm_model', the following command may be used:

// Initialize optional parameters for LinearSvm().
param := mlpack.LinearSvmOptions()
param.Training = data
param.Labels = labels
param.Lambda = 0.1
param.Delta = 1
param.NumClasses = 0

lsvm_model, _, _ := mlpack.LinearSvm(param)

Then, to use that model to predict classes for the dataset 'test', storing the output predictions in 'predictions', the following command may be used:

// Initialize optional parameters for LinearSvm().
param := mlpack.LinearSvmOptions()
param.InputModel = &lsvm_model
param.Test = test

_, predictions, _ := mlpack.LinearSvm(param)

Input parameters:

  • Delta (float64): Margin of difference between correct class and other classes. Default value 1.
  • Epochs (int): Maximum number of full epochs over dataset for psgd. Default value 50.
  • InputModel (linearsvmModel): Existing model (parameters).
  • Labels (mat.Dense): A matrix containing labels (0 or 1) for the points in the training set (y).
  • Lambda (float64): L2-regularization parameter for training. Default value 0.0001.
  • MaxIterations (int): Maximum iterations for optimizer (0 indicates no limit). Default value 10000.
  • NoIntercept (bool): Do not add the intercept term to the model.
  • NumClasses (int): Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. Default value 0.
  • Optimizer (string): Optimizer to use for training ('lbfgs' or 'psgd'). Default value 'lbfgs'.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Shuffle (bool): Don't shuffle the order in which data points are visited for parallel SGD.
  • StepSize (float64): Step size for parallel SGD optimizer. Default value 0.01.
  • Test (mat.Dense): Matrix containing test dataset.
  • TestLabels (mat.Dense): Matrix containing test labels.
  • Tolerance (float64): Convergence tolerance for optimizer. Default value 1e-10.
  • Training (mat.Dense): A matrix containing the training set (the matrix of predictors, X).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (linearsvmModel): Output for trained linear SVM model.
  • predictions (mat.Dense): If test data is specified, this matrix is where the predictions for the test set will be saved.
  • probabilities (mat.Dense): If test data is specified, this matrix is where the class probabilities for the test set will be saved.

func Lmnn

func Lmnn(input *mat.Dense, param *LmnnOptionalParam) (*mat.Dense, *mat.Dense, *mat.Dense)

This program implements Large Margin Nearest Neighbors, a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset. The method employs the strategy of reducing the distance between similarly labeled data points (a.k.a. target neighbors) and increasing the distance between differently labeled points (a.k.a. impostors) using standard optimization techniques over the gradient of the distance between data points.

To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with "Input"), or alternatively as a separate matrix (specified with "Labels"). Additionally, a starting point for optimization (specified with "Distance") can be given, having (r x d) dimensionality. Here r should satisfy 1 <= r <= d; consequently, a low-rank matrix will be optimized. Alternatively, a low-rank distance can be learned by specifying the "Rank" parameter (a low-rank matrix with uniformly distributed values will be used as the initial learning point).

The program also requires the number of target neighbors to work with (specified with "K"). A regularization parameter can also be passed; it acts as a trade-off between the pulling and pushing terms (specified with "Regularization"). In addition, this implementation of LMNN includes a parameter to decide the interval after which impostors must be re-calculated (specified with "Range").

Output can either be the learned distance matrix (specified with "Output"), or the transformed dataset (specified with "TransformedData"), or both. Additionally, the mean-centered dataset (specified with "CenteredData") can be accessed if mean-centering (specified with "Center") is performed on the dataset. Accuracy on the initial dataset and the final transformed dataset can be printed by specifying the "PrintAccuracy" parameter.

This implementation of LMNN uses AdaGrad, BigBatch_SGD, stochastic gradient descent, mini-batch stochastic gradient descent, or the L_BFGS optimizer.

AdaGrad, specified by the value 'adagrad' for the parameter "Optimizer", uses the maximum of past squared gradients. It depends primarily on three parameters: the step size (specified with "StepSize"), the batch size (specified with "BatchSize"), and the maximum number of passes (specified with "Passes"). In addition, a normalized starting point can be used by specifying the "Normalize" parameter.

BigBatch_SGD, specified by the value 'bbsgd' for the parameter "Optimizer", depends primarily on three parameters: the step size (specified with "StepSize"), the batch size (specified with "BatchSize"), and the maximum number of passes (specified with "Passes"). In addition, a normalized starting point can be used by specifying the "Normalize" parameter.

Stochastic gradient descent, specified by the value 'sgd' for the parameter "Optimizer", depends primarily on three parameters: the step size (specified with "StepSize"), the batch size (specified with "BatchSize"), and the maximum number of passes (specified with "Passes"). In addition, a normalized starting point can be used by specifying the "Normalize" parameter. Furthermore, mean-centering can be performed on the dataset by specifying the "Center"parameter.

The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter "Optimizer", uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: "MaxIterations" and "Tolerance" (the optimization is terminated when the gradient norm is below this value). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS. In addition, a normalized starting point can be used by specifying the "Normalize" parameter.

By default, the AMSGrad optimizer is used.

Example - Let's say we want to learn a distance on the iris dataset with the number of targets as 3, using the BigBatch_SGD optimizer. A simple call for this looks like:

// Initialize optional parameters for Lmnn().
param := mlpack.LmnnOptions()
param.Labels = iris_labels
param.K = 3
param.Optimizer = "bbsgd"

_, output, _ := mlpack.Lmnn(iris, param)

Another program call, making use of the range and regularization parameters with a dataset having labels as the last column, can be made as:

// Initialize optional parameters for Lmnn().
param := mlpack.LmnnOptions()
param.K = 5
param.Range = 10
param.Regularization = 0.4

_, output, _ := mlpack.Lmnn(letter_recognition, param)

Input parameters:

  • input (mat.Dense): Input dataset to run LMNN on.
  • BatchSize (int): Batch size for mini-batch SGD. Default value 50.
  • Center (bool): Perform mean-centering on the dataset. It is useful when the centroid of the data is far from the origin.
  • Distance (mat.Dense): Initial distance matrix to be used as a starting point.
  • K (int): Number of target neighbors to use for each datapoint. Default value 1.
  • Labels (mat.Dense): Labels for input dataset.
  • LinearScan (bool): Don't shuffle the order in which data points are visited for SGD or mini-batch SGD.
  • MaxIterations (int): Maximum number of iterations for L-BFGS (0 indicates no limit). Default value 100000.
  • Normalize (bool): Use a normalized starting point for optimization. It is useful when points are far apart, or when SGD is returning NaN.
  • Optimizer (string): Optimizer to use; 'amsgrad', 'bbsgd', 'sgd', or 'lbfgs'. Default value 'amsgrad'.
  • Passes (int): Maximum number of full passes over dataset for AMSGrad, BB_SGD and SGD. Default value 50.
  • PrintAccuracy (bool): Print accuracies on the initial and transformed dataset.
  • Range (int): Number of iterations after which impostors need to be recalculated. Default value 1.
  • Rank (int): Rank of distance matrix to be optimized. Default value 0.
  • Regularization (float64): Regularization for the LMNN objective function. Default value 0.5.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • StepSize (float64): Step size for AMSGrad, BB_SGD and SGD (alpha). Default value 0.01.
  • Tolerance (float64): Maximum tolerance for termination of AMSGrad, BB_SGD, SGD or L-BFGS. Default value 1e-07.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • centeredData (mat.Dense): Output matrix for mean-centered dataset.
  • output (mat.Dense): Output matrix for learned distance matrix.
  • transformedData (mat.Dense): Output matrix for transformed dataset.

func Load

func Load(filename string) (*mat.Dense, error)

Load() reads all of the numeric records from the CSV.
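For example, a minimal (hypothetical) program that loads a CSV file named data.csv and reports the matrix shape might look like the following, assuming the module's import path mlpack.org/v1/mlpack:

package main

import (
    "fmt"
    "log"

    "mlpack.org/v1/mlpack"
)

func main() {
    // Load all numeric records from the CSV into a gonum matrix.
    data, err := mlpack.Load("data.csv")
    if err != nil {
        log.Fatal(err)
    }
    rows, cols := data.Dims()
    fmt.Printf("loaded a %dx%d matrix\n", rows, cols)
}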

func LocalCoordinateCoding

func LocalCoordinateCoding(param *LocalCoordinateCodingOptionalParam) (*mat.Dense, *mat.Dense, localCoordinateCoding)

An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm regularized sparse coding. Given a dense data matrix X with n points and d dimensions, LCC seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a coding matrix Z with n points in k dimensions. Because of the regularization method used, the atoms in D should lie close to the manifold on which the data points lie.

The original data matrix X can then be reconstructed as D * Z. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.

The coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a coding step, which updates the coding matrix Z.

To run this program, the input matrix X must be specified (with the "Training" parameter), along with the number of atoms in the dictionary (the "Atoms" parameter). An initial dictionary may also be specified with the "InitialDictionary" parameter. The l1-norm regularization parameter is specified with the "Lambda" parameter.

For example, to run LCC on the dataset data using 200 atoms and an l1-regularization parameter of 0.1, saving the dictionary into dict and the codes into codes, use

// Initialize optional parameters for LocalCoordinateCoding().
param := mlpack.LocalCoordinateCodingOptions()
param.Training = data
param.Atoms = 200
param.Lambda = 0.1

codes, dict, _ := mlpack.LocalCoordinateCoding(param)

The maximum number of iterations may be specified with the "MaxIterations" parameter. Optionally, the input data matrix X can be normalized before coding with the "Normalize" parameter.

An LCC model may be saved using the "OutputModel" output parameter. Then, to encode new points from the dataset points with the previously saved model lcc_model, saving the new codes to new_codes, the following command can be used:

// Initialize optional parameters for LocalCoordinateCoding().
param := mlpack.LocalCoordinateCodingOptions()
param.InputModel = &lcc_model
param.Test = points

new_codes, _, _ := mlpack.LocalCoordinateCoding(param)

Input parameters:

  • Atoms (int): Number of atoms in the dictionary. Default value 0.
  • InitialDictionary (mat.Dense): Optional initial dictionary.
  • InputModel (localCoordinateCoding): Input LCC model.
  • Lambda (float64): Weighted l1-norm regularization parameter. Default value 0.
  • MaxIterations (int): Maximum number of iterations for LCC (0 indicates no limit). Default value 0.
  • Normalize (bool): If set, the input data matrix will be normalized before coding.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Test (mat.Dense): Test points to encode.
  • Tolerance (float64): Tolerance for objective function. Default value 0.01.
  • Training (mat.Dense): Matrix of training data (X).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • codes (mat.Dense): Output codes matrix.
  • dictionary (mat.Dense): Output dictionary matrix.
  • outputModel (localCoordinateCoding): Output for trained LCC model.

func LogisticRegression

func LogisticRegression(param *LogisticRegressionOptionalParam) (logisticRegression, *mat.Dense, *mat.Dense)

An implementation of L2-regularized logistic regression using either the L-BFGS optimizer or SGD (stochastic gradient descent). This solves the regression problem

y = 1 / (1 + e^(-(X * b))).

In this setting, y corresponds to class labels and X corresponds to data.
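To make the decision rule concrete: a point is assigned class 1 when the logistic function value reaches the decision boundary (0.5 by default; see the "DecisionBoundary" parameter below). The helper below is a hypothetical sketch, not an mlpack function, and assumes the standard library "math" package is imported:

// predictClass applies the logistic function to the linear score x*b
// and thresholds it at the decision boundary.
func predictClass(xb, decisionBoundary float64) int {
    p := 1.0 / (1.0 + math.Exp(-xb)) // y = 1 / (1 + e^(-(X * b)))
    if p < decisionBoundary {
        return 0
    }
    return 1
}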

This program allows loading a logistic regression model (via the "InputModel" parameter) or training a logistic regression model given training data (specified with the "Training" parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the "Test" parameter) and the classification results may be saved with the "Predictions" output parameter. The trained logistic regression model may be saved using the "OutputModel" output parameter.

The training data, if specified, may have class labels as its last dimension. Alternately, the "Labels" parameter may be used to specify a separate matrix of labels.

When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the "Lambda" option, and the optimizer used to train the model can be specified with the "Optimizer" parameter. Available options are 'sgd' (stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the "MaxIterations" parameter specifies the maximum number of allowed iterations, and the "Tolerance" parameter specifies the tolerance for convergence. For the SGD optimizer, the "StepSize" parameter controls the step size taken at each iteration by the optimizer. The batch size for SGD is controlled with the "BatchSize" parameter. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.

For SGD, an iteration refers to a single point. So to take a single pass over the dataset with SGD, "MaxIterations" should be set to the number of points in the dataset.
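A sketch of configuring exactly one SGD pass, assuming each row of the gonum training matrix holds one data point:

// Train with SGD for exactly one pass over the dataset.
param := mlpack.LogisticRegressionOptions()
param.Training = data
param.Labels = labels
param.Optimizer = "sgd"
n, _ := data.Dims() // number of points, assuming one point per row
param.MaxIterations = n

one_pass_model, _, _ := mlpack.LogisticRegression(param)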

Optionally, the model can be used to predict the responses for another matrix of data points, if "Test" is specified. The "Test" parameter can be specified without the "Training" parameter, so long as an existing logistic regression model is given with the "InputModel" parameter. The output predictions from the logistic regression model may be saved with the "Predictions" parameter.

This implementation of logistic regression does not support the general multi-class case but instead only the two-class case. Any labels must be either 0 or 1. For more classes, see the softmax regression implementation.

As an example, to train a logistic regression model on the data 'data' with labels 'labels' with L2 regularization of 0.1, saving the model to 'lr_model', the following command may be used:

// Initialize optional parameters for LogisticRegression().
param := mlpack.LogisticRegressionOptions()
param.Training = data
param.Labels = labels
param.Lambda = 0.1

lr_model, _, _ := mlpack.LogisticRegression(param)

Then, to use that model to predict classes for the dataset 'test', storing the output predictions in 'predictions', the following command may be used:

// Initialize optional parameters for LogisticRegression().
param := mlpack.LogisticRegressionOptions()
param.InputModel = &lr_model
param.Test = test

_, predictions, _ := mlpack.LogisticRegression(param)

Input parameters:

  • BatchSize (int): Batch size for SGD. Default value 64.
  • DecisionBoundary (float64): Decision boundary for prediction; if the logistic function for a point is less than the boundary, the class is taken to be 0; otherwise, the class is 1. Default value 0.5.
  • InputModel (logisticRegression): Existing model (parameters).
  • Labels (mat.Dense): A matrix containing labels (0 or 1) for the points in the training set (y).
  • Lambda (float64): L2-regularization parameter for training. Default value 0.
  • MaxIterations (int): Maximum iterations for optimizer (0 indicates no limit). Default value 10000.
  • Optimizer (string): Optimizer to use for training ('lbfgs' or 'sgd'). Default value 'lbfgs'.
  • StepSize (float64): Step size for SGD optimizer. Default value 0.01.
  • Test (mat.Dense): Matrix containing test dataset.
  • Tolerance (float64): Convergence tolerance for optimizer. Default value 1e-10.
  • Training (mat.Dense): A matrix containing the training set (the matrix of predictors, X).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (logisticRegression): Output for trained logistic regression model.
  • predictions (mat.Dense): If test data is specified, this matrix is where the predictions for the test set will be saved.
  • probabilities (mat.Dense): If test data is specified, this matrix is where the class probabilities for the test set will be saved.

func Lsh

func Lsh(param *LshOptionalParam) (*mat.Dense, *mat.Dense, lshSearch)

This program will calculate the k approximate-nearest-neighbors of a set of points using locality-sensitive hashing. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.

For example, the following will return 5 neighbors from the data for each point in input and store the distances in distances and the neighbors in neighbors:

// Initialize optional parameters for Lsh().
param := mlpack.LshOptions()
param.K = 5
param.Reference = input

distances, neighbors, _ := mlpack.Lsh(param)

The output is organized such that row i and column j in the neighbors output corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.

Because this is approximate-nearest-neighbors search, results may be different from run to run. Thus, the "Seed" parameter can be specified to set the random seed.

This program also has many other parameters to control its functionality; see the parameter-specific documentation for more information.

Input parameters:

  • BucketSize (int): The size of a bucket in the second level hash. Default value 500.
  • HashWidth (float64): The hash width for the first-level hashing in the LSH preprocessing. By default, the LSH class automatically estimates a hash width for its use. Default value 0.
  • InputModel (lshSearch): Input LSH model.
  • K (int): Number of nearest neighbors to find. Default value 0.
  • NumProbes (int): Number of additional probes for multiprobe LSH; if 0, traditional LSH is used. Default value 0.
  • Projections (int): The number of hash functions for each table. Default value 10.
  • Query (mat.Dense): Matrix containing query points (optional).
  • Reference (mat.Dense): Matrix containing the reference dataset.
  • SecondHashSize (int): The size of the second level hash table. Default value 99901.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Tables (int): The number of hash tables to be used. Default value 30.
  • TrueNeighbors (mat.Dense): Matrix of true neighbors to compute recall with (the recall is printed when -v is specified).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • distances (mat.Dense): Matrix to output distances into.
  • neighbors (mat.Dense): Matrix to output neighbors into.
  • outputModel (lshSearch): Output for trained LSH model.

func MeanShift

func MeanShift(input *mat.Dense, param *MeanShiftOptionalParam) (*mat.Dense, *mat.Dense)

This program performs mean shift clustering on the given dataset, storing the learned cluster assignments either as a column of labels in the input dataset or separately.

The input dataset should be specified with the "Input" parameter, and the radius used for search can be specified with the "Radius" parameter. The maximum number of iterations before algorithm termination is controlled with the "MaxIterations" parameter.

The output labels may be saved with the "Output" output parameter and the centroids of each cluster may be saved with the "Centroid" output parameter.

For example, to run mean shift clustering on the dataset data and store the centroids to centroids, the following command may be used:

// Initialize optional parameters for MeanShift().
param := mlpack.MeanShiftOptions()

centroids, _ := mlpack.MeanShift(data, param)

Input parameters:

  • input (mat.Dense): Input dataset to perform clustering on.
  • ForceConvergence (bool): If specified, the mean shift algorithm will continue running regardless of max_iterations until the clusters converge.
  • InPlace (bool): If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use with Python.)
  • LabelsOnly (bool): If specified, only the output labels will be written to the file specified by --output_file.
  • MaxIterations (int): Maximum number of iterations before mean shift terminates. Default value 1000.
  • Radius (float64): If the distance between two centroids is less than the given radius, one will be removed. A radius of 0 or less means an estimate will be calculated and used for the radius. Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • centroid (mat.Dense): If specified, the centroids of each cluster will be written to the given matrix.
  • output (mat.Dense): Matrix to write output labels or labeled data to.

func Nbc

func Nbc(param *NbcOptionalParam) (*mat.Dense, nbcModel, *mat.Dense, *mat.Dense, *mat.Dense)

This program trains the Naive Bayes classifier on the given labeled training set, or loads a model from the given model file, and then may use that trained model to classify the points in a given test set.

The training set is specified with the "Training" parameter. Labels may be either the last row of the training set, or alternately the "Labels" parameter may be specified to pass a separate matrix of labels.

If training is not desired, a pre-existing model may be loaded with the "InputModel" parameter.

The "IncrementalVariance" parameter can be used to force the training to use an incremental algorithm for calculating variance. This is slower, but can help avoid loss of precision in some cases.

If classifying a test set is desired, the test set may be specified with the "Test" parameter, and the classifications may be saved with the "Predictions" output parameter. If saving the trained model is desired, this may be done with the "OutputModel" output parameter.

Note: the "Output" and "OutputProbs" parameters are deprecated and will be removed in mlpack 4.0.0. Use "Predictions" and "Probabilities" instead.

For example, to train a Naive Bayes classifier on the dataset data with labels labels and save the model to nbc_model, the following command may be used:

// Initialize optional parameters for Nbc().
param := mlpack.NbcOptions()
param.Training = data
param.Labels = labels

_, nbc_model, _, _, _ := mlpack.Nbc(param)

Then, to use nbc_model to predict the classes of the dataset test_set and save the predicted classes to predictions, the following command may be used:

// Initialize optional parameters for Nbc().
param := mlpack.NbcOptions()
param.InputModel = &nbc_model
param.Test = test_set

predictions, _, _, _, _ := mlpack.Nbc(param)

Input parameters:

  • IncrementalVariance (bool): The variance of each class will be calculated incrementally.
  • InputModel (nbcModel): Input Naive Bayes model.
  • Labels (mat.Dense): A matrix containing labels for the training set.
  • Test (mat.Dense): A matrix containing the test set.
  • Training (mat.Dense): A matrix containing the training set.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): The matrix in which the predicted labels for the test set will be written (deprecated).
  • outputModel (nbcModel): Output for the trained Naive Bayes model.
  • outputProbs (mat.Dense): The matrix in which the predicted probability of labels for the test set will be written (deprecated).
  • predictions (mat.Dense): The matrix in which the predicted labels for the test set will be written.
  • probabilities (mat.Dense): The matrix in which the predicted probability of labels for the test set will be written.

func Nca

func Nca(input *mat.Dense, param *NcaOptionalParam) *mat.Dense

This program implements Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset by scaling the dimensions. The method is nonparametric, and does not require a value of k. It works by using stochastic ("soft") neighbor assignments and using optimization techniques over the gradient of the accuracy of the neighbor assignments.

To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with "Input"), or alternatively as a separate matrix (specified with "Labels").

This implementation of NCA uses stochastic gradient descent, mini-batch stochastic gradient descent, or the L_BFGS optimizer. These optimizers do not guarantee global convergence for a nonconvex objective function (NCA's objective function is nonconvex), so the final results could depend on the random seed or other optimizer parameters.

Stochastic gradient descent, specified by the value 'sgd' for the parameter "Optimizer", depends primarily on three parameters: the step size (specified with "StepSize"), the batch size (specified with "BatchSize"), and the maximum number of iterations (specified with "MaxIterations"). In addition, a normalized starting point can be used by specifying the "Normalize" parameter, which is necessary if many warnings of the form 'Denominator of p_i is 0!' are given. Tuning the step size can be a tedious affair. In general, the step size is too large if the objective is not mostly uniformly decreasing, or if zero-valued denominator warnings are being issued. The step size is too small if the objective is changing very slowly. Setting the termination condition can be done easily once a good step size parameter is found; either increase the maximum iterations to a large number and allow SGD to find a minimum, or set the maximum iterations to 0 (allowing infinite iterations) and set the tolerance (specified by "Tolerance") to define the maximum allowed difference between objectives for SGD to terminate. Be careful---setting the tolerance instead of the maximum iterations can take a very long time and may actually never converge due to the properties of the SGD optimizer. Note that a single iteration of SGD refers to a single point, so to take a single pass over the dataset, set the value of the "MaxIterations" parameter equal to the number of points in the dataset.

The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter "Optimizer", uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: "NumBasis" (specifies the number of memory points used by L-BFGS), "MaxIterations", "ArmijoConstant", "Wolfe", "Tolerance" (the optimization is terminated when the gradient norm is below this value), "MaxLineSearchTrials", "MinStep", and "MaxStep" (which both refer to the line search routine). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS.

By default, the SGD optimizer is used.
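For example, a hypothetical invocation of NCA with the L-BFGS optimizer and a normalized starting point (all parameters are documented below):

// Initialize optional parameters for Nca().
param := mlpack.NcaOptions()
param.Labels = labels
param.Optimizer = "lbfgs"
param.Normalize = true

output := mlpack.Nca(input, param)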

Input parameters:

  • input (mat.Dense): Input dataset to run NCA on.
  • ArmijoConstant (float64): Armijo constant for L-BFGS. Default value 0.0001.
  • BatchSize (int): Batch size for mini-batch SGD. Default value 50.
  • Labels (mat.Dense): Labels for input dataset.
  • LinearScan (bool): Don't shuffle the order in which data points are visited for SGD or mini-batch SGD.
  • MaxIterations (int): Maximum number of iterations for SGD or L-BFGS (0 indicates no limit). Default value 500000.
  • MaxLineSearchTrials (int): Maximum number of line search trials for L-BFGS. Default value 50.
  • MaxStep (float64): Maximum step of line search for L-BFGS. Default value 1e+20.
  • MinStep (float64): Minimum step of line search for L-BFGS. Default value 1e-20.
  • Normalize (bool): Use a normalized starting point for optimization. This is useful for when points are far apart, or when SGD is returning NaN.
  • NumBasis (int): Number of memory points to be stored for L-BFGS. Default value 5.
  • Optimizer (string): Optimizer to use; 'sgd' or 'lbfgs'. Default value 'sgd'.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • StepSize (float64): Step size for stochastic gradient descent (alpha). Default value 0.01.
  • Tolerance (float64): Maximum tolerance for termination of SGD or L-BFGS. Default value 1e-07.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
  • Wolfe (float64): Wolfe condition parameter for L-BFGS. Default value 0.9.

Output parameters:

  • output (mat.Dense): Output matrix for learned distance matrix.

func Nmf

func Nmf(input *mat.Dense, rank int, param *NmfOptionalParam) (*mat.Dense, *mat.Dense)

This program performs non-negative matrix factorization on the given dataset, returning the resulting decomposed matrices. For an input dataset V, NMF decomposes V into two matrices W and H such that

V = W * H

where all elements in W and H are non-negative. If V is of size (n x m), then W will be of size (n x r) and H will be of size (r x m), where r is the rank of the factorization (specified by the "Rank" parameter).
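Since V is approximately W * H, the quality of a factorization can be checked directly with gonum; checkResidual below is a hypothetical helper, not part of mlpack, and assumes gonum.org/v1/gonum/mat is imported:

// checkResidual reconstructs V from the NMF factors W (n x r) and
// H (r x m) and returns the Frobenius norm of V - W*H.
func checkResidual(V, W, H *mat.Dense) float64 {
    var WH mat.Dense
    WH.Mul(W, H) // (n x r) * (r x m) = (n x m)
    var diff mat.Dense
    diff.Sub(V, &WH)
    return mat.Norm(&diff, 2) // norm 2 is the Frobenius norm in gonum
}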

Optionally, the desired update rules for each NMF iteration can be chosen from the following list:

  • multdist: multiplicative distance-based update rules (Lee and Seung 1999)
  • multdiv: multiplicative divergence-based update rules (Lee and Seung 1999)
  • als: alternating least squares update rules (Paatero and Tapper 1994)

The maximum number of iterations is specified with "MaxIterations", and the minimum residue required for algorithm termination is specified with the "MinResidue" parameter.

For example, to run NMF on the input matrix V using the 'multdist' update rules with a rank-10 decomposition and storing the decomposed matrices into W and H, the following command could be used:

// Initialize optional parameters for Nmf().
param := mlpack.NmfOptions()
param.UpdateRules = "multdist"

H, W := mlpack.Nmf(V, 10, param)

Input parameters:

  • input (mat.Dense): Input dataset to perform NMF on.
  • rank (int): Rank of the factorization.
  • InitialH (mat.Dense): Initial H matrix.
  • InitialW (mat.Dense): Initial W matrix.
  • MaxIterations (int): Number of iterations before NMF terminates (0 runs until convergence). Default value 10000.
  • MinResidue (float64): The minimum root mean square residue allowed for each iteration, below which the program terminates. Default value 1e-05.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • UpdateRules (string): Update rules for each iteration; ( multdist | multdiv | als ). Default value 'multdist'.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • h (mat.Dense): Matrix to save the calculated H to.
  • w (mat.Dense): Matrix to save the calculated W to.

func Pca

func Pca(input *mat.Dense, param *PcaOptionalParam) *mat.Dense

This program performs principal components analysis on the given dataset using the exact, randomized, randomized block Krylov, or QUIC SVD method. It will transform the data onto its principal components, optionally performing dimensionality reduction by ignoring the principal components with the smallest eigenvalues.

Use the "Input" parameter to specify the dataset to perform PCA on. A desired new dimensionality can be specified with the "NewDimensionality" parameter, or the desired variance to retain can be specified with the "VarToRetain" parameter. If desired, the dataset can be scaled before running PCA with the "Scale" parameter.

Multiple different decomposition techniques can be used. The method to use can be specified with the "DecompositionMethod" parameter, and it may take the values 'exact', 'randomized', 'randomized-block-krylov', or 'quic'.

For example, to reduce the dimensionality of the matrix data to 5 dimensions using randomized SVD for the decomposition, storing the output matrix to data_mod, the following command can be used:

// Initialize optional parameters for Pca().
param := mlpack.PcaOptions()
param.NewDimensionality = 5
param.DecompositionMethod = "randomized"

data_mod := mlpack.Pca(data, param)
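
Alternatively, rather than fixing the output dimensionality, the amount of variance to retain can be specified. A minimal sketch that keeps 95% of the variance:

// Initialize optional parameters for Pca().
param := mlpack.PcaOptions()
param.VarToRetain = 0.95

data_mod := mlpack.Pca(data, param)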

Input parameters:

  • input (mat.Dense): Input dataset to perform PCA on.
  • DecompositionMethod (string): Method used for the principal components analysis: 'exact', 'randomized', 'randomized-block-krylov', 'quic'. Default value 'exact'.
  • NewDimensionality (int): Desired dimensionality of output dataset. If 0, no dimensionality reduction is performed. Default value 0.
  • Scale (bool): If set, the data will be scaled before running PCA, such that the variance of each feature is 1.
  • VarToRetain (float64): Amount of variance to retain; should be between 0 and 1. If 1, all variance is retained. Overrides the "NewDimensionality" parameter. Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to save modified dataset to.

func Perceptron

func Perceptron(param *PerceptronOptionalParam) (*mat.Dense, perceptronModel, *mat.Dense)

This program implements a perceptron, which is a single-layer neural network. The perceptron makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The perceptron learning rule is able to converge, given enough iterations (specified using the "MaxIterations" parameter), if the data supplied is linearly separable. The perceptron is parameterized by a matrix of weight vectors that denote the numerical weights of the neural network.

This program allows loading a perceptron from a model (via the "InputModel" parameter) or training a perceptron given training data (via the "Training" parameter), or both those things at once. In addition, this program allows classification on a test dataset (via the "Test" parameter) and the classification results on the test set may be saved with the "Predictions" output parameter. The perceptron model may be saved with the "OutputModel" output parameter.

Note: the following parameter is deprecated and will be removed in mlpack 4.0.0: "Output". Use "Predictions" instead of "Output".

The training data given with the "Training" option may have class labels as its last dimension (so, if the training data is in CSV format, labels should be the last column). Alternately, the "Labels" parameter may be used to specify a separate matrix of labels.

All these options make it easy to train a perceptron, and then re-use that perceptron for later classification. The invocation below trains a perceptron on training_data with labels training_labels, and saves the model to perceptron_model.

// Initialize optional parameters for Perceptron().
param := mlpack.PerceptronOptions()
param.Training = training_data
param.Labels = training_labels

_, perceptron_model, _ := mlpack.Perceptron(param)

Then, this model can be re-used for classification on the test data test_data. The example below does precisely that, saving the predicted classes to predictions.

// Initialize optional parameters for Perceptron().
param := mlpack.PerceptronOptions()
param.InputModel = &perceptron_model
param.Test = test_data

_, _, predictions := mlpack.Perceptron(param)

Note that all of the options may be specified at once: predictions may be calculated right after training a model, and model training can occur even if an existing perceptron model is passed with the "InputModel" parameter. However, note that the number of classes and the dimensionality of all data must match. So you cannot pass a perceptron model trained on 2 classes and then re-train with a 4-class dataset. Similarly, attempting classification on a 3-dimensional dataset with a perceptron that has been trained on 8 dimensions will cause an error.
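
A minimal sketch of that combined form, training on training_data with training_labels and classifying test_data in a single call:

// Initialize optional parameters for Perceptron().
param := mlpack.PerceptronOptions()
param.Training = training_data
param.Labels = training_labels
param.Test = test_data

_, perceptron_model, predictions := mlpack.Perceptron(param)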

Input parameters:

  • InputModel (perceptronModel): Input perceptron model.
  • Labels (mat.Dense): A matrix containing labels for the training set.
  • MaxIterations (int): The maximum number of iterations the perceptron is to be run. Default value 1000.
  • Test (mat.Dense): A matrix containing the test set.
  • Training (mat.Dense): A matrix containing the training set.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): The matrix in which the predicted labels for the test set will be written.
  • outputModel (perceptronModel): Output for trained perceptron model.
  • predictions (mat.Dense): The matrix in which the predicted labels for the test set will be written.

func PreprocessBinarize

func PreprocessBinarize(input *mat.Dense, param *PreprocessBinarizeOptionalParam) *mat.Dense

This utility takes a dataset and binarizes the variables into either 0 or 1 given a threshold. The user can apply binarization to a single dimension or to the whole dataset. The dimension to apply binarization to can be specified using the "Dimension" parameter; if left unspecified, every dimension will be binarized.

The threshold for binarization can also be specified with the "Threshold" parameter; the default threshold is 0.0.

The binarized matrix may be saved with the "Output" output parameter.

For example, if we want to set all variables greater than 5.0 in the dataset X to 1 and variables less than or equal to 5.0 to 0, and save the result to Y, we could run

// Initialize optional parameters for PreprocessBinarize().
param := mlpack.PreprocessBinarizeOptions()
param.Threshold = 5

Y := mlpack.PreprocessBinarize(X, param)

But if we want to apply this to only the first (0th) dimension of X, we could instead run

// Initialize optional parameters for PreprocessBinarize().
param := mlpack.PreprocessBinarizeOptions()
param.Threshold = 5
param.Dimension = 0

Y := mlpack.PreprocessBinarize(X, param)

Input parameters:

  • input (mat.Dense): Input data matrix.
  • Dimension (int): Dimension to apply the binarization to. If not set, the program will binarize every dimension by default. Default value 0.
  • Threshold (float64): Threshold to be applied for binarization. If not set, the threshold defaults to 0.0. Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix in which to save the output.

func PreprocessDescribe

func PreprocessDescribe(input *mat.Dense, param *PreprocessDescribeOptionalParam)

This utility takes a dataset and prints out the descriptive statistics of the data. Descriptive statistics is the discipline of quantitatively describing the main features of a collection of information, or the quantitative description itself. The program does not modify the original file, but instead prints out the statistics to the console. The printed result will look like a table.

Optionally, the width and precision of the output can be adjusted using the "Width" and "Precision" parameters. A user can also select a specific dimension to analyze if there are too many dimensions. The "Population" parameter can be specified when the dataset should be considered as a population; otherwise, the dataset will be treated as a sample.

For a simple example, to print out statistical facts about the dataset X using the default settings, we could run

// Initialize optional parameters for PreprocessDescribe().
param := mlpack.PreprocessDescribeOptions()
param.Verbose = true

mlpack.PreprocessDescribe(X, param)

If we want to customize the width to 10 and precision to 5 and consider the dataset as a population, we could run

// Initialize optional parameters for PreprocessDescribe().
param := mlpack.PreprocessDescribeOptions()
param.Width = 10
param.Precision = 5
param.Verbose = true

mlpack.PreprocessDescribe(X, param)

Input parameters:

  • input (mat.Dense): Matrix containing data.
  • Dimension (int): Dimension of the data. Use this to specify a single dimension to analyze. Default value 0.
  • Population (bool): If specified, the program will calculate statistics assuming the dataset is the population. By default, the program will treat the dataset as a sample.
  • Precision (int): Precision of the output statistics. Default value 4.
  • RowMajor (bool): If specified, the program will calculate statistics across rows, not across columns. (Remember that in mlpack, a column represents a point, so this option is generally not necessary.)
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
  • Width (int): Width of the output table. Default value 8.

Output parameters:

func PreprocessOneHotEncoding

func PreprocessOneHotEncoding(dimensions []int, input *mat.Dense, param *PreprocessOneHotEncodingOptionalParam) *mat.Dense

This utility takes a dataset and a vector of indices and does one-hot encoding of the respective features at those indices. Indices represent the IDs of the dimensions to be one-hot encoded.

The output matrix with encoded features may be saved with the "Output" parameter.

So, a simple example where we want to encode the 1st and 3rd features from dataset X into X_output would be

// Initialize optional parameters for PreprocessOneHotEncoding().
param := mlpack.PreprocessOneHotEncodingOptions()

X_output := mlpack.PreprocessOneHotEncoding([]int{1, 3}, X, param)

Input parameters:

  • dimensions ([]int): Indices of dimensions that need to be one-hot encoded.
  • input (mat.Dense): Matrix containing data.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to save one-hot encoded features data to.

func PreprocessScale

func PreprocessScale(input *mat.Dense, param *PreprocessScaleOptionalParam) (*mat.Dense, scalingModel)

This utility takes a dataset and performs feature scaling using one of six scaler methods: 'max_abs_scaler', 'mean_normalization', 'min_max_scaler', 'standard_scaler', 'pca_whitening', and 'zca_whitening'. The function takes a matrix as "Input" and a scaling method type which can be specified with the "ScalerMethod" parameter (the default is the standard scaler), and outputs a matrix with scaled features.

The output scaled feature matrix may be saved with the "Output" output parameter.

The model used to scale features can be saved using "OutputModel" and later loaded back using "InputModel".

For example, to scale the dataset X into X_scaled with 'standard_scaler' as the scaler method, we could run

// Initialize optional parameters for PreprocessScale().
param := mlpack.PreprocessScaleOptions()
param.ScalerMethod = "standard_scaler"

X_scaled, _ := mlpack.PreprocessScale(X, param)

To whiten the dataset X with PCA whitening, using 0.01 as the regularization parameter and storing the result in X_scaled, we could run

// Initialize optional parameters for PreprocessScale().
param := mlpack.PreprocessScaleOptions()
param.ScalerMethod = "pca_whitening"
param.Epsilon = 0.01

X_scaled, _ := mlpack.PreprocessScale(X, param)

A scaled dataset can also be transformed back using "InverseScaling". An example to rescale X_scaled back into X using the saved model "InputModel" is:

// Initialize optional parameters for PreprocessScale().
param := mlpack.PreprocessScaleOptions()
param.InverseScaling = true
param.InputModel = &saved

X, _ := mlpack.PreprocessScale(X_scaled, param)

Another simple example: to scale the dataset X into X_scaled with min_max_scaler as the scaler method, using a scaling range of 1 to 3 instead of the default 0 to 1, we could run

// Initialize optional parameters for PreprocessScale().
param := mlpack.PreprocessScaleOptions()
param.ScalerMethod = "min_max_scaler"
param.MinValue = 1
param.MaxValue = 3

X_scaled, _ := mlpack.PreprocessScale(X, param)

Input parameters:

  • input (mat.Dense): Matrix containing data.
  • Epsilon (float64): Regularization parameter for 'pca_whitening' or 'zca_whitening'; should be between -1 and 1. Default value 1e-06.
  • InputModel (scalingModel): Input Scaling model.
  • InverseScaling (bool): Inverse scaling to recover the original dataset.
  • MaxValue (int): Ending value of range for min_max_scaler. Default value 1.
  • MinValue (int): Starting value of range for min_max_scaler. Default value 0.
  • ScalerMethod (string): Method to use for scaling. Default value 'standard_scaler'.
  • Seed (int): Random seed (0 for std::time(NULL)). Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • output (mat.Dense): Matrix to save scaled data to.
  • outputModel (scalingModel): Output scaling model.

func PreprocessSplit

func PreprocessSplit(input *mat.Dense, param *PreprocessSplitOptionalParam) (*mat.Dense, *mat.Dense, *mat.Dense, *mat.Dense)

This utility takes a dataset and optionally labels and splits them into a training set and a test set. Before the split, the points in the dataset are randomly reordered. The percentage of the dataset to be used as the test set can be specified with the "TestRatio" parameter; the default is 0.2 (20%).

The output training and test matrices may be saved with the "Training" and "Test" output parameters.

Optionally, labels can also be split along with the data by specifying the "InputLabels" parameter. Splitting labels works the same way as splitting the data. The output training and test labels may be saved with the "TrainingLabels" and "TestLabels" output parameters, respectively.

For example, to split the dataset X into X_train and X_test with 60% of the data in the training set and 40% in the test set, we could run

// Initialize optional parameters for PreprocessSplit().
param := mlpack.PreprocessSplitOptions()
param.TestRatio = 0.4

X_test, _, X_train, _ := mlpack.PreprocessSplit(X, param)

By default, the dataset is shuffled before it is split; the "NoShuffle" option can be provided to avoid shuffling. An example of splitting without shuffling:

// Initialize optional parameters for PreprocessSplit().
param := mlpack.PreprocessSplitOptions()
param.TestRatio = 0.4
param.NoShuffle = true

X_test, _, X_train, _ := mlpack.PreprocessSplit(X, param)

If we had a dataset X and associated labels y, and we wanted to split these into X_train, y_train, X_test, and y_test, with 30% of the data in the test set, we could run

// Initialize optional parameters for PreprocessSplit().
param := mlpack.PreprocessSplitOptions()
param.InputLabels = y
param.TestRatio = 0.3

X_test, y_test, X_train, y_train := mlpack.PreprocessSplit(X, param)

To maintain the ratio of each class in the train and test sets, the "StratifyData" option can be used.

// Initialize optional parameters for PreprocessSplit().
param := mlpack.PreprocessSplitOptions()
param.TestRatio = 0.4
param.StratifyData = true

X_test, _, X_train, _ := mlpack.PreprocessSplit(X, param)

Input parameters:

  • input (mat.Dense): Matrix containing data.
  • InputLabels (mat.Dense): Matrix containing labels.
  • NoShuffle (bool): Avoid shuffling the data before splitting.
  • Seed (int): Random seed (0 for std::time(NULL)). Default value 0.
  • StratifyData (bool): Stratify the data according to labels.
  • TestRatio (float64): Ratio of the test set; if not set, the ratio defaults to 0.2. Default value 0.2.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • test (mat.Dense): Matrix to save test data to.
  • testLabels (mat.Dense): Matrix to save test labels to.
  • training (mat.Dense): Matrix to save training data to.
  • trainingLabels (mat.Dense): Matrix to save training labels to.

func Radical

func Radical(input *mat.Dense, param *RadicalOptionalParam) (*mat.Dense, *mat.Dense)

An implementation of RADICAL, a method for independent component analysis (ICA). Assuming that we have an input matrix X, the goal is to find a square unmixing matrix W such that Y = W * X and the dimensions of Y are independent components. If the algorithm is running particularly slowly, try reducing the number of replicates.

The input matrix to perform ICA on should be specified with the "Input" parameter. The output matrix Y may be saved with the "OutputIc" output parameter, and the output unmixing matrix W may be saved with the "OutputUnmixing" output parameter.

For example, to perform ICA on the matrix X with 40 replicates, saving the independent components to ic, the following command may be used:

// Initialize optional parameters for Radical().
param := mlpack.RadicalOptions()
param.Replicates = 40

ic, _ := mlpack.Radical(X, param)
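
The unmixing matrix W may be captured as well by keeping the second return value:

ic, unmixing := mlpack.Radical(X, param)
// ic holds the independent components Y; unmixing holds the matrix W
// such that Y = W * X.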

Input parameters:

  • input (mat.Dense): Input dataset for ICA.
  • Angles (int): Number of angles to consider in brute-force search during Radical2D. Default value 150.
  • NoiseStdDev (float64): Standard deviation of Gaussian noise. Default value 0.175.
  • Objective (bool): If set, an estimate of the final objective function is printed.
  • Replicates (int): Number of Gaussian-perturbed replicates to use (per point) in Radical2D. Default value 30.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Sweeps (int): Number of sweeps; each sweep calls Radical2D once for each pair of dimensions. Default value 0.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputIc (mat.Dense): Matrix to save independent components to.
  • outputUnmixing (mat.Dense): Matrix to save unmixing matrix to.

func RandomForest

func RandomForest(param *RandomForestOptionalParam) (randomForestModel, *mat.Dense, *mat.Dense)

This program is an implementation of the standard random forest classification algorithm by Leo Breiman. A random forest can be trained and saved for later use, or a random forest may be loaded and predictions or class probabilities for points may be generated.

The training set and associated labels are specified with the "Training" and "Labels" parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if "Labels" is not specified, the labels are assumed to be the last dimension of the training dataset.

When a model is trained, the "OutputModel" output parameter may be used to save the trained model. A model may be loaded for predictions with the "InputModel" parameter. The "InputModel" parameter may not be specified when the "Training" parameter is specified. The "MinimumLeafSize" parameter specifies the minimum number of training points that must fall into each leaf for it to be split. The "NumTrees" parameter controls the number of trees in the random forest. The "MinimumGainSplit" parameter controls the minimum required gain for a decision tree node to split. Larger values will force higher-confidence splits. The "MaximumDepth" parameter specifies the maximum depth of the tree.

The "SubspaceDim" parameter is used to control the number of random

dimensions chosen for an individual node's split. If "PrintTrainingAccuracy" is specified, the calculated accuracy on the training set will be printed.

Test data may be specified with the "Test" parameter, and if performance measures are desired for that test set, labels for the test points may be specified with the "TestLabels" parameter. Predictions for each test point may be saved via the "Predictions" output parameter. Class probabilities for each prediction may be saved with the "Probabilities" output parameter.

For example, to train a random forest with a minimum leaf size of 20 using 10 trees on the dataset contained in data with labels labels, saving the output random forest to rf_model and printing the training accuracy, one could call

// Initialize optional parameters for RandomForest().
param := mlpack.RandomForestOptions()
param.Training = data
param.Labels = labels
param.MinimumLeafSize = 20
param.NumTrees = 10
param.PrintTrainingAccuracy = true

rf_model, _, _ := mlpack.RandomForest(param)

Then, to use that model to classify points in test_set and print the test error given the labels test_labels using that model, while saving the predictions for each point to predictions, one could call

// Initialize optional parameters for RandomForest().
param := mlpack.RandomForestOptions()
param.InputModel = &rf_model
param.Test = test_set
param.TestLabels = test_labels

_, predictions, _ := mlpack.RandomForest(param)
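
Class probabilities for each test point may be captured from the same call by keeping the third return value:

// predictions holds the predicted classes; probabilities holds the
// predicted class probabilities for each test point.
_, predictions, probabilities := mlpack.RandomForest(param)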

Input parameters:

  • InputModel (randomForestModel): Pre-trained random forest to use for classification.
  • Labels (mat.Dense): Labels for training dataset.
  • MaximumDepth (int): Maximum depth of the tree (0 means no limit). Default value 0.
  • MinimumGainSplit (float64): Minimum gain needed to make a split when building a tree. Default value 0.
  • MinimumLeafSize (int): Minimum number of points in each leaf node. Default value 1.
  • NumTrees (int): Number of trees in the random forest. Default value 10.
  • PrintTrainingAccuracy (bool): If set, then the accuracy of the model on the training set will be printed (verbose must also be specified).
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • SubspaceDim (int): Dimensionality of random subspace to use for each split. '0' will autoselect the square root of data dimensionality. Default value 0.
  • Test (mat.Dense): Test dataset to produce predictions for.
  • TestLabels (mat.Dense): Test dataset labels, if accuracy calculation is desired.
  • Training (mat.Dense): Training dataset.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.
  • WarmStart (bool): If true and passed along with "Training" and "InputModel", trains more trees on top of the existing model.

Output parameters:

  • outputModel (randomForestModel): Model to save trained random forest to.
  • predictions (mat.Dense): Predicted classes for each point in the test set.
  • probabilities (mat.Dense): Predicted class probabilities for each point in the test set.

func Save

func Save(filename string, mat *mat.Dense) error

Save() writes all of the records of the given matrix to a CSV file.
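
A minimal usage sketch, assuming log and gonum.org/v1/gonum/mat (as mat) are imported; the filename is just an example:

// Write a small 2x2 matrix to data.csv.
m := mat.NewDense(2, 2, []float64{1, 2, 3, 4})
if err := mlpack.Save("data.csv", m); err != nil {
	log.Fatal(err)
}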

func SoftmaxRegression

func SoftmaxRegression(param *SoftmaxRegressionOptionalParam) (softmaxRegression, *mat.Dense, *mat.Dense)

This program performs softmax regression, a generalization of logistic regression to the multiclass case, and has support for L2 regularization. The program is able to train a model, load an existing model, and give predictions (and optionally their accuracy) for test data.

Training a softmax regression model is done by giving a file of training points with the "Training" parameter and their corresponding labels with the "Labels" parameter. The number of classes can be manually specified with the "NumberOfClasses" parameter, and the maximum number of iterations of the L-BFGS optimizer can be specified with the "MaxIterations" parameter. The L2 regularization constant can be specified with the "Lambda" parameter and if an intercept term is not desired in the model, the "NoIntercept" parameter can be specified.

The trained model can be saved with the "OutputModel" output parameter. If training is not desired, but only testing is, a model can be loaded with the "InputModel" parameter. At the current time, a loaded model cannot be trained further, so specifying both "InputModel" and "Training" is not allowed.

The program is also able to evaluate a model on test data. A test dataset can be specified with the "Test" parameter. Class predictions can be saved with the "Predictions" output parameter. If labels are specified for the test data with the "TestLabels" parameter, then the program will print the accuracy of the predictions on the given test set and its corresponding labels.

For example, to train a softmax regression model on the data in dataset with labels labels, with a maximum of 1000 iterations for training, saving the trained model to sr_model, the following command can be used:

// Initialize optional parameters for SoftmaxRegression().
param := mlpack.SoftmaxRegressionOptions()
param.Training = dataset
param.Labels = labels
param.MaxIterations = 1000

sr_model, _, _ := mlpack.SoftmaxRegression(param)

Then, to use sr_model to classify the test points in test_points, saving the output predictions to predictions, the following command can be used:

// Initialize optional parameters for SoftmaxRegression().
param := mlpack.SoftmaxRegressionOptions()
param.InputModel = &sr_model
param.Test = test_points

_, predictions, _ := mlpack.SoftmaxRegression(param)
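
If labels for the test points are available, they may be passed with the "TestLabels" parameter (along with "Verbose") so that the accuracy on the test set is printed; a minimal sketch extending the call above:

param.TestLabels = test_labels
param.Verbose = true

_, predictions, _ = mlpack.SoftmaxRegression(param)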

Input parameters:

  • InputModel (softmaxRegression): File containing existing model (parameters).
  • Labels (mat.Dense): A matrix containing labels for the points in the training set (y); the labels must be ordered as a row.
  • Lambda (float64): L2-regularization constant. Default value 0.0001.
  • MaxIterations (int): Maximum number of iterations before termination. Default value 400.
  • NoIntercept (bool): Do not add the intercept term to the model.
  • NumberOfClasses (int): Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. Default value 0.
  • Test (mat.Dense): Matrix containing test dataset.
  • TestLabels (mat.Dense): Matrix containing test labels.
  • Training (mat.Dense): A matrix containing the training set (the matrix of predictors, X).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • outputModel (softmaxRegression): File to save trained softmax regression model to.
  • predictions (mat.Dense): Matrix to save predictions for test dataset into.
  • probabilities (mat.Dense): Matrix to save class probabilities for test dataset into.

func SparseCoding

func SparseCoding(param *SparseCodingOptionalParam) (*mat.Dense, *mat.Dense, sparseCoding)

An implementation of Sparse Coding with Dictionary Learning, which achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net). Given a dense data matrix X with d dimensions and n points, sparse coding seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a sparse coding matrix Z with n points in k dimensions.

The original data matrix X can then be reconstructed as Z * D. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.

The sparse coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a sparse coding step, which updates the sparse coding matrix.

Once a dictionary D is found, the sparse coding model may be used to encode other matrices, and saved for future usage.

To run this program, either an input matrix or an already-saved sparse coding model must be specified. An input matrix may be specified with the "Training" option, along with the number of atoms in the dictionary (specified with the "Atoms" parameter). It is also possible to specify an initial dictionary for the optimization, with the "InitialDictionary" parameter. An input model may be specified with the "InputModel" parameter.

As an example, to build a sparse coding model on the dataset data using 200 atoms and an l1-regularization parameter of 0.1, saving the model into model, use

// Initialize optional parameters for SparseCoding().
param := mlpack.SparseCodingOptions()
param.Training = data
param.Atoms = 200
param.Lambda1 = 0.1

_, _, model := mlpack.SparseCoding(param)

Then, this model could be used to encode a new matrix, otherdata, and save the output codes to codes:

// Initialize optional parameters for SparseCoding().
param := mlpack.SparseCodingOptions()
param.InputModel = &model
param.Test = otherdata

codes, _, _ := mlpack.SparseCoding(param)
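
Since the original data can be reconstructed as Z * D, the encoding can be checked with gonum. A minimal sketch, assuming the call above is changed to also keep the dictionary (codes, dictionary, _ := mlpack.SparseCoding(param)) and that gonum.org/v1/gonum/mat (as mat) is imported:

// Reconstruct the approximation of otherdata as codes * dictionary.
var recon mat.Dense
recon.Mul(codes, dictionary)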

Input parameters:

  • Atoms (int): Number of atoms in the dictionary. Default value 15.
  • InitialDictionary (mat.Dense): Optional initial dictionary matrix.
  • InputModel (sparseCoding): File containing input sparse coding model.
  • Lambda1 (float64): Sparse coding l1-norm regularization parameter. Default value 0.
  • Lambda2 (float64): Sparse coding l2-norm regularization parameter. Default value 0.
  • MaxIterations (int): Maximum number of iterations for sparse coding (0 indicates no limit). Default value 0.
  • NewtonTolerance (float64): Tolerance for convergence of Newton method. Default value 1e-06.
  • Normalize (bool): If set, the input data matrix will be normalized before coding.
  • ObjectiveTolerance (float64): Tolerance for convergence of the objective function. Default value 0.01.
  • Seed (int): Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
  • Test (mat.Dense): Optional matrix to be encoded by trained model.
  • Training (mat.Dense): Matrix of training data (X).
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • codes (mat.Dense): Matrix to save the output sparse codes of the test matrix (specified with the "Test" parameter) to.
  • dictionary (mat.Dense): Matrix to save the output dictionary to.
  • outputModel (sparseCoding): File to save trained sparse coding model to.

func TestGoBinding

func TestGoBinding(doubleIn float64, intIn int, stringIn string, param *TestGoBindingOptionalParam) (*mat.Dense, float64, int, *mat.Dense, *mat.Dense, float64, gaussianKernel, *mat.Dense, []string, string, *mat.Dense, *mat.Dense, *mat.Dense, []int)

A simple program to test Go binding functionality. If mlpack is built with the BUILD_TESTS option set to off, this binding will not be built.

Input parameters:

  • doubleIn (float64): Input double, must be 4.0.
  • intIn (int): Input int, must be 12.
  • stringIn (string): Input string, must be 'hello'.
  • BuildModel (bool): If true, a model will be returned.
  • ColIn (mat.Dense): Input column.
  • Flag1 (bool): Input flag, must be specified.
  • Flag2 (bool): Input flag, must not be specified.
  • MatrixAndInfoIn (matrixWithInfo): Input matrix and info.
  • MatrixIn (mat.Dense): Input matrix.
  • ModelIn (gaussianKernel): Input model.
  • RowIn (mat.Dense): Input row.
  • StrVectorIn ([]string): Input vector of strings.
  • UcolIn (mat.Dense): Input unsigned column.
  • UmatrixIn (mat.Dense): Input unsigned matrix.
  • UrowIn (mat.Dense): Input unsigned row.
  • VectorIn ([]int): Input vector of numbers.
  • Verbose (bool): Display informational messages and the full list of parameters and timers at the end of execution.

Output parameters:

  • colOut (mat.Dense): Output column; 2x input column.
  • doubleOut (float64): Output double, will be 5.0. Default value 0.
  • intOut (int): Output int, will be 13. Default value 0.
  • matrixAndInfoOut (mat.Dense): Output matrix and info; all numeric elements multiplied by 3.
  • matrixOut (mat.Dense): Output matrix.
  • modelBwOut (float64): The bandwidth of the model. Default value 0.
  • modelOut (gaussianKernel): Output model, with twice the bandwidth.
  • rowOut (mat.Dense): Output row. 2x input row.
  • strVectorOut ([]string): Output string vector.
  • stringOut (string): Output string, will be 'hello2'. Default value ''.
  • ucolOut (mat.Dense): Output unsigned column. 2x input column.
  • umatrixOut (mat.Dense): Output unsigned matrix.
  • urowOut (mat.Dense): Output unsigned row. 2x input row.
  • vectorOut ([]int): Output vector.

func UnZip

func UnZip(input string, output string) error

UnZip() unzips the given input to the given output file.
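
A minimal usage sketch, assuming log is imported; the filenames are just examples:

if err := mlpack.UnZip("data.zip", "data.csv"); err != nil {
	log.Fatal(err)
}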

Types

type AdaboostOptionalParam

type AdaboostOptionalParam struct {
	InputModel  *adaBoostModel
	Iterations  int
	Labels      *mat.Dense
	Test        *mat.Dense
	Tolerance   float64
	Training    *mat.Dense
	Verbose     bool
	WeakLearner string
}

func AdaboostOptions

func AdaboostOptions() *AdaboostOptionalParam

type ApproxKfnOptionalParam

type ApproxKfnOptionalParam struct {
	Algorithm      string
	CalculateError bool
	ExactDistances *mat.Dense
	InputModel     *approxkfnModel
	K              int
	NumProjections int
	NumTables      int
	Query          *mat.Dense
	Reference      *mat.Dense
	Verbose        bool
}

func ApproxKfnOptions

func ApproxKfnOptions() *ApproxKfnOptionalParam

type BayesianLinearRegressionOptionalParam

type BayesianLinearRegressionOptionalParam struct {
	Center     bool
	Input      *mat.Dense
	InputModel *bayesianLinearRegression
	Responses  *mat.Dense
	Scale      bool
	Test       *mat.Dense
	Verbose    bool
}

func BayesianLinearRegressionOptions

func BayesianLinearRegressionOptions() *BayesianLinearRegressionOptionalParam

type CfOptionalParam

type CfOptionalParam struct {
	Algorithm                string
	AllUserRecommendations   bool
	InputModel               *cfModel
	Interpolation            string
	IterationOnlyTermination bool
	MaxIterations            int
	MinResidue               float64
	NeighborSearch           string
	Neighborhood             int
	Normalization            string
	Query                    *mat.Dense
	Rank                     int
	Recommendations          int
	Seed                     int
	Test                     *mat.Dense
	Training                 *mat.Dense
	Verbose                  bool
}

func CfOptions

func CfOptions() *CfOptionalParam

type DbscanOptionalParam

type DbscanOptionalParam struct {
	Epsilon       float64
	MinSize       int
	Naive         bool
	SelectionType string
	SingleMode    bool
	TreeType      string
	Verbose       bool
}

func DbscanOptions

func DbscanOptions() *DbscanOptionalParam

type DecisionTreeOptionalParam

type DecisionTreeOptionalParam struct {
	InputModel            *decisionTreeModel
	Labels                *mat.Dense
	MaximumDepth          int
	MinimumGainSplit      float64
	MinimumLeafSize       int
	PrintTrainingAccuracy bool
	PrintTrainingError    bool
	Test                  *matrixWithInfo
	TestLabels            *mat.Dense
	Training              *matrixWithInfo
	Verbose               bool
	Weights               *mat.Dense
}

func DecisionTreeOptions

func DecisionTreeOptions() *DecisionTreeOptionalParam

type DetOptionalParam

type DetOptionalParam struct {
	Folds       int
	InputModel  *dTree
	MaxLeafSize int
	MinLeafSize int
	PathFormat  string
	SkipPruning bool
	Test        *mat.Dense
	Training    *mat.Dense
	Verbose     bool
}

func DetOptions

func DetOptions() *DetOptionalParam

type EmstOptionalParam

type EmstOptionalParam struct {
	LeafSize int
	Naive    bool
	Verbose  bool
}

func EmstOptions

func EmstOptions() *EmstOptionalParam

type FastmksOptionalParam

type FastmksOptionalParam struct {
	Bandwidth  float64
	Base       float64
	Degree     float64
	InputModel *fastmksModel
	K          int
	Kernel     string
	Naive      bool
	Offset     float64
	Query      *mat.Dense
	Reference  *mat.Dense
	Scale      float64
	Single     bool
	Verbose    bool
}

func FastmksOptions

func FastmksOptions() *FastmksOptionalParam

type GmmGenerateOptionalParam

type GmmGenerateOptionalParam struct {
	Seed    int
	Verbose bool
}

func GmmGenerateOptions

func GmmGenerateOptions() *GmmGenerateOptionalParam

type GmmProbabilityOptionalParam

type GmmProbabilityOptionalParam struct {
	Verbose bool
}

func GmmProbabilityOptions

func GmmProbabilityOptions() *GmmProbabilityOptionalParam

type GmmTrainOptionalParam

type GmmTrainOptionalParam struct {
	DiagonalCovariance  bool
	InputModel          *gmm
	KmeansMaxIterations int
	MaxIterations       int
	NoForcePositive     bool
	Noise               float64
	Percentage          float64
	RefinedStart        bool
	Samplings           int
	Seed                int
	Tolerance           float64
	Trials              int
	Verbose             bool
}

func GmmTrainOptions

func GmmTrainOptions() *GmmTrainOptionalParam

type HmmGenerateOptionalParam

type HmmGenerateOptionalParam struct {
	Seed       int
	StartState int
	Verbose    bool
}

func HmmGenerateOptions

func HmmGenerateOptions() *HmmGenerateOptionalParam

type HmmLoglikOptionalParam

type HmmLoglikOptionalParam struct {
	Verbose bool
}

func HmmLoglikOptions

func HmmLoglikOptions() *HmmLoglikOptionalParam

type HmmTrainOptionalParam

type HmmTrainOptionalParam struct {
	Batch      bool
	Gaussians  int
	InputModel *hmmModel
	LabelsFile string
	Seed       int
	States     int
	Tolerance  float64
	Type       string
	Verbose    bool
}

func HmmTrainOptions

func HmmTrainOptions() *HmmTrainOptionalParam

type HmmViterbiOptionalParam

type HmmViterbiOptionalParam struct {
	Verbose bool
}

func HmmViterbiOptions

func HmmViterbiOptions() *HmmViterbiOptionalParam

type HoeffdingTreeOptionalParam

type HoeffdingTreeOptionalParam struct {
	BatchMode                 bool
	Bins                      int
	Confidence                float64
	InfoGain                  bool
	InputModel                *hoeffdingTreeModel
	Labels                    *mat.Dense
	MaxSamples                int
	MinSamples                int
	NumericSplitStrategy      string
	ObservationsBeforeBinning int
	Passes                    int
	Test                      *matrixWithInfo
	TestLabels                *mat.Dense
	Training                  *matrixWithInfo
	Verbose                   bool
}

func HoeffdingTreeOptions

func HoeffdingTreeOptions() *HoeffdingTreeOptionalParam

type ImageConverterOptionalParam

type ImageConverterOptionalParam struct {
	Channels int
	Dataset  *mat.Dense
	Height   int
	Quality  int
	Save     bool
	Verbose  bool
	Width    int
}

func ImageConverterOptions

func ImageConverterOptions() *ImageConverterOptionalParam

type KdeOptionalParam

type KdeOptionalParam struct {
	AbsError          float64
	Algorithm         string
	Bandwidth         float64
	InitialSampleSize int
	InputModel        *kdeModel
	Kernel            string
	McBreakCoef       float64
	McEntryCoef       float64
	McProbability     float64
	MonteCarlo        bool
	Query             *mat.Dense
	Reference         *mat.Dense
	RelError          float64
	Tree              string
	Verbose           bool
}

func KdeOptions

func KdeOptions() *KdeOptionalParam

type KernelPcaOptionalParam

type KernelPcaOptionalParam struct {
	Bandwidth         float64
	Center            bool
	Degree            float64
	KernelScale       float64
	NewDimensionality int
	NystroemMethod    bool
	Offset            float64
	Sampling          string
	Verbose           bool
}

func KernelPcaOptions

func KernelPcaOptions() *KernelPcaOptionalParam

type KfnOptionalParam

type KfnOptionalParam struct {
	Algorithm     string
	Epsilon       float64
	InputModel    *kfnModel
	K             int
	LeafSize      int
	Percentage    float64
	Query         *mat.Dense
	RandomBasis   bool
	Reference     *mat.Dense
	Seed          int
	TreeType      string
	TrueDistances *mat.Dense
	TrueNeighbors *mat.Dense
	Verbose       bool
}

func KfnOptions

func KfnOptions() *KfnOptionalParam

type KmeansOptionalParam

type KmeansOptionalParam struct {
	Algorithm          string
	AllowEmptyClusters bool
	InPlace            bool
	InitialCentroids   *mat.Dense
	KillEmptyClusters  bool
	KmeansPlusPlus     bool
	LabelsOnly         bool
	MaxIterations      int
	Percentage         float64
	RefinedStart       bool
	Samplings          int
	Seed               int
	Verbose            bool
}

func KmeansOptions

func KmeansOptions() *KmeansOptionalParam

type KnnOptionalParam

type KnnOptionalParam struct {
	Algorithm     string
	Epsilon       float64
	InputModel    *knnModel
	K             int
	LeafSize      int
	Query         *mat.Dense
	RandomBasis   bool
	Reference     *mat.Dense
	Rho           float64
	Seed          int
	Tau           float64
	TreeType      string
	TrueDistances *mat.Dense
	TrueNeighbors *mat.Dense
	Verbose       bool
}

func KnnOptions

func KnnOptions() *KnnOptionalParam

type KrannOptionalParam

type KrannOptionalParam struct {
	Alpha             float64
	FirstLeafExact    bool
	InputModel        *raModel
	K                 int
	LeafSize          int
	Naive             bool
	Query             *mat.Dense
	RandomBasis       bool
	Reference         *mat.Dense
	SampleAtLeaves    bool
	Seed              int
	SingleMode        bool
	SingleSampleLimit int
	Tau               float64
	TreeType          string
	Verbose           bool
}

func KrannOptions

func KrannOptions() *KrannOptionalParam

type LarsOptionalParam

type LarsOptionalParam struct {
	Input       *mat.Dense
	InputModel  *lars
	Lambda1     float64
	Lambda2     float64
	Responses   *mat.Dense
	Test        *mat.Dense
	UseCholesky bool
	Verbose     bool
}

func LarsOptions

func LarsOptions() *LarsOptionalParam

type LinearRegressionOptionalParam

type LinearRegressionOptionalParam struct {
	InputModel        *linearRegression
	Lambda            float64
	Test              *mat.Dense
	Training          *mat.Dense
	TrainingResponses *mat.Dense
	Verbose           bool
}

func LinearRegressionOptions

func LinearRegressionOptions() *LinearRegressionOptionalParam

type LinearSvmOptionalParam

type LinearSvmOptionalParam struct {
	Delta         float64
	Epochs        int
	InputModel    *linearsvmModel
	Labels        *mat.Dense
	Lambda        float64
	MaxIterations int
	NoIntercept   bool
	NumClasses    int
	Optimizer     string
	Seed          int
	Shuffle       bool
	StepSize      float64
	Test          *mat.Dense
	TestLabels    *mat.Dense
	Tolerance     float64
	Training      *mat.Dense
	Verbose       bool
}

func LinearSvmOptions

func LinearSvmOptions() *LinearSvmOptionalParam

type LmnnOptionalParam

type LmnnOptionalParam struct {
	BatchSize      int
	Center         bool
	Distance       *mat.Dense
	K              int
	Labels         *mat.Dense
	LinearScan     bool
	MaxIterations  int
	Normalize      bool
	Optimizer      string
	Passes         int
	PrintAccuracy  bool
	Range          int
	Rank           int
	Regularization float64
	Seed           int
	StepSize       float64
	Tolerance      float64
	Verbose        bool
}

func LmnnOptions

func LmnnOptions() *LmnnOptionalParam

type LocalCoordinateCodingOptionalParam

type LocalCoordinateCodingOptionalParam struct {
	Atoms             int
	InitialDictionary *mat.Dense
	InputModel        *localCoordinateCoding
	Lambda            float64
	MaxIterations     int
	Normalize         bool
	Seed              int
	Test              *mat.Dense
	Tolerance         float64
	Training          *mat.Dense
	Verbose           bool
}

func LocalCoordinateCodingOptions

func LocalCoordinateCodingOptions() *LocalCoordinateCodingOptionalParam

type LogisticRegressionOptionalParam

type LogisticRegressionOptionalParam struct {
	BatchSize        int
	DecisionBoundary float64
	InputModel       *logisticRegression
	Labels           *mat.Dense
	Lambda           float64
	MaxIterations    int
	Optimizer        string
	StepSize         float64
	Test             *mat.Dense
	Tolerance        float64
	Training         *mat.Dense
	Verbose          bool
}

func LogisticRegressionOptions

func LogisticRegressionOptions() *LogisticRegressionOptionalParam

type LshOptionalParam

type LshOptionalParam struct {
	BucketSize     int
	HashWidth      float64
	InputModel     *lshSearch
	K              int
	NumProbes      int
	Projections    int
	Query          *mat.Dense
	Reference      *mat.Dense
	SecondHashSize int
	Seed           int
	Tables         int
	TrueNeighbors  *mat.Dense
	Verbose        bool
}

func LshOptions

func LshOptions() *LshOptionalParam

type MeanShiftOptionalParam

type MeanShiftOptionalParam struct {
	ForceConvergence bool
	InPlace          bool
	LabelsOnly       bool
	MaxIterations    int
	Radius           float64
	Verbose          bool
}

func MeanShiftOptions

func MeanShiftOptions() *MeanShiftOptionalParam

type NbcOptionalParam

type NbcOptionalParam struct {
	IncrementalVariance bool
	InputModel          *nbcModel
	Labels              *mat.Dense
	Test                *mat.Dense
	Training            *mat.Dense
	Verbose             bool
}

func NbcOptions

func NbcOptions() *NbcOptionalParam

type NcaOptionalParam

type NcaOptionalParam struct {
	ArmijoConstant      float64
	BatchSize           int
	Labels              *mat.Dense
	LinearScan          bool
	MaxIterations       int
	MaxLineSearchTrials int
	MaxStep             float64
	MinStep             float64
	Normalize           bool
	NumBasis            int
	Optimizer           string
	Seed                int
	StepSize            float64
	Tolerance           float64
	Verbose             bool
	Wolfe               float64
}

func NcaOptions

func NcaOptions() *NcaOptionalParam

type NmfOptionalParam

type NmfOptionalParam struct {
	InitialH      *mat.Dense
	InitialW      *mat.Dense
	MaxIterations int
	MinResidue    float64
	Seed          int
	UpdateRules   string
	Verbose       bool
}

func NmfOptions

func NmfOptions() *NmfOptionalParam

type PcaOptionalParam

type PcaOptionalParam struct {
	DecompositionMethod string
	NewDimensionality   int
	Scale               bool
	VarToRetain         float64
	Verbose             bool
}

func PcaOptions

func PcaOptions() *PcaOptionalParam

type PerceptronOptionalParam

type PerceptronOptionalParam struct {
	InputModel    *perceptronModel
	Labels        *mat.Dense
	MaxIterations int
	Test          *mat.Dense
	Training      *mat.Dense
	Verbose       bool
}

func PerceptronOptions

func PerceptronOptions() *PerceptronOptionalParam

type PreprocessBinarizeOptionalParam

type PreprocessBinarizeOptionalParam struct {
	Dimension int
	Threshold float64
	Verbose   bool
}

func PreprocessBinarizeOptions

func PreprocessBinarizeOptions() *PreprocessBinarizeOptionalParam

type PreprocessDescribeOptionalParam

type PreprocessDescribeOptionalParam struct {
	Dimension  int
	Population bool
	Precision  int
	RowMajor   bool
	Verbose    bool
	Width      int
}

func PreprocessDescribeOptions

func PreprocessDescribeOptions() *PreprocessDescribeOptionalParam

type PreprocessOneHotEncodingOptionalParam

type PreprocessOneHotEncodingOptionalParam struct {
	Verbose bool
}

func PreprocessOneHotEncodingOptions

func PreprocessOneHotEncodingOptions() *PreprocessOneHotEncodingOptionalParam

type PreprocessScaleOptionalParam

type PreprocessScaleOptionalParam struct {
	Epsilon        float64
	InputModel     *scalingModel
	InverseScaling bool
	MaxValue       int
	MinValue       int
	ScalerMethod   string
	Seed           int
	Verbose        bool
}

func PreprocessScaleOptions

func PreprocessScaleOptions() *PreprocessScaleOptionalParam

type PreprocessSplitOptionalParam

type PreprocessSplitOptionalParam struct {
	InputLabels  *mat.Dense
	NoShuffle    bool
	Seed         int
	StratifyData bool
	TestRatio    float64
	Verbose      bool
}

func PreprocessSplitOptions

func PreprocessSplitOptions() *PreprocessSplitOptionalParam

type RadicalOptionalParam

type RadicalOptionalParam struct {
	Angles      int
	NoiseStdDev float64
	Objective   bool
	Replicates  int
	Seed        int
	Sweeps      int
	Verbose     bool
}

func RadicalOptions

func RadicalOptions() *RadicalOptionalParam

type RandomForestOptionalParam

type RandomForestOptionalParam struct {
	InputModel            *randomForestModel
	Labels                *mat.Dense
	MaximumDepth          int
	MinimumGainSplit      float64
	MinimumLeafSize       int
	NumTrees              int
	PrintTrainingAccuracy bool
	Seed                  int
	SubspaceDim           int
	Test                  *mat.Dense
	TestLabels            *mat.Dense
	Training              *mat.Dense
	Verbose               bool
	WarmStart             bool
}

func RandomForestOptions

func RandomForestOptions() *RandomForestOptionalParam

type SoftmaxRegressionOptionalParam

type SoftmaxRegressionOptionalParam struct {
	InputModel      *softmaxRegression
	Labels          *mat.Dense
	Lambda          float64
	MaxIterations   int
	NoIntercept     bool
	NumberOfClasses int
	Test            *mat.Dense
	TestLabels      *mat.Dense
	Training        *mat.Dense
	Verbose         bool
}

func SoftmaxRegressionOptions

func SoftmaxRegressionOptions() *SoftmaxRegressionOptionalParam

type SparseCodingOptionalParam

type SparseCodingOptionalParam struct {
	Atoms              int
	InitialDictionary  *mat.Dense
	InputModel         *sparseCoding
	Lambda1            float64
	Lambda2            float64
	MaxIterations      int
	NewtonTolerance    float64
	Normalize          bool
	ObjectiveTolerance float64
	Seed               int
	Test               *mat.Dense
	Training           *mat.Dense
	Verbose            bool
}

func SparseCodingOptions

func SparseCodingOptions() *SparseCodingOptionalParam

type TestGoBindingOptionalParam

type TestGoBindingOptionalParam struct {
	BuildModel      bool
	ColIn           *mat.Dense
	Flag1           bool
	Flag2           bool
	MatrixAndInfoIn *matrixWithInfo
	MatrixIn        *mat.Dense
	ModelIn         *gaussianKernel
	RowIn           *mat.Dense
	StrVectorIn     []string
	UcolIn          *mat.Dense
	UmatrixIn       *mat.Dense
	UrowIn          *mat.Dense
	VectorIn        []int
	Verbose         bool
}

func TestGoBindingOptions

func TestGoBindingOptions() *TestGoBindingOptionalParam
