metric

package v0.1.5

Published: Feb 13, 2026 License: BSD-3-Clause Imports: 7 Imported by: 4

README

metric

metric provides various similarity / distance metrics for comparing two tensors, operating on the tensor.Tensor standard data representation, using these standard function signatures:

type MetricFunc = func(a, b tensor.Tensor) tensor.Values
type MetricOutFunc = func(a, b tensor.Tensor, out tensor.Values) error

See the Cogent Lab Docs for full documentation.

The metric functions always operate on the outermost row dimension, and it is up to the caller to reshape the tensors to accomplish the desired results. The two tensors must have the same shape.

  • To obtain a single summary metric across all values, use tensor.As1D.

  • For RowMajor data that is naturally organized as a single outer rows dimension with the remaining inner dimensions comprising the cells, the results are the metric for each such cell, computed across the outer rows dimension. For the L2Norm metric, for example, each cell contains the L2Norm of the differences for that cell across all rows of the two tensors. See Matrix functions below for a function that computes the distances between each cell pattern and all the others, as a distance or similarity matrix.

  • Use tensor.NewRowCellsView to reshape any tensor into a 2D rows x cells shape, with the cells starting at a given dimension. Thus, any number of outer dimensions can be collapsed into the outer row dimension, and the remaining dimensions become the cells.
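For intuition about the rows x cells behavior, here is a plain-Go sketch operating on simple float64 matrices rather than tensor.Tensor (l2normPerCell is an illustrative helper, not part of this package):

```go
package main

import (
	"fmt"
	"math"
)

// l2normPerCell computes, for each cell (inner column), the square root of
// the sum over rows of squared differences between a and b, mirroring how
// the metric functions operate across the outermost row dimension.
// a and b are rows x cells matrices of identical shape.
func l2normPerCell(a, b [][]float64) []float64 {
	out := make([]float64, len(a[0]))
	for r := range a {
		for c := range a[r] {
			d := a[r][c] - b[r][c]
			out[c] += d * d
		}
	}
	for c := range out {
		out[c] = math.Sqrt(out[c])
	}
	return out
}

func main() {
	a := [][]float64{{1, 2}, {3, 4}}
	b := [][]float64{{1, 0}, {0, 4}}
	// cell 0: sqrt(0^2 + 3^2) = 3; cell 1: sqrt(2^2 + 0^2) = 2
	fmt.Println(l2normPerCell(a, b)) // [3 2]
}
```

Collapsing the input to a single column (one cell) yields a single summary metric, which is what tensor.As1D accomplishes in the real API.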

Metrics

Value increases with increasing distance (i.e., difference metric)
  • L2Norm: the square root of the sum of squares differences between tensor values.
  • SumSquares: the sum of squares differences between tensor values.
  • L1Norm: the sum of the absolute value of differences between tensor values.
  • Hamming: the sum of 1s for every element that is different, i.e., "city block" distance.
  • L2NormBinTol: the L2Norm of the differences between tensor values, with binary tolerance: differences < 0.5 are thresholded to 0.
  • SumSquaresBinTol: the SumSquares of the differences between tensor values, with binary tolerance: differences < 0.5 are thresholded to 0.
  • InvCosine: 1 - Cosine, which converts it to an Increasing metric where more different vectors have larger metric values.
  • InvCorrelation: 1 - Correlation, which converts it to an Increasing metric where more different vectors have larger metric values.
  • CrossEntropy: a standard measure of the difference between two probability distributions, reflecting the additional entropy (uncertainty) associated with measuring probabilities under distribution b when in fact they come from distribution a. It is also the entropy of a plus the Kullback-Leibler (KL) divergence of a from b. It is computed as: a * log(a/b) + (1-a) * log((1-a)/(1-b)).
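The per-element CrossEntropy formula can be checked with a minimal plain-Go sketch (crossEntropy is an illustrative helper, not the package function; it assumes all values lie strictly in (0, 1) so the logs stay finite):

```go
package main

import (
	"fmt"
	"math"
)

// crossEntropy sums the per-element quantity
// a*log(a/b) + (1-a)*log((1-a)/(1-b)) over two equal-length slices.
// Values must lie strictly in (0, 1) to keep the logs finite.
func crossEntropy(a, b []float64) float64 {
	ce := 0.0
	for i := range a {
		av, bv := a[i], b[i]
		ce += av*math.Log(av/bv) + (1-av)*math.Log((1-av)/(1-bv))
	}
	return ce
}

func main() {
	// Identical distributions give 0 under this formula...
	fmt.Println(crossEntropy([]float64{0.5, 0.5}, []float64{0.5, 0.5})) // 0
	// ...and the value grows as b diverges from a.
	fmt.Println(crossEntropy([]float64{0.9, 0.1}, []float64{0.5, 0.5}) > 0) // true
}
```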
Value decreases with increasing distance (i.e., similarity metric)
  • DotProduct: the sum of the co-products of the tensor values.
  • Covariance: the co-variance between two vectors, i.e., the mean of the co-product of each vector element minus the mean of that vector: cov(A,B) = E[(A - E(A))(B - E(B))].
  • Correlation: the standardized Covariance in the range (-1..1), computed as the mean of the co-product of each vector element minus the mean of that vector, normalized by the product of their standard deviations: cor(A,B) = E[(A - E(A))(B - E(B))] / sigma(A) sigma(B). Equivalent to the Cosine of mean-normalized vectors.
  • Cosine: the high-dimensional angle between two vectors, in range (-1..1) as the normalized DotProduct: inner product / sqrt(ssA * ssB). See also Correlation.
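The Cosine formula, and the stated equivalence of Correlation to the Cosine of mean-normalized vectors, can be sketched in plain Go (cosine, center, and correlation are illustrative helpers, not the package functions):

```go
package main

import (
	"fmt"
	"math"
)

// cosine is the normalized dot product: dot(a,b) / sqrt(ss(a) * ss(b)).
func cosine(a, b []float64) float64 {
	var dot, ssA, ssB float64
	for i := range a {
		dot += a[i] * b[i]
		ssA += a[i] * a[i]
		ssB += b[i] * b[i]
	}
	return dot / math.Sqrt(ssA*ssB)
}

// center subtracts the mean from each element.
func center(x []float64) []float64 {
	m := 0.0
	for _, v := range x {
		m += v
	}
	m /= float64(len(x))
	out := make([]float64, len(x))
	for i, v := range x {
		out[i] = v - m
	}
	return out
}

// correlation is the cosine of the mean-centered vectors,
// per the equivalence noted above.
func correlation(a, b []float64) float64 {
	return cosine(center(a), center(b))
}

func main() {
	fmt.Println(cosine([]float64{1, 0}, []float64{0, 1}))            // 0 (orthogonal)
	fmt.Println(correlation([]float64{1, 2, 3}, []float64{4, 5, 6})) // 1 (perfectly correlated)
}
```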

General information about these functions:

The output must be a tensor.Values tensor, and it is automatically shaped to hold the stat value(s) for the "cells" in higher-dimensional tensors, and a single scalar value for a 1D input tensor.

All metric functions skip over NaNs, treating them as missing values.

Metric functions cannot be computed in parallel, e.g., using VectorizeThreaded or GPU, due to shared writing to the same output values. Special implementations are required if that is needed.

Matrix functions

  • Matrix computes a distance / similarity matrix using a metric function, operating on the n-dimensional sub-space patterns on a given tensor (i.e., a row-wise list of patterns). The result is a square rows x rows matrix where each cell is the metric value for the pattern at the given row. The diagonal contains the self-similarity metric.

  • CrossMatrix is like Matrix except it compares two separate lists of patterns.

  • CovarianceMatrix computes the covariance matrix for row-wise lists of patterns, where the result is a square matrix of cells x cells size ("cells" is number of elements in the patterns per row), and each value represents the extent to which value of a given cell covaries across the rows of the tensor with the value of another cell. For example, if the rows represent time, then the covariance matrix represents the extent to which the patterns tend to move in the same way over time.

    See matrix for EigSym and SVD functions that compute the "principal components" (PCA) of covariance, in terms of the eigenvectors and corresponding eigenvalues of this matrix. The eigenvector (component) with the largest eigenvalue is the "direction" in n-dimensional pattern space along which there is the greatest variance in the patterns across the rows.

    There is also a matrix.ProjectOnMatrixColumn convenience function for projecting data along a vector extracted from a matrix, which allows you to project data along an eigenvector from the PCA or SVD functions. By doing this projection along the strongest 2 eigenvectors (those with the largest eigenvalues), you can visualize high-dimensional data in a 2D plot, which typically reveals important aspects of the structure of the underlying high-dimensional data, which is otherwise hard to see given the difficulty in visualizing high-dimensional spaces.
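The cells x cells covariance computation can be sketched in plain Go on a rows x cells matrix (covarianceMatrix is an illustrative helper, not the package function; it divides by n rather than n-1):

```go
package main

import "fmt"

// covarianceMatrix builds the cells x cells covariance matrix for a
// rows x cells data matrix: entry (i, j) is the covariance of cell i
// with cell j across the rows. Only the lower triangle is computed and
// then mirrored, since the matrix is symmetric.
func covarianceMatrix(data [][]float64) [][]float64 {
	rows := float64(len(data))
	cells := len(data[0])
	means := make([]float64, cells)
	for _, row := range data {
		for c, v := range row {
			means[c] += v
		}
	}
	for c := range means {
		means[c] /= rows
	}
	cov := make([][]float64, cells)
	for i := range cov {
		cov[i] = make([]float64, cells)
	}
	for i := 0; i < cells; i++ {
		for j := 0; j <= i; j++ {
			s := 0.0
			for _, row := range data {
				s += (row[i] - means[i]) * (row[j] - means[j])
			}
			cov[i][j] = s / rows
			cov[j][i] = cov[i][j]
		}
	}
	return cov
}

func main() {
	// Two cells that move together perfectly covary maximally.
	data := [][]float64{{0, 0}, {2, 2}}
	fmt.Println(covarianceMatrix(data)) // [[1 1] [1 1]]
}
```

Feeding a matrix like this into an eigenvalue decomposition is what yields the principal components described above.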

Documentation

Overview

Package metric provides various similarity / distance metrics for comparing tensors, operating on the tensor.Tensor standard data representation.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ClosestRow

func ClosestRow(fun any, probe, vocab tensor.Tensor) tensor.Values

ClosestRow returns the closest fit between probe pattern and patterns in a "vocabulary" tensor with outermost row dimension, using given metric function, which must fit the MetricFunc signature. The metric *must have the Increasing property*, i.e., larger = further. Output is a 1D tensor with 2 elements: the row index and metric value for that row. Note: this does _not_ use any existing Indexes for the probe, but does for the vocab, and the returned index is the logical index into any existing Indexes.
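The search over vocabulary rows can be sketched in plain Go (closestRow and sumSquares are illustrative helpers, not the package functions; sumSquares stands in for any Increasing metric):

```go
package main

import (
	"fmt"
	"math"
)

// sumSquares is an Increasing (distance-like) metric: larger = further.
func sumSquares(a, b []float64) float64 {
	s := 0.0
	for i := range a {
		d := a[i] - b[i]
		s += d * d
	}
	return s
}

// closestRow scans the vocab rows and returns the index and metric value
// of the row with the smallest metric relative to probe.
func closestRow(metric func(a, b []float64) float64, probe []float64, vocab [][]float64) (int, float64) {
	best, bestVal := -1, math.Inf(1)
	for i, row := range vocab {
		if v := metric(probe, row); v < bestVal {
			best, bestVal = i, v
		}
	}
	return best, bestVal
}

func main() {
	vocab := [][]float64{{0, 0}, {1, 1}, {5, 5}}
	idx, val := closestRow(sumSquares, []float64{0.9, 1.2}, vocab)
	fmt.Println(idx, val) // row 1 is closest
}
```

Note why the metric must be Increasing: the scan keeps the minimum value, so a similarity metric like Cosine (where larger = closer) would select the worst match; convert it with InvCosine first.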

func ClosestRowOut

func ClosestRowOut(fun any, probe, vocab tensor.Tensor, out tensor.Values) error

ClosestRowOut returns the closest fit between probe pattern and patterns in a "vocabulary" tensor with outermost row dimension, using given metric function, which must fit the MetricFunc signature. The metric *must have the Increasing property*, i.e., larger = further. Output is a 1D tensor with 2 elements: the row index and metric value for that row. Note: this does _not_ use any existing Indexes for the probe, but does for the vocab, and the returned index is the logical index into any existing Indexes.

func Correlation

func Correlation(a, b tensor.Tensor) tensor.Values

Correlation computes the correlation between two vectors, in range (-1..1) as the mean of the co-product of each vector element minus the mean of that vector, normalized by the product of their standard deviations: cor(A,B) = E[(A - E(A))(B - E(B))] / sigma(A) sigma(B). (i.e., the standardized Covariance). Equivalent to the Cosine of mean-normalized vectors. See MetricFunc for general information.

func CorrelationOut

func CorrelationOut(a, b tensor.Tensor, out tensor.Values) error

CorrelationOut computes the correlation between two vectors, in range (-1..1) as the mean of the co-product of each vector element minus the mean of that vector, normalized by the product of their standard deviations: cor(A,B) = E[(A - E(A))(B - E(B))] / sigma(A) sigma(B). (i.e., the standardized Covariance). Equivalent to the Cosine of mean-normalized vectors.

func CorrelationOut64

func CorrelationOut64(a, b tensor.Tensor, out tensor.Values) (*tensor.Float64, error)

CorrelationOut64 computes the correlation between two vectors, in range (-1..1) as the mean of the co-product of each vector element minus the mean of that vector, normalized by the product of their standard deviations: cor(A,B) = E[(A - E(A))(B - E(B))] / sigma(A) sigma(B). (i.e., the standardized covariance). Equivalent to the cosine of mean-normalized vectors. Returns the Float64 output values for subsequent use.

func Cosine

func Cosine(a, b tensor.Tensor) tensor.Values

Cosine computes the high-dimensional angle between two vectors, in range (-1..1) as the normalized dot product: dot product / sqrt(ssA * ssB). See also Correlation. See MetricFunc for general information.

func CosineOut

func CosineOut(a, b tensor.Tensor, out tensor.Values) error

CosineOut computes the high-dimensional angle between two vectors, in range (-1..1) as the normalized dot product: dot product / sqrt(ssA * ssB). See also Correlation.

func CosineOut64

func CosineOut64(a, b tensor.Tensor, out tensor.Values) (*tensor.Float64, error)

CosineOut64 computes the high-dimensional angle between two vectors, in range (-1..1) as the normalized DotProduct: dot product / sqrt(ssA * ssB). See also Correlation.

func Covariance

func Covariance(a, b tensor.Tensor) tensor.Values

Covariance computes the co-variance between two vectors, i.e., the mean of the co-product of each vector element minus the mean of that vector: cov(A,B) = E[(A - E(A))(B - E(B))]. See MetricFunc for general information.

func CovarianceMatrix

func CovarianceMatrix(fun any, in tensor.Tensor) tensor.Values

CovarianceMatrix generates the cells x cells square covariance matrix for all per-row cells of the given higher-dimensional input tensor, which must have at least 2 dimensions: the outermost rows, and within that, 1+dimensional patterns (cells). Each value in the resulting matrix represents the extent to which the value of a given cell covaries across the rows of the tensor with the value of another cell. Uses the given metric function, typically Covariance or Correlation; the metric function must have the MetricFunc signature. Use Covariance if the variables have similar overall scaling, which is typical in neural network models, and use Correlation if they are on very different scales, because it effectively rescales the data. The resulting matrix can be used as the input to PCA or SVD eigenvalue decomposition.

func CovarianceMatrixOut

func CovarianceMatrixOut(fun any, in tensor.Tensor, out tensor.Values) error

CovarianceMatrixOut generates the cells x cells square covariance matrix for all per-row cells of the given higher-dimensional input tensor, which must have at least 2 dimensions: the outermost rows, and within that, 1+dimensional patterns (cells). Each value in the resulting matrix represents the extent to which the value of a given cell covaries across the rows of the tensor with the value of another cell. Uses the given metric function, typically Covariance or Correlation; the metric function must have the MetricFunc signature. Use Covariance if the variables have similar overall scaling, which is typical in neural network models, and use Correlation if they are on very different scales, because it effectively rescales the data. The resulting matrix can be used as the input to PCA or SVD eigenvalue decomposition.

func CovarianceOut

func CovarianceOut(a, b tensor.Tensor, out tensor.Values) error

CovarianceOut computes the co-variance between two vectors, i.e., the mean of the co-product of each vector element minus the mean of that vector: cov(A,B) = E[(A - E(A))(B - E(B))].

func CrossEntropy

func CrossEntropy(a, b tensor.Tensor) tensor.Values

CrossEntropy is a standard measure of the difference between two probability distributions, reflecting the additional entropy (uncertainty) associated with measuring probabilities under distribution b when in fact they come from distribution a. It is also the entropy of a plus the Kullback-Leibler (KL) divergence of a from b. It is computed as: a * log(a/b) + (1-a) * log((1-a)/(1-b)). See MetricFunc for general information.

func CrossEntropyOut

func CrossEntropyOut(a, b tensor.Tensor, out tensor.Values) error

CrossEntropyOut is a standard measure of the difference between two probability distributions, reflecting the additional entropy (uncertainty) associated with measuring probabilities under distribution b when in fact they come from distribution a. It is also the entropy of a plus the Kullback-Leibler (KL) divergence of a from b. It is computed as: a * log(a/b) + (1-a) * log((1-a)/(1-b)). See MetricOutFunc for general information.

func CrossMatrix

func CrossMatrix(fun any, a, b tensor.Tensor) tensor.Values

CrossMatrix computes the distance / similarity matrix between two different sets of patterns in the two input tensors, where the patterns are in the sub-space cells of the tensors, which must have at least 2 dimensions: the outermost rows, and within that, 1+dimensional patterns that the given distance metric function is applied to, with the results filling in the cells of the output matrix. The metric function must have the MetricFunc signature. The rows of the output matrix are the rows of the first input tensor, and the columns of the output are the rows of the second input tensor.

func CrossMatrixOut

func CrossMatrixOut(fun any, a, b tensor.Tensor, out tensor.Values) error

CrossMatrixOut computes the distance / similarity matrix between two different sets of patterns in the two input tensors, where the patterns are in the sub-space cells of the tensors, which must have at least 2 dimensions: the outermost rows, and within that, 1+dimensional patterns that the given distance metric function is applied to, with the results filling in the cells of the output matrix. The metric function must have the MetricFunc signature. The rows of the output matrix are the rows of the first input tensor, and the columns of the output are the rows of the second input tensor.

func DotProduct

func DotProduct(a, b tensor.Tensor) tensor.Values

DotProduct computes the sum of the element-wise products of the two tensors (aka the inner product). See MetricFunc for general information.

func DotProductOut

func DotProductOut(a, b tensor.Tensor, out tensor.Values) error

DotProductOut computes the sum of the element-wise products of the two tensors (aka the inner product). See MetricOutFunc for general information.

func Hamming

func Hamming(a, b tensor.Tensor) tensor.Values

Hamming computes the sum of 1s for every element that is different, i.e., "city block" distance. See MetricFunc for general information.

func HammingOut

func HammingOut(a, b tensor.Tensor, out tensor.Values) error

HammingOut computes the sum of 1s for every element that is different, i.e., "city block" distance. See MetricOutFunc for general information.

func InvCorrelation

func InvCorrelation(a, b tensor.Tensor) tensor.Values

InvCorrelation computes 1 minus the correlation between two vectors, in range (-1..1) as the mean of the co-product of each vector element minus the mean of that vector, normalized by the product of their standard deviations: cor(A,B) = E[(A - E(A))(B - E(B))] / sigma(A) sigma(B). (i.e., the standardized covariance). Equivalent to the Cosine of mean-normalized vectors. This is useful for a difference measure instead of similarity, where more different vectors have larger metric values. See MetricFunc for general information.

func InvCorrelationOut

func InvCorrelationOut(a, b tensor.Tensor, out tensor.Values) error

InvCorrelationOut computes 1 minus the correlation between two vectors, in range (-1..1) as the mean of the co-product of each vector element minus the mean of that vector, normalized by the product of their standard deviations: cor(A,B) = E[(A - E(A))(B - E(B))] / sigma(A) sigma(B). (i.e., the standardized covariance). Equivalent to the Cosine of mean-normalized vectors. This is useful for a difference measure instead of similarity, where more different vectors have larger metric values.

func InvCosine

func InvCosine(a, b tensor.Tensor) tensor.Values

InvCosine computes 1 minus the cosine between two vectors, in range (-1..1) as the normalized dot product: dot product / sqrt(ssA * ssB). This is useful for a difference measure instead of similarity, where more different vectors have larger metric values. See MetricFunc for general information.

func InvCosineOut

func InvCosineOut(a, b tensor.Tensor, out tensor.Values) error

InvCosineOut computes 1 minus the cosine between two vectors, in range (-1..1) as the normalized dot product: dot product / sqrt(ssA * ssB). This is useful for a difference measure instead of similarity, where more different vectors have larger metric values.

func L1Norm

func L1Norm(a, b tensor.Tensor) tensor.Values

L1Norm computes the sum of the absolute value of differences between the tensor values, the L1 Norm. See MetricFunc for general information.

func L1NormOut

func L1NormOut(a, b tensor.Tensor, out tensor.Values) error

L1NormOut computes the sum of the absolute value of differences between the tensor values, the L1 Norm. See MetricOutFunc for general information.

func L2Norm

func L2Norm(a, b tensor.Tensor) tensor.Values

L2Norm computes the L2 Norm: square root of the sum of squares differences between tensor values, aka the Euclidean distance. See MetricFunc for general information.

func L2NormBinTol

func L2NormBinTol(a, b tensor.Tensor) tensor.Values

L2NormBinTol computes the L2 Norm, the square root of the sum of squares differences between tensor values (aka Euclidean distance), with binary tolerance: differences < 0.5 are thresholded to 0. See MetricFunc for general information.

func L2NormBinTolOut

func L2NormBinTolOut(a, b tensor.Tensor, out tensor.Values) error

L2NormBinTolOut computes the L2 Norm, the square root of the sum of squares differences between tensor values (aka Euclidean distance), with binary tolerance: differences < 0.5 are thresholded to 0.

func L2NormOut

func L2NormOut(a, b tensor.Tensor, out tensor.Values) error

L2NormOut computes the L2 Norm: square root of the sum of squares differences between tensor values, aka the Euclidean distance.

func Matrix

func Matrix(fun any, in tensor.Tensor) tensor.Values

Matrix computes the rows x rows square distance / similarity matrix between the patterns for each row of the given higher dimensional input tensor, which must have at least 2 dimensions: the outermost rows, and within that, 1+dimensional patterns (cells). Use tensor.NewRowCellsView to organize data into the desired split between a 1D outermost Row dimension and the remaining Cells dimension. The metric function must have the MetricFunc signature. The results fill in the elements of the output matrix, which is symmetric, and only the lower triangular part is computed, with results copied to the upper triangular region, for maximum efficiency.

func MatrixOut

func MatrixOut(fun any, in tensor.Tensor, out tensor.Values) error

MatrixOut computes the rows x rows square distance / similarity matrix between the patterns for each row of the given higher dimensional input tensor, which must have at least 2 dimensions: the outermost rows, and within that, 1+dimensional patterns (cells). Use tensor.NewRowCellsView to organize data into the desired split between a 1D outermost Row dimension and the remaining Cells dimension. The metric function must have the MetricFunc signature. The results fill in the elements of the output matrix, which is symmetric, and only the lower triangular part is computed, with results copied to the upper triangular region, for maximum efficiency.

func SumSquares

func SumSquares(a, b tensor.Tensor) tensor.Values

SumSquares computes the sum of squares differences between tensor values. See MetricFunc for general information.

func SumSquaresBinTol

func SumSquaresBinTol(a, b tensor.Tensor) tensor.Values

SumSquaresBinTol computes the sum of squares differences between tensor values, with binary tolerance: differences < 0.5 are thresholded to 0. See MetricFunc for general information.

func SumSquaresBinTolOut

func SumSquaresBinTolOut(a, b tensor.Tensor, out tensor.Values) error

SumSquaresBinTolOut computes the sum of squares differences between tensor values, with binary tolerance: differences < 0.5 are thresholded to 0.

func SumSquaresBinTolScaleOut64

func SumSquaresBinTolScaleOut64(a, b tensor.Tensor) (scale64, ss64 *tensor.Float64, err error)

SumSquaresBinTolScaleOut64 computes the sum of squares differences between tensor values, with binary tolerance: differences < 0.5 are thresholded to 0, returning scale and ss factors aggregated separately for better numerical stability, per BLAS.

func SumSquaresOut

func SumSquaresOut(a, b tensor.Tensor, out tensor.Values) error

SumSquaresOut computes the sum of squares differences between tensor values. See MetricOutFunc for general information.

func SumSquaresOut64

func SumSquaresOut64(a, b tensor.Tensor, out tensor.Values) (*tensor.Float64, error)

SumSquaresOut64 computes the sum of squares differences between tensor values, and returns the Float64 output values for use in subsequent computations.

func SumSquaresScaleOut64

func SumSquaresScaleOut64(a, b tensor.Tensor) (scale64, ss64 *tensor.Float64, err error)

SumSquaresScaleOut64 computes the sum of squares differences between tensor values, returning scale and ss factors aggregated separately for better numerical stability, per BLAS.

func Vectorize2Out64

func Vectorize2Out64(a, b tensor.Tensor, iniX, iniY float64, fun func(a, b, ox, oy float64) (float64, float64)) (ox64, oy64 *tensor.Float64)

Vectorize2Out64 is a version of VectorizeOut64 that separately aggregates two output values, x and y as tensor.Float64.

func Vectorize3Out64

func Vectorize3Out64(a, b tensor.Tensor, iniX, iniY, iniZ float64, fun func(a, b, ox, oy, oz float64) (float64, float64, float64)) (ox64, oy64, oz64 *tensor.Float64)

Vectorize3Out64 is a version of VectorizeOut64 that has 3 outputs instead of 1.

func VectorizeOut64

func VectorizeOut64(a, b tensor.Tensor, out tensor.Values, ini float64, fun func(a, b, agg float64) float64) *tensor.Float64

VectorizeOut64 is the general compute function for metric. This version makes a Float64 output tensor for aggregating and computing values, and then copies the results back to the original output. This allows metric functions to operate directly on integer valued inputs and produce sensible results. It returns the Float64 output tensor for further processing as needed. a and b are already enforced to be the same shape.

func VectorizePre3Out64

func VectorizePre3Out64(a, b tensor.Tensor, iniX, iniY, iniZ float64, preA, preB *tensor.Float64, fun func(a, b, preA, preB, ox, oy, oz float64) (float64, float64, float64)) (ox64, oy64, oz64 *tensor.Float64)

VectorizePre3Out64 is a version of VectorizePreOut64 that takes additional tensor.Float64 inputs of pre-computed values, e.g., the means of each output cell, and has 3 outputs instead of 1.

func VectorizePreOut64

func VectorizePreOut64(a, b tensor.Tensor, out tensor.Values, ini float64, preA, preB *tensor.Float64, fun func(a, b, preA, preB, agg float64) float64) *tensor.Float64

VectorizePreOut64 is a version of VectorizeOut64 that takes additional tensor.Float64 inputs of pre-computed values, e.g., the means of each output cell.

Types

type MetricFunc

type MetricFunc = func(a, b tensor.Tensor) tensor.Values

MetricFunc is the function signature for a metric function, which is computed over the outermost row dimension, and the output is the shape of the remaining inner cells (a scalar for 1D inputs). Use tensor.As1D, tensor.NewRowCellsView, tensor.Cells1D etc to reshape and reslice the data as needed. All metric functions skip over NaNs, treating them as missing values, and use the min of the length of the two tensors. Metric functions cannot be computed in parallel, e.g., using VectorizeThreaded or GPU, due to shared writing to the same output values. Special implementations are required if that is needed.

func AsMetricFunc

func AsMetricFunc(fun any) (MetricFunc, error)

AsMetricFunc returns given function as a MetricFunc function, or an error if it does not fit that signature.

type MetricOutFunc

type MetricOutFunc = func(a, b tensor.Tensor, out tensor.Values) error

MetricOutFunc is the function signature for a metric function, that takes output values as the final argument. See MetricFunc. This version is for computationally demanding cases and saves reallocation of output.

func AsMetricOutFunc

func AsMetricOutFunc(fun any) (MetricOutFunc, error)

AsMetricOutFunc returns given function as a MetricOutFunc function, or an error if it does not fit that signature.

type Metrics

type Metrics int32 //enums:enum -trim-prefix Metric

Metrics enumerates the standard metric functions.

const (
	// L2Norm is the square root of the sum of squares differences
	// between tensor values, aka the Euclidean distance.
	MetricL2Norm Metrics = iota

	// SumSquares is the sum of squares differences between tensor values.
	MetricSumSquares

	// L1Norm is the sum of the absolute value of differences
	// between tensor values, the L1 Norm.
	MetricL1Norm

	// Hamming is the sum of 1s for every element that is different,
	// i.e., "city block" distance.
	MetricHamming

	// L2NormBinTol is the [L2Norm] square root of the sum of squares
	// differences between tensor values, with binary tolerance:
	// differences < 0.5 are thresholded to 0.
	MetricL2NormBinTol

	// SumSquaresBinTol is the [SumSquares] differences between tensor values,
	// with binary tolerance: differences < 0.5 are thresholded to 0.
	MetricSumSquaresBinTol

	// InvCosine is 1-[Cosine], which is useful to convert it
	// to an Increasing metric where more different vectors have larger metric values.
	MetricInvCosine

	// InvCorrelation is 1-[Correlation], which is useful to convert it
	// to an Increasing metric where more different vectors have larger metric values.
	MetricInvCorrelation

	// CrossEntropy is a standard measure of the difference between two
	// probability distributions, reflecting the additional entropy (uncertainty)
	// associated with measuring probabilities under distribution b when in fact
	// they come from distribution a. It is also the entropy of a plus the
	// Kullback-Leibler (KL) divergence of a from b. It is computed as:
	// a * log(a/b) + (1-a) * log((1-a)/(1-b)).
	MetricCrossEntropy

	// DotProduct is the sum of the co-products of the tensor values.
	MetricDotProduct

	// Covariance is co-variance between two vectors,
	// i.e., the mean of the co-product of each vector element minus
	// the mean of that vector: cov(A,B) = E[(A - E(A))(B - E(B))].
	MetricCovariance

	// Correlation is the standardized [Covariance] in the range (-1..1),
	// computed as the mean of the co-product of each vector
	// element minus the mean of that vector, normalized by the product of their
	// standard deviations: cor(A,B) = E[(A - E(A))(B - E(B))] / sigma(A) sigma(B).
	// Equivalent to the [Cosine] of mean-normalized vectors.
	MetricCorrelation

	// Cosine is high-dimensional angle between two vectors,
	// in range (-1..1) as the normalized [DotProduct]:
	// inner product / sqrt(ssA * ssB).  See also [Correlation].
	MetricCosine
)
const MetricsN Metrics = 13

MetricsN is the highest valid value for type Metrics, plus one.

func MetricsValues

func MetricsValues() []Metrics

MetricsValues returns all possible values for the type Metrics.

func (Metrics) Call

func (m Metrics) Call(a, b tensor.Tensor) tensor.Values

Call calls a standard Metrics enum function on the given tensors, returning the output values.

func (Metrics) Desc

func (i Metrics) Desc() string

Desc returns the description of the Metrics value.

func (Metrics) Func

func (m Metrics) Func() MetricFunc

Func returns function for given metric.

func (Metrics) FuncName

func (m Metrics) FuncName() string

FuncName returns the package-qualified function name to use in tensor.Call to call this function.

func (Metrics) Increasing

func (m Metrics) Increasing() bool

Increasing returns true if the distance metric is such that metric values increase as a function of distance (e.g., L2Norm), and false if metric values decrease as a function of distance (e.g., Cosine, Correlation).

func (Metrics) Int64

func (i Metrics) Int64() int64

Int64 returns the Metrics value as an int64.

func (Metrics) MarshalText

func (i Metrics) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*Metrics) SetInt64

func (i *Metrics) SetInt64(in int64)

SetInt64 sets the Metrics value from an int64.

func (*Metrics) SetString

func (i *Metrics) SetString(s string) error

SetString sets the Metrics value from its string representation, and returns an error if the string is invalid.

func (Metrics) String

func (i Metrics) String() string

String returns the string representation of this Metrics value.

func (*Metrics) UnmarshalText

func (i *Metrics) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (Metrics) Values

func (i Metrics) Values() []enums.Enum

Values returns all possible values for the type Metrics.
