metrics

package
v0.0.0-...-054d88b
Published: Feb 14, 2021 License: NCSA Imports: 10 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func AbsoluteError

func AbsoluteError(a, b []float64) []float64

AbsoluteError computes the elementwise absolute error between two vectors.
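The package source is not shown on this page; a minimal sketch consistent with the doc comment (local name absoluteError, assuming equal-length inputs) might look like:

```go
package main

import (
	"fmt"
	"math"
)

// absoluteError returns |a_i - b_i| for each index i.
// It assumes len(a) == len(b).
func absoluteError(a, b []float64) []float64 {
	out := make([]float64, len(a))
	for i := range a {
		out[i] = math.Abs(a[i] - b[i])
	}
	return out
}

func main() {
	fmt.Println(absoluteError([]float64{1, 5, 2}, []float64{2, 3, 2})) // [1 2 0]
}
```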

func Accuracy

func Accuracy(data []metrics) float64

Accuracy = (TP + TN) / (TP + TN + FP + FN)

func Bhattacharyya

func Bhattacharyya(p, q []float64) float64

Bhattacharyya computes the distance between the probability distributions p and q given by:

-\ln ( \sum_i \sqrt{p_i q_i} )

The lengths of p and q must be equal. It is assumed that p and q sum to 1.

func BoundingBoxJaccard

func BoundingBoxJaccard(boxA, boxB *dlframework.BoundingBox) float64

func Broadcast

func Broadcast(a float64, len int) []float64

func CDF

func CDF(data, edges []float64) (values []float64)

CDF calculates an empirical cumulative distribution function. The granularity of the function is specified by a set of edges; see Histogram.

func ClassificationTop1

func ClassificationTop1(features *dlframework.Features, expectedLabelIndex int) bool

ClassificationTop1 reports whether the top predicted feature matches the expected label index.

func ClassificationTop5

func ClassificationTop5(features *dlframework.Features, expectedLabelIndex int) bool

ClassificationTop5 reports whether the expected label index appears among the top five predicted features.

func Correlation

func Correlation(x, y, weights []float64) float64

Correlation returns the weighted correlation between the samples of x and y with the given means.

sum_i {w_i (x_i - meanX) * (y_i - meanY)} / (stdX * stdY)

The lengths of x and y must be equal. If weights is nil then all of the weights are 1. If weights is not nil, then len(x) must equal len(weights).

func Covariance

func Covariance(x, y, weights []float64) float64

Covariance returns the weighted covariance between the samples of x and y.

sum_i {w_i (x_i - meanX) * (y_i - meanY)} / (sum_j {w_j} - 1)

The lengths of x and y must be equal. If weights is nil then all of the weights are 1. If weights is not nil, then len(x) must equal len(weights).
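The doc comment above can be realized with two passes: one for the weighted means, one for the weighted cross-deviations. This is a sketch under the stated nil-weights convention, not necessarily the package's code:

```go
package main

import "fmt"

// covariance computes the weighted sample covariance of x and y:
// sum_i w_i*(x_i-meanX)*(y_i-meanY) / (sum_j w_j - 1).
// A nil weights slice is treated as all ones.
func covariance(x, y, weights []float64) float64 {
	if weights == nil {
		weights = make([]float64, len(x))
		for i := range weights {
			weights[i] = 1
		}
	}
	var wsum, mx, my float64
	for i := range x {
		wsum += weights[i]
		mx += weights[i] * x[i]
		my += weights[i] * y[i]
	}
	mx /= wsum
	my /= wsum

	var cov float64
	for i := range x {
		cov += weights[i] * (x[i] - mx) * (y[i] - my)
	}
	return cov / (wsum - 1)
}

func main() {
	x := []float64{1, 2, 3}
	y := []float64{2, 4, 6} // y = 2x, so cov(x, y) = 2 * var(x) = 2
	fmt.Println(covariance(x, y, nil))
}
```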

func Edges

func Edges(data ...[]float64) []float64

Edges returns sorted unique elements of a number of data sets, ensuring that the first and last elements are -∞ and +∞, respectively.

func Expectation

func Expectation(data []float64) float64

Expectation computes an estimate of the population mean from a finite sample.

func F1Score

func F1Score(data []metrics) float64

F1Score = 2 * [(precision*recall) / (precision + recall)]
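The formula is the harmonic mean of precision and recall. A minimal sketch of just that arithmetic (taking precision and recall as plain float64 values rather than the package's metrics slice):

```go
package main

import "fmt"

// f1 is the harmonic mean of precision and recall,
// with a guard against division by zero.
func f1(precision, recall float64) float64 {
	if precision+recall == 0 {
		return 0
	}
	return 2 * precision * recall / (precision + recall)
}

func main() {
	fmt.Println(f1(0.5, 0.5)) // 0.5
	fmt.Println(f1(1.0, 0.0)) // 0: F1 collapses if either component is 0
}
```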

func Fpr

func Fpr(data []metrics) float64

FPR = FP / non-monitored elements = (FPP + FNP) / (TN + FNP)

func Hellinger

func Hellinger(p, q []float64) float64

Hellinger computes the distance between the probability distributions p and q given by:

\sqrt{ 1 - \sum_i \sqrt{p_i q_i} }

The lengths of p and q must be equal. It is assumed that p and q sum to 1.

func Histogram

func Histogram(data []float64, edges []float64) (bins []uint, total uint)

Histogram counts the number of points that fall into each of the bins specified by a set of edges. For n edges, the number of bins is (n-1). The left endpoint of a bin is assumed to belong to the bin, while the right endpoint is not.
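The half-open-bin convention can be sketched as follows (a simple linear scan; the package may use a faster search over sorted edges):

```go
package main

import "fmt"

// histogram counts data points per bin, where bin i covers the
// half-open interval [edges[i], edges[i+1]).
// It assumes edges are sorted in ascending order.
func histogram(data, edges []float64) (bins []uint, total uint) {
	bins = make([]uint, len(edges)-1)
	for _, x := range data {
		for i := 0; i < len(bins); i++ {
			if x >= edges[i] && x < edges[i+1] {
				bins[i]++
				total++
				break
			}
		}
	}
	return bins, total
}

func main() {
	bins, total := histogram([]float64{0.1, 0.5, 0.5, 2.5}, []float64{0, 1, 2, 3})
	fmt.Println(bins, total) // [3 0 1] 4
}
```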

func IntersectionOverUnion

func IntersectionOverUnion(featA, featB *dlframework.Feature) float64

func Jaccard

func Jaccard(featA, featB *dlframework.Feature) float64

func JensenShannon

func JensenShannon(p, q []float64) float64

JensenShannon computes the Jensen-Shannon divergence between the distributions p and q. The Jensen-Shannon divergence is defined as

m = 0.5 * (p + q)
JS(p, q) = 0.5 ( KL(p, m) + KL(q, m) )

Unlike the Kullback-Leibler divergence, the Jensen-Shannon distance is symmetric. The value is between 0 and ln(2).

func KolmogorovSmirnov

func KolmogorovSmirnov(data1, data2 []float64) float64

KolmogorovSmirnov computes the Kolmogorov–Smirnov statistic for two samples.

https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test

func KullbackLeibler

func KullbackLeibler(p, q []float64) float64

KullbackLeibler computes the Kullback-Leibler distance between the distributions p and q. The natural logarithm is used.

sum_i(p_i * log(p_i / q_i))

Note that the Kullback-Leibler distance is not symmetric; KullbackLeibler(p,q) != KullbackLeibler(q,p)

func L2

func L2(x, y []float64) float64

L2 computes the Euclidean distance between two vectors.

func Mean

func Mean(x []float64) float64

Mean computes the arithmetic mean of the data set.

sum_i {x_i} / n

func MeanSquaredError

func MeanSquaredError(a, b []float64) float64

func MeanSquaredPercentageError

func MeanSquaredPercentageError(y, yhat []float64) float64

MSPE computes the mean-square-percentage error.

func NormalizedRootMeanSquaredError

func NormalizedRootMeanSquaredError(y, yhat []float64) float64

NRMSE computes the normalized root-mean-square error.

https://en.wikipedia.org/wiki/Root-mean-square_deviation#Normalized_root-mean-square_deviation

func PDF

func PDF(data, edges []float64) (values []float64)

PDF calculates an empirical probability density function. The granularity of the function is specified by a set of edges; see Histogram.

func PeakSignalToNoiseRatio

func PeakSignalToNoiseRatio(input, reference []float64) float64

PeakSignalToNoiseRatio computes the peak signal-to-noise ratio, assuming pixel values lie in [0, 1]. See https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio

func Precision

func Precision(data []metrics) float64

Precision = TP / (TP + FPP + FNP)

func Recall

func Recall(data []metrics) float64

Recall = TPR = TP / (TP + FN + FPP)

func RegisterFeatureCompareFunction

func RegisterFeatureCompareFunction(name string, f FeatureCompareFunction)

func RelativeAbsoluteError

func RelativeAbsoluteError(a, b []float64) float64

func RootMeanSquaredError

func RootMeanSquaredError(y, yhat []float64) float64

RMSE computes the root-mean-square error.

https://en.wikipedia.org/wiki/Root-mean-square_deviation

func RootMeanSquaredPercentageError

func RootMeanSquaredPercentageError(y, yhat []float64) float64

RMSPE computes the root-mean-square-percentage error.

func SquaredError

func SquaredError(a, b []float64) []float64

SquaredError computes the elementwise squared error between two vectors.

func SquaredLogError

func SquaredLogError(a, b []float64) []float64

SquaredLogError computes the elementwise squared log error between two vectors.

func Sum

func Sum(s []float64) float64

Sum returns the sum of the elements of the slice.

func Top1

func Top1(features *dlframework.Features, expectedLabelIndex int) bool

func Top5

func Top5(features *dlframework.Features, expectedLabelIndex int) bool

Top5 reports whether the expected label index appears among the top five features.

func Uniform

func Uniform(x, y []float64) float64

Uniform computes the uniform distance between two vectors.

func Variance

func Variance(data []float64) float64

Variance computes an estimate of the population variance from a finite sample. The estimate is unbiased. The computation is based on the compensated-summation version of the two-pass algorithm.

https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Two-pass_algorithm
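The compensated two-pass algorithm linked above first computes the mean, then sums squared deviations together with a correction term that would be zero in exact arithmetic. A sketch:

```go
package main

import "fmt"

// variance implements the compensated two-pass algorithm: pass one computes
// the mean; pass two accumulates squared deviations plus a compensation sum
// that cancels accumulated rounding error. The divisor n-1 makes the
// estimate unbiased.
func variance(data []float64) float64 {
	n := float64(len(data))
	var mean float64
	for _, x := range data {
		mean += x
	}
	mean /= n

	var sum2, comp float64
	for _, x := range data {
		d := x - mean
		sum2 += d * d
		comp += d // exactly 0 in infinite precision; tracks rounding error
	}
	return (sum2 - comp*comp/n) / (n - 1)
}

func main() {
	fmt.Println(variance([]float64{2, 4, 4, 4, 5, 5, 7, 9})) // 32/7
}
```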

func WeightedMean

func WeightedMean(x, weights []float64) float64

WeightedMean computes the weighted mean of the data set.

sum_i {w_i * x_i} / sum_i {w_i}

If weights is nil then all of the weights are 1. If weights is not nil, then len(x) must equal len(weights).
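The formula and the nil-weights convention above can be sketched as:

```go
package main

import "fmt"

// weightedMean computes sum_i(w_i * x_i) / sum_i(w_i).
// A nil weights slice means all weights are 1, i.e. the plain mean.
func weightedMean(x, weights []float64) float64 {
	if weights == nil {
		var sum float64
		for _, v := range x {
			sum += v
		}
		return sum / float64(len(x))
	}
	var num, den float64
	for i, v := range x {
		num += weights[i] * v
		den += weights[i]
	}
	return num / den
}

func main() {
	fmt.Println(weightedMean([]float64{1, 2, 3}, nil))                // 2
	fmt.Println(weightedMean([]float64{1, 2, 3}, []float64{0, 0, 1})) // 3
}
```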

Types

type FeatureCompareFunction

type FeatureCompareFunction func(actual *dlframework.Features, expected interface{}) float64

func GetFeatureCompareFunction

func GetFeatureCompareFunction(name string) FeatureCompareFunction
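RegisterFeatureCompareFunction and GetFeatureCompareFunction form a name-to-function registry. A self-contained sketch of that pattern, with a simplified stand-in signature since dlframework types are not shown here:

```go
package main

import "fmt"

// CompareFunc is a simplified stand-in for the package's
// FeatureCompareFunction type.
type CompareFunc func(actual, expected []float64) float64

// registry maps names to comparison functions, mirroring the
// Register/Get pair in the package.
var registry = map[string]CompareFunc{}

func register(name string, f CompareFunc) { registry[name] = f }

func get(name string) CompareFunc { return registry[name] }

func main() {
	// Register a squared-distance comparison under the name "l2".
	register("l2", func(a, b []float64) float64 {
		var s float64
		for i := range a {
			d := a[i] - b[i]
			s += d * d
		}
		return s
	})
	fmt.Println(get("l2")([]float64{1, 2}, []float64{1, 4})) // 4
}
```

Keeping the registry as a package-level map lets callers select a comparison strategy by name at runtime, e.g. from a config file.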
