bigo

package module
v0.0.0-...-3eeafaa
Published: Jul 31, 2025 License: Apache-2.0 Imports: 12 Imported by: 0

README

BigO - Big O Complexity Analysis Library

A Go library for analyzing algorithm complexity by characterizing real-world timing data and determining which Big O notation best fits the performance patterns.

Overview

The bigo package analyzes timing measurements collected from algorithms and benchmarks to determine their time complexity characteristics. Given input size (N) and corresponding execution times (or any other measurement such as memory usage), it correlates the data against known Big O patterns to identify the most likely complexity class.

Supported Complexity Classes:

  • O(1) - Constant Time
  • O(log log n) - Double Logarithmic Time
  • O(log n) - Logarithmic Time
  • O((log n)^c) - Polylogarithmic Time
  • O(n) - Linear Time
  • O(n log* n) - n Log Star n Time
  • O(n log n) - Linearithmic Time
  • O(n²) - Quadratic Time
  • O(n³) - Cubic Time
  • O(n^c) - Polynomial Time
  • O(2^n) - Exponential Time
  • O(n!) - Factorial Time
  • O(n^n) - Hyper-Exponential Time
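The general mechanism can be illustrated with a minimal, self-contained sketch (an assumption about the approach, not the library's actual implementation): transform each N by a candidate growth function and keep the candidate whose transformed values correlate best with the measured timings.

```go
package main

import (
	"fmt"
	"math"
)

// pearson returns the Pearson correlation coefficient of x and y.
func pearson(x, y []float64) float64 {
	n := float64(len(x))
	var sx, sy, sxx, syy, sxy float64
	for i := range x {
		sx += x[i]
		sy += y[i]
		sxx += x[i] * x[i]
		syy += y[i] * y[i]
		sxy += x[i] * y[i]
	}
	den := math.Sqrt(n*sxx-sx*sx) * math.Sqrt(n*syy-sy*sy)
	if den == 0 {
		return 0
	}
	return (n*sxy - sx*sy) / den
}

// classify returns the label of the candidate growth function whose
// transformed N values correlate best with the measured timings.
func classify(ns, times []float64) string {
	candidates := []struct {
		label string
		f     func(float64) float64
	}{
		{"O(log n)", math.Log},
		{"O(n)", func(x float64) float64 { return x }},
		{"O(n^2)", func(x float64) float64 { return x * x }},
	}
	best, bestR := "", math.Inf(-1)
	for _, c := range candidates {
		xs := make([]float64, len(ns))
		for i, n := range ns {
			xs[i] = c.f(n)
		}
		if r := pearson(xs, times); r > bestR {
			best, bestR = c.label, r
		}
	}
	return best
}

func main() {
	ns := []float64{100, 200, 400, 800}
	times := []float64{1250.5, 2501.2, 5002.8, 10008.1} // roughly 12.5 units per element
	fmt.Println(classify(ns, times)) // O(n)
}
```

The real library adds per-class cutoffs and big.Float fallbacks on top of this basic idea, but the correlate-against-candidates shape is the core of it.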

Basic Usage

Analyzing Timing Data
    // Create a new Classifier
    c := bigo.NewClassifier()
    
    // Add timing data points (input_size, execution_time_in_ns)
    // Note: Non-positive input sizes (n <= 0) are automatically filtered out
    c.AddDataPoint(100, 1250.5)
    c.AddDataPoint(200, 2501.2) 
    c.AddDataPoint(400, 5002.8)
    c.AddDataPoint(800, 10008.1)
    
    // Classify the complexity
    result, err := c.Classify()
    if err != nil {
        panic(err)
    }
    
    fmt.Printf("Best fit: %s\n", result)
    // Also print out the verbose summary.
    fmt.Print(c.Summary())

Loading Data from CSV

If your data comes from an external source, LoadCSV is a good starting point. It expects data in a two-column delimited format. Rows do not need to be unique: each row is considered a distinct measurement, and all measurements for a given N are averaged together before the characterization is performed. The method takes a filename, a boolean flag indicating whether the file has a header row, and the delimiter character used in the file. The columns are expected to be:

  • Column 1 - A numerical value representing a given N. Non-positive values (N ≤ 0) are automatically filtered out during loading.
  • Column 2 - A numerical measurement value associated with column 1's N.

Extra columns are ignored.

// Load timing data from CSV file
c := bigo.NewClassifier()
if err := c.LoadCSV("timings.csv", true, ','); err != nil {
    panic(err)
}

result, err := c.Classify()
if err != nil {
    panic(err)
}

fmt.Printf("BigO: %s\n", result)

LoadCSV can be called multiple times to add more data. Data is not cleared between calls to LoadCSV.
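Hypothetically, the load-and-average step amounts to something like the following self-contained sketch (using the standard encoding/csv package; this is not the library's internal code):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strconv"
	"strings"
)

// averageByN parses two-column delimited data and averages all
// measurements recorded for the same N, mirroring what LoadCSV's
// averaging behavior is described to do.
func averageByN(data string, header bool, delimiter rune) (map[int]float64, error) {
	r := csv.NewReader(strings.NewReader(data))
	r.Comma = delimiter
	r.FieldsPerRecord = -1 // extra columns are ignored
	rows, err := r.ReadAll()
	if err != nil {
		return nil, err
	}
	if header && len(rows) > 0 {
		rows = rows[1:]
	}
	sums := map[int]float64{}
	counts := map[int]int{}
	for _, row := range rows {
		n, err := strconv.Atoi(row[0])
		if err != nil {
			return nil, err
		}
		if n <= 0 {
			continue // non-positive N filtered out, as LoadCSV does
		}
		v, err := strconv.ParseFloat(row[1], 64)
		if err != nil {
			return nil, err
		}
		sums[n] += v
		counts[n]++
	}
	avgs := map[int]float64{}
	for n, s := range sums {
		avgs[n] = s / float64(counts[n])
	}
	return avgs, nil
}

func main() {
	data := "n,ns_per_op\n100,1200\n100,1300\n200,2500\n-5,10\n"
	avgs, err := averageByN(data, true, ',')
	if err != nil {
		panic(err)
	}
	fmt.Println(avgs[100], avgs[200]) // 1250 2500
}
```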

Working with Multiple Data Points

Sometimes there are multiple runs for a given input size. The AddDataPoint method is variadic and can take multiple measurement values for the given size. Similarly, the helper method AddDataPoints takes a slice of sizes and a corresponding slice of measurement slices paired with those sizes.

Note: All data point addition methods automatically filter out non-positive input sizes (N ≤ 0) to ensure valid complexity analysis.

// Add multiple measurements for each input size
c := bigo.NewClassifier()

// Multiple runs for n=100
c.AddDataPoint(100, 1250.5, 1248.2, 1252.1)

// Invalid input sizes are silently filtered out
c.AddDataPoint(0, 0.0)    // Ignored - zero input size
c.AddDataPoint(-5, 10.0)  // Ignored - negative input size

// Batch add data points
inputSizes := []int{100, 200, 400, 800}
timings := [][]float64{
    {1250.5, 1248.2},
    {2501.2, 2498.9}, 
    {5002.8, 4999.1},
    {10008.1, 10011.2, 10007.5, 10009.3},
}

// AddDataPoints returns an error if, for example, the two slices are not the same length.
err := c.AddDataPoints(inputSizes, timings)
if err != nil {
    panic(err)
}

Example Algorithm Implementations

The examples/ directory contains comprehensive reference implementations for each Big O complexity class, organized by their time complexity. These implementations serve as both educational resources and test cases for the bigo library's analysis capabilities.

Each complexity class includes multiple algorithm implementations with comprehensive tests and benchmarks that generate timing data for analysis. The examples range from simple operations like array access (O(1)) to complex algorithms like the Traveling Salesman Problem (O(n!)).

For detailed information about the available algorithms and their implementations, see the examples README.

Testing

Running Tests

I've added comprehensive tests for both the main package and example algorithms:

# Run all tests
go test ./...

Benchmarking

Because this library is about analyzing data, there are a lot of benchmarks here, especially amongst the examples. Most of this is managed through two systems:

  • Individual benchmark functions
  • A unified, parameterized benchmark system that can generate CSV timing data for complexity analysis.
Individual Benchmark Functions

Every example method or algorithm includes unit tests and benchmarks.

# Run individual benchmarks by name (from root directory)
go test -bench=BenchmarkBubbleSort ./examples/quadratic/
go test -bench=BenchmarkRecursiveFibonacci ./examples/exponential/
go test -bench=BenchmarkBinarySearch ./examples/logarithmic/

# Run multiple benchmarks in one complexity class (the -bench pattern is a
# regular expression, so use alternation; repeated -bench flags do not accumulate)
go test -bench='BenchmarkBubbleSort|BenchmarkInsertionSort|BenchmarkSelectionSort' ./examples/quadratic/

# For exponential and factorial benchmarks, increase timeout
go test -bench=BenchmarkRecursiveFibonacci -timeout=10m ./examples/exponential/
go test -bench=BenchmarkGenerateAllPermutations -timeout=10m ./examples/factorial/
Parameterized Benchmark System

The unified example method system allows parameterized benchmarking with CSV output generation:

cd examples

# Run parameterized benchmarks with custom ranges
go test -bench=BenchmarkExampleMethod --benchmark_example_name=Quadratic_BubbleSort --benchmark_start=100 --benchmark_end=5000 --benchmark_step=200 --count=3

# Run all example method benchmarks 5 times using their default parameters
go test -bench=BenchmarkAllBigOExamples --count=5
Shell Script Automation

The project includes two shell scripts for automated benchmarking:

  • ./run_all_benchmarks.sh
  • ./run_benchmark.sh
run_all_benchmarks.sh

Executes all benchmarks with optimized parameter ranges for each complexity class.

The main features of this shell script are:

  • Pre-tuned parameter ranges for each complexity class (e.g., Factorial: N=3-9, Exponential: N=5-25) so they don't crush your machine or run forever.
  • Complexity class filtering with flexible naming (supports quadratic, QuadraticTime, quadratic_time, etc.)
  • Batch CSV generation with consistent file naming
# Run all benchmarks with default parameters
./run_all_benchmarks.sh

Any one or all of the benchmarks can be run multiple times with the --count flag.

# Run with custom iteration count
./run_all_benchmarks.sh --count=3

The resulting CSV files will look like:

n,ns_per_op
0000100,5159
0000100,4884
0000100,5003
0000400,132290
0000400,132111
0000400,132738

To run just a specific complexity class's example method benchmarks, use --complexity.

Accepted case-insensitive options are: Constant, LogLog, Logarithmic, Polylogarithmic, Linear, NLogStarN, Linearithmic, Quadratic, Cubic, Exponential, Polynomial, Factorial, HyperExponential

Some alternative spellings/forms are also accepted (e.g., nlogn for Linearithmic).

Note: This is not a repeated-value flag; only one complexity class is run at a time.

# Run specific complexity class only
./run_all_benchmarks.sh --complexity=Quadratic --count=2
./run_all_benchmarks.sh --complexity=Linear --count=5
run_benchmark.sh

Runs individual benchmarks or example methods with configurable parameters and generates CSV output. This script is what gets executed by run_all_benchmarks.sh.

The main features of this shell script are:

  • Supports both individual benchmark functions and example method names
  • Configurable parameters: --benchmark_start, --benchmark_end, --benchmark_step, --count
  • Automatic CSV generation with standardized naming (e.g., Quadratic_BubbleSort_start100_end5000_step200_count3.csv)
  • Parses benchmark output and extracts timing data in n,ns_per_op format
# Run individual benchmark function
./run_benchmark.sh BenchmarkBubbleSort

# Run example method with custom parameters
./run_benchmark.sh --benchmark_example_name=Quadratic_BubbleSort --benchmark_start=100 --benchmark_end=5000 --benchmark_step=200 --count=3
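The output-parsing step can be sketched as follows. The exact line shape is an assumption: this hypothetical parser expects the input size to be encoded as a /N=<size> sub-benchmark suffix in the benchmark name, which is one common way `go test -bench` output carries a parameter:

```go
package main

import (
	"fmt"
	"strings"
)

// benchLineToCSV converts one `go test -bench` output line into an
// "n,ns_per_op" row. It assumes (hypothetically) that the input size
// is encoded as a /N=<size> sub-benchmark suffix in the name.
func benchLineToCSV(line string) (string, bool) {
	fields := strings.Fields(line)
	// A benchmark result line ends in "ns/op" preceded by the timing value.
	if len(fields) < 4 || fields[len(fields)-1] != "ns/op" {
		return "", false
	}
	name := fields[0]
	i := strings.Index(name, "/N=")
	if i < 0 {
		return "", false
	}
	n := name[i+3:]
	// Strip the trailing -GOMAXPROCS suffix (e.g. "100-8" -> "100").
	if j := strings.IndexByte(n, '-'); j >= 0 {
		n = n[:j]
	}
	return n + "," + fields[len(fields)-2], true
}

func main() {
	line := "BenchmarkExampleMethod/N=100-8    1000    5159 ns/op"
	if row, ok := benchLineToCSV(line); ok {
		fmt.Println(row) // 100,5159
	}
}
```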

The run_all_benchmarks.sh script will list the names of the supported benchmarks when passed the --help flag.

CSV Output and Analysis

All benchmarks run by the shell scripts generate CSV files in the timings/ directory with the format:

n,ns_per_op
100,1250.5
200,2501.2
400,5002.8

These CSV files can be analyzed using the bigo library itself or external tools to determine algorithmic complexity patterns.

Under the analyze/ directory is a standalone tool for running the classifier against a CSV file.

High-Precision Support

Although rarely needed, for algorithms with extreme performance characteristics the library supports Go's *big.Float and *big.Int types via the AddDataPointBig/AddDataPointsBig methods:

// Add high-precision timing data
c := bigo.NewClassifier()
bigTime := big.NewFloat(1.23456789012345e15) 
c.AddDataPointBig(1000000, bigTime)

// Non-positive input sizes are also filtered for big precision methods
c.AddDataPointBig(0, big.NewFloat(100.0))    // Ignored - zero input size
c.AddDataPointBig(-10, big.NewFloat(50.0))   // Ignored - negative input size

// Batch add big float data
bigTimings := [][]*big.Float{
    {big.NewFloat(1.5e10)},
    {big.NewFloat(3.2e12)},
}

err := c.AddDataPointsBig(inputSizes, bigTimings)

Generally the high-precision support is used internally for Big O complexity classes above Linear, where a comparison value is likely to overflow a float64. For example, any N > 170 makes the factorial exceed float64's limit, yet a data set that actually matches a lower Big O may have N values running into the thousands or millions that still need to be compared against the factorial curve. Even Linear-time algorithms can process a million or more inputs in reasonable time on modest hardware.

Input Validation and Data Filtering

The library automatically filters invalid input data to ensure robust complexity analysis:

Non-Positive Input Sizes
  • Input sizes (N ≤ 0) are automatically filtered out across all data addition methods
  • This includes AddDataPoint(), AddDataPointBig(), AddDataPoints(), AddDataPointsBig(), and LoadCSV()
  • Filtering is silent - no errors are returned for filtered data points
  • If insufficient valid data points remain after filtering (< 3), an error is returned during classification
Example of Filtering Behavior
c := bigo.NewClassifier()

// These will be added successfully
c.AddDataPoint(100, 1250.5)
c.AddDataPoint(200, 2501.2)
c.AddDataPoint(400, 5002.8)

// These will be silently filtered out
c.AddDataPoint(0, 100.0)     // Zero input size - ignored
c.AddDataPoint(-5, 50.0)     // Negative input size - ignored
c.AddDataPoint(-100, 75.0)   // Negative input size - ignored

// Classification will succeed with 3 valid data points
result, err := c.Classify()  // Works with 3 valid points
Why This Filtering Is Important
  • Algorithm complexity analysis requires positive input sizes to be meaningful
  • Mathematical functions used for complexity correlation (log, exponential, factorial) are undefined or problematic for non-positive inputs
  • Prevents runtime panics from mathematical operations on invalid inputs
  • Ensures robust operation when loading real-world data that may contain edge cases
Data Requirements After Filtering
  • Minimum 3 data points are required after filtering for complexity analysis
  • Timing values can be negative (for measuring performance deltas) but input sizes cannot
  • Zero timing values are allowed and handled appropriately

Documentation

Overview

Package bigo is a library to generate Big O estimates for real-world timings of algorithms and code. Given a collection of counts and timings collected when running code or benchmarks, the library will attempt to Classify which Big O notation most likely matches the values.

The library has support for the following Big O classifications:

  • O(1) - Constant Time
  • O(log (log n)) - Double Log Time
  • O(log n) - Log Time
  • O((log n)^c) - Polylogarithmic Time
  • O(n) - Linear Time
  • O(n log* n) - N Log* Time
  • O(n log n) - Linearithmic Time
  • O(n^2) - Quadratic Time
  • O(n^3) - Cubic Time
  • O(n^c) - Polynomial Time
  • O(2^n) - Exponential Time
  • O(n!) - Factorial Time
  • O(n^n) - Hyper-Exponential Time

The characterization can be applied to any set of values, whether they are times, bytes, allocations, or any other measurement.

Underneath the examples directory are subdirectories for each Big O category. Each contains a variety of example methods and algorithms generally considered to fall in that class, along with basic benchmarks that can be used to generate real timing results.

Index

Constants

This section is empty.

Variables

var (
	// Unrated is a BigO instance for when we have not yet rated the data.
	Unrated = &BigO{
		active:         false,
		rank:           0,
		label:          "O(?)",
		description:    "Not Yet Rated",
		scalingCutoff:  0,
		floatCutoffMin: 0,
		floatCutoffMax: 0,
		funcFloatFloat: nil,
		funcFloatBig:   nil,
		funcBigBig:     nil,
	}

	// Constant is a BigO instance for constant time functions.
	//
	// Uses special variance-based detection instead of correlation since
	// constant values have zero variance.
	Constant = &BigO{
		active:      true,
		rank:        1,
		label:       "O(1)",
		description: "An algorithm with O(1) complexity runs in constant time, regardless of the input size.",

		floatCutoffMin: math.SmallestNonzeroFloat64,
		floatCutoffMax: math.MaxFloat64,

		scalingCutoff: math.MaxInt64,

		funcFloatFloat: func(_ float64) float64 {
			return 1
		},
		funcFloatBig: func(_ float64) *big.Float {
			return big.NewFloat(1)
		},
		funcBigBig: func(_ *big.Float) *big.Float {
			return big.NewFloat(1)
		},
	}

	// InverseAckerman is a BigO instance for the inverse Ackerman function.
	// Almost as flat a curve as Constant, and quite similar to LogLog.
	InverseAckerman = &BigO{
		active:         false,
		rank:           2,
		label:          "O(α(n))",
		description:    "An algorithm with O(α(n)) complexity has a runtime that grows at the rate of the inverse Ackerman function with the input size.",
		floatCutoffMin: 1,
		floatCutoffMax: math.MaxFloat64,

		scalingCutoff: math.MaxInt64,

		funcFloatFloat: func(x float64) float64 {
			return inverseAckermann(int(x))
		},
		funcFloatBig: func(x float64) *big.Float {
			return inverseAckermannBig(big.NewFloat(x))
		},
		funcBigBig: inverseAckermannBig,
	}

	// LogLog is a BigO instance for the double log function.
	LogLog = &BigO{
		active:         true,
		rank:           4,
		label:          "O(log log n)",
		description:    "An algorithm with O(log log n) complexity has a runtime that grows logarithmically twice with the input size.",
		floatCutoffMin: math.Exp(math.Exp(0)),
		floatCutoffMax: math.MaxFloat64,

		scalingCutoff: math.MaxInt64,

		funcFloatFloat: func(x float64) float64 {
			return math.Log(math.Log(x))
		},
		funcFloatBig: func(x float64) *big.Float {
			return bigmath.Log(bigmath.Log(big.NewFloat(x)))
		},
		funcBigBig: func(x *big.Float) *big.Float {
			return bigmath.Log(bigmath.Log(x))
		},
	}

	// Log is a BigO instance for Log complexity functions.
	Log = &BigO{
		active:      true,
		rank:        8,
		label:       "O(log n)",
		description: "An algorithm with O(log n) complexity has a runtime that grows logarithmically as the input size increases. This often involves dividing the problem in half each time.",

		floatCutoffMin: math.Exp(0),
		floatCutoffMax: math.MaxFloat64,

		scalingCutoff: math.MaxInt64,

		funcFloatFloat: math.Log,
		funcFloatBig: func(x float64) *big.Float {
			return big.NewFloat(math.Log(x))
		},
		funcBigBig: bigmath.Log,
	}

	// Polylogarithmic is a BigO instance for the polylogarithmic function.
	Polylogarithmic = &BigO{
		active:         true,
		rank:           16,
		label:          "O((log n)^c)",
		description:    "An algorithm with O((log n)^c) complexity has a runtime that grows polylogarithmically with the input size.",
		floatCutoffMin: 1,
		floatCutoffMax: math.MaxFloat64,

		scalingCutoff: math.MaxInt64,

		funcFloatFloat: func(x float64) float64 {
			return math.Pow(math.Log(x), 4)
		},
		funcFloatBig: func(x float64) *big.Float {
			return bigmath.Pow(bigmath.Log(big.NewFloat(x)), big.NewFloat(4))
		},
		funcBigBig: func(x *big.Float) *big.Float {
			return bigmath.Pow(bigmath.Log(x), big.NewFloat(4))
		},
	}

	// Linear is a BigO instance for the linear function.
	Linear = &BigO{
		active:      true,
		rank:        32,
		label:       "O(n)",
		description: "An algorithm with O(n) complexity has a runtime that grows linearly with the input size.",

		scalingCutoff: math.MaxInt64,

		floatCutoffMin: 1,
		floatCutoffMax: math.MaxFloat64,

		funcFloatFloat: func(x float64) float64 {
			return x
		},
		funcFloatBig: big.NewFloat,
		funcBigBig: func(x *big.Float) *big.Float {
			return x
		},
	}

	// NLogStarN is a BigO instance for the n log star n function.
	NLogStarN = &BigO{
		active:         true,
		rank:           64,
		label:          "O(n log* n)",
		description:    "An algorithm with O(n log* n) complexity has a runtime that grows with n times the iterated logarithm of the input size.",
		floatCutoffMin: 1,
		floatCutoffMax: math.MaxFloat64,

		scalingCutoff: math.MaxInt64,

		funcFloatFloat: func(x float64) float64 {

			return x * float64(logStarFloat(x))
		},
		funcFloatBig: func(x float64) *big.Float {
			logStarVal := logStarBig(big.NewFloat(x))

			return new(big.Float).Mul(big.NewFloat(x), big.NewFloat(float64(logStarVal)))
		},
		funcBigBig: func(x *big.Float) *big.Float {
			logStarVal := logStarBig(x)

			return new(big.Float).Mul(x, big.NewFloat(float64(logStarVal)))
		},
	}

	// Linearithmic is a BigO instance for the linearithmic function.
	Linearithmic = &BigO{
		active:      true,
		rank:        128,
		label:       "O(n log n)",
		description: "An algorithm with O(n log n) complexity typically involves a combination of linear and logarithmic time, often seen in efficient sorting algorithms such as Merge Sort.",

		scalingCutoff: math.MaxInt64,

		floatCutoffMin: 1,
		floatCutoffMax: math.Log(math.MaxFloat64),

		funcFloatFloat: func(x float64) float64 {
			return x * math.Log(x)
		},
		funcFloatBig: func(x float64) *big.Float {
			return bigmath.Log(big.NewFloat(x)).Mul(bigmath.Log(big.NewFloat(x)), big.NewFloat(x))
		},
		funcBigBig: func(x *big.Float) *big.Float {
			return bigmath.Log(x).Mul(bigmath.Log(x), x)
		},
	}

	// Quadratic is a BigO instance for the quadratic function.
	Quadratic = &BigO{
		active:      true,
		rank:        256,
		label:       "O(n^2)",
		description: "An algorithm with O(n^2) complexity has a runtime that grows quadratically with the input size, often involving nested loops such as Bubble Sort.",

		scalingCutoff: math.MaxInt64,

		floatCutoffMin: 1,
		floatCutoffMax: math.Sqrt(math.MaxFloat64),

		funcFloatFloat: func(x float64) float64 {
			return x * x
		},
		funcFloatBig: func(x float64) *big.Float {
			return new(big.Float).Mul(big.NewFloat(x), big.NewFloat(x))
		},
		funcBigBig: func(x *big.Float) *big.Float {
			return new(big.Float).Mul(x, x)
		},
	}

	// Cubic is a BigO instance for the cubic function.
	Cubic = &BigO{
		active:      true,
		rank:        512,
		label:       "O(n^3)",
		description: "An algorithm with O(n^3) complexity has a runtime that grows cubically with the input size, often involving three nested loops.",

		scalingCutoff: math.MaxInt64,

		floatCutoffMin: 1,
		floatCutoffMax: math.Cbrt(math.MaxFloat64),
		funcFloatFloat: func(x float64) float64 {
			return x * x * x
		},
		funcFloatBig: func(x float64) *big.Float {
			c := new(big.Float).Mul(big.NewFloat(x), big.NewFloat(x))

			return c.Mul(c, big.NewFloat(x))
		},
		funcBigBig: func(x *big.Float) *big.Float {
			c := new(big.Float).Mul(x, x)

			return c.Mul(c, x)
		},
	}

	// Polynomial is a BigO instance for the polynomial function.
	Polynomial = &BigO{
		active:      true,
		rank:        1024,
		label:       "O(n^c)",
		description: "An algorithm with O(n^c) complexity has a runtime that grows polynomially with the input size, often involving nested loops.",

		scalingCutoff: 1000000,

		floatCutoffMin: 1,
		floatCutoffMax: math.MaxFloat64,
		funcFloatFloat: func(x float64) float64 {
			return math.Pow(x, 4)
		},
		funcFloatBig: func(x float64) *big.Float {
			return bigmath.PowFloat64(x, 4)
		},
		funcBigBig: func(x *big.Float) *big.Float {
			return bigmath.Pow(x, big.NewFloat(4))
		},
	}

	// Exponential is a BigO instance for the exponential function.
	Exponential = &BigO{
		active:      true,
		rank:        2048,
		label:       "O(2^n)",
		description: "An algorithm with O(2^n) complexity has a runtime that grows exponentially with the input size, often seen in recursive algorithms solving combinatorial problems.",

		scalingCutoff: 1000000,

		floatCutoffMin: 1,
		floatCutoffMax: 1024.0,

		funcFloatFloat: math.Exp2,
		funcFloatBig: func(x float64) *big.Float {
			return bigmath.PowFloat64(2, x)
		},
		funcBigBig: func(x *big.Float) *big.Float {
			return bigmath.Pow(big.NewFloat(2), x)
		},
	}

	// Factorial is a BigO instance for the factorial function.
	Factorial = &BigO{
		active:      true,
		rank:        4096,
		label:       "O(n!)",
		description: "An algorithm with O(n!) complexity has a runtime that grows factorially with the input size, often seen in problems involving permutations.",

		scalingCutoff: 1000,

		floatCutoffMin: 1,
		floatCutoffMax: 170,

		funcFloatFloat: func(x float64) float64 {
			return factorial(int(x))
		},
		funcFloatBig: func(x float64) *big.Float {
			return bigmath.FactorialFloat(big.NewFloat(x))
		},
		funcBigBig: bigmath.FactorialFloat,
	}

	// HyperExponential is a BigO instance for the hyper-exponential function.
	HyperExponential = &BigO{
		active:      true,
		rank:        8192,
		label:       "O(n^n)",
		description: "An algorithm with O(n^n) complexity has a runtime that grows hyper-exponentially with the input size.",

		scalingCutoff: 500,

		floatCutoffMin: 1,
		floatCutoffMax: 141,

		funcFloatFloat: func(x float64) float64 {
			return math.Pow(x, x)
		},
		funcFloatBig: func(x float64) *big.Float {
			return bigmath.PowFloat64(x, x)
		},
		funcBigBig: func(x *big.Float) *big.Float {
			return bigmath.Pow(x, x)
		},
	}
)
var (
	// BigOOrdered holds the set of all BigO instances in rank order.
	BigOOrdered = []*BigO{}
)

Functions

This section is empty.

Types

type BigO

type BigO struct {
	// contains filtered or unexported fields
}

BigO is used to describe the complexity class of an algorithm, which represents how the runtime or space requirements of an algorithm grow as the input size grows.

func (*BigO) Description

func (o *BigO) Description() string

Description returns the description of this BigO.

func (*BigO) Label

func (o *BigO) Label() string

Label returns the label of this BigO.

func (*BigO) Rate

func (o *BigO) Rate(ns []int, vals []float64) (*Rating, error)

Rate generates the ranking for this particular BigO using its defined cutoffs and mapping functions.

To try to make some of the math easier, we scale the incoming values by the smallest N in the data so that the comparison may hopefully stay in the float64 range.

Non-positive input sizes are filtered out before analysis.

func (*BigO) RateBig

func (o *BigO) RateBig(ns []int, vals []*big.Float) (*Rating, error)

RateBig is a helper function that rates the data using big.Float values to perform the correlation and scoring.

Non-positive input sizes are filtered out before analysis.

func (BigO) String

func (o BigO) String() string

String returns this instance's label in string form.

type Classifier

type Classifier struct {
	// contains filtered or unexported fields
}

Classifier is used to Classify a set of data points to find the rating they are most likely to correspond to.

func NewClassifier

func NewClassifier() *Classifier

NewClassifier creates a new Classifier.

func (*Classifier) AddBenchmarkResult

func (o *Classifier) AddBenchmarkResult(result testing.BenchmarkResult) error

AddBenchmarkResult adds the result of a benchmark test to the data. It extracts the nanoseconds per operation from the BenchmarkResult and adds it as a data point using the number of iterations (N) as the input size. This method provides seamless integration with Go's built-in benchmarking system.

The timing data is converted from nanoseconds to a float64 value representing the average execution time per operation. Non-positive iteration counts are ignored.

Usage example:

result := testing.Benchmark(func(b *testing.B) {
    for i := 0; i < b.N; i++ {
        // your algorithm here
    }
})
classifier.AddBenchmarkResult(result)

func (*Classifier) AddDataPoint

func (o *Classifier) AddDataPoint(n int, values ...float64) error

AddDataPoint adds the given values to the data. Non-positive input sizes (n <= 0) are ignored and not added to the dataset.

func (*Classifier) AddDataPointBig

func (o *Classifier) AddDataPointBig(n int, values ...*big.Float) error

AddDataPointBig adds the given values to the data. Non-positive input sizes (n <= 0) are ignored and not added to the dataset.

func (*Classifier) AddDataPoints

func (o *Classifier) AddDataPoints(n []int, values [][]float64) error

AddDataPoints adds all the data points and associated values.

func (*Classifier) AddDataPointsBig

func (o *Classifier) AddDataPointsBig(n []int, values [][]*big.Float) error

AddDataPointsBig adds all the data points and associated values.

func (*Classifier) Classify

func (o *Classifier) Classify() (*Rating, error)

Classify is used to Classify the data so far and determine the best Big O fit. It can be run as often as needed as more data are added.

Common errors here are too few distinct data points to analyze, or errors in a specific BigO test.

For every distinct N value, the values stored for it are averaged to get a working value. The assumption in this package is that the user will do more than a single run of their code to get real world timing results, so we average all the run values for a given N.

TODO(rsned): Add support for the big.Float values as well.

TODO(rsned): If there are at least 30-50 values for a given N, run some basic statistical tests on the data points to check for outliers.

TODO(rsned): If the number of values for a given N is large enough, should we include the option to discard outliers in the values?

func (*Classifier) GetAllRatings

func (o *Classifier) GetAllRatings() []*Rating

GetAllRatings returns a copy of all the ratings generated by the most recent Classify() call. Returns nil if Classify() has not been called yet. The ratings are sorted by BigO rank (lowest rank first). The returned slice is a copy and can be safely modified without affecting internal state.

func (*Classifier) LoadCSV

func (o *Classifier) LoadCSV(path string, header bool, delimiter rune) error

LoadCSV loads the data from a 2-column delimiter separated file and adds the values. The header parameter controls whether the first line of the file is a header or not and should be skipped. If an error occurred, no data will be loaded. Any errors encountered are returned. Non-positive input sizes are automatically filtered out during loading.

func (*Classifier) Summary

func (o *Classifier) Summary() string

Summary returns a longer form view of the results as a formatted text blob.

type Rating

type Rating struct {
	// contains filtered or unexported fields
}

Rating pairs up a BigO and the score it received in the current processing.

func (*Rating) BigO

func (r *Rating) BigO() *BigO

BigO returns the BigO instance associated with this rating.

func (*Rating) Score

func (r *Rating) Score() float64

Score returns the correlation score for this rating.

func (*Rating) String

func (r *Rating) String() string

Directories

Path Synopsis
Package examples contains a collection of example algorithms and methods and the Big O complexity they are generally associated with.
Package examples contains a collection of example algorithms and methods and the Big O complexity they are generally associated with.
constant
Package constant contains implementations of algorithms with O(1) constant time complexity.
Package constant contains implementations of algorithms with O(1) constant time complexity.
cubic
Package cubic contains implementations of algorithms with O(n³) cubic time complexity.
Package cubic contains implementations of algorithms with O(n³) cubic time complexity.
datatypes/collection
Package collection provides generic collection data structures optimized for Big O complexity analysis and education.
Package collection provides generic collection data structures optimized for Big O complexity analysis and education.
datatypes/tree
Package tree provides advanced tree data structure implementations for efficient range queries, updates, and specialized operations.
Package tree provides advanced tree data structure implementations for efficient range queries, updates, and specialized operations.
exponential
Package exponential contains implementations of algorithms with O(2ⁿ) exponential time complexity.
Package exponential contains implementations of algorithms with O(2ⁿ) exponential time complexity.
factorial
Package factorial contains implementations of algorithms with O(n!) factorial time complexity.
Package factorial contains implementations of algorithms with O(n!) factorial time complexity.
hyperexponential
Package hyperexponential contains implementations of algorithms with O(n^n) hyperexponential time complexity.
Package hyperexponential contains implementations of algorithms with O(n^n) hyperexponential time complexity.
inverseackermann
Package inverseackermann contains implementations of algorithms with O(α(n)) inverse Ackermann time complexity.
Package inverseackermann contains implementations of algorithms with O(α(n)) inverse Ackermann time complexity.
linear
Package linear contains implementations of algorithms with O(n) linear time complexity.
Package linear contains implementations of algorithms with O(n) linear time complexity.
linearithmic
Package linearithmic contains implementations of algorithms with O(n log n) linearithmic time complexity.
Package linearithmic contains implementations of algorithms with O(n log n) linearithmic time complexity.
logarithmic
Package logarithmic contains implementations of algorithms with O(log n) logarithmic time complexity.
Package logarithmic contains implementations of algorithms with O(log n) logarithmic time complexity.
loglog
Package loglog contains implementations of algorithms with O(log log n) double logarithmic time complexity.
Package loglog contains implementations of algorithms with O(log log n) double logarithmic time complexity.
nlogstar
Package nlogstar contains implementations of algorithms with O(n log*(n)) complexity, where log*(n) is the iterated logarithm.
Package nlogstar contains implementations of algorithms with O(n log*(n)) complexity, where log*(n) is the iterated logarithm.
polylogarithmic
Package polylogarithmic contains implementations of algorithms with O((log n)^c) polylogarithmic time complexity.
Package polylogarithmic contains implementations of algorithms with O((log n)^c) polylogarithmic time complexity.
polynomial
Package polynomial contains implementations of algorithms with polynomial time complexity O(nᵏ) where k > 3.
Package polynomial contains implementations of algorithms with polynomial time complexity O(nᵏ) where k > 3.
quadratic
Package quadratic contains implementations of algorithms with O(n²) quadratic time complexity.
Package quadratic contains implementations of algorithms with O(n²) quadratic time complexity.
