poindexter

package module
v0.0.0-...-91146b2
Published: Jan 4, 2026 License: EUPL-1.2 Imports: 14 Imported by: 1

README

Poindexter


A Go library providing utility functions, including sorting with custom comparators, binary search, and a generic KDTree for nearest-neighbour search.

Features

  • 🔢 Sorting Utilities: Sort integers, strings, and floats in ascending or descending order
  • 🎯 Custom Sorting: Sort any type with custom comparison functions or key extractors
  • 🔍 Binary Search: Fast search on sorted data
  • 🧭 KDTree (NN Search): Build a KDTree over points with generic payloads; nearest, k-NN, and radius queries with Euclidean, Manhattan, Chebyshev, and Cosine metrics
  • 📦 Generic Functions: Type-safe operations using Go generics
  • ✅ Well-Tested: Comprehensive test coverage
  • 📖 Documentation: Full documentation available at GitHub Pages

Installation

go get github.com/Snider/Poindexter

Quick Start

package main

import (
    "fmt"
    poindexter "github.com/Snider/Poindexter"
)

func main() {
    // Basic sorting
    numbers := []int{3, 1, 4, 1, 5, 9}
    poindexter.SortInts(numbers)
    fmt.Println(numbers) // [1 1 3 4 5 9]

    // Custom sorting with key function
    type Product struct {
        Name  string
        Price float64
    }

    products := []Product{{"Apple", 1.50}, {"Banana", 0.75}, {"Cherry", 3.00}}
    poindexter.SortByKey(products, func(p Product) float64 { return p.Price })

    // KDTree quick demo
    pts := []poindexter.KDPoint[string]{
        {ID: "A", Coords: []float64{0, 0}, Value: "alpha"},
        {ID: "B", Coords: []float64{1, 0}, Value: "bravo"},
        {ID: "C", Coords: []float64{0, 1}, Value: "charlie"},
    }
    tree, _ := poindexter.NewKDTree(pts, poindexter.WithMetric(poindexter.EuclideanDistance{}))
    nearest, dist, _ := tree.Nearest([]float64{0.9, 0.1})
    fmt.Println(nearest.ID, nearest.Value, dist) // B bravo ~0.141...
}

Documentation

Full documentation is available at https://snider.github.io/Poindexter/

Explore runnable examples in the repository:

  • examples/dht_ping_1d
  • examples/kdtree_2d_ping_hop
  • examples/kdtree_3d_ping_hop_geo
  • examples/kdtree_4d_ping_hop_geo_score
  • examples/dht_helpers (convenience wrappers for common DHT schemas)
  • examples/wasm-browser (browser demo using the ESM loader)
  • examples/wasm-browser-ts (TypeScript + Vite local demo)

KDTree performance and notes

  • Dual backend support: Linear (always available) and an optimized KD backend enabled when building with -tags=gonum. Linear is the default; with the gonum tag, the optimized backend becomes the default.
  • Complexity: Linear backend is O(n) per query. Optimized KD backend is typically sub-linear on prunable datasets and dims ≤ ~8, especially as N grows (≥10k–100k).
  • Insert is O(1) amortized; delete by ID is O(1) via swap-delete; order is not preserved.
  • Concurrency: the KDTree type is not safe for concurrent mutation. Protect with a mutex or share immutable snapshots for read-mostly workloads.
  • See multi-dimensional examples (ping/hops/geo/score) in docs and examples/.
  • Performance guide: see docs/Performance for benchmark guidance and tips: docs/perf.md • Hosted: https://snider.github.io/Poindexter/perf/

Backend selection

  • Default backend is Linear. If you build with -tags=gonum, the default becomes the optimized KD backend.
  • You can override per tree at construction:
// Force Linear (always available)
kdt1, _ := poindexter.NewKDTree(pts, poindexter.WithBackend(poindexter.BackendLinear))

// Force Gonum (requires build tag)
kdt2, _ := poindexter.NewKDTree(pts, poindexter.WithBackend(poindexter.BackendGonum))
  • Supported metrics in the optimized backend: Euclidean (L2), Manhattan (L1), Chebyshev (L∞).
  • Cosine and Weighted-Cosine currently run on the Linear backend.
  • See the Performance guide for measured comparisons and when to choose which backend.

Choosing a metric (quick tips)

  • Euclidean (L2): smooth trade-offs across axes; solid default for blended preferences.
  • Manhattan (L1): emphasizes per-axis absolute differences; good when each unit of ping/hop matters equally.
  • Chebyshev (L∞): dominated by the worst axis; useful for strict thresholds (e.g., reject high hop count regardless of ping).
  • Cosine: angle-based for vector similarity; pair it with normalized/weighted features when direction matters more than magnitude.

See the multi-dimensional KDTree docs for end-to-end examples and weighting/normalization helpers: Multi-Dimensional KDTree (DHT).

Maintainer Makefile

The repository includes a maintainer-friendly Makefile that mirrors CI tasks and speeds up local workflows.

  • help — list available targets
  • tidy / tidy-check — run go mod tidy, optionally verify no diffs
  • fmt — format code (go fmt ./...)
  • vet — go vet ./...
  • build — go build ./...
  • examples — build all programs under examples/ (if present)
  • test — run unit tests
  • race — run tests with the race detector
  • cover — run tests with race + coverage (writes coverage.out and prints summary)
  • coverhtml — render HTML coverage report to coverage.html
  • coverfunc — print per-function coverage (from coverage.out)
  • cover-kdtree — print coverage details filtered to kdtree.go
  • fuzz — run Go fuzzing for a configurable time (default 10s) matching CI
  • bench — run benchmarks with -benchmem (writes bench.txt)
  • lint — run golangci-lint (if installed)
  • vuln — run govulncheck (if installed)
  • ci — CI-parity aggregate: tidy-check, build, vet, cover, examples, bench, lint, vuln
  • release — run GoReleaser with the canonical .goreleaser.yaml (for tagged releases)
  • snapshot — GoReleaser snapshot (no publish)
  • docs-serve — serve MkDocs locally on 127.0.0.1:8000
  • docs-build — build MkDocs site into site/

Quick usage:

  • See all targets:
make help
  • Fast local cycle:
make fmt
make vet
make test
  • CI-parity run (what GitHub Actions does, locally):
make ci
  • Coverage summary:
make cover
  • Generate HTML coverage report (writes coverage.html):
make coverhtml
  • Fuzz for 10 seconds (default):
make fuzz
  • Fuzz with a custom time (e.g., 30s):
make fuzz FUZZTIME=30s
  • Run benchmarks (writes bench.txt):
make bench
  • Build examples (if any under ./examples):
make examples
  • Serve docs locally (requires mkdocs-material):
make docs-serve

Configurable variables:

  • FUZZTIME (default 10s) — e.g. make fuzz FUZZTIME=30s
  • BENCHOUT (default bench.txt), COVEROUT (default coverage.out), COVERHTML (default coverage.html)
  • Tool commands are overridable via env: GO, GOLANGCI_LINT, GORELEASER, MKDOCS

Requirements for optional targets:

  • golangci-lint for make lint
  • golang.org/x/vuln/cmd/govulncheck for make vuln
  • goreleaser for make release / make snapshot
  • mkdocs + mkdocs-material for make docs-serve / make docs-build

See the full Makefile at the repo root for authoritative target definitions.

License

This project is licensed under the European Union Public Licence v1.2 (EUPL-1.2). See LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Coverage

  • CI produces coverage summaries as artifacts on every push/PR:
    • Default job: coverage-summary.md (from coverage.out)
    • Gonum-tag job: coverage-summary-gonum.md (from coverage-gonum.out)
  • Locally, you can generate and inspect coverage with the Makefile:
make cover         # runs tests with race + coverage and prints the total
make coverfunc     # prints per-function coverage
make cover-kdtree  # filters coverage to kdtree.go
make coverhtml     # writes coverage.html for visual inspection

Note: CI also uploads raw coverage profiles as artifacts (coverage.out, coverage-gonum.out).

Documentation

Overview

Package poindexter provides sorting utilities and a KDTree with simple nearest-neighbour queries. It also includes helper functions to build normalised, weighted KD points for 2D/3D/4D and arbitrary N‑D use-cases.

Distance metrics include Euclidean (L2), Manhattan (L1), Chebyshev (L∞), and Cosine/Weighted-Cosine for vector similarity.


Example (TiesBehavior)
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	// Two points equidistant from the query; tie ordering is arbitrary,
	// but distances are equal.
	pts := []poindexter.KDPoint[int]{
		{ID: "L", Coords: []float64{-1}},
		{ID: "R", Coords: []float64{+1}},
	}
	tr, _ := poindexter.NewKDTree(pts)
	ns, ds := tr.KNearest([]float64{0}, 2)
	_ = ns // neighbor order is unspecified
	fmt.Printf("equal=%.1f==%.1f? %v", ds[0], ds[1], ds[0] == ds[1])
}
Output:
equal=1.0==1.0? true


Constants

This section is empty.

Variables

var (
	// ErrEmptyPoints indicates that no points were provided to build a KDTree.
	ErrEmptyPoints = errors.New("kdtree: no points provided")
	// ErrZeroDim indicates that points or tree dimension must be at least 1.
	ErrZeroDim = errors.New("kdtree: points must have at least one dimension")
	// ErrDimMismatch indicates inconsistent dimensionality among points.
	ErrDimMismatch = errors.New("kdtree: inconsistent dimensionality in points")
	// ErrDuplicateID indicates a duplicate point ID was encountered.
	ErrDuplicateID = errors.New("kdtree: duplicate point ID")
	// ErrBackendUnavailable indicates that a requested backend cannot be used (e.g., not built/tagged).
	ErrBackendUnavailable = errors.New("kdtree: requested backend unavailable")
)
var (
	// ErrInvalidFeatures indicates that no features were provided or nil feature encountered.
	ErrInvalidFeatures = errors.New("kdtree: invalid features: provide at least one feature and ensure none are nil")
	// ErrInvalidWeights indicates weights length doesn't match features length.
	ErrInvalidWeights = errors.New("kdtree: invalid weights length; must match number of features")
	// ErrInvalidInvert indicates invert flags length doesn't match features length.
	ErrInvalidInvert = errors.New("kdtree: invalid invert length; must match number of features")
	// ErrStatsDimMismatch indicates NormStats dimensions do not match features length.
	ErrStatsDimMismatch = errors.New("kdtree: stats dimensionality mismatch")
)

Errors for helper builders.

Functions

func BinarySearch

func BinarySearch(data []int, target int) int

BinarySearch performs a binary search on a sorted slice of integers. Returns the index where target is found, or -1 if not found.

func BinarySearchStrings

func BinarySearchStrings(data []string, target string) int

BinarySearchStrings performs a binary search on a sorted slice of strings. Returns the index where target is found, or -1 if not found.

func ComputeTrustScore

func ComputeTrustScore(t TrustMetrics) float64

ComputeTrustScore calculates a composite trust score from trust metrics.

func Hello

func Hello(name string) string

Hello returns a greeting message.

func IsSorted

func IsSorted(data []int) bool

IsSorted checks if a slice of integers is sorted in ascending order.

func IsSortedFloat64s

func IsSortedFloat64s(data []float64) bool

IsSortedFloat64s checks if a slice of float64 values is sorted in ascending order.

func IsSortedStrings

func IsSortedStrings(data []string) bool

IsSortedStrings checks if a slice of strings is sorted in ascending order.

func NormalizePeerFeatures

func NormalizePeerFeatures(features []float64, ranges FeatureRanges) []float64

NormalizePeerFeatures normalizes peer features to [0,1] using provided ranges.

func PeerQualityScore

func PeerQualityScore(metrics NATRoutingMetrics, weights *QualityWeights) float64

PeerQualityScore computes a composite quality score for peer selection. Higher scores indicate better peers for routing. Weights can be customized; default weights emphasize latency and reliability.

func SortBy

func SortBy[T any](data []T, less func(i, j int) bool)

SortBy sorts a slice using a custom less function. The less function should return true if data[i] should come before data[j].

func SortByKey

func SortByKey[T any, K int | float64 | string](data []T, key func(T) K)

SortByKey sorts a slice by extracting a comparable key from each element. K is restricted to int, float64, or string.

func SortByKeyDescending

func SortByKeyDescending[T any, K int | float64 | string](data []T, key func(T) K)

SortByKeyDescending sorts a slice by extracting a comparable key from each element in descending order.

func SortFloat64s

func SortFloat64s(data []float64)

SortFloat64s sorts a slice of float64 values in ascending order in place.

func SortFloat64sDescending

func SortFloat64sDescending(data []float64)

SortFloat64sDescending sorts a slice of float64 values in descending order in place.

func SortInts

func SortInts(data []int)

SortInts sorts a slice of integers in ascending order in place.

func SortIntsDescending

func SortIntsDescending(data []int)

SortIntsDescending sorts a slice of integers in descending order in place.

func SortStrings

func SortStrings(data []string)

SortStrings sorts a slice of strings in ascending order in place.

func SortStringsDescending

func SortStringsDescending(data []string)

SortStringsDescending sorts a slice of strings in descending order in place.

func StandardFeatureLabels

func StandardFeatureLabels() []string

StandardFeatureLabels returns the labels for standard peer features.

func Version

func Version() string

Version returns the current version of the library.

func WeightedPeerFeatures

func WeightedPeerFeatures(normalized []float64, weights []float64) []float64

WeightedPeerFeatures applies per-feature weights after normalization.

Types

type ALIASRecord

type ALIASRecord struct {
	Target string `json:"target"`
}

ALIASRecord represents an ALIAS/ANAME record (provider-specific)

type AxisDistribution

type AxisDistribution struct {
	Axis  int               `json:"axis"`
	Name  string            `json:"name,omitempty"`
	Stats DistributionStats `json:"stats"`
}

AxisDistribution provides per-axis (feature) distribution analysis.

func ComputeAxisDistributions

func ComputeAxisDistributions[T any](points []KDPoint[T], axisNames []string) []AxisDistribution

ComputeAxisDistributions analyzes the distribution of values along each axis.

type AxisStats

type AxisStats struct {
	Min float64
	Max float64
}

AxisStats holds the min/max observed for a single axis.

type CAARecord

type CAARecord struct {
	Flag  uint8  `json:"flag"`
	Tag   string `json:"tag"` // "issue", "issuewild", "iodef"
	Value string `json:"value"`
}

CAARecord represents a CAA record

type ChebyshevDistance

type ChebyshevDistance struct{}

ChebyshevDistance implements the L-infinity (max) metric.

func (ChebyshevDistance) Distance

func (ChebyshevDistance) Distance(a, b []float64) float64

type CompleteDNSLookup

type CompleteDNSLookup struct {
	Domain       string     `json:"domain"`
	A            []string   `json:"a,omitempty"`
	AAAA         []string   `json:"aaaa,omitempty"`
	MX           []MXRecord `json:"mx,omitempty"`
	NS           []string   `json:"ns,omitempty"`
	TXT          []string   `json:"txt,omitempty"`
	CNAME        string     `json:"cname,omitempty"`
	SOA          *SOARecord `json:"soa,omitempty"`
	LookupTimeMs int64      `json:"lookupTimeMs"`
	Errors       []string   `json:"errors,omitempty"`
	Timestamp    time.Time  `json:"timestamp"`
}

CompleteDNSLookup contains all DNS records for a domain

func DNSLookupAll

func DNSLookupAll(domain string) CompleteDNSLookup

DNSLookupAll performs lookups for all common record types

func DNSLookupAllWithTimeout

func DNSLookupAllWithTimeout(domain string, timeout time.Duration) CompleteDNSLookup

DNSLookupAllWithTimeout performs lookups for all common record types with timeout

type CosineDistance

type CosineDistance struct{}

CosineDistance implements 1 - cosine similarity.

Distance is defined as 1 - (a·b)/(||a||*||b||). If both vectors are zero, distance is 0. If exactly one is zero, distance is 1. Numerical results are clamped to [0,2]. Note: For typical normalized/weighted feature vectors with non-negative entries, the value will be in [0,1]. Opposite vectors in general spaces can yield up to 2.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	a := []float64{1, 0}
	b := []float64{0, 1}
	d := poindexter.CosineDistance{}.Distance(a, b)
	fmt.Printf("%.0f", d)
}
Output:
1

func (CosineDistance) Distance

func (CosineDistance) Distance(a, b []float64) float64

type DNSKEYRecord

type DNSKEYRecord struct {
	Flags     uint16 `json:"flags"`
	Protocol  uint8  `json:"protocol"`
	Algorithm uint8  `json:"algorithm"`
	PublicKey string `json:"publicKey"`
}

DNSKEYRecord represents a DNSKEY record

type DNSLookupResult

type DNSLookupResult struct {
	Domain       string      `json:"domain"`
	QueryType    string      `json:"queryType"`
	Records      []DNSRecord `json:"records"`
	MXRecords    []MXRecord  `json:"mxRecords,omitempty"`
	SRVRecords   []SRVRecord `json:"srvRecords,omitempty"`
	SOARecord    *SOARecord  `json:"soaRecord,omitempty"`
	LookupTimeMs int64       `json:"lookupTimeMs"`
	Error        string      `json:"error,omitempty"`
	Timestamp    time.Time   `json:"timestamp"`
}

DNSLookupResult contains the results of a DNS lookup

func DNSLookup

func DNSLookup(domain string, recordType DNSRecordType) DNSLookupResult

DNSLookup performs a DNS lookup for the specified record type

func DNSLookupWithTimeout

func DNSLookupWithTimeout(domain string, recordType DNSRecordType, timeout time.Duration) DNSLookupResult

DNSLookupWithTimeout performs a DNS lookup with a custom timeout

func ReverseDNSLookup

func ReverseDNSLookup(ip string) DNSLookupResult

ReverseDNSLookup performs a reverse DNS lookup for an IP address

type DNSRecord

type DNSRecord struct {
	Type  DNSRecordType `json:"type"`
	Name  string        `json:"name"`
	Value string        `json:"value"`
	TTL   int           `json:"ttl,omitempty"`
}

DNSRecord represents a generic DNS record

type DNSRecordType

type DNSRecordType string

DNSRecordType represents DNS record types

const (
	// Standard record types
	DNSRecordA     DNSRecordType = "A"
	DNSRecordAAAA  DNSRecordType = "AAAA"
	DNSRecordMX    DNSRecordType = "MX"
	DNSRecordTXT   DNSRecordType = "TXT"
	DNSRecordNS    DNSRecordType = "NS"
	DNSRecordCNAME DNSRecordType = "CNAME"
	DNSRecordSOA   DNSRecordType = "SOA"
	DNSRecordPTR   DNSRecordType = "PTR"
	DNSRecordSRV   DNSRecordType = "SRV"
	DNSRecordCAA   DNSRecordType = "CAA"

	// Additional record types (ClouDNS and others)
	DNSRecordALIAS  DNSRecordType = "ALIAS"  // Virtual ANAME record (ClouDNS, Route53, etc.)
	DNSRecordRP     DNSRecordType = "RP"     // Responsible Person
	DNSRecordSSHFP  DNSRecordType = "SSHFP"  // SSH Fingerprint
	DNSRecordTLSA   DNSRecordType = "TLSA"   // DANE TLS Authentication
	DNSRecordDS     DNSRecordType = "DS"     // DNSSEC Delegation Signer
	DNSRecordDNSKEY DNSRecordType = "DNSKEY" // DNSSEC Key
	DNSRecordNAPTR  DNSRecordType = "NAPTR"  // Naming Authority Pointer
	DNSRecordLOC    DNSRecordType = "LOC"    // Geographic Location
	DNSRecordHINFO  DNSRecordType = "HINFO"  // Host Information
	DNSRecordCERT   DNSRecordType = "CERT"   // Certificate record
	DNSRecordSMIMEA DNSRecordType = "SMIMEA" // S/MIME Certificate Association
	DNSRecordWR     DNSRecordType = "WR"     // Web Redirect (ClouDNS specific)
	DNSRecordSPF    DNSRecordType = "SPF"    // Sender Policy Framework (legacy, use TXT)
)

func GetAllDNSRecordTypes

func GetAllDNSRecordTypes() []DNSRecordType

GetAllDNSRecordTypes returns all supported record types

func GetCommonDNSRecordTypes

func GetCommonDNSRecordTypes() []DNSRecordType

GetCommonDNSRecordTypes returns only commonly used record types

type DNSRecordTypeInfo

type DNSRecordTypeInfo struct {
	Type        DNSRecordType `json:"type"`
	Name        string        `json:"name"`
	Description string        `json:"description"`
	RFC         string        `json:"rfc,omitempty"`
	Common      bool          `json:"common"` // Commonly used record type
}

DNSRecordTypeInfo provides metadata about a DNS record type

func GetDNSRecordTypeInfo

func GetDNSRecordTypeInfo() []DNSRecordTypeInfo

GetDNSRecordTypeInfo returns metadata for all supported DNS record types

type DSRecord

type DSRecord struct {
	KeyTag     uint16 `json:"keyTag"`
	Algorithm  uint8  `json:"algorithm"`
	DigestType uint8  `json:"digestType"`
	Digest     string `json:"digest"`
}

DSRecord represents a DS (DNSSEC Delegation Signer) record

type DistanceMetric

type DistanceMetric interface {
	Distance(a, b []float64) float64
}

DistanceMetric defines a metric over R^n.

type DistributionStats

type DistributionStats struct {
	Count      int       `json:"count"`
	Min        float64   `json:"min"`
	Max        float64   `json:"max"`
	Mean       float64   `json:"mean"`
	Median     float64   `json:"median"`
	StdDev     float64   `json:"stdDev"`
	P25        float64   `json:"p25"` // 25th percentile
	P75        float64   `json:"p75"` // 75th percentile
	P90        float64   `json:"p90"` // 90th percentile
	P99        float64   `json:"p99"` // 99th percentile
	Variance   float64   `json:"variance"`
	Skewness   float64   `json:"skewness"`
	SampleSize int       `json:"sampleSize"`
	ComputedAt time.Time `json:"computedAt"`
}

DistributionStats provides statistical analysis of distances in query results.

func ComputeDistributionStats

func ComputeDistributionStats(distances []float64) DistributionStats

ComputeDistributionStats calculates distribution statistics from a slice of distances.

type EuclideanDistance

type EuclideanDistance struct{}

EuclideanDistance implements the L2 metric.

func (EuclideanDistance) Distance

func (EuclideanDistance) Distance(a, b []float64) float64

type ExternalToolLinks

type ExternalToolLinks struct {
	// Target being analyzed
	Target string `json:"target"`
	Type   string `json:"type"` // "domain", "ip", "email"

	// MXToolbox links
	MXToolboxDNS       string `json:"mxtoolboxDns,omitempty"`
	MXToolboxMX        string `json:"mxtoolboxMx,omitempty"`
	MXToolboxBlacklist string `json:"mxtoolboxBlacklist,omitempty"`
	MXToolboxSMTP      string `json:"mxtoolboxSmtp,omitempty"`
	MXToolboxSPF       string `json:"mxtoolboxSpf,omitempty"`
	MXToolboxDMARC     string `json:"mxtoolboxDmarc,omitempty"`
	MXToolboxDKIM      string `json:"mxtoolboxDkim,omitempty"`
	MXToolboxHTTP      string `json:"mxtoolboxHttp,omitempty"`
	MXToolboxHTTPS     string `json:"mxtoolboxHttps,omitempty"`
	MXToolboxPing      string `json:"mxtoolboxPing,omitempty"`
	MXToolboxTrace     string `json:"mxtoolboxTrace,omitempty"`
	MXToolboxWhois     string `json:"mxtoolboxWhois,omitempty"`
	MXToolboxASN       string `json:"mxtoolboxAsn,omitempty"`

	// DNSChecker links
	DNSCheckerDNS         string `json:"dnscheckerDns,omitempty"`
	DNSCheckerPropagation string `json:"dnscheckerPropagation,omitempty"`

	// Other tools
	WhoIs          string `json:"whois,omitempty"`
	ViewDNS        string `json:"viewdns,omitempty"`
	IntoDNS        string `json:"intodns,omitempty"`
	DNSViz         string `json:"dnsviz,omitempty"`
	SecurityTrails string `json:"securitytrails,omitempty"`
	Shodan         string `json:"shodan,omitempty"`
	Censys         string `json:"censys,omitempty"`
	BuiltWith      string `json:"builtwith,omitempty"`
	SSLLabs        string `json:"ssllabs,omitempty"`
	HSTSPreload    string `json:"hstsPreload,omitempty"`
	Hardenize      string `json:"hardenize,omitempty"`

	// IP-specific tools
	IPInfo      string `json:"ipinfo,omitempty"`
	AbuseIPDB   string `json:"abuseipdb,omitempty"`
	VirusTotal  string `json:"virustotal,omitempty"`
	ThreatCrowd string `json:"threatcrowd,omitempty"`

	// Email-specific tools
	MailTester string `json:"mailtester,omitempty"`
	LearnDMARC string `json:"learndmarc,omitempty"`
}

ExternalToolLinks contains links to external DNS/network analysis tools

func GetExternalToolLinks

func GetExternalToolLinks(domain string) ExternalToolLinks

GetExternalToolLinks generates links to external analysis tools for a domain

func GetExternalToolLinksEmail

func GetExternalToolLinksEmail(emailOrDomain string) ExternalToolLinks

GetExternalToolLinksEmail generates links for email-related checks

func GetExternalToolLinksIP

func GetExternalToolLinksIP(ip string) ExternalToolLinks

GetExternalToolLinksIP generates links to external analysis tools for an IP

type FeatureRanges

type FeatureRanges struct {
	Ranges []AxisStats `json:"ranges"`
}

FeatureRanges defines min/max ranges for feature normalization.

func DefaultPeerFeatureRanges

func DefaultPeerFeatureRanges() FeatureRanges

DefaultPeerFeatureRanges returns sensible default ranges for peer features.

type FeatureVector

type FeatureVector struct {
	PeerID   string    `json:"peerId"`
	Features []float64 `json:"features"`
	Labels   []string  `json:"labels,omitempty"` // Optional feature names
}

FeatureVector represents a normalized feature vector for a peer. This is the core structure for KD-Tree based peer selection.

type KDBackend

type KDBackend string

KDBackend selects the internal engine used by KDTree.

const (
	BackendLinear KDBackend = "linear"
	BackendGonum  KDBackend = "gonum"
)

type KDOption

type KDOption func(*kdOptions)

KDOption configures KDTree construction (non-generic to allow inference).

func WithBackend

func WithBackend(b KDBackend) KDOption

WithBackend selects the internal KDTree backend ("linear" or "gonum"). Default is linear. If the requested backend is unavailable (e.g., gonum build tag not enabled), the constructor will silently fall back to the linear backend.

func WithMetric

func WithMetric(m DistanceMetric) KDOption

WithMetric sets the distance metric for the KDTree.

type KDPoint

type KDPoint[T any] struct {
	ID     string
	Coords []float64
	Value  T
}

KDPoint represents a point with coordinates and an attached payload/value. ID should be unique within a tree to enable O(1) deletes by ID. Coords must all have the same dimensionality within a given KDTree.

func Build2D

func Build2D[T any](items []T, id func(T) string, f1, f2 func(T) float64, weights [2]float64, invert [2]bool) ([]KDPoint[T], error)

Build2D constructs normalised-and-weighted KD points from items using two feature extractors.

  • id: function to provide a stable string ID (can return "" if you don't need DeleteByID)
  • f1, f2: feature extractors (raw values)
  • weights: per-axis weights applied after normalization
  • invert: per-axis flags; if true, the axis is inverted (1-norm) so that higher raw values become lower cost

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	type rec struct{ ping, hops float64 }
	items := []rec{{ping: 20, hops: 3}, {ping: 30, hops: 2}, {ping: 15, hops: 4}}
	weights := [2]float64{1.0, 1.0}
	invert := [2]bool{false, false}
	pts, _ := poindexter.Build2D(items,
		func(r rec) string { return "" },
		func(r rec) float64 { return r.ping },
		func(r rec) float64 { return r.hops },
		weights, invert,
	)
	tr, _ := poindexter.NewKDTree(pts, poindexter.WithMetric(poindexter.ManhattanDistance{}))
	_, _, _ = tr.Nearest([]float64{0, 0})
	fmt.Printf("dim=%d len=%d", tr.Dim(), tr.Len())
}
Output:
dim=2 len=3

func Build2DWithStats

func Build2DWithStats[T any](items []T, id func(T) string, f1, f2 func(T) float64, weights [2]float64, invert [2]bool, stats NormStats) ([]KDPoint[T], error)

Build2DWithStats builds points using provided normalisation stats.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	type rec struct{ ping, hops float64 }
	items := []rec{{20, 3}, {30, 2}, {15, 4}}
	weights := [2]float64{1.0, 1.0}
	invert := [2]bool{false, false}
	stats := poindexter.ComputeNormStats2D(items,
		func(r rec) float64 { return r.ping },
		func(r rec) float64 { return r.hops },
	)
	pts, _ := poindexter.Build2DWithStats(items,
		func(r rec) string { return "" },
		func(r rec) float64 { return r.ping },
		func(r rec) float64 { return r.hops },
		weights, invert, stats,
	)
	tr, _ := poindexter.NewKDTree(pts)
	fmt.Printf("dim=%d len=%d", tr.Dim(), tr.Len())
}
Output:
dim=2 len=3

func Build3D

func Build3D[T any](items []T, id func(T) string, f1, f2, f3 func(T) float64, weights [3]float64, invert [3]bool) ([]KDPoint[T], error)

Build3D constructs normalised-and-weighted KD points using three feature extractors.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	type rec struct{ x, y, z float64 }
	items := []rec{{0, 0, 0}, {1, 1, 1}}
	weights := [3]float64{1, 1, 1}
	invert := [3]bool{false, false, false}
	pts, _ := poindexter.Build3D(items,
		func(r rec) string { return "" },
		func(r rec) float64 { return r.x },
		func(r rec) float64 { return r.y },
		func(r rec) float64 { return r.z },
		weights, invert,
	)
	tr, _ := poindexter.NewKDTree(pts)
	fmt.Println(tr.Dim())
}
Output:
3

func Build3DWithStats

func Build3DWithStats[T any](items []T, id func(T) string, f1, f2, f3 func(T) float64, weights [3]float64, invert [3]bool, stats NormStats) ([]KDPoint[T], error)

Build3DWithStats builds points using provided normalisation stats.

func Build4D

func Build4D[T any](items []T, id func(T) string, f1, f2, f3, f4 func(T) float64, weights [4]float64, invert [4]bool) ([]KDPoint[T], error)

Build4D constructs normalised-and-weighted KD points using four feature extractors.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	type rec struct{ a, b, c, d float64 }
	items := []rec{{0, 0, 0, 0}, {1, 1, 1, 1}}
	weights := [4]float64{1, 1, 1, 1}
	invert := [4]bool{false, false, false, false}
	pts, _ := poindexter.Build4D(items,
		func(r rec) string { return "" },
		func(r rec) float64 { return r.a },
		func(r rec) float64 { return r.b },
		func(r rec) float64 { return r.c },
		func(r rec) float64 { return r.d },
		weights, invert,
	)
	tr, _ := poindexter.NewKDTree(pts)
	fmt.Println(tr.Dim())
}
Output:
4

func Build4DWithStats

func Build4DWithStats[T any](items []T, id func(T) string, f1, f2, f3, f4 func(T) float64, weights [4]float64, invert [4]bool, stats NormStats) ([]KDPoint[T], error)

Build4DWithStats builds points using provided normalisation stats.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	type rec struct{ a, b, c, d float64 }
	items := []rec{{0, 0, 0, 0}, {1, 1, 1, 1}}
	weights := [4]float64{1, 1, 1, 1}
	invert := [4]bool{false, false, false, false}
	stats := poindexter.ComputeNormStats4D(items,
		func(r rec) float64 { return r.a },
		func(r rec) float64 { return r.b },
		func(r rec) float64 { return r.c },
		func(r rec) float64 { return r.d },
	)
	pts, _ := poindexter.Build4DWithStats(items,
		func(r rec) string { return "" },
		func(r rec) float64 { return r.a },
		func(r rec) float64 { return r.b },
		func(r rec) float64 { return r.c },
		func(r rec) float64 { return r.d },
		weights, invert, stats,
	)
	tr, _ := poindexter.NewKDTree(pts)
	fmt.Println(tr.Dim())
}
Output:
4

func BuildND

func BuildND[T any](items []T, id func(T) string, features []func(T) float64, weights []float64, invert []bool) ([]KDPoint[T], error)

BuildND constructs normalised-and-weighted KD points from an arbitrary number of features. Features are min-max normalised per axis over the provided items, optionally inverted, then multiplied by per-axis weights.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	type rec struct{ a, b, c float64 }
	items := []rec{{0, 0, 0}, {1, 2, 3}, {0.5, 1, 1.5}}
	features := []func(rec) float64{
		func(r rec) float64 { return r.a },
		func(r rec) float64 { return r.b },
		func(r rec) float64 { return r.c },
	}
	weights := []float64{1, 0.5, 2}
	invert := []bool{false, false, false}
	pts, _ := poindexter.BuildND(items, func(r rec) string { return "" }, features, weights, invert)
	tr, _ := poindexter.NewKDTree(pts)
	fmt.Printf("dim=%d len=%d", tr.Dim(), tr.Len())
}
Output:
dim=3 len=3

func BuildNDNoErr

func BuildNDNoErr[T any](items []T, id func(T) string, features []func(T) float64, weights []float64, invert []bool) []KDPoint[T]

BuildNDNoErr constructs normalised-and-weighted KD points like BuildND but never returns an error. It performs no input validation beyond basic length checks and will propagate NaN/Inf values from feature extractors into the resulting coordinates. Use it when you control the inputs and want a simpler call signature.
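The min-max-normalise, invert, and weight pipeline that the BuildND family describes can be sketched in plain Go. This is a simplified standalone version of the idea, not the package's implementation:

```go
package main

import "fmt"

// normalize min-max normalises each axis over all rows, optionally
// inverts it (so "higher is better" becomes "lower is better"), then
// scales it by a per-axis weight. Constant axes normalise to 0.
func normalize(rows [][]float64, weights []float64, invert []bool) [][]float64 {
	if len(rows) == 0 {
		return nil
	}
	dim := len(rows[0])
	lo := append([]float64(nil), rows[0]...)
	hi := append([]float64(nil), rows[0]...)
	for _, r := range rows {
		for i, v := range r {
			if v < lo[i] {
				lo[i] = v
			}
			if v > hi[i] {
				hi[i] = v
			}
		}
	}
	out := make([][]float64, len(rows))
	for j, r := range rows {
		c := make([]float64, dim)
		for i, v := range r {
			n := 0.0
			if span := hi[i] - lo[i]; span > 0 {
				n = (v - lo[i]) / span
			}
			if invert[i] {
				n = 1 - n
			}
			c[i] = n * weights[i]
		}
		out[j] = c
	}
	return out
}

func main() {
	rows := [][]float64{{0, 10}, {5, 20}, {10, 30}}
	fmt.Println(normalize(rows, []float64{1, 2}, []bool{false, true})) // [[0 2] [0.5 1] [1 0]]
}
```

The WithStats variants differ only in where lo/hi come from: precomputed NormStats instead of a scan over the same items.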

func BuildNDWithStats

func BuildNDWithStats[T any](items []T, id func(T) string, features []func(T) float64, weights []float64, invert []bool, stats NormStats) ([]KDPoint[T], error)

BuildNDWithStats builds points using provided normalisation stats.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	type rec struct{ a, b float64 }
	items := []rec{{0, 0}, {1, 2}, {0.5, 1}}
	features := []func(rec) float64{
		func(r rec) float64 { return r.a },
		func(r rec) float64 { return r.b },
	}
	stats, _ := poindexter.ComputeNormStatsND(items, features)
	weights := []float64{1, 0.5}
	invert := []bool{false, false}
	pts, _ := poindexter.BuildNDWithStats(items, func(r rec) string { return "" }, features, weights, invert, stats)
	tr, _ := poindexter.NewKDTree(pts, poindexter.WithMetric(poindexter.CosineDistance{}))
	fmt.Printf("dim=%d len=%d", tr.Dim(), tr.Len())
}
Output:
dim=2 len=3

type KDTree

type KDTree[T any] struct {
	// contains filtered or unexported fields
}

KDTree is a lightweight wrapper providing nearest-neighbor operations.

Complexity: queries are O(n) linear scans in the current implementation. Inserts are O(1) amortized; deletes by ID are O(1) via swap-delete, which does not preserve point order.

Concurrency: KDTree is not safe for concurrent mutation. Guard it with a mutex, or share immutable snapshots for read-mostly workloads.
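The swap-delete strategy mentioned above can be sketched in plain Go (a hypothetical helper, not the package's code):

```go
package main

import "fmt"

// deleteAt removes the element at index i in O(1) by swapping in the
// last element and truncating the slice — element order is not preserved.
func deleteAt(pts []string, i int) []string {
	last := len(pts) - 1
	pts[i] = pts[last]
	return pts[:last]
}

func main() {
	pts := []string{"A", "B", "C", "D"}
	pts = deleteAt(pts, 1)  // remove "B"; "D" takes its slot
	fmt.Println(pts)        // [A D C]
}
```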

This type is designed to be easily swappable with gonum.org/v1/gonum/spatial/kdtree in the future without breaking the public API.

func NewKDTree

func NewKDTree[T any](pts []KDPoint[T], opts ...KDOption) (*KDTree[T], error)

NewKDTree builds a KDTree from the given points. All points must have the same dimensionality (>0).

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	pts := []poindexter.KDPoint[string]{
		{ID: "A", Coords: []float64{0, 0}, Value: "alpha"},
		{ID: "B", Coords: []float64{1, 0}, Value: "bravo"},
	}
	tr, _ := poindexter.NewKDTree(pts)
	p, _, _ := tr.Nearest([]float64{0.2, 0})
	fmt.Println(p.ID)
}
Output:
A

func NewKDTreeFromDim

func NewKDTreeFromDim[T any](dim int, opts ...KDOption) (*KDTree[T], error)

NewKDTreeFromDim constructs an empty KDTree with the specified dimension. Call Insert to add points after construction.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	// Construct an empty 2D tree, insert a point, then query.
	tr, _ := poindexter.NewKDTreeFromDim[string](2)
	tr.Insert(poindexter.KDPoint[string]{ID: "A", Coords: []float64{0.1, 0.2}, Value: "alpha"})
	p, _, ok := tr.Nearest([]float64{0, 0})
	fmt.Printf("ok=%v id=%s dim=%d len=%d", ok, p.ID, tr.Dim(), tr.Len())
}
Output:
ok=true id=A dim=2 len=1

func (*KDTree[T]) Analytics

func (t *KDTree[T]) Analytics() *TreeAnalytics

Analytics returns the tree analytics tracker. Returns nil if analytics tracking is disabled.

func (*KDTree[T]) Backend

func (t *KDTree[T]) Backend() KDBackend

Backend returns the active backend type.

func (*KDTree[T]) ComputeDistanceDistribution

func (t *KDTree[T]) ComputeDistanceDistribution(axisNames []string) []AxisDistribution

ComputeDistanceDistribution analyzes the distribution of current point coordinates.

func (*KDTree[T]) DeleteByID

func (t *KDTree[T]) DeleteByID(id string) bool

DeleteByID removes a point by its ID. It returns false if the point is not found or the ID is empty.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	pts := []poindexter.KDPoint[string]{
		{ID: "A", Coords: []float64{0}, Value: "a"},
	}
	tr, _ := poindexter.NewKDTree(pts)
	tr.Insert(poindexter.KDPoint[string]{ID: "Z", Coords: []float64{0.1}, Value: "z"})
	p, _, _ := tr.Nearest([]float64{0.09})
	fmt.Println(p.ID)
	tr.DeleteByID("Z")
	p2, _, _ := tr.Nearest([]float64{0.09})
	fmt.Println(p2.ID)
}
Output:
Z
A

func (*KDTree[T]) Dim

func (t *KDTree[T]) Dim() int

Dim returns the number of dimensions.

func (*KDTree[T]) GetAnalyticsSnapshot

func (t *KDTree[T]) GetAnalyticsSnapshot() TreeAnalyticsSnapshot

GetAnalyticsSnapshot returns a point-in-time snapshot of tree analytics.

func (*KDTree[T]) GetPeerStats

func (t *KDTree[T]) GetPeerStats() []PeerStats

GetPeerStats returns per-peer selection statistics.

func (*KDTree[T]) GetTopPeers

func (t *KDTree[T]) GetTopPeers(n int) []PeerStats

GetTopPeers returns the top N most frequently selected peers.

func (*KDTree[T]) Insert

func (t *KDTree[T]) Insert(p KDPoint[T]) bool

Insert adds a point. It returns false if the point's dimensionality does not match the tree's, or if a point with the same ID already exists.

func (*KDTree[T]) KNearest

func (t *KDTree[T]) KNearest(query []float64, k int) ([]KDPoint[T], []float64)

KNearest returns up to k nearest neighbors to the query in ascending distance order. If multiple points are at the same distance, tie ordering is arbitrary and not stable between calls.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	pts := []poindexter.KDPoint[int]{
		{ID: "a", Coords: []float64{0}, Value: 0},
		{ID: "b", Coords: []float64{1}, Value: 0},
		{ID: "c", Coords: []float64{2}, Value: 0},
	}
	tr, _ := poindexter.NewKDTree(pts)
	ns, ds := tr.KNearest([]float64{0.6}, 2)
	fmt.Printf("%s %.1f | %s %.1f", ns[0].ID, ds[0], ns[1].ID, ds[1])
}
Output:
b 0.4 | a 0.6

func (*KDTree[T]) Len

func (t *KDTree[T]) Len() int

Len returns the number of points in the tree.

func (*KDTree[T]) Nearest

func (t *KDTree[T]) Nearest(query []float64) (KDPoint[T], float64, bool)

Nearest returns the closest point to the query, along with its distance. ok is false if the tree is empty or the query dimensionality does not match Dim().

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	pts := []poindexter.KDPoint[int]{
		{ID: "x", Coords: []float64{0, 0}, Value: 1},
		{ID: "y", Coords: []float64{2, 0}, Value: 2},
	}
	tr, _ := poindexter.NewKDTree(pts, poindexter.WithMetric(poindexter.EuclideanDistance{}))
	p, d, ok := tr.Nearest([]float64{1.4, 0})
	fmt.Printf("ok=%v id=%s d=%.1f", ok, p.ID, d)
}
Output:
ok=true id=y d=0.6

func (*KDTree[T]) PeerAnalytics

func (t *KDTree[T]) PeerAnalytics() *PeerAnalytics

PeerAnalytics returns the peer analytics tracker. Returns nil if peer analytics tracking is disabled.

func (*KDTree[T]) Points

func (t *KDTree[T]) Points() []KDPoint[T]

Points returns a copy of all points in the tree. This is useful for analytics and export operations.

func (*KDTree[T]) Radius

func (t *KDTree[T]) Radius(query []float64, r float64) ([]KDPoint[T], []float64)

Radius returns points within radius r (inclusive) from the query, sorted by distance.

Example
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	pts := []poindexter.KDPoint[int]{
		{ID: "a", Coords: []float64{0}, Value: 0},
		{ID: "b", Coords: []float64{1}, Value: 0},
		{ID: "c", Coords: []float64{2}, Value: 0},
	}
	tr, _ := poindexter.NewKDTree(pts)
	within, _ := tr.Radius([]float64{0}, 1.0)
	fmt.Printf("%d %s %s", len(within), within[0].ID, within[1].ID)
}
Output:
2 a b
Example (None)
package main

import (
	"fmt"

	poindexter "github.com/Snider/Poindexter"
)

func main() {
	// Radius query that yields no matches.
	pts := []poindexter.KDPoint[int]{
		{ID: "a", Coords: []float64{10}},
		{ID: "b", Coords: []float64{20}},
	}
	tr, _ := poindexter.NewKDTree(pts)
	within, _ := tr.Radius([]float64{0}, 5)
	fmt.Println(len(within))
}
Output:
0

func (*KDTree[T]) ResetAnalytics

func (t *KDTree[T]) ResetAnalytics()

ResetAnalytics clears all analytics data.

type LOCRecord

type LOCRecord struct {
	Latitude  float64 `json:"latitude"`
	Longitude float64 `json:"longitude"`
	Altitude  float64 `json:"altitude"`
	Size      float64 `json:"size"`
	HPrecis   float64 `json:"hPrecision"`
	VPrecis   float64 `json:"vPrecision"`
}

LOCRecord represents a LOC (Location) record

type MXRecord

type MXRecord struct {
	Host     string `json:"host"`
	Priority uint16 `json:"priority"`
}

MXRecord represents an MX record with priority

type ManhattanDistance

type ManhattanDistance struct{}

ManhattanDistance implements the L1 metric.

func (ManhattanDistance) Distance

func (ManhattanDistance) Distance(a, b []float64) float64

type NAPTRRecord

type NAPTRRecord struct {
	Order       uint16 `json:"order"`
	Preference  uint16 `json:"preference"`
	Flags       string `json:"flags"`
	Service     string `json:"service"`
	Regexp      string `json:"regexp"`
	Replacement string `json:"replacement"`
}

NAPTRRecord represents a NAPTR record

type NATRoutingMetrics

type NATRoutingMetrics struct {
	// Connectivity score (0-1): higher means better reachability
	ConnectivityScore float64 `json:"connectivityScore"`
	// Symmetry score (0-1): higher means more symmetric NAT (easier to traverse)
	SymmetryScore float64 `json:"symmetryScore"`
	// Relay requirement probability (0-1): likelihood peer needs relay
	RelayProbability float64 `json:"relayProbability"`
	// Direct connection success rate (historical)
	DirectSuccessRate float64 `json:"directSuccessRate"`
	// Average RTT in milliseconds
	AvgRTTMs float64 `json:"avgRttMs"`
	// Jitter (RTT variance) in milliseconds
	JitterMs float64 `json:"jitterMs"`
	// Packet loss rate (0-1)
	PacketLossRate float64 `json:"packetLossRate"`
	// Bandwidth estimate in Mbps
	BandwidthMbps float64 `json:"bandwidthMbps"`
	// NAT type classification
	NATType string `json:"natType"`
	// Last probe timestamp
	LastProbeAt time.Time `json:"lastProbeAt"`
}

NATRoutingMetrics provides metrics specifically for NAT traversal routing decisions.

type NATTypeClassification

type NATTypeClassification string

NATTypeClassification enumerates common NAT types for routing decisions.

const (
	NATTypeOpen           NATTypeClassification = "open"            // No NAT / Public IP
	NATTypeFullCone       NATTypeClassification = "full_cone"       // Easy to traverse
	NATTypeRestrictedCone NATTypeClassification = "restricted_cone" // Moderate difficulty
	NATTypePortRestricted NATTypeClassification = "port_restricted" // Harder to traverse
	NATTypeSymmetric      NATTypeClassification = "symmetric"       // Hardest to traverse
	NATTypeSymmetricUDP   NATTypeClassification = "symmetric_udp"   // UDP-only symmetric
	NATTypeUnknown        NATTypeClassification = "unknown"         // Not yet classified
	NATTypeBehindCGNAT    NATTypeClassification = "cgnat"           // Carrier-grade NAT
	NATTypeFirewalled     NATTypeClassification = "firewalled"      // Blocked by firewall
	NATTypeRelayRequired  NATTypeClassification = "relay_required"  // Must use relay
)

type NetworkHealthSummary

type NetworkHealthSummary struct {
	TotalPeers        int       `json:"totalPeers"`
	ActivePeers       int       `json:"activePeers"`    // Peers queried recently
	HealthyPeers      int       `json:"healthyPeers"`   // Peers with good metrics
	DegradedPeers     int       `json:"degradedPeers"`  // Peers with some issues
	UnhealthyPeers    int       `json:"unhealthyPeers"` // Peers with poor metrics
	AvgLatencyMs      float64   `json:"avgLatencyMs"`
	MedianLatencyMs   float64   `json:"medianLatencyMs"`
	AvgTrustScore     float64   `json:"avgTrustScore"`
	AvgQualityScore   float64   `json:"avgQualityScore"`
	DirectConnectRate float64   `json:"directConnectRate"` // % of peers directly reachable
	RelayDependency   float64   `json:"relayDependency"`   // % of peers needing relay
	ComputedAt        time.Time `json:"computedAt"`
}

NetworkHealthSummary aggregates overall network health metrics.

type NormStats

type NormStats struct {
	Stats []AxisStats
}

NormStats holds per-axis normalisation statistics. For D dimensions, Stats has length D.

func ComputeNormStats2D

func ComputeNormStats2D[T any](items []T, f1, f2 func(T) float64) NormStats

ComputeNormStats2D computes per-axis min/max for two features.

func ComputeNormStats3D

func ComputeNormStats3D[T any](items []T, f1, f2, f3 func(T) float64) NormStats

ComputeNormStats3D computes per-axis min/max for three features.

func ComputeNormStats4D

func ComputeNormStats4D[T any](items []T, f1, f2, f3, f4 func(T) float64) NormStats

ComputeNormStats4D computes per-axis min/max for four features.

func ComputeNormStatsND

func ComputeNormStatsND[T any](items []T, features []func(T) float64) (NormStats, error)

ComputeNormStatsND computes per-axis min/max for an arbitrary number of features.

type ParsedDomainInfo

type ParsedDomainInfo struct {
	Domain           string   `json:"domain"`
	Registrar        string   `json:"registrar,omitempty"`
	RegistrationDate string   `json:"registrationDate,omitempty"`
	ExpirationDate   string   `json:"expirationDate,omitempty"`
	UpdatedDate      string   `json:"updatedDate,omitempty"`
	Status           []string `json:"status,omitempty"`
	Nameservers      []string `json:"nameservers,omitempty"`
	DNSSEC           bool     `json:"dnssec"`
}

ParsedDomainInfo provides a simplified view of domain information

func ParseRDAPResponse

func ParseRDAPResponse(resp RDAPResponse) ParsedDomainInfo

ParseRDAPResponse extracts key information from an RDAP response

type PeerAnalytics

type PeerAnalytics struct {
	// contains filtered or unexported fields
}

PeerAnalytics tracks per-peer selection statistics for NAT routing optimization.

func NewPeerAnalytics

func NewPeerAnalytics() *PeerAnalytics

NewPeerAnalytics creates a new peer analytics tracker.

func (*PeerAnalytics) GetAllPeerStats

func (p *PeerAnalytics) GetAllPeerStats() []PeerStats

GetAllPeerStats returns statistics for all tracked peers.

func (*PeerAnalytics) GetPeerStats

func (p *PeerAnalytics) GetPeerStats(peerID string) PeerStats

GetPeerStats returns statistics for a specific peer.

func (*PeerAnalytics) GetTopPeers

func (p *PeerAnalytics) GetTopPeers(n int) []PeerStats

GetTopPeers returns the top N most frequently selected peers.

func (*PeerAnalytics) RecordSelection

func (p *PeerAnalytics) RecordSelection(peerID string, distance float64)

RecordSelection records that a peer was selected/returned in a query result.

func (*PeerAnalytics) Reset

func (p *PeerAnalytics) Reset()

Reset clears all peer analytics data.

type PeerStats

type PeerStats struct {
	PeerID         string    `json:"peerId"`
	SelectionCount int64     `json:"selectionCount"`
	AvgDistance    float64   `json:"avgDistance"`
	LastSelectedAt time.Time `json:"lastSelectedAt"`
}

PeerStats holds statistics for a single peer.

type QualityWeights

type QualityWeights struct {
	Latency       float64 `json:"latency"`
	Jitter        float64 `json:"jitter"`
	PacketLoss    float64 `json:"packetLoss"`
	Bandwidth     float64 `json:"bandwidth"`
	Connectivity  float64 `json:"connectivity"`
	Symmetry      float64 `json:"symmetry"`
	DirectSuccess float64 `json:"directSuccess"`
	RelayPenalty  float64 `json:"relayPenalty"`
	NATType       float64 `json:"natType"`
}

QualityWeights configures the importance of each metric in peer selection.

func DefaultQualityWeights

func DefaultQualityWeights() QualityWeights

DefaultQualityWeights returns sensible defaults for peer selection.

func (QualityWeights) Total

func (w QualityWeights) Total() float64

Total returns the sum of all weights for normalization.
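Dividing a weighted sum by Total() keeps the combined score in the same 0-1 range regardless of the weights chosen. A small sketch of that normalisation (the combination formula is illustrative, not the package's scoring code):

```go
package main

import "fmt"

// score combines per-metric values (each already scaled to 0-1, higher
// = better) using weights, then divides by the weight sum so the result
// stays in [0, 1] for any weight configuration.
func score(values, weights []float64) float64 {
	var sum, total float64
	for i, w := range weights {
		sum += values[i] * w
		total += w
	}
	if total == 0 {
		return 0
	}
	return sum / total
}

func main() {
	values := []float64{0.9, 0.5} // e.g. connectivity, direct-success
	weights := []float64{2, 1}
	fmt.Printf("%.2f\n", score(values, weights)) // 0.77
}
```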

type RDAPBootstrapRegistry

type RDAPBootstrapRegistry struct {
	Services [][]interface{} `json:"services"`
	Version  string          `json:"version"`
}

RDAPBootstrapRegistry holds the RDAP bootstrap data

type RDAPEntity

type RDAPEntity struct {
	Handle     string       `json:"handle,omitempty"`
	Roles      []string     `json:"roles,omitempty"`
	VCardArray []any        `json:"vcardArray,omitempty"`
	Entities   []RDAPEntity `json:"entities,omitempty"`
	Events     []RDAPEvent  `json:"events,omitempty"`
	Links      []RDAPLink   `json:"links,omitempty"`
	Remarks    []RDAPRemark `json:"remarks,omitempty"`
}

RDAPEntity represents an entity (registrar, registrant, etc.)

type RDAPEvent

type RDAPEvent struct {
	EventAction string `json:"eventAction"`
	EventDate   string `json:"eventDate"`
	EventActor  string `json:"eventActor,omitempty"`
}

RDAPEvent represents an RDAP event (registration, expiration, etc.)

type RDAPIPs

type RDAPIPs struct {
	V4 []string `json:"v4,omitempty"`
	V6 []string `json:"v6,omitempty"`
}

RDAPIPs represents IP addresses for a nameserver

type RDAPLink

type RDAPLink struct {
	Value string `json:"value,omitempty"`
	Rel   string `json:"rel,omitempty"`
	Href  string `json:"href,omitempty"`
	Type  string `json:"type,omitempty"`
}

RDAPLink represents a link in RDAP

type RDAPNotice

type RDAPNotice = RDAPRemark

RDAPNotice is an alias for RDAPRemark

type RDAPNs

type RDAPNs struct {
	LDHName     string   `json:"ldhName"`
	IPAddresses *RDAPIPs `json:"ipAddresses,omitempty"`
}

RDAPNs represents a nameserver in RDAP

type RDAPRemark

type RDAPRemark struct {
	Title       string     `json:"title,omitempty"`
	Description []string   `json:"description,omitempty"`
	Links       []RDAPLink `json:"links,omitempty"`
}

RDAPRemark represents a remark/notice

type RDAPResponse

type RDAPResponse struct {
	// Common fields
	Handle      string       `json:"handle,omitempty"`
	LDHName     string       `json:"ldhName,omitempty"` // Domain name
	UnicodeName string       `json:"unicodeName,omitempty"`
	Status      []string     `json:"status,omitempty"`
	Events      []RDAPEvent  `json:"events,omitempty"`
	Entities    []RDAPEntity `json:"entities,omitempty"`
	Nameservers []RDAPNs     `json:"nameservers,omitempty"`
	Links       []RDAPLink   `json:"links,omitempty"`
	Remarks     []RDAPRemark `json:"remarks,omitempty"`
	Notices     []RDAPNotice `json:"notices,omitempty"`

	// Network-specific (for IP lookups)
	StartAddress string `json:"startAddress,omitempty"`
	EndAddress   string `json:"endAddress,omitempty"`
	IPVersion    string `json:"ipVersion,omitempty"`
	Name         string `json:"name,omitempty"`
	Type         string `json:"type,omitempty"`
	Country      string `json:"country,omitempty"`
	ParentHandle string `json:"parentHandle,omitempty"`

	// Error fields
	ErrorCode   int      `json:"errorCode,omitempty"`
	Title       string   `json:"title,omitempty"`
	Description []string `json:"description,omitempty"`

	// Metadata
	RawJSON      string    `json:"rawJson,omitempty"`
	LookupTimeMs int64     `json:"lookupTimeMs"`
	Timestamp    time.Time `json:"timestamp"`
	Error        string    `json:"error,omitempty"`
}

RDAPResponse represents an RDAP response

func RDAPLookupASN

func RDAPLookupASN(asn string) RDAPResponse

RDAPLookupASN performs an RDAP lookup for an ASN

func RDAPLookupASNWithTimeout

func RDAPLookupASNWithTimeout(asn string, timeout time.Duration) RDAPResponse

RDAPLookupASNWithTimeout performs an RDAP lookup for an ASN with timeout

func RDAPLookupDomain

func RDAPLookupDomain(domain string) RDAPResponse

RDAPLookupDomain performs an RDAP lookup for a domain

func RDAPLookupDomainWithTimeout

func RDAPLookupDomainWithTimeout(domain string, timeout time.Duration) RDAPResponse

RDAPLookupDomainWithTimeout performs an RDAP lookup with custom timeout

func RDAPLookupIP

func RDAPLookupIP(ip string) RDAPResponse

RDAPLookupIP performs an RDAP lookup for an IP address

func RDAPLookupIPWithTimeout

func RDAPLookupIPWithTimeout(ip string, timeout time.Duration) RDAPResponse

RDAPLookupIPWithTimeout performs an RDAP lookup for an IP with custom timeout

type RPRecord

type RPRecord struct {
	Mailbox string `json:"mailbox"` // Email as DNS name (user.domain.com)
	TxtDom  string `json:"txtDom"`  // Domain with TXT record containing more info
}

RPRecord represents an RP (Responsible Person) record

type SOARecord

type SOARecord struct {
	PrimaryNS  string `json:"primaryNs"`
	AdminEmail string `json:"adminEmail"`
	Serial     uint32 `json:"serial"`
	Refresh    uint32 `json:"refresh"`
	Retry      uint32 `json:"retry"`
	Expire     uint32 `json:"expire"`
	MinTTL     uint32 `json:"minTtl"`
}

SOARecord represents an SOA record

type SRVRecord

type SRVRecord struct {
	Target   string `json:"target"`
	Port     uint16 `json:"port"`
	Priority uint16 `json:"priority"`
	Weight   uint16 `json:"weight"`
}

SRVRecord represents an SRV record

type SSHFPRecord

type SSHFPRecord struct {
	Algorithm   uint8  `json:"algorithm"` // 1=RSA, 2=DSA, 3=ECDSA, 4=Ed25519
	FPType      uint8  `json:"fpType"`    // 1=SHA-1, 2=SHA-256
	Fingerprint string `json:"fingerprint"`
}

SSHFPRecord represents an SSHFP record

type StandardPeerFeatures

type StandardPeerFeatures struct {
	LatencyMs       float64 `json:"latencyMs"`       // Lower is better
	HopCount        int     `json:"hopCount"`        // Lower is better
	GeoDistanceKm   float64 `json:"geoDistanceKm"`   // Lower is better
	TrustScore      float64 `json:"trustScore"`      // Higher is better (invert)
	BandwidthMbps   float64 `json:"bandwidthMbps"`   // Higher is better (invert)
	PacketLossRate  float64 `json:"packetLossRate"`  // Lower is better
	ConnectivityPct float64 `json:"connectivityPct"` // Higher is better (invert)
	NATScore        float64 `json:"natScore"`        // Higher is better (invert)
}

StandardPeerFeatures defines the standard feature set for peer selection. These map to dimensions in the KD-Tree.

func (StandardPeerFeatures) ToFeatureSlice

func (f StandardPeerFeatures) ToFeatureSlice() []float64

ToFeatureSlice converts structured features to a slice for KD-Tree operations. Inversion is handled so that lower distance = better peer.
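The inversion convention can be illustrated with two of the fields above (LatencyMs and TrustScore); this is a sketch of the idea, not the package's ToFeatureSlice:

```go
package main

import "fmt"

// toFeatures maps "higher is better" metrics into "lower is better"
// coordinates by inverting them, so that smaller distances always mean
// better peers. Illustrative only.
func toFeatures(latencyMs, trustScore float64) []float64 {
	return []float64{
		latencyMs,      // lower is already better: keep as-is
		1 - trustScore, // higher is better: invert into a cost
	}
}

func main() {
	fmt.Printf("%.2f\n", toFeatures(20, 0.9)) // trust 0.9 becomes cost 0.10
}
```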

type TLSARecord

type TLSARecord struct {
	Usage        uint8  `json:"usage"`        // 0-3: CA constraint, Service cert, Trust anchor, Domain-issued
	Selector     uint8  `json:"selector"`     // 0=Full cert, 1=SubjectPublicKeyInfo
	MatchingType uint8  `json:"matchingType"` // 0=Exact, 1=SHA-256, 2=SHA-512
	CertData     string `json:"certData"`
}

TLSARecord represents a TLSA (DANE) record

type TreeAnalytics

type TreeAnalytics struct {
	QueryCount  atomic.Int64 // Total nearest/kNearest/radius queries
	InsertCount atomic.Int64 // Total successful inserts
	DeleteCount atomic.Int64 // Total successful deletes

	// Timing statistics (nanoseconds)
	TotalQueryTimeNs  atomic.Int64
	LastQueryTimeNs   atomic.Int64
	MinQueryTimeNs    atomic.Int64
	MaxQueryTimeNs    atomic.Int64
	LastQueryAt       atomic.Int64 // Unix nanoseconds
	CreatedAt         time.Time
	LastRebuiltAt     atomic.Int64 // Unix nanoseconds (for gonum backend rebuilds)
	BackendRebuildCnt atomic.Int64 // Number of backend rebuilds
}

TreeAnalytics tracks operational statistics for a KDTree. All counters are safe for concurrent reads; use the Reset() method for atomic reset.

func NewTreeAnalytics

func NewTreeAnalytics() *TreeAnalytics

NewTreeAnalytics creates a new analytics tracker.

func (*TreeAnalytics) RecordDelete

func (a *TreeAnalytics) RecordDelete()

RecordDelete records a successful delete.

func (*TreeAnalytics) RecordInsert

func (a *TreeAnalytics) RecordInsert()

RecordInsert records a successful insert.

func (*TreeAnalytics) RecordQuery

func (a *TreeAnalytics) RecordQuery(durationNs int64)

RecordQuery records a query operation with timing.

func (*TreeAnalytics) RecordRebuild

func (a *TreeAnalytics) RecordRebuild()

RecordRebuild records a backend rebuild.

func (*TreeAnalytics) Reset

func (a *TreeAnalytics) Reset()

Reset atomically resets all counters.

func (*TreeAnalytics) Snapshot

func (a *TreeAnalytics) Snapshot() TreeAnalyticsSnapshot

Snapshot returns a point-in-time view of the analytics.

type TreeAnalyticsSnapshot

type TreeAnalyticsSnapshot struct {
	QueryCount        int64     `json:"queryCount"`
	InsertCount       int64     `json:"insertCount"`
	DeleteCount       int64     `json:"deleteCount"`
	AvgQueryTimeNs    int64     `json:"avgQueryTimeNs"`
	MinQueryTimeNs    int64     `json:"minQueryTimeNs"`
	MaxQueryTimeNs    int64     `json:"maxQueryTimeNs"`
	LastQueryTimeNs   int64     `json:"lastQueryTimeNs"`
	LastQueryAt       time.Time `json:"lastQueryAt"`
	CreatedAt         time.Time `json:"createdAt"`
	BackendRebuildCnt int64     `json:"backendRebuildCount"`
	LastRebuiltAt     time.Time `json:"lastRebuiltAt"`
}

TreeAnalyticsSnapshot is an immutable snapshot for JSON serialization.

type TrustMetrics

type TrustMetrics struct {
	// ReputationScore (0-1): aggregated trust score
	ReputationScore float64 `json:"reputationScore"`
	// SuccessfulTransactions: count of successful exchanges
	SuccessfulTransactions int64 `json:"successfulTransactions"`
	// FailedTransactions: count of failed/aborted exchanges
	FailedTransactions int64 `json:"failedTransactions"`
	// AgeSeconds: how long this peer has been known
	AgeSeconds int64 `json:"ageSeconds"`
	// LastSuccessAt: last successful interaction
	LastSuccessAt time.Time `json:"lastSuccessAt"`
	// LastFailureAt: last failed interaction
	LastFailureAt time.Time `json:"lastFailureAt"`
	// VouchCount: number of other peers vouching for this peer
	VouchCount int `json:"vouchCount"`
	// FlagCount: number of reports against this peer
	FlagCount int `json:"flagCount"`
	// ProofOfWork: computational proof of stake/work
	ProofOfWork float64 `json:"proofOfWork"`
}

TrustMetrics tracks trust and reputation for peer selection.

type WebRedirectRecord

type WebRedirectRecord struct {
	URL          string `json:"url"`
	RedirectType int    `json:"redirectType"` // 301, 302, etc.
	Frame        bool   `json:"frame"`        // Frame redirect vs HTTP redirect
}

WebRedirectRecord represents a Web Redirect record (ClouDNS specific)

type WeightedCosineDistance

type WeightedCosineDistance struct{ Weights []float64 }

WeightedCosineDistance implements 1 - weighted cosine similarity, where weights scale each axis in both the dot product and the norms. If Weights is nil or has zero length, this reduces to CosineDistance.

func (WeightedCosineDistance) Distance

func (wcd WeightedCosineDistance) Distance(a, b []float64) float64
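The formula described above — 1 minus a cosine similarity whose dot product and norms are axis-weighted — can be sketched as follows (a standalone version; the zero-vector convention here is an assumption, and the package may differ):

```go
package main

import (
	"fmt"
	"math"
)

// weightedCosineDistance returns 1 - weighted cosine similarity, with
// weights scaling each axis in both the dot product and the norms.
// Missing or nil weights default to 1, reducing to plain cosine distance.
func weightedCosineDistance(a, b, w []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		wi := 1.0
		if i < len(w) {
			wi = w[i]
		}
		dot += wi * a[i] * b[i]
		na += wi * a[i] * a[i]
		nb += wi * b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 1 // zero vector: treated as maximally distant in this sketch
	}
	return 1 - dot/(math.Sqrt(na)*math.Sqrt(nb))
}

func main() {
	fmt.Println(weightedCosineDistance([]float64{1, 0}, []float64{0, 1}, nil)) // orthogonal: 1
	// parallel vectors: distance ~0 (up to float rounding)
	fmt.Println(weightedCosineDistance([]float64{1, 2}, []float64{2, 4}, []float64{1, 1}))
}
```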

Directories

Path Synopsis
examples
dht_helpers command
dht_ping_1d command
