kedastral

module
v0.1.2
Published: Dec 17, 2025 License: Apache-2.0

README

📘 README.md - Kedastral

Kedastral - Predict tomorrow's load, scale today.



🧭 Overview

Kedastral is an open-source, domain-agnostic predictive autoscaling companion for KEDA, written in Go.

It enables Kubernetes workloads to scale proactively, not just reactively, by forecasting future demand (for example, request rate, queue depth, or events) and translating that forecast into desired replica counts before resource metrics like CPU or RPS spike.

Where KEDA reacts to what has already happened, Kedastral predicts what will happen next, keeping applications responsive, stable, and cost-efficient during sudden traffic surges.


🚀 Current Features (v0.1 MVP)

| Feature | Description | Status |
|---|---|---|
| 🔮 Predictive scaling | Forecast short-term demand and set replica counts ahead of time | ✅ Implemented |
| ⚙️ KEDA-native integration | Implements the official KEDA External Scaler gRPC interface | ✅ Implemented |
| 📈 Prometheus adapter | Pull metrics from Prometheus for forecasting | ✅ Implemented |
| 💾 Storage backends | In-memory (default) and Redis for HA deployments | ✅ Implemented |
| 🧠 Baseline forecasting model | Statistical baseline with quantile-based prediction | ✅ Implemented |
| 📊 ARIMA forecasting model | Time-series forecasting for trending/seasonal workloads | ✅ Implemented |
| 🧠 Built in Go | Fast, efficient, minimal footprint; deployable as static binaries or containers | ✅ Implemented |
| 🧱 Extensible interfaces | Well-defined interfaces for adapters and models | ✅ Implemented |
| 🔐 Data stays local | All forecasting and scaling happen inside your cluster | ✅ Implemented |
| 📊 Prometheus metrics | Exposes metrics for monitoring forecast health | ✅ Implemented |
| 🐳 Docker support | Dockerfiles for containerized deployment | ✅ Implemented |
| 🧪 Comprehensive tests | 81 unit tests covering core functionality | ✅ Implemented |
🔮 Planned Features
  • Declarative CRDs - Kubernetes-native configuration (ForecastPolicy, DataSource)
  • Additional adapters - Kafka, HTTP APIs, and custom data sources
  • Advanced ML models - Prophet, SARIMA, and custom model support
  • Helm charts - Easy deployment via Helm
  • Grafana dashboards - Pre-built dashboards for visualization

📊 Forecasting Models

Kedastral supports two forecasting models, selectable via the --model flag:

Baseline (Default)
  • Algorithm: Moving average (EMA 5m + 30m) + hour-of-day seasonality (see the EMA sketch below)
  • Training: None required (stateless)
  • Best for: Stable workloads, development, quick start
  • Pros: Fast, simple, no training data needed
  • Cons: Limited accuracy for trending/seasonal data
  • Configuration: --model=baseline (default)
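
To make the baseline's smoothing concrete, here is a minimal, illustrative EMA in Go. The alpha values and sample series are assumptions for demonstration, not Kedastral's internal implementation.

// Illustrative only - not Kedastral's internal code.
package main

import "fmt"

// ema applies exponential smoothing to a series of samples.
// A shorter effective window (e.g., 5m) corresponds to a larger alpha and
// reacts faster to recent changes; a longer window (e.g., 30m) uses a
// smaller alpha and smooths more aggressively.
func ema(samples []float64, alpha float64) float64 {
	if len(samples) == 0 {
		return 0
	}
	smoothed := samples[0]
	for _, s := range samples[1:] {
		smoothed = alpha*s + (1-alpha)*smoothed
	}
	return smoothed
}

func main() {
	rps := []float64{100, 120, 90, 150, 160, 155}
	fmt.Printf("short EMA: %.1f, long EMA: %.1f\n", ema(rps, 0.5), ema(rps, 0.1))
}
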
ARIMA
  • Algorithm: AutoRegressive Integrated Moving Average (pure Go)
  • Training: Required (uses historical window)
  • Best for: Workloads with trends, seasonality, or autocorrelation
  • Pros: Better accuracy for complex patterns
  • Cons: Requires training data, slower startup
  • Configuration: --model=arima --arima-p=1 --arima-d=1 --arima-q=1

Model Comparison:

| Feature | Baseline | ARIMA |
|---|---|---|
| Training time | None | ~15 μs per 1K points |
| Prediction time | <10 ms | <1 μs for 30 steps |
| Memory overhead | ~1 MB | ~5 MB |
| Handles trends | ❌ | ✅ |
| Handles seasonality | Basic | ✅ |
| Training data needed | No | Yes |
| Recommended for | Stable workloads | Complex patterns |

Example usage:

# Baseline (default)
./forecaster --workload=my-api --model=baseline

# ARIMA with auto parameters (default: p=1, d=1, q=1)
./forecaster --workload=my-api --model=arima

# ARIMA with custom parameters
./forecaster --workload=my-api --model=arima --arima-p=2 --arima-d=1 --arima-q=2

ARIMA Parameters:

  • p (AR order): How many past values to use (1-3 typical)
  • d (differencing): Trend removal (0=none, 1=linear, 2=quadratic); see the differencing sketch below
  • q (MA order): How many past errors to use (1-3 typical)
  • Auto (0): Defaults to 1 for all parameters (ARIMA(1,1,1))
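
To make the d parameter concrete, the sketch below shows first-order differencing (d=1), which removes a linear trend before the AR and MA terms are fit. This is a conceptual illustration, not Kedastral's ARIMA implementation.

// Illustrative only - not Kedastral's ARIMA code.
package main

import "fmt"

// difference applies d rounds of differencing to a series.
// One round (d=1) turns a linearly trending series into a roughly
// constant one; two rounds (d=2) handle quadratic trends.
func difference(series []float64, d int) []float64 {
	out := append([]float64(nil), series...)
	for i := 0; i < d; i++ {
		if len(out) < 2 {
			return nil
		}
		next := make([]float64, 0, len(out)-1)
		for j := 1; j < len(out); j++ {
			next = append(next, out[j]-out[j-1])
		}
		out = next
	}
	return out
}

func main() {
	trending := []float64{10, 13, 16, 19, 22, 25}
	fmt.Println(difference(trending, 1)) // [3 3 3 3 3] - trend removed
}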

💡 Example Use Cases

Kedastral is domain-neutral. You can use it for any workload that shows predictable or event-driven traffic patterns:

| Domain | Typical signals | Scaling goal |
|---|---|---|
| E-commerce | request rate, promotions, time of day | scale before sales campaigns |
| Video streaming | viewer counts, release schedule | pre-scale for new show launches |
| Banking & fintech | batch job schedules, queue lag | prepare for end-of-month loads |
| IoT ingestion | connected device count | absorb telemetry spikes gracefully |
| SaaS APIs & gaming | RPS, active sessions, time windows | prevent latency from scaling delays |

๐Ÿ—๏ธ Architecture Overview

Kedastral currently consists of two main components, both implemented in Go for performance and operational simplicity.

1. Forecaster (cmd/forecaster)
  • Collects recent metrics from Prometheus using configurable queries
  • Runs the configured forecasting model (baseline statistical model or ARIMA) to predict short-term load
  • Translates predicted load into desired replica counts using a configurable capacity policy
  • Stores forecasts in memory and exposes them via HTTP API (/forecast/current)
  • Exposes Prometheus metrics for monitoring (/metrics)
  • Health check endpoint (/healthz)
2. Scaler (cmd/scaler)
  • Implements the KEDA External Scaler gRPC API
  • Periodically queries the Forecaster via HTTP to fetch the latest forecast
  • Selects appropriate replica count based on configured lead time
  • Returns desired replicas to KEDA over the gRPC interface (see the sketch below)
  • Exposes health check and metrics endpoints

The two components form a closed feedback loop:

Prometheus → Forecaster → HTTP → Scaler → gRPC → KEDA → HPA → Workload
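
To illustrate the Scaler's side of this loop, the sketch below shows the shape of a GetMetrics handler for KEDA's External Scaler protocol. The request/response types follow KEDA's published externalscaler proto; the ForecastClient interface, the import path, and the embedded Unimplemented server are assumptions made for this example, not necessarily Kedastral's actual code.

// Illustrative sketch of the scaler side; not Kedastral's actual implementation.
package scaler

import (
	"context"

	// Generated External Scaler stubs; the path mirrors pkg/api/externalscaler
	// in the project layout and is assumed here.
	pb "github.com/HatiCode/kedastral/pkg/api/externalscaler"
)

// ForecastClient is a hypothetical client for the Forecaster's HTTP API.
type ForecastClient interface {
	// DesiredReplicas returns the replica count forecast for now + lead time.
	DesiredReplicas(ctx context.Context, workload string) (int64, error)
}

type scalerServer struct {
	pb.UnimplementedExternalScalerServer
	forecast ForecastClient
}

// GetMetrics reports the forecasted desired-replica count as the metric
// value that KEDA hands to the HPA.
func (s *scalerServer) GetMetrics(ctx context.Context, req *pb.GetMetricsRequest) (*pb.GetMetricsResponse, error) {
	workload := req.ScaledObjectRef.ScalerMetadata["workload"]
	replicas, err := s.forecast.DesiredReplicas(ctx, workload)
	if err != nil {
		return nil, err
	}
	return &pb.GetMetricsResponse{
		MetricValues: []*pb.MetricValue{
			{MetricName: req.MetricName, MetricValue: replicas},
		},
	}, nil
}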

🧩 Component Diagram (ASCII)
┌────────────────────┐
│  Metrics Sources   │
│ (Prometheus, etc.) │
└─────────┬──────────┘
          │
          ▼
┌────────────────────┐
│  Kedastral         │
│  Forecast Engine   │  (Go)
│  • Collects data   │
│  • Forecasts load  │
│  • Outputs replicas│
└─────────┬──────────┘
          │ REST/gRPC
          ▼
┌────────────────────┐
│  Kedastral Scaler  │  (Go, gRPC)
│  • KEDA plugin     │
│  • Reports replicas│
└─────────┬──────────┘
          │
          ▼
┌────────────────────┐
│        KEDA        │
│   (HPA controller) │
└─────────┬──────────┘
          │
          ▼
┌────────────────────┐
│ Target Deployment  │
│   (User workload)  │
└────────────────────┘
🧭 Mermaid Diagram
flowchart TD
    A["Metrics Sources (Prometheus ยท Kafka ยท HTTP ยท Custom)"] --> B["Forecast Engine (Go)"]
    B --> C["Kedastral External Scaler (Go, gRPC)"]
    C --> D["KEDA Operator"]
    D --> E["Horizontal Pod Autoscaler"]
    E --> F["Target Deployment (User workload)"]
    D -.->|Reactive metrics| B

💾 Storage Backends

Kedastral supports two storage backends for forecast snapshots, allowing you to choose between simplicity and high availability:

In-Memory Storage (Default)
  • Best for: Single forecaster instance, development, testing
  • Pros: Zero dependencies, fast, simple setup
  • Cons: No persistence across restarts, no HA support
  • Configuration: --storage=memory (default)
# Uses in-memory storage by default
./forecaster --workload=my-api --metric=http_rps
Redis Storage
  • Best for: Multi-instance forecasters, production HA deployments, persistence
  • Pros: Shared state across replicas, TTL-based expiration, horizontal scaling
  • Cons: Requires Redis server, additional network dependency
  • Configuration: --storage=redis --redis-addr=HOST:PORT
# Using Redis storage
./forecaster --storage=redis \
  --redis-addr=redis:6379 \
  --redis-ttl=1h \
  --workload=my-api \
  --metric=http_rps

Redis Configuration Options:

  • --storage=redis - Enable Redis backend
  • --redis-addr=HOST:PORT - Redis server address (default: localhost:6379)
  • --redis-password=SECRET - Redis password (optional)
  • --redis-db=N - Redis database number (default: 0)
  • --redis-ttl=DURATION - Snapshot TTL (default: 30m)

Example HA Deployment: See examples/deployment-redis.yaml for a complete Kubernetes deployment with:

  • Redis for persistent storage
  • 2+ forecaster replicas sharing Redis
  • Scaler consuming forecasts from HA forecasters
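
For intuition, here is a rough sketch of a Redis-backed snapshot store with TTL expiry using the go-redis client. The Snapshot type, method names, and key scheme are illustrative assumptions, not Kedastral's actual storage package.

// Illustrative sketch of a Redis-backed snapshot store.
package storage

import (
	"context"
	"encoding/json"
	"time"

	"github.com/redis/go-redis/v9"
)

// Snapshot is an illustrative forecast snapshot payload.
type Snapshot struct {
	Workload  string    `json:"workload"`
	Replicas  []int     `json:"replicas"`
	CreatedAt time.Time `json:"created_at"`
}

// RedisStore persists snapshots in Redis with a TTL so stale forecasts
// expire automatically (mirroring the --redis-ttl flag).
type RedisStore struct {
	rdb *redis.Client
	ttl time.Duration
}

func NewRedisStore(addr, password string, db int, ttl time.Duration) *RedisStore {
	return &RedisStore{
		rdb: redis.NewClient(&redis.Options{Addr: addr, Password: password, DB: db}),
		ttl: ttl,
	}
}

func (s *RedisStore) Save(ctx context.Context, snap Snapshot) error {
	payload, err := json.Marshal(snap)
	if err != nil {
		return err
	}
	// The key scheme is an assumption; any per-workload key works.
	return s.rdb.Set(ctx, "kedastral:forecast:"+snap.Workload, payload, s.ttl).Err()
}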

🧠 How It Works

  1. Data Collection: Kedastral's adapters pull short-term metrics and contextual features from your chosen data sources.
  2. Forecasting: The engine runs a forecasting model to estimate load (RPS, queue length, etc.) for the next few minutes.
  3. Replica Calculation: Using the configured capacity model, Kedastral computes how many pods will be required to handle that future load (see the sketch after this list).
  4. Integration with KEDA:
    • The Kedastral External Scaler exposes the forecast as a metric via gRPC.
    • KEDA reads it and updates the Horizontal Pod Autoscaler (HPA).
    • Your workload scales before demand arrives.
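
A minimal sketch of the replica calculation from step 3, using the capacity-policy ideas (target per pod, headroom, min/max clamps) and the example flag values from the Quick Start below; the exact formula, function, and field names are assumptions for illustration:

// Illustrative sketch of a capacity policy; not Kedastral's capacity package.
package main

import (
	"fmt"
	"math"
)

// Policy mirrors the capacity policy flags: target load per pod,
// a headroom multiplier, and min/max replica clamps.
type Policy struct {
	TargetPerPod float64
	Headroom     float64
	Min, Max     int
}

// DesiredReplicas converts a forecasted load value into a replica count.
func (p Policy) DesiredReplicas(predictedLoad float64) int {
	replicas := int(math.Ceil(predictedLoad * p.Headroom / p.TargetPerPod))
	if replicas < p.Min {
		replicas = p.Min
	}
	if replicas > p.Max {
		replicas = p.Max
	}
	return replicas
}

func main() {
	p := Policy{TargetPerPod: 100, Headroom: 1.2, Min: 2, Max: 50}
	fmt.Println(p.DesiredReplicas(850)) // ceil(850 * 1.2 / 100) = 11 replicas
}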

🚀 Quick Start

Building from Source
# Clone the repository
git clone https://github.com/HatiCode/kedastral.git
cd kedastral

# Build both forecaster and scaler
make build

# Or build individually
make forecaster
make scaler

# Run tests
make test
Running Locally
1. Start the Forecaster

The forecaster generates predictions and exposes them via HTTP:

./bin/forecaster \
  -workload=my-api \
  -metric=http_rps \
  -prom-url=http://localhost:9090 \
  -prom-query='sum(rate(http_requests_total{service="my-api"}[1m]))' \
  -target-per-pod=100 \
  -headroom=1.2 \
  -min=2 \
  -max=50 \
  -lead-time=5m \
  -log-level=info

Check the forecast:

curl "http://localhost:8081/forecast/current?workload=my-api"
2. Start the Scaler

The scaler implements the KEDA External Scaler gRPC interface:

./bin/scaler \
  -forecaster-url=http://localhost:8081 \
  -lead-time=5m \
  -log-level=info

The scaler exposes:

  • gRPC on :50051 for KEDA
  • HTTP metrics on :8082
3. Configure KEDA

Apply a ScaledObject to connect KEDA to Kedastral:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-api-scaledobject
spec:
  scaleTargetRef:
    name: my-api
    kind: Deployment
  pollingInterval: 30
  minReplicaCount: 2
  maxReplicaCount: 50
  triggers:
    - type: external
      metadata:
        scalerAddress: kedastral-scaler:50051
        workload: my-api
Deploying to Kubernetes

See the examples/ directory for complete Kubernetes deployment manifests:

Quick deploy:

kubectl apply -f examples/deployment.yaml
kubectl apply -f examples/scaled-object.yaml

🧰 Current Tech Stack

| Component | Technology | Status |
|---|---|---|
| Core language | Go (≥1.25) | ✅ |
| Forecaster API | REST (HTTP) | ✅ |
| Scaler API | gRPC (KEDA External Scaler protocol) | ✅ |
| Forecast models | Baseline (statistical), ARIMA | ✅ |
| Metrics adapter | Prometheus | ✅ |
| Storage | In-memory, Redis | ✅ |
| Observability | Prometheus metrics | ✅ |
| Testing | Go testing framework (81 tests) | ✅ |
| Deployment | Kubernetes manifests + Dockerfiles | ✅ |

Planned: Helm charts, additional adapters (Kafka, HTTP), ML models (Prophet, SARIMA), declarative CRDs, Grafana dashboards


🧱 Current Project Structure

kedastral/
├─ cmd/
│  ├─ forecaster/          # Forecaster binary and subpackages
│  │  ├─ main.go
│  │  ├─ forecaster.go
│  │  ├─ config/           # Configuration parsing
│  │  ├─ logger/           # Structured logging
│  │  ├─ metrics/          # Prometheus metrics
│  │  ├─ router/           # HTTP routes
│  │  └─ store/            # Storage backend initialization
│  └─ scaler/              # Scaler binary and subpackages
│     ├─ main.go
│     ├─ scaler.go
│     ├─ config/           # Configuration parsing
│     ├─ logger/           # Structured logging
│     ├─ metrics/          # Prometheus metrics
│     └─ router/           # HTTP routes
├─ pkg/
│  ├─ adapters/            # Prometheus adapter
│  ├─ client/              # HTTP clients for Kedastral services
│  ├─ models/              # Forecasting models (baseline, ARIMA)
│  ├─ capacity/            # Replica calculation logic
│  ├─ features/            # Feature engineering
│  ├─ storage/             # Snapshot storage (in-memory, Redis)
│  ├─ httpx/               # HTTP server utilities
│  └─ api/externalscaler/  # KEDA External Scaler protobuf
├─ examples/               # Kubernetes deployment examples
│  ├─ deployment.yaml      # Complete deployment manifests
│  ├─ scaled-object.yaml   # KEDA ScaledObject example
│  └─ README.md            # Detailed usage guide
├─ docs/                   # Design documentation
│  ├─ capacity-planner.md
│  ├─ cli-design.md
│  └─ forecaster-store-interface.md
├─ test/integration/       # Integration tests
├─ Dockerfile.forecaster   # Forecaster container image
├─ Dockerfile.scaler       # Scaler container image
├─ Makefile                # Build automation
└─ LICENSE (Apache-2.0)

🔧 Installation

Prerequisites
  • Go 1.25 or later (for building from source)
  • Kubernetes cluster (v1.20+)
  • KEDA installed (installation guide)
  • Prometheus running in the cluster
From Source
# Clone and build
git clone https://github.com/HatiCode/kedastral.git
cd kedastral
make build

# Deploy to Kubernetes
kubectl apply -f examples/deployment.yaml
kubectl apply -f examples/scaled-object.yaml
Using Makefile
make build           # Build both forecaster and scaler
make test            # Run all tests
make test-coverage   # Run tests with coverage report
make clean           # Remove build artifacts
make help            # Show all available targets

See the examples/README.md for detailed deployment instructions and configuration options.


📚 Documentation

API Documentation

Full Go package documentation is available at pkg.go.dev.

View Documentation Locally
# Install godoc (if not already installed)
go install golang.org/x/tools/cmd/godoc@latest

# Start local documentation server
godoc -http=:6060

# Open in browser
open http://localhost:6060/pkg/github.com/HatiCode/kedastral/

📊 Observability

| Metric | Description |
|---|---|
| kedastral_predicted_value | Forecasted metric value (e.g., RPS) |
| kedastral_desired_replicas | Computed replica count |
| kedastral_forecast_age_seconds | Staleness of the forecast data |
| kedastral_underprovision_seconds_total | Safety metric for missed forecasts |
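
As an illustration of how such series can be exported with the standard Prometheus Go client (client_golang), here is a hedged sketch; only the metric names come from the table above, while the single workload label, help strings, port, and values are assumptions.

// Illustrative metrics export; not the forecaster's actual instrumentation code.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	predictedValue = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "kedastral_predicted_value", Help: "Forecasted metric value (e.g., RPS)."},
		[]string{"workload"},
	)
	desiredReplicas = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "kedastral_desired_replicas", Help: "Computed replica count."},
		[]string{"workload"},
	)
	forecastAge = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "kedastral_forecast_age_seconds", Help: "Staleness of the forecast data."},
		[]string{"workload"},
	)
)

func main() {
	prometheus.MustRegister(predictedValue, desiredReplicas, forecastAge)

	// Values would be updated after each forecast cycle; these are placeholders.
	predictedValue.WithLabelValues("my-api").Set(850)
	desiredReplicas.WithLabelValues("my-api").Set(11)

	// Serve a /metrics endpoint for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}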

🧩 Extensibility

  • Adapters SDK: implement your own metric collectors (Go interfaces).
  • Model SDK: plug in your own forecasting logic.
  • Storage SDK: replace Redis with your preferred backend.
  • BYOM Mode: expose an HTTP endpoint returning predictions; Kedastral will use it automatically.

Example interface:

type ForecastModel interface {
    Train(ctx context.Context, data DataFrame) error
    Predict(ctx context.Context, horizon time.Duration) ([]float64, error)
}
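
As a toy example of plugging in a model, the sketch below implements a naive forecast (repeat the last observed value) against this interface. The DataFrame fields shown here are assumed for the example, since the real type lives in Kedastral's packages.

// naive.go - a toy ForecastModel implementation for illustration only.
package models

import (
	"context"
	"errors"
	"time"
)

// DataFrame is sketched with just what the toy model needs (assumed shape).
type DataFrame struct {
	Values []float64     // observed metric samples, oldest first
	Step   time.Duration // spacing between samples
}

// NaiveModel predicts that the last observed value persists over the horizon.
type NaiveModel struct {
	last float64
	step time.Duration
}

func (m *NaiveModel) Train(ctx context.Context, data DataFrame) error {
	if len(data.Values) == 0 || data.Step <= 0 {
		return errors.New("naive model: no training data")
	}
	m.last = data.Values[len(data.Values)-1]
	m.step = data.Step
	return nil
}

func (m *NaiveModel) Predict(ctx context.Context, horizon time.Duration) ([]float64, error) {
	if m.step <= 0 {
		return nil, errors.New("naive model: Train must be called first")
	}
	steps := int(horizon / m.step)
	out := make([]float64, steps)
	for i := range out {
		out[i] = m.last
	}
	return out, nil
}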

🔄 Safety & Fallbacks

  • Kedastral can run hybrid scaling: effectiveReplicas = max(predicted, reactive), ensuring that reactive CPU/RPS-based scaling still applies (see the sketch below).
  • Built-in clamps: maximum scale-up/scale-down rate per minute.
  • Automatic fallback to KEDA's default triggers if the forecast is stale or the engine is down.
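
The hybrid rule above is just an element-wise max; a tiny sketch (names assumed):

// effectiveReplicas implements the hybrid rule: never scale below what
// reactive (CPU/RPS-based) scaling would already request.
func effectiveReplicas(predicted, reactive int32) int32 {
	if reactive > predicted {
		return reactive
	}
	return predicted
}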

🧑‍💻 Project Goals

  1. Provide a pluggable, open predictive-scaling layer for Kubernetes.
  2. Empower developers to forecast and pre-scale workloads in any industry.
  3. Keep data and control within the user's cluster.
  4. Remain KEDA-compatible and Go-native.
  5. Grow an open community around predictive autoscaling research and practice.

🌍 License & Governance

  • License: Apache-2.0
  • Repository: github.com/HatiCode/kedastral
  • Maintainers: community-governed, CNCF-style steering model.
  • Contributions: welcomed via PRs, adapters, or model integrations.

🗺️ Roadmap

| Milestone | Key Features | Status |
|---|---|---|
| v0.1 (MVP) | Forecaster + Scaler + Prometheus adapter + baseline/ARIMA models + in-memory/Redis storage | ✅ Complete |
| v0.2 | Additional ML models (Prophet, SARIMA) + Helm chart | 🔄 Planned |
| v0.3 | CRDs (ForecastPolicy/DataSource) + Grafana dashboards | 🔄 Planned |
| v0.4 | Additional adapters (Kafka, HTTP) + BYOM plugin + multi-metric ensembles | 🔄 Planned |
| v1.0 | Kedastral Operator + model registry + full conformance tests + production hardening | 🔄 Planned |

v0.1 Deliverables (Current):

  • ✅ Forecaster binary with Prometheus integration
  • ✅ Scaler binary implementing the KEDA External Scaler protocol
  • ✅ Baseline and ARIMA forecasting models
  • ✅ In-memory and Redis forecast storage
  • ✅ Capacity planning with configurable policies
  • ✅ Prometheus metrics for observability
  • ✅ 81 unit tests
  • ✅ Docker support
  • ✅ Kubernetes deployment examples
  • ✅ Comprehensive documentation

✨ Getting Started

# 1. Build Kedastral
make build

# 2. Deploy to Kubernetes
kubectl apply -f examples/deployment.yaml
kubectl apply -f examples/scaled-object.yaml

# 3. Monitor forecasts
kubectl logs -l component=forecaster -f
kubectl logs -l component=scaler -f

# 4. Check current forecast
kubectl port-forward svc/kedastral-forecaster 8081:8081
curl "http://localhost:8081/forecast/current?workload=my-api"

For detailed instructions, see the Quick Start section above and examples/README.md.


🧩 Summary for AI Assistants

Project Name: Kedastral
Purpose: Predictive autoscaling framework for Kubernetes built around KEDA
Core Language: Go (≥1.25)
Current Status: v0.1 MVP - production-ready core components
Primary Components:

  • Forecaster (HTTP API, Prometheus integration, baseline model)
  • Scaler (gRPC KEDA External Scaler implementation)

Key Integrations: KEDA (External Scaler protocol), Prometheus (metrics source)
Storage: In-memory or Redis
Domain Scope: Domain-agnostic (works for any workload)
Mission: Enable proactive scaling decisions in Kubernetes through forecasted metrics
Deployment: Kubernetes manifests + Docker containers (Helm planned for v0.2)
Testing: 81 unit tests covering core functionality
Architecture Keywords: predictive autoscaling, statistical forecasting, Kubernetes, Go, gRPC, KEDA External Scaler, Prometheus, time-series prediction, capacity planning, proactive scaling

Directories

| Path | Synopsis |
|---|---|
| cmd | |
| forecaster (command) | Package main implements the core forecast loop orchestration. |
| forecaster/config | Package config provides configuration parsing and management for the forecaster. |
| forecaster/logger | Package logger provides structured logging configuration for the forecaster. |
| forecaster/metrics | Package metrics provides Prometheus metrics instrumentation for the forecaster. |
| forecaster/router | Package router configures HTTP routes for the forecaster's HTTP API. |
| forecaster/store | Package store provides storage backend initialization for the forecaster. |
| scaler (command) | Command scaler implements the KEDA External Scaler for Kedastral. |
| scaler/config | Package config provides configuration parsing and management for the scaler. |
| scaler/logger | Package logger provides structured logging configuration for the scaler. |
| scaler/metrics | Package metrics provides Prometheus metrics instrumentation for the scaler. |
| scaler/router | Package router configures HTTP routes for the scaler's HTTP server. |
| pkg | |
| adapters | Package adapters provides Kedastral data source connectors that retrieve metrics or contextual signals from external systems and normalize them into a common DataFrame structure. |
| capacity | Package capacity converts forecasted load into desired replica counts using a deterministic policy (target per pod, headroom, lead time, clamps). |
| client | Package client provides HTTP clients for communicating with Kedastral services. |
| features | Package features provides utilities for building feature frames from raw metric data. |
| httpx | Package httpx provides HTTP server utilities and helpers for Kedastral services. |
| models | Package models provides forecasting model implementations. |
| storage | Package storage provides forecast snapshot storage implementations. |
