optimizer module v0.6.0

Published: Sep 8, 2025 · License: Apache-2.0

Inference system optimizer

The inference system optimizer assigns a GPU type to each inference model server and, for a given request traffic load and set of service classes, decides the number of replicas and the batch size for each model. (slides)

Building

docker build -t inferno . --load

Prerequisites

Running

First, install the prerequisites if running locally (i.e., not using an image).

I. Optimizer only

There are two ways to run the optimizer.

  1. Direct function calls: An example is provided in main.go.

    First, populate sample data.

    git submodule init
    git submodule update
    

    Then, run the demo.

    cd demos/main
    go run main.go
    
  2. REST API server: The optimizer may run as a REST API server (steps).

II. Optimized auto-scaler

One may run the optimizer as part of an auto-scaling control system, in one of two ways.

  1. Kubernetes controller: Running in a Kubernetes cluster and using custom resources and a Kubernetes runtime controller, the optimizer may be exercised during reconciliation of updates to the Optimizer custom resource (reference).

  2. Optimization control loop: The control loop comprises (1) a Collector to get data about the inference servers through Prometheus and server deployments, (2) an Optimizer to make decisions, (3) an Actuator to realize such decisions by updating server deployments, and (4) a periodic Controller that has access to static and dynamic data. The control loop may run either externally or in a Kubernetes cluster.
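The Collector/Optimizer/Actuator split described above can be sketched as Go interfaces driven by a periodic controller. This is an illustrative skeleton only, not the project's actual types: the interface names follow the text, but the method signatures and stub implementations are assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// Collector gathers data about the inference servers (in the real
// system, from Prometheus and the server deployments).
type Collector interface {
	Collect() (load float64, err error)
}

// Optimizer makes scaling decisions from the collected data.
type Optimizer interface {
	Optimize(load float64) (replicas int)
}

// Actuator realizes decisions by updating server deployments.
type Actuator interface {
	Apply(replicas int) error
}

// Stub implementations, for illustration only.
type stubCollector struct{}

func (stubCollector) Collect() (float64, error) { return 42.0, nil }

type stubOptimizer struct{}

func (stubOptimizer) Optimize(load float64) int { return int(load/10) + 1 }

type stubActuator struct{}

func (stubActuator) Apply(replicas int) error {
	fmt.Println("scaling to", replicas, "replicas")
	return nil
}

// runControlLoop is the periodic Controller: each tick it collects,
// optimizes, and actuates. It runs n ticks here so the sketch terminates.
func runControlLoop(c Collector, o Optimizer, a Actuator, period time.Duration, n int) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for i := 0; i < n; i++ {
		<-ticker.C
		load, err := c.Collect()
		if err != nil {
			continue // skip a tick on collection failure
		}
		a.Apply(o.Optimize(load))
	}
}

func main() {
	runControlLoop(stubCollector{}, stubOptimizer{}, stubActuator{}, 10*time.Millisecond, 3)
}
```

A real deployment would run the loop indefinitely and plug in implementations backed by Prometheus queries and the Kubernetes API.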

Steps to run the optimizer as a REST API server

The REST API specifications are documented.

Clone this repository and set the environment variable INFERNO_REPO to its path.

Option A: Run externally
cd $INFERNO_REPO/cmd/optimizer
go run main.go [-F]

The default is to run the server in Stateless mode. Use the optional -F argument to run in Stateful mode. (Description of modes)

You may then curl API commands to http://localhost:8080.

Option B: Run in cluster
  • Deploy the optimizer as a Deployment, along with a Service on port 80, in namespace inferno in the cluster. (The deployment yaml file starts the server in a container with the -F flag.)

    cd $INFERNO_REPO/manifests/yamls
    kubectl apply -f deploy-optimizer.yaml
    
  • Forward the service port to the local host.

    kubectl port-forward service/inferno-optimizer -n inferno 8080:80
    

    You may then curl API commands (above) to http://localhost:8080.

  • (Optional) Inspect logs.

    POD=$(kubectl get pod -l app=inferno-optimizer -n inferno -o jsonpath="{.items[0].metadata.name}")
    kubectl logs -f $POD -n inferno 
    
  • Cleanup.

    kubectl delete -f deploy-optimizer.yaml
    

Detailed description of the optimizer

(figures: problem-scope, timing-definitions, request-batching, token-time-fitting, modeling-batching, qn-model, system-occupancy, impact-batch, target-service)

Decision variables

For each (class of service, model) pair:

  • gpuProfile: the GPU type allocated
  • numReplicas: the number of replicas
  • batchSize: the batch size, given continuous batching
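The decision variables above can be sketched as a Go type keyed by the (class of service, model) pair. The field names follow the list; the key type and the sample values (GPU name, class name) are illustrative assumptions, not the project's actual API.

```go
package main

import "fmt"

// Allocation holds the optimizer's decision variables for one
// (class of service, model) pair.
type Allocation struct {
	GPUProfile  string // gpuProfile: the GPU type allocated
	NumReplicas int    // numReplicas: the number of replicas
	BatchSize   int    // batchSize: batch size under continuous batching
}

// Key identifies a (class of service, model) pair.
type Key struct {
	ServiceClass string
	Model        string
}

func main() {
	// Hypothetical decision for one pair; names and numbers are
	// illustrative only.
	decisions := map[Key]Allocation{
		{ServiceClass: "premium", Model: "llama-70b"}: {GPUProfile: "H100", NumReplicas: 4, BatchSize: 16},
	}
	for k, a := range decisions {
		fmt.Printf("%s/%s -> %d x %s, batch %d\n",
			k.ServiceClass, k.Model, a.NumReplicas, a.GPUProfile, a.BatchSize)
	}
}
```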

Specifications: Accelerators and models

(figures: accelerators, models)

Example 1: Unlimited accelerators

(figures: unlimited-assign, unlimited-perf)

Example 2: Load change - Unlimited accelerators

(figures: unlimited-change-assign, unlimited-change, unlimited-change-perf)

Example 3: Limited accelerators

(figures: limited-count, limited-assign, limited-perf)

Example 4: Load change - Limited accelerators

(figures: limited-change-assign, limited-change, limited-change-perf)

Directories

  • cmd/optimizer (command)
  • demos/generators (command)
  • demos/main (command)
  • demos/queue (command)
  • demos/scale (command)
  • demos/transition (command)
  • pkg
