controller

package
v0.0.4
Published: Mar 9, 2026 License: Apache-2.0 Imports: 32 Imported by: 0

Documentation

Overview

Package controller registers reconcilers with a controller-runtime Manager.

It also provides a PatchHelper inspired by the ClusterAPI patch.Helper pattern. The helper snapshots an object before reconciliation and defers a single, diff-based MergeFrom patch at the end. If nothing changed, the helper is a no-op and no API call is made.

Unlike a status-only helper, PatchHelper handles spec, metadata, and status changes, issuing at most two API calls (one for the main object, one for the status subresource) and zero calls when nothing changed.

Index

Constants

const (
	// ReasonReconciling indicates the controller is actively processing.
	ReasonReconciling = "Reconciling"

	// ReasonReconciled indicates successful reconciliation.
	ReasonReconciled = "Reconciled"

	// ReasonOrphansCleaned indicates orphaned allocations were removed.
	ReasonOrphansCleaned = "OrphansCleaned"

	// ReasonPoolCreated indicates a new pool was created.
	ReasonPoolCreated = "PoolCreated"

	// ReasonPoolUpdated indicates a pool spec was updated.
	ReasonPoolUpdated = "PoolUpdated"

	// ReasonPoolFull indicates no available slots for node assignment.
	ReasonPoolFull = "PoolFull"

	// ReasonValidated indicates the resource passed validation checks.
	ReasonValidated = "Validated"

	// ReasonError indicates an error during reconciliation.
	ReasonError = "Error"
)

Kstatus-compatible condition reasons used across all controllers.
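To illustrate how these reasons pair with kstatus condition types, here is a minimal stdlib-only sketch. Condition is a simplified stand-in for metav1.Condition, and setCondition mimics the replace-or-append behavior of meta.SetStatusCondition; the types and helper are illustrative, not part of this package.

```go
package main

import "fmt"

// Mirrors three of the package's reason constants.
const (
	ReasonReconciling = "Reconciling"
	ReasonReconciled  = "Reconciled"
	ReasonError       = "Error"
)

// Condition is a simplified stand-in for metav1.Condition.
type Condition struct {
	Type   string // kstatus type, e.g. "Reconciling" or "Stalled"
	Status string // "True" or "False"
	Reason string // one of the Reason* constants
}

// setCondition replaces an existing condition of the same Type, or appends.
func setCondition(conds []Condition, c Condition) []Condition {
	for i := range conds {
		if conds[i].Type == c.Type {
			conds[i] = c
			return conds
		}
	}
	return append(conds, c)
}

func main() {
	var conds []Condition
	// While reconciling: Reconciling=True with ReasonReconciling.
	conds = setCondition(conds, Condition{Type: "Reconciling", Status: "True", Reason: ReasonReconciling})
	// On success: flip the same condition rather than accumulating a new one.
	conds = setCondition(conds, Condition{Type: "Reconciling", Status: "False", Reason: ReasonReconciled})
	fmt.Println(len(conds), conds[0].Reason)
}
```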

Variables

This section is empty.

Functions

func SetupIPPoolReconciler

func SetupIPPoolReconciler(mgr ctrl.Manager, reconcileInterval time.Duration, opts ReconcilerOptions) error

SetupIPPoolReconciler creates and registers the IPPoolReconciler with the manager. The reconcileInterval controls the periodic re-queue interval.

func SetupNodeSliceReconciler

func SetupNodeSliceReconciler(mgr ctrl.Manager) error

SetupNodeSliceReconciler creates and registers the NodeSliceReconciler with the manager.

func SetupOverlappingRangeReconciler

func SetupOverlappingRangeReconciler(mgr ctrl.Manager, reconcileInterval time.Duration, opts ReconcilerOptions) error

SetupOverlappingRangeReconciler creates and registers the reconciler.

func SetupWithManager

func SetupWithManager(mgr ctrl.Manager, reconcileInterval time.Duration, opts ReconcilerOptions) error

SetupWithManager registers all reconcilers with the given manager. The reconcileInterval controls how often periodic re-checks of IP pools and related resources are triggered.

The following RBAC rules are required by controller-runtime infrastructure (leader election and event recording) and are not tied to a specific reconciler:

	+kubebuilder:rbac:groups=coordination.k8s.io,resources=leases,verbs=get;create;update;delete
	+kubebuilder:rbac:groups="";events.k8s.io,resources=events,verbs=create;patch;update;get

Types

type IPPoolReconciler

type IPPoolReconciler struct {
	// contains filtered or unexported fields
}

IPPoolReconciler reconciles IPPool resources by removing allocations whose pods no longer exist. It replaces the legacy CronJob-based reconciler and the DaemonSet pod controller.

func (*IPPoolReconciler) Reconcile

func (r *IPPoolReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error)

Reconcile checks all allocations in the IPPool against live pods and removes orphaned entries.
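The core cleanup decision can be sketched as a pure function: an allocation is orphaned when its owning pod is no longer live. The map shapes and keys below are illustrative, not the package's actual data structures.

```go
package main

import (
	"fmt"
	"sort"
)

// orphanedAllocations returns the IPs whose owning pod no longer exists.
// allocations maps IP -> "namespace/name" pod key; livePods is the set of
// pod keys that still exist in the cluster.
func orphanedAllocations(allocations map[string]string, livePods map[string]bool) []string {
	var orphans []string
	for ip, pod := range allocations {
		if !livePods[pod] {
			orphans = append(orphans, ip)
		}
	}
	sort.Strings(orphans) // deterministic order for patching and logging
	return orphans
}

func main() {
	allocs := map[string]string{
		"10.0.0.2": "default/web-1",
		"10.0.0.3": "default/web-2", // pod deleted, so this IP is orphaned
	}
	live := map[string]bool{"default/web-1": true}
	fmt.Println(orphanedAllocations(allocs, live))
}
```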

type NodeSliceReconciler

type NodeSliceReconciler struct {
	// contains filtered or unexported fields
}

NodeSliceReconciler reconciles NetworkAttachmentDefinition resources by managing the corresponding NodeSlicePool CRDs. It assigns IP range slices to nodes and ensures node join/leave events are reflected in the allocations.

func (*NodeSliceReconciler) Reconcile

func (r *NodeSliceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error)

Reconcile processes a NAD and manages the corresponding NodeSlicePool.
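The node join/leave bookkeeping can be modeled as stable slot assignment: nodes that remain keep their slice index, a departed node frees its slot, and new nodes take the lowest free slot. This is a simplified model of the NodeSlicePool bookkeeping; the function and its shapes are illustrative.

```go
package main

import "fmt"

// assignSlices gives each node a slice index, preserving existing
// assignments and handing freed indices to new nodes.
func assignSlices(existing map[string]int, nodes []string, sliceCount int) map[string]int {
	out := make(map[string]int)
	used := make(map[int]bool)
	// Keep assignments for nodes that still exist; a node that left
	// simply does not get copied, which frees its slot.
	for _, n := range nodes {
		if idx, ok := existing[n]; ok {
			out[n] = idx
			used[idx] = true
		}
	}
	// Give each new node the lowest free slice index.
	for _, n := range nodes {
		if _, ok := out[n]; ok {
			continue
		}
		for i := 0; i < sliceCount; i++ {
			if !used[i] {
				out[n] = i
				used[i] = true
				break
			}
		}
	}
	return out
}

func main() {
	// node-a left, node-c joined: node-b keeps slice 1, node-c takes freed slice 0.
	existing := map[string]int{"node-a": 0, "node-b": 1}
	fmt.Println(assignSlices(existing, []string{"node-b", "node-c"}, 2))
}
```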

type OverlappingRangeReconciler

type OverlappingRangeReconciler struct {
	// contains filtered or unexported fields
}

OverlappingRangeReconciler reconciles OverlappingRangeIPReservation CRDs by deleting reservations whose pods no longer exist. This provides a secondary cleanup path in addition to the IPPoolReconciler's inline cleanup.

func (*OverlappingRangeReconciler) Reconcile

func (r *OverlappingRangeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error)

Reconcile checks whether the pod referenced by the OverlappingRangeIPReservation still exists. If not, the reservation is deleted.

type PatchHelper

type PatchHelper struct {
	// contains filtered or unexported fields
}

PatchHelper implements the "snapshot → mutate → deferred patch" pattern from ClusterAPI's patch.Helper.

Usage (retErr must be a named return value of the enclosing function):

	helper, err := NewPatchHelper(obj, c)
	if err != nil {
		return err
	}
	defer func() {
		// Only surface the patch error if reconciliation itself succeeded,
		// so an earlier reconcile error is not clobbered.
		if pErr := helper.Patch(ctx, obj); pErr != nil && retErr == nil {
			retErr = pErr
		}
	}()
	// … mutate obj (spec, metadata, status, conditions, etc.) …

func NewPatchHelper

func NewPatchHelper(obj client.Object, c client.Client) (*PatchHelper, error)

NewPatchHelper snapshots the current state of obj so it can later be compared with the mutated state. Call this *before* making any changes to the object.

func (*PatchHelper) HasChanges

func (h *PatchHelper) HasChanges(obj client.Object) bool

HasChanges reports whether the object has been modified since the snapshot was taken. Useful in tests or logging.

func (*PatchHelper) Patch

func (h *PatchHelper) Patch(ctx context.Context, obj client.Object) error

Patch compares the current object with the snapshot taken at creation time and issues the minimal set of API calls:

  • If spec or metadata changed: one MergeFrom Patch on the main object
  • If status changed: one Status().Update on the status subresource
  • If nothing changed: zero API calls

When both spec and status changed, the spec patch is sent first. Since client.Patch updates obj in-place with the server response (resetting status to server state), the desired status is preserved and restored before the Status().Update call.

Returns nil when nothing was patched.
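The decision table above can be sketched in plain Go: compare the mutated object against the snapshot and report which API calls would be issued. The obj type is a hypothetical simplified shape, and plannedCalls illustrates the logic only; the real helper diffs full Kubernetes objects.

```go
package main

import (
	"fmt"
	"reflect"
)

// obj is a simplified stand-in for a Kubernetes object with metadata,
// spec, and status portions.
type obj struct {
	Labels map[string]string
	Spec   string
	Status string
}

// plannedCalls mirrors Patch's decision table: a spec or metadata diff
// needs one patch on the main object; a status diff needs one update on
// the status subresource; no diff means zero API calls.
func plannedCalls(before, after obj) (patchMain, updateStatus bool) {
	specOrMeta := before.Spec != after.Spec || !reflect.DeepEqual(before.Labels, after.Labels)
	statusDiff := before.Status != after.Status
	return specOrMeta, statusDiff
}

func main() {
	before := obj{Spec: "a", Status: "ok"}
	after := before
	after.Status = "ready" // status-only mutation
	m, s := plannedCalls(before, after)
	fmt.Println(m, s)
}
```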

type ReconcilerOptions

type ReconcilerOptions struct {
	// CleanupTerminating controls whether pods with a DeletionTimestamp
	// (i.e. terminating pods) are treated as orphaned. When false (default),
	// terminating pods keep their IP allocation until fully deleted. When
	// true, allocations are released immediately. Applies to both IPPool
	// and OverlappingRange reconcilers. See upstream #550.
	CleanupTerminating bool

	// CleanupDisrupted controls whether pods with a DisruptionTarget
	// condition (DeletionByTaintManager) are treated as orphaned. When true
	// (default), the reconcilers release their allocations immediately
	// because the taint manager has already decided to evict the pod.
	// Applies to both IPPool and OverlappingRange reconcilers.
	CleanupDisrupted bool

	// VerifyNetworkStatus controls whether the IPPool reconciler verifies
	// that an allocated IP is present in the pod's Multus network-status
	// annotation. When true (default), a mismatch marks the allocation as
	// orphaned. Disable this if your CNI does not populate the annotation.
	VerifyNetworkStatus bool
}

ReconcilerOptions holds optional feature flags for reconciler setup.
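The documented defaults can be collected in one place. defaultOptions is a hypothetical helper shown only to make the field comments above concrete; the package may wire its defaults up differently.

```go
package main

import "fmt"

// ReconcilerOptions mirrors the documented fields.
type ReconcilerOptions struct {
	CleanupTerminating  bool
	CleanupDisrupted    bool
	VerifyNetworkStatus bool
}

// defaultOptions reflects the defaults stated in the field docs.
func defaultOptions() ReconcilerOptions {
	return ReconcilerOptions{
		CleanupTerminating:  false, // terminating pods keep their IPs until fully deleted
		CleanupDisrupted:    true,  // taint-evicted pods release their IPs immediately
		VerifyNetworkStatus: true,  // cross-check allocations against the network-status annotation
	}
}

func main() {
	fmt.Printf("%+v\n", defaultOptions())
}
```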
