gameservers

package
Version: v1.25.0
Published: Aug 3, 2022 License: Apache-2.0 Imports: 43 Imported by: 6

Documentation

Overview

Package gameservers handles management of the GameServer Custom Resource Definition

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Controller

type Controller struct {
	// contains filtered or unexported fields
}

Controller is the main GameServer CRD controller

func NewController

func NewController(
	wh *webhooks.WebHook,
	health healthcheck.Handler,
	minPort, maxPort int32,
	sidecarImage string,
	alwaysPullSidecarImage bool,
	sidecarCPURequest resource.Quantity,
	sidecarCPULimit resource.Quantity,
	sidecarMemoryRequest resource.Quantity,
	sidecarMemoryLimit resource.Quantity,
	sdkServiceAccount string,
	kubeClient kubernetes.Interface,
	kubeInformerFactory informers.SharedInformerFactory,
	extClient extclientset.Interface,
	agonesClient versioned.Interface,
	agonesInformerFactory externalversions.SharedInformerFactory) *Controller

NewController returns a new GameServer CRD controller

func (*Controller) Run

func (c *Controller) Run(ctx context.Context, workers int) error

Run the GameServer controller. Blocks until the context is cancelled. Starts `workers` goroutines to process the rate limited queue

type HealthController

type HealthController struct {
	// contains filtered or unexported fields
}

HealthController watches Pods, and applies an Unhealthy state if certain pods crash, or can't be assigned a port, and other similar type conditions.

func NewHealthController

func NewHealthController(health healthcheck.Handler,
	kubeClient kubernetes.Interface,
	agonesClient versioned.Interface,
	kubeInformerFactory informers.SharedInformerFactory,
	agonesInformerFactory externalversions.SharedInformerFactory) *HealthController

NewHealthController returns a HealthController

func (*HealthController) Run

func (hc *HealthController) Run(ctx context.Context) error

Run processes the rate limited queue. Blocks until the context is cancelled

type MigrationController added in v1.3.0

type MigrationController struct {
	// contains filtered or unexported fields
}

MigrationController watches for a Pod being migrated, such as when a maintenance event happens on a node and the Pod is recreated with a new Address for its GameServer

func NewMigrationController added in v1.3.0

func NewMigrationController(health healthcheck.Handler,
	kubeClient kubernetes.Interface,
	agonesClient versioned.Interface,
	kubeInformerFactory informers.SharedInformerFactory,
	agonesInformerFactory externalversions.SharedInformerFactory) *MigrationController

NewMigrationController returns a MigrationController

func (*MigrationController) Run added in v1.3.0

func (mc *MigrationController) Run(ctx context.Context) error

Run processes the rate limited queue. Blocks until the context is cancelled

type MissingPodController added in v1.4.0

type MissingPodController struct {
	// contains filtered or unexported fields
}

MissingPodController makes sure that any GameServer that isn't in a Scheduled or Unhealthy state and is missing a Pod is moved to Unhealthy.

It's possible that a GameServer is missing its associated pod due to unexpected controller downtime or if the Pod is deleted with no subsequent Delete event.

Since resync on the controller is every 30 seconds, even if there is some time in which a GameServer is in a broken state, it will eventually move to Unhealthy, and get replaced (if in a Fleet).
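The predicate described above can be written down as a small sketch. The `state` type, its values, and `shouldMarkUnhealthy` are hypothetical simplifications of the real GameServer state machine, assuming only that a GameServer past Scheduled, not already Unhealthy, and missing its Pod should become Unhealthy:

```go
package main

import "fmt"

// state is a hypothetical, simplified stand-in for GameServer states.
type state string

const (
	scheduled state = "Scheduled"
	ready     state = "Ready"
	unhealthy state = "Unhealthy"
)

// shouldMarkUnhealthy mirrors the documented rule: a GameServer that
// isn't Scheduled or Unhealthy and has no backing Pod is moved to Unhealthy.
func shouldMarkUnhealthy(s state, podExists bool) bool {
	return s != scheduled && s != unhealthy && !podExists
}

func main() {
	// A Ready GameServer whose Pod vanished should be marked Unhealthy.
	fmt.Println(shouldMarkUnhealthy(ready, false))
}
```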

func NewMissingPodController added in v1.4.0

func NewMissingPodController(health healthcheck.Handler,
	kubeClient kubernetes.Interface,
	agonesClient versioned.Interface,
	kubeInformerFactory informers.SharedInformerFactory,
	agonesInformerFactory externalversions.SharedInformerFactory) *MissingPodController

NewMissingPodController returns a MissingPodController

func (*MissingPodController) Run added in v1.4.0

func (c *MissingPodController) Run(ctx context.Context) error

Run processes the rate limited queue. Blocks until the context is cancelled

type NodeCount added in v0.9.0

type NodeCount struct {
	// Ready is the number of Ready GameServers on the node
	Ready int64
	// Allocated is the number of Allocated GameServers on the node
	Allocated int64
}

NodeCount is a convenience data structure for keeping relevant GameServer counts per Node

type PerNodeCounter added in v0.9.0

type PerNodeCounter struct {
	// contains filtered or unexported fields
}

PerNodeCounter counts how many Allocated and Ready GameServers currently exist on each node. This is useful for scheduling allocations and fleet management, mostly under a Packed strategy

func NewPerNodeCounter added in v0.9.0

func NewPerNodeCounter(
	kubeInformerFactory informers.SharedInformerFactory,
	agonesInformerFactory externalversions.SharedInformerFactory) *PerNodeCounter

NewPerNodeCounter returns a new PerNodeCounter

func (*PerNodeCounter) Counts added in v0.9.0

func (pnc *PerNodeCounter) Counts() map[string]NodeCount

Counts returns a copy of the NodeCount map in a thread safe way
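The thread-safe behaviour of Counts can be sketched with a mutex-guarded map that returns a copy to callers. The `counter` type and its `inc` helper are hypothetical; only the NodeCount struct mirrors the one documented above:

```go
package main

import (
	"fmt"
	"sync"
)

// NodeCount mirrors the documented struct.
type NodeCount struct {
	Ready     int64
	Allocated int64
}

// counter is a simplified, mutex-guarded per-node counter sketch.
type counter struct {
	mu     sync.Mutex
	counts map[string]NodeCount
}

// inc bumps the Ready or Allocated count for a node.
func (c *counter) inc(node string, allocated bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	nc := c.counts[node]
	if allocated {
		nc.Allocated++
	} else {
		nc.Ready++
	}
	c.counts[node] = nc
}

// Counts returns a copy of the map so callers cannot race with updates.
func (c *counter) Counts() map[string]NodeCount {
	c.mu.Lock()
	defer c.mu.Unlock()
	out := make(map[string]NodeCount, len(c.counts))
	for node, nc := range c.counts {
		out[node] = nc
	}
	return out
}

func main() {
	c := &counter{counts: map[string]NodeCount{}}
	c.inc("node-a", false)
	c.inc("node-a", true)
	fmt.Printf("%+v\n", c.Counts()["node-a"]) // {Ready:1 Allocated:1}
}
```

Returning a copy rather than the underlying map is what makes the read safe: the caller can iterate the result while other goroutines keep writing.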

func (*PerNodeCounter) Run added in v0.9.0

func (pnc *PerNodeCounter) Run(ctx context.Context, _ int) error

Run sets up the current state of GameServer counts across nodes. This is a non-blocking Run function.

type PortAllocator

type PortAllocator struct {
	// contains filtered or unexported fields
}

PortAllocator manages the dynamic port allocation strategy. Only use exposed methods to ensure appropriate locking is taken. The PortAllocator does not currently support mixing static portAllocations (or any pods with defined HostPort) within the dynamic port range other than the ones it coordinates.

func NewPortAllocator

func NewPortAllocator(minPort, maxPort int32,
	kubeInformerFactory informers.SharedInformerFactory,
	agonesInformerFactory externalversions.SharedInformerFactory) *PortAllocator

NewPortAllocator returns a new dynamic port allocator. minPort and maxPort are the bottom and top of the port range that can be allocated to the game servers

func (*PortAllocator) Allocate

func (pa *PortAllocator) Allocate(gs *agonesv1.GameServer) *agonesv1.GameServer

Allocate assigns a port to the GameServer and returns it.

func (*PortAllocator) DeAllocate

func (pa *PortAllocator) DeAllocate(gs *agonesv1.GameServer)

DeAllocate marks the given port as no longer allocated

func (*PortAllocator) Run

func (pa *PortAllocator) Run(ctx context.Context) error

Run sets up the current state of port allocations and starts tracking Pod and Node changes
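The allocate/deallocate cycle over a [minPort, maxPort] range can be illustrated with a minimal sketch. The `portAllocator` type here is hypothetical and heavily simplified: the real PortAllocator also tracks per-node capacity and syncs its state against Pods and Nodes, which this sketch omits:

```go
package main

import (
	"errors"
	"fmt"
)

// portAllocator is a minimal sketch of dynamic port allocation
// over an inclusive [minPort, maxPort] range.
type portAllocator struct {
	minPort, maxPort int32
	allocated        map[int32]bool
}

func newPortAllocator(minPort, maxPort int32) *portAllocator {
	return &portAllocator{minPort: minPort, maxPort: maxPort, allocated: map[int32]bool{}}
}

// allocate returns the first free port in the range.
func (pa *portAllocator) allocate() (int32, error) {
	for p := pa.minPort; p <= pa.maxPort; p++ {
		if !pa.allocated[p] {
			pa.allocated[p] = true
			return p, nil
		}
	}
	return 0, errors.New("port range exhausted")
}

// deAllocate marks the given port as free again.
func (pa *portAllocator) deAllocate(port int32) {
	delete(pa.allocated, port)
}

func main() {
	pa := newPortAllocator(7000, 7002)
	p1, _ := pa.allocate()
	p2, _ := pa.allocate()
	fmt.Println(p1, p2) // 7000 7001
	pa.deAllocate(p1)
	p3, _ := pa.allocate()
	fmt.Println(p3) // 7000, the freed port is reused
}
```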
