updatestrategy

package
v0.0.0-...-0666c2b
Published: Apr 19, 2024 License: MIT Imports: 32 Imported by: 0

Documentation

Index

Constants

const (
	KarpenterEC2NodeClassResource = "ec2nodeclasses.karpenter.k8s.aws"
)

Variables

This section is empty.

Functions

func InstanceConfigUpToDate

func InstanceConfigUpToDate(instanceConfig, poolConfig *InstanceConfig) bool

InstanceConfigUpToDate compares the current and desired InstanceConfig. It compares the userdata and imageID, and checks that the current config has all the desired tags. It does NOT check whether the current config has extra EC2 tags, since many tags are injected outside of our control. This means removing a tag is not enough to make the configs unequal.
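To illustrate the comparison semantics described above, here is a minimal self-contained sketch (the upToDate helper and the sample values are hypothetical, not the package's actual implementation): userdata and imageID must match exactly, every desired tag must be present on the current config, and extra current tags are ignored.

```go
package main

import "fmt"

// InstanceConfig mirrors the package's exported type for this sketch.
type InstanceConfig struct {
	UserData string
	ImageID  string
	Tags     map[string]string
}

// upToDate sketches the documented semantics: userdata and imageID must
// match, and every desired tag must be present on the current config.
// Extra tags on the current config are deliberately ignored.
func upToDate(current, desired *InstanceConfig) bool {
	if current.UserData != desired.UserData || current.ImageID != desired.ImageID {
		return false
	}
	for k, v := range desired.Tags {
		if current.Tags[k] != v {
			return false
		}
	}
	return true
}

func main() {
	current := &InstanceConfig{
		UserData: "#cloud-config",
		ImageID:  "ami-123",
		Tags:     map[string]string{"pool": "default", "injected": "by-aws"},
	}
	desired := &InstanceConfig{
		UserData: "#cloud-config",
		ImageID:  "ami-123",
		Tags:     map[string]string{"pool": "default"},
	}
	// The extra "injected" tag on current does not make the configs unequal.
	fmt.Println(upToDate(current, desired)) // true
}
```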

Types

type ASGNodePoolsBackend

type ASGNodePoolsBackend struct {
	// contains filtered or unexported fields
}

ASGNodePoolsBackend defines a node pool backed by an AWS Auto Scaling Group.

func NewASGNodePoolsBackend

func NewASGNodePoolsBackend(clusterID string, sess *session.Session) *ASGNodePoolsBackend

NewASGNodePoolsBackend initializes a new ASGNodePoolsBackend for the given clusterID and AWS session.

func (*ASGNodePoolsBackend) Get

func (n *ASGNodePoolsBackend) Get(_ context.Context, nodePool *api.NodePool) (*NodePool, error)

Get gets the ASG matching the node pool and all instances from the ASG. The node generation is set to 'current' for nodes with the latest launch configuration and 'outdated' for nodes with an older launch configuration.

func (*ASGNodePoolsBackend) MarkForDecommission

func (n *ASGNodePoolsBackend) MarkForDecommission(_ context.Context, nodePool *api.NodePool) error

MarkForDecommission suspends autoscaling of the node pool if it was enabled and makes sure that the pool can be scaled down to 0. The implementation assumes the Kubernetes cluster-autoscaler is used, so it just removes a tag.

func (*ASGNodePoolsBackend) Scale

func (n *ASGNodePoolsBackend) Scale(_ context.Context, nodePool *api.NodePool, replicas int) error

Scale sets the desired capacity of the ASGs to the number of replicas. If the node pool is backed by multiple ASGs the scale operation will try to balance the increment/decrement of nodes over all the ASGs.
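One way the balancing described above could work is to split the total replica count as evenly as possible across the ASGs. The balance helper below is a hypothetical sketch of that idea, not the package's actual algorithm.

```go
package main

import "fmt"

// balance distributes a total desired replica count across n ASGs as
// evenly as possible: each ASG gets total/n, and the remainder is spread
// one node at a time over the first ASGs. Hypothetical sketch only.
func balance(total, asgs int) []int {
	out := make([]int, asgs)
	for i := range out {
		out[i] = total / asgs
		if i < total%asgs {
			out[i]++
		}
	}
	return out
}

func main() {
	// 7 replicas over 3 ASGs: no ASG differs from another by more than one.
	fmt.Println(balance(7, 3)) // [3 2 2]
}
```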

func (*ASGNodePoolsBackend) Terminate

func (n *ASGNodePoolsBackend) Terminate(_ context.Context, node *Node, decrementDesired bool) error

Terminate terminates an instance from the ASG and optionally decrements the DesiredCapacity. By default the desired capacity will not be decremented. In case the new desired capacity is less than the current min size of the ASG, it will also decrease the ASG minSize. This function will not return until the instance has been terminated in AWS.
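The minSize adjustment described above can be sketched as follows (the asg type and decrement helper are hypothetical stand-ins, not the package's actual code):

```go
package main

import "fmt"

// asg is a minimal stand-in for the ASG sizing state used in this sketch.
type asg struct {
	MinSize int
	Desired int
}

// decrement sketches the documented Terminate behavior: when the desired
// capacity drops below the ASG's current min size, the min size is
// lowered along with it so the ASG does not immediately replace the node.
func decrement(a *asg) {
	a.Desired--
	if a.Desired < a.MinSize {
		a.MinSize = a.Desired
	}
}

func main() {
	a := &asg{MinSize: 3, Desired: 3}
	decrement(a)
	fmt.Printf("min=%d desired=%d\n", a.MinSize, a.Desired) // min=2 desired=2
}
```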

type CLCUpdateStrategy

type CLCUpdateStrategy struct {
	// contains filtered or unexported fields
}

func NewCLCUpdateStrategy

func NewCLCUpdateStrategy(logger *log.Entry, nodePoolManager NodePoolManager, pollingInterval time.Duration) *CLCUpdateStrategy

NewCLCUpdateStrategy initializes a new CLCUpdateStrategy.

func (*CLCUpdateStrategy) PrepareForRemoval

func (c *CLCUpdateStrategy) PrepareForRemoval(ctx context.Context, nodePoolDesc *api.NodePool) error

func (*CLCUpdateStrategy) Update

func (c *CLCUpdateStrategy) Update(ctx context.Context, nodePoolDesc *api.NodePool) error

type DrainConfig

type DrainConfig struct {
	// Start forcefully evicting pods <ForceEvictionGracePeriod> after node drain started
	ForceEvictionGracePeriod time.Duration

	// Only force evict pods that are at least <MinPodLifetime> old
	MinPodLifetime time.Duration

	// Wait until all healthy pods in the same PDB are at least <MinHealthyPDBSiblingCreationTime> old
	MinHealthyPDBSiblingLifetime time.Duration

	// Wait until all unhealthy pods in the same PDB are at least <MinUnhealthyPDBSiblingCreationTime> old
	MinUnhealthyPDBSiblingLifetime time.Duration

	// Wait at least <ForceEvictionInterval> between force evictions to allow controllers to catch up
	ForceEvictionInterval time.Duration

	// Wait for <PollInterval> between force eviction attempts
	PollInterval time.Duration
}

DrainConfig contains the various settings for the smart node draining algorithm.
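A populated DrainConfig might look like the sketch below; the struct is mirrored locally so the snippet is self-contained, and the duration values are purely illustrative, not recommended defaults.

```go
package main

import (
	"fmt"
	"time"
)

// DrainConfig mirrors the package's struct for this sketch.
type DrainConfig struct {
	ForceEvictionGracePeriod       time.Duration
	MinPodLifetime                 time.Duration
	MinHealthyPDBSiblingLifetime   time.Duration
	MinUnhealthyPDBSiblingLifetime time.Duration
	ForceEvictionInterval          time.Duration
	PollInterval                   time.Duration
}

func main() {
	cfg := DrainConfig{
		ForceEvictionGracePeriod:       8 * time.Hour,    // start force-evicting 8h after the drain begins
		MinPodLifetime:                 1 * time.Hour,    // never force-evict pods younger than 1h
		MinHealthyPDBSiblingLifetime:   1 * time.Hour,    // wait for healthy PDB siblings to age
		MinUnhealthyPDBSiblingLifetime: 15 * time.Minute, // wait less for unhealthy siblings
		ForceEvictionInterval:          1 * time.Minute,  // let controllers catch up between evictions
		PollInterval:                   10 * time.Second, // re-check between force eviction attempts
	}
	fmt.Println(cfg.ForceEvictionGracePeriod)
}
```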

type EC2NodePoolBackend

type EC2NodePoolBackend struct {
	// contains filtered or unexported fields
}

EC2NodePoolBackend defines a node pool consisting of EC2 instances managed externally by some component, e.g. Karpenter.

func NewEC2NodePoolBackend

func NewEC2NodePoolBackend(clusterID string, sess *session.Session, crdResolverInitializer func() (*KarpenterCRDNameResolver, error)) *EC2NodePoolBackend

NewEC2NodePoolBackend initializes a new EC2NodePoolBackend for the given clusterID and AWS session.

func (*EC2NodePoolBackend) DecommissionKarpenterNodes

func (n *EC2NodePoolBackend) DecommissionKarpenterNodes(ctx context.Context) error

func (*EC2NodePoolBackend) DecommissionNodePool

func (n *EC2NodePoolBackend) DecommissionNodePool(ctx context.Context, nodePool *api.NodePool) error

func (*EC2NodePoolBackend) Get

func (n *EC2NodePoolBackend) Get(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)

Get gets the EC2 instances matching the node pool by looking at the node pool tag. The node generation is set to 'current' for nodes with up-to-date userData, ImageID, and tags, and 'outdated' for nodes with an outdated configuration.

func (*EC2NodePoolBackend) MarkForDecommission

func (n *EC2NodePoolBackend) MarkForDecommission(context.Context, *api.NodePool) error

func (*EC2NodePoolBackend) Scale

func (n *EC2NodePoolBackend) Scale(context.Context, *api.NodePool, int) error

func (*EC2NodePoolBackend) Terminate

func (n *EC2NodePoolBackend) Terminate(context.Context, *Node, bool) error

type InstanceConfig

type InstanceConfig struct {
	UserData string
	ImageID  string
	Tags     map[string]string
}

type KarpenterCRDNameResolver

type KarpenterCRDNameResolver struct {
	NodePoolCRDName string
	// contains filtered or unexported fields
}

func NewKarpenterCRDResolver

func NewKarpenterCRDResolver(ctx context.Context, k8sClients *kubernetes.ClientsCollection) (*KarpenterCRDNameResolver, error)

func (*KarpenterCRDNameResolver) NodePoolConfigGetter

func (r *KarpenterCRDNameResolver) NodePoolConfigGetter(ctx context.Context, nodePool *api.NodePool) (*InstanceConfig, error)

func (*KarpenterCRDNameResolver) NodeTemplateCRDName

func (r *KarpenterCRDNameResolver) NodeTemplateCRDName() string

type KubernetesNodePoolManager

type KubernetesNodePoolManager struct {
	// contains filtered or unexported fields
}

KubernetesNodePoolManager defines a node pool manager which uses the Kubernetes API along with a node pool provider backend to manage node pools.

func NewKubernetesNodePoolManager

func NewKubernetesNodePoolManager(logger *log.Entry, kubeClient kubernetes.Interface, poolBackend ProviderNodePoolsBackend, drainConfig *DrainConfig, noScheduleTaint bool) *KubernetesNodePoolManager

NewKubernetesNodePoolManager initializes a new Kubernetes NodePool manager which can manage individual node pools based on the nodes registered in the Kubernetes API and the related NodePoolBackend for those nodes, e.g. ASGNodePool.

func (*KubernetesNodePoolManager) AbortNodeDecommissioning

func (m *KubernetesNodePoolManager) AbortNodeDecommissioning(ctx context.Context, node *Node) error

func (*KubernetesNodePoolManager) CordonNode

func (m *KubernetesNodePoolManager) CordonNode(ctx context.Context, node *Node) error

CordonNode marks a node unschedulable.

func (*KubernetesNodePoolManager) DisableReplacementNodeProvisioning

func (m *KubernetesNodePoolManager) DisableReplacementNodeProvisioning(ctx context.Context, node *Node) error

func (*KubernetesNodePoolManager) GetPool

func (m *KubernetesNodePoolManager) GetPool(ctx context.Context, nodePoolDesc *api.NodePool) (*NodePool, error)

GetPool gets the current node pool from the node pool backend and attaches the Kubernetes node object name and labels to the corresponding nodes.

func (*KubernetesNodePoolManager) MarkNodeForDecommission

func (m *KubernetesNodePoolManager) MarkNodeForDecommission(ctx context.Context, node *Node) error

func (*KubernetesNodePoolManager) MarkPoolForDecommission

func (m *KubernetesNodePoolManager) MarkPoolForDecommission(ctx context.Context, nodePool *api.NodePool) error

func (*KubernetesNodePoolManager) ScalePool

func (m *KubernetesNodePoolManager) ScalePool(ctx context.Context, nodePool *api.NodePool, replicas int) error

ScalePool scales a nodePool to the specified number of replicas. On scale down it will attempt to do so gracefully by draining the nodes before terminating them.

func (*KubernetesNodePoolManager) TerminateNode

func (m *KubernetesNodePoolManager) TerminateNode(ctx context.Context, node *Node, decrementDesired bool) error

TerminateNode terminates a node and optionally decrements the desired size of the node pool. Before a node is terminated it is drained to ensure that pods running on the node are gracefully terminated.

type Node

type Node struct {
	Name            string
	Annotations     map[string]string
	Labels          map[string]string
	Taints          []v1.Taint
	Cordoned        bool
	ProviderID      string
	FailureDomain   string
	Generation      int
	VolumesAttached bool
	Ready           bool
	Master          bool
}

Node is an abstract node object which combines the node information from the node pool backend along with the corresponding Kubernetes node object.

type NodePool

type NodePool struct {
	Min        int
	Desired    int
	Current    int
	Max        int
	Generation int
	Nodes      []*Node
}

NodePool defines a node pool including all nodes.

func WaitForDesiredNodes

func WaitForDesiredNodes(ctx context.Context, logger *log.Entry, n NodePoolManager, nodePoolDesc *api.NodePool) (*NodePool, error)

WaitForDesiredNodes waits for the current number of nodes to match the desired number. The final node pool will be returned.

func (*NodePool) ReadyNodes

func (n *NodePool) ReadyNodes() []*Node

ReadyNodes returns a list of nodes which are marked as ready.

type NodePoolManager

type NodePoolManager interface {
	GetPool(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)
	MarkNodeForDecommission(ctx context.Context, node *Node) error
	AbortNodeDecommissioning(ctx context.Context, node *Node) error
	ScalePool(ctx context.Context, nodePool *api.NodePool, replicas int) error
	TerminateNode(ctx context.Context, node *Node, decrementDesired bool) error
	MarkPoolForDecommission(ctx context.Context, nodePool *api.NodePool) error
	DisableReplacementNodeProvisioning(ctx context.Context, node *Node) error
	CordonNode(ctx context.Context, node *Node) error
}

NodePoolManager defines an interface for managing node pools when performing update operations.

type ProfileNodePoolProvisioner

type ProfileNodePoolProvisioner struct {
	// contains filtered or unexported fields
}

ProfileNodePoolProvisioner is a NodePoolProvisioner which selects the backend provisioner based on the node pool profile. It has a default provisioner and a mapping of profile to provisioner for those profiles which can't use the default provisioner.
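The selection logic amounts to a map lookup with a default fallback; the pickBackend helper below is a hypothetical sketch of that idea (backends are represented as plain strings for brevity), not the package's actual code.

```go
package main

import "fmt"

// pickBackend sketches the documented selection: use the
// profile-specific backend if one is mapped, otherwise fall back
// to the default backend.
func pickBackend(profile string, mapping map[string]string, def string) string {
	if b, ok := mapping[profile]; ok {
		return b
	}
	return def
}

func main() {
	mapping := map[string]string{"worker-karpenter": "ec2-backend"}
	fmt.Println(pickBackend("worker-karpenter", mapping, "asg-backend")) // ec2-backend
	fmt.Println(pickBackend("worker-default", mapping, "asg-backend"))   // asg-backend
}
```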

func NewProfileNodePoolsBackend

func NewProfileNodePoolsBackend(defaultProvisioner ProviderNodePoolsBackend, profileMapping map[string]ProviderNodePoolsBackend) *ProfileNodePoolProvisioner

NewProfileNodePoolsBackend initializes a new ProfileNodePoolProvisioner.

func (*ProfileNodePoolProvisioner) Get

func (n *ProfileNodePoolProvisioner) Get(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)

Get gets the specified node pool using the right node pool provisioner for the profile.

func (*ProfileNodePoolProvisioner) MarkForDecommission

func (n *ProfileNodePoolProvisioner) MarkForDecommission(ctx context.Context, nodePool *api.NodePool) error

MarkForDecommission marks a node pool for decommissioning using the right node pool provisioner for the profile.

func (*ProfileNodePoolProvisioner) Scale

func (n *ProfileNodePoolProvisioner) Scale(ctx context.Context, nodePool *api.NodePool, replicas int) error

Scale scales a node pool using the right node pool provisioner for the profile.

func (*ProfileNodePoolProvisioner) Terminate

func (n *ProfileNodePoolProvisioner) Terminate(ctx context.Context, node *Node, decrementDesired bool) error

Terminate terminates a node using the default provisioner.

type ProviderNodePoolsBackend

type ProviderNodePoolsBackend interface {
	Get(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)
	Scale(ctx context.Context, nodePool *api.NodePool, replicas int) error
	MarkForDecommission(ctx context.Context, nodePool *api.NodePool) error
	Terminate(ctx context.Context, node *Node, decrementDesired bool) error
}

ProviderNodePoolsBackend is an interface describing a node pool provider backend, e.g. AWS Auto Scaling Groups.

type RollingUpdateStrategy

type RollingUpdateStrategy struct {
	// contains filtered or unexported fields
}

RollingUpdateStrategy is a cluster node update strategy which will roll the nodes with a specified surge.

func NewRollingUpdateStrategy

func NewRollingUpdateStrategy(logger *log.Entry, nodePoolManager NodePoolManager, surge int) *RollingUpdateStrategy

NewRollingUpdateStrategy initializes a new RollingUpdateStrategy.

func (*RollingUpdateStrategy) PrepareForRemoval

func (r *RollingUpdateStrategy) PrepareForRemoval(ctx context.Context, nodePoolDesc *api.NodePool) error

func (*RollingUpdateStrategy) Update

func (r *RollingUpdateStrategy) Update(ctx context.Context, nodePoolDesc *api.NodePool) error

Update performs a rolling update of a single node pool. Passing a context allows stopping the update loop in case the context is canceled.
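One step of a surge-limited roll can be sketched by picking at most `surge` outdated nodes (those whose generation is behind the pool's) to replace per round. The nextBatch helper below is a hypothetical illustration, not the package's actual update loop.

```go
package main

import "fmt"

// nextBatch returns the indices of up to `surge` nodes whose generation
// is behind the pool's generation, i.e. the next candidates to replace
// in a rolling update round. Hypothetical sketch only.
func nextBatch(poolGen int, nodeGens []int, surge int) []int {
	var batch []int
	for i, g := range nodeGens {
		if g < poolGen && len(batch) < surge {
			batch = append(batch, i)
		}
	}
	return batch
}

func main() {
	// Pool is at generation 2; nodes 0, 2, and 3 are still at generation 1.
	// With surge=2, only two of them are replaced in this round.
	fmt.Println(nextBatch(2, []int{1, 2, 1, 1}, 2)) // [0 2]
}
```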

type UpdateStrategy

type UpdateStrategy interface {
	Update(ctx context.Context, nodePool *api.NodePool) error
	PrepareForRemoval(ctx context.Context, nodePool *api.NodePool) error
}

UpdateStrategy defines an interface for performing cluster node updates.
