capacityscheduling

package
v0.0.7
Published: Nov 29, 2022 License: Apache-2.0 Imports: 30 Imported by: 0

README

Overview

This folder holds the capacity scheduling plugin implementation, based on the Capacity Scheduling KEP.

Maturity Level

  • 💡 Sample (for demonstration and inspiration purposes)
  • 👶 Alpha (used in companies for pilot projects)
  • 👦 Beta (used in companies and developed actively)
  • 👨 Stable (used in companies for production workloads)

Tutorial

Example config:

apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: false
clientConnection:
  kubeconfig: "REPLACE_ME_WITH_KUBE_CONFIG_PATH"
profiles:
- schedulerName: default-scheduler
  plugins:
    preFilter:
      enabled:
      - name: CapacityScheduling
    postFilter:
      enabled:
      - name: CapacityScheduling
      disabled:
      - name: "*"
    reserve:
      enabled:
      - name: CapacityScheduling

ElasticQuota

apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: quota1
  namespace: quota1
spec:
  max:
    cpu: 6
  min:
    cpu: 4

  • max: the upper bound of the resource consumption of the consumers.
  • min: the minimum resources that are guaranteed to ensure the basic functionality/performance of the consumers.

Demo

We assume two elastic quotas are defined: quota1 (min: cpu 4, max: cpu 6) and quota2 (min: cpu 4, max: cpu 6). The entire cluster has 8 CPUs available, so the sum of the quotas' min values equals the cluster capacity.

  • create namespaces and ElasticQuotas
$ kubectl create ns quota1
$ kubectl create ns quota2
$ cat <<EOF | kubectl apply -f -
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: quota1
  namespace: quota1
spec:
  max:
    cpu: 6
  min:
    cpu: 4
EOF
$ cat <<EOF | kubectl apply -f -
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: quota2
  namespace: quota2
spec:
  max:
    cpu: 6
  min:
    cpu: 4
EOF
  • create app1
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: quota1
  labels:
    app: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            cpu: 2
          requests:
            cpu: 2
EOF

After the usage of quota1 reaches min(4), there are still 4 CPUs available in the cluster, so pods from quota1 keep getting scheduled until its usage reaches max(6). As a result, one pod remains pending.

$ kubectl get pods -n quota1
NAME          READY   STATUS    RESTARTS   AGE
nginx-27qr9   1/1     Running   0          32s
nginx-2mxdd   1/1     Running   0          32s
nginx-6gbgx   1/1     Running   0          32s
nginx-bxvcg   0/1     Pending   0          32s
  • create app2
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: quota2
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            cpu: 2
          requests:
            cpu: 2
EOF
$ kubectl get pods -n quota1
NAME          READY   STATUS        RESTARTS   AGE
nginx-27qr9   0/1     Terminating   0          6m49s
nginx-2mxdd   1/1     Running       0          6m49s
nginx-6gbgx   1/1     Running       0          6m49s
nginx-bxvcg   0/1     Pending       0          6m49s

$ kubectl get pods -n quota2
NAME          READY   STATUS    RESTARTS   AGE
nginx-4z2zd   1/1     Running   0          81s
nginx-6dfn9   1/1     Running   0          81s

When app2 is created, there are 2 CPUs left in the cluster, so its first pod is scheduled successfully. At that point the usage of quota2 has not yet reached min(4), while the usage of quota1 has exceeded its min(4). The second pod of quota2 can therefore preempt pod(s) in quota1 that are using CPUs beyond quota1's min. It eventually gets scheduled, bringing the usage of quota2 up to min(4) at the cost of preempting one pod in quota1.
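
To make this borrow-and-reclaim rule concrete, here is a minimal Go sketch of the decision the demo illustrates. The quotaUsage type and the canPreempt helper are illustrative only; the actual logic lives in the plugin's PostFilter extension point and operates on framework.Resource values.

package main

import "fmt"

// quotaUsage is a simplified view of an ElasticQuota's guaranteed min and
// current usage (CPUs).
type quotaUsage struct {
	Name      string
	Min, Used int64
}

// canPreempt reports whether a pending pod in the preemptor's quota may evict
// a pod in the victim's quota: the preemptor must still be below its
// guaranteed min, while the victim is borrowing beyond its min.
func canPreempt(preemptor, victim quotaUsage) bool {
	return preemptor.Used < preemptor.Min && victim.Used > victim.Min
}

func main() {
	quota1 := quotaUsage{Name: "quota1", Min: 4, Used: 6} // borrowed 2 CPUs beyond min
	quota2 := quotaUsage{Name: "quota2", Min: 4, Used: 2} // still below min
	fmt.Println(canPreempt(quota2, quota1)) // true: quota2's pending pod may preempt one of quota1's
}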

Documentation

Index

Constants

const (
	// Name is the name of the plugin used in Registry and configurations.
	Name = "CapacityScheduling"

	ElasticQuotaSnapshotKey = "ElasticQuotaSnapshot"
)

Variables

This section is empty.

Functions

func New

func New(obj runtime.Object, handle framework.Handle) (framework.Plugin, error)

New initializes a new plugin and returns it.
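
As a sketch of how this factory is typically wired into a scheduler binary: the import path below assumes the upstream scheduler-plugins layout and the kube-scheduler app package, so adapt it to your module.

package main

import (
	"os"

	"k8s.io/kubernetes/cmd/kube-scheduler/app"

	"sigs.k8s.io/scheduler-plugins/pkg/capacityscheduling"
)

func main() {
	// Register the CapacityScheduling factory (New) under its plugin Name and
	// hand it to the standard kube-scheduler command, which then honors the
	// KubeSchedulerConfiguration shown in the tutorial above.
	command := app.NewSchedulerCommand(
		app.WithPlugin(capacityscheduling.Name, capacityscheduling.New),
	)
	if err := command.Execute(); err != nil {
		os.Exit(1)
	}
}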

Types

type CapacityScheduling

type CapacityScheduling struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

CapacityScheduling is a plugin that implements the mechanism of capacity scheduling.

func (*CapacityScheduling) AddPod

func (c *CapacityScheduling) AddPod(ctx context.Context, cycleState *framework.CycleState, podToSchedule *v1.Pod, podToAdd *framework.PodInfo, nodeInfo *framework.NodeInfo) *framework.Status

AddPod from pre-computed data in cycleState.

func (*CapacityScheduling) EventsToRegister

func (c *CapacityScheduling) EventsToRegister() []framework.ClusterEvent

func (*CapacityScheduling) Name

func (c *CapacityScheduling) Name() string

Name returns name of the plugin. It is used in logs, etc.

func (*CapacityScheduling) PostFilter

func (*CapacityScheduling) PreFilter

PreFilter performs the following validations. 1. Check if the (pod.request + eq.allocated) is less than eq.max. 2. Check if the sum(eq's usage) > sum(eq's min).
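
A minimal sketch of those two checks with simplified, CPU-only types; the names are illustrative and not the plugin's internal API.

package main

import "fmt"

// quota is a simplified stand-in for the plugin's ElasticQuotaInfo
// (CPU in millicores).
type quota struct {
	Min, Max, Used int64
}

// fitsMax is check 1: the pod's request plus the quota's current usage must
// stay within the quota's max.
func fitsMax(eq quota, podReq int64) bool {
	return eq.Used+podReq <= eq.Max
}

// overMin is check 2: whether total usage across all quotas already exceeds
// the sum of their mins, i.e. some quota is borrowing capacity.
func overMin(quotas []quota) bool {
	var usedSum, minSum int64
	for _, eq := range quotas {
		usedSum += eq.Used
		minSum += eq.Min
	}
	return usedSum > minSum
}

func main() {
	q1 := quota{Min: 4000, Max: 6000, Used: 6000}
	q2 := quota{Min: 4000, Max: 6000, Used: 2000}
	fmt.Println(fitsMax(q2, 2000))        // true: 2000+2000 <= 6000
	fmt.Println(overMin([]quota{q1, q2})) // false: 8000 is not greater than 8000
}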

func (*CapacityScheduling) PreFilterExtensions

func (c *CapacityScheduling) PreFilterExtensions() framework.PreFilterExtensions

PreFilterExtensions returns prefilter extensions, pod add and remove.

func (*CapacityScheduling) RemovePod

func (c *CapacityScheduling) RemovePod(ctx context.Context, cycleState *framework.CycleState, podToSchedule *v1.Pod, podToRemove *framework.PodInfo, nodeInfo *framework.NodeInfo) *framework.Status

RemovePod from pre-computed data in cycleState.

func (*CapacityScheduling) Reserve

func (c *CapacityScheduling) Reserve(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status

func (*CapacityScheduling) Unreserve

func (c *CapacityScheduling) Unreserve(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string)

type ElasticQuotaInfo

type ElasticQuotaInfo struct {
	Namespace string

	Min  *framework.Resource
	Max  *framework.Resource
	Used *framework.Resource
	// contains filtered or unexported fields
}

ElasticQuotaInfo is a wrapper around an ElasticQuota that carries its scheduling-related information. Each namespace can have at most one ElasticQuota.

type ElasticQuotaInfos

type ElasticQuotaInfos map[string]*ElasticQuotaInfo

func NewElasticQuotaInfos

func NewElasticQuotaInfos() ElasticQuotaInfos

type ElasticQuotaSnapshotState

type ElasticQuotaSnapshotState struct {
	// contains filtered or unexported fields
}

ElasticQuotaSnapshotState stores the snapshot of elasticQuotas.

func (*ElasticQuotaSnapshotState) Clone

func (s *ElasticQuotaSnapshotState) Clone() framework.StateData

Clone the ElasticQuotaSnapshot state.

type PreFilterState

type PreFilterState struct {
	// contains filtered or unexported fields
}

PreFilterState computed at PreFilter and used at PostFilter or Reserve.

func (*PreFilterState) Clone

func (s *PreFilterState) Clone() framework.StateData

Clone the preFilter state.
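
For reference, a minimal sketch of the CycleState write/read pattern that such per-cycle state follows: PreFilter stores the state under a key, and later extension points (PostFilter, Reserve) read it back. The key and struct below are illustrative, not the plugin's unexported internals.

package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/scheduler/framework"
)

const demoStateKey framework.StateKey = "PreFilterCapacitySchedulingDemo"

// demoState plays the role of PreFilterState for this sketch.
type demoState struct {
	podReqMilliCPU int64
}

// Clone implements framework.StateData so the value can be copied along with
// the CycleState.
func (s *demoState) Clone() framework.StateData {
	c := *s
	return &c
}

func main() {
	cs := framework.NewCycleState()

	// Written at PreFilter time...
	cs.Write(demoStateKey, &demoState{podReqMilliCPU: 2000})

	// ...and read back at PostFilter/Reserve time.
	data, err := cs.Read(demoStateKey)
	if err != nil {
		fmt.Println("state not found:", err)
		return
	}
	fmt.Println(data.(*demoState).podReqMilliCPU)
}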
