priorities

package
v1.2.3
Published: Apr 22, 2016 License: Apache-2.0 Imports: 9 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func BalancedResourceAllocation

func BalancedResourceAllocation(pod *api.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)

BalancedResourceAllocation favors nodes with balanced resource usage rate. BalancedResourceAllocation should **NOT** be used alone, and **MUST** be used together with LeastRequestedPriority. It calculates the difference between the cpu and memory fraction of capacity, and prioritizes the host based on how close the two metrics are to each other. Detail: score = 10 - abs(cpuFraction-memoryFraction)*10. The algorithm is partly inspired by: "Wei Huang et al. An Energy Efficient Virtual Machine Placement Algorithm with Balanced Resource Utilization"
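
The following is a minimal, self-contained sketch of the documented formula; the function and parameter names are illustrative and this is not the package's implementation.

	package main

	import (
		"fmt"
		"math"
	)

	// balancedScore illustrates score = 10 - abs(cpuFraction-memoryFraction)*10,
	// where each fraction is requested/capacity on the node.
	func balancedScore(cpuRequested, cpuCapacity, memRequested, memCapacity float64) int {
		cpuFraction := cpuRequested / cpuCapacity
		memFraction := memRequested / memCapacity
		// The closer the two utilization fractions are, the higher the score (max 10).
		return int(10 - math.Abs(cpuFraction-memFraction)*10)
	}

	func main() {
		// 50% CPU and 60% memory requested: the fractions differ by 0.1, so the score is 9.
		fmt.Println(balancedScore(500, 1000, 600, 1000))
	}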

func ImageLocalityPriority added in v1.2.0

func ImageLocalityPriority(pod *api.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)

ImageLocalityPriority is a priority function that favors nodes that already have the requested pod's container images. It detects whether the requested images are present on a node, and then calculates a score ranging from 0 to 10 based on the total size of those images.
- If none of the images are present, the node is given the lowest priority.
- If some of the images are present on a node, the larger their sizes' sum, the higher the node's priority.
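
As an illustration only, the sketch below maps the total size of a pod's images already present on a node to a 0-10 score; the minimum and maximum size thresholds are assumptions, not values taken from this package.

	package main

	import "fmt"

	const (
		assumedMinImageSize int64 = 23 * 1024 * 1024   // below this total, locality is ignored (assumed threshold)
		assumedMaxImageSize int64 = 1000 * 1024 * 1024 // above this total, the score is capped at 10 (assumed threshold)
	)

	// imageLocalityScore is a hypothetical helper, not the package's implementation.
	func imageLocalityScore(presentImageBytes int64) int64 {
		switch {
		case presentImageBytes <= assumedMinImageSize:
			return 0
		case presentImageBytes >= assumedMaxImageSize:
			return 10
		default:
			return (presentImageBytes - assumedMinImageSize) * 10 / (assumedMaxImageSize - assumedMinImageSize)
		}
	}

	func main() {
		fmt.Println(imageLocalityScore(500 * 1024 * 1024)) // roughly mid-range score for ~500 MB of local images
	}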

func LeastRequestedPriority

func LeastRequestedPriority(pod *api.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)

LeastRequestedPriority is a priority function that favors nodes with fewer requested resources. It calculates the fraction of memory and CPU requested by pods already scheduled on the node, and prioritizes nodes by the average of the two unused fractions: the less requested, the higher the score. Details: (cpu((capacity - sum(requested)) * 10 / capacity) + memory((capacity - sum(requested)) * 10 / capacity)) / 2
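
A minimal sketch of that formula, with illustrative parameter names (not the package's implementation):

	package main

	import "fmt"

	// leastRequestedScore computes (capacity - requested) * 10 / capacity for one resource.
	func leastRequestedScore(requested, capacity int64) int64 {
		if capacity == 0 || requested > capacity {
			return 0
		}
		return (capacity - requested) * 10 / capacity
	}

	func main() {
		cpuScore := leastRequestedScore(250, 1000) // 7: 75% of CPU capacity is unrequested
		memScore := leastRequestedScore(512, 2048) // 7: 75% of memory capacity is unrequested
		fmt.Println((cpuScore + memScore) / 2)     // final node score: 7
	}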

func NewNodeAffinityPriority added in v1.2.0

func NewNodeAffinityPriority(nodeLister algorithm.NodeLister) algorithm.PriorityFunction

func NewNodeLabelPriority

func NewNodeLabelPriority(label string, presence bool) algorithm.PriorityFunction

func NewSelectorSpreadPriority added in v1.1.0

func NewSelectorSpreadPriority(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, controllerLister algorithm.ControllerLister, replicaSetLister algorithm.ReplicaSetLister) algorithm.PriorityFunction

func NewServiceAntiAffinityPriority

func NewServiceAntiAffinityPriority(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, label string) algorithm.PriorityFunction

Types

type NodeAffinity added in v1.2.0

type NodeAffinity struct {
	// contains filtered or unexported fields
}

func (*NodeAffinity) CalculateNodeAffinityPriority added in v1.2.0

func (s *NodeAffinity) CalculateNodeAffinityPriority(pod *api.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)

CalculateNodeAffinityPriority prioritizes nodes according to the node affinity scheduling preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution. Each time a node matches a preferredSchedulingTerm, its score is increased by that term's Weight. Thus, the more preferredSchedulingTerms a node satisfies, and the higher the weights of the terms it satisfies, the higher the node's score.
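
A rough sketch of that weight accumulation; the preferredTerm type, the Matches predicate, and the example labels are assumptions for illustration, not this package's types.

	package main

	import "fmt"

	// preferredTerm is a hypothetical stand-in for a PreferredSchedulingTerm.
	type preferredTerm struct {
		Weight  int
		Matches func(nodeLabels map[string]string) bool
	}

	// nodeAffinityCounts sums the weights of the terms each node satisfies.
	func nodeAffinityCounts(nodes map[string]map[string]string, terms []preferredTerm) map[string]int {
		counts := map[string]int{}
		for name, labels := range nodes {
			counts[name] = 0
			for _, term := range terms {
				if term.Matches(labels) {
					counts[name] += term.Weight // each satisfied term adds its weight
				}
			}
		}
		return counts
	}

	func main() {
		nodes := map[string]map[string]string{
			"node-a": {"zone": "us-east-1a", "disk": "ssd"},
			"node-b": {"zone": "us-east-1b"},
		}
		terms := []preferredTerm{
			{Weight: 2, Matches: func(l map[string]string) bool { return l["disk"] == "ssd" }},
			{Weight: 1, Matches: func(l map[string]string) bool { return l["zone"] == "us-east-1a" }},
		}
		fmt.Println(nodeAffinityCounts(nodes, terms)) // map[node-a:3 node-b:0]
	}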

type NodeLabelPrioritizer

type NodeLabelPrioritizer struct {
	// contains filtered or unexported fields
}

func (*NodeLabelPrioritizer) CalculateNodeLabelPriority

func (n *NodeLabelPrioritizer) CalculateNodeLabelPriority(pod *api.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)

CalculateNodeLabelPriority checks whether a particular label exists on a node or not, regardless of its value. If presence is true, prioritizes nodes that have the specified label, regardless of value. If presence is false, prioritizes nodes that do not have the specified label.
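
A tiny sketch of that presence check, with an illustrative function name (not the package's implementation): a node scores 10 when the label's presence matches the desired presence flag, and 0 otherwise.

	package main

	import "fmt"

	func nodeLabelScore(nodeLabels map[string]string, label string, presence bool) int {
		_, exists := nodeLabels[label]
		if exists == presence {
			return 10 // the node's label presence matches what the prioritizer wants
		}
		return 0
	}

	func main() {
		labels := map[string]string{"node-role.kubernetes.io/worker": ""}
		fmt.Println(nodeLabelScore(labels, "node-role.kubernetes.io/worker", true)) // 10: label present, presence=true
		fmt.Println(nodeLabelScore(labels, "dedicated", false))                     // 10: label absent, presence=false
	}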

type SelectorSpread added in v1.1.0

type SelectorSpread struct {
	// contains filtered or unexported fields
}

func (*SelectorSpread) CalculateSpreadPriority added in v1.1.0

func (s *SelectorSpread) CalculateSpreadPriority(pod *api.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)

CalculateSpreadPriority spreads pods across hosts and zones, considering pods belonging to the same service or replication controller. When a pod is scheduled, it looks for services or RCs that match the pod, then finds existing pods that match those selectors. It favors nodes that have fewer existing matching pods, i.e. it pushes the scheduler towards a node with the smallest number of pods that match the same service or RC selectors as the pod being scheduled. Where zone information is included on the nodes, it favors nodes in zones with fewer existing matching pods.
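
The sketch below illustrates the spreading idea only: nodes with fewer matching pods score higher. The linear scaling against the busiest node is an assumption for illustration, not this package's exact scoring.

	package main

	import "fmt"

	// spreadScores assigns a 0-10 score per node from the count of matching pods already on it.
	func spreadScores(matchingPodsPerNode map[string]int) map[string]int {
		maxCount := 0
		for _, c := range matchingPodsPerNode {
			if c > maxCount {
				maxCount = c
			}
		}
		scores := map[string]int{}
		for node, c := range matchingPodsPerNode {
			if maxCount == 0 {
				scores[node] = 10 // no matching pods anywhere: every node scores the maximum
				continue
			}
			scores[node] = 10 * (maxCount - c) / maxCount
		}
		return scores
	}

	func main() {
		fmt.Println(spreadScores(map[string]int{"node-a": 0, "node-b": 2, "node-c": 4}))
		// map[node-a:10 node-b:5 node-c:0]
	}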

type ServiceAntiAffinity

type ServiceAntiAffinity struct {
	// contains filtered or unexported fields
}

func (*ServiceAntiAffinity) CalculateAntiAffinityPriority

func (s *ServiceAntiAffinity) CalculateAntiAffinityPriority(pod *api.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)

CalculateAntiAffinityPriority spreads pods by minimizing the number of pods belonging to the same service on machines with the same value for a particular label. The label to be considered is provided to the struct (ServiceAntiAffinity).
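
As an illustration, the sketch below counts a service's pods per value of the configured label (for example a zone label) and prefers nodes whose label value hosts fewer of them; the linear 0-10 scaling is an assumption, not this package's exact scoring.

	package main

	import "fmt"

	// antiAffinityScores scores each node by how few of the service's pods share its label value.
	func antiAffinityScores(nodeLabelValue map[string]string, podsPerLabelValue map[string]int) map[string]int {
		maxCount := 0
		for _, c := range podsPerLabelValue {
			if c > maxCount {
				maxCount = c
			}
		}
		scores := map[string]int{}
		for node, value := range nodeLabelValue {
			if maxCount == 0 {
				scores[node] = 10
				continue
			}
			scores[node] = 10 * (maxCount - podsPerLabelValue[value]) / maxCount
		}
		return scores
	}

	func main() {
		nodes := map[string]string{"node-a": "zone-1", "node-b": "zone-2"}
		pods := map[string]int{"zone-1": 3, "zone-2": 1}
		fmt.Println(antiAffinityScores(nodes, pods)) // map[node-a:0 node-b:6]
	}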
