predicates

package
v1.2.0-alpha.7
Published: Feb 2, 2016 License: Apache-2.0 Imports: 8 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func CheckPodsExceedingFreeResources added in v1.1.0

func CheckPodsExceedingFreeResources(pods []*api.Pod, allocatable api.ResourceList) (fitting []*api.Pod, notFittingCPU, notFittingMemory []*api.Pod)

func MapPodsToMachines

func MapPodsToMachines(lister algorithm.PodLister) (map[string][]*api.Pod, error)

MapPodsToMachines obtains a list of pods and pivots that list into a map where the keys are host names and the values are the list of pods running on that host.
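A minimal sketch of that pivot, assuming pods record their assigned host in Spec.NodeName as they do in this API version (the helper name is illustrative, not this package's implementation):

import "k8s.io/kubernetes/pkg/api"

// pivotPodsByHost groups scheduled pods by the node they run on,
// mirroring the map that MapPodsToMachines builds from a PodLister.
func pivotPodsByHost(pods []*api.Pod) map[string][]*api.Pod {
	machineToPods := map[string][]*api.Pod{}
	for _, pod := range pods {
		host := pod.Spec.NodeName
		if host == "" {
			continue // pod not yet bound to a node
		}
		machineToPods[host] = append(machineToPods[host], pod)
	}
	return machineToPods
}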

func NewNodeLabelPredicate

func NewNodeLabelPredicate(info NodeInfo, labels []string, presence bool) algorithm.FitPredicate

func NewResourceFitPredicate

func NewResourceFitPredicate(info NodeInfo) algorithm.FitPredicate

func NewSelectorMatchPredicate

func NewSelectorMatchPredicate(info NodeInfo) algorithm.FitPredicate

func NewServiceAffinityPredicate

func NewServiceAffinityPredicate(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, nodeInfo NodeInfo, labels []string) algorithm.FitPredicate

func NewVolumeZonePredicate added in v1.2.0

func NewVolumeZonePredicate(nodeInfo NodeInfo, pvInfo PersistentVolumeInfo, pvcInfo PersistentVolumeClaimInfo) algorithm.FitPredicate

NewVolumeZonePredicate evaluates whether a pod can fit given the volumes it requests, since some volumes may have zone scheduling constraints. The requirement is that any zone-labels on a volume must match the equivalent zone-labels on the node. It is OK for the node to have more zone-label constraints (for example, a hypothetical replicated volume might allow region-wide access).

Currently this is only supported with PersistentVolumeClaims, and looks to the labels only on the bound PersistentVolume.

Working with volumes declared inline in the pod specification (i.e. not using a PersistentVolume) is likely to be harder, as it would require determining the zone of a volume during scheduling, and that is likely to require calling out to the cloud provider. It seems that we are moving away from inline volume declarations anyway.
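A hedged sketch of the label comparison this implies, using the era's well-known zone and region label keys (the exact keys and the helper are assumptions, not taken from this package):

import "k8s.io/kubernetes/pkg/api"

// volumeZoneLabelsMatch reports whether every zone-related label on the
// bound PersistentVolume is present on the node with an identical value.
func volumeZoneLabelsMatch(pv *api.PersistentVolume, node *api.Node) bool {
	zoneKeys := []string{
		"failure-domain.beta.kubernetes.io/zone",
		"failure-domain.beta.kubernetes.io/region",
	}
	for _, key := range zoneKeys {
		want, constrained := pv.Labels[key]
		if !constrained {
			continue // the volume carries no constraint for this key
		}
		if node.Labels[key] != want {
			return false // node sits in a different zone/region
		}
	}
	return true
}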

func NoDiskConflict

func NoDiskConflict(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)

NoDiskConflict evaluates whether a pod can fit given the volumes it requests and the volumes already mounted on the node. If a requested volume is already mounted there, another pod using the same volume cannot be scheduled on that node. This is specific to GCE PD, Amazon EBS, and Ceph RBD for now:

- GCE PD allows multiple mounts as long as they are all read-only.
- AWS EBS forbids any two pods from mounting the same volume ID.
- Ceph RBD forbids two pods that share at least one monitor and match on pool and image.

TODO: migrate this into some per-volume specific code?
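For illustration, a hedged sketch of the GCE PD rule in isolation (the helper is hypothetical; AWS EBS and Ceph RBD apply stricter checks):

import "k8s.io/kubernetes/pkg/api"

// gcePDConflict reports whether a requested volume collides with an
// already-mounted one under the GCE PD rule: sharing the same disk is
// allowed only when both mounts are read-only.
func gcePDConflict(requested, mounted api.Volume) bool {
	a, b := requested.GCEPersistentDisk, mounted.GCEPersistentDisk
	if a == nil || b == nil {
		return false // at least one of the volumes is not a GCE PD
	}
	return a.PDName == b.PDName && !(a.ReadOnly && b.ReadOnly)
}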

func PodFitsHost

func PodFitsHost(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)

func PodFitsHostPorts added in v1.1.1

func PodFitsHostPorts(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)

func PodMatchesNodeLabels

func PodMatchesNodeLabels(pod *api.Pod, node *api.Node) bool

Types

type CachedNodeInfo added in v1.2.0

type CachedNodeInfo struct {
	*cache.StoreToNodeLister
}

func (*CachedNodeInfo) GetNodeInfo added in v1.2.0

func (c *CachedNodeInfo) GetNodeInfo(id string) (*api.Node, error)

GetNodeInfo returns cached data for the node 'id'.

type ClientNodeInfo

type ClientNodeInfo struct {
	*client.Client
}

func (ClientNodeInfo) GetNodeInfo

func (nodes ClientNodeInfo) GetNodeInfo(nodeID string) (*api.Node, error)

type InsufficientResourceError added in v1.2.0

type InsufficientResourceError struct {
	// contains filtered or unexported fields
}

InsufficientResourceError is an error type that indicates which kind of resource limit was hit, causing the pod to fail to fit on the node.

func (*InsufficientResourceError) Error added in v1.2.0

func (e *InsufficientResourceError) Error() string
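A usage sketch for distinguishing resource exhaustion from other predicate failures, assuming the resource-fit predicate surfaces this type as its returned error (the surrounding names are illustrative):

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/plugin/pkg/scheduler/algorithm"
	"k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates"
)

// reportFit runs a fit predicate and singles out resource exhaustion.
func reportFit(fit algorithm.FitPredicate, pod *api.Pod, existing []*api.Pod, node string) {
	fits, err := fit(pod, existing, node)
	if insufficient, ok := err.(*predicates.InsufficientResourceError); ok {
		fmt.Println("does not fit:", insufficient.Error())
		return
	}
	if err != nil {
		fmt.Println("predicate error:", err) // e.g. node lookup failed
		return
	}
	fmt.Println("fits:", fits)
}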

type NodeInfo

type NodeInfo interface {
	GetNodeInfo(nodeID string) (*api.Node, error)
}
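Any source of node objects satisfies the interface. A toy map-backed implementation, hedged as an illustration (StaticNodeInfo, CachedNodeInfo, and ClientNodeInfo are the package's real implementations):

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
)

// mapNodeInfo serves nodes from an in-memory map keyed by node name.
type mapNodeInfo map[string]*api.Node

func (m mapNodeInfo) GetNodeInfo(nodeID string) (*api.Node, error) {
	if node, ok := m[nodeID]; ok {
		return node, nil
	}
	return nil, fmt.Errorf("node %q not found", nodeID)
}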

type NodeLabelChecker

type NodeLabelChecker struct {
	// contains filtered or unexported fields
}

func (*NodeLabelChecker) CheckNodeLabelPresence

func (n *NodeLabelChecker) CheckNodeLabelPresence(pod *api.Pod, existingPods []*api.Pod, nodeID string) (bool, error)

CheckNodeLabelPresence checks whether all of the specified labels exist on a node, regardless of their value. If "presence" is false, it returns false if any of the requested labels matches any of the node's labels, and true otherwise. If "presence" is true, it returns false if any of the requested labels does not match any of the node's labels, and true otherwise.

Consider the case where nodes are placed in regions/zones/racks, identified by labels. In some cases it is required that only nodes that are part of ANY of the defined regions/zones/racks be selected.

Alternatively, eliminating nodes that carry a certain label, regardless of its value, is also useful. A node may have a label with "retiring" as the key and a date as the value, and it may be desirable to avoid scheduling new pods on it.
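A usage sketch covering both modes, assuming nodeInfo implements NodeInfo and pod/existingPods are at hand (label keys are illustrative):

// presence=false: reject nodes that carry a "retiring" label at all.
avoidRetiring := predicates.NewNodeLabelPredicate(nodeInfo, []string{"retiring"}, false)

// presence=true: require nodes to carry both "region" and "zone" labels.
requireTopology := predicates.NewNodeLabelPredicate(nodeInfo, []string{"region", "zone"}, true)

fits, err := avoidRetiring(pod, existingPods, "node-1")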

type NodeSelector

type NodeSelector struct {
	// contains filtered or unexported fields
}

func (*NodeSelector) PodSelectorMatches

func (n *NodeSelector) PodSelectorMatches(pod *api.Pod, existingPods []*api.Pod, nodeID string) (bool, error)

type PersistentVolumeClaimInfo added in v1.2.0

type PersistentVolumeClaimInfo interface {
	GetPersistentVolumeClaimInfo(namespace string, pvcID string) (*api.PersistentVolumeClaim, error)
}

type PersistentVolumeInfo added in v1.2.0

type PersistentVolumeInfo interface {
	GetPersistentVolumeInfo(pvID string) (*api.PersistentVolume, error)
}

type ResourceFit

type ResourceFit struct {
	// contains filtered or unexported fields
}

func (*ResourceFit) PodFitsResources

func (r *ResourceFit) PodFitsResources(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)

PodFitsResources calculates fit based on requested resources rather than resources actually in use.
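A minimal sketch of fit-by-requests, assuming the node's allocatable capacity is already known; the real predicate resolves the node via NodeInfo and reports InsufficientResourceError on failure:

import "k8s.io/kubernetes/pkg/api"

// fitsByRequests sums container requests for the incoming pod plus the
// already-scheduled pods and compares against allocatable capacity;
// actual runtime usage is deliberately ignored.
func fitsByRequests(pod *api.Pod, existing []*api.Pod, allocatable api.ResourceList) bool {
	var milliCPU, memory int64
	add := func(p *api.Pod) {
		for _, c := range p.Spec.Containers {
			milliCPU += c.Resources.Requests.Cpu().MilliValue()
			memory += c.Resources.Requests.Memory().Value()
		}
	}
	add(pod)
	for _, p := range existing {
		add(p)
	}
	return milliCPU <= allocatable.Cpu().MilliValue() &&
		memory <= allocatable.Memory().Value()
}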

type ServiceAffinity

type ServiceAffinity struct {
	// contains filtered or unexported fields
}

func (*ServiceAffinity) CheckServiceAffinity

func (s *ServiceAffinity) CheckServiceAffinity(pod *api.Pod, existingPods []*api.Pod, nodeID string) (bool, error)

CheckServiceAffinity ensures that only nodes matching the specified labels are considered for scheduling. The set of labels to consider is provided to the ServiceAffinity struct. The pod is checked for those labels; for any labels it lacks, the values are taken from the nodes hosting the other pods of the same service (its peers).

We add an implicit selector requiring some particular value V for label L to a pod if:

- L is listed in the ServiceAffinity object that is passed into the function,
- the pod does not have any NodeSelector for L, and
- some other pod from the same service is already scheduled onto a node that has value V for label L.
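A construction sketch, assuming podLister, serviceLister, and nodeInfo are available; restricting affinity to a "region" label keeps all pods of a service within one region (the label key is illustrative):

affinity := predicates.NewServiceAffinityPredicate(
	podLister, serviceLister, nodeInfo, []string{"region"})

fits, err := affinity(pod, existingPods, "node-1")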

type StaticNodeInfo

type StaticNodeInfo struct {
	*api.NodeList
}

func (StaticNodeInfo) GetNodeInfo

func (nodes StaticNodeInfo) GetNodeInfo(nodeID string) (*api.Node, error)

type VolumeZoneChecker added in v1.2.0

type VolumeZoneChecker struct {
	// contains filtered or unexported fields
}
