package rc

v0.0.0-...-8223eb1
Published: Jan 14, 2020 License: Apache-2.0 Imports: 33 Imported by: 0

Documentation

Index

Constants

const (
	// This label is applied to pods owned by an RC.
	RCIDLabel = "replication_controller_id"

	// The maximum number of times to attempt a node allocation or deallocation.
	MaxAllocateAttempts = 10
)

Variables

This section is empty.

Functions

func CurrentPods

func CurrentPods(rcid fields.ID, labeler LabelMatcher) (types.PodLocations, error)

CurrentPods returns all pods managed by an RC with the given ID.
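
A minimal sketch of calling CurrentPods from outside the package. The helper name and the PodID/Node field names on types.PodLocation are assumptions for illustration; labeler can be any LabelMatcher implementation, such as the Labeler a farm already holds. Imports of the p2 packages are elided and follow the short names used in this documentation.

func listPods(rcID fields.ID, labeler rc.LabelMatcher) error {
	pods, err := rc.CurrentPods(rcID, labeler)
	if err != nil {
		return err
	}
	for _, pod := range pods {
		// Each entry pairs a pod with the node it is scheduled on.
		fmt.Printf("%s -> %s\n", pod.PodID, pod.Node)
	}
	return nil
}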

Types

type AuditLogStore

type AuditLogStore interface {
	Create(
		ctx context.Context,
		eventType audit.EventType,
		eventDetails json.RawMessage,
	) error
}

type Farm

type Farm struct {
	// contains filtered or unexported fields
}

The Farm is responsible for spawning and reaping replication controllers as they are added to and deleted from Consul. Multiple farms can exist simultaneously, but each one must hold a different Consul session. This ensures that the farms do not instantiate the same replication controller multiple times.

RC farms take an RC selector that is used to decide whether this farm should pick up a particular RC. This can be used to partition work among farms or to create test environments. Note that a selector is _not_ required for RC farms to cooperatively schedule work.

func NewFarm

func NewFarm(
	store consulStore,
	client consulutil.ConsulClient,
	rcStatusStore rcstatus.ConsulStore,
	auditLogStore AuditLogStore,
	rcs ReplicationControllerStore,
	rcLocker ReplicationControllerLocker,
	rcWatcher ReplicationControllerWatcher,
	txner transaction.Txner,
	healthChecker checker.HealthChecker,
	scheduler Scheduler,
	labeler Labeler,
	sessions <-chan string,
	logger logging.Logger,
	rcSelector klabels.Selector,
	alerter alerting.Alerter,
	rcWatchPauseTime time.Duration,
	artifactRegistry artifact.Registry,
	sdChecker ServiceDiscoveryChecker,
) *Farm

func (*Farm) Start

func (rcf *Farm) Start(quit <-chan struct{})

Start is a blocking function that monitors Consul for replication controllers. The Farm will attempt to claim replication controllers as they appear and, if successful, will start goroutines for those replication controllers to do their job. Closing the quit channel will cause this function to return, releasing all locks it holds.

Start is not safe for concurrent execution. Do not execute multiple concurrent instances of Start.
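
A minimal sketch of wiring up and running a farm, assuming every dependency passed to NewFarm has already been constructed elsewhere; the argument names are stand-ins and the one-second pause time is arbitrary.

farm := rc.NewFarm(
	store, client, rcStatusStore, auditLogStore, rcs, rcLocker, rcWatcher,
	txner, healthChecker, scheduler, labeler, sessions, logger,
	klabels.Everything(), // consider every RC; pass a narrower selector to partition work
	alerter,
	1*time.Second, // rcWatchPauseTime
	artifactRegistry, sdChecker,
)

quit := make(chan struct{})
go farm.Start(quit) // Start blocks, so run it in its own goroutine

// ... on shutdown:
close(quit) // Start returns, releasing all RC locks held by this farm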

type LabelMatcher

type LabelMatcher interface {
	GetMatches(selector klabels.Selector, labelType labels.Type) ([]labels.Labeled, error)
}

LabelMatcher is a subset of Labeler; its smaller surface makes it easier to call CurrentPods() from code where transactions are not available.

type Labeler

type Labeler interface {
	SetLabelsTxn(ctx context.Context, labelType labels.Type, id string, labels map[string]string) error
	RemoveLabelsTxn(ctx context.Context, labelType labels.Type, id string, keysToRemove []string) error
	GetLabels(labelType labels.Type, id string) (labels.Labeled, error)
	GetMatches(selector klabels.Selector, labelType labels.Type) ([]labels.Labeled, error)
}

Labeler is a subset of labels.Applicator.

type RCMutationLocker

type RCMutationLocker interface {
	LockForMutation(fields.ID, consul.Session) (consul.Unlocker, error)
}

type ReplicationController

type ReplicationController interface {
	// WatchDesires causes the replication controller to watch for any changes to its desired state.
	// It is expected that a replication controller is aware of a backing rcstore against which to perform this watch.
	// Upon seeing any changes, the replication controller schedules or unschedules pods to meet the desired state.
	// This spawns a goroutine that performs the watch and returns a channel on which errors are sent.
	// The caller must consume from the error channel.
	// Failure to do so blocks the replication controller from meeting desires.
	// Send a struct{} on the quit channel to stop the goroutine.
	// The error channel will be closed in response.
	WatchDesires(quit <-chan struct{}) <-chan error

	// CurrentPods() returns all pods managed by this replication controller.
	CurrentPods() (types.PodLocations, error)
}
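
A minimal sketch of the WatchDesires contract described above, assuming controller was built with New below: the caller drains the error channel and stops the watch by sending on quit.

quit := make(chan struct{})
errCh := controller.WatchDesires(quit)

go func() {
	// The error channel must be consumed; otherwise the RC is blocked from
	// meeting its desired state.
	for err := range errCh {
		log.Printf("replication controller error: %v", err)
	}
}()

// ... when this RC should stop reconciling:
quit <- struct{}{} // the RC closes errCh in response, ending the goroutine above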

func New

func New(
	rcID fields.ID,
	consulStore consulStore,
	consulClient consulutil.ConsulClient,
	rcLocker RCMutationLocker,
	rcStatusStore rcstatus.ConsulStore,
	auditLogStore AuditLogStore,
	txner transaction.Txner,
	rcWatcher ReplicationControllerWatcher,
	scheduler Scheduler,
	podApplicator Labeler,
	logger logging.Logger,
	alerter alerting.Alerter,
	healthChecker checker.HealthChecker,
	artifactRegistry artifact.Registry,
	sdChecker ServiceDiscoveryChecker,
) ReplicationController

type ReplicationControllerLocker

type ReplicationControllerLocker interface {
	RCMutationLocker
	LockForOwnership(rcID fields.ID, session consul.Session) (consul.Unlocker, error)
}

type ReplicationControllerStore

type ReplicationControllerStore interface {
	WatchRCKeysWithLockInfo(quit <-chan struct{}, pauseTime time.Duration) (<-chan []rcstore.RCLockResult, <-chan error)
	Get(id fields.ID) (fields.RC, error)
	List() ([]fields.RC, error)
}

type ReplicationControllerWatcher

type ReplicationControllerWatcher interface {
	Watch(rcID fields.ID, quit <-chan struct{}) (<-chan fields.RC, <-chan error)
}

type Scheduler

type Scheduler interface {
	// EligibleNodes returns the nodes that this RC may schedule the manifest on
	EligibleNodes(manifest.Manifest, klabels.Selector) ([]types.NodeName, error)

	// AllocateNodes() can be called by the RC when it needs more nodes to
	// schedule on than EligibleNodes() returns. It will return the newly
	// allocated nodes which will also appear in subsequent EligibleNodes()
	// calls
	AllocateNodes(manifest manifest.Manifest, nodeSelector klabels.Selector, allocationCount int, force bool) ([]types.NodeName, error)

	// DeallocateNodes() indicates to the scheduler that the RC has unscheduled
	// the pod from these nodes, meaning the scheduler can free the
	// resource reservations
	DeallocateNodes(nodeSelector klabels.Selector, nodes []types.NodeName) error
}

A Scheduler decides which nodes are appropriate for a pod to run on. It may take into account considerations such as existing load on the nodes, label selectors, and more.
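
As a sketch of how these methods compose, a caller might widen the eligible node set until it covers a desired replica count. The helper below is hypothetical; its retry policy simply reuses MaxAllocateAttempts and is not this package's actual reconciliation loop.

func ensureEligible(s rc.Scheduler, m manifest.Manifest, sel klabels.Selector, want int) ([]types.NodeName, error) {
	for attempt := 0; attempt < rc.MaxAllocateAttempts; attempt++ {
		eligible, err := s.EligibleNodes(m, sel)
		if err != nil {
			return nil, err
		}
		if len(eligible) >= want {
			return eligible, nil
		}
		// Request the shortfall; newly allocated nodes appear in subsequent
		// EligibleNodes calls.
		if _, err := s.AllocateNodes(m, sel, want-len(eligible), false); err != nil {
			return nil, err
		}
	}
	return nil, fmt.Errorf("could not reach %d eligible nodes after %d attempts", want, rc.MaxAllocateAttempts)
}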

type ServiceDiscoveryChecker

type ServiceDiscoveryChecker interface {
	// IsSyncedWithCluster can be called by the RC when it needs to know that
	// the service discovery system is up to date with P2's latest cluster
	// state
	IsSyncedWithCluster(rcID fields.ID) (bool, error)
}
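
A minimal sketch of gating work on service discovery sync; the helper, its polling loop, and the interval are illustrative, not how the RC itself consumes this interface.

func waitForSDSync(sd rc.ServiceDiscoveryChecker, rcID fields.ID) error {
	for {
		synced, err := sd.IsSyncedWithCluster(rcID)
		if err != nil {
			return err
		}
		if synced {
			return nil
		}
		time.Sleep(5 * time.Second) // arbitrary polling interval
	}
}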
