README
> [!WARNING]
> multicluster-runtime is an experiment to add multi-cluster support on top of controller-runtime. It is not generally consumable yet. Use at your own risk. Contributions, though, are highly welcome.
Related controller-runtime design: https://github.com/kubernetes-sigs/controller-runtime/pull/2746
multicluster-runtime

Multi-cluster controllers with controller-runtime
- no fork, no go mod replace: clean extension to upstream controller-runtime.
- universal: kind, cluster-api, Gardener (tbd), kcp (WIP), BYO. Cluster providers make controller-runtime multi-cluster aware.
- seamless: add multi-cluster support without compromising on single-cluster. Run in either mode without code changes to the reconcilers.
Uniform Reconcilers
Run the same reconciler against many clusters:
- The reconciler reads from cluster A and writes to cluster A.
- The reconciler reads from cluster B and writes to cluster B.
- The reconciler reads from cluster C and writes to cluster C.
This is the simplest case. Many existing reconcilers can easily be adapted to work like this without major code changes. The resulting controllers work in the multi-cluster setting, but also in the classical single-cluster setup, all from the same code base.
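To make that concrete, here is a minimal sketch of a uniform reconciler. It only uses API that appears in the full example under "How does it look?" (`mgr.GetCluster`, `req.ClusterName`, `req.Request`); the function name, the annotation key, and the assumption that `mcmanager.New` returns an `mcmanager.Manager` are illustrative, not taken from the repository.

```go
// Sketch of a uniform reconciler: read from and write to the cluster the
// request came from, and nothing else.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"

	mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
	mcreconcile "sigs.k8s.io/multicluster-runtime/pkg/reconcile"
)

// newUniformReconciler returns a reconciler that stays within the cluster
// named in each request. The annotation key is purely illustrative.
func newUniformReconciler(mgr mcmanager.Manager) mcreconcile.Func {
	return func(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
		// Look up the cluster the request originates from.
		cl, err := mgr.GetCluster(ctx, req.ClusterName)
		if err != nil {
			return ctrl.Result{}, err
		}

		cm := &corev1.ConfigMap{}
		if err := cl.GetClient().Get(ctx, req.Request.NamespacedName, cm); err != nil {
			if apierrors.IsNotFound(err) {
				return ctrl.Result{}, nil
			}
			return ctrl.Result{}, err
		}

		// Write back to the very same cluster the object was read from.
		if cm.Annotations == nil {
			cm.Annotations = map[string]string{}
		}
		cm.Annotations["example.com/processed"] = "true"
		return ctrl.Result{}, cl.GetClient().Update(ctx, cm)
	}
}
```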
Multi-Cluster-aware Reconcilers
Run reconcilers that listen to some cluster(s) and operate on other clusters.
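A minimal sketch of that pattern, assuming a provider that exposes a cluster named "hub": the reconciler reads the object from the cluster the event came from and writes a copy into the hub cluster. The cluster name, the mirroring behaviour, and the function name are assumptions for illustration only.

```go
// Sketch of a multi-cluster-aware reconciler: read from the cluster the
// request came from, write into another, fixed cluster.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"

	mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
	mcreconcile "sigs.k8s.io/multicluster-runtime/pkg/reconcile"
)

// newCopyToHubReconciler mirrors ConfigMaps from any engaged cluster into a
// cluster named "hub" (both the name and the behaviour are made up here).
func newCopyToHubReconciler(mgr mcmanager.Manager) mcreconcile.Func {
	return func(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
		// Source: the cluster the event originated from.
		src, err := mgr.GetCluster(ctx, req.ClusterName)
		if err != nil {
			return ctrl.Result{}, err
		}
		// Destination: some other cluster, looked up by name.
		hub, err := mgr.GetCluster(ctx, "hub")
		if err != nil {
			return ctrl.Result{}, err
		}

		cm := &corev1.ConfigMap{}
		if err := src.GetClient().Get(ctx, req.Request.NamespacedName, cm); err != nil {
			if apierrors.IsNotFound(err) {
				return ctrl.Result{}, nil
			}
			return ctrl.Result{}, err
		}

		// Create a copy in the hub cluster; updates are left out for brevity.
		mirror := &corev1.ConfigMap{}
		mirror.Namespace = cm.Namespace
		mirror.Name = cm.Name + "-" + req.ClusterName
		mirror.Data = cm.Data
		if err := hub.GetClient().Create(ctx, mirror); err != nil && !apierrors.IsAlreadyExists(err) {
			return ctrl.Result{}, err
		}
		return ctrl.Result{}, nil
	}
}
```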
Principles
- multicluster-runtime is a friendly extension of controller-runtime.
- multicluster-runtime loves ❤️ contributions.
- multicluster-runtime follows controller-runtime releases.
- multicluster-runtime is developed as if it were part of controller-runtime (quality standards, naming, style).
- multicluster-runtime could be a testbed for native controller-runtime functionality, eventually becoming superfluous.
- multicluster-runtime is provider agnostic, but may contain providers, each with their own go.mod file and dedicated OWNERS file.
How does it look?
```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/manager/signals"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	mcbuilder "sigs.k8s.io/multicluster-runtime/pkg/builder"
	mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
	mcreconcile "sigs.k8s.io/multicluster-runtime/pkg/reconcile"
	"sigs.k8s.io/multicluster-runtime/providers/kind"
)

func main() {
	ctx := signals.SetupSignalHandler()

	// Cluster provider; here the provider for local kind clusters.
	provider := kind.New()

	// The multi-cluster manager wraps the usual controller-runtime manager.
	mgr, err := mcmanager.New(ctrl.GetConfigOrDie(), provider, manager.Options{})
	if err != nil {
		log.Fatalf("unable to create manager: %v", err)
	}

	err = mcbuilder.ControllerManagedBy(mgr).
		Named("multicluster-configmaps").
		For(&corev1.ConfigMap{}).
		Complete(mcreconcile.Func(
			func(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
				// Every request carries the name of the cluster it came from.
				cl, err := mgr.GetCluster(ctx, req.ClusterName)
				if err != nil {
					return reconcile.Result{}, err
				}

				cm := &corev1.ConfigMap{}
				if err := cl.GetClient().Get(ctx, req.Request.NamespacedName, cm); err != nil {
					if apierrors.IsNotFound(err) {
						return reconcile.Result{}, nil
					}
					return reconcile.Result{}, err
				}

				log.Printf("ConfigMap %s/%s in cluster %q", cm.Namespace, cm.Name, req.ClusterName)

				return ctrl.Result{}, nil
			},
		))
	if err != nil {
		log.Fatalf("unable to create controller: %v", err)
	}

	// Run the provider alongside the manager; it discovers clusters and makes
	// them available to the manager.
	go provider.Run(ctx, mgr)
	if err := mgr.Start(ctx); err != nil {
		log.Fatalf("unable to run manager: %v", err)
	}
}
```
FAQ
How is it different from https://github.com/admiraltyio/multicluster-controller?
In contrast to https://github.com/admiraltyio/multicluster-controller, multicluster-runtime keeps building on controller-runtime for most of its constructs. It is not replacing the manager, the controller or the cluster. To a large degree, this became possible through the extensive use of generics in controller-runtime. Most multicluster-runtime constructs are just type instantiations with a little glue.
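As a rough illustration of "type instantiations with a little glue" (a sketch based on the usage in the example above, not the authoritative source): the multi-cluster request is essentially the controller-runtime request plus a cluster name.

```go
package example

import "sigs.k8s.io/controller-runtime/pkg/reconcile"

// Illustrative sketch, not the actual definition: a multi-cluster request
// wraps the usual reconcile.Request and adds the name of the cluster it
// belongs to, which is why reconcilers can use req.ClusterName and
// req.Request as in the example above.
type Request struct {
	reconcile.Request

	// ClusterName identifies the cluster the request refers to.
	ClusterName string
}
```

Generic ("Typed") controller-runtime constructs, such as reconcile.TypedReconciler, can then be instantiated with such a request type; most of multicluster-runtime's builder, manager, and reconcile packages are such instantiations plus glue.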
Can I dynamically load provider plugins?
No, plugins are out of scope for now. multicluster-runtime needs source code changes to
- enable multi-cluster support, by replacing some controller-runtime imports with their multicluster-runtime equivalents (see the import sketch below), and
- wire up the supported providers.

The provider interface is simple, so some plugin mechanism is not ruled out in the future.
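For illustration, the import swap in the first point looks roughly like this; the paths are the ones used in the example above, and only the import block is shown:

```go
import (
	// Instead of the single-cluster packages
	//   "sigs.k8s.io/controller-runtime/pkg/builder"
	//   "sigs.k8s.io/controller-runtime/pkg/manager"
	//   "sigs.k8s.io/controller-runtime/pkg/reconcile"
	// import their multicluster-runtime equivalents:
	mcbuilder "sigs.k8s.io/multicluster-runtime/pkg/builder"
	mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
	mcreconcile "sigs.k8s.io/multicluster-runtime/pkg/reconcile"
)
```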
Documentation
Index
Constants
This section is empty.
Variables
```go
var (
	// RegisterFlags registers flag variables to the given FlagSet if not already registered.
	// It uses the default command line FlagSet, if none is provided. Currently, it only registers the kubeconfig flag.
	RegisterFlags = config.RegisterFlags

	// GetConfigOrDie creates a *rest.Config for talking to a Kubernetes apiserver.
	// If --kubeconfig is set, will use the kubeconfig file at that location. Otherwise will assume running
	// in cluster and use the cluster provided kubeconfig.
	//
	// Will log an error and exit if there is an error creating the rest.Config.
	GetConfigOrDie = config.GetConfigOrDie

	// GetConfig creates a *rest.Config for talking to a Kubernetes apiserver.
	// If --kubeconfig is set, will use the kubeconfig file at that location. Otherwise will assume running
	// in cluster and use the cluster provided kubeconfig.
	//
	// Config precedence:
	//
	// * --kubeconfig flag pointing at a file
	//
	// * KUBECONFIG environment variable pointing at a file
	//
	// * In-cluster config if running in cluster
	//
	// * $HOME/.kube/config if exists.
	GetConfig = config.GetConfig

	// NewControllerManagedBy returns a new controller builder that will be started by the provided Manager.
	NewControllerManagedBy = mcbuilder.ControllerManagedBy

	// NewWebhookManagedBy returns a new webhook builder that will be started by the provided Manager.
	NewWebhookManagedBy = builder.WebhookManagedBy

	// NewManager returns a new Manager for creating Controllers.
	// Note that if ContentType in the given config is not set, "application/vnd.kubernetes.protobuf"
	// will be used for all built-in resources of Kubernetes, and "application/json" is for other types
	// including all CRD resources.
	NewManager = mcmanager.New

	// CreateOrUpdate creates or updates the given object obj in the Kubernetes
	// cluster. The object's desired state should be reconciled with the existing
	// state using the passed in ReconcileFn. obj must be a struct pointer so that
	// obj can be updated with the content returned by the Server.
	//
	// It returns the executed operation and an error.
	CreateOrUpdate = controllerutil.CreateOrUpdate

	// SetControllerReference sets owner as a Controller OwnerReference on owned.
	// This is used for garbage collection of the owned object and for
	// reconciling the owner object on changes to owned (with a Watch + EnqueueRequestForOwner).
	// Since only one OwnerReference can be a controller, it returns an error if
	// there is another OwnerReference with Controller flag set.
	SetControllerReference = controllerutil.SetControllerReference

	// SetupSignalHandler registers for SIGTERM and SIGINT. A context is returned
	// which is canceled on one of these signals. If a second signal is caught, the program
	// is terminated with exit code 1.
	SetupSignalHandler = signals.SetupSignalHandler

	// Log is the base logger used by controller-runtime. It delegates
	// to another logr.Logger. You *must* call SetLogger to
	// get any actual logging.
	Log = log.Log

	// LoggerFrom returns a logger with predefined values from a context.Context.
	// The logger, when used with controllers, can be expected to contain basic information about the object
	// that's being reconciled like:
	// - `reconciler group` and `reconciler kind` coming from the For(...) object passed in when building a controller.
	// - `name` and `namespace` from the reconciliation request.
	//
	// This is meant to be used with the context supplied in a struct that satisfies the Reconciler interface.
	LoggerFrom = log.FromContext

	// LoggerInto takes a context and sets the logger as one of its keys.
	//
	// This is meant to be used in reconcilers to enrich the logger within a context with additional values.
	LoggerInto = log.IntoContext

	// SetLogger sets a concrete logging implementation for all deferred Loggers.
	SetLogger = log.SetLogger
)
```
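These aliases let the root package play roughly the same role the `ctrl` alias of controller-runtime plays in single-cluster projects. A minimal sketch, assuming the root package sigs.k8s.io/multicluster-runtime is imported under an explicit alias (the alias `mcr` is just a choice here):

```go
package main

import (
	"log"

	"sigs.k8s.io/controller-runtime/pkg/manager"

	mcr "sigs.k8s.io/multicluster-runtime"
	"sigs.k8s.io/multicluster-runtime/providers/kind"
)

func main() {
	ctx := mcr.SetupSignalHandler()

	// Same wiring as in the README example above, but going through the
	// root-package aliases (NewManager, GetConfigOrDie, ...).
	provider := kind.New()
	mgr, err := mcr.NewManager(mcr.GetConfigOrDie(), provider, manager.Options{})
	if err != nil {
		log.Fatalf("unable to create manager: %v", err)
	}

	go provider.Run(ctx, mgr)
	if err := mgr.Start(ctx); err != nil {
		log.Fatalf("unable to run manager: %v", err)
	}
}
```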
Functions
This section is empty.
Types
type Builder
Builder builds an Application ControllerManagedBy (e.g. Operator) and returns a manager.Manager to start it.
type GroupResource
type GroupResource = schema.GroupResource
GroupResource specifies a Group and a Resource, but does not force a version. This is useful for identifying concepts during lookup stages without having partially valid types.
type GroupVersion
type GroupVersion = schema.GroupVersion
GroupVersion contains the "group" and the "version", which uniquely identifies the API.
type Manager
Manager initializes shared dependencies such as Caches and Clients, and provides them to Runnables. A Manager is required to create Controllers.
type ObjectMeta
type ObjectMeta = metav1.ObjectMeta
ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.
type Request
type Request = mcreconcile.Request
Request contains the information necessary to reconcile a Kubernetes object. This includes the information to uniquely identify the object - its Name and Namespace. It does NOT contain information about any specific Event or the object contents itself.
type SchemeBuilder
SchemeBuilder builds a new Scheme for mapping go types to Kubernetes GroupVersionKinds.
Directories
Path | Synopsis
---|---
examples |
hack |
internal |
pkg |
pkg/builder | Package builder wraps other multicluster-runtime libraries and exposes simple patterns for building common Controllers.
providers |
providers/kind | Module