crc-operator

module
v0.0.1
Published: Jun 17, 2020 License: Apache-2.0

README

Experimental CodeReady Containers (CRC) Operator

This is an unofficial, experimental operator whose high-level goal is to let users log in to a shared OpenShift 4 cluster, click a button, and get their own private OpenShift 4 cluster sandbox with full admin access in 5 minutes or less.

It does this by using CodeReady Containers (CRC) virtual machines and Container-native Virtualization (CNV).

Installation

Prerequisites

You need a recent OpenShift or Kubernetes cluster with at least one worker node that is bare metal or that supports nested virtualization. On AWS, this means *.metal instance types (except a1.metal). On Azure, this includes D_v3, Ds_v3, E_v3, Es_v3, F2s_v2, F72s_v2, and M series machines. Other clouds and virtualization providers supported by OCP 4 should work as well, as long as they support nested virtualization.
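A quick, rough way to sanity-check that a candidate worker node exposes hardware virtualization is to look for the vmx/svm CPU flags from a debug pod. This is only a sketch; <node-name> below is a placeholder for one of your worker nodes:

# Count vmx/svm flags in /proc/cpuinfo on the node; a non-zero count suggests
# the node can host KVM-backed VMs. Replace <node-name> with a real node.
oc debug node/<node-name> -- grep -c -E 'vmx|svm' /proc/cpuinfo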

Kubernetes clusters will need ingress-nginx installed.
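If you still need to install ingress-nginx, one common option (not specific to this operator) is the upstream Helm chart; the command below assumes Helm 3 and uses the project's default chart name, repo URL, and namespace:

# Install ingress-nginx from the upstream Helm repository
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace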

Whether using OpenShift or Kubernetes clusters, you'll need a recent oc binary in your $PATH.

A known working setup is a default OCP 4 Azure cluster with an additional Standard_D8s_v3 Machine added to it for running 1-2 CRC VMs.

Another known working setup is a DigitalOcean Kubernetes cluster with 8 vCPU/32GB standard Droplets.

You also need a functioning install of Container-native Virtualization on OpenShift or KubeVirt on Kubernetes.
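As a rough check that the virtualization layer is healthy before creating any CrcCluster, you can verify the virt components are running. The namespaces below are the upstream defaults and may differ in your install:

# OpenShift with Container-native Virtualization
oc get pods -n openshift-cnv

# Kubernetes with KubeVirt
kubectl get pods -n kubevirt
kubectl get kubevirt -n kubevirt -o jsonpath='{.items[0].status.phase}'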

Deploy the operator

Create the CrcCluster CRD

oc apply -f deploy/crds/crc.developer.openshift.io_crcclusters_crd.yaml

Deploy the operator

oc create ns crc-operator
oc apply -f deploy/service_account.yaml
oc apply -f deploy/role.yaml
oc apply -f deploy/role_binding.yaml
oc apply -f deploy/operator.yaml

Ensure the operator comes up with no errors in its logs

oc logs deployment/crc-operator -n crc-operator
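
It can also help to wait for the Deployment rollout to complete before checking logs; for example (assuming the manifests above were applied unchanged):

oc rollout status deployment/crc-operator -n crc-operator --timeout=120s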

Create a CRC cluster

Clone this repo, copy your OpenShift pull secret into a file (for example pull-secret), set the PULL_SECRET_FILE environment variable to that file's path, and run the commands below. You can substitute any name for your CRC cluster in place of my-cluster and any namespace in place of crc in the commands below.

oc new-project crc

cat <<EOF | oc apply -f -
apiVersion: crc.developer.openshift.io/v1alpha1
kind: CrcCluster
metadata:
  name: my-cluster
  namespace: crc
spec:
  cpu: 4
  memory: 16Gi
  pullSecret: $(cat $PULL_SECRET_FILE | base64 -w 0)
EOF

oc wait --for=condition=Ready crc/my-cluster -n crc --timeout=1800s

On reasonably sized Nodes, the CRC cluster usually comes up in 7-8 minutes. The very first time a CRC cluster is created on a Node, it can take quite a bit longer while the CRC VM image is pulled into the container image cache on that Node.

If the CRC cluster never becomes Ready, check the operator pod logs (as shown in the installation section above) and the known issues list below for any clues on what went wrong.
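Besides the operator logs, the CrcCluster's status conditions and the underlying KubeVirt virtual machine are useful places to look. For example (using the same names as above; the vmi command assumes KubeVirt's VirtualMachineInstance API is available):

# Show the CrcCluster's status conditions
oc get crc my-cluster -n crc -o jsonpath='{.status.conditions}'

# Inspect the VM instance and its launcher pod
oc get vmi -n crc
oc get pods -n crc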

Access the CRC cluster

Once your new cluster is up and Ready, the CrcCluster resource's status block has all the information needed to access it.

Log in to the web console:

Console URL:

oc get crc my-cluster -n crc -o jsonpath={.status.consoleURL}

Kubeadmin Password:

oc get crc my-cluster -n crc -o jsonpath={.status.kubeAdminPassword}

Log in as the user kubeadmin with the password from above.

Access the cluster from the command line using oc:

Extract the kubeconfig to a kubeconfig-crc file in the current directory and use that to access the cluster:

oc get crc my-cluster -n crc -o jsonpath={.status.kubeconfig} | base64 -d > kubeconfig-crc
oc --kubeconfig kubeconfig-crc get pod --all-namespaces

Development

For tips on developing crc-operator itself, see DEVELOPMENT.md.

Known Issues

  • The first time a CrcCluster gets created on any specific Node, it takes a LONG time to pull the CRC VM image from quay.io. There's a planned CrcBundle API that may be able to mitigate this by pre-pulling the VM images into the internal registry. For now, if crcStart.sh times out and this is the first time running a VM on that specific Node, just run the script again with the exact same arguments.
  • The kubeconfigs have an incorrect certificate-authority-data that needs to be updated to match the actual cert from the running cluster. See https://docs.openshift.com/container-platform/4.4/authentication/certificates/api-server.html for how to add an additional API server certificate with the proper name. The operator would need to generate a new cert for the exposed API server URL and follow those instructions.
  • The image locations are all hardcoded. This is very temporary: a first iteration will allow overriding the image via an environment variable in the operator, and a later iteration will add a new API to manage multiple CRC VM images so the user can choose which OpenShift version (e.g. 4.4.5, 4.4.6, 4.5.0) to spin up.

Directories

Path Synopsis
cmd
pkg
apis/crc
  Package crc contains crc API versions.
apis/crc/v1alpha1
  Package v1alpha1 contains API Schema definitions for the crc v1alpha1 API group +k8s:deepcopy-gen=package,register +groupName=crc.developer.openshift.io
