# efs-provisioner

`quay.io/external_storage/efs-provisioner:latest`
## Prerequisites
- An EFS file system in your cluster's region
- Mount targets and security groups such that any node (in any zone in the cluster's region) can mount the EFS file system by its File system DNS name
## Deployment
Create a configmap containing the File system ID and Amazon EC2 region of the EFS file system you wish to provision NFS PVs from, plus the name of the provisioner, which administrators will specify in the `provisioner` field of their StorageClass(es), e.g. `provisioner: example.com/aws-efs`.
```console
$ kubectl create configmap efs-provisioner \
    --from-literal=file.system.id=fs-47a2c22e \
    --from-literal=aws.region=us-west-2 \
    --from-literal=provisioner.name=example.com/aws-efs
```
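For reference, the command above yields a ConfigMap equivalent to this manifest (same example values; substitute your own file system ID and region):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-47a2c22e
  aws.region: us-west-2
  provisioner.name: example.com/aws-efs
```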
See [Optional: AWS credentials secret](#optional-aws-credentials-secret) below if you want the provisioner to check, once at startup, that the EFS file system you specified in the configmap actually exists.
Decide on and set aside a directory within the EFS file system for the provisioner to use. The provisioner will create child directories to back each PV it provisions. Then edit the `volumes` section at the bottom of `deploy/deployment.yaml` so that the `path` refers to the directory you set aside and the `server` is the same EFS file system you specified. Create the deployment, and you're done.
```yaml
volumes:
  - name: pv-volume
    nfs:
      server: fs-47a2c22e.efs.us-west-2.amazonaws.com
      path: /persistentvolumes
```
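For orientation, the container side of the same deployment mounts that volume and reads the configmap along these lines. This is only a sketch of the relevant fields, not the full file; the env variable names are assumed here, so verify them against your copy of `deploy/deployment.yaml`:

```yaml
containers:
  - name: efs-provisioner
    image: quay.io/external_storage/efs-provisioner:latest
    env:
      # Settings come from the configmap created earlier
      - name: FILE_SYSTEM_ID
        valueFrom:
          configMapKeyRef:
            name: efs-provisioner
            key: file.system.id
      - name: AWS_REGION
        valueFrom:
          configMapKeyRef:
            name: efs-provisioner
            key: aws.region
      - name: PROVISIONER_NAME
        valueFrom:
          configMapKeyRef:
            name: efs-provisioner
            key: provisioner.name
    volumeMounts:
      # The directory you set aside, mounted inside the pod
      - name: pv-volume
        mountPath: /persistentvolumes
```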
```console
$ kubectl create -f deploy/deployment.yaml
deployment "efs-provisioner" created
```
Note: you will need to create this directory on your EFS file system first, or the efs-provisioner pod will fail to start.
## Authorization
If your cluster has RBAC enabled or you are running OpenShift, you must authorize the provisioner. If you are in a namespace/project other than "default", either edit `deploy/auth/clusterrolebinding.yaml` or edit the `oadm policy` commands accordingly.
### RBAC
```console
$ kubectl create -f deploy/auth/serviceaccount.yaml
serviceaccount "efs-provisioner" created
$ kubectl create -f deploy/auth/clusterrole.yaml
clusterrole "efs-provisioner-runner" created
$ kubectl create -f deploy/auth/clusterrolebinding.yaml
clusterrolebinding "run-efs-provisioner" created
$ kubectl patch deployment efs-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"efs-provisioner"}}}}'
```
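If the provisioner runs in a namespace other than `default`, the ServiceAccount subject in `deploy/auth/clusterrolebinding.yaml` must name that namespace. A sketch of the edited binding, assuming a hypothetical namespace `storage` (on older clusters the apiVersion may be `rbac.authorization.k8s.io/v1beta1`):

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: storage # hypothetical; set to the provisioner's namespace
roleRef:
  kind: ClusterRole
  name: efs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```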
### OpenShift
```console
$ oc create -f deploy/auth/serviceaccount.yaml
serviceaccount "efs-provisioner" created
$ oc create -f deploy/auth/openshift-clusterrole.yaml
clusterrole "efs-provisioner-runner" created
$ oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:efs-provisioner
$ oadm policy add-cluster-role-to-user efs-provisioner-runner system:serviceaccount:default:efs-provisioner
$ oc patch deployment efs-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"efs-provisioner"}}}}'
```
### SELinux
If SELinux is enforcing on the node where the provisioner runs, you must enable writing from a pod to a remote NFS server (EFS in this case) on the node by running:
```console
$ setsebool -P virt_use_nfs 1
$ setsebool -P virt_sandbox_use_nfs 1
```
## Usage
First, create a `StorageClass` for claims to ask for.
```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: example.com/aws-efs
parameters:
  gidMin: "40000"
  gidMax: "50000"
```
### Parameters
- `gidMin` + `gidMax`: the minimum and maximum value of the GID range for the storage class. A unique value (GID) in this range (`gidMin`-`gidMax`) will be used for each dynamically provisioned volume. These are optional values. If not specified, volumes are provisioned with a value between 2000 and 2147483647, the defaults for `gidMin` and `gidMax` respectively.
Once you have finished configuring the class to have the name you chose when deploying the provisioner and the parameters you want, create it.
```console
$ kubectl create -f deploy/class.yaml
storageclass "aws-efs" created
```
When you create a claim that asks for the class, a volume will be automatically created.
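The claim in `deploy/claim.yaml` looks roughly like this (a sketch; clusters of this era request the class via the beta annotation shown, while newer ones use `spec.storageClassName`):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```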
```console
$ kubectl create -f deploy/claim.yaml
persistentvolumeclaim "efs" created
$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM         REASON   AGE
pvc-557b4436-ed73-11e6-84b3-06a700dda5f5   1Mi        RWX           Delete          Bound    default/efs            2s
```
Note: any pod that consumes the claim will be able to read/write to the volume. This is because the volumes are provisioned with a GID (from the default range or according to `gidMin` + `gidMax`), and any pod that mounts the volume via the claim automatically gets the GID as a supplemental group.
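To illustrate, a minimal pod that consumes the claim (a hypothetical example, not a file shipped with this repo) can write to the volume without any explicit `securityContext`, because the volume's GID is added as a supplemental group:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: efs-test # hypothetical pod name
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "touch /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: efs # the claim created above
```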
## Optional: AWS credentials secret

Create a secret containing the AWS credentials of a user assigned the AmazonElasticFileSystemReadOnlyAccess policy. The credentials will be used by the provisioner only once, at startup, to check that the EFS file system you specified in the configmap actually exists.
```console
$ kubectl create secret generic aws-credentials \
    --from-literal=aws-access-key-id=AKIAIOSFODNN7EXAMPLE \
    --from-literal=aws-secret-access-key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
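Equivalently, as a manifest (a sketch; `stringData` accepts the values unencoded and the cluster stores them base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials
type: Opaque
stringData:
  aws-access-key-id: AKIAIOSFODNN7EXAMPLE
  aws-secret-access-key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```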
Add a reference to the secret in the deployment yaml:

```yaml
...
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: aws-access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: aws-secret-access-key
...
```