The Kubernetes test environment I built earlier never had a StorageClass for dynamic provisioning, so many of my experiments used emptyDir volumes. That was fine for testing, but for enterprise use you need persistent storage. In this post I use nfs-client-provisioner to create a StorageClass backed by an NFS server, so that the cluster can dynamically satisfy PVC creation requests.
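To make the difference concrete, here is a minimal, illustrative pod spec (my own sketch, not from the original setup) contrasting the two volume styles; the claim name `test-claim` anticipates the test PVC created later in this post:

```yaml
# Illustrative only: ephemeral emptyDir vs. a PVC backed by a StorageClass.
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    # emptyDir: data lives on the node and is lost when the pod is deleted.
    # - name: data
    #   emptyDir: {}
    # PVC: data lives on the NFS server and survives pod deletion.
    - name: data
      persistentVolumeClaim:
        claimName: test-claim   # hypothetical claim name for this sketch
```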
Environment used in this post:

- Kubernetes 1.15.5
- nfs-client-provisioner chart version 1.2.8, app version 3.1.0
- Helm 3
- Amazon Linux 2 AMI (based on CentOS 7)
Useful links:

- nfs-client-provisioner GitHub: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
- nfs-client-provisioner Helm chart: https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner
### Prerequisites

Kubernetes environment: see my earlier post on quickly installing a Kubernetes 1.15.5 test environment with kubeadm.
<h3 id="Helm3-安装"><a href="#Helm3-安装" class="headerlink" title="Helm3 安装"></a>Helm3 安装</h3><p>同样请参考之前的文章:{% post_link Pre-trial-helm3 抢先试用Helm3,并安装nginx-ingress-controller’ 
### Installing the NFS server

#### Installation

I install the NFS server on the Kubernetes master node, whose IP is 172.17.0.7. The other Kubernetes nodes also need the NFS packages installed, but they do not need an /etc/exports configuration; the packages are only used when mounting.
```bash
$ sudo yum install nfs-utils rpcbind -y
$ ansible dev_k8s -m yum -a "name=rpcbind state=present" --private-key keys/aws/dev_keyPair2_awsbj_cn.pem -b --become-method=sudo
$ ansible dev_k8s -m yum -a "name=nfs-utils state=present" --private-key keys/aws/dev_keyPair2_awsbj_cn.pem -b --become-method=sudo
```
#### Configuration

Write the export entry (via `sudo tee`, since /etc/exports is root-owned) and create the exported directory:

```bash
$ cat <<EOF | sudo tee -a /etc/exports
/data/kubernetes-storageclass *(rw,sync,no_root_squash)
EOF
$ sudo mkdir -p /data/kubernetes-storageclass
$ sudo chmod 777 /data/kubernetes-storageclass
$ ll /data/
total 0
drwxrwxrwx 2 root root 6 Dec  2 16:58 kubernetes-storageclass
```
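For reference, here is the same export line annotated with what each option does (general NFS behavior, summarized from my own understanding rather than the original post):

```bash
# /etc/exports format: <path>  <allowed-clients>(<options>)
#   rw             - clients may read and write
#   sync           - the server commits writes to disk before replying
#   no_root_squash - do not map client root to nobody; the provisioner runs as
#                    root and needs to create and chown directories on the export
/data/kubernetes-storageclass *(rw,sync,no_root_squash)
```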
#### Start the services

```bash
$ sudo systemctl restart rpcbind
$ sudo systemctl restart nfs
$ sudo systemctl enable rpcbind
$ sudo systemctl enable nfs
```
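If you later change /etc/exports while the server is running, you can re-export and verify without restarting; `exportfs` ships with nfs-utils (a quick check I find useful, not in the original steps):

```bash
$ sudo exportfs -ra   # re-read /etc/exports and apply changes
$ sudo exportfs -v    # list active exports with their effective options
```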
#### Testing

From any Kubernetes worker node:

```bash
$ showmount -e 172.17.0.7
Export list for 172.17.0.7:
/data/kubernetes-storageclass *
```
Mount test:

```bash
$ mkdir nfs
$ sudo mount 172.17.0.7:/data/kubernetes-storageclass nfs/
$ df -h
Filesystem                                Size  Used Avail Use% Mounted on
devtmpfs                                  7.9G     0  7.9G   0% /dev
tmpfs                                     7.9G     0  7.9G   0% /dev/shm
tmpfs                                     7.9G  2.7M  7.9G   1% /run
tmpfs                                     7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/xvda1                                100G  8.6G   92G   9% /
tmpfs                                     1.6G     0  1.6G   0% /run/user/10086
172.17.0.7:/data/kubernetes-storageclass  100G  7.4G   93G   8% /home/root/nfs
$ echo test > nfs/test
$ ll nfs
total 4.0K
-rw-rw-r-- 1 root root 5 Dec  2 17:24 test
$ cat nfs/test
test
$ ll /data/kubernetes-storageclass/
total 4.0K
-rw-rw-r-- 1 root root 5 Dec  2 17:24 test
$ sudo umount /home/microoak/nfs
```
### Installing nfs-client-provisioner

Remember to update the repo first:

```bash
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "elastic" chart repository
Update Complete. ⎈ Happy Helming!⎈
```
Search to confirm that nfs-client-provisioner is available:

```bash
$ helm search repo nfs-client-provisioner
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION
stable/nfs-client-provisioner	1.2.8        	3.1.0      	nfs-client is an automatic provisioner that use...
```
Good. Pull the chart, unpack it, and back up values.yaml before editing:

```bash
$ helm pull stable/nfs-client-provisioner
$ tar xf nfs-client-provisioner-1.2.8.tgz
$ cp nfs-client-provisioner/values.yaml{,.ori}
```
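As an aside, Helm 3's `helm pull` can also unpack in one step with the `--untar` flag:

```bash
$ helm pull stable/nfs-client-provisioner --untar
$ cd nfs-client-provisioner
```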
First check the chart's GitHub page for an explanation of all the configurable keys:
**Configuration**

The following table lists the configurable parameters of this chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| `replicaCount` | Number of provisioner instances to deploy | `1` |
| `strategyType` | Specifies the strategy used to replace old Pods with new ones | `Recreate` |
| `image.repository` | Provisioner image | `quay.io/external_storage/nfs-client-provisioner` |
| `image.tag` | Version of provisioner image | `v3.1.0-k8s1.11` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `storageClass.name` | Name of the StorageClass | `nfs-client` |
| `storageClass.defaultClass` | Set as the default StorageClass | `false` |
| `storageClass.allowVolumeExpansion` | Allow expanding the volume | `true` |
| `storageClass.reclaimPolicy` | Method used to reclaim an obsoleted volume | `Delete` |
| `storageClass.provisionerName` | Name of the provisioner | `null` |
| `storageClass.archiveOnDelete` | Archive PVC data when deleting | `true` |
| `nfs.server` | Hostname of the NFS server | `null` (IP or hostname) |
| `nfs.path` | Base path of the mount point to be used on the server | `/ifs/kubernetes` |
| `nfs.mountOptions` | Mount options (e.g. `nfsvers=3`) | `null` |
| `resources` | Resources required (e.g. CPU, memory) | `{}` |
| `rbac.create` | Use role-based access control | `true` |
| `podSecurityPolicy.enabled` | Create & use Pod Security Policy resources | `false` |
| `priorityClassName` | Set pod priorityClassName | `null` |
| `serviceAccount.create` | Should we create a ServiceAccount | `true` |
| `serviceAccount.name` | Name of the ServiceAccount to use | `null` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Affinity settings | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
Modify values.yaml; only the changed parts are shown (quay.azk8s.cn is a mirror of quay.io that is much faster to reach from China):

```yaml
image:
  repository: quay.azk8s.cn/external_storage/nfs-client-provisioner

nfs:
  server: 172.17.0.7
  path: /data/kubernetes-storageclass

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
```
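Alternatively, instead of editing the chart's values.yaml in place, the same overrides can be passed at install time; this is standard `helm install --set` usage (a sketch, not the path taken in this post):

```bash
$ helm -n kube-system install nfs-client-provisioner stable/nfs-client-provisioner \
    --set image.repository=quay.azk8s.cn/external_storage/nfs-client-provisioner \
    --set nfs.server=172.17.0.7 \
    --set nfs.path=/data/kubernetes-storageclass \
    --set resources.limits.cpu=100m \
    --set resources.limits.memory=128Mi
```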
Check the rendered YAML:
```bash
$ helm -n kube-system install nfs-client-provisioner ./ --dry-run --debug
install.go:148: [debug] Original chart version: ""
install.go:165: [debug] CHART PATH: /home/microoak/kubernetes/helm/nfs-client-provisioner

NAME: nfs-client-provisioner
LAST DEPLOYED: Mon Dec  2 17:32:37 2019
NAMESPACE: kube-system
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
image:
  pullPolicy: IfNotPresent
  repository: quay.io/external_storage/nfs-client-provisioner
  tag: v3.1.0-k8s1.11
nfs:
  mountOptions: null
  path: /data/kubernetes-storageclass
  server: 172.17.0.7
nodeSelector: {}
podSecurityPolicy:
  enabled: false
rbac:
  create: true
replicaCount: 1
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
serviceAccount:
  create: true
  name: null
storageClass:
  allowVolumeExpansion: true
  archiveOnDelete: true
  create: true
  defaultClass: false
  name: nfs-client
  reclaimPolicy: Delete
strategyType: Recreate
tolerations: []

HOOKS:
MANIFEST:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: nfs-client
provisioner: cluster.local/nfs-client-provisioner
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  archiveOnDelete: "true"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
      release: nfs-client-provisioner
  template:
    metadata:
      annotations:
      labels:
        app: nfs-client-provisioner
        release: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: "quay.azk8s.cn/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: cluster.local/nfs-client-provisioner
            - name: NFS_SERVER
              value: 172.17.0.7
            - name: NFS_PATH
              value: /data/kubernetes-storageclass
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.17.0.7
            path: /data/kubernetes-storageclass
```
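Note how the pieces fit together: the StorageClass's `provisioner: cluster.local/nfs-client-provisioner` field matches the Deployment's `PROVISIONER_NAME` environment variable, which is how PVCs that reference this class get routed to this controller, and the controller itself mounts the whole export at `/persistentvolumes` and carves out per-PVC subdirectories. If you only want to render the templates without contacting the cluster, `helm template` produces the same manifests:

```bash
$ helm template nfs-client-provisioner ./ --namespace kube-system
```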
Install:

```bash
$ cd nfs-client-provisioner
$ helm -n kube-system install nfs-client-provisioner ./
```
Check the status:

```bash
$ helm -n kube-system ls
NAME                  	NAMESPACE  	REVISION	UPDATED                                	STATUS  	CHART                       	APP VERSION
nfs-client-provisioner	kube-system	1       	2019-12-02 17:37:05.261060723 +0800 CST	deployed	nfs-client-provisioner-1.2.8	3.1.0

$ kubectl -n kube-system get po
NAME                                             READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7b4657785d-7gw8s         1/1     Running   0          20d
calico-node-lpnmf                                1/1     Running   0          10d
calico-node-v26zq                                1/1     Running   0          20d
calico-node-vxknt                                1/1     Running   0          10d
coredns-5c98db65d4-f6bjz                         1/1     Running   3          20d
coredns-5c98db65d4-xn6fg                         1/1     Running   3          20d
etcd-k8s01.test.awsbj.cn                         1/1     Running   0          10d
kube-apiserver-k8s01.test.awsbj.cn               1/1     Running   0          18d
kube-controller-manager-k8s01.test.awsbj.cn      1/1     Running   1          18d
kube-proxy-kgv74                                 1/1     Running   0          10d
kube-proxy-vmbqj                                 1/1     Running   0          10d
kube-proxy-zq5rl                                 1/1     Running   0          10d
kube-scheduler-k8s01.test.awsbj.cn               1/1     Running   2          20d
nfs-client-provisioner-794f85c5d4-m5wp2          1/1     Running   0          11s

$ kubectl -n kube-system logs nfs-client-provisioner-794f85c5d4-m5wp2
I1202 09:37:09.420679       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/cluster.local-nfs-client-provisioner...
I1202 09:37:09.518361       1 leaderelection.go:194] successfully acquired lease kube-system/cluster.local-nfs-client-provisioner
I1202 09:37:09.518540       1 controller.go:631] Starting provisioner controller cluster.local/nfs-client-provisioner_nfs-client-provisioner-794f85c5d4-m5wp2_4fa17668-14e7-11ea-a2bb-f610a73d4488!
I1202 09:37:09.518648       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"cluster.local-nfs-client-provisioner", UID:"6c942278-9a6a-4a0c-8faf-c9dbea3de870", APIVersion:"v1", ResourceVersion:"2760006", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-794f85c5d4-m5wp2_4fa17668-14e7-11ea-a2bb-f610a73d4488 became leader
I1202 09:37:09.714669       1 controller.go:680] Started provisioner controller cluster.local/nfs-client-provisioner_nfs-client-provisioner-794f85c5d4-m5wp2_4fa17668-14e7-11ea-a2bb-f610a73d4488!

$ kubectl get sc
NAME         PROVISIONER                            AGE
nfs-client   cluster.local/nfs-client-provisioner   39s
```
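Since we left `storageClass.defaultClass` at `false`, PVCs must name this class explicitly. If you want `nfs-client` to also serve PVCs that specify no storageClassName, you can mark it as the cluster default with the standard annotation (optional, not part of the original walkthrough):

```bash
$ kubectl patch storageclass nfs-client \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```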
### Creating a PVC to test nfs-client-provisioner

#### Create the test PVC

```bash
$ cat test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

$ kubectl apply -f test-pvc.yaml
```
Check:

```bash
$ kubectl get pvc,pv
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-claim   Bound    pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7   1Mi        RWX            nfs-client     4m49s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7   1Mi        RWX            Delete           Bound    default/test-claim   nfs-client              4m48s
```
The corresponding PV, pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7, was created automatically. Let's look at the NFS server:

```bash
$ ll /data/kubernetes-storageclass/
total 4.0K
drwxrwxrwx 2 root root 21 Dec  2 17:55 default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7
-rw-rw-r-- 1 root root  5 Dec  2 17:24 test
```
A directory was created for the volume, named in the format:

`<NAMESPACE>-<PVC_NAME>-<PV_NAME>`
#### Create a test pod

```bash
$ cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: wanghkkk/busyboxplus
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

$ kubectl apply -f test-pod.yaml
```
Check:

```bash
$ kubectl get po
NAME                    READY   STATUS      RESTARTS   AGE
nginx-9cb7f8c7d-wk89s   1/1     Running     0          19d
test-pod                0/1     Completed   0          8m8s
```
Check the NFS server:

```bash
$ ll /data/kubernetes-storageclass/
total 4.0K
drwxrwxrwx 2 root root 21 Dec  2 17:55 default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7
-rw-rw-r-- 1 root root  5 Dec  2 17:24 test

$ ll /data/kubernetes-storageclass/default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7/
total 0
-rw-r--r-- 1 root root 0 Dec  2 17:55 SUCCESS
```
#### Cleanup

```bash
$ kubectl delete -f test-pod.yaml -f test-pvc.yaml
```
Check again:

```bash
$ kubectl get pv,pvc
No resources found.
```
Look at the NFS server:

```bash
$ ll /data/kubernetes-storageclass/
total 4.0K
drwxrwxrwx 2 root root 21 Dec  2 17:55 archived-default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7
-rw-rw-r-- 1 root root  5 Dec  2 17:24 test

$ ll /data/kubernetes-storageclass/archived-default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7/
total 0
-rw-r--r-- 1 root root 0 Dec  2 17:55 SUCCESS
```
Because we installed with `archiveOnDelete=true`, the backing directory is not removed when the PV is deleted; it is simply renamed with an `archived-` prefix, so the data can still be recovered manually if needed.
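If you would rather have the data deleted outright when a PV is reclaimed, the behavior can be changed with a normal Helm upgrade using the chart parameter documented in the table above (a sketch; note that StorageClass parameters are immutable in Kubernetes, so the upgrade may require deleting and recreating the StorageClass):

```bash
$ helm -n kube-system upgrade nfs-client-provisioner ./ \
    --set storageClass.archiveOnDelete=false
```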
That's all for this post; stay tuned for the next one.