
Installing nfs-client-provisioner on Kubernetes to provide a StorageClass

The Kubernetes test environment I set up earlier has had no StorageClass for dynamic provisioning, so many experiments relied on emptyDir volumes. That was fine for testing, but enterprise use needs persistent storage. Here we start with nfs-client-provisioner, using an NFS server to back a StorageClass that can satisfy PVC creation requests in Kubernetes.

Environment used in this article:

  • Kubernetes 1.15.5
  • nfs-client-provisioner chart version: 1.2.8, app version: 3.1.0
  • Helm 3
  • Amazon Linux 2 AMI (CentOS 7 based)

Useful links:

nfs-client-provisioner GitHub:

https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client

nfs-client-provisioner Helm Chart:

https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner

Prerequisites

Kubernetes environment

Please refer to the earlier article "Quickly set up a Kubernetes 1.15.5 test environment with kubeadm".

Helm 3 installation

Likewise, refer to the earlier article "Get an early look at Helm 3 and install nginx-ingress-controller".

Installing the NFS Server

Installation

Here I install the NFS Server on the K8S master node, whose IP is 172.17.0.7.

The other Kubernetes nodes also need the packages installed, but they do not need /etc/exports configured; the packages are only required for mounting.

# Install on all K8S nodes
$ sudo yum install nfs-utils rpcbind -y

# Using ansible
$ ansible dev_k8s -m yum -a "name=rpcbind state=present" --private-key keys/aws/dev_keyPair2_awsbj_cn.pem -b --become-method=sudo
$ ansible dev_k8s -m yum -a "name=nfs-utils state=present" --private-key keys/aws/dev_keyPair2_awsbj_cn.pem -b --become-method=sudo

Configuration

$ cat <<EOF | sudo tee -a /etc/exports
/data/kubernetes-storageclass *(rw,sync,no_root_squash)
EOF

# Create the export directory
$ sudo mkdir -p /data/kubernetes-storageclass
$ sudo chmod 777 /data/kubernetes-storageclass
$ ll /data/
total 0
drwxrwxrwx 2 root root 6 Dec 2 16:58 kubernetes-storageclass

Start the services:

# Note: start rpcbind before nfs; the order must not be reversed
$ sudo systemctl restart rpcbind
$ sudo systemctl restart nfs

# Enable at boot
$ sudo systemctl enable rpcbind
$ sudo systemctl enable nfs
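
If you later add or change entries in /etc/exports, the export list can be reloaded without restarting the services. A minimal sketch using the standard nfs-utils tooling:

# Re-export everything listed in /etc/exports (no service restart needed)
$ sudo exportfs -rav
# Show what is currently being exported
$ sudo exportfs -v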

Test:

Test from any K8S worker node:

$ showmount -e 172.17.0.7
Export list for 172.17.0.7:
/data/kubernetes-storageclass *

Mount test:

# Create a mount point
$ mkdir nfs
# Mount the NFS export
$ sudo mount 172.17.0.7:/data/kubernetes-storageclass nfs/
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 7.9G 2.7M 7.9G 1% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/xvda1 100G 8.6G 92G 9% /
tmpfs 1.6G 0 1.6G 0% /run/user/10086
172.17.0.7:/data/kubernetes-storageclass 100G 7.4G 93G 8% /home/root/nfs # mounted
# Write test
$ echo test > nfs/test
$ ll nfs
total 4.0K
-rw-rw-r-- 1 root root 5 Dec 2 17:24 test
$ cat nfs/test
test

# Check from the NFS Server:
$ ll /data/kubernetes-storageclass/
total 4.0K
-rw-rw-r-- 1 root root 5 Dec 2 17:24 test

# After the test passes, unmount
$ sudo umount nfs/

Installing nfs-client-provisioner

Be sure to update the Helm repos first:

$ helm repo update 
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "elastic" chart repository
Update Complete. ⎈ Happy Helming!⎈
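
If the stable repository has not been added to Helm 3 yet (it is not configured by default), add it before updating; a sketch using the repository URL that was current at the time of writing:

# Only needed if "helm repo list" does not already show "stable"
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update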

Search to check whether an nfs-client-provisioner chart is available:

$ helm search repo nfs-client-provisioner
NAME CHART VERSION APP VERSION DESCRIPTION
stable/nfs-client-provisioner 1.2.8 3.1.0 nfs-client is an automatic provisioner that use...

Good. Pull the chart, then edit the corresponding values.yaml:

$ helm pull stable/nfs-client-provisioner
$ tar xf nfs-client-provisioner-1.2.8.tgz
# Back up the original values.yaml
$ cp nfs-client-provisioner/values.yaml{,.ori}
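
With the original file backed up, it is easy to review later exactly what was changed; a small sketch:

# Compare the edited values.yaml against the backup
$ diff -u nfs-client-provisioner/values.yaml.ori nfs-client-provisioner/values.yaml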

First, check the Helm chart's GitHub page for an explanation of all the configurable keys:

Configuration

The following table lists the configurable parameters of this chart and their default values.

  • replicaCount — Number of provisioner instances to deploy (default: 1)
  • strategyType — Strategy used to replace old Pods with new ones (default: Recreate)
  • image.repository — Provisioner image (default: quay.io/external_storage/nfs-client-provisioner)
  • image.tag — Version of the provisioner image (default: v3.1.0-k8s1.11)
  • image.pullPolicy — Image pull policy (default: IfNotPresent)
  • storageClass.name — Name of the StorageClass (default: nfs-client)
  • storageClass.defaultClass — Set as the default StorageClass (default: false)
  • storageClass.allowVolumeExpansion — Allow expanding the volume (default: true)
  • storageClass.reclaimPolicy — Method used to reclaim an obsoleted volume (default: Delete)
  • storageClass.provisionerName — Name of the provisioner (default: null)
  • storageClass.archiveOnDelete — Archive PVC data when deleting (default: true)
  • nfs.server — Hostname or IP of the NFS server (default: null)
  • nfs.path — Base path on the server to be used as the mount point (default: /ifs/kubernetes)
  • nfs.mountOptions — Mount options, e.g. nfsvers=3 (default: null)
  • resources — Resources required, e.g. CPU and memory (default: {})
  • rbac.create — Use role-based access control (default: true)
  • podSecurityPolicy.enabled — Create and use PodSecurityPolicy resources (default: false)
  • priorityClassName — Set the pod priorityClassName (default: null)
  • serviceAccount.create — Whether to create a ServiceAccount (default: true)
  • serviceAccount.name — Name of the ServiceAccount to use (default: null)
  • nodeSelector — Node labels for pod assignment (default: {})
  • affinity — Affinity settings (default: {})
  • tolerations — List of node taints to tolerate (default: [])

Modify values.yaml; only the changed parts are listed here:

# Switch the image to the azk8s China mirror
image:
  repository: quay.azk8s.cn/external_storage/nfs-client-provisioner

nfs:
  server: 172.17.0.7
  path: /data/kubernetes-storageclass

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
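
Alternatively, the same overrides can be passed on the command line instead of editing values.yaml; a sketch based on the parameters listed above (resources can be set the same way with additional --set flags):

# Equivalent install using --set overrides against the stable chart
$ helm -n kube-system install nfs-client-provisioner stable/nfs-client-provisioner \
    --set nfs.server=172.17.0.7 \
    --set nfs.path=/data/kubernetes-storageclass \
    --set image.repository=quay.azk8s.cn/external_storage/nfs-client-provisioner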

Preview the rendered manifests with a dry run:

$ helm -n kube-system install nfs-client-provisioner ./ --dry-run --debug
install.go:148: [debug] Original chart version: ""
install.go:165: [debug] CHART PATH: /home/microoak/kubernetes/helm/nfs-client-provisioner

NAME: nfs-client-provisioner
LAST DEPLOYED: Mon Dec 2 17:32:37 2019
NAMESPACE: kube-system
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
image:
  pullPolicy: IfNotPresent
  repository: quay.io/external_storage/nfs-client-provisioner
  tag: v3.1.0-k8s1.11
nfs:
  mountOptions: null
  path: /data/kubernetes-storageclass
  server: 172.17.0.7
nodeSelector: {}
podSecurityPolicy:
  enabled: false
rbac:
  create: true
replicaCount: 1
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
serviceAccount:
  create: true
  name: null
storageClass:
  allowVolumeExpansion: true
  archiveOnDelete: true
  create: true
  defaultClass: false
  name: nfs-client
  reclaimPolicy: Delete
strategyType: Recreate
tolerations: []

HOOKS:
MANIFEST:
---
# Source: nfs-client-provisioner/templates/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: nfs-client
provisioner: cluster.local/nfs-client-provisioner
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  archiveOnDelete: "true"
---
# Source: nfs-client-provisioner/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: nfs-client-provisioner
---
# Source: nfs-client-provisioner/templates/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
# Source: nfs-client-provisioner/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
# Source: nfs-client-provisioner/templates/role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Source: nfs-client-provisioner/templates/rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
# Source: nfs-client-provisioner/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
      release: nfs-client-provisioner
  template:
    metadata:
      annotations:
      labels:
        app: nfs-client-provisioner
        release: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: "quay.azk8s.cn/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: cluster.local/nfs-client-provisioner
            - name: NFS_SERVER
              value: 172.17.0.7
            - name: NFS_PATH
              value: /data/kubernetes-storageclass
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.17.0.7
            path: /data/kubernetes-storageclass
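
The same rendering can also be produced with helm template, which prints the manifests without recording a pending release; a brief sketch:

# Render the chart locally without touching the cluster
$ helm -n kube-system template nfs-client-provisioner ./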

Install:

$ cd nfs-client-provisioner
$ helm -n kube-system install nfs-client-provisioner ./

Check the status:

$ helm -n kube-system ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nfs-client-provisioner kube-system 1 2019-12-02 17:37:05.261060723 +0800 CST deployed nfs-client-provisioner-1.2.8 3.1.0
$ kubectl -n kube-system get po
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7b4657785d-7gw8s 1/1 Running 0 20d
calico-node-lpnmf 1/1 Running 0 10d
calico-node-v26zq 1/1 Running 0 20d
calico-node-vxknt 1/1 Running 0 10d
coredns-5c98db65d4-f6bjz 1/1 Running 3 20d
coredns-5c98db65d4-xn6fg 1/1 Running 3 20d
etcd-k8s01.test.awsbj.cn 1/1 Running 0 10d
kube-apiserver-k8s01.test.awsbj.cn 1/1 Running 0 18d
kube-controller-manager-k8s01.test.awsbj.cn 1/1 Running 1 18d
kube-proxy-kgv74 1/1 Running 0 10d
kube-proxy-vmbqj 1/1 Running 0 10d
kube-proxy-zq5rl 1/1 Running 0 10d
kube-scheduler-k8s01.test.awsbj.cn 1/1 Running 2 20d
nfs-client-provisioner-794f85c5d4-m5wp2 1/1 Running 0 11s # here it is

# Check the logs for errors
$ kubectl -n kube-system logs nfs-client-provisioner-794f85c5d4-m5wp2
I1202 09:37:09.420679 1 leaderelection.go:185] attempting to acquire leader lease kube-system/cluster.local-nfs-client-provisioner...
I1202 09:37:09.518361 1 leaderelection.go:194] successfully acquired lease kube-system/cluster.local-nfs-client-provisioner
I1202 09:37:09.518540 1 controller.go:631] Starting provisioner controller cluster.local/nfs-client-provisioner_nfs-client-provisioner-794f85c5d4-m5wp2_4fa17668-14e7-11ea-a2bb-f610a73d4488!
I1202 09:37:09.518648 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"cluster.local-nfs-client-provisioner", UID:"6c942278-9a6a-4a0c-8faf-c9dbea3de870", APIVersion:"v1", ResourceVersion:"2760006", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-794f85c5d4-m5wp2_4fa17668-14e7-11ea-a2bb-f610a73d4488 became leader
I1202 09:37:09.714669 1 controller.go:680] Started provisioner controller cluster.local/nfs-client-provisioner_nfs-client-provisioner-794f85c5d4-m5wp2_4fa17668-14e7-11ea-a2bb-f610a73d4488!

# Check the StorageClass; it is named nfs-client, which we will need later
$ kubectl get sc
NAME PROVISIONER AGE
nfs-client cluster.local/nfs-client-provisioner 39s
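
If you want PVCs that omit storageClassName to use this class automatically, it can be marked as the cluster's default StorageClass. A sketch (the chart's storageClass.defaultClass value achieves the same at install time):

# Optional: mark nfs-client as the default StorageClass
$ kubectl patch storageclass nfs-client \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'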

Creating a PVC to test nfs-client-provisioner

Create a test PVC

$ cat test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client   # the StorageClass name created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

# Apply it
$ kubectl apply -f test-pvc.yaml

Check:

$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-claim Bound pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7 1Mi RWX nfs-client 4m49s

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7 1Mi RWX Delete Bound default/test-claim nfs-client 4m48s

The corresponding PV has been created automatically, named pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7. Let's also take a look on the NFS Server:

$ ll /data/kubernetes-storageclass/
total 4.0K
drwxrwxrwx 2 root root 21 Dec 2 17:55 default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7
-rw-rw-r-- 1 root root 5 Dec 2 17:24 test

A corresponding directory has been created, named in the format:

<NAMESPACE>-<PVC_NAME>-<PV_NAME>

Create a test Pod

$ cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: wanghkkk/busyboxplus
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim   # the PVC created above

# Apply it
$ kubectl apply -f test-pod.yaml

Check:

$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-9cb7f8c7d-wk89s 1/1 Running 0 19d
test-pod 0/1 Completed 0 8m8s # completed successfully

Check the NFS Server:

$ ll /data/kubernetes-storageclass/
total 4.0K
drwxrwxrwx 2 root root 21 Dec 2 17:55 default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7
-rw-rw-r-- 1 root root 5 Dec 2 17:24 test

# The SUCCESS file has been created
$ ll /data/kubernetes-storageclass/default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7/
total 0
-rw-r--r-- 1 root root 0 Dec 2 17:55 SUCCESS

Clean up:

$ kubectl delete -f test-pod.yaml -f test-pvc.yaml

Check again:

$ kubectl get pv,pvc

No resources found.

Check the NFS Server:

$ ll /data/kubernetes-storageclass/
total 4.0K
drwxrwxrwx 2 root root 21 Dec 2 17:55 archived-default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7
-rw-rw-r-- 1 root root 5 Dec 2 17:24 test

$ ll /data/kubernetes-storageclass/archived-default-test-claim-pvc-67b598fd-dec8-422d-be6a-dc7ccf589fe7/
total 0
-rw-r--r-- 1 root root 0 Dec 2 17:55 SUCCESS

Because we installed with archiveOnDelete=true, the backing directory was not deleted when the PV was removed; it was only renamed with an archived- prefix.
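
If you would rather have the backing directory removed instead of archived, archiveOnDelete can be set to false at install time; a sketch (note that StorageClass parameters are immutable once created, so changing this later means recreating the StorageClass):

# Hypothetical fresh install with archiving disabled
$ helm -n kube-system install nfs-client-provisioner ./ --set storageClass.archiveOnDelete=false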


That is all for this article; stay tuned for the posts that follow. You can follow the official account via the QR code below to see new articles as soon as they are published.

(Official account QR code)

If you spot any omissions or mistakes, feel free to point them out in the comments; if this was helpful, a tip via the button below is always welcome.