This article explains how to implement dynamic provisioning of persistent storage in Kubernetes with StorageClass. The explanations are simple and clear and easy to follow; read on and work through how StorageClass dynamic provisioning is actually set up.
One of the benefits of a storage class is support for dynamic PV provisioning; a StorageClass can even be viewed as a template for creating PVs. When users need persistent storage they create PVCs, which must be bound to matching PVs. When such requests are frequent, or when the PVs an administrator has created by hand cannot satisfy every PVC, having the system dynamically create a PV that fits the PVC's requirements brings great flexibility to storage management. Note that only PVCs and PVs belonging to the same StorageClass can be bound to each other; a PVC that does not specify a StorageClass can only bind to a PV of the same kind.
The name of a StorageClass object is critical: it is the identifier users reference when requesting storage. Besides the name, a StorageClass definition has three key fields: provisioner, parameters, and reclaimPolicy.
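For orientation, a minimal StorageClass manifest showing these three fields might look like the sketch below; the provisioner string and the parameters are purely illustrative placeholders and depend entirely on the storage backend you use:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                          # the name users reference from their PVCs
provisioner: example.com/hypothetical-nfs   # placeholder; each backend has its own provisioner string
parameters:                                 # backend-specific options handed to the provisioner
  exampleOption: "value"
reclaimPolicy: Delete                       # what happens to the PV once the claim is released (Delete or Retain)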
Kubernetes therefore provides a mechanism for dynamic allocation that can create PVs automatically. The mechanism relies on the StorageClass API: for example, a storage node hands 1 TiB over to Kubernetes, and when a user requests a 5Gi PVC, a 5Gi PV is automatically carved out of that 1 TiB of space and bound to the claim.
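The claim side of that flow is an ordinary PVC that simply names the storage class; a small sketch follows (the class name example-sc is a placeholder, not one created in this article):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  storageClassName: example-sc   # placeholder class; its provisioner carves a matching PV out of the backing storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi               # a 5Gi PV is created on demand and bound to this claim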
To enable dynamic PV provisioning, a storage class must be created first. How it is created differs from provisioner to provisioner, and not every volume plugin has built-in support for dynamic PV provisioning in Kubernetes.
Because Kubernetes does not ship an NFS driver, an external provisioner is required: nfs-subdir-external-provisioner is an automatic provisioner that uses an NFS server to back dynamic provisioning.
An nfs-subdir-external-provisioner instance watches for PersistentVolumeClaims that request its StorageClass and automatically creates NFS-backed PersistentVolumes for them.
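Besides applying the raw manifests as done below, the project also ships a Helm chart; the following sketch follows the upstream README at the time of writing (verify the repository URL and value names against the project docs, and substitute your own NFS address and export path):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=10.0.0.15 \
  --set nfs.path=/data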
The point here is to decide which directory on the NFS server will be handed to Kubernetes, and to export (share) that directory.
[root@kn-server-node02-15 ~]# ll /data/
total 0
[root@kn-server-node02-15 ~]# showmount -e 10.0.0.15
Export list for 10.0.0.15:
/data 10.0.0.0/24
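For completeness, the export above would typically be prepared on the NFS host roughly like this; the sketch assumes a CentOS/RHEL-style system with the nfs-utils package (package name, service name, and export options may differ on your distribution):

yum install -y nfs-utils                                             # assumed package name on CentOS/RHEL
mkdir -p /data
echo "/data 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports     # share /data with the 10.0.0.0/24 network
systemctl enable --now nfs-server                                    # service may be called "nfs" on older releases
exportfs -arv                                                        # re-export and list the shares
showmount -e 10.0.0.15                                               # confirm the export is visible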
First, create the RBAC permissions.
[root@kn-server-master01-13 nfs-provisioner]# cat nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@kn-server-master01-13 nfs-provisioner]# cat nfs-provisioner-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # k8s.gcr.io is not reachable from inside China, so the image was re-pushed to Docker Hub;
          # it can be replaced with lihuahaitang/nfs-subdir-external-provisioner:v4.0.2
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # name of the NFS provisioner; the StorageClass "provisioner" field must match it
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # address of the NFS server
              value: 10.0.0.15
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.15
            path: /data

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-provisioner-deploy.yaml
deployment.apps/nfs-client-provisioner created

The Pod is running normally.

[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-57d6d9d5f6-dcxgq   1/1     Running   0          2m25s

Use describe to view the Pod's details:

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pods nfs-client-provisioner-57d6d9d5f6-dcxgq
Name:         nfs-client-provisioner-57d6d9d5f6-dcxgq
Namespace:    default
Priority:     0
Node:         kn-server-node02-15/10.0.0.15
Start Time:   Mon, 28 Nov 2022 11:19:33 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=57d6d9d5f6
Annotations:  <none>
Status:       Running
IP:           192.168.2.82
IPs:
  IP:           192.168.2.82
Controlled By:  ReplicaSet/nfs-client-provisioner-57d6d9d5f6
Containers:
  nfs-client-provisioner:
    Container ID:   docker://b5ea240a8693185be681714747f8e0a9f347492a24920dd68e629effb3a7400f
    Image:          k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2    (image comes from k8s.gcr.io)
    Image ID:       docker-pullable://k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 28 Nov 2022 11:20:12 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        10.0.0.15
      NFS_PATH:          /data
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q2z8w (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.15
    Path:      /data
    ReadOnly:  false
  kube-api-access-q2z8w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m11s  default-scheduler  Successfully assigned default/nfs-client-provisioner-57d6d9d5f6-dcxgq to kn-server-node02-15
  Normal  Pulling    3m11s  kubelet            Pulling image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
  Normal  Pulled     2m32s  kubelet            Successfully pulled image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" in 38.965869132s
  Normal  Created    2m32s  kubelet            Created container nfs-client-provisioner
  Normal  Started    2m32s  kubelet            Started container nfs-client-provisioner
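Before moving on, the provisioner's logs are another quick health check; this step is not part of the original walkthrough, but a healthy instance logs leader election and then provision/delete activity once PVCs start arriving:

kubectl logs deploy/nfs-client-provisioner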
Create the StorageClass for NFS dynamic provisioning.
[root@kn-server-master01-13 nfs-provisioner]# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass                 # the object type is StorageClass
metadata:
  name: nfs-provisioner-storage    # the storageClassName that PVCs must reference explicitly
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner    # provisioner name; must match the PROVISIONER_NAME set above
parameters:
  archiveOnDelete: "false"         # "false": the directory contents are deleted when the PVC is deleted; "true": the data is kept
  pathPattern: "${.PVC.namespace}/${.PVC.name}"              # template for the created directory path; a random name is used by default

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/nfs-provisioner-storage created

"storageclass" can be abbreviated as "sc":

[root@kn-server-master01-13 nfs-provisioner]# kubectl get sc
NAME                      PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-provisioner-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  3s

describe shows the detailed configuration:

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe sc
Name:            nfs-provisioner-storage
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"nfs-provisioner-storage"},"parameters":{"archiveOnDelete":"false","pathPattern":"${.PVC.namespace}/${.PVC.name}"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"},storageclass.kubernetes.io/is-default-class=true
Provisioner:           k8s-sigs.io/nfs-subdir-external-provisioner
Parameters:            archiveOnDelete=false,pathPattern=${.PVC.namespace}/${.PVC.name}
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
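If you would rather keep the data when a PVC is removed, a variant of the class can archive it instead of deleting it; a sketch under that assumption (the class name here is hypothetical, and the provisioner then renames the backing directory with an archived- prefix rather than removing it):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-provisioner-storage-archive      # hypothetical variant name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"                    # keep (archive) the directory contents when the PVC is deleted
  pathPattern: "${.PVC.namespace}/${.PVC.name}"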
[root@kn-server-master01-13 nfs-provisioner]# cat nfs-pvc-test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: "nfs-provisioner-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 0.5Gi

The PV gets a random name; the path where the data is stored follows the pathPattern defined above.

[root@kn-server-node02-15 data]# ls
default
[root@kn-server-node02-15 data]# ll default/
total 0
drwxrwxrwx 2 root root 6 Nov 28 13:56 nfs-pvc-test

[root@kn-server-master01-13 pv]# kubectl get pv
pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f   512Mi   RWX   Delete   Bound   default/nfs-pvc-test   nfs-provisioner-storage   5m19s

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pv pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Name:            pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfs-provisioner-storage
Status:          Bound
Claim:           default/nfs-pvc-test
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        512Mi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.15
    Path:      /data/default/nfs-pvc-test
    ReadOnly:  false
Events:          <none>

describe also gives more detailed information about the claim:

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc
Name:          nfs-pvc-test
Namespace:     default
StorageClass:  nfs-provisioner-storage
Status:        Bound
Volume:        pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      512Mi          (the storage size that was requested)
Access Modes:  RWX            (the volume's access mode)
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age   From                                                                                                                      Message
  ----    ------                 ----  ----                                                                                                                      -------
  Normal  ExternalProvisioning   13m   persistentvolume-controller                                                                                               waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
  Normal  Provisioning           13m   k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778  External provisioner is provisioning volume for claim "default/nfs-pvc-test"
  Normal  ProvisioningSucceeded  13m   k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778  Successfully provisioned volume pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
[root@kn-server-master01-13 nfs-provisioner]# cat nginx-pvc-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sc
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-page
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nginx-page
      persistentVolumeClaim:
        claimName: nfs-pvc-test

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nginx-pvc-test.yaml
pod/nginx-sc created

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc
Name:          nfs-pvc-test
Namespace:     default
StorageClass:  nfs-provisioner-storage
Status:        Bound
Volume:        pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      512Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       nginx-sc

You can see that the nginx-sc Pod is now using this PVC, matching the Pod name above.

[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods nginx-sc
NAME       READY   STATUS    RESTARTS   AGE
nginx-sc   1/1     Running   0          2m43s

Try writing some data:

[root@kn-server-node02-15 data]# echo "haitang" > /data/default/nfs-pvc-test/index.html

Test access:

[root@kn-server-master01-13 nfs-provisioner]# curl 192.168.2.83
haitang
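Because the class uses reclaimPolicy Delete and archiveOnDelete "false", deleting the claim should also remove the PV and the backing directory; a quick way to verify that, not part of the original walkthrough (adjust names to your environment):

kubectl delete pod nginx-sc       # remove the consumer first so the PVC is no longer in use
kubectl delete pvc nfs-pvc-test   # the provisioner then deletes the bound PV
kubectl get pv                    # the pvc-8ed67f7d-... volume should no longer be listed
ls /data/default/                 # run on the NFS server: the nfs-pvc-test directory should be gone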
Thank you for reading. That covers how dynamic provisioning of persistent storage with StorageClass is implemented in Kubernetes. Having worked through this article you should have a much better grasp of the topic, though the details are best confirmed by trying it in your own environment. For more articles on related topics, stay tuned to this site.