Storage Usage

Reference: https://www.yuque.com/u12604243/mc2aau/qgh2pg#IHoc8

ConfigMap

Creating a ConfigMap

# Create from a directory (each file becomes a key)
kubectl create configmap game-config --from-file=../mapdir
# Create from a single file
kubectl create cm game-file --from-file=./mapdir/game.properties
# Create from literal values
kubectl create cm special-config2 --from-literal=sp.how=very --from-literal=sp.type=charm
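
A quick check that the objects exist and hold the expected keys:

kubectl get configmaps
kubectl describe cm special-config2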

Using a ConfigMap

Using a ConfigMap to set environment variables

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-test-pod
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: hub.escape.com/library/myapp:v1
      command: ["/bin/sh", "-c", "env"]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.type
      envFrom:
        - configMapRef:
            name: env-config
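
This Pod runs env once and exits, so the injected variables appear in its log. An illustrative check, assuming the Pod was created in the default namespace:

$ kubectl logs myapp-test-pod | grep -E 'SPECIAL|log_level'
SPECIAL_LEVEL_KEY=very
SPECIAL_TYPE_KEY=charm
log_level=INFO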

Setting command-line arguments with a ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-test-pod
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: myapp:v1
      command: ["/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)"]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.type

Consuming a ConfigMap through a volume

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-test-pod
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: hub.escape.com/library/myapp:v1
      command: ["/bin/sh", "-c", "cat /etc/config/special.how"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
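
By default every key becomes a file under the mount path (/etc/config/special.how, /etc/config/special.type). To expose only selected keys or rename the files, the volume's items field maps keys to paths; a minimal sketch:

  volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
          - key: special.how
            path: how.conf   # mounted as /etc/config/how.conf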

Hot updates

# Edit the ConfigMap (e.g. change log_level from INFO to DEBUG)
$ kubectl edit configmap log-config

# Check the value seen inside one of the Pods
$ kubectl exec \
    `kubectl get pods -l run=my-nginx -o=name|cut -d "/" -f2` \
    -- cat /etc/config/log_level
DEBUG

# Patch a version/config annotation to trigger a rolling update
kubectl patch deployment my-nginx \
    --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "20190411"}}}}}'

Secret

Secret solves the problem of configuring sensitive data such as passwords, tokens, and keys without exposing them in the image or in the Pod spec. A Secret can be consumed as a volume or as environment variables. There are three types of Secret:

  • Service Account: used to access the Kubernetes API; created automatically by Kubernetes and mounted automatically into a fixed directory in the Pod (/run/secrets/kubernetes.io/serviceaccount).
  • Opaque: a base64-encoded Secret used to store passwords, keys, and the like; base64 is an encoding, not encryption, so it is comparatively insecure.
  • kubernetes.io/dockerconfigjson: stores credentials for a private Docker registry (see the sketch after this list).
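
For the kubernetes.io/dockerconfigjson type, kubectl can build the Secret directly (the server and credentials below are illustrative):

kubectl create secret docker-registry registry-key \
    --docker-server=hub.escape.com \
    --docker-username=admin \
    --docker-password=changeme
# then reference it in the Pod spec:
#   imagePullSecrets:
#     - name: registry-key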

Usage

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm
  username: YWRtaW4=
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-test
  labels:
    name: secret-test
spec:
  containers:
    - name: db
      image: hub.escape.com/library/myapp:v1
      volumeMounts:
        - name: secrets
          mountPath: "/etc/secrets" # mount path inside the container
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: mysecret
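
The data values must be base64-encoded; a quick sketch of producing and verifying them (they decode to admin and 1f2d1e2e67df):

$ echo -n 'admin' | base64
YWRtaW4=
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
$ echo 'YWRtaW4=' | base64 -d
admin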
  

Volume

Mounting an NFS export directly

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pv-ng-deploy
spec:
  replicas: 2
  selector: # label selector
    matchLabels: # Pods carrying the labels below are selected
      app: pvnginx
  template:
    metadata:
      labels:
        app: pvnginx
    spec:
      containers:
      - name: pvnginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:                    # mount the volume
        - name: ng-data                  # must match a volume name below
          mountPath: /ng/data            # mount path inside the container
      # Mount the NFS export directly
      volumes:
      - name: ng-data
        nfs:
          server: 192.168.56.12
          path: /nfsdata/ng-data/wordpress
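
Every node that may run these Pods needs an NFS client installed (nfs-utils on RHEL/CentOS, nfs-common on Debian/Ubuntu). A quick mount check, assuming the default namespace:

kubectl exec deploy/pv-ng-deploy -- df -h /ng/data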

PV

I. PV (static provisioning; for other volume types see the official docs)

NFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfspv # storage class name; the PVC must specify the same one to bind
  nfs:
    path: /tmp
    server: 192.168.56.11

Local volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /disks
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node1

A PV supports three access modes (accessModes):

  • ReadWriteOnce (RWO): the most basic mode; read-write, but mountable by only a single node.
  • ReadOnlyMany (ROX): mountable read-only by many nodes.
  • ReadWriteMany (RWX): mountable read-write by many nodes.

Not every storage backend supports all three modes; the shared (many-node) modes in particular are still uncommon, with NFS the most widely used. When a PVC binds to a PV, two criteria normally decide the match: the requested size and the access mode.

A PV's reclaim policy (persistentVolumeReclaimPolicy, i.e. what happens to the PV when the PVC releases the volume) also has three options:

  • Retain: keep the volume and its data (manual cleanup required).
  • Recycle: delete the data, i.e. rm -rf /thevolume/* (supported only by NFS and HostPath).
  • Delete: delete the underlying storage resource, e.g. an AWS EBS volume (supported only by AWS EBS, GCE PD, Azure Disk, and Cinder).

Volume phases

  • Available – the volume is a free resource, not yet bound to any claim;
  • Bound – the volume is bound to a claim;
  • Released – the bound claim has been deleted, but the resource has not yet been reclaimed by the cluster;
  • Failed – automatic reclamation of the volume failed.
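
The phase shows up in the STATUS column of kubectl get pv (illustrative output, using the NFS PV above):

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   AGE
pv0001   1Gi        RWO            Recycle          Available           nfspv          1m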

II. PVC (PersistentVolumeClaim)

1. Create a PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfspv # must match the PV's storageClassName
  volumeName: zk-pop-pv-2 # bind to a specific PV by name (optional)
  dataSource:   # restore from a volume snapshot (optional)
    name: new-snapshot-test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce    # must be compatible with the PV's access modes
  selector:
    matchLabels:
      pv: kod-pv01 # select only PVs carrying this label (optional)
  resources:
    requests:
      storage: 1Gi
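
Once applied, the claim should bind to the matching PV (illustrative output):

$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    zk-pop-pv-2   1Gi        RWO            nfspv          5s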

2. Mount the PVC in an application

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pv-ng-deploy
spec:
  replicas: 2
  selector: # label selector
    matchLabels: # Pods carrying the labels below are selected
      app: pvnginx
  template:
    metadata:
      labels:
        app: pvnginx
    spec:
      containers:
      - name: pvnginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:                    # mount the volume
        - name: ng-data                  # must match a volume name below
          mountPath: /ng/data            # mount path inside the container
      # Option 1: mount via a PVC
      volumes:
      - name: ng-data                    # matches the name in volumeMounts above
        persistentVolumeClaim:
          claimName: nfs-pvc             # name of the PVC

3. Deleting PVs and PVCs

  • First delete the Deployment that uses the volume;
  • then delete the PVC;
  • finally delete the PV (commands sketched below).
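
Using the resource names from the examples above:

kubectl delete deployment pv-ng-deploy
kubectl delete pvc nfs-pvc
kubectl delete pv pv0001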

III. StorageClass (dynamic provisioning)

Kubernetes provides a mechanism that creates PVs automatically: Dynamic Provisioning. The core of this mechanism is the StorageClass API object.

1. Steps

1. Set up a working NFS server.
2. Create a ServiceAccount, used to grant the NFS provisioner the permissions it needs to run in the cluster.
3. Create a StorageClass, responsible for handling PVCs and calling the NFS provisioner to do the provisioning work, binding the PV to the PVC.
4. Deploy the NFS provisioner. It does two things: it creates a mount point (volume) under the NFS export, and it creates the PV and associates it with that NFS mount point.

2. Setup

2.1 Create the RBAC ServiceAccount and permissions

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed; same for the rest below
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

2.2 Create the StorageClass for NFS

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: qgg-nfs-storage # must match the PROVISIONER_NAME env var in the provisioner Deployment
parameters:
  archiveOnDelete: "false" # "false" deletes the backing data when the PVC is deleted; "true" archives it

2.3 Create the NFS provisioner

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default  # must match the namespace in the RBAC file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: qgg-nfs-storage  # provisioner name; must match the provisioner field in nfs-StorageClass.yaml
            - name: NFS_SERVER
              value: 192.168.56.11   # NFS server IP address
            - name: NFS_PATH
              value: /nfsdata    # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.56.11  # NFS server IP address
            path: /nfsdata     # NFS export path
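
Apply the three manifests and confirm the provisioner Pod comes up (file names are illustrative):

kubectl apply -f rbac.yaml -f nfs-StorageClass.yaml -f nfs-provisioner.yaml
kubectl get pods -l app=nfs-client-provisioner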

2.4 Create a PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # must match metadata.name in nfs-StorageClass.yaml
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
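
The volume.beta.kubernetes.io/storage-class annotation is the legacy spelling; on current clusters the same request is normally written with spec.storageClassName:

spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi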

2.5 Set the default StorageClass

kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
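
kubectl get sc should then mark the class as the default (illustrative output):

$ kubectl get sc
NAME                            PROVISIONER       RECLAIMPOLICY   AGE
managed-nfs-storage (default)   qgg-nfs-storage   Delete          1m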

2.6 If a newly created PVC stays Pending with: waiting for a volume to be created, either by external provisioner “qgg-nfs-storage” or manually created by system administrator

On Kubernetes 1.20+ this is usually because the legacy nfs-client-provisioner depends on the removed SelfLink field; re-enabling it with a feature gate works up to v1.23 (on newer clusters, switch to the maintained nfs-subdir-external-provisioner image instead).

# Edit kube-apiserver.yaml
Reference: https://www.cnblogs.com/Applogize/p/15161379.html
[root@k8s-matser01 nfs.rbac]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
-----
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=RemoveSelfLink=false # add this flag

IV. Case study: deploy a Redis server with a mounted volume

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # must match metadata.name in nfs-StorageClass.yaml
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis 
  name: deploy-devops-redis
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: redis
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/data/redis-data/" # mount path inside the container
            name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-pvc  # name of the PVC created above

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
spec:
  type: NodePort
  ports:
  - name: http
    port: 6379 
    targetPort: 6379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
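
A quick smoke test, assuming everything was created in the default namespace:

kubectl get pvc redis-pvc                                  # expect STATUS Bound
kubectl get pods -l app=devops-redis
kubectl exec deploy/deploy-devops-redis -- redis-cli ping  # expect PONG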
      
