
Persisting Data for Operator-Deployed Services


1. Introduction

By default, both Prometheus and Grafana deployed with the Operator store their data in emptyDir volumes, so all data is lost whenever a pod restarts. That is unacceptable in production, so the data needs to be persisted to disk.

2. Configuration

2.1 Prometheus Persistence

Prometheus is managed by a StatefulSet controller, so we can use a StorageClass directly via a volumeClaimTemplate.

First, create the StorageClass (this requires an NFS server to be configured already, which is not covered here):
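For reference, the server-side NFS export might look like the following (a minimal sketch; the export path and options are assumptions chosen to match the manifests below):

```shell
# Sketch of the NFS server-side prerequisite (assumed: a Linux host at
# 10.7.48.223 with nfs-utils installed; path matches the manifests below)
mkdir -p /data/lnmp/prometheus_data
echo '/data/lnmp/prometheus_data *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -r   # re-read /etc/exports without restarting the NFS service
```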

[root@k8smaster StorageClass]# cat nfs-sc.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: jmgao1983/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01  # provisioner name, referenced by the StorageClass
            - name: NFS_SERVER
              value: 10.7.48.223   # NFS server address
            - name: NFS_PATH
              value: /data/lnmp/prometheus_data  # exported directory on the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.7.48.223   # NFS server address
            path: /data/lnmp/prometheus_data   # exported directory on the NFS server

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-boge
provisioner: nfs-provisioner-01
# Supported reclaim policies: Delete, Retain (default is Delete)
reclaimPolicy: Retain
# Apply it
[root@k8smaster StorageClass]# kubectl apply -f nfs-sc.yaml
# Verify the provisioner pod is running
[root@k8smaster StorageClass]# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
nfs-provisioner-01-7f7ccdb7bb-nfsxb        1/1     Running   0          5h10m

# Edit the Prometheus custom resource to add a storage section
[root@k8smaster StorageClass]# kubectl -n monitoring edit prometheus k8s
spec:
......
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
        storageClassName: nfs-boge
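Changes made with kubectl edit are lost if the stock manifests are ever re-applied; the same storage stanza can also be applied declaratively as a patch (a sketch, assuming the Prometheus CR is named k8s in the monitoring namespace, as above):

```shell
# Apply the same storage settings as a merge patch instead of editing interactively
kubectl -n monitoring patch prometheus k8s --type merge -p '
{
  "spec": {
    "storage": {
      "volumeClaimTemplate": {
        "spec": {
          "accessModes": ["ReadWriteMany"],
          "resources": {"requests": {"storage": "5Gi"}},
          "storageClassName": "nfs-boge"
        }
      }
    }
  }
}'
```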

# Shortly afterwards, the StorageClass automatically provisions a PVC and PV for each replica
[root@k8smaster StorageClass]# kubectl get pvc -n monitoring
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-k8s-db-prometheus-k8s-0   Bound    pvc-263e60ad-44c1-41a8-9f2a-77a35f507ad0   5Gi        RWX            nfs-boge       6h7m
prometheus-k8s-db-prometheus-k8s-1   Bound    pvc-73ee2d7f-180c-40b9-925a-0bddc58c4604   5Gi        RWX            nfs-boge       6h7m
# Exec into a pod to inspect the mount
[root@k8smaster StorageClass]# kubectl exec -it -n monitoring prometheus-k8s-0 -- /bin/sh
/prometheus $ df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  40.0G     13.1G     26.8G  33% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
10.7.48.223:/data/lnmp/prometheus_data/monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-263e60ad-44c1-41a8-9f2a-77a35f507ad0/prometheus-db
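The mount path above shows the nfs-client provisioner's naming convention: each PV gets a subdirectory named `<namespace>-<pvcName>-<pvName>` under the export. This can be confirmed on the NFS server (a sketch, using the export path from above):

```shell
# On the NFS server: one subdirectory per provisioned PV, e.g.
# monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-263e60ad-...
ls /data/lnmp/prometheus_data
```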

2.2 Grafana Persistence

# Create a PVC for Grafana
[root@k8smaster StorageClass]# cat grafana-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana
  namespace: monitoring
spec:
  storageClassName: nfs-boge
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
# Apply it
[root@k8smaster StorageClass]# kubectl apply -f grafana-pvc.yaml

# Check that the PVC was created and bound
[root@k8smaster StorageClass]# kubectl get pvc -n monitoring
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
grafana                              Bound    pvc-c16d1015-f1ed-40be-bfb2-d980377692ea   5Gi        RWX            nfs-boge       5h23m

# Edit the Grafana Deployment
[root@k8smaster StorageClass]# kubectl -n monitoring edit deployments.apps grafana
      volumes:
      - emptyDir: {}
        name: grafana-storage
# Change the above to:
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana
# It is also recommended to set the default admin credentials via environment variables
        env:
        - name: GF_SECURITY_ADMIN_USER
          value: admin
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: admin321

# Save and exit
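Keeping the admin password as a plain env value leaves it readable to anyone who can view the Deployment spec. A more careful setup keeps it in a Secret (a sketch; the Secret name `grafana-admin` and its key names are assumptions):

```yaml
# Hypothetical Secret holding the Grafana admin password
apiVersion: v1
kind: Secret
metadata:
  name: grafana-admin
  namespace: monitoring
type: Opaque
stringData:
  admin-password: admin321
---
# In the Grafana Deployment, reference the Secret instead of a literal value:
# env:
# - name: GF_SECURITY_ADMIN_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: grafana-admin
#       key: admin-password
```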

Author: huhuhahei
Copyright: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit huhuhahei when reposting.