1 Introduction
Rook is an open-source cloud-native storage orchestrator. It provides the platform, framework, and support for a variety of storage solutions to integrate natively with cloud-native environments.
Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management, using the facilities provided by the underlying cloud-native container management, scheduling, and orchestration platform.
Rook integrates deeply into cloud-native environments through extension points, providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and the user.
2 Setup
2.1 Environment preparation
Make sure each node has an uninitialized (raw) disk available, with no partitions or filesystems on it.
Download the code:
git clone --single-branch --branch master https://github.com/rook/rook.git
2.2 Create the rook-operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
Confirm that the operator has started successfully before continuing with the deployment:
[root@master rbd]# kubectl get po -n rook-ceph | grep oper
rook-ceph-operator-757546f8c7-l9bmc 1/1 Running 0 158m
2.3 Create the Ceph cluster
The configuration can be customized by editing the cluster YAML file.
kubectl apply -f cluster-test.yaml
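For reference, a minimal sketch of what a test-oriented CephCluster resource like cluster-test.yaml looks like. The image tag and field values below are assumptions; always start from the file shipped with your Rook version:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: my-cluster
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17   # assumption: use the image matching your Rook release
  dataDirHostPath: /var/lib/rook   # Ceph config/state stored on the host
  mon:
    count: 1
    allowMultiplePerNode: true     # test-only; production needs 3 mons on separate nodes
  storage:
    useAllNodes: true
    useAllDevices: true            # every raw disk on every node becomes an OSD
```

Production clusters should pin specific nodes and devices instead of using useAllNodes/useAllDevices.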
Once all of the services below are up and running, most of the setup is complete:
[root@master rbd]# kubectl get po -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-4psdt 3/3 Running 0 124m
csi-cephfsplugin-provisioner-5dc75bb888-5rt4z 6/6 Running 0 131m
csi-cephfsplugin-provisioner-5dc75bb888-k4jf7 6/6 Running 0 131m
csi-cephfsplugin-zqc8r 3/3 Running 0 124m
csi-rbdplugin-7vtwg 3/3 Running 0 126m
csi-rbdplugin-provisioner-65cb5fd9c6-m2l2x 6/6 Running 0 127m
csi-rbdplugin-provisioner-65cb5fd9c6-wh8bx 6/6 Running 0 127m
csi-rbdplugin-qr9sw 3/3 Running 0 126m
rook-ceph-mgr-a-6c7886c46d-d57ps 1/1 Running 0 150m
rook-ceph-mon-a-5576f996f8-2vhq8 1/1 Running 0 154m
rook-ceph-mon-b-556b78cd67-tlblh 1/1 Running 0 152m
rook-ceph-operator-757546f8c7-l9bmc 1/1 Running 0 160m
rook-ceph-osd-0-8495c46d54-96rd2 1/1 Running 0 148m
rook-ceph-osd-1-5d55f9f5cf-z9cxk 1/1 Running 0 148m
rook-ceph-osd-prepare-master-pfpgk 0/1 Completed 0 149m
rook-ceph-osd-prepare-node01-shktt 0/1 Completed 0 149m
At this point you can log in to a node and check with lsblk that the disk has been claimed by Ceph:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 60G 0 disk
└─vda1 253:1 0 60G 0 part /
vdb 253:16 0 50G 0 disk /data
vdc 253:32 0 50G 0 disk
└─ceph--b422438d--be03--4318--bd73--a2d54607831d-osd--block--8b14a4eb--0d37--40c3--a78a--6deadd4ded77 252:0 0 50G 0 lvm
2.4 Deploy the Ceph toolbox
kubectl create -f toolbox.yaml
Once it starts successfully, you can use the toolbox to run Ceph commands:
[root@master rbd]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
cluster:
  id:     7a065ad6-efcf-4b94-9df8-ac6e30375a83
  health: HEALTH_OK

services:
  mon: 2 daemons, quorum a,b (age 2h)
  mgr: a(active, since 2h)
  osd: 2 osds: 2 up (since 2h), 2 in (since 2h)

data:
  pools:   2 pools, 64 pgs
  objects: 17 objects, 21 MiB
  usage:   32 MiB used, 100 GiB / 100 GiB avail
  pgs:     64 active+clean
2.5 Deploy the Ceph dashboard
kubectl apply -f dashboard-external-http.yaml
After deployment, look up the Service port and open the dashboard in a browser.
# Retrieve the password
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
# Password: NHaYC4%)5F.Js*%7tO$y
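For reference, dashboard-external-http.yaml exposes the mgr dashboard through a NodePort Service, roughly as sketched below. The port number assumes SSL is disabled (the HTTP dashboard defaults to 7000); check the file in your Rook release:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-http
  namespace: rook-ceph
spec:
  type: NodePort            # reachable on every node's IP at an allocated port
  selector:
    app: rook-ceph-mgr      # routes traffic to the active mgr pod
  ports:
  - name: dashboard
    port: 7000
    targetPort: 7000
    protocol: TCP
```

Run `kubectl -n rook-ceph get svc` to find the allocated NodePort.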
3 Configure RBD StorageClass storage
3.1 Create the StorageClass
kubectl apply -f storageclass-test.yaml
Check:
[root@master rbd]# kubectl get sc
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   32m
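storageclass-test.yaml bundles a replicated CephBlockPool together with the StorageClass. A simplified sketch is below; the pool size and omitted CSI secret parameters are assumptions based on the upstream example, so consult the actual file:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd        # test setting; use "host" in production
  replicated:
    size: 1                 # test setting; use 3 in production
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph      # namespace of the Rook cluster
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  # csi.storage.k8s.io/*-secret-name parameters omitted here; see the upstream file
reclaimPolicy: Delete
allowVolumeExpansion: true
```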
3.2 Test mounting from a pod
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-ceph
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: rbd-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
Log in to the pod and verify that the volume is mounted:
/ # df -h
Filesystem Size Used Available Use% Mounted on
overlay 60.0G 39.2G 20.8G 65% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/vda1 60.0G 39.2G 20.8G 65% /dev/termination-log
/dev/vda1 60.0G 39.2G 20.8G 65% /etc/resolv.conf
/dev/vda1 60.0G 39.2G 20.8G 65% /etc/hostname
/dev/vda1 60.0G 39.2G 20.8G 65% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
/dev/rbd0 975.9M 2.5M 957.4M 0% /usr/share/nginx/html
The directory is mounted correctly.
4 Configure CephFS StorageClass storage
4.1 Deploy the MDS service
kubectl apply -f filesystem.yaml
Confirm the services have started:
[root@master examples]# kubectl get po -n rook-ceph | grep mds
rook-ceph-mds-myfs-a-6b85579bfb-zm999 1/1 Running 0 22m
rook-ceph-mds-myfs-b-b98899f44-q88ps 1/1 Running 0 22m
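For reference, filesystem.yaml creates a CephFilesystem named myfs, and the two mds pods above (-a and -b) correspond to one active MDS plus a standby. A simplified sketch, with pool sizes taken as assumptions from the upstream example:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3               # assumption; reduce for a single-node test cluster
  dataPools:
  - name: replicated
    replicated:
      size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true     # yields the rook-ceph-mds-myfs-a and -b pods
```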
4.2 Create the StorageClass
kubectl apply -f storageclass.yaml
Check and confirm:
[root@master cephfs]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   88m
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   48m
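The CephFS StorageClass from storageclass.yaml looks roughly like the sketch below. The pool name follows the `<fsName>-<dataPool name>` convention and is an assumption (older Rook releases used names like myfs-data0); the CSI secret parameters are again omitted:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs              # must match the CephFilesystem name
  pool: myfs-replicated     # assumption: data pool of the filesystem above
  # csi.storage.k8s.io/*-secret-name parameters omitted; see the upstream file
reclaimPolicy: Delete
```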
4.3 Test file sharing
Create two pods that mount the same PVC:
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-ceph-2
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: cephfs-pvc-demo
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-ceph-1
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: cephfs-pvc-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-demo
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
Create a file in one container, then log in to the other container to confirm that it is visible:
[root@master cephfs]# kubectl exec -it pod-vol-ceph-1 -- /bin/sh
/ # touch /usr/share/nginx/html/test
Log in to the other container to check:
[root@master cephfs]# kubectl exec -it pod-vol-ceph-2 -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html test