Ceph Horizontal Expansion


As the business grows, a Ceph cluster will eventually run short of storage space, at which point the cluster needs to be expanded.

Expansion generally takes one of two forms:

  • Horizontal expansion (adding more nodes to the cluster)
  • Vertical expansion (adding more disks to a single node)
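
Before choosing between the two, it is worth checking how full the cluster currently is. A quick check with the standard Ceph CLI (run from any node with an admin keyring):

# overall capacity and per-pool usage
ceph df
# per-OSD utilization, useful for spotting unevenly filled disks
ceph osd df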

1.1 Check the node's disks

ceph-deploy disk list ceph-node2
...
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] find the location of an executable
[ceph-node2][INFO  ] Running command: fdisk -l
[ceph-node2][INFO  ] Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node2][INFO  ] Disk /dev/sdb: 536.9 GB, 536870912000 bytes, 1048576000 sectors
[ceph-node2][INFO  ] Disk /dev/mapper/ceph--5ec322fc--9910--4476--a9fc--c0360c0bd446-osd--block--ab000a3d--2ddb--4480--8ef7--f3a5ff572555: 536.9 GB, 536866717696 bytes, 1048567808 sectors
[ceph-node2][INFO  ] Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
# sdc is the newly added disk
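
You can also confirm the new device on the node itself; a quick sanity check, assuming SSH access to ceph-node2:

# lsblk should show the raw device with no partitions or LVM volumes yet
ssh ceph-node2 lsblk /dev/sdc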

If the disk already has partitions configured, you can wipe it with the command below. Note that wiping the partitions destroys the data on them as well.

ceph-deploy disk zap ceph-node2 /dev/sdc
...
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph-node2][DEBUG ] zeroing last few blocks of device
[ceph-node2][DEBUG ] find the location of an executable
[ceph-node2][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
[ceph-node2][WARNIN] --> Zapping: /dev/sdc
[ceph-node2][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node2][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[ceph-node2][WARNIN]  stderr: 10+0 records in
[ceph-node2][WARNIN] 10+0 records out
[ceph-node2][WARNIN]  stderr: 10485760 bytes (10 MB) copied, 0.0930712 s, 113 MB/s
[ceph-node2][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
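
As the warning above notes, zapping without --destroy only clears the partition table. If the disk previously held an OSD, the leftover LVM volumes need to be destroyed as well; one way is to run ceph-volume directly on the node:

# --destroy additionally removes the LVM volumes/groups before wiping the device
ceph-volume lvm zap --destroy /dev/sdc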

2.1 Add the OSD disk

ceph-deploy osd create ceph-node2 --data /dev/sdc
[ceph-node2][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@4
[ceph-node2][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@4.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph-node2][WARNIN] Running command: /bin/systemctl start ceph-osd@4
[ceph-node2][WARNIN] --> ceph-volume lvm activate successful for osd ID: 4
[ceph-node2][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[ceph-node2][INFO  ] checking OSD status...
[ceph-node2][DEBUG ] find the location of an executable
[ceph-node2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node2 is now ready for osd use

Check the OSDs again

ceph osd tree
ID  CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
 -1       2.05426 root default
 -9       0.48999     host ceph-admin
  3   hdd 0.48999         osd.3           up  1.00000 1.00000
 -3       0.48830     host ceph-node1
  0   hdd 0.48830         osd.0           up  1.00000 1.00000
 -5       0.58768     host ceph-node2
  1   hdd 0.48999         osd.1           up  1.00000 1.00000
  4   hdd 0.09769         osd.4           up  1.00000 1.00000
 -7       0.48830     host ceph-node3
  2   hdd 0.48830         osd.2           up  1.00000 1.00000
-16             0     host ceph-node5
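
Note that the WEIGHT column is, by default, the device capacity expressed in TiB: the new 100 GiB disk shows up as osd.4 with weight 100/1024 ≈ 0.09769, and the 500 GiB disks as 500/1024 ≈ 0.48830, so the new OSD will receive proportionally fewer PGs.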

3.1 Temporarily disable cluster rebalancing

When expanding a production cluster, Ceph starts rebalancing PGs by default as soon as an OSD is added, which can affect the stability of production services. The following flags temporarily disable the cluster's rebalancing:

ceph osd set norebalance
ceph osd set nobackfill

Check the cluster status

[root@ceph-admin ceph]# ceph -s
  cluster:
    id:     0e81966f-1e65-45b7-add4-dd6ad971f4cf
    health: HEALTH_WARN
            nobackfill,norebalance flag(s) set

Once the expansion is finished, re-enable rebalancing (the HEALTH_WARN clears after the flags are unset)

ceph osd unset norebalance
ceph osd unset nobackfill
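
Putting the steps together, a production expansion might look like the sketch below (hostname and device path are taken from the example above; adapt them to your environment):

# 1. pause data movement before touching the cluster
ceph osd set norebalance
ceph osd set nobackfill
# 2. add the new OSD(s)
ceph-deploy disk zap ceph-node2 /dev/sdc
ceph-deploy osd create ceph-node2 --data /dev/sdc
# 3. resume data movement at a quiet time and watch recovery
ceph osd unset norebalance
ceph osd unset nobackfill
ceph -s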
