
Ceph Object Storage Usage


1. Configuring RadosGW

1.1 Create the radosgw instance

[root@node-1 ceph-deploy]# ceph-deploy rgw create node-1
---
nts/ceph-radosgw@rgw.node-1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[node-1][INFO  ] Running command: systemctl start ceph-radosgw@rgw.node-1
[node-1][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host node-1 and default port 7480

# Access the URL
[root@node-1 ceph-deploy]# curl http://node-1:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
# A response like this confirms the installation is working

1.2 Change the radosgw listening port

# Edit ceph.conf in the ceph-deploy directory and add:
[client.rgw.node-1]
rgw_frontends = "civetweb port=80"

# Push the configuration file to the other nodes
[root@node-1 ceph-deploy]# ceph-deploy --overwrite-conf config push node-1 node-2 node-3 

# Restart the radosgw service
[root@node-1 ceph-deploy]# systemctl restart ceph-radosgw.target 

# Confirm the new port is in effect
[root@node-1 ceph-deploy]# netstat -anpt | grep :80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      40358/radosgw

2. Accessing Ceph via the S3 API

2.1 Access with the SDK

# Create an S3 user (the output includes the generated access_key and secret_key)
[root@node-1 ~]# radosgw-admin user create --uid ceph-s3-user --display-name "Ceph S3 User Demo"
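If the keys need to be looked up again later, radosgw-admin user info prints them as JSON. Below is a minimal sketch (Python 2, same style as the script in 2.1; the uid is the one just created, the script itself is only an illustration) that extracts them:

# Sketch: read the S3 keys of an RGW user from radosgw-admin JSON output
import json
import subprocess

out = subprocess.check_output(
    ["radosgw-admin", "user", "info", "--uid", "ceph-s3-user"])
info = json.loads(out)
for k in info["keys"]:          # S3-style keys are listed under "keys"
    print k["access_key"], k["secret_key"]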

# Install the boto SDK and write a test script (saved as s3client.py below)
[root@node-1 ~]# yum -y install python-boto
# s3client.py -- connect to RGW with the boto (v2) S3 API
import boto.s3.connection

# access_key/secret_key come from the "radosgw-admin user create" output above
access_key = 'HWOT3KMNOG1V2xxxxxx'
secret_key = '7GxvrRfevA5cyiOy207J2Sn5RmIdoYyP7kixxxx'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='10.46.36.234', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# create a bucket, then list all buckets (Python 2 print syntax)
bucket = conn.create_bucket('ceph-s3-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )

# Run the script
[root@node-1 ~]# python s3client.py 
ceph-s3-bucket 2021-12-05T06:17:12.784Z

# Uploading files via the SDK also requires writing code (a sketch is shown below), which is relatively cumbersome, so we use the second approach, s3cmd, instead
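For reference, a minimal sketch of uploading and downloading an object with the same boto connection (the bucket and object names are only examples):

# Sketch: upload /etc/hosts to the bucket created above, then download it back
import boto.s3.connection

access_key = 'HWOT3KMNOG1V2xxxxxx'
secret_key = '7GxvrRfevA5cyiOy207J2Sn5RmIdoYyP7kixxxx'
conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='10.46.36.234', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_bucket('ceph-s3-bucket')

key = bucket.new_key('hosts')                         # object name inside the bucket
key.set_contents_from_filename('/etc/hosts')          # upload a local file
key.get_contents_to_filename('/tmp/hosts.download')   # download it again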

2.2 Access with s3cmd

# Install the s3cmd tool
[root@node-1 ~]# yum -y install s3cmd

# Configure s3cmd
[root@node-1 ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: HWOT3KMNOG1xxxxxxxx 
Secret Key: 7GxvrRfevA5cyiOy207J2Sn5RmIxxxxxxxx
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 10.46.36.234

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 10.46.36.234:80/%(bucket)s       

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: 
Path to GPG program [/usr/bin/gpg]: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: 

New settings:
  Access Key: HWOT3KMNOG1V2Txxxxx
  Secret Key: 7GxvrRfevA5cyiOy207J2Snxxxxxxx
  Default Region: US
  S3 Endpoint: 10.46.36.234
  DNS-style bucket+hostname:port template for accessing a bucket: 10.46.36.234:80/%(bucket)s
  Encryption password: 
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Common s3cmd commands

# List existing buckets
[root@node-1 ~]# s3cmd ls
2021-12-05 06:17  s3://ceph-s3-bucket

# Create a bucket
[root@node-1 ~]# s3cmd mb s3://s3cmd-demo
ERROR: S3 error: 403 (SignatureDoesNotMatch)
# If you hit the error above, edit the s3cmd configuration file (~/.s3cfg) and enable:
signature_v2 = True

# Test uploading a file
[root@node-1 ceph-deploy]# s3cmd  put /etc/hosts s3://s3cmd-demo/hosts
upload: '/etc/hosts' -> 's3://s3cmd-demo/hosts'  [1 of 1]
 266 of 266   100% in    0s    15.60 KB/s  done

# List the files in the bucket
[root@node-1 ceph-deploy]# s3cmd ls s3://s3cmd-demo
2021-12-05 06:49          380  s3://s3cmd-demo/fstab
2021-12-05 06:50          266  s3://s3cmd-demo/hosts

# Upload a directory recursively
[root@node-1 ceph-deploy]# s3cmd  put /etc s3://s3cmd-demo/etc/ --recursive

# List files under the uploaded directory
[root@node-1 ceph-deploy]# s3cmd ls s3://s3cmd-demo/etc/etc/

# Download a file
[root@node-1 ceph-deploy]# s3cmd get s3://s3cmd-demo/etc/etc/yum.conf yum.download
download: 's3://s3cmd-demo/etc/etc/yum.conf' -> 'yum.download'  [1 of 1]
 1066 of 1066   100% in    0s    65.45 KB/s  done

# Delete files recursively
[root@node-1 ceph-deploy]# s3cmd rm --recursive s3://s3cmd-demo/etc/

# Note: if an upload fails with a 416 (InvalidRange) error, as in the example below, it typically means RGW cannot create the pools it needs because the per-OSD placement-group limit would be exceeded; the fix below raises that limit

[root@node-1 ~]# s3cmd put /etc/fstab s3://s3cmd-demo/fstab
upload: '/etc/fstab' -> 's3://s3cmd-demo/fstab'  [1 of 1]
380 of 380   100% in    0s   800.52 B/s  done
ERROR: S3 error: 416 (InvalidRange)

# Add the following to the [global] section of ceph.conf
mon_max_pg_per_osd = 300

# Push the configuration file to the other nodes
[root@node-1 ceph-deploy]# ceph-deploy --overwrite-conf config push node-1 node-2 node-3

# Restart the mon and mgr daemons; do this on every node
[root@node-1 ceph-deploy]# systemctl restart ceph-mgr@node-1
[root@node-1 ceph-deploy]# systemctl restart ceph-mon@node-1

# Test the upload again; it should now succeed

3. Accessing Ceph via the Swift API

3.1 Configure Swift

# Create a Swift subuser based on the existing S3 user
[root@node-1 etc]# radosgw-admin subuser create --uid ceph-s3-user  --subuser=ceph-s3-user:swift --access=full

# Generate a Swift secret key for the subuser
[root@node-1 etc]# radosgw-admin key create --subuser=ceph-s3-user:swift --key-type=swift --gen-secret

# Install the Swift client tool
[root@node-1 etc]# yum -y install python-pip
[root@node-1 etc]# pip install --upgrade python-swiftclient

3.2 Using the Swift client

# Access by passing all parameters on the command line
[root@node-1 etc]# swift -A http://10.46.36.234/auth -U ceph-s3-user:swift -K SgNvHZvkjpDIHudg281qb3rbA33DJd39UoCYnHbl list

# Passing all these parameters every time is cumbersome; set environment variables instead
export ST_AUTH=http://10.46.36.234/auth
export ST_USER=ceph-s3-user:swift
export ST_KEY=SgNvHZvkjpDIHudg281qb3rbA33DJd39UoCYnHbl

# Now commands can be run directly
[root@node-1 etc]# swift list
ceph-s3-bucket
s3cmd-demo

# Create a bucket (container)
[root@node-1 etc]# swift post swift-demo
[root@node-1 etc]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo

# Upload a file
[root@node-1 etc]# swift upload swift-demo/hosts /etc/hosts
hosts/etc/hosts
[root@node-1 etc]# swift list swift-demo 
hosts/etc/hosts

# Upload a directory
[root@node-1 etc]# swift upload swift-demo/etc/ /etc

# Download a file
[root@node-1 etc]# swift download swift-demo etc/etc/passwd 
etc/etc/passwd [auth 0.004s, headers 0.007s, total 0.007s, 0.336 MB/s]

# Delete a file
[root@node-1 etc]# swift delete swift-demo etc/etc/yum.repos.d/CentOS-fasttrack.repo
etc/etc/yum.repos.d/CentOS-fasttrack.repo
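Programmatic access works here too: the python-swiftclient package installed in 3.1 provides a Connection class. A minimal sketch using the same credentials (container and object names are only examples):

# Sketch: basic Swift operations via python-swiftclient
import swiftclient

conn = swiftclient.Connection(
    authurl='http://10.46.36.234/auth',
    user='ceph-s3-user:swift',
    key='SgNvHZvkjpDIHudg281qb3rbA33DJd39UoCYnHbl',
)

conn.put_container('swift-demo')                               # create the container
with open('/etc/hosts') as f:
    conn.put_object('swift-demo', 'hosts', contents=f.read())  # upload a file

headers, objects = conn.get_container('swift-demo')            # list objects
for obj in objects:
    print obj['name'], obj['bytes']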
