Storage Layer / Ceph Distributed Storage
The storage layer uses Ceph distributed storage, which provides block storage, object storage, and file storage, and serves as the backend StorageClass (SC) provider for Kubernetes.
The Ceph test environment is shown below. In a production environment, each daemon role should additionally be deployed in an active/standby (HA) configuration.
Node | OS | Specs | IP | Role |
---|---|---|---|---|
mgm | Rocky 9.1 | 2 vCPU, 2 GB RAM, 8 GB HD | 10.2.20.59/192.168.3.x | Management node, passwordless SSH |
ceph-mon1 | CentOS 8.5.2111 | 2 vCPU, 2 GB RAM, 8 GB HD | 10.2.20.90/192.168.3.x | mon, mgr, mds, dashboard, rgw |
ceph-node1 | CentOS 8.5.2111 | 2 vCPU, 2 GB RAM, 8 GB HD + 2×10 GB | 10.2.20.91/192.168.3.x | osd |
ceph-node2 | CentOS 8.5.2111 | 2 vCPU, 2 GB RAM, 8 GB HD + 2×10 GB | 10.2.20.92/192.168.3.x | osd |
ceph-node3 | CentOS 8.5.2111 | 2 vCPU, 2 GB RAM, 8 GB HD + 2×10 GB | 10.2.20.93/192.168.3.x | osd |
Ceph version 17.2.6 Quincy (stable) is used.
All five hosts above are installed with os-w.
4.1 Basic Configuration
4.1.1 Basic configuration on all nodes
# Configure the hosts file
cat >> /etc/hosts << 'EOF'
10.2.20.90 ceph-mon1
10.2.20.91 ceph-node1
10.2.20.92 ceph-node2
10.2.20.93 ceph-node3
EOF
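If more OSD nodes are added later, the same entries can be generated from a single name-to-IP map instead of a hardcoded heredoc. A minimal sketch (the `ceph_nodes` array is a hypothetical helper, not part of the guide); it only prints the block, so it can be reviewed before being appended to /etc/hosts as root:

```shell
#!/usr/bin/env bash
# Sketch: one name→IP map drives the hosts entries, so adding a node
# later is a single added line. This prints the block; append it to
# /etc/hosts as root (e.g. via `... | tee -a /etc/hosts`) to apply.
declare -A ceph_nodes=(
  [ceph-mon1]=10.2.20.90
  [ceph-node1]=10.2.20.91
  [ceph-node2]=10.2.20.92
  [ceph-node3]=10.2.20.93
)
for name in ceph-mon1 ceph-node1 ceph-node2 ceph-node3; do
  printf '%s %s\n' "${ceph_nodes[$name]}" "$name"
done
```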
# Install base packages
cd /etc/yum.repos.d/
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
rm -fr Centos8-2111*
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
yum clean all
yum makecache
yum install -y epel-release
yum -y install net-tools wget bash-completion lrzsz unzip zip tree
# Disable the firewall and SELinux
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Ceph 17.2.6 package repository
cat > /etc/yum.repos.d/ceph.repo << 'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-17.2.6/el8/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-17.2.6/el8/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-17.2.6/el8/SRPMS
enabled=0
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
List the available Ceph packages
# yum list Ceph*
Repository extras is listed more than once in the configuration
Last metadata expiration check: 0:01:01 ago on Mon 24 Apr 2023 10:22:10 PM CST.
Installed Packages
ceph-release.noarch 1-1.el8 @System
Available Packages
ceph.x86_64 2:17.2.6-0.el8 ceph
ceph-base.x86_64 2:17.2.6-0.el8 ceph
ceph-base-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-common.x86_64 2:17.2.6-0.el8 ceph
ceph-common-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-debugsource.x86_64 2:17.2.6-0.el8 ceph
ceph-exporter.x86_64 2:17.2.6-0.el8 ceph
ceph-exporter-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-fuse.x86_64 2:17.2.6-0.el8 ceph
ceph-fuse-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-grafana-dashboards.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-immutable-object-cache.x86_64 2:17.2.6-0.el8 ceph
ceph-immutable-object-cache-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-mds.x86_64 2:17.2.6-0.el8 ceph
ceph-mds-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-mgr.x86_64 2:17.2.6-0.el8 ceph
ceph-mgr-cephadm.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-mgr-dashboard.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-mgr-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-mgr-diskprediction-local.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-mgr-k8sevents.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-mgr-modules-core.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-mgr-rook.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-mon.x86_64 2:17.2.6-0.el8 ceph
ceph-mon-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-osd.x86_64 2:17.2.6-0.el8 ceph
ceph-osd-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-prometheus-alerts.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-radosgw.x86_64 2:17.2.6-0.el8 ceph
ceph-radosgw-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-resource-agents.noarch 2:17.2.6-0.el8 ceph-noarch
ceph-selinux.x86_64 2:17.2.6-0.el8 ceph
ceph-test.x86_64 2:17.2.6-0.el8 ceph
ceph-test-debuginfo.x86_64 2:17.2.6-0.el8 ceph
ceph-volume.noarch 2:17.2.6-0.el8 ceph-noarch
cephadm.noarch 2:17.2.6-0.el8 ceph-noarch
cephfs-mirror.x86_64 2:17.2.6-0.el8 ceph
cephfs-mirror-debuginfo.x86_64 2:17.2.6-0.el8 ceph
cephfs-top.noarch 2:17.2.6-0.el8 ceph-noarch
4.1.2 Management node
Passwordless SSH setup
ssh-keygen -t rsa
ssh-copy-id root@ceph-mon1
ssh-copy-id root@ceph-node1
ssh-copy-id root@ceph-node2
ssh-copy-id root@ceph-node3
Configure Ansible
# yum -y install ansible
# vi /etc/ansible/hosts
[ceph]
ceph-mon1
ceph-node1
ceph-node2
ceph-node3
# ansible ceph -m shell -a "date"
ceph-mon1 | CHANGED | rc=0 >>
Sat Jun 3 22:32:43 CST 2023
ceph-node3 | CHANGED | rc=0 >>
Sat Jun 3 22:32:43 CST 2023
ceph-node1 | CHANGED | rc=0 >>
Sat Jun 3 22:32:43 CST 2023
ceph-node2 | CHANGED | rc=0 >>
Sat Jun 3 22:32:43 CST 2023
Install the Ceph client tools
# yum -y install ceph-common ceph-base
# ceph -v
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
4.1.3 Ceph cluster nodes
# ansible ceph -m shell -a "yum -y install net-tools gdisk lvm2"
# ansible ceph -m shell -a "yum -y install ceph"
# ansible ceph -m shell -a "systemctl list-unit-files | grep ceph"
...
ceph-crash.service enabled
ceph-mds@.service disabled
ceph-mgr@.service disabled
ceph-mon@.service disabled
ceph-osd@.service disabled
ceph-volume@.service disabled
ceph-mds.target enabled
ceph-mgr.target enabled
ceph-mon.target enabled
ceph-osd.target enabled
ceph.target enabled
# ansible ceph -m shell -a "ceph -v"
ceph-mon1 | CHANGED | rc=0 >>
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
ceph-node1 | CHANGED | rc=0 >>
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
ceph-node3 | CHANGED | rc=0 >>
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
ceph-node2 | CHANGED | rc=0 >>
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Working directory on each node:
# tree /var/lib/ceph
/var/lib/ceph
├── bootstrap-mds
├── bootstrap-mgr
├── bootstrap-osd
├── bootstrap-rbd
├── bootstrap-rbd-mirror
├── bootstrap-rgw
├── crash
│ └── posted
├── mds
├── mgr
├── mon
├── osd
└── tmp
On all nodes, the Ceph log directory is /var/log/ceph.
4.2 Management Node Configuration
The management node's main job is to manage the Ceph cluster: generating configuration files and accessing the cluster directly with the ceph command.
For convenience, create a directory on the management node to hold the files produced while configuring the cluster; all configuration files are generated there by default and synced to the Ceph nodes as needed. For example:
# mkdir /root/ceph
# cd /root/ceph
4.2.1 Cluster-wide unique identifier (fsid)
# uuidgen
9b7095ab-5193-420c-b2fb-2d343c57ef52
# ansible ceph -m shell -a "echo export cephuid=9b7095ab-5193-420c-b2fb-2d343c57ef52 >> /etc/profile"
# ansible ceph -m shell -a "source /etc/profile"
Note: each Ansible shell task runs in its own fresh shell, so this `source` only affects that single task; the exported variable takes effect in subsequent login sessions on each node.
# ansible ceph -m shell -a "cat /etc/profile | grep cephuid"
ceph-node1 | CHANGED | rc=0 >>
export cephuid=9b7095ab-5193-420c-b2fb-2d343c57ef52
ceph-mon1 | CHANGED | rc=0 >>
export cephuid=9b7095ab-5193-420c-b2fb-2d343c57ef52
ceph-node3 | CHANGED | rc=0 >>
export cephuid=9b7095ab-5193-420c-b2fb-2d343c57ef52
ceph-node2 | CHANGED | rc=0 >>
export cephuid=9b7095ab-5193-420c-b2fb-2d343c57ef52
4.2.2 Keyring configuration
# Admin client keyring with full capabilities on mon, osd, mds, and mgr
ceph-authtool --create-keyring ./ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# Monitor keyring
ceph-authtool --create-keyring ./ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# Bootstrap-osd keyring, used when provisioning OSDs
ceph-authtool --create-keyring ./ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'