Rook Introduction

Rook is one of the more popular open-source cloud-native storage orchestration systems, focused on running Ceph on the Kubernetes platform. It automates operations that previously had to be done by hand, such as deployment, initialization, configuration, scaling, upgrades, migration, disaster recovery, monitoring, and resource management. For example, when a new disk is added to the cluster, Rook automatically initializes it as an OSD and places it in the appropriate failure domain; the OSD runs as a pod in Kubernetes.

1. Deployment

  • The underlying storage for Rook-Ceph can be raw volumes, partitions, or block-mode PVs. The deployment consists of three parts: the CRDs, the Operator, and the Cluster.
  • Download the Rook installation files and deploy:
git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git

cd rook/cluster/examples/kubernetes/ceph

Note: if you are reinstalling, clear out /var/lib/rook on every node, and make sure the volumes or partitions to be used carry no existing filesystem.
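
For example, a node's data disk can be cleaned before reuse as follows (a minimal, destructive sketch; /dev/vdc is only an illustrative device name, substitute your own):

wipefs --all /dev/vdc        # remove filesystem/RAID/LVM signatures (DESTROYS DATA)
sgdisk --zap-all /dev/vdc    # clear GPT and MBR partition tables
rm -rf /var/lib/rook         # remove leftover Rook state on the node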

Deployment:
(1) CRD:    kubectl create -f common.yaml
(2) Operator:   kubectl create -f operator.yaml
(3) Cluster:    kubectl create -f cluster.yaml
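
For reference, which devices the cluster consumes is controlled by the storage section of the CephCluster spec in cluster.yaml; a minimal sketch (the deviceFilter value is only an illustration):

storage:
  useAllNodes: true          # run OSDs on every schedulable node
  useAllDevices: false       # do not consume every raw device found
  deviceFilter: "^vd[b-z]"   # illustrative regex; only matching devices become OSDs

Once cluster.yaml is applied, the pods in the rook-ceph namespace should eventually look like the listing below.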

[root@node1 ~]# kubectl get pod -n rook-ceph
NAME                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-l5rw5                            3/3     Running     0          8m33s
csi-cephfsplugin-lrzfj                            3/3     Running     0          22m
csi-cephfsplugin-mqbg8                            3/3     Running     0          53m
csi-cephfsplugin-provisioner-58888b54f5-fj8tf     5/5     Running     5          53m
csi-cephfsplugin-provisioner-58888b54f5-p2vpt     5/5     Running     9          53m
csi-rbdplugin-5mmsl                               3/3     Running     0          8m34s
csi-rbdplugin-d4966                               3/3     Running     0          53m
csi-rbdplugin-mcsxw                               3/3     Running     0          22m
csi-rbdplugin-provisioner-6ddbf76966-498gn        6/6     Running     11         53m
csi-rbdplugin-provisioner-6ddbf76966-z2vfq        6/6     Running     6          53m
rook-ceph-crashcollector-node1-69666f444d-5bgh9   1/1     Running     0          5m41s
rook-ceph-crashcollector-node2-6c5b88dcf5-th4xk   1/1     Running     0          6m4s
rook-ceph-crashcollector-node3-54b5c58544-hz6m8   1/1     Running     0          22m
rook-ceph-mgr-a-5d85bd689f-g2dfh                  1/1     Running     0          5m41s
rook-ceph-mon-a-76c84f876b-m62d7                  1/1     Running     0          6m38s
rook-ceph-mon-b-6dc744d5b8-bspvh                  1/1     Running     0          6m22s
rook-ceph-mon-c-67f5987779-4l8vf                  1/1     Running     0          6m4s
rook-ceph-operator-6659fb4ddd-wdxnp               1/1     Running     3          55m
rook-ceph-osd-0-57954bcb4f-8b48j                  1/1     Running     0          4m59s
rook-ceph-osd-1-7c59f96f47-xnfpc                  1/1     Running     0          4m48s
rook-ceph-osd-prepare-node1-hrzg6                 1/1     Running     0          5m38s
rook-ceph-osd-prepare-node2-c5wcl                 0/1     Completed   0          5m38s
rook-ceph-osd-prepare-node3-gkt7g                 0/1     Completed   0          5m38s
rook-discover-6ll64                               1/1     Running     0          8m34s
rook-discover-bvbsh                               1/1     Running     0          22m
rook-discover-cm5ls                               1/1     Running     0          54m
Notes:

(1) csi-cephfsplugin-*, csi-rbdplugin-*: the CephFS and Ceph RBD CSI drivers
(2) rook-ceph-crashcollector-*: crash collectors
(3) rook-ceph-mgr-*: the Ceph manager (mgr), which provides the management backend and dashboard
(4) rook-ceph-mon-*: Ceph monitors (mon), which maintain the various cluster state maps
(5) rook-ceph-osd-*: Ceph OSDs, responsible for storing the data; in this example one OSD is started per disk
(6) rook-discover-*: detects storage devices that meet the requirements
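
With all pods Running or Completed, cluster health can be checked from the Rook toolbox (toolbox.yaml ships in the same examples directory); a minimal sketch:

kubectl create -f toolbox.yaml
# once the rook-ceph-tools deployment is ready:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status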

1.2 Set hostnames

  • hostnamectl set-hostname ceph1    # run on node 1
  • hostnamectl set-hostname ceph2    # run on node 2
  • hostnamectl set-hostname ceph3    # run on node 3

1.3 Configure /etc/hosts

  • Run on all three nodes:
cat >> /etc/hosts <<EOF  
192.168.0.13 ceph1
192.168.0.14 ceph2
192.168.0.16 ceph3
EOF

1.4 Disable the firewall and SELinux

  • Run on all three nodes:
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

1.5 Time synchronization

yum install -y chrony
systemctl enable --now chronyd

Use ceph1 as the time reference. On all three nodes, edit /etc/chrony.conf, comment out the default pool servers, and add the line server ceph1 prefer:
vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph1 prefer

Restart chronyd:
systemctl restart chronyd
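
Whether a node is actually syncing against ceph1 can be verified with chronyc:

chronyc sources -v    # an entry marked '^*' for ceph1 means it is the selected time source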

1.6 Install Docker, lvm2, and python3

Run on all three nodes:
curl -fsSL get.docker.com -o get-docker.sh 
sh get-docker.sh
systemctl enable docker
systemctl restart docker

yum install -y python3

yum -y install lvm2
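
A quick way to confirm the prerequisites are in place on each node:

docker --version      # Docker CE installed by get-docker.sh
python3 --version
lvm version           # provided by the lvm2 package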

2. Install cephadm (on ceph1)

  • Official download method

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm

  • Download via git

git clone --single-branch --branch octopus https://github.com/ceph/ceph.git

cp ceph/src/cephadm/cephadm ./
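
Either way, make the downloaded script executable before running it:

chmod +x cephadm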

  • Install cephadm (on ceph1)
./cephadm add-repo --release octopus
./cephadm install

mkdir -p /etc/ceph
cephadm bootstrap --mon-ip 192.168.0.13

Output when the installation completes:

             URL: https://ceph1:8443/
            User: admin
        Password: 86yvswzdzd

INFO:cephadm:You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 3861dbf4-fd47-11ea-a373-5254228af870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.
  • Configure ceph.pub: copy /etc/ceph/ceph.pub from ceph1 into .ssh/authorized_keys on ceph2 and ceph3. The root home directories on ceph2 and ceph3 do not contain .ssh by default, so create it first; .ssh should be mode 700 and authorized_keys mode 600. A sketch using ssh-copy-id follows.
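
Assuming root SSH access to ceph2 and ceph3 is still possible with a password, a convenient way to install the key from ceph1 is ssh-copy-id:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3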

  • Enter the cephadm shell so that the ceph commands become available

cephadm shell

  • Add the other nodes

ceph orch host add ceph2

ceph orch host add ceph3
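
The registered hosts can then be listed to confirm they were added:

ceph orch host ls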

3. OSD deployment

  • Run the following inside the cephadm shell
  • If block devices do not show up, add the --refresh flag
[ceph: root@ceph1 /]# ceph orch device ls
HOST   PATH      TYPE   SIZE  DEVICE_ID     MODEL  VENDOR  ROTATIONAL  AVAIL  REJECT REASONS
ceph1  /dev/vdc  hdd   50.0G  vol-e801hi3v  n/a    0x1af4  1           True
ceph1  /dev/vda  hdd   20.0G  i-2iwis9yr    n/a    0x1af4  1           False  locked
ceph1  /dev/vdb  hdd   4096M                n/a    0x1af4  1           False  Insufficient space (<5GB), locked
ceph2  /dev/vdc  hdd   50.0G  vol-lqcchpox  n/a    0x1af4  1           True
ceph2  /dev/vda  hdd   20.0G  i-r53b2flp    n/a    0x1af4  1           False  locked
ceph2  /dev/vdb  hdd   4096M                n/a    0x1af4  1           False  locked, Insufficient space (<5GB)
ceph3  /dev/vdc  hdd    100G  vol-p0eksa5m  n/a    0x1af4  1           True
ceph3  /dev/vda  hdd   20.0G  i-f6njdmtq    n/a    0x1af4  1           False  locked
ceph3  /dev/vdb  hdd   4096M                n/a    0x1af4  1           False  Insufficient space (<5GB), locked
[ceph: root@ceph1 /]#

Once the devices to be used show AVAIL as True, run:
[ceph: root@ceph1 /]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
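
Alternatively, if you would rather not hand every available device to Ceph, cephadm can create an OSD on a specific host and device:

ceph orch daemon add osd ceph1:/dev/vdc    # host:device pair; adjust to your environment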
  • Check the cluster status, both with ceph commands and by looking at the running containers
Via ceph:
[ceph: root@ceph1 /]# ceph -s
  cluster:
    id:     af1f33f0-fd69-11ea-8530-5254228af870
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 66s)
    mgr: ceph1.rbarkj(active, since 12m), standbys: ceph2.uzdnki
    osd: 3 osds: 3 up (since 14s), 3 in (since 14s)

  data:
    pools:   1 pools, 1 pgs
    objects: 1 objects, 0 B
    usage:   3.0 GiB used, 197 GiB / 200 GiB avail
    pgs:     1 active+clean

Via the containers:
[root@ceph1 ceph]# docker ps -a
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS               NAMES
b417841153a9        ceph/ceph:v15                "/usr/bin/ceph-osd -…"   5 minutes ago       Up 5 minutes                            ceph-af1f33f0-fd69-11ea-8530-5254228af870-osd.2
9dbd720f4a6f        prom/prometheus:v2.18.1      "/bin/prometheus --c…"   6 minutes ago       Up 5 minutes                            ceph-af1f33f0-fd69-11ea-8530-5254228af870-prometheus.ceph1
f860282adf85        prom/alertmanager:v0.20.0    "/bin/alertmanager -…"   6 minutes ago       Up 6 minutes                            ceph-af1f33f0-fd69-11ea-8530-5254228af870-alertmanager.ceph1
a6cb4b354f46        ceph/ceph-grafana:6.6.2      "/bin/sh -c 'grafana…"   16 minutes ago      Up 16 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-grafana.ceph1
b745f57895f3        prom/node-exporter:v0.18.1   "/bin/node_exporter …"   17 minutes ago      Up 17 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-node-exporter.ceph1
b00c1a2c7ef6        ceph/ceph:v15                "/usr/bin/ceph-crash…"   17 minutes ago      Up 17 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-crash.ceph1
99c703f18f67        ceph/ceph:v15                "/usr/bin/ceph-mgr -…"   18 minutes ago      Up 18 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-mgr.ceph1.rbarkj
e972d3d3f13a        ceph/ceph:v15                "/usr/bin/ceph-mon -…"   18 minutes ago      Up 18 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-mon.ceph1
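
The same daemons can also be listed across all hosts from inside the cephadm shell:

ceph orch ps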
