Introduction to Ceph

Ceph is a distributed storage system, used here to provide block storage; the following installs a Ceph cluster with cephadm. Ceph has three main daemon roles (a quick way to inspect them on a running cluster is sketched after the list):

  • OSD stores all data and objects in the cluster, handles replication, recovery, backfilling, and rebalancing of cluster data, exchanges heartbeats with other OSD daemons, and reports monitoring information to the Monitors.
  • Monitor monitors the state of the entire cluster and maintains the cluster maps, ensuring a consistent view of the cluster.
  • MDS (optional) provides metadata computation, caching, and synchronization for the Ceph file system. An MDS daemon is not required unless CephFS is used.
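Once the cluster is up (sections 2 and 3 below), these roles map directly to containerized daemons managed by cephadm; an optional quick look (not part of the original steps):

# inside "cephadm shell" on ceph1
ceph -s        # cluster health plus a mon/mgr/osd service summary
ceph orch ps   # every daemon managed by cephadm, one line per container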

1. Environment preparation

1.1 Node planning

Node   OS      IP            Disk  Roles
ceph1  CentOS  192.168.0.13  vdc   cephadm, mgr, mon, osd
ceph2  CentOS  192.168.0.14  vdc   mon, osd
ceph3  CentOS  192.168.0.16  vdc   mon, osd
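Before starting, it may help to confirm that the spare OSD disk (vdc in this plan) is present and empty on each node; an optional check:

lsblk /dev/vdc    # should show a bare disk with no partitions
wipefs /dev/vdc   # prints nothing if no filesystem signatures are present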

1.2 Set the hostnames

  • hostnamectl set-hostname ceph1 # run on node 1
  • hostnamectl set-hostname ceph2 # run on node 2
  • hostnamectl set-hostname ceph3 # run on node 3

1.3 Configure /etc/hosts

  • Run on all three nodes
cat >> /etc/hosts <<EOF  
192.168.0.13 ceph1
192.168.0.14 ceph2
192.168.0.16 ceph3
EOF
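An optional check that the names resolve and the nodes can reach each other:

for h in ceph1 ceph2 ceph3; do ping -c1 -W1 $h; done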

1.4 Disable the firewall and SELinux

  • Run on all three nodes
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
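To confirm both are off (optional):

systemctl is-enabled firewalld   # expect "disabled"
getenforce                       # "Permissive" now, "Disabled" after the next reboot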

1.5 Time synchronization

yum install -y chrony
systemctl enable --now chronyd

Use ceph1 as the time reference. On all three nodes, comment out the default pool servers in /etc/chrony.conf and add server ceph1 prefer:
vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph1 prefer

Restart chronyd:
systemctl restart chronyd
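For ceph2 and ceph3 to actually sync from ceph1, chronyd on ceph1 normally also has to be allowed to serve time to the subnet; a minimal sketch for ceph1 only (the allow/local values are assumptions based on the 192.168.0.0/24 network used here, not part of the original steps):

# append to /etc/chrony.conf on ceph1, then restart chronyd
allow 192.168.0.0/24   # let ceph2/ceph3 query ceph1
local stratum 10       # keep serving time even without upstream NTP

# verify on ceph2/ceph3 that ceph1 is the selected source
chronyc sources -v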

1.6 Install Docker, lvm2, and Python 3

Run on all three nodes:
curl -fsSL get.docker.com -o get-docker.sh 
sh get-docker.sh
systemctl enable docker
systemctl restart docker

yum install -y python3

yum -y install lvm2
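A quick sanity check of the prerequisites (optional):

docker --version
python3 --version
lvm version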

2. Install cephadm (ceph1)

  • Official download method

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm

  • Download via git

git clone --single-branch --branch octopus https://github.com/ceph/ceph.git

cp ceph/src/cephadm/cephadm ./

  • Install cephadm (ceph1)
./cephadm add-repo --release octopus
./cephadm install

mkdir -p /etc/ceph
cephadm bootstrap --mon-ip 192.168.0.13

Output when the installation completes:

             URL: https://ceph1:8443/
            User: admin
        Password: 86yvswzdzd

INFO:cephadm:You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 3861dbf4-fd47-11ea-a373-5254228af870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.
  • Distribute ceph.pub: append the contents of /etc/ceph/ceph.pub from ceph1 to /root/.ssh/authorized_keys on ceph2 and ceph3 (for example with ssh-copy-id, shown below). /root/.ssh does not exist on ceph2 and ceph3 by default, so create it first; the directory must be mode 700 and authorized_keys mode 600.
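A way to do this from ceph1 without editing files by hand (assuming root password SSH to ceph2/ceph3 is still allowed at this point):

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3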

  • Make the ceph command available

cephadm shell
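cephadm shell drops you into a container that already has the Ceph CLI and the cluster keyring mounted. If you prefer running ceph directly on the host instead, the CLI packages can also be installed on ceph1 (optional; the repo was already added in section 2):

cephadm install ceph-common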

  • Add the other nodes

ceph orch host add ceph2

ceph orch host add ceph3
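The registered hosts can be verified with (optional):

ceph orch host ls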

3. OSD deployment

  • Run the following inside cephadm shell
  • If block devices are not listed, add the --refresh flag
[ceph: root@ceph1 /]# ceph orch device ls
HOST   PATH      TYPE   SIZE  DEVICE_ID     MODEL  VENDOR  ROTATIONAL  AVAIL  REJECT REASONS
ceph1  /dev/vdc  hdd   50.0G  vol-e801hi3v  n/a    0x1af4  1           True
ceph1  /dev/vda  hdd   20.0G  i-2iwis9yr    n/a    0x1af4  1           False  locked
ceph1  /dev/vdb  hdd   4096M                n/a    0x1af4  1           False  Insufficient space (<5GB), locked
ceph2  /dev/vdc  hdd   50.0G  vol-lqcchpox  n/a    0x1af4  1           True
ceph2  /dev/vda  hdd   20.0G  i-r53b2flp    n/a    0x1af4  1           False  locked
ceph2  /dev/vdb  hdd   4096M                n/a    0x1af4  1           False  locked, Insufficient space (<5GB)
ceph3  /dev/vdc  hdd    100G  vol-p0eksa5m  n/a    0x1af4  1           True
ceph3  /dev/vda  hdd   20.0G  i-f6njdmtq    n/a    0x1af4  1           False  locked
ceph3  /dev/vdb  hdd   4096M                n/a    0x1af4  1           False  Insufficient space (<5GB), locked
[ceph: root@ceph1 /]#

Once the devices you want show AVAIL = True, run:
[ceph: root@ceph1 /]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
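--all-available-devices consumes every eligible device the orchestrator finds. For tighter control, OSDs can instead be created on specific devices one by one (documented alternative; device paths taken from the table above):

ceph orch daemon add osd ceph1:/dev/vdc
ceph orch daemon add osd ceph2:/dev/vdc
ceph orch daemon add osd ceph3:/dev/vdc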
  • Check the cluster status, both with ceph and by looking at the running containers
Via ceph:
[ceph: root@ceph1 /]# ceph -s
  cluster:
    id:     af1f33f0-fd69-11ea-8530-5254228af870
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 66s)
    mgr: ceph1.rbarkj(active, since 12m), standbys: ceph2.uzdnki
    osd: 3 osds: 3 up (since 14s), 3 in (since 14s)

  data:
    pools:   1 pools, 1 pgs
    objects: 1 objects, 0 B
    usage:   3.0 GiB used, 197 GiB / 200 GiB avail
    pgs:     1 active+clean

Via the containers:
[root@ceph1 ceph]# docker ps -a
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS               NAMES
b417841153a9        ceph/ceph:v15                "/usr/bin/ceph-osd -…"   5 minutes ago       Up 5 minutes                            ceph-af1f33f0-fd69-11ea-8530-5254228af870-osd.2
9dbd720f4a6f        prom/prometheus:v2.18.1      "/bin/prometheus --c…"   6 minutes ago       Up 5 minutes                            ceph-af1f33f0-fd69-11ea-8530-5254228af870-prometheus.ceph1
f860282adf85        prom/alertmanager:v0.20.0    "/bin/alertmanager -…"   6 minutes ago       Up 6 minutes                            ceph-af1f33f0-fd69-11ea-8530-5254228af870-alertmanager.ceph1
a6cb4b354f46        ceph/ceph-grafana:6.6.2      "/bin/sh -c 'grafana…"   16 minutes ago      Up 16 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-grafana.ceph1
b745f57895f3        prom/node-exporter:v0.18.1   "/bin/node_exporter …"   17 minutes ago      Up 17 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-node-exporter.ceph1
b00c1a2c7ef6        ceph/ceph:v15                "/usr/bin/ceph-crash…"   17 minutes ago      Up 17 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-crash.ceph1
99c703f18f67        ceph/ceph:v15                "/usr/bin/ceph-mgr -…"   18 minutes ago      Up 18 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-mgr.ceph1.rbarkj
e972d3d3f13a        ceph/ceph:v15                "/usr/bin/ceph-mon -…"   18 minutes ago      Up 18 minutes                           ceph-af1f33f0-fd69-11ea-8530-5254228af870-mon.ceph1
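Since the intro frames this cluster as block storage, a minimal end-to-end check is to create an RBD pool and a test image (a sketch; the pool name rbd, PG count 32, and image name test are illustrative, not from the original steps):

# inside cephadm shell
ceph osd pool create rbd 32
rbd pool init rbd
rbd create rbd/test --size 1024   # size in MB, i.e. 1 GiB
rbd ls rbd                        # should list "test"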
