Full-text search:
- 38 Using ceph from OpenStack after installing it with cephadm
- h02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03 ceph orch host add ceph01 192.168.0.101 ceph orc... t add ceph02 192.168.0.102 ceph orch host add ceph03 192.168.0.103 ceph orch host label ceph01 mon ceph orch host label ceph02 mon ceph orch host label ceph03 mon ceph orch host label ceph01 osd ceph orch hos
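The snippet above is truncated; for reference, a minimal sketch of the cephadm host-registration flow it appears to quote (host names and IPs are taken from the snippet; the label-based mon placement and the /dev/sdb data device are assumptions):
<code bash>
# Distribute the cluster SSH key so cephadm can reach each host
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph01
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03
# Register the hosts with the orchestrator
ceph orch host add ceph01 192.168.0.101
ceph orch host add ceph02 192.168.0.102
ceph orch host add ceph03 192.168.0.103
# Label hosts for mon/osd placement
ceph orch host label add ceph01 mon
ceph orch host label add ceph01 osd
# Place mons by label and add an OSD on a data device (device path is an assumption)
ceph orch apply mon --placement="label:mon"
ceph orch daemon add osd ceph01:/dev/sdb
</code>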
- 03 Ubuntu GlusterFS
- ====== 03 Ubuntu GlusterFS ====== Install GlusterFS on Ubuntu 20.04 # glusterd --version glusterfs 7... g-work01 172.16.0.153 g-work02 172.16.0.166 g-work03 EOF </code> ===== 3.peer ===== Run from the first node glust... er peer probe g-work02 gluster peer probe g-work03 ===== 4.Directory ===== mkdir -p /gluster/vol... ter/volume \ g-work02:/gluster/volume \ g-work03:/gluster/volume \ force </code> ===== 6.mount
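For context, a minimal sketch of the GlusterFS setup the snippet walks through, assuming Ubuntu 20.04 and the g-work01..03 host names from the snippet (the volume name vol01 is an assumption):
<code bash>
# Install and start GlusterFS on every node
apt install -y glusterfs-server
systemctl enable --now glusterd
# From the first node, probe the other peers
gluster peer probe g-work02
gluster peer probe g-work03
# Create the brick directory on every node
mkdir -p /gluster/volume
# Create and start a 3-way replica volume (force: bricks live on the root filesystem)
gluster volume create vol01 replica 3 \
  g-work01:/gluster/volume \
  g-work02:/gluster/volume \
  g-work03:/gluster/volume \
  force
gluster volume start vol01
# Mount the volume on a client
mount -t glusterfs g-work01:/vol01 /mnt
</code>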
- 19 Ceph OMAP META
- osd.2 </code> ===== Check bluefs size ===== [[50_dialy:2022:03:03:03#bluefsの容量確認方法]] {{tag>Ceph}}
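The linked page is not visible here; one way to check BlueFS usage for the osd.2 mentioned in the snippet is the OSD admin socket (a sketch, assuming it is run on the host where osd.2 lives):
<code bash>
# Dump BlueFS counters (db_total_bytes, db_used_bytes, slow_used_bytes, ...) for osd.2
ceph daemon osd.2 perf dump bluefs
</code>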
- 22 Error in Ceph create-volume
- sid unparsable uuid stderr: 2020-09-02 18:30:10.303 7f600913da80 -1 rocksdb: Invalid argument: Can't ... unparsable uuid stderr: -5> 2020-09-02 18:30:10.303 7f600913da80 -1 rocksdb: Invalid argument: Can't ... unparsable uuid stderr: -5> 2020-09-02 18:30:10.303 7f600913da80 -1 rocksdb: Invalid argument: Can't
- 20 Ceph PG count
- : mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 16h) mgr: ceph001(active, since 16h), standbys: ceph003, ceph004, ceph002 osd: 4 osds: 4 up (since 16
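A minimal sketch of checking and raising a pool's PG count, which this page appears to cover (the pool name pool02 is borrowed from another entry below and is an assumption here):
<code bash>
# Current pg_num / pgp_num for the pool
ceph osd pool get pool02 pg_num
ceph osd pool get pool02 pgp_num
# Raise them to a new power of two (pg_num first, then pgp_num)
ceph osd pool set pool02 pg_num 128
ceph osd pool set pool02 pgp_num 128
# Watch placement-group health while the PGs split and rebalance
ceph -s
</code>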
- 28 Ceph ISCSI
- -1.5.2-1.el7.x86_64.rpm -rw-r--r-- 1 root root 122032 9月 25 10:58 tcmu-runner-1.5.2-1.el7.x86_64.rpm ... > cd /iscsi-target > /iscsi-target> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw > /iscsi-target> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways > /iscs
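A sketch of the gwcli flow the snippet quotes, assuming ceph-iscsi and tcmu-runner are installed; the gateway names and IPs are assumptions:
<code bash>
# gwcli is interactive; the steps to run inside it are shown as comments
gwcli
#   cd /iscsi-target
#   create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
#   cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
#   create ceph001 192.168.0.101
#   create ceph002 192.168.0.102
</code>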
- 29 Ceph Dashboard
- instance: "ceph002" - targets: ['ceph003:9100'] labels: instance: "ceph003" - targets: ['ceph004:9100'] la
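The snippet is a fragment of Prometheus static_configs for node_exporter targets on port 9100. A sketch of the structure it belongs to, assuming a standalone Prometheus at /etc/prometheus/prometheus.yml (merge it into the config by hand rather than appending blindly):
<code bash>
# Illustrative only: the scrape job the targets belong to
cat <<'EOF'
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['ceph002:9100']
        labels:
          instance: "ceph002"
      - targets: ['ceph003:9100']
        labels:
          instance: "ceph003"
      - targets: ['ceph004:9100']
        labels:
          instance: "ceph004"
EOF
# After editing /etc/prometheus/prometheus.yml, restart (or reload) the service
systemctl restart prometheus
</code>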
- 23 ceph commands
- # rbd ls pool02 vol01 vol_01 vol_02 vol_test vol_03 vol_04 </code> ==== 2. Checking who is using an RBD ==== With this, 192.168
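The snippet cuts off at the 192.168 address; one way to see which client has an RBD image open (a sketch, assuming the pool02/vol01 names from the listing; the original page's exact method is not visible):
<code bash>
# List images in the pool
rbd ls pool02
# Show watchers on an image; the watcher line includes the client's IP (e.g. 192.168.x.x)
rbd status pool02/vol01
</code>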
- 27 Detaching a Ceph OSD
- ph </code> ====== Performing the detachment ====== Note: osd.5 (ceph003) is used as the example ====== Data migration ====== <color #ed1c24>Note: depending on the amount of data
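The procedure itself is truncated above; a commonly used sequence for draining and removing the example osd.5 (a sketch, not necessarily the page's exact steps):
<code bash>
# Mark the OSD out so its data migrates to other OSDs
ceph osd out 5
# Wait until all PGs are active+clean again (this can take a long time, depending on data volume)
ceph -s
# On ceph003, stop the daemon once migration is complete
systemctl stop ceph-osd@5
# Remove the OSD from the CRUSH map, auth, and OSD map in one step
ceph osd purge 5 --yes-i-really-mean-it
</code>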
- 28 Adding a Ceph Mon
- --overwrite-conf config push ceph001 ceph002 ceph003 ceph005 ceph006 ceph007 ===== Restart prometheus ==
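A sketch of adding a monitor with ceph-deploy and pushing the updated config, matching the host list in the snippet (which of the hosts is the new mon is not visible; ceph005 is assumed here):
<code bash>
# Add the new monitor
ceph-deploy mon add ceph005
# Push the updated ceph.conf to all nodes
ceph-deploy --overwrite-conf config push ceph001 ceph002 ceph003 ceph005 ceph006 ceph007
# Confirm the new mon has joined quorum
ceph quorum_status --format json-pretty
# Restart prometheus so monitoring picks up the change (service name is an assumption)
systemctl restart prometheus
</code>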
- 32 Ceph resync
- | 2 | 825 | exists,up | | 5 | ceph003 | 3360G | 7536G | 29 | 593k | 2 | 40