Full-text search:
- 28 Ceph Mon Addition
- X-XXXXXXXXXXX mon_initial_members = ceph001, ceph004, ceph005, ceph-mon02 mon_host = 10.10.0.101,10.10.0.104,10.10.0.105, 10.10.10.12 auth_cluster_required =
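The matched snippet above appears to be the monitor map section of a `ceph.conf`. A minimal sketch reassembling it for readability (values copied from the snippet; the `[global]` section header is an assumption, and the truncated `auth_cluster_required` value is left as found):

```ini
[global]
mon_initial_members = ceph001, ceph004, ceph005, ceph-mon02
mon_host = 10.10.0.101,10.10.0.104,10.10.0.105,10.10.10.12
# auth_cluster_required = ...   (value truncated in the search snippet)
```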
- 29 Ceph Dashboard
- instance: "ceph003" - targets: ['ceph004:9100'] labels: instance: "ceph004" - targets: ['ceph005:9100'] la
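The dashboard snippet looks like a fragment of a Prometheus scrape configuration listing node-exporter targets on port 9100. A minimal sketch under that assumption (the `scrape_configs`/`job_name` scaffolding is not in the snippet; only the targets and instance labels are):

```yaml
scrape_configs:
  - job_name: 'node'            # job name is an assumption
    static_configs:
      - targets: ['ceph003:9100']
        labels:
          instance: "ceph003"
      - targets: ['ceph004:9100']
        labels:
          instance: "ceph004"
      - targets: ['ceph005:9100']
        labels:
          instance: "ceph005"
```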
- 20 Ceph PG Count
- cluster: id: 82c91e96-51db-4813-8e53-0c0044a958f1 health: HEALTH_OK services: mo... ph001(active, since 16h), standbys: ceph003, ceph004, ceph002 osd: 4 osds: 4 up (since 16h), 4 in
- 23 ceph Commands
- ls pool02 vol01 vol_01 vol_02 vol_test vol_03 vol_04 </code> ==== 2. Checking RBD Users ==== With this, 192.168.10.12 and
- 32 Ceph Resync
- | 2 | 1638 | exists,up | | 2 | ceph004 | 2882G | 8014G | 16 | 485k | 2 |
- 34 ZFS trim
- 0 0 (6% trimmed, started at Tue 01 Mar 2022 04:45:29 AM JST) sdc ONLINE 0 0