====== 23 ceph Commands ======
===== ●RBD Commands =====
==== 1. Listing RBD Images ====
# rbd ls pool02
vol01
vol_01
vol_02
vol_test
vol_03
vol_04
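To also check each image's size and format, the long listing can be used (a supplementary example; the output layout varies by Ceph release):
# rbd ls -l pool02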
==== 2. Checking RBD Users ====
The output below shows that 192.168.10.12 and 192.168.10.11 are using the image.
# rbd status vol01 -p pool02
Watchers:
watcher=192.168.10.12:0/1416759822 client.387874 cookie=18446462598732840961
watcher=192.168.10.11:0/886076642 client.394385 cookie=18446462598732840963
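The same watcher list can also be read from the image's header object via rados. A sketch, taking the image id from block_name_prefix in rbd info (here 7a30a285303f3, the id of the example image in section 4, is assumed):
# rbd info vol01 -p pool02 | grep block_name_prefix
# rados -p pool02 listwatchers rbd_header.7a30a285303f3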
==== 3. RBD Map ====
・Map
# rbd map vol01 -p pool02
・Unmap
# rbd unmap vol01 -p pool02
・Check current mappings
# rbd showmapped
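Unmapping also works by device path, using the device shown by showmapped (assuming /dev/rbd0 here):
# rbd unmap /dev/rbd0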
==== 4. Resizing an RBD Image ====
Explained here with volume vol_testtest in pool pool02.
# rbd resize vol_testtest --size 2T -p pool02
# rbd info vol_testtest -p pool02
rbd image 'vol_testtest':
size 2 TiB in 524288 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 7a30a285303f3
block_name_prefix: rbd_data.7a30a285303f3
format: 2
features: layering
op_features:
flags:
create_timestamp: Sat Mar 21 11:25:30 2020
access_timestamp: Sat Mar 21 11:25:30 2020
modify_timestamp: Sat Mar 21 11:25:30 2020
The resize can be done while the volume is mounted. If no size is given to xfs_growfs (omit -D), it grows the filesystem to the maximum available.
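Shrinking works as well but must be requested explicitly, since data beyond the new size is lost (sketch only; choose the size carefully):
# rbd resize vol_testtest --size 1T -p pool02 --allow-shrink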
1. Stop the cas cache
# systemctl stop cas-onappbk
# systemctl status cas-onappbk
# rbd showmapped
2. Map the rbd directly, mount it, and run xfs_growfs
# rbd map vol_testtest -p pool02
# mount /dev/rbd0 /onapp/backups
# xfs_growfs /onapp/backups -D size
# umount /onapp/backups
# rbd unmap vol_testtest -p pool02
3. Start the cas cache
# systemctl start cas-onappbk
# systemctl status cas-onappbk
# rbd showmapped
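To confirm the new size once the cas service has remounted the volume (assuming /onapp/backups is its mount point):
# df -h /onapp/backups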
==== 5. Renaming an RBD Image ====
# rbd rename vol_org vol_des -p pool02
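Note that rename only works within a single pool. The result can be confirmed with the listing from section 1:
# rbd ls pool02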
==== 6. Stats ====
=== iostat ===
# rbd perf image iostat
Columns: image, writes/s, reads/s, write bytes, read bytes, write latency, read latency (the header row is missing from this capture).
one/one-75-289-0 18/s 19/s 590 KiB/s 1.6 KiB/s 4.28 ms 4.43 ms
one/one-68-287-0 18/s 4/s 578 KiB/s 2.4 KiB/s 1.87 ms 1.41 ms
one/one-28-317-0 13/s 0/s 915 KiB/s 0 B/s 10.94 ms 0.00 ns
one/one-28-318-0 12/s 0/s 170 KiB/s 27 KiB/s 1.53 ms 6.02 ms
=== iotop ===
# rbd perf image iotop
>WRITES OPS READS OPS WRITE BYTES READ BYTES WRITE LAT READ LAT IMAGE
18/s 19/s 530 KiB/s 1.6 KiB/s 4.05 ms 3.97 ms one/one-75-288-0
16/s 17/s 432 KiB/s 3.2 KiB/s 2.89 ms 2.92 ms one/one-75-289-0
16/s 4/s 449 KiB/s 3.2 KiB/s 1.34 ms 921.76 us one/one-68-287-0
10/s 0/s 111 KiB/s 0 B/s 1.63 ms 0.00 ns one/one-55-219-0
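Both commands take an optional pool name to restrict the output, e.g. for the pool one that appears above:
# rbd perf image iostat one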
===== ●Ceph Commands =====
==== 1. Checking Ceph Status ====
With --watch-sec 1, the output refreshes every second.
# ceph -s --watch-sec 1
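If --watch-sec is not available in your release, a plain watch loop gives the same effect:
# watch -n 1 ceph -s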
==== 2. Checking Status per OSD ====
# ceph osd status
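Related views that are often useful next to it:
# ceph osd tree
# ceph osd df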
{{tag>ceph}}