Full-text search:
- Error with Ceph create-volume: 22 hits
- [root@cephdev001 my-cluster]# lvcreate -n data -l 100%Free ssd Logical volume "data" created. [root@... er ceph --setgroup ceph stderr: 2020-09-02 18:30:10.301 7f600913da80 -1 bluestore(/var/lib/ceph/osd/c... ad_fsid unparsable uuid stderr: 2020-09-02 18:30:10.303 7f600913da80 -1 rocksdb: Invalid argument: Ca... tion compaction_threads stderr: 2020-09-02 18:30:10.558 7f600913da80 -1 bluestore(/var/lib/ceph/osd/c
- Adding a Ceph Mon: 28 hits
- ceph001, ceph004, ceph005, ceph-mon02 mon_host = 10.10.0.101,10.10.0.104,10.10.0.105, 10.10.10.12 auth_cluster_required = cephx auth_service_required = cephx aut
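The snippet above shows a fourth monitor address (10.10.10.12) appended to mon_host. As a hedged sketch, the relevant [global] section of ceph.conf after such an addition might look like the following; the auth_client_required line is an assumption completing the snippet, which truncates at "aut":

```
[global]
# mon_host lists every monitor address; the new mon (10.10.10.12) is appended
mon_host = 10.10.0.101,10.10.0.104,10.10.0.105,10.10.10.12
auth_cluster_required = cephx
auth_service_required = cephx
# assumed completion of the truncated "aut" in the snippet
auth_client_required = cephx
```

Every client and daemon needs this updated mon_host list to find a quorum member, so the file has to be distributed to all nodes after the mon is added.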
- Ceph ISCSI: 28 hits
de> # ls -al -rw-r--r-- 1 root root 41308 Sep 25 10:59 libtcmu-1.5.2-1.el7.x86_64.rpm -rw-r--r-- 1 root root 2244 Sep 25 10:59 libtcmu-devel-1.5.2-1.el7.x86_64.rpm -rw-r--r-- 1 root root 122032 Sep 25 10:58 tcmu-runner-1.5.2-1.el7.x86_64.rpm # rpm -ivh... assword = admin api_port = 5000 trusted_ip_list = 10.xxx.xxx.xx,10.xxx.xxx.xx tpg_default_cmdsn_depth
- ceph commands: 23 hits
vol_04 </code> ==== 2. Checking RBD users ==== This shows that 192.168.10.12 and 192.168.10.11 are using the volume. <code> # rbd status vol01 -p pool02 Watchers: watcher=192.168.10.12:0/1416759822 client.387874 cookie=18446462598732840961 watcher=192.168.10.11:0/886076642 client.394385 cookie=1844646259873
- Adding a Ceph OSD: 24 hits
- /bin/dd if=/dev/zero of=/dev/ceph/osd bs=1M count=10 conv=fsync stderr: 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied stderr: , 0.0488377 s, 215 MB/s --> Zapping successful f
- ZFS IOPS limit: 31 hits
- cgset -r blkio.throttle.write_iops_device="251:16 10" DiskIO_Group # dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct 100+0 records in 100+0 records out 51200 bytes (51 kB) copied, 0.000876598 s, 58.4 MB/s </code> ===== Z
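The cgset line above caps the group at 10 write IOPS on the device with major:minor 251:16. A fuller sketch of the sequence, assuming root, the libcgroup tools (cgcreate/cgset/cgexec), and a cgroup v1 blkio hierarchy; the group name and device numbers are the ones from the snippet:

```
# Create the control group (requires root and cgroup v1 blkio)
cgcreate -g blkio:DiskIO_Group
# Limit writes to 10 IOPS on the device 251:16
cgset -r blkio.throttle.write_iops_device="251:16 10" DiskIO_Group
# Run the workload inside the group; oflag=direct bypasses the page cache
# so the throttle is actually exercised
cgexec -g blkio:DiskIO_Group dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct
```

With the limit in effect, the 100 direct writes should take on the order of ten seconds (100 writes at 10 IOPS) instead of finishing almost instantly as in the unthrottled snippet.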
- Ceph OMAP META: 19 hits
- 69 1.00000 3.1 TiB 1.7 TiB 1.6 TiB 546 KiB 101 GiB 7.4 TiB 18.83 0.53 10 up osd.2 </code> ===== compact後 ===== <code> # ceph o... 1.6 TiB 546 KiB 1 GiB 7.4 TiB 18.83 0.53 10 up osd.2 </code> ===== bluefsサイズ確
- Ceph PG count: 20 hits
- dd 0.81870 1.00000 838 GiB 54 GiB 53 GiB 10 KiB 986 MiB 784 GiB 6.43 0.92 89 up 0... B 62 GiB 4 KiB 712 MiB 775 GiB 7.50 1.07 102 up 2 hdd 0.81870 1.00000 838 GiB ... 5 GiB 64 GiB 9 KiB 435 MiB 774 GiB 7.71 1.10 107 up TOTAL 3.3 TiB 235 GiB 232 GiB 37 KiB 2.6 GiB 3.0 TiB 7.01
- ZFS ZIL: 14 hits
- 03976C887715d0 1.0 727.7 1.0 51260.5 0.0 10.0 0.0 13.7 0 100 c0t500003976C887B39d0 0.0 615.9 0.0 42896.0 0.0 0.3 0.0 0.5
- Ceph resync: 32 hits
osd ceph -wi-ao---- <10.45t </code> ===== 2. Resync ===== Once the LVM is visible, ceph-... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3 ... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3 ... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3