Full-text search:
- 22 Ceph create-volume でエラー
- version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable) stderr: 1: (ceph::__cep... version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable) stderr: 1: (()+0xf630) ...
- 31 ZFS IOPS limit
- 6 10" DiskIO_Group # dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct 100+0 records in 100+0 records out 51200 bytes (51 kB) copied, 0.000876598 s, 58.4 MB/s ... 0:0 10" DiskIO_Group # dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct 100+0 records in 100+0 records out 51200 bytes (51 kB) copied, 10.0009 s, 5.1 kB/s cgs
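The dd run quoted in the snippet above can be reproduced as a quick direct-I/O check: with `oflag=direct` each 512-byte write bypasses the page cache, so a blkio IOPS limit throttles it visibly (58.4 MB/s unthrottled vs 5.1 kB/s throttled in the snippet). A minimal sketch; the file name `ddtest.bin` is arbitrary, and `O_DIRECT` is unsupported on some filesystems (e.g. tmpfs), so it falls back to `conv=fsync`:

```shell
# Write 100 x 512-byte blocks. With oflag=direct each write is a separate
# I/O request, which is what an IOPS limit counts and throttles.
dd if=/dev/zero of=ddtest.bin bs=512 count=100 oflag=direct 2>/dev/null ||
  dd if=/dev/zero of=ddtest.bin bs=512 count=100 conv=fsync

# The resulting file should be exactly 100 * 512 = 51200 bytes.
stat -c %s ddtest.bin
```

Timing the two runs (with and without a cgroup I/O limit applied) reproduces the before/after comparison shown in the snippet.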
- 20 Ceph PG数
- num change ===== ceph osd pool set TEST-pool pg_num 128 ceph osd pool set TEST-pool pgp_num 128 ===== 3. Confirming the change ===== <code> # ceph osd df ID CLASS WEIGH... 70 1.00000 838 GiB 63 GiB 62 GiB 4 KiB 712 MiB 775 GiB 7.50 1.07 102 up 2 hdd ... ; 22 remapped pgs data: pools: 2 pools, 129 pgs objects: 20.27k objects, 78 GiB usag
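The snippet already shows the `pg_num`/`pgp_num` change itself; a quick way to confirm both values took effect afterwards is `ceph osd pool get`. This needs a live cluster, so it is only a sketch (the pool name `TEST-pool` is the one from the page):

```shell
# Read back the current PG settings for the pool;
# pgp_num should match pg_num once the change has been applied.
ceph osd pool get TEST-pool pg_num
ceph osd pool get TEST-pool pgp_num
```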
- 28 Ceph ISCSI
- vel-1.5.2-1.el7.x86_64.rpm -rw-r--r-- 1 root root 122032 Sep 25 10:58 tcmu-runner-1.5.2-1.el7.x86_64.r... .xxx.xx,10.xxx.xxx.xx tpg_default_cmdsn_depth = 512 backstore_hw_queue_depth = 512 backstore_queue_depth = 512 _EOM_ </code> Reloading the daemon is required for changes to /etc/ceph/iscsi-gateway.cfg to take effect: systemc
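Pieced together from the snippet, the queue-depth part of the gateway config looks roughly like this (only the keys visible in the snippet are used; the service name `rbd-target-api` is an assumption based on standard ceph-iscsi packaging, since the snippet's reload command is truncated):

```shell
# /etc/ceph/iscsi-gateway.cfg (fragment, values from the snippet)
# tpg_default_cmdsn_depth = 512
# backstore_hw_queue_depth = 512
# backstore_queue_depth = 512

# Applying the file requires restarting the gateway daemon, e.g.:
systemctl restart rbd-target-api
```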
- 23 ceph コマンド
- _04 </code> ==== 2. Checking RBD users ==== This shows that 192.168.10.12 and 192.168.10.11 are using it. <code> # rbd status v... 01 -p pool02 Watchers: watcher=192.168.10.12:0/1416759822 client.387874 cookie=184464625987328... /s 10.94 ms 0.00 ns one/one-28-318-0 12/s 0/s 170 KiB/s 27 KiB/s 1.53 ms 6.02
- 28 Ceph Mon 追加
- t = 10.10.0.101,10.10.0.104,10.10.0.105, 10.10.10.12 auth_cluster_required = cephx auth_service_requir
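The truncated `mon_host` line above corresponds to a ceph.conf fragment along these lines (IP addresses taken verbatim from the snippet; the auth keys are the standard cephx settings the snippet cuts off mid-word):

```shell
# /etc/ceph/ceph.conf (fragment) — fourth monitor address appended to mon_host
# [global]
# mon_host = 10.10.0.101,10.10.0.104,10.10.0.105,10.10.10.12
# auth_cluster_required = cephx
# auth_service_required = cephx
```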
- 14 ZFS ZIL
- 4 c0t500003976C887715d0 1.0 727.7 1.0 51260.5 0.0 10.0 0.0 13.7 0 100 c0t500003976C
- 17 mon HEALTH_WARN mon xxx is using a lot of disk space
- > ## Changing the warning threshold ceph tell mon.* injectargs --mon_data_size_warn=32212254720 </code> {{tag>ceph}}
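The `injectargs` call shown above raises the threshold only at runtime; to make it survive a monitor restart, the same option can be set in ceph.conf (a sketch; the value 32212254720 bytes = 30 GiB is the one from the snippet):

```shell
# /etc/ceph/ceph.conf (fragment) — persist the raised mon data size warning
# [mon]
# mon_data_size_warn = 32212254720
```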
- 27 Ceph OSD 切り離し
- w| grep [o]sd ceph 1808357 5.3 19.2 6922516 3123592 ? Ssl Jun10 274:47 /usr/bin/ceph-osd -f
- 29 Ceph Dashboard
- ull, "roles": ["administrator"], "password": "$2b$12$iOz46vkfT.zR62AmZXXXXXXXXXXXXXXXAlTOIx1Gz3az1CwqR
- 30 ZFS Linux
- ...can be over-committed.</color> zfs create -s -o volblocksize=128k -V 10G pool01/zvol01 After creation, a zd device has been created # l
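The sparse-zvol creation in the snippet, spelled out with a verification step. This requires a live ZFS pool, so it is an untested sketch (pool/volume names `pool01/zvol01` are the ones from the page):

```shell
# -s makes the zvol sparse (thin-provisioned), so the 10G reservation is not
# taken up front and the pool can be over-committed.
zfs create -s -o volblocksize=128k -V 10G pool01/zvol01

# The volume is exposed as a /dev/zd* block device afterwards.
ls -l /dev/zvol/pool01/zvol01
lsblk | grep zd
```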
- 36 LINSTOR Bcache
- :1000 0 2G 0 disk zd16 230:16 0 512M 0 disk └─bcache0 252:0 0 2G 0 disk