Full-text search:
- 38 Using Ceph from OpenStack after installing it with cephadm
- cinder-backup/ </code> ==== deploy ==== kolla-ansible -i ./multinode deploy {{tag>Ceph openstack}}
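A minimal sketch of the globals.yml side of such a kolla-ansible deployment, assuming an external Ceph cluster; the variable names are standard kolla-ansible options, but none of them appear in the snippet above:
<code>
# /etc/kolla/globals.yml (sketch; adjust to your environment)
enable_cinder: "yes"
# point Glance/Cinder/Nova at the Ceph cluster prepared with cephadm
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
</code>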
- 21 Ceph manual installation
- 1 MDS=1 ../src/vstart.sh -d -n -x [[https://docs.ceph.com/docs/mimic/dev/quick_guide/]] {{tag>Ceph}}
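For context, the linked quick guide starts a throwaway development cluster from the build tree; a hedged example of the usual invocation (the daemon counts here are illustrative, since the snippet is truncated):
<code>
# from ceph/build: -d debug output, -n new cluster, -x enable cephx
MON=1 OSD=3 MDS=1 ../src/vstart.sh -d -n -x
./bin/ceph -s     # query the vstart cluster with the locally built client
../src/stop.sh    # tear the cluster down
</code>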
- 37 LINSTOR + OpenStack
- cinder_backup_driver: "nfs" cinder_backup_share: "192.168.30.101:/nfs" </code> {{tag>LINSTOR OpenStack}}
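The snippet is a fragment of kolla-ansible's globals.yml; a slightly fuller sketch, where the two enable_* lines are assumptions about the elided context:
<code>
# /etc/kolla/globals.yml (sketch)
enable_cinder: "yes"
enable_cinder_backup: "yes"
cinder_backup_driver: "nfs"
cinder_backup_share: "192.168.30.101:/nfs"
</code>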
- 36 LINSTOR Bcache
- bcache0 252:0 0 2G 0 disk └─drbd1000 147:1000 0 2G 0 disk </code> {{tag>LINSTOR drbd}}
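The lsblk fragment shows a DRBD device stacked on bcache. A hedged sketch of asking LINSTOR for such a stack; the group and pool names are made up, and the exact option spellings should be checked against linstor help:
<code>
# build resources as DRBD on top of bcache on top of the backing pool
linstor resource-group create rg-bcache --storage-pool pool-hdd --layer-list drbd,bcache,storage
linstor resource-group spawn-resources rg-bcache res1 2G
lsblk   # expect bcache0 with drbd1000 on top, as in the snippet above
</code>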
- 13 ZFS logbias
- (228MB/s)(2178MiB/10028msec) write: IOPS=1775, BW=222MiB/s (233MB/s)(2226MiB/10028msec) {{tag>zfs}}
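Those figures read like fio results comparing logbias settings; the knob itself, with an assumed dataset name:
<code>
zfs get logbias tank/vol               # default: latency (prefer the ZIL/SLOG)
zfs set logbias=throughput tank/vol    # bypass the SLOG for large writes
</code>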
- 03 Ubuntu GlusterFS
- force </code> ===== 6.mount ===== mount.glusterfs g-work01:k8s-volume /mnt/ {{tag>GlusterFS}}
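A hedged reconstruction of the steps leading up to that mount; only g-work01 and the volume name k8s-volume appear in the snippet, so the other hosts, brick paths, and replica count are assumptions:
<code>
gluster volume create k8s-volume replica 3 \
    g-work01:/data/brick g-work02:/data/brick g-work03:/data/brick force
gluster volume start k8s-volume
mount.glusterfs g-work01:k8s-volume /mnt/
</code>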
- 35 pgs not deep-scrubbed in time
- mon_warn_pg_not_deep_scrubbed_ratio 1.0 With this setting, a warning is raised once 7 days are exceeded: 604800 * 1 / 60 / 60 / 24 = 7 {{tag>ceph}}
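A sketch of reading and changing that option through the ceph config interface, mirroring the page's 7-day computation:
<code>
ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio
# per the page's arithmetic: 604800 s * 1.0 / 60 / 60 / 24 = 7 days
ceph config set global mon_warn_pg_not_deep_scrubbed_ratio 1.0
</code>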
- 20 Ceph PG count
- target_max_misplaced_ratio <code> # ceph config get mgr target_max_misplaced_ratio 0.050000 </code> {{tag>Ceph}}
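target_max_misplaced_ratio caps how much data the balancer and pg-autoscaler will move at once; a sketch of inspecting and raising it, where the 0.10 value is purely illustrative:
<code>
ceph config get mgr target_max_misplaced_ratio    # default 0.050000 (5%)
ceph config set mgr target_max_misplaced_ratio 0.10
ceph osd pool autoscale-status
</code>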
- 19 Ceph OMAP META
- osd.2 </code> ===== Checking bluefs size ===== [[50_dialy:2022:03:03:03#bluefsの容量確認方法]] {{tag>Ceph}}
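Two commands that usually go with this kind of check (hedged; perf-dump section names vary somewhat across releases):
<code>
ceph osd df                          # OMAP and META columns per OSD
ceph daemon osd.2 perf dump bluefs   # bluefs counters, run on the OSD's host
</code>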
- 34 ZFS trim
- g by running an on-demand (manual) TRIM periodically using the zpool trim command. </code> {{tag>ZFS}}
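The quoted man-page advice maps onto these commands; the pool name is assumed:
<code>
zpool trim tank              # one-shot, on-demand TRIM
zpool status -t tank         # per-vdev trim progress and support
zpool set autotrim=on tank   # or trim continuously instead
</code>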
- 33 wipefs
- bles -h, --help display this help -V, --version display version </code> {{tag>disk}}
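For reference, typical wipefs usage next to that help text; the device name is a placeholder:
<code>
wipefs /dev/sdb      # list filesystem/RAID/partition-table signatures
wipefs -a /dev/sdb   # erase all detected signatures
</code>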
- 18 Ceph MON availability
- be able to communicate with each other, two out of three, three out of four, and so on. {{tag>ceph}}
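The quoted sentence describes monitor quorum (a majority of MONs must be reachable); checking it on a live cluster:
<code>
ceph mon stat                       # monitors and current quorum
ceph quorum_status -f json-pretty   # detailed quorum view
</code>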
- 28 Ceph iSCSI
- x.xx:5000"}}} </code> === To remove === ceph dashboard iscsi-gateway-rm GATEWAY_NAME {{tag>Ceph iscsi}}
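The truncated snippet is the tail of the matching iscsi-gateway-add call; a hedged sketch of the add/remove pair, with placeholder credentials and the gateway address kept elided as in the snippet:
<code>
echo "http://admin:admin@192.168.x.xx:5000" | ceph dashboard iscsi-gateway-add -i -
ceph dashboard iscsi-gateway-list
ceph dashboard iscsi-gateway-rm GATEWAY_NAME
</code>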
- 17 mon HEALTH_WARN mon xxx is using a lot of disk space
- <code> ## Change the warning threshold ceph tell mon.* injectargs --mon_data_size_warn=32212254720 </code> {{tag>ceph}}
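Besides raising the threshold (32212254720 bytes = 30 GiB), this warning is often cleared by compacting the monitor's store; the mon name below is assumed:
<code>
ceph tell mon.* injectargs --mon_data_size_warn=32212254720   # raise threshold to 30 GiB
ceph tell mon.mon01 compact                                   # or shrink store.db itself
</code>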
- 32 Ceph resynchronization
- === <code> # systemctl reset-failed ceph-osd@3 # systemctl start ceph-osd@3 </code> {{tag>Ceph}}
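When bouncing an OSD like this, it is common to suppress rebalancing first; these flags are standard Ceph practice, not taken from the page:
<code>
ceph osd set noout                 # don't mark the OSD out while it restarts
systemctl reset-failed ceph-osd@3
systemctl start ceph-osd@3
ceph osd unset noout
ceph -s                            # watch recovery progress
</code>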