Full-text search:
- 31 ZFS IOPS limit
- kIO_Group # dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct 100+0 records in 100+0 records out 51200 bytes (51 kB) copied, 0.000876598 s, 58.4 MB/s </code> ===== ZVO...
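The hit above shows a direct-write dd test run from a shell inside an I/O-limited group. A minimal sketch, assuming the page caps ZVOL IOPS with a cgroup-v1 blkio throttle; the group name, the 230:0 major:minor, and the 100-IOPS cap are placeholders, and BBB is assumed to sit on a filesystem created on the ZVOL (zd0):
<code>
# cap write IOPS for processes in the DiskIO_Group cgroup (230:0 = example zd0 major:minor)
mkdir /sys/fs/cgroup/blkio/DiskIO_Group
echo "230:0 100" > /sys/fs/cgroup/blkio/DiskIO_Group/blkio.throttle.write_iops_device
# move the current shell into the group, then rerun the dd test
echo $$ > /sys/fs/cgroup/blkio/DiskIO_Group/cgroup.procs
dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct
</code>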
- 13 ZFS logbias
- write: IOPS=716, BW=2868KiB/s (2937kB/s)(28.3MiB/10089msec) write: IOPS=781, BW=97.6MiB/s (102MB/s)(984MiB/10077msec) write: IOPS=600, BW=75.0MiB/s (78.7MB/s)(756MiB/10072msec) zfs set logbias=latency DataPool log... write: IOPS=1666, BW=208MiB/s (218MB/s)(2091MiB/10036msec) write: IOPS=1737, BW=217MiB/s (228MB/s)(
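The logbias hit pairs "zfs set logbias=latency" with fio write results. A minimal sketch of reproducing that comparison; the DataPool mountpoint and the exact fio parameters are assumptions, not taken from the page:
<code>
# sync-write benchmark with logbias=latency, then compare against throughput
zfs set logbias=latency DataPool
zfs get logbias DataPool
fio --name=synctest --directory=/DataPool --rw=write --bs=4k --size=1g \
    --ioengine=libaio --direct=1 --sync=1 --runtime=10 --time_based
zfs set logbias=throughput DataPool
</code>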
- 36 LINSTOR Bcache
- Use an NVMe SSD, e.g. an Optane SSD. <code> lvcreate -l 100%FREE --thinpool ubuntu-vg/CachePool linstor stora... he0 252:0 0 2G 0 disk └─drbd1000 147:1000 0 2G 0 disk zd16 230:16 0 512M ... cache0 252:0 0 2G 0 disk └─drbd1000 147:1000 0 2G 0 disk </code> {{tag>LINSTOR drbd}}
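The Bcache hit carves a thin pool out of ubuntu-vg and hands it to LINSTOR; the "linstor stora..." part is cut off. A minimal sketch assuming an lvmthin storage pool is being registered; the node name node1 and pool name pool_cache are placeholders:
<code>
# thin pool on the (cached) volume group, then register it as a LINSTOR storage pool
lvcreate -l 100%FREE --thinpool ubuntu-vg/CachePool
linstor storage-pool create lvmthin node1 pool_cache ubuntu-vg/CachePool
linstor storage-pool list
</code>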
- 30 ZFS Linux
- ../zd0 ==== ZVOL resize ==== zfs set volsize=100G pool01/zvol01 Verify: <code> # zfs list -o name,vols... OLSIZE pool01 - pool01/zvol01 100G </code> ===== Error ===== <code> The ZFS modul
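The ZVOL resize snippet is truncated mid-command; the grow-and-verify step it shows is simply:
<code>
# grow the ZVOL, then confirm the new volsize
zfs set volsize=100G pool01/zvol01
zfs list -o name,volsize pool01/zvol01
</code>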
- 32 Ceph resync
- rocksdb # lvremove /dev/cas/waldb # lvcreate -l 100%free -n osd ceph # lvcreate -L 2G -n waldb cas # lvcreate -l 100%free -n rocksdb cas </code> === Remove the broken OSD =
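The resync hit rebuilds the WAL/DB and data LVs before recreating the OSD. A minimal sketch; the VG/LV names follow the snippet, but the final ceph-volume call is an assumption about how the OSD is brought back:
<code>
# recreate the logical volumes for the broken OSD
lvremove /dev/cas/waldb
lvcreate -L 2G -n waldb cas
lvcreate -l 100%FREE -n rocksdb cas
lvcreate -l 100%FREE -n osd ceph
# recreate the OSD with separate WAL/DB devices
ceph-volume lvm create --data ceph/osd --block.wal cas/waldb --block.db cas/rocksdb
</code>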
- 14 ZFS ZIL
- 727.7 1.0 51260.5 0.0 10.0 0.0 13.7 0 100 c0t500003976C887B39d0 0.0 615.9 0.0 42896
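The ZIL hit is iostat output for what looks like a dedicated log device. A minimal sketch of attaching and checking a SLOG; the pool name is a placeholder, only the device name is taken from the snippet:
<code>
# add the device as a separate intent log (SLOG) and verify
zpool add tank log c0t500003976C887B39d0
zpool status tank
</code>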
- 22 Ceph create-volume error
- [root@cephdev001 my-cluster]# lvcreate -n data -l 100%Free ssd Logical volume "data" created. [root@c
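The create-volume hit prepares an LV named ssd/data; the snippet is cut off before it is consumed. A minimal sketch of the ceph-volume call that presumably follows, hedged since the page does not show it here:
<code>
# carve the LV out of the ssd VG, then hand it to ceph-volume
lvcreate -n data -l 100%FREE ssd
ceph-volume lvm create --data ssd/data
</code>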
- 29 Ceph Dashboard
- container_name: node-exporter ports: - 9100:9100 restart: always </code> chmod 777 prometheus/data <code|prometheus/prometheus.yaml> # ca... static_configs: - targets: ['ceph001:9100'] labels: instance: "ceph001" - targets: ['ceph002:9100'] labels: instance: "ceph00
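The dashboard hit wires node-exporter (port 9100) into Prometheus. A minimal sketch of the scrape config implied by the truncated prometheus.yaml; the job name and scrape interval are assumptions, the targets and labels follow the snippet:
<code|prometheus/prometheus.yaml>
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['ceph001:9100']
        labels:
          instance: "ceph001"
      - targets: ['ceph002:9100']
        labels:
          instance: "ceph002"
</code>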