Full-text search:
- 30 ZFS Linux
- zfs ===== create pool ===== zpool create pool01 /dev/rbd0 ===== pool config ===== zfs set compress=lz4 pool01 zfs set sync=disabled pool01 zfs set dedup=on pool01 zpool set autotrim=on pool01 ===== export ===== unmount the pool zpool
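For context, a minimal end-to-end sketch of the commands this excerpt shows, assuming the pool name pool01 and backing device /dev/rbd0 from the excerpt; the truncated export step is assumed to be a plain zpool export:
<code>
# create the pool on the RBD device
zpool create pool01 /dev/rbd0

# pool tuning from the excerpt
zfs set compress=lz4 pool01
zfs set sync=disabled pool01
zfs set dedup=on pool01
zpool set autotrim=on pool01

# unmount and export the pool (assumed completion of the truncated command)
zpool export pool01
</code>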
- 22 Error with Ceph create-volume
- ng ssd/data, it is already prepared [root@cephdev001 my-cluster]# lvremove -f /dev/ssd/data Logical... volume "data" successfully removed [root@cephdev001 my-cluster]# lvcreate -n data -l 100%Free ssd Logical volume "data" created. [root@cephdev001 my-cluster]# ceph-volume lvm prepare --bluestore ... eph --setgroup ceph stderr: 2020-09-02 18:30:10.301 7f600913da80 -1 bluestore(/var/lib/ceph/osd/ceph-
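A hedged sketch of the recovery sequence this excerpt walks through, recreating the logical volume and preparing the OSD again; the VG/LV names (ssd/data) come from the excerpt, while the --data argument to ceph-volume is an assumption since the excerpt is truncated:
<code>
# recreate the LV, as shown in the excerpt
lvremove -f /dev/ssd/data
lvcreate -n data -l 100%FREE ssd

# prepare the OSD again (data path assumed to be the recreated LV)
ceph-volume lvm prepare --bluestore --data ssd/data
</code>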
- 38 Using Ceph from OpenStack after installing it with cephadm
- /etc/ceph # cephadm bootstrap --mon-ip 192.168.0.101 --initial-dashboard-user admin --initial-dashboar... de> ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph01 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02 ... ceph/ceph.pub root@ceph03 ceph orch host add ceph01 192.168.0.101 ceph orch host add ceph02 192.168.0.102 ceph orch host add ceph03 192.168.0.103 ceph o
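Put together, and hedged as a sketch rather than the page's exact commands, the bootstrap-and-expand flow in this excerpt looks roughly like this; the host names and IPs are the ones shown, and the final OSD step is an assumption:
<code>
# bootstrap the first node
cephadm bootstrap --mon-ip 192.168.0.101 --initial-dashboard-user admin

# distribute the cluster SSH key to the other hosts
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph01
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03

# register the hosts with the orchestrator
ceph orch host add ceph01 192.168.0.101
ceph orch host add ceph02 192.168.0.102
ceph orch host add ceph03 192.168.0.103

# create OSDs on every unused device (assumed follow-up step)
ceph orch apply osd --all-available-devices
</code>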
- 17 mon HEALTH_WARN mon xxx is using a lot of disk space
files pile up in large numbers and eat up disk space: /var/lib/ceph/mon/ceph-mon01/store.db/*.sst <code> # ceph health detail HEALTH_WARN mon mon01 is using a lot of disk space MON_DISK_BIG mon mon01 is using a lot of disk space mon.mon01 is 15 GiB >= mon_data_size_warn (15 GiB) # du -sh /var/li
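A minimal sketch of inspecting and compacting the mon store in this situation; ceph tell mon.<id> compact is the usual remedy and is an assumption here, not quoted from the page:
<code>
# confirm the warning and see which mon is affected
ceph health detail

# check how big the mon store actually is
du -sh /var/lib/ceph/mon/ceph-mon01/store.db

# trigger an online compaction of the mon's RocksDB store (assumed remedy)
ceph tell mon.mon01 compact
</code>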
- 31 ZFS IOPS limit
====== 31 ZFS IOPS limit ====== With ZFS, [[01_linux:21_centos7:07_cgroup|cgroup]] has no effect on a zpool; limiting IOP... with ZFS, throttling the underlying disk has no effect <code> # zpool status pool: pool01 state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM pool01 ONLINE 0 0 0 rbd1 ONLI... applying the limit on a ZVOL does work ===== <code> zfs create -V 20G pool01/zvol01 mkfs.xfs /dev/zvol/pool01/zvol01 mount /de
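A sketch of the ZVOL-based approach this excerpt describes, assuming cgroup v1 blkio throttling; the cgroup path, the zvol device node and its major:minor numbers, and the throttle values are all placeholders:
<code>
# create a 20G zvol and put a filesystem on it
zfs create -V 20G pool01/zvol01
mkfs.xfs /dev/zvol/pool01/zvol01
mount /dev/zvol/pool01/zvol01 /mnt

# look up the zvol's major:minor numbers (device node name is an assumption)
ls -l /dev/zd0

# throttle the zvol via cgroup v1 blkio (group name and values are illustrative)
echo "230:0 10485760" > /sys/fs/cgroup/blkio/mygroup/blkio.throttle.write_bps_device
echo "230:0 100"      > /sys/fs/cgroup/blkio/mygroup/blkio.throttle.write_iops_device
</code>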
- 28 Adding a Ceph Mon
===== Remove ===== ceph-deploy mon destroy ceph-mon01 ===== Sync config ===== After adding a mon, push ceph.conf out to the other nodes. ceph-deploy --overwrite-conf config push ceph001 ceph002 ceph003 ceph005 ceph006 ceph007 ===== ... X-XXX-XXXX-XXXXXXXXXXX mon_initial_members = ceph001, ceph004, ceph005, ceph-mon02 mon_host = 10.10.0.101,10.10.0.104,10.10.0.105, 10.10.10.12 auth_cluster
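For completeness, a hedged sketch of the matching ceph-deploy workflow: adding a mon, removing one, and pushing the updated config as the excerpt shows; the add step and the short host list are assumptions:
<code>
# add a new monitor (assumed counterpart of the destroy shown above)
ceph-deploy mon add ceph-mon02

# remove a monitor
ceph-deploy mon destroy ceph-mon01

# push the updated ceph.conf to the other nodes
ceph-deploy --overwrite-conf config push ceph001 ceph002 ceph003
</code>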
- 23 ceph commands
== ==== 1. List RBDs ==== <code> # rbd ls pool02 vol01 vol_01 vol_02 vol_test vol_03 vol_04 </code> ==== 2. Check which clients are using an RBD ==== This shows that 192.168.10.12 and 192.168.10.11 are using it. <code> # rbd status vol01 -p pool02 Watchers: watcher=192.168.10.12... ==== 3. RBD map ==== Mapping <code> # rbd map vol01 -p pool02 </code> ・Unmapping <code> # rbd unmap vol0
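A short sketch tying the four RBD operations in this excerpt together, using the pool and image names shown; the unmap line uses the pool/image spec form and is an assumption since the excerpt is truncated:
<code>
# 1. list images in a pool
rbd ls pool02

# 2. see which clients have the image open (watchers)
rbd status vol01 -p pool02

# 3. map the image to a local block device
rbd map vol01 -p pool02

# 4. unmap it again
rbd unmap pool02/vol01
</code>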
- 36 LINSTOR Bcache
Configuring this with drbdadm alone is complex and hard... LINSTOR is what makes it easy to manage {{:01_linux:13_storage:linstor01.png?400|}} {{:01_linux:13_storage:linstor02.png?400|}} This page describes how to use Bcache with LINSTOR ===== 1.LINSTO... chePool linstor storage-pool create lvmthin oshv1001 LinstorCache ubuntu-vg/CachePool </code> =====
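A hedged sketch of the LINSTOR storage-pool setup this excerpt starts from, plus a resource-group step that is not in the excerpt; the node name oshv1001, pool name LinstorCache and thin pool ubuntu-vg/CachePool come from the excerpt, while the resource-group name, place count and volume size are invented for illustration:
<code>
# register the LVM thin pool with LINSTOR (from the excerpt)
linstor storage-pool create lvmthin oshv1001 LinstorCache ubuntu-vg/CachePool

# assumed follow-up: define a resource group on that pool and spawn a volume
linstor resource-group create BcacheGrp --storage-pool LinstorCache --place-count 2
linstor volume-group create BcacheGrp
linstor resource-group spawn-resources BcacheGrp res01 20G
</code>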
- 28 Ceph ISCSI
- d /iscsi-target > /iscsi-target> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw > /iscsi-target> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways > /iscsi-target...-igw/gateways> create ceph001 10.xxx.xx.xx > /iscsi-target...-igw/gateways> cr... dashboard iscsi-gateway-list {"gateways": {"ceph001": {"service_url": "http://admin:admin@10.xxx.xx.x
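A hedged sketch of the standard gwcli flow behind this excerpt; the target IQN, gateway name and the dashboard check come from the excerpt, while the /disks step (pool, image name and size) is illustrative only and gwcli paths can vary between versions:
<code>
# inside gwcli
> cd /iscsi-target
> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
> create ceph001 10.xxx.xx.xx

# illustrative: expose an RBD image through the gateway
> cd /disks
> create pool=rbd image=disk_1 size=90G

# from the shell: confirm what the dashboard module sees
ceph dashboard iscsi-gateway-list
</code>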
- 29 Ceph Dashboard
ceph mgr services { "dashboard": "http://ceph01:8080/", } ===== 3. User setup ===== user:admin, pass... e static_configs: - targets: ['ceph001:9283'] - job_name: 'node' static_configs: - targets: ['ceph001:9100'] labels: instance: "ceph001" - targets: ['ceph002:9100'] la
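The flattened scrape configuration at the end of this excerpt reads more clearly as YAML; a sketch of the implied prometheus.yml, with the ceph-mgr Prometheus module on :9283 and node_exporter on :9100 as in the excerpt; the 'ceph' job name and the ceph002 instance label are assumptions:
<code>
scrape_configs:
  - job_name: 'ceph'
    static_configs:
      - targets: ['ceph001:9283']
  - job_name: 'node'
    static_configs:
      - targets: ['ceph001:9100']
        labels:
          instance: "ceph001"
      - targets: ['ceph002:9100']
        labels:
          instance: "ceph002"
</code>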
- 02 GlusterFS
... ===== Error message # mount -t glusterfs 192.168.101.33:/glusterfs /mnt Mount failed. Please check t... details. Adding an entry to /etc/hosts solves it <code> #cat /etc/hosts 192.168.101.33 gluster01 </code> {{tag>GlusterFS}}
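A minimal sketch of the fix this excerpt describes, using the address and hostname shown:
<code>
# let the client resolve the Gluster server's hostname
echo "192.168.101.33 gluster01" >> /etc/hosts

# retry the mount
mount -t glusterfs 192.168.101.33:/glusterfs /mnt
</code>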
- 03 Ubuntu GlusterFS
- <code> cat << EOF >> /etc/hosts 172.16.0.93 g-work01 172.16.0.153 g-work02 172.16.0.166 g-work03 EOF <... olume replica 2 arbiter 1 transport tcp \ g-work01:/gluster/volume \ g-work02:/gluster/volume \ ... force </code> ===== 6.mount ===== mount.glusterfs g-work01:k8s-volume /mnt/ {{tag>GlusterFS}}
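A hedged sketch of the full volume-creation flow this excerpt is taken from, using the g-work hosts shown; the volume name is taken from the mount line, and the peer probe and volume start steps are assumptions:
<code>
# make the hosts known to each other
cat << EOF >> /etc/hosts
172.16.0.93 g-work01
172.16.0.153 g-work02
172.16.0.166 g-work03
EOF

# assumed: form the trusted pool from g-work01
gluster peer probe g-work02
gluster peer probe g-work03

# create a replica-2 + arbiter volume across the three bricks
gluster volume create k8s-volume replica 2 arbiter 1 transport tcp \
  g-work01:/gluster/volume \
  g-work02:/gluster/volume \
  g-work03:/gluster/volume force

# assumed: start the volume, then mount it
gluster volume start k8s-volume
mount.glusterfs g-work01:k8s-volume /mnt/
</code>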
- 14 ZFS ZIL
When this happens, access over NFS gets terribly slow <code> $ zpool status pool01 ... STATE READ WRITE CKSUM pool01 DEGRADED 0 0 0 ... taking it offline settles things down for the time being <code> $ setup volume pool01 offline-lun c0t500003976C887B39d0 </code> {{tag>N
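The setup volume ... offline-lun command in this excerpt looks like a storage-appliance CLI; on a plain Linux/ZFS host the equivalent, sketched here as an assumption, would be to offline the failing device with zpool:
<code>
# identify the degraded device
zpool status pool01

# take the misbehaving device offline (device name copied from the excerpt)
zpool offline pool01 c0t500003976C887B39d0
</code>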
- 20 Ceph PG count
- iB 235 GiB 232 GiB 37 KiB 2.6 GiB 3.0 TiB 7.01 # ceph -s cluster: id: 82c91e96-51db... _OK services: mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 16h) mgr: ceph001(active, since 16h), standbys: ceph003, ceph004, ceph002
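For reference, a minimal sketch of checking and raising a pool's PG count; the pool name and the target value 128 are placeholders, not taken from the page:
<code>
# current placement-group settings for a pool
ceph osd pool get pool01 pg_num
ceph osd pool get pool01 pgp_num

# raise both values (128 is illustrative)
ceph osd pool set pool01 pg_num 128
ceph osd pool set pool01 pgp_num 128
</code>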
- 25 Ceph crash log
crashed osd.4 crashed on host ceph006 at 2021-01-20 19:17:21.948752Z </code> ===== Handling ===== On the target nod... ENTITY NEW 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1cae8789eeff osd.4 * # ceph crash archive 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1cae8
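A short sketch of the crash-handling commands this excerpt uses; the crash ID is the one shown in the excerpt, and ceph crash info / archive-all are assumed additions:
<code>
# list new (unacknowledged) crash reports
ceph crash ls-new

# inspect one report in detail (assumed step)
ceph crash info 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1cae8789eeff

# acknowledge it so the health warning clears
ceph crash archive 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1cae8789eeff

# or acknowledge everything at once
ceph crash archive-all
</code>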