Full-text search:
- 38 Using Ceph from OpenStack after installing it with cephadm
- /etc/ceph # cephadm bootstrap --mon-ip 192.168.0.101 --initial-dashboard-user admin --initial-dashboar... ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph01 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02 ... ceph/ceph.pub root@ceph03 ceph orch host add ceph01 192.168.0.101 ceph orch host add ceph02 192.168.0.102 ceph orch host add ceph03 192.168.0.103 ceph o
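The snippet above bootstraps a cluster with cephadm and then enrolls the remaining hosts through the orchestrator. A minimal sketch of that sequence, assuming the three hosts ceph01–ceph03 at 192.168.0.101–103 from the excerpt; the dashboard password and the final OSD step are placeholders/assumptions, since the preview is truncated.
<code>
# Bootstrap the first monitor/manager node (run on ceph01)
cephadm bootstrap --mon-ip 192.168.0.101 \
  --initial-dashboard-user admin \
  --initial-dashboard-password 'CHANGEME'   # placeholder; the original value is truncated

# Distribute the cluster SSH key so cephadm can manage the other hosts
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03

# Register the hosts with the orchestrator
ceph orch host add ceph02 192.168.0.102
ceph orch host add ceph03 192.168.0.103

# One common follow-up (assumption): let cephadm deploy OSDs on all free disks
ceph orch apply osd --all-available-devices
</code>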
- 21 Ceph manual installation
- 21 Ceph manual installation ====== Starting around the newer Octopus release, it can now be [[01_linux:13_storage:38_cephadm |installed with cephadm]]
- 37 LINSTOR + OpenStack
- fy_tls_backend: "no" docker_registry: 192.168.30.101:4000 docker_registry_insecure: "yes" kolla_instal... der_backup_driver: "nfs" cinder_backup_share: "192.168.30.101:/nfs" </code> {{tag>LINSTOR OpenStack}}
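The excerpt is from a Kolla-Ansible /etc/kolla/globals.yml that points the deployment at a local insecure registry and backs Cinder backups with NFS. A minimal sketch of such a fragment, assuming the values shown above; the option names that the preview truncates and any keys not visible in it (kolla_verify_tls_backend, enable_cinder, enable_cinder_backup) are assumptions.
<code>
# /etc/kolla/globals.yml (fragment) -- keys not visible in the excerpt are assumptions
kolla_verify_tls_backend: "no"
docker_registry: 192.168.30.101:4000
docker_registry_insecure: "yes"

enable_cinder: "yes"
enable_cinder_backup: "yes"
cinder_backup_driver: "nfs"
cinder_backup_share: "192.168.30.101:/nfs"
</code>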
- 36 LINSTOR Bcache
- Configuring things with drbdadm is complex and difficult... LINSTOR is what makes that easy to manage {{:01_linux:13_storage:linstor01.png?400|}} {{:01_linux:13_storage:linstor02.png?400|}} This page describes how to use Bcache with LINSTOR ===== 1.LINSTO... chePool linstor storage-pool create lvmthin oshv1001 LinstorCache ubuntu-vg/CachePool </code> =====
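The excerpt registers an LVM-thin pool (ubuntu-vg/CachePool) as a LINSTOR storage pool named LinstorCache on node oshv1001. A minimal sketch of that step, assuming the node still needs to be registered with the controller; the node IP is a placeholder and the surrounding bcache device setup is omitted.
<code>
# Register the satellite node with the LINSTOR controller (IP is a placeholder)
linstor node create oshv1001 192.168.0.11

# Expose the LVM thin pool ubuntu-vg/CachePool as storage pool "LinstorCache"
linstor storage-pool create lvmthin oshv1001 LinstorCache ubuntu-vg/CachePool

# Check the result
linstor storage-pool list
</code>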
- 03 Ubuntu GlusterFS
- <code> cat << EOF >> /etc/hosts 172.16.0.93 g-work01 172.16.0.153 g-work02 172.16.0.166 g-work03 EOF <... olume replica 2 arbiter 1 transport tcp \ g-work01:/gluster/volume \ g-work02:/gluster/volume \ ... force </code> ===== 6.mount ===== mount.glusterfs g-work01:k8s-volume /mnt/ {{tag>GlusterFS}}
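The excerpt pins the host names in /etc/hosts, creates a replica volume with an arbiter brick across the three g-work nodes, and mounts it. A minimal sketch of the part the preview truncates, assuming the volume name k8s-volume and brick path /gluster/volume shown in the snippet; the peer-probe step is an assumption added for completeness.
<code>
# Make the peers known to each other (run once, e.g. on g-work01)
gluster peer probe g-work02
gluster peer probe g-work03

# Create the replicated volume with an arbiter brick, as in the excerpt
gluster volume create k8s-volume replica 2 arbiter 1 transport tcp \
  g-work01:/gluster/volume \
  g-work02:/gluster/volume \
  g-work03:/gluster/volume \
  force

gluster volume start k8s-volume

# Mount from a client
mount -t glusterfs g-work01:/k8s-volume /mnt/
</code>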
- 20 Ceph PG count
- iB 235 GiB 232 GiB 37 KiB 2.6 GiB 3.0 TiB 7.01 # ceph -s cluster: id: 82c91e96-51db... _OK services: mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 16h) mgr: ceph001(active, since 16h), standbys: ceph003, ceph004, ceph002
- 19 Ceph OMAP META
- 9 1.00000 3.1 TiB 1.7 TiB 1.6 TiB 546 KiB 101 GiB 7.4 TiB 18.83 0.53 10 up o
- 34 ZFS trim
- ...status can be checked. <code> # zpool status -t pool: zpool01 state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM zpool01 ONLINE 0 0 0 sdb ONLINE... 0 0 0 (6% trimmed, started at Tue 01 Mar 2022 04:45:29 AM JST) sdc ONLINE
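The -t flag to zpool status adds a per-device trim column like the "(6% trimmed, ...)" line in the excerpt. A minimal sketch of starting a manual trim and enabling automatic trim, assuming the pool name zpool01 from the excerpt.
<code>
# Start a manual TRIM of all devices in the pool
zpool trim zpool01

# Watch progress; the trim state appears next to each device
zpool status -t zpool01

# Or let ZFS trim freed blocks continuously
zpool set autotrim=on zpool01
</code>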
- 28 Ceph ISCSI
- d /iscsi-target > /iscsi-target> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw > /iscsi-target> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways > /iscsi-target...-igw/gateways> create ceph001 10.xxx.xx.xx > /iscsi-target...-igw/gateways> cr... dashboard iscsi-gateway-list {"gateways": {"ceph001": {"service_url": "http://admin:admin@10.xxx.xx.x
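The gwcli session in the excerpt creates an iSCSI target and registers the gateway nodes, then checks what the dashboard sees. A minimal sketch reconstructed from the excerpt's own commands; the IQN and the masked gateway IP are taken verbatim, and the second gateway line is an assumption based on the truncated "cr...".
<code>
# Run gwcli on one of the gateway nodes and build the target interactively
gwcli
> cd /iscsi-target
> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
> create ceph001 10.xxx.xx.xx      # gateway hostname and its (masked) IP
> create ceph002 10.xxx.xx.xx      # assumed second gateway; the excerpt is truncated here

# Verify the gateways known to the dashboard
ceph dashboard iscsi-gateway-list
</code>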
- 17 mon HEALTH_WARN mon xxx is using a lot of disk space
- ...files pile up in large numbers and squeeze the disk /var/lib/ceph/mon/ceph-mon01/store.db/*.sst <code> # ceph health detail HEALTH_WARN mon mon01 is using a lot of disk space MON_DISK_BIG mon mon01 is using a lot of disk space mon.mon01 is 15 GiB >= mon_data_size_warn (15 GiB) # du -sh /var/li
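MON_DISK_BIG is usually cleared by compacting the monitor's RocksDB store once the cluster is otherwise healthy. A minimal sketch, assuming the monitor id mon01 from the excerpt; the compact-on-start option is an optional extra, not something the excerpt shows.
<code>
# See how large the monitor store really is
du -sh /var/lib/ceph/mon/ceph-mon01/store.db/

# Trigger an online compaction of the mon's store (shrinks the *.sst files)
ceph tell mon.mon01 compact

# Optionally, compact automatically whenever the mon restarts
ceph config set mon mon_compact_on_start true
</code>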
- 32 Ceph resync
- v </code> With the settings above, the LVM volumes below should be visible <code> [root@ceph001 ~]# lvs LV VG Attr LSize Pool... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3 ... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3 ... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3
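Once ceph-volume reports the OSD's LV with its osd id and osd fsid (3 and 7b03b099-... in the excerpt), the OSD can be brought back by activating it. A minimal sketch, assuming systemd-managed OSDs; the excerpt only shows the listing side, so the activate commands are the assumed follow-up.
<code>
# Inspect what ceph-volume knows about the local LVs
ceph-volume lvm list

# Re-activate OSD 3 using the fsid reported above
ceph-volume lvm activate 3 7b03b099-c6bb-47b7-b010-143799410dda

# Or simply activate everything ceph-volume can find
ceph-volume lvm activate --all
</code>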
- 30 ZFS Linux
- zfs ===== create pool ===== zpool create pool01 /dev/rbd0 ===== pool config ===== zfs set compress=lz4 pool01 zfs set sync=disabled pool01 zfs set dedup=on pool01 zpool set autotrim=on pool01 ===== export ===== unmount the pool zpool
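The export step unmounts every dataset in the pool and releases it so another host (or a later boot) can pick it up. A minimal sketch continuing from pool01 above; the import side is added here for completeness and is not part of the excerpt.
<code>
# Unmount all datasets and release the pool
zpool export pool01

# Later, or on another host that can see the device: scan and re-import
zpool import          # lists importable pools
zpool import pool01
</code>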
- 31 ZFS IOPS limit
- ====== 31 ZFS IOPS limit ====== With ZFS, [[01_linux:21_centos7:07_cgroup|cgroup]] has no effect on a zpool itself Limiting IOP... putting a limit on the ...sk has no effect <code> # zpool status pool: pool01 state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM pool01 ONLINE 0 0 0 rbd1 ONLI... putting the limit on a ZVOL does work ===== <code> zfs create -V 20G pool01/zvol01 mkfs.xfs /dev/zvol/pool01/zvol01 mount /de
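Because cgroup limits on the pool's backing disk don't bite, the page instead carves a ZVOL out of the pool and throttles the zvol's own block device. A minimal sketch using the cgroup v1 blkio controller, assuming pool01/zvol01 from the excerpt; the cgroup name zvol-limit and the 230:0 major:minor are hypothetical and must be read from the real zd device.
<code>
# Create and mount the zvol as in the excerpt
zfs create -V 20G pool01/zvol01
mkfs.xfs /dev/zvol/pool01/zvol01
mount /dev/zvol/pool01/zvol01 /mnt

# Find the zvol's block device numbers (MAJ:MIN column; 230:0 is only an example)
lsblk /dev/zvol/pool01/zvol01

# Throttle reads/writes to 1000 IOPS for processes placed in this cgroup
mkdir /sys/fs/cgroup/blkio/zvol-limit
echo "230:0 1000" > /sys/fs/cgroup/blkio/zvol-limit/blkio.throttle.read_iops_device
echo "230:0 1000" > /sys/fs/cgroup/blkio/zvol-limit/blkio.throttle.write_iops_device
echo $$ > /sys/fs/cgroup/blkio/zvol-limit/cgroup.procs
</code>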
- 29 Ceph Dashboard
- ceph mgr services { "dashboard": "http://ceph01:8080/", } ===== 3. User settings ===== user:admin, pass... e static_configs: - targets: ['ceph001:9283'] - job_name: 'node' static_configs: - targets: ['ceph001:9100'] labels: instance: "ceph001" - targets: ['ceph002:9100'] la
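A minimal sketch of enabling the dashboard (HTTP on :8080, as in the excerpt's service URL) and the Prometheus exporter that the scrape config above points at on :9283. The exact ac-user-create form differs between Ceph releases (newer ones read the password from a file), so treat that line as an assumption.
<code>
# Enable the dashboard module; with SSL disabled it serves plain HTTP on :8080
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/ssl false

# Create the admin user (password file form used by newer releases)
echo 'CHANGEME' > /tmp/dashboard-pass
ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator

# Expose cluster metrics for Prometheus on :9283
ceph mgr module enable prometheus

# Confirm the endpoints
ceph mgr services
</code>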
- 28 Adding a Ceph Mon
- ===== Removal ===== ceph-deploy mon destroy ceph-mon01 ===== Config sync ===== After adding, push the updated ceph.conf around. ceph-deploy --overwrite-conf config push ceph001 ceph002 ceph003 ceph005 ceph006 ceph007 ===== ... X-XXX-XXXX-XXXXXXXXXXX mon_initial_members = ceph001, ceph004, ceph005, ceph-mon02 mon_host = 10.10.0.101,10.10.0.104,10.10.0.105, 10.10.10.12 auth_cluster
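The page follows the legacy ceph-deploy workflow: add or destroy a monitor, then push the updated ceph.conf to every node. A minimal sketch using the host names from the excerpt; the add command itself is not visible in the preview, so that line is an assumption.
<code>
# Add a new monitor (the excerpt's ceph.conf already lists ceph-mon02 in mon_initial_members/mon_host)
ceph-deploy mon add ceph-mon02

# Remove an old monitor
ceph-deploy mon destroy ceph-mon01

# Push the updated ceph.conf to the rest of the cluster
ceph-deploy --overwrite-conf config push ceph001 ceph002 ceph003 ceph005 ceph006 ceph007

# Check quorum afterwards
ceph quorum_status --format json-pretty
</code>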