Full-text search:
- 38 Using Ceph from OpenStack after installing it with cephadm
- tall ==== initial bootstrap ceph ==== 192.168.0.101 is on the storage network <code> # mkdir -p /etc/ceph # cephadm bootstrap --mon-ip 192.168.0.101 --initial-dashboard-user admin --initial-dashboa... root@ceph03 ceph orch host add ceph01 192.168.0.101 ceph orch host add ceph02 192.168.0.102 ceph orch host add ceph03 192.168.0.103 ceph orch host labe
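A minimal sketch of the bootstrap flow this hit refers to, using the hostnames and IPs visible in the excerpt; the dashboard password flag is truncated there, so the value below is a placeholder:
<code>
# bootstrap the first cluster node on the storage network
mkdir -p /etc/ceph
cephadm bootstrap --mon-ip 192.168.0.101 \
  --initial-dashboard-user admin \
  --initial-dashboard-password 'CHANGE_ME'
# register the remaining hosts with the orchestrator
ceph orch host add ceph01 192.168.0.101
ceph orch host add ceph02 192.168.0.102
ceph orch host add ceph03 192.168.0.103
# labels (e.g. mon) can then be applied per host, as the excerpt starts to show
ceph orch host label add ceph01 mon
</code>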
- 37 LINSTOR + OpenStack
- ify_tls_backend: "no" docker_registry: 192.168.30.101:4000 docker_registry_insecure: "yes" kolla_insta... der_backup_driver: "nfs" cinder_backup_share: "192.168.30.101:/nfs" </code> {{tag>LINSOTR OpenStack}}
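The options in this excerpt look like a kolla-ansible globals.yml; a hedged reconstruction of that fragment, keeping the registry address and NFS share shown there (everything else in the file is omitted):
<code>
# /etc/kolla/globals.yml (fragment)
kolla_verify_tls_backend: "no"
# pull images from a local, insecure registry
docker_registry: 192.168.30.101:4000
docker_registry_insecure: "yes"
# cinder backups onto an NFS export
cinder_backup_driver: "nfs"
cinder_backup_share: "192.168.30.101:/nfs"
</code>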
- 36 LINSTOR Bcache
Use an NVMe SSD, such as an Optane SSD. <code> lvcreate -l 100%FREE --thinpool ubuntu-vg/CachePool linstor storage-pool create lvmthin oshv1001 LinstorCache ubuntu-vg/CachePool </code> ====... eG linstor resource-group spawn BcacheG Volume001 10G </code> ===== 8. Check ===== <code> ## lsblk zd0 ... sk └─bcache0 252:0 0 2G 0 disk └─drbd1000 147:1000 0 2G 0 disk zd16 230:16
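A sketch of the cache-pool setup the excerpt shows, assuming the node name oshv1001 and VG ubuntu-vg from the snippet; the spawn command below is the long form of what the excerpt abbreviates:
<code>
# turn the free space of the NVMe-backed VG into an LVM thin pool
lvcreate -l 100%FREE --thinpool ubuntu-vg/CachePool
# register it as a LINSTOR storage pool on node oshv1001
linstor storage-pool create lvmthin oshv1001 LinstorCache ubuntu-vg/CachePool
# spawn a 10G resource from the bcache-backed resource group
linstor resource-group spawn-resources BcacheG Volume001 10G
# verify the device stacking (bcache on top of the DRBD device)
lsblk
</code>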
- 13 ZFS logbias
- write: IOPS=716, BW=2868KiB/s (2937kB/s)(28.3MiB/10089msec) write: IOPS=781, BW=97.6MiB/s (102MB/s)(984MiB/10077msec) write: IOPS=600, BW=75.0MiB/s (78.7MB/s)(756MiB/10072msec) zfs set logbias=latency DataPool lo
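The fio numbers in the excerpt compare logbias settings; a minimal sketch of switching and checking the property on the DataPool dataset named there:
<code>
# route small sync writes through the ZIL/SLOG (the default)
zfs set logbias=latency DataPool
# or push large sync writes straight to the main pool devices
zfs set logbias=throughput DataPool
# confirm the active setting
zfs get logbias DataPool
</code>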
- 20 Ceph PG count
- dd 0.81870 1.00000 838 GiB 54 GiB 53 GiB 10 KiB 986 MiB 784 GiB 6.43 0.92 89 up 0... B 62 GiB 4 KiB 712 MiB 775 GiB 7.50 1.07 102 up 2 hdd 0.81870 1.00000 838 GiB ... 5 GiB 64 GiB 9 KiB 435 MiB 774 GiB 7.71 1.10 107 up TOTAL 3.3 TiB 235 GiB 232 GiB 37 KiB 2.6 GiB 3.0 TiB 7.01
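The table in this excerpt looks like `ceph osd df` output used to judge per-OSD PG counts; a hedged sketch of the related commands (the pool name is a placeholder):
<code>
# per-OSD usage and PG counts (the table shown in the excerpt)
ceph osd df
# let the autoscaler report, or set pg_num explicitly on a pool
ceph osd pool autoscale-status
ceph osd pool set <pool> pg_num 128
</code>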
- 19 Ceph OMAP META
- 69 1.00000 3.1 TiB 1.7 TiB 1.6 TiB 546 KiB 101 GiB 7.4 TiB 18.83 0.53 10 up osd.2 </code> ===== compact後 ===== <code> # ceph o... 1.6 TiB 546 KiB 1 GiB 7.4 TiB 18.83 0.53 10 up osd.2 </code> ===== bluefsサイズ確
- 28 Ceph iSCSI
- de> # ls -al -rw-r--r-- 1 root root 41308 9月 25 10:59 libtcmu-1.5.2-1.el7.x86_64.rpm -rw-r--r-- 1 root root 2244 9月 25 10:59 libtcmu-devel-1.5.2-1.el7.x86_64.rpm -rw-r--r-- 1 root root 122032 9月 25 10:58 tcmu-runner-1.5.2-1.el7.x86_64.rpm # rpm -ivh... assword = admin api_port = 5000 trusted_ip_list = 10.xxx.xxx.xx,10.xxx.xxx.xx tpg_default_cmdsn_depth
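The excerpt shows the tcmu-runner RPMs plus part of the gateway configuration; a hedged reconstruction of the /etc/ceph/iscsi-gateway.cfg fragment (api_user is assumed, the trusted IPs are masked in the excerpt and left as-is):
<code>
# /etc/ceph/iscsi-gateway.cfg (fragment)
[config]
api_user = admin          # assumed; only the password line is visible in the excerpt
api_password = admin
api_port = 5000
trusted_ip_list = 10.xxx.xxx.xx,10.xxx.xxx.xx
</code>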
- 32 Ceph resync
- osd ceph -wi-ao---- <10.45t </code> ===== 2. Resync ===== Once the LVs are visible, ceph-... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3 type db vdo ... osd fsid 7b03b099-c6bb-47b7-b010-143799410dda osd id 3
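The excerpt lists an OSD's LV metadata (osd id 3 and its fsid); a minimal sketch of bringing that OSD back online, assuming ceph-volume manages the LV:
<code>
# show the LVs ceph-volume knows about (source of the osd id / fsid above)
ceph-volume lvm list
# re-activate osd.3 using the fsid from that listing
ceph-volume lvm activate 3 7b03b099-c6bb-47b7-b010-143799410dda
</code>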
- 30 ZFS Linux
- ...could be imported no matter where it was mapped. ===== ZVOL ===== The example below creates a 10G pool01/zvol01. <color #b5e61d>Note: "-s" creates a sparse zvo... </color> zfs create -s -o volblocksize=128k -V 10G pool01/zvol01 After creation, a zd device exists # ll /dev/z... ../zd0 ==== ZVOL resize ==== zfs set volsize=100G pool01/zvol01 Check <code> # zfs list -o name,vol... OLSIZE pool01 - pool01/zvol01 100G </code> ===== Error ===== <code> The ZFS modu
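A compact sketch of the zvol lifecycle described in the excerpt (sparse create, device node, resize), using the names shown there:
<code>
# create a sparse 10G zvol with 128k blocks
zfs create -s -o volblocksize=128k -V 10G pool01/zvol01
# the block device appears as a zd node
ls -l /dev/zvol/pool01/zvol01
# grow it to 100G and confirm
zfs set volsize=100G pool01/zvol01
zfs list -o name,volsize pool01/zvol01
</code>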
- 31 ZFS IOPS limit
- cgset -r blkio.throttle.write_iops_device="251:16 10" DiskIO_Group # dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct 100+0 records in 100+0 records out 51200 bytes (51 kB) copied, 0.000876598 s, 58.4 MB/s </code> ===== Z
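The excerpt throttles a zvol via the cgroup v1 blkio controller; a hedged sketch assuming the libcgroup tools and the 251:16 major:minor shown there:
<code>
# create a cgroup and cap writes to device 251:16 at 10 IOPS
cgcreate -g blkio:DiskIO_Group
cgset -r blkio.throttle.write_iops_device="251:16 10" DiskIO_Group
# run the workload inside the group so the limit applies
cgexec -g blkio:DiskIO_Group dd if=/dev/zero of=BBB bs=512 count=100 oflag=direct
</code>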
- 29 Ceph Dashboard
- container_name: node-exporter ports: - 9100:9100 restart: always </code> chmod 777 prometheus/data <code|prometheus/prometheus.yaml> # ... static_configs: - targets: ['ceph001:9100'] labels: instance: "ceph001" - targets: ['ceph002:9100'] labels: instance: "ceph0
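The excerpt mixes a node-exporter compose service with Prometheus scrape targets; a hedged reconstruction of the prometheus.yaml fragment (job name and scrape interval are assumptions):
<code>
# prometheus/prometheus.yaml (fragment)
global:
  scrape_interval: 15s        # assumed; not visible in the excerpt
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['ceph001:9100']
        labels:
          instance: "ceph001"
      - targets: ['ceph002:9100']
        labels:
          instance: "ceph002"
</code>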
- 28 Ceph Mon addition
- ceph001, ceph004, ceph005, ceph-mon02 mon_host = 10.10.0.101,10.10.0.104,10.10.0.105, 10.10.10.12 auth_cluster_required = cephx auth_service_required = cephx aut
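The excerpt is the mon list in ceph.conf after adding ceph-mon02; a hedged reconstruction of that fragment (the [global] header and the truncated third auth line are assumed to be the usual ones):
<code>
# /etc/ceph/ceph.conf (fragment) after adding ceph-mon02
[global]
mon_initial_members = ceph001, ceph004, ceph005, ceph-mon02
mon_host = 10.10.0.101,10.10.0.104,10.10.0.105,10.10.10.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx   # assumed; truncated in the excerpt
</code>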
- 24 Ceph OSD addition
- /bin/dd if=/dev/zero of=/dev/ceph/osd bs=1M count=10 conv=fsync stderr: 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied stderr: , 0.0488377 s, 215 MB/s --> Zapping successful f
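The dd/zap output in the excerpt is typical of ceph-volume wiping an LV before redeployment; a minimal sketch, assuming the /dev/ceph/osd path shown:
<code>
# wipe the old data from the LV (produces the dd output seen in the excerpt)
ceph-volume lvm zap /dev/ceph/osd
# create a fresh bluestore OSD on it
ceph-volume lvm create --bluestore --data /dev/ceph/osd
</code>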
- 27 Ceph OSD detach
- 1808357 5.3 19.2 6922516 3123592 ? Ssl Jun10 274:47 /usr/bin/ceph-osd -f --cluster ceph --id 5
- 23 ceph commands
- vol_04 </code> ==== 2. Checking RBD users ==== This shows that 192.168.10.12 and 192.168.10.11 are using the image. <code> # rbd status vol01 -p pool02 Watchers: watcher=192.168.10.12:0/1416759822 client.387874 cookie=18446462598732840961 watcher=192.168.10.11:0/886076642 client.394385 cookie=1844646259873
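A minimal sketch of the check the excerpt describes, using the pool and image names shown there:
<code>
# list images in the pool, then see which clients hold a watch on vol01
rbd ls -p pool02
rbd status vol01 -p pool02
</code>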