Full-text search:
- 22 Ceph create-volume error
- var/lib/ceph/osd/ceph-0/activate.monmap stderr: 2020-09-02 18:30:09.744 7f8fd77c0700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap... ceph/keyring.bin,: (2) No such file or directory 2020-09-02 18:30:09.744 7f8fd77c0700 -1 AuthRegistry(0x7f8fd00656b8) no keyring found at /etc/ceph/ceph.c
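A minimal troubleshooting sketch for this class of missing-keyring error, assuming a ceph-deploy/ceph-volume style OSD host; the paths are the stock default locations, not taken from the page above.
<code>
# check which keyrings are actually present on the OSD host
ls -l /etc/ceph/ /var/lib/ceph/bootstrap-osd/

# if the bootstrap-osd key is missing, re-export it from a monitor node
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
chown ceph:ceph /var/lib/ceph/bootstrap-osd/ceph.keyring
</code>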
- 28 Ceph Mon addition
- == ===== Add ===== ceph-deploy mon add ceph-mon02 ===== Remove ===== ceph-deploy mon destroy ceph-mo... -deploy --overwrite-conf config push ceph001 ceph002 ceph003 ceph005 ceph006 ceph007 ===== promethe... le prometheus ===== Error ===== <code> [ceph-mon02][INFO ] Running command: systemctl enable ceph.target [ceph-mon02][INFO ] Running command: systemctl enable ceph-m
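A short sketch of the add/remove flow the snippet comes from, assuming a ceph-deploy admin node; the hostnames are the ones shown above.
<code>
# add a monitor and push the updated ceph.conf to the existing nodes
ceph-deploy mon add ceph-mon02
ceph-deploy --overwrite-conf config push ceph001 ceph002 ceph003

# remove the monitor again
ceph-deploy mon destroy ceph-mon02

# confirm the monitor quorum afterwards
ceph quorum_status --format json-pretty
</code>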
- 23 ceph commands
- ===== ==== 1. List RBDs ==== <code> # rbd ls pool02 vol01 vol_01 vol_02 vol_test vol_03 vol_04 </code> ==== 2. Check RBD users ==== This shows that 192.168.10.12 and 192.168.1... 11 are using it. <code> # rbd status vol01 -p pool02 Watchers: watcher=192.168.10.12:0/1416759... RBD map ==== Mapping <code> # rbd map vol01 -p pool02 </code> ・Unmap <code> # rbd unmap vol01 -p pool0
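The same rbd operations spelled out end to end; pool02 and vol01 are the names used on that page.
<code>
# list the images in the pool
rbd ls -p pool02

# show which clients currently have the image open (watchers)
rbd status vol01 -p pool02

# map the image to a local block device, then unmap it again
rbd map vol01 -p pool02
rbd unmap vol01 -p pool02
</code>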
- 38 Using with OpenStack after installing Ceph with cephadm
- h01 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03 ... t add ceph01 192.168.0.101 ceph orch host add ceph02 192.168.0.102 ceph orch host add ceph03 192.168.0.103 ceph orch host label ceph01 mon ceph orch host label ceph02 mon ceph orch host label ceph03 mon ceph orch hos
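A sketch of the cephadm host-enrollment steps shown above; recent releases spell the label command as `ceph orch host label add`, and the addresses are the ones from the page.
<code>
# distribute the cluster SSH key to the new hosts
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03

# register the hosts and mark them as monitor candidates
ceph orch host add ceph02 192.168.0.102
ceph orch host add ceph03 192.168.0.103
ceph orch host label add ceph02 mon
ceph orch host label add ceph03 mon

# place monitors on every host carrying the "mon" label
ceph orch apply mon --placement="label:mon"
</code>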
- 03 Ubuntu GlusterFS
- etc/hosts 172.16.0.93 g-work01 172.16.0.153 g-work02 172.16.0.166 g-work03 EOF </code> ===== 3.peer ===== Run from the first node: gluster peer probe g-work02 gluster peer probe g-work03 ===== 4.Directory ... nsport tcp \ g-work01:/gluster/volume \ g-work02:/gluster/volume \ g-work03:/gluster/volume \
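A sketch of the replicated volume this page builds, run from g-work01; the volume name gvol0 is hypothetical because the snippet cuts it off.
<code>
# join the other nodes to the trusted pool
gluster peer probe g-work02
gluster peer probe g-work03
gluster peer status

# create and start a 3-way replicated volume (name "gvol0" is hypothetical)
gluster volume create gvol0 replica 3 transport tcp \
  g-work01:/gluster/volume \
  g-work02:/gluster/volume \
  g-work03:/gluster/volume
gluster volume start gvol0
gluster volume info gvol0
</code>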
- 13 ZFS logbias
- 3MiB/10089msec) write: IOPS=781, BW=97.6MiB/s (102MB/s)(984MiB/10077msec) write: IOPS=600, BW=75.0... write: IOPS=1737, BW=217MiB/s (228MB/s)(2178MiB/10028msec) write: IOPS=1775, BW=222MiB/s (233MB/s)(2226MiB/10028msec) {{tag>zfs}}
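The fio numbers above compare logbias settings; a minimal sketch of flipping the property, assuming a dataset named tank/test (hypothetical).
<code>
# show the current setting
zfs get logbias tank/test

# throughput: large sync writes bypass the ZIL/SLOG and go to the main pool
zfs set logbias=throughput tank/test

# latency (default): sync writes go through the ZIL/SLOG for low latency
zfs set logbias=latency tank/test
</code>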
- 20 Ceph PG count
- 62 GiB 4 KiB 712 MiB 775 GiB 7.50 1.07 102 up 2 hdd 0.81870 1.00000 838 GiB 6... services: mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 16h) mgr: ceph001(active, since 16h), standbys: ceph003, ceph004, ceph002 osd: 4 osds: 4 up (since 16h), 4 in (since 17
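A minimal sketch of checking and changing a pool's PG count; the pool name is hypothetical since the snippet does not show it.
<code>
# current placement-group settings for a pool
ceph osd pool get pool01 pg_num
ceph osd pool get pool01 pgp_num

# raise the PG count (on Nautilus and later pgp_num follows automatically)
ceph osd pool set pool01 pg_num 128

# or leave it to the autoscaler
ceph osd pool set pool01 pg_autoscale_mode on
ceph osd pool autoscale-status
</code>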
- 21 Ceph manual installation
- === Using 14.2.8 this time ==== # git checkout 2d095e947a02261ce61424021bb43bd3022d35cb # git log commit 2d095e947a02261ce61424021bb43bd3022d35cb Author: Jenkins Build Slave Use
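A sketch of pinning a source build to that release, assuming a clone of the upstream repository; checking out the v14.2.8 tag should land on the same commit the page shows.
<code>
# clone the tree with submodules and pin it to the 14.2.8 release
git clone --recurse-submodules https://github.com/ceph/ceph.git
cd ceph
git checkout v14.2.8
git describe --tags
</code>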
- 25 Ceph crash log
- tly crashed osd.4 crashed on host ceph006 at 2021-01-20 19:17:21.948752Z </code> ===== Handling ===== ... ENTITY NEW 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1cae8789eeff osd.4 * # ceph crash archive 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1
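The crash-module workflow used above, written out; the crash ID is the one from the page.
<code>
# list recent crashes and inspect one
ceph crash ls
ceph crash info 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1cae8789eeff

# acknowledge it so the RECENT_CRASH health warning clears
ceph crash archive 2021-01-20_19:17:21.948752Z_ffce1753-6f57-43da-8afb-1cae8789eeff
# or acknowledge everything at once
ceph crash archive-all
</code>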
- 28 Ceph ISCSI
- rbd-target-api will not start without it, so install kernel 4. Error <code> 2020-09-25 13:57:26,552 CRITICAL [rbd-target-api:2879... ted but the crt/key files missing/incompatible? 2020-09-25 13:57:26,552 CRITICAL [rbd-target-api:2881... .xx > /iscsi-target...-igw/gateways> create ceph002 10.xxx.xx.xx > /iscsi-target...-igw/gateways> cd
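A rough gwcli sketch of the gateway-creation step shown above, assuming the ceph-iscsi tooling; the IQN is hypothetical and the addresses stay masked as in the page.
<code>
# interactive gwcli session on the first gateway node
gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.example.iscsi-gw:ceph-igw
/iscsi-targets> cd iqn.2003-01.com.example.iscsi-gw:ceph-igw/gateways
.../gateways> create ceph001 10.xxx.xx.xx
.../gateways> create ceph002 10.xxx.xx.xx
</code>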
- 16 Ceph sizing (RAM/CPU)
- |32-64| |Ceph Dashboard|4|8| See also: BlueStore wal rocksdb sizing [[50_dialy:2021:02:13]] {{tag>Ceph}}
- 29 Ceph Dashboard
- instance: "ceph001" - targets: ['ceph002:9100'] labels: instance: "ceph002" - targets: ['ceph003:9100'] la
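The snippet is part of a prometheus.yml node_exporter target list; the Ceph side of the monitoring setup is sketched below, assuming the mgr modules are used.
<code>
# enable the dashboard and the Prometheus exporter in ceph-mgr
ceph mgr module enable dashboard
ceph mgr module enable prometheus

# show where the dashboard and the metrics endpoint are listening
ceph mgr services
</code>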
- 32 Ceph resync
- lv ceph -wi-ao---- 202.00g ... | 0 | 0 | exists,up | | 1 | ceph002 | 3285G | 7610G | 22 | 1382k | 2 | 16
- 02 GlusterFS
- ====== 02 GlusterFS ====== Distributed storage GlusterFS ===== Install xfs packages ===== <code> # yum install kmod-xfs x
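A sketch of preparing an XFS brick before the GlusterFS packages go on, assuming a dedicated disk /dev/sdb (hypothetical).
<code>
# xfs userland tools (kmod-xfs only matters on old CentOS kernels)
yum install -y xfsprogs

# format and mount a brick filesystem; device and mountpoint are hypothetical
mkfs.xfs -i size=512 /dev/sdb
mkdir -p /gluster
mount /dev/sdb /gluster
echo '/dev/sdb /gluster xfs defaults 0 0' >> /etc/fstab
</code>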
- 19 Ceph OMAP META
- osd.2 </code> ===== Check bluefs size ===== [[50_dialy:2022:03:03:03#bluefsの容量確認方法]] {{tag>Ceph}}
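A minimal sketch of the usual ways to look at per-OSD OMAP/META and BlueFS usage; osd.2 is the id from the snippet.
<code>
# OMAP and META columns per OSD
ceph osd df

# BlueFS counters straight from the admin socket (run on the host of osd.2)
ceph daemon osd.2 perf dump bluefs
</code>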