Full-text search:
- 22 Ceph create-volume error
- ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable) stderr: 1: (ceph::_... ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable) stderr: 1: (()+0xf6
- 28 Adding a Ceph Mon
- ite-conf config push ceph001 ceph002 ceph003 ceph005 ceph006 ceph007 ===== Restart prometheus ===== Immediately after adding... XXXX mon_initial_members = ceph001, ceph004, ceph005, ceph-mon02 mon_host = 10.10.0.101,10.10.0.104,10.10.0.105, 10.10.10.12 auth_cluster_required = cephx auth_s...
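The monitor settings in the truncated snippet above can be reconstructed as a ceph.conf fragment. Only the member and host lists are taken from the snippet; the `[global]` section header is an assumption, and the truncated `auth_s...` lines are left out rather than guessed:

```ini
# Hypothetical ceph.conf fragment reconstructed from the snippet above.
# The [global] header is assumed; mon_initial_members / mon_host come from the page.
[global]
mon_initial_members = ceph001, ceph004, ceph005, ceph-mon02
mon_host = 10.10.0.101,10.10.0.104,10.10.0.105,10.10.10.12
auth_cluster_required = cephx
```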
- 32 Ceph resync
- ..., the list can be seen with ceph-volume lvm list. <code> [root@ceph005 ~]# ceph-volume lvm list ====== osd.3 ======= ... Delete the broken target OSD === Written assuming ID 3 <code> [root@ceph005 ~]# ceph osd status +----+---------+-------+-----... | 2 | 0 | exists,up | | 3 | ceph005 | 0 | 0 | 3 | 49.6k | 0 |
- 29 Ceph Dashboard
- instance: "ceph004" - targets: ['ceph005:9100'] labels: instance: "ceph005" - targets: ['ceph006:9100'] la
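The snippet above lists node_exporter scrape targets (port 9100) for the Ceph hosts. Reconstructed as a minimal Prometheus `static_configs` fragment; the `job_name` and surrounding structure are assumptions, only the targets and labels come from the snippet:

```yaml
# Sketch of a prometheus.yml scrape config matching the snippet above.
# job_name and nesting are assumed, not from the page.
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['ceph004:9100']
        labels:
          instance: "ceph004"
      - targets: ['ceph005:9100']
        labels:
          instance: "ceph005"
      - targets: ['ceph006:9100']
        labels:
          instance: "ceph006"
```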
- 14 ZFS ZIL
- 0 0 0 c0t50000397584BC005d0 ONLINE 0 0 0 c0t5000
- 20 Ceph PG count
- misplaced_ratio <code> # ceph config get mgr target_max_misplaced_ratio 0.050000 </code> {{tag>Ceph}}
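The "Ceph PG count" page above deals with PG sizing and `target_max_misplaced_ratio`. The widely cited rule of thumb for choosing a pool's PG count (about 100 PGs per OSD divided by the replica count, rounded up to a power of two) can be sketched in Python; the helper name and the default of 100 PGs per OSD are assumptions, not taken from the page:

```python
# Rule-of-thumb PG count helper: a minimal sketch, not from the wiki page.
# Heuristic: (num_osds * target_pgs_per_osd) / pool_size, rounded up to a power of 2.

def suggested_pg_count(num_osds: int, pool_size: int, target_pgs_per_osd: int = 100) -> int:
    """Return the rule-of-thumb PG count, rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / pool_size
    pgs = 1
    while pgs < raw:
        pgs *= 2
    return pgs

print(suggested_pg_count(10, 3))  # 10 OSDs, 3x replication -> 512
```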
- 23 ceph commands
- 8/s 19/s 530 KiB/s 1.6 KiB/s 4.05 ms 3.97 ms one/one-75-288-0 16/s
- 36 LINSTOR Bcache
- ==== The LINSTOR installation itself is done via the page below. [[06_virtualization:05_container:16_kubernetes_linstor#3. LINSTORインストール]