Full-text search:
- 29 Ceph Dashboard
- mgr mgr/prometheus/rbd_stats_pools glance,cinder,nova ==== ③. docker-compose ==== Using docker-compose, pr... /var/lib/grafana/grafana.db restart: always node-exporter: image: prom/node-exporter container_name: node-exporter ports: - 9100:9100 restart: always </code>
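A minimal docker-compose.yml consistent with the fragments above. The grafana.db bind mount and the node-exporter service come from the excerpt; the prometheus service, image tags, and the prometheus/grafana port numbers are assumptions.
<code>
# docker-compose.yml (sketch; prometheus/grafana details are assumed)
version: "3"
services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - 9090:9090
    restart: always
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    volumes:
      - ./grafana.db:/var/lib/grafana/grafana.db
    restart: always
  node-exporter:
    image: prom/node-exporter
    container_name: node-exporter
    ports:
      - 9100:9100
    restart: always
</code>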
- 22 Ceph create-volume error
- ing,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory 2020-09-02 18:30:09.744 7f8fd77c0700 -1 AuthRegistry(0x7f8fd00656b8) no keyring found at /etc/ceph/ceph.client.bootstrap-... derr: 11: (()+0x573c85) [0x563661818c85] stderr: NOTE: a copy of the executable, or `objdump -rdS <ex
- 26 Ceph OSD reboot
- = 1. Before stopping an OSD, temporarily pause cluster rebalancing ====== ceph osd set noout ceph osd set norebalance ====== 2. Checks before stopping the OSD node ====== Before stopping the OSD node, make sure the cluster is in the HEALTH_OK state. \\ <color #ed1c24>※ When the cluster is in a Degraded or similar state, the OSD node
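A sketch of the full flow the page describes; the unset step at the end is an assumed follow-up once the node is back.
<code>
# Pause rebalancing before the reboot (from the snippet above)
ceph osd set noout
ceph osd set norebalance

# Confirm the cluster is HEALTH_OK before taking the node down
ceph health

# ... reboot the OSD node and wait for its OSDs to rejoin ...

# Assumed follow-up: restore normal behavior once the node is back
ceph osd unset norebalance
ceph osd unset noout
</code>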
- 28 Ceph ISCSI
- os.d/ceph-iscsi.repo [ceph-iscsi] name=ceph-iscsi noarch packages baseurl=http://download.ceph.com/ceph-iscsi/3/rpm/el7/noarch enabled=1 gpgcheck=1 gpgkey=https://download.... cess to the Ceph storage cluster from the gateway node is required, if not # colocated on an OSD node. cluster_name = ceph # Place a copy of the ceph clu
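Reconstructed from the fragments above, a plausible /etc/yum.repos.d/ceph-iscsi.repo; the gpgkey URL is truncated in the excerpt, so the value below (the usual Ceph release key location) is an assumption.
<code>
# /etc/yum.repos.d/ceph-iscsi.repo (sketch; gpgkey URL assumed)
[ceph-iscsi]
name=ceph-iscsi noarch packages
baseurl=http://download.ceph.com/ceph-iscsi/3/rpm/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
</code>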
- 28 Ceph Mon addition
- cket: exception getting command descriptions: [Errno 2] No such file or directory [ceph-mon02][WARNIN] monitor ceph-mon02 does not exist in monmap [ceph-mon02][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors [
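The warning names the likely cause: neither public_addr nor public_network is set. A minimal sketch of the fix, assuming a ceph-deploy workflow (suggested by the log format) and a placeholder subnet.
<code>
# ceph.conf on the admin node (subnet is a placeholder)
#   [global]
#   public_network = 192.168.0.0/24

# push the updated conf, then retry adding the monitor
ceph-deploy --overwrite-conf config push ceph-mon02
ceph-deploy mon add ceph-mon02
</code>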
- 38 Using Ceph from OpenStack after installing with cephadm
- t.cinder.keyring # ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_pref... s, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring # ceph auth get-or-create client.cinder... > mkdir /etc/kolla/config mkdir /etc/kolla/config/nova mkdir /etc/kolla/config/glance mkdir -p /etc/ko... /cinder/ cp /etc/ceph/ceph.conf /etc/kolla/config/nova/ cp /etc/ceph/ceph.conf /etc/kolla/config/glanc
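Pieced together from the truncated commands above, a sketch of the client setup for Kolla-Ansible; the exact caps are cut off in the excerpt, so the glance key below follows the standard Ceph-for-OpenStack example and is an assumption.
<code>
# create an OpenStack client key (caps assumed from the standard examples)
ceph auth get-or-create client.glance mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
  -o /etc/ceph/ceph.client.glance.keyring

# stage ceph.conf where kolla-ansible picks it up (paths from the excerpt)
mkdir -p /etc/kolla/config/nova /etc/kolla/config/glance /etc/kolla/config/cinder
cp /etc/ceph/ceph.conf /etc/kolla/config/nova/
cp /etc/ceph/ceph.conf /etc/kolla/config/glance/
cp /etc/ceph/ceph.conf /etc/kolla/config/cinder/
</code>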
- 35 pgs not deep-scrubbed in time
- ====== 35 pgs not deep-scrubbed in time ====== ===== How the warning works ===== With the default settings, no deep_scrub for 5 days and 6 hours... deep_scrub_interval (604800.000000) * mon_warn_pg_not_deep_scrubbed_ratio (0.750000) "mon_warn_pg_not_deep_scrubbed_ratio": "0.750000" 604800 * 0.7... = Check the current value ==== # ceph config get mgr mon_warn_pg_not_deep_scrubbed_ratio 0.750000 ===== Remediation ===== m
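The threshold arithmetic from the excerpt, spelled out as the page computes it; the manual deep-scrub at the end is one possible remediation and its pgid is a placeholder.
<code>
# warning threshold = deep_scrub_interval * mon_warn_pg_not_deep_scrubbed_ratio
#   604800 s * 0.75 = 453600 s = 5 days 6 hours

# check the current value
ceph config get mgr mon_warn_pg_not_deep_scrubbed_ratio

# example remediation: deep-scrub a lagging PG by hand (pgid is a placeholder)
ceph pg deep-scrub 1.0
</code>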
- 20 Ceph PG count
- . Preparation before the change ===== ==== Stop scrubs ==== ceph osd set noscrub ceph osd set nodeep-scrub ==== Lower the backfill value in advance ==== ceph tell 'osd.*' injectargs --osd-ma... , 3.0 TiB / 3.3 TiB avail pgs: 0.775% pgs not active 3006/60795 objects misplaced... ts/s </code> ===== 4. After completion ===== ceph osd unset noscrub ceph osd unset nodeep-scrub ===== During the PG change
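The whole sequence as a sketch; the injectargs option is truncated in the excerpt, so osd-max-backfills is an assumption, and the pool name and PG counts are placeholders.
<code>
# 1. preparation: stop scrubs and throttle backfill
ceph osd set noscrub
ceph osd set nodeep-scrub
ceph tell 'osd.*' injectargs '--osd-max-backfills 1'   # option name assumed

# raise the PG count (pool and values are placeholders)
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256

# 4. after rebalancing completes
ceph osd unset noscrub
ceph osd unset nodeep-scrub
</code>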
- 36 LINSTOR Bcache
- add ==== <code> linstor storage-pool create zfsthin node1 DfltStorPool DataPool linstor storage-pool create zfsthin node2 DfltStorPool DataPool linstor storage-pool create zfsthin node3 DfltStorPool DataPool linstor storage-pool create zfsthin node4 DfltStorPool DataPool </code> ===== 3. Bcache area
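After creating the pools, a quick verification using the standard LINSTOR client subcommands:
<code>
# confirm the zfsthin pools were registered on all four nodes
linstor storage-pool list
linstor node list
</code>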
- 30 ZFS Linux
- ://download.zfsonlinux.org/epel/zfs-release.el7_8.noarch.rpm CentOS8 dnf -y install http://download.zfsonlinux.org/epel/zfs-release.el8_1.noarch.rpm <code> # cat /etc/yum.repos.d/zfs.repo ... e> ===== Error ===== <code> The ZFS modules are not loaded. Try running '/sbin/modprobe zfs' as root
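A sketch of the install flow implied by the excerpt; the first command's yum prefix is cut off in the excerpt, and the default DKMS packaging is assumed.
<code>
# CentOS 7 (yum assumed) / CentOS 8
yum -y install http://download.zfsonlinux.org/epel/zfs-release.el7_8.noarch.rpm
dnf -y install http://download.zfsonlinux.org/epel/zfs-release.el8_1.noarch.rpm

# install ZFS, then load the module if the error above appears
dnf -y install zfs
/sbin/modprobe zfs
lsmod | grep zfs
</code>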
- 32 Ceph resync
- ceph crush device class None db device /dev/cas/rocksd... ceph crush device class None encrypted 0 osd fsi... ceph crush device class None db device /dev/cas/rocksd
- 01 ALUA
- ja-JP/Red_Hat_Enterprise_Linux/5/html/5.4_Release_Notes/sect-Release_Notes-Kernel_Related_Updates.html| ALUA supported since CentOS 5.4]] {{tag> ALUA}}
- 33 wipefs
- n $HOME -f, --force force erasure -i, --noheadings don't print headings -J, --json use JSON output format -n, --no-act do everything except the actual write(
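A typical invocation built from the options listed above; the device path is a placeholder. Preview with --no-act first, then erase.
<code>
# dry run: show what would be wiped without writing (device is a placeholder)
wipefs --no-act --all /dev/sdx

# erase all filesystem/RAID/partition-table signatures for real
wipefs --all --force /dev/sdx
</code>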
- 18 Ceph MON availability
- ain a quorum on a two-monitor deployment, Ceph cannot tolerate any failures; with three monitors, o
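The quorum rule behind this: a majority of monitors, floor(n/2)+1, must be up, so n monitors tolerate n minus that many failures. Checking a live cluster uses the standard command:
<code>
# quorum size = floor(n/2) + 1
#   n=2 -> quorum 2 -> tolerates 0 failures
#   n=3 -> quorum 2 -> tolerates 1 failure
#   n=5 -> quorum 3 -> tolerates 2 failures

# show which monitors are currently in quorum
ceph quorum_status --format json-pretty
</code>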
- 24 Ceph OSD addition
- 7e50d522-b31d-42a6-9b3a-49f92cae2d25 stderr: [errno 2] error connecting to the cluster --> RuntimeEr