Full-text search:
- 29 Ceph Dashboard
- {"username": "admin", "lastUpdate": 1592953680, "name": null, "roles": ["administrator"], "password": "... metheus: image: prom/prometheus container_name: prometheus environment: TZ: Asia/Tokyo... a: image: grafana/grafana:7.3.0 container_name: grafana environment: - TZ=Asia/Tokyo ... rter: image: prom/node-exporter container_name: node-exporter ports: - 9100:9100 r
- 22 Ceph create-volume error
- ... Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/boot... Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/boot... /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAgZk9fqxUgKRAA8gJEbwtnuVb91YJHV... (Aborted) ** stderr: in thread 7f600913da80 thread_name:ceph-osd stderr: 2020-09-02 18:30:10.567 7f60091...
- 28 Ceph ISCSI
- ... > /etc/yum.repos.d/ceph-iscsi.repo [ceph-iscsi] name=ceph-iscsi noarch packages baseurl=http://downloa... keys/release.asc type=rpm-md [ceph-iscsi-source] name=ceph-iscsi source packages baseurl=http://downloa... << _EOM_ > /etc/ceph/iscsi-gateway.cfg [config] # Name of the Ceph storage cluster. A suitable Ceph conf... required, if not # colocated on an OSD node. cluster_name = ceph # Place a copy of the ceph cluster's admin ...
- 38 Using Ceph with OpenStack after installing via cephadm
- ... cephadm install ==== <code> # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octop... _backend_check: True cinder_enabled_backends: - name: rbd-1 - name: linstor-drbd - name: nfs-1 # Cinder-Backup enable_cinder_backup: "yes" cinder_backup_driver: "nfs"
- 32 Ceph resync
- ...8be67b9-b1a6-411e-8b65-c29c23a7968a cluster name ceph crush device class ... 8be67b9-b1a6-411e-8b65-c29c23a7968a cluster name ceph crush device class ... 8be67b9-b1a6-411e-8b65-c29c23a7968a cluster name ceph crush device class
- 19 Ceph OMAP META
- ... META AVAIL %USE VAR PGS STATUS TYPE NAME 2 hdd 9.09569 1.00000 3.1 ... META AVAIL %USE VAR PGS STATUS TYPE NAME 2 hdd 9.09569 1.00000 3.1 ...
- 30 ZFS Linux
- ...volsize=100G pool01/zvol01 Check: <code> # zfs list -o name,volsize NAME VOLSIZE pool01 - pool01/zvol01 100G </code> ===== Error =====
- 37 LINSTOR + OpenStack
- ..._backend_check: True cinder_enabled_backends: - name: linstor-drbd - name: nfs-1 # Cinder-Backup enable_cinder_backup: "yes" cinder_backup_driver: "nfs"
- 14 ZFS ZIL
- NAME STATE READ WRITE CKSUM
- 23 ceph commands
- ..._count: 0 id: 7a30a285303f3 block_name_prefix: rbd_data.7a30a285303f3 format: 2
- 24 Adding a Ceph OSD
- ... Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/boot...
- 31 ZFS IOPS limit
- ...pool01 state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM pool01 ON...
- 34 ZFS trim
- ...zpool01 state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM zpool01 ON...