Full-text search:
- 38 Using Ceph from OpenStack after installing it with cephadm
- dm install ==== initial bootstrap ceph ==== 192.168.0.101 is on the storage network <code> # mkdir -p /etc/ceph # cephadm bootstrap --mon-ip 192.168.0.101 --initial-dashboard-user admin --initial-d... ph.pub root@ceph03 ceph orch host add ceph01 192.168.0.101 ceph orch host add ceph02 192.168.0.102 ceph orch host add ceph03 192.168.0.103 ceph orch hos
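Piecing the truncated snippet together, the flow is roughly as below. A minimal sketch, assuming three hosts named ceph01..ceph03; the ssh-copy-id step and the final OSD deployment step are not visible in the snippet and are added here as assumptions:
<code>
# mkdir -p /etc/ceph
# cephadm bootstrap --mon-ip 192.168.0.101 --initial-dashboard-user admin
## distribute the cluster SSH key so the orchestrator can reach the other hosts
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03
# ceph orch host add ceph02 192.168.0.102
# ceph orch host add ceph03 192.168.0.103
## create OSDs from every unused device (assumption, not shown in the snippet)
# ceph orch apply osd --all-available-devices
</code>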
- 22 Error with Ceph create-volume
- derr: 4: (BlueStore::~BlueStore()+0x9) [0x563661cf1639] stderr: 5: (OSD::mkfs(CephContext*, ObjectSto... derr: 7: (BlueStore::~BlueStore()+0x9) [0x563661cf1639] stderr: 8: (OSD::mkfs(CephContext*, ObjectSto
- 23 ceph commands
- _03 vol_04 </code> ==== 2. Checking who is using an RBD image ==== This shows that 192.168.10.12 and 192.168.10.11 are using it. <code> # rbd status vol01 -p pool02 Watchers: watcher=192.168.10.12:0/1416759822 client.387874 cookie=18446462598732840961 watcher=192.168.10.11:0/8860766
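To know which image to check in the first place, `rbd ls` lists the images in the pool. The pool and image names below are the ones from the snippet; the `rbd ls` output is assumed:
<code>
# rbd ls -p pool02
vol01
# rbd status vol01 -p pool02
Watchers:
        watcher=192.168.10.12:0/1416759822 client.387874 cookie=18446462598732840961
</code>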
- 03 Ubuntu GlusterFS
- .hosts ===== <code> cat << EOF >> /etc/hosts 172.16.0.93 g-work01 172.16.0.153 g-work02 172.16.0.166 g-work03 EOF </code> ===== 3.peer ===== Run from the first node only gluster peer probe g-work02 gluster peer
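The usual continuation after the peer probe is to create and start a replicated volume. A sketch assuming a brick path of /data/brick and a volume named vol01, neither of which appears in the snippet:
<code>
gluster peer probe g-work02
gluster peer probe g-work03
gluster peer status
gluster volume create vol01 replica 3 g-work01:/data/brick g-work02:/data/brick g-work03:/data/brick
gluster volume start vol01
gluster volume info vol01
</code>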
- 20 Ceph PG count
- n: 3 daemons, quorum ceph001,ceph002,ceph003 (age 16h) mgr: ceph001(active, since 16h), standbys: ceph003, ceph004, ceph002 osd: 4 osds: 4 up (since 16h), 4 in (since 17h); 22 remapped pgs data:
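When remapped PGs come from changing a pool's PG count, the current and target values can be inspected and adjusted as below; pool01 is a placeholder name, and on recent releases the autoscaler can manage this instead:
<code>
# ceph osd pool autoscale-status
# ceph osd pool get pool01 pg_num
# ceph osd pool set pool01 pg_num 128
# ceph osd pool set pool01 pgp_num 128
</code>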
- 36 LINSTOR Bcache
- ...the installation itself is done as described in [[06_virtualization:05_container:16_kubernetes_linstor#3. LINSTORインストール]] ===== 2.デ... 0 disk └─drbd1000 147:1000 0 2G 0 disk zd16 230:16 0 512M 0 disk └─bcache0 252:0 0 2G 0 disk └─drbd1000 147:1000 0
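The lsblk fragment shows LINSTOR stacking DRBD on top of a bcache device backed by a zvol. If that stack is produced through the layer list, a resource group might be defined roughly as follows; the group, pool, and resource names are placeholders and the exact options should be checked with `linstor resource-group create --help`:
<code>
linstor resource-group create rg_bcache --storage-pool pool_zfs --place-count 2 --layer-list drbd,bcache,storage
linstor volume-group create rg_bcache
linstor resource-group spawn-resources rg_bcache res01 2G
</code>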
- 37 LINSTOR + OpenStack
- _release: "yoga" kolla_internal_vip_address: "192.168.30.254" network_interface: "bond1" neutron_exter... lla_verify_tls_backend: "no" docker_registry: 192.168.30.101:4000 docker_registry_insecure: "yes" koll... der_backup_driver: "nfs" cinder_backup_share: "192.168.30.101:/nfs" </code> {{tag>LINSTOR OpenStack}}
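Reassembled from the fragments above, the kolla-ansible globals.yml appears to contain at least the following; this is only what the snippet shows, not the complete file:
<code>
openstack_release: "yoga"
kolla_internal_vip_address: "192.168.30.254"
network_interface: "bond1"
kolla_verify_tls_backend: "no"
docker_registry: 192.168.30.101:4000
docker_registry_insecure: "yes"
cinder_backup_driver: "nfs"
cinder_backup_share: "192.168.30.101:/nfs"
</code>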
- 02 GlusterFS
- ...cannot be mounted ===== Error message # mount -t glusterfs 192.168.101.33:/glusterfs /mnt Mount failed. Please ch... details. Adding the host to /etc/hosts solves it <code> #cat /etc/hosts 192.168.101.33 gluster01 </code> {{tag>GlusterFS}}
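In other words, the client must be able to resolve the hostname the GlusterFS server reports, even when mounting by IP. A minimal reproduction of the fix, using the addresses from the snippet:
<code>
cat << EOF >> /etc/hosts
192.168.101.33 gluster01
EOF
mount -t glusterfs 192.168.101.33:/glusterfs /mnt
df -h /mnt
</code>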
- 13 ZFS logbias
- ghput DataPool logbias=throughput write: IOPS=716, BW=2868KiB/s (2937kB/s)(28.3MiB/10089msec) wri... tency DataPool logbias = latency write: IOPS=1666, BW=208MiB/s (218MB/s)(2091MiB/10036msec) wri
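logbias is set per pool or dataset; a sketch of how such a comparison could be run, with fsync enabled so the ZIL (which logbias influences) is actually exercised. The pool name DataPool is from the snippet; the fio job parameters are assumptions:
<code>
zfs get logbias DataPool
zfs set logbias=throughput DataPool
fio --name=logbias-test --directory=/DataPool --rw=write --bs=4k --size=1G --runtime=10 --time_based --fsync=1
zfs set logbias=latency DataPool
fio --name=logbias-test --directory=/DataPool --rw=write --bs=4k --size=1G --runtime=10 --time_based --fsync=1
</code>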
- 16 Ceph sizing (RAM/CPU)
- ====== 16 Ceph sizing (RAM/CPU) ====== ^ Component ^ Cores to allocate ^ RAM to allocate (GB) ^ |Base OS|4|16| |Ceph OSD|1|5| |Ceph MON|2|4| |Ceph MGR|2|4| |Ce
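By these figures, for example, a node running 4 OSDs plus one MON and one MGR would be sized at roughly 4 + 4×1 + 2 + 2 = 12 cores and 16 + 4×5 + 4 + 4 = 44 GB of RAM.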
- 28 Ceph iSCSI
- ()] - Unable to start </code> <code> →Kernel 4.16 http://choonrpms.choon.net/centos/7/choonrpms-kernel416/x86_64/ yum remove kernel-tools-libs rpm -ivh ker
- 31 ZFS IOPS limit
- 0 # lsblk | grep rbd1 rbd1 251:16 0 2T 0 disk cgcreate -g blkio:/DiskIO_Gro... up cgset -r blkio.throttle.write_iops_device="251:16 10" DiskIO_Group # dd if=/dev/zero of=BBB bs=51
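Putting the fragments together, the throttle is applied to the rbd device's MAJ:MIN number via the cgroup v1 blkio controller; the writer needs oflag=direct (and/or must run inside the cgroup via cgexec) for the limit to take effect. The device numbers 251:16 and the group name are from the snippet, the dd parameters are assumptions:
<code>
lsblk | grep rbd1                 ## note the MAJ:MIN pair, 251:16 here
cgcreate -g blkio:/DiskIO_Group
cgset -r blkio.throttle.write_iops_device="251:16 10" DiskIO_Group
cgexec -g blkio:DiskIO_Group dd if=/dev/zero of=BBB bs=512 count=2048 oflag=direct
</code>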
- 32 Ceph resync
- 02 | 3285G | 7610G | 22 | 1382k | 2 | 1638 | exists,up | | 2 | ceph004 | 2882G | 8014G | 16 | 485k | 2 | 0 | exists,up |
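The table above looks like `ceph osd status` output captured while recovery was in progress; these are the commands one would typically use to watch a resync, with no options beyond the defaults:
<code>
ceph osd status
ceph pg stat
ceph -s
</code>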
- 01 ALUA
- /lib/products/storage/manual/array/p10000/QL226-97161_ja.pdf|3PAR supports ALUA from 3.1.3 onward]] [[https://acces
- 17 mon HEALTH_WARN mon xxx is using a lot of disk space
- (15 GiB) # du -sh /var/lib/ceph/mon/ceph-mon01/ 16G /var/lib/ceph/mon/ceph-mon01/ </code> ====
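A large mon store is often just history accumulated during a long rebalance; once the cluster is healthy again it can usually be compacted. A sketch assuming the mon id mon01 from the snippet:
<code>
ceph tell mon.mon01 compact
du -sh /var/lib/ceph/mon/ceph-mon01/
## alternatively, compact automatically on every mon start
ceph config set mon mon_compact_on_start true
</code>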