Full-text search:
- 44 CentOS7 chronyd NTP @01_linux:01_net
- =========================================== ^- ntp03.lagoon.nc 3 10 377 434 +27m
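The fragment above is `chronyc sources` output. As a minimal sketch of the CentOS 7 setup it came from (the server name is taken from the excerpt; everything else here is an assumption):

<code>
# /etc/chrony.conf -- the server line is an example, use your own NTP servers
server ntp03.lagoon.nc iburst
</code>
<code>
# systemctl enable --now chronyd
# chronyc sources
</code>

In the `chronyc sources` output, a Reach value of 377 (octal) means the last eight polls all succeeded; the final column is the measured offset from that source.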
- 08 Ubuntu NAT with iptables directly @01_linux:30_ubuntu
- ===== Configure the following so it also persists after reboot [[01_linux:30_ubuntu:03_ipables]] ====== Ubuntu NAT, ufw edition ====== net.i
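The page above is about making NAT survive a reboot. A hedged sketch of one common way to do that on Ubuntu (the interface name `eth0` and the `iptables-persistent` package are assumptions, not from the excerpt):

<code>
# /etc/sysctl.conf -- enable forwarding persistently
net.ipv4.ip_forward=1
</code>
<code>
# /etc/iptables/rules.v4 -- loaded at boot by the iptables-persistent package
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
</code>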
- 06 Ubuntu network configuration @01_linux:30_ubuntu
- === ===== bond + bridge ===== <code> root@dadmhv03:~# cat /etc/netplan/00-installer-config.yaml net
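The excerpt above shows a netplan file combining a bond and a bridge. A self-contained sketch of that shape (interface names, bond mode, and addresses are assumptions; only the file path appears in the excerpt):

<code>
# /etc/netplan/00-installer-config.yaml -- illustrative bond + bridge
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: active-backup
  bridges:
    br0:
      interfaces: [bond0]
      addresses: [192.168.0.10/24]
</code>

Apply with `netplan apply` after editing.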
- 08 qcow2 backup, taken to external storage (online external) @01_linux:08_仮想化:kvm
- <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> . . </c... <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> </code>
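The page above backs up a running qcow2 disk via an external, disk-only snapshot. A command sketch of that technique (the domain name `vm01`, target device `vda`, and file paths are assumptions):

<code>
# redirect writes into a temporary overlay, then copy the now-quiescent base image
# virsh snapshot-create-as vm01 backup --disk-only --atomic --no-metadata
# cp /var/lib/libvirt/images/vm01.qcow2 /backup/vm01.qcow2
# merge the overlay back and pivot the domain onto the base image again
# virsh blockcommit vm01 vda --active --pivot
</code>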
- 38 Using Ceph from OpenStack after installing with cephadm @01_linux:13_storage
- h02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03 ceph orch host add ceph01 192.168.0.101 ceph orc... t add ceph02 192.168.0.102 ceph orch host add ceph03 192.168.0.103 ceph orch host label ceph01 mon ceph orch host label ceph02 mon ceph orch host label ceph03 mon ceph orch host label ceph01 osd ceph orch hos
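The excerpt above adds hosts to the cephadm orchestrator and labels them. As a hedged continuation (placing services by label is one common pattern; the excerpt itself stops at labeling), deployment then looks like:

<code>
# deploy monitors on hosts carrying the "mon" label,
# and OSDs on all unused disks across the cluster
# ceph orch apply mon label:mon
# ceph orch apply osd --all-available-devices
</code>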
- 01 PostgreSQL streaming replica @01_linux:11_データベース:02_postgresql
- P^ |pg1001|172.16.0.51| |pg1002|172.16.0.52| |pg1003|172.16.0.53| ===== Installation ===== yum -y instal... 11 0.0 0.0 408884 4712 ? Ss 06:37 0:03 postgres: walreceiver streaming 0/8000148 </code>... </code> Change the other secondary's connection target and reload <code> [root@pg1003 ~]# vi /data/postgresql.auto.conf host=172.16.0.52 ↓ host=172.16.0.51 [root@pg1003 ~]# systemctl reload postgresql-13.service </code
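The failover step above edits `postgresql.auto.conf` by hand. Since that file is managed by `ALTER SYSTEM`, an equivalent sketch (the port and the `replica` user are assumptions; the host is from the excerpt) on PostgreSQL 13, where `primary_conninfo` is reloadable, would be:

<code>
[root@pg1003 ~]# psql -c "ALTER SYSTEM SET primary_conninfo = 'host=172.16.0.51 port=5432 user=replica'"
[root@pg1003 ~]# systemctl reload postgresql-13.service
</code>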
- 03 iperf3 @01_linux:09_ベンチマーク
- ====== 03 iperf3 ====== A newer version of iperf, the network load-testing tool ===== Installation ===== [[01_linux:01_net:11_yum_rpm
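The page above covers installing iperf3. A minimal usage sketch (the server address, duration, and stream count are example values):

<code>
# on the server
iperf3 -s
# on the client: 30-second test with 4 parallel streams
iperf3 -c 192.168.0.10 -t 30 -P 4
</code>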
- 03 Ubuntu GlusterFS @01_linux:13_storage
- ====== 03 Ubuntu GlusterFS ====== Install GlusterFS on Ubuntu 20.04 # glusterd --version glusterfs 7... g-work01 172.16.0.153 g-work02 172.16.0.166 g-work03 EOF </code> ===== 3.peer ===== Run from the first node glust... er peer probe g-work02 gluster peer probe g-work03 ===== 4.Directory ===== mkdir -p /gluster/vol... ter/volume \ g-work02:/gluster/volume \ g-work03:/gluster/volume \ force </code> ===== 6.mount
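The excerpt above peers three nodes and creates bricks under /gluster/volume. A sketch of the volume-create and mount steps it is leading up to (the volume name `gv0` and mount point are assumptions; hosts and brick paths are from the excerpt):

<code>
# gluster volume create gv0 replica 3 \
#     g-work01:/gluster/volume \
#     g-work02:/gluster/volume \
#     g-work03:/gluster/volume force
# gluster volume start gv0
# mount -t glusterfs g-work01:/gv0 /mnt
</code>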
- 04 Strongswan IKEv2 EAP @01_linux:10_network
- = ==== Create a VPN connection ==== {{:01_linux:10_network:2022-03-23_19h10_19.png?400|}} ==== Configure IPsec with PowerShell
- 03 Strongswan IKEv2 with PSK @01_linux:10_network
- ====== 03 Strongswan IKEv2 with PSK ====== When you connect an IKEv2/IPsec VPN with strongSwan, the local networks on each side can communicate with each other as shown below.
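A minimal sketch of a strongSwan IKEv2/PSK configuration of the kind the page above describes (all addresses, subnets, and the key are placeholders, not values from the excerpt):

<code>
# /etc/ipsec.conf
conn ikev2-psk
    keyexchange=ikev2
    authby=secret
    left=%defaultroute
    leftsubnet=192.168.1.0/24
    right=203.0.113.1
    rightsubnet=192.168.2.0/24
    auto=start
</code>
<code>
# /etc/ipsec.secrets -- "examplekey" is a placeholder
: PSK "examplekey"
</code>

Reload with `ipsec restart` (or `swanctl` on newer setups) and check the tunnel with `ipsec statusall`.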
- 20 Ceph PG数 @01_linux:13_storage
- : mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 16h) mgr: ceph001(active, since 16h), standbys: ceph003, ceph004, ceph002 osd: 4 osds: 4 up (since 16
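The cluster status above shows 4 OSDs, which is the input to the usual PG-count sizing. As a hedged rule-of-thumb sketch (this is the commonly cited heuristic, not necessarily the page's own method): target roughly 100 PGs per OSD, divide by the replica count, then round up to the next power of two.

```shell
# pg_count OSDS REPLICAS -> suggested pg_num (rule-of-thumb heuristic)
pg_count() {
  osds=$1
  replicas=$2
  # ~100 PGs per OSD, shared across replicas (integer division)
  target=$(( osds * 100 / replicas ))
  # round up to the next power of two
  pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

pg_count 4 3    # the cluster above: 4 OSDs, 3 replicas -> 256
```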
- 52 MySQL sysbench 1.0 @01_linux:11_データベース:01_mysql
- .0009s total number of events: 9203 Latency (ms): min: ... ds fairness: events (avg/stddev): 9203.0000/0.00 execution time (avg/stddev): 9.99... 0 sum: 4803.32 Threads fairness: events (avg/stddev): ... 6.0000/0.00 execution time (avg/stddev): 4.8033/0.00 </code> ==== Fileio ==== <code> sysbench f
- 50 MySQL benchmark (sysbench) @01_linux:11_データベース:01_mysql
- sbtest charset=utf8; Query OK, 1 row affected (0.03 sec) mysql> GRANT ALL ON sbtest.* TO 'sbtest'@'lo
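The excerpt above creates the `sbtest` database and user that sysbench expects. A sketch of the benchmark run itself, in sysbench 1.0 syntax (table count, table size, and thread count are example values; replace PASSWORD with the grant from the excerpt):

<code>
# sysbench oltp_read_write --mysql-user=sbtest --mysql-password=PASSWORD \
#     --mysql-db=sbtest --tables=4 --table-size=100000 prepare
# sysbench oltp_read_write --mysql-user=sbtest --mysql-password=PASSWORD \
#     --mysql-db=sbtest --tables=4 --table-size=100000 --threads=8 --time=60 run
# sysbench oltp_read_write --mysql-user=sbtest --mysql-password=PASSWORD \
#     --mysql-db=sbtest --tables=4 cleanup
</code>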
- 19 Ceph OMAP META @01_linux:13_storage
- osd.2 </code> ===== Check bluefs size ===== [[50_dialy:2022:03:03:03#bluefsの容量確認方法]] {{tag>Ceph}}
- fio @01_linux:09_ベンチマーク
- %, aggrios=0/132282, aggrmerge=0/94, aggrticks=0/7037, aggrin_queue=6990, aggrutil=68.93% sda: ios=0/132282, merge=0/94, ticks=0/7037, in_queue=6990, util=68.93% </code> ===== オプシ... _pct01 19 read_clat_pct02 20 read_clat_pct03 21 read_clat_pct04 22 read_clat_pct05 ... ct01 60 write_clat_pct02 61 write_clat_pct03 62 write_clat_pct04 63 write_clat_pct05
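The excerpt above is fio result output for sda. A sketch of an invocation that produces output of this shape (file path, size, and queue depth are example values; the excerpt does not show the original command line):

<code>
# fio --name=randread --filename=/tmp/fio.dat --size=1G \
#     --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
#     --direct=1 --runtime=30 --time_based
</code>

The `--minimal` flag emits the semicolon-separated field list (read_clat_pct01 ... write_clat_pct05 and so on) that the option table above indexes.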