Full-text search:
- 38 Using Ceph from OpenStack after installing it with cephadm @01_linux:13_storage
- matched text: ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03; ceph orch host add ceph01 192.168.0.101 / ceph02 192.168.0.102 / ceph03 192.168.0.103; ceph orch host label ceph01/ceph02/ceph03 mon; ceph orch host label ceph01 osd; ceph orch hos...
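A minimal sketch of the flow the matched page appears to follow, using the hostnames and IPs from the snippet; the `ceph orch host label add` subcommand form is an assumption, since the label lines above are truncated:
<code>
# Push the cephadm SSH key to an additional node, then register and label the hosts
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03
ceph orch host add ceph01 192.168.0.101
ceph orch host add ceph02 192.168.0.102
ceph orch host add ceph03 192.168.0.103
ceph orch host label add ceph01 mon
ceph orch host label add ceph01 osd
</code>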
- 37 SMART information from MegaCli @01_linux:99_その他
- matched text: ...PD Type: SATA; Raw Size: 465.761 GB [0x3a386030 Sectors]; Non Coerced Size: 465.261 GB [0x3a286030 Sectors]; Coerced Size: 464.729 GB [0x3a175800 Sect...; Emergency Spare: No; Device Firmware Level: CC03; Shield Counter: 0; Successful diagnostics completi...; 9XF3MMBMST9500620NS CC03; FDE Capable: Not Capable; FDE Enable: Disable
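The fields above come from MegaCli's physical-drive listing; a sketch of how such output is usually obtained, assuming adapter 0 and megaraid device ID 0 (both placeholders):
<code>
# List all physical drives on all adapters (prints the PD Type / Raw Size / Firmware fields shown above)
MegaCli -PDList -aALL
# Full SMART attributes for a drive behind the RAID controller, via smartmontools
smartctl -d megaraid,0 -a /dev/sda
</code>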
- 91 Zabbix's MySQL is slow @01_linux:04_監視:zabbix
- matched text: mysqld error log — ....sock' port: 3306 Source distribution; 140609 16:03:08 [ERROR] /usr/libexec/mysqld: Incorrect information in file: './zabbix/nodes.frm' (twice); 140609 16:03:08 [ERROR] /usr/libexec/mysqld: Incorrect information in file: './zabbix/sessions.frm'; ...
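"Incorrect information in file: *.frm" usually means the storage engine behind those tables (typically InnoDB) did not come up; a hedged first-pass check, assuming the stock mysql client tools and the zabbix schema named in the log — not necessarily the fix the matched page describes:
<code>
# Confirm InnoDB is actually available; if it is DISABLED, the .frm errors are only a symptom
mysql -e "SHOW ENGINES;"
# Check the tables named in the error log
mysqlcheck zabbix nodes sessions
</code>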
- 01 JuJu Maas Openstack @01_linux:08_仮想化:juju
- matched text: MAAS node table (node01–node03, DHCP) and NIC table (ens3, PXE, 192.168.0.0/24, MAAS-DHCP, gateway 192.168.0.25...); sudo snap install juju --classic; "03. Register MAAS with juju" — prepare a YAML, API key from ...t/prefs/api-keys, check the credential ...
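A sketch of registering MAAS as a juju cloud, matching the "register MAAS with juju" step; the cloud name, endpoint and YAML file name are assumptions, and the OAuth key is the MAAS API key referenced above:
<code>
# maas-cloud.yaml (assumed name/endpoint):
#   clouds:
#     maas-cloud:
#       type: maas
#       auth-types: [oauth1]
#       endpoint: http://192.168.0.25:5240/MAAS
sudo snap install juju --classic
juju add-cloud --client maas-cloud maas-cloud.yaml
juju add-credential maas-cloud   # paste the MAAS API key when prompted
juju credentials                 # confirm the credential was stored
</code>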
- 01 PostgreSQL streaming replica @01_linux:11_データベース:02_postgresql
- matched text: host table (pg1001 172.16.0.51, pg1002 172.16.0.52, pg1003 172.16.0.53); install with yum -y instal...; a "postgres: walreceiver streaming 0/8000148" process; re-point the other secondary and reload — on pg1003 edit /data/postgresql.auto.conf, change host=172.16.0.52 to host=172.16.0.51, then systemctl reload postgresql-13.service
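A sketch of that re-pointing step, assuming the connection string lives in primary_conninfo inside postgresql.auto.conf (the replication user and port are placeholders):
<code>
# On pg1003: point the standby at the new primary (172.16.0.51) and reload
vi /data/postgresql.auto.conf
#   primary_conninfo = 'host=172.16.0.51 port=5432 user=replica ...'
systemctl reload postgresql-13.service
# Verify the walreceiver has reconnected
ps aux | grep walreceiver
</code>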
- 55 corosync pacemaker @01_linux:01_net
- matched text: node table (node01/node02/node03, global IPs 10.100.10.11–13, plus a VIP); /etc/hosts entries (...node01, 192.168.10.12 node02, 192.168.10.13 node03); start pcsd (/etc/init.d/pcsd st...); pcs cluster auth node01 node02 node03 -u hacluster -p [PASSWORD] --force; pcs cluster setup --name pcs_cluster node01 node02 node03; "PCS settings" — stopping the filter and initial settings
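A sketch of bringing the cluster up with those commands (pcs 0.9 syntax, as in the matched page); the password and the property changes at the end are placeholders for the page's "initial settings" step:
<code>
# Authenticate the nodes against pcsd, then create and start the cluster
pcs cluster auth node01 node02 node03 -u hacluster -p PASSWORD --force
pcs cluster setup --name pcs_cluster node01 node02 node03
pcs cluster start --all
# Typical initial settings on a small test cluster (assumption, not necessarily the page's values)
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
</code>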
- 03 Ubuntu GlusterFS @01_linux:13_storage
- matched text: install GlusterFS on Ubuntu 20.04 (glusterd --version → glusterfs 7...); /etc/hosts entries for g-work01, 172.16.0.153 g-work02, 172.16.0.166 g-work03; "3. peer" — run from the first node: gluster peer probe g-work02, gluster peer probe g-work03; "4. Directory" — mkdir -p /gluster/vol...; volume create over g-work01/g-work02/g-work03:/gluster/volume ... force; "6. mount"
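A sketch of the probe/create/mount flow under those hostnames; the volume name (gv0) and the replica count are assumptions, since both are truncated in the snippet:
<code>
# From g-work01: probe peers, create a replicated volume over the three bricks, start and mount it
gluster peer probe g-work02
gluster peer probe g-work03
mkdir -p /gluster/volume
gluster volume create gv0 replica 3 \
  g-work01:/gluster/volume \
  g-work02:/gluster/volume \
  g-work03:/gluster/volume \
  force
gluster volume start gv0
mount -t glusterfs g-work01:/gv0 /mnt
</code>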
- 52 MySQL sysbench 1.0 @01_linux:11_データベース:01_mysql
- matched text: sysbench result output — total number of events: 9203; Latency (ms) min: ..., sum: 4803.32; Threads fairness: events (avg/stddev): 9203.0000/0.00, execution time (avg/stddev): 9.99... and 4.8033/0.00; then a "Fileio" section starting with sysbench f...
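A sketch of the fileio test in sysbench 1.0 syntax, which produces the latency/fairness summary quoted above; the file size, test mode and runtime are placeholders:
<code>
# Prepare test files, run a random read/write fileio benchmark for 10 seconds, then clean up
sysbench fileio --file-total-size=1G prepare
sysbench fileio --file-total-size=1G --file-test-mode=rndrw --time=10 run
sysbench fileio --file-total-size=1G cleanup
</code>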
- How to migrate a VPN server (PacketiX VPN -> SoftEther VPN) @01_linux:01_net
- matched text: download softether-vpnserver-v1.00-9026-rc2-2013.03.10-linux-x86-32bit.tar.gz from ...-download.com/files/softether/v1.00-9026-rc2-2013.03.10-tree/Linux/SoftEther%20VPN%20Server/32bit%20-%...tel%20x86/; tar zxvf softether-vpnserver-v1.00-9026-rc2-2013.03.10-linux-x86-32bit.tar.gz; cd vpnserver; make D...
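A sketch of the unpack-and-build step shown in the snippet (the download URL is truncated above and left as-is; SoftEther's make run asks you to accept the license interactively):
<code>
# Unpack the SoftEther VPN Server tarball and build the vpnserver binary
tar zxvf softether-vpnserver-v1.00-9026-rc2-2013.03.10-linux-x86-32bit.tar.gz
cd vpnserver
make
</code>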
- 03 Checking the SSL intermediate certificate @01_linux:02_www
- matched text: openssl s_client -connect flateight.com:443 -showcerts; output: CONNECTED(00000003), depth=2 /C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=G..., depth=0 /C=JP/OU=Domain Control Validated/CN=log...
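A sketch of a non-interactive variant of that check; piping echo in closes the TLS session immediately, and grepping the depth= and verify lines shows whether the intermediate is being served (the grep pattern is an assumption about the s_client output wording):
<code>
# Show the certificate chain depths and the final verification result only
echo | openssl s_client -connect flateight.com:443 -showcerts 2>/dev/null \
  | grep -E 'depth=|Verify return code'
</code>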
- fio @01_linux:09_ベンチマーク
- matched text: fio disk-stats summary — aggrios=0/132282, aggrmerge=0/94, aggrticks=0/7037, aggrin_queue=6990, aggrutil=68.93%; sda: ios=0/132282, merge=0/94, ticks=0/7037, in_queue=6990, util=68.93%; "Options" — a numbered field list: 19 read_clat_pct02, 20 read_clat_pct03, 21 read_clat_pct04, 22 read_clat_pct05 ... 60 write_clat_pct02, 61 write_clat_pct03, 62 write_clat_pct04, 63 write_clat_pct05
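A sketch of a read-only random-I/O run that would produce a disk-stats block like the one above; the target directory, file size and runtime are placeholders:
<code>
# 4k random reads against a 1 GiB test file for 60 seconds, bypassing the page cache
fio --name=randread --directory=/tmp --size=1G --bs=4k --rw=randread \
    --direct=1 --numjobs=1 --runtime=60 --time_based --group_reporting
</code>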
- 19 Ceph OMAP META @01_linux:13_storage
- matched text: ... osd.2; "Checking the bluefs size" — see [[50_dialy:2022:03:03:03#bluefsの容量確認方法]]; {{tag>Ceph}}
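A hedged way to look at OMAP/META and bluefs usage: ceph osd df prints per-OSD OMAP and META columns, and the bluefs counters can be read from the OSD's perf dump on its host (the osd.2 target and the grep window are assumptions, not necessarily the method on the linked page):
<code>
# Per-OSD OMAP / META usage columns
ceph osd df
# bluefs space counters (db_total_bytes, db_used_bytes, ...) for one OSD, run on its host
ceph daemon osd.2 perf dump | grep -A 12 '"bluefs"'
</code>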
- 22 Error from Ceph create-volume @01_linux:13_storage
- matched text: ceph-volume stderr — ...sid unparsable uuid; 2020-09-02 18:30:10.303 7f600913da80 -1 rocksdb: Invalid argument: Can't ... (repeated)
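One common recovery path when ceph-volume fails on a device carrying leftover metadata is to zap it and retry; this is a hedged sketch, not necessarily the fix the matched page arrives at, and /dev/sdb is a placeholder:
<code>
# Wipe any leftover LVM/partition/OSD metadata from the device, then retry the OSD creation
ceph-volume lvm zap --destroy /dev/sdb
ceph-volume lvm create --data /dev/sdb
</code>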
- 01 Network configuration with nmcli @01_linux:21_centos7
- matched text: nmcli device list showing loopback and vpn_nic03 (tun) as unmanaged; "device man..." — with SoftEther the device is sometimes not recognized by nmcli; in that case editing it directly as in [[50_dialy:2019:03:26]] fixed it; "Bonding" — ...fy bond0 ipv4.method manual ipv4.address 192.168.103.101/16 ipv6.method ignore; "VLAN"
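A sketch of the bonding step that the truncated "...fy bond0" line appears to belong to (nmcli connection modify); the slave interface names and the bond mode are assumptions:
<code>
# Create an active-backup bond, enslave two NICs, assign the static address and bring it up
nmcli connection add type bond con-name bond0 ifname bond0 mode active-backup
nmcli connection add type bond-slave ifname eth0 master bond0
nmcli connection add type bond-slave ifname eth1 master bond0
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.168.103.101/16 ipv6.method ignore
nmcli connection up bond0
</code>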
- 29 Thin provisioning with TGT @01_linux:01_net
- matched text: targets.conf — default-driver iscsi; <target iqn.2014-03.storage-server:disk1> with backing-store /mnt/...; on the initiator, a by-path symlink ip-192.168.10.16:3260-iscsi-iqn.2014-03.storage-server:disk1-lun-1 -> ../../sdc
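A sketch of exporting a sparse (thin) backing file with tgt under that IQN; the backing file path and size are placeholders, since the snippet truncates the path after /mnt/:
<code>
# Create a sparse 100G backing file and export it as an iSCSI LUN via tgtd
truncate -s 100G /mnt/storage/disk1.img
cat <<'EOF' >> /etc/tgt/targets.conf
default-driver iscsi
<target iqn.2014-03.storage-server:disk1>
    backing-store /mnt/storage/disk1.img
</target>
EOF
systemctl restart tgtd
tgtadm --lld iscsi --mode target --op show
</code>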