| host     | IP        |
|----------|-----------|
| galera-1 | 10.0.0.11 |
| galera-2 | 10.0.0.12 |
| galera-3 | 10.0.0.13 |
                 | vip: 10.0.0.10
+----------------+-----------------+
|                |                 |
| eth0:10.0.0.11 | eth0:10.0.0.12  | eth0:10.0.0.13
+-----+-----+    +-----+-----+     +-----+-----+
| galera-1  |    | galera-2  |     | galera-3  |
+-----+-----+    +-----+-----+     +-----+-----+
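Heartbeat and the crm configuration below refer to the nodes by name (galera-1/2/3), so each server needs to resolve those names. A minimal sketch, assuming name resolution is handled with /etc/hosts rather than DNS:

# cat >> /etc/hosts << 'EOF'
10.0.0.11  galera-1
10.0.0.12  galera-2
10.0.0.13  galera-3
EOF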
MySQL must not be started by init, because Pacemaker will manage it as a cluster resource:

# chkconfig mysql off
# chkconfig --list | grep mysql
mysql           0:off   1:off   2:off   3:off   4:off   5:off   6:off
Download the latest packages from http://linux-ha.sourceforge.jp/wp/dl/packages.
This time pacemaker-1.0.13-2.1.el6.x86_64.repo.tar.gz is used.
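A minimal download sketch, assuming the tarball can be fetched straight out of that directory (the exact URL may differ):

# cd /tmp
# wget http://linux-ha.sourceforge.jp/wp/dl/packages/pacemaker-1.0.13-2.1.el6.x86_64.repo.tar.gz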
# tar zxvf pacemaker-1.0.13-2.1.el6.x86_64.repo.tar.gz
# yum remove cluster-glue-libs
# cd pacemaker-1.0.13-2.1.el6.x86_64.repo/rpm/
# yum localinstall pacemaker-1.0.13-2.el6.x86_64.rpm \
    pacemaker-libs-1.0.13-2.el6.x86_64.rpm \
    pacemaker-libs-devel-1.0.13-2.el6.x86_64.rpm \
    cluster-glue-libs-devel-1.0.11-1.el6.x86_64.rpm \
    corosynclib-devel-1.4.6-1.el6.x86_64.rpm \
    heartbeat-3.0.5-1.1.el6.x86_64.rpm \
    heartbeat-devel-3.0.5-1.1.el6.x86_64.rpm \
    heartbeat-libs-3.0.5-1.1.el6.x86_64.rpm \
    cluster-glue-libs-1.0.11-1.el6.x86_64.rpm \
    corosynclib-1.4.6-1.el6.x86_64.rpm \
    corosync-1.4.6-1.el6.x86_64.rpm \
    cluster-glue-1.0.11-1.el6.x86_64.rpm
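A quick way to confirm the packages actually went in (not part of the original write-up):

# rpm -qa | grep -E 'pacemaker|heartbeat|corosync|cluster-glue'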
An alternative approach:
http://mokky14.hatenablog.com/entry/2014/04/13/210821
# cd /tmp
# tar zxvf pacemaker-1.0.13-2.1.el6.x86_64.repo.tar.gz
# cd /tmp/pacemaker-1.0.13-2.1.el6.x86_64.repo
# yum -c pacemaker.repo install pacemaker-1.0.13 heartbeat-3.0.5 pm_crmgen pm_diskd pm_logconv-hb pm_extras
galera-1# cat /etc/ha.d/authkeys
auth 1
1 sha1 secret

# cat /etc/ha.d/ha.cf
pacemaker on
logfile /var/log/ha-log
logfacility local0
debug 0
udpport 694
keepalive 2
warntime 10
deadtime 20
initdead 60
autojoin none
#bcast eth0
mcast bond0 225.0.0.1 694 1 0
#ucast bond0 10.0.0.11
#ucast bond0 10.0.0.12
#ucast bond0 10.0.0.13
auto_failback off
node galera-1
node galera-2
node galera-3
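authkeys has to be readable only by root (Heartbeat refuses to start otherwise), and both files must be identical on every node. A sketch of one way to distribute them, assuming root SSH access between the nodes:

# chmod 600 /etc/ha.d/authkeys
# scp -p /etc/ha.d/authkeys /etc/ha.d/ha.cf galera-2:/etc/ha.d/
# scp -p /etc/ha.d/authkeys /etc/ha.d/ha.cf galera-3:/etc/ha.d/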
# chkconfig heartbeat off
# chkconfig --list | grep heart
heartbeat       0:off   1:off   2:off   3:off   4:off   5:off   6:off
# /etc/init.d/heartbeat start
# ps auxw | grep heart
root     56081  0.2  0.0  50104  7152 ?     SLs  08:51   0:00 heartbeat: master control process
root     56086  0.0  0.0  49944  6992 ?     SL   08:51   0:00 heartbeat: FIFO reader
root     56087  0.0  0.0  49940  6988 ?     SL   08:51   0:00 heartbeat: write: mcast bond0
root     56088  0.0  0.0  49940  6988 ?     SL   08:51   0:00 heartbeat: read: mcast bond0
496      56091  0.0  0.0  36788  2264 ?     S    08:51   0:00 /usr/lib64/heartbeat/ccm
496      56092  1.3  0.0  74132  5900 ?     S    08:51   0:00 /usr/lib64/heartbeat/cib
root     56093  0.0  0.0  77676  2500 ?     S    08:51   0:00 /usr/lib64/heartbeat/lrmd -r
root     56094  0.0  0.0  69652  8288 ?     SL   08:51   0:00 /usr/lib64/heartbeat/stonithd
496      56095  0.0  0.0  75940  3216 ?     S    08:51   0:00 /usr/lib64/heartbeat/attrd
496      56096  0.1  0.0  81520  4056 ?     S    08:51   0:00 /usr/lib64/heartbeat/crmd
root     56196  0.6  0.0   9724  1852 ?     S    08:51   0:00 /bin/sh /usr/lib/ocf/resource.d//heartbeat/mysql start
root     56977  0.0  0.0 103240   892 pts/1 S+   08:52   0:00 grep heart
Due to a bug, the HA_BIN path in /etc/init.d/heartbeat is wrong, so set it directly (with the wrong path, the `[ -x $HA_BIN/heartbeat ] || exit 0` check makes the script exit silently without starting anything):
https://bugzilla.redhat.com/show_bug.cgi?id=1028127
# vi /etc/init.d/heartbeat
141 HA_BIN=/usr/lib64/heartbeat
142 [ -x $HA_BIN/heartbeat ] || exit 0
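Editing by hand works; the same fix can also be applied non-interactively on each node (a sketch, not from the original):

# sed -i 's|^HA_BIN=.*|HA_BIN=/usr/lib64/heartbeat|' /etc/init.d/heartbeat
# grep '^HA_BIN=' /etc/init.d/heartbeat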
Pacemaker itself is configured through crm.
# cat << 'EOM' > /root/galera.crm
## Cluster-wide settings
# stonith-enabled="false" : do not forcibly power off unreachable servers
#
property stonith-enabled="false" \
    no-quorum-policy="ignore"

## resource-stickiness="INFINITY" : no automatic failback
# fail over after a single failure
rsc_defaults resource-stickiness="INFINITY" \
    migration-threshold="1"

## VIP definition
primitive p_ip_mysql_galera ocf:heartbeat:IPaddr2 \
    params nic="bond0" iflabel="galera" \
        ip="10.0.0.10" cidr_netmask="16"

## Define the MySQL resource
primitive p_mysql ocf:heartbeat:mysql \
    params config="/etc/my.cnf" \
        pid="/var/run/mysqld/mysqld.pid" \
        socket="/var/lib/mysql/mysql.sock" \
        binary="/usr/bin/mysqld_safe" \
    op monitor interval="20s" timeout="30s" \
    op start interval="0" timeout="120s" \
    op stop interval="0" timeout="120s"

## Clone the MySQL resource
# one instance per node, on up to three nodes
# target-role : state the resource should be in when started
clone cl_mysql p_mysql \
    meta interleave="true" clone-max="3" clone-node-max="1" target-role="Started"

## Location scores: the higher score wins
# galera-1 is preferred for the VIP
location rsc_location-1 p_ip_mysql_galera \
    rule 300: #uname eq galera-1 \
    rule 200: #uname eq galera-2 \
    rule 100: #uname eq galera-3

## Colocation constraint: the VIP runs on the same node as MySQL
colocation c_ip_galera_on_mysql \
    inf: p_ip_mysql_galera cl_mysql

## Start order: bring up MySQL before the VIP
order c_galera_order 0: cl_mysql p_ip_mysql_galera
EOM
# crm configure load update /root/galera.crm
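To double-check what was actually loaded into the CIB:

# crm configure show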
This page explains the crm configuration in detail.
The cluster state can be watched in real time:
# crm_mon
============
Last updated: Thu Sep 11 09:59:02 2014
Stack: Heartbeat
Current DC: galera-1 (61383def-55fa-472c-a75a-f8655ea458ea) - partition with quorum
Version: 1.0.13-a83fae5
3 Nodes configured, unknown expected votes
2 Resources configured.
============

Online: [ galera-1 galera-2 galera-3 ]

p_ip_mysql_galera   (ocf::heartbeat:IPaddr2):   Started galera-1
Clone Set: cl_mysql
    Started: [ galera-1 galera-2 galera-3 ]
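crm_mon keeps refreshing until interrupted; for a one-shot snapshot that also lists failure counts (handy when checking from scripts), it can be run as:

# crm_mon -1 -f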
When MySQL dies on galera-1 (the monitor reports it as not running), that clone instance is stopped and the VIP fails over to galera-2:

Online: [ galera-1 galera-2 galera-3 ]

p_ip_mysql_galera   (ocf::heartbeat:IPaddr2):   Started galera-2
Clone Set: cl_mysql
    Started: [ galera-2 galera-3 ]
    Stopped: [ p_mysql:0 ]

Failed actions:
    p_mysql:0_monitor_20000 (node=galera-1, call=30, rc=7, status=complete): not running
Once MySQL on galera-1 is healthy again, clear the recorded failure (with migration-threshold="1" the resource otherwise stays off that node):

# crm resource cleanup cl_mysql
After the cleanup the clone runs on all three nodes again; because of resource-stickiness="INFINITY" the VIP stays on galera-2 rather than failing back:

Online: [ galera-1 galera-2 galera-3 ]

p_ip_mysql_galera   (ocf::heartbeat:IPaddr2):   Started galera-2
Clone Set: cl_mysql
    Started: [ galera-1 galera-2 galera-3 ]
Next, confirm that when galera-2 goes OFFLINE the VIP moves back to galera-1, the node with the highest location score.
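The original does not show how galera-2 was taken down; one simple way to reproduce it is to stop Heartbeat on that node:

galera-2# /etc/init.d/heartbeat stop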
Online: [ galera-1 galera-3 ]
OFFLINE: [ galera-2 ]

p_ip_mysql_galera   (ocf::heartbeat:IPaddr2):   Started galera-1
Clone Set: cl_mysql
    Started: [ galera-1 galera-3 ]
    Stopped: [ p_mysql:2 ]
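As an extra sanity check (not in the original), the VIP can also be verified at the OS level on galera-1; IPaddr2 adds it to the interface named in the crm configuration, labelled with the configured iflabel:

galera-1# ip addr show bond0 | grep 10.0.0.10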