Red Hat High Availability Clustering - RH436
Q1. Configure a 3-node high availability cluster named mycluster using the following nodes:
a. nodea.domainX.example.com
b. nodeb.domainX.example.com
c. nodec.domainX.example.com
Solution:
[root@nodea] # yum install pcs -y
[root@nodea] # systemctl enable pcsd
[root@nodea] # systemctl restart pcsd
[root@nodea] # firewall-cmd --permanent --add-service=high-availability; firewall-cmd --reload
[root@nodea] # echo redhat | passwd --stdin hacluster
[root@nodea] # cat /etc/passwd | grep hacluster
[root@nodeb] # yum install pcs -y
[root@nodeb] # systemctl enable pcsd
[root@nodeb] # systemctl restart pcsd
[root@nodeb] # firewall-cmd --permanent --add-service=high-availability; firewall-cmd --reload
[root@nodeb] # echo redhat | passwd --stdin hacluster
[root@nodeb] # cat /etc/passwd | grep hacluster
[root@nodec] # yum install pcs -y
[root@nodec] # systemctl enable pcsd
[root@nodec] # systemctl restart pcsd
[root@nodec] # yum install -y fence-virt*
[root@nodec] # firewall-cmd --permanent --add-service=high-availability; firewall-cmd --reload
[root@nodec] # echo redhat | passwd --stdin hacluster
[root@nodec] # cat /etc/passwd | grep hacluster
[root@nodea] # pcs cluster auth nodea.domainX.example.com nodeb.domainX.example.com nodec.domainX.example.com -u hacluster -p redhat
[root@nodea] # pcs cluster setup --name mycluster nodea.domainX.example.com nodeb.domainX.example.com nodec.domainX.example.com
[root@nodea] # corosync-quorumtool -s
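Note that corosync-quorumtool -s only reports quorum once the cluster is actually running; if the nodes were not started during setup, a minimal start-and-verify sequence (the same enable/start commands appear again in Q2 below) might be:
[root@nodea] # pcs cluster start --all
[root@nodea] # pcs status corosync
[root@nodea] # corosync-quorumtool -s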
Q2. Enable custom cluster logging. Configure the cluster to send its log to the following node-specific files:
Nodea - /var/log/cluster/cluster-nodea.log
Nodeb - /var/log/cluster/cluster-nodeb.log
Nodec - /var/log/cluster/cluster-nodec.log
Solution:
[root@nodea] # cp /etc/corosync/corosync.conf /etc/corosync/corosync.conf_original
[root@nodea] # vim /etc/corosync/corosync.conf
logging {
to_syslog: yes
to_file: yes
logfile: /var/log/cluster/cluster-nodea.log
}
[root@nodea] # pcs cluster sync
[root@nodeb] # cp /etc/corosync/corosync.conf /etc/corosync/corosync.conf_original
[root@nodeb] # vim /etc/corosync/corosync.conf
logging {
to_syslog: yes
to_file: yes
logfile: /var/log/cluster/cluster-nodeb.log
}
[root@nodec] # cp /etc/corosync/corosync.conf /etc/corosync/corosync.conf_original
[root@nodec] # vim /etc/corosync/corosync.conf
logging {
to_syslog: yes
to_file: yes
logfile: /var/log/cluster/cluster-nodec.log
}
[root@nodea] # pcs cluster enable --all
[root@nodea] # pcs cluster start --all
[root@nodea] # pcs status
[root@nodea] # pcs cluster status
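To confirm that the per-node log files are actually being written (paths as configured above; if /var/log/cluster does not exist on a node it may need to be created first), a quick check could be:
[root@nodea] # grep logfile /etc/corosync/corosync.conf
[root@nodea] # tail /var/log/cluster/cluster-nodea.log
[root@nodeb] # tail /var/log/cluster/cluster-nodeb.log
[root@nodec] # tail /var/log/cluster/cluster-nodec.log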
Q3. Configure fencing for the created cluster. Use fence_xvm fence resources.
Set up fencing so that each node can be fenced with the pcs stonith command.
Example: pcs stonith fence nodea.domainX.example.com
Configure fencing for:
a. nodea.domainX.example.com
b. nodeb.domainX.example.com
c. nodec.domainX.example.com
Get the fence_xvm key from the given link.
Solution:
[root@nodea] # yum install fence-virt* -y
[root@nodea] # firewall-cmd --permanent --add-port=1229/tcp; firewall-cmd --reload
[root@nodea] # mkdir /etc/cluster/
[root@nodea] # wget -O /etc/cluster/fence_xvm.key link/fence_xvm.key
[root@nodeb] # yum install fence-virt* -y
[root@nodeb] # firewall-cmd --permanent --add-port=1229/tcp; firewall-cmd --reload
[root@nodeb] # mkdir /etc/cluster/
[root@nodeb] # wget -O /etc/cluster/fence_xvm.key link/fence_xvm.key
[root@nodec] # yum install fence-virt* -y
[root@nodec] # firewall-cmd --permanent --add-port=1229/tcp; firewall-cmd --reload
[root@nodec] # mkdir /etc/cluster/
[root@nodec] # wget -O /etc/cluster/fence_xvm.key link/fence_xvm.key
[root@nodea] # pcs stonith create nodea fence_xvm port="nodea" pcmk_host_list="nodea.domainX.example.com" key_file="/etc/cluster/fence_xvm.key" action="reboot" ipport="1229"
[root@nodea] # pcs stonith create nodeb fence_xvm port="nodeb" pcmk_host_list="nodeb.domainX.example.com" key_file="/etc/cluster/fence_xvm.key" action="reboot" ipport="1229"
[root@nodea] # pcs stonith create nodec fence_xvm port="nodec" pcmk_host_list="nodec.domainX.example.com" key_file="/etc/cluster/fence_xvm.key" action="reboot" ipport="1229"
[root@nodea] # pcs status
[root@nodea] # pcs cluster status
[root@nodea] # pcs stonith show
[root@nodea] # pcs stonith fence nodeb
[root@nodea] # corosync-quorumtool -s
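Before fencing a live node, the fence agent can be tested read-only from any cluster node; fence_xvm -o list asks the hypervisor's fence_virtd for the list of domains it can fence (key path as used in the stonith resources above):
[root@nodea] # fence_xvm -o list -k /etc/cluster/fence_xvm.key
If the cluster node names appear in the output, the key and the network path to fence_virtd are working.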
Manual setup of the fence_xvm server (on the hypervisor):
[root@foundationX ~] # yum install fence-virt* -y
[root@foundationX ~] # systemctl enable fence_virtd.service
[root@foundationX ~] # systemctl start fence_virtd.service
[root@foundationX ~] # systemctl restart fence_virtd.service
[root@foundationX ~] # mkdir /etc/cluster
[root@foundationX ~] # cd /etc/cluster/
[root@foundationX ~] # dd if=/dev/urandom of=fence_xvm.key bs=4k count=1
[root@foundationX ~] # ls
[root@foundationX ~] # fence_virtd -c
(press Enter to accept the defaults, adjusting the listener interface if the default is not the one that reaches the cluster nodes)
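If the key is not published over HTTP for the wget step used earlier, it can be copied to the nodes with scp instead, and the fence_virtd multicast listener port must be reachable on the hypervisor. A hedged sketch (host names assumed to resolve):
[root@foundationX ~] # firewall-cmd --permanent --add-port=1229/udp; firewall-cmd --reload
[root@foundationX ~] # scp /etc/cluster/fence_xvm.key root@nodea.domainX.example.com:/etc/cluster/
[root@foundationX ~] # scp /etc/cluster/fence_xvm.key root@nodeb.domainX.example.com:/etc/cluster/
[root@foundationX ~] # scp /etc/cluster/fence_xvm.key root@nodec.domainX.example.com:/etc/cluster/
[root@foundationX ~] # systemctl restart fence_virtd.service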
Q4. Configure the iSCSI client-
Use the following client initiator names:
1. nodea - iqn.2020-05.com.example.domainX:nodea
2. nodeb - iqn.2020-05.com.example.domainX:nodeb
3. nodec - iqn.2020-05.com.example.domainX:nodec
The iSCSI block device should connect automatically when the nodes reboot.
Solutions:
[root@nodea] # yum install iscsi-initiator-utils -y
[root@nodea] # vim /etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 5
[root@nodea] # vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2020-05.com.example.domainX:nodea
[root@nodea] # systemctl enable iscsi
[root@nodea] # systemctl start iscsi
[root@nodea] # systemctl restart iscsi iscsid
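The same steps are repeated on nodeb and nodec with their own initiator names from the question; a quick check that the name is set and the services will come up at boot might be:
[root@nodea] # cat /etc/iscsi/initiatorname.iscsi
[root@nodea] # systemctl is-enabled iscsi
[root@nodea] # systemctl status iscsid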
Q5. Configure iSCSI storage-
Configure iSCSI on every node so that each node can connect to the shared block storage device from storage.domainX.example.com using the target iqn.2020-05.com.example.domainX:storage
Or
Configure iSCSI on every node so that each node can connect to the shared block storage device from the networks 172.24.11.0 and 172.16.11.0 using the target iqn.2020-05.com.example.domainX:storage
Solutions:
[root@nodea] # iscsiadm -m discovery -t st -p 172.24.11.5
[root@nodea] # iscsiadm -m discovery -t st -p 172.16.11.5
[root@nodea] # systemctl restart iscsi iscsid
[root@nodea] # lsblk
[root@nodeb] # iscsiadm -m discovery -t st -p 172.24.11.5
[root@nodeb] # iscsiadm -m discovery -t st -p 172.16.11.5
[root@nodeb] # systemctl restart iscsi iscsid
[root@nodeb] # lsblk
[root@nodec] # iscsiadm -m discovery -t st -p 172.24.11.5
[root@nodec] # iscsiadm -m discovery -t st -p 172.16.11.5
[root@nodec] # systemctl restart iscsi iscsid
[root@nodec] # lsblk
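If the discovered target does not log in automatically after the service restart, an explicit login against the target IQN from the question can be used, followed by a session and block device check (a sketch; target name as given above):
[root@nodea] # iscsiadm -m node -T iqn.2020-05.com.example.domainX:storage -l
[root@nodea] # iscsiadm -m session
[root@nodea] # lsblk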
Q6. Configure multipathing for /dev/mapper/mpatha.
Configure a redundant multipath device /dev/mapper/mpatha on top of the iSCSI block devices.
Configure multipath so that it can be managed by the cluster.
Solution:
[root@nodea] # yum install device-mapper-multipath -y
[root@nodea] # mpathconf --enable
[root@nodea] # /usr/lib/udev/scsi_id -g -u /dev/sdN
[root@nodea] # vim /etc/multipath.conf
multipaths {
    multipath {
        wwid WWID
        alias mpatha
        path_grouping_policy failover
    }
}
blacklist {
    devnode "^vd[a-z]"
}
[root@nodea] # systemctl enable multipathd.service
[root@nodea] # systemctl start multipathd.service
[root@nodea] # lsblk
[root@nodea] # multipath -ll
[root@nodeb] # yum install device-mapper-multipath -y
[root@nodeb] # mpathconf --enable
[root@nodeb] # vim /etc/multipath.conf
multipaths {
    multipath {
        wwid WWID
        alias mpatha
        path_grouping_policy failover
    }
}
blacklist {
    devnode "^vd[a-z]"
}
[root@nodeb] # systemctl enable multipathd
[root@nodeb] # systemctl start multipathd
[root@nodeb] # lsblk
[root@nodeb] # multipath -ll
[root@nodec] # yum install device-mapper-multipath -y
[root@nodec] # mpathconf --enable
[root@nodec] # vim /etc/multipath.conf
multipaths {
    multipath {
        wwid WWID
        alias mpatha
        path_grouping_policy failover
    }
}
blacklist {
    devnode "^vd[a-z]"
}
[root@nodec] # systemctl enable multipathd
[root@nodec] # systemctl start multipathd
[root@nodec] # lsblk
[root@nodec] # multipath -ll
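Once multipathd is running on all three nodes, the device should show up under the alias required by the question; for example:
[root@nodea] # ls -l /dev/mapper/mpatha
[root@nodea] # multipath -ll mpatha
The -ll output should list both iSCSI paths (one per discovery portal) in a failover path group.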
Q7. Configure a logical volume on /dev/mapper/mpatha.
Create a 1024 MiB logical volume.
The name of the logical volume is clusterdata.
The clusterdata volume should be created in the clustervg volume group.
Solutions:
[root@nodea] # yum install dlm lvm2-cluster -y
[root@nodea] # lvmconf --enable-cluster
[root@nodea] # systemctl stop lvm2-lvmetad
[root@nodea] # pcs property set no-quorum-policy=freeze
[root@nodea] # pcs resource create dlm controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@nodea] # pcs resource status
[root@nodea] # pcs resource create clvmd clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@nodea] # pcs resource status
[root@nodea] # pcs status
[root@nodea] # pcs constraint order start dlm-clone then clvmd-clone
[root@nodea] # pcs constraint colocation add clvmd-clone with dlm-clone
[root@nodea] # pcs constraint show
[root@nodea] # pvcreate /dev/mapper/mpatha
[root@nodea] # pvs
[root@nodea] # vgcreate -Ay -cy clustervg /dev/mapper/mpatha
[root@nodea] # vgs
[root@nodea] # lvcreate -L 1G -n clusterdata clustervg
[root@nodea] # lvs
[root@nodea] # lsblk
[root@nodeb] # yum install dlm lvm2-cluster -y
[root@nodeb] # lvmconf --enable-cluster
[root@nodeb] # systemctl stop lvm2-lvmetad
[root@nodeb] # lsblk
[root@nodec] # yum install dlm lvm2-cluster -y
[root@nodec] # lvmconf --enable-cluster
[root@nodec] # systemctl stop lvm2-lvmetad
[root@nodec] # lsblk
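Because clustervg was created clustered (-cy) and clvmd runs as a clone, the volume group and logical volume should be visible from every node; a quick cross-node check:
[root@nodeb] # vgs clustervg
[root@nodeb] # lvs clustervg
[root@nodec] # lvs clustervg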
Q8. Configure GFS2 shared storage-
Format the logical volume clusterdata with GFS2, using one more journal in the file system than there are cluster nodes.
Mount the logical volume clusterdata on the directory /var/www/ on each cluster node, managed by the cluster.
Solutions:
[root@nodea] # yum install gfs2-utils -y
[root@nodea] # mkfs.gfs2 -j4 -p lock_dlm -t mycluster:clusterdata /dev/clustervg/clusterdata
[root@nodea] # pcs resource create clusterfs Filesystem device=/dev/clustervg/clusterdata directory=/var/www/ fstype=gfs2 options=noatime op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@nodea] # pcs status
[root@nodea] # pcs constraint order start clvmd-clone then clusterfs-clone
[root@nodea] # pcs constraint colocation add clusterfs-clone with clvmd-clone
[root@nodea] # pcs status
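Once the clusterfs-clone resource is started, the GFS2 file system should be mounted on /var/www on every node at the same time; for example:
[root@nodea] # pcs resource show clusterfs-clone
[root@nodeb] # mount | grep gfs2
[root@nodec] # df -h /var/www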
Q9. Configure a high availability service-
The cluster group name should be clustergroup
The cluster should provide a high availability web service
The high availability web service should be reachable at 172.24.11.20
The GFS2 shared storage should be the document root directory of the virtual host
The web service should serve an index.html file.
Solutions:
[root@nodea] # yum install httpd -y
[root@nodea] # firewall-cmd --permanent --add-service=http; firewall-cmd --reload
[root@nodeb] # yum install httpd -y
[root@nodeb] # firewall-cmd --permanent --add-service=http; firewall-cmd --reload
[root@nodec] # yum install httpd -y
[root@nodec] # firewall-cmd --permanent --add-service=http; firewall-cmd --reload
[root@nodea] # restorecon -R -v /var/www
[root@nodea] # cd /var/www/
[root@nodea] # wget link
[root@nodea] # pcs resource create webip IPaddr2 ip="172.25.X.80" cidr_netmask=24 --group clustergroup
[root@nodea] # pcs resource create webservice apache --group clustergroup
[root@nodea] # pcs status
[root@nodea] # pcs resource show
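The group can then be tested end to end by requesting the page through the floating IP created above (the same address used in the webip resource):
[root@nodea] # pcs resource show clustergroup
[root@nodea] # curl http://172.25.X.80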
Q10. Configure cluster monitoring-
On each cluster event the cluster should send mail using the cluster monitoring agent
to email: [email protected] with subject: ClusterAlert
Solutions:
[root@nodea] # pcs resource create webmail MailTo email="[email protected]" subject="ClusterAlert" --group clustergroup
[root@nodea] # pcs status
[root@nodea] # pcs constraint order start clusterfs-clone then clustergroup
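If the MailTo parameters need to be checked or adjusted, pcs can print the agent's accepted options, and the running resource can be inspected:
[root@nodea] # pcs resource describe MailTo
[root@nodea] # pcs resource show webmail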
Q11. Configure cluster group behavior-
Configure clustergroup so that it runs on nodea.domainX.example.com
When nodea is down it can run on nodec, and it should move back to nodea when nodea becomes available again
It must never run on nodeb.domainX.example.com
Solution:
[root@nodea] # pcs constraint location clustergroup prefers nodea.domainX.example.com
[root@nodea] # pcs constraint location clustergroup avoids nodeb.domainX.example.com
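The prefer/avoid behaviour can be exercised without fencing anything by putting nodea into standby and then bringing it back; a sketch using standard pcs standby commands:
[root@nodea] # pcs constraint location show
[root@nodea] # pcs cluster standby nodea.domainX.example.com
[root@nodea] # pcs status
[root@nodea] # pcs cluster unstandby nodea.domainX.example.com
[root@nodea] # pcs status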