Configure Red Hat Cluster using VMware, Quorum Disk, GFS2, Openfiler
POSTED BY DEEPAK PRASAD
In this article I will show you a step by step guide to install and configure Red Hat Cluster using VMware Workstation 10.

These are the things I will be using in my lab setup:
VMware Workstation 10 (any version above 8 is fine)
CentOS 6.5 - 64 bit (you can use either 32 or 64 bit; for versions below 6.0 some rpms and packages will differ)
Openfiler 2.99 - 64 bit
Brief intro of what we are trying to accomplish
1. Configure a 2 node Red Hat Cluster using CentOS 6.5 (64 bit)
2. One node will be used for management of the cluster with luci, using CentOS 6.5 (64 bit)
3. Openfiler will be used to configure a shared iSCSI storage for the cluster
4. Configure failover for both the nodes
5. Configure a quorum disk with one vote to test the failover
6. Create a common service GFS2 which will run on any one node of our cluster, with a failover policy
NOTE: I will not be able to configure fencing-related settings as fencing is not supported on VMware. For more information please visit this site: Fence Device and Agent Information for Red Hat Enterprise Linux.

IMPORTANT NOTE: In this article I will not be able to explain all the terms used in detail; for that you can always refer to the official Red Hat guide on Cluster Administration for further clarification.
Lab Setup

2 nodes with CentOS 6.5 - 64 bit
Node 1
Hostname: node1.cluster
IP Address: 192.168.1.5
Node 2
Hostname: node2.cluster
IP Address: 192.168.1.6

1 node for the management interface with CentOS 6.5 - 64 bit
Node 3
Hostname: node3.mgmt
IP Address: 192.168.1.7

Openfiler
Hostname: of.storage
IP Address: 192.168.1.8
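Both luci and the cluster stack rely on hostname resolution, so each machine should know every other machine by name. Below is a minimal sketch of the /etc/hosts entries for this lab (the short aliases are my own assumption). The snippet writes to a scratch file so it can be checked safely; on the real machines you would append these lines to /etc/hosts on every node.

```shell
# Hypothetical /etc/hosts entries for the lab machines above.
# Written to a scratch file here; append them to /etc/hosts on every node.
cat > /tmp/cluster-hosts <<'EOF'
192.168.1.5   node1.cluster   node1
192.168.1.6   node2.cluster   node2
192.168.1.7   node3.mgmt      node3
192.168.1.8   of.storage      of
EOF
cat /tmp/cluster-hosts
```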
1 of 16
07/18/2014 02:50 AM
http://www.golinuxhub.com/2014/02/configure-red-ha...
Before moving on to the configuration of the cluster and cluster nodes, let us prepare our openfiler with iSCSI storage.

Log in to the web console of your openfiler storage (assuming that you have successfully installed openfiler with sufficient free space for cluster storage).
I have written one more article on the configuration of openfiler which you can use for reference if you face any issues following me here, as I will be very brief:
Configuring iSCSI storage using openfiler
Create a new partition for the available disk with the options shown below. Specify a cylinder value for the end of the partition.
Next, create 2 Logical Volumes with custom sizes as per your requirement. In my case I will create two volumes:
1. quorum, with size 1400 MB (a quorum disk does not require more than 1 GB of disk space)
2. SAN, with all the remaining space, which will be used for the GFS2 filesystem in our cluster
On the home page of the System tab, create an ACL for the subnet which will access the openfiler storage. In my case the subnet is 192.168.1.0, so I will add a new entry for it with the appropriate subnet mask.
Next, add an iSCSI target for the first disk, i.e. the quorum volume. You can edit the iSCSI target value with a custom name, as I have done in my case, so that it becomes easier to understand.
Next, map the volume to the iSCSI target. For the quorum target, select the quorum partition and click on Map as shown below.

Repeat the same steps for the SAN volume as we did for the quorum volume above. Edit the target value as shown below, then map the volume to the iSCSI target as shown in the figure below. Be sure to map the correct volume.

Finally, allow the ACL for that particular target in the Network ACL section.
What is Conga?
Conga is an integrated set of software components that provides centralized configuration and
management of Red Hat clusters and storage. Conga provides the following major features:
One Web interface for managing cluster and storage
Automated Deployment of Cluster Data and Supporting Packages
Easy Integration with Existing Clusters
No Need to Re-Authenticate
Integration of Cluster Status and Logs
Fine-Grained Control over User Permissions
The primary components in Conga are luci and ricci, which are separately installable. luci is a server that runs on one computer and communicates with multiple clusters and computers via ricci. ricci is an agent that runs on each computer (either a cluster member or a standalone computer) managed by Conga.
On node3:
Run the below command to install all the Clustering related packages
[root@node3 ~]# yum groupinstall "High Availability Management" "High Availability"
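On node1 and node2 the cluster packages and the iSCSI initiator are needed as well, and the openfiler portal has to be discovered from each node. The exact commands for that step are not preserved here, so the sketch below is my reconstruction: a function that only prints the command sequence (192.168.1.8 is the openfiler address from the lab setup); run the printed commands as root on each node.

```shell
# Reconstruction (assumption) of the initiator-side steps for node1 and node2.
# The function only prints the commands; run them as root on each node.
iscsi_setup_cmds() {
    local portal="$1"
    echo "yum groupinstall 'High Availability'"
    echo "yum install iscsi-initiator-utils"
    echo "iscsiadm -m discovery -t sendtargets -p ${portal}:3260"
    echo "iscsiadm -m node --login"
}
iscsi_setup_cmds 192.168.1.8
```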
As you can see, as soon as we gave the discovery command with the openfiler IP address, the iSCSI targets configured on openfiler were discovered automatically.
Now restart the iscsi service once again to refresh the settings

[root@node1 ~]# service iscsi restart
Stopping iscsi:                                            [  OK  ]
Starting iscsi:                                            [  OK  ]
On node2
[root@node2 ~]# mkqdisk -c /dev/sdb -l quorum
mkqdisk v3.0.12.1
Writing new quorum disk label 'quorum' to /dev/sdb.
WARNING: About to destroy all data on /dev/sdb; proceed [N/y] ? y
Warning: Initializing previously initialized partition
Initializing status block for node 1...
Initializing status block for node 2...
Initializing status block for node 3...
Initializing status block for node 4...
Initializing status block for node 5...
Initializing status block for node 6...
Initializing status block for node 7...
Initializing status block for node 8...
Initializing status block for node 9...
Initializing status block for node 10...
Initializing status block for node 11...
Initializing status block for node 12...
Initializing status block for node 13...
Initializing status block for node 14...
Initializing status block for node 15...
Initializing status block for node 16...
Explanation:
Filesystem format: GFS2
Locking Protocol: lock_dlm
Cluster Name: cluster1
Filesystem name: GFS
Journals: 2
Partition: /dev/sdc
Run the below command on both the nodes (formatting once from either node is actually sufficient since the disk is shared; the second run simply re-formats it, which is why two different UUIDs appear)

[root@node1 ~]# mkfs.gfs2 -p lock_dlm -t cluster1:GFS -j 2 /dev/sdc
This will destroy any data on /dev/sdc.
It appears to contain: Linux GFS2 Filesystem (blocksize 4096, lockproto lock_dlm)
Are you sure you want to proceed? [y/n] y

Device:                    /dev/sdc
Blocksize:                 4096
Journals:                  2
Resource Groups:           42
Locking Protocol:          "lock_dlm"
Lock Table:                "cluster1:GFS"
UUID:                      2ff81375-31f9-c57d-59d1-7573cdfaff42

The same command on node2 produces the same layout with a new UUID (9b1cae02-c357-3634-51a3-d5c35e79ab58).
Next, set a password for the ricci agent on both the nodes and start the service

[root@node2 ~]# passwd ricci
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

[root@node2 ~]# /etc/init.d/ricci start
Starting oddjobd:                                          [  OK  ]
Starting ricci:                                            [  OK  ]

On node3, start luci

[root@node3 ~]# /etc/init.d/luci start
Start luci...                                              [  OK  ]
Point your web browser to https://node3.mgmt:8084 (or equivalent) to access luci.

Click on Create and provide the following details for the cluster
Cluster Name: cluster1 (as provided above while formatting GFS2)
Node Name: node1.cluster (192.168.1.5) (make sure the hostname is resolvable)
Node Name: node2.cluster (192.168.1.6) (make sure the hostname is resolvable)
Password: as provided for the agent ricci in Step 6
Check the Shared storage box as we are using GFS2
Once you click on Submit, luci will start the procedure to add the nodes (if everything goes correctly; otherwise it will throw an error).

Now the nodes are added, but they are shown in red color. Let us check the reason behind it: click on any of the nodes for more details.

So the reason looks like most of the cluster services are not running. Let us log in to the console and start the services.
Stopping the stale cluster services first:

Stopping gfs_controld...                                   [  OK  ]
Stopping dlm_controld...                                   [  OK  ]
Stopping fenced...                                         [  OK  ]
Stopping cman...                                           [  OK  ]
Unmounting configfs...                                     [  OK  ]

IMPORTANT NOTE: If you are planning to configure Red Hat Cluster, make sure the NetworkManager service is not running

[root@node1 ~]# service NetworkManager stop
Stopping NetworkManager daemon:                            [  OK  ]

Now start the cluster services (do the same on node2):

[root@node1 ~]# service cman start
Starting cluster:
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
Now once all the services have started, let us refresh the web console and see the changes. All the services are running and there is no more warning message on either the cluster or the nodes.
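For the cluster to survive a reboot, the same services must come up automatically. Below is a sketch of the chkconfig calls I would expect here; the service list is an assumption based on the daemons started above, and the snippet only prints the commands (run the printed commands as root on node1 and node2).

```shell
# Assumption: enable the cluster-related services at boot on node1 and node2.
# Printed rather than executed so the list is easy to review first.
for svc in cman rgmanager ricci modclusterd; do
    echo "chkconfig $svc on"
done
```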
Click on Configure from the tab menu as shown below and select QDisk. Fill in the details as shown below
Check the box "Use a Quorum Disk"
Provide the label name used while formatting the quorum disk in Step 4
Provide the command to be run to check the quorum status between all the nodes, and the interval time
Click on Apply once done

If everything goes fine you should be able to see the below message
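Under the hood, luci writes these settings to /etc/cluster/cluster.conf on both nodes. A sketch of roughly what the quorum-disk stanza looks like (the ping heuristic, its target address, and the timings are assumptions for illustration, not values taken from this setup):

```xml
<quorumd label="quorum" interval="1" tko="10" votes="1">
        <heuristic program="ping -c1 192.168.1.1" interval="2" score="1"/>
</quorumd>
```

The label attribute must match the label given to mkqdisk (quorum), and votes="1" corresponds to the single quorum-disk vote from the introduction.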
Give a name to your failover domain and follow the settings as shown below

Next, select GFS2 from the drop-down menu and fill in the details
Name: give any name
Mount Point: before giving the mount point, make sure it exists on both the nodes

Let us create this mount point on node1 and node2
[root@node1 ~]# mkdir /GFS
[root@node2 ~]# mkdir /GFS

Next fill in the device details which we formatted for GFS2, i.e. /dev/sdc
You will see the below box on your screen. Select the resource we created in Step 11. As soon as you select GFS, all the saved settings under the GFS resource will become visible under the service group section as shown below. Click on Submit to save the changes.
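For reference, the failover domain, GFS2 resource, and service group created through luci end up in /etc/cluster/cluster.conf as something like the following sketch (the domain name, priorities, and recovery policy are assumptions based on the values entered above):

```xml
<rm>
        <failoverdomains>
                <failoverdomain name="GFS_domain" ordered="1" restricted="1">
                        <failoverdomainnode name="node1.cluster" priority="1"/>
                        <failoverdomainnode name="node2.cluster" priority="2"/>
                </failoverdomain>
        </failoverdomains>
        <resources>
                <clusterfs name="GFS" device="/dev/sdc" mountpoint="/GFS" fstype="gfs2"/>
        </resources>
        <service name="GFS" domain="GFS_domain" recovery="relocate">
                <clusterfs ref="GFS"/>
        </service>
</rm>
```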
Once you click on Submit, refresh the web console and you should be able to see the GFS service running on one of the nodes of your cluster as shown below
13. Verification

On node1

[root@node1 ~]# clustat
Cluster Status for cluster1 @ Wed Feb 26 00:49:04 2014
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1.cluster                              1 Online, Local, rgmanager
 node2.cluster                              2 Online, rgmanager
 /dev/block/8:16                            0 Online, Quorum Disk

 Service Name               Owner (Last)               State
 ------- ----               ----- ------               -----
 service:GFS                node1.cluster              started
So, if GFS is running on node1, then it should be mounted on /GFS on node1. Let us verify.
[root@node1 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-root  8.7G  3.4G  5.0G  41% /
tmpfs                      495M   32M  464M   7% /dev/shm
/dev/sda1                  194M   30M  155M  16% /boot
/dev/sr0                   4.2G  4.2G     0 100% /media/CentOS_6.5_Final
/dev/sdc                    11G  518M  9.9G   5% /GFS
On node2

[root@node2 ~]# clustat
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1.cluster                              1 Online, rgmanager
 node2.cluster                              2 Online, Local, rgmanager
 /dev/block/8:16                            0 Online, Quorum Disk

 Service Name               Owner (Last)               State
 ------- ----               ----- ------               -----
 service:GFS                node2.cluster              started

Note that service:GFS can be owned by either node of the failover domain; here it is owned by node2.

[root@node2 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-root  8.7G  3.4G  5.0G  41% /
tmpfs                      495M   26M  470M   6% /dev/shm
/dev/sda1                  194M   30M  155M  16% /boot
/dev/sr0                   4.2G  4.2G     0 100% /media/CentOS_6.5_Final
/dev/sdc                    11G  518M  9.9G   5% /GFS
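With everything green, the failover policy from the introduction can be exercised by relocating the service manually. A sketch of the commands (printed rather than executed; the service and node names are the ones used above):

```shell
# Print the commands for a manual failover test of service:GFS.
# Run the printed commands as root on either cluster node.
failover_test_cmds() {
    echo "clusvcadm -r GFS -m node2.cluster   # relocate service:GFS to node2"
    echo "clustat                             # confirm the new owner"
}
failover_test_cmds
```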
References
Red Hat Enterprise Cluster
Related Articles
Configuring iSCSI storage using openfiler
How to install openfiler
Overview of services used in Red Hat Cluster
4 comments:
dipanjan mukherjee 16 May 2014 16:48:00
This is an excellent step by step guide to RHEL cluster setup.
Reply
Hello Samim,
It also happened with me; in that case, try to re-discover the iSCSI targets and repeat step 3 above a few times. Restarting the iSCSI services on the openfiler will also help you.
Any particular error you are getting for the cman service?
You can configure logging using conga. I will try to write an article on the same.
Thanks
Deepak
Reply