Once you have configured the Solaris cluster, you have to add a quorum device to provide an additional quorum vote. Without a quorum device, the cluster stays in install mode; you can verify this using the “cluster show -t global | grep installmode” command. Each node in a configured cluster has one (1) quorum vote, and the cluster needs a majority of the possible votes to stay up; on a two-node cluster that means a minimum of two votes. If one node goes down, the cluster can no longer get two votes, so it panics the surviving node as well to avoid data corruption on the shared storage. To avoid this situation, we can configure a small SAN disk as a quorum device, which contributes one more vote. That way there are three possible votes, and if one node fails, the surviving node still gets two votes (its own plus the quorum device's) and the cluster keeps running.
Once you have configured the two-node Solaris cluster, you can start configuring the quorum device.
1. Check the cluster node status.
UASOL1:#clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name                Status
---------                ------
UASOL2                   Online
UASOL1                   Online

UASOL1:#
2. You can see that the cluster is currently in install mode.
# cluster show -t global | grep installmode
  installmode:                                  enabled
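Note: install mode is normally cleared automatically when the first quorum device is added (step 9 below). As a hedged sketch, this is how you could re-check the property and, on a cluster that genuinely needs no quorum device, clear it by hand:

UASOL1:# cluster show -t global | grep installmode    # re-check the installmode property
UASOL1:# cluster set -p installmode=disabled          # manual reset; only for clusters that do not need a quorum device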
3. Check the current cluster quorum status.
UASOL1:#clq status

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            1        1         1

--- Quorum Votes by Node (current status) ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
UASOL2          1             1              Online
UASOL1          0             0              Online

UASOL1:#
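You can also list the configured quorum devices and their types; at this stage only the node votes exist, since no quorum device has been added yet. A hedged check:

UASOL1:# clquorum list -v      # lists quorum devices and their types; only the two nodes should appear at this point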
4. Make sure a small LUN from SAN is assigned to both cluster nodes.
UASOL1:#echo |format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL1:#
5. Label the disk and set a volume name.
UASOL1:#format c1t1d0
selecting c1t1d0: quorum
[disk formatted]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> fdisk
The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> volname quorum
format> quit
UASOL1:#
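If you want to confirm the new partition table before continuing, prtvtoc can print the VTOC of the slice; a quick hedged check (device path taken from the example above):

UASOL1:# prtvtoc /dev/rdsk/c1t1d0s2    # prints the partition table written by format/fdisk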
6. You can see the same LUN on the UASOL2 node as well.
UASOL2:#echo |format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32>  quorum
          /pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL2:#
7. Populate the disks in Solaris cluster so that DID devices are created for the new LUN.
UASOL2:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL2:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL2:#

UASOL1:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL1:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL1:#
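Optionally, you can run a DID consistency check on each node before proceeding; a hedged sketch:

UASOL1:# cldevice check    # compares the kernel device view with the DID configuration
UASOL2:# cldevice check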
8. Check the device status.
UASOL1:#cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  UASOL2:/dev/rdsk/c1t0d0
d1                  UASOL1:/dev/rdsk/c1t0d0
d4                  UASOL2:/dev/rdsk/c1t1d0
d4                  UASOL1:/dev/rdsk/c1t1d0

UASOL1:#cldev show d4

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                               UASOL1:/dev/rdsk/c1t1d0
  Full Device Path:                               UASOL2:/dev/rdsk/c1t1d0
  Replication:                                    none
  default_fencing:                                global

UASOL1:#
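If you are more used to the legacy Sun Cluster commands, scdidadm shows the same DID-to-path mapping; a hedged equivalent:

UASOL1:# scdidadm -L    # legacy command; lists DID instances and their device paths on all nodes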
9. Add d4 as a quorum device to the cluster.
UASOL1:#clquorum add d4
UASOL1:#
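To verify the new quorum device configuration right away, you can display it; a hedged check (device name from the step above):

UASOL1:# clquorum show d4    # shows the type (shared_disk), enabled state and access mode of the new quorum device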
10. Check the quorum status.
UASOL1:#clq status

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            2        3         3

--- Quorum Votes by Node (current status) ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
UASOL2          1             1              Online
UASOL1          1             1              Online

--- Quorum Votes by Device (current status) ---

Device Name       Present      Possible      Status
-----------       -------      --------      ------
d4                1            1             Online

UASOL1:#
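Since the cluster now has a quorum device, install mode should have been cleared automatically; you can re-check it with the same command from step 2:

UASOL1:# cluster show -t global | grep installmode    # expected to report "disabled" now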
We have successfully configured the quorum device on a two-node Solaris Cluster 3.3u2.
How can we test whether the quorum device is working?
Just reboot any one of the nodes and watch the voting status.
UASOL2:#reboot
updating /platform/i86pc/boot_archive
Connection to UASOL2 closed by remote host.
Connection to UASOL2 closed.

UASOL1:#clq status

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            2        2         3

--- Quorum Votes by Node (current status) ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
UASOL2          0             1              Offline
UASOL1          1             1              Online

--- Quorum Votes by Device (current status) ---

Device Name       Present      Possible      Status
-----------       -------      --------      ------
d4                1            1             Online

UASOL1:#
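Once UASOL2 finishes booting and rejoins the cluster, the vote counts should return to 3 present / 3 possible; a hedged re-check:

UASOL1:# clnode status    # UASOL2 should come back Online
UASOL1:# clq status       # present votes should return to 3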
We can see that UASOL1 was not panicked by the cluster, so the quorum device worked well: UASOL1's own vote plus the quorum device's vote still meet the required two votes.
If you don't have real SAN storage for the shared LUN, you can use Openfiler to present an iSCSI LUN to both nodes.
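For example, after publishing an iSCSI LUN from Openfiler, the Solaris iSCSI initiator on each node can discover it roughly like this (a hedged sketch; the target address 192.168.2.50:3260 is a placeholder for your Openfiler server):

UASOL1:# iscsiadm add discovery-address 192.168.2.50:3260    # point the initiator at the Openfiler target portal
UASOL1:# iscsiadm modify discovery --sendtargets enable      # enable SendTargets dynamic discovery
UASOL1:# devfsadm -i iscsi                                   # create device nodes for the discovered LUN
UASOL1:# echo | format                                       # the new LUN should now show up; repeat on UASOL2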
What's next? We will configure a resource group for a failover local zone and test it.
Biman Chandra Roy says
Hi Lingesh,
I am just experimenting with a 2-node cluster in VirtualBox. I can share storage between the 2 VMs (Sol11) and can see the same disk, even its label, in both VMs. Even though quorum is set as per the command output below, each node panics when the other node goes down. And both panic when a 2nd quorum disk is added.
Why so?
Regards
=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            2        2         3

--- Quorum Votes by Node (current status) ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
sol4            1             1              Online
sol3            1             1              Online

--- Quorum Votes by Device (current status) ---

Device Name       Present      Possible      Status
-----------       -------      --------      ------
d3                1            1             Online
Giri says
Worthwhile notes…
srinivas says
I added the quorum device as per your article, but it is in an offline state.
I tried to enable it, and I removed it and added it again, but it remains in the same state.
Can you suggest how to bring it online?
Lingeswaran R says
Is your quorum device coming from SAN storage? Is that disk SCSI-3 persistent reservation compliant?
Lingesh
srinivas says
I configured Cluster 3.3 in my Oracle VirtualBox.
I created an HDD with the (multi-attached) option and added it as a quorum device, but in clq status it is showing offline.
Lingeswaran R says
That will not work. You must have SAN storage or iSCSI LUNs with SCSI-3 PR (just google for SCSI-3 PR).
Regards
Lingesh
poobalan says
When I add the quorum disk in Sun Cluster, it throws an I/O error. The shared LUN mapped to both nodes is coming from Openfiler.