This article shows how to create a resource group on Solaris Cluster and add a couple of resources to it. A resource group is similar to a service group in Veritas Cluster: it bundles resources into one logical unit. Once you have configured the two-node Solaris cluster and added the quorum device, you can create a resource group. After creating the resource group, we will add a zpool storage resource and perform a failover test.
1. Log in to one of the cluster nodes as root and check the cluster node status.
UASOL1:#clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name    Status
---------    ------
UASOL2       Online
UASOL1       Online
UASOL1:#
2. Check the heartbeat (cluster interconnect) link status.
UASOL1:#clinterconnect status

=== Cluster Transport Paths ===

Endpoint1         Endpoint2         Status
---------         ---------         ------
UASOL2:e1000g2    UASOL1:e1000g2    Path online
UASOL2:e1000g1    UASOL1:e1000g1    Path online
UASOL1:#
3. Check the quorum status.
UASOL1:#clq status

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

Needed   Present   Possible
------   -------   --------
2        3         3

--- Quorum Votes by Node (current status) ---

Node Name    Present   Possible   Status
---------    -------   --------   ------
UASOL2       1         1          Online
UASOL1       1         1          Online

--- Quorum Votes by Device (current status) ---

Device Name   Present   Possible   Status
-----------   -------   --------   ------
d5            1         1          Online
UASOL1:#
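The "Needed" count follows the usual majority rule: the cluster must hold more than half of the possible votes to stay up. A quick sanity check of that arithmetic (a sketch; the vote counts are taken from the output above):

```shell
#!/bin/sh
# Majority rule used for Solaris Cluster quorum:
#   needed = (possible votes / 2) + 1, with integer division.
possible=3                       # 2 node votes + 1 quorum device vote (d5)
needed=$(( possible / 2 + 1 ))
echo "needed=$needed"            # matches the 'Needed' column above
```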
4. Everything looks fine in the above output, so let me create a resource group.
UASOL1:#clrg create UA-HA-ZRG
UASOL1:#
5. Check the resource group status.
UASOL1:#clrg status

=== Cluster Resource Groups ===

Group Name   Node Name   Suspended   Status
----------   ---------   ---------   ------
UA-HA-ZRG    UASOL2      No          Unmanaged
             UASOL1      No          Unmanaged
UASOL1:#
We have successfully created the resource group on the Solaris cluster.
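In scripts it is handy to pull just the per-node status of a group out of `clrg status` output. A minimal parsing sketch; the sample text below is the output shown above, and the awk column logic is my assumption about the layout, not a cluster API:

```shell
#!/bin/sh
# Extract "node status" pairs for one resource group from 'clrg status'-style
# output. In real use, replace the sample text with: clrg status UA-HA-ZRG
sample='UA-HA-ZRG      UASOL2      No          Unmanaged
               UASOL1      No          Unmanaged'

result=$(echo "$sample" | awk -v rg="UA-HA-ZRG" '
    $1 == rg     { grp = 1; print $2, $4; next }  # line that names the group
    grp && NF==3 { print $1, $3; next }           # continuation line, next node
                 { grp = 0 }                      # anything else ends the block
')
echo "$result"
```

For the sample above this prints one "node status" pair per node, which is easy to feed into monitoring checks.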
Next, let me create a ZFS storage pool and add it to the Solaris cluster.
1. Check the cluster device instances. Here, d5 and d6 come from SAN storage; d5 is already used for the quorum setup.
UASOL1:#cldevice list -v
DID Device   Full Device Path
----------   ----------------
d1           UASOL2:/dev/rdsk/c1t0d0
d1           UASOL1:/dev/rdsk/c1t0d0
d2           UASOL1:/dev/rdsk/c1t2d0
d3           UASOL2:/dev/rdsk/c1t2d0
d4           UASOL2:/dev/rdsk/c1t1d0
d4           UASOL1:/dev/rdsk/c1t1d0
d5           UASOL2:/dev/rdsk/c2t16d0
d5           UASOL1:/dev/rdsk/c2t14d0
d6           UASOL2:/dev/rdsk/c2t15d0
d6           UASOL1:/dev/rdsk/c2t13d0
UASOL1:#
UASOL1:#cldevice status

=== Cluster DID Devices ===

Device Instance    Node     Status
---------------    ----     ------
/dev/did/rdsk/d1   UASOL1   Ok
                   UASOL2   Ok
/dev/did/rdsk/d2   UASOL1   Ok
/dev/did/rdsk/d3   UASOL2   Ok
/dev/did/rdsk/d4   UASOL1   Ok
                   UASOL2   Ok
/dev/did/rdsk/d5   UASOL1   Ok
                   UASOL2   Ok
/dev/did/rdsk/d6   UASOL1   Ok
                   UASOL2   Ok
UASOL1:#
2. Create a new ZFS storage pool using d6.
UASOL1:#zpool create -f UAZPOOL /dev/did/dsk/d6s2
UASOL1:#zpool status UAZPOOL
  pool: UAZPOOL
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        UAZPOOL              ONLINE       0     0     0
          /dev/did/dsk/d6s2  ONLINE       0     0     0

errors: No known data errors
UASOL1:#df -h /UAZPOOL
Filesystem   size   used   avail   capacity   Mounted on
UAZPOOL      3.0G   31K    3.0G    1%         /UAZPOOL
UASOL1:#
3. Register the SUNW.HAStoragePlus resource type, which Solaris Cluster uses to manage ZFS pools.
UASOL1:#clresourcetype register SUNW.HAStoragePlus
UASOL1:#
4. Create a new cluster resource for the zpool that we created in the previous step.
UASOL1:#clresource create -g UA-HA-ZRG -t SUNW.HAStoragePlus -p Zpools=UAZPOOL CLUAZPOOL
UASOL1:#
- -g – resource group: UA-HA-ZRG
- -t – resource type: SUNW.HAStoragePlus
- -p Zpools – zpool name: UAZPOOL
- CLUAZPOOL – cluster resource name
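Putting those pieces together, the create call is easy to template in a small script. A hedged sketch (the variable names are mine; the echoed dry run lets you review the command before running it on a live cluster node):

```shell
#!/bin/sh
# Assemble the 'clresource create' command from variables and print it as a
# dry run, so it can be reviewed before running on a live cluster node.
RG=UA-HA-ZRG              # resource group created earlier
RT=SUNW.HAStoragePlus     # registered resource type
POOL=UAZPOOL              # zpool to place under cluster control
RES=CLUAZPOOL             # name for the new cluster resource

CMD="clresource create -g $RG -t $RT -p Zpools=$POOL $RES"
echo "would run: $CMD"    # on the cluster node: eval "$CMD"
```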
5. Check the resource status.
UASOL1:#clresource status

=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -----     --------------
CLUAZPOOL       UASOL2      Offline   Offline
                UASOL1      Offline   Offline
UASOL1:#
6. Bring the resource group online and check the resource status.
UASOL1:#clrg online -M -n UASOL1 UA-HA-ZRG
UASOL1:#clresource status

=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -----     --------------
CLUAZPOOL       UASOL2      Offline   Offline
                UASOL1      Online    Online
UASOL1:#
7. List the zpools on the node where the resource group is online.
UASOL1:#zpool list
NAME      SIZE    ALLOC   FREE    CAP   HEALTH   ALTROOT
UAZPOOL   3.05G   132K    3.05G   0%    ONLINE   /
rpool     13.9G   9.32G   4.56G   67%   ONLINE   -
UASOL1:#
8. To test the resource group, switch it to the other node.
UASOL1:#clrg switch -n UASOL2 +
UASOL1:#
9. Now you can see that the cluster zpool has moved to node UASOL2.
UASOL1:#zpool list
NAME    SIZE    ALLOC   FREE    CAP   HEALTH   ALTROOT
rpool   13.9G   9.32G   4.56G   67%   ONLINE   -
UASOL1:#clrg status

=== Cluster Resource Groups ===

Group Name   Node Name   Suspended   Status
----------   ---------   ---------   ------
UA-HA-ZRG    UASOL2      No          Online
             UASOL1      No          Offline
UASOL1:#ssh UASOL2 zpool list
Password:
NAME      SIZE    ALLOC   FREE    CAP   HEALTH   ALTROOT
UAZPOOL   3.05G   132K    3.05G   0%    ONLINE   /
rpool     13.9G   9.15G   4.73G   65%   ONLINE   -
UASOL1:#
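After a switch, a script usually only needs to know which node now holds the group. A small sketch that extracts the Online node from `clrg status`-style text (the sample is the post-switch output above; the fixed-column parsing is my assumption):

```shell
#!/bin/sh
# Print the node on which the resource group is currently Online.
# In real use, pipe 'clrg status UA-HA-ZRG' instead of the sample text.
sample='UA-HA-ZRG      UASOL2      No          Online
               UASOL1      No          Offline'

online_node=$(echo "$sample" | awk '
    NF==4 && $4=="Online" { print $2 }   # line that includes the group name
    NF==3 && $3=="Online" { print $1 }   # continuation line without the name
')
echo "$online_node"
```

For the sample above this prints UASOL2, matching the `clrg status` output shown earlier.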
Automatic failover should now work for the resource group that we have just created. In the next article, we will see how to add a local zone to the cluster.