11. Register the SUNW.gds resource type in Solaris Cluster for the highly available local zone.
UASOL1:# clresourcetype register SUNW.gds
UASOL1:#
12. Navigate to the directory below and copy the config file with the new zone resource name appended. CLUAHAZ-OS1 is the cluster resource for the local zone which we are going to create.
UASOL1:# cd /opt/SUNWsczone/sczbt/util
UASOL1:# ls -lrt
total 26
-r-xr-xr-x   1 root     bin         6949 Jan 28  2013 sczbt_register
-rw-r--r--   1 root     bin         4806 Jan 28  2013 sczbt_config
UASOL1:# cp -p sczbt_config sczbt_config.CLUAHAZ-OS1
UASOL1:#
13. Edit sczbt_config.CLUAHAZ-OS1 with the information below.
UASOL1:# grep -v "#" sczbt_config.CLUAHAZ-OS1
RS=CLUAHAZ-OS1
RG=UA-HA-ZRG
PARAMETERDIR=/UAZPOOL/UAHAZ1/params
SC_NETWORK=true
SC_LH=CLUAHAZ1
FAILOVER=true
HAS_RS=CLUAZPOOL
Zonename="UAHAZ1"
Zonebrand="native"
Zonebootopt=""
Milestone="svc:/milestone/multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""
UASOL1:#
- RS – Local zone cluster resource name (the suffix of the config file name)
- RG – Existing resource group
- PARAMETERDIR – Directory that holds the zone resource's parameter file (created under the zone's root path in step 14)
- SC_NETWORK – If set to true, the local zone's IP is managed by Solaris Cluster, so you must also provide the SC_LH value.
- SC_LH – Logical hostname resource which we created earlier.
- FAILOVER – Set to true to enable automatic failover.
- HAS_RS – The zpool's cluster storage resource.
- Zonename – Name of the local zone.
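The parameter file from step 13 can also be written in one shot instead of editing it by hand. This is a minimal sketch, assuming a scratch copy under /tmp purely for illustration; in the actual setup the file lives in /opt/SUNWsczone/sczbt/util.

```shell
# Hypothetical sketch: generate the sczbt parameter file non-interactively.
# /tmp is used here only for illustration.
CFG=/tmp/sczbt_config.CLUAHAZ-OS1
cat > "$CFG" <<'EOF'
RS=CLUAHAZ-OS1
RG=UA-HA-ZRG
PARAMETERDIR=/UAZPOOL/UAHAZ1/params
SC_NETWORK=true
SC_LH=CLUAHAZ1
FAILOVER=true
HAS_RS=CLUAZPOOL
Zonename="UAHAZ1"
Zonebrand="native"
Zonebootopt=""
Milestone="svc:/milestone/multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""
EOF
# Quick sanity check on the keys the registration script relies on
grep -E '^(RS|RG|SC_LH|Zonename)=' "$CFG"
```

Keeping the parameters in a heredoc like this makes the config reproducible if you ever need to rebuild the resource.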
14. Create the params directory under the zone's root path on the node where the resource group is online.
UASOL1:# mkdir -p /UAZPOOL/UAHAZ1/params
15. Create the local zone resource by running the registration script below with the config file as input.
UASOL1:# pwd
/opt/SUNWsczone/sczbt/util
UASOL1:# ./sczbt_register -f ./sczbt_config.CLUAHAZ-OS1
sourcing ./sczbt_config.CLUAHAZ-OS1
Registration of resource CLUAHAZ-OS1 succeeded.
Validation of resource CLUAHAZ-OS1 succeeded.
UASOL1:#
16. Enable the zone resource.
UASOL1:# clresource enable CLUAHAZ-OS1
UASOL1:#
17. Check the resource status.
UASOL1:# clresource status

=== Cluster Resources ===

Resource Name       Node Name    State     Status Message
-------------       ---------    -----     --------------
CLUAHAZ-OS1         UASOL2       Offline   Offline
                    UASOL1       Online    Online - Service is online.

CLUAHAZ1            UASOL2       Offline   Offline - LogicalHostname offline.
                    UASOL1       Online    Online - LogicalHostname online.

CLUAZPOOL           UASOL2       Offline   Offline
                    UASOL1       Online    Online

UASOL1:#
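If you want to check a resource's state from a script rather than by eye, the State column can be pulled out of captured clresource output. This is a minimal sketch; the sample line and the awk field position are assumptions based on the status layout shown above.

```shell
# Hypothetical parse of one captured 'clresource status' line.
# Field 3 is the State column in the layout above (an assumption about
# the output format; adjust if your cluster version prints differently).
sample='CLUAHAZ-OS1    UASOL1    Online    Online - Service is online.'
state=$(echo "$sample" | awk '{print $3}')
echo "$state"   # prints "Online"
```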
18. The CLUAHAZ-OS1 resource is online, so the zone must have booted. Check the zone status using the zoneadm command.
UASOL1:# zoneadm list -cv
  ID NAME      STATUS     PATH              BRAND    IP
   0 global    running    /                 native   shared
   4 UAHAZ1    running    /UAZPOOL/UAHAZ1   native   shared
UASOL1:#
19. Switch the resource group to the other node and check the zone status.
UASOL1:# clrg switch -n UASOL2 +
UASOL1:# zoneadm list -cv
  ID NAME      STATUS      PATH              BRAND    IP
   0 global    running     /                 native   shared
   - UAHAZ1    installed   /UAZPOOL/UAHAZ1   native   shared
UASOL1:#
On UASOL1, the zone has been halted.
20. Log in to UASOL2 and check the resource status.
UASOL2:# clresource status

=== Cluster Resources ===

Resource Name       Node Name    State      Status Message
-------------       ---------    -----      --------------
CLUAHAZ-OS1         UASOL2       Starting   Unknown - Starting
                    UASOL1       Offline    Offline

CLUAHAZ1            UASOL2       Online     Online - LogicalHostname online.
                    UASOL1       Offline    Offline - LogicalHostname offline.

CLUAZPOOL           UASOL2       Online     Online
                    UASOL1       Offline    Offline

UASOL2:# zoneadm list -cv
  ID NAME      STATUS     PATH              BRAND    IP
   0 global    running    /                 native   shared
   4 UAHAZ1    running    /UAZPOOL/UAHAZ1   native   shared
UASOL2:# clresource status

=== Cluster Resources ===

Resource Name       Node Name    State     Status Message
-------------       ---------    -----     --------------
CLUAHAZ-OS1         UASOL2       Online    Online
                    UASOL1       Offline   Offline

CLUAHAZ1            UASOL2       Online    Online - LogicalHostname online.
                    UASOL1       Offline   Offline - LogicalHostname offline.

CLUAZPOOL           UASOL2       Online    Online
                    UASOL1       Offline   Offline

UASOL2:#
We have successfully failed over the resource group to UASOL2.
Now the UAHAZ1 local zone is highly available using Solaris Cluster. This type of setup is known as a failover zone (sometimes called a "flying zone").
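To fail back, the resource group can be switched to the original node the same way. This fragment is only runnable on a cluster node, so it is shown as a sketch; the resource group name UA-HA-ZRG comes from the config earlier in this article.

```shell
# Switch the resource group back to UASOL1 and confirm the zone boots there.
clrg switch -n UASOL1 UA-HA-ZRG
clresource status      # CLUAHAZ-OS1 should move through Starting to Online
zoneadm list -cv       # UAHAZ1 should report "running" again on UASOL1
```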
Hope this article is informative to you.
Share it! Comment it!! Be Sociable!!!
Gopi R says
Hi Lingeswaran,
I am facing a challenge with this setup; below is my scenario:
1. Solaris 10 – GZ1, GZ2 configured with Sun Cluster 3.3
2. Created zonepath and data mountpoint HA resource as zone-boot-hasp-rs , zone-has-rs
3. When I create zone-ora-rs and zone-lis-rs for automatic DB start during failover of the zone (Solaris 9), the cluster switchover is fine, but the DB gets started in the GZ itself instead of in the Solaris 9 zone.
My question: in this situation, how can I configure automatic DB start for the Solaris 9 zone?
I even tried with SC_Network=true but no luck…
I would really appreciate it if you could help with this…
Gopi R
Yadunandan Jha says
Hi Lingeswaran,
I would like to configure an Oracle database in Sun Cluster for HA. Which file system should I use: QFS, ZFS, or SVM? I don't want to use RAC.
I have installed Sun Cluster 3.3 on Solaris 10 with ZFS and allocated shared space from storage to both systems. Now I would like to create a shared file system so that I can install the DB in the cluster. Which file system should I use for HA?
Regards,
Yadu
Lingeswaran R says
ZFS pools are better than SVM disk sets. ZFS is not a cluster filesystem, but you can use it for a failover service group.
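For the failover approach described above, the ZFS pool is typically put under cluster control with SUNW.HAStoragePlus. This is a minimal sketch that is only runnable on a cluster node; the names oradata-pool, ora-rg, and ora-hasp-rs are illustrative, not from the article.

```shell
# Hypothetical sketch: make a ZFS pool highly available so a failover
# Oracle DB can sit on top of it. All names below are placeholders.
clresourcetype register SUNW.HAStoragePlus
clresourcegroup create ora-rg
clresource create -g ora-rg -t SUNW.HAStoragePlus \
    -p Zpools=oradata-pool ora-hasp-rs
clresourcegroup online -M ora-rg   # -M brings the group to managed state
```

On failover, the pool is exported from one node and imported on the other, which is why ZFS works here even though it is not a shared-access cluster filesystem.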
Vikrant Aggarwal says
Good Article.
Just to add here: zone clusters are now the way to go instead of zone failover. This helps minimize the downtime associated with failing over a non-global zone.
Thanks
Vikrant Aggarwal
Lingeswaran R says
Yeah, we can make a cluster between zones. Thank you, Vikrant, for your comments.