This article walks you through zone cluster deployment on Oracle Solaris. A zone cluster consists of a set of zones, where each zone represents a virtual node and each zone is configured on a separate machine. As such, the upper bound on the number of virtual nodes in a zone cluster is the number of machines in the global cluster. The zone cluster design introduces a new zone brand, called the cluster brand. The cluster brand is based on the original native brand and adds enhancements for clustering. The BrandZ framework provides numerous hooks where other software can take action appropriate for the brand type of a zone. For example, there is a hook for software to be called during zone boot, and zone clusters take advantage of this hook to inform the cluster software about the boot of the virtual node. Because zone clusters use the BrandZ framework, Oracle Solaris 10 5/08 is the minimum required release.
The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone Cluster Membership Monitor (ZCMM), that monitors the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone clusters. Zone clusters are considerably simpler than global clusters. For example, there are no quorum devices in a zone cluster, as a quorum device is not needed.
clzonecluster is the utility used to create, modify, delete, and manage zone clusters in a Sun Cluster environment.
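As a quick orientation, the typical zone cluster life cycle with clzonecluster looks roughly like the sketch below. It is run from the global zone, and "myzc" is a hypothetical zone cluster name; this article demonstrates most of these subcommands in detail.

```shell
# Sketch of the common clzonecluster life-cycle subcommands
# (run from the global zone; "myzc" is a hypothetical zone cluster name).
clzonecluster configure myzc   # create or modify the zone cluster configuration
clzonecluster verify myzc      # sanity-check the configuration on all nodes
clzonecluster install myzc     # install the zone on every configured node
clzonecluster boot myzc        # boot all virtual nodes
clzonecluster status myzc      # show node and zone status
clzonecluster halt myzc        # shut down all virtual nodes
clzonecluster uninstall myzc   # remove the installed zones
clzonecluster delete myzc      # remove the configuration
```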
Note:
Sun Cluster is the product; a zone cluster is one of the cluster types that Sun Cluster supports.
Environment:
- Operating system: Oracle Solaris 10 u9
- Cluster: Sun Cluster 3.3 (aka Oracle Solaris Cluster 3.3)
Prerequisites:
- Two nodes running Oracle Solaris 10 u9 or later
- Sun Cluster 3.3 package
Step 1: Create a global cluster
The following articles will help you install and configure a two-node Sun Cluster on Oracle Solaris 10.
- Install Oracle Solaris Cluster 3.3 (aka Sun Cluster) on Solaris 10 nodes
- Configure a two-node Sun Cluster 3.3 on Solaris 10
Step 2: Create a zone cluster inside the global cluster
1. Log in to one of the cluster nodes (global zone).
2. Ensure that the nodes of the global cluster are in cluster mode.
```
UASOL2:#clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name      Status
---------      ------
UASOL2         Online
UASOL1         Online

UASOL2:#
```
3. You must have the zone path ready for the local zone installation on both cluster nodes. The zone path must be identical on both nodes. On node UASOL1,
```
UASOL1:#zfs list |grep /export/zones/uainfrazone
rpool/export/zones/uainfrazone   149M  4.54G  149M  /export/zones/uainfrazone
UASOL1:#
```
On Node UASOL2,
```
UASOL2:#zfs list |grep /export/zones/uainfrazone
rpool/export/zones/uainfrazone   149M  4.24G  149M  /export/zones/uainfrazone
UASOL2:#
```
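If the zone path does not exist yet, it can be prepared as a dedicated ZFS dataset on each node. The sketch below mirrors the dataset and mount point used in this article; run it on both nodes before configuring the zone cluster.

```shell
# Create a dedicated dataset for the zone root (run on each cluster node).
# -p creates any missing parent datasets.
zfs create -p rpool/export/zones/uainfrazone
# zoneadm requires the zone path to be owned by root with mode 700.
chmod 700 /export/zones/uainfrazone
```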
4. Create a new zone cluster.
Note:
- By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.
- Specifying an IP address and NIC for each zone cluster node is optional.
```
UASOL1:#clzonecluster configure uainfrazone
uainfrazone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:uainfrazone> create
clzc:uainfrazone> set zonepath=/export/zones/uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL1
clzc:uainfrazone:node> set hostname=uainfrazone1
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.101
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> add sysid
clzc:uainfrazone:sysid> set root_password="H/80/NT4F2H7g"
clzc:uainfrazone:sysid> end
clzc:uainfrazone> verify
clzc:uainfrazone> commit
clzc:uainfrazone> exit
UASOL1:#
```
- Cluster name = uainfrazone
- Zone path = /export/zones/uainfrazone
- physical-host = UASOL1 (where uainfrazone1 will be configured)
- hostname = uainfrazone1 (zone cluster node name)
- Zone IP address (optional) = 192.168.2.101
Here, we have configured just one zone, on UASOL1. Clustering makes sense only with two or more nodes, so let me create one more zone, on node UASOL2, in the same zone cluster.
```
UASOL1:#clzonecluster configure uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL2
clzc:uainfrazone:node> set hostname=uainfrazone2
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.103
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> commit
clzc:uainfrazone> info
zonename: uainfrazone
zonepath: /export/zones/uainfrazone
autoboot: true
hostid:
brand: cluster
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
enable_priv_net: true
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
sysid:
    root_password: H/80/NT4F2H7g
    name_service: NONE
    nfs4_domain: dynamic
    security_policy: NONE
    system_locale: C
    terminal: xterm
    timezone: Asia/Calcutta
node:
    physical-host: UASOL1
    hostname: uainfrazone1
    net:
        address: 192.168.2.101
        physical: e1000g0
        defrouter not specified
node:
    physical-host: UASOL2
    hostname: uainfrazone2
    net:
        address: 192.168.2.103
        physical: e1000g0
        defrouter not specified
clzc:uainfrazone> exit
```
- Cluster name = uainfrazone
- Zone path = /export/zones/uainfrazone
- physical-host = UASOL2 (where uainfrazone2 will be configured)
- hostname = uainfrazone2 (zone cluster node name)
- Zone IP address (optional) = 192.168.2.103
The encrypted string corresponds to the root password "root123" (the zone's root password).
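If you want to generate such an encrypted string yourself, one portable approach (a sketch, not the method used in this article, whose hash is traditional DES crypt) is openssl's passwd subcommand. The salt "ua" below is arbitrary; before using the result as root_password, make sure the chosen algorithm is permitted by the zone's crypt(3) policy (/etc/security/policy.conf on Solaris).

```shell
# Generate an MD5-crypt hash of "root123" with the arbitrary salt "ua".
# The output begins with $1$ua$ and can be pasted into
# "set root_password=..." in the clzonecluster sysid scope,
# provided the target system's crypt(3) policy allows MD5 crypt.
openssl passwd -1 -salt ua root123
```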
5. Verify the zone cluster.
```
UASOL2:#clzonecluster verify uainfrazone
Waiting for zone verify commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL2:#
```
6. Check the zone cluster status. At this stage the zones are in the configured state.
```
UASOL2:#clzonecluster status uainfrazone

=== Zone Clusters ===

--- Zone Cluster Status ---

Name          Node Name   Zone Host Name   Status    Zone Status
----          ---------   --------------   ------    -----------
uainfrazone   UASOL1      uainfrazone1     Offline   Configured
              UASOL2      uainfrazone2     Offline   Configured

UASOL2:#
```
7. Install the zones using the following command.
```
UASOL2:#clzonecluster install uainfrazone
Waiting for zone install commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL2:#
UASOL2:#zoneadm list -cv
  ID NAME         STATUS     PATH                        BRAND    IP
   0 global       running    /                           native   shared
   - uainfrazone  installed  /export/zones/uainfrazone   cluster  shared
UASOL2:#
```
Here you can see that uainfrazone has been created and installed. You should see the same on UASOL1 as well.
```
UASOL1:#zoneadm list -cv
  ID NAME         STATUS     PATH                        BRAND    IP
   0 global       running    /                           native   shared
   - uainfrazone  installed  /export/zones/uainfrazone   cluster  shared
UASOL1:#
```
Note: It makes no difference whether you run a command from UASOL1 or UASOL2, since both are in the cluster.
8. Bring up the zones using clzonecluster. (Do not use the zoneadm command to boot the zones.)
```
UASOL1:#clzonecluster boot uainfrazone
Waiting for zone boot commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#
UASOL1:#zoneadm list -cv
  ID NAME         STATUS     PATH                        BRAND    IP
   0 global       running    /                           native   shared
   1 uainfrazone  running    /export/zones/uainfrazone   cluster  shared
UASOL1:#
```
On UASOL2,
```
UASOL2:#zoneadm list -cv
  ID NAME         STATUS     PATH                        BRAND    IP
   0 global       running    /                           native   shared
   3 uainfrazone  running    /export/zones/uainfrazone   cluster  shared
UASOL2:#
```
9. Check the zone cluster status.
```
UASOL1:#clzonecluster status uainfrazone

=== Zone Clusters ===

--- Zone Cluster Status ---

Name          Node Name   Zone Host Name   Status    Zone Status
----          ---------   --------------   ------    -----------
uainfrazone   UASOL1      uainfrazone1     Offline   Running
              UASOL2      uainfrazone2     Offline   Running

UASOL1:#
```
10. The zones will reboot automatically for system configuration (sysid). You can watch this happen on the zone's console.
```
UASOL1:#zlogin -C uainfrazone
[Connected to zone 'uainfrazone' console]
168/168
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: clprivnet0.

rebooting system due to change(s) in /etc/default/init

Apr 10 13:21:47 Cluster.Framework: cl_execd: Going down on signal 15.
Apr 10 13:21:47 Cluster.Framework: cl_execd: Going down on signal 15.

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic_147148-26 64-bit
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Hostname: uainfrazone1

uainfrazone1 console login:
```
11. Check the zone cluster status.
```
UASOL2:#clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name          Node Name   Zone Host Name   Status   Zone Status
----          ---------   --------------   ------   -----------
uainfrazone   UASOL1      uainfrazone1     Online   Running
              UASOL2      uainfrazone2     Online   Running

UASOL2:#
```
We have successfully configured a two-node zone cluster. What's next? You should log in to one of the zones and configure the resource groups and resources. Log in to any one of the local zones and check the cluster status.
```
UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/2]
Last login: Mon Apr 11 01:58:20 on pts/2
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name       Status
---------       ------
uainfrazone1    Online
uainfrazone2    Online

bash-3.2#
```
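As a brief preview of that next step, creating a failover resource group inside the zone cluster looks roughly like the sketch below. All names ("ua-rg", "uainfra-lh", "ua-lh-rs") are hypothetical, and the logical hostname address must already be resolvable and authorized for the zone cluster.

```shell
# Run from inside one of the zone cluster nodes (PATH includes /usr/cluster/bin).
# Create a failover resource group ("ua-rg" is a hypothetical name),
clresourcegroup create ua-rg
# add a logical hostname resource for an address known to the zone cluster,
clreslogicalhostname create -g ua-rg -h uainfra-lh ua-lh-rs
# and bring the group online in a managed state.
clresourcegroup online -M ua-rg
```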
In the same way, you can create any number of zone clusters under the global cluster. These zone clusters use the host's private network and other required resources. In the next article, we will see how to configure a resource group in the local zones.
Hope this article is informative to you.