Assumptions:
New disk: c1t1d0
The new disk should carry an SMI (VTOC) label with all of the disk's sectors allocated to slice 0 (s0); an EFI label is not supported for the ZFS root pool. A quick sketch of preparing the disk follows the note below.
Note: If you cannot spare a new disk, you can break the existing root mirror and use the detached disk for this migration.
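Before creating the pool, prepare the disk. A minimal sketch is below, assuming an SVM-mirrored UFS root; the metadevice names d10/d12 are hypothetical, and the format session is interactive, so adapt the steps to your own layout.

# Optional: if you are reusing a submirror disk, detach and clear it first
# (d10/d12 are placeholder SVM metadevice names; confirm yours with metastat -p).
bash-3.00# metastat -p
bash-3.00# metadetach d10 d12
bash-3.00# metaclear d12

# Relabel the new disk with an SMI label (interactive).
bash-3.00# format -e c1t1d0
# At the "format>" prompt: run "label" and choose the SMI label, then use
# "partition" to assign all available cylinders to slice 0, label again, and quit.

# Verify that slice 0 now spans the whole disk.
bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2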
1. Create a pool named "rpool" on the newly configured disk.
bash-3.00# zpool create rpool c1t1d0s0
bash-3.00# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool    72K  7.81G    21K  /rpool
2. Check whether any boot environment is already configured before naming the current one. On this system none exists yet.
bash-3.00# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
3. Create the new boot environment on rpool, using the options below:
-c — current boot environment name
-n — new boot environment name
-p — Pool name
bash-3.00# lucreate -c sol_stage1 -n sol_stage2 -p rpool
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file system(s)
you specified for the new boot environment. Determining which file
systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <sol_stage2>.
Source boot environment is <sol_stage1>.
Creating boot environment <sol_stage2>.
Creating file systems on boot environment <sol_stage2>.
Creating file system for </> in zone <global> on <rpool/ROOT/sol_stage2>.
Populating file systems on boot environment <sol_stage2>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Updating compare databases on boot environment <sol_stage2>.
Making boot environment <sol_stage2> bootable.
Updating bootenv.rc on ABE <sol_stage2>.
File propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <sol_stage2> in GRUB menu
Population of boot environment <sol_stage2> successful.
Creation of boot environment <sol_stage2> successful.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      yes    yes       no     -
sol_stage2                 yes      no     no        yes    -
4. Activate the new boot environment that was created on the zpool. Once the BE has been created successfully, activating it makes the system boot from the new (ZFS) BE from the next reboot onwards.
Note: Do not use the "reboot" command; use "init 6".
bash-3.00# luactivate sol_stage2
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <sol_stage1>
A Live Upgrade Sync operation will be performed on startup of boot environment <sol_stage2>.
Generating boot-sign for ABE <sol_stage2>
NOTE: File not found in top level dataset for BE <sol_stage2>
Generating partition and slice information for ABE <sol_stage2>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
   Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
   /mnt). You can use the following command to mount:

     mount -F ufs /dev/dsk/c1t0d0s0 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot
   environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
   indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File propagation successful
File propagation successful
File propagation successful
File propagation successful
Deleting stale GRUB loader from all BEs.
File deletion successful
File deletion successful
File deletion successful
Activation of boot environment <sol_stage2> successful.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      yes    no        no     -
sol_stage2                 yes      no     yes       no     -

Here you can see that "Active On Reboot" is now yes for sol_stage2.
5. Reboot the server using init 6 to boot from the new boot environment.
bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
File propagation successful
File propagation successful
File propagation successful

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      no     no        yes    -
sol_stage2                 yes      yes    yes       no     -
6. Now you can see that the server has booted from the ZFS root pool.
bash-3.00# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  4.60G  3.21G  34.5K  /rpool
rpool/ROOT             3.59G  3.21G    21K  legacy
rpool/ROOT/sol_stage2  3.59G  3.21G  3.59G  /
rpool/dump              512M  3.21G   512M  -
rpool/swap              528M  3.73G    16K  -

bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
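As an extra check (a minimal sketch; your sizes and BE name may differ), df should now report the root file system from the new BE's ZFS dataset rather than a disk slice:

bash-3.00# df -h /
# The Filesystem column should show rpool/ROOT/sol_stage2 instead of /dev/dsk/....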
7. If everything looks fine, you can remove the old (UFS) boot environment using the command below.
bash-3.00# ludelete -f sol_stage1
System has findroot enabled GRUB
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment deleted.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage2                 yes      yes    yes       no     -
8. Now the old boot environment's disk can be reused to mirror rpool. Its size should be equal to or greater than the existing rpool disk.
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
9. Copy the partition table from the rpool disk to the second disk.
bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
fmthard: New volume table of contents now in place.
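Before attaching the disk, it is worth sanity-checking the copied label (optional; slice 0 on the target should match slice 0 on the source disk):

bash-3.00# prtvtoc /dev/rdsk/c1t0d0s2
# Compare the slice 0 first sector and sector count with the output of
# prtvtoc /dev/rdsk/c1t1d0s2 taken from the rpool disk.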
10. Initiate the rpool mirroring by attaching the second disk:
bash-3.00# zpool attach rpool c1t1d0s0 c1t0d0s0
Please be sure to invoke installgrub(1M) to make 'c1t0d0s0' bootable.
Make sure to wait until resilver is done before rebooting.

bash-3.00# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 1.37% done, 0h18m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0  56.9M resilvered

errors: No known data errors
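As the attach output reminds you, the newly added disk needs a boot loader before it can boot the system on its own. A short sketch for this x86 (GRUB) system is below; the stage1/stage2 paths are the standard Solaris 10 locations, and on SPARC you would use installboot(1M) instead:

bash-3.00# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
# Re-check the pool and wait for the resilver to finish before any reboot.
bash-3.00# zpool status rpool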
Once the mirroring is done, the system will be running on ZFS with redundant root disks.
Thank you for reading this article.
kelechi ifediniru says
Hello,
What happens to the existing mount points, for example /export/home? Is the file system automatically created on the ZFS pool?
Thanks
Pramod Kumar says
If you want to move some file systems onto rpool as separate datasets, you need to create the boot environment like this:
#lucreate -c <current BE> -n <new BE> -p rpool -D <mount point>
Note: if you need to move two file systems onto rpool, repeat the option: -D <mount point> -D <mount point>.
The mount point is the file system you want to move; for example, to move /export/home, pass /export/home to -D.
KishoreBabu says
Excellent and simple article, very helpful.
Thanks.. Kishore Babu