This article explains how to break (split) a ZFS mirror on Solaris. I am not sure how useful this is in a real production environment, because splitting is possible only on a mirrored zpool. Unlike a Veritas Volume Manager diskgroup split, ZFS does not divide the zpool; it simply detaches the mirror disk and keeps a copy of the data on it. For example, once you split a mirrored zpool, you end up with two copies of the same zpool data: the existing zpool keeps one disk, and a new zpool is created from the detached mirror disk. This operation should not be performed while the zpool is in use by an application or database; in order to split the zpool, halt the database or application first.
[box type=”error”]
- A mirrored root zpool cannot be split; this is not supported. In ZFS, you do not need to break the mirror disks for OS patching. All ZFS root systems should use Live Upgrade for OS patching and rollback operations. Note: the Live Upgrade commands differ between Solaris 11 (beadm/BE commands) and Solaris 10 (LU commands); a rough sketch of both follows this note.
[/box]
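As a rough illustration only (the BE name "patchBE" and the patch directory path here are hypothetical, and your patching procedure may differ), the ZFS-root patching workflow looks roughly like this:

# Solaris 10 - Live Upgrade (LU) commands
lucreate -n patchBE                           # clone the current boot environment
luupgrade -t -n patchBE -s /var/tmp/patches   # apply patches to the inactive BE
luactivate patchBE                            # activate the patched BE for the next boot
init 6                                        # reboot into the patched BE

# Solaris 11 - boot environment (beadm/IPS) commands
pkg update                                    # IPS creates and updates a new BE when required
beadm list                                    # verify the new BE
beadm activate <new-BE>                       # activate it, then reboot with init 6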
Here we will see how to split an existing mirrored zpool.
1. Login to the Solaris system where you want to break the mirrored data zpool and check the zpool status.
root@UAAIS:~# zpool status MUA1
  pool: MUA1
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        MUA1        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0

errors: No known data errors
root@UAAIS:~#
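For reference, a two-way mirrored test pool like MUA1 above could have been created along these lines (the disk names and dataset are taken from the output shown here; this setup sketch is not part of the split procedure itself):

# create a two-way mirrored pool and a dataset with an explicit mountpoint
zpool create MUA1 mirror c8t2d0 c8t3d0
zfs create MUA1/unixarena
zfs set mountpoint=/unixarena MUA1/unixarena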
2. Break the mirror using the zpool split command. The command below breaks the mirror of the MUA1 zpool and creates a new zpool called MUA2 using the detached disk of MUA1.
root@UAAIS:~# zpool split MUA1 MUA2
root@UAAIS:~#
3. Check the zpool status of MUA1. You can see that disk c8t3d0 has been removed from the MUA1 zpool.
root@UAAIS:~# zpool status MUA1
  pool: MUA1
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        MUA1      ONLINE       0     0     0
          c8t2d0  ONLINE       0     0     0

errors: No known data errors
root@UAAIS:~#
4. The MUA2 zpool will be in the exported state. You can check the status of the exported zpool using the command below.
root@UAAIS:~# zpool import
  pool: MUA2
    id: 231980248923509617
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        MUA2      ONLINE
          c8t3d0  ONLINE
root@UAAIS:~#
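In the typical use case, the split pool is imported on a different host that can see the detached disk (for example, over shared SAN storage). A minimal sketch, assuming the disk is already visible on the second host (the alternate pool name "MUA2_copy" is only an example):

# on the second host: scan for importable pools, then import by name
zpool import
zpool import MUA2
# or import it under a different pool name
zpool import MUA2 MUA2_copy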
5. The MUA2 zpool can also be imported on this host, but make sure there is no conflict in the ZFS dataset mountpoints. Since it is just a clone of MUA1, there is a high possibility that the datasets use the same mountpoints. In that case you need to modify the ZFS dataset mountpoints manually (a small sketch of this appears under step 6).
root@UAAIS:~# zpool status MUA2
  pool: MUA2
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        MUA2      ONLINE       0     0     0
          c8t3d0  ONLINE       0     0     0

errors: No known data errors
root@UAAIS:~#
root@UAAIS:~# df -h |grep MUA
MUA1                   3.9G    31K   3.9G   1%    /MUA1
MUA2                   3.9G    31K   3.9G   1%    /MUA2
root@UAAIS:~#
6. The conflict occurs in situations like the following. Here is the MUA1 dataset with a static mountpoint.
root@UAAIS:~# df -h |grep MUA1
MUA1                   3.9G    31K   3.9G   1%    /MUA1
MUA1/unixarena         3.9G    31K   3.9G   1%    /unixarena
root@UAAIS:~#
MUA2 has the same dataset with the same mountpoint, since it is just a cloned copy. When you try to import the zpool, you will get the warning below: the zpool is imported, but the dataset cannot be mounted because the mountpoint is already in use by the MUA1 dataset.
root@UAAIS:~# zpool import MUA2
cannot mount 'MUA2/unixarena' on '/unixarena': mountpoint or dataset is busy
root@UAAIS:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
MUA1   3.97G   174K  3.97G   0%  1.00x  ONLINE  -
MUA2   3.97G   148K  3.97G   0%  1.00x  ONLINE  -
rpool  15.6G  11.6G  3.99G  74%  1.00x  ONLINE  -
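If the pool is already imported and only the dataset mount failed (as above), one way to resolve the clash manually is to point the cloned dataset at a different mountpoint. A sketch, using the dataset name from the output above (the path /MUA2/unixarena is just an example):

# give the cloned dataset its own mountpoint, then mount it
zfs set mountpoint=/MUA2/unixarena MUA2/unixarena
zfs mount MUA2/unixarena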
7. This conflict can be avoided altogether by using the commands below. With the -R option, the new zpool is created with an alternate root, so its datasets mount under that alternate root and the mountpoint conflict never occurs.
root@UAAIS:~# zpool status MUA1
  pool: MUA1
 state: ONLINE
  scan: resilvered 138K in 0h0m with 0 errors on Tue Feb 11 15:15:46 2014
config:

        NAME        STATE     READ WRITE CKSUM
        MUA1        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0

errors: No known data errors
root@UAAIS:~# zpool split -R /MUA2 MUA1 MUA2
root@UAAIS:~# zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
MUA1      3.97G   215K  3.97G   0%  1.00x  ONLINE  -
MUA2      3.97G   180K  3.97G   0%  1.00x  ONLINE  /MUA2
rpool     15.6G  11.6G  3.99G  74%  1.00x  ONLINE  -
testrepo  15.9G  13.8G  2.03G  87%  1.00x  ONLINE  -
root@UAAIS:~# df -h |grep MUA
MUA1                   3.9G    31K   3.9G   1%    /MUA1
MUA1/unixarena         3.9G    31K   3.9G   1%    /unixarena
MUA2                   3.9G    31K   3.9G   1%    /MUA2/MUA2
MUA2/unixarena         3.9G    31K   3.9G   1%    /MUA2/unixarena
root@UAAIS:~#
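Once you have finished with the split copy (for example, after a backup or a test restore), you can destroy MUA2 and give its disk back to MUA1 to rebuild the mirror. A sketch, assuming the same disk names as above:

# destroy the split-off pool and re-attach its disk to MUA1 as a mirror
zpool destroy MUA2
zpool attach MUA1 c8t2d0 c8t3d0    # add -f if ZFS warns about an existing pool label
zpool status MUA1                  # resilvering starts automatically; watch its progress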
[starlist]
- This feature is available only for mirrored zpools.
- Database and application operations should be halted before performing the zpool split operation.
- A zpool cannot be split while resilvering (mirror synchronization) is in progress.
- If the zpool is configured with a three-way mirror, then after the split operation your zpool will still have two disks and the new pool will be created using the third disk. You can also name the disk that should go to the new pool (see the sketch after this list).
[/starlist]
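A couple of hedged examples for the points above: checking that no resilver is running before the split, and, on a three-way mirror, naming the disk that should go to the new pool (the disk c8t4d0 below is a hypothetical third mirror disk, not part of the setup shown in this article):

# make sure no resilver is in progress before splitting
zpool status MUA1 | grep -i "resilver in progress"

# on a three-way mirror, optionally name the disk that goes to the new pool
zpool split MUA1 MUA2 c8t4d0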
Hope this article is informative for you. Please leave a comment if you have any doubts about it.
Thank you for visiting UnixArena.