root@Unixarena-SOL11:~# echo |format |grep d0
       0. c8t0d0 <VMware,-VMware Virtual S-1.0-16.00GB>
       1. c8t1d0 <VMware,-VMware Virtual S-1.0-2.00GB>
       2. c8t2d0 <VMware,-VMware Virtual S-1.0-2.00GB>
       3. c8t3d0 <VMware,-VMware Virtual S-1.0-2.00GB>
       4. c8t4d0 <VMware,-VMware Virtual S-1.0-2.00GB>
       5. c8t5d0 <VMware,-VMware Virtual S-1.0-2.00GB>
root@Unixarena-SOL11:~#
2. Create a new striped ZFS storage pool using the zpool command.
root@Unixarena-SOL11:~# zpool create oracle-S c8t1d0 c8t2d0
'oracle-S' successfully created, but with no redundancy; failure of one
device will cause loss of the pool
root@Unixarena-SOL11:~# zpool status oracle-S
  pool: oracle-S
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oracle-S    ONLINE       0     0     0
          c8t1d0    ONLINE       0     0     0
          c8t2d0    ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#
root@Unixarena-SOL11:~# zpool list oracle-S
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle-S  3.97G    88K  3.97G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zfs list oracle-S
NAME       USED  AVAIL  REFER  MOUNTPOINT
oracle-S    88K  3.91G    31K  /oracle-S
root@Unixarena-SOL11:~#
Data is striped across the two physical disks, so write performance will be much better than with concatenation.
While creating the zpool itself, you will get a warning that there is no redundancy on this zpool: the failure of one physical disk will result in the loss of the complete zpool.
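Note: the mirrored pool in the next example reuses the same two disks (c8t1d0 and c8t2d0), so the non-redundant striped pool would have been destroyed first. That destroy step is not part of the capture above; it would simply be:

root@Unixarena-SOL11:~# zpool destroy oracle-S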
Creating a mirrored zpool:
1. Create a mirrored ZFS storage pool using the zpool command.
root@Unixarena-SOL11:~# zpool create oracle-M mirror c8t1d0 c8t2d0
root@Unixarena-SOL11:~# zpool status oracle-M
  pool: oracle-M
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oracle-M    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#
2. Check the zpool details and the mount point. Note that the usable size of a mirrored pool is roughly that of a single disk, since the same data is kept on both disks.
root@Unixarena-SOL11:~# zpool list oracle-M
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle-M  1.98G    85K  1.98G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zfs list oracle-M
NAME      USED  AVAIL  REFER  MOUNTPOINT
oracle-M   85K  1.95G    31K  /oracle-M
root@Unixarena-SOL11:~#
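If you also want to confirm that the pool's filesystem is actually mounted, df can be used as well. This check is not part of the original session; it is just another way to see the mount point reported by zfs list:

root@Unixarena-SOL11:~# df -h /oracle-M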
If you want a zpool made up of multiple mirrored disk pairs, use the command below. Data is then striped across the mirror sets, which is comparable to RAID 1+0.
root@Unixarena-SOL11:~# zpool create oracle-2M mirror c8t1d0 c8t2d0 mirror c8t3d0 c8t4d0
root@Unixarena-SOL11:~# zpool status oracle-2M
  pool: oracle-2M
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oracle-2M   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool list oracle-2M
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle-2M  3.97G   124K  3.97G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zfs list oracle-2M
NAME        USED  AVAIL  REFER  MOUNTPOINT
oracle-2M  89.5K  3.91G    31K  /oracle-2M
root@Unixarena-SOL11:~#
If you want to create a zpool with a three-way mirror (the same data kept on three disks, so two disk failures can be tolerated), use the command below.
root@Unixarena-SOL11:~# zpool create oracle-2-M mirror c8t1d0 c8t2d0 c8t3d0
root@Unixarena-SOL11:~# zpool status oracle-2-M
  pool: oracle-2-M
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        oracle-2-M   ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c8t1d0   ONLINE       0     0     0
            c8t2d0   ONLINE       0     0     0
            c8t3d0   ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#
Advantages:
1. It provides data redundancy.
2. Faster read speed, since reads can be served from any disk in the mirror.
Disadvantages:
1. Slower write speed, since every block has to be written to all the disks in the mirror.
Creating a RAIDZ zpool:
RAID-Z (raidz1) is comparable to RAID 5: it uses distributed parity, and a minimum of three disks should be used. RAID-Z can tolerate a single disk failure.
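The zpool create command for this pool is missing from the capture below; based on the disks shown in the status output, it would have been along these lines (reconstructed, not taken from the original session):

root@Unixarena-SOL11:~# zpool create oracle-Z raidz c8t1d0 c8t2d0 c8t3d0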
root@Unixarena-SOL11:~# zpool status oracle-Z
  pool: oracle-Z
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oracle-Z    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool list oracle-Z
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle-Z  5.94G   174K  5.94G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zfs list oracle-Z
NAME      USED  AVAIL  REFER  MOUNTPOINT
oracle-Z  116K  3.89G  34.6K  /oracle-Z
Creating a RAIDZ2 zpool:
RAIDZ2 is comparable to RAID 6: it uses the same parity mechanism as raidz but keeps two parity blocks, so a RAIDZ2 pool can tolerate two disk failures. RAIDZ2 is useful on larger zpools since it is safer.
root@Unixarena-SOL11:~# zpool create oracle-2Z raidz2 c8t1d0 c8t2d0 c8t3d0 c8t4d0
root@Unixarena-SOL11:~# zpool status oracle-2Z
  pool: oracle-2Z
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oracle-2Z   ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool list oracle-2Z
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle-2Z  7.94G   252K  7.94G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zfs list oracle-2Z
NAME       USED  AVAIL  REFER  MOUNTPOINT
oracle-2Z  126K  3.89G  37.4K  /oracle-2Z
root@Unixarena-SOL11:~#
Creating a RAIDZ3 zpool:
RAIDZ3 (triple-parity RAID) uses the same parity mechanism as raidz and raidz2 but keeps three parity blocks, so a RAIDZ3 pool can tolerate three disk failures. RAIDZ3 is useful on larger zpools since it is the safest of the three.
root@Unixarena-SOL11:~# zpool create oracle-3Z raidz3 c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0
root@Unixarena-SOL11:~# zpool status oracle-3Z
  pool: oracle-3Z
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oracle-3Z   ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool list oracle-3Z
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle-3Z  9.94G   564K  9.94G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zfs list oracle-3Z
NAME       USED  AVAIL  REFER  MOUNTPOINT
oracle-3Z  132K  3.90G  39.8K  /oracle-3Z
root@Unixarena-SOL11:~#
Destroying a zpool:
A zpool can be removed using the zpool destroy command. Once destroyed, the pool no longer appears in zpool list.
root@Unixarena-SOL11:~# zpool status oracle-3Z
  pool: oracle-3Z
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oracle-3Z   ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool destroy oracle-3Z
root@Unixarena-SOL11:~# zpool list oracle-3Z
cannot open 'oracle-3Z': no such pool
root@Unixarena-SOL11:~#
Exporting the Zpool:
You can export a zpool from one system and import it on another Solaris machine, as long as both machines have access to the same LUNs used by that zpool. Here I am exporting the oracledata pool.
root@Unixarena-SOL11:~# zpool status oracledata
  pool: oracledata
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        oracledata    ONLINE       0     0     0
          c8t1d0      ONLINE       0     0     0
          c8t2d0      ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~# zpool export oracledata
root@Unixarena-SOL11:~# zpool list oracledata
cannot open 'oracledata': no such pool
root@Unixarena-SOL11:~#
Importing the Zpool:
Running zpool import without any arguments scans the attached devices and lists the pools that are available for import.
root@Unixarena-SOL11:~# zpool import
  pool: oracledata
    id: 2756639946282462629
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        oracledata  ONLINE
          c8t1d0    ONLINE
          c8t2d0    ONLINE
root@Unixarena-SOL11:~#
To import the zpool “oracledata”, use its name (or the numeric identifier shown above).
root@Unixarena-SOL11:~# zpool import oracledata
root@Unixarena-SOL11:~# zpool list oracledata
NAME         SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracledata  3.97G   138K  3.97G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zpool status oracledata
  pool: oracledata
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        oracledata    ONLINE       0     0     0
          c8t1d0      ONLINE       0     0     0
          c8t2d0      ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#
Zpool Integrity Check and clearing errors:
If you want to verify the integrity of a zpool and repair checksum errors, run the zpool scrub command. Unlike traditional filesystems, ZFS has no fsck mechanism; scrubbing is the way to verify the data on disk and repair it where redundancy is available.
root@Unixarena-SOL11:~# zpool scrub oracledata
root@Unixarena-SOL11:~# zpool status oracledata
  pool: oracledata
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jul 24 02:22:46 2013
config:

        NAME          STATE     READ WRITE CKSUM
        oracledata    ONLINE       0     0     0
          c8t1d0      ONLINE       0     0     0
          c8t2d0      ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#
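As a quick overall health check (not part of the original session), zpool status can also be run with the -x option; it lists only the pools that have problems and simply prints "all pools are healthy" when there is nothing to report:

root@Unixarena-SOL11:~# zpool status -x
all pools are healthy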
If you want to clear the error counters (the READ, WRITE and CKSUM columns in zpool status), use the zpool clear command.
root@Unixarena-SOL11:~# zpool clear oracledata
root@Unixarena-SOL11:~# zpool status oracledata
  pool: oracledata
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jul 24 02:22:46 2013
config:

        NAME          STATE     READ WRITE CKSUM
        oracledata    ONLINE       0     0     0
          c8t1d0      ONLINE       0     0     0
          c8t2d0      ONLINE       0     0     0

errors: No known data errors
root@Unixarena-SOL11:~#
Recovering a destroyed Zpool:
1. Destroy the zpool using the zpool destroy command.
root@Unixarena-SOL11:~# zpool list
NAME        SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
oracle-RZ  11.9G   186K  11.9G    0%  1.00x  ONLINE  -
rpool      15.6G  4.07G  11.6G   26%  1.00x  ONLINE  -
root@Unixarena-SOL11:~# zpool destroy oracle-RZ
2. Check whether the destroyed zpool is available for import. The -D option of zpool import lists destroyed pools.
root@Unixarena-SOL11:~# zpool import -D
  pool: oracle-RZ
    id: 6355325059104864785
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        oracle-RZ   ONLINE
          raidz1-0  ONLINE
            c8t1d0  ONLINE
            c8t2d0  ONLINE
            c8t3d0  ONLINE
          raidz1-1  ONLINE
            c8t4d0  ONLINE
            c8t5d0  ONLINE
            c8t6d0  ONLINE
root@Unixarena-SOL11:~#
3. Let's try to import it without specifying the -D option. It fails, because destroyed pools are not considered during a normal import.
root@Unixarena-SOL11:~# zpool import oracle-RZ
cannot import 'oracle-RZ': no such pool available
root@Unixarena-SOL11:~# zpool import 6355325059104864785
cannot import '6355325059104864785': no such pool available
root@Unixarena-SOL11:~#
4. Now try it with the -D option; it works. You can use either the zpool name or the zpool ID to import the pool.
root@Unixarena-SOL11:~# zpool import -D 6355325059104864785
root@Unixarena-SOL11:~# zpool list oracle-RZ
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle-RZ  11.9G   230K  11.9G   0%  1.00x  ONLINE  -
root@Unixarena-SOL11:~#
Some of the topics are missing right now; they will be available soon.