Unlike ZFS and VxVM snapshots, LVM snapshots require us to specify the snapshot size at creation time. For large database volumes, a 500MB snapshot would normally be sufficient to hold all the changes made on that volume.
Here we will walk through some of the LVM snapshot operations.
Database backup using LVM snapshots:
1. List the logical volume.
[root@mylinz ~]# lvs -a -o +devices |grep vol1
  vol1   uavg   -wi-ao 100.00m   /dev/sdd1(13)
[root@mylinz ~]#
[root@mylinz vol1]# df -h .
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/uavg-vol1   97M   22M   71M  24% /vol1
[root@mylinz vol1]#
2. Let me take the snapshot with a size of 50MB, which means the snapshot can hold up to 50MB of changes to the volume.
[root@mylinz vol1]# lvcreate -L 50M -s -n vol1backup /dev/uavg/vol1
  Rounding up size to full physical extent 52.00 MiB
  Logical volume "vol1backup" created
[root@mylinz vol1]#
3. Check the volume and snapshot volume details.
[root@mylinz vol1]# lvs -a -o +devices |egrep "LV|vol1"
  LV         VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  vol1       uavg owi-ao 100.00m                                       /dev/sdd1(13)
  vol1backup uavg swi-a-  52.00m vol1     0.02                         /dev/sdd1(0)
[root@mylinz vol1]#
4. Mount the snapshot volume.
[root@mylinz vol1]# mount -t ext4 /dev/uavg/vol1backup /vol1-bck
[root@mylinz vol1]# df -h /vol1-bck
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/uavg-vol1backup   97M   22M   71M  24% /vol1-bck
[root@mylinz vol1]#
5. Perform the backup of /vol1-bck.
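For example, something like the command below could be used. tar and the /backup target directory are only assumptions for illustration; use whatever backup tool you prefer.

tar -czf /backup/vol1-backup.tar.gz -C /vol1-bck .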
6. Un-mount the snapshot volume.
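For example:

umount /vol1-bck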
7. Remove the snapshot logical volume using the lvremove command.
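For example (lvremove asks for confirmation; the -f option skips the prompt):

lvremove /dev/uavg/vol1backup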
This cycle needs to be performed whenever you want to take a backup of the database. You can also use LVM snapshots for volumes other than database volumes; it's up to you.
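If you want to automate the cycle, the steps above can be wrapped in a small shell script. The following is only a minimal sketch using the names from this post (VG uavg, LV vol1, ext4 filesystem, mount point /vol1-bck); the tar command and the /backup target directory are assumptions, so replace them with your own backup tool and paths.

#!/bin/bash
# Minimal sketch of the snapshot backup cycle described above.
# Assumed names: VG "uavg", LV "vol1", ext4, mount point /vol1-bck, backups under /backup.
set -e

lvcreate -L 50M -s -n vol1backup /dev/uavg/vol1           # take the snapshot
mkdir -p /vol1-bck /backup
mount -t ext4 /dev/uavg/vol1backup /vol1-bck              # mount the snapshot
tar -czf /backup/vol1-$(date +%F).tar.gz -C /vol1-bck .   # back up the snapshot contents
umount /vol1-bck                                          # un-mount the snapshot
lvremove -f /dev/uavg/vol1backup                          # remove the snapshot volume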
Checking the integrity of LVM snapshots:
If you don’t trust LVM snapshots yet, here are a few small tests to check the integrity of LVM.
1. Copy some new files to the /vol1 volume and check the volume usage.
[root@mylinz vol1]# cp -R /var/* .
^C
[root@mylinz vol1]#
[root@mylinz ~]# df -h /vol1
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/uavg-vol1   97M   59M   34M  64% /vol1
[root@mylinz ~]#
2. Now check the mounted snapshot volume usage. You can see that it still reflects the old usage, which means the changes to the origin volume are not visible on the snapshot volume.
[root@mylinz ~]# df -h /vol1-bck/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/uavg-vol1backup   97M   22M   71M  24% /vol1-bck
[root@mylinz ~]#
The questions below may arise in your mind.
* How can we track the snapshot volume usage? Use the lvs command to get those details.
Here you can see that the snapshot volume has used 74.68% of its 52MB.
[root@mylinz vol1]# lvs -a -o +devices |egrep "LV|vol1"
  LV         VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  vol1       uavg owi-ao 100.00m                                       /dev/sdd1(13)
  vol1backup uavg swi-a-  52.00m vol1    74.68                         /dev/sdd1(0)
[root@mylinz vol1]#
* What will happen if the snapshot volume is fully used?
You will get I/O errors on the snapshot volume, and the snapshot data is lost. So create the snapshot volume with enough size to avoid this kind of situation, and if possible monitor the snapshot volume usage.
[root@mylinz ~]# lvs -a -o +devices |egrep "LV|vol1"
  /dev/uavg/vol1backup: read failed after 0 of 1024 at 104792064: Input/output error
  /dev/uavg/vol1backup: read failed after 0 of 1024 at 104849408: Input/output error
  /dev/uavg/vol1backup: read failed after 0 of 1024 at 0: Input/output error
  /dev/uavg/vol1backup: read failed after 0 of 1024 at 4096: Input/output error
  /dev/uavg/vol1backup: read failed after 0 of 2048 at 0: Input/output error
  LV         VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  vol1       uavg owi-ao 100.00m                                       /dev/sdd1(13)
  vol1backup uavg Swi-Io  52.00m vol1   100.00                         /dev/sdd1(0)
[root@mylinz ~]#
[root@mylinz ~]# df -h /vol1-bck
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/uavg-vol1backup   97M   22M   71M  24% /vol1-bck
[root@mylinz ~]# ls -lrt /vol1-bck
ls: reading directory /vol1-bck: Input/output error
total 0
[root@mylinz ~]#
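To monitor the snapshot usage before it reaches 100%, a small check around lvs can be scheduled from cron. The snippet below is only a rough sketch; the snapshot name uavg/vol1backup and the 80% threshold are assumptions, so adjust them for your environment.

#!/bin/bash
# Rough sketch: warn when the snapshot usage (Snap%) crosses a threshold.
# Assumptions: the snapshot is uavg/vol1backup and 80% is an arbitrary limit.
THRESHOLD=80
USAGE=$(lvs --noheadings -o snap_percent uavg/vol1backup | tr -d ' ')
# Strip the decimal part (e.g. 74.68 -> 74) before the integer comparison.
if [ "${USAGE%.*}" -ge "$THRESHOLD" ]; then
    echo "WARNING: snapshot uavg/vol1backup is ${USAGE}% full"
fi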
* How to use an LVM snapshot for volume restore? You can restore the logical volume using an existing snapshot.
1. Check the mount point and note some reference data for the test.
[root@mylinz ~]# df -h /vol1
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/uavg-vol1   97M   85M  7.3M  93% /vol1
[root@mylinz ~]# ls -lrt /vol1 |tail
drwxr-xr-x.  3 root root 1024 Aug 23 23:32 abrt
drwxr-xr-x.  2 root root 3072 Aug 23 23:33 bin
drwxr-xr-x.  2 root root 1024 Aug 23 23:44 account
drwxr-xr-x.  2 root root 1024 Aug 23 23:59 games
drwxr-xr-x.  3 root root 1024 Aug 23 23:59 empty
drwxr-xr-x.  3 root root 1024 Aug 23 23:59 db
drwxr-xr-x.  2 root root 1024 Aug 23 23:59 cvs
drwxr-xr-x.  2 root root 1024 Aug 23 23:59 crash
drwxr-xr-x. 15 root root 1024 Aug 23 23:59 cache
drwxr-xr-x. 40 root root 1024 Aug 24 00:00 lib
2. List the data volume details.
[root@mylinz ~]# lvs |grep vol1
  vol1 uavg owi-ao 100.00m
[root@mylinz ~]#
3. Take the volume snapshot.
[root@mylinz vol1]# lvcreate -L 50M -s -n vol1backup /dev/uavg/vol1
  Rounding up size to full physical extent 52.00 MiB
  Logical volume "vol1backup" created
[root@mylinz vol1]#
4. Now let me remove some of the files from /vol1. Do not remove more than 50MB of data, because our snapshot size is just 50MB.
[root@mylinz vol1]# rm -rf crash account lib
[root@mylinz vol1]#
[root@mylinz vol1]# lvs
  LV         VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  vol1       uavg owi-ao 100.00m
  vol1backup uavg swi-a-  52.00m vol1     4.59
5. Let me merge the snapshot back into the original volume to restore the deleted contents.
[root@mylinz vol1]# lvconvert --merge /dev/uavg/vol1backup
  Can't merge over open origin volume
  Can't merge when snapshot is open
  Merging of snapshot vol1backup will start next activation.
[root@mylinz vol1]# umount /vol1
umount: /vol1: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
[root@mylinz vol1]# cd
[root@mylinz ~]# umount /vol1
[root@mylinz ~]#
I was getting errors because the volume was still mounted. Let me try again after un-mounting it.
[root@mylinz ~]# lvconvert --merge /dev/uavg/vol1backup
  Can't merge over open origin volume
  Merging of snapshot vol1backup will start next activation.
[root@mylinz ~]# lvchange -an /dev/uavg/vol1
[root@mylinz ~]# lvconvert --merge /dev/uavg/vol1backup
  Snapshot vol1backup is already merging
[root@mylinz ~]#
[root@mylinz ~]# lvs -a -o +devices
  LV           VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  vol1         uavg Owi--- 100.00m                                       /dev/sdd1(13)
  [vol1backup] uavg Swi---  52.00m vol1                                  /dev/sdd1(0)
[root@mylinz ~]#
6. Activate the volume and mount it.
[root@mylinz ~]# lvchange -ay /dev/uavg/vol1
[root@mylinz ~]# mount -t ext4 /dev/uavg/vol1 /vol1
[root@mylinz ~]#
7. Check the removed directories. They should be restored, and the snapshot volume should disappear after merging into the origin volume.
[root@mylinz ~]# cd /vol1
[root@mylinz vol1]# ls -lrt |egrep "account|lib|crash"
drwxr-xr-x. 2 root root 1024 Aug 24 00:54 crash
drwxr-xr-x. 2 root root 1024 Aug 24 00:54 account
drwxr-xr-x. 2 root root 1024 Aug 24 00:55 lib
[root@mylinz vol1]# lvs
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  vol1 uavg -wi-ao 100.00m
[root@mylinz vol1]# lvs -a -o +devices
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  vol1 uavg -wi-ao 100.00m                                       /dev/sdd1(13)
[root@mylinz vol1]#
We have successfully restored the logical volume from the snapshot.
* How to destroy the snapshot volume?
Destroy the snapshot using the “lvremove” command. It can be removed just like any other logical volume.
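Using the volume names from this post, it would look like the command below (lvremove prompts for confirmation; the -f option skips the prompt):

lvremove -f /dev/uavg/vol1backup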
If you encounter any errors on RHEL 6, please read the article below.
https://bugzilla.redhat.com/show_bug.cgi?id=761267
Hope this post will be helpful for Linux Administrators.
Thank you for visiting UnixArena.
joby mathai says
Hi,
I have a doubt regarding snapshots. Suppose I mount the snapshot volume and work on it; if it gets full, will it affect the original volume? In my experience, I get errors on the original volume once the snapshot gets full. What could be the reason? Are there any methods to monitor snapshot usage and disable it before it gets full?