This article demonstrates how to configure Cinder volume backups using Swift as the backend storage. Swift is the object storage service within the OpenStack project; it is highly available, distributed, and eventually consistent. In the previous article, we saw how to configure the Glance image service to use Swift as its backend storage. In a similar way, we are now going to configure Swift as the backup store for Cinder volumes. This will help you recover Cinder volumes, or volume-backed instances, quickly if anything happens to the original volume.
[box type=”info” align=”” class=”” width=””]OpenStack Swift uses commodity hardware with a bunch of locally attached disks to provide an object storage solution with high availability and an efficient data retrieval mechanism. It is one of the cheapest ways to store static files.[/box]
Note: You can't boot an instance directly from Swift storage.
Environment:
- Operating System – Ubuntu 14.04 LTS
- OpenStack Branch – Juno
- Controller Node name – OSCTRL-UA (192.168.203.130)
- Storage Node name – OSSTG-UA (192.168.203.133)
- Cinder services – already configured, as shown below.
root@OSCTRL-UA:~# cinder service-list +------------------+-----------+------+----------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+-----------+------+----------+-------+----------------------------+-----------------+ | cinder-scheduler | OSCTRL-UA | nova | enabled | up | 2015-10-24T03:30:48.000000 | None | | cinder-volume | OSSTG-UA | nova | enabled | up | 2015-10-24T03:30:45.000000 | None | +------------------+-----------+------+----------+-------+----------------------------+-----------------+ root@OSCTRL-UA:~#
Assumption:
The environment has already been configured with the Swift object storage service, the Cinder block storage service, and the other core OpenStack services such as Nova, Neutron, and Glance.
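If you want to confirm those prerequisites before you start, a quick sanity check from the controller node (a minimal sketch, assuming the admin credentials are already sourced) could look like this:

# Swift account reachable through the proxy
swift stat
# Core Cinder services registered and up
cinder service-list
# Nova, Glance and Neutron responding
nova service-list
glance image-list
neutron agent-list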
Configure the Cinder Backup service:
1. Log in to the storage node (OSSTG-UA).
2. Install the cinder-backup package.
root@OSSTG-UA:~# apt-get install cinder-backup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  cinder-backup
0 upgraded, 1 newly installed, 0 to remove and 37 not upgraded.
Need to get 3,270 B of archives.
After this operation, 53.2 kB of additional disk space will be used.
Get:1 http://ubuntu-cloud.archive.canonical.com/ubuntu/ trusty-updates/juno/main cinder-backup all 1:2014.2.3-0ubuntu1.1~cloud0 [3,270 B]
Fetched 3,270 B in 2s (1,191 B/s)
Selecting previously unselected package cinder-backup.
(Reading database ... 94636 files and directories currently installed.)
Preparing to unpack .../cinder-backup_1%3a2014.2.3-0ubuntu1.1~cloud0_all.deb ...
Unpacking cinder-backup (1:2014.2.3-0ubuntu1.1~cloud0) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up cinder-backup (1:2014.2.3-0ubuntu1.1~cloud0) ...
cinder-backup start/running, process 62375
Processing triggers for ureadahead (0.100.0-16) ...
root@OSSTG-UA:~#
3. Edit the /etc/cinder/cinder.conf file and add the following line to the [DEFAULT] section.
[DEFAULT]
............
# Use Swift as the backend for Cinder volume backups
backup_driver = cinder.backup.drivers.swift
4. Restart the Cinder services on the storage node.
root@OSSTG-UA:~# service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running, process 62564
root@OSSTG-UA:~#
root@OSSTG-UA:~# service tgt restart
tgt stop/waiting
tgt start/running, process 62596
root@OSSTG-UA:~#
root@OSSTG-UA:~# service cinder-backup status
cinder-backup start/running, process 62375
root@OSSTG-UA:~#
5. Log in to the controller node and add the same line to the [DEFAULT] section of /etc/cinder/cinder.conf.
[DEFAULT]
............
# Use Swift as the backend for Cinder volume backups
backup_driver = cinder.backup.drivers.swift
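The Swift backup driver also exposes a few optional settings in the same [DEFAULT] section. The snippet below is only a sketch of commonly tuned options with what I believe are their default values; you can normally leave them out entirely:

[DEFAULT]
............
backup_driver = cinder.backup.drivers.swift
# Swift container that holds the backup objects (default: volumebackups)
backup_swift_container = volumebackups
# Size of each backup chunk/object in bytes (default: 52428800, i.e. ~50 MB)
backup_swift_object_size = 52428800
# Compression applied to the chunks: none, zlib or bz2
backup_compression_algorithm = zlib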
6. Restart the cinder-scheduler service on the controller node.
root@OSCTRL-UA:~# service cinder-scheduler restart
cinder-scheduler stop/waiting
cinder-scheduler start/running, process 7179
root@OSCTRL-UA:~#
7. Source the admin credentials.
root@OSCTRL-UA:~# cat admin.rc
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://OSCTRL-UA:35357/v2.0
root@OSCTRL-UA:~# source admin.rc
root@OSCTRL-UA:~#
8. Verify the Cinder services. You should now see the cinder-backup service on the storage node.
root@OSCTRL-UA:~# cinder service-list +------------------+-----------+------+----------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+-----------+------+----------+-------+----------------------------+-----------------+ | cinder-backup | OSSTG-UA | nova | enabled | up | 2015-10-24T03:50:07.000000 | None | | cinder-scheduler | OSCTRL-UA | nova | enabled | up | 2015-10-24T03:50:12.000000 | None | | cinder-volume | OSSTG-UA | nova | enabled | up | 2015-10-24T03:50:06.000000 | None | +------------------+-----------+------+----------+-------+----------------------------+-----------------+ root@OSCTRL-UA:~#
In the above output, we can see that the cinder-backup service is up and running.
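If cinder-backup does not come up, check the backup service log on the storage node. On a standard Ubuntu package installation the log is normally written to the path shown below (adjust if your log location differs):

tail -f /var/log/cinder/cinder-backup.log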
Test the Cinder Volume Backup (Non-Root Volume):
In our environment, we have a tenant called "lingesh". Note that in Juno a Cinder volume backup can't be performed on the fly while the volume is attached.
1. Log in to the controller node and source the "lingesh" tenant credentials.
root@OSCTRL-UA:~# cat lingesh.rc
export OS_USERNAME=lingesh
export OS_PASSWORD=ling123
export OS_TENANT_NAME=lingesh
export OS_AUTH_URL=http://OSCTRL-UA:35357/v2.0
root@OSCTRL-UA:~# source lingesh.rc
root@OSCTRL-UA:~#
2. List the available Cinder volumes.
root@OSCTRL-UA:~# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | | fc6dbba6-f8d8-4082-8f35-53bba6853982 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
3. Try to take a backup of one of the volumes.
root@OSCTRL-UA:~# cinder backup-create fc6dbba6-f8d8-4082-8f35-53bba6853982
ERROR: Invalid volume: Volume to be backed up must be available (HTTP 400) (Request-ID: req-ae9a0112-a4a8-4280-8ffb-e4993dbee241)
root@OSCTRL-UA:~#
The Cinder backup failed with the error "ERROR: Invalid volume: Volume to be backed up must be available (HTTP 400)". It failed because the volume is in the "in-use" state. In the OpenStack Juno release you can't take a volume backup while the volume is attached to an instance, so you have to detach the Cinder volume from the instance before taking the backup.
4. List the instances and see where the volume is attached.
root@OSCTRL-UA:~# nova list +--------------------------------------+------+--------+------------+-------------+--------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+--------+------------+-------------+--------------------------+ | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | tets | ACTIVE | - | Running | lingesh-net=192.168.4.11 | +--------------------------------------+------+--------+------------+-------------+--------------------------+ root@OSCTRL-UA:~# root@OSCTRL-UA:~# nova show 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 |grep volume | image | Attempt to boot from volume - no image supplied | | os-extended-volumes:volumes_attached | [{"id": "9070a8b9-471d-47cd-8722-9327f3b40051"}, {"id": "fc6dbba6-f8d8-4082-8f35-53bba6853982"}] | root@OSCTRL-UA:~#
5. Stop the instance before detaching the volume. The volume can be detached on the fly as well, but that may lead to data corruption if it is still mounted inside the instance.
root@OSCTRL-UA:~# nova stop tets root@OSCTRL-UA:~# nova list +--------------------------------------+------+---------+------------+-------------+--------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+---------+------------+-------------+--------------------------+ | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | tets | SHUTOFF | - | Shutdown | lingesh-net=192.168.4.11 | +--------------------------------------+------+---------+------------+-------------+--------------------------+ root@OSCTRL-UA:~#
6. You must know which volume is the root volume before detaching anything. If you try to detach the root volume, you will get an error like the one below: "ERROR (Forbidden): Can't detach root device volume (HTTP 403)".
root@OSCTRL-UA:~# nova volume-detach tets 9070a8b9-471d-47cd-8722-9327f3b40051
ERROR (Forbidden): Can't detach root device volume (HTTP 403) (Request-ID: req-49b9f036-7a34-4ae5-b10f-441e20b512ba)
root@OSCTRL-UA:~#
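If you are not sure which volume is the root device, you can list the attachments and look for the volume mapped to /dev/vda (a quick check, assuming the default virtio device naming and the volume-attachments subcommand of the Nova client):

nova volume-attachments tets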
7. Detach the non-root volume and verify it.
root@OSCTRL-UA:~# nova volume-detach tets fc6dbba6-f8d8-4082-8f35-53bba6853982 root@OSCTRL-UA:~# root@OSCTRL-UA:~# nova show tets |grep volume | image | Attempt to boot from volume - no image supplied | | os-extended-volumes:volumes_attached | [{"id": "9070a8b9-471d-47cd-8722-9327f3b40051"}] | root@OSCTRL-UA:~#
8. Perform the cinder volume backup.
root@OSCTRL-UA:~# cinder list +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | | fc6dbba6-f8d8-4082-8f35-53bba6853982 | available | | 1 | None | true | | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~# cinder backup-create fc6dbba6-f8d8-4082-8f35-53bba6853982 +-----------+--------------------------------------+ | Property | Value | +-----------+--------------------------------------+ | id | bd708772-748a-430e-bff3-6679d22da973 | | name | None | | volume_id | fc6dbba6-f8d8-4082-8f35-53bba6853982 | +-----------+--------------------------------------+ root@OSCTRL-UA:~# root@OSCTRL-UA:~# cinder backup-list +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | ID | Volume ID | Status | Name | Size | Object Count | Container | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | bd708772-748a-430e-bff3-6679d22da973 | fc6dbba6-f8d8-4082-8f35-53bba6853982 | available | None | 1 | 22 | volumebackups | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ root@OSCTRL-UA:~#
9. Verify the backup objects using the swift command (Swift is the backing store for the Cinder volume backups).
root@OSCTRL-UA:~# swift list Lingesh-Container volumebackups root@OSCTRL-UA:~# swift list volumebackups volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00001 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00002 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00003 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00004 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00005 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00006 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00007 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00008 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00009 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00010 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00011 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00012 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00013 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00014 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00015 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00016 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00017 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00018 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00019 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00020 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973-00021 volume_fc6dbba6-f8d8-4082-8f35-53bba6853982/20151024122813/az_nova_backup_bd708772-748a-430e-bff3-6679d22da973_metadata root@OSCTRL-UA:~#
The backup driver splits the volume into fixed-size chunks before uploading it to Swift, which speeds up backup and restore. With the default chunk size of about 50 MB (backup_swift_object_size), this 1 GB volume ends up as 21 data objects plus one metadata object, which is why 22 objects are listed.
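By default the backup has no name and lands in the "volumebackups" container. If you prefer a descriptive name or a dedicated container, backup-create accepts optional arguments for both (the flags below assume the Juno python-cinderclient; the name and container are just examples):

cinder backup-create --display-name lingesh-vol-backup --container lingesh-backups fc6dbba6-f8d8-4082-8f35-53bba6853982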
Test the Cinder Volume Restore (Non-Root Volume):
1. Log in to the controller node and source the tenant credentials.
root@OSCTRL-UA:~# cat lingesh.rc
export OS_USERNAME=lingesh
export OS_PASSWORD=ling123
export OS_TENANT_NAME=lingesh
export OS_AUTH_URL=http://OSCTRL-UA:35357/v2.0
root@OSCTRL-UA:~# source lingesh.rc
root@OSCTRL-UA:~#
2. List the volume backups and the Cinder volumes.
root@OSCTRL-UA:~# cinder backup-list +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | ID | Volume ID | Status | Name | Size | Object Count | Container | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | bd708772-748a-430e-bff3-6679d22da973 | fc6dbba6-f8d8-4082-8f35-53bba6853982 | available | None | 1 | 22 | volumebackups | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ root@OSCTRL-UA:~# cinder list +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | | fc6dbba6-f8d8-4082-8f35-53bba6853982 | available | | 1 | None | true | | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
3. Delete the volume to test the restore.
root@OSCTRL-UA:~# cinder delete fc6dbba6-f8d8-4082-8f35-53bba6853982 root@OSCTRL-UA:~# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
4. Let's restore the deleted volume.
root@OSCTRL-UA:~# cinder backup-list +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | ID | Volume ID | Status | Name | Size | Object Count | Container | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | bd708772-748a-430e-bff3-6679d22da973 | fc6dbba6-f8d8-4082-8f35-53bba6853982 | available | None | 1 | 22 | volumebackups | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ root@OSCTRL-UA:~# cinder backup-restore bd708772-748a-430e-bff3-6679d22da973 root@OSCTRL-UA:~# cinder list +--------------------------------------+------------------+-----------------------------------------------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+------------------+-----------------------------------------------------+------+-------------+----------+--------------------------------------+ | 59b7d2ec-79b4-4d99-accf-c4906e769bf5 | restoring-backup | restore_backup_bd708772-748a-430e-bff3-6679d22da973 | 1 | None | false | | | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+------------------+-----------------------------------------------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
5. Verify the volume status.
root@OSCTRL-UA:~# cinder list +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | 59b7d2ec-79b4-4d99-accf-c4906e769bf5 | available | | 1 | None | true | | | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
Here we can see that the volume has been restored with a new ID, "59b7d2ec-79b4-4d99-accf-c4906e769bf5".
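The restore above created a brand-new volume. If you would rather restore the data into an existing volume (for example, one you pre-created with a meaningful name), backup-restore also accepts a target volume; a sketch assuming the Juno client syntax, with the volume ID as a placeholder:

cinder backup-restore --volume-id <existing-volume-id> bd708772-748a-430e-bff3-6679d22da973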
6. Attach the volume to the instance and verify the contents.
root@OSCTRL-UA:~# cinder list +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ | 59b7d2ec-79b4-4d99-accf-c4906e769bf5 | available | | 1 | None | true | | | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~# nova list +--------------------------------------+------+---------+------------+-------------+--------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+---------+------------+-------------+--------------------------+ | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | tets | SHUTOFF | - | Shutdown | lingesh-net=192.168.4.11 | +--------------------------------------+------+---------+------------+-------------+--------------------------+ root@OSCTRL-UA:~# nova volume-attach tets 59b7d2ec-79b4-4d99-accf-c4906e769bf5 +----------+--------------------------------------+ | Property | Value | +----------+--------------------------------------+ | device | /dev/vdb | | id | 59b7d2ec-79b4-4d99-accf-c4906e769bf5 | | serverId | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | | volumeId | 59b7d2ec-79b4-4d99-accf-c4906e769bf5 | +----------+--------------------------------------+ root@OSCTRL-UA:~# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | 59b7d2ec-79b4-4d99-accf-c4906e769bf5 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
We have successfully recovered the deleted volume from the Swift backup.
Back up the Instance's Root Volume
In OpenStack Juno there is no way to back up a volume while it is attached to an instance. So if you want to back up a root volume using Cinder backup, you need to follow the steps below. This procedure is only for testing purposes and is not a production-grade solution.
Instance root volume -> Take a snapshot -> Create a temporary volume from the snapshot -> Back up the temporary volume -> Remove the temporary volume -> Delete the snapshot of the instance's root volume.
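Expressed as commands, the whole workflow looks roughly like the sketch below (the IDs are placeholders; each step is shown in detail in the following sections):

# 1. Snapshot the in-use root volume (force is required while it is attached)
cinder snapshot-create <root-volume-id> --force True
# 2. Create a temporary volume of the same size from the snapshot
cinder create --snapshot-id <snapshot-id> --display-name tmp_backup_vol 1
# 3. Back up the temporary volume to Swift
cinder backup-create <temporary-volume-id>
# 4. Clean up the temporary volume and the snapshot
cinder delete <temporary-volume-id>
cinder snapshot-delete <snapshot-id>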
1. Log in to the controller node and source the tenant credentials.
root@OSCTRL-UA:~# cat lingesh.rc
export OS_USERNAME=lingesh
export OS_PASSWORD=ling123
export OS_TENANT_NAME=lingesh
export OS_AUTH_URL=http://OSCTRL-UA:35357/v2.0
root@OSCTRL-UA:~# source lingesh.rc
root@OSCTRL-UA:~#
2. List the instance and volume information.
root@OSCTRL-UA:~# nova list +--------------------------------------+------+--------+------------+-------------+--------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+--------+------------+-------------+--------------------------+ | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | tets | ACTIVE | - | Running | lingesh-net=192.168.4.11 | +--------------------------------------+------+--------+------------+-------------+--------------------------+ root@OSCTRL-UA:~# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
3. Create the volume snapshot with the force option.
root@OSCTRL-UA:~# cinder snapshot-create 9070a8b9-471d-47cd-8722-9327f3b40051 --force True +---------------------+--------------------------------------+ | Property | Value | +---------------------+--------------------------------------+ | created_at | 2015-10-24T14:11:34.869513 | | display_description | None | | display_name | None | | id | 86f272ef-de7d-4fa7-b483-ec2fd139ab5e | | metadata | {} | | size | 1 | | status | creating | | volume_id | 9070a8b9-471d-47cd-8722-9327f3b40051 | +---------------------+--------------------------------------+ root@OSCTRL-UA:~#
You will get an error like the one below if you don't use the force option: "ERROR: Invalid volume: must be available (HTTP 400)".
root@OSCTRL-UA:~# cinder snapshot-create 9070a8b9-471d-47cd-8722-9327f3b40051
ERROR: Invalid volume: must be available (HTTP 400) (Request-ID: req-2e19610a-76b6-49ab-9603-1c6e9c044703)
root@OSCTRL-UA:~#
4. Create a temporary volume from the snapshot (for backup purposes).
root@OSCTRL-UA:~# cinder snapshot-list +--------------------------------------+--------------------------------------+-----------+--------------+------+ | ID | Volume ID | Status | Display Name | Size | +--------------------------------------+--------------------------------------+-----------+--------------+------+ | 86f272ef-de7d-4fa7-b483-ec2fd139ab5e | 9070a8b9-471d-47cd-8722-9327f3b40051 | available | None | 1 | +--------------------------------------+--------------------------------------+-----------+--------------+------+ root@OSCTRL-UA:~# cinder create --snapshot-id 86f272ef-de7d-4fa7-b483-ec2fd139ab5e --display-name tets_backup_vol 1 +---------------------+--------------------------------------+ | Property | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | created_at | 2015-10-24T14:14:35.376439 | | display_description | None | | display_name | tets_backup_vol | | encrypted | False | | id | 20835372-3fb1-47b0-96f5-f493bd92151d | | metadata | {} | | size | 1 | | snapshot_id | 86f272ef-de7d-4fa7-b483-ec2fd139ab5e | | source_volid | None | | status | creating | | volume_type | None | +---------------------+--------------------------------------+ root@OSCTRL-UA:~#
In the above command, I created a new 1 GB volume. The new volume size should be the same as the snapshot size (see the cinder snapshot-list output).
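If you are not sure about the snapshot size, you can read it straight from the snapshot details before creating the volume:

cinder snapshot-show 86f272ef-de7d-4fa7-b483-ec2fd139ab5e | grep -w size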
5. List the Cinder volumes. You should be able to see the new volume here.
root@OSCTRL-UA:~# cinder list +--------------------------------------+-----------+-----------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+-----------------+------+-------------+----------+--------------------------------------+ | 20835372-3fb1-47b0-96f5-f493bd92151d | available | tets_backup_vol | 1 | None | true | | | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+-----------+-----------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
6. Initiate the Cinder backup for the newly created volume (which is a clone of the tets instance's root volume).
root@OSCTRL-UA:~# cinder backup-create 20835372-3fb1-47b0-96f5-f493bd92151d +-----------+--------------------------------------+ | Property | Value | +-----------+--------------------------------------+ | id | 1a194ba8-aa0c-41ff-9f73-9c23c4457230 | | name | None | | volume_id | 20835372-3fb1-47b0-96f5-f493bd92151d | +-----------+--------------------------------------+ root@OSCTRL-UA:~#
7. Delete the temporary volume that we created only for backup purposes. The snapshot can be cleaned up as well, as shown after the output below.
root@OSCTRL-UA:~# cinder list +--------------------------------------+-----------+-----------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+-----------------+------+-------------+----------+--------------------------------------+ | 20835372-3fb1-47b0-96f5-f493bd92151d | available | tets_backup_vol | 1 | None | true | | | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+-----------+-----------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~# cinder delete 20835372-3fb1-47b0-96f5-f493bd92151d root@OSCTRL-UA:~# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | 9070a8b9-471d-47cd-8722-9327f3b40051 | in-use | | 1 | None | true | 3d4a2971-4dc7-4fb4-a8db-04a7a8340391 | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ root@OSCTRL-UA:~#
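The snapshot taken in step 3 is no longer needed once the backup exists, so it can be removed as well:

cinder snapshot-delete 86f272ef-de7d-4fa7-b483-ec2fd139ab5e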
8. List the backup and verify the objects in Swift.
root@OSCTRL-UA:~# cinder backup-list +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | ID | Volume ID | Status | Name | Size | Object Count | Container | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | 1a194ba8-aa0c-41ff-9f73-9c23c4457230 | 20835372-3fb1-47b0-96f5-f493bd92151d | available | None | 1 | 22 | volumebackups | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ root@OSCTRL-UA:~# swift list Lingesh-Container volumebackups root@OSCTRL-UA:~# swift list volumebackups volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00001 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00002 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00003 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00004 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00005 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00006 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00007 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00008 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00009 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00010 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00011 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00012 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00013 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00014 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00015 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00016 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00017 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00018 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00019 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00020 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230-00021 volume_20835372-3fb1-47b0-96f5-f493bd92151d /20151024142151/az_nova_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230_metadata root@OSCTRL-UA:~#
At this point, we have successfully backed up the "tets" instance's root volume.
How to Restore the Instance's Root Volume from Backup?
We will assume that the Nova instance "tets" has been accidentally deleted along with its root volume.
1. Log in to the controller node and source the tenant credentials.
2. List the available Cinder backups.
root@OSCTRL-UA:~# cinder backup-list +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | ID | Volume ID | Status | Name | Size | Object Count | Container | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | 1a194ba8-aa0c-41ff-9f73-9c23c4457230 | 20835372-3fb1-47b0-96f5-f493bd92151d | available | None | 1 | 22 | volumebackups | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ root@OSCTRL-UA:~#
3. Initiate the volume restore.
root@OSCTRL-UA:~# cinder backup-list +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | ID | Volume ID | Status | Name | Size | Object Count | Container | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ | 1a194ba8-aa0c-41ff-9f73-9c23c4457230 | 20835372-3fb1-47b0-96f5-f493bd92151d | available | None | 1 | 22 | volumebackups | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+ root@OSCTRL-UA:~# cinder backup-restore 1a194ba8-aa0c-41ff-9f73-9c23c4457230 root@OSCTRL-UA:~# cinder list +--------------------------------------+------------------+-----------------------------------------------------+------+-------------+----------+-------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+------------------+-----------------------------------------------------+------+-------------+----------+-------------+ | 449ef348-04c7-4d1d-a6d0-27796dac9e49 | restoring-backup | restore_backup_1a194ba8-aa0c-41ff-9f73-9c23c4457230 | 1 | None | false | | +--------------------------------------+------------------+-----------------------------------------------------+------+-------------+----------+-------------+ root@OSCTRL-UA:~# cinder list +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | 449ef348-04c7-4d1d-a6d0-27796dac9e49 | available | tets_backup_vol | 1 | None | true | | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ root@OSCTRL-UA:~#
We have successfully restored the volume from the Swift backup.
4. Let’s create the instance using the restored volume.
root@OSCTRL-UA:~# nova boot --flavor 1 --block-device source=volume,id="449ef348-04c7-4d1d-a6d0-27796dac9e49",dest=volume,shutdown=preserve,bootindex=0 --nic net-id="58ee8851-06c3-40f3-91ca-b6d7cff609a5" tets +--------------------------------------+--------------------------------------------------+ | Property | Value | +--------------------------------------+--------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | 3kpFygxQDH7N | | config_drive | | | created | 2015-10-24T14:47:04Z | | flavor | m1.tiny (1) | | hostId | | | id | 18c55ca0-8031-41d5-a9d5-c2d2828c9486 | | image | Attempt to boot from volume - no image supplied | | key_name | - | | metadata | {} | | name | tets | | os-extended-volumes:volumes_attached | [{"id": "449ef348-04c7-4d1d-a6d0-27796dac9e49"}] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | abe3af30f46b446fbae35a102457890c | | updated | 2015-10-24T14:47:05Z | | user_id | 3f01d4f7aa9e477cb885334ab9c5929d | +--------------------------------------+--------------------------------------------------+ root@OSCTRL-UA:~# nova list +--------------------------------------+------+--------+------------+-------------+----------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+--------+------------+-------------+----------+ | 18c55ca0-8031-41d5-a9d5-c2d2828c9486 | tets | BUILD | spawning | NOSTATE | | +--------------------------------------+------+--------+------------+-------------+----------+ root@OSCTRL-UA:~#
5. Check the nova instance status.
root@OSCTRL-UA:~# nova list +--------------------------------------+------+--------+------------+-------------+--------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+--------+------------+-------------+--------------------------+ | 5df256ec-1529-401b-9ad5-6a16c2d710e3 | tets | ACTIVE | - | Running | lingesh-net=192.168.4.13 | +--------------------------------------+------+--------+------------+-------------+--------------------------+ root@OSCTRL-UA:~#
We have successfully recovered the Nova instance's root volume from the Swift backup storage.
OpenStack Liberty's Cinder backup supports incremental backups and forced backup creation for volumes that are in use.
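On Liberty these capabilities appear as extra flags on backup-create; a sketch of the newer syntax (it does not work on Juno) would be:

# Incremental backup on top of the latest full backup of the volume
cinder backup-create --incremental <volume-id>
# Back up a volume even while it is attached to an instance
cinder backup-create --force <volume-id>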
[box type=”shadow” align=”” class=”” width=””]Do not think that OpenStack volume backup is a painful task; a Cinder backup is similar to backing up a LUN directly at the SAN storage level. For OpenStack instance backups, you can also use any backup agent that supports your instance's OS (for example, Red Hat Linux, Ubuntu, Solaris, Windows).[/box]
Hope this article is informative to you. Share it! Be sociable!