A KVM host needs to be prepared to store the guest machines and to provide them with network access. In the last article, we covered the installation of the KVM and VMM (Virtual Machine Manager) packages. Once the packages are installed, you need to create a filesystem to store the virtual machine images (/var/lib/libvirt/images is the default storage path). If you plan to move VMs from one host to another, you need a shared filesystem (NFS) or shared storage (SAN). To make the guests reachable from the external network, you must configure a bridge on the host. This article demonstrates creating the bridge and creating a storage pool to store the virtual machines and ISO images.
- Host – The hypervisor or physical server where all VMs are installed.
- VMs (Virtual Machines) or Guests – Virtual servers that are installed on top of a physical server.
- Host Operating System (Hypervisor) – RHEL 7.2
Configure the New Bridge on host (Hypervisor):
A bridge is required to give the guests direct access to the host's network.
If you see a “virbr0” interface or bridge, just ignore it; it is created automatically for the VMs' NAT network.
1. Log in to the host.
2. View the current network configuration.
[root@UA-HA ~]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.203.134  netmask 255.255.255.0  broadcast 192.168.203.255
        inet6 fe80::20c:29ff:fe2d:3fce  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2d:3f:ce  txqueuelen 1000  (Ethernet)
        RX packets 13147  bytes 1923365 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7545  bytes 784722 (766.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1616  bytes 385042 (376.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1616  bytes 385042 (376.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@UA-HA ~]#
3. Re-configure the primary interface to enable bridging. Navigate to the network configuration directory and update the “ifcfg-xxxxxx” file as follows.
[root@UA-HA ~]# cd /etc/sysconfig/network-scripts/
[root@UA-HA network-scripts]# vi ifcfg-eno16777736
[root@UA-HA network-scripts]# cat ifcfg-eno16777736
HWADDR="00:0C:29:2D:3F:CE"
TYPE="Ethernet"
ONBOOT="yes"
BRIDGE=br0
[root@UA-HA network-scripts]#
4. Create the bridge configuration file like the following.
[root@UA-HA ~]# cd /etc/sysconfig/network-scripts/
[root@UA-HA network-scripts]# cat ifcfg-br0
TYPE="Bridge"
DEVICE=br0
BOOTPROTO="dhcp"
ONBOOT="yes"
DELAY=0
STP=0
[root@UA-HA network-scripts]#
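If the host uses a static address instead of DHCP, the bridge file carries the IP settings itself. A sketch reusing the address from step 2 — the GATEWAY value here is an assumption and must be replaced with your network's real gateway:

```
TYPE="Bridge"
DEVICE=br0
BOOTPROTO="none"
IPADDR=192.168.203.134
NETMASK=255.255.255.0
GATEWAY=192.168.203.2
ONBOOT="yes"
DELAY=0
STP=0
```

Note that the IP settings move to the bridge file; the physical interface file keeps only HWADDR, TYPE, ONBOOT and BRIDGE.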
5. Update the “sysctl.conf” file to enable the IP forwarding.
[root@UA-HA ~]# grep net /etc/sysctl.conf
net.ipv4.ip_forward=1
[root@UA-HA ~]#
6. Run the sysctl command to activate the IP forwarding instantly.
[root@UA-HA ~]# sysctl -p
net.ipv4.ip_forward = 1
[root@UA-HA ~]#
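Steps 5 and 6 can be combined into a small idempotent sketch. The SYSCTL_CONF variable is an illustration device: it defaults to a scratch file so the sketch is safe to run as-is; point it at /etc/sysctl.conf (as root) and follow up with "sysctl -p" to apply the change for real.

```shell
# Add net.ipv4.ip_forward=1 to a sysctl config file only if it is
# not already there, so the script can be re-run safely.
# SYSCTL_CONF defaults to a throwaway temp file for illustration;
# set SYSCTL_CONF=/etc/sysctl.conf (as root) for real use.
SYSCTL_CONF="${SYSCTL_CONF:-$(mktemp)}"
grep -q '^net.ipv4.ip_forward=1$' "$SYSCTL_CONF" 2>/dev/null \
    || echo 'net.ipv4.ip_forward=1' >> "$SYSCTL_CONF"
# Show how many matching lines the file now holds (exactly one).
grep -c '^net.ipv4.ip_forward=1$' "$SYSCTL_CONF"
```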
7. Restart the network services to activate the bridge configuration.
[root@UA-HA ~]# systemctl restart network
[root@UA-HA ~]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: active (exited) since Mon 2015-12-14 06:26:08 EST; 11s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 38831 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
  Process: 39021 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Dec 14 06:26:07 UA-HA systemd[1]: Starting LSB: Bring up/down networking...
Dec 14 06:26:08 UA-HA network[39021]: Bringing up loopback interface:  Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: [  OK  ]
Dec 14 06:26:08 UA-HA network[39021]: Bringing up interface eno16777736:  [  OK  ]
Dec 14 06:26:08 UA-HA network[39021]: Bringing up interface br0:  [  OK  ]
Dec 14 06:26:08 UA-HA systemd[1]: Started LSB: Bring up/down networking.
[root@UA-HA ~]#
8. Verify the network configuration again.
[root@UA-HA ~]# ifconfig -a
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.203.134  netmask 255.255.255.0  broadcast 192.168.203.255
        inet6 fe80::20c:29ff:fe2d:3fce  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2d:3f:ce  txqueuelen 0  (Ethernet)
        RX packets 104  bytes 8568 (8.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 123  bytes 12778 (12.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:2d:3f:ce  txqueuelen 1000  (Ethernet)
        RX packets 13902  bytes 1985960 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8161  bytes 841989 (822.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1828  bytes 435567 (425.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1828  bytes 435567 (425.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@UA-HA ~]#

Notice that the IP address has moved from eno16777736 to br0.
Looks good.
9. Verify the bridge information.
[root@UA-HA ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c292d3fce       no              eno16777736
[root@UA-HA ~]#
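On newer distributions the brctl command (from bridge-utils) is deprecated; assuming iproute2 is installed, as it is on RHEL 7, the same information is available like this:

```shell
# List bridge interfaces defined on this host.
ip link show type bridge
# List the ports attached to each bridge (bridge(8) from iproute2).
bridge link show
```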
We have successfully created the bridge to provide network access to the guests.
Configure the Storage Pool:
There is no requirement to keep the guests on a shared filesystem. However, if you do, you can easily migrate VMs from one host to another. Recent KVM versions support live VM migration (similar to vMotion in VMware). The default storage pool path is /var/lib/libvirt/images.
In this tutorial, I am going to use NFS as the shared filesystem.
1. My NFS server IP is 192.168.203.1. Mount the NFS share on the mountpoint /var/lib/libvirt/images.
[root@UA-HA ~]# df -h /var/lib/libvirt/images
Filesystem            Size  Used Avail Use% Mounted on
192.168.203.1:/D/NFS  149G  121G   29G  82% /var/lib/libvirt/images
[root@UA-HA ~]#
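To make the mount persistent across reboots, an /etc/fstab entry like the following can be used (the export path is the one shown in the df output above; _netdev delays the mount until the network is up):

```
192.168.203.1:/D/NFS  /var/lib/libvirt/images  nfs  defaults,_netdev  0 0
```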
2. List the storage pool.
[root@UA-HA ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------

[root@UA-HA ~]#
3. Define and build the new storage pool with the name “default”. Note that a pool must be defined before “virsh pool-build” can operate on it; running pool-build against an undefined pool fails with “Storage pool not found”.
[root@UA-HA ~]# virsh pool-define-as default dir --target /var/lib/libvirt/images
Pool default defined
[root@UA-HA ~]# virsh pool-build default
Pool default built
[root@UA-HA ~]#
4. Start the storage pool and enable autostart.
[root@UA-HA ~]# virsh pool-start default
Pool default started
[root@UA-HA ~]# virsh pool-autostart default
Pool default marked as autostarted
[root@UA-HA ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes

[root@UA-HA ~]#
5. Check the storage pool info.
[root@UA-HA ~]# virsh pool-info default
Name:           default
UUID:           3599dd8a-edef-4c00-9ff5-6d880f1ecb8b
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       148.46 GiB
Allocation:     120.35 GiB
Available:      28.11 GiB
[root@UA-HA ~]#

The storage pool information matches the NFS mount; the pool simply reports the available disk space of "/var/lib/libvirt/images".

[root@UA-HA ~]# df -h /var/lib/libvirt/images
Filesystem            Size  Used Avail Use% Mounted on
192.168.203.1:/D/NFS  149G  121G   29G  82% /var/lib/libvirt/images
[root@UA-HA ~]#
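For reference, a directory-backed pool like this can also be described declaratively in libvirt XML rather than through virsh arguments. A minimal sketch — the target path is the NFS mountpoint used above, and "default-pool.xml" is a hypothetical filename:

```
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
```

Such a file is loaded with "virsh pool-define default-pool.xml" before building and starting the pool.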
Configure X11 forwarding:
Enable X11 forwarding. This is required to open GUI tools over an SSH session.
[root@UA-HA ~]# cat /etc/ssh/sshd_config | grep X11Forwarding
X11Forwarding yes
[root@UA-HA ~]#
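With X11 forwarding enabled (restart the sshd service after editing sshd_config), GUI tools such as virt-manager can be launched over SSH from a client running an X server. A hypothetical session:

```
$ ssh -X root@UA-HA
[root@UA-HA ~]# virt-manager &
```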
We have now prepared the host to create new virtual machines. In the next article, we will see how to create a new guest using the CLI.
Share it ! Comment it !! Be Sociable !!!
Jitendra says
Hi,
When we tried to create the storage pool, we got the error message below. We have mounted the NFS share on /var/lib/libvirt/images on both nodes, and both nodes have the KVM packages installed. Please suggest.
[root@kvm1 images]# virsh pool-build defaultt
error: failed to get pool 'defaultt'
error: Storage pool not found: no storage pool with matching name 'defaultt'
[root@kvm1 images]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 26G 7.0G 19G 28% /
devtmpfs 1.2G 0 1.2G 0% /dev
tmpfs 1.2G 80K 1.2G 1% /dev/shm
tmpfs 1.2G 8.7M 1.2G 1% /run
tmpfs 1.2G 0 1.2G 0% /sys/fs/cgroup
/dev/sda1 597M 123M 475M 21% /boot
192.168.56.4:/nfs 48G 180M 45G 1% /var/lib/libvirt/images
[root@kvm1 images]# date;uptime
Fri Aug 12 17:14:04 IST 2016
17:14:04 up 17 min, 2 users, load average: 0.01, 0.08, 0.18
[root@kvm1 images]#
================
ON another node
/dev/sda1 797M 124M 674M 16% /boot
192.168.56.4:/nfs 48G 180M 45G 1% /var/lib/libvirt/images
[root@kvm2 ~]# virsh pool-list
Name State Autostart
-------------------------------------------
[root@kvm2 ~]# virsh pool-build default
error: failed to get pool 'default'
error: Storage pool not found: no storage pool with matching name 'default'
[root@kvm2 ~]#