New servers ship with built-in 10 Gbps network adapters. At the same time, servers are virtualized, and one physical server may host many virtual servers. For example, the VMware ESXi hypervisor is installed directly on the physical server and multiple VMs are created on top of it, so all the VMs share the same 10 Gbps network interface. That is why throughput between VMs and the rest of the network will not always reach 10 Gbps, even though the VM host reports a 10 Gbps network connection. In this article, we will measure the actual TCP network bandwidth between VM hosts using the iperf utility. The hosts can be Linux, Windows, or Solaris servers. iperf works in a server/client model to measure network bandwidth.
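The server/client model iperf uses can be illustrated with a short, self-contained Python sketch. This is only an illustration of the idea, not how iperf is implemented internally: one side listens and counts the bytes it receives, the other side streams data for a fixed time, and throughput is bytes received divided by elapsed time. The loopback address, ephemeral port, and helper names here are assumptions made for the example.

```python
# Illustration of iperf's server/client model: the client streams bytes
# over TCP for a fixed time; the server counts what it receives and
# computes throughput. (Loopback here; iperf runs on two real hosts.)
import socket
import threading
import time

def run_server(srv, results):
    """Accept one connection and count received bytes, like 'iperf -s'."""
    conn, _ = srv.accept()
    total = 0
    start = time.time()
    while True:
        chunk = conn.recv(65536)
        if not chunk:               # client closed the socket: test done
            break
        total += len(chunk)
    elapsed = time.time() - start
    results["mbytes_per_sec"] = total / elapsed / (1024 * 1024)
    conn.close()

# Server side: listen on an ephemeral port (iperf defaults to 5001).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

results = {}
t = threading.Thread(target=run_server, args=(srv, results))
t.start()

# Client side: stream a fixed-size buffer for 1 second (like 'iperf -c ... -t 1').
cli = socket.create_connection(("127.0.0.1", port))
payload = b"\x00" * 65536           # 64 KB send buffer, similar to iperf's -l
deadline = time.time() + 1.0
while time.time() < deadline:
    cli.sendall(payload)
cli.close()

t.join()
srv.close()
print(f"~{results['mbytes_per_sec']:.0f} MBytes/sec over loopback")
```

Over loopback this measures memory-copy speed rather than a real network, which is exactly why iperf is run between two separate hosts.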
The iperf utility can be downloaded from iperf.fr, which provides pre-compiled binaries for various server operating systems:
Iperf for Windows 2000, XP, 2003, Vista, 7, 8 and Windows 10:
- Iperf 2.0.5-3 (1421 KiB) – the latest version of Iperf 2 (2014)
Iperf for Linux x86 32-bit (i386):
- Iperf 2.0.5-2 – DEB package (53 KiB)
Iperf for Linux x86 64-bit (AMD64):
- Iperf 2.0.5-2 – DEB package (56 KiB)
Iperf for Mac OS X:
- Iperf 2.0.5 (Intel) (57 KB)
- Iperf 1.7.0 (PowerPC) (82 KB)
Iperf for Oracle Solaris:
- Iperf 2.0.4 for Solaris 10 – x86 (62 KB), SPARC (62 KB)
- Iperf 2.0.4 for Solaris 9 – x86 (61 KB), SPARC (62 KB)
- Iperf 2.0.4 for Solaris 8 – x86 (61 KB), SPARC (64 KB)
Iperf Installation on Oracle Solaris 9/10
1. Copy the Iperf package to the Solaris 10 host.
2. Log in to the Solaris 10 host, unzip the package, and install it using pkgadd.
[root@SOL10:/var/tmp]# ls -lrt
-rw-r--r--   1 root     63116 Jun 18 11:26 iperf_2.0.4_solaris10_x86.gz
[root@SOL10:/var/tmp]$ gunzip iperf_2.0.4_solaris10_x86.gz
[root@SOL10:/var/tmp]# ls -lrt
-rw-r--r--   1 root    163328 Jun 10 10:59 iperf_2.0.4_solaris10_x86
[root@SOL10:/var/tmp]$ file iperf_2.0.4_solaris10_x86
iperf_2.0.4_solaris10_x86: package datastream
[root@SOL10:/var/tmp]$ pkgadd -d /var/tmp/iperf_2.0.4_solaris10_x86

The following packages are available:
  1  SMCiperf     iperf
                  (x86) 2.0.4

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:

Processing package instance <SMCiperf> from </var/tmp/iperf_2.0.4_solaris10_x86>

iperf(x86) 2.0.4
Mark Gates, Alex Warshavsky, et al
Using </usr/local> as the package base directory.
## Processing package information.
## Processing system information.
   5 package pathnames are already properly installed.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

Installing iperf as <SMCiperf>

## Installing part 1 of 1.
/usr/local/bin/iperf
/usr/local/doc/iperf/AUTHORS
/usr/local/doc/iperf/COPYING
/usr/local/doc/iperf/ChangeLog
/usr/local/doc/iperf/INSTALL
/usr/local/doc/iperf/README
/usr/local/doc/iperf/doc/Makefile
/usr/local/doc/iperf/doc/Makefile.am
/usr/local/doc/iperf/doc/Makefile.in
/usr/local/doc/iperf/doc/dast.gif
/usr/local/doc/iperf/doc/index.html
/usr/local/doc/iperf/doc/ui_license.html
/usr/local/share/man/man1/iperf.1
[ verifying class <none> ]

Installation of <SMCiperf> was successful.
[root@SOL10:/var/tmp]$
Here are the options iperf supports:
[root@SOL10:/var/tmp]$ /usr/local/bin/iperf --help
Usage: iperf [-s|-c host] [options]
       iperf [-h|--help] [-v|--version]

Client/Server:
  -f, --format    [kmKM]   format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval  #        seconds between periodic bandwidth reports
  -l, --len       #[KM]    length of buffer to read or write (default 8 KB)
  -m, --print_mss          print TCP maximum segment size (MTU - TCP/IP header)
  -o, --output    <filename> output the report or error message to this specified file
  -p, --port      #        server port to listen on/connect to
  -u, --udp                use UDP rather than TCP
  -w, --window    #[KM]    TCP window size (socket buffer size)
  -B, --bind      <host>   bind to <host>, an interface or multicast address
  -C, --compatibility      for use with older versions does not sent extra msgs
  -M, --mss       #        set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay            set TCP no delay, disabling Nagle's Algorithm
  -V, --IPv6Version        Set the domain to IPv6

Server specific:
  -s, --server             run in server mode
  -U, --single_udp         run in single threaded UDP mode
  -D, --daemon             run the server as a daemon

Client specific:
  -b, --bandwidth #[KM]    for UDP, bandwidth to send at in bits/sec
                           (default 1 Mbit/sec, implies -u)
  -c, --client    <host>   run in client mode, connecting to <host>
  -d, --dualtest           Do a bidirectional test simultaneously
  -n, --num       #[KM]    number of bytes to transmit (instead of -t)
  -r, --tradeoff           Do a bidirectional test individually
  -t, --time      #        time in seconds to transmit for (default 10 secs)
  -F, --fileinput <name>   input the data to be transmitted from a file
  -I, --stdin              input the data to be transmitted from stdin
  -L, --listenport #       port to recieve bidirectional tests back on
  -P, --parallel  #        number of parallel client threads to run
  -T, --ttl       #        time-to-live, for multicast (default 1)
  -Z, --linux-congestion <algo>  set TCP congestion control algorithm (Linux only)

Miscellaneous:
  -x, --reportexclude [CDMSV]   exclude C(connection) D(data) M(multicast) S(settings) V(server) reports
  -y, --reportstyle C      report as a Comma-Separated Values
  -h, --help               print this message and quit
  -v, --version            print version information and quit

[KM] Indicates options that support a K or M suffix for kilo- or mega-

The TCP window size option can be set by the environment variable
TCP_WINDOW_SIZE. Most other options can be set by an environment variable
IPERF_<long option name>, such as IPERF_BANDWIDTH.

Report bugs to <iperf-users@lists.sourceforge.net>
[root@SOL10:/var/tmp]$
Iperf installation on Red Hat Enterprise Linux 5.x/6.x/7.x
The iperf RPM package is available on the RHEL DVD itself. Copy the iperf package to the Red Hat Linux server.
1. Log in to the server and install the package using the rpm command.
[root@RHEL5 tmp]# rpm -ivh iperf-2.0.4-1.el5.rf.x86_64.rpm
warning: iperf-2.0.4-1.el5.rf.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 6b8d79e6
Preparing...                ########################################### [100%]
   1:iperf                  ########################################### [100%]
[root@RHEL5 tmp]#
If your server is configured with a yum repository, you can simply use the “yum install iperf*” command instead.
How to Measure the Bandwidth Between Two VMs?
Here we are going to measure the bandwidth between the servers SOL10 (192.168.2.34) and RHEL5 (192.168.2.40).
1. Log in to the SOL10 host, where iperf is already installed. This host will act as the iperf server.
2. Run iperf in server mode and keep the session running.
[root@SOL10:/root]$ iperf -f M -p 8000 -s -m
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 0.05 MByte (default)
------------------------------------------------------------
3. Log in to the RHEL5 host and run iperf with the server's IP as the destination. This host acts as the iperf client.
[root@RHEL5 ~]# iperf -c 192.168.2.34 -p 8000 -t 60
------------------------------------------------------------
Client connecting to 192.168.2.34, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.40 port 50393 connected with 192.168.2.34 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.57 GBytes   941 Mbits/sec
[root@RHEL5 ~]#
- -t 60 – run the test for 60 seconds (the default is 10 seconds).
- -p 8000 – connect to port 8000, where the iperf server is listening (iperf's default port is 5001).
- -c 192.168.2.34 – run in client mode, connecting to the server's IP.
After the 60-second test completes, you will get results like the above. The client reports the result in Megabits per second.
4. Go back to the SOL10 console and check the results there. Because the server was started with -f M, they are shown in Megabytes per second.
[root@SOL10:/root]$ iperf -f M -p 8000 -s -m
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 0.05 MByte (default)
------------------------------------------------------------
[  4] local 192.168.2.34 port 8000 connected with 192.168.2.40 port 18263
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  6707 MBytes   112 MBytes/sec
[  4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
^C[root@SOL10:/root]$
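The client and server numbers look different (941 Mbits/sec vs 112 MBytes/sec), but they describe the same transfer: iperf's uppercase "M" byte format counts MBytes as 1024 × 1024 bytes, while Mbits are decimal (10^6 bits). A quick arithmetic check, using the values from the outputs above:

```python
# Sanity check: convert the client's 941 Mbits/sec into the units the
# server printed with '-f M' (MBytes of 1024*1024 bytes), and verify the
# total transferred over the 60-second run.
client_mbits = 941          # Mbits/sec reported by the client
duration_s = 60             # test length set with -t 60

bytes_per_sec = client_mbits * 1_000_000 / 8
server_mbytes_per_sec = bytes_per_sec / (1024 * 1024)
total_gbytes = bytes_per_sec * duration_s / (1024 ** 3)

print(f"{server_mbytes_per_sec:.1f} MBytes/sec")  # close to the server's 112
print(f"{total_gbytes:.2f} GBytes transferred")   # matches the client's 6.57
```

So both sides agree, and the ~941 Mbits/sec result also tells you this particular pair of hosts is limited by a 1 Gbps path, not a 10 Gbps one.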
In the same way, you can measure the real bandwidth between any type of VMs or physical hosts. Pre-compiled iperf binaries are available for Solaris, Linux, Windows, and Mac OS X.
I hope you found this article informative.