Cloud server providers often define host tiers by the allocated resources, but differences in the underlying hardware, architecture and performance tuning can result in varying capabilities even between similar configurations. The easiest way to measure the real differences between servers is to run a set of tests, i.e. a benchmark, to produce simple-to-read values for comparison.
Benchmarking cloud servers helps quantify the real performance behind the specifications, but a significant part of getting comparable results is eliminating as many as possible of the variables that could otherwise affect the benchmarks. For example:
- Use the same operating system on each server.
- Update all software to the latest versions.
- Check that no resource-hungry processes are running in the background.
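The last two checks above can be sketched with standard Linux tools (assuming a Debian-family system for the package commands mentioned later; the commands here are distribution-agnostic):

```shell
# List the five most CPU-hungry processes to spot background load
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 6

# Confirm the kernel version so you can verify it matches across the servers under test
uname -r
```

If anything unexpected shows up near the top of the process list, stop or defer it before benchmarking, as it will skew the results.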
With the servers adequately set up, you are ready to run the first benchmarks. Continue below for instructions on how to rate CPU and RAM performance.
Geekbench 3 is one of the most popular cross-platform processor benchmarking tools. It offers a standardised scoring system that separates single-core and multicore results for better comparison.
Running Geekbench on your server is quite a straightforward process, but you will need to install a couple of dependencies beforehand. The free version is 32-bit only; therefore, your 64-bit servers will need the additional runtime libraries.
Ubuntu

sudo apt-get install libc6:i386 libstdc++6:i386

Debian

sudo dpkg --add-architecture i386
sudo aptitude update
sudo aptitude install libc6:i386 libstdc++6:i386

CentOS

sudo yum install wget glibc.i686 libstdc++ libstdc++.i686
With the prerequisites fulfilled, go ahead and download the benchmark package. On a good network, the download is almost instantaneous, but should not take long in any case.
wget -P ~/ http://cdn.primatelabs.com/Geekbench-3.4.1-Linux.tar.gz
Afterwards, unpack the files and switch to the newly created folder. The following command will do just that.
tar -zxvf ~/Geekbench-3.4.1-Linux.tar.gz -C ~/ && cd ~/dist/Geekbench-3.4.1-Linux/
Finally, run the benchmark by executing the Geekbench binary inside the extracted folder.
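A minimal sketch of the invocation, assuming the archive unpacked into ~/dist/Geekbench-3.4.1-Linux and that the free 32-bit binary is named geekbench_x86_32 (check the folder contents with `ls` if the name differs in your version):

```shell
# Change into the extracted folder and launch the 32-bit benchmark binary
# (binary name assumed; verify with `ls` before running)
cd ~/dist/Geekbench-3.4.1-Linux
./geekbench_x86_32
```

The run takes a few minutes and needs no further interaction.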
Geekbench 3 performs a series of tests that measure CPU performance with integer and floating-point tasks, and collects memory bandwidth data to rate the system memory. Once complete, you will see output similar to the example underneath.
Open the first link in your web browser to view the results. Alternatively, if you register on the Geekbench website, you can use the second link to claim the results to your profile. Claiming the results allows you to save and compare the scores later.
Upload succeeded. Visit the following link and view your results online:

  http://browser.primatelabs.com/geekbench3/6088030

Visit the following link and add this result to your profile:

  http://browser.primatelabs.com/geekbench3/claim/6088030?key=
The total score scales linearly with CPU performance: double the score means double the performance. The scoring system is calibrated against a baseline score of 2500, which is the result for an Intel Core i5-2520M @ 2.50 GHz.
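As a worked example of the linear scale (the score of 5000 here is illustrative, not a measured result):

```shell
# A score of 5000 against the 2500-point baseline means 2x the baseline performance
awk 'BEGIN { printf "%.1fx\n", 5000 / 2500 }'
```

This makes scores from different servers directly comparable as performance ratios.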
A big part of the overall performance of a cloud server comes from the storage device read and write speeds. A good option for this purpose is fio, an I/O benchmarking and stress-testing tool available on a multitude of platforms. Install fio using the commands applicable to your system below.
Debian and Ubuntu
sudo apt-get install fio
CentOS

sudo yum install wget libaio libibverbs librdmacm librbd1-devel
wget -P ~/ https://kojipkgs.fedoraproject.org/packages/fio/2.2.8/2.el7/x86_64/fio-2.2.8-2.el7.x86_64.rpm
sudo rpm -iv ~/fio-2.2.8-2.el7.x86_64.rpm
IOPS, or input/output operations per second, is a commonly used performance metric for storage devices. The usual way to test IOPS is to perform random read and write operations; with fio, this can be done using the three examples below.
Random read/write performance
fio --name=randrw --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randrw --rwmixread=75 --gtod_reduce=1
Random read performance
fio --name=randread --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randread --gtod_reduce=1
Random write performance
fio --name=randwrite --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randwrite --gtod_reduce=1
As with the Geekbench scores, the higher the IOPS, the faster the storage. For comparison, a standard 7,200 RPM SATA HDD typically achieves 75-100 IOPS.
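IOPS and throughput are two views of the same measurement, related through the block size used in the test: throughput ≈ IOPS × block size. A quick conversion for the 4 KiB blocks used in the fio commands above (the 10,000 IOPS figure is illustrative):

```shell
# 10,000 IOPS at a 4 KiB (4096-byte) block size is roughly 39 MiB/s of throughput
awk 'BEGIN { printf "%.1f MiB/s\n", 10000 * 4096 / 1048576 }'
```

This is why the same drive reports far higher MiB/s figures with large sequential blocks than with small random ones.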
Another way to benchmark storage performance is to measure the latency of individual requests. IOPing is a simple tool that does just that. It sends I/O requests to the storage device and records the time to reply. The results display disk latency in the same way the ping command measures network latency.
Debian and Ubuntu
wget -P ~/ https://launchpad.net/ubuntu/+archive/primary/+files/ioping_0.9-2_amd64.deb
sudo dpkg -i ~/ioping_0.9-2_amd64.deb
CentOS

wget -P ~/ https://kojipkgs.fedoraproject.org/packages/ioping/0.9/1.el7/x86_64/ioping-0.9-1.el7.x86_64.rpm
sudo rpm -iv ~/ioping-0.9-1.el7.x86_64.rpm
Then run the test with the command below.
ioping -c 10 .
The output will show something similar to the example underneath. The time shows the I/O latency measured in microseconds.
4 KiB from . (ext4 /dev/vda1): request=1 time=78 us
4 KiB from . (ext4 /dev/vda1): request=2 time=175 us
4 KiB from . (ext4 /dev/vda1): request=3 time=167 us
4 KiB from . (ext4 /dev/vda1): request=4 time=174 us
4 KiB from . (ext4 /dev/vda1): request=5 time=168 us
4 KiB from . (ext4 /dev/vda1): request=6 time=185 us
4 KiB from . (ext4 /dev/vda1): request=7 time=179 us
4 KiB from . (ext4 /dev/vda1): request=8 time=178 us
4 KiB from . (ext4 /dev/vda1): request=9 time=177 us
4 KiB from . (ext4 /dev/vda1): request=10 time=195 us

--- . (ext4 /dev/vda1) ioping statistics ---
10 requests completed in 9.00 s, 5.97 k iops, 23.3 MiB/s
min/avg/max/mdev = 78 us / 167 us / 195 us / 30 us
The latency number measures the delay in response, which means the lower the delay, the better the performance.
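At a queue depth of one, latency and IOPS are two views of the same measurement: IOPS ≈ 1 / average latency. Using the 167 µs average from the example output above:

```shell
# An average latency of 167 microseconds corresponds to roughly 5988 requests per second,
# matching the "5.97 k iops" line in the ioping statistics
awk 'BEGIN { printf "%.0f\n", 1 / 0.000167 }'
```

This relation is a handy sanity check when comparing latency figures against IOPS results from fio.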
Benchmarking tools are a convenient way to compare performance between cloud server providers. They are readily available and straightforward to use regardless of the operating system. Performance-demanding web applications might see a noticeable difference in the quality of service when run on genuinely capable hardware, but even if you are not worried about the numbers, you should still know what you are getting for your money.
If you wish to read more about benchmarking between cloud server providers, check out our Cloud Benchmark at Slush blog post for how DigitalOcean and AWS EC2 fared against UpCloud. Also, if you are still unsure about cloud server performance, go ahead and find out more at our Cloud Server vs. VPS vs. Dedicated Server comparison.