A customer called because they expected the Internet access in the data center to deliver 10 Gbps, but a speedtest showed only 1 Gbps of throughput.

Once again, it’s a network issue. Or is it?

I can’t remember how many times I have investigated a network performance problem only to prove that it was actually a server/client/application/DNS/VMware problem.

People always blame the system they are least responsible for and know the least about.

To be honest, sometimes the cause was a firewall or a load balancer. When a box maintains state, a software bug is very likely to create problems that are difficult to detect.

In other cases, the performance expectation was line rate, with the requestor ignoring the real throughput of the application or the server, or the impact of packet loss and latency (I’m working on a blog post about that).
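As a quick back-of-envelope illustration of the latency point (the numbers here are mine, not from this customer case): a single TCP session can never push more than its window size divided by the round-trip time, no matter how fast the link is. Assuming bc is installed:

# TCP throughput ceiling for one session: window / RTT.
# Hypothetical example: 64 KB receive window, 10 ms RTT.
# 64 KB = 524288 bits sent per round trip.
echo "scale=1; (64 * 1024 * 8) / 0.010 / 1000000" | bc
# -> 52.4 (Mbps), far below a 10 Gbps link

This is why a single session often cannot fill a fat pipe, and why running many sessions in parallel gets you closer to line rate.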

For this particular request, I hacked together a small wrapper around the speedtest-cli utility to run multiple parallel tests, with an optional multiplier to push it even further.

This way the server generated about 9 Gbps of traffic, and the customer was happy to see the bandwidth graph rise close to the expected values.

The script runs one speedtest session for each server returned by the command

speedtest-cli --list

If an integer parameter is provided, that many sessions per server will be executed. I have not thoroughly tested whether the multiplier affects the final result; it simply seemed fun (and possibly useful) to implement.
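For context, the list output looks roughly like this (server IDs, names, and distances below are made up for illustration):

$ speedtest-cli --list
Retrieving speedtest.net configuration...
1234) Example ISP (Milan, Italy) [2.34 km]
5678) Another ISP (Turin, Italy) [120.56 km]

The grep in the script keeps only the leading numeric server IDs, which are then fed back to speedtest-cli one by one.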

Here’s the script, saved for future reference. Use it at your own risk; I take no responsibility in case of problems.

License: CC BY.


#!/bin/bash
# Run parallel speedtest-cli sessions against every listed server.
# Optional argument: number of sessions per server (default 1).
if [ $# -eq 0 ]; then
    multiplier=1
else
    multiplier="$1"
fi
# Extract the numeric server IDs from the server list.
for server in $(speedtest-cli --list | grep -o '^[0-9]\+'); do
    for ((i = 1; i <= multiplier; i++)); do
        # Background each session so they all run in parallel.
        speedtest-cli --server "$server" &
    done
done
# Wait for all background sessions to complete.
wait
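A possible invocation (the filename is my choice for this example): save the script as parallel-speedtest.sh, make it executable, and run four sessions per server:

chmod +x parallel-speedtest.sh
./parallel-speedtest.sh 4

With no argument, it defaults to one session per server.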
