A customer called because they expected the Internet access in the DC to reach 10 Gbps, but a speedtest showed only 1 Gbps of throughput.

Once again, it’s a network issue. Or is it?

I can’t remember how many times I have been called to solve network performance problems only to prove that the real culprit was the server, the client, the application, DNS, or VMware.

People always blame the system they are least responsible for and know the least about.

Sometimes it really was the firewall or the load balancer: when a box maintains state, a software bug can easily create problems that are hard to detect.

In other cases the expectation was line-rate performance, and the requestor ignored the actual throughput of the application or the server, or the impact of packet loss and latency.
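As a rough rule of thumb, a single TCP flow cannot exceed roughly MSS / (RTT × √p), where p is the packet loss probability (the Mathis formula): with a 1460-byte MSS, 20 ms of round-trip time, and 0.01% loss, that works out to about 58 Mbit/s per flow, no matter how fast the link is.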

For this particular request, I hacked the speedtest-cli utility a bit to run multiple parallel tests, with a multiplier to push it even further.

This way the server generated about 9 Gbps of traffic, and the customer was happy to see the bandwidth graph rise to the expected values.
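If you want to check the aggregate rate on the host itself while the sessions run, a quick and dirty way is to sample the interface counters. A minimal sketch, assuming a Linux box; the interface name eth0 is just a placeholder:

#!/bin/bash
# Print RX/TX rates of an interface once per second.
# "eth0" is only a placeholder, pass your uplink as the first argument.
IFACE="${1:-eth0}"
while true; do
    rx1=$(cat /sys/class/net/"$IFACE"/statistics/rx_bytes)
    tx1=$(cat /sys/class/net/"$IFACE"/statistics/tx_bytes)
    sleep 1
    rx2=$(cat /sys/class/net/"$IFACE"/statistics/rx_bytes)
    tx2=$(cat /sys/class/net/"$IFACE"/statistics/tx_bytes)
    echo "rx: $(( (rx2 - rx1) * 8 / 1000000 )) Mbit/s  tx: $(( (tx2 - tx1) * 8 / 1000000 )) Mbit/s"
done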

The script runs one speedtest session for each server returned by the command “speedtest-cli --list”.

If an integer parameter is provided, multiple sessions per server will be executed. I have not thoroughly tested whether the multiplier affects the final result, but it seemed useful enough to implement.

Here’s the script, saved for future reference. Use it at your own risk; I take no responsibility in case of problems.

License: CC BY.


#!/bin/bash
#
# Run speedtest-cli in parallel against every server returned by
# "speedtest-cli --list". An optional integer argument sets how many
# sessions to start per server.

# Keep only the lines that begin with a numeric server ID and take the
# ID in front of the ")".
list=$(
    speedtest-cli --list |
    grep -E '^[0-9]' |
    cut -d ')' -f 1
)

# Default to one session per server unless a multiplier is given.
if [ $# -eq 0 ]; then
    multiplier=1
else
    multiplier="$1"
fi

# Start all the sessions in the background, then wait for them to end.
for server in $list; do
    for ((i = 1; i <= multiplier; i++)); do
        speedtest-cli --server "$server" &
    done
done
wait
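
A couple of usage examples (the file name is just what I would call it, use whatever name you saved it under):

# one session per listed server
./parallel-speedtest.sh

# four parallel sessions per listed server
./parallel-speedtest.sh 4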

Links