This post is part of a series about Docker.

We started with the basics and moved on to adding software, using volumes and then bridging a container to the network.

As I said, I’m neither a developer nor a system administrator; I work as a Network Engineer, so I’m not Docker’s main target audience, but I found it very useful for a specific need, and now it’s time to join the dots.

Here’s the story. I do wireless networks with captive portals. I know some people think captive portals are bad, and I agree to some extent, but they have a purpose.

For those who don’t know what a captive portal is: it’s a VM or an appliance that captures client traffic, redirects it to a web authentication portal, and requests valid credentials before allowing the user to access the internet.

Long story short, how can we test a captive portal’s scalability?

We can configure it and see how it behaves under the load of real clients (reactive), or find a way to “simulate” many clients and test in a lab environment.

Yes, it would be easier to just read and trust the vendor datasheet, but we all know vendors are sometimes not so accurate with their published values, so my approach is trust but verify.

Note: a captive portal is often used for wireless clients, but it can just as well listen on a specific VLAN, so load tests can be done with wired clients.

First try: VMs

I first tried to use VirtualBox and clone a VM many times. Even with some scripting, this solution didn’t scale well and required a lot of resources.

I need to simulate at least 500 clients per host and scale up to a few thousand in the future.

Second try: Docker

On my second try I used Docker. I noticed this technology thanks to Twitter and saw big potential for my purpose.

Join the dots: the final solution

If you read all the previous posts you can now see how the dots join together: I run multiple Docker containers on a host, bridge them to the host NIC using Open vSwitch, and run a Python + Mechanize script stored in a shared Volume that logs in to the Captive Portal and then simulates Internet access from the client simply with curl and a list of websites.
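For context, here is a minimal sketch of what the startup script mounted at /volume/runthis.sh could look like inside each container. The login script name (login.py) and the website list (sites.txt) are hypothetical placeholders; the real login logic lives in the Python + Mechanize script and depends on the specific portal.

#!/bin/bash
# runthis.sh (sketch, not the exact script)
# Runs inside each container: log in to the captive portal,
# then simulate browsing with curl.

# login.py is a placeholder for the actual Python + Mechanize login script
python /volume/login.py

# simulate Internet access: fetch each site from a (hypothetical) list, forever
while true
do
    while read -r url
    do
        curl -s -o /dev/null "$url"
        sleep $((RANDOM % 30))   # random pause to mimic a real client
    done < /volume/sites.txt
done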

Advantages:

  • little overhead on the host

  • “real” network access: the Captive Portal needs to see different MAC addresses to do its magic

  • scalable performance: I can run containers on multiple hosts for a real scale-out solution

  • manageable: the scripts are stored on a shared Volume so it’s easy to make changes

  • reusable: a few changes to the login script allow testing different portals

  • easy to start: a simple bash loop starts as many containers as needed; host performance is the only limit (more on that later)

The start script

This is the start script used to run multiple containers:

#!/bin/bash
# start.sh
# Starts multiple containers and bridges each one to ovsbr0 via pipework.
# Note: {0..$1} doesn't expand with a variable in bash, so a C-style loop is used.
for ((i = 0; i < $1; i++))
do
    ./pipework/pipework ovsbr0 $(docker run --privileged -i -t --net="none" -v /root/dockervolume:/volume -d ubuntuplus /bin/bash /volume/runthis.sh) dhcp
done
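The script takes the number of containers to start as its only argument; for example, to start 50 simulated clients:

./start.sh 50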

The kill script

This is the kill script, used to kill all the containers:

#!/bin/bash
# dockerkillall.sh
# Kills all the running containers.
while read -r containerID
do
    docker kill "$containerID"
done < <(docker ps -q)
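As a side note, since docker kill accepts multiple container IDs, the same result can be achieved with a one-liner:

# equivalent one-liner
docker kill $(docker ps -q)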

The OVS clean script

This command removes all the Open vSwitch ports except the ports used to bridge the switch itself.

ovs-vsctl show | grep 'Interface.*veth' | sed 's/"//g' | awk '{ cmd = "ovs-vsctl del-port " $2; system(cmd) }'
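An alternative, assuming the pipework-created ports all have names starting with veth, is to use ovs-vsctl list-ports instead of parsing the show output:

# remove every veth* port from the ovsbr0 bridge
for port in $(ovs-vsctl list-ports ovsbr0 | grep '^veth')
do
    ovs-vsctl del-port ovsbr0 "$port"
done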

Limitations

The only limit I found so far is the time it takes to start a new container and run the custom startup script. On my PC (Dell Latitude E5550, i5 CPU, 16 GB RAM) it takes about an hour to start 300 containers.

I haven’t had a chance to test it on server hardware, so I don’t know whether more powerful hardware can speed up the process or whether it’s a software limitation that gains little from faster CPUs. I’ll update this post with my findings later.
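One untested idea to speed things up is to launch the container starts in parallel, since most of the wall-clock time is presumably spent waiting on docker run and the DHCP exchange rather than on CPU:

#!/bin/bash
# parallel-start.sh (untested sketch)
# Same as start.sh, but each start runs in the background so the waits overlap.
for ((i = 0; i < $1; i++))
do
    (
        cid=$(docker run --privileged -i -t --net="none" -v /root/dockervolume:/volume -d ubuntuplus /bin/bash /volume/runthis.sh)
        ./pipework/pipework ovsbr0 "$cid" dhcp
    ) &
done
wait   # wait for all background starts to complete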

Wrap Up

I hope you enjoyed my posts about Docker. My goal was to share a very specific use case that maybe isn’t what the technology was designed for, but that I found very useful and easy to implement.