A Scripting Approach to Automate TCP Benchmarking


Umakant Gundeli

Oral Defence Date: Friday, June 13, 2008 - 15:00


SCI 241



Professors Marguerite Murphy & Dragutin Petkovic


Benchmarking web server interactions running over a TCP/IP protocol stack is one way to evaluate and compare system performance, for example across different variants of the TCP protocol or different kernel tuning strategies. Running such experiments manually with standard software tools is a tedious and error-prone activity. This report presents a new Web-based tool that automates the process of configuring and executing web server benchmarking experiments.

The existing tools used in this project are as follows. Httperf is a standard tool for benchmarking web servers. Autobench measures server capacity by automatically executing httperf some number of times against each of the specified servers. After each iteration, autobench increases the number of connections per second to the server, collects the raw output data from httperf, and converts it to tab-separated value (TSV) or comma-separated value (CSV) format. The bench2graph tool uses this TSV- or CSV-formatted data to create an output graph in PostScript format.

Autobench partially automates the benchmarking process by running httperf several times, but the user must still perform all of the system configuration work manually before and after running autobench experiments. These tasks may include tuning the kernel by setting proper values for certain kernel variables, loading or unloading kernel modules, and setting the correct autobench configuration parameters. Usually the web server under test is also tuned by modifying its configuration parameter values. Once an experiment is completed, the user must manually import the TSV- or CSV-formatted data into a spreadsheet and/or use the bench2graph tool to plot a graph of the results. Furthermore, the client and server software execute on two different machines, both of which require configuration.
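To make the data-handling step concrete, the following is a minimal sketch of parsing autobench-style TSV output into structured rows, as an automation tool would before graphing. The column names shown (`dem_req_rate`, `rep_rate_avg`) are illustrative assumptions, not a guaranteed autobench schema.

```python
import csv
import io

def parse_autobench_tsv(text):
    """Parse autobench-style TSV output into a list of row dicts.

    The first line is assumed to hold the column headers; each later
    line holds one httperf iteration's results.
    """
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    rows = []
    for row in reader:
        parsed = {}
        for key, value in row.items():
            # Convert numeric fields where possible; keep strings otherwise.
            try:
                parsed[key] = float(value)
            except (TypeError, ValueError):
                parsed[key] = value
        rows.append(parsed)
    return rows

# Hypothetical excerpt: demanded request rate vs. achieved reply rate.
sample = "dem_req_rate\trep_rate_avg\n50\t49.8\n100\t99.1\n"
rows = parse_autobench_tsv(sample)
```

Rows in this form can then be fed to a spreadsheet export or a plotting step in place of the manual bench2graph invocation.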
This project addresses all of these problems by designing and developing a new Web application that allows the user to set kernel variable values and autobench configuration settings, load specified kernel modules, run autobench experiments, generate an output graph automatically, and restore the original kernel state after the experiments complete. Our new tool is operational and successfully automates these activities.
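The tune-then-restore pattern described above can be sketched as a snapshot of kernel-variable files before the experiment and a write-back afterwards. This is a simplified illustration, not the project's actual implementation; the demonstration uses a temporary directory standing in for `/proc/sys`, and the variable name and values are assumptions.

```python
import os
import tempfile

def save_values(paths):
    """Read and remember the current contents of each kernel-variable file."""
    return {p: open(p).read() for p in paths}

def apply_values(settings):
    """Write new values to the given kernel-variable files."""
    for path, value in settings.items():
        with open(path, "w") as f:
            f.write(value)

# Demonstration against a temp directory standing in for /proc/sys.
tmp = tempfile.mkdtemp()
var = os.path.join(tmp, "tcp_congestion_control")
with open(var, "w") as f:
    f.write("reno\n")

saved = save_values([var])          # snapshot the original state
apply_values({var: "cubic\n"})      # tune the kernel for the benchmark run
# ... the autobench experiment would run here ...
apply_values(saved)                 # restore the original state
```

On a real system the same logic would operate on paths such as `/proc/sys/net/ipv4/...` (or go through the `sysctl` utility) and would require root privileges.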

Keywords: Autobench, benchmarking, automation, Web interface, kernel, module, configuration, graph.