Benchmarking and Things That Go Bust in the Night
Benchmarking is the highest form of performance simulation. It is also the most difficult, and therefore offers many opportunities for mistakes. Some mistakes arise from a lack of sleep; others arise from having no expectations against which to assess the performance data the benchmark generates.
This talk will present the following real-life examples of such mistakes taken from my own experiences running both competitive and custom benchmarks:
o The Psychic Hotline (A fiasco in Florida)
o Smooth Lights (How LEDs rescued a TPC-B benchmark)
o Performance Graffiti (Bad data presentation can mess you up)
o The X-Files (X-windows benchmarking--almost X-rated)
In each case I will explain how simple performance models were used to set expectations for analyzing the benchmark results and to correct the mistakes. A key message is that even wrong expectations are better than no expectations.