This is the second article in our four-part series on performance problems in software products. In this post we will discuss how performance problems are sometimes caused by insufficient and inadequate performance tests.
Insufficient and inadequate performance tests
Many performance problems occur simply because there are no adequate performance tests or benchmarks. Unfortunately, and quite often, performance tests are done in an ad-hoc fashion, and only at the end of the product cycle. Even when tests exist, they are frequently inadequate. The right tests should be:
- Repeatable: so that comparison experiments can be conducted properly and easily.
- Observable: so if any signs of poor performance are observed, the developer has a starting point to begin investigating the cause of the problem.
- Portable: so that the tests can be run against competitors’ products for comparison.
- Easily Presentable: so that everyone can understand the comparison in a brief presentation.
- Realistic: so that measurements reflect customer-experienced realities.
- Flexible: so that it is possible to implement all kinds of environmental and other modifications.
- Easy to use and deploy: so that anyone who needs to run it can do so without too much hassle.
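To illustrate the "repeatable" and "observable" properties above, here is a minimal sketch of a microbenchmark harness. The `benchmark` helper and the string-joining workload are illustrative assumptions, not taken from the article; a real product benchmark would measure actual product operations under realistic load.

```python
import statistics
import timeit

def benchmark(func, repeats=5, number=1000):
    """Run func `number` times per trial, for `repeats` trials,
    and report summary statistics so a result is repeatable and
    observable rather than a single noisy measurement."""
    # timeit.repeat returns one total elapsed time per trial
    trials = timeit.repeat(func, repeat=repeats, number=number)
    per_call = [t / number for t in trials]
    return {
        "median_s": statistics.median(per_call),
        "min_s": min(per_call),
        "max_s": max(per_call),
        "repeats": repeats,
        "calls_per_trial": number,
    }

# Hypothetical workload standing in for real product code
result = benchmark(lambda: "".join(str(i) for i in range(100)))
print(result)
```

Reporting the median alongside the min and max makes run-to-run variance visible, which gives a developer a starting point when a number looks suspicious.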
The benchmarks need to be published, because they are useful indicators of the product's relative performance for customers and support staff ([PHHP06]). If the results are not published and explained, customers may encounter many strange performance numbers with no explanation of their cause.