I recently heard a story from a software engineer who was assigned to develop performance tests for a system he was building. He reported that the assignment directive was intended to be comprehensive but was excessively vague. Basically, he was asked to do capacity, stress, and load testing, with no clear success criteria defined!
It is extremely common in our industry to be asked to do a “spitball test.” We just pick a variety of values for parameters and throw them at a web service as fast as our test infrastructure allows. This has some value: exploratory testing is a critical part of the testing process, and developers and testers do it all the time.
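A "spitball test" can be sketched in a few lines. The harness below is a minimal, hypothetical example: it fires a batch of calls at a target callable concurrently (standing in for requests to a web service) and records each call's latency. The function name, worker count, and request count are all illustrative assumptions, not a real tool's API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def spitball(target, requests=100, workers=10):
    """Fire `requests` calls at `target` concurrently; return per-call latencies in seconds.

    `target` is a stand-in for whatever actually exercises the service,
    e.g. a function that issues one HTTP request.
    """
    def timed_call(_):
        start = time.perf_counter()
        target()  # hypothetical call to the system under test
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(timed_call, range(requests)))
```

In a real test, `target` would issue an HTTP request; keeping it as an arbitrary callable makes the sketch runnable without a live service. Note that this records latencies but, exactly as the article warns, says nothing by itself about whether the results are acceptable.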
In applications of all sizes, leaving things at this level can have significant negative downstream effects. Defining what you want to know about your application's performance takes some time, and the relevant factors differ from system to system.
The required planning does not mean authoring a document the size of a small book. What questions do you want to answer? Is it simply, “How fast do the x, y, and z reports run?” Do you want to know how fast those reports run under load? How do you measure what “how fast” means? Which percentile are you using when evaluating performance test data?
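That last question matters because once you have latency samples, a percentile is easy to compute. The sketch below uses the simple nearest-rank method; this is one of several common definitions, and different tools interpolate slightly differently, so which method you use is itself something to write down in the plan.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample that is >= p percent of the data."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank into the sorted data
    return ordered[max(rank, 1) - 1]

# Example: 100 response times of 1..100 ms
latencies_ms = list(range(1, 101))
p95 = percentile(latencies_ms, 95)  # -> 95
```

Reporting p95 or p99 rather than the average keeps a handful of slow outliers from hiding behind a healthy-looking mean, which is usually what users actually feel.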
Understanding performance test data is always relative to context. A simple plan describing what data you will gather, along with performance targets, is essential. With that in hand, you can create magical experiences for your users.