Scripted tests are detailed instructions for manual test cases. They are expensive to create and maintain, and result in the tester taking the same narrow path through the software each time they run the test.
Cem Kaner writes, "It appears that following scripts is the very 'best practice' available for brain-damaged rats."
And yet, although my eyes glaze over whenever I read more than a few pages of a detailed test plan full of scripted tests, and I struggle to reverse-engineer the test design motivating them, I've fallen for behaviour-driven development (BDD) tools like SpecFlow and Cucumber.
A test plan / test design specification has two main goals for me:
- tell the tester what tests to run
- communicate to other project stakeholders what tests will and will not be run
Of these, goal #2 feels like the more important one. A short set of 3-5 scripted tests, written in a Given/When/Then syntax like that of SpecFlow or Cucumber, can clarify the requirements and make it clear which acceptance tests must pass before testing can begin or code can be checked in.
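To illustrate, here is a sketch of what such a short acceptance test might look like in Given/When/Then form (the feature and steps are hypothetical, invented for this example):

```gherkin
Feature: User login
  A registered user can sign in to reach their account.

  Scenario: Successful login with valid credentials
    Given a registered user with username "alice"
    When she signs in with her correct password
    Then she is taken to her account dashboard

  Scenario: Login rejected with an incorrect password
    Given a registered user with username "alice"
    When she signs in with an incorrect password
    Then she sees an "invalid credentials" message
    And she remains on the login page
```

A handful of scenarios like these can double as both the acceptance criteria stakeholders review and the first tests a tester runs.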
Beyond this short set, I'd agree that diagrams, tables, checklists, and other concise expressions of test ideas generally work better than hundreds of scripted tests.
Another time when scripted tests make sense is when the software process is being audited, and the auditors will be showing up and insisting on detailed test cases with specified expected results.
So, despite their bad reputation, having a select few scripted tests can add clarity to critical tests.