11 software testing myths

Posted: February 21, 2011 in Software Testing

Myth #1: Software can be tested completely

Many books on software testing open by discussing the impossibility of testing all the data or paths in non-trivial software, yet I'm still surprised by people who ask how long it will take to finish testing, as if being "done" were not a subjective tradeoff between the risk of bugs and the cost of delaying a software release.

Myth #2: Testers can find all the bugs

Because testing everything is impossible, a tester won't find all the bugs, and likely won't even find all the important ones.

Myth #3: It is the tester’s fault when an important bug is missed or found late

Related to #2, blame too often lands on the tester when an important bug escapes or is found late.  I've certainly felt it, and I've seen root cause analyses that squarely conclude the tester should have found the bug, with no consideration of whether the developer could have avoided the issue in the first place.

Myth #4: All bugs should be fixed

Changing code to fix a bug is a risk.  Some other side effect may pop up, or customers may rely on and expect the current behavior.  There can also be design choices where two approaches are both reasonable but both flawed, or where two bugs are logged and fixing one means the other can't reasonably be fixed.  Software teams usually triage all the bugs, then make hard choices about which should be fixed.

Myth #5: An automated test is equivalent to the same manual test

When I started testing, I felt that any test worth doing was worth automating.  It took a while to realize both that the cost of that was prohibitive and that there is value in testing closer to how the customer will use the software.  Good manual testers notice anomalies that automated tests usually miss; I've heard it described as "peripheral vision".  Testing while thinking about how the software will be used can find bugs that an automated test would otherwise miss.

Myth #6: A quick passing automated test is free

Even if machine execution time is close to free, a poorly written test can be harmful, even one that runs fast and passes consistently.  A passing test may still require maintenance as the product changes.  And if a test covers code without validating its correctness, it creates a false sense of security when code coverage is used to assess test effectiveness, leads to bad decisions when coverage is used to prune tests with redundant code coverage, and skews metrics on the percentage of passing tests.
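Here's a hypothetical sketch of how a fast, consistently passing test can validate nothing; the function and tests are invented for illustration:

```python
def apply_discount(price, rate):
    # Hypothetical buggy implementation: should be price * (1 - rate)
    return price * rate

def test_discount_smoke():
    # Executes every line of apply_discount (full coverage),
    # but asserts nothing, so it passes whether or not the code is correct
    apply_discount(100, 0.25)

test_discount_smoke()  # "passes", yet apply_discount(100, 0.25) returns 25, not 75
```

The smoke test would show up green in a dashboard and contribute to coverage numbers while hiding a real bug.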

Myth #7: There exists a code coverage percentage goal that makes sense

I’ve heard 90%, 80%, 60%, and 50% cited as reasonable code coverage goals for black box testing, but none of those goals take into account the style of the code.  Redundant error handling, where both the caller and callee check for an invalid condition, can make for more reliable code as maintenance is performed, but those redundant checks lower coverage.  And if you use a library of useful routines and the compiler can’t optimize out the routines you never call, that unreachable code lowers coverage too.  When high code coverage is achieved, it is often through unit tests that exercise code in isolation or through fault injection that triggers conditions that won’t otherwise happen, and neither is representative of real-world use of the system.
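The redundant error handling point can be sketched like this; the functions and the port-range check are invented for illustration:

```python
def read_port(config):
    # The caller validates first...
    port = config.get("port", -1)
    if not 0 < port < 65536:
        return None
    return open_port(port)

def open_port(port):
    # ...and the callee defensively re-checks.  This branch protects
    # future callers that might skip validation, but no black-box test
    # going through read_port can ever reach it, so it is reported as
    # uncovered code even though it makes the module more robust.
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port
```

Deleting the callee's check would raise the coverage percentage while making the code less safe to maintain.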

Myth #8: Uncovered code is untested code

Since testing all data values isn’t feasible, we make assumptions about equivalence classes of data values and try to test at the boundaries.  The same applies to code: a macro or inline function that gets expanded in several spots should behave equivalently if we trust the compiler and the usage is similar.  Yet if we measure block coverage, those expansions may be tested in some blocks and not others, even though it is the same code and we would normally assume it to be equivalent.
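The equivalence-class assumption for data values can be sketched like this; the validator and its valid range are invented for illustration:

```python
def is_valid_age(age):
    # One equivalence class of valid values: 0..120 inclusive
    return 0 <= age <= 120

# Rather than testing every integer, test just inside and just outside
# each boundary, and assume the interior of the class behaves the same
cases = {-1: False, 0: True, 1: True, 119: True, 120: True, 121: False}
for value, expected in cases.items():
    assert is_valid_age(value) == expected
```

The untested values 2 through 118 are "covered" only by the equivalence assumption, which is exactly the kind of judgment call coverage numbers can't express.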

Myth #9: Covered code has been fully tested

Code coverage measures which lines, blocks, or arcs were exercised.  Even if you cover all the code, coverage doesn’t measure whether your program functioned correctly; it may have failed in ways you didn’t perceive.  There might also be untested data values that would have caused incorrect behavior.
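A minimal sketch of full coverage with untested data values, using an invented example:

```python
def average(values):
    # A single test covers 100% of these lines...
    return sum(values) / len(values)

assert average([2, 4]) == 3  # full line coverage achieved

# ...yet the untested input [] still raises ZeroDivisionError,
# a failure no coverage report would have revealed, because coverage
# measures the code reached, not the data tried.
```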

Myth #10: The best way to test is the [exploratory/model based/…] approach

Model-based, exploratory, requirements-focused, and other approaches are all solid techniques that help us think about software differently.  But focusing on a single technique means missing bugs that other techniques would find faster.

Myth #11: Software testing is a mundane job

New computer science graduates, who may have been exposed to testing for only a few hours in a software engineering course, often think of testing as mindless and boring.  In the movie Elf, Will Ferrell has the mundane job of testing Jack in the Box toys.  He cranks each one, braces for the puppet to pop up, and moves on to the next.  That’s not what professional software testing is like.  Testing requires creativity, alertness, and a passion for quality.  There are sometimes mundane tasks of repeating tests or time-consuming environment setup, but a thinking tester usually finds creative ways to solve repetitive tasks.

Comments
  1. Nice list.

    “Myth #11: Software testing is a mundane job

    Testing requires creativity, alertness, and a passion for quality. There are sometimes mundane tasks of repeating tests or time-consuming environment setup, but a thinking tester usually finds creative ways to solve repetitive tasks.”

    Thank you for that! Testing isn’t for everyone. But it can be very rewarding, particularly if you are doing it right.

  2. testmuse says:

    Myth #12 : Testers run grammar checkers on their blog posts.
    “Myth #1: Software can tested completely”
    I suspect “Software can be tested completely” was intended.

  3. Regarding myth #3, I think we should not say the developer should have been considered. I have no control over how well a developer does their job. I would not accept ‘blame’ for a bug making it to the customer, but I would see it as a signal to re-evaluate how I plan and prioritize test cases.

  4. Good work Alan.

    I want to add one more myth to your list :

    Myth : 100% Test Coverage means software is bug-free.

    Good luck..
    Anurag Vidyarthi
