I’m presenting tonight at the Seattle Test Meetup, in Kirkland, WA.  My topic is “Inside the Software Test Interview”, and I will be presenting my own observations on interviewing software testers and being interviewed for software test positions.  My presentation isn’t Google-specific, but I came across some nice links about the Google hiring process and test engineering role that I’m including here.

Google’s hiring process

How to get a job at Google, interview questions, hiring process by Don Dodge

Get that Job at Google by Steve Yegge

Interviewing at Google – YouTube

Test jobs at Google

Conversation with a Test Engineer

11 software testing myths

Posted: February 21, 2011 in Software Testing

Myth #1: Software can be tested completely

Many books on software testing start with a discussion of the impossibility of testing all the data or paths in non-trivial software, but I’m still surprised by people who ask how long it will take to finish testing, without considering that being “done” is a subjective tradeoff between the risk of bugs and the cost of delaying a software release.

Myth #2: Testers can find all the bugs

Related to the impossibility of testing everything, a tester won’t find all the bugs, and likely won’t even find all the important bugs.

Myth #3: It is the tester’s fault when an important bug is missed or found late

Related to #2, I often see blame focused on the tester when an important bug escapes or is found late.  I’ve certainly felt it, and I’ve seen root cause analyses that squarely conclude the tester should have found the bug, with no consideration of whether the developer could have avoided introducing it.

Myth #4: All bugs should be fixed

Changing code to fix a bug is a risk.  Some other side effect may pop up, or customers may rely on and expect the current behavior.  There can also be design choices where two approaches are both reasonable but both have flaws, or where two bugs are logged and fixing one means the other can’t reasonably be fixed.  Software teams usually triage all the bugs, but they make hard choices to determine which should be fixed.

Myth #5: An automated test is equivalent to the same manual test

When I started testing, I felt that any test worth doing was worth automating.  It took a while to realize both that the cost of doing so was prohibitive and that there is value in testing closer to how the customer will use the software.  Good manual testers notice anomalies that automated tests usually miss; I’ve heard it described as “peripheral vision”.  Testing while thinking about how the software will be used can find bugs that an automated test would otherwise miss.

Myth #6: A quick passing automated test is free

Even if machine execution time is close to free, a poorly written test can be harmful, even if it runs fast and passes consistently.  A passing test may still require maintenance as the product changes.  And if the test covers code without validating correctness, it creates a false sense of security when code coverage is used to assess test effectiveness, drives bad decisions when coverage is used to prune tests with redundant coverage, and skews metrics on the percentage of passing tests.
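As a small, hypothetical illustration (the `apply_discount` function and both tests are invented for this post, not from any real project), here is the kind of quick, passing test that inflates coverage without validating anything, next to the version that actually checks correctness:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical product code: price after a percentage discount."""
    return price - price * (percent / 100)


def test_apply_discount_runs():
    # Executes every line of apply_discount, so coverage reports 100%,
    # but this assertion would pass even if the math were completely wrong.
    assert apply_discount(100.0, 50.0) is not None


def test_apply_discount_is_correct():
    # The version worth keeping: it pins the expected value, so a regression
    # (say, dividing by 10 instead of 100) actually fails.
    assert apply_discount(100.0, 50.0) == 50.0
```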

Myth #7: There exists a code coverage percentage goal that makes sense

I’ve heard 90%, 80%, 60%, and 50% quoted as reasonable code coverage numbers for black box testing, but none of those goals take into account the style of the code.  Redundant error handling, where the caller and callee both check for an invalid condition, can make code more reliable as maintenance is performed, but those redundant checks lead to lower coverage.  And if you link in a library of useful routines but only call a few of them, and the compiler can’t strip out the unreachable code, that also drags coverage down.  When high code coverage is achieved, it is often through unit tests that exercise code in isolation or through fault injection that triggers conditions that won’t otherwise happen, and neither of those is representative of real-world use of the system.
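A minimal sketch of that redundant-check pattern (the names and port example are mine, purely hypothetical): the callee repeats a check its only caller already performs, so the defensive branch can never be reached through the public entry point and shows up as uncovered.

```python
def open_connection(port_text: str) -> int:
    """Public entry point: validates the input before passing it along."""
    port = int(port_text)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return _connect(port)


def _connect(port: int) -> int:
    # Redundant, defensive check: every current caller has already validated,
    # so black-box tests can never reach this branch and it shows up as
    # uncovered -- yet it protects future callers as the code is maintained.
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port  # stand-in for the real connection logic
```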

Myth #8: Uncovered code is untested code

Since testing all data values isn’t feasible, we make assumptions about equivalence classes of data values and try to test at the boundaries.  In code, a macro or inline function that gets expanded in several spots should behave equivalently at each site if we trust the compiler and the usage is similar.  But when measuring block coverage, those expansions may show as covered in some blocks and not in others, even though it is the same code and we would normally assume it to be equivalent.
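The data-value half of that assumption is easy to sketch.  In this hypothetical example (the validator and the chosen values are mine, not from any real suite), one representative stands in for an entire equivalence class, with extra attention at the boundaries:

```python
import pytest


def is_valid_percent(value: int) -> bool:
    """Hypothetical validator: a percentage must be between 0 and 100 inclusive."""
    return 0 <= value <= 100


# One representative per equivalence class plus the boundaries; we assume 37
# behaves like 50, so exhaustively testing every value in between adds nothing.
@pytest.mark.parametrize("value, expected", [
    (-1, False),   # just below the lower boundary
    (0, True),     # lower boundary
    (50, True),    # representative of the valid class
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
])
def test_is_valid_percent(value, expected):
    assert is_valid_percent(value) == expected
```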

Myth #9: Covered code has been fully tested

Code coverage measures the lines, blocks, or arcs that are tested.  Even if you cover all the code, the coverage doesn’t measure whether your program was functioning correctly.  It may have failed in ways that you didn’t perceive.  There might also be untested data values that would have caused incorrect behavior.
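A tiny, hypothetical example of that gap (the `average` function and test are invented for illustration): this test yields 100% line coverage, yet an untested data value still breaks the code.

```python
def average(values):
    """Hypothetical example: arithmetic mean of a list of numbers."""
    return sum(values) / len(values)


def test_average():
    # Every line of average() is executed, so coverage reports 100%...
    assert average([2, 4, 6]) == 4


# ...but the "fully covered" function still fails on an untested data value:
# average([]) raises ZeroDivisionError, and nothing in this suite reveals that.
```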

Myth #10: The best way to test is the [exploratory/model based/…] approach

Whether it is model-based, exploratory, requirements-focused, or something else, there are solid techniques that help us think about software differently.  But focusing on a single technique means missing the bugs that the other techniques would find faster.

Myth #11: Software testing is a mundane job

New computer science graduates, who may have only been exposed to testing for a few hours in a software engineering course, often think of testing as mindless and boring.  In the movie Elf, Will Ferrell has the mundane job of testing jack-in-the-box toys: he cranks each one, braces for the puppet to pop up, and moves on to the next.  That’s not what professional software testing is like.  Testing requires creativity, alertness, and a passion for quality.  There are sometimes mundane tasks of repeating tests or time-consuming environment setup, but a thinking tester usually finds creative ways to solve repetitive tasks.

Scripted tests are detailed instructions for manual test cases.  They are expensive to create and maintain, and result in the tester taking the same narrow path through the software each time they run the test.

Cem Kaner writes, “It appears that following scripts is the very ‘best practice’ available for brain-damaged rats.”

And, although my eyes glaze over whenever I read more than a few pages of a detailed test plan full of scripted tests, and I struggle to reverse engineer the test design motivating them, I’ve fallen for behaviour-driven development tools like SpecFlow and Cucumber.

A test plan / test design specification has two main goals for me:

  1. tell the tester what tests to run
  2. communicate to other project stakeholders what tests will and will not be run

Of these, goal #2 feels like the more important, and a short set of 3-5 scripted tests, written in a syntax like the SpecFlow or Cucumber Given/When/Then, can clarify the requirements and make it clear which acceptance tests must pass before testing begins or a change is checked in.
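As a minimal sketch of what such an acceptance test could look like, here is a hypothetical Given/When/Then scenario with step definitions written for behave, a Python analogue of SpecFlow and Cucumber (the feature text, step wording, and helpers are all invented for illustration):

```python
# features/steps/signin_steps.py -- step definitions for a hypothetical feature:
#
#   Scenario: Registered user can sign in
#     Given a registered user named "alice"
#     When she signs in with the correct password
#     Then she sees her inbox
#
from behave import given, when, then


# Stand-in helpers so the sketch is self-contained; a real suite would drive
# the product's API or UI automation instead.
def create_test_user(name):
    return {"name": name, "password": "correct-password"}


def sign_in(user, password):
    page = "inbox" if password == user["password"] else "sign-in error"
    return {"current_page": page}


@given('a registered user named "{name}"')
def step_registered_user(context, name):
    context.user = create_test_user(name)


@when('she signs in with the correct password')
def step_sign_in(context):
    context.session = sign_in(context.user, context.user["password"])


@then('she sees her inbox')
def step_sees_inbox(context):
    assert context.session["current_page"] == "inbox"
```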

Beyond this short set, I’d agree that diagrams, tables, checklists, and other concise expressions of test ideas generally work better than hundreds of scripted tests.

Another time scripted tests make sense is when the software process is being audited and the auditors will be showing up insisting on detailed test cases with specified expected results.

So, despite their bad reputation, having a select few scripted tests can add clarity to critical tests.

Software is everywhere.  It is in my gas pump, my phone, and even my audio receiver.  The increase in complexity and interconnectedness makes software testing even harder, and testers yearn for better testability.

Testability is being able to observe and control the software we are testing.

Although it is essential to test as the customer will experience it, for complex software it helps to have additional control and observability.

Observability helps make problems more noticeable, and can point to the root cause. 

Control helps simulate situations that may be difficult to test in the system.

Dave Catlett developed the SOCK model of testability, which is a cool mnemonic for attributes of a testability interface:

S – Simplicity
O – Observability
C – Control
K – Knowledge of expected results

The two most essential of these are observability and control, since at least one of those is required in order to have testability.
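As a minimal sketch of what a control-plus-observability seam can look like (the mail-sync component, names, and callbacks below are hypothetical, not from any real product):

```python
import time
from typing import Callable, List, Optional


class SyncEngine:
    """Hypothetical sync component with two testability seams:
    control       -- the transport and the clock can be replaced by a test
    observability -- every state transition is reported through a callback
    """

    def __init__(self,
                 transport: Callable[[str], bool],
                 clock: Callable[[], float] = time.time,
                 on_event: Callable[[str], None] = lambda msg: None):
        self._transport = transport
        self._clock = clock
        self._on_event = on_event
        self.last_sync: Optional[float] = None

    def sync(self, folder: str) -> bool:
        self._on_event(f"sync started: {folder}")
        ok = self._transport(folder)
        if ok:
            self.last_sync = self._clock()
        self._on_event(f"sync {'succeeded' if ok else 'failed'}: {folder}")
        return ok


# A test can now *control* failure and time without a real server or a real
# clock, and *observe* exactly what happened through the event log.
events: List[str] = []
engine = SyncEngine(transport=lambda folder: False,   # simulate a server error
                    clock=lambda: 1234.0,
                    on_event=events.append)
assert engine.sync("inbox") is False
assert engine.last_sync is None
assert events == ["sync started: inbox", "sync failed: inbox"]
```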

But testability isn’t just for testers.

At Microsoft, most groups have a triad of disciplines:

PM – Program Managers
SDE – Developers (Software Development Engineers)
SDET – Testers (Software Development Engineers in Test)

And it shouldn’t only be the testers that care about testability.

For PM’s, building interfaces that allow customers and third-party developers to create solutions that extend the product helps make for a better overall system.  Instead of talking to PM’s about testability, emphasize programmability, extensibility, and the ability to diagnose customer issues.

For developers (SDE’s), better logging, diagnosis, and debugging allow bugs to be less mysterious, more reproducible, and easier to fix.  Developers usually have a good understanding of the code architecture, including which components are well encapsulated and have clean interfaces.  These encapsulated components are an excellent spot to look for testability interfaces.

For testers (SDET’s), the testability interfaces help find bugs that would be otherwise hard to discover. 

Testers should push for testability not only for their own benefit, but for the long-term product benefits.  Testability isn’t just for testers.

testers and rude developers

Posted: September 5, 2010 in Software Testing

Testers find bugs.  One of the skills that good testers soon learn is how to communicate the bug without seeming to attack the developer who wrote the code.  Testers provide negative feedback, and it helps to criticize the product or code, and not the developer directly, and to be objective and not inflammatory in bug descriptions.

And, although this usually works, the tester will eventually run into rude developers.  Developers who belittle testers for not finding bugs quicker.  And, if a bug is found by a customer, developers that place all the blame squarely on the tester for missing the issue.

I’ve met a few of these.  When I was a test manager, one of the developers heavily criticized one of my team members for missing issues.  I tried to coach my tester on managing relationships, better bug finding, and not being bothered by it, but the relationship between the tester and developer was strained from that point on, and our product suffered.  I regret that I didn’t do more to stop the developer from continuing that behavior.

And I’ve been on the receiving end.  I’ve had developers throw tantrums, call me an idiot, and tell my boss that I was incompetent.  It doesn’t feel nice.  And sadly, many of these developers are rewarded for their poor behavior.  They seem to have career success, including promotions, partly due to their strong, assertive personalities.  I try to be open to feedback, but not when it crosses the line and becomes a personal attack or places all the blame on testers for missing issues.

What are the options?

  • Reality check.  Are you being too sensitive?  Discuss your interaction with someone else to see if they crossed the line.
  • Let it pass.  Perhaps the person is having a bad day, or has other stress.
  • Avoidance.  If you can avoid working with an abusive personality, it will be better for you.  Unfortunately for your company, the person will likely continue their ways.
  • Quick correction. Make it clear that you’re willing to be objective and discuss product issues, but you don’t appreciate the swearing or personal attacks.
  • Manager feedback.  Your developer’s manager should be their coach and interested in helping them succeed.  Rude behaviour is not a long term strategy for corporate success.  Be prepared with specific examples, and hope for an open reception.

I don’t need to be best friends with all of my co-workers, but civil, respectful behavior in the workplace should be universal.

Two months ago, I changed teams at Microsoft, and moved from the Office security test team to the Outlook test team.  I’m excited to focus on more than just security, but I still love security testing.  I’m fortunate to be at the BlackHat Vegas conference this week, learning more about how other people approach security testing.

One of the more entertaining presentations was Barnaby Jack’s ATM jackpot bugs.  Seeing an ATM spit out money from a quick local or remote attack makes for good theater.

But I was more impressed by the process taken to find the bugs, and the testability steps he needed, and I think they have broader applicability outside of the narrow space of ATM penetration testing.

Testability is about the control and observability of software.  An ATM is an extreme example of a black box with (usually) limited interfaces exposed to the customer – just the stripe reader, PIN pad, screen, and money dispenser.

  • Obtaining physical ATM’s – Barnaby needed to obtain several ATM’s, and he placed them in his house.  The analogous question when testing cloud or server software is whether you have full access to the running bits for at least some of your testing.
  • Access to the motherboard and debugger interface – In order to understand the software on the ATM, he needed to attach a debugger, and to do so get access to the motherboard and JTAG interface.
  • Access to the USB port – One of his bugs relied on a local firmware upgrade, using the USB port of the motherboard.  Although the money in the ATM is in a safe, the motherboard was much easier to access with a standard key.
  • Access to network traffic – His remote attack relied on understanding the proprietary network protocol, and the remote monitoring interface.  Instead of relying on the network via the phone, he was able to use the TCP/IP interface in the ATM.
  • Injecting explorer.exe – Barnaby’s first step was getting the Windows CE system running in the machine to launch an explorer.exe shell.  This allowed a much better understanding of the files and processes.
  • Copying files to a PC and using Visual Studio – Barnaby copied the executables locally, and used Visual Studio for some of his analysis and debugging.
  • API’s to inject hooks for observability and control – In order to both understand and control the running software, he injected code, using both the Windows hook API and Detours-style hooks.
  • Understanding the proprietary checksum for firmware upgrades – Barnaby needed to take the time to understand the proprietary encrypted and checksummed firmware format to make his firmware upgrade work.

All of these show the dedication needed to understand software well enough to find cool bugs, and apply to non-security testing too.

Having attended both StarEast and BlackHat, I see clear differences between the testers attending each conference, with more elitism in the security test community, more secrecy, and more value placed on the bugs found.  But the basic testing skills still apply, including getting testability in order to find bugs.

My cardiologist understands my heart, and my dentist understands my teeth.  When asked about the future of software testing, one of my colleagues predicted more specialization in testers; much as medical doctors have specialized, so will testers.

This is already happening … some of the software testing specializations that I’m aware of:

  • Performance testers measure how software consumes resources like time, CPU, memory, network I/O, disk I/O, and disk space.
  • User experience testers evaluate the usability of the interface, including the design and look and feel.
  • API testers focus on the programmability of the software.
  • Web testers look at HTML/CSS/AJAX/Flash/Silverlight applications.
  • Globalization / Localization testers look for issues that may happen when using software in different locales and languages, or when the software is translated to another language.
  • Customer focused testers feel the pain of customers, during internal dogfood, beta, and after release.
  • Domain specific testers know the non-software field that the software is aimed at, whether it be accounting, business intelligence, music, architecture, photography, or others.
  • Cloud testers understand the challenges of datacenters, including testing in production, gathering telemetry, and serviceability.
  • Automation testers can turn test ideas into running test automation.  This is often further specialized by knowledge of commercial or open source tools.
  • Logo/compliance testers can evaluate the requirements set out by third parties.
  • Install / upgrade / deployment testers examine how software gets installed or upgraded.
  • Security testers hunt for vulnerabilities that allow escalation of privilege or leaking of sensitive information.

For all of these, there are core skills that help testers be effective, including test design, observation, evaluation, bug advocacy, diplomacy, and continued learning. 

Just as a doctor’s bedside manner and care for the patient are key, I think these core skills matter more than any specialization to the tester’s overall effectiveness in supporting the software development effort.