When I worked on the Harrier mission computer, its behaviour was, unsurprisingly, well thought out: two phone-book-sized requirement documents covered every piece of functional behaviour in intense detail. Our test management tool was a must for keeping track of it all. Add to this that we were a team of 16 testers, and it also helped us divide up the work.
But it came at a cost - to work like this, with requirements, scripted testing and so on, we typically needed 18 months between releases. The system was so complex, though, and none of us were pilots, that our only oracles for working out what was going on were the sub-system design requirements.
The problem is that many projects I've encountered since do not have the same rigid waterfall approach. This is hardly surprising - only some fields are that rigidly controlled: flight, medical devices and nuclear power plants are all applications in which you'd expect strongly defined behaviour.
The issue I have with test management tools is that some people in the profession see how well they work for a project like the Harrier mission computer, and then try to use them on other projects in a very different context. There, the results can often be variable.
Most test management tools cover the spectrum of requirement -> plan -> defect. They only work well if you have a strong footprint in all these areas. You need business analysts using the tool to author and amend requirements; otherwise your test team has a lot of data entry to do. And if that's happening while your requirements keep changing - guess who has to keep updating them in your system?
They also only make sense if your requirements are suitably "test driven" to be testable - that is, they're not vague statements.
Here I'm going to work through testing a requirement with a test-tool-driven approach vs a Context Driven approach, and we'll see which stands up best in terms of coverage. The aim is to show how painful and misleading a test management tool can become if you're using it badly, or on an unsuitable project.
Test Tool Approach
This is our requirement, LOG010:
- At the login screen when the user enters a username and password, they are taken to their account page.
Sadly our tester is none too experienced, and so creates a single test, LogTest01, which they then link to the requirement: when the user logs in with the correct username and password, they are taken to their account page.
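To make LogTest01 concrete, here's a minimal sketch of it as an automated check. The `login` helper, the user data and the page names are illustrative stand-ins, not anything taken from LOG010.

```python
# Hypothetical stand-in for the system under test; in reality this would
# drive the real login page. All names here are illustrative.
VALID_USERS = {"alice": "s3cret"}

def login(username, password):
    """Return the page the user lands on after a login attempt."""
    if VALID_USERS.get(username) == password:
        return "account"
    return "login"

def test_log_test_01():
    # The one and only scripted test: the happy path, straight from LOG010.
    assert login("alice", "s3cret") == "account"
```

One test, linked to one requirement - which, as we're about to see, is all the tool needs to declare victory.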
When a manager comes along and checks their test management tool, they see requirement LOG010 with its one linked test.
According to the test management tool, the testing for the Log On page looks perfect: 100% coverage. It's hard to argue with 100%. So hard, in fact, that many people will find it difficult to actually ask the tester "what does 100% here mean?".
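For what it's worth, the arithmetic behind that figure is trivial. Here's a toy sketch of how such a metric is typically computed, assuming (as many tools do) that a requirement counts as "covered" once at least one test is linked to it:

```python
# Toy model of requirements coverage as a test management tool reports it:
# a requirement counts as "covered" if at least one test is linked to it.
linked_tests = {"LOG010": ["LogTest01"]}

covered = sum(1 for tests in linked_tests.values() if tests)
coverage = covered / len(linked_tests)
print(f"{coverage:.0%}")  # 100% - says nothing about how good LogTest01 is
```

The number measures linkage, not test quality.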
Indeed, if pressed, I have heard many a tester claim that "the tests are as good as the detail in the requirements provided", and that if there's any problem with the testing performed, its source is a lack of clarity in the requirements. It's "not a testing problem, but a requirement problem".
Context Driven Approach
For the Context Driven School of Testing, this statement that it's "not a testing problem, but a requirement problem" is no get-out clause. As professionals we do the best we can with what we're given, and we use our skills to work beyond it when we can.
In a Context Driven mentality, a software product is evaluated against "oracles", or expectations of how the software should behave. The most obvious of these is requirements, but another kind of oracle can be as simple as "when I use a similar product, it behaves like this".
Logging into a site is something we do every day, so unlike that tester, we should take what's in requirement LOG010 and work from it, and beyond it. Yes, like script LogTest01, we should have a test that,
- When I log in with the correct username and password, I am taken to the account page.
However, the following tests are implied (though not directly stated) in that requirement, drawing on our use of similar websites,
- When I log in with an incorrect username and password, I am not logged in.
- When I log in with a correct username but incorrect password, I am not logged in.
- For security, it might be best if the system does not tell me whether it was the username or the password that was incorrect.
- If I try to log in with an incorrect password too many times, I would expect, for security, to be locked out of the account, even if I then give the correct password.
- I expect, if locked out, to be locked out either for a period of time or until I contact some form of helpdesk.
- I expect the use of upper/lower case not to be important when entering my username.
- I expect the use of upper/lower case to be important when entering my password.
- I expect my password to be obscured by asterisks.
I ran this by one of my test team, and true to form, they also came out with,
- Should I be able to copy to/from the password field?
- The username and password are kept in the database - are there any special characters, such as *, ', ", \n etc, which will cause an issue when used in either field because of the way the code will interpret them? (See the sketch after this list.)
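To make these less abstract, here's a rough pytest sketch of some of the checks above, run against a toy fake of the login behaviour. Everything in it - the `login` helper, the lockout threshold of three attempts, the page names - is an assumption for illustration, not something stated in LOG010.

```python
import pytest

# Toy fake of the login behaviour described above, purely so the checks
# below are runnable. Thresholds, names and pages are all assumptions.
VALID_USERS = {"alice": "s3cret"}
MAX_ATTEMPTS = 3
failed_attempts = {}

def login(username, password):
    """Return the landing page for a login attempt, with lockout."""
    key = username.lower()                       # username: case-insensitive
    if failed_attempts.get(key, 0) >= MAX_ATTEMPTS:
        return "locked"                          # stays locked, even if correct now
    if VALID_USERS.get(key) == password:         # password: case-sensitive
        failed_attempts[key] = 0
        return "account"
    failed_attempts[key] = failed_attempts.get(key, 0) + 1
    return "login"                               # no hint as to which field was wrong

@pytest.mark.parametrize("username,password,expected", [
    ("alice", "s3cret", "account"),                  # LOG010's happy path
    ("mallory", "s3cret", "login"),                  # incorrect username
    ("alice", "wrong", "login"),                     # correct username, wrong password
    ("ALICE", "s3cret", "account"),                  # username case shouldn't matter
    ("alice", "S3CRET", "login"),                    # password case should matter
    ("alice'; DROP TABLE users;--", "x", "login"),   # awkward characters mustn't break it
])
def test_login_behaviour(username, password, expected):
    failed_attempts.clear()                          # isolate each case
    assert login(username, password) == expected

def test_lockout_survives_correct_password():
    failed_attempts.clear()
    for _ in range(MAX_ATTEMPTS):
        login("alice", "wrong")
    assert login("alice", "s3cret") == "locked"
```

Even this sketch only scratches the surface - it says nothing about password masking or copy/paste behaviour, which need a UI-level check.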
Using the test tool driven model, we easily get 100% test coverage from a single test (or rather, "the illusion of 100%"). With the Context Driven approach, we cover off a lot more behaviour in about 11 tests - but the Context Driven approach doesn't offer up percentages; it's more focused on a dialogue between the tester and manager about what's been done. The coverage certainly isn't 11 x 100%.
The Context Driven approach certainly covers a lot more ground. I know I'm probably being unfair to people who use test management tools - most testers I know would have added an "incorrect login" test as well (so maybe 2-3 tests). But the truth is, the test management tool that is supposed to "make testing easy and visible" can often fog things up with a smokescreen of numbers, and such tools particularly fail when you can't break requirements into "discrete testable statements". They also subtly decide your testing strategy, because testers feel driven to provide just enough tests to "make it 100% coverage". It's possible to fix this by spending time breaking requirements apart into smaller testable requirements - but that involves a lot of initial outlay, which only pays off if the requirements aren't likely to change.
The Context Driven approach embraces the fact that it's often extremely difficult to achieve such perfection in requirements (unless perhaps you're in avionics and are biting the bullet of "this is going to take a lot of time"). More than anything it wants to challenge complacency (especially when you see that green 100% sign), and to get testers focused on running as many tests as possible rather than making assumptions - though perhaps running the most compelling tests first.
It's certainly something to think about. Next time you're feeling a bit complacent - "well, I've tested everything here" - try to take a step back, ignore the numbers and the requirements, and go with gut feel. Is there another test, coverage be damned, that you'd like to try? You might be surprised.