When I worked on the Harrier mission computer, unsurprisingly its behaviour was well thought out, and there were two phone-book-sized requirement documents covering every piece of functional behaviour in intense detail. Our test management tool was a must to keep track of it all. Add to this that we were a team of 16 testers, and the tool also helped to divide up the work.
But it came at a cost - working like this with requirements, scripted testing and so on typically meant we needed 18 months between releases. The system was so complex, and none of us were pilots, that our only oracle for working out what was going on was the sub-system design requirements.
The problem though is that many projects I've encountered since do not have the same rigid waterfall approach. This is hardly surprising - there are some fields you expect to be quite rigidly controlled - flight, medical and nuclear power plants are all applications where you'd expect strongly defined behaviour.
The issue I have with test management tools is how some people in the profession will see how well they work for a project like the Harrier mission computer, and try to use them on other projects in a different context. There, they can often find the results are variable.
Most test management tools cover the spectrum of requirement -> plan -> defect. They only work well if you have a strong footprint in all these areas. You have to have business analysts using the tool to author and amend requirements; otherwise your test team will have a lot of data entry to do. And if that's happening but your requirements keep changing - guess who has to keep updating them in your system?
They also only make sense if your requirements are suitably "test driven" to be testable - that is, they're not "vague statements".
Here I'm going to thrash out a test tool driven approach to testing a requirement vs a Context Driven approach, and we'll see which stands up best in terms of coverage. This is to show how painful and misleading use of a test management tool can become if you're using it badly or on an unsuitable project.
Test Tool Approach
This is our requirement - LOG010,
- At the login screen when the user enters a username and password, they are taken to their account page.
Sadly our tester is none too experienced, and so creates just one test, LogTest01 - a single check that logging in with the correct username and password takes you to the account page - which they then link to the requirement.
When a manager comes along and checks their test management tool, they see one requirement, one linked test, and a coverage figure of 100%. According to the test management tool, the testing for the Log On page looks perfect - it's hard to argue with 100%. So hard, in fact, that many people will find it difficult to actually ask the tester "what does 100% here mean?".
Indeed, if pressed I have heard many a tester claim that "the tests are as good as the detail in the requirements provided", and that if there's any problem with the testing performed, its source is a lack of clarity in the requirements. It's "not a testing problem, but a requirement problem".
Context Driven Approach
For the Context Driven School of Testing, this statement of it being "not a testing problem, but a requirement problem" is no get-out clause. As professionals we will do the best we can with what we're given, and we'll use our skills to work beyond it when we can.
In a Context Driven mentality, a software product is evaluated by "oracles", or "expectations of how the software should behave". The most obvious of these is requirements, but another type of oracle can simply be "when I use a similar product, it behaves like this".
Logging into a site is something we do every day, so unlike that tester, we should take what's in requirement LOG010, and work from and beyond it. Yes, like in script LogTest01, we should have a test that,
- When I log in with the correct username and password, I am taken to the account page.
However, from use of similar websites, and implied (but not directly stated) in that requirement, are the following tests (sketched as rough automated checks after the lists below),
- When I log in with an incorrect username and password, I am not logged in.
- When I log in with a correct username but incorrect password, I am not logged in.
- For security, it might be best if the system does not tell me whether it was the username or the password that was incorrect.
- If I try to log in with an incorrect password too many times, I would expect, for security, to be locked out of the account, even if I subsequently give the correct password.
- I expect if locked out to be locked out for either a period of time, or until I contact some form of helpdesk.
- I expect the use of upper/lower case not to be important when entering my username
- I expect the use of upper/lower case to be important when entering my password
- I expect my password to be obscured by asterisks
I ran this by one of my test team, and true to form, they also came out with,
- Should I be able to copy to/from the password field?
- The username/password is kept in the database; are there any wildcard or special characters in either field, such as *, ', ", \n etc, which will cause an issue when used as a username or password because of the way the code will interpret them?
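As an aside, here's a minimal sketch of what that wider set of checks might look like if automated. It's purely illustrative - the LoginService stub, its login() API, the example credentials, the lockout threshold of three attempts and the use of pytest are all my assumptions, not anything stated in LOG010:

```python
# Illustrative only: a toy in-memory "login service" standing in for the real
# system, so the checks above can be expressed as tests. The class, its API,
# the example credentials and the lockout threshold of 3 are all assumptions.
import pytest


class LoginService:
    LOCKOUT_THRESHOLD = 3  # assumed policy - LOG010 says nothing about lockout

    def __init__(self, username="alice", password="S3cret!"):
        self._username = username.lower()  # usernames treated as case-insensitive
        self._password = password          # passwords are case-sensitive
        self._failures = 0
        self._locked = False

    def login(self, username, password):
        """Return 'account page' on success, a generic error otherwise."""
        if self._locked:
            return "account locked"
        if username.lower() == self._username and password == self._password:
            self._failures = 0
            return "account page"
        self._failures += 1
        if self._failures >= self.LOCKOUT_THRESHOLD:
            self._locked = True
        # Deliberately vague message: don't reveal which half was wrong.
        return "invalid username or password"


@pytest.fixture
def service():
    return LoginService()


def test_correct_credentials_reach_account_page(service):  # this is LogTest01
    assert service.login("alice", "S3cret!") == "account page"


def test_wrong_password_is_rejected_with_a_generic_message(service):
    assert service.login("alice", "wrong") == "invalid username or password"


def test_unknown_username_is_rejected_with_the_same_message(service):
    assert service.login("bob", "S3cret!") == "invalid username or password"


def test_username_case_is_not_significant(service):
    assert service.login("ALICE", "S3cret!") == "account page"


def test_password_case_is_significant(service):
    assert service.login("alice", "s3cret!") != "account page"


def test_lockout_after_too_many_failures_even_with_correct_password(service):
    for _ in range(LoginService.LOCKOUT_THRESHOLD):
        service.login("alice", "wrong")
    assert service.login("alice", "S3cret!") == "account locked"


@pytest.mark.parametrize("tricky", ["*", "' OR '1'='1", '"', "\\n"])
def test_special_characters_do_not_log_you_in_or_crash(service, tricky):
    # We only assert "no crash and no accidental login" here; the real
    # expectations (e.g. how the database handles these) need a conversation.
    assert service.login(tricky, tricky) != "account page"
```

Even in this form, the interesting part is the conversation each test name invites, not the count of green ticks it would add to a tool.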
Using the test tool driven model, we easily get 100% test coverage in a single test (or, I would rather say, "the illusion of 100%"). But with a Context Driven approach, we powerfully cover off a lot more behaviour in about 11 tests - though the Context Driven approach doesn't offer up percentages; it's more focused on a dialogue between the tester and manager to discuss what's been done. The coverage certainly isn't 11 x 100%.
The Context Driven approach certainly covers a lot more ground. I know I'm probably being unfair to people who use test management tools - most testers I know would have added an "incorrect login" test as well (so maybe 2-3 tests). But the truth is, the test management tool which is supposed to "make testing easy and visible" can often fog it up behind a smokescreen of numbers, and such tools particularly fail when you can't break requirements into "discrete testable statements". They also subtly decide your testing strategy, because testers feel driven to provide just enough tests to "make it 100% coverage". It's possible to fix this by spending time breaking apart requirements into smaller test requirements - but this involves a lot of initial outlay that only pays off if the requirements aren't likely to change.
The Context Driven approach embraces the fact that it's often extremely difficult to achieve such perfection in requirements (unless perhaps you are in avionics and are biting the bullet of "this is going to take a lot of time"). More than anything it wants to challenge complacency (especially when you see the green 100% sign), and get testers focused on running as many tests as possible rather than making assumptions - though perhaps trying to run the most compelling tests first.
It's certainly something to think about - when you're feeling a bit complacent, thinking "well, I've tested everything here", try to take a step back, ignore the numbers and requirements, and go with a gut feel. Is there another test, coverage be damned, that you'd like to try? You might be surprised.
A context-driven approach affords different interpretations. I'd like to offer one here. You say, "'oracles' or 'expectations of how the software should behave'". We don't think of oracles that way (or rather, that's only one notion of what an oracle might be). We think of an oracle as a way to recognize a problem.
One risk associated with oracle-as-expectation is that explicit expectations tend to be finite. The calculator returns 4 as the result of 2 + 2. But even when that expectation is met, there can be terrible problems. Maybe the calculator returns 4 for every calculation. Maybe the calculator only handles a hundred calculations, and then overflows its memory and dies. Maybe pressing the equals key results in an intolerable delay in returning the result.
We can help defend ourselves against missing important problems by realizing that, much of the time, we will fail to anticipate a problem until we recognize it. Whatever we have described or decided in advance represents only a fraction of the observations that we could make, and therefore part of our preparation should involve preparing ourselves to notice problems.
Cheers,
---Michael B.
Interesting - and thanks for the comment. I like your definition of oracles, which is certainly something to think about. I am trying to develop my team's understanding of oracles (beyond just requirements), and getting them to look at a website such as Twitter sign up, and tell me without any up-front requirements "what are your expectations". To develop a sense of trust in their "testing gut feel".
Why does everyone always quote the 2 + 2 calculator at me? According to my Ingsoc calculator, sometimes that equals 5, sometimes equals 3, and sometimes equals 4. Sometimes it equals all of them together. But that's #1984 for you ...
Yeah, I used to really mess up the reports from these test management tools. The scripted testers would execute their tests and get 100% coverage with no failures, but then there would also be 50 defects logged from a certain person who was doing exploratory testing and logging the bugs from that in the tool.
Conference calls were a lot of fun as test managers tried to explain to project managers why there was this discrepancy between coverage and defects.
Yeah - for the record, I think test management tools can work when (a) dealing with complexity, (b) there is not much change in requirements and (c) where requirements can be split into simple, easily testable statements. However I think most projects don't meet (a), (b) and (c). And what you're left with is people trying to do the best they can - but fitting a rigid structure around a framework that is flexible. Such an approach is never going to work.
"what does 100% here mean?"
Yup. Another case where a metric is a poor substitute for the real goal.
Elegantly put, Joe.
Hi TestSheep, this is really interesting.
Two comments:
1) Whoa, I didn't know I was doing Context-Driven Testing all the time. Sounds like requirements-driven testing is nonsense in all but the areas you mentioned.
I often use test cases like "Function ABC FORM TEST". Exact test steps are not described, since it should be common sense what has to be tested on a FORM (saving, validation rules, tab order, ...) - how stupid would it be to create a long list of expected results?
2) It's not really the report metrics that are the problem, but the pass/fail scheme IMO.
That was an important reason for me to create my own tooling for testing.
I regularly use different schemes, e.g. 100% perfect, minor issues, big issues, completely unstable.
This is no substitute for good communication of the test results, but it's much better than red/green charts.
But I think it's nonetheless useful to have some charts summarizing quality state.
Glad to see it's got you thinking. In my feeling and experience, test reporting is (and should be) more "qualitative" than "quantitative". That is, the important thing is describing (in summary) what actions you've done, over saying that you've performed a certain number of them.
Graphs can only represent numbers (i.e. quantitative reporting).
So try these two kinds of reports, and tell me which one is more meaningful ...
"We have completed 20 login tests, and have another 10 to do".
"We have been testing logins this morning - this has included successful logins, invalid usernames, invalid passwords, use of incorrect caps in username, use of incorrect caps in password ... this afternoon we're going to continue trying to use special characters in passwords"
A number of tests is purely arbitrary; it doesn't tell you or anyone else what you're testing. Most business owners' concerns are around what your testing includes.