Test cases can be as long or as short as they need to be. They can cover a lot of requirements in one go, or maybe just one.
And yet many metrics for tracking test progress seem to fall back on "what percentage of test cases are complete".
Think for a while about this fruit bowl ...
If I told you that today my team had finished off 5 pieces of fruit, how much would be left for the next few days? I bet you'd hope (if you're the business owner) that one of those pieces of fruit was the watermelon. But what if it was just 5 grapes?
That's what happens when you count test cases: you're saying a grape is the same amount of fruit as an apple ... or a watermelon.
If you can find a better way to track when we're out of fruit, please comment below ...
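To make the grape-versus-watermelon point concrete, here's a toy sketch. The fruit weights are invented numbers, purely for illustration: counting pieces says you're much further through the bowl than counting grams does.

```python
# Toy numbers: one watermelon, one apple, and 200 grapes,
# with rough (made-up) weights in grams per piece.
FRUIT_WEIGHT = {"watermelon": 3000, "apple": 150, "grape": 5}
bowl = ["watermelon", "apple"] + ["grape"] * 200

# "Today my team finished off 5 pieces of fruit" ... all grapes.
eaten = ["grape"] * 5

by_count = len(eaten) / len(bowl) * 100
by_weight = (sum(FRUIT_WEIGHT[f] for f in eaten)
             / sum(FRUIT_WEIGHT[f] for f in bowl) * 100)

print(f"progress by piece count: {by_count:.1f}%")   # ~2.5%
print(f"progress by weight:      {by_weight:.1f}%")  # ~0.6%
```

Swap the five grapes for the watermelon and the two numbers trade places: by count you've barely started, by weight you're most of the way done.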
Simple. How about the weight of fruit left? 100g of watermelon is the same as 100g of grapes.
Nice - and it has more value for sure! Of course, even there you'd have to say "well, some of the fruit is edible, and some is rind", so it wouldn't be completely accurate, but it's a much better indicator.
So, how do we measure the weight of a test case? Number of requirements covered per test case?
Probably measuring requirements tested is a better measure if you *have* to use one. Even so, as I discussed, it's easy to get blinded by even this ...
http://testsheepnz.blogspot.co.nz/2013/10/it-that-requirement-100-tested-yet.html
http://testsheepnz.blogspot.co.nz/2014/06/testing-is-just-as-requirements-right.html
What an analogy to express your concern, completely agree with it as a tester!
Glad it's got you in agreement.
Apart from looking at the bowl?
In all seriousness, this is a matter of weighting (http://en.wikipedia.org/wiki/Weighting).
If you really want to track test cases/scripts and somehow measure them with any amount of usefulness, you're going to have to track related data and/or metadata, each of which needs to be weighted, e.g. how many steps there are, how big the steps are, how to calculate how big the steps are, estimated or historical time to execute, number of previously found bugs in the steps or related steps... etc.
As you apply more weighting to the cases you might be able to get more useful information, but what's the cost of gathering that information, analysing it, and maintaining it? Are you getting useful information out of it, are you getting your effort's worth out of it?
I have yet to have any 1st or 2nd hand experience of a system that uses weighted metrics effectively. I don't doubt that it can be done, but I haven't personally seen or heard from someone I trust that they've seen it be done.
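The weighting idea described above could be sketched like this. Everything here is hypothetical - the test case names, step counts, timings, and the blend in `weight()` are invented purely to show the shape of such a metric, not a scheme anyone in this thread has used:

```python
# A sketch of weighting test cases by metadata (step count and
# historical execution time), then reporting progress as weighted
# effort done rather than raw case count. All data is invented.
test_cases = [
    # (name, steps, minutes_last_run, done)
    ("login smoke",          3,  2, True),
    ("checkout end-to-end", 40, 55, False),
    ("profile edit",         8, 10, True),
]

def weight(steps, minutes):
    # Arbitrary blend of two of the suggested factors; a real scheme
    # would need tuning - and ongoing maintenance.
    return steps + minutes

total_weight = sum(weight(s, m) for _, s, m, _ in test_cases)
done_weight = sum(weight(s, m) for _, s, m, d in test_cases if d)
done_count = sum(1 for case in test_cases if case[3])

print(f"by case count: {done_count / len(test_cases):.0%}")  # 67% "done"
print(f"by weight:     {done_weight / total_weight:.0%}")    # ~19% done
```

Even this tiny example shows the cost question biting: every step count and timing has to be gathered and kept current for the number to mean anything.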
This is indeed a superb point: "Are you getting useful information out of it, are you getting your effort's worth out of it?" By introducing all this tailoring, you're helping the model to match reality a bit more, but is it enough to make it really accurate? And is it really worth the effort? Make no bones about it, these are hard questions to answer, and often the feeling is that "no, it's not worth it". But without metrics and a percentage through, what can we really be reporting on?
Often our customers ask us about metrics: how many test cases have been tested/passed for this release. I think that this is also completely irrelevant, and does not say anything about the quality. Test coverage in unit tests is more interesting. In fact they should rather ask us "how sure are you about the quality of this release", and (from our answer) they would know more than if we gave them that number...
I know myself I often feel my customer is really asking me "what things have I tested so far", but what comes out is "what percentage through are we". What if we're 98% through, but we're still using a back-door hack because the login page isn't working... ;-)
Perhaps the situation could be brought a bit closer to the scenario of testing software. For example, the "fruit owner" hires us to evaluate the quality of an edible arrangement (freshness of fruit, taste, presentation, etc.).
Evaluating the quality of the arrangement of course includes considerably more than simply eating each piece of fruit; however, the only question asked is "how many pieces of fruit have you finished?" (without regard to which fruit, its size, weight or other factors that would make even this limited question into potentially useful information). This of course is absurd on its face, especially since he has previously stated that if we find a problem, we will receive an entirely new (presumably similar) edible arrangement with the issue resolved so that we may continue our evaluation.
A better set of questions might be something like:
Do you believe you've gathered enough information to make an informed evaluation?
When do you believe your evaluation might be better informed?
Do you believe you've evaluated the major risks with the current arrangement?
Are there other risks or information to gather that you believe may be worth the effort of evaluation?
How long do you estimate this might take to evaluate?
What is the quality of the arrangement given the information you know currently?
How confident of this assessment are you, and do you have the information you feel necessary to be confident of an evaluation?
What other information might I find useful or important as the fruit owner that you've discovered during your evaluation?
Glad to see you're asking questions. The exercise is a bit of a surreal one, but of course it's simple: you have a bowl of fruit, and you need to estimate when the bowl of fruit "is done". And ironically fruit consumption has a hell of a lot fewer variables than software test execution. Maybe we're avoiding the strawberries because one of us has a fatal allergy to them.
Of course if we had a constant team, and our fruit was all gala apples, we could expect something more uniform! 8-)
Some great comments here, and I love how much people are thinking about the challenge!