Wednesday, February 3, 2016

Denial 102: "But we spent a lot of money writing test cases!"

Previously we looked at the psychology of denial - what drives it and makes it tick.  Today we look at an example of denial at work within testing.  Ladies and gentlemen, I give you exhibit A ...

"But we spent a lot of money writing test cases"

I talked a little about this phenomenon in a post way back in 2014.  In my career, I've had to revisit old testing schemes several times ... heck, let's just call it what it is: grave-robbing dead and dormant projects.

Prior to every excavation of the archives, I will have heard the legend "well, we were late going into testing, and the testers spent a lot of time writing scripts".  This usually comes from non-testers, and gives the impression that, as far as they're concerned, I am sitting on a tester's gold mine of material.  It has to be valuable, because we spent a lot of money on it.  [Its value to me has to be equivalent to the cost sunk into it.]

Here's what I hear in that sentence ... "well we were late going into testing" tells me there were a lot of problems.  So many that a basic version of the software could not be made to work well enough for testing to start.  That says there were fundamental issues on this project.  Could one of them be that no-one really knew the specifics of what they were building?  And if that is really the case, what's the chance that the testers magically had a level of clairvoyance which eluded the developers?

And then there's the part that goes "and the testers spent a lot of time writing scripts", which roughly translates to ...

I kid you not - during my career I've seen some managers try to send any test contractors on leave for a month until the product is ready for testing, "to save costs".  So sadly, if you're a contractor and you want to be paid, there's a benefit to looking busy.

As I've mentioned in my previous article, a quick attempt to correlate the execution logs with the test scripts often shows me whole reams of planned testing which was dumped and never run.
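To illustrate the kind of correlation I mean - purely a sketch, assuming the planned scripts and the execution log can each be exported as a CSV with a shared test-case ID column (the file names and column name here are hypothetical, not from any real project) - the check for dumped, never-run testing can be little more than a set difference:

```python
import csv

def unexecuted_cases(scripts_csv, log_csv):
    """Return IDs of planned test cases with no matching execution record.

    Assumes both files have a 'case_id' column - an illustrative
    layout, not a real export format.
    """
    with open(scripts_csv, newline="") as f:
        planned = {row["case_id"] for row in csv.DictReader(f)}
    with open(log_csv, newline="") as f:
        executed = {row["case_id"] for row in csv.DictReader(f)}
    return sorted(planned - executed)
```

In practice the hard part isn't the set arithmetic - it's getting both artifacts into a comparable shape in the first place, which is exactly where the archive digging comes in.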

New Zealand is such a small place, which means you might have worked on that project.  You might have left the company.  You might have left the city.  BUT I WILL END UP FINDING YOU!

It frequently happens - informally, I might add - and I'm glad it does, because it allows me to dig deeper into what really went on and uncover more of the project's story.  And it typically points towards a problematic project - from vague outline to changing scope - rather than a tester who just wrote scripts they had no intention of running.

Sooner or later though, it's my job to burst people's bubble: they're not sitting on a goldmine of test scripts that are ready to run.  I'll take a scan through, and try to use what's been written to test the system, to see how much use it'll be.  Invariably though, it's the results or execution logs which tell me far more about what was done than a folder filled with test scripts.

This aligns with James Christie's discussion of how ISO 29119 impacts testing.  That standard focuses on plans and scripts as auditable artifacts.  James Christie argues that an audit always wants to focus first and foremost on what you actually did (over what you planned to do).  He has an excellent series of articles on that, starting with part one of "Audit And Agile" here.

What I've learned about bursting bubbles is to do it gently.  Always work out ahead of time if anything is salvageable, and have your counter-offer ready (so maybe we exploratory test instead?).  The person who thought the testing was pre-written was probably hoping your test effort would be minimal, because you'd be able to build on past work.  Let them know the degree to which you can capitalise on it - heck, if it just gives you a good set of starting ideas, that's something significant.

Even if you have a good set of test cases which match the execution log, you'll still have to spend time learning the system and its intricacies.  For instance, there might be a line saying "expire the account", and it takes some time to find out there's a test tool which will age an account to its expiry date - but you need to find where they kept that tool.  Almost always, a lot of the verbal knowledge will have gone, or the details are there, just buried in a heap of other details.
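To make the "expire the account" example concrete - this is a hypothetical sketch of what such a test tool typically does, not the actual tool from any project - ageing an account usually means back-dating a stored date so the expiry logic fires today, rather than waiting in real time:

```python
from datetime import date, timedelta

def age_account(account, days):
    """Back-date an account's creation date by the given number of days.

    Hypothetical helper: assumes the account is a dict holding a
    'created' date, with expiry driven by a fixed window in days.
    """
    account["created"] = account["created"] - timedelta(days=days)
    return account

def is_expired(account, window_days=365):
    """An account is expired once its age exceeds the expiry window."""
    return (date.today() - account["created"]).days > window_days
```

The point of the anecdote stands, though: knowing such a helper exists, and where the last team left it, is exactly the verbal knowledge that evaporates between projects.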

Also remember that having too much documentation, as above, can be as much of a bane as having none at all, because you have to spend a lot of time going through it all before deciding if it's any help or a hindrance.  And I don't know about you, but I'm a bit of a slow reader.


  1. One of my favourites, not! But a good post as always.

    I recently read up on SAFe, the Scaled Agile Framework. One of their principles is to not throw good money after bad (Principle #1: Take an economic view). My take on it is that if you've sunk money into a black hole, put warning signs around it and don't do it again, instead of doing more of it!
    What always surprised me is that somehow the developers know what to code, but the testers don't have anything to do and don't know what the project is about. One approach is to give everyone in the project new information at the same time. So when a project starts, get the 3 Amigos (BA, developer, tester) together and talk about customer value, functions, risks and anything else necessary to make that project successful. That's one of the things I did at a previous company, and not only did it reduce project risk, it also improved relationships between developers and testers, both within the company and in their private lives. If you hear a dev and a tester talking in a relaxed way about a project at the table next to you in a bar, you know they care and that something went right after all (as long as no confidential information is discussed).

    Just because there is no product yet doesn't mean there's nothing to test or to do. A test approach needs to be created, risks managed, and discussions about test coverage and acceptance criteria had; maybe user types, scenarios and test missions need creating - the list is nearly endless.

    If a PM or senior manager doesn't think about any of this, we can look at it in at least two ways. Either be sad, disappointed or angry, or educate them and make the next project better, bit by bit.

    1. Thanks Thomas.

      What I'm trying to do here is explore how people often form a perception of testing which is really based on a fallacy.

      One of the issues is that people outside a discipline see "we're spending a lot of money on this" and think there has to be a correlation between the money going in and what's being produced. I've seen this, for instance, where BAs attend a lot of meetings ... so I might expect "a lot of requirements are being written". But the truth is there are so many meetings because requirements ARE NOT being agreed on.

      Being able to understand how people commonly arrive at these fallacies, and being able to engage and discuss them, is the heart of what I'm trying to do here. Plus of course, it helps us not get fooled either.



  2. I would usually think that writing test cases is part of exploring the product under test, so having many scrapped test cases (i.e. not run) is not necessarily a waste - it could also be the result of exploration, or of a product undergoing many changes. Just doing the archival digging doesn't tell. :-)

    Keep digging, mate.

    1. My experience: if the product is running weeks late for testing, the testers haven't had a chance to "get their hands on the product", and are just applying ideas from what requirements they have.