Thursday, February 4, 2016

Denial 103: "But we spent a lot of money developing our automation"

Previously we looked at the psychology of denial - what drives it and makes it tick.  Today we look at an example of denial at work within testing.  Ladies and gentlemen, I give you exhibit B ...

"But we spent a lot of money developing our automation"


I know the lure for a lot of people when it comes to automation is that:
  • the tool is cheap (free is best, yes?)
  • it's quick to produce automation scripts
  • it can run scenarios faster than a manual tester can

In a way this trap shares a lot of ground with the "testers spent a lot of time scripting" fallacy we discussed last time.

If the tool makes it quick to create scripts, it's easy to run a short one-week demo of the product and choose it as your automation strategy.  However, six months down the line, your manager is wondering when the automation magic will kick in, asking "where's my ROI?".

Now, as much as I hate that term, they've got a point - by now you have a large library of automated scripts.  But those scripts constantly need running and correcting (that wasn't in the brochure).  They fail often, mainly in small but irritating ways, and typically a failure requires a modification to the automation script rather than pointing to a problem in the software under test they're supposed to check.

And any time there's a major change to the system, large numbers of scripts need modification.  Heck, they all needed some modification when the login screen was modified!
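
To make that concrete, here's a minimal sketch (in Python with Selenium, against a hypothetical app - the URL and element IDs are invented for illustration) of the kind of script record-style tools tend to produce.  The login steps are baked into the test itself, and every script in the suite carries its own copy:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # A typical recorded script: the login steps are duplicated
    # verbatim at the top of every test in the suite.
    driver = webdriver.Chrome()
    driver.get("https://example.test/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # ... the check this script actually exists for ...
    assert "Dashboard" in driver.title
    driver.quit()

Rename that "login-button" ID and every script in the suite fails in the same way - and each one needs the same hand-edit.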

This is because when you evaluated the tool and the method, you went with cost and how easy scripts were to make (hey, record and playback, how much simpler could it be?).  What you missed was maintainability.  This was covered in a WeTest Workshop from way back, and dealt with under "Automation TLC".  But a recently released article by Messrs Bach and Bolton covers some similar ground.  [As a hint, I've found that if you don't know what the term "code reuse" is and you're writing large numbers of automation scripts, maybe you shouldn't ... ask a developer instead.]
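
As a rough sketch of what that reuse looks like in practice (same hypothetical app and element IDs as above), the login steps can live in one shared helper that every test calls:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def login(driver, username, password):
        # The only place that knows about the login screen.  When
        # that screen changes, this is the one spot to edit.
        driver.get("https://example.test/login")  # hypothetical URL
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()

    # Each test now calls the helper instead of carrying its own
    # recorded copy of the login steps.
    driver = webdriver.Chrome()
    login(driver, "tester", "secret")
    assert "Dashboard" in driver.title
    driver.quit()

One change to the login screen now means one change to the suite, however many scripts log in.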

The denial mindset here (much as with manual scripts) is that you've invested a lot of time and effort into automation scripting.  At some point that really should start paying off.  Right now it's not going any faster than the tests the manual testers used to run.  And although the automation occasionally finds problems in new software builds, about 80-90% of the time a failure means a problem in the script itself which needs changing.  And typically if it's a problem in one script, it's a problem in many scripts.

The problem is, if you've not chosen a tool and built up your automation scripts with maintainability in mind, it will NEVER pay off.

Your scripts are too brittle, and they will continue to break in multiple places from small changes.  The more scripts you have, the bigger the maintenance bill for everything that needs changing.  The hard thing is, it's probably easier to start from scratch with a new tool than to try to retrofit maintainability into what you already have.
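
For comparison, here's the sort of structure that builds maintainability in from the start - a minimal page object sketch, again with invented locators, where all knowledge of a screen lives in exactly one class.  A changed selector is then a one-line fix, however big the suite grows:

    from selenium.webdriver.common.by import By

    class LoginPage:
        # All knowledge of the login screen's locators lives here.
        URL = "https://example.test/login"  # hypothetical URL
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "login-button")

        def __init__(self, driver):
            self.driver = driver

        def login(self, username, password):
            self.driver.get(self.URL)
            self.driver.find_element(*self.USERNAME).send_keys(username)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # Usage in a test:
    # LoginPage(driver).login("tester", "secret")

None of this rescues a suite that has already sprawled, of course - which is exactly why starting over is often cheaper than retrofitting.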

So we chose a bad tool ... and implemented it badly as well.  Is this denial?  If you just keep fixing it every time it breaks, then yes, it is.

What we have here is what I like to call "technical testing debt", and it shares attributes with other testing debt.  This debt keeps rearing its head - you don't have the time or backing to go back and deal with the fundamental issue.  So you band-aid it, and then band-aid it again, and then again.

And because you're addressing the problem so often, there's a perception problem: surely that technical debt is decreasing with each occurrence you fix?  Right?  The technical debt is causing you to sink time and resources into it - and so the sunk cost fallacy makes people say "as you're spending time on it, it must be decreasing".

No - not at all.  Each fix addresses only the occurrence where it broke - the place where the debt surfaced.  To really do justice to the problem, you have to pretty much come to a full stop and do a serious reworking of your whole approach, going much, much deeper.  That's something that's very hard to get backing for - especially if someone thinks you're already addressing that technical debt piecemeal "because it keeps cropping up as a problem".

The good news - not every automation effort goes like that.  But ask around: everyone I know has the tale of a suite of automation which went this way.  The trick is to recognise that if you're fixing breaks in your scripts more often than you're finding problems in your software under test, you need to ask if you're in a state of denial, and you need to address some fundamental problems in your automation strategy.

Post-script

Sometimes after I launch a blog entry, someone gives me a link so good I need to add it to my article.  So thank you, Simon, for recommending this article.

