Wednesday, July 16, 2014

Schrodinger's jokes ... and some thoughts on test cases ...

Another day, another Schrodinger meme ...


Everybody loves Schrodinger, or rather Schrodinger's Cat.  Heck, I've even talked about it in the past.  The thought experiment ripples into a lot of everyday life.  The core of the experiment is this: if you put a cat into a closed environment where a death trap has a 50% chance of being sprung within an hour, you can theorise all you want.  But the only way to know if the cat is alive or dead is to open the box and look inside.


The same, it seems, is true for many things in life - including my recent stint at stand-up.  I thought a lot about my 10 minute slot, and to be honest wasn't too concerned.  After all, I'd done plenty of presentations before, sometimes to difficult rooms.  This one just had to be funny!

I spent a lot of time running through lines and building up enough of a set.  Thankfully I had some great support from friends like Rob Deb and Alex Finch, who are stand-up comics themselves.

Now they'd warned me of the effect I was about to experience, but no amount of being told actually prepared me.  It was scary, and more than once I found I'd lost where I was.  But the main thing was, I would build up and build up to the killer line of humour.  That sure-fire gem of wit.  And then I delivered it ... to silence.

That, it must be said, was intimidating.  Oh, I did get laughs, so I didn't die on stage.  But the places where I thought the big laughs would be, they just weren't.  Both Rob and Alex had warned me of this - they'd told me to try out my material by dropping lines into conversation and seeing if they got a reaction.  But even then, they said, what gets a big laugh today might not get the same laugh tomorrow.  The bottom line though: the only way to know if your material works is to try it.

Of course this leads back to our testing - and much like Schrodinger's thought experiment, we often try to build up a robust suite of tests through pure theory.  How does that work out?


Over the years I've had to do more than my fair share of what I call "test archaeology".  Nod your heads if any of this seems familiar ... a few years ago a bespoke project was put together, everyone on the project has since moved on (to other places, not died - this isn't a cursed project after all), and it's been in production ever since.  Only now, after a few years, the customer wants some changes.


It's usually at this point that the only survivor from the original project, we'll call them Old Jethro, remembers "our testers wrote lots of scripts, and we spent a lot of money while they were there ... writing their scripts".

Test archaeology can be a fascinating business - you look through the archives of what was done, and more often than not what I find is a surprise.  There can be reference to a number of scripts, and even a high-level view of those scripts and their names.  At this point one of two things happens.

Scenario One - Script detail TBD

You open the scripts.  The format is amazingly thought out, with details of the author, the date, the requirement references.  It looks incredible.  Then you look at step one, and it says "TBD".

Scenario Two - Oops

The other frequently encountered outcome is to find the scripts, and they have just the detail you need to know how to do some odd behaviours (as there's no user guide).  You go to use one, and - "that's suspicious" - the instructions don't work.

So you check the execution logs, the weekly reports, the exit report.  And soon it becomes apparent that a lot of scripts were written ... but none of them were actually run.  I've managed to collar one of those script writers and ask "what happened?".  And typically the response is "well, requirements and scope kept changing ... and the release date kept moving back ... so we were asked to look busy and keep writing scripts".

I think we've all been in that boat, and been coerced into doing something like that!


This is how many a project that looks well documented and mapped out from the outside can, under scrutiny, turn out to be nothing of the sort.  The problem is over-reliance on the idea that "if we have test cases, testing is easy" - just put testing on auto-pilot and all will be fine.

It's hard to choose the worse of the two scenarios.  The test cases which look written but aren't (Scenario One) mean at least you have a clean sheet to work with.  The problem is that from the outside it doesn't look like "TBD", and you'll have a battle to prove why you need to spend more time planning (and even experimenting) with your testing, when it all should already be done for you.  "Surely it's as simple as Ctrl+C, Ctrl+V?"

Scenario Two is by far the scarier though - there are whole reams of tests which you have to spend time investigating before you can make the case to the project for why you're not going to run them.  Old Jethro will keep reminding the team how much money was spent writing them, so they "darn well ought to work".

At the end of the day, a test case is really a very formalised test idea, with a lot of the effort going into "making it formal".  The problem is, a test idea is very much like Schrodinger's Cat or my stand-up material.  You can't just keep it secret in a box and trust it's all good.  It needs to be brought out and tried against the system to find out whether it's any good or not.

Even when I've used test cases for my testing, I've always had either a prototype of the software or an old version, so I could try out as much of the testing as possible (sometimes barring a new feature that was absent) and make sure that what I'd written down made sense.

In truth, test cases don't have to be a bad thing.  But an over-reliance on "we have test cases written" meaning "we have testing sorted" is going to get you stung when you find that Scenario One or Scenario Two applies to your precious test cases.  It's like Schrodinger saying "have you met my cat Mr Tiddles ... he's been in this box for the last four hours ...".
