Thursday, June 27, 2013

The Room 101 of testing …


“You asked me once, what was in Room 101. I told you that you knew the answer already. Everyone knows it. The thing that is in Room 101 is the worst thing in the world.”

In George Orwell's Nineteen Eighty-Four, there is a room everyone dreads, Room 101.  It is a room used to break people, because it contains the thing a person fears the most – and that thing is different for everyone.

With my 101st post, I’ve now covered and explored a lot about testing, and maybe it’s time to consign a few demons to the testing version of Room 101.  I’m about to attempt to exorcise and banish some of (in my opinion) testing's evils into this room, which I hope people will see fit to treat like Pandora’s Box, and never open again …


Best practices


The idea of best practice (to those who champion it) is that there is only one way to do testing.  The right way.  And all you have to do is apply it from one project to another.

When we are junior testers, we tend to go “I am now working on Project Pluto.  My previous assignment was Project Neptune, and it was a success.  So I will do everything here as we did for Project Neptune, and it will also be a success”.  I know this because I did indeed have this mindset when I was much younger.

In itself this isn't a shock-horror thing.  Of course we should be building on what's worked for us in the past.  What takes a more sophisticated mindset (and broader experience) is understanding WHY that approach worked for Project Neptune, and then working out whether Project Pluto shares the characteristics which mean the same approach will work there.  Some people are shocked to find the answer can be an emphatic NO.

As testers we learn this either by having some wonderful mentors who guide us out of this trap, or by making the mistake ourselves and attempting to rescue the situation when things go wrong.

Unfortunately there are some testers who, when they come across an issue with their best practice, will not admit that their approach is wrong at all.  Instead their testing strategy was perfect; it was the rest of the software, from requirements to coding, that “was delivered wrong”.

As I write this, I realise the incredible ludicrousness of that statement, but also realise how guilty we can be of it at times.  Software development lifecycles are not run for the benefit of testing; testing is run for the benefit of the software development lifecycle – and often we can forget that.


Fools with tools


Tools.  I think they are the bane of software testers, and for some they will do more harm than good.  The testing market is filled with them, and they promise the world – but rarely deliver on that promise.  In some ways we'd be better off without them, because then we'd know there are “no magic shortcuts”.

I believe this phenomenon touches on something I experienced myself.  In the mid-90s I did a failed PhD research project in electrical engineering at the University of Liverpool.  At the time there was a lot of interest in neural networks for signal processing.

The idea behind my project was to take very simple sensors with high degrees of noise, and to attempt to use neural networks to “clean” this data and make it usable.  It was a frustrating failure.  The problem is that we were trying to follow this formula …

(In) Noisy meaningless data → NEURAL NETWORK → (Out) Clean data

A read around the journals of the time made it seem feasible, and the sensor I was working with had worked in the environment of electricity transformers.  I was trying to use it to measure water flow.
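
To make the idea concrete, here's a minimal sketch of that formula – my own illustration, not the original project's code.  It trains a small network to map windows of a noisy signal onto the clean signal underneath.  Every detail here (the sine wave, the window size, the network shape, the learning rate) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricate "sensor" data: a clean sine wave buried in heavy noise.
t = np.linspace(0, 8 * np.pi, 2000)
clean = np.sin(t)
noisy = clean + rng.normal(0, 0.5, clean.shape)

# Slice both signals into overlapping windows: noisy in, clean out.
WINDOW = 16
X = np.stack([noisy[i:i + WINDOW] for i in range(len(t) - WINDOW)])
Y = np.stack([clean[i:i + WINDOW] for i in range(len(t) - WINDOW)])

# One hidden layer with tanh activation, trained by gradient descent on MSE.
H = 32
W1 = rng.normal(0, 0.1, (WINDOW, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, WINDOW)); b2 = np.zeros(WINDOW)
lr = 0.01

for epoch in range(500):
    hidden = np.tanh(X @ W1 + b1)          # forward pass
    out = hidden @ W2 + b2
    err = out - Y                          # gradient of MSE at the output
    dW2 = hidden.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - hidden ** 2)  # backpropagate through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2         # update weights
    W1 -= lr * dW1; b1 -= lr * db1

denoised = np.tanh(X @ W1 + b1) @ W2 + b2
print("MSE before:", np.mean((X - Y) ** 2),
      "MSE after:", np.mean((denoised - Y) ** 2))
```

On a toy example like this the network has a clean signal to find, so the results can look convincing.  The trouble with the real sensor data is hinted at in the formula above: it was closer to noisy, meaningless data, with no clean signal underneath for the network to recover.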

The real project failed, obviously – but that failure has given me more experience than most with what I call “Modern Alchemy”.  Much like the way alchemy promised to turn worthless lead into valuable gold, there are many things out there sold as offering just such a transformation from an IT point of view.

Automation tools which can be easily and robustly programmed, meaning you won't need any testers (claims circa 1999).  Test management tools which will give meaningful reporting “for free”.

The irony is I'm really not that anti-tool.  But much as with “best practice”, we have too many testers in the world who go “my last project/my friend's project used this tool … we should use this tool as well”.

As I described to my team the other week, we don't exist to make software.  There were some puzzled and shocked looks at this.  I work on projects that offer our customers a solution which will grow their business or remove a pain point – that's the driver behind everything I do.  It just so happens that those solutions are usually embedded in software.  If the software I'm testing does not address this customer need, it doesn't matter how robust it is; it has fundamentally failed.

The same goes for testing tools.  You do not build your testing framework around a tool – if you do, too often you will end up abandoning it (and explaining to management why you wasted so much money on it – yes, even free tools use up time and money; every hour you put into one is costing you).  As with the software project you put together, you have to first understand what your needs are before you attempt to find a tool which will address them appropriately.

There are too many testers who want to use a tool, and are trying to work backward to create a need for it.  Understand the need first, then see how a tool can help you address it, and, importantly, understand the ways it may cause problems.  But most of all understand that if something sounds too good to be true … it probably is.

Testing is this separate thing


Whilst it's true that testing is not programming or requirements gathering, and that ideally testing comes after some coding has been done, testing should not be removed and detached from the rest of the software lifecycle.

Notice I've used words like “separate” and “removed” … now in the same vein let's try this one: “divorced”.  There's something these words have in common – they all hint at a relationship that's broken down.

It's probably true that you could test a piece of code without having access to a developer or BA, but it's frustrating when it happens.  In every piece of work, I've always tried to develop relationships with business analysts, managers, developers and even business owners where possible.  Testing works better when these lines of communication are available.

In the end, communication is all we have as testers.  We don't fix the issues we find.  We don't make key decisions on the product.  What we have is the ability to give people information about aspects of the software – whether it performs, and whether it has issues.  To do this we need lines of communication into the larger team.

If we do our job well, we give the decision makers the power to make informed decisions on the software.

------------

But how do you keep them in Room 101?

An interesting thought as I close off this post has to be “how do we do this?”.  How do we banish these things for good?

Room 101 should be a cage for traits that plague the world of testing.  It should not be a prison for people who have ever shown those behaviours.  I say this with all sincerity, because for all those crimes above, I am ashamed to say that I have been “guilty as charged” at one time or another.

We all have a world view of testing, but it's incomplete.  If we're lucky the cracks show early, and we modify it.  The worst thing can be when that model sees us through a few times, and we start to rely on it and see it as a gospel “best practice” from which we dare not deviate.

The way we lock up those traits is quite simple – education.  That might mean seeking out mentors to help us grow, aiming to learn from our mistakes, or talking about testing in the larger test community, whether on Twitter or on a forum.  But we have to seek out education both for ourselves, and to educate those we work with – whether junior testers, or people who have a vested interest in testing such as business owners, project managers or developers.  We need to teach the tao of testing, or “the way of the exploding computer”.

This brings me to an exciting event which is only a week away now, the Kiwi Workshop on Software Testing or KWST3.  This is a peer conference, and fittingly for this article, the topic this year is:

“Lighting the way; Educating others and ourselves about software testing - (raising a new generation of thinking creative testers)”

As an ex-teacher myself, and someone who has a vested interest in developing new, passionate testers who understand testing and master its craft and complexities, this is going to be an amazing event.

To follow some of our discussion and debate, look out for the #KWST3 hashtag on Twitter on 5th and 6th July 2013, and watch this space ...

3 comments:

  1. Very nice post. Project management is all about managing and meeting expectations. Using different tools this project management task become very easy and less time consuming. I hope you write again soon! Just love to read your post.
    Project Management Software

  2. Gracie - really not sure if that's a wind-up or a "bot".  Tools can be useful, and they can take a lot of effort out, true.  But you also have to put a lot of information into a tool to get anything out of it.

    Tools only tend to be useful if they match how your team works, and if you can get the whole team to use them.

    Otherwise (a) you are making your workflow match your tool – one of the cardinal Room 101 sins above – and (b) to get those numbers, your team has to spend time putting a lot of data into the tool, so you don't get those measures "for free".  It's possible there are better methods of collecting that data than tool use.
