Saturday, June 29, 2013

What Gene Roddenberry taught me about "issues that matter" ...


The Making Of Star Trek by Stephen E. Whitfield and Gene Roddenberry is an amazing book that I read back in my childhood, detailing the conceptual and physical struggle to bring Star Trek to the screen, from its original idea to the final product.

One piece that stayed with me was creator Gene Roddenberry's vision of the show.  He felt the public would buy into the fantastic setting as long as they identified with the characters.  To that end, he felt the crew of the Enterprise (with the possible exception of Spock) needed to act and behave just as you'd expect the crew of a contemporary naval ship to behave.

To aid him, he developed a test he asked new writers to take.  When faced with the script below, which of the outlined issues would strike the writer as most significant?


The scene is the bridge of the U.S.S. (United States Starship) Enterprise. Captain Kirk is at his command position, his lovely but highly efficient female Yeoman at his side. Suddenly, and without provocation, our Starship is attacked by an alien space vessel. We try to warn the alien vessel off, but it ignores us and begins loosing bolts of photon energy-plasma at us.

The alien vessel’s attack begins to weaken our deflectors. Mr. Spock reports to Captain Kirk that the next enemy bolt will probably break through and destroy the Enterprise. At this moment, we look up to see that final energy-plasma bolt heading for us. There may be only four or five seconds of life left. Kirk puts his arms around his lovely Yeoman, comforting and embracing her as they wait for what seems like certain death.

PLEASE CHECK ONE:
(  ) Inaccurate terminology. The Enterprise is more correctly an international vessel, the United Spaceship Enterprise.
(  ) Scientifically incorrect. Energy-plasma bolts could not be photon in nature.
(  ) Unbelievable. The Captain would not hug a pretty Yeoman on the bridge of his vessel.
(  ) Concept weak. This whole story opening reeks too much of “space pirate” or similar bad science fiction.

In essence this was a test from Gene Roddenberry to see if potential writers shared his core values for his series.

Inaccurate terminology – well, this is a cosmetic issue and easily rectified with a stroke of the pen.  Likewise with scientifically incorrect: it's science fiction, so it does stretch science a little, but again it can be corrected cosmetically.

Concept weak … although potentially a more significant point, several of Star Trek's best stories started like this – the thrilling Balance of Terror, or the Kobayashi Maru test of The Wrath Of Khan.

So here, it's the believability that's the biggest issue.  If the characters, when under duress, behave in an unrealistic manner, Roddenberry felt you cheapened the drama and the premise.

Lady Gaga - the early years

Yes, Kirk did his share of canoodling with women during the series, even female subordinates.  [Starfleet really needed some regulations on dating.]  However, when there was an ugly Klingon in his face, he was always a man of action, ready to roll up his sleeves and cue the wrestling music, rather than "copping a quick feel with the young lady".

"You Klingon son, you killed my ... no, wait a minute ..."

Fascinating


So where does that fit in with testing?  Well, much like scriptwriters, as testers we really need to understand the "values" of the show we're trying to help produce.  Star Trek had a series "bible" explaining the characters and what the series was trying to achieve; there may be other methods open to us, including just asking Gene Roddenberry directly (or our own version of him, the "visionary person who matters").

In the scene above, it's awfully tempting to edit those small issues as you go along.  However, consider this: what if the concept or the characters' behaviour is weak?  You'd be tweaking the cosmetics, only to hand it back asking for a major rewrite.  You've essentially wasted your time (and potentially the author's, if you drip-fed those changes first) if you're going to ask for two cosmetic changes followed by a major rewrite.

Even when we prioritise defects, developers and testers alike only have so much bandwidth to deal with them.  Who really wants to trawl through a hundred cosmetic defects to find the two major issues buried amongst them?

It's the big ones that are the software killers, and those are what we need to push forward first and concentrate our efforts on finding.  There's thus a declaration in my testing mission statement that we will,

"Work to illuminate high impact defects rather than report cosmetic defects en mass"

The team needs to know and work on the big issues as early as they're known.  When I first get a new build, I won't stop to raise a defect for every cosmetic issue I come across.  I often grab quick notes and screenshots of anything that bugs me as I go along, but if it's not stopping me, I keep going.  I want to find out if the build is delivering the ability to do the key functional stuff.  I can't do this if I'm stopping to write up every small issue.

An example of this was an issue I encountered a few years ago.  Development was running late – and the work was being done abroad.  On Monday, we received a build, but it wouldn't allow us to create items at all in our system.  This was, as you can imagine, a Severity 1 MAJOR issue, and it was raised immediately.  Only there was no easy fix, and it stopped our testing dead in its tracks.

On Wednesday morning, we were closer, but it was still not working.  The foreign development team thought each night they’d cracked it, and each morning we testers found it was still broken.

To break the stalemate, I asked the development team to ring me when they were done, and I'd test it immediately to give them the instant feedback they needed, so we could start Thursday morning going "yes, we know they fixed this", or they could try again.  I got the phone call at 3am, and went into the test environment.  I could now create items again – the Severity 1 issue was gone.

However, although that issue had gone, there was a new one: instead of being able to create up to 12 items for each customer, we were limited to 7.  We'd removed a Severity 1 issue, but now had a Severity 3.

I had to make a quick call.  The issue I was looking at wasn't big enough to stop us testing later in the day.  The development team had worked into the night in their timezone, and wouldn't be able to fix this before they went home anyway (let alone deploy a build).  If the Severity 1 issue had still been there, it would have been different, because it would have stopped us from testing altogether.  Besides, I was tired, and would need to investigate this more.

I told development they'd done a good job, and that I could now create items.  There was a smaller issue, but I'd write it up later in the day.

In my mind, this is the same principle in action: prioritising the important defects and getting action on them, while holding back the less important ones until you're ready to spend time investigating and writing them up.
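
As a trivial sketch of that triage in code (made-up defect data, mirroring the story above), the principle is just a threshold on severity: raise the big ones now, park the rest until there's time to investigate and write them up.

    # Minimal sketch with hypothetical data: raise Severity 1-2 immediately,
    # queue everything else for write-up when there's time.
    defects = [
        {"id": "D-101", "summary": "cannot create items at all", "severity": 1},
        {"id": "D-102", "summary": "item limit is 7, not 12", "severity": 3},
        {"id": "D-103", "summary": "typo on confirmation page", "severity": 4},
    ]

    raise_now = [d for d in defects if d["severity"] <= 2]
    write_up_later = sorted(
        (d for d in defects if d["severity"] > 2), key=lambda d: d["severity"]
    )

    print("Raise immediately:", [d["id"] for d in raise_now])
    print("Write up later:", [d["id"] for d in write_up_later])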

Much as with Gene Roddenberry's writers, it's about making sure you're spending your time on the core values, finding the gaping issues before adding the polish afterwards.

Thank you Gene for teaching me that …

The Great Bird Of The Galaxy

Friday, June 28, 2013

Zombie bugs! Or how to face a fear of the apocalypse that will never happen ...


Everybody fears zombies.  The only thing a product owner fears more than zombies is zombie bugs.

Much like a zombie is a human being reanimated from the grave with a mindset for eating brains, zombie bugs are defects which were fixed, but “could come back”.


I have certainly encountered my share of "bugs which came back" in the early 2000s.  It was a symptom (and indeed a risk) of our ClearCase code configuration system.

Our project at the time, MPA, needed to do something radical.  We needed to continue to deliver our regular 3-monthly releases, but at the same time develop a whole new capability on a new branch, for software that would not be released for almost 2 years.

The diagram below shows an example of the flow, sketched roughly.  Our starting code-base, version 1.0, was turned into two branches.  Branch A had regular small releases, whilst Branch B had lots of extra features added and developed for later merge.
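
    version 1.0
       |
       |--> Branch A: v1.1 --> v1.2 --> v1.3 ...  (regular small releases, bug fixes applied)
       |
       '--> Branch B: new capability built up over ~2 years (Branch A's fixes not applied)
                 |
                 '--> eventual big merge back into the main line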


The problem was that Branch A had a lot of features, but also some fixes for bugs that were in the original source material, version 1.0.  That meant those same bugs were also in Branch B, which wasn't receiving those fixes.  So when we did our inevitable big merge down the line, there was a chance we could unwittingly take code from Branch B which would reintroduce a bug we were sure we'd fixed.  The zombie bug!

As per the norm for testers, this led to a fear of “the code regressing”, and caused a bit of an overreaction.  Because bugs could “just come back”, automated regression suites were developed which checked for EVERY BUG EVER FIXED to ensure we'd not regressed.

Did I hear one of my audience go “ouch” out there?  Yes indeed, before too long, our thorough testing had gone on to create more of a headache and backlog than the original problem.  [If you're a fan of The Walking Dead, just think of all the pain The Governor caused.  And he's not a zombie.]

These days, I'm not sure if I've just been on different projects that aren't as ambitious with their branching or if code configuration tools have got better, but I just don't see “zombie bugs” like I'd witnessed 10 years ago.

Certainly I realise the strategy was flawed.  If a major merge is happening, the solution is to make sure you have plenty of time to give it a thorough testing, because it will be buggy.  But focusing on seeking out "zombie bugs" means time you're not spending looking for new ones.  And let's face it, in any build where I ever saw zombie bugs, I also saw a whole heap of brand new, non-zombie ones.

Is it really dead, or is it pining?


Hunting all-out for zombie bugs these days is a waste of time.  Do you really want to check that every typo you've ever found is still corrected?

Occasionally there is a big bug, a Godzilla bug, whose impact has everyone in the project a little scared (perhaps it was found in production, and caused customer issues).  In this instance, to ease fears, it's probably a good idea to check once in a while that this bug has gone, so you can report once and for all "it is no more.  It has ceased to be.  It's expired and gone to meet its maker.  It's a stiff.  Bereft of life, it rests in peace".

Sometimes you have to say this several times to get the message across (it does help to have the evidence to back you up though).  It's important business owners feel this has been put down once and for all … y'know, a double tap to the head.

So what's to be done?

Do I just play along?

The first thing to understand with code: the bigger the change, the more potential for introducing bugs.  If you are dealing with a major merge, or a huge piece of functionality being added, expect problems.  But don't just go looking for issues where there have been problems in the past.

If you absolutely have to check for “bug fixes which have regressed”, make sure you are checking for the real nasty impact ones.  Don't waste time hunting out trivial ones, which in all likelihood have little real impact.
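
If you do keep such checks, one way to stay proportionate is to tag each regression test with the severity of the defect it guards against, and only run the high-impact ones routinely.  A minimal pytest-style sketch, with hypothetical bug IDs and a stand-in for the system under test:

    import pytest

    # Custom marks like these would be registered in pytest.ini; then only
    # the nasty regressions need run on every build, e.g.
    #   pytest -m severity1

    def create_items(count):
        """Stand-in for the real system under test."""
        return list(range(count))

    @pytest.mark.severity1
    def test_bug_101_items_can_be_created():
        # A showstopper such as "can't create items at all" is worth
        # re-checking on every build.
        assert len(create_items(1)) == 1

    @pytest.mark.severity3
    def test_bug_102_twelve_item_limit():
        # A lower-impact bug; check it occasionally rather than burning
        # time on it in every run.
        assert len(create_items(12)) == 12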

But most of all, try to make sure this wild goose chase doesn't mean you're so obsessed with old bugs that you're failing to find new ones.  After all, it would be a shame if, whilst focusing on an impending zombie apocalypse that never happens, you failed to notice you were about to be overrun by cowboys riding dinosaurs ...


Thursday, June 27, 2013

The Room 101 of testing ...


“You asked me once, what was in Room 101. I told you that you knew the answer already. Everyone knows it. The thing that is in Room 101 is the worst thing in the world.”

In the book Nineteen Eighty-Four by George Orwell, there is a room everyone dreads, Room 101.  It is a room used to break people, because it contains the thing its occupant fears the most – something different for every person.

With my 101st post, I've now covered and explored a lot about testing, and maybe it's time to consign a few demons to the testing version of Room 101.  I'm about to exorcise and attempt to banish some of (in my opinion) testing's evils into this room, which I hope people see fit to treat like Pandora's Box, and never open again …


Best practices


The idea of best practice (to those who champion it) is that there is only one way to do testing.  The right way.  And all you have to do is apply it from one project to another.

When we are junior testers, we tend to go "I am now working on Project Pluto.  My previous assignment was Project Neptune, and it was a success.  So I will do everything here as we did for Project Neptune, and it will also be a success".  I know this because I did indeed have this mindset when I was much younger.

In itself this isn't a shock-horror thing.  Of course we should be building on what's worked for us in the past.  What takes a more sophisticated mindset (and broader experience) is understanding WHY that approach worked for Project Neptune, and then working out whether Project Pluto shares the characteristics that will make the same approach work there.  Some people are shocked to find the answer can surprisingly be an emphatic NO.

Overall, as testers we learn this either by having some wonderful mentors who guide us out of this trap, or by making the mistake ourselves and attempting to rescue the situation when things go wrong.

Unfortunately there are some testers who, when they come across an issue with their best practice, will not admit that their approach is wrong at all.  Instead, their testing strategy was perfect; it was the rest of the software, from requirements to coding, that "was delivered wrong".

As I write this, I realise the incredible ludicrousness of that statement, but also how guilty we can be of it at times.  Software development lifecycles are not run for the benefit of testing; testing is run for the benefit of the software development lifecycle – and often we can forget that.


Fools with tools


Tools.  I think they are the bane of software testers, and for some will do more harm than good.  The testing market is filled with them, and they promise the world – but rarely deliver on that promise.  In some ways we'd be better off without them, because then we'd know there are "no magic shortcuts".

I believe the phenomenon touches on something that I experienced myself.  In the mid-90s I did a failed PhD research project in electrical engineering at the University of Liverpool.  At the time there was a lot of interest in neural networks for signal processing.

The idea behind my project was to use very simple sensors with high degrees of noise in their output, and to attempt to use neural networks to "clean" this data and make it usable.  It was a frustrating failure.  The problem was that we were trying to follow this formula …

(In) Noisy meaningless data → NEURAL NETWORK → (Out) Clean data

A read around the journals of the time made it seem feasible, and the sensor I was working with had worked in the environment of electricity transformers.  I was trying to use it to measure water flow.

It failed, obviously – but this failure has meant I have more experience than most with what I call "Modern Alchemy".  Much like the way alchemy promised to turn worthless lead into valuable gold, there are many things out there sold as offering just that from an IT point of view.

Automation tools which can be easily and robustly programmed, meaning you won't need any testers (claims circa 1999).  Test management tools which will give meaningful reporting “for free”.

The irony is I'm really not that anti-tool.  But much as with "best practice", we have too many testers in the world who go "my last project/my friend's project used this tool … we should use this tool as well".

As I described to my team the other week, we don't exist to make software.  There were some puzzled and shocked looks at this.  I work on projects that offer our customers a solution which will grow their business or remove a pain point – that's the driver behind everything I do.  It just so happens that those solutions are usually embedded in software.  If the software I'm testing does not address this customer need, it doesn't matter how robust it is; it's fundamentally failed.

The same goes for testing tools.  You do not build your testing framework around a tool – if you do, too often you will end up abandoning it (and explaining to management why you wasted so much money on it – yes, even free tools use up time and money; every hour you put into them is costing you).  As with the software project you put together, you have to first understand what your needs are, before you attempt to find a tool which will address them appropriately.

There are too many testers who want to use a tool, and are trying to work backward to create a need for it.  Understand the need first, then see how a tool can help you address it, and importantly understand ways it may cause problems.  But most of all understand that if something sounds too good to be true … it probably is.

Testing is this separate thing



Whilst it's true that testing is not programming or requirements gathering, and ideally comes after some coding has been done, testing should not be removed and detached from the rest of the software lifecycle.

Notice I've used words like "separate" and "removed" … now in the same vein, let's try this one: "divorced".  There's something common about these words – they all hint at a relationship that's broken down.

It's probably true that you could test a piece of code without having access to a developer or BA, but it's frustrating when it happens.  In every piece of work, I've always tried to develop relationships with business analysts, managers, developers, and even business owners where possible.  Testing works better when these lines of communication are available.

In the end, communication is all we have as testers.  We don't fix the issues we find.  We don't make key decisions on the product.  What we have is the ability to give people information on aspects of the software – on whether it performs, or whether it has issues.  To do this we need lines of communication into the larger team.

If we do our job well, we give the decision makers the power to make informed decisions on the software.

------------

But how do you keep them in Room 101?

An interesting thought as I close off this post has to be "how do we do this?"  How do we banish these things for good?

Room 101 should be a cage for traits that plague the world of testing.  It should not be a prison for people who have ever shown those behaviours.  I say this with all sincerity, because for all those crimes above, I am ashamed to say that I have been "guilty as charged" at one time or another.

We all have a world view of testing, but it's incomplete.  If we're lucky, the cracks show early, and we modify it.  The worst thing can be if that model sees us through a few times, and we start to rely on it, seeing it as a gospel "best practice" from which we dare not deviate.

The way we lock up those traits is quite simple – education.  We might seek out mentors to help us grow, aim to learn from our mistakes, or talk about testing in the larger test community, whether on Twitter or on a forum.  But we have to seek out education both for ourselves and for those we work with – whether junior testers, or people who have a vested interest in testing such as business owners, project managers or developers.  We need to teach the tao of testing, or "the way of the exploding computer".

This brings me to an exciting event which is only a week away now: the Kiwi Workshop on Software Testing, or KWST3.  This is a peer conference, and fittingly for this article, the topic this year is,

“Lighting the way; Educating others and ourselves about software testing - (raising a new generation of thinking creative testers)”

As an ex-teacher myself, and someone who has a vested interest in developing new, passionate testers who understand testing and master its craft and complexities, this is going to be an amazing event.

To find some of our discussion and debate, follow us on 5th and 6th July 2013 with the #KWST3 hashtag on Twitter, and watch this space ...