Wednesday, August 7, 2013

Insecurity 102 - Are you getting feedback?

In my previous piece, I talked about how insecurity can blight us all, and how peer networking and feedback can be a useful tool to combat it.  Here I'm going to expand on that and explore getting feedback: how to do it, and how it helps.

Feedback is vital for everyone: it lets us know what we do well, and shows us areas where we need development.  It helps keep us rounded individuals, neither insecure nor overconfident.

However, it’s something we shy away from, or worse still feel that feedback “should only point out flaws”.  As testers our focus is really to give people “feedback on a piece of software”, and indeed, that mainly means “defects”.  But we’re in danger there of giving way to the critic’s syndrome of Waldorf and Statler, pointing out flaws from the safety of a box.


But in my opinion, feedback should always be positive in intent.  Some people will roll their eyes and go “so we can only say nice things then … political correctness gone mad!”.  But that’s not what I mean.  Feedback can say what someone does well, but it can also say where someone is weak.  And here’s the tricky thing, and why many of us shy away from dealing with this side … if someone is weak in an area, you need to give them the feedback in such a way as to “light the way”, so they can make decisions about addressing it.  Such feedback needs to be sensitive, and about helping the individual to develop, not about destroying them.


And that’s hard to do.  It’s all too easy for it to be needlessly brutal.  Imagine your best friend wants to go on X-Factor and sing in front of the judges.  The only problem is, you know that their singing stinks.  Are you going to give them that feedback, or let them go in front of the judges and be torn to pieces?

I’m a regular at Les Mills gyms in Wellington, and do a lot of classes in a dance program called Body Jam.  At the weekend I was asked by the Head Instructor for some feedback on her classes, and my response was,

I have been going to Mereana's classes for about 3 years. In that time I've seen her develop and mature her instructor style considerably.

Mereana is very positive, engaging with the class in a distinctive, bubbly manner. She is very strong in giving clear direction, and is good at giving adaptive instruction to the audience when she sees people have an injury or are struggling with choreography. As such I really feel everyone comes away from her classes both engaged and pushed.

This led into a conversation with Mereana, where I said that although we were talking about this now, I really hoped that over the last few years I had given her that feedback in bits and pieces, and that none of it came as a huge surprise.  I know I’d spoken to her about how much she’d moved from “following a choreography script” to elegantly and confidently “living the values” of the program, giving something personal and unique to her instruction.  You see her teach today, and you know there's good reason why she's currently the head teacher for the program.

Back at my office, however, it’s appraisal time within my team, and I’m busy completing 360 reviews for them.  But the same theme is there with my staff as it is with Mereana.  When someone asks for feedback after 12+ months, it’s too late to be giving anything that surprises.  We need to be giving feedback to our peers at work regularly, so the appraisal becomes a “collection of all that’s been said over the year”.

Feedback need not be frightening.  Poor old Phil Cool from my last article gets feedback from everyone with a voice or a pen - it might be me saying he’s good, it might be a critic with a secret axe to grind, it might be a drunk heckler who thinks he's funnier.

We are lucky in our office environment that we have more intimate relationships with the people who can supply our feedback.  And if we don’t feel some trust towards anyone we work with for feedback, then maybe we have to ask ourselves if we’re in the right team.  Trust in (the majority of) those we work with is vital, but there is always going to be some conflict.

If you want to explore the idea of giving feedback further, I cannot recommend enough Johanna Rothman's book “Behind Closed Doors”, which is very much about the process of one-on-one meetings: giving this kind of feedback and coaching to help develop staff, and take them further.


Tuesday, August 6, 2013

Insecurity 101 - The Phil Cool Factor



As a kid growing up in England in the 80s, we were all entranced by comedian Phil Cool.  He was an impressionist with an amazing ability to contort his face into different shapes.  I remember how excitedly we'd talk about the previous night's show in the corridors of Abbot Beyne High School in between classes.

Here are a few examples of Phil at his best ...





You might have noticed this blog has a lot of comic tones, and maybe you think I'm making a joke when I say that I do actually take comedy really seriously.  But I love comedy which is clever and unexpected, and I despise comedy that is cheap.  So you can imagine how much in awe I was when in 2009 I actually got to meet my comedic legend, quite by accident, at a Fairport Convention concert.

He was actually part of a folk support act that played before the main band.  Some of his songs were typically comic, but some were really quite straight.  At the end of the show I bumped into him whilst waiting for a friend.

What happened was a humbling eye-opener for me.  Phil Cool, quite literally the coolest comedian on TV in my childhood - a man so very loud and funny on stage - but in person ... quite a shock.  I got speaking to him, and said I'd really enjoyed his set, and what an interesting (but different) venture it was.  Although really polite, he seemed quite nervous, and, blushing, said it was nice to hear that.  The man before me wasn't a loud and bombastic comedian, but a humble, somewhat uncertain human being who struck me as doubtful of his own (immense, in my opinion) talents.

Rather than shatter my perceptions of him, I came away fascinated that someone I looked up to was not the stuff of legend, but a flesh-and-blood human being not so different from myself.

You see we all have doubts.  It's a very human thing to have internal fears and insecurities.

During KWST3, whilst we were talking about this same cloud of doubt, there was a point where Aaron Hodder asked people to put up their hands if they'd ever felt a fraud as a tester ...


Most of the room put theirs up, but I have to admit I didn't put my hand up.  Commentary from Michael Bolton suggested that was because of the Dunning-Kruger effect.  This is the "grad phenomenon" where someone just out of university thinks they know everything, whilst someone with 20 years' experience shows far less superiority, because they're more aware of their limitations.  People who put their hand up were experienced testers who knew their limits, whilst those like myself who kept them down were "deluded newbies" who thought we knew it all.

However, for myself, despite Michael's criticism, I've been in the industry - and before that in teaching and research science - long enough to know I am a tester.  In my time, I've taken care to get feedback, asking around both my superiors and my peers, and it turns out that, much as with Phil Cool, insecurity is a common thing.  We're often hiding it beneath bravado.

I never think of myself as a fraud as a tester, simply because I've had a couple of non-testing careers.  I would not have spent 16+ years in the IT industry if I was plagued by doubts about whether I belong here or have worth here.  Simply put, life is too short, and I would have abandoned ship long ago.

But it doesn't come easy.  I've had to get to know others, be open, and sometimes talk about how I feel out of my comfort zone with peers that I trust.  My wingman on RAF projects 10 years ago was John Preston, and we still exchange emails talking about the problems and challenges we face in IT.  Through both my career and the internet, I've built up a wide peer network, many of them people I trust to ask big questions.

I've also looked up to and followed some industry leaders in IT, and got to know a few.  They can seem to tower over us as Titans, and we can look at our projects and go "I bet Heracles Tester doesn't have these problems".  What has surprised me as I've got to know a few better is that the Phil Cool effect takes hold - even in IT.  People who seem superhuman figures, powerhouses who can get things done - the more you get to know them up close, the more you realise they are not Testing Titans who have been empowered with mighty and mystical powers from the Gods Of Testing themselves.  They are human beings, vulnerable and at times caught and ensnared in their own emotions.  Just like you or me.

This revelation has not diminished them in my eyes - it moves them from being something iconic (but also people I can't relate to) to being more peers in nature, people I can relate to, and who inspire me more in daily life.  Because if they don't get to "skip the hard yards" because of their status, if they have to go through the same assault course, then maybe I can get through it too.

I know being a "famous" tester doesn't mean the learning curve of a new project is any easier; it doesn't exempt you from the occasional tussle at work, from having to explain your position as a tester, or from having to do the (occasional) extra hours to get a project launched.

With the knowledge I've gained through networking that there are no magic solutions or attributes, there was no purpose in putting my hand up to feeling like a fraud.  It serves no-one to be falsely modest.  And as I said, if I genuinely felt like that, I would have jumped to a new profession by now.

That said, I do have doubts and even insecurities.  To steal the words of Douglas Adams, I'd like to call them "rigidly defined areas of doubt and uncertainty".  I worry sometimes that when planning testing there are things I don't know, areas I won't think to test enough, bad assumptions I'll make.

These doubts are not about me as an individual, but about whether I know or have thought about enough of what I'm testing.  But then again, isn't the first law of testing that the profession exists because people are fallible and make mistakes (in code or design)?  Given that human frailty, why would testers be exempt?

The answer to that is to do just as we do with testing, and just as I mentioned with personal insecurity.  We put it out to a network and get feedback.  We put it in front of people we trust (hopefully including those we work with), and ask for their opinions, critiques and questions.  Then the planned testing isn't a one-man band, but the summed ideas of everyone on the team, some of whom might have different ideas about how things work than you do.

Getting people to positively critique to make something better is a superb strategy for fighting neurosis and feeling more confident about what we do.  At the end of the day, giving critique and feedback to produce a positive outcome pretty much sums up what we do as testers.

Sunday, July 7, 2013

The ballad of LEEROY JENKINS, software tester ...

The last two days at KWST3 have been another interesting experience.  There were many worthwhile and educational experiences shared, and once more I feel about a dozen articles of inspiration are ready to spin off.


A huge (and fun) take-home from the event actually came from Oliver Erlewein on Twitter, who, together with Aaron Hodder, introduced me to the tale of Leeroy Jenkins,


The YouTube video is about a group called "friends for life", who play the massively multiplayer online role-playing game World of Warcraft.  They are outside a battle level which has given them trouble in the past, wondering whether to just go around it.  However, it contains something that will benefit one of their group (Leeroy Jenkins).


So, they have a cunning plan.  Their leader gives instructions that he'll open with a devastating area attack, to be followed up with a similar one once it's done - one after another, for "shock and awe".  Meanwhile the rest will just try to get in there and out.  Their wizards will provide support, casting magic from behind to make the attacks more effective.   [You'll have to forgive me as I don't do World of Warcraft, so my terms there might be a bit off ...]

Their leader thinks it's a good plan, even though it's calculated to have only a 32.33% (recurring) chance of survival.  They are still working on this when one of the party (Leeroy, who is ironically the only one who will benefit from this raid) gets too impatient, and just goes "okay let's do this ... LEEROY JENKINS", screaming his battle cry and rushing in.

Caught off guard, the whole group runs in after him, repeatedly chanting "stick to the plan, stick to the plan", even though the plan called for that devastating initial attack before everyone followed through.  After chanting this for a while, there is a scrambled attempt to communicate, which quickly alters to "goddamn it Leeroy, why do you always do this" for the rest of the game, as the party are picked off one by one.

I have found it a really fascinating insight into the whole idea of "creating a strategy and following it through".

Of course, on the internet Leeroy Jenkins is seen as a bit of a fool you don't want to be in a team with, because he's quite selfish and impatient (his group go "why do you ALWAYS do this, Leeroy").  And there's a huge amount of truth to this.

However, if you haven't watched that video yet, I urge you to watch it now ...

The Planning

The leader says that the room has given them a lot of trouble in the past, and he comes up with a plan.  Although everyone has done the room before, he doesn't ask anyone else if they've got any other ideas.  There's no "retrospective" of "should we try this or should we try that".  Consequently, he never asks anyone for any input that could improve it.  It's his plan, and he thinks it's a good plan, but it's never put out to the rest of the group - even with the 32.33% odds of survival.

The problem is, a lot of "plans" and "strategies" seem ideal when reduced to a few bullet points, and not opened up to input from a larger group about the realities and obstacles ahead.

The Battle of the Somme was to be a breakthrough battle for the British Empire during World War One.  Instead it led to the worst day in British military history, where 20,000 soldiers were slain in a single day.

I'm sure if there had been PowerPoint back then, it would have looked something like this,


Stick To The Plan, Stick To The Plan

History, like the Somme, is filled with battles which "seemed a good plan at the time", but rapidly came unstuck.  Leeroy Jenkins' team follow in after him chanting "stick to the plan", even though the first part of the plan has already unravelled.

It's easy here to blame Leeroy, but remember, the plan only had a 32.33% chance of success.  So in all likelihood it was going to end in the same kind of result anyway, and everyone in the party was comfortable with that.

Leeroy acts as a catalyst, starting a bad situation.  However, when the team gets into difficulties and tries to communicate, they all end up talking over each other.  They soon degenerate into blaming and cursing as they're picked off.  It's actually this failure to communicate (over Leeroy's impetuousness) that guarantees their failure.

Something about that video strikes me from my time at KWST3.  Sometimes we can learn the worst possible lessons from things going bad.  The lesson that team, and especially its leader, was jumping to was "goddamn you Leeroy", or in other words "you ruined my beautiful plan".  There is an over-reliance on that plan, and not enough on the team.  Let me say it again: a plan which was seen to have ONLY a 32.33% chance of success.  Those aren't great odds (and of course I dispute them, and even why they were needed).

One of the reasons the Somme went so wrong was the insistence on following a plan.  As above, it was pretty simple: kill the enemy using an artillery barrage, then occupy their position.  It soon became clear they'd failed at step one, as the artillery had failed to kill enough Germans (who had deep underground concrete bunkers).  The plan pretty much depended on this.  But the captains in charge stuck with it anyway ... and this is how disaster happened.

For myself, I have written many beautiful test plans, only to realise once we'd started that reality had got in the way.  As someone who believes in the Agile manifesto though, I do believe in,

Responding to change over following a plan

However like the leader of "friends for life", I've known many test managers who have cursed reality for failing to follow their beautiful plans.

Leeroy Jenkins - some lessons for testers

Act like there is a Leeroy Jenkins in your team.  Try not to overplan, but make sure you are including everyone in the discussion; make the plan a team plan, not your plan.

If the team does feel consistently let down by a Leeroy Jenkins, maybe it's time he found another team.  But make sure he's not being used as an easy scapegoat.  Maybe he has a point about not overanalysing, and getting stuck into the job at hand.

Most importantly, make sure your team knows how to talk to each other - before, but especially when, you're in the lair of the enemy.  If someone needs help, let them speak, and respond to it.  Keep responding to the situation, but don't let it become a garbled chatter of panic.  Ask people for updates, and try to triage requests.

Chanting "keep to the plan" won't help, and cursing Leeroy Jenkins will just make the air toxic with defeat.

If things don't work out, don't hold up your plan as being perfect, and your team as being the reason for failure "because they didn't follow the plan enough".  Things change: "how dare you get eaten by giant spiders when you were supposed to be casting divine intervention ... that wasn't in my plan".

Friday, July 5, 2013

KWST3 - Learning about software testing ...

Today is the first of two days of the Kiwi Workshop on Software Testing peer conference #KWST3, which this year covers the topic of,

“Lighting the way; Educating others and ourselves about software testing -  (raising a new generation of thinking creative testers)”

This topic is of huge interest to me - I'm not only a qualified and certified teacher of science, but I have also spent a good amount of time in higher education, and am self-taught in software development (and before you ask, I was a damn fine programmer as well).

Helping to develop in others an understanding of what testers do and our methods is, I think, a core part of the profession, whether it's developing junior staff, or demystifying our role to project managers and developers.

So what is a peer conference?

In other types of conference you tend to be lectured to by someone who has found an idea or strategy.  They use lots of PowerPoint slides, and with luck a few questions are possible afterwards.

A peer conference, however, is much more interactive.

It's based around the idea of experience reports.  Any attendee can choose to give a 5-10 minute talk about an experience they've had which relates to the main topic.  At the end, that experience is given over to the group, who can explore that situation with a series of questions.

Unlike the standard conference, where you're really just pooling the speaker's experience, this means that the situation is expanded and explored by the sum experience and intelligence of those in the room.  Thus a peer conference is much more exploratory of the subject matter.  You can't look at an agenda and know where you'll end up by lunch, the end of the day, or the end of the conference.  That's both exciting and scary ...

Numbers at a peer conference are always tricky - you need enough people to have diversity of opinion (otherwise you'll get nothing but lots of agreement out of the sessions), but not so large that people feel they don't have a voice.

The Opening Experience Report ...


This year, the opening experience report was given by Anne-Marie Charrett, a software testing coach based in Australia.  She has been teaching testers for years, and has experience at the University of Technology Sydney (UTS).

When approached to deliver a formulaic course at UTS, she felt, with her exploratory testing experience, that it was too much a repeat of standards.  She found she wanted to rewrite it, and sought permission.

Before long, she was making videos, and trying to make sure the content was appropriate and something she could stand by.  She used AST materials heavily as an influence, but wanted to make it as hands-on as possible - experiment-based, as testing is about "doing stuff".

She had done the PSL (Problem Solving Leadership) course with Jerry Weinberg, which is about giving a space for learning, and about individual development over "following a course scheme".  She wanted to create something similar, but around testing.

An important part was to create a space to work in - one where students could be immersed in the exercise, and were encouraged to ask questions.  It's not about giving answers, but finding the right questions.

Some topics she covered using exercises and homework,

  • Mentally modelling software - how do you build an understanding of a product, going far beyond just requirements (if they are even available)?
  • Critical thinking - how do you challenge a product?
  • State models
  • Tools and testability
  • Exploratory vs scripted testing
  • Reporting and communication

Challenges

  • The students themselves were used to just sitting in the classroom, being given information and lectures.  Her course was student-centric, based on them, and students needed to learn and develop their communication.
  • Students were postgraduate IT folk, who were more interested initially in development than testing.
  • Dealing with University bureaucracy - the university wanted a scheduled course it could measure and examine.


She got them to go "into the wild", and into Sun Corp, to test and raise problems on live projects.  Sun Corp helped to mentor and develop them, and the students loved it and got a lot out of it.  Many changed their minds on whether they wanted to do testing as a career.

This feedback from students and companies has shown she's onto something, and the course is providing a real slice of valuable skills for a career in testing.

Saturday, June 29, 2013

What Gene Roddenberry taught me about "issues that matter" ...


The Making Of Star Trek by Stephen E. Whitfield and Gene Roddenberry is an amazing book that I read back in my childhood, detailing the conceptual and physical struggle to bring Star Trek to the screen, from its original idea to the final product.

One piece that stayed with me was creator Gene Roddenberry’s vision of the show.  He felt the public would buy into the fantastic setting as long as they identified with the characters.  To that end, he felt the crew of the Enterprise (with the possible exception of Spock) needed to act and behave just as you’d expect the crew of a contemporary naval ship to behave.

To aid him, he developed a test he asked new writers to take.  When faced with the script below, which of the outlined issues struck the writer as most significant with this piece?


The scene is the bridge of the U.S.S (United States Starship) Enterprise. Captain Kirk is at his command position, his lovely but highly efficient female Yeoman at his side. Suddenly, and without provocation, our Starship is attacked by an alien space vessel. We try to warn the alien vessel off, but it ignores us and begins loosening bolts of photon energy-plasma at us.

The alien vessel’s attack begins to weaken our deflectors. Mr. Spock reports to Captain Kirk that the next enemy bolt will probably break through and destroy the Enterprise. At this moment, we look up to see that final energy-plasma bolt heading for us. There may be only four or five seconds of life left. Kirk puts his arms around his lovely Yeoman, comforting and embracing her as they wait for what seems like certain death.

PLEASE CHECK ONE:
(  ) Inaccurate terminology. The Enterprise is more correctly an international vessel, the United Spaceship Enterprise.
(  ) Scientifically incorrect. Energy-plasma bolts could not be photon in nature.
(  ) Unbelievable. The Captain would not hug a pretty Yeoman on the bridge of his vessel.
(  ) Concept weak. This whole story opening reeks too much of “space pirate” or similar bad science fiction.

In essence this was a test from Gene Roddenberry to see if potential writers shared his core values for his series.

Inaccurate terminology – well, this is a cosmetic issue, easily rectified with a stroke of the pen.  Likewise with scientifically incorrect: it’s science fiction, so it does stretch science a little, but again it can be corrected cosmetically.

Concept weak … although potentially a more significant point, several of Star Trek’s best stories started like this – the thrilling Balance of Terror, or the Kobayashi Maru test of The Wrath Of Khan.

So here, it’s the believability that’s the biggest issue.  If the characters behave in an unrealistic manner when under duress, Roddenberry felt you cheapened the drama and the premise.

Lady Gaga - the early years

Yes, Kirk did his share of canoodling with women during the series, even female subordinates.   [Starfleet really needed some regulations on dating.]  However, when there was an ugly Klingon in his face, he was always a man of action, rolling up his sleeves and cueing the wrestling music, over “copping a quick feel with the young lady”.

"You Klingon son, you killed my ... no, wait a minute ..."

Fascinating


So where does that fit in with testing?  Well, much like script writers, as testers we really need to understand the “values” of the show we’re trying to help produce.  Star Trek had a series “bible” explaining the characters and what the series was trying to achieve; there may be other methods open to us, including just asking Gene Roddenberry directly (or our own version of him, the “visionary person who matters”).

In the scene above, it’s awfully tempting to edit those small issues as you go along.  However, consider this: what if the concept or the characters’ behaviour is weak?  Are you going to tweak the cosmetics, then hand it back asking for a major rewrite?  You’ve essentially wasted your time (and potentially the author’s, if you drip-fed those changes first) if you’re going to ask for two cosmetic changes followed by a major rewrite.

Even when we prioritise defects, both developers and ourselves only have so much bandwidth to deal with them.  Who really wants to trawl through a hundred cosmetic defects to find the two Major issues buried in amongst them?

It’s the big ones which are the software killers, and what we need to push forward first, and concentrate our efforts on finding.  There’s thus a declaration in my testing mission statement that we will,

"Work to illuminate high impact defects rather than report cosmetic defects en mass"

The team needs to know and work on the big issues as early as they’re known.  When I first get a new build, I won’t stop to raise a defect for every cosmetic issue I come across.  I often grab quick notes and screenshots of anything that bugs me as I go along, but if it’s not stopping me, I keep going.  I want to find out if the build is delivering the ability to do the key functional stuff.  I can’t do this if I’m stopping to write up every small issue.

An example of this was an issue I encountered a few years ago.  Development was running late – and the work was being done abroad.  On Monday, we received a build, but it wouldn’t allow us to create items at all in our system.  This was, as you can imagine, a Severity 1 MAJOR issue, which was raised immediately.  Only there was no easy fix for it, and it stopped our testing dead in its tracks.

On Wednesday morning, we were closer, but it was still not working.  The foreign development team thought each night they’d cracked it, and each morning we testers found it was still broken.

To break the stalemate, I asked the development team to ring me when they were done, and I’d test it immediately to give them the instant feedback they needed, so we could start Thursday morning going “yes, we know they fixed this”, or they could try again.  I got the phone call at 3am, and went into the test environment.  I could now create items again – the Severity 1 issue was gone.

However I found out that although that issue had gone, there was a new issue, where instead of being able to create up to 12 items for each customer, we were limited to 7.  We’d removed a Severity 1 issue, but now had a Severity 3.

I had to make a quick call.  The issue I was looking at wasn’t big enough to stop us testing later in the day.  The development team had worked into the night in their timezone, and wouldn’t be able to fix this before they went home anyway (let alone deploy a build).  If the Severity 1 issue had still been there, it would have been different, because it would have stopped us from testing altogether.  Besides, I was tired, and would need to investigate this more.

I told development they’d done a good job, and I could now create items.  There was a smaller issue, but I’d write it up later in the day.

In my mind, this is the same principle in action, prioritising the important defects, getting action on them, but holding back on the less important defects until you’re ready to spend time writing them up and investigating them.

Much as with Gene Roddenberry’s writers, it’s about making sure that you’re spending your time on the core values, finding the gaping issues first, before adding the polish afterwards.

Thank you Gene for teaching me that …

The Great Bird Of The Galaxy

Friday, June 28, 2013

Zombie bugs! Or how to face a fear of the apocalypse that will never happen ...


Everybody fears zombies.  The only thing a product owner fears more than zombies, are zombie bugs.

Much like a zombie is a human being reanimated from the grave with a mindset for eating brains, zombie bugs are defects which were fixed, but “could come back”.


I certainly encountered my share of “bugs which came back” in the early 2000s.  It was a symptom (and indeed a risk) of our ClearCase code configuration system.

Our project at the time, MPA, needed to do something radical.  We needed to continue to deliver releases every 3 months, but at the same time develop a whole new capability on a new branch, for software that would not be released for almost 2 years.

The diagram below shows an example of the flow.  Our starting code-base, version 1.0, was turned into two branches.  Branch A had regular small releases, whilst branch B had lots of extra features added and developed for later merge.


The problem was that Branch A had a lot of features, but also some fixes for bugs that were in the original source material, version 1.0.  That meant those same bugs were also in Branch B, which wasn't receiving those fixes.  So when we did our inevitable big merge down the line, there was a chance we could unwittingly take merged code from Branch B, which would reintroduce a bug we were sure we'd fixed.  The zombie bug!
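
To make the mechanics concrete, here's a toy sketch in Python (not ClearCase, and deliberately oversimplified - the file names and merge policy are invented for illustration): if a merge naively prefers the long-lived feature branch wherever the branches disagree, a fix made only on the release branch is silently overwritten.

# Toy model of a "zombie bug" surviving a merge.  Each dict maps an
# area of the code base to its state; all names here are invented.
v1_0 = {"validate_date": "buggy", "core": "v1.0"}   # original code base

branch_a = dict(v1_0, validate_date="fixed")        # release branch: bug fixed here
branch_b = dict(v1_0, new_feature="added")          # feature branch: no fix taken

# A naive merge that takes the feature branch's version wherever the
# two branches disagree (dict unpacking: later values win).
merged = {**branch_a, **branch_b}

assert merged["new_feature"] == "added"             # the new capability arrives ...
assert merged["validate_date"] == "buggy"           # ... and so does the zombie bug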

As per the norm for testers, this led to a fear of “the code regressing”, and caused a bit of an overreaction.  Because bugs could “just come back”, automated regression suites were developed which checked for EVERY BUG EVER FIXED to ensure we'd not regressed.

Did I hear one of my audience go “ouch” out there?  Yes indeed, before too long, our thorough testing had gone on to create more of a headache and backlog than the original problem.  [If you're a fan of The Walking Dead, just think of all the pain The Governor caused.  And he's not a zombie.]

These days, I'm not sure if I've just been on different projects that aren't as ambitious with their branching or if code configuration tools have got better, but I just don't see “zombie bugs” like I'd witnessed 10 years ago.

Certainly I realise the strategy was flawed.  If a major merge is happening, the solution is to make sure you have plenty of time to give it a thorough testing, because it will be buggy.  But focusing on seeking out “zombie bugs” means time you're not spending looking for new ones.  And let's face it, in any build where I ever saw zombie bugs, I also saw a whole heap of brand new, non-zombie ones.

Is it really dead, or is it pining?


Hunting all out for zombie bugs these days is a waste of time.  Do you really want to check that every typo you've ever found is still corrected?  

Occasionally there is a big bug, a Godzilla bug, whose impact has everyone in the project a little scared (perhaps it was found in production, and caused customer issues).  In this instance, to ease fears, it's probably a good idea to check once in a while that this bug has gone, so you can report once and for all “it is no more.  It has ceased to be.  It's expired and gone to meet its maker.  It's a stiff.  Bereft of life, it rests in peace”.

Sometimes you have to say this several times to get the message across (it does help to have the evidence to back you up though).  It's important business owners feel this has been put down once and for all … y'know, a double tap to the head.

So what's to be done?

Do I just play along?

The first thing to understand with code: the bigger the change, the more potential for introducing bugs.  If you are dealing with a major merge, or a huge piece of functionality being added, expect problems.  But don't only go looking for issues where there have been problems in the past.

If you absolutely have to check for “bug fixes which have regressed”, make sure you are checking for the real nasty impact ones.  Don't waste time hunting out trivial ones, which in all likelihood have little real impact.
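
If I absolutely had to do that today, it might look like a couple of cheap, targeted “pinning” tests for the high-impact fixes only, rather than a suite re-checking every bug ever fixed.  Here's a minimal sketch in Python/pytest - the create_item API, the defect IDs and the 12-item limit are all hypothetical stand-ins for a real system under test:

import pytest

ITEM_LIMIT = 12  # the agreed business rule: up to 12 items per customer

class CustomerAccount:
    """Toy stand-in for the real system under test."""
    def __init__(self):
        self.items = []

    def create_item(self, name):
        if len(self.items) >= ITEM_LIMIT:
            raise ValueError("item limit reached")
        self.items.append(name)

def test_def_101_item_creation_still_possible():
    # DEF-101 (hypothetical, Severity 1): after a bad merge, no items could
    # be created at all.  High impact, so it earns a permanent pinning check.
    account = CustomerAccount()
    account.create_item("first item")
    assert account.items == ["first item"]

def test_def_102_item_limit_not_capped_early():
    # DEF-102 (hypothetical, Severity 3): a fix once capped customers at 7
    # items instead of 12.  Cheap to pin; trivial typo fixes don't get this.
    account = CustomerAccount()
    for n in range(ITEM_LIMIT):
        account.create_item("item %d" % n)
    assert len(account.items) == ITEM_LIMIT
    with pytest.raises(ValueError):
        account.create_item("one too many")

The point isn't the code - it's the budget: one check per genuinely scary fix, and nothing for the trivia.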

But most of all, try to make sure this wild goose chase doesn't mean you're so obsessed with old bugs that you're failing to find new ones.  After all, it would be a shame if, whilst focusing on an impending zombie apocalypse that never happens, you failed to notice you were about to be overrun by cowboys riding dinosaurs ...


Thursday, June 27, 2013

The Room 101 of testing ...


“You asked me once, what was in Room 101. I told you that you knew the answer already. Everyone knows it. The thing that is in Room 101 is the worst thing in the world.”

In the book Nineteen Eighty-Four by George Orwell, there is a room everyone dreads: Room 101.  It is a room used to break people, because it contains the thing that each person fears the most – something different for every person.

With my 101st post, I’ve now covered and explored a lot about testing, and maybe it’s time to consign a few demons into the testing version of Room 101.  I’m about to exorcise and attempt to banish some of (in my opinion) testing's evils into this room, which I hope people see fit to treat like Pandora’s Box, and never open again …


Best practices


The idea of best practice (to those who champion it) is that there is only one way to do testing.  The right way.  And all you have to do is apply it from one project to another.

When we are junior testers, we tend to go “I am now working on Project Pluto.  My previous assignment was Project Neptune, and it was a success.  So I will do everything here as we did for Project Neptune, and it will also be a success”.  I know this because I did indeed have this mindset when I was much younger.

In itself this isn't a shock-horror thing.  Of course we should be building on what's worked for us in the past.  What takes a more sophisticated mindset (and broader experience) to appreciate is understanding WHY that approach worked for Project Neptune, and then working out whether Project Pluto shares the same characteristics which will mean that same approach will work there.  Some people are shocked to find the answer to this can surprisingly be an emphatic NO.

Overall as testers we learn this either by having some wonderful mentors who guide us out of this trap, or by making this mistake ourselves and attempting to rescue the situation when things go wrong.

Unfortunately there are some testers who, when they come across an issue with their best practice, will not admit that it's their approach that is wrong at all.  Instead, their testing strategy was perfect; it was the rest of the software, from requirements to coding, that “was delivered wrong”.

As I write this, I realise the incredible ludicrousness of that statement, but also realise how guilty we can be of it at times.  Software development lifecycles are not run for the benefit of testing; testing is run for the benefit of the software development lifecycle – and often we can forget that.


Fools with tools


Tools.  I think they are the bane of software testers, and for some will do more harm than good.  The testing market is filled with them, and they promise the world – but rarely deliver on that promise.  In some ways we'd be better off without them, because then we'd know there are “no magic shortcuts”.

I believe the phenomenon touches on something that I experienced myself.  In the mid-90s I did a failed PhD research project in electrical engineering at the University of Liverpool.  At the time there was a lot of interest in neural networks for signal processing.

The idea behind my project was to use very simple sensors with high degrees of noise, and to attempt to use neural networks to “clean” this data and make it usable.  It was a frustrating failure.  The problem was that we were trying to follow this formula …

(In) Noisy meaningless data → NEURAL NETWORK → (Out) Clean data

A read around the journals of the time made it seem feasible, and the sensor I was working with had worked in the environment of electricity transformers.  I was trying to use it to measure water flow.

It failed, obviously – but this failure has meant I have more experience than most with what I call “Modern Alchemy”.  Much like the way alchemy promised to turn worthless lead into valuable gold, there are many things out there sold as offering just that from an IT point of view.

Automation tools which can be easily and robustly programmed, meaning you won't need any testers (claims circa 1999).  Test management tools which will give meaningful reporting “for free”.

The irony is I'm really not that anti-tool.  But much as with “best practice”, we have too many testers in the world who go “my last project/my friend's project used this tool … we should use this tool as well”.

As I described to my team the other week, we don't exist to make software.  There were some puzzled and shocked looks at this.  I work on projects that offer our customers a solution which will grow their business or remove a pain point – that's the driver behind everything I do.  It just so happens that those solutions are usually embedded in software.  If the software I'm testing does not address this customer need, it doesn't matter how robust it is; it has fundamentally failed.

This goes for testing tools too.  You do not build your testing framework around a tool – if you do, too often you will end up abandoning it (and explaining to management why you wasted so much money on it – yes, even free tools use up time and money; every hour you put into them is costing you).  As with the software project you put together, you have to first understand what your needs are, before you attempt to find a tool which will address them appropriately.

There are too many testers who want to use a tool, and are trying to work backward to create a need for it.  Understand the need first, then see how a tool can help you address it, and importantly understand ways it may cause problems.  But most of all understand that if something sounds too good to be true … it probably is.

Testing is this separate thing



Whilst it's true that testing is not programming or requirements gathering, and ideally testing comes after some coding has been done, testing should not be removed and detached from the rest of the software lifecycle.

Notice I've used words like “separate” and “removed” … now in the same vein let's try this one, “divorced”?  There's something common about these words – they all hint at a relationship that's broken down.

It's probably true that you could test a piece of code without having access to a developer or BA, but it's frustrating when it happens.  In every piece of work, I've always tried to develop relationships with business analysts, managers, developers, and even business owners where possible.  Testing works better when these lines of communication are available.

In the end, communication is all we have as testers.  We don't fix the issues we find.  We don't make key decisions on the product.  What we do have is the ability to give people information on aspects of the software: on whether it performs, or whether it has issues.  To do this we need lines of communication into the larger team.

If we do our job well, we give the decision makers the power to make informed decisions on the software.

------------

But how do you keep them in Room 101?

An interesting thought as I close off this post has to be “how do we do this?”.  How do we banish these things away for good?

Room 101 should be a cage for traits that plague the world of testing.  It should not be a prison for people who have ever shown those behaviours.  I say this with all sincerity, because for all those crimes above, I am ashamed to say that I have been “guilty as charged” at one time or another.

We all have a world view of testing, but it's incomplete.  If we're lucky the cracks show early, and we modify it.  The worst thing can be when that model sees us through a few times, and we start to rely on it, and see it as a gospel “best practice” from which we dare not deviate.

The way we lock up those traits is quite simple – education.  Whether we seek out mentors to help us grow, aim to learn from our mistakes, or talk about testing in the larger test community, whether on Twitter or on a forum, we have to seek out education both for ourselves and for those we work with – whether junior testers, or people who have a vested interest in testing such as business owners, project managers or developers.  We need to teach the tao of testing, or “the way of the exploding computer”.

This brings me to an exciting event which is only a week away now, the Kiwi Workshop on Software Testing or KWST3.  This is a peer conference, and fitting in with this article, the topic this year is,

“Lighting the way; Educating others and ourselves about software testing -  (raising a new generation of thinking creative testers)” 

As an ex-teacher myself, and someone who has a vested interest in developing new, passionate testers who understand testing and master its craft and complexities, this is going to be an amazing event.

To find some of our discussion and debate, follow us on 5th and 6th July 2013 with the #KWST3 hashtag on Twitter, and watch this space ...