Tuesday, January 19, 2016

Peer 102: The MacGuffin effect in testing ...

Previously we looked at the then-to-be-released new Star Wars movie as a way to explore some psychological phenomena we'd be unconsciously exposed to in December 2015.

Today I want to focus on one of those key factors (group delusion), and see how it affects us at work as testers.

Group Delusion

The term "group delusion" feels an awfully derogatory one - but as I discussed last time, it's subtle and can affect us all.  I personally see it as akin of peer pressure.  To me there's somewhat of a grey-blur between them, and they add up to the same kind of thing - an unconscious drift in our thinking.

I want you to imagine this scenario ...

You've just met up with a couple of testers, Bev and Steve, who you worked with two years ago.  Although it was meant to be more of a social catch-up, it doesn't take too long before you start talking shop!

Bev mentions how her team has recently started applying a MacGuffin strategy to their testing, and she's finding it really interesting, but a bit challenging.  Steve snorts a bit - his project has been using MacGuffin for over a year, and he'd never go back!

That evening, you get some email spam from a recruiter (you remind yourself you really should unsubscribe one of these days), and you notice the first job calls for "experience applying MacGuffin is a must!".  You scroll down, and it's not the only role which mentions it.

It only gets worse the next day when one of the senior managers calls you and several other testers into an office asking why you've not started implementing a MacGuffin policy yet ...

So we need to knuckle down and start implementing MacGuffin, right?

Hopefully your first reaction really is "what is MacGuffin, and why will it help?".  Sadly not everyone's will be.

A MacGuffin is a great term for this kind of effect in testing - it's a word often associated with Alfred Hitchcock, describing an object used as a plot device, one that's often kept vague and mysterious.

The mysterious MacGuffin used in Pulp Fiction

Over the last few years I've seen a few MacGuffins being traded around social media in association with testing.  I notice the MacGuffin effect when I start to see people wanting to build a MacGuffin into their strategy not because it will help them with a specific problem, but because everyone else seems to be working with MacGuffins, and they're worried about being left behind.

It doesn't matter what they are, there is a peer pressure which creates what's called a "keeping up with the Joneses" effect.  If your neighbours, the Joneses, go out and get a new BMW and a 50 inch plasma TV, you end up asking yourself why you don't have these things, and thinking that really you ought to go out and get them yourself.  Even if it's something that previously you felt you never needed.

Let me give you a few examples of MacGuffins, and I'll break down the pitfalls I've seen ...
  • Test automation
  • Agile
  • Test management tools

Test automation


I'm actually a huge fan of automation (though I agree, it's more checking than testing).  Used well it can help my job as a tester, because it means a developer can check that software works to at least a level where I can interact a little with it.  If it's so fundamentally broken that a user can't even log into the system, far better that's found with a little automation than for me to find it on Monday, only to be told I can't get a new build for another week.
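
To make that concrete, here's a minimal sketch of the kind of login smoke check I mean - written in Python with Selenium purely as an example, with the URL, element IDs and credentials all invented.  Run on every build, something this small is enough to catch the "nobody can even log in" class of breakage before it ever reaches a tester:

    # A tiny login smoke check - a minimal sketch, not any particular project's suite.
    # The URL, element IDs and credentials below are invented for illustration.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")            # hypothetical login page
        driver.find_element(By.ID, "username").send_keys("smoke_user")
        driver.find_element(By.ID, "password").send_keys("smoke_password")
        driver.find_element(By.ID, "login-button").click()
        # If even this fails, the build isn't worth handing over for testing yet.
        assert "Dashboard" in driver.title, "Login smoke check failed"
    finally:
        driver.quit()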

The problem is automation has been somewhat over-sold.  This really goes back to the 90s, when the sales blurb for expensive tools talked in terms of there being no need for testers.  All in the order of "costs half a tester's wage, tests 100 times faster!".

The problem is you need a really good tester to come up with the ideas for scenarios to check.  Then you need someone to program it all up.  Then to run it.  And check the results if they show issues.  Then to maintain it when you change your code.  You still need people, and they're kind of doing "testing-esque" roles (especially determining scenarios, and investigation).  [Though you might need fewer of them]

If one of your tests "shows a red" (i.e. a fail), I guess you'll need to manually rerun the scenario to see why you get that result ... isn't that most likely to be a tester?

So even in a perfect scenario, you're still going to need people working in testing (sorry if you bought the sales pitch).

However when you're sitting with Bev and Steve and they're going "automation is amazing ... it helps us to test really fast", everyone would want a piece of that action!  Especially if they scoff and go, "what .... you're STILL testing manually?".

I myself have experienced this kind of pressure - I had a manager at a previous company want to know why we weren't using automation for our current project.  This project was an integrated piece of hardware similar in many respects to a supermarket self-service kiosk.  They'd heard you could get some free software called Selenium, and a friend was using it on another project nearby.


Well, that was an unexpected item in my bagging area!

I was lucky to be able to go through the details with this manager and help them understand why it wasn't suitable, and I was glad we were able to work through it together.

My points were,

  • For automation to work well, we really needed the product in our hands in as finished a state as possible, quite early on.  The reality was we'd scripted up some scenarios, but we would not get so much as a demo of the product until it was delivered for testing (and believe me, I'd tried).  We were on a tight schedule, and automation would jeopardise that.
  • It takes a long time to automate a test.  It pays off if you expect to have to run that test dozens of times.  We had a 6-8 week window, expecting to rerun tests at most about 3-4 times.  Once released we'd not touch this product again for years (if ever).  [There's a rough back-of-envelope version of this sum after this list]
  • Primarily though, Selenium works on web-based applications.  Our application was not even remotely web-based.  The technology just wasn't suitable.  I had to go through some Selenium websites to show this to him, but again it was worth doing so he'd understand.
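
To illustrate that second point, here's the back-of-envelope sum.  The hours are invented for illustration, but the shape of the reasoning is the same:

    # Rough break-even sum for automating one test - numbers invented for illustration.
    hours_to_automate = 16      # scripting, debugging, wiring it into the build
    hours_maintenance_per_change = 2
    expected_code_changes = 3
    hours_per_manual_run = 0.5  # time to run the scenario by hand
    expected_runs = 4           # realistic reruns in our 6-8 week window

    automation_cost = hours_to_automate + hours_maintenance_per_change * expected_code_changes
    manual_cost = hours_per_manual_run * expected_runs

    print(f"Automation: {automation_cost} hours vs manual: {manual_cost} hours")
    # Roughly 22 hours vs 2 hours - the sum only tips towards automation
    # once expected_runs (and the life of the product) gets much larger.

On those numbers, hand-running the scenario a handful of times is far cheaper than automating it; the balance only shifts when a test will be rerun many times over a long product life.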

All the same, it's easy to see how he got MacGuffined.

Agile

Again - I'm a huge fan of agile.  Just not when it's mis-sold.

I see and hear this a little - companies who've felt compelled to "jump on the agile bandwagon" because they've heard about everyone else's huge successes.  But without really "getting what agile is".

Agile though is a hard sell - when you tell customers the reality that "agile isn't about succeeding, it's about making failure more up-front so it causes less damage, costs less and is more easily rectified", some look a little shocked and terrified.


I thought you promised us success!

Agile has become such a big MacGuffin that no IT company can afford to say they don't do it.  But unfortunately there's a little bit of "just call it agile" out there, where the principles aren't really being followed or understood - it's just a rebranding of what was done in waterfall.  As I've talked about when addressing our agile transformation, there's an awful cargo cult trap of trying to keep doing what you've always done, but "whack an agile label" on it.

Warning signs for me tend to be,
  • "Well ... we do stand ups" [That is - that's the only difference which has been noticed]
  • "We have a 2 week development sprint ... which is followed by a 2 week testing sprint"  [Mini-waterfall]
  • "We' then run our retrospective ideas past management" [Sounds awfully like the agile team are not empowered]
  • "We do SAFe"

Test Management Tools

I'm often told by their advocates that "the best thing about this MacGuffin Test Management Tool is that you get all your reporting for free".


Actually, let me repeat this and add emphasis - "the best thing about this MacGuffin Test Management Tool is that you get all your reporting FOR FREE!!!".

Just like agile and just like automation, test management tools can be very useful.  For very complex projects with huge numbers of testers, they allow you to break the testing down into different areas, and track which tests touch which requirements.  And that's useful.
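
As a toy illustration of what that traceability buys you (the requirement IDs and test names here are invented), even a small script - or an Excel lookup, for that matter - can answer "which requirements have nothing testing them?":

    # Toy traceability check: which requirements have no test against them?
    # Requirement IDs and test names are invented for illustration.
    tests_to_requirements = {
        "test_login_valid_user": ["REQ-001"],
        "test_login_locked_account": ["REQ-001", "REQ-004"],
        "test_checkout_totals": ["REQ-007"],
    }
    all_requirements = {"REQ-001", "REQ-004", "REQ-007", "REQ-009"}

    covered = {req for reqs in tests_to_requirements.values() for req in reqs}
    print("Requirements with no tests:", sorted(all_requirements - covered))  # ['REQ-009']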

When advocates of these tools scoff with "well, you can't just use Excel, can you?", I usually squirm a little, and reply "well, often Excel is my tool of choice".  Am I mad?

Here's my problem with test management tools.  Whilst many of them do allow you an overview, it comes with a price, which is not free.  And I don't mean the "per seat license dollar cost".

Test management tools often constrain testers to work and test in a particular fashion - one which is often not the natural/logical/comfortable/most-efficient way some testers operate, especially exploratory testers.  If you have a tool which isn't really suitable for how you work, it feels clunky, it slows you down, and it creates a drag on your work velocity.

They can also give a false perspective - because the kind of high-up managers who love them rarely dig down to the detail.  A couple of years ago, I had one such manager mention how nice it was to be able to see 500 scripts scoped out for the testing effort, and like a commander they could watch their status as they were run.  When I looked into some of these tests, they had a title, but the majority of them had no steps ...  It's something I'm very aware of.  I know of some projects around Wellington where the test management software was mandated, and the testers found using the system for testing so difficult and clunky that they ended up only using it to track bugs.

This is how some people think the charts in test management tools work ...

I like to use Excel when practical, because I can use it in any manner which makes sense to me for keeping testing notes and breaking down the effort.  If a team has fewer than 5 testers, you probably don't need a test management tool at all.

I also know that looking at a graph or output from these tools does not show me where we have problems (and I'm a trained scientist, good at reading trends in graphs, but also at noticing noise).  Often those problems can lie under the surface of the pretty graphs.  Far better to have a daily stand-up with other testers, and get them to talk about any problems they're encountering.  That avoids me having to stare at numbers or statuses on a graph or dashboard trying to "read the signs" like some fortune teller.  I trust my testers to tell me about problems more than I do a tool.  If I don't trust them, I shouldn't have them working for me.

Likewise when it comes to scaling up a group, I'd rather have 15 testers split into 3-4 groups, each with a team lead, and all those team leads doing a stand-up with me daily to pass on problems and pain points.  Even if we're using a test tool, it's important to keep talking to the people using it.

[I'm going to avoid a discussion on test metrics and reporting here, as I've something upcoming on just that topic]



So that's the MacGuffin effect leading us into sometimes costly mistakes!  Any of that feel at all familiar?  Any other MacGuffins you've come across?

Next time we'll look at the phenomenon of denialism, and the sting in the tale there...
