Thursday, April 14, 2016

Thinking about test strategy ...

As background to this - this year I'm trying to collect a lot of ideas on test strategy.  It's my hope I might have enough material for a workshop on it next year.

Right now some of these ideas are getting structured, but at an early stage - so do contact me below or on Twitter with your ideas and comments.  I'd really like to push further on this!



Last week I was running an evening bootcamp on testing for Summer Of Tech.  It's an awesome opportunity not just to touch base with New Zealand's upcoming tech people, but also to champion a little of the fun and mindset of testing.

During the presentation though - I gave my current definition for testing ...


I've been using this slide for about a year - but I've only recently realised that within that statement is something I'm thinking about more and more.  Something that's been under our noses for a lot of the time.  Maybe we're so busy, blinded and programmed that we've never really taken the time to explicitly think about it.

Control-Expect-Observe

Let's look at the standard test script format we've all had to use in one form or another ...


In a test script when we use action vs expectation we're really talking about "something I control" and "something I observe".

It's fair then to sum up a test scenario as using something we control to produce something we observe which can then be compared with something we expect.
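
To make this concrete, here's a minimal sketch of a single scenario laid out as control / observe / expect.  It's Python purely for illustration - register_user is a hypothetical stand-in for a system under test, not anything real:

```python
# A minimal control-observe-expect sketch. register_user is a toy,
# hypothetical stand-in for the system under test.

def register_user(first_name, surname):
    """Toy system under test: rejects blank names."""
    if not first_name or not surname:
        return "error: name required"
    return f"welcome, {first_name} {surname}"

def test_blank_surname_is_rejected():
    # Control: the inputs we deliberately vary
    first_name, surname = "Ada", ""

    # Observe: what the system actually does with those inputs
    observed = register_user(first_name, surname)

    # Expect: compare the observation against our oracle
    expected = "error: name required"
    assert observed == expected, f"got {observed!r}, wanted {expected!r}"

test_blank_surname_is_rejected()
print("scenario passed")
```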

All testing is a form of play between these factors - even exploratory testing which we talked about previously here.  We use a heuristic to vary something we control, we use an oracle to set our expectations, and we observe - raising defects if there's a variance.

Exploding the idea of strategy

At its heart then, a test scenario seems a very simple beast - we control something, something happens ... is it what we expect?

This is probably where testing gets its reputation for being easy - a single test scenario presented like that looks simple.  Even if we overlook how observation can be difficult to master.*

The problem is that testing is not the execution of a single scenario.  The skill of testing is coming up with a series of scenarios which cover a system.  Again, people can come up with a scenario or two, but that doesn't make them skilled testers.

To me, being able to come up with a selection of scenarios to cover a product is not easy - and there are no guarantees.  There is such a vast variety of ways a product can go wrong that it can be overwhelming.  Consequently we will often see members of the testing community talking about how huge testing can become.

Unfortunately that can become problematic.  I'm currently reading Daniel Kahneman's Thinking, Fast and Slow, and we've come to the idea of substitution.

Simply put, when we're asked a very complex question that leaves us perplexed, we will sometimes answer another, related question for which we do have an answer.  In his book, Kahneman describes a survey where students were asked "are you happy?" followed by "how many dates have you been on this month?".  "Are you happy?" is a complex question.  Asked this way around, the researchers found no correlation between the answers.

They then asked another group of students the same questions, but the other way around - "how many dates have you been on this month?" followed by "are you happy?".  This way around they found a very strong correlation.  The reason being that when the complex question about happiness came up, the students substituted a question they had just answered: "how good is my dating life?".

Now remember how I mentioned that testers talk about "the possibilities of what you can test" being huge?  That means when a new system is put in front of us and we're asked "what will you test?", it's a big, complex question.

So, no surprises, we tend to perform a substitution of our own, and hence initially answer the question as if it were one of the following questions ...

  • "What do you control?"
  • "How have you tested other systems?"
  • "Where have you found problems before?"


Now, none of those is a bad first answer.  The problem is when we fall into the trap of using one of them without realising.  They are each a very good starting point, but if you don't push further, you'll end up doing shallow testing, and potentially missing something important.

"What do you control?"

I'm going to revisit material from a previous article on Back To Basics, where we looked at registration.

Let's take a look at Facebook registration for a change ...



It's easy to take a look at this and come up with a list of things we can vary and control ...

  • First name field
  • Surname field
  • Mobile number field
  • Re-enter mobile number
  • New password
  • Birthday drop down lists
  • Female / Male radio buttons
  • Create an account
  • Various hyperlinks


When we look at these items, our first approach is to create a list of test scenarios which use different data and permutations.  And that, as I've said, is a superb start.  Where it's not great is if, once you've exhausted your ideas here, you consider you've tested everything.

Because you haven't.

What you've just done is a very competent job of functional testing.  You've chosen items that you know are simplest to control, and you've created scenarios.
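
As a sketch of what those scenarios might look like - the field rules and the validate_registration helper below are hypothetical assumptions for illustration, not Facebook's actual validation - here's a table of data permutations driven through a single check:

```python
# Data-permutation scenarios over the fields we control.
# validate_registration is a hypothetical stand-in for the real form.

def validate_registration(first_name, surname, mobile, mobile_repeat):
    if not first_name or not surname:
        return "name required"
    if not mobile.isdigit():
        return "mobile must be numeric"
    if mobile != mobile_repeat:
        return "mobile numbers must match"
    return "ok"

# Each tuple: the things we control, then the thing we expect.
scenarios = [
    ("Ada", "Lovelace", "0211234567", "0211234567", "ok"),
    ("",    "Lovelace", "0211234567", "0211234567", "name required"),
    ("Ada", "Lovelace", "02x1234567", "02x1234567", "mobile must be numeric"),
    ("Ada", "Lovelace", "0211234567", "0217654321", "mobile numbers must match"),
]

for *controls, expected in scenarios:
    observed = validate_registration(*controls)
    assert observed == expected, f"{controls}: got {observed!r}, wanted {expected!r}"

print(f"{len(scenarios)} scenarios passed")
```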

What you've omitted are the items you don't know how to control.  I think an important part of test strategy is being able to recognise "I ought to be able to vary X, but I don't know how" - then asking the question "how can I control X?".  Unless there are explicit instructions, we'll often have a blind spot for the "everything else" that defines a product, which is often thrown into the "non-functional testing" basket.

What are some examples of tricky things you might want to control?

  • Platform / Browser you test your system on.  Different browsers react differently - installing other browsers is easy.  And you can set up different virtual machines to simulate other platforms.  Maybe you need an Apple Mac (good luck prying it from your architect's cold, dead fingers) - and maybe some devices such as the latest iPhone or tablet?
  • Screen resolution.  When was the last time you adjusted your screen resolution?
  • Server load.  Does the system work well with multiple users on?  You can use a tool like JMeter to simulate a lot of users on your system - put simply, it's a tool that allows you to control a lot of concurrent calls to your system and measure response times (there's a rough sketch of the idea just after this list).
  • Hardware down.  If you've got a load balanced system, do you have a scenario with part of your system down?  It's not an obvious thing to control, but you should ask how you can get someone to do that.
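
Since I've mentioned JMeter for server load: as a rough sketch of the underlying idea only - lots of concurrent calls, with response times measured - here's the same thing in miniature using the Python standard library.  The URL is a placeholder; point anything like this at a test environment, never production, and reach for a real tool like JMeter when you need ramp-up, think time and proper reporting.

```python
# Miniature illustration of what a load tool controls: concurrent calls
# to the system, with each response time measured. URL is a placeholder.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/register"  # placeholder - your test system
USERS = 20                              # concurrent simulated users

def one_call(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
            status = response.status
    except Exception as error:  # a failed call is an observation too
        status = f"failed: {error}"
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(one_call, range(USERS)))

for status, elapsed in results:
    print(f"{status}  {elapsed:.3f}s")
```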


Of course, in talking about the above I've provided potential solutions - which was naughty of me.  In strategy, you need to be brave enough to say "I need to be able to control X", even when you're not sure how.  Ask your team, and ask around anyway.  Maybe there's a tool out there, or maybe the team needs to build it into the system as part of testability.  But don't be too quick to say "I can't control X, so I won't bother".

At worst, if you can't find a way to control something, then record it as an assumption: "we weren't able to test around different X".

"How have you tested on other systems?"

Again, a super method to get started: we use our experience from a previous system, and copy and paste it over.  This is great apart from,

  1. Your testing will only ever be as good as your last project (what if, instead of doing good testing, you were just very lucky?).
  2. If the current system is almost identical to the last one, that's great.  But more than likely there will be differences - and do you need to account for them?

A good example: if you didn't need mobile device testing on your previous project, you're in danger of assuming the same applies to this project - are you sure?

"Where have you found problems before?"

Once again - this is a helpful method, with a hideous blind spot.  If you only test areas where you imagine there could be problems, what if there are problems in the system you're just not imagining?  How can you think bigger in your testing approach?

If you only go looking for old bugs, the problem is you might not be taking a different path to find new ones!


A useful guide for strategy

Is there a particular approach you tend to favour?  In the early days, I tended to get stuck on "where have you found problems before".  I now tend to obsess a little over "how have you tested on other systems".

When you're looking at a new piece of work, capture all of your initial ideas and try not to think too much about it.  Once this has dried up, think about how you've approached it - has it been as if you were answering,
  • "What do you control?"
  • "How have you tested other systems?"
  • "Where have you found problems before?"

Think about the associated blind spots, and dig deeper.  Try answering the other two substitute questions, and see if it forces you to go deeper.  And of course, try out the Oblique Testing cards to shake things up a bit!






Regarding Observation

* If you look at my book How To Test, there is a chapter on observation.  However, despite over 400 readers, no-one has yet tweeted me to ask "where is Chapter 7?" - it's one of the deliberate mistakes in the book ...


Dan Billing has accused me of overplaying the "smart arse" card on this one ...

3 comments:

  1. Hey Mike, great post! Too bad it's a day late for me to plagiarise bits in our meetup so I can sound smart. :)

    If I correct typos does that give me a gold star for 'observation skills'?
    *despite over 400 readers ;)

    1. Remember there's one section which is hugely typo prone by design! ;-)

      Writing a LeanPub book is interesting - I'll tell you about the fun sometime.
