Friday, April 24, 2015

Developing an approach to responsive design website testing

Since its introduction in 2011, responsive design has become increasingly important to websites, and with good reason.  Prior to its use, websites struggled to support the increasing uptake of access from smartphones and tablets.  Some sites had to develop separate (and thus costly) websites - one for viewing on PCs, and one for viewing on mobiles (typically hosted at m.*).

Responsive design is a methodology where the same page source scales according to the size of page available - allowing a single source of web content to be supported on different sizes and devices, from laptop to smartphone.  It basically allows all your information to "fit" horizontally, so if you're on a mobile device, you don't keep having to resize or scroll your screen horizontally <->.

There is a wonderful website you can try this out on here.

Open in full page on your browser, and it should look a little like this ...


Now reduce the size of your browser a little, and you should find that instead of two columns of entries, it reduces to a single column ....


Now take it down even smaller, as if you have the limited number of pixels you get on a mobile, and the labels now go above the data fields ...


Pretty cool huh?  But there are also a few potential pitfalls, and this article will talk you through some of them.

Whoo-hoo, mobile testing, I'll have me some of that!

Something we need to be crystal clear about: when we're talking about testing responsive design on mobile devices, we're basically just using them as browsers.  This isn't connected with testing mobile applications which are installed through Google Play or Apple's App Store - that's a whole different field of testing (but some people get confused).

Creating a strategy

To get the ball rolling, you need to start setting a strategy - across mobile devices and browsers.  Responsive design uses some newer HTML features, so there are older phones and browsers which really struggle.  So the question has to be - what browsers/devices matter?

When we've done browser testing in the past, we've just tended to install a whole host of browsers onto our machine, and maybe some virtual machines to cover older versions of IE, and just "get started".  Here's the catch though - it's free to install a browser (yes, even Safari).  But mobile devices cost - and if we're talking a high end model, then it costs a lot!  You have to be selective, and your client has to be willing to support the purchase of these devices.

Even "hey guys, does anyone have a phone, I just want you to check this site in testing" is a bit dubious.  You're running your testing strategy from "what you can find around the office".  The other thing is that a mobile phone is a very personal thing - I might check a site for you, but I wouldn't let you take away my phone to look at a problem I'd encountered.

If your client is keen on developing a responsive design site, then they need to be comfortable with renting or at least purchasing some devices.  And here's the thing,

  • Going forward, you will have to do some responsive design checking for every release.  It's not just something you bolt on, work at for a few weeks and don't have to worry about anymore.
  • New devices are always being released.  This means revising those devices you use about every 6-12 months (a good rule of thumb is every time there is a new version of iOS).

Which devices?

The best answer to this is to talk to your client and ask "what browsers/devices have 5% of traffic or more".

Of course if you are just creating a responsive design site, you might not have reliable figures yet.  In this case there are lots of sources out there.  Mobile test consultancy Synapse provide some useful resources (and I've used Jae before, and he's well worth bringing in).


Apple devices - you can find information about the most popular here.


Android devices - you can find information about the most popular here.

Right now, the jury is still out on Windows Phone 8.1 uptake, and it's also hard to cross-compare device usage (iOS vs Android vs Windows 8.1).

Looking through that list, I'd say at a bare minimum, you'd need to consider the following in your test suite,

  • iOS 8.X device
  • iOS 7.X device
  • KitKat device
  • JellyBean device

For these devices, I'd also try to consider lower-spec models, especially ones with smaller screen resolutions.  [In responsive design, a smaller screen means less real estate, and more potential for problems.  As testers we like problems]  That can often mean looking at a smartphone over a tablet.

Beyond that, I'd try and see if I could talk up a purchase of something in Lollipop (it's a low share, but it's the future), and maybe Windows 8.1 (especially as there are some dirt cheap Windows Phones out there right now).

Regarding the browsers on those devices - most people just use the built-in browser (until analytics tell you otherwise).

Remember - this is my analysis from the current market - it will change!  Once your site is up, try and get analytics to help profile popular browsers/devices, after all it doesn't matter what other people are using, what matters is what your client's customers are using.

And on that bombshell, just a few weeks after Microsoft announced the death of IE, look who sits at the top of the most popular browsers for reading this blog?



Test Website Access

Well, typically the website you're producing is kept tightly under wraps until it's launched.  Don't forget to have a discussion about that with your web team.  Do you have a VPN you can set up on devices to access your test environment?  You're going to need some form of solution to get your device to see your test pages.

Can't we get around needing mobile devices, and just use the browser in small mode?


If you make your browser 480 x 800 - isn't that the same as using a mobile device?

To be fair it's a good first step, but as you'll see below, some of the problems come from the operating system.  Android and iOS have special built-in ways of handling certain items like drop-downs and pop-ups, which means they can behave slightly unexpectedly.
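
If you do want to make that first step a bit more repeatable, here's a minimal sketch of the idea using Selenium in Python - it just resizes the browser window through a few common breakpoints and saves a screenshot of each for a human to eyeball.  The URL and the breakpoint list are made up for illustration; swap in your own site and the widths that matter to you.

# A rough sketch of a "first step" breakpoint check using Selenium in Python.
# Assumes Chrome and its driver are installed; URL and sizes are examples only.
from selenium import webdriver

BREAKPOINTS = [(1280, 800), (768, 1024), (480, 800), (320, 568)]  # desktop, tablet, phone-ish

driver = webdriver.Chrome()
try:
    driver.get("https://test.example.com/registration")  # hypothetical test page
    for width, height in BREAKPOINTS:
        driver.set_window_size(width, height)
        # Save a screenshot per size so someone can eyeball the layout afterwards
        driver.save_screenshot(f"registration_{width}x{height}.png")
finally:
    driver.quit()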

So I'm set up - what next?

Okay, so someone approved your devices, you have an accessible test area and you're ready to go ... and?

So what exactly are you going to test?  What really helps at this point is to come up with a way to summarise your application.  To cross-browser and cross-device test, you need to repeat the same elements of testing over and over for every browser and device.

Do you have any clear idea what those elements are for your system?

The following is a guide to my approach ... and remember I looked at some of this a couple of years ago (but just for browsers).

Page checking
Create a map of every page.  Confirm that,

  • can every field be selected and data entered?
  • can every check box be selected/unselected?
  • can every drop-down box be selected?
  • can every button be selected?
  • can you switch between tabbed pages?
  • do pop-up confirmation and error messages appear?

You are basically looking here for something not working - a button missing, etc. - which would prevent you from being able to use a page!
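
If it helps, here's a minimal sketch of that page-checking walk using Selenium in Python.  The page map and element IDs are invented for illustration - yours will come from your own map of the application.

# A minimal sketch of the page-checking idea above, assuming Selenium in Python.
# Page URLs and element IDs below are made up - replace them with your own map.
from selenium import webdriver
from selenium.webdriver.common.by import By

PAGE_MAP = {
    "https://test.example.com/register": ["firstName", "lastName", "country", "submitButton"],
    "https://test.example.com/login": ["username", "password", "loginButton"],
}

driver = webdriver.Chrome()
try:
    for url, element_ids in PAGE_MAP.items():
        driver.get(url)
        for element_id in element_ids:
            element = driver.find_element(By.ID, element_id)
            # The bare minimum: the element is on screen and usable
            assert element.is_displayed(), f"{element_id} not visible on {url}"
            assert element.is_enabled(), f"{element_id} not enabled on {url}"
finally:
    driver.quit()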

Functional Checking
What are the main basic functions of the website?  Make a list of them, and repeat them on several browsers and devices.

Here's some examples (discussed previously),

  • Registration
  • Login
  • Change my account details

Generally the tests per browser/device don't have to be exhaustive - you are repeating a few examples across multiple browsers after all.  But ideally they should include a success and a failure (you always want to check an error message is displayed).
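
As a rough illustration of repeating one function across browsers, here's a hedged sketch using pytest and Selenium in Python.  The browsers, URL, element IDs and expected landing page are all assumptions - the point is the shape: one flow, parameterised across browsers, with both a success and a failure check.

# A sketch of one flow (login) repeated across browsers, using pytest + Selenium.
# URLs, IDs and the "dashboard" landing page are invented for illustration.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    browser = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield browser
    browser.quit()

def do_login(driver, username, password):
    driver.get("https://test.example.com/login")  # hypothetical test environment
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "loginButton").click()

def test_login_success(driver):
    do_login(driver, "testuser", "correct-password")
    assert "dashboard" in driver.current_url  # assumed landing page

def test_login_failure_shows_error(driver):
    do_login(driver, "testuser", "wrong-password")
    # Always check the failure path - an error message should be displayed
    assert driver.find_element(By.ID, "loginError").is_displayed()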

Being able to create a summary overview for testing your website is something I hope to explore in the future - so watch this space.

Common Gotchas

These are the areas I've commonly found problems with on responsive design.  You'll notice that some of these can be countered by good, up-front design ... so if as a tester you find yourself in a design meeting, go in forearmed ...

Landscape to portrait to landscape


A simple test - part fill your page in, then turn it on its side.  Do you lose your data?  Does any page element vanish?

Turn it back to its original orientation, and check again.  Sometimes the changed orientation causes a page refresh, and things go missing!
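
For what it's worth, here's a rough sketch of automating that rotate-and-check test, assuming an Appium session driving the device's browser (older desired-capabilities style).  The capabilities, URL and field ID are illustrative only, and whether orientation can be driven this way at all depends on your device and Appium setup.

# A rough sketch of the rotate-and-check test via Appium's Python client.
# Capabilities, URL and field IDs are assumptions - adjust to your own setup.
from appium import webdriver
from selenium.webdriver.common.by import By

caps = {
    "platformName": "Android",
    "deviceName": "test-device",   # whatever device is plugged in
    "browserName": "Chrome",       # drive the device's browser, not an app
}

driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", desired_capabilities=caps)
try:
    driver.get("https://test.example.com/register")
    driver.find_element(By.ID, "firstName").send_keys("Violet")   # part-fill the form

    driver.orientation = "LANDSCAPE"   # turn it on its side
    assert driver.find_element(By.ID, "firstName").get_attribute("value") == "Violet"

    driver.orientation = "PORTRAIT"    # and back again
    assert driver.find_element(By.ID, "firstName").get_attribute("value") == "Violet"
finally:
    driver.quit()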

Drop downs

The mobile device overrides how drop-downs are handled in browsers ...


On the left is the iOS carousel, and on the right the Android selection list.

The problem occurs, though, if you have long selectable items in your drop-downs.  For example, consider a list of security questions,

  • What was the name of your first girlfriend/boyfriend?
  • What was the name of your favourite film?
  • What was the name of your junior school?


All of these will truncate the questions, so all the user will see is "What was the ...".  There's really no solution for this other than rethinking the approach/redesigning.
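
A quick sketch of how you might flag truncation-prone drop-downs early, using Selenium in Python.  The URL, the select element's ID and the 25-character threshold are all assumptions - on a real project you'd calibrate the threshold against your smallest supported screen.

# A small sketch for spotting truncation-prone drop-downs with Selenium in Python.
# URL, element ID and the character threshold below are guesses for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

MAX_COMFORTABLE_LENGTH = 25  # rough guess at what fits a small phone screen

driver = webdriver.Chrome()
try:
    driver.get("https://test.example.com/register")
    security_questions = Select(driver.find_element(By.ID, "securityQuestion"))
    for option in security_questions.options:
        if len(option.text) > MAX_COMFORTABLE_LENGTH:
            print(f"Possible truncation risk: {option.text!r}")
finally:
    driver.quit()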

Pop ups


I've found any kind of required pop-up from a browser can be a little troublesome on mobile devices (they often just go AWOL).

This can include,

  • Are you sure you want to continue?
  • Read these terms and conditions.
  • Select a date from this calendar

Tread carefully - and always check these.
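
As one example of "always check these", here's a hedged sketch that waits for a native confirmation pop-up to appear, using Selenium in Python.  Custom modal dialogs and date pickers aren't native alerts, so they'd need ordinary element checks instead - and the URL and button ID here are invented.

# A hedged sketch: confirm a native "Are you sure?" pop-up actually appears.
# URL and button ID are made up; custom modals need element checks instead.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://test.example.com/account")
    driver.find_element(By.ID, "deleteAccountButton").click()
    # Fail the check if the confirmation never shows up within 5 seconds
    WebDriverWait(driver, 5).until(EC.alert_is_present())
    driver.switch_to.alert.dismiss()  # we only wanted to know it appeared
finally:
    driver.quit()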

The catch with responsive design

Yeah - there was bound to be one, wasn't there?  This approach significantly reduces the risk of problems in responsive websites, but it's no guarantee.  Indeed some mobile phones take a system like Android and tailor particular features, so it's hard and expensive to exhaustively test upfront.  This risk has to be known to both you and your client.

Furthermore, because responsive design uses newer features of HTML, much older browsers and devices really don't handle the page very well.  You might want to check that on old, out-of-scope browsers/devices it at least fails gracefully with a "your browser does not support this website" error message.

Some common sense ticket items

Finally some common sense things to consider before we wrap up ...

Health and safety

Some testers are really keen to get into mobile testing.  I actually find it a pain.  The screens are much smaller, data entry is done just with your thumbs, and you tend to hunch over the device.

This is a recipe for repetitive strain injury.  Try and mix up mobile testing with browser testing, and make sure you're taking regular breaks.

Keep them safe and keep them secret

Make sure the devices are locked away when not in use, though keep a couple of keys to where they're stored with your team.  You just don't want them walking away.  Lock them up and count them in every night.

You drop it you bought it?

It's inevitable that one is going to get dropped.  What then?  Look into getting them added and listed on your building insurance.  If you can't, make it clear what will happen should any damage occur.





Breaking news ...

I ran this article by Stephen Janaway - he's someone whose articles I really respect and look to when learning in the mobile space, and he has a huge amount of experience.

One of the great things about writing an article like this is that it puts down everything you know, and sometimes what you find is there's something you didn't know.  And sure enough ...


So, I'd never really thought about using Chrome Developer Tools as an early form of testing - so I can't really explore it here, but it might be a topic for a follow-up blog.

Sunday, April 12, 2015

Rapid Software Testing with James Bach returns to New Zealand


I count myself as extremely fortunate to have been able to take James Bach's Rapid Software Testing course back in 2012 - it's hard to believe he's not been back to repeat the course since!

If you've never done Rapid Software Testing, I highly recommend it - for myself, it was one of the most influential courses I've taken on software testing.  Indeed, use Google and check out other testers' opinions of the course - what I've seen has been universally positive, and it seems everyone has a different tale to tell.

The course is full of clever insight - indeed, I'm forever going back over my course notes and learning/remembering additional details.  Most of all the course worked for me as a mirror - it confirmed a few things about the direction I was working in, it suggested a few new ways of working, and I learned a few of my weaknesses.  All these things led to me becoming a better tester.

This is a list of some of my personal take homes from the course,


  • Documenting what you're going to test isn't as important as recording/documenting what you've tested #RST  [Indeed this became an important part of our approach here]
  • Boundary testing is important, but it's not the be-all. Be sure to check behind them #RST
  • Visualise the testing you've done. Are there large expanses untested? Are you sure nothing's hiding there? Maybe check again. #RST
  • When you think you've tested all you can. Defocus, and try a new approach to using software. Maybe get someone to try #RST
  • Always know the purpose and values of the software you've been given. #RST
  • Engage with customers and question them to tease out ways you can aid them better #RST
  • A test action which repeats/rewords a requirement has no real value #RST
  • An oracle is any method or person which means you have an expectation of the system under test #RST
  • Oracles can be fallible though #RST
  • We keep a list of expectations in our head about software. It only triggers when something is unexpected. You can't write them down #RST
  • Most testing methods feel instinctive. But sometimes we need to challenge & ask more questions to explore other aspects of the s/w #RST
  • My main heuristic is thinking how I'd code a piece, and the ways it could mess up if I did #RST
  • Don't be afraid to ask for a log file from developers if it makes your #testing easier.  #RST
  • But even log files are fallible #RST #dontTrustTooMuch
  • The people who most impressed me and who I'd want to work with aren't necessarily the same as ppl the instructor liked #RST #differentValues
  • If someone has an idea I feel has value, but the expert disagrees, I need to be braver, speak up & support them #RST
  • Experts in testing can advise me.  But as a tester on MY project, I am ultimately master of my own destiny. #RST
  • As a tester, I have things I do well, & things I do badly. Build a team around me which balances out my bad & harnesses my good #RST


If you're based in New Zealand, and have never attended, I cannot recommend this course enough.  Fellow tester Kim Engel has been putting in a lot of hard work organising his return in August - and you can find out more about booking here.

Saturday, April 11, 2015

A funny thing happened on the way to South Island ...


This time last week, we had our ferry tickets and motel booked for a mini cycling tour of the South Island of New Zealand.  It was an idea that my son and I had been putting together for a while ...

The thing was, I was really quite nervous about the trip.  And on talking it over with my friend Elayne, I noticed it was a kind of nervousness that as a tester I was somewhat familiar with.  Because it's a similar kind of nerves to those I've had when any software project I've been a part of has gone live.  How can that be?

What we were doing wasn't super-ambitious.  We were cycling from Picton to Blenheim, a trip of 30km, we'd stay overnight, then return the next day.  We could have just booked the tickets, jumped on the ferry and hope for the best.  But to be blunt, though I love to be quite active, I'm overweight and getting old - and I don't like to set myself up for failure.

So since January we've been training.  At first just 20km rides to get used to our bikes, and to enjoy the last of the New Zealand summer.  We'd also complement this with Les Mills RPM classes during the week to help build up our "cycle fitness" (needless to say when a mid-40s father tries to compete with a mid-teens son, one builds up fitness faster than the other).

This was just a start - during this time, we found our bikes needed a number of changes and repairs.  Cameron decided that he'd like to use an old mountain bike of mine, and my chosen bike needed a new wheel (we popped about 3 spokes over a month).  Every weekend we'd be out on the bikes, performing a test on ourselves, our fitness and our bikes ...

Testing for geography

We used the internet to measure distances from Picton to Blenheim and get an idea of the terrain, and found similar routes we could try over here in Wellington which would contain the same challenges for distance and incline.

Testing under load

We attempted to work out how much equipment and especially water we'd need to carry (New Zealand is very remote in parts, so you can't just expect to pop into a supermarket and pick up what you need between destinations).  So during March we'd attempt to cycle with various levels of clothing etc on our backs.  When choosing a couple of changes of clothing here, a rain mac there, something in case it gets cold, a recharger for your phone - each piece seems quite light, until you put it all into a pack, and stagger under the weight.

Previously, in 2012, I'd gone with my son on a camp hike with the cadet force he was doing his Duke Of Edinburgh with.  It was a 10km hike, camp overnight in the bush, then a 10km hike back.  I only tried the pack on the night before, and was close to quitting within the first 100m (the whole 2 days was complete misery because of this).  In hindsight I should have tried just short walks with the pack, and seen how I went.  It was a major motivator for both of us to be used to cycling with a similar level of equipment on us.

Testing for endurance

Being able to cover the distance with the same level of equipment was nice, but we were very aware that we had to get good at doing 30km on rough roads with a bit of an incline (and wind).  Then waking up the next morning, and doing it all over again.  That meant in late March, training on both Saturday and Sunday - getting used to doing that ride when we were feeling a bit sore.

And more ...

There was a lot more as well, trying out which clothes we found best to cycle in.  Working out snacks/nutrition before/during/after.  Going over the route on Google Earth so we were familiar with it.



Given all of the above, Elayne asked me "you've trained, you've prepared - what are you worried about?".  And that was when I started to notice the similarity with testing.

All the training we'd done had allowed us to check our fitness (and RPM classes to help build those fitness levels).  We were more prepared and knew that we could do it - but it wasn't assured.  Fitness training doesn't assure your success come event/race day.  It just increases the likelihood of success.

The same goes with good testing, it allows you to remove as many obvious risks as you can.  But there can always be things you don't think about (like with walking with THAT pack with so much weight in it back in 2012).

Here are some near "gotchas" which nearly ruined our cycle event,

  • We'd trained during a heatwave ... but on day one, there was heavy rain and severe wind.  Ouch.
  • We'd planned to cycle during daylight, so bike lights and hi-vis vests were a bit of an afterthought.  But the weather was so bad that we needed them to be clearly seen (the sky Biblically turned black).
  • Cameron's bike suffered repeated punctures on day one.  It was obvious something was wrong with the front tyre (I'm usually good at running my finger around it and finding any splinters/thorns).  We ended up just pumping the tyre every 2 km for 8 km to get us to Blenheim - where we got both the inner tube and tyre replaced.  We'd originally planned to do this trip over Easter, and realised if anything had gone wrong then, no cycle shop would have been open to assist.

You can bet if we do another cycle event, we're going to try out some extra sessions to cover some of the above.  This is the feedback loop - in testing and cycling, we get better by "doing", finding out what went wrong/wasn't so great, and focusing on improving it.

Life is full of events like this which allow you to learn an important lesson - if you allow yourself to be mentally open to it, it will let you grow more, and feed back into your professional life.

For me, I think the takeaways are,

  • Nothing can be assured, but the more variety of testing you do on your fitness/software the more you'll be able to find weak points, and be able to take action to address them.
  • If you find you are easily achieving the goals of any session, then maybe it's time to be a bit more ambitious, and push harder.  This can mean "trying longer/steeper/more load" in cycling or "add more complexity" when testing.
  • You are always limited in testing fitness/software by your imagination - more importantly what you can imagine could go wrong.  Your own experience is the best teacher, but knowing other people's stories of "what happened to me" always helps guide and expand your imagination.
  • Sometimes it's just good to leave your cares behind, and hit the road with a good friend to have an adventure.  It is possible to overthink these things you know ...


That darn pack

 







Saturday, March 28, 2015

Mental Health 109 - The long grief. Five years on ...



Today was an emotional day.  It's Violet's 40th birthday - but those who've read for a while will know my good friend Violet died 5 years ago.  I've written a lot about my early grief when she died, but with a lot of people around me going through their own journey, I want to move the story forward into the present day a little.


Violet's death at 35 is probably the hardest bombshell I've ever had to deal with.  It was the death of someone I loved so very much, at an age which to be honest was unfair.

That first week, it was like there was this extreme emotion trapped within me - it felt too big to be kept inside, like I'd burst at the seams from it.  And yet I remember being so much more angry than sad.  So very angry.

Although no-one was to blame, it just felt she died too soon, and it wasn't fair or right or just.

Not being able to attend the funeral made things that much harder.  I held a brief ceremony myself, but it was difficult.  I didn't have many mutual friends, but I was really lucky to have a lady named Jenny Day who I could talk with about her so much.  And I did talk a lot.  But in a lot of ways I felt like I was going through this on my own.

But most of all the grief lingered.  Every night, just going to sleep was a struggle, because your mind always drifted to her.  It felt like all the joy had been sucked out from life.

It was the common bonds of our friendship which were harder to go on alone.  I deleted all my Regina Spektor from my MP3 players, and I stopped watching Doctor Who.  Because these were things I'd shared with her, and now they just brought me such inconsolable pain.

Over Christmas, my son and I listened to the autobiography of Donald Malarkey, one of the famous Easy Company paratroopers.  The thing I most identified with was his tale of grief over losing his best friend, Skip Muck, and how that grief followed him around, never really leaving.

Looking back 5 years on, I'm often surprised how the grief is still there.  It's still very powerful and emotional, and yet it's a gentler grief.  Like feelings of melancholy over anguish.  I'm not a great believer in the afterlife or ghosts, but often it feels like she's just next to me, only slightly out of sight and out of reach.

They say "as long as you remember them, they're not really dead" - I hate that cliche, although at the same time I see some truth in it.  If you live your life like Violet lived, you are someone who is nurturing of others, someone who is passionate and makes a difference.  Though Violet has long been gone now, those changes in me that her love and her friendship brought about remain.  That's probably why she always feels so very close, especially when I'm most alone, because I do carry the best bit of her within my heart.

In an odd way, the hardest part of dealing with it is moving on.  You love your departed friend, but you don't want to turn your life into such a devotion to her memory that you forget that you're still alive.  When you've lost such a close friend, you're afraid of making new friends, in part because you're worried you're just looking to replace them.  But also, because having lost one friend, you sometimes want to just withdraw into isolation so you never feel that pain again.

Life has been good though, and in my own way I've managed to move on - somehow I've picked up new friends, including best friends who fill some of the hole she left behind.  I still have grief - maybe I'll always have it, but somehow it's a less scary grief - one which seems capable of having great beauty concurrently with a gentle sadness.  And I think there will always be a place in my heart for the girl who will always be 35, and where she left her mark ...


She loved the watercolours of John William Waterhouse, and somehow in the haunting story of The Lady of Shalott, there is something which powerfully resonates with the tale of my Violet.

Friday, March 27, 2015

How I learned to stop worrying, and embrace public speaking ...

When I was at school, public speaking in any form terrified me - and some people find that hard to believe now.

As part of a project, I've recorded a discussion on speaking (well, doing is often more effective than just saying) - do enjoy, and do let me have feedback if you enjoyed it!



Tuesday, March 24, 2015

Writing a kick-ass defect!


I was asked by a graduate tester about just what should go into a good defect report.  It's a really good question, and to my surprise, something I've never written about, although whether we give defect reports in a written form or a verbal form, thinking about the information you provide when reporting a defect is a key part of the role of a tester.  After all, if we're not able to tell someone else, that bug isn't going to go away any time soon!

Obviously, different projects will have different defect templates.  However you're bound in your career to find places where there are none at all, so thinking about what you need to provide is a valuable skill.

I've written many defect reports, but I've also been a developer who's fixed them.  So I know what's helpful, and what's not.  Something I really encourage new testers to read is a humorous meme of supposed log book communications between pilots and ground crew.  The pilot is vague about the issues, so the ground crew are vague about their resolution back.  Don't let this happen to you!

As with most things - writing a great defect answers some very generic questions of What-How-Why-Where-When!

So let's start the ball rolling - you're using your company's new trial piece of software, and something just doesn't seem right about it.  Let's start defining it ...

What


In a nutshell, what is the problem?  This should be a really brief summary of what the problem is.

I like to think of it in terms of elevator pitches.  Imagine you've just got into the ground floor elevator with your project manager.  You get off at the 3rd floor, she gets off at the 4th.  And she asks you "hey, I heard you found a defect this morning?".

You have 3 floors, and about 20 seconds to summarise what you've encountered.  Summarising what you've found into a one-sentence tag is a real skill.  But believe me, defects, like episodes of Friends, are remembered as "the one with the ...".

If you have a super concise "what the problem is" summary, it probably belongs as your title.  That way, anyone reading through your defect system will go "ah".

Hint - when you go to talk to a developer about a defect you've logged, it's best not to quote just the number.  Very few people remember bug 666.  That "the one where ..." summary will be the best way you have to jog their memory!

How


Okay - you've summarised the problem.  But how did you cause it to happen?

The how is a way of "repeating your steps".  Ideally with a defect, you'll be able to repeat a series of steps, and repeat the unusual behaviour.

Repeatability of a defect is great.  But sometimes you just can't repeat the behaviour.  What then?  Well that's when you have to use your judgement - often depending on how much of an issue you think it was.  It's always worth talking through with a developer.

This is one of the reasons I like to use a screen recorder when I can, because it records exactly what I did and where the issues happened.  It is alright though to have defects that can't be repeated, but it's best to note on them "I'm not able to repeat this".

Oh - and they say a picture is worth a thousand words, so never underestimate the power of a screenshot!

Why

Why is this a problem?  For you to be raising this as an issue you must find there's something you don't like about what you've seen.


Sometimes it's very black and white, "the requirement says the page should display your account details, and instead it displays nothing".  Likewise, if you encountered a blue screen of death, you can be pretty sure that's not "as per design".

But sometimes you might raise a defect because there is something going on which doesn't feel right.  It bugs you (another word we use for defect).

Your description of why will lead you to another attribute of a defect which is important - its severity.  Typically in projects there are more defects and oddities than time to fix them.  So people tend to focus on the things which are causing the most pain.

Severity is how severe a problem something is.  If you're causing your machine to blue screen regularly, that's pretty severe, and going to impact what you can test.  If you have a spelling mistake, that's less of a problem, and certainly not going to impact your testing too much.  However it doesn't take much of a spelling mistake to make a swear word (as the test manager who emailed me with a misspelled request for "defect counts" found out).  And although it's true to say we testers live to report these kinds of defects, they can actually have a functional impact - if you have a system which generates an automated email which includes a swear word, it's likely some e-mail filters are going to put it in the junk pile!

Generally though - although there are grids and standards for defect severity, I've found you just tend to pick this up through experience.  In truth, if you get everything else right about your defect and leave the severity blank, most people can choose appropriately from the information you've provided.  But experience helps you to fine-tune this.

Where and When



When it comes to retracing your steps, knowing where it happened and when it happened helps.

Where - well just that.  You might have a couple of test environments, so it helps to know that.  If you're on a web application, the kind of browser (and version) usually helps.  And indeed the machine.  All this information goes double for mobile devices of course!

When can be handy too - sometimes it's known for someone to be doing a release to an environment, and forget to tell testing.  Shocking huh - but it does happen.  Or indeed it allows a developer to look through the logs around about that time to see if anything peculiar was going on in the logs.
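
To pull all those threads together, here's a made-up example of how a defect report might read (the system, device, environment and timings are all invented for illustration),

  • What - Registration page: "Confirm" button is missing when viewed on a phone in portrait.
  • How - Open the registration page on the device, fill in all mandatory fields, then rotate to portrait.  The "Confirm" button is no longer visible (repeatable; screenshot attached).
  • Why - A user cannot complete registration on this device/orientation, so I'd rate the severity as high.
  • Where - UAT test environment, iPhone 5 running iOS 8.2, built-in Safari browser.
  • When - 24 April, around 2:30pm (just after the morning's deployment).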

Tuesday, March 17, 2015

The Battle Of The Winshill Rec: And how it affects you as a tester ...


There are many shades of bullying - some so subtle, half the trap is that we don't even recognise or name it as such.

But at Abbott Beyne School, there was nothing subtle about the bullying I encountered as a young teen.  There was a gang who held Winshill in a grip of fear.  When they decided to pick on someone, they'd form a circle around them, and start to beat them up.  You were never attacked by the person in front of you - it was always a punch, a kick or a knee from behind.

Everyone was afraid of the Hawfield gang - but there was something they did to people that was far worse, and stripped them of even more.  They'd walk up to a group, but often just pick on one.   If you weren't the one being picked on, you'd just stand there and do nothing.  Too terrified to intervene even on the behalf of your friend or your brother.  Whether you were being kicked, or just watching, you felt robbed of your self-respect.  Powerless.

And then one day, someone decided they'd had enough.  It was a summer holiday, and everyone knew the gang spent most of the day at the Winshill Recreational Park "the Rec" holding court.  In those pre-cell phone days of the 1980s, the word spread like wildfire - if you'd ever been wronged by the Hawfield gang, be at the Rec at 3pm.

So there we were - me and my brother.  We'd both been caught the wrong side of this gang.  We had no idea if we'd be the only ones.  There were some nerves.  And then we arrived ... and we were far from alone.  In fact it turns out there were crowds at every entrance to the park.  It felt like the battle in Gladiator.

This gang had attacked me and my brother and countless others.  It was an age pre-bullying awareness.  And they'd made us feel weak, like no one cared, like we didn't matter.  Standing in that crowd is one of my brother's favourite memories of childhood.  Standing in that crowd we learned that contrary to what the Hawfield gang had tried to drum into us - we weren't alone, we weren't weak, we did matter.  It was a euphoric moment of clarity.

It was their turn to run - although they didn't get far.  I don't remember actually kicking anyone when they were down ... but I'm pretty sure I helped carry and dump them in the nearby stream.


It was a watershed moment - the gang was broken after that.  Some individual bullying did still go on, but the terror was gone.  And we'd done it ourselves.

There are of course some pretty terrible lessons you can take from this.  Perhaps that the answer to all society's ills is just to form a bigger gang and return to others what you've been dealt?  Ironically that's just how some gangs start out ...


This weekend we had our company's first Test Camp - it was an amazing experience as testers from several cities within our company managed to share our experiences, network and discuss.  We gave feedback at the end of it, and one comment really blew me away: "I no longer feel alone".

That comment took me back to the Battle Of The Winshill Rec.  We may be out of school, but there's a lot of that experience which we're still living out.


  • Peer pressure - hey all your fellow testers are doing scripting and metrics.  You don't want to be the odd one out do you?
  • Intimidation - let's face it, we're having schemes like ISTQB and ISO 29119 imposed on us.  We're called unprofessional if we oppose them.


If you have never read David Greenlees' experience, you need to.  This was where someone "representing ISTQB" wrote to his CEO over his public objections to the scheme, in an attempt to wreck his career.  Yes, that's the kind of bullying which can go on about having an opinion in testing.  Fortunately David seems to have a CEO who recognised this as the nonsense it was.

The tyranny within testing is that there are forces and interests which seem to impose schemes and actions.  Some are well meaning but misguided, and others exist simply because certain parties have "an agenda".

The solution, much like the Battle Of Winshill Rec is that as testers we need to mobilise.  That means YOU, the person reading this post becoming more active in the testing community.  Consider it a challenge.



Back in the September issue of Testing Circus, to celebrate four years of the magazine, I talked about how helpful it had been in my first steps in writing.  But beyond that I threw down the gauntlet for others to consider picking up their pen.  Consider writing something for a magazine.  Get active on Twitter.  Find allies online and in real life with whom you can have meaningful conversations about testing.  Sometimes you might not agree - but that's okay!  Try to find common ground where you can, but explore your differences.  That's how you learn!

Maybe after David Greenlees' experience, that would be enough to make you really fearful?  However I will tell you that most companies love to have testers who are passionate about what they do - so long as they are professional, and do not talk openly about customers or those they work with.  I myself always try to anonymise events and data.  If push comes to shove, you can always use a handle/fake name.  Indeed it was because I was originally unsure of my company's reaction to my blogging that I used the handle of TestSheepNZ over "Mike Talks, Tester For Hire".  But when my company found out - they loved it!

That's the way we face our own Battle Of Winshill Rec, and realise we as a community of professionals are not insignificant, have a voice and are a lot more empowered than we might be led to believe.

[But please - no flushing the head of your ISTQB tutor down the toilet ... however tempting]