Saturday, December 6, 2014

The View from the Test Manager's Gallery


This article was written back in 2011, and first published in Testing Circus.  It's an article I've always had a lot of affection for, because it was my attempt to rationalise my first steps into a test leadership role.  Much of it proved spot-on with where I'd continue to develop, though there are a few things I'd really like to change - for instance, today instead of saying "metrics matter", I'd say "reporting matters".  But at the time I was seeing for the first time how project managers were using our reporting and circulating it upwards - the pain those reports could bring was becoming more visible to me than in my previous roles.

An interesting piece of trivia - this article was originally written for the Ministry Of Testing's Testing Planet magazine, but was rejected.  Which felt a little awkward, as I was assisting as an articles editor at the time (hey, you can say for sure it wasn't a clique).  I say this not out of any ill-will towards Ministry Of Testing, but to make clear that we all suffer the odd knock-back at times.  In fact, on the next piece I wrote for Ministry Of Testing I received fabulous editing support from Simon Knight, and the resulting article, "The Software Minefield", would become the name of my first collection of articles.




The View from the Test Manager's Gallery


This year has been one of change for me – the much sought-after promotion to test manager finally happened, and with it a new set of challenges.

Several months in, I realise that although a good test manager does need to be a good tester, the skill-set and emphasis are different in many ways from those of a senior tester or even a test team leader.

Indeed, I’ve learned that the view from the test manager’s gallery is very different from that on the factory floor of testing …

You need good people


Several of the areas that have needed testing this year have been new projects, with no staff allocated for testing.  Thankfully there are several very capable test consultancies who'll send me contract testers.

Ticking off the details, two of my new projects were customer-facing - the kind your average man on the street should be able to walk in and use, with no documentation required. Systems integration testing was being done elsewhere, so we would concentrate more on usability testing, checking that the interface functioned "as designed".

So I reasoned that, with no specialist knowledge required, I could use almost anyone to perform this testing - the more junior and cheaper, probably the better for my project manager (who'd be picking up the tab).  The tester I ended up with was not junior, but one with more testing experience than me.

Not what I asked for, but in hindsight something I'm very thankful for. You see, I'd planned to be a lot more hands-on during the test phase, overseeing what was happening day-to-day. The reality (and the best laid plans of any manager often go this way) was that by the time testing started (there were some delays getting a working build to us) I had another project to work on, which, thanks to moving timescales (test managers should be used to this), meant we were doing our best to test two projects concurrently.

So what did Brian, my experienced tester, bring to my project? Being experienced, he had worked on enough projects to need minimal supervision – I set out for him on the day he arrived what I needed him to do, what our testing objectives and requirements were, who the key people were, and a couple of test script samples for him to copy the style from and get him started.

He went away and created the rest of our required test scripts, and got testing. Overall he used his own initiative, would inform me of his progress and any key issues/decisions, and generally “got on with it”.

With a more junior tester (as I'd requested) I'd have expected to be much more hands-on, overseeing their work, to the detriment of my other project. Instead I passed clear objectives and a framework for success to my tester, and he used his initiative to achieve that result.

Taking a step back



As a senior tester on past projects I usually felt obliged to take on “the hard stuff” of a project, the difficult technical testing, to leave the easier stuff for the junior testers.

Likewise as a team leader last year, I felt an obligation to do the hard items myself due to my increased exposure to the technical meetings, and let the rest of the team pick up on the rest.

I realised though that there's a big issue there when you step out of the role of senior tester and towards team leader or test manager. The issue is one of control-freakery. You're taking on the tough stuff to go easy on the team, but in doing so you're also showing a distinct lack of trust and respect in the abilities of your team. You're saying some parts are beyond their skills, and setting them up to fail before even giving them a chance. How can the people in your team really grow if you don't give them some challenge?

I realised this when I reviewed Brian's completed test scripts. I liked them, they achieved the desired results, but … a small piece of me felt a little unhappy with them. The reason? Because his scripts were not quite how I'd have done them myself.

After a quick ego check, I realised this was something I'd have to let go. The scripts were good, and just because they weren't quite how I'd write them, it was no reason to have parts of them redone. That's petty and doesn’t matter, as long as the end result is a good testable script.

It's like when you make a cup of tea for someone, they sip it and mmm they say it's a nice cup of tea. They then ask “did you put the milk in before the water?” and you say no. They then get quite upset saying, “but you should always put the milk in first”. Does the method really matter if the end result is still a nice cup of tea? With some people and some managers it really does – they often are the worst kind of micromanager, the kind we all hated working for ourselves, and ironically the last person we wanted to end up as!

Metrics Matter


I used to get annoyed with my test manager a few years ago when he bugged me about metrics.  Running around collecting metrics figures felt like it just created delays, and stopped me from actually testing.

But the fact is that metrics matter.

Let's face it, testing is always the last phase before a piece of work “goes live”. Invariably a piece of work enters testing late because the development took longer than expected.

So as a test manager you often have to deal with an expectant project manager or business owner who wants to know if their target go-live date is still achievable.

Metrics are an important part of this. But they are just a part. Alongside any metric or graph, there also has to be an analysis of what it means. Otherwise management have a nasty habit of seeing trends in your numbers and graphs which just aren't there.  This mastery of metrics is an important part of the test manager's role, and they can be either a help or a hindrance.

A discussion on metrics and communicating with management could be an article in itself! But there are a few general topics to be covered:

  • the progress made through test scripts
  • defects unresolved, especially “the big ones” which are high severity and we should not go-live without
  • an action plan on the “big defects” if you have one from your developers
  • an explanation of any impediments that have affected the day, such as the system being down all morning due to connectivity issues.
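As a rough illustration, the kind of daily summary described above can be assembled from a handful of counts. This is a minimal sketch with made-up figures; the field names and structure are my own invention, not any standard reporting format:

```python
# Minimal sketch of a daily test status summary (hypothetical data).

def daily_summary(scripts_total, scripts_passed, open_defects, impediments):
    """Build a plain-text daily report covering script progress, open
    defects (highlighting the high-severity 'big ones' and any action
    plan), and the day's impediments."""
    high = [d for d in open_defects if d["severity"] == "high"]
    lines = [
        f"Script progress: {scripts_passed}/{scripts_total} passed "
        f"({100 * scripts_passed // scripts_total}%)",
        f"Open defects: {len(open_defects)} ({len(high)} high severity)",
    ]
    for d in high:
        plan = d.get("action_plan", "no action plan yet")
        lines.append(f"  - {d['id']}: {d['title']} ({plan})")
    lines.extend(f"Impediment: {i}" for i in impediments)
    return "\n".join(lines)

report = daily_summary(
    scripts_total=40,
    scripts_passed=25,
    open_defects=[
        {"id": "DEF-7", "severity": "high", "title": "Login fails on Safari",
         "action_plan": "fix due Thursday"},
        {"id": "DEF-9", "severity": "low", "title": "Typo on help page"},
    ],
    impediments=["environment down all morning (connectivity issues)"],
)
print(report)
```

Whether this lives in a spreadsheet, a script or just a templated email matters far less than that the same items appear consistently every day, with the analysis sentence attached.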


Management are a busy lot, and they get a lot of emails. I find it's useful to send them these daily reports, but also to catch them a couple of times a week and verbally give them the two-minute summary of how it's going and what the "big issues" are.

It's useful too because during an extended test period you're sending these emails every day, and no-one ever really seems to respond to them. So it's nice to have feedback that people are aware of the issues and risks you're flagging daily.

So the big ticket items I’ve learned to date?

In summary, it's important to have staff who can, as much as possible, "get on with it" and use their initiative. But you won't keep that kind of tester if you never allow them the authority and space to make those kinds of decisions.

But as a test manager you also need to realise your role has changed: you're taking a step back from the testing factory floor and trying to support your test team – hustling to get test environments, design specs and software releases – to allow them to do their job. And to tell your test team's story, you need the dreaded metrics to communicate to management and business owners how soon they'll have their shiny, tested product in the market.
