We are told that as testers the most important relationship we have is with developers, and whilst that's true for junior testers, as we take on more leadership of our team, things change. As we become more senior, it's our relationship with the project manager that becomes key. Oh, the rest of the team need to keep networking with those developers, but you become the key relationship point between the project manager and the test team.
For that relationship to work, it's important to know what matters to project managers. Many people, when they start in software testing, think project managers are only interested in two things: "hours booked" and "percentage done". Do not be fooled!
Over the last two years at Kiwibank I have been very lucky to work in a business unit of project managers, business analysts, market managers and business owners. As Dian Fossey did with gorillas, I have managed (through the miracle of co-location) to observe business people in their natural habitat, and in the process learn more about them.
But even that hasn't been quite enough. To understand more of what I heard and what I saw, I used Johanna Rothman's Manage It!, her guide to pragmatic project management. Not because I wanted to become a project manager, but because I wanted to understand them and their processes.
And so this, in a nutshell, is what I learned. Project managers have two main focuses which, although they link in with "hours booked" and "percentage done", are not "hours booked" and "percentage done". They are,
- To manage the budget and timeline of the project.
- To manage the risks linked with the project.
The risks are important - in fact, project managers keep a register of risks much like we keep registers of defects. There they list potential pitfalls, how likely each is, and how significantly it could impact the project if it occurred. Multiplying likelihood by impact gives a ranked list of the top risks, which the project managers focus on.
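To make that concrete, here's a minimal sketch of how such a register gets ranked; the risk entries and the 1-5 scales below are hypothetical examples, not drawn from any real register.

```python
# A minimal sketch of a project risk register, scored the way project
# managers typically do: likelihood x impact. Entries and the 1-5
# scales are hypothetical examples.
risks = [
    {"risk": "Software delivered late",      "likelihood": 4, "impact": 5},
    {"risk": "Key tester unavailable",       "likelihood": 2, "impact": 3},
    {"risk": "Test environment instability", "likelihood": 3, "impact": 4},
]

# Rank by combined score so the top risks float to the top of the register.
for entry in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = entry["likelihood"] * entry["impact"]
    print(f"{entry['risk']:<30} likelihood={entry['likelihood']} "
          f"impact={entry['impact']} score={score}")
```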
With that in mind, let's return to the subject of test estimates, and see how we can get on the same page with project managers, in the domain they understand. Of late I've worked with a project manager named Annabelle, and we've thrashed out our recent test plans until they've worked as below. It's been a positive relationship, and has really allowed me to see much better how elements of my plan are used.
Estimates - the good, the bad, the ugly
The starting point is to look at the project, the general scope, and just try to size it up. There is probably some absolutely fabulous formula out there to do this, but I think there's just no substitute for past experience.
What past project does this most feel like? Don't be afraid to ask people around you for their experiences as well. Listen to their stories of how their past projects took more or less time than any parallel projects of yours. Where is the complexity? What will slow us down? Are the processes we're testing instantaneous, overnight, monthly? How will that affect how we test?
By now you should feel split between three possible scenarios: "well, if everything went perfectly", "I guess we might have a few bugs and tests to redo, we normally do" and "OMG, we're screwed". From Those Darn Test Estimates, these are what we'd normally call Best Case, Probable Case, Worst Case.
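If you want to put rough numbers against those three cases, here's a small sketch. The day counts are hypothetical, and the PERT weighted average is a common way of blending three-point estimates into a single figure; it isn't something from Those Darn Test Estimates.

```python
# A sketch of the three estimate scenarios as numbers (the day counts
# are hypothetical). The PERT weighted average is one common way of
# blending three-point estimates into a single figure.
best, probable, worst = 10, 15, 25  # "everything perfect" / "a few bugs" / "OMG"

pert = (best + 4 * probable + worst) / 6
print(f"Estimate range: {best}-{worst} days, probable {probable}, "
      f"PERT-weighted {pert:.1f}")
```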
But wait, we're not finished ...
Now here's where we can have a meaningful dialogue with our project manager. I'd always put into any plan a list of my assumptions, but the genius suggestion from Annabelle was that I add a column which said "Impact if wrong". If my assumption was wrong, what would it mean for my plan and my timescales? What would have to be redone?
To me this was a game changer, and suddenly I realised I was communicating with my project manager far more directly, rather than asking Annabelle to "read between the lines". Suddenly, from a dry list of assumptions, a few of them would stand out clearly, because anyone reading can see "well, if they're wrong, this is going to hurt". This inevitably leads to a conversation around "well, how can we get more information to check this assumption or monitor it?". People start being proactive, rather than waiting for issues to happen.
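As an illustration, a stripped-down version of such an assumptions list might look like the sketch below; the assumptions themselves are made-up examples, not lifted from our actual plans.

```python
# A minimal sketch of an assumptions list with the "Impact if wrong"
# column Annabelle suggested. The assumptions are made-up examples.
assumptions = [
    ("Test environment available from day one",
     "Test start slips day-for-day; we drift towards worst case"),
    ("Requirements stable once signed off",
     "Affected test cases need rework and re-execution"),
    ("Weekly builds are testable on arrival",
     "A failed smoke test blocks a full week of planned testing"),
]

print(f"{'Assumption':<45} | Impact if wrong")
print("-" * 100)
for assumption, impact in assumptions:
    print(f"{assumption:<45} | {impact}")
```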
This of course leads to the issue of risks. Actually, the word is such an important one in software development that it needs a capital, exclamation marks and all kinds of funky formatting ... Risks!!!
It was only recently I had the revelation that when I'm giving a list of risks (with likelihoods and impacts, of course), what I'm telling my project manager is this: 'these are the factors which will push testing onto the "Worst Case" time estimates over the "Probable Case" ones'. If you think your project manager doesn't understand this, make it clear by writing that very statement near your table of risks. It's something we sometimes leave dangerously implicit, but it needs to come out on the table and be made clear to all involved.
Now, use those conversations you had with other people to help you fill in these risks. On a first draft, you should really throw all the risks people have thought of out to a larger group (if your project is big enough, maybe workshop it); sometimes something that sounds innocent gets someone else thinking "wait a minute ...". It's important to capture as many as possible, and then consolidate them into themes and trim the list.
Like with assumptions, once collected, a few will really stand out with an "ouch factor", and no doubt make it into the project manager's risk register. But they're not just there to be something you fall back on when things go bad: "look, I said it might be a risk, and it turned out it was". They, particularly the "ouch" ones, are something to be proactive about. How can you tell early if this is happening? What can we do about it?
Let's look back at some of the risks I identified first time around, and see if we can be more proactive ...
Software Delivered Late / Software Delivered Is Of Poor Quality
What testing is development planning to do? Can testing get early copies of software to get a feel for what's being built? Can testing talk to development about the tests we plan to run, to help inform their unit testing? Can testing get visibility of the unit testing developers plan to perform?
Notice all of these are about dropping the silo mentality between testing and development, and working closer together rather than a passing of the baton between development and testing phases.
The Delivery Chain
How can you get developers and testers working closer together, especially if you're not co-located? How about a daily catchup, even if only by phone conference? Throw in video if you can (it's the 21st Century after all, and we work in IT).
If the software development is done out of country and you only get weekly builds, can you get the developers to try out scenarios on your behalf?
If you are getting weekly builds and the software prevents basic testing because it's so buggy, can development prioritise those key bugs and get a fix over ASAP?
Time Erosion
Keep a log of all the meetings needed. Does everyone need to be at these meetings? Are you getting value when everyone attends? Some form of daily meeting can be invaluable, but most people will only need that one. Leaders can attend other meetings on the team's behalf and feed back anything key to the rest of the group, likewise providing information up the chain from testers on the ground floor.
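As a rough illustration of how quickly meetings eat a week, here's a sketch of such a log; the meetings and the numbers are hypothetical.

```python
# A rough sketch of quantifying time erosion: log each recurring
# meeting and total what it costs the team per week. The meetings
# and numbers here are hypothetical.
meetings = [
    # (name, sessions per week, hours per session, attendees)
    ("Daily stand-up",        5, 0.25, 6),
    ("Project status update", 1, 1.0,  6),
    ("Defect triage",         3, 0.5,  2),  # leads only, feeding back to the team
]

total = sum(per_week * hours * people for _, per_week, hours, people in meetings)
print(f"Meetings cost the team {total:.1f} person-hours per week")
```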
It goes without saying: don't try to do major testing sprints over major holiday seasons, and try to schedule personal appraisals before or after busy times.
Requirements
James Bach in his Rapid Software Testing course calls requirements a form of Oracle, or "model of how software's supposed to work". An Oracle sets up an expectation of behaviour, so when we do something with software and it goes against our Oracle, we know "well, that's not right".
Requirements are the most obvious "model of how software's supposed to work" that we usually encounter, but there are other kinds of Oracle as well. Can your team,
- Talk to the business owners who put forward this project?
- Talk to end users about what they'll be doing with the system?
- Talk to a subject matter expert?
- Try out similar products which are already out there on the market?
All these things help testers build up a sense of "what the product should do, and what behaviour matters".