How long is a piece of string? I'm tempted to be a wise-ass and say that to project managers when they ask me
“how long will it take to test my project?”.
That's actually unfair: experience gives us, as testers, an idea of how long it'll take to test,
based on similar projects. But one of the problems is that testing has a complex relationship with other activities in a project –
we can keep testing and testing, but if development don't start fixing some bugs, we're going to be here forever!
So yes, we can look through the designed features and estimate how long it will take to script, and how long to execute those scripts.
But how long until the product is finished testing?
How long is a piece of string?
Having worked in a test consultancy, I have no doubt about the importance of estimates. When a project manager looks at them, they need to be attractive but also realistic, with some contingency built in.
What we introduced was a list of estimates for testing tasks for “best case”, “probable case” and “worst case”.
                  BEST   PROB   WORST
Test Plan           1      2      4
Test Conditions     2      3      5
Test Scripting      4      7     10
Pre-Testing         2      4      6
UAT Execution       4      5      8
Retesting           2      5     10
This gives the test manager some leeway; usually, if things go okay, the project should follow the Probable estimates.
If they book the Best case estimates, be very worried.
What I'm finding is that my project managers are taking my estimates, adding the Probable case figures together and multiplying the total by an hourly rate to get a budget. I don't know why I'm so surprised ... it makes sense, but I'm used to working against time and not $$$.
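To make that arithmetic concrete, here's a minimal sketch of the calculation. The units and the rate are my assumptions for illustration only: I'm treating the Probable figures as person-days and plucking a round hourly rate out of the air.

```python
# A minimal sketch of the budget maths, assuming the Probable column is in
# person-days and using a purely illustrative rate of $100/hour over an
# 8-hour day (the units and the rate are assumptions, not from my estimates).
probable_days = {
    "Test Plan": 2,
    "Test Conditions": 3,
    "Test Scripting": 7,
    "Pre-Testing": 4,
    "UAT Execution": 5,
    "Retesting": 5,
}

HOURS_PER_DAY = 8
HOURLY_RATE = 100  # hypothetical

total_days = sum(probable_days.values())           # 26 days
budget = total_days * HOURS_PER_DAY * HOURLY_RATE  # $20,800

print(f"Probable effort: {total_days} days, budget ${budget:,}")
```

The exact numbers don't matter; the point is that once the Probable column turns into a dollar figure, every factor below eats straight into it.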
Unfortunately, at the end of the last two projects we've been considerably over that budget. There seem to be several factors at play which determine which of those estimate paths our testing will follow, and it's important to understand and recognise them.
Software Delivered Late
You book in a test contractor to help you test for 6 weeks. They arrive in week 24 to start analysis and scripting, with test execution planned for week 26 onwards, running for 4 weeks.
Then your chief developer tells you there's going to be a 2 week delay getting the build together; it won't be available until week 30 now.
You've made a commitment to your test contractor, so you're obliged to pay them, and possibly to find them other work. If you can't get them to assist elsewhere, then by week 30 you're 4 weeks into what should have been execution, and not testing yet.
You've blown over half your budget, and Lord help you if there are any more delays!
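Here's a rough sketch of that burn, assuming the contractor is paid for every booked week whether or not there's software to test; the weekly rate is purely illustrative.

```python
# Budget burn for the scenario above. The weekly rate is hypothetical;
# the week counts come from the example (6 booked, 2 scripting, 4 idle).
BOOKED_WEEKS = 6
SCRIPTING_WEEKS = 2    # weeks 24-25: analysis and scripting
IDLE_WEEKS = 4         # weeks 26-29: waiting for the delayed build
WEEKLY_RATE = 4000     # hypothetical

budget = BOOKED_WEEKS * WEEKLY_RATE
idle_spend = IDLE_WEEKS * WEEKLY_RATE

print(f"${idle_spend:,} of the ${budget:,} budget "
      f"({IDLE_WEEKS / BOOKED_WEEKS:.0%}) paid out with no testing to show for it")
```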
We all know developers can often deliver late. Late software is going to burn up budget. You need to work with your Project Manager to make them aware of their duty to get software to you as scheduled in order for your budget to be met.
Software Delivered Is Of Poor Quality
Kind of the flip side of late delivered software. Your vendor has promised that the software delivered has been unit and system tested, and no bugs were found.
You wrote your test plan for acceptance testing, expecting the software to have been extensively tested beforehand, to a certain level of quality. You and your project manager are expecting what's delivered to be a candidate for release. You turn it on and immediately notice a dozen problems; you can't even complete basic use cases.
The developers, under duress, delivered whatever they had available on the scheduled date instead of flagging a delay. Little if any testing has happened, and basic bugs are only being discovered now. Testing 101 says
"more bugs = more fixes = more builds = more retesting".
One thing I try to do with vendors is ask for a release note and an end-of-testing report, detailing what defects were found and which were fixed. This is a bit of a game of bluff. If I receive an end report which says
“everything was tested, and no defects were raised”, I get suspicious.
Very suspicious.
I've also had vendors on conference calls inform me
“we're running a build up now, you'll have the install delivered in an hour”. When this happens I pull my project manager to one side and warn them that it may well mean no testing whatsoever has been done …
The Delivery Chain
If you have developers on-site who you can hand defects to, and who fix, build and test on the spot, it's possible to get a build almost every day.
If they're off-site, only receive defects daily and have to courier builds over, you'll be hard pressed to get a build weekly.
If you have two weeks to test and a daily build, you'll have 10 opportunities to get it right.
If you have weekly builds, it's not likely to happen. Your second build would have to be perfect – and it usually takes about 3-4 builds even with an initially high-quality piece of software (there are always tweaks needed).
Time Erosion
It's so easy for this to happen. You have:
- a daily half hour team meeting
- a one hour weekly project progress meeting
- a one hour weekly project technical meeting
- a daily 15 minute end-of-day defect wrap up meeting
- each day you spend half an hour writing a progress report for the concerned business owner
Oh, you're giggling there, but we've all been there.
Did you add it all up? Yes, you're losing about a day a week. Look at your estimates: did you plan on there being so much leakage?
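If you didn't add it up, here's the sum, assuming a 5-day week and an 8-hour working day (both of those assumptions are mine, not part of the list above):

```python
# Weekly hours lost to the overheads listed above, assuming a 5-day week
# and an 8-hour working day (those two assumptions are mine).
weekly_hours = (
    0.5 * 5     # daily half-hour team meeting
    + 1.0       # weekly project progress meeting
    + 1.0       # weekly project technical meeting
    + 0.25 * 5  # daily 15-minute defect wrap-up
    + 0.5 * 5   # daily half-hour progress report
)

print(f"{weekly_hours} hours a week")                     # 8.25 hours
print(f"about {weekly_hours / 8:.1f} working days lost")  # roughly a day
```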
I'm finding we're increasingly working on projects where there are a large number of meetings to keep track of progress. This needn't be a bad thing, and small daily meetings can help set the direction and key priorities of the day/week. But it's easy for reporting to actually delay any progress being made, and to become a sizeable, unquantified overhead in itself.
And some projects need test management – how do you budget for that? It's not a solid
“task”; again, it's more of an ongoing overhead.
Requirements?
Requirements? We didn't have time to write down everything we asked for!
Due to constraints, the project has been only broadly defined, but you're required to perform specific testing against it, within a limited timeframe.
Oh, and the business analysts are too busy to answer your questions, so just get on with it and, you know, test!
This is a nightmare position to be in. You press a button, a message is displayed. But you have no idea if it's the right message or not. There are some things you can do – you can check the application didn't die when you pressed the button, and the message made sense in the context of the button.
But if you have only vague requirements, you can only test vaguely. Such projects really feel like they're setting the test project up to fail. And to take the blame.
Another variant of this: you raise about 10 defects against the requirements, there's a review, and a business analyst says
“oh yes, these aren't defects, I asked our vendor for these changes by phone”. If things aren't documented, how can anyone keep track of these changes?
Take it easy. Take it nice and slow. That's no way to go. Does your PM know?
Thankfully there's usually a place for these factors in a test plan, under risks and assumptions. But I can't emphasise enough the importance of talking them over, time and again, with your project managers before you embark on any test estimates, so they can understand and more effectively evaluate the risks and the impact on the budget.