Friday, March 29, 2013

Returning To Test Estimation ...

A couple of years ago I touched on the subject of test estimation in my article Those Darn Test Estimates.  Following a recent conversation with Jari Laakso I found myself wanting to return to the subject.  Why?  Because although at the time I was happy with that article, today I find it isn't good enough, because in the time that's passed, I've learned a few things ...

We are told that as testers the most important relationship we have is with developers, and whilst that's true for junior testers, as we take on more leadership of our team, things change.  As we get more senior, it's our relationship with the project manager that becomes key.  Oh, the rest of the team need to keep networking with those developers, but you become the key relationship point between the project manager and the test team.

For that relationship to work, it's important to know what matters to project managers.  Many people when they start in software testing think project managers are only interested in two things: "hours booked" and "percentage done".  Do not be fooled!


Over the last two years at Kiwibank I have been very lucky to work in a business unit of project managers, business analysts, market managers and business owners.  As Dian Fossey did with gorillas, I have managed (through the miracle of co-location) to observe business people in their natural habitat, and in the process learn more about them.

But even that hasn't been quite enough.  To understand more of what I heard and what I saw, I used Johanna Rothman's Manage It!, her guide for pragmatic project managers.  Not because I wanted to become a project manager, but because I wanted to understand them and their processes.

And so this, in a basic nutshell, is what I learned.  Project managers have two main focuses.  Although these link in with "hours booked" and "percentage done", they are not "hours booked" and "percentage done".  They are,

  • To manage the budget and timeline of the project.
  • To manage the risks linked with the project.


The risks are important - in fact, project managers keep a register of risks much like we keep registers of defects.  There they list potential pitfalls, how likely they are, and how significantly they could impact the project if they were to occur.  Multiplying each likelihood by its impact gives a score, and the highest-scoring risks are the ones the project managers focus on.
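To make that scoring concrete, here's a minimal sketch in Python - the risks, the ratings and the 1-to-5 scale are all invented for illustration, not taken from any real register:

```python
# A minimal sketch of a risk register, assuming a 1 (low) to 5 (high)
# rating scale - the risks and ratings below are invented purely
# for illustration.
risks = [
    {"risk": "Software delivered late", "likelihood": 4, "impact": 3},
    {"risk": "Key test environment unavailable", "likelihood": 2, "impact": 5},
    {"risk": "Requirements change mid-test", "likelihood": 3, "impact": 4},
]

# Multiply likelihood by impact to score each risk.
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# The highest-scoring risks float to the top - these are the ones
# the project manager focuses on.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:>2}  {r['risk']}")
```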

With that in mind, let's return to the subject of test estimates, and see how we can work more on the same page with project managers in the domain they understand.  Of late I've worked with a project manager named Annabelle, and we've thrashed around our recent test plans until they've worked as below.  It's been a positive relationship, and has really allowed me to see much better how elements of my plan are used.


Estimates - the good, the bad, the ugly

The starting point is to look at the project, the general scope, and just try to size it up.  There is probably some absolutely fabulous formula out there to do this.  But I think there's just no substitute for past experience.

What past project does this most feel like?  Don't be afraid to ask around to get other people's experiences as well.  Listen to their stories of how their past projects took more or less time than you experienced in any parallel projects of yours.  Where is the complexity?  What will slow us down?  Are the processes we're testing instantaneous, overnight, monthly?  How will that affect how we test?

By now you should feel split three ways between three possible scenarios: "well, if everything went perfectly", "I guess we might have a few bugs and tests to redo, we normally do" and "OMG, we're screwed".  From Those Darn Test Estimates, that's what we'd normally call Best Case, Probable Case and Worst Case.
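If you want to turn those three gut feelings into a single number, one common approach (my addition here, not something from the original article) is a classic three-point estimate, which weights the probable case most heavily.  A quick sketch with invented figures:

```python
# A sketch of a classic three-point (PERT-style) estimate - the effort
# figures below are invented for illustration.
best, probable, worst = 10, 15, 30  # days of testing effort

# The probable case carries most of the weight; the best and worst
# cases pull the estimate out towards the extremes.
expected = (best + 4 * probable + worst) / 6
print(f"Expected effort: {expected:.1f} days")  # -> 16.7 days
```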


But wait, we're not finished ...

Now here's where we can have a meaningful dialogue with our project manager.  I'd always put into any plan a list of my assumptions, but the genius suggestion from Annabelle was that I add a column which said "Impact if wrong".  If my assumption was wrong, what would it mean for my plan and my timescales?  What would have to be redone?

To me this was a game changer: suddenly I realised I was communicating with my project manager far more directly, rather than asking Annabelle to "read between the lines".  From a dry list of assumptions, a few stand out clearly, because anyone reading can see "well, if they're wrong, this is going to hurt".  This inevitably leads to a conversation around "well, how can we get more information to check this assumption or monitor it?".  People become proactive, instead of waiting for issues to happen.
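To show what that column buys you, here's a small sketch of an assumptions list carrying an "Impact if wrong" alongside each entry - both assumptions and their impacts are hypothetical examples, not from a real plan:

```python
# A sketch of an assumptions table with an "Impact if wrong" column -
# both assumptions below are hypothetical examples.
assumptions = [
    ("Test environment is refreshed before cycle 2",
     "Cycle 2 regression results unreliable; roughly 3 days of retesting"),
    ("Interface X is stubbed, not tested end-to-end",
     "End-to-end defects surface in production rather than in test"),
]

print(f"{'Assumption':<47} | Impact if wrong")
print("-" * 100)
for assumption, impact in assumptions:
    print(f"{assumption:<47} | {impact}")
```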

This of course leads to the issue of risks.  Actually the word is such an important one in software development, it needs a capital, exclamation marks and all kinds of funky formatting ... Risks!!!

It was only recently I had the revelation that when I'm giving a list of risks (with likelihoods and impacts of course), what I'm telling my project manager is this: 'these are the factors which will push testing onto the "Worst Case" time estimates over the "Probable Case" ones'.  If you think your project manager doesn't understand this, make it clear by writing that very statement near your table of risks.  It's something we too often leave implied, but it needs to come out on the table and be made clear to all involved.

Now, use those conversations you had with other people to help you fill in these risks.  On a first draft, you should really throw all the risks people have thought of out to a larger group (if your project is big enough, maybe workshop it); sometimes something that sounds innocent gets someone else thinking "wait a minute ...".  It's important to capture as many as possible, and then trim them down and collect them into themes.

As with assumptions, once collected, a few will really stand out with an "ouch factor", and no doubt make it into the project manager's risk register.  But they're not just there to be something you fall back on when things go bad ("look, I said it might be a risk, and it turned out it was").  They, particularly the "ouch" ones, are something to be proactive about.  How can you tell early if this is happening?  What can we do about it?

Let's look back at some of the risks I identified first time around, and see if we can be more proactive ...

Software Delivered Late / Software Delivered Is Of Poor Quality

What testing is development planning to do?  Can testing get early copies of software to get a feel for what's being built?  Can testing talk to development about the testing planned to help development in their unit testing?  Can testing get visibility of the testing developers plan to perform for unit testing?

Notice all of these are about dropping the silo mentality between testing and development, and working closer together rather than just passing a baton between development and testing phases.

The Delivery Chain

How can you get developers and testers working closer together, especially if you're not co-located?  How about a daily catchup, even by phone conference?  Throw in video if you can (it's the 21st Century after all, and we work in IT).

If the software development is done out of country and you only get weekly builds, can you get the development team to try out scenarios on your behalf?

If you are getting weekly builds and the software is so buggy it prevents basic testing, can development prioritise those key bugs and get a fix over ASAP?

Time Erosion

Keep a log of all the meetings needed.  Does everyone need to be at these meetings?  Are you getting value when everyone attends?  Some form of daily meeting can be invaluable, but most people will only need that one.  Leaders can attend other meetings on the team's behalf, feeding back anything key to the rest of the group, and likewise providing information up the chain from testers on the ground floor.

It goes without saying - don't try to do major testing sprints over major holiday seasons, and try to work on personal appraisals before/after scheduled busy times.

Requirements

James Bach in his Rapid Software Testing course calls requirements a form of Oracle, or "model of how software's supposed to work".  An Oracle sets up an expectation of behaviour, so when we do something with software and it goes against our Oracle we know "well that's not right".

Requirements are the most obvious "model of how software's supposed to work" that we usually encounter but there are other kinds of Oracle as well.  Can your team,

  • Talk to the business owners who put forward this project?
  • Talk to end users about what they'll be doing with the system?
  • Talk to a subject matter expert?
  • Try out similar products which are already out there on the market?


All these things help testers build up a sense of "what the product should do, and what behaviour matters".


Wednesday, March 13, 2013

Novopay - the tale of a compelling event in the New Zealand IT industry

Open almost any book on testing, and it will start with a "cautionary tale" about software testing, where something was released into production, and it went wrong with devastating results.  They are the ghoulish "tales around the campfire" of our IT industry.

One problem I have found within the New Zealand industry is that as testers we love to focus on anything that goes "bang" and crashes.  In 2011, I was part of a talk about software testing to a group of new developers as part of the Summer of Tech, where our introductory speaker spoke of an Ariane 5 rocket whose parts had been individually tested, but put together into a new rocket.  As you can imagine, this rocket barely cleared the launchpad before it exploded.  Another cautionary tale of "you didn't test enough".


What's interesting is what happened next.  One of the audience interrupted with "but we're creating applications ... not rockets.  Our stuff is hardly critical like that".

To me, this reinforced how vital it is to have stories and tales of failure which are relevant to the audience.  I thought the tale was a wonderful one, but looking around the room, I saw it failed, because the audience simply did not believe it applied to them.

New Zealand is a small and very pragmatic country.  A lot of processes here are still fairly manual compared to my home country of England, and overall that's not a bad thing.  Here in New Zealand, if something isn't broken, the attitude is "why try and fix it", so,

  • In the UK for trains we have automated ticket vending machines (usually vandalised), automated turnstiles to get onto platforms etc.  We used to get told our ticket prices would have to go up above inflation to cover this automation and its continual repair.  In New Zealand they just have an old fashioned conductor on the train who checks and clips tickets, and can sell you one for cash if you need one (without charging a fee).  Simple, but y'know, it works.
  • In the UK there were plans to build a so-called "chip and bin" system for collecting rubbish.  The UK rubbish collection vehicles would scan your bin, which would then be weighed as it was disposed of.  Back at the main office this data would be collected and an itemised bill created (your bill would have to be higher to cover all this new technology which would have to be developed and maintained).  Again, being super pragmatic, New Zealand councils just sell you council-marked rubbish bags which have a collection charge built into their price.  Only rubbish in council bags is disposed of.  The more you throw away, the more bags you need.  Elegantly simple, yes?


An unfortunate by-product of this pragmatic way of doing things is the attitude of "she'll be right", which means if something goes wrong, don't worry, we'll be able to fix it.  This means that on the whole New Zealanders can be a little more willing to take risks than their European or American counterparts.

In a testing consultancy I previously worked for, my test manager would talk about how New Zealand would have to face an inevitable "compelling event" for software testing.  Most other countries have had one, but so far, although there had been failures, there hadn't been one in New Zealand.

A "compelling event" is something that forces you to take action - a wake up call the the importance of the value of testing.  A compelling event isn't just software going bad when it hits production, it's software causing a very large and public pain that it unnerves the local IT industry as a motivation to not repeat that mistake, often with a gasp of "that so easily could have been us".  It's not a rocket blowing up half way around the world, it's software that fails but feels too close for comfort compared to what you're doing right now.

It's a compelling event because once it's happened it compels you to take a good hard look at your strategies around testing and quality and ask "are we (and not just testers here) doing enough?".

Sadly, we've finally had our big compelling event here - the Novopay saga.  Novopay was an online system for managing the payment of teacher and school staff salaries.  But sadly it went into production and went horribly wrong, with payments to these people missing from the system.  There have been schools where teachers are missing months of pay (and teaching isn't exactly the most affluent of careers).  More than just "missed payments", there have also been erroneous payments going out from schools - some teachers who've never worked for a school are receiving payments from that school - and schools are having to bring in extra staff to go through the Novopay payments with a fine tooth comb to work out which payments are good, which are errors and which are missing.

The media have been in a frenzy over it, hounding senior staff at the Ministry of Education (who are behind Novopay), saying they have found a report of "200 known bugs" in the production system when it went live.  What terrifies me as a senior tester is how bad that sounds when the media put it like that.

When I worked on an avionics computer, it would surprise people to know that although it flew under the strictest of safety conditions, there were still 1,500 known bugs in the software.  The important message though isn't the bug count, but the severity.  But this is a very difficult dialogue to have with a press pack hounding for a "big story" and a "potential coverup".

In an unusual move, the Ministry of Education has released the test plans for the Novopay project into the public domain.  I took a look through, and they show a fairly thorough planned approach was taken to testing, not the slap-dash "rush into production" that many in the press are claiming.  That was pretty unnerving - most of our "testing horror stories" like the Ariane rocket involve people simply not testing, thinking it would be "alright mate".

The documentation also shows a decent approach to several forms of functional testing, indeed going beyond what I would have initially thought to do,

http://www.minedu.govt.nz/theMinistry/NovopayProject/NovopayTestPlans.aspx

However, for all its success in testing, something has gone very wrong in production, there can be no doubt about that.  And it is causing real grief to people whose lives it should be easing.

What went wrong?  Each of us will have our opinions about this, and no doubt all of us will be right to some extent, whilst at the same time knowing in our hearts how easily something like this could happen to us.

Testing is about identifying and helping to remove risks, but it does not guarantee a bug-free product.  We can test in the many different ways we expect our software to be used, and we can check for all the problems we can imagine could happen.  But that will never cover everything; there will always be issues beyond our imagining.

But the imagination of any individual has its limits.  We have to be careful not to find ourselves screaming "inconceivable" at every bug found in production.  In my opinion, this is why testing and quality is not just a "test team" ticket.  We need to be engaged with developers, business analysts, market managers, usability experts and end users to work out a whole scope of things to test (from areas of functionality to ways of using) that is beyond the imagination we as testers can summon when looking at requirements.  But by then our scope of testing has probably increased by several orders of magnitude, and we don't have unlimited time.

That's when we need to do some risk-based analysis, and cover as much of it as possible, touching on all the "most likely" cases, then sampling other areas.  If you find problems, then odds are there'll be others, so keep looking in that area.

For myself then, if there is a compelling lesson to be had from Novopay, it is that we need to draw up our test plans as we always have done, and then ask of others, "what could I have missed?".

Friday, March 8, 2013

An offer you can't refuse ...

I have been a little quiet on my blog this year - however I've been far from idle in 2013.

I've put a lot of work into writing and expanding my book The Elf Who Learned How To Test, which became very addictive to write, and it now includes 4 stories, and "the story behind the stories".  It has been an absolute blast writing something a little different, and I've really enjoyed reading the tales to my teenage son Cameron.

Beyond that, I'm now looking forward to starting a new position in April at another company in Wellington, Datacom.  My role there will involve a lot more working with, leading and developing testers.  As you can tell from my blog, it's something I'm passionate about.

I have learned so much since coming to New Zealand.  My time at test consultancy Assurity taught me so much about championing testing and what testing does well.  At Kiwibank I have thrown in my lot with a fantastic group of people, where I've learned about testing's relationship with other departments and been lucky to be not a tester in a testing department, but a tester in a multi-discipline team, learning the aches, pains and victories of project managers, business analysts and market managers.

What's so exciting about this is it feels like, with all my effort in writing the book The Software Minefield, I have laid down a baseline of ideas and approaches I'm hoping to find opportunities to tap into.  To celebrate this, I'm offering the book free until 1st April 2013.  All you have to do is click to buy the book and provide the coupon code "sZSRStQO6r6C"; the book is then yours to enjoy free of charge.

Happy reading, and if you enjoy, please promote and even write a review!