When I was a developer, "doing testing" was something I was very good at, but it was something I was assigned at the end of a project to "get it over the line". Then in 2004, someone wanted to put my programming skills to good use, and I was brought in to help out on a project which was using test automation. It was a huge learning experience, and it caused my job title to change from software engineer to automation tester. The word "test" has been in my job title ever since.
Currently I'm picking up a kind of "ghost in the wires" murmuring about automated testing replacing manual testers. I was even asked about it directly by a student at Summer Of Tech who'd been advised "don't go into software testing, it'll soon be extinct".
Back in 2004, though, I joined a department where all testing was done through automation. Since then I've always attempted to learn more about testing, and importantly I've tried to blend automation and manual techniques.
For many today, 100% automation is the Holy Grail. So why don't I believe in it anymore? What did I see?
* Note - I'm going to use the terminology "automated test" in this experience report, because it's true to my understanding during the period I'm writing about. Given the more recent work of James Bach and Michael Bolton, today I'd really call these "automation checks". However, I want this experience report to be true to my understanding of the time.
Project Case - an experience report
Project Case, as I've said, was the first project I worked on with a dedicated team of full-time testers. It was a bespoke product about 10 years old, with rich functionality. The test manager, Stephen, was a first in many ways for me - even though I was an internal transfer, he put me through a barrage of questions on how I'd go about testing a product before accepting me on the team. He took testing very seriously, and I learned a lot from him. I didn't always agree with him - but at the time my understanding of testing, and more importantly my ability to talk about it, meant I couldn't counter his arguments to the extent I can now.
We were system testing Project Case - and our tests were 100% automated. Typically about 2 months before a new release we'd start scripting up new automation. We'd even create scripts to reproduce defects that the customer had found.
At the start of system testing, we'd spend about 2 weeks running our tests on the new functionality. After that it'd be 6-8 weeks of regression. Stephen was terrified of regression - which meant we ran every script we'd ever run, plus every script for every defect we'd ever found.
That, in a nutshell, was our project. Now, in more detail, I'm going to talk about where we found problems (you've probably already guessed some of them).
Automation wasn't really very efficient
So we had 100% automation - BOOM! I guess that meant we ran a tight ship? Actually the team was 12 people, and compared with other teams I've since managed, I'd expect a similar-sized team to get through the same level of testing manually. Maybe (shock horror) even a somewhat smaller team.
We weren't running faster and smarter, for reasons that will become obvious.
The team was hired foremost for their testing ability - not coding skills
This was one of the best teams of testers I've ever worked with. And they were hired for their ability to test.
They were then made to write Visual Basic scripts. And this is where it fell apart a bit - because many weren't very good coders.
Although I never met him on this project, a lot of the automation had been written by John Bunting. John Bunting, for Terry Pratchett fans, was a kind of Farnborough version of Bloody Stupid Johnson - a guy whose code was so bafflingly strange and bad that I know of friends who were still trying to unpick what he'd written 5 years on from his retirement.
And our automated tests were riddled with weird code - such as a function called "back to menu", which would repeatedly hit the escape key, take a screenshot, and check for the main menu. Only sometimes it would miss the menu, and just keep hitting escape and taking screenshots. If such a script was left running overnight, it would overload the database, and we'd need to call in a DBA to fix things before we could run anything the next day.
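To give a feel for the problem, here's a sketch of that loop and the kind of band aid it needed - in Python for readability (the real scripts were Visual Basic), with hypothetical helper names standing in for our tool's UI calls:

```python
# Sketch of the "back to menu" function - the helper names are hypothetical.
MAX_ATTEMPTS = 10  # the original loop had no bound at all


def back_to_menu(ui):
    """Press escape until the main menu appears - but give up after
    MAX_ATTEMPTS rather than hammering the system (and filling the
    database with screenshots) all night."""
    for _ in range(MAX_ATTEMPTS):
        ui.press_escape()
        screenshot = ui.take_screenshot()
        if ui.main_menu_visible(screenshot):
            return
    # The original simply looped forever at this point; failing fast
    # means the overnight run reports an error we can act on instead.
    raise RuntimeError(f"Main menu not reached after {MAX_ATTEMPTS} escapes")
```

Even a simple bound like this would have saved us those morning calls to the DBA.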
It's fair to say I spent a lot of time trying to address the technical debt within our automation, but ultimately the best I managed was band-aiding the code. I also tried to make the team familiar with Visual Basic coding standards, but in many ways the damage was done. Thanks, John.
We needed a good level of programming skill in the team. In truth, that's why they got me.
We didn't really understand the system we were testing
Our main priority was running the legacy automation, and fixing problems as they occurred. We spent more time fixing our scripts than actually using the product under test.
Although we had some testing heavyweights here, our attention was always on the automation and fixing it - a constant problem, thanks to the code of John Bunting Esquire.
Consequently, we never really understood the product under test to the level that other test teams, testing manually, would have.
We needed to be able to understand the system being tested, not the automation.
We didn't really understand the automation
These automated tests had been run for years, because they'd always been run. They were large and meandering, testing many things, and even when they worked they'd take several hours to run. When they failed, the quirks of our tool meant we had to start them again from scratch, whereas a manual tester could say "well, I can adapt and finish from here".
We needed the automation to be simpler, quicker to run, and obvious about the feature it was trying to test.
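As a sketch of what that might look like, here's a pair of small, single-purpose checks - again in Python rather than the Visual Basic we used, with a hypothetical case_api module standing in for whatever drives the product:

```python
# Two small, named checks instead of one meandering multi-hour script.
# The case_api module is hypothetical - a stand-in for the product driver.
import case_api


def test_new_case_gets_a_reference_number():
    # One feature, one assertion: a failure points straight at case
    # creation, not at twenty unrelated steps that ran before it.
    case = case_api.create_case(customer="Smith")
    assert case.reference is not None


def test_new_case_can_be_found_by_reference():
    case = case_api.create_case(customer="Jones")
    found = case_api.find_case(case.reference)
    assert found.customer == "Jones"
```

When a check like this fails, you can rerun just that check in seconds, rather than restarting an hours-long script from scratch.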
The regression was too heavyweight
With a mature product, "testing the core functionality" is what people usually think of when they think of regression testing. We, though, were trying to test "everything we've ever tested", which meant each release got harder and longer, and we needed more people. Testing became a huge burden for the project.
We needed to focus down on what mattered, and dump everything else. If we didn't think an automated script failure would end with a critical or high-level defect being raised, we probably should have dumped it.
Our automation was a monumental failure
I hate reducing things to metrics. But for every one defect we were finding in the system, we were finding and fixing between ten and twenty issues in the automation code.
Think about what that means - we were testing our automation code more than we were testing our product. That in itself is a massive failure.
We needed robust code which was easy to fix, and failed relatively infrequently.
Dr Ian Malcolm says ...
Yesterday, when talking about automation, and this project in particular, I found myself quoting Jeff Goldblum's Dr Ian Malcolm from Jurassic Park: "your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should".
Don't get me wrong, automation can be incredibly helpful when used right, and I hope to focus on that. It can really help you, but it does so by aiding your manual testing efforts, and stopping you having to run manual tests which are repetitive and boring - for more information, please reread the sections above.
Here are some good questions to ask of any automation check you want to build ...
- Do you want to run this automation check time and again? If not, automation is the wrong strategy.
- Do you have too many automation checks / does your automation take too long to run?
- If this test failed, what level of defect would you raise? If the answer is low, does it make sense to check it frequently? (There's a sketch of one way to act on this after the list.)
- Is it clear what this script is supposed to check? Try to make each automation script check a single thing - it keeps things simple.
- If you have a wide variety of tests which you only expect to run once, manual testing is still the more efficient way to run them.
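As promised above, here's one way to act on the "what level of defect" question - a sketch using pytest markers. It's purely illustrative (our tooling was Visual Basic, and the marker name and test names are my own invention):

```python
# Tag checks by the severity of defect a failure would actually raise,
# so low-value checks can be dropped from the regular regression run.
# (Register the marker in pytest.ini under "markers" to avoid warnings.)
import pytest


@pytest.mark.low_severity
def test_menu_uses_corporate_colour_scheme():
    # A failure here would never be more than a cosmetic defect -
    # a prime candidate for dumping from the nightly run.
    ...


def test_case_totals_balance_against_ledger():
    # A failure here would be a critical defect - this one stays.
    ...
```

Running pytest -m "not low_severity" then leaves the cosmetic checks out of the regular run - and if a tagged check never earns its keep, delete it.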
Sadly, I've learned this, and it cost the company I worked for at the time to learn it too. People like James Bach and Michael Bolton have learned this. Local testers like Katrina Clokie, Aaron Hodder, Oliver Erlewein and Kim Engel have learned this. But it seems, through gossip and mis-selling, there are a few organisations who need to learn it for themselves all over again.
And finally - revisiting that Dr Malcolm speech ....
Revisit that famous Jurassic Park speech here. Listening to it last night, I realised that with just a few tweaks it would be all too relevant to automation.
Dr Malcolm: The lack of humility about testing that's being displayed here, uh... staggers me.
Manager: Well thank you, Dr. Malcolm, but I think things are a little bit different than you and I had feared...
Dr. Ian Malcolm: Yeah, I know. They're a lot worse.
Dr. Ian Malcolm: Don't you see the danger inherent in what you're doing here? Test automation is the most awesome force the planet's ever seen, but you wield it like a kid that's found his dad's gun.
Dr. Ian Malcolm: The problem with the engineering power that you're using here, it didn't require any discipline to attain it. You read what others had done and you took the next step to copy them. You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, ... and now you're selling it, you wanna sell it. Well...
John Hammond: I don't think you're giving us our due credit. Our people have done things which nobody's ever done before...
Dr. Ian Malcolm: Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should.
If there is a single lesson from this, it's to be very clear about when you want a test to be automated, whether it makes sense, and whether it should be tested at all. Don't be a John Bunting.