Wednesday, June 29, 2016

AUTOMATION 12 - Help! I'm not technical!

So far we've looked in depth at unit and API testing - sometimes it's got a bit technical, and I know some of you will have thrived on this.  Others of you will have found it hard going, and today we're going to champion your place within automation, looking through another experience report, and giving you some guidance on what you should have access to on your project as a tester - even if you don't program the scripts!


Revisiting Automation As A Service

Let's start by revisiting the automation-as-a-service model we've previously explored.  One of the reasons we evolved this model was because of the terrible friction we've seen on the question of "who should automate" - friction which just hasn't seemed to go away.  Fundamentally the arguments go that,

  • Developers write maintainable code
  • Testers know the best scenarios to check
  • Some developers have good testing skills
  • Some testers have good development skills



Automation then seems to live in a no-man's land between testing and development.

Going back to my original automation experience report, where testers were in charge, my team could create superb ideas for checking.  But the resulting automation scripts were not really very maintainable due to the poor level of coding skills.


On the other hand, on another project a few years ago, developers were solely in charge of our checking automation.  They would write their own automation scripts, but we testers didn't really know what those scripts covered, or even whether on any given day they'd passed or failed.  Developers likewise occasionally just ignored failures, because "it always does that", or because they felt they had other more pressing priorities.

As I've said, the problem is that I'm still seeing this kind of squabbling on Twitter about "who should automate".  However, the coverage in Lisa Crispin and Janet Gregory's book Agile Testing was the first time I saw a model where the automation responsibilities were shared between testers and developers to get the best outcome possible for the project.

Their work was really behind the model of "automation as a service" which we evolved when we had to build automated checking from the ground up to cover an already mature project.  We took Lisa and Janet's work, and refined it through consultation and retrospection (after some prototyping) with different roles within the team to find out what they needed and expected.

What we found was that testers and developers needed different things from automation.  A model that only served one didn't serve the other.  Most importantly, out of this emerged the role of automator, who could be a developer, or maybe even a tester, depending on their skills.  Their role was creating and maintaining the automation framework, but the key point was that they were there to serve the needs of developer and tester alike.


With this key role defined as outside of tester and developer, we were freed to focus on how testers, especially non-technical testers, could contribute to automation.  [We assumed that if the tester was sufficiently technical, they could take on the automator role as well]

Defining scripts

On our project, the developers were clearly the ones initially suited to take on the role of automator (newsflash: this too has changed over time).  So they were given a trial period to "get automating", creating automated checks as they saw fit.  About a fortnight into this, we all met up again, and started with the basics ...

Did you cover login with automation?
Yup!
Excellent ... what kind of scenarios?
Erm ... you can log in, of course.
And?
That's it, why?  What else is there to check?
How about checking that if I give the wrong password it doesn't log me in?  Or that if I log in incorrectly three times it locks my account?
Oh.  We'd not thought of that - anything else?

This was a really important conversation, because I saw in the eyes of developers a light bulb go off, and they could see how testers had some important knowledge to drive what kind of checks we should be automating.

In Agile Testing, Janet and Lisa talk a lot about using Cucumber to define tests.  I think one of the few criticisms I have of the book is that it's too locked into that tool - something Lisa was keen to avoid when writing More Agile Testing, because technology changes just too rapidly.  [For instance, cloud storage and even Skype weren't really much of a thing back when the original Agile Testing book was being authored]

My team didn't use Cucumber, but instead built up a Wiki where testers specified tests and developers would make a note in the Wiki when they were built.  Pretty simple, but it worked, and fulfilled that need we'd defined for testers to,

"Define useful checking scripts for automators (even if I don't code them)"

As a tester, even if you're not technical, and some of the ideas of this series have gone over your head, you need an area where you can lay out your ideas for automated checks.  It doesn't have to be complex or coded, but as we've talked about, each check needs to boil down to a simple yes or no.

The developers left alone would come up with a shallow example, and maybe that's okay for some features, but for features like login it helps to check a bit more.  You need to be able to specify scenarios such as,

Correct details log the user in
Enter a username and correct password.
You are logged in.

Incorrect details do not allow the user in
Enter a username and incorrect password.
You are not logged in.

Entering incorrect details three times consecutively locks an account
Enter a username and incorrect password three times.
Enter a username and correct password.
The system warns you your account is locked.

Only three consecutive failed login attempts lock an account
Enter a username and incorrect password twice.
Enter a username and correct password.
User is logged in.
Log out.
Enter a username and incorrect password.
Enter a username and correct password.
User is logged in.
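
To make this concrete, here's a minimal sketch of how an automator might turn those Wiki scenarios into checks.  It's illustrative only - the AuthService class below is an invented stand-in for the real system (a real suite would drive the application through its UI or API) - but notice how each check's name mirrors a scenario title, so even a non-coding tester can read down the list and see what's covered.

```python
# A hedged sketch only: AuthService is an invented stand-in for the system
# under test, with a made-up "lock after 3 consecutive failures" rule.
import unittest


class AuthService:
    """Hypothetical login service, included only so this example runs standalone."""

    def __init__(self, username, password, lockout_limit=3):
        self._username = username
        self._password = password
        self._lockout_limit = lockout_limit
        self._consecutive_failures = 0
        self.locked = False

    def login(self, username, password):
        if self.locked:
            return False
        if username == self._username and password == self._password:
            self._consecutive_failures = 0   # a success resets the count
            return True
        self._consecutive_failures += 1
        if self._consecutive_failures >= self._lockout_limit:
            self.locked = True
        return False


class LoginChecks(unittest.TestCase):
    """Each check name mirrors a scenario title from the tester-maintained Wiki."""

    def setUp(self):
        self.auth = AuthService("alice", "correct-password")

    def test_correct_details_log_user_in(self):
        self.assertTrue(self.auth.login("alice", "correct-password"))

    def test_incorrect_details_do_not_allow_user_in(self):
        self.assertFalse(self.auth.login("alice", "wrong-password"))

    def test_three_consecutive_failed_logins_lock_the_account(self):
        for _ in range(3):
            self.auth.login("alice", "wrong-password")
        # Even the correct password is now rejected - the account is locked.
        self.assertFalse(self.auth.login("alice", "correct-password"))
        self.assertTrue(self.auth.locked)

    def test_only_consecutive_failures_lock_the_account(self):
        self.auth.login("alice", "wrong-password")
        self.auth.login("alice", "wrong-password")
        self.assertTrue(self.auth.login("alice", "correct-password"))  # resets count
        self.auth.login("alice", "wrong-password")
        self.assertTrue(self.auth.login("alice", "correct-password"))


if __name__ == "__main__":
    unittest.main()
```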

Exploring passed and failed checks

In automation we tend to be very obsessed with "looking at the green", i.e. the automated tests which pass.  If we see a sea of green, everything's good.  And, especially in trials, we tend to focus on building a system which shows us that everything's working.

Trish Khoo has a fun talk on YouTube here about intermittent issues she explored in an automation suite.  It's well worth watching it all, but if you're busy, find time for just the first 3 minutes, where she talks about a company she worked at where every build passed its checks.  Then one day they noticed how fast the test suite ran (30 seconds) ... it turned out that the automation suite used only skeleton scripts, which were only coded to return passes (not to actually, y'know, check anything).

This talk was quite influential in our thinking, reminding us that we needed to look into any automation run and actually work out what was happening.  Not quite as easy as it seemed!  For instance, we started using a numerical structure of "script 1 / 2 / 3" ... which meant we needed to cross-reference the Wiki to translate "script 5 failed, which is the one where ...".

So we started to use meaningful titles which linked back to the Wiki, such as "Only three consecutive failed login attempts lock an account".  Each log had screenshots, which helped, but the logs were a bit too technical, so it was sometimes hard to work out exactly what was going on, and why it had failed.  So we aimed to make the logs more readable, especially to the non-technical, and to include the data that had been used.
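
As a rough illustration of what "more readable" meant (this isn't our actual framework, and the function and step wording below are invented), the idea was for each check to announce its Wiki title and narrate each step, including the data used, in plain English:

```python
# Sketch only: 'auth' stands in for whatever drives the real system under test.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("automation")


def check_lockout_after_three_failures(auth, username, bad_password, good_password):
    log.info("CHECK: Only three consecutive failed login attempts lock an account")
    for attempt in range(1, 4):
        log.info("Step %d: log in as '%s' with an incorrect password", attempt, username)
        auth.login(username, bad_password)
    log.info("Step 4: log in as '%s' with the correct password", username)
    logged_in = auth.login(username, good_password)
    log.info("Expected: login rejected because the account is locked.  Actual: %s",
             "rejected" if not logged_in else "accepted")
    assert not logged_in, "Account was not locked after three consecutive failed logins"
```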

As all this started to fall into place, the testers also (much like in Trish's talk) started to notice that some of the scripts which were always passing weren't quite right.  Sometimes they'd miss the vital final assertion step (the thing that was actually supposed to be checked), or weren't doing the final commit from a back-office review.  Some of these things were found quite late, with testers thinking "oh, we have a check for that scenario, we're covered" - but sadly there was a defect lurking in that last, missing assertion (we'll come to this again later).
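
That trap is easy to fall into.  Purely as an invented illustration (none of these function names are real), compare a check that drives the whole flow but never asserts anything with one that finishes the job:

```python
# Invented example - the back-office API here is hypothetical.
def check_order_is_approved_misleading(backoffice, order):
    backoffice.submit(order)
    backoffice.review(order)
    # No final commit, and no assertion: this "check" goes green
    # regardless of what the system actually did.


def check_order_is_approved(backoffice, order):
    backoffice.submit(order)
    backoffice.review(order)
    backoffice.commit_review(order)                    # the step that kept being missed
    assert backoffice.status(order) == "APPROVED"      # the assertion that matters
```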

Rerun scripts

Finally and simply, if a tester noticed something like "these checks failed due to a network problem, and the network's back up now", they needed to be able to just rerun a suite (or part of it), not have to wait for a developer to do it for them.
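
As a minimal sketch of what that looks like - assuming a pytest-based suite, which isn't what our project used, but the idea carries over - a tester could re-run just the failures themselves:

```python
# Hypothetical rerun helper for a pytest-based suite.
import pytest

if __name__ == "__main__":
    # --last-failed re-runs only the checks that failed in the previous run;
    # swap it for -k "login" to re-run just the login-related checks instead.
    pytest.main(["--last-failed", "-v"])
```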

All these experiences became the centre-piece for our defined tester role,



If you're a tester on a project, particularly an agile project, and your automation process does not allow you:
  • To define checks you want to be scripted up
  • To view whether checks have passed or failed, including the detail of what the automated check does
  • To be able to rerun the automation when it fails

Then you need to have conversations with your team about this.  It's really important that testers are not shut out of the process, even if they're not in the automator role.  Being able to input to this process, and to see what's going on, is pivotal to allowing manual testers to decide where to focus their attention when they do check manually.

An automation process where manual testers perform the same checks as the automation has failed to bring any efficiency to the testing process.  Manual testers need to work with and beyond the automation - something we'll look into much later in the series.

Comments

  1. Thanks for sharing your series about automation, especially describing the skills that each role needs. In my opinion this makes it easier for the developer, tester or automator to work out where he or she would like to improve their skills, and to work on this. Personally I started as an exploratory tester and am trying to close the gap to the automator role in the future.

     Reply: My advice - learn some software development basics. Especially around "code reuse".