Monday, April 12, 2021

The most powerful tool in my facilitation arsenal...

It's been a while since I've written anything technical (I'm busy writing fiction at the moment), but this post has been brewing for a year, and a meeting today pushed me into wanting to write it up.

I've been diversifying since last I wrote! I've always thought of myself as a 'Jack/Jill of all trades' in software; I'd even like to call myself 'a renaissance man', except I've never been one for pretension.

One of the things I love about where I work, and particularly my current team, is that there are always opportunities to step outside of my job title and do something a little differently. This has included working as a scrum master, business analyst, developer, data analyst; it goes on!

An area I really love working in - and this goes back to my love of agile and being a scrum master - is facilitation. It's a role I absolutely love, and have really grown into over the years, but it's also one I initially found challenging.

You see, I love contributing ideas and finding direction. That kind of leadership can have its uses, but it doesn't make for good facilitation, and can (as we'll discuss) have its limitations.

When my team comes across a problem, it's really easy to look at it and try to propose a solution. And that's okay sometimes, but as I said before about being a Jack (or Jill) of all trades, you have to remember you're also a master of none.

Generally, when the team comes together, there's a good amount of knowledge in the room. When you can tease that out of them, it makes for better solutions, as well as allowing the whole team to own the solution.

An example might be, we have some problem records in production. How do we fix them?

The shallow solution is just to 'patch the data'. Gather the team together, though, and you might have one developer come out with 'well, we get this issue every year; it's to do with how one of the legacy parts of our system works'.

Now we have two solutions! A quick fix, and a deeper fix. We might still go initially with the quick fix, but we can write up and track the deeper fix, and look to address it in the future. And that's good!

But how do you get there? I generally find there's a hesitancy for people to speak out. I think a lot of the folks I work with like to internally process things first.

So here's an example of how I handled something in a meeting today, and it's something that happens in most meetings. In this meeting we were looking for a number of initiatives for change (put up on a Trello board), but we needed one to start.

"Okay, we have our epics here, which one should we work on first?"


To me, silence is difficult. Yes, with a surname of Talks, and an occasionally loud personality, I have an answer to this - something I'd like to do.

BUT, I think it will mean more if the team chooses themselves.

So, I play a game of chicken with the silence, holding my nerve.

Internally, I count up slowly. I will give it up to about twenty seconds before I try another tack. I'm aware I'm holding my breath slightly.

Eventually someone breaks the silence with an idea. Once one person has chimed in, another follows.

I keep track of people who've spoken, and try and make some time to ask for thoughts from some of our quieter members to make sure they're included. I've often found some of our quieter team members have as much insight as, if not more than, the other members, and it's always worth inviting them into the conversation (without putting them on the spot too much). Generally this is a low-stress question, as most people have the get-out default answer of 'I agree with what's been said'. But I've often found real diamonds from doing this.

And this, in truth, is the biggest tool in my facilitation toolbox: the application of silence through holding my nerve, and accepting that some silence in a meeting is no bad thing.

It's something I picked up particularly from some of Gitte Klitgaard's workshops at ATD, around 'let others talk more, talk less yourself'.

Because I expect you'll ask - how do I break the silence when it goes on too long?

  • If the question is low stakes, you can put out your own answer. If you have a semi-decent answer, folks will tend to jump on to rubber stamp it. But you're moving the meeting forward.
  • If you have the trust of your team, you can provide a really bad, comical answer. This can move things forward; people often find it easier to respond to an idea than to come up with something new (this is a technique I similarly use around decision making, which I should flesh out in the future). But the downside is, if the team doesn't know you well, this can make you seem like you're just dumb.
  • Ask someone directly what their thoughts are. This one should be handled with care, as it can make some people feel tense and put on the spot. You need a meeting which feels low stress, with team members you have a rapport with. It's also much easier in a room of people (vs online meeting) as you can pick up from body language clues people in the team who seem to have an idea, but are hesitant. Sometimes just making eye contact can be enough to prompt.

So in honour of the power of silence, here's a Harpo Marx pic...

Sunday, January 29, 2017

AUTOMATION 29 - The need for speed

This is part 29 of my automation series - for our story to date, check out the list of articles here.

Technical level: **

Just a couple of posts ago we were looking at the goal of automation. In general terms, what I get from every group I've talked to is something along the lines of "speedy feedback". We're trying to shorten and supercharge the feedback loop from someone committing a change to being told "the system doesn't work".

So what's an ideal time for it to take?  This is important.

Obviously we don't want it to take longer than doing it manually, otherwise what's the point?

Different groups had different ideas about this.  Most just wanted some kind of "thumbs up" or "thumbs down" indicator on build quality.  And they wanted it quite quickly.  For others there was a rigorous series of regression tests they needed to run on the system, and they were trying to speed this up from "currently takes a couple of weeks".

I'm going to call these two drivers "build quality" and "regression". They will dramatically change the timeframe we'll be looking at. For both these drivers though, the first port of call for making things as fast as possible is to embrace the automation pyramid, and to be sure you're doing at the unit or API level those checks which make sense to do there.

Driven by "build quality"

A developer finishes a code or configuration change, and wants to commit to the test environment.  But first they have to run a series of checks to make sure what's being deployed isn't obviously broken.

Ideally you want these to run as fast as possible.  The ideal time I've often heard of is about 20 minutes (the office joke is we've got a cafe on our ground floor, and this is the time it takes to have a pizza made and delivered).  Certainly any more than 40 minutes is painful.  It leads to "I guess I just kick this off before I leave and hope I'm first in tomorrow morning" *

There are ways to go faster. Selenium Grid allows you to run your automated tests in something like a multi-threaded environment (running multiple things concurrently). Katrina Clokie wrote an article about its adoption at BNZ here.
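Grid itself needs real browser nodes behind it, but the speed-up it delivers is at heart just concurrency, which can be sketched in plain Java. Everything below is hypothetical stand-in code: `runCheck` simulates one slow UI test, and a fixed pool of workers runs the checks side by side instead of one after another.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// The speed-up principle behind running tests on a grid: wall-clock time
// becomes roughly (number of checks / workers) x slowest check, rather
// than the sum of every check run sequentially.
public class ParallelChecks {

    // Hypothetical stand-in for one slow browser-based check.
    static boolean runCheck(String name) {
        try {
            Thread.sleep(100);  // pretend this is a slow UI test
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return true;  // pretend it passed
    }

    // Run every check across a fixed pool of workers; any failure fails the run.
    static boolean runAll(List<String> checks, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Callable<Boolean>> tasks = new ArrayList<>();
            for (String check : checks) {
                tasks.add(() -> runCheck(check));
            }
            for (Future<Boolean> result : pool.invokeAll(tasks)) {
                if (!result.get()) {
                    return false;
                }
            }
            return true;
        } catch (InterruptedException | ExecutionException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        List<String> checks = List.of("login", "search", "checkout", "logout");
        long start = System.nanoTime();
        boolean passed = runAll(checks, 4);  // four workers, four 100ms checks
        long millis = (System.nanoTime() - start) / 1_000_000;
        // With four workers this takes ~100ms instead of ~400ms sequentially.
        System.out.println("passed=" + passed + " in ~" + millis + "ms");
    }
}
```

The same trade-off applies as with a real grid: the run is only as fast as its slowest check, so moving slow checks down the pyramid still matters.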

Using technology to help you go faster is a definite must, it's the driving force behind trying to use the pyramid to make sure your checks are at the place where they can run most efficiently.  If you're testing hundreds of business logic combinations, do it at the unit level.

This picture though highlights something to be aware of ...

If you've got an elephant on a motorbike and need to go faster, the first port of call is to buy a more powerful motorcycle (faster machines, Selenium Grid).

If that's still not working, it might be time to put the elephant on a diet. That means looking through your build tests, and taking a hard critical look at "do we need them all?". Potentially scaling it back.

Driven by "regression"

If the driver for your automation is regression test coverage, you're probably aiming for a much bigger suite, with much longer runtimes.

Even though you're looking at maybe running over days vs running in well under an hour, it's just as important to embrace the pyramid and try to have your checks running as efficiently as possible.

A few groups I've spoken with have embraced both "build driven" and "regression". Typically the post-build checks that they run are a cut-down version of the regression test suite (hey, that means reuse), but covering only the top critical behaviours.

There's still a need to try and avoid the feeling of the elephant on the bike. More automation means more to maintain, and you want to avoid servicing and carrying forward tests which are trivial in nature.

At the beginning of this series I told the story of the test manager who'd make us run scripts which checked for every bug we'd ever found (no matter how trivial), and how painful that got over time. When I was at Agile Test Days in Germany, I happened to meet a fellow tester from the project in that experience report, and we talked about how painful that approach had become. So much so that the prohibitive cost of testing like that eventually led to us losing future contract work with that customer (the unit manager happened to be a school friend, and had confirmed that). And for all that rerunning of those tests, we never once found a recurrence of one of those bugs to warrant the time.

[To be clear, I learned from that test manager the importance of going deeper in testing than I'd normally do. But in trying to engage with that test manager and the business involved, I also learned the importance of questioning whether to test to quite that level if we're not getting defects about the product illuminated.]

*  When I was a developer, waiting for the build to finish drove me nuts. If it was a short build, I might update some documentation, then write a humorous meme to my friends. If it was longer, I'd try and remote log in on a spare machine, and trawl through the defect list to find a relatively simple bug I might be able to knock out of the park. This might have been a bigger mistake than the comedy meme, as I'd end up switching context and would forget some of what I'd done in the compile and test that was going on!

Saturday, January 28, 2017

John Hurt dies, farewell to The War Doctor

As you know, I'll occasionally take pause to remember a significant celebrity who's died.  I don't do obituaries for every celebrity, but occasionally there is a passing which touches me.  Some figure who I want to talk about.

With Carrie Fisher and her battle with mental health, there was never any doubt.  Today though hearing about John Hurt's death, I was torn about whether to write about him or not.

I've seen interviews of him, he was quite charming and funny, I followed a page dedicated to him, I've seen videos of him at fan events.  But I know very little about him, which is a shame.  And in talking about him I find myself wanting to talk about his roles and his acting over the man himself.  Somehow that seems unfair ... but we'll get to that later!

Wow - and how many roles there have been! I don't think I've ever not liked him in a film. He was in the BBC production I, Claudius as the evil Caligula. This is a must-see, as you get to see a lot of very young actors who'd go on to be greats, such as Brian Blessed (BRIAN BLESSED!), Patrick Stewart, Derek Jacobi, John Rhys-Davies.

Although not his, who can forget this amazingly inspirational speech ...  [Try it at your next retrospective]

He had an amazing career, in and out of science fiction.  As a fan though I'm going to remember him for some spellbinding roles.  He's the man who painfully gave birth to an alien.  He helped Harry Potter choose his wand.  He went adventuring with Indiana Jones.  He was a loving father to Hellboy.

He played Winston Smith in the film adaptation of 1984, a book which feels an increasingly terrifying reality as we've developed a 24/7 surveillance culture.  Indeed only yesterday 1984 was revealed to be climbing the bestsellers on the back of Trump's terrifying first week in office.

In 1984, he plays a young man who tries to rebel against an oppressive system, falls in love, but in the end is broken by an unbeatable and brutal system.  His career spanned long enough that he'd later portray the oppressor as the tyrant in the dystopian future of V For Vendetta.

But it's one role particularly that I want to talk about, one he'll be forever in our hearts for, that of The War Doctor in the 50th Anniversary of Doctor Who.

Doctor Who @ 50

The 50th Anniversary story was an incredibly emotional experience for me.  This was a show I'd grown up with, and for a long time hadn't been on the television.  They'd stopped making it in 1989, then finally after an age we started getting new stories in 2005!

Over here in New Zealand they broadcast it live in the morning (about 9am). I'd been up since about 6am, with butterflies in my stomach. I was of course excited, but also emotional. Here was a show I'd watched as a child, frightened behind the sofa with my parents telling me the Daleks wouldn't get me. It'd been a show my mother had similar experiences with as a girl, and one I'd got to share with my son.

That morning I felt tense, and a little sad. As soon as it was over I'd be able to share thoughts and impressions with friends on Facebook who were back in the UK.  But there were two of my friends I'd not be able to do that with - Violet, and a friend named Stewart who'd died just a few months ago.

Somehow the 50th Anniversary made me aware of all the friendships that had sprouted because of the show, the passing of the torch from my mother to me to my son.

And here was an actor I'd always loved, and he was going to play a version of the Doctor we'd never seen before. One from the war between the Timelords and the Daleks ...

It was a strange story.  John Hurt plays The War Doctor, it looks like the Timelords are about to be exterminated by the Daleks.  If they fall, the galaxy won't be long falling to the terror of the Daleks.

He steals a doomsday device from the Timelords - it's the last one, and one everyone fears using. He takes it to a barn, and tries to summon "the courage" to use it.

The whole show revolves around that one choice.  Death by the Daleks, or unleashing doomsday.

What marks the Doctor as such an important character in science fiction is - ironically for an alien - his humanity. We're used to dramatic resolutions always being that "all the bad guys die". Star Trek and Doctor Who have been unusual in this. A foe can do bad things, and even kill others, but in the end there'll be an attempt to create a peace.

A lot of times there is peace even in these shows because "all the bad guys are dead". But not always: in Star Trek Into Darkness, Khan does terrible things and murders a lot of people. But in the end Kirk doesn't kill him; he has him put into suspended animation.

There was a famous Doctor Who story involving the Zygons last season, which touched upon themes of terrorism and radicalisation. The Zygons end up killing a lot of humans, but the Doctor has an odd trap for them - two buttons: one will help their cause, one will annihilate them. You're free to press either one you want. In the end the Zygons decide it's too terrifying to even consider pressing any button. The Doctor calls it a scale model of war: you might get everything you want, but on the way you risk everything as well.

What's uncomfortable about the ending is that the Zygons don't get punished. They realise they can't get what they want without considerable risk, and an uneasy peace is restored. One of their ranks sees the error of their ways, and returns as a peacekeeper, to try and keep the balance.

In the way we're fed media and stories, it's unnerving.  But it reflects reality.  Wars happen, but sooner or later when someone calls peace, no matter who has died, and how much it hurt, the fighting has to stop.  Otherwise every conflict becomes an unending blood feud of genocide.

When Doctor Who hits hardest, it reminds us of that. It's one of the reasons for the appeal of the character. He doesn't use a weapon - typically he represents a kind of trickster god who manages to use an enemy's strength against itself. He always asks them to stop, gives them a chance. But ultimately he's like a force of karma, with an enemy's destruction coming from themselves.

In the 50th Anniversary, John Hurt's War Doctor feels weary and beaten. He feels there's no other way, and he knows he needs to activate this final doomsday weapon, "the bad wolf". But he feels absolutely defeated in needing to.

It's a dark moment as having recruited his future selves, he's about to activate, but in the end finds another way.

As a later version of the Doctor would so eloquently put it,

Tick-tock doomsday

It's an emotional scene, and all the more poignant being reminded of it after a week of Donald Trump as President.

President Trump is quite the reverse of the War Doctor, a man who is already issuing a lot of orders, and showing in continual outbursts over how well attended his inauguration was, how very unstable he is.  He's already taken action to silence any government scientists who might dare disagree with his statements using facts.

Everyone is nervous as he's a man who is very keen to use America's military might, or the threat of it, against anyone internal or international who he feels opposes his decision making. And this is a man who commands a stockpile of nuclear weapons ...

This is a man who's responsible for scientists moving the Doomsday clock - a threat barometer of how close we are to an extinction level disaster - to two and a half minutes to twelve.  We're very frighteningly close to major catastrophe.

Whether or not Trump presses the actual red button for nukes, he's pressed another red button which, whilst not as immediate, is just as deadly. All government sites have suddenly had all talk of climate change removed. Any laws trying to keep back pollution are in the process of being revoked. America, one of the world's largest economies, is going to join China in going full pelt as if climate change is a myth. We'll enjoy a boom time ... for a while, but someone else will be picking up the tab.

My Patronus is a Timelord

The Doctor, then is a character who right now we'd love to appear, to help us fight this evil, win back our planet from a menace.  Yay, evil defeated!

Film and TV gives us characters we can't help bonding to emotionally. They become in our heads champions, a protective spirit - what the Harry Potter books called a Patronus - imaginary friends who inspire and guide us. I wrote when Leonard Nimoy died about a similar bond I had with Spock, that character onscreen who echoed my awkward teenage life.

The love we feel for characters can often spill over to the actors themselves. There are moments on screen which capture, enchant and touch us, and they forever become associated with the actor involved. When Alan Rickman died last year, the Snape scene "always" was referred to and replayed time and again, a perfect moment that gripped us.

We'd love a real-life Doctor to appear, and help fight all our tyrannies. But the message of the Doctor is to stand up.

We might not have a real-life Doctor, but we have something even better, we have each other - the rallies against Trump have shown that people don't want to just sit back for four years and hope for the best, I hope they'll continue.  They're a reminder to me that as much as I thought my protest days were behind me, they probably lie ahead of me as well.  I emailed my friend Lisa Crispin to tell her how proud I was of her joining a march last week.

Every man woman or "none of your business" who marched, who is active over the next few years will do so with their own Patronus, one who might be fictional, real or religious ...

Whether that Patronus is a religious character who promotes understanding in the world.  Christians please note that Jesus when pressed said that taking care of your fellow man was the most important commandment along with loving your God.
Also note, there is no Muhammad here, not through lack of merit, but because it'd be offensive to show him.

Or Martin Luther King Jr, who fought and died for civil rights.

Or Princess Leia / Carrie Fisher - leader of the rebels

Or perhaps Josephine Baker - the exotic dancer who fought in the French Resistance against Nazis, then came to America to fight for civil rights.

Or even the original troublemaker and "nasty woman" Rosa Parks.  Pioneer of the civil rights fight.  Here she is being booked by police for not giving up her seat to white folks.

For me, my Patronus is a Timelord from the planet Gallifrey, somewhere in the Kasterborous constellation.  John Hurt, thank you for being a part of my imaginary guide ...

Friday, January 27, 2017

AUTOMATION 28 - Some checks are faster than others ... surviving The Pyramid Of Doom

This is part 28 of my automation series - for our story to date, check out the list of articles here.

Technical level: **

All the way back in part 4, I talked about a zoology of automation types.  My lingo has evolved a bit, and I sometimes talk about "the automation spectrum" or yesterday I talked about "the automation toolbox".  Different terms for the same thing.  😉

There are three basic types of automation we've been talking about to date,

  • Unit testing, which occurs on parts of the code itself.  It's really great for checking business rules.
  • API testing, which usually involves the integration of an application layer to a lower layer, but can be used to drive a front end as well.  It works by sending commands and scenarios for the system to react to.
  • GUI testing, which is testing from the user interface.  Entering data, pushing buttons etc.

A decade ago when we were talking about automation within the context of testing, we were almost always thinking about a GUI test tool.  Typically the notoriously unreliable record-and-playback tools.

To really be smart in their use, it helps to not just choose one, but to learn to use them all.

Why?  What's the advantage?

One of the key things is speed!

Try using this online dice roller here.  When you click to roll the dice it,

  • sends a request back to the server
  • the server rolls a dice
  • the server renders a new page
  • your browser has to load up a new page (I know because I'm on a slow connection and can see it loading)
  • the page has to download an image corresponding to the number (it takes a moment for this to load too)
  • done

All told it can take about 2 seconds for a roll.  There is another site which is as slow, and there's an animation which shows a rolling dice, so it really can't go faster.

When we were looking at dice rolling under our unit test, we were able to run about 600,000 dice rolls in the same time.  All that test needed to do was to call the method which generated the random number.  So the round-trip is considerably faster (it helps it doesn't have to travel half way around the world and back for sure).
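As a sketch of why the unit-level version is so fast: the check is just a direct method call in-process. The `Dice` class below is a hypothetical stand-in for the method the real page calls server-side; in practice this would live in a JUnit test, but plain Java keeps the sketch self-contained.

```java
import java.util.Random;

// A unit-level version of the dice check: call the rolling logic directly,
// with no browser, page render, or network round-trip inside the loop.
public class DiceCheck {

    // Hypothetical stand-in for the server-side dice-rolling method.
    static class Dice {
        private final Random random = new Random();

        int roll() {
            return random.nextInt(6) + 1;  // uniform result in 1..6
        }
    }

    // Roll many times and confirm every result stays in range.
    static boolean allRollsInRange(int rolls) {
        Dice dice = new Dice();
        for (int i = 0; i < rolls; i++) {
            int result = dice.roll();
            if (result < 1 || result > 6) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        boolean ok = allRollsInRange(600_000);
        long millis = (System.nanoTime() - start) / 1_000_000;
        // 600,000 direct calls typically complete in well under a second.
        System.out.println("all in range=" + ok + " in " + millis + "ms");
    }
}
```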

This is generally what we find in automation - unit testing is several orders of magnitude faster than GUI testing, with API testing somewhere in-between.

In my consultation with different automators, a great piece of advice they've given is all around thinking about testing at the appropriate layer. This was summed up by the quote "don't test the database with a GUI tool". A bit like the online dice roller, you have to take a long trip through a lot of layers, which you need to do once in a while, but if there are a lot of tests you need to run, you want to move them much closer to the area you're testing.

An example came out last week when we were talking about ways to test a csv file upload. I showed a method I used: we have line-by-line error messaging for uploads, so I make a file full of friendly uploads. Then I make one where the first line is good, but line by line I break a rule - whether too many characters here, or a number in a text-only field, etc.

It makes for a great manual test, and we were thinking about how to automate it, originally as a GUI test.  Then we realised that each line where I'd kept or broke the rules really belonged as a JUnit test - it'd be much faster that way.
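As a rough sketch of that shift, here's what those line-by-line rules might look like at the unit level. The rules themselves (two fields, a name up to 20 characters, a numeric quantity) are made up for illustration; the point is each rule becomes a millisecond-scale check instead of a full GUI upload.

```java
// Line-by-line CSV rules pushed down to the unit level. The rules here
// are hypothetical examples, not the real upload's rules.
public class CsvLineRules {

    // Validate one line of the form "name,quantity".
    // Returns null when the line is fine, or an error message otherwise.
    static String validate(String line) {
        String[] fields = line.split(",", -1);
        if (fields.length != 2) {
            return "expected 2 fields";
        }
        if (fields[0].isEmpty()) {
            return "name is required";
        }
        if (fields[0].length() > 20) {
            return "name too long (max 20)";
        }
        if (!fields[1].matches("\\d+")) {
            return "quantity must be a number";
        }
        return null;
    }

    public static void main(String[] args) {
        // Same shape as the manual test file: one good line, then lines
        // each breaking a single rule - but run in milliseconds.
        System.out.println(validate("widget,5"));                      // null
        System.out.println(validate(",5"));                            // name is required
        System.out.println(validate("a-name-well-over-the-limit,5"));  // name too long (max 20)
        System.out.println(validate("widget,five"));                   // quantity must be a number
    }
}
```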

Something else which came out when we discussed this further at our company technical test study group was there was another reason to push for more JUnit testing.  It's also far easier to write a JUnit test for complicated data than it is to write a GUI one,

  • In a GUI test you tend to have to create a data item going through a series of processes.  For instance if you needed a suspended customer account, you might have to create it, validate it, then go in as an admin and suspend.
  • In a JUnit test you can just declare an item, and have the ability inside the code to set certain fields.  For instance, "declare a customer object, and set the status to suspended".

This makes for much simpler tests, but it brings challenges. Developers and testers need to be able to look more at what's in place in terms of unit tests, and to make things visible. We tend to have lovely dashboards for the status of GUI and API tests, but the JUnit checks can be a little less visible to a tester (although you'd hope if a test fails, it won't build, so it'd always be 100%, right?).
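To illustrate the "just declare an item" point above, here's a minimal sketch. The `Customer` class and its suspended-account rule are hypothetical; what matters is that the state is set up in one line of code, not via a create/validate/suspend GUI journey.

```java
// The "just declare an item" advantage of unit-level tests, sketched
// with a hypothetical Customer class.
public class CustomerStateCheck {

    enum Status { NEW, ACTIVE, SUSPENDED }

    static class Customer {
        private Status status = Status.NEW;

        void setStatus(Status status) {
            this.status = status;
        }

        boolean canPlaceOrder() {
            return status == Status.ACTIVE;  // only active accounts may order
        }
    }

    // The behaviour under check: a suspended customer can't place an order.
    static boolean suspendedCustomerBlocked() {
        Customer customer = new Customer();
        customer.setStatus(Status.SUSPENDED);  // no GUI journey needed
        return !customer.canPlaceOrder();
    }

    public static void main(String[] args) {
        System.out.println("suspended customer blocked: " + suspendedCustomerBlocked());
    }
}
```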

Beware The Pyramid Of Doom! *

The idea that to make best use of the speed of unit and API tests (which also tend to be more robust, because data and interface items tend to change much less than GUI elements) you need more of them than GUI tests has been represented in a model you've probably heard of: the automation pyramid, which I first read about in Lisa Crispin and Janet Gregory's Agile Testing.

Simply put, because they're so much faster and easier, you'll want more unit tests than API tests than GUI tests.

Hopefully you'll see from the above why - faster and simpler!

Lisa and Janet's book was written in 2009, but as we discussed in our study group, go back just a few years around New Zealand, and we pretty much had a test pyramid then ... it just went the other way ...

As we've embraced more agile methodology and particularly the idea of automation as a toolbox over a tool, we're taking huge leaps to embrace this pyramid more in our use of automation.

* In case you haven't noticed, The Pyramid Of Doom in automation looks somewhat like this ...

AUTOMATION 27 - Goals, constraints, strategy

Technical level: **

I originally posted my planned curriculum here back in June.  It was based on my understanding of automation at the time.

I've used the series and the interest it's generated to engage with a lot of testers. I've spoken with a lot of automators around Wellington, and when I've been at conferences. I've collected all their stories, sifted them, and clustered the points they've raised.

A lot of what they've said has been nice affirmations of what I already knew.  But truth is, we engage with others not only to confirm our biases, but hopefully to learn and to stretch ourselves.  Today we're going to revisit some material which probably should go right at the beginning.  Most of it doesn't undermine what I've talked about before, but it does expand on it nicely.


I talked about this previously here under "automation as a service", but I'm shifting focus a little to really think more firmly about goals.

We're used to people coming up to us when talking about automation and saying "oh my god, there is a great tool, a fantastic tool".  We tend to start with this magic tool that everyone says is so easy to use.

In all my conversations last year I noticed a common theme about people who had stories to tell me.  They were all what I've come to call "second generation automators".

They had a common tale which went something like "we started using this system ... it was record and playback because our testers didn't know how to code".  The rest of their story played out pretty much as one of my denial case studies - they'd invested heavily in the tool, but every time the code changed, all the tests broke, and the maintenance for that was painful.

Inevitably what happened is they had to bin what they'd done to date and start again. But second time around, despite being under incredible pressure to get underway, many of them used the time to plan out much more.

To do this many reflected on their goals.  A common theme of those goals were that they were trying to speed up the feedback process.  If a new build came out, they wanted to find the obvious problems quickly.  These tended to be around making sure they could complete an end-to-end transaction, and that key activities could be completed.

There were differences of opinion about whether the automation was there to evaluate a new build vs doing a deep regression test.  And we'll talk about that more precisely in a later article.

The two key constraints of automation

From their previous failed attempts at automation, there were two key constraints which groups had come to appreciate:

  • Automation takes time to do well
  • Automation needs some training/expertise to make maintainable

These were immovable, written-in-stone rules, for which they had to build strategies to cope. We'll talk more about them below ...

Automation takes time to do well

That record-and-playback tool they'd used before, that could write a script as you tested? They'd got burned there because every time they'd try to rerun it, it'd need a tweak or ten.

Generally you could create an automated script full of checks in the time it'd take you to run about a dozen manually.  If you were going to automate something you had to be sure it was something that had value in checking repeatedly.

Every group used what I call "the strategy of focus" to deal with this constraint. They knew their automation was going to take a while - they'd probably never automate everything. What they needed it to do was to check (a) critical elements and (b) anything that took a long time to do manually.

I had a great chat with an automation guru from TradeMe who had this great insight.  If you're working especially on an agile team, you always are working from your most critical elements to your least critical.  It can be a trap for testers to work on new features, and want to automate them.  But in truth the most critical features to your system are those you've already built!

Every group tried in some way to accept that the automation would be an ongoing concern. The key solution was to prioritise and focus. They all tried to involve testers in capturing ideas for automated scripts. One of my favourite tales was of a group which kept a spreadsheet where testers could contribute ideas for scripts as they came to them, and the whole team would vote to prioritise what to build next. Democracy in action!

And don't fall into my key trap. I often find myself assuming that "everyone" means developers and testers. Sure, testers are a great source of ideas for automation scenarios - we excel at this. But don't forget particularly to get business people in to help prioritise and come up with ideas. Occasionally they'll say "this happened once in production", and the thing that happened really scared them; they'd like to know everything has been done to reduce the risk of it ever happening again. That's the kind of thing that's worth doing. Not necessarily every bug that made it to production, but the ones that gave the business the jitters.

Automation needs some training/expertise to be maintainable

Again, this came from the sting of "anyone can create scripts using our record-and-playback".  That had got groups into a mess.

There was an embracing of the fact that to write automation, you needed some kind of training and some kind of coding skill.  But equally, you didn't want to pick up a tool for which you needed specialist bespoke training - what I like to call "Hogwarts for automation".

Generally there was a desire to use a commonly available language.  If you're a software shop that develops in Java, it probably makes a lot of sense to develop your automation in Java.  You might have some people around who not only can help, but are really good at creating and designing maintainable Java code (you'd really hope).

[Side story: once, for a customer, I had to learn to program in Lisp to use their automation.  A truly bizarre language, and one of the oldest programming languages still in use.]

Selenium WebDriver with Java is a really popular combination.  It was one of the reasons that I spun off my Java series.  But it's important to remember this is just a handy GUI automation tool; as we've talked about previously, finding room for good unit and API testing is really crucial (and we'll talk about this later).

Indeed this was an important takeaway.  Many groups had been lured into their first attempt at automation through the sales pitch of "a tool".  But they'd come to appreciate that automation was not a tool, but a toolbox.  Much like when an engineer comes to your house, she doesn't just bring one spanner, she brings a toolbox!

Unit testing, API testing and GUI testing are different tools in our automation toolbox, and it's important we learn to use the right one for the right job.  Which leads me to my next post ...

Want to know more?

I'm talking at the upcoming Unicom automation conference in Wellington.  I will be speaking about some of what I've learned not only from my experience, but through going out and talking to people about their automation strategies.  There'll be some other great speakers there as well!

Wednesday, January 25, 2017

AUTOMATION 26 - Design patterns for automation

Technical level: *****

Today's article assumes a familiarity with ideas such as methods, privacy, JUnit tests and objects that was covered in my Java series.

The developer gurus I work with are really keen on design patterns, and often consider understanding them more important than knowing about the details of a language.

A design pattern is a set way of approaching a standard problem.  There are a couple of them floating around for Selenium, but one of the most important ones I've heard about is the page object design pattern.  There will be extensive references at the bottom of the page for further reading.

I'm going to go through an example of how I've analysed how to create automation.  You might take a slightly different approach, but hopefully you'll find the discussion of my approach useful enough to understand what you need to consider.

First of all, in page object design, you're going to create an object for every page in your application.  This "page object" will know all about the which, what and how of the page,

  • Which elements are on the page.
  • What their state is.
  • How to perform actions on the page.

In addition, it's generally a good idea to have the checking that performs pass/fail in your automation separate to the page object itself.  The page object will handle everything about doing things on the page, but will pass out information on the page to this test layer, so it's the test layer which performs the checks.

Our example ... Twitter

Okay, so we're back with Twitter again (I use this a lot).  Every page on Twitter can be turned into a page object.

For simplicity we'll say for now that there are just two.  The main feed page below,

And the login and registration page here,

You need to create objects for both, with a third layer to contain your JUnit tests.  Again we covered JUnit tests extensively in our Java series.

To make life easier, we're going to zoom in and do the analysis on login and registration, focusing on this part,

Which elements are on the page "LOCATORS"

Richard Bradshaw's article on page object design here calls the kind of functionality we're going to look at now "locators", and I just love that term for describing this.

We need to define all the page elements on the page we'll want to use.  We could do this either by defining WebElements inside the page object, or by having methods which return the WebElement.

We need to choose one or the other, and stick to it.  Most importantly, we want these to be private.  Only our page object should need to locate the WebElement directly.  The "what" and "how" parts of the object we design will allow external parts of the system to interface with these elements as needed.
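As a minimal sketch, here's what the two locator styles might look like inside a page object.  To keep it self-contained, plain strings stand in for Selenium's By objects and WebElements, and the selector and class names are my own invention:

```java
// Two possible locator styles inside a page object.  In real Selenium
// code the constant would be a By object, and the private method would
// return a WebElement via driver.findElement(...).
public class LoginPageLocators {
    // Style 1: the locator defined as a private constant
    private static final String USERNAME_FIELD = "input#username"; // hypothetical selector

    // Style 2: a private method that looks the element up on demand
    private String usernameField() {
        return USERNAME_FIELD; // Selenium: driver.findElement(By.cssSelector(USERNAME_FIELD))
    }

    public static void main(String[] args) {
        // Both styles resolve to the same element - pick one and stick to it.
        LoginPageLocators page = new LoginPageLocators();
        System.out.println(page.usernameField());
    }
}
```

Either style works; the important thing is that both keep the locator private, so only the page object knows how to find the element.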

Let's do the analysis, we have the following ...

Text fields,

  • Phone, email or username
  • Password
  • Full name
  • Email
  • Password

Buttons

  • Log in
  • Sign up for Twitter

Check boxes

  • Remember me

Links

  • Forgot password?

We need to define them all.  There's also an additional one we might want to define - you can get an error message ...

What their state is "STATE"

This is a bit trickier, but you might want to return the state of the page.  These methods should be public, and will be used by your JUnit tests.  They are similar to the idea of "getters".

So examples might be,

  • Get page content.  Return the text on the entire page.  So the JUnit test can check for certain content.
  • Get error message box state.  Return true if the error message box is displayed.
  • Get error message box content.  Return the text inside the error message box.
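Those three "state" getters might look something like this as a sketch.  A couple of plain fields stand in for reading the live page, so the example is self-contained - the method and class names are my own invention, not a set API:

```java
// Sketch of the "state" layer: public getters the test layer can call.
public class LoginPageState {
    // In real code these would be read from the live page via the driver
    private String pageText = "Log in to Twitter";
    private boolean errorBoxShown = false;
    private String errorBoxText = "";

    // Return the text on the entire page, so a test can check content
    public String getPageContent() { return pageText; }

    // Return true if the error message box is displayed
    public boolean isErrorBoxDisplayed() { return errorBoxShown; }

    // Return the text inside the error message box
    public String getErrorBoxContent() { return errorBoxText; }

    public static void main(String[] args) {
        LoginPageState page = new LoginPageState();
        System.out.println(page.getPageContent());
        System.out.println(page.isErrorBoxDisplayed());
    }
}
```

Notice the page object only reports what it sees; it never decides pass or fail itself - that's the test layer's job.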

How to perform actions "ACTIONS"

My team call these flows.  Rather than just a single action, they're a sequence of actions.  Obviously these methods would need to be public.

If we look at our chosen page, there are three flows ...

Flow 1 - login

  • Enter username
  • Enter password
  • Select/deselect remember me
  • Select log in button

Flow 2 - register

  • Enter full name
  • Enter email
  • Enter password
  • Select sign up for Twitter

Flow 3 - forgot password

  • Select forgot password link

Notice how if this was turned into a test script, all these flows relate to the "action" not "expected result" part of a test script table.  This is intentional, as the part that checks expected result should be an assertion within the JUnit method.
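The flows above might look like this as public page-object methods.  It's a minimal sketch: a simple action log stands in for driving real WebElements, and all the names are my own invention:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the flows as public methods on a page object.
public class LoginPageFlows {
    private final List<String> actions = new ArrayList<>();

    // Private helpers - in real code these would drive WebElements
    private void type(String field, String value) { actions.add("type " + field); }
    private void click(String element) { actions.add("click " + element); }

    // Flow 1 - login
    public void login(String username, String password, boolean rememberMe) {
        type("username", username);
        type("password", password);
        if (rememberMe) { click("remember me"); }
        click("log in");
    }

    // Flow 2 - register
    public void register(String fullName, String email, String password) {
        type("full name", fullName);
        type("email", email);
        type("password", password);
        click("sign up for Twitter");
    }

    // Flow 3 - forgot password
    public void forgotPassword() {
        click("forgot password?");
    }

    public static void main(String[] args) {
        LoginPageFlows page = new LoginPageFlows();
        page.login("testsheep", "secret", false);
        System.out.println(page.actions);
    }
}
```

Each flow is purely "actions"; there's deliberately no assertion anywhere in the page object.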

And finally...

A JUnit method can string together sequences of these page object flows, to create a series of tests.

Let's look at login: we can call that login flow in multiple tests.  We can leave some fields blank, we can give it correct data, we can give it false data.  Then we should use an assertion within our JUnit test to find out if the system behaves correctly,

  • If we give the correct data, we should find the words "Home", "Moments", "Notifications" on the page.
  • If we give a field as blank, we should get an error message
  • If we give wrong details, we should get an error message
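Here's a sketch of that test layer.  To keep it self-contained and runnable, plain asserts and a main() stand in for JUnit's @Test and Assert methods, and the page object is a tiny stub that "shows an error" when a field is left blank - all the names are hypothetical:

```java
// Sketch of the test layer calling page-object flows and asserting.
public class LoginTests {
    // Minimal stand-in for the real page object
    static class LoginPage {
        private boolean errorShown = false;
        public void login(String username, String password) {
            if (username.isEmpty() || password.isEmpty()) { errorShown = true; }
        }
        public boolean isErrorBoxDisplayed() { return errorShown; }
    }

    public static void main(String[] args) {
        // Test: a blank password should show the error box
        LoginPage page = new LoginPage();
        page.login("testsheep", "");
        if (!page.isErrorBoxDisplayed()) { throw new AssertionError("expected an error box"); }

        // Test: complete details should not
        LoginPage page2 = new LoginPage();
        page2.login("testsheep", "correct-password");
        if (page2.isErrorBoxDisplayed()) { throw new AssertionError("no error box expected"); }

        System.out.println("both checks passed");
    }
}
```

Note how the decision about pass/fail lives entirely in the test layer - the page object just performs the flow and reports its state.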

This diagram should explain the relationship (and I'm so sorry, the colour scheme seemed a good idea at the time) ...

The key thing is that this approach follows two core principles of software development: using methods, which we discussed yesterday, and encapsulation - only giving access to items that another level really needs.

Further reading

There's a lot to read about, so this section contains some useful next steps ...

Tuesday, January 24, 2017

AUTOMATION 25 - Thinking about maintainability with methods

Today we're picking up again on the automation series of articles that I started last year.  You might wish to refresh yourself with what we've covered by following this link.

Technical level: ***

Previously on this blog, I’ve taken you through an introduction to Java, which has been about understanding and playing with some core features around using the language effectively.  I chose Java because it’s popular, and my team uses it, but the concepts coming out of it - methods, data hiding and objects - are common to many languages.

Meanwhile, on the automation series, we took a look at several technologies, thinking about how they worked best … or not so well for checking.

In doing so, we’ve covered a lot of material, over 40 articles so far!  And here’s where it starts to pay off, as we bring the two together for the remainder of the automation series!

I’ve held a whole load of interviews with people around Wellington to talk about their automation, where people are now, and what “came before this”.  A common story is that initially they wanted testers to write automation scripts, but their testers weren’t very good at coding.  So they’ve ended up picking tools like Selenium IDE or Coded UI.

These kinds of record and playback tools can have their advantages if used well, but they’re rather clunky.  Typically they use a very simplistic language, which doesn’t allow for loops, ifs, methods or any other features of a programming language.

So you end up with long scripts, with no real brains to them – you don’t benefit from any kind of code reuse, so everything’s written out long hand.

So you write out 100 scripts this way, and the first few steps of each one are to log in.  Then your project changes … there’s a decision to scrap the current login page, and use Facebook to provide your login service.  The developers say this isn't too big a change – but it won’t be quick for your automation.  You will need to find a change that works, and then copy and paste it into 100 test scripts.

I like to say that in such scenarios we've taken on (without realising it) a kind of test automation debt, one that, because of the limitations of our tool, we can never pay off.  Some of this will feel a bit deja vu, because we talked about it here under our denial in testing series.

This is where the power of computer languages comes into its own, and why Selenium WebDriver (over IDE) has really flourished.  As we've covered, Selenium WebDriver is driven by a fully structured language like Java.

Using WebDriver and a language like Java, you can define your steps to login as a method, and have all your 100 test scenarios call that method.

Now when the login changes, you change the login method, and the change ripples down to all the tests that use it.  This is the heart of building maintainable code, using the features of your programming language to reduce your overhead.

Don’t define something twice, when you can extract it as a method and use it over multiple tests.  We looked at how methods help us to avoid tangled code during our Java series here.
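As a minimal sketch of that idea, here the login steps are defined once and reused - an action log stands in for real WebDriver calls so the example is self-contained, and all the names are my own:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: extract the shared login steps into one method.
public class LoginHelper {
    static List<String> steps = new ArrayList<>();

    // Define login once ...
    static void login(String user, String pass) {
        steps.add("open login page");
        steps.add("enter username " + user);
        steps.add("enter password");
        steps.add("click log in");
    }

    public static void main(String[] args) {
        // ... and every test calls it.  When login changes (say, to a
        // Facebook login), only the login() method needs updating, and
        // the change ripples down to every test.
        login("user1", "pass1");   // test 1 starts here
        login("user2", "pass2");   // test 2 starts here
        System.out.println("total steps: " + steps.size());
    }
}
```

Two tests, one login definition - that's the whole maintainability argument in miniature.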

Next time we'll consider some design patterns which can be used with tools like Selenium WebDriver.