Saturday, October 27, 2012

Experiences in automation ... WeTestWorkshop




I have never lived anywhere quite like Wellington. The thing that constantly amazes me about this place is the sense of community amongst technical folk, aided by the various meetups which are organised by people who are passionate about their respective crafts.

I'm already a regular face at the AgileWelly events. I was thus noticeably pleased when, following KWST2, several testers decided we needed a more regular workshop, and thus the WeTest workshop was born.

This Thursday was the first event, sponsored by the consultancy I used to work for, Assurity. Katrina Edgar (the organiser behind the whole WeTest meetup) led with an experience report on the subject of test automation. It was a potentially tough crowd – about half the room were ex-developers turned testers, and 75% had experienced test automation in the last two years.

Katrina, like myself, is an ex-developer turned test automation specialist. She'd worked on three major projects so far in her career ...

On Project A, she was allowed to branch out into automation on a typically very manual task, and given enough free rein to choose whether to automate or not. She used it to automate the very onerous job of setting up multiple configurations, which removed a lot of monotony from her work. The work was relatively low cost, and reaped high benefit. That said, she felt just left to get on with it, and her automation was expressly that, HERS. It was never reviewed, and no-one else ever used it.

Project B was more advanced. She came onboard and there was an automation framework already in place – a Jenkins machine which provided continuous integration tests using Selenium scripts. She felt that whereas Project A was about “what are you testing” with free rein on “how are you testing”, life here was quite the reverse. Everything was about “how can we test on Jenkins”.

The system had a Concordion front end to show whether tests had passed or not, and it was considered that if the tests passed, it was “all good”. There were no follow-on manual tests, because it was believed “well our Selenium scripts test it all, don't they?”.

The problem was that the testing was highly technical; to understand the script content, you had to be able to read Java. This meant not enough people ever read through the scripts and understood what they did. Testers would modify and create new scripts, but no-one ever reviewed them. So on the whole, no-one could be sure whether these tests actually covered the core values of the project.

Project C echoed a lot of Project B. It was a similar system where everything had been done by automation. But it was a much older, legacy system, and all the original staff, and much of the expertise, had moved on.

Thus the scripts were flaky, and needed a lot of maintenance by people who didn't really understand them all. A lot of time was spent fixing them, but no-one knew everything they did. But they'd always seemed to work before, so no-one wanted to tamper with them too much either.

Her experience report finished, the discussions around the room began. And this is where a peer conference drastically differs from presentation-based events: it's much more interactive, with many joining in, asking questions, sharing tales. Whereas from a presentation you walk out with one person's experience and ideas, at the end of a peer conference you've had those ideas pooled together by everyone in the room.

Thus by the end of the 2 hours, we'd investigated and reached consensus in a way which was surprising to both myself and Katrina. In fact no-one could have predicted it – which is what can make these things so exciting. These were some of our take-homes by the end of the night …

Know why you are automating

Automation should almost always be about addressing a pain point you would hit doing something manually. In particular, it should use some strength of the computer against an area where a human being is much weaker and slower.

Here are some areas in which computers excel over humans, especially in a “testing” capacity (I will explain the use of quote marks later),
  • they are much faster
  • they can do the same task over and over again with no variance
  • they always do exactly what they're told


On the flip side, this is what an experienced human being can do which computers can't,
  • they don't need a script to start using the software
  • they use software in the same way as an end user (if a human is going to use your end-product, you need a human opinion on it during testing)
  • they can investigate as they test
  • they notice things in software even if they're not on the checklist


To make efficient use of automation (and thus get a return on investment for the time you spend automating), you need to be addressing a pain point in your testing, and you need to be doing something in your automation that computers do well (from the first list) rather than something that humans do well. It also needs to be something you're likely to do again and again - so once scripted, it'll save you time every time it's run.

If you're Agile, and 3 days of every sprint are taken up with your testers running repetitious regression tests on a mathematical function, that's the kind of pain point you can and should automate, to free up some of those 3 days of testing effort.
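To make that concrete, here's the sort of thing I mean – a hypothetical sketch in Python (the compound-interest function and its expected values are invented for illustration), where a table of cases a tester once re-ran by hand every sprint becomes a script anyone can run in seconds:

```python
# Hypothetical function under test - a stand-in for whatever mathematical
# function your testers are manually re-checking every sprint.
def compound_interest(principal, rate, years):
    return round(principal * (1 + rate) ** years, 2)

# Each row is one case a tester previously worked through by hand:
# (principal, rate, years, expected result)
CASES = [
    (1000.00, 0.05, 1, 1050.00),
    (1000.00, 0.05, 2, 1102.50),
    (500.00,  0.10, 3, 665.50),
]

def run_regression():
    """Run every case; return the ones that failed (empty list = all passed)."""
    failures = []
    for principal, rate, years, expected in CASES:
        actual = compound_interest(principal, rate, years)
        if actual != expected:
            failures.append((principal, rate, years, expected, actual))
    return failures

if __name__ == "__main__":
    failed = run_regression()
    print("PASS" if not failed else f"FAIL: {failed}")
```

Once a table like this exists, adding a new regression case is a one-line change, and the whole batch runs in milliseconds rather than days.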

Know what you should be automating

When test automation was first introduced in the 1990s there was a belief that many test departments should have suites of 100% automation. Experiences of the last decade have really challenged that idea.

Test automation has a place, but it's not the alpha and omega of testing. In fact many, like Michael Bolton, believe such tools should be called automated checkers rather than automated testers (hence the quotation marks earlier).

The reason is that an automated script can only check for what it's scripted to check for. It will never tell you if a screen flashes pink and yellow unless you tell it to check for that. It will never notice the kinds of things that make a human tester go “well, is it supposed to be like that?” – where something is not necessarily against a requirement, but not quite right either.
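A tiny, hypothetical sketch in Python of what I mean – the login function and its debug banner are invented, but the pattern is real: the check passes because it only looks at what it was told to look at:

```python
# Hypothetical login function. Imagine a build where login itself works,
# but the page also renders a garish debug banner no-one intended to ship.
def login(username, password):
    return {
        "status": "success",
        "banner": "DEBUG MODE - pink and yellow flashing",  # the unnoticed defect
    }

def automated_check():
    """The script checks exactly what it was scripted to check: the status."""
    response = login("alice", "secret")
    return response["status"] == "success"

# The check passes, the build is declared "all good", and the flashing
# banner ships - because no-one scripted a check for it.
assert automated_check()
```

A human tester sitting in front of that screen would spot the banner in a heartbeat.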

The cyborg tester

I've heard the concept of the cyborg tester before, and this is what started to come out from people's experience with automation. I believe I've heard it from people like James Bach and Ben Simo on Twitter – the idea that testing shouldn't be done entirely by humans, nor entirely by machines.

The cyborg tester is a fusion of both man and machine, using both blended together to produce superior testing.

Automated checks are fast, repeatable, and if done right, anyone can push “play”. But they miss a lot of defects a tester would find. They are best used essentially for unit testing between building a piece of code and giving it to a human tester.

We've all had scenarios where developers deliver new builds daily – when asked if it passed testing, you are greeted with “well, it built” (which does not mean it passed any kind of test). The test team start to use it, and there are major issues, with elementary functionality failing. That means anywhere from half a day to a full day of testing is lost, because we have a bad build and, in some systems, no capacity to roll back to a previously working build.

How much better then to include those kinds of smoke checks as part of the build process, so that if the software doesn't pass those checks, it's not deployed? Such a policy follows the “test early” philosophy, and means manual testers are protected from bad builds so fundamentally flawed they'd be forced to down tools until addressed. [A working old build allows more investigation than a new, broken one]

Such a system is one of synergy, allowing testers to continue investigating on a previously stable build until a useful new build with basic core functionality can be delivered.
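A minimal sketch of such a smoke gate, assuming a CI tool that treats a non-zero exit code as a failed build (the individual checks here are hypothetical stand-ins for real ones):

```python
import sys

# Hypothetical smoke checks - in real life these might launch the app,
# fetch the login page, or run one basic end-to-end calculation.
def check_app_starts():
    return True

def check_login_page_loads():
    return True

def check_core_calculation():
    return 2 + 2 == 4

SMOKE_CHECKS = [check_app_starts, check_login_page_loads, check_core_calculation]

def run_smoke_gate():
    """Return the names of any failed checks; deploy only if the list is empty."""
    return [check.__name__ for check in SMOKE_CHECKS if not check()]

if __name__ == "__main__":
    failed = run_smoke_gate()
    if failed:
        print(f"Build NOT deployed - smoke checks failed: {failed}")
        sys.exit(1)  # non-zero exit stops the pipeline before deployment
    print("Smoke checks passed - build handed over to testers")
```

The testers never see the broken build; the failing check names go straight back to the developers instead.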

Automation TLC

As alluded to by Katrina, and in line with comments I've had from Janet Gregory, the automation you are doing needs to be clear and visible to the whole project. Everyone should be encouraged to look at what you are doing, review, and give feedback, especially as to whether or not it addresses business and technical needs enough.

How can you be sure your automation really addresses the core business values of your end-product? You need that feedback to target the important stuff, and to cut away anything which doesn't add value (otherwise you waste time running it and money maintaining it).

But more than that, many places will automate phase one of a project and, like Katrina's Projects B and C, will say “we're done”. Automation isn't a “we're done” thing. Automation is an ongoing commitment.

Every time you make changes to your code base, you need someone looking at your automation scripts and asking “does anything here need to change?”. That's how you keep your automation working and relevant. If you develop for a year, and only then start to use the scripts again, you might have a nasty shock (much like Project C) where nothing seems to work any more. You might even be tempted to bin it all and start again. At this point, the automation which was there to remove your pain in testing actually becomes your point of pain!

But more than just making sure you have resources to maintain scripts, you have to ensure your scripts are maintainable. In software, good practices are to comment code to say what each piece's intent is, to peer review it, and even to have coding standards covering things you should avoid (forever loops, anyone?). Being an ex-developer myself, these are things I encourage in any test automation project. Going around the WeTest workshop, it became clear I was not alone.
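As one example of the kind of standard I mean, here's a hedged sketch in Python of polling with an explicit deadline instead of a forever loop – a bare `while True:` that never exits will hang an entire suite, whereas a deadline turns the hang into a diagnosable failure:

```python
import time

def wait_for(condition, timeout_seconds=5.0, poll_interval=0.05):
    """Poll `condition` until it's true, or fail loudly at the deadline.

    The deadline is the whole point: if the application never reaches the
    expected state, the run fails with a message instead of spinning forever.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout_seconds}s")
```

A script would then write `wait_for(lambda: page_has_loaded())` (with `page_has_loaded` being whatever readiness check your script needs) rather than looping forever; if the page never loads, the run fails overnight with a message rather than hanging.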

When can we get time to automate?

This was the final question of the night I was involved in (alas I had to leave for a train at this point).

But the comment would be one many are familiar with: “we're going flat out with our manual testing, and we're having trouble creating our base of automation scripts”.

It's tempting to think it's possible to go flat out on a project and also be able to do process improvement as you go along. Sure you can make small improvements. But to achieve significant benefits you need to allocate effort, because you need to concentrate on that task, not run it in spare time (for more on this subject read this article).

If you're finding you have too much testing to do, and it's harder and harder to meet deadlines, you need to go back to our first point: look for the areas of pain that are taking time, and see if automation will help. You might have to say to your Project Manager, “look, we need to pause for a bit to get some breathing room here” and perhaps extend timelines or investigate other options – it's possible development needs to pause to do some cleanup of its own. But you don't need forever; you just need enough automation to ease your pain points, and then enough resource to keep it maintained when needed.

Overview

A fantastic first meeting, and I'm looking forward to future ones – thanks to all those who turned up and made this such a memorable workshop! I certainly came away with a lot to think about, and have enjoyed revisiting it here ...

The following photos are taken from the WeTest site ...



Saturday, October 13, 2012

No escaping me in October ...



There's no escaping me this October. After a few busy months, I have articles in no less than three testing magazines …

The first is an article in Tea-Time With Testers about a feeling we'll all come across at some time: how do you deal with accusations you're not being a “proper tester” because “testers on my last project ...”?

Back at KWST2 we were talking about ethics, and I gave an experience report of growing up as a teenager, seeing one of the difficult ethical choices my father had to make, and how that affected me.

There's also a look at how, at a previous company, we learned to scale naval products for the fishing industry by developing the persona of our end-user.

NZ Tester is a brand new magazine which Geoff Horne is putting together, and even if you're not from New Zealand I urge you to give this magazine your support.


Enjoy!

Tuesday, October 9, 2012

The Tester's Video Library ...



My son is a bit of a YouTube addict, and to him and his generation, this is something they turn to when they need to learn about an area and learn fast. To me this can seem quite lazy and almost a cheat's way of learning, but as his depth of knowledge of Second World War history has shown, it can be very effective.

Yes - time to get with the 21st Century (so my son tells me).  Learning is no longer just about going to the library and "reading a book".  Today we have blogs, and through YouTube we have on-tap tuition when we need it.

Over the last couple of weeks I have been following his lead and trawling the internet to watch talks about testing – and I too have found it very educational. Some videos have touched upon pain points I'm all too aware of; others have really challenged me to raise the bar in my testing and work. And so I'd like to share them here.

I'd like to thank my fellow Twitter testing wingman Kenneth Aarseth for providing some of these links.  If you know any must-watch videos, please add them to the comments!


I first watched this one in 2010 when it was recommended on the Yammer feed of the test consultancy I worked for. It had me hooked, and it's probably fair to say it inspired this blog. It introduced the idea (which I've previously discussed here) that any learning helps you to diversify and grow as a tester. Marlena Compton pointed me to her blog entry on the subject, which she calls interdisciplinary studies – a good read.


That woman again. This video was filled with good stuff, a lot of which I of course knew from reading Janet and Lisa's book on Agile Testing. But the one thing which really stuck with and challenged me in this talk was “am I doing enough to make my testing visible to the whole project?”. Challenging. Am I making my testing visible? Yes, I know I am. Are there ways to make it more visible to the people who matter? Erm … I think I need to reflect on that. There are always ways to do things better, aren't there?


I loved this video; although many testers may not learn huge amounts from it compared to the others, it's still worth experiencing. Uncle Bob is an amazing storyteller, and takes us through the development of software and computers during his career. As an ex-developer in C, I was riveted.

The main thing you will take home from this is how computers are changing constantly – they have evolved during Bob's lifetime, we'll see the same. Our skills can rapidly become outdated, so continuous learning throughout our career is something we need to embrace.



Michael challenges us on,
  • What do we mean by regression testing?
  • Can we benefit from automated checking tools?
  • Why can human testing investigate and discover issues checking tools will miss?


What I really came away with from this was the discussion about automated checking vs testing.  Using automation, we can only check for the problems we could imagine when we scripted. All kinds of issues can quite easily slip through the net because you didn't plan your script to adapt and cover them (because you couldn't imagine them happening). This is where a sapient human tester can adapt and investigate, where an automated script either grinds to a halt or, even worse, carries on and the issue goes unnoticed.


I saved the best until last! Simply because it touched on so many points which are relevant to me this year. I recognised in this talk things I'm doing at the moment, but need to follow up on and get better at.

Key points in Johanna's talk were,
  • Testing is about gathering information on the product not ensuring quality
  • Manage your communications about testing via a testing dashboard
  • Learn to not spread yourself thin and to say “no” when needed
  • As a test manager, organise your testing portfolio. Make it transparent where you have no resources.
  • Don't move testers around between projects; you'll lose their expertise in areas, which is what makes good testers.

Some more Johanna, talking this time about managing your time and requests to avoid falling into the pitfall of multi-tasking.  Managing expectations - knowing when to say yes and when to say no, how to get people understanding your pressures and priorities.

Johanna Rothman: Lessons learned in project management

This is a must-watch, if only for the piece at about 10 minutes in: "all our people were good ... except testers, they let everyone down".

Sunday, October 7, 2012

The science of software ...


Back in 2002, I had my own form of buccaneering.  As a developer on a UNIX aircraft project, I'd been in the business for over 5 years, and had noticed a few parallels between some fundamental laws of physics and software engineering.

I'd written these up, and had them pinned on my desk as "The Talks Physical Laws Of Software Engineering".  Unfortunately I've lost all copies of them, so I'm recreating them from memory ... as you can see they're not comprehensive, and were meant more for fun, and from a development rather than testing perspective.

But they're an example of the concept of buccaneering: taking a parallel idea from (in this case) physics, and bringing it into the software engineering context.  Of course none are particularly ground-breaking (although we were caught out by rule 6 at the time) ...

1)  Problems in code (defects) will remain, unless work is done on them [Newton's First Law Of Motion]  Really, defects just don't go away ...

2)  As you are doing work to remove defects, you are also doing work to potentially introduce new defects [Newton's Third Law of Motion]

3) Software that is developed without any monitoring will tend towards chaos [Second Law Of Thermodynamics]  You are just going to hope that everything is good?  Let me know how that works out for you ...

4) Small deviations applied over long enough distances cause massive errors [Trigonometry]  So try and find any defects early.

5) Any system will have levels of uncertainty [Heisenberg's Law of Uncertainty]  Of course you want to minimise uncertainty (which is part of what testing's about), but you cannot remove it altogether!  It will always be there in unknown finite levels.

6) The more closely you observe an event, the more likely you are to be impeding it [Heisenberg's Law of Uncertainty]  Coined as we learned that trying to rerun defects under a debugger could prevent the problems re-occurring - mainly because many debuggers of the time would initialise data stacks which would otherwise contain random data.
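Rule 4 can be put into numbers. A back-of-envelope sketch (the figures are illustrative, not from any real project): an aircraft flying with a heading error of just one degree drifts further off course the further it flies – just as a small defect left uncorrected early in a project compounds over time:

```python
import math

def cross_track_error(distance_km, heading_error_degrees):
    """Approximate distance off course after travelling distance_km."""
    return distance_km * math.sin(math.radians(heading_error_degrees))

# One degree off heading, like one small defect early in a project:
print(round(cross_track_error(1, 1.0), 3))    # barely noticeable after 1 km
print(round(cross_track_error(500, 1.0), 1))  # roughly 8.7 km adrift after 500 km
```

Hence the advice: try to find any defects early, while the deviation is still small.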


Maybe you've read these, and come up with a couple more examples in your head?  Congratulations!  You're thinking outside the box ...

Buccaneering knowledge ... or International Think Like A Pirate Day ...


So when I was talking about my man cave in the last entry, I was talking about a room with things which fascinate me, and which I'm passionate about.  Key amongst these is a bookcase which is filled with a variety of books, for which software testing is perhaps the least well represented.

Software testing as it relates to the technology of computers has only been around as long as computers themselves, a mere 60 years.  However software testing is about more than understanding computers - that's a big part of it, yes, but it's not the only part.  It's about human fallibility, it's about how people are psychologically wired, how people behave in groups, what people say vs what they mean, human leadership, how you take theories and prove/disprove them, how ideas evolve ... it goes on.  And all of this has been going on for so much longer than computers.

As long as humanity has been around it's been building ... whenever it's been building there have been projects ... whenever there have been projects there have been mistakes.  Who is to know if the Tower of Babel failed because of God or because simply the architects weren't speaking enough to the on-site project managers?

So people have been analysing and writing about many of the themes common to software testing for hundreds of years.  They've analysed successful military campaigns and leaders, they've analysed the anatomy of a disaster.

James Bach, when I met him this year, talked about the idea of "buccaneering knowledge": going out looking for ideas and practices in other fields and, if they have value, bringing them back and applying them to expand and pioneer the field of software testing.  I love the word "buccaneering" as it's so apt - you're going into another field and taking away their most valuable ideas ... at gunpoint.  But the point is you don't take everything, just the stuff you can see has value.  And having the eye and the mind to challenge what you see before you, and separate the good from the bad, is how you become a truly innovative tester.  Because let's face it, testing comes with enough dirty laundry and baggage of its own, without forcibly hoisting away another discipline's!

I've heard many other testers talk along similar lines.  Early in my blogging I was lucky enough to come across a video about Learning For Agile Testers by Janet Gregory which really inspired me (click to watch).

This is why it's important to be surrounded by things which interest and inspire me beyond software testing, because I'm forever pooling and channelling this into my writing, looking for the parallels and the parables, and how to apply them to testing.

Much like Janet says in the video, the path for your own learning is not to follow someone else's passions; it's about following your own, and finding a way to bring it back and communicate your learnings from that area to the wider testing community.

Here are some other examples of testers whose passions feed the wider community,

  • Lisa Crispin, co-author of Agile Testing has a great compassion towards animals, most famously donkeys.  Speaking with her, the ideas of teamwork and compassion are important ones - and she's generally a motivator more by carrot than stick.
  • Bernice Ruhland, a regular contributor to Testing Circus really does feed the passions of the testing community, as she's passionate about good cooking and baking in her blog.  She's probably more of a motivator by carrot cake than stick.  In fact going deeper, we've had some conversations about following-a-recipe vs improvising-from-available-ingredients, which go deeper than cooking.
  • David Greenlees, who is deeply passionate about martial arts, and seems to have learned to break code with his BARE HANDS!
If you're reading this now, let me challenge you with this.  What are your passions, and how can they benefit the testing community?  Think on it ...

Welcome to the testing man cave ...


I've been lucky that we have a spare room in our house, which can be dedicated just to "hobbies".  My wife calls it my "man cave", and it's the place I've done most of my writing for my book the Software Minefield, and where I remote work from when I need to work from home.

As you'll come to appreciate, it's my thinking space, and I feel it's important every tester has a space to sit in and read, learn and engage, to develop not just their craft but themselves.  When we lived in a one-bedroom flat, I still had a space in the bedroom with an armchair where I could do this (it doesn't have to be a room) whilst my wife watched soap operas some evenings.  [I hate soap operas]


Art is subjective ...

Part of the charm of this room is that it is a hobby room, and thus filled with the things we've collected and activities we've tried.  As I've mentioned before, my friend Violet was an artist, and tried to encourage the artist in me, despite me being more your classic scientist/engineer.  But I gave it a try, and consequently so did my wife (she is really good).

I started out really reluctantly.  I was awful at art in secondary school, but as a young child I loved to draw.  It's hard to start an activity when you're in your late 30s and sure you're going to be awful.  So I got a book on drawing, started with drawing wine glasses, got comfortable doing that, and piece by piece became a bit more adventurous.

Being an engineer I draw great pictures of any machine, but people often end up looking distinctly odd.  But I've got better.

And perhaps that's the point.  I started off, I was awful.  I scaled down my ambitions, started simply and developed a bit at a time, rather than trying to do the Mona Lisa on day one.

None of us are naturals.  Not even in software testing.  But we get good by trying things out, working out the basics, what works, what doesn't.  Some people will then say we "feedback to optimise the process", but I call it just plain learning.

Books ...


Usually my bookshelf is a bit more organised into themes-by-row, but this shows just how eclectic my bookshelf is.  There are books on software engineering here (more books on programming than testing I'll admit), books on physics and astronomy (my old textbooks), history (English Civil War and Oliver Cromwell mainly), psychology, religion (I have a copy of the Bible next to a translation of the Quran).  Oh and some comic books (I never grew up).

Some of these books are for reading, some for reference.  I've always liked to have my textbooks nearby to look through.  One of the most used books in my career has been Spherical Astronomy, because it's got several equations for conversions of latitude and longitude that I've needed to use again and again.
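To give a flavour of the kind of conversion I mean (this is a generic sketch in Python, not the book's actual formulae): converting latitude and longitude to a Cartesian unit vector on a sphere, and back again:

```python
import math

def latlong_to_cartesian(lat_deg, long_deg):
    """Latitude/longitude (degrees) to an (x, y, z) unit vector on a sphere."""
    lat, lon = math.radians(lat_deg), math.radians(long_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def cartesian_to_latlong(x, y, z):
    """The inverse conversion, back to degrees."""
    return (math.degrees(math.asin(z)),
            math.degrees(math.atan2(y, x)))

# Round-tripping Wellington's position recovers the original coordinates.
lat, lon = cartesian_to_latlong(*latlong_to_cartesian(-41.29, 174.78))
```

Working in Cartesian vectors avoids a lot of the spherical-trigonometry special cases near the poles, which is why conversions like this keep getting re-used.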

Probably my most read book is Watchmen, a graphic novel from the 80s which I first read when I was 16.  I read it every few years (hence two copies), and always notice something new (even a master tester doesn't notice everything first time around).  What I love about it is as I grow older and my world view changes (and indeed the roles I play change), I find myself identifying with different characters.

Please do not attempt Stairway To Heaven ...


When I was 16 and finished my school exams, I spent the whole summer trying to learn the guitar.  It was going to make me cool and get me girls.

There was a flaw though.  I never got any good.  Even basic chords I found difficult and painful to make.  I kept trying, but it never got easier.

I have artwork in this room because I had a go and got better.  Not so with the guitar.  I ended up learning to play the pennywhistle instead, which isn't as cool, and did not get me girls.

These days I keep it around to remind myself you can't do everything.  Sometimes I have a go again to see if I've magically got better ... and I haven't.

Computers ...

There are a lot of computers in this room, but only one was particularly expensive.  The machine on the right was the games machine I put a lot of hard-earned pennies into a few years back.  I deliberately bought a Vista machine in 2007, having heard how troubled they were.  The tester in me went "ker-ching", thinking understanding a buggy platform would help me get even more testing work.

Sometimes I think Vista gets a bad rap.  Most of the early pain came from suppliers refusing to provide drivers for the Vista platform, only for XP - hence a lot of the blame was laid at Vista's door.  In trying to fix all these problems I'd download more and more software onto the machine, until it became a bit bloated.  Thankfully I eventually stripped it down again and it worked much better.  That said, I do feel Microsoft never got Vista working quite as it should (Windows 7 seems a much better experience).  My son uses this machine to play those huge strategy games like Rome: Total War and Dawn Of War.

The machine on the left is one of the many machines I tinker with.  Back in 2003 we had a 3 month period where there was very little work on, so my role was "downgraded" and I was made to work in IT support for a while.  It initially felt humiliating, but I got with the spirit of it.

You see, until that point I was a programmer and occasional tester of software.  But really I knew very little about the machines themselves.  During my time in IT support I learned to build the machines, swap components, apply patches, take images, build and install a machine from scratch, monitor IP addresses, manage servers, modify accounts.

So that initial humiliation turned into "wow I'm learning stuff", and made me braver with my computer at home, and a much better tester.  It's amazing the opportunities life will give you, if you embrace them for what they have the potential to be!

Consequently I never throw computers away if I can avoid it (you will learn about Frankenstein in a moment).  But also, people tend to give me old machines to see if I can make use of them.

Something I love to do is to "relife" old machines by giving them a suitable Linux platform.  Linux is much lighter than Windows, so a machine struggling under Windows will seem much quicker with a suitably light Linux operating system.  Linux also allows me to keep my hand in with Unix commands (I'm an ex-Unix developer after all), although modern Linux is very GUI-driven now, and so surprisingly intuitive that you rarely need the command line.

In all, this room holds 3 desktops and 4 laptops - 2 of those laptops are non-functional (I'm working on them).  Most of the machines are over 10 years old.  One of the desktops (the one on the left in the picture) needs to go into another room once I'm finished with it ...

If I can get those 2 laptops working, they'll be given to some friends of ours whose kids could do with a dedicated machine for writing homework.

Deskspace ...


This is where I do some writing and my office work from home.  Note the R2-D2 USB hub (again with the never-grew-up).  That seat is surprisingly comfortable, though I suppose HR would say I should have an office swivel chair ...

The view ...


It's pretty, isn't it?  But it reminds me I need to do the lawn (draws curtains back again).

Frankenstein machine ...

I have either had this machine for a year, or else for 6 years - I'm never quite sure.  Last year one of my computers died a death, but I managed to build a new one from some of its parts together with another machine (hence the name Frankenstein).

I thus have a huge emotional attachment to this machine, as it's essentially my baby.  This is the one I write most of my pieces on.  It's an Ubuntu machine connected to our old TV (which can dual purpose as a monitor).  It also serves as a Samba server to allow file sharing between the other machines in the house.

I use a wireless keyboard and write either from the couch or (like right now) sitting on the floor.



Toys and stuff ...

My wife's rule is that the mancave is the one room of the house I'm allowed to be a bit of a geek in ...

Hence there are many of the sci-fi toys I had as a kid all around the place.  Okay, some I got after being a kid, but I was still a kid at heart.  I obviously loved Star Wars, and have a thing for R2-D2.  But there's more than just toys here.  There are ornaments people have given us, replica swords (though not on display), a bugle (no, not even I know why), binoculars, a telescope, board games.


There is a clock here that my late father-in-law got and repaired for us.  It's perhaps poignant that it's stopped working.  He had learned to repair clocks like I'd learned to repair computers, and just loved to tinker with them, and would tell me all about them.  Like me and machines it was more than just something to do, it was a passion.  He loved to tinker and one of his proudest achievements was rewiring a cheap kids toy to use as a doorbell ... his grandkids loved it when they pressed his doorbell, and it sounded like the Police were coming.

Wrapping up the tour ...

This room probably reminds me of my head.  There's a lot going on inside it.  It's a bit cluttered, and there's some hoarding going on for sure.

But there's a lot that makes me smile.  I hope this isn't just materialism; the things inside are nothing too flash, but they all fire my memories.  It reminds me of my childhood, of things I can achieve, of things I can't achieve, of things I've read, of things I need to learn, of people who've inspired me.

Is it any wonder I do my best work here?

Tuesday, October 2, 2012

Ethics 4 - KWST2 in pictures

In writing up KWST2, I enjoyed looking through the notes I made, which I've decided to put here.

I've a decidedly visual way of looking at things, like many who like to use mindmaps, so thought this would be a great way to finish this series...

James Bach was talking a lot about "my community", and I was curious how a phrase like "my community" - where MY = an individual and COMMUNITY = a group - fits together ... probably the pedantic tester in me ...


James then explained it to me: a group of individuals have certain opinions.  In events like peer conferences they talk about those opinions and try to find group consensus, and so through these debates (which can be fiery at times) the community opinion takes shape.  Someone who finds their values outside this consensus needs to question whether they belong to that community, or try to draw it more towards them through debate ...


Geoff Horne and James Bach gave their summaries of what they think the purpose of testing is: for Geoff it is "getting a product over the line", for James it is to "inform clients of the status of their product" (two good answers) ...


James talked about how the role of testers is to inform on status, not to try to make business or technical decisions for others ...


A huge discussion was held about "who does testing serve?", as answering this defines our ethical responsibility to that group - is it to management, or higher up to eventual end users?  There was no easy answer - there are elements of both.  But this makes our ethics very much a personal choice in many ways, and thus right or wrong can very much be defined by the observer ...



There was talk on how our ethics are shaped by our,
  • Role
  • Culture
  • Age
  • Experience


As we go through difficult ethical situations we have a feedback loop, and we evolve our ethics.

There are also two kinds of ethics,
  • External and imposed - the ones which are written down on paper as laws or terms and conditions
  • Internal and owned - these are the ones which come from us, our personal take and behaviour on ethical challenges, which in many ways define us.



Unfortunately these different takes, as seen above, can cause ethical splits within cultures and communities.  This is painful, but often how we evolve.  To bridge the gulf, it's important to maintain respect as much as possible.



Finally a word of warning.  Ethics are something which need to be applied as equally to ourselves as to others, otherwise we're using them to judge, and guilty of the worst kind of hypocrisy ...