Saturday, September 28, 2013

The "Four Candles" guide to customer dialogue

When I was talking this week with another tester about how comedy can be an aid in learning, I immediately thought of the sketch below.  It was voted one of the funniest sketches of all time in a British poll, but I don't believe it's travelled well outside the country.

So I bring to you, The Two Ronnies in Four Candles ...


It's a great piece of comedy, but what has this got to do with building software?  A lot more than you would believe.

At its heart, Four Candles is a comedy about miscommunication.  There are two perspectives to this sketch: either it's about a customer who comes into a hardware shop with a list of things he clearly wants, and finds himself increasingly bemused by a somewhat surly and agitated shopkeeper.  Or it's about a shopkeeper who finds himself frustrated by a customer who seems to be deliberately difficult, evading any attempt to be specific.

Both points of view have some justification.

Of course, within the world of software, we have some empathy with this scenario.  How often have we either been supplying software to a customer, or sending work outside of our organisation?  There is always the spectre of "how much information to supply", and no matter how much information you give or get, there is the potential for misinterpretation.  Which means software delivered which doesn't match someone's expectations of it.

This was one reason (in the early 2000s) I was pretty sure that outsourced work and distributed teams just could not work.  Even when everyone was located in a single office block (as my projects at the time were) and requirements were delivered in phone-book-sized monuments, I'd seen the potential for communication to fail, and for what was being delivered to be less than ideal.

In our software delivery groups, we probably have more empathy with the shopkeeper than with the customer in the scenario above.  He's doing his best to deal with his customer, but he feels the customer isn't really helping him.

After being caught out over the "four candles" really being a request for "fork handles", the shopkeeper realises he'll need to ask for more information.  So when asked about "plugs", he asks for more information ("what kind?"), and is told "rubber, bathroom".  So he finds his box of sink plugs and asks for a size, only to be told he needs to supply a "13 amp" one ... that is, an electrical one.

It goes on this way (hilariously), with the shopkeeper trying various tactics to get more information out of his customer, and thus avoid a lot of needless effort (such as going up a stepladder to get a box for the wrong thing).

In the end, the shopkeeper tries to break the cycle by asking for the list, because as it stands this is just not working, and he's becoming increasingly frustrated.  Reading through the list, he sees something, and decides at this point that it's time to walk away, getting someone else to deal with his customer.  It turns out the item that caused the shopkeeper to break the cycle was a request for "bill hooks" ("bollocks" - a UK swearword).

Getting out of customers what they really want is a tricky business.  But it's important: you want happy customers, because generally happy customers equal successful business.  You also want to save yourself wasted time when you can.  We've all been the shopkeeper who feels frustrated going back to get the box of letters from the top shelf for the "Ps".  And a customer trying to give clarification by just repeating their ambiguous original request doesn't help.

Like the shopkeeper we should try different tactics (clarification, getting the list from the customer etc) to try and break through the frustration.  But if we can't break through and make progress, as frustrating as it is, we have to be prepared to walk away and let someone else try.

Sunday, September 15, 2013

WEB103: In for the hunt ...

I've recently been going through the subject of browser compatibility testing with one of my team, and this series has been some of the notes from those sessions.  The great thing about such coaching is it forces me to take a good long look at the "whys" of things around this area, and learn something new myself!

So far we've taken a quick look at the history of web pages and how to get environments to actually test on.  Now comes the challenging part - what are we going to do?

Taking a pure black box approach, if we have 11 browser types and versions we need to test compatibility on, then that means we have to run the same testing we've got planned 11 times, once on each browser, yes?  This is where such an approach becomes messy - testing is always under the gun, and such approaches just aren't going to work.  To have a valid strategy we need to think about how websites work (at a very basic level), the kind of issues we'd expect with browser incompatibility, and go to where the risk is.

Web Basics

What follows is a really basic, "just enough" description of a web site, and how it works.


So, when you look at a page, your browser is provided information from a web server to create that page.  This information is in the form of HTML, which essentially forms a kind of "downloadable program" of content for your browser.

What exists on your browser is completely independent of the web server at this point.  In fact, right now, disconnect your PC's network cable, or turn off your wi-fi.  You'll find that this web page does not simply "vanish" if you do.  It exists on your machine, in its browser, and won't disappear until you try and do something.
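You can see that "downloadable program" of HTML for yourself from any HTTP client, not just a browser.  Here's a minimal sketch in Python (assuming the requests library is installed, and using the Twitter login URL purely as an example) that fetches a page and prints the start of the HTML the server hands over - everything after that delivery is down to the browser,

    # Minimal sketch: a web page is just HTML text handed over by the web server.
    # Assumes the 'requests' library is installed; the URL is only an example.
    import requests

    response = requests.get("https://twitter.com/login")

    print(response.status_code)   # e.g. 200 if the server answered
    print(response.text[:500])    # the first chunk of the HTML the browser would render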

To get a good browser testing strategy, it helps to know what functionality is to do with what's on your browser, and what functionality exists on the part of the web server (and its back end).  And that can actually be harder than you'd think.  As a user, most of our web experience is seamless between the two (black box).

To be effective in browser testing, we need to be testing those features which are associated with the web page's rendering in the browser.  The features which are driven from the web server and the back end we're less likely to need to test so frequently (the back end is the same back end, no matter which browser is used).

Example - the good old login screen

I'm going to refer to Twitter functionality a lot here, as it's something you can take a look at and investigate with ease.  Let's consider the login page,


Here are some typical test scenarios you might explore (sketched as a simple test table just after the list),

  • "Happy Day" - there's a Username field, a Password field, and a Sign in button.  When you enter the right username and password, you're logged in.
  • If you enter the right email and password, you're logged in.
  • If you enter the right email but the incorrect password, you're not logged in.
  • If you enter the right username but the incorrect password, you're not logged in.
  • If you enter the wrong username or email but the correct password, you're not logged in.
  • If you enter the wrong password for an account 3 times, your account might be locked.
  • If you enter the wrong password for an account 2 times, followed by the right one, you're logged in.  If you log out, enter the wrong password again, followed by the right one, you are logged in, and your account is not locked.
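Here's that table sketched in Python as a parametrised test.  This is a hypothetical sketch only - the check_login() helper just models the rules (in a real suite it would drive the login page or its API), and the account-lockout scenarios are left out because they need state,

    import pytest

    def check_login(identifier, password):
        """Hypothetical stand-in for driving the real login page or API.
        It simply models the rule: a valid identifier plus the right password
        means you're logged in; anything else means you're not."""
        valid_identifiers = {"right_username", "right_email"}
        if identifier in valid_identifiers and password == "right_password":
            return "logged in"
        return "not logged in"

    LOGIN_CASES = [
        # (identifier,       password,          expected outcome)
        ("right_username",   "right_password",  "logged in"),
        ("right_email",      "right_password",  "logged in"),
        ("right_email",      "wrong_password",  "not logged in"),
        ("right_username",   "wrong_password",  "not logged in"),
        ("wrong_username",   "right_password",  "not logged in"),
    ]

    @pytest.mark.parametrize("identifier, password, expected", LOGIN_CASES)
    def test_login_rules(identifier, password, expected):
        assert check_login(identifier, password) == expected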


Usually in a system that "logs you in", the functionality that decides (from the account and password details you've provided) whether you'll be logged in, not logged in, or have your account locked all lives in the "web server" side of the system equation (ie, not in the browser part).

So for each browser you'd probably want to see the "Happy Day" path - looking at the page, and the basic flow through.  There is also a case for wanting to see the "incorrect login" and "account locked" messages at least once.

But all the validation rules listed above you'd expect to reside server-side.  Which means you probably need to run them at least once if you can, but not for every browser - this might be a good assumption to make, write down and "make visible", to see if someone technical architect-y disagrees.  This doesn't mean it might not be worth running these tests again in different browsers anyway if you have time, but it means they look to be "low risk" for browser issues.
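To make that concrete, here's a rough sketch of running just the "Happy Day" login flow against several browsers.  It assumes Selenium WebDriver is installed (with the drivers for each browser available on the machine), and the field names are invented for illustration - you'd inspect the real page for its actual ids and names,

    # Rough sketch: run only the "Happy Day" path on every browser in the list,
    # leaving the server-side validation rules to be exercised once elsewhere.
    # Assumes Selenium WebDriver plus the relevant browser drivers are installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    BROWSERS = {
        "firefox": webdriver.Firefox,
        "chrome": webdriver.Chrome,
        "ie": webdriver.Ie,
    }

    def happy_day_login(driver):
        driver.get("https://twitter.com/login")
        # Field names below are invented for illustration only.
        driver.find_element(By.NAME, "username").send_keys("my_user")
        driver.find_element(By.NAME, "password").send_keys("my_password")
        driver.find_element(By.NAME, "sign_in").click()
        assert "login" not in driver.current_url   # crude check that we moved on

    for name, launch in BROWSERS.items():
        driver = launch()
        try:
            happy_day_login(driver)
            print(name + ": Happy Day path OK")
        finally:
            driver.quit()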

So what kind of issues should you be expecting?

Back in WEB101, I talked about a web application I originally wrote which didn't cope well in IE.

In Netscape, it looked like this,



But in IE, all the labels got stuck in a corner, like this,


In web testing, we expect that there will be something in the HTML that a browser won't handle well, and hence will interpret badly.

So for example - in the Twitter login example, we might be missing the fields for our username, password or the sign in button.  Those kinds of details are cosmetic defects, yes?  Oh ... except if these things are missing or hidden, we can't supply our username and password and select to log in (you can see how that makes an issue a lot more functional).  It makes our whole system unusable on that browser.

The kinds of issues we'd expect to go wrong with a browser are, roughly,

  • missing/hidden fields and buttons - high impact
  • script code for items such as drop down menus might not work - high impact
  • buttons out of alignment / text out of alignment - low impact


What behaviour is browser side?

We took an educated guess with the login example that everything about validation was on the web server side.

I find the following rules of thumb useful for investigating which behaviours are browser side.  First of all, as mentioned above, everything you see on a page has the potential to be browser specific (so use your eyeballs).

But secondly, for behaviour and functionality, try this - unplug your machine from the internet (physically or by turning off the wi-fi).  Anything you can do on that page that doesn't end in an error like the one below indicates functionality which is browser side rather than web server side,



Let's take a look at signing up for a Twitter account ...


I opened the page, then turned off my wi-fi.  Let's work through a few of these fields ...

Full name - we can enter anything, and it validates if it looks like a real name - all browser side functionality.


Email address - we can enter any text we like, so that's browser side.  But it goes into a circular loop trying to validate the email address (I suspect to see if it's been used) - so that behaviour is on the web server.


Create a password - we can enter text, and it coaches us on whether it thinks the password is long enough or secure.  All this seems browser based.

Choose your username - as with email, you can enter information, but it gets hung up validating.  I think this again checks for uniqueness, and the validation is at the web server end.
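As a hypothetical sketch of why that username check hangs with the network off: uniqueness can only be answered by the web server, because only the server can see the existing accounts.  The little Flask app below (the route, names and data are all invented for illustration, and assume Flask is installed) shows the kind of server-side endpoint the browser's script would be calling,

    # Hypothetical sketch of a server-side uniqueness check (Flask assumed installed).
    # Only the server can see the existing accounts, so the browser has to ask it -
    # and with the network unplugged, that call never returns, hence the "hang".
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    EXISTING_USERNAMES = {"an_existing_user", "admin"}   # stand-in for the real account store

    @app.route("/users/username_available")
    def username_available():
        candidate = request.args.get("username", "").lower()
        return jsonify(available=candidate not in EXISTING_USERNAMES)

    if __name__ == "__main__":
        app.run()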


Experimenting in this manner allows you to make educated guesses at the kind of behaviour on a page that's browser based, and hence worth revisiting for different browsers when time allows.  This gives you a feel for the areas of greatest risk in terms of browser compatibility.  It's not infallible, but it gives you a decent rule of thumb for having conversations with the rest of the team about what you think needs retesting a lot, and what needs less retesting.

Saturday, September 14, 2013

Opinion: Keith Klain and the ISTQB petition


For a while now, Keith Klain has been asking me to sign his petition to the ISTQB.

His petition asks of the ISTQB, the following,

As a public service to the people who have taken the ISTQB Foundation level exam, I wish to add my name to those appealing to their board of directors for answers to the following questions: 


1) Have there ever been issues with the ISTQB Foundation exam reliability coefficient reviewed by your exam consultants Kryterion? 

2) Have the reliability co-efficients consistently shown, since the inception of the ISTQB's certification program, that results on the certification exams accurately measure the testers' knowledge of the syllabi? 

3) Have there ever been any other issues with the validity of the exams? 

4) How often do those external reviews take place? 

5) Are the results of Kryterion's (or a third party's) independent evaluations publicly available?

I have found the question of whether or not to sign this petition has split me like no other issue of late.  People who know me well also know that I have a bit of a radical past - signing petitions and even protesting are things I've a history of.  At University I petitioned, marched, sat in, occupied.  I would like to think I still have that level of passion even in middle age.

My dilemma though is that although I'm not a huge fan of the ISTQB, I'm not going to just sign anything that's "anti-ISTQB".  My personal issues with the ISTQB, their syllabus and how they measure "good enough" are specific points.

I strongly believe that an organisation like the ISTQB, which claims to be non-profit and which also claims to be championing the testing profession, should be accountable to queries from the testing community.  It's like asking a famine aid charity "how much of every dollar I give actually makes it into food for Africa?".  You should have a right to a certain level of transparency.

However my issue was with the questions themselves.  I know the areas I find myself not really seeing eye-to-eye with the ISTQB, and these questions didn't seem to cover that ground at all.  Truth is I didn't really understand what the questions were driving at.  I couldn't sign up to a petition "just because it was anti-ISTQB", and neither do I believe should you.

When at University I petitioned or marched or occupied, I knew why the principles being put down mattered (okay, occasionally it was to impress a girl ...).  If I can't explain and champion the principles on a petition, I really can't sign it, and that's why I didn't.  As someone who is sympathetic to the Context Driven School of Testing, I'm not going to sign a petition just to "flip the bird to the system", and neither am I going to do that because of peer pressure.

So thankfully this week, I managed via Twitter to really discuss why this petition was so important to Keith, and ultimately, to me and every tester on the planet.  It wasn't just about the questions in the petition, it was about the context of the questions - something you might miss from the questionnaire itself, but which is clearer in Keith's open letter to the ISTQB.

The root cause of the petition comes from Keith (who is a Context Driven Tester) discussing ISTQB exams with Rex Black, who is a member of the board, and past president of ISTQB.  Oh and he just happens to provide a lot of training courses on the many levels of ISTQB (which some might say is a vested interest - much like for example making the Chairman of BP the Secretary of State for the Environment).

Rex's comment around the ISTQB exams was that, for the American board, professional exam consultants work with them on the exams, and the exams "though not perfect" were "constantly perfected" (whatever that means - he wouldn't elaborate).  He claimed that he could not comment beyond that due to non-disclosure agreements (which some might say is convenient).  He has declined further comment, saying pretty much that that is all we need to know.

Keith sees Rex's comments as a smokescreen - they don't really give much information or a satisfactory answer.  And remember this is from a board which champions overly detailed measurement and metrics as the way to do any task.

Keith's petition therefore is about "digging for truth".  Rex is telling us that the exams are independently analysed, but that data is only shared secretly within the board.  But they can tell us on behalf of this independent group how great they are.  This reminds me of the kind of science and statistical analysis you see used to misrepresent and mislead in TV ads, especially (shudder) infomercials.

This is not how you see scientists behave in New Scientist, or at conferences.  Galileo, Newton, Einstein showed their proofs and data to the world, and risked ridicule and backlash.  They did not go "we have amazing data to prove this, which we can't tell you, but you have to just take our word on how awesome we are".

Discussing this with Keith, he convinced me why it was important to have this data, and that the questions are about being able to peek into it (without infringing anyone's confidentiality).  The whole case for the ISTQB's validity from Rex Black (who is a high up person in the ISTQB scheme of things) rests on these exams being independently validated and (in his roundabout way of razzle-dazzle) as perfect as a multi-choice exam can get.

But people who aren't in the inner circle aren't allowed to know details, they just have to have faith.  This reeks of the kind of leadership you get in a cult.  And this is why it's in everyone's interest - both CDT testers and testers who stand by ISTQB to sign this petition.  Not to have a go at ISTQB, but to support the level of transparency that an organisation like this needs to champion with the community it claims to represent.

Based on this, I found my voice on this issue, and found myself agreeing with Keith's stance and signing his petition - but, typical me, I added a mini essay,


The choice to sign this petition is an individual one - but needs to be made for the right reasons.  I'm not going to tell you to sign it - just to think about the issues at play here - and to give you my opinion.  If you think I've got it wrong ... that's what the comments box is for!

Discussion and disagreement isn't wrong - in fact Keith and I had to disagree and question for a good while before we saw eye-to-eye.  We're not drones, and we shouldn't be expected to act as such.  That means not bowing down to Rex Black telling you "what you need to know, for your own good" or Keith telling you "sign my petition".  Your opinion counts - whether towards Keith or to Rex, make it count.

Thursday, September 12, 2013

WEB102: Let loose the browsers!!!

So you're aware some cross-browser testing is required, and marketing have come back to you with an extensive list ... they'd like you to test,

  • IE 11 (when available), 10, 9, 8, 7
  • Chrome - the last three versions
  • Firefox - the last two versions
  • Safari - just the latest version
  • Opera - just the latest version


Cool - let testing commence!  Only ... it's not that simple.  There are some tools out there which will let you compare different versions of the same browser, and whilst better than nothing, they don't give you a full feel.

The most obvious issue is that a computer generally has just a single version of a given browser installed.  If you upgrade IE 10 to 11, it removes your version of IE 10!


Things also get complicated because IE 7 (for instance) doesn't easily install on a Windows 8 machine, and needs a lot of trickery to get working.  Likewise running IE 11 on an XP machine is never going to happen.  For all these things, the situation created by hacking the system to make it work is likely to invalidate and erode some of the assumptions behind your test.

It's not surprising then that, in such an environment, the idea of doing browser testing by installing different versions of a browser then stripping them off your machine is none too popular.


What has taken off to aid browser compatibility testing are virtual machines.  A virtual machine allows you to emulate another (obviously less powerful) computer within your own.  It's run from a file which sets up everything about the machine, and means you can use your Windows 8 machine but pretend to be a Windows 7, XP or (shudder) Vista machine.  It's also heaps cheaper (though not as much fun) than having a whole suite of reference machines of different builds.  But then again, you can have dozens of them set up differently, running on one machine (though not concurrently, obviously ... as "performance may vary").  If you have Apple hardware, you can pretend to be a Windows machine (but alas not the other way around).

Each virtual machine is a file which can be shared amongst testers (providing they have similar machines), and each one can be set up with the appropriate platform and browsers you want to test on.

This means once your suite is set up, all you need to do is run up the machine you want - no continuous install/uninstall.  You just keep and share a library of the machines, and boot up the one you need.  No need for multiple physical machines.  Another tester I was talking to told me that with virtual machines, as long as you keep backups, you can try out crazy things and almost trash the machines (including editing registry settings), because you can so easily return to the original (unmodified) backup if you really get into trouble.  It means you can test not just on different browsers, but represent different platforms as well.
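As a small sketch of what that library might need to cover, here's one way you could enumerate the virtual machine images for a platform/browser matrix like the marketing list above.  The platforms, pairings and exclusions below are illustrative only,

    # Illustrative sketch of planning a library of VM images for a browser matrix.
    # The platform list and the impossible pairings are examples, not a real support matrix.
    from itertools import product

    platforms = ["Windows XP", "Windows 7", "Windows 8"]
    browsers = ["IE 7", "IE 8", "IE 9", "IE 10", "IE 11",
                "Chrome (latest 3)", "Firefox (latest 2)", "Safari", "Opera"]

    # Pairings that simply can't exist - e.g. IE 11 will never run on XP,
    # and IE 7 won't easily install on Windows 8 (as mentioned above).
    impossible = {("Windows XP", "IE 11"), ("Windows 8", "IE 7")}

    for platform, browser in product(platforms, browsers):
        if (platform, browser) in impossible:
            continue
        print("VM image needed: {0} + {1}".format(platform, browser))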

Some of the most popular virtual machine tools at the moment belong to VMWare.  Check them out.

Obviously virtual machines aren't easy to set up, and need some tech buy-in and support.  However they're an increasingly popular method of having different browsers and machines "to hand" when testing.

WEB101: Browsers - how in the Wild Wacky World did we get here?

Once upon a time, on the internet ...

It was 1991, and I was having a one-to-one tutorial with my cosmology lecturer, Professor Fred Combley, regarding a presentation on the Big Bang I was due to do in a few days.  Behind him was a VAX terminal that he'd left on, and I could just about make out the word CERN on the screen from where I sat.

If you had asked me back then what on that screen would revolutionise the world, I would have guessed it was some results from a collision experiment that would change our understanding of the Universe.  And I'd be wrong.


What was revolutionary was how the text had got onto the screen.  Professor Combley was one of a team of nuclear physicists from the University of Sheffield which included Doctor Susan Cartwright (my other tutor, who I'm still in contact with).  They were part of a huge network of physicists working on the Large Electron-Positron Collider in Geneva.  However, being so geographically spread out, they needed something to help them communicate and share information more easily.

Tim Berners-Lee, a computer scientist and physicist who was working for CERN, thought he had a solution.  It was to provide a series of interlinked hypertext documents defined in a language called HTML 1.0.  These pages could be accessed via the internet using a program called a browser, which would render them onscreen.  In his quest to make sharing information easier, he invented the very medium you're reading this on, the world wide web.

The world wide web takes off ...

The usefulness of the world wide web soon became apparent, even outside of the world of physics.  It was a phenomenon which soon snowballed over the next 20 plus years as more and more people got involved on the world wide web.

As the world wide web took off, there were two factors which really complicated the life of the modern day tester ...

Factor 1: Everyone started to make browsers.

Originally it was Netscape, who built one of the first browsers for UNIX machines.  They were soon followed by Opera, then Internet Explorer ... and more and more followed.


Mixed in with this, the different browsers treated HTML slightly differently.  In 2000 I worked on a naval intranet project where a page was designed to read a table of the day's events, and show these events as items on a 24-hour clock.  The page looked beautiful in Netscape, but in Internet Explorer, which interpreted the screen co-ordinates for the events differently, it was ugly and messy.  We had to put conditional logic into the page to work out which browser it was running in, and treat the two browsers completely differently.
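The original logic lived in the page itself, but purely as an illustration (in Python, with invented offsets), the kind of per-browser conditional it needed looked something like this,

    import math

    def label_position(hour, radius=100, centre=(150, 150)):
        # Place an event label around a 24-hour clock face.
        angle = 2 * math.pi * (hour / 24.0)
        return (centre[0] + radius * math.sin(angle),
                centre[1] - radius * math.cos(angle))

    def label_position_for_browser(user_agent, hour):
        x, y = label_position(hour)
        if "MSIE" in user_agent:
            # Invented offsets, purely to illustrate the per-browser adjustment
            # needed because IE interpreted the co-ordinates differently.
            x += 10
            y += 20
        return (x, y)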

Many companies decided that rather than keep up, their customers could only use Internet Explorer (IE) for their website.  This didn't seem too much to ask when most people were on a Windows variant, and 90% of people used IE anyway.  But times have changed.  People have Apple and Linux machines now, and have drifted away from IE in increasing numbers.  And many potential customers have machines dating back a bit which don't even support the latest version of IE.

Upgrading the Windows licence for your company is a big investment - many companies have so much software that works on a particular platform that migrating is a major undertaking.  Back in 2009, I was consulting at a company where our machines were still running NT - and was asked if I could test on Google Chrome (it can't be installed on NT, in case you wondered).

Factor 2: It was just too darn successful

Take a look at the first web page ever published ...


Today, it's barely recognisable as a web page.  Now take a look at a couple of more modern examples ...




When Tim Berners-Lee imagined the World Wide Web almost 25 years ago, modems worked at what we think of as dial-up speeds.  He couldn't have imagined being able to stream a video at dial-up speed.

As the web became more popular, people pushed the definition of a web page, and it evolved.  They started to include pictures, then audio and video as internet speeds increased.  From being a single page of information, pages started to be broken down into panels as people's monitors got bigger.

As this happened, HTML, the language which made browsers possible, evolved - it's now on version 5.  Java the programming language came about in 1995, and people started to embed programming into the browser layer via applets and scripting languages such as JavaScript, while languages like Perl drove pages from the server side.

With what we're doing on browsers changing, naturally the browsers themselves are evolving to meet this demand.  Hence the question: can the website you're developing today still be effectively rendered by a browser that's a version or two behind?

And this is the dilemma for the modern business - do you want to only reach potential customers with the latest/recent browsers?  That's not a tester decision to make, it's not even a developer one.  It's one that marketing people need to provide guidance for.

Fortunately help is at hand ... there are lists out there that help with making the decision.  This map here helps show the most popular browsers (but not by version) ...


Most websites, if you already have one, can monitor the traffic they receive.  This shows the page views I got this month by browser ...


And by Operating System ...


Most companies don't choose to service everyone, but to make sure they're covering their core users.

From those stats above, I can see for instance, there's a good case for checking my page content and how it looks on IE, Firefox, Chrome, Safari, Opera.  That would cover about 94% of my audience with 5 browsers.

To my shame, I have to admit I've never checked it on Opera!  Ooops.