I've recently been going through the subject of browser compatibility testing with one of my team, and this series has been some of the notes from those sessions. The great thing about such coaching is that it forces me to take a good long look at the "whys" of this area, and to learn something new myself!
So far we've taken a quick look at the history of web pages and how to get environments to actually test on. Now comes the challenging part - what are we going to do?
Taking a pure black box approach, if we have 11 browser types and versions we need to test compatibility on, that means we have to run the same testing we've planned 11 times, once on each browser, yes? This is where such an approach becomes messy - testing is always under the gun, and that kind of brute force just isn't going to work. To have a valid strategy we need to think about how websites work (at a very basic level), the kinds of issues we'd expect from browser incompatibility, and go to where the risk is.
Web Basics
What follows is a really basic, "just enough" description of a web site and how it works.
So, when you look at a page, your browser is provided information from a web server to create that page. This information is in the form of html, which essentially forms a "downloadable program" of content for your browser.
What exists in your browser is completely independent of the web server at this point. In fact, right now, disconnect your network cable, or turn off your wi-fi. You'll find that this web page doesn't simply "vanish" if you do. It exists on your machine, in its browser, and won't disappear until you try to do something.
To get a good browser testing strategy, it helps to know which functionality belongs to what's in your browser, and which functionality lives on the web server (and its back end). That can actually be harder than you'd think. As a user, most of our web experience is seamless between the two
(black box).
To be effective in browser testing, we need to focus on the features associated with how the web page renders in the browser. The features driven from the web server and the back end we're less likely to need to test as frequently
(the back end is the same back end, no matter which browser is used).
Example - the good old login screen
I'm going to refer to Twitter functionality a lot here, as it's something you can take a look at and investigate with ease. Let's consider the login page,
Here are some typical test scenarios you might explore (there's a rough sketch of these as automated checks after the list):
- "Happy Day" - there's a Username field, a Password field, and a Sign in button. When you enter the right username and password, you're logged in.
- If you enter the right email and password, you're logged in.
- If you enter the right email but the incorrect password, you're not logged in
- If you enter the right username but the incorrect password, you're not logged in
- If you enter the wrong username or email but the correct password, you're not logged in
- If you enter the wrong password for an account 3 times, your account might be locked
- If you enter the wrong password for an account 2 times, followed by the right one, you're logged in. If you log out, enter the wrong password again, followed by the right one, you are logged in, and your account is not locked.
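Here's that rough sketch, pytest-style. The helpers attempt_login() and account_status() are hypothetical - stand-ins for whatever actually drives your login page and reports the outcome - and the data values are purely illustrative:

```python
import pytest

# Hypothetical helpers - replace with whatever drives your login page.
from myapp.auth import attempt_login, account_status

@pytest.mark.parametrize(
    "identifier, password, expected",
    [
        ("valid_username", "right_password", "logged_in"),      # "Happy Day"
        ("valid_email",    "right_password", "logged_in"),
        ("valid_email",    "wrong_password", "not_logged_in"),
        ("valid_username", "wrong_password", "not_logged_in"),
        ("wrong_username", "right_password", "not_logged_in"),
    ],
)
def test_login_outcomes(identifier, password, expected):
    assert attempt_login(identifier, password) == expected

def test_account_locks_after_three_bad_passwords():
    # Three wrong passwords in a row should lock the account.
    for _ in range(3):
        attempt_login("valid_username", "wrong_password")
    assert account_status("valid_username") == "locked"
```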
Usually in a system that "logs you in", the functionality that decides, from the account and password details you've provided, whether you'll be logged in, not logged in, or have your account locked all lives on the "web server" side of the system equation
(ie, not in the browser part).
So for each browser you'd probably want to see the "Happy Day" path - looking at the page, and the basic flow through. There's also a case for wanting to see the "incorrect login" and "account locked" messages at least once.
But all the validation rules listed above we expect to reside server-side. Which means you probably need to run them at least once if you can, but not for every browser - this might be a good assumption to make, write down and "make visible", to see if someone technical-architect-y disagrees. It doesn't mean it isn't worth running these tests again in different browsers anyway if you have time, but it does mean they look to be "low risk" for browser issues.
So what kind of issues should you be expecting?
Back in WEB101, I talked about a web application I originally wrote which didn't cope well in IE.
In Netscape, it looked like this,
But in IE, all the labels got stuck in a corner, like this,
In web testing, we expect that there will be something in the html that the browser won't handle well, and hence interpret badly.
So for example - in the Twitter login example, we might be missing the fields for our username, password or the sign in button.
Those kinds of details are cosmetic defects, yes? Oh ... except if these things are missing or hidden, we can't supply our username and password, or select to log in
(you can see how that makes the issue a lot more functional). It makes our whole system unusable on those browsers.
The kinds of issues we'd expect to go wrong with a browser are, roughly (there's a sketch of a quick check for the first of these after the list),
- missing/hidden fields and buttons - high impact
- script code for items such as drop down menus might not work - high impact
- buttons out of alignment / text out of alignment - low impact
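That first, high-impact issue is also the cheapest one to check for. Here's a minimal sketch of a per-browser smoke check using Selenium WebDriver - the URL and selector names are illustrative, not Twitter's real markup:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # swap in Firefox(), Edge(), etc. for other browsers
try:
    driver.get("https://example.com/login")  # illustrative URL
    # The high-impact pieces: if any of these are missing or hidden,
    # the page is unusable in this browser.
    for selector in ("username", "password", "signin"):
        elements = driver.find_elements(By.NAME, selector)
        assert elements and elements[0].is_displayed(), (
            f"'{selector}' is missing or hidden in this browser"
        )
finally:
    driver.quit()
```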
What behaviour is browser side?
We took an educated guess with the login example that everything about validation was web server side.
I find the following rules of thumb useful for investigating which behaviours are browser side. First of all, as mentioned above, everything you see on a page has the potential to be browser specific
(so use your eyeballs).
But secondly, for behaviour and functionality, try this - unplug your machine from the internet
(physically or by turning off the wi-fi). Anything you can do on that page that doesn't end in an error like the one below indicates functionality which is browser side rather than web server side,
Let's take a look at signing up for a Twitter account ...
I opened the page, then turned off my wi-fi. Let's work through a few of these fields ...
Full name - we can enter anything, and it validates whether it looks like a real name - all browser side functionality.
Email address - we can enter any text we like, so that's browser side. But it goes into a circular loop trying to validate the email address (I suspect to see if it's been used) - so that behaviour is in the web server.
Create a password - we can enter text, and it coaches you on whether it thinks the password is long enough or secure. All this seems browser based.
Choose your username - as with email, you can enter information, but it gets hung up validating. I think this again checks for uniqueness, and the validation is at the web server end.
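If you want to run the same probe without physically pulling the plug, Chrome (driven through Selenium) can be forced offline after the page has loaded. A minimal sketch, with an illustrative URL - this throttling trick is Chrome/Chromium specific:

```python
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/signup")  # illustrative URL
    # Force the browser offline - the equivalent of turning off the wi-fi.
    driver.set_network_conditions(offline=True, latency=5, throughput=500 * 1024)
    # From here, interact with the page (manually or via the driver):
    # validation that still fires is browser side; anything that spins
    # or errors is calling back to the web server.
finally:
    driver.quit()
```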
Experimenting in this manner allows you to make educated guesses at the kind of behaviour on a page that's browser based, and hence worth revisiting in different browsers when time allows. It gives you an educated guess at the areas of greatest risk in terms of browser compatibility. It's not infallible, but it's a decent rule of thumb for having conversations with the rest of the team about what you think needs retesting a lot, and what needs less.