Showing posts with label Efficient Testing.
Friday, April 27, 2012
Rapid exploratory website testing ...
I was faced with an interesting challenge today at 12:50pm. We have a minor project which I'm not directly involved with, and which hasn't needed any testing staff. However, they'd had a new supporting website produced to provide information … it had been delivered that morning - could I spend half an hour checking it out for anything "obvious"?
This was an ideal opportunity to really stretch my muscles and give it a going over, exploratory testing style. I knew nothing of the website, though maybe that wouldn't be a disadvantage; I'd evaluate it as best I could within the time allowed.
1:00pm, the link came through.
Opening the page, I began my analysis. The site was essentially a marketing toy, telling prospective customers about a new service being provided and allowing them to register an interest. It detailed all the features which were coming and why it would make life a lot easier, with links to both supporting partners and associated stories about the initiative. It also had a top menu bar which let you jump to the parts of the page you were interested in, each dealing with a particular aspect of the new service.
Several areas had icons which, if you held your mouse over them, expanded into bubbles giving a more detailed graphical explanation.
Starting with the basics, I attempted to click every link and every button on the page, making sure each went to the right target. Two of the links to supporting partners could not be selected with the left mouse button, but could via the right-click menu [Defect 1].
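Had I wanted to automate that first sweep, a minimal Python sketch (using the requests and beautifulsoup4 libraries - the URL here is a made-up stand-in for the real site) might look something like this:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE_URL = "https://example.com/new-service"  # hypothetical stand-in for the site under test

page = requests.get(BASE_URL, timeout=10)
page.raise_for_status()
soup = BeautifulSoup(page.text, "html.parser")

for anchor in soup.find_all("a", href=True):
    target = urljoin(BASE_URL, anchor["href"])
    try:
        status = requests.head(target, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as err:
        status = f"error: {err}"
    # Anything that doesn't come back healthy gets flagged for manual follow-up.
    print(f"{target} -> {status}")

Worth noting: this only confirms each link target resolves. It wouldn't have caught Defect 1, where the target was fine but the link itself couldn't be left-clicked - that needed a real browser and a real mouse.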
I tried out the webpage in Internet Explorer. The menu bar buttons did not take you to the right part of the page at all, which was most curious [Defect 2].
I opened the website in Chrome and Firefox (the other browsers I had available). The page looked identical in all three; however, in these two browsers the menu bar buttons DID work as expected [Defect 2 revised with this information].
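Repeating a check like this across browsers is also very automatable. A rough Selenium sketch (the URL and the menu selector are hypothetical - they'd need matching to the real page):

from selenium import webdriver
from selenium.webdriver.common.by import By

URL = "https://example.com/new-service"  # hypothetical site under test

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    try:
        driver.get(URL)
        # Click a menu bar button and check the page actually jumped.
        driver.find_element(By.CSS_SELECTOR, "nav a.menu-item").click()
        jumped = driver.execute_script("return window.pageYOffset > 0;")
        print(f"{driver.name}: menu jump worked = {jumped}")
    finally:
        driver.quit()

Internet Explorer has its own Selenium driver too - tellingly, it's the browser most often left out of loops like this one.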
I dragged my mouse over the icons that opened graphical explanations, and confirmed they made sense and didn't behave oddly close to the edge of the browser view. I did wonder why one story had two links to its website when the others had only one (inconsistent) [Defect 3].
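That hover behaviour is checkable in the same Selenium style, using ActionChains to simulate the mouse-over (again, the selectors here are hypothetical stand-ins):

from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By

URL = "https://example.com/new-service"  # hypothetical site under test

driver = webdriver.Chrome()
try:
    driver.get(URL)
    for icon in driver.find_elements(By.CSS_SELECTOR, ".info-icon"):
        ActionChains(driver).move_to_element(icon).perform()
        bubble = driver.find_element(By.CSS_SELECTOR, ".info-bubble")
        rect = bubble.rect  # x, y, width, height of the expanded bubble
        viewport = driver.execute_script("return window.innerWidth;")
        # The bubble should stay inside the viewport, even for icons near the edge.
        inside = rect["x"] >= 0 and rect["x"] + rect["width"] <= viewport
        print(f"Bubble within viewport: {inside}")
finally:
    driver.quit()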
I read through the website – did it make sense? I noticed that two sentences in the same paragraph were virtually identical, and that the supporting partners were referred to in an inconsistent manner [Defect 4].
There was a field to register interest by adding your email address. I tried a valid email: it was accepted (good). I tried one junk email and was told it was invalid (good). I then tried a couple of variations of invalid emails, and not all were rejected. I noted it as a possible problem [Defect 5].
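This is a classic gap: many sites validate email with a regex that's too forgiving. I don't know what this site actually used, but a deliberately naive validator of the sort that produces exactly this behaviour, with a few probing cases, might look like this:

import re

# An illustrative, deliberately naive validator - NOT the site's real rules,
# which were unknown to me.
NAIVE_EMAIL = re.compile(r"^\S+@\S+\.\S+$")

cases = [
    ("tester@example.com", True),   # plainly valid: accepted, as it should be
    ("not-an-email", False),        # junk: correctly rejected
    ("a@b@c.com", False),           # invalid, but the naive regex lets it through
    ("user@domain..com", False),    # invalid, but the naive regex lets it through
]

for address, should_be_valid in cases:
    accepted = bool(NAIVE_EMAIL.match(address))
    marker = "ok" if accepted == should_be_valid else "SLIPPED THROUGH"
    print(f"{address!r}: accepted={accepted}, expected={should_be_valid} [{marker}]")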
The bottom of the page had a huge but empty grey box, which just looked messy [Defect 6].
At this point I thought I was done. Then I had a moment of slyness. The menu working in Firefox and Chrome but not Internet Explorer was a little suspicious - I know web developers tend to love working and testing on those browsers. Likewise, I know how web developers love their big, high-resolution screens. So I went into my personalised settings and dropped my screen resolution to 800 x 600. The page now no longer fitted the screen, and some buttons on the menu bar became mangled, with some icons missing altogether [Defect 7].
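You can get much the same effect without touching display settings by shrinking the browser window itself. A small Selenium sketch (hypothetical URL again):

from selenium import webdriver

URL = "https://example.com/new-service"  # hypothetical site under test

driver = webdriver.Chrome()
try:
    driver.set_window_size(800, 600)  # stand-in for a genuinely low-resolution screen
    driver.get(URL)
    # The page should fit the viewport: horizontal overflow suggests a mangled layout.
    overflow = driver.execute_script(
        "return document.documentElement.scrollWidth > window.innerWidth;"
    )
    print(f"Horizontal overflow at 800x600: {overflow}")
finally:
    driver.quit()

It's only a proxy - a resized window isn't identical to a low-resolution display - but it catches the same class of layout breakage.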
I emailed my list of discoveries to the project manager. It was 1:30pm and time for lunch.
That was testing at its most raw, and a lot of fun (and a nice break from the meetings and documentation of the rest of the day). For the product, it was a perfectly pitched piece of ad-hoc testing. I raised a defect for everything I thought could be an issue - of those, there are about 3 things which need doing (the non-selectable links, the menu not working in IE, and probably the behaviour at low screen resolutions); the rest are more about consistency and look, which might be equally important if there's time.
The issue with the menu bar was discovered by a BA during the same time. But where they reported it, I managed to define it as an Internet Explorer issue, and not one on Chrome or Firefox. This made me realise that testers are more than people who "find problems" (their BA, a very talented and smart woman, did that); being a tester, it was in my nature to go further than just finding the problem, and to "define the problem".
A most interesting exercise for sure ...
Friday, February 3, 2012
Getting your testing project into orbit!
If you look at the underside of a rocket (hopefully not as it's about to take off), you'll find that there are two types of rocket engine with different functions:
- Propulsion. This is performed by the huge main engines, the workhorses of a rocket, where most of the energy and fuel goes. They give it the power to take off, escape the Earth's pull and get into orbit.
- Steering and navigation. This is achieved both by much smaller engines (thrusters) and by steering mechanisms on the big propulsion engines. They use much less energy but perform a vital function: they keep the rocket on course.
A rocket launch will only be successful with a good balance of both propulsion and steering. Neglect the main boosters and the rocket won't have enough power, and will come plummeting back to Earth disastrously. But neglect the steering mechanisms and the rocket will be unstable and easily topple over, leading to a similar fireball.
Much like the engines of a rocket, I like to see testing activities in a similar light: they're either 'moving us forward' or 'steering us in the right direction'.
Without doubt the activities that move us and our project forward are those of actually executing tests, raising defects and reporting. This is how we add value to our products – we find problems and work with developers to resolve them. This is how we make our product better, and really we should aim to spend as much time and energy here as possible.
And though we'd ideally like to spend less time on them, our steering activities are still important, because they make sure we're on course with our main task. These activities include:
- Test estimation
- Test planning
- Requirements review
- Test scripting (and automation)
These activities have no real value by themselves; their value lies only in their relationship with our main activities.
In an ideal world you should look at the time and effort you spend on each of these activities while testing a project. If you find you're spending more time on 'steering' activities than on 'main thrust' activities, something is wrong: you're not spending enough time on the core of where you deliver value.
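As a back-of-envelope illustration of that balance check (the hours here are entirely made up):

# Entirely made-up hours, purely to illustrate the heuristic.
hours = {
    "test execution": 30, "raising defects": 8, "reporting": 4,      # thrust
    "estimation": 3, "planning": 6, "requirements review": 4,
    "scripting/automation upkeep": 35,                               # steering
}

thrust = hours["test execution"] + hours["raising defects"] + hours["reporting"]
steering = sum(hours.values()) - thrust

print(f"thrust: {thrust}h, steering: {steering}h")
if steering > thrust:
    print("Warning: more time steering than moving forward - worth a look.")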
Exactly this happened to me a few years ago – I worked on a heavily automated project, where our scripts were often over 10 years old and very badly coded in places. Frequently the scripts would fall over, and we'd spend a lot of time and effort during regression testing running and fixing them. It became a bit of a joke that we 'were getting very good at testing our scripts' … but not so much our product.
Sometimes, just to get it done, it was easier to run them manually! But that's the point: the value of those tests wasn't in supporting the automation, but in actually having the test exercise the desired function in our application.
The obvious, idealised model for getting your testing where it needs to be is all about spending as much time as possible testing, hands-on with the product – this is your propulsion, your core activity as a tester.
But you also need to do just enough of the navigation tasks to know all this effort is pointed in the right direction to deliver what your business needs. If some task is becoming arduous (as with some of our automation), the big question has to be "is it really adding value to my core activity?". All your steering tasks should have a clear contribution towards your core tasks; if one isn't making the test execution phase better, then it's probably excess baggage time-wise and needs a bit of trimming.
Test scripting is a good example. If you're writing complex and arduous scripts while having no access to your product as you write them, you have to wonder if this is really going to add value. How do you know your scripts will align with the finished product? Wouldn't a lighter scripting approach, using the product as you write, be better? You'd actually be doing some execution (and thus finding bugs as you go along).
Likewise with test plans. It can be tempting to try to make an iron-clad plan covering all kinds of detail, which is great if you have all that information to hand. However, I frequently find that no matter what you put in your test plan, there's always something you've missed or something that has changed. Projects are almost living creatures, with an ability to morph, change and grow from inception. Better to get a plan together and up-issue it as changes evolve.
Get the balance right in your testing tasks, and there are no heights you can't reach …