This piece appeared in Testing Circus recently; however, it's so core to my philosophy that I really wanted to include it here with my other writing!
It seems right now, thanks to YouTube's advertising policy, that every time I want to watch a music video, I have to endure some ghastly advert of "hi ... I'm going to tell you the secret of how you can turn your life around in 5 easy steps".
I hate that hard-sell advertising of "I learned the magical secrets of how to be a millionaire, and I'll share them with a room full of 100 people at my upcoming workshop ... for $10,000 a ticket".
I have to admit, I do have a tiny bit of a mercenary streak, and for a moment, I was thinking "wow ... I could do something similar for testing". The only problem is, the testing pool is already full of sharks right now, and adding more doesn't help the community.
I've ended up talking almost daily to a young tester named Guna, who wants to become the best tester she can be. She has an awesome drive and energy, and is already very active in the community, whilst being in the difficult position of being the sole tester on her project. I'm very glad to find many of these "lone testers" joining Twitter, and seeking out the testing community.
From these conversations with Guna though, I've been wondering: what is the process of becoming the best tester you can be? I've been sketching ideas down for a week, and as a list they seem surprisingly simple - occasionally I'll think to add something new, but then realise it actually belongs under the umbrella of another item.
So these are my four simple steps to becoming the best tester you can be. They are very simple concepts - however, the complexity, I'm afraid, arises from the fact that these are not "tick each off and turn your brain to standby" items. They are values to live each day by - they don't guarantee you will be a brilliant tester today, but they promise you will be a better tester than yesterday.
Expand what you currently know
You have some knowledge of testing right now. Look at methods to expand it. For most people that means "let's look for a training course", and courses are really useful. But there are also other methods - find a book on testing, read a testing magazine or blog.
But don't forget to socialise as well - I learned a huge amount working alongside testers at EDS, simply because we all used to take a walk to the sandwich shop at lunchtime. We'd talk about football, news, and TV mainly. But we'd also mix in experience from other projects, things we were doing that day in the test space, etc. That drip feed of knowledge from others over 2 years actually amounted to something quite tangible. Indeed, I'm still connected with half that group, and they're still my first-line "brains trust" when I want to run a testing idea past someone.
In many locations such as Wellington, Sheffield, Brighton etc there are tester meetups where you can mingle with other testers, expand your network, and learn a few things. There's also a wealth of things to watch on YouTube and in TED talks.
Ask yourself, whether at work or in your networking: "am I having meaningful conversations about testing?"
Try out new ideas
Don't just keep sticking with "we've always done it like this". Try to find new ways to trial ideas for testing. And here, "trial" is the operative word - don't build a critical delivery around something you've never tried before. Find a way to try it out such that it won't cause a meltdown if it doesn't go as planned. Be prepared to modify your first approach to "remove the kinks".
Ask yourself why - from your experience in testing, and from others' testimony - this new method will make a difference for the project in front of you now. How can you tell if it's working?
This leads nicely to the third point ...
Monitor and seek feedback
Get used to trying ideas. Sometimes unexpected things will happen - I can't tell you what, because that's why they're unexpected. Sometimes your idea is basically good, however you may need to step back, see the bigger picture, and make some tweaks and improvements.
And don't just rely on your own observations - ask the people you trust. Hopefully right now you're working right alongside people you trust (if not, ask yourself how you can build that trust).
Get feedback from others about how things are going. Maybe they don't know much about testing? It might be a good opportunity to talk to them a little about how your testing works and fits into the grander scheme.
At times though you might find the gulf between your theory and running it in practice makes your idea unworkable. Don't be afraid to "chalk this one up to experience" - maybe you now have enough information to see that the old way had value you didn't originally appreciate? Maybe your approach would work better on a different project. That experience is hard won, and yours to use going forward.
Test everything
Embrace the philosophy and critical thinking which are core to the craft of testing. In everyday life we build our thinking on an invisible raft of assumptions we often aren't even aware are under us.
Once in a while open your eyes to the world around you, and as a mental exercise ask yourself "how can I prove to myself beyond doubt that ...". You may want to prove to yourself that "the world is round", that "roses are red", or that "two different people both independently recognise the same food stuffs as a 'sweet' taste".
Software and testing are built on a raft of assumptions. Some, like "the Earth is round", we just take for granted without asking "what proof have I actually seen directly?". But sometimes we need to be mentally aware of them, because some are not sound assumptions to make - and many projects have been caught out by not examining them!
Embracing critical thinking is all about trying to notice assumptions and biases, and hopefully avoiding being duped or made a sucker. A kind of critical life skill in a world with so much misinformation!
Wednesday, April 29, 2015
Friday, April 24, 2015
Developing an approach to responsive design website testing
Since its introduction in 2011, responsive design has become increasingly important in websites, and with good reason. Prior to its use, websites struggled to support the increasing uptake of access from smartphones and tablets. Some organisations had to develop separate (and thus costly) websites - one for viewing from PCs, and one for mobiles (typically hosted at m.*).
Responsive design is a methodology of having the same page source, which scales according to the size of the page available - allowing a single source of web content to be supported across different screen sizes and devices, from laptop to smartphone. It basically allows all your information to "fit" horizontally, so if you're on a mobile device, you don't keep having to resize or scroll your screen horizontally.
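Under the hood, this scaling is typically driven by CSS media queries - the page declares alternative layout rules that kick in below a given width. A minimal illustrative sketch (the class names here are made up, not from any real site):

```css
/* Label sits beside its field when there's room ... */
.form-row { display: flex; }
.form-row label { width: 30%; }

/* ... but stacks above the field on narrow (mobile-sized) screens */
@media (max-width: 480px) {
  .form-row { display: block; }
  .form-row label { width: 100%; }
}
```

As a tester, knowing the breakpoints (the `max-width` values in use) tells you exactly which window sizes are worth probing - behaviour changes at those boundaries.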
There is a wonderful website you can try this out on here.
Open in full page on your browser, and it should look a little like this ...
Now reduce the size of your browser a little, and you should find that instead of two columns of entries, it reduces to a single column ....
Now take it down even smaller, as if you have the limited number of pixels you get from a mobile, now the labels go above the data fields ...
Pretty cool huh? But there are also a few potential pitfalls, and this article will talk you through some of them.
Whoo-hoo, mobile testing, I'll have me some of that!
Something we need to be crystal clear about: when we're talking about testing responsive design on mobile devices, we're basically just using them as browsers. This isn't connected with testing mobile applications installed through Google Play or Apple's App Store - that's a whole different field of testing (though some people get confused).
Creating a strategy
To get the ball rolling, you need to start setting a strategy - across mobile devices and browsers. Responsive design uses some newer HTML features, so there are older phones and browsers which really struggle. So the question has to be - what browsers/devices matter?
When we've done browser testing in the past, we've just tended to install a whole host of browsers onto our machine, and maybe some virtual machines to cover older versions of IE, and just "get started". Here's the catch though - it's free to install a browser (yes, even Safari). But mobile devices cost - and if we're talking a high end model, then it costs a lot! You have to be selective, and your client has to be willing to support the purchase of these devices.
Even "hey guys, does anyone have a phone, I just want you to check this site in testing" is a bit dubious. You're running your testing strategy from "what you can find around the office". The other thing is that a mobile phone is a very personal thing - I might check a site for you, but I wouldn't let you take away my phone to look at a problem I'd encountered.
If your client is keen on developing a responsive design site, then they need to be comfortable with purchasing, or at least renting, some devices. And here's the thing,
- Going forward, you will have to do some responsive design checking for every release. It's not just something you bolt on, work on for a few weeks and never have to worry about again.
- New devices are always being released. This means revising those devices you use about every 6-12 months (a good rule of thumb is every time there is a new version of iOS).
Which devices?
The best answer to this is to talk to your client and ask "which browsers/devices have 5% of traffic or more?".
Of course, if you are just creating your first responsive design, you might not have reliable figures yet. In this case there are lots of sources out there. Mobile test consultancy Synapse provide some useful resources (and I've used Jae before - he's well worth bringing in).
Apple devices - you can find information about the most popular here.
Android devices - you can find information about the most popular here.
Right now, the jury is still out on Windows Phone 8.1 uptake, as is being able to cross compare device usage (iOS vs Android vs Windows 8.1).
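Once you do have usage figures - whether from your client's analytics or from public market-share data - the 5% rule above is mechanical to apply. A minimal sketch (the share numbers below are made up for illustration, not real market data):

```python
def devices_to_test(usage_shares, threshold=5.0):
    """Return browsers/devices at or above the traffic threshold (percent),
    most popular first."""
    return sorted(
        (name for name, share in usage_shares.items() if share >= threshold),
        key=lambda name: -usage_shares[name],
    )

# Illustrative figures only - substitute your own analytics.
shares = {"iOS 8 Safari": 34.0, "KitKat Chrome": 28.0,
          "JellyBean browser": 12.0, "Windows Phone 8.1": 2.5}
print(devices_to_test(shares))
# -> ['iOS 8 Safari', 'KitKat Chrome', 'JellyBean browser']
```

The point isn't the code - it's that the in/out decision should be a stated, repeatable rule you can re-run every time the analytics refresh, rather than gut feel.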
Looking through that list, I'd say at a bare minimum, you'd need to consider the following in your test suite,
- iOS 8.X device
- iOS 7.X device
- KitKat device
- JellyBean device
For these devices, I'd also try to consider lower spec models, especially those with smaller screen resolutions. [In responsive design, a smaller screen means less real estate, and more potential for problems. As testers, we like problems.] That can often mean looking at a smartphone over a tablet.
Beyond that, I'd try to see if I could talk up a purchase of something on Lollipop (it's a low share, but it's the future), and maybe Windows 8.1 (especially as there are some dirt cheap Windows Phones out there right now).
Regarding the browsers on those devices - most people just use the built-in browser (until analytics tell you otherwise).
Remember - this is my analysis from the current market - it will change! Once your site is up, try and get analytics to help profile popular browsers/devices, after all it doesn't matter what other people are using, what matters is what your client's customers are using.
And on that bombshell, just a few weeks after Microsoft announced the death of IE, look who sits at the top of the most popular browsers for reading this blog?
Test Website Access
Well, typically the website you're producing is kept tightly under wraps until it's launched. Don't forget to have a discussion about that with your web team. Do you have a VPN you can set up on devices to access your test environment? You're going to need some form of solution to get your device to see your test pages.
Can't we just get around needing mobile devices, and just use the browser in small mode?
If you make your browser window 480 x 800 - isn't that the same as using a mobile device?
To be fair, it's a good first step, but as you'll see below, some of the problems come from the operating system. Android and iOS have special built-in ways of handling certain items like drop-downs and pop-ups, which means they can behave slightly unexpectedly.
So I'm set up - what next?
Okay, so someone approved your devices, you have an accessible test area and you're ready to go ... and?
So what exactly are you going to test? What really helps now is to come up with a way to summarise your application. To cross-browser and cross-device test, you need to repeat the same elements of testing over and again for every browser and device.
Do you have any clear idea what those elements are for your system?
The following is a guide to my approach ... and remember I looked at some of this a couple of years ago (but just for browsers).
Page checking
Create a map of every page. For each page, confirm that,
- every field can be selected and data entered
- every check box can be selected/unselected
- every drop down box can be opened and an option selected
- every button can be selected
- you can select between tabbed pages
- pop up confirmation and error messages appear
You are basically looking here for something not working, a button missing etc which would prevent you from being able to use a page!
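Since the same page checks get repeated for every browser and device in scope, it can help to generate the full set of combinations up front and tick them off as you go. A small sketch of the idea (the device and check names are illustrative, not a recommendation):

```python
from itertools import product

devices = ["iOS 8 phone", "iOS 7 phone", "KitKat phone", "JellyBean phone"]
page_checks = ["fields accept data", "checkboxes toggle", "drop-downs open",
               "buttons respond", "pop-ups appear"]

# One row per (device, check) pair - the full cross-device checklist.
matrix = list(product(devices, page_checks))

print(len(matrix))  # 4 devices x 5 checks = 20 rows
```

Even on paper or in a spreadsheet, this device-by-check grid makes it obvious at a glance which combinations you haven't covered yet.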
Functional Checking
What are the main functions of the website? Make a list of them, and repeat them on several browsers and devices.
Here are some examples (discussed previously),
- Registration
- Login
- Change my account details
Generally the tests per browser/device don't have to be exhaustive - you are repeating a few examples across multiple browsers, after all. But ideally they should include a success and a failure (you always want to check that an error message is displayed).
Being able to create a summary overview for testing your website is something I hope to explore in the future - so watch this space.
Common Gotchas
These are the areas I've commonly found problems with on responsive design. You'll notice that some of these can be countered by good, up-front design ... so if as a tester you find yourself in a design meeting, go in forearmed ...
Landscape to portrait to landscape
A simple test - part fill in your page, then turn the device on its side. Do you lose your data? Does any page element vanish?
Turn it back to its original orientation, and check again. Sometimes the changed orientation causes a page refresh, and things go missing!
Drop downs
The mobile device overrides how drop downs are handled in browsers ...
On the left is the iOS carousel, and on the right the Android selection list.
The problem, though, occurs if you have long selectable items in your drop-downs. For example, consider a list of security questions,
- What was the name of your first girlfriend/boyfriend?
- What was the name of your favourite film?
- What was the name of your junior school?
All of these will truncate, so all the user will see is "What was the ...". There's really no solution to this other than rethinking the approach/redesigning.
Pop ups
I've found any kind of required pop ups from a browser can be a little troublesome on mobile devices (they often just go AWOL).
This can include,
- Are you sure you want to continue?
- Read these terms and conditions.
- Select a date from this calendar
Tread carefully - and always check these.
The catch with responsive design
Yeah - there was bound to be one, wasn't there? This approach significantly reduces the risk of problems in responsive websites, but it's no guarantee. Indeed, some mobile phones take a system like Android and tailor particular features, so it's hard and expensive to test exhaustively upfront. This risk has to be known to both you and your client.
Furthermore, because responsive design uses newer features of HTML, much older browsers and devices really don't handle the page very well. You might want to check that on old, out-of-scope browsers/devices it at least fails gracefully with a "your browser does not support this website" error message.
Some common sense ticket items
Finally some common sense things to consider before we wrap up ...
Health and safety
Some testers are really keen to get into mobile testing. I actually find it a real pain. The screens are much smaller, data entry is done just with your thumbs, and you tend to hunch over the device.
This is a recipe for repetitive strain injury. Try and mix up mobile testing with browser testing, and make sure you're taking regular breaks.
Keep them safe and keep them secret
Make sure the devices are locked away when not in use, and keep a couple of keys to where they're kept with your team. You just don't want them walking away. Count them in and lock them up every night.
You drop it you bought it?
It's inevitable that one is going to get dropped. What then? Look into getting them added to your building insurance. If you can't, make it clear what will happen should any damage occur.
Breaking news ...
I ran this article by Stephen Janaway - he's someone whose articles I really respect and look to when learning in the mobile space, and he has a huge amount of experience.
One of the great things about writing an article like this is that it puts down everything you know, and sometimes what you find is there's something you didn't know. And sure enough ...
So, I'd never really thought about using Chrome Developer Tools as an early form of testing - so I can't really explore it here, but it might be a topic for a follow-up blog.
Sunday, April 12, 2015
Rapid Software Testing with James Bach returns to New Zealand
I count myself as extremely fortunate to have been able to take James Bach's Rapid Software Testing course back in 2012 - it's hard to believe he's not been back to repeat the course since!
If you've never done Rapid Software Testing, I highly recommend it - for myself, it was one of the most influential courses I've taken on software testing. Indeed, use Google and check out other testers' opinions of the course - what I've seen has been universally positive, and it seems everyone has a different tale to tell.
The course is full of clever insight - indeed, I'm forever going back over my course notes and learning/remembering additional details. Most of all the course worked for myself as a mirror - it confirmed a few things about the direction I was working in, it suggested a few new ways of working, and I learned a few of my weaknesses. All these things led to me becoming a better tester.
This is a list of some of my personal take homes from the course,
- Documenting what you're going to test isn't as important as recording/documenting what you've tested #RST [Indeed this became an important part of our approach here]
- Boundary testing is important, but it's not the be-all. Be sure to check behind them #RST
- Visualise the testing you've done. Are there large expanses untested? Are you sure nothing's hiding there? Maybe check again. #RST
- When you think you've tested all you can. Defocus, and try a new approach to using software. Maybe get someone to try #RST
- Always know the purpose and values of the software you've been given. #RST
- Engage with customers and question them to tease out ways you can aid them better #RST
- A test action which repeats/rewords a requirement has no real value #RST
- An oracle is any method or person which means you have an expectation of the system under test #RST
- Oracles can be fallible though #RST
- We keep a list of expectations in our head about software. It only triggers when something is unexpected. You can't write them down #RST
- Most testing methods feel instinctive. But sometimes we need to challenge & ask more questions to explore other aspects of the s/w #RST
- My main heuristic is thinking how I'd code a piece, and the ways it could mess up if I did #RST
- Don't be afraid to ask for a log file from developers if it makes your #testing easier. #RST
- But even log files are fallible #RST #dontTrustTooMuch
- The people who most impressed me and who I'd want to work with aren't necessarily the same as ppl the instructor liked #RST #differentValues
- If someone has an idea I feel has value, but the expert disagrees, I need to be braver, speak up & support them #RST
- Experts in testing can advise me. But as a tester on MY project, I am ultimately master of my own destiny. #RST
- As a tester, I have things I do well, & things I do badly. Build a team around me which balances out my bad & harnesses my good #RST
If you're based in New Zealand, and have never attended, I cannot recommend this course enough. Fellow tester Kim Engel has been putting in a lot of hard work organising his return in August - and you can find out more about booking here.
Saturday, April 11, 2015
A funny thing happened on the way to South Island ...
This time last week, we had our ferry tickets and motel booked for a mini cycling tour of the South Island of New Zealand. It was an idea my son and I had been putting together for a while ...
The thing was, I was really quite nervous about the trip. And on talking it over with my friend Elayne, I noticed it was a kind of nervousness that, as a tester, I was somewhat familiar with. It's a similar kind of nerves to what I've felt whenever any software project I've been a part of has gone live. How can that be?
What we were doing wasn't super-ambitious. We were cycling from Picton to Blenheim, a trip of 30km, we'd stay overnight, then return the next day. We could have just booked the tickets, jumped on the ferry and hope for the best. But to be blunt, though I love to be quite active, I'm overweight and getting old - and I don't like to set myself up for failure.
So since January we've been training. At first just 20km rides to get used to our bikes, and to enjoy the last of the New Zealand summer. We'd also complement this with Les Mills RPM classes during the week to help build up our "cycle fitness" (needless to say when a mid-40s father tries to compete with a mid-teens son, one builds up fitness faster than the other).
This was just a start - during this time, we found our bikes needed a number of changes and repairs. Cameron decided that he'd like to use an old mountain bike of mine, and my chosen bike needed a new wheel (we popped about 3 spokes over a month). Every weekend we'd be out on the bikes, performing a test on ourselves, our fitness and our bikes ...
Testing for geography
We used the internet to measure the distance from Picton to Blenheim and get an idea of the terrain, and found similar routes we could try over here in Wellington which would contain the same challenges for distance and incline.
Testing under load
We attempted to work out how much equipment, and especially water, we'd need to carry (New Zealand is very remote in parts, so you can't just expect to pop into a supermarket and pick up what you need between destinations). So during March we'd cycle with various levels of clothing etc. on our backs. When choosing a couple of changes of clothing, a rain mac, something in case it gets cold, a recharger for your phone - each piece seems quite light, until you put it all into a pack and stagger under the weight.
Previously, in 2012, I'd gone with my son on a camp hike with the cadet force he was doing his Duke Of Edinburgh award with. It was a 10km hike, camping overnight in the bush, then a 10km hike back. I only tried the pack on the night before, and was close to quitting within the first 100m (the whole 2 days was complete misery because of this). In hindsight I should have tried short walks with the pack first, and seen how I went. It was a major motivator for both of us to get used to cycling with a similar level of equipment on us.
Testing for endurance
Being able to cover the distance with the same level of equipment was nice, but we were very aware that we had to get good at doing 30km on rough roads with a bit of an incline (and wind), then waking up the next morning and doing it all over again. That meant, in late March, training on both Saturday and Sunday - getting used to doing that ride when we were feeling a bit sore.
And more ...
There was a lot more as well: trying out which clothes we found best to cycle in, working out snacks/nutrition before/during/after, and going over the route on Google Earth so we were familiar with it.
Given all of the above, Elayne asked me "you've trained, you've prepared - what are you worried about?". And that was when I started to notice the similarity with testing.
All the training we'd done had allowed us to check our fitness (with the RPM classes helping to build those fitness levels). We were more prepared and knew that we could do it - but it wasn't assured. Fitness training doesn't guarantee your success come event/race day. It just increases the likelihood of success.
The same goes for good testing: it allows you to remove as many obvious risks as you can. But there can always be things you don't think about (like walking with THAT pack, with so much weight in it, back in 2012).
Here are some "gotchas" which nearly ruined our cycle event:
- We'd trained during a heatwave ... but on day one, there was heavy rain and severe wind. Ouch.
- We'd planned to cycle during daylight, so bike lights and hi-vis vests were a bit of an afterthought. But the weather was so bad that we needed them to be clearly seen (the sky turned Biblically black).
- Cameron's bike suffered repeated punctures on day one. It was obvious something was wrong with the front tyre (I'm usually good at running my finger around the inside and finding any splinters/thorns). We ended up pumping the tyre every 2km for 8km to get us to Blenheim - where we got both the inner tube and tyre replaced. We'd originally planned to do this trip over Easter, and realised that if anything had gone wrong then, no cycle shop would have been open to assist.
You can bet if we do another cycle event, we're going to try out some extra sessions to cover some of the above. This is the feedback loop - in testing and cycling, we get better by "doing", finding out what went wrong/wasn't so great, and focusing on improving it.
Life is full of events like this which teach an important lesson - and if you stay mentally open to it, you'll grow more, and it will feed back into your professional life.
For me, I think the takeaways are:
- Nothing can be assured, but the more variety of testing you do on your fitness/software the more you'll be able to find weak points, and be able to take action to address them.
- If you find you are easily achieving the goals of any session, then maybe it's time to be a bit more ambitious, and push harder. This can mean "trying longer/steeper/more load" in cycling or "add more complexity" when testing.
- You are always limited in testing fitness/software by your imagination - more importantly, by what you can imagine could go wrong. Your own experience is the best teacher, but knowing other people's stories of "what happened to me" always helps guide and expand your imagination.
- Sometimes it's just good to leave your cares behind, and hit the road with a good friend to have an adventure. It is possible to overthink these things you know ...
That darn pack