Thursday, February 4, 2016

Denial 103: "But we spent a lot of money developing our automation"

Previously we looked at the psychological effect of denial, and what drives it and makes it tick.  Today we look at an example of denial at work within testing.  Ladies and gentlemen, I give you exhibit B ...

"But we spent a lot of money developing our automation"


I know the lure for a lot of people when it comes to automation is,
  • the tool is cheap (free is best yes?)
  • it's quick to produce automation scripts
  • it can run scenarios faster than a manual tester can

In a way this trap shares a lot of ground with the "testers spent a lot of time scripting" fallacy we discussed last time.

If a tool is quick to create scripts in, it's easy to run a short one-week demo of the product and choose it as your automation strategy.  However, six months down the line, your manager is wondering when the automation magic will kick in, asking "where's my ROI?".

Now, as much as I hate that term, they've got a point - by now you have a large library of automated scripts.  But they constantly need running and correcting (that wasn't in the brochure).  So much so that they fail all the time, mainly in small but irritating ways, and typically a fail requires a modification to the automation script rather than revealing a problem in the software under test it's supposed to check.

And any time there's a major change to the system, large numbers of scripts need modification.  Heck, they all needed some modification when the login screen changed!
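
To make this concrete, here's a minimal sketch of the kind of script that causes this pain - Python with Selenium, with the URL and element locators invented for illustration.  The login steps are recorded straight into the test, and copy-pasted into every other test in the suite:

```python
# A typical record-and-playback style script. The login steps are baked
# into this test - and duplicated in every other script in the suite.
# The URL and element IDs are hypothetical, for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")
driver.find_element(By.ID, "username").send_keys("testuser")
driver.find_element(By.ID, "password").send_keys("secret123")
driver.find_element(By.ID, "login-button").click()
# ... the steps this test actually cares about follow here ...

# If the login screen changes (say "login-button" becomes "submit"),
# these lines break - and so do their twins in hundreds of other scripts.
```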

This is because when you evaluated the tool and the method, you went with cost and how easy the scripts were to make (hey, record and playback, how much simpler could it be?).  What you missed was maintainability.  This was covered in a WeTest Workshop from way back, under "Automation TLC".  And a just-released article by Messrs Bach and Bolton covers some similar ground.  [As a hint, I've found that if you don't know what the term "code reuse" means and you're writing large numbers of automation scripts, maybe you shouldn't be ... ask a developer instead]
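
And here's a sketch of what that "code reuse" buys you - the same invented names as above, but with the login steps moved into one shared page object, so a login screen change becomes a one-place fix instead of a suite-wide one:

```python
# The same steps with basic reuse - a simple page object. Every test
# calls LoginPage.login(), so a change to the login screen is fixed in
# ONE place. Names are hypothetical, as before.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.get("https://example.com/login")
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()

driver = webdriver.Chrome()
LoginPage(driver).login("testuser", "secret123")
# ... each test continues against a logged-in session ...
```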

The denial mindset here (much as with manual scripts) is that you've invested a lot of time and effort into automation scripting.  At some point that really should start paying off.  Right now it's not going any faster than the tests the manual testers used to run.  And although the automation occasionally finds problems in new software builds, about 80-90% of the time it finds a problem in the script itself which needs changing.  And typically if it's a problem in one script, it's a problem in many scripts.

The problem is, if you've not chosen a tool and built up your automation scripts with maintainability in mind, it will NEVER pay off.

Your scripts are too brittle, and they will continue to break in multiple places from small changes.  The more scripts you have, the bigger your maintenance bill.  The hard thing is, it's probably easier to start from scratch with a new tool than to try to retrofit maintainability into what you already have.

So we chose a bad tool ... and implemented it badly as well.  Is this denial?  If you're continuing to fix it every time it breaks, then yes, it is.

What we have here is what I like to call "technical testing debt", and it shares attributes with other testing debt.  This debt keeps rearing its head - you don't have the time or backing to go back and deal with the fundamental issue.  So you band-aid it, and then band-aid it again, and then again.

And because you're addressing the problem so often, there will be a perception problem: surely that technical debt is decreasing with each occurrence you fix.  Right?  The technical debt is causing you to sink in time and resources - and so the sunk cost fallacy makes people say "as you're spending time on it, it must be decreasing".

No - not at all.  Each fix addresses only the occurrence where the debt broke - the place where it surfaced.  To really do justice to the problem, you have to pretty much come to a full stop and do a serious reworking of the approach, going much, much deeper.  That's something that's very hard to get backing for - especially if someone thinks you're already addressing that technical debt piecemeal "because it keeps cropping up as a problem".

The good news - not every automation effort goes like that.  But ask around: everyone I know has the tale of a suite of automation which went this way.  The trick is to know that if you're fixing breaks in your scripts more often than you're finding problems in your software under test, you need to ask if you're in a state of denial, and you need to address some fundamental problems in your automation strategy.

Post-script

Sometimes after I launch a blog entry, someone gives me a link so good, I need to add it to my article.  So thank you Simon for recommending this article.


Wednesday, February 3, 2016

Denial 102: "But we spent a lot of money writing test cases!"

Previously we looked at the psychological effect of denial, and what drives it and makes it tick.  Today we look at an example of denial at work within testing.  Ladies and gentlemen, I give you exhibit A ...

"But we spent a lot of money writing test cases"


I talked a little about this phenomenon in a post way back in 2014.  In my career, I've had to revisit old testing schemes several times ... heck, let's just call it what it is: "grave robbing dead/dormant projects".

Prior to every excavation of archives, I will have heard the legend "well we were late going into testing and the testers spent a lot of time writing scripts".  This usually comes from non-testers, and gives the indication that as far as they're concerned I am sitting on a tester's gold mine of material.  It has to be valuable, because we spent a lot of money on it.  [Its value to me has to be equivalent to the cost sunk into it]

Here's what I hear in that sentence ... "well we were late going into testing" in my ears means that there were a lot of problems.  So many that a basic version of the software could not be made to work well enough for testing to start.  That says there were fundamental issues on this project.  Could one of them be that no-one really knew the specifics of what they were building?  And if that is really the case, what's the chance that testers magically had a level of clairvoyance which eluded the developers?

And then there's the part that goes "and the testers spent a lot of time writing scripts", which roughly translates to ...


I kid you not - during my career I've seen some managers try to send any test contractors on leave for a month until it's ready for testing "to save costs".  So sadly if you're a contractor and you want to be paid, there's a benefit to looking busy.

As I've mentioned in my previous article, a quick attempt to correlate the execution logs with the test scripts often shows me whole reams of planned testing which was dumped and never run.
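
If you're wondering what that correlation looks like in practice, here's a minimal sketch - it assumes the scripts and execution log can be exported to CSV with a shared test ID column, and the file names and headings are my inventions:

```python
# Compare the test cases that were written against the ones the
# execution log says were actually run. File names and the "test_id"
# column are assumptions for illustration.
import csv

def ids_from(path, column="test_id"):
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

planned = ids_from("test_scripts.csv")
executed = ids_from("execution_log.csv")

never_run = planned - executed
print(f"{len(never_run)} of {len(planned)} planned tests were never run")
```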

New Zealand is such a small place, which means you might have worked on that project.  You might have left the company.  You might have left the city.  BUT I WILL END UP FINDING YOU!


It frequently happens - informally I might add - and I'm glad it does, because it allows me to dig deeper to find out what really went on, and to hear another side of a project's story.  And it typically points towards a problematic project - from vague outline to changing scope - rather than a tester who just wrote scripts they had no intention of running.



Sooner or later though, it's my job to burst people's bubble: they're not sitting on a goldmine of test scripts that are ready to run.  I'll take a scan through, and try to use what's been written to test the system, to see how much use it'll be.  Invariably though, it's the results or execution logs which tell me a lot more about what was done than a folder filled with test scripts.

This aligns with James Christie's discussion of how ISO 29119 impacts testing.  That standard focuses more on plans and scripts as auditable artifacts.  Christie argues that an audit always wants to focus first and foremost on what you actually did (over what you planned to do).  And he has an excellent series of articles on that, starting with part one of "Audit And Agile" here.

What I've learned about bursting bubbles is to do it gently.  Always work out ahead of time if anything is salvageable, and have your counter-offer ready (so we're going to exploratory test instead, maybe?).  That person who thought the testing was pre-written was probably hoping your test effort would be minimal because you'd be able to build on past work.  Let them know the degree to which you can capitalise on it - heck, if it just gives you a good set of starting ideas, that's something significant.

Even if you have a good group of test cases written which match the execution log you have, you'll still have to spend time learning the system and its intricacies.  For instance there might be a line going "expire the account", and it takes some time to find there was a test tool written which will age an account to its expiry date - but you need to find where they kept that tool.  Almost always a lot of verbal knowledge will have gone, or the details are there, just buried in a heap of other details.


Also remember that having too much documentation, as above, can be as much of a bane as having none at all.  Because you have to spend a lot of time going through it all before deciding if it's any help or a hindrance.  And I don't know about you, but I'm a bit of a slow reader.


Tuesday, February 2, 2016

Denial 101: Something I find hard to believe ...

Previously we looked at the effect of peer pressure when we try and adopt the new idea that's "all the rage".  Today as promised we're going to look at the impact of denial, an effect I find very much paired with the "group delusion".

As originally planned, I would be diving right in and exploring the places where we find denial as a very real effect within IT departments.  However as I wrote and expanded my ideas, I found a little too much material - so I ran it by a friend who said it really deserved to be split up over several posts ... so change of plan!


Although I talked a little about denial in my piece on the psychology of The Force Awakens (and believe me, have I seen that piece validated in recent weeks), I want to spend this first post looking further down the rabbit hole, understanding more about it.



Like any of the effects we've been talking about, it's easy to feel a little smug and superior as we discuss it.  As if these issues are something that "happen to other people".  But denial is such a powerful effect on us as human beings because it works on our emotions (like many of the psychological effects we're looking at).  We'd love to think that we use rational thinking to control our emotions, but often it works the other way around - our thinking tends to be slaved to rationalising the emotional outcome we want.

We are all led astray in our thinking when emotions are entangled.  The point of this series of articles is to shed a bit of light on common traps we fall into, so we can be a little wiser - maybe asking ourselves "is this a MacGuffin effect?".

In a nutshell, denial is the rejection of facts because of an emotional reaction.  I like how this is covered within the Your Deceptive Mind chapter on denial, where it says that people commonly fall into a denial trap by starting with their desired (emotional) outcome, and using it to systematically reject any evidence which does not support that outcome.  This form of reasoning increasingly requires the presence of conspiracies to support the believer's model of thinking.

And indeed a really good example of this is the piece I wrote in 2014 where we tried to convince someone who believed in a flat earth that the Earth was round.  In that piece they initially respond to evidence with vague science, saying "the Earth is round ... but flat ... like a plate".

Then it's suggested they could Skype someone in another part of the world, to see if it's night there whilst it's day where the sceptic is - however they're convinced the other party will be part of the conspiracy, and so they dismiss the experiment.

And of course there's the line which several people have said is their favourite, "When someone flies from London to Auckland via America, it's okay, because that route exists.  But if they go via Asia, then the pilot flies around Africa for a bit until the passengers get disorientated and it gets dark.  He then heads via America, to New Zealand, stopping off at a secret replica of Singapore they've built deep in the Andes".  More conspiracy.

So what leads so many people to go down that path?  Whenever we make any kind of decision, we essentially make an investment of time, money, ego, and pride into that decision - we're obviously committed to that decision "coming out alright".

But sometimes pride and ego will mean we just stick to that decision, even in the face of new and glaring evidence.  We want to be seen to be someone consistent, not someone who is ever wrong.

So rather than rectify our decision, we'll choose to undermine any contrary evidence, just like our flat earther.  This is known as "escalation of commitment" - where people continue to justify committing time and money based on an initial decision, because "we've already invested in this course of action".  The phrase "throwing good money after bad" perfectly sums up this behaviour.




Here's an everyday example for those of you who can remember driving before the days of SatNav.  A couple of friends, Thelma and Louise, are driving to Mexico City.  They're supposed to be stopping at the Grand Canyon as they do so, and they passed a sign that said they'd see it in 10 miles ... but that was 15 miles ago.

Thelma wants to go back as she thinks they've missed a turning.

Louise says she has an excellent sense of direction, and is sure they'll find the Grand Canyon soon enough.

Thelma says she's seen a sign saying they're heading towards a place called Bitter Springs, which means they're heading in the opposite way to both the Grand Canyon and Mexico.

Louise is sure that Thelma is just reading the map wrong and is not about to turn around now.  This way has to end up in Mexico eventually.

Right?


A couple having an argument about directions, where someone just won't turn back because "we've come this far".  Sound at all familiar?

Our flat earther has invested time and pride into his worldview, and because conceding that worldview means conceding his pride, he refuses to.  Even to the point of turning down a "round the world cruise" he won on the lottery, because it means admitting being wrong.



So - the question is, where does escalation of commitment happen in testing?  And I'm afraid to say, it happens everywhere!  Pretty much anywhere you've sunk time and money into doing something, there is a state of mind that wants to continue doing that course of action ... because it has to pay off eventually, right?  Oh dear God please, it has to pay off!!!



Other everyday examples of denial and escalation of effort you might want to think about ...

  • A friend pointed out the Vietnam war followed this behaviour all too chillingly.  It started out as a small involvement of US forces.  But as it went badly, more and more forces were brought in from the American side, because "we've come this far".  You see something similar in the "one big push" mentality in The Great War, particularly in The Battle Of The Somme.  I talked about the Battle Of The Somme, and "sticking with the plan" in the face of changing evidence here back in 2013.
  • "I read about this amazing diet last week.  I mean, I'd tried a few other diets over the years, but this one actually works.  I read it in a magazine."  You can substitute the world "diet" with any piece of revolutionary fitness equipment which "is like having a gym in your own home", and has so changed the world, it's not available in shops ... only to order over the phone on a midnight infomerical slot.  [It's like the gyms are conspiring to keep your membership]
  • The right wing American politician whose response to this month's school shooting death toll is "the victims and their families will be in my prayers".  Just like they were last month - except this time they'll pray really hard.
  • The Aztecs used human sacrifice to appease the god of rains.  If there was no rain, then obviously they'd not sacrificed enough people.  I talked about this here in 2014.
  • Variants on this meme when politicians have committed to a policy, despite it not producing the results they promised ...



If any of those points made you squirm uncomfortably, then well done, you're waking up.



[By the way - I've taken a few shots at the right wing there, and I'm an out and out socialist at heart.  But critical thinking still applies to me - especially if someone is sharing on social media a "news item" which aligns with my beliefs.  I'm often somewhat suspicious if it falls into the camp of "I knew it", and start checking up on it.  To be honest such fake stories annoy the heck out of me, because they undermine my political stance, and make it just way too easy for friends who have different opinions to "score cheap points" over me.]

Friday, January 29, 2016

Challenging Empathy: 30 years on ...

I was reminded today that this week is 30 years on from the Challenger disaster.


I've previously talked about the disaster itself.  It was an odd moment in history - like the Kennedy assassination or September 11th, everyone who was alive seems to know where they were on that night, and their thoughts and feelings.

It's my own tale I want to spend some time going through, as it links to some recent posts of mine.  I was 15, and remember my head full of revision for an upcoming physics test on radiation, when I saw the fateful footage of the Challenger.

I seemed to spend the whole night channel-hopping for news and theories.  I'd known about the plan to send the first teacher into space ... we'd heard how safe and reliable the shuttle was ... were there any survivors?  (For a few moments a parachute was spotted, which confused everyone and gave people false hope.)

It was memorable for me, because it was the penultimate time that I did a childhood tradition we probably all have had.  I couldn't sleep, so I went into my parents room to get into bed with them.


I told my parents that I just could not get the crew out of my head.  It upset me, but also confused me a bit.  The news was always full of terrible things - children being abducted and murdered, natural disasters, planes shot down.  But this really got to me.

The disaster is memorable for the words my mother used to explain why I was feeling that way.  I was a kid with my head filled with space and science fiction.  Those astronauts were everything I wanted and sought to be, and that made me empathise closely with them.  They were people very much like me, doing something I desperately wanted to do - I wanted to follow in their footsteps.  Though I didn't know them, it felt incredibly personal and scary because I saw so much of myself in them.

It's a comment I return to all the time to explain the world around me.  We see terrible things in the news constantly - and often it's not the numbers of people affected or killed which get to us, but the personal tales which remind us how close the victims are to us.  That is why we change the colour of our Facebook photo after a terrorist attack in Paris, and yet seem blasé about attacks elsewhere in the world where many more people die.  We see more of ourselves in the Parisian victims.

On the 30th anniversary, I'm reminded that the challenge with empathy is sometimes we need to spread it a bit wider.  To not just put ourselves in the shoes of people like us, but wherever we can, to put ourselves in the shoes of people a little different.

Tuesday, January 19, 2016

Peer 102: The MacGuffin effect in testing ...

Previously we looked at the then-to-be-released new Star Wars movie as a way to explore some psychological phenomena which we'd unconsciously be exposed to in December 2015.

Today I want to focus on one of those key factors (group delusion), and see how it affects us at work as testers.

Group Delusion

The term "group delusion" feels an awfully derogatory one - but as I discussed last time, it's subtle and can affect us all.  I personally see it as akin of peer pressure.  To me there's somewhat of a grey-blur between them, and they add up to the same kind of thing - an unconscious drift in our thinking.

I want you to imagine this scenario ...

You've just met up with a couple of testers, Bev and Steve, who you worked with two years ago.  Although it was meant to be more of a social catch-up, it doesn't take too long before you start talking shop!

Bev mentions how her team has recently started implementing a MacGuffin strategy in their testing, and she's finding it really interesting, but a bit challenging.  Steve snorts a bit - his project has been using MacGuffin for over a year, and he'd never go back!

That evening, you get some email spam from a recruiter (you remind yourself you really should unsubscribe one of these days), and notice the first job calls for "experience applying MacGuffin is a must!".  You scroll down, and it's not the only role which mentions it.

It only gets worse the next day when one of the senior managers calls you and several other testers into an office asking why you've not started implementing a MacGuffin policy yet ...

So we need to get down and start implementing MacGuffin right?

Hopefully your first reaction really is "what is MacGuffin, and why will it help?".  Sadly not everyone will.

A MacGuffin is a great term for this kind of effect in testing - it's a word often associated with Alfred Hitchcock; it's a plot device, one often kept vague and mysterious.

The mysterious MacGuffin used in Pulp Fiction

Over the last few years I've seen a few MacGuffins being traded around social media in association with testing.  I notice the MacGuffin effect when I start to see people wanting to build up their strategy not because a MacGuffin will help them with a specific problem, but because everyone else seems to be working with MacGuffins, and they're worried about being left behind.

It doesn't matter what they are; the peer pressure creates what's called a "keeping up with the Joneses" effect.  If your neighbours, the Joneses, go out and get a new BMW and a 50-inch plasma TV, you end up asking yourself why you don't have these things, and thinking that really you ought to go out and get them yourself.  Even if it's something you previously felt you never needed.

Let me give you a few examples of MacGuffins, and I'll break down the pitfalls I've seen ...
  • Test automation
  • Agile
  • Test management tools

Test automation


I'm actually a huge fan of automation (though I agree, it's more checking than testing).  Used well it can help my job as a tester, because it means a developer can check that software works to at least a level where I can interact a little with it.  If it's so fundamentally broken that a user can't even log into the system, far better that's found with a little automation than for me to find it on Monday, then be told I can't get a new build for another week.

The problem is automation is somewhat over-sold.  This really goes back to the 90s, when the sales blurb for the expensive software talked in terms of there being no need for testers.  All in the order of "costs half a tester's wage, tests 100 times faster!".

The problem is you need a really good tester to come up with the ideas for scenarios to check.  Then you need someone to program it all up.  Then to run it.  And to check the results if they show issues.  Then to maintain it when you change your code.  You still need people, and they're kind of doing "testing-esque" roles (especially determining scenarios, and investigation).  [Though you might need fewer of them]

If one of your tests "shows a red" (i.e. a fail), I guess you'll need to manually rerun the scenario to see why you got that result ... isn't that most likely to be a tester?

So even in a perfect scenario, you're still going to need people working testing (sorry if you bought the sales pitch).

However when you're sitting with Bev and Steve and they're going "automation is amazing ... it helps us to test really fast", everyone would want a piece of that action!  Especially if they scoff and go, "what .... you're STILL testing manually?".

I myself have experienced this kind of pressure - I had a manager at a previous company want to know why we weren't using automation on our current project.  This project was an integrated piece of hardware, similar in many respects to a supermarket self-service kiosk.  They'd heard you could get some free software called Selenium, and a friend was using it on another project nearby.


Well, that was an unexpected item in my bagging area!

I was lucky to be able to sit down with this manager and get them to understand why it wasn't suitable, and I was glad we were able to go through it together.

My points were,

  • For automation to work well, we really needed the product in our hands in as finished a state as possible quite early on.  The reality was we'd scripted up some scenarios, but we would not get so much as a demo of the product until it was delivered for testing (and believe me, I'd tried).  We were on a tight schedule, and automation would have jeopardised it.
  • It takes a long time to automate a test.  But it pays off if you expect to run that test dozens of times.  We had a 6-8 week window, expecting to rerun tests at most about 3-4 times.  Once released, we'd not touch this product again for years (if ever).  (See the rough sums after this list.)
  • Primarily though, Selenium works on web-based applications.  Our application was not even remotely web-based.  The technology just wasn't suitable.  I had to go through some Selenium websites to show this to him, but again it was worth doing so he'd understand.
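
The arithmetic in that second point is worth sketching out.  Here's a back-of-the-envelope check in Python - the effort figures are made up for illustration, so plug in your own estimates:

```python
# Break-even check for automating a test: automation wins only when its
# up-front cost is spread over enough runs. All figures are hypothetical.

def worth_automating(hours_to_automate, hours_per_automated_run,
                     hours_per_manual_run, expected_runs):
    """True if automating costs less over the runs we expect to do."""
    automated = hours_to_automate + hours_per_automated_run * expected_runs
    manual = hours_per_manual_run * expected_runs
    return automated < manual

# Our project: tests expected to be rerun at most 3-4 times.
print(worth_automating(8, 0.5, 1, 4))   # False - manual is cheaper
# A regression test rerun on every build for years is a different story.
print(worth_automating(8, 0.5, 1, 50))  # True - automation pays off
```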

All the same, it's easy to see how he got MacGuffined.

Agile

Again - I'm a huge fan of agile.  Just not when it's mis-sold.

I see and hear this a little - companies who've felt compelled to "jump on the agile bandwagon" because they've heard about everyone else's huge successes.  But without really "getting what agile is".

Agile though is a hard sell - when you tell customers the reality that "agile isn't about guaranteeing success, it's about surfacing failure up-front so it causes less damage, costs less and is more easily rectified", some look a little shocked and terrified.


I thought you promised us success!

Agile has become such a big MacGuffin that no IT company can afford to say they don't do it.  But unfortunately there's a little bit of "just call it agile" out there, where the principles aren't really being followed or understood - it's just a rebranding of what was done in waterfall.  As I've talked about regarding our agile transformation, there's an awful cargo cult trap of trying to keep doing what you've always done, but "whack an agile label" on it.

Warning signs for me tend to be,
  • "Well ... we do stand ups" [That is - that's the only difference which has been noticed]
  • "We have a 2 week development sprint ... which is followed by a 2 week testing sprint"  [Mini-waterfall]
  • "We' then run our retrospective ideas past management" [Sounds awfully like the agile team are not empowered]
  • "We do SAFe"

Test Management Tools

I'm often told by their advocates that "the best thing about this MacGuffin Test Management Tool is that you get all your reporting for free".


Actually, let me repeat that and add emphasis - "the best thing about this MacGuffin Test Management Tool is that you get all your reporting FOR FREE!!!".

Just like agile and just like automation, test management tools can be very useful.  For very complex projects with huge numbers of testers, they allow you to break down the testing into different areas, and track which tests touch which requirements.  And that's useful.

When advocates of these tools scoff with "well, you can't just use Excel, can you", I usually squirm a little, and reply "well, often Excel is my tool of choice".  Am I mad?

Here's my problem with test management tools: whilst many of them do allow you an overview, it comes with a price, which is not free.  And I don't mean the "per seat license dollar cost".

Test management tools often constrain testers to work and test in a particular fashion - one which often is not the natural/logical/comfortable/most-efficient way some testers operate - especially exploratory testers.  If you have a tool which isn't really suitable for how you work, it feels clunky, it slows you down, and it creates a drag on your work velocity.

They can also give a false perspective - because the kind of high-up managers who love them rarely dig down into the detail.  A couple of years ago, I had one such manager mention how nice it was to be able to see 500 scripts scoped out for the testing effort, and like a commander they could watch their status as they were run.  When I looked into some of these tests, they had a title, but the majority of them had no steps ...  It's something I'm aware of.  I know of some projects around Wellington where the test management software was mandated, and the testers found using the system for testing so difficult and clunky, they ended up only using it to track bugs.
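
That kind of hollowness is easy enough to surface if the tool can export its test cases.  A minimal sketch, assuming a CSV export with "title" and "steps" columns (both the file name and headings are invented):

```python
# Flag test cases which have a title but no steps behind them.
# The file name and column headings are assumptions for illustration.
import csv

with open("test_case_export.csv", newline="") as f:
    cases = list(csv.DictReader(f))

hollow = [c["title"] for c in cases if not c["steps"].strip()]
print(f"{len(hollow)} of {len(cases)} test cases have no steps:")
for title in hollow:
    print(" -", title)
```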

This is how some people think the charts in test management tools work ...

I like to use Excel when practical, because I can use it in any manner which makes sense for me to keep notes for testing and to break down the effort.  If a team has fewer than 5 testers, you probably don't need a test management tool at all.

I also know that looking at graphs or output from these tools does not show me where we have problems (and I'm a trained scientist, good at reading trends in graphs, but also at noticing noise).  Often those problems lie under the surface of the pretty graphs.  Far better to have a daily stand-up with the other testers, and get them to talk about any problems they're encountering.  That avoids me having to stare at numbers or statuses on a graph or dashboard, trying to "read the signs" like some fortune teller.  I trust my testers to tell me about problems more than I do a tool.  If I don't trust them, I shouldn't have them working for me.

Likewise when it comes to scaling up a group, I'd rather have 15 testers split into 3-4 groups, each with a team lead, and all those team leads doing a stand-up with me daily to pass on problems and pain points.  Even if we're using a test tool, it's important to keep talking to the people using it.

[I'm going to avoid a discussion on test metrics and reporting here, as I've something upcoming on just that topic]



So that's the MacGuffin effect leading us into sometimes costly mistakes!  Any of that feel at all familiar?  Any other MacGuffins you've come across?

Next time we'll look at the phenomena of denialism, and the sting in the tale there...

Tuesday, January 12, 2016

Live like Bowie

Damn.


I think that describes a lot of people's reaction to yesterday's announcement of the death of David Bowie at 69 from cancer.  It's poignant because I think everyone takes a moment not just to remember him, but also their own personal family heroes who were similarly lost to one cancer or another.  I, like many, feel huge empathy.  Not only was he a musical hero, but we know something of the struggle and the loss to cancer personally (for me, my beloved father-in-law especially comes to mind).

On social media, many of my friends are posting tributes to Bowie, and links to their favourite songs.  There are so many good songs, and no two friends seem to have chosen the same one - probably a testament to his back-catalogue.

David Bowie was often referred to as the "chameleon of pop".  He reinvented himself and his music regularly.  There are many bands and singers out there who find a sound and a style, and stick to it their whole life.  He didn't.

He likewise was unafraid to dabble in acting - and was particularly superb as the haunted alien of The Man Who Fell To Earth.

Bowie was never afraid to experiment, to collaborate and try new things.  Consequently, among so many great songs, there are a few that just don't work for me.  I'm thinking of his cover of the Doors' Moon Of Alabama - however even there he didn't just copy and paste, but dared to try something very different.

He sung with Freddie Mercury.  And Mick Jagger.  And Bing Crosby.  Bing Crosby!

What remains with us is almost 50 years of music.  But his memory should propel us to embrace what he did.  To embrace working with others, daring to cross the floor to collaborate, to not let a box confine and define us.  To avoid forging a career that repeats the same playlist of actions our whole life, but to be daring, to try new things and reinvent ourselves.

Do everything with passion - although try and avoid the cocaine habit.  And if occasionally being daring doesn't work, don't be afraid to try again.





Thursday, January 7, 2016

Looking ahead to 2016 ...

Today was my first day back at work, and on the long walk in, I was typically reflective on what I'd really like to see out of this year.

I was reminded this morning of something the late Leonard Nimoy said about explaining the popularity of Star Trek, "I think people enjoy watching this group of characters solve problems together".


It applies to both Star Trek and The Next Generation - and, on the flip side of Bob Marshall's comments this morning (which really annoyed me, in case you hadn't guessed), it is also the sign of a good team: you are within a group of people who you enjoy solving problems with.

Last year I worked with a really good agile team - although to start with, like many teams, we had our dramas before we got sorted.  Just as we felt we were running well, we ran out of work on the project.  It was frustrating, because once we'd got there and built up that trust, it was the kind of group you wanted to remain with.

It's the kind of environment we'd all like to find ourselves in, and indeed as Leonard Nimoy stated, a good reason why people are drawn to the positivity within Star Trek.  Although there were a few egos in play behind the scenes, on screen everyone works together well.


For example, you'd never find Kirk sitting in Sulu's seat going, "piloting this starship is easy Sulu - you just flip a few switches ... anyone could do it".


Likewise - although Chekov could operate the science station, it was acknowledged that Spock was better at it, and Chekov was a better navigator.


But no-one could really stand in for Uhura.



What's interesting is that during the series there was very little politics or in-fighting between these guys (unless possessed by an alien parasite).  This is a team which was well and truly at the performing end of the forming-storming-norming-performing spectrum.

And of course when they retold the story in the 2009 Star Trek reboot, they took the whole team as newbies, and put them through trials to form, storm, norm, then finally perform.  The key to performing is having respect, and typically in such movies, it's a journey to earning that respect.

Here's then to performing in 2016 ...