Why do we do what we do? How do we make that choice? How do we hold to that choice? And what happens when choices conflict and collide?
I spent the last few days at an academic conference (which is actually more interesting than it might sound on paper). What was curious was a recurring theme in many of the discussions I participated in and the presentations I saw. Given the diversity of presentations I attended, this was a surprise. Topics ranged from team effectiveness to gender to diversity to leadership to organizational dysfunction. Yet the concept that constantly emerged in exploring these topics was that of trust.
Trust has a phenomenal influence on how we function as human beings. It drives our behaviour as individuals, it colours our relationships with those closest to us, it shapes our interactions in teams and it fundamentally drives how we function in organizations. Where trust is in place, we tend to, if not thrive, then at least be comfortable. Where there is a lack of trust, any range of dysfunctional behaviours can emerge.
What operationalizes trust, however, is another factor: motive. Motive is the intent that someone brings to the table. It is the underlying purpose behind their actions. Motive is what frames and defines what someone is trying to accomplish. Like politics, it is a value-neutral concept. Just as politics can be destructive or virtuous, motives can be (or appear to be) anywhere on a spectrum from blatantly underhanded to genuinely beneficial.
Where trust is undermined, it is most often because we perceive someone’s personal motives to be misaligned with the stated goals or intent of the team, the project or the organization. They may have their own personal agenda, they may be working at cross purposes to the stated objective or they may be actively working to undermine the enterprise. We see their actions as being underhanded, self-serving, manipulative or downright antagonistic. Where motives don’t match up with expectations, everything starts to unravel.
The inherent problem here is that we are dealing with perception. It is impossible to objectively assess someone else’s motive. Much of the time, we may not even be consciously aware of our own motives. While it’s true that humans tend to engage in habits and behaviours in order to get a pay-off, the challenge is knowing what that pay-off actually is. We smoke, drink, eat comfort food, get lost in front of the TV or avoid exercise because in some way that behaviour serves us, even if we implicitly know it is unhealthy. We may not recognize why this is true, yet there is still some underlying reward.
This goes a long way to explaining seemingly irrational behaviour. When others make choices that we don't understand, don't agree with or find baffling, there is still an underlying logic to them. It may not be our logic, to be sure. But to them, the choices they are making and the actions they are engaging in are logical. Those choices may be skewed, counterproductive or even self-destructive, but there is still some value that they perceive, even if it is on a subconscious level.
Whatever that pay-off is, whether concrete and tangible or psychological and ephemeral, it gets at the heart of motive. It provides the basis for understanding the behavioural why that explains otherwise inexplicable choices and actions. Taking the time to assess what that value or reward might be goes a long way towards knowing where someone is actually coming from, and how to appropriately respond to actions that appear misaligned or inappropriate.
Let's take a relatively obvious example: a project manager reporting their status. Theoretically, status reports are objective things. They tell the story of how a project is doing, what has happened, what is still ahead and where the project stands relative to the plan. That story is not always 100% representative of reality, however. Sometimes it isn't even close. Inaccurate status reports are astonishingly prevalent. And an essential question that we need to come to terms with is, "Why?"
Typically, errors start small. From there, they grow. A project manager might have a temporary problem that they believe will get resolved. Something that was due today is expected to be done tomorrow, so they mark it as complete. Variations in schedule or budget are expected to self-correct, so no one draws attention to them. What starts as a small deception or misrepresentation, however, often quickly grows. The thing that was due last week still isn't done a week later. The small variation in budget is even bigger today. The problem we were wrestling with is now verging on being a full-blown issue. And because we weren't entirely honest last week, we become that much more reluctant to fess up now.
The pay-off in this instance for maintaining the misrepresentation is relatively obvious. We don't want to get found out. We don't want our competence questioned. In particular, we don't want to be revealed in a lie and have our honesty and integrity challenged. The irony, of course, is that the strategy going forward is essentially to double down on the behaviour that got us here in the first place. We compound the lie, and in doing so we begin to create an alternate reality that we want—and fervently hope—might become true in the future. We become invested in this alternate reality, even believing in it despite all objective evidence to the contrary.
Falsifying status reports is one thing. But what about larger deceptions? How do they come to be? What drives them? How do multiple people become enrolled in the same worldview? We don't need to look far back in history for examples, and the financial marketplace provides many of them. From the involvement of investment firms like Goldman Sachs in precipitating the global financial crisis of 2008 to the accounting scandals of Tyco, WorldCom and Enron earlier in that decade, the examples are numerous. Greed is a relatively obvious motive here. The motive behind other deceptions, though, is less obvious.
Take, for example, the decision to launch the space shuttle Challenger on the morning of 28 January 1986. Most of us know the essentials of the accident that occurred: an o-ring failed in one of the solid rocket boosters, leading to the explosion that brought down the shuttle. Although the public record of the decision is exhaustive, fewer people know why the accident was allowed to occur (for it was, indeed, a deliberate choice).
The problem with the o-rings had been identified long before the fateful 1986 launch. A prior launch in 1985, the coldest on record, had resulted in the greatest o-ring damage of any shuttle flight, although it was far from the only flight on which o-ring damage had occurred. Morton Thiokol, the manufacturer of the solid rocket boosters, was commissioned to investigate the cause of the o-ring failures; their report was subsequently edited to replace any reference to temperature sensitivity with a more oblique and evasive reference to "o-ring resiliency." The o-ring problem was filed away as an "acceptable flight risk."
The day before the launch, in a pre-flight readiness review, Morton Thiokol engineers and managers argued with NASA executives over whether it was safe to launch. Morton Thiokol was pressured to prove, beyond a shadow of a doubt, that it was not safe to launch, something they struggled to do. The engineer responsible for the o-rings, Roger Boisjoly, could not quantify the problem, even though he viscerally feared the o-rings would fail. NASA challenged Morton Thiokol's data, with launch executive Larry Mulloy berating them: "My God, Thiokol, when do you want me to launch—next April?"
Flight readiness reviews are supposed to work in exactly the opposite way, of course. Any expression of concern is supposed to stop a launch, and safety must be proven before proceeding. In this case, the burden of proof was reversed. The question we have to ask is: why? And the answer, once again, is motive. Not one motive, however. In this instance there were many motives, all aligned and all inexorably moving the shuttle program in the direction of catastrophic failure.
NASA had sold the shuttle program on a capacity of 65 launches per year, more than one a week. 1985 had been a record year, with only nine launches. The schedule called for 15 launches in 1986; by the end of January, not one had occurred. The Reagan administration was challenging NASA to perform, and the organization was threatened with cuts to its funding or outright closure of the space program. There was also pressure for this particular launch to proceed, as it carried Christa McAuliffe, a schoolteacher. Putting a civilian in space had been a major plank of Reagan's re-election campaign, and with the State of the Union address scheduled for that evening, there was considerable pressure—and motive—from the White House to have the shuttle in orbit.
Morton Thiokol had recently been advised that its exclusive contract to supply solid rocket boosters was under review, with the potential loss of billions in annual revenue. The decision to overturn the engineers' "no launch" recommendation and sign off had the motive of accommodating a major customer. Larry Mulloy was known to have a pathological intolerance of dissent, and was on record as asserting that under no circumstances would Marshall Space Flight Center be responsible for a shuttle delay. Those who reported to him feared the consequences to their careers if they opposed him. Even the edits to the original report that deemed o-ring erosion an acceptable flight risk had a motive at work: keeping Vandenberg Air Force Base in California available as a backup launch facility for the shuttle. Temperatures there are often far colder than in Florida, and any launch commit criteria tied to temperature would have severely constrained launch opportunities. The only person seemingly motivated by the fear of losing the shuttle and her astronauts was Roger Boisjoly.
Each of the above examples reveals how motive can compromise intent, and how the resulting behaviours can derail intended outcomes. The challenge is what to do about it. It can be difficult to raise our hand and say that there is a problem. We may fear the consequences to us, to our career, to our project or to our organization. Not addressing problems early, however, usually just allows their consequences to scale exponentially. If we don't want the situation to end in catastrophe, we need to be willing to address potential problems sooner.
Doing so requires keeping in mind the end purpose we are working towards, and maintaining visibility of the resulting goals or objectives. Keeping outcomes front and centre provides a means of testing motive, but it also provides a safer way of raising potential concerns. Rather than directly challenging a person's actions or intent, it allows us to reframe the conversation. We can question whether an action is best aligned with the goal or objective, and explore how to improve its relevance and effectiveness, rather than engaging in criticism or confrontation.
Staying clear about our own objectives and intent is an excellent way of testing our personal motives and actions as well. Where we find ourselves engaging in behaviours we don't value or habits we want to shift, we have the opportunity to question their alignment with where we are truly trying to go. We can reconsider our choices and reorient our motives towards outcomes we care more about. If we can do that regularly, we can shift behaviours and reorient habits that don't serve us into actions that do.
Every decision point is an opportunity to test our own motives, and those of the people around us. Every decision point is an opportunity to move further in the direction we want to go, or to undermine our ability to get there. Being clear about our motive and intent enables us to gain clarity about what is most important to us, and lets us be clear about the choices and decisions that best serve our desires and further our goals. If our motive is to make the best decisions possible under the circumstances, then the opportunity is straightforward: make the decision you can always live with, regardless of what ultimately happens.
Bill Neaves says
Thanks for providing an excellent summary of the events leading up to that day. I have used the Challenger disaster a number of times as a teaching case in statistics and probability, in project risk management and in conflict management. It never ceases to amaze me how wanting something to be true can cause otherwise intelligent people to become blind to hard data that is telling them not to do it.