I see you. I see your look of skepticism. Quizzically reading last week’s column. Wondering, “OK, mister smarty-pants, if you don’t like KPIs, just how do you measure success?” I also heard some muttering along the lines of, “That’s all well and good for you, but some of us live in the real world, and have bosses that demand measurable results.”
I get it. We live in a rational world. There are bosses and organizations that demand proof. That want to see results. That want to know, clearly and unequivocally, that a change—if made—will lead to specific, concrete and measurable outcomes.
That’s not to say that the world is actually rational, or behaves in a way that even approximates rationality. But people want to impose measurability and conformity nonetheless, and preferably to two decimal places.
This was the point of last week’s discussion. Those who use terms like “KPI” are precisely the people that want cold, hard numbers. Who want proof. Who want solid, incontrovertible evidence that changes deliver results, and concrete specifics of what those results were.
The challenge is that measuring outcomes is hard. Very frequently—particularly on initiatives that matter—it’s impossible. That’s not something that those on the KPI bandwagon like to hear, but it’s still true.
This of course leads us back to the question that started the whole discussion: how do you demonstrate results, when you can’t accurately measure them? Recognizing that most of us are still obliged to do something, this is my very best effort to provide some meaningful guidance on strategies that may work.
To start, let’s go back to the example I shared last week. The initiative in question had as its primary goal the transformation of the culture within an organization. Specifically, it was to help staff be more collaborative, consultative and client-service focussed. It was to help them become consultants to their clients, getting beneath surface requests to understand the real problems and needs, and figure out how to deliver solutions in a way that allowed problems to stay solved.
That’s no small order. There is a lot of work and a lot of evolution and adoption required if an initiative like that is going to be successful. This isn’t just a change in process and approach. It is also a shift in attitude. It is a developing of skills. It is a building of awareness. And it is a change in behaviours and mindset. Done successfully, that means that a change in operating mode within the organization is completely internalized by staff. They view the new approach as normal. By the end—done well—they would have a difficult time imagining a time when they didn’t work that way.
So how do you measure something where—almost by definition—demonstration of success is that you don’t realize a change actually happened? This is the very real challenge that many of us face with our change initiatives. And it leads to my first rule of measurement: the more important something is, the harder it is to measure it well.
If we unpack what we are trying to do with this initiative, though, there are really three major forces at work. First, we are trying to change the culture. Second, we are trying to adopt a more collaborative, consultative approach with clients. And third, we are trying to get better at identifying client problems, and at devising and implementing better solutions. Surely we can measure something in there, can’t we?
Well, we can and we can’t. There are a lot of different ways to think about and assess culture, for example. There are assessment frameworks designed to evaluate a range of cultural behaviours. The challenge is pinning down which changes you actually want to evaluate. Client service poses similar problems, and so does problem solving. The difficulty is getting specific about what “different” needs to look like here.
More importantly, all of these measures are perceptual. You’re asking for an opinion. Every cultural assessment asks respondents how they perceive the culture, and there are any number of unrelated factors that might influence how—in any given moment—people respond. We can measure client satisfaction, and how that changes over time. And we can measure the degree to which people perceive problems being solved, or the effectiveness of solutions. These measures are not objective, though; they are subjective opinions based on in-the-moment responses, and we don’t know what else is driving the response.
That’s not to say don’t use those measures, but it is to say be cautious about how you interpret them, and the meaning that you build into the results. Because subjectivity is—well—subjective. We don’t know how much a given rating is a product of real satisfaction with what we’ve done, or of other factors that might influence how we are being perceived. Our client might be having a really bad day—or a really good one. They might be trying to be nice, or to not rock the boat. They might philosophically approach ratings as “I’ll give you top marks until you do something wrong” or “you start at the bottom of the scale and build up as you impress me.” Give people an odd-numbered scale, and some people will sit on the fence. But if we start with a baseline of how an organization performs today, we can at least evaluate how those scores change. Subject to all the limitations I’ve already mentioned—and more.
Moving beyond the subjectivity of the measures, there is another challenge that we have to solve. Specifically, if we’re measuring the impact of a particular initiative, there is the fundamental question of how we know whether any changes in performance are a result of the change we’ve made or of some other factor. And the short answer is: we don’t. Your measure might be as concrete and specific as profitability, and—directly measurable though that might be—just because it goes up doesn’t mean that the change you made was a success.
Performance can change for any number of reasons, not just the one that we are focussing on. In my client organization, for example, this isn’t the only initiative underway that is focussed on culture change, on improving the quality of service or on the effective delivery of solutions. So isolating impacts to any one initiative is nigh on impossible. You may see improvement, but what drove it could be any number of changes. Or none of them.
In the face of challenges in measuring impact, what we often resort to is just tracking activity. How many calls did we answer? How many customers did we serve? How many widgets have we implemented? How many requests have we closed? How many people have participated in training? Given that action and behaviour are part of any change, some level of tracking might be useful. But this doesn’t tell us how well we did, or how effective or responsive we were. So by all means track it, but we need to be careful about what we claim the activity to represent. Action and activity are not results and outcomes.
As we began this discussion, I noted that I was identifying strategies that may work. You will also no doubt have noted lots of conditional statements as I’ve discussed possible approaches. As much as this may look like dissembling and evasiveness, it’s actually not. It’s once again an acknowledgement that the right answer depends.
What it depends on, in this case, is what you need to be able to demonstrate. Or, to take a slightly more cynical view, what you can get away with. What I mean by both of those hot takes is that the required measurement approach largely depends upon the expectations of the executives you need to support. And that’s going to come down to two factors: their bias towards measures and KPIs, and the degree to which they feel vulnerable about the investment being considered.
I have had executives sponsor initiatives where they know what is being done is complicated, difficult and fuzzy. Where they also know that it’s the right thing to do. And where their required proof was limited to a gut-level check that progress was being made and that things felt like they were on the right track. Which is by no means an indication that they were flakes, softies or AWOL sponsors. They passionately cared about what we were doing, they scrutinized and challenged, and they kept close to the work being done. They just weren’t fussed about proving the value of the work, because they were already convinced of that value.
I have also had executives that demand proof. They want measures. They challenge business case and scorecard alike, and demand reworking and readjustment of claims, promises and metrics until they are satisfied that they have exercised appropriate scrutiny and can trust that results will be delivered. They believe that the metrics they’ve insisted upon will demonstrate impact, highlight problems and guarantee results. I’ve built the business cases and suggested the metrics and done what I can to get them to a comfort level. And once they’ve applied that scrutiny, they’ve often left the project alone and paid it little further attention.
Speaking personally, I’d rather work with the trusting client that keeps their nose in (scrutiny notwithstanding) than with the hard-nosed numbers person that judges and then bails. And the reason for that is very simple: the executive that trusts but verifies is more honest about the opportunity, and more realistic about what’s required for success. They know that the initiative is important, but they also know that it needs their ongoing support and attention if success is reasonably going to be possible.
I can give you all the numbers in the world. I can design scorecards that will knock your socks off. I can devise metrics and measures for just about anything. And I can do it to two decimal places of precision, and beyond. But I’d be lying if I pretended that the numbers told the full story. They’re an approximation of the truth and a mere semblance of objective reality.
What this all comes down to is a need to be clear about what matters, and what we can do to indicate whether we are seeing more or less of what we care about in terms of actual performance. That means that the metrics we use don’t tell us the story. What they do instead is point to where we might look in order to discover the story. Where we see the changes that we hoped for, we still need to follow up and verify that what is happening is what we want, and that it is happening for the reasons we hoped. Where we aren’t seeing the results that we want, we need to look more closely and ask why. And when results shift and change over time, we need to avoid being overly confident that what is driving the change is the awesome work that we are doing.
Measures are indicators, and nothing more. Metrics can track and inform, but they can’t extrapolate and explain. Many things can be quantified, and those numbers can tell you how many, how long, how fast or how much. But the “so what” comes from understanding the story behind the numbers. For every indicator on every dashboard, there is an underlying story that needs to be explored: what’s really happening, what’s being hoped for, what’s being hidden and what’s being ignored.