I have a confession to make. I loathe and despise the acronym “KPI.” And the term “key performance indicator.” Any time I hear them brought up is a strong and leading indicator that I’m about to not enjoy the forthcoming conversation. And the reason for that is virtually always the same.
Business has been saddled with key performance indicators for several decades now. The term arguably became popularized with the advent of the balanced scorecard, a concept developed by Robert Kaplan and David Norton that owes its origins to a project originally sponsored by KPMG to develop an improved approach to performance measures. That led to a landmark HBR article, several popular business books and a pretty successful consulting career.
The term actually showed up a little bit earlier (there’s an article by John Rockart in 1979 that describes “key indicators” and a woefully primitive means of dashboard reporting). But the term increased in usage exponentially once organizations started abusing Kaplan & Norton’s ideas.
Where all of this starts is an entirely reasonable proposition. Financial measures tell us one thing, and only one thing (how much money did we make/lose/waste/frivolously-fritter-away in a given time period). The challenge is that financial performance has nothing to do with the vision, purpose, mandate, mission, intent or alignment of a corporation and its actions. So the goal was to find some additional methods of measuring that would help move things along and provide a bit of a different, relevant and more contextually-appropriate perspective.
The astute reader might at this point be wondering what my fundamental issue with KPIs is. After all, my take is frequently different, and I’m all about relevance and context. There should be a lot for me to like here. And yet, without fail, any mention of KPIs is enough to raise my hackles and trigger my early-warning system, and that warning rarely proves to be wrong.
To unpack this, let’s start with terms and terminology. We went from “key indicators” to “key performance indicators.” Which, at the very least, allowed us to create a brand new three-letter acronym (TLA).
On the surface, there’s actually not much to find objectionable. You start with “key.” The theory here is that for something to be a valuable performance indicator, it has to be select, particular and meaningful. We’re homing in on the “right” indicators, the ones that make the biggest difference, or are expected to have the largest impact in moving the needle.
“Performance” isn’t a terribly challenging concept, either, if we keep our discussion at 30,000 feet. We want to know how an organization is really doing. While financial statements might be universal, the performance of any organization is going to be dependent on what it does. Its industry. The market it serves, the niche it occupies. The special sauce that flavours its own particular business model.
This is, of course, where part of it starts to get problematic. Strategic performance should theoretically be unique. Competitive advantage being what it is, we’re supposed to be finding our own way, and not simply doing what everyone else does. And yet, in reality, organizations often value conformance over contrarianism. Best practices suggest that there’s one accepted way of doing things. Benchmarking demands that everyone measure everything the same way. And the sheer effort required to really get at what actually constitutes performance means that many organizations opt for the easy way out by copying what everyone else is doing.
And then there are “indicators.” What could be more innocuous than these? These are just the measures that get us to our key performance, no? No. Or at least, not usually. Measurement is actually a wretchedly complicated thing to do, if you’re going to do it well. And there are several different forces at play that further complicate things. There is what we can measure, what we should measure, what would be meaningful to measure and what would be appropriate to measure. And none of those categories necessarily overlaps with the others.
It has often been said, “What gets measured gets managed.” And that’s true. We’ve known this for decades. The Hawthorne Effect got its name from the Western Electric plant where Elton Mayo studied productivity; when the researchers tried to isolate what improved productivity, the answer was “everything.” Any change that got attention increased productivity; not paying attention caused productivity to decline.
That has some consequences for what indicators we choose, and what we decide to measure. An example I’ve often used in training involved an internal IT help desk within an organization. The phone system that drove the call centre spat out a variety of statistics on a regular basis. So management would periodically get an update on the number of calls, and the average length of a call.
Staff interpreted this to mean that the more calls the centre took, the greater its relevance, and the shorter the call, the more efficient they must be. What that translated to in behaviours was something along the lines of, “So, you’re having this sort of trouble? Try this, and if it doesn’t work give me a call back. Bye!” The result: a short call, and a virtual guarantee that the client would be calling back again.
If we really, really cared about customer service, we’d probably measure something completely different. We’d focus on what the client being served actually cared about. Which, for a help desk, is getting their problem solved. Rather than how many calls, or how long the call was, we might care more about “To what extent did we solve your problem the first time?”
Measuring that is a very different thing. For an organization to be able to do so, we need to be able to track each unique problem that a client encounters. We also need to monitor how many attempts were required to solve the problem. We might give thought to how positively the client viewed the experience, the relevance of the solution, and the degree to which the problem stayed solved. That’s a lot more information. It also introduces a great deal more subjectivity.
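To make the contrast concrete, here’s a minimal sketch of what that kind of tracking might look like. It’s written in Python; the field names and sample records are entirely hypothetical rather than drawn from the help desk I described, but they illustrate the shift from counting calls to tracking whether each unique problem was solved on the first attempt.

```python
# Hypothetical sketch: report first-contact resolution instead of call volume.
# Field names and data are illustrative only.
from collections import defaultdict

calls = [
    # (client, problem_id, resolved_on_this_call)
    ("acct-17", "printer-offline", False),
    ("acct-17", "printer-offline", True),   # second attempt at the same problem
    ("acct-22", "vpn-setup", True),
    ("acct-31", "password-reset", True),
]

attempts = defaultdict(int)   # attempts per (client, problem)
resolved_on_first = 0
problems = set()

for client, problem, resolved in calls:
    key = (client, problem)
    attempts[key] += 1
    problems.add(key)
    if resolved and attempts[key] == 1:
        resolved_on_first += 1

total_problems = len(problems)
fcr_rate = resolved_on_first / total_problems if total_problems else 0.0

print(f"Calls handled: {len(calls)}")               # the old measure
print(f"Unique problems: {total_problems}")
print(f"First-contact resolution: {fcr_rate:.0%}")  # what the client actually cares about
```

Even this toy version shows the trade-off: the call counter comes for free from the phone system, while first-contact resolution requires someone to decide what counts as the “same” problem, which is exactly where the extra effort and the subjectivity creep in.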
And this is where we get to the heart of my problem with “key performance indicators” as a concept. The presumption is that what we are measuring needs to be clear, precise, objective and quantifiable. The whole concept of “SMART” measures (a biased acronym if ever there was one) reinforces this: the idea that indicators should be “specific, measurable, achievable, relevant and time-bound.” It’s a nice concept, and it’s a catchy way of communicating an idea. But it’s a misguided notion.
When we actually try to get at what matters, finding specific, clean and clear ways of measuring things becomes difficult. Strategic objectives—especially ones that stretch organizations in unique and challenging ways—are often fuzzy, complex and evolving. Trying to find measures to deal with those objectives is complicated. What we often wind up doing is developing measures that are proxies for behaviours and results, because measuring the actual result is impossible.
To use a highly relevant and extremely recent example, I’ve been working with an organization on a comprehensive culture change program. The organization knows they need to undertake the change. There is fundamental agreement amongst the executive about the importance of the work. They are also entirely agreed upon the approach being taken, and the way that the change program will be deployed and implemented. In the face of significant change, all of those statements matter. They are also not common.
A little while ago, however, one of the executive asked the question, “How will we know this is working?” An innocent question, on the surface. An understandable inquiry, even. The inherent challenge is that the best answer is “You just will.” Which essentially predicates approval of a significant change effort costing hundreds of thousands of dollars on the entreaty, “Trust me.”
And they don’t need to trust me. They need to trust themselves. They need to be confident in the decision they are making, and put their support behind the program’s adoption. But when someone challenges them with a need for facts, a request for evidence, a demand that they prove this was worthwhile or led to meaningful change, the trust goes away. And if I’m honest, there aren’t a lot of facts and evidence to take its place.
That’s not to say that this change program is particularly unusual. There are a lot of moving parts, but the goal is to get staff to take a more entrepreneurial, consultative, client-focussed approach to their work. Laudable, necessary, and—if they get it right—extremely valuable. While the program is unquestionably taking staff out of their comfort zone, success means that the same people have exponentially improved their service to the organization, their value as employees and the relevance of the department they work for. In terms of strategic focus, it doesn’t get much more relevant.
Measuring that, though, is complex. Proving it is virtually impossible. The reason for that is that the change program is trying to shift culture, change mindsets and encourage very different behaviours. And the only way to do that is to have people within the department embrace and internalize a new way of operating. That happens over time. The effects are subtle. The shifts of any one person are gradual, and yet on a collective basis represent a widespread nudge. But while the needle will shift, there’s no clear understanding of what the movement of the needle actually represents in terms that are distinct, concrete and measurable.
That’s because, if this change really works well, people will internalize the change. They will assimilate it progressively, and in doing so not even recognize that the change is happening. If you compare mindsets and attitudes at the start with behaviours and actions at the end, you should see massive transformation. But true success in this change will be that the people making the transformation see the new way of operating as a completely natural one, where they couldn’t imagine working any differently. In a perfect world, the change isn’t a change; it just is.
That’s enough to challenge the relevance and merits of key performance indicators. The things that really matter often aren’t measurable. And if they are, the measures aren’t specific, direct or tangible. What strategically matters most to the vast majority of organizations is intangible, indirect and fuzzy. Trying to impose measures is trying to create hard edges out of fuzzy and amorphous boundaries.
My problem with key performance indicators goes a little further, though. There’s a mindset attached to the idea and invocation of KPIs that foreshadows much more significant problems. In my experience, those that demand or promote KPIs don’t want fuzzy. They want hard, clear, measurable and quantifiable boundaries. They like certain, objective and precise understanding. And they particularly loathe the fuzzy and the conceptual.
Words matter. And the words you choose tell me your biases. Brillat-Savarin said, “Tell me what you eat, and I’ll tell you what you are.” I’d paraphrase that somewhat: “Tell me what you value, and I’ll tell you what you are.”
If you value key performance indicators, in my view, you are privileging measurement—and measurability—over what is actually important. If you value objectives, you value potential, direction and goals. And if you value outcomes, you care about what actually matters.
In my experience, those who are first to trot out the phrase “key performance indicators” are trying to, in relatively equal measure, do a number of things. They are trying to be tribalistic, using language to separate their specific insights from general understanding. They are trying to sound current, using accepted buzzwords that will get you extra bonus points in “buzzword bingo.” And they are jumping from focussing on accomplishments that matter to an emphasis on how they can keep score.
Keeping score is very different from producing results. As I’ve already mentioned, the strategic outcomes that are most imperative for most organizations are those that are most intangible and evolutionary. That means that the actions that are—for want of a better term—most “key” are those that will be least likely to have indicators associated with them. Forcing the creation of “key performance indicators” in this context is likely to emphasize measures that can be collected, rather than asking questions that should be answered.
When we place our emphasis on outcomes, the indicators are inconsequential. When we place our emphasis on indicators, though, in my experience the outcomes are far less likely to be attained.
Michael Hilbert says
Mark,
Very thought provoking and leaving me a bit confused. While I agree that not all performance indicators are objectively measurable, I believe that most projects (at least mine) require the tracking of cost vs. work completed. While these are unique to the project, and for some don’t really need to be tracked, it is a basic responsibility of the PM to make sure that the team is on track to achieve the overall outcome.
Great article as always….
Regards,
Mike
Jerome Odeh says
Hi Mark,
Interesting & lovely write-up.
On Projects I’ve worked on, KPI was used for management reporting while the Project team remained focussed on the Project outcome.
My problem with KPIs is that they assume a sound basis for comparison, but I’ve seen very conservative baselines used as the basis for KPI measurement, and this meant we always had positive KPIs even when scopes were struggling.
I also believe KPIs should always include context and a few other metrics that look at scopes/projects from another angle.
Regards,
=jerome
Stacy Meyer says
Mark,
I absolutely love this article! I am grappling with this very subject right now, and your description of those pushing for measurable and discrete KPIs is spot on. You have helped me to better articulate why putting the focus on meeting strategic outcomes vs. analyzing measurements (that may or may not tell the entire story) is the right thing to do. Thank you so much for this timely and useful information.
Regards,
Stacy