If we’re going to talk process, then sooner or later we are going to have to discuss the issue of maturity. It’s a term that we love to bandy around. It gets applied in a number of different contexts, generally as a proxy for why improvements must be considered and changes need to be made. To a large extent, the idea of improving maturity gets used as cover to justify any number of actions. If we are serious about improving our processes, then we need to get clear about just what maturity gets us—and what it does not.
The idea of maturity—and of maturity models—has been around for a number of decades. They were popularized, however, much more recently. In particular, the Capability Maturity Model for Software (since superseded by the CMMI) was developed by the Software Engineering Institute at Carnegie Mellon University. It was the first widely popularized model, although even it took a while to catch on. While it was developed in the mid-1980s, it was only in the last twenty years or so that it reached significant popular awareness.
The story of how and why the model was developed, though, is particularly relevant. SEI’s client was the U.S. Department of Defense (attentive readers will recognize that they’ve been a catalyst for a surprising number of management innovations). In particular, the Department of Defense wanted to improve how software was developed on their behalf. The underlying assumption was that if their software suppliers had better, more robust processes, then they would develop better software.
The Capability Maturity Model was developed first and foremost as a tool for supplier assessment. The Department of Defense was in essence investing in their suppliers. They conceived of the Capability Maturity Model as a way to assess supplier competencies and to identify the changes necessary for suppliers to become better and more effective at delivering solutions to their client.
What is central to all of this is one fundamental assumption: that better process leads to better results. It’s a popular and appealing notion. It’s an easy one to sell to executives and practitioners alike. There are very few who would argue against the assertion. And yet, until recently, there was little actual evidence that the assumption was true.
While we can now demonstrate through research that maturity does have some impact, we can also show that improving maturity makes a difference only in specific circumstances. Moreover, we also know that there is a point where increased levels of maturity stop supporting better results, and may in fact undermine them. And that’s not even getting into the question of what actually constitutes “better.” In other words, we’re back once again to “it depends.”
A lot of what we now know and can verify about maturity—particularly in a project management context—comes from the work done in the Value of Project Management research project. I was a co-lead of this study from 2004 to 2008. In it we studied why organizations improve their project management, what they implement and the results that they get (whether intended or not). To date, it is still the largest research effort undertaken in the field of project management, with more than 48 researchers engaging in detailed case-study research of 65 organizations from a variety of industries around the world.
One of the most interesting findings of the study is when maturity actually makes a difference. In terms of tangible results (cost savings, revenue increases, efficiency improvements or avoidance of re-work), maturity doesn’t actually make much of a difference—at least, not a reliable one. There were organizations that realized phenomenal results with high levels of maturity, and there were organizations that did very well with no maturity at all. More importantly, most of the organizations—at every level of maturity—realized no tangible value at all from their project management investment.
That was a difficult finding to confront, and one that not a lot of organizations—or executives—were terribly fond of. That didn’t make it any less true, though. The key is why it was true. Every organization realizing tangible value was in the business of selling project management services. Every organization that didn’t realize tangible value managed projects for themselves. And the bottom line for service providers was that value was in the eye of the beholder. If there is a market for your services, then you make money, regardless of how well or poorly—or consistently or inconsistently—your services are delivered.
Where maturity did make a difference was in terms of intangible value. It made a huge difference in improving strategic alignment, delivering on strategic objectives, enhancing organizational capabilities, evolving culture and breaking down organizational silos. All of those things are extremely important. They are more awkward to measure, though, and their impact on the bottom line is more diffuse. They are still meaningful results, and maturity helps realize those results. Up to a point.
And that’s where we really need to wrestle with what maturity means. Give someone a maturity model, ask them where they want to be, and the default answer is “we want to be the best!” Give an executive five levels of maturity to work with, and they automatically want to be at Level 5. And that’s a problem. For a handful of organizations, it might make sense to target Level 5; whether they actually reach it or not is an entirely separate issue. For most organizations, however, Level 5 is not only unattainable, it’s not even desirable.
This is where we need to unpack what maturity actually means, and what the levels of maturity represent. Maturity is actually a construct composed of two essential—but different—concepts: how robust our practices are, and how consistently we apply them. We’ve conflated these ideas, and in doing so we’ve made a rather bold assumption that they both improve in lockstep with each other. They don’t. And they shouldn’t.
One of the organizations in the Value of PM study is an interesting case in point in this regard. They had an extremely robust process. Excruciatingly so. Their project management processes extended to ten 4-inch binders of documentation, well over three linear feet of shelf space. It was incredibly, mind-numbingly detailed and bureaucratic. On the robustness scale, it was off-the-charts insane. And no one followed it. So as a measure of consistency, that result wouldn’t even move the needle.
A different organization, early on in their implementation, had started with some simple, essential processes. They created some basic structure and process that was focussed, relevant, easy to use and not much of a stretch for people to understand, adopt and apply. They weren’t focussed on perfect, but on what worked—for them. And their primary attention was on getting people to apply and work with what had been defined. They wanted to build acceptance first, and then progressively work to enhance it where enhancement made sense. The focus was on improving consistency of use, not on the robustness, rigour or anal-retentive formality of the process.
Those two examples are diametrically opposed, but they illustrate a fundamental point: sometimes it’s more important that the process be used than that it be particularly painstaking or detailed. Of course, sometimes we do need rigour and formality. Different answers are going to be relevant for different organizations. There isn’t any one way to do project management, and there isn’t one right answer for how to implement and apply project management techniques.
An important truth that often gets overlooked in discussions of maturity, however, is that for every organization there is a point where the law of diminishing returns kicks in. In other words, you can keep ramping up and improving your process, yet past that point the investment you make no longer produces a corresponding return in value. For every organization, there is going to be a sweet spot in terms of practices and capabilities.
That sweet spot will be different for every organization. It will depend upon what they value from improving their project management. It will vary based upon how much rigour, structure and formality is necessary to deliver on their objectives (and the degree to which formality or structure might actually undermine those objectives). And it will be a product of the level of consistency that is relevant, appropriate or possible in how different projects are managed.
This isn’t to say that the idea of maturity isn’t useful or helpful in a project management context. But, as with all things, this will depend upon how maturity models are applied, interpreted and utilized. Maturity models are great tools for helping to assess the practices of an organization. They can help an organization think about and articulate what its desired capabilities might look like. They can also be useful in guiding improvement activities.
Effectively using maturity to guide improvement, however, requires judicious application and interpretation of the model. It is critical to understand what is actually important for the organization. It is essential to separate out the ideas of formality, rigour and consistency, and to think about each of these differently. Maturity models can serve as menus of options to choose from, but the consequences and interactions—positive and negative—of the choices that are made need to be thought through carefully.
As we’ve already acknowledged, what is appropriate in terms of processes, practices and structures depends. It depends on strategy and objectives, it depends on where the organization is today, it depends on where they are trying to go, it depends upon what they value and it depends upon their overall culture.
Maturity models are tools to help in this exploration. They can be used to interpret and evaluate. But they shouldn’t be seen as prescriptive, and they shouldn’t be relied upon exclusively. Success isn’t about targeting the top of the model. What is right for an organization may not even be fully captured within a model. The right answer for an organization may actually mean different things for different areas. Considering all of this is not just perfectly acceptable, it’s essential. In fact, it’s the mature thing to do.
Over the next few weeks, I’ll be getting into these themes in a little more detail. I’ll look at how and why we build the processes we do, what works and what doesn’t, and what makes sense in making process useful, constructive and meaningful.
Melinda says
Interesting. I have used maturity models occasionally to suggest to organizations where they might want to focus some attention, or where there is a disconnect between what they believe about themselves and what is real. But I am more in the audit business than the development business so, as you say, maturity models make sense as a way to audit reality, not necessarily as a prescription.