
Sussing Out Project Success

I had an interesting conversation with a colleague over dinner the other evening about project success and how to evaluate it. The colleague was the CFO of a municipality, so the emphasis was on financial assessment, particularly of large capital projects in the public realm.

Often, these are a class of projects that we don’t do (or at least, certainly don’t appear to do) well at all. It doesn’t take long to come up with a (surprisingly long) list of notable capital project failures and perceived failures: the 23rd Avenue interchange, the North LRT (and the South LRT) and the 102 Avenue bridge in Edmonton. The 16 Avenue redevelopment in Calgary. Union Station in Toronto. And nothing quite measures up to the World Trade Centre site in New York City after the 2001 terrorist attacks.

The CFO’s challenge was how to create something in the context of an annual report that would highlight the project performance of major capital initiatives. His specific and immediate question was whether a simple variance analysis of budget to actual performance might suffice. And if not, what else could provide a meaningful and yet straightforward way of signalling the relative health and viability (or lack thereof) of a project?

It’s an interesting question. And while it’s a straightforward question to ask, it’s not an easy question to answer. Identifying variance is theoretically simple: take the budget and subtract the actual costs from it. A positive result means the project came in under budget; a negative result means it went over. The greater the variance, the theory goes, the worse the performance.

The question, though, is which budget number to work with. Is it the initial conceptual amount, identified when the project was simply an idea? A refined estimate, based on some further research and analysis? The budget that was developed at the time a conceptual design was developed? The revised budget when a detailed design was finalized? The pre-tender estimate? Or the amount approved on award of a contract? Every single one of these numbers is potentially viable as a budget number. And there is a wide variance between them.
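To make the arithmetic concrete, here is a minimal sketch in Python. The figures are invented purely for illustration; the point is simply that the same actual cost produces a very different variance depending on which of those numbers you treat as “the budget”:

```python
# Invented, illustrative figures only: one actual cost measured against
# each of the candidate baselines a project accumulates over its life.
actual_cost = 12_500_000

candidate_budgets = {
    "initial concept": 8_000_000,
    "refined estimate": 9_500_000,
    "conceptual design": 11_000_000,
    "detailed design": 12_000_000,
    "pre-tender estimate": 12_200_000,
    "contract award": 12_400_000,
}

for name, budget in candidate_budgets.items():
    variance = budget - actual_cost      # positive: under budget; negative: over
    pct = variance / budget * 100        # variance relative to that baseline
    print(f"{name:20s} ${variance:>12,.0f} ({pct:+.1f}%)")
```

The same $12.5 million of actual spending looks like a 56% overrun against the initial concept and less than a 1% overrun against the contract award.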

To illustrate what I’m talking about, let’s explore a real project. There are many that I could choose from (the municipal sector offers an embarrassment of riches in this regard). The 16th Avenue North Urban Corridor in Calgary, Alberta offers a number of useful insights, however, so I’m going to start there.

As context, the project started life as a theoretically straightforward road-widening project, designed to improve traffic flows by increasing a major municipal thoroughfare from four lanes to six. The initial conceptual budget started at $31.4 million, based on a 2003 estimate. By the time the project was completed, nearly seven years later, the total approved budget was $89.7 million.

Those statistics alone are often the basis of outraged news stories of overruns, waste and mismanagement. And reading the audit report that was published in 2011, one might be tempted to draw some of those conclusions. Step back and analyze the results rationally and objectively, though, and a very different story emerges.

As identified earlier, the project started with a simple objective of widening a major artery to improve traffic flows. What the audit report doesn’t identify is the increase in expectations—particularly politically driven ones—that quickly emerged. This is not a new phenomenon, of course. Projects often become the repository of hopes, dreams and ambitions far beyond the original intent. A different audit report memorably described this as a barge so laden with expectations that it becomes incapable of moving anywhere.

In the case of the 16th Avenue project, what started as a simple (to the extent that projects like this are ever simple) road-widening project became the focal point of a great deal more. Politicians, particularly those representing the immediate area, started identifying a number of additional desires, wants and expectations. An effort in traffic optimization grew to include additional expectations for economic development, community building and beautification. It’s the essential project equivalent of the seemingly innocuous “While you’re out running around, would you mind…” that turns a straightforward errand into an all-day odyssey.

The audit report tells the story in numbers. On top of the original construction cost and $8 million in land acquisitions, a number of additional changes occurred. In 2005, the project grew to an approved budget of $69 million based upon the concept plan. An additional $20 million was approved in 2006 to complete the design based on the concept plan, due in part to pricing increases and in part to rising political expectations. A final $700,000 was approved in 2008 for public art, bringing the total budget approved by Council to $89.7 million.

It’s important to note my continued use of the word “approved” in the previous paragraph. One of the major findings of the audit report, published after the project, was a failure to appropriately document decisions, demonstrate the need for the project and establish a business case that objectively and measurably assessed its value and viability in concrete, tangible terms. And yet every one of the changes identified above went to Council, was approved by Council, and was incorporated (to greater or lesser degrees of accuracy) in the project budget.

And that’s where we get back to our original discussion: project success can’t be measured by a simple variance, because there is nothing simple about how projects vary. A fundamental question that needs to be asked and answered in thinking about variances is “variance from what?” The project we’ve been talking about went from $31.4 million to $89.7 million, in the worst-case reading of the situation. That’s an increase of nearly 186%; the final approved budget was almost 2.9 times the original estimate. At the time of the project audit, final expenditures were projected at $89 million, which earned it a grudging comment of “…in line with the final approved budget.”

So hang on. Do we have a problem here, or don’t we? The answer depends upon how you look at the project, and from whose perspective. Projects change all the time as new expectations arise (which is why the analogy of a barge piling up with detritus is such a delightful mental image). From the time of an initial idea to actual realization of the results, what is asked for and ultimately delivered will change significantly, and for a variety of different reasons. It’s true of building a house and it’s true of large-scale capital projects.

What that means is that at the first point of identifying the idea of a project, our ability to estimate with any real accuracy what delivery will require is significantly challenged. That in no way changes the fact that a number is usually asked for here, and once articulated it never gets forgotten. Industry estimation guidelines, however, suggest that the accuracy of that number can vary from -75% to +200%. In other words, a project estimated at $10 million could cost as little as $2.5 million or as much as $30 million. That’s a big ballpark to work in, but it gives you a range of what is genuinely possible (and realistic).

What makes estimates more accurate is better and more refined information. As we get clearer about what we want, what that should look like and how we are going to get there, we can get to better numbers. The more that is known, fixed and decided, the more accurate our estimates can be.

Back to our case study in Calgary: the project budget started life with an estimate (including purchased land) of $39.4 million. It finished life with an approved budget of $89.7 million. The final figure is roughly 227% of the original estimate, an increase of about 128%, which still sits inside the very broad range those guidelines describe, though only because that range is so forgiving. What’s also clear is that by the time the final number was approved, we weren’t even dealing with the same project. Additional expectations around elevated, landscaped medians, widened and landscaped sidewalks, wider medians for pedestrian safety and alternative land use designations resulted in a very different project.
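As a quick check of that arithmetic, here is a small Python sketch using the figures cited above (the -75%/+200% range is simply the earlier guideline, applied to the original estimate):

```python
original_estimate = 39_400_000   # 2003 estimate, including land
approved_budget = 89_700_000     # final budget approved by Council

ratio = approved_budget / original_estimate                                      # about 2.28x
increase_pct = (approved_budget - original_estimate) / original_estimate * 100   # about 128%

# The -75% / +200% guideline, applied to the original estimate
low_end = original_estimate * 0.25   # roughly $9.85 million
high_end = original_estimate * 3.0   # roughly $118 million

print(f"Final budget is {ratio:.2f}x the original, an increase of {increase_pct:.0f}%")
print(f"Guideline range around the original estimate: ${low_end:,.0f} to ${high_end:,.0f}")
```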

From an approved project budget of $89.7 million, the final projected expenditure was $89 million. In other words, it can equally be argued that the project actually came in slightly under budget. The difference here depends on how you look at the project, and from what vantage point.

What all of this gets back to is a question of how it is possible to—at a high level—indicate and recognize that a project is successful. One perspective would say that you can’t, that the complexities and nuances of decision making mean that a more detailed explanation of what happened and why is almost always required. Yet there’s a different way of looking at this that might at least point to those projects that have a story to tell, and what kind of story is likely to emerge.

In exploring our case study, I’ve outlined two different challenges. The first relates to how projects evolve as they move from an idea to actual fruition as something we have decided to do. The second is what it takes to bring the agreed-upon project to life. In other words, how much did the project evolve and change before we decided to actually move forward and do it? And how much more did it change in trying to get it done?

That suggests that we might look to answer the question we started with (how to simply reflect the health of capital projects in an annual report) with two pieces of information: the variance from idea to final design, and the variance from final design to actual execution. Those two numbers each tell an interesting (and different) story. How we get from idea to design is owned largely by the governance overseeing the project, and the stakeholders who shape its expectations. How we get from design to implementation is owned largely by the project managers responsible for getting the work done.
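To make that concrete, here is a minimal sketch, in Python, of how the two variances might be calculated and reported. The function name, labels and sample figures are my own illustration, not a prescribed reporting format:

```python
def project_variances(concept_estimate, approved_design_budget, actual_cost):
    """Split overall variance into the two stories described above:
    how much the project grew before we committed to it (largely a
    governance story), and how much more it grew while being delivered
    (largely a management story)."""
    concept_variance = approved_design_budget - concept_estimate
    delivery_variance = actual_cost - approved_design_budget
    return {
        "concept variance": concept_variance,
        "concept variance %": round(concept_variance / concept_estimate * 100, 1),
        "delivery variance": delivery_variance,
        "delivery variance %": round(delivery_variance / approved_design_budget * 100, 1),
    }

# Hypothetical figures, in millions of dollars, purely for illustration
print(project_variances(concept_estimate=40.0,
                        approved_design_budget=75.0,
                        actual_cost=78.5))
# concept variance: 35.0 (+87.5%); delivery variance: 3.5 (+4.7%)
```

Reported side by side, the two percentages make it immediately visible whether a project’s story is mostly about shifting expectations, mostly about delivery, or both.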

Trying to apply these measures to the Calgary project helps illustrate what these perspectives might tell us. We started with a budget (with land) of $39.4 million. The budget inflated all the way to $69 million while concepts were still being explored and designs built upon them. The last approved change of $20 million was driven in part by shifting expectations, and in part by project overruns, cost increases and other delivery challenges. How that splits out is difficult to apportion, but could readily be done with a more granular understanding of the budget.

Even without that analysis, and even with that $20 million in play, insights are possible. What we do know from that information is that in getting from concept to design, the variance was between $30 million and $50 million. The management variance was somewhere between $0 (an unlikely number) and $20 million (possible, but probably overstated). Those two figures alone point to different explanations than a single assertion that the overall project variance was $50 million.

What having two variance numbers reveals is interesting. It tells me that the project evolved its expectations considerably. What was originally conceived and what was ultimately approved were two very different things. There was likely a great deal of shift in stakeholder expectations, and a fair degree of politics in exploring and arriving at agreement on what the project would be. The delivery had its own challenges, but likely did a reasonable job of taking a difficult and evolving situation and getting it done in trying circumstances.

There’s conjecture in my conclusions, certainly, but what the audit report reveals is not far off what I’ve suggested. This also reinforces that there is some value in thinking about variances in this way. Thinking about how much a project changes before we decide to go ahead tells us a lot about the political circumstances inside the organization. Thinking about how much more the project changes once we’ve decided to proceed tells us a lot about the management difficulties in getting the project done. Both together tell a very interesting story indeed. In fact, I’d go so far as to suggest that the greater the concept variance, the more difficult the overall project will be to manage, and the more likely that there will be a management variance in delivery.

This column started as a thought exercise over dinner and wine (as so many thought exercises actually begin). But it leads to some useful insights and conclusions. I would certainly make the argument that telling the story of any project requires exploring the specifics. To paraphrase Tolstoy’s Anna Karenina, “Happy projects are all alike; every unhappy project is unhappy in its own way.” But some indication of what constitutes that unhappiness can be gained by looking at the numbers. And in doing that, one variance can be misleading, but two variances might just begin to point you in the right direction.
