I had a meeting with a prospective client the other day. The opportunity was to provide some training and facilitation with their executive team to help prepare them for the implementation of a new set of organizational project management practices. To anyone that knows me, this certainly has the prospect of being something of interest. It’s work I have done extensively, it’s an audience I know how to support and speak to, and it’s a facilitation challenge that I typically quite enjoy.
So where is this going, you might ask? And why does it feel like I’m shaping this up to not quite be the slam dunk opportunity that it seems on paper? Excellent questions, both.
On the surface, the premise looks great. The devil is always in the details.
The circumstances of the situation were interesting (and in and of themselves not anything that concerned me). The reason that the organization (a municipality) was considering improving their project management practices was as a result of a recent audit. So far, so normal. What makes the situation a little unique was that the audit was undertaken at the direction of Council, not administration. And it was Council that was setting an expectation that project management practices be improved.
And this is where it gets a little more complicated. Audits are enormously useful in helping to point towards improvement opportunities. They offer the chance to review practices, compare what was actually done with the policies, practices and processes of the organization, and point to exceptions and inconsistencies. They can also make recommendations with respect to alignment with—or adherence to—industry conventions and standards.
What comes out of an audit are four essential things:
- A definition of the standards by which the audit was done and the policies and practices against which performance was compared.
- A review of the practices that were observed.
- An analysis of the findings based on the degree to which the practices did or didn’t align with expected standards.
- Recommendations of what the organization should do to improve, based upon the findings.
At this stage of my career, I’ve been exposed to hundreds of audits, and from pretty much all sides of the table. I’ve been audited. That’s never entirely a pleasant experience, but I was comfortable and confident in the practices that I had adopted. I’ve led numerous audits, both at the request of the affected organization and a couple of times on behalf of a provincial auditor general. And I’ve read more audit reports than I can reasonably count.
Audits can be incredibly constructive and useful. Done well, they provide valuable insight and objective analysis. And depending upon the situation, they can make for incredibly juicy reading. I’ve perused audits that read like hard-boiled thrillers, and I’ve also reviewed some that are catalogues of comic ineptitude. And you can, as well; one of the great joys of audits of public organizations is that the audits are, well, public. You don’t have to go far down the internet rabbit hole to find some delightfully fascinating and occasionally altogether scandalous reading material, absolutely free of charge.
The astute amongst you are no doubt now wondering, “So what’s the catch?” And well done for noticing that little build-up. While observations are often surprisingly detailed—and frequently don’t pull punches—and the analysis and statements of findings are typically comprehensive, meaningful and relevant, the recommendations are often—not to put too fine a point on it—astonishingly brief.
There’s a reason for this, of course. For starters, the auditor wants to give management some flexibility and discretion in how they respond. The auditor has clearly pointed out the problems, explained why said findings are problematic, and suggested that something be done. There is now an opportunity to explore and assess what a reasonable and measured solution might be. At the same time, the auditor doesn’t want to be prescriptive. It’s not their job to say exactly what to do and how to do it.
And so a reasonable—and representative—recommendation might be something along the lines of “Management should adopt and implement a comprehensive set of project management practices by which organizational projects are planned, managed and delivered.” And seriously, that’s it. No more. No less.
This raises many, many more questions. Which practices? To what level of detail? How robust should they be? What should they address? How comprehensive do they need to be? Who needs to use them? Where should they come from? How should they be developed? What support and assistance will be required to make them work? What is required to implement them successfully? And how will we know if we’ve met the mark?
This is exactly the point in the conversation where I come in. If I come in. And yes, that’s a choice. When there is the right fit between need, opportunity, acceptance and relevance, I might find myself playing a role. That happens far less often than you might think. The reason it doesn’t is always the same.
Early on in the conversation, after my contact explained all the things that didn’t exist (no charters, no risk registers, no status reports, minimal budget tracking, no change control and more) I asked a question. “Does your executive team understand the implications of what they are asking for?” The answer that I got was, “Probably not.” In my view, that’s where we need to start.
When organizations implement significant change—like introducing an organizational project management approach—most of them do it for one reason, and only one reason. There’s a crisis. Something happened, and this is seen as the necessary response. For this organization, the audit itself wasn’t the crisis. It was the response to some other, earlier, crisis. Something happened. Projects went off the rails. Council wasn’t appropriately informed. They decided to do something about it. And the audit is where they began.
The conclusion of the audit, and the corresponding direction from Council in response, was “do something about your project management.” And as we’ve already established, that’s pretty much the sole and complete scope of the expectation. What you do inside of that expectation can span a wide range of practices. Despite what some would tell you, there is no one single way to manage projects. There is no universal set of practices that fully define the essence of good project management, no matter what standards might advertise to the contrary.
And while crisis is a great way to start a change, it’s a horrible way to implement a reasonable, managed, deliberate change. I sat in a presentation once where the speaker said—not entirely unreasonably—that the best catalyst for change is a burning platform. When asked what should be done if an organization didn’t have a burning platform, the answer was, “Get some gasoline and a match.”
That’s funny and true. It’s also more than a little tragic. The reason for that is that the response to organizational crisis is always the same: put out the fire. And so, in the short term, it’s critical to do what it takes to establish or reassert some level of control. You need to respond to the problem at hand, make sure it doesn’t get any worse and course correct as necessary to maintain control going forward.
That initial response will be accepted in virtually any organization, for a time. As long as people recognize it’s a crisis, they will put up with any number of interim measures to respond to the crisis. But that’s an important caveat. And there are a couple of other caveats that are important to add into the mix fairly quickly. For starters, the organization will want to get back to what it considers to be normalcy relatively quickly. Crisis response is fine in an emergency, but it’s not sustainable. And anything adopted in the crisis that doesn’t fit the culture of what the organization defines as normal will not be maintained.
This is the test. And this is the point where, in conversation after conversation, the manager responsible for implementing the change being considered comes up with the solution, “Well, can’t we just make them? Can’t senior management tell them they have to, and then they’ll comply?” My short and simple answer to this never changes: They could. And it won’t work.
Here’s the thing about organizational change. Change is about people. You are asking people to do something different than they currently do, to change their behaviours and their habits. You are asking them to move from what they know how to do, and to do something different.
But there’s a reason that they do what they do now. It works. Maybe not perfectly. Maybe not efficiently. In fact, your current practices might be horrific, outdated, painfully slow, massively sub-optimized and bordering on antediluvian. That doesn’t matter. The people that are using them—on some bizarre level—are getting a pay-off from them. What they do today is the best they know how to do. And it’s what they do. And if you want them to do things differently, they need to see the value of doing it, on their terms.
In other words, practices need to fit. And practice and process change is an incremental—and slow—progression of helping people understand and demonstrate what better can look like. More importantly, it’s about getting people to understand why better is actually better, for them. They need to see and embrace and want the value of doing things differently. If they can’t see that, then they won’t do it, no matter how objectively more efficient or effective or appropriate you think the practices you are proposing might be.
That is where this particular conversation went sideways. What was being implemented was full-on project management. Charters, business cases, plans, schedules, budgets, risk registers, scope change protocols, status reports and dashboards. Especially dashboards. Along with change management practices, enterprise risk management frameworks and a portfolio management system. In an organization that had minimal practices, and inconsistent minimal practices at that.
You can’t force those practices. You can’t make someone change. You can, to a limited degree, for short periods of time—hence our crisis response approach. But as soon as attention is removed and the crisis abates, the organization will revert to the familiar and the comfortable.
Effective change, real change, is about a gradual, incremental, step-by-step nudging towards better. But it means better in the eyes of those that do the work, not those that advise on the work. That’s in part why auditors make general recommendations. The response to those recommendations has to be appropriate, acceptable and accepted. And if it isn’t, then it’s pretty much guaranteed that there will be another, similar audit report somewhere in the organization’s future.
In the words of Niccolò Machiavelli, “It must be considered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things.” And that’s entirely true, if the new order of things is too great a departure from how things work today. Where the change fits, when the change is appropriate, and where the change is accepted, real transformation is possible. It’s going to be slow transformation. People need to see the value of small changes, and that incremental improvement will lead to new realities, new problems and new opportunities to create value.
Change is a long game. It’s a game that requires patience, support, enthusiasm and a willingness to progressively help make things better. That doesn’t happen in crisis, and it doesn’t happen because someone says so. It happens because people want the change, they embrace the change, and they continue to adopt new changes as they see new value being created.