Originally published in the PMI Metrics SIG Newsletter
One of the greatest challenges for most Project Management Offices today is demonstrating their value and relevance to the organizations they support. While part of this challenge stems from an inconsistent definition of what the role of a PMO should be, a far greater issue is a fundamental failure to define what success for the PMO looks like.
Based upon a recent research project conducted by my company, only 45% of respondents felt that the PMO in their organization makes a significant or essential contribution to ensuring project success. In looking at the perception of the PMO’s core customers, only 55% of project managers and 57% of senior management currently see their PMO as making a significant or essential contribution to the organization. While these perceptions in and of themselves are less than flattering, most PMOs are also not taking the steps necessary to manage these perceptions effectively.
What is unfortunate in all of this is that the most significant impact stems from a failure to manage PMO implementations as the projects they are. As project managers, we are trained to formally define both completion and success criteria. We acknowledge the need to define what success looks like, and take pains to determine who actually has a vote in deciding whether the project was successful or not. The consequence of not doing so is that success, like beauty, lies in the eye of the beholder. A stakeholder who disagrees with the results of the project, has a personality conflict with the project manager, feels threatened by the project, resists change or simply finds satisfaction in criticism can easily shift perceptions of project results from success to failure. Formally defining the success criteria allows us to avoid situations where our projects are held hostage to subjective whims.
The same research study, however, indicates that only 27% of PMOs currently have formally defined and articulated success criteria. Another 10% are evaluated on a purely subjective basis, 27% are only informally evaluated and a full 36% have no mechanism for evaluating project success whatsoever. Finally, 65% of PMOs have no formally defined objectives governing their mandate and role. Without the framework provided by clear objectives and formally defined success criteria, actual demonstration of success becomes impossible.
Success measures are critical to the effective definition of a PMO. Definition of our success measures, however, depends heavily upon having clearly defined the objectives that comprise the PMO’s mandate. Without clearly defined objectives, measures are worse than meaningless – they may in fact be misleading. With these objectives defined, however, we can then begin to establish the information necessary to evaluate the delivery of our objectives, and the measures that will demonstrate success.
As with any performance measures, care needs to be taken in choosing which measures we adopt. Measuring everything under the sun simply because it can be measured is not only unproductive, it reinforces the perception that the PMO is a valueless organization whose sole function is to create bureaucracy. Selecting the wrong measures, meanwhile, can reinforce undesirable behaviours rather than promoting desirable ones. As the Hawthorne effect originally identified by Elton Mayo of Harvard University has shown, the act of measuring itself will influence attainment of the results the measures promote, regardless of the degree to which those results are desired.
There are no standard performance measures for evaluating the success of your PMO; the measures that are appropriate will depend upon the specific objectives you have defined. There is, however, a framework that can be readily applied to identify appropriate measures: Goal-Question-Metric (GQM), as defined by Basili & Weiss. Once the goals and objectives for your PMO are clear, identify the questions that, if answered, will determine whether or not each goal has been fully met. Finally, for that finite set of questions, articulate the measures that are necessary to answer them. The key is identifying those select few measures that will truly guide the PMO in both long-term planning and day-to-day decision making in working towards the clear and undisputed delivery of successful project results.
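To make the structure of the framework concrete, here is a minimal sketch in Python. The goal, questions and measure names are entirely hypothetical (loosely echoing the reporting example later in this article); it illustrates the shape of a Goal-Question-Metric breakdown, not a prescribed set of measures.

```python
# A minimal, hypothetical sketch of a Goal-Question-Metric breakdown for a PMO.
# The goal, questions and measures are illustrative only; substitute the
# objectives actually defined for your own PMO.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Question:
    text: str
    measures: List[str] = field(default_factory=list)  # measures needed to answer it


@dataclass
class Goal:
    statement: str
    questions: List[Question] = field(default_factory=list)


consolidated_view = Goal(
    statement="Provide a single, consolidated view of project progress",
    questions=[
        Question(
            text="Is consolidated status available with less effort than before?",
            measures=[
                "hours spent preparing status reports each month",
                "hours of senior management time spent reviewing status reports",
            ],
        ),
        Question(
            text="Are issues and potential overruns being identified earlier?",
            measures=["elapsed time between an issue arising and its escalation"],
        ),
    ],
)

# Walking the structure top-down keeps the measure set small and traceable:
# every measure exists only because it helps answer a question about a goal.
for question in consolidated_view.questions:
    print(question.text)
    for measure in question.measures:
        print("  measure:", measure)
```

The value of the structure is the traceability it forces: any measure that cannot be tied back to a question, and through it to a goal, is a candidate for elimination.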
What follows below is a potential measurement model for a PMO, based upon some specific goals that have been established for it. It is important to note that this framework is not intended to be universal, nor should an organization adopt it wholesale. It is intended only as an example and illustration of the principles discussed above, to provide a working demonstration of what the measures of success for a PMO could look like.
For the sake of example, let’s assume that an organization is establishing a PMO to support projects within its IT organization with the following mandate:
- co-ordinate centralized tracking and reporting of project progress, in order to provide a single and consolidated view of projects.
- serve as a centre of excellence in defining and promoting project management practices, and improving the consistency and effectiveness of how projects are managed.
- improve the competence and skill of individual project managers through training, coaching and mentoring.
To be able to define an effective measurement framework for success, it is essential that these overall objectives be translated into measures that represent appropriate proxies for success. Success of a PMO is not the same as the success of the projects that it supports; measurement therefore requires identification of those unique measures that are appropriate and relevant to measuring the PMO’s success. Possible approaches for the three major objectives defined above include:
- Centralized Tracking & Reporting. Centralization of tracking and reporting is an administrative service function. While it would be simple to create a measure of “number of consolidated reports submitted on time by the PMO”, it would not be particularly relevant; this should be a base level of service, not the means by which success is measured. There will be some input measures that are appropriate – percentage compliance of time reports; percentage of projects submitting their reports on time – which will help to monitor the effectiveness of the process. Success, however, depends upon demonstrating the attainment of desired results. If we look at why centralized reporting is desirable, some different measures become more appropriate: “reduction in time of status report preparation”; “reduction in senior management time reviewing status reports”; “reduction in effort consolidating time reports and updating schedules”; “earlier identification of project issues and potential overruns”. (A worked sketch of two of these measures follows this list.)
- Serve as a Centre Of Excellence. As with the approach to reporting, there are a number of easy metrics that do not lend themselves to demonstration of success: “number of brown-bag presentations”; “number of staff trained in the methodology”; “number of staff with copies of the methodology”. Again, these are base measures of activity, but they do not demonstrate success. A Centre Of Excellence is designed to champion improvement of project management; success in this role must reflect this improvement. Possible measures might be “increase in PM maturity against a normalized model”; “reduction of project/portfolio cost/schedule overruns”; “delivery of project outcomes”; “percentage delivery of project completion/success criteria”. These measures focus not on what the PMO is doing, but on the resulting impacts of their actions.
- Improve Competence & Skill. What is the success measure of improving competence? If I train everyone, was I successful in increasing competence or skill? If all of our PMs are certified, does that mean they are competent? While raw metrics of training or certification demonstrate what has been accomplished, they do not demonstrate whether the training is instrumental in delivering results, or that the education has in fact been retained. An objective skill or competence assessment will help to demonstrate the capability of project managers. The next step is to evaluate its impact: “accuracy of project estimates”; “delivery to the driving priorities of cost, schedule and scope”; “satisfaction of project sponsors/stakeholders”; “satisfaction of project team members”.
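To show how two of the reporting measures named above might actually be calculated, here is a small sketch. The tracking fields, baseline figure and numbers are all invented for illustration; the point is the distinction between an input measure (compliance with the process) and a result measure (improvement against a baseline).

```python
# Hypothetical tracking records for one reporting cycle; field names and
# numbers are illustrative only.
status_reports = [
    {"project": "A", "submitted_on_time": True,  "prep_hours": 3.0},
    {"project": "B", "submitted_on_time": True,  "prep_hours": 2.5},
    {"project": "C", "submitted_on_time": False, "prep_hours": 6.0},
]

BASELINE_PREP_HOURS = 5.0  # assumed pre-PMO average, captured before the change

# Input measure: process compliance (useful for monitoring, not proof of success).
on_time_pct = 100 * sum(r["submitted_on_time"] for r in status_reports) / len(status_reports)

# Result measure: reduction in status report preparation time against the baseline.
avg_prep = sum(r["prep_hours"] for r in status_reports) / len(status_reports)
prep_reduction_pct = 100 * (BASELINE_PREP_HOURS - avg_prep) / BASELINE_PREP_HOURS

print(f"Reports submitted on time: {on_time_pct:.0f}%")
print(f"Average preparation time: {avg_prep:.1f} h "
      f"({prep_reduction_pct:.0f}% reduction vs. baseline)")
```

Note that the result measure is only meaningful if the baseline was captured before the PMO began its work, which is exactly the benchmarking problem discussed below.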
Creating real and meaningful success measures is not an easy process. It requires effort and hard work, and a willingness to go beyond the obvious to be able to measure what success really looks like, in terms that are meaningful for your organization. As discussed earlier, the above measures are not intended as a formula that should be applied wholesale. One of the greatest challenges of using measures like those proposed here is defining the metrics that will demonstrate their attainment, and the baseline thresholds of current performance against which progress can be evaluated. Demonstration of success requires looking beyond the obvious and in some cases embracing the difficult; not because the easy measures are bad, but because the right measures often require effort to determine.
While definition of measures is an important and essential contributor to demonstrating overall PMO success, the measures themselves can only take you so far. In the final analysis, it’s not the measures you use but what you do with them that counts. Even if you are one of those select few PMOs that have defined success criteria, the next question you should expect from your management team, if they are halfway awake, is “So what?”
Defining how you measure success is only one dimension, and while it is necessary it is not sufficient. The next, and more important, step is to define what success looks like using those measures. In other words, if part of the dimension of success for your PMO is “reduction in time of status report preparation”, then what is a reasonable target for performance on that measure? How long should it take, on average, to prepare a status report? This requires understanding both current performance, and how that current performance measures up against a reasonable picture of the world.
In other words, we have to benchmark. We need to understand what good performance and bad performance look like, and be able to provide both ourselves and our management teams with the necessary information to make reasonable and informed decisions. To be able to do this effectively, we need to be able to answer some basic questions:
- What is our current performance today?
- How do organizations comparable to us perform? Our competitors, companies with a similar value proposition, or companies of a similar size.
- What does best-in-class performance look like today?
Without this information, we don’t have a context to define whether our goals are actually reasonable or not. If our performance in a particular area is truly abysmal, then improving the performance even by 100% may not mean performance is now great, just that it sucks less. Without a meaningful understanding of what both we and other organizations are doing, we can’t even begin to know which is which. In benchmarking ourselves, then, we need to address two key dimensions; it is critical that we understand and articulate both internal and external evaluations of current performance.
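As a simple illustration of why these three questions matter together, the sketch below puts an invented internal baseline alongside invented peer and best-in-class figures for a single measure. Every number is hypothetical; the point is only that a target has no meaning without that context.

```python
# Illustrative benchmark comparison for one measure ("status report preparation
# hours per project per month"). All figures are invented for the example.
current_performance = 6.0   # our internal baseline
peer_benchmark = 3.5        # comparable organizations
best_in_class = 1.5         # best observed practice

proposed_target = 3.0       # candidate goal for the PMO

improvement_needed = 100 * (current_performance - proposed_target) / current_performance
gap_to_best = 100 * (proposed_target - best_in_class) / best_in_class

print(f"Target requires a {improvement_needed:.0f}% improvement over current performance")
print(f"Even then, we remain {gap_to_best:.0f}% above best-in-class")
# A 50% improvement sounds impressive on its own; seen against the benchmarks it
# is merely catching up to peers, which is exactly the context the target needs.
```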
Defining internal benchmarks and actually conducting analyses is the necessary first step in understanding the current performance baseline. This won’t just tell you how well your organization performs in a specific area, it will also reveal how consistent or inconsistent that performance is. In arriving at an internal benchmark, there are a number of key challenges that will quickly emerge:
- Defining consistent measurement definitions. The single greatest challenge of benchmarking, bar none, is the definition of consistent measures – the ability to draw meaningful comparisons because the numbers in one organization mean the same thing as the numbers in another. Because measures are interpreted and applied in different ways, they are not necessarily comparable. If you think that at least a measure will be applied the same way inside your organization, you are likely in for a big surprise. Think about something simple like ‘budget performance against baseline’ for a project. What is the baseline? When is it set? Is it formally tied to plan approval in all instances? Do approved change requests modify the baseline? Are all change requests in the baseline legitimate? Are change requests being used to gain management acceptance of overruns not based on a change in scope? Are there any other circumstances where the baseline is changed? Scared yet? Now, think about the challenge of evaluating someone else’s organization using the same measure – consistency is likely going to be a problem. (A sketch of one possible operational definition follows this list.)
- Consistent sources for measures. Related to the challenge of establishing operational definitions is identifying what the source of the measures is going to be. In a lot of instances, your current systems are not going to automatically produce a report every month to show you how you are doing. Over time, you may get there – but you probably aren’t there yet. That means that we need to establish separate tracking approaches, which means that we are relying on individuals to keep track of what they are doing, rather than deriving measures that automatically monitor performance. Whether our tracking is based upon checklists, spreadsheets, databases or arcane and complex queries of our accounting system, we are relying upon people to remember to track performance diligently. Yet how many of us have tried to do our timesheet for the month at the end of the month, desperately trying to remember what we did three weeks ago last Tuesday? Now how comfortable are you with your tracking approach?
- A means of validation. As a means of addressing the concerns associated with the previous two points, there is a need to ensure that the measures that are being captured are independently verifiable. Do you have some mechanism to go back and audit whether the information that is being captured and reported is accurate and complete? We can trace a monthly financial report back to the journal entries and ultimately to the individual invoices and receipts, and audit whether or not what was reported is actually accurate. Do we have the same level of auditability in our PMO measures? Can we verify and confirm that what was reported is actually based in fact?
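As promised above, here is a sketch of what a written-down operational definition of “budget performance against baseline” might look like. The rule it encodes (baseline equals the approved budget plus approved, scope-driven change requests only) is one possible answer to the questions raised earlier, not the answer, and every name and figure is hypothetical.

```python
# A hypothetical operational definition of "budget performance against baseline":
# the baseline is the budget approved at plan sign-off, adjusted only by change
# requests that were formally approved and driven by a genuine change in scope.
# Other organizations may define it differently, which is exactly why the
# definition has to be written down before any comparison is attempted.
from dataclasses import dataclass
from typing import List


@dataclass
class ChangeRequest:
    amount: float
    approved: bool
    scope_change: bool  # True only if driven by a genuine change in scope


def approved_baseline(original_budget: float, changes: List[ChangeRequest]) -> float:
    """Baseline = approved budget plus approved, scope-driven change requests only."""
    return original_budget + sum(
        c.amount for c in changes if c.approved and c.scope_change
    )


def budget_variance_pct(actual_cost: float, baseline: float) -> float:
    """Positive means over baseline, negative means under."""
    return 100 * (actual_cost - baseline) / baseline


# Example with invented figures: a $500k project with one legitimate scope change
# and one change request that merely legitimizes an overrun (excluded by the rule).
changes = [
    ChangeRequest(amount=50_000, approved=True, scope_change=True),
    ChangeRequest(amount=30_000, approved=True, scope_change=False),
]
baseline = approved_baseline(500_000, changes)                       # 550,000
print(f"Variance: {budget_variance_pct(600_000, baseline):+.1f}%")   # +9.1%
```

Writing the definition as an explicit rule like this also gives the validation step something to audit against: every change request either meets the stated conditions or it does not.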
Once our internal measures are defined, the challenge of external benchmarking becomes a little bit easier to address. It isn’t a slam-dunk by any means, but it isn’t necessarily as hard either. While other organizations may not apply the same operational definitions to their measures as we do, as long as they have consistent definitions we can begin to make adjustments for the differences between our performance and their performance. The problem of direct, quantifiable comparison between organizations is greatest when there is no operational definition or consistency in place – any conclusion or contrast is then going to be driven by anecdotal evidence and qualitative assumptions, not fact.
External benchmarks in fact draw on two separate dimensions: a comparison of processes and an evaluation of process performance. The first dimension is relatively straightforward – how different the actual processes are between one organization and another. The most reliable means of evaluating these process differences is to start with a tool like a maturity model, which is specifically designed to assess process capability. (Organizations interested in benchmarking their processes will be interested in the Organizational Project Management Baseline Study, a free public research project conducted by my organization. For more information, visit http://www.interthink.ca/research/). The second dimension is to understand how well the processes in place are working for their respective organizations. If you know the differences in performance, and you can identify where the differences in process are, you can begin to draw conclusions about what changes in process will result in improvements in performance.
Measuring and demonstrating the value and success of your Project Management Office may at this point sound like a lot of work. In the final analysis, it is. If your organization is committed to understanding and demonstrating the success of its PMO, and cares about which factors make the greatest contribution to that success, then it is an essential exercise. Committing to a meaningful process of measurement and comparison not only helps define your progress, but as we have already discussed it will also encourage the behaviours you are looking for. The best way to guarantee success is to measure it.