Part 6: Evaluating Tools: The Tool Must Be Accurate
Unfortunately, inaccuracies in recommendations often surface only after many applications of a tool. It is difficult to collect empirical data for validating a tool's recommendations (few organizations are willing to fund both recommended and not-recommended projects for the purpose of testing a tool!). Tool providers may have some evidence of their tool's accuracy, but they will naturally emphasize the positive and minimize (or ignore) the negative. The data they have may not be representative of the particular tool configuration or types of projects relevant to your organization.

Conducting a pilot test before fully committing to a tool is essential. Choose a variety of projects with different characteristics and see what recommendations the tool makes. Be skeptical of any odd patterns, such as projects with certain characteristics consistently being ranked either high or low. Resist the natural temptation to rationalize the results (garbage in, gospel out). Drill down to fully understand why the results come out the way they do.

Since extensive tool testing is usually impossible, assessments of accuracy must, at least in part, be based on a detailed evaluation of how the tool's decision model works. In this regard, two other considerations are useful. As described below, the tool must be logically sound and it must be complete.
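To make the pattern check concrete, here is a minimal sketch, in Python, of the kind of tabulation a pilot test might use. The projects, categories, and ranks below are hypothetical illustrations, not output from any actual tool:

    # A sanity check over hypothetical pilot-test results: if one category of
    # projects consistently lands at the top or bottom of the tool's ranking,
    # drill down to understand why before trusting the recommendations.
    from statistics import mean

    # Each tuple: (project, category, rank assigned by the tool; 1 = best).
    pilot_results = [
        ("Upgrade CRM", "IT", 1),
        ("New billing system", "IT", 2),
        ("Warehouse retrofit", "Facilities", 7),
        ("Roof replacement", "Facilities", 8),
        ("Safety training", "Compliance", 4),
        ("Emissions monitoring", "Compliance", 5),
        ("Market study", "R&D", 3),
        ("Prototype line", "R&D", 6),
    ]

    def average_rank_by_category(results):
        """Group pilot projects by a characteristic and average their ranks."""
        groups = {}
        for _, category, rank in results:
            groups.setdefault(category, []).append(rank)
        return {cat: mean(ranks) for cat, ranks in groups.items()}

    for category, avg in sorted(average_rank_by_category(pilot_results).items(),
                                key=lambda kv: kv[1]):
        print(f"{category:<12} average rank: {avg:.1f}")

In this illustration, IT projects all land near the top and facilities projects near the bottom; that pattern may be legitimate, but it should be explained, not rationalized.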
A sound model is like a modern aircraft wing, engineered from aerodynamic theory explaining how airflow around objects can produce lift. Many models, however, are like the feathered wings of early would-be aviators. Instead of being derived from theory, they are heuristics: rule-of-thumb relationships based on observations that certain characteristics tend to be associated with certain outcomes. For example, a popular heuristic for constructing project portfolios is balance. Project portfolios of successful companies often contain a balance of low-payoff sure things and high-payoff, long-shot gambles. However, balance is not the fundamental characteristic that makes a project portfolio successful. Thus, choosing balanced project portfolios will not ensure success.
As noted previously, relevant theories for selecting and prioritizing projects include decision analysis [5], multi-attribute utility analysis (MUA) [6], modern portfolio theory [7], portfolio optimization theory [8], and real options [9]. These theories are well-established within the technical and academic communities and have been proven in many real-world applications.
Modern Portfolio Theory
Modern portfolio theory is a theory of decision making that seeks to construct a portfolio of investments offering maximum expected returns for a given level of risk. The theory quantifies the benefits of diversification as a means of reducing risk.
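A worked illustration may help. The following sketch (with hypothetical returns, risks, and correlation) shows the diversification effect the theory quantifies: the portfolio's expected return is the weighted average of its components, but its risk is lower than the weighted average whenever the components are less than perfectly correlated:

    # Diversification in miniature: two projects, hypothetical numbers.
    from math import sqrt

    r_a, sigma_a = 0.10, 0.20   # expected return and risk (std. dev.) of A
    r_b, sigma_b = 0.08, 0.15   # expected return and risk of B
    rho = 0.2                   # correlation between the projects' outcomes
    w = 0.5                     # fraction of the budget allocated to A

    # Portfolio expected return is the weighted average of the components...
    portfolio_return = w * r_a + (1 - w) * r_b

    # ...but portfolio risk falls below the weighted average whenever rho < 1.
    portfolio_risk = sqrt((w * sigma_a) ** 2 + ((1 - w) * sigma_b) ** 2
                          + 2 * w * (1 - w) * rho * sigma_a * sigma_b)

    print(f"Expected return: {portfolio_return:.1%}")
    print(f"Portfolio risk:  {portfolio_risk:.1%} "
          f"(vs. {w * sigma_a + (1 - w) * sigma_b:.1%} weighted average)")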
Real Options
Real options analysis is a valuation theory that views projects as creating options for dealing with an uncertain future. For example, a new factory created by a project carries options to shut down, abandon, or expand, depending on market conditions. Project value depends on the options created. The theory includes methods for computing project value based on the market prices of related assets.
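The following sketch illustrates the basic insight under deliberately simplified assumptions (hypothetical cash flows and probabilities, and simple expected values rather than the market-based, risk-neutral valuation the full theory prescribes): flexibility to respond as uncertainty resolves adds value that a static cash-flow analysis misses:

    # A factory whose payoff next year depends on market conditions
    # (hypothetical numbers; simple expected values for illustration).
    payoff_strong, payoff_weak = 150.0, 40.0
    p_strong = 0.5
    build_cost = 100.0

    # Without flexibility, value is the expected payoff minus the cost.
    value_rigid = (p_strong * payoff_strong
                   + (1 - p_strong) * payoff_weak) - build_cost

    # With an abandonment option, the firm can sell the plant for salvage
    # if the market turns out weak, limiting the downside.
    salvage = 70.0
    value_flexible = (p_strong * payoff_strong
                      + (1 - p_strong) * max(payoff_weak, salvage)) - build_cost

    print(f"Value without flexibility:     {value_rigid:+.1f}")     # -5.0
    print(f"Value with abandonment option: {value_flexible:+.1f}")  # +10.0

Here the option turns a project that appears unattractive into one worth funding, which is precisely why ignoring the options a project creates can misstate its value.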
Decision Analysis
Decision analysis is a theory and collection of methods for making decisions under uncertainty. The approach involves constructing and analyzing a model of the decision problem to identify the choice, or sequence of choices, leading to outcomes most consistent with the preferences of the decision maker.
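As a minimal illustration of the approach (hypothetical probabilities and payoffs; a real analysis would apply a utility function reflecting the decision maker's risk attitude rather than raw expected value):

    # A one-stage decision tree: fund the risky project or the safe one
    # (hypothetical probabilities and payoffs).
    def expected_value(outcomes):
        """Value of a chance node given (probability, payoff) pairs."""
        return sum(p * payoff for p, payoff in outcomes)

    alternatives = {
        "risky project": [(0.3, 500.0), (0.7, -50.0)],  # big upside, likely loss
        "safe project":  [(1.0, 60.0)],                 # certain modest payoff
    }

    # With risk-neutral preferences, the preferred choice is the alternative
    # with the highest expected value; a risk-averse decision maker would
    # first transform the payoffs with a utility function.
    for name, outcomes in alternatives.items():
        print(f"{name}: expected value = {expected_value(outcomes):.1f}")
    best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
    print(f"Recommended: {best}")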
The above theories are sometimes termed axiomatic theories. Each begins with a set of axioms (assumptions or hypotheses) about how things work, for example, about how people or organizations ought to value and choose projects. These axioms can be accepted or rejected based on observations or other evidence. The theory then derives (through mathematical "proofs") conclusions that follow from its axioms. If you can demonstrate that the axioms are acceptable (and the math is correct), the rest of the theory follows. See the side box for an example.
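As a concrete instance of how conclusions are derived from axioms (a standard theorem from the decision-theory literature [10], not a feature of any particular tool), the expected-utility theorem states that if preferences over uncertain prospects satisfy the completeness, transitivity, continuity, and independence axioms, then there exists a utility function u such that

    \[
      A \succeq B \iff \sum_i p_i^{A}\, u(x_i) \;\ge\; \sum_i p_i^{B}\, u(x_i),
    \]

where p_i^A and p_i^B are the probabilities that prospects A and B assign to outcome x_i. The familiar rule "choose the alternative with the highest expected utility" is thus a derived conclusion, not an assumption in its own right.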
By the way, the fact that there are multiple theories does not mean that you will get different answers depending on the theory you choose to apply. The well-established theories for selecting projects typically give the same answers, provided that each theory is able to address all the relevant factors, the theories are properly applied, and the assumptions for the analyses are the same. The situation is analogous to that for theories in other fields, such as physics. For example, the mathematics for Newton's laws of motion and Einstein's theory of relativity look very different, but both theories predict the same trajectories for everyday objects under everyday conditions. The predictions differ only in situations (e.g., speeds close to the speed of light or extremely small objects) that invoke considerations not addressed by Newton's much simpler laws of motion. Likewise, decision analysis and real options, for example, look very different, but they give exactly the same answers to any problems that both can fully address, provided that each theory is correctly applied [11]. Nevertheless, the choice of theory is critical. Choosing a solution approach based on an ill-suited theory will make it extremely difficult, or impossible, to obtain useful and meaningful answers.

Rather than cite the theories on which their tools are based, most providers merely reference the analytic techniques that their tools employ, such as balanced scorecards, strategic alignment, decision trees, linear programming, and Monte Carlo simulation. Such techniques, by themselves, provide no explanation for why recommended portfolios should be preferred. Instead, they merely describe or refer to mathematical calculations that are performed. For example, scorecards typically use an equation that weights and adds scores assigned to projects, and projects are ranked based on total weighted score. There is no inherent reason why this should lead to the identification of preferred projects.

Weighted scoring techniques can sometimes be used to effectively apply appropriate theories, but only if the scoring scales and weights are structured to match the requirements of the theory. For example, MUA theory for valuing projects can sometimes be implemented using scorecards and a weight-and-add equation. However, for a weight-and-add equation to work, the metrics that are scored must meet a condition known as additive independence, scaling functions must be assigned so that the computed performance measures are proportional to value, and the weights must quantify the value of specified improvements against objectives (the weights are often assigned using an assessment technique known as the "swing weight method" [13]). Unless these conditions are met, the aggregated score will not measure value and will not serve as an indicator of decision-maker preference.
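To illustrate what a theory-consistent weight-and-add model looks like, here is a minimal sketch. The objectives, scaling ranges, and weights are hypothetical; a real application would need to verify additive independence and assess the swing weights with the decision makers:

    # An additive multi-attribute value model (hypothetical objectives,
    # scaling ranges, and weights; a real model must verify additive
    # independence and assess swing weights with the decision makers).

    # Scaling functions map raw performance onto a 0-1 value scale over an
    # assumed worst-to-best range, so scores are proportional to value.
    def scale_revenue(millions):          # worst $0M, best $50M (assumed)
        return min(max(millions / 50.0, 0.0), 1.0)

    def scale_safety(incidents_avoided):  # worst 0, best 10 (assumed)
        return min(max(incidents_avoided / 10.0, 0.0), 1.0)

    # Swing weights quantify the value of swinging each attribute from its
    # worst to its best level; they must sum to 1.
    weights = {"revenue": 0.7, "safety": 0.3}

    def project_value(revenue_m, incidents_avoided):
        """Aggregate value as the weighted sum of scaled attribute scores."""
        return (weights["revenue"] * scale_revenue(revenue_m)
                + weights["safety"] * scale_safety(incidents_avoided))

    projects = {"Project A": (30.0, 2), "Project B": (10.0, 9)}
    for name, attrs in sorted(projects.items(),
                              key=lambda kv: -project_value(*kv[1])):
        print(f"{name}: value = {project_value(*attrs):.2f}")

The point of the structure is that each ingredient has a defined meaning: scaled scores are proportional to value, and weights express value trade-offs. Without those properties, the same arithmetic produces a number with no decision-relevant interpretation.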
Tools Not Based on Sound Theories Are Exposed When Subjected to Technical Review
Congress required the Department of Energy (DOE) to rank potential sites for disposing of radioactive waste from nuclear power plants. To select the best site, the DOE initially used a scorecard approach. Each site was rated against each of the objectives established for a good site, the objectives were weighted, and the ratings and weights were combined to rank the sites. Hanford, a site in Washington State, ranked highest, and the DOE published the results in a draft Environmental Impact Statement. The choice was criticized, especially by officials from Washington State, and the DOE asked a board of the National Academy of Sciences (NAS) to review the ranking methodology. The board, which included experts in decision theory, responded that DOE's method was unsatisfactory, inadequate, undocumented, and biased [12]. DOE was told to redo its analysis (a site-ranking model based on MUA was developed). The new analysis ranked Yucca Mountain, Nevada, highest. DOE was forced to change its decision, causing the agency considerable embarrassment.
Thus, the use of a weighted scorecard does not in any way ensure that the requirements of any accepted theory are being met. The defensibility of the recommendations made by any decision model depends on whether the techniques used to value and prioritize projects are consistent with some defensible theory and on how faithfully the model implements the requirements of that theory. As an example of the dangers of using tools not based on sound theories, see the side box above describing the Department of Energy's initial attempt to rank potential sites for a nuclear waste repository.

Logical defensibility is particularly important when using a tool to help make controversial decisions. Although most project decisions aren't as controversial as nuclear waste, some (for example, an electric utility's decision to acquire right-of-way to construct a transmission line) can be. Using a logically sound approach avoids errors associated with unsound methods and reduces the risk of successful challenges to the credibility of decisions. Although logical soundness does not guarantee accuracy (see below), it is far safer and wiser to use a tool based on sound theory than one that merely "seems" reasonable.
How can such challenges be addressed? Obviously, skill is required, and experts should guide the development of the decision model used by the tool. At the same time, the organization can take steps to make the use of more sophisticated tools practical. These steps can include investing in internal training, reassigning responsibilities, developing new sources of data, and adjusting budgeting schedules. Although the above recommendations may seem daunting, it is worth noting that it is common for an organization to initially view a quality tool as "too complex" and then later, after gaining experience and comfort with its use, to want to expand the tool and make it even more sophisticated (see the side box above for an example). Thus, organizations should avoid simplistic tools and tools whose decision models cannot be improved as experience and understanding grow.
Performance monitoring. What information is available to help the organization track its performance? What types of performance data are collected?

Such considerations must be understood, and the tool and its application process must be designed to ensure that there is a good fit to the organization. See the side box for an example.

Being effective also means achieving the specific goals that motivate using a tool. For example, a tool intended for back-room use would be designed differently than one whose purpose is to demonstrate to regulators and local citizens that the organization's project decisions are in the best interests of the community. Not only would the latter have different user characteristics and features, but the definitions of project benefits would also be quite different.
A Centralized Project Ranking Tool May Not Fit a Decentralized Decision-Making Environment
An electric utility desired a system for allocating its $250 million annual O&M budget. The organization had a decentralized decision-making structure. The managers of business units (e.g., line clearance, meter reading, the call center) had historically been given authority to decide how best to spend within their respective areas. A single, centralized project-ranking system was unacceptable; it would dictate to each manager how the projects within his or her area should be prioritized. On the other hand, allowing each business unit to have its own, separate priority system would not provide a consistent approach for the enterprise as a whole. The selected system [14] involved a tiered approach to PPM. It allowed each department to apply a tool to determine the value generated under different funding levels, based on the projects that would be conducted under those funding levels. This design allowed area managers to retain authority to prioritize and select projects within their respective areas while rewarding managers who propose projects that create value consistent with corporate objectives.
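A minimal sketch of the tiered idea described in the side box appears below. The business-unit value curves are hypothetical, and the greedy increment-by-increment allocation shown is optimal only when each curve exhibits diminishing marginal value, as these do:

    # Tiered allocation in miniature: each unit reports cumulative value at
    # successive $1M funding levels (hypothetical, diminishing-returns curves).
    curves = {
        "line clearance": [12.0, 20.0, 25.0, 27.0],
        "meter reading":  [8.0, 15.0, 18.0, 19.0],
        "call center":    [10.0, 14.0, 16.0, 17.0],
    }

    def allocate(curves, budget_increments):
        """Fund, one increment at a time, the unit whose next increment
        adds the most value (optimal when curves are concave)."""
        funded = {unit: 0 for unit in curves}

        def marginal(unit):
            level = funded[unit]
            if level >= len(curves[unit]):
                return float("-inf")     # unit cannot absorb more funding
            prev = curves[unit][level - 1] if level > 0 else 0.0
            return curves[unit][level] - prev

        for _ in range(budget_increments):
            best = max(curves, key=marginal)
            if marginal(best) <= 0:
                break                    # no remaining increment adds value
            funded[best] += 1
        return funded

    # Allocating a $6M budget yields
    # {'line clearance': 3, 'meter reading': 2, 'call center': 1}.
    print(allocate(curves, budget_increments=6))

The corporate level decides only how much each unit receives; each manager retains authority over which projects are funded within that amount.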
Before building or purchasing a tool, think carefully about what the tool needs to do in order to be effective. See the second side box for another example of an application requiring a specialized approach. Although a tool may appear to offer lots of flexibility, it can only be adjusted within the limitations dictated by its underlying decision model. Few tools allow users to change the mathematical logic by which projects are valued or optimal portfolios are identified.
Gaining adequate acceptance is critical to the success of the tool. For a dramatic example of a tool that was widely regarded as technically defensible, complete, accurate, and practical and, yet, was rejected, read "The Rise and Fall of a Risk-Based Priority System" [15]. The most effective way of generating acceptance is to involve in the design effort those who will use and be impacted by the tool. A collaborative process helps ensure that the tool will have exactly those characteristics necessary to best suit the organization and its needs. Equally important, involving stakeholders in the design of the decision model creates buy-in by allowing skeptics to express their concerns and see firsthand how those concerns are addressed. See the side box for an example.
Notes
1. These criteria are discussed at greater length in M. W. Merkhofer, Decision Science and Social Risk Management, Reidel, 1987, and V. Covello and M. W. Merkhofer, Risk Assessment Methods, Plenum, 1993.
2. R. G. Anderson, A. Bendure, S. Strait, and A. Kann, "Supporting Documentation: Laboratory Integration and Prioritization System," Los Alamos National Laboratory, Los Alamos, New Mexico, 1994.
3. See, for example, Ward Edwards and J. Robert Newman, Multiattribute Evaluation, Sage University Papers Series on Quantitative Applications in the Social Sciences, Beverly Hills, CA, 1982.
4. The analysis and results are documented in H. Call and M. W. Merkhofer, "A Multi-Attribute Utility Analysis Model for Ranking Superfund Sites," in Superfund '88: Proceedings of the 9th National Conference, Washington, D.C., November 28-30, 1988.
5. Robert T. Clemen, Making Hard Decisions: An Introduction to Decision Analysis, PWS-Kent Publishing Company, Boston, 1997.
6. Ralph L. Keeney and Howard Raiffa, Decisions with Multiple Objectives, Wiley, New York, 1976.
7. For example, E. J. Elton and M. J. Gruber, Modern Portfolio Theory and Investment Analysis, 4th ed., Wiley, New York, 1991.
8. For example, R. K. Sundaram, A First Course in Optimization Theory, Cambridge University Press, 1996. Also see "Mathematical Theory for Prioritizing Projects and Optimally Allocating Capital," located on this website.
9. For example, T. Copeland and V. Antikarov, Real Options: A Practitioner's Guide, Texere, 2001.
10. J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, 1947.
11. James E. Smith and Robert F. Nau, "Valuing Risky Projects: Option Pricing Theory and Decision Analysis," Management Science, Vol. 41, No. 4, 1995.
12. For a complete description, see M. W. Merkhofer and R. L. Keeney, "A Multiattribute Utility Analysis of Alternative Sites for the Disposal of Nuclear Waste," Risk Analysis, Vol. 7, No. 2, 1987, pp. 173-194.
13. D. von Winterfeldt and W. Edwards, Decision Analysis and Behavioral Research, Cambridge University Press, New York, 1986.
14. E. Martin and M. W. Merkhofer, "Lessons Learned: Resource Allocation Based on Multi-Objective Decision Analysis," Proceedings of the First Annual Power Delivery Asset Management Workshop, New York, June 3-5, 2003. This reference also appears as a paper on this website.
15. K. E. Jenni, M. W. Merkhofer, and C. Williams, "The Rise and Fall of a Risk-Based Priority System: Lessons from DOE's Environmental Restoration Priority System," Risk Analysis, Vol. 15, No. 3, 1995, pp. 397-409.