EVALUATION METHODS IN ENVIRONMENTAL ASSESSMENT

AUGUST 1990

Environment / Environnement Ontario
Jim Bradley, Minister/ministre
ISBN 0-7729-7316-4

Report Prepared by: VHB Research & Consulting Inc., LocPlan, Lawrence MacDonald & Associates
Report Prepared for: Environmental Assessment Branch, Ontario Ministry of the Environment

Copyright: Queen's Printer for Ontario, 1990
This publication may be reproduced for non-commercial purposes with appropriate attribution.
PIBS 1062
log 90-0005-001

Table of Contents

Executive summary
  Purpose of the report
  Scope and method
  Important assumptions
  Limitations of data
  Main findings
  Conclusions
1 Introduction
  1.1 Background to the study
    1.1.1 Purpose
    1.1.2 The approach and limits of the study
  1.2 Evaluation methods in environmental assessment
    1.2.1 What is an evaluation method
    1.2.2 Evaluation methods in the environmental assessment process
    1.2.3 The purpose of an evaluation method
2 Major components of an evaluation
  2.1 Identification of what is important
    2.1.1 What are criteria?
    2.1.2 "Screening" criteria
    2.1.3 The total environment as the starting point
  2.2 How does what is important vary among the alternatives?
    2.2.1 Nominal scales
    2.2.2 Ordinal scales
    2.2.3 Interval scales
    2.2.4 Ratio scales
  2.3 Identification of relative importance
    2.3.1 Weights in evaluation methods
    2.3.2 Examples of weighting methods
    2.3.3 Other concerns with weights in evaluation methods
  2.4 Integrating and aggregating the components
3 The choice of methods
  3.1 Why so many methods?
    3.1.1 Types of decisions
    3.1.2 Decision environment
    3.1.3 Technical considerations
  3.2 Choosing among the methods
    3.2.1 Characteristics of problems
    3.2.2 Importance of the decision
    3.2.3 Acceptability by decision-makers, public and professionals
  3.3 Use of multiple methods
4 Review and analysis of methods
  4.1 The identification of methods
  4.2 Classification of methods
  4.3 Sequencing of methods (phasing)
  4.4 Evaluation methods in environmental assessments
    4.4.1 Ad hoc
    4.4.2 Checklists
    4.4.3 Matrix methods
    4.4.4 Economic approaches
    4.4.5 Pair-wise comparisons
    4.4.6 Mathematical programming methods
  4.5 Why are so few methods used in Ontario?
    4.5.1 Vulnerabilities exposed by formal methods
    4.5.2 Theoreticians and practitioners are not communicating
    4.5.3 EA practitioners may not be experts in evaluation systems
5 Things to look for in an evaluation
  5.1 Clarity of expression
    5.1.1 Objectivity versus partisanship
    5.1.2 Complex and inaccessible presentation
  5.2 Adequate criteria
    5.2.1 Rationale
    5.2.2 Comprehensiveness
    5.2.3 Overlapping
    5.2.4 Level of detail
    5.2.5 Relative number
  5.3 Logical errors and arithmetical errors
    5.3.1 Methodological errors
    5.3.2 Errors in arithmetic
    5.3.3 Double counting and independence of preferences
    5.3.4 Mixing of data types
    5.3.5 Intransitivity of preferences
    5.3.6 Use of dimensionless numbers, indices, and arbitrary scales
    5.3.7 Confusion of magnitude and importance
  5.4 Confusion of expert knowledge and responsibility for decisions
    5.4.1 Exact values
    5.4.2 Preference values
  5.5 Approach for dealing with uncertainty (& risk)
    5.5.1 Planning paralysis
    5.5.2 Choice of data types
    5.5.3 Sensitivity analyses
    5.5.4 Additional research
    5.5.5 Adopting a social policy for dealing with uncertainty
  5.6 Selection of the method
  5.7 Iteration
6 Conclusions and recommendations
  6.1 Major findings
    6.1.1 What is an evaluation method
    6.1.2 The evaluation process
    6.1.3 The choice of methods
    6.1.4 Review and analysis of methods
    6.1.5 Applying the evaluation method
  6.2 Conclusions
    6.2.1 Applying formal evaluation methods in EAs can help ensure that the objectives of the EA Act are met
    6.2.2 Evaluation methods assist in only one part of the evaluation process
    6.2.3 There are pressures to both increase and decrease the sophistication of methods used in EAs
    6.2.4 Careful application and review of the use of evaluation methods in EAs is not occurring
  6.3 Recommendations
    6.3.1 The Ministry should promote the use of formal evaluation methods
    6.3.2 Companion studies to this one should be done for methods of impact prediction, methods of weighting, and building systems of evaluation methods
    6.3.3 The Branch should encourage the careful choice of evaluation methods
    6.3.4 In reviewing EAs, the Branch should watch for errors and deficiencies like those which were identified in the EAs reviewed for this study
7 References
A Bibliography
B Evaluation methods glossary
C Applicability, advantages and disadvantages of evaluation methods
  1 Ad Hoc Methods
  2 Unordered lists of criteria
  3 Satisficing
  4 Constraint mapping
  5 Lexicographic ordering
  6 Simple additive weighting (SAW)
  7 Overlay mapping and geographic information systems
  8 SMART (Simple Multi-Attribute Rating Technique)
  9 PATTERN (Planning Assistance Through Technical Evaluation of Relevance Numbers)
  10 PROLIVAN
  11 Cost-benefit analysis (CBA)
  12 Cost-effectiveness analysis
  13 Cost-minimization analysis
  14 Planning balance sheet (PBS)
  15 Saaty's Analytical Hierarchy Procedure
  16 Concordance and discordance analysis (ELECTRE)
  17 TOPSIS
  18 Linear programming (LP)
  19 Dynamic programming (DP)
  20 Goal programming (GP)
D Questions for reviewers to consider in analyzing EAs

Executive summary

The evaluation of alternatives is a central requirement of Environmental Assessments in Ontario. To conduct this evaluation when planning a project, a proponent must use one of a number of available formal evaluation methods. The resulting evaluation is described in the proponent's Environmental Assessment submission, which is then reviewed by staff in the EA Branch and in other parts of the Ministry of the Environment and the Ontario government. The rapid growth of technical expertise in formal evaluation, and the experience to date with the evaluations conducted in Ontario Environmental Assessments, have suggested to the EA Branch that both proponents and reviewers would benefit from greater familiarity with the range of evaluation methods available, their strengths and weaknesses, and the ways in which they should and should not be used.
Purpose of the report

In relation to Environmental Assessments conducted in Ontario, this report aims to explain the proper function of evaluation methods, the different kinds of methods available and their characteristics, and the common pitfalls to watch for and avoid in applying these methods. This explanation is intended to be of use not only to government reviewers of submitted Environmental Assessments but also to the proponents responsible for the evaluation of alternatives in planning their projects. In this way the report, it is hoped, will make a contribution to better planning and documentation, fewer unnecessary disputes with interested parties, and greater consistency and effectiveness in review.

Scope and method

The report is based on an examination of Environmental Assessments submitted in Ontario, on the authors' familiarity with the literature on evaluation methods, and on the authors' experience as consultants with the Environmental Assessment process. The discussion covers not only methods used to date in Ontario but also other methods used elsewhere which were selected for their apparent inherent merits or potential applicability in Ontario. No primary research has been conducted for this report.

The study has been subdivided into several tasks:

• What an evaluation method is, the kinds that exist, and why such methods are used in Environmental Assessment (Chapter 1)
• The four parts of any evaluation: establishing the criteria, scaling the data, weighting the environmental preferences, and integrating the results (Chapter 2)
• Choosing a particular evaluation method for a project: why so many methods exist, the various kinds of decisions for which methods are required, and the things to consider in selecting a method to use (Chapter 3)
• An analysis of evaluation methods: the relative merits and disadvantages of the different evaluation methods in terms of the considerations described above (Chapter 4)
• Applying any evaluation method: what should be done, what should not be done, and what reviewers should look for (Chapter 5)

An annotated bibliography is presented in Appendix A. A glossary is presented in Appendix B. More detailed descriptions of all the methods identified were developed during the study, and these have been presented in Appendix C as convenient summaries for interested readers. Finally, Appendix D summarizes the study by presenting a series of questions that reviewers may wish to ask in reviewing applications of evaluation methods.

Important assumptions

To apply the list of methods identified to Ontario experience requires some interpretation: many of these methods have not yet been used here, though any could be and some probably will be before long; evaluation processes often use several methods in combination, with individual methods not identified; and improvised evaluations occur that reflect, usually less rigorously, particular methods identified here. An amalgamation of, or a deviation from, particular methods described in this report may be perfectly acceptable (in fact it might constitute a new method if sufficiently novel and careful). But proponents and reviewers are well advised to make themselves aware of the characteristic shortcomings of the techniques they happen to modify, and to examine carefully the extent to which these limitations have been intensified or diminished by improvisations.
Limitations of data

With ongoing research in the professional community into the use and development of methods, the list of methods presented in this report will gradually become dated. Moreover, new characteristics of listed methods may be identified, including deficiencies. But these shortcomings will be far outweighed by the advantages to be gained by the greater accuracy and effectiveness that will result from more rigorous application in practice of existing, theoretically defensible, formal methods.

Main findings

An evaluation method is defined as a formal procedure for establishing an order of preference among alternatives. It functions mainly as the fourth step in the evaluation process, after criteria have been established; data have been gathered, analyzed, and scaled; and preferences have been identified and documented. The utility of a formal method is to make systematic and explicit the application of assumptions and judgements in determining outcomes. Properly used, an evaluation method provides a neutral basis for compromise between opposing parties and a defensible rationale for the resulting preferences.

Good use of evaluation methods depends on certain requirements being met and on certain potential defects being avoided. These conditions are discussed and analyzed. For example, clarity of expression is a requirement which is furthered not only by a deliberate disinterest in the outcome and an absence of partisan bias on the part of the proponent but also by special efforts by technical advisors to explain their assumptions and to demonstrate connections between data, findings, and conclusions. The criteria used have to cover the total environment and the concerns of interested parties without allowing double-counting or distortions in relative importance. Common logical and arithmetical errors, such as intransitive preferences, arbitrary scales, and dimensionless numbers, are explained. It is noted that magnitude should not be confused with importance, nor expert knowledge with value judgments. The need to address uncertainty and risk and to use iterative procedures is emphasized.

Twenty individual methods are identified, analyzed, and compared. Prospective proponents are given an outline of the advantages and disadvantages of the methods in a way that will help in the choice of an appropriate one for a particular planning project. The methods are also individually described to give the interested reader enough background to make identifications and to approach the technical literature.

Conclusions

Two key principles of good planning are the basis of the requirements set out in the Environmental Assessment Act and in the Ministry's Interim Guidelines on Environmental Assessment Planning and Approvals: accountability and traceability. Formal evaluation methods can help ensure that these principles are satisfied.

Evaluation methods assist in only one part of the evaluation process. A properly chosen and applied evaluation method is of little use if the impacts are poorly predicted, or if the preferred alternative is not the one which best addresses the concerns of those making the decision.

There is a movement towards the use of more sophisticated techniques in other jurisdictions, and by some proponents in Ontario. To the extent that this leads to greater accountability and traceability, it is to be encouraged.
However, the pressures to increase traceability and accountability also create the potential for increased controversy, and a backlash against both formal evaluation methods and the EA process.

1 Introduction

1.1 Background to the study

1.1.1 Purpose

The Environmental Assessment (EA) Branch of the Ontario Ministry of the Environment is charged with the administration of the Environmental Assessment Act. The responsibilities of EA Branch planning staff include advising on the requirements of the EA Act and providing comments on draft material during pre-submission consultation, as well as commenting on the formal EA submission and coordinating the Government Review.

The evaluation of alternatives is a key component of the EA process. This report is designed to assist primarily EA Branch staff, and other appropriate Ministry of the Environment staff, in reviewing the methods used in the evaluation of alternatives and documented in EA submissions. It may also be of use to other government staff and to proponents. The objective of the report is to provide an understanding of available methods for describing alternatives and dealing with trade-offs in the EA process. A number of benefits are hoped to accrue from increasing this understanding:

• greater consistency in the review process
• better ability to focus the EA process on sound planning and its documentation in EAs, for example on requiring a traceable process
• fewer disputes arising from poorly performed or unintelligible trade-offs (as distinct from legitimate disagreements).

Ministry reviewers from outside the Branch will have a better understanding of evaluation methods with which to assess how their concerns have been addressed in EAs. Proponents will have a clearer idea of what Ministry reviewers will be expecting in EAs. Finally, the document can serve as an input to the further development of Ministry policies for sound EA planning. Such policies can help ensure that EAs meet the requirements of Section 5(3) of the Act.

1.1.2 The approach and limits of the study

The report is designed to achieve these goals through five major components:

• examination of the role of evaluation methods in EA (Chapter 1)
• consideration of the major components of an evaluation of alternatives (Chapter 2)
• review of the different types of applications of evaluation methods in EA, and identification of criteria for selecting methods (Chapter 3)
• description of some of the major types of methods used for choosing among alternatives (Chapter 4)
• identification of items related to evaluation methods that reviewers should both expect to see and not to see (Chapter 5)

In assembling the information for these sections, the study team reviewed a selection of EAs, provided by the Ministry steering committee, which were felt to provide a reasonable cross-section of the techniques in use in Ontario. In addition, the study team reviewed several key decisions of the EA Board and Joint Board which dealt with evaluation methods. This review of Ontario EAs and Board decisions was supplemented with a review of the planning literature dealing with evaluation methods. This included consideration of some evaluation methods which have not been used in Ontario EAs. Key references which were identified as part of these reviews have been listed in Appendix A for the use of those readers who wish to pursue particular issues in greater detail.
The descriptions of the appropriate selection and use of evaluation methods, and potential misuses of the same, have been developed based on both of these tasks and on the study team's familiarity with the subject matter. Particular strengths or weaknesses discussed are not necessarily associated with any of the EAs which were reviewed.

1.2 Evaluation methods in environmental assessment

Environmental Assessment, in company with many other planning activities, uses what are called evaluation methods. What is an evaluation method? Why are such things used? Could they be avoided? These are the questions discussed in the remainder of this chapter.

1.2.1 What is an evaluation method

An evaluation method may be defined as a formal procedure for establishing an order of preference among alternatives.[1] Let us look more closely at this definition.

Among alternatives

An evaluation method may be used when it is necessary to choose between two or more alternative courses of action. The identification of alternatives therefore precedes any evaluation, being regarded sometimes as outside the evaluation method proper and other times as the first step in a method's application. In all cases the identification — the definition — of alternatives is a crucial aspect of a successful evaluation, because if alternatives have been omitted or confused even the most rigorous method will fail to demonstrate the most desirable course of action.

[1] The application of an "evaluation method" is referred to as an "evaluation process", and the eventual result is an "evaluation".

Establishing an order of preference

An evaluation results simply in an ordering of alternatives according to preference. For example, where alternatives A to E have been identified, the evaluation may indicate that B is preferred, with D, A, C, and E in descending order of preference behind it.[2] Some methods provide a measure of the extent to which B is preferred over each of the other alternatives.

Though this might suggest that evaluation methods leave no room for decision making, that is not the case. Consider what happens when several different evaluation methods are applied to a given set of alternatives. Occasionally, one method may suggest a preferred alternative different from that suggested by other methods. This argues for a careful choice of evaluation methods and the desirability of applying a suite of evaluation methods where the decision is complex.[3]

If two methods give two different results for a given situation, does this falsify the whole business? It might be argued that the eventual "choice" is made unwittingly beforehand in the selection of an evaluation method. Also, anyone disliking the result might try another method. These arguments do not invalidate the use of formal evaluation methods; they only reinforce the frequent complexity of determining a preferred option and the need to carefully and consistently analyze the elements of the evaluation process. The number of evaluation methods is evidence of the complexity of the process, and intuitive methods deal with this complexity only by creating bewilderment. To accept this criticism is to misunderstand the function of an evaluation method within the environmental planning process.[4] A particular evaluation obviously should not be seen as equivalent to a decision. Evaluation methods are designed as decision aids for decision makers.
They do not replace the need for decisions to be made, particularly where issues such as fairness and the distribution of costs and benefits are involved.

[2] This discussion is based on traditional concepts of alternatives being discrete choices. Some evaluation methods described later are capable of dealing with a continuous range of alternatives along which an "optimum" is found. See Section 4.4.6.

[3] The use of multiple evaluation methods may seem excessively demanding. However, it is usually obtaining the inputs to evaluation methods that is demanding. Once these inputs are available, application of the methods themselves is often relatively straightforward.

[4] If different people are involved in using the method, or if users have adjusted their views in light of the results of the analysis, then applying the same method to the same situation a second time will often produce a different result! In other words, evaluation methods should serve not as mysterious independent judges but rather as convenient means of connecting assumptions to consequences, so that decision-makers can explore and more fully appreciate different alternatives and value sets, and ultimately make better decisions.

Formal procedure

An evaluation method is a formal, explicit, and thorough way of organizing and describing choices. In everyday living, people compare alternatives and state their preferences without an explicit evaluation method. Why is a formal evaluation method needed for EA?

Because of the size and complexity of the problems — The amount and complexity of data characteristic of evaluations of large projects (and even small ones) means that the iterative EA planning process requires a method too comprehensive to be applied casually or intuitively. Methods are intended to be applied repeatedly, each time with deliberate changes in assumptions or data that produce changes in preferences. This evaluation process gradually shows how differences in environmental preferences result in different ratings among alternatives.

To allow communication and collective decision-making — Where affected interests conflict, evaluation methods are used to help rival interests reconcile their differences as far as possible and reach compromises. The resort to a formal method is based on the belief that, where conflicting interests must be negotiated, explicit and comprehensive consideration results in more efficient communication and eventually more acceptable decisions than unfocussed and unnecessary debate, or than minimizing discussion altogether. In short, where alternatives must be compared, formal procedures are an essential part of participatory democracy.

1.2.2 Evaluation methods in the environmental assessment process

Evaluation methods have become a distinctive and central part of modern project planning and decision-making — especially in environmental assessment. They provide a framework for organizing and using predictions of impacts and public views arising from public consultation.

This report examines 20 evaluation methods, divided into six categories, available for use in the planning of Ontario undertakings subject to EA. The characteristics of the categories are summarized in Table 1. The categories are defined according to the processes that they adopt to carry out the evaluation. This basis for categorization was adopted to help reviewers identify unnamed methods they encounter.
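To make the definition of an evaluation method concrete before the categories are described, the following minimal Python sketch applies one of the simplest methods listed later in Table 2, lexicographic ordering, to produce an explicit order of preference. The alternatives, criteria, and ratings are hypothetical and chosen only for illustration; this is a sketch of the idea, not a recommended practice.

```python
# A minimal "evaluation method" in the report's sense: a formal procedure
# that turns data on alternatives into an order of preference.
# Lexicographic ordering is used here because it is among the simplest.

# Hypothetical ratings (higher is better) on two criteria, with
# "groundwater protection" treated as strictly more important than "cost".
alternatives = {
    "A": {"groundwater": 3, "cost": 2},
    "B": {"groundwater": 5, "cost": 1},
    "C": {"groundwater": 5, "cost": 4},
    "D": {"groundwater": 2, "cost": 5},
}

# Lexicographic ordering: sort on the most important criterion first and
# use the next criterion only to break ties.
priority = ["groundwater", "cost"]
order = sorted(alternatives,
               key=lambda a: tuple(alternatives[a][c] for c in priority),
               reverse=True)
print(order)  # ['C', 'B', 'A', 'D'], an explicit order of preference
```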
A multitude of different methods are used in EA as analysts tailor generic methods to their specific needs and environment. Some of these seem to resist classification at first, but close examination will reveal that the majority are simple variants or hybrids of the methods examined in this study. Table 2 lists the evaluation methods considered.

It should be noted that evaluation methods are not the only formal procedure used in an environmental assessment. Other methods have been developed for other parts of the EA process, such as impact prediction, public consultation, and weight determination. Some of these are listed in Table 3. Although they are unavoidably mentioned in this report, there is an entire literature associated with each, and detailed consideration of them is beyond the scope of the present work.

Planners and the public are becoming familiar with evaluation processes and aware of the importance and utility of evaluation methods. At the same time, the variety of methods available is also growing. The point has been reached where, in large or potentially controversial projects, the selection of evaluation methods becomes part of the planning process.

The selection of an evaluation method has two levels of influence on planning. First, methods with different capabilities make possible different kinds of planning processes and stages, some of which may be especially suited to a particular project. Second, within any particular planning process all the parts (such as identifying alternatives, selecting criteria, consulting and involving interested parties, as well as evaluating) must fit together consistently to make a unified structure. This report will help proponents find the right method or group of evaluation methods for their projects and integrate it into the planning process.

1.2.3 The purpose of an evaluation method

In EA an evaluation method is used to clarify and articulate environmental preferences and translate them into preferences among alternative undertakings. The use of a formal evaluation method has two main advantages: it provides a better basis for decision-making than would otherwise exist, and it results in reasons for decisions that on examination can be traced. At the same time, however, the use of a formal method imposes requirements that, if not met, can partly or entirely eliminate these advantages.

Table 1  Definition of categories for grouping of evaluation methods.

Ad Hoc Methods: involve comparison of alternatives in narrative terms with no explicitly stated method used in ordering preferences. This method is often termed "professional judgement".

Checklist Methods: compare and evaluate alternatives against a specified set of criteria with no compensatory rules or trade-offs.

Matrix Methods: use matrices for comparison and evaluation of alternatives. Descriptive matrices use only the display properties of matrices and rely on "professional judgement" (i.e. the ad hoc method) to order preferences. Mathematical matrices, including additive models, use mathematical operations to order preferences and enable trade-offs between attributes.

Economic Methods: use economic methods and principles to weight incommensurable units and temporally distinct impacts into monetary units.

Cartographic Methods: compare and evaluate alternatives using maps.

Pair-wise Comparison Methods: involve the sequential comparison of alternatives in pairs as a basis for subsequent ordering of preferences using mathematical techniques.

Mathematical Programming Methods:
are mathematical techniques which involve searching for the alternative that best meets a specific objective function, subject to a set of constraint equations.

Table 2  Evaluation methods considered in the study.

Ad Hoc: description of alternatives and their relative strengths and weaknesses in narrative terms.

Checklists: unordered lists of criteria; ordered lists of criteria (satisficing, constraint mapping, lexicographic ordering*).

Matrix Methods: simple additive (SAW); variations on SAW (overlay mapping/geographic information systems, SMART*, PATTERN*, PROLIVAN*).

Economic Approaches: Cost-Benefit Analysis (CBA), Cost-Effectiveness Analysis (CEA)*, Cost-Minimization Analysis (CMA)*, Planning Balance Sheet (PBS)*.

Pair-wise Comparisons: Saaty's Analytical Hierarchy Procedure, concordance and discordance analysis (ELECTRE)*, TOPSIS*.

Mathematical Programming: linear programming, dynamic programming, goal programming.

* Considered only in Appendix C.

Table 3  Examples of methods related to parts of the EA process other than evaluation.

Impact Prediction: computer simulation, Ross' Environmental Interaction Matrix, Leopold matrix.

Weighting of Preferences: ranking and categorization, rating, point allocation, indifference trade-off method, decision-analysis weight selection, weights derived from observing user behaviour.

Aggregation of Weights: Borda-Kendall, Cook and Sieford distance method.

Public Consultation: focus groups, questionnaires, referenda, group interaction methods (including Delphi and Nominal Group Technique).

Improved basis for decision-making

The formal procedures of the "ideal" evaluation method offer important advantages in the planning process:

• They allow comprehensive and systematic consideration of every kind of potential impact for every alternative.
• By making explicit all assumptions and value judgments, they encourage the various interested parties to delineate areas of agreement and distinguish issues on which compromises must be negotiated. In the negotiation of compromises, they can express and test modifications in assumptions and value judgments to explore possible trade-offs.
• The order they impose on data, assumptions, and value judgments leads to analyses and conclusions that are objective and traceable, in the sense of being capable of repetition and replication by anyone.
• When issues are settled and the definitive preferences established, the result is a clear rationale for the selected alternative.
• By explicitly documenting environmental values, they help to provide the foundation of a set of precedents that can be used in future projects.

In these ways, a formal evaluation method places the planning process on a firm foundation.

Efficient use of technical expertise

Commonly in EAs, great quantities of technical information are collected only to find that one or two factors are the key to the final decision. An explicit evaluation method has the potential to identify these critical variables early in the evaluation process so that technical resources can be concentrated on resolving the major issues. This matter is discussed further in Section 5.5.3.

Potential disadvantages of using a formal evaluation method

Some proponents and interested parties may see disadvantages in using the formal evaluation methods discussed in this report. All of the disadvantages arise from the more rigorous process requirements they place on proponents and planning participants compared to informal evaluation methods.
Those whose unstated intentions are to impose their own wills will see disadvantages in using formal evaluation methods. The proponent who knows beforehand what is needed will scorn systematic assessment as erroneous or redundant, and in any case a needless expense. The interest group that sees itself in a war of ideals will scorn systematic assessment as misrepresentation and camouflage. Neither will care for formal evaluation procedures, with their dispassionate examination of alternatives and testing of value judgments. However, from a wider point of view, the tendency of formal evaluation procedures to frustrate the imposition of unilaterally defined goals will be seen instead as a disguised advantage by those who believe in modern planning principles.

A real disadvantage of formal methods is that members of the public are often intimidated or exasperated by the technicality of evaluation methods, especially those which use complex numerical techniques. It is ironic that formal evaluation, which aims to make decisions more easily understood, can through its procedures and routines appear to make them more mysterious than ever. It is a goal of EA, and of this report as well, that decisions should be understood by those who are affected by them. Properly used, a formal evaluation method, though complex, should make decisions more accessible to all parties, not less. This report suggests measures that can be taken during evaluation to keep the process demystified and open.

2 Major components of an evaluation

There are four basic requirements of any evaluation process:

• selecting criteria by identifying the things the decision depends on
• predicting the impacts of each alternative in terms of each criterion, and rating these impacts in magnitude
• comparing the impacts, including deciding which impacts are more or less tolerable to the affected parties
• applying an evaluation method which combines weights and rates for each of the criteria to provide an ordering of the alternatives.

The order in which these steps are carried out may vary from method to method (in fact, that may be a distinguishing feature of a method). Usually, the process is iterative, and earlier steps will be reviewed and modified in response to the findings of later steps. This iterative process is desirable, particularly when the public is being asked to formalize their values in a new and explicit way. It allows a better understanding of consequences to affect the expression of values.

In addition, some methods are more capable than others of generating new alternatives. While some methods are designed to identify an optimal alternative chosen from along a continuous range, others deal with choosing among a limited number of discrete alternatives.

2.1 Identification of what is important

Although the EA Act defines environment very broadly, and a requirement of EA planning is comprehensiveness, this does not necessarily mean all aspects of the environment are equally important in reaching a decision. What is important in an evaluation exercise are those things that differ among the alternatives (including the do-nothing alternative), since it is on the basis of the differences that the choice is made.

2.1.1 What are criteria?

When alternatives are compared by using evaluation methods, the comparison is based on a number (often a large number) of explicit features or considerations.
These points of comparison are usually termed criteria or considerations.[5] When an evaluation process is applied and alternatives are ordered according to preferences, these are the preferences that arise from the scores on the criteria employed. If another criterion was added to the evaluation, or if one was dropped, the result might well be a different set of preferences. Obviously, therefore, the determination of the criteria to be used in an evaluation is a critical matter. In fact, it may be as important as the definition of the alternatives to be compared. The criteria used to judge alternatives will probably vary from project to project, and are established outside the evaluation framework, but they should relate to the goals, objectives, and targets of the planning exercise.

[5] The terms are flexible; some evaluations even employ both "factors" and "criteria", defining one group as a subset of the other. The stipulation is consistency within a particular evaluation. This study uses only the term "criteria".

2.1.2 "Screening" criteria

Though criteria usually act as points of comparison, most evaluations include certain criteria that do not function in that way. Instead they set minimum conditions or categorical requirements, and thus can unilaterally exclude alternatives from consideration with no comparisons being involved. These special criteria, often called "screening" or "exclusionary" criteria, tend to be used at the beginning of an evaluation so that either essentially unacceptable alternatives can be rejected early or the number of alternatives to be considered can be reduced to a "manageable" number. Of all criteria in an evaluation, those used for screening are potentially the most powerful, not only because they can be used to exclude unilaterally, but also because they tend to escape the scrutiny focused on the other criteria during detailed comparisons and trade-offs. Though not always identified as the application of criteria, rules are sometimes invoked during the identification of alternatives to narrow the range of alternatives to be evaluated. These rules are actually screening criteria that deserve careful examination.

One common screening criterion that exemplifies all these dangers is the concept of "proven technology". This concept, when properly applied as a criterion in an evaluation, should indicate two things:

• data are available for examination on the operation of a technology in an application comparable to the one proposed
• on examination, those data show that the technology consistently meets certain specified standards.

This criterion, used in this rigorous and objective way, deserves to be widely used for screening. Frequently in evaluations the concept of "proven technology" becomes instead a general qualitative indication, from an engineering standpoint, of the ability of purchased equipment to perform to manufacturers' specifications. As a result, evaluations sometimes make ruthless exclusions in the name of this criterion but in the absence of explicit data, selected standards, and even any evidence of inadequacy. This result would not be countenanced from criteria used at later stages in the same evaluations.

2.1.3 The total environment as the starting point

The criteria must cover all aspects of the environment, as stated in Ontario's Environmental Assessment Act.
The Act (Section 1(c)) defines "environment" in very broad terms:

(c) "environment" means,
(i) air, land or water,
(ii) plant and animal life, including man,
(iii) the social, economic and cultural conditions that influence the life of man or a community,
(iv) any building, structure, machine or other device or thing made by man,
(v) any solid, liquid, gas, odour, heat, sound, vibration or radiation resulting directly or indirectly from the activities of man, or
(vi) any part or combination of the foregoing and the interrelationships between any two or more of them, in or of Ontario.

Besides ecological factors, this definition includes social, economic, cultural, technical, and financial factors. In short, both "natural" and "human" elements of the environment must be considered in the evaluation. Furthermore, environmental effects vary greatly in many ways that must be reflected in the evaluation. For example, effects vary in:

• certainty, severity and probability of occurrence
• influence: some are positive (beneficial), others negative (harmful)
• directness: some are direct, such as noise; others are indirect, such as induced economic development
• duration: short-term, long-term, frequency.

These differences must be encompassed and expressed in the criteria because the comparison of alternatives is based upon them. All this suggests that the potential array of effects for each alternative may be enormous. However, the significant effects are those which differ between alternatives and for which impacts are substantial and important. Typically far fewer impacts meet these conditions. Therefore attention rapidly focuses on a limited number of crucial differences between the alternatives, where the bulk of the evaluation then concentrates.

Nevertheless, the starting point must remain the total environment, represented by a complete range of criteria. From this range, the evaluation narrows down, through a series of explained exclusions, to the crucial criteria among which trade-offs must be made. On these criteria, the alternatives being compared have important but different effects, so that determining the preferred alternative means determining the preferred combination of effects. Only by narrowing down in this way, by explicit steps from the total overview, can the proponent be sure, and be able to demonstrate, that all significant impacts have been found and captured in the evaluation. In all cases, the rationale and significance of each criterion should be carefully set out if the analysis is to be kept systematic and logical.

2.2 How does what is important vary among the alternatives?

Different alternatives will be better and worse than others on particular criteria.[6] Several steps are required to enable these differences to be analyzed:

• A means for measuring the nature of the impact must be identified. For example, if the impact of concern is on human health, how will this be measured? As an increased cancer risk? As the number of days of employment lost to sickness? As the number of days spent in hospital? As the number of deaths?
• Having identified the measure of the impact, it is necessary to predict the magnitude of the impact. This may be one of the most difficult tasks of the environmental assessment process, since the environmental and social systems on which the impacts occur are often very complex and poorly understood.
Many impact prediction methods exist for undertaking these predictions, ranging from simply identifying the presence or absence of some environmental feature that may be affected, to highly sophisticated and complex computerized simulation models.

The evaluation requires that the magnitude and importance of these differences be objectively determined. The identification of these differences can be done objectively using nominal, ordinal, interval or ratio scales, which will now be discussed.

[6] If one alternative is better than or equal to all other alternatives with respect to all criteria, it is clearly preferred, and a complicated or simple evaluation method is likely not required. "Dominant" is used to describe such an alternative. Dominant alternatives are rare in EA, and decisions usually involve some form of trade-off.

2.2.1 Nominal scales

Nominal scales involve the grouping of areas of impact into distinct, discrete categories, such as "fish" and "birds". Making these assignments may be relatively simple, particularly if the things being grouped are clustered in a certain way, so that categories are obvious. The usefulness and ease of use of such categories decreases as the distinctions between them become less clear for the items being classified. Determining the boundaries between categories may then become problematic.

In spite of their ease of use (at least away from the boundaries), nominal categories have several disadvantages. One is that they are usually context-specific. For example, in the case of temperature, the category "warm" means very different things if referring to outdoor temperatures in Canada in January and in July. In turn, a "warm oven" has quite a different temperature from either of these outdoor air examples. The meaning of the word "warm" may also vary from one person to another.

A second disadvantage with the use of nominal "measures" is that they provide no information about differences within the category. For example, if two days are both categorized as warm, no information is available to distinguish which of the two days is warmer.

A third disadvantage of nominal categories is that they do not allow individual observations to be added, subtracted, multiplied or divided to enable cross-category quantitative comparisons. Again using the temperature example, the nominal scale gives no information which would enable you to know how much energy is needed to maintain the temperature inside a house when it is "cold" outside, even if it is known how much is required when it is "cool" outside, and that none is needed when it is "warm". Other types of scales enable such decisions to be made deductively and rationally.

2.2.2 Ordinal scales

Ordinal scales rank the items being scaled from first to last. Such scales solve several of the problems which are present with nominal scales. In particular, they allow discrimination among all the items being scaled. For example, an ordinal scale can clearly illustrate the temperature ordering of a warm oven and the outside air on two July days and a February day.

Ordinal scales are still context-specific and still do not allow arithmetical operations, because the observations are still defined by categories, not measurements. They are context-specific in that knowing the warmest day in August (say the 19th) and the warmest day in July (say the 30th) tells one nothing about which of the two is warmer.
Ordinal scales do not allow arithmetic operations because there is no assurance that the difference between the first and second ranked items is the same as the difference between the second and third ranked items. For example, the temperature of the oven may be 110°C, the first day in July 28°C, the next 26°C, and the February day 3°C. The ordinal values for each would be 1, 2, 3, and 4 respectively. Clearly, one could not impute the temperature differential from the ordinal values.

2.2.3 Interval scales

Interval scales use relative measurements. Taking these measurements for a group of items may be more demanding, but they impose fewer constraints. Of the limitations discussed above, the only one remaining with interval scales is that they are still context-specific. Mathematical operations can legitimately be performed because the difference between the items being scaled is expressed in relative units. The Celsius temperature scale is an interval scale which enables the temperature intervals between one pair of observations and another pair to be compared mathematically. But the fact that the scale is relative to the freezing point (i.e. 0°C) and not absolute means that the ratio of one temperature observation to another is not the ratio of the temperature numbers; something that is 20°C (68°F; 293°K) is not twice as warm as something that is 10°C (50°F; 283°K).

2.2.4 Ratio scales

Ratio scales are often the most difficult to determine, but the most precise. In these scales, measurement is made along an absolute (not relative) scale, and the ratio between two observations is reflected in the scale numbers. In the case of temperature, a ratio scale is the Kelvin scale. Using the Kelvin scale, it can be stated that 150°K is half as hot as 300°K.

Often ordinal data are treated as though they were interval or ratio data by assuming an underlying continuous distribution. This assumption is often invalid. For example, there is no reason to assume that the second place runner took twice as long to complete a race as the first place runner.

2.3 Identification of relative importance

Usually the criteria in an evaluation exercise are not all equally important, and the relative importance of each criterion needs to be identified. This is commonly referred to as assigning "weights" or specifying preferences. Weights can be used to combine multiple attributes (i.e. criteria) of an alternative into a single indicator of utility or value.

2.3.1 Weights in evaluation methods

Weights are a critical input to evaluation methods. The meaning and use of weights may vary from one evaluation method to another, and the weights may be expressed in different ways. Weights may be single values ("linear coefficients") that are multiplied by rates to give scores, or may be sophisticated equations ("objective functions") which take into account, for example, incremental importance decreasing with incremental magnitude.[8] In some of the cartographic methods, weights may be expressed by the intensity of shading on an overlay or in the definitions of categories of "good" and "poor". In other methods, explicit weights may not be used at all (which often merely means assigning an implicit weight of 1.0 for each criterion). In each case, weights should be set in accordance with the willingness to trade off the criteria, and the relative weights for any pair of criteria ought to be independent of the weights for any other criterion[9] (Hobbs, 1980:726).
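As a concrete illustration of weights used as linear coefficients, the short Python sketch below multiplies weights by rates and sums the results for each alternative, which is essentially the simple additive weighting (SAW) approach listed in Table 2. The criteria, weights, and rates are invented for illustration only; arriving at defensible values for them is the difficult part, as the discussion that follows makes clear.

```python
# Illustrative sketch of weights as linear coefficients (Section 2.3.1):
# each alternative's score is the weighted sum of its rates, i.e. the
# simple additive weighting (SAW) idea. All numbers are hypothetical.

weights = {"human_health": 0.5, "cost": 0.2, "wildlife": 0.3}  # sum to 1.0

# Rates: each alternative's scaled performance on each criterion,
# here on a common 0-10 interval scale where higher is better.
rates = {
    "A": {"human_health": 6, "cost": 9, "wildlife": 4},
    "B": {"human_health": 8, "cost": 3, "wildlife": 7},
}

scores = {alt: sum(weights[c] * r[c] for c in weights)
          for alt, r in rates.items()}
print(scores)  # {'A': 6.0, 'B': 6.7}: B is preferred under these weights
```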
Often in environmental planning these conditions on weights are not met (Hobbs, 1980). Examples of misuse include:

• assigning weights without knowledge of the criteria or their values. This precludes providing an accurate reflection of the trade-offs that decision makers are willing to make. For example, saying that "human health" is twice as important as "cost" has little meaning unless the human health effects are specified (certain death, or minor eye irritation every ten years). Similarly, a measure of the cost criterion is needed to make a comparison to either of these example health effects and thereby reach conclusions about appropriate weights.
• assigning weights according to the variance of the attribute being weighted (i.e. assigning higher weights to criteria showing larger differences across the alternatives than to criteria with small differences across the alternatives).

[8] For example, in valuing recreational opportunities, the first few hours made available may have a much higher value than subsequent hours.

[9] The technical term for this is "preference independence".

2.3.2 Examples of weighting methods

The process for getting weights from groups potentially affected by an alternative, for assigning weights, and for combining the weights of a number of groups of people is a complex one, and there is a considerable literature from several disciplines (including psychology, economics and operations research) dealing with this issue. This process is essential to EA, and merits special consideration, but cannot be detailed within the scope of the present study. As an indication of the kinds of outputs that may be produced from weighting methods as an input to evaluation methods, some examples of weighting methods are very briefly considered:

• ranking and categorization — this weighting method involves scaling "importance" using nominal or ordinal scales. For example, it might be decided that human health is more important than economic cost. This weighting method is not theoretically valid, since ratios of weights are arbitrarily fixed. There is evidence indicating that categorization in particular tends to understate ratios of weights relative to other weighting methods (Hobbs, 1980).
• rating — this involves rating the "importance" of each criterion along a scale of 0 to 10. Using the same example as above, human health might be given a ten, and economic cost a two. Although relatively easy to use, this weighting method does not necessarily produce valid results, since the definition of importance may have little to do with willingness to make trade-offs, which is fundamental to choosing between alternatives.
• point allocation — involves distributing some number of points (say 100) among the criteria in proportion to their "importance". As for rating, a potential problem with this method is that importance and willingness-to-trade-off a criterion are not always perceived as the same thing. In addition, some evidence suggests that peoples' perception of importance uses a logarithmic, not a linear, scale. This latter problem suggests that these methods will understate the importance of the more important variables.[10]

[10] If a logarithmic scale is used, then a criterion with a weight of 2 is not twice as important as one with a weight of 1, but rather is e²/e¹ = e ≈ 2.71 times as important.
• indifference trade-off method — assures theoretically valid weights by determining the amount that a decision-maker would give up on one attribute to obtain more of another, but does have potential perceptual shortcomings.
• decision analysis weight selection — is conceptually similar to the indifference trade-off method, except that weights are related to probabilities of certain consequences, rather than their presence or absence.
• observer derived techniques — involve deriving weights from decision-makers' indicated preferences among real or imaginary alternatives. For example, past decisions may be mathematically analyzed to derive weights which can replicate the past decisions. These techniques tend to cluster weights on a few attributes. Because decision makers disregard lesser, but still important, attributes, observer-derived weights are probably not proportional to the worth of unit changes in value functions.[11]

[11] This method is more suitable for opposing than for advancing a course of action. It implicitly assumes the previous decisions are valid or acceptable, presumably despite not having used weights.

2.3.3 Other concerns with weights in evaluation methods

Regardless of the method for assigning weights, there are several other problems that must be dealt with in using weights in evaluation methods:

• identifying the people or groups of people whose opinions on weights are sought
• aggregating the weights of different groups
• survey bias.

Different groups within society differ in their views on the relative "importance" of different attributes and in their willingness to trade off particular criteria. Identifying these groups, getting their assistance in identifying criteria, and assigning weights to these criteria is a major role of any public consultation program. In several hearings before the Joint Board, the composition of the team assigning weights was identified as a possible source of bias in the analysis (Joint Board, 1987a; Joint Board, 1987b).

The second problem that must be dealt with is the aggregation of the weights of different groups. To some extent, this involves assigning weights to the opinions expressed by the different people or groups of people! Given the potential for hard feelings, this is usually done implicitly.

Finally, the analyst must find a way of dealing with survey bias. Studies have shown that the way in which a question is phrased may significantly affect the response received (Tversky, 1981). It is therefore important to be as careful as possible that the responses are not the result of biases introduced by the design of the survey instrument.

2.4 Integrating and aggregating the components

In principle, once all the above prior steps of the evaluation process have been completed, the evaluation method can be applied to identify the preferences among alternatives. The evaluation method will combine the criteria, the predicted impacts, and the weights of the various criteria to produce an ordering of the alternatives. This may be done for each group that may be affected by the alternatives.

In practice, selection and application of the evaluation method are not the last step in the analysis, since different evaluation methods may stipulate that data be collected and assembled in particular ways, or may incorporate some of the other steps discussed above. For example, the Borda-Kendall and Cook-Sieford methods are largely concerned with aggregating the weights of different groups (Massam, 1988).
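To illustrate the aggregation step in minimal form, the sketch below combines the preference orderings of several hypothetical groups using a simple Borda-style count, in which an alternative earns points according to its position in each group's ranking. This is only a minimal illustration of the idea behind consensus-ranking methods such as Borda-Kendall; the groups and their rankings are invented, and the published methods are considerably more elaborate.

```python
# Minimal sketch of aggregating the preference orderings of several
# groups with a Borda-style count. Hypothetical data throughout.

group_rankings = [            # each list runs from most to least preferred
    ["B", "A", "C"],          # e.g., local residents
    ["A", "B", "C"],          # e.g., agency staff
    ["B", "C", "A"],          # e.g., environmental groups
]

n = 3  # number of alternatives
points = {}
for ranking in group_rankings:
    # An alternative in position p of a ranking of n earns n - 1 - p points.
    for position, alt in enumerate(ranking):
        points[alt] = points.get(alt, 0) + (n - 1 - position)

consensus = sorted(points, key=points.get, reverse=True)
print(points)     # {'B': 5, 'A': 3, 'C': 1}
print(consensus)  # ['B', 'A', 'C']: an aggregate order of preference
```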
3 The choice of methods

Chapter 4 of this report describes 10 evaluation methods, and there are a number of variations and combinations of these, 11 of which are discussed in Appendix C. This chapter examines why so many methods have been developed and what potential users should consider in choosing an evaluation method (i.e., criteria for evaluating evaluation methods).

3.1 Why so many methods?

The significant number of methods currently available, and the tendency for new ones to be introduced, clearly demonstrate that there is no ideal system, at least for all types of decisions. As a result, users should be aware of the strengths and limitations of the method(s) they select for their specific problem. Most methods have been developed to deal with a specific type of problem; in some cases methods have been successfully applied to other situations, but no method is ideal for all problems. By examining the factors that have led to the various methods, an appreciation of their individual characteristics can be obtained.

3.1.1 Types of decisions

Evaluation methods are used in a wide range of public and private decisions. To be applicable to EA decisions, they must provide a framework for trading off non-commensurable values.

Information on geographical distribution is a common requirement in projects subject to EA (e.g., site selection, route location). Evaluation methods for these types of problems must include a capability for dealing with spatial interactions that are significant. Other decisions relate to technology selection (e.g., Energy-from-Waste). Often, there are a discrete number of alternatives from which to choose, and intermediate sizes or technologies may not be available due to manufacturing limitations. This presents quite a different problem from one where a continuous set of alternatives is available, and a different set of methods is applicable.

3.1.2 Decision environment

Some decisions involve a relatively small number of "actors" or "stakeholders", whereas others may involve large numbers of interested parties. Reducing the complexity and level of effort of a method often compromises accuracy and comprehensiveness, but may be necessary to overcome logistical barriers.

The decision objectives may affect the choice of an evaluation method. For example, if one were striving to distribute the costs of a project widely, the "net benefits" may not be as significant as their distribution. Accordingly, an evaluation method concentrating on such distributional effects (e.g. ELECTRE concordance analysis) would be preferred.

The evaluation method must be consistent with the perceptions and principles of the decision-makers and those being affected by the decision. For example, some persons are offended by the basic concept of reducing intangible values to dollars and cents. The great variety of opinions on such matters is itself a source of many different methods.

3.1.3 Technical considerations

Every evaluation method tries to organize and simplify the complexity of the problem to be addressed and to focus on the differences among the possible solutions. Being a simplification, any given method has within it certain implicit assumptions. The user's willingness to accept these assumptions depends on such things as the availability of data about the alternatives, or the ability and willingness to collect additional data. Different methods emphasize different attributes of the problem and may be chosen for this reason.
For example, one method may be easier to understand but less rigorous than another; some methods are easier to use than others; and so on.

In conclusion, none of the evaluation methods discussed in this report is preferable to all others with respect to all the criteria set out in the following section. Some have more limited application than others, and many are designed to be used in combination with others. Progress can be expected to continue in the development of new methods, but such advances will not eliminate the complexity and challenge faced by decision-makers as the public demands explicit rationales for their decisions. Furthermore, the problems being faced will continue to increase in complexity, if for no other reason than the growing understanding of the interactions of natural and socio-economic systems.

3.2 Choosing among the methods

The preceding section provides an explanation for the diversity of evaluation methods and suggests that no one method is universally superior. This section describes criteria which can be used to select among the methods. The criteria are summarized in Table 4. The choice of method(s) will depend on the emphasis or importance placed on each criterion. In many cases, a sequence or combination of methods is used to deal with the various elements. This may include an iterative process employing more rigorous methods as the scope of the problem becomes better defined; this is discussed further in Section 3.3.

Table 4 Criteria for analyzing evaluation methods

Characteristics of problems
• Problems involving spatial relationships
• Problems involving temporal relationships
• Scope of the environment affected
• Scale of the problem
• Level of interdependence and complexity
• Level of uncertainty

Importance of the decision
• Significance of being wrong and difficulty in choosing
• Decision schedule and implementation time
• Cost of deciding
• Technical sophistication of human and technological resources
• Data requirements and information availability

Acceptability by interested parties
• Consistency of the method
• Reliability of the method and precedents
• Simplicity of the method and graphic ease
• Focusing on key decision points and criteria
• Sensitivity of the method
• Ability of the method to choose an optimal alternative
• Theoretical defensibility of the method

3.2.1 Characteristics of problems

Six criteria are defined that relate to the type of problem to be addressed.

Spatial relationships

Many natural environment impacts, and some socio-economic impacts, have important spatial dimensions. For example, the location of a wetland with respect both to a project and to other natural system components may need to be represented by the evaluation method. Methods with this capability are usually map-based or contain spatial coordinate information. The type of spatial information may vary with the type of project. For example, the spatial information for a specific site (e.g., a landfill) may differ significantly from that associated with linear projects (e.g., hydro or pipeline rights-of-way). A system suitable for one may not be applicable to the other.

Temporal relationships

Projects have both short-term and long-term impacts, and impacts may also vary seasonally. Impact prediction techniques should be used to estimate when such events are likely to occur, and the evaluation system should be able to incorporate these impact/time profiles in the analysis.

Scope

The EA Act defines "environment" broadly.
Accordingly, evaluations include physical, biological, economic and social impacts. This criterion reflects the ability of an evaluation method to deal with that full range of variables.

Scale

The iterative planning process involves moving from a gross scale to progressively finer levels of analysis. Consistency is a central requirement in this process; accordingly, evaluation systems that can be applied at all levels of detail are preferred.

Interdependence and complexity

Many impacts are highly dynamic and dependent on other components of the environment being affected. Because of this interdependence and complexity, impacts may be represented as functions of other variables rather than as simple numbers. These functions may have spatial and temporal factors in addition to relations with other criteria. In addition, the level of detail will affect the number of criteria. For example, an evaluation may include one criterion for human health, or it may involve a highly detailed analysis of different population characteristics and health effects. Some systems are limited, physically or otherwise, in the number of impacts that can be considered (e.g., map overlay systems). Evaluation systems capable of handling high levels of complexity and detail are often required.

Probability and uncertainty

Risk analysis is becoming more common in EAs; risks are often expressed as probabilities. Uncertainty is less well defined, but may also be expressed mathematically. Some evaluation methods are capable of explicitly using this type of information (e.g., some mathematical programming methods), and this capability is becoming increasingly important.

3.2.2 Importance of the decision

The importance of a decision, in terms of its potential effects or the magnitude of the investment, influences the choice of methods. Decisions with significant consequences warrant greater effort and precision. Similarly, straightforward routine decisions are less demanding than complex issues for which comparable precedents and examples are lacking.

Significance of being wrong

Decisions with large "down side" risks tend to require techniques with high precision and sensitivity. This criterion is not so much applied against the evaluation methods themselves; rather, users should approach the analysis of such projects cautiously, to ensure that the method(s) chosen assign appropriate significance to the criteria that follow.

Difficulty in choosing

The use of a highly elaborate evaluation system is unnecessary if the "best" alternative is immediately apparent to all involved. As the preferred option becomes more obscure, more refined and complex systems will be demanded. Note that the definition of alternatives can make the selection of options artificially and unrealistically simple, by ignoring possible intermediates or through restricted definitions of purpose; this should not be confused with those cases where a superior alternative can be immediately recognised.

Decision schedule

To delay making a decision is a decision in itself. Common reasons for delay include inadequate data, analysis or understanding. Like the preceding criterion, this criterion is not designed to be applied directly against the evaluation methods. Users must decide on the time available to arrive at a decision. Tighter schedules dictate less detailed and comprehensive evaluation methods, and this trade-off should be made consciously, with full recognition of the consequences.
Implementation time

Several factors affect implementation time. Generally, simple approaches are more quickly implemented than elaborate, complex systems. However, this has been significantly affected by microcomputers and their accessibility to analysts. A major factor now affecting implementation time is the availability of software versions of methods. Commercial microcomputer versions of many popular evaluation methods are becoming common; in effect, some implementation time is traded for the cost of the software. Easily implemented methods, which can be used repeatedly and cheaply to test alternatives and perform sensitivity analyses, are preferred.

Cost

Cost is an amalgam of many of the other factors (e.g., professional time, hardware and software usage, data requirements). Strictly speaking, its inclusion in the analysis therefore constitutes double counting, but for interpretive purposes it provides a useful integration of the various factors.

Level of technical sophistication

Some evaluation methods can be performed manually, whereas others demand the use of computer systems. Simple techniques can be applied by non-specialists, while complex methods may require greater technical sophistication. Selection of appropriate evaluation methods should consider the available human and technological resources.

Data requirements

Ideally, the evaluation method should be selected before data compilation is initiated, because the type and form of the data required vary among evaluation methods. For example, some are based on nominal and ordinal data, whereas others use arithmetical operations that demand interval or ratio data. Many evaluation methods are designed to analyze distinct alternatives, but others, which include optimization routines, may require continuous relationships expressed as mathematical equations. Evaluation methods capable of dealing effectively with a variety of data types are preferred to those with less flexibility.

Information availability

Some types and forms of data are more easily obtained than others. For example, economic data on expenditures and revenues are often more readily available than measures of consumer surplus and willingness to pay. In addition, some types of data are easier to collect than others: simple preferences between two alternatives are relatively straightforward to obtain, whereas mapping individuals' or groups' preference functions over a large number of possibilities is more onerous. Simplicity is often traded off against accuracy but, all other things being equal, evaluation methods based on readily available or easily obtained information are preferred.

3.2.3 Acceptability by decision-makers, public and professionals

The following eight criteria describe characteristics of evaluation methods that affect their acceptance by the various "actors" in the EA planning process. For an evaluation method to be effective, it is important that it be accepted by the principal actors.

Consistency

During public consultation, the preferences of different groups must be considered, and multiple value judgments are typically required to reach a decision. An evaluation method should explicitly incorporate all facets of the decision process into a consistent and systematic framework.

Reliability

Evaluation methods successfully and repeatedly applied to EA-type problems are preferred. Untried methods must be used more cautiously during initial applications.
Furthermore, repeated analysis of the same data must result in the same conclusion, whoever undertakes the mechanics of the analysis.

Simplicity

Simple methods are usually easier to comprehend than complex methods, and the more generally understandable a system is, the more likely it is to be accepted. However, accuracy is often sacrificed for simplicity, and reaching an acceptable balance can be difficult. One strategy is to use a complex and sophisticated system to test the accuracy of a simple system, and to base the rationale of the decision upon the latter.

Graphic ease

Complex equations and many tables of numbers are daunting to many decision-makers and to the public. Evaluations that lend themselves to pictorial or graphical presentation are more likely to be accepted.

Key decision points and criteria

A primary objective of evaluation methods is to facilitate informed discussion of the key issues. Evaluation methods that allow discussions to focus on key decision points and criteria will be more effective in advancing the decision process. Public consultation is often considered a sham when individuals cannot see how their opinions have been incorporated in the analysis to arrive at a decision; effective evaluation methods can play a central role in consensus building.

Sensitivity

Evaluation methods capable of analyzing subtle variations and combinations are preferred to those capable only of dealing with large-scale perturbations.

Optimization

Often much of the public debate over a project subject to EA turns on whether there is a better alternative. This leads directly to the alternative formulation process and impact mitigation.[12] Inherent in some evaluation methods is an optimization function: the "optimal" alternative is found from a continuous range of alternatives. For example, optimization methods might be used to find the optimum mix of various waste management options; rather than specifying a relatively small number of combinations of components of different sizes, the "optimal" combination would be calculated mathematically. Optimization techniques do not replace the need for creative thinking, but they may be of valuable assistance in the alternative creation process.

[12] Often a series of alternatives is put forward and then significant impacts are mitigated. This process is somewhat misleading and artificial. The extent of mitigation classically involves trade-off decisions, and these should be incorporated directly in the decision process. If this approach were adopted, much greater use of optimization techniques would be warranted.

Theoretical defensibility

Evaluation methods based on accepted theoretical principles are likely to experience less resistance. With a properly based evaluation method, the preferred alternative will be corroborated by other methods.

3.3 Use of multiple methods

No method is likely to satisfy all the criteria identified above for any particular problem. Usually, different methods will be selected for different sub-components of the problem. In this way, a set of methods can be assembled which best satisfies the identified criteria.

4 Review and analysis of methods

This chapter reviews examples of methods from each of the six categories identified in Chapter 1. For each of the ten methods considered, the major features, advantages and disadvantages are discussed. Appendix C reviews these and eleven other methods in terms of the criteria developed in Chapter 3.
4.1 The identification of methods

This chapter reviews methods which have been used to assist in EAs prepared under Ontario's Environmental Assessment Act. Consideration is also given to other evaluation methods which have been used to assist with practical planning problems, but not for projects submitted under the EA Act. The methods discussed were drawn from a selection of EAs, provided by MOE, which staff in the EA Branch considered to represent a reasonable cross-section of the techniques in use in Ontario. This list was supplemented with other methods which have been described in the professional planning literature and which may be applicable to the types of undertakings for which approval is sought under the EA Act.

4.2 Classification of methods

Several systems for classifying evaluation methods have been developed. Most categorize according to the type of analysis used, though some categorize according to the context in which the methods were developed (Nichols, 1982). This report uses the typology described in Chapter 1.

4.3 Sequencing of methods (phasing)

In undertaking an EA, it would be most unusual to use one method exclusively. Rather, a combination of methods is normally used for different sub-components of the problem. For example, cartographic methods may be used to screen from large areas down to a limited number of sites. These sites may then be compared using a different technique, such as simple additive weighting. For the same project, linear programming (an optimization technique) may be used to determine the appropriate size or scale of the undertaking and related impact mitigation measures.

4.4 Evaluation methods in environmental assessments

4.4.1 Ad hoc

The ad hoc method is not really parallel to the other methods discussed here, since to some extent it represents the avoidance of evaluation: an ad hoc "evaluation" amounts to writing the conclusions without doing the analysis. Nevertheless, the ad hoc method is widely used in EAs, particularly to assess "alternatives to" the undertaking, and is therefore discussed.

The ad hoc method involves describing impacts in narrative terms, without the explicit specification of criteria, ratings or weights. It typically involves little or no disaggregation of impacts, and is a qualitative assessment. No assurance can be provided that different alternatives are treated in the same way, that the full definition of environment is considered, or that public concerns have been addressed. Often the ad hoc method is not traceable. An example of an ad hoc evaluation, related to purchasing an automobile, is presented in Table 5.

In spite of its limitations, the ad hoc method is widely used in EAs, particularly during the earlier screening stages. For example, an EA involving the selection of a corridor might evaluate alternative routes in considerable detail using a formal method, while the evaluation of "alternatives to" is done using the ad hoc approach: no specific criteria are identified and systematically applied to all alternatives, the evaluation is highly qualitative, and the dismissal of alternatives is done using a narrative approach.[13]

[13] This is unfortunate, in that it is often at the "alternatives to" stage that the potential exists to go in a fundamentally different direction.

Table 5 Using an ad hoc evaluation to decide which car to buy

Ad hoc methods involve describing impacts in sentences, without clearly stated decision criteria, exact values, or preference values. In this facetious example of choosing a new car, an ad hoc evaluation might consist of the following "arguments":

• Hondas are smaller than Rolls Royces, and I heard a friend's friend drowned when his Honda went off the road into a lake.
• Rolls Royces are imported, whereas Cougars are made in Canada, and the available colours of their upholstery are in keeping with other North American cars.
• The Cougar is the best car to buy; they have entertaining commercials on television.

The ad hoc method usually fails to achieve the primary objectives of an evaluation system: traceability and accountability. Unless all but one of the alternatives are clearly unacceptable, the ad hoc method is unlikely to be valid, and it does not compare favourably with even a simple checklist, in which each alternative is systematically considered for each criterion. When the alternatives are not systematically compared in even the simplest way against a specified set of criteria, it is difficult to see how the decision was made and what evidence supports it.

Because it consists of narrative descriptions of the alternatives, the ad hoc method may be thought of as relatively uncomplicated. However, the basic premise of most other methods is that complex problems can be decomposed into more manageable smaller problems; those methods were designed in large part to simplify problem solving while retaining validity. Except for very simple problems, ad hoc methods are likely to be more difficult to apply with any degree of validity than other methods. Ad hoc methods are the only methods which rely exclusively on narrative descriptions to analyze the problem.

4.4.2 Checklists

Unordered lists of criteria

Most methods begin with a checklist of criteria to be considered. Unless these are then ordered, and impacts assessed using some other aggregation process, there is no firm basis for comparing the alternatives; an unordered checklist is usually only slightly superior to ad hoc methods. Nevertheless, even before assessments are aggregated to take preferences into account, unordered lists of criteria, and ratings of the alternatives against each of them, may have an important and useful role to play in eliminating alternatives that will clearly not be final choices.

Usually, the alternatives eliminated at this stage will be what are known as dominated alternatives. Alternative A is said to dominate alternative B if A is better than B in at least one respect and no worse than B in any respect. B is thus a dominated alternative, and can be eliminated from consideration early in the evaluation. Unfortunately, it is rare to have all but one alternative dominated by another alternative; making trade-offs is a major feature of the evaluations required as part of an EA. Although unordered lists of criteria may help reduce the number of alternatives that have to be carried forward to other evaluation methods, they will rarely eliminate the need for those methods. In using an unordered list of criteria in this way, it is necessary to ensure that the list is comprehensive, and that the alternatives discarded do not have offsetting benefits that would emerge only on consideration of a broader list of criteria. A minimal sketch of this kind of dominance screening follows.
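The dominance test described above is mechanical enough to automate. In the following minimal sketch, alternatives carry hypothetical ratings on criteria scored so that higher is better; a real screening would first need defensible ratings for each criterion.

    def dominates(a, b):
        """True if a is at least as good as b on every criterion and
        strictly better on at least one (a and b map criteria to scores,
        higher being better)."""
        return (all(a[c] >= b[c] for c in a)
                and any(a[c] > b[c] for c in a))

    def screen(alternatives):
        """Return only the alternatives not dominated by any other."""
        return [name for name, scores in alternatives.items()
                if not any(dominates(other, scores)
                           for o, other in alternatives.items() if o != name)]

    # Hypothetical ratings (higher = better) for three sites on two criteria
    sites = {
        "Site 1": {"groundwater protection": 3, "haul distance": 2},
        "Site 2": {"groundwater protection": 3, "haul distance": 3},
        "Site 3": {"groundwater protection": 5, "haul distance": 1},
    }
    print(screen(sites))  # Site 2 dominates Site 1 -> ['Site 2', 'Site 3']

As the surrounding text notes, such screening rarely leaves a single alternative; the survivors still require a method that can make trade-offs.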
Satisficing

Satisficing is a process in which an alternative must satisfy certain specified conditions before it can be considered acceptable: minimum acceptable levels on particular criteria must be met. Where satisficing is used, it is normally the first step or phase in a narrowing process. It is used to eliminate "unacceptable" alternatives; those which remain after satisficing are then usually subjected to more detailed scrutiny. The secondary screening stage of the GO-ALRT Whitby to Oshawa Project EA used satisficing in this way.

A pervasive assumption of satisficing is that there are some criteria for which thresholds apply, above (or below) which the alternative is unacceptable. An example of such a criterion might be a requirement or a limit over which the proponent has no control, such as a regulation. Any alternative that cannot meet the regulation can be rejected, regardless of what other positive attributes it may have. However, if there is no clear, externally imposed threshold for a criterion used in satisficing, then it is quite possible that some alternatives will be deleted on the basis of that criterion even though they have offsetting benefits.

Where a legitimate case can be made for giving some criteria the power that they gain in satisficing, it is a fairly simple method for narrowing the number of alternatives available. However, it does not necessarily lead to a single alternative being selected, unless additional criteria are applied until a unique solution is achieved. Identifying the appropriate criteria to use in satisficing is both the most difficult part of the method and the component most likely to lead to misuse. The ease with which the method can be used to reduce the number of alternatives under consideration must be balanced against the ease with which legitimate alternatives can be discarded by applying particular criteria.

Satisficing differs from many of the other methods in that it does not assume that certain environmental attributes or features can be traded off against others: a given alternative is either acceptable or unacceptable on the basis of each criterion.

Constraint mapping

There are several ways of applying satisficing. A common one in EAs is a cartographic approach known as constraint mapping. In this approach, "unacceptable" characteristics for a site or corridor are identified and their geographic locations are mapped. The maps for all unacceptable characteristics are overlaid, and only the "unconstrained" geographic areas that remain are considered for sites or corridors. Obviously, there is no assurance that a unique site or corridor will emerge from the analysis; no sites, or multiple sites or corridors, may remain after all constraints are applied.[14] If no sites remain, the need for the site or the definition of the study area will have to be reconsidered, or the constraints relaxed. If multiple sites remain, it may be necessary to apply additional criteria, or to use the same or another evaluation method on the remaining sites at a finer resolution. A minimal sketch of this kind of threshold screening follows.

[14] If constraints are applied in order from most to least important, stopping at the point where only one site remains, this would be a cartographic implementation of lexicographic ordering, which is discussed in Appendix C.
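As with dominance screening, the logical core of satisficing is easily expressed in code. The sketch below applies hypothetical pass/fail thresholds to candidate sites; constraint mapping is the same test applied cell by cell across a map. The criteria, directions, and threshold values here are invented for illustration.

    # Each constraint names a threshold and the direction an acceptable
    # alternative must satisfy: ("max", 40) means the value must be <= 40.
    constraints = {
        "noise (dB)":           ("max", 40),
        "distance to well (m)": ("min", 500),
    }

    def satisfices(alternative):
        """True only if every threshold is met; no trade-offs are allowed."""
        for criterion, (direction, threshold) in constraints.items():
            value = alternative[criterion]
            if direction == "max" and value > threshold:
                return False
            if direction == "min" and value < threshold:
                return False
        return True

    candidates = {
        "Site 1": {"noise (dB)": 38, "distance to well (m)": 800},
        "Site 2": {"noise (dB)": 45, "distance to well (m)": 900},  # too noisy
        "Site 3": {"noise (dB)": 35, "distance to well (m)": 300},  # too close
    }
    print([name for name, data in candidates.items() if satisfices(data)])
    # -> ['Site 1']

The danger discussed above is visible in the code: an alternative failing one threshold is discarded outright, no matter how strong its other attributes.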
4.4.3 Matrix methods

Matrices have been found to be invaluable for identifying, describing, manipulating, and evaluating multiple criteria for a set of alternative projects among different interest groups. The matrices most commonly used in EA are tables with two different lists set along opposing axes. An interaction between components — for instance, the degree to which a particular criterion is valued by a certain interest group — is scored in the cell common to the two components. All matrix-based evaluation methods utilize some procedure (explicit or not) to reduce the three sets of component interactions (i.e., criteria versus projects, criteria versus groups, and projects versus groups) to one or two.

Additive models

Additive models (often referred to as compensatory or Simple Additive Weighting models) attempt to reduce the assessment and evaluation exercise to one in which each alternative is classified using a single score (index or value) intended to represent the utility or attractiveness of the project. The additive models have both critics and defenders.

The critics have four major concerns. First, the precise way in which values for criteria are combined lacks clear theoretical justification,[15] and it is very difficult to collect data about actual decisions so as to identify particular compensatory or trade-off rules. Second, the SAW method should satisfy two specific conditions: preferential independence and utility independence. The former demands that the trade-off for pairs of criteria be independent of the values for any other criterion; the latter, that the utility associated with a particular criterion not be related to the value of any other criterion. Intuitively these conditions may be quite appealing and seem reasonable; however, they are very rarely verified prior to the application of a particular additive procedure. Third, the emphasis in these models tends to rest on sole consideration of first-order impacts. Fourth, the final scores are typically dimensionless numbers which mean nothing to citizens and others who prefer to have information on each alternative in terms of money, jobs, loss of agricultural land, recreational opportunities, and so on.

[15] If the method lacks theoretical justification, it lacks scientific validity. Therefore, the results of the approach are suspect, and may be difficult to defend to the public and to the Board.

As part of an evaluation process using a SAW method, sensitivity tests are typically included to examine the changes in the final score that result from changes in the values of the individual criteria describing the disaggregated set of impacts of a particular project.

Examination of the literature on evaluation methods does identify supporters of such models. Hwang and Yoon (1981) argue that "the simple additive weighting (SAW) method is probably the best known and very widely used method of M.A.D.M. [Multi-Attribute Decision-Making]... [and that]... theory, simulations, computations, and experience all suggest that the SAW method yields extremely close approximations to very much more complicated non-linear forms, while remaining far easier to use and understand." Others (Rowe and Pierce, 1982; Solomon and Haynes, 1984) have looked systematically at errors which might invalidate the method, and they conclude that "the use of the SAW model is probably justified."

The simplicity of use, and the ease of comprehension, probably explain why additive models are used in most EAs. For example, the Petro-Sun and Highway 416 EAs used forms of the SAW method. A number of variations on the SAW method have been proposed, and some of these are discussed in Appendix C. An elaboration of the basic steps of the SAW method which embraces most recent developments is given in Table 6.
Table 6 Elaboration of the simple additive weighting method

STEP 1 Define the set of plans. Define the set of interest groups. Define the set of criteria.

STEP 2 Obtain impact values for the criteria, for each plan, and convert the raw data into standardized values using functional forms or other normalization procedures. The opinions of the interest groups can be used to obtain the functional forms.

STEP 3 Determine the precise way in which the standardized values are aggregated to give a final utility value for each plan. The opinions of the proponent, interest groups and others are used to obtain the relative weights for the criteria.

STEP 4 Determine the utility value for each plan.

STEP 5 Undertake a series of sensitivity tests to examine the effects on the utility values of altering, in a systematic way:
i) the alternate plans to be considered
ii) the set of criteria to be considered
iii) the accuracy of the raw data
iv) the functional forms for each criterion
v) the weights for the criteria, and the function which is used to accumulate the scores for the criteria into a final score for each plan.

STEP 6 Justify the sensitivity tests in the context of the planning exercise, write reports, and summarize results in terms of the distribution of costs and benefits, presenting the perspectives of each of the interest groups. Avoid using dimensionless numbers. Incorporate the results into the larger planning process and repeat steps if necessary.

It should be remembered that this sequence is cyclical, and while it appears rigid in its formal presentation, practical applications demand that it be rather loosely defined, so that the exercise might, in fact, begin at STEP 2 when a proponent brings forward a specific plan. While all the steps are important, it is STEP 6 which needs the greatest care in order to establish the legitimacy and credibility of the whole exercise; yet it is the step most often omitted or dealt with haphazardly. Without this step the earlier ones tend to operate in a vacuum.

Overlay mapping and geographic information systems (GIS)

Just as satisficing can be implemented cartographically as constraint mapping, SAW can be implemented cartographically as overlay mapping or as part of a geographic information system. In overlay mapping, weights and rates are combined into shaded map overlays, all overlays are superimposed, and the relative suitability of geographic areas for the purpose at hand is indicated by the intensity of shading on the composite map (see McHarg, 1969). By coding the geographic information into a computer, rather than directly onto a set of maps, it is possible to explicitly separate exact values (rates) from preference values (weights), to indicate more subtle variations in suitability or impacts, to incorporate a greater number of variables (i.e., criteria) than is possible if maps are superimposed, and to define precise suitability boundaries using the additive scores. This type of GIS implementation of SAW has been used in several EAs, notably the Ontario Hydro Southwestern Ontario Transmission Study EA. A minimal sketch of the core SAW calculation follows.
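The following minimal sketch shows the arithmetic at the heart of STEPS 2 to 4 of Table 6: standardize raw impact values to a 0-1 scale, weight them, and sum. The criteria, raw values, and weights are hypothetical; a real application would justify the functional forms and weights through consultation, and STEP 5 would repeat the calculation with perturbed inputs.

    # Hypothetical criteria with directions and weights (weights sum to 1)
    criteria = {
        "capital cost ($M)":      {"higher_is_better": False, "weight": 0.4},
        "wetland area lost (ha)": {"higher_is_better": False, "weight": 0.3},
        "jobs created":           {"higher_is_better": True,  "weight": 0.3},
    }
    plans = {
        "Plan A": {"capital cost ($M)": 10, "wetland area lost (ha)": 4,
                   "jobs created": 50},
        "Plan B": {"capital cost ($M)": 14, "wetland area lost (ha)": 1,
                   "jobs created": 80},
    }

    def standardize(criterion, value):
        """Linear 0-1 standardization over the observed range (STEP 2)."""
        values = [p[criterion] for p in plans.values()]
        lo, hi = min(values), max(values)
        score = (value - lo) / (hi - lo) if hi != lo else 1.0
        return score if criteria[criterion]["higher_is_better"] else 1.0 - score

    def utility(plan):
        """Weighted sum of standardized scores (STEPS 3 and 4)."""
        return sum(criteria[c]["weight"] * standardize(c, v)
                   for c, v in plans[plan].items())

    for plan in plans:
        print(plan, round(utility(plan), 2))   # -> Plan A 0.4, Plan B 0.6

Note that the final figures are exactly the kind of dimensionless score that STEP 6 warns against presenting on its own; they should always be traced back to the underlying impacts.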
4.4.4 Economic approaches

Economic methods may also be used to evaluate alternatives. Among the methods available are cost-benefit analysis[16] (CBA), cost-effectiveness analysis (CEA), cost-minimization analysis (CMA), and the planning balance sheet (PBS) method. Economic methods of evaluation attempt to represent all aspects of a project in commensurate monetary values. A reference year of evaluation is specified, and all values are expressed in dollars of that year through the application of discount rates. The first three methods are only slight variants of the basic approach of CBA; PBS, while taking a new approach, still utilizes many of the strengths of CBA as an analytical tool. Consequently, this section focuses on cost-benefit analysis; the other methods are discussed in Appendix C.

[16] Cost-benefit analysis is sometimes referred to as benefit-cost analysis.

Cost-benefit analysis was developed in the 1950s and 1960s largely as a tool to assist in the evaluation of public sector projects. The 1970s saw an increased interest in the field as new tools of analysis were applied to CBA to deal in a more sophisticated manner with issues of non-traded goods, social discount rates, and distributional effects. Cost-benefit analysis consists of five steps, as indicated in Table 7.

Table 7 The steps in cost-benefit analysis

1. Identification and full description of the alternative projects to be analyzed.
2. Identification and precise definition of project impacts on all affected groups and individuals in all time periods.
3. Evaluation of all impacts in monetary values for each of the time periods in which they occur.
4. Conversion of costs and benefits to a monetary value in terms of a specific reference year by application of a discount rate. For instance, if the discount rate is 10%, a benefit valued at $100 one year after the reference year will only be valued at $90.91 in the reference year. The equation for present valuation is

   PV = Σ_t X_t / (1 + r)^t

   where X_t is the net benefit in year t and r is the rate of discount.
5. Comparison of alternatives in terms of Net Present Value (NPV) (present-valued benefits minus present-valued costs) or Internal Rate of Return (IRR), and ranking of the most preferred alternatives.

Impacts are evaluated in monetary terms based on the perspective of the individuals affected by each project. Costs and benefits are expressed as the individuals':

Willingness to Pay (WTP) for the benefits: the maximum amount individuals affected by the project would be willing to pay for project benefits.

Willingness to Accept (WTA) the costs incurred: the minimum amount individuals would be willing to accept as compensation for the costs of the project, or the value of the good in its most valuable alternative use.

Market prices, if they exist, are used as the reference point for estimating WTP and WTA. Consumers' surplus[17] should be estimated for benefits and costs as well. While market prices, in the absence of major market failures, are considered to provide an unbiased indication of individuals' revealed preference for goods, reliance on market values can be fraught with difficulties, especially in the analysis of environmental projects. When markets for certain goods (such as pollution) do not exist or are incomplete, market prices will not exist or will be inappropriate measures of the social value of the good. Projects of environmental significance typically involve considerable "externalities" (i.e., effects on third parties which are not accounted for in the market place). Because of the importance of these factors, much of the work in applying CBA to environmental problems has been concerned with attempts to measure them.

[17] The principle of consumer surplus and its estimation is covered in most introductory microeconomics textbooks (e.g., Mishan, 1972). Consumer surplus can be estimated by subtracting the amount actually paid for each marginal unit from the maximum amount an individual would be willing to pay.
Methods used to determine WTP values in the absence of market prices typically include: direct estimation of consumers' preferences using polls, surveys, bidding games, questionnaires, and voting; indirect estimation based on observed behaviour patterns; and estimation of values or "shadow prices" by experts using Delphi or other methods.

Because costs and benefits must be calculated for each year (or other time period) in which they occur,[18] CBA requires an explicit expression of the costs and benefits of the project over time. After removing the effects of inflation through the use of constant dollars, costs and benefits are converted to present values by applying a discount rate. The discount rate chosen may significantly affect the outcome of the analysis and must therefore be chosen carefully. The federal Treasury Board recommends using a discount rate of ten percent, with sensitivity analyses at five and fifteen percent (Treasury Board, 1976). The sketch below illustrates how the choice of rate can change the attractiveness of an alternative.

[18] The time horizon of the study is determined in part by the discount rate. With a positive discount rate, impacts become negligible after a number of years and so are ignored. In general, the higher the discount rate, the shorter the time horizon becomes.
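The present-value arithmetic of Table 7, and its sensitivity to the discount rate, can be illustrated with a short sketch. The cash flows are hypothetical: a project with a large up-front cost and long-lived benefits looks attractive at low discount rates but not at high ones.

    def npv(net_benefits, rate):
        """Net present value of a stream of net benefits, where
        net_benefits[t] is benefits minus costs in year t (year 0 is the
        reference year) and each year is discounted by (1 + rate) ** t."""
        return sum(x / (1.0 + rate) ** t for t, x in enumerate(net_benefits))

    # Hypothetical project: $1 000 000 cost now, $150 000/year for 15 years
    flows = [-1_000_000] + [150_000] * 15

    for rate in (0.05, 0.10, 0.15):   # recommended rate plus sensitivity cases
        print(f"NPV at {rate:.0%}: ${npv(flows, rate):,.0f}")
    # The NPV is positive at 5% and 10% but negative at 15%, so the ranking
    # of this project against its alternatives can turn on the rate chosen.

This is why, as noted above, the discount rate must be chosen carefully and reported with sensitivity analyses.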
Comparison of alternatives

The decision-making criteria in CBA are straightforward: the alternative (including the do-nothing alternative) which yields the highest net present value (NPV) or internal rate of return (IRR) is preferred, other factors being equal. The project yielding the highest NPV will not necessarily be the alternative yielding the highest IRR: NPV depends on the size of the project, whereas IRR measures the rate of return irrespective of size. For some projects the decision-makers will choose to use NPV as the criterion; for others, IRR.

Cost-benefit analysis and related economic methods are not generally used as evaluation methods in EAs in Ontario. However, CBA is required by regulation for federal projects in the US. Canadian federal government policy requires that major new regulations, or amendments to existing regulations, relating to health, safety or fairness be subjected to a socio-economic impact analysis, and benefit-cost analysis and cost-effectiveness analysis are identified as primary evaluation techniques (Treasury Board, 1982). Treasury Board has published a guide to conducting benefit-cost analyses (Treasury Board, 1976). While CBA has not been used in Ontario for EAs, it has been used for other environmental decisions; one such application was the MOE study of the clean-up of the Wabigoon River system (MOE, 1986).

Although CBA and related methods deal explicitly and well with effects that occur over time, spatial relationships are not well represented in CBA or associated economic methods. Data availability may be a significant limit on the scale of impacts to which CBA and related techniques can be applied. Projects which are large enough to have far-reaching effects throughout the economy, so that many prices change, are particularly difficult to evaluate with CBA.

Although not designed specifically for the EA process, CBA can easily be adapted to it by estimating values for the different criteria. One of the major attributes of the method is the attempt to avoid dimensionless value judgments and instead to rely on empirical inputs.[19] The method is clear and traceable, with an internally consistent logic applicable to the EA process. Although not used as a primary tool of analysis in EAs, CBA has been used extensively and successfully in the evaluation of major public projects with significant environmental consequences.

[19] In the case of non-marketed goods, such empirical inputs are unavailable, and it is through the estimation of willingness-to-pay (WTP) and willingness-to-accept (WTA) that environmental "value" judgements are incorporated.

Except for PBS, economic approaches do not involve the public in ranking alternatives. Public preferences are estimated from observed market transactions or from user surveys and behaviour; individual values are aggregated into social choices through observed behaviour. The evaluation output is generally aggregated and does not facilitate the identification of key decision points in the analysis, although this is not an inherent feature of the method. The use of monetary units provides an easily comprehended context for the general public. In addition, sensitivity analyses of particular factors and costs can easily be demonstrated to public audiences. A key variable is the social discount rate, and CBAs normally test the effects of various discount rates on the ordering of alternatives.

4.4.5 Pair-wise comparisons

Several methods have been developed that involve the use of pair-wise comparisons of alternatives. Advocates of these methods argue that people are better able to consider two things at a time than many things at once. This section considers a method that has been used in Ontario Hydro's Southwestern Ontario Transmission Study EA: the Analytical Hierarchy Procedure. Two other pair-wise comparison methods, ELECTRE and TOPSIS, are discussed in Appendix C.

Saaty's analytical hierarchy procedure

A fairly popular pair-wise comparison method is Saaty's analytical hierarchy procedure, or AHP. AHP has been used by Ontario Hydro in its Southwestern Ontario transmission line study (Hoglund, 1987). The method is normally used to estimate both preference values and exact values, by comparing criteria and alternatives, respectively, on a pair-wise basis. For example, a social impact criterion would be compared against a natural environment criterion, and their relative weights, or preference values, identified. Exact values, or rates, are estimated by comparing each alternative against each other alternative for each criterion, and providing a measure of how different that alternative is on that criterion. Once this is done for all criteria, the alternatives can be ordered by summing the product of the exact and preference values for each criterion.[20]

[20] If desired, these comparisons can be done independently by each participant in the process, and the same basic approach used to assess the significance attached to each participant's views.

The method has both advantages and disadvantages relative to other methods. Among the advantages is the ability to combine complex arrays of data and judgements into a single numeric ratio. In addition, normal implementation of the method involves the calculation of an eigenvalue which provides a measure of the consistency of the ratings; where consistency is identified as low, it will be desirable to re-examine the pair-wise comparisons. A relatively inexpensive computerized implementation of AHP is available, and this program provides a comprehensive sensitivity analysis component. A minimal sketch of the underlying calculation follows.
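The core calculation of AHP can be sketched briefly. From a reciprocal pair-wise comparison matrix, the priority (weight) vector is approximated here by normalizing columns and averaging rows; Saaty's own procedure uses the principal eigenvector, of which this is a common hand approximation. The three criteria and the judgement values are hypothetical.

    # Pair-wise judgements on Saaty's 1-9 scale: entry A[i][j] records how
    # strongly criterion i is preferred to criterion j; A[j][i] = 1/A[i][j].
    names = ["natural environment", "social impact", "cost"]
    A = [
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 3.0],
        [1/5, 1/3, 1.0],
    ]

    n = len(A)
    # Normalize each column to sum to 1, then average across each row.
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    weights = [sum(A[i][j] / col_sums[j] for j in range(n)) / n
               for i in range(n)]

    for name, w in zip(names, weights):
        print(f"{name}: {w:.2f}")   # roughly 0.63, 0.26, 0.11

    # A full implementation would also compute the principal eigenvalue to
    # derive Saaty's consistency ratio and flag inconsistent judgements.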
Three main disadvantages are associated with the method:

• a set of default values which may be questionable in some circumstances
• the potential for confusion of preference and exact values, and
• the problems associated with all methods requiring relatively complex mathematical analysis, usually with computerized implementation.

In Saaty's writings about AHP, in the computerized version, and in most other implementations of the method, a set of default assumptions is adopted (see Saaty, 1987). For example, comparisons are done using ratings of Equal, Moderate, Strong, Very Strong, and Extreme, which are assigned scores of 1, 3, 5, 7 and 9 respectively.[21] Although Saaty asserts that these are absolute values (not ordinals), it is unlikely that many users of the system will understand this detail and use it in that way. Consequently, users of the method may treat ordinal data as if they were ratio data. Often in environmental problems one might expect the differences in impacts to be more than one order of magnitude, a difference which is not captured if Equal is 1 and Extreme is 9.

[21] It should be noted that the assignment of these scores is not an inherent part of the method, and the verbal descriptions are presented as suggestions to simplify the assigning of scores. Users will naturally be inclined to adopt this simplification, which may decrease the validity of the evaluation.

In applying AHP, criteria are typically compared against criteria, and alternatives are compared against other alternatives for particular criteria. Comparing criteria against criteria is a clear means of identifying preference values, and, properly done, AHP can be of considerable assistance because of its pair-wise nature and the consistency value that is calculated. However, in comparing alternatives, many of the impacts will already be measured on a ratio scale, and there is little to gain from comparing those measures pair-wise. A temptation that may be difficult to resist, particularly given the default assumptions, is to use AHP to derive ratio measures of impacts from ordinal ones, i.e., to convert impacts measured in terms of High, Medium, and Low into ratio data. This is not defensible.

Finally, AHP suffers from the problems associated with all methods which require relatively complex mathematical analysis, the use of computers, or both. Many people will be sceptical about the results of the analysis. There are several possible reasons for this scepticism. Many people will be unable to replicate the analysis because they do not have access to the tools required to do so. This means that acceptance of the analysis requires trust that the methodology is sound and that it has been properly applied. Unfortunately, those undertaking the analysis will often be seen as adversaries in whom trust is not justified. Given the kinds of misuse and abuse that are possible with any method, critics ought to be sceptical:

To the extent that you can use DSS [Decision Support Software] to make sure that you've covered all reasonable alternatives (and to organize your decision so that others can understand it more easily), it's helpful. To the extent that someone may use DSS as a computerized cloak of legitimacy for a decision he's trying to slide past you, it's not.
When someone brings you a snazzy computer printout and claims that the "obvious" thing to do is make the choice the computer recommends, beware of the result. When it comes to computer models, an ounce of scepticism is worth a pound of regret. Even with the best of intentions there is a potentially fatal fascination in the precise probabilities and ratings that DSS grinds out at the touch of a function key. Computer fans often forget that the numbers are a model of the real world, and if the model disagrees with the real world, the real world wins.
- Taylor, 1988

4.4.6 Mathematical programming methods

Mathematical programming methods are designed to deal primarily with complex decisions for which an optimum solution is being sought. The "optimum" solution is the one which best meets predefined goals or objectives while staying within specified constraints. Mathematical programming methods are tools that permit the analyst to cope with the large number of potential combinations and to focus on the ones most likely to satisfy the requirements of the decision-makers.

Mathematical programming methods have been designed to evaluate a great number of alternatives efficiently and automatically; it is the number of choices available that has led to the development of these techniques. In most EAs the range of apparent alternatives is relatively small, and simple comparative methods of the sort described in the preceding sections of this chapter may be adequate. However, it may be that the methods used have themselves required that the number of alternatives considered be low, and hence the tendency to reduce the set of alternatives to a small number.

Mathematical programming techniques have been used extensively by engineers, economists and management consultants for a variety of applications in Canada and elsewhere, but generally these applications have not involved a public planning process and hearings, with the pressure of open confrontation common in EAs. More extensive use of these techniques is constrained by several factors:

• Inability or unwillingness to quantify — resistance to explicitly quantifying the relative values of "non-commensurable" or "intangible" impacts prevents the application of mathematical techniques, which demand explicit quantification of preference values.

• Approach to impact mitigation — mitigation is common practice in EAs in Ontario, but the process is quite ad hoc and lacks adequate definition in terms of end points and constraints. Engineers often say that given enough money they can build virtually anything or, conversely, mitigate any impact. In some cases this is true, although the act of mitigation can have impacts of its own. The point is that mitigation is simply another level or phase of decisions relating to a project, and these decisions encompass many of the same issues faced in choosing among alternatives (e.g., trade-offs between non-commensurable values). The concept of separating mitigation from the alternative selection process is flawed: the design of mitigation measures is as imbued with values and preferences as the choice of alternatives, and is not the exclusive domain of technical experts. If mitigation is incorporated as a simultaneous part of the alternative selection process, the number and complexity of alternatives will increase substantially. These are the types of problems that mathematical programming methods are designed to solve.
Within the realm of mathematical programming methods, there is a considerable number of approaches designed to deal with a broad range of problems with varying characteristics. Given that mathematical programming methods have rarely been used in EAs in Ontario, the following discussion is not exhaustive; rather, it illustrates the general types of methods available and how they might be used in the EA process. Specifically, three methods are described:

• linear programming
• dynamic programming
• goal programming.

The first two are typically used for single-objective problems; that is, all concerns and impacts are expressed in the same units, and the optimum is determined on the basis of a single unit of measure. The third technique is designed for multi-objective problems. More exhaustive discussions of these techniques are available in standard operations research texts (e.g., Haith, 1982; Goodman, 1984; Jewell, 1986).

Linear programming

Linear programming (LP) is an iterative solution process that requires computer assistance. Without delving into the underlying mathematics of the method, some features of the technique are worthy of note. To solve an optimum choice problem using LP, four conditions must be met:

i) proportionality
ii) non-negativity
iii) additivity
iv) a linear objective function

The first condition demands that the inputs and outputs of an activity be strictly proportional. For example, if incinerating one tonne of waste results in 50 mg of lead emissions, then incinerating two tonnes results in 100 mg of lead emissions.

Non-negativity of decision variables is a benefit of LP for many physical problems, since negative physical quantities (e.g., negative 5 tonnes of pollutant) are meaningless. Non-negative values can be specified for all variables in the constraints.

Additivity is another strength of LP for physical problems. This assumption is based on a "mass balance" concept: the total of the inputs or outputs is equal to the sum of the individual components. For example, if 3 tonnes and 4 tonnes of pollutant were removed by different components of a treatment system, the total removed would be 7 tonnes. This constraint appears trivial for physical inputs, but it extends to all decision variables in the objective function, for which the relationship may not be as clear.

The last requirement, that the objective function be linear, is the most constraining for EA applications. Some problems do not lend themselves to linear solutions. For example, there is not a linear relationship between the size of a site (in ha) and the length of fence required to enclose it.
First, LP is used on a routine basis extensively by many sectors of society to solve complex problems. Because of its extensive and extended use, the availability of standard LP software and the broad technical literature discussing the approach, there is strong reason for laypersons to "trust" the approach per se. Misuse and abuse is more likely to arise from misspecification of the inputs or misinterpretation of the output. While the mechanics of LP are complex, the conceptual basis for the procedure is relatively simple and it can be simply illustrated graphically for two variable problems. A second consideration is the logical and simple structure of the input for an LP. Information is provided in terms of an objective function which includes all of die decision variables (i.e., evaluation criteria) that must be considered and constraint equations which indicate the relationships among the decision variables. In the objective function, a coefficient is specified which indicates the importance of that variable (i.e., its preference value). The constraint equations include the magnitude of the impacts (i.e., exact values). Because of the parallel structure of the LP to conventional EA evaluation problems, tiie inputs are readily understandable to non- technical persons. A major strength of LP beyond its ability to cope with a great number of decision variables and its analytical efficiency, is the nature of the output The primary output from an LP consists of a quantitative description of the optimum alternative and the associated environmental "costs" (i.e., an estimate of the impacts related to the optimum alternative). In addition, LP provides another extremely useful output for decision-makers, tiiat is what are called "shadow prices". The term "price" is perhaps misleading seeing as LPs do not require monetary units to be used. A shadow price is provided for each binding constraint^ and it represents the cost (according to die 22 Binding constraints are those thai functionally hmit the feasible range of alternatives. Some constraints in an LP may not actually limit the result and they will have a zero shadow price. This information in itself is important in that the LP provides a means to eliminate insignificant 49 units of the objective function) of the last (marginal) increment of each constraint. Conversely, if a given constraint were relaxed by one increment, it represents the change in the objective function that would be realized. This result is often quite valuable for helping decision-makers and stake-holders move toward a consensus position. For example, if one constraint was the maximum capacity of pollution treatment plant, the shadow price for capacity might be say $100 000 per one per cent increase in the capacity constraint. One would be able to compare this against the environmental benefit that might be realized by a marginal increase in capacity. This output allows decision makers to quickly appreciate the imponance of each decision variable and the sensitivity of the outcome to each. The use of LP for environmental decision making is gaining popularity in the U.S. and is being used increasingly by the U.S. Forest Service (e.g., Hoekstra et al, 1987). Given the power of this analytical tool for decision making and its proven practical value, LP can be expected to be used more extensively in Ontario as the requirements of the EA Act are more stringently applied. Dynamic programming Dynamic programming (DP) is designed to deal with staged or phased decisions (i.e. 
Dynamic programming

Dynamic programming (DP) is designed to deal with staged or phased decisions (i.e., a sequence of decisions). The phasing concept is often thought of chronologically, but it applies equally well to decisions regarding the development of alternative systems comprising individual components, such as a sewage conveyance and treatment system, a municipal waste management system, or a route alignment. DP has the capability of incorporating quite complicated relationships, including stochastic and probabilistic information, in its solutions.

If one were to solve for the optimum system configuration through an exhaustive iterative search, the introduction of each new possible choice would increase the number of potential solutions exponentially: a system with 6 components and 3 possible levels of each entails 729 possible solutions. DP reduces the number of potential optimal solutions to be analyzed to 54. This characteristic (additional choices increase the number of solutions to be examined linearly rather than geometrically) permits DP to handle quite complex problems efficiently.

The principle underlying DP is that, at each decision point (node), the best means of reaching the final desired end point can be determined regardless of the decisions that preceded it. As a result, DP often works by starting at the "end" of the decision sequence and working back to the "beginning". In effect, DP is simply an efficient iterative technique rather than a mathematical solution routine per se. DP is applicable only to problems that can be segmented into a sequence of choices; however, in many cases EA-type decision problems can be structured in a form amenable to DP. A minimal sketch of the backward recursion follows.
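The backward recursion that underlies DP can be shown in a few lines. In this hypothetical staged problem, each of three system components is built at one of three levels, the cost at each stage depends on the level chosen at the previous stage, and the recursion finds the cheapest configuration without enumerating every combination.

    LEVELS = (0, 1, 2)   # hypothetical sizes for each system component

    def stage_cost(stage, prev_level, level):
        """Invented cost of building `stage` at `level` given the previous
        stage's level; larger jumps between stages cost more."""
        return 10 * (level + 1) + 5 * abs(level - prev_level)

    def best_cost(n_stages):
        """Work backward from the last stage, keeping, for each possible
        preceding level, the cheapest way to complete the remaining stages."""
        best = {l: 0.0 for l in LEVELS}   # nothing left to build after the end
        for stage in reversed(range(1, n_stages)):
            best = {prev: min(stage_cost(stage, prev, l) + best[l]
                              for l in LEVELS)
                    for prev in LEVELS}
        # The first stage has no predecessor; anchor it at level 0.
        return min(stage_cost(0, 0, l) + best[l] for l in LEVELS)

    print(best_cost(3))   # -> 30.0
    # Exhaustive search over 3 stages examines 3**3 = 27 combinations; the
    # recursion examines only 3 * 3 transitions per stage, which is the
    # economy noted in the text.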
Goal programming

Goal programming (GP) is a mathematical formulation designed to identify alternatives that satisfy, to the greatest extent possible, multiple competing objectives. Satisfaction is measured by the difference between the ideal level of an objective (its target) and the value for a given alternative. GP demands that targets be set for all objectives. For example, a target for noise might be 40 dB, while for road construction costs a target of $10M may be appropriate. GP is used when the objectives cannot all be satisfied simultaneously. GP would identify the possible alternative measures that could be employed to achieve each of these objectives to the greatest extent possible. A unique solution might be found if priorities among the objectives were given (e.g., noise is 2.8 times more important than cost).

The major drawbacks to GP for EA applications are as follows:

Absolute values of targets - The assignment of targets in absolute (i.e., ratio) units for some environmental objectives is quite difficult. In most cases the relative, and not the absolute, values are available. The target value does affect the solution, in that GP attempts to minimize the deviation from all targets. If a target is set unreasonably high, this will tend to skew the solution toward that objective. The priorities of the objectives also affect this tendency.

Preferences with knowledge of the alternatives - GP requires preferences (i.e., priorities) as an input, but these must be given somewhat abstractly, since the feasible (i.e., non-inferior) alternatives become apparent only after the analysis. This problem is not unique to GP, and it is a strong argument for using this, and most, techniques iteratively. In other words, initial targets and preferences need to be established, a set of feasible alternatives identified, the alternatives considered, new targets and preferences established, and the process repeated. By proceeding through a series of these iterations, convergence on an acceptable set of targets and preferences is possible.

A wide variety of other multi-objective mathematical programming methods are available and have been applied to a wide range of EA-type decisions (e.g., water resources projects, site location). These methods generally allow decision-makers to explore quite complicated, and numerically large, sets of alternatives practically and systematically.

4.5 Why are so few methods used in Ontario?

The preceding discussion indicates that the range of methods used in Ontario EAs is fairly limited. Although some proponents (notably Ontario Hydro) have used a variety of methods, including some of the more sophisticated ones, most proponents use only variants of the following methods:

• ad hoc
• satisficing (including constraint mapping)
• variations of simple additive models

As discussed earlier in this chapter, these tend to be the simpler methods, and they tend to be less formal than some of the other methods. With satisficing, emphasis is usually on the existence of environmental features, not impact prediction, and the development of preferences may not be readily apparent. The additive models, as used in most of the EAs reviewed, impose few restrictions on the determination of preferences, and in general do not explicitly incorporate views obtained from the public in the evaluation.

There are at least three reasons why so few methods are in common use:

• formal methods can expose vulnerabilities
• a wide gulf exists between theory and practice because of a lack of communication
• experts in disciplines required by EA are not necessarily experts in the use of evaluation methods

4.5.1 Vulnerabilities exposed by formal methods

Formal methods can cause a great deal of distress to the public, to proponents, and to politicians because of their potentially greater levels of traceability and accountability. Properly executed, a formal evaluation method exposes the values of the proponent to attack by those who do not share the same values. Even where these values may not differ, a major dilemma that proponents face in trying to use formal methods is the contradictory demands of public opinion, which on the one hand advocates honesty and openness, but on the other feels that intangibles cannot be quantified. By making explicit the methods, exact values, and preference values, formal methods also make the weaknesses in the exact values and in their use easier to identify.

Use of formal methods should help to ensure that the evaluation process is technically competent, and should help to focus the discussion on differences between the parties, whether these are differences over the appropriate interpretation of exact values or over preferences. In practice, public policy-makers use scientific information (and other information related to EAs) within a political context:

"Policy makers have high career stakes in the success or failure of policies for which they are responsible. Agencies and institutions likewise tend to have a vested interest in existing policies. Admitting failure, acknowledging that past policies have been ineffective, and changing those policies usually involve significant costs to the organization and to those within them identified with such policies.
Elected leaders in particular are under strong pressure to maintain an aura of consistency, infallibility, and success, but there exist strong pressures on every policy maker, from local alderman to university president, to maintain an image of constant purpose.

"Scientifically based assessments that are critical of proposed programs or of the assumptions on which current policies are based present, therefore, unwelcome information..."

- Hammond et al, 1983

Unfortunately, evaluation methods are often unfairly blamed for problems which develop as a result of exposing one's interpretation of the data, the values indicated by what the community is saying, and one's own values. The problems that develop are blamed not on these interpretations themselves, nor on the underlying difficulties in providing a firm foundation for planning (because of uncertainty, different values in society, et cetera), but on the process of bringing these issues out into the open, and in particular on the evaluation method. It is always possible to find some flaw in any method, no matter how complex (in fact, complexity, and the inability of all persons to understand a method, is in and of itself a flaw). But the alternatives to formal methods are not attractive:

• formal methods performed behind closed doors, and not presented for public scrutiny, with the trust that this necessarily involves: trust that the methods have been used, and used properly
• informal methods which are not traceable or accountable

Neither of these options is consistent with the intents or requirements of the EA Act, nor with the precepts of good planning.23

23 These statements apply equally to other participants in the EA process whose interests are project-, not process-, specific.

4.5.2 Theoreticians and practitioners are not communicating

The second reason for the relatively limited number of methods in common use is a lack of communication between theoreticians and practitioners. Two aspects of this problem are noteworthy:

• the inability of theoreticians to provide evidence of other methods producing better results
• the confusion of the benefits of the process and the result

Although some of the more advanced methods were developed to be more theoretically valid and explicit (for example, in the way they deal with uncertainty and with the views of different publics), their advocates have been unable to provide evidence that these methods, which tend to be more complex, lead to the selection of different and demonstrably superior alternatives. In the absence of such evidence, it is not surprising that simpler methods are used. As emphasized throughout this report, evaluation methods are chosen not just to ensure that better alternatives are selected, but also because of their benefits in helping to identify what information to collect and in organizing the discussion. In many cases theoreticians have failed to persuade practitioners not only of the ability of alternative evaluation methods to lead to better results, but also of their ability to assist with process-related issues.

4.5.3 EA practitioners may not be experts in evaluation systems

The third reason for the limited number of techniques in common use is that practitioners of EA in Ontario may be unfamiliar with all of the available techniques. To the extent that this is a problem, the Ministry may be able to play a technology transfer role when it is approached during pre-submission consultation.
Ironically, with the proliferation of micro-computers and the increasing availability of software packages for evaluating alternatives, it is quite possible that there will be a shift towards the mathematically more complex methods, because they will become the simpler methods to use. The development of such software packages is likely to spur the presentation of arguments in support of the methods they use.

5 Things to look for in an evaluation

We turn now from choosing a method to applying one and explaining the decisions in an Environmental Assessment. Our review of evaluations contained in EAs in Ontario and elsewhere has revealed some crucial areas of concern. Deficiencies occur frequently and can degrade the effectiveness, or even destroy the conclusions, of an evaluation. When this happens, the immediate result is a reduction in the overall quality of the planning process and the EA. But the broader consequences include damage to the reputation of the EA process and concept generally. This ultimately leads to further reluctance to use formal evaluation methods and to make the decision-making process traceable and accountable. These problems are worth avoiding. In this chapter we describe some deficiencies and mistakes to help proponents, planning participants, and reviewers become more alert to them.

5.1 Clarity of expression

An EA is basically an explanation, and clarity is obviously a merit in any explanation. Few proponents will fail to accept the need for clarity and to assert it as an objective in their submissions. But in EA there are threats to clarity that can undermine the good intentions of planners. There are two primary sources of obscurity in EA documents:

• partisan bias, causing a loss of objectivity and candour; and
• inadequacies of presentation, causing a loss of accessibility and intelligibility.

Correcting these tendencies, which to varying degrees afflict most EAs, requires a substantial effort by proponents. Some suggestions follow that may lead to improvements. We look first at the sources of the problems and then at ways to correct them.

5.1.1 Objectivity versus partisanship

When interested and affected parties do not support the proposed undertaking, the EA will encounter opposition, in public consultation and in the later hearings. If the proponent nevertheless goes ahead, the EA documents will be subjected to intense criticism and attack. Few proponents determined to proceed against this opposition will be inclined to give hostile critics "free ammunition".25 Under these pressures, the descriptions and explanations in the EA documents will tend to slip away from objectivity and candour towards partisanship and argument, in anticipation of disputes to come.

This will not bother advocates of adversarial systems, who consider the process as a whole rather than just the EA documents. They see that each side will focus on aspects that support its case, so that a proponent's tendency to omit detail in areas not likely to benefit the proposal will be offset by the opposition's tendency to search out and concentrate on just these areas. In this view, the EA is inevitably an initial partisan document which is superseded in the public record by the revisions arising from the accumulation of opposing evidence in critical reviews, hearing transcripts, and so forth. This view stresses the importance of objectivity in the final decision rather than in a preliminary and incomplete document such as an EA.

25 Some proponents are able to avoid entrenched views.
For example, in its decision on the Southwestern Ontario transmission lines, the Board commended Hydro for bringing forward evidence that went against Hydro's preferred alternative.

Though the scrutiny involved in the review and approval process may be beneficial, public controversy and extensive hearings are not the most efficient way to reach an objective decision, and they are certainly not seen by proponents as being in their best interest.26 The intention of the EA process is just the opposite: to provide a way to develop a proposal based on objective fact and broad consent without arousing partisan controversy. The EA presented by a proponent is supposed to be objective and accepted, as far as possible, by interested parties. To the extent that it is partisan, it has failed.

Eliminating partisanship

Partisanship can be eliminated at its origin. Failing that, it can be eliminated in the EA.

Elimination at the origin means simply that the proponent avoids having, or deliberately drops, a vested interest in any particular alternative. This is by far the easiest solution for proponents able to do it, because it prevents many problems in the subsequent process. Moreover, it is the best way, because sometimes an initially preferred alternative turns out, after lengthy struggles to justify it, to be ill-conceived.

The retention of a vested interest, but the elimination of partisanship, in the evaluation and EA is more difficult. It can be done only if the proponent insists upon fair treatment of all alternatives and is ready to shift support if another option proves superior in the evaluation. The vested interest will exert continual pressure on the planning process. Protecting the process from that pressure will be difficult for the proponent, and yet conceivably worthwhile. The proponent's sincere struggle to remain objective and fair will be visible to interested parties and will earn their respect. That respect can be of considerable value to the proponent in the long run.

26 In the Halton decision, the Joint Board (1989) noted: "Expert witnesses often appeared reluctant to provide information that would jeopardize their client's position. This attitude handicapped the Board in its effort to select the most environmentally suitable location for a landfill in Halton. This may suit the adversarial nature of hearings and the obligation of counsel to 'protect clients' interests', but it is counterproductive and risky for a hearing Board to decide on environmental suitability in such a climate."

5.1.2 Complex and inaccessible presentation

EAs often require much technical research and analysis in planning, technology, engineering, and the natural and social sciences. These activities and their findings must be adequately described for non-specialists in the EA process. These non-specialists are not just the general public. During planning they consist in the first instance of the experts in different fields working together on the project, who function as lay critics for each other. This mutual scrutiny among disciplines can become active and intense where possible trade-offs arise between kinds of impact, such as between protecting agriculture and protecting wildlife, or between impacts on rural and urban areas. This internal review is as important as public consultation in making an evaluation successful. Effective translation of technical information and analyses is therefore important at all stages. This translation function is largely the work of the specialists themselves.
They can and should be asked by their clients to use simple language, consistent terminology, clear graphics, and so on. But there are also certain conventions or rules that they could be required to follow, on the grounds that the explanations provided will otherwise be deficient.

Identify all important assumptions

Except when they are instructions from the proponent, assumptions are expert judgments made by the specialist to guide the study. Some matter more to the findings than others. Examples include the scope and method of the study, the size of the study area, and the model selected for impact prediction. Any assumption in which a change would alter the findings should be identified, and the implications for the study's conclusions examined.

Identify significant limitations in data

Inadequacies and inconsistencies in the data available for analysis are often unavoidable and potentially very significant for the study's findings. As with the assumptions, the potential limitations on the study's conclusions should be specifically discussed.

Allow confirmation of every number

The data and analysis reported by a specialist should be sufficient to allow an interested reader to trace back to its origin any number given in the text or in a table or diagram. The definitions and explanations of derivation, the data transformations at every step of the analysis, and the primary data themselves should all be available, with the connections explained, in the tables and appendices of the specialist's report.

Distinguish explanation from process

Some evaluation reports have mistakenly focused on describing the evaluation as though the process itself explained the final preferences. In fact an evaluation is a technical process aimed at reaching an explanation. When asked why the chosen option was preferred, the proponent must be able to answer, not by pointing to complex tables or graphs recording the process, but by stating the reasons revealed by the process to be decisive. A process description does not provide an answer any more than a table or diagram does. Stating in words the reasons revealed by the process is like describing to the reader the conclusions to be drawn from a table or diagram.

Summary

The defects addressed by these rules can in principle be solved fairly simply through appropriate efforts by reviewers, technical advisors, and proponents. They are thus somewhat easier to deal with than the problems of objectivity and bias discussed earlier. However, these defects also tend to flourish in cases where partisanship is a problem, because they are the typical products of attempts to create a semblance of objectivity without its substance. In such cases, of course, these defects will be much harder to correct. At the same time, their appearance should be a signal to reviewers that the evaluation may be less objective than it is represented to be.

5.2 Adequate criteria

The criteria used in an evaluation are independent of the evaluation method used. But they are vital to the evaluation for two reasons. First, the criteria determine the data to be gathered, so that the omission of a potential criterion can mean the omission of data in areas of significant impact. Secondly, it is in terms of the criteria that the preferences are reached, so that the addition, subtraction, or redefinition of a criterion may completely change the results of an evaluation.
Therefore the adequacy of the criteria in an evaluation is always a leading concern of proponents, interested parties, and reviewers.

In EAs the adequacy of evaluation criteria has five main aspects: rationale, comprehensiveness, overlapping, level of detail, and relative number.

5.2.1 Rationale

The criteria should be explained; that is, they should be shown to be derived logically and explicitly from the goals, objectives, or targets of the project. If the criteria are not derived from these, then it will not be apparent that the preferred alternative is the one that best meets these goals, objectives, and targets. The criteria are a link in the chain that constitutes the overall rationale and justification for the project. Thus the process by which the criteria were developed should be taken seriously by the proponent, should normally involve examination of the project goals in public consultation, and should be made part of the evidence presented in the EA.

5.2.2 Comprehensiveness

As outlined earlier in this report, the EA Act of Ontario stipulates a comprehensive definition of the environment in which impacts must be considered. The criteria must therefore reflect all aspects of the natural and human environment. Critics should be unable to identify types of data, in areas where impacts will be significant, that are omitted from the planning process and the EA. When this can be done, it is a sign that the criteria have left a gap and do not completely represent the total environment.

5.2.3 Overlapping

Though fully representing the total environment, the criteria must also not overlap, because overlap would allow certain data, appearing under two or more criteria, to be counted more than once and so gain an unwarranted significance. The criteria must therefore be interpreted to ensure a clear and strict separation.

5.2.4 Level of detail

The criteria should be detailed enough to differentiate between alternatives when significant differences exist.

5.2.5 Relative number

The evaluation may be distorted if criteria are allowed to cluster or proliferate in particular areas. This tendency arises naturally from concentrations of expertise. If one planning team has a preponderance of engineers, a second a preponderance of biologists, and a third a preponderance of economists, the criteria will tend to cluster correspondingly, simply because experts perceive more distinctions in their own fields than in others. When such a skew appears, weighting must be done carefully. For example, if all weights are assumed to be equal, categories where criteria are more numerous will tend to be given greater emphasis in the evaluation, as the sketch below illustrates.
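The arithmetic of this skew is easy to demonstrate. The sketch below is hypothetical (the ratings and the mix of fields are invented): with eight equally weighted criteria, six of them from one discipline, that discipline's view dominates the aggregate score.

    # Hypothetical 0-10 ratings for one alternative, grouped by field.
    scores = {
        "engineering": [7, 8, 6, 7, 9, 8],   # six engineering criteria
        "biology":     [2],                  # one biological criterion
        "economics":   [4],                  # one economic criterion
    }

    # Equal weight per criterion: engineering contributes 6 of 8 terms.
    equal_criterion = sum(sum(v) for v in scores.values()) / 8
    # Equal weight per field: average within each field first.
    equal_field = sum(sum(v) / len(v) for v in scores.values()) / 3

    print("equal criterion weights:", equal_criterion)  # 6.375
    print("equal field weights:", equal_field)          # 4.5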
5.3 Logical errors and arithmetical errors

Applying an evaluation method can become a very complex task when a comprehensive set of criteria and a full range of alternatives must be examined, as is required under the EA Act. The difficulty increases when no specific method or set of methods is prescribed for use, as is also the case under the EA Act. With no set and defined method, proponents will encounter difficulties in the application of the methods they do choose, and errors in application may occur frequently. Our review of selected Ontario EAs showed this to be the case. Errors in the application of methods fall into two main categories: errors of logic and method, and errors in arithmetic.

5.3.1 Methodological errors

Different methods require different special procedures. Only a thorough familiarity with the methods in common use will prepare reviewers to identify specific errors in their implementation. Technical descriptions of specific methods are included in Appendix C, and references to more detailed explanations are provided in the bibliography. Nevertheless, there are a few procedures relevant to all methods which proponents should follow and reviewers should require:

Clear definition and description of method(s) used

A clear description in the EA document of the evaluation methods used would both assist reviewers in following the methods and be of value to proponents as a clear guide for their own reference. It is difficult to follow the logic of an evaluation when the method itself is not clearly spelled out. The EA Branch's guidelines stipulate that "the chosen method(s) should be clearly described" in the preparation of EAs, but this is apparently not a requirement for the EA document. Too often, proponents describe only the method used in the final comparison and neglect the evaluation methods used in earlier stages of the analysis. Clear definitions of methods are especially important in view of the plethora of methods available and the hybrid combinations of evaluation methods widely used in Ontario EAs.

Consistency with the EA framework

The EA guidelines describe a logical sequence of decision-making phases in the planning framework. Each of these steps involves the application of an evaluation method; as the range of alternatives becomes more focused, the required level of detail becomes greater. And the stages differ not just in quantitative terms (e.g., level of detail): there are important qualitative differences in methodological requirements between preliminary screening and detailed comparison. Because of different requirements in terms of level of detail, range of impacts, and other factors mentioned in Chapter 3, certain methods appropriate for particular stages of the analysis may be inappropriate for other stages. It is vital that the methods applied be consistent with the EA framework. For instance, in one of the EAs reviewed, "land availability" was used as a criterion in the net effects comparison of final alternatives, even though two alternatives were classified as "not available". The criterion of availability obviously should have been considered much earlier.

Consistency in the application of methods

In addition to involving different stages of analysis, a single EA may also involve parallel evaluations of different classes of alternatives which are later analyzed as complementary combinations of alternatives. This is common in EAs for Waste Management Master Plans and in Class EAs. In both instances (different stages of the same analysis, and parallel analyses) it is very important that the methods used be consistent with each other. In particular, this requires that the methods used in successive stages of the analysis incorporate the components of previous stages. By "incorporate", we mean that no information or dimension from previous stages of the analysis should be disregarded or ignored in successive stages. If components, criteria, or aspects of the analysis are not aggregated with others through a decision rule, it should be assumed explicitly that the alternatives show no divergence with respect to this factor. Such an admonition may seem trite to those familiar with the fundamental purpose of evaluation methods; however, with multi-stage and multi-component EAs, these methodological errors are easy to make and often hard to spot.
In one of the EAs reviewed, a slightly different set of criteria was used to select a component of a waste management plan from among alternatives than was used to evaluate combinations of the components, and these criteria were not picked up or aggregated in the final analysis. It is especially common for criteria used in the initial ad hoc screening to be ignored in later stages of the analysis. A more serious error occurs when an aspect or dimension of the analysis is not incorporated into later stages. For instance, temporal or spatial aspects may not be accounted for, or considered, in successive stages.

5.3.2 Errors in arithmetic

Not only complex equations but also simple arithmetic operations merit close attention.

In one of the Ontario EAs examined, one criterion was scored with a greater (negative) impact in the net effects analysis (after mitigation) than had been registered before mitigation. In addition, the scores registered for this criterion were not a multiple of the appropriate weighting factor. Because the undetected error accounted for almost 40% of the lowest total score in the analysis, it significantly affected the choice of the most preferred alternative.

In another of the EAs examined, an error seems to have been made in the selection of the most preferred alternative. The erroneously selected alternative was rated equal or best (the decision rule) on 31 out of 34 criteria. Another alternative was not only rated equal or best on 31 criteria, it was also rated best on one more criterion than was the selected alternative.

When simple arithmetic errors such as these appear in the material presented in the report, it is likely that errors involving more complicated mathematics have also been made in the analysis not presented in the report. Hence, all significant calculations, if not all calculations, should be either presented in the report or made easily available to reviewers. At the same time, the principle of Occam's Razor, the minimization of costs, and the desire to reduce the risk of errors all call for the use of simpler evaluation methods over more complex methods whenever the benefits of the more complex methods are marginal. Errors of this kind are mechanical enough that some can be tested for automatically, like the sketch below.
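The following is a hypothetical sketch (criteria, weights, and scores are invented, not taken from the EAs reviewed) of two such checks: that each weighted score is a multiple of its weighting factor, and that no criterion registers a worse impact after mitigation than before.

    # Weighted impact scores before and after mitigation (negative = adverse).
    weights = {"noise": 3, "groundwater": 5}
    before = {"noise": -9, "groundwater": -10}
    after  = {"noise": -6, "groundwater": -15}

    for criterion, w in weights.items():
        # A weighted score should be an exact multiple of its weight.
        if after[criterion] % w != 0:
            print(criterion, "score is not a multiple of its weight", w)
        # Mitigation should not make the net impact more negative.
        if after[criterion] < before[criterion]:
            print(criterion, "scores a worse impact after mitigation -- check")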
5.3.3 Double counting and independence of preferences

The danger of double counting is present with all evaluation methods, but it is particularly high with methods which account for secondary impacts, or with methods such as cost-benefit analysis. This problem, and the problem of incorrectly counting transfer payments as costs or benefits, has a greater likelihood of arising when the spatial or temporal frame of reference has not been clearly defined.

Double counting occurs in cost-benefit analysis when a benefit or cost is counted once as a flow and then subsequently as an additional flow or as a change in asset valuation derived from the flow. For instance, money received as rent may be counted once as income to the rentier (a flow) and later as a change in her wealth. The money could be counted again as a flow if it were spent by the rentier and counted as income for another person (e.g., multiplier effects). Transfer payments (such as government subsidies) should not be fully counted when the frame of reference includes those who support the transfer payments through taxes or otherwise. This arises frequently when assessors tabulate the economic or environmental benefits of a particular project without accounting for the costs incurred through government within the frame of reference.

The double counting problem arises with environmental impacts where intermediate as well as final impacts are evaluated. As an example, the costs of SO2 pollution may be counted twice: once using an indicator of total emissions, and a second time using an indicator of crop loss.

Double counting may also take more subtle forms. Preferences for particular impacts may not be significantly independent of preferences for other impacts. This occurs when particular impacts are the indirect causes of other evaluated impacts, as noted above, or when impacts are complements or substitutes for each other. When the evaluated impacts are substitutes for each other, there may be significant double counting. Where the impacts are complements, there may be some underestimation of final impacts. Estimates of preferences or measurements should be made for paired or multiple impacts, as well as for individual impacts, to determine what degree of interdependence is present.

5.3.4 Mixing of data types

The different types of scales most commonly used as measurements for evaluation methods are discussed in Section 2.2. One error that is commonly made is the application of arithmetical operations to ordinal or even nominal scales. Because there is no indication of the size of the difference between different points on such scales, performing mathematical operations with them makes no sense. Problems can also arise with the use of interval scales when data have been arbitrarily or inconsistently standardized.

5.3.5 Intransitivity of preferences

Transitivity is a property of sets which is normally required in logic. The principle is that if option A is preferred over option B, and option B is preferred over option C, then option A should be preferred over option C. Although this seems only reasonable, it is quite common for there to be implicit intransitivity of preferences. In some cases, this may reflect an inconsistency in the way that preferences are described.

There are legitimate occasions when intransitivity can occur. For example, consider the case of three persons ranking three options as follows:

              Person 1    Person 2    Person 3
    Rank 1        A           C           B
    Rank 2        B           A           C
    Rank 3        C           B           A

In this circumstance, although each individual's preferences are transitive (as required by the ranking process), the group's preferences, based on a "majority vote", are not: two people prefer option A over option B (persons 1 and 2), two people prefer option B over option C (persons 1 and 3), but option A is not preferred over option C by the majority (only person 1 prefers A over C).

When examining individual preferences, it is important to identify whether preferences are transitive, and to look for an explanation for intransitive preferences. When examining group preferences, it is important to look for assumptions of transitivity, and to ensure that they are valid given the underlying data.
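The group-preference cycle in this example can be verified mechanically. The sketch below simply encodes the three rankings from the table and takes a majority vote over each pair of options:

    from itertools import combinations

    rankings = {                 # each person's order, best first
        "Person 1": ["A", "B", "C"],
        "Person 2": ["C", "A", "B"],
        "Person 3": ["B", "C", "A"],
    }

    def majority_prefers(x, y):
        """True if more people rank x above y than y above x."""
        votes = sum(1 if r.index(x) < r.index(y) else -1
                    for r in rankings.values())
        return votes > 0

    for x, y in combinations("ABC", 2):
        winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
        print(f"majority prefers {winner} over {loser}")
    # Prints A over B, C over A, B over C: a cycle, so the group
    # preference is intransitive even though each individual's is not.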
5.3.6 Use of dimensionless numbers, indices, and arbitrary scales

Often, in an effort to combine impacts measured in incommensurable units, or to simplify the analysis, measured data may be converted into dimensionless numbers, indices, and arbitrary scales. Although this is not necessarily a problem, when it is done extra caution must be exercised to ensure that the transformed data are not misused; it is easy to lose sight of the underlying meaning of data transformed in these ways, and to use them to make decisions which are different from those that would be made if the real meaning were fully understood.

5.3.7 Confusion of magnitude and importance

Although impacts of large magnitude have a greater likelihood of being significant or important, this is not always the case, and a clear distinction needs to be made between magnitude and importance. Importance or significance may be affected by considerations other than magnitude (Beanlands, 1983:46). For example, the supply or abundance of an environmental attribute may be critical in determining the significance of an impact on that attribute. If the amount of an environmental attribute destroyed were large compared with the remaining amount or supply of that attribute, then the impact may be considered significant. Thus in some cases the fraction of an attribute destroyed or lost may be more important than the absolute amount destroyed or lost.
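A short hypothetical calculation (the attributes and figures are invented) illustrates the point: the same absolute loss can differ enormously in significance depending on the remaining supply of the attribute.

    losses = [
        ("class-1 farmland (ha)", 50, 10_000),   # 50 ha of an abundant attribute
        ("fen wetland (ha)",      50,    120),   # 50 ha of a scarce attribute
    ]
    for name, lost, supply in losses:
        print(f"{name}: {lost} lost = {lost / supply:.1%} of remaining supply")
    # 0.5% versus 41.7%: equal magnitude, very different importance.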
5.4 Confusion of expert knowledge and responsibility for decisions

Evaluation methods used as part of an EA make use of several different types of information. An important distinction to be made is between what can be called exact values and preference values.27

Exact values are assigned to those things that can be measured using a given "instrument" or procedure, and which are "objective". Different observers should consistently generate values at the same level.

Preference values are assigned according to opinions and preferences and are "subjective". Different individuals would (and would be expected to) estimate these values at different levels.

27 This distinction is sometimes referred to as the distinction between "facts" and "values".

5.4.1 Exact values

"Experts" can, by virtue of their training and experience, assist in providing exact values. Exact values are derived from measurements made with an "instrument". Although referred to as exact, their precision is limited by the precision of the measuring instrument, which may be the observational powers of the expert. For example, an entomologist is more likely to notice the presence of various species of insects than is a botanist (who may be too occupied observing the flowers to even notice the bees).

Considerable controversy can revolve around issues of exact values. Scientists may disagree about the appropriate instruments to use to take the measurements, about the measurements suggested by the instruments, and about the implications of these measures. For example, hydrogeologists may disagree on whether or not well drilling records are adequate to assess sub-surface conditions for a proposed landfill site; they may disagree on whether sand layers identified by the well records are continuous or discontinuous; and they may disagree on whether or not the continuity is relevant as a potential exposure pathway.

Courts and administrative tribunals have evolved ways of assessing scientific information and of dealing with these differences among scientists. The EA Board has the option both of questioning witnesses directly and of hiring its own experts to help it understand the technical issues involved (Jeffery, 1988). Although courts have traditionally avoided evaluation of the theory or reasoning underlying scientific evidence, a more active approach has begun to evolve (Black, 1988). Courts are beginning to look closely at experts' reasoning, and they are requiring scientists to conform to the standards and criteria of science (Black, 1988), while recognizing distinctions between appropriate standards of proof for different types of decisions. For example, the standard of proof for criminal proceedings is higher than for civil proceedings, and science has its own standards of proof (Jeffery, 1988). At the same time, these decision-making bodies must recognize the distinction between planning and science,28 while not creating a situation which uncouples the law from scientific reality and leads to uncertainty and the inhibition of scientific progress (Black, 1988).

28 Planning requires a decision, even if it is only a decision not to proceed now, and thus must be based on the best available evidence. Science can wait until the evidence is "adequate".

5.4.2 Preference values

In many environmental assessments (under the Environmental Assessment Act and other statutory and administrative requirements), experts in relevant disciplines are engaged to perform the scaling of the exact values (e.g., biologists may choose value functions for ecological attributes), or linear value functions are adopted.29 That is, the expert says, on the basis of his expertise in a particular discipline, how the exact values should be scaled.30

29 Adopting linear value functions has its limitations, as should be clear from the earlier discussion of the difference between magnitude and importance.

30 This "scaling of the exact values" may be simply a statement that one alternative is the best, or a scaling along a ratio scale.

Scaling of exact values by experts within a discipline is a legitimate domain for the expert, provided that the expert is being asked to address the things about which the decision-maker is concerned. For example, a biologist being asked to scale the appropriateness of various tree species for a public park ought to know whether the decision-maker is interested in a park with trees for shade, for children to climb, or as a sanctuary for rare or endangered species.

Unfortunately, "experts" are often involved, with the encouragement of proponents who often find it easier to pass on the responsibility, in making trade-offs across disciplines where their opinions have no greater legitimacy than anyone else's. Ironically, these decisions are sometimes attributed to professional judgement, when in fact "professional" is one thing they are not. For many of the issues that must be addressed in an EA, there are no "experts". The relative importance of various environmental features is a matter of opinion, and no one's opinion has a claim to greater legitimacy than another's. There is no "right" or "wrong", and there is no universally accepted "scientific" or "expert" way of making incommensurable units commensurate or of trading off impacts among criteria. As indicated by the chairman of the EA Board (Jeffery, 1988):

"... scientific or specialized technical expertise may not necessarily reflect the societal value judgments which must be made when dealing with the social, economic and cultural issues relevant to environmental assessment legislation."

However, their training and experience may enable experts to assist decision-makers and others in setting their preference values by identifying what things to consider, providing interpretation and clarification of things to consider, and giving examples of how these have been considered before. For example, many decisions involving hazardous chemicals or waste adopt a level of cancer risk which is considered, if not acceptable, at least tolerable. The selection of this risk level is not something that toxicologists or epidemiologists have special capabilities for determining.
However, they can assist decision-makers in setting this level by comparing it to the background cancer risk, contrasting it with risks faced in society from other activities, and indicating what levels have been adopted in other decisions and by other decision-making bodies.

Responsibility for preference values

If the technical expert cannot be expected to make the trade-offs across disciplines, then who can be? In submitting an EA, it is unusual for one alternative to be clearly superior for all environmental attributes, and therefore the proponent must make these trans-disciplinary trade-offs. In submitting one alternative as an undertaking, the proponent must be satisfied with the trade-offs this alternative implies. Ultimately, the Minister or the EA Board must decide whether such trade-offs lead, in the words of the EA Act, to "the betterment of the people of the whole or any part of Ontario by providing for the protection, conservation, and wise management in Ontario of the environment."

Preference values and public consultation

All parties to an EA recognize the difficulty of these trade-offs, and that different groups within society have different views about how they ought to be decided. In deciding what represents the betterment of the people of the whole or any part of Ontario, the Minister or the Board will want input from the various affected groups in society on these issues. This is thus a primary purpose of any public consultation program: to provide information on the views of different groups within society on what trade-offs ought to be made. It is the proponent's obligation to demonstrate that these views have been identified and considered in the analysis.

There is a substantial literature dealing with appropriate methods for identifying these views, and for aggregating them in a way which can be used for generating an ordering of alternatives. These methods are largely outside the scope of this study, and they are thus only touched upon.

5.5 Approach for dealing with uncertainty (& risk)

EAs deal with predictions about the future impacts of certain courses of action, with and without the project or mitigation. This inherent uncertainty often becomes a significant and contentious issue in the decision-making process, often because proponents are uneasy about acknowledging that they are unaware of all the possible impacts of the proposed undertaking. Opponents try to characterize the proponents' (and society's) ignorance as a reason for denying approval. Organizations have developed various means for dealing with (or avoiding dealing with) uncertainty (e.g., Michael, 1976), including:

• planning paralysis
• trying to translate uncertainty into risk
• performing evaluations using data types which are less precise
• performing sensitivity analyses
• undertaking additional (possibly primary) research
• open acknowledgement, consultation, and participation in exploratory programs

5.5.1 Planning paralysis

Often the inherent uncertainty associated with a situation can lead to what might be described as "planning paralysis". Planning paralysis is a common and persistent problem. Nelson et al warned of it in the 1960s:

"Government policy making presently has ... a tendency to delay for a long time the introduction of a new program because of uncertainties and then suddenly to jump in fully with a large commitment to a prescribed program, with no better knowledge base than before, when political pressures for doing something became strong.
"Once proposed or initiated, the program is then popularized among the public and in the Congress as a sure antidote, rather than as a promising probe of the environment.

"This knowledge myth, which forces dedicated public servants to engage in charlatanism, seriously impedes the development of public policy. The channelling of large sums of money into programs predetermined on the basis of sketchy information narrows the range of alternatives that can be tried, and thus reduces the range of policy instruments that have to be tested. Further, it deters useful experimentation, since all programs are action programs. It places a high premium on actions likely to yield simple-minded quantitative indexes of immediate successes.... Conditions of great uncertainty call for imaginative and flexible probings, not vacillation between inaction and commitment."

- Nelson et al, 1964, as cited in Michael, 1976:114-115

5.5.2 Choice of data types

An approach often proposed for dealing with "intangibles", or with environmental effects whose nature is uncertain, is to advocate the use of nominal data within an evaluation system (Joint Board, 1987a:76). The argument is that the assignment of a specific quantitative value on an interval or ratio scale suggests more precision than actually exists, and that this suggestion can be avoided by characterizing the effect with such terms as "poor", "fair", or "good".

Although the use of such nominal terms may suggest an imprecision which is perhaps analogous to that in the environmental attribute being described, this approach does not advance the ability to compare alternatives or to establish preferences among alternatives. The nominal terms themselves are subject to broad interpretation by different observers, and they thus fail to convey both specific information about the attribute being described and information about the interpretation of the attribute by the person conducting the analysis.

The problems associated with the use of nominal data are compounded if they are implicitly or explicitly converted into quantitative measures (assumed to be interval for the purpose of aggregation) to enable them to be "added" to identify the preference ranking of alternatives. (If they are not so converted, they are unlikely to be useful in helping to identify preferences.) For example, "cool" may be assigned a value of "1", and "warm" may be assigned a value of "2". However, there is no assurance (in fact it is unlikely) that any given observation in the warm category has a temperature twice as high as an observation in the cool category. Identifying one alternative as preferable to another because it is rated "good" against more criteria, or "bad" against fewer criteria, obscures information about the degree of goodness or badness, even if the measure of this degree is uncertain. Instead, a greater degree of certainty is implicitly assumed: not only is the ratio of good:fair:poor assumed to have particular values for each criterion, but these ratios are assumed to apply across criteria.
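The dependence of the result on the arbitrary numerical coding is easy to exhibit. In the hypothetical sketch below (ratings and codings invented), two codings that both respect the order poor < fair < good reverse the apparent ranking of two alternatives when the labels are summed as though they were interval data.

    ratings = {"X": ["good", "poor", "poor"],
               "Y": ["fair", "fair", "fair"]}

    for coding in ({"poor": 1, "fair": 2, "good": 3},
                   {"poor": 0, "fair": 1, "good": 5}):
        totals = {alt: sum(coding[r] for r in rs) for alt, rs in ratings.items()}
        print(coding, "->", totals)
    # {'poor': 1, 'fair': 2, 'good': 3} -> {'X': 5, 'Y': 6}   (Y preferred)
    # {'poor': 0, 'fair': 1, 'good': 5} -> {'X': 5, 'Y': 3}   (X preferred)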
5.5.3 Sensitivity analyses

Many methods require that a specific value be given for each exact value and each preference value. This requirement must be met even where there is considerable uncertainty about the "real" value. Sensitivity analyses test the implications for the final results of providing different values for particular variables, or of providing different sets of variables. If small changes in a particular variable change the results of the analysis, then the results are considered "sensitive" to that variable. Alternatively, some variables may have no impact on the results of the analysis over any conceivable range of values; the results are insensitive to the value of such a variable. Further work to reduce uncertainty can then be directed towards finding the values of those variables which have a significant effect on the results.

By testing different values for variables whose "real" value is uncertain, it is possible to determine whether or not this uncertainty is significant. For variables which do not affect the results over any reasonable range of values, the analysis suggests that the uncertainty for that variable is not a concern, given the values of the other variables.32

32 Of course, if some of the other variables are incorrectly specified, then it is possible that an insensitive variable would be a sensitive one if the other variables were correctly specified.

In developing sensitivity analyses, the analyst must be careful to ensure that the analyses presented are useful to the users of the analysis. Presenting scores of possible outcomes may be counterproductive if they fail to help users focus on the key variables affecting the results of the analysis, and to direct their thinking towards constructive ways of proceeding given the uncertainty in the sensitive variables.
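In its simplest form, a sensitivity analysis is just a sweep over one uncertain input. The sketch below (weights and scores are hypothetical) varies a single preference weight and reports where the preferred alternative changes; if the switch point falls inside the plausible range of the weight, the result is sensitive to that weight.

    # Impact scores for two alternatives against two criteria.
    scores = {"A": {"cost": 8, "wildlife": 3},
              "B": {"cost": 5, "wildlife": 7}}

    def preferred(w_wildlife):
        weights = {"cost": 1.0, "wildlife": w_wildlife}
        totals = {alt: sum(weights[c] * v for c, v in s.items())
                  for alt, s in scores.items()}
        return max(totals, key=totals.get)

    last = None
    for i in range(21):                  # sweep the wildlife weight 0.0 .. 2.0
        w = i / 10
        now = preferred(w)
        if now != last:                  # report only where the ranking flips
            print(f"wildlife weight {w:.1f}: preferred alternative is {now}")
            last = now
    # Here the preference switches from A to B near a weight of 0.75.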
5.5.4 Additional research

There is uncertainty associated with the predictions of the impacts of alternatives, and with the significance that various groups in society attach to these impacts. This uncertainty can be grouped into two classes:

• uncertainties associated with the inadequacy of the data available on which to make decisions
• uncertainties associated with the inadequacy of understanding of the systems likely to be affected

Some of these uncertainties may be reduced by additional data collection or by research aimed at gaining a better understanding of the system(s) likely to be affected; others cannot be. The costs of this additional data collection must be compared against the potential consequences of proceeding in an uncertain decision environment, or of making the "wrong" decision because of uncertainty. While additional research is often warranted, uncertainty is unavoidable when dealing with future projections. This and the other tactics should not be considered solutions that remove uncertainty, but rather ways to cope with it better.

5.5.5 Adopting a social policy for dealing with uncertainty

When sensitivity analyses indicate that variables which have a relatively high degree of uncertainty are sensitive, and when measures to reduce this uncertainty, for example through additional research or data collection, are impossible or impractical, the uncertainty should be explicitly considered in the analysis of the alternatives. Failure to do so has been identified as a critical analytical weakness (Ayres, 1984). Usually, this will require the specification of criteria relating to a particular social policy. An example of such a social policy is to give preference to particular types of alternatives when there is high uncertainty and potentially significant effects. Under these circumstances, preferred options may have the following characteristics:

• be capable of being implemented in phases, with monitoring of the impacts of each phase to ensure that impacts are acceptable before proceeding with subsequent stages
• be designed to enable monitoring and potential remedial actions
• incorporate extensive public involvement, in part to engender trust and to distribute responsibility and control to those most likely to be affected

Of course there is a cost associated with such an approach, and this cost must be captured in the analysis as well.

Another possible social policy is to act as if one knew what was really involved well enough to assign probabilities to outcomes. Regrettably, this happens all too often by default, rather than through the deliberate adoption of such a social policy. This assignment of perceived probabilities to situations where the actual probabilities are unknown can lead to self-delusion and the deluding of others, and is all too often sold as "professional judgement". The claim of "professional judgement" is used as a substitute for evidence to support the claim, and the policy options are obscured.

5.6 Selection of the method

Criteria for selecting methods were described in Chapter 3. Proponents ought to indicate that they considered these criteria in choosing the method(s) adopted. Particularly where the decision is an important one, it may be desirable to use more than one method and to test whether the conclusions reached are affected by the choice of method. Several studies have shown that the choice of method can make an important difference to which alternative is preferred.33 There are several possible reasons for methods leading to divergent results (Hobbs, 1985):

• decision-makers are not sure of their values
• decision rules are not correctly chosen
• weighting or scaling methods measure the wrong concept
• user misunderstanding, boredom, or fatigue
• perceptual biases

33 In many ways this should not be very surprising. If a complex method and a simple method always gave converging results, why would anyone bother with the complex method? Implicit in the choice of the more complex method is that it is more valid, and that the simple method is less valid.

A simple way to see how the choice of method can matter is sketched below.
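The following hypothetical illustration (scores invented) shows two defensible methods disagreeing on the same data: simple additive weighting aggregates across criteria, while lexicographic ordering lets the highest-priority criterion decide.

    scores = {"A": {"water": 9, "cost": 2},
              "B": {"water": 8, "cost": 9}}
    weights = {"water": 0.5, "cost": 0.5}

    # Simple additive weighting: weighted sum across criteria.
    saw = max(scores, key=lambda a: sum(weights[c] * v
                                        for c, v in scores[a].items()))
    # Lexicographic ordering: compare on "water" first, then "cost".
    lex = max(scores, key=lambda a: (scores[a]["water"], scores[a]["cost"]))

    print("simple additive weighting prefers:", saw)  # B (8.5 vs 5.5)
    print("lexicographic ordering prefers:", lex)     # A (water decides first)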
5.7 Iteration

The selection of the criteria, the choice of the method, the weights, and the rates ought to be reviewed in light of the results. This might be done by tracing backwards through the process and determining whether the criteria that determine the outcome of the evaluation are really the ones that the decision-maker thinks are most important and that best distinguish among the alternatives.

This process is particularly important when those providing preference values are inexperienced with explicit evaluation methods and are uncertain about their values or unable to articulate them clearly. Iteration allows decision-makers to converge on the explicit preference values that reflect their implicit value set. A preferred alternative will emerge from the first iteration. If the decision-maker finds this alternative undesirable, it suggests either that the stated preference values are incorrect or that the balancing of trade-offs has not been accurately performed mentally by the decision-maker. In either case, iteration allows the decision-maker to explore his values and conclusions more fully.

6 Conclusions and recommendations

6.1 Major findings

6.1.1 What is an evaluation method

An evaluation method is a formal procedure for establishing an order of preference among options. There are a wide variety of evaluation methods in use, and these can be grouped into six categories:

• ad hoc
• checklists
• matrix methods
• economic methods
• pair-wise comparison methods
• mathematical programming methods

The purposes of evaluation methods are (i) to provide an improved basis for decision-making, and (ii) to improve traceability and accountability.

6.1.2 The evaluation process

There are four components to the evaluation process, once the alternatives have been identified:

• identification of criteria which describe what things are of concern
• assessment of the predicted impacts associated with the alternatives, for each criterion
• identification of the willingness to trade off each criterion against other criteria, and hence make the measures commensurate
• aggregation of the criteria, the impact assessments, and the willingness to trade off criteria to produce an ordering of the alternatives

A variety of methods have been developed to assist in carrying out each of these components. Evaluation methods are concerned with the aggregation process. Some evaluation methods also perform all or part of the other components of the evaluation process, or may determine how these ought to be done.

6.1.3 The choice of methods

Most methods have been developed to deal with a specific type of problem, but no method is ideal for all problems. The choice of methods requires consideration of:

• the type of problem for which a decision is needed
• the decision environment
• technical considerations, including data requirements, rigour, and understandability

Methods can be selected using a set of criteria which fall into three groups:

• the characteristics of the impacts (e.g., geographic, temporal, certainty)
• the importance of the decision (including the consequences of being wrong, and the resources available to make the decision)
• the acceptability of the features of the method to decision-makers, the public, and professionals

In practice, more than one method usually is (and should be) used in an EA, to deal with different phases and components of the evaluation.

6.1.4 Review and analysis of methods

The methods in common use in Ontario include:

• the ad hoc method
• satisficing (including constraint mapping)
• simple additive methods

These tend to be the less sophisticated methods, and they do not ensure traceability and accountability as well as some of the other methods do. Other methods have been used in selected EAs, including the analytical hierarchy process (a pair-wise comparison method). In addition, some of the other methods are increasingly being used in other jurisdictions to help with planning problems.

Reasons for the heavy reliance on the simpler methods are their decreased requirements for proponents to make explicit the facts and values which can be attacked by opponents, and the inability of theoreticians to persuade practitioners that the more complex methods are sufficiently superior to merit the extra effort they require to implement.

6.1.5 Applying the evaluation method

The review of evaluation methods used in EAs in Ontario and elsewhere reveals some crucial areas of concern.
Among the deficiencies and mistakes that may be associated with the application of evaluation methods are:

• a lack of clarity of expression, in some cases the result of partisanship, in others the result of too complex and inaccessible a presentation
• logical and arithmetic errors, which can subvert rather than assist decision-making, and which in some cases result in the preferred choice being incorrectly identified (using the decision rule chosen)
• confusion of expert knowledge and responsibility for decisions
• ignoring uncertainty, allowing planning to be paralysed by uncertainty, or coping with it using methods which actually create the illusion of certainty
• choosing evaluation methods without careful consideration of the available options and their appropriateness for the problem at hand
• not reviewing the inputs to the evaluation in light of the outcome, to ensure that important impacts and concerns have been accurately captured

6.2 Conclusions

A number of conclusions emerge from the study:

6.2.1 Applying formal evaluation methods in EAs can help ensure that the objectives of the EA Act are met

Two key principles of good planning are the basis of the requirements set out in the Environmental Assessment Act and in the Ministry's Interim Guidelines on Environmental Assessment Planning and Approvals: accountability and traceability. Properly applied formal evaluation methods can help ensure that the EA is both. Use of formal methods can offer potential benefits throughout the evaluation, not just in the final detailed comparison of a small number of options.

6.2.2 Evaluation methods assist in only one part of the evaluation process

Evaluation methods, as defined in this report, do not deal with, or deal with in only a limited way, other aspects of the evaluation process, in particular impact prediction and weighting. For an EA to be effective, these must also be carefully and thoughtfully executed. A properly chosen and applied evaluation method is of little use if the impacts are poorly predicted, and if the results do not reflect the concerns of those making the decision.

6.2.3 There are pressures to both increase and decrease the sophistication of methods used in EAs

Already, some proponents are bringing forward some of the more advanced techniques, and other jurisdictions are relying increasingly on more elaborate and formal techniques. This can be expected to occur in EAs generally, pushed in part by the increasing availability of so-called "decision-making" software packages. However, with this increasing sophistication comes an increased possibility of both misuse and abuse. The pressures to increase traceability and accountability also create the potential for increased controversy, and for a backlash against both formal evaluation methods and the EA process.

6.2.4 Careful application and review of the use of evaluation methods in EAs is not occurring

The EAs reviewed, including some which have passed through the approval process, contained numerous problems which were not detected or corrected by the consultant, the proponent, the reviewers, or the EA Board. In some cases, the evaluation remained mysterious. In others, the preferred alternative was not supported by the stated decision rule.

6.3 Recommendations

The conclusions lead us to the following recommendations:

6.3.1 The Ministry should promote the use of formal evaluation methods

It is the rare project where the best alternative is obvious to everyone.
Where the best alternative is not obvious, the process by which one alternative is selected should be traceable and accountable. Formal evaluation methods can help ensure that it is, and the Ministry should therefore promote their use. Formal evaluation methods can be useful at each stage in the planning process, including the screening and comparison of "alternatives to" the undertaking, not just the comparison of the final set of alternative methods.

6.3.2 Companion studies to this one should be done for methods of impact prediction, methods of weighting, and building systems of evaluation methods

The scope of the present study limited consideration of these other aspects of methods in EA to a fairly cursory overview. Yet they are essential to ensure that undertakings satisfy the basic purpose of the EA Act.

6.3.3 The Branch should encourage the careful choice of evaluation methods

This will involve consideration of the strengths and limitations of the available methods, and is likely to lead to the use of a greater variety of methods. In addition to making this report available to participants in the EA process, the Branch may wish to consider offering training programs at a variety of levels, suited to the needs of different types of participants.

6.3.4 In reviewing EAs, the Branch should watch for errors and deficiencies like those which were identified in the EAs reviewed for this study

This report identifies both errors and indicators of possible errors. Reviewers should look for these errors in EAs they are reviewing, and should bring them to the attention of proponents. It is hoped that doing so will eventually lead to the submission of better EAs, and "the betterment of the people of the whole or any part of Ontario by providing for the protection, conservation and wise management in Ontario of the environment".

7 References

Ahmad, Yusuf J. 1985. Guidelines to Environmental Impact Assessment in Developing Countries. United Nations Environment Programme/Hodder and Stoughton, London
Atkins, Ros 1984. A comparative analysis of the utility of EIA methods, in Clark et al (eds) 1984, pp. 241-52.
Ayres, Robert U. 1984. Improving the scientific basis of public and private decision-making. Technological Forecasting and Social Change. 26:195-199
Bakus, G.J., Stillwell, W.G., Latter, S.M. and Wallerstein, M.C. 1982. Decision making: with applications for environmental management. Environmental Management. 6(6):493-504
Beanlands, G.E. and Duinker, P.N. 1983. An Ecological Framework for Environmental Impact Assessment in Canada. Institute for Resource and Environmental Studies, Dalhousie University. Published in cooperation with Federal Environmental Assessment Review Office
Bendix, S. and Graham, H.R. 1978. Environmental Assessment: Approaching Maturity. Ann Arbor Science, Ann Arbor, Michigan
Bisset, R. 1984. Methods for assessing direct impacts, in Clark et al (1984), pp. 195-212.
Bisset, R. 1987. Methods for Environmental Impact Assessment: a selective survey with case studies, in Biswas and Geping, 1987.
Biswas, A.K. and Geping, Q. (eds) 1987. Environmental Impact Assessment for Developing Countries. Tycooly International, London.
Black, Bert 1988. Evolving legal standards for the admissibility of scientific evidence. Science. 239:1508-1512 (25 March).
Black, Peter E. 1981. Environmental Impact Analysis. Praeger, New York
Blissett, M. (ed) 1975. Environmental Impact Analysis. Lyndon B. Johnson School of Public Affairs, University of Texas at Austin, Austin, Texas
Canter, Larry 1979. Handbook of Variables for Environmental Impact Assessment. Ann Arbor Science Publishers, Ann Arbor, Michigan
Canter, Larry W. 1977. Environmental Impact Assessment. McGraw-Hill, New York
Carley, M.J. 1984. Social Impact Assessment and Monitoring: A Guide to the Literature. Westview Press, Boulder
Clark, B.D. et al 1980. Environmental Impact Assessment: a bibliography with abstracts. Mansell, London.
Clark, B.D. et al (eds) 1984. Perspectives on Environmental Impact Assessment. D. Reidel Publishing Company
Covello, Vincent T. 1987. Decision analysis and risk management decision making: Issues and methods. Risk Analysis. 7(2):131-139
Dee, N. et al 1971. Design of an Environmental Evaluation System. Battelle Columbus Laboratories, Columbus, Ohio.
Elliot, M.L. 1981. Pulling the pieces together: Amalgamation in environmental impact assessment. Environmental Impact Assessment Review. 2(1):11-38.
Environment Canada 1974. An Environmental Assessment of Nanaimo Port Alternatives. Environment Canada, Ottawa.
Environmental Assessment Board (EAB) 1987. Annual Report, Fiscal Year Ending March 31st 1987. Toronto.
Federal Environmental Assessment Review Office 1985. Environmental Assessment in Canada: Summary of Current Practice. Ottawa
Federal Environmental Assessment Review Office 1986. Initial Assessment Guide: Federal Environmental Assessment and Review Process. Ottawa
Fischer, D.W. and Davies, G.S. 1973. An approach to assessing environmental impacts. Journal of Environmental Management.
Fuggle, R.F. and Shopley, J.B. 1984. A Comprehensive Review of Current Environmental Impact Assessment Methods and Techniques. Journal of Environmental Management. 18:25-47
Goodman, A. 1984. Principles of Water Resources Planning. Prentice-Hall
Haith, D. 1982. Environmental Systems Optimization. John Wiley and Sons
Hammond, K.R., Mumpower, J., Dennis, R.L., Fitch, S., Crumpacker, W. 1983. Fundamental obstacles to the use of scientific information in public policy making. Technological Forecasting and Social Change. 24:287-297
Hart, S., G. Enk, J. Jordan and P. Perreault (eds.) 1984. Improving Impact Assessment: Increasing the Relevance and Utilization of Technical and Scientific Information. Westview Press, Boulder, CO.
Henderson, J.E. 1982. Handbook of Environmental Quality Measurement and Assessment: Methods and Techniques. Instruction Report E-82-2. Environmental Laboratory, U.S. Army Corps of Engineers Waterways Experiment Station, Vicksburg, MS.
Hobbs, B.F. 1980. A comparison of weighting methods in power plant siting. Decision Sciences. 11:725-737
Hobbs, B.F. 1985. Choosing how to choose: comparing amalgamation methods for environmental impact assessment. Environmental Impact Assessment Review. 5:301-319
Hobbs, B.F. 1986. What can we learn from experiments in multiobjective decision analysis? IEEE Transactions on Systems, Man, and Cybernetics. SMC-16(3):384-394 (May-June).
Hoekstra, T.W., Dyer, A.A., and LeMaster, D.C. (eds) 1987. FORPLAN: An evaluation of a forest planning tool. Proceedings of a Symposium. USDA Forest Service General Technical Report RM-140, 164 pp.
Hoglund, G.M. and A. Buck 1987. Southwestern Ontario transmission III: Decision making techniques in complex route and system selection studies. Proceedings, Fourth Symposium on Environmental Concerns in Rights-of-Way Management. Indianapolis, IN (25-28 October)
Holling, C.S. (ed) 1978. Adaptive Environmental Assessment and Management. John Wiley and Sons, New York.
Hwang, C.L. 1987. Group Decision Making under Multiple Criteria. Springer-Verlag, New York
Hwang, C.L. and Yoon, K. 1981. Multi Attribute Decision Making. Springer-Verlag, New York
Jain, Ravinder Kumar et al 1980. Environmental Impact Analysis: A New Dimension in Decision Making. Van Nostrand Reinhold, New York.
Jeffery, Michael I. 1988. Science and the tribunal: Dealing with scientific evidence in the adversarial arena. Alternatives. 15(2):24-30
Jewell, T.K. 1986. A Systems Approach to Civil Engineering Planning and Design. Harper and Row
Joint Board 1987a. Proposed transmission plan of Ontario Hydro for Southwestern Ontario. Reasons for Decision. Joint Board under the Consolidated Hearings Act, 1981, Toronto, Ontario.
Joint Board 1987b. Highway 416 Reasons for Decision. Joint Board under the Consolidated Hearings Act, 1981, Toronto, Ontario
Joint Board 1989. Regional Municipality of Halton application for approval to acquire land for the construction, operation and maintenance of a sanitary landfill. Reasons for Decision and Decision. Decision of the Joint Board under the Consolidated Hearings Act
Kozlowski, J. and Hughes, J.T. 1972. Threshold Analysis. The Architectural Press, London.
Krauskopf, T.M. and Bunde, D.C. 1972. Evaluation of environmental impact through a computer modelling process, in Ditton, R. and Goodale, T. (eds), pp. 107-25.
Leopold, L.B. et al 1971. A Procedure for Evaluating Environmental Impact. US Geological Survey Circular 645, US Geological Survey, Washington, D.C.
Lichfield, N. 1970. Evaluation methodology of urban and regional plans: A review. Regional Studies. 4:151-165.
Lichfield, N. et al 1975. Evaluation in the Planning Process. Pergamon Press, Oxford.
Maclaren, V.W. 1985. The Future of Environmental Impact Assessment in Canada. University of Toronto, Department of Geography, Toronto
MacLaren, V.W. and Whitney, J.B. 1985. Environmental Impact Assessment: The Canadian Experience. University of Toronto Institute for Environmental Studies, Toronto
Massam, B.H. 1980. Spatial Search: Applications to Planning Problems in the Public Sector. Pergamon Press, Oxford.
Massam, B.H. 1988. Multi Criteria Decision Making Techniques (MCDM) in Planning. Progress in Planning. 30:1-84
McAllister, D.M. 1980. Evaluation in Environmental Planning. MIT Press, Cambridge, MA
McHarg, I. 1969. Design with Nature. Doubleday, New York.
Michael, Donald 1976. On Learning to Plan — and Planning to Learn. Jossey-Bass Publishers, San Francisco.
Mishan, E.J. 1976. Cost Benefit Analysis. Praeger, New York.
Munn, R.E. (ed) 1975. Environmental Impact Assessment: Principles and Procedures. SCOPE Report No. 5. International Council of Scientific Unions Scientific Committee on Problems of the Environment, Toronto
Nichols, R. and Hyman, E. 1980. A Review and Analysis of Fifteen Methodologies for Environmental Assessment. Center for Urban and Regional Studies, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
Nichols, R. and Hyman, E. 1982. Evaluation of environmental assessment methods. Journal of the Water Resources Planning and Management Division, American Society of Civil Engineers. 108(WR1):87-105.
Nijkamp, P. and Delft, Ad van 1977. Multicriteria Analysis and Regional Decision Making. Martinus Nijhoff, London
O'Riordan, Timothy and Hey, R. (eds) 1976. Environmental Impact Assessment. Saxon House, Farnborough, Hampshire
Ontario Ministry of the Environment 1986. Mercury pollution in the Wabigoon-English river system: A socio-economic assessment of remedial measures. Corporate Policy and Planning Division, Toronto.
Ontario Ministry of the Environment. 1987. A Citizen's Guide to Environmental Assessment. Toronto. (14 pp.)
Ontario Ministry of the Environment. 1987. A Proponent's Guide to Environmental Assessment. Toronto. (22 pp.)
Ontario Ministry of the Environment. 1989. Interim Guidelines on Environmental Assessment Planning and Approvals. Environmental Assessment Branch (July)
Ontario Ministry of the Environment. 1987. Guidelines on Pre-Submission Consultation in the EA Process. Environmental Assessment Branch (November) (27 pp.)
Ontario Ministry of the Environment. 1987. The role of the review and the review participants in the EA process. M.O.E. Policy Manual, Section No. 03-01-01, effective 12 Nov 1987. Environmental Assessment Branch. (27 pp.)
Ortolano, L. 1984. Environmental Planning and Decision-making. John Wiley and Sons
Parks Canada 1973. Environmental Analysis: A Review of Selected Techniques. Canada Department of Indian and Northern Affairs, Ottawa
Plewes, M. and Whitney, J.B.R. (eds) 1977. Environmental Impact Assessment in Canada: Processes and Approaches. University of Toronto, Institute of Environmental Studies, Toronto
Roberts, T.M. and Roberts, R.D. 1984. Planning and Ecology. Chapman and Hall, London
Ross, J.H. 1974. Quantitative Aids to Environmental Impact Assessment. Environment Canada Lands Directorate, Occasional Paper No. 3, Ottawa
Rowe, M.D. and Pierce, B.L. 1982. Sensitivity of the weighting summation decision method to incorrect application. Socio-Economic Planning Sciences. 16(4):173-177
Saaty, Thomas L. 1987. Risk — its priority and probability: the analytic hierarchy process. Risk Analysis. 7(2):159-172
Solomon, B.D. and Haynes, K.E. 1984. A survey and critique of multiobjective power plant siting decision rules. Socio-Economic Planning Sciences. 18(1):71-79
Sorensen, J. 1971. A Framework for Identification and Control of Resource Degradation and Conflict in the Multiple Use of the Coastal Zone. University of California at Berkeley, Department of Landscape Architecture
Stokey, E. and Zeckhauser, R. 1978. A Primer for Policy Analysis. W.W. Norton & Company Inc., New York.
Stokoe, Peter K. 1988. Integrating Economics and EIA: Institutional Design and Analytical Tools. Background paper prepared for the Canadian Environmental Assessment Research Council (Ottawa)
Stover, L.V. 1972. Environmental Impact Assessment: a procedure. Sanders and Thomas, Pottstown.
Taylor, Jared and Taylor, William 1987. Searching for solutions: Decision support programs can give you answers; they can also present risks. PC Magazine. 6(15):311-352
Treasury Board Canada 1976. Benefit-Cost Analysis Guide. Supply and Services Canada
Treasury Board Canada 1982. Chapter 490: Socio-economic impact analysis. Administrative Policy Manual. Supply and Services Canada
Tversky, A. and Kahneman, D. 1981. The framing of decisions and the psychology of choice. Science. 211:453-458
Viohl, R.C. and Mason, K.G.M. 1974. Environmental Impact Assessment Methodologies: An Annotated Bibliography. Monticello, Illinois
Warner, M., and E. Preston. 1974. A Review of Environmental Impact Methodologies. Report EPA-600/5-74-002, United States Environmental Protection Agency.
Wathern, P. 1984. Ecological modelling in impact analysis, in Roberts, R.D. and Roberts, T.M. (eds) (1984), pp. 80-98
Wathern, P. 1984. Methods for assessing indirect impacts, in Clark et al (1984), pp. 213-31

Appendix A
Bibliography
Ahmad, Yusuf J. 1985. Guidelines to Environmental Impact Assessment in Developing Countries. United Nations Environment Programme/Hodder and Stoughton, London
A very basic guide to EIA attempting to present a simple and cost-effective format for EIA statements for developing countries. Chapter on cost-benefit analysis as a tool for environmental decision-making. Useful list of basic problems that often arise in carrying out an EIA in a developing country and suggested solutions to overcome these problems.

Atkins, Ros 1984. A comparative analysis of the utility of EIA methods, in Clark et al (eds) 1984, pp. 241-52.
Critical review of six different methods for performing EAs. Lists criteria for a theoretical comparison of different methods, describes the methods in summary form and evaluates their utility by use of a case study. Concludes that the McHarg overlay approach would not have been as useful as the ad hoc approaches used in identifying linkages; the EES and Leopold methods also failed to identify conflicts identified by the ad hoc approach. The Sorensen approach seemed likely to identify most conflicts. "All (methods) inadequately consider conflicts between environmental processes and human objectives and the limitations and costs which such processes can cause. Perhaps the limitations of reductionist science are partly responsible, in that systems are reduced to their component parts in order to understand and control them."

Ayres, Robert U. 1984. Improving the scientific basis of public and private decision-making. Technological Forecasting and Social Change. 26:195-199
Ayres suggests three critical analytical weaknesses that account for so much bad analysis: uncertainty, myopia, and externalities. Major improvements in public and private decision-making wait upon progress in these areas.

Bakus, G.J., Stillwell, W.G., Latter, S.M. and Wallerstein, M.C. 1982. Decision making: with applications for environmental management. Environmental Management. 6(6):493-504

Beanlands, G.E. and Duinker, P.N. 1983. An Ecological Framework for Environmental Impact Assessment in Canada. Institute for Resource and Environmental Studies, Dalhousie University. Published in cooperation with Federal Environmental Assessment Review Office
The study reviews the extent to which the science of ecology could contribute to the design and conduct of assessment studies and recommends ways in which this could realistically be achieved. As such, the study focuses on the impact prediction parts of environmental assessment.

Bendix, S. and Graham, H.R. 1978. Environmental Assessment: Approaching Maturity. Ann Arbor Science, Ann Arbor, Michigan
Papers presented at a seminar concentrating on a retrospective of EIA in the US, mostly using a case study approach.

Bisset, R. 1984. Methods for assessing direct impacts, in Clark et al (1984), pp. 195-212.
Reviews checklists (simple, scaling-weighting, and questionnaire) and matrices (Leopold), discussing advantages and disadvantages. Concludes these methods are lacking because they cannot easily take account of indirect effects, but suggests they may be of use in simply identifying direct impacts.

Bisset, R. 1987. Methods for Environmental Impact Assessment: a selective survey with case studies, in Biswas and Geping, 1987.
Account of the main activities involved in EIA, a selective study of EIA methods, and discussion of case studies in developing countries. Methods examined and critiqued include checklists, overlay mapping, networks, systems diagrams, and simulation modelling. Emphasis on networks and simulation modelling.
Biswas, A.K. and Geping, Q. (eds) 1987. Environmental Impact Assessment for Developing Countries. Tycooly International, London.

Black, Bert 1988. Evolving legal standards for the admissibility of scientific evidence. Science. 239:1508-1512 (25 March).
The article discusses the problems faced by judges and lawyers in ensuring the scientific validity of scientific evidence. As recent cases involving the health effects of chemicals and drugs make clear, however, irrational and inconsistent decisions result when courts do not hold expert witnesses to the standards and criteria of their own disciplines. A trend toward more thorough judicial review of scientific claims has developed, and it should be encouraged.

Black, Peter E. 1981. Environmental Impact Analysis. Praeger, New York

Blissett, M. (ed) 1975. Environmental Impact Analysis. Lyndon B. Johnson School of Public Affairs, University of Texas at Austin, Austin, Texas

Canter, Larry 1979. Handbook of Variables for Environmental Impact Assessment. Ann Arbor Science Publishers, Ann Arbor, Michigan

Canter, Larry W. 1977. Environmental Impact Assessment. McGraw-Hill, New York

Carley, M.J. 1984. Social Impact Assessment and Monitoring: A Guide to the Literature. Westview Press, Boulder

Clark, B.D. et al 1980. Environmental Impact Assessment: a bibliography with abstracts. Mansell, London.
Extensive annotated bibliography on methods of EA and EIA in various countries, including the US, Canada, Australia, Continental Europe, and the UK. Chapter titles are: Aids to Impact Assessment, Critiques and Reviews of EIA, EIA and other Aspects of Planning, EIA in Selected Countries, and Information Sources. Useful introduction to each chapter.

Clark, B.D. et al (eds) 1984. Perspectives on Environmental Impact Assessment. D. Reidel Publishing Company

Covello, Vincent T. 1987. Decision analysis and risk management decision making: Issues and methods. Risk Analysis. 7(2):131-139
This paper provides an overview of decision analysis and its use in risk management decision-making. The paper discusses the distinctive characteristics of decision analysis and compares these characteristics with those of its principal alternative, cost-benefit analysis. The paper also discusses each of the steps in a decision analysis and the strengths and limitations of the methods.

Dee, N. et al 1971. Design of an Environmental Evaluation System. Battelle Columbus Laboratories, Columbus, Ohio.
Primary reference for the Battelle EES scaling and weighting checklist method.

Elliot, M.L. 1981. Pulling the pieces together: Amalgamation in environmental impact assessment. Environmental Impact Assessment Review. 2(1):11-38.

Environment Canada 1974. An Environmental Assessment of Nanaimo Port Alternatives. Environment Canada, Ottawa.
Application of Ross's Component Interaction Matrix.

Environmental Assessment Board (EAB) 1987. Annual Report, Fiscal Year Ending March 31st 1987. Toronto.

Federal Environmental Assessment Review Office 1985. Environmental Assessment in Canada: Summary of Current Practice. Ottawa

Federal Environmental Assessment Review Office 1986. Initial Assessment Guide: Federal Environmental Assessment and Review Process. Ottawa

Fischer, D.W. and Davies, G.S. 1973. An approach to assessing environmental impacts. Journal of Environmental Management.
Primary reference for the Fischer and Davies Environmental Baseline Evaluation, a descriptive matrix method.

Fuggle, R.F. and Shopley, J.B. 1984. A Comprehensive Review of Current Environmental Impact Assessment Methods and Techniques. Journal of Environmental Management. 18:25-47
Reviews Ad Hoc, Checklist (simple, scaling, weight-scaling), Matrix ("presentational": descriptive, symbolized, characterized, numeric, combinative) and Mathematical (Peterson, Ross Interaction, Input-Output), Network (Sorensen and system diagrams), Overlay (McHarg), Modelling, and Adaptive methods.

Goodman, A. 1984. Principles of Water Resources Planning. Prentice-Hall

Haith, D. 1982. Environmental Systems Optimization. John Wiley and Sons

Hammond, K.R., Mumpower, J., Dennis, R.L., Fitch, S., Crumpacker, W. 1983. Fundamental obstacles to the use of scientific information in public policy making. Technological Forecasting and Social Change. 24:287-297

Hart, S., G. Enk, J. Jordan and P. Perreault (eds.) 1984. Improving Impact Assessment: Increasing the Relevance and Utilization of Technical and Scientific Information. Westview Press, Boulder, CO.

Henderson, J.E. 1982. Handbook of Environmental Quality Measurement and Assessment: Methods and Techniques. Instruction Report E-82-2. Environmental Laboratory, U.S. Army Corps of Engineers Waterways Experiment Station, Vicksburg, MS.

Hobbs, B.F. 1980. A comparison of weighting methods in power plant siting. Decision Sciences. 11:725-737
A good introduction to weighting methods and potential problems with their use, including examples related to power plant siting. The paper includes an example from western Maryland in which two weighting methods were used, leading to two different locations being selected.

Hobbs, B.F. 1985. Choosing how to choose: comparing amalgamation methods for environmental impact assessment. Environmental Impact Assessment Review. 5:301-319
This article gives an overview of amalgamation methods and proposes four criteria for choosing among them: the purpose to be served, ease of use, validity, and results compared to other methods. Because experiments have repeatedly shown that the method chosen can significantly affect what decision is made, EIA practitioners should place more emphasis on the last two criteria than they have in the past. Finally, recent results in psychology and management science are discussed for practitioners selecting evaluation methods.

Hobbs, B.F. 1986. What can we learn from experiments in multiobjective decision analysis? IEEE Transactions on Systems, Man, and Cybernetics. SMC-16(3):384-394 (May-June).
Many multiobjective analysis (MOA) methods differ greatly in what purpose they serve, their ease of use and validity, and the decisions they yield. This wide variety of available methods bewilders potential users, resulting in inappropriate matching of methods and problems and unnecessary user dissatisfaction. Experiments that apply different methods to the same problem can help dispel this confusion by clarifying how methods differ. Although such experiments have already been useful, better designed experiments that address neglected issues would teach us even more. Users of MOA methods are introduced to the knowledge that can be gained from MOA experiments. To illustrate the potentials and limitations of experiments, results from a comparison of MOA methods in power plant siting are summarized.

Hoekstra, T.W., Dyer, A.A., and LeMaster, D.C. (eds) 1987. FORPLAN: An evaluation of a forest planning tool. Proceedings of a Symposium. USDA Forest Service General Technical Report RM-140, 164 pp.
Hoglund, G.M. and A. Buck 1987. Southwestern Ontario transmission III: Decision making techniques in complex route and system selection studies. Proceedings, Fourth Symposium on Environmental Concerns in Rights-of-Way Management. Indianapolis, IN (25-28 October)
This paper describes the application of the Analytic Hierarchy Process and a simple weighting-summation model that were used by Ontario Hydro in the southwestern Ontario transmission line system planning.

Holling, C.S. (ed) 1978. Adaptive Environmental Assessment and Management. John Wiley and Sons, New York.
Primary reference for the adaptive environmental assessment method, a form of simulation modelling.

Hwang, C.L. 1987. Group Decision Making under Multiple Criteria. Springer-Verlag, New York

Hwang, C.L. and Yoon, K. 1981. Multi Attribute Decision Making. Springer-Verlag, New York

Jain, Ravinder Kumar et al 1980. Environmental Impact Analysis: A New Dimension in Decision Making. Van Nostrand Reinhold, New York.
Introductory text with a broad coverage of different aspects of EIA. Focus on US experiences, with a chapter on EIA methodologies. Large appendix on attribute descriptors useful for the design of an EIA. Reviews Ad Hoc, Checklist (EES, Institute of Ecology, Stover), Overlay (McHarg, and Krauskopf and Bunde), Matrix (Leopold) and Network (Sorensen) methods. These are discussed briefly. Also advocates a set of methodology review criteria for EIA under the main categories of: Impact Identification (comprehensiveness, specificity, timing and duration, data sources, and isolation of impacts), Impact Measurement (objectivity, magnitude, and explicit indicators), Impact Interpretation (significance, explicit criteria, uncertainty, risk, alternatives comparison, aggregation, and public involvement), Impact Communication (identification of affected parties, setting description, summary format, key issues, and NEPA compliance), Resource Requirements (data, manpower, time, costs, and technologies), Replicability (ambiguity, analyst bias), and Flexibility (scale flexibility, range, adaptability).

Jeffery, Michael I. 1988. Science and the tribunal: Dealing with scientific evidence in the adversarial arena. Alternatives. 15(2):24-30

Jewell, T.K. 1986. A Systems Approach to Civil Engineering Planning and Design. Harper and Row

Joint Board 1987a. Proposed transmission plan of Ontario Hydro for Southwestern Ontario. Reasons for Decision. Joint Board under the Consolidated Hearings Act, 1981, Toronto, Ontario.

Joint Board 1987b. Highway 416 Reasons for Decision. Joint Board under the Consolidated Hearings Act, 1981, Toronto, Ontario

Joint Board 1989. Regional Municipality of Halton application for approval to acquire land for the construction, operation and maintenance of a sanitary landfill. Reasons for Decision and Decision. Decision of the Joint Board under the Consolidated Hearings Act.

Kozlowski, J. and Hughes, J.T. 1972. Threshold Analysis. The Architectural Press, London.
Primary reference for threshold analysis, a variant on cost-benefit analysis suitable mainly for urban planning problems.

Krauskopf, T.M. and Bunde, D.C. 1972. Evaluation of environmental impact through a computer modelling process, in Ditton, R. and Goodale, T. (eds), pp. 107-25.
Primary reference for the computerized overlay method.

Leopold, L.B. et al 1971. A Procedure for Evaluating Environmental Impact. US Geological Survey Circular 645, US Geological Survey, Washington, D.C.
Primary reference for the Leopold matrix method.

Lichfield, N. 1970. Evaluation methodology of urban and regional plans: A review. Regional Studies. 4:151-165.
Makes a comparative review of some twenty plan evaluation methodologies which have been used in practice or advocated in the literature. It does so by reference to ten criteria to which comprehensive evaluation methodologies should conform if they are to suit the purpose, concerning itself with the potential of the methodology rather than the actual example of its use. Advocates the planning balance sheet.

Lichfield, N. et al 1975. Evaluation in the Planning Process. Pergamon Press, Oxford.
Discussion of strengths and weaknesses of the Planning Balance Sheet method vis-a-vis Hill's GAM.

Maclaren, V.W. 1985. The Future of Environmental Impact Assessment in Canada. University of Toronto, Department of Geography, Toronto

MacLaren, V.W. and Whitney, J.B. 1985. Environmental Impact Assessment: The Canadian Experience. University of Toronto Institute for Environmental Studies, Toronto

Massam, B.H. 1980. Spatial Search: Applications to Planning Problems in the Public Sector. Pergamon Press, Oxford.

Massam, B.H. 1988. Multi Criteria Decision Making Techniques (MCDM) in Planning. Progress in Planning. 30:1-84
This monograph provides an introduction, a review and a critique of multi-criteria techniques as used in the resolution of planning problems. To give a better understanding of the role of the techniques in planning, a set of empirical case studies is included. The author argues that recognition of the complexity of the milieu within which planning occurs means that the search for an ideal formal MCDM technique to solve planning problems is a chimera, but their role in helping to enlighten and organize choice among planning options is important. Also included are criteria for judging an MCDM technique.

McAllister, D.M. 1980. Evaluation in Environmental Planning. MIT Press, Cambridge, MA

McHarg, I. 1969. Design with Nature. Doubleday, New York.
Primary reference for the McHarg overlay method.

Michael, Donald 1976. On Learning to Plan — and Planning to Learn. Jossey-Bass Publishers, San Francisco.

Mishan, E.J. 1976. Cost Benefit Analysis. Praeger, New York.
Thorough and concise reference volume on cost-benefit analysis.

Munn, R.E. (ed) 1975. Environmental Impact Assessment: Principles and Procedures. SCOPE Report No. 5. International Council of Scientific Unions Scientific Committee on Problems of the Environment, Toronto
Reviews the Leopold matrix, McHarg overlay, Krauskopf and Bunde (1972) computerized overlay, and Battelle EES (Dee et al, 1972) from the perspective of conducting EIAs according to a clearly defined set of criteria. These criteria include: capability (identification, prediction, interpretation, communication, and inspection procedures), action complexity capability, risk assessment capability, capability for flagging extremes, replicability of results, level of detail (screening of alternatives, detailed assessment, and documentation stages), and resource requirements (money, time, skilled manpower, computational, and knowledge). Also includes a section on the problem of uncertainty.

Nichols, R. and Hyman, E. 1980. A Review and Analysis of Fifteen Methodologies for Environmental Assessment. Center for Urban and Regional Studies, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina

Nichols, R. and Hyman, E. 1982. Evaluation of environmental assessment methods. Journal of the Water Resources Planning and Management Division, American Society of Civil Engineers. 108(WR1):87-105.
Twelve representative methods for environmental assessment are described, evaluated and compared in terms of the following seven evaluation criteria: treatment of the probabilistic nature of environmental quality, incorporation of indirect and feedback effects, dynamic characteristics, multiple-objectives approach to social welfare, clear separation of facts and values, facilitation of participation, and efficiency in resource and time requirements.

Nijkamp, P. and Delft, Ad van 1977. Multicriteria Analysis and Regional Decision Making. Martinus Nijhoff, London

O'Riordan, Timothy and Hey, R. (eds) 1976. Environmental Impact Assessment. Saxon House, Farnborough, Hampshire

Ontario Ministry of the Environment 1986. Mercury pollution in the Wabigoon-English river system: A socio-economic assessment of remedial measures. Corporate Policy and Planning Division, Toronto.

Ontario Ministry of the Environment. 1987. A Citizen's Guide to Environmental Assessment. Toronto. (14 pp.)

Ontario Ministry of the Environment. 1987. A Proponent's Guide to Environmental Assessment. Toronto. (22 pp.)

Ontario Ministry of the Environment. 1989. Interim Guidelines on Environmental Assessment Planning and Approvals. Environmental Assessment Branch (July)

Ontario Ministry of the Environment. 1987. Guidelines on Pre-Submission Consultation in the EA Process. Environmental Assessment Branch (November) (27 pp.)

Ontario Ministry of the Environment. 1987. The role of the review and the review participants in the EA process. M.O.E. Policy Manual, Section No. 03-01-01, effective 12 Nov 1987. Environmental Assessment Branch. (27 pp.)

Ortolano, L. 1984. Environmental Planning and Decision-making. John Wiley and Sons

Parks Canada 1973. Environmental Analysis: A Review of Selected Techniques. Canada Department of Indian and Northern Affairs, Ottawa
Reviews the McHarg overlay, Leopold matrix and other methods from the perspective of land use planning rather than from the perspective of environmental assessment.

Plewes, M. and Whitney, J.B.R. (eds) 1977. Environmental Impact Assessment in Canada: Processes and Approaches. University of Toronto, Institute of Environmental Studies, Toronto

Roberts, T.M. and Roberts, R.D. 1984. Planning and Ecology. Chapman and Hall, London
Collection of articles from a workshop on the subject with an emphasis mainly on UK planning experiences. One article on evaluation methods ("Ecological modelling in impact analysis," P. Wathern, pp. 80-98) and two on Canadian procedures ("Ontario Hydro and the Canadian environmental impact assessment procedures" and "Ecological information and methodologies required for environmental assessment of Canadian power generation installations," both by W.R. Effer).

Ross, J.H. 1974. Quantitative Aids to Environmental Impact Assessment. Environment Canada Lands Directorate, Occasional Paper No. 3, Ottawa.
Primary reference for Ross's environmental interaction matrix method, used in the EA An Environmental Assessment of Nanaimo Port Alternatives (Environment Canada, 1974).

Rowe, M.D. and Pierce, B.L. 1982. Sensitivity of the weighting summation decision method to incorrect application. Socio-Economic Planning Sciences. 16(4):173-177

Saaty, Thomas L. 1987. Risk — its priority and probability: the analytic hierarchy process. Risk Analysis. 7(2):159-172
This paper provides illustrations of how one can deal with risk and uncertainty using the analytic hierarchy process. The paper also includes a discussion of how to deal with risk strategies involving interdependence.
Particular emphasis is placed on the siting of nuclear power plants.

Solomon, B.D. and Haynes, K.E. 1984. A survey and critique of multiobjective power plant siting decision rules. Socio-Economic Planning Sciences. 18(1):71-79

Sorensen, J. 1971. A Framework for Identification and Control of Resource Degradation and Conflict in the Multiple Use of the Coastal Zone. University of California at Berkeley, Department of Landscape Architecture
Primary reference for the Sorensen stepped matrix, a network method.

Stokey, E. and Zeckhauser, R. 1978. A Primer for Policy Analysis. W.W. Norton & Company Inc., New York.
An excellent introduction to the principles of public-policy analysis and to the major decision-making techniques and tools. Basic constructs from economics, operations research, benefit-cost analysis and decision theory are set forth in clear prose, without resort to difficult mathematical formulations.

Stokoe, Peter K. 1988. Integrating Economics and EIA: Institutional Design and Analytical Tools. Background paper prepared for the Canadian Environmental Assessment Research Council (Ottawa)
Reviews the relationship of economics and impact assessment, including discussion of a number of analytical tools including benefits and damage estimation, economic-ecological models, and cost-benefit analysis.

Stover, L.V. 1972. Environmental Impact Assessment: a procedure. Sanders and Thomas, Pottstown.
Primary reference for the environmental impact indices technique, a scaling index method.

Taylor, Jared and Taylor, William 1987. Searching for solutions: Decision support programs can give you answers; they can also present risks. PC Magazine. 6(15):311-352
Reviews commercial computer packages which enable the analyst to use a number of evaluation methods, including AHP, simple additive models, linear programming, Monte Carlo simulation, and satisficing (sometimes in combination).

Treasury Board Canada 1976. Benefit-Cost Analysis Guide. Supply and Services Canada
Describes and discusses the various steps to follow in performing a benefit-cost analysis, as well as the conceptual and technical problems related to the use of this methodology as an approach to the problem of evaluating and selecting individual government projects.

Treasury Board Canada 1982. Chapter 490: Socio-economic impact analysis. Administrative Policy Manual. Supply and Services Canada
This document describes the SEIA policy, and includes several relevant appendices. Appendix E presents a discussion of the methodologies usually employed to evaluate allocative effects and the information normally provided to describe non-allocative effects when performing a SEIA. Appendix F describes assumptions to be used, and their sources.

Tversky, A. and Kahneman, D. 1981. The framing of decisions and the psychology of choice. Science. 211:453-458

Viohl, R.C. and Mason, K.G.M. 1974. Environmental Impact Assessment Methodologies: An Annotated Bibliography. Monticello, Illinois

Warner, M., and E. Preston. 1974. A Review of Environmental Impact Methodologies. Report EPA-600/5-74-002, United States Environmental Protection Agency.

Wathern, P. 1984. Ecological modelling in impact analysis, in Roberts, R.D. and Roberts, T.M. (eds) (1984), pp. 80-98
Reviews Matrix (Leopold and Interaction), Network (Sorensen), Index (Odum Totality, Stover Environmental Impact Indices, and Battelle EES), and Modelling (AEA) methods. Brief discussion of advantages and disadvantages of alternative methods. Advocates simulation modelling.
Wathern, P. 1984. Methods for assessing indirect impacts, in Clark et al (1984), pp. 213-31
Review of methods for identifying indirect impacts and assessing spatial impacts. Discusses Ross's interaction matrix, the Sorensen stepped matrix and IMPACT network, and the McHarg, Krauskopf and Bunde, and Dudnik and Schachiel overlay methods. Concludes that network and system diagram methods assume that a statement of indirect linkages indicates the likely effects of development upon a system, which ignores the adaptivity of complex ecosystems and consequently changing equilibria.

Appendix B
Evaluation methods glossary

Accountability
Under obligation to explain, report on or justify something such as an activity, plan or program.

Additivity
Property that the total of inputs or outputs is equal to the sum of their individual components.

Agency
Organization reviewing EAs as part of the formal government review procedure. Includes some crown corporations (such as Ontario Hydro and CN Rail), provincial ministries, federal departments, and other specified organizations.

Aggregation
The assemblage of distinct or varied things into one whole.

Alternative
Well-defined and distinct course of action which fulfils a given set of requirements. The EA Act distinguishes between alternatives to the undertaking and alternative methods of carrying out the undertaking.

Interest groups' valuation of alternatives
The estimated value or worth attributed to alternatives by interest groups based on a weighing of the impacts and their advantages and disadvantages.

Assessment
Analysis and appraisal of a given course of action against alternatives according to a defined set of criteria.

Commensurable units
Units which can be measured by a similar standard.

Compensation required
Minimum amount affected individuals would be willing to accept to forego existing benefits or to shoulder the costs of a proposed project. See willingness to pay.

Comprehensive
Including much or all. In the context of EA, comprehensive is used in reference to the definition of environment in the EA Act, the full range of alternatives, and the requirements set out in section 5.3 of the Act.

Constraint
A restriction imposed on the available options. May be an externally imposed restriction, such as a regulation, used to identify feasible options, or may be a defined value of a criterion used in constraint mapping to restrict spatial options.

Consumers' surplus
In economic theory, this refers to the gap between the total utility of a good to consumers and its total market value.

Criteria
Principles or standards used to compare and judge alternatives.

Data
Facts, statistics, information, or a group or body of these, either measured, or derived by calculation or experimentation.

Decision maker
An individual or group or body of individuals charged with the responsibility to come to a conclusion, form a judgement, or resolve a problem.

Dimensionless numbers
Numbers which do not have measurement units, such as length, mass, or time, associated with them.

Distributive effect
Impact an alternative has on the relative distribution of income, wealth, or welfare among those affected.

Dominating alternative
One alternative dominates other alternatives if that alternative is equal or superior to the others in all respects. If it is less preferred to another in any one respect, it is not dominant.

Double counting
Accounting for a particular factor more than once through the evaluation method.
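The "Dominating alternative" test above is mechanical enough to express in code. A minimal sketch in Python (not part of the original glossary), using hypothetical alternatives and criterion scores with higher taken as better on every criterion:

    # Hypothetical criterion scores; higher is taken as better throughout.
    scores = {
        "A": {"water": 3, "cost": 2},
        "B": {"water": 2, "cost": 2},
        "C": {"water": 1, "cost": 3},
    }

    def dominates(x, y):
        # x dominates y if x is equal or superior to y in all respects
        return all(scores[x][c] >= scores[y][c] for c in scores[x])

    print(dominates("A", "B"))  # True: B can be screened out
    print(dominates("A", "C"))  # False: A is less preferred on cost

Screening out dominated alternatives in this way is one of the few steps an evaluation can take before any weights or preference values are introduced.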
Environment
In the EA Act, defined broadly to include:
i) air, land or water;
ii) plant or animal life, including man;
iii) social, economic, and cultural conditions influencing the life of man or a community;
iv) any building, structure, machine or other device or thing made by man;
v) any solid, liquid, gas, odour, heat, sound, vibration, or radiation resulting directly or indirectly from the activities of man;
vi) any part or combination of the foregoing and the interrelationships between any two or more of them, in or of Ontario.

Evaluation
The outcome of a process which appraises the advantages and disadvantages of alternatives. See evaluation method and evaluation process.

Evaluation method
A formal procedure for establishing an order of preference among alternatives.

Evaluation process
The process involving: the identification of criteria; rating of predicted impacts; assignment of weights to criteria; and the aggregation of weights, rates and criteria to produce an ordering of alternatives. Evaluation methods are concerned with the aggregation stage of this process.

Exact values
Values derived as much as possible from objective measurements of the impacts of alternative plans. Exact values are derived from measurements made with an "instrument". Also called impact values or facts.

Externalities
Effects of a project on third parties which are not adequately accounted for through the marketplace. Externalities exist because markets for certain goods, such as pollution, are insufficient or nonexistent. External costs are an evaluation of these effects.

Impact
Effect of a project.

Impact prediction
To estimate beforehand, within a range of certainty, what is expected to happen when an action or alternative is carried out.

Impact value
Another term for exact values.

Importance
Relative significance attributed to a particular criterion or alternative by individuals based on their preferences. Importance is distinct from magnitude because it is affected by both the subjective value judgements of individuals (rather than solely by objective instruments) and by critical threshold levels in ecosystems. Weights are commonly used to attribute importance to criteria, although the unit of measurement itself may reflect measures of importance; for instance, money values reflect preferences of individuals.

Interest group
Group of individuals expected to be affected by a proposed project by virtue of a common interest or concern. Interest groups are defined by the assessor and so include not only those who perceive themselves to be part of a defined group. Interest groups are often defined by the identified impacts of a project. See stakeholders.

Internal rate of return
The discount rate at which the total benefits over the lifetime of a project are equal to the project's total costs.

Interval scale
Arrangement of data along a common rule with differences expressed in relative units. Appropriate for arithmetic manipulation.

Intransitivity of preferences
Preferences which violate the requirement that preference carry through a sequence of pairwise comparisons. For example, preferences for A, B, and C are intransitive if A is preferred over B, B is preferred over C, but A is not preferred over C.

Iterative
Repetition of steps of a process a number of times in order to realize a satisfactory or convergent outcome.
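The "Intransitivity of preferences" entry above can be checked mechanically. A minimal Python sketch (not part of the original glossary), using the hypothetical A, B, C preferences from the example:

    from itertools import permutations

    # Hypothetical pairwise preferences: (x, y) -> x is preferred over y
    prefers = {("A", "B"): True, ("B", "C"): True, ("A", "C"): False}

    def intransitive(items=("A", "B", "C")):
        # look for a chain x > y and y > z where x > z fails to hold
        for x, y, z in permutations(items, 3):
            if prefers.get((x, y)) and prefers.get((y, z)) and not prefers.get((x, z)):
                return True
        return False

    print(intransitive())  # True: A > B and B > C, but A is not preferred over C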
Linear objective function
An objective function that can be defined using a linear equation of the form y = a + bx.

Magnitude
Objective measure of the size of an impact an alternative is predicted to have for a given criterion. Magnitudes of impacts are identified according to a defined scale.

Matrix
A rectangular array of values.

Mitigation
Taking actions which remove or alleviate to some degree the negative impacts associated with the implementation of alternatives.

Net present value (NPV)
The value of discounted total benefits over the lifetime of a project minus the value of its discounted total costs. The higher the discount rate, the less impact future benefits and costs will have in the analysis.

Nominal scale
Grouping into distinct, discrete categories identified by name. Not appropriate for arithmetic operations.

Normalize
Arithmetical conversion of data to a standard, for example, by making the largest value equal to 1.0 and the smallest value equal to 0. This process normally leads to dimensionless numbers.

Objective function
A mathematical statement of a decision rule; an explicit definition of what the decision is to accomplish.

Objectivity
The act of exhibiting facts uncoloured by feelings, opinions, or prejudice.

Opinion
Judgement or belief based on grounds short of proof. (Often used pejoratively.) Two types of opinions exist: opinions offered to avoid getting proof (e.g. to save the effort of collecting data), and opinions which express personal assessments of importance or value, which cannot be objectively measured. Where opinions are of the latter type, the pejorative connotation is inappropriate; however, care must be taken to identify whose opinion is being expressed, and to assess whether it is being expressed truthfully.

Optimization
The process of finding the best compromise between opposing tendencies. Optimization techniques usually refer to techniques for finding the "optimum" point along a continuous range of alternatives.

Ordinal scale
Sequencing of items from first to last according to a common feature, but making no distinction between relative differences among the items. Not appropriate for arithmetic manipulation.

Outcome
That which results from something; a consequence of action or actions taken.

Pareto efficiency
Situation in which a redistribution of resources would not make anyone better off without making at least one individual worse off. Assumption commonly used in economic exercises to analyze optima and equilibria.

Partisanship
The act of exhibiting bias or preference.

Phasing
Implementation of ideas or actions in stages.

Planning paralysis
The inability or unwillingness to take appropriate actions in a timely manner, particularly as a result of concern about the uncertain nature of data available to assist the decision.

Preference value
Values representing different interest groups' valuations of different criteria. Also referred to as opinions or values.

Preferential independence
Property whereby interest groups' preference value for any pair of things (e.g. criteria or alternatives) is independent of their preferences for any other thing.

Present value
The value of total discounted benefits or costs. See net present value.

Process
A systematic series of actions directed toward some end.

Proportionality
Property where two variables are strictly proportional; that is, the magnitude of one of the variables can be derived by multiplying the magnitude of the other variable by a constant real number, i.e. for all values of x, y = ax, where a is a real number.
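The "Normalize" entry above describes what is often called min-max scaling. A minimal Python sketch (not part of the original glossary), with hypothetical raw criterion values; it assumes the values are not all equal:

    def normalize(values):
        # map the smallest raw value to 0.0 and the largest to 1.0
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]

    print(normalize([40, 55, 100]))  # [0.0, 0.25, 1.0]

As the entry notes, the results are dimensionless numbers; section 5.3.6 of this report discusses the care their use requires.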
Public
The community in general; the people constituting a community or having something in common.

Ranking
Use of an ordinal scale to compare alternatives.

Rate
Assign an exact value to a criterion for an alternative. Also used as a noun to describe this exact value.

Ratio scale
Arrangement of data along an absolute scale, where the ratio between two data can be identified and is of significance.

Revealed preference
Inferred preferences based on historical actions.

Risk
Probability that a given outcome will or will not materialize. Distinct from uncertainty in that the alternative outcomes are known or defined and that the probability of each is measurable.

Scale
Mechanism for categorizing relative magnitudes. Common scales used are nominal, ordinal, interval, and ratio. Also used to refer to the level of detail of spatial analysis, by reference to maps drawn at certain proportions to the landscape.

Scope
Refers to the extent to which an evaluation is comprehensive.

Score
An overall measure of the preference of an alternative against one or several criteria, encompassing both exact and preference values.

Screening
Process of eliminating from further consideration alternatives which do not meet minimum conditions or categorical requirements.

Sensitivity analyses
Sensitivity analyses test the implications for the final results of varying the impact or preference values, usually within the expected range of uncertainty.

Social costs
The total costs on society of a particular good or project, including both the private costs and the external costs.

Stakeholders
Individuals expected to be affected by a proposed project, often classified according to common interests. See also interest group.

Standardization
Conversion of scores to a common scale.

Stochastic
Governed by the laws of probability.

Systemic
Dealing with a system as a whole; not confined to a particular part.

Traceability
Characteristic of an evaluation process which enables its development and implementation to be followed with ease.

Trade-offs
Things of value given up in order to gain different things of value.

Uncertainty
Lacking knowledge about possible outcomes or their associated probabilities. Distinct from risk in that risk is the measurable probability of known outcomes.

Utility independence
Property that the utility associated with a particular criterion is not related to the value for any other criterion.

Utility value
A measure of the nature of the usefulness of something.

Value
Relative worth, merit or usefulness.

Value judgement
An opinion, estimate, or conclusion based on assessing the relative worth, merit or usefulness of something.

Weights
Importance attributed to a criterion relative to other criteria.

Willingness to pay (WTP)
Maximum amount affected individuals would be willing to pay for particular goods or benefits. See compensation required.

Appendix C
Applicability, advantages and disadvantages of evaluation methods

1 Ad Hoc Methods

Type of problem
The ad hoc method is applicable where there is little to be lost by choosing the wrong alternative.

Advantages
• the ad hoc method requires no special training to be used

Disadvantages
• results of the ad hoc evaluation are typically not traceable, nor replicable
• for very complex decisions, concerned decision-makers may be unable to determine the preferred alternative using the ad hoc method, and this may lead to planning paralysis
2 Unordered lists of criteria

Type of problem
Use of unordered lists of criteria may be applicable early in the planning process, as part of the criteria identification process, and to eliminate alternatives that are clearly inferior.

Advantages
• identifies the criteria to be considered in the evaluation
• may assist in eliminating dominated alternatives
• simple to use and to understand

Disadvantages
• provides no firm basis for comparing alternatives
• usually will require the use of additional methods after it is applied

3 Satisficing

Type of problem
Satisficing may be applied where there is a discrete set of alternatives to be evaluated. Threshold levels are specified for a set of criteria, and alternatives on the wrong side of a threshold are rejected. Satisficing can use geographic data (see constraint mapping), but does not incorporate explicit means for dealing with impacts occurring over different time periods. The method is capable of dealing with the full scope of the environment; however, thresholds for some environmental features may be difficult to identify and justify.

Advantages
• takes little time to implement
• particularly useful for an initial screening of alternatives
• easy for decision-makers and the public to understand

Disadvantages
• cannot accommodate a trade-off of impacts: alternatives may be deleted on the basis of the operative criteria, even though the alternatives may have offsetting benefits
• does not lead to a unique solution

4 Constraint mapping

Type of problem
Constraint mapping is a cartographic implementation of satisficing used for narrowing from large areas or regions to sites or corridors. Discrete geographic alternatives (i.e. sites) do not need to be identified. Rather, broad areas are identified and eliminated where impacts will exceed specified levels for criteria.

Advantages
• can be used for screening and to vastly restrict the range of inquiry
• easy to apply and easy for decision-makers and the public to understand
• provides an explicit and traceable explanation for areas eliminated

Disadvantages
• it may be difficult to apply constraint mapping where impacts are interdependent or complex, or where impacts occur at a distance from their source
• all data must be capable of being presented on maps
• there may be a temptation to map features, rather than areas where impacts will exceed a specified threshold
• thresholds for some or many criteria may be difficult to identify or justify
• within criteria, sensitivity analyses may be difficult or impossible
• may not lead to a unique solution

5 Lexicographic ordering

Type of problem
Lexicographic ordering is an extension of satisficing which enables sequential rejection of alternatives until only the desired range of alternatives remains (for example, until only one alternative remains). Lexicographic ordering can be used where there is a relatively large number of alternatives, and may reduce this number to one that is more manageable. (A brief sketch of the ordering rule follows this section.)

Advantages
• lexicographic ordering may quickly and easily eliminate some or many alternatives
• the method is simple to use and understand
• the full range of the environment can be captured by the method
• results are replicable, and simple to understand and present

Disadvantages
• the method requires that criteria be ordered from most to least important; this may be difficult or impossible
• if indiscriminately used, alternatives with offsetting benefits may be rejected
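A minimal sketch of lexicographic ordering in Python (not part of the original appendix); the criteria, route names, and scores are all hypothetical, with higher scores taken as better:

    # Criteria ranked from most to least important
    criteria_by_importance = ["safety", "cost", "noise"]

    alternatives = {
        "Route 1": {"safety": 3, "cost": 2, "noise": 1},
        "Route 2": {"safety": 3, "cost": 3, "noise": 0},
        "Route 3": {"safety": 2, "cost": 5, "noise": 5},
    }

    def key(alt):
        # compare on the most important criterion first; later criteria
        # are consulted only to break ties
        return tuple(alternatives[alt][c] for c in criteria_by_importance)

    print(sorted(alternatives, key=key, reverse=True))
    # ['Route 2', 'Route 1', 'Route 3']

Note that Route 3 is rejected on safety alone, even though its cost and noise scores are the best of the three; this is the offsetting-benefits disadvantage noted above.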
6 Simple additive weighting (SAW)

Type of problem
Simple additive weighting is applicable where there is a discrete set of alternatives to be evaluated. The method does not lend itself directly to the analysis of geographic data, and it does not incorporate explicit means for dealing with impacts occurring over different time periods. The method requires the user to identify criteria, and these may capture the full scope of the environment. The method can be applied at various scales of inquiry. (A brief sketch of the SAW calculation follows section 9 below.)

Advantages
• the method is mathematically simple and is inexpensive to implement
• the method provides a consistent framework and produces results which can be replicated
• key decision points and criteria can be identified, and the sensitivity of the outcome to variations in these can be fairly readily assessed

Disadvantages
• the method can be easily abused, giving an impression of rigour and objectivity which does not exist
• second order impacts are not easily assessed with the method

7 Overlay mapping and geographic information systems

Type of problem
Overlay mapping and geographic information systems (GIS) are to simple additive weighting as constraint mapping is to satisficing. These methods are appropriate for choosing sites or corridors.

Advantages
• the methods allow trade-offs among criteria (cf. constraint mapping)
• methods are replicable, traceable, and understandable
• the full scope of the environment can be captured

Disadvantages
• for overlay mapping there are practical constraints on how many overlays (criteria) can be used
• subtle variations, even if important, are difficult to capture on overlays
• it may be difficult to avoid mixing exact values and preference values (especially for overlay mapping)

8 SMART (Simple Multi-Attribute Rating Technique)

Type of problem
SMART is a specific implementation of SAW, and is thus applicable to the same types of problems. In SMART, the means for soliciting the preference values of individuals or groups are addressed. Decision-makers are asked to rank criteria and then, beginning with the lowest ranked criterion, ratio values are assigned to preferences. Once assigned, these values are normalized and the standard SAW method is used.

Advantages
• the method of assigning preference values, which is the essence of SMART, is relatively simple for decision-makers to understand and use

Disadvantages
• the number of criteria that can be addressed by the method is best limited to a relatively small number (e.g. eight to fifteen)

9 PATTERN (Planning Assistance Through Technical Evaluation of Relevance Numbers)

Type of problem
The Planning Assistance Through Technical Evaluation of Relevance Numbers (PATTERN) method is a variation on the SAW method applicable where there are multiple "levels" of concern, such as different scales of impact, different interest groups, different environmental components, and a set of alternatives. These are structured in a "relevance tree" consisting of a series of levels, on each of which there are a number of items. All the items on any one level are relevant to each of the items on the level above, with the top level consisting of the ultimate objective. Numerical estimates of significance must be given for each item in a level to the items in the next lowest level. This quickly results in a large number of coefficients, and consequently the method is only applicable where there are a limited number of alternatives. The method has most often been applied in program design and evaluation, rather than in evaluation of project alternatives.

Advantages
• helps to clarify the issues of concern

Disadvantages
• it may be difficult to keep preferences and facts distinct
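A minimal sketch of the SAW calculation that underlies sections 6 to 9 above, in Python (not part of the original appendix). The weights and ratings are hypothetical; the ratings are assumed already normalized to a common 0-1 scale, and the weights to sum to 1:

    weights = {"air": 0.4, "water": 0.4, "cost": 0.2}
    ratings = {
        "Alternative A": {"air": 0.8, "water": 0.5, "cost": 0.9},
        "Alternative B": {"air": 0.6, "water": 0.9, "cost": 0.4},
    }

    def saw_score(alt):
        # the SAW decision rule: the weighted sum of the ratings
        return sum(weights[c] * ratings[alt][c] for c in weights)

    for alt in sorted(ratings, key=saw_score, reverse=True):
        print(alt, round(saw_score(alt), 2))
    # Alternative A 0.7
    # Alternative B 0.68

The arithmetic is trivial, which is both the method's strength and, as noted above, the source of its potential for abuse: the rigour of the result depends entirely on the quality of the ratings and weights fed in.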
7 Overlay mapping and geographic information systems

Type of problem
Overlay mapping and geographic information systems (GIS) are to simple additive weighting as constraint mapping is to satisficing. These methods are appropriate for choosing sites or corridors.

Advantages
• the methods allow trade-offs among criteria (cf. constraint mapping)
• methods are replicable, traceable, and understandable
• the full scope of the environment can be captured

Disadvantages
• for overlay mapping there are practical constraints on how many overlays (criteria) can be used
• subtle variations, even if important, are difficult to capture on overlays
• it may be difficult to avoid mixing exact values and preference values (especially for overlay mapping)

8 SMART (Simple Multi-Attribute Rating Technique)

Type of problem
SMART is a specific implementation of SAW, and is thus applicable to the same types of problems. In SMART, the means for soliciting the preference values of individuals or groups are addressed. Decision-makers are asked to rank criteria and then, beginning with the lowest-ranked criterion, ratio values are assigned to preferences. Once assigned, these values are normalized and the standard SAW method is used.

Advantages
• the method of assigning preference values, which is the essence of SMART, is relatively simple for decision-makers to understand and use

Disadvantages
• the number of criteria that can be addressed by the method is best limited to a relatively small number (e.g. eight to fifteen)

9 PATTERN (Planning Assistance Through Technical Evaluation of Relevance Numbers)

Type of problem
The Planning Assistance Through Technical Evaluation of Relevance Numbers (PATTERN) method is a variation on the SAW method applicable where there are multiple "levels" of concern, such as different scales of impact, different interest groups, different environmental components and a set of alternatives. These are structured in a "relevance tree" consisting of a series of levels, on each of which there are a number of items. All the items on any one level are relevant to each of the items on the level above, with the top level consisting of the ultimate objective. Numerical estimates of significance must be given for each item in a level to the items in the next lowest level. This quickly results in a large number of coefficients, and consequently the method is only applicable where there are a limited number of alternatives. The method has most often been applied in program design and evaluation, rather than in the evaluation of project alternatives.

Advantages
• helps to clarify the issues of concern

Disadvantages
• it may be difficult to keep preferences and facts distinct

10 PROLIVAN

Type of problem
Probabilistic Linear Vector Analysis (PROLIVAN) is a modification of simple additive weighting which incorporates two changes:
• differential weighting of long-term and short-term impacts
• inclusion of error estimates for the scores, so that confidence limits are defined in the final presentation of the scores for each alternative.
Scores of individual alternatives, with confidence limits, can be presented graphically, as indicated in Figure 1.

Advantages
• extends the simple additive model to enable an assessment of uncertainty
• reflects the relative importance of long- and short-term impacts

Disadvantages
• decision-makers may find it difficult to use the additional information on confidence limits

[Figure 1: Typical example of PROLIVAN results. Normalized utility scores (0 to 1) for plans A to D, plotted as mean scores with 95% confidence limits.]

11 Cost-benefit analysis (CBA)

Type of problem
Cost-benefit analysis is applicable for comparing a limited number of alternatives. In CBA, all impacts are expressed in monetary terms and added together to determine total benefits and total costs. The preferred alternative is the one with either the highest ratio of benefits to costs, or the largest difference between the two.

Advantages
• high degree of reproducibility
• accommodates impacts over time
• provides a consistent framework with theoretical underpinnings
• dollars-and-cents terminology is well recognized by the public

Disadvantages
• many externalities are difficult to quantify; some interest groups submit that their concerns cannot be captured in monetary terms
• cost-benefit analysis does not deal with distributional questions (who benefits and who incurs the cost); these must be dealt with separately
• interactions between impacts are not well captured; all costs and benefits are assumed to be additive
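The discounting arithmetic that lets CBA accommodate impacts over time can be sketched briefly; the cash flows and the 5% discount rate below are hypothetical.

    # Net present value arithmetic underlying cost-benefit analysis.
    # All cash flows and the discount rate are hypothetical.
    def npv(flows, rate):
        """Present value of a list of annual flows, year 0 first."""
        return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

    rate = 0.05                                  # assumed discount rate
    benefits = [0, 400_000, 400_000, 400_000]    # dollars in years 0-3
    costs    = [900_000, 50_000, 50_000, 50_000]

    pv_b, pv_c = npv(benefits, rate), npv(costs, rate)
    print("benefit-cost ratio:", round(pv_b / pv_c, 2))   # about 1.05
    print("net benefit:", round(pv_b - pv_c))             # about $53,000

Note that the two decision rules mentioned above (highest ratio, largest difference) can rank a set of alternatives differently, so the rule chosen should be stated and justified.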
12 Cost-effectiveness analysis

Type of problem
Cost-effectiveness analysis is a special application of cost-benefit analysis used when there is a fixed budget available for an undertaking, and the best way of using the total budget is to be determined by calculating the benefits of the various alternatives. In general, cost-effectiveness analysis is not well suited to environmental assessment, where "net effects" must be estimated and documented.

Advantages
• can be used to assess impacts over a period of time
• requires fewer resources than cost-benefit analysis
• provides a consistent framework and is easily presented

Disadvantages
• less compatible with the goals and requirements of the EA Act than cost-benefit analysis

13 Cost-minimization analysis

Type of problem
Cost-minimization analysis is best used where objectives are well defined and not subject to change. The technique is similar to CBA; however, the benefits remain constant and only the money needed to implement each solution is compared.

Advantages
• since objectives are fixed, they can be stated in non-monetary terms

Disadvantages
• specific objectives for each component of the environment are difficult to identify and justify
• for alternatives that meet specified environmental objectives, no comparison is made of the extent to which the alternative reduces negative impacts or enhances positive impacts

14 Planning balance sheet (PBS)

Type of problem
The planning balance sheet analysis method is an adaptation of cost-benefit analysis which addresses the limitations of CBA with respect to dealing with distributional issues. In PBS, alternatives are evaluated with respect to the benefits and costs accruing to homogeneous sectors of the community. Those who benefit and those who suffer from each alternative are assumed to be engaged in either a notional or a real transaction in which the latter produces services for sale to the former. The transactions are not limited to goods and services exchanged on the market, but include externalities and non-traded goods. The objective is the construction of a comprehensive set of social accounts, including the resource costs of generating the goods and services. Wherever possible, transactions are quantified in monetary terms, but where this is not possible, the method allows other measures to be used instead.

Advantages
• presents the effect of each project in relation to the stated objectives of each group in each sector of the community
• externalities and non-traded goods can be included in the analysis
• impact distribution is better represented than in the CBA, CEA and CMA methodologies

Disadvantages
• may not lead to a unique, clearly preferable solution

15 Saaty's Analytical Hierarchy Procedure

Type of problem
AHP is applicable where there is a discrete set of alternatives to be evaluated. The method can be implemented cartographically for geographic data (see overlay mapping and GIS), but does not incorporate explicit means for dealing with impacts occurring over different time periods. The method enables the user to specify appropriate criteria, and it is therefore possible to capture the full scope of the environment. The method can be applied at various scales of inquiry. The ability of the method to deal with interdependent impacts is determined by the extent to which criteria can be specified which are independent. This will often be difficult for EA problems. AHP provides no explicit means of incorporating a range of impacts with different probabilities of occurrence.

Advantages
• combines complex arrays of data and judgements into a single numerical rating
• a measure of the consistency of responses is provided by the method
• a relatively inexpensive computerized implementation is available, with a comprehensive sensitivity analysis component

Disadvantages
• default "ratio" values may be viewed as ordinal data by users
• preference and exact values may be confused
• relatively complex mathematical analysis or use of computers may be greeted with scepticism by decision-makers, especially where the results are counter-intuitive
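The core AHP calculation can be sketched with a small, hypothetical judgement matrix; this illustrates the published technique in general terms, not any particular commercial implementation.

    # AHP sketch: derive criterion weights from a reciprocal pairwise
    # comparison matrix and check the consistency of the judgements.
    # The 3x3 judgement matrix is hypothetical.
    import numpy as np

    # A[i][j] = judged importance of criterion i relative to criterion j
    # on Saaty's 1-9 scale, with A[j][i] = 1 / A[i][j].
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # The weights are the principal eigenvector of A, normalized to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = eigvecs[:, k].real
    w = w / w.sum()
    print("weights:", np.round(w, 3))      # about [0.65, 0.23, 0.12]

    # Consistency index CI = (lambda_max - n) / (n - 1); Saaty suggests
    # CI divided by the random index (0.58 for n = 3) stay below about 0.1.
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    print("consistency ratio:", round(ci / 0.58, 3))

The consistency ratio is the "measure of consistency of responses" referred to above: it flags judgement matrices whose pairwise comparisons contradict one another.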
16 Concordance and discordance analysis (ELECTRE)

Type of problem
The ELECTRE method (Élimination et choix traduisant la réalité: elimination and choice expressing reality) is suitable for comparing a relatively small number of alternatives, in much the same situations where the simple additive weighting method might be used. In this method, alternatives are compared in pairs: a concordance index measures the weighted extent to which one alternative is at least as good as another, a discordance index measures the strength of the evidence against that conclusion, and alternatives that are outranked are eliminated.

Advantages
• intangibles and externalities are easily included in the analysis

Disadvantages
• cumbersome when there are many alternatives to be compared

17 TOPSIS

Type of problem
TOPSIS can only be used where the relationship between two impacts is relatively simple: for example, where there is a trade-off of impacts, rather than where the impacts are interdependent. An ideal (though not necessarily realistic) solution must be readily apparent. The method is cumbersome when there are many alternatives to be assessed. Typical output is presented in Figure 2.

Advantages
• alternatives can be ranked
• graphic descriptions facilitate the presentation of analysis and results

Disadvantages
• implementation of this methodology is complex when many criteria are considered
• this methodology cannot accommodate probabilities
• dimensionless numbers used in the analysis render results which are hard to interpret

[Figure 2: Two-dimensional mapping of four alternatives and an ideal option in TOPSIS.]
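The ranking logic of TOPSIS, closeness to an ideal point such as the one shown in Figure 2, can be sketched as follows; the decision matrix and weights are hypothetical, and both criteria are treated as benefits.

    # TOPSIS sketch: rank alternatives by relative closeness to the ideal.
    # The decision matrix and weights are hypothetical.
    import numpy as np

    X = np.array([[7.0, 9.0],      # rows: four alternatives
                  [8.0, 7.0],      # columns: two benefit criteria
                  [9.0, 6.0],
                  [6.0, 7.0]])
    w = np.array([0.6, 0.4])

    # 1. Vector-normalize each column, then apply the weights.
    V = w * X / np.sqrt((X ** 2).sum(axis=0))

    # 2. Ideal and anti-ideal points (for a cost criterion, max and min
    #    would be swapped).
    ideal, anti = V.max(axis=0), V.min(axis=0)

    # 3. Euclidean distance to each point, then relative closeness in [0, 1].
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
    print("closeness:", np.round(d_minus / (d_plus + d_minus), 3))

The closeness values are exactly the kind of dimensionless numbers criticized above: they rank the alternatives but carry no physical interpretation of their own.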
18 Linear programming (LP)

Type of problem
Linear programming is a technique for allocating scarce resources in a way which best meets pre-specified objectives, while staying within identified constraints. The method finds an optimal solution given the available alternatives, the measure of "best" on which the decision will be made, the weights that measure the effectiveness of the alternatives, the coefficients describing the impacts associated with each alternative, and the constraints on total impacts.

Advantages
• where the pre-conditions can be met, LP provides a mathematically defensible solution to the problem
• where the inputs to the LP cannot be precisely specified, LP may still be an aid to conceptualizing the problem
• computerized implementations of LP are readily available

Disadvantages
• some problems may involve relationships that are non-linear
• the required assumption of certainty may be invalid for many environmental problems
• there may not be a unique solution to the problem
• relatively complex mathematical analysis and use of computers may be greeted with scepticism by decision-makers, especially where the results are counter-intuitive
• results do not show the impacts to individual groups

19 Dynamic programming (DP)

Type of problem
Dynamic programming is an optimization method applicable where the components of the preferred alternative are known, but their relative contributions are not. Many of the problems dealt with in EAs are of this type. Dynamic programming greatly reduces the number of potential combinations that need to be considered, and allows the problem to be tackled in a series of manageable stages.

Advantages
• can incorporate complicated relationships, including stochastic and probabilistic information
• can handle complex problems efficiently

Disadvantages
• applicability is limited to problems capable of being segmented into a sequence of choices

20 Goal programming (GP)

Type of problem
Goal programming is an optimization method that is applicable when the components of the preferred alternative are known, but their relative contributions are not. Goal programming demands that targets (or ideal levels) be set for all criteria, and identifies alternatives (in this case, systems of components) that best meet each criterion. By assigning weights to the criteria, the optimum alternative can be specified.

Advantages
• a full range of component mixes can be considered; discrete alternatives need not be specified

Disadvantages
• assignment of targets in absolute units for some environmental objectives is difficult
• preferences must be specified without a clear knowledge of the feasible alternatives
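A weighted goal program can be written as a linear program and solved with standard software; the sketch below uses scipy.optimize.linprog, and its components, coefficients, targets and weights are all hypothetical.

    # Goal programming sketch: minimize weighted deviations from targets.
    # All coefficients, targets and weights are hypothetical.
    from scipy.optimize import linprog

    # Variables: x1, x2 (component levels), then deviation variables
    # d1m, d1p, d2m, d2p (under-/over-achievement of goals 1 and 2).
    # Goal 1 (habitat units):  3*x1 + 1*x2 + d1m - d1p = 12
    # Goal 2 (cost, $000s):    2*x1 + 4*x2 + d2m - d2p = 16
    A_eq = [[3, 1, 1, -1, 0, 0],
            [2, 4, 0, 0, 1, -1]]
    b_eq = [12, 16]

    # Penalize a habitat shortfall (weight 2) and a cost overrun (weight 1);
    # the other deviations carry no penalty.
    c = [0, 0, 2, 0, 0, 1]

    bounds = [(0, 3), (0, None)] + [(0, None)] * 4   # x1 capped at 3 units
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x1, x2, d1m, _, _, d2p = res.x
    print(f"x1={x1:.2f}  x2={x2:.2f}  shortfall={d1m:.2f}  overrun={d2p:.2f}")
    # Optimal mix: x1=3.00, x2=2.50, habitat shortfall 0.50, no cost overrun.

The relative weights in c embody precisely the preference judgements that, as noted above, must be supplied before the feasible alternatives are known.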
D Questions for reviewers to consider in analyzing EAs

• Do the criteria relate to the goals, objectives and targets of the planning process? (Section 2.1.1)
• Is the rationale and significance of each criterion carefully set out? (Section 2.1.3)
• Have the magnitude and importance of the differences in effects of alternatives been objectively determined? Using what sort of scale? (Section 2.2)
• Have arithmetic operations been used only on data measured on interval or ratio scales, and not on nominal or ordinal scales? (Section 2.2)
• Have weights or preferences been set in accordance with the willingness to trade off the criteria? (Section 2.3.1)
• Have weights been assigned with knowledge of the criteria and their values? What weighting method was used? (Section 2.3.2)
• In setting the weights used in the evaluation, whose weights have been sought? How have they been aggregated? Has survey bias been considered? (Section 2.3.3)
• Has the method been selected with consideration of the specific problem at hand? Has the selection considered the type of the problem, the decision environment and the technical features of the methods? (Section 3.1)
• Has more than one method been used where the problem has several dimensions, sub-components or phases? (Sections 3.3, 4.3)
• Is the rationale for selecting the preferred alternative clearly explained? Has the evaluation strived for objectivity and avoided partisanship? (Section 5.1.1)
• Is the presentation accessible? Are important assumptions identified? Are significant limitations in data identified? Can every number be confirmed? Is explanation distinguished from process? (Section 5.1)
• Are the criteria adequate? Is the rationale for each provided? Are the criteria comprehensive? Has overlapping of criteria been avoided? Is the level of detail adequate? Is there a reasonable distribution of criteria, rather than clustering or proliferation in particular areas? (Section 5.2)
• Have logical and methodological errors been avoided? Have the methods used been clearly defined and described? Is the choice and use of methods consistent with the EA framework? Have methods been consistently applied? Has the arithmetic been done correctly? Has double counting been avoided? Have data measured on different types of scales been kept separate? Are preferences transitive, or have intransitive preferences been explained? If data have been transformed into dimensionless numbers or indices, has this been done correctly, and are the resulting numbers used properly? Has a distinction been drawn between magnitude and importance? (Section 5.3)
• Has as clear a distinction as possible been drawn between fact and opinion? Where do the preference values come from? Are they derived from public consultation? (Section 5.4)
• Is there a discussion of the uncertainty associated with the data used in the evaluation? How has this uncertainty affected the evaluation? Have sensitivity analyses been done? Has additional research been initiated when appropriate? Is there a social policy in place for dealing with uncertainty? (Section 5.5)
• How was the evaluation method (or methods) selected? Have several evaluation methods been considered, applied or both? (Section 5.6)
• Have the selection of criteria, the choice of the method and the weights and rates been reviewed in light of the results of the evaluation? Has iteration been used to help decision-makers converge on explicit preference values that reflect their implicit value set? (Section 5.7)

DISCLAIMER

This report was prepared for the Ontario Ministry of the Environment as part of a ministry-funded project. The views and ideas expressed in this report are those of the author and do not necessarily reflect the views and policies of the Ministry of the Environment, nor does mention of trade names or commercial products constitute endorsement or recommendation for use.