Suggested Citation:"C Expert Judgment." Transportation Research Board. 2009. Risk of Vessel Accidents and Spills in the Aleutian Islands: Designing a Comprehensive Risk Assessment - Special Report 293. Washington, DC: The National Academies Press. doi: 10.17226/12443.
APPENDIX C
Expert Judgment

THE USE OF EXPERT OPINION

An expert is an individual with specialized knowledge or skill in some specific domain. While in principle any degree of knowledge of a subject qualifies one as an expert to that degree, a person is called an expert only when he or she is believed to be much more knowledgeable than a layperson about the subject of interest. Expert opinion can be viewed as an expression of the judgment of an expert on a subject or issue. An opinion is usually regarded as an impression, personal assessment, or subjective estimation of a quality or quantity of interest. Expert opinion, in contrast to factual information, is a judgment or a belief that, at least in the mind of the receiver of the opinion, is based on uncertain information or limited knowledge.

The primary reason for eliciting expert opinion is to deal with uncertainty with regard to selected technical issues. Issues most suited to elicitation of expert opinion involve significant uncertainty, are controversial or contentious, are complex, and can have a significant effect on risk. Elicitation and use of expert opinion should be regarded as a heuristic rather than a scientific tool for exploring issues that would otherwise be inaccessible.

Two measures of quality for the elicitation and use of expert opinion are "substantive goodness" and "normative goodness." Substantive goodness refers to the knowledge of the expert relative to the problem at hand. Normative goodness, on the other hand, refers to the expert's ability to express that knowledge in accordance with the calculus of probabilities and in close correspondence with his or her actual opinions. Depending on the situation, one or the other type of goodness predominates.

Questions that need to be considered with regard to the use of expert opinion fall into two categories: (a) elicitation (e.g., how to select the experts, how many to select for a given issue, and how to elicit their opinions) and (b) how to use the elicited opinions and information about the experts to estimate the unknown quantity.

It has been widely documented that judgmental estimates are subject to a number of potential biases. Two biases that are particularly important in the practice of risk assessment are (a) the possibility of systematic overestimation or underestimation and (b) overconfidence, or the tendency for people to give "overly narrow confidence intervals which reflect more certainty than is justified by their knowledge about the assessed quantities" (Tversky and Kahneman 1974).

The following are helpful points to consider when expert opinion is used:

• It is important to select good domain experts and train them in normative aspects of the subject.
• Aggregating the opinions of multiple experts tends to yield more accurate results than using the opinion of a single expert.
• Mathematical methods of aggregation are generally preferable to behavioral methods for reaching consensus.
• The quality of expert judgments can be substantially improved by decomposing the problem into a number of more elementary problems.
• Significantly better overall results are obtained if the initial problem definition and decomposition are performed with care and in consultation with the experts.
• Expert opinions are subject to bias and overconfidence.
Effective means of reducing overconfidence are (a) using calibration techniques and (b) encouraging experts to actively identify evidence that tends to contradict their initial opinions (see the discussion below on bias).
• Sources of strong dependency among experts should be identified. Weak dependency does not appear to have a major impact on the value of multiple expert judgments.
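Two of the points above, mathematical aggregation of multiple opinions and the use of calibration checks, can be illustrated numerically. The following Python sketch is not part of the report: the function names and all numbers are invented for illustration, and the equal-weight linear opinion pool shown is only one of several aggregation rules discussed in the expert-judgment literature.

```python
def linear_opinion_pool(estimates, weights=None):
    """Aggregate experts' point estimates as a weighted average
    (a simple 'linear opinion pool'). Equal weights by default."""
    if weights is None:
        weights = [1.0 / len(estimates)] * len(estimates)
    return sum(w * x for w, x in zip(weights, estimates))

def interval_hit_rate(intervals, true_values):
    """Fraction of elicited credible intervals containing the true value.
    For well-calibrated 90 percent intervals this should be near 0.9;
    a much lower rate signals overconfidence (overly narrow intervals)."""
    hits = sum(1 for (lo, hi), v in zip(intervals, true_values) if lo <= v <= hi)
    return hits / len(intervals)

# Hypothetical elicitation: three experts estimate an annual accident rate.
pooled = linear_opinion_pool([0.02, 0.05, 0.08])  # equal weights -> 0.05

# Hypothetical calibration data: an expert's ten 90 percent intervals
# versus the outcomes that were eventually observed.
intervals = [(1, 4), (2, 6), (0, 3), (5, 9), (1, 2),
             (3, 7), (2, 4), (0, 1), (4, 8), (2, 5)]
truths = [3, 7, 2, 6, 4, 5, 3, 0, 9, 4]
rate = interval_hit_rate(intervals, truths)  # 0.7, well below 0.9: overconfident
```

In practice the weights themselves may be derived from calibration performance on seed questions with known answers, which is one way the "mathematical methods of aggregation" mentioned above are operationalized.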

THE FACILITATOR

A facilitator is an expert with the interpersonal skills needed to control the elicitation process and ensure that all available information emerges, that the experts are fairly heard, and that their views are not subsumed by those of others. To these ends, it is important not only that the experts selected represent a range of expertise but also that the facilitator challenge them to explain the basis for their judgments. A facilitator can directly address any biases. For example, representativeness bias involves replacing a careful evaluation of the available information with quick conclusions based on partial information or allowing irrelevant information to affect one's conclusions. The facilitator must have the skill to sense when an individual is exercising such bias.

Moreover, it is important to understand the heuristics people often use to develop subjective probability distributions and the associated biases. Knowing which framings for eliciting distributions cause problems makes it possible to use those that work better. Because the facilitator is familiar with the potential biases associated with the subject at hand, he or she can test the group's ideas and lead them in the right direction. The following strategies should be used either explicitly or implicitly through the facilitator's questioning (Budnitz et al. 1998; see also Tversky and Kahneman 1974):

• Construct simple models of the maximum and minimum points of the distribution, avoiding focus on the central tendency until the end points are studied to avoid anchoring; test these models to examine the evidence supporting them rather than relying on opinion alone.
• Seek consensus on the evidence considered by the analysis team.
• Test distributions by asking whether the assessor agrees that it is equally likely for the real answer to lie between the 25th and 75th percentiles or outside them (each event carries probability 0.5), or to lie between the 40th and 60th percentiles as outside the 10th and 90th percentiles (each carries probability 0.2). Sometimes these questions must be phrased in ways that avoid suggesting the answer.
• Use a strong facilitator who ensures that each participant individually puts his or her evidence on the table and justifies it (Budnitz et al. 1998). The facilitator must use judgment in deciding when to push the participants rather than going through a long and tedious checklist.
• Exercise care in assessing parameters that are not directly observable. The distribution is supposed to reflect the analyst's evidence concerning a particular parameter. If the analyst has little direct experience with the parameter, it can be difficult to justify an informative prior distribution.

CONTROLLING UNINTENTIONAL BIAS IN USING EXPERT OPINION

One of the most important concerns associated with the use of a consensus expert judgment process is unintentional bias. In the subjective process of developing probability distributions, strong controls are needed to prevent bias from distorting the results (i.e., to prevent derivation of results that fail to reflect the team's state of knowledge). Perhaps the best approach is to understand thoroughly how unintended bias can occur. With that knowledge, the facilitator and team can guard against its influence in their deliberations.

A number of studies present substantial evidence that people [both naive analysts and subject matter (domain) experts] are not naturally good at estimating probability (including uncertainty in the form of probability distributions or variance) (Cooke 1991; Hogarth 1975; Mosleh et al. 1988). For example, Hogarth (1975) notes that, according to psychologists, people have only limited information-processing capacity. This implies that their perception of information is selective, that they must apply heuristics and cognitive simplification mechanisms, and that they process information in sequential fashion. These characteristics in turn often lead to a number of problems in assessing subjective probability. Evaluators often are subject to the following:

• They ignore uncertainty (this is a simplification mechanism). Uncertainty is uncomfortable and complicating and beyond most people's training.
• They lack an understanding of the impact of sample size on uncertainty. Domain experts often give more credit to their experience than it deserves (e.g., if they have not seen something happen in 20 years, they may assume it cannot happen or that its occurrence is much more unlikely than once in 20 years).
• They lack an understanding of, or fail to think hard enough about, independence and dependence.
• They have a need to structure the situation, which leads them to imagine patterns even when none exist.
• They are fairly accurate at judging central tendency, especially the mode, but may significantly underestimate the range of uncertainty (e.g., in half the cases, people's estimates of 98 percent intervals fail to include the true values), and they are influenced by the beliefs of colleagues and by preconceptions and emotions.
• They rely on a number of heuristics to simplify the process of assessing probability distributions. Some of these heuristics introduce bias into the assessment process.

Examples of this last problem include the following:

• Representativeness: People assess probabilities by the degree to which they view a known proposition as representative of a new one. Thus stereotypes and snap judgments can influence their assessment. In addition, representativeness ignores prior probability (Siu and Kelly 1998), that is, what one's initial judgment of the probability of a new proposition would be before considering new evidence (in this case, one's assumption about the representativeness of the known proposition). Clearly the prior should have an impact on the posterior probability, but basing one's judgment on similarity alone ignores that point. This also implies that representativeness is insensitive to sample size (since one jumps to a final conclusion on the basis of the assumption of similarity alone).
• Availability: People assess the probability of an event by the ease with which instances can be recalled. This availability of the information is confused with its occurrence rate.
Several associated biases have been observed:
– Biases from the retrievability of instances (recency, familiarity, and salience),
– Biases from the effectiveness of a search set (the mode of search may affect the ability to recall), and
– Biases of imaginability (the ease of constructing inferences is not always connected with the probability).

• Anchoring and adjustment: People start with an initial value and adjust it to account for other factors affecting the analysis. The problem is that it appears to be difficult to make appropriate adjustments. It is easy to imagine being locked in to one's initial estimate, but anchoring is more insidious than this alone. A number of experiments have shown that even when initial estimates are totally arbitrary and represented as such to the participants, the effect is strong. Suppose that two groups are each told that a starting point has been picked randomly from which to work; the group given the higher arbitrary starting point generates higher probability estimates. One technique found helpful is to develop estimates for the upper and lower bounds before addressing most-likely values.

Rather than concluding prematurely that people are irredeemably poor at generating subjective estimates of probability, one should realize that many such applications have been successful. Hogarth (1975) points out that studies of experienced meteorologists have shown excellent agreement with facts. Thus, it is essential to understand what techniques can help yield good assessments. Winkler and Murphy (1978) make a useful distinction between two kinds of expertise or "goodness": "substantive" expertise refers to knowledge of the subject matter of concern, while "normative" expertise is the ability to express opinions in probabilistic form. Hogarth (1975) points out that the subjects in most studies reviewed were neither substantive nor normative experts. A number of studies have shown that normative experts (whose domain knowledge is critical) can generate appropriate probability distributions but that substantive experts require significant training and experience or assistance (such as that provided by a facilitator) to do well.

REFERENCES

Budnitz, R. J., G. Apostolakis, D. M. Boore, L. S. Cluff, K. J. Coppersmith, C. A. Cornell, and P. A. Morris. 1998. Use of Technical Expert Panels: Applications to Probabilistic Seismic Hazard Analysis. Risk Analysis, Vol. 18, No. 4, pp. 463–469.
Cooke, R. M. 1991. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, New York.

Hogarth, R. M. 1975. Cognitive Processes and the Assessment of Subjective Probability Distributions. Journal of the American Statistical Association, Vol. 70, No. 350, pp. 271–294.
Mosleh, A., V. M. Bier, and G. Apostolakis. 1988. A Critique of Current Practice for the Use of Expert Opinions in Probabilistic Risk Assessment. Reliability Engineering and System Safety, Vol. 20, pp. 63–85.
Siu, N. O., and D. L. Kelly. 1998. Bayesian Parameter Estimation in Probabilistic Risk Assessment. Reliability Engineering and System Safety, Vol. 62, pp. 89–116.
Tversky, A., and D. Kahneman. 1974. Judgment Under Uncertainty: Heuristics and Biases. Science, Vol. 185, pp. 1124–1131.
Winkler, R. L., and A. H. Murphy. 1978. "Good" Probability Assessors. Journal of Applied Meteorology, Vol. 7, pp. 751–758.
