of the mapping systems used to identify likelihood of introduction based on those characteristics has been fully validated. Even validation of the Australian weed risk-assessment system (see Box 6-1) can be viewed as somewhat equivocal. Thus, these components of current pest risk-assessment systems cannot yet satisfy the science-based criteria of repeatability and peer review for validity. Simberloff and Alexander (1998) came to similar conclusions when they reviewed risk-assessment systems.

Similar criticisms can be leveled against the procedures by which the components of risk-assessment systems dealing with consequences are assembled. Much of the information required to assess consequences comes from expert judgment, which is often subjective, can vary substantially among evaluators, and can be influenced by political or other external factors. To our knowledge, the algorithms used to categorize the overall consequences have not been validated. Furthermore, uncertainty is never explicitly incorporated into the evaluation process; in some cases (such as the International Plant Protection Convention protocol), uncertainty is recorded but apparently then ignored.

Pest risk-assessment systems have value, provided that the reasoning they employ is supported by careful documentation. The process of conducting a qualitative risk assessment is at least as valuable as the specific risk values it produces, because the process, when carefully documented, provides a mechanism for assembling and synthesizing relevant information and knowledge. Furthermore, the process of risk assessment catalyzes a thorough consideration of the relevant events. When a consistent and logical presentation justifies the parameters used in a risk assessment and its conclusions, assessments can meet the scientific criteria of transparency and openness. Many risk assessments use slightly different characteristics and methods for determining likelihood, depending on the specifics of the situation being considered. That variability is a strength of the risk-assessment systems, not a shortcoming.

In the absence of careful documentation, specific risk values are worthless. That is a serious problem because any specific value imparts to some policymakers and a large part of the public a scientific aura and a sense of knowledge that might not be warranted. Pest risk assessments that lack careful documentation can do more harm than if no assessment had been undertaken.

A recent risk assessment of the introduction of the cape tulip (Homeria spp.) (USDA/APHIS 1999) illustrates both the limitations and the strengths of qualitative risk assessment (see Box 6-2). Determination of the likelihood of introduction is the principal weak point in the assessment. The most apparent problem is the lack of rationale for the scores given in the pathway analysis. The analysis fails the test of transparency; the scores cannot be critiqued except to say that they are unsubstantiated. A further weakness of the protocol is that scores are summed rather than treated as the likelihoods of events in a sequential chain, in which case they would be combined multiplicatively, because the pest must pass every step of the pathway. For grain imports and ornamental plant shipments, the risk assessment indicated only a low likelihood that Homeria will escape detection at the point of entry. If

Copyright © National Academy of Sciences. All rights reserved.