

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




AN ANATOMY OF RISK ASSESSMENT: SCIENTIFIC AND EXTRA-SCIENTIFIC COMPONENTS IN THE ASSESSMENT OF SCIENTIFIC DATA ON CANCER RISKS

Lawrence E. McCray

SUMMARY

A single risk management decision is often based on an assessment that itself comprises many discrete decisions--choices among assumptions, interpretations, and relative weightings of conflicting pieces of evidence--that analysts must make if useful overall conclusions are to be reached concerning the existence or level of a cancer risk. This paper attempts to identify common elements of risk assessment, to characterize these components individually, and then to draw general inferences about the nature of risk assessment. The paper covers three areas:

- the inherent structure of risk assessment
- the relationship of scientific judgment and "value" judgment in risk assessment
- some implications for the organization and management of risk assessment

APPROACH

Regulation--and the rule of law more generally--demands simplification. The regulation of potential public health risks often demands simple categorical findings (for example, whether a particular chemical is carcinogenic or not). Regulation by its nature cannot easily tolerate ambiguities or cope with probabilities: it proceeds as if a "Simplification Imperative" is at work. A highway speed limit, for example, cannot reasonably be posted as "around 55 mph"; a speeding ticket cannot reasonably state that a driver "probably" was exceeding the posted speed.

NOTE: This paper was originally prepared for the use of the National Research Council's Committee on the Institutional Means for Assessment of Risks to Public Health. It is not intended to present independent positions or interpretations on scientific or policy matters. It does not necessarily reflect the judgment or position of the Committee or the National Research Council. It has not been subjected to the internal review procedures that apply to reports prepared by NRC committees.

The assessment of a public health risk is inherently complex, and ambiguities and probabilities abound. Scientists consider many qualifying factors when contemplating a chemical's potential carcinogenicity. Gaps in data and knowledge are typically large. Results that are conclusive enough to satisfy a scientist's professional standard of proof are rare. If public health risks are to be regulated at all, many assumptions--deliberate choices in the face of scientific uncertainty--must be made in order to satisfy a regulator's need for simplified answers to two questions:

- Is the substance carcinogenic or not?
- How does human risk vary with actual exposure to the substance?

This paper is an initial inquiry into the nature of those analytic choices--the inherent "components" of risk assessment. The analysis identifies 36 distinct components*, which are listed in the attachment. The components fall into three analytically distinct activities: hazard identification, dose-response assessment, and exposure assessment. Hazard identification involves the qualitative determination of whether a particular agent causes a particular adverse effect in humans. Dose-response assessment describes how such effects are related to dose. Exposure assessment estimates the level of human exposure to the substance, with and/or without regulatory controls. A risk assessment, thus, combines a hazard identification or a dose-response assessment with an exposure assessment. The 36 components are arrayed in the attachment according to these three activities, and, within each activity, according to the type of the available scientific data. Generally, we can classify these as: (1) human data, (2) animal bioassay data, and (3) data from other sources. Fewer than 36 choices will be confronted in any one assessment; the actual number depends on the nature of the evidence that is available to be evaluated.
* The list of components was originally generated by abstracting the issues covered in reviews of the scientific principles of carcinogenic risk assessment by the U.S. government's Interagency Regulatory Liaison Group and other organizations. Reviewers of early drafts of this paper were then asked to suggest additions to the list to make it a comprehensive accounting of the areas where discretion is applied in particular assessments. This list was later expanded to include 50 components. See National Academy Press, 1983, pp. 29-33.
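The way the three activities described above combine into a single risk figure can be sketched in a few lines. All numbers below are invented for illustration; the paper itself specifies no slope factors or exposure levels.

```python
# Minimal sketch (hypothetical numbers) of how the paper's three activities
# combine: a qualitative hazard call, a dose-response slope, and an exposure
# estimate together yield a quantitative risk estimate.
identified_as_carcinogen = True      # hazard identification (qualitative)
slope_per_mg_kg_day = 2.0e-3         # dose-response assessment (assumed slope)
exposure_mg_kg_day = 0.05            # exposure assessment (assumed intake)

# Risk is computed only if the hazard identification is positive.
lifetime_excess_risk = (slope_per_mg_kg_day * exposure_mg_kg_day
                        if identified_as_carcinogen else 0.0)
print(f"estimated lifetime excess risk: {lifetime_excess_risk:.1e}")
```

Each of the three inputs above is itself the product of many of the discretionary components discussed in the rest of the paper.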

GENERALIZATIONS ABOUT THE NATURE OF DISCRETION IN RISK ASSESSMENT

A review of the 36 components leads to several general observations concerning the structure of risk assessment:

The components of risk assessment vary widely in form. Some, for example, involve quite simple choices among a limited number of options:

Examples: Component 13 is the choice of a statistical confidence limit (the choice of 95% is conventional) that is used to classify bioassay results as "positive."

Component 9 is the binary decision whether to count or to ignore benign tumors as positive results in bioassays.

Others are umbrella judgments that may incorporate a large number of scientific factors or an open-ended array of choices.

Examples: Component 4 addresses the scientific acceptability of an epidemiology study. Many factors affect this judgment--clearly, the list of potential flaws in designing or conducting epidemiology studies is long. This judgment must be applied for each available study.

Component 16 is an open-ended question of whether particular bioassay results should be discounted for purposes of hazard identification because the test animal's physiological response to the chemical is unique, and thus its response does not reliably predict human responses. The list of possible scientific rationales for this kind of physiological extenuation is long, if not open-ended--including interspecies metabolic differences, pharmacokinetic differences, etc. It is doubtful that a checklist could be constructed that would cover all rationales.

Each assessment involves many mandatory choices. Discussions of risk assessment policies for particular substances often reduce to debates over one or two scientific issues--typically, for example, the shape to be assumed for the dose-response function and/or whether or not to use upper confidence limits to define the dose-response curve.
In truth, however, an analyst's discretionary judgments unavoidably enter an assessment at many points, whether or not they are explicitly presented and subjected to scrutiny. In fact, 20 of the components normally require discretionary choices (whether implicit or explicit) for every quantitative risk assessment that involves both animal and human effects studies. The "Simplification Imperative" casts a wide net.
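The simplest kind of component--the statistical confidence limit of Component 13 above--can be made concrete with a small sketch. The tumor counts below are invented, and the one-sided Fisher exact test is one conventional way (not the paper's prescribed way) to classify a bioassay result; the point is that the same data flip between "positive" and "negative" as the chosen confidence level varies.

```python
# Illustrative sketch with hypothetical counts: classifying a bioassay result
# as "positive" depends on a chosen confidence level (Component 13).
from math import comb

def fisher_one_sided(tumors_treated, n_treated, tumors_control, n_control):
    """P(observing >= tumors_treated tumors in the treated group by chance),
    under the hypergeometric null of no treatment effect."""
    total = n_treated + n_control
    total_tumors = tumors_treated + tumors_control
    p = 0.0
    for k in range(tumors_treated, min(total_tumors, n_treated) + 1):
        p += (comb(n_treated, k) * comb(n_control, total_tumors - k)
              / comb(total, total_tumors))
    return p

# Hypothetical bioassay: 9/50 tumors in treated animals vs 2/50 in controls.
p_value = fisher_one_sided(9, 50, 2, 50)

call_at_95 = p_value < 0.05   # the conventional choice noted in the text
call_at_99 = p_value < 0.01   # a stricter, equally defensible choice

print(f"p = {p_value:.4f}, positive at 95%: {call_at_95}, at 99%: {call_at_99}")
```

The choice between the two thresholds is exactly the kind of discretionary judgment the text describes: the data alone do not settle it.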

Examples: Component 14 requires a judgment whether a particular bioassay is strong enough to be considered in hazard evaluation; the risk assessor must make this choice for each study encountered.

Component 35 covers the treatment of particularly susceptible populations: even a decision not to evaluate such susceptibles is a choice among analytic options, whether recognized or not.

If only animal data are available and a quantitative assessment is performed, 15 mandatory choices remain. For the hazard identification phase alone, 10 mandatory components present themselves (7 for bioassay data, 3 for human effects data). The other 16 components come into play only under special circumstances.

Examples: Component 7 covers the case of weighting results of bioassays that used different routes of exposure than the one that is of primary regulatory concern (e.g., whether results from a stomach intubation study are relevant for airborne chemical exposures). This component does not come into play if all bioassays were inhalation studies.

Most of the components involve some form of weighting. Twenty-eight of the components involve weighting decisions, many of which involve decisions about which facts, among a conflicting set of findings, are to be given consideration.

Examples: Component 6 covers the relative weights to be placed on positive and negative epidemiology results when both are reported for a substance.

Component 26 treats the decision on whether to base dose-response assessment solely on results from the most sensitive bioassay treatment group; this may be expressed equivalently as "what relative weight should be given to results from different treatment groups in a bioassay?" One choice is to apply a weighting value of one to the response of the most sensitive treatment group and a value of zero to data from all other treatment groups.
Another possible choice would be to apply equal weights, in effect averaging results across treatment groups.

Component 21 covers the "grand" weighting decision for hazard identification: what relative weights to apply to all the results from human studies, bioassays, structure-activity considerations, and short-term tests to reach a final inference about cause-and-effect in humans.
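The two weighting choices described for Component 26 can be sketched numerically. The group incidences below are invented, and "most sensitive" is simplified here to "highest observed incidence"; the point is only that the same bioassay yields different dose-response inputs under the two defensible weighting schemes.

```python
# Hypothetical sketch of Component 26: the relative weights given to bioassay
# treatment groups change the dose-response input.  Incidences are invented.
group_incidence = {"low_dose": 0.04, "mid_dose": 0.10, "high_dose": 0.22}

# Choice A: weight of one on the most sensitive group, zero on all others.
most_sensitive_estimate = max(group_incidence.values())

# Choice B: equal weights, in effect averaging across treatment groups.
equal_weight_estimate = sum(group_incidence.values()) / len(group_incidence)

print(f"most-sensitive-group: {most_sensitive_estimate}, "
      f"equal-weight average: {equal_weight_estimate:.2f}")
```

Neither scheme is dictated by the data; choosing between them is one of the discretionary weighting decisions the text counts among the 28.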

The other 3 components involve a choice among statistical criteria or a choice among alternative ways to express results.

Examples: Component 13 requires a decision about the statistical confidence level to be used to classify bioassay test results as "positive."

Component 24 requires a choice between using "best estimates" or "upper confidence limits" in characterizing the dose-response function.

The components of risk assessment appear to have varying levels of specificity--but for many, the level of specificity is somewhat unclear. A question of interest in evaluating the advisability of generic guidelines for risk assessment is the level of generality at which discretionary judgment must be applied. The most specific components are those that apply to specific test results--e.g., a particular bioassay report. Midrange applications are those that apply to the risk assessment for a particular substance, but not across substances. A generic component involves judgment that could be applied across substances and, thus, across regulatory programs.

The results of an attempt to classify components in this manner are inconclusive. Three components (4, 14, 15) appear to be of the most specific variety; they apply to individual test results.

Example: Component 14 involves a characterization of the scientific acceptability of particular bioassays.

Five midrange components (16, 20, 21, 25, 31) seem relatively clearly to apply to the unitary risk assessment--that is, they apply to multiple data types for a particular substance, but not across substances.

Example: Component 16 weighs physiological extenuations for a particular chemical risk. These are typically based on an understanding of human metabolic pathways or pharmacokinetic factors specific to the chemical. Determinations for component 16 seem unlikely to generalize across chemicals.

Six components (5, 13, 19, 24, 30, 36) appear to be amenable to generic policy formulation.
(Note: these components tend to the "value" end of the science-value spectrum discussed in the following section.)

Example: Component 19 requires a definition of the statistical confidence level for defining positive short-term tests. Whatever that definition is, it seems reasonable to hold it constant across tested substances.

However, well over half of the components seem to resist easy classification.

Examples: Component 8 requires a decision about whether total body tumors or specific tumor types should be counted in bioassays. It is unclear whether this decision could be made generically or should remain flexible for chemical-by-chemical determination.

Component 7 requires decisions in assessing animal data in the case that the route of exposure in the study data is different from that of regulatory concern. Some rules of thumb may be desirable (e.g., don't rely too much on studies involving dermal exposures for assessing airborne human exposures), but some argue that there should be case-by-case evaluation of the question.

Component 28 requires choices among varying interspecies conversion factors in dose-response estimation. There are two or three dominant options, including those based on relative surface area and relative body weight. Some observers would oppose the choice of any one conversion factor as a generic policy, asserting that, for example, metabolic factors may require case-by-case variation.

It seems clear that there are limits to the extent to which risk assessments can be made uniform by the imposition of generic rules. There are many points in an assessment where scientific considerations unique to the substance under evaluation should be assessed.

Ten different fields of expertise are touched in the components of risk assessment; the field that is most pervasively relevant is biostatistical expertise; "concordance analysis" may be of strategic importance over the long term. We have made an initial attempt to describe the major field of knowledge that is applicable for each component. The results:

    Field of Knowledge      No. of Components
    Biostatistics                  13
    Carcinogenesis                 11
    Toxicology                     10
    Pathology                       6
    Epidemiology                    4
    Genetics                        4
    Medicine                        3
    Nutrition                       2
    Biochemistry                    1
    Teratology                      1

One reason that biostatistics is pervasive is that many of the components require giving relative weights to findings from different studies or tests; this question turns on the relative power of the tests and the relative strengths of association reported in test results. For nearly half of the components, a blend of disciplinary backgrounds may be required:

    Number of Scientific        No. of
    Fields Required           Components
           4                       2
           3                       2
           2                      11
           1                      19
           0                       2
                                  --
                                  36

Many of the components that rest on a single disciplinary field are found in the hazard identification phase of risk assessment; the need for multidisciplinary expertise is more common in the dose-response estimation phase.

These findings have implications for the administrative management of risk assessment. Because so many specialized fields may be directly relevant, it may prove difficult for agencies to engage experts in all the relevant fields in the units that conduct risk assessments, or for groups that review risk assessments to ensure that all relevant disciplines are represented.

Not listed among the standard fields of scientific knowledge is a unique "discipline" that has a bearing on choices for a number of components; we may call this discipline "concordance analysis" (some have suggested the term "risk assessment science" for the same concept). Concordance studies involve empirical reviews of the concordance between indicators of carcinogenicity revealed in lower species and known human carcinogenicity. This line of empirical inquiry is largely independent of any of the standard scientific disciplines.

Examples: Component 18 requires judgment about the predictive power of particular short-term tests; confidence in such tests will be enhanced if concordance studies show them to be highly correlated with bioassay results, which in turn show some concordance with human carcinogenicity.

Component 12 requires a decision as to whether a bioassay should be considered positive if any single sex/dose/strain grouping is positive, or whether the results for other groupings should be factored in. Concordance studies could eventually address this issue by applying alternative decision schemes to available animal data on known human carcinogens and determining which scheme "predicted" the fewest wrong answers.

Components are better characterized for hazard identification and dose-response assessment than for exposure assessment. Twenty-one of the components deal with hazard identification, and 10 more are concerned with dose-response estimation. Exposure assessment accounts for only two components (32, 33). This gives an impression that exposure assessment, at least in relative terms, is an ad hoc undertaking. There are two plausible reasons for this contrast. The first, and the most likely, is that exposure assessment procedures vary widely by type of exposure, with very few major analytic elements common to all routes of exposure: for example, an exposure assessment for a food additive may simply share few prominent assumptions/interpretations with an exposure assessment for a mobile source air pollutant. A second possibility is that a larger number of common components of exposure assessment are present, but they simply have not been recognized and developed because public attention and analytic focus have been devoted to questions of toxicology.
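The concordance-analysis idea proposed above for Component 12 can be sketched as a toy computation. Every datum below is fabricated purely for illustration: a handful of chemicals whose human carcinogenicity is assumed known, each with a count of positive sex/dose/strain groups, scored under two candidate decision rules.

```python
# Invented toy example of "concordance analysis" for Component 12: apply two
# bioassay decision rules to hypothetical group-level results for chemicals
# whose human carcinogenicity is taken as known, and count the wrong calls.
chemicals = [
    # (human carcinogen?, positive sex/dose/strain groups out of 4)
    (True,  1),
    (True,  3),
    (True,  2),
    (False, 1),
    (False, 0),
    (False, 0),
]

def any_group_positive(positives):        # rule A: one positive group suffices
    return positives >= 1

def majority_groups_positive(positives):  # rule B: most groups must be positive
    return positives >= 3

def wrong_calls(rule):
    """Count chemicals where the rule's call disagrees with the known truth."""
    return sum(1 for truth, pos in chemicals if rule(pos) != truth)

errors = {"any-group rule": wrong_calls(any_group_positive),
          "majority rule": wrong_calls(majority_groups_positive)}
print(errors)
```

With real data, the scheme with the fewest wrong answers would be the empirically preferred decision rule, which is exactly the kind of finding concordance analysis could contribute.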
SCIENCE AND VALUE IN RISK ASSESSMENT

There has been much debate over whether risk assessment is "scientific" or "political" in nature, and, therefore, whether scientists or politically accountable officials should have the final authority in performing assessments. Familiar assertions in the current debate over risk assessment include these:

"Risk assessment is inherently scientific in nature; it should be done in isolation from political influences, which can only distort true scientific judgments."

"The basic problem in risk assessment is that political appointees in the agencies conceal their value judgments under the mantle of science."

"The basic problem in risk assessment is that scientists conceal their personal value judgments in risk assessments with the mantle of science"; or, alternatively, "All scientific judgments must necessarily be made by scientists; however, not all judgments made by scientists are necessarily scientific."

"All risk assessments are inherently political. Since science cannot fully characterize carcinogenesis, there is no alternative but to apply value judgments in areas of scientific uncertainty."

As in most controversies, there is probably an element of truth in each of these conflicting observations. It is possible that the difficulty in understanding the relative roles of scientific judgment and value judgment in risk assessment is that observers have addressed the question for the risk assessment process as a whole. The problem may be resolved by examining the question for the individual components of risk assessment.

Many observers believe that risk assessments involve a mix of scientific and extra-scientific judgments. This section reports an experimental attempt to address science/value questions for the several components of risk assessment. The central idea is to classify each of the 36 components as a "scientific" judgment, a "value-based" judgment, or as an intermediate form. The exercise corresponds to a requirement of the FDA study contract.*

The experimental approach was to rate each component on a five-point scale ranging from "pure science" to "pure value." These ratings were supplied by scientists and social scientists who were knowledgeable about carcinogenic risk assessment and its uses in policy. The underlying premise--a naive one, in retrospect--was that segregating matters of science from matters of value might hold a key for the study of institutional means of risk assessment.
Scientific components of a risk assessment, for example, might be left to an organization primarily responsive to scientific authority, while extra-scientific considerations involving value judgment might be isolated and determined by individuals responsive to normal democratic processes--for example, political appointees in the regulatory agencies. The results scuttle any hopes for such a neat solution.

* "The process of risk assessment will be delineated in terms of its individual components, identifying and distinguishing those that are scientific in nature from those that are value judgments or policy. In addition, an effort will be made to identify and describe those components that are neither strictly science or policy but a hybrid consisting of elements of both."

None of the components is purely scientific. Reviewers characterized no component as "pure science." Upon reflection, it became clear that this finding is tautological: the list of components had been constructed in a way that excluded consideration of purely scientific considerations. Clearly, there are very many matters of pure science in a risk assessment; for an obvious example, the term "pure science" would include the laws of addition. Addition is used in all risk assessments, and faulty addition would certainly affect the scientific merit of a risk assessment. However, because there appears to be scientific certainty--or at least consensus--about the laws of addition, the matter is not addressed in the materials (formal guidelines, case summaries) that served as the basis for identifying the 36 components.

A handful of components are seen as pure value judgments. Six components (5, 24, 30, 32, 34, 36) appear to involve no scientific judgment.

Example: Component 5 requires a statistical threshold for considering human effects studies to be "positive." This choice rests on the value society places on avoiding false negatives--e.g., can we accept 1 chance in 20 (or 1 in 100) of falsely exonerating a harmful substance? Science cannot illuminate the answer to this question.

For the majority of components, reviewers see a mix of science and value--and they disagree widely on the proportion of the mix. Reviewers tend to define "scientific" as reflecting the degree of current scientific consensus. For 30 of the components, observers characterized the item in the midrange between pure science and pure value. For 20 of the 30, there is serious disagreement (not obviously reflecting the general policy orientation or disciplinary training of the observers) about the proportions of the mix.

Example: Component 23 requires a choice (or choices) among mathematical models to extrapolate from high to low doses in animal studies.
Some observers see this as "mostly scientific," emphasizing that the choice must be constrained by scientific considerations--like statistical goodness of fit in the observed range and biological plausibility in the lower dose range. Others see the choice as "mostly value," claiming that scientific considerations merely narrow the range of sensible models (and noting the lack of scientific consensus on biological plausibility); this leaves the final selection open to value judgment. Further discussion between two such observers turns to the diffuse and metaphysical.

After averaging the ratings in cases where observers' ratings diverge, we find the array of general rating "tendencies" for the 30 midrange components to be:

    Mostly Scientific        9
    (Intermediate)           7
    Mixed                    7
    (Intermediate)           3
    Mostly Value             4
                            --
                            30

In general, components in the hazard identification phase of risk assessment are perceived as more "scientific"; 13 of the 21 components in this phase are listed in the first two categories. For dose-response assessment, only 3 of its 10 components are listed in these categories.

Although some of the components are judged to be "mostly scientific" judgments, even for these there is a margin of difference in opinions among scientists--one that makes the choice of scientist consulted an entry point for value considerations.

Example: Component 15 covers the pathology for bioassays. No one doubts that pathology should be left to qualified pathologists; however, there is some scientific variation in the way different competent pathologists characterize the same results--a difference they perceive as based solely on scientific considerations, not personal values. However, the differences correspond to different levels of conservatism about risk, the key value judgment; this forces a choice among different pathologists' findings, and that secondary choice itself may be affected by value considerations.

In describing the basis of their ratings, the observers appeared to be using an estimate of the current degree of scientific consensus to help them judge the extent to which a component is "scientific."
For example, rationales for ratings were typically accompanied by statements like "No good scientist would question this approach," or "the best scientists don't agree on this now." This is not the only possible definition of the concept, as outlined below.
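The low-dose extrapolation dispute described above for Component 23 is easy to make concrete. The observed point and model forms below are illustrative assumptions (a one-hit model versus a simple two-hit-like model), not the paper's prescription; the sketch shows how two models calibrated to the same bioassay observation can agree in the observed range yet diverge by orders of magnitude at low dose.

```python
# Hypothetical sketch of Component 23: two extrapolation models fitted to the
# SAME observed bioassay point differ enormously at low dose.
from math import exp, log

observed_dose, observed_risk = 1.0, 0.20  # invented bioassay observation

# One-hit (linear-at-low-dose) model: P(d) = 1 - exp(-a*d)
a = -log(1 - observed_risk) / observed_dose

# A two-hit-like model: P(d) = 1 - exp(-b*d**2)
b = -log(1 - observed_risk) / observed_dose**2

low_dose = 0.001
risk_one_hit = 1 - exp(-a * low_dose)
risk_two_hit = 1 - exp(-b * low_dose**2)

divergence = risk_one_hit / risk_two_hit
print(f"one-hit: {risk_one_hit:.2e}, two-hit: {risk_two_hit:.2e}, "
      f"ratio ~{divergence:.0f}x")
```

Both models reproduce the observed 20% response exactly, so goodness of fit in the observed range cannot choose between them; that residual choice is where the "mostly value" characterization enters.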

The key question in distinguishing scientific judgments from value judgments is the definition of the adjective "scientific." Alternative definitions have different implications. Three distinct uses of the term "scientific" are discernible:

1. Consensus. A component is "scientific" to the extent that qualified scientists agree on the way to interpret particular data.

2. Empirical confirmation. A component is "scientific" if it is subject to confirmation or disconfirmation by the scientific method--that is, the question can be resolved by future scientific tests or other findings.

3. Expertise. A component is "scientific" if, in practice, it must be determined by scientists because lay persons cannot be easily trained to understand all the complex factors that must be considered in making a final choice. (For example, lawyers cannot be expected to learn to read bioassay tissue slides competently and must, for this reason, defer to pathologists.)

The general question in the study performed by the Committee on the Institutional Means for Assessment of Risks to Public Health may be viewed as, "What elements in a risk assessment should be left to the scientists to decide?" The superficial answer, of course, is, "the scientific questions." The three definitions of "scientific" have different practical implications for managing risk assessments:

Use of the consensus definition is attractive because it provides a dynamic, flexible approach to a dynamic scientific field. As scientific consensus forms on a particular question, that question would move beyond the reach of nonscientific judgment.
For our purpose, however, the usefulness of the consensus definition is limited by the difficulty in operationalizing it; for example (as the case of Arkansas creation science testimony demonstrates), a few eminent scientists can be found who will oppose many widely-held scientific theories--which would mean that very few components indeed would ever be defined as "scientific." A criterion of unanimity is impossibly high. A lesser standard of "majority support" is impractical, too. Historians of science point out that science, by its very nature, cannot be democratized; very frequently the majority views of scientists are upset by new scientific findings that are, at first, resisted by a numerical majority of scientists. The central problem is that science has no centralized system of authority--that is, science has no formalized way to certify the dominance of particular theories for the convenience of policy formulators.

Use of the empirical confirmation definition is, perhaps, the best literal definition of the term "scientific"--though it, too, may be difficult to operationalize. This definition directs attention to a central puzzle in the use of scientific expertise in policymaking: is it true that experts have a better "feel" for the answers to as-yet-untested scientific questions than laypersons? Some may doubt that scientists' informed hunches are more reliable than those of nonscientists. And even if scientists are better at guessing future scientific answers than lay persons, they may not guess in unison, leaving unsolved the familiar issue of what a policymaker should do when different scientists give different answers.

Use of the expertise definition raises other operational questions. Does the layperson or the scientist decide whether a particular component is too complex for lay decision making? How does the lay decision-maker make sure that the expert's personal values do not affect the expert's analysis?

For risk assessment, the term "value judgment" is synonymous with "selection of the appropriate degree of conservatism." For all the components seen as reflecting "value judgment," the underlying question appears to be how conservative a judgment to make in the face of scientific uncertainty. The choice for these components is essentially a matter of determining whether to employ principles of risk-averseness, which would lead to the use of worst-case assumptions, or whether principles of risk-tolerance should be employed.

SOME GENERAL OBSERVATIONS ON MANAGING RISK ASSESSMENTS

A review of the components of risk assessment leads to the following propositions:

A review of the components of risk assessment confirms the difficulty of managing risk assessment in the federal government. Risk assessments are performed in many diverse programs in the federal government.
Ideally, these assessments should: (a) reflect the latest scientific advances in knowledge and (b) be consistent. Federal management of assessments is greatly complicated by several facts that are inherent in risk assessment:

- There are many points in an individual assessment where discretion must be applied to cope with scientific uncertainty--and the results of an assessment are very sensitive to the assumptions inserted at all these points.

- It is difficult to distinguish in any objective way the scientific and the extra-scientific considerations affecting the choices among these assumptions.

- Many different scientific disciplines may be germane to a particular assessment, and multiple expertise may be involved in resolving particular components of an assessment.

- The choice of assumptions cannot be determined generically--many must be left to case-by-case judgment of the facts at hand for a particular substance. Reducing risk assessment to a set of predetermined decision rules could preclude the accommodation of scientific data unique to individual cases.

If risk assessment truly is an inextricable mix of scientific and value judgments, the practical course is to make sure that the assumptions made in assessing risk are routinely subjected to both scientific and political scrutiny. To summarize results from the last section, there is no practical objective definition of the term "scientific," and even if we employ subjective ratings by informed observers we find it difficult to distinguish scientific judgment from value judgment in risk assessment. It is therefore impractical to partition the responsibilities for risk assessment between distinct groups of scientists and policy officials. This leaves no practical alternative but to subject the assessments themselves--whoever performs them--to the independent review of both scientists and responsible policy officials. This line of argument leads to three propositions:

1. Risk assessments should routinely identify each area of inference where scientific uncertainty is confronted, and should state the analytic choice(s) made in each area.

2. Risk assessments should routinely be reviewed by some body of scientific experts, which should ascertain whether the assumptions made are consistent with current science.

3. Ultimate responsibility for all assumptions made should be borne by policy officials in order to ensure that any value judgments applied are subject to democratic processes.

The "Parameterization Tactic" may be of little practical use. One method that has been widely endorsed as a device that separates scientific and value judgment may be termed the "Parameterization Tactic"; it suggests that when there is scientific uncertainty in a risk assessment, the analyst should express the range of scientifically acceptable values and proceed with a multi-faceted analysis. For example, the P. Tactic holds that when two or three interspecies conversion factors are possible, the calculation should be done two or three ways, and each value carried forward--presumably in tabular form--for the decision-maker to choose among on policy grounds.

The P. Tactic is useful only where there are just a few sources of uncertainty in a risk assessment. This premise rarely holds. As noted above, a typical risk assessment has 20 or more components, each with an associated uncertainty factor. This implies a risk assessment presented as a 20-dimensional matrix to the decision maker. Such a form is likely to:

    prove incomprehensible to the policymaker

    provide uselessly wide overall ranges of risk estimates

In addition, many of the components are "all or nothing" judgments (e.g., whether benign tumors should be counted) that are difficult to express numerically. The P. Tactic cannot easily apply to hazard identification, which amounts to a series of such binary determinations.
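The combinatorial objection above can be made concrete with a short sketch. All component names and numeric values below are hypothetical, chosen only to show how quickly the Parameterization Tactic's table grows; none come from the paper.

```python
import itertools

# Hypothetical option values for just three uncertain components of a
# dose-response calculation (names and numbers are assumed, illustrative only).
component_options = {
    "interspecies_conversion_factor": [1.0, 6.0, 35.0],  # e.g., competing scaling rules
    "low_dose_slope_by_model": [1e-6, 5e-5],             # two extrapolation models
    "benign_tumor_weight": [0.5, 1.0],                   # down-weight or count fully
}

# The Parameterization Tactic carries every combination of choices forward.
combos = list(itertools.product(*component_options.values()))
estimates = sorted(a * b * c for a, b, c in combos)

rows = len(combos)                    # 3 * 2 * 2 = 12 table rows for only 3 components
spread = estimates[-1] / estimates[0]  # ratio of widest to narrowest estimate
```

Even with three components the decision-maker faces a dozen rows whose estimates span several thousand-fold; at the paper's 20 or more components, each with two or three options, the table would run to thousands or millions of rows.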

ATTACHMENT 3: COMPONENTS OF RISK ASSESSMENT

Hazard Identification (21 components)

Inferences from Human Data (6 components)

1) How should results from different routes of exposure be weighted? (e.g., Can conclusions about inhalation risk be drawn from data on exposure to the same chemical in drinking water or food?)

2) How should results at different tumor sites be weighted? (e.g., Should total tumors be counted or just those of the type or organ site of primary concern?)

3) How should benign tumors be weighted in comparison with malignancies?

4) Is the study scientifically adequate?* (e.g., Does it meet minimum standards of acceptability for epidemiology? Are there flaws in study design or execution that should be kept in mind in using the study findings?)

5) Which measure of association (confidence level, excess incidence level) should be used to determine whether a study is "positive"?* (e.g., What ratio of relative risk constitutes a positive finding?)

6) How should the various available study findings be weighted?* (e.g., Should positive studies outweigh negative studies?)

Inferences from Animal Bioassay Data (11 components)

7) How should results from different routes of exposure be weighted? (e.g., Should studies involving administration by gavage be counted as valid for potential air pollutants?)

8) How should data from different tumor sites be weighted?** (e.g., Should total-body tumors be counted or just those at specific organ sites?)

* A mandatory consideration if human data are present.
** A mandatory consideration if bioassay results are present.

9) How should benign tumors be weighted in comparison to malignancies?

10) Should tumor incidence or the number of affected animals be counted?**

11) How should results of studies showing high levels of spontaneous tumors in controls be factored in?

12) How should different treatment groups be weighted in determining whether a test is "positive"? (Should only the most sensitive dose/sex/strain be considered? Should "falloff" at higher doses be discounted?)**

13) What confidence level should be applied to classify a test as positive? (e.g., Should the 95% confidence interval be used to reject the null hypothesis of "no causal relationship between dose and effect"?)**

14) Does the study meet minimum standards for acceptability in bioassays? Are there flaws in experimental design or execution that should be kept in mind?**

15) Is the pathology adequate? (e.g., Are currently acceptable definitions of lesion types employed?)**

16) Are there physiological extenuations (e.g., a chemical's unique metabolic pathway or unique pharmacokinetics, expression at a unique organ site, possibility of a toxic mechanism-of-action) that should be considered?

17) How should varying test results be weighted? (e.g., How many positive tests are required for a finding of carcinogenicity? Should negative tests be given zero weight?)**

Inferences from Other Data (3 components)

18) Is a positive test in a particular short-term screening assay indicative of carcinogenicity?

19) What confidence level should be used to reject the null hypothesis in short-term tests?

** A mandatory consideration if bioassay results are present.

20) How much weight should be given to risk indications from structure/activity analysis?

General

21) What relative weights should be given to available human, bioassay, and other test indicators in concluding whether a chemical is a carcinogen?**

Dose Response Assessment (10 components)

Inferences from Human Data (4 components)

22) Which results from epidemiological studies should be considered? (e.g., Should the dose-response curve be based only on the steepest DR curve among epidemiology studies?)

23) What mathematical model should be used to extrapolate from observed doses to policy-relevant doses?*

24) Should the dose-response relationship be expressed as "best estimates" or in upper confidence limits?*

25) How should physiological extenuations be factored into the dose-response relationship?

Inferences from Bioassay Data (6 components)

26) How should varying studies be factored into the dose-response estimation? (e.g., Should the dose-response estimate be based solely on the most sensitive treatment group (strain/sex)?)**

27) What mathematical model should be used to extrapolate from experimental doses to human exposure levels?**

28) What factor should be employed for interspecies conversion of dose from animal to human?**

29) Should time-to-tumor effects be incorporated?

30) Should dose-response relationships be presented as "best estimates" or "upper confidence level" data points?**

31) How should physiological extenuations (metabolic saturation effects, etc.) be factored into the dose-response estimation?

* A mandatory consideration if human data are present.
** A mandatory consideration if bioassay results are present.
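Component 28 asks what factor should convert an animal dose to a human-equivalent dose. One common convention (an assumption used here for illustration, not a recommendation of the paper) is allometric body-weight scaling; the sketch below uses a 3/4-power exponent and a 70 kg reference human, both conventional but contestable choices--which is exactly the paper's point about assumption-laden components.

```python
def human_equivalent_dose(animal_dose, animal_bw_kg, human_bw_kg=70.0, exponent=0.75):
    """Convert an animal dose (mg/kg/day) to a human-equivalent dose via
    allometric body-weight scaling: HED = dose * (BWa / BWh) ** (1 - exponent).
    The 3/4-power exponent and 70 kg reference human are assumed conventions."""
    return animal_dose * (animal_bw_kg / human_bw_kg) ** (1.0 - exponent)

# A 10 mg/kg/day dose in a 0.025 kg mouse scales down to roughly 1.4 mg/kg/day.
hed = human_equivalent_dose(10.0, 0.025)
```

Swapping the exponent (1.0 for straight body-weight scaling, 2/3 for surface-area scaling) changes the converted dose several-fold, which is why the choice among factors is treated here as a discrete, reviewable assumption rather than a settled fact.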

Exposure Assessment

32) What points on the "gluttony scale" (e.g., 90th percentile of exposure, 4 times average intake, hypothetical "worst case" scenario) should be evaluated?***

33) To what extent should target-organ exposure substitute for exposure or intake levels?

Expression of Overall Results

34) What are the statistical uncertainties in the assessment, and how should the range of uncertainty be presented?

35) How should allowances for the most susceptible individuals (genetically predisposed, fetuses/infants, immunologically impaired) be made?***

36) What unit of risk (deaths? life-years lost? tumor incidence?) should be used to express ultimate results?***

*** Mandatory considerations for all quantitative risk assessments.
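Once a point on component 32's "gluttony scale" is chosen, computing it is mechanical; the sketch below, using synthetic data from an assumed lognormal intake distribution (a common but here purely illustrative assumption), shows that the calculation is trivial while the choice of which point to report remains a policy judgment.

```python
import random
import statistics

random.seed(42)
# Synthetic daily intakes (mg/day) for 10,000 hypothetical individuals.
intakes = [random.lognormvariate(mu=0.0, sigma=1.0) for _ in range(10_000)]

# Three candidate points on the "gluttony scale":
average = statistics.fmean(intakes)
p90 = statistics.quantiles(intakes, n=10)[-1]  # 90th-percentile exposure
four_times_average = 4.0 * average             # "4 times average intake"
worst_case = max(intakes)                      # crude stand-in for a worst-case scenario
```

Each candidate is a defensible number, yet they differ severalfold; which one anchors the risk estimate is a choice the assessment should state explicitly, per proposition 1 above.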
