3 Scenario Selection and Modeling

In this chapter, the NRC committee summarizes the general purpose of exposure scenarios, the reasons for using environmental transport models, and the specific scenarios and models employed by the Department of Toxic Substances Control (DTSC) of the California Environmental Protection Agency (Cal/EPA). A framework for evaluation of the exposure scenarios and aspects of the models that DTSC used are discussed. The focus of the chapter is the general processes involved in scenario selection and development of models, rather than particular details. Evaluations of some specific aspects of the DTSC scenarios and models can be found in Chapter 4.

Exposure Scenarios: Purpose

People, animals, plants, and lower organisms (collectively, receptors) are, or can be, exposed to contaminants originating from waste materials in a variety of ways and to varying degrees. Moreover, exposure pathways vary with time, as wastes are handled differently and as the behaviors of the exposed populations change. Predicting every way in which any receptor might be exposed is a never-ending and ever-changing task. Despite this, the fundamental aspects of exposures are often fairly constant, and the exposures of a large number, sometimes the great majority, of different receptors can be represented with fair accuracy by abstract exposure scenarios that incorporate the most important of those fundamental aspects.

The purpose of exposure scenarios is to provide abstract representations of real-world exposure situations in sufficient detail and in sufficient number to capture all the important exposure pathways. As emphasized in Chapter 2, the risk-management goals drive the selection of the exposure scenarios, and an adequate selection of scenarios is essential for the validity of a risk assessment. Even if the best possible models of environmental fate and transport were used, a poor selection of exposure scenarios might invalidate any conclusions drawn from the assessment. Poorly determined scenarios might result in risk estimates that are not remotely related to situations likely to occur in the real world, or in the evaluation of risks to the wrong set of receptors. For DTSC's purposes, assumptions regarding proximity to landfills, types of likely exposures (e.g., contact with contaminated soil or inhalation of airborne contaminants), and duration of exposure are critical to developing an accurate risk profile.

Modeling: Purpose

Environmental fate and transport models are needed to predict environmental concentrations because comprehensive measurements of contaminants in the situations of interest are lacking and because the risk assessments are often conducted a priori. Available measurements (from the field or from laboratory experiments) are used to extrapolate to probable situations. In DTSC's risk-based approach to classifying wastes, modeling is required to estimate environmental concentrations of contaminants in certain well-defined exposure situations (the exposure scenarios).
The models are used to evaluate the extent to which receptors (human, for the health-effects modeling; animal, plant, or lower organisms, for the ecological-effects modeling) would be exposed to chemicals contained in the waste. The object is to choose models that account sufficiently accurately for the physical processes involved in transport of the contaminants along the major pathways of exposure.

Summary of DTSC Exposure Scenarios

DTSC proposed four exposure scenarios to model risks of wastes not covered under the federal Resource Conservation and Recovery Act (RCRA). A summary of these scenarios, based on the DTSC report (1998a, pp. 56 ff.), follows. Given the concerns about DTSC's proposed approach discussed in this report, the committee does not endorse the accuracy, completeness, or appropriateness of these scenarios or of the following descriptions, which are based on the DTSC documentation. Deficiencies in the scenarios and their descriptions, and instances where DTSC has varied from its stated intent, are indicated below.

Adjacent Resident Scenario

Residents are assumed to live 100 meters (m) from the fence line of an operating landfill accepting wastes. The waste is assumed to contain up to 100% of the specific chemical contaminant under analysis. Landfill size and soil and waste conditions are consistent with those found at California landfill sites. Various transport mechanisms for the chemical contaminants are included in the modeling. The chemicals might be transported through the air as a vapor or attached to dust particles. The vapor might be inhaled, and the particles might be inhaled or deposited on backyard soil and vegetation. Residents might come into direct contact with the resulting contaminated soil, and they might consume foods contaminated by chemicals taken up from the contaminated soil or deposited dust particles. For some chemicals, transfer from a mother's blood into breast milk for ingestion by nursing infants is also taken into account.

Waste Worker Scenario

Workers handling the waste are assumed to handle undiluted waste directly, either before arrival at the landfill or at the landfill, and without (as far as the committee could determine) any assumed institutional health and safety controls. The pathways of exposure are inhalation of dusts or vapors, inadvertent ingestion of the waste, and dermal contact with the waste leading to dermal absorption.
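The pathway lists above all feed the same dose arithmetic. As a sketch of the standard chronic-daily-intake form used in screening assessments of this kind, applied to the worker's inadvertent-ingestion pathway, the following may be helpful; every numeric value is a hypothetical placeholder, not a DTSC parameter.

```python
# Standard screening-level chronic daily intake (CDI), sketched for the
# waste worker's inadvertent-ingestion pathway. Every numeric value is a
# hypothetical placeholder, not a DTSC parameter.
def chronic_daily_intake(conc, intake_rate, ef, ed, bw, at):
    """conc: mg contaminant per kg of medium; intake_rate: kg/day;
    ef: exposure days per year; ed: exposure duration in years;
    bw: body weight in kg; at: averaging time in days."""
    return conc * intake_rate * ef * ed / (bw * at)

cdi = chronic_daily_intake(
    conc=100.0,        # mg/kg in the handled waste (hypothetical)
    intake_rate=1e-4,  # kg ingested per working day (100 mg/day, hypothetical)
    ef=250,            # working days per year (hypothetical)
    ed=25,             # years on the job (hypothetical)
    bw=70.0,           # kg body weight
    at=70 * 365,       # days; lifetime averaging, as for carcinogens
)
print(f"chronic daily intake: {cdi:.2e} mg/kg-day")
```

Risk is then the intake multiplied by a chemical-specific potency factor, and each pathway in a scenario contributes its own such term.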
Land Conversion Scenario

This scenario replicates the assumptions of land application used by the U.S. Environmental Protection Agency (EPA) in deriving the maximum concentrations of contaminants allowed in biosolids (40 CFR Part 503). The assumptions include application of waste to the land surface at a rate of 0.7 kilograms (kg) of waste per square meter per year for 20 years. After the last application, residents are assumed to occupy homes built on the land. The residents have attributes and exposure pathways similar to those in the adjacent resident scenario, but are additionally assumed to eat fish from a surface-water body adjacent to the land on which the material was applied.

Ecological Effects Scenario

Ecological effects were addressed using a two-step approach. The first step relies on the ecological-toxicity exit concentrations developed for the proposed EPA Hazardous Waste Identification Rule (HWIR) (60 Fed. Regist. 66344, Dec. 21, 1995). HWIR is an EPA program similar to DTSC's program to categorize waste according to risks arising from its disposal. In the HWIR, EPA proposed waste concentrations for chemicals based on residential exposures as well as on ecological effects. Many of the chemicals for which DTSC has proposed total threshold limit concentrations (TTLCs) are included in the HWIR. For chemicals common to both HWIR and DTSC, if the concentrations proposed for residential protection in the HWIR proposal are less than those based on ecological effects in that same proposal, no further action would be necessary by DTSC; for such chemicals, the residential-protection concentrations in the DTSC proposal would be considered protective for ecological effects. As a second step, for the remainder of the chemicals with ecologically based HWIR-proposed values, DTSC has developed ecologically based TTLCs.

Modeling Used in the Scenarios

The modeling efforts used in these four scenarios are summarized in Table 3-1. One of those efforts was a modification by DTSC of CalTOX, a multipathway, multimedia model (DTSC 1998a, pp. 78-647).
It is implemented through a set of Microsoft Excel spreadsheets, using data summarized specifically for CalTOX (DTSC 1998a, pp. 182, 555, 591) and further data summarized in a separate spreadsheet, datcal.xls, maintained by DTSC.

TABLE 3-1 Models and Spreadsheets Used by DTSC for the Scenario-Chemical Combinations(a)

                     Upper TTLC Calculations                        Lower TTLC Calculations
Criteria             Residents Near Landfill   Waste Worker         Residents on Converted Land   Ecological
-------------------  ------------------------  -------------------  ----------------------------  -------------------------
Organic chemicals    CalTOX landfill           PEA worker organic   CalTOX land conversion        Described in DTSC (1998a)
                                               (Work_org.xls)
Inorganic lead       LeadSpread off-site       LeadSpread worker    LeadSpread land conversion
                     (Offsite.xls)             (Worker.xls)         (Landconv.xls)
Inorganic chemicals  PEA off-site              PEA worker           PEA land conversion
                     (Off_risk.xls,            inorganic            (Lnd_risk.xls,
                     Off_haz.xls)              (Work_met.xls)       Lnd_haz.xls)

(a) Source: Adapted from DTSC (1998a).

The preliminary endangerment assessment (PEA) model (DTSC 1998a, p. 794) was specifically modified for application by DTSC and also encoded in Microsoft Excel spreadsheets. DTSC modified a basic spreadsheet with specifics for the particular exposure scenario, chemical class, or desired toxicity measure (hazard index or lifetime risk estimate) to yield six spreadsheets.

LeadSpread (DTSC 1998a, p. 775) is a lead risk-assessment model maintained by DTSC to estimate blood lead levels in adults and children due to exposures to lead-contaminated environmental media. For this application, it was implemented in a Microsoft Excel spreadsheet. As with the PEA, the spreadsheet was modified for each exposure scenario to contain information specific to that scenario.

The modeling effort for the ecological scenario was considerably more limited. No implementations equivalent to the spreadsheets used for human receptors in the exposure scenarios were applied for ecological receptors.

Analysis of Scenarios and Modeling

Because of the large number of assumptions that went into the exposure scenarios, it is not feasible to address them all in this report. The committee was not asked to conduct a complete review of the extensive modeling; it did look in detail at selected areas. In both the scenarios and the modeling, there were substantial errors or omissions in conceptualization or application of the models. In this chapter, the apparent reasons for the shortcomings are examined by illustrating a few specific concerns. A more extensive discussion of the details of particular modeling problems is given in Chapter 4. The scenarios and modeling are examined according to the requirements discussed in Chapter 2. The following discussion examines the adequacy of DTSC's proposal with respect to:

  Scenarios: Connection to Policy. How the scenarios used by DTSC were selected, and how they are connected to the policy aims of the program.

  Scenarios: Completeness and Coverage. How it was ascertained that the scenarios examined are necessary and sufficient.

  Scenarios: Physical Processes and Models. What physical processes act to disperse contaminants from waste within each of the various scenarios, and how those physical processes are modeled within each scenario.

  Mathematical Models and Their Implementation. The mathematical representation of the physical models used within each scenario, how those models are simplified, and how the simplified mathematical models are implemented.

  Parameter Values Used. How representative the parameter values selected by DTSC and used in the models for each scenario are.

  Variability and Uncertainty. How well the parameter values account for variability across the State of California (with respect to geographical location) and between individuals within California, and how the uncertainty of some of the parameter values is taken into account.

  Sensitivity Analyses. The adequacy and completeness of the sensitivity analyses performed by DTSC.

  Validation and Quality Control. The adequacy of the procedures adopted by DTSC to ensure validation and quality control of its process, and the effectiveness of those procedures.

The ecological scenario is considered separately, because DTSC implemented it in a very different manner from the others.

The committee does not purport to provide an exhaustive list of errors, omissions, or limitations of the modeled scenarios; rather, it illustrates several examples of particularly egregious difficulties that were encountered. It is incumbent upon DTSC to review all the scenario and model assumptions, replace the ones that most clearly are in need of modification, and evaluate the assumptions in light of newer data and more realistic scenarios.

Scenarios: Connection to Policy

As discussed in Chapter 2, the relevant policy objectives for this program have not been stated in adequate detail.
Without any discussion of the policies that DTSC wishes to implement, the committee found it very difficult to assess the appropriateness of the exposure scenarios selected by DTSC. In written and oral responses (DTSC, personal commun., October 9, 1998, see Appendix C, No. 28; DTSC, personal commun., November 20, 1998), DTSC suggested that there had been some sort of process by which the included scenarios were selected; however, the process was not documented.

An illustration of the difficulty in examining the appropriateness of the scenarios arises in the waste worker scenario used to develop the upper TTLCs. DTSC has apparently decided to regulate some wastes based on the typical behavior observed for workers at landfills (a scenario in which institutional control or health and safety rules are absent). However, the committee assumes that DTSC has some influence on the behavior of such workers through direct regulation of the landfills themselves. Did DTSC consider, under the California Regulatory Structure Update (RSU), the possibility of modifying landfill worker behavior to meet required goals for landfills, rather than modifying the allowed input to landfills? There is no documentation that such an action was ruled out as a policy; if it was not, then the waste worker scenario selected might be completely inappropriate. A better approach might be to select acceptable waste concentrations based on other criteria and then regulate the behavior of the landfill worker, as is done for hazardous wastes.

Many of the details of the adjacent resident scenario were heavily criticized in the public comments. Those comments suggested that it is implausible that the scenario as developed by DTSC could occur, and the committee agrees with that criticism (for more details, see Chapter 4). Indeed, some extreme examples cited in the comments (e.g., the quarter-acre plot with cows and chickens, a large garden, and a fishing pond, all simultaneously used for provision of food) are literally impossible and cannot be construed to represent a plausible (let alone likely) scenario for adjacent residents. This scenario raises serious questions about both the selection of parameter values (see below) and the relationship of the scenario to policy goals.
For example, a scenario such as this might be appropriate if the policy goal is protection of the most-exposed individual; but if the scenario is meant to be plausible, the parameter values must be chosen so that they are not mutually exclusive (even at the extremes of the distributions of such parameter values). When asked about this particular example, DTSC stated that one pathway almost always dominates any exposure scenario (DTSC, personal commun., November 20, 1998); however, without more information from DTSC on how it arrived at this conclusion, that response is not sufficient to blunt the criticism of this scenario.

Although the scenarios discussed above for the calculation of TTLCs are flawed, they are explicitly defined. However, the derivation of soluble or extractable regulatory thresholds (SERTs) does not appear to fit into any of them; indeed, no formal definition is given of the precise scenario used for SERT derivation. The scenario appears to be based on drinking-water ingestion and may be loosely related to the adjacent resident scenario. However, any correspondence between the scenarios is not documented, and there appear to be significant differences; for example, the raininess implicit in the SERT scenario appears to contradict the dustiness of the adjacent resident scenario. Once again, the major problem appears to be poorly defined policy goals, which, according to oral presentations by DTSC (DTSC, personal commun., September 10, 1998), were changed during the process. However, neither the original nor the subsequent policy goals are documented.

The SERT scenario as implemented appears to be based more on a worst-case analysis than the other scenarios, because a worst-case landfill, as well as a worst-case location for residents living downgradient of that landfill, was used. Such conditions might not actually exist in California. The selection of a single dilution attenuation factor of 100 to generically represent California landfills has not been justified, particularly in a probabilistic scheme. Moreover, DTSC has made no attempt to connect California landfill conditions to the generic landfill conditions used by EPA in its original development of the RCRA toxicity characteristic leaching procedure (TCLP) (not cited by DTSC in its documentation). The derivation of the liner protection factors is misplaced, in that it starts from a hypothetical base case that does not (as far as the committee can tell) correspond to EPA's generic base case.
In addition, there is no explained connection between the derived criteria and the toxicity indicators used (surface-water-quality criteria, maximum contaminant levels, and the calculated groundwater concentrations). For example, is it California policy that all groundwater must meet maximum-contaminant-level requirements or surface-water-quality criteria? Logically, if such a connection exists, it should exist at all downgradient distances, not just those corresponding to a dilution attenuation factor of 100 (see Chapter 4 for further discussion of the dilution attenuation factor).

It is incumbent on DTSC to devise and document meaningful scenarios. The choice of the adjacent resident scenario, the waste worker scenario, or the land conversion scenario depends to a great extent on policy aims, and their effective application depends strongly on the details of their implementation. Although the general types of scenarios selected might be appropriate, DTSC needs to document the process of scenario selection, and the specifics of these scenarios almost certainly will need to be modified.

Scenarios: Completeness and Coverage

DTSC has not provided sufficient information on all the types of waste situations that it is trying to address. For example, if the scenarios involving landfills are also supposed to take account of waste piles (as suggested by DTSC), this approach must be justified explicitly. The committee questions whether the approach taken by DTSC has resulted in some of the peculiar selections of parameter values, or has resulted in combinations of parameter values (each selected from possible situations) that are used to construct physically impossible scenarios. For example, 100% of the landfill surface is assumed to consist of a waste. Is that because this scenario is meant also to represent a waste pile, or because there is a policy (unstated in the scenario descriptions, but suggested in oral presentations) that some particular class of waste can be used as landfill-cover material?

Without some discussion of the totality of situations that DTSC expects to regulate, there is no way to judge the completeness of the scenario selection or the adequacy of the few scenarios selected to take account of other situations. For example, the committee was unable to discern whether DTSC intended to evaluate potentially contaminated soil removed from one site and emplaced on another. The scenarios have to provide complete coverage for the policy goals. For example, if protection of sensitive subpopulations such as children is among the policy goals, it is up to DTSC to demonstrate how its chosen scenarios are protective of that subpopulation. In this particular case, would the chosen scenarios adequately account for a school built on land on which contaminated soil had been emplaced?
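Coverage questions like these are ultimately quantitative. For instance, the cumulative loading implied by the land conversion scenario's application assumptions can be bounded with one-line mass-balance arithmetic. In the sketch below, the application rate and duration come from the scenario as described; the mixing depth and soil bulk density are assumed illustrative values, not DTSC parameters.

```python
# Back-of-envelope soil loading for the land conversion scenario.
# Application rate and duration are the scenario's stated assumptions;
# mixing depth and bulk density are illustrative assumptions only.
application_rate = 0.7   # kg waste per m^2 per year (scenario assumption)
years = 20               # years of application (scenario assumption)
mixing_depth = 0.15      # m; assumed tillage/mixing depth
bulk_density = 1500.0    # kg/m^3; assumed soil bulk density

waste_loading = application_rate * years       # kg waste per m^2 of land
soil_mass = mixing_depth * bulk_density        # kg soil per m^2 of land
waste_fraction = waste_loading / soil_mass     # kg waste per kg of mixed soil

print(f"cumulative loading: {waste_loading:.1f} kg/m^2")
print(f"waste fraction in the mixed soil layer: {waste_fraction:.1%}")
```

Multiplying the waste fraction by a contaminant's concentration in the waste gives the post-application soil concentration on which the residential pathways would then act; the same arithmetic applies to soil emplaced from another site.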
Scenarios: Physical Processes and Models

For both the health- and ecological-effects modeling, DTSC needs to provide a clear description of the scenarios to be considered, followed by an analysis of the physical processes that can act on a waste, and on the individual chemicals in it, under each scenario. The processes of most interest generally involve dispersion of chemicals from the waste into the environment, where receptors can be exposed to them. Such processes include evaporation of chemicals, followed by dispersion through the air; dissolution in water and transport by water flow or dispersion; adherence to dust, followed by dispersion of that dust by the wind; adherence of contaminated soil to a person's skin, followed by diffusion of the chemical through the skin; and many others. The object of such descriptions is to elucidate the pathways that might lead to exposure of the chosen receptors. Once the pathways and processes have been identified, various models can be used to estimate the rate at which chemicals travel along each pathway and, hence, the rates at which receptors would be exposed.

The fate and transport models are used to evaluate the flows of chemicals through the environment under the conditions envisioned by the scenarios. To that end, the correct models have to be selected to represent the physical processes involved in the scenarios, and parameter values appropriate to the scenario must be used. For example, in the landfill scenarios, little consideration has been given to the groundwater pathway (although the groundwater pathway is discussed separately in the SERT derivations, it should also be incorporated in any multipathway assessment); wind-blown dust is believed to be the predominant source for the exposure pathways. There are multiple problems with the assumptions selected for evaluation of these pathways, ranging from assumptions about mixing height to particle-size distributions to the assumption of a monofill dump. Each of these problematic assumptions affects the expected exposure, yet the specifics regarding their selection and use are not clearly identified.
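As a sketch of how models are chained along a single pathway, the wind-blown dust route can be followed from emission through dispersion to deposition with a screening-level box model. Only the approximately 670-m landfill dimension below comes from the report; every other value is a hypothetical placeholder, and a real assessment would use standard EPA emission, dispersion, and deposition models.

```python
# Screening-level chain for the wind-blown dust pathway:
# emission flux -> box-model dilution -> deposition flux.
# All values except the landfill dimension are hypothetical placeholders.
emission_flux = 1e-6    # g dust per m^2 per s emitted from the surface (assumed)
source_length = 670.0   # m; along-wind landfill dimension (from the report)
wind_speed = 3.0        # m/s; assumed long-term average wind speed
mixing_height = 10.0    # m; assumed box height (a critical choice)
dep_velocity = 0.01     # m/s; assumed deposition velocity for dust

# Steady-state box model: emission swept up over the source length,
# diluted by the ventilation rate per unit crosswind width (g/m^3).
conc = emission_flux * source_length / (wind_speed * mixing_height)

# Flux of dust settling onto downwind surfaces (g/m^2/s).
deposition_flux = conc * dep_velocity

print(f"airborne dust concentration: {conc * 1e6:.1f} ug/m^3")
print(f"deposition flux: {deposition_flux:.2e} g/m^2/s")
```

Even this crude chain makes the couplings explicit: halving the assumed mixing height doubles the predicted concentration, so such parameters cannot be fixed arbitrarily.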
An example of inadequate specification of the physical processes occurs in the modeling of dust exposures. Exposures to dust occur in all the modeling scenarios for humans: the on-site landfill worker, the off-site resident near a landfill, and the resident on converted land. In all cases, DTSC has arbitrarily selected a dust concentration of 50 micrograms per cubic meter (µg/m3) to represent the airborne dust at a contaminated area, because it corresponds to a typical ambient average dust concentration. Such a choice is both incorrect and internally inconsistent. For example, the off-site resident near the landfill is supposed to be exposed to 50 µg/m3 from the adjacent landfill, whereas the resident on converted land is exposed to 50 µg/m3 from land on his or her own property. Thus, two distinct, small source areas each contribute concentrations equal to this total. An adequate representation of total exposures to dust requires three modeling efforts: an emission-rate estimate, a dispersion estimate, and a deposition-rate estimate. The first can be accomplished by using dust-emission models (e.g., those available in EPA's AP-42; EPA 1997b); the second, by using standard dispersion models (e.g., the industrial source complex (ISC) model or the fugitive dust model (FDM)); and the third, by using standard deposition models (e.g., the California Air Resources Board (CARB) algorithms, as implemented in ISC2 (EPA 1992), or the acid deposition and oxidant model (ADOM) algorithms, as implemented in ISC3 (EPA 1995b)).

Mathematical Models and Their Implementation

Estimation of the environmental concentrations resulting from particular physical processes is generally carried out by using mathematical models. In principle, other approaches could be used (e.g., analog models or direct measurement in matching situations), but the alternatives are generally too limited, too expensive, or otherwise impractical. As mentioned above, the particular models used must match the processes involved in the particular scenarios selected. Unfortunately, there are many instances in which this does not occur in the DTSC modeling. The organic-vapor-emission model is one example, but there are other mismatches (see the discussion of dust modeling above and the discussions in Chapter 4).

The organic-vapor-emission model used in the waste worker exposure spreadsheet is incorrect for the scenario described. The adopted model corresponds to the average emission rate over some period if a uniform contaminated material is placed (to infinite depth) over the whole area (in this case, the whole landfill) at time zero, with no infiltration of rainfall.
For a landfill, contaminated material is unlikely to be placed uniformly to great depth over the whole landfill just at the time of initial employment of any individual worker. Although a zero infiltration rate is possible in some areas of California, organic vapor emissions are strongly affected by rainfall infiltration in wet climates and by the effective negative infiltration in dry climates. Omission of such an important physical process as positive and negative infiltration cannot be justified.

The organic-vapor-emission model used has two principal problems even for situations of zero rainfall infiltration. For highly volatile materials, the predicted emission rates are so high that any practical depth of contamination is rapidly depleted in its entirety. Although less obvious, the emission-rate predictions for highly involatile materials are also much too high. Such involatile materials hardly evaporate at all, and so are barely depleted; in practice, their emission rate is governed more by diffusion through the boundary layer of air above the soil than by diffusion through the soil, and the depletion depth is measured in microns. Thus, two further physical processes have been omitted: depletion of the source material and the presence of an important evaporation barrier. Moreover, even the (incorrect) organic-vapor-emission model selected was incorrectly implemented in the spreadsheet, because an additional factor of 0.1 was included. No justification for such a factor was documented, and there is no physical process to which it might correspond.

Scenarios and Models: Parameter Values

Even if adequate mathematical models are selected to match the physical processes occurring in each scenario, the evaluations can be invalidated by the selection of incorrect parameter values for use in those models. Chapter 4 discusses many cases of incorrect or unjustified parameter values. Two examples are given here of one class of error that appears to be fairly pervasive in DTSC's modeling: the selection of parameter values that do not correspond to what the scenario requires. In the modeling of dust exposures, the deposition rate used by DTSC is said to come from a report done by D. P. Hsieh and co-workers in 1996 (provided in DTSC 1998a, p. 555).
That reference does not contain any estimate of the deposition velocity of wind-blown dust. It does contain a deposition velocity, and a standard deviation (SD) for that velocity, for ambient air particles, based on many measurements of deposition rates for ambient particles under many environmental conditions. (The numerical values for the mean and SD used by DTSC are actually very different from those in the cited reference, but this transcription error is not the issue here.) There are two fundamental errors involved in using the cited reference for the deposition velocity in DTSC's modeling of the dust-deposition components of the scenarios. First, deposition velocity depends on particle size and environmental conditions (e.g., wind speed and surface roughness), so the particle-size distribution of the dust in DTSC's scenarios matters. The size distribution of background ambient particles is substantially different from that of wind-blown dust, so the measurements do not correspond to the sizes of the particles involved. Second, DTSC used the standard deviation of all the measurements (reflecting geographical, temporal, surface-condition, and meteorological variation), but this value does not correspond to what the scenarios require, which is the variability (from place to place) of the long-term average deposition velocity.

A similar problem with a variability estimate can be found in the Hsieh report's estimate of average ambient dust concentration (provided in DTSC 1998a, pp. 555 ff.). The dust modeling performed by DTSC is incorrect, because what is required is the mean and variance (from place to place in California) of the long-term average ambient dust concentration, and the values given in DTSC (1998a, pp. 555 ff.) clearly do not correspond to such annual averages. What appears to have been given is the mean and SD of all the daily measurements at 42 stations, instead of the mean and SD of the annual averages at those 42 stations.

The air-dispersion modeling used in the modified PEA and LeadSpread models was also inadequate, principally as a result of the selection of an incorrect parameter value. This modeling purported to use a box model, yet the vertical dimension of the box is not discussed. Tracking down the reference for the value of this parameter leads to the PEA documentation (DTSC 1998a, p. 794), where again the parameter value is not discussed but is simply selected as 2 m.
A box model might be adequate in both contexts, but the parameter values are critical and must be based on physical processes, not arbitrarily selected. In the original context of the PEA model, an urban garden with a dimension of about 22 m, a box height of 2 m might be appropriate; it is certainly not appropriate for DTSC's dispersion modeling of a landfill, where the dimension of the landfill is approximately 670 m. Even for such a simple model, some discussion of the physical principles involved is required.

Examination of the support documents used for CalTOX indicates that there are other problems with DTSC's parameter-value estimation. Two such problems can be found in the estimate of the partition coefficient for plant tissue for trichloroethylene (TCE) (Hsieh et al. 1994, p. 25). First, this parameter is estimated for TCE from a regression equation computed by Travis and Arms (1988) using 29 persistent organochlorines. What is not mentioned, and what is likely to completely invalidate the use of such data, is that TCE has a much higher vapor pressure than any of those 29 chemicals; in other words, the correlation is being used far outside its likely range of validity. Second, the estimated value is obtained from a log-log correlation, with the assumption of a normal error distribution on the logarithmic scale, but the estimated value is then reported as the mean in Hsieh et al. (1994). In such circumstances, the value estimated from the correlation is a biased median estimate, not a mean. These two errors appear to be pervasive; the same types of errors occur for several of the other biotransfer factors listed in Hsieh et al. (1994). For the biotransfer factors to meat, milk, and eggs, correlations derived for compounds that are practically not metabolized are being extended to compounds that are rapidly metabolized. Once again, the correlations are being used far outside their range of validity.

Models: Variability and Uncertainty

The treatment of variability and uncertainty(1) is inadequate in the DTSC report. No distinction is drawn between the two. Such a distinction is important in establishing compliance with health-protection goals such as protecting 95% of the population 90% of the time, although it is difficult for the committee to comment extensively on this topic because of the lack of adequate documentation of the policy aims to be met by DTSC's modeling.

For practically all distributions, DTSC has assumed lognormality. (There are a few exceptions provided in DTSC's response to the committee, although one of the distributions that was supposed to be triangular was inadvertently omitted from the spreadsheet implementation.) DTSC has not verified the adequacy of this assumption, and the committee cannot confirm it. It is plausible that such an assumption might be adequate for certain calculations, for example, for estimating the standard deviation of the result of the calculations. However, to the extent that policy was articulated, it called for regulations based on the 90th percentile of output distributions. Some attempt needs to be made to verify that the distribution of results is not too strongly affected by the assumption that all input distributions are lognormal, particularly where very few distributions contribute the majority of the variability in the output.

(1) The definitions of variability and uncertainty given here are taken from the NRC report Science and Judgment in Risk Assessment (NRC 1994). Variability is defined as the individual-to-individual differences in quantities associated with predicted risk, such as in measures of, or parameters used to model, ambient concentration, uptake or exposure per unit ambient concentration, biologically effective dose per unit exposure, and increased risk per unit effective dose. Uncertainty is defined as the lack of precise knowledge as to what the truth is, whether qualitative or quantitative; uncertainty may occur in estimates of the types, probability, and magnitude of effects of, and exposures to, a chemical.

Models: Sensitivity Analyses

Sensitivity analysis is a procedure for determining how sensitive the results of a complex model or analysis are to various assumptions about the values of parameters and the structure of relationships. The construction of complex analyses, such as those used by DTSC in the proposed waste-classification system, should be accompanied by an extensive and comprehensive strategy for analyzing the sensitivity of the results of the analyses. While the model is being developed, the repeated application of sensitivity-analysis techniques allows the developers, the reviewers, and, ultimately, the users of the model to do the following:

  Identify the dominant factors that most influence the models' results.
Focus attention on assumptions concerning dominant pathways and improve efforts to reduce the uncertainty associated with them.

Focus development, analytical, and review efforts on the most important assumptions on which the model and analysis are based.

A comprehensive sensitivity analysis requires a full understanding
of the structure of a model and the nature and strength of the multifarious assumptions that have been made in its construction. It is rarely possible to conduct a sensitivity analysis on every assumption a model contains, and distilling useful insights from such a process would be almost as difficult as undertaking the analysis itself. Thus the sensitivity analysis must, in its design, include substantial insight into the factors that have the potential to most influence the results. To the extent that sensitivity analyses were conducted, no apparent effort was made to communicate the insights gained from these analyses to the public in a clear manner. The statistical results were summarized deep within the appendices of the reports presented for review. It requires a reader with substantial experience in stochastic models to interpret the results that were summarized. It might be that this failure to communicate the results in a clear manner resulted in substantial unnecessary debate about assumptions that were not important. For example, DTSC staff responsible for some of the analyses indicated that the sensitivity analyses demonstrated that the model's results were insensitive to assumptions about how much clothing a landfill operator was wearing because dermal exposures were not responsible for significant amounts of risk. Assuming that this interpretation is correct, communicating it clearly to the public would have saved significant amounts of controversy.2

Another method of assessing the sensitivity and quality of the models is to evaluate intermediate results within the calculation. DTSC should develop basic model input data and make available all intermediate output data for independent evaluation. This would allow an external group, such as the NRC committee, to evaluate the results of the modeling more completely.
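One low-cost way to support such independent evaluation is for the model to emit every intermediate quantity, not just the final risk estimate. The sketch below is purely illustrative: the compartment chain, parameter names, and numbers are invented, not DTSC's actual model.

```python
import csv
import io

def run_exposure_model(soil_conc, params, trace):
    """Toy multimedia chain: soil -> air -> inhaled dose -> risk.

    Every intermediate value is appended to `trace` so that reviewers
    can audit each step, not just the final answer.
    """
    air_conc = soil_conc * params["emission_factor"] / params["box_height"]
    trace.append(("air_conc_mg_m3", air_conc))
    dose = air_conc * params["breathing_rate"] / params["body_weight"]
    trace.append(("dose_mg_kg_day", dose))
    risk = dose * params["slope_factor"]
    trace.append(("risk", risk))
    return risk

# Invented parameter values, for illustration only.
params = {"emission_factor": 1e-6, "box_height": 2.0,
          "breathing_rate": 20.0, "body_weight": 70.0,
          "slope_factor": 0.1}
trace = []
risk = run_exposure_model(soil_conc=100.0, params=params, trace=trace)

# Write the full audit trail as CSV for external review.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["quantity", "value"])
writer.writerows(trace)
print(buf.getvalue())
```

Publishing such a trace for each scenario would let an external group reproduce and check every link in the calculation chain.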
At the second public meeting, DTSC staff members suggested that the risk estimate obtained from the scenario selected is typically dominated by exposure via a single environmental pathway. They further asserted that the scenarios selected account for all of the important environmental pathways and should suffice to protect the health and safety of California residents. The documents presented to the committee do not provide adequate support for either of these contentions. 2 The fact that one of the questions DTSC submitted to the committee focused on the dress habits of landfill operators suggests that the results of the analysis were not clearly communicated to the department managers either.
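The single-pathway-dominance claim is, in principle, directly checkable from the Monte Carlo output that such analyses already generate: count, across sampled realizations, how often each pathway supplies most of the total. A minimal sketch, with invented lognormal pathway distributions (not DTSC's actual parameters):

```python
import math
import random

random.seed(1)

# Hypothetical pathway doses, each lognormal; (mu, sigma) pairs are
# invented for illustration, not taken from DTSC's analyses.
pathways = {"inhalation": (math.log(1e-5), 1.0),
            "soil_ingestion": (math.log(3e-6), 0.8),
            "dermal": (math.log(2e-7), 0.5)}

n = 10_000
samples = {p: [random.lognormvariate(mu, sigma) for _ in range(n)]
           for p, (mu, sigma) in pathways.items()}
totals = [sum(samples[p][i] for p in pathways) for i in range(n)]

# Fraction of realizations in which each pathway contributes more than
# half of the total exposure.
for p in pathways:
    dominant = sum(samples[p][i] > 0.5 * totals[i] for i in range(n)) / n
    print(f"{p}: dominant in {dominant:.0%} of realizations")
```

If one pathway were dominant in nearly all realizations, a table of exactly this kind would be the natural way to support the assertion made at the meeting.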
Specifically, detailed sensitivity analyses and comparative modeling runs of selected sites should be included in the document. DTSC used the sensitivity-analysis options currently available in the Monte Carlo software, Crystal Ball by Decisioneering, Inc. Although such subfunctions are quite useful for a scoping analysis, a more thorough treatment of sensitivity analysis is required. In particular, the only parameter values for which sensitivity analyses were performed were those assigned distributions in the analyses. However, some of the most important parameter values (e.g., soil-mixing depth, dispersion-model box height) were not assigned distributions, but were simply given as point estimates. These are usually the assumptions that are most uncertain and based on the least knowledge. The sensitivity of the results to such parameter values was not examined in detail, if at all.

Moreover, some of the parameter values that ought to be included in a sensitivity analysis are hidden in the support documents. For example, the half-lives of chemicals in the ground-surface and root-zone soils (Hsieh et al. 1994, p. 22) are based on estimated values (reported as measurements), but it is arbitrarily assumed that there is an extra factor of 5 uncertainty. The sensitivity of any of DTSC's results to this factor is unknown, because sensitivity analyses were not performed on any of the uncertainty factors. Upper percentiles of the risk distribution might be very sensitive to such factors. A detailed analysis of variance of all results (see, e.g., McKone and Ryan 1989), coupled with stepwise regression analyses, might lead to a different conclusion as to which parameters are most influential. For example, variables possessing considerable collinearity could be reduced in their apparent significance. The committee cannot fully evaluate the strengths and weaknesses of DTSC's approach without such analyses.
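The point about upper percentiles can be demonstrated directly: for a lognormal quantity, an extra multiplicative uncertainty factor leaves the median nearly unchanged while moving the 90th percentile substantially. A sketch using an invented half-life distribution (the median, geometric standard deviation, and interpretation of the "factor of 5" are all illustrative assumptions):

```python
import math
import random

random.seed(2)

def percentile(xs, q):
    """Empirical percentile by sorting (adequate for a demonstration)."""
    xs = sorted(xs)
    return xs[int(q * len(xs))]

n = 50_000
# Illustrative half-life distribution: median 30 days, GSD 2 (invented).
base = [random.lognormvariate(math.log(30.0), math.log(2.0)) for _ in range(n)]

# Extra "factor of 5" uncertainty, read here as an independent lognormal
# multiplier with median 1 whose 95th percentile is 5.
sigma_u = math.log(5.0) / 1.645
inflated = [x * random.lognormvariate(0.0, sigma_u) for x in base]

for label, xs in (("base", base), ("with factor-of-5", inflated)):
    print(f"{label}: median={percentile(xs, 0.50):.1f}  "
          f"p90={percentile(xs, 0.90):.1f}")
```

The median stays near 30 in both cases, while the 90th percentile roughly doubles; this is exactly why sensitivity analyses restricted to parameters already carrying distributions can miss the factors that most move the regulatory percentile.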
Even within the framework of the sensitivity analyses performed by DTSC, it would be useful to see the results for several selected cases. This could permit assessment of questions such as: Is it always a single pathway that dominates? How do the exposures from the various scenarios differ? If multiple pathways contribute, how are they best controlled? DTSC documents do not appear to provide this information.

Models: Validation and Quality Control

There appears to have been very little validation of any of the models
used by DTSC, in any sense of the term. In the context of DTSC's approach, validation could include the following:

Each exposure scenario selected should correspond to a real-world situation.

Each physical process that is modeled should occur in the corresponding scenario.

The theoretical models describing the physical processes should correctly represent those physical processes.

The simplified mathematical models representing the theoretical models should adequately approximate the theoretical models over all allowed ranges of input values.

The implementation of the model should correspond to the simplified mathematical model; that is, the implementation should be a mapping of the simplified mathematical model (expressed in mathematical notation) to a computational scheme (e.g., a spreadsheet, a specialized computer program, or a set of such programs) in such a way as to preserve the mathematical structure of the simplified mathematical model. The only difference between numerical results produced by the implementation and those produced by the simplified mathematical model, for any allowed input values, should be due to the finite arithmetic precision of practical computational devices. Good practice should be applied to minimize any such differences.

The parameter values used in running the implementation for any particular scenario must be correct (i.e., match experiments) for the conditions of that scenario and should correspond to the definition of the parameter in the mathematical model (i.e., be based on measurements of the correct parameter).

The results from the implementation of the model should be acceptably close to observations over the entire range of input values.

Although it is unlikely that all of these validations would ever be fully performed for any modeling system, it is quite apparent that many that are entirely feasible have not been carried out for many of the model implementations used by DTSC.
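The implementation-versus-model check in particular is cheap to automate: sweep the allowed input range, run both the coded implementation and the closed-form mathematical model, and flag any disagreement beyond what finite precision and discretization can explain. A hedged sketch, using first-order decay as a stand-in for the kind of submodel a spreadsheet might implement:

```python
import math

def decay_analytic(c0, k, t):
    """Closed-form first-order decay: C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def decay_spreadsheet(c0, k, t, steps=10_000):
    """Discretized implementation, mimicking a row-by-row spreadsheet update."""
    dt = t / steps
    c = c0
    for _ in range(steps):
        c *= (1.0 - k * dt)
    return c

# Sweep a range of allowed inputs and record the worst relative discrepancy.
worst = 0.0
for c0 in (1.0, 100.0):
    for k in (0.01, 0.1, 1.0):
        for t in (0.5, 5.0):
            a = decay_analytic(c0, k, t)
            s = decay_spreadsheet(c0, k, t)
            worst = max(worst, abs(s - a) / a)
print(f"worst relative discrepancy: {worst:.2e}")
```

A discrepancy that grows beyond the expected discretization error anywhere in the sweep signals that the implementation has drifted from the documented mathematical model, which is precisely the class of disparity the committee identifies below.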
Thus, the committee has found the following disparities:

The physical processes occurring in the scenarios differ from the models selected to represent them (dust emissions, dispersion of dusts and vapors, vapor emissions).
The implementations of some models do not correspond to the simplified mathematical models described in the documentation.

Some input values do not correspond to the values required by the model in the context of the particular scenario.

Some input values used differ from the documentation.

Ecological Scenario

The approach taken to ecological evaluation is substantially different from that adopted for evaluation of human health risks. DTSC proposed a two-step method to determine the potential level of hazard of wastes to wildlife. In DTSC's report, this approach was adopted "due to the lack of an accepted risk assessment methodology for ecological toxicity" (DTSC 1998a, p. 62). This methodology was followed only for the lower TTLCs, which distinguish nonhazardous wastes from special wastes, because it was thought that the protection of ecological resources would be similar in class I, class II, and class III landfills. No justification for making this assumption was given by DTSC. DTSC should provide a rigorous validation of this assumption.

The first step was the use of the values presented in the federal proposed HWIR (60 Fed. Register 66344, Dec. 21, 1995). The HWIR methods include scenarios developed for residential exposure and a variety of ecological effects, although the similarity of any of these scenarios to those adopted by DTSC is not discussed. The documents describing the HWIR approach are obscure (RTI 1995a,b) and were not provided to the committee for review. It is unclear to what extent wildlife-specific exposure scenarios were adopted in the HWIR. DTSC did develop ecological-effects TTLCs for those chemicals for which the HWIR analysis indicated that the derived exit level (similar to a TTLC) for ecological receptors would be less than the exit level for humans using the HWIR scenarios and analyses.
The methods applied by DTSC to derive TTLCs that are specific for wildlife did not include wildlife-specific exposure scenarios, but rather only applied wildlife-specific threshold values for the hazard of chemicals. Thus, the lower threshold TTLCs derived were not truly specific to wildlife, because they still relied on a scenario of human exposure. This approach might not always be protective of wildlife for several reasons. The diets of wildlife are often not as varied as those of humans. Wildlife eat what is available in their home range and, thus, can have significantly greater exposures
than humans. A good example of this is birds living in areas adjacent to hazardous-waste landfills along the shores of the Great Lakes (Ludwig et al. 1993). In those areas the exposure of wildlife is much greater than that of humans, and because the toxicity of chemicals might be similar in wildlife and humans, the risk to wildlife could be much greater than that to humans. Based on this line of reasoning, the use of human exposure scenarios is inappropriate for wildlife.

Of the 36 chemicals listed by DTSC, 28 have both HWIR human and HWIR ecological exit levels3 (DTSC 1998a, p. 63). Of the 28 chemicals, 11 have HWIR ecological exit levels less than the HWIR human exit levels4, and 17 have higher values (DTSC claims 18 of 29; see footnote 3). Based on this analysis, DTSC argued that the application of lower threshold TTLCs derived from human health endpoints and human exposure scenarios would be protective of ecological receptors for those 17 (or 18) chemicals. Of the eight chemicals for which no HWIR comparison was possible, DTSC contended that "the other eight chemicals [for which no HWIR ecological exit levels were calculated] were considered low priority in the HWIR analysis and, therefore, are not likely to present significant threats to ecosystems at concentrations below those that would be of concern for human health." Although either or both of these arguments may be true, DTSC provides no analytical support for these conclusions. In fact, because HWIR exit levels for ecological receptors were less than those for human receptors for approximately one-third of the chemicals, it could be argued that the application of exit-level TTLCs based on protection of human health would fail approximately one-third of the time (but see footnote 4). This seems to be a strong argument for the development of a specific ecological risk-assessment scheme for wildlife receptors.
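The home-range argument can be made quantitative with a simple hazard-quotient comparison. The intake rates, local-diet fractions, and reference dose below are invented purely to illustrate the structure of the calculation; they are not values from DTSC or HWIR.

```python
def hazard_quotient(soil_conc_mg_kg, intake_soil_kg_day, diet_fraction_local,
                    body_weight_kg, reference_dose_mg_kg_day):
    """HQ = dose / reference dose; the dose scales with the fraction of
    intake drawn from the contaminated area."""
    dose = (soil_conc_mg_kg * intake_soil_kg_day * diet_fraction_local
            / body_weight_kg)
    return dose / reference_dose_mg_kg_day

soil = 50.0  # mg/kg, illustrative waste concentration

# A human with a varied diet and low incidental soil ingestion...
hq_human = hazard_quotient(soil, intake_soil_kg_day=1e-4,
                           diet_fraction_local=0.05, body_weight_kg=70.0,
                           reference_dose_mg_kg_day=1e-3)
# ...versus a small bird feeding entirely within a contaminated home range.
hq_bird = hazard_quotient(soil, intake_soil_kg_day=5e-3,
                          diet_fraction_local=1.0, body_weight_kg=1.0,
                          reference_dose_mg_kg_day=1e-3)
print(f"human HQ = {hq_human:.2e},  bird HQ = {hq_bird:.2e}")
```

Even with an identical reference dose, the wildlife receptor's hazard quotient here exceeds the human's by orders of magnitude, because exposure (not toxicity) drives the difference; this is why refining only the hazard side of the assessment, as in DTSC's second step, cannot demonstrate protection.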
For hexavalent chromium, one of the 11 chemicals passing through the HWIR screen, DTSC indicated that the proposed human-health-based TTLC equaled the HWIR ecological exit level, and considered this to demonstrate that the TTLC would be adequately protective of wildlife.

3 In the text, DTSC (1998a, p. 62) claims that 29 of the 37 chemicals being analyzed had both HWIR human and HWIR ecological exit levels; at other points in the report, reference is made to 38 chemicals being analyzed.

4 The committee notes that the HWIR implementation contained many errors, so it is impossible to confirm that the same results would be obtained from a correct implementation of the intended HWIR approach.
Again, analytical support is lacking. For the remaining 10 chemicals for which the HWIR ecological exit levels were lower than the human exit levels (endrin, methoxychlor, lead, mercury, selenium, nickel, vanadium, cadmium, zinc, and copper), DTSC moved to a second step of ecological risk assessment. The development of lower-threshold TTLCs for these chemicals was based on an additional chemical-by-chemical analysis by DTSC. This further analysis was primarily a refinement of the reference doses for wildlife and did not include a refinement of the exposure assessment. The lower-threshold TTLCs derived included consideration of background concentrations and screening-level values derived by other organizations. The TTLC values derived for the 10 chemicals seem reasonable, but were not derived by use of a defined wildlife-specific ecological risk-assessment procedure. Because this refinement only considered the hazard portion of the risk assessment, it is inadequate to demonstrate protection of wildlife. Furthermore, the methods applied to the 10 chemicals are somewhat arbitrary and chemical specific, and could not be applied as a more generalized method for other chemicals. DTSC has no explicit plan or method to add more chemicals to the list of TTLCs. Currently, the ecological methodology is at best incomplete and poorly justified, and at worst potentially underprotective. With respect to the ecological methodology used by HWIR, the committee agrees with the U.S. Environmental Protection Agency's Science Advisory Board (EPA 1996):

The ecological analysis in the HWIR document is fundamentally flawed because a lack of data has been implicitly equated with lack of adverse ecological effect throughout the analysis. As a result, only a handful of well-studied chemicals have actually received a scientifically credible review.
The Subcommittee recommends, therefore, that the Agency discard the proposed screening procedure for selecting the initial subset of chemicals for ecological analysis and instead require that a minimum dataset be satisfied before ecologically based exit criteria are calculated. For those chemicals for which the minimum dataset cannot be satisfied, the Agency should clearly indicate that the exit criteria are based solely on human health considerations. The exit criteria should be reevaluated, however, when and if additional data on ecological effects become available.