4
Issues of Model Application

After consideration of the scenarios and models chosen by the Department of Toxic Substances Control (DTSC) of the California Environmental Protection Agency in Chapters 2 and 3, the NRC committee examined the quality of the data used in the models for classification of wastes. This chapter examines (1) detailed model parameters for exposure pathways such as those leading to dietary intake; (2) selection of parameters, such as dilution attenuation factors, for specific models; (3) analytical methods, especially the use of either the waste extraction test (WET) or the toxic characteristic leaching procedure (TCLP) extraction methods; and (4) human and ecological toxicity tests.

Model Parameters

After the selection of scenarios and models to be used in the risk assessment, the models must then be implemented with the correct parameter values. It is incumbent on DTSC to review its modeling to ensure correct selection of parameter values to correspond to the scenario in which the model is used. The committee examined certain parts of the documentation and spreadsheets to evaluate whether a suitable quality-control process had been applied to DTSC's modeling (see Chapter 3). In its review of the DTSC report, the committee found numerous errors and inconsistencies in the selection of the component models and the model
parameters. The following list indicates some of the types of errors and problems that were found for input parameters. Given the nature of the task and the time allotted, the committee identified as many specific problems as it could and provides examples of them in this chapter. However, the committee did not prioritize these problems and notes that not all of them are equally serious or have the same impact on the outcome of the risk assessments. This list should be taken not as a complete set of problems to be corrected, but as an illustration of the types of problems and errors that a complete quality-control program should be designed to locate and correct.

The problems and errors can be classified into several types, with some problems and errors occurring simultaneously:

Transcription errors: The values have been incorrectly transcribed from an original reference.
Mistaken identity: The values are correctly derived from measurements, but the measurements are of the wrong physical quantity in the context of the particular scenario and model.
Mistaken derivation: The values are derived from measurements of the correct physical parameter, but the derivation is incorrect in the context of the particular scenario and model.
Incorrect extrapolation: The values are derived from physical measurements by using an extrapolation that is inapplicable.
Impossible: The values used are physically impossible.

For completeness, the following types of problems that should be corrected by an adequate quality-control process are also discussed in this chapter:

Inappropriate model errors: The model used does not correspond to the physical processes occurring in the scenario.
Structural errors: Errors or ambiguities in the structure of the models that lead to errors in calculations.
Documentation errors: The description of the model differs from the model that was intended to be adopted.
Implementation errors: The implementation of the model differs from the mathematical model adopted.
Calculation errors: Something has been incorrectly calculated, but it is not possible to determine what went wrong.

Of course, documentation errors and implementation errors often occur together, and either or both can occur at any stage in the translation from physical description to simplified physical description to mathematical model to simplified mathematical model to implementation of the model.

Parameter Selection for Scenarios

The most basic level in scenario development is the selection of the specific parameters needed to implement the models in the context of the scenario. Such parameters include food intakes, quantities of soil eaten, dust-deposition rates, bioconcentration factors, soil-mixing depths, vapor pressures, soil porosities, inhalation rates, and solubilities. Below is an analysis of the types of problems encountered when DTSC's choices of parameter values were examined.

Food Intake

The food intake values used in the scenarios are based on data that may not be directly relevant to the citizens of California. It may reasonably be expected that the scenarios outlined would result in a small number of individuals incurring a large, albeit unknown, risk. However, the number of such individuals relative to the population of California, and the risk incurred by these individuals, are not knowable without completing model runs using the exposure scenarios developed by DTSC. The committee is, therefore, unable to estimate the effect that the food intake and population assumptions have on total risk. Some of the committee's concerns with the application of food intake data are described below.

The specific dietary intake parameter values need to be realistic. Those used by DTSC do not appear to have been selected for real-life conditions and draw upon data that are 10 to 30 years old (an example of the mistaken identity error). Focusing on the adjacent resident scenario, current data need to be gathered on the types of residents near facilities. What fraction are farm households? If they are not farm households, do these households produce and consume their own meat, eggs, and dairy products? If DTSC selects farms as the basis for its adjacent resident scenario, it should collect data on the number of small, family-owned farms in
California because residents of these farms are most likely to use home-grown crops as principal sources of food. Are they located near waste sites? How many? How far? Are there California demographic data to support those given in the U.S. Environmental Protection Agency's (EPA) Exposure Factors Handbook (EFH) (EPA 1990a)?

As with the discussion of the scenarios for the population subject to the food intake assumptions mentioned above, these questions also raise the problem of estimating the changes in the population living near hazardous waste sites and producing its own food. With the changes in farming from small, family-owned farms to agribusiness, the risk to a small number of individuals may be reduced in time. However, a large agribusiness farm subjected to contamination of crops by a nearby hazardous waste site could increase the risk (by a smaller amount) to a larger segment of the population through the sale of contaminated crops. Therefore, DTSC might also, in the definition of its scenarios, wish to take account of time trends in agriculture, perhaps resulting in fewer small farms near waste sites and perhaps resulting in wider dispersion of contaminated produce from larger farms.

Some of the difficulty in the exposure assessments for the adjacent resident scenario can be traced to estimates of dietary intake, most specifically for home-grown foods. These estimates are presented in the CalTOX parameter values section of the DTSC report (DTSC 1998a, pp. 611 ff.). The primary reference for home-grown food intake is the first revised EFH (EPA 1990a). Table 4-1 reports the fraction of various foods that is assumed to be home-grown.

TABLE 4-1 Consumption of Home-Grown Foods

                        Fraction of Food Obtained from Home-Grown Source
Food Type               Mean      Coefficient of Variation
Fruits and Vegetables   0.24      0.7
Grains                  0.12      0.7
Milk                    0.40      0.7
Meat                    0.44      0.5
Eggs                    0.40      0.7
Fish                    0.70      0.3

Source: Adapted from Table III, Activity patterns, household parameters, and other exposure factors (DTSC 1998a, p. 613), which was adapted from the EFH (EPA 1990a).

Taking even the mean values for the fraction of foods consumed would require residential conditions that would be illegal under various ordinances in most communities and are unlikely to be observed for any individual.
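To illustrate how the mean and coefficient-of-variation pairs in Table 4-1 behave when used as sampling distributions, the sketch below parameterizes each food type as a lognormal distribution, one common choice for CalTOX-style inputs; the actual distributional forms assumed by DTSC are not reproduced here, so this is an assumption for illustration only. The sketch reports the probability that a sampled home-grown fraction exceeds 1.0, which is physically impossible for a fraction.

    # Hypothetical illustration: treat each Table 4-1 (mean, CV) pair as a lognormal
    # distribution (an assumption; the DTSC report's actual distributions are not shown here)
    # and check how often sampled "home-grown fractions" exceed the physical limit of 1.0.
    from statistics import NormalDist
    import math

    table_4_1 = {                      # food type: (mean, coefficient of variation)
        "Fruits and Vegetables": (0.24, 0.7),
        "Grains": (0.12, 0.7),
        "Milk": (0.40, 0.7),
        "Meat": (0.44, 0.5),
        "Eggs": (0.40, 0.7),
        "Fish": (0.70, 0.3),
    }

    for food, (mean, cv) in table_4_1.items():
        # Lognormal parameters implied by the arithmetic mean and CV:
        #   sigma^2 = ln(1 + CV^2),  mu = ln(mean) - sigma^2 / 2
        sigma = math.sqrt(math.log(1.0 + cv**2))
        mu = math.log(mean) - sigma**2 / 2.0
        # Probability that the sampled fraction exceeds 1.0 (i.e., ln X > 0).
        p_impossible = 1.0 - NormalDist(mu, sigma).cdf(0.0)
        print(f"{food:22s}  P(fraction > 1) = {p_impossible:.3f}")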
The problem with using these food intake estimates stems from using data collected in a specific survey and attempting to apply them to a more general or inappropriate situation. The following sections examine the problems with using the data on home-grown food intake by specific food type.

Fruits and Vegetables

The primary EFH reference for fruits and vegetables reports a decrease in the average size of the garden from 600 ft2 in 1982 to 325 ft2 in 1986. Extrapolation to 1999 would suggest that even smaller garden sizes are the current norm. Furthermore, the gardens are found to produce approximately 0.9 lbs of produce per square foot annually, or about 300 lbs of produce. Given the U.S. Department of Agriculture's results cited in the EFH indicating consumption of 201 g/day of vegetables, a 325 ft2 garden would fully support two individuals on this intake of vegetables. However, this is an average yield and does not take into account the differential yield for different vegetables, for example, pumpkins versus spinach.
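The garden-yield arithmetic can be checked directly from the figures quoted above; the sketch below simply reproduces that calculation (the grams-per-pound conversion is the only value not taken from the text).

    # Arithmetic check of the garden-yield figures quoted above (values from the text;
    # the gram-per-pound conversion is standard).
    garden_area_ft2 = 325.0            # average home garden size reported for 1986
    yield_lb_per_ft2_yr = 0.9          # reported annual produce yield per square foot
    intake_g_per_day = 201.0           # USDA vegetable consumption cited in the EFH

    annual_yield_lb = garden_area_ft2 * yield_lb_per_ft2_yr    # ~292 lb/yr ("about 300 lbs")
    annual_intake_lb = intake_g_per_day * 365.0 / 453.59237    # ~162 lb/yr per person

    print(f"annual garden yield : {annual_yield_lb:6.1f} lb")
    print(f"annual intake/person: {annual_intake_lb:6.1f} lb")
    print(f"persons supported   : {annual_yield_lb / annual_intake_lb:4.1f}")  # ~1.8, roughly two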
The EFH further reports that the largest numbers of such gardens are in the Midwest and South and that more individuals in rural settings tend such gardens compared with those living in cities and suburban areas. Neither the EFH nor the DTSC report provides specific data on the number of gardens in California. It is reasonable to assume that some state-specific data for consumption of home-grown fruits and vegetables are available for California, a major agricultural state, yet only data from the EFH are used.

Table 2-10 in the EFH shows that the percentage of home-grown fruits and vegetables consumed ranges from 4.2% for lettuce to 75% for lima beans; these values are used in the DTSC report. The EFH data cited were gathered for a specific survey, and no evidence is given by DTSC to support the applicability of such data to conditions in California. The EFH specifically cautions the reader on the representativeness of these data, which were drawn from a small number of days and quantified by recall only. DTSC uses an average value taken from Table 2-10 and presumes that consumption of all fruits and vegetables matches this average value from home gardens; however, no rationale is provided for this presumption.

Furthermore, these consumption values for home-grown fruits and vegetables seem excessively large for the California population as a whole. DTSC does not specify who the target group is or who will be protected. It would appear that DTSC is looking at a maximally exposed individual. Although these values might be accurate for home gardeners in 1986, their validity for a population that consists predominantly of urban and suburban dwellers (another mistaken identity error) is questionable. DTSC has not demonstrated that the population assumed to grow and consume these foods exists. DTSC provides no support for the size of the garden relative to food consumption, nor does it provide information about subpopulations who might be vegetarians, low-income or subsistence farmers, specific ethnic groups, or children. Whether explicit account needs to be taken of any such subpopulations depends on the scenarios under evaluation to meet specific policy goals.

The public comments indicate that it is possible to ascertain the number of hazardous-waste sites in the state and the distances of the nearest residences. With such information, it should be possible to adequately characterize home gardeners living near hazardous-waste sites, including the average distance from their residences to those sites.

Grains

For the grain consumption pathway, DTSC makes use of data exclusively on corn, because corn is the only "grain" product mentioned in the EFH. However, the committee suspects that corn grown in home gardens is used as a vegetable, not a grain (a mistaken identity error), and it is not aware of any data on grinding corn meal from corn grown in a home garden. DTSC further compounds this poor data analysis in that other grains, presumably wheat and similar products, are assumed to be identical to corn. Lacking any data supporting the use of wheat as a vegetable, the committee assumes that wheat is used as a grain to make flour and other products. It is extremely unlikely that the typical home garden produces 12% of the wheat flour used in the residence.

Meat, Dairy, and Eggs

The DTSC report states that, in farm households, the annual fraction of
home-grown beef consumed is 44%, with a coefficient of variation (CV) of 0.5 (DTSC 1998a, p. 612). Similarly, for dairy products and eggs (by direct assumption of the equivalence between dairy products and eggs), the values are 40%, with a CV of 0.7. These values might be biased because they were based, according to the EFH (EPA 1990a), on a survey of 900 rural farm households published in 1966, and they apply only to farm households. It is highly unlikely that such numbers can apply to suburban and urban settings, where keeping livestock is usually against local ordinances (a mistaken identity error). Again, the central issue is what population is being protected. Clearly, this component of the scenario uses a value based on a maximally exposed individual, not on the broader population. With the changes in U.S. agricultural practices since the 1960s from family farms to agribusiness, the application of these data to any residents of California must be justified by DTSC.

Fish

The consumption rate of fish for recreational or subsistence anglers and the fraction of fish eaten from local sources are also subject to controversy. The consumption rates in the EFH are based on data collected in 1973–1974 (EPA 1990a) and might no longer be valid, particularly given the number of no-fishing advisories in effect for many California waters. DTSC also assumes that the shape of the distribution of intake is triangular; however, the shape of the triangle is not indicated, and the basis for the assumption is unsubstantiated. The committee urges DTSC to incorporate more recent exposure factors (e.g., those given in the EFH published by EPA in 1997), as well as data that are representative of California urban, suburban, and rural populations.
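As a brief illustration of why the unspecified shape matters, a triangular distribution is fully defined only by a minimum, a mode, and a maximum, and upper-percentile intake estimates are sensitive to where the mode is placed. The bounds and modes below are arbitrary placeholders, not values from the DTSC report.

    # Illustration only: a triangular intake distribution needs (min, mode, max), and the
    # upper percentiles depend strongly on the mode. The bounds used here are placeholders,
    # not values taken from the DTSC report.
    import numpy as np

    rng = np.random.default_rng(0)
    low, high = 0.0, 100.0                 # hypothetical intake range (g/day)
    for mode in (10.0, 50.0, 90.0):        # three different, equally "triangular" shapes
        samples = rng.triangular(low, mode, high, size=100_000)
        p95 = np.percentile(samples, 95)
        print(f"mode = {mode:5.1f} g/day -> 95th percentile intake ~ {p95:5.1f} g/day")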
Parameter Selection Within Specific Models

This section highlights some of the committee's concerns regarding the use of models for soluble or extractable regulatory thresholds (SERTs) and some specific parameter problems for the preliminary endangerment assessment (PEA), LeadSpread, and CalTOX exposure models used to develop the toxicity threshold limit concentrations (TTLCs).

Soluble or Extractable Regulatory Thresholds

There are various structural problems with DTSC's implementation process for SERTs in general:

The use of a single dilution attenuation factor ensures that variability in the population's exposure to groundwater is not taken into account. As DTSC acknowledged during the second public meeting (DTSC, personal commun., November 20, 1998), the SERT scenario was modified for some sort of worst-case exposure, not for exposure at a 90th percentile of the population as documented.

There is a logical disconnection between some toxicity indicators (surface-water-quality criteria, maximum contaminant level) and the calculated values with which they are compared (groundwater concentration). Logically, if such a comparison is meaningful, it should also be meaningful at all downgradient distances, not just those corresponding to a dilution attenuation factor of 100. This is probably connected to the previous problem, and its solution requires explicit specification and acknowledgment of the policy objective.

There are also problems with the SERT definition. The toxicity values used for the SERT calculations are particularly puzzling. These toxicity values include the ambient water quality criteria for aquatic life and maximum contaminant levels. Ambient water quality criteria apply to surface-water bodies, so an extra dilution has to be taken into account, that is, where the groundwater runs into the surface-water body. In particular cases, such as water bodies that are fed only by contaminated groundwater, the dilution factor might be greater than unity (e.g., if the water body evaporates and the contaminant is nonvolatile). If DTSC is attempting a probabilistic approach, then some distribution for this further dilution is required. If DTSC is attempting a worst-case analysis, then the worst case would have to be applied. This type of error could be either a mistaken identity error, if ambient water quality criteria were assumed to be relevant to groundwater, or an extrapolation error, if it was assumed that groundwater concentrations correspond to surface-water concentrations.

The maximum contaminant level is used as an indicator level applicable to groundwater. Given that health-based levels are separately
derived, it appears that DTSC is using the maximum contaminant level as an enforceable standard for all California groundwater. Thus, the committee questions the use of a maximum contaminant level for a risk-based approach. This appears to be a mistaken identity error.

DTSC's approach to the use of a liner protection factor to take better account of modern landfills is also misguided. DTSC has attempted to estimate the liner protection factor by comparing a lined landfill with an unlined landfill. However, the parameter values used for the unlined landfill appear to correspond to a fairly tight landfill with a clay liner, and these parameter values do not correspond to the parameter values used in the original EPA modeling. Thus, the liner protection factor is calculated from an incorrect base value. This could be classified as a mistaken identity error for all the parameter values for the unlined landfill.

The SERT scenario is so poorly defined that the committee cannot comment on its applicability; it can simply point out where the implementation does not agree with the documentation. The following paragraphs identify some of the specific problems that were encountered in the review of the lower (nonhazardous) and upper (hazardous) SERT calculations.

Calculations for Lower SERTs

The DTSC spreadsheet for SERTs has a 100% correlation between the distributional calculations for risk and the hazard index. Although this does not affect the results of the current calculations, it is possible that in a more complex analysis such a correlation would be incorrect. (In fact, in the subsequent calculation of upper SERTs using the DTSC spreadsheet, this correlation is essential to get correct results, because the minimum function is applied at an intermediate stage of the calculation.) This is a potential structural error, although it does not affect the current calculations.
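A toy numerical illustration of this point (not drawn from the DTSC spreadsheet) is sketched below: when two uncertain quantities are sampled with 100% correlation, that is, comonotonically from the same underlying random draw, taking the minimum at an intermediate stage and then the 10th percentile gives the same answer as taking the minimum of the two 10th percentiles; with independent sampling it does not.

    # Toy illustration (not the DTSC spreadsheet): with 100% correlated (comonotone) inputs,
    # the 10th percentile of min(X, Y) equals the minimum of the individual 10th percentiles;
    # with independent inputs it generally does not. Parameter values are placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    z_shared = rng.standard_normal(n)      # one underlying draw -> 100% correlation
    z_other = rng.standard_normal(n)       # a second, independent draw

    x = np.exp(0.0 + 0.5 * z_shared)       # lognormal quantity X
    y_corr = np.exp(0.3 + 1.0 * z_shared)  # Y, perfectly correlated with X
    y_ind = np.exp(0.3 + 1.0 * z_other)    # Y, independent of X

    print("min of the two 10th percentiles :", min(np.percentile(x, 10), np.percentile(y_corr, 10)))
    print("10th pct of min, correlated case:", np.percentile(np.minimum(x, y_corr), 10))
    print("10th pct of min, independent    :", np.percentile(np.minimum(x, y_ind), 10))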
The logic in the spreadsheet for SERT calculations does not correspond to the description given in the DTSC documentation (DTSC 1998a, pp. 43–45). The minimum of the health-based level, maximum contaminant level, and ambient water quality criteria is applied before the statistical calculation; that is, rather than the lower 10th percentile being calculated for the health-based level, the lower 10th percentile of this minimum has been found. In principle, this should make no difference to the final results, although one can expect problems in labeling some intermediate results (see below). This is a documentation error, or possibly a structural error (although it does not affect the final result).

Possibly as a result of the preceding calculation, the values given in the column labeled "Health-based level × 100" (DTSC 1998a, p. 46) do not correspond to the health-based level × 100, where the health-based level is computed as the 10th percentile as described in the text. In fact, for each of six chemicals (aldrin, kepone, arsenic, beryllium, thallium, and vanadium), the value given in the table is correct to one significant figure; that is, it is indeed the health-based level × 100, where the health-based level is the lower 10th percentile value. For all but four of the remaining chemicals in the table, the value given can be obtained (to one significant figure) from the same calculation, but by using the mean values of each parameter in the calculation, not the lower 10th percentile of the distribution resulting from using the parameter value distributions. So the value given does not correspond to the text description. For the remaining four chemicals (chlordane, methoxychlor, chromium VI, and molybdenum), it is not clear how the values given in the table were derived, because they do not correspond to either calculation or to anything in the spreadsheet.

The spreadsheet apparently used a Monte Carlo approach to evaluate the 10th percentile of the lognormal distribution required for calculating the health-based level for the lower SERT. Although the spreadsheet entries show correct values (within 0.7%) for the 10th percentile in most cases, in five cases (cobalt, fluoride, molybdenum, thallium, and vanadium) the entries are more than 15% in error. This appears to be a calculation error. The calculation for the health-based level involves a single lognormal distribution (for the hazard index) or a multiplicative combination of three lognormals (for risk), which is also lognormal. Therefore, the calculation of the lower SERT is analytically straightforward.
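The point about analytic tractability can be made concrete: the product of independent lognormal factors is itself lognormal, so its lower 10th percentile has a closed form that can be checked against a Monte Carlo estimate. The (mu, sigma) parameters below are placeholders, not values from the DTSC spreadsheet.

    # Sketch: the product of independent lognormals is lognormal, so the lower 10th percentile
    # needed for the health-based level can be computed exactly and checked against Monte Carlo.
    # The (mu, sigma) pairs are placeholders, not values from the DTSC spreadsheet.
    from statistics import NormalDist
    import math
    import numpy as np

    factors = [(0.0, 0.4), (-1.0, 0.7), (0.5, 0.3)]   # (mu, sigma) of each lognormal factor

    # Exact: sum the mu's, add the sigma's in quadrature, and invert at the 10th percentile.
    mu_total = sum(mu for mu, _ in factors)
    sigma_total = math.sqrt(sum(sigma**2 for _, sigma in factors))
    z10 = NormalDist().inv_cdf(0.10)                  # ~ -1.2816
    exact_p10 = math.exp(mu_total + sigma_total * z10)

    # Monte Carlo estimate of the same percentile.
    rng = np.random.default_rng(2)
    product = np.ones(100_000)
    for mu, sigma in factors:
        product *= rng.lognormal(mean=mu, sigma=sigma, size=product.size)
    mc_p10 = np.percentile(product, 10)

    print(f"exact 10th percentile      : {exact_p10:.4f}")
    print(f"Monte Carlo 10th percentile: {mc_p10:.4f}")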
Calculations for Upper SERTs

The difference between the calculation of the lower and upper SERTs is the liner protection factor. The upper SERTs are calculated by multiplying the lowest of the health-based level, the maximum contaminant level, or the ambient water quality criteria by a dilution attenuation factor of 100 and a liner protection factor. The DTSC documentation (DTSC 1998a, p. 47) specifies that a liner protection factor was entered as a "custom distribution" but provides no indication of how the values were derived. The custom distribution function in the Crystal Ball software by Decisioneering, Inc., allows various options for defining distributions (combinations of point values with assigned relative weights together with piecewise linear densities), but the DTSC documentation does not specify what options were used. During the first public meeting, DTSC stated that six values (two sites, three conditions, using the HELP model) were used as a custom distribution (DTSC, personal commun., September 10, 1998).

The SERT spreadsheet contains a list of six values for liner protection factors. They are (in the order listed) 36, 190, 1600, 22, 118, and 970. The Crystal Ball custom distribution add-in, however, lists a different set of six values, entered as point values with equal relative weights. These values are (in increasing sequence) 18, 99, 118, 191, 970, and 986. The DTSC documentation gives three generic values for the liner protection factor (18, 99, and 986), using an approximate model that takes into account leakage through the liner versus leakage through clay only (DTSC 1998a, p. 1487). The documentation later cites the HELP model as giving three values each for two precipitation and evapotranspiration regimes, Los Angeles and Eureka (DTSC 1998a, p. 1488). The values are 36, 190, and 1600 for Los Angeles and 22, 118, and 970 for Eureka. These six values are identical to those listed in the spreadsheet, but not to those in the custom distribution in Crystal Ball. The documentation then appears to include a printout of a spreadsheet (source not provided by DTSC) that provides yet another set of values for all three cases: 36, 190, and 1600 for Los Angeles; 22, 120, and 970 for Eureka; and 18, 99, and 990 for a generic case (DTSC 1998a, p. 1492). Thus, the documentation is not clear on what values are used to derive the upper SERTs; the resulting difference between documentation and implementation could be a documentation error or a transcription error.
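To make the arithmetic explicit, the sketch below assembles an upper SERT as described above: the lowest of the health-based level, the maximum contaminant level, and the ambient water quality criterion, multiplied by the dilution attenuation factor of 100 and by a liner protection factor drawn from six equally weighted point values. The toxicity values are placeholders, the liner-protection-factor values are the Crystal Ball entries listed above, and reporting the lower 10th percentile of the result is an illustrative choice rather than a documented DTSC step.

    # Sketch of the upper-SERT arithmetic described above: upper SERT =
    # min(health-based level, MCL, ambient water quality criterion) x DAF x liner protection factor.
    # Toxicity values are placeholders; the liner protection factor is sampled from six equally
    # weighted point values (the Crystal Ball entries noted above); the lower 10th percentile of
    # the result is reported only for illustration.
    import numpy as np

    rng = np.random.default_rng(3)

    health_based_level = rng.lognormal(mean=-3.0, sigma=0.6, size=100_000)  # mg/L, placeholder
    mcl = 0.05                                                              # mg/L, placeholder
    awqc = 0.02                                                             # mg/L, placeholder
    daf = 100.0                                                             # dilution attenuation factor

    lpf_values = np.array([18.0, 99.0, 118.0, 191.0, 970.0, 986.0])         # equal-weight point values
    lpf = rng.choice(lpf_values, size=health_based_level.size)

    lowest_indicator = np.minimum(health_based_level, min(mcl, awqc))
    upper_sert = lowest_indicator * daf * lpf

    print(f"lower 10th percentile of the upper SERT: {np.percentile(upper_sert, 10):.3f} mg/L")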
The spreadsheet calculations for upper SERT results appear to correspond to the values for liner protection factors present in the Crystal Ball custom distribution (which do not correspond to any documented set of values). DTSC indicated that the spreadsheets contained the most valid calculations, so that the liner-protection-factor values in the Crystal

Chemical Properties

Many chemical properties are given in Table 2 (DTSC 1998a, p. 808) with a citation to the DATCAL.XLS spreadsheet. The committee has a copy of this file and the associated DATREF.XLS file. Although both files contain some references, in many cases there is no indication of the original data source from which these values were taken. For example, there is no explanation or reference as to why the CVs for the vapor pressures for chlordane and TCDD are equal to 1.58 and 1.57, respectively, whereas many CVs for other substances are much smaller.

Analytical Methods

California is proposing to no longer require the use of the waste extraction test (WET) for determining the extractable constituents of hazardous wastes not classified under the Resource Conservation and Recovery Act (RCRA), relying instead on the use of EPA's toxic characteristic leaching procedure (TCLP). The TCLP has long been required by EPA to define the toxic constituents of RCRA hazardous wastes.

In 1972, California's then-new Hazardous Waste Control Act defined "hazardous waste" and "extremely hazardous waste" and, in 1977, California added the requirement that the state develop and adopt criteria and guidelines for the identification of these two waste categories. The California Assessment Manual (CAM) (California Department of Health Services 1981) prescribed the use of WET as the state's test procedure. The CAM-WET test extracted solid wastes with a pH 5 citrate buffer for 48 hr (Table 4-2).

TABLE 4-2 Comparison of Conditions for WET and TCLP

Test Conditions           WET                TCLP
Solid-to-Solution Ratio   1:10               1:20
Buffer                    Citrate, pH 5      Acetate, pH 5
Time                      48 hr              18 hr
Enclosure Status          Not enclosed       Closed system with zero headspace

Source: DTSC (1998a, p. 1114).

At the federal level, the 1984 amendments to RCRA led to adoption by EPA of a batch extraction test, called the extraction procedure, which was designed to simulate processes occurring in landfills and that might
contribute to the leaching of toxic constituents. EPA subsequently replaced the extraction procedure with the TCLP (55 Fed. Regist. 11798, March 29, 1990). In the standard version of the TCLP, a pH 5 acetate buffer is used in an 18-hr extraction test (Table 4-2).

A comparison of WET and TCLP, using California wastes and waste composites, was provided by DTSC in the Regulatory Structure Update Extraction Test Project Summary Report contained in the DTSC report (DTSC 1998a, p. 1078). WET consistently extracted more of 10 elements than TCLP (Table 4-3), with the exception of one mercury result. For several waste-element combinations, WET extract concentrations exceeded TCLP extract concentrations by 1 to 2 orders of magnitude. The major difference between the two extraction procedures is that the citrate buffer in WET leads to chelation of some elements (e.g., lead) and to the direct release of elements bound in the solid phase through the chelation-driven dissolution of high-content metals (e.g., iron) (DTSC 1998a).

Although WET extracted more of the test elements than TCLP, a comparison of results with municipal solid waste leachate (MSWL) indicated that WET is generally more exhaustive than TCLP, leading to significant overprediction of what is actually present in the leachate for many elements (Table 4-3). On balance, TCLP gave a better representation of what actually leaches from these landfills for most, if not all, elements. Thus, WET generally overestimates what leaches out of landfill waste over the lifetime and post-closure period of a landfill, whereas TCLP results are more in line with observed leaching behavior. In subsequent tests, citrate, which is used as a buffer in WET but not in TCLP, has not been found to be a constituent of leachate from California landfills. For these reasons, and for the sake of harmonizing with EPA by requiring only one test in California, DTSC has proposed replacing WET with TCLP for its non-RCRA solid hazardous waste classification testing program.

For an exact simulation of landfill leachates, neither WET nor TCLP provides satisfactory performance for oily wastes, for volatiles that might reach groundwater by diffusion, or for some elements occurring as oxyanions, such as arsenic, chromium, molybdenum, and selenium. Also, neither test adequately addresses questions of speciation for chemicals that can exist in more than one form, such as element, salt, and anion. WET overestimates the leaching potential for many elements in representative California landfill wastes, but there are several exceptions to this, such as cadmium, nickel, and thallium.

TABLE 4-3 Comparison of WET and TCLP in Short-Term Extractions (mg/L)a

Substance      WET       TCLP      Max MSWL
Arsenic        6.51      0.13      2.08
               49        0.06      2.07
               4.9       0.10      0.13
Beryllium      0.02      <0.001    0.00
               0.02      <0.001    0.01
               <0.01     <0.001    0.01
Cadmium        23        11.85     27.7
Cobalt         0.87      0.42      0.87
               <0.20     0.02      0.03
               0.83      0.07      0.02
Mercury        0.02      0.575     0.19
               0.03      0.003     0.01
Molybdenum     1.27      <0.030    0.45
               <0.3      <0.030    0.44
               0.84      <0.030    0.04
Nickel         174       163       334
Lead           391       11.1      19.1
               275       11.9      5.05
               16.80     1.750     1.80
Selenium       <0.80     <0.080    1.43
Thallium       3.79      1.500     4.45

a Data are for municipal solid waste landfill leachates (MSWL) from Hyperion (Los Angeles), Los Gatos (Guadalupe), Lodi, and Ukiah, California.
Source: Adapted from Table 7 (DTSC 1998a, p. 1060).
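The magnitude of the difference between the two extractions can be read directly from Table 4-3; the short sketch below computes WET-to-TCLP and WET-to-leachate ratios for a few of the fully quantified rows (values as tabulated above; rows containing censored "<" entries are omitted).

    # Ratios computed from selected quantifiable rows of Table 4-3 (values as tabulated above;
    # rows containing "<" detection-limit entries are omitted for simplicity).
    rows = [
        # (substance, WET, TCLP, max MSWL), all in mg/L
        ("Arsenic", 6.51, 0.13, 2.08),
        ("Lead", 391.0, 11.1, 19.1),
        ("Cadmium", 23.0, 11.85, 27.7),
        ("Nickel", 174.0, 163.0, 334.0),
    ]

    for substance, wet, tclp, mswl in rows:
        print(f"{substance:8s}  WET/TCLP = {wet / tclp:7.1f}   WET/MSWL = {wet / mswl:6.1f}")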
Based on the shortcomings of both the WET and the TCLP, and the fact that both test procedures have beneficial features (the exhaustive extractability of WET; the simulation more reflective of actual leachate content, and the acceptability, of TCLP), the committee supports the development of a single test protocol to classify California's hazardous waste, and to do so in harmony with the classification test of EPA. Such a test should provide results that can be related to field-realistic exposures, including the uncertainties associated with leaching pathways in the field. Understandably, DTSC may choose not to pursue this effort alone
based upon limitations in needed resources. Such an effort would also be quite time-consuming because of its nationwide implications and the need for extensive testing and validation under a variety of waste, climatic, and soil conditions. The committee recognizes that the TCLP has nationwide status, use, and acceptability. Harmonizing California's extraction test with that required by EPA would minimize the testing burden on waste disposers in California, who would need to conduct only the TCLP. However, DTSC has not yet provided convincing arguments, either to the committee or, based on written comments, to stakeholders, for the sole adoption of TCLP and elimination of WET. The committee recommends that DTSC conduct an open evaluation of the experimental evidence, including the results of side-by-side testing and the opinions of its own staff, federal EPA counterparts, and stakeholders, before reaching a conclusion on the three possibilities before it: (1) adopt the TCLP as the sole test; (2) continue requiring both the TCLP and the WET; or (3) develop a new test that overcomes the deficiencies of both the TCLP and the WET.

There is one very important aspect of the use of either WET or TCLP assays that DTSC has overlooked in its modeling effort and that it should bear in mind in further evaluations. DTSC is currently using these assays as though they exactly match field conditions; indeed, much of the argument and experimental program has gone into the evaluation of how well each of them matches field conditions. However, a probabilistic approach needs to introduce explicitly the uncertainties in extrapolations such as those from a laboratory assay to the field, and this DTSC has failed to do in considering either the WET or the TCLP methods. Either assay could be used in a probabilistic procedure, although each would have different uncertainties associated with its use. DTSC has expended much effort in a commendable experimental evaluation of WET and TCLP, and the experimental results appear to provide a suitable basis for evaluating the uncertainties associated with relating the results of those assays to leaching under field conditions. The different biases and/or larger uncertainties associated with certain types of chemicals, or certain types of waste streams, can be built into the probabilistic modeling.

For the (DTSC-designated) category 2 elements (arsenic, antimony, molybdenum, selenium, and vanadium), DTSC proposes to use unadjusted TTLCs for arsenic, molybdenum, and antimony, and either to use the detection limit or to develop a new test for selenium and vanadium. Use of the detection limit has the disadvantage of being driven by the state of analytical methodology rather than by risk, contrary to the aim
of the DTSC program. Also, it is somewhat arbitrary, for example, in its use of the analytical limit of detection (LOD) rather than the analytical limit of quantitation (LOQ) or twice the LOQ. Similarly, DTSC proposes to use twice the estimated quantitation level (2X EQL) in lieu of a SERT when the calculated concentration of the SERT is less than the EQL. In both cases, the committee emphasizes that there is no connection between the sensitivity of chemical analytical methods and the sensitivity of biological receptors; thus, the use of 2X EQL to establish a SERT is also not risk-based.

Analytical methods are continually being improved as new instrumental and other techniques are introduced, and detection limits vary from laboratory to laboratory and sample to sample. Detection limits, and limits of quantification, may be influenced by background. This needs to be taken into account when analyzing for naturally occurring substances (mercury, selenium, cadmium, etc.), which may vary in background concentration from location to location. It also needs to be taken into account for organic contaminants for which the matrix may contain substances that mimic or interfere with the analyte of interest. This matrix effect may also vary from sample to sample and location to location. Biological sensitivity, in contrast, is fixed by the inherent toxicity of the analytes and the response of the organism exposed to the analyte under specified conditions.

Comparative testing to determine whether the use of detection limits as proxy values is protective under reasonable exposure scenarios is lacking. It might turn out that such proxy values are protective, but this cannot be determined from the information provided to the committee. DTSC should undertake, for example, a comparison of the SERT values with the EQLs. This should be done by evaluating a range of compounds with different toxic potencies and EQL values to determine the degree of protectiveness.
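A screening comparison of the kind suggested here could start as simply as the sketch below: for each compound, check whether the calculated SERT falls below the EQL and, if so, report how far the proposed 2X EQL proxy sits above the risk-based value. The compound names and numbers are hypothetical placeholders, not values from the DTSC report.

    # Hypothetical screening comparison of risk-based SERT values against the proposed 2X EQL
    # proxy. Compound names and numbers are placeholders for illustration, not DTSC data.
    compounds = [
        # (compound, calculated SERT (mg/L), estimated quantitation level EQL (mg/L))
        ("Compound A", 0.002, 0.010),
        ("Compound B", 0.500, 0.050),
        ("Compound C", 0.008, 0.005),
    ]

    for name, sert, eql in compounds:
        if sert < eql:
            proxy = 2.0 * eql          # value DTSC proposes to substitute for the SERT
            factor = proxy / sert      # how far the proxy sits above the risk-based level
            print(f"{name}: SERT {sert} < EQL {eql} mg/L; the 2X EQL proxy is {factor:.0f}x the risk-based SERT")
        else:
            print(f"{name}: SERT {sert} >= EQL {eql} mg/L; the risk-based SERT applies")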
Toxicity Tests

Tests Related to Human Health

The proper evaluation of the potential adverse health effects of a substance requires knowledge of the chemical and physical properties of the test material; the anticipated human exposure conditions, including environmental levels, duration, pathway(s), and populations; the nature of the anticipated acute and immediate effects or delayed or chronic effects; and (usually) at least one appropriate nonhuman (e.g., animal) model. Only acute toxicity tests will be addressed in this section.

The interaction between the assessment of risk from acute toxicity and the assessment of risk from chronic toxicity is not entirely clear from the flow chart in the DTSC documentation (DTSC 1998a, p. 36). From the figure, one would assume that the first screen is for chronic toxicity as assessed by the development of TTLCs and SERTs, followed by assessment of risk from the acute toxicity of the chemicals based on acute toxicity assays. However, only 38 chemicals have passed through the first (TTLC) screen and, as noted in previous chapters, there does not appear to be a clearly defined method to either add or delete chemicals from either the TTLC or SERT lists. The chronic toxicity risk assessments are based on reference doses or concentrations, or cancer potency factors, that were designed to protect the general population, including sensitive subpopulations. Thus, the thresholds developed based on chronic low-dose exposures of the general population would be expected to be much lower than the thresholds that might be developed for acute toxicity based on almost any acute exposure scenario.

The acute oral toxicity thresholds are based on doses or concentrations calculated to be lethal for half of the test animals (LD50 or LC50 values), divided by a safety factor of 100 and multiplied by an estimated ingestion "rate." The rate given is 5 mg/kg of body weight for children. This is not a rate but rather a dose, although the value is said to be derived from a percentile of the CalTOX parameter corresponding to a rate (the soil ingestion rate). A rate would be a dose per day or some other unit of time. Because only a dose is given, it appears that the threshold is designed to protect someone who, in a one-time, or at least infrequent, situation actually eats the waste directly, but does not eat it on a daily or regular basis. The acute toxicity threshold so derived could be considered protective against lethality for such a one-time ingestion event, but it would not necessarily be protective against more subtle toxicity, particularly if the ingestion occurred on a repeated basis. DTSC needs to clarify the purpose of the acute toxicity thresholds, who is to be protected by these thresholds, and whether it expects the exposures to occur once or to be repeated.
For the acute dermal toxicity thresholds, DTSC provides a better description of the parameter values used in the derivation of the thresholds and correctly uses a dermal contact rate (in milligrams per kilogram per day). A minor point is that the DTSC text refers to oral LD50 values rather than dermal values (DTSC 1998a, p. 72).

DTSC presents the use of the acute oral and dermal toxicity thresholds as though they are based on various acute exposure scenarios (DTSC 1998a, pp. 72–74). This is a reasonable approach, but DTSC has not presented clearly defined goals and appropriate scenarios to meet those goals. For example, is it DTSC's intent to protect the most sensitive subpopulation from death if, in a one-time situation, a member of that population wanders on site and eats the waste? Or is it to protect a person who occasionally wanders on the site and eats the waste once a week? Or is it to protect those who live near the waste site and might inhale vapors and particles emitted from the site on a daily basis? The use of oral and dermal LD50 values is apparently for scenarios in which there is high-end ingestion of, or dermal contact with, raw waste streams by a child. The parameters selected for oral ingestion and dermal contact rates fail in this purpose, however, through an error of mistaken identity. What are used are upper percentiles of ingestion and dermal contact rate parameters derived for use in CalTOX, but the distributions of those rates for CalTOX should correspond to the variabilities between individuals in long-term average rates. What are required for the acute scenarios are distributions that also include day-to-day variability for individuals. However, because the scenarios are not adequately described, it is not clear with what frequency the estimated dose will be consumed.

For the acute inhalation thresholds, no exposure scenario is presented, merely a rationale that corresponds to a highly unlikely, and maybe impossible, situation. For vapors, the assumption appears to be that persons could be exposed to vapors in equilibrium with fresh waste undepleted by off-gassing (the committee assumes that the temperature of 250°C specified on page 73 is a misprint for 25°C). Although a scenario for a waste worker might be constructed in which such a situation is possible, it is doubtful that there are any such situations involving the general public; moreover, workers should be protected at lower levels by Occupational Safety and Health Administration (OSHA) standards. The basis for the rationale for the particulate inhalation thresholds is even less secure. What scenarios can DTSC suggest that would result in acute exposures that are limited to the OSHA time-weighted average standards, or the long-term National Ambient Air Quality Standard for particulate matter?

The committee also found the incorporation of different safety or uncertainty factors for the different acute thresholds to be questionable. For the oral and dermal exposures, a safety or uncertainty factor of 100 is incorporated; for particles, a safety factor of 10 is included; and for vapor exposures, no safety factor is proposed at all. As for the TTLC and SERT derivations, a consistent approach requires DTSC to make explicit its protection goals and then to evaluate scenarios with parameter values that correspond to those goals.

It is not appropriate to use only acute toxicity tests and short-term exposure scenarios, rather than chronic toxicity or long-term exposures, as the basis of risk assessments for waste classification and disposal. Both types of information have an important place. For chemicals for which there are no TTLC or SERT values, the risk assessments should not be based solely on acute toxicity (that is, on bioassays with the crudest of endpoints, lethality) if chronic toxicity data are available. Such an approach would not give any consideration to reproductive and developmental toxicity, chronic toxicity (including cancer), or genetic toxicity. At the very least, DTSC should review readily available chronic or other effects data (genetic toxicity, reproductive and/or developmental toxicity) for each of the waste components and compare the concentrations of the components in the waste with the concentrations found to cause no, low, or infrequent effects. Possible sources of such chronic effects information include EPA's maximum contaminant levels for drinking water, oral reference doses or inhalation reference concentrations, or cancer potency factors, and the Agency for Toxic Substances and Disease Registry's minimal risk levels. These values are easily accessed for numerous chemicals, and most have been subject to scientific peer review.

If only acute toxicity data are available for the risk assessment, DTSC should follow standard practice and use an additional uncertainty factor to account for the lack of data regarding potential chronic toxicity at concentrations that are lower than those causing acute toxicity. DTSC should take into account the slope of the dose-response curve for acute toxicity data when choosing the uncertainty factor, if such data are available. Failure to use an appropriate uncertainty factor may seriously underestimate the risks associated with chemicals for which only acute toxicity data are available and may result in unprotective thresholds. If the waste contains several chemicals for which chronic toxicity data are available (e.g., several polycyclic aromatic hydrocarbons, which act at common sites to exert their toxicity), then the additive, synergistic, or
antagonistic effects of these chemicals, if known, should also be considered when assessing the risk posed by the waste.

It appears that if a waste does not contain any of the TTLC or SERT chemicals and is classified as nonhazardous on the basis of its acute toxicity, it is not subject to further scrutiny but may be disposed of in nonhazardous-waste landfills or by other methods such as recycling or land application. As a result, a waste that may pose serious chronic or mutagenic risks at concentrations far below those that cause acute effects, and for which long-term exposure may be expected as a result of its disposal, may be inappropriately classified as nonhazardous under DTSC's current or proposed classification system. In essence, the use of acute effects data permits higher (less conservative) risk thresholds for wastes than would be possible if chronic effects data for chemicals without TTLCs or SERTs were required. It appears to the committee that the current and proposed DTSC methods provide distinct disincentives for the identification of chronic effects data for particular wastes or waste constituents, because any such identification is likely to result in more stringent regulation.

DTSC should also consider the inclusion of respiratory, ocular, and dermal irritation testing, as well as allergic sensitization testing, in its battery of acute toxicity tests. The nuisance factor of odors may also have to be taken into account to meet some goals. Members of a community living close to a waste site are more likely to be aware of and concerned about acute effects related to the irritant and odor properties of the waste than about any other type of toxicity. Respiratory irritation might exacerbate existing health conditions such as asthma. If more than a single short-term exposure is anticipated (e.g., in waste workers or those living near a waste-disposal site), the potential for sensitization (allergenicity) may be relevant.

A further problem related to the acute and chronic effects of specific chemicals is that the DTSC approach does not take into account the speciation or chemical form of metal contaminants. This is an arbitrary simplification that is not based on true risks. For example, chromium (III) at low doses is an essential nutrient for humans, whereas chronic exposure to chromium (VI) has been associated with lung cancer in humans; the toxic effects of elemental chromium are relatively unknown. Some consideration of the species and chemical form of the metal contaminants present should be attempted for both acute and chronic risk assessments.

Tests Related to Ecology

DTSC proposes to protect aquatic organisms by classifying wastes using acute lethality to fish. Two thresholds based on the acute lethality (96-hour LC50) of waste extracts to fish are used to establish the category to which a waste will be assigned. The first threshold, at an LC50 of 30 mg/L, is used to classify a waste as hazardous, and the second threshold, at an LC50 of 500 mg/L, is used to distinguish between nonhazardous and special wastes. The 30-mg/L value is derived from 500 mg/L divided by 18, the 10th percentile estimate for the liner protection factor. The current threshold for classifying waste as hazardous is based on a 96-hour LC50 value of 500 mg/L (22 California Code of Regulations § 66261.24(a)(6)). DTSC proposes to retain this regulation but to use only a fish acute lethality bioassay to bring wastes into the lower tier of hazardous waste. It appears that even if a waste were not classified as hazardous based on comparing its concentration with a TTLC, it could still be classified as hazardous based on the results of the fish acute lethality test. It is unclear from the DTSC document if or how SERTs will be applied in the classification of wastes in the ecological scenario. It would appear that, for wildlife, total concentrations of chemicals in wastes will be compared with TTLCs and a fish acute lethality test will be performed.

The fish acute lethality test does not include the potential for bioaccumulation or biomagnification and would not be useful for compounds that are chronically toxic and have large acute-to-chronic ratios. The proposed methodology assumes that fish are the most sensitive aquatic organisms. This is certainly not always the case, because some aquatic organisms are more sensitive than fish to a number of compounds. Thus, the proposed screening methodology might not sufficiently protect aquatic life or wildlife that eat aquatic organisms. Also, DTSC does not specify how a waste would fail the bioassay test. Presumably, if the TCLP leachate causes greater than 50% lethality of the fish, the waste will be classified as hazardous. The committee concludes that the use of an acute bioassay using fish would not be sufficient to protect aquatic organisms or animals that might eat aquatic organisms.
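As the committee reads the proposal, the two LC50 thresholds would partition wastes roughly as sketched below; the decision function is a paraphrase of the scheme described above (with the 500/18 derivation of the 30-mg/L value noted in a comment), not a procedure supplied by DTSC.

    # Sketch of the two-threshold classification described above, as the committee reads it;
    # this is a paraphrase for illustration, not a DTSC-supplied procedure.
    HAZARDOUS_LC50 = 30.0   # mg/L; derived by DTSC as 500 mg/L / 18 (10th-percentile liner protection factor)
    SPECIAL_LC50 = 500.0    # mg/L; boundary between special and nonhazardous wastes

    def classify_by_fish_lc50(lc50_mg_per_liter):
        # A lower LC50 means the extract is more acutely toxic to fish.
        if lc50_mg_per_liter < HAZARDOUS_LC50:
            return "hazardous"
        if lc50_mg_per_liter < SPECIAL_LC50:
            return "special"
        return "nonhazardous"

    for lc50 in (10.0, 100.0, 1000.0):
        print(f"96-hour LC50 = {lc50:6.1f} mg/L -> {classify_by_fish_lc50(lc50)}")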
Retaining consideration of aquatic toxicity in the screening system is appropriate and is supported by the committee; however, the selection of a threshold value of an LC50 for a TTLC for listing a material as hazardous is considered to be somewhat arbitrary and has no scientific justification. This is not a risk-based approach. For the approach to be risk-based, DTSC must consider exposure and dose or concentration simultaneously when establishing a risk threshold. The risk presented by a waste is a function of exposure concentration and a threshold for acute effects; thus, setting a single value for a threshold is inappropriate. Although a single value might be predictive, it has not been demonstrated that it will be protective relative to possible aquatic concentrations.