The past century of industrial, military, and commercial activity in the United States has resulted in hundreds of thousands of hazardous waste sites where organic compounds and metals contaminate surface and subsurface soils and sediments. In order to reduce risks to human and ecological receptors, considerable time and money have been spent remediating these sites since passage of major environmental legislation (e.g., Superfund). To overcome the difficulties inherent in hazardous waste remediation and resource constraints, as well as to help prioritize cleanup efforts, potentially responsible parties and regulators have recently begun to consider using the concept of bioavailability during hazardous waste site management. This interest stems from observations that some contaminants in soils or sediments appear to be less available to cause harm to humans and ecological receptors than is suggested by their total concentration, such that cleanup levels expressed as bulk concentrations may not correlate with actual risk. This phenomenon, known to involve physicochemical interactions between contaminants and solid particles, can become accentuated with aging of the contaminated soils or sediments.
The extent to which chemicals are bioavailable has significant implications for the cleanup of contaminated media. If it can be demonstrated that greater levels of contamination can be left in soil or sediment without additional risk, decreased costs and smaller remediation volumes may be realized, and an opportunity for less intrusive remedial approaches exists. Growing interest in this issue led the National Research Council (NRC) in 2000 to undertake a comprehensive
study that would examine the bioavailability of contaminants in soil and sediment, focusing on those factors that influence the percentage of total contaminant levels to which humans and ecological receptors are exposed. Several key questions served to guide the study:
What scientific understanding is missing that would provide confidence in the use of bioavailability factors for different contaminant classes? That is, what bioavailability mechanisms and processes require better understanding? What are the highest priority research needs? For which contaminant classes, environmental settings, and organism classes are bioavailability assessments most important?
What tools (biological, chemical, and physical) are available to characterize and measure bioavailability for different contaminant classes, and what new tools are needed? What criteria should be used to validate these tools?
How do treatment processes affect bioavailability for different contaminant classes? How does bioavailability affect treatment processes that rely on microbial degradation of contaminants?
How and when should bioavailability information be used? What are its implications for relevant regulations? How can information on bioavailability be reliably communicated, especially to the public?
The NRC committee convened to address these tasks reached several overarching conclusions and recommendations about our current understanding of processes that affect whether contaminants in soils and sediments are bioavailable to humans, animals, microorganisms, and plants. Detailed conclusions and recommendations are found in this summary and throughout the report.
Bioavailability processes are defined as the individual physical, chemical, and biological interactions that determine the exposure of plants and animals to chemicals associated with soils and sediments. In the broadest sense, bioavailability processes describe a chemical’s ability to interact with the biological world, and they are quantifiable through the use of multiple tools. Bioavailability processes incorporate a number of steps, not all of which are significant for all contaminants or all settings, and barriers at each step can change exposure. Thus, bioavailability processes modify the amount of chemical in soil or sediment that is actually absorbed and available to cause a biological response.
Bioavailability processes are embedded within existing human health and ecological risk frameworks. The goal of bioavailability analysis is to reduce uncertainty in exposure estimates and thus improve the accuracy of risk assessment. However, today “bioavailability” is commonly thought of in relation to one process only—absorption efficiency—such that a single “bioavailability
factor” is used as an adjustment to applied dose. Other bioavailability processes are hidden within risk assessment, and assumptions made about these processes are not clear.
Mechanistic understanding of bioavailability processes is ultimately needed to improve the scientific basis of risk assessment. Thus, tools for measuring bioavailability processes that further mechanistic understanding and promote predictive model development are preferred over conventional empirical approaches. In the short term, empirical approaches are useful in generating site-specific information—provided that their results are analyzed using a weight-of-evidence approach and with an understanding that they will be replaced with more mechanistic tools as they are developed. At any given site, a suite of tools will be necessary to describe bioavailability processes in soils or sediments.
The potential for the consideration of bioavailability processes to influence risk-based decision-making is greatest where certain chemical, environmental, and regulatory factors align, that is:
where the contaminant is (and is likely to remain) the risk driver at a site;
where the default assumptions made for a particular site are inappropriate;
where significant change to remedial goals is likely (e.g., because large amounts of contaminated soil or sediment are involved);
where conditions present at the site are unlikely to change substantially over time; and
where regulatory and public acceptance is high.
These factors should be evaluated before committing the resources needed for a detailed consideration of bioavailability processes.
DEFINING BIOAVAILABILITY PROCESSES
The individual physical, chemical, and biological interactions that determine the exposure of organisms to chemicals associated with soils and sediments are defined herein as “bioavailability processes” (Figure ES-1). The report adopts the term “bioavailability processes” because “bioavailability” has been defined in different ways that are often discipline-specific—creating a semantic stumbling block that can confound use of the term. Presently, our mechanistic understanding of the bioavailability processes described below is highly variable, and quantitative descriptive models of bioavailability processes in most cases are lacking.
“A” in Figure ES-1—contaminant binding and release—refers to the physical and [bio]chemical phenomena that bind, unbind, expose, or solubilize a contaminant associated with soil or sediment. Binding may occur by adsorption on solid surfaces or within a phase like natural organic matter, or by a change in form, as through covalent bonding or precipitation. Contaminants become bound to solids as a result of chemical, electrostatic, and hydrophobic interactions, the strengths of which vary considerably. An important aspect governing contaminant–solid interactions is time; with aging, a contaminant generally is subject to transformation or incorporation into a more stable solid phase that can lead to a decrease in contaminant bioavailability. Contaminants can be released to fluid in contact with soil or sediment in response to changes in water saturation, in water and gas chemistry, and in solid surface properties. Biologically induced release is common in natural systems, including release mediated by digestive processes, microorganisms, plants, and bioturbating invertebrates.
“B” in Figure ES-1 involves the movement of a released contaminant to the membrane of an organism, while C involves the movement of contaminants still bound to the solid phase. Contaminants dissolved in the aqueous or gas phases are subject to transport processes such as diffusion, dispersion, and advection that may carry the contaminant to the surface of a living organism. These same processes can also transport contaminants still bound to small solid particles (colloids) into close proximity to potential receptors. As contaminants are being transported, they can undergo transformation reactions (including oxidation–reduction reactions, hydrolysis, acid–base reactions, and photolysis) that can greatly affect the bioavailability and toxicity of the contaminant. It should be noted that if association–dissociation processes have occurred internally (as in the gut lumen), fate and transport processes prior to uptake across a biological membrane may be limited.
The bioavailability process depicted as D entails movement from the external environment through a physiological barrier and into a living system. Because of the enormous diversity of organisms and their physiologies, the actual process of contaminant uptake into a cell—or factors that may impede or facilitate uptake—varies depending on receptor type. One common factor among all organisms is the presence of a cellular membrane that separates the cytoplasm (cell interior) from the external environment. Most contaminants must pass through this membrane (by passive diffusion, facilitated diffusion, or active transport) before deleterious effects on the cell or organism occur. For bacteria and plants, contaminants must be dissolved in the aqueous phase before they can be taken up. However, elsewhere in the natural world there are exceptions to the notion that bioavailability is directly dependent on solubility. For example, contaminant-laden particles that undergo phagocytosis can be delivered directly into some cells (although within the cell the contaminant may eventually need to be solubilized to reach its site of biological action). Uptake mechanisms relevant to humans include absorption across the gut wall, the skin, and the lining of the lungs.
“E” in Figure ES-1 refers to paths taken by the chemical following uptake across a membrane, for example, metabolic processing or exerting a toxic effect within a particular tissue. In general, the magnitude and the nature of the effect will be determined by the form and concentration of the chemical at its active site(s). If concentrations of the chemical achieved at the biological targets are too low, or if the chemical has been converted to a form that no longer interacts with the target, no effect will be observed. On the other hand, exposure may lead to concentrations that are sufficiently high so as to be lethal. Between these extremes is the potential for non-lethal, yet deleterious effects such as reduced metabolic activity, impaired reproduction, and increased sensitivity to physical or chemical stresses. Of particular importance is the bioaccumulation of contaminants (e.g., polychlorinated biphenyls or PCBs) to storage sites within tissues that are often inaccessible to normal elimination mechanisms such as metabolism and excretion. Slow release of the chemical from these storage sites can result in protracted “exposure” within the body even when exposure outside the body has been reduced.
Bioaccumulated contaminants may become available at some point to higher-order organisms that eat the plant or animal in which the contaminants are stored. In fact, food chain transfer is probably a more important exposure pathway to contaminants in soils and sediment for higher-order animals than is direct ingestion of soil or sediment. Depending on the extent of bioaccumulation in each organism, animals can be exposed to contaminants at concentrations higher than those found in the solids from which the compound originated (biomagnification).
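The trophic-transfer arithmetic behind biomagnification can be sketched in a few lines; the biomagnification factors and the starting concentration below are purely hypothetical, chosen only to illustrate how tissue concentrations can grow at each feeding step.

```python
# Illustrative only: trophic transfer of a persistent contaminant (e.g., a
# PCB). Each step up the food chain multiplies the tissue concentration by
# a biomagnification factor (BMF); all numbers here are hypothetical.

def food_chain_concentrations(c_base, bmfs):
    """Tissue concentrations at successive trophic levels, starting from
    the concentration in a base organism such as a sediment invertebrate."""
    levels = [c_base]
    for bmf in bmfs:
        levels.append(levels[-1] * bmf)
    return levels

# Hypothetical chain: invertebrate -> fish -> piscivorous bird
levels = food_chain_concentrations(0.05, [3.0, 4.0])  # mg/kg wet weight
```

With a BMF above 1 at each step, the top predator’s tissue concentration exceeds that of the sediment-dwelling prey even though the predator never ingests sediment directly.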
The committee’s definition of “bioavailability processes” incorporates all the steps that take a chemical from being bound or isolated in soil or sediment to being absorbed into an organism (Processes A through D). Although of great
importance in determining the overall effect of a contaminant on an organism, E processes are not considered bioavailability processes per se because soil and sediment no longer play a role. While it is instructive to consider bioavailability processes in isolation, it is imperative to realize that they occur in concert and often are interdependent. Nonetheless, a few steps typically will be most restrictive and thus have the greatest impact on total bioavailability (i.e., for a given situation, a select few processes are expected to dominate). In planning a bioavailability assessment, which typically will involve measurement of various physical–chemical properties and some kind of biological response, the objective should be to characterize only the most critical features of the system using appropriate tools.
CURRENT USE OF BIOAVAILABILITY IN THE MANAGEMENT OF CONTAMINATED SOIL AND SEDIMENT
Bioavailability processes overlap with many of the exposure pathways commonly considered during risk assessment and thus are an integral part of exposure assessment. However, their consideration is not always obvious or explicit. For both human health and ecological risk assessment, bioavailability processes may be dealt with by using either default values in exposure equations or site-specific data and information.
Human Health Risk Assessment
In human health risk assessment, “bioavailability” is specifically used in reference to absorption into systemic circulation—consistent with the toxicological use of the term. This encompasses bioavailability process D in Figure ES-1 as well as some process E steps, such as liver processing. Bioavailability processes leading up to absorption (A–C) are also included in human health risk assessments, but are instead described as “fate and transport” processes.
When considering bioavailability as the fraction of the chemical that is absorbed into systemic circulation, two operational definitions are important—absolute and relative bioavailability. Absolute bioavailability is the fraction of the applied dose that is absorbed and reaches the systemic circulation (and can never be greater than 100 percent). Relative bioavailability represents a comparison of absorption under two different sets of conditions—for example, from a soil sample vs. from food—and can be greater than or less than 100 percent. These values are used in exposure assessments, particularly for exposure by direct ingestion of soil or sediment and by dermal contact. For example, the exposure intake equation for incidental ingestion of soil invokes a relative bioavailability adjustment factor if the absolute bioavailability for the case of concern is known to differ from the absolute bioavailability implicit in the toxicity value used. Dermal exposure equations have additional relative correction factors because there are very few toxicity values available specifically for the dermal route. The inhalation pathway presents even more complexity, and there are few examples of situations where a bioavailability adjustment factor has been used to refine an inhalation risk assessment.
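The role of a relative bioavailability adjustment in the soil ingestion pathway can be sketched as follows. The equation is a simplified form of the standard chronic-intake calculation, and every numeric input below is hypothetical rather than a regulatory default.

```python
# Simplified chronic daily intake for incidental soil ingestion, with a
# relative bioavailability (RBA) adjustment. Parameter values used below
# are hypothetical, not regulatory defaults.

def soil_ingestion_intake(c_soil, ir, rba, ef, ed, bw, at):
    """Chronic daily intake in mg/(kg body weight * day).

    c_soil: soil concentration (mg/kg); ir: soil ingestion rate (mg/day);
    rba: relative bioavailability (dimensionless); ef: exposure frequency
    (days/yr); ed: exposure duration (yr); bw: body weight (kg);
    at: averaging time (days).
    """
    mg_to_kg = 1e-6  # convert ingested soil mass from mg to kg
    return (c_soil * ir * mg_to_kg * rba * ef * ed) / (bw * at)

# A site-specific RBA of 0.5 halves the estimated intake relative to the
# 100 percent default (hypothetical inputs):
intake_default = soil_ingestion_intake(400, 100, 1.0, 350, 6, 15, 6 * 365)
intake_adjusted = soil_ingestion_intake(400, 100, 0.5, 350, 6, 15, 6 * 365)
```

Because the RBA term enters the numerator linearly, any defensible site-specific value propagates directly and proportionally into the exposure estimate.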
Studies using animals as surrogates for humans have been conducted at a small number of sites to determine relative bioavailability (and to a lesser extent absolute bioavailability) for different chemical–solid combinations. These studies have shown that there is considerable variability in the relative bioavailability values measured for a given contaminant in different soil types. Nonetheless, a general paucity of absorption data has led to the extensive use of simplifying or default adjustment factors for chemical absorption in human health risk assessments. Federal and state regulatory agencies, as a practical matter, often specify the defaults they regard as acceptable, mainly for dermal contact and oral ingestion of soil. Default values are sometimes given for single chemicals or, where less information is available, for classes of chemicals. The use of national default values for relative and absolute bioavailability has been most thoroughly developed for lead-contaminated sites.
The most prominent default is that relative bioavailability is assumed to be 100 percent unless there is compelling contrary evidence and a scientifically defensible adjustment factor can be derived. In most instances, an assumption of 100 percent relative bioavailability is conservative, because most toxicity tests utilize forms of a chemical that tend to be readily absorbed. However, this is not always the case; dosing with the chemical in the diet, for example, may represent suboptimal conditions for absorption. Under these circumstances, it is possible that exposure to the chemical in an environmental medium like soil may entail greater absorption than occurred during the critical toxicity study. In this situation, an assumption of 100 percent relative bioavailability will underpredict the potential for exposure.
Ecological Risk Assessment
Bioavailability processes are also considered in exposure intake equations for ecological risk assessment. However, when compared with human health risk assessment there is greater complexity in ecological risk assessment because of the many species, physiologies, and physicochemical processes that must be considered. Some organisms feed directly on soils and sediments and thereby access contaminants, other species absorb dissolved chemicals across their external membranes, and still other species access contaminants that originated in soils and sediments by eating organisms exposed via the first two routes.
Two pathways frequently drive ecological risk assessments—direct contact of invertebrates with soils or sediments and exposure to wildlife feeding on soil invertebrates and plants. For the direct contact pathway, relatively simple techniques have been developed that predict the partitioning of metals and organics
between different phases—solid, aqueous, or within an organism—with the latter two representing the bioavailable fraction. These estimates of the bioavailable fraction of a contaminant pool are compared directly to threshold concentrations known to cause negative effects, where such thresholds are known. Alternatively, estimates of the bioavailable fraction can be used to model contaminant transfer to higher trophic levels.
Two partitioning techniques have become commonplace. For metals, normalizing their concentrations in sediment to acid volatile sulfides (AVS) has been suggested as a universal explanation of metal availability from sediments. The theory holds that sulfides bind metals as highly insoluble precipitates, such that low pore water concentrations of metal translate into limited bioavailability. However, there are numerous environmental settings and organisms for which AVS is not applicable, thus limiting its potential. For organics, much attention has been given to the biota-soil/sediment accumulation factor (BSAF), an empirical ratio defined as the chemical concentration in tissue divided by the chemical concentration in soil or sediment. Because BSAF values depend on the physical–chemical properties of both the organic compound and the solid as well as on the lipid content of the organism, they are site- and species-specific, although there have been attempts to apply BSAF values measured in one location to other locations. Thus, the commonly used normalization paradigms for the direct contact pathway have substantial uncertainties and, at best, may capture only the crudest influences.
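Both screening calculations reduce to simple ratios and differences. The sketch below assumes the commonly used conventions—a lipid- and organic-carbon-normalized BSAF for nonionic organics and the SEM−AVS difference for divalent metals; the function names and all numeric inputs are illustrative, not a standardized implementation.

```python
# Illustrative implementations of the two partitioning screens; all
# numeric inputs are hypothetical.

def bsaf(c_tissue, f_lipid, c_sed, f_oc):
    """Biota-soil/sediment accumulation factor for a nonionic organic,
    in the common lipid- and organic-carbon-normalized form:
    (tissue concentration / lipid fraction) over
    (sediment concentration / organic carbon fraction)."""
    return (c_tissue / f_lipid) / (c_sed / f_oc)

def sem_minus_avs(sem, avs):
    """Simultaneously extracted metals minus acid volatile sulfides
    (both in umol/g dry sediment). A result <= 0 suggests excess sulfide
    binds the metals as insoluble precipitates (low predicted pore-water
    availability); > 0 flags potentially available metal."""
    return sem - avs

ratio = bsaf(c_tissue=2.0, f_lipid=0.05, c_sed=10.0, f_oc=0.02)
excess = sem_minus_avs(sem=1.2, avs=2.0)  # negative: sulfide in excess
```

As the text notes, both screens are operational rather than mechanistic: a single ratio or difference cannot capture dietary uptake or the other site-specific processes that drive an organism’s total dose.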
The wildlife exposure pathway includes not only direct ingestion of soils and sediments but also exposure to chemicals accumulated in the tissues of prey. As such, approaches to determining the bioavailability of contaminants in lower-order animals like invertebrates (discussed above) are important in wildlife exposure modeling. Although wildlife also may be exposed via incidentally ingested soils or sediments, little effort has been spent determining relative bioavailability adjustment factors because of difficulties in making such measurements; they typically are assumed to be 100 percent. Other than this assumption, there are few if any default relative bioavailability values commonly used in ecological risk assessment—unlike with human health risk assessment.
Site-specific assessments that have been labeled specifically as “incorporating bioavailability” have occurred for a small subset of risk assessments across the country. Typical measurements of relative bioavailability reflect the difference between uptake of soil-bound contaminant vs. contaminant in the dosing medium used for the toxicity study. For human health risk assessment, such studies are most prevalent for the oral route of exposure and for inorganic contaminants (arsenic, cadmium, lead, and mercury). Bioavailability processes are commonly included in ecological risk assessments, although they have not been labeled as “bioavailability assessments or adjustments” per se. Nonetheless, there
are certain pathways (e.g., sediment to invertebrates) and chemicals (persistent, bioaccumulative compounds) for which bioavailability information has been frequently sought and has gained regulatory acceptance.
Legal and Regulatory Framework
One of the most prominent and explicit uses of bioavailability is its incorporation into the regulatory standards for biosolids (sludge) disposal. Biosolids are the residual material from municipal wastewater treatment, and they are sometimes used to restore or remediate soils. Since the late 1970s, the U.S. Environmental Protection Agency (EPA) has developed standards to assure that no adverse effects would occur as a result of land application of biosolids. Over time these Part 503 regulations have incorporated a great deal of research data, such that for all exposure pathways other than human ingestion of biosolids, the bioavailable fraction, rather than the total concentration of the compounds of concern, forms the basis of the regulations.
Other examples of using bioavailability concepts in managing hazardous waste are less obvious. Within the contaminated soil field, the legal and regulatory view of “bioavailability” is narrower than the processes illustrated in Figure ES-1, in that the primary focus has been on absorption (particularly absorption into systemic circulation for humans) and thus on direct contact with soils via the oral and dermal pathways. As mentioned above, the most common default assumption about absorption has been that contaminants are equally bioavailable from soil as from the medium used in the critical toxicity study, although some states have set default values other than 100 percent relative bioavailability for broad use. The replacement of default values with site-specific measurements has not been acknowledged in laws or regulations for hazardous waste cleanup at the federal or state level, although there is also no formal prohibition against doing so.
EPA’s only quasi-official recognition of bioavailability is in the Risk Assessment Guidance for Superfund, which refers to “adjustments for absorption efficiency.” There is no agency-wide guidance on the data necessary to substantiate such an adjustment, however, leaving that critical determination to EPA regional offices, state agencies, or the judgment of risk assessors and others. An informal survey conducted by the committee to determine how EPA regional offices were considering bioavailability in hazardous waste programs revealed that recognition, acceptance, and utilization of bioavailability factors in state and federal cleanup projects are limited at best, with wide variations among the regions. These differences may be explained only partially by regional differences in the nature, types, and costs of cleanups. Hesitancy to replace default values with site-specific measurements of bioavailability, especially for human health risk assessment, may reflect agency concern with increased analytical costs, anxiety about public acceptance of the concept and methods, concerns
about legal challenges, and the absence of more formal national guidance. Thus, despite the lack of legal impediments, bioavailability studies are not a regular feature of site-specific risk assessment.
With regard to contaminated sediments, several federal agencies routinely conduct surveys of sediment quality and biological effects, and in doing so try to account for certain bioavailability processes. Similar to the lack of guidance apparent in the soil remediation arena, the approaches used by the different agencies are highly variable. The National Oceanic and Atmospheric Administration uses an empirical, statistical approach for screening sediment quality that does not explicitly address bioavailability processes. EPA’s more chemically oriented approach has been to develop criteria for protecting ecosystems from sediment toxicity using equilibrium partitioning theory (e.g., AVS). The U.S. Army Corps of Engineers’ experimental approach tests the toxicity of every sediment (for disposal of dredge spoils), and thereby implicitly considers bioavailability on a sediment-by-sediment basis. These differences serve as a point of confusion for practitioners hoping to better quantify the risks involved in various sediment management scenarios, and they reflect the lack of consensus among environmental managers about how to deal with bioavailability processes.
Although consideration of bioavailability processes is inherent to risk assessment, usually only some bioavailability processes are considered explicitly, and assumptions made about other processes are not transparent. For example, there has been more focus on the absorption aspect of bioavailability (through the use of default values for dermal and oral relative bioavailability and BSAF values), while many of the other processes have been less explicitly examined. The default values used to represent certain bioavailability processes in risk assessment may not be protective and appropriate for all circumstances. Thus, replacing default values with site-specific information should be encouraged. It must be remembered that site-specific information on bioavailability processes may yield exposure estimates that are either higher or lower than those based on the default value.
At present there is no legal recognition of “bioavailability” in soil cleanup, although bioavailability concepts are emerging for sediment management, and they have been more fully embraced for biosolids management and disposal. Formal recognition of “bioavailability” in state and federal regulatory contexts would eliminate at least some of the hesitancy and confusion on the part of risk assessors and managers regarding the acceptability of the concept.
There is no clear regulatory guidance or scientific consensus about the level and lines of evidence needed for comprehensive bioavailability process assessment. That is, it is not clear what threshold of knowledge is sufficient to be able to replace default assumptions about bioavailability with site-specific measurements. Regulatory guidance from EPA is needed that addresses what information must be included in a bioavailability process assessment, its scientific validity, acceptable models of exposure, and other issues. This may help to guide research efforts that will further our mechanistic understanding of bioavailability processes.
A myriad of physical, chemical, and biological tools has been used to evaluate bioavailability. These range from analytical techniques like spectroscopy that directly address where and how a chemical is associated with sediment or soil to techniques like extractions that operationally address form. Biological tools typically consider entry of the contaminant into the living organism (process D in Figure ES-1) without directly measuring processes A–C. However, processes A, B, or C might be manipulated by other means, with biological tools then being used to evaluate an organism’s responses to those manipulations. The state of the science is such that little consensus exists about optimal approaches for measuring bioavailability.
A table is provided within this report (Table 4-2) that specifies generic strengths and limitations of many tools. The seven criteria used to evaluate the tools are (1) the tool’s applicability to field settings; (2) its applicability to the solid phase; (3) whether it measures a single process vs. lumped processes; (4) its relevance to biouptake (bioavailability process D); (5) whether its results can be generalized to other sites; (6) its relevance to regulation; and (7) its usefulness as a research tool. The criteria reflect the committee’s opinion that mechanistic approaches (that determine the form and associations of a contaminant) have the greatest potential for ultimately defining bioavailability processes and narrowing uncertainties, although they are less applicable at present. Regulatory and industry interests tend to prefer simplified approaches that are operational (e.g., extractions), that provide shortcuts to estimate mechanistic processes (e.g., equilibrium partitioning), or that estimate bioavailability indirectly via complex responses (e.g., toxicity bioassays). Because some of these approaches lack explanatory capability and have limited applicability, they should be employed cautiously in the current regulatory environment so as not to increase uncertainty or the degree to which actions seem arbitrary.
No one method achieves the highest rating in all categories, and none of these methods fails all criteria, illustrating that every tool has tradeoffs. Among the tests reviewed, some are appropriate for some situations, but most are not generally applicable to a wide spectrum of situations. It is important to recognize that most tools are still in development and few are fully validated by a body of work relating their predictions to independent measures from nature.
Techniques to Characterize Interactions among Phases
Mechanistic understanding of physicochemical phenomena controlling bioavailability processes requires knowledge of the geochemical compartments that contain the contaminant, the forms of the contaminant, and interactions of the contaminant within the compartment. Several new instruments that can help to develop this understanding are evaluated. For example, microscale surface mass spectrometric and infrared spectroscopic methods are capable of describing the occurrence and role of black carbon that may serve as an especially strong sorbent for organic contaminants. X-ray absorption spectroscopy can discern the distribution and bonding of metals in solids and provide data on element mineralogy for use in modeling the solubility of mineral assemblages. Owing to the sophisticated, specific nature of these instruments, most will remain research tools. However, detailed examination of selected environmental samples advances mechanistic understanding and thereby furthers the development of validated conceptual models for describing the chemical and kinetic factors controlling contaminant release, transport, and exposure.
Physical–Chemical Extraction Techniques for Measuring Bioavailability
A wide variety of simple, empirical extraction tests are used to estimate the bioavailable fraction of a contaminant pool. The tests involve chemical extraction for metal contaminants and extraction using organic solvents or solid phase adsorbents for organic contaminants. For human health risk assessment, extractions have been developed to mimic mammalian digestive processes, and thus measure the bioaccessible fraction of a contaminant bound to a solid phase. Most extractions used in ecological risk assessment account for contaminant release from the solid surface to pore water. Thus, they are most successful (i.e., predictive) when biological uptake is dominated by a pore-water pathway (e.g., plant uptake of metals). Extractions cannot account for other, more complicated uptake mechanisms that control an organism’s overall dose, such as dietary exposure, acid extraction, removal by surfactants, ligand complexation in solution and on membranes, transport with amino acids, and enzymatic breakdown of organic chemicals.
Extraction procedures do not (with a few exceptions) remove metals or organic compounds from specific components of soils and sediments, nor can they explain the type or character of the sorbent phase in which an organic sorbate may be sequestered. Thus, they are operational, not mechanistic, methods for estimating contaminant availability. Such tests should be viewed as qualitative measures of reactivity that may be useful as screening tools. Validation of extraction tests (via correlation with a biological measure of bioavailability) is sparse, reflecting the difficulty and expense of bioassays using humans, ecological receptors, or a surrogate. Certainly no universal extraction procedure has been shown to consistently correlate with tissue concentrations in plants or animals across complicated environmental conditions.
Biologically Based Techniques for Measuring Bioavailability
Bioassays are employed to study influential biological processes themselves and as probes to study physical and chemical processes. Almost any technique that measures a biological response to contaminant exposure is suitable. However, interpreting the results from such experiments is not always straightforward because biological processes other than the one under investigation can affect the results. Tests that measure biological responses at levels of organization closest to contaminant transport across the membrane—such as assimilation efficiency and isolated organ tests—are easy to interpret from a mechanistic standpoint compared to responses that take place at more complex levels of organization. At the next level of organization is whole organism bioaccumulation, measured in feeding studies with invertebrates, fish, birds, and mammals. Bioaccumulation is not just the result of movement across the membrane, but also is influenced by how the organism encounters its environment and by species-specific internal processing mechanisms like digestion.
Other tests that measure more complicated biological responses or groups of processes reveal less about uptake and accumulation but are valuable for studying toxic effects. For example, biochemical responses to exposure at the cellular level can be measured with biomarkers such as P450. Toxicity tests (acute and sublethal) are widely used both in the lab and in situ to evaluate bioavailability, because they are practical, they depict responses of high relevance, and they are particularly useful for helping to understand the effect of contaminant mixtures. Because the number of potentially confounding factors grows beyond those relevant to whole organism bioaccumulation, toxicity tests are not optimal mechanistic indicators of bioavailability processes (as defined on page 2). Thus, there are tradeoffs between the biologically based tests available. In particular, those tests that directly measure biouptake provide unambiguous results about distinct mechanisms, but they may not capture the complexity of the environmental system nor speak to important effects that can be addressed by, for example, mesocosms and toxicity tests.
Biological tests are frequently used to validate the physical and chemical tools discussed earlier, or to provide complementary evidence about bioavailability processes in a system. For example, assimilation efficiency used in parallel with spectroscopy could reveal the properties of sediments that control bioavailability process A. Many of the tools discussed represent the state of the art or require additional research in order to reach their potential, especially molecular tools such as biomarkers and reporter systems.
Choosing Tools for Human Health and Ecological Risk Assessment
Prior to engaging in measurement of contaminant bioavailability from soils or sediments, it is critical to establish an accurate site conceptual model that describes the relevant exposure pathways, the receptors being exposed, and the environmental conditions under which the exposures occur. This information is vital because all available tools for assessing bioavailability processes are receptor-, pathway-, and contaminant-specific, such that bioavailability data for a chemical for one exposure pathway are not necessarily applicable to another exposure pathway. The lack of an accurate site conceptual model can lead to measurement of the wrong endpoint or selection of an inappropriate bioavailability tool.
Regulatory acceptance of the tools used to generate bioavailability information in risk assessment is expected to be influenced by several factors, including the relevance of the tools to the site conditions and the extent of tool validation. Validation variously refers to the performance of a tool or approach in terms of reproducibility, reliability, and multi-lab calibration. An appropriate body of experimental work to validate a tool would (1) clarify where and when a tool yields a definitive response; (2) clarify that the tool can be linked to a biological response of a similar magnitude, and that the linkage stands up across a range of conditions in the type of environment that is being managed; (3) test the prediction of bioavailability using different types of experiments and field studies; (4) clarify which types of biological responses are best predicted by the approach; and (5) include critiques of the best applications and the limits of the tool, especially compared to alternatives. A tool that is well accepted and validated should be given greater weight than one that is new or experimental.
No single tool has been developed that can universally describe or measure “bioavailability,” and approaches that have attempted this have failed. Thus, a complementary group of tools that characterize different bioavailability processes is a better choice than multiple tools that focus on only one step. Ideally, risk managers should consider processes influencing contaminant concentration, form, or transformation; biological processes affecting uptake; and linkages between internal concentrations and adverse effects in receptors. The complexity of this requirement illustrates the importance of a more comprehensive approach to exposure assessment as compared to a single-value regulatory approach in evaluating contaminant bioavailability. The corollary is that simple tests should be used cautiously. Simplification should only proceed once more mechanistic knowledge has become available, not in lieu of such information.
To avoid misapplying bioavailability tools it is important to understand the environmental setting for which a tool was designed and intended. The long-term success of implementing considerations of bioavailability in hazardous waste management depends upon developing improved models and measurement techniques appropriate to site-specific conditions. Confusion in the regulatory process could result if tools intended for other purposes are misapplied to soil and sediment management.
An intensive effort to develop mechanistic tools or models based on mechanisms is critical to future development of bioavailability tools. Many operational tools (e.g., extractions, normalizations, and simple models) have proven ambiguous or shown large uncertainties in their estimates of bioavailability when rigorously tested. Such empirical tests cannot be extrapolated to other sites, nor can they be used with confidence to understand permanence or unforeseen conditions. They are poorly correlated across species and ranges of environmental conditions.
MOVING FORWARD WITH BIOAVAILABILITY IN DECISION-MAKING
The limitations in our understanding of bioavailability processes have important ramifications for site management. The most obvious is that lack of knowledge may inadvertently support poor decisions regarding exposure assessment and, subsequently, how much contamination should be cleaned up and at what cost. There are also treatment remedies that rely heavily on increasing or decreasing bioavailability, and without a better understanding of bioavailability processes it is difficult if not impossible to know if such treatments are effective.
Treatment technologies reported to “decrease bioavailability” generally impede transfer of a contaminant from the soil or sediment matrix to a living organism. Examples of such technologies include biostabilization (bioremediation to reduce contaminant mobility and toxicity of contaminated soils and sediments); sediment capping (reducing the ability of a bottom dwelling organism to get to the contaminant, and increasing mass transfer distance); vitrification or solidification (decreasing contaminant mobility by increasing mass transfer resistance out of the solid matrix); and chemical alteration (e.g., converting a compound to a low solubility form or redox state via amendment). Other technologies attempt to increase pollutant removal or destruction by facilitating bioavailability processes. These technologies increase mass transfer from the sorbed phase via physical or chemical means. Examples of the former include grinding or mixing to decrease diffusional paths, or increasing matrix temperature to increase mass transfer rates. Chemical means include the use of surfactants, co-solvents, or chelating agents to increase mass transfer by (1) increasing the apparent aqueous solubility of hydrophobic organic compounds or (2) mediating changes to the geosorbent matrix structure.
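The rationale for grinding and mixing can be illustrated with the classic Fickian time-scale estimate t ≈ L²/D. The sketch below is purely illustrative: the effective diffusivity value and particle sizes are assumptions, not site-specific measurements.

```python
# Characteristic diffusion time t ~ L^2 / D for a contaminant
# escaping a sorbent particle (simplified Fickian estimate).
# The diffusivity below is illustrative, not a measured value.

D_EFF = 1e-14  # m^2/s, assumed effective intraparticle diffusivity

def diffusion_time_years(path_length_m: float, d_eff: float = D_EFF) -> float:
    """Rough time scale for diffusion over the given path length."""
    seconds = path_length_m ** 2 / d_eff
    return seconds / (3600 * 24 * 365)

# Grinding a 1 mm aggregate down to a 10 um path shortens the
# diffusional path 100-fold, cutting the time scale 10,000-fold.
for radius in (1e-3, 1e-5):
    print(f"L = {radius:g} m -> ~{diffusion_time_years(radius):.2g} years")
```

Because the time scale grows with the square of the path length, even modest reductions in particle size can dramatically accelerate mass transfer, which is the premise behind the physical treatment approaches described above.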
Determining whether these technologies are actually working to increase or decrease bioavailability is hampered by the plethora of different bioavailability tools and measurements in use, whose relevance to treatment effectiveness is not clear. Indeed, there is no consensus on the tools or methods that should be employed to measure “bioavailability reduction” in the course of remedial technology selection or on how results from those tests should be incorporated into risk assessment. As a result, the state-of-the-practice consists of applying a battery of assays to the soil or sediment under investigation that all have some relationship (however ill defined) to contaminant bioavailability. Using biostabilization as an example, a review of remedies for hydrocarbon-contaminated soils found that a wide variety of surrogate measures of bioavailability were utilized. These included Microtox™ assays, reduction in the water soluble fraction, leachability evaluations, dermal uptake through human cadaver skin, absorption efficiency via feeding studies in mice, earthworm uptake and toxicity tests, desorption tests, and supercritical fluid extraction. Some of these correlative assays may aid in short-term decision making, but in the absence of better capabilities to measure bioavailability processes they must be applied with extreme caution to ensure that appropriate site management decisions are made. Further, the permanency of treatment technologies that aim to reduce or enhance bioavailability has not been addressed, in part because tools to assess bioavailability processes over long time scales and over a range of soil and sediment conditions are not yet developed.
Finally, site managers should be cognizant of treatment technologies that may unintentionally affect bioavailability. Especially for sediment dredging and for new technologies that have yet to be fully tested, like phytoremediation, there may be unanticipated side effects that result in undesirable changes in bioavailability to certain receptors.
Next Steps at Individual Sites
Various actions are needed to make progress in incorporating bioavailability processes in risk assessment and decision-making at individual sites, in acknowledging bioavailability processes in regulations and creating appropriate guidance, and in better understanding bioavailability processes on a mechanistic level. At individual sites, key issues that need to be addressed include (1) selecting appropriate measurement and modeling tools; (2) assessing and (when possible) reducing uncertainty in understanding, models, and parameters for particular bioavailability processes; (3) developing long-term monitoring plans that include monitoring of bioavailability processes critical to the risk-based remedial plan implemented; and (4) including community groups in remediation planning at early stages.
The development of tools relevant to bioavailability is a rapidly growing field, such that there can be considerable confusion regarding which tools and how many to choose in order for the results to be useful in decision-making. In the face of limited information and imperfect tools, weight-of-evidence approaches may prove useful. That is, the results of tests should be combined to provide “multiple lines of evidence” about bioavailability processes at a site. This approach is especially needed to make near-term progress at sites where appropriate mechanistic tools are lacking, such that empirical tools must initially be relied on. (When it is possible to choose tools that will provide better mechanistic understanding, this opportunity should be exploited and not bypassed in favor of conventional empirical assessment approaches.) As more robust mechanistic methods evolve, the need for a multiple lines of evidence approach should diminish concomitant with our increasing ability to predict impacts, leading to greater acceptance of risk assessment that includes explicit consideration of bioavailability processes.
At the present time, many bioavailability processes are hidden within default assumptions that are both highly simplified and uncertain. More explicit, site-specific consideration of bioavailability processes in risk assessment can reduce this uncertainty. However, if there is substantial uncertainty associated with a bioavailability process that controls the ultimate estimated risk, there may be a tendency to not measure that process explicitly and instead to use conservative assumptions. Thus, it is important to recognize the uncertainty in each bioavailability process descriptor and the potential for propagation of error in risk assessment. The influence of bioavailability process uncertainty and variability on the overall risk can be assessed qualitatively, quantitatively through sensitivity analysis (deterministic risk evaluation), or through stochastic risk assessment.
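The contrast between a conservative default assumption and a stochastic treatment of bioavailability uncertainty can be sketched with a toy Monte Carlo exposure calculation. Every number and distribution below is a hypothetical placeholder, not a regulatory model; the point is only to show how uncertainty in one bioavailability descriptor propagates into the dose estimate.

```python
# Toy stochastic risk sketch: propagate uncertainty in a relative
# bioavailability (RBA) factor through a simplified dose estimate.
# All parameter values and distributions are hypothetical.
import random
import statistics

random.seed(0)

SOIL_CONC = 400.0  # mg/kg, assumed soil concentration
INTAKE = 1e-7      # kg soil per kg body weight per day, assumed

def simulate_dose(n: int = 10_000) -> list:
    """Draw n dose estimates with an uncertain bioavailability factor."""
    doses = []
    for _ in range(n):
        # Uncertain bioavailability: triangular around a best estimate.
        rba = random.triangular(0.2, 0.9, 0.5)
        doses.append(SOIL_CONC * INTAKE * rba)
    return doses

doses = simulate_dose()
default = SOIL_CONC * INTAKE * 1.0  # conservative default: RBA = 1
print(f"default dose:  {default:.2e} mg/kg-day")
print(f"mean dose:     {statistics.mean(doses):.2e}")
print(f"95th pct dose: {sorted(doses)[int(0.95 * len(doses))]:.2e}")
```

Comparing the default (RBA = 1) dose with the simulated distribution makes the trade-off explicit: the stochastic estimate is less conservative but carries a quantified spread, which is exactly the kind of uncertainty characterization the text above calls for.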
The expanded consideration of bioavailability processes in the current risk assessment paradigm will likely alter both the prioritization of remediation efforts and the decisions pertaining to the remedial technologies chosen at individual sites. Whether these decisions provide long-term protection to humans and the environment will depend, in part, on how much is known about bioavailability processes over time. Thus, replacing default bioavailability assumptions with site-specific measurements must be accompanied by evaluations of future system states via newly focused long-term monitoring, including the potential for events to occur that might reintroduce unacceptable exposure conditions. Presently, there is almost no guidance on approaches for long-term monitoring that specifically target the stability of the contaminant “form” instead of total contaminant concentration.
Communities often have concerns about explicit consideration of bioavailability processes in risk assessment at hazardous waste sites. Bioavailability assessments may be viewed as a “do-nothing” or “do-less” approach, given that incorporating bioavailability information into risk assessment may raise acceptable contaminant concentrations in soil or sediment. Also, because bioavailability studies may not be conducted for the ultimate receptor of concern, or may yield results with considerable uncertainty, a community may not be confident that the scientific evidence is adequate to apply the results within their community. In the limited cases to date where communities have been presented with bioavailability information, the responses have ranged from strong support to acceptance to strong objection.
Because bioavailability processes for contaminated soils and sediments are inherently part of risk assessment, bioavailability does not present a unique risk communication problem. Thus, the public should be introduced to the concept of bioavailability as being a fundamental component of risk assessment no different from other exposure parameters or toxicity values. The technical components that should be part of any public communication program regarding bioavailability include (1) the factors that affect bioavailability from soils or sediments, (2) the concepts of absolute bioavailability and relative bioavailability, (3) the technical basis for the established toxicity values, (4) the selection of a model for bioavailability studies and why it was chosen, (5) how uncertainty was handled, and (6) how site-specific bioavailability information will be incorporated into the risk assessment. Finally, it should be acknowledged that rarely are bioavailability studies undertaken simply to improve the accuracy of a risk assessment. Rather they are performed to justify site cleanup goals that are more financially or technically feasible, and that involve leaving appreciable amounts of contaminant mass in place, while still being protective of public health and the environment.
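The distinction in item (2) between absolute and relative bioavailability can be shown with a minimal calculation. Absolute bioavailability is the fraction of an ingested dose that is absorbed; relative bioavailability (RBA) compares absorption from site soil to absorption in the medium used to derive the toxicity value, and is used to scale the exposure dose. All numeric values below are hypothetical.

```python
# Minimal sketch of absolute vs. relative bioavailability.
# All numbers are hypothetical, chosen only for illustration.

def relative_bioavailability(aba_soil: float, aba_tox_study: float) -> float:
    """RBA: absorbed fraction from site soil relative to the
    absorbed fraction in the toxicity-study medium."""
    return aba_soil / aba_tox_study

def adjusted_dose(conc_mg_kg: float, intake_kg_day: float, rba: float) -> float:
    """Exposure dose (mg/day) scaled by relative bioavailability."""
    return conc_mg_kg * intake_kg_day * rba

rba = relative_bioavailability(aba_soil=0.3, aba_tox_study=0.8)
dose = adjusted_dose(conc_mg_kg=500.0, intake_kg_day=1e-4, rba=rba)
print(f"RBA = {rba:.3f}, adjusted dose = {dose:.5f} mg/day")
```

A worked example like this is the kind of concrete material a public communication program can use to show how a site-specific RBA measurement enters the risk calculation without changing the underlying toxicity value.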
Next Steps in the Regulatory Arena
The resistance in some regulatory domains to allowing site-specific measurements of some bioavailability processes to replace default assumptions stems from many factors, including uncertain methodologies and lack of validation, public anxiety and suspicion about motives, and lack of precedent. A viable way to move past these obstacles and achieve more widespread consideration of bioavailability processes in risk-based management of contaminated soils and sediments is to invoke an adaptive management approach, which embraces two ideas. The first is that there should be pilot studies to experiment with different tools and models. The second is that agencies should use the results from such efforts to develop a common systematic approach to determine how and when to incorporate bioavailability concepts into regulations in a consistent manner. Adaptive management concepts are not new, but rather are akin to the scientific method and engineering problem solving. An adaptive management example relevant to bioavailability is the approach recently recommended by EPA for determining the efficacy of dredging and how much PCB-contaminated sediment to dredge from the Hudson River. The plan involves evaluating risks over time and adjusting cleanup plans as performance monitoring data are acquired and analyzed.
Next Steps in the Scientific Arena
Expansion of bioavailability considerations into risk assessment and remedial decision-making requires improved scientific understanding and models for a number of key bioavailability processes. Investment in mechanistic understanding and models will prove more profitable in the long term than reliance on empirical knowledge because models have greater predictive power for a broader range of situations. As part of this research effort it will be important to draw ties between mechanistic understanding and more operational tests for bioavailability with studies that, for example, quantitatively examine both the form of a contaminant and its biological uptake. Other areas in need of attention include contaminant–solid interactions (especially the nature and effects of aging on contaminant release rates), the feeding ecology of animals, and how organisms bioaccumulate and transfer contaminants to their predators. Better understanding of whether and when associations between contaminants and soils and sediments can be made permanent should be a future research goal. The results from such research are needed before bioavailability explanations can be used with confidence to determine the amounts of soil and sediment to be remediated.
Much information on bioavailability of contaminants comes from industry-funded studies at specific sites, particularly for human health risk assessments. Such studies are usually, and understandably, not conducted in a way that advances understanding of fundamental underlying processes. Over the last decade, EPA has supported studies on mobility of chemicals in the environment, uptake relevant to assessing ecological risks, and bioavailability processes that might affect bioremediation. Yet despite this research investment, progress in understanding these bioavailability processes is limited. Unless a greater commitment is made to fund bioavailability studies from a research rather than industry-driven perspective, progress in developing information that can be used to advance human health and ecological risk assessments will be slow.