Research to Address Gaps in Environmental Health Assessments During Disasters
There are numerous unknowns about the extent of environmental exposures during any disaster, and Hurricane Katrina is no exception, noted Thomas Burke of the Johns Hopkins University’s Bloomberg School of Public Health. In the aftermath, there were concerns about toxic agents, mold, physical hazards, and the multiple pathways of exposure. For some of these harms, there may be unique pathways of exposure that are not a part of the risk assessment process. Thus, scientists during a disaster may be addressing complex exposure pathways or a combination of agents, which complicates the response and risk communication for public health. In order to provide accurate information, scientists need to understand the affected community to know the potential routes of exposures. This information will guide research to prepare for future disasters and guide public health actions, noted Burke.
SURVEILLANCE FOR ENVIRONMENTAL HEALTH
Burke questioned how to build on the current scientific research base to further the field’s advancement. He suggested that effective surveillance is one way to ensure that environmental health scientists meet the needs of practitioners and the community while providing a basis for research.
The Institute of Medicine (IOM) 1988 report The Future of Public Health noted that the removal of environmental health authority from public health agencies has led to fragmented responsibility, lack of coordination, and inadequate attention to the health dimensions of environmental problems. Burke noted that two key findings from this report were important during the response to Hurricane Katrina. First, environmental public health services are vulnerable during times of budget shortfalls or unexpected emergencies, as no dedicated funding for core environmental public health services exists (IOM, 1988). Second, the lack of a coordinated statewide approach and inadequate training and technical support hinder environmental public health technological advances (IOM, 1988). Burke suggested that without rectifying these shortcomings, the response to any disaster will be hindered.
The 2000 Pew Environmental Health Commission report reaffirmed the findings of this IOM report and further stated that the United States lacks a cohesive national strategy to identify environmental hazards, measure population exposures, and track health conditions as they relate to the environment (Environmental Health Tracking Project, 2000). Local public health officials need this fundamental information, noted Burke. Currently, basic information on incidence and trends in health conditions related to environmental exposures is largely unavailable. Environmental health is making progress, but at the local level the translation has not happened, asserted Burke. This lack of translation exemplifies the problem in the Gulf Coast region. Burke noted that if public health does not have baseline exposure data, then officials will not be able to reassure individuals that their exposures are low or that illness rates are low. He further asserted that any tracking program needs to have a rapid response capacity to assist practitioners and communities throughout the country during a disaster.
On the positive side, the field can take advantage of progress in technology and research to accomplish this. One of the first steps is to apply current surveillance paradigms to environmental health. From these efforts, it is necessary to build a fundamental understanding of the hazards, to measure and track exposures, and to develop a way to track the health status of the community. One example implemented during Hurricane Katrina was the use of large-scale geographic information systems by the National Institute of Environmental Health Sciences (NIEHS) to illustrate the location of refineries and other hazardous materials in the city of New Orleans. This effort, Burke noted, was a good first step in scoping the issue to address prevention and target the response.
Burke noted that progress has been made by agencies such as the Environmental Protection Agency (EPA) to make data available on their websites. However, he added that the field has not made progress in interpreting the data or making that information usable, particularly to the affected communities.
Strength of Biomonitoring
One tool that may become very important for environmental health is biomonitoring, which measures the amount of chemicals absorbed by the body, provides a measure of individual or population exposure levels, evaluates health effects, identifies those at highest risk, tracks trends, and guides prevention strategies. As an emerging technology, it can help during a disaster not only by identifying individuals at highest risk and by tracking trends, but also, most importantly, by reassuring the public. In the case of the terrorist attacks of 9/11, biomonitoring could have provided a tool to reassure workers that they were adequately protected while they were exposed to potentially hazardous situations. Biomonitoring should be part of the research agenda to move the field forward, asserted Burke. Although it is widespread in the research laboratory, the field has not fully developed its practical capacity. This is one area for future development.
During the workshop, Burke noted that many speakers discussed the need to consider how to assess exposure and its potential health implications. The traditional way is to compare measured levels with standard benchmarks. During a disaster in which there may be hundreds of exposures, this may not always be possible. One way to address the problem is to look at the potential health end points of those exposures. Considerable research exists on the end points associated with various agents found in the floodwaters. From this, scientists can build the evidence base to move forward with research, but in order to understand the long-term health effects, it is important to know whether we are looking for the right end points. Thus, Burke suggested that more discussion needs to occur in this area by asking whether the regulatory monitoring lists reflect the chemical exposures that communities need to know about and whether these are the tools that are most effective in informing practitioners to move toward prevention. Looking at the same pollutants—many of them now banned—without considering the evolving hazards (e.g., pharmaceuticals, newer persistent compounds) may be a bit of “looking for keys under the lamppost.” The data on these compounds have not been developed, and scientists do not know their health effects. Burke concluded that it is time to move the field forward by listening to practitioners and communities; developing surveillance as a foundation for research, risk assessment, and prevention; and encouraging translation and communication of research into practice. Finally, he noted that this approach is not only about Hurricane Katrina, but also creates a pathway to address basic environmental public health.
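The benchmark comparison Burke described can be sketched in a few lines. This is an illustrative example only: the chemical names, measured concentrations, and benchmark values below are hypothetical placeholders, not drawn from any actual floodwater data.

```python
# Hypothetical sketch: screening measured floodwater concentrations
# against regulatory benchmarks. All names and values are illustrative.

# Screening benchmarks (mg/L) -- invented for illustration
benchmarks = {"lead": 0.015, "arsenic": 0.010, "benzene": 0.005}

# Measured floodwater concentrations (mg/L) -- invented for illustration
measured = {"lead": 0.020, "arsenic": 0.004, "benzene": 0.007}

def screen(measured, benchmarks):
    """Flag analytes whose measured level exceeds the screening benchmark."""
    exceedances = {}
    for chemical, level in measured.items():
        limit = benchmarks.get(chemical)
        if limit is not None and level > limit:
            # Hazard quotient: ratio of measured level to benchmark
            exceedances[chemical] = round(level / limit, 2)
    return exceedances

print(screen(measured, benchmarks))  # analytes exceeding their benchmark
```

The limitation Burke raised is visible in the sketch: any chemical absent from the benchmark list is silently passed over, which is exactly the "keys under the lamppost" problem for emerging hazards.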
DEFINING AND WORKING WITH SUSCEPTIBLE POPULATIONS
Hurricane Katrina was instrumental in calling attention to environmental health in the minds of the public, observed Maureen Lichtveld of the Tulane University School of Public Health. People became fluent in discussing basic needs, such as sewage and sanitation—items that are often taken for granted during everyday life. In addition to basic needs, one issue that has received considerable attention is that of susceptible populations: those most at risk.
Lichtveld noted that defining susceptibility will take time, and it will require sustained investments in time, expertise, and funding. The questions are the who, what, and why of susceptibility. For example, who is susceptible? Traditionally, susceptible populations are considered to be children, the elderly, individuals with asthma, and immunosuppressed individuals; alternatively, one may use a strict definition based on biomarkers or define susceptibility guided only by clinical manifestation. For research to progress, environmental health scientists need to transcend the traditional views, asserted Lichtveld. This requires breaking down silos and examining the intersection of the population and the complex exposure conditions. For Hurricane Katrina, the definition is further complicated because one needs to consider whether susceptible populations are defined on the basis of pre- or post-Katrina status, noted Lichtveld.
To illustrate the complexity, Lichtveld pointed out that the flooding affected many different ethnic populations. Figures 7-1 and 7-2 illustrate where, on the basis of 2000 U.S. census data, the African American and Caucasian populations resided compared with the flooding. Similarly, 2002 data derived from the Louisiana Childhood Blood Lead Surveillance System indicate that children with elevated lead levels were disproportionately affected by the flooding. To facilitate crisis decision making, data depicted by flood contour maps similar to Figures 7-1 and 7-2 were compared with key sociodemographic factors to define potentially susceptible populations. The sociodemographic factors evaluated include such economic indicators as median family income, poverty, and unemployment, as well as leading health conditions such as pediatric asthma. Lichtveld noted that these were the only somewhat reliable environmental health data that scientists had during the early post-Katrina phase. According to Lichtveld, the data point to very significant contributing sociodemographic factors and baseline health conditions that placed segments of the affected populations at increased risk. In question, however, is the relevance of pre-Katrina data to determine post-Katrina environmental public health interventions. For example, what should be the appropriate unit of analysis to address the needs of susceptible populations in a systematic, scientific fashion following disasters? The ultimate challenge, argued Lichtveld, is a collective commitment to generate the appropriate data aimed at characterizing and addressing real risks to those most vulnerable.
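The overlay Lichtveld described, combining flood extent with sociodemographic indicators, can be thought of as a simple tract-level join. The sketch below is hypothetical throughout: tract identifiers, flood depths, rates, and cutoff values are invented for illustration, not derived from the census or surveillance data cited above.

```python
# Illustrative tract-level overlay: flag census tracts that were both
# flooded and socioeconomically at risk. All values are hypothetical.

flood_depth_ft = {"tract_01": 6.0, "tract_02": 0.0, "tract_03": 3.5}

tract_data = {
    "tract_01": {"poverty_rate": 0.34, "pct_elevated_lead": 0.18},
    "tract_02": {"poverty_rate": 0.09, "pct_elevated_lead": 0.03},
    "tract_03": {"poverty_rate": 0.28, "pct_elevated_lead": 0.12},
}

def flag_susceptible(flood_depth_ft, tract_data,
                     min_depth=2.0, poverty_cut=0.20, lead_cut=0.10):
    """Return tracts that were flooded AND exceed a vulnerability cutoff."""
    flagged = []
    for tract, depth in flood_depth_ft.items():
        d = tract_data[tract]
        if depth >= min_depth and (d["poverty_rate"] >= poverty_cut
                                   or d["pct_elevated_lead"] >= lead_cut):
            flagged.append(tract)
    return sorted(flagged)

print(flag_susceptible(flood_depth_ft, tract_data))
```

The choice of unit of analysis (tract, block group, household) and of cutoffs is exactly the open question Lichtveld raised; the sketch only makes explicit what such a crisis decision-support join would compute.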
Need for Participatory Research
If public health looks at the continuum from environmental contamination to disease, there are many questions that arise during a disaster response. Lichtveld noted that for science to embark on research that matters, it needs to yield a demonstrable return on investment in terms of prevention. Such research must engage the end users, whether it is called community-based participatory research or collaborative research. Environmental health research in the context of disasters cuts across more than one disease or condition and informs new environmental policies. She suggested that scientists need to take an exploratory approach to defining the types of susceptibilities. Such research should not focus on what scientists would like to know, but rather on what public health must know to be able to advance prevention. Finally, while research needs to be informed by bench science, it has to be flexible enough to answer questions from the trenches as disasters unfold.
Lichtveld noted that, as was the case with other disasters, two increasingly intertwined and long-term public health issues remain after Hurricane Katrina: environmental health and mental health. Both are underrecognized, neglected, and seldom integrated in the planning and implementation of the recovery phase. To characterize and address the needs of susceptible populations, Lichtveld suggested that research in the context of disasters must also inform the practice and the policy of environmental health. She also observed that surveillance and monitoring can serve as relevant public health tools in post-disaster situations, playing a pivotal role in identifying and addressing the research gaps most likely to expedite just-in-case and just-in-time risk characterization and management.
In conclusion, Lichtveld noted that one leads by doing, by example. It is not acceptable for scientists to only inform and educate the communities. Scientists need to listen and learn from the community in order to engage in more informed and more relevant research.
FROM EXPOSURE TO DISEASE OUTCOME
The framework presented in Figure 7-3 was introduced 25 years ago as a collaborative effort by the Interagency Regulatory Liaison Group, which included EPA, the Occupational Safety and Health Administration, the Consumer Product Safety Commission, and the Food and Drug Administration. This work was published by the National Research Council in 1983 and provided the framework for what became known as the Red Book for risk assessment in the federal government.
Gilbert Omenn of the Schools of Medicine and Public Health at the University of Michigan noted that the weakest part of this risk assessment framework has been exposure analysis. Scientists have put considerable effort into hazard identification of individual chemicals, including complex mixtures. It was not until the past decade that science began to put more emphasis on exposures. The delay had a number of causes, primarily that exposure research is complicated to perform and the results are hard to translate into meaningful policies. He asserted that this work was crucial and resulted in a new discipline of exposure analysis, carried out in concert with toxicologists.
The traditional approach to exposure science has been to focus on one chemical at a time in one medium for one health effect. Today this is unrealistic, as we live in an environment that contains a complex mixture of chemicals, noted Omenn. There are large numbers of chemicals in the air, water, soil, and foods that can reach individuals through multiple pathways. This was recognized in the mid-1990s by the Presidential/Congressional Commission on Risk Assessment and Risk Management (called the Omenn Commission for its chair). During the hearings, the public challenged the regulatory approach that focused on one chemical at a time. From their own assessments, people realized that they are exposed to a “soup of chemicals” throughout the day, and they wanted answers to the complexity of their environment.
Omenn noted that there are examples of scientists testing complex mixtures, although it is complicated to estimate how much each individual element contributes to the overall effect. Omenn further noted that in addition to diesel exhaust, testing mixtures consisting of polluted air, contaminated water, and contaminated foods is feasible, and more effort needs to be put into addressing the public’s concerns about such combined exposures. Such contamination is not limited to chemicals; microbial contamination is also important. In fact, Omenn noted that the workshop has emphasized microbial contamination but that chemical–microbial contamination has received little attention to date, even though chemical–microbial interaction is a rich research area for future understanding of risks during disasters.
During a disaster, the first task is to respond to the immediate, emergent needs of the people in the affected area, the areas that are indirectly affected, and the first responders themselves, said Omenn. This will always be the first task, although we must be prepared to address risks as they unfold. Scientists can focus on the well-known risks before they begin to address the unknowns, which will be a long-term agenda. As mentioned earlier, there are some resources to aid in the process by drawing from established databases, such as the National Health and Nutrition Examination Survey, to give some benchmarks and baselines on body burdens related to various exposures. This background information in itself shows the value of conducting such routine monitoring on a continuing basis, noted Omenn. For disasters, exposures are more complicated. Scientists and policy makers need to remember that the individual exposures and complex mixtures are interacting against a background of extreme individual stress, community stress, hunger, dehydration, physical trauma, and crowding.
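The baseline comparison Omenn pointed to, placing a measured body burden against a population reference distribution such as NHANES, might look like the following sketch. The reference percentiles below are invented placeholders, not actual NHANES values.

```python
# Sketch: locate a measured body burden within a population reference
# distribution. Percentile/level pairs are hypothetical placeholders.
from bisect import bisect_right

# (percentile, level) pairs for one chemical, sorted by level -- invented
reference = [(10, 0.2), (25, 0.5), (50, 1.1), (75, 2.4), (90, 4.8), (95, 7.0)]

def percentile_band(level, reference):
    """Return the highest reference percentile the measured level meets or exceeds."""
    levels = [v for _, v in reference]
    i = bisect_right(levels, level)
    if i == 0:
        return f"below the {reference[0][0]}th percentile"
    return f"at or above the {reference[i - 1][0]}th percentile"

# A post-disaster sample at 2.4 would sit at the hypothetical 75th percentile
print(percentile_band(2.4, reference))
```

A result near the middle of the reference distribution is what makes reassurance quantitative rather than rhetorical; a result far in the upper tail flags the sample for follow-up.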
Wilson and Suk (2005) reviewed a disease-oriented approach to exposure research (Figure 7-4). Traditionally, one starts with an individual chemical and then studies the hazards associated with it. In the disease-oriented approach, associations are examined from the diseases back to environmental exposures because, despite extensive testing and research on individual chemicals, scientists have not been effective at connecting the linkages along the continuum. Scientists would start with the diseases and work back to the early signs of pathogenesis (i.e., the pathobiology of the tissue). From that point, they could determine what hazards from exposures and lifestyle decisions might produce those molecular, tissue, and clinical abnormalities. There is much to explore about the sources, transport, fate, and encounter of environmental agents, which are intertwined with variations in human behavior and activity patterns.
Increasingly, scientists hope that molecular tools will make possible studies in animals that parallel the studies feasible in already exposed people.
Finally, there is the powerful use of informatics to address the large molecular signature datasets that are now generated.
Omenn emphasized that what people need to understand about exposure assessment is that no matter how much is known about the toxicology of a chemical, if people are not actually exposed or are exposed at a negligible level, there is no health risk. There are quantitative bases for reassuring individuals. For Hurricane Katrina, despite the terrible conditions, infectious outbreaks were few. Water was an important issue, because people questioned whether it was contaminated and when it would be safe to drink. Surveillance and ongoing basic research would help to inform the issue, because scientists would know whether the surface water and groundwater results were similar to the levels that existed prior to the hurricane in New Orleans or around the country. Omenn noted that such information could be reassuring to residents. Although there is no guarantee that no troublesome chemicals were released or are yet to be released in the recovery and rebuilding efforts, the fundamental principle is that we need to start communication with people in the community and respond to the questions that are asked, concluded Omenn.
Improving Measures of Exposure
During the workshop, the question was raised of how scientists can begin to address which types of tools they need for making more sophisticated and personal measures of exposure. Personal measurements can start with external or internal measurements. There are two categories for the external environment: environmental sensors and geographic information systems (GIS), noted Omenn. The environmental sensors would be devices to detect and quantify priority exposures, including continuous monitoring with multiplexed sensors and analytical tools to link data across multiple scales—from the macro environmental level to the personal. The GIS would develop priority environmental and population datasets, modeling and mapping tools to link the datasets, and GIS displays for individualized exposure assessment.
The strategy for the present and the latter part of the 10-year planning period for a recent NIEHS exposure assessment working group was to take advantage of devices already in use that can detect and quantify exposures to priority agents (Weis et al., 2005). These devices need to be validated in appropriate populations and their readings interpreted for application in real public health practice. There is already a capacity to do continuous monitoring with a variety of sensors. The premise is to put them together so that they will be more efficient, less costly, and more practical.
We want analytical tools that link data across multiple scales and integrate them into a network of sensor information. The idea would be to go from measuring nitrogen and sulfur oxides in the ambient general environment to the use of laser and infrared sensor technologies, looking for releases and changes in concentrations for a broad range of chemicals. At the micro scale, there are personal dosimeters. These would look at such agents as carbon monoxide, carbon dioxide, volatile organic compounds, pesticides, and polycyclic aromatic hydrocarbons in the workplace, household, and personal environments. Nanoscale technology is new, and it could detect not only chemicals but also bacteria and viruses of concern, noted Omenn. Some of that work is going on in research on biowarfare countermeasures.
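Linking readings across these scales amounts to pooling and grouping sensor data so that, for example, a personal dose can be compared with the ambient background for the same agent. A minimal sketch, with hypothetical sensor IDs, agents, and readings:

```python
# Sketch of multi-scale sensor integration: group readings by agent and
# scale so scales can be compared side by side. All data are hypothetical.
from collections import defaultdict
from statistics import mean

# (sensor_id, scale, agent, reading) -- units assumed consistent per agent
readings = [
    ("amb-01", "ambient",  "CO",  0.9),
    ("amb-02", "ambient",  "CO",  1.1),
    ("dos-07", "personal", "CO",  3.2),
    ("dos-07", "personal", "VOC", 0.6),
]

def summarize(readings):
    """Average readings per (agent, scale) pair."""
    grouped = defaultdict(list)
    for _, scale, agent, value in readings:
        grouped[(agent, scale)].append(value)
    return {key: round(mean(vals), 2) for key, vals in grouped.items()}

summary = summarize(readings)
# A personal CO dose well above the ambient average may point to a local source
print(summary[("CO", "personal")], summary[("CO", "ambient")])
```

In a real network the grouping key would also carry location and time so that the GIS layer Omenn described could map where personal and ambient measurements diverge.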
In the internal environment, scientists have biological sensors and body burden assays. Personal monitors may include in vitro sensors for studying responses that could then be looked at in vivo. Such biological sensors could include wearable personal monitors for activity patterns, in vitro sensors for early biological responses, deployable in vivo (microscale/nanoscale) sensors, and data management for such devices. Furthermore, they could be used for body burden assays based on improved methods of sample preparation and analysis, improved sample matrix, higher sensitivity and selectivity, assessment of biologically effective doses, and linked analysis of environmental levels, observed Omenn.
Activity measures may pick up physiological changes, although some detect only motion, time, or place. Wireless tracking devices range from devices for locating disoriented people in nursing homes to homeland security applications. Such monitoring, especially without informed consent, is going to be a large social issue, although the technology is evolving very rapidly.
Electrochemical and optical sensors are able to capture information that is important in clinical monitoring. There are many types of affinity-based reagents and other ways of measuring and quantifying chemicals and microbiological targets. On the body burden issue, the National Health and Nutrition Examination Survey (NHANES) covers 148 chemicals in 2,400 people sampled in the 1999–2000 cycle. The idea is to measure chemicals that have been nominated and chosen because they are of public health importance and feasible to measure; they may be metabolites or reaction products. These provide a baseline or benchmark for the types of questions that people need to answer during a disaster. The information in the NHANES database is very valuable; it needs to be updated with the most salient chemicals for new cycles of population monitoring.
Finally, toxicogenomics and toxicoproteomics—measuring changes simultaneously in thousands of genes and proteins—could be used, especially if methods are introduced with much higher throughput of specimens. Alternatively, deep analyses of pooled samples might be quite appropriate at disaster sites and in preceding research on exposed groups. These methods make possible a systems biology approach. The new methods offer a good opportunity to analyze animal and human specimens similarly, facilitating conclusions across species. Science can take advantage of the human exposures to direct the choice of exposures in animal studies so that the animal studies will be more relevant to the questions for people, noted Omenn.
When determining technologies, he noted, we have to look realistically at what is currently available, what can be validated and applied in the next five years, and what needs a longer time frame to come into practical use. Omenn suggested that we need to look at new tools and methods case by case for applications. We must make decisions on what is sufficiently validated to be used in clinical applications and public health applications while other work goes forward on further research advances to make progress for future applications that are not yet feasible.
Need for Longitudinal Studies
According to Omenn, although case-control studies are valuable, they have numerous shortcomings, including frequent bias toward the more severe end of the disease spectrum, problems of selection of the control group, recall bias for environmental exposures and family history, and inability to identify predictive biomarkers that signal the future onset of disease. The cohort approach has the advantage of large sample sizes and the potential opportunity to fully represent subgroups of the population, including minority groups and a broad spectrum of ages.
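The contrast Omenn drew between the two designs can be made concrete with a worked example: a case-control study yields an odds ratio from a 2x2 table, while a cohort study observes incidence directly and so can estimate a risk ratio. The counts below are invented for illustration.

```python
# Worked example: odds ratio (case-control) vs. risk ratio (cohort).
# All counts are hypothetical, chosen only to illustrate the arithmetic.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a case-control 2x2 table: (a/c) / (b/d) = ad / bc."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio from a cohort: incidence in exposed / incidence in unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Case-control: 40 of 100 cases were exposed vs. 20 of 100 controls
print(round(odds_ratio(40, 60, 20, 80), 2))      # 2.67

# Cohort: 40 cases among 1,000 exposed vs. 15 among 1,000 unexposed
print(round(risk_ratio(40, 1000, 15, 1000), 2))  # 2.67
```

The cohort version requires following the denominators over time, which is exactly why it supports predictive biomarkers and balanced subgroup representation but demands the large, sustained enrollment Omenn describes next.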
It would be useful to have a large-scale national study that provides information on an ongoing basis in the United States, as in the United Kingdom, Iceland, Estonia, and Japan. Two challenges will be how to weigh different subgroups in the population for balanced enrollment and how to engage their interest and long-term participation. Furthermore, it may be possible to examine a range of genetic backgrounds and environmental exposures while including a family-based component. The characteristics of a desirable gene–environment cohort study, now in the planning stages at NIEHS and the National Human Genome Research Institute of the National Institutes of Health, would include sophisticated dietary, lifestyle, and environmental exposure data; collection and storage of biological specimens; a sophisticated data management system; access to material and data by all researchers; and goals that are not hypothesis-limited, Omenn asserted.
He noted that there needs to be comprehensive community engagement from the outset and meaningful, state-of-the-art consent for the individual and their representatives to allow and define uses of the specimens and data, with regular feedback to the participants. Omenn suggested that to do that right, scientists need to learn from such examples as the community and scientific responses to Hurricane Katrina and other emergencies. We must learn, he emphasized, how to support on a continuing basis the local, state, and federal public health agencies. Otherwise we will be facing similar challenges, with even more chaos for the people involved, in the years ahead. Instead, we should be much better prepared, more knowledgeable, with better tools, and better connected with the questions that people really want us to address during a disaster.