The Study and Management of Interactions
This chapter suggests procedures and research strategies that can be used by the military to investigate potential interactions (based on the matrices proposed in Chapter 4), manage known interactions, and conduct the necessary research to identify and study unknown interactions. The suggested procedures and research strategies involve the use of civilian systems that are already in place, the increased computerization of military and VA records to monitor both exposures and outcomes, increased linkages between databases, and careful design and conduct of experimental and epidemiologic studies.
The completed matrices, described in Chapter 4, will display known interactions. These adverse health effects then can be avoided or studied to identify ways to minimize them. Although the best way to manage known adverse interactions would be to avoid them, in practice this often proves to be unattainable because (1) the mechanisms for avoidance that are in place are less than perfect, (2) there may be environmental factors beyond human control, and (3) there may be circumstances in which the benefits anticipated from the use of particular combinations of agents significantly outweigh the risks.
Mechanisms of avoidance currently in place include “Dear Dr.” letters alerting physicians and pharmacists to adverse interactions and notices from the FDA and drug manufacturers. Patients can also be alerted by reading the fact sheets for prescribed drugs. In addition, most pharmacies in the United States
today use computerized systems to keep records. Drugs with known adverse interactive effects are flagged in these systems so that when a prescription for one of these drugs is filled, the pharmacist is alerted to check for concomitant use of the other drugs that are known to interact with that drug adversely. The fact that these mechanisms are not fail-safe is documented in a recent report (Thompson and Oster, 1996) describing the concomitant or overlapping prescription of terfenadine, a nonsedating antihistamine, with macrolide antibiotics or the imidazole antifungal agent ketoconazole.
Just as in the civilian sector, it behooves the military to develop systems that can be used to avoid or minimize the prescription of drugs with known interactions. The Uniformed Services Prescription Database Project (USPDP) (see Chapter 4) provides a system that could be used to flag known drug interactions within the military pharmacy system. To be useful during military deployments, prescribed drugs must also be entered prospectively into the computerized system so that interactions can be identified during deployment. Adding biologics, including vaccines, immunoglobulins, and immune diagnostic agents (e.g., tuberculin skin tests), to the database, as the USPDP has proposed, would produce a multipurpose tracking system by allowing drug-drug and drug-other agent interactions to be flagged.
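To illustrate the kind of check such a flagging system performs, the following sketch screens a new prescription against a patient's current drugs. The interaction table is invented for illustration, although the terfenadine pairs echo the example cited above.

```python
# Hypothetical sketch of the check a flagged-interaction pharmacy system
# performs when a prescription is filled. The interaction table is invented,
# although the terfenadine pairs echo the example cited above.

KNOWN_INTERACTIONS = {
    frozenset({"terfenadine", "ketoconazole"}),
    frozenset({"terfenadine", "erythromycin"}),
}

def check_new_prescription(current_drugs, new_drug):
    """Return the current drugs known to interact adversely with new_drug."""
    return [drug for drug in current_drugs
            if frozenset({drug, new_drug}) in KNOWN_INTERACTIONS]
```

A deployment-ready system would draw the interaction table from a maintained formulary database rather than a fixed list.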
In some instances the use of drugs with known interactions cannot be avoided during military deployments. In these instances of unavoidable exposures to known interactions, the military should carefully study the exposure-effects relationships so that adverse effects can be minimized in the future.
Systematic postdeployment studies of known interactions will provide the necessary data to determine how to minimize adverse effects in training situations and possibly in future deployments. Those surveillance systems in use or in development within the U.S. military (e.g., the Individual Patient Data System, the Army Medical Surveillance Activity, the USPDP, etc.; see Chapter 4) can be used to accomplish this task.
Potential interactions (see Chapter 3) that are identified as high priorities for study can be actively investigated in experimental in vitro and in vivo systems as well as with experimental and observational studies with human volunteers. The procedures for investigation could be, in general, similar to those used by regulatory agencies (e.g., FDA and the Environmental Protection Agency [EPA]). To conserve resources, a tiered approach would be useful. For example, identification of a potential hazard can first be accomplished in appropriately chosen in vitro systems. Studies can then be extended to in vivo animal models with appropriate species and experimental designs.
Depending on the information uncovered, the experimental designs of animal toxicity studies may vary from simple to increasingly complex studies to address the issues of dose-response relationships as well as the suitability for extrapolation to the assessment of risk in humans. If early experimental studies of a combination show clear toxicity or reduced efficacy, the agents in the combination can then be considered to have a known adverse interaction (see below), and their simultaneous use should be avoided if possible. If they must nonetheless be used in military operations, the individuals taking them should be monitored. If early experimental studies show minimal toxicity and little or no decrease in efficacy with the use of the combination, human volunteer studies similar to early-phase drug development trials (e.g., FDA Phase I drug development trials; 21 CFR 312.21) may be warranted.
The procedures for studying potential interactions proposed by the committee are expensive. Realistically, only high-priority interactions can be studied as described above. Other combinations of agents that could interact but that need to be used by the military should be monitored by the same procedures recommended for monitoring known interactions (see below).
To demonstrate the utility of the tiered approach, the study conducted by Abou-Donia et al. (1996), which investigated the interaction of pyridostigmine bromide (PB), DEET, and permethrin, is used as an example. In that study, neurotoxicity was produced in hens exposed to these agents individually or simultaneously (exposure for 5 days/week for 2 months to 5 mg of PB per kg of body weight per day in water, given orally; 500 mg of DEET, neat, per kg/day, given subcutaneously; and 500 mg of permethrin in corn oil per kg/day, given subcutaneously). At these dosages, exposure to single compounds resulted in minimal toxicity. Combinations of two or more agents produced greater neurotoxicity than that caused by the individual agents.
Although the findings of the study by Abou-Donia et al. (1996) are interesting, certain issues must be investigated further before such information can be applied directly to the assessment of risk in humans. For example, the dose-response relationship must be defined, since only one dosage was used. One of the chemicals studied, permethrin, was given subcutaneously at 500 mg/kg/day in corn oil. According to a recent National Research Council report (1994a) entitled Health Effects of Permethrin-Impregnated Army Battle-Dress Uniforms, the estimated exposure dose to a soldier in the Persian Gulf War was 6.8 × 10^-4 mg/kg/day, approximately 735,000-fold lower than the dose used in the study of Abou-Donia et al. (1996). A reasonable question is, would an interaction be seen at realistic human exposure levels?
Applying the committee's tiered approach (see below), the study of Abou-Donia et al. (1996) should be expanded and repeated to include a range of doses, preferably including doses low enough to be representative of human exposure levels. Once such dose-response relationships are defined, the information can
be used to help interpret the results of retrospective studies with soldiers deployed in the Persian Gulf War.
Given the array of substances (some predictable, some not) to which deployed military personnel are exposed, unanticipated adverse effects are likely. It is to the military's advantage to identify these so that the agents' use can be minimized to the extent possible. The research elements needed to identify unanticipated adverse effects include an enhanced emphasis on toxicological screening studies focused on interactions and an increased surveillance in epidemiologic settings. With regard to surveillance, the identification of sentinel cases may indicate rare reactions to single agents as well as unpredictable or unusual interactions among multiple compounds. Such sentinel cases may be severe and may well provide unique insights into the pathobiologic properties of the various agents and their interactions. If identified, sentinel cases should be subjected to thorough investigation to elucidate the nature and meaning of the interaction.
In toxicology, interaction is a general term that has been applied to toxicity test results that deviate from the additive behavior of the dose or the response expected on the basis of the dose-response curves obtained for individual agents. The term synergism is used when the results are greater than would be anticipated from the simple addition of doses or responses. Antagonism is a situation in which the response is less than that which would be predicted on the basis of a simple addition of doses or responses. Potentiation has been used to characterize synergistic effects that occur when one component of the mixture has no effect by itself but is capable of enhancing the effect of a second component in the mixture. Additivity is used for the situation in which the combined effect of the components of a mixture is equal to the sum of the effects of each agent given alone. Furthermore, one chemical may enhance or antagonize the effect of another chemical in a simple mixture but exhibit different effects in a complex mixture or when given by different routes, and it is well recognized that chemicals with different modes of action may exhibit nonadditive interactions.
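These definitions can be made concrete with a small illustrative routine. The response values and tolerance below are hypothetical, and the routine is a bookkeeping sketch rather than a validated statistical test.

```python
# Illustrative routine (hypothetical response values and tolerance; not a
# validated statistical test) comparing an observed combined response with
# the expectation under simple response additivity.

def classify_interaction(resp_a, resp_b, resp_combined, tol=0.05):
    """Label the joint action of two agents relative to additivity."""
    expected = resp_a + resp_b
    greater = (resp_combined > expected * (1 + tol)
               or (expected == 0 and resp_combined > 0))
    if greater:
        # potentiation: one component is inactive alone yet enhances the other
        return "potentiation" if min(resp_a, resp_b) == 0 else "synergism"
    if resp_combined < expected * (1 - tol):
        return "antagonism"
    return "additivity"
```

In practice the comparison must account for sampling variability, as discussed in the Data Analytic Approach section below.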
Although epidemiologic studies are more likely than toxicologic studies with animals to involve exposures of humans to mixtures of chemicals or other
toxic agents and could thus provide a more reliable basis for risk assessment, epidemiologic data are rarely available for the specific mixtures of agents and the exposure situations that are of interest. Moreover, although laboratory animal studies are not infallible, the principle set forth by the International Agency for Research on Cancer and endorsed by the U.S. Office of Science and Technology Policy (1985) is that “in the absence of adequate data on humans, it is reasonable, for practical purposes, to regard chemicals for which there is sufficient evidence of carcinogenicity in animals as if they presented a carcinogenic risk in humans.” Therefore, in this section of the report, a general approach and rationale for conducting animal toxicity studies with a core mixture of compounds (i.e., the major drugs, biologics, and chemicals that are to be given to deployed U.S. military personnel) are proposed.
The assumption that doses of different agents can be treated as roughly additive in producing a response appears to be reasonably consistent with the experimental evidence on the joint actions of chemicals in mixtures (National Research Council, 1994b), and the low incidence of synergism in the EPA Database on Toxic Interactions supports the use of the assumption of additivity in mixtures. Fewer than 3 percent of the 331 studies in the database (which contains data on more than 600 chemicals) showed clear evidence of synergism (U.S. Environmental Protection Agency, 1988). A similar low incidence of synergistic responses was observed by a committee of the National Research Council (1988) and by Krishnan and Brodeur (1991). However, most of these studies examined the interactions of only two compounds, and few of them examined long-term effects.
Ideally, for the most complete assessment of the potential interactions of drugs, biologics, and chemicals in U.S. military forces, a process such as the following should be adopted. Different regions of the world should be characterized according to weather and geographic conditions; ecosystems; abundance of plant, animal, and microbial species; prevalence of diseases; possible anthropogenic pollutants; and other environmental conditions. Within each region, a list of the potential dangers that military personnel might face regarding possible exposures to warfare agents, chemicals, environmental and physical stresses, diseases, pests, prophylactic drugs, biologics, and so on should be compiled and analyzed. Then, under the climatic conditions of each of these regions, animal studies should be carried out to detect at least the four major toxicity categories (i.e., immunotoxicity, developmental and reproductive toxicity, neurotoxicity, and carcinogenicity).
However, to conserve resources and as a starting point, the committee suggests the following prototype experiment with the understanding that more specific scenarios may be incorporated into the experimental design of subsequent studies as needed. At a minimum, the following combination exposure scenarios should be studied for each of the toxicities mentioned above:
the complete combination: drugs, biologics, and chemicals whose use is anticipated;
drugs whose use is anticipated;
biologics whose use is anticipated; and
chemicals to which exposure is anticipated.
The doses of each entity to be used in animal studies should be the anticipated level of exposure of soldiers (on a milligram-per-kilogram, millimole-per-kilogram, or units-per-kilogram basis), which would be the baseline study dose, plus two higher dose levels (10 times and 100 times this baseline dose).
This recommendation may be considered a first tier screening for possible adverse health hazards. Any toxicologic interaction detected within any of the groups should be a warning flag to DoD, and a decision must be made with respect to the risks and benefits involved in using the agents. Beyond this first tier, any additional studies should be on a case-by-case basis, guided by the recommendations of an expert panel of investigators.
Some of the conventional toxicity testing protocols may not be applicable in these studies because they are either too expensive and resource-intensive or not sensitive enough with respect to toxic responses, or both. Therefore, there is a need for continuing refinement and improvement of experimental toxicology methodologies by using the latest advances in molecular biology and genetics and in computer sciences. For example, to deal more effectively with interactions, investigators can use and integrate state-of-the-art advances in (1) computational technology; (2) physiologically based pharmacokinetic and pharmacodynamic modeling; (3) model-directed, unconventional, focused, mechanistically based, short-term toxicology studies; and (4) other mathematical and statistical modeling tools.
An advance in the pharmacokinetic and pharmacodynamic modeling of chemical carcinogenesis is the expression of the biological processes governing cell replication and cancer in terms of fundamental cell cycle kinetics, within the framework of a linear multistage model of cancer (El-Masri et al., 1996; Thomas et al., 1996). Physiologically based pharmacodynamic (PBPD) modeling can be and has been used to produce reasonable estimates of cancer incidences in exposed animals. To achieve this objective, the PBPD model must integrate the events of cellular injury, death, and division as well as the mutational events in cells that lead to an increased rate of cellular proliferation. Other aspects of the PBPD model must delineate the behavior of the tissue in the resting state or under accelerated growth conditions, such as in neonatal animals or following chemical injury. Many of these biological processes also can be described in the model in terms of cell cycle kinetics.
The cell cycle portion of the model describes the events that lead the cells from one phase to the other. These cellular phases are the G0 (resting phase), G1 (a gap or pause after stimulation in which some biochemical activities are occurring), S (synthesis phase, particularly DNA synthesis), G2 (a second gap), and M (mitosis phase). The number of cells in each phase can be described by mass-balance equations. The mass transfer of cells from one phase to the other is related to the residence time of the cells in each phase. The mathematical construction of this cell cycle model can be incorporated into the PBPD model to reflect such events as the possible mutational effects of the chemicals and cell proliferation rates under a variety of conditions. Current developments in immunohistochemical staining as well as molecular biology techniques with factors (e.g., oncogenes, cytokines, and tumor suppressor genes) that are reported in the literature to influence the rates of various stages of the cell cycle may prove to be fruitful in the possible prediction of cancer in humans by allowing much more efficient experimental animal models. By comparing cell cycle kinetics in preneoplastic clones of cells (e.g., liver foci in Ito's system or foci in SHE cell transformation assays) and in surrounding normal cells, mechanistically based biomarkers may be identified, and these biomarkers can be used to demonstrate more sensitively the carcinogenic potentials of chemicals or chemical mixtures (El-Masri et al., 1996; Thomas et al., 1996).
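As an illustration of the mass-balance formulation, the following sketch advances the phase populations G1, S, G2, and M, with mitosis returning two daughter cells to G1. The residence times, time step, and initial population are assumed values, not parameters from the report, and G0 is omitted for simplicity.

```python
# Illustrative sketch of the mass-balance description of cell cycle kinetics.
# Phase residence times, the time step, and the initial population are assumed
# values, not parameters from the report; G0 is omitted for simplicity.

def step(cells, residence, dt=0.01):
    """One Euler step of the phase mass-balance equations (G1 -> S -> G2 -> M)."""
    phases = ["G1", "S", "G2", "M"]
    # mass transfer out of each phase = population / residence time
    out = {p: cells[p] / residence[p] for p in phases}
    new = dict(cells)
    new["G1"] += dt * (2.0 * out["M"] - out["G1"])  # mitosis yields two daughters
    new["S"] += dt * (out["G1"] - out["S"])
    new["G2"] += dt * (out["S"] - out["G2"])
    new["M"] += dt * (out["G2"] - out["M"])
    return new

cells = {"G1": 1000.0, "S": 0.0, "G2": 0.0, "M": 0.0}
residence = {"G1": 10.0, "S": 8.0, "G2": 4.0, "M": 1.0}  # hours (assumed)
for _ in range(5000):  # simulate 50 hours
    cells = step(cells, residence)
total = sum(cells.values())  # exceeds the initial 1,000 cells once mitosis begins
```

A chemical's mutational or proliferative effects would enter such a model by perturbing the residence times or the division yield.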
Work can be done to improve animal models so that they might flag compounds that are human health hazards. Preventive measures then may be implemented. Work can also be done to develop more efficient, less costly, state-of-the-art experimental and computer model systems, which will also help to raise warning flags. Use of these experimental and computer model systems in conjunction with epidemiologic investigations will form a powerful approach to identifying and minimizing future potential health problems. Current research advances are such that the methodologies, animal models, and systems presently used will be continually challenged, improved, and refined. Therefore, it is important to be flexible and to be prepared to adopt the latest advances in biomedical research to improve and refine the preventive measures described in this report.
Data Analytic Approach
In addition to animal toxicity studies, careful data analysis is also needed. Recent advances in statistical methodology, discussed below, allow for the efficient detection of interactions by making use of data from studies with single agents. These advances quantify earlier qualitative work based on the interpretation of isobolograms.
In the simplest case of two agents, each of which produces a single response, one can plot in two dimensions the set of doses (x, y), where x is the dose of Agent 1 and y is the dose of Agent 2, that produces identical responses. The line connecting this set of doses is an isobole, and its graph is an isobologram (see Figure 1 in Machado and Robinson, 1994). Thus, an isobologram is analogous to a topographic map, in which identical responses correspond to identical elevations. In the simple two-agent case, the two-dimensional plot of isobolograms permits a simple, qualitative interpretation: a straight line is indicative of an additive effect of the two agents, that is, no interaction. A convex isobologram is evidence that the response from the combination of the two agents is less than the sum of their responses, which is an antagonistic interaction. A concave isobologram is evidence that the response from the combination of the two agents is greater than the sum of their responses, which is a synergistic interaction.
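The interpretation of isobole shape can be expressed numerically: for equi-effective dose pairs (x, y), the quantity x/Dx + y/Dy (where Dx and Dy are the axis intercepts, the doses of each agent alone producing the reference response) averages to 1 along a straight-line isobole, below 1 along a concave one, and above 1 along a convex one. The following sketch applies this rule with illustrative doses and an arbitrary tolerance.

```python
# Numerical counterpart of the isobole interpretation above (illustrative
# doses and tolerance). dose_a_alone and dose_b_alone are the axis intercepts:
# the doses of each agent alone that produce the reference response.

def isobole_shape(pairs, dose_a_alone, dose_b_alone, tol=0.05):
    """Classify an isobole from equi-effective dose pairs (x, y)."""
    indices = [x / dose_a_alone + y / dose_b_alone for x, y in pairs]
    mean_index = sum(indices) / len(indices)
    if mean_index < 1 - tol:
        return "synergism"   # isobole bowed toward the origin (concave)
    if mean_index > 1 + tol:
        return "antagonism"  # isobole bowed away from the origin (convex)
    return "additive"
```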
Combinations of more than two agents must be studied in higher-dimensional space, where lines become surfaces and straight lines become planar surfaces. In 1981, Berenbaum quantified and generalized the isobologram to higher dimensions and used it to detect and characterize interactions of a combination of drugs or chemicals, showing that the contours of the constant response of the dose-response surface are planar if the components of the combination have an effect that is additive. In direct analogy to the two-agent case, if the observed response to the combination is statistically greater than that predicted under additivity, it is concluded that a synergistic interaction has taken place. For increasing dose-response relationships, if the observed response to the combination is statistically less than that predicted under additivity, it is concluded that an antagonistic interaction has taken place. If there is no statistical difference between the response predicted under additivity and the response observed upon exposure to the combination, it can be concluded that the components of the combination do not interact. The logic of the approach outlined above was used by Finney (1964), Berenbaum (1985), and Kelly and Rice (1990), among others, to detect and characterize interactions involving combinations of agents.
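The additivity prediction can be sketched numerically for n agents: under dose additivity, the predicted combination response E is the value at which the sum of each dose d_i divided by the dose of agent i alone producing E equals 1. The hyperbolic dose-response curves and all numerical tolerances below are hypothetical; a real analysis would fit curves to single-agent data.

```python
# Sketch of the dose-additivity (Berenbaum) prediction for a combination of n
# agents. The hyperbolic dose-response curves and all numerical tolerances are
# hypothetical; a real analysis would fit curves to single-agent data.

def invert(curve, effect, lo=1e-9, hi=1e6):
    """Numerically invert an increasing dose-response curve by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if curve(mid) < effect:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def additive_response(doses, curves, e_lo=1e-6, e_hi=0.999):
    """Predicted combination response E solving sum_i d_i / Dinv_i(E) = 1."""
    def index(effect):  # the combination index at a trial response
        return sum(d / invert(c, effect) for d, c in zip(doses, curves))
    for _ in range(100):  # the index falls as the trial response rises
        mid = 0.5 * (e_lo + e_hi)
        if index(mid) > 1.0:
            e_lo = mid
        else:
            e_hi = mid
    return 0.5 * (e_lo + e_hi)

curve_1 = lambda d: d / (d + 2.0)  # hypothetical agent, ED50 = 2
curve_2 = lambda d: d / (d + 8.0)  # hypothetical agent, ED50 = 8
```

An observed combination response statistically above this prediction would indicate synergism; below it, antagonism.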
The real strength of this approach is that relatively few data are required to implement it. Under the assumption of additivity, in particular, the estimated dose-response surface can be calculated from the dose-response curves for the single agents; such data are likely to be available as a result of earlier product development research. One then needs only to collect additional data on the results of exposure to the combination of interest at the specified doses of the constituents.
The required single-agent dose-response data are likely to include multiple control groups, one for each agent under study, especially if these data were collected from several studies. Ideally, such control data can be used to estimate
the background rate of response, although an important consideration is their proper inclusion in the analyses. If all of the single-agent control data are collected simultaneously, there should not be any problem combining them. However, when single-agent data are found in the literature or are collected at points in time that are remote from the time of collection of the combination data, the problem is similar to the historical control problem discussed by Prentice et al. (1992). Extending earlier approaches, Gennings and Carter (1996) used a single parameter for the background (control) rate and developed a methodology that can be used to detect and characterize interactions by incorporating this parameter into the additivity model in three different ways: as a fixed-effects parameterization, as a random-effects approach following Prentice et al. (1992), and as an approach involving the use of estimating equations (Liang and Zeger, 1986).
With suitable preclinical models, the methods described above can be extended from animal toxicology studies to human studies, permitting the design and analysis of prospective studies that can test directly the existence of interaction effects when evaluating the potential health consequences of exposure to combinations of drugs, vaccines, and chemicals. Again, it is possible that many of the single-agent data are already available as a result of the research done in evaluating the individual agents. Even if the existing single-agent data are not adequate, the approach outlined above is still efficient: the experimental effort is reduced to generating the single-agent dose-response curves and the responses at particular fixed-dose combinations. In the case of five agents, each to be studied at four doses plus the control level, for example, the number of experimental groups to be evaluated for response is 26 (i.e., 5 agents at 5 dose levels each, plus the one combination group). In contrast, the complete set of experiments (5^5) requires the evaluation of response among 3,125 experimental groups.
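The group counts quoted above follow from simple arithmetic, sketched here:

```python
# Arithmetic behind the design sizes quoted above: five agents, each studied
# at four doses plus a control, with one fixed-dose combination group, versus
# the complete factorial design.

n_agents, n_doses = 5, 4
single_agent_groups = n_agents * (n_doses + 1)   # each agent alone, incl. control
reduced_design_groups = single_agent_groups + 1  # plus the one combination group
full_factorial_groups = (n_doses + 1) ** n_agents
```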
If no interaction (departure from additivity) is detected by the above analyses, there may be no need to study the combination of agents further. If an interaction is detected, however, further studies may need to be done to identify which set of agents is responsible for the departure from additivity. Even if this is the case, the number of additional experiments should be less than the number of all possible combinations (see Narotsky et al., 1995, for an example of a full 5 × 5 × 5 factorial study of three chemical compounds analyzed to detect the presence of all two-way and three-way interactions).
The approach for detecting interactions outlined above is directly applicable to the study of a particular complex mixture of biologics, chemicals, and drugs, as advocated earlier. Let B represent a given combination of biologics, let C represent a given combination of chemicals, and let D represent a given combination of drugs. The complex mixture is represented by B + C + D. One set of experiments designed to provide data to be analyzed by the methodology described above determines responses to the following sets of exposures:
Control, B + C + D, 10 (B + C + D), 100 (B + C + D);
Control, B, 10B, 100B;
Control, C, 10C, 100C; and
Control, D, 10D, 100D.
The first set of exposures yields the combination agent data, and the next three sets yield the single-agent data for B, C, and D, respectively (National Research Council, 1988).
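The four exposure sets above can be enumerated programmatically. The labels and data structures in the following sketch are invented for illustration.

```python
# Illustrative enumeration of the exposure sets listed above. "B", "C", and
# "D" stand for the fixed combinations of biologics, chemicals, and drugs;
# labels and data structures are invented for illustration.

def exposure_sets(components=("B", "C", "D"), multipliers=(1, 10, 100)):
    """Return the combination arm followed by one single-component arm each."""
    mixture = "+".join(components)
    sets = [["control"] + [f"{m}({mixture})" for m in multipliers]]
    for comp in components:
        sets.append(["control"] + [f"{m}{comp}" for m in multipliers])
    return sets
```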
The methodology described above applies only to the class of linear models. Generalization of the methodology to include nonlinear models, often used in the assessment of risk to human populations, should be encouraged.
Despite the advantages of the above approach, it is not a fail-safe method for detecting all interactions. Therefore, it is necessary to use surveillance systems to supplement the information gathered by the above strategy.
This section describes approaches to monitoring exposures, identifying adverse health events, and investigating disease-exposure relationships in military personnel. The suggested monitoring (described below) requires the development of large databases that can be expanded and used for many years. The process used to develop the database for the USPDP provides a model for database development and expansion. First, a small pilot study was done at one site to identify and solve the problems that arise in collecting and cataloging the data of interest. Then the pilot project was slowly extended. This same process of starting small with stepwise expansions seems to provide efficiency in the development of useful databases.
Gutthann and Garcia-Rodriguez (1993) provide an example of the use of a large linkable database to study the adverse health effects of combinations of drugs in a civilian setting. Using the databases from the Saskatchewan Health Plan, they studied the risk for hospitalization for liver injury associated with the interaction of nonsteroidal anti-inflammatory drugs and other hepatotoxic drugs and found that concomitant current exposure to two or more drugs increased the risk above what would be expected from the sum of the individual risks.
Identifying and Recording Exposures
The first goal for monitoring exposures in deployed personnel is to have computerized records to identify personnel who were deployed and their dates of deployment and return. The location during deployment should be available at the unit level. A personnel location database (geographical identification system [GIS]) is being developed retrospectively by DoD for those deployed during the Persian Gulf War. The GIS will identify the location of each unit during that deployment. Efforts should be made to generate such a database prospectively in future major deployments so that appropriate studies can be done in a timely fashion. Such a database must be linkable to other exposure and outcome information.
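A minimal sketch of the linkage such a database would support joins a personnel roster to unit-level locations by unit and date. All records and field names below are invented for illustration.

```python
# Hypothetical sketch of the linkage a unit-location (GIS) database would
# support: attach each soldier-day record to the unit's recorded location.
# All records and field names are invented for illustration.

def link_exposure(roster, unit_locations):
    """Join personnel records to unit locations on (unit, date)."""
    location_by_unit_day = {
        (rec["unit"], rec["date"]): rec["location"] for rec in unit_locations
    }
    return [
        {**person,
         "location": location_by_unit_day.get((person["unit"], person["date"]))}
        for person in roster
    ]
```

The same keyed-join pattern extends to environmental and outcome databases, provided they share the unit and date identifiers.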
Computerized records of environmental exposures should be developed when the situation warrants. Currently, the military is assembling data from the Persian Gulf War for a database to be linked with the personnel location database. This database includes environmental and meteorological data (e.g., air quality data in the area of the oil fires) that were collected during the conflict. Because major environmental exposures such as oil fires are difficult to predict, the military should develop contingency plans to call in a deployable technical team to do environmental monitoring in a timely fashion. The computerized software should be developed in advance so that the data generated by such a technical team can be linked to the personnel location database (as described above in item 1).
Automated field records indicating the drugs and biologics given to each individual should be developed. USPDP can be extended to deployment situations if the prescriptions filled and the vaccines administered to each individual can be electronically recorded in the field.
Occupational exposures can be categorized for study using the current classification system, the military occupational specialty (MOS). A computerized database with MOS data on individuals throughout their service lives must be linkable to other exposure and outcome data. The MOS is a general name for the assigned duty (for example, truck driver or medical officer). As with many occupational categorization systems, the MOS does not always correspond to the actual tasks that an individual may be doing and does not allow one to quantify the occupational exposures that the individual may experience. However, the MOS can serve as a general classification for job assignments, as long as its limitations are known and stated.
Personal exposures during deployment, such as the use of nonprescription medications, tobacco, alcohol, recreational drugs, and personally purchased pesticides, may be important health determinants, but exposures to these substances are difficult to assess. A minimal strategy for collecting data on these exposures is an anonymous postdeployment questionnaire administered to a random sample of the returning deployed troops. If general identifier data such as age, race, sex, and reserve or active-duty status are also collected, the general levels of such exposures can be assessed for specific subsets of the deployed forces. Some important exposures might not be anticipated (for example, the flea collars that were worn by some Persian Gulf War military personnel during their deployment). Preventive medicine officers in the field should be alerted to
the need to systematically identify and record any such exposures and estimate their frequency of use. Contingency plans should be made to determine when expert advice should be called in to determine whether such an unanticipated exposure should be disallowed.
Although the above discussion of exposure monitoring is directed toward existing or planned data collection efforts and databases, one should not ignore the possibilities for developing better epidemiologic databases on exposure. Biomarker data, for example, hold out the possibility of providing more refined measures of exposure. The Army/Navy Serum Repository should be considered as an important source of specimens from which such biomarker data might be developed.
1. Monitor sentinel events. This can be done by expanding the military Reportable Disease Surveillance System (RDSS) (see Chapter 3 for description of RDSS). Just as CDC compiles a list of reportable diseases, the military monitors a similar (but not identical) list of diseases in the RDSS (see list of notifiable diseases in Chapter 3). The committee recommends that, as an aid in identifying adverse effects of interactions, the Armed Forces Epidemiology Board and its experts identify appropriate additional diseases and conditions that should be reported. The additional conditions should cover the categories of expected toxicities identified in the matrix analyses described in Chapter 3.
For example, the category neurologic toxicity might include neurologic diseases like multiple sclerosis; the category immunological toxicity might include immune-suppression-related diseases like herpes zoster, autoimmune-related diseases like systemic lupus and thyroid disease, and hypersensitivity-related diseases; the category liver toxicity might include acute liver injury; the category nephrotoxicity might include acute renal failure, and so on. Available empirical data can be used to help identify appropriate additions. For example, the events reported to the Vaccine Adverse Event Reporting System (VAERS) in the civilian sector can be surveyed and may suggest items such as marked hair loss as well as specific diseases. Prior DoD studies can also be reviewed to identify potential sentinel events that should be added to the notifiable diseases list. Because decreased effectiveness is one of the potential adverse effects of vaccine-vaccine interactions, increased incidence of any diseases that should be prevented by the vaccination program should also be monitored. In addition, when new, separate vaccines are administered simultaneously, serologic studies should be undertaken to measure antibody responses.
Prescription drug data can also be monitored, because some prescriptions are specific to particular diseases and some illnesses are characteristically drug related. Such drug-related illnesses could include agranulocytosis, aplastic anemia, Stevens-Johnson syndrome, toxic epidermal necrolysis, and anaphylaxis. For example, inhaled steroids are prescribed almost exclusively for asthma, so an increase in their use might reflect either an increased prevalence or an increased severity of that disease.
Experts can review the data periodically, evaluate apparent increases, and recommend investigation when warranted. This recommendation requires little new development, can be implemented by expanding systems already in operation, and should be activated in the near future.
2. Design small prospective studies to collect data before and after deployment to monitor immunologic, neurobehavioral, endocrinologic and reproductive, and genetic changes associated with deployment. Comparisons of the results of these studies to similar studies of nondeployed forces could provide reassuring data if relatively sensitive markers showed no adverse effects associated with deployment. If effects are seen, they would help direct future research. Such studies could be done with relatively small samples at relatively low costs. The immunologic testing could all be done with the sera for HIV testing obtained from all deployed personnel before their deployment and with a single blood sample obtained after the deployment (and with sera obtained at two comparable time periods for nondeployed forces) to measure markers of immune suppression, autoimmunity, and hyperreactivity. Neurobehavioral testing could be done with a battery of tests, including computerized tests of cognitive functioning, measures of balance and vision, and tests of peripheral nerve function. Mutagenesis could be monitored by genetic analyses of lymphocytes with the same blood sample collected for immunologic measures. Endocrinologic and reproductive biomarkers could include reproductive and thyroid hormone measures and semen analyses. Linkage to the Army/Navy Serum Repository could provide an opportunity to obtain data on serum biomarkers. In addition, detailed data on symptoms at baseline and after deployment for deployed and comparison groups would be useful. A carefully designed questionnaire could be developed to collect detailed data on such symptoms as headaches, tiredness, weakness, rashes and other skin effects, joint pain, muscle aches, sensitivity to odor, and feelings of depression and hopelessness. 
It could be used periodically to test several groups of military personnel so that baseline data on the occurrence of such symptoms and the changes in such symptoms over time would be available.
3. Use the available reporting systems in the civilian sector, VAERS and MEDWatch, as alerting mechanisms to identify potential interactions that should be studied. Adverse outcomes from interactions are often initially identified by astute clinicians. The committee recommends the adoption of appropriate directives requiring military medical facilities to use the MEDWatch reporting system, similar to the directives for reporting to VAERS that already exist in the DoD immunization directives (Army Regulation 40-562).
4. Improve follow-up through record linkage. Moderate- to long-latency effects are difficult to identify because military personnel spend relatively short terms in the military (even career personnel tend to leave after 20 years). Deployed reservists return to the civilian health care system immediately after deployment, so their health outcomes are difficult to identify. Those veterans who use the VA health care system after leaving the military are a select minority, so VA records will not identify many of those with disease. Nonetheless, more effort is needed to link VA records with military personnel records so that whatever follow-up the VA can provide is usable. The National Death Index, a record of all deaths in the United States, can provide death certificate information for all military personnel.
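The before-and-after biomarker comparison proposed in recommendation 2 amounts to a paired within-subject analysis. A minimal sketch follows; the marker values and sample are hypothetical, and a real study would use an appropriate significance test and a nondeployed comparison group.

```python
import statistics

def paired_change(pre, post):
    """Mean within-subject change and its standard error for a
    biomarker measured before and after deployment."""
    diffs = [after - before for before, after in zip(pre, post)]
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / len(diffs) ** 0.5
    return mean, se

# Hypothetical biomarker values for five subjects.
pre = [100, 120, 90, 110, 105]
post = [95, 118, 85, 108, 100]
mean_change, se_change = paired_change(pre, post)
```

The same calculation applied to the nondeployed cohort over a comparable interval would show whether any observed change is plausibly attributable to deployment.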
Epidemiologic Investigation of Disease-Exposure Relationships
Descriptive, Case-Series Studies
If a sentinel event triggers an investigation, the individuals identified as cases (those with the reportable condition) can be described as a case series. Exposure data for these individuals can be identified from the databases described above to monitor exposure, from hard-copy medical records, and by questioning the individual (unless he or she is deceased). If the event is otherwise rare and the exposure combination is very specific, the cause of the adverse health event may be inferred from such descriptive data; in most situations, however, it will be necessary to compare the cases with a group of controls to identify the risk factors. Nonetheless, recent methodological developments in the analysis of case series data allow the production of relatively good estimates of relative incidence without the use of controls (see Farrington et al., 1996, for an example of this methodology applied to adverse reactions to vaccines).
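The idea behind such control-free estimates can be illustrated with a crude within-person rate ratio. Note that this is a deliberate simplification: the Farrington method uses a conditional Poisson likelihood, and the data below are entirely hypothetical.

```python
def crude_relative_incidence(cases):
    """Crude within-person rate ratio: event rate in a post-exposure
    risk window versus the rate in each case's remaining (control) time.
    Each case is (events_risk, days_risk, events_control, days_control)."""
    events_risk = sum(c[0] for c in cases)
    days_risk = sum(c[1] for c in cases)
    events_control = sum(c[2] for c in cases)
    days_control = sum(c[3] for c in cases)
    return (events_risk / days_risk) / (events_control / days_control)

# Hypothetical data: three cases, each observed for 365 days with a
# 30-day risk window after vaccination.
cases = [(1, 30, 0, 335), (1, 30, 1, 335), (0, 30, 1, 335)]
ri = crude_relative_incidence(cases)
```

Because each case serves as his or her own control, stable between-person confounders (such as sex or baseline health) cancel out of the comparison.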
Case-Control Studies

Case-control studies compare patients with a particular disease (cases) with individuals who do not have that disease (controls), looking for differences in the exposures of the two groups. They can be done relatively inexpensively to investigate any marked increase in a sentinel event identified through the reportable diseases program. The cases, individuals verified to have the reportable disease of interest, would have already been identified. Controls could be drawn at random from the target population from which the cases are identified, for example, all those who were active-duty personnel at the time of case identification. If potential exposures of interest have been identified, it may also be efficient to match cases with controls on such variables as gender, race, age, length of time in the military, and perhaps base location.
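The exposure comparison in an unmatched case-control study is conventionally summarized as an odds ratio. A minimal sketch, using hypothetical counts and Woolf's approximate confidence interval:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI (Woolf's method) for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    estimate = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(estimate) - 1.96 * se)
    upper = math.exp(math.log(estimate) + 1.96 * se)
    return estimate, lower, upper

# Hypothetical counts: 40 of 100 cases exposed vs. 20 of 100 controls.
est, lo, hi = odds_ratio(40, 60, 20, 80)
```

A matched design of the kind described above would instead be analyzed with methods that respect the matching, such as conditional logistic regression.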
Case-control studies can also be done to evaluate outcomes not identified through the reportable diseases program. In such instances, case identification will need to be performed through a more cumbersome mechanism; for example, hospital discharges could be used to identify potential cases for a condition of interest. Because the discharge record lists diagnoses that were investigated (not necessarily confirmed), the discharge records merely provide a pool of candidate cases. The study should include only confirmed cases (usually determined by a separate process that includes abstracting information from hard-copy medical records) from among these candidates; controls should be selected at random from the target population, with matching on selected variables when it is expected to improve efficiency. Case identification is particularly problematic when the disease to be studied does not have a well-defined diagnosis that can be tracked through International Classification of Diseases coding.
Birth defects and reproductive outcomes could be studied with a case-control design. Selected birth defects that are rarely fatal yet distinct enough to be identified at birth are less problematic to study than all birth defects. Adverse reproductive outcomes are not uncommon in pregnancy and usually carry no unique characteristics that would link them to particular exposures, with the possible exception of unusual congenital malformation syndromes. Aside from the case of a unique malformation, however, most studies of reproductive outcomes will require comparison groups to determine whether the association between the health outcome of interest and exposure to drugs, biologics, and chemicals differs from what would normally be detected within this population. In addition, the problem of identifying reproductive effects is magnified relative to that of identifying other health outcomes in that a couple's exposures, not an individual's, may be related to the risk. Both members of the couple may not have records in the military medical database, so complete exposure data would not be available within a single system. Many other confounding variables may influence the risk of reproductive problems, and not all of these will be known to the military.
The exposure data for a case-control study can be collected by computer linkage if exposures have been automated (see section entitled Identifying and Recording Exposures earlier in this chapter). In addition, the cases and controls can be found and asked to complete a personal, telephone, or self-administered questionnaire, if appropriate. The major disadvantage of self-reporting is that recall and reporting of exposures may well be different for cases and controls.
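The computer-linkage step reduces to joining exposure records to the case-control roster on a shared identifier. A minimal sketch, with hypothetical record layouts and agent names chosen for illustration:

```python
def link_exposures(subjects, exposure_records):
    """Attach automated exposure histories to study subjects, keyed on
    a shared identifier (an SSN or service number is assumed here)."""
    history = {}
    for rec in exposure_records:
        history.setdefault(rec["id"], []).append(rec["agent"])
    # Subjects with no automated exposure record get an empty history.
    return [dict(s, exposures=history.get(s["id"], [])) for s in subjects]

# Hypothetical roster and exposure file.
subjects = [{"id": "A1", "status": "case"},
            {"id": "B2", "status": "control"}]
exposure_records = [{"id": "A1", "agent": "pyridostigmine bromide"},
                    {"id": "A1", "agent": "anthrax vaccine"}]
linked = link_exposures(subjects, exposure_records)
```

Unlike self-reported questionnaire data, a linkage of this kind captures exposures identically for cases and controls, avoiding differential recall.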
Cohort Studies

Cohort studies identify subsets of a population and follow them over time, looking for differences in their outcomes. Cohort studies generally are used to compare exposed individuals with unexposed individuals, although they can also be used to compare one exposure with another. They can be performed either prospectively or retrospectively, with past events reconstructed from automated or manual medical records, questionnaires, or interviews. For example, plans are under way to conduct clinical and epidemiologic studies of three cohorts of multiply immunized civilians, all consisting of former or current U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) laboratory workers who have received multiple immunizations: 99 men who were studied in 1954, 1962, and 1971; former “reunion” employees, who meet once every three years; and current USAMRIID employees. Although the proposed studies are not studies of military cohorts, they may provide useful data applicable to the military.
A cohort study provides the basis for determining the excess risk of adverse health outcomes associated with interactions of agents compared with the risk of adverse health outcomes in those not exposed to the same agents. Considerations in the design and conduct of a cohort study of the association between interactions of agents and adverse health events include the following: the ability to select a well-defined cohort (study population), the ability to obtain accurate exposure histories and data on potential confounding factors, and the ability to ascertain all relevant disease events of interest.
This type of study can be used to assess the effects of agents that are known to interact but whose use cannot be avoided in field operations, agents with potential interactions, and agents whose potential for interaction is as yet unknown. For example, this is one of the strategies being used to evaluate the health effects of the Persian Gulf War; hospitalization data have been compared for deployed and nondeployed personnel. Unfortunately, personnel tend to leave the military, after which their health can no longer be monitored within the military health care system; most medium- to long-term sequelae therefore will not be identified in military records. VA records can be used when they are available, but those who use the VA system will not be representative of the exposed and unexposed groups. With SSNs available, researchers could link to other (non-VA) public and private administrative health care databases. For example, the National Death Index provides unbiased data on long-term sequelae, although it can be used only to study all-cause mortality and mortality associated with conditions having high fatality rates, such as some cancers. These overall death rates and specific causes of death can be compared among different exposure groups (assuming that sufficient exposure data are collected so that the groups can be identified; see the Monitoring Exposures section above). Linkage to
the Army/Navy Serum Repository can provide biomarker data for subjects during their time in the military. Finally, randomized experiments and intervention studies, when feasible and ethical, can provide very useful data on the effects of the interactions of various agents.
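The deployed-versus-nondeployed comparison described above is conventionally summarized as an incidence rate ratio. A minimal sketch with hypothetical event counts and person-years, using the usual log-scale approximate confidence interval:

```python
import math

def rate_ratio(events_exp, py_exp, events_unexp, py_unexp):
    """Incidence rate ratio and 95% CI for a cohort comparison of
    exposed (e.g., deployed) vs. unexposed (nondeployed) person-years."""
    rr = (events_exp / py_exp) / (events_unexp / py_unexp)
    se = math.sqrt(1 / events_exp + 1 / events_unexp)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical: 30 hospitalizations in 10,000 deployed person-years
# vs. 20 in 10,000 nondeployed person-years.
rr, lo, hi = rate_ratio(30, 10_000, 20, 10_000)
```

With counts this small the interval is wide; a confidence interval that spans 1.0 would indicate that the observed excess is compatible with chance.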
The military will be faced with increasing and continuously evolving problems involving the interactions of drugs, biologics, and chemicals. Each combination of agents with the potential for adverse interactions is likely to pose specific research challenges and unique problems to the detection, evaluation, and management of interactions. The advice of experts in the fields of toxicology, epidemiology, and pharmacology will be needed on a continuing basis to assist military scientists and program managers in developing experimental approaches, selecting model systems, designing epidemiologic studies and surveillance programs, and providing information during policy discussions concerning the costs and benefits of potential decisions. An expert advisory panel established under a chartered advisory structure, such as the Armed Forces Epidemiology Board, and comprising experts in the several needed disciplines could provide the appropriate advice and guidance to the military research community, those who perform preventive medicine activities, and health care providers.
In addition, close coordination among programs within the DoD (including, but not limited to, drug and biological product development, preventive medicine, clinical medicine, chemical warfare defense activities, and oversight committees) will be necessary to address the recommendations of this committee. The committee acknowledges that it will not be possible to complete all of the database development and recommended research immediately and simultaneously. Cost-benefit and feasibility considerations will need to be addressed to prioritize and develop a workable agenda of future research activities. Coordination among the programs will be particularly important to the successful completion of this task.