A registry is a structured system for collecting and maintaining data on a group of people characterized by a specific disease, condition, exposure, or event. Registries may be used to facilitate research, monitor health, or provide information to registrants. This chapter begins with a broad overview of the use of registries for health research. Because registries are one of the primary methods used by the Department of Veterans Affairs (VA) to assemble information on the effects of military service–related exposures, examples of VA and Department of Defense (DoD) registries that address specific exposures, health outcomes, or groups are summarized. The limitations of registry data, specifically the common biases encountered, are then discussed. This is followed by an examination of the characteristics of some potential comparison populations that might be used in studies of health outcomes in the participants in the Airborne Hazards and Open Burn Pit (AH&OBP) Registry. The chapter concludes with a discussion of some of the inherent limitations of using registry data for assessing associations and drawing conclusions concerning the relationship between exposures and health outcomes.
In an ideal study of the relationship between an exposure and an outcome, the characteristics of the population of interest, the agent(s) and level(s) of exposure they were subjected to, and the outcome or outcomes that were observed would all be carefully and objectively measured. Well-designed epidemiologic studies are the best means of measuring such associations, but large epidemiologic studies take years of planning to design and are expensive to conduct. In addition, the excessive control of study parameters that is often required in these studies may limit the generalizability of conclusions to a broader population. As a result, epidemiologists have developed several means to gather and use the information that is feasible to obtain.
Registries are one means of accomplishing this. They are generally quicker to establish, cost-efficient, and allow for the ascertainment of several exposures and health outcomes in a defined population. In some registries, this information is supplemented or updated over time. The motivating factors for participation and retention vary according to the registry but may include the apparent relevance, importance, and credibility of the registry or its sponsoring organization (Gliklich et al., 2014). Participants may be directly solicited from lists of persons believed to have the desired characteristics (active recruitment) or via broad appeals to populations thought to include potentially qualified subjects (passive recruitment). In registries that depend on passive recruitment, selective participation is a concern because those persons experiencing adverse health outcomes—whatever their cause—may be more motivated to enroll.
Registry data are collected through active or passive means, or a combination of the two. Active data collection gathers data specifically for the purposes of the registry (such as through a clinical examination or questionnaire). Passive data collection relies on data collected for purposes other than the registry (such as administrative health records, claims databases, or pharmacy records).
An epidemiologic registry may be established to serve one or more predetermined scientific, clinical, or policy purposes such as addressing a public concern (Gliklich et al., 2014). Such registries are most useful when designed for a specific goal or research question that then directs how and which data are to be collected. Antão and colleagues (2015) proposed that the considerations for establishing a registry should include the public health significance of the exposure or condition; the scientific contribution that the registry would provide; the ultimate purpose, duration, scope of data collection, and outcomes of the registry; and whether the registry will be a useful mechanism to achieve the stated goal.
There are several possible uses for registries as data-collection instruments. They can be used to systematically document the experiences of people who choose to participate and to more methodically record otherwise anecdotal reports of exposure and disease. A second use of registries is to define or calculate the minimum number of affected individuals even if the universe of potential participants is undefined (subject to the accuracy of self-reported exposure and disease reports). Likewise, the data can be used to determine the minimum number of exposures and diseases reported by participants for descriptive purposes that may serve as a basis for informed speculation about the magnitude of the problem. Other uses of registry data include generating hypotheses about specific exposures and health effects that may not have been considered otherwise and discovering previously unidentified associations between exposures and health problems, particularly if the health event of concern is rare or distinctive. The data may motivate more focused investigations of health outcomes and give researchers information they can use to develop better designed studies.
Whereas many registries are established with the intention of providing quantitative data, more often the data that they can provide are qualitative. For example, registries may serve as an expression of good faith on the part of the sponsor to be responsive to concerns that have been raised and to provide a forum for receiving testimony of those who wish to provide it. In this vein, registries can be used to facilitate communication with and outreach to specific populations, which may include updates on new scientific and medical developments or new programs or policies relevant to the participants. However, the committee notes that although several registries have been established, particularly by government agencies, to be responsive to concerned constituents, that motivation should not be the primary reason for establishing a registry.
The World Trade Center Health Registry is one example of a registry that was appropriately designed and established to track the health of participants after a unique exposure (the World Trade Center terrorist attacks on September 11, 2001) and that resulted in many scientific publications on the health conditions of participants over time. Its data were used to demonstrate increased reporting of newly diagnosed respiratory symptoms, asthma, posttraumatic stress disorder, and serious psychological distress (for example, Bowler et al., 2012; Brackbill et al., 2013; Farfel et al., 2008). Although studies based on these registry data were not without limitations, such as selective participation over time, there were significant findings; for example, rescue and recovery workers who wore respirators when responding were less likely to report respiratory problems 5 to 6 years after the day of the attacks than those who went without adequate respiratory protection (Antão et al., 2011). In addition, the registry serves as an important tool to inform health care services, project needs for affected populations, and link affected participants to services.
In contrast, the National Exposure Registry was established in 1989 in response to the 1980 Congressional Comprehensive Environmental Response, Compensation, and Liability Act. Although it is regarded as one of the largest data repositories for tracking specific environmental chemical exposures and registrants’ health conditions over time, it has several shortcomings. Foremost, the National Exposure Registry lacks individual exposure measures, a design flaw that prevents it from effectively assessing exposure as intended. Moreover, there is no process for validating self-reported exposures and health outcomes, appropriate control groups are absent, and no biomarkers exist for most hazardous substances; all of these issues reduce the registry’s value for research or for drawing meaningful conclusions. The numbers and rates of many observed health effects that were reported to be high compared with national norms might be explained by this registry’s design and methodological shortcomings (Schultz et al., 2010).
The next section briefly summarizes the two main types of registries and their application by VA and DoD to obtain exposure- and health-related information.
Two primary types of registries are used to address environmental health issues: disease- or outcome-based, and exposure-based. Generally, both types gather the same sorts of information. The difference between them is the issue that is the focus of the effort—the potential determinants of a health outcome versus the possible consequences of an exposure.
In disease- or outcome-based registries, the eligibility to participate may be dependent on particular signs or symptoms, a diagnosis, or vital status. Depending on the design, inclusion criteria, and extent of information collected, disease registries can be used to estimate the prevalence and incidence of a condition of interest, track or estimate health care resource utilization, study disease progression, or serve as sampling frames for selecting populations for additional studies (Rabeneck, 2001).
The AH&OBP Registry is an exposure-based registry, and this type of registry is the focus of the discussion in the following sections. Exposure-based registries collect information about persons potentially at risk for adverse health outcomes due to specific occupational or environmental exposures encountered. The exposures of interest may be chemical, biological, radiological or nuclear agents, or they may be the result of environmental conditions (for example, airborne dust, extreme heat, or natural or man-made disasters). Exposure registries can be used to help evaluate potential health outcomes or latent conditions resulting from an exposure, especially if the exposure is not common or its potential effects are not well characterized.
VA has established several registries, most of them in response to Congressional directives, along with related programs to monitor the health of veterans of particular conflicts or who were potentially exposed to specific environmental agents during military service. Exposures differ by military conflict, but include herbicides (most prominently, Agent Orange); depleted uranium or embedded fragments; ionizing radiation; and such hazards as smoke from oil-well fires, wind-borne dust, or chemical pollution. Four of the seven active VA exposure registries are targeted at 1990–1991 Gulf War (Operation Desert Shield and Operation Desert Storm) or post-9/11 Gulf region military operations. Table 2-1 lists examples of active VA exposure or event-based environmental health-related registries along with a few of their salient characteristics.
Although each of the VA registries was designed to focus on particular exposures and populations, the eligibility criteria often changed over time, generally in the direction of being more inclusive. For example, the Agent Orange Registry was established in 1978 to monitor veterans’ health concerns that may have resulted from their exposure to herbicides in Vietnam. However, the registry has since expanded its scope and now includes veterans of Korea and Thailand who believed they were exposed to these herbicides. Similarly, the Persian Gulf Registry Health Examination Program was established in 1992 and later expanded to cover Operation Iraqi Freedom (OIF) and Operation New Dawn (OND) veterans and renamed the Gulf War Registry.
Because different veterans were subject to different potential exposures, the types of questions asked to ascertain the relevant information differ, as do the types of laboratory tests that may be offered and the areas of emphasis during the medical history and clinical exam. This leads to variations in the type, amount, and detail of data collected by VA registries (VA, 2016b,c). Registry data generally constitute the sum total of information on participants available for analysis, although they are occasionally supplemented by other sources, such as military records.
To be eligible to participate in a VA registry, a veteran must sometimes complete a clinical examination conducted by a VA provider and supply self-reported exposure and other information. The registry examination may
TABLE 2-1 Active (as of June 2016) VA Environmental Health Registries
|Registry Name||Period of Military Service||Enrollees||Target Population||Data Source(s)|
|Ionizing Radiation||1940s–1950s; 1960s||24,550||Approximately 400,000 veterans are eligible to enroll (VA, 2004). Veterans who participated in a test involving atmospheric detonation of a nuclear device; participated in the occupation of Hiroshima or Nagasaki (August 6, 1945, to July 1, 1946); were interned in Japan during World War II; received radium irradiation treatments while on active duty; or were involved in “radiation-risk activities” (VA, 2004, 2010, 2015a).||Ionizing radiation exam; voluntary.|
|Agent Orange||1960s–1970s||573,000||Veterans with service in Vietnam between 1962 and 1975; or who served in a unit or near the Korean Demilitarized Zone April 1, 1968–August 31, 1971; served on Thailand bases between February 28, 1961, and May 7, 1975; or who were otherwise exposed to herbicides during a military operation or as a result of testing, transporting, or spraying herbicides for military purposes (VA, 2012, 2015b).||Voluntary comprehensive health exam, which includes an exposure history, medical history, physical exam, and other tests if needed.|
|Depleted Uranium Followup||1990s–present||79||Veterans of the Gulf War, Bosnia, Operation Desert Shield, Operation Desert Storm, and Operation Iraqi Freedom (OIF) conflicts “who were on, in or near vehicles hit with ‘friendly fire’; rescuers entering burning vehicles, and those near burning vehicles; salvaging damaged vehicles; or near fires involving DU munitions” (McDiarmid, 2011; VA, 2015c).||Screening via an exposure questionnaire and urine test; physical exams and clinical tests for exposed persons.|
|Gulf War||1990s–present||150,000||Veterans who served in the Gulf during Operation Desert Shield, Operation Desert Storm, OIF, or Operation New Dawn (OND). The registry had enrolled 150,000 1990–1991 Gulf War veterans as of January 2015 and more than 29,000 OIF/OND veterans as of February 2014 (VA, 2015a,e).||Voluntary health evaluation, including exposure and medical history, laboratory tests, and a physical exam.|
|Toxic Embedded Fragments||2000s–present||9,450||Veterans with active duty service in Operation Enduring Freedom, OIF, or OND. The veteran must have, or likely have, an embedded fragment as the result of injury received while serving in an area of conflict (VA, 2016a).||Responses to screening questions; health and exposure-related information, such as fragment composition data; and urine biomonitoring results from electronic medical record systems and follow-up screening.|
provide baseline health information, but unless pathognomonic diagnostic findings and test results have been associated with an exposure, an exam cannot determine causation or improve diagnostic accuracy or quality of care. Clinical examinations provide objective information on health status, in contrast to relying solely on self-reported disease information, which is difficult to validate. However, clinical examinations are cross-sectional, point-in-time data and cannot confirm or quantify exposures that may have occurred as a result of military service except in circumstances where unambiguous evidence is present, as is true for participants in the Depleted Uranium and Toxic Embedded Fragments registries. With few exceptions, follow-up exams are not conducted.
How well VA registries serve to provide new information to add to the scientific knowledge base on a particular topic, contribute to hypothesis generation, or improve programs in the Veterans Health Administration is open to question. The committee failed to identify any publications that used data from the Ionizing Radiation Registry for disease surveillance or in epidemiologic investigations. However, others, such as the Agent Orange and the Depleted Uranium Follow-Up registries, serve both as VA health surveillance databases for their respective populations and as sources of data for epidemiologic studies. For example, the Agent Orange Registry population was used in at least five studies examining the health conditions of these veterans. Bullman et al. (1991) conducted a case-control study using a subset of registry participants to compare demographic and military characteristics of veterans who did and did not have posttraumatic stress disorder. Bullman and Kang (1994) used the registry population to assess the risk of mortality due to traumatic causes for Vietnam veterans who had posttraumatic stress disorder. Other studies used subsamples of the registry population to examine the risk of testicular cancer (Bullman et al., 1994) and nonmelanoma invasive skin cancers (Clemens et al., 2014) with presumed exposure to dioxin and other herbicide contaminants. The Depleted Uranium Follow-Up Registry continues to publish findings on the long-term health consequences in veterans exposed to depleted uranium (Hodge et al., 2001; McDiarmid et al., 2001, 2011; Shvartsbeyn et al., 2011; Squibb et al., 2005). In addition to surveillance, VA also uses some of its registries as an outreach tool, providing registry participants with newsletters and updated information on issues of interest.
For example, participants of the Agent Orange Registry receive the Agent Orange Newsletter quarterly, which provides selected research findings, a summary of exposure locations, and other information relevant to the registry population, such as how to apply for a disability claim and new studies of Vietnam veterans (VA, 2012, 2016d).
Similar to VA, DoD has also established registries in response to particular exposures that service members might have encountered; these are briefly summarized in Table 2-2. DoD previously offered an examination program for persons serving on active duty, the Comprehensive Clinical Evaluation Program, that was similar to VA’s Persian Gulf Registry Health Examination Program (NIH, 2016; VA and DoD, 2002). The program began in 1994 and was discontinued on June 1, 2002 (NARA, 2002). Self-reported data were collected on the use of pyridostigmine bromide; experience of infectious diseases; and exposure to pesticides, chemical and biologic agents, multiple vaccinations, depleted uranium, and airborne hazards from sand, dust, smoke, burn pits, and oil-well fires (IOM, 2010). An Institute of Medicine (IOM) committee evaluated this registry and the corresponding uniform case assessment protocol with regard to the protocol used, program implementation and administration, outreach efforts to veterans, and provider education, and it recommended several improvements to the protocol, referral process, evaluation feedback mechanism, consistency of data reporting, and approach to systematically updating patient information in the registry (IOM, 1998).
DoD’s Force Health Protection Program is focused on protecting individuals from hazardous physical, chemical, and biological agents in the air, water, and soil. Within this program, DoD identified the need for environmental health surveillance registries for service members with occupational and environmental health exposures that could cause illness and for any exposure that was not expected to cause illness but that could provide individual-level exposure data (DoD, 2016a). Two registries are currently monitoring occupational and environmental health
TABLE 2-2 Active (as of June 2016) DoD Environmental Health Registries
|Registry Name||Period of Military Service||Enrollees||Target Population||Data Source(s)|
|Operation Tomodachi||March 12–May 11, 2011||75,000||All identified individuals from the target population are included in the registry: service members, civil servants, Department of Defense (DoD) contractors, and dependents of service members and DoD civilian employees who were on the four main islands of Japan or on U.S. Navy-affiliated ships near Japan at any time between March 12 and May 11, 2011.||Radiation measurements were taken on military installations and in areas where service members were engaged in humanitarian missions.|
|Gulf War Oil Well Fire Smoke||August 2, 1990–February 28, 1991*||750,000||Members of the armed forces exposed to fumes of burning oil in connection with Operation Desert Storm.||Daily oil-well fire smoke exposure for an individual was estimated from environmental and other data and included in registry data.|
|Comprehensive Clinical Evaluation Program||1990–1991||32,876||Veterans of the 1990–1991 Gulf War.||In-person survey and two-stage health evaluation; voluntary participation.|
* These dates reflect the period of Operation Desert Storm, although oil-well fires burned between February 2, 1991, and October 29, 1991.
exposures among select groups of service members: the Operation Tomodachi Registry and the Gulf War Oil Well Fire Smoke Registry.
The Operation Tomodachi Registry was established in response to the March 11, 2011, earthquake and tsunami in Japan and the subsequent release of radiation at the Fukushima Daiichi Nuclear Power Station (DoD, 2016b). Its purpose was to monitor U.S. service members who were on or near the mainland of Japan in the 2 months following the incident by creating a comprehensive database of exposures and health outcomes. DoD also tested water, air, and soil and used the data to calculate potential exposure doses for 13 areas (installations and major cities) on the Japanese mainland where the majority of the DoD population (about 58,000 persons) was stationed. In total, the registry contains data on about 75,000 DoD-affiliated individuals, including nearly 17,000 who were associated with U.S. Navy fleet-based operations (DoD, 2016b). Information includes locations and estimated whole body and thyroid radiation doses (Dunavant et al., 2013).
The Gulf War Oil Well Fire Smoke Registry was established in response to Public Law 102-190, which required DoD to “establish and maintain a record relating to members of the Armed Forces who were exposed to the smoke/fumes from burning oil wells” (DoD, 2016c). More than 750 oil wells were set on fire while Iraqi forces were retreating from Kuwait during the 1991 conflict. These fires caused a marked decrease in air quality and were a health risk for a large part of the country, especially for those in their immediate vicinity. The registry includes more than 750,000 DoD personnel who served in the Gulf War during the time the oil-well fires were burning. Exposures were estimated using information submitted by registry participants, troop location data, the DoD personnel registry, satellite images, and meteorological models. Modeling data were used to estimate smoke exposure, and daily estimates were combined to provide an overall risk estimate.
There are significant inherent limitations in the use of registries to draw inferences regarding the presence or strength of an association between an exposure and a health outcome. This section provides an overview of some of the weaknesses of registry data and the various sorts of problems one encounters in using registries to draw scientific conclusions. The overview begins with a discussion of the biases introduced by selective participation, which affects all registries that rely on voluntary participation. Other potential biases (misclassification, recall, and reporting) and their implications for data quality are then described, and Table 2-3 summarizes the biases and other limitations of registry data and their implications for data analysis. Subsequent sections address the challenge of identifying an appropriate comparison population to use in evaluations of health outcomes and discuss the cumulative effect these weaknesses exert and how they limit the extent to which registry data may be used to evaluate exposure–health outcome associations. Chapter 3 offers additional observations on how these considerations affect the scientific value of information from the AH&OBP Registry.
Several factors influence participation in a registry, including its perceived relevance to the respondent, the importance or scientific credibility of the registry or the sponsor, the survey length and time commitment required to complete it, the degree to which the respondent believes participation will yield a benefit, and the respondent’s altruism (Groves et al., 1992). This is especially true for registries that depend on voluntary participation, where those considerations are weighed with the other risks and burdens of participation as well as with any incentives for participation (Raftery et al., 2005). Monetary incentives, for example, have been shown to increase health survey response rates in U.S. veterans (Coughlin et al., 2011).
A registry involving an exposure–health outcome relationship in which people choose to participate may selectively include those who were more highly exposed than the average within the eligible population or those who are more concerned about the potential health effects resulting from such an exposure because these individuals have a greater stake in the issue. For example, Smith et al. (2002) found that 1990–1991 Gulf War veterans who were exposed to the heaviest fighting in theater and had served longer deployments were more likely to participate in a DoD or VA registry than veterans who were deployed for shorter time periods and experienced less intense combat. This is important because a registry population with nearly universal reporting of an exposure or outcome is unlikely to be representative of the full, eligible population. To the extent that these and other factors differ between participants and nonparticipants, such selective participation may seriously undermine the potential utility of the registry to fulfill the objectives and answer the questions that it was intended to address (Hernán et al., 2004).
Nonrandom differences in participation are a specific form of selection or nonresponse bias—a type of systematic error that occurs when the study sample differs from the target population of the study in a way that makes it
TABLE 2-3 Limitations of Registries and Resultant Effects
|Limitation||Effect|
|Selective participation (selection or nonresponse bias)||Threatens representativeness so that findings may not be generalizable to the broader, target population|
|Misclassification bias||May result in exaggerated or underestimated estimates of an effect|
|Recall bias||Threatens internal validity and distorts the magnitude of estimates of an effect|
|Reporting bias||May result in exaggerated or underestimated estimates of an effect|
|Lack of active follow-up||May lead to incomplete ascertainment of outcomes|
|Passive data collection||May lead to missing data|
|Enrollment of ineligible registrants||May weaken the generalizability of findings|
|Large numbers of participants||May lead to inflated “statistical significance” but not necessarily clinical relevance|
unreflective of the exposures, health outcomes, or exposure–health outcome associations present in the population of interest (Rothman, 2012). Registries that rely on completely voluntary participation and where efforts to contact, recruit, and persuade eligible persons to enroll and participate are not targeted to the full eligible population are especially prone to selection or nonresponse bias. The potential for bias is dependent on the rates of participation (generally lower rates result in a greater potential for bias) and on the extent to which various key variables such as exposures and health outcomes systematically differ between the participant and target populations. For example, persons who perceive themselves as exposed to hazards or those who are experiencing health problems may be more likely to participate than persons who do not consider themselves ill or at risk, thereby resulting in a sample of participants that is not representative of the target population and therefore introducing selection bias.
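The arithmetic of selective participation can be sketched with a toy calculation in which all numbers are hypothetical: when ill persons enroll at a higher rate than well persons, the prevalence observed among enrollees exceeds the true prevalence in the target population.

```python
def observed_prevalence(true_prev, p_enroll_ill, p_enroll_well):
    """Expected prevalence among registry enrollees when enrollment
    probability differs between ill and well persons."""
    ill = true_prev * p_enroll_ill          # share of population: ill and enrolled
    well = (1 - true_prev) * p_enroll_well  # share of population: well and enrolled
    return ill / (ill + well)

# Assumed values: 20% of the target population is ill; ill persons
# enroll at 60%, well persons at 20%.
print(round(observed_prevalence(0.20, 0.60, 0.20), 2))  # → 0.43
```

Under these assumed participation rates, a condition affecting 20 percent of the eligible population would appear to affect more than 40 percent of registrants, even though no individual misreports anything.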
It is time consuming for participants to provide large amounts of information or similar information for multiple events, and research suggests that a lack of time is a factor in low response rates in surveys of military personnel (Miller and Aharoni, 2015). Another form of selection bias can result if persons who have more events to report exceed the time they have allotted themselves or have been allotted to complete the study, become fatigued, or lose interest. In such circumstances, respondents may not report all eligible events or information related to them, or they may game their answers to avoid having to answer follow-up queries (Egleston et al., 2011). The more onerous an instrument is to complete, the more likely those with greater motivation to participate (such as those who believe that they had high exposure, are ill, or both) will be overrepresented in the study population.
The effects of selective participation may be mitigated by improving the representativeness of participants through revised messaging, more targeted outreach and communication, and incentives to respond—for example, by targeting eligible persons who were potentially exposed but who are not currently experiencing adverse health outcomes. Response fatigue may be minimized by such steps as limiting repetitive questions, using previous responses to eliminate later questions that are no longer salient, and making sure that the survey content is perceived as relevant by respondents (Rolstad et al., 2011).
Misclassification bias results when the information collected about or from respondents is inaccurate and leads to respondents being placed in an incorrect category. It can occur for either an exposure—such as classifying people as exposed when they were not—or a health outcome (Rothman, 2012). Misclassification of participants can be either differential or nondifferential. Differential misclassification occurs when categorization errors for one variable of interest (exposure, for example) are related to status in another variable of interest (health outcome, for example). This would be the case if respondents who experience shortness of breath were more likely to erroneously report having been more highly exposed to burn pits than persons who do not have shortness of breath. If the pattern of error in reporting exposure is not related to the presence of the outcome, then the misclassification is nondifferential. Differential misclassification can either exaggerate or underestimate an effect; for dichotomous comparisons, nondifferential misclassification generally biases the result toward the null, that is, toward finding no association between an exposure and an outcome. For multiple categories of exposure, nondifferential misclassification among exposure categories can produce the appearance of a more linear or monotonically increasing relationship when in fact the underlying relationship is nonlinear.
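The attenuating effect of nondifferential misclassification on a dichotomous comparison can be illustrated with a hypothetical 2×2 calculation; the exposure prevalences and error rates below are assumed for illustration only.

```python
def odds_ratio(p_cases, p_controls):
    """Odds ratio from the proportion classified as exposed in each group."""
    return (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))

def misclassify(p_exposed, sens, spec):
    """Proportion reported as exposed, given the sensitivity and
    specificity of the exposure classification."""
    return p_exposed * sens + (1 - p_exposed) * (1 - spec)

# Assumed true exposure prevalence: 50% in cases, 25% in controls.
true_or = odds_ratio(0.50, 0.25)
# Nondifferential error: the same sensitivity (0.8) and specificity (0.9)
# apply to cases and controls alike.
obs_or = odds_ratio(misclassify(0.50, 0.8, 0.9),
                    misclassify(0.25, 0.8, 0.9))
print(round(true_or, 2), round(obs_or, 2))  # → 3.0 2.16
```

In this sketch, a true odds ratio of 3.0 is observed as roughly 2.2: the association is diluted toward the null, but not reversed, because both groups are misclassified in the same way.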
Data collection that is based on self-report rather than objective measures introduces the potential for recall and reporting biases. Recall bias results, for example, if respondents who self-report health problems report their exposure experience differently than those without health problems, thereby threatening the internal validity of the study (Hassan, 2005). When exposed and nonexposed (or greater or lesser exposed) respondents report events or health outcomes in a manner that is different between the two groups, it can lead to differential misclassification that can then distort the magnitude of the measure of association toward or away from the null, depending on the magnitude and direction of the bias (Hassan, 2005).
For many registries, the time between the event of interest, design and implementation of the registry, and
recruitment of eligible persons is a factor in recall bias. Given that it has been 25 years since the 1990–1991 Gulf War, veterans of this conflict may not correctly recall all potential exposures or specific details (such as how many hours they were exposed to smoke or fumes) requested in the AH&OBP Registry questionnaire. Recall bias would result in stronger observed associations if persons who were experiencing health problems remembered and reported military exposures to a greater extent than persons who were not experiencing such problems. Similarly, persons who do not perceive that an exposure has affected their current health status may be less likely to recall the exposure or to report related symptoms or diagnoses.
In a registry where participants are asked to assess their own types and levels of exposures, these data are not comparable to, nor as specific as, air monitoring data or similar objective measures of exposure. Respondents report whether they believe they were exposed to various chemical, environmental, or biological agents, but quantifying actual exposure intensity or differentiating among specific chemical components is much more difficult. Furthermore, subjective exposure reports are strongly influenced by recall bias (persons who are ill, or believe themselves highly exposed, may differentially overestimate past exposures, for example).
Similarly, self-report bias influences respondents’ reports of health outcomes in registry data. Frequently, individuals cannot accurately recall specific names of medical diagnoses or the dates when such diagnoses were made. Likewise, individuals often evaluate past health in relation to their current health; persons who are currently ill may, for example, differentially overestimate the length of their illness or may mistakenly omit earlier illnesses that they now consider minor in relation to their current illness. Other incentives may also influence self-reports of health data. For example, a belief—accurate or not—that participation in the registry or registry data could influence access to health care or other services or key decisions regarding future exposures or deployment practices might affect how an individual appraises and reports his or her health conditions. To reduce the effect of self-report bias, researchers often attempt to validate self-reported exposure and outcome data against objective measures, such as air monitoring data or medical records, respectively.
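One common way to quantify such validation is an agreement statistic such as Cohen's kappa, which compares self-reported status against an objective record while discounting agreement expected by chance. The counts below are hypothetical, chosen only to sketch the calculation.

```python
# Hypothetical illustration: Cohen's kappa for agreement between
# self-reported exposure and an objective record (e.g., a medical chart).

def cohens_kappa(a, b, c, d):
    """2x2 agreement table: a = both positive, b = self-report positive only,
    c = record positive only, d = both negative."""
    n = a + b + c + d
    p_obs = (a + d) / n                                    # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2 # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# 100 participants: 40 agree "exposed," 30 agree "unexposed," 30 disagree.
kappa = cohens_kappa(40, 10, 20, 30)
print(f"kappa = {kappa:.2f}")  # moderate agreement, well below perfect (1.0)
```

A kappa near 1.0 would indicate that self-report closely tracks the objective measure; values in the range sketched here would signal substantial misclassification in the self-reported data.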
To assess the degree to which an exposure may cause a specific health problem, an appropriate comparison group is needed. Ideally, this group should resemble the study group as closely as possible in terms of the characteristics that are related to the risk of the health outcome of interest so that differences in outcomes can be attributed to the factor of interest (such as an exposure) rather than to other factors (confounding factors or biases). The characteristics of interest should be known for all who are eligible to participate in the study or at least for large and well-defined subgroups of eligible persons. Generally, such characteristics include basic demographic information such as sex, age, education level, race, ethnicity, and marital status. For military populations, additional service characteristics such as branch, component, deployment dates and locations, and military occupational specialty are desirable.
A registry that collects self-reported data on both exposures and health outcomes is inherently influenced by same-source bias. An example would be people who believe they were highly exposed overestimating their exposures or those who believe they were not exposed underestimating their exposures. Similarly, people who believe that their health conditions are a result of an exposure of interest at any level are more likely to participate in such a registry. Comparing self-reported exposures and health outcomes provides a quantitative assessment of whether individuals tend to attribute their own conditions to the exposure in question. As such, the exposures and health outcomes are considered in a complementary way rather than based on whether there is an objective or true association between the exposures and outcomes of interest. To determine whether a true association exists between an exposure and outcome of interest, a well-designed epidemiologic study, with objective exposure and outcome metrics, is needed.
The committee was not charged with designing an epidemiologic study; rather, it was asked to perform an analysis of “how [AH&OBP] registry participants differ in demographic or exposure status (to the extent available data allows) from non-participant groups, such as all deployers or appropriate U.S. comparison populations.” Ultimately, the only comparisons the committee makes in this report are comparisons among registry participants who have different levels of exposure potential because internal comparisons mitigate some of the biases of the sample (although the registry participants constitute a highly self-selected group). However, to be responsive to the statement of task, this section identifies some potential comparison groups and the strengths and weaknesses of each. Even if a representative comparison group were available, self-report bias for both exposures and health outcomes would continue to be a concern because both types of bias influence participation.
A matter that complicates the identification of an appropriate comparison group for any study of health outcomes of service members who participated in the Southwest Asia theater of military operations during the 1990–1991 Gulf War and thereafter is the inherent difference in demographic makeup, in the conditions and characteristics of deployment, and in the potential exposures experienced between individuals who served in that theater during that time and those who did not. To give just one example, most participants in the 1990–1991 Gulf War operations had a single deployment that lasted less than 1 year. In contrast, as of December 2011, 47% of all Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn (OND) active-duty service members, 35% of reservists, and 35% of National Guardsmen had deployed more than once, with their cumulative lengths of deployment averaging 15.2–17.6 months depending on branch of service and component (IOM, 2013). Methods exist to adjust for differences in such factors to make the different veteran populations more comparable, but there are limits to the effectiveness of statistical adjustments when the differences are so extensive. Alternatively, each deployment cohort could be considered separately, but this lessens the power of statistical testing.
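One standard adjustment method of the kind alluded to above is direct standardization, in which each cohort's stratum-specific rates are reweighted to a common standard population. The rates and population mixes below are hypothetical, chosen only to show how adjustment can remove an apparent difference that is purely compositional.

```python
# Hypothetical illustration: direct standardization removes an apparent
# difference between two cohorts that differ only in age composition.

def standardized_rate(stratum_rates, weights):
    """Weighted average of stratum-specific rates over a population mix."""
    return sum(r * w for r, w in zip(stratum_rates, weights))

# Stratum-specific outcome rates (e.g., by age group) - identical in both cohorts.
cohort_a = [0.05, 0.10, 0.20]
cohort_b = [0.05, 0.10, 0.20]

# Each cohort's own age mix versus a shared standard mix.
mix_a = [0.6, 0.3, 0.1]      # younger-skewed cohort
mix_b = [0.2, 0.3, 0.5]      # older-skewed cohort
standard = [0.4, 0.3, 0.3]

crude_a = standardized_rate(cohort_a, mix_a)   # 0.08
crude_b = standardized_rate(cohort_b, mix_b)   # 0.14 - looks worse, but only
                                               # because cohort B is older
adj_a = standardized_rate(cohort_a, standard)  # 0.11
adj_b = standardized_rate(cohort_b, standard)  # 0.11 - identical once adjusted
```

As the committee notes, however, such adjustments work only for measured factors and only within limits; they cannot reconcile cohorts whose deployment conditions and exposures differ as extensively as those of the 1990–1991 and post-9/11 eras.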
U.S. Civilian Population
One possible comparison group might be a demographically adjusted population of individuals who completed the National Health Interview Survey (NHIS), from which many of the questions on health behaviors and conditions included in the AH&OBP Registry were drawn. In principle, comparing registry participants with a group of individuals with similar demographic makeup from the general U.S. population might result in some useful inferences. However, such an approach would introduce large problems that would make the results highly questionable. First, military personnel are specifically excluded from participating in the NHIS, which covers only the noninstitutional household population of the United States, raising the question of whether there are systematic differences between the groups that would make comparisons unreliable. While veterans are included in the NHIS, an analysis showed that the 2013 NHIS included only 932 veterans who had served since 1990, 523 of whom served overseas and 409 were nondeployed, thereby further limiting the NHIS as a viable comparison group (May and Haider, 2014). Second, any comparison analyses would be subject to significant selection bias because persons who are able to serve in the military and deploy to combat zones are healthier and fitter than the general population (the “healthy warrior effect”) that is sampled by the NHIS or any other national survey (Miller et al., 2012). Third, the NHIS is administered as an in-person household interview survey, and research suggests that in-person interviews yield different responses to the same questions than either telephone- or Web-administered surveys (Dillman et al., 2014).
Millennium Cohort Study
VA and DoD have conducted several surveys of military and veteran populations that have included some components of the eligible population defined for the AH&OBP Registry and some of the content. However, none of those surveys are well suited for comparisons with the registry population, except for possibly the Millennium Cohort Study, a prospective longitudinal survey of post-9/11 service members and veterans explicitly designed to collect data on and assess relationships between potential exposures and health outcomes (Ryan et al., 2007). The IOM recommended that DoD conduct prospective epidemiologic research (IOM, 1996, 2000), such as the
Millennium Cohort Study, in order to assess the impact of deployment and exposures on the long-term health outcomes of military service members. The prospective design of the Millennium Cohort Study mitigates the inherent deficiencies of collecting retrospective registry data. However, because it is limited to service members and veterans who deployed only in the post-9/11 era (who make up the majority of registry participants; see Chapter 4), it is not an appropriate comparison group for 1990–1991 Gulf War or stabilization period deployers (January 1992–September 2001). The Millennium Cohort Study also does not contain questions on the same exposures or health outcomes as the AH&OBP Registry, which would further limit the types of comparisons that could be made. Responses from the baseline and follow-up surveys are routinely matched with Defense Manpower Data Center records, so Millennium Cohort Study participants who meet the eligibility criteria for the AH&OBP Registry could in principle be identified. If that were possible, the distributions of demographic and military service characteristics, as well as the exposures and health outcomes collected by the Millennium Cohort Study, could be compared with those reported by post-9/11 registry participants. A number of limitations and barriers to gaining access to and using these data prevented the committee from analyzing them, but in principle the study may be a suitable resource for comparisons with post-9/11 registry participants.
Nondeployed Veterans and Service Members
Because deployment itself is not the exposure of interest—as it often is in other studies of 1990–1991 Gulf War and OEF/OIF/OND veterans and service members—nondeployed or deployed-elsewhere groups are other potential comparison groups. Unlike the general U.S. population, these groups are more similar to registry participants because their members had to meet the same types of standards to be accepted into military service, and deployment records exist for all registry-eligible time periods. Deployment status is generally determined from administrative data for a given time frame—1990 to 1991, for example. A service member may be labeled as “nondeployed” during that time frame but deploy later, creating misclassification.
There is evidence that persons who deploy differ from those who do not on characteristics such as military occupational specialty, readiness, and other factors that define “deployability.” The majority of service members in each service branch have deployed in support of OEF/OIF/OND (Baiocchi, 2013). Thus, nondeployed service members and veterans are an especially problematic comparison group for the specific subset of those deployed and eligible for the registry; those who deployed but are otherwise not eligible for the AH&OBP Registry would likely present similar problems. A more representative group might consist of service members or veterans who were eligible to deploy but did not. However, only 4% (20,000) of active-duty soldiers met that requirement as of December 2011 (Baiocchi, 2013).
VA Health Care Users
In principle, veterans who are not participants in the registry and use VA services could serve as a comparison group for registry participants who use VA services. However, information on exposures is not collected or available in these sources. Furthermore, such comparisons would be limited to VA users only and would exclude veterans who are not using VA services. Approximately 46% of deployed and 36% of nondeployed Gulf War veterans and 61% of deployed OEF/OIF/OND veterans use VA services (NASEM, 2015; VA, 2015d). In the past, veterans who used VA health care services were, in general, older, had lower incomes, and had more health problems than nonusers. Therefore, users and nonusers of VA health care might differ in important characteristics that might compromise comparisons between them (NASEM, 2015).
For the reasons noted above, registry data are not likely to provide the information necessary to evaluate the strengths of associations or potential cause-and-effect relationships between an exposure and a health outcome. Whether or not the data suggest an association, their value in assessing the impact of exposure on health is limited. Self-reported data may be useful for recording individual stories of experiences and signs or symptoms that
may have developed that are indicative of a particular exposure. However, in most instances it is not possible to translate this information into quantitative data that are suitable for making scientific inferences. Therefore, the committee believes that VA post-deployment health registries are primarily useful as a mechanism to create a roster of concerned individuals and provide outreach and health risk communication to potentially exposed and concerned veterans.
It is understandable that some might view a registry as a means to generate disease incidence or prevalence data among participants and want to use it to determine whether the frequency of such reports is different from what would be expected in a population that is otherwise similar but was not exposed. However—for the reasons previously discussed—it is not possible to confidently draw conclusions regarding this from the information collected. The motivation to participate in voluntary registries often is a result of personal experience, so that those who have suffered health problems—particularly problems potentially attributed to the exposures of interest—are more likely to enroll than those individuals who do not experience such outcomes. The data, therefore, reflect this selective participation, resulting in a rate of adverse outcomes among participants that is uninformative for comparisons to other populations.
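The distorting effect of selective participation on an observed rate can be made concrete with a small numeric sketch. The population size, true rate, and enrollment probabilities below are hypothetical; the point is only that when ill persons enroll more readily, the rate among enrollees can greatly exceed the rate in the full exposed population.

```python
# Hypothetical illustration: selective participation inflates the outcome
# rate observed among voluntary registry enrollees.

exposed_pop = 100_000
true_rate = 0.10                       # 10% of all exposed develop the outcome
ill = exposed_pop * true_rate
well = exposed_pop - ill

# Assumed enrollment probabilities: ill persons enroll far more often.
p_enroll_ill, p_enroll_well = 0.50, 0.10

enrolled_ill = ill * p_enroll_ill
enrolled_well = well * p_enroll_well
registry_rate = enrolled_ill / (enrolled_ill + enrolled_well)

print(f"population rate = {true_rate:.0%}, registry rate = {registry_rate:.0%}")
```

Under these assumptions, a true population rate of 10% appears as roughly 36% among enrollees, and nothing in the registry data alone reveals the enrollment probabilities needed to correct the distortion.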
In epidemiologic studies, assessing whether there is an association between an exposure and a health outcome requires a comparison of the presence of the outcome in an exposed population versus the presence in a comparable population lacking such exposure. As noted earlier in the chapter, because of incomplete and likely unrepresentative participation, the calculation of disease rates among the enrollees does not reflect the rates in the total population of those exposed. With information on health experience only among those who chose to participate, little benefit can come from comparing the experience of this group to some other population to address the question of whether the disease rate was elevated as a result of exposures.
Given the limitations of registries, the data from them may support an evaluation of the possibility of a relationship but cannot be used to determine whether such a relationship actually exists. Because such relationships are often of great interest to both registry participants and sponsors, this mismatch between expectations and what the data can deliver sets the stage for disappointment when enrollment and data analysis are completed.
Registry data may, however, motivate epidemiologic studies that would be better designed as a result of the information they generate. For example, a well-designed questionnaire that captured participants’ self-reported information could signal the presence of an unusual or atypical health outcome. Variations in outcomes as a function of specific elements such as locations of deployment, military occupation, and time periods of deployments might also yield targets for rigorous study. Such applications make use of the registry data in a way that takes advantage of the information generated without exceeding the limitations imposed by the quality of the data.
Registries are one method for ascertaining information about potential exposures and health outcomes in a defined population. Well-designed registries may be useful for identifying rare conditions in a population of interest, driving hypothesis generation, and informing larger and more rigorous study designs. VA and DoD have established several registries with the intent of collecting and monitoring information on health effects that may be a result of deployment-related exposures. However, registries have several inherent limitations, including selection and misclassification biases, that preclude their use in evaluating statistical associations or drawing conclusions regarding whether a particular health outcome results from a specific exposure. Registries may nonetheless yield information useful in determining which exposure–health outcome issues should be investigated using more rigorous data collection and analysis methods.
The next chapter—Chapter 3—extends and deepens this discussion, focusing on the AH&OBP Registry and addressing how these issues affect the interpretation of its data.
Antão, V. C., L. L. Pallos, Y. K. Shim, J. H. Sapp II, R. M. Brackbill, J. E. Cone, S. D. Stellman, and M. R. Farfel. 2011. Respiratory protective equipment, mask use, and respiratory outcomes among World Trade Center rescue and recovery workers. American Journal of Industrial Medicine 54(12):897–905.
Antão, V. C., O. I. Muravov, J. Sapp, 2nd, T. C. Larson, L. L. Pallos, M. E. Sanchez, G. D. Williamson, and D. K. Horton. 2015. Considerations before establishing an environmental health registry. American Journal of Public Health 105(8):1543–1551.
Baiocchi, D. 2013. Measuring Army deployments to Iraq and Afghanistan. Santa Monica, CA: RAND Corporation. http://www.rand.org/pubs/research_reports/RR145.html (accessed December 2, 2016).
Bowler, R. M., H. Harris, and J. Li. 2012. Longitudinal mental health impact among police responders to the 9/11 terrorist attack. American Journal of Industrial Medicine 55(4):297–312.
Brackbill, R. M., S. D. Stellman, S. E. Perlman, D. J. Walker, and M. R. Farfel. 2013. Mental health of those directly exposed to the World Trade Center disaster: Unmet mental health care need, mental health treatment service use, and quality of life. Social Science and Medicine 81:110–114.
Bullman, T. A., and H. K. Kang. 1994. Posttraumatic stress disorder and the risk of traumatic deaths among Vietnam veterans. Journal of Nervous and Mental Disease 182(11):604–610.
Bullman, T. A., H. K. Kang, and T. L. Thomas. 1991. Posttraumatic stress disorder among Vietnam veterans on the Agent Orange Registry. A case-control analysis. Annals of Epidemiology 1(6):505–512.
Bullman, T. A., K. K. Watanabe, and H. K. Kang. 1994. Risk of testicular cancer associated with surrogate measures of Agent Orange exposure among Vietnam veterans on the Agent Orange Registry. Annals of Epidemiology 4(1):11–16.
Clemens, M. W., A. L. Kochuba, M. E. Carter, K. Han, J. Liu, and K. Evans. 2014. Association between Agent Orange exposure and nonmelanotic invasive skin cancer: A pilot study. Plastic and Reconstructive Surgery 133(2):432–437.
Coughlin, S. S., P. Aliaga, S. Barth, S. Eber, J. Maillard, C. M. Mahan, H. K. Kang, A. Schneiderman, S. DeBakey, P. Vanderwolf, and M. Williams. 2011. The effectiveness of a monetary incentive on response rates in a survey of recent U.S. veterans. Survey Practice 4(1). http://www.surveypractice.org/index.php/SurveyPractice/article/view/91/html (accessed December 2, 2016).
Dillman, D. A., J. D. Smyth, and L. Melani Christian. 2014. Internet, mail, and mixed-mode surveys: The tailored design method, 3rd edition. Hoboken, NJ: John Wiley & Sons.
DoD (Department of Defense). 2016a. Environmental health surveillance registries. https://registry.csd.disa.mil/registryWeb/DisplayAbout.do (accessed September 26, 2016).
DoD. 2016b. About the Operation Tomodachi Registry (OTR). https://registry.csd.disa.mil/registryWeb/Registry/OperationTomodachi/DisplayAbout.do (accessed September 26, 2016).
DoD. 2016c. About the Gulf War Oil Well Fire Smoke Registry. https://registry.csd.disa.mil/registryWeb/Registry/OWFSR/DisplayAbout.do (accessed September 26, 2016).
Dunavant, J. D., M. Chehata, D. R. Case, M. Mckenzie-Carter, J. Cassata, R. Marro, R. Ranellone, K. Knappmiller, G. Falo, L. Alleman, and P. Blake. 2013. Operation Tomodachi Registry: Radiation data compendium. Fort Belvoir, VA: Defense Threat Reduction Agency. https://registry.csd.disa.mil/registryWeb/docs/registry/optom/DTRA-TR-13-044.pdf (accessed September 26, 2016).
Egleston, B. L., S. M. Miller, and N. J. Meropol. 2011. The impact of misclassification due to survey response fatigue on estimation and identifiability of treatment effects. Statistics in Medicine 30(30):3560–3572.
Farfel, M., L. DiGrande, R. Brackbill, A. Prann, J. Cone, S. Friedman, D. J. Walker, G. Pezeshki, P. Thomas, S. Galea, D. Williamson, T. R. Frieden, and L. Thorpe. 2008. An overview of 9/11 experiences and respiratory and mental health conditions among World Trade Center Health Registry enrollees. Journal of Urban Health 85(6):880–909.
Gliklich, R. E., N. A. Dreyer, and M. B. Leavy (eds.). 2014. Registries for evaluating patient outcomes: A user’s guide, 3rd ed. Rockville, MD: Agency for Healthcare Research and Quality. https://www.ncbi.nlm.nih.gov/books/NBK208619 (accessed December 2, 2016).
Groves, R. M., R. B. Cialdini, and M. P. Couper. 1992. Understanding the decision to participate in a survey. Public Opinion Quarterly 56(4):475–495.
Hassan, E. 2005. Recall bias can be a threat to retrospective and prospective research designs. Internet Journal of Epidemiology 3(2):1–7.
Hernán, M. A., S. Hernández-Díaz, and J. M. Robins. 2004. A structural approach to selection bias. Epidemiology 15(5):615–625.
Hodge, S. J., J. Ejnik, K. S. Squibb, M. A. McDiarmid, E. R. Morris, M. R. Landauer, and D. E. McClain. 2001. Detection of depleted uranium in biological samples from Gulf War veterans. Military Medicine 166(12 Suppl):69–70.
IOM (Institute of Medicine). 1996. Health consequences of service during the Persian Gulf War: Recommendations for research information systems. Washington, DC: National Academy Press.
IOM. 1998. Adequacy of the VA Persian Gulf Registry and uniform case assessment protocol. Washington, DC: National Academy Press.
IOM. 2000. Protecting those who serve: Strategies to protect the health of deployed U.S. forces. Washington, DC: National Academy Press.
IOM. 2010. Gulf War and health, volume 8: Update of health effects of serving in the Gulf War. Washington, DC: The National Academies Press.
IOM. 2013. Returning home from Iraq and Afghanistan: Assessment of readjustment needs of veterans, service members, and their families. Washington, DC: The National Academies Press.
May, L., and J. Haider. 2014. Preliminary assessment of NHIS data for providing a burn pit registry comparison group. Prepared for the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. November 20.
McDiarmid, M. A., K. Squibb, S. Engelhardt, M. Oliver, P. Gucer, P. D. Wilson, R. Kane, M. Kabat, B. Kaup, L. Anderson, D. Hoover, L. Brown, and D. Jacobson-Kram. 2001. Surveillance of depleted uranium exposed Gulf War veterans: Health effects observed in an enlarged “friendly fire” cohort. Journal of Occupational and Environmental Medicine 43(12):991–1000.
McDiarmid, M. A., S. M. Engelhardt, C. D. Dorsey, M. Oliver, P. Gucer, J. M. Gaitens, R. Kane, A. Cernich, B. Kaup, D. Hoover, A. A. Gaspari, M. Shvartsbeyn, L. Brown, and K. S. Squibb. 2011. Longitudinal health surveillance in a cohort of Gulf War veterans 18 years after first exposure to depleted uranium. Journal of Toxicology and Environmental Health Part A 74(10):678–691.
Miller, L., and E. Aharoni. 2015. Understanding low survey response rates among young U.S. military personnel. Santa Monica, CA: RAND Corporation. http://www.rand.org/content/dam/rand/pubs/research_reports/RR800/RR881/RAND_RR881.pdf (accessed December 2, 2016).
Miller, M., C. Barber, M. Young, D. Azrael, K. Mukamal, and E. Lawler. 2012. Veterans and suicide: A reexamination of the National Death Index–Linked National Health Interview Survey. American Journal of Public Health 102(Suppl 1):S154–S159.
NARA (National Archives and Records Administration). 2002. Request for records disposition authority: Comprehensive Clinical Evaluation Program (CCEP). https://www.archives.gov/records-mgmt/rcs/schedules/departments/departmentof-defense/office-of-the-secretary-of-defense/rg-0330/n1-330-02-001_sf115.pdf (accessed September 26, 2016).
NASEM (National Academies of Sciences, Engineering, and Medicine). 2015. Considerations for designing an epidemiologic study for multiple sclerosis and other neurologic disorders in pre and post 9/11 Gulf War veterans. Washington, DC: The National Academies Press.
NIH (National Institutes of Health). 2016. National Cancer Institute Division of Cancer Control & Population Sciences. Department of Defense Comprehensive Clinical Evaluation Program (CCEP). http://epi.grants.cancer.gov/pharm/pharmacoepi_db/ccep.html (accessed September 26, 2016).
Rabeneck, B. 2001. Using the national registry of HIV-infected veterans in research: Lessons for the development of disease registries. Journal of Clinical Epidemiology 54(12):1195–1203.
Raftery, J., P. Roderick, and A. Stevens. 2005. Potential use of routine databases in health technology assessment. Health Technology Assessment 9(20):1–106.
Rolstad, S., J. Adler, and A. Rydén. 2011. Response burden and questionnaire length: Is shorter better? A review and meta-analysis. Value in Health 14(8):1101–1108.
Rothman, K. J. 2012. Epidemiology: An introduction, 2nd ed. New York: Oxford University Press.
Ryan, M. A., T. C. Smith, B. Smith, P. Amoroso, E. J. Boyko, G. C. Gray, G. D. Gackstetter, J. R. Riddle, T. S. Wells, G. Gumbs, T. E. Corbeil, and T. I. Hooper. 2007. Millennium cohort: Enrollment begins a 21-year contribution to understanding the impact of military service. Journal of Clinical Epidemiology 60(2):181–191.
Schultz, M. G., J. H. Sapp, C. D. Cusack, and J. M. Fink. 2010. The National Exposure Registry: History and lessons learned. Journal of Environmental Health 72(7):20–25.
Shvartsbeyn, M., P. Tuchinda, J. Gaitens, K. S. Squibb, M. A. McDiarmid, and A. A. Gaspari. 2011. Patch testing with uranyl acetate in veterans exposed to depleted uranium during the 1991 Gulf War and the Iraqi conflict. Dermatitis 22(1):33–39.
Squibb, K. S., R. W. Leggett, and M. A. McDiarmid. 2005. Prediction of renal concentrations of depleted uranium and radiation dose in Gulf War veterans with embedded shrapnel. Health Physics 89(3):267–273.
Smith, T. C., B. Smith, M. A. K. Ryan, G. C. Gray, T. I. Hooper, J. M. Heller, N. A. Dalager, H. K. Kang, and G. D. Gackstetter. 2002. Ten years and 100,000 participants later: Occupational and other factors influencing participation in U.S. Gulf War health registries. Journal of Occupational and Environmental Medicine 44(8):758–768.
VA (Department of Veterans Affairs). 2004. Ionizing Radiation Review 2(1) [Entire issue]. http://www.publichealth.va.gov/docs/radiation/irr_newsletter_dec04.pdf (accessed October 12, 2016).
VA. 2010. Ionizing Radiation Review 4(1) [Entire issue]. http://www.publichealth.va.gov/exposures/radiation/publications/index.asp (accessed October 12, 2016).
VA. 2012. Agent Orange Review 26(1) [Entire issue]. http://www.publichealth.va.gov/docs/agentorange/reviews/newsletterwinter2012.pdf (accessed October 13, 2016).
VA. 2015a. Ionizing Radiation Registry health exam for veterans. http://www.publichealth.va.gov/exposures/radiation/benefits/registry-exam.asp (accessed December 2, 2016).
VA. 2015b. Agent Orange Registry health exam for veterans. http://www.publichealth.va.gov/exposures/agentorange/benefits/registry-exam.asp (accessed December 2, 2016).
VA. 2015c. Depleted Uranium Follow-Up Program. http://www.publichealth.va.gov/exposures/depleted_uranium/followup_program.asp (accessed December 2, 2016).
VA. 2015d. Analysis of VA health care utilization among Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn (OND) veterans. Washington, DC: Veterans Health Administration.
VA. 2015e. Gulf War research strategic plan 2013–2017: 2015 update. http://www.research.va.gov/pubs/docs/GWResearchStrategicPlan.pdf (accessed December 2, 2016).
VA. 2016a. Toxic Embedded Fragment Surveillance Center. http://www.publichealth.va.gov/exposures/toxic_fragments/surv_center.asp (accessed June 23, 2016).
VA. 2016b. Public health: Military exposures. http://www.publichealth.va.gov/exposures (accessed June 21, 2016).
VA. 2016c. Environmental exposures programs and services for veterans. http://www.publichealth.va.gov/docs/exposures/registry-evaluation-brochure.pdf (accessed December 2, 2016).
VA. 2016d. Agent Orange newsletter: Information for Vietnam-era veterans and their families. http://www.publichealth.va.gov/docs/agentorange/reviews/ao-newsletter-summer-2016.pdf (accessed December 2, 2016).
VA and DoD (Department of Veterans Affairs and Department of Defense). 2002. Department of Defense Comprehensive Clinical Evaluation Program (CCEP). In Combined analysis of the VA and DoD Gulf War clinical evaluation programs: Study of the clinical findings from systematic medical examinations of 100,339 U.S. Gulf War veterans. http://www.gulflink.osd.mil/combined_analysis/gulf_war_clinical_evaluation_programs.htm#ccep (accessed August 26, 2016).