veterans with respect to their duties and location during deployment, their military status during the war (active duty, reserves, or National Guard), their military status after the war (active duty, reserves, or discharged), their branch of service (Army, Navy, Air Force, or Marines), or ease of ascertainment (IOM, 1999). The most representative studies are population-based: the cohorts are selected on the basis of where their members reside. In population-based studies of Gulf War veterans, the cohort might be the entire deployed population, as in studies of Canadian and Australian veterans, or a random selection from the population of interest, as in several studies of US and UK veterans. The committee, in evaluating major cohort studies, gave greater weight to Gulf War studies that were population based.
A study’s representativeness, even if it is population based, can be compromised by low participation rates. Low participation rates can introduce selection bias, such as when Gulf War veterans who are symptomatic choose to participate more frequently than those who are not symptomatic. Nondeployed veterans, who might be healthier, may be less inclined to participate. In some studies, researchers not only try to assess the potential for selection bias by comparing participants with nonparticipants from both deployed and nondeployed populations, but also implement strategies to reduce the impact of selection bias, such as by oversampling nondeployed populations as in the study by Eisen and colleagues (2005).
Selection bias might also occur through the so-called healthy warrior effect: chronically ill or less fit members of the armed forces might have been less likely to be deployed than more fit members. That form of bias has the potential to occur in most of the major cohorts that compare deployed veterans with nondeployed personnel. Some of the best studies attempt to measure the potential for this bias and adjust for it in the analysis.
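The direction of the healthy warrior effect can be illustrated with a small simulation. All of the numbers below (baseline illness prevalence, deployment probabilities, cohort size) are invented for the sketch and are not drawn from any Gulf War study; the point is only that when less fit members are less likely to deploy, the deployed group appears healthier even when deployment itself has no effect on health.

```python
import random

random.seed(0)

N = 100_000
deployed_ill = deployed_total = 0
nondeployed_ill = nondeployed_total = 0

for _ in range(N):
    # Hypothetical 10% baseline prevalence of chronic illness,
    # identical regardless of eventual deployment status.
    chronically_ill = random.random() < 0.10
    # Healthy warrior effect: ill members are half as likely to deploy.
    p_deploy = 0.25 if chronically_ill else 0.50
    if random.random() < p_deploy:
        deployed_total += 1
        deployed_ill += chronically_ill
    else:
        nondeployed_total += 1
        nondeployed_ill += chronically_ill

# Deployment has no causal effect on illness in this simulation, yet the
# deployed cohort looks substantially healthier than the comparison group.
print(deployed_ill / deployed_total)        # ≈ 0.05
print(nondeployed_ill / nondeployed_total)  # ≈ 0.14
```

A naive comparison of these two groups would therefore understate any true adverse effect of deployment, which is why studies that measure and adjust for this selection process are given greater weight.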
Many cohort studies rely on self-reports of symptoms and medical conditions. This may introduce reporting bias, which occurs when the study population (in this case the deployed veterans) over- or underreports symptoms or medical conditions relative to a comparison group (in this case the nondeployed veterans). This over- or underreporting may be related to beliefs about the effect of deployment on health, especially among deployed veterans who, if they are experiencing health problems, may have already formed an opinion on the cause of their malady. Comparison groups, in contrast, may have little reason to conjecture possible links between past exposures and any current health conditions they may be experiencing. In most cases, reporting bias leads to an overestimation of the prevalence of symptoms or diagnoses in the deployed population.
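The arithmetic behind that overestimation can be made explicit. In the sketch below, the true symptom prevalence is identical in both groups, but the deployed group reports with a higher false-positive rate; every rate used is a hypothetical illustration, not a value from any study.

```python
def observed_prevalence(true_prev, p_report_if_present, p_report_if_absent):
    """Apparent prevalence after imperfect, possibly biased self-report."""
    return (true_prev * p_report_if_present
            + (1 - true_prev) * p_report_if_absent)

# Assume the true symptom prevalence is 20% in both groups.
true_prev = 0.20

# Deployed veterans overreport (hypothetical rates: 95% report a symptom
# they have, 10% report one they do not); the comparison group reports
# closer to the truth (80% and 2%, respectively).
deployed_obs = observed_prevalence(true_prev, 0.95, 0.10)     # 0.27
nondeployed_obs = observed_prevalence(true_prev, 0.80, 0.02)  # 0.176

# The apparent prevalence ratio exceeds 1 even though the true ratio is 1.
print(round(deployed_obs / nondeployed_obs, 2))  # 1.53
```

Differential reporting alone thus manufactures an apparent excess of symptoms in the deployed population, which is the mechanism by which reporting bias inflates prevalence estimates.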
Self-reports of symptoms or medical conditions might sometimes introduce another type of bias known as outcome misclassification, in which there are errors in how symptoms or medical conditions are classified into outcomes and analyzed. One Gulf War study sought to document outcome misclassification by comparing veterans’ symptom reporting on questionnaires with clinical examination findings about 3 months later (McCauley et al., 1999a). The study found that the extent of misclassification depended on the type of symptom being reported; agreement between questionnaire and clinical examination ranged from 4% to 79%. That degree of disagreement led the investigators to caution that questionnaire data, in the absence of clinical evaluation or adjustment, might lead to outcome misclassification (McCauley et al., 1999b). Another study also found poor reliability and validity of self-reported medical diagnoses when compared with medical records (Gray et al., 1999). In contrast, a study by the Department of Veterans Affairs (VA) (Kang et al., 2000), which verified a random subset of self-reported conditions (n = 4200) against medical records, found a strong correlation between the two (above