When developing a registry, it is important to clearly define its goals and expected benefits while being mindful of its limitations. To maximize the registry’s use and value, developers need to take into account the various logistical constraints (time, expertise, money) when designing, implementing, and maintaining the registry, when conducting outreach to eligible users, and when promoting optimal use of the registry. The rationale for the registry and the process used to develop it both need to be fully explicated so that registry participants and users of registry information can take these factors into account.
This chapter focuses on the development and implementation of the Airborne Hazards and Open Burn Pit (AH&OBP) Registry and its key element, the self-assessment questionnaire. The chapter begins with a brief review of the salient recommendation from an initial report on the long-term health consequences of exposure to burn pits in Iraq and Afghanistan and the resulting decisions and directives from Congress. Following a discussion of the registry’s development, the chapter offers evaluations of the design of the questionnaire and the types of questions used in it, including considerations for how the questions might be improved. Although registries cannot substitute for well-designed epidemiologic studies, improvements to the questionnaire could result in an improved collection instrument. The ability to supplement registry data by linking the registry to other Department of Veterans Affairs (VA) and Department of Defense (DoD) data sources is next considered. Finally, the recruitment and enrollment of registry participants are discussed.
Large expenditures of time and money—and substantial levels of experience and technical and scientific expertise—are required to establish and maintain an exposure registry. One justification for making such expenditures is that the exposure or exposures of concern may present a clear health risk. The authoring committee of Long-Term Health Consequences of Exposure to Burn Pits in Iraq and Afghanistan recommended that “a prospective study of the long-term health effects of exposure to burn-pit emissions in military personnel deployed at [Joint Base Balad]” be conducted (IOM, 2011, p. 8). Notwithstanding this recommendation for the conduct of a prospective epidemiologic study, in January 2013 Congress passed Public Law 112-260 (reproduced in Appendix A) directing VA to establish the AH&OBP Registry within 12 months of the law’s enactment. Within this narrow window of time, VA was tasked with developing a process and structure to create a comprehensive and targeted exposure and health outcomes questionnaire and with making it available for veterans’ use.
The 12-month schedule for developing and beginning to implement a well-designed and tested registry to assess and track the exposures of interest and their possible health effects was short, considering the complexity of the registry’s intent and the tasks involved. Briefly, many undertakings are required to establish any registry of military personnel or veterans, but the largest involve
- defining those eligible for the registry (based largely on data from DoD records that are not readily structured for research purposes);
- developing a comprehensive outreach and recruitment strategy for reaching the full population of potentially eligible individuals;
- determining the most effective mode for enrollment, vetting eligibility, and capturing the information required;
- determining what information on deployments, exposures, health, and other indicators (including military and demographic characteristics) needs to be measured;
- determining the best sources of data for capturing this information (for example, self-report or military records);
- establishing the best mode (such as in-person, online, or computer-assisted interviews) of data collection for self-report assessments;
- examining previous registries, epidemiologic and health surveys, and other sources to determine the best questions to use or adapt that are consistent with the selected mode;
- testing the selected and developed questions to ensure that they are well understood by participants and that they effectively capture the key information required while minimizing burden;
- selecting appropriate mechanisms, consistent with current best practices, for supporting and maintaining the registry over its expected lifetime;
- implementing tests, evaluations, and revisions to ensure that all components of the registry system work together as intended; and
- developing a strategy to use the collected information for the stated purpose of the registry.
All of these processes need to be conducted within the regulatory constraints and ethical considerations that go into any effort that involves the acquisition and management of personally identifiable information. These tasks are difficult to accomplish successfully even when time is not a factor, and there are limitations to the time savings that can be realized with the application of greater amounts of money, people, or effort. It is thus open to question whether the time period allotted by Congress was realistic.
Public Law 112-260 includes provisions specifying that the registry would be for “eligible individuals who may have been exposed to toxic airborne chemicals and fumes caused by open burn pits” and would include any information that VA determined as “necessary to ascertain and monitor the health effects” of individuals who served in the Armed Forces and reported exposure to toxic airborne chemicals and fumes caused by open burn pits. The law directs that registry participants are to be notified of significant developments in the study and in the treatment of conditions associated with exposure to toxic airborne chemicals and fumes caused by open burn pits.
VA has, in various forums, articulated multiple goals and intents for the AH&OBP Registry. The registry website states that the data collected will be used to help monitor health conditions affecting eligible veterans and service members, to help veterans and service members who report deployment-related exposure concerns, and to improve VA programs. It then states the following benefits of participation: creating a point to identify changes in health over time, using the completed questionnaire to discuss concerns with a health care provider, and learning about follow-up care and VA benefits (VA, 2016a). VA also stated that it intends to use the registry to generate potential hypotheses about exposure-response relationships but acknowledges that subsequent studies would be needed to test these hypotheses (VA, 2014a). In a presentation to the committee, VA said that data from the registry will also be used more generally to improve programs in the Veterans Health Administration (VHA) and to provide outreach to veterans who may have experienced adverse health outcomes as a result of their exposures (Ciminera, 2015a). The lack of a consistent message makes it difficult to evaluate the degree to which the registry is meeting its stated intents and suggests a lack of focus that—as the report will later detail—is reflected in information gathering that does not appear to serve a sound research purpose.
To determine whether the registry is fulfilling its intent or purpose—as defined by VA or otherwise—the committee performed a comprehensive assessment, the results of which make up the remainder of this chapter and are further addressed in Chapters 4–6. The assessment begins with a review of the registry’s development, including the design and construction of the registry questionnaire and initial testing. The next section provides a detailed description and evaluation of the registry questionnaire, including the appropriateness of topics covered, layout, structural features, and directions. This is followed by a discussion of how data from other sources could be linked with registry data to provide a more complete picture of veteran and service member health. Eligibility criteria for participation, a description of the communications and outreach efforts used by VA to advertise the availability of the registry, and the process for enrolling and completing the questionnaire are detailed in the final section.
The AH&OBP Registry consists of responses to an online questionnaire completed by eligible veterans and service members and is housed in the Office of Patient Care Services within VHA. Similar to other Congressionally mandated registries, such as the Agent Orange Registry and the Gulf War Registry, there is no anticipated closing or end date for the AH&OBP Registry (Lezama, 2015). Historically, the number of new participants declines over time but can increase with new media or scientific reports on the health consequences of the target exposures.
Developing a registry of this nature is a major challenge given the large, diverse population of interest, the complexity of the exposures and health outcomes of interest, the requirement to have it be easily accessible to all interested and eligible veterans, and the desirability of having it be completed online (Ciminera, 2015b; VA, 2014a). VA relied on two working groups of VA and DoD subject-matter experts to advise it on the content and design of the questionnaire and registry. The working groups were tasked with developing “clinical guidance, a registry analysis plan (to include DoD exposure data) and supporting information technology requirements” (VA, 2013) and developing additional questions related to environmental exposures unique to military service in contingency operations (VA, 2014a). Staff within the Post-Deployment Health Group of the VHA Office of Public Health (now the Office of Patient Care Services) were consulted for statistical expertise.
The first working group meeting took place in October 2012 and had nine members with expertise in primary care, pulmonology, and public or environmental health. That working group was responsible for developing and implementing clinical evaluation guidance for primary care and other providers to support the interagency Airborne Hazards Action Plan, under which the AH&OBP Registry falls. That working group was also tasked with recommending methods to disseminate educational materials based on the guidance it developed and with developing recommendations to improve collaboration among specialists, with the goals of improving the consistency of specialist evaluations and interagency situational awareness for unusual cases or clusters of cases (VA, 2012).
The second working group met in January 2013 and was composed of 10 VA and DoD subject-matter experts in public or environmental health and other specialties who had little overlap with the previous working group’s participants. The purpose of this working group was to develop an exposure assessment instrument that would be integrated into the registry questionnaire (VA, 2013). Meeting notes, draft or final products, and other materials related to the outcomes of the two working groups are not available, so it was not possible for the committee to further evaluate the work of these groups. VA also conducted usability testing beginning in October 2013, which included a human factors analysis by its Office of Informatics and Analytics (Ciminera, 2015a,b).
The committee appreciates that the VA staff members who were involved with developing and implementing the registry appear to have been experts in military occupational health and were conscientious in their approach to designing and implementing the questionnaire, especially given the short timeframe. However, the committee believes that the approach of using personnel in VHA’s Office of Public Health Post-Deployment Health Group and establishing two working groups of VA and DoD subject-matter experts—but not experts in survey design or survey research methods—to advise on questionnaire content and design was insufficient for developing a registry of this scale. Expanding the expertise to include specialists in fields such as survey design and research methods would have provided input on many issues that do not appear to have been considered in the design, testing, and implementation of the registry. For example, depending on the goal of the registry (surveillance or hypothesis generation), consulting with experts in exposure and disease ascertainment would ensure that, within the constraints of the data collection approach, the most accurate and complete data would be generated. Survey design experts would consider the characteristics of the population of interest (educational and cultural background, incentives and disincentives to participate, and so forth) and, especially, the distinctive features of Web-based survey design and data collection, such as more rapid use of the data, the ability to tailor the survey to individuals through automatic application of fill-ins and skip patterns, increased data reliability, and a reduction in survey costs. Thus, the committee concludes that many of the problems in design and implementation (discussed under Questionnaire Quality) could have been anticipated and ameliorated had experts in survey research been consulted.
In brief, a fairly standard process has evolved across the survey research industry for developing and testing questionnaires, including
- questionnaire formatting that is consistent with and takes advantage of the specific mode(s) of the survey carried out;
- expert independent review of draft questionnaires to identify and resolve problems with question clarity and how questions, potential responses, and instructions are worded;
- standard questionnaire appraisal systems to identify potential problems in the wording or structure of questions that may lead to difficulties in questionnaire administration, miscommunication, or other failings, and to improve the questions accordingly (Institute for Social Research, 2016); and
- cognitive interviewing: conducting one-on-one interviews with members of the targeted population that probe respondents’ understanding of questions and how they form their responses, the burden of providing a response, and their willingness to provide high-quality responses.
In the case of Web-based surveys, evaluation is typically conducted using cognitive interviewing techniques such as “think aloud” sessions with subjects as they complete a draft survey and usability testing with respondents to determine how they use the Web-survey program to maneuver through the instrument.1 It is unclear whether or to what extent such work was carried out during the AH&OBP Registry pilot phase, but the committee believes that some of the problems with the questionnaire that it observed might have been avoided if such processes had been rigorously applied. Any registry of this type will benefit from the conduct of a thorough, standardized assessment of the instruments used for data gathering before launch.
VA contracted with two private-sector firms for Web implementation and information technology support of the registry and to design the required database architecture to ensure the information could be accessed, stored, linked to other database systems, and extracted. The Web-based format allows for real-time performance monitoring and quality improvement initiatives to be part of the system architecture. VA stated that this capacity makes it possible to monitor weekly metrics such as the numbers of new registrants and registrant user status, to monitor the registry’s status for accessibility, and to log helpdesk calls that provide technical assistance to users (Montopoli, 2016a). VA indicated that the agency has also implemented better integration of the AH&OBP Registry database with the health care and enrollment data available in the VHA Corporate Data Warehouse.
In addition to the Web-based version of the questionnaire and registry, VA has implemented a version that can be accessed with a mobile application accessible on Android, Blackberry, iPad, iPhone, and Windows Phone platforms. The mobile application format was introduced September 1, 2015, and had been used by more than 16,000 individuals through September 15, 2016 (representing ~10% of all individuals who accessed the registry as of that time). An internal analysis of time to complete the questionnaire using the mobile app version found that the average completion time for users who started and completed the questionnaire on the same day was 61 minutes (Personal communication, Michael Montopoli, Director, Post-9/11 Era Environmental Health Program, VA, September 15, 2016).
1 This topic is further discussed in the section titled Open Comment Period and Pilot Testing.
DoD’s Defense Health Agency and Army Public Health Command store completed questionnaires from active-duty participants, and VA stores completed questionnaires for all other participants (Ciminera, 2015a). The data are collected and analyzed by the Post-Deployment Health Group of the VHA Office of Public Health (VA, 2014a).
At the end of August 2015, an update to the registry was released and implemented. No changes were made to the content of the questionnaire, but several system updates were made. The updates included migrating the database platform from MongoDB to a SQL-based system; enhancing the VHA staff portal to make VA health care users’ registry data more easily accessible to VA providers and facilities and to add capabilities for ad hoc reporting; creating a “data mart”2 in VHA’s Corporate Data Warehouse for internal analyses of the raw registry data; and integrating with eBenefits3 to allow access to the registry from the eBenefits website (Lezama, 2016).
The AH&OBP Registry is distinct from previous registries established by VA in that it was designed to be completed using only an online interface (no paper forms or computer-assisted interviews). An online questionnaire was proposed because it could potentially improve access and population monitoring while limiting the burden of participation (Ciminera, 2015a). Veterans and service members of the recent conflicts have high levels of Internet access, and the committee appreciates VA’s effort to include a more representative group of participants by designing the registry in an online format that can be accessed from nearly anywhere there is an Internet connection. However, although a Web-based survey may confer benefits over more traditional methods of mail surveys, in-person interviews, or computer-assisted telephone interviewing, not all eligible persons have access to a Web-enabled device such as a computer or smartphone. The Pew Research Center (2011) notes that “[p]eople with lower incomes, less education, living in rural areas or age 65 and older are underrepresented among internet users and those with high-speed internet access.” And researchers have noted that Web-based surveys tend to have lower response rates than other types of surveys (Fan and Yan, 2010).
The questionnaire was designed to take about 30 minutes to complete (Federal Register, 2013a) in order to collect relevant information while limiting participation burden. While the questionnaire includes many new questions tailored to the registry and its purpose, it makes extensive use of relevant questions from another ongoing survey, the National Health Interview Survey (NHIS). Recycling questions used in and validated by the NHIS rather than developing new questions may be considered a strength of the registry questionnaire, but it is also one of its key weaknesses. VA chose to use some existing NHIS questions for health conditions and symptoms. These were taken from four different sections of the 2013 NHIS Adult Core module: conditions, health status and limitation of activity, health behaviors, and health care access and utilization (Personal communication, Michael Montopoli, Director, Post-9/11 Era Environmental Health Program, VA, August 18, 2016). However, not all questions from each section were used or presented in the same order as in the NHIS, and taking them out of their surrounding context may affect their validity.
A comparison between the registry questions and the 2013 NHIS performed by the committee’s subcontractor showed significant discrepancies in wording for questions on respiratory conditions, with only 3 of 13 questions showing an exact match, 8 questions with no match, and 2 questions with changes in reference period. The correspondence is better for questions on cardiovascular conditions and health behaviors: 4 of 6 and 11 of 13 exact matches in wording, respectively. Only 5 of 13 questions on cancer history and other conditions had exact matches. Changing the wording of the questions and the order in which they are presented weakens any assumption that the registry questions have been validated.
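The exact-match tally described above can be illustrated with a short script. This is a minimal sketch, not the subcontractor’s actual method; the question texts and the `match_rate` helper are hypothetical stand-ins rather than actual registry or NHIS items.

```python
# Sketch of an exact-wording comparison between two question sets.
# The question texts below are hypothetical placeholders, not actual
# AH&OBP Registry or NHIS items.

def match_rate(registry_questions, nhis_questions):
    """Count registry questions whose wording exactly matches an NHIS item."""
    nhis = {q.strip().lower() for q in nhis_questions}
    exact = sum(1 for q in registry_questions if q.strip().lower() in nhis)
    return exact, len(registry_questions)

registry = [
    "Have you ever been told by a doctor that you had asthma?",
    "Do you still have asthma?",
    "Have you ever smoked cigarettes?",
]
nhis = [
    "Have you ever been told by a doctor or other health professional "
    "that you had asthma?",
    "Do you still have asthma?",
    "Have you ever smoked cigarettes?",
]

exact, total = match_rate(registry, nhis)
print(f"{exact} of {total} questions match exactly")  # 2 of 3 questions match exactly
```

Note that such a tally flags only verbatim agreement; as the committee’s comparison shows, changes in reference period or question order also matter and require item-by-item review.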
Furthermore, the NHIS was designed to be a cross-sectional household interview survey of the civilian4 noninstitutionalized U.S. population and is conducted using face-to-face interviews. As a result, most of the questions, formats, and responses used in the AH&OBP Registry questionnaire were developed for use in interview-based rather than Web-based surveys. In survey interviews, questions and response choices are typically read by interviewers to respondents, and answers are recorded by the interviewer. While these interviews are now generally computer-assisted, they are not self-administered. Administering the questions by a different means also calls into doubt whether they can be considered validated for information-gathering purposes.
2 A data mart is a subset of a data warehouse focused on a specific set of information.
3 “eBenefits is a joint VA/DoD web portal that provides resources and self-service capabilities to Veterans, Service members, and their families to research, access and manage their VA and military benefits and personal information” (VA, 2014b).
4 Active-duty military personnel (but not veterans) are explicitly excluded from the NHIS, unless another family member is a civilian and eligible. However, when data are collected on military personnel, they are limited to familial factors and given a final weight of zero so that their individual characteristics are not counted in national estimates (CDC, 2015).
The committee appreciates that, under the time constraints given by Congress, using existing validated questions from previous surveys made sense. However, the NHIS was not designed for or validated using active-duty or veteran populations, and the designers should have considered surveys that have been used or validated in studies of veterans and service members and surveys that were explicitly designed for Web-based administration.
For example, the Millennium Cohort Study began in 2001 and was designed to be a 21-year longitudinal study surveying U.S. military personnel from all service branches to evaluate the effects of military exposures, including deployment, on short- and long-term health outcomes. Through 2011, the study had enrolled more than 200,000 participants (Crum-Cianflone, 2013). It was designed to link survey responses with several data sources, including VA and DoD data, to complement the self-reported questionnaire responses. Before implementing the survey, the developers conducted several focus groups to discuss the instrument, and they tested it in a pilot study of 1,000 participants, which resulted in systematic validation of the instrument. Quality control processes, including methods to encourage unbiased responses and retention, have been established and are used to make improvements and correct errors as the study progresses. The Millennium Cohort Study has been found to be a reasonably representative sample of the U.S. military, and its survey data have been confirmed as having excellent reliability in several publications (Crum-Cianflone, 2013).
The most recent iteration of the Millennium Cohort Study instrument is designed to be completed either online or on paper by mail, consists of about 450 questions, takes approximately 30–45 minutes to complete, and includes validated questions on militarily relevant issues, including a specific question on exposure to smoke from burning trash and/or feces (Crum-Cianflone, 2013). The 9 questions on environmental exposures experienced during deployment were derived from a subset of the 23 environmental exposure questions included in VA’s National Health Survey of Persian Gulf War Era Veterans, first conducted in 1995 (Kang et al., 2000). The Millennium Cohort Study questionnaire collects information on mental, physical, behavioral, and functional health and incorporates several standardized instruments, such as the 36-Item Short Form Health Survey, Patient Health Questionnaire, Posttraumatic Stress Disorder Checklist–Civilian Version, and the so-called CAGE questions for alcohol problems (NIAAA, 2016; RAND, 2016; VA, 2016b). In sum, the Millennium Cohort Study illustrates that there are other approaches to eliciting information of the type sought by the AH&OBP Registry that better align with validated health survey instruments and that yield results that are more useful in research.
VA sought to improve access to and participation in the registry while limiting the burden on participants, in part by offering an optional in-person exam rather than making the exam a criterion for participation, as was done for other VA registries. The reasoning was that prospective participants might be more likely to enroll if they were not required to go to a specific place to register or be inconvenienced by scheduling and completing an exam that they might not feel they need. As mentioned elsewhere, one value of VA registries is that they generate a roster of concerned individuals; other uses include outreach, surveillance, and health-risk communication to potentially exposed veterans. Therefore, not requiring an in-person physical evaluation at a VA medical center as a requisite for participation increases the potential for greater representation among service members and veterans, especially those who are not enrolled in VA, who have already seen a non-VA provider for a condition or concern potentially related to the exposure, who would have to travel great distances for an exam, who are unable to miss work, or who do not have or wish to take the time to make an appointment and receive an exam. On the other hand, removing the requirement to undergo a physical exam results in a loss of valuable objective health and functional status information and of a potential mechanism for verifying self-reported information for at least a subsample of the registry population.
After completing and submitting the questionnaire, participants are given the option to print and save a copy of their responses and may schedule the optional in-person clinical evaluation. VA health care providers conduct the exam for veterans and members of the reserves and National Guard who are not currently activated. Participants who are enrolled in the VA health care system can make an appointment with their primary care provider or patient-aligned care team. Veterans and non-activated service members not enrolled in the VA health care system need to first contact a VA environmental health coordinator (Sharkey et al., 2014). Active-duty service members and members of the reserve and National Guard who are on active duty orders for more than 30 days may request a medical evaluation through their designated medical treatment facility or DoD primary care manager. When requesting an appointment, service members are instructed to indicate that the appointment is for “health concerns related to the Airborne Hazards and Open Burn Pit Registry exposures” (Sharkey et al., 2014).
Both VA and DoD have prepared fact sheets for participants as well as clinical guidance for health care providers conducting follow-up exams. The in-person exam is not standardized, which makes it difficult to assess various aspects of its process or quality. Broad general guidance is available to VA clinicians via a training webinar, and the committee was provided with a copy of the National Note Airborne Hazards and Burn Pit Initial Evaluation Clinical Template. The guidance advises physicians to use “their own evidence based knowledge, expertise, and skills to guide a patient-centered evaluation and management” and adds that additional diagnostic tests or specialty consultations may be appropriate (VA, 2016c). The committee notes that although there are no known conditions that are directly attributable to burn pit exposure, the clinical exam is useful in that it gives participants an opportunity to connect with a provider, articulate any health concerns they may have, and, if warranted, undergo appropriate diagnostic testing or referral and begin treatment to improve symptoms.
DoD does not require a specific form for the clinical assessment, but providers are encouraged to review the service member’s questionnaire; to take a medical history that focuses on occupational and environmental exposures, airborne hazards, and smoking history; and to determine the person’s primary concern or complaint. The provider may perform a physical examination if indicated based on symptoms or concerns, order additional diagnostic tests, or refer the service member to a specialist for further evaluation. The examination, diagnoses, and any referrals are to be fully documented in the service member’s medical record (Defense Health Board, 2015). The U.S. Army Public Health Command has developed provider education and training—including a downloadable provider registry exam toolbox—concerning the registry and the associated clinical exam for physicians, physician assistants, and nurse practitioners who may be seeing these DoD participants (Montopoli, 2016a). The training is intended to familiarize military providers with the registry and its purpose, provide coding and recording guidance, and provide additional optional clinical tools for assessment that have been developed by VA and DoD, such as an algorithm for primary care providers that details a clinical approach to the participant requesting the clinical exam (Ciminera, 2015b). VA was given the opportunity to review the DoD materials during their development. The U.S. Army Public Health Center has hosted multiple, internal video teleconferences with provider groups and has produced other marketing efforts in person, in print, and online (Montopoli, 2016a).
The computerized patient record system note template within the VA electronic health record was developed by VA’s New Jersey War Related Illness and Injury Study Center with input from VA’s Office of Public Health, patient-aligned care team primary care providers, and environmental health clinicians. The note template was created to standardize the clinical evaluations conducted by VA, collect information on health outcomes, and capture administrative data for registry monitoring and improvement. It allows clinicians to view an individual’s questionnaire responses online through a secure portal and provides links to additional information about airborne hazards, health conditions possibly related to those exposures, and guidance for completing an appropriate evaluation. The note also allows clinicians to document the person’s chief complaint; medical, social, family, and substance-use history; a physical exam, documenting positive and negative findings by system; diagnostic tests and evaluations performed to date with applicable results; and an overall assessment, recommendations, and follow-up orders. The computerized patient record system patch was installed in late November 2014 (Ciminera, 2015b; Montopoli, 2016a), but the software to indicate that a clinical evaluation related to registry participation took place was not finalized until November 2015 (Lezama, 2015). The template is intended as a clinical progress note and not as a standardized data collection tool.
Registry participation may be entered by the provider in a patient’s medical record, but there is no standard “flag” indicating that a patient is part of the registry. When a patient is seen for an in-person exam related to participation in the AH&OBP Registry, the provider may choose to use the recommended standard AH&OBP Registry note template to capture complaints, abnormalities, and other data from that exam, but that use is not required. The AH&OBP Registry questionnaire data and VHA clinical data can be aggregated within the VHA Corporate Data Warehouse for use by registry staff or VA researchers (Montopoli, 2016a).
As of fall 2016, VA was in the process of developing new patient appointment scheduling software that would allow the registry to interface with that system, allowing registry participants who are enrolled with VA to request the clinical examination directly through the registry (Montopoli, 2016a). By November 30, 2015, about 28,800 individuals had completed the questionnaire and indicated that they were interested in having a health exam, but only 750 participants (2.5% of those interested) had received the health exam. The reasons for the low proportion of interested respondents completing an in-person clinical exam are unknown. An analysis of the chief complaints (participants may have indicated more than one) for the 543 participants who underwent a clinical evaluation showed that the three most common complaints were shortness of breath (57.5%), decreased exercise ability (47.8%), and chronic sinus infections (47.3%) (Lezama, 2015). However, the chief complaints cited during the clinical evaluations have not been matched with the most prevalent conditions noted on the questionnaire to validate responses or to determine whether the persons undergoing evaluation are reporting different outcomes or conditions, reporting a greater severity or differences in other parameters, or reporting conditions at different frequencies than are observed among all registry participants.
One strength of the AH&OBP Registry is that the completed questionnaire generates a record of potential exposures and health concerns that is stored in the participant’s VA electronic health record, can be accessed by military and veteran health care system providers, and can be downloaded and printed by the participant for his or her own reference and for the use of other health care providers. Extending this functionality would provide an even greater benefit. The lack of time available for health care providers to do thorough clinical work-ups is well documented (Chen et al., 2009; IOM, 2013; NASEM, 2015b). Something as simple as a one-page document that would extract relevant exposures and reported health outcomes, including potential functional impacts, would provide participants with a means of quickly and efficiently educating their providers on their concerns and would give clinicians information allowing them to better tailor their care to the individual’s health care needs.
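As an illustration only, a one-page extract of this kind could be generated directly from stored questionnaire responses. The minimal sketch below assumes entirely hypothetical field names ("name", "exposures", "concerns", "functional_impacts"); these are not the registry’s actual data elements.

```python
def one_page_summary(participant):
    """Render a brief, provider-facing extract of registry responses.

    All field names used here are hypothetical placeholders, not the
    AH&OBP Registry's actual data elements.
    """
    lines = ["AH&OBP Registry summary: %s" % participant["name"]]
    lines.append("Reported exposures: " + ", ".join(participant["exposures"]))
    lines.append("Health concerns: " + ", ".join(participant["concerns"]))
    lines.append("Functional impacts: " + ", ".join(participant["functional_impacts"]))
    return "\n".join(lines)

# Illustrative record drawn from the kinds of items the text describes.
example = {
    "name": "J. Doe",
    "exposures": ["burn pit smoke", "sand/dust storms"],
    "concerns": ["shortness of breath", "chronic sinus infections"],
    "functional_impacts": ["decreased exercise ability"],
}
print(one_page_summary(example))
```

A real implementation would, of course, draw its fields from the questionnaire’s actual data dictionary and clinical coding conventions rather than from free-text lists.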
Following an initial draft of the questionnaire, VA announced a 60-day public comment period in a June 5, 2013, Federal Register posting (Federal Register, 2013a) to address the following issues with regard to the registry’s information collection:
(1) Whether the proposed collection of information is necessary for the proper performance of VHA’s functions, including whether the information will have practical utility; (2) the accuracy of VHA’s estimate of the burden of the proposed collection of information; (3) ways to enhance the quality, utility, and clarity of the information to be collected; and (4) ways to minimize the burden of the collection of information on respondents, including through the use of automated collection techniques or the use of other forms of information technology.
The comment period was later extended an additional 15 days (Federal Register, 2013b). VA also submitted the questionnaire to the Office of Management and Budget for review and comment (on September 6, 2013), and a 30-day public comment period followed that activity (Federal Register, 2013c). In total, VA received approximately 300 comments from individuals and veterans’ advocacy groups (Ciminera, 2015a).
Several veterans service organizations responded to VA’s request for comments on the questionnaire and registry during the open comment period; some of these groups addressed the committee with their concerns during the May 2015 workshop. For example, the Sergeant Thomas Joseph Sullivan Center (SSC) submitted multiple-page letters to both VA and the committee detailing its concerns about the questionnaire and registry (SSC, 2013, 2015). Several of their and other advocacy groups’ concerns focused on changes that they believed would enhance the quality, utility, and clarity of the information to be collected. SSC’s letter to VA stated that the questionnaire should
permit respondents to disclose objectively reportable symptoms, diagnoses, and functional limitations covering all organs and bodily systems potentially affected by airborne exposures, rather than being limited to primarily the respiratory and cardiovascular systems. The letter also stated that adding a comprehensive checklist to the questionnaire and an open field to capture additional participant comments and concerns would best address these concerns. The SSC also sought improvements to the questionnaire to capture information on exposures during deployment and on the diseases veterans have or have developed postdeployment. According to the veterans service organizations that attended the committee’s workshop,5 their recommendations to VA made in the open-comment periods were not implemented in the final version of the questionnaire.
Moreover, VA stated in its justification to the Office of Management and Budget (OMB; the federal agency that reviews and approves government collections of information) that “[n]o testing of the instrument employed to collect self-reported data will be done in more than 10 individuals” (VA, 2014a). In the same application, VA stated that it “performed extensive one-on-one software usability testing with eight Veterans to improve the web application user interface” and that additional technical testing of the system would be conducted in a production environment to ensure system functionality under varying system loads once OMB approval was granted. OMB approved the questionnaire in March 2014. However, despite the statements about restricting the number of individuals who participated in pilot testing the questionnaire and registry, the pilot testing effort was more extensive.
Pilot testing for the registry questionnaire took place for a little less than 2 months, from April 25 to June 18, 2014, at three VA sites: Detroit, Indianapolis, and New Jersey (Ciminera, 2015b). During this time there were 321 participants: 72 persons consented to participate but did not complete the questionnaire, 194 completed the questionnaire, and the remaining 55 pilot phase users consented and completed the questionnaire after the pilot period had ended (Ciminera, 2015c). There is no information available on how these pilot testers were selected; on what follow-up, if any, was conducted to determine why 72 consenters did not complete the questionnaire; or on the details of the experiences of and the lessons learned by those who did complete the questionnaire. VA told the committee that several changes were made to the questionnaire following the pilot phase (Ciminera, 2015a), but no specifics were provided. No changes have been made to the questionnaire since the registry opened nationally on June 19, 2014 (Ciminera, 2015b; Federal Register, 2014; Lezama, 2016; VA, 2015a).
In an effort of this scale, a thorough piloting of survey components and enrollment processes would be expected and needed. No matter how carefully planned the approach might be, initial efforts at implementation always reveal new challenges that call for refinements. Such piloting includes a qualitative assessment of the participants’ experience, often through focus groups, and an examination of the data initially collected to ensure that the questionnaire is working as desired.
Public Law 112-260 (see Appendix A) specified that individuals who participate in the registry must have deployed on or after September 11, 2001, in support of a contingency operation while serving in the Armed Forces (whether active duty, reserve, or National Guard) and during their deployment must have been based or stationed at a location where an open burn pit was used. Open burn pit was defined in the law as an area of land located in Afghanistan or Iraq that was designated by the Secretary of Defense to be used for disposing of solid waste by burning in the outdoor air and that does not contain a commercially manufactured incinerator or other equipment specifically designed and manufactured for the burning of solid waste. VA later modified this definition to allow participation by a much larger pool of veterans and service members. First, the location of deployment was expanded beyond Iraq and Afghanistan to include the entire Southwest Asia theater of operations: Kuwait, Saudi Arabia, Bahrain, Djibouti, Oman, Qatar, and United Arab Emirates; the Gulf of Aden, Gulf of Oman, Persian Gulf, Arabian Sea, and Red Sea; and the airspace above all of the listed countries and bodies of water. Second, it extended the timing of eligible deployments to begin on August 2, 1990, for the Southwest Asia theater of operations (except Afghanistan and Djibouti, in which an eligible deployment began on or after September 11, 2001). The decision to expand the eligible population to include 1990–1991 Gulf War veterans was made because these service members experienced some environmental exposures that were similar to those experienced during the post-9/11 conflicts. VA indicates that a small number of persons who were not eligible under these criteria have also been permitted to submit questionnaire responses (Ciminera, 2015a).
With the expanded eligibility pool, VA estimates that approximately 3.5 million service members and veterans are eligible to participate in the registry. Based on previous experiences with other congressionally mandated VA environmental health registries, VA estimates that 10% (350,000) of the eligible target population may participate over a 10-year period. Of the projected 350,000 participants, VA estimates that 200,000 of them will request an in-person clinical evaluation (Lezama, 2016).
To participate in the registry, an eligible service member or veteran must first have a Premium DoD Self-Service Logon Level 2 account. To obtain the account, an individual must meet one of several requirements: have a DoD Common Access Card with an accessible reader; have a Defense Finance and Accounting Service myPay account; or be a veteran, dependent of a veteran, survivor of a veteran, or registered in the Defense Enrollment Eligibility Reporting System (DEERS). The resulting account provides secure, self-service identification that is used to access several password-protected websites, including VA eBenefits. A registry help desk is available to service members and veterans who are experiencing difficulties registering for an account or accessing the online questionnaire (VA, 2014b). No information was available to the committee on whether or how these access requirements affect participation in the registry.
Eligibility to participate is confirmed using the VA/DoD Identity Repository (VADIR) database, where the information is derived from DoD sources. This process to confirm eligibility is an advantage of the AH&OBP Registry over other VA registries in that the eligible population is well defined by reference to records on periods of deployment and deployment locations compiled by the Defense Manpower Data Center (DMDC), which also maintains a broad array of demographic and military characteristics information for all of these eligible veterans and service members (further discussed in Chapter 4).
After eligibility is confirmed, an individual may access the questionnaire. The first section of the questionnaire lists the individual’s eligible deployment segment records and gives the participant the opportunity to either confirm the system data or correct or enter additional deployment history information. Individuals are able to modify the dates of deployment, add missing deployments, and select or enter the bases they served at while deployed. However, the committee heard from veterans who had participated in the registry that the process of updating deployment information and entering the names of bases and dates was difficult and frustrating. While Section 1.1 of the registry is clearly crucial, it would benefit from design modifications that make it easier and more user-friendly for registry participants.
On the database side of the registry, “userEntered” and “userVerified” fields indicate, respectively, whether the record was entered by the registrant and whether the user indicated the information is accurate (Lezama, 2016), which allows researchers to examine how well this data entry system works. An analysis of deployment segments of registry participants found that 20% of all deployment segments provided by DoD were not verified by respondents as correct. When deployment segments were stratified by date (before September 11, 2001, versus September 11, 2001, and after), pre-9/11 deployment segments accounted for 2.7% of total deployments and of these, 61% were not verified, compared with 19% of post-9/11 deployment segments that were not verified. Of all verified deployment segments, 14% were entered by participants. Again stratifying by the era of service, 70% of pre-9/11 deployment segments were entered by participants compared with 12% of deployment segments for post-9/11 (Ciminera, 2015c).
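Stratified verification rates of the kind just described are straightforward to compute from such fields. The sketch below assumes the "userEntered"/"userVerified" flags mentioned above, applied to entirely hypothetical sample records (the percentages it produces are from this toy data, not the registry).

```python
from datetime import date

SEPT_11 = date(2001, 9, 11)

# Hypothetical deployment-segment records:
# (segment start date, userEntered, userVerified)
segments = [
    (date(1991, 1, 15), True,  False),
    (date(1991, 2, 1),  False, True),
    (date(2003, 4, 10), False, True),
    (date(2003, 6, 2),  False, False),
    (date(2010, 9, 30), True,  True),
    (date(2015, 3, 1),  False, True),
]

def unverified_rate(records, era):
    """Share of an era's segments ('pre' or 'post' 9/11) not verified."""
    in_era = [r for r in records if (r[0] < SEPT_11) == (era == "pre")]
    if not in_era:
        return None  # no segments fall in this stratum
    return sum(1 for r in in_era if not r[2]) / len(in_era)

print(unverified_rate(segments, "pre"))   # 0.5 in this toy data
print(unverified_rate(segments, "post"))  # 0.25 in this toy data
```

The same stratification logic would support the comparisons reported in the text (for example, the 61% versus 19% unverified rates for pre- and post-9/11 segments), given access to the actual records.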
For persons who do not have at least one eligible deployment segment noted in DoD records, VA can issue waivers to allow persons to participate and enter deployment segment information that is not in DoD records. In a few cases, VA reported that participants were identified who had no eligible or validated deployment segments. These participants were found to be eligible based on system data, but they had indicated that the records were
incorrect without entering corrected deployment information, and, as a result of the skip patterns, the section on deployment exposures was not completed. Therefore, in these cases potentially eligible persons would have no validated deployment segments, and their data would not be used in comparisons with the eligible population (see Chapter 4).
VA states that the questionnaire takes about 40 minutes to complete (VA, 2016d). However, the veterans who had participated in the registry and attended the committee’s workshop stated that in practice the questionnaire took closer to an hour to complete. In addition to the longer completion times, several reported that the website would freeze and that they would have to start again, sometimes requiring multiple attempts before the questionnaire could be completed and submitted. Participants are able to save sections of the questionnaire as they complete them, and they are able to come back to a section to continue or submit it (Montopoli, 2016b).
An analysis of the time required to complete the questionnaire, which did not include the first section of deployment segment verification, found that nearly 37,000 participants had completed the questionnaire and that about 75% of participants completed it in 45 minutes or less (Ciminera, 2015d). The median completion time was 31 minutes. Further analysis revealed that the time required to complete the questionnaire was directly related to the number of deployment segments for an individual. For example, 51% of participants who had 1 to 3 deployment segments completed the questionnaire in 30 minutes or less, whereas 41% of participants who had 10 or more deployment segments completed the questionnaire in 30 minutes or less. Because the deployment verification section was not included in the time-to-completion analyses, the reported times are underestimates, and the degree of underestimation is likely proportional to the number of deployment segments that an individual needs to verify or manually enter.
In the June 25, 2014, Federal Register notice, VA stated that it, in coordination with DoD, would conduct extensive outreach to veterans and service members to raise awareness about the registry and to inform eligible individuals of the advantages of participation (Federal Register, 2014). The various communication and outreach efforts used by VA and DoD to promote participation in the registry were broad and not specifically targeted to a single subset of the eligible population, presumably with the intention of maximizing the reach to all eligible participants. The communications strategy included using intermediaries within and associated with VA, a social media campaign, and other electronic notifications to publicize and encourage participation in the registry.
The intermediaries used included points of contact in veterans service organizations, public affairs officers, VA environmental health clinicians and coordinators, and OEF/OIF/OND program managers (VA, 2014a). VA medical centers and facilities were sent fact sheets and postcard-sized flyers for distribution. VA environmental health coordinators, OEF/OIF/OND program managers, and clinicians who work directly with veterans were encouraged to inform the veterans about the registry. VA stated that it will continue to coordinate with veterans service organizations to encourage greater communication with a broader set of veterans who may not use VA services (Ciminera, 2015a; VA, 2014a).
The social media campaign was chosen in part because eligible veterans, being younger than veterans from other eras, tend to be more aware of and are more likely to use social media and the Internet. “VA expects that by using social media sites, websites, and postcard/fact sheets to inform veterans about the value of participating in the registry, participation in the registry will be maximized” (VA, 2014a). Because the questionnaire was to be completed online, social media and other website forums to invite participation would be appropriate.
While both of these assertions or suppositions by VA sound reasonable, the committee is not aware of any strong evidence that they are in fact true. It is reasonable to assume that veterans who are eligible for the registry are more likely than older veterans of earlier eras to use social media and the Internet, but this does not necessarily mean that structuring outreach around these types of sources will ensure maximum participation in the registry. First, although many of these veterans may be reached through such channels, it is plausible that others (and perhaps even most) will not be, either because they do not use social media or because they do not access the media channels (such as VA’s Twitter feed) that promote the registry. As part of the development and rollout of the registry, an empirical test of these assumptions and a systematic evaluation of the communication plan on which they are based could have been carried out, with a specific focus on veterans’ awareness of these channels and messages
and their perceptions of their clarity and effectiveness. Because so few of those eligible even attempted to access the registry, and fewer still successfully enrolled and completed the questionnaire (a topic addressed in Chapter 4), a reason might have been that these channels were less effective than expected in stimulating participation.
Second, research indicates that the most effective methods or modes used to approach a target population, communicate with them, and encourage their participation are not necessarily the same as—and are often quite different from—the methods used to interview or otherwise capture the information desired from them (Dillman and Messer, 2010; Dillman et al., 2014). For example, many Web-based surveys initially approach potential subjects by mail and telephone as well as emails and text messaging and very few have been successful in using only indirect or passive communications and recruitment by general social media, message boards, and the like, other than for pretesting during the development and testing of questionnaires and methods (Couper, 2000; Scherpenzeel and Toepoel, 2012; Tourangeau et al., 2013).
Beginning in July and continuing through November 2014, information about and links to the registry were publicized using several avenues in VA and DoD. Such approaches included posts on VA and VHA Facebook pages and Twitter accounts, emails to subscribers to the VHA GovDelivery listserv, inclusion in the VAntage Point Blog, and posts to VA websites (for example, www.va.gov/health/insidevha.asp and www.publichealth.va.gov). VA reported that the Facebook posts reached 458,208 veterans; there were 26,636 “click-throughs;” and there were 10,281 likes, comments, and shares (Ciminera, 2015a). The VHA GovDelivery listserv had 77,927 recipients; 16% opened the message, and 3% clicked on a link in the email. VAntage Point Blog had 18,307 unique page views, and the Inside Veterans Health webpage had more than 10,000 unique page views. In September 2015, Web banners and announcements were posted to several VA websites. An update was included in the digital-only Post-9/11 Vet Newsletter, and this newsletter was shared with veteran service organization liaisons. Announcements regarding the newsletter were shared on VHA Facebook and Twitter accounts with the potential to reach about 214,000 and 71,500 followers, respectively. A message was sent through the GovDelivery listserv to the Public Health’s Military Exposures subscriber list of 45,658 recipients, and 17.9% opened the message. VA reported that the webpage for the Post-9/11 Vet Newsletter had received 6,230 views and that the burn pits registry article had received more than 2,500 page views (Lezama, 2016). In addition to publicizing the registry on websites and through social media and dedicated listservs, announcements were included in VA newsletters, such as the Post-9/11 Vet Newsletter and WRIISC Advantage Newsletter (Ciminera, 2015a; VA, 2014a).
DoD publicized the registry to eligible active-duty service members and activated members of the reserve and National Guard. Members of the reserve forces and National Guard who were not activated were under the purview of VA. DoD also used social media (Facebook, Twitter) to advertise the registry, but metrics were not reported. Announcements were posted to DoD and military websites, printed in military-oriented newspapers, and made through communications from each service branch to its active-duty service members (Defense Health Board, 2015).
The registry went live in June 2014, and ongoing communication and outreach efforts appear to have diminished within a few months of the launch. This drop-off took place at a time when there were several instances6 when the registry was unavailable—sometimes for several days at a time—due to software problems or other issues (Montopoli, 2016a). Few messages about those disruptions appeared on the registry homepage or in any other venue. For example, the committee observed in September 2015 that the questionnaire was unavailable for more than a week before a message was posted indicating that the registry was experiencing difficulties and advising interested persons to check back at a later time. During the committee’s open session in December 2015, VA acknowledged that ongoing communication with potential registrants was an issue (Lezama, 2015). VA provided the committee with a copy of its communications strategy for the registry for 2016. The plan consisted of several avenues including continued social media messaging, GovDelivery listserv emails, a Twitter chat, posts to the VAntage Point blog, and announcements at various meetings and calls (Lezama, 2016). However, for most of these activities the time frame was not stated.
6 VA reports that more than 30 outages occurred between the time the registry went public and May 30, 2016.
As opposed to generic posts and shares on VHA social media sites and broad communications and outreach initiatives that may or may not reach the intended population of service members and veterans, a more focused approach would help ensure that—to the maximum extent possible—persons eligible to participate are receiving the communications. For example, in the same way that VA uses the registry as an outreach mechanism to mail post-participation fact sheets to participants who have submitted questionnaires (Ciminera, 2015b), one area that VA could explore would be to target communications at participants who start but do not complete or submit the questionnaire (discussed in Chapter 4). These follow-up communications could offer reminders or direct individuals to sources that may help answer any questions they may have.
A second approach might be to more intensively and systematically target specific groups of eligible persons in an attempt to produce a more representative cohort of those eligible. For example, several surveys of veterans and service members, including the Millennium Cohort Study and VA’s National Health Surveys of 1990–1991 Gulf War era veterans and OEF/OIF veterans, oversample women and reserve/National Guard personnel (Crum-Cianflone, 2011; Smith et al., 2007). The Millennium Cohort Study also specifically targeted persons with recent past deployments in its first baseline assessment (Crum-Cianflone, 2011). Making such populations a larger part of the registry would increase the confidence with which one could draw generally applicable conclusions from the data it contains. A formal analysis of the demographic and other characteristics of non-respondents could provide clues that would help VA target specific groups through more appropriate channels or contact methods.
Encouraging enrollment by eligible persons who were not exposed to burn pits or who were exposed to burn pits but are not experiencing any adverse health outcomes will also likely lead to a broader representation and therefore possibly lead to improving the estimates and generalizations that can be made using registry data. Another method to improve representation might be to offer incentives to participants who are selected and targeted in a manner to enhance representativeness.
The main webpage for accessing the registry states that its purpose is for persons “to report exposures to airborne hazards (such as smoke from burn pits, oil-well fires, or pollution during deployment), as well as other exposures and health concerns” (VA, 2016e). However, based on the registry name, the emphasis in communications and messaging regarding it, and the number of questions in it relating to burn pit exposure compared with other exposures, VA has highlighted exposure to burn pit emissions as its primary interest. It is therefore not surprising that 96% of participants in the database available for committee review reported being exposed to a burn pit on at least one of their deployments (Chapter 5). If VA wishes to gain a greater understanding of exposures and health concerns in the entire population of Southwest Asia theater of operations veterans, then outreach efforts should target all eligible persons, regardless of whether they were exposed to a burn pit, and the messages should encourage all eligible persons to participate, emphasizing participation for persons not experiencing symptoms of poor health and those who were not exposed specifically to burn pits.
If a purpose of the registry is hypothesis generation related to exposures to airborne hazards and health concerns, there would be benefit in a targeted outreach to those persons who are likely to have been among the most highly exposed. These persons may be identified through additional linkages with DoD records of deployment locations, number of deployments, length of deployments, and, potentially, military occupation specialty. However, such targeted efforts are limited by the registry’s architecture and compatibility with additional sources and databases (a topic discussed in the section titled “Linking Other Data to Registry Data”). It would thus be appropriate to pilot test any such effort in order to determine whether it is achieving the intended goals.
Veterans service organizations and military service organizations have an interest in military occupational exposures and are a valuable resource for getting information out to their membership. VA should consider how it can better work with these organizations on an ongoing basis to increase awareness of the registry and encourage participation by concerned individuals.
The quality of a registry is dependent on “the confidence that the design, conduct, and analysis . . . can be shown to protect against bias (systematic error) and errors in inference” (AHRQ, 2010, p. 307). The value of the collected information also relies on the quality of that data as well as its use and purpose for decision making. This section first considers the design of the questionnaire separately from the actual questions it contains. In particular, the discussion begins with an examination of various aspects of the questionnaire’s design, including its layout,
directions, and flow of questions. The second part of the section provides a broad description and evaluation of the types of questions included and of the areas where improvements (such as the relevance and appropriateness of the questions) could be made that would enhance the overall value of the registry.
This section and the examples it provides are intended to illustrate general categories of problems with the structure of the questionnaire and the formats of its questions. It is not an exhaustive, item-by-item assessment but rather a general overview of the types of problems that were observed. The committee was not tasked with redesigning the questionnaire and is not in a position to offer alternative wordings or approaches, which would require testing to validate. Rather, its charge was limited to suggesting changes that could improve the instrument.
The complete AH&OBP Registry questionnaire, version 15 (December 2014), is reproduced in Appendix C. Box 3-1 contains an outline of the questionnaire’s sections and topic areas. The questionnaire contains approximately 140 questions. A participant may answer more or fewer than this number depending on the applicability of skip patterns and the number of eligible deployment segments he or she has indicated. For example, the Tobacco Exposures section of the questionnaire consists of 10 questions, but persons who report in the first question that they have not smoked more than 100 cigarettes skip the next four questions, because those questions are not applicable to them. On the other hand, respondents are instructed to answer the same nine questions on location-specific deployment exposures for each eligible deployment segment, with some respondents having multiple eligible deployment segments. Respondents were required to answer every question (“don’t know” and “refused” options were provided) in order to submit the questionnaire and therefore be included in the registry.
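The branching behavior of such skip patterns can be made concrete in a few lines. The sketch below mirrors the tobacco example just described, using hypothetical question identifiers; the registry’s actual item IDs and wording are not reproduced here.

```python
def tobacco_section_questions(smoked_more_than_100):
    """Return the tobacco-section items a respondent is shown.

    Illustrative skip pattern only: the question IDs below are
    hypothetical. Respondents who report not having smoked more than
    100 cigarettes skip the four follow-up smoking items, as described
    in the text.
    """
    questions = ["T1_smoked_more_than_100_cigarettes"]
    if smoked_more_than_100:
        # Follow-up items shown only to ever-smokers.
        questions += ["T2_age_started", "T3_smoke_now",
                      "T4_cigarettes_per_day", "T5_years_smoked"]
    # Remaining tobacco items asked of everyone (T6 through T10).
    questions += ["T%d" % i for i in range(6, 11)]
    return questions

print(len(tobacco_section_questions(True)))   # 10 items for ever-smokers
print(len(tobacco_section_questions(False)))  # 6 items for never-smokers
```

In a Web instrument, the same logic is typically enforced by the survey software’s routing rules rather than hand-written code, but the design consequence is identical: the number of items a respondent answers depends on earlier responses.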
A limitation of this evaluation is that it is based on the paper version of the questionnaire and on the committee’s understanding of the online version; because there was no mechanism to allow the full committee access to the online version, the committee was unable to interactively review the questionnaire as it is coded and implemented online. VA indicates that there are only minor editorial differences between the paper version and the text of the online form, but to the extent that the paper and online versions are discordant or otherwise different, some of these comments may not reflect the actual user experience.
A Web-based questionnaire can offer advantages, and the AH&OBP Registry questionnaire exploits these to some extent. For example, it uses previous responses to fill in content for questions that follow, although not consistently. However, there are other advantages of the online format that the registry fails to make use of. These and other problems identified by the committee are detailed below.
The questionnaire is divided into several sections that are inconsistently labeled and numbered. Several of the sections, such as Deployment History and Symptoms and Medical History, cover multiple topics in varying degrees of detail. The order of the questions primarily follows the information objectives of the registry rather than a flow that might better sustain the interest and engagement of participants. While transitions between major sections are generally provided, such as between Deployment History and Symptoms and Medical History, transitions between groups of questions within a section (for example, Location Specific Deployment Exposures and General Military Occupational Exposures) are rarely used.
Directions and Clarity
Directions and clarifying instructions are rarely provided throughout the questionnaire. In the interview-based NHIS—from which several of the AH&OBP Registry questions were adopted—all possible responses are read to respondents (with some exceptions, such as “refused”). Reading the responses may help respondents who do
not recognize a word or condition in written form but recognize it when spoken, allowing them to provide a valid response to a question rather than skipping it or answering “don’t know.” This may be particularly relevant for some of the medical conditions that respondents are asked about.
Because the NHIS questions were instead used in a self-administered Web-based format, they should have been changed to account for these differences. Additionally, because question-specific instructions and clarifications are available to NHIS interviewers, these could be made available to respondents, either in the text or by a keystroke, to help them better understand the intention of specific questions.
A variety of different time reference periods are used throughout the questionnaire, sometimes within the same group of questions, which can easily lead to misunderstanding or confusion. Although sometimes the key
reference period words are bolded (for example, never versus past 12 months), the questions are not otherwise differentiated from each other and do not use transition language to provide increased clarity.
Certain formatting strategies, such as skip patterns, are typically used in survey design to reduce respondent burden and confusion and, in turn, increase the quality of responses. The AH&OBP Registry questionnaire uses complex skip patterns (for example, in the Tobacco Exposure section [2.5]) that have more potential to confuse respondents than help them navigate the series of questions, sometimes with very little apparent gain. Since the committee could not access the online version of the questionnaire, it is not possible to comment on the format as seen in real time.
The questionnaire appears to be at an appropriate reading level. The committee did not have access to the software used by survey organizations to assess questionnaires, but it used the functions available in Microsoft (MS) Word to conduct a cursory assessment. The Flesch reading ease score (an indicator of readability based on an algorithm that uses the number of words per sentence and the number of syllables per word) is 70.0 on a scale of 100.0, suggesting that the questions are easy to read. The Flesch-Kincaid grade level is computed to be 5.7, which indicates that a U.S. student in the fifth to sixth grade should be able to read the questionnaire. Of note, MS Word caps the reading level at grade 12 (high school senior), and any reading level above that is reported as grade 12. All persons who serve in the military must have completed high school or hold an equivalent credential such as the GED, so a questionnaire written at a fifth- to sixth-grade level is acceptable. However, the committee notes that neither of these measures is an indicator of comprehension.
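The two Flesch measures referenced above are simple functions of average sentence length and average syllables per word. A minimal sketch of the standard published formulas follows; it is illustrative only and may differ slightly from MS Word's exact implementation:

```python
def flesch_scores(total_words, total_sentences, total_syllables):
    """Standard Flesch reading ease and Flesch-Kincaid grade level,
    computed from raw word, sentence, and syllable counts."""
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    # Higher reading-ease scores (toward 100) indicate easier text
    reading_ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    # Grade level maps directly to U.S. school grades
    grade_level = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return reading_ease, grade_level
```

Because the grade-level formula maps directly to U.S. school grades, a computed value of 5.7 corresponds to roughly a fifth- to sixth-grade reading level.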
Each question included in the survey should serve a purpose, whether to elicit details on potential exposures, symptoms and health conditions, or to investigate factors that may influence associations and therefore need to be adjusted for in analyses. Just as important as what is asked is how it is asked, because poorly worded or confusing questions will not elicit useful information. The following discussion provides some examples of the types of questions used throughout the questionnaire by topic area (for example, exposures, symptoms and conditions, and other behavior and effect modifiers) and how they could have been improved.
The committee noted some general issues related to how the questions are phrased. Examples of these types of questions are compound questions, bundled questions, and “check-all” formats. The use of these formats is confusing and problematic, but each is used several times in the questionnaire.
A compound question addresses more than one issue in the same question, but allows for only one answer. Alternatively, a question is also compound if it presents more than one issue in the available responses. An example of a compound question is 1.4.A:
Did you do anything differently during your deployment(s), when you thought or were informed air quality was bad (for example during dust storms or heavy pollution days)? 1. Yes, 2. No, 3. Never thought of this, 4. I was not informed or aware of bad air quality, 5. I do not wish to answer, 6. Don’t know.
This question is compound because the exposure and outcome information are mixed in the question. Respondents are not given the opportunity to indicate whether they had encountered any circumstances where they thought the air quality was bad or to note when they were first informed of that fact before they are asked whether they took a different action. For persons who endorsed yes, the possible responses in the follow-up question (1.4.B) also appear to be problematic in that some (such as response 6 to this question, Spent less time in convoy) would be out of one’s control and at least one other potential response (response 11, I did not [or could not] do anything differently) would seem to be an invalid choice because it contradicts the “yes” response to 1.4A.
A second example of a compound question is 1.4.F, which attempts to determine severity of common symptoms likely related to poor air quality:
During your deployment(s), did you seek medical care for wheezing, difficulty breathing, itchy or irritated nose, eyes or throat that you thought was the result of poor air quality? 1. Yes, 2. No, 3. I do not wish to answer, 4. Don’t know.
This question is not useful because it groups several symptoms of differing degrees of importance (that is, wheezing and difficulty breathing are much more likely to have medically important implications than itchy or irritated nose or eyes) while attempting to elicit severity (for example, bothersome enough to require medical care).
“Bundled” questions seek answers to a series of questions that a respondent may endorse or not, followed by a summary question or questions referring to all of those previously endorsed, such that the response cannot be associated directly with any one of the endorsed items. This question format may have an effect similar to that of a compound question, but it is not the same.
Instead of presenting separate questions for each activity, disease, or symptom of interest, the questionnaire frequently includes long lists of questions and responses formatted as “check all that apply.” This design decision was presumably intended to limit respondent burden, but a forced-choice format (yes or no to each item) rather than these so-called “check-all” formats is widely regarded as the superior method for collecting these data, especially for self-administered Web-based surveys (Smyth et al., 2006). Check-all formats are problematic because it is unclear whether an unchecked response means “no,” was overlooked, or is otherwise missing, and this issue is magnified when the lists are presented across multiple pages, as often happens in the AH&OBP questionnaire.
Question 1.4.B, referenced in the compound question discussion, is one example of a question presented in the check-all format. Survey methodologists have shown that respondents endorse more options under a forced-choice format, but that this format does not result in fatigue or acquiescence bias (responding the same to all questions in that section) or high levels of nonresponse (Callegaro et al., 2015; Mooney and Carlson, 1996; Rasinski et al., 1994; Smyth et al., 2006, 2008). The increased endorsement under a forced-choice format is thought to reflect greater cognitive processing: respondents take more time to consider each item individually and to select the appropriate response. More responses do not necessarily indicate better quality, but studies have shown that a forced-choice format likely produces more accurate responses than the check-all-that-apply format (Ericson and Nelson, 2007; Feindt et al., 1997).
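The ambiguity that makes check-all data difficult to interpret can be seen in how the two formats store responses. The following sketch is illustrative only; the item names and functions are hypothetical and do not reflect the registry's actual data system:

```python
def interpret_check_all(checked, items):
    """Check-all-that-apply stores only the checked boxes, so an
    unchecked item is ambiguous: a true 'no', overlooked, or skipped."""
    return {item: ("yes" if item in checked else "unknown") for item in items}

def interpret_forced_choice(responses, items):
    """Forced choice records an explicit answer for every item, so a
    deliberate 'no' is distinguishable from a missing response."""
    return {item: responses.get(item, "missing") for item in items}
```

With a forced-choice record, an analyst can separate respondents who denied an exposure from those who never answered the item, a distinction the check-all format erases.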
A questionnaire should be designed and laid out in a manner that lessens the chance that respondents will be fatigued or unengaged before they get to the questions of greatest importance. The time required to complete the AH&OBP Registry questionnaire depends critically on the number of deployment segments an individual has and when those deployments occurred—which is the first item that respondents are asked about. As shown in Table 4-3, about 37% of the registry respondents made available to the committee had five or more deployment segments, and approximately 11% had 10 or more. Given the time required for verifying and correcting deployment information, especially for pre-9/11 deployment segments (the majority of which are user entered), participation in the registry can be a very time-consuming process, which may contribute to not all of the eligible segments being completed, to biased responses (for example, reporting the same high levels of exposure for all deployments), or to dropout at that or later stages, introducing additional forms of selection or nonresponse bias. The questionnaire presents eligible deployment segments in order from oldest to most recent. Reversing the order so that the most recent segments are presented first might improve data collection and reliability, especially for participants with multiple segments to verify, add, and answer the same questions about. Furthermore, individuals are likely to recall recent deployments more accurately than older ones, and if survey fatigue occurs and respondents decide to skip inputting some segments, more accurate information would have been obtained for the most recent subset.
The second section of the questionnaire (1.2: Location Specific Deployment Exposures) consists of nine questions about possible exposures encountered for each deployment segment that was verified or added. The first question asks whether persons who served during 1990–1992 were exposed to soot, ash, smoke, or fumes from the Gulf War oil-well fires. This question is not displayed if the person did not serve in Operation Desert Shield or Operation Desert Storm in 1990–1991. The second and third questions ask where the participant spent most of his or her time, using a list of base names or text entry. Exposure to a burn pit is not introduced until the
fourth question (near a burn pit during the deployment dates, defined as on the base or close enough to the base to see the smoke). The next question collects information on who was in charge of the burn pit (U.S. forces or contractor, coalition forces, or host nation). Neither the purpose of including this question nor how it can be used in analyses of registry data is clear. Next is a yes-or-no question on whether the participant’s duties included the burn pit. The seventh question in this module asks for the number of hours (0–24) the respondent had exposure to smoke or fumes on a typical day between worksite and housing. Given that the primary purpose of the registry, as stated by its title, is to collect information on health outcomes potentially related to burn pit exposures, these questions should have been asked before other airborne-hazard exposure questions, such as soot, ash, smoke, or fumes from the Gulf War oil-well fires. Additionally, if burn pit exposure is the crux of the registry, more than four questions are needed on that topic.
Some of the questions, presumably those of greatest importance, are poorly formatted and written. For example, Question 1.2.F requires a yes-or-no response as to whether a person’s duties during deployment included the burn pit, with some examples of what these duties might entail. Inquiry on this topic could be strengthened by adding more specific questions such as “Did you personally throw anything into the burn pit?” and an open-ended question for the respondent to list what she or he saw in the burn pit (plastic water bottles, uniforms, munitions, medical supplies, and the like). No information is collected on the size of the burn pit, whether incinerators were in use, or other proxies for the level and types of individual exposure, such as intensity, smoldering versus flame exposure, the proximity of work or housing locations to the burn pit, and the like. Collecting this type of information would likely strengthen the inferences that could be made using the registry data. Furthermore, the questionnaire does not distinguish between the large military burn pits that are the object of investigation and civilian trash-burning activities, possibly resulting in exposure misclassification if some respondents recall their experiences near civilian trash burning instead.
Other questions in this section ask respondents to specify the number of hours in a typical day (0–24) that they may have been subject to a particular exposure. Using a broad range of numeric responses instead of grouping possible responses (for example, 0, 1–5, 6–10, . . .) requests a level of precision that likely exceeds the capacity of many participants to recall. Moreover, people tend to recall events that were unusual over the mundane. For example, if there was a period within a deployment of particularly intense smoke, this will likely have a greater influence on recall and lead to a reporting bias toward reporting an average exposure across the entire deployment that is closer to the short-term intense exposure than to the more mild exposure that was experienced the rest of the time. Questions eliciting information about both typical and peak exposures would have improved the questionnaire.
Section 1.2 begins with participants being presented with a list of base names from which to select and indicate where the majority of their time during deployment was spent or they may use text entry if they do not see the base listed. The committee heard from its data analysis contractor that the list is not exhaustive, and although a provision is made for “other” write-ins, it is likely that participants may know or recall certain locations by different names. This section is also problematic for data cleaning and analysis because the locations of some bases were classified—specifically, smaller forward operating bases—and service members may not precisely know or be able to spell the names of the places to which they were deployed (Szema, 2015). Thus, editing and reconciling these responses requires a great deal of time and effort.
The final question of the location-specific deployment exposures section asks the number of hours on a typical day that the person was near sewage ponds. No definition or additional description of “sewage ponds” is given. It is unclear to the committee why sewage pond exposure (for which the “don’t know” item response exceeded 38%) was thought important enough to be asked about for every eligible deployment, whereas exposures to dust and airborne hazards (the other half of the registry’s title) were grouped and later assessed as exposure that occurred on any deployment. Questions on general military occupational exposures, environmental exposures, and regional air pollution also do not differentiate among individual deployment segments, and they include exposures (for example, the number of days a month a person performed pesticide duties) that appear to be outside the purview of the registry. Grouping military occupational exposures, environmental exposures, and regional air pollution is a shortcoming because the experiences of one deployment do not necessarily translate to others, and grouping them makes it more difficult to estimate exposure level or duration.
Furthermore, if a purpose of the registry is to elucidate information that may potentially be used for hypothesis generation and health care improvement, additional, well-designed questions are needed to collect that level of detail. Instead, the questionnaire attempts to elicit some of the exposure information through compound questions. While the questions may have been intentionally phrased this way in an attempt to reduce participant burden, the phrasing introduces the possibility of confusion, misinterpretation, or logical inconsistency (if the respondent believes that the response to one part contradicts another). Such questions should more appropriately be broken into multiple parts, using skip patterns where appropriate. For example, in Question 1.3.D, instead of asking “In a typical month, how many days did you perform refueling operations?” the first question could be “Did you ever perform refueling operations?” with yes-or-no response options. If the person answers yes, then the next question could be “In a typical month, how many days did you perform refueling operations?” with possible responses grouped into ranges of days instead of allowing any whole number between 0 and 31.
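The filter-plus-follow-up structure suggested for Question 1.3.D can be sketched as a simple skip pattern. The response ranges and function below are illustrative assumptions, not the questionnaire's actual coding:

```python
# Illustrative grouped response ranges for days per month
DAY_RANGES = ("0", "1-5", "6-10", "11-20", "21-31")

def refueling_module(ever_performed, days_range=None):
    """Filter question first; the frequency question is presented only
    to respondents who answer 'yes', and it accepts a grouped range
    rather than any whole number between 0 and 31."""
    if ever_performed != "yes":
        # Skip pattern: the frequency question is never shown
        return {"ever_refueled": ever_performed, "days_per_month": None}
    if days_range not in DAY_RANGES:
        raise ValueError("response must be one of the offered ranges")
    return {"ever_refueled": "yes", "days_per_month": days_range}
```

Decomposing the compound item this way keeps each question single-purpose and lets the grouped ranges limit implausible precision.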
Finally, given that the AH&OBP Registry is intended to be a military exposure-based registry, several important potential exposures are missing. For example, the questionnaire does not ask about other sources of combustion products, such as exposure to burning trash and other materials in the absence of large burn pits. Nor are there questions about high-risk jobs other than combat. Other important sources of military exposures are also missing, such as exposure to diesel exhaust, welding, paint, or other chemical fumes or to organic dusts such as those associated with wet or water-damaged indoor environments during deployment.7 Gathering information on these types of potential exposures would likely enrich the registry.
Health Condition Questions
Section 2: Symptoms and Medical History has two primary weaknesses: many of the individual questions are poorly worded or phrased, and the questionnaire does not capture all diagnoses of concern, nor, for those diagnoses that are included, does it capture information with the specificity that would be necessary to draw inferences about the presence or absence of specific diagnoses in the registry population. The section begins with five questions on functional limitations. These questions were taken from the NHIS, although they are only a subset of all the questions on functional limitations asked in that survey and are presented in a different order. In the NHIS the functional-limitations questions are intended to assess severe dysfunction, not to capture a range of functional limitations of varying severity. Because no information is collected about predeployment functional status, the utility of these questions is further limited.8 Moreover, Question 2.1.F—which asks the respondent to indicate, for any question in the series that was endorsed as “difficult,” the condition that causes the difficulty with those activities—is formatted as check-all-that-apply as opposed to forced choice. Many of the conditions listed are neither necessary nor appropriate as explanations for the functional limitations elicited in the questionnaire and could be eliminated, such as birth defect, diabetes, fibromyalgia/lupus, hearing problem, hernia, migraine headaches, multiple sclerosis, other developmental problem, polio, senility, thyroid problems, ulcer, varicose veins, and hemorrhoids. Moreover, the reasons for each functional limitation may vary, and for persons who indicate one or more difficulties with functional activities, the questionnaire is not structured to allow the respondent to supply different reasons for difficulties with different activities.
For example, the primary reason that a person might have difficulty jogging a mile may be different from the reason the person has difficulty climbing a flight of stairs.
None of the questions on functional limitations, symptoms, or health conditions ask about onset or severity following individual deployment segments. The discussion that follows illustrates examples of the many questions that are overly general, are unable to be answered accurately, or appear to have limited relevance. The health conditions questions range from non-specific symptoms (such as fatigue, hay fever, and allergies) to very specific diagnoses or technical terms (idiopathic pulmonary fibrosis, constrictive bronchiolitis, angina pectoris). The questionnaire does not collect objective measures or validated diagnoses. As a proxy for diagnosis, all the questions eliciting specific diagnoses begin with “Have you ever been told by a doctor or other health care professional that you had . . . ?”
8 Arguably, persons with functional limitations before deployment would not have been deployed, but this is an assumption without information to back it.
There are many examples of awkwardly and poorly worded questions or responses that likely result from changes made to a number of the standardized questions used in the NHIS (May and Haider, 2014). Using Section 2.2.1 (Respiratory Conditions) as an example, it is not clear why filter Question 2.2.1.F
Have you ever been told by a doctor or other health care professional that you had some lung disease or condition other than asthma, emphysema, chronic bronchitis or COPD? 1. Yes, 2. No, 3. I do not wish to answer, 4. Don’t know.
is needed other than to skip two questions on specific lung diseases—constrictive bronchiolitis and pulmonary fibrosis. Furthermore, it appears that responses to Questions 2.2.1.A–H could be used to tailor the wording in 2.2.1.I
[If B–F = yes] When you were told you had asthma, emphysema, chronic bronchitis, COPD or some other lung disease by a doctor or other health care professional, were you told before, during, or after deployment? (check all that apply.) 1. Before deployment, 2. During deployment, 3. After deployment, 4. I do not wish to answer, 5. Don’t know.
to include only those conditions endorsed by the respondent rather than including the whole list because, for persons who endorse multiple conditions, it is unclear which diagnosis the timing refers to. For persons who indicated that the onset of the respiratory condition or conditions was before deployment, a follow-up question (2.2.1.J) asks whether the lung disease got better, worse, or stayed about the same during deployment. This question is poorly worded—it assumes a respondent has a single lung disease, and it does not indicate whether any change was determined by a doctor or by the respondent’s subjective assessment. Additionally, because many of the respondents have had multiple deployment segments, specifying before, during, or after deployment does not clarify the temporality of condition onset (whether it might have occurred after one deployment but before the next, for example). The same types of problematic questions are repeated in both the Cardiovascular Conditions (2.2.2) and Other Conditions (2.2.3) sections.
Moreover, the respiratory and cardiovascular outcomes addressed in the questionnaire are limited to the following doctor-diagnosed respiratory conditions: hay fever or allergies (to pollen, dust, or animals only); asthma; emphysema; chronic bronchitis; chronic obstructive pulmonary disease (COPD); lung disease other than asthma, emphysema, chronic bronchitis, or COPD; constrictive bronchiolitis; and pulmonary fibrosis or idiopathic pulmonary fibrosis. If a respondent endorses “other lung condition,” no additional details or information are collected. Other important respiratory conditions that, if included, might have strengthened the ability of the registry data to be used for hypothesis generation include reduced lung function, eosinophilic pneumonia, other lung infections (such as tuberculosis, fungal pneumonia, and community-acquired pneumonia), lung scarring or fibrosis (a more inclusive diagnosis than idiopathic pulmonary fibrosis), bronchiolitis other than constrictive bronchiolitis (respiratory or obliterative), sarcoidosis/hypersensitivity pneumonitis, rhinosinusitis, and vocal cord dysfunction.
Other wording problems with specific questions occur throughout the questionnaire. For example, Question 2.2.1.K does not define how “currently” should be interpreted. Question 2.2.1.L appears to be a follow-up to 2.2.1.K and asks about the same nine symptoms or conditions experienced in the past 12 months (as opposed to currently). Both questions are formatted as check-all instead of forced choice, which generates better results for questions of this type. These questions could potentially be used to check for internal validity of the questionnaire because persons who endorsed any current symptoms should also endorse the same symptoms for the past 12 months. The questions specifically ask respondents whether they have any of a list of symptoms, but not all of the possible responses are symptoms (chronic sinus infection/sinusitis, for example, is a diagnosis), and others are too vague (a “decreased ability to exercise” may be due to musculoskeletal problems or deconditioning) or compound (“chest pain, chest discomfort or chest tightness” may be due to cardiac, respiratory, or musculoskeletal conditions) to attribute directly to a respiratory condition. For persons who endorsed shortness of breath or breathlessness in the past 12 months (2.2.1.L), 2.2.1.M is a follow-up question which seeks to elicit additional details on the severity of this symptom, and participants are directed to choose one response that best corresponds to their level of severity. However, the possible responses are not mutually exclusive.
Other examples of poorly phrased questions occur in Section 3: Health Concerns. Questions 3.E and 3.I ask respondents to rate their level of concern that something they breathed during deployment has already affected
their health or will affect their future health. This is a leading question; the 3-point response scale—“not at all,” “a little,” or “very concerned”—is not optimal, and the order presented will likely skew responses toward the middle (Choi and Pak, 2005). Reversing the order and adding an additional option for an even number of possible responses, such as “very concerned,” “somewhat concerned,” “a little concerned,” and “not at all concerned,” would improve the questions and potential responses. In this same section, questions mix past and present tense. The high rates of “don’t know” responses for 3.B (During your deployment(s), do you believe that you were sick because of something you breathed?) and 3.C (Do you currently have a sickness or condition you think began or got worse because of something you breathed during deployments(s)?)—29% and 31%, respectively—likely indicate confusion or uncertainty. Question 3.K asks which exposure the individual thinks had the biggest overall effect on his or her health. This is an important and difficult question to ask, and it requires a judgment call that is difficult to make. The list does not appear to be complete or exhaustive, and no “other specify” option is provided. Additionally, the fourth response in Question 3.K (“military jobs while I’m not deployed”) is inconsistent with the focus of two previous questions in this section (3.C and 3.H) which specify “during deployment.”
There remains much scientific uncertainty about the conditions and diseases that may result from deployed service members’ exposure to airborne hazards in Iraq and Afghanistan and some studies have shown that other organs and organ systems are also affected (IOM, 2011). A questionnaire focused primarily on respiratory and cardiovascular outcomes is not sufficient for hypothesis generation and surveillance. Because the committee surmises that the developers were intending to include a broader set of health outcomes (as reflected in questions in Section 2.2.3: Other Conditions), the questionnaire could be improved by adding more questions eliciting details for outcomes related to these other conditions of interest as well; single questions of whether a respondent had “problems” or “conditions” related to broad categories of health outcomes are insufficient and not useful for evaluating issues regarding potential exposures and these outcomes. For example, although no other questions about the digestive system are included, Question 2.2.3.D (inexplicably placed between a question on immune system problems and a question on doctor-diagnosed chronic multisymptom illness) asks about a doctor-diagnosed liver condition within the past 12 months. In a related omission, no questions are included on comorbidities.
The health concerns section appears to be composed primarily of questions that received little or no testing or validation. For certain questions and responses, the wording or phrasing is unclear, awkward, or nonspecific. For example, in Question 3.A, as well as in Sections 2.2 and 2.3, “predeployment” (or “before deployment” for Sections 2.2 and 2.3) is not defined. This term could be interpreted as meaning immediately before deployment or at any time prior to deployment. For other questions, such as 3.F and 3.J, the list of possible health concern responses is not all-inclusive. A response of “other problem” is provided, but people tend to avoid using it (Bradburn et al., 2004, p. 58), and this option is usually phrased as “other, specify”; here, however, there is no option to write in an area of concern that is not listed. Also included in this set of responses is the concern of “effect on children or ability to have children,” which differs from the other response options because it is phrased as a personal issue as opposed to a more general reproductive problem. Its placement is also odd; given its current wording, it should either follow the cancer section or be examined separately because it covers two different problems (that is, it is a compound response).
The questionnaire contains seven questions related to cancer (Section 2.4). No transitional language or instructions are provided to make respondents aware of the change in topic from the previous section that requests height and weight (Section 2.3). Respondents are asked about up to three types of cancer diagnoses, but they are not informed of this at the beginning of the section. Respondents who answer that they have ever been told by a doctor or other health professional that they have cancer or a malignancy of any kind are then presented with a check-one format for the type of cancer. This type of format is inconsistent with the question intent. If the registry designers were interested in only three types of cancer, the question could be reworded to identify the three types by, for example, order of onset, then obtaining the age at diagnosis for each. Questions 2.4.C, 2.4.E, and 2.4.G (age at primary, secondary, and tertiary cancer diagnosis) in this section are open-ended and accept whole numbers between 0 and 99, leading to unnecessary errors. By using date of birth and current date, the acceptable range could be reduced considerably.
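The date-of-birth-based bound suggested above is a simple validation rule. The following sketch shows how an online form might apply it; the function is hypothetical and does not reflect the registry's actual logic:

```python
from datetime import date

def valid_diagnosis_age(reported_age, date_of_birth, today=None):
    """An age at cancer diagnosis is plausible only between 0 and the
    respondent's current age, derived from date of birth."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    current_age = today.year - date_of_birth.year - (0 if had_birthday else 1)
    return 0 <= reported_age <= current_age
```

Bounding the accepted values this way rules out impossible entries (such as a diagnosis age greater than the respondent's own age) at the moment of data entry rather than during later data cleaning.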
Questions Regarding Other Factors That Might Influence Health Outcomes
Tobacco use has been shown to affect respiratory and cardiovascular conditions and may also confound or modify associations between exposure to burn pits and other airborne hazards and health outcomes. Thus, it is important to collect information on it and on other potential confounders and effect modifiers for inclusion in analyses. Other factors may also influence the relationship between exposure and health outcome, but because the degree of influence is unknown, questions about these factors, such as height and weight, are presumably included “to be safe” (and can be used to calculate body mass index). Still other questions, such as those on places of residence, appear to have no scientific relevance.
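Body mass index, which the paragraph above notes can be derived from the height and weight items, is the standard weight-over-height-squared formula; a minimal sketch:

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """Standard BMI formula: weight in kilograms divided by the square of
    height in meters. Registry responses recorded in inches and pounds
    would need unit conversion before this formula could be applied."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2
```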
The tobacco and smoking questions included are relatively standard, but they could be improved by instituting tailored range checks, which can easily be done in an online survey format. For example, the response to Question 2.5.D (How long has it been since you quit smoking cigarettes?) should be consistent with the age reported in Question 2.5.B (How old were you when you first started to smoke fairly regularly?) and with the individual’s current age. Instead of asking for the number of cigarettes a person smokes, packs of cigarettes may be a better measure and easier for a person to answer. Whereas the first five questions in this section ask specifically about cigarette smoking, Questions 2.5.F through 2.5.I ask about the use of other tobacco products. To highlight this change in focus, certain words could be bolded, and definitions and lists could be made clearer to avoid confusion. Two questions specific to smoking while deployed are separated from the more general smoking questions and are not well coordinated with them.
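The cross-question consistency check described above might be sketched as follows; the function and its field names are illustrative assumptions, not VA code:

```python
def smoking_responses_consistent(current_age: int,
                                 age_started: int,
                                 years_since_quit: int) -> bool:
    """Tailored range check for the smoking items: the reported time since
    quitting (cf. Question 2.5.D) must be consistent with the age the
    respondent started smoking (cf. Question 2.5.B) and the current age."""
    if not (0 <= age_started <= current_age):
        return False
    # A respondent cannot have quit before starting, so the elapsed time
    # since quitting can be at most (current_age - age_started).
    return 0 <= years_since_quit <= current_age - age_started
```

An online instrument could run such a check at entry time and prompt the respondent to verify or correct an inconsistent value.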
Only one question (2.7.A) is included to assess alcohol use:
In the PAST YEAR, how often did you ever drink any type of alcoholic beverage [Included are liquor such as whiskey or gin, beer, wine, wine coolers, and any other type of alcoholic beverage]? On average, how many days per week did you drink? 1. Never; 2. Less than one; 3. 1–7 days per week; 4. I do not wish to answer; 5. Don’t know.
This question is badly worded, confusing, and compound. The response options are also poor: respondents should be allowed to enter a number of days between 0 and 7, as is the case for other questions in the questionnaire that ask for the number of hours or days of exposure. Instead, the responses group the possible answers into 0, less than 1, and 1–7, essentially assigning the same severity score to consuming alcohol one day per week and to consuming it every day. Additional questions asking the average number of drinks consumed per day on drinking days would improve this section.
Other questions cannot be answered accurately by a respondent, or they request information that does not appear to bear on evaluating the effects of wartime airborne exposures. For example, Questions 2.2.3.H (How often do you snore?) and 2.2.3.I (How often do you have times when you stop breathing during your sleep?) appear to have little salience, as evidenced by the high rate of item nonresponse (39% of respondents answered “don’t know” to Question 2.2.3.I). It is not clear that many people can answer these questions accurately, or what the purpose of collecting this information would be, as these are not health conditions that would be related to burn pit or other airborne hazard exposures.
Following the sections on health concerns, the questions in the last third of the instrument (Sections 4 through 6) ask about current residence, place of longest residence, main occupation outside of the military, dust and other exposures in civilian jobs, and home environment and hobbies. These questions are presumably included to gather additional information on other exposures that may modify the effects of deployment-specific exposures on reported symptoms and conditions. However, many of these questions are problematic, and it is unclear how any of this information would be analyzed.
Section 4 begins by stating, “Poor air quality in places where you’ve lived may impact how deployment exposures affect you.” This is a broad statement, and it implies that a person’s prior residences are somehow responsible for how exposures experienced on deployment affect them and for any health conditions they may be experiencing. It is unclear how such information could be used for analysis and what assumptions, if any, about exposure risk can be made from the broad domains of city, state, zip code, duration of residence, and “address where you lived the longest before age 13.”
The intent of Question 5.2 on main occupation outside of the military appears to be to identify those individuals in specific occupations relevant to the focus of Sections 5.3 (Dust Exposures), 5.4 (Gas, Smoke, Vapors, or
Fumes), and 5.5 (Asbestos Exposure), which follow, rather than to create a classification of civilian occupations per se. About 12% of respondents endorsed working “in any dusty job outside the military,” and most of these respondents appear to have tried to fit or force their occupations into the restricted categories offered, as evidenced by fewer than 5% selecting “other.” This makes analysis problematic and indicates that the answers are likely subject to considerable response error.
Ultimately, the questions attempting to elicit exposures from main nonmilitary occupations are vague and of little use. For example, in Question 5.3.A, “dusty job” is a vague, undefined term that is likely to be interpreted differently by different participants. The occupational exposure questions also do not use consistent language. For instance, Questions 5.3.B.1 and 5.4.B.1 mix the terms “biggest exposure” and “longest exposure” within the question, and these terms are not necessarily defined the same way in people’s minds. These questions would be improved if the terms were consistent and well defined. As with the health conditions questions, Questions 5.3.B.2 and 5.4.B.2 should be formatted as forced-choice rather than check-all-that-apply responses. Finally, Questions 5.3.B.3 and 5.4.B.3 suffer from limitations similar to those of the deployment exposure questions in Section 1.2: the range of years accepted (0–99) is too broad and subject to error, and it is not clear that this level of precision is actually required. If it is, a range check edit is needed to flag and then verify or correct apparently invalid or unlikely values; alternatively, a categorical response scale with combined ranges of years could be used.
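One way to implement the suggested range check edit, with the categorical scale as a fallback, might look like the following sketch; the working-age floor and the band cut points are illustrative assumptions, not values taken from the questionnaire:

```python
def band_exposure_years(years: int, current_age: int, working_age: int = 16) -> str:
    """Either flag an implausible years-of-exposure value for verification
    or map it to a categorical band. A reported duration cannot exceed the
    respondent's working lifetime (current age minus an assumed minimum
    working age)."""
    max_plausible = max(0, current_age - working_age)
    if not (0 <= years <= max_plausible):
        return "flag for verification"
    if years < 1:
        return "<1 year"
    if years < 5:
        return "1-4 years"
    if years < 10:
        return "5-9 years"
    return "10+ years"
```

Under this scheme a 40-year-old reporting 50 years of dust exposure would be flagged for verification rather than silently accepted, as the current 0–99 entry field allows.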
The questions on asbestos exposure (Section 5.5) inquire about combined civilian and military occupations and exposures, something that is not done elsewhere in the questionnaire. There is no explanation or preamble to the section or its first question. The fact that 34% of respondents answered “don’t know” to this question (5.5.A: Have you ever worked in a job with asbestos exposure, including military service?) suggests either that it has serious comprehension problems or that it reflects a genuine lack of knowledge about potential past exposures. It is also not clear why asbestos exposure was not asked about separately for military-specific occupations and for civilian occupations. Additionally, Question 5.5.B is awkwardly worded (including asking respondents to circle their answers, which is not compatible with an online instrument9) and allows for multiple responses instead of using a clearer forced-choice design.
The section on home environment and hobbies (Section 6) begins with a single sentence of introductory text: “Exposures in your home environment or hobbies may impact how deployment exposures affect you.” Three questions ask whether respondents live with or visit traditional farm animals (not otherwise defined), have had mold in their home, or have lived in a home with elevated levels of radon (6.A, 6.B, 6.C). Respondents are then asked to select from a detailed but incomplete list any hobbies that they participate in and to indicate how many hours per week, on average, they spend on those hobbies (6.D, 6.E). Aside from these questions being nonspecific, the section is particularly problematic in that it mixes reference periods, using “ever” in 6.B and 6.C but the present tense elsewhere, and in that it uses wording such as “on a regular basis” (6.A) without making its meaning clear. Moreover, Sections 5 and 6 ask questions that are not central or relevant to the focus of the registry and add substantially to the time needed to complete the questionnaire without adding value.
The questionnaire contains a substantial number of questions on nonmilitary, nondeployment variables that are of little relevance to the stated purpose of the registry. While critical disease influences such as smoking need to be considered, there is little basis for trying to address potential or subtle influences associated with other jobs, environments, and lifestyle factors. VA states that the registry’s primary purpose is to record self-reported exposures and health outcomes and to explore possible associations or determine potential health effects from exposure to airborne environmental hazards in service members and veterans. This purpose is not served by the poorly worded, nonspecific questions aimed at nondeployment-related factors. Such information might have utility in the context of an epidemiologic analysis, but, as discussed elsewhere, the registry data are inappropriate for that purpose.
9 Question 1.2.E also asks respondents to circle their responses.
VA designed the AH&OBP Registry with the intent of integrating multiple VA and DoD data sources to supplement questionnaire information and to provide a more complete picture of the long-term health effects associated with exposure to burn pits. The registry was designed to link to and incorporate data from several other sources, including DoD’s Defense Manpower Data Center (DMDC), the Veterans Benefits Administration, and VA health records, as well as Medicare and mortality data (Ciminera, 2015).
Currently, the VA information technology system is able to link questionnaire responses to VA clinical data and other VA administrative data (Montopoli, 2016a). VA has added a template note to the electronic health record that allows a provider to indicate that a clinical evaluation related to registry participation has occurred. However, the registry design and architecture do not allow information, once submitted, to be updated, either in the form of changes to individual items in the questionnaire itself—for example, additional information or new diagnoses gleaned through the clinical evaluation—or by linking with sources other than basic DMDC deployment information. The capability to add supplemental information, such as from follow-up questionnaires, has reportedly been added to the system, which would allow any future follow-up questions to supplement or update information in a participant’s record, but information on plans for conducting follow-up on the population is unavailable. VA medical records and other administrative data, such as vital status, can be linked to the registry data to provide additional information that might be used to identify health status or mortality or to support registry operations (Montopoli, 2016a). The ability to link and update participant responses and health records for VA users would allow VA to validate responses, initiate longitudinal follow-up of VA users, and conduct subanalyses to determine whether respondents with clinical evaluation data report different or more severe health outcomes, or differ in other ways, from other registry participants.
VA data are easier to link and incorporate with questionnaire information than data from other sources, specifically DoD data. Although VA has stated that it plans to gather longitudinal data on registry participants and has added that capability to the system, the committee found no details on how this might be operationalized. For veterans, some additional information, such as enrollment in VA, use of any VA benefits, use of VA health and mental health services, and service-connected disabilities, could also be examined by extracting those data and appending them to the registry information. The committee was told that DoD data are much harder to link and incorporate and are currently limited to DMDC data, which provide deployment, demographic, and military characteristics information; VA has no plans to append DoD medical records to the registry (Montopoli, 2016a).
To the extent possible, linking individual participants with information on them that is available from VA or DoD databases could both increase the accuracy of the registry data and reduce respondents’ burden. Linking registry data with VA and DoD medical records, including hospitalization data, would allow for the evaluation of both subjective and objective health outcomes and validate self-reported conditions. Additionally, if DoD personnel records could be linked to the registry, they might provide additional information on pre-deployment conditions and exposures. For example, DoD has begun piloting the use of personal monitoring devices to link monitored exposures with individual health outcomes. These devices measure and record vitals, location, and external exposures (such as radiation and organic vapors) (Hartman et al., 2016). In the future, such devices could collect information on exposures that could potentially be linked to reported health outcomes in medical records or to health outcomes reported by registry participants. It is possible that other data sources may eventually be linked to the registry data, but because the registry was not designed specifically to link to ancillary data sources, the structure or validation of those data may shape or limit their use compared with the primary VA and DoD data linkage sources for which the registry was specifically designed.
Before linking or appending information or databases to the registry, several compatibility issues need to be considered. Primary among these is what would be gained from the additional information. For example, linking to additional VA sources will provide additional information only for VA users. Only 46% of deployed 1990–1991 Gulf War veterans and 64% of OEF, OIF, and OND veterans use VA for their health needs (NASEM, 2015a; VA, 2015b), and even among VA health care users, not all use VA for all of their health needs. Should the additional information prove advantageous to include, the second issue is whether
the registry database architecture and data structure are compatible with the form, structure, and availability of the sources to be linked. Third, the data quality of the additional sources should be validated prior to a proposed linkage. Such quality criteria might include the type of data (self-report, clinical exam, laboratory tests), temporal information (longitudinal, cross-sectional), and whether measures of validity and reliability were embedded in a data source during data gathering. Fourth, the statistical techniques that will be used for linking data records (for example, deterministic matching, probabilistic matching, or another method) need to be considered, since the choice will depend, in part, on the type of data being linked. Finally, the timing of and consequences related to updating information need to be considered. Timing includes how often the linked source information will be updated, for example, a single instance versus periodic or as-needed occurrences. For linked source information that differs from the registry, which source will be considered primary? Who, or what algorithm, will be used to validate the differing information? For sources that will be updated more than once, one consideration is what will happen to the existing information when there is an update: whether it will be overwritten or a new field will be created. Answering these questions and implementing linkages across multiple data sources are critical for ensuring the maximum utility of the registry.
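Of the linkage techniques mentioned above, deterministic matching is the simplest: records are joined only when key identifiers agree exactly. A minimal sketch, with hypothetical field names (probabilistic matching would instead score partial agreement across many fields and accept pairs above a threshold):

```python
def deterministic_link(registry_records, external_records,
                       keys=("ssn_hash", "date_of_birth")):
    """Join two record sets on exact agreement of the key fields, returning
    (linked, unlinked) registry records. Field names are illustrative."""
    # Index the external source by its key tuple for O(1) lookups.
    index = {tuple(rec[k] for k in keys): rec for rec in external_records}
    linked, unlinked = [], []
    for rec in registry_records:
        match = index.get(tuple(rec[k] for k in keys))
        if match:
            linked.append({**rec, **match})  # append external fields
        else:
            unlinked.append(rec)
    return linked, unlinked
```

Because deterministic matching drops any record whose identifiers disagree even slightly, data quality validation of the sources, as discussed above, matters before choosing this method over probabilistic alternatives.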
Based on the information presented in this chapter, the committee has reached the following findings, conclusions, and recommendations related to the actions taken by VA to design and implement a registry for the purpose of collecting health outcome information related to potential exposure to burn pits and other airborne hazards. Subsequent chapters revisit some of these issues in greater detail and offer additional observations based on analyses of registry data.
The 12-month timeframe that Congress directed for VA to establish a comprehensive and targeted exposure and health outcomes questionnaire and registry and to make it available for veterans’ use was quite short, considering the complexity of the intent and the tasks involved, including development and testing. Among other issues, the short timeframe did not allow the designers to consult the expertise needed to implement a well-designed instrument that could handle the complexities required of it in terms of the information to be collected and how that information could best be managed and used. It is open to question whether the time period allotted by Congress was realistic.
The AH&OBP Registry questionnaire is problematic in many respects. This chapter identifies flaws in its layout, its directions, and the flow of its questions, and it cites several examples of poorly worded questions and of questions that are not relevant to the stated goal of the registry or reflective of the limitations of the instrument. Questions designed for other types of surveys (notably, the NHIS) were seemingly used without regard to whether taking them out of that context affected their validity. And the process of verifying DoD-supplied information on locations where a respondent had served (deployment segments)—something that was no doubt intended to be a time-saving element of the questionnaire—was instead burdensome to those with large numbers of deployments and may have led to response fatigue.
These shortcomings likely stem in part from the developers of the registry having not consulted with external experts in questionnaire design while developing the instrument and having not used specialized Web-based survey software. Thus, the committee recommends that VA involve external survey experts experienced in Web-based instruments in any restructuring of the registry questionnaire.
The question of how the registry questionnaire should be changed at this point in time depends critically on what VA intends for the registry to accomplish going forward. VA states that the data collected by the registry will be used for several purposes: to help monitor health conditions affecting eligible veterans and service members, to improve VA programs aimed at helping veterans and service members with deployment exposure concerns, to generate potential hypotheses about exposure response relationships, to improve programs in the VHA, and to provide outreach to veterans who may have experienced adverse health outcomes as a result of their exposures. However, except for the outreach activities and perhaps hypothesis generation, it does not appear that the registry—in general, or the data collected by it—is useful for addressing the stated purposes. For example, if it were to be used to help monitor health conditions affecting eligible veterans and service members, there should be a mechanism to ensure the periodic follow-up of participants. Or if burn pit exposure is to be the focus, more
than four appropriately worded questions would be required to elicit information on the duration and intensity of exposure. Instead, the registry’s most productive use appears to be as a forum that allows eligible veterans and service members to register their concerns about potential exposures and current health effects. If this more modest goal were adopted, the process of enrollment could be markedly simplified and data collection streamlined with little loss of information. Therefore, the committee recommends that VA eliminate the questionnaire sections addressing locations of previous residences (Section 4), nonmilitary work history (Section 5), and home environment, community, or hobbies (Section 6), all of which collect data that might be useful only in epidemiologic studies of the population. Removing these sections would result in a less burdensome instrument with little if any loss of usable information for any of the stated purposes of the registry. Having a clear, consistent message about the purpose of the registry will allow it to be tailored to focus on the issues of most importance and will minimize the burden of completing the questionnaire.
VA stated that, as directed in the public law, it will use the recommendations from an independent scientific organization review (this study) to improve the registry program, including improving the questionnaire as necessary (VA, 2014a). However, the committee notes that addressing the problems identified in this chapter will not be sufficient to overcome the fundamental design flaws of the registry and that the registry will continue to have little value as a scientific tool for research or monitoring veterans’ health or improving the delivery of VA health care services.
As previously noted, the registry’s design and architecture do not allow for information, once submitted, to be updated. However, such a Web-based system does have the capacity to add supplemental information from participants—for example, from longitudinal or follow-up questionnaires, if conducted—and to link to VA and DoD data sources such as medical records, other administrative data (including benefits and vital status), and selected DoD information. Linkages with these other sources have the potential to reduce future participant burden, increase data quality (by avoiding recall and other biases), and increase the utility of the registry database. The AH&OBP Registry has the potential to be an advance in design over other VA registries if better use is made of this capacity. The committee recommends that once VA clarifies the intent and purpose of the registry, it develop a specific plan for more seamlessly integrating relevant VA and DoD data sources with the registry’s data, with the goals of reducing future participant burden, increasing data quality by restructuring questions to minimize recall and other biases, and improving the usefulness of the registry database as an information source for health care professionals and researchers.
Although a Web-based questionnaire may confer several benefits over more traditional methods, not all eligible persons have access to a computer or the Internet. Offering additional formats is not a trivial matter, but it would potentially improve access to the registry. Steps should therefore be taken to ensure that this subset of eligible persons has the opportunity to participate. The committee recommends that alternative means of completing the questionnaire, such as a mail-in form or a computer-assisted telephone interview, be offered in order to ensure that the subset of eligible persons who do not use or are not facile with the Internet have the opportunity to participate in the registry. Eligibility could be checked when a potential participant contacts VA. Work would be needed to develop a system that minimizes the burdens on both the respondent and VA, but the challenges are surmountable.
Additional outreach efforts are also desirable. The relatively small number of respondents to date suggests that generic posts and shares on VA social media sites and broad communications and outreach initiatives are not reaching the full intended population of service members and veterans. One means of expanding the participant pool would be to use data on current respondents to determine the characteristics of the people who are not signing up and then tailor messages and media accordingly. Clarifying the purpose of the registry would help inform the question of how best to ensure that—to the greatest extent possible—persons eligible to participate are aware of it.
A strong point of the AH&OBP Registry—as a record of the respondent’s potential exposures and health concerns—should be taken further advantage of. This information can already be accessed by military and veteran health care system providers tied into VA’s electronic medical records and can be downloaded and printed by respondents. The committee recommends that VA enhance the utility of the registry by developing a concise version of questionnaire responses focused on information that would be most useful in a routine clinical
encounter and make it available for download. A one-page synopsis, for example, could facilitate conversations between patients and providers about medical concerns, leading to more productive visits and better focus on health care needs.
Finally, VA should investigate why so few of the respondents who say they would be interested in the in-person medical exam offered in conjunction with the registry have actually arranged for one. The committee recommends that VA continue its efforts to make it easier for participants to schedule and get the optional health examination offered as part of the registry—such as through targeted follow-up of respondents who indicate interest—and that it investigate the reasons that such a small percentage of respondents who indicate interest in an exam (~2.5%, to date) request one. Adding a means of scheduling an exam as part of the questionnaire—a capability that the committee understands is being implemented—will be a useful first step.
AHRQ (Agency for Healthcare Research and Quality). 2010. Registries for Evaluating Patient Outcomes: A User’s Guide, 2nd ed. Edited by R. E. Gliklich and N. A. Dreyer. Rockville, MD. P. 307.
Bradburn, N. M., S. Sudman, and B. Wansink. 2004. Asking questions: The definitive guide to questionnaire design—for market research, political polls, and social and health questionnaires, 2nd ed. San Francisco, CA: John Wiley & Sons.
Callegaro, M., M. H. Murakami, Z. Tepman, and V. Henderson. 2015. Yes–no answers versus check-all in self-administered modes: A systematic review and analyses. International Journal of Market Research 57(2):203–223.
CDC (Centers for Disease Control and Prevention). 2015. 2014 National Health Interview Survey (NHIS) public use data release. Hyattsville, MD: Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics. http://ftp.cdc.gov/pub/Health_Statistics/NCHs/Dataset_Documentation/NHIS/2014/srvydesc.pdf (accessed December 3, 2016).
Chen, L. M., W. R. Farwell, and A. K. Jha. 2009. Primary care visit duration and quality: Does good care take longer? Archives of Internal Medicine 169(20):1866–1872.
Choi, B. C. K., and A. W. P. Pak. 2005. A catalogue of biases in questionnaires. Preventing Chronic Disease 2(1). http://www.cdc.gov/pcd/issues/2005/jan/04_0050.htm (accessed July 12, 2016).
Ciminera, P. 2015a. Charge to the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. PowerPoint Presentation Presented at the Meeting, Washington, DC. March 13.
Ciminera, P. 2015b. VA Airborne Hazards and Open Burn Pit Registry. VA mobile discussion series, August 28. PowerPoint presentation. https://mobile.va.gov/sites/default/files/VMH-093AugustDiscussionSeries.pdf (accessed September 29, 2015).
Ciminera, P. 2015c. Deployment segment analysis from the Airborne Hazards and Open Burn Pit Registry memo. Prepared for the Department of Veterans Affairs Office of Public Health, Post-Deployment Health Group by Westat. April 2015.
Ciminera, P. 2015d. Time to complete self-assessment questionnaire. Prepared for the Department of Veterans Affairs Office of Public Health, Post-Deployment Health Group by Westat. May.
Couper, M. P. 2000. Review: Web surveys: A review of issues and approaches. Public Opinion Quarterly 64(4):464–494.
Crum-Cianflone, N. F. 2013. The Millennium Cohort Study: Answering long-term health concerns of U.S. military service members by integrating longitudinal survey data with military health system records. In J. Amara and A. Hendricks (eds.), Military health care: From pre-deployment to post-separation. New York: Routledge.
Defense Health Board. 2015. Deployment Pulmonary Health. Office of the Assistant Secretary of Defense. Falls Church, Virginia. http://www.health.mil/Reference-Center/Reports/2015/02/11/Deployment-Pulmonary-Health (accessed February 9, 2017).
Dillman, D. A., and B. L. Messer. 2010. Mixed-mode surveys. In J. D. Wright and P. V. Marsden (eds.), Handbook of survey research. San Diego, CA: Elsevier. Pp. 551–574.
Dillman, D. A., J. D. Smyth, and L. M. Christian. 2014. Internet, phone, mail, and mixed-mode surveys: The tailored design method, 3rd ed. Hoboken, NJ: John Wiley & Sons.
Ericson, L., and C. Nelson. 2007. A comparison of forced-choice and mark-all-that-apply formats for gathering information on health insurance in the 2006 American Community Survey Content Test. In Federal Committee on Statistical Methodology Research Conference, Arlington, VA. https://fcsm.sites.usa.gov/files/2014/05/2007FCSM_EricsonVI-A.pdf (accessed September 29, 2016).
Fan, W., and Z. Yan. 2010. Factors affecting response rates of the Web survey: A systematic review. Computers in Human Behavior 26(2):132–139.
Federal Register. 2013a. Department of Veterans Affairs [OMB Control No. 2900–NEW] Proposed information collection (Open Burn Pit Registry Airborne Hazard Self-Assessment Questionnaire) activity: Extension of comment period. Notice: 44625 Vol. 78, No. 108. Published June 5.
Federal Register. 2013b. Department of Veterans Affairs [OMB Control No. 2900–NEW] Proposed information collection (Open Burn Pit Registry Airborne Hazard Self-Assessment Questionnaire) activity: Comment Request. Notice: 33895 Vol. 78, No. 142. Published July 24.
Federal Register. 2013c. Department of Veterans Affairs agency information collection (Open Burn Pit Registry Airborne Hazard Self-Assessment Questionnaire) activities under OMB review. Notice: 54956 Vol. 78, No. 173. Published September 6.
Federal Register. 2014. Department of Veterans Affairs establishment of the Airborne Hazards and Open Burn Pit Registry. Notice: 36142 Vol. 79, No. 122. Published June 25.
Feindt, P., I. Schreiner, and J. Bushery. 1997. Reinterview: A tool for survey quality improvement. Proceedings of the Joint Statistical Meeting, Survey Research Methods Section. Washington, DC: American Statistical Association. Pp. 105–110.
Hartman, R., S. Jones, and K. Phillips. 2016. DoD’s development of the total exposure health (TEH) initiative. Presentation to the National Academy of Sciences, Engineering, and Medicine Board on the Health of Select Populations. June 14.
Institute for Social Research. 2016. Guidelines for best practice in cross-cultural surveys, fourth edition. University of Michigan, Ann Arbor. http://ccsg.isr.umich.edu/images/PDFs/CCSG_Full_Guidelines_2016_Version.pdf (accessed February 9, 2017).
IOM (Institute of Medicine). 2011. Long-term health consequences of exposure to burn pits in Iraq and Afghanistan. Washington, DC: The National Academies Press.
IOM. 2013. Best care at lower cost: The path to continuously learning health care in America. Washington, DC: The National Academies Press.
Kang, H. K., C. M. Mahan, K. Y. Lee, C. A. Magee, and F. M. Murphy. 2000. Illnesses among United States veterans of the Gulf War: A population-based survey of 30,000 veterans. Journal of Occupational and Environmental Medicine 42(5):491–501.
Lezama, N. G. 2015. Presentation to the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. December 15.
Lezama, N. G. 2016. Responses from the Department of Veterans Affairs to questions from the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. Received February 18.
May, L., and J. Haider. 2014. Preliminary assessment of NHIS data for providing a burn pit registry comparison group. Prepared for the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. November 20.
Montopoli, M. 2016a. VA responses to requests and questions from the committee. Received June 20.
Montopoli, M. 2016b. VA responses to committee’s questions following August 24–25, 2015, meeting. Received September 15.
Mooney, G. M., and B. Lepidus Carlson. 1996. Reducing mode effects in “mark all that apply” questions. Proceedings of the Survey Research Methods Section, American Statistical Association. Pp. 614–619.
NASEM (National Academies of Sciences, Engineering, and Medicine). 2015a. Considerations for designing an epidemiologic study for multiple sclerosis and other neurologic disorders in pre and post 9/11 Gulf War veterans. Washington, DC: The National Academies Press.
NASEM. 2015b. Improving diagnosis in health care. Washington, DC: The National Academies Press.
NIAAA (National Institute on Alcohol Abuse and Alcoholism). 2016. CAGE questionnaire. http://pubs.niaaa.nih.gov/publications/inscage.htm (accessed September 25, 2016).
Pew Research Center. 2011. Internet surveys. http://www.people-press.org/methodology/collecting-survey-data/internet-surveys (accessed September 29, 2016).
RAND. 2016. RAND Medical Outcomes Study (MOS): 36-Item Short Form Health Survey (SF-36). http://www.rand.org/health/surveys_tools/mos/36-item-short-form.html (accessed September 25, 2016).
Rasinski, K. A., D. Mingay, and N. M. Bradburn. 1994. Do respondents really “mark all that apply” on self-administered questions? Public Opinion Quarterly 58(3):400–408.
Scherpenzeel, A., and V. Toepoel. 2012. Recruiting a probability sample for an online panel. Effects of contact mode, incentives, and information. Public Opinion Quarterly 76(3):470–490.
Sharkey, J. M., D. K. Harkins, T. L. Schickedanz, and C. P. Baird. 2014. Department of Defense participation in the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry: Process, guidance to providers, and communication. US Army Medical Department Journal July–September:44–50.
Smith, T. C., I. G. Jacobson, B. Smith, T. I. Hooper, M. A. Ryan, and the Millennium Cohort Study Team. 2007. The occupational role of women in military service: Validation of occupation and prevalence of exposures in the Millennium Cohort Study. International Journal of Environmental Health Research 17(4):271–284.
Smyth, J. D., D. A. Dillman, L. M. Christian, and M. J. Stern. 2006. Comparing check-all and forced-choice question formats in Web surveys. Public Opinion Quarterly 70(1):66–77.
Smyth, J. D., L. M. Christian, and D. A. Dillman. 2008. Does “yes or no” on the telephone mean the same as check-all-that-apply on the Web? Public Opinion Quarterly 72(1):103–111.
SSC (Sergeant Thomas Joseph Sullivan Center). 2013. Letter to Ms. Cynthia Harvey-Pryor at the Veterans Health Administration regarding preliminary comments to the OMB application for the AH&OBP self-assessment questionnaire. Provided to the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. Washington, DC. May 1.
SSC. 2015. Statement of Peter M. Sullivan to the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. Washington, DC. May 1.
Szema, A. 2015. Analysis of VA burn pits registry: Testimony to workshop. Presentation to the Committee on the Assessment of the Department of Veterans Affairs Airborne Hazards and Open Burn Pit Registry. May 1.
Tourangeau, R., F. G. Conrad, and M. P. Couper. 2013. The science of web surveys. New York: Oxford University Press.
VA (Department of Veterans Affairs). 2012. Airborne Hazards Clinical Evaluation Working Group charter. Email from Nicolas Lezama, deputy chief consultant, Post-Deployment Health Services Patient Care Services, Veterans Health Administration on February 18.
VA. 2013. Open Burn Pit Registry/Airborne Hazards Exposure Assessment Working Group charter. Email from Nicolas Lezama, deputy chief consultant, Post-Deployment Health Services Patient Care Services, Veterans Health Administration on February 18.
VA. 2014a. Justification template: Airborne hazards and Open Burn Pit Registry self-assessment questionnaire. www.reginfo.gov/public/do/DownloadDocument?objectID=44258503 (accessed September 29, 2016).
VA. 2014b. eBenefits fact sheet. https://www.ebenefits.va.gov/ecms-proxy/document/ebenefits-liferay/dynamic-content/ebenefits/assets/downloads/eBenefits_factsheet.pdf (accessed September 25, 2016).
VA. 2015a. Report on data from the Airborne Hazards and Open Burn Pit (AH&OBP) Registry. http://www.publichealth.va.gov/docs/exposures/va-ahobp-registry-data-report-april2015.pdf (accessed September 10, 2016).
VA. 2015b. Analysis of VA health care utilization among Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn (OND) veterans. Washington, DC: Veterans Health Administration.
VA. 2016a. Airborne Hazards and Open Burn Pit Registry: About the registry. https://veteran.mobilehealth.va.gov/AHBurnPitRegistry/index.html#page/about (accessed January 28, 2016).
VA. 2016b. PTSD checklist for DSM-5 (PCL-5). http://www.ptsd.va.gov/professional/assessment/adult-sr/ptsd-checklist.asp (accessed September 25, 2016).
VA. 2016c. U.S. Department of Veterans Affairs clinicians guide to airborne hazards. http://www.publichealth.va.gov/docs/exposures/clinician-guide-airborne-hazards.pdf (accessed September 29, 2016).
VA. 2016d. Airborne Hazards and Open Burn Pit Registry fact sheet. http://www.publichealth.va.gov/docs/exposures/burn-pitregistry-fact-sheet.pdf# (accessed April 4, 2016).
VA. 2016e. Airborne Hazards and Open Burn Pit Registry. https://veteran.mobilehealth.va.gov/AHBurnPitRegistry/#page/home (accessed February 23, 2016).