This chapter provides an overview of the committee’s data-gathering efforts. The committee approached its task by organizing itself into work groups. Each group consisted of four or five members who had expertise in a particular topic, such as family, treatment, outcomes, economics, community, access and barriers to care, and methods. The methods group supervised the limited descriptive data analyses that the committee conducted to ensure uniformity in definitions and approach.
First, the committee directed the overall search of the literature. In an effort to stay current, searches were updated three times over the course of the Phase 2 study period. The literature was provided to the committee members, who read the articles and summarized them for the report. The literature focused primarily on Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF) populations—and Operation New Dawn (OND) as available—but where the literature was sparse or nonexistent, the committee members included studies of other military or civilian populations. Second, the committee inventoried current federal research efforts (Appendix D) in the areas of concern to the committee (as outlined in the legislation) to identify research gaps and to recommend additional research to address them. Chapters 4–9 provide research recommendations, under the heading Future Research Directions, that are based on gaps in the literature or in funded research and that federal agencies should consider as ways of improving knowledge about readjustment problems. Third, in an effort to understand the characteristics of the all-volunteer military force deployed in support of OEF and OIF, the committee requested data from the Department of Defense (DOD) Defense Manpower Data Center (DMDC). Those data were examined, and frequencies or counts of the variables of interest were tabulated so that the committee could have an appreciation of the characteristics of the people deployed (Chapter 3). Fourth, the committee explored other methods of supplementing information on subjects on which the literature was sparse, such as overseeing rapid ethnographic assessments of communities that might have been affected by deployments and repeat deployments (Chapter 7 and Appendix E).
The committee began its work by overseeing extensive searches of the peer-reviewed medical and scientific literature, including published articles, other peer-reviewed reports, government reports, congressional testimony, and dissertations. The searches retrieved over 7,000 potentially useful studies, and their titles and abstracts were reviewed. The committee focused its attention on studies of OEF and OIF populations and their families. Overall, the committee decided not to use comparisons with civilian populations because military members are likely to differ from civilians in both observable and unobservable dimensions (the military is a highly selected population; applicants must meet a range of eligibility criteria and have a desire to take on the duties of military service). To the extent that those differences are themselves associated with an outcome, direct comparisons between military and civilian populations will be misleading. That said, some civilian studies are included where they are useful for interpreting key findings in the military data. For example, military divorce rates are discussed in the context of civilian divorce rates in the family chapter (see Chapter 6).
For the outcomes chapter, the review excluded case reports, case series that involved few participants, and studies of acute outcomes that resolved within days to a few months. After its assessment of the titles and abstracts, the committee identified about 3,000 studies for further review. The committee conducted three major searches over the Phase 2 study period (in August 2010, May 2011, and February 2012) with MEDLINE and PsycINFO. It also searched the National Technical Information Service database for various government reports, such as those of the Government Accountability Office, the Congressional Budget Office, and the Office of Management and Budget; for congressional testimony; and for annual reports to Congress from DOD and the Department of Veterans Affairs (VA) on issues related to OEF and OIF. In addition to those overall searches, the staff and committee members conducted numerous smaller searches on topics related to the task. All searches were entered into the committee’s EndNote database, and titles, abstracts, and papers were made available for the committee members’ review.
All searches were run against MEDLINE and PsycINFO by using the OvidSP platform. Results were limited to English-language articles published from 2000 to the present. All result sets were de-duplicated to eliminate references retrieved from both databases, and all results were exported to an EndNote library.
The strategy for the searches was to establish a “base set” that identified OEF and OIF populations of interest: troops on active duty in the military, National Guard troops, reservists, and veterans; and families, spouses, relatives, and caregivers associated with veterans or with those on active duty. That base set was then combined with four broad categories of terms: issues of concern to or affecting returning veterans, such as reintegration, debt, and unemployment; programs and resources, such as education, employment, and loan-repayment programs; health outcomes, such as traumatic brain injury, depression, and substance use; and additional health care, social, and psychosocial issues.
The committee directed Institute of Medicine (IOM) staff to assemble a table that would include a list of current federally funded research in the subjects of concern to its task, specifically, studies related to OEF, OIF, and OND service members, veterans, and families (see Appendix D). Searches used the following databases and Web sites: National Institutes of Health (NIH) RePORT (NIH Research Portfolio Online Reporting Tools), a portfolio of NIH research activities; ClinicalTrials.gov, an NIH registry and results database of clinical studies; Congressionally Directed Medical Research Programs, which emphasizes subcategories of Defense Women’s Health Research and Deployment Related Medical Research; and VA Health
Those sources were first searched in May and June 2010, and a final update was conducted in September 2012. The committee also received updates on the research being conducted by the RAND Corporation (personal communications, Terri Tanielian and Rajeev Ramchand, June 2010 and September 2012). The table of current research was sent to VA in February 2011 and September 2012 for review of its accuracy.
The committee members carefully examined the table and, in concert with the literature review, conducted a gap analysis to highlight subjects on which research was lacking and to recommend future research. The committee highlights future directions and makes specific research recommendations on the basis of its assessment of the literature and its review of the table of funded research (see “Future Directions” in Chapters 4–9).
DEMOGRAPHIC ANALYSIS OF DEPLOYED PERSONNEL
Since 1974, the DMDC has maintained the largest archive of DOD data on manpower, training, and financial matters, covering all branches and components (regular, reserve, National Guard, and civilian) of military personnel. Data have been collected on over 42 million people connected to DOD, who have been followed through their military careers (accession, service, separation, and retirement). The DMDC accesses, receives, and combines data from many sources, programs, databases, and personnel files—on active-duty, reserve, National Guard, and retired military personnel and contractors and civilians—and data from VA, the Social Security Administration, Medicare, and other sources to allow for reporting of entitlements, benefits, and readiness; personnel identification, validation, and authentication; and decision-support purposes. The committee accessed and referred to the DMDC for various purposes and at various times throughout its study.
The committee wanted to understand basic information about who was deployed, and members requested and received demographic data from the DMDC on all those who were deployed anywhere in support of OEF, OIF, and OND1 from September 11, 2001, through December 31, 2010. We received a total of 2,147,398 records, including a file of those on active duty (1,450,004), a file of those in the reserves and National Guard (697,394), and a file that contained the deployment histories of all service members in those two groups.2 The DOD instruction files that contained the documents that we used to select the variables of interest were
• Active-duty personnel—DODI 1336.5.
• Reserve and National Guard personnel—DODI 7730.54.
• Deployment—DODI 6490.3.
The variables received from DMDC for the deployed active-duty service members and for the deployed reserve active-duty members are listed below in Table 2.1.
1OND officially began in September 2010. The committee’s file ends on December 31, 2010, so it contains few records on the OND population.
2IOM staff produced files for committee members that were stripped of all personal identifiers so that committee members did not have access to personal information. Case numbers were assigned to each record.
TABLE 2.1 Variables Received from the DMDC

| Deployed Active-Duty Variables | |
| --- | --- |
| Service | Primary service occupation |
| Component | Armed Forces Qualification Test percentile |
| Social Security number | Duty-service occupation |
| Last name | Duty-unit location, state |
| First name | Duty-unit location, country |
| Middle name | Duty-unit location, ZIP |
| Cadency | Transaction effective date |
| Date of birth | Place of birth, state |
| Sex | Place of birth, country |
| Marital status | Home of record, state |
| Education level | Deployment start date |
| Residence ZIP | Deployment end date |
| Spouse Social Security number | Location begin date |
| Dependents quantity | Location end date |
| Prior Social Security number | Location, country |
| Active federal military service base calendar date | Active-duty involuntary retention reason code |
| Military Accession Program source code | Pay grade |
| Enlisted career status code | Military career category code (for Army only) |
| Reserve Active-Duty Variables | |
| --- | --- |
| Service | Date of initial entry into reserve forces |
| Component | Active-duty start date |
| Reserve category code | Active-duty end date |
| Reserve subcategory | Pay grade |
| Social Security number | Active federal military service months quantity |
| Social Security number verification | Service occupation |
| Last name | Armed Forces Qualification Test percentile |
| First name | Duty-service occupation |
| Middle name | Separation incentive benefits and/or pay indicator |
| Cadency | Assigned unit, state |
| Date of birth | Assigned unit, country |
| Sex | Assigned unit, ZIP |
| Marital status | Reserve service bonus incentive type code |
| Race | Reserve service education incentive type code |
| Education level | Component deployment |
| Home mailing state | Deployment start date |
| Home mailing ZIP | Deployment end date |
| Spouse Social Security number | Location begin date |
| Dependents quantity | Location end date |
| Date of initial entry into uniformed service | Location, country |
The deployment data provided with those two files were for the most recent deployment identified at the time of the file creation. They did not provide the complete deployment history. Therefore, a later file of all deployments with the following data elements was requested: Social Security number, date of birth, deployment start date, deployment end date, location begin date, location end date, and location country. The deployments provided by the DMDC included all deployments to any location considered by DOD to be in support of OIF, OEF, or the Global War on Terror. Each deployment start and end date identified a unique deployment episode that was counted as one deployment.
By using those data, the committee was able to understand some characteristics of the all-volunteer military force that served in support of OEF and OIF during that period. The details of the demographic characteristics are presented in the discussion, tables, and figures in Chapter 3. The following paragraphs describe how the variables selected for use in Chapter 3 were coded.
Branch of service. The standard service codes were used: Army, Air Force, Coast Guard, Marine Corps, and Navy.
Component. The three components used were regular (active), National Guard, and reserves, although for some analyses the National Guard and reserves were described together as the reserve components. For several analyses, branch and component were crossed as follows: regular and reserves (all five services) and Army and Air Force National Guard.
Pay grade. On the basis of the pay-grade or pay-rate variables, pay grades ranged from E1 to E9 for enlisted personnel, from O1 to O10 for commissioned officers, and from W1 to W5 for warrant officers. Those were further collapsed for some analyses to junior enlisted (E1–E4), senior enlisted (E5–E9), junior officers (O1–O3), senior officers (O4–O10), and warrant officers.
Sex. Male or female.
Age. Age as of December 31, 2010, was calculated by using the date-of-birth variable. For some analyses, the distribution was collapsed into 5-year intervals: less than 20 years old, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49, 50–54, and 55 or older.
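As an illustration, the age coding described above can be sketched as follows (a minimal sketch; the function names and field handling are ours, not the DMDC's):

```python
from datetime import date

FILE_END = date(2010, 12, 31)  # end date of the committee's DMDC file

def age_on(file_end, dob):
    """Age in completed years as of the file end date."""
    before_birthday = (file_end.month, file_end.day) < (dob.month, dob.day)
    return file_end.year - dob.year - before_birthday

def age_group(age):
    """Collapse an age into the report's 5-year intervals."""
    if age < 20:
        return "<20"
    if age >= 55:
        return "55+"
    low = (age // 5) * 5          # 20, 25, ..., 50
    return f"{low}-{low + 4}"     # e.g., 27 -> "25-29"
```

For example, a member born June 1, 1985, would be coded as 25 years old and would fall in the 25–29 interval.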
Education. The variable was created by collapsing the detailed education-attainment variable to less than a high-school education, high-school degree or equivalent (for example, a GED), some college, college degree, or at least some postgraduate education.
Race. Coding rules established by the National Center for Health Statistics were used to code race. White was coded for those who identified themselves only as white with no other race identified. Black was coded for those who identified themselves as black or African American only or as primarily black but with additional race groups. Asian/Pacific Islander was coded for those who identified themselves as Asian, Native Hawaiian, or other Pacific Islander only or as primarily Asian, Native Hawaiian, or other Pacific Islander but with additional race groups. American Indian/Alaskan Native was coded for those who identified themselves as American Indian or Alaskan Native only or as primarily American Indian or Alaskan Native but with additional race groups.
Ethnicity. This was coded as Hispanic if a service member identified himself or herself as Mexican, Puerto Rican, Cuban, Latin American, or of other Hispanic descent. Those who indicated no ethnicity or another coded ethnicity were identified as non-Hispanic. People with missing or unknown ethnicity codes were categorized as of unknown ethnicity.
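The ethnicity recode above amounts to a small lookup, sketched here with illustrative labels standing in for the actual DMDC code values (which are not reproduced in this chapter):

```python
# Hypothetical stand-ins for the DMDC ethnicity codes.
HISPANIC_CODES = {"Mexican", "Puerto Rican", "Cuban", "Latin American", "Other Hispanic"}

def ethnicity_category(code):
    """Collapse a raw ethnicity code into Hispanic, Non-Hispanic, or Unknown."""
    if code is None or code == "Unknown":
        return "Unknown"                      # missing or unknown ethnicity codes
    if code in HISPANIC_CODES:
        return "Hispanic"
    return "Non-Hispanic"                     # no ethnicity or another coded ethnicity
```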
Marital status. Marital status was categorized as never married, married, divorced or legally separated, and other (widowed or interlocutory decree) as of the end date of the file. Those with missing or unknown marital status codes were categorized as of unknown marital status. Percentage married was most often used.
Children. The proportions of service members who had children and the numbers of children are based on the “dependent” variable that includes only dependent children rather than all dependents.
Deployment. Although there are different ways to define deployment, the committee chose to use the DOD definition as provided by the DMDC. In this context, the deployment count (number of deployments) was based on a count of deployment records in the deployment file with discrete (nonoverlapping) start and end dates for each service member. The length of a deployment was based on its start and end dates for both single and multiple deployers; the cumulative number of months for multiple deployers was based on summing the lengths of all their deployments. Dwell time (the time between deployments) was calculated, for those who had two or more deployments, by subtracting the end date of a given deployment from the begin date of the next deployment. Location of deployment was more complicated to determine. The DMDC Contingency Tracking System contains data fields for specifying the location of each deployment designated as in direct support of the OEF, OIF, or OND mission, but some individual records do not capture movements into and out of countries. In particular, before 2005, although the DMDC tracked deployments to Iraq and Afghanistan, the location codes were mostly unknown or based on the embarkation country of the service member (for example, Bahrain, Kuwait, or Qatar). In 2005, the Defense Theater Accountability System increased the level of detail to include each change in country during a given service member’s deployment. Thus, using the Afghanistan and Iraq country codes alone to identify those who served in those areas of responsibility would underestimate the number of service members who actually served there from September 11, 2001, through 2010.
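The deployment-count, cumulative-length, and dwell-time calculations described above can be sketched as follows (a minimal sketch that assumes each member's deployment episodes arrive as nonoverlapping (start, end) date pairs and that keeps lengths in days rather than converting to months):

```python
from datetime import date

def deployment_stats(episodes):
    """Return (count, total days deployed, dwell times in days) for one member."""
    episodes = sorted(episodes)                       # order episodes by start date
    count = len(episodes)
    total_days = sum((end - start).days for start, end in episodes)
    # Dwell time: begin date of the next deployment minus end date of the
    # previous one; defined only for members with two or more deployments.
    dwell_days = [(episodes[i + 1][0] - episodes[i][1]).days
                  for i in range(count - 1)]
    return count, total_days, dwell_days

# Two hypothetical deployment episodes for one service member.
count, total, dwell = deployment_stats([
    (date(2003, 3, 1), date(2004, 2, 28)),
    (date(2005, 9, 1), date(2006, 8, 30)),
])
```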
In recognition of the possibility that the deployments of many of those who served in Iraq or Afghanistan were not assigned location codes or were assigned codes based on embarkation countries, each deployment location on the file was classified as
• Afghanistan or Iraq.
• Middle East locations designated as eligible for combat-zone pay or benefits (Djibouti, Israel, Jordan, Kyrgyzstan, Kuwait, Pakistan, Saudi Arabia, Somalia, Turkey, Uzbekistan, and Yemen).
• Other known countries or locations (for example, Germany or Republic of Korea).
• Unknown (location data missing).
On the basis of those codes, it was possible to categorize each deployed service member as having (1) at least one deployment to Afghanistan or Iraq; (2) at least one deployment to a Middle East country, with all other deployment countries or locations known; (3) deployments to the Middle East and/or other countries, with at least one unknown location; (4) deployments only to countries other than Afghanistan, Iraq, or the Middle East countries; or (5) no known deployment locations (all are missing). Chapter 3 details all the demographic characteristics of those deployed and provides information on the nature of the deployments and dwell time.
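That five-way classification can be sketched as follows (country-name strings stand in for the DMDC location codes, and `None` marks a missing location; both are illustrative assumptions):

```python
# Middle East locations eligible for combat-zone pay or benefits (from the list above).
MIDEAST = {"Djibouti", "Israel", "Jordan", "Kyrgyzstan", "Kuwait", "Pakistan",
           "Saudi Arabia", "Somalia", "Turkey", "Uzbekistan", "Yemen"}

def classify(locations):
    """Map one member's deployment locations to categories 1-5."""
    locs = set(locations)
    if "Afghanistan" in locs or "Iraq" in locs:
        return 1                              # at least one Afghanistan/Iraq deployment
    known = locs - {None, ""}
    if locs != known:                         # at least one unknown location
        return 3 if known else 5              # 5 when every location is missing
    return 2 if known & MIDEAST else 4        # Middle East vs only other known countries
```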
In its efforts to gain some understanding of the effects of deployment on communities, the committee reviewed the literature that was available. The committee also contracted with Westat to conduct ethnographic assessments with the committee’s oversight and input. (The individual ethnographic assessments may be found in Appendix E, and a summary of findings appears in Chapter 7.)
Ethnography is an approach to understanding communities that involves an array of data-collection strategies, including document review, participant observation, in-depth interviews of key community members, and less formal interviews with persons throughout a community as the opportunity arises. Although ethnographers have traditionally emphasized the importance of spending an extended period (at least 1 year) in the community of interest, there is agreement that under some circumstances valuable and accurate findings can be obtained through a substantially shorter period of field work.
Two-person ethnographic teams from Westat visited six sites to understand the effects of multiple deployments on targeted domains in each of the selected locations. The following paragraphs describe the methods used by the ethnographic teams; they describe the eligibility and selection criteria for study sites and the training of project-team members. The training aimed to ensure a consistent approach to data collection and reporting in the six sites.
The project team and IOM worked closely together to identify possible study sites and to decide which ones to select. The presence of a large Army or Marine Corps base where service members have been deployed multiple times was the primary criterion for site selection. Mindful of the potential importance of geographic diversity, the project team and IOM agreed on
• Jacksonville, North Carolina
• Watertown, New York
• El Paso, Texas
• Olympia-Tacoma, Washington
IOM requested two additional sites that had National Guard units and suggested that the project team look for communities in states that had a history of a strong National Guard—such as Florida, Indiana, Michigan, Ohio, and Pennsylvania—and from which the local Guard unit had deployed more than once. Because IOM was interested in community-level effects of deployments, the project team added the criterion that units have large numbers of local Guard members, as opposed to, for example, members from four neighboring states and Guam.
Research assistants on the project team set about locating several such units and quickly found that, although there are websites for the National Guard in each state, they vary
substantially in quality, currency, level of detail (for example, where unit members were from), and even Web domain name (for instance, some have the “.mil” domain name, others use “.org”). Because there was no way to “drill down” to the local units within all the suggested states and thus no ready means of identifying which units had seen more than one deployment to OIF or OEF, the research assistants expanded their search to include other states that had a noted National Guard history. By reviewing various states’ National Guard websites, they were able to navigate via hotlinks to more detailed information about the units, including information about their deployment histories; the percentage of unit members who had deployed once, twice, or three times or more; and how many members come from towns near the location of the unit headquarters. The research assistants also accessed newspaper archives to identify the extent to which communities were paying attention to the comings and goings of their Guard members. Feature articles and letters to the editor, for example, offered important insights into a Guard unit’s local “presence.”
Using those search techniques, the project team identified two National Guard locations that seemed appropriate for the IOM study’s objectives: Georgetown, South Carolina, whose armory serves as the headquarters for the 1/178th Field Artillery Battalion; and the town of Little Falls, Minnesota, which hosts Camp Ripley, the training facility for the Minnesota National Guard, and has a long history of its residents’ joining the National Guard. Those communities and the rationale for their selection were presented to and approved by IOM for study inclusion.
IOM and Westat staff participated in training. The training emphasized the objectives of the community assessment and provided an opportunity for team members to review planned data-collection procedures. Interview protocols were also reviewed and discussed.
Westat Data-Collection Procedures
Research on Community Background
An ethnographic study can begin before a researcher goes to the field. Project-team members conducted Internet-based environmental scans to collect important background information relevant to the site visit and to shape their understanding of the community. They also uncovered important social networks through initial calls to key informants (such as city-council members and the director of a mental-health clinic) and by asking them for additional referrals in the community. Those steps set the foundation for the development of each site-visit plan.
Ethnographic Data Collection
Study teams used a variety of data-collection approaches that went beyond the initial community scan, including semistructured small-group discussions, in-depth interviews, and on-site materials review (for example, of local daily newspapers). Perhaps the hallmark of ethnography is participant observation, in which the researcher engages with the community “as though” she or he were a regular community member. Observations made during such engagements and personal experiences can offer valuable insights into community dynamics. In a rapid appraisal, venues for participant observation must be chosen with efficiency in mind—that is, selection of events or venues that will provide the best information relevant to the study.
For the present study, community fairs proved to be an excellent venue for team members to observe the degree to which military members and their families interacted with the civilian community. Such fairs also often have dedicated areas where community-based organizations can advertise their services. That is one way in which social-service organizations may conduct outreach to the community, and observing which organizations were present—or absent—was telling during team members’ attendance at the fairs.
Question Matrix Review
The Westat study team created a “working” data matrix (see Table 2.2) that crossed the critical research questions with key categories of interviewees. Team members placed a check mark in the appropriate box each time an interview involved a particular topic. That approach helped to ensure cross-site consistency in data-collection procedures. It also provided the study team with a quick way to ensure that information was being collected on all topics of interest and that interviews were taking place with members of key sectors of the community, such as emergency-services personnel and school faculty and staff.
TABLE 2.2 Question Matrix for Site-Visit Interviews
|Questions by Domain||Business Leaders||Social Service Providers||School Personnel||Clergy||Civilian Community Members||Military Families||Civic Leaders|
|How has the deployment situation affected overall commerce in the community? How has it affected the labor supply? What impact has it had on consumer demand? (Increased? Decreased?)||√||√|
|How has it affected the unemployment rates within the community? What, if any, stores have gone out of business or left town? Conversely, what, if any, new stores have come into town?||√||√||√||√||√|
|How have the deployments affected the housing market? Increase in population? People have moved away? Change in foreclosure rates? Change in home ownership vs rentals?||√||√||√||√||√||√|
Data Management and Security
Team members collected two forms of data: field notes on observations or interviews and audio recordings of formal interviews. Each evening, to enhance data security, site-visit staff uploaded audio recordings to a secure file transfer protocol (SFTP) site and informed an assigned staff member on Westat’s Rockville, Maryland, campus of the number of files to expect. Rockville-based staff then downloaded the files to a secure network drive. After being notified that the files had been transferred correctly, site-visit staff deleted the files from their recording devices; the files that had been uploaded to the SFTP site were also deleted at that time.
Using the data matrix, team members reviewed appropriate audio files and field notes for patterns regarding each key question. For example, to address the effects of multiple deployments on the local economy, analysts reviewed relevant sections of interviews with city
leaders and members of the Chamber of Commerce. Staff sometimes also examined notes from less formal discussions with local business leaders (such as the owner of a storage facility or coffee shop). In conducting the analysis, the researchers did not search solely for consensus; often, because interviewees occupied different positions in a community, their perspectives on an issue could be quite different. When possible, analysts relied on other data gathered during the site visit and social theory to try to explain why differences might exist.
Each site-visit team conducted its own data analysis and submitted a 12- to 15-page report shortly after each visit. The case-study outline was structured to match the question matrix, so it created a straightforward framework for conveying findings. Each report was reviewed by one or more senior members of the project team before it was submitted to IOM for review. (See Appendix E for the individual site summaries.) The summary of findings from the ethnographic assessments is presented in Chapter 7.