Evaluating Military Advertising and Recruiting: Theory and Methodology (2004)
National Research Council. Washington, DC: The National Academies Press. doi: 10.17226/10867

3 Monitoring Trends in Youth Attitudes, Values, and Propensity

In our earlier report (National Research Council, 2003), we argued that "military readiness may best be served when the first role of military advertising is to support the overall propensity to enlist in the youth population and to maintain a propensity level that will enable productivity in military recruiting" (p. 6). On this basis, we recommended that "a key objective of the Office of the Secretary of Defense should be to increase the propensity to enlist in the youth population" (p. 8). Clearly, the more one knows about the determinants of propensity, the more possible it becomes to develop effective communications or other types of interventions to increase propensity. In this chapter we propose a cohort-based sequential sample survey with a longitudinal component that we think will provide the information necessary both for tracking the determinants of propensity and for developing more effective communications to increase and maintain the pool of youth with the propensity to join the military. This survey design has several advantages over the current and recent survey designs that the Department of Defense (DoD) has used to track youth attitudes toward military service:

1. By assessing an unchanging set of core questions annually, this survey design provides a consistent set of data that can be used to monitor and track changes in the determinants of propensity, as well as in propensity itself, over time. Consistent measures from a constant set of questions are necessary to discern changes in propensity over time across different cohorts. This can provide an early warning of changes in the recruiting market and some indication of how advertising and other resources and incentives should be adjusted to reflect market changes.

2. One can assess the reasons for differences (cross-sectional) and changes (longitudinal) in key outcome variables over time. With structured modeling, the Services can determine which of the underlying determinants of propensity are driving current levels of propensity, and how they may have differed in the past. This will inform the development of advertising messages and the levels and types of incentives used to attract youth into military service.

3. The longitudinal nature of the data, in particular the follow-up of individuals, provides the basis for assessing how critical determinants of propensity evolve over time. Not only will this allow assessment of changes over time in the relative importance of factors influencing propensity to enlist, but the follow-up sample will also permit analyses of the relationships among initial intention, subsequent intention, and actual status (enlistment, college attendance, or entering the civilian workforce). These latter analyses may suggest ways to help young adults act on their initial intentions to enlist.

In the previous chapter, we proposed a model suggesting that propensity to join the military (or a particular Service, such as the Army or Navy) is primarily determined by attitudes, norms, and self-efficacy (see also National Research Council, 2003). According to the model, the relative importance of these three variables as determinants of intention can vary as a function of both the behavior and the population being considered. That is, some behaviors may be primarily influenced by attitudes, while others may be primarily influenced by norms or self-efficacy. Similarly, a behavior that is primarily under attitudinal influence in one population (or population segment) may be primarily under the influence of norms or self-efficacy in another population.

In order to develop interventions to increase the proportion of the population with a propensity to enlist at any given point in time, one needs to assess the underlying determinants of propensity. That is, the more one knows about the values and relative importance of attitudes, norms, and self-efficacy in a given population, and the more one knows about the beliefs underlying these attitudes, norms, and perceptions of self-efficacy, the more likely it is that one can design an effective message (or other type of intervention) to increase propensity (Fishbein et al., 2001). However, this type of information (i.e., the data most relevant for guiding the development of effective messages or other interventions to increase propensity) is currently not available. Although some of this information was assessed by the Youth Attitude Tracking Study (YATS), a major problem with YATS is that, rather than obtaining complete data from each respondent, respondents answered only small, randomly selected subsets of the questions, making complete analyses at the individual level impossible.
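
To make the modeling step concrete, the following minimal sketch (in Python, added here purely for illustration) shows one way the relative weights of attitudes, norms, and self-efficacy as predictors of intention could be estimated from individual-level survey data. The variable names, composite scores, and simulated data are assumptions for the example, not features of any existing DoD survey.

```python
# Minimal sketch (illustrative): estimating the relative weights of attitudes,
# norms, and self-efficacy as predictors of enlistment intention, as the model
# described above implies. Any standard statistics package would serve equally.
import numpy as np

def standardize(x):
    """Convert a score vector to z-scores so the weights are comparable."""
    return (x - x.mean()) / x.std(ddof=1)

def propensity_weights(attitude, norms, self_efficacy, intention):
    """Regress intention on the three determinants; return standardized weights."""
    X = np.column_stack([
        np.ones(len(intention)),          # intercept
        standardize(attitude),
        standardize(norms),
        standardize(self_efficacy),
    ])
    y = standardize(intention)
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["intercept", "attitude", "norms", "self_efficacy"], coefs))

# Example with simulated respondents: in one population segment attitudes
# dominate; the same code run on another segment could show norms dominating.
rng = np.random.default_rng(0)
n = 500
att, nrm, eff = rng.normal(size=(3, n))
intent = 0.6 * att + 0.2 * nrm + 0.1 * eff + rng.normal(scale=0.5, size=n)
print(propensity_weights(att, nrm, eff, intent))
```

Run separately by population segment (or by survey year), such an analysis is what would indicate whether a message should target beliefs underlying attitudes, norms, or self-efficacy for a given audience.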

It was for this reason that the committee prepared a letter report evaluating YATS, which was included as Appendix A in the committee's Phase I report (National Research Council, 2003) and presented a number of findings and recommendations. Among these was the recommendation that "ongoing surveys to assess the critical determinants of propensity should be conducted on a regular basis." These surveys would allow for individual-level analyses to identify the psychosocial determinants of propensity. Thus, we recommended that, whenever a survey is designed, "consideration should be given to randomly assigning interrelated blocks of information to the same subgroups. Consideration should be given to maintaining sufficient sample size and content within a block of relevant questions so that multivariate analysis can be conducted without serious missing data problems" (p. 294).

In addition to conducting a survey to identify the determinants of propensity, it is important to assess changes in these underlying beliefs over time. Thus we also recommended that "DoD consider using a continuous tracking survey methodology for such issues as propensity to enlist, advertising awareness, awareness of direct response campaigns, involvement in high school activities, and perceptions of the military" (p. 295). More specifically, we recommended that "a portfolio of surveys at different time intervals replace the current annual YATS administration" (p. 294). Moreover, we suggested that, to be maximally useful, the formatting of the questionnaire must permit individual-level multivariate analyses. For example, it should permit assessment of an individual's complete set of salient beliefs about the consequences (or costs and benefits) of joining the military.

It is important to note that, in response to these suggestions, DoD funded Wirthlin Worldwide to conduct a number of youth and influencer surveys (Bailey et al., 2002; Sattar et al., 2002). More specifically, Wirthlin conducted four surveys with respondents between ages 15 and 21 (March and April 2001; July and August 2001; October and November 2001; and October and November 2002) and four surveys with adults ages 22 to 85 (May 2001; September and October 2001; January 2002; and September 2002). While the youth surveys provided information necessary for testing some of the relations in the theoretical model, no single survey obtained information on all of the theoretical determinants of propensity. Thus, while it was possible to examine some relationships that could not be examined with the YATS data, a full test of the model was not possible. Moreover, a large number of items focused on "generational" questions and, in particular, on variables assumed to be related to "millennials" (e.g., team orientation, decision making, life satisfaction). There is little evidence that variables of this type are in any way related to the propensity to enlist or to actual enlistment behavior (National Research Council, 2002).

Turning to the adult surveys, Wirthlin does provide some interesting and important data concerning adult perceptions of the military and adults' propensity (or intentions) to "encourage [young people] to join a military service," to "get a job," or to "attend a four-year college." It is notable, however, that there are no questions assessing the determinants of these intentions to encourage youth to choose any of these career paths.

In sum, the current set of surveys does not yet provide the data necessary for identifying the critical determinants of propensity, and thus it does not provide the data necessary for developing effective interventions (including mass media advertising) to increase the pool of individuals with a propensity to enlist. In order to assist and provide guidance for mass media and other interventions to increase this pool, complete data concerning the determinants of propensity at the individual level are needed. Data are also needed to evaluate the effectiveness of advertising and other interventions. Thus, in addition to assessing whether a communication campaign has produced changes in propensity (and its underlying determinants), it is also necessary to track the extent to which the message is reaching its intended audiences (i.e., it is important to assess exposure to message content). In addition, in order to evaluate message (or advertising) effectiveness, it is necessary not only to determine whether a message is having a direct effect on propensity and its determinants, but also to look for other paths of effect. For example, messages can influence relevant referents (such as parents) who, in turn, influence youth; messages may increase conversations among youth about joining the service; and messages may increase the likelihood that youth will talk to others (people in the military, school counselors, parents) about joining the military. One implication of these other paths is that, although a campaign may be effective, one may not always find differences between those who are and those who are not exposed to the advertising campaign. Thus, surveys should be designed to take these other paths of effect into account.

USE OF SURVEY RESEARCH

Survey research can be used for many purposes. A survey conducted at a given point in time can provide important prevalence data about propensity and its psychosocial determinants. These data also permit the investigation of relationships among beliefs, attitudes, perceived norms, self-efficacy, and propensity. Equally important, experimental or quasi-experimental designs (or both) can be built into a given survey. For example, in order to investigate how changes in one or more variables (or policies) could affect propensity, respondents taking the survey could be randomly assigned to different forms of the survey instrument that present different recruitment or enlistment scenarios. Given this manipulation, one could then assess, for example, how different increases in pay or different education policies influence the likelihood that one would join the military. Furthermore, by repeating the survey (or parts of it) over time, either cross-sectionally or longitudinally, one can observe changes in any of these variables or in the relationships among them. Moreover, by repeating surveys one can develop quasi-experimental (time series) designs to assess the efficacy of a given advertising campaign or recruitment strategy as a means of producing changes in propensity and its underlying determinants (Campbell and Stanley, 1966; Shadish, Cook, and Campbell, 2002).
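
The following sketch (illustrative only) shows how random assignment of respondents to scenario forms, and a simple comparison of mean stated propensity across forms, might be implemented. The scenario labels, response scale, and simulated effects are invented for the example.

```python
# Minimal sketch (illustrative): randomizing respondents to questionnaire forms
# that present different enlistment scenarios (e.g., pay increases), then
# comparing mean stated propensity across forms.
import random
import statistics

SCENARIOS = ["current_pay", "pay_plus_10pct", "pay_plus_20pct"]

def assign_form(respondent_id, seed=2004):
    """Deterministically randomize each respondent to one scenario form."""
    rng = random.Random(seed * 1_000_003 + respondent_id)
    return rng.choice(SCENARIOS)

def scenario_means(responses):
    """responses: list of (scenario, propensity score on a 1-4 scale)."""
    by_form = {s: [] for s in SCENARIOS}
    for scenario, score in responses:
        by_form[scenario].append(score)
    return {s: statistics.mean(v) for s, v in by_form.items() if v}

# Toy usage: simulate 300 respondents whose propensity rises slightly with pay.
sim = []
for rid in range(300):
    form = assign_form(rid)
    base = random.Random(rid).gauss(2.0, 0.6)
    bump = {"current_pay": 0.0, "pay_plus_10pct": 0.15, "pay_plus_20pct": 0.3}[form]
    sim.append((form, min(4.0, max(1.0, base + bump))))
print(scenario_means(sim))
```

Because assignment to forms is random, differences in mean propensity across scenarios can be read as the effect of the hypothetical policy change rather than of preexisting differences among respondents.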

Even when the major goal of conducting a survey is to provide information to increase propensity to join the military, there are a number of issues that must be addressed prior to developing one or more survey instruments.

1. What is the target population? What age group (or groups) should be targeted? That is, should one attempt to increase propensity among youth, adolescents, or young adults? Alternatively, knowing that there are important others who may strongly influence a person's propensity to join the military, one must decide whether to view these influencers as the target audience. It is therefore important to consider whether different surveys are needed for different populations and, if so, how often each should be conducted.

2. With one or more target populations identified, one needs to consider a number of sampling issues: size of sample(s), number of (distinguishable) samples, response rate, and other biases. In addition, one should consider questions of timing (annual, quarterly, continuous).

3. What survey methodology should be used, and where and how can one maximize access to the population of interest? More specifically, should the mode of administration be face-to-face, self-administered, or conducted by telephone (each of which can also be computer assisted); should it be conducted via the Internet; or should it be a mail questionnaire? Similarly, should it be individually or group administered (e.g., at school)?

4. Turning to the content of the survey, one must consider such things as question ordering, question coverage, and question wording. Other issues relate to question types; for example, should the questions be open-ended (with or without prompts) or closed-ended (multiple choice)? What should be the order of presentation of the questions?

The remainder of this chapter discusses issues related to survey structure, implementation, and content.

SURVEY STRUCTURE AND IMPLEMENTATION DECISIONS

The issues and recommendations presented here are broadly compatible with those in our 2000 letter report concerning YATS, but here we provide a greater degree of specificity along some dimensions. What is recommended here need not be viewed as a proposed replacement for the current pattern of telephone surveys, except insofar as cost constraints dictate some degree of trade-off. That said, it should be acknowledged that if the series of monitoring surveys recommended here were carried out, the overlap in coverage is such that it would eventually be possible to phase out much of the telephone survey work now being done.

The focus of this section is on monitoring trends in youth attitudes, values, and propensity. Nevertheless, the survey strategies outlined here could readily be adapted to incorporate other material. One example would be to measure, and perhaps monitor, reactions to advertising (see Chapter 5), although we would not propose this to the exclusion of other survey and nonsurvey methods focusing on advertising. Another example, as suggested above, would be to explore reactions to possible new or modified incentives for military service (see also Chapter 7).

A number of matters will have to be resolved before undertaking a set of surveys such as those proposed here. We outline them briefly, and in each case state the committee's recommendations.

Need for Long-Term Funding

Monitoring survey designs are built with the expectation that funding will be available to carry out the research over a considerable period of time. It is not a good investment of federal funds or of investigator time to initiate monitoring survey efforts only to drop funding after a short period. Careful consideration needs to be given to the sponsor's ability and willingness to make funding commitments for multiyear (e.g., five-year) periods. The survey approach outlined here presupposes such funding commitments, and the committee strongly urges that monitoring surveys not be initiated until such commitments are in place.

Selecting Target Samples

Earlier work summarized in the committee's Phase I report showed that, among many young people, enlistment propensity tends to firm up by or before the end of high school. Although students provide answers to propensity questions as early as 8th grade in the Monitoring the Future (MTF) surveys, the proportions providing "definite" answers are much higher among 12th grade students. Senior year propensity measures, however, can be complicated by the likelihood that many seniors who eventually enlist make their decisions and commitments before leaving high school; accordingly, their "propensity" answers might better be described as reports of decisions already taken. This is clearly consistent with the often reported finding (Fishbein and Ajzen, 1975; Ajzen and Fishbein, 1980; Albarracin, Johnson, Fishbein, and Muellerleile, 2001) that the shorter the time interval between the assessment of intention (or propensity) and the observation of behavior, the better the prediction (i.e., the higher the correlation).[1]

Taking these factors into account, the committee recommends that the lower age boundary for target samples be about age 16-17. If there is an interest in propensity or related factors at lower ages, they could be the subject of limited special studies rather than monitoring surveys. By age 16-17 young men and women have had to confront questions about their next steps after high school (college, military service, civilian employment), so this seems a good lower age bound for target samples. Specifically, for reasons spelled out below, we recommend the 11th grade of high school as an optimal starting point for the target sample age range. The end point for the target sample is less easy to specify; however, given the increased recruiting attention to college students, the committee recommends that inclusion extend to at least age 23, the point by which most young adults either have completed college or are relatively unlikely to do so. We leave open the possibility that it may prove useful to extend the age span a bit farther; however, we think it unlikely that it would be cost-effective to extend it much beyond age 25.

[1] The fact that YATS excluded respondents who had already committed to the military, while Monitoring the Future included all high school seniors, helps to explain why, although both surveys found strong evidence that propensity does predict enlistment, MTF found stronger intention-behavior relationships than did YATS (see, e.g., Bachman, Segal, Freedman-Doan, and O'Malley, 1998).

Use of Self-Completed Questionnaires

There are very large potential cost advantages in surveys employing self-completed questionnaires. There are also constraints and limitations. The committee considered several important constraints and judged them acceptable for the proposed monitoring surveys.

Self-completed questionnaires, particularly when carried out on a large scale, tend to be limited to closed-ended multiple-choice question items. In spite of these restrictions in question-and-answer format, self-completed questionnaires have been used successfully in monitoring surveys (e.g., the Youth Risk Behavior Survey, Monitoring the Future). In contrast, YATS used telephone personal interview methods, which allow for open-ended questions and for potentially complex question branching strategies. Interestingly, even in the YATS telephone interviews, the survey data that tended to be most useful for monitoring purposes involved closed-ended items. Accordingly, the committee would find it acceptable to use self-completed questionnaires for monitoring purposes, even though that may constrain the survey questions to closed-ended items. (Open-ended items can be included in self-completed surveys, although they add costs, complexities, and time delays.) The committee recommends that other approaches (focus groups, smaller scale interview and elicitation studies, etc.) be used to develop and evaluate the closed-ended items.

Another extremely important limitation of self-completed questionnaires is that the target respondents must be able to read and understand the questions. Limiting the target population to literate respondents could be a serious bias in many studies, and even in the surveys proposed here this constraint needs to be kept in mind. Nevertheless, given a primary focus on potential military recruits, and given current and future military requirements for literacy among all Service members, the committee considers the literacy constraint tolerable.

One other dimension of self-completed questionnaires to be noted involves respondent motivation. All survey methods require some degree of motivation on the part of respondents, but arguably some methods are more inherently motivating than others. Face-to-face personal interviews can be particularly motivating, because some degree of personal relationship is established between interviewer and interviewee. Telephone interviews may develop such a relationship to some extent; however, in a period in which telemarketing has imposed greatly on the good will of those still willing to answer their phones in person (rather than use an answering machine), the motivational value of phone interviewing may be debatable. In any case, one of the factors to be considered in developing any survey strategy is respondent motivation, and that will be particularly important for the surveys proposed here. The committee thinks that topics related to career choice are of great interest and importance to most of those in the target survey population and that, with sufficient care, self-administered questionnaires can be developed that will maintain respondent motivation.

Use of School-Based Administrations

Why survey in schools? To paraphrase Willie Sutton's bank robber comment: because that's where the students are. School-based survey administrations are wonderfully cost-effective, at least from the standpoint of survey researchers. Teachers and administrators, however, are becoming increasingly dubious about school-based surveys, partly because they seem to be proliferating, and also because the increased demands for student performance and school accountability have made classroom time a more precious commodity. The result is that obtaining cooperation for school-based surveys is increasingly difficult.

Nevertheless, in spite of the difficulties, there is much to recommend school-based, group-administered, self-completed questionnaire surveys: the sampling is fairly straightforward and can be quite accurately representative; response rates can be quite high among students; and, once again, such surveys can be very cost-effective. Moreover, school-based surveys can also be treated as a baseline or starting point for panel surveys (repeated surveys of the same respondents over time). Panel surveys involve some respondent attrition, to be sure, but the base-year data can provide considerable information about the lost respondents (and how to make statistical adjustments to compensate for the loss).

Matters of Entry, Motivation, and Acceptability

A Pentagon-sponsored survey focused on military propensity, asking only questions about military jobs and missions, is likely to be acceptable to some school administrators and less so to others. Moreover, in most schools there is likely to be a range of parent and student reactions to a "military survey." Pentagon sponsorship should, of course, be acknowledged; but if the survey were a joint undertaking sponsored by DoD and others, and if the content reflected that joint sponsorship, the survey might be more attractive to all concerned. Moreover, in order to do a good job, as well as to be broadly acceptable to school personnel, parents, and students, the military portion of the questionnaire content should be balanced in its items about military service, military working conditions, duty to country, and the like. If the survey comes across as promilitary rather than balanced in its approach, it is likely to generate considerable controversy and resistance in some communities and school districts. An emphasis on national service might aid in making the survey more broadly attractive.

A closely related point is that if personal identification of respondents is requested in order to permit follow-up surveys of some respondents, great care must be taken to ensure, and communicate, that no identifying information or individual survey responses will be available to anyone other than those conducting the survey. In particular, it must be clear to everyone that survey responses and the names of participants will not be made available to recruiters. This is not the sort of thing that can be mentioned once with the expectation that everyone will take note and remember. Rather, it is something that will need to be repeated often, to a number of relevant audiences.

Proposed Features of a Monitoring Survey Series

The committee considered a number of survey designs and options in its deliberations. None of the options was ruled out; to the contrary, it was considered important that multiple methods be employed at various points. In particular, the use of focus groups and individual personal interviews with open-ended questions was recommended as a means of forming and improving questionnaire items. Nevertheless, consistent with the discussion of broad issues outlined above, the committee developed a design that it recommends for a monitoring survey series. We outline some key features below.

A Cohort-Sequential Design Using Self-Completed Questionnaires

The monitoring design proposed here would obtain new and relatively large samples of high school students annually and then track subsets of those students on a regular schedule during subsequent years. This is a cost-effective research strategy for generating descriptive data on youth and young adults, and it can permit disentangling changes over time that reflect age differences, cohort differences, and secular (historic) trends. (See, e.g., Johnston, O'Malley, and Bachman, 2003a, 2003b, 2003c, for illustrations of all three kinds of changes in substance use.) The design has a bonus feature: because it tracks the same individuals across time, it can be the basis for panel analyses exploring individual changes and growth trajectories, and it can distinguish chronological (and possibly causal) ordering of events.

Base-Year Surveys of High School Students

The committee recommends that the starting point in the cohort-sequential design consist of school-based, group-administered, self-completed questionnaire surveys of 11th grade students throughout (and representative of) the United States. As noted earlier, 11th grade is a point by which many students have confronted choices about their next steps after high school but before very many have made firm commitments to further education or to military service. Most students in 11th grade are age 16 or 17; thus the choice of 11th grade as the starting point sets a fairly narrow age boundary. The choice of 11th grade rather than a particular age is governed, in part, by practical considerations involved in school-based surveys; however, the choice of a grade-based rather than age-based starting point also takes account of the fact that decisions about next career or education steps tend to be firmed up during 11th grade. In any case, we judge that grouping students by grade rather than by age is well suited to the purposes of the research.

For the reasons outlined above, the committee recommends 11th grade as the earliest point for the school-based monitoring surveys. Although surveys at earlier grades (and ages) could reveal things about the developmental progression of propensity, these could be the subject of special studies rather than an ongoing monitoring effort. The committee did not, however, reach a firm conclusion about whether 11th grade was the only grade worth including in school-based surveys. An obvious alternative would be to include both 11th and 12th grades, and we can see advantages as well as disadvantages of such a strategy. Later in this chapter we outline in greater detail two alternative strategies and consider some of the costs and benefits of each. One strategy involves sampling only 11th grade students and then tracking them with follow-up surveys, perhaps as often as once a year. The other strategy involves sampling both 11th and 12th grade students, with follow-up surveys every two years. It is important to note that either strategy will take time to develop the full range of respondents.

Periodic Tracking of Subsamples

Monitoring the Future tracks its panel respondents by mail every two years, with half of them resurveyed 1, 3, 5, 7, 9, and 11 years after high school and the other half resurveyed 2, 4, 6, 8, 10, and 12 years after high school. (Later follow-ups take place at modal ages 35, 40, and 45.) This every-other-year schedule works reasonably well for monitoring shifts in substance use, as well as for most other purposes; however, the committee notes that follow-up surveys conducted on an annual basis would provide greater detail in tracking changes in propensity, time of enlistment, and other related factors. Anything more frequent than annual surveys would probably be burdensome to many respondents and in any case seems unnecessary for monitoring purposes. Mailed self-completed questionnaires, accompanied by a modest "honorarium" check and coupled with phone follow-up prompts when necessary, can produce reasonably high response rates (even among those on active military duty). Some panel attrition occurs;[2] however, taking account of the initial base-year information obtained from all respondents often permits satisfactory statistical adjustments.[3]

[2] For example, MTF mail follow-up surveys one or two years after the 12th grade in-school survey have yielded returns of about 65 percent in recent years. Longer term response rates in that study (ages 30-40) have been 50 percent or higher. Response rates to mail follow-up surveys can vary markedly, depending on a number of survey parameters. These include the length and interest level of the questionnaire, the incentives (i.e., payments) used, and the extent of tracking efforts to persuade delinquent respondents to return questionnaires. Thus it is not possible to predict in advance what response rates would be attainable without knowing a great deal about such parameters. The MTF experiences noted here are merely suggestive; a DoD survey could perhaps do better, and it certainly also could do worse.

[3] As one example, Bachman, Freedman-Doan, Segal, and O'Malley (2000) made use of imputation techniques to adjust for the fact that panel attrition was greater than average among young men with high military propensity. Another approach is poststratification, in which differential weights are used to "reconstitute," in effect, the initial distribution on important variables.
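
A minimal sketch of the poststratification idea mentioned in footnote 3 appears below; the propensity categories and counts are invented for the example, and a production adjustment would typically use more cells and more variables.

```python
# Minimal sketch (illustrative): poststratification weights that restore the
# base-year distribution of a key variable among follow-up respondents.
from collections import Counter

def poststratification_weights(base_year_categories, followup_categories):
    """Return a weight per follow-up respondent that restores the base-year mix."""
    base_share = Counter(base_year_categories)
    fu_share = Counter(followup_categories)
    n_base = len(base_year_categories)
    n_fu = len(followup_categories)
    weights = {}
    for cat, fu_count in fu_share.items():
        target_prop = base_share[cat] / n_base      # proportion in the base year
        actual_prop = fu_count / n_fu               # proportion among responders
        weights[cat] = target_prop / actual_prop    # up-weight under-represented cells
    return [weights[c] for c in followup_categories]

# Toy usage: high-propensity youth are lost at a higher rate in the follow-up,
# so they receive weights above 1 and low-propensity respondents below 1.
base = ["high"] * 200 + ["low"] * 800
followup = ["high"] * 120 + ["low"] * 600
w = poststratification_weights(base, followup)
print(round(w[0], 2), round(w[-1], 2))  # weight for a "high" vs. a "low" respondent
```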

Scheduling of Surveys Throughout the Year

The committee sees advantages in spreading surveys throughout the year in order to monitor more sensitively those military-related events that occur at unpredictable times during the year. For example, surveys spread across the year might have provided some indication of the short-term (versus longer term) impacts of events such as the World Trade Center attacks on September 11, 2001, and the initiation of the Iraq war in March 2003. Survey data spread across the years could be pooled across one-year intervals, or longer or shorter intervals, for various analysis purposes. Also, some analyses might need to take account of seasonal fluctuations in key indicators. It should be noted that some cost saving could result from spreading data collection across the year, particularly with respect to the follow-up surveys, simply because there would be fewer peaks and valleys in the staff activities involved in carrying out the surveys.

Although spreading surveys throughout the year is judged to be desirable and quite attainable for the follow-up surveys, that objective cannot be met with the high school surveys. In-school surveys cannot be carried out during summer vacation, nor are the first and last few weeks of school likely to be acceptable for surveys. Spreading the in-school surveys across about a six-month interval (October or November through March or April) is probably most feasible. Another approach would be to concentrate in-school data collections (to the extent possible) in two periods approximately six months apart (e.g., October and April). Complex sampling decisions would be needed in order to determine whether schools could be offered much choice about when during the year they would be surveyed. Obviously, the greater the choice permitted, the greater the likelihood of obtaining a school's participation. It might prove possible to develop adjustments to seasonal data (taking account of characteristics of the schools that fall into each of the intervals being analyzed) so that flexibility in school survey scheduling could be tolerated while still taking advantage of some spread of survey administration times throughout the school year.

Follow-up surveys can be scheduled throughout the year, and that scheduling could (and probably should) be independent of (orthogonal to) the timing of an individual respondent's base-year survey date. Thus, for purposes of exploring short-term impacts of various events during the year, the follow-up surveys spaced evenly throughout the year would be the better source of data.

Sampling Strategies for In-School Samples

A multistage stratified random sampling approach is proposed in which the first stage is the geographical area, the second stage is the school, and the third stage is the student. Schools are selected with probability proportionate to (estimated or reported) size. In smaller schools, all students in the relevant grade are targeted; in larger schools, a sample of students is selected (by sampling classrooms, not individual students). These procedures have worked reasonably well in Monitoring the Future across several decades; however, various other approaches are available for drawing school-based samples. Such sampling details can be developed by the relevant contractor, should the decision be made to conduct such surveys.

Sample-related issues to be kept in mind are that school refusals are a source of potentially important biases in survey results, as are respondent refusals; accordingly, efforts should be made to avoid such losses. It is also important to keep in mind that once a school is recruited and administration arrangements are made, the marginal costs of additional students included in the sample are very low; consequently, it is not at all costly to have a fairly large number of respondents in school-based group-administered surveys. The primary factor influencing costs is the number of schools; that is also a key factor in determining how accurately the survey represents the target population. In Monitoring the Future, surveys of more than 100 schools have always been judged necessary in order to have adequate levels of accuracy (the range for samples of 12th grade students has been from 124 to 146 schools). Other school-based surveys, such as those conducted by the Department of Education, have often used larger numbers of schools (with correspondingly higher costs). Once the sample of schools has been obtained, it is relatively cost-effective to sample at least 100 students per school.
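
The sketch below illustrates one standard way of carrying out the school-selection stage, probability-proportionate-to-size (PPS) selection via systematic sampling. The school list, enrollments, and number of selections are invented, and an actual design would first stratify by geographic area as described above.

```python
# Minimal sketch (illustrative): selecting schools with probability
# proportionate to size (PPS) using systematic sampling.
import random

def pps_systematic_sample(schools, n_schools, seed=0):
    """schools: list of (school_id, estimated_enrollment). Returns selected ids."""
    total = sum(size for _, size in schools)
    interval = total / n_schools                 # sampling interval in "students"
    start = random.Random(seed).uniform(0, interval)
    points = [start + k * interval for k in range(n_schools)]
    selected, cum = [], 0.0
    it = iter(points)
    point = next(it)
    for school_id, size in schools:
        cum += size
        while point is not None and point <= cum:
            selected.append(school_id)           # larger schools are hit more often
            point = next(it, None)
    return selected

# Toy usage: 400 schools with invented enrollments, 12 selections.
schools = [(f"school_{i:03d}", random.Random(i).randint(150, 900)) for i in range(400)]
print(pps_systematic_sample(schools, n_schools=12)[:5])
```

Because larger schools are more likely to be selected while roughly the same number of students is surveyed per school, students end up with approximately equal selection probabilities, which keeps the resulting weights from varying too widely.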

The committee recommends samples of 10,000-15,000 students per year (per grade). Among the key advantages of large sample sizes is the improved ability to provide estimates for relatively rare subgroups; race-ethnic subgroups are an obvious example, but another example is students with high military propensity, particularly women with high propensity. Another advantage of large sample sizes is that they permit the use of multiple questionnaire forms; such an approach would include key "core" information (including demographics, propensity, etc.) in all forms but could cover a wider range of topics (or a wider variety of questions about the same topics) without appearing repetitious to respondents.

Additional Options for School Surveys

Once a school has been recruited to participate in the data collection, additional opportunities arise for collecting relevant data. This temptation must be resisted, or at least considered carefully, because the risk is that attempting to exploit the additional possibilities may have the very undesirable consequence of causing the school to withdraw (or refuse to grant in the first place) its agreement to participate in the survey. With that caution clearly in mind, we note that participating schools offer an opportunity to survey counselors. Although collecting data about specific students would almost surely be inappropriate, counselor data could be used to characterize the school as a whole, including the school atmosphere as it pertains to military service, recruitment, and the like. Counselors could be surveyed about their own views, but also about their perceptions of school climate and perhaps certain relevant school practices. Counselor surveys might be Internet-based, if that seemed preferable. A modest payment (honorarium) for participation, as well as clear specification that participation is voluntary, would go a long way toward securing school agreement for such an additional option, as well as ensuring high levels of counselor participation.

Sampling Strategies for Follow-up Monitoring Surveys

The surveys of students should include respondents' names and mailing addresses (on separate computer-matched numbered pages), and respondents must be persuaded of the security of this information and thus be willing to provide it.[4] The listing of names and addresses, along with the demographic and other data that can be matched with the names, becomes an exceedingly valuable base from which to develop target follow-up samples.

[4] A valuable additional piece of information would be Social Security number, because this could be used for military record checks (to include enlistment, separation, and perhaps other useful data). However, requesting such information without a clear explanation is likely to alarm some respondents, as well as some parents and school officials. Even if an accurate explanation is provided (i.e., to permit a link to later military records, if any), that also would be likely to raise alarms. The committee recommends careful pilot testing before attempting to include Social Security numbers as part of identifying information.

The follow-up sampling strategy differs from the base-year strategy in a number of respects, most notably because the marginal cost of follow-up respondent surveys is far greater than the marginal cost of the base-year surveys. As a result, it is more important to be efficient in drawing the follow-up target samples. Because of the information available, such efficiencies are readily attainable.

The base-year surveys might involve fairly wide ranges of weights because of the variations in school size, the numbers of students selected (which can be adjusted up or down to make administrations more convenient for the schools, thus facilitating school agreement to participate), and a variety of other (lesser) factors involved in sample selection. In follow-up samples, however, selection ordinarily should be inversely proportionate to base-year sample weights; the result would be an equally weighted and highly efficient follow-up target sample. Specifically, school-to-school differences in sampling weights should be taken into account and eliminated. Thus, schools that happened to contribute relatively large numbers of respondents (each having correspondingly small base-year sample weights) would have relatively small proportions of their survey participants selected for follow-up compared with other schools. An important departure from equally weighted follow-up target samples is that respondents of particular interest to the Services should be oversampled. In particular, respondents with high military propensity should be oversampled. These are relatively rare individuals, and that might be reason enough to overselect them; more to the point, they are particularly important to the Services and thus should constitute a sufficiently large proportion of the follow-up target samples to permit analyses. In contrast, individuals who expect to enter college should be undersampled, primarily because they represent such a large proportion of 11th grade students that it is unnecessary for analysis purposes to have them constitute an equally large proportion of the follow-up target sample. These and other adjustments in the likelihood of inclusion in the follow-up target samples can be corrected through appropriate weights in analyses. Other possible adjustments include oversampling of minority group members or individuals with occupational or educational interest profiles that are of particular importance to the Services.
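
The following sketch illustrates how such follow-up selection probabilities, and the analysis weights that undo them, might be computed. The oversampling and undersampling factors, the target fraction, and the simulated respondent pool are assumptions for the example, not recommended values.

```python
# Minimal sketch (illustrative): drawing a follow-up target sample with
# selection inversely proportional to the base-year weight, oversampling
# high-propensity respondents and undersampling the college-bound. The
# analysis weight is 1 / P(selection).
import random

OVERSAMPLE_HIGH_PROPENSITY = 3.0
UNDERSAMPLE_COLLEGE_BOUND = 0.5

def followup_selection(respondents, target_fraction=0.3, seed=7):
    """respondents: list of dicts with 'id', 'base_weight', 'high_prop', 'college'."""
    rng = random.Random(seed)
    mean_w = sum(r["base_weight"] for r in respondents) / len(respondents)
    selected = []
    for r in respondents:
        p = target_fraction * mean_w / r["base_weight"]   # inverse to base-year weight
        if r["high_prop"]:
            p *= OVERSAMPLE_HIGH_PROPENSITY
        if r["college"]:
            p *= UNDERSAMPLE_COLLEGE_BOUND
        p = min(p, 1.0)
        if rng.random() < p:
            selected.append({"id": r["id"], "analysis_weight": 1.0 / p})
    return selected

# Toy usage: 1,000 base-year respondents with varying weights and statuses.
pool = [{"id": i,
         "base_weight": random.Random(i).uniform(0.5, 2.0),
         "high_prop": i % 10 == 0,
         "college": i % 2 == 0} for i in range(1000)]
print(len(followup_selection(pool)))
```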

Additional Follow-Up Options

Because the follow-up monitoring samples discussed above would not exhaust the number of base-year respondents who provide names and addresses, a variety of other special studies could be undertaken using some of these individuals. One sort of special study could target some individuals for Internet-based follow-up surveys. If that approach worked for a substantial portion of those sampled, and if it proved cost-effective and otherwise equivalent or preferable to the mail follow-up approach, then it might be possible to collect a significant portion of follow-ups via the Internet. This approach could be explored carefully and then, if successful, gradually phased in as an alternative to mailed surveys. (Under present conditions, it could not entirely supplant mailed surveys without missing a significant subset of respondents who would not be able or motivated to use Internet-based survey approaches.)

Trade-Offs Among Various Cohort-Sequential Designs

A great many designs are possible within the parameters suggested above. For purposes of illustrating the issues involved, we contrast just two design possibilities. The simpler design, which we will call Plan A, involves annual surveys of 11th grade students with a subset selected for annual follow-ups for a period of six years. This would (eventually) generate data for the age range from 16-17 to 22-23, permitting analysts to make distinctions among differences interpretable as age effects, cohort differences, and period effects (i.e., secular trends or historic changes). It would also permit relatively fine-grained tracking of individual changes in attitudes, plans, and behaviors. The second design, Plan B, involves annual surveys of both 11th and 12th grade students, with subsets from each grade selected for three follow-ups on a two-year cycle. This design would track the 11th grade respondents from ages 16-17 to ages 22-23 (like Plan A) and would track the 12th grade respondents from ages 17-18 to ages 23-24. This design (again, like Plan A) would permit distinctions among age, period, and cohort effects, and it would permit tracking of individual changes (although the two-year intervals would be less fine-grained than the one-year intervals in Plan A).
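
The sketch below simply tabulates the survey waves implied by the two plans, using single modal ages and arbitrary calendar years for compactness. Laying the design out as year-by-cohort-by-age combinations is what allows age, period, and cohort effects to be distinguished once several base-year cohorts have entered the series.

```python
# Minimal sketch (illustrative): the wave structure of Plan A and Plan B.
# Each tuple is (survey calendar year, cohort's base-survey year, modal age).
def plan_a_waves(base_years, base_age=16, n_followups=6):
    """Plan A: 11th grade base-year survey plus annual follow-ups."""
    waves = []
    for cohort in base_years:
        for k in range(n_followups + 1):               # wave 0 is the in-school survey
            waves.append((cohort + k, cohort, base_age + k))
    return waves

def plan_b_waves(base_years, n_followups=3):
    """Plan B: 11th and 12th grade base-year surveys plus biennial follow-ups."""
    waves = []
    for cohort in base_years:
        for base_age in (16, 17):                      # 11th and 12th graders
            for k in range(n_followups + 1):
                waves.append((cohort + 2 * k, cohort, base_age + 2 * k))
    return waves

# Toy usage: after a few annual base-year samples, both plans cover overlapping
# ages in overlapping years, which is what permits age/period/cohort comparisons.
a = plan_a_waves(range(2005, 2009))
b = plan_b_waves(range(2005, 2009))
print(sorted({age for _, _, age in a}), sorted({age for _, _, age in b}))
```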

Many variations are possible within each of these two designs. As one example, Plan A could be extended to seven years of follow-up so as to match the overall age span in Plan B. As another example, Plan A could involve a two-year cycle of follow-ups (like Plan B), or Plan B could involve a one-year cycle (like Plan A). Moreover, additional designs could be explored; for example, in-school surveys could be conducted with both grades, but follow-ups (on either a one-year or two-year cycle) could be carried out using subsets from only the 11th grade samples (so as to have all panel data begin with propensity measured earlier than the senior year).

It is not our intention here to arrive at a final ideal design; such decisions can better be made when more is known about the overall funding commitments, the nature of sponsorship (i.e., DoD only or shared), the likely success in obtaining school participation, and a variety of other crucial details (in which, as we know, the devil resides). For present purposes, it is sufficient to use Plans A and B as aids in illustrating some of the many trade-offs that will have to be considered in arriving at a final set of design specifications. In the remainder of this section, we focus on several key issues that should be considered in making decisions about design; these include school recruitment, the logistics of survey administration, follow-up participation and response rates, numbers of cases and amounts of data per case, and, cutting across all the others, considerations of cost.

School recruitment. Recruiting schools to participate in school-based surveys involves considerable effort and costs. If schools were willing to have both 11th and 12th grade students surveyed, the 12th grade survey (i.e., using Plan B rather than Plan A) would be a relatively low-cost addition. One of the costs, however, would be that schools could be recruited to participate for only one year rather than multiple years. If a school were to participate for more than one year, then many students would be surveyed once in 11th grade and again, a year later, in 12th grade; such an approach would add to student burden and could complicate analyses in various ways. Another cost might involve lower school participation rates, if the greater burden of surveying two grades prompted some schools to decline. (We suspect such declines would be relatively infrequent, but it is a risk that must be taken into account.)

Survey administration. There are advantages to completing all survey administrations in any one school during a single day. Not only does that tend to hold down survey administration costs (we recommend that administration be handled by a professional survey staff rather than school personnel), but it also reduces opportunities for students to discuss the survey with each other and thereby potentially contaminate responses. The logistics of managing both an 11th grade survey and a 12th grade survey in the same school at the same time (Plan B) would be somewhat more demanding than an 11th grade survey only (Plan A); per-school costs would thus be higher for Plan B than for Plan A, but certainly much less than double. One consideration affecting logistics and costs is whether identical questionnaire forms would be used for the 11th and 12th grade surveys (in Plan B). We see little need for separate forms (given that a single question can be used to identify respondents' grade level, and such a question is a necessary control in any case). The cost and logistical advantages of limiting the survey to one set of forms are evident.

Response rates. Plan B would produce a better sampling of young people age 17-18 (i.e., 12th graders) than would Plan A. Under Plan A, the data from those ages 17-18 would be obtained from mail follow-up surveys of those surveyed as 11th graders a year earlier; even with the best efforts at tracking target respondents and securing their participation, roughly one-quarter to one-third of those targeted are likely to be lost. In contrast, under Plan B, data from those ages 17-18 would be obtained by in-school surveys of 12th graders; losses in student participation would be much smaller. (The MTF experience in recent years is that 12 percent of the 12th grade sample was lost, primarily due to absence; the resulting biases can be corrected, at least in part, by using special weights that take account of the rates of recent absenteeism reported by students who did participate in the survey.)
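
A minimal sketch of an absenteeism-based weighting adjustment of the kind just mentioned follows; the weighting rule and the example values are our illustration, not the actual MTF procedure.

```python
# Minimal sketch (illustrative): weighting participating students by the
# inverse of their probability of being present, so that frequently absent
# participants stand in for absent (unsurveyed) classmates like themselves.
def absenteeism_weights(reported_days_absent_last_month, days_in_month=20):
    """Weight each participant by 1 / P(present), estimated from self-reports."""
    weights = []
    for days_absent in reported_days_absent_last_month:
        p_present = max(1e-6, 1.0 - days_absent / days_in_month)
        weights.append(1.0 / p_present)
    return weights

# Toy usage: a student absent 5 of 20 school days gets weight 1/0.75 (about 1.33),
# while a student never absent gets weight 1.0.
print([round(w, 2) for w in absenteeism_weights([0, 2, 5, 10])])
```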

Numbers of cases versus amounts of data per case. Let us for the moment make the simple assumption that Plan A would involve surveying 10,000 11th grade students each year and then following a subset (say, a target sample of 3,000) of each such sample through five or six annual follow-up surveys, whereas Plan B would survey 10,000 11th grade students plus 10,000 12th grade students each year and then follow a subset (again, 3,000) of each for three biennial follow-up surveys. Under these assumptions, Plan A would yield roughly half as many total respondents as Plan B, but it would involve nearly twice as many data collections for each. Once fully under way, each plan would provide appreciable numbers of cases for any given year and any age within the range specified (that is, roughly age 16 to age 23); moreover, except for the 12th grade data, the numbers of cases at any particular age or year would be roughly equal. But this would be accomplished differently, with Plan B providing about twice the total number of individuals and Plan A providing more fine-grained detail for each.

For panel analyses, the two-year intervals would probably provide sufficient detail for most purposes. Moreover, key questions, such as those about entering and leaving military service, could be asked with specific dates (to the nearest month, etc.) and thus provide nearly as much usable detail with a biennial follow-up sequence as with an annual sequence. Accordingly, the committee judges that the loss of fine-grained detail in the panel data under Plan B is not a severe handicap. The advantage of having twice as many total cases under Plan B as under Plan A is, however, considerable. The opportunities for cross-validation, exploration of rare subgroups, and other analytic improvements are potentially quite valuable, provided that sufficient resources are available for extensive analyses of the data.
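To make the comparison concrete, the following back-of-envelope sketch uses the illustrative targets assumed above (10,000 in-school respondents per grade per year and panel targets of 3,000 per grade); the code is ours and the figures are hypothetical planning numbers, not recommendations.

    # Back-of-envelope comparison of the two illustrative designs.
    plans = {
        "Plan A": {"in_school": 10_000, "panel": 3_000, "follow_ups": 6, "interval": 1},
        "Plan B": {"in_school": 20_000, "panel": 6_000, "follow_ups": 3, "interval": 2},
    }

    for name, p in plans.items():
        observations = 1 + p["follow_ups"]               # in-school survey plus follow-ups per panel member
        years_covered = p["follow_ups"] * p["interval"]  # years of panel coverage after the in-school survey
        contacts = p["panel"] * p["follow_ups"]          # targeted follow-up questionnaires per cohort
        print(f"{name}: {p['in_school']:,} in-school cases/year, "
              f"{observations} observations per panel member over {years_covered} years, "
              f"{contacts:,} targeted follow-up contacts per cohort")

Run with these inputs, the sketch reproduces the point made in the text: Plan B yields twice as many in-school cases and panel members, while Plan A collects nearly twice as many observations from each panel member over the same span of years, and the targeted follow-up totals are identical.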

Cost considerations. We have mentioned some of the cost considerations above, but here we focus on them specifically. The first point to be made is that costs will be determined by a great many factors, most of which we cannot (and should not) try to specify here. What we can do is focus on a few of the trade-offs and make some broad observations about cost considerations. Perhaps the broadest observation is that we cannot estimate with confidence which plan would cost more than the other. Clearly, as noted earlier, if 11th and 12th grade students were surveyed in the same schools at the same times, there would be considerable cost savings; quite possibly the Plan B school surveys would cost only half again as much as the Plan A surveys (i.e., the 12th grade survey might be added at half price).

One could weigh that 12th grade in-school survey cost against the cost of a mail follow-up of the 11th grade sample (in Plan A) one year later. It is hard to know which would be more expensive; much would depend on those devilish details. Depending on how many follow-ups were used in each, the follow-up costs might be fairly similar under the two plans. If Plan A involved six annual follow-ups and Plan B involved three biennial follow-ups of approximately twice as many people (e.g., targeting 3,000 each from the 11th and 12th grade in-school survey samples), the total numbers of targeted follow-ups would be identical and the costs would be quite similar. The Plan B follow-ups would be a bit more expensive because of the extra mailings needed to keep track of respondents in the off years. It should be noted that in the illustration above there is an extra year of data under Plan B; specifically, the 12th graders followed up a third time would have modal ages of 23-24, whereas the 11th graders followed up a sixth time (under Plan A) would be ages 22-23. If the third follow-up of 12th graders were eliminated in Plan B, the overall cost comparisons would be more similar; however, there would be less complete data available on college completion (and possible enlistment afterward) if only two biennial follow-ups were used for 12th graders.

SURVEY CONTENT

If a major purpose of the survey is to identify the determinants of propensity or to track changes in propensity and its determinants over time, then, at a minimum, it will be necessary to assess propensity, attitudes, perceived norms, self-efficacy, and the behavioral, normative, and control beliefs underlying attitudes, norms, and self-efficacy. These clearly should comprise the core questions in any survey design.

There are now fairly standardized instruments for assessing these variables (see, e.g., Fishbein et al., 2001; Ajzen and Fishbein, 1980). However, as we shall see below, although it is possible to develop measures of propensity (or intention), attitude, perceived norms, and self-efficacy without going directly to the population of interest, one must go to that population to identify the salient behavioral, normative, and control beliefs underlying these variables. Thus, formative research is necessary prior to developing a fixed-item survey instrument designed to assess underlying beliefs.

Propensity, Attitudes, Perceived Norms, and Self-Efficacy

Propensity/Intention: Propensity (or intention) is typically measured by asking a person how likely it is that he or she will (or will not) engage in the behavior in question. For example, the respondent could be asked to indicate the extent to which it is likely or unlikely that:

I will join the military (Army/Navy/Air Force/Marine Corps) sometime in the next N (months, years).

extremely unlikely ___ : ___ : ___ : ___ : ___ extremely likely

Alternatively, respondents could be asked whether they "strongly agree," "agree," "neither agree nor disagree," "disagree," or "strongly disagree" with the statement.

Attitude: Attitude refers to a person's overall feeling of favorableness or unfavorableness toward performing the behavior in question. Although there are many ways to assess attitude, the most commonly used instrument is the semantic differential (Osgood, Suci, and Tannenbaum, 1975). For example, respondents could be asked to indicate whether:

My joining the military (Army/Navy/Air Force/Marine Corps) sometime in the next N months would be:

good ___ : ___ : ___ : ___ : ___ bad
wise ___ : ___ : ___ : ___ : ___ foolish
pleasant ___ : ___ : ___ : ___ : ___ unpleasant
enjoyable ___ : ___ : ___ : ___ : ___ unenjoyable

Perceived Injunctive Norm: The perceived injunctive (or subjective) norm refers to a person's belief that important others think he or she should or should not engage in the behavior in question. Thus, for example, respondents could be asked to indicate the degree to which they agree or disagree (or think it likely or unlikely) that:

Most people who are important to me think I should join the military (Army/Navy/Air Force/Marine Corps) sometime in the next N months.

strongly agree / agree / neither agree nor disagree / disagree / strongly disagree

Perceived Descriptive Norm: The perceived descriptive norm is one's perception of what important others are actually doing vis-a-vis the behavior in question. Thus it would be important to ask questions such as:

How many people like you will join the military (Army/Navy/Air Force/Marine Corps) in the next N months?

none / very few / some / almost all / all

or:

Out of 100 people like you, how many will join the military (Army/Navy/Air Force/Marine Corps) in the next N months?

and:

How many people do you personally know who have been in or are now in the military?

Self-Efficacy/Perceived Control: Self-efficacy and perceived behavioral control refer to one's belief that he or she has the necessary skills and abilities to perform the behavior in question, even under a number of difficult circumstances. That is, it refers to the perception that one could perform the behavior if one "really wanted to." Items measuring this construct could use the semantic differential format. For example:

My joining the military (Army/Navy/Air Force/Marine Corps) in the next N months is:

up to me ___ : ___ : ___ : ___ : ___ not up to me
under my control ___ : ___ : ___ : ___ : ___ not under my control

or they could be put in terms of a certainty question, such as:

How certain are you that, if you really wanted to, you could join the military (Army/Navy/Air Force/Marine Corps) in the next N months?

certain I cannot ___ : ___ : ___ : ___ : ___ certain I can
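Once items such as these are fielded, scoring the direct measures is straightforward. The sketch below shows one conventional approach, offered only as an illustration: bipolar and Likert-type items are coded 1 to 5, and multi-item constructs such as attitude and self-efficacy are averaged. The codings and item labels are our assumptions, not requirements of the design.

    # Illustrative scoring of the direct measures described above (1-5 codings
    # and simple averaging are assumptions made for this example).
    from statistics import mean

    respondent = {
        "intention": 4,                      # extremely unlikely (1) ... extremely likely (5)
        "attitude_items": [4, 5, 3, 3],      # good/bad, wise/foolish, pleasant/unpleasant, enjoyable/unenjoyable
        "injunctive_norm": 2,                # strongly disagree (1) ... strongly agree (5)
        "descriptive_norm": 3,               # none (1) ... all (5)
        "self_efficacy_items": [5, 4, 4],    # up to me, under my control, certainty item
    }

    scores = {
        "intention": respondent["intention"],
        "attitude": mean(respondent["attitude_items"]),                           # 3.75
        "injunctive_norm": respondent["injunctive_norm"],
        "descriptive_norm": respondent["descriptive_norm"],
        "self_efficacy": round(mean(respondent["self_efficacy_items"]), 2),       # 4.33
    }
    print(scores)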

Underlying Behavioral, Normative, and Control Beliefs

Generally speaking, there are also standardized items that can be used for assessing behavioral, normative, and control beliefs. For example, behavioral beliefs are usually measured with items using the following format:

My performing (Behavior X) will lead to/prevent (Outcome Y).

extremely unlikely ___ : ___ : ___ : ___ : ___ extremely likely

Injunctive normative beliefs are usually measured with items using the following format:

(Referent A) thinks I should perform (Behavior X).

extremely unlikely ___ : ___ : ___ : ___ : ___ extremely likely

Descriptive normative beliefs are usually measured with items like:

(Referent A) performed or is currently performing (Behavior X).

yes / no

Efficacy beliefs are usually measured with items using the following format:

How certain are you that, if you really wanted to, you could perform (Behavior X), even if (Barrier A) were present?

certain I cannot ___ : ___ : ___ : ___ : ___ certain I can

What the above illustrations should make clear is that, prior to developing a fixed-item instrument to assess the beliefs that underlie attitudes, norms, and self-efficacy, one must first conduct formative research to identify the outcomes, referents, and barriers that are salient for the population in question. Thus one could go to a small sample of the population in question and ask the following open-ended questions:

To Identify Salient Outcomes:

- What do you see as the advantages of your joining the military (or a particular Service) in the next N months? That is, what are the good things that would happen if you joined the military in the next N months?
- What do you see as the disadvantages of your joining the military (or a particular Service) in the next N months? That is, what are the bad things that would happen if you joined the military in the next N months?
- What else comes to mind when you think about joining the military (or a particular Service) in the next N months?

To Identify Relevant Referents (and Important Others):

- Please list those people who would approve of your joining the military in the next N months.
- Please list those people who would disapprove of your joining the military in the next N months.
- Please list any other people you would talk to or whose opinions you would consider if you were trying to decide whether or not to join the military in the next N months.

To Identify Salient Barriers and Facilitators:

- Please list those things that would facilitate or make it easy for you to join the military in the next N months.
- Please list those things that would prevent or make it hard for you to join the military in the next N months.

Responses to the above questions can be content analyzed, and the most frequently mentioned (i.e., the most salient) outcomes, referents, barriers, and facilitators can be identified. This information can then be used to develop fixed-item survey questions to assess underlying behavioral, normative, and control or self-efficacy beliefs. This type of formative research is also critical as a first step in developing advertising strategy.

Outcome Evaluation

In addition to assessing behavioral beliefs (or outcome expectancies), it is also necessary to assess the evaluation of the salient outcomes. Most often this is done as follows:

(Outcome A) is:

good ___ : ___ : ___ : ___ : ___ bad

However, with YATS, rather than evaluating outcomes, respondents were asked to indicate the extent to which a given outcome was important to them. Thus it is necessary to consider whether to assess importance, or value, or both.5 Similarly, in assessing behavioral beliefs, should one simply ask whether joining the military (or a specific Service) will lead to each of the outcomes, or should a YATS-type question be retained that asks whether a given outcome is more likely to be obtained from the military or from a civilian job? Moreover, if one takes this comparative approach, is the appropriate comparison military versus civilian or military versus college? Irrespective of the answer to this question, we would recommend that the core set of questions include assessment of propensity (or intentions) both to continue one's education (e.g., go to college) and to join the civilian workforce (e.g., get a job).

5 Because there were very few changes in ratings of importance over time, and because, with few exceptions, judgments of importance were not related to propensity, we recommended dropping importance in our earlier letter report (National Research Council, 2000).
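In the reasoned action tradition on which this discussion draws (e.g., Ajzen and Fishbein, 1980; Fishbein et al., 2001), belief and evaluation items of this kind are typically combined into expectancy-value composites: each belief about an outcome is multiplied by the evaluation of that outcome, and the products are summed to yield an indirect estimate of attitude (with analogous composites for normative and control beliefs). The sketch below illustrates the computation; the outcomes and numeric codings are hypothetical and would in practice come from the formative research described above.

    # Expectancy-value composite for behavioral beliefs: the indirect estimate of
    # attitude is the sum, over salient outcomes, of belief strength times outcome
    # evaluation. Outcomes and codings below are purely illustrative.
    belief_strength = {          # likelihood that joining leads to the outcome, coded -2 ... +2
        "pay for college": 2,
        "be away from family": 1,
        "learn job skills": 2,
    }
    outcome_evaluation = {       # how good or bad the outcome is judged to be, coded -2 ... +2
        "pay for college": 2,
        "be away from family": -2,
        "learn job skills": 1,
    }

    indirect_attitude = sum(belief_strength[o] * outcome_evaluation[o] for o in belief_strength)
    print(indirect_attitude)     # 2*2 + 1*(-2) + 2*1 = 4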

Finally, YATS assessed beliefs about each Service as well as the military in general. Thus another question that must be addressed is whether a survey should ask about the military, about each Service separately, or both. Similarly, should it include the reserve forces?

Distal Variables

Distal variables may be related to propensity, but theoretically they are assumed to exert their influence indirectly by influencing underlying behavioral, normative, and control beliefs. For example, gender differences in propensity should be explained by finding that men and women hold different behavioral, normative, or control beliefs about joining the military. Similarly, the finding that young adults from the South are more inclined to join the military than those from the North should be related to differences in the beliefs held by young adults in these two geographic areas. Thus an important question to consider is which distal variables should be assessed.

First, it seems reasonable to consider demographic variables that are known to be related to enlistment. These would include such variables as age, gender, ethnicity, socioeconomic status, education, geographic location (region/urban-rural/state), and employment status.

Second, as pointed out earlier, particularly with respect to youth, a very important potential influence on propensity is the image (or prototype) that one has of the kind of person who pursues a given choice option (e.g., the image of the kind of person who enlists in the military). Not only is it important to assess the prototype per se, but of equal interest is the extent to which a person's own self-concept maps onto (i.e., is consistent or inconsistent with) the prototype. As a result, we recommend that assessments of both prototype and self-concept be included in the set of core items comprising the survey instrument. Specifically, participants could be asked to rate "a person who enlists in the military," as well as themselves, on a series of semantic differential scales, such as "wise/foolish," "aggressive/timid," "works well with others/works best alone," "strong/weak," and so on. Formative research would be necessary to identify the scales most appropriate for assessing a military prototype. Finally, it may also be useful to consider other distal variables, such as attitudes (e.g., toward the military, education, or war), as well as personality or individual difference variables, such as intelligence and sensation seeking (Zuckerman, 1979; Palmgreen et al., 1995).

The above considerations are focused on survey content. We have tried to identify the types of questions that are needed to identify and monitor changes in propensity and its determinants. The data obtained from these questions provide the kinds of information necessary for developing effective media campaigns as well as for evaluating the effectiveness of advertising (or other types of interventions) designed to increase propensity.
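Returning to the prototype and self-concept measures recommended above, one simple way to quantify how closely a respondent's self-concept maps onto the military prototype is a profile-similarity index computed across the semantic differential scales. The sketch below uses mean absolute distance rescaled to a 0-1 similarity score; this is one of several defensible indices and is offered only as an illustration, with placeholder scales standing in for those that formative research would identify.

    # Illustrative prototype/self-concept similarity across semantic differential
    # ratings on 1-7 bipolar scales. The scales are placeholders; the real set
    # would come from formative research.
    prototype = {"wise/foolish": 5, "aggressive/timid": 6, "team/solo": 6, "strong/weak": 6}
    self_view = {"wise/foolish": 5, "aggressive/timid": 3, "team/solo": 6, "strong/weak": 5}

    # Mean absolute distance: 0 = identical profiles, 6 = maximally different on 1-7 scales.
    distance = sum(abs(prototype[s] - self_view[s]) for s in prototype) / len(prototype)
    similarity = 1 - distance / 6      # rescaled: 0 = maximally dissimilar, 1 = identical
    print(round(distance, 2), round(similarity, 2))   # 1.0 0.83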

However, as indicated above, there are many other issues that need to be addressed in designing survey research. For example, in order to evaluate the effectiveness of various advertising campaigns or changes in recruitment policy, it is necessary to obtain data at regular intervals over time.

We have recommended the use of a cohort-based sequential sample survey with a longitudinal component that in our view will provide the information necessary both for tracking propensity and its determinants and for developing more effective communications to increase and maintain the pool of youth with the propensity to join the military. Moreover, by following the cohort on a regular basis, it will be possible, at least in part, to evaluate the effects of current events, as well as changes in advertising and recruitment policies. At the same time, however, we recognize that other data need to be obtained in order to fully evaluate advertising effectiveness and other recruitment initiatives. For this reason, in our earlier letter report on YATS, we recommended that "a portfolio of surveys at different time intervals replace the current annual YATS administration" (National Research Council, 2000). Let us thus briefly consider two other possible surveys.

Influencer Surveys

As noted earlier, there is considerable evidence that the decision to join the military is strongly influenced by other people. While much of this influence should be assessed through measures of injunctive and descriptive norms, it is important to recognize that these normative beliefs are usually quite consistent with reality. One usually is quite accurate in one's beliefs about the normative proscriptions and behaviors of relevant others. Thus, it may sometimes be necessary to change the beliefs and attitudes of influencers. In order to do this, however, one must first understand why influencers (parents, teachers, friends, etc.) support or oppose joining the military, as well as why they are (or are not) inclined to recommend military service. Since this is an entirely different target audience, a separate influencer survey is recommended.

It is important, however, to distinguish between two roles of an influencer. On one hand, an influencer can encourage/discourage or support/oppose enlistment (i.e., act as an injunctive normative influence; see, e.g., National Research Council, 2003; Rutter, 1980). On the other hand, influencers may be viewed as transmitters or evaluators of advertising or other types of interventions. They may influence and shape the way a person evaluates a given advertisement, advertising campaign, or recruitment policy (Hornik, 1997). Considerations such as these suggest a separate survey to more precisely track exposure to, and to evaluate the effectiveness of, advertising campaigns or other policy changes.

Exposure to Media and/or Recruiters

Although there are many ways to monitor exposure, the committee recommends that, if at all possible, respondents be presented with current TV/print/radio/Internet ads (plus one or more dummy ads) and be asked whether they have seen or read each one and, if so, how many times. This can easily be done if the survey instrument is computer assisted or presented on the Internet.6 In addition to these subjective estimates of exposure, we would also recommend that gross rating points, as well as other indices of time or space purchased, be tracked.

6 As indicated above, because of changing technology, it is possible that follow-ups of the cohort sample could eventually be done using the Internet. If this were the case, tracking could also be accomplished as part of a cohort-based sequential sample survey design.

Ad evaluation: For each of the ads to which the respondent reports exposure, a series of questions such as the following could be asked:

Did you believe the ad? yes / no
Did the ad tell you something you didn't already know? yes / no
Was the ad appropriate for you (or people like you)? yes / no
Was the ad interesting? yes / no
Did you like or dislike the ad? like / dislike

To assess indirect paths of effect: For each ad (whether or not the respondent reported exposure), a series of questions such as the following could be asked:

Which of the following people have you talked to about this ad?

Respondents would be provided with a list of influencers (based on open-ended surveys), as well as "none" and "other (please specify)" alternatives, and asked to check all that apply. For each influencer checked: Did (Referent A) like or dislike the ad?
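One advantage of including dummy ads is that claimed exposure can be adjusted for over-reporting. The sketch below shows one simple adjustment that a survey contractor might consider, offered purely as an illustration: the average rate at which respondents "recognize" ads that never aired is subtracted from the claimed recognition rate for each real ad. The data, ad labels, and the particular adjustment rule are our assumptions, not an established procedure from the text.

    # Illustrative correction of claimed ad recognition using dummy ads. If respondents
    # "recognize" ads that never aired, they are over-reporting, so the average
    # false-recognition rate is subtracted from each real ad's claimed recognition rate.
    real_ads = ["army_tv_spot", "navy_web_banner"]     # hypothetical ad labels
    dummy_ads = ["dummy_ad_1", "dummy_ad_2"]

    claimed = {                  # share of respondents claiming to have seen each ad
        "army_tv_spot": 0.46, "navy_web_banner": 0.22,
        "dummy_ad_1": 0.08, "dummy_ad_2": 0.12,
    }

    false_recognition = sum(claimed[a] for a in dummy_ads) / len(dummy_ads)        # 0.10
    adjusted = {ad: round(max(claimed[ad] - false_recognition, 0.0), 2) for ad in real_ads}
    print(adjusted)              # {'army_tv_spot': 0.36, 'navy_web_banner': 0.12}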

To assess interpersonal contacts: In addition to asking whether the respondent discussed ads (or policies) with various influencers, it would also be important to know whether they discussed joining the military with these others. Thus respondents could also be asked such things as:

Who have you talked to about joining the military? Check all that apply.

Respondents would be provided with a list of influencers (based on open-ended surveys). For each influencer checked: Did (Referent A) support or oppose your joining? support / oppose

INTERACTION OF PURPOSE, CONTENT, AND DESIGN

In this chapter, we have recommended a survey design and a set of core questions to assess and monitor changes in propensity and its determinants. We have also recognized that influencer and advertising tracking surveys are necessary to gain a more complete understanding of advertising effectiveness and the role that advertising, recruiters, and other influencers play in the recruitment process. We recommend at least two types of surveys. The first is an annual or semiannual cohort-based sequential sample survey with a longitudinal component to monitor changes in, and provide an in-depth understanding of, the determinants of propensity. In addition, this survey should include a more general set of questions about exposure to media and interactions with recruiters and other influencers. In different years (or at different times), questions about more distal attitudes and values could be assessed.

While such an annual or semiannual survey should address most of the key questions concerning propensity and recruitment, assessments of whether one has been exposed to a current advertising campaign, whether one has talked to others about that campaign, or whether one has talked to a recruiter will clearly vary as a function of advertising expenditure and military policies concerning incentives, as well as the number and placement of recruiters. Thus, the second recommended survey is a brief, continuous tracking survey to assess exposure to specific events, advertisements (or other recruitment policy changes), and the extent to which the respondent engaged in discussions with influencers about the campaign or about joining the military. This survey should also monitor respondents' evaluations of the ads they were exposed to, as well as provide data that allow one to track changes in propensity, attitudes, norms, and self-efficacy over time. We recommend that separate studies (or surveys) be conducted to address specific populations, such as influencers and younger populations.
