School & Health: Our Nation's Investment

6
Challenges in School Health Research and Evaluation

OVERVIEW OF RESEARCH AND EVALUATION

One of the primary arguments for establishing comprehensive school health programs (CSHPs) has been that they will improve students' academic performance and therefore improve the employability and productivity of our future adult citizens. Another argument relates to public health impact—since one-third of the Healthy People 2000 objectives can be directly attained or significantly influenced through the schools, CSHPs are seen as a means to reduce not only morbidity and mortality but also health care expenditures. It is likely that the future of CSHPs will be determined by the degree to which they are able to demonstrate a significant impact on educational and/or health outcomes.

Evaluation of any health promotion program poses numerous challenges such as measurement validity, respondent bias, attrition, and statistical power. The situation is even more challenging for CSHPs, for several reasons. First, these programs comprise multiple, interactive components, such as classroom, family, and community interventions, each employing multiple intervention strategies. Therefore, it is often difficult to determine which intervention components and specific messages, activities, and services are responsible for observed treatment effects. Second, given the broad scope of CSHPs, it is difficult to determine what the realistic outcomes should be, and measuring these outcomes in school-age children (be it the actual behavior or precursors such as communication skills) is often problematic, especially when outcomes have to do
with such sensitive matters as drug use or sexual behavior. Finally, though some aspects of a CSHP (e.g., classroom curricula) can be replicated, many aspects of the CSHP (e.g., staffing patterns, local norms, and community resources) differ across schools, cities, states, and regions. Consequently, the results of even the most rigorous evaluations may not be generalizable to other settings.

This chapter examines these and other issues related to the evaluation of CSHPs. First, general principles of research and evaluation, as applied to school health programs, are reviewed. Then the challenges and difficulties associated with research and evaluation of comprehensive, multi-component programs are examined. Finally, the difficulties and uncertainties related to research and evaluation of even a single, relatively well-defined component of comprehensive programs—the health education component—are considered. The committee felt that it was appropriate to focus on health education in this chapter because of the relative maturity of research in this area. Specific aspects of health education research have been chosen that highlight challenges in evaluating school-based interventions, as well as in interpreting ambiguous, if not conflicting, results relevant to other components of the comprehensive program. Discussion of the research and evaluation of other components of CSHPs—health services, nutrition or foodservices, physical education, and so forth—is found in the general discussion of these components in earlier chapters.

Types of School Health Research

Research and evaluation of comprehensive school health programs can be divided into three categories: basic research, outcome evaluation, and process evaluation.

Basic Research

An ultimate goal of CSHPs is to influence behavior. Basic research in CSHPs involves inquiry into the fundamental determinants of behavior as well as mechanisms of behavior change.
Basic research includes examination of factors thought to influence health behavior—such as peer norms, self-efficacy, legal factors, health knowledge, and parental attitudes—as well as specific behavior change strategies. Basic research often employs epidemiologic strategies, such as cross-sectional or longitudinal analyses, as well as pilot intervention studies designed to isolate specific behavior change strategies, although often on a smaller scale than full outcome trials. A primary function of basic behavioral research is to inform the development of interventions, whose effects can then be tested in outcome evaluation trials.

Outcome Evaluation

Outcome evaluation includes empirical examination of the impact of interventions on targeted outcomes. Possible outcomes (or dependent variables) include health knowledge, attitudes, skills, behaviors, biologic measures, morbidity, mortality, and cost-effectiveness. Interventions (or independent variables) include specific health education curricula, teaching strategies, organizational change, environmental change, or health service delivery models. This type of evaluation in its most basic form resembles the randomized clinical trial with experimental and control groups, along with the requisite null hypothesis assumptions and concern for internal and external validity. Outcome evaluation can further be divided into three stages: efficacy, effectiveness, and implementation effectiveness trials (Flay, 1986).

Efficacy. Efficacy testing involves the evaluation of an intervention under ideal, controlled implementation conditions. During this stage, for example, teachers may be paid to ensure that they implement a health curriculum, or other motivational strategies may be used to ensure fidelity. The goal of efficacy testing is to determine the potential effect of an intervention, with less concern for feasibility or replicability. In drug study parlance, during this stage of research efforts are made to ensure that the "drug" is taken so that biologic effects, or lack thereof, can be attributed to the drug rather than to degree of compliance.

Effectiveness. In effectiveness trials, interventions are implemented under real-world circumstances with the associated variations in implementation and participant exposure.
Effectiveness trials help determine if interventions can reliably be used under real-world conditions and the extent to which effects observed under efficacy conditions are reproduced in natural settings. Some programs, despite being efficacious, may not be effective if they are difficult to implement or are not accepted by staff or students. Effectiveness research is of particular concern because the results of efficacy testing and, to a lesser extent, of effectiveness trials may not always be generalizable to the real world.

Implementation Effectiveness. In implementation effectiveness trials, variations in implementation methods are manipulated experimentally and outcomes are measured (Flay, 1986). For example, the outcomes can be compared when a CSHP is implemented with or without a school
coordinator or when a health education program is implemented by peers rather than adults.

Process Evaluation

Once an intervention has demonstrated adequate evidence for efficacy and effectiveness, it can be assumed that replications of the intervention will yield effects similar to those observed in prior outcomes research trials. The validity of this assumption is enhanced when multiple effectiveness trials have been successfully conducted under varying conditions and the intervention is delivered with fidelity in a setting and with a target population similar to those used in the initial testing. It is at this point that process evaluation becomes the desired level of assessment. The goal of process evaluation is not to determine the basic impact of an intervention but rather to determine whether a proven intervention was properly implemented, and what factors may have contributed to the intervention's success or failure at the particular site. Implementation and/or participant exposure can be used as proxies for formal outcome evaluation. Key process evaluation strategies include implementation monitoring (e.g., teacher observation), quality assurance, and assessing consumer reactions (e.g., student, teacher, and parent response to the program). Evaluation at this level may include some elements of outcome evaluation. Desired outcomes are often stated as objectives to be achieved by the program, which can be evaluated pre- and post-intervention, and may include a comparison group or references to normative data. Random selection and assignment of participants are typically not employed, however, and the level of rigor used to collect and analyze data is often less stringent than in formal outcome evaluation. This type of evaluation is sometimes referred to as program evaluation.
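The pre/post objective check described above can be sketched in a few lines; the survey scores, the 10-point target, and the helper name below are invented for illustration:

```python
# Hypothetical program-evaluation check: did the mean pre-to-post gain on a
# health-knowledge survey reach the program's stated objective?
# All scores and the target figure are invented.

def meets_objective(pre_scores, post_scores, target_gain):
    """Return True if the mean pre-to-post gain reaches the stated objective."""
    mean_pre = sum(pre_scores) / len(pre_scores)
    mean_post = sum(post_scores) / len(post_scores)
    return (mean_post - mean_pre) >= target_gain

# Example: the objective calls for a 10-point mean gain.
pre = [62, 58, 70, 65, 60]
post = [75, 70, 78, 74, 72]
print(meets_objective(pre, post, target_gain=10))  # mean gain here is 10.8
```

A real program evaluation would typically add a comparison group or normative reference, as the text notes, but the core logic is simply comparing observed change against a stated objective.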
Although program evaluation can include rigorous design and analyses, in many real-world program evaluations the assessment is secondary to the intervention. Such evaluations often forgo randomized designs, control groups, and complex statistics. The evaluation is adapted to the intervention, rather than the inverse. For example, pragmatic issues, more than experimental design, often determine sample size and which sites are assigned to treatment or comparison conditions. In basic research and outcome evaluation, on the other hand, evaluation is the principal reason that the intervention is being conducted; pragmatic issues often yield to methodologic concerns, and evaluation procedures largely are determined prior to initiating intervention activities.
Linking Outcome and Process Evaluations

Although outcome and process evaluation are described above as being sequential, the two often are conducted concurrently by linking process data to outcome data in order to determine causal pathways. One application of linking process and outcome data is the dose–response analysis—measuring the relationship between intervention dose and level of outcomes. For example, student behavioral outcomes can be examined relative to levels of teachers' curriculum implementation in a health education study or to students' level of clinic usage in a health services study. A positive dose–response relationship is seen as evidence for construct validity—that is, observed outcomes are attributed to the intervention rather than to other influences. Numerous health education studies have established a dose–response relationship between curriculum exposure and student outcomes (Connell et al., 1985; Parcel et al., 1991; Resnicow et al., 1992; Rohrbach et al., 1993; Taggart et al., 1990). Less is known about dose–response in other components of CSHPs.

Who Conducts the Research?

The various types of school health research are conducted by a diverse group of professionals. Basic research and outcome evaluation are typically conducted by doctoral-level professionals from university and freestanding research centers, often with funding from the federal government (though such studies also are supported by private foundations or corporations). Evaluating CSHPs at the level of basic research or outcome evaluation is largely beyond the fiscal and professional capacity of most local and even state education agencies. Process evaluation, on the other hand, can be conducted by local education agencies, perhaps in partnership with local public health agencies.
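A dose–response analysis of the kind described under "Linking Outcome and Process Evaluations" can be sketched as a simple correlation between implementation level and outcome; the classroom figures below are invented for illustration:

```python
# Hypothetical dose-response check: correlate the percentage of a curriculum
# each teacher delivered (dose) with a classroom-level outcome score.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

dose = [20, 40, 55, 70, 90]      # percent of lessons delivered, per classroom
outcome = [48, 52, 55, 60, 66]   # mean behavior score, per classroom
print(round(pearson_r(dose, outcome), 2))
```

A strong positive coefficient is the kind of evidence for construct validity the text describes, though a full analysis would model classrooms as clusters rather than treat them as independent points.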
Many models of CSHPs include an evaluation component, and it is important to delineate what type of evaluation schools and education agencies can reasonably be expected to conduct at the local level. Although carried out by research professionals, basic research and outcome evaluation should not be abstract academic pursuits that are ends in themselves. Greater interaction is needed between researchers and those who actually implement programs. It would be desirable to stimulate and support research and evaluation alliances among colleges of education, schools of public health, and colleges of medicine. Bringing together the expertise from all three sectors in school health research and evaluation centers may enhance the understanding and interaction between these sectors and produce research and evaluation methods that can address cross-sector issues more accurately. This also will lead to
developing programs that can be disseminated more easily and to reducing the number of researchers working in isolation.

Uses for Research and Evaluation

Basic research, outcome evaluation, and process evaluation are also conducted for different audiences and purposes. The first two are largely intended to build scientific knowledge and are generally published in the peer-reviewed literature. The latter generally is used to demonstrate the feasibility of an intervention, as well as to document that program implementation objectives were met and that funds were properly spent. Such reports are typically requested by or intended for state education agencies, local education agencies, or funding sources that may have sponsored the local project. Local program evaluations of pilot programs also are used to justify expanding dissemination efforts.

All three types of evaluation can contribute to the development and dissemination of comprehensive school health programs, although it is important that they be applied in their proper sequence. Process evaluation studies are inappropriate for demonstrating intervention efficacy or measuring cost-effectiveness, just as basic research approaches may go beyond what is necessary for local program evaluation. To merit dissemination, programs should first undergo formal experimental efficacy and effectiveness testing; lower standards may result in adoption of suboptimal programs and ultimately impair the credibility of school health programs among their educational and public health constituencies (Ennett et al., 1994).

METHODOLOGICAL CHALLENGES

Although traditional experimental studies using control or comparison groups are appropriate for testing individual program components and specific intervention strategies, this may not be the case for the overall CSHP, which is a complex entity and varies from site to site.
In a recent discussion of methods to evaluate such complex systems as CSHPs, Shaw (1995) proposed that the use of the classic experimental design to conduct outcome evaluations may be outmoded and inadequate for several reasons. First, the randomized clinical trial, with its tightly controlled and defined independent and dependent variables, cannot measure and capture large-scale, rapidly changing systems. Traditional experimental design ignores the need for timely formative descriptive data, maintains the artificial roles of the researcher as external expert and the subject as passive recipient of a defined treatment, and fails to recognize the complex nature of multifaceted programs that vary according to community needs.
Furthermore, there may be ethical dilemmas in randomly assigning students to treatment versus control groups when children's health and well-being are at stake. It will be difficult—and possibly not feasible—to conduct traditional randomized trials on entire comprehensive programs. However, interventions associated with individual program components should be developed and tested by using rigorous methods that involve experimental and control groups, with the requisite concern for internal and external validity. In this section, some of the methodological challenges of demonstrating program impacts are examined.

Challenges in Assessing Validity

A goal of studying CSHPs at the level of efficacy testing is to measure the extent to which programs produce the desired outcomes (internal validity)—that is, to determine whether there is a causal relationship between the independent variable (CSHP) and defined outcomes such as knowledge, health practices, or health status.

Defining the Independent Variable

The first measurement challenge is the difficulty in defining the independent variable (the CSHP), or "treatment." Knapp (1995) has described this dilemma: "The 'independent variable' is elusive. It can be many different kinds of things, even within the same intervention; far from being a fixed treatment, as assessed by many research designs, the target of study is more often a menu of possibilities." Ironically, the most successful programs—which are, in fact, comprehensive, multifaceted, interdisciplinary, and well integrated into the community—may be the most difficult to define and segregate into components readily identifiable as the independent variable. It may be impossible, for example, to separate effects of the school from those of the community (Perry et al., 1992). This poses an important assessment dilemma.
While it is vital that comprehensive programs be evaluated as a whole (Lopez and Weiss, 1994), it is unlikely that any individual program could be replicated in its entirety in a different community with its varying infrastructure, needs, and values. Thus, internal validity—the extent to which the effectiveness of the entire program is being accurately measured—may be high, but external validity—the extent to which the findings can be generalized and replicated beyond a single setting—is sacrificed. Because of limited resources, one might wish to prioritize individual program components based on their relative efficacy. However, the overall effect of comprehensive programs may well be more than, or different from, the sum of their parts. Using a factorial design to examine the effects of individual components or combinations of components would require an unwieldy number of experimental conditions and a large sample size. Thus, the independent variables in a CSHP not only may be difficult to define and measure, but it is unlikely that a consensus on what should comprise the intervention can or even should be reached.

Defining the Dependent Variable

In similar ways, defining the appropriate, feasible, and measurable outcomes (dependent variables) of a CSHP is equally challenging. Is it necessary to use change in health-related behaviors, such as smoking or drug use, to measure the effectiveness of health education programs, or is the acquisition of knowledge and skills sufficient? If behavior change outside the school is required to declare effectiveness, this would seem to represent an educational double standard. For example, the quality and effectiveness of mathematics education are measured by determining mathematics knowledge and skills, using some sort of school-based assessment, not by determining whether the student actually balances a checkbook or accurately fills out an income tax form as an adult. Likewise, the quality of instruction in literature or political science is measured by the acquisition of knowledge, not by whether the student writes novels, reads poetry, votes, or becomes a contributing citizen. Similarly, should appropriate outcomes for school health services be improved health status, behaviors, and long-term health outcomes, or is simply access to and utilization of services a sufficient end point? Is a reduction in absenteeism a proxy for improved health status and a reasonable indicator of health outcomes?
Dependent variables used to measure the effectiveness of school-linked health services have included linking students with no prior care to health services, decreased use of the emergency room for primary care, identification of previously unidentified health problems, access to and utilization of services by students and families, perceptions and health knowledge of students and their parents, decreased involvement in risk behaviors, and health status indicators (Glick et al., 1995; Kisker et al., 1994; Lewin-VHI and Institute of Health Policy Studies, 1995). Some of these measures simply determine whether school services provide access and utilization, whereas other measures look for a change in health status and behavior. However, if improved health status and behaviors are declared to be the expectation for school health services, does this hold the school to higher standards than those of other health care providers? The committee points out that, although influencing health behavior
and health status are ultimate goals of CSHPs, such end points involve personal decisionmaking beyond the control of the school. Other factors—family, peers, community, and the media—exert tremendous influence on students, and schools should not bear total responsibility for students' health behavior and health status. Schools should be held accountable for conveying health knowledge, providing a health-promoting environment, and ensuring access to high-quality services; these are the reasonable outcomes for judging the merit of a CSHP.1 Other outcomes—improved attendance, better cardiovascular fitness, less drug abuse, or fewer teen pregnancies, for example—may also be considered, but the committee believes that such measures must be interpreted with caution, since they are influenced by personal decisionmaking and factors beyond the control of the school. In particular, null or negative outcomes for these measures should not necessarily lead to declaring the CSHP a failure; rather, they may imply that other sources of influence on children and young people oppose and outweigh the influence of the CSHP.

Other Issues

In addition to the above difficulties, all of the potential biases and challenges inherent in any research also apply. Serious threats to validity in measuring effects of CSHPs include:

- the Hawthorne effect—positive outcomes simply due to being part of an investigation, regardless of the nature of the intervention;
- self-reporting biases—responding with answers that are thought to be "correct" and socially desirable;
- Type III error—incorrectly concluding that an intervention is not effective, when in fact ineffectiveness is due to the incorrect implementation of the intervention;
- ensuring even and consistent distribution of the intervention;
- sorting out effects of confounding and extraneous variables;
- isolating effective ingredients of multifaceted programs;
- control groups that are not comparable;
- differential and selective attrition in longitudinal studies;
- inadequate reliability and validity of measurement tools; and
- vague or inadequate conceptualization of study variables.

1 This view is consistent with earlier discussion in this chapter that for the local school, the desired level of evaluation is process evaluation. If the school is providing health curricula and health services that have been shown through basic research and outcome evaluation to produce positive health outcomes, the committee suggests that the crucial question at the school level should be whether the interventions are implemented properly.
Another problem in drawing conclusions from reported research is "reporting bias"—the fact that positive findings tend to be reported in the literature while studies with negative or inconclusive results often go unpublished. It is also important to remember that results that are statistically significant may not always have educational and public health significance.

Challenges Related to Feasibility

The kinds of large-scale research studies necessary to assess long-term outcomes of CSHPs are extremely costly and require extensive coordination. Since such programs are usually implemented for entire schools, communities, regions, or states, a majority of the children who participate are at relatively low risk for a number of outcomes of potential relevance. In addition, often only small to moderate outcome effects are sought. Hence, sample size needs are large, particularly when the unit of measurement is the school or the community rather than the individual.

Once efficacy and effectiveness have been demonstrated, the problem of developing a feasible program evaluation plan is compounded by the lack of evaluation expertise at the local or regional level and by inadequate or incompatible information systems for collecting, analyzing, and disseminating information. Local planners often need assistance in selecting and implementing evaluation strategies and in identifying means to make existing data more useful. For school health education, there are numerous guidelines and evaluation manuals, from the Centers for Disease Control and Prevention (CDC), the Department of Health and Human Services' Center for Substance Abuse Prevention at the Substance Abuse and Mental Health Services Administration, and the Education Development Center, to help states develop an evaluation plan.
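The sample-size penalty of using the school rather than the individual as the unit of assignment can be illustrated with the standard design-effect formula, 1 + (m − 1) × ICC; the enrollment and intraclass-correlation figures below are invented for illustration:

```python
# Sketch of the cluster-randomization "design effect": randomizing intact
# schools inflates the sample size needed relative to randomizing students.
# The numbers are illustrative only.

def design_effect(cluster_size, icc):
    """Variance inflation factor for cluster designs: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def clusters_needed(n_individual, cluster_size, icc):
    """Whole clusters per arm needed to match an individually randomized n."""
    inflated_n = n_individual * design_effect(cluster_size, icc)
    return -(-inflated_n // cluster_size)  # ceiling division

# A trial needing 400 students per arm under individual randomization,
# run instead in schools of 100 students with an intraclass correlation of 0.02:
print(design_effect(100, 0.02))         # inflation factor of roughly 3
print(clusters_needed(400, 100, 0.02))  # roughly 12 schools per arm
```

Even a small intraclass correlation roughly triples the required sample here, which is why studies that randomize schools or communities need so many participating sites.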
The national evaluation plan for the Healthy Schools, Healthy Communities Program provides helpful information for the evaluation of school health services (Lewin-VHI and Institute of Health Policy Studies, 1995). This plan is facilitated by a standardized data collection system and marks the first time that health education and health services will be systematically analyzed with a management information system that records different types of health education interventions, utilization of health services, and outcomes.

CHALLENGES AND FUTURE DIRECTIONS FOR SCHOOL HEALTH EDUCATION RESEARCH

Health education is one of the essential components of CSHPs. As
described in earlier chapters, health instruction has taken place in schools for many years, and the field is reasonably well defined and developed compared to some of the other aspects of a CSHP. Health education research has been an active field, but considerable knowledge gaps exist and research findings are often ambiguous, unexpected, or sometimes seemingly contradictory. This section focuses on some of the challenges and unresolved questions in classroom health education and suggests issues that merit further study.

Effects of Comprehensive Health Education

The preponderance of school health education research has consisted of outcome evaluations focusing on categorical risk behavior, such as smoking, drug use, sexual behavior, and nutrition. A few notable studies have examined several risk behaviors simultaneously—such as nutrition, physical activity, and smoking—as risk reduction interventions for cardiovascular disease or cancer (Luepker et al., 1996; Resnicow et al., 1991) or have looked at efforts to prevent drug, alcohol, and tobacco abuse (Pentz, 1989a), but there have been very few studies that evaluate comprehensive, multitopic health education programs (Connell et al., 1985; Errecart et al., 1991). The lack of evaluation studies of comprehensive health education is to a large extent the result of how school health research has been funded at the federal level. Generally, health concerns are divided into categorical areas for research and demonstration funding; the result is that funding agencies are interested in funding only research and development projects that address their particular disease area of responsibility. There is a scarcity of hard data about the potential impact of overall comprehensive classroom health education programs. Only a few commercially available multitopic school health curricula have been evaluated to test their effectiveness (e.g., the Know Your Body program).
Some of these either are old or have not made use of the methods demonstrated to be effective in categorical research and demonstration projects, which means that schools are faced with adopting programs that have not been evaluated or attempting to piece together evaluated programs.

How Much Health Education Is Enough?

There is consensus that health education programming should span kindergarten through grade 12 (Lohrmann et al., 1987). However, the precise number and sequence of lessons required to achieve significant, enduring effects have not been clearly defined. As mentioned previously, such determinations are complicated by uncertainties in what end points
[…]

year and, in the Ellickson and Bell (1990) and Flay et al. (1989) studies, three to five booster sessions in subsequent years. Botvin's intervention contained 15 lessons in the first year and 15 additional lessons over the next two years. Other explanations include superiority of the Life Skills Training curriculum, including its content, format, and teacher training procedures, as well as higher levels of teacher implementation. Although the results of Botvin's study of substance use prevention are encouraging, research regarding the optimal dose and timing of curricula addressing other health behaviors is still needed. Given that achieving change in language arts and mathematics skills requires daily instruction for 12 academic years, it is reasonable to conclude that changes in health knowledge and in health behaviors also will require more instruction than one semester, the standard middle and secondary school requirement.

Active Ingredients of Health Education

Many successful health education programs employ several conceptually diverse intervention strategies, such as didactic, affective, and behavioral activities directed at students, as well as environmental and policy change. Although there is considerable evidence that such programs as a whole can work, the construct validity of specific subcomponents—that is, "why" programs achieve or fail to achieve their desired effects—remains unclear (McCaul and Glasgow, 1985). Consider, for example, skills training. During the 1980s, numerous skills-based interventions aimed at increasing general and behavior-specific skills were developed and evaluated (Botvin et al., 1984; Donaldson et al., 1995; Flay, 1985; Kirby, 1992; McCaul and Glasgow, 1985).
While initial results were encouraging and skills training has become an integral component of many school health education programs (Botvin et al., 1980; CDC, 1988, 1994; Flay, 1985; Glynn, 1989; Kirby, 1992; Pentz et al., 1989b; Schinke et al., 1985; Walter et al., 1988), many "skills-based" programs include other intervention strategies, such as modifying personal and group norms and outcome expectations, which also may have contributed to the reported intervention effects (Botvin et al., 1984; Ellickson and Bell, 1990; Murray et al., 1989; Pentz et al., 1989a; Walter et al., 1987). Several studies specifically designed to test the independent effects of skills training have found this approach to be largely ineffective (Elder et al., 1993; Hansen and Graham, 1991; Sussman et al., 1993). Instead, these studies indicate that modifying normative beliefs—students' assumptions regarding the prevalence and acceptability of substance use—appears to be the "active ingredient" of many of the skills training programs. Despite the questionable effectiveness of skills training in substance use prevention, skills may be important in other behavioral domains such as sexuality, nutrition, and
exercise (Baranowski, 1989; Perry et al., 1990; Sikkema et al., 1995; St. Lawrence et al., 1995; Warzak et al., 1995). Similarly, although there is acceptance on the part of many health educators that peers are effective "messengers," the evidence for the effectiveness of peer-based health education is also somewhat equivocal (Bangert-Drowns, 1988; Clarke et al., 1986; Ellickson et al., 1993; Johnson et al., 1986; McCaul and Glasgow, 1985; Murray et al., 1988; Perry et al., 1989; Telch et al., 1990). The effectiveness of peer-based programs is likely to depend more on how peers are included in the program than on simply having peer-led activities.

In a review of programs to reduce sexual risk behavior, Kirby and coworkers found several differences between programs that had an impact on behavior and those that did not (Kirby et al., 1994). Although the authors warn that generalizations must be made cautiously, ineffective curricula tended to be broader and less focused. Effective curricula clearly focused on the specific values, norms, and skills necessary to avoid sex or unprotected sex, whereas ineffective curricula covered a broad range of topics and discussed many values and skills. Interestingly, neither the length of the program nor the amount of skills practice appeared to predict the success of programs. The authors suggest, however, that skills practice may be effective only when clear values or norms are emphasized or when skills focus specifically on avoiding undesirable sexual behavior rather than on developing more general communication skills.

Given the limited funding and classroom time available for health education, it is important that school health education programs include primarily those approaches known to influence health behavior. Providing health information is a necessary but certainly not sufficient condition for affecting behavior.
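One common way to probe for such "active ingredients" after the fact is a mediation analysis: estimating how much of a program's effect travels through a hypothesized mediator such as normative beliefs. The sketch below uses the product-of-coefficients idea on invented data and, for brevity, omits the adjustment for treatment in the second regression that a full analysis would include:

```python
# Product-of-coefficients mediation sketch on invented data: does a program's
# effect on behavior run through normative beliefs? Illustrative only.

def slope(xs, ys):
    """Ordinary least-squares slope of y on x (single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

treatment = [0, 0, 0, 0, 1, 1, 1, 1]   # program exposure (0 = control)
beliefs = [2, 3, 2, 3, 5, 6, 5, 6]     # hypothesized mediator scores
behavior = [1, 2, 1, 2, 4, 5, 4, 5]    # outcome scores

a = slope(treatment, beliefs)   # path a: treatment -> mediator
b = slope(beliefs, behavior)    # path b: mediator -> outcome (unadjusted)
print(a * b)                    # estimated mediated effect
```

Structural modeling packages implement the same idea with proper standard errors and treatment adjustment; the point here is only the logic of decomposing a program effect into paths.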
Identifying "active ingredients" can be achieved through factorial designs; post hoc statistical techniques such as structural modeling and discriminant analysis can also be used to elucidate mediating variables and specific intervention components that may account for program effects (Botvin and Dusenbury, 1992; Dielman et al., 1989; MacKinnon et al., 1991).

Risk-Factor-Specific Versus Problem Behavior Intervention Models

Numerous studies have found that "problem" behaviors—such as the use of alcohol, marijuana, and tobacco; precocious sexual involvement; and delinquent activity—are positively correlated and occur in clusters. Problem Behavior Theory proposes an underlying psychologic phenomenon of "unconventionality" as the unifying etiologic explanation (see Basen-Engquist et al., 1996; Donovan and Jessor, 1985; Donovan et al., 1988; Resnicow et al., 1995). This conceptualization of health behavior has
significant implications for CSHPs. As opposed to commonly used risk-factor-specific interventions that deal with each behavior separately, Problem Behavior Theory suggests that high-risk and problem behaviors can be prevented by an intervention that addresses their common predisposing causes. Such interventions may be not only more effective but also more efficient, since fewer total lessons may be required to alter the common "core" causes. In addition to generic interventions, it may also be necessary to apply general strategies to selected high-risk behaviors. However, most school systems do not conceptualize health education from this perspective. Instead, health instruction is broken down into discrete content areas, more akin to the risk-factor-specific approach. Additional research, particularly studies examining the effects of interventions addressing traits that may underlie clusters of risk behaviors, is needed before health education is restructured toward a more targeted model of health behavior change.

Realistic Outcomes for School Health Education

It can be argued that previous studies reporting weak or null behavioral outcomes employed health education interventions of insufficient dose and breadth. Many of the interventions had no more than 10 lessons, delivered over the course of one year, and few or no subsequent booster lessons. As noted earlier, the positive long-term behavioral effects reported by Botvin and colleagues (1995) may be attributed largely to the increased dose. Additionally, had the categorical programs for which no long-term behavioral effects were observed been delivered within the context of a comprehensive school health program, positive effects might have been observed.
It is important to set realistic expectations for school health education, particularly since many of the programs used in our schools provide a dose of insufficient intensity and duration, whose effects are further attenuated by inadequate levels of teacher implementation. As stated earlier, although influencing behavior is an ultimate goal of school health education, schools should not bear the total responsibility for student behavior, given all the other influences on students—family, peers, the media, community norms, and expectations—that are beyond the control of the school. Schools should be held accountable for providing a high-quality, up-to-date health education program that is delivered by qualified teachers using curricula that are based on research and have been validated through outcome evaluation. Schools should be held responsible for arming students with the knowledge, attitudes, and skills to adopt health-enhancing behavior and to avoid health-compromising behavior. If these conditions are met but behavioral outcomes are still less than desired, then other sources of influence on students must be examined for alignment with school health education messages. In addition, there may be delayed effects on behavior in later life, even if no immediate behavioral impacts are observed.

There is encouraging evidence that when school-based interventions are delivered along with complementary community-wide or media campaigns, significant long-term behavioral effects can be achieved (Flynn et al., 1994; Kelder et al., 1993; Perry et al., 1992; see Flay et al., 1995, for an exception). Therefore, although health education delivered in isolation may not be able to produce lasting behavioral effects, when combined with other activities or implemented within a comprehensive school health program, significant and enduring changes in behavior as well as in physical risk factors can be achieved.

There is considerable evidence that comprehensive curricula can produce significant short-term effects on multiple health behaviors, including substance use, diet, and exercise (Bush et al., 1989; Connell et al., 1985; Errecart et al., 1991; Resnicow et al., 1992; Walter et al., 1988, 1989). However, many of the assumptions regarding the effectiveness of classroom health education derive from studies of categorical programs, and it is unclear to what degree the effects observed for categorical programs are diminished or magnified when taught within a comprehensive framework. Although it can be argued that incorporating categorical programs within a comprehensive framework would attenuate effects because the focus on any one behavior or health issue would be diminished, it could also be argued that program effects would be enhanced because comprehensive programs provide extended, if not synergistic, application and reinforcement of essential skills across a wide range of topics. This is another area that calls for further research.
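Because inadequate teacher implementation attenuates program effects, even a simple process-evaluation tally of lessons delivered versus lessons planned can be informative. A minimal sketch; the teacher labels and counts are hypothetical, not from any cited study:

```python
# Hypothetical process-evaluation data: planned vs. delivered lessons.
planned = 12
delivered = {"teacher_a": 12, "teacher_b": 9, "teacher_c": 4, "teacher_d": 11}

# Per-teacher implementation fidelity and the overall delivery rate.
fidelity = {t: n / planned for t, n in delivered.items()}
overall = sum(delivered.values()) / (planned * len(delivered))
low = sorted(t for t, f in fidelity.items() if f < 0.75)

print(f"overall implementation: {overall:.0%}")
print("below 75% of planned lessons:", ", ".join(low) or "none")
```

Flagging low implementers this way helps distinguish a program that failed from a program that was never fully delivered.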
SUMMARY OF FINDINGS AND CONCLUSIONS

Research and evaluation of CSHPs can be divided into three categories: basic research, outcome evaluation, and process evaluation. Basic research involves inquiry into the fundamental determinants of behavior as well as mechanisms of behavior change. A primary function of basic research is to inform the development of interventions that can then be tested in outcome evaluation trials. Outcome evaluation involves the empirical examination of interventions' effects on targeted outcomes, based on the randomized clinical trial approach with experimental and control groups. Process evaluation determines whether a proven intervention was properly implemented and examines factors that may have contributed to the intervention's success or failure. Basic research and outcome evaluation are typically conducted by professionals from university or other research centers and are largely beyond the capacity of local education agencies.
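In school-based outcome evaluations, whole schools rather than individual students are usually assigned to conditions, so the school is the natural unit of analysis. The simulation below illustrates that design; the number of schools, the noise level, and the effect size are illustrative assumptions only:

```python
import random
import statistics

random.seed(7)

# Hypothetical school-randomized trial: each data point is one school's
# mean risk score. Grand mean, between-school noise, and the program
# effect (-4 points) are invented for illustration.
def school_mean(effect):
    return 50 + random.gauss(0, 3) + effect

control      = [school_mean(0.0)  for _ in range(20)]
intervention = [school_mean(-4.0) for _ in range(20)]

# Effect estimate and its standard error, with the school as the unit.
diff = statistics.mean(intervention) - statistics.mean(control)
se = (statistics.variance(intervention) / 20
      + statistics.variance(control) / 20) ** 0.5
print(f"estimated effect = {diff:.2f} (SE {se:.2f}), 20 schools per arm")
```

Analyzing the thousands of individual students as if they were independent would badly overstate statistical power; counting schools makes clear why such trials need many participating schools, not just many students.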
The committee believes that process evaluation is the appropriate level of evaluation in local programs.

Research and evaluation are particularly challenging for CSHPs. Since these programs comprise multiple interactive components, it is often difficult to attribute observed effects to specific components or to separate program effects from those of the family or community. Determining what outcomes are realistic and measuring outcomes in students are often problematic, especially when outcomes involve sensitive matters such as drug use or sexual behavior. Furthermore, since CSHPs are unique to a particular setting, the results of even the most rigorous evaluations may not be generalizable to other situations.

Interventions associated with the separate, individual components of CSHPs—health education, health services, nutrition services, and so forth—should be developed and tested using rigorous methods involving experimental and control groups. However, such an approach is likely to be difficult—and possibly not feasible—for studying entire comprehensive programs or determining the differential effects of individual components and combinations of components.

A fundamental issue involves determining what outcomes are appropriate and reasonable to expect from CSHPs. The committee recognizes that although influencing health behavior and health status is an ultimate goal of a CSHP, such end points involve factors beyond the control of the school. The committee believes that the reasonable outcomes on which a CSHP should be judged are equipping students with the knowledge, attitudes, and skills necessary for healthful behavior; providing a health-promoting environment; and ensuring access to high-quality services.
Other outcomes—improved cardiovascular fitness or a reduction in absenteeism, drug abuse, or teen pregnancies, for example—may also be considered, but the committee believes that such measures must be interpreted with caution, since they are influenced by factors beyond the control of the school. In particular, null or negative measures for these outcomes should not necessarily lead to declaring the CSHP a failure; rather, they may imply that other sources of influence oppose and outweigh that of the CSHP or that the financial investment in the CSHP is so limited that returns are minimal.

RECOMMENDATIONS

In order for CSHPs to accomplish the desired goal of influencing behavior, the committee recommends the following:

An active research agenda on comprehensive school health programs should be pursued in order to fill critical knowledge
gaps; increased emphasis should be placed on basic research and outcome evaluation and on the dissemination of these research and outcome findings.

Research is needed about the effectiveness of specific intervention strategies, such as skills training, normative education, or peer education; the effectiveness of specific intervention messages, such as abstinence versus harm reduction; and the required intensity and duration of health education programming. Evidence suggests that common underlying factors may be responsible for the clustering of health-compromising behaviors and that interventions may be more effective if they address these underlying factors in addition to intervening to change risk behaviors. Additional research is needed to understand the etiology of problem behavior clusters and to develop optimal problem behavior interventions. Finally, since the acquisition of health-related social skills—such as negotiation, decisionmaking, and refusal skills—is a desired end point of CSHPs, basic research is needed to develop valid measures of social skills that can then be used as proxy measures of program effectiveness.

Diffusion-related research is critical to ensure that research and development efforts lead to improved practice and greater utilization of effective methods and programs. Therefore, high priority should be given to studying how programs are adopted, implemented, and institutionalized. The feasibility and effectiveness of techniques for integrating concepts of health into science and other school subjects should also be examined.

Since the overall effects of comprehensive school health programs are not yet known and outcome evaluation of such complex systems poses significant challenges, the committee recommends the following:

A major research effort should be launched to establish model comprehensive programs and develop approaches for their study.
Specific outcomes of overall programs should be examined, including education (improved achievement, attendance, and graduation rates), personal health (resistance to "new social morbidities," improved biologic measures), mental health (less depression, stress, and violence), improved functionality, health systems (more students with a "medical home," reduction in use of emergency rooms or hospitals), self-sufficiency (pursuit of higher education or job), and future health literacy and health status. Studies could look at differential impacts of programs produced by such factors as program structure, characteristics of students, and type of school and community. A thorough understanding of the feasible and effective (including
cost-effective) interventions in each separate area of a CSHP will be necessary to provide the basis for combining components to produce a comprehensive program. The committee recommends further study of each of the individual components of a CSHP—for example, health education, health services, counseling, nutrition, and school environment.

Additional studies are needed in a number of other areas. First, more data are needed about the advantages (cost and effectiveness) and disadvantages of providing health and social services in schools compared to other community sites—or compared to not providing services anywhere—as a function of community and student characteristics. This information will require overall consensus about the criteria to use for determining the quality of school health programs. It is also important to know how best to influence change in the climate and organizational structure of school districts and individual schools in order to bring about the adoption and implementation of CSHPs. Finally, there is a need for an analysis of the optimal structure, operation, and personnel needs of CSHPs.

REFERENCES

Bangert-Drowns, R.L. 1988. The effects of school-based substance abuse education: A meta-analysis. Journal of Drug Education 18:243–264.
Baranowski, T. 1989. Reciprocal determinism at the stages of behavior change: An integration of community, personal and behavioral perspectives. International Quarterly of Community Health Education 10(4):297–327.
Basen-Engquist, K., Edmundson, E., and Parcel, G.S. 1996. Structure of health risk behavior among high school students. Journal of Consulting and Clinical Psychology 64(4):764–775.
Bell, R.M., Ellickson, P.L., and Harrison, E.R. 1993. Do drug prevention effects persist into high school? How Project ALERT did with ninth graders. Preventive Medicine 22:463–483.
Botvin, G.J., and Dusenbury, L. 1992.
Smoking prevention among urban minority youth: Assessing effects on outcome and mediating variables. Health Psychology 11:290–299.
Botvin, G.J., Eng, A., and Williams, C.L. 1980. Preventing the onset of cigarette smoking through life skills training. Preventive Medicine 9:135–143.
Botvin, G.J., Renick, N.L., and Baker, E. 1983. The effects of scheduling format and booster sessions on a broad-spectrum psychosocial approach to smoking prevention. Journal of Behavioral Medicine 6(4):359–379.
Botvin, G.J., Baker, E., Renick, N.L., Filazzola, A.D., and Botvin, E.M. 1984. A cognitive-behavioral approach to substance abuse prevention. Addictive Behaviors 9:137–147.
Botvin, G.J., Baker, E., Dusenbury, L., and Botvin, E.M. 1995. Long-term follow-up results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association 273(14):1106–1112.
Bush, P.J., Zuckerman, A.E., Taggart, V.S., Theiss, P.K., Peleg, E.O., and Smith, S.A. 1989. Cardiovascular risk factor prevention in black school children: The "Know Your Body" evaluation project. Health Education Quarterly 16(2):215–227.
Centers for Disease Control and Prevention. 1988. Guidelines for effective school health education to prevent the spread of AIDS. Morbidity and Mortality Weekly Report 37(Suppl. 2):1–14.
Centers for Disease Control and Prevention. 1994. Guidelines for school health programs to prevent tobacco use and addiction. Journal of School Health 64(9):353–360.
Clarke, J.H., MacPherson, B., Holmes, D.R., and Jones, R. 1986. Reducing adolescent smoking: A comparison of peer-led, teacher-led, and expert interventions. Journal of School Health 56(3):102–106.
Connell, D.B., Turner, R.R., and Mason, E.F. 1985. Summary of findings of the school health education evaluation: Health promotion effectiveness, implementation, and costs. Journal of School Health 55(8):316–321.
Dielman, T.E., Shope, J.T., Butchart, A.T., Campanelli, P.C., and Caspar, R.A. 1989. A covariance structure model test of antecedents of adolescent alcohol misuse and a prevention effort. Journal of Drug Education 19(4):337–361.
Donaldson, S.I., Graham, J.W., Piccinin, A.M., and Hansen, W.B. 1995. Resistance-skills training and onset of alcohol use: Evidence for beneficial and potentially harmful effects in public schools and in private Catholic schools. Health Psychology 14(4):291–300.
Donovan, J.E., and Jessor, R. 1985. Structure of problem behavior in adolescence and young adulthood. Journal of Consulting and Clinical Psychology 53:890–904.
Donovan, J.E., Jessor, R., and Costa, F.M. 1988. Syndrome of problem behavior in adolescence: A replication. Journal of Consulting and Clinical Psychology 56:762–765.
Elder, J.P., Sallis, J.F., Woodruff, S.I., and Wildey, M.B. 1993. Tobacco-refusal skills and tobacco use among high risk adolescents.
Journal of Behavioral Medicine 16:629–642.
Ellickson, P.L., and Bell, R.M. 1990. Drug prevention in junior high: A multi-site longitudinal test. Science 247:1299–1305.
Ellickson, P.L., Bell, R.M., and Harrison, E.R. 1993. Changing adolescent propensities to use drugs: Results from Project ALERT. Health Education Quarterly 20(2):227–242.
Ennett, S.T., Tobler, N.S., Ringwalt, C.L., and Flewelling, R.L. 1994. How effective is drug abuse resistance education? A meta-analysis of Project DARE outcome evaluations. American Journal of Public Health 84(9):1394–1401.
Errecart, M.T., Walberg, H.J., Ross, J.G., Gold, R.S., Fielder, J.L., and Kolbe, L.J. 1991. Effectiveness of Teenage Health Teaching Modules. Journal of School Health 61(1):26–30.
Flay, B.R. 1985. Psychosocial approaches to smoking prevention: A review of findings. Health Psychology 4(5):449–488.
Flay, B.R. 1986. Efficacy and effectiveness trials in the development of health promotion programs. Preventive Medicine 15:451–474.
Flay, B.R., Koepke, D., Thomson, S.J., Santi, S., Best, A., and Brown, K.S. 1989. Six-year follow-up of the first Waterloo school smoking prevention trial. American Journal of Public Health 79:1371–1376.
Flay, B.R., Miller, T.Q., Hedeker, D., Siddiqui, O., Britton, C.F., Brannon, B.R., Johnson, C.A., Hansen, W.B., Sussman, S., and Dent, C. 1995. The television, school, and family smoking prevention and cessation project. Preventive Medicine 24:29–40.
Flynn, B.S., Worden, J.K., Secker-Walker, R.H., Pirie, P.L., Badger, G.J., Carpenter, J.H., and Geller, B.M. 1994. Mass media and school interventions for cigarette smoking prevention: Effects two years after completion. American Journal of Public Health 84(7):1148–1150.
Glick, B., Doyle, L., Ni, H., Gao, D., and Pham, C. 1995. School-based health center program evaluation: Perceptions, knowledge, and attitudes of parents/guardians of eleventh graders. A limited dataset presented to the Multnomah County (Oregon) Commissioners, March 21.
Glynn, T.J. 1989. Essential elements of school-based smoking prevention programs. Journal of School Health 59(5):181–188.
Hansen, W.B., and Graham, J.W. 1991. Preventing alcohol, marijuana, and cigarette use among adolescents: Peer pressure resistance training versus establishing conservative norms. Preventive Medicine 20:414–430.
Harris, L. 1988. Health: You've Got to Be Taught: An Evaluation of Comprehensive Health Education in American Public Schools. New York: Metropolitan Life Foundation.
Johnson, C.A., Hansen, W.B., Collins, L.M., and Graham, J.W. 1986. High-school smoking prevention: Results of a three-year longitudinal study. Journal of Behavioral Medicine 9(5):439–452.
Kelder, S.J., Perry, C.L., and Klepp, K.I. 1993. Community-wide youth exercise promotion: Long-term outcomes of the Minnesota Heart Health Program and the Class of 1989 Study. Journal of School Health 63(5):218–223.
Kirby, D. 1992. School-based programs to reduce sexual risk-taking behaviors. Journal of School Health 62(7):280–287.
Kirby, D., Short, L., Collins, J., Rugg, D., Kolbe, L., Howard, M., Miller, B., Sonenstein, F., and Zabin, L.S. 1994. School-based programs to reduce sexual risk behaviors: A review of effectiveness. Public Health Reports 109(3):339–359.
Kisker, E.E., Marks, E.L., Morrill, W.A., and Brown, R.S. 1994. Healthy Caring: An Evaluation Summary of the Robert Wood Johnson Foundation's School-Based Adolescent Health Care Program. Princeton, N.J.: Mathtech.
Knapp, M.S. 1995. How shall we study comprehensive, collaborative services for children and families? Educational Researcher 24(4):5–16.
Lewin-VHI and Institute of Health Policy Studies. 1995.
Healthy schools, healthy communities program: National evaluation. Submitted to Bureau of Primary Health Care, Health Resources and Services Administration, U.S. Department of Health and Human Services by Lewin-VHI, Inc., and Institute for Health Policy Studies, University of California at San Francisco, February 1995.
Lohrman, D.K., Gold, R.S., and Jubb, W.H. 1987. School health education: A foundation for school health programs. Journal of School Health 57(10):420–425.
Lopez, M.E., and Weiss, H.B. 1994. Can we get here from there? Examining and expanding the research base for comprehensive, school-linked early childhood services. Paper commissioned for the Invitational Conference of the U.S. Department of Education and the American Educational Research Association: School-Linked Comprehensive Services for Children and Families, Leesburg, Va., September 28–October 2.
Luepker, R.V., Perry, C.L., McKinlay, S.M., Nader, P.R., Parcel, G.S., Stone, E.J., Webber, L.S., Elder, J.P., Feldman, H.A., Johnson, C.C., Kelder, S.H., and Wu, M. 1996. Outcomes of a field trial to improve children's dietary patterns and physical activity: The Child and Adolescent Trial for Cardiovascular Health (CATCH). Journal of the American Medical Association 275(10):768–776.
MacKinnon, D.P., Johnson, C.A., Pentz, M.A., Dwyer, J.H., Hansen, W.B., Flay, B.R., and Wang, E.Y. 1991. Mediating mechanisms in a school-based drug prevention program: First-year effects of the Midwestern Prevention Project. Health Psychology 10(3):164–172.
McCaul, K.D., and Glasgow, R.E. 1985. Preventing adolescent smoking: What have we learned about treatment construct validity? Health Psychology 4:361–387.
Murray, D.M., Davis-Hearn, M., Goldman, A., Pirie, P., and Luepker, R.V. 1988. Four- and five-year follow-up results from four seventh-grade smoking prevention strategies. Journal of Behavioral Medicine 11(4):395–405.
Murray, D.M., Pirie, P., Luepker, R.V., and Pallonen, U. 1989. Five- and six-year follow-up results from four seventh-grade smoking prevention strategies. Journal of Behavioral Medicine 12:207–218.
Parcel, G.S., Ross, J.G., Lavin, A.T., Portnoy, B., Nelson, G.D., and Winters, F. 1991. Enhancing implementation of the Teenage Health Teaching Modules. Journal of School Health 61(1):35–38.
Pentz, M.A., Dwyer, J.H., MacKinnon, D.P., Flay, B.R., Hansen, W.B., Wang, E.Y., and Johnson, C.A. 1989a. A multicommunity trial for primary prevention of adolescent drug abuse: Effects on drug use prevalence. Journal of the American Medical Association 261:3259–3266.
Pentz, M.A., MacKinnon, D.P., Flay, B.R., Hansen, W.B., Johnson, C.A., and Dwyer, J.H. 1989b. Primary prevention of chronic diseases in adolescence: Effects of the Midwestern Prevention Project on tobacco use. American Journal of Epidemiology 130(4):713–724.
Perry, C.L., Grant, M., Ernberg, G., Florenzano, R.U., Langdon, M.C., Myeni, A.D., Waahlberg, R., Berg, S., Andersson, K., and Fisher, K.J. 1989. WHO collaborative study on alcohol education and young people: Outcomes of a four-country pilot study. International Journal of the Addictions 24(12):1145–1171.
Perry, C.L., Baranowski, T., and Parcel, G. 1990. How individuals, environments and health behavior interact: Social learning theory. In Health Behavior and Health Education: Theory, Research, and Practice, K. Glanz, F.M. Lewis, and B. Rimer, eds. New York: Jossey-Bass.
Perry, C.L., Kelder, S.H., Murray, D.M., and Klepp, K. 1992. Community-wide smoking prevention: Long-term outcomes of the Minnesota Heart Health Program and the Class of 1989 Study. American Journal of Public Health 82(9):1210–1216.
Resnicow, K., Cross, D., and Wynder, E. 1991. The role of comprehensive school-based interventions: The results of four "Know Your Body" studies. Annals of the New York Academy of Sciences 623:285–297.
Resnicow, K., Cohn, L., Reinhardt, J., Cross, D., Futterman, R., Kirschner, E., Wynder, E.L., and Allegrante, J. 1992. A three-year evaluation of the "Know Your Body" program in minority school children. Health Education Quarterly 19(4):463–480.
Resnicow, K., Ross, D., and Vaughan, R. 1995. The structure of problem and conventional behaviors in African-American youth. Journal of Consulting and Clinical Psychology 63(4):594–603.
Rohrbach, L.A., Graham, J.W., and Hansen, W.B. 1993. Diffusion of a school-based substance abuse prevention program: Predictors of program implementation. Preventive Medicine 22(2):237–260.
Schinke, S.P., Gilchrist, L., and Snow, W.H. 1985. Skills intervention to prevent cigarette smoking among adolescents. American Journal of Public Health 75:665–667.
Shaw, K.M. 1995. Challenges in evaluating systems reform. The Evaluation Exchange: Emerging Strategies in Evaluating Child and Family Services 1(1):2–3.
Sikkema, K.J., Winett, R.A., and Lombard, D.N. 1995. Development and evaluation of an HIV-risk reduction program for female college students. AIDS Education and Prevention 7(2):145–159.
St. Lawrence, J.S., Jefferson, K.W., Alleyne, E., and Brasfield, T.L. 1995. Comparison of education versus behavioral skills training interventions in lowering sexual HIV-risk behavior of substance-dependent adolescents. Journal of Consulting and Clinical Psychology 63(1):154–157.
Sussman, S., Dent, C.W., Stacy, A.W., Sun, P., Craig, S., Simon, T.R., Burton, D., and Flay, B.R. 1993. Project Towards No Tobacco Use: One-year behavior outcomes. American Journal of Public Health 83(9):1245–1250.
Taggart, V.S., Bush, P.J., Zuckerman, A.E., and Theiss, P.K. 1990. A process evaluation of the District of Columbia "Know Your Body" project. Journal of School Health 60(2):60–66.
Telch, M.J., Miller, L.M., Killen, J.D., Cooke, S., and Maccoby, N. 1990. Social influences approach to smoking prevention: The effects of videotape delivery with and without same-age peer leader participation. Addictive Behaviors 15(1):21–28.
Walter, H.J., Hofman, A., Barrett, L.T., Connelly, P.A., Kost, K.L., Walk, E.H., and Patterson, R. 1987. Primary prevention of cardiovascular disease among children: Three-year results of a randomized intervention trial. In Cardiovascular Risk Factors in Childhood: Epidemiology and Prevention, B. Hetzel and G.S. Berenson, eds. Netherlands: Elsevier.
Walter, H.J., Hofman, A., Vaughan, R., and Wynder, E.L. 1988. Modification of risk factors for coronary heart disease. New England Journal of Medicine 318:1093–1100.
Walter, H.J., Vaughan, R.D., and Wynder, E.L. 1989. Primary prevention of cancer among children: Changes in cigarette smoking and diet after six years of intervention. Journal of the National Cancer Institute 81:995–999.
Warzak, W.J., Grow, C.R., Poler, M.M., and Walburn, J.N. 1995. Enhancing refusal skills: Identifying contexts that place adolescents at risk for unwanted sexual activity. Journal of Developmental and Behavioral Pediatrics 16(2):98–100.