Evaluating and Disseminating Intervention Research
Efforts to change health behaviors should be guided by clear criteria for the efficacy and effectiveness of interventions. However, establishing such criteria has proved surprisingly complex and is the source of considerable debate.
The principle of science-based intervention cannot be overemphasized. Medical practices and community-based programs are often based on professional consensus rather than evidence. The efficacy of interventions can only be determined by appropriately designed empirical studies. Randomized clinical trials provide the most convincing evidence, but may not be suitable for examining all of the factors and interactions addressed in this report.
Information about efficacious interventions needs to be disseminated to practitioners. Furthermore, feedback is needed from practitioners to determine the overall effectiveness of interventions in real-life settings. Information from physicians, community leaders, public health officials, and patients is important for determining the overall effectiveness of interventions.
The preceding chapters review contemporary research on health and behavior from the broad perspectives of the biological, behavioral, and social sciences. A recurrent theme is that continued multidisciplinary and interdisciplinary efforts are needed. Enough research evidence has accumulated to warrant wider application of this information. To extend its
use, however, existing knowledge must be evaluated and disseminated. This chapter addresses the complex relationship between research and application. The challenge of bridging research and practice is discussed with respect to clinical interventions, communities, public agencies, systems of health care delivery, and patients.
During the early 1980s, the National Heart, Lung, and Blood Institute (NHLBI) and the National Cancer Institute (NCI) suggested a sequence of research phases for the development of programs that were effective in modifying behavior (Greenwald, 1984; Greenwald and Cullen, 1984; NHLBI, 1983): hypothesis generation (phase I), intervention methods development (phase II), controlled intervention trials (phase III), studies in defined populations (phase IV), and demonstration research (phase V). Those phases reflect the importance of methods development in providing a basis for large-scale trials and the need for studies of the dissemination and diffusion process as a means of identifying effective application strategies. A range of research and evaluation methods are required to address diverse needs for scientific rigor, appropriateness and benefit to the communities involved, relevance to research questions, and flexibility in cost and setting. Inclusion of the full range of phases from hypothesis generation to demonstration research should facilitate development of a more balanced perspective on the value of behavioral and psychosocial interventions.
Choice of Outcome Measures
The goals of health care are to increase life expectancy and improve health-related quality of life. Major clinical trials in medicine have evolved toward the documentation of those outcomes. As more trials documented effects on total mortality, some surprising results emerged. For example, studies commonly report that, compared with placebo, lipid-lowering agents reduce total cholesterol and low-density lipoprotein cholesterol, and might increase high-density lipoprotein cholesterol, thereby reducing the risk of death from coronary heart disease (Frick et al., 1987; Lipid Research Clinics Program, 1984). Those trials usually were not associated with reductions in death from all causes (Golomb, 1998; Muldoon
et al., 1990). Similarly, He et al. (1999) demonstrated that intake of dietary sodium in overweight people was not related to the incidence of coronary heart disease but was associated with mortality from coronary heart disease. Another example can be found in the treatment of cardiac arrhythmia. Among adults who previously suffered a myocardial infarction, symptomatic cardiac arrhythmia is a risk factor for sudden death (Bigger, 1984). However, a randomized drug trial in 1,455 post-infarction patients demonstrated that those who were randomly assigned to take an anti-arrhythmia drug showed reduced arrhythmia but were significantly more likely to die from arrhythmia, and from all causes, than those assigned to take a placebo. If investigators had measured only heart rhythm changes, they would have concluded that the drug was beneficial. Only when primary health outcomes were considered was it established that the drug was dangerous (Cardiac Arrhythmia Suppression Trial (CAST) Investigators, 1989).
Many behavioral intervention trials document the capacity of interventions to modify risk factors (NHLBI, 1998), but relatively few Level I studies have measured outcomes of life expectancy and quality of life. As the examples above illustrate, assessing risk factors alone may not be adequate. The ramifications of interventions are not always apparent until they are fully evaluated, and it is possible that a recommendation for behavioral change could increase mortality through unforeseen consequences. For example, a recommendation to increase exercise might raise the incidence of roadside auto fatalities. Although risk factor modification is expected to improve outcomes, assessment of increased longevity is essential. Measurement of mortality as an endpoint does, however, necessitate trials of long duration that incur greater costs.
One approach to representing outcomes comprehensively is the quality-adjusted life year (QALY). QALY is a measure of life expectancy (Gold et al., 1996; Kaplan and Anderson, 1996) that integrates mortality and morbidity in terms of equivalents of well-years of life. If a woman expected to live to age 75 dies of lung cancer at 50, the disease caused 25 lost life-years. If 100 women with life expectancies of 75 die at age 50, 2,500 (100×25 years) life-years would be lost. But death is not the only outcome of concern. Many adults suffer from diseases that leave them more or less disabled for long periods. Although still alive, their quality of life is
diminished. QALYs account for the quality-of-life consequences of illness. For example, a disease that reduces quality of life by one-half reduces QALYs by 0.5 during each year the patient suffers. If the disease affects 2 people, it will reduce QALYs by 1 (2 × 0.5) each year. A pharmaceutical treatment that improves life by 0.2 QALYs for 5 people will result in the equivalent of 1 QALY if the benefit is maintained over a 1-year period. The basic assumption is that 2 years scored as 0.5 each add up to the equivalent of 1 year of complete wellness. Similarly, 4 years scored as 0.25 each are equivalent to 1 year of complete wellness. A treatment that boosts a patient’s health from 0.50 to 0.75 on a scale ranging from 0.0 (for death) to 1.0 (for the highest level of wellness) adds the equivalent of 0.25 QALY. If the treatment is applied to 4 patients, and the duration of its effect is 1 year, the effect of the treatment would be equivalent to 1 year of complete wellness. This approach has the advantage of expressing the benefits and side effects of treatment programs in common units. Although QALYs typically are used to assess effects on patients, they also can be used to measure effects on others, including caregivers who are placed at risk because their experience is stressful. Most important, QALYs are required for many methods of cost-effectiveness analysis.

The most controversial aspect of the methodology is the method for assigning values along the scale. Three methods are commonly used: the standard reference gamble, the time-tradeoff, and rating scales. Economists and psychologists differ in their preferred approach to preference assessment. Economists typically prefer the standard gamble because it is consistent with the axioms of choice outlined in decision theory (Torrance, 1976). Economists also accept the time-tradeoff because it represents choice, even though it is not exactly consistent with the axioms derived from theory (Bennett and Torrance, 1996).
However, evidence from experimental studies questions many of the assumptions that underlie economic models of choice. In particular, human evaluators do poorly at integrating complex probability information when making decisions involving risk (Tversky and Fox, 1995). Economic models often assume that choice is rational. However, psychological experiments suggest that methods commonly used for choice studies do not represent the true underlying preference continuum (Zhu and Anderson, 1991). Some evidence supports the use of simple rating scales (Anderson and Zalinski, 1990). Recently, research by economists has attempted to integrate studies from cognitive science, while psychologists have begun investigations of choice and decision-making (Tversky and Shafir, 1992). A significant body of studies demonstrates that different methods for estimating preferences will produce different values (Lenert and Kaplan, 2000). This happens because the methods ask different questions. More research is needed to clarify the best method for valuing health states.
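At the point of indifference, the three elicitation methods named above reduce to simple scoring rules. The sketch below uses the standard textbook formulations; the function names and example numbers are illustrative, not drawn from the studies cited:

```python
def standard_gamble(p_full_health):
    """Utility of a health state = the probability p at which the respondent
    is indifferent between (a) the state for certain and (b) a gamble giving
    full health with probability p and death with probability 1 - p."""
    return p_full_health

def time_tradeoff(years_full_health, years_in_state):
    """Utility = x / t, where the respondent is indifferent between t years
    lived in the health state and x years lived in full health."""
    return years_full_health / years_in_state

def rating_scale(rating, scale_max=100):
    """Direct rating (e.g., a 0-100 'feeling thermometer') rescaled to 0-1."""
    return rating / scale_max

# A respondent indifferent between 10 years with a condition and
# 6 years in full health implies a utility of 0.6 for that state:
print(time_tradeoff(6, 10))  # 0.6
```

The divergence the text describes is visible here: nothing forces these three rules to assign the same number to the same health state, which is why different methods produce different values.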
The weighting used for quality adjustment comes from surveys of patient or population groups, an aspect of the method that has generated considerable discussion among methodologists and ethicists (Kaplan, 1994). Preference weights are typically obtained by asking patients or people randomly selected from a community to rate cases that describe people in various states of wellness. The cases usually describe level of functioning and symptoms. Although some studies show small but significant differences in preference ratings between demographic groups (Kaplan, 1998), most studies have shown a high degree of similarity in preferences (see Kaplan, 1994, for review). A panel convened by the U.S. Department of Health and Human Services reviewed methodologic issues relevant to cost-utility analysis (the formal name for this approach) in health care. The panel concluded that population averages rather than patient group preference weights are more appropriate for policy analysis (Gold et al., 1996).
Several authors have argued that resource allocation on the basis of QALYs is unethical (see La Puma and Lawlor, 1990). Those who reject the use of QALYs suggest that they cannot be measured. However, the reliability and validity of quality-of-life measures are well documented (Spilker, 1996). Another ethical challenge to QALYs is that they force health care providers to make decisions based on cost-effectiveness rather than on the health of the individual patient.
Another common criticism of QALYs is that they discriminate against the elderly and the disabled. Older people and those with disabilities have lower QALYs, so it is assumed that fewer services will be provided to them. However, QALYs consider the increment in benefit, not the starting point. Programs that prevent the decline of health status or that prevent deterioration in functioning among the disabled do perform well in QALY outcome analysis. It is likely that QALYs will not reveal benefits for heroic care at the very end of life. However, most people prefer not to take treatment that is unlikely to increase life expectancy or improve quality of life (Schneiderman et al., 1992). Ethical issues relevant to the use of cost-effectiveness analysis are considered in detail in the report of the Panel on Cost-Effectiveness in Health and Medicine (Gold et al., 1996).
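The QALY bookkeeping used throughout this section reduces to simple arithmetic: utility gain × number of people × years over which the benefit is maintained. A minimal sketch using the hypothetical figures from the text:

```python
def qalys_gained(utility_gain, n_people, years):
    """Equivalent years of complete wellness produced by an intervention.

    utility_gain: improvement on the 0.0 (death) to 1.0 (full wellness) scale
    n_people:     number of people treated
    years:        duration over which the benefit is maintained
    """
    return utility_gain * n_people * years

# A treatment that moves 4 patients from 0.50 to 0.75 for 1 year
# yields the equivalent of 1 year of complete wellness:
print(qalys_gained(0.75 - 0.50, 4, 1))  # 1.0

# A treatment improving life by 0.2 QALYs for 5 people over 1 year:
print(qalys_gained(0.2, 5, 1))  # 1.0

# 100 women with life expectancy 75 who die at age 50: life-years lost
print((75 - 50) * 100)  # 2500
```

Because the criticism above concerns increments rather than starting points, note that the function takes a *gain* in utility, not an absolute level: a program that raises a disabled person's utility from 0.3 to 0.5 scores exactly as well as one that raises a healthy person's from 0.7 to 0.9.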
Evaluating Clinical Interventions
Behavioral interventions have been used to modify behaviors that put people at risk for disease, to manage disease processes, and to help patients cope with their health conditions. Behavioral and psychosocial interventions take many forms. Some provide knowledge or persuasive information; others involve individual, family, group, or community programs to change or support changes in health behaviors (such as in tobacco use, physical activity, or diet); still others involve patient or health care provider education to stimulate behavior change or risk-avoidance. Behavioral and psychosocial interventions are not without consequence for patients and their families, friends, and acquaintances; interventions cost money, take time, and are not always enjoyable. Justification for interventions requires assurance that the changes advocated are valuable. The kinds of evidence required to evaluate the benefits of interventions are discussed below.
Evidence-based medicine uses the best available scientific evidence to inform decisions about what treatments individual patients should receive (Sackett et al., 1997). Not all studies are equally credible. Last (1995) offered a hierarchy of clinical research evidence, shown in Table 7-1. Level I, the most rigorous, is reserved for the randomized clinical trial (RCT), in which participants are randomly assigned to the experimental condition or to a meaningful comparison condition—the most widely accepted standard for evaluating interventions. Such trials involve
TABLE 7-1 Research Evidence Hierarchy
Level I: Randomized controlled trial
Level II: Controlled trial without randomization
Level III: Cohort or case-control analytic study
Level IV: Multiple time series
Level V: Uncontrolled experiment with dramatic results
SOURCE: Last, 1995, by permission of Lancet Ltd. All rights reserved.
either “single blinding” (investigators know which participants are assigned to the treatment and control groups but participants do not) or “double blinding” (neither the investigators nor the participants know the group assignments) (Friedman et al., 1985). Double blinding is difficult in behavioral intervention trials, but there are some good examples of single-blind experiments. Reviews of the literature often grade studies according to levels of evidence. Level I evidence is considered more credible than Level II evidence; Level III evidence is given little weight.
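Grading studies this way amounts to a lookup over Last's hierarchy. A minimal sketch (the design labels follow Table 7-1; numbering the five rows I–V in order is an inference from the text's references to Levels I–III, and the comparison function is purely illustrative):

```python
# Last's (1995) hierarchy of clinical research evidence (Table 7-1);
# lower numbers indicate more credible evidence.
EVIDENCE_LEVELS = {
    "randomized controlled trial": 1,
    "controlled trial without randomization": 2,
    "cohort or case-control analytic study": 3,
    "multiple time series": 4,
    "uncontrolled experiment with dramatic results": 5,
}

def stronger(design_a, design_b):
    """Return whichever design carries the more credible evidence level."""
    return min(design_a, design_b, key=EVIDENCE_LEVELS.__getitem__)

print(stronger("multiple time series", "randomized controlled trial"))
# randomized controlled trial
```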
There has been concern about the generalizability of RCTs (Feinstein and Horwitz, 1997; Horwitz, 1987a,b; Horwitz and Daniels, 1996; Horwitz et al., 1990, 1996; Rabeneck et al., 1992), specifically because the recruitment of participants can result in samples that are not representative of the population (Seligman, 1996). There is a trend toward increased heterogeneity of the patient population in RCTs. Even so, RCTs often include stringent criteria for participation that can exclude participants on the basis of comorbid conditions or other characteristics that occur frequently in the population. Furthermore, RCTs are often conducted in specialized settings, such as university-based teaching hospitals, that do not draw representative population samples. Trials sometimes exhibit large dropout rates, which further undermine the generalizability of their findings.
Oldenburg and colleagues (1999) reviewed all papers published in 1994 in 12 selected journals on public health, preventive medicine, health behavior, and health promotion and education. They graded the studies according to evidence level: 2% were Level I RCTs and 48% were Level II. The authors expressed concern that behavioral research might not be credible when evaluated against systematic experimental trials, which are more common in other fields of medicine. Studies with more rigorous experimental designs are less likely to demonstrate treatment effectiveness (Heaney and Goetzel, 1997; Mosteller and Colditz, 1996). Although there have been relatively few behavioral intervention trials, those that have been published have supported the efficacy of behavioral interventions in a variety of circumstances, including smoking, chronic pain, cancer care, and bulimia nervosa (Compas et al., 1998).
Efficacy and Effectiveness
Efficacy is the capacity of an intervention to work under controlled conditions. Randomized clinical trials are essential in establishing the effects of a clinical intervention (Chambless and Hollon, 1998) and in determining that an intervention can work. However, demonstration of efficacy in an RCT does not guarantee that the treatment will be effective in actual practice settings. For example, some reviews suggest that behavioral interventions in psychotherapy are generally beneficial (Matt and Navarro, 1997), others suggest that interventions are less effective in clinical settings than in the laboratory (Weisz et al., 1992), and still others find particular interventions equally effective in experimental and clinical settings (Shadish et al., 1997).
The Division of Clinical Psychology of the American Psychological Association recently established criteria for “empirically supported” psychological treatments (Chambless and Hollon, 1998). In an effort to establish a level of excellence in validating the efficacy of psychological interventions, the criteria are relatively stringent. A treatment is considered empirically supported if it is found to be more effective than either an alternative form of treatment or a credible control condition in at least two RCTs. The effects must be replicated by at least two independent laboratories or investigative teams to ensure that they are not attributable to special characteristics of a specific investigator or setting. Several health-related behavior change interventions meeting those criteria have been identified, including interventions for management of chronic pain, smoking cessation, adaptation to cancer, and treatment of eating disorders (Compas et al., 1998).
An intervention that has failed to meet the criteria still has potential value and might represent important or even landmark progress in the field of health-related behavior change. As in many fields of health care, there historically has been little effort to set standards for psychological treatments for health-related problems or disease. Recently, however, managed-care and health maintenance organizations have begun to monitor and regulate both the type and the duration of psychological treatments that are reimbursed. A common set of criteria for making coverage decisions has not been articulated, so decisions are made in the absence of appropriate scientific data to support them. It is in the best interest of the public and those involved in the development and delivery of health-related behavior change interventions to establish criteria that are based on the best available scientific evidence. Criteria for empirically supported treatments are an important part of that effort.
Evaluating Community-Level Interventions
Evaluating the effectiveness of interventions in communities requires different methods. Developing and testing interventions that take a more comprehensive, ecologic approach, and that are effective in reducing risk-related behaviors and influencing the social factors associated with health status, require many levels and types of research (Flay, 1986; Green et al., 1995; Greenwald and Cullen, 1984). Questions have been raised about the appropriateness of RCTs for addressing research questions when the unit of analysis is larger than the individual, such as a group, organization, or community (McKinlay, 1993; Susser, 1995). While this discussion uses the community as the unit of analysis, similar principles apply to interventions aimed at groups, families, or organizations.
Review criteria of community interventions have been suggested by Hancock and colleagues (Hancock et al., 1997). Their criteria for rigorous scientific evaluation of community intervention trials include four domains: (1) design, including the randomization of communities to condition, and the use of sampling methods that assure representativeness of the entire population; (2) measures, including the use of outcome measures with demonstrated validity and reliability and process measures that describe the extent to which the intervention was delivered to the target audience; (3) analysis, including consideration of both individual variation within each community and community-level variation within each treatment condition; and (4) specification of the intervention in enough detail to allow replication.
Randomization of communities to various conditions raises challenges for intervention research in terms of expense and statistical power (Koepsell et al., 1995; Murray, 1995). The restricted hypotheses that RCTs test cannot adequately consider the complexities and multiple causes of human behavior and health status embedded within communities (Israel et al., 1995; Klitzner, 1993; McKinlay, 1993; Susser, 1995). A randomized controlled trial might actually alter the interaction between an intervention and a community and result in an attenuation of the effectiveness of the intervention (Fisher, 1995; McKinlay, 1993). At the level of community interventions, experimental control might not be possible, especially when change is unplanned. That is, given the different sociopolitical structures, cultures, and histories of communities and the numerous factors that are beyond a researcher’s ability to control, it might be impossible to identify and maintain a commensurate comparison community (Green et al., 1996; Hollister and Hill, 1995; Israel et al., 1995; Klitzner, 1993;
Mittelmark et al., 1993; Susser, 1995). Using a control community does not completely solve the problem of comparison, however, because one “cannot assume that a control community will remain static or free of influence by national campaigns or events occurring in the experimental communities” (Green et al., 1996, p. 274).
Clear specification of the conceptual model guiding a community intervention is needed to clarify how an intervention is expected to work (Koepsell, 1998; Koepsell et al., 1992). This is the contribution of the Theory of Change model for communities described in Chapter 6. A theoretical framework is necessary to specify mediating mechanisms and modifying conditions. Mediating mechanisms are pathways, such as social support, by which the intervention induces the outcomes; modifying conditions, such as social class, are not affected by the intervention but can influence outcomes independently. Such an approach offers numerous advantages, including the ability to identify pertinent variables and how, when, and in whom they should be measured; the ability to evaluate and control for sources of extraneous variance; and the ability to develop a cumulative knowledge base about how and when programs work (Bickman, 1987; Donaldson et al., 1994; Lipsey, 1993; Lipsey and Pollard, 1989). When an intervention is unsuccessful at stimulating change, data on mediating mechanisms can allow investigators to determine whether the failure is due to the inability of the program to activate the causal processes that the theory predicts or to an invalid program theory (Donaldson et al., 1994).
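The mediating-mechanism logic can be made concrete with a toy product-of-coefficients sketch. Everything here is an illustrative assumption, not any cited trial's analysis: the data are simulated, "social support" stands in for a generic mediator, and the crude simple-regression slopes ignore refinements a real mediation analysis would require:

```python
import random
from statistics import mean

random.seed(0)

# Simulate a trial: intervention -> mediator (social support) -> outcome.
n = 500
x = [random.randint(0, 1) for _ in range(n)]             # treatment assignment
m = [0.5 * xi + random.gauss(0, 1) for xi in x]          # mediating mechanism
y = [0.8 * mi + 0.1 * xi + random.gauss(0, 1)            # health outcome
     for xi, mi in zip(x, m)]

def slope(u, v):
    """Simple-regression slope of v on u (covariance over variance)."""
    mu, mv = mean(u), mean(v)
    cov = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    var = sum((ui - mu) ** 2 for ui in u)
    return cov / var

a = slope(x, m)   # did the program activate the mediator?
b = slope(m, y)   # does the mediator move the outcome?
print(round(a, 2), round(b, 2), round(a * b, 2))  # indirect-effect estimate
```

The diagnostic value described in the text shows up in which coefficient is near zero: a small `a` suggests the program failed to activate the causal process, while a small `b` suggests the program theory itself is invalid.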
Small-scale, targeted studies sometimes provide a basis for refining large-scale intervention designs and enhance understanding of methods for influencing group behavior and social change (Fisher, 1995; Susser, 1995; Winkleby, 1994). For example, more in-depth, comparative, multiple-case-study evaluations are needed to explain and identify lessons learned regarding the context, process, impacts, and outcomes of community-based participatory research (Israel et al., 1998).
Community-Based Participatory Research and Evaluation
As reviewed in Chapter 4, broad social and societal influences have an impact on health. This concept points to the importance of an approach that recognizes individuals as embedded within social, political, and economic systems that shape their behaviors and constrain their access to resources necessary to maintain their health (Brown, 1991;
Gottlieb and McLeroy, 1994; Krieger, 1994; Krieger et al., 1993; Lalonde, 1974; Lantz et al., 1998; McKinlay, 1993; Sorensen et al., 1998a, b; Stokols, 1992, 1996; Susser and Susser, 1996a,b; Williams and Collins, 1995; World Health Organization [WHO], 1986). It also points to the importance of expanding the evaluation of interventions to incorporate such factors (Fisher, 1995; Green et al., 1995; Hatch et al., 1993; Israel et al., 1995; James, 1993; Pearce, 1996; Sorensen et al., 1998a,b; Steckler et al., 1992; Susser, 1995).
This is exemplified by community-based participatory programs, which are collaborative efforts among community members, organization representatives, a wide range of researchers and program evaluators, and others (Israel et al., 1998). The partners contribute “unique strengths and shared responsibilities” (Green et al., 1995, p. 12) to enhance understanding of a given phenomenon, and they integrate the knowledge gained from interventions to improve the health and well-being of community members (Dressler, 1993; Eng and Blanchard, 1990–1; Hatch et al., 1993; Israel et al., 1998; Schulz et al., 1998a). Such research provides “the opportunity…for communities and science to work in tandem to ensure a more balanced set of political, social, economic, and cultural priorities, which satisfy the demands of both scientific research and communities at higher risk” (Hatch et al., 1993, p. 31). The advantages and rationale of community-based participatory research are summarized in Table 7–2 (Israel et al., 1998). The term “community-based participatory research” is used here to differentiate it clearly from “community-based research,” which is often used in reference to research that is placed in the community but in which community members are not actively involved.
Table 7-3 presents a set of principles, or characteristics, that capture the important components of community-based participatory research and evaluation (Israel et al., 1998). Each principle constitutes a continuum and represents a goal, for example, equitable participation and shared control over all phases of the research process (Cornwall, 1996; Dockery, 1996; Green et al., 1995). Although the principles are presented here as distinct items, community-based participatory research integrates them.
There are four major foci of evaluation with implications for research design: context, process, impact, and outcome (Israel, 1994; Israel et al., 1995; Simons-Morton et al., 1995). A comprehensive community-based participatory evaluation would include all four types, but financial constraints often make it practical to pursue only one or two. Evaluation design is extensively reviewed in the literature (Campbell and Stanley, 1963; Cook and
TABLE 7-2 Rationale for Community-Based Participatory Research
Enhances the relevance and usefulness of research data for all partners involved
Brown 1995; Cousins and Earl 1995; Schulz et al. 1998b
Joins partners with diverse skills, knowledge, expertise, and sensitivities to address complex problems
Butterfoss et al., 1993; Hall 1992; Himmelman 1992; Israel et al. 1989; Schensul et al. 1987
Improves quality and validity of research by engaging local knowledge and local theory based on experience of people involved
Altman 1995; Bishop 1996; deKoning and Martin 1996; Dressler 1993; Elden and Levin 1991; Gaventa 1993; Hall 1992; Maguire 1987; Schensul et al. 1987; Vega 1992
Recognizes limitations of concept of “value-free” science (Denzin 1994) and encourages self-reflexive, engaged, and self-critical role of researchers
Denzin, 1994; Reason 1994; Zich and Temoshok 1986
Acknowledges that knowledge is power, thus knowledge gained can be used by all partners involved to direct resources and influence policies that will benefit community
deKoning and Martin 1996; Dressler 1993; Hall 1992; Himmelman 1992; Maguire 1987; Tandon 1981
Strengthens research and program development capacity of partners
Altman 1995; Green et al. 1995; Schensul et al. 1987; Schulz, et al. 1998a; Singer 1993, 1994
Creates theory grounded in social experience and creates better informed and more effective practice guided by such theories
Altman 1995; Schensul 1985
Increases possibility of overcoming understandable distrust of research on part of communities that have historically been subjects of such research
Hatch et al. 1993; Schulz, et al. 1998b
Has potential to “bridge the cultural gaps that may exist” (Brown, 1995, p. 211) between partners involved
Bishop 1994, 1996; Hatch et al. 1993; Schulz et al. 1998b; Vega 1992
Overcomes fragmentation and separation of individual from culture and context that are often evident in more narrowly defined, categorical approaches
Green et al. 1995; Israel et al. 1994; Reason 1994; Stokols 1996
Is consistent with implications or principles of practice that emanate from conceptual framework of stress process, for example, context-specific, comprehensive approach, and multiple outcomes
Israel et al., 1996
Provides additional funds and possible employment opportunities for community partners
Altman 1995; Nyden and Wiewel 1992; Schulz et al. 1998b
Aims to improve health and well-being of communities involved, both directly through examining and addressing identified needs and indirectly through increasing power and control over research process
Durie 1996; Green et al. 1995; Hatch et al. 1993; Schulz et al. 1998a; deKoning and Martin 1996; Israel and Schurman 1990; Israel et al. 1994; Wallerstein 1992
Involves communities that have been marginalized on basis of race, ethnicity, class, gender, sexual orientation in examining consequences of marginalization and attempting to reduce and eliminate it
deKoning and Martin 1996; Gaventa 1993; Hatch et al. 1993; Krieger 1994; Maguire 1987; Vega 1992; Williams and Collins 1995
SOURCE: Israel et al., 1998. Reprinted with permission of Perseus Books Publishers, a member of Perseus Books, L.L.C.
Reichardt, 1979; Dignan, 1989; Green, 1977; Green and Gordon, 1982; Green and Lewis, 1986; Guba and Lincoln, 1989; House, 1980; Israel et al., 1995; Patton, 1987, 1990; Rossi and Freeman, 1989; Shadish et al., 1991; Stone et al., 1994; Thomas and Morgan, 1991; Windsor et al., 1994; Yin, 1993).
Context encompasses the events, influences, and changes that occur naturally in the project setting or environment during the intervention
TABLE 7-3 Principles of Community-Based Participatory Research and Evaluation
Recognizes community as unit of identity (Hatch et al., 1993; Israel et al., 1994; Klein, 1968; Sarason, 1984; Steckler et al., 1993; Steuart, 1993; Stringer, 1996)
Builds on strengths and resources within community (Berger and Neuhaus, 1977; CDC/ATSDR, 1997; Israel and Schurman, 1990; McKnight, 1987, 1994; Minkler, 1989; Putnam, 1993; Steuart, 1993)
Facilitates collaborative partnerships in all phases of research (Bishop, 1994, 1996; CDC/ATSDR, 1997; Cornwall and Jewkes, 1995; deKoning and Martin, 1996; Durie, 1996; Fawcett, 1991; Gaventa, 1993; Goodman, 1999; Green et al., 1995; Hatch et al., 1993; Israel et al., 1992a,b; Levine et al., 1992; Lillie-Blanton and Hoffman, 1995; Maguire, 1996; Mittelmark et al., 1993; Nyden and Wiewel, 1992; Park et al., 1993; Schulz et al., 1998a; Singer, 1993; Stringer, 1996)
Integrates knowledge and action for benefit of all partners (Cornwall and Jewkes, 1995; deKoning and Martin, 1996; Fawcett, 1991; Green et al., 1995; Israel et al., 1994; Lather, 1986; Lincoln and Reason, 1996; Maguire, 1987; Park et al., 1993; Reason, 1988; Schulz et al., 1998a; Singer, 1993; Stringer, 1996)
Promotes a colearning and empowering process that attends to social inequalities (Bishop, 1994, 1996; CDC/ATSDR, 1997; Cornwall and Jewkes, 1995; deKoning and Martin, 1996; Elden and Levin, 1991; Eng and Parker, 1994; Freire, 1987; Israel et al., 1994; Labonte, 1994; Lillie-Blanton and Hoffman, 1995; Maguire, 1987; Nyden and Wiewel, 1992; Robertson and Minkler, 1994; Schulz et al., 1998a; Singer, 1993; Stringer, 1996; Yeich and Levine, 1992)
Involves cyclic and iterative process (Altman, 1995; Cornwall and Jewkes, 1995; Fawcett et al., 1996; Hatch et al., 1993; Israel et al., 1994; Levine et al., 1992; Reason, 1994; Smithies and Adams, 1993; Stringer, 1996; Tandon, 1981)
Addresses health from both positive and ecological perspectives (Antonovsky, 1985; Baker and Brownson, 1999; Brown, 1991; Durie, 1996; Goodman, 1999; Gottlieb and McLeroy, 1994; Hancock, 1993; Israel et al., 1994; Krieger, 1994; McKinlay, 1993; Schulz et al., 1998a; Stokols, 1992, 1996; WHO, 1986)
Disseminates findings and knowledge gained to all partners (Bishop, 1996; Dressler, 1993; Fawcett, 1991; Fawcett et al., 1996; Francisco et al., 1993; Gaventa, 1993; Hall, 1992; Israel et al., 1992a; Lillie-Blanton and Hoffman, 1995; Maguire, 1987; Schulz et al., 1998a; Singer, 1994; Whitehead, 1993)
Involves long-term commitment of all partners (CDC/ATSDR, 1997; Hatch et al., 1993; Israel et al., 1992a; Mittelmark et al., 1993; Schulz et al., 1998a,b)
SOURCE: Israel et al., 1998. Reprinted with permission of Perseus Books Publishers, a member of Perseus Books, L.L.C.
that might affect the outcomes (Israel et al., 1995). Context data provide information about how particular settings facilitate or impede program success. Decisions must be made about which of the many factors in the context of an intervention might have the greatest effect on project success.
Evaluation of process assesses the extent, fidelity, and quality of the implementation of interventions (McGraw et al., 1994). It describes the actual activities of the intervention and the extent of participant exposure, provides quality assurance, describes participants, and identifies the internal dynamics of program operations (Israel et al., 1995).
A distinction is often made in the evaluation of interventions between impact and outcome (Green and Lewis, 1986; Israel et al., 1995;
Simons-Morton et al., 1995; Windsor et al., 1994). Impact evaluation assesses the effectiveness of the intervention in achieving desired changes in targeted mediators, such as the knowledge, attitudes, beliefs, and behavior of participants. Outcome evaluation examines the effects of the intervention on health status, morbidity, and mortality. Impact evaluation focuses on what the intervention is specifically trying to change, and it precedes an outcome evaluation; the premise is that if the intervention can effect change in some intermediate outcome (“impact”), the “final” outcome will follow.
Although the association between impact and outcome may not always be substantiated (as discussed earlier in this chapter), impact may be a necessary measure. In some instances, the outcome goals are too far in the future to be evaluated. For example, childhood cardiovascular risk factor intervention studies typically measure intermediate gains in knowledge (Parcel et al., 1989) and changes in diet or physical activity (Simons-Morton et al., 1991). They sometimes assess cholesterol and blood pressure, but they do not usually measure heart disease because that would not be expected to occur for many years.
Given the aims and the dynamic context within which community-based participatory research and evaluation are conducted, methodologic flexibility is essential. Methods must be tailored to the purpose of the research and evaluation and to the context and interests of the community (Beery and Nelson, 1998; deKoning and Martin, 1996; Dockery, 1996; Dressler, 1993; Green et al., 1995; Hall, 1992; Hatch et al., 1993; Israel et al., 1998; Marin and Marin, 1991; Nyden and Wiewel, 1992; Schulz et al., 1998b; Singer, 1993; Stringer, 1996). Numerous researchers have suggested greater use of qualitative data, from in-depth interviews and observational studies, for evaluating the context, process, impact, and outcome of community-based participatory research interventions (Fortmann et al., 1995; Goodman, 1999; Hugentobler et al., 1992; Israel et al., 1995, 1998; Koepsell et al., 1992; Mittelmark et al., 1993; Parker et al., 1998; Sorensen et al., 1998a; Susser, 1995). Triangulation is the use of multiple methods and sources of data to overcome limitations inherent in each method and to improve the accuracy of the information collected, thereby increasing the validity and credibility of the results (Denzin, 1970; Israel et al., 1995; Reichardt and Cook, 1980; Steckler et al., 1992). For examples of the integration of qualitative and quantitative methods in research and evaluation of public-health interventions, see Steckler et al. (1992) and Parker et al. (1998).
Assessing Government Interventions
Despite the importance of legislation and regulation to promote public health, the effectiveness of government interventions is poorly understood. Policymakers often cannot answer important empirical questions: Do legal interventions work, and at what economic and social cost? In particular, policymakers need to know whether legal interventions achieve their intended goals (e.g., reducing risk behavior). If so, do legal interventions unintentionally increase other risks (a risk/risk tradeoff)? Finally, what are the adverse effects of regulation on personal or economic liberties and general prosperity in society? The last question is important not only because freedom has intrinsic value in a democracy, but also because activities that dampen economic development can have health effects; for example, research demonstrates a positive correlation between socioeconomic status and health (Chapter 4).
Legal interventions often are not subjected to rigorous research evaluation. The research that has been done, moreover, has faced challenges in methodology. There are so many variables that can affect behavior and health status (e.g., differences in informational, physical, social, and cultural environments) that it can be extraordinarily difficult to demonstrate a causal relationship between an intervention and a perceived health effect. Consider the methodologic constraints in identifying the effects of specific drunk-driving laws. Several kinds of laws can be enacted within a short period, so it is difficult to isolate the effect of each law. Publicity about the problem and the legal response can cross state borders, making state comparisons more difficult. Because people who drive under the influence of alcohol also could engage in other risky driving behaviors (e.g., speeding, failing to wear safety belts, running red lights), researchers need to control for changes in other highway safety laws and traffic law enforcement. Subtle differences between comparison communities can have unanticipated effects on the impact of legal interventions (DeJong and Hingson, 1998; Hingson, 1996).
Despite such methodologic challenges, social science researchers have studied legal interventions, often with encouraging results. The social science, medical, and behavioral literature contains evaluations of interventions in several public health areas, particularly in relation to injury prevention (IOM, 1999; Rivara et al., 1997a,b). For example, studies have evaluated the effectiveness of regulations to prevent head injuries (bicycle helmets: Dannenberg et al., 1993; Kraus et al., 1994; Lund et al., 1991; Ni et al., 1997; Thompson et al., 1996a,b), choking and suffocation (refrigerator disposal and warning labels on thin plastic bags: Kraus, 1985), child poisoning (childproof packaging: Rogers, 1996), and burns (tap water: Erdmann et al., 1991). One regulatory measure that has received a great deal of research attention relates to reductions in cigarette smoking (Chapter 6).
Legal interventions can be an important part of strategies to change behaviors. In considering them, government and other public health agencies face difficult and complex tradeoffs between population health and individual rights (e.g., autonomy, privacy, liberty, property). One example is the controversy over laws that require motorcyclists to wear helmets. Ethical concerns accompany the use of legal interventions to mandate behavior change and must be part of the deliberation process.
It is not enough to demonstrate that a treatment benefits some patients or community members. The demand for health programs exceeds the resources available to pay for them, so treatments must provide both clinical benefit and value for money. Investigators, clinicians, and program planners must demonstrate that their interventions constitute a good use of resources.
Well over $1 trillion is spent on health care each year in the United States. Current estimates suggest that expenditures on health care exceed $4000 per person (Health Care Financing Administration, 1998). Investments are made in health care to produce good health status for the population, and it is usually assumed that more investment will lead to greater health. Some expenditures in health care produce relatively little benefit; others produce substantial benefits. Cost-effectiveness analysis (CEA) can help guide the use of resources to achieve the greatest improvement in health status for a given expenditure.
Consider the medical interventions in Table 7-4, all of which are well-known, generally accepted, and widely used. Some are traditional medical care and some are preventive programs. To emphasize the focus on increasing good health, the table presents the data in units of health bought for $1 million rather than in dollars per unit of health, the usual approach in CEA. The life-year is the most comprehensive unit measure of health. Table 7-4 reveals several important points about resource allocation. There is tremendous variation among the interventions in what can be accomplished for $1 million: the money nets 7,750 life-years if used for influenza vaccinations for the elderly, 217 life-years if applied to smoking-cessation programs, but only 2 life-years if used to supply lovastatin to men aged 35–44 who have high total cholesterol but no heart disease and no other risk factors for heart disease.

TABLE 7-4 Life-Years Yielded by Selected Interventions per $1 Million, 1997 Dollars
How effectively an intervention contributes to good health depends not only on the intervention, but also on the details of its use. Antihypertensive medication is effective, but propranolol is more cost-effective than captopril. Thyroid screening is more cost-effective in women than in men. Lovastatin produces more good health when targeted at older high-risk men than at younger low-risk men. Screening for cervical cancer at 3-year intervals with the Pap smear yields 36 life-years per $1 million (compared with no screening), but each $1 million spent to increase the frequency of screening to 2-year intervals brings only 1 additional life-year.
The numbers in Table 7-4 illustrate a central concept in resource allocation: opportunity cost. The true cost of choosing to use a particular intervention, or to use it in a particular way, is not the monetary cost per se, but the health benefits that could have been achieved if the money had been spent on another service instead. Thus, the opportunity cost of providing annual Pap smears ($1 million) rather than smoking-cessation programs is the 217 life-years that could have been achieved through smoking cessation.
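The arithmetic behind these comparisons is simple enough to sketch. The figures below are the ones quoted in the text for Table 7-4; the dictionary keys and helper functions are illustrative conveniences, not part of any published CEA tool.

```python
# Life-years bought per $1 million, as quoted in the text
# (Table 7-4, 1997 dollars). Entries and helper names are illustrative.
LIFE_YEARS_PER_MILLION = {
    "influenza vaccination, elderly": 7_750,
    "smoking-cessation program": 217,
    "Pap smear at 3-year intervals": 36,
    "lovastatin, low-risk men 35-44": 2,
}

def cost_per_life_year(intervention: str) -> float:
    """Convert life-years per $1 million back into dollars per
    life-year, the form in which CEA results are usually reported."""
    return 1_000_000 / LIFE_YEARS_PER_MILLION[intervention]

def opportunity_cost(alternative: str) -> int:
    """Life-years forgone by spending the $1 million elsewhere:
    the benefit the alternative would have produced (the sense
    in which the text uses the term)."""
    return LIFE_YEARS_PER_MILLION[alternative]

print(f"${cost_per_life_year('smoking-cessation program'):,.0f} per life-year")
print(opportunity_cost("smoking-cessation program"), "life-years forgone")
```

Running the sketch shows smoking cessation costing roughly $4,600 per life-year, while choosing another use for the same $1 million forgoes the 217 life-years it would have bought.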
The term cost-effectiveness is commonly used but widely misunderstood. Some people confuse cost-effectiveness with cost minimization. Cost minimization aims to reduce health care costs regardless of health outcomes. CEA does not have cost-reduction per se as a goal but is designed to obtain the most improvement in health for a given expenditure. CEA also is often confused with cost/benefit analysis (CBA), which compares investments with returns. CBA ranks the amount of improved health associated with different expenditures with the aim of identifying the appropriate level of investment. CEA indicates which intervention is preferable given a specific expenditure.
Usually, costs are represented by the net or difference between the total costs of the intervention and the total costs of the alternative to that intervention. Typically, the measure of health is the QALY. The net health effect of the intervention is the difference between the QALYs produced by an intervention and the QALYs produced by an alternative or other comparative base.
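The net-cost and net-QALY arithmetic described above is often summarized as an incremental cost-effectiveness ratio. The sketch below uses hypothetical numbers, invented for illustration rather than drawn from any study.

```python
def icer(cost_new: float, cost_alt: float,
         qaly_new: float, qaly_alt: float) -> float:
    """Incremental cost-effectiveness ratio: the net cost of an
    intervention over its alternative, divided by the net QALYs
    it produces over that same alternative."""
    net_cost = cost_new - cost_alt
    net_qalys = qaly_new - qaly_alt
    if net_qalys <= 0:
        raise ValueError("intervention yields no additional QALYs")
    return net_cost / net_qalys

# Hypothetical example: a program costing $12,000 per patient vs.
# usual care at $2,000, producing 10.25 vs. 10.00 QALYs per patient.
print(icer(12_000, 2_000, 10.25, 10.00))  # 40000.0 -> $40,000 per QALY gained
```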
Comprehensive as it is, CEA does not include everything that might be relevant to a particular decision—so it should never be used mechanically. Decision-makers can have legitimate reasons to emphasize particular groups, benefits, or costs more heavily than others. Furthermore, some decisions require information that cannot be captured easily in a CEA, such as the effect of an intervention on individual privacy or liberty.
CEA is an analytical framework that arises from the question of which ways of promoting good health—procedures, tests, medications, educational programs, regulations, taxes or subsidies, and combinations and variations of these—provide the most effective use of resources. Specific recommendations about behavioral and psychosocial interventions will contribute the most to good health if they are set in this larger context and based on information that demonstrates that they are in the public interest. However, comparing behavioral and psychosocial interventions with other ways of promoting health on the basis of cost-effectiveness requires additional research. Currently there are too few studies that meet this standard to support such recommendations.
A basic assumption underlying intervention research is that tested interventions found to be effective are disseminated to and implemented in clinics, communities, schools, and worksites. However, there is a sizable gap between science and practice (Anderson, 1998; Price, 1989, 1998). Researchers and practitioners need to ensure that an intervention is effective, and that the community or organization is prepared to adopt, implement, disseminate, and institutionalize it. There also is a need for demonstration research (phase V) to explain more about the process of dissemination itself.
Dissemination to Consumers
Biomedical research results are commonly reported in the mass media. Nearly every day people are given information about the risks of disease, the benefits of treatment, and the potential health hazards in their environments. They regularly make health decisions on the basis of their understanding of such information. Some evidence shows that lay people often misinterpret health risk information (Berger and Hendee, 1989; Fischhoff, 1999a), as do their doctors (Kalet et al., 1994; Kong et al., 1986). On such a widely publicized issue as mammography, for example, evidence suggests that women overestimate their risk of getting breast cancer by a factor of at least 20 and overestimate the benefits of mammography by a factor of 100 (Black et al., 1995). In a study of 500 female veterans (Schwartz et al., 1997), half the women overestimated their risk of death from breast cancer by a factor of 8. This did not appear to be because the subjects thought that they were more at risk than other women; only 10% reported that they were at higher risk than the average woman of their age. The topic of communication of health messages to the public is discussed at length in an IOM report, Speaking of Health: Assessing Health Communication Strategies for Diverse Populations (IOM, 2001).
Communicating Risk Information
Improving communication requires understanding what information the public needs. That necessitates both descriptive and normative analyses, which consider what the public believes and what the public should know, respectively. Juxtaposing normative and descriptive analyses might
provide guidance for reducing misunderstanding (Fischhoff and Downs, 1997). Formal normative analysis of decisions involves the creation of decision trees, showing the available options and the probabilities of various outcomes of each, whose relative attractiveness (or aversiveness) must be evaluated by people. Although full analyses of decision problems can be quite complex, they often reveal ways to drastically simplify individuals’ decision-making problems, in the sense that they reveal a small number of issues of fact or value that really merit serious attention (Clemen, 1991; Merz et al., 1993; Raiffa, 1968). Those few issues can still pose significant challenges for decision makers. The actual probabilities can differ from people’s subjective probabilities (which govern their behavior). For example, a woman who overestimates the value of a mammogram might insist on tests that are of little benefit to her and mistrust the political/medical system that seeks to deny such care (Woloshin et al., 2000). Obtaining estimates of subjective probabilities is difficult. Although eliciting probabilities has been studied in other contexts over the past two generations (von Winterfeldt and Edwards, 1986; Yates, 1990), it has received much less attention in medical contexts, where it can pose questions that people are unwilling or unable to confront (Fischhoff and Bruine de Bruin, 1999).
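A decision tree of this kind reduces, formally, to comparing expected values across the available options. The sketch below is a minimal illustration; the options, probabilities, and utilities are invented for the example and are not taken from the mammography literature.

```python
# Each option maps to branches of (probability, utility); the relative
# attractiveness of each outcome is expressed as a utility between
# 0 and 1. All numbers here are hypothetical.
def expected_utility(branches):
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, \
        "branch probabilities must sum to 1"
    return sum(p * u for p, u in branches)

options = {
    "test now": [(0.01, 0.60),   # disease present, caught early
                 (0.99, 0.95)],  # disease absent, minor burden of testing
    "wait":     [(0.01, 0.30),   # disease present, caught late
                 (0.99, 1.00)],  # disease absent, no burden
}

for name, branches in options.items():
    print(name, round(expected_utility(branches), 4))
```

The point of the exercise is not the particular numbers but that, once the tree is written down, the decision hinges on only a few probabilities and values, which is exactly where subjective and actual probabilities can diverge.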
In addition to such quantitative beliefs, people often need a qualitative understanding of the processes by which risks are created and controlled. This allows them to get an intuitive feeling for the quantitative estimates, to feel competent to make decisions in their own behalf, to monitor their own experience, and to know when they need help (Fischhoff, 1999b; Leventhal and Cameron, 1987). Not seeing the world in the same way as scientists do also can lead lay people to misinterpret communications directed at them. One common (and some might argue, essential) strategy for evaluating any public health communication or research instrument is to ask people to think aloud as they answer draft versions of questions (Ericsson and Simon, 1994; Schriver, 1989). For example, subjects might be asked about the probability of getting HIV from unprotected sexual activity. Reasons for their assessments might be explored as they elaborate on their impressions and the assumptions they use (Fischhoff, 1999b; McIntyre and West, 1992). The result should both reveal their intuitive theories and improve the communication process.
When people must evaluate their options, the way in which information is framed can have a substantial effect on how it is used (Kahneman and Tversky, 1983; Schwartz, 1999; Tversky and Kahneman, 1988). The
fairest presentation of risk information might be one in which multiple perspectives are used (Kahneman and Tversky, 1983, 1996). For example, one common situation involves small risks that add up over the course of time, through repeated exposures. The chances of being injured in an automobile crash are very small for any one outing, whether or not the driver wears a seatbelt. However, driving over a lifetime creates a substantial risk—and a substantial benefit for seatbelt use. One way to communicate that perspective is to do the arithmetic explicitly, so that subjects understand it (Linville et al., 1993). Another method that helps people to understand complex information involves presenting ranges rather than best estimates. Science is uncertain, and it should be helpful for people to understand the intervals within which their risks are likely to fall (Lipkus and Hollands, 1999).
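Doing that arithmetic explicitly is straightforward: if each exposure carries an independent risk p, the chance of at least one bad outcome over n exposures is 1 − (1 − p)^n. The per-trip figure below is hypothetical, chosen only to show how a negligible single-event risk accumulates over a lifetime of driving.

```python
def cumulative_risk(p_per_event: float, n_events: int) -> float:
    """Chance of at least one bad outcome over n independent exposures."""
    return 1 - (1 - p_per_event) ** n_events

# Hypothetical figures: a 1-in-4,000,000 injury risk per car trip,
# and roughly 50,000 trips over a driving lifetime.
per_trip = 1 / 4_000_000
print(f"one trip: {cumulative_risk(per_trip, 1):.8f}")
print(f"lifetime: {cumulative_risk(per_trip, 50_000):.4f}")
```

Under these assumptions the single-trip risk is a fraction of a millionth, while the lifetime risk exceeds one in a hundred, which is the multiple-perspective contrast the communication is meant to convey.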
Risk communication can be improved. For example, many members of the public have been fearful that proximity to electromagnetic fields and power lines can increase the risk of cancer. Studies revealed that many people knew very little about the properties of electricity; in particular, they usually were unaware of how rapidly exposure decreases with distance from the lines. After studying mental models of this risk, Morgan (1995) developed a tiered brochure that presented the problem at several levels of detail. The brochure addressed common misconceptions and explained why scientists disagree about the risks posed by electromagnetic fields. Participants on each side of the debate reviewed the brochure for fairness. Several hundred thousand copies of the brochure have now been distributed. This approach to communication requires that the public listen to experts, but it also requires that the experts listen to the public. Providing information is not enough; it is necessary to take the next step to demonstrate that the information is presented in an unbiased fashion and that the public accurately processes what is offered (Edworthy and Adams, 1997; Hadden, 1986; Morgan et al., 2001; National Research Council, 1989).
The electromagnetic field brochure is an example of a general approach in cognitive psychology, in which communications are designed to create coherent mental models of the domain being considered (Ericsson and Simon, 1994; Fischhoff, 1999b; Gentner and Stevens, 1983; Johnson-Laird, 1980). The bases of these communications are formal models of the domain. In the case of the complex processes creating and controlling risks, the appropriate representation is often an influence diagram, a directed graph that captures the uncertain relationships among the factors
involved (Clemen, 1991; Morgan et al., 2001). Creating such a diagram requires pooling the knowledge of diverse disciplines, rather than letting each tell its own part of the story. Identifying the critical messages requires considering both the science of the risk and recipients’ intuitive conceptualizations.
Presentation of Clinical Research Findings
Research results are commonly misinterpreted. When a study shows that the effect of a treatment is statistically significant, it is often assumed that the treatment works for every patient or at least for a high percentage of those treated. In fact, large experimental trials, often with considerable publicity, promote treatments that have only minor effects in most patients. For example, contemporary care for elevated serum cholesterol has been greatly influenced by results of the Coronary Primary Prevention Trial (CPPT; Lipid Research Clinics Program, 1984), in which men were randomly assigned to take a placebo or cholestyramine. Cholestyramine can significantly lower serum cholesterol and, in this trial, reduced it by an average of 8.5%. Men in the treatment group experienced 24% fewer heart attack deaths and 19% fewer heart attacks than did men who took the placebo.
The CPPT showed a 24% reduction in cardiovascular mortality in the treated group. However, the absolute proportions of patients who died of cardiovascular disease were similar in the 2 groups: there were 38 deaths among 1900 participants (2%) in the placebo group and 30 deaths among 1906 participants (1.6%) in the cholestyramine group. In other words, taking the medication for 6 years reduced the chance of dying from cardiovascular disease from 2% to 1.6%.
Because of the difficulties in communicating risk ratio information, the use of simple statistics, such as the number needed to treat (NNT), has been suggested (Sackett et al., 1997). NNT is the number of people that must be treated to avoid one bad outcome. Statistically, NNT is defined as the reciprocal of the absolute risk reduction. In the cholesterol example, if 2% (0.020) of the patients died in the control arm of an experiment and 1.6% (0.016) died in the experimental arm, the absolute risk reduction is 0.020 − 0.016 = 0.004. The reciprocal of 0.004 is 250. In this case, 250 people would have to be treated for 6 years to avoid 1 death from coronary heart disease. Treatments can harm as well as benefit, so in addition to calculating the NNT, it is valuable to calculate the number
needed to harm (NNH). This is the number of people a clinician would need to treat to produce one adverse event. NNT and NNH can be modified for those in particular risk groups. The advantage of these simple numbers is that they allow much clearer communication of the magnitude of treatment effectiveness.
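The NNT arithmetic from the cholestyramine example can be written out directly. The function names below are ours, chosen for illustration rather than taken from Sackett et al.

```python
def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    absolute_risk_reduction = control_event_rate - treated_event_rate
    return 1 / absolute_risk_reduction

def nnh(treated_harm_rate: float, control_harm_rate: float) -> float:
    """Number needed to harm: reciprocal of the absolute risk increase."""
    return 1 / (treated_harm_rate - control_harm_rate)

# The CPPT figures from the text: 2.0% cardiovascular deaths with
# placebo, 1.6% with cholestyramine, over 6 years of treatment.
print(round(nnt(0.020, 0.016)))  # 250 treated for 6 years to avert 1 death
```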
Shared Decision Making
Once patients understand the complex information about outcomes, they can fully participate in the decision-making process. The final step in disseminating information to patients involves an interactive process that allows patients to make informed choices about their own health care.
Despite a growing consensus that they should be involved, evidence suggests that patients are rarely consulted. Wennberg (1995) outlined a variety of common medical decisions in which there is uncertainty. In each, treatment selection involves profiles of risks and benefits for patients. Thiazide medications, for example, can be effective at controlling blood pressure, but they also can be associated with increased serum cholesterol, and the benefit of blood pressure reduction must be balanced against such side effects as dizziness and impotence.
Factors that affect patient decision making and use of health services are not well understood. It is usually assumed that use of medical services is driven primarily by need, that those who are sickest or most disabled use services the most (Aday, 1998). Although illness is clearly the major reason for service use, the literature on small-area variation demonstrates that there can be substantial variability in service use among communities that have comparable illness burdens and comparable insurance coverage (Wennberg, 1998). Therefore, social, cultural, and system variables also contribute to service use.
The role of patients in medical decision making has undergone substantial recent change. In the early 1950s, Parsons (1951) suggested that patients were excluded from medical decision making unless they assumed the “sick role,” in which patients submit to a physician’s judgment, and it is assumed that physicians understand the patients’ preferences. Through a variety of changes, patients have become more active. More information is now available, and many patients demand a greater role (Sharf, 1997). The Internet offers vast amounts of information to patients, some of it misleading or inaccurate (Impicciatore et al., 1997). One difficulty is that many patients are not sophisticated consumers of technical medical information (Strum, 1997).
Another important issue is whether patients want a role. The literature is contradictory on this point; at least eight studies have addressed the issue. Several suggest that most patients express little interest in participating (Cassileth et al., 1980; Ende et al., 1989; Mazur and Hickam, 1997; Pendleton and House, 1984; Strull et al., 1984; Waterworth and Luker, 1990). Those studies challenge the basis of shared medical decision making. Is it realistic to engage patients in the process if they are not interested? Deber (Deber, 1994; Deber et al., 1996) has drawn an important distinction between problem solving and decision making. Medical problem solving requires technical skill to make an appropriate diagnosis and select treatment. Most patients prefer to leave those judgments in the hands of experts (Ende et al., 1989). Studies challenging the notion that patients want to make decisions typically asked questions about problem solving (Ende et al., 1989; Pendleton and House, 1984; Strull et al., 1984).
Shared decision making requires patients to express personal preferences for desired outcomes, and many decisions involve very personal choices. Wennberg (1998) offers examples of variation in health care practices that are dominated by physician choice. One is the choice between mastectomy and lumpectomy for women with well-defined breast cancer. Systematic clinical trials have shown that the probability of surviving breast cancer is about equal after mastectomy and after lumpectomy followed by radiation (Lichter et al., 1992). But in some areas of the United States, nearly half of women with breast cancer have mastectomies (for example, Provo, Utah); in other areas less than 2% do (for example, New Jersey; Wennberg, 1998). Such differences are determined largely by surgeon choice; patient preference is not considered. In the breast cancer example, interviews suggest that some women have a high preference for maintaining the breast, and others feel more comfortable having more breast tissue removed. The choices are highly personal and reflect variations in comfort with the idea of life with and without a breast. Patients might not want to engage in technical medical problem solving, but they are the only source of information about preferences for potential outcomes.
The process by which patients exercise choice can be difficult. There have been several evaluations of efforts to involve patients in decision making. Greenfield and colleagues (1985) taught patients how to read their own medical records and offered coaching on what questions to ask during encounters with physicians. In this randomized trial involving patients with peptic ulcer disease, those assigned to a 20-minute treatment had fewer functional limitations and were more satisfied with their care
than were patients in the control group. A similar experiment involving patients treated for diabetes showed that patients randomly assigned to receive visit preparation scored significantly better than controls on three dimensions of health-related quality of life (mobility, role performance, physical activity). Furthermore, there were significant improvements for biochemical measures of diabetes control (Greenfield et al., 1988).
Many medical decisions are more complex than those studied by Greenfield and colleagues. There are usually several treatment alternatives, and the outcomes of each choice are uncertain. Also, the outcomes might be valued differently by different people. Shared decision-making programs have been proposed to address those concerns (Kasper et al., 1992). The programs usually use electronic media. Some involve interactive technologies in which a patient becomes familiar with the probabilities of various outcomes; video also allows the patient to witness the outcomes of others who have made each treatment choice. A variety of interactive programs have been systematically evaluated. In one study (Barry et al., 1995), patients with benign prostatic hyperplasia were given the opportunity to use an interactive video. The video was generally well received, and the authors reported a significant reduction in the rate of surgery and an increase in the proportion of patients who chose “watchful waiting” after using the decision aid. Flood et al. (1996) reported similar results with an interactive program.
Not all evaluations of decision aids have been positive. In one evaluation of an impartial video for patients with ischemic heart disease (Liao et al., 1996), 44% of the patients found it helpful for making treatment choices, but more than 40% reported that it increased their anxiety. Most of the patients had received advice from their physicians before watching the video.
Despite enthusiasm for shared medical decision making, little systematic research has evaluated interventions to promote it (Frosch and Kaplan, 1999). Systematic experimental trials are needed to determine whether the use of shared decision aids enhances patient outcomes. Although decision aids appear to enhance patient satisfaction, it is unclear whether they result in reductions in surgery, as suggested by Wennberg (1998), or in improved patient outcomes (Frosch and Kaplan, 1999).
Dissemination Through Organizations
The effect of any preventive intervention depends both on its ability to influence health behavior change or reduce health risks and on the extent to which the target population has access to and participates in the program. Few preventive interventions are free-standing in the community. Rather, organizations serve as “hosts” for health promotion and disease prevention programs. Once a program has proven successful in demonstration projects and efficacy trials, it must be adopted and implemented by new organizations. Unfortunately, diffusion to new organizations often proceeds very slowly (Murray, 1986; Parcel et al., 1990).
A staged change process has been proposed for optimal diffusion of preventive interventions to new organizations. Although different researchers have offered a variety of approaches, there is consensus on the importance of at least four stages (Goodman et al., 1997):
dissemination, during which organizations are made aware of the programs and their benefits;
adoption, during which the organization commits to initiating the program;
implementation, during which the organization offers the program or services;
maintenance or institutionalization, during which the organization makes the program part of its routines and standard offerings.
Research investigating the diffusion of health behavior change programs to new organizations can be seen, for example, in adoption of prevention curricula by schools and of preventive services by medical care practices.
Schools are important because they allow consistent contact with children over their developmental trajectory and they provide a place where acquisition of new information and skills is normative (Orlandi, 1996b). Although much emphasis has been placed on developing effective health behavior change curricula for students throughout their school years, the literature is replete with evaluations of school-based curricula that suggest that such programs have been less than successful (Bush et
al., 1989; Parcel et al., 1990; Rohrbach et al., 1996; Walter, 1989). Challenges or barriers to effective diffusion of the programs include organizational issues, such as limited time and resources, few incentives for the organization to give priority to health issues, pressure to focus on academic curricula to improve student performance on proficiency tests, and unclear role delineation in terms of responsibility for the program; extra-organizational issues or “environmental turbulence,” such as restructuring of schools, changing school schedules or enrollments, uncertainties in public funding; and characteristics of the programs that make them incompatible with the potential host organizations, such as being too long, costly, and complex (Rohrbach et al., 1996; Smith et al., 1995).
Initial or traditional efforts to enhance diffusion focused on the characteristics of the intervention program, but more recent studies have focused on the change process itself. Two NCI-funded studies to diffuse tobacco prevention programs throughout schools in North Carolina and Texas targeted the four stages of change and were evaluated through randomized, controlled trials (Goodman et al., 1997; Parcel et al., 1989, 1995; Smith et al., 1995; Steckler et al., 1992). Teacher-training interventions appeared to enhance the likelihood of implementation in each study (an effect that has been replicated in other investigations; see Perry et al., 1990). However, other strategies (e.g., process consultation, newsletters, self-paced instructional video) were less successful at enhancing adoption and institutionalization. None of the strategies attempted to change the organizing arrangements (such as reward systems or role responsibilities) of the school districts to support continued implementation of the program.
These results suggest that further reliance on organizational change theory might enable programs to diffuse more rapidly and thoroughly. For example, Rohrbach et al. (1996, pp. 927–928) suggest that “change agents and school personnel should work as a team to diagnose any problems that may impede program implementation and develop action plans to address them [and that]…change agents need to promote the involvement of teachers, as well as that of key administrators, in decisions about program adoption and implementation.” These suggestions are clearly consistent with an organizational development approach. Goodman and colleagues (1997) suggest that the North Carolina intervention might have been more effective had it included more participative problem diagnosis and action planning, and had consultation been less directive and more oriented toward increasing the fit between the host organization and the program.
Primary care medical practices have long been regarded as organizational settings that provide opportunities for health behavior interventions. With the growth of managed care and its financial incentives for prevention, these opportunities are even greater (Gordon et al., 1996). Much effort has been invested in the development of effective programs and processes for clinical practices to accomplish health behavior change. However, the diffusion of such programs to medical practices has been slow (e.g., Anderson and May, 1995; Lewis, 1988).
Most systemic programs encourage physicians, nurses, health educators, and other members of the health-professional team to provide more consistent change-related statements and behavioral support for health-enhancing behaviors in patients (Chapter 5). There might be fundamental aspects of a medical practice that support or inhibit efforts to improve health-related patient behavior (Walsh and McPhee, 1992). Visual reminders to stay up-to-date on immunizations, to stop smoking cigarettes, to use bicycle helmets, and to eat a healthy diet are examples of systemic support for patient activation and self-care (Lando et al., 1995). Internet support for improved self-management of diabetes has shown promise (McKay et al., 1998). Automated chart reminders to ask about smoking status, update immunizations, and ensure timely cancer-screening examinations—such as Pap smears, mammography, and prostate screening—are systematic practice-based improvements that increase the rate of success in reaching stated goals on health process and health behavior measures (Cummings et al., 1997). Prescription forms for specific telephone callback support can enhance access to telephone-based counseling for weight loss, smoking cessation, and exercise and can make such behavioral teaching and counseling more accessible (Pronk and O’Connor, 1997). Those and other structural characteristics of clinical practices are being used and evaluated as systematic practice-based changes that can improve treatment for, and prevention of, various chronic illnesses (O’Connor et al., 1998).
Barriers to diffusion include physician factors, such as lack of training, lack of time, and lack of confidence in one’s prevention skills; health-care system factors, such as lack of health-care coverage and inadequate reimbursement for preventive services in fee-for-service systems; and office organization factors, such as inflexible office routines, lack of reminder systems, and unclear assignment of role responsibilities (Thompson et al., 1995; Wagner et al., 1996).
The capitated financing of many managed-care organizations greatly
reduces system barriers. Interventions that have focused solely on physician knowledge and behavior have not been very effective; interventions that also addressed office organization factors have been more effective (Solberg et al., 1998b; Thompson et al., 1995). For example, Put Prevention Into Practice (PPIP; Griffith et al., 1995), a comprehensive federal program, was recommended for diffusion by the U.S. Preventive Services Task Force and is distributed by federal agencies and through professional associations. Using a case study approach, McVea and colleagues (1996) studied the implementation of the program in family practice settings. They found that PPIP was “used not at all or only sporadically by the practices that had ordered the kit” (p. 363). The authors suggested that the practices that provided selected preventive services did not adopt PPIP because they did not have the organizational skills and resources to incorporate the prevention systems into their office routines without external assistance.
Descriptive research clearly indicates a need for well-conceived and methodologically rigorous diffusion research. Many of the barriers to more rapid and effective diffusion are clearly “systems problems” (Solberg et al., 1998b). Thus, even though the results are somewhat mixed, recent work applying systems approaches and organizational development strategies to the diffusion dilemma is encouraging. In particular, the emphasis on building internal capacity for diffusion of the preventive interventions—for example, continuous quality improvement teams (Solberg et al., 1998a) and the identification and training of “program champions” within the adopting systems (Smith et al., 1995)—seems crucial for institutionalization of the programs.
Dissemination to Community-Based Groups
This section examines three aspects of dissemination: the need for dissemination of effective community interventions, community readiness for interventions, and the role of dissemination research.
Dissemination of Effective Community Interventions
Dissemination requires the identification of core and adaptive elements of an intervention (Pentz et al., 1990; Pentz and Trebow, 1997;
Price, 1989). Core elements are features of an intervention program or policy that must be replicated to maintain the integrity of the interventions as they are transferred to new settings. They include theoretically based behavior change strategies, targeting of multiple levels of influence, and the involvement of empowered community leaders (Florin and Wandersman, 1990; Pentz, 1998). Practitioners need training in specific strategies for the transfer of core elements (Bero et al., 1998; Orlandi, 1986). In addition, the amount of intervention delivered and its reach into the targeted population might have to be unaltered to replicate behavior change in a new setting. Research has not established a quantitative “dose” of intervention or a quantitative guide for the percentage of core elements that must be implemented to achieve behavior change. Process evaluation can provide guidance regarding the desired intensity and fidelity to intervention protocol. Botvin and colleagues (1995), for example, found that at least half the prevention program sessions needed to be delivered to achieve the targeted effects in a youth drug abuse prevention program. They also found that increased prevention effects were associated with fidelity to the intervention protocol, which included standardized training of those implementing the program, implementation within 2 weeks of that training, and delivery of at least two program sessions or activities per week (Botvin et al., 1995).
Adaptive elements are features of an intervention that can be tailored to local community, organizational, social, and economic realities of a new setting without diluting the effectiveness of the intervention (Price, 1989). Adaptations might include timing and scheduling or culturally meaningful themes through which the educational and behavior change strategies are delivered.
Community and Organizational Readiness
Community and organizational factors might facilitate or hinder the adoption, implementation, and maintenance of innovative interventions. Diffusion theory assumes that the unique characteristics of the adopter (such as community, school, or worksite) interact with the specific attributes of the innovation (risk factor targets) to determine whether and when an innovation is adopted and implemented (Emmons et al., 2000; Rogers, 1983, 1995). Rogers (1983, 1995) has identified characteristics that predict the adoption of innovations in communities and organizations. For example, an innovation that has a relative advantage over the
idea or activity that it supersedes is more likely to be adopted. In the case of health promotion, organizations might see smoke-free worksites as having a relative advantage not only for employee health, but also for the reduction of absenteeism. An innovation that is seen as compatible with adopters’ sociocultural values and beliefs, with previously introduced ideas, or with adopters’ perceived needs for innovation is more likely to be implemented. The less complex and the clearer an innovation is, the more likely it is to be adopted. For example, potential adopters are more likely to change their health behaviors when educators provide clear specification of the skills needed to change the behaviors. Trialability is the degree to which an innovation can be experimented with on a limited basis. In nutrition education, adopters are more likely to prepare low-fat recipes at home if they have an opportunity to taste the results in a class or supermarket and are given clear, simple directions for preparing them. Finally, observability is the degree to which the results of an innovation are visible to others. In health behavior change, an example of observability might be attention given to a health promotion program by the popular press (Pentz, 1998; Rogers, 1983).
The ability to identify effective interventions and explain the characteristics of communities and organizations that support dissemination of those interventions provides the basic building blocks for dissemination. It is necessary, however, to learn more about how dissemination occurs to increase its effectiveness (Pentz, 1998). What are the core elements of interventions, and how can they be adapted (Price, 1989)? How do the predictors of diffusion function in the dissemination process (Pentz, 1998)? What characteristics of community leaders are associated with dissemination of prevention programs? What personnel and material resources are needed to implement and maintain prevention programs? How can written materials and training in program implementation be provided to preserve fidelity to core elements (Price, 1989)?
Dissemination research could help identify alternative ways of conceptualizing the transfer of intervention technology from research to practice settings. Rather than disseminating an exact replication of specific tested interventions, program transfer might be based on core and adaptive intervention components at both the individual and community organizational levels (Blaine et al., 1997; Perry, 1999). Dissemination might also be viewed as replicating a community-based participatory research process, or as a planning process that incorporates core components (Perry, 1999), rather than as exact duplication of all aspects of intervention activities.
The principles of community-based participatory research presented here could be operationalized and used as criteria for examining the extent to which these dimensions were disseminated to other projects. The guidelines developed by Green and colleagues (1995) for classifying participatory research projects also could be used. Similarly, based on her research and experience with children and adolescents in school health behavior change programs, Perry (1999) developed a guidebook that outlines a 10-step process for developing communitywide health behavior programs for children and adolescents.
Facilitating Interorganizational Linkages
To address complex health issues effectively, organizations increasingly link with one another to form either dyadic connections (pairs) or networks (Alter and Hage, 1992). The potential benefits of these interorganizational collaborations include access to new information, ideas, materials, and skills; minimization of duplication of effort and services; shared responsibility for complex or controversial programs; increased power and influence through joint action; and increased options for intervention (e.g., one organization might not experience the political constraints that hamper the activities of another; Butterfoss et al., 1993). However, interorganizational linkages have costs. Time and resources must be devoted to the formation and maintenance of relationships. Negotiating the assessment and planning processes can take longer. And sometimes an organization can find that the policies and procedures of other organizations are incompatible with its own (Alter and Hage, 1992; Butterfoss et al., 1993).
One way a dyadic linkage between organizations can serve health-promoting goals grows out of the diffusion of innovations through organizations. An organization can serve as a “linking agent” (Monahan and Scheirer, 1988), facilitating the adoption of a health innovation by organizations that are potential implementors. For example, the National Institute for Dental Research (NIDR) developed a school-based program to encourage children to use a fluoride mouth rinse to prevent caries. Rather than marketing the program directly to the schools, NIDR worked with
state agencies to promote the program. In a national study, Monahan and Scheirer (1988) found that when state agencies devoted more staff to the program and located a moderate proportion of their staff in regional offices (rather than in a central office), a larger proportion of school districts was likely to implement the program. Other programs, such as the Heart Partners program of the American Heart Association (Roberts-Gray et al., 1998), have used the concept of linking agents to diffuse preventive interventions. Studies of these approaches attempt to identify the organizational policies, procedures, and priorities that permit the linking agent to successfully reach a large proportion of the organizations that might implement the health behavior program. However, the research in this area does not allow general conclusions or guidelines to be drawn.
Interorganizational networks are commonly used in community-wide health initiatives. Such networks might be composed of similar organizations that coordinate service delivery (often called consortia) or organizations from different sectors that bring their respective resources and expertise to bear on a complex health problem (often called coalitions). Multihospital systems or linkages among managed-care organizations and local health departments for treating sexually transmitted diseases (Rutherford, 1998) are examples of consortia. The interorganizational networks used in Project ASSIST and COMMIT, major NCI initiatives to reduce the prevalence of smoking, are examples of coalitions (U.S. Department of Health and Human Services, 1990).
Stage theory has been applied to the formation and performance of interorganizational networks (Alter and Hage, 1992; Goodman and Wandersman, 1994). Various authors have posited somewhat different stages of development, but they all include: initial actions, to form the coalition; the formalization of the mission, structure, and processes of the coalition; planning, development, and implementation of programmatic activities; and accomplishment of the coalition’s health goals. Stage theory suggests that different strategies are likely to facilitate success at different stages of development (Lewin, 1951; Schein, 1987). The complexity, formalization, staffing patterns, communication and decision-making patterns, and leadership styles of the interorganizational network will affect its ability to progress toward its goals (Alter and Hage, 1992; Butterfoss et al., 1993; Kegler et al., 1998a,b).
In 1993, Butterfoss and colleagues reviewed the literature on community coalitions and found “relatively little empirical evidence” (p. 315) to bring to bear on the assessment of their effectiveness. Although the use of
coalitions in community-wide health promotion continues, the accumulation of evidence supporting their effectiveness is still slim. Several case studies suggest that coalitions and consortia can be successful in bringing about changes in health behaviors, health systems, and health status (e.g., Butterfoss et al., 1998; Fawcett et al., 1997; Kass and Freudenberg, 1997; Myers et al., 1994; Plough and Olafson, 1994). However, the conditions under which coalitions are most likely to thrive and the strategies and processes that are most likely to result in effective functioning of a coalition have not been consistently identified empirically.
Evaluation models, such as the FORECAST model (Goodman and Wandersman, 1994) and the model proposed by the Work Group on Health Promotion and Community Development at the University of Kansas (Fawcett et al., 1997), address the lack of systematic and rigorous evaluation of coalitions. These models provide strategies and tools for assessing coalition functioning at all stages of development, from initial formation to ultimate influence on the coalition’s health goals and objectives. They are predicated on the assumption that the successful passage through each stage is necessary, but not sufficient, to ensure successful passage through the next stage. Widespread use of these and other evaluation frameworks and tools can increase the number and quality of the empirical studies of the effects of interorganizational linkages.
Orlandi (1996a) states that diffusion failures often result from a lack of fit between the proposed host organization and the intervention program. Thus, he suggests that if the purpose is to diffuse an existing program, the design of the program and the process of diffusion need to be flexible enough to adapt to the needs and resources of the organization. If the purpose is to develop and disseminate a new program, the innovation development and transfer processes should be integrated. Those conclusions are consistent with some of the studies reviewed above. For example, McVea et al. (1996) concluded that a “one size fits all” approach to clinical preventive systems was not likely to diffuse effectively.
Aday, L.A. (1998). Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity. Chicago: Health Administration Press.
Alter, C. and Hage, J. (1992). Organisations Working Together. Newbury Park, CA: Sage.
Altman, D.G. (1995). Sustaining interventions in community systems: On the relationship between researchers and communities. Health Psychology, 14, 526–536.
Anderson, L.M. and May, D.S. (1995). Has the use of cervical, breast, and colorectal cancer screening increased in the United States? American Journal of Public Health, 85, 840–842.
Anderson, N.B. (1998). After the discoveries, then what? A new approach to advancing evidence-based prevention practice (pp. 74–75). Programs and abstracts from NIH Conference, Preventive Intervention Research at the Crossroads, Bethesda, MD.
Anderson, N.H., and Zalinski, J. (1990). Functional measurement approach to self-estimation in multiattribute evaluation. In N.H.Anderson (Ed.), Contributions to Information Integration Theory, Vol. 1: Cognition; Vol. 2: Social; Vol. 3: Developmental (pp. 145–185). Hillsdale, NJ: Erlbaum Press.
Antonovsky, A. (1985). The life cycle, mental health and the sense of coherence. Israel Journal of Psychiatry and Related Sciences, 22 (4), 273–280.
Baker, E.A. and Brownson, C.A. (1999). Defining characteristics of community-based health promotion programs. In R.C.Brownson, E.A.Baker, and L.F.Novick (Eds.) Community-Based Prevention Programs that Work (pp. 7–19). Gaithersburg, MD: Aspen.
Balestra, D.J. and Littenberg, B. (1993). Should adult tetanus immunization be given as a single vaccination at age 65? A cost-effectiveness analysis. Journal of General Internal Medicine, 8, 405–412.
Barry, M.J., Fowler, F.J., Mulley, A.G., Henderson, J.V., and Wennberg, J.E. (1995). Patient reactions to a program designed to facilitate patient participation in treatment decisions for benign prostatic hyperplasia. Medical Care, 33, 771–782.
Beery, B. and Nelson, G. (1998). Evaluating community-based health initiatives: Dilemmas, puzzles, innovations and promising directions. Making outcomes matter. Seattle: Group Health/Kaiser Permanente Community Foundation.
Bennett, K.J. and Torrance, G.W. (1996). Measuring health preferences and utilities: Rating scale, time trade-off and standard gamble methods. In B.Spliker (Ed.) Quality of Life and Pharmacoeconomics in Clinical Trials (pp. 235–265). Philadelphia: Lippincott-Raven.
Berger, E.S. and Hendee, W.R. (1989). The expression of health risk information. Archives of Internal Medicine, 149, 1507–1508.
Berger, P.L. and Neuhaus, R.J. (1977). To empower people: The role of mediating structures in public policy. Washington, DC: American Enterprise Institute for Public Policy Research.
Bero, L.A., Grilli, R., Grimshaw, J.M., Harvey, E., Oxman, A.D., and Thomson, M.A. (1998). Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings. British Medical Journal, 317, 465–468.
Bickman, L. (1987). The functions of program theory. New Directions in Program Evaluation, 33, 5–18.
Bigger, J.T.J. (1984). Antiarrhythmic treatment: An overview. American Journal of Cardiology, 53, 8B–16B.
Bishop, R. (1994). Initiating empowering research? New Zealand Journal of Educational Studies, 29, 175–188.
Bishop, R. (1996). Addressing issues of self-determination and legitimation in Kaupapa Maori research. In B.Webber (Ed.) Research Perspectives in Maori Education (pp. 143– 160). Wellington, New Zealand: Council for Educational Research.
Black, W.C., Nease, R.F.J., and Tosteson, A.N. (1995). Perceptions of breast cancer risk and screening effectiveness in women younger than 50 years of age. Journal of the National Cancer Institute, 87, 720–731.
Blaine, T.M., Forster, J.L., Hennrikus, D., O’Neil, S., Wolfson, M., and Pham, H. (1997). Creating tobacco control policy at the local level: Implementation of a direct action organizing approach. Health Education and Behavior, 24, 640–651.
Botvin, G.J., Baker, E., Dusenbury, L., Botvin, E.M., and Diaz, T. (1995). Long-term follow-up results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association, 273, 1106–1112.
Brown, E.R. (1991). Community action for health promotion: A strategy to empower individuals and communities. International Journal of Health Services, 21, 441–456.
Brown, P. (1995). The role of the evaluator in comprehensive community initiatives. In J.P.Connell, A.C.Kubisch, L.B.Schorr, and C.H.Weiss (Eds.) New Approaches to Evaluating Community Initiatives (pp. 201–225). Washington, DC: Aspen.
Bush, P.J., Zuckerman, A.E., Taggart, V.S., Theiss, P.K., Peleg, E.O., and Smith, S.A. (1989). Cardiovascular risk factor prevention in black school children: The Know Your Body Evaluation Project. Health Education Quarterly, 16, 215–228.
Butterfoss, F.D., Morrow, A.L., Rosenthal, J., Dini, E., Crews, R.C., Webster, J.D., and Louis, P. (1998). CINCH: An urban coalition for empowerment and action. Health Education and Behavior, 25, 212–225.
Butterfoss, F.D., Goodman, R.M., and Wandersman, A. (1993). Community coalitions for prevention and health promotion. Health Education Research, 8, 315–330.
Campbell, D.T. and Stanley, J.C. (1963). Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
Cardiac Arrhythmia Suppression Trial (CAST) Investigators. (1989). Preliminary report: Effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. New England Journal of Medicine, 321, 406–412.
Cassileth, B.R., Zupkis, R.V., Sutton-Smith, K., and March, V. (1980). Information and participation preferences among cancer patients. Annals of Internal Medicine, 92, 832–836.
Centers for Disease Control, Agency for Toxic Substances and Disease Registry (CDC/ ATSDR). (1997). Principles of Community Engagement. Atlanta: CDC Public Health Practice Program Office.
Chambless, D.L. and Hollon, S.D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7–18.
Clemen, R.T. (1991). Making Hard Decisions. Boston: PWS-Kent.
Compas, B.E., Haaga, D.F., Keefe, F.J., Leitenberg, H., and Williams, D.A. (1998). Sampling of empirically supported psychological treatments from health psychology: Smoking, chronic pain, cancer, and bulimia nervosa. Journal of Consulting and Clinical Psychology, 66, 89–112.
Cook, T.D. and Reichardt, C.S. (1979). Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.
Cornwall, A. (1996). Towards participatory practice: Participatory rural appraisal (PRA) and the participatory process. In K.deKoning and M.Martin (Eds.) Participatory Research in Health: Issues and Experiences (pp. 94–107). London: Zed Books.
Cornwall, A. and Jewkes, R. (1995). What is participatory research? Social Science and Medicine, 41, 1667–1676.
Cousins, J.B. and Earl, L.M. (Eds.) (1995). Participatory Evaluation: Studies in Evaluation Use and Organizational Learning. London: Falmer.
Cromwell, J., Bartosch, W.J., Fiore, M.C., Hasselblad, V., and Baker, T. (1997). Cost-effectiveness of the clinical practice recommendations in the AHCPR guideline for smoking cessation. Journal of the American Medical Association, 278, 1759–1766.
Cummings, N.A., Cummings, J.L., and Johnson, J.N. (Eds.). (1997). Behavioral Health in Primary Care: A Guide for Clinical Integration. Madison, CT: Psychosocial Press.
Danese, M.D., Powe, N.R., Sawin, C.T., and Ladenson, P.W. (1996). Screening for mild thyroid failure at the periodic health examination: A decision and cost-effectiveness analysis. Journal of the American Medical Association, 276, 285–292.
Dannenberg, A.L., Gielen, A.C., Beilenson, P.L., Wilson, M.H., and Joffe, A. (1993). Bicycle helmet laws and educational campaigns: An evaluation of strategies to increase children’s helmet use. American Journal of Public Health, 83, 667–674.
Deber, R.B. (1994). Physicians in health care management. 7. The patient-physician partnership: Changing roles and the desire for information. Canadian Medical Association Journal, 151, 171–176.
Deber, R.B., Kraetschmer, N., and Irvine, J. (1996). What role do patients wish to play in treatment decision making? Archives of Internal Medicine, 156, 1414–1420.
DeJong, W. and Hingson, R. (1998). Strategies to reduce driving under the influence of alcohol. Annual Review of Public Health, 19, 359–378.
deKoning, K. and Martin, M. (1996). Participatory research in health: Setting the context. In K.deKoning and M.Martin (Eds.) Participatory Research in Health: Issues and Experiences (pp. 1–18). London: Zed Books.
Denzin, N.K. (1970). The research act. In N.K.Denzin (Ed.) The Research Act in Sociology: A Theoretical Introduction to Sociological Methods (pp. 345–360). Chicago, IL: Aldine.
Denzin, N.K. (1994). The suicide machine. In R.E.Long (Ed.) Suicide. (Vol. 67, No. 2). New York: H.W.Wilson.
Dignan, M.B. (Ed.) (1989). Measurement and evaluation of health education, 2nd edition. Springfield, IL: C.C.Thomas.
Dockery, G. (1996). Rhetoric or reality? Participatory research in the National Health Service, UK. In K.deKoning and M.Martin (Eds.) Participatory Research in Health: Issues and Experiences (pp. 164–176). London: Zed Books.
Donaldson, S.I., Graham, J.W., and Hansen, W.B. (1994). Testing the generalizability of intervening mechanism theories: Understanding the effects of adolescent drug use prevention interventions. Journal of Behavioral Medicine, 17, 195–216.
Dressler, W.W. (1993). Commentary on “Community Research: Partnership in Black Communities.” American Journal of Preventive Medicine, 9, 32–34.
Durie, M.H. (1996). Characteristics of Maori health research. Presented at Hui Whakapiripiri: A Hui to Discuss Strategic Directions for Maori Health Research, Eru Pomare Maori Health Research Centre, Wellington School of Medicine, University of Otago, Wellington, New Zealand.
Eddy, D.M. (1990). Screening for cervical cancer. Annals of Internal Medicine, 113, 214–226. Reprinted in Eddy, D.M. (1991). Common Screening Tests. Philadelphia: American College of Physicians.
Edelson, J.T., Weinstein, M.C., Tosteson, A.N.A., Williams, L., Lee, T.H., and Goldman, L. (1990). Long-term cost-effectiveness of various initial monotherapies for mild to moderate hypertension. Journal of the American Medical Association, 263, 407–413.
Edworthy, J. and Adams, A.S. (1997). Warning Design. London: Taylor and Francis.
Elden, M. and Levin, M. (1991). Cogenerative learning. In W.F.Whyte (Ed.) Participatory Action Research (pp. 127–142). Newbury Park, CA: Sage.
Emmons, K.M., Thompson, B., Sorensen, G., Linnan, L., Basen-Engquist, K., Biener, L., and Watson, M. (2000). The relationship between organizational characteristics and the adoption of workplace smoking policies. Health Education and Behavior, 27, 483– 501.
Ende, J., Kazis, L., Ash, A., and Moskowitz, M.A. (1989). Measuring patients’ desire for autonomy: Decision making and information-seeking preferences among medical patients. Journal of General Internal Medicine, 4, 23–30.
Eng, E. and Blanchard, L. (1990–91). Action-oriented community diagnosis: A health education tool. International Quarterly of Community Health Education, 11, 93–110.
Eng, E. and Parker, E.A. (1994). Measuring community competence in the Mississippi Delta: the interface between program evaluation and empowerment. Health Education Quarterly, 21, 199–220.
Erdmann, T.C., Feldman, K.W., Rivara, F.P., Heimbach, D.M., and Wall, H.A. (1991). Tap water burn prevention: The effect of legislation. Pediatrics, 88, 572–577.
Ericsson, K.A. and Simon, H.A. (1994). Protocol Analysis: Verbal Reports as Data. Cambridge, MA: MIT Press.
Fawcett, S.B., Lewis, R.K., Paine-Andrews, A., Francisco, V.T., Richter, K.P., Williams, E.L., and Copple, B. (1997). Evaluating community coalitions for prevention of substance abuse: The case of Project Freedom. Health Education and Behavior, 24, 812–828.
Fawcett, S.B. (1991). Some values guiding community research and action. Journal of Applied Behavior Analysis, 24, 621–636.
Fawcett, S.B., Paine-Andrews, A., Francisco, V.T., Schultz, J.A., Richter, K.P., Lewis, R.K., Harris, K.J., Williams, E.L., Berkley, J.Y., Lopez, C.M., and Fisher, J.L. (1996). Empowering community health initiatives through evaluation. In D.Fetterman, S. Kaftarian, and A.Wandersman (Eds.) Empowerment Evaluation: Knowledge And Tools Of Self-Assessment And Accountability (pp. 161–187). Thousand Oaks, CA: Sage.
Feinstein, A.R. and Horwitz, R.I. (1997). Problems in the “evidence” of “evidence-based medicine.” American Journal of Medicine, 103, 529–535.
Fischhoff, B. (1999a). Risk Perception And Risk Communication. Presented at the Workshop on Health, Communications and Behavior of the IOM Committee on Health and Behavior: Research, Practice and Policy, Irvine, CA.
Fischhoff, B. (1999b). Why (cancer) risk communication can be hard. Journal of the National Cancer Institute Monographs, 25, 7–13.
Fischhoff, B. and Bruine de Bruin, W. (1999). Fifty/fifty=50? Journal of Behavioral Decision Making, 12, 149–163.
Fischhoff, B. and Downs, J. (1997). Accentuate the relevant. Psychological Science, 18, 154–158.
Fisher, E.B., Jr. (1995). The results of the COMMIT trial. American Journal of Public Health, 85, 159–160.
Flay, B. (1986). Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine, 15, 451–474.
Flood, A.B., Wennberg, J.E., Nease, R.F.J., Fowler, F.J.J., Ding, J., and Hynes, L.M. (1996). The importance of patient preference in the decision to screen for prostate cancer. Prostate Patient Outcomes Research Team. Journal of General Internal Medicine, 11, 342–349.
Florin, P. and Wandersman, A. (1990). An introduction to citizen participation, voluntary organizations, and community development: Insights for empowerment through research. American Journal of Community Psychology, 18, 41–53.
Francisco, V.T., Paine, A.L., and Fawcett, S.B. (1993). A methodology for monitoring and evaluating community health coalitions. Health Education Research, 8, 403–416.
Freire, P. (1987). Education for Critical Consciousness. New York: Continuum.
Frick, M.H., Elo, O., Haapa, K., Heinonen, O.P., Heinsalmi, P., Helo, P., Huttunen, J.K., Kaitaniemi, P., Koskinen, P., Manninen, V., Maenpaa, H., Malkonen, M., Manttari, M., Norola, S., Pasternack, A., Pikkarainen, J., Romo, M., Sjoblom, T., and Nikkila, E.A. (1987). Helsinki Heart Study: Primary-prevention trial with gemfibrozil in middle-aged men with dyslipidemia. Safety of treatment, changes in risk factors, and incidence of coronary heart disease. New England Journal of Medicine, 317, 1237–1245.
Friedman, L.M., Furberg, C.M., and De Mets, D.L. (1985). Fundamentals of Clinical Trials, 2nd edition. St. Louis: Mosby-Year Book.
Frosch, D.L. and Kaplan, R.M. (1999). Shared decision-making in clinical practice: Past research and future directions. American Journal of Preventive Medicine, 17, 285–294.
Gaventa, J. (1993). The powerful, the powerless, and the experts: Knowledge struggles in an information age. In P.Park, M.Brydon-Miller, B.Hall, and T.Jackson (Eds.) Voices of Change: Participatory Research In The United States and Canada (pp. 21–40). Westport, CT: Bergin and Garvey.
Gentner, D. and Stevens, A. (1983). Mental Models (Cognitive Science). Hillsdale, NJ: Erlbaum.
Gold, M.R., Siegel, J.E., Russell, L.B., and Weinstein, M.C. (Eds.) (1996). Cost-Effectiveness in Health And Medicine. New York: Oxford University Press.
Goldman, L., Weinstein, M.C., Goldman, P.A., and Williams, L.W. (1991). Cost-effectiveness of HMG-CoA reductase inhibition. Journal of the American Medical Association, 265, 1145–1151.
Golomb, B.A. (1998). Cholesterol and violence: is there a connection? Annals of Internal Medicine, 128, 478–487.
Goodman, R.M. (1999). Principles and tools for evaluating community-based prevention and health promotion programs. In R.C.Brownson, E.A.Baker, and L.F.Novick (Eds.) Community-Based Prevention Programs That Work (pp. 211–227). Gaithersburg, MD: Aspen.
Goodman, R.M. and Wandersman, A. (1994). FORECAST: A formative approach to evaluating community coalitions and community-based initiatives. Journal of Community Psychology, Supplement, 6–25.
Goodman, R.M., Steckler, A., and Kegler, M.C. (1997). Mobilizing organizations for health enhancement: Theories of organizational change. In K.Glanz, F.M.Lewis, and B.K.Rimer (Eds.) Health Behavior and Health Education, 2nd edition (pp. 287–312). San Francisco: Jossey-Bass.
Gordon, R.L., Baker, E.L., Roper, W.L., and Omenn, G.S. (1996). Prevention and the reforming U.S. health care system: Changing roles and responsibilities for public health. Annual Review of Public Health, 17, 489–509.
Gottlieb, N.H. and McLeroy, K.R. (1994). Social health. In M.P.O’Donnell, and J.S. Harris (Eds.) Health promotion in the workplace, 2nd edition (pp. 459–493). Albany, NY: Delmar.
Green, L.W. (1977). Evaluation and measurement: Some dilemmas for health education. American Journal of Public Health, 67, 155–166.
Green, L.W. and Gordon, N.P. (1982). Productive research designs for health education investigations. Health Education, 13, 4–10.
Green, L.W. and Lewis, F.M. (1986). Measurement and Evaluation in Health Education and Health Promotion. Palo Alto, CA: Mayfield.
Green, L.W., George, M.A., Daniel, M., Frankish, C.J., Herbert, C.J., Bowie, W.R., and O’Neil, M. (1995). Study of Participatory Research in Health Promotion. University of British Columbia, Vancouver: The Royal Society of Canada.
Green, L.W., Richard, L., and Potvin, L. (1996). Ecological foundations of health promotion. American Journal of Health Promotion, 10, 270–281.
Greenfield, S., Kaplan, S., and Ware, J.E. (1985). Expanding patient involvement in care. Annals of Internal Medicine, 102, 520–528.
Greenfield, S., Kaplan, S.H., Ware, J.E., Yano, E.M., and Frank, H.J.L. (1988). Patients' participation in medical care: Effects on blood sugar control and quality of life in diabetes. Journal of General Internal Medicine, 3, 448–457.
Greenwald, P. (1984). Epidemiology: A step forward in the scientific approach to preventing cancer through chemoprevention. Public Health Reports, 99, 259–264.
Greenwald, P. and Cullen, J.W. (1984). A scientific approach to cancer control. CA: A Cancer Journal for Clinicians, 34, 328–332.
Griffith, H.M., Dickey, L., and Kamerow, D.B. (1995). Put prevention into practice: A systematic approach. Journal of Public Health Management and Practice, 1, 9–15.
Guba, E.G. and Lincoln, Y.S. (1989). Fourth Generation Evaluation. Newbury Park, CA: Sage.
Hadden, S.G. (1986). Read The Label: Reducing Risk By Providing Information. Boulder, CO: Westview.
Hall, B.L. (1992). From margins to center? The development and purpose of participatory research. American Sociologist, 23, 15–28.
Hancock, L., Sanson-Fisher, R.W., Redman, S., Burton, R., Burton, L., Butler, J., Girgis, A., Gibberd, R., Hensley, M., McClintock, A., Reid, A., Schofield, M., Tripodi, T., and Walsh, R. (1997). Community action for health promotion: A review of methods and outcomes 1990–1995. American Journal of Preventive Medicine, 13, 229–239.
Hancock, T. (1993). The healthy city from concept to application: Implications for research. In J.K.Davies and M.P.Kelly (Eds.) Healthy Cities: Research and Practice (pp. 14–24). New York: Routledge.
Hatch, J., Moss, N., Saran, A., Presley-Cantrell, L., and Mallory, C. (1993). Community research: partnership in Black communities. American Journal of Preventive Medicine, 9, 27–31.
He, J., Ogden, L.G., Vupputuri, S., Bazzano, L.A., Loria, C., and Whelton, P.K. (1999). Dietary sodium intake and subsequent risk of cardiovascular disease in overweight adults. Journal of the American Medical Association, 282, 2027–2034.
Health Care Financing Administration, Department of Health and Human Services. (1998). Highlights: National Health Expenditures, 1997 [On-line]. Available: <http://www.hcfa.gov/stats/nhe-oact/hilites.htm>. Accessed October 31, 1998.
Heaney, C.A. and Goetzel, R.Z. (1997). A review of health-related outcomes of multi-component worksite health promotion programs. American Journal of Health Promotion, 11, 290–307.
Himmelman, A.T. (1992). Communities Working Collaboratively for a Change. University of Minnesota, MN: Humphrey Institute of Public Affairs.
Hingson, R. (1996). Prevention of drinking and driving. Alcohol Health and Research World, 20, 219–226.
Hollister, R.G. and Hill, J. (1995). Problems in the evaluation of community-wide initiatives. In J.P.Connell, A.C.Kubisch, L.B.Schorr, and C.H.Weiss (Eds.) New Approaches to Evaluating Community Initiatives (pp. 127–172). Washington, DC: Aspen.
Horwitz, R.I. and Daniels S.R. (1996). Bias or biology: Evaluating the epidemiologic studies of L-tryptophan and the eosinophilia-myalgia syndrome. Journal of Rheumatology Supplement, 46, 60–72.
Horwitz, R.I. (1987a). Complexity and contradiction in clinical trial research. American Journal of Medicine, 82, 498–510.
Horwitz, R.I. (1987b). The experimental paradigm and observational studies of cause-effect relationships in clinical medicine. Journal of Chronic Disease, 40, 91–99.
Horwitz, R.I., Singer, B.H., Makuch, R.W., and Viscoli, C.M. (1996). Can treatment that is helpful on average be harmful to some patients? A study of the conflicting information needs of clinical inquiry and drug regulation. Journal of Clinical Epidemiology, 49, 395–400.
Horwitz, R.I., Viscoli, C.M., Clemens, J.D., and Sadock, R.T. (1990). Developing improved observational methods for evaluating therapeutic effectiveness. American Journal of Medicine, 89, 630–638.
House, E.R. (1980). Evaluating with validity. Beverly Hills, CA: Sage.
Hugentobler, M.K., Israel, B.A., and Schurman, S.J. (1992). An action research approach to workplace health: Integrating methods. Health Education Quarterly, 19, 55–76.
Impicciatore, P., Pandolfini, C., Casella, N., and Bonati, M. (1997). Reliability of health information for the public on the world wide web: Systematic survey of advice on managing fever in children at home. British Medical Journal, 314, 1875–1881.
IOM (Institute of Medicine) (1999). Reducing the Burden of Injury: Advancing Prevention and Treatment. Washington, DC: National Academy Press.
IOM (Institute of Medicine) (2001). Speaking of Health: Assessing Health Communication Strategies for Diverse Populations. C.Chrvala and S.Scrimshaw (Eds.). Washington, DC: National Academy Press.
Israel, B.A. (1994). Practitioner-oriented Approaches to Evaluating Health Education Interventions: Multiple Purposes—Multiple Methods. Paper presented at the National Conference on Health Education and Health Promotion, Tampa, FL.
Israel, B.A., and Schurman, S.J. (1990). Social support, control and the stress process. In K.Glanz, F.M.Lewis, and B.K.Rimer (Eds.) Health Behavior and Health Education: Theory, Research and Practice (pp. 179–205). San Francisco: Jossey-Bass.
Israel, B.A., Baker, E.A., Goldenhar, L.M., Heaney, C.A., and Schurman, S.J. (1996). Occupational stress, safety, and health: Conceptual framework and principles for effective prevention interventions. Journal of Occupational Health Psychology, 1, 261–286.
Israel, B.A., Checkoway, B., Schulz, A.J., and Zimmerman, M.A. (1994). Health education and community empowerment: conceptualizing and measuring perceptions of individual, organizational, and community control. Health Education Quarterly, 21, 149–170.
Israel, B.A., Cummings, K.M., Dignan, M.B., Heaney, C.A., Perales, D.P., Simons-Morton, B.G., and Zimmerman, M.A. (1995). Evaluation of health education programs: Current assessment and future directions. Health Education Quarterly, 22, 364–389.
Israel, B.A., Schulz, A.J., Parker, E.A., and Becker, A.B. (1998). Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health, 19, 173–202.
Israel, B.A., Schurman, S.J., and House, J.S. (1989). Action research on occupational stress: Involving workers as researchers. International Journal of Health Services, 19, 135–155.
Israel, B.A., Schurman, S.J., and Hugentobler, M.K. (1992a). Conducting action research: Relationships between organization members and researchers. Journal of Applied Behavioral Science, 28, 74–101.
Israel, B.A., Schurman, S.J., Hugentobler, M.K., and House, J.S. (1992b). A participatory action research approach to reducing occupational stress in the United States. In V. DiMartino (Ed.) Preventing Stress at Work: Conditions of Work Digest, Vol. II (pp. 152–163). Geneva, Switzerland: International Labor Office.
James, S.A. (1993). Racial and ethnic differences in infant mortality and low birth weight: A psychosocial critique. Annals of Epidemiology, 3, 130–136.
Johnson-Laird, P.N. (1980). Mental models: Towards a cognitive science of language, inference and consciousness (Cognitive Science, No. 6). New York: Cambridge University Press.
Kahneman D. and Tversky, A. (1983). Choices, values, and frames. American Psychologist, 39, 341–350.
Kahneman, D. and Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review, 103, 582–591.
Kalet, A., Roberts, J.C., and Fletcher, R. (1994). How do physicians talk with their patients about risks? Journal of General Internal Medicine, 9, 402–404.
Kaplan, R.M. (1994). Value judgment in the Oregon Medicaid experiment. Medical Care, 32, 975–988.
Kaplan, R.M. (1998). Profile versus utility based measures of outcome for clinical trials. In M.J.Staquet, R.D.Hays, and P.M.Fayers (Eds.) Quality of Life Assessment in Clinical Trials (pp. 69–90). London: Oxford University Press.
Kaplan, R.M. and Anderson, J.P. (1996). The general health policy model: An integrated approach. In B.Spilker (Ed.) Quality of Life and Pharmacoeconomics in Clinical Trials (pp. 309–322). Philadelphia: Lippincott-Raven.
Kasper, J.F., Mulley, A.G., and Wennberg, J.E. (1992). Developing shared decision-making programs to improve the quality of health care. Quality Review Bulletin, 18, 183–190.
Kass, D. and Freudenberg, N. (1997). Coalition building to prevent childhood lead poisoning: A case study from New York City. In M.Minkler (Ed.), Community Organizing and Community Building for Health (pp. 278–288). New Brunswick, NJ: Rutgers University Press.
Kegler, M.C., Steckler, A., Malek, S.H., and McLeroy, K. (1998a). A multiple case study of implementation in 10 local Project ASSIST coalitions in North Carolina. Health Education Research, 13, 225–238.
Kegler, M.C., Steckler, A., McLeroy, K., and Malek, S.H. (1998b). Factors that contribute to effective community health promotion coalitions: A study of 10 Project ASSIST coalitions in North Carolina. American Stop Smoking Intervention Study for Cancer Prevention. Health Education and Behavior, 25, 338–353.
Klein, D.C. (1968). Community Dynamics and Mental Health. New York: Wiley.
Klitzner, M. (1993). A public health/dynamic systems approach to community-wide alcohol and other drug initiatives. In R.C.Davis, A.J.Lurigo, and D.P.Rosenbaum (Eds.) Drugs and the Community (pp. 201–224). Springfield, IL: Charles C.Thomas.
Koepsell, T.D. (1998). Epidemiologic issues in the design of community intervention trials. In R.Brownson, and D.Petitti (Eds.) Applied Epidemiology: Theory To Practice (pp. 177–212). New York: Oxford University Press.
Koepsell, T.D., Diehr, P.H., Cheadle, A., and Kristal, A. (1995). Invited commentary: Symposium on community intervention trials. American Journal of Epidemiology, 142, 594–599.
Koepsell, T.D., Wagner, E.H., Cheadle, A.C., Patrick, D.L., Martin, D.C., Diehr, P.H., Perrin, E.B., Kristal, A.R., Allan-Andrilla, C.H., and Dey, L.J. (1992). Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health, 13, 31–57.
Kong, A., Barnett, G.O., Mosteller, F., and Youtz, C. (1986). How medical professionals evaluate expressions of probability. New England Journal of Medicine, 315, 740–744.
Kraus, J.F. (1985). Effectiveness of measures to prevent unintentional deaths of infants and children from suffocation and strangulation. Public Health Reports, 100, 231–240.
Kraus, J.F., Peek, C., McArthur, D.L., and Williams, A. (1994). The effect of the 1992 California motorcycle helmet use law on motorcycle crash fatalities and injuries. Journal of the American Medical Association, 272, 1506–1511.
Krieger, N. (1994). Epidemiology and the web of causation: Has anyone seen the spider? Social Science and Medicine, 39, 887–903.
Krieger, N., Rowley, D.L., Herman, A.A., Avery, B., and Phillips, M.T. (1993). Racism, sexism and social class: Implications for studies of health, disease and well-being. American Journal of Preventive Medicine, 9, 82–122.
La Puma, J. and Lawlor, E.F. (1990). Quality-adjusted life-years. Ethical implications for physicians and policymakers. Journal of the American Medical Association, 263, 2917–2921.
Labonte, R. (1994). Health promotion and empowerment: reflections on professional practice. Health Education Quarterly, 21, 253–268.
Lalonde, M. (1974). A new perspective on the health of Canadians. Ottawa, ON: Ministry of Supply and Services.
Lando, H.A., Pechacek, T.F., Pirie, P.L., Murray, D.M., Mittelmark, M.B., Lichtenstein, E., Nothwehr, F., and Gray, C. (1995). Changes in adult cigarette smoking in the Minnesota Heart Health Program. American Journal of Public Health, 85, 201–208.
Lantz, P.M., House, J.S., Lepkowski, J.M., Williams, D.R., Mero, R.P., and Chen, J. (1998). Socioeconomic factors, health behaviors, and mortality. Journal of the American Medical Association, 279, 1703–1708.
Last, J. (1995). Redefining the unacceptable. Lancet, 346, 1642–1643.
Lather, P. (1986). Research as praxis. Harvard Educational Review, 56, 259–277.
Lenert, L., and Kaplan, R.M. (2000). Validity and interpretation of preference-based measures of health-related quality of life. Medical Care, 38, 138–150.
Leventhal, H. and Cameron, L. (1987). Behavioral theories and the problem of compliance. Patient Education and Counseling, 10, 117–138.
Levine, D.M., Becker, D.M., Bone, L.R., Stillman, F.A., Tuggle II, M.B., Prentice, M., Carter, J., and Filippeli, J. (1992). A partnership with minority populations: A community model of effectiveness research. Ethnicity and Disease, 2, 296–305.
Lewin, K. (1951) Field Theory in Social Science. New York: Harper.
Lewis, C.E. (1988). Disease prevention and health promotion practices of primary care physicians in the United States. American Journal of Preventive Medicine, 4, 9–16.
Liao, L., Jollis, J.G., DeLong, E.R., Peterson, E.D., Morris, K.G., and Mark, D.B. (1996). Impact of an interactive video on decision making of patients with ischemic heart disease. Journal of General Internal Medicine, 11, 373–376.
Lichter, A.S., Lippman, M.E., Danforth, D.N., Jr., d'Angelo, T., Steinberg, S.M., deMoss, E., MacDonald, H.D., Reichert, C.M., Merino, M., Swain, S.M., et al. (1992). Mastectomy versus breast-conserving therapy in the treatment of stage I and II carcinoma of the breast: A randomized trial at the National Cancer Institute. Journal of Clinical Oncology, 10, 976–983.
Lillie-Blanton, M. and Hoffman, S.C. (1995). Conducting an assessment of health needs and resources in a racial/ethnic minority community. Health Services Research, 30, 225–236.
Lincoln, Y.S. and Reason, P. (1996). Editor’s introduction. Qualitative Inquiry, 2, 5–11.
Linville, P.W., Fischer, G.W., and Fischhoff, B. (1993). AIDS risk perceptions and decision biases. In J.B.Pryor and G.D.Reeder (Eds.) The Social Psychology of HIV Infection (pp. 5–38). Hillsdale, NJ: Lawrence Erlbaum.
Lipid Research Clinics Program. (1984). The Lipid Research Clinics Coronary Primary Prevention Trial results. I. Reduction in incidence of coronary heart disease. Journal of the American Medical Association, 251, 351–364.
Lipkus, I.M. and Hollands, J.G. (1999). The visual communication of risk. Journal of National Cancer Institute Monographs, 25, 149–162.
Lipsey, M.W. (1993). Theory as method: Small theories of treatments. New Directions for Program Evaluation, 57, 5–38.
Lipsey, M.W. and Pollard, J.A. (1989). Driving toward theory in program evaluation: More models to choose from. Evaluation and Program Planning, 12, 317–328.
Lund, A.K., Williams, A.F., and Womack, K.N. (1991). Motorcycle helmet use in Texas. Public Health Reports, 106, 576–578.
Maguire, P. (1987). Doing Participatory Research: A Feminist Approach. School of Education, Amherst, MA: The University of Massachusetts.
Maguire, P. (1996). Considering more feminist participatory research: What’s congruency got to do with it? Qualitative Inquiry, 2, 106–118.
Marin, G. and Marin, B.V. (1991). Research with Hispanic Populations. Newbury Park, CA: Sage.
Matt, G.E. and Navarro, A.M. (1997). What meta-analyses have and have not taught us about psychotherapy effects: A review and future directions. Clinical Psychology Review, 17, 1–32.
Mazur, D.J. and Hickam, D.H. (1997). Patients’ preferences for risk disclosure and role in decision making for invasive medical procedures. Journal of General Internal Medicine, 12, 114–117.
McGraw, S.A., Stone, E.J., Osganian, S.K., Elder, J.P., Perry, C.L., Johnson, C.C., Parcel, G.S., Webber, L.S., and Luepker, R.V. (1994). Design of process evaluation within the child and adolescent trial for cardiovascular health (CATCH). Health Education Quarterly, S5–S26.
McIntyre, S. and West, P. (1992). What does the phrase “safer sex” mean to you? AIDS, 7, 121–126.
McKay, H.G., Feil, E.G., Glasgow, R.E., and Brown, J.E. (1998). Feasibility and use of an internet support service for diabetes self-management. The Diabetes Educator, 24, 174–179.
McKinlay, J.B. (1993). The promotion of health through planned sociopolitical change: challenges for research and policy. Social Science and Medicine, 36, 109–117.
McKnight, J.L. (1987). Regenerating community. Social Policy, 17, 54–58.
McKnight, J.L. (1994). Politicizing health care. In P.Conrad, and R.Kern (Eds.) The Sociology Of Health And Illness: Critical Perspectives, 4th Edition (pp. 437–441). New York: St. Martin’s.
McVea, K., Crabtree, B.F., Medder, J.D., Susman, J.L., Lukas, L., McIlvain, H.E., Davis, C.M., Gilbert, C.S., and Hawver, M. (1996). An ounce of prevention? Evaluation of the ‘Put Prevention into Practice’ program. Journal of Family Practice, 43, 361–369.
Merz, J., Fischhoff, B., Mazur, D.J., and Fischbeck, P.S. (1993). Decision-analytic approach to developing standards of disclosure for medical informed consent. Journal of Toxics and Liability, 15, 191–215.
Minkler, M. (1989). Health education, health promotion and the open society: An historical perspective. Health Education Quarterly, 16, 17–30.
Mittelmark, M.B., Hunt, M.K., Heath, G.W., and Schmid, T.L. (1993). Realistic outcomes: Lessons from community-based research and demonstration programs for the prevention of cardiovascular diseases. Journal of Public Health Policy, 14, 437–462.
Monahan, J.L. and Scheirer, M.A. (1988). The role of linking agents in the diffusion of health promotion programs. Health Education Quarterly, 15, 417–434.
Morgan, M.G. (1995). Fields from Electric Power [brochure]. Pittsburgh, PA: Department of Engineering and Public Policy, Carnegie Mellon University.
Morgan, M.G., Fischhoff, B., Bostrom, A., and Atman, C. (2001). Risk Communication: The Mental Models Approach. New York: Cambridge University Press.
Mosteller, F. and Colditz, G.A. (1996). Understanding research synthesis (meta-analysis). Annual Review of Public Health, 17, 1–23.
Muldoon, M.F., Manuck, S.B., and Matthews, K.A. (1990). Lowering cholesterol concentrations and mortality: A quantitative review of primary prevention trials. British Medical Journal, 301, 309–314.
Murray, D. (1995). Design and analysis of community trials: Lessons from the Minnesota Heart Health Program. American Journal of Epidemiology, 142, 569–575.
Murray, D.M. (1986). Dissemination of community health promotion programs: The Fargo-Moorhead Heart Health Program. Journal of School Health, 56, 375–381.
Myers, A.M., Pfeiffle, P., and Hinsdale, K. (1994). Building a community-based consortium for AIDS patient services. Public Health Reports, 109, 555–562.
National Research Council, Committee on Risk Perception and Communication. (1989). Improving Risk Communication. Washington, DC: National Academy Press.
NHLBI (National Heart, Lung, and Blood Institute). (1983). Guidelines for Demonstration And Education Research Grants. Washington, DC: National Institutes of Health.
NHLBI (National Heart, Lung, and Blood Institute). (1998). Report of the Task Force on Behavioral Research in Cardiovascular, Lung, and Blood Health and Disease. Bethesda, MD: National Institutes of Health.
Ni, H., Sacks, J.J., Curtis, L., Cieslak, P.R., and Hedberg, K. (1997). Evaluation of a statewide bicycle helmet law via multiple measures of helmet use. Archives of Pediatric and Adolescent Medicine, 151, 59–65.
Nyden, P.W. and Wiewel, W. (1992). Collaborative research: harnessing the tensions between researcher and practitioner. American Sociologist, 24, 43–55.
O’Connor, P.J., Solberg, L.I., and Baird, M. (1998). The future of primary care. The enhanced primary care model. Journal of Family Practice, 47, 62–67.
Office of Technology Assessment, U.S. Congress. (1981). Cost-Effectiveness of Influenza Vaccination. Washington, DC: Office of Technology Assessment.
Oldenburg, B., French, M., and Sallis, J.F. (1999). Health behavior research: The quality of the evidence base. Paper presented at the Society of Behavioral Medicine Twentieth Annual Meeting, San Diego, CA.
Orlandi, M.A. (1996a). Health promotion technology transfer: Organizational perspectives. Canadian Journal of Public Health, 87, Supplement 2, 528–533.
Orlandi, M.A. (1996b). Prevention technologies for drug-involved youth. In J.Inciardi, L.Metsch, and C.McCoy (Eds.) Intervening with Drug-Involved Youth: Prevention, Treatment, and Research (pp. 81–100). Newbury Park, CA: Sage Publications.
Orlandi, M.A. (1986). The diffusion and adoption of worksite health promotion innovations: An analysis of barriers. Preventive Medicine, 15, 522–536.
Parcel, G.S., Eriksen, M.P., Lovato, C.Y., Gottlieb, N.H., Brink, S.G., and Green, L.W. (1989). The diffusion of school-based tobacco-use prevention programs: Program description and baseline data. Health Education Research, 4, 111–124.
Parcel, G.S., O'Hara-Tompkins, N.M., Harris, R.B., Basen-Engquist, K.M., McCormick, L.K., Gottlieb, N.H., and Eriksen, M.P. (1995). Diffusion of an effective tobacco prevention program. II. Evaluation of the adoption phase. Health Education Research, 10, 297–307.
Parcel, G.S., Perry, C.L., and Taylor, W.C. (1990). Beyond demonstration: Diffusion of health promotion innovations. In N.Bracht (Ed.), Health Promotion at the Community Level (pp. 229–251). Thousand Oaks, CA: Sage Publications.
Parcel, G.S., Simons-Morton, B.G., O'Hara, N.M., Baranowski, T., and Wilson, B. (1989). School promotion of healthful diet and physical activity: Impact on learning outcomes and self-reported behavior. Health Education Quarterly, 16, 181–199.
Park, P., Brydon-Miller, M., Hall, B., and Jackson, T. (Eds.) (1993). Voices of Change: Participatory Research in the United States and Canada. Westport, CT: Bergin and Garvey.
Parker, E.A., Schulz, A.J., Israel, B.A., and Hollis, R. (1998). East Side Village Health Worker Partnership: Community-based health advisor intervention in an urban area. Health Education and Behavior, 25, 24–45.
Parsons, T. (1951). The Social System. Glencoe, IL: Free Press.
Patton, M.Q. (1987). How to Use Qualitative Methods In Evaluation. Newbury Park, CA: Sage Publications.
Patton, M.Q. (1990). Qualitative Evaluation And Research Methods, 2nd Edition. Newbury Park, CA: Sage Publications.
Pearce, N. (1996). Traditional epidemiology, modern epidemiology and public health. American Journal of Public Health, 86, 678–683.
Pendleton, L. and House, W.C. (1984). Preferences for treatment approaches in medical care. Medical Care, 22, 644–646.
Pentz, M.A. (1998). Research to practice in community-based prevention trials. Preventive intervention research at the crossroads: contributions and opportunities from the behavioral and social sciences. Programs and Abstracts (pp. 82–83). Bethesda, MD.
Pentz, M.A., and Trebow, E. (1997). Implementation issues in drug abuse prevention research. Substance Use and Misuse, 32, 1655–1660.
Pentz, M.A., Trebow, E., Hansen, W.B., MacKinnon, D.P., Dwyer, J.H., Flay, B.R., Daniels, S., Cormack, C., and Johnson, C.A. (1990). Effects of program implementation on adolescent drug use behavior: The Midwestern Prevention Project (MPP). Evaluation Review, 14, 264–289.
Perry, C.L. (1999). Cardiovascular disease prevention among youth: Visioning the future. Preventive Medicine, 29, S79–S83.
Perry, C.L., Murray, D.M., and Griffin, G. (1990). Evaluating the statewide dissemination of smoking prevention curricula: Factors in teacher compliance. Journal of School Health, 60, 501–504.
Plough, A. and Olafson, F. (1994). Implementing the Boston Healthy Start Initiative: A case study of community empowerment and public health. Health Education Quarterly, 21, 221–234.
Price, R.H. (1989). Prevention programming as organizational reinvention: From research to implementation. In M.M.Silverman and V.Anthony (Eds.) Prevention of Mental Disorders, Alcohol and Drug Use in Children and Adolescents (pp. 97–123). Rockville, MD: Department of Health and Human Services.
Price, R.H. (1998). Theory guided reinvention as the key to high fidelity prevention practice. Paper presented at the National Institutes of Health meeting, “Preventive Intervention Research at the Crossroads: Contributions and Opportunities from the Behavioral and Social Sciences,” Bethesda, MD.
Pronk, N.P. and O’Connor, P.J. (1997). Systems approach to population health improvement. Journal of Ambulatory Care Management, 20, 24–31.
Putnam, R.D. (1993). Making Democracy Work: Civic Traditions in Modern Italy. Princeton, NJ: Princeton University Press.
Rabeneck, L., Viscoli, C.M., and Horwitz, R.I. (1992). Problems in the conduct and analysis of randomized clinical trials. Are we getting the right answers to the wrong questions? Archives of Internal Medicine, 152, 507–512.
Raiffa, H. (1968). Decision Analysis. Reading, MA: Addison-Wesley.
Reason, P. (1994). Three approaches to participative inquiry. In N.K.Denzin and Y.S. Lincoln (Eds.) Handbook of Qualitative Research (pp. 324–339). Thousand Oaks, CA: Sage.
Reason, P. (Ed.). (1988). Human Inquiry in Action: Developments in New Paradigm Research. London: Sage.
Reichardt, C.S. and Cook, T.D. (1980). “Paradigms Lost”: Some thoughts on choosing methods in evaluation research. Evaluation and Program Planning: An International Journal 3, 229–236.
Rivara, F.P., Grossman, D.C., and Cummings, P. (1997a). Injury prevention. First of two parts. New England Journal of Medicine, 337, 543–548.
Rivara, F.P., Grossman, D.C., and Cummings, P. (1997b). Injury prevention. Second of two parts. New England Journal of Medicine, 337, 613–618.
Roberts-Gray, C., Solomon, T., Gottlieb, N., and Kelsey, E. (1998). Heart partners: A strategy for promoting effective diffusion of school health promotion programs. Journal of School Health, 68, 106–116.
Robertson, A. and Minkler, M. (1994). New health promotion movement: A critical examination. Health Education Quarterly, 21, 295–312.
Rogers, E.M. (1983). Diffusion of Innovations, 3rd ed. New York: The Free Press.
Rogers, E.M. (1995). Diffusion of Innovations, 4th ed. New York: The Free Press.
Rogers, G.B. (1996). The safety effects of child-resistant packaging for oral prescription drugs. Two decades of experience. Journal of the American Medical Association, 275, 1661–1665.
Rohrbach, L.A., D’Onofrio, C., Backer, T., and Montgomery, S. (1996). Diffusion of school-based substance abuse prevention programs. American Behavioral Scientist, 39, 919–934.
Rossi, P.H. and Freeman, H.E. (1989). Evaluation: A Systematic Approach, 4th Edition. Newbury Park, CA: Sage Publications.
Rutherford, G.W. (1998). Public health, communicable diseases, and managed care: Will managed care improve or weaken communicable disease control? American Journal of Preventive Medicine, 14, 53–59.
Sackett, D.L., Richardson, W.S., Rosenberg, W., and Haynes, R.B. (1997). Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone.
Sarason, S.B. (1984). The Psychological Sense of Community: Prospects for a Community Psychology. San Francisco: Jossey-Bass.
Schein, E.H. (1987). Process Consultation. Reading, MA: Addison-Wesley.
Schensul, J.J., Denelli-Hess, D., Borreo, M.G., and Bhavati, M.P. (1987). Urban comadronas: Maternal and child health research and policy formulation in a Puerto Rican community. In D.D. Stull and J.J. Schensul (Eds.) Collaborative Research and Social Change: Applied Anthropology in Action (pp. 9–32). Boulder, CO: Westview.
Schensul, S.L. (1985). Science, theory and application in anthropology. American Behavioral Scientist, 29, 164–185.
Schneiderman, L.J., Kronick, R., Kaplan, R.M., Anderson, J.P., and Langer, R.D. (1992). Effects of offering advance directives on medical treatments and costs. Annals of Internal Medicine, 117, 599–606.
Schriver, K.A. (1989). Evaluating text quality: The continuum from text-focused to reader-focused methods. IEEE Transactions on Professional Communication, 32, 238–255.
Schulz, A.J., Israel, B.A., Selig, S.M., and Bayer, I.S. (1998a). Development and implementation of principles for community-based research in public health. In R.H. Macnair (Ed.) Research Strategies For Community Practice (pp. 83–110). New York: Haworth Press.
Schulz, A.J., Parker, E.A., Israel, B.A., Becker, A.B., Maciak, B., and Hollis, R. (1998b). Conducting a participatory community-based survey: Collecting and interpreting data for a community health intervention on Detroit’s East Side. Journal of Public Health Management and Practice, 4, 10–24.
Schwartz, L.M., Woloshin, S., Black, W.C., and Welch, H.G. (1997). The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine, 127, 966–972.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93–105.
Seligman, M.E. (1996). Science as an ally of practice. American Psychologist, 51, 1072–1079.
Shadish, W.R., Cook, T.D., and Leviton, L.C. (1991). Foundations of Program Evaluation. Newbury Park, CA: Sage Publications.
Shadish, W.R., Matt, G.E., Navarro, A.M., Siegle, G., Crits-Christoph, P., Hazelrigg, M.D., Jorm, A.F., Lyons, L.C., Nietzel, M.T., Prout, H.T., Robinson, L., Smith, M.L., Svartberg, M., and Weiss, B. (1997). Evidence that therapy works in clinically representative conditions. Journal of Consulting and Clinical Psychology, 65, 355–365.
Sharf, B.F. (1997). Communicating breast cancer on-line: Support and empowerment on the internet. Women and Health, 26, 65–83.
Simons-Morton, B.G., Green, W.A., and Gottlieb, N. (1995). Health Education and Health Promotion, 2nd Edition. Prospect Heights, IL: Waveland.
Simons-Morton, B.G., Parcel, G.P., Baranowski, T., O’Hara, N., and Forthofer, R. (1991). Promoting a healthful diet and physical activity among children: Results of a school-based intervention study. American Journal of Public Health, 81, 986–991.
Singer, M. (1993). Knowledge for use: Anthropology and community-centered substance abuse research. Social Science and Medicine, 37, 15–25.
Singer, M. (1994). Community-centered praxis: Toward an alternative non-dominative applied anthropology. Human Organization, 53, 336–344.
Smith, D.W., Steckler, A., McCormick, L.K., and McLeroy, K.R. (1995). Lessons learned about disseminating health curricula to schools. Journal of Health Education, 26, 37–43.
Smithies, J. and Adams, L. (1993). Walking the tightrope. In J.K. Davies and M.P. Kelly (Eds.) Healthy Cities: Research and Practice (pp. 55–70). New York: Routledge.
Solberg, L.I., Kottke, T.E., and Brekke, M.L. (1998a). Will primary care clinics organize themselves to improve the delivery of preventive services? A randomized controlled trial. Preventive Medicine, 27, 623–631.
Solberg, L.I., Kottke, T.E., Brekke, M.L., Conn, S.A., Calomeni, C.A., and Conboy, K.S. (1998b). Delivering clinical preventive services is a systems problem. Annals of Behavioral Medicine, 19, 271–278.
Sorensen, G., Emmons, K., Hunt, M.K., and Johnston, D. (1998a). Implications of the results of community intervention trials. Annual Review of Public Health, 19, 379–416.
Sorensen, G., Thompson, B., Basen-Engquist, K., Abrams, D., Kuniyuki, A., DiClemente, C., and Biener, L. (1998b). Durability, dissemination and institutionalization of worksite tobacco control programs: Results from the Working Well Trial. International Journal of Behavioral Medicine, 5, 335–351.
Spilker, B. (Ed.). (1996). Quality of Life and Pharmacoeconomics in Clinical Trials, 2nd Edition. Philadelphia: Lippincott-Raven.
Steckler, A., Goodman, R.M., McLeroy, K.R., Davis, S., and Koch, G. (1992). Measuring the diffusion of innovative health promotion programs. American Journal of Health Promotion, 6, 214–224.
Steckler, A.B., Dawson, L., Israel, B.A., and Eng, E. (1993). Community health development: An overview of the works of Guy W. Steuart. Health Education Quarterly, Suppl. 1, S3–S20.
Steckler, A.B., McLeroy, K.R., Goodman, R.M., Bird, S.T., and McCormick, L. (1992). Toward integrating qualitative and quantitative methods: an introduction. Health Education Quarterly, 19, 1–8.
Steuart, G.W. (1993). Social and cultural perspectives: Community intervention and mental health. Health Education Quarterly, S99.
Stokols, D. (1992). Establishing and maintaining healthy environments: Toward a social ecology of health promotion. American Psychologist, 47, 6–22.
Stokols, D. (1996). Translating social ecological theory into guidelines for community health promotion. American Journal of Health Promotion, 10, 282–298.
Stone, E.J., McGraw, S.A., Osganian, S.K., and Elder, J.P. (Eds.) (1994). Process evaluation in the multicenter Child and Adolescent Trial for Cardiovascular Health (CATCH). Health Education Quarterly, Suppl. 2, 1–143.
Stringer, E.T. (1996). Action Research: A Handbook For Practitioners. Thousand Oaks, CA: Sage.
Strull, W.M., Lo, B., and Charles, G. (1984). Do patients want to participate in medical decision making? Journal of the American Medical Association, 252, 2990–2994.
Strum, S. (1997). Consultation and patient information on the Internet: The patients’ forum. British Journal of Urology, 80, 22–26.
Susser, M. (1995). The tribulations of trials-intervention in communities. American Journal of Public Health, 85, 156–158.
Susser, M. and Susser, E. (1996a). Choosing a future for epidemiology. I. Eras and paradigms. American Journal of Public Health, 86, 668–673.
Susser, M. and Susser, E. (1996b). From black box to Chinese boxes and eco-epidemiology. American Journal of Public Health, 86, 674–677.
Tandon, R. (1981). Participatory evaluation and research: Main concepts and issues. In W. Fernandes and R. Tandon (Eds.) Participatory Research and Evaluation (pp. 15–34). New Delhi: Indian Social Institute.
Thomas, S.B. and Morgan, C.H. (1991). Evaluation of community-based AIDS education and risk reduction projects in ethnic and racial minority communities. Evaluation and Program Planning, 14, 247–255.
Thompson, D.C., Nunn, M.E., Thompson, R.S., and Rivara, F.P. (1996a). Effectiveness of bicycle safety helmets in preventing serious facial injury. Journal of the American Medical Association, 276, 1974–1975.
Thompson, D.C., Rivara, F.P., and Thompson, R.S. (1996b). Effectiveness of bicycle safety helmets in preventing head injuries: A case-control study. Journal of the American Medical Association, 276, 1968–1973.
Thompson, R.S., Taplin, S.H., McAfee, T.A., Mandelson, M.T., and Smith, A.E. (1995). Primary and secondary prevention services in clinical practice. Twenty years’ experience in development, implementation, and evaluation. Journal of the American Medical Association, 273, 1130–1135.
Torrance, G.W. (1976). Toward a utility theory foundation for health status index models. Health Services Research, 11, 349–369.
Tversky, A. and Fox, C.R. (1995). Weighing risk and uncertainty. Psychological Review, 102, 269–283.
Tversky, A. and Kahneman, D. (1988). Rational choice and the framing of decisions. In D.E. Bell, H. Raiffa, and A. Tversky (Eds.) Decision Making: Descriptive, Normative, and Prescriptive Interactions (pp. 167–192). Cambridge: Cambridge University Press.
Tversky, A. and Shafir, E. (1992). The disjunction effect in choice under uncertainty. Psychological Science, 3, 305–309.
U.S. Department of Health and Human Services. (1990). Smoking, Tobacco, and Cancer Program: 1985–1989 Status Report. Washington, DC: NIH Publication #90–3107.
Vega, W.A. (1992). Theoretical and pragmatic implications of cultural diversity for community research. American Journal of Community Psychology, 20, 375–391.
Von Winterfeldt, D. and Edwards, W. (1986). Decision Analysis and Behavioral Research. New York: Cambridge University Press.
Wagner, E., Austin, B., and Von Korff, M. (1996). Organizing care for patients with chronic illness. Milbank Quarterly, 76, 511–544.
Wallerstein, N. (1992). Powerlessness, empowerment, and health: implications for health promotion programs. American Journal of Health Promotion, 6, 197–205.
Walsh, J.M.E. and McPhee, S.J. (1992). A systems model of clinical preventive care: An analysis of factors influencing patient and physician. Health Education Quarterly, 19, 157–175.
Walter, H.J. (1989). Primary prevention of chronic disease among children: The school-based “Know Your Body Intervention Trials.” Health Education Quarterly, 16, 201–214.
Waterworth, S. and Luker, K.A. (1990). Reluctant collaborators: Do patients want to be involved in decisions concerning care? Journal of Advanced Nursing, 15, 971–976.
Weisz, J.R., Weiss, B., and Donenberg, G.R. (1992). The lab versus the clinic. Effects of child and adolescent psychotherapy. American Psychologist, 47, 1578–1585.
Wennberg, J.E. (1995). Shared decision making and multimedia. In L.M. Harris (Ed.) Health and the New Media: Technologies Transforming Personal and Public Health (pp. 109–126). Mahwah, NJ: Erlbaum.
Wennberg, J.E. (1998). The Dartmouth Atlas Of Health Care In the United States. Hanover, NH: Trustees of Dartmouth College.
Whitehead, M. (1993). The ownership of research. In J.K. Davies and M.P. Kelly (Eds.) Healthy Cities: Research and Practice (pp. 83–89). New York: Routledge.
Williams, D.R. and Collins, C. (1995). U.S. socioeconomic and racial differences in health: patterns and explanations. Annual Review of Sociology, 21, 349–386.
Windsor, R., Baranowski, T., Clark, N., and Cutter, G. (1994). Evaluation Of Health Promotion, Health Education And Disease Prevention Programs. Mountain View, CA: Mayfield.
Winkleby, M.A. (1994). The future of community-based cardiovascular disease intervention studies. American Journal of Public Health, 84, 1369–1372.
Woloshin, S., Schwartz, L.M., Byram, S.J., Sox, H.C., Fischhoff, B., and Welch, H.G. (2000). Women’s understanding of the mammography screening debate. Archives of Internal Medicine, 160, 1434–1440.
World Health Organization (WHO). (1986). Ottawa Charter for Health Promotion. Copenhagen: WHO.
Yates, J.F. (1990). Judgment and Decision Making. Englewood Cliffs, NJ: Prentice-Hall.
Yeich, S. and Levine, R. (1992). Participatory research’s contribution to a conceptualization of empowerment. Journal of Applied Social Psychology, 22, 1894–1908.
Yin, R.K. (1993). Applications of case study research. Applied Social Research Methods Series, Vol. 34, Newbury Park, CA: Sage Publications.
Zhu, S.H. and Anderson, N.H. (1991). Self-estimation of weight parameter in multi-attribute analysis. Organizational Behavior and Human Decision Processes, 48, 36–54.
Zich, J. and Temoshok, C. (1986). Applied methodology: A primer of pitfalls and opportunities in AIDS research. In D. Feldman and T. Johnson (Eds.) The Social Dimensions of AIDS (pp. 41–60). New York: Praeger.